In Django, if I set the loaded model in cache:

model = predict.load_model('path\model.h5')
cache.set('model', model)

When I try to retrieve it:

cache.get('model')

I get this error:

Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ram://cf101767-7409-47ba-8f98-0921cc47a20a/variables/variables
You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device option in tf.saved_model.LoadOptions to the io_device such as '/job:localhost'.

Is it possible to load the model once and keep using it in the Django webapp?
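One common workaround (a sketch, not the only answer): Django's cache backends pickle the values they store, and a loaded Keras model generally does not survive pickling, which is why retrieving it fails with the TensorSliceReader error. Instead of the cache, keep the model as a process-wide singleton that is loaded once, lazily, and reused by every view in that worker process. The module name, `MODEL_PATH`, and the injectable `loader` parameter below are illustrative assumptions, not part of the original post.

```python
# predictor.py -- load the model once per worker process and reuse it,
# rather than round-tripping it through Django's (pickling) cache.
import threading

MODEL_PATH = "path/model.h5"  # placeholder path, substitute your own

_model = None
_lock = threading.Lock()

def get_model(loader=None):
    """Return the process-wide model, loading it on first use.

    `loader` defaults to keras.models.load_model; it is injectable
    so the singleton logic can be exercised without TensorFlow.
    """
    global _model
    if _model is None:
        with _lock:  # guard against two threads loading at once
            if _model is None:  # double-checked locking
                if loader is None:
                    from tensorflow import keras  # imported lazily
                    loader = keras.models.load_model
                _model = loader(MODEL_PATH)
    return _model
```

A view would then call `get_model().predict(...)` directly. Note that each worker process (e.g. each gunicorn worker) still loads its own copy once; that is normal for in-process singletons and avoids the serialization problem entirely.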