[vllm backend] Problem loading a model from a local weights cached dir
LocalAI version: 2.24.2
Environment, CPU architecture, OS, and Version: Windows 11, WSL (Ubuntu)
Describe the bug
Thanks a lot for the wonderful product that is localai!
I have a small problem and could not find any clear documentation on how to solve it, so I ran some tests and am sharing the results here in case they are useful to somebody else.
The problem: to avoid re-downloading a big model, I copied its weights into the localai container under /build/models/mymodel, but I am not able to load it. The model is in HF format with safetensors, and it loads fine from plain Python and from vllm outside of localai.
I tried to load the model from the folder using this config:
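It was roughly of this shape (a sketch rather than the exact file; the field names follow the standard LocalAI model YAML, and the model name is taken from the logs below):

```yaml
name: llama-3.2-11B-Vision-Instruct
backend: vllm
parameters:
  model: /build/models/mymodel   # absolute path to the local weights (contains path separators)
```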
but when I re-launched local-ai run, I got this error:
10:55PM ERR config is not valid
which is caused by the c.Validate() call at line 173 in core/config/backend_config_loader.go.
Looking at the Validate function at line 422 of core/config/backend_config.go, it seems that the check there invalidates the config when the Model field contains a path separator, so the whole config is ignored.
But then I have no way to load the model from my folder, because if I use this configuration instead:
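i.e. something like this sketch, where model is just the bare directory name with no path separator:

```yaml
name: llama-3.2-11B-Vision-Instruct
backend: vllm
parameters:
  model: mymodel   # bare name, so validation passes
```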
the vllm backend cannot find the cached mymodel in the directory /build/models/mymodel and thinks it has to go to HF, which fails as well with:
11:23PM DBG GRPC(llama-3.2-11B-Vision-Instruct-127.0.0.1:41395): stderr Unexpected err=ValueError('No supported config format found in mymodel'), type(err)=<class 'ValueError'>
For me the fix is to modify the LoadModel function in backend/python/backend.py to check whether the combination of the model path and the model name is a real directory and, if so, pass that path to the engine args:
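A minimal sketch of the idea (not a patch against the actual file; the request field names and the spot where AsyncEngineArgs is built are assumptions):

```python
import os

def resolve_model(model_path: str, model_name: str) -> str:
    """If <model_path>/<model_name> is a real directory, return that path so
    vLLM loads the local weights; otherwise return the bare name so vLLM
    falls back to resolving it on the Hugging Face Hub."""
    candidate = os.path.join(model_path, model_name)
    return candidate if os.path.isdir(candidate) else model_name

# inside LoadModel, before the engine args are built (field names hypothetical):
#   engine_args = AsyncEngineArgs(
#       model=resolve_model(request.ModelPath, request.Model),
#       ...,
#   )
```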
but it would probably be nicer to be able to set a model path directly in the localai model config, as that is more intuitive and customizable.
Any thoughts on that? Maybe there is another simple config option that I missed? I also tried download_dir, but it did not help.