Instructions for BYOM - Bring Your Own Model - LLM #91
Sorry, the built-in "LLM picker" should have provided Mistral. We'll fix that and release a patched version.
Thanks for that. But it doesn't answer the general question: how could someone install a model other than those proposed by the "LLM picker"? Where can I find instructions?
You can try downloading model files from Hugging Face into a dedicated folder under …
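As a concrete starting point (a minimal sketch, not an official AI Playground instruction), model files can be fetched from Hugging Face with `huggingface_hub`'s `snapshot_download`. The destination path below is hypothetical, since the exact folder under the AI Playground install is elided above:

```python
# Minimal sketch: download a Hugging Face model repo into a local folder.
# The local_dir value is hypothetical -- substitute the dedicated model
# folder under your AI Playground installation.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    local_dir="AI-Playground/models/llm/mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical path
)
```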
@Nuullll And it would probably not work with AI Playground anyway (not referring to Mistral, obviously). I think I'll wait for your next version with Mistral as a choice in the "LLM picker". Thank you
Hello.
I downloaded AI Playground v1.22.1 for desktop GPUs, which has a built-in "LLM picker", but unfortunately the dGPU version does not provide the Mistral model, as mentioned in the release notes (it has a download link for Gemma 7B, a gated LLM with a 47.7 GB download, but none for Mistral).
I tried to find instructions on where to download and install Mistral, or any other LLM compatible with Hugging Face Transformers 4.39 and PyTorch.
So, how do I use Mistral with the AI Playground v1.22.1 Arc dGPU version?
Thank you
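As a hedged aside, a downloaded checkpoint's compatibility with Transformers/PyTorch on an Arc dGPU can be smoke-tested outside AI Playground by loading it on the XPU device via Intel Extension for PyTorch. The model path below is hypothetical:

```python
# Sketch: verify a local checkpoint loads and generates on an Intel Arc dGPU.
# Assumes intel_extension_for_pytorch is installed; importing it registers
# the "xpu" device with PyTorch.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (enables "xpu")
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "path/to/downloaded/Mistral-7B-Instruct-v0.2"  # hypothetical path

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(
    model_dir, torch_dtype=torch.float16
).to("xpu")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to("xpu")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```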