please how to call it locally #185
Comments
@NanshaNansha To load a model locally using the PeftModel class on top of a pretrained model, you need to ensure that the base model and the other required files are available locally.
Let me know if it works.
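To make the suggestion above concrete, here is a minimal sketch of local loading. The directory paths are assumptions for illustration (point them at wherever your base model and adapter actually live), and the helper `find_adapter_files` is a hypothetical convenience for checking the adapter directory before loading:

```python
# Sketch: loading a LoRA adapter on top of a locally stored base model.
# The paths are placeholders -- adjust them to your local files.
from pathlib import Path


def find_adapter_files(adapter_dir: str) -> dict:
    """Report whether the files PeftModel.from_pretrained needs exist locally.

    The adapter config is always required; the weights file is typically
    adapter_model.bin or adapter_model.safetensors.
    """
    d = Path(adapter_dir)
    return {name: (d / name).is_file() for name in ["adapter_config.json"]}


def load_locally(base_dir: str, adapter_dir: str):
    """Load the base model from a local directory, then attach the adapter."""
    # Imported lazily so the path check above works even without torch installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_dir)
    base = AutoModelForCausalLM.from_pretrained(base_dir)
    model = PeftModel.from_pretrained(base, adapter_dir)  # attaches LoRA weights
    return tokenizer, model
```

Passing a local directory instead of a hub model ID keeps `from_pretrained` fully offline, provided every required file is present in that directory.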
I did make it work on my M4 MacBook Pro for most parts of the demo. You will need to change some code and install packages, but it's doable. The parts that are working are:
The part that is not working for me: after step 1 above, I actually need to fine-tune Llama to generate my own adapter, as the adapter inside this repo is around a year old. It looks like the training step requires CUDA, which I don't have, so I tried to run it in the cloud with an Nvidia card. I'm using around 1,000 rows as the training data set and 200 rows as the testing data set. The training works there too, but it's super slow: it requires around 13 hours. And, for the training process, I was running the
@BruceYanghy is this expected?
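On the CUDA point above: on Apple silicon, inference (and sometimes training) can often fall back to PyTorch's MPS backend instead of CUDA. A minimal sketch of a device-selection helper, where `pick_device` is a hypothetical name and the preference order (CUDA, then MPS, then CPU) is an assumption, not something the repo prescribes:

```python
def pick_device(cuda_ok: bool, mps_ok: bool) -> str:
    """Prefer CUDA, fall back to Apple's MPS backend, then plain CPU."""
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"


# With torch installed, you would call it as:
#   import torch
#   device = pick_device(torch.cuda.is_available(),
#                        torch.backends.mps.is_available())
#   model.to(device)
```

Note that MPS does not cover every CUDA operation, so a training script written against CUDA may still need code changes beyond swapping the device string.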