How to use LoRA #21

Open
zhlhlhlhl opened this issue Oct 9, 2024 · 1 comment
Comments

@zhlhlhlhl

Great job! After pretraining, I want to use LoRA to finetune. Can I simply follow LLaVA (https://github.com/haotian-liu/LLaVA/tree/main) and just add --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 to the finetuning script?
I also noticed that you have commented out a section of code in train.py:
```python
# if training_args.lora_enable:
#     state_dict = get_peft_state_maybe_zero_3(
#         model.named_parameters(), training_args.lora_bias
#     )
#     non_lora_state_dict = get_peft_state_non_lora_maybe_zero_3(
#         model.named_parameters()
#     )
#     if training_args.local_rank == 0 or training_args.local_rank == -1:
#         model.config.save_pretrained(training_args.output_dir)
#         model.save_pretrained(training_args.output_dir, state_dict=state_dict)
#         torch.save(non_lora_state_dict, os.path.join(training_args.output_dir, 'non_lora_trainables.bin'))
# else:
#     safe_save_model_for_hf_trainer(trainer=trainer,
#                                    output_dir=training_args.output_dir)
```

Will this have an effect on the trained model?
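For reference, flags like --lora_enable/--lora_r/--lora_alpha are usually just translated into a peft LoraConfig inside an LLaVA-style train.py. Below is a minimal sketch of that mapping, assuming the standard transformers/peft APIs; the base checkpoint and target_modules are illustrative stand-ins, not values taken from this repo:

```python
# Minimal sketch of what --lora_enable / --lora_r / --lora_alpha typically map to,
# using the standard peft API. The base checkpoint and target_modules below are
# illustrative stand-ins, not values from this repo's train.py.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # stand-in for the pretrained model

lora_config = LoraConfig(
    r=128,           # --lora_r
    lora_alpha=256,  # --lora_alpha (effective scale is lora_alpha / r = 2.0)
    target_modules=["q_proj", "v_proj"],  # illustrative; LLaVA wraps the LM's linear layers
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the injected LoRA adapters are trainable
```

With a setup like this, only the adapter weights are trained, which is why the commented-out branch above matters: it is the part that writes the LoRA state dict and non_lora_trainables.bin separately instead of falling through to safe_save_model_for_hf_trainer.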

@xyyandxyy

> Great job! After pretraining, I want to use LoRA to finetune. […] Will this have an effect on the trained model?

Hey, have you gotten LoRA training working?
