Hey @Qt4arab, we've just published an initial approach for finetuning the last N transformer blocks of the first-stage LLM. It's best to play around with the hyperparameters in finetune_params.py, as we haven't determined the optimal set. Let us know if you run into any issues or if you're up for contributing any improvements (via a param sweep or otherwise)!
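For anyone curious what "finetuning the last N transformer blocks" means in practice, here is a minimal PyTorch sketch (not the repo's actual code): freeze every parameter, then re-enable gradients only for the final N blocks. The attribute `model.transformer.h` is a hypothetical name for the block list; adjust it to however the first-stage LLM exposes its layers.

```python
import torch.nn as nn

def freeze_all_but_last_n(model: nn.Module, n_unfrozen: int) -> None:
    # Freeze every parameter first...
    for param in model.parameters():
        param.requires_grad = False
    # ...then re-enable gradients only for the last N transformer blocks.
    blocks = model.transformer.h  # hypothetical: the model's list of blocks
    for block in blocks[-n_unfrozen:]:
        for param in block.parameters():
            param.requires_grad = True
```

Only the unfrozen blocks' parameters should then be handed to the optimizer, which keeps memory and compute low compared with full finetuning.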
The next step to improve finetuning effectiveness is adding LoRA adapters for the first-stage LLM, which is being worked on here.
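Since the LoRA work is still in progress, the following is only a hedged sketch of what adapter-based finetuning could look like using Hugging Face's peft library; the actual implementation may differ, and the `target_modules` name is an assumption that depends on the model's attention-layer naming.

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,                        # low-rank dimension of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    target_modules=["c_attn"],  # assumed attention projection name; adjust per model
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)  # wraps target layers with adapters
model.print_trainable_parameters()          # sanity-check the trainable param count
```

The appeal over block freezing is that adapters train only a small number of added low-rank parameters while leaving the base weights untouched.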
I have a 50k high-quality Arabic dataset. Is it possible to train the model on Arabic?