
v0.6.3

@bwanglzu released this 13 Oct 12:48

Release Note

This release contains 2 new features, 2 bug fixes, and 1 documentation improvement.

🆕 Features

Allocate more GPU memory in GPU environments

Previously, the run scheduler was allocating 16GB of VRAM for GPU runs. Now, it allocates 24GB.

Users can now fine-tune significantly larger models and use larger batch sizes.

Add WiSE-FT to CLIP finetuning (#571)

WiSE-FT is a recent development that has proven to be an effective way to fine-tune
models with a strong zero-shot capability, such as CLIP. We have added it to Finetuner
along with documentation on its use.

Finetuner allows you to apply WiSE-FT easily using WiSEFTCallback. Finetuner
triggers the callback when the fine-tuning job finishes, merging the weights of the
pre-trained model and the fine-tuned model:

import finetuner
from finetuner.callbacks import WiSEFTCallback

run = finetuner.fit(
    model='ViT-B-32#openai',
    ...,  # training data and other fit arguments
    loss='CLIPLoss',
    callbacks=[WiSEFTCallback(alpha=0.5)],
)

See the documentation for advice on how to set alpha.
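Under the hood, WiSE-FT linearly interpolates between the zero-shot (pre-trained)
weights and the fine-tuned weights. Below is a minimal sketch of that merging step,
for illustration only; the function name wise_ft_merge is hypothetical and this is
not Finetuner's internal implementation:

def wise_ft_merge(zero_shot_state, fine_tuned_state, alpha=0.5):
    # Interpolate every parameter tensor between the two checkpoints
    # (state dicts as returned by model.state_dict()).
    # alpha=0.0 keeps the pre-trained weights, alpha=1.0 keeps the
    # fine-tuned weights; values in between trade in-domain accuracy
    # against zero-shot robustness.
    return {
        key: (1 - alpha) * zero_shot_state[key] + alpha * fine_tuned_state[key]
        for key in zero_shot_state
    }

With alpha=0.5, as in the example above, the merged model sits halfway between the
pre-trained and the fine-tuned weights.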

🐞 Bug Fixes

Fix Image Normalization for CLIP Models (#569)

  • Finetuner's image processing was not identical to that used by OpenAI for training CLIP, potentially leading to inconsistent results.
  • The new version fixes the bug and matches OpenAI's preprocessing; a sketch of that preprocessing follows below.
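For reference, OpenAI's CLIP preprocessing resizes with bicubic interpolation,
center-crops, converts to RGB, and normalizes with CLIP's published per-channel
mean and std. Here is a minimal torchvision sketch of that pipeline, shown as an
illustration of the expected preprocessing rather than Finetuner's internal code:

from PIL import Image
from torchvision import transforms

# The mean/std values below are the constants published with OpenAI's CLIP.
clip_preprocess = transforms.Compose([
    transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(224),
    transforms.Lambda(lambda img: img.convert('RGB')),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
])

# Usage (hypothetical image path): clip_preprocess(Image.open('photo.jpg'))
# returns a tensor of shape (3, 224, 224).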

Add open_clip to FinetunerExecutor requirements

The previous version of FinetunerExecutor failed to include the open_clip package in its requirements, forcing users to add it
manually to their executors. This has now been fixed.

📗 Documentation Improvements

Add callbacks documentation (#564)

There is now full documentation for using callbacks with Finetuner.

🤟 Contributors

We would like to thank all contributors to this release.