
Motion LoRA: Learn motion using Low-Rank Adaptation

Input Image | Naive SVD | Forward LoRA
[two rows of side-by-side comparison videos]

Forward camera movement LoRA, trained at 512 × 512 resolution

Input Image | Naive SVD | Backward LoRA
[two rows of side-by-side comparison videos]
Backward camera movement LoRA, trained at 512 × 512 resolution

Main features

  • Simple codebase for finetuning StableVideoDiffusion
  • Motion LoRA training codebase for StableVideoDiffusion
    • You can train a LoRA for motion control! (a minimal sketch of the idea follows this list)
  • Compatible with diffusers
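
For background, LoRA freezes a pretrained weight matrix W and learns a small low-rank update ΔW = B·A (rank r much smaller than the layer width), so only the adapter parameters are trained and stored. Below is a minimal PyTorch sketch of the idea; it is illustrative only, not the exact module used in this repo's training code:

import torch.nn as nn

class LoRALinear(nn.Module):
    # y = W x + (alpha / r) * B(A(x)); W stays frozen, only A and B are trained.
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                    # freeze the pretrained weights
        self.down = nn.Linear(base.in_features, r, bias=False)   # A: d_in -> r
        self.up = nn.Linear(r, base.out_features, bias=False)    # B: r -> d_out
        nn.init.normal_(self.down.weight, std=1.0 / r)
        nn.init.zeros_(self.up.weight)                 # the update starts at zero
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

With --train_lora enabled, only adapter parameters like these receive gradients; with it disabled, the base model itself is fine-tuned (see News below).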

News 📰

[2024.05.28] The training code for Motion LoRA based on Stable Video Diffusion has been released!

You can also perform full Stable Video Diffusion fine-tuning by turning off the --train_lora argument.

Clone our repository

git clone https://github.com/tykim0507/Motion-LoRA.git
cd Motion-LoRA

☀️ Start with StableVideoDiffusion

1. Environment Setup ⚙️ (python==3.10.14 recommended)

conda create -n motionlora python=3.10
conda activate motionlora
pip install -r requirements.txt
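
A quick way to confirm the install worked and the GPU is visible (illustrative check, not part of the repo):

import torch
import diffusers

print("torch:", torch.__version__, "| diffusers:", diffusers.__version__)
print("CUDA available:", torch.cuda.is_available())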

2.1 Download the models from Hugging Face🤗

Model | Resolution | Checkpoint
Stable-Video-Diffusion (Image2Video) | 1024 × 576 | https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1

2.2 Set file structure

Store them in the following structure:

cd Motion-LoRA
    .
    └── checkpoints
        └── stable-video-diffusion-img2vid-xt-1-1

We recommend cloning the Hugging Face repository using git lfs.

mkdir checkpoints
cd checkpoints
git lfs install
git clone https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt-1-1
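
Once the checkpoint is in place, you can sanity-check it with a plain diffusers inference pass. A minimal sketch, assuming the directory layout above; the input image path and output settings are placeholders:

import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "checkpoints/stable-video-diffusion-img2vid-xt-1-1",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = load_image("assets/example.png")  # hypothetical input image
image = image.resize((1024, 576))         # SVD's native resolution

frames = pipe(image, decode_chunk_size=8).frames[0]  # list of PIL frames
export_to_video(frames, "naive_svd.mp4", fps=7)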

3. Prepare video datasets

We used the Mixkit dataset.
You can prepare any videos you like, as long as they share a similar motion.
In fact, a single video is enough to train the motion LoRA!
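
The training code handles batching, but conceptually each sample is just a clip decoded and resized to the 512 × 512 training resolution. A rough OpenCV sketch of that preprocessing, with a hypothetical file path (the actual dataloader may sample frames differently):

import cv2
import numpy as np

def load_clip(path, num_frames=25, size=512):
    # Read num_frames evenly spaced frames and resize each to size x size (RGB).
    # 25 frames matches SVD-XT's clip length; treat it as an assumption here.
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frames.append(cv2.resize(frame, (size, size)))
    cap.release()
    return np.stack(frames)  # (num_frames, size, size, 3), uint8

clip = load_clip("data/mixkit/backward_dolly.mp4")  # hypothetical file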

4. Run Training!

sh train.sh

😆 Citation

@article{motionloratykim,
	title = {MotionLoRA: Learn motion using Low-Rank Adaptation},
	author = {Taeyoon Kim},
	year = {2024},
}

🤓 Acknowledgements

Our codebase builds on Stable Video Diffusion. Thanks to the authors for sharing their codebase!

Additionally, GPU and NFS resources for training are supported by fal.ai🔥.

Feel free to refer to the fal Research Grants!
