Sonic: Shifting Focus to Global Audio Perception in Portrait Animation
- 2025/01/17: Our online Hugging Face demo is released.
- 2025/01/17: Thanks to NewGenAI for promoting Sonic and creating a Windows-based tutorial on YouTube.
- 2025/01/14: Our inference code and weights are released. Stay tuned; we will continue to polish the model.
- 2024/12/16: Our online demo is released.
- Install PyTorch, then install the remaining dependencies:

```shell
pip3 install -r requirements.txt
```
- All models are stored in `checkpoints` by default, and the file structure is as follows:

```
Sonic
├── checkpoints
│   ├── Sonic
│   │   ├── audio2bucket.pth
│   │   ├── audio2token.pth
│   │   ├── unet.pth
│   ├── stable-video-diffusion-img2vid-xt
│   │   ├── ...
│   ├── whisper-tiny
│   │   ├── ...
│   ├── RIFE
│   │   ├── flownet.pkl
│   ├── yoloface_v5m.pt
├── ...
```
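Before running inference, it can help to confirm that the layout above is in place. The following is a minimal sketch (not part of the repository) that checks for the individually named files from the tree; the directories elided with `...` are not checked:

```python
from pathlib import Path

# Named checkpoint files from the tree above, relative to checkpoints/.
# Directories listed only as "..." are omitted from this check.
EXPECTED = [
    "Sonic/audio2bucket.pth",
    "Sonic/audio2token.pth",
    "Sonic/unet.pth",
    "RIFE/flownet.pkl",
    "yoloface_v5m.pt",
]

def missing_checkpoints(root="checkpoints"):
    """Return the expected files that are not present under root."""
    return [p for p in EXPECTED if not (Path(root) / p).exists()]

if __name__ == "__main__":
    missing = missing_checkpoints()
    print("all checkpoints present" if not missing else f"missing: {missing}")
```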
- Download with huggingface-cli:

```shell
python3 -m pip install "huggingface_hub[cli]"
huggingface-cli download LeonJoe13/Sonic --local-dir checkpoints
huggingface-cli download stabilityai/stable-video-diffusion-img2vid-xt --local-dir checkpoints/stable-video-diffusion-img2vid-xt
huggingface-cli download openai/whisper-tiny --local-dir checkpoints/whisper-tiny
```

- Or manually download the pretrained model, svd-xt-1-1, and whisper-tiny to `checkpoints/`.
Run the demo:

```shell
python3 demo.py \
  '/path/to/input_image' \
  '/path/to/input_audio' \
  '/path/to/output_video'
```
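To animate one portrait with several audio clips, the command above can be wrapped in a small loop. This is an illustrative sketch only: `demo.py` and its three positional arguments come from the command above, while the directory names and the batch logic are hypothetical:

```python
import subprocess
from pathlib import Path

def animate_all(image, audio_dir, out_dir):
    """Build one demo.py command per .wav clip in audio_dir.

    Paths are illustrative; demo.py takes image, audio, and output
    video as positional arguments, as shown in the README command.
    """
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cmds = []
    for wav in sorted(Path(audio_dir).glob("*.wav")):
        out = Path(out_dir) / f"{wav.stem}.mp4"
        cmds.append(["python3", "demo.py", image, str(wav), str(out)])
    return cmds

if __name__ == "__main__":
    # Hypothetical example paths; replace with your own files.
    for cmd in animate_all("examples/face.png", "examples/audio", "output"):
        subprocess.run(cmd, check=True)  # runs demo.py once per clip
```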
```bibtex
@misc{ji2024sonicshiftingfocusglobal,
  title={Sonic: Shifting Focus to Global Audio Perception in Portrait Animation},
  author={Xiaozhong Ji and Xiaobin Hu and Zhihong Xu and Junwei Zhu and Chuming Lin and Qingdong He and Jiangning Zhang and Donghao Luo and Yi Chen and Qin Lin and Qinglin Lu and Chengjie Wang},
  year={2024},
  eprint={2411.16331},
  archivePrefix={arXiv},
  primaryClass={cs.MM},
  url={https://arxiv.org/abs/2411.16331},
}
```