diff --git a/video_gen.md b/video_gen.md
index de5851b13a..8e0438dc3a 100644
--- a/video_gen.md
+++ b/video_gen.md
@@ -175,8 +175,8 @@ We used HunyuanVideo for this study, as it is sufficiently large enough, to show
 | BF16 + Group offload (leaf) + VAE tiling | 6.66 GB | 887s |
 | FP8 Upcasting + Group offload (leaf) + VAE tiling | 6.56 GB^ | 885s |
 
-*8Bit models in `bitsandbytes` cannot be moved to CPU from GPU, unlike the 4Bit ones.
-^The memory usage does not reduce further because the peak utilizations comes from computing attention and feed-forward. Using [Flash Attention](https://github.com/Dao-AILab/flash-attention) and [Optimized Feed-Forward](https://github.com/huggingface/diffusers/pull/10623) can help lower this requirement to ~5 GB.
+*8Bit models in `bitsandbytes` cannot be moved from GPU to CPU, unlike the 4Bit ones.
+^The memory usage does not reduce further because the peak utilization comes from computing attention and feed-forward. Using [Flash Attention](https://github.com/Dao-AILab/flash-attention) and an [Optimized Feed-Forward](https://github.com/huggingface/diffusers/pull/10623) can help lower this requirement to ~5 GB.
 
 We used the same settings as above to obtain these numbers. Also note that due to numerical precision loss, quantization can impact the quality of the outputs, effects of which are more prominent in videos than images.
@@ -331,4 +331,4 @@ We cited a number of links throughout the post. To make sure you don’t miss ou
 - [Memory optimization guide for CogVideoX](https://huggingface.co/docs/diffusers/main/en/api/pipelines/cogvideox#memory-optimization) (it applies to other video models, too)
 - [`finetrainers`](https://github.com/a-r-r-o-w/finetrainers) for fine-tuning
 
-*Acknowledgements: Thanks to [Chunte](https://huggingface.co/Chunte) for creating the beautiful thumbnail for this post. Thanks to [Vaibhav](https://huggingface.co/reach-vb) and [Pedro](https://huggingface.co/pcuenq) for their helpful feedback.*
\ No newline at end of file
+*Acknowledgements: Thanks to [Chunte](https://huggingface.co/Chunte) for creating the beautiful thumbnail for this post. Thanks to [Vaibhav](https://huggingface.co/reach-vb) and [Pedro](https://huggingface.co/pcuenq) for their helpful feedback.*
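For context on the rows referenced in the footnotes above, here is a minimal sketch of how the "Group offload (leaf) + VAE tiling + FP8 Upcasting" combination can be wired up in `diffusers`. It assumes a recent `diffusers` release that exposes `enable_group_offload`, `enable_layerwise_casting`, and `enable_tiling`; the checkpoint id, prompt, resolution, and frame count below are illustrative placeholders, not the exact configuration behind the measured numbers.

```python
# Sketch only: leaf-level group offloading + VAE tiling + FP8 storage ("FP8 upcasting")
# for HunyuanVideo. Helper names assume a recent diffusers release; the generation
# settings are placeholders, not the configuration used for the table above.
import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "hunyuanvideo-community/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id, subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id, transformer=transformer, torch_dtype=torch.float16
)

# Store transformer weights in FP8 and upcast them to BF16 only for computation.
pipe.transformer.enable_layerwise_casting(
    storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
)

# Leaf-level group offloading: parameters stay on the CPU and are moved to the
# GPU one leaf module at a time during the forward pass.
pipe.transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
)

# Keep the remaining components on the GPU and decode latents in tiles.
# (For the lowest footprint, the text encoders can be group-offloaded as well.)
pipe.text_encoder.to("cuda")
pipe.text_encoder_2.to("cuda")
pipe.vae.to("cuda")
pipe.vae.enable_tiling()

video = pipe(
    prompt="A cat walks on the grass, realistic style.",
    height=320,
    width=512,
    num_frames=61,
    num_inference_steps=30,
).frames[0]
export_to_video(video, "output.mp4", fps=15)
```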