From a815a48c3eb2ec1efdc16b290310d24038943641 Mon Sep 17 00:00:00 2001
From: WanX-Video
Date: Wed, 26 Feb 2025 11:18:26 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/README.md b/README.md
index d22a050..5d26fde 100644
--- a/README.md
+++ b/README.md
@@ -320,6 +320,8 @@ We test the computational efficiency of different **Wan2.1** models on different
 > (3) For the 1.3B model on a single 4090 GPU, set `--offload_model True --t5_cpu`;
 > (4) For all testings, no prompt extension was applied, meaning `--use_prompt_extend` was not enabled.
 
+> 💡Note: T2V-14B is slower than I2V-14B because the former samples 50 steps while the latter uses 40 steps.
+
 ## Community Contributions
 - [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio) provides more support for Wan, including video-to-video, FP8 quantization, VRAM optimization, LoRA training, and more. Please refer to [their examples](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/wanvideo).
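
For context on the flags referenced in the hunk above, here is a minimal invocation sketch. Only `--offload_model True`, `--t5_cpu`, and `--use_prompt_extend` come from the patched README text; the `generate.py` entry point and the `--task`, `--size`, `--ckpt_dir`, and `--prompt` arguments are assumptions added for illustration.

```sh
# Hypothetical sketch, not confirmed by this patch: generate.py and the
# --task/--size/--ckpt_dir/--prompt arguments are assumed names.
# --offload_model True and --t5_cpu are the memory-saving flags the README
# note prescribes for the 1.3B model on a single RTX 4090; --use_prompt_extend
# is omitted because the benchmarks were run without prompt extension.
python generate.py \
  --task t2v-1.3B \
  --size 832*480 \
  --ckpt_dir ./Wan2.1-T2V-1.3B \
  --offload_model True \
  --t5_cpu \
  --prompt "A cat walking through a garden"
```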