From 204f899b6436fe2e1705a0b67c464b30b8137799 Mon Sep 17 00:00:00 2001
From: Shiwei Zhang <134917139+Steven-SWZhang@users.noreply.github.com>
Date: Tue, 22 Apr 2025 15:10:03 +0800
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 165c464..cdec2f1 100644
--- a/README.md
+++ b/README.md
@@ -35,7 +35,7 @@ In this repository, we present **Wan2.1**, a comprehensive and open suite of vid
 
 ## Community Works
 If your work has improved **Wan2.1** and you would like more people to see it, please inform us.
-- [Phantom](https://github.com/Phantom-video/Phantom) has developed a unified video generation framework for single and multi-subject references based on **Wan2.1**. Please refire to [their examples](https://github.com/Phantom-video/Phantom).
+- [Phantom](https://github.com/Phantom-video/Phantom) has developed a unified video generation framework for single and multi-subject references based on **Wan2.1-T2V-1.3B**. Please refer to [their examples](https://github.com/Phantom-video/Phantom).
 - [UniAnimate-DiT](https://github.com/ali-vilab/UniAnimate-DiT), based on **Wan2.1-14B-I2V**, has trained a Human image animation model and has open-sourced the inference and training code. Feel free to enjoy it!
 - [CFG-Zero](https://github.com/WeichenFan/CFG-Zero-star) enhances **Wan2.1** (covering both T2V and I2V models) from the perspective of CFG.
 - [TeaCache](https://github.com/ali-vilab/TeaCache) now supports **Wan2.1** acceleration, capable of increasing speed by approximately 2x. Feel free to give it a try!