Compare commits

...

8 Commits

Author SHA1 Message Date
Harsh K Dadiya
9b7820ad3b
Merge e16713998a into ae487cc653 2025-12-16 12:07:15 +08:00
Yuxuan BIAN
ae487cc653
Add Wan2.1-related community project Video-As-Prompt (#561)
Co-authored-by: Shiwei Zhang <134917139+Steven-SWZhang@users.noreply.github.com>
2025-12-16 00:18:50 +08:00
Shiwei Zhang
854bd88e7f
update README 2025-12-15 17:03:42 +08:00
Yang Yong (雍洋)
8177ee5bc6
Add LightX2V Community Works (#558)
* Add LightX2V Community Works

* update

* update

* update
2025-12-15 16:59:29 +08:00
Shalfun
f134d60bcc
Update README.md (#487)
an open driving world model based on WAN!

Co-authored-by: Shiwei Zhang <134917139+Steven-SWZhang@users.noreply.github.com>
2025-12-15 11:51:44 +08:00
kyujinHan
bcc437daed
Update community works section in README.md (#557) 2025-12-14 19:09:54 +08:00
Shiwei Zhang
e4f90fa81f
Update community works section in README.md 2025-12-10 21:13:53 +08:00
Harsh Dadiya
e16713998a Add Windows-specific installation instructions and requirements file 2025-09-28 14:00:02 +05:30
2 changed files with 35 additions and 0 deletions

README.md

@@ -36,6 +36,11 @@ In this repository, we present **Wan2.1**, a comprehensive and open suite of vid
## Community Works
If your work has improved **Wan2.1** and you would like more people to see it, please inform us.
- [Video-As-Prompt](https://github.com/bytedance/Video-As-Prompt), the first unified semantic-controlled video generation model based on **Wan2.1-14B-I2V** with a Mixture-of-Transformers architecture and in-context controls (e.g., concept, style, motion, camera). Refer to the [project page](https://bytedance.github.io/Video-As-Prompt/) for more examples.
- [LightX2V](https://github.com/ModelTC/LightX2V), a lightweight and efficient video generation framework that integrates **Wan2.1** and **Wan2.2** and supports multiple engineering acceleration techniques for fast inference; it can run on GPUs ranging from the RTX 5090 down to the RTX 4060 (8GB VRAM).
- [DriVerse](https://github.com/shalfun/DriVerse), an autonomous driving world model based on **Wan2.1-14B-I2V** that generates future driving videos conditioned on any scene frame and a given trajectory. Refer to the [project page](https://github.com/shalfun/DriVerse/tree/main) for more examples.
- [Training-Free-WAN-Editing](https://github.com/KyujinHan/Awesome-Training-Free-WAN2.1-Editing), built on **Wan2.1-T2V-1.3B**, allows training-free video editing with image-based training-free methods, such as [FlowEdit](https://arxiv.org/abs/2412.08629) and [FlowAlign](https://arxiv.org/abs/2505.23145).
- [Wan-Move](https://github.com/ali-vilab/Wan-Move), accepted to NeurIPS 2025, a framework that brings **Wan2.1-I2V-14B** to SOTA fine-grained, point-level motion control! Refer to [their project page](https://wan-move.github.io/) for more information.
- [EchoShot](https://github.com/JoHnneyWang/EchoShot), a native multi-shot portrait video generation model based on **Wan2.1-T2V-1.3B**, allows generation of multiple video clips featuring the same character as well as highly flexible content controllability. Refer to [their project page](https://johnneywang.github.io/EchoShot-webpage/) for more information.
- [AniCrafter](https://github.com/MyNiuuu/AniCrafter), a human-centric animation model based on **Wan2.1-14B-I2V**, controls the Video Diffusion Models with 3DGS Avatars to insert and animate anyone into any scene following given motion sequences. Refer to the [project page](https://myniuuu.github.io/AniCrafter) for more examples.
- [HyperMotion](https://vivocameraresearch.github.io/hypermotion/), a human image animation framework based on **Wan2.1**, addresses the challenge of generating complex human body motions in pose-guided animation. Refer to [their website](https://vivocameraresearch.github.io/magictryon/) for more examples.
@@ -93,6 +98,11 @@ Install dependencies:
pip install -r requirements.txt
```
For Windows:
```sh
pip install -r requirements-win.txt
```
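After installing the Windows requirements, a quick sanity check can confirm that the CUDA 12.8 build of PyTorch is active. This is a minimal sketch, not part of the original instructions; exact version strings depend on the wheel pip resolves.

```python
# Minimal sanity check after `pip install -r requirements-win.txt`.
import torch

print(torch.__version__)          # typically ends in "+cu128" for the CUDA 12.8 wheel
print(torch.cuda.is_available())  # True if the local GPU and driver are visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```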
#### Model Download

requirements-win.txt Normal file

@@ -0,0 +1,25 @@
# PyTorch + TorchVision (CUDA 12.8 build for Windows)
--extra-index-url https://download.pytorch.org/whl/cu128
torch
torchvision
# Core dependencies
opencv-python>=4.9.0.80
diffusers>=0.31.0
transformers>=4.49.0
tokenizers>=0.20.3
accelerate>=1.1.1
tqdm
imageio
easydict
ftfy
dashscope
imageio-ffmpeg
gradio>=5.0.0
numpy>=1.23.5,<2
# Known issue:
# flash_attn is not supported on Windows (fails to build).
# Users can skip it or run the project in WSL/Linux if needed.
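Because flash_attn cannot be built on Windows, a common workaround is to guard the import and fall back to PyTorch's native `scaled_dot_product_attention`. The sketch below is illustrative only, not the repository's own code; the function name, tensor layout, and module structure are assumptions.

```python
# Illustrative fallback when flash_attn is unavailable (e.g. on Windows).
# Assumption: q, k, v are shaped (batch, seq_len, num_heads, head_dim),
# the layout expected by flash_attn_func.
import torch
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func  # fails to build/install on Windows
    HAS_FLASH_ATTN = True
except ImportError:
    HAS_FLASH_ATTN = False

def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    if HAS_FLASH_ATTN:
        return flash_attn_func(q, k, v)
    # SDPA expects (batch, num_heads, seq_len, head_dim), so transpose around the call.
    q, k, v = (x.transpose(1, 2) for x in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2)
```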