Added instructions to install on RTX 50xx
This commit is contained in:
parent 1c69191954
commit 4282a4c095
README.md (56 changed lines)
@@ -19,6 +19,7 @@ In this repository, we present **Wan2.1**, a comprehensive and open suite of vid
## 🔥 Latest News!!
* Mar 20 2025: 👋 Good news! Official support for RTX 50xx; please check the installation instructions below.
* Mar 19 2025: 👋 Wan2.1GP v3.2:
- Added Classifier-Free Guidance Zero Star. The video should match the text prompt better (especially with text2video) at no performance cost: many thanks to the **CFG Zero * Team:**\
Don't hesitate to give them a star if you appreciate the results: https://github.com/WeichenFan/CFG-Zero-star
@@ -88,7 +89,7 @@ You will find the original Wan2.1 Video repository here: https://github.com/Wan-
## Installation Guide for Linux and Windows
## Installation Guide for Linux and Windows for GPUs up to RTX 40xx

**If you are looking for a one-click installation, just go to the Pinokio App store: https://pinokio.computer/**
@@ -109,15 +110,23 @@ pip install torch==2.6.0 torchvision torchaudio --index-url https://download.pyt
# 2. Install pip dependencies
pip install -r requirements.txt
# 3.1 optional Sage attention support (30% faster, easy to install on Linux but much harder on Windows)
# 3.1 optional Sage attention support (30% faster)
# Windows only: extra step needed only on Windows, as Triton is already included with the Linux version of PyTorch
pip install triton-windows
# For both Windows and Linux
pip install sageattention==1.0.6
# or for Sage Attention 2 (40% faster, sorry only manual compilation for the moment)
# 3.2 optional Sage 2 attention support (40% faster)
# Windows only
pip install triton-windows
pip install https://github.com/woct0rdho/SageAttention/releases/download/v2.1.1-windows/sageattention-2.1.1+cu126torch2.6.0-cp310-cp310-win_amd64.whl
# Linux only (sorry, only manual compilation for the moment, but it is straightforward on Linux)
git clone https://github.com/thu-ml/SageAttention
cd SageAttention
pip install -e .
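# optional sanity check (not in the original instructions): confirm the compiled
# Sage attention module imports cleanly (module name per the thu-ml package)
python -c "from sageattention import sageattn; print('sageattention import OK')"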
# 3.2 optional Flash attention support (easy to install on Linux but much harder on Windows)
# 3.3 optional Flash attention support (easy to install on Linux but may be complex on Windows, as it will try to compile the CUDA kernels)
pip install flash-attn==2.7.2.post1
```
@@ -125,17 +134,38 @@ pip install flash-attn==2.7.2.post1
Note that PyTorch *sdpa attention* is available by default. It is worth installing *Sage attention* (although not as simple as it sounds) because it offers a 30% speed boost over *sdpa attention* at a small quality cost.
In order to install Sage, you will also need to install Triton. Once Triton is installed, you can turn on *Pytorch Compilation*, which will give you an additional 20% speed boost and reduced VRAM consumption.
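A quick optional check (not part of the original instructions, and assuming the environment above is already activated) is to confirm that both packages are importable before turning on *Pytorch Compilation*:
```
# optional sanity check: these commands only confirm the packages are installed
python -c "import triton; print('triton', triton.__version__)"
python -c "import sageattention; print('sageattention OK')"
```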
### Ready-to-use Python wheels for Windows users
I provide here links to simplify the installation for Windows users with Python 3.10 / Pytorch 2.5.1 / Cuda 12.4. I won't be able to provide support nor guarantee that they do what they should do.
- Triton attention (needed for *pytorch compilation* and *Sage attention*)
```
pip install https://github.com/woct0rdho/triton-windows/releases/download/v3.2.0-windows.post9/triton-3.2.0-cp310-cp310-win_amd64.whl # triton for pytorch 2.6.0
## Installation Guide for Linux and Windows for GPUs up to RTX 50xx
RTX 50xx GPUs are only supported by PyTorch starting from PyTorch 2.7.0, which is still in beta. Therefore this version may be less stable.\
It is important to use Python 3.10, otherwise the pip wheels may not be compatible.
```
# 0 Download the source and create a Python 3.10.9 environment using conda or create a venv using python
git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
conda create -n wan2gp python=3.10.9
conda activate wan2gp
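# optional check (not in the original instructions): confirm the activated
# environment really uses Python 3.10.x before installing the wheels below
python --version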
- Sage attention
```
pip install https://github.com/deepbeepmeep/SageAttention/raw/refs/heads/main/releases/sageattention-2.1.0-cp310-cp310-win_amd64.whl # for pytorch 2.6.0 (experimental, if it works; otherwise you will need to install and compile manually, see above)
# 1 Install pytorch 2.7.0:
pip install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu128
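# optional check (not in the original instructions, assumes a CUDA-capable GPU is visible):
# confirm the beta torch build is installed and detects the RTX 50xx card
python -c "import torch; print(torch.__version__, torch.cuda.get_device_name(0))"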
# 2. Install pip dependencies
pip install -r requirements.txt
# 3.1 optional Sage attention support (30% faster)
# Windows only: extra step needed only on Windows, as Triton is already included with the Linux version of PyTorch
pip install triton-windows
# For both Windows and Linux
pip install sageattention==1.0.6
# 3.2 optional Sage 2 attention support (40% faster)
# Windows only
pip install triton-windows
pip install https://github.com/woct0rdho/SageAttention/releases/download/v2.1.1-windows/sageattention-2.1.1+cu128torch2.7.0-cp310-cp310-win_amd64.whl
# Linux only (sorry, only manual compilation for the moment, but it is straightforward on Linux)
git clone https://github.com/thu-ml/SageAttention
cd SageAttention
pip install -e .
```
## Run the application