mirror of
https://github.com/Wan-Video/Wan2.1.git
synced 2026-01-11 16:53:34 +00:00
Update README.md
added docker info
This commit is contained in:
parent cedb800259
commit 6dfd173152
26
README.md
@@ -148,6 +148,32 @@ git pull
pip install -r requirements.txt
```

## 🐳 Docker

**For Debian-based systems (Ubuntu, Debian, etc.):**
```bash
./run-docker-cuda-deb.sh
```
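If you want to confirm that Docker can reach your GPU before running the script, the standard smoke test from NVIDIA's Container Toolkit documentation is:

```bash
# Should print the same GPU table as running nvidia-smi on the host;
# failure usually means the NVIDIA Container Toolkit is not installed
# or the Docker daemon has not been restarted after installing it.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```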
This automated script will:

- Detect your GPU model and VRAM automatically
- Select the optimal CUDA architecture for your GPU
- Install the NVIDIA Docker runtime if needed
- Build a Docker image with all dependencies
- Run WanGP with optimal settings for your hardware
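The architecture-selection step above is essentially a lookup from the detected device name to a CUDA compute capability. A hypothetical sketch of that mapping (not the actual code in `run-docker-cuda-deb.sh`, whose detection logic may differ):

```bash
# Map a GPU name to a CUDA compute architecture (hypothetical illustration).
detect_cuda_arch() {
  case "$1" in
    *"RTX 40"*)            echo "8.9" ;;  # Ada Lovelace
    *"RTX 30"*)            echo "8.6" ;;  # Ampere (consumer)
    *A100*)                echo "8.0" ;;  # Ampere (data center)
    *H100*)                echo "9.0" ;;  # Hopper
    *"RTX 20"*|*"GTX 16"*) echo "7.5" ;;  # Turing
    *V100*)                echo "7.0" ;;  # Volta
    *"GTX 10"*)            echo "6.1" ;;  # Pascal
    *)                     echo "unknown" ;;
  esac
}

# On a real system the name would typically come from:
#   nvidia-smi --query-gpu=name --format=csv,noheader
detect_cuda_arch "NVIDIA GeForce RTX 4090"   # → 8.9
```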
**The Docker environment includes:**

- NVIDIA CUDA 12.4.1 with cuDNN support
- PyTorch 2.6.0 with CUDA 12.4 support
- SageAttention compiled for your specific GPU architecture
- Optimized environment variables for performance (TF32, threading, etc.)
- Automatic cache-directory mounting for faster subsequent runs
- The current directory mounted in the container, so all downloaded models, LoRAs, generated videos, and other files are saved locally
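The cache and current-directory mounts amount to a `docker run` invocation roughly like the following. This is a hypothetical approximation of what the helper script sets up; the image name (`wan2.1:latest`) and the exact flags are assumptions, not taken from the script itself:

```bash
# --gpus all         : expose the host's NVIDIA GPUs to the container
# -e OMP_NUM_THREADS : one example of a threading-related tuning variable
# first -v           : reuse model/compile caches between runs
# second -v / -w     : work in the host's current directory, so outputs persist
docker run --rm -it --gpus all \
  -e OMP_NUM_THREADS="$(nproc)" \
  -v "$HOME/.cache:/root/.cache" \
  -v "$PWD:/workspace" -w /workspace \
  wan2.1:latest
```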
**Supported GPUs:** RTX 40XX, RTX 30XX, RTX 20XX, GTX 16XX, GTX 10XX, Tesla V100, A100, H100, and more.

## 📦 Installation