diff --git a/README.md b/README.md
index d9be0fd..3ee2d88 100644
--- a/README.md
+++ b/README.md
@@ -148,6 +148,32 @@
 git pull
 pip install -r requirements.txt
 ```
+## 🐳 Docker
+
+**For Debian-based systems (Ubuntu, Debian, etc.):**
+
+```bash
+./run-docker-cuda-deb.sh
+```
+
+This automated script will:
+
+- Detect your GPU model and VRAM automatically
+- Select the optimal CUDA architecture for your GPU
+- Install the NVIDIA Docker runtime if needed
+- Build a Docker image with all dependencies
+- Run WanGP with optimal settings for your hardware
+
+**Docker environment includes:**
+
+- NVIDIA CUDA 12.4.1 with cuDNN support
+- PyTorch 2.6.0 with CUDA 12.4 support
+- SageAttention compiled for your specific GPU architecture
+- Optimized environment variables for performance (TF32, threading, etc.)
+- Automatic cache directory mounting for faster subsequent runs
+- Current directory mounted in the container, so all downloaded models, loras, generated videos and other files are saved locally
+
+**Supported GPUs:** RTX 40XX, RTX 30XX, RTX 20XX, GTX 16XX, GTX 10XX, Tesla V100, A100, H100, and more.
 
 ## 📦 Installation
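
The "select optimal CUDA architecture" step added above can be sketched as a small shell helper that maps a detected GPU name to its NVIDIA compute capability. This is an illustrative sketch, not the actual logic of `run-docker-cuda-deb.sh`; the function name `cuda_arch_for_gpu` is hypothetical, and the mapping uses NVIDIA's published compute capabilities for the GPU families listed in the README:

```shell
# Hypothetical sketch of the arch-selection step; the real script
# may detect the GPU differently (e.g. via nvidia-smi) and use
# other fallbacks for unlisted models.
cuda_arch_for_gpu() {
  case "$1" in
    *"H100"*)   echo "9.0" ;;  # Hopper
    *"RTX 40"*) echo "8.9" ;;  # Ada Lovelace
    *"RTX 30"*) echo "8.6" ;;  # Ampere (consumer)
    *"A100"*)   echo "8.0" ;;  # Ampere (datacenter)
    *"RTX 20"*) echo "7.5" ;;  # Turing
    *"GTX 16"*) echo "7.5" ;;  # Turing (no tensor cores)
    *"V100"*)   echo "7.0" ;;  # Volta
    *"GTX 10"*) echo "6.1" ;;  # Pascal
    *)          echo "" ;;     # unknown: leave selection to the script's fallback
  esac
}
```

The selected value would typically be passed to the image build as `TORCH_CUDA_ARCH_LIST` so that SageAttention is compiled for exactly that architecture.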