# Wan2.1 Docker Quick Start
Get Wan2.1 running in Docker in 5 minutes!
## Prerequisites

- Docker 20.10+ installed (Get Docker)
- NVIDIA GPU with 8GB+ VRAM (for GPU acceleration)
- NVIDIA Docker runtime installed (Install Guide)
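Before building anything, you can sanity-check all three prerequisites from a host terminal (the CUDA image tag below is the same one used in the Troubleshooting section of this guide):

```bash
# Docker version (needs 20.10+)
docker --version

# GPU visible to the host driver
nvidia-smi

# NVIDIA runtime working inside a container
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```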
## Quick Start (3 Steps)

### Step 1: Clone and Navigate

```bash
git clone https://github.com/Wan-Video/Wan2.1.git
cd Wan2.1
```
### Step 2: Build and Start

**Option A: Using the helper script (Recommended)**

```bash
./docker-run.sh start
```

**Option B: Using Make**

```bash
make docker-build
make docker-up
```

**Option C: Using Docker Compose directly**

```bash
docker compose up -d wan2-1
```
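Whichever option you pick, you can confirm the service came up cleanly before moving on:

```bash
# Service state and startup output (Ctrl+C to stop following logs)
docker compose ps
docker compose logs -f wan2-1
```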
### Step 3: Download Models and Run

```bash
# Enter the container
./docker-run.sh shell
# OR
make docker-shell
# OR
docker compose exec wan2-1 bash

# Download a model (1.3B for consumer GPUs)
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir /app/models/Wan2.1-T2V-1.3B

# Generate your first video!
python generate.py \
  --task t2v-1.3B \
  --size 832*480 \
  --ckpt_dir /app/models/Wan2.1-T2V-1.3B \
  --offload_model True \
  --t5_cpu \
  --sample_shift 8 \
  --sample_guide_scale 6 \
  --prompt "A cute cat playing with a ball of yarn"

# Your video will be in /app/outputs (accessible at ./outputs on your host)
```
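If you'd rather script a generation from the host than work in an interactive shell, the same command can be run through `docker compose exec`; a sketch using the flags from Step 3 and the `wan2-1` service name:

```bash
# One-off generation from the host; the video lands in ./outputs
docker compose exec wan2-1 python generate.py \
  --task t2v-1.3B \
  --size 832*480 \
  --ckpt_dir /app/models/Wan2.1-T2V-1.3B \
  --offload_model True \
  --t5_cpu \
  --prompt "A cute cat playing with a ball of yarn"
```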
## Common Commands

### Container Management

```bash
# Start container
./docker-run.sh start

# Stop container
./docker-run.sh stop

# Restart container
./docker-run.sh restart

# View logs
./docker-run.sh logs

# Enter shell
./docker-run.sh shell

# Check status
./docker-run.sh status
```
### Using Make Commands

```bash
make docker-up        # Start
make docker-down      # Stop
make docker-shell     # Enter shell
make docker-logs      # View logs
make docker-status    # Check status
make help             # Show all commands
```
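To see exactly which shell commands a target wraps before running it, use make's dry-run flag:

```bash
# Print the recipe for a target without executing it
make -n docker-up
```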
## Run Gradio Web Interface

```bash
# Inside the container
cd gradio
python t2v_1.3B_singleGPU.py --ckpt_dir /app/models/Wan2.1-T2V-1.3B

# Open browser to: http://localhost:7860
```
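If the page doesn't load from your host browser, the app may be binding to 127.0.0.1 inside the container, where the published port can't reach it. Gradio honors the `GRADIO_SERVER_NAME` environment variable, so (assuming the script doesn't hard-code `server_name` itself) you can force it to listen on all interfaces:

```bash
# Inside the container: bind Gradio to 0.0.0.0 so the published port 7860
# is reachable from the host (assumes the script does not override server_name)
GRADIO_SERVER_NAME=0.0.0.0 python t2v_1.3B_singleGPU.py --ckpt_dir /app/models/Wan2.1-T2V-1.3B
```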
## Available Models

| Model | VRAM | Resolution | Download Command |
|---|---|---|---|
| T2V-1.3B | 8GB+ | 480P | `huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir /app/models/Wan2.1-T2V-1.3B` |
| T2V-14B | 24GB+ | 720P | `huggingface-cli download Wan-AI/Wan2.1-T2V-14B --local-dir /app/models/Wan2.1-T2V-14B` |
| I2V-14B-720P | 24GB+ | 720P | `huggingface-cli download Wan-AI/Wan2.1-I2V-14B-720P --local-dir /app/models/Wan2.1-I2V-14B-720P` |
| I2V-14B-480P | 16GB+ | 480P | `huggingface-cli download Wan-AI/Wan2.1-I2V-14B-480P --local-dir /app/models/Wan2.1-I2V-14B-480P` |
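Because `./models` on the host is volume-mounted into the container at `/app/models` (see the file structure below), you can equally run these downloads from the host; a sketch, assuming the default mounts from docker-compose.yml:

```bash
# Download on the host; the container sees the files under /app/models
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.1-T2V-1.3B --local-dir ./models/Wan2.1-T2V-1.3B
```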
## Troubleshooting

### "CUDA out of memory"

- Use the 1.3B model with `--offload_model True --t5_cpu`
- Reduce resolution to 480P
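To see how close you are to the limit, watch GPU memory from the host while a generation runs:

```bash
# Refresh nvidia-smi every second; watch the memory column
watch -n 1 nvidia-smi
```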
### "nvidia-smi not found"

- Ensure the NVIDIA Docker runtime is installed
- Run: `docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi`
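You can also confirm the runtime is registered with the Docker daemon:

```bash
# "nvidia" should appear in the list of runtimes
docker info | grep -i runtimes
```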
### Can't access Gradio interface

- Check if port 7860 is exposed: `docker ps | grep 7860`
- Try `http://127.0.0.1:7860` instead of `localhost`
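To double-check the mapping from the host side:

```bash
# Show the host port published for the container's 7860
docker port "$(docker compose ps -q wan2-1)" 7860

# Probe the interface directly
curl -I http://127.0.0.1:7860
```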
## Next Steps

- Read the full [DOCKER_SETUP.md](DOCKER_SETUP.md) for advanced configuration
- Check the main [README.md](README.md) for model details
- Join the Discord community
## File Structure

```
Wan2.1/
├── models/          # Downloaded models (created automatically)
├── outputs/         # Generated videos (accessible from host)
├── cache/           # Model cache
├── Dockerfile       # Docker image definition
├── docker-compose.yml  # Container orchestration
├── docker-run.sh    # Helper script
├── Makefile         # Make commands
└── DOCKER_SETUP.md  # Detailed documentation
```
Happy Generating! 🎬