Wan2.1/docker-compose.yml
Commit 0bd40b9bf0 by Claude, 2025-10-26

Add professional-grade Docker setup for local deployment
This commit introduces comprehensive Docker support for running Wan2.1
video generation models locally with GPU acceleration.

Changes:
- Add Dockerfile with CUDA 12.1 support and optimized layer caching
- Add docker-compose.yml for easy container orchestration
- Add .dockerignore for efficient Docker builds
- Add DOCKER_SETUP.md with detailed setup and troubleshooting guide
- Add DOCKER_QUICKSTART.md for rapid deployment
- Add docker-run.sh helper script for container management
- Update Makefile with Docker management commands

Features:
- Full GPU support with NVIDIA Docker runtime
- Single-GPU and multi-GPU (FSDP + xDiT) configurations; a commented
  torchrun override is sketched in the compose file below
- Memory optimization flags for consumer GPUs (8GB+); see the commented
  allocator setting in the compose file below
- Gradio web interface support on port 7860
- Volume mounts for models, outputs, and cache
- Comprehensive troubleshooting and optimization guides
- Production-ready security best practices

The Docker setup supports all Wan2.1 models (T2V, I2V, FLF2V, VACE)
and includes both 1.3B (consumer GPU) and 14B (high-end GPU) variants.

version: '3.8'

services:
  wan2-1:
    build:
      context: .
      dockerfile: Dockerfile
    image: wan2.1:latest
    container_name: wan2.1-gpu
    # GPU support - requires NVIDIA Docker runtime
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
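            # Alternative (a sketch): pin specific GPUs by ID instead of
            # reserving them all; device_ids and count are mutually
            # exclusive in the Compose specification.
            # - driver: nvidia
            #   device_ids: ['0', '1']
            #   capabilities: [gpu]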
    # Environment variables
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
      - CUDA_VISIBLE_DEVICES=0
      - PYTHONUNBUFFERED=1
      - TORCH_HOME=/app/cache
      - HF_HOME=/app/cache/huggingface
      - TRANSFORMERS_CACHE=/app/cache/transformers
      # Optional: Set your Dashscope API key for prompt extension
      # - DASH_API_KEY=your_api_key_here
      # - DASH_API_URL=https://dashscope.aliyuncs.com/api/v1
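      # Optional (a sketch): reduce CUDA memory fragmentation on low-VRAM
      # GPUs; expandable_segments requires a recent PyTorch (2.1+).
      # - PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True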
    # Volume mounts
    volumes:
      # Mount models directory (download models here)
      - ./models:/app/models
      # Mount outputs directory
      - ./outputs:/app/outputs
      # Mount cache directory for model downloads
      - ./cache:/app/cache
      # Optional: Mount examples directory if you modify it
      - ./examples:/app/examples
    # Port mapping for Gradio interface
    ports:
      - "7860:7860"
    # Shared memory size (important for DataLoader workers)
    shm_size: '16gb'
    # Keep container running
    stdin_open: true
    tty: true
    # Network mode
    network_mode: bridge
    # Restart policy
    restart: unless-stopped
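    # Optional (a sketch): override the default command for multi-GPU
    # FSDP + xDiT inference. The flags mirror the upstream Wan2.1 README
    # (generate.py with --dit_fsdp/--t5_fsdp/--ulysses_size); adjust
    # --nproc_per_node and --ulysses_size to your GPU count.
    # command: >
    #   torchrun --nproc_per_node=8 generate.py
    #   --task t2v-14B --size 1280*720
    #   --ckpt_dir /app/models/Wan2.1-T2V-14B
    #   --dit_fsdp --t5_fsdp --ulysses_size 8
    #   --prompt "Your prompt here"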

  # CPU-only service (for systems without GPU)
  wan2-1-cpu:
    build:
      context: .
      dockerfile: Dockerfile
    image: wan2.1:latest
    container_name: wan2.1-cpu
    profiles:
      - cpu
    environment:
      - PYTHONUNBUFFERED=1
      - TORCH_HOME=/app/cache
      - HF_HOME=/app/cache/huggingface
      - TRANSFORMERS_CACHE=/app/cache/transformers
      # Empty value hides all GPUs; quoting it as "" would pass literal
      # quote characters through Compose's list syntax
      - CUDA_VISIBLE_DEVICES=
    volumes:
      - ./models:/app/models
      - ./outputs:/app/outputs
      - ./cache:/app/cache
      - ./examples:/app/examples
    ports:
      - "7860:7860"
    shm_size: '8gb'
    stdin_open: true
    tty: true
    network_mode: bridge
    restart: unless-stopped

# Named volumes, declared for convenience; the services above use
# relative bind mounts, so these stay unused unless you switch a
# service over to them
volumes:
  models:
  outputs:
  cache:
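
# Usage (standard Docker Compose commands):
#   docker compose up -d wan2-1                    # GPU service
#   docker compose --profile cpu up -d wan2-1-cpu  # CPU-only profile
# Once the Gradio interface is launched inside the container, it is
# reachable at http://localhost:7860 per the port mapping above.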