Mirror of https://github.com/Wan-Video/Wan2.1.git, synced 2025-06-05 14:54:54 +00:00
Update text2video.py to reduce GPU memory by emptying cache (#44)
* Update text2video.py to reduce GPU memory by emptying cache

  If offload_model is set, empty_cache() must be called after the model is moved to the CPU to actually free the GPU memory. I verified on an RTX 4090 that without calling empty_cache() the model stays resident in GPU memory and the subsequent VAE decoding never finishes.

* Update text2video.py

  Only one empty_cache() call is needed before the VAE decode.
This commit is contained in:
parent 73648654c5
commit 0e3c42a830
@@ -252,6 +252,7 @@ class WanT2V:
             x0 = latents
             if offload_model:
                 self.model.cpu()
+                torch.cuda.empty_cache()
             if self.rank == 0:
                 videos = self.vae.decode(x0)
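A minimal sketch of the pattern this commit applies, assuming a generic PyTorch model and VAE (decode_with_offload and its arguments are illustrative, not the actual WanT2V API): moving the model to the CPU only frees GPU memory once the CUDA caching allocator releases its reserved blocks, which is what torch.cuda.empty_cache() does here before the VAE decode.

import torch

def decode_with_offload(model, vae, latents, offload_model=True):
    # Illustrative sketch: offload the denoising model, then decode with the VAE.
    x0 = latents
    if offload_model:
        # Move the (large) diffusion model off the GPU.
        model.cpu()
        # Without this call the caching allocator keeps the freed blocks
        # reserved, so the subsequent VAE decode can stall or run out of
        # memory on a 24 GB card such as an RTX 4090.
        torch.cuda.empty_cache()
    # Decode latents to frames with the VAE still on the GPU.
    return vae.decode(x0)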