Commit Graph

13 Commits

Author SHA1 Message Date
DeepBeepMeep       f8d9edeb50  Added 10% boost, improved Loras and Teacache  2025-03-10 23:26:42 +01:00
DeepBeepMeep       f9ce97a1ba  Fixed pytorch compilation  2025-03-08 16:37:21 +01:00
DeepBeepMeep       28f19586a5  Fixed Flash attention  2025-03-05 23:45:45 +01:00
DeepBeepMeep       24d8beb490  Added support for multiple input images  2025-03-04 14:22:06 +01:00
DeepBeepMeep       697cc2cce5  Implemented VAE tiling  2025-03-04 02:39:44 +01:00
DeepBeepMeep       ec1159bb59  Added TeaCache support  2025-03-03 18:41:33 +01:00
DeepBeepMeep       4b04e6971b  Fix pb with Sage1  2025-03-02 22:16:30 +01:00
DeepBeepMeep       f3b365a5da  RIFLEx support  2025-03-02 17:55:30 +01:00
DeepBeepMeep       7a8dcbf63d  RIFLEx support  2025-03-02 17:48:57 +01:00
DeepBeepMeep       3731ab70e1  Added RIFLEx support  2025-03-02 16:46:52 +01:00
DeepBeepMeep       18940291d4  beta version  2025-03-02 04:05:49 +01:00
Adrian Corduneanu  0e3c42a830  Update text2video.py to reduce GPU memory by emptying cache (#44)  2025-02-26 18:56:57 +08:00

    * Update text2video.py to reduce GPU memory by emptying cache

      If offload_model is set, empty_cache() must be called after the model is moved to the CPU to actually free the GPU memory. I verified on an RTX 4090 that without calling empty_cache() the model remains in memory and the subsequent VAE decoding never finishes.

    * Update text2video.py: only one empty_cache() is needed before the VAE decode
WanX-Video-1       65386b2e03  init upload  2025-02-25 22:07:47 +08:00
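
Commit 0e3c42a830 above describes a common PyTorch offloading pattern: once the denoising model is moved back to the CPU (when offload_model is set), torch.cuda.empty_cache() must be called so the CUDA caching allocator actually releases that memory before the VAE decode runs. The sketch below illustrates the pattern under stated assumptions; decode_with_offload and the model, vae, and latents names are hypothetical placeholders, not the actual variables in text2video.py.

```python
import torch

def decode_with_offload(model, vae, latents, offload_model=True, device="cuda"):
    # Illustrative sketch of the pattern described in commit 0e3c42a830;
    # `model`, `vae`, and `latents` are hypothetical stand-ins.
    with torch.no_grad():
        latents = model(latents.to(device))  # denoising pass on the GPU

        if offload_model:
            model.to("cpu")  # move the model weights off the GPU
            # Without empty_cache(), the caching allocator keeps the freed
            # blocks reserved, so the VAE decode below can stall or run
            # out of memory even though the model is no longer on the GPU.
            torch.cuda.empty_cache()

        # The VAE decode now runs with the reclaimed GPU memory available.
        return vae.decode(latents)
```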