Commit Graph

13 Commits

Author             SHA1        Message                                       Date
DeepBeepMeep       1949b61a20  RAM optimizations and faster launch           2025-03-25 00:14:14 +01:00
DeepBeepMeep       f2c5a06626  New Multitabs, Save Settings, End Frame       2025-03-24 01:00:52 +01:00
DeepBeepMeep       e554e1a3d6  Lora fest + Skip Layer Guidance               2025-03-15 01:12:51 +01:00
DeepBeepMeep       d233dd7ed9  Refactored Loras                              2025-03-14 23:43:04 +01:00
DeepBeepMeep       f8d9edeb50  Added 10% boost, improved Loras and Teacache  2025-03-10 23:26:42 +01:00
DeepBeepMeep       f9ce97a1ba  Fixed pytorch compilation                     2025-03-08 16:37:21 +01:00
DeepBeepMeep       697cc2cce5  Implemented VAE tiling                        2025-03-04 02:39:44 +01:00
DeepBeepMeep       ec1159bb59  Added TeaCache support                        2025-03-03 18:41:33 +01:00
DeepBeepMeep       7a8dcbf63d  RIFLEx support                                2025-03-02 17:48:57 +01:00
DeepBeepMeep       3731ab70e1  Added RIFLEx support                          2025-03-02 16:46:52 +01:00
DeepBeepMeep       18940291d4  beta version                                  2025-03-02 04:05:49 +01:00
Adrian Corduneanu  0e3c42a830  Update text2video.py to reduce GPU memory by emptying cache (#44)  2025-02-26 18:56:57 +08:00

    * Update text2video.py to reduce GPU memory by emptying cache

      If offload_model is set, empty_cache() must be called after the model is
      moved to the CPU to actually free the GPU memory. I verified on an RTX 4090
      that without calling empty_cache() the model remains in GPU memory and the
      subsequent VAE decoding never finishes.

    * Update text2video.py: only one empty_cache() needed before the VAE decode
WanX-Video-1       65386b2e03  init upload                                   2025-02-25 22:07:47 +08:00
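
The offloading fix described in commit 0e3c42a830 follows a common PyTorch pattern. A minimal sketch of it, assuming `model` is a `torch.nn.Module` that was previously moved to the GPU (the helper name `offload_and_free` is illustrative, not from the repository):

```python
import torch


def offload_and_free(model: torch.nn.Module) -> torch.nn.Module:
    """Move a model to the CPU and release its GPU allocation.

    Moving the parameters with .to("cpu") frees the tensors, but PyTorch's
    caching allocator keeps the freed blocks reserved for reuse. Calling
    torch.cuda.empty_cache() afterwards returns them to the driver so that
    subsequent work (e.g. a VAE decode) can actually allocate that memory.
    """
    model.to("cpu")
    if torch.cuda.is_available():
        # Without this call the memory stays reserved by the process even
        # though the model's tensors now live on the CPU.
        torch.cuda.empty_cache()
    return model
```

As the second message in the commit notes, one `empty_cache()` immediately before the VAE decode is enough; calling it repeatedly only adds synchronization overhead.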