Commit Graph

26 Commits

Author SHA1 Message Date
DeepBeepMeep 71697bd7c5 Added Fun InP models support 2025-03-27 16:49:05 +01:00
DeepBeepMeep 16b6fbacec Fixed Star zero bug 2025-03-26 19:07:21 +01:00
DeepBeepMeep 1c69191954 Fixed Sage test 2025-03-26 13:55:38 +01:00
DeepBeepMeep 826d5ac84f Added CFG Zero * 2025-03-26 00:47:10 +01:00
DeepBeepMeep d7fcce24c3 Fixed Sage detection support 2025-03-25 08:13:00 +01:00
DeepBeepMeep 1949b61a20 RAM optimizations and faster launch 2025-03-25 00:14:14 +01:00
DeepBeepMeep f2c5a06626 New Multitabs, Save Settings, End Frame 2025-03-24 01:00:52 +01:00
DeepBeepMeep 35071d4c95 Fixed bug with Sage2 sm86 architecture 2025-03-20 09:28:13 +01:00
DeepBeepMeep f2163e0984 This UI color is the right one + slightly reduced VRAM when using Sage2 attention 2025-03-19 23:33:18 +01:00
DeepBeepMeep a15060267a Lora festival part 2, new macros, new user interface 2025-03-17 23:43:34 +01:00
DeepBeepMeep e554e1a3d6 Lora fest + Skip Layer Guidance 2025-03-15 01:12:51 +01:00
DeepBeepMeep d233dd7ed9 Refactored Loras 2025-03-14 23:43:04 +01:00
Jimmy 936db03daa Add skip layer guidance 2025-03-14 00:15:45 -04:00
DeepBeepMeep f8d9edeb50 Added 10% boost, improved Loras and Teacache 2025-03-10 23:26:42 +01:00
DeepBeepMeep f9ce97a1ba Fixed pytorch compilation 2025-03-08 16:37:21 +01:00
DeepBeepMeep 28f19586a5 Fixed Flash attention 2025-03-05 23:45:45 +01:00
DeepBeepMeep 24d8beb490 Added support for multiple input images 2025-03-04 14:22:06 +01:00
DeepBeepMeep 697cc2cce5 Implemented VAE tiling 2025-03-04 02:39:44 +01:00
DeepBeepMeep ec1159bb59 Added TeaCache support 2025-03-03 18:41:33 +01:00
DeepBeepMeep 4b04e6971b Fixed problem with Sage1 2025-03-02 22:16:30 +01:00
DeepBeepMeep f3b365a5da RIFLEx support 2025-03-02 17:55:30 +01:00
DeepBeepMeep 7a8dcbf63d RIFLEx support 2025-03-02 17:48:57 +01:00
DeepBeepMeep 3731ab70e1 Added RIFLEx support 2025-03-02 16:46:52 +01:00
DeepBeepMeep 18940291d4 Beta version 2025-03-02 04:05:49 +01:00
Adrian Corduneanu 0e3c42a830 Update text2video.py to reduce GPU memory by emptying cache (#44) 2025-02-26 18:56:57 +08:00

* Update text2video.py to reduce GPU memory by emptying cache

  If offload_model is set, empty_cache() must be called after the model is moved to the CPU to actually free the GPU memory. I verified on an RTX 4090 that, without calling empty_cache(), the model remains in memory and the subsequent VAE decoding never finishes.

* Update text2video.py: only one empty_cache() is needed before the VAE decode.

(A minimal sketch of this offload pattern follows below.)
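The pattern described in this commit can be illustrated with a short, hypothetical PyTorch sketch; it is not the repository's actual text2video.py code, and the names model, vae, and latents are placeholders. The point it shows is that moving a model to the CPU only releases GPU memory back to the driver once torch.cuda.empty_cache() is called, which matters right before a memory-hungry VAE decode.

```python
import torch

def decode_with_offload(model, vae, latents, offload_model=True):
    # Hypothetical sketch of the offload pattern from commit 0e3c42a830.
    if offload_model:
        model.to("cpu")           # move the denoising model off the GPU
        torch.cuda.synchronize()  # wait for any pending GPU work to finish
        torch.cuda.empty_cache()  # release cached allocations back to the driver
    with torch.no_grad():
        # The VAE decode now runs with the VRAM actually freed.
        return vae.decode(latents)
```

Without the empty_cache() call, PyTorch keeps the freed blocks in its own caching allocator, so from the driver's point of view the VRAM is still occupied; calling it once, just before the decode, is sufficient.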
WanX-Video-1 65386b2e03 init upload 2025-02-25 22:07:47 +08:00