The latest FA3 release changed the return shape of its varlen attention function to be consistent with FA2. This PR fixes the FA3 attention call accordingly, as done in https://github.com/Wan-Video/Wan2.2/pull/64
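For reference, a minimal sketch of the kind of compatibility shim this amounts to, assuming FA3's `flash_attn_varlen_func` entry point from `flash_attn_interface`; the actual call site and argument list in Wan2.2's attention module may differ:

```python
# Minimal sketch (not the exact Wan2.2 code): normalize the FA3 varlen output
# so the caller always gets a single tensor, whichever FA3 version is installed.
from flash_attn_interface import flash_attn_varlen_func  # FlashAttention-3


def fa3_varlen_attention(q, k, v, cu_seqlens_q, cu_seqlens_k,
                         max_seqlen_q, max_seqlen_k, softmax_scale=None):
    out = flash_attn_varlen_func(
        q, k, v,
        cu_seqlens_q, cu_seqlens_k,
        max_seqlen_q, max_seqlen_k,
        softmax_scale=softmax_scale,
    )
    # Older FA3 builds return an (out, softmax_lse) tuple; the latest release
    # returns just the output tensor, matching FA2. Handle both shapes.
    if isinstance(out, tuple):
        out = out[0]
    return out
```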
* Update text2video.py to reduce GPU memory by emptying cache
If offload_model is set, empty_cache() must be called after the model is moved to the CPU to actually free the GPU memory. I verified on an RTX 4090 that without calling empty_cache() the model remains in GPU memory and the subsequent VAE decoding never finishes.
* Update text2video.py: only one empty_cache() call is needed before the VAE decode
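For context, a minimal sketch of the offload-then-decode flow these two commits describe, with a single `empty_cache()` before the VAE decode; the function and attribute names (`denoise_and_decode`, `vae.decode`, `offload_model`) are illustrative assumptions, not the exact code in text2video.py:

```python
# Minimal sketch (names are illustrative, not the exact text2video.py code):
# free the diffusion model's GPU memory before VAE decode when offloading.
import torch


def denoise_and_decode(model, vae, latents, offload_model=True):
    # ... the denoising loop using `model` on the GPU runs before this point ...
    if offload_model:
        model.cpu()
        # Moving the model to CPU alone does not release the GPU memory it was
        # holding; a single empty_cache() here, before the VAE decode, is
        # enough to actually free it.
        torch.cuda.empty_cache()
    videos = vae.decode(latents)
    return videos
```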