diff --git a/Custom Resolutions Instructions.txt b/Custom Resolutions Instructions.txt new file mode 100644 index 0000000..c11f25d --- /dev/null +++ b/Custom Resolutions Instructions.txt @@ -0,0 +1,16 @@ +You can override the list of resolutions offered by WanGP by creating a file "resolutions.json" in the main WanGP folder. +This file contains a list of two-element sublists. Each sublist should have the format ["Label", "WxH"], where W and H are respectively the width and height of the resolution. Please make sure that W and H are multiples of 16 and that the letter "x" is placed in between the two dimensions. + +Below is a sample "resolutions.json" file: + +[ + ["1280x720 (16:9, 720p)", "1280x720"], + ["720x1280 (9:16, 720p)", "720x1280"], + ["1024x1024 (1:1, 720p)", "1024x1024"], + ["1280x544 (21:9, 720p)", "1280x544"], + ["544x1280 (9:21, 720p)", "544x1280"], + ["1104x832 (4:3, 720p)", "1104x832"], + ["832x1104 (3:4, 720p)", "832x1104"], + ["960x960 (1:1, 720p)", "960x960"], + ["832x480 (16:9, 480p)", "832x480"] +] \ No newline at end of file diff --git a/README.md b/README.md index 46a7dd1..2301d87 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,7 @@ WanGP supports the Wan (and derived models), Hunyuan Video and LTV Video models - Very Fast on the latest GPUs - Easy to use Full Web based interface - Auto download of the required model adapted to your specific architecture -- Tools integrated to facilitate Video Generation : Mask Editor, Prompt Enhancer, Temporal and Spatial Generation +- Tools integrated to facilitate Video Generation : Mask Editor, Prompt Enhancer, Temporal and Spatial Generation, MMAudio, Vew - Loras Support to customize each model - Queuing system : make your shopping list of videos to generate and come back later @@ -20,6 +20,21 @@ WanGP supports the Wan (and derived models), Hunyuan Video and LTV Video models **Follow DeepBeepMeep on Twitter/X to get the Latest News**: https://x.com/deepbeepmeep ## 🔥 Latest Updates +### July 2 2025: WanGP v6.5, WanGP takes care of you: lots of quality-of-life features: +- View the properties (seed, resolution, length, most settings...) of past generations directly inside WanGP +- In one click, use the newly generated video as a Control Video or as a Source Video to be continued +- Manage multiple settings for the same model and switch between them using a dropdown box +- WanGP will keep the last generated videos in the Gallery and will remember the last model you used if you restart the app while keeping the Web page open + +Taking care of your life is not enough; you want new stuff to play with? +- MMAudio directly inside WanGP: add an audio soundtrack that matches the content of your video. This is a low-VRAM MMAudio, and 6 GB of VRAM should be sufficient +- Forgot to upsample your video during generation? Want to try another MMAudio variation? Fear not: you can also apply upsampling or add an MMAudio track once the video generation is done. Even better, you can ask WanGP for multiple MMAudio variations and pick the one you like best +- MagCache support: a new step-skipping approach, supposed to be better than TeaCache. It makes a difference if you usually generate with a high number of steps
+- SageAttention2++ support: not just compatibility but also slightly reduced VRAM usage +- Video2Video in Wan Text2Video: paradoxically, a text2video model becomes a video2video model if you start the denoising process later on an existing video +- FusioniX upsampler: an illustration of Video2Video in Text2Video. Use the FusioniX text2video model with an output resolution of 1080p and a denoising strength of 0.25 and you will get one of the best upsamplers (in only 2-3 steps, though you will need lots of VRAM). Increase the denoising strength and you will get one of the best video restorers +- Preliminary support for multiple Wan Samplers + ### June 23 2025: WanGP v6.3, Vace Unleashed. Thought we couldnt squeeze Vace even more ? - Multithreaded preprocessing when possible for faster generations - Multithreaded frames Lanczos Upsampling as a bonus diff --git a/docs/LORAS.md b/docs/LORAS.md index 82dded4..be53458 100644 --- a/docs/LORAS.md +++ b/docs/LORAS.md @@ -206,17 +206,38 @@ https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_T2V_14B_lightx2v_cfg ## Macro System (Advanced) -Create multiple prompts from templates using macros: +Create multiple prompts from templates using macros. This allows you to generate variations of a sentence by defining lists of values for different variables. + +**Syntax Rule:** + +Define your variables on a single line starting with `!`. Each complete variable definition, including its name and values, **must be separated by a colon (`:`)**. + +**Format:** ``` -! {Subject}="cat","woman","man", {Location}="forest","lake","city", {Possessive}="its","her","his" +! {Variable1}="valueA","valueB" : {Variable2}="valueC","valueD" +This is a template using {Variable1} and {Variable2}. +``` + +**Example:** + +The following macro will generate three distinct prompts by cycling through the values for each variable. + +**Macro Definition:** + +``` +! {Subject}="cat","woman","man" : {Location}="forest","lake","city" : {Possessive}="its","her","his" In the video, a {Subject} is presented. The {Subject} is in a {Location} and looks at {Possessive} watch. ``` -This generates: -1. "In the video, a cat is presented. The cat is in a forest and looks at its watch." -2. "In the video, a woman is presented. The woman is in a lake and looks at her watch." -3. "In the video, a man is presented. The man is in a city and looks at his watch." +**Generated Output:** + +``` +In the video, a cat is presented. The cat is in a forest and looks at its watch. +In the video, a woman is presented. The woman is in a lake and looks at her watch. +In the video, a man is presented. The man is in a city and looks at his watch. +```
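For readers who want to see the mechanics behind the example above, here is a minimal, hypothetical Python sketch of the expansion rule the documentation describes: the `!` line is split on `:` into variable definitions, the quoted values are collected per variable, and the template is instantiated once per value position (first values together, then second values, and so on). This is only an illustration of the documented behaviour, not WanGP's actual macro parser, and it ignores edge cases such as colons or braces inside values.

```python
import re

def expand_macro(macro_block: str) -> list[str]:
    """Expand a '!' macro line plus template into concrete prompts."""
    first_line, _, template = macro_block.partition("\n")
    assert first_line.startswith("!"), "a macro must start with '!'"

    variables: dict[str, list[str]] = {}
    for definition in first_line[1:].split(":"):        # one {Name}="v1","v2" per chunk
        name_part, _, values_part = definition.partition("=")
        name = name_part.strip().strip("{}")             # '{Subject}' -> 'Subject'
        variables[name] = re.findall(r'"([^"]*)"', values_part)

    count = min(len(values) for values in variables.values())   # one prompt per value position
    prompts = []
    for i in range(count):
        prompt = template
        for name, values in variables.items():
            prompt = prompt.replace("{" + name + "}", values[i])
        prompts.append(prompt.strip())
    return prompts

macro = (
    '! {Subject}="cat","woman","man" : {Location}="forest","lake","city" : {Possessive}="its","her","his"\n'
    'In the video, a {Subject} is presented. The {Subject} is in a {Location} and looks at {Possessive} watch.'
)
print("\n".join(expand_macro(macro)))   # prints the three prompts listed above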
+ + ## Troubleshooting diff --git a/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py b/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py index d55fb39..22f652e 100644 --- a/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py +++ b/hyvideo/diffusion/pipelines/pipeline_hunyuan_video.py @@ -949,11 +949,15 @@ class HunyuanVideoPipeline(DiffusionPipeline): # width = width or self.transformer.config.sample_size * self.vae_scale_factor # to deal with lora scaling and other possible forward hooks trans = self.transformer - if trans.enable_cache: - teacache_multiplier = trans.teacache_multiplier + if trans.enable_cache == "tea": + teacache_multiplier = trans.cache_multiplier trans.accumulated_rel_l1_distance = 0 trans.rel_l1_thresh = 0.1 if teacache_multiplier < 2 else 0.15 - # trans.cache_start_step = int(tea_cache_start_step_perc*num_inference_steps/100) + elif trans.enable_cache == "mag": + trans.compute_magcache_threshold(trans.cache_start_step, num_inference_steps, trans.cache_multiplier) + trans.accumulated_err, trans.accumulated_steps, trans.accumulated_ratio = 0, 0, 1.0 + else: + trans.enable_cache = None # 1. Check inputs. Raise error if not correct self.check_inputs( prompt, diff --git a/hyvideo/diffusion/pipelines/pipeline_hunyuan_video_audio.py b/hyvideo/diffusion/pipelines/pipeline_hunyuan_video_audio.py index b7d8751..191f9ab 100644 --- a/hyvideo/diffusion/pipelines/pipeline_hunyuan_video_audio.py +++ b/hyvideo/diffusion/pipelines/pipeline_hunyuan_video_audio.py @@ -934,10 +934,15 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline): transformer = self.transformer - if transformer.enable_cache: - teacache_multiplier = transformer.teacache_multiplier + if transformer.enable_cache == "tea": + teacache_multiplier = transformer.cache_multiplier transformer.accumulated_rel_l1_distance = 0 transformer.rel_l1_thresh = 0.1 if teacache_multiplier < 2 else 0.15 + elif transformer.enable_cache == "mag": + transformer.compute_magcache_threshold(transformer.cache_start_step, num_inference_steps, transformer.cache_multiplier) + transformer.accumulated_err, transformer.accumulated_steps, transformer.accumulated_ratio = 0, 0, 1.0 + else: + transformer.enable_cache = None # 1. Check inputs. Raise error if not correct self.check_inputs( @@ -1136,7 +1141,7 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline): if self._interrupt: return [None] - if transformer.enable_cache: + if transformer.enable_cache == "tea": cache_size = round( infer_length / frames_per_batch ) transformer.previous_residual = [None] * latent_items cache_all_previous_residual = [None] * latent_items @@ -1144,6 +1149,8 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline): cache_should_calc = [True] * cache_size cache_accumulated_rel_l1_distance = [0.]
* cache_size cache_teacache_skipped_steps = [0] * cache_size + elif transformer.enable_cache == "mag": + transformer.previous_residual = [None] * latent_items with self.progress_bar(total=num_inference_steps) as progress_bar: @@ -1180,7 +1187,7 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline): img_ref_len = (latent_model_input.shape[-1] // 2) * (latent_model_input.shape[-2] // 2) * ( 1) img_all_len = (latents_all.shape[-1] // 2) * (latents_all.shape[-2] // 2) * latents_all.shape[-3] - if transformer.enable_cache and cache_size > 1: + if transformer.enable_cache == "tea" and cache_size > 1: for l in range(latent_items): if cache_all_previous_residual[l] != None: bsz = cache_all_previous_residual[l].shape[0] @@ -1297,7 +1304,7 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline): pred_latents[:, :, p] += latents[:, :, iii] counter[:, :, p] += 1 - if transformer.enable_cache and cache_size > 1: + if transformer.enable_cache == "tea" and cache_size > 1: for l in range(latent_items): if transformer.previous_residual[l] != None: bsz = transformer.previous_residual[l].shape[0] diff --git a/hyvideo/modules/models.py b/hyvideo/modules/models.py index 0c5c130..a29e4bd 100644 --- a/hyvideo/modules/models.py +++ b/hyvideo/modules/models.py @@ -494,7 +494,7 @@ class MMSingleStreamBlock(nn.Module): class HYVideoDiffusionTransformer(ModelMixin, ConfigMixin): def preprocess_loras(self, model_type, sd): - if model_type != "i2v" : + if model_type != "hunyuan_i2v" : return sd new_sd = {} for k,v in sd.items(): @@ -797,6 +797,59 @@ class HYVideoDiffusionTransformer(ModelMixin, ConfigMixin): for block in self.single_blocks: block.disable_deterministic() + def compute_magcache_threshold(self, start_step, num_inference_steps = 0, speed_factor =0): + def nearest_interp(src_array, target_length): + src_length = len(src_array) + if target_length == 1: + return np.array([src_array[-1]]) + scale = (src_length - 1) / (target_length - 1) + mapped_indices = np.round(np.arange(target_length) * scale).astype(int) + return src_array[mapped_indices] + + if len(self.def_mag_ratios) != num_inference_steps: + self.mag_ratios = nearest_interp(self.def_mag_ratios, num_inference_steps) + else: + self.mag_ratios = self.def_mag_ratios + + best_deltas = None + best_threshold = 0.01 + best_diff = 1000 + best_signed_diff = 1000 + target_nb_steps= int(num_inference_steps / speed_factor) + threshold = 0.01 + while threshold <= 0.6: + nb_steps = 0 + diff = 1000 + accumulated_err, accumulated_steps, accumulated_ratio = 0, 0, 1.0 + for i in range(num_inference_steps): + if i<=start_step: + skip = False + else: + cur_mag_ratio = self.mag_ratios[i] # conditional and unconditional in one list + accumulated_ratio *= cur_mag_ratio # magnitude ratio between current step and the cached step + accumulated_steps += 1 # skip steps plus 1 + cur_skip_err = np.abs(1-accumulated_ratio) # skip error of current steps + accumulated_err += cur_skip_err # accumulated error of multiple steps + if accumulated_err best_diff: + break + threshold += 0.01 + self.magcache_thresh = best_threshold + print(f"Mag Cache, best threshold found:{best_threshold:0.2f} with gain x{num_inference_steps/(target_nb_steps - best_signed_diff):0.2f} for a target of x{speed_factor}") + return best_threshold + def forward( self, x: torch.Tensor, @@ -925,25 +978,38 @@ class HYVideoDiffusionTransformer(ModelMixin, ConfigMixin): if self.enable_cache: if x_id == 0: self.should_calc = True - inp = img[0:1] - vec_ = vec[0:1] - ( img_mod1_shift, img_mod1_scale, _ , _ , _ , _ , ) = 
self.double_blocks[0].img_mod(vec_).chunk(6, dim=-1) - normed_inp = self.double_blocks[0].img_norm1(inp) - normed_inp = normed_inp.to(torch.bfloat16) - modulated_inp = modulate( normed_inp, shift=img_mod1_shift, scale=img_mod1_scale ) - del normed_inp, img_mod1_shift, img_mod1_scale - if step_no <= self.cache_start_step or step_no == self.num_steps-1: - self.accumulated_rel_l1_distance = 0 - else: - coefficients = [7.33226126e+02, -4.01131952e+02, 6.75869174e+01, -3.14987800e+00, 9.61237896e-02] - rescale_func = np.poly1d(coefficients) - self.accumulated_rel_l1_distance += rescale_func(((modulated_inp-self.previous_modulated_input).abs().mean() / self.previous_modulated_input.abs().mean()).cpu().item()) - if self.accumulated_rel_l1_distance < self.rel_l1_thresh: - self.should_calc = False - self.teacache_skipped_steps += 1 - else: + if self.enable_cache == "mag": + if step_no > self.cache_start_step: + cur_mag_ratio = self.mag_ratios[step_no] + self.accumulated_ratio = self.accumulated_ratio*cur_mag_ratio + cur_skip_err = np.abs(1-self.accumulated_ratio) + self.accumulated_err += cur_skip_err + self.accumulated_steps += 1 + if self.accumulated_err<=self.magcache_thresh and self.accumulated_steps<=self.magcache_K: + self.should_calc = False + self.cache_skipped_steps += 1 + else: + self.accumulated_ratio, self.accumulated_steps, self.accumulated_err = 1.0, 0, 0 + else: + inp = img[0:1] + vec_ = vec[0:1] + ( img_mod1_shift, img_mod1_scale, _ , _ , _ , _ , ) = self.double_blocks[0].img_mod(vec_).chunk(6, dim=-1) + normed_inp = self.double_blocks[0].img_norm1(inp) + normed_inp = normed_inp.to(torch.bfloat16) + modulated_inp = modulate( normed_inp, shift=img_mod1_shift, scale=img_mod1_scale ) + del normed_inp, img_mod1_shift, img_mod1_scale + if step_no <= self.cache_start_step or step_no == self.num_steps-1: self.accumulated_rel_l1_distance = 0 - self.previous_modulated_input = modulated_inp + else: + coefficients = [7.33226126e+02, -4.01131952e+02, 6.75869174e+01, -3.14987800e+00, 9.61237896e-02] + rescale_func = np.poly1d(coefficients) + self.accumulated_rel_l1_distance += rescale_func(((modulated_inp-self.previous_modulated_input).abs().mean() / self.previous_modulated_input.abs().mean()).cpu().item()) + if self.accumulated_rel_l1_distance < self.rel_l1_thresh: + self.should_calc = False + self.cache_skipped_steps += 1 + else: + self.accumulated_rel_l1_distance = 0 + self.previous_modulated_input = modulated_inp else: self.should_calc = True diff --git a/i2v_inference.py b/i2v_inference.py index 7f345b9..f833868 100644 --- a/i2v_inference.py +++ b/i2v_inference.py @@ -584,10 +584,10 @@ def main(): # If teacache => reset counters if trans.enable_cache: trans.teacache_counter = 0 - trans.teacache_multiplier = args.teacache + trans.cache_multiplier = args.teacache trans.cache_start_step = int(args.teacache_start * args.steps / 100.0) trans.num_steps = args.steps - trans.teacache_skipped_steps = 0 + trans.cache_skipped_steps = 0 trans.previous_residual_uncond = None trans.previous_residual_cond = None diff --git a/postprocessing/mmaudio/__init__.py b/postprocessing/mmaudio/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/data/__init__.py b/postprocessing/mmaudio/data/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/data/av_utils.py b/postprocessing/mmaudio/data/av_utils.py new file mode 100644 index 0000000..9780ab0 --- /dev/null +++ b/postprocessing/mmaudio/data/av_utils.py @@ -0,0 +1,188 @@ +from dataclasses 
import dataclass +from fractions import Fraction +from pathlib import Path +from typing import Optional + +import av +import numpy as np +import torch +from av import AudioFrame + + +@dataclass +class VideoInfo: + duration_sec: float + fps: Fraction + clip_frames: torch.Tensor + sync_frames: torch.Tensor + all_frames: Optional[list[np.ndarray]] + + @property + def height(self): + return self.all_frames[0].shape[0] + + @property + def width(self): + return self.all_frames[0].shape[1] + + @classmethod + def from_image_info(cls, image_info: 'ImageInfo', duration_sec: float, + fps: Fraction) -> 'VideoInfo': + num_frames = int(duration_sec * fps) + all_frames = [image_info.original_frame] * num_frames + return cls(duration_sec=duration_sec, + fps=fps, + clip_frames=image_info.clip_frames, + sync_frames=image_info.sync_frames, + all_frames=all_frames) + + +@dataclass +class ImageInfo: + clip_frames: torch.Tensor + sync_frames: torch.Tensor + original_frame: Optional[np.ndarray] + + @property + def height(self): + return self.original_frame.shape[0] + + @property + def width(self): + return self.original_frame.shape[1] + + +def read_frames(video_path: Path, list_of_fps: list[float], start_sec: float, end_sec: float, + need_all_frames: bool) -> tuple[list[np.ndarray], list[np.ndarray], Fraction]: + output_frames = [[] for _ in list_of_fps] + next_frame_time_for_each_fps = [0.0 for _ in list_of_fps] + time_delta_for_each_fps = [1 / fps for fps in list_of_fps] + all_frames = [] + + # container = av.open(video_path) + with av.open(video_path) as container: + stream = container.streams.video[0] + fps = stream.guessed_rate + stream.thread_type = 'AUTO' + for packet in container.demux(stream): + for frame in packet.decode(): + frame_time = frame.time + if frame_time < start_sec: + continue + if frame_time > end_sec: + break + + frame_np = None + if need_all_frames: + frame_np = frame.to_ndarray(format='rgb24') + all_frames.append(frame_np) + + for i, _ in enumerate(list_of_fps): + this_time = frame_time + while this_time >= next_frame_time_for_each_fps[i]: + if frame_np is None: + frame_np = frame.to_ndarray(format='rgb24') + + output_frames[i].append(frame_np) + next_frame_time_for_each_fps[i] += time_delta_for_each_fps[i] + + output_frames = [np.stack(frames) for frames in output_frames] + return output_frames, all_frames, fps + + +def reencode_with_audio(video_info: VideoInfo, output_path: Path, audio: torch.Tensor, + sampling_rate: int): + container = av.open(output_path, 'w') + output_video_stream = container.add_stream('h264', video_info.fps) + output_video_stream.codec_context.bit_rate = 10 * 1e6 # 10 Mbps + output_video_stream.width = video_info.width + output_video_stream.height = video_info.height + output_video_stream.pix_fmt = 'yuv420p' + + output_audio_stream = container.add_stream('aac', sampling_rate) + + # encode video + for image in video_info.all_frames: + image = av.VideoFrame.from_ndarray(image) + packet = output_video_stream.encode(image) + container.mux(packet) + + for packet in output_video_stream.encode(): + container.mux(packet) + + # convert float tensor audio to numpy array + audio_np = audio.numpy().astype(np.float32) + audio_frame = AudioFrame.from_ndarray(audio_np, format='flt', layout='mono') + audio_frame.sample_rate = sampling_rate + + for packet in output_audio_stream.encode(audio_frame): + container.mux(packet) + + for packet in output_audio_stream.encode(): + container.mux(packet) + + container.close() + + + +import subprocess +import tempfile +from pathlib import 
Path +import torch + +def remux_with_audio(video_path: Path, output_path: Path, audio: torch.Tensor, sampling_rate: int): + """Remux video with new audio using FFmpeg.""" + with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f: + temp_path = Path(f.name) + + try: + # Write audio as WAV + import torchaudio + torchaudio.save(str(temp_path), audio.unsqueeze(0) if audio.dim() == 1 else audio, sampling_rate) + + # Remux with FFmpeg + subprocess.run([ + 'ffmpeg', '-i', str(video_path), '-i', str(temp_path), + '-c:v', 'copy', '-c:a', 'aac', '-map', '0:v', '-map', '1:a', + '-shortest', '-y', str(output_path) + ], check=True, capture_output=True) + + finally: + temp_path.unlink(missing_ok=True) + +def remux_with_audio_old(video_path: Path, audio: torch.Tensor, output_path: Path, sampling_rate: int): + """ + NOTE: I don't think we can get the exact video duration right without re-encoding + so we are not using this but keeping it here for reference + """ + video = av.open(video_path) + output = av.open(output_path, 'w') + input_video_stream = video.streams.video[0] + output_video_stream = output.add_stream(template=input_video_stream) + output_audio_stream = output.add_stream('aac', sampling_rate) + + duration_sec = audio.shape[-1] / sampling_rate + + for packet in video.demux(input_video_stream): + # We need to skip the "flushing" packets that `demux` generates. + if packet.dts is None: + continue + # We need to assign the packet to the new stream. + packet.stream = output_video_stream + output.mux(packet) + + # convert float tensor audio to numpy array + audio_np = audio.numpy().astype(np.float32) + audio_frame = av.AudioFrame.from_ndarray(audio_np, format='flt', layout='mono') + audio_frame.sample_rate = sampling_rate + + for packet in output_audio_stream.encode(audio_frame): + output.mux(packet) + + for packet in output_audio_stream.encode(): + output.mux(packet) + + video.close() + output.close() + + output.close() diff --git a/postprocessing/mmaudio/data/data_setup.py b/postprocessing/mmaudio/data/data_setup.py new file mode 100644 index 0000000..13c9c33 --- /dev/null +++ b/postprocessing/mmaudio/data/data_setup.py @@ -0,0 +1,174 @@ +import logging +import random + +import numpy as np +import torch +from omegaconf import DictConfig +from torch.utils.data import DataLoader, Dataset +from torch.utils.data.dataloader import default_collate +from torch.utils.data.distributed import DistributedSampler + +from .eval.audiocaps import AudioCapsData +from .eval.video_dataset import MovieGen, VGGSound +from .extracted_audio import ExtractedAudio +from .extracted_vgg import ExtractedVGG +from .mm_dataset import MultiModalDataset +from ..utils.dist_utils import local_rank + +log = logging.getLogger() + + +# Re-seed randomness every time we start a worker +def worker_init_fn(worker_id: int): + worker_seed = torch.initial_seed() % (2**31) + worker_id + local_rank * 1000 + np.random.seed(worker_seed) + random.seed(worker_seed) + log.debug(f'Worker {worker_id} re-seeded with seed {worker_seed} in rank {local_rank}') + + +def load_vgg_data(cfg: DictConfig, data_cfg: DictConfig) -> Dataset: + dataset = ExtractedVGG(tsv_path=data_cfg.tsv, + data_dim=cfg.data_dim, + premade_mmap_dir=data_cfg.memmap_dir) + + return dataset + + +def load_audio_data(cfg: DictConfig, data_cfg: DictConfig) -> Dataset: + dataset = ExtractedAudio(tsv_path=data_cfg.tsv, + data_dim=cfg.data_dim, + premade_mmap_dir=data_cfg.memmap_dir) + + return dataset + + +def setup_training_datasets(cfg: DictConfig) -> tuple[Dataset, 
DistributedSampler, DataLoader]: + if cfg.mini_train: + vgg = load_vgg_data(cfg, cfg.data.ExtractedVGG_val) + audiocaps = load_audio_data(cfg, cfg.data.AudioCaps) + dataset = MultiModalDataset([vgg], [audiocaps]) + if cfg.example_train: + video = load_vgg_data(cfg, cfg.data.Example_video) + audio = load_audio_data(cfg, cfg.data.Example_audio) + dataset = MultiModalDataset([video], [audio]) + else: + # load the largest one first + freesound = load_audio_data(cfg, cfg.data.FreeSound) + vgg = load_vgg_data(cfg, cfg.data.ExtractedVGG) + audiocaps = load_audio_data(cfg, cfg.data.AudioCaps) + audioset_sl = load_audio_data(cfg, cfg.data.AudioSetSL) + bbcsound = load_audio_data(cfg, cfg.data.BBCSound) + clotho = load_audio_data(cfg, cfg.data.Clotho) + dataset = MultiModalDataset([vgg] * cfg.vgg_oversample_rate, + [audiocaps, audioset_sl, bbcsound, freesound, clotho]) + + batch_size = cfg.batch_size + num_workers = cfg.num_workers + pin_memory = cfg.pin_memory + sampler, loader = construct_loader(dataset, + batch_size, + num_workers, + shuffle=True, + drop_last=True, + pin_memory=pin_memory) + + return dataset, sampler, loader + + +def setup_test_datasets(cfg): + dataset = load_vgg_data(cfg, cfg.data.ExtractedVGG_test) + + batch_size = cfg.batch_size + num_workers = cfg.num_workers + pin_memory = cfg.pin_memory + sampler, loader = construct_loader(dataset, + batch_size, + num_workers, + shuffle=False, + drop_last=False, + pin_memory=pin_memory) + + return dataset, sampler, loader + + +def setup_val_datasets(cfg: DictConfig) -> tuple[Dataset, DataLoader, DataLoader]: + if cfg.example_train: + dataset = load_vgg_data(cfg, cfg.data.Example_video) + else: + dataset = load_vgg_data(cfg, cfg.data.ExtractedVGG_val) + + val_batch_size = cfg.batch_size + val_eval_batch_size = cfg.eval_batch_size + num_workers = cfg.num_workers + pin_memory = cfg.pin_memory + _, val_loader = construct_loader(dataset, + val_batch_size, + num_workers, + shuffle=False, + drop_last=False, + pin_memory=pin_memory) + _, eval_loader = construct_loader(dataset, + val_eval_batch_size, + num_workers, + shuffle=False, + drop_last=False, + pin_memory=pin_memory) + + return dataset, val_loader, eval_loader + + +def setup_eval_dataset(dataset_name: str, cfg: DictConfig) -> tuple[Dataset, DataLoader]: + if dataset_name.startswith('audiocaps_full'): + dataset = AudioCapsData(cfg.eval_data.AudioCaps_full.audio_path, + cfg.eval_data.AudioCaps_full.csv_path) + elif dataset_name.startswith('audiocaps'): + dataset = AudioCapsData(cfg.eval_data.AudioCaps.audio_path, + cfg.eval_data.AudioCaps.csv_path) + elif dataset_name.startswith('moviegen'): + dataset = MovieGen(cfg.eval_data.MovieGen.video_path, + cfg.eval_data.MovieGen.jsonl_path, + duration_sec=cfg.duration_s) + elif dataset_name.startswith('vggsound'): + dataset = VGGSound(cfg.eval_data.VGGSound.video_path, + cfg.eval_data.VGGSound.csv_path, + duration_sec=cfg.duration_s) + else: + raise ValueError(f'Invalid dataset name: {dataset_name}') + + batch_size = cfg.batch_size + num_workers = cfg.num_workers + pin_memory = cfg.pin_memory + _, loader = construct_loader(dataset, + batch_size, + num_workers, + shuffle=False, + drop_last=False, + pin_memory=pin_memory, + error_avoidance=True) + return dataset, loader + + +def error_avoidance_collate(batch): + batch = list(filter(lambda x: x is not None, batch)) + return default_collate(batch) + + +def construct_loader(dataset: Dataset, + batch_size: int, + num_workers: int, + *, + shuffle: bool = True, + drop_last: bool = True, + pin_memory: bool = 
False, + error_avoidance: bool = False) -> tuple[DistributedSampler, DataLoader]: + train_sampler = DistributedSampler(dataset, rank=local_rank, shuffle=shuffle) + train_loader = DataLoader(dataset, + batch_size, + sampler=train_sampler, + num_workers=num_workers, + worker_init_fn=worker_init_fn, + drop_last=drop_last, + persistent_workers=num_workers > 0, + pin_memory=pin_memory, + collate_fn=error_avoidance_collate if error_avoidance else None) + return train_sampler, train_loader diff --git a/postprocessing/mmaudio/data/eval/__init__.py b/postprocessing/mmaudio/data/eval/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/data/eval/audiocaps.py b/postprocessing/mmaudio/data/eval/audiocaps.py new file mode 100644 index 0000000..35f4fd9 --- /dev/null +++ b/postprocessing/mmaudio/data/eval/audiocaps.py @@ -0,0 +1,39 @@ +import logging +import os +from collections import defaultdict +from pathlib import Path +from typing import Union + +import pandas as pd +import torch +from torch.utils.data.dataset import Dataset + +log = logging.getLogger() + + +class AudioCapsData(Dataset): + + def __init__(self, audio_path: Union[str, Path], csv_path: Union[str, Path]): + df = pd.read_csv(csv_path).to_dict(orient='records') + + audio_files = sorted(os.listdir(audio_path)) + audio_files = set( + [Path(f).stem for f in audio_files if f.endswith('.wav') or f.endswith('.flac')]) + + self.data = [] + for row in df: + self.data.append({ + 'name': row['name'], + 'caption': row['caption'], + }) + + self.audio_path = Path(audio_path) + self.csv_path = Path(csv_path) + + log.info(f'Found {len(self.data)} matching audio files in {self.audio_path}') + + def __getitem__(self, idx: int) -> torch.Tensor: + return self.data[idx] + + def __len__(self): + return len(self.data) diff --git a/postprocessing/mmaudio/data/eval/moviegen.py b/postprocessing/mmaudio/data/eval/moviegen.py new file mode 100644 index 0000000..a08f628 --- /dev/null +++ b/postprocessing/mmaudio/data/eval/moviegen.py @@ -0,0 +1,131 @@ +import json +import logging +import os +from pathlib import Path +from typing import Union + +import torch +from torch.utils.data.dataset import Dataset +from torchvision.transforms import v2 +from torio.io import StreamingMediaDecoder + +from ...utils.dist_utils import local_rank + +log = logging.getLogger() + +_CLIP_SIZE = 384 +_CLIP_FPS = 8.0 + +_SYNC_SIZE = 224 +_SYNC_FPS = 25.0 + + +class MovieGenData(Dataset): + + def __init__( + self, + video_root: Union[str, Path], + sync_root: Union[str, Path], + jsonl_root: Union[str, Path], + *, + duration_sec: float = 10.0, + read_clip: bool = True, + ): + self.video_root = Path(video_root) + self.sync_root = Path(sync_root) + self.jsonl_root = Path(jsonl_root) + self.read_clip = read_clip + + videos = sorted(os.listdir(self.video_root)) + videos = [v[:-4] for v in videos] # remove extensions + self.captions = {} + + for v in videos: + with open(self.jsonl_root / (v + '.jsonl')) as f: + data = json.load(f) + self.captions[v] = data['audio_prompt'] + + if local_rank == 0: + log.info(f'{len(videos)} videos found in {video_root}') + + self.duration_sec = duration_sec + + self.clip_expected_length = int(_CLIP_FPS * self.duration_sec) + self.sync_expected_length = int(_SYNC_FPS * self.duration_sec) + + self.clip_augment = v2.Compose([ + v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + ]) + + self.sync_augment = v2.Compose([ + v2.Resize((_SYNC_SIZE, 
_SYNC_SIZE), interpolation=v2.InterpolationMode.BICUBIC), + v2.CenterCrop(_SYNC_SIZE), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + ]) + + self.videos = videos + + def sample(self, idx: int) -> dict[str, torch.Tensor]: + video_id = self.videos[idx] + caption = self.captions[video_id] + + reader = StreamingMediaDecoder(self.video_root / (video_id + '.mp4')) + reader.add_basic_video_stream( + frames_per_chunk=int(_CLIP_FPS * self.duration_sec), + frame_rate=_CLIP_FPS, + format='rgb24', + ) + reader.add_basic_video_stream( + frames_per_chunk=int(_SYNC_FPS * self.duration_sec), + frame_rate=_SYNC_FPS, + format='rgb24', + ) + + reader.fill_buffer() + data_chunk = reader.pop_chunks() + + clip_chunk = data_chunk[0] + sync_chunk = data_chunk[1] + if clip_chunk is None: + raise RuntimeError(f'CLIP video returned None {video_id}') + if clip_chunk.shape[0] < self.clip_expected_length: + raise RuntimeError(f'CLIP video too short {video_id}') + + if sync_chunk is None: + raise RuntimeError(f'Sync video returned None {video_id}') + if sync_chunk.shape[0] < self.sync_expected_length: + raise RuntimeError(f'Sync video too short {video_id}') + + # truncate the video + clip_chunk = clip_chunk[:self.clip_expected_length] + if clip_chunk.shape[0] != self.clip_expected_length: + raise RuntimeError(f'CLIP video wrong length {video_id}, ' + f'expected {self.clip_expected_length}, ' + f'got {clip_chunk.shape[0]}') + clip_chunk = self.clip_augment(clip_chunk) + + sync_chunk = sync_chunk[:self.sync_expected_length] + if sync_chunk.shape[0] != self.sync_expected_length: + raise RuntimeError(f'Sync video wrong length {video_id}, ' + f'expected {self.sync_expected_length}, ' + f'got {sync_chunk.shape[0]}') + sync_chunk = self.sync_augment(sync_chunk) + + data = { + 'name': video_id, + 'caption': caption, + 'clip_video': clip_chunk, + 'sync_video': sync_chunk, + } + + return data + + def __getitem__(self, idx: int) -> dict[str, torch.Tensor]: + return self.sample(idx) + + def __len__(self): + return len(self.captions) diff --git a/postprocessing/mmaudio/data/eval/video_dataset.py b/postprocessing/mmaudio/data/eval/video_dataset.py new file mode 100644 index 0000000..7c30fcd --- /dev/null +++ b/postprocessing/mmaudio/data/eval/video_dataset.py @@ -0,0 +1,197 @@ +import json +import logging +import os +from pathlib import Path +from typing import Union + +import pandas as pd +import torch +from torch.utils.data.dataset import Dataset +from torchvision.transforms import v2 +from torio.io import StreamingMediaDecoder + +from ...utils.dist_utils import local_rank + +log = logging.getLogger() + +_CLIP_SIZE = 384 +_CLIP_FPS = 8.0 + +_SYNC_SIZE = 224 +_SYNC_FPS = 25.0 + + +class VideoDataset(Dataset): + + def __init__( + self, + video_root: Union[str, Path], + *, + duration_sec: float = 8.0, + ): + self.video_root = Path(video_root) + + self.duration_sec = duration_sec + + self.clip_expected_length = int(_CLIP_FPS * self.duration_sec) + self.sync_expected_length = int(_SYNC_FPS * self.duration_sec) + + self.clip_transform = v2.Compose([ + v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + ]) + + self.sync_transform = v2.Compose([ + v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC), + v2.CenterCrop(_SYNC_SIZE), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + ]) + + # to be implemented 
by subclasses + self.captions = {} + self.videos = sorted(list(self.captions.keys())) + + def sample(self, idx: int) -> dict[str, torch.Tensor]: + video_id = self.videos[idx] + caption = self.captions[video_id] + + reader = StreamingMediaDecoder(self.video_root / (video_id + '.mp4')) + reader.add_basic_video_stream( + frames_per_chunk=int(_CLIP_FPS * self.duration_sec), + frame_rate=_CLIP_FPS, + format='rgb24', + ) + reader.add_basic_video_stream( + frames_per_chunk=int(_SYNC_FPS * self.duration_sec), + frame_rate=_SYNC_FPS, + format='rgb24', + ) + + reader.fill_buffer() + data_chunk = reader.pop_chunks() + + clip_chunk = data_chunk[0] + sync_chunk = data_chunk[1] + if clip_chunk is None: + raise RuntimeError(f'CLIP video returned None {video_id}') + if clip_chunk.shape[0] < self.clip_expected_length: + raise RuntimeError( + f'CLIP video too short {video_id}, expected {self.clip_expected_length}, got {clip_chunk.shape[0]}' + ) + + if sync_chunk is None: + raise RuntimeError(f'Sync video returned None {video_id}') + if sync_chunk.shape[0] < self.sync_expected_length: + raise RuntimeError( + f'Sync video too short {video_id}, expected {self.sync_expected_length}, got {sync_chunk.shape[0]}' + ) + + # truncate the video + clip_chunk = clip_chunk[:self.clip_expected_length] + if clip_chunk.shape[0] != self.clip_expected_length: + raise RuntimeError(f'CLIP video wrong length {video_id}, ' + f'expected {self.clip_expected_length}, ' + f'got {clip_chunk.shape[0]}') + clip_chunk = self.clip_transform(clip_chunk) + + sync_chunk = sync_chunk[:self.sync_expected_length] + if sync_chunk.shape[0] != self.sync_expected_length: + raise RuntimeError(f'Sync video wrong length {video_id}, ' + f'expected {self.sync_expected_length}, ' + f'got {sync_chunk.shape[0]}') + sync_chunk = self.sync_transform(sync_chunk) + + data = { + 'name': video_id, + 'caption': caption, + 'clip_video': clip_chunk, + 'sync_video': sync_chunk, + } + + return data + + def __getitem__(self, idx: int) -> dict[str, torch.Tensor]: + try: + return self.sample(idx) + except Exception as e: + log.error(f'Error loading video {self.videos[idx]}: {e}') + return None + + def __len__(self): + return len(self.captions) + + +class VGGSound(VideoDataset): + + def __init__( + self, + video_root: Union[str, Path], + csv_path: Union[str, Path], + *, + duration_sec: float = 8.0, + ): + super().__init__(video_root, duration_sec=duration_sec) + self.video_root = Path(video_root) + self.csv_path = Path(csv_path) + + videos = sorted(os.listdir(self.video_root)) + if local_rank == 0: + log.info(f'{len(videos)} videos found in {video_root}') + self.captions = {} + + df = pd.read_csv(csv_path, header=None, names=['id', 'sec', 'caption', + 'split']).to_dict(orient='records') + + videos_no_found = [] + for row in df: + if row['split'] == 'test': + start_sec = int(row['sec']) + video_id = str(row['id']) + # this is how our videos are named + video_name = f'{video_id}_{start_sec:06d}' + if video_name + '.mp4' not in videos: + videos_no_found.append(video_name) + continue + + self.captions[video_name] = row['caption'] + + if local_rank == 0: + log.info(f'{len(videos)} videos found in {video_root}') + log.info(f'{len(self.captions)} useable videos found') + if videos_no_found: + log.info(f'{len(videos_no_found)} found in {csv_path} but not in {video_root}') + log.info( + 'A small amount is expected, as not all videos are still available on YouTube') + + self.videos = sorted(list(self.captions.keys())) + + +class MovieGen(VideoDataset): + + def __init__( + self, + 
video_root: Union[str, Path], + jsonl_root: Union[str, Path], + *, + duration_sec: float = 10.0, + ): + super().__init__(video_root, duration_sec=duration_sec) + self.video_root = Path(video_root) + self.jsonl_root = Path(jsonl_root) + + videos = sorted(os.listdir(self.video_root)) + videos = [v[:-4] for v in videos] # remove extensions + self.captions = {} + + for v in videos: + with open(self.jsonl_root / (v + '.jsonl')) as f: + data = json.load(f) + self.captions[v] = data['audio_prompt'] + + if local_rank == 0: + log.info(f'{len(videos)} videos found in {video_root}') + + self.videos = videos diff --git a/postprocessing/mmaudio/data/extracted_audio.py b/postprocessing/mmaudio/data/extracted_audio.py new file mode 100644 index 0000000..7e92e81 --- /dev/null +++ b/postprocessing/mmaudio/data/extracted_audio.py @@ -0,0 +1,88 @@ +import logging +from pathlib import Path +from typing import Union + +import pandas as pd +import torch +from tensordict import TensorDict +from torch.utils.data.dataset import Dataset + +from ..utils.dist_utils import local_rank + +log = logging.getLogger() + + +class ExtractedAudio(Dataset): + + def __init__( + self, + tsv_path: Union[str, Path], + *, + premade_mmap_dir: Union[str, Path], + data_dim: dict[str, int], + ): + super().__init__() + + self.data_dim = data_dim + self.df_list = pd.read_csv(tsv_path, sep='\t').to_dict('records') + self.ids = [str(d['id']) for d in self.df_list] + + log.info(f'Loading precomputed mmap from {premade_mmap_dir}') + # load precomputed memory mapped tensors + premade_mmap_dir = Path(premade_mmap_dir) + td = TensorDict.load_memmap(premade_mmap_dir) + log.info(f'Loaded precomputed mmap from {premade_mmap_dir}') + self.mean = td['mean'] + self.std = td['std'] + self.text_features = td['text_features'] + + log.info(f'Loaded {len(self)} samples from {premade_mmap_dir}.') + log.info(f'Loaded mean: {self.mean.shape}.') + log.info(f'Loaded std: {self.std.shape}.') + log.info(f'Loaded text features: {self.text_features.shape}.') + + assert self.mean.shape[1] == self.data_dim['latent_seq_len'], \ + f'{self.mean.shape[1]} != {self.data_dim["latent_seq_len"]}' + assert self.std.shape[1] == self.data_dim['latent_seq_len'], \ + f'{self.std.shape[1]} != {self.data_dim["latent_seq_len"]}' + + assert self.text_features.shape[1] == self.data_dim['text_seq_len'], \ + f'{self.text_features.shape[1]} != {self.data_dim["text_seq_len"]}' + assert self.text_features.shape[-1] == self.data_dim['text_dim'], \ + f'{self.text_features.shape[-1]} != {self.data_dim["text_dim"]}' + + self.fake_clip_features = torch.zeros(self.data_dim['clip_seq_len'], + self.data_dim['clip_dim']) + self.fake_sync_features = torch.zeros(self.data_dim['sync_seq_len'], + self.data_dim['sync_dim']) + self.video_exist = torch.tensor(0, dtype=torch.bool) + self.text_exist = torch.tensor(1, dtype=torch.bool) + + def compute_latent_stats(self) -> tuple[torch.Tensor, torch.Tensor]: + latents = self.mean + return latents.mean(dim=(0, 1)), latents.std(dim=(0, 1)) + + def get_memory_mapped_tensor(self) -> TensorDict: + td = TensorDict({ + 'mean': self.mean, + 'std': self.std, + 'text_features': self.text_features, + }) + return td + + def __getitem__(self, idx: int) -> dict[str, torch.Tensor]: + data = { + 'id': str(self.df_list[idx]['id']), + 'a_mean': self.mean[idx], + 'a_std': self.std[idx], + 'clip_features': self.fake_clip_features, + 'sync_features': self.fake_sync_features, + 'text_features': self.text_features[idx], + 'caption': self.df_list[idx]['caption'], + 'video_exist': 
self.video_exist, + 'text_exist': self.text_exist, + } + return data + + def __len__(self): + return len(self.ids) diff --git a/postprocessing/mmaudio/data/extracted_vgg.py b/postprocessing/mmaudio/data/extracted_vgg.py new file mode 100644 index 0000000..cfa8612 --- /dev/null +++ b/postprocessing/mmaudio/data/extracted_vgg.py @@ -0,0 +1,101 @@ +import logging +from pathlib import Path +from typing import Union + +import pandas as pd +import torch +from tensordict import TensorDict +from torch.utils.data.dataset import Dataset + +from ..utils.dist_utils import local_rank + +log = logging.getLogger() + + +class ExtractedVGG(Dataset): + + def __init__( + self, + tsv_path: Union[str, Path], + *, + premade_mmap_dir: Union[str, Path], + data_dim: dict[str, int], + ): + super().__init__() + + self.data_dim = data_dim + self.df_list = pd.read_csv(tsv_path, sep='\t').to_dict('records') + self.ids = [d['id'] for d in self.df_list] + + log.info(f'Loading precomputed mmap from {premade_mmap_dir}') + # load precomputed memory mapped tensors + premade_mmap_dir = Path(premade_mmap_dir) + td = TensorDict.load_memmap(premade_mmap_dir) + log.info(f'Loaded precomputed mmap from {premade_mmap_dir}') + self.mean = td['mean'] + self.std = td['std'] + self.clip_features = td['clip_features'] + self.sync_features = td['sync_features'] + self.text_features = td['text_features'] + + if local_rank == 0: + log.info(f'Loaded {len(self)} samples.') + log.info(f'Loaded mean: {self.mean.shape}.') + log.info(f'Loaded std: {self.std.shape}.') + log.info(f'Loaded clip_features: {self.clip_features.shape}.') + log.info(f'Loaded sync_features: {self.sync_features.shape}.') + log.info(f'Loaded text_features: {self.text_features.shape}.') + + assert self.mean.shape[1] == self.data_dim['latent_seq_len'], \ + f'{self.mean.shape[1]} != {self.data_dim["latent_seq_len"]}' + assert self.std.shape[1] == self.data_dim['latent_seq_len'], \ + f'{self.std.shape[1]} != {self.data_dim["latent_seq_len"]}' + + assert self.clip_features.shape[1] == self.data_dim['clip_seq_len'], \ + f'{self.clip_features.shape[1]} != {self.data_dim["clip_seq_len"]}' + assert self.sync_features.shape[1] == self.data_dim['sync_seq_len'], \ + f'{self.sync_features.shape[1]} != {self.data_dim["sync_seq_len"]}' + assert self.text_features.shape[1] == self.data_dim['text_seq_len'], \ + f'{self.text_features.shape[1]} != {self.data_dim["text_seq_len"]}' + + assert self.clip_features.shape[-1] == self.data_dim['clip_dim'], \ + f'{self.clip_features.shape[-1]} != {self.data_dim["clip_dim"]}' + assert self.sync_features.shape[-1] == self.data_dim['sync_dim'], \ + f'{self.sync_features.shape[-1]} != {self.data_dim["sync_dim"]}' + assert self.text_features.shape[-1] == self.data_dim['text_dim'], \ + f'{self.text_features.shape[-1]} != {self.data_dim["text_dim"]}' + + self.video_exist = torch.tensor(1, dtype=torch.bool) + self.text_exist = torch.tensor(1, dtype=torch.bool) + + def compute_latent_stats(self) -> tuple[torch.Tensor, torch.Tensor]: + latents = self.mean + return latents.mean(dim=(0, 1)), latents.std(dim=(0, 1)) + + def get_memory_mapped_tensor(self) -> TensorDict: + td = TensorDict({ + 'mean': self.mean, + 'std': self.std, + 'clip_features': self.clip_features, + 'sync_features': self.sync_features, + 'text_features': self.text_features, + }) + return td + + def __getitem__(self, idx: int) -> dict[str, torch.Tensor]: + data = { + 'id': self.df_list[idx]['id'], + 'a_mean': self.mean[idx], + 'a_std': self.std[idx], + 'clip_features': self.clip_features[idx], + 
'sync_features': self.sync_features[idx], + 'text_features': self.text_features[idx], + 'caption': self.df_list[idx]['label'], + 'video_exist': self.video_exist, + 'text_exist': self.text_exist, + } + + return data + + def __len__(self): + return len(self.ids) diff --git a/postprocessing/mmaudio/data/extraction/__init__.py b/postprocessing/mmaudio/data/extraction/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/data/extraction/vgg_sound.py b/postprocessing/mmaudio/data/extraction/vgg_sound.py new file mode 100644 index 0000000..1ac43cb --- /dev/null +++ b/postprocessing/mmaudio/data/extraction/vgg_sound.py @@ -0,0 +1,193 @@ +import logging +import os +from pathlib import Path +from typing import Optional, Union + +import pandas as pd +import torch +import torchaudio +from torch.utils.data.dataset import Dataset +from torchvision.transforms import v2 +from torio.io import StreamingMediaDecoder + +from ...utils.dist_utils import local_rank + +log = logging.getLogger() + +_CLIP_SIZE = 384 +_CLIP_FPS = 8.0 + +_SYNC_SIZE = 224 +_SYNC_FPS = 25.0 + + +class VGGSound(Dataset): + + def __init__( + self, + root: Union[str, Path], + *, + tsv_path: Union[str, Path] = 'sets/vgg3-train.tsv', + sample_rate: int = 16_000, + duration_sec: float = 8.0, + audio_samples: Optional[int] = None, + normalize_audio: bool = False, + ): + self.root = Path(root) + self.normalize_audio = normalize_audio + if audio_samples is None: + self.audio_samples = int(sample_rate * duration_sec) + else: + self.audio_samples = audio_samples + effective_duration = audio_samples / sample_rate + # make sure the duration is close enough, within 15ms + assert abs(effective_duration - duration_sec) < 0.015, \ + f'audio_samples {audio_samples} does not match duration_sec {duration_sec}' + + videos = sorted(os.listdir(self.root)) + videos = set([Path(v).stem for v in videos]) # remove extensions + self.labels = {} + self.videos = [] + missing_videos = [] + + # read the tsv for subset information + df_list = pd.read_csv(tsv_path, sep='\t', dtype={'id': str}).to_dict('records') + for record in df_list: + id = record['id'] + label = record['label'] + if id in videos: + self.labels[id] = label + self.videos.append(id) + else: + missing_videos.append(id) + + if local_rank == 0: + log.info(f'{len(videos)} videos found in {root}') + log.info(f'{len(self.videos)} videos found in {tsv_path}') + log.info(f'{len(missing_videos)} videos missing in {root}') + + self.sample_rate = sample_rate + self.duration_sec = duration_sec + + self.expected_audio_length = audio_samples + self.clip_expected_length = int(_CLIP_FPS * self.duration_sec) + self.sync_expected_length = int(_SYNC_FPS * self.duration_sec) + + self.clip_transform = v2.Compose([ + v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + ]) + + self.sync_transform = v2.Compose([ + v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC), + v2.CenterCrop(_SYNC_SIZE), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + ]) + + self.resampler = {} + + def sample(self, idx: int) -> dict[str, torch.Tensor]: + video_id = self.videos[idx] + label = self.labels[video_id] + + reader = StreamingMediaDecoder(self.root / (video_id + '.mp4')) + reader.add_basic_video_stream( + frames_per_chunk=int(_CLIP_FPS * self.duration_sec), + frame_rate=_CLIP_FPS, + format='rgb24', + ) + reader.add_basic_video_stream( + 
frames_per_chunk=int(_SYNC_FPS * self.duration_sec), + frame_rate=_SYNC_FPS, + format='rgb24', + ) + reader.add_basic_audio_stream(frames_per_chunk=2**30, ) + + reader.fill_buffer() + data_chunk = reader.pop_chunks() + + clip_chunk = data_chunk[0] + sync_chunk = data_chunk[1] + audio_chunk = data_chunk[2] + + if clip_chunk is None: + raise RuntimeError(f'CLIP video returned None {video_id}') + if clip_chunk.shape[0] < self.clip_expected_length: + raise RuntimeError( + f'CLIP video too short {video_id}, expected {self.clip_expected_length}, got {clip_chunk.shape[0]}' + ) + + if sync_chunk is None: + raise RuntimeError(f'Sync video returned None {video_id}') + if sync_chunk.shape[0] < self.sync_expected_length: + raise RuntimeError( + f'Sync video too short {video_id}, expected {self.sync_expected_length}, got {sync_chunk.shape[0]}' + ) + + # process audio + sample_rate = int(reader.get_out_stream_info(2).sample_rate) + audio_chunk = audio_chunk.transpose(0, 1) + audio_chunk = audio_chunk.mean(dim=0) # mono + if self.normalize_audio: + abs_max = audio_chunk.abs().max() + audio_chunk = audio_chunk / abs_max * 0.95 + if abs_max <= 1e-6: + raise RuntimeError(f'Audio is silent {video_id}') + + # resample + if sample_rate == self.sample_rate: + audio_chunk = audio_chunk + else: + if sample_rate not in self.resampler: + # https://pytorch.org/audio/stable/tutorials/audio_resampling_tutorial.html#kaiser-best + self.resampler[sample_rate] = torchaudio.transforms.Resample( + sample_rate, + self.sample_rate, + lowpass_filter_width=64, + rolloff=0.9475937167399596, + resampling_method='sinc_interp_kaiser', + beta=14.769656459379492, + ) + audio_chunk = self.resampler[sample_rate](audio_chunk) + + if audio_chunk.shape[0] < self.expected_audio_length: + raise RuntimeError(f'Audio too short {video_id}') + audio_chunk = audio_chunk[:self.expected_audio_length] + + # truncate the video + clip_chunk = clip_chunk[:self.clip_expected_length] + if clip_chunk.shape[0] != self.clip_expected_length: + raise RuntimeError(f'CLIP video wrong length {video_id}, ' + f'expected {self.clip_expected_length}, ' + f'got {clip_chunk.shape[0]}') + clip_chunk = self.clip_transform(clip_chunk) + + sync_chunk = sync_chunk[:self.sync_expected_length] + if sync_chunk.shape[0] != self.sync_expected_length: + raise RuntimeError(f'Sync video wrong length {video_id}, ' + f'expected {self.sync_expected_length}, ' + f'got {sync_chunk.shape[0]}') + sync_chunk = self.sync_transform(sync_chunk) + + data = { + 'id': video_id, + 'caption': label, + 'audio': audio_chunk, + 'clip_video': clip_chunk, + 'sync_video': sync_chunk, + } + + return data + + def __getitem__(self, idx: int) -> dict[str, torch.Tensor]: + try: + return self.sample(idx) + except Exception as e: + log.error(f'Error loading video {self.videos[idx]}: {e}') + return None + + def __len__(self): + return len(self.labels) diff --git a/postprocessing/mmaudio/data/extraction/wav_dataset.py b/postprocessing/mmaudio/data/extraction/wav_dataset.py new file mode 100644 index 0000000..95bfbb3 --- /dev/null +++ b/postprocessing/mmaudio/data/extraction/wav_dataset.py @@ -0,0 +1,132 @@ +import logging +import os +from pathlib import Path +from typing import Union + +import open_clip +import pandas as pd +import torch +import torchaudio +from torch.utils.data.dataset import Dataset + +log = logging.getLogger() + + +class WavTextClipsDataset(Dataset): + + def __init__( + self, + root: Union[str, Path], + *, + captions_tsv: Union[str, Path], + clips_tsv: Union[str, Path], + sample_rate: int, 
+ num_samples: int, + normalize_audio: bool = False, + reject_silent: bool = False, + tokenizer_id: str = 'ViT-H-14-378-quickgelu', + ): + self.root = Path(root) + self.sample_rate = sample_rate + self.num_samples = num_samples + self.normalize_audio = normalize_audio + self.reject_silent = reject_silent + self.tokenizer = open_clip.get_tokenizer(tokenizer_id) + + audios = sorted(os.listdir(self.root)) + audios = set([ + Path(audio).stem for audio in audios + if audio.endswith('.wav') or audio.endswith('.flac') + ]) + self.captions = {} + + # read the caption tsv + df_list = pd.read_csv(captions_tsv, sep='\t', dtype={'id': str}).to_dict('records') + for record in df_list: + id = record['id'] + caption = record['caption'] + self.captions[id] = caption + + # read the clip tsv + df_list = pd.read_csv(clips_tsv, sep='\t', dtype={ + 'id': str, + 'name': str + }).to_dict('records') + self.clips = [] + for record in df_list: + record['id'] = record['id'] + record['name'] = record['name'] + id = record['id'] + name = record['name'] + if name not in self.captions: + log.warning(f'Audio {name} not found in {captions_tsv}') + continue + record['caption'] = self.captions[name] + self.clips.append(record) + + log.info(f'Found {len(self.clips)} audio files in {self.root}') + + self.resampler = {} + + def __getitem__(self, idx: int) -> torch.Tensor: + try: + clip = self.clips[idx] + audio_name = clip['name'] + audio_id = clip['id'] + caption = clip['caption'] + start_sample = clip['start_sample'] + end_sample = clip['end_sample'] + + audio_path = self.root / f'{audio_name}.flac' + if not audio_path.exists(): + audio_path = self.root / f'{audio_name}.wav' + assert audio_path.exists() + + audio_chunk, sample_rate = torchaudio.load(audio_path) + audio_chunk = audio_chunk.mean(dim=0) # mono + abs_max = audio_chunk.abs().max() + if self.normalize_audio: + audio_chunk = audio_chunk / abs_max * 0.95 + + if self.reject_silent and abs_max < 1e-6: + log.warning(f'Rejecting silent audio') + return None + + audio_chunk = audio_chunk[start_sample:end_sample] + + # resample + if sample_rate == self.sample_rate: + audio_chunk = audio_chunk + else: + if sample_rate not in self.resampler: + # https://pytorch.org/audio/stable/tutorials/audio_resampling_tutorial.html#kaiser-best + self.resampler[sample_rate] = torchaudio.transforms.Resample( + sample_rate, + self.sample_rate, + lowpass_filter_width=64, + rolloff=0.9475937167399596, + resampling_method='sinc_interp_kaiser', + beta=14.769656459379492, + ) + audio_chunk = self.resampler[sample_rate](audio_chunk) + + if audio_chunk.shape[0] < self.num_samples: + raise ValueError('Audio is too short') + audio_chunk = audio_chunk[:self.num_samples] + + tokens = self.tokenizer([caption])[0] + + output = { + 'waveform': audio_chunk, + 'id': audio_id, + 'caption': caption, + 'tokens': tokens, + } + + return output + except Exception as e: + log.error(f'Error reading {audio_path}: {e}') + return None + + def __len__(self): + return len(self.clips) diff --git a/postprocessing/mmaudio/data/mm_dataset.py b/postprocessing/mmaudio/data/mm_dataset.py new file mode 100644 index 0000000..a9c7d3d --- /dev/null +++ b/postprocessing/mmaudio/data/mm_dataset.py @@ -0,0 +1,45 @@ +import bisect + +import torch +from torch.utils.data.dataset import Dataset + + +# modified from https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#ConcatDataset +class MultiModalDataset(Dataset): + datasets: list[Dataset] + cumulative_sizes: list[int] + + @staticmethod + def cumsum(sequence): + r, s 
= [], 0 + for e in sequence: + l = len(e) + r.append(l + s) + s += l + return r + + def __init__(self, video_datasets: list[Dataset], audio_datasets: list[Dataset]): + super().__init__() + self.video_datasets = list(video_datasets) + self.audio_datasets = list(audio_datasets) + self.datasets = self.video_datasets + self.audio_datasets + + self.cumulative_sizes = self.cumsum(self.datasets) + + def __len__(self): + return self.cumulative_sizes[-1] + + def __getitem__(self, idx): + if idx < 0: + if -idx > len(self): + raise ValueError("absolute value of index should not exceed dataset length") + idx = len(self) + idx + dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) + if dataset_idx == 0: + sample_idx = idx + else: + sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] + return self.datasets[dataset_idx][sample_idx] + + def compute_latent_stats(self) -> tuple[torch.Tensor, torch.Tensor]: + return self.video_datasets[0].compute_latent_stats() diff --git a/postprocessing/mmaudio/data/utils.py b/postprocessing/mmaudio/data/utils.py new file mode 100644 index 0000000..ad6be1a --- /dev/null +++ b/postprocessing/mmaudio/data/utils.py @@ -0,0 +1,148 @@ +import logging +import os +import random +import tempfile +from pathlib import Path +from typing import Any, Optional, Union + +import torch +import torch.distributed as dist +from tensordict import MemoryMappedTensor +from torch.utils.data import DataLoader +from torch.utils.data.dataset import Dataset +from tqdm import tqdm + +from ..utils.dist_utils import local_rank, world_size + +scratch_path = Path(os.environ['SLURM_SCRATCH'] if 'SLURM_SCRATCH' in os.environ else '/dev/shm') +shm_path = Path('/dev/shm') + +log = logging.getLogger() + + +def reseed(seed): + random.seed(seed) + torch.manual_seed(seed) + + +def local_scatter_torch(obj: Optional[Any]): + if world_size == 1: + # Just one worker. Do nothing. + return obj + + array = [obj] * world_size + target_array = [None] + if local_rank == 0: + dist.scatter_object_list(target_array, scatter_object_input_list=array, src=0) + else: + dist.scatter_object_list(target_array, scatter_object_input_list=None, src=0) + return target_array[0] + + +class ShardDataset(Dataset): + + def __init__(self, root): + self.root = root + self.shards = sorted(os.listdir(root)) + + def __len__(self): + return len(self.shards) + + def __getitem__(self, idx): + return torch.load(os.path.join(self.root, self.shards[idx]), weights_only=True) + + +def get_tmp_dir(in_memory: bool) -> Path: + return shm_path if in_memory else scratch_path + + +def load_shards_and_share(data_path: Union[str, Path], ids: list[int], + in_memory: bool) -> MemoryMappedTensor: + if local_rank == 0: + with tempfile.NamedTemporaryFile(prefix='shared-tensor-', dir=get_tmp_dir(in_memory)) as f: + log.info(f'Loading shards from {data_path} into {f.name}...') + data = load_shards(data_path, ids=ids, tmp_file_path=f.name) + data = share_tensor_to_all(data) + torch.distributed.barrier() + f.close() # why does the context manager not close the file for me? 
+ else: + log.info('Waiting for the data to be shared with me...') + data = share_tensor_to_all(None) + torch.distributed.barrier() + + return data + + +def load_shards( + data_path: Union[str, Path], + ids: list[int], + *, + tmp_file_path: str, +) -> Union[torch.Tensor, dict[str, torch.Tensor]]: + + id_set = set(ids) + shards = sorted(os.listdir(data_path)) + log.info(f'Found {len(shards)} shards in {data_path}.') + first_shard = torch.load(os.path.join(data_path, shards[0]), weights_only=True) + + log.info(f'Rank {local_rank} created file {tmp_file_path}') + first_item = next(iter(first_shard.values())) + log.info(f'First item shape: {first_item.shape}') + mm_tensor = MemoryMappedTensor.empty(shape=(len(ids), *first_item.shape), + dtype=torch.float32, + filename=tmp_file_path, + existsok=True) + total_count = 0 + used_index = set() + id_indexing = {i: idx for idx, i in enumerate(ids)} + # faster with no workers; otherwise we need to set_sharing_strategy('file_system') + loader = DataLoader(ShardDataset(data_path), batch_size=1, num_workers=0) + for data in tqdm(loader, desc='Loading shards'): + for i, v in data.items(): + if i not in id_set: + continue + + # tensor_index = ids.index(i) + tensor_index = id_indexing[i] + if tensor_index in used_index: + raise ValueError(f'Duplicate id {i} found in {data_path}.') + used_index.add(tensor_index) + mm_tensor[tensor_index] = v + total_count += 1 + + assert total_count == len(ids), f'Expected {len(ids)} tensors, got {total_count}.' + log.info(f'Loaded {total_count} tensors from {data_path}.') + + return mm_tensor + + +def share_tensor_to_all(x: Optional[MemoryMappedTensor]) -> MemoryMappedTensor: + """ + x: the tensor to be shared; None if local_rank != 0 + return: the shared tensor + """ + + # there is no need to share your stuff with anyone if you are alone; must be in memory + if world_size == 1: + return x + + if local_rank == 0: + assert x is not None, 'x must not be None if local_rank == 0' + else: + assert x is None, 'x must be None if local_rank != 0' + + if local_rank == 0: + filename = x.filename + meta_information = (filename, x.shape, x.dtype) + else: + meta_information = None + + filename, data_shape, data_type = local_scatter_torch(meta_information) + if local_rank == 0: + data = x + else: + data = MemoryMappedTensor.from_filename(filename=filename, + dtype=data_type, + shape=data_shape) + + return data diff --git a/postprocessing/mmaudio/eval_utils.py b/postprocessing/mmaudio/eval_utils.py new file mode 100644 index 0000000..53b2b82 --- /dev/null +++ b/postprocessing/mmaudio/eval_utils.py @@ -0,0 +1,259 @@ +import dataclasses +import logging +from pathlib import Path +from typing import Optional + +import numpy as np +import torch +# from colorlog import ColoredFormatter +from PIL import Image +from torchvision.transforms import v2 + +from .data.av_utils import ImageInfo, VideoInfo, read_frames, reencode_with_audio, remux_with_audio +from .model.flow_matching import FlowMatching +from .model.networks import MMAudio +from .model.sequence_config import CONFIG_16K, CONFIG_44K, SequenceConfig +from .model.utils.features_utils import FeaturesUtils +from .utils.download_utils import download_model_if_needed + +log = logging.getLogger() + + +@dataclasses.dataclass +class ModelConfig: + model_name: str + model_path: Path + vae_path: Path + bigvgan_16k_path: Optional[Path] + mode: str + synchformer_ckpt: Path = Path('ckpts/mmaudio/synchformer_state_dict.pth') + + @property + def seq_cfg(self) -> SequenceConfig: + if self.mode == '16k': + 
return CONFIG_16K + elif self.mode == '44k': + return CONFIG_44K + + def download_if_needed(self): + download_model_if_needed(self.model_path) + download_model_if_needed(self.vae_path) + if self.bigvgan_16k_path is not None: + download_model_if_needed(self.bigvgan_16k_path) + download_model_if_needed(self.synchformer_ckpt) + + +small_16k = ModelConfig(model_name='small_16k', + model_path=Path('./weights/mmaudio_small_16k.pth'), + vae_path=Path('./ext_weights/v1-16.pth'), + bigvgan_16k_path=Path('./ext_weights/best_netG.pt'), + mode='16k') +small_44k = ModelConfig(model_name='small_44k', + model_path=Path('./weights/mmaudio_small_44k.pth'), + vae_path=Path('./ext_weights/v1-44.pth'), + bigvgan_16k_path=None, + mode='44k') +medium_44k = ModelConfig(model_name='medium_44k', + model_path=Path('./weights/mmaudio_medium_44k.pth'), + vae_path=Path('./ext_weights/v1-44.pth'), + bigvgan_16k_path=None, + mode='44k') +large_44k = ModelConfig(model_name='large_44k', + model_path=Path('./weights/mmaudio_large_44k.pth'), + vae_path=Path('./ext_weights/v1-44.pth'), + bigvgan_16k_path=None, + mode='44k') +large_44k_v2 = ModelConfig(model_name='large_44k_v2', + model_path=Path('ckpts/mmaudio/mmaudio_large_44k_v2.pth'), + vae_path=Path('ckpts/mmaudio/v1-44.pth'), + bigvgan_16k_path=None, + mode='44k') +all_model_cfg: dict[str, ModelConfig] = { + 'small_16k': small_16k, + 'small_44k': small_44k, + 'medium_44k': medium_44k, + 'large_44k': large_44k, + 'large_44k_v2': large_44k_v2, +} + + +def generate( + clip_video: Optional[torch.Tensor], + sync_video: Optional[torch.Tensor], + text: Optional[list[str]], + *, + negative_text: Optional[list[str]] = None, + feature_utils: FeaturesUtils, + net: MMAudio, + fm: FlowMatching, + rng: torch.Generator, + cfg_strength: float, + clip_batch_size_multiplier: int = 40, + sync_batch_size_multiplier: int = 40, + image_input: bool = False, + offloadobj = None +) -> torch.Tensor: + device = feature_utils.device + dtype = feature_utils.dtype + + bs = len(text) + if clip_video is not None: + clip_video = clip_video.to(device, dtype, non_blocking=True) + clip_features = feature_utils.encode_video_with_clip(clip_video, + batch_size=bs * + clip_batch_size_multiplier) + if image_input: + clip_features = clip_features.expand(-1, net.clip_seq_len, -1) + else: + clip_features = net.get_empty_clip_sequence(bs) + + if sync_video is not None and not image_input: + sync_video = sync_video.to(device, dtype, non_blocking=True) + sync_features = feature_utils.encode_video_with_sync(sync_video, + batch_size=bs * + sync_batch_size_multiplier) + else: + sync_features = net.get_empty_sync_sequence(bs) + + if text is not None: + text_features = feature_utils.encode_text(text) + else: + text_features = net.get_empty_string_sequence(bs) + + if negative_text is not None: + assert len(negative_text) == bs + negative_text_features = feature_utils.encode_text(negative_text) + else: + negative_text_features = net.get_empty_string_sequence(bs) + if offloadobj != None: + offloadobj.ensure_model_loaded("net") + x0 = torch.randn(bs, + net.latent_seq_len, + net.latent_dim, + device=device, + dtype=dtype, + generator=rng) + preprocessed_conditions = net.preprocess_conditions(clip_features, sync_features, text_features) + empty_conditions = net.get_empty_conditions( + bs, negative_text_features=negative_text_features if negative_text is not None else None) + + cfg_ode_wrapper = lambda t, x: net.ode_wrapper(t, x, preprocessed_conditions, empty_conditions, + cfg_strength) + x1 = fm.to_data(cfg_ode_wrapper, x0) + 
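+    # x1 now holds the generated audio latents; the remaining steps map them back to a
+    # waveform: unnormalize the latents, decode them into a spectrogram, then vocode.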
x1 = net.unnormalize(x1) + spec = feature_utils.decode(x1) + audio = feature_utils.vocode(spec) + return audio + + +LOGFORMAT = "[%(log_color)s%(levelname)-8s%(reset)s]: %(log_color)s%(message)s%(reset)s" + + +def setup_eval_logging(log_level: int = logging.INFO): + logging.root.setLevel(log_level) + # formatter = ColoredFormatter(LOGFORMAT) + formatter = None + stream = logging.StreamHandler() + stream.setLevel(log_level) + stream.setFormatter(formatter) + log = logging.getLogger() + log.setLevel(log_level) + log.addHandler(stream) + + +_CLIP_SIZE = 384 +_CLIP_FPS = 8.0 + +_SYNC_SIZE = 224 +_SYNC_FPS = 25.0 + + +def load_video(video_path: Path, duration_sec: float, load_all_frames: bool = True) -> VideoInfo: + + clip_transform = v2.Compose([ + v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + ]) + + sync_transform = v2.Compose([ + v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC), + v2.CenterCrop(_SYNC_SIZE), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + ]) + + output_frames, all_frames, orig_fps = read_frames(video_path, + list_of_fps=[_CLIP_FPS, _SYNC_FPS], + start_sec=0, + end_sec=duration_sec, + need_all_frames=load_all_frames) + + clip_chunk, sync_chunk = output_frames + clip_chunk = torch.from_numpy(clip_chunk).permute(0, 3, 1, 2) + sync_chunk = torch.from_numpy(sync_chunk).permute(0, 3, 1, 2) + + clip_frames = clip_transform(clip_chunk) + sync_frames = sync_transform(sync_chunk) + + clip_length_sec = clip_frames.shape[0] / _CLIP_FPS + sync_length_sec = sync_frames.shape[0] / _SYNC_FPS + + if clip_length_sec < duration_sec: + log.warning(f'Clip video is too short: {clip_length_sec:.2f} < {duration_sec:.2f}') + log.warning(f'Truncating to {clip_length_sec:.2f} sec') + duration_sec = clip_length_sec + + if sync_length_sec < duration_sec: + log.warning(f'Sync video is too short: {sync_length_sec:.2f} < {duration_sec:.2f}') + log.warning(f'Truncating to {sync_length_sec:.2f} sec') + duration_sec = sync_length_sec + + clip_frames = clip_frames[:int(_CLIP_FPS * duration_sec)] + sync_frames = sync_frames[:int(_SYNC_FPS * duration_sec)] + + video_info = VideoInfo( + duration_sec=duration_sec, + fps=orig_fps, + clip_frames=clip_frames, + sync_frames=sync_frames, + all_frames=all_frames if load_all_frames else None, + ) + return video_info + + +def load_image(image_path: Path) -> VideoInfo: + clip_transform = v2.Compose([ + v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + ]) + + sync_transform = v2.Compose([ + v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC), + v2.CenterCrop(_SYNC_SIZE), + v2.ToImage(), + v2.ToDtype(torch.float32, scale=True), + v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), + ]) + + frame = np.array(Image.open(image_path)) + + clip_chunk = torch.from_numpy(frame).unsqueeze(0).permute(0, 3, 1, 2) + sync_chunk = torch.from_numpy(frame).unsqueeze(0).permute(0, 3, 1, 2) + + clip_frames = clip_transform(clip_chunk) + sync_frames = sync_transform(sync_chunk) + + video_info = ImageInfo( + clip_frames=clip_frames, + sync_frames=sync_frames, + original_frame=frame, + ) + return video_info + + +def make_video(source_path, video_info: VideoInfo, output_path: Path, audio: torch.Tensor, sampling_rate: int): + # reencode_with_audio(video_info, output_path, audio, sampling_rate) + 
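+    # remux_with_audio copies the video stream from source_path and muxes in the generated
+    # audio track, avoiding the re-encode performed by the commented-out path above.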
remux_with_audio(source_path, output_path, audio, sampling_rate) \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/__init__.py b/postprocessing/mmaudio/ext/__init__.py new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/postprocessing/mmaudio/ext/__init__.py @@ -0,0 +1 @@ + diff --git a/postprocessing/mmaudio/ext/autoencoder/__init__.py b/postprocessing/mmaudio/ext/autoencoder/__init__.py new file mode 100644 index 0000000..e5a8763 --- /dev/null +++ b/postprocessing/mmaudio/ext/autoencoder/__init__.py @@ -0,0 +1 @@ +from .autoencoder import AutoEncoderModule diff --git a/postprocessing/mmaudio/ext/autoencoder/autoencoder.py b/postprocessing/mmaudio/ext/autoencoder/autoencoder.py new file mode 100644 index 0000000..e40f3fc --- /dev/null +++ b/postprocessing/mmaudio/ext/autoencoder/autoencoder.py @@ -0,0 +1,52 @@ +from typing import Literal, Optional + +import torch +import torch.nn as nn + +from ..autoencoder.vae import VAE, get_my_vae +from ..bigvgan import BigVGAN +from ..bigvgan_v2.bigvgan import BigVGAN as BigVGANv2 +from ...model.utils.distributions import DiagonalGaussianDistribution + + +class AutoEncoderModule(nn.Module): + + def __init__(self, + *, + vae_ckpt_path, + vocoder_ckpt_path: Optional[str] = None, + mode: Literal['16k', '44k'], + need_vae_encoder: bool = True): + super().__init__() + self.vae: VAE = get_my_vae(mode).eval() + vae_state_dict = torch.load(vae_ckpt_path, weights_only=True, map_location='cpu') + self.vae.load_state_dict(vae_state_dict) + self.vae.remove_weight_norm() + + if mode == '16k': + assert vocoder_ckpt_path is not None + self.vocoder = BigVGAN(vocoder_ckpt_path).eval() + elif mode == '44k': + self.vocoder = BigVGANv2.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x', + use_cuda_kernel=False) + self.vocoder.remove_weight_norm() + else: + raise ValueError(f'Unknown mode: {mode}') + + for param in self.parameters(): + param.requires_grad = False + + if not need_vae_encoder: + del self.vae.encoder + + @torch.inference_mode() + def encode(self, x: torch.Tensor) -> DiagonalGaussianDistribution: + return self.vae.encode(x) + + @torch.inference_mode() + def decode(self, z: torch.Tensor) -> torch.Tensor: + return self.vae.decode(z) + + @torch.inference_mode() + def vocode(self, spec: torch.Tensor) -> torch.Tensor: + return self.vocoder(spec) diff --git a/postprocessing/mmaudio/ext/autoencoder/edm2_utils.py b/postprocessing/mmaudio/ext/autoencoder/edm2_utils.py new file mode 100644 index 0000000..a18ffba --- /dev/null +++ b/postprocessing/mmaudio/ext/autoencoder/edm2_utils.py @@ -0,0 +1,168 @@ +# Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved. +# +# This work is licensed under a Creative Commons +# Attribution-NonCommercial-ShareAlike 4.0 International License. +# You should have received a copy of the license along with this +# work. If not, see http://creativecommons.org/licenses/by-nc-sa/4.0/ +"""Improved diffusion model architecture proposed in the paper +"Analyzing and Improving the Training Dynamics of Diffusion Models".""" + +import numpy as np +import torch + +#---------------------------------------------------------------------------- +# Variant of constant() that inherits dtype and device from the given +# reference tensor by default. 
+ +_constant_cache = dict() + + +def constant(value, shape=None, dtype=None, device=None, memory_format=None): + value = np.asarray(value) + if shape is not None: + shape = tuple(shape) + if dtype is None: + dtype = torch.get_default_dtype() + if device is None: + device = torch.device('cpu') + if memory_format is None: + memory_format = torch.contiguous_format + + key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format) + tensor = _constant_cache.get(key, None) + if tensor is None: + tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) + if shape is not None: + tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) + tensor = tensor.contiguous(memory_format=memory_format) + _constant_cache[key] = tensor + return tensor + + +def const_like(ref, value, shape=None, dtype=None, device=None, memory_format=None): + if dtype is None: + dtype = ref.dtype + if device is None: + device = ref.device + return constant(value, shape=shape, dtype=dtype, device=device, memory_format=memory_format) + + +#---------------------------------------------------------------------------- +# Normalize given tensor to unit magnitude with respect to the given +# dimensions. Default = all dimensions except the first. + + +def normalize(x, dim=None, eps=1e-4): + if dim is None: + dim = list(range(1, x.ndim)) + norm = torch.linalg.vector_norm(x, dim=dim, keepdim=True, dtype=torch.float32) + norm = torch.add(eps, norm, alpha=np.sqrt(norm.numel() / x.numel())) + return x / norm.to(x.dtype) + + +class Normalize(torch.nn.Module): + + def __init__(self, dim=None, eps=1e-4): + super().__init__() + self.dim = dim + self.eps = eps + + def forward(self, x): + return normalize(x, dim=self.dim, eps=self.eps) + + +#---------------------------------------------------------------------------- +# Upsample or downsample the given tensor with the given filter, +# or keep it as is. + + +def resample(x, f=[1, 1], mode='keep'): + if mode == 'keep': + return x + f = np.float32(f) + assert f.ndim == 1 and len(f) % 2 == 0 + pad = (len(f) - 1) // 2 + f = f / f.sum() + f = np.outer(f, f)[np.newaxis, np.newaxis, :, :] + f = const_like(x, f) + c = x.shape[1] + if mode == 'down': + return torch.nn.functional.conv2d(x, + f.tile([c, 1, 1, 1]), + groups=c, + stride=2, + padding=(pad, )) + assert mode == 'up' + return torch.nn.functional.conv_transpose2d(x, (f * 4).tile([c, 1, 1, 1]), + groups=c, + stride=2, + padding=(pad, )) + + +#---------------------------------------------------------------------------- +# Magnitude-preserving SiLU (Equation 81). + + +def mp_silu(x): + return torch.nn.functional.silu(x) / 0.596 + + +class MPSiLU(torch.nn.Module): + + def forward(self, x): + return mp_silu(x) + + +#---------------------------------------------------------------------------- +# Magnitude-preserving sum (Equation 88). + + +def mp_sum(a, b, t=0.5): + return a.lerp(b, t) / np.sqrt((1 - t)**2 + t**2) + + +#---------------------------------------------------------------------------- +# Magnitude-preserving concatenation (Equation 103). + + +def mp_cat(a, b, dim=1, t=0.5): + Na = a.shape[dim] + Nb = b.shape[dim] + C = np.sqrt((Na + Nb) / ((1 - t)**2 + t**2)) + wa = C / np.sqrt(Na) * (1 - t) + wb = C / np.sqrt(Nb) * t + return torch.cat([wa * a, wb * b], dim=dim) + + +#---------------------------------------------------------------------------- +# Magnitude-preserving convolution or fully-connected layer (Equation 47) +# with force weight normalization (Equation 66). 
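+# MPConv1D below keeps its weight at unit magnitude: remove_weight_norm() bakes the
+# normalization and the 1/sqrt(fan_in) scaling into the stored weight, and forward()
+# asserts that this has been done, so the layer must be converted with remove_weight_norm()
+# before it is run (AutoEncoderModule does this right after loading the checkpoint).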
+ + +class MPConv1D(torch.nn.Module): + + def __init__(self, in_channels, out_channels, kernel_size): + super().__init__() + self.out_channels = out_channels + self.weight = torch.nn.Parameter(torch.randn(out_channels, in_channels, kernel_size)) + + self.weight_norm_removed = False + + def forward(self, x, gain=1): + assert self.weight_norm_removed, 'call remove_weight_norm() before inference' + + w = self.weight * gain + if w.ndim == 2: + return x @ w.t() + assert w.ndim == 3 + return torch.nn.functional.conv1d(x, w, padding=(w.shape[-1] // 2, )) + + def remove_weight_norm(self): + w = self.weight.to(torch.float32) + w = normalize(w) # traditional weight normalization + w = w / np.sqrt(w[0].numel()) + w = w.to(self.weight.dtype) + self.weight.data.copy_(w) + + self.weight_norm_removed = True + return self diff --git a/postprocessing/mmaudio/ext/autoencoder/vae.py b/postprocessing/mmaudio/ext/autoencoder/vae.py new file mode 100644 index 0000000..9fb69cb --- /dev/null +++ b/postprocessing/mmaudio/ext/autoencoder/vae.py @@ -0,0 +1,369 @@ +import logging +from typing import Optional + +import torch +import torch.nn as nn + +from ...ext.autoencoder.edm2_utils import MPConv1D +from ...ext.autoencoder.vae_modules import (AttnBlock1D, Downsample1D, ResnetBlock1D, + Upsample1D, nonlinearity) +from ...model.utils.distributions import DiagonalGaussianDistribution + +log = logging.getLogger() + +DATA_MEAN_80D = [ + -1.6058, -1.3676, -1.2520, -1.2453, -1.2078, -1.2224, -1.2419, -1.2439, -1.2922, -1.2927, + -1.3170, -1.3543, -1.3401, -1.3836, -1.3907, -1.3912, -1.4313, -1.4152, -1.4527, -1.4728, + -1.4568, -1.5101, -1.5051, -1.5172, -1.5623, -1.5373, -1.5746, -1.5687, -1.6032, -1.6131, + -1.6081, -1.6331, -1.6489, -1.6489, -1.6700, -1.6738, -1.6953, -1.6969, -1.7048, -1.7280, + -1.7361, -1.7495, -1.7658, -1.7814, -1.7889, -1.8064, -1.8221, -1.8377, -1.8417, -1.8643, + -1.8857, -1.8929, -1.9173, -1.9379, -1.9531, -1.9673, -1.9824, -2.0042, -2.0215, -2.0436, + -2.0766, -2.1064, -2.1418, -2.1855, -2.2319, -2.2767, -2.3161, -2.3572, -2.3954, -2.4282, + -2.4659, -2.5072, -2.5552, -2.6074, -2.6584, -2.7107, -2.7634, -2.8266, -2.8981, -2.9673 +] + +DATA_STD_80D = [ + 1.0291, 1.0411, 1.0043, 0.9820, 0.9677, 0.9543, 0.9450, 0.9392, 0.9343, 0.9297, 0.9276, 0.9263, + 0.9242, 0.9254, 0.9232, 0.9281, 0.9263, 0.9315, 0.9274, 0.9247, 0.9277, 0.9199, 0.9188, 0.9194, + 0.9160, 0.9161, 0.9146, 0.9161, 0.9100, 0.9095, 0.9145, 0.9076, 0.9066, 0.9095, 0.9032, 0.9043, + 0.9038, 0.9011, 0.9019, 0.9010, 0.8984, 0.8983, 0.8986, 0.8961, 0.8962, 0.8978, 0.8962, 0.8973, + 0.8993, 0.8976, 0.8995, 0.9016, 0.8982, 0.8972, 0.8974, 0.8949, 0.8940, 0.8947, 0.8936, 0.8939, + 0.8951, 0.8956, 0.9017, 0.9167, 0.9436, 0.9690, 1.0003, 1.0225, 1.0381, 1.0491, 1.0545, 1.0604, + 1.0761, 1.0929, 1.1089, 1.1196, 1.1176, 1.1156, 1.1117, 1.1070 +] + +DATA_MEAN_128D = [ + -3.3462, -2.6723, -2.4893, -2.3143, -2.2664, -2.3317, -2.1802, -2.4006, -2.2357, -2.4597, + -2.3717, -2.4690, -2.5142, -2.4919, -2.6610, -2.5047, -2.7483, -2.5926, -2.7462, -2.7033, + -2.7386, -2.8112, -2.7502, -2.9594, -2.7473, -3.0035, -2.8891, -2.9922, -2.9856, -3.0157, + -3.1191, -2.9893, -3.1718, -3.0745, -3.1879, -3.2310, -3.1424, -3.2296, -3.2791, -3.2782, + -3.2756, -3.3134, -3.3509, -3.3750, -3.3951, -3.3698, -3.4505, -3.4509, -3.5089, -3.4647, + -3.5536, -3.5788, -3.5867, -3.6036, -3.6400, -3.6747, -3.7072, -3.7279, -3.7283, -3.7795, + -3.8259, -3.8447, -3.8663, -3.9182, -3.9605, -3.9861, -4.0105, -4.0373, -4.0762, -4.1121, + -4.1488, -4.1874, -4.2461, -4.3170, -4.3639, 
-4.4452, -4.5282, -4.6297, -4.7019, -4.7960, + -4.8700, -4.9507, -5.0303, -5.0866, -5.1634, -5.2342, -5.3242, -5.4053, -5.4927, -5.5712, + -5.6464, -5.7052, -5.7619, -5.8410, -5.9188, -6.0103, -6.0955, -6.1673, -6.2362, -6.3120, + -6.3926, -6.4797, -6.5565, -6.6511, -6.8130, -6.9961, -7.1275, -7.2457, -7.3576, -7.4663, + -7.6136, -7.7469, -7.8815, -8.0132, -8.1515, -8.3071, -8.4722, -8.7418, -9.3975, -9.6628, + -9.7671, -9.8863, -9.9992, -10.0860, -10.1709, -10.5418, -11.2795, -11.3861 +] + +DATA_STD_128D = [ + 2.3804, 2.4368, 2.3772, 2.3145, 2.2803, 2.2510, 2.2316, 2.2083, 2.1996, 2.1835, 2.1769, 2.1659, + 2.1631, 2.1618, 2.1540, 2.1606, 2.1571, 2.1567, 2.1612, 2.1579, 2.1679, 2.1683, 2.1634, 2.1557, + 2.1668, 2.1518, 2.1415, 2.1449, 2.1406, 2.1350, 2.1313, 2.1415, 2.1281, 2.1352, 2.1219, 2.1182, + 2.1327, 2.1195, 2.1137, 2.1080, 2.1179, 2.1036, 2.1087, 2.1036, 2.1015, 2.1068, 2.0975, 2.0991, + 2.0902, 2.1015, 2.0857, 2.0920, 2.0893, 2.0897, 2.0910, 2.0881, 2.0925, 2.0873, 2.0960, 2.0900, + 2.0957, 2.0958, 2.0978, 2.0936, 2.0886, 2.0905, 2.0845, 2.0855, 2.0796, 2.0840, 2.0813, 2.0817, + 2.0838, 2.0840, 2.0917, 2.1061, 2.1431, 2.1976, 2.2482, 2.3055, 2.3700, 2.4088, 2.4372, 2.4609, + 2.4731, 2.4847, 2.5072, 2.5451, 2.5772, 2.6147, 2.6529, 2.6596, 2.6645, 2.6726, 2.6803, 2.6812, + 2.6899, 2.6916, 2.6931, 2.6998, 2.7062, 2.7262, 2.7222, 2.7158, 2.7041, 2.7485, 2.7491, 2.7451, + 2.7485, 2.7233, 2.7297, 2.7233, 2.7145, 2.6958, 2.6788, 2.6439, 2.6007, 2.4786, 2.2469, 2.1877, + 2.1392, 2.0717, 2.0107, 1.9676, 1.9140, 1.7102, 0.9101, 0.7164 +] + + +class VAE(nn.Module): + + def __init__( + self, + *, + data_dim: int, + embed_dim: int, + hidden_dim: int, + ): + super().__init__() + + if data_dim == 80: + self.data_mean = nn.Buffer(torch.tensor(DATA_MEAN_80D, dtype=torch.float32)) + self.data_std = nn.Buffer(torch.tensor(DATA_STD_80D, dtype=torch.float32)) + elif data_dim == 128: + self.data_mean = nn.Buffer(torch.tensor(DATA_MEAN_128D, dtype=torch.float32)) + self.data_std = nn.Buffer(torch.tensor(DATA_STD_128D, dtype=torch.float32)) + + self.data_mean = self.data_mean.view(1, -1, 1) + self.data_std = self.data_std.view(1, -1, 1) + + self.encoder = Encoder1D( + dim=hidden_dim, + ch_mult=(1, 2, 4), + num_res_blocks=2, + attn_layers=[3], + down_layers=[0], + in_dim=data_dim, + embed_dim=embed_dim, + ) + self.decoder = Decoder1D( + dim=hidden_dim, + ch_mult=(1, 2, 4), + num_res_blocks=2, + attn_layers=[3], + down_layers=[0], + in_dim=data_dim, + out_dim=data_dim, + embed_dim=embed_dim, + ) + + self.embed_dim = embed_dim + # self.quant_conv = nn.Conv1d(2 * embed_dim, 2 * embed_dim, 1) + # self.post_quant_conv = nn.Conv1d(embed_dim, embed_dim, 1) + + self.initialize_weights() + + def initialize_weights(self): + pass + + def encode(self, x: torch.Tensor, normalize: bool = True) -> DiagonalGaussianDistribution: + if normalize: + x = self.normalize(x) + moments = self.encoder(x) + posterior = DiagonalGaussianDistribution(moments) + return posterior + + def decode(self, z: torch.Tensor, unnormalize: bool = True) -> torch.Tensor: + dec = self.decoder(z) + if unnormalize: + dec = self.unnormalize(dec) + return dec + + def normalize(self, x: torch.Tensor) -> torch.Tensor: + return (x - self.data_mean) / self.data_std + + def unnormalize(self, x: torch.Tensor) -> torch.Tensor: + return x * self.data_std + self.data_mean + + def forward( + self, + x: torch.Tensor, + sample_posterior: bool = True, + rng: Optional[torch.Generator] = None, + normalize: bool = True, + unnormalize: bool = True, + ) -> 
tuple[torch.Tensor, DiagonalGaussianDistribution]: + + posterior = self.encode(x, normalize=normalize) + if sample_posterior: + z = posterior.sample(rng) + else: + z = posterior.mode() + dec = self.decode(z, unnormalize=unnormalize) + return dec, posterior + + def load_weights(self, src_dict) -> None: + self.load_state_dict(src_dict, strict=True) + + @property + def device(self) -> torch.device: + return next(self.parameters()).device + + def get_last_layer(self): + return self.decoder.conv_out.weight + + def remove_weight_norm(self): + for name, m in self.named_modules(): + if isinstance(m, MPConv1D): + m.remove_weight_norm() + log.debug(f"Removed weight norm from {name}") + return self + + +class Encoder1D(nn.Module): + + def __init__(self, + *, + dim: int, + ch_mult: tuple[int] = (1, 2, 4, 8), + num_res_blocks: int, + attn_layers: list[int] = [], + down_layers: list[int] = [], + resamp_with_conv: bool = True, + in_dim: int, + embed_dim: int, + double_z: bool = True, + kernel_size: int = 3, + clip_act: float = 256.0): + super().__init__() + self.dim = dim + self.num_layers = len(ch_mult) + self.num_res_blocks = num_res_blocks + self.in_channels = in_dim + self.clip_act = clip_act + self.down_layers = down_layers + self.attn_layers = attn_layers + self.conv_in = MPConv1D(in_dim, self.dim, kernel_size=kernel_size) + + in_ch_mult = (1, ) + tuple(ch_mult) + self.in_ch_mult = in_ch_mult + # downsampling + self.down = nn.ModuleList() + for i_level in range(self.num_layers): + block = nn.ModuleList() + attn = nn.ModuleList() + block_in = dim * in_ch_mult[i_level] + block_out = dim * ch_mult[i_level] + for i_block in range(self.num_res_blocks): + block.append( + ResnetBlock1D(in_dim=block_in, + out_dim=block_out, + kernel_size=kernel_size, + use_norm=True)) + block_in = block_out + if i_level in attn_layers: + attn.append(AttnBlock1D(block_in)) + down = nn.Module() + down.block = block + down.attn = attn + if i_level in down_layers: + down.downsample = Downsample1D(block_in, resamp_with_conv) + self.down.append(down) + + # middle + self.mid = nn.Module() + self.mid.block_1 = ResnetBlock1D(in_dim=block_in, + out_dim=block_in, + kernel_size=kernel_size, + use_norm=True) + self.mid.attn_1 = AttnBlock1D(block_in) + self.mid.block_2 = ResnetBlock1D(in_dim=block_in, + out_dim=block_in, + kernel_size=kernel_size, + use_norm=True) + + # end + self.conv_out = MPConv1D(block_in, + 2 * embed_dim if double_z else embed_dim, + kernel_size=kernel_size) + + self.learnable_gain = nn.Parameter(torch.zeros([])) + + def forward(self, x): + + # downsampling + hs = [self.conv_in(x)] + for i_level in range(self.num_layers): + for i_block in range(self.num_res_blocks): + h = self.down[i_level].block[i_block](hs[-1]) + if len(self.down[i_level].attn) > 0: + h = self.down[i_level].attn[i_block](h) + h = h.clamp(-self.clip_act, self.clip_act) + hs.append(h) + if i_level in self.down_layers: + hs.append(self.down[i_level].downsample(hs[-1])) + + # middle + h = hs[-1] + h = self.mid.block_1(h) + h = self.mid.attn_1(h) + h = self.mid.block_2(h) + h = h.clamp(-self.clip_act, self.clip_act) + + # end + h = nonlinearity(h) + h = self.conv_out(h, gain=(self.learnable_gain + 1)) + return h + + +class Decoder1D(nn.Module): + + def __init__(self, + *, + dim: int, + out_dim: int, + ch_mult: tuple[int] = (1, 2, 4, 8), + num_res_blocks: int, + attn_layers: list[int] = [], + down_layers: list[int] = [], + kernel_size: int = 3, + resamp_with_conv: bool = True, + in_dim: int, + embed_dim: int, + clip_act: float = 256.0): + 
super().__init__() + self.ch = dim + self.num_layers = len(ch_mult) + self.num_res_blocks = num_res_blocks + self.in_channels = in_dim + self.clip_act = clip_act + self.down_layers = [i + 1 for i in down_layers] # each downlayer add one + + # compute in_ch_mult, block_in and curr_res at lowest res + block_in = dim * ch_mult[self.num_layers - 1] + + # z to block_in + self.conv_in = MPConv1D(embed_dim, block_in, kernel_size=kernel_size) + + # middle + self.mid = nn.Module() + self.mid.block_1 = ResnetBlock1D(in_dim=block_in, out_dim=block_in, use_norm=True) + self.mid.attn_1 = AttnBlock1D(block_in) + self.mid.block_2 = ResnetBlock1D(in_dim=block_in, out_dim=block_in, use_norm=True) + + # upsampling + self.up = nn.ModuleList() + for i_level in reversed(range(self.num_layers)): + block = nn.ModuleList() + attn = nn.ModuleList() + block_out = dim * ch_mult[i_level] + for i_block in range(self.num_res_blocks + 1): + block.append(ResnetBlock1D(in_dim=block_in, out_dim=block_out, use_norm=True)) + block_in = block_out + if i_level in attn_layers: + attn.append(AttnBlock1D(block_in)) + up = nn.Module() + up.block = block + up.attn = attn + if i_level in self.down_layers: + up.upsample = Upsample1D(block_in, resamp_with_conv) + self.up.insert(0, up) # prepend to get consistent order + + # end + self.conv_out = MPConv1D(block_in, out_dim, kernel_size=kernel_size) + self.learnable_gain = nn.Parameter(torch.zeros([])) + + def forward(self, z): + # z to block_in + h = self.conv_in(z) + + # middle + h = self.mid.block_1(h) + h = self.mid.attn_1(h) + h = self.mid.block_2(h) + h = h.clamp(-self.clip_act, self.clip_act) + + # upsampling + for i_level in reversed(range(self.num_layers)): + for i_block in range(self.num_res_blocks + 1): + h = self.up[i_level].block[i_block](h) + if len(self.up[i_level].attn) > 0: + h = self.up[i_level].attn[i_block](h) + h = h.clamp(-self.clip_act, self.clip_act) + if i_level in self.down_layers: + h = self.up[i_level].upsample(h) + + h = nonlinearity(h) + h = self.conv_out(h, gain=(self.learnable_gain + 1)) + return h + + +def VAE_16k(**kwargs) -> VAE: + return VAE(data_dim=80, embed_dim=20, hidden_dim=384, **kwargs) + + +def VAE_44k(**kwargs) -> VAE: + return VAE(data_dim=128, embed_dim=40, hidden_dim=512, **kwargs) + + +def get_my_vae(name: str, **kwargs) -> VAE: + if name == '16k': + return VAE_16k(**kwargs) + if name == '44k': + return VAE_44k(**kwargs) + raise ValueError(f'Unknown model: {name}') + + +if __name__ == '__main__': + network = get_my_vae('standard') + + # print the number of parameters in terms of millions + num_params = sum(p.numel() for p in network.parameters()) / 1e6 + print(f'Number of parameters: {num_params:.2f}M') diff --git a/postprocessing/mmaudio/ext/autoencoder/vae_modules.py b/postprocessing/mmaudio/ext/autoencoder/vae_modules.py new file mode 100644 index 0000000..3dbd517 --- /dev/null +++ b/postprocessing/mmaudio/ext/autoencoder/vae_modules.py @@ -0,0 +1,117 @@ +import torch +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange + +from ...ext.autoencoder.edm2_utils import (MPConv1D, mp_silu, mp_sum, normalize) + + +def nonlinearity(x): + # swish + return mp_silu(x) + + +class ResnetBlock1D(nn.Module): + + def __init__(self, *, in_dim, out_dim=None, conv_shortcut=False, kernel_size=3, use_norm=True): + super().__init__() + self.in_dim = in_dim + out_dim = in_dim if out_dim is None else out_dim + self.out_dim = out_dim + self.use_conv_shortcut = conv_shortcut + self.use_norm = use_norm + + self.conv1 = 
MPConv1D(in_dim, out_dim, kernel_size=kernel_size) + self.conv2 = MPConv1D(out_dim, out_dim, kernel_size=kernel_size) + if self.in_dim != self.out_dim: + if self.use_conv_shortcut: + self.conv_shortcut = MPConv1D(in_dim, out_dim, kernel_size=kernel_size) + else: + self.nin_shortcut = MPConv1D(in_dim, out_dim, kernel_size=1) + + def forward(self, x: torch.Tensor) -> torch.Tensor: + + # pixel norm + if self.use_norm: + x = normalize(x, dim=1) + + h = x + h = nonlinearity(h) + h = self.conv1(h) + + h = nonlinearity(h) + h = self.conv2(h) + + if self.in_dim != self.out_dim: + if self.use_conv_shortcut: + x = self.conv_shortcut(x) + else: + x = self.nin_shortcut(x) + + return mp_sum(x, h, t=0.3) + + +class AttnBlock1D(nn.Module): + + def __init__(self, in_channels, num_heads=1): + super().__init__() + self.in_channels = in_channels + + self.num_heads = num_heads + self.qkv = MPConv1D(in_channels, in_channels * 3, kernel_size=1) + self.proj_out = MPConv1D(in_channels, in_channels, kernel_size=1) + + def forward(self, x): + h = x + y = self.qkv(h) + y = y.reshape(y.shape[0], self.num_heads, -1, 3, y.shape[-1]) + q, k, v = normalize(y, dim=2).unbind(3) + + q = rearrange(q, 'b h c l -> b h l c') + k = rearrange(k, 'b h c l -> b h l c') + v = rearrange(v, 'b h c l -> b h l c') + + h = F.scaled_dot_product_attention(q, k, v) + h = rearrange(h, 'b h l c -> b (h c) l') + + h = self.proj_out(h) + + return mp_sum(x, h, t=0.3) + + +class Upsample1D(nn.Module): + + def __init__(self, in_channels, with_conv): + super().__init__() + self.with_conv = with_conv + if self.with_conv: + self.conv = MPConv1D(in_channels, in_channels, kernel_size=3) + + def forward(self, x): + x = F.interpolate(x, scale_factor=2.0, mode='nearest-exact') # support 3D tensor(B,C,T) + if self.with_conv: + x = self.conv(x) + return x + + +class Downsample1D(nn.Module): + + def __init__(self, in_channels, with_conv): + super().__init__() + self.with_conv = with_conv + if self.with_conv: + # no asymmetric padding in torch conv, must do it ourselves + self.conv1 = MPConv1D(in_channels, in_channels, kernel_size=1) + self.conv2 = MPConv1D(in_channels, in_channels, kernel_size=1) + + def forward(self, x): + + if self.with_conv: + x = self.conv1(x) + + x = F.avg_pool1d(x, kernel_size=2, stride=2) + + if self.with_conv: + x = self.conv2(x) + + return x diff --git a/postprocessing/mmaudio/ext/bigvgan/LICENSE b/postprocessing/mmaudio/ext/bigvgan/LICENSE new file mode 100644 index 0000000..e966359 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2022 NVIDIA CORPORATION. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/__init__.py b/postprocessing/mmaudio/ext/bigvgan/__init__.py new file mode 100644 index 0000000..00f13e9 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/__init__.py @@ -0,0 +1 @@ +from .bigvgan import BigVGAN diff --git a/postprocessing/mmaudio/ext/bigvgan/activations.py b/postprocessing/mmaudio/ext/bigvgan/activations.py new file mode 100644 index 0000000..61f2808 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/activations.py @@ -0,0 +1,120 @@ +# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license. +# LICENSE is in incl_licenses directory. + +import torch +from torch import nn, sin, pow +from torch.nn import Parameter + + +class Snake(nn.Module): + ''' + Implementation of a sine-based periodic activation function + Shape: + - Input: (B, C, T) + - Output: (B, C, T), same shape as the input + Parameters: + - alpha - trainable parameter + References: + - This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: + https://arxiv.org/abs/2006.08195 + Examples: + >>> a1 = snake(256) + >>> x = torch.randn(256) + >>> x = a1(x) + ''' + def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False): + ''' + Initialization. + INPUT: + - in_features: shape of the input + - alpha: trainable parameter + alpha is initialized to 1 by default, higher values = higher-frequency. + alpha will be trained along with the rest of your model. + ''' + super(Snake, self).__init__() + self.in_features = in_features + + # initialize alpha + self.alpha_logscale = alpha_logscale + if self.alpha_logscale: # log scale alphas initialized to zeros + self.alpha = Parameter(torch.zeros(in_features) * alpha) + else: # linear scale alphas initialized to ones + self.alpha = Parameter(torch.ones(in_features) * alpha) + + self.alpha.requires_grad = alpha_trainable + + self.no_div_by_zero = 0.000000001 + + def forward(self, x): + ''' + Forward pass of the function. + Applies the function to the input elementwise. + Snake ∶= x + 1/a * sin^2 (xa) + ''' + alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # line up with x to [B, C, T] + if self.alpha_logscale: + alpha = torch.exp(alpha) + x = x + (1.0 / (alpha + self.no_div_by_zero)) * pow(sin(x * alpha), 2) + + return x + + +class SnakeBeta(nn.Module): + ''' + A modified Snake function which uses separate parameters for the magnitude of the periodic components + Shape: + - Input: (B, C, T) + - Output: (B, C, T), same shape as the input + Parameters: + - alpha - trainable parameter that controls frequency + - beta - trainable parameter that controls magnitude + References: + - This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: + https://arxiv.org/abs/2006.08195 + Examples: + >>> a1 = snakebeta(256) + >>> x = torch.randn(256) + >>> x = a1(x) + ''' + def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False): + ''' + Initialization. + INPUT: + - in_features: shape of the input + - alpha - trainable parameter that controls frequency + - beta - trainable parameter that controls magnitude + alpha is initialized to 1 by default, higher values = higher-frequency. 
+ beta is initialized to 1 by default, higher values = higher-magnitude. + alpha will be trained along with the rest of your model. + ''' + super(SnakeBeta, self).__init__() + self.in_features = in_features + + # initialize alpha + self.alpha_logscale = alpha_logscale + if self.alpha_logscale: # log scale alphas initialized to zeros + self.alpha = Parameter(torch.zeros(in_features) * alpha) + self.beta = Parameter(torch.zeros(in_features) * alpha) + else: # linear scale alphas initialized to ones + self.alpha = Parameter(torch.ones(in_features) * alpha) + self.beta = Parameter(torch.ones(in_features) * alpha) + + self.alpha.requires_grad = alpha_trainable + self.beta.requires_grad = alpha_trainable + + self.no_div_by_zero = 0.000000001 + + def forward(self, x): + ''' + Forward pass of the function. + Applies the function to the input elementwise. + SnakeBeta ∶= x + 1/b * sin^2 (xa) + ''' + alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # line up with x to [B, C, T] + beta = self.beta.unsqueeze(0).unsqueeze(-1) + if self.alpha_logscale: + alpha = torch.exp(alpha) + beta = torch.exp(beta) + x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2) + + return x \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/__init__.py b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/__init__.py new file mode 100644 index 0000000..a2318b6 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/__init__.py @@ -0,0 +1,6 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. + +from .filter import * +from .resample import * +from .act import * \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/act.py b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/act.py new file mode 100644 index 0000000..028debd --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/act.py @@ -0,0 +1,28 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. + +import torch.nn as nn +from .resample import UpSample1d, DownSample1d + + +class Activation1d(nn.Module): + def __init__(self, + activation, + up_ratio: int = 2, + down_ratio: int = 2, + up_kernel_size: int = 12, + down_kernel_size: int = 12): + super().__init__() + self.up_ratio = up_ratio + self.down_ratio = down_ratio + self.act = activation + self.upsample = UpSample1d(up_ratio, up_kernel_size) + self.downsample = DownSample1d(down_ratio, down_kernel_size) + + # x: [B,C,T] + def forward(self, x): + x = self.upsample(x) + x = self.act(x) + x = self.downsample(x) + + return x \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/filter.py b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/filter.py new file mode 100644 index 0000000..7ad6ea8 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/filter.py @@ -0,0 +1,95 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. + +import torch +import torch.nn as nn +import torch.nn.functional as F +import math + +if 'sinc' in dir(torch): + sinc = torch.sinc +else: + # This code is adopted from adefossez's julius.core.sinc under the MIT License + # https://adefossez.github.io/julius/julius/core.html + # LICENSE is in incl_licenses directory. 
+ def sinc(x: torch.Tensor): + """ + Implementation of sinc, i.e. sin(pi * x) / (pi * x) + __Warning__: Different to julius.sinc, the input is multiplied by `pi`! + """ + return torch.where(x == 0, + torch.tensor(1., device=x.device, dtype=x.dtype), + torch.sin(math.pi * x) / math.pi / x) + + +# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License +# https://adefossez.github.io/julius/julius/lowpass.html +# LICENSE is in incl_licenses directory. +def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size] + even = (kernel_size % 2 == 0) + half_size = kernel_size // 2 + + #For kaiser window + delta_f = 4 * half_width + A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 + if A > 50.: + beta = 0.1102 * (A - 8.7) + elif A >= 21.: + beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.) + else: + beta = 0. + window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) + + # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio + if even: + time = (torch.arange(-half_size, half_size) + 0.5) + else: + time = torch.arange(kernel_size) - half_size + if cutoff == 0: + filter_ = torch.zeros_like(time) + else: + filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) + # Normalize filter to have sum = 1, otherwise we will have a small leakage + # of the constant component in the input signal. + filter_ /= filter_.sum() + filter = filter_.view(1, 1, kernel_size) + + return filter + + +class LowPassFilter1d(nn.Module): + def __init__(self, + cutoff=0.5, + half_width=0.6, + stride: int = 1, + padding: bool = True, + padding_mode: str = 'replicate', + kernel_size: int = 12): + # kernel_size should be even number for stylegan3 setup, + # in this implementation, odd number is also possible. + super().__init__() + if cutoff < -0.: + raise ValueError("Minimum cutoff must be larger than zero.") + if cutoff > 0.5: + raise ValueError("A cutoff above 0.5 does not make sense.") + self.kernel_size = kernel_size + self.even = (kernel_size % 2 == 0) + self.pad_left = kernel_size // 2 - int(self.even) + self.pad_right = kernel_size // 2 + self.stride = stride + self.padding = padding + self.padding_mode = padding_mode + filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) + self.register_buffer("filter", filter) + + #input [B, C, T] + def forward(self, x): + _, C, _ = x.shape + + if self.padding: + x = F.pad(x, (self.pad_left, self.pad_right), + mode=self.padding_mode) + out = F.conv1d(x, self.filter.expand(C, -1, -1), + stride=self.stride, groups=C) + + return out \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/resample.py b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/resample.py new file mode 100644 index 0000000..750e6c3 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/alias_free_torch/resample.py @@ -0,0 +1,49 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. 
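+# UpSample1d and DownSample1d below resample by an integer ratio with a Kaiser-windowed
+# sinc low-pass filter; Activation1d in act.py runs the nonlinearity between them so that
+# the Snake activations stay (approximately) alias-free.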
+ +import torch.nn as nn +from torch.nn import functional as F +from .filter import LowPassFilter1d +from .filter import kaiser_sinc_filter1d + + +class UpSample1d(nn.Module): + def __init__(self, ratio=2, kernel_size=None): + super().__init__() + self.ratio = ratio + self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size + self.stride = ratio + self.pad = self.kernel_size // ratio - 1 + self.pad_left = self.pad * self.stride + (self.kernel_size - self.stride) // 2 + self.pad_right = self.pad * self.stride + (self.kernel_size - self.stride + 1) // 2 + filter = kaiser_sinc_filter1d(cutoff=0.5 / ratio, + half_width=0.6 / ratio, + kernel_size=self.kernel_size) + self.register_buffer("filter", filter) + + # x: [B, C, T] + def forward(self, x): + _, C, _ = x.shape + + x = F.pad(x, (self.pad, self.pad), mode='replicate') + x = self.ratio * F.conv_transpose1d( + x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C) + x = x[..., self.pad_left:-self.pad_right] + + return x + + +class DownSample1d(nn.Module): + def __init__(self, ratio=2, kernel_size=None): + super().__init__() + self.ratio = ratio + self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size + self.lowpass = LowPassFilter1d(cutoff=0.5 / ratio, + half_width=0.6 / ratio, + stride=ratio, + kernel_size=self.kernel_size) + + def forward(self, x): + xx = self.lowpass(x) + + return xx \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/bigvgan.py b/postprocessing/mmaudio/ext/bigvgan/bigvgan.py new file mode 100644 index 0000000..9401956 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/bigvgan.py @@ -0,0 +1,32 @@ +from pathlib import Path + +import torch +import torch.nn as nn +from omegaconf import OmegaConf + +from ...ext.bigvgan.models import BigVGANVocoder + +_bigvgan_vocoder_path = Path(__file__).parent / 'bigvgan_vocoder.yml' + + +class BigVGAN(nn.Module): + + def __init__(self, ckpt_path, config_path=_bigvgan_vocoder_path): + super().__init__() + vocoder_cfg = OmegaConf.load(config_path) + self.vocoder = BigVGANVocoder(vocoder_cfg).eval() + vocoder_ckpt = torch.load(ckpt_path, map_location='cpu', weights_only=True)['generator'] + self.vocoder.load_state_dict(vocoder_ckpt) + + self.weight_norm_removed = False + self.remove_weight_norm() + + @torch.inference_mode() + def forward(self, x): + assert self.weight_norm_removed, 'call remove_weight_norm() before inference' + return self.vocoder(x) + + def remove_weight_norm(self): + self.vocoder.remove_weight_norm() + self.weight_norm_removed = True + return self diff --git a/postprocessing/mmaudio/ext/bigvgan/bigvgan_vocoder.yml b/postprocessing/mmaudio/ext/bigvgan/bigvgan_vocoder.yml new file mode 100644 index 0000000..d4db31e --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/bigvgan_vocoder.yml @@ -0,0 +1,63 @@ +resblock: '1' +num_gpus: 0 +batch_size: 64 +num_mels: 80 +learning_rate: 0.0001 +adam_b1: 0.8 +adam_b2: 0.99 +lr_decay: 0.999 +seed: 1234 +upsample_rates: +- 4 +- 4 +- 2 +- 2 +- 2 +- 2 +upsample_kernel_sizes: +- 8 +- 8 +- 4 +- 4 +- 4 +- 4 +upsample_initial_channel: 1536 +resblock_kernel_sizes: +- 3 +- 7 +- 11 +resblock_dilation_sizes: +- - 1 + - 3 + - 5 +- - 1 + - 3 + - 5 +- - 1 + - 3 + - 5 +activation: snakebeta +snake_logscale: true +resolutions: +- - 1024 + - 120 + - 600 +- - 2048 + - 240 + - 1200 +- - 512 + - 50 + - 240 +mpd_reshapes: +- 2 +- 3 +- 5 +- 7 +- 11 +use_spectral_norm: false +discriminator_channel_mult: 1 +num_workers: 4 +dist_config: + dist_backend: nccl + dist_url: 
tcp://localhost:54341 + world_size: 1 diff --git a/postprocessing/mmaudio/ext/bigvgan/env.py b/postprocessing/mmaudio/ext/bigvgan/env.py new file mode 100644 index 0000000..b8be238 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/env.py @@ -0,0 +1,18 @@ +# Adapted from https://github.com/jik876/hifi-gan under the MIT license. +# LICENSE is in incl_licenses directory. + +import os +import shutil + + +class AttrDict(dict): + def __init__(self, *args, **kwargs): + super(AttrDict, self).__init__(*args, **kwargs) + self.__dict__ = self + + +def build_env(config, config_name, path): + t_path = os.path.join(path, config_name) + if config != t_path: + os.makedirs(path, exist_ok=True) + shutil.copyfile(config, os.path.join(path, config_name)) \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_1 b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_1 new file mode 100644 index 0000000..5afae39 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_1 @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2020 Jungil Kong + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_2 b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_2 new file mode 100644 index 0000000..322b758 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_2 @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2020 Edward Dixon + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
\ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_3 b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_3 new file mode 100644 index 0000000..56ee3c8 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_3 @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. 
Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. 
Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. 
+ You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_4 b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_4 new file mode 100644 index 0000000..48fd1a1 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_4 @@ -0,0 +1,29 @@ +BSD 3-Clause License + +Copyright (c) 2019, Seungwon Park 박승원 +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_5 b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_5 new file mode 100644 index 0000000..01ae553 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_5 @@ -0,0 +1,16 @@ +Copyright 2020 Alexandre Défossez + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and +associated documentation files (the "Software"), to deal in the Software without restriction, +including without limitation the rights to use, copy, modify, merge, publish, distribute, +sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or +substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT +NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan/models.py b/postprocessing/mmaudio/ext/bigvgan/models.py new file mode 100644 index 0000000..3e2b7d6 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/models.py @@ -0,0 +1,255 @@ +# Copyright (c) 2022 NVIDIA CORPORATION. +# Licensed under the MIT license. + +# Adapted from https://github.com/jik876/hifi-gan under the MIT license. +# LICENSE is in incl_licenses directory. + +import torch +import torch.nn as nn +from torch.nn import Conv1d, ConvTranspose1d +from torch.nn.utils.parametrizations import weight_norm +from torch.nn.utils.parametrize import remove_parametrizations + +from ...ext.bigvgan import activations +from ...ext.bigvgan.alias_free_torch import * +from ...ext.bigvgan.utils import get_padding, init_weights + +LRELU_SLOPE = 0.1 + + +class AMPBlock1(torch.nn.Module): + + def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5), activation=None): + super(AMPBlock1, self).__init__() + self.h = h + + self.convs1 = nn.ModuleList([ + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=dilation[0], + padding=get_padding(kernel_size, dilation[0]))), + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=dilation[1], + padding=get_padding(kernel_size, dilation[1]))), + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=dilation[2], + padding=get_padding(kernel_size, dilation[2]))) + ]) + self.convs1.apply(init_weights) + + self.convs2 = nn.ModuleList([ + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=1, + padding=get_padding(kernel_size, 1))), + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=1, + padding=get_padding(kernel_size, 1))), + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=1, + padding=get_padding(kernel_size, 1))) + ]) + self.convs2.apply(init_weights) + + self.num_layers = len(self.convs1) + len(self.convs2) # total number of conv layers + + if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + else: + raise NotImplementedError( + "activation incorrectly specified. check the config file and look for 'activation'." 
+ ) + + def forward(self, x): + acts1, acts2 = self.activations[::2], self.activations[1::2] + for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2): + xt = a1(x) + xt = c1(xt) + xt = a2(xt) + xt = c2(xt) + x = xt + x + + return x + + def remove_weight_norm(self): + for l in self.convs1: + remove_parametrizations(l, 'weight') + for l in self.convs2: + remove_parametrizations(l, 'weight') + + +class AMPBlock2(torch.nn.Module): + + def __init__(self, h, channels, kernel_size=3, dilation=(1, 3), activation=None): + super(AMPBlock2, self).__init__() + self.h = h + + self.convs = nn.ModuleList([ + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=dilation[0], + padding=get_padding(kernel_size, dilation[0]))), + weight_norm( + Conv1d(channels, + channels, + kernel_size, + 1, + dilation=dilation[1], + padding=get_padding(kernel_size, dilation[1]))) + ]) + self.convs.apply(init_weights) + + self.num_layers = len(self.convs) # total number of conv layers + + if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + else: + raise NotImplementedError( + "activation incorrectly specified. check the config file and look for 'activation'." + ) + + def forward(self, x): + for c, a in zip(self.convs, self.activations): + xt = a(x) + xt = c(xt) + x = xt + x + + return x + + def remove_weight_norm(self): + for l in self.convs: + remove_parametrizations(l, 'weight') + + +class BigVGANVocoder(torch.nn.Module): + # this is our main BigVGAN model. Applies anti-aliased periodic activation for resblocks. + def __init__(self, h): + super().__init__() + self.h = h + + self.num_kernels = len(h.resblock_kernel_sizes) + self.num_upsamples = len(h.upsample_rates) + + # pre conv + self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) + + # define which AMPBlock to use. BigVGAN uses AMPBlock1 as default + resblock = AMPBlock1 if h.resblock == '1' else AMPBlock2 + + # transposed conv-based upsamplers. 
does not apply anti-aliasing + self.ups = nn.ModuleList() + for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): + self.ups.append( + nn.ModuleList([ + weight_norm( + ConvTranspose1d(h.upsample_initial_channel // (2**i), + h.upsample_initial_channel // (2**(i + 1)), + k, + u, + padding=(k - u) // 2)) + ])) + + # residual blocks using anti-aliased multi-periodicity composition modules (AMP) + self.resblocks = nn.ModuleList() + for i in range(len(self.ups)): + ch = h.upsample_initial_channel // (2**(i + 1)) + for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): + self.resblocks.append(resblock(h, ch, k, d, activation=h.activation)) + + # post conv + if h.activation == "snake": # periodic nonlinearity with snake function and anti-aliasing + activation_post = activations.Snake(ch, alpha_logscale=h.snake_logscale) + self.activation_post = Activation1d(activation=activation_post) + elif h.activation == "snakebeta": # periodic nonlinearity with snakebeta function and anti-aliasing + activation_post = activations.SnakeBeta(ch, alpha_logscale=h.snake_logscale) + self.activation_post = Activation1d(activation=activation_post) + else: + raise NotImplementedError( + "activation incorrectly specified. check the config file and look for 'activation'." + ) + + self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) + + # weight initialization + for i in range(len(self.ups)): + self.ups[i].apply(init_weights) + self.conv_post.apply(init_weights) + + def forward(self, x): + # pre conv + x = self.conv_pre(x) + + for i in range(self.num_upsamples): + # upsampling + for i_up in range(len(self.ups[i])): + x = self.ups[i][i_up](x) + # AMP blocks + xs = None + for j in range(self.num_kernels): + if xs is None: + xs = self.resblocks[i * self.num_kernels + j](x) + else: + xs += self.resblocks[i * self.num_kernels + j](x) + x = xs / self.num_kernels + + # post conv + x = self.activation_post(x) + x = self.conv_post(x) + x = torch.tanh(x) + + return x + + def remove_weight_norm(self): + print('Removing weight norm...') + for l in self.ups: + for l_i in l: + remove_parametrizations(l_i, 'weight') + for l in self.resblocks: + l.remove_weight_norm() + remove_parametrizations(self.conv_pre, 'weight') + remove_parametrizations(self.conv_post, 'weight') diff --git a/postprocessing/mmaudio/ext/bigvgan/utils.py b/postprocessing/mmaudio/ext/bigvgan/utils.py new file mode 100644 index 0000000..aff7e65 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan/utils.py @@ -0,0 +1,31 @@ +# Adapted from https://github.com/jik876/hifi-gan under the MIT license. +# LICENSE is in incl_licenses directory. 
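For orientation, here is a minimal usage sketch of the `BigVGANVocoder` defined in `models.py` above: it maps a mel spectrogram of shape `[B, num_mels, frames]` to a waveform whose length is `frames` times the product of `upsample_rates`. The hyperparameter values and the import path below are illustrative assumptions only (real values come from the checkpoint's config, and the sketch assumes the bundled `bigvgan` package and its `alias_free_torch` helpers are importable from the repository root); they simply mirror the attribute names the class reads from `h`.

```
# Minimal sketch: drive BigVGANVocoder (mel spectrogram -> waveform).
# Config values are illustrative, not the shipped configuration.
from types import SimpleNamespace

import torch

from postprocessing.mmaudio.ext.bigvgan.models import BigVGANVocoder

h = SimpleNamespace(
    num_mels=80,                            # mel channels expected by conv_pre
    upsample_initial_channel=512,
    upsample_rates=[4, 4, 2, 2, 2, 2],      # product = samples per mel frame (256 here)
    upsample_kernel_sizes=[8, 8, 4, 4, 4, 4],
    resblock="1",                           # selects AMPBlock1
    resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5]] * 3,
    activation="snakebeta",                 # or "snake"
    snake_logscale=True,
)

vocoder = BigVGANVocoder(h).eval()
mel = torch.randn(1, h.num_mels, 100)       # [B, num_mels, frames]
with torch.no_grad():
    wav = vocoder(mel)                      # [B, 1, frames * prod(upsample_rates)]
print(wav.shape)
```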
+ +import os + +import torch +from torch.nn.utils.parametrizations import weight_norm + + +def init_weights(m, mean=0.0, std=0.01): + classname = m.__class__.__name__ + if classname.find("Conv") != -1: + m.weight.data.normal_(mean, std) + + +def apply_weight_norm(m): + classname = m.__class__.__name__ + if classname.find("Conv") != -1: + weight_norm(m) + + +def get_padding(kernel_size, dilation=1): + return int((kernel_size * dilation - dilation) / 2) + + +def load_checkpoint(filepath, device): + assert os.path.isfile(filepath) + print("Loading '{}'".format(filepath)) + checkpoint_dict = torch.load(filepath, map_location=device) + print("Complete.") + return checkpoint_dict diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/LICENSE b/postprocessing/mmaudio/ext/bigvgan_v2/LICENSE new file mode 100644 index 0000000..4c78361 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2024 NVIDIA CORPORATION. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/__init__.py b/postprocessing/mmaudio/ext/bigvgan_v2/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/activations.py b/postprocessing/mmaudio/ext/bigvgan_v2/activations.py new file mode 100644 index 0000000..4f08dda --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/activations.py @@ -0,0 +1,126 @@ +# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license. +# LICENSE is in incl_licenses directory. + +import torch +from torch import nn, sin, pow +from torch.nn import Parameter + + +class Snake(nn.Module): + """ + Implementation of a sine-based periodic activation function + Shape: + - Input: (B, C, T) + - Output: (B, C, T), same shape as the input + Parameters: + - alpha - trainable parameter + References: + - This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: + https://arxiv.org/abs/2006.08195 + Examples: + >>> a1 = snake(256) + >>> x = torch.randn(256) + >>> x = a1(x) + """ + + def __init__( + self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False + ): + """ + Initialization. + INPUT: + - in_features: shape of the input + - alpha: trainable parameter + alpha is initialized to 1 by default, higher values = higher-frequency. + alpha will be trained along with the rest of your model. 
+ """ + super(Snake, self).__init__() + self.in_features = in_features + + # Initialize alpha + self.alpha_logscale = alpha_logscale + if self.alpha_logscale: # Log scale alphas initialized to zeros + self.alpha = Parameter(torch.zeros(in_features) * alpha) + else: # Linear scale alphas initialized to ones + self.alpha = Parameter(torch.ones(in_features) * alpha) + + self.alpha.requires_grad = alpha_trainable + + self.no_div_by_zero = 0.000000001 + + def forward(self, x): + """ + Forward pass of the function. + Applies the function to the input elementwise. + Snake ∶= x + 1/a * sin^2 (xa) + """ + alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # Line up with x to [B, C, T] + if self.alpha_logscale: + alpha = torch.exp(alpha) + x = x + (1.0 / (alpha + self.no_div_by_zero)) * pow(sin(x * alpha), 2) + + return x + + +class SnakeBeta(nn.Module): + """ + A modified Snake function which uses separate parameters for the magnitude of the periodic components + Shape: + - Input: (B, C, T) + - Output: (B, C, T), same shape as the input + Parameters: + - alpha - trainable parameter that controls frequency + - beta - trainable parameter that controls magnitude + References: + - This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: + https://arxiv.org/abs/2006.08195 + Examples: + >>> a1 = snakebeta(256) + >>> x = torch.randn(256) + >>> x = a1(x) + """ + + def __init__( + self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False + ): + """ + Initialization. + INPUT: + - in_features: shape of the input + - alpha - trainable parameter that controls frequency + - beta - trainable parameter that controls magnitude + alpha is initialized to 1 by default, higher values = higher-frequency. + beta is initialized to 1 by default, higher values = higher-magnitude. + alpha will be trained along with the rest of your model. + """ + super(SnakeBeta, self).__init__() + self.in_features = in_features + + # Initialize alpha + self.alpha_logscale = alpha_logscale + if self.alpha_logscale: # Log scale alphas initialized to zeros + self.alpha = Parameter(torch.zeros(in_features) * alpha) + self.beta = Parameter(torch.zeros(in_features) * alpha) + else: # Linear scale alphas initialized to ones + self.alpha = Parameter(torch.ones(in_features) * alpha) + self.beta = Parameter(torch.ones(in_features) * alpha) + + self.alpha.requires_grad = alpha_trainable + self.beta.requires_grad = alpha_trainable + + self.no_div_by_zero = 0.000000001 + + def forward(self, x): + """ + Forward pass of the function. + Applies the function to the input elementwise. + SnakeBeta ∶= x + 1/b * sin^2 (xa) + """ + alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # Line up with x to [B, C, T] + beta = self.beta.unsqueeze(0).unsqueeze(-1) + if self.alpha_logscale: + alpha = torch.exp(alpha) + beta = torch.exp(beta) + x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2) + + return x diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/__init__.py b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/activation1d.py b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/activation1d.py new file mode 100644 index 0000000..fbc0fd8 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/activation1d.py @@ -0,0 +1,77 @@ +# Copyright (c) 2024 NVIDIA CORPORATION. 
+# Licensed under the MIT license. + +import torch +import torch.nn as nn +from alias_free_activation.torch.resample import UpSample1d, DownSample1d + +# load fused CUDA kernel: this enables importing anti_alias_activation_cuda +from alias_free_activation.cuda import load + +anti_alias_activation_cuda = load.load() + + +class FusedAntiAliasActivation(torch.autograd.Function): + """ + Assumes filter size 12, replication padding on upsampling/downsampling, and logscale alpha/beta parameters as inputs. + The hyperparameters are hard-coded in the kernel to maximize speed. + NOTE: The fused kenrel is incorrect for Activation1d with different hyperparameters. + """ + + @staticmethod + def forward(ctx, inputs, up_ftr, down_ftr, alpha, beta): + activation_results = anti_alias_activation_cuda.forward( + inputs, up_ftr, down_ftr, alpha, beta + ) + + return activation_results + + @staticmethod + def backward(ctx, output_grads): + raise NotImplementedError + return output_grads, None, None + + +class Activation1d(nn.Module): + def __init__( + self, + activation, + up_ratio: int = 2, + down_ratio: int = 2, + up_kernel_size: int = 12, + down_kernel_size: int = 12, + fused: bool = True, + ): + super().__init__() + self.up_ratio = up_ratio + self.down_ratio = down_ratio + self.act = activation + self.upsample = UpSample1d(up_ratio, up_kernel_size) + self.downsample = DownSample1d(down_ratio, down_kernel_size) + + self.fused = fused # Whether to use fused CUDA kernel or not + + def forward(self, x): + if not self.fused: + x = self.upsample(x) + x = self.act(x) + x = self.downsample(x) + return x + else: + if self.act.__class__.__name__ == "Snake": + beta = self.act.alpha.data # Snake uses same params for alpha and beta + else: + beta = ( + self.act.beta.data + ) # Snakebeta uses different params for alpha and beta + alpha = self.act.alpha.data + if ( + not self.act.alpha_logscale + ): # Exp baked into cuda kernel, cancel it out with a log + alpha = torch.log(alpha) + beta = torch.log(beta) + + x = FusedAntiAliasActivation.apply( + x, self.upsample.filter, self.downsample.lowpass.filter, alpha, beta + ) + return x diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation.cpp b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation.cpp new file mode 100644 index 0000000..c5651f7 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation.cpp @@ -0,0 +1,23 @@ +/* coding=utf-8 + * Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + + #include + +extern "C" torch::Tensor fwd_cuda(torch::Tensor const &input, torch::Tensor const &up_filter, torch::Tensor const &down_filter, torch::Tensor const &alpha, torch::Tensor const &beta); + +PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { + m.def("forward", &fwd_cuda, "Anti-Alias Activation forward (CUDA)"); +} \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation_cuda.cu b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation_cuda.cu new file mode 100644 index 0000000..8c44233 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/anti_alias_activation_cuda.cu @@ -0,0 +1,246 @@ +/* coding=utf-8 + * Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include +#include +#include +#include +#include +#include +#include +#include "type_shim.h" +#include +#include +#include +#include +#include + +namespace +{ + // Hard-coded hyperparameters + // WARP_SIZE and WARP_BATCH must match the return values batches_per_warp and + constexpr int ELEMENTS_PER_LDG_STG = 1; //(WARP_ITERATIONS < 4) ? 1 : 4; + constexpr int BUFFER_SIZE = 32; + constexpr int FILTER_SIZE = 12; + constexpr int HALF_FILTER_SIZE = 6; + constexpr int UPSAMPLE_REPLICATION_PAD = 5; // 5 on each side, matching torch impl + constexpr int DOWNSAMPLE_REPLICATION_PAD_LEFT = 5; // matching torch impl + constexpr int DOWNSAMPLE_REPLICATION_PAD_RIGHT = 6; // matching torch impl + + template + __global__ void anti_alias_activation_forward( + output_t *dst, + const input_t *src, + const input_t *up_ftr, + const input_t *down_ftr, + const input_t *alpha, + const input_t *beta, + int batch_size, + int channels, + int seq_len) + { + // Up and downsample filters + input_t up_filter[FILTER_SIZE]; + input_t down_filter[FILTER_SIZE]; + + // Load data from global memory including extra indices reserved for replication paddings + input_t elements[2 * FILTER_SIZE + 2 * BUFFER_SIZE + 2 * UPSAMPLE_REPLICATION_PAD] = {0}; + input_t intermediates[2 * FILTER_SIZE + 2 * BUFFER_SIZE + DOWNSAMPLE_REPLICATION_PAD_LEFT + DOWNSAMPLE_REPLICATION_PAD_RIGHT] = {0}; + + // Output stores downsampled output before writing to dst + output_t output[BUFFER_SIZE]; + + // blockDim/threadIdx = (128, 1, 1) + // gridDim/blockIdx = (seq_blocks, channels, batches) + int block_offset = (blockIdx.x * 128 * BUFFER_SIZE + seq_len * (blockIdx.y + gridDim.y * blockIdx.z)); + int local_offset = threadIdx.x * BUFFER_SIZE; + int seq_offset = blockIdx.x * 128 * BUFFER_SIZE + local_offset; + + // intermediate have double the seq_len + int intermediate_local_offset = threadIdx.x * BUFFER_SIZE * 2; + int intermediate_seq_offset = blockIdx.x * 128 * BUFFER_SIZE * 2 + intermediate_local_offset; + + // Get values needed for replication padding before moving pointer + const input_t *right_most_pntr = src + (seq_len * (blockIdx.y + gridDim.y * blockIdx.z)); + input_t 
seq_left_most_value = right_most_pntr[0]; + input_t seq_right_most_value = right_most_pntr[seq_len - 1]; + + // Move src and dst pointers + src += block_offset + local_offset; + dst += block_offset + local_offset; + + // Alpha and beta values for snake activatons. Applies exp by default + alpha = alpha + blockIdx.y; + input_t alpha_val = expf(alpha[0]); + beta = beta + blockIdx.y; + input_t beta_val = expf(beta[0]); + + #pragma unroll + for (int it = 0; it < FILTER_SIZE; it += 1) + { + up_filter[it] = up_ftr[it]; + down_filter[it] = down_ftr[it]; + } + + // Apply replication padding for upsampling, matching torch impl + #pragma unroll + for (int it = -HALF_FILTER_SIZE; it < BUFFER_SIZE + HALF_FILTER_SIZE; it += 1) + { + int element_index = seq_offset + it; // index for element + if ((element_index < 0) && (element_index >= -UPSAMPLE_REPLICATION_PAD)) + { + elements[2 * (HALF_FILTER_SIZE + it)] = 2 * seq_left_most_value; + } + if ((element_index >= seq_len) && (element_index < seq_len + UPSAMPLE_REPLICATION_PAD)) + { + elements[2 * (HALF_FILTER_SIZE + it)] = 2 * seq_right_most_value; + } + if ((element_index >= 0) && (element_index < seq_len)) + { + elements[2 * (HALF_FILTER_SIZE + it)] = 2 * src[it]; + } + } + + // Apply upsampling strided convolution and write to intermediates. It reserves DOWNSAMPLE_REPLICATION_PAD_LEFT for replication padding of the downsampilng conv later + #pragma unroll + for (int it = 0; it < (2 * BUFFER_SIZE + 2 * FILTER_SIZE); it += 1) + { + input_t acc = 0.0; + int element_index = intermediate_seq_offset + it; // index for intermediate + #pragma unroll + for (int f_idx = 0; f_idx < FILTER_SIZE; f_idx += 1) + { + if ((element_index + f_idx) >= 0) + { + acc += up_filter[f_idx] * elements[it + f_idx]; + } + } + intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] = acc; + } + + // Apply activation function. 
It reserves DOWNSAMPLE_REPLICATION_PAD_LEFT and DOWNSAMPLE_REPLICATION_PAD_RIGHT for replication padding of the downsampilng conv later + double no_div_by_zero = 0.000000001; + #pragma unroll + for (int it = 0; it < 2 * BUFFER_SIZE + 2 * FILTER_SIZE; it += 1) + { + intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] += (1.0 / (beta_val + no_div_by_zero)) * sinf(intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] * alpha_val) * sinf(intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] * alpha_val); + } + + // Apply replication padding before downsampling conv from intermediates + #pragma unroll + for (int it = 0; it < DOWNSAMPLE_REPLICATION_PAD_LEFT; it += 1) + { + intermediates[it] = intermediates[DOWNSAMPLE_REPLICATION_PAD_LEFT]; + } + #pragma unroll + for (int it = DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE; it < DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE + DOWNSAMPLE_REPLICATION_PAD_RIGHT; it += 1) + { + intermediates[it] = intermediates[DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE - 1]; + } + + // Apply downsample strided convolution (assuming stride=2) from intermediates + #pragma unroll + for (int it = 0; it < BUFFER_SIZE; it += 1) + { + input_t acc = 0.0; + #pragma unroll + for (int f_idx = 0; f_idx < FILTER_SIZE; f_idx += 1) + { + // Add constant DOWNSAMPLE_REPLICATION_PAD_RIGHT to match torch implementation + acc += down_filter[f_idx] * intermediates[it * 2 + f_idx + DOWNSAMPLE_REPLICATION_PAD_RIGHT]; + } + output[it] = acc; + } + + // Write output to dst + #pragma unroll + for (int it = 0; it < BUFFER_SIZE; it += ELEMENTS_PER_LDG_STG) + { + int element_index = seq_offset + it; + if (element_index < seq_len) + { + dst[it] = output[it]; + } + } + + } + + template + void dispatch_anti_alias_activation_forward( + output_t *dst, + const input_t *src, + const input_t *up_ftr, + const input_t *down_ftr, + const input_t *alpha, + const input_t *beta, + int batch_size, + int channels, + int seq_len) + { + if (seq_len == 0) + { + return; + } + else + { + // Use 128 threads per block to maximimize gpu utilization + constexpr int threads_per_block = 128; + constexpr int seq_len_per_block = 4096; + int blocks_per_seq_len = (seq_len + seq_len_per_block - 1) / seq_len_per_block; + dim3 blocks(blocks_per_seq_len, channels, batch_size); + dim3 threads(threads_per_block, 1, 1); + + anti_alias_activation_forward + <<>>(dst, src, up_ftr, down_ftr, alpha, beta, batch_size, channels, seq_len); + } + } +} + +extern "C" torch::Tensor fwd_cuda(torch::Tensor const &input, torch::Tensor const &up_filter, torch::Tensor const &down_filter, torch::Tensor const &alpha, torch::Tensor const &beta) +{ + // Input is a 3d tensor with dimensions [batches, channels, seq_len] + const int batches = input.size(0); + const int channels = input.size(1); + const int seq_len = input.size(2); + + // Output + auto act_options = input.options().requires_grad(false); + + torch::Tensor anti_alias_activation_results = + torch::empty({batches, channels, seq_len}, act_options); + + void *input_ptr = static_cast(input.data_ptr()); + void *up_filter_ptr = static_cast(up_filter.data_ptr()); + void *down_filter_ptr = static_cast(down_filter.data_ptr()); + void *alpha_ptr = static_cast(alpha.data_ptr()); + void *beta_ptr = static_cast(beta.data_ptr()); + void *anti_alias_activation_results_ptr = static_cast(anti_alias_activation_results.data_ptr()); + + DISPATCH_FLOAT_HALF_AND_BFLOAT( + input.scalar_type(), + "dispatch anti alias activation_forward", + 
dispatch_anti_alias_activation_forward( + reinterpret_cast(anti_alias_activation_results_ptr), + reinterpret_cast(input_ptr), + reinterpret_cast(up_filter_ptr), + reinterpret_cast(down_filter_ptr), + reinterpret_cast(alpha_ptr), + reinterpret_cast(beta_ptr), + batches, + channels, + seq_len);); + return anti_alias_activation_results; +} \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/compat.h b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/compat.h new file mode 100644 index 0000000..25818b2 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/compat.h @@ -0,0 +1,29 @@ +/* coding=utf-8 + * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +/*This code is copied fron NVIDIA apex: + * https://github.com/NVIDIA/apex + * with minor changes. */ + +#ifndef TORCH_CHECK +#define TORCH_CHECK AT_CHECK +#endif + +#ifdef VERSION_GE_1_3 +#define DATA_PTR data_ptr +#else +#define DATA_PTR data +#endif diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/load.py b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/load.py new file mode 100644 index 0000000..ca5d01d --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/load.py @@ -0,0 +1,86 @@ +# Copyright (c) 2024 NVIDIA CORPORATION. +# Licensed under the MIT license. + +import os +import pathlib +import subprocess + +from torch.utils import cpp_extension + +""" +Setting this param to a list has a problem of generating different compilation commands (with diferent order of architectures) and leading to recompilation of fused kernels. +Set it to empty stringo avoid recompilation and assign arch flags explicity in extra_cuda_cflags below +""" +os.environ["TORCH_CUDA_ARCH_LIST"] = "" + + +def load(): + # Check if cuda 11 is installed for compute capability 8.0 + cc_flag = [] + _, bare_metal_major, _ = _get_cuda_bare_metal_version(cpp_extension.CUDA_HOME) + if int(bare_metal_major) >= 11: + cc_flag.append("-gencode") + cc_flag.append("arch=compute_80,code=sm_80") + + # Build path + srcpath = pathlib.Path(__file__).parent.absolute() + buildpath = srcpath / "build" + _create_build_dir(buildpath) + + # Helper function to build the kernels. 
+ def _cpp_extention_load_helper(name, sources, extra_cuda_flags): + return cpp_extension.load( + name=name, + sources=sources, + build_directory=buildpath, + extra_cflags=[ + "-O3", + ], + extra_cuda_cflags=[ + "-O3", + "-gencode", + "arch=compute_70,code=sm_70", + "--use_fast_math", + ] + + extra_cuda_flags + + cc_flag, + verbose=True, + ) + + extra_cuda_flags = [ + "-U__CUDA_NO_HALF_OPERATORS__", + "-U__CUDA_NO_HALF_CONVERSIONS__", + "--expt-relaxed-constexpr", + "--expt-extended-lambda", + ] + + sources = [ + srcpath / "anti_alias_activation.cpp", + srcpath / "anti_alias_activation_cuda.cu", + ] + anti_alias_activation_cuda = _cpp_extention_load_helper( + "anti_alias_activation_cuda", sources, extra_cuda_flags + ) + + return anti_alias_activation_cuda + + +def _get_cuda_bare_metal_version(cuda_dir): + raw_output = subprocess.check_output( + [cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True + ) + output = raw_output.split() + release_idx = output.index("release") + 1 + release = output[release_idx].split(".") + bare_metal_major = release[0] + bare_metal_minor = release[1][0] + + return raw_output, bare_metal_major, bare_metal_minor + + +def _create_build_dir(buildpath): + try: + os.mkdir(buildpath) + except OSError: + if not os.path.isdir(buildpath): + print(f"Creation of the build directory {buildpath} failed") diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/type_shim.h b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/type_shim.h new file mode 100644 index 0000000..5db7e8a --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/cuda/type_shim.h @@ -0,0 +1,92 @@ +/* coding=utf-8 + * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +#include +#include "compat.h" + +#define DISPATCH_FLOAT_HALF_AND_BFLOAT(TYPE, NAME, ...) \ + switch (TYPE) \ + { \ + case at::ScalarType::Float: \ + { \ + using scalar_t = float; \ + __VA_ARGS__; \ + break; \ + } \ + case at::ScalarType::Half: \ + { \ + using scalar_t = at::Half; \ + __VA_ARGS__; \ + break; \ + } \ + case at::ScalarType::BFloat16: \ + { \ + using scalar_t = at::BFloat16; \ + __VA_ARGS__; \ + break; \ + } \ + default: \ + AT_ERROR(#NAME, " not implemented for '", toString(TYPE), "'"); \ + } + +#define DISPATCH_FLOAT_HALF_AND_BFLOAT_INOUT_TYPES(TYPEIN, TYPEOUT, NAME, ...) 
\ + switch (TYPEIN) \ + { \ + case at::ScalarType::Float: \ + { \ + using scalar_t_in = float; \ + switch (TYPEOUT) \ + { \ + case at::ScalarType::Float: \ + { \ + using scalar_t_out = float; \ + __VA_ARGS__; \ + break; \ + } \ + case at::ScalarType::Half: \ + { \ + using scalar_t_out = at::Half; \ + __VA_ARGS__; \ + break; \ + } \ + case at::ScalarType::BFloat16: \ + { \ + using scalar_t_out = at::BFloat16; \ + __VA_ARGS__; \ + break; \ + } \ + default: \ + AT_ERROR(#NAME, " not implemented for '", toString(TYPEOUT), "'"); \ + } \ + break; \ + } \ + case at::ScalarType::Half: \ + { \ + using scalar_t_in = at::Half; \ + using scalar_t_out = at::Half; \ + __VA_ARGS__; \ + break; \ + } \ + case at::ScalarType::BFloat16: \ + { \ + using scalar_t_in = at::BFloat16; \ + using scalar_t_out = at::BFloat16; \ + __VA_ARGS__; \ + break; \ + } \ + default: \ + AT_ERROR(#NAME, " not implemented for '", toString(TYPEIN), "'"); \ + } diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/__init__.py b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/__init__.py new file mode 100644 index 0000000..8f756ed --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/__init__.py @@ -0,0 +1,6 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. + +from .filter import * +from .resample import * +from .act import * diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/act.py b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/act.py new file mode 100644 index 0000000..f25dda1 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/act.py @@ -0,0 +1,32 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. + +import torch.nn as nn + +from .resample import (DownSample1d, UpSample1d) + + +class Activation1d(nn.Module): + + def __init__( + self, + activation, + up_ratio: int = 2, + down_ratio: int = 2, + up_kernel_size: int = 12, + down_kernel_size: int = 12, + ): + super().__init__() + self.up_ratio = up_ratio + self.down_ratio = down_ratio + self.act = activation + self.upsample = UpSample1d(up_ratio, up_kernel_size) + self.downsample = DownSample1d(down_ratio, down_kernel_size) + + # x: [B,C,T] + def forward(self, x): + x = self.upsample(x) + x = self.act(x) + x = self.downsample(x) + + return x diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/filter.py b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/filter.py new file mode 100644 index 0000000..0fa35b0 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/filter.py @@ -0,0 +1,101 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. + +import torch +import torch.nn as nn +import torch.nn.functional as F +import math + +if "sinc" in dir(torch): + sinc = torch.sinc +else: + # This code is adopted from adefossez's julius.core.sinc under the MIT License + # https://adefossez.github.io/julius/julius/core.html + # LICENSE is in incl_licenses directory. + def sinc(x: torch.Tensor): + """ + Implementation of sinc, i.e. sin(pi * x) / (pi * x) + __Warning__: Different to julius.sinc, the input is multiplied by `pi`! 
+ """ + return torch.where( + x == 0, + torch.tensor(1.0, device=x.device, dtype=x.dtype), + torch.sin(math.pi * x) / math.pi / x, + ) + + +# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License +# https://adefossez.github.io/julius/julius/lowpass.html +# LICENSE is in incl_licenses directory. +def kaiser_sinc_filter1d( + cutoff, half_width, kernel_size +): # return filter [1,1,kernel_size] + even = kernel_size % 2 == 0 + half_size = kernel_size // 2 + + # For kaiser window + delta_f = 4 * half_width + A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95 + if A > 50.0: + beta = 0.1102 * (A - 8.7) + elif A >= 21.0: + beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21.0) + else: + beta = 0.0 + window = torch.kaiser_window(kernel_size, beta=beta, periodic=False) + + # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio + if even: + time = torch.arange(-half_size, half_size) + 0.5 + else: + time = torch.arange(kernel_size) - half_size + if cutoff == 0: + filter_ = torch.zeros_like(time) + else: + filter_ = 2 * cutoff * window * sinc(2 * cutoff * time) + """ + Normalize filter to have sum = 1, otherwise we will have a small leakage of the constant component in the input signal. + """ + filter_ /= filter_.sum() + filter = filter_.view(1, 1, kernel_size) + + return filter + + +class LowPassFilter1d(nn.Module): + def __init__( + self, + cutoff=0.5, + half_width=0.6, + stride: int = 1, + padding: bool = True, + padding_mode: str = "replicate", + kernel_size: int = 12, + ): + """ + kernel_size should be even number for stylegan3 setup, in this implementation, odd number is also possible. + """ + super().__init__() + if cutoff < -0.0: + raise ValueError("Minimum cutoff must be larger than zero.") + if cutoff > 0.5: + raise ValueError("A cutoff above 0.5 does not make sense.") + self.kernel_size = kernel_size + self.even = kernel_size % 2 == 0 + self.pad_left = kernel_size // 2 - int(self.even) + self.pad_right = kernel_size // 2 + self.stride = stride + self.padding = padding + self.padding_mode = padding_mode + filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size) + self.register_buffer("filter", filter) + + # Input [B, C, T] + def forward(self, x): + _, C, _ = x.shape + + if self.padding: + x = F.pad(x, (self.pad_left, self.pad_right), mode=self.padding_mode) + out = F.conv1d(x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C) + + return out diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/resample.py b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/resample.py new file mode 100644 index 0000000..038c60f --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/alias_free_activation/torch/resample.py @@ -0,0 +1,54 @@ +# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 +# LICENSE is in incl_licenses directory. 
+ +import torch.nn as nn +from torch.nn import functional as F + +from .filter import (LowPassFilter1d, + kaiser_sinc_filter1d) + + +class UpSample1d(nn.Module): + + def __init__(self, ratio=2, kernel_size=None): + super().__init__() + self.ratio = ratio + self.kernel_size = (int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size) + self.stride = ratio + self.pad = self.kernel_size // ratio - 1 + self.pad_left = self.pad * self.stride + (self.kernel_size - self.stride) // 2 + self.pad_right = (self.pad * self.stride + (self.kernel_size - self.stride + 1) // 2) + filter = kaiser_sinc_filter1d(cutoff=0.5 / ratio, + half_width=0.6 / ratio, + kernel_size=self.kernel_size) + self.register_buffer("filter", filter) + + # x: [B, C, T] + def forward(self, x): + _, C, _ = x.shape + + x = F.pad(x, (self.pad, self.pad), mode="replicate") + x = self.ratio * F.conv_transpose1d( + x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C) + x = x[..., self.pad_left:-self.pad_right] + + return x + + +class DownSample1d(nn.Module): + + def __init__(self, ratio=2, kernel_size=None): + super().__init__() + self.ratio = ratio + self.kernel_size = (int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size) + self.lowpass = LowPassFilter1d( + cutoff=0.5 / ratio, + half_width=0.6 / ratio, + stride=ratio, + kernel_size=self.kernel_size, + ) + + def forward(self, x): + xx = self.lowpass(x) + + return xx diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/bigvgan.py b/postprocessing/mmaudio/ext/bigvgan_v2/bigvgan.py new file mode 100644 index 0000000..96b87c2 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/bigvgan.py @@ -0,0 +1,439 @@ +# Copyright (c) 2024 NVIDIA CORPORATION. +# Licensed under the MIT license. + +# Adapted from https://github.com/jik876/hifi-gan under the MIT license. +# LICENSE is in incl_licenses directory. + +import json +import os +from pathlib import Path +from typing import Dict, Optional, Union + +import torch +import torch.nn as nn +from huggingface_hub import PyTorchModelHubMixin, hf_hub_download +from torch.nn import Conv1d, ConvTranspose1d +from torch.nn.utils.parametrizations import weight_norm +from torch.nn.utils.parametrize import remove_parametrizations + +from ...ext.bigvgan_v2 import activations +from ...ext.bigvgan_v2.alias_free_activation.torch.act import \ + Activation1d as TorchActivation1d +from ...ext.bigvgan_v2.env import AttrDict +from ...ext.bigvgan_v2.utils import get_padding, init_weights + + +def load_hparams_from_json(path) -> AttrDict: + with open(path) as f: + data = f.read() + return AttrDict(json.loads(data)) + + +class AMPBlock1(torch.nn.Module): + """ + AMPBlock applies Snake / SnakeBeta activation functions with trainable parameters that control periodicity, defined for each layer. + AMPBlock1 has additional self.convs2 that contains additional Conv1d layers with a fixed dilation=1 followed by each layer in self.convs1 + + Args: + h (AttrDict): Hyperparameters. + channels (int): Number of convolution channels. + kernel_size (int): Size of the convolution kernel. Default is 3. + dilation (tuple): Dilation rates for the convolutions. Each dilation layer has two convolutions. Default is (1, 3, 5). + activation (str): Activation function type. Should be either 'snake' or 'snakebeta'. Default is None. 
+ """ + + def __init__( + self, + h: AttrDict, + channels: int, + kernel_size: int = 3, + dilation: tuple = (1, 3, 5), + activation: str = None, + ): + super().__init__() + + self.h = h + + self.convs1 = nn.ModuleList([ + weight_norm( + Conv1d( + channels, + channels, + kernel_size, + stride=1, + dilation=d, + padding=get_padding(kernel_size, d), + )) for d in dilation + ]) + self.convs1.apply(init_weights) + + self.convs2 = nn.ModuleList([ + weight_norm( + Conv1d( + channels, + channels, + kernel_size, + stride=1, + dilation=1, + padding=get_padding(kernel_size, 1), + )) for _ in range(len(dilation)) + ]) + self.convs2.apply(init_weights) + + self.num_layers = len(self.convs1) + len(self.convs2) # Total number of conv layers + + # Select which Activation1d, lazy-load cuda version to ensure backward compatibility + if self.h.get("use_cuda_kernel", False): + from alias_free_activation.cuda.activation1d import \ + Activation1d as CudaActivation1d + + Activation1d = CudaActivation1d + else: + Activation1d = TorchActivation1d + + # Activation functions + if activation == "snake": + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + elif activation == "snakebeta": + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + else: + raise NotImplementedError( + "activation incorrectly specified. check the config file and look for 'activation'." + ) + + def forward(self, x): + acts1, acts2 = self.activations[::2], self.activations[1::2] + for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2): + xt = a1(x) + xt = c1(xt) + xt = a2(xt) + xt = c2(xt) + x = xt + x + + return x + + def remove_weight_norm(self): + for l in self.convs1: + remove_parametrizations(l, 'weight') + for l in self.convs2: + remove_parametrizations(l, 'weight') + + +class AMPBlock2(torch.nn.Module): + """ + AMPBlock applies Snake / SnakeBeta activation functions with trainable parameters that control periodicity, defined for each layer. + Unlike AMPBlock1, AMPBlock2 does not contain extra Conv1d layers with fixed dilation=1 + + Args: + h (AttrDict): Hyperparameters. + channels (int): Number of convolution channels. + kernel_size (int): Size of the convolution kernel. Default is 3. + dilation (tuple): Dilation rates for the convolutions. Each dilation layer has two convolutions. Default is (1, 3, 5). + activation (str): Activation function type. Should be either 'snake' or 'snakebeta'. Default is None. 
+ """ + + def __init__( + self, + h: AttrDict, + channels: int, + kernel_size: int = 3, + dilation: tuple = (1, 3, 5), + activation: str = None, + ): + super().__init__() + + self.h = h + + self.convs = nn.ModuleList([ + weight_norm( + Conv1d( + channels, + channels, + kernel_size, + stride=1, + dilation=d, + padding=get_padding(kernel_size, d), + )) for d in dilation + ]) + self.convs.apply(init_weights) + + self.num_layers = len(self.convs) # Total number of conv layers + + # Select which Activation1d, lazy-load cuda version to ensure backward compatibility + if self.h.get("use_cuda_kernel", False): + from alias_free_activation.cuda.activation1d import \ + Activation1d as CudaActivation1d + + Activation1d = CudaActivation1d + else: + Activation1d = TorchActivation1d + + # Activation functions + if activation == "snake": + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.Snake(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + elif activation == "snakebeta": + self.activations = nn.ModuleList([ + Activation1d( + activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale)) + for _ in range(self.num_layers) + ]) + else: + raise NotImplementedError( + "activation incorrectly specified. check the config file and look for 'activation'." + ) + + def forward(self, x): + for c, a in zip(self.convs, self.activations): + xt = a(x) + xt = c(xt) + x = xt + x + return x + + def remove_weight_norm(self): + for l in self.convs: + remove_weight_norm(l) + + +class BigVGAN( + torch.nn.Module, + PyTorchModelHubMixin, + library_name="bigvgan", + repo_url="https://github.com/NVIDIA/BigVGAN", + docs_url="https://github.com/NVIDIA/BigVGAN/blob/main/README.md", + pipeline_tag="audio-to-audio", + license="mit", + tags=["neural-vocoder", "audio-generation", "arxiv:2206.04658"], +): + """ + BigVGAN is a neural vocoder model that applies anti-aliased periodic activation for residual blocks (resblocks). + New in BigVGAN-v2: it can optionally use optimized CUDA kernels for AMP (anti-aliased multi-periodicity) blocks. + + Args: + h (AttrDict): Hyperparameters. + use_cuda_kernel (bool): If set to True, loads optimized CUDA kernels for AMP. This should be used for inference only, as training is not supported with CUDA kernels. + + Note: + - The `use_cuda_kernel` parameter should be used for inference only, as training with CUDA kernels is not supported. + - Ensure that the activation function is correctly specified in the hyperparameters (h.activation). + """ + + def __init__(self, h: AttrDict, use_cuda_kernel: bool = False): + super().__init__() + self.h = h + self.h["use_cuda_kernel"] = use_cuda_kernel + + # Select which Activation1d, lazy-load cuda version to ensure backward compatibility + if self.h.get("use_cuda_kernel", False): + from alias_free_activation.cuda.activation1d import \ + Activation1d as CudaActivation1d + + Activation1d = CudaActivation1d + else: + Activation1d = TorchActivation1d + + self.num_kernels = len(h.resblock_kernel_sizes) + self.num_upsamples = len(h.upsample_rates) + + # Pre-conv + self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3)) + + # Define which AMPBlock to use. BigVGAN uses AMPBlock1 as default + if h.resblock == "1": + resblock_class = AMPBlock1 + elif h.resblock == "2": + resblock_class = AMPBlock2 + else: + raise ValueError( + f"Incorrect resblock class specified in hyperparameters. Got {h.resblock}") + + # Transposed conv-based upsamplers. 
does not apply anti-aliasing + self.ups = nn.ModuleList() + for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): + self.ups.append( + nn.ModuleList([ + weight_norm( + ConvTranspose1d( + h.upsample_initial_channel // (2**i), + h.upsample_initial_channel // (2**(i + 1)), + k, + u, + padding=(k - u) // 2, + )) + ])) + + # Residual blocks using anti-aliased multi-periodicity composition modules (AMP) + self.resblocks = nn.ModuleList() + for i in range(len(self.ups)): + ch = h.upsample_initial_channel // (2**(i + 1)) + for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): + self.resblocks.append(resblock_class(h, ch, k, d, activation=h.activation)) + + # Post-conv + activation_post = (activations.Snake(ch, alpha_logscale=h.snake_logscale) + if h.activation == "snake" else + (activations.SnakeBeta(ch, alpha_logscale=h.snake_logscale) + if h.activation == "snakebeta" else None)) + if activation_post is None: + raise NotImplementedError( + "activation incorrectly specified. check the config file and look for 'activation'." + ) + + self.activation_post = Activation1d(activation=activation_post) + + # Whether to use bias for the final conv_post. Default to True for backward compatibility + self.use_bias_at_final = h.get("use_bias_at_final", True) + self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3, bias=self.use_bias_at_final)) + + # Weight initialization + for i in range(len(self.ups)): + self.ups[i].apply(init_weights) + self.conv_post.apply(init_weights) + + # Final tanh activation. Defaults to True for backward compatibility + self.use_tanh_at_final = h.get("use_tanh_at_final", True) + + def forward(self, x): + # Pre-conv + x = self.conv_pre(x) + + for i in range(self.num_upsamples): + # Upsampling + for i_up in range(len(self.ups[i])): + x = self.ups[i][i_up](x) + # AMP blocks + xs = None + for j in range(self.num_kernels): + if xs is None: + xs = self.resblocks[i * self.num_kernels + j](x) + else: + xs += self.resblocks[i * self.num_kernels + j](x) + x = xs / self.num_kernels + + # Post-conv + x = self.activation_post(x) + x = self.conv_post(x) + # Final tanh activation + if self.use_tanh_at_final: + x = torch.tanh(x) + else: + x = torch.clamp(x, min=-1.0, max=1.0) # Bound the output to [-1, 1] + + return x + + def remove_weight_norm(self): + try: + print("Removing weight norm...") + for l in self.ups: + for l_i in l: + remove_parametrizations(l_i, 'weight') + for l in self.resblocks: + l.remove_weight_norm() + remove_parametrizations(self.conv_pre, 'weight') + remove_parametrizations(self.conv_post, 'weight') + except ValueError: + print("[INFO] Model already removed weight norm. 
Skipping!") + pass + + # Additional methods for huggingface_hub support + def _save_pretrained(self, save_directory: Path) -> None: + """Save weights and config.json from a Pytorch model to a local directory.""" + + model_path = save_directory / "bigvgan_generator.pt" + torch.save({"generator": self.state_dict()}, model_path) + + config_path = save_directory / "config.json" + with open(config_path, "w") as config_file: + json.dump(self.h, config_file, indent=4) + + @classmethod + def _from_pretrained( + cls, + *, + model_id: str, + revision: str, + cache_dir: str, + force_download: bool, + proxies: Optional[Dict], + resume_download: bool, + local_files_only: bool, + token: Union[str, bool, None], + map_location: str = "cpu", # Additional argument + strict: bool = False, # Additional argument + use_cuda_kernel: bool = False, + **model_kwargs, + ): + """Load Pytorch pretrained weights and return the loaded model.""" + + # Download and load hyperparameters (h) used by BigVGAN + if os.path.isdir(model_id): + print("Loading config.json from local directory") + config_file = os.path.join(model_id, "config.json") + else: + config_file = hf_hub_download( + repo_id=model_id, + filename="config.json", + revision=revision, + cache_dir=cache_dir, + force_download=force_download, + proxies=proxies, + resume_download=resume_download, + token=token, + local_files_only=local_files_only, + ) + h = load_hparams_from_json(config_file) + + # instantiate BigVGAN using h + if use_cuda_kernel: + print( + f"[WARNING] You have specified use_cuda_kernel=True during BigVGAN.from_pretrained(). Only inference is supported (training is not implemented)!" + ) + print( + f"[WARNING] You need nvcc and ninja installed in your system that matches your PyTorch build is using to build the kernel. If not, the model will fail to initialize or generate incorrect waveform!" + ) + print( + f"[WARNING] For detail, see the official GitHub repository: https://github.com/NVIDIA/BigVGAN?tab=readme-ov-file#using-custom-cuda-kernel-for-synthesis" + ) + model = cls(h, use_cuda_kernel=use_cuda_kernel) + + # Download and load pretrained generator weight + if os.path.isdir(model_id): + print("Loading weights from local directory") + model_file = os.path.join(model_id, "bigvgan_generator.pt") + else: + print(f"Loading weights from {model_id}") + model_file = hf_hub_download( + repo_id=model_id, + filename="bigvgan_generator.pt", + revision=revision, + cache_dir=cache_dir, + force_download=force_download, + proxies=proxies, + resume_download=resume_download, + token=token, + local_files_only=local_files_only, + ) + + checkpoint_dict = torch.load(model_file, map_location=map_location, weights_only=True) + + try: + model.load_state_dict(checkpoint_dict["generator"]) + except RuntimeError: + print( + f"[INFO] the pretrained checkpoint does not contain weight norm. Loading the checkpoint after removing weight norm!" + ) + model.remove_weight_norm() + model.load_state_dict(checkpoint_dict["generator"]) + + return model diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/env.py b/postprocessing/mmaudio/ext/bigvgan_v2/env.py new file mode 100644 index 0000000..b8be238 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/env.py @@ -0,0 +1,18 @@ +# Adapted from https://github.com/jik876/hifi-gan under the MIT license. +# LICENSE is in incl_licenses directory. 
+ +import os +import shutil + + +class AttrDict(dict): + def __init__(self, *args, **kwargs): + super(AttrDict, self).__init__(*args, **kwargs) + self.__dict__ = self + + +def build_env(config, config_name, path): + t_path = os.path.join(path, config_name) + if config != t_path: + os.makedirs(path, exist_ok=True) + shutil.copyfile(config, os.path.join(path, config_name)) \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_1 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_1 new file mode 100644 index 0000000..5afae39 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_1 @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2020 Jungil Kong + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_2 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_2 new file mode 100644 index 0000000..322b758 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_2 @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2020 Edward Dixon + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
\ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_3 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_3 new file mode 100644 index 0000000..56ee3c8 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_3 @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_4 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_4 new file mode 100644 index 0000000..48fd1a1 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_4 @@ -0,0 +1,29 @@ +BSD 3-Clause License + +Copyright (c) 2019, Seungwon Park 박승원 +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +1. Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +2. Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +3. Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_5 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_5 new file mode 100644 index 0000000..01ae553 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_5 @@ -0,0 +1,16 @@ +Copyright 2020 Alexandre Défossez + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and +associated documentation files (the "Software"), to deal in the Software without restriction, +including without limitation the rights to use, copy, modify, merge, publish, distribute, +sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or +substantial portions of the Software. 
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT +NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, +DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_6 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_6 new file mode 100644 index 0000000..2569ec0 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_6 @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2023-present, Descript + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_7 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_7 new file mode 100644 index 0000000..c37bdaf --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_7 @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2023 Charactr Inc. + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
\ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_8 b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_8 new file mode 100644 index 0000000..ab3d7ff --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_8 @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2023 Amphion + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/postprocessing/mmaudio/ext/bigvgan_v2/utils.py b/postprocessing/mmaudio/ext/bigvgan_v2/utils.py new file mode 100644 index 0000000..3b1d416 --- /dev/null +++ b/postprocessing/mmaudio/ext/bigvgan_v2/utils.py @@ -0,0 +1,31 @@ +# Adapted from https://github.com/jik876/hifi-gan under the MIT license. +# LICENSE is in incl_licenses directory. + +import os + +import torch +from torch.nn.utils import weight_norm + + +def init_weights(m, mean=0.0, std=0.01): + classname = m.__class__.__name__ + if classname.find("Conv") != -1: + m.weight.data.normal_(mean, std) + + +def apply_weight_norm(m): + classname = m.__class__.__name__ + if classname.find("Conv") != -1: + weight_norm(m) + + +def get_padding(kernel_size, dilation=1): + return int((kernel_size * dilation - dilation) / 2) + + +def load_checkpoint(filepath, device): + assert os.path.isfile(filepath) + print(f"Loading '{filepath}'") + checkpoint_dict = torch.load(filepath, map_location=device) + print("Complete.") + return checkpoint_dict diff --git a/postprocessing/mmaudio/ext/mel_converter.py b/postprocessing/mmaudio/ext/mel_converter.py new file mode 100644 index 0000000..15266d2 --- /dev/null +++ b/postprocessing/mmaudio/ext/mel_converter.py @@ -0,0 +1,106 @@ +# Reference: # https://github.com/bytedance/Make-An-Audio-2 +from typing import Literal + +import torch +import torch.nn as nn +from librosa.filters import mel as librosa_mel_fn + + +def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, *, norm_fn): + return norm_fn(torch.clamp(x, min=clip_val) * C) + + +def spectral_normalize_torch(magnitudes, norm_fn): + output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn) + return output + + +class MelConverter(nn.Module): + + def __init__( + self, + *, + sampling_rate: float, + n_fft: int, + num_mels: int, + hop_size: int, + win_size: int, + fmin: float, + fmax: float, + norm_fn, + ): + super().__init__() + self.sampling_rate = sampling_rate + self.n_fft = n_fft + self.num_mels = num_mels + self.hop_size = hop_size + self.win_size = win_size + self.fmin = fmin + self.fmax = fmax + self.norm_fn = norm_fn + + mel = 
librosa_mel_fn(sr=self.sampling_rate, + n_fft=self.n_fft, + n_mels=self.num_mels, + fmin=self.fmin, + fmax=self.fmax) + mel_basis = torch.from_numpy(mel).float() + hann_window = torch.hann_window(self.win_size) + + self.register_buffer('mel_basis', mel_basis) + self.register_buffer('hann_window', hann_window) + + @property + def device(self): + return self.mel_basis.device + + def forward(self, waveform: torch.Tensor, center: bool = False) -> torch.Tensor: + waveform = waveform.clamp(min=-1., max=1.).to(self.device) + + waveform = torch.nn.functional.pad( + waveform.unsqueeze(1), + [int((self.n_fft - self.hop_size) / 2), + int((self.n_fft - self.hop_size) / 2)], + mode='reflect') + waveform = waveform.squeeze(1) + + spec = torch.stft(waveform, + self.n_fft, + hop_length=self.hop_size, + win_length=self.win_size, + window=self.hann_window, + center=center, + pad_mode='reflect', + normalized=False, + onesided=True, + return_complex=True) + + spec = torch.view_as_real(spec) + spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9)) + spec = torch.matmul(self.mel_basis, spec) + spec = spectral_normalize_torch(spec, self.norm_fn) + + return spec + + +def get_mel_converter(mode: Literal['16k', '44k']) -> MelConverter: + if mode == '16k': + return MelConverter(sampling_rate=16_000, + n_fft=1024, + num_mels=80, + hop_size=256, + win_size=1024, + fmin=0, + fmax=8_000, + norm_fn=torch.log10) + elif mode == '44k': + return MelConverter(sampling_rate=44_100, + n_fft=2048, + num_mels=128, + hop_size=512, + win_size=2048, + fmin=0, + fmax=44100 / 2, + norm_fn=torch.log) + else: + raise ValueError(f'Unknown mode: {mode}') diff --git a/postprocessing/mmaudio/ext/rotary_embeddings.py b/postprocessing/mmaudio/ext/rotary_embeddings.py new file mode 100644 index 0000000..1ea9d56 --- /dev/null +++ b/postprocessing/mmaudio/ext/rotary_embeddings.py @@ -0,0 +1,35 @@ +from typing import Union + +import torch +from einops import rearrange +from torch import Tensor + +# Ref: https://github.com/black-forest-labs/flux/blob/main/src/flux/math.py +# Ref: https://github.com/lucidrains/rotary-embedding-torch + + +def compute_rope_rotations(length: int, + dim: int, + theta: int, + *, + freq_scaling: float = 1.0, + device: Union[torch.device, str] = 'cpu') -> Tensor: + assert dim % 2 == 0 + + with torch.amp.autocast(device_type='cuda', enabled=False): + pos = torch.arange(length, dtype=torch.float32, device=device) + freqs = 1.0 / (theta**(torch.arange(0, dim, 2, dtype=torch.float32, device=device) / dim)) + freqs *= freq_scaling + + rot = torch.einsum('..., f -> ... 
f', pos, freqs) + rot = torch.stack([torch.cos(rot), -torch.sin(rot), torch.sin(rot), torch.cos(rot)], dim=-1) + rot = rearrange(rot, 'n d (i j) -> 1 n d i j', i=2, j=2) + return rot + + +def apply_rope(x: Tensor, rot: Tensor) -> tuple[Tensor, Tensor]: + with torch.amp.autocast(device_type='cuda', enabled=False): + _x = x.float() + _x = _x.view(*_x.shape[:-1], -1, 1, 2) + x_out = rot[..., 0] * _x[..., 0] + rot[..., 1] * _x[..., 1] + return x_out.reshape(*x.shape).to(dtype=x.dtype) diff --git a/postprocessing/mmaudio/ext/stft_converter.py b/postprocessing/mmaudio/ext/stft_converter.py new file mode 100644 index 0000000..6292206 --- /dev/null +++ b/postprocessing/mmaudio/ext/stft_converter.py @@ -0,0 +1,183 @@ +# Reference: # https://github.com/bytedance/Make-An-Audio-2 + +import torch +import torch.nn as nn +import torchaudio +from einops import rearrange +from librosa.filters import mel as librosa_mel_fn + + +def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, norm_fn=torch.log10): + return norm_fn(torch.clamp(x, min=clip_val) * C) + + +def spectral_normalize_torch(magnitudes, norm_fn): + output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn) + return output + + +class STFTConverter(nn.Module): + + def __init__( + self, + *, + sampling_rate: float = 16_000, + n_fft: int = 1024, + num_mels: int = 128, + hop_size: int = 256, + win_size: int = 1024, + fmin: float = 0, + fmax: float = 8_000, + norm_fn=torch.log, + ): + super().__init__() + self.sampling_rate = sampling_rate + self.n_fft = n_fft + self.num_mels = num_mels + self.hop_size = hop_size + self.win_size = win_size + self.fmin = fmin + self.fmax = fmax + self.norm_fn = norm_fn + + mel = librosa_mel_fn(sr=self.sampling_rate, + n_fft=self.n_fft, + n_mels=self.num_mels, + fmin=self.fmin, + fmax=self.fmax) + mel_basis = torch.from_numpy(mel).float() + hann_window = torch.hann_window(self.win_size) + + self.register_buffer('mel_basis', mel_basis) + self.register_buffer('hann_window', hann_window) + + @property + def device(self): + return self.hann_window.device + + def forward(self, waveform: torch.Tensor) -> torch.Tensor: + # input: batch_size * length + bs = waveform.shape[0] + waveform = waveform.clamp(min=-1., max=1.) 
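+        # Descriptive summary of the steps below: compute a complex one-sided STFT,
+        # split it into a log10-power term and a phase angle, and return them stacked
+        # along a channel axis as a (B, 2, F, T) tensor, with F = n_fft // 2 + 1 bins.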
+ + spec = torch.stft(waveform, + self.n_fft, + hop_length=self.hop_size, + win_length=self.win_size, + window=self.hann_window, + center=True, + pad_mode='reflect', + normalized=False, + onesided=True, + return_complex=True) + + spec = torch.view_as_real(spec) + # print('After stft', spec.shape, spec.min(), spec.max(), spec.mean()) + + power = spec.pow(2).sum(-1) + angle = torch.atan2(spec[..., 1], spec[..., 0]) + + print('power', power.shape, power.min(), power.max(), power.mean()) + print('angle', angle.shape, angle.min(), angle.max(), angle.mean()) + + # print('mel', self.mel_basis.shape, self.mel_basis.min(), self.mel_basis.max(), + # self.mel_basis.mean()) + + # spec = rearrange(spec, 'b f t c -> (b c) f t') + + # spec = self.mel_transform(spec) + + # spec = torch.matmul(self.mel_basis, spec) + + # print('After mel', spec.shape, spec.min(), spec.max(), spec.mean()) + + # spec = spectral_normalize_torch(spec, self.norm_fn) + + # print('After norm', spec.shape, spec.min(), spec.max(), spec.mean()) + + # compute magnitude + # magnitude = torch.sqrt((spec**2).sum(-1)) + # normalize by magnitude + # scaled_magnitude = torch.log10(magnitude.clamp(min=1e-5)) * 10 + # spec = spec / magnitude.unsqueeze(-1) * scaled_magnitude.unsqueeze(-1) + + # power = torch.log10(power.clamp(min=1e-5)) * 10 + power = torch.log10(power.clamp(min=1e-5)) + + print('After scaling', power.shape, power.min(), power.max(), power.mean()) + + spec = torch.stack([power, angle], dim=-1) + + # spec = rearrange(spec, '(b c) f t -> b c f t', b=bs) + spec = rearrange(spec, 'b f t c -> b c f t', b=bs) + + # spec[:, :, 400:] = 0 + + return spec + + def invert(self, spec: torch.Tensor, length: int) -> torch.Tensor: + bs = spec.shape[0] + + # spec = rearrange(spec, 'b c f t -> (b c) f t') + # print(spec.shape, self.mel_basis.shape) + # spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution + # spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec + + # spec = self.invmel_transform(spec) + + spec = rearrange(spec, 'b c f t -> b f t c', b=bs).contiguous() + + # spec[..., 0] = 10**(spec[..., 0] / 10) + + power = spec[..., 0] + power = 10**power + + # print('After unscaling', spec[..., 0].shape, spec[..., 0].min(), spec[..., 0].max(), + # spec[..., 0].mean()) + + unit_vector = torch.stack([ + torch.cos(spec[..., 1]), + torch.sin(spec[..., 1]), + ], dim=-1) + + spec = torch.sqrt(power) * unit_vector + + # spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous() + spec = torch.view_as_complex(spec) + + waveform = torch.istft( + spec, + self.n_fft, + length=length, + hop_length=self.hop_size, + win_length=self.win_size, + window=self.hann_window, + center=True, + normalized=False, + onesided=True, + return_complex=False, + ) + + return waveform + + +if __name__ == '__main__': + + converter = STFTConverter(sampling_rate=16000) + + signal = torchaudio.load('./output/ZZ6GRocWW38_000090.wav')[0] + # resample signal at 44100 Hz + # signal = torchaudio.transforms.Resample(16_000, 44_100)(signal) + + L = signal.shape[1] + print('Input signal', signal.shape) + spec = converter(signal) + + print('Final spec', spec.shape) + + signal_recon = converter.invert(spec, length=L) + print('Output signal', signal_recon.shape, signal_recon.min(), signal_recon.max(), + signal_recon.mean()) + + print('MSE', torch.nn.functional.mse_loss(signal, signal_recon)) + torchaudio.save('./output/ZZ6GRocWW38_000090_recon.wav', signal_recon, 16000) diff --git a/postprocessing/mmaudio/ext/stft_converter_mel.py 
b/postprocessing/mmaudio/ext/stft_converter_mel.py new file mode 100644 index 0000000..f6b32d4 --- /dev/null +++ b/postprocessing/mmaudio/ext/stft_converter_mel.py @@ -0,0 +1,234 @@ +# Reference: # https://github.com/bytedance/Make-An-Audio-2 + +import torch +import torch.nn as nn +import torchaudio +from einops import rearrange +from librosa.filters import mel as librosa_mel_fn + + +def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, norm_fn=torch.log10): + return norm_fn(torch.clamp(x, min=clip_val) * C) + + +def spectral_normalize_torch(magnitudes, norm_fn): + output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn) + return output + + +class STFTConverter(nn.Module): + + def __init__( + self, + *, + sampling_rate: float = 16_000, + n_fft: int = 1024, + num_mels: int = 128, + hop_size: int = 256, + win_size: int = 1024, + fmin: float = 0, + fmax: float = 8_000, + norm_fn=torch.log, + ): + super().__init__() + self.sampling_rate = sampling_rate + self.n_fft = n_fft + self.num_mels = num_mels + self.hop_size = hop_size + self.win_size = win_size + self.fmin = fmin + self.fmax = fmax + self.norm_fn = norm_fn + + mel = librosa_mel_fn(sr=self.sampling_rate, + n_fft=self.n_fft, + n_mels=self.num_mels, + fmin=self.fmin, + fmax=self.fmax) + mel_basis = torch.from_numpy(mel).float() + hann_window = torch.hann_window(self.win_size) + + self.register_buffer('mel_basis', mel_basis) + self.register_buffer('hann_window', hann_window) + + @property + def device(self): + return self.hann_window.device + + def forward(self, waveform: torch.Tensor) -> torch.Tensor: + # input: batch_size * length + bs = waveform.shape[0] + waveform = waveform.clamp(min=-1., max=1.) + + spec = torch.stft(waveform, + self.n_fft, + hop_length=self.hop_size, + win_length=self.win_size, + window=self.hann_window, + center=True, + pad_mode='reflect', + normalized=False, + onesided=True, + return_complex=True) + + spec = torch.view_as_real(spec) + # print('After stft', spec.shape, spec.min(), spec.max(), spec.mean()) + + power = (spec.pow(2).sum(-1))**(0.5) + angle = torch.atan2(spec[..., 1], spec[..., 0]) + + print('power 1', power.shape, power.min(), power.max(), power.mean()) + print('angle 1', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2]) + + # print('mel', self.mel_basis.shape, self.mel_basis.min(), self.mel_basis.max(), + # self.mel_basis.mean()) + + # spec = self.mel_transform(spec) + + # power = torch.matmul(self.mel_basis, power) + + spec = rearrange(spec, 'b f t c -> (b c) f t') + spec = self.mel_basis.unsqueeze(0) @ spec + spec = rearrange(spec, '(b c) f t -> b f t c', b=bs) + + power = (spec.pow(2).sum(-1))**(0.5) + angle = torch.atan2(spec[..., 1], spec[..., 0]) + + print('power', power.shape, power.min(), power.max(), power.mean()) + print('angle', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2]) + + # print('After mel', spec.shape, spec.min(), spec.max(), spec.mean()) + + # spec = spectral_normalize_torch(spec, self.norm_fn) + + # print('After norm', spec.shape, spec.min(), spec.max(), spec.mean()) + + # compute magnitude + # magnitude = torch.sqrt((spec**2).sum(-1)) + # normalize by magnitude + # scaled_magnitude = torch.log10(magnitude.clamp(min=1e-5)) * 10 + # spec = spec / magnitude.unsqueeze(-1) * scaled_magnitude.unsqueeze(-1) + + # power = torch.log10(power.clamp(min=1e-5)) * 10 + power = torch.log10(power.clamp(min=1e-8)) + + print('After scaling', power.shape, power.min(), power.max(), power.mean()) + + # spec = torch.stack([power, angle], 
dim=-1) + + # spec = rearrange(spec, '(b c) f t -> b c f t', b=bs) + # spec = rearrange(spec, 'b f t c -> b c f t', b=bs) + + # spec[:, :, 400:] = 0 + + return power, angle + # return spec[..., 0], spec[..., 1] + + def invert(self, spec: torch.Tensor, length: int) -> torch.Tensor: + + power, angle = spec + + bs = power.shape[0] + + # spec = rearrange(spec, 'b c f t -> (b c) f t') + # print(spec.shape, self.mel_basis.shape) + # spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution + # spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec + + # spec = self.invmel_transform(spec) + + # spec = rearrange(spec, 'b c f t -> b f t c', b=bs).contiguous() + + # spec[..., 0] = 10**(spec[..., 0] / 10) + + # power = spec[..., 0] + power = 10**power + + # print('After unscaling', spec[..., 0].shape, spec[..., 0].min(), spec[..., 0].max(), + # spec[..., 0].mean()) + + unit_vector = torch.stack([ + torch.cos(angle), + torch.sin(angle), + ], dim=-1) + + spec = power.unsqueeze(-1) * unit_vector + + # power = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), power).solution + spec = rearrange(spec, 'b f t c -> (b c) f t') + spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec + # spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution + spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous() + + power = (spec.pow(2).sum(-1))**(0.5) + angle = torch.atan2(spec[..., 1], spec[..., 0]) + + print('power 2', power.shape, power.min(), power.max(), power.mean()) + print('angle 2', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2]) + + # spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous() + spec = torch.view_as_complex(spec) + + waveform = torch.istft( + spec, + self.n_fft, + length=length, + hop_length=self.hop_size, + win_length=self.win_size, + window=self.hann_window, + center=True, + normalized=False, + onesided=True, + return_complex=False, + ) + + return waveform + + +if __name__ == '__main__': + + converter = STFTConverter(sampling_rate=16000) + + signal = torchaudio.load('./output/ZZ6GRocWW38_000090.wav')[0] + # resample signal at 44100 Hz + # signal = torchaudio.transforms.Resample(16_000, 44_100)(signal) + + L = signal.shape[1] + print('Input signal', signal.shape) + spec = converter(signal) + + power, angle = spec + + # print(power.shape, angle.shape) + # print(power, power.min(), power.max(), power.mean()) + # power = power.clamp(-1, 1) + # angle = angle.clamp(-1, 1) + + import matplotlib.pyplot as plt + + # Visualize power + plt.figure() + plt.imshow(power[0].detach().numpy(), aspect='auto', origin='lower') + plt.colorbar() + plt.title('Power') + plt.xlabel('Time') + plt.ylabel('Frequency') + plt.savefig('./output/power.png') + + # Visualize angle + plt.figure() + plt.imshow(angle[0].detach().numpy(), aspect='auto', origin='lower') + plt.colorbar() + plt.title('Angle') + plt.xlabel('Time') + plt.ylabel('Frequency') + plt.savefig('./output/angle.png') + + # print('Final spec', spec.shape) + + signal_recon = converter.invert(spec, length=L) + print('Output signal', signal_recon.shape, signal_recon.min(), signal_recon.max(), + signal_recon.mean()) + + print('MSE', torch.nn.functional.mse_loss(signal, signal_recon)) + torchaudio.save('./output/ZZ6GRocWW38_000090_recon.wav', signal_recon, 16000) diff --git a/postprocessing/mmaudio/ext/synchformer/LICENSE b/postprocessing/mmaudio/ext/synchformer/LICENSE new file mode 100644 index 0000000..2f70bf2 --- /dev/null +++ b/postprocessing/mmaudio/ext/synchformer/LICENSE @@ -0,0 
+1,21 @@ +MIT License + +Copyright (c) 2024 Vladimir Iashin + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/postprocessing/mmaudio/ext/synchformer/__init__.py b/postprocessing/mmaudio/ext/synchformer/__init__.py new file mode 100644 index 0000000..838ebd4 --- /dev/null +++ b/postprocessing/mmaudio/ext/synchformer/__init__.py @@ -0,0 +1 @@ +# from .synchformer import Synchformer diff --git a/postprocessing/mmaudio/ext/synchformer/divided_224_16x4.yaml b/postprocessing/mmaudio/ext/synchformer/divided_224_16x4.yaml new file mode 100644 index 0000000..f9d20b7 --- /dev/null +++ b/postprocessing/mmaudio/ext/synchformer/divided_224_16x4.yaml @@ -0,0 +1,84 @@ +TRAIN: + ENABLE: True + DATASET: Ssv2 + BATCH_SIZE: 32 + EVAL_PERIOD: 5 + CHECKPOINT_PERIOD: 5 + AUTO_RESUME: True + CHECKPOINT_EPOCH_RESET: True + CHECKPOINT_FILE_PATH: /checkpoint/fmetze/neurips_sota/40944587/checkpoints/checkpoint_epoch_00035.pyth +DATA: + NUM_FRAMES: 16 + SAMPLING_RATE: 4 + TRAIN_JITTER_SCALES: [256, 320] + TRAIN_CROP_SIZE: 224 + TEST_CROP_SIZE: 224 + INPUT_CHANNEL_NUM: [3] + MEAN: [0.5, 0.5, 0.5] + STD: [0.5, 0.5, 0.5] + PATH_TO_DATA_DIR: /private/home/mandelapatrick/slowfast/data/ssv2 + PATH_PREFIX: /datasets01/SomethingV2/092720/20bn-something-something-v2-frames + INV_UNIFORM_SAMPLE: True + RANDOM_FLIP: False + REVERSE_INPUT_CHANNEL: True + USE_RAND_AUGMENT: True + RE_PROB: 0.0 + USE_REPEATED_AUG: False + USE_RANDOM_RESIZE_CROPS: False + COLORJITTER: False + GRAYSCALE: False + GAUSSIAN: False +SOLVER: + BASE_LR: 1e-4 + LR_POLICY: steps_with_relative_lrs + LRS: [1, 0.1, 0.01] + STEPS: [0, 20, 30] + MAX_EPOCH: 35 + MOMENTUM: 0.9 + WEIGHT_DECAY: 5e-2 + WARMUP_EPOCHS: 0.0 + OPTIMIZING_METHOD: adamw + USE_MIXED_PRECISION: True + SMOOTHING: 0.2 +SLOWFAST: + ALPHA: 8 +VIT: + PATCH_SIZE: 16 + PATCH_SIZE_TEMP: 2 + CHANNELS: 3 + EMBED_DIM: 768 + DEPTH: 12 + NUM_HEADS: 12 + MLP_RATIO: 4 + QKV_BIAS: True + VIDEO_INPUT: True + TEMPORAL_RESOLUTION: 8 + USE_MLP: True + DROP: 0.0 + POS_DROPOUT: 0.0 + DROP_PATH: 0.2 + IM_PRETRAINED: True + HEAD_DROPOUT: 0.0 + HEAD_ACT: tanh + PRETRAINED_WEIGHTS: vit_1k + ATTN_LAYER: divided +MODEL: + NUM_CLASSES: 174 + ARCH: slow + MODEL_NAME: VisionTransformer + LOSS_FUNC: cross_entropy +TEST: + ENABLE: True + DATASET: Ssv2 + BATCH_SIZE: 64 + NUM_ENSEMBLE_VIEWS: 1 + NUM_SPATIAL_CROPS: 3 +DATA_LOADER: + NUM_WORKERS: 4 + PIN_MEMORY: True +NUM_GPUS: 8 +NUM_SHARDS: 4 +RNG_SEED: 0 +OUTPUT_DIR: . 
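+# Note: this is the SSV2 "divided" space-time attention config from the original
+# Motionformer repo; motionformer.py downloads it via FILE2URL when it is missing
+# locally and patches a few VIT.* fields at load time.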
+TENSORBOARD:
+  ENABLE: True
diff --git a/postprocessing/mmaudio/ext/synchformer/motionformer.py b/postprocessing/mmaudio/ext/synchformer/motionformer.py
new file mode 100644
index 0000000..dbf3014
--- /dev/null
+++ b/postprocessing/mmaudio/ext/synchformer/motionformer.py
@@ -0,0 +1,400 @@
+import logging
+from pathlib import Path
+
+import einops
+import torch
+from omegaconf import OmegaConf
+from timm.layers import trunc_normal_
+from torch import nn
+
+from .utils import check_if_file_exists_else_download
+from .video_model_builder import VisionTransformer
+
+FILE2URL = {
+    # cfg
+    'motionformer_224_16x4.yaml':
+    'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/motionformer_224_16x4.yaml',
+    'joint_224_16x4.yaml':
+    'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/joint_224_16x4.yaml',
+    'divided_224_16x4.yaml':
+    'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/divided_224_16x4.yaml',
+    # ckpt
+    'ssv2_motionformer_224_16x4.pyth':
+    'https://dl.fbaipublicfiles.com/motionformer/ssv2_motionformer_224_16x4.pyth',
+    'ssv2_joint_224_16x4.pyth':
+    'https://dl.fbaipublicfiles.com/motionformer/ssv2_joint_224_16x4.pyth',
+    'ssv2_divided_224_16x4.pyth':
+    'https://dl.fbaipublicfiles.com/motionformer/ssv2_divided_224_16x4.pyth',
+}
+
+
+class MotionFormer(VisionTransformer):
+    ''' This class serves three purposes:
+        1. Renames the class to MotionFormer.
+        2. Downloads the cfg from the original repo and patches it if needed.
+        3. Takes care of feature extraction by redefining .forward():
+            - if `extract_features=True` and `factorize_space_time=False`,
+              the output is of shape (B, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8
+            - if `extract_features=True` and `factorize_space_time=True`, the output is of shape (B*S, D)
+              and spatial and temporal transformer encoder layers are used.
+            - if `extract_features=True`, `factorize_space_time=True` and `add_global_repr=True`,
+              the output is of shape (B, D): spatial and temporal transformer encoder layers are used,
+              and a global representation is extracted from the segments (an extra pos emb is added).
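+
+        Note: .forward() expects the input video as a (B, S, C, T, H, W) tensor,
+        i.e. batch, segments, channels, frames, height, width (see forward() below).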
+ ''' + + def __init__( + self, + extract_features: bool = False, + ckpt_path: str = None, + factorize_space_time: bool = None, + agg_space_module: str = None, + agg_time_module: str = None, + add_global_repr: bool = True, + agg_segments_module: str = None, + max_segments: int = None, + ): + self.extract_features = extract_features + self.ckpt_path = ckpt_path + self.factorize_space_time = factorize_space_time + + if self.ckpt_path is not None: + check_if_file_exists_else_download(self.ckpt_path, FILE2URL) + ckpt = torch.load(self.ckpt_path, map_location='cpu') + mformer_ckpt2cfg = { + 'ssv2_motionformer_224_16x4.pyth': 'motionformer_224_16x4.yaml', + 'ssv2_joint_224_16x4.pyth': 'joint_224_16x4.yaml', + 'ssv2_divided_224_16x4.pyth': 'divided_224_16x4.yaml', + } + # init from motionformer ckpt or from our Stage I ckpt + # depending on whether the feat extractor was pre-trained on AVCLIPMoCo or not, we need to + # load the state dict differently + was_pt_on_avclip = self.ckpt_path.endswith( + '.pt') # checks if it is a stage I ckpt (FIXME: a bit generic) + if self.ckpt_path.endswith(tuple(mformer_ckpt2cfg.keys())): + cfg_fname = mformer_ckpt2cfg[Path(self.ckpt_path).name] + elif was_pt_on_avclip: + # TODO: this is a hack, we should be able to get the cfg from the ckpt (earlier ckpt didn't have it) + s1_cfg = ckpt.get('args', None) # Stage I cfg + if s1_cfg is not None: + s1_vfeat_extractor_ckpt_path = s1_cfg.model.params.vfeat_extractor.params.ckpt_path + # if the stage I ckpt was initialized from a motionformer ckpt or train from scratch + if s1_vfeat_extractor_ckpt_path is not None: + cfg_fname = mformer_ckpt2cfg[Path(s1_vfeat_extractor_ckpt_path).name] + else: + cfg_fname = 'divided_224_16x4.yaml' + else: + cfg_fname = 'divided_224_16x4.yaml' + else: + raise ValueError(f'ckpt_path {self.ckpt_path} is not supported.') + else: + was_pt_on_avclip = False + cfg_fname = 'divided_224_16x4.yaml' + # logging.info(f'No ckpt_path provided, using {cfg_fname} config.') + + if cfg_fname in ['motionformer_224_16x4.yaml', 'divided_224_16x4.yaml']: + pos_emb_type = 'separate' + elif cfg_fname == 'joint_224_16x4.yaml': + pos_emb_type = 'joint' + + self.mformer_cfg_path = Path(__file__).absolute().parent / cfg_fname + + check_if_file_exists_else_download(self.mformer_cfg_path, FILE2URL) + mformer_cfg = OmegaConf.load(self.mformer_cfg_path) + logging.info(f'Loading MotionFormer config from {self.mformer_cfg_path.absolute()}') + + # patch the cfg (from the default cfg defined in the repo `Motionformer/slowfast/config/defaults.py`) + mformer_cfg.VIT.ATTN_DROPOUT = 0.0 + mformer_cfg.VIT.POS_EMBED = pos_emb_type + mformer_cfg.VIT.USE_ORIGINAL_TRAJ_ATTN_CODE = True + mformer_cfg.VIT.APPROX_ATTN_TYPE = 'none' # guessing + mformer_cfg.VIT.APPROX_ATTN_DIM = 64 # from ckpt['cfg'] + + # finally init VisionTransformer with the cfg + super().__init__(mformer_cfg) + + # load the ckpt now if ckpt is provided and not from AVCLIPMoCo-pretrained ckpt + if (self.ckpt_path is not None) and (not was_pt_on_avclip): + _ckpt_load_status = self.load_state_dict(ckpt['model_state'], strict=False) + if len(_ckpt_load_status.missing_keys) > 0 or len( + _ckpt_load_status.unexpected_keys) > 0: + logging.warning(f'Loading exact vfeat_extractor ckpt from {self.ckpt_path} failed.' 
\ + f'Missing keys: {_ckpt_load_status.missing_keys}, ' \ + f'Unexpected keys: {_ckpt_load_status.unexpected_keys}') + else: + logging.info(f'Loading vfeat_extractor ckpt from {self.ckpt_path} succeeded.') + + if self.extract_features: + assert isinstance(self.norm, + nn.LayerNorm), 'early x[:, 1:, :] may not be safe for per-tr weights' + # pre-logits are Sequential(nn.Linear(emb, emd), act) and `act` is tanh but see the logger + self.pre_logits = nn.Identity() + # we don't need the classification head (saving memory) + self.head = nn.Identity() + self.head_drop = nn.Identity() + # avoiding code duplication (used only if agg_*_module is TransformerEncoderLayer) + transf_enc_layer_kwargs = dict( + d_model=self.embed_dim, + nhead=self.num_heads, + activation=nn.GELU(), + batch_first=True, + dim_feedforward=self.mlp_ratio * self.embed_dim, + dropout=self.drop_rate, + layer_norm_eps=1e-6, + norm_first=True, + ) + # define adapters if needed + if self.factorize_space_time: + if agg_space_module == 'TransformerEncoderLayer': + self.spatial_attn_agg = SpatialTransformerEncoderLayer( + **transf_enc_layer_kwargs) + elif agg_space_module == 'AveragePooling': + self.spatial_attn_agg = AveragePooling(avg_pattern='BS D t h w -> BS D t', + then_permute_pattern='BS D t -> BS t D') + if agg_time_module == 'TransformerEncoderLayer': + self.temp_attn_agg = TemporalTransformerEncoderLayer(**transf_enc_layer_kwargs) + elif agg_time_module == 'AveragePooling': + self.temp_attn_agg = AveragePooling(avg_pattern='BS t D -> BS D') + elif 'Identity' in agg_time_module: + self.temp_attn_agg = nn.Identity() + # define a global aggregation layer (aggregarate over segments) + self.add_global_repr = add_global_repr + if add_global_repr: + if agg_segments_module == 'TransformerEncoderLayer': + # we can reuse the same layer as for temporal factorization (B, dim_to_agg, D) -> (B, D) + # we need to add pos emb (PE) because previously we added the same PE for each segment + pos_max_len = max_segments if max_segments is not None else 16 # 16 = 10sec//0.64sec + 1 + self.global_attn_agg = TemporalTransformerEncoderLayer( + add_pos_emb=True, + pos_emb_drop=mformer_cfg.VIT.POS_DROPOUT, + pos_max_len=pos_max_len, + **transf_enc_layer_kwargs) + elif agg_segments_module == 'AveragePooling': + self.global_attn_agg = AveragePooling(avg_pattern='B S D -> B D') + + if was_pt_on_avclip: + # we need to filter out the state_dict of the AVCLIP model (has both A and V extractors) + # and keep only the state_dict of the feat extractor + ckpt_weights = dict() + for k, v in ckpt['state_dict'].items(): + if k.startswith(('module.v_encoder.', 'v_encoder.')): + k = k.replace('module.', '').replace('v_encoder.', '') + ckpt_weights[k] = v + _load_status = self.load_state_dict(ckpt_weights, strict=False) + if len(_load_status.missing_keys) > 0 or len(_load_status.unexpected_keys) > 0: + logging.warning(f'Loading exact vfeat_extractor ckpt from {self.ckpt_path} failed. 
\n' \ + f'Missing keys ({len(_load_status.missing_keys)}): ' \ + f'{_load_status.missing_keys}, \n' \ + f'Unexpected keys ({len(_load_status.unexpected_keys)}): ' \ + f'{_load_status.unexpected_keys} \n' \ + f'temp_attn_agg are expected to be missing if ckpt was pt contrastively.') + else: + logging.info(f'Loading vfeat_extractor ckpt from {self.ckpt_path} succeeded.') + + # patch_embed is not used in MotionFormer, only patch_embed_3d, because cfg.VIT.PATCH_SIZE_TEMP > 1 + # but it used to calculate the number of patches, so we need to set keep it + self.patch_embed.requires_grad_(False) + + def forward(self, x): + ''' + x is of shape (B, S, C, T, H, W) where S is the number of segments. + ''' + # Batch, Segments, Channels, T=frames, Height, Width + B, S, C, T, H, W = x.shape + # Motionformer expects a tensor of shape (1, B, C, T, H, W). + # The first dimension (1) is a dummy dimension to make the input tensor and won't be used: + # see `video_model_builder.video_input`. + # x = x.unsqueeze(0) # (1, B, S, C, T, H, W) + + orig_shape = (B, S, C, T, H, W) + x = x.view(B * S, C, T, H, W) # flatten batch and segments + x = self.forward_segments(x, orig_shape=orig_shape) + # unpack the segments (using rest dimensions to support different shapes e.g. (BS, D) or (BS, t, D)) + x = x.view(B, S, *x.shape[1:]) + # x is now of shape (B*S, D) or (B*S, t, D) if `self.temp_attn_agg` is `Identity` + + return x # x is (B, S, ...) + + def forward_segments(self, x, orig_shape: tuple) -> torch.Tensor: + '''x is of shape (1, BS, C, T, H, W) where S is the number of segments.''' + x, x_mask = self.forward_features(x) + + assert self.extract_features + + # (BS, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8 + x = x[:, + 1:, :] # without the CLS token for efficiency (should be safe for LayerNorm and FC) + x = self.norm(x) + x = self.pre_logits(x) + if self.factorize_space_time: + x = self.restore_spatio_temp_dims(x, orig_shape) # (B*S, D, t, h, w) <- (B*S, t*h*w, D) + + x = self.spatial_attn_agg(x, x_mask) # (B*S, t, D) + x = self.temp_attn_agg( + x) # (B*S, D) or (BS, t, D) if `self.temp_attn_agg` is `Identity` + + return x + + def restore_spatio_temp_dims(self, feats: torch.Tensor, orig_shape: tuple) -> torch.Tensor: + ''' + feats are of shape (B*S, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8 + Our goal is to make them of shape (B*S, t, h, w, D) where h, w are the spatial dimensions. + From `self.patch_embed_3d`, it follows that we could reshape feats with: + `feats.transpose(1, 2).view(B*S, D, t, h, w)` + ''' + B, S, C, T, H, W = orig_shape + D = self.embed_dim + + # num patches in each dimension + t = T // self.patch_embed_3d.z_block_size + h = self.patch_embed_3d.height + w = self.patch_embed_3d.width + + feats = feats.permute(0, 2, 1) # (B*S, D, T) + feats = feats.view(B * S, D, t, h, w) # (B*S, D, t, h, w) + + return feats + + +class BaseEncoderLayer(nn.TransformerEncoderLayer): + ''' + This is a wrapper around nn.TransformerEncoderLayer that adds a CLS token + to the sequence and outputs the CLS token's representation. + This base class parents both SpatialEncoderLayer and TemporalEncoderLayer for the RGB stream + and the FrequencyEncoderLayer and TemporalEncoderLayer for the audio stream stream. + We also, optionally, add a positional embedding to the input sequence which + allows to reuse it for global aggregation (of segments) for both streams. 
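+    In short: forward() takes x of shape (B, N, D) (plus an optional (B, N) keep-mask),
+    prepends a learnable CLS token, and returns that token's (B, D) representation.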
+ ''' + + def __init__(self, + add_pos_emb: bool = False, + pos_emb_drop: float = None, + pos_max_len: int = None, + *args_transformer_enc, + **kwargs_transformer_enc): + super().__init__(*args_transformer_enc, **kwargs_transformer_enc) + self.cls_token = nn.Parameter(torch.zeros(1, 1, self.self_attn.embed_dim)) + trunc_normal_(self.cls_token, std=.02) + + # add positional embedding + self.add_pos_emb = add_pos_emb + if add_pos_emb: + self.pos_max_len = 1 + pos_max_len # +1 (for CLS) + self.pos_emb = nn.Parameter(torch.zeros(1, self.pos_max_len, self.self_attn.embed_dim)) + self.pos_drop = nn.Dropout(pos_emb_drop) + trunc_normal_(self.pos_emb, std=.02) + + self.apply(self._init_weights) + + def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None): + ''' x is of shape (B, N, D); if provided x_mask is of shape (B, N)''' + batch_dim = x.shape[0] + + # add CLS token + cls_tokens = self.cls_token.expand(batch_dim, -1, -1) # expanding to match batch dimension + x = torch.cat((cls_tokens, x), dim=-2) # (batch_dim, 1+seq_len, D) + if x_mask is not None: + cls_mask = torch.ones((batch_dim, 1), dtype=torch.bool, + device=x_mask.device) # 1=keep; 0=mask + x_mask_w_cls = torch.cat((cls_mask, x_mask), dim=-1) # (batch_dim, 1+seq_len) + B, N = x_mask_w_cls.shape + # torch expects (N, N) or (B*num_heads, N, N) mask (sadness ahead); torch masks + x_mask_w_cls = x_mask_w_cls.reshape(B, 1, 1, N)\ + .expand(-1, self.self_attn.num_heads, N, -1)\ + .reshape(B * self.self_attn.num_heads, N, N) + assert x_mask_w_cls.dtype == x_mask_w_cls.bool().dtype, 'x_mask_w_cls.dtype != bool' + x_mask_w_cls = ~x_mask_w_cls # invert mask (1=mask) + else: + x_mask_w_cls = None + + # add positional embedding + if self.add_pos_emb: + seq_len = x.shape[ + 1] # (don't even think about moving it before the CLS token concatenation) + assert seq_len <= self.pos_max_len, f'Seq len ({seq_len}) > pos_max_len ({self.pos_max_len})' + x = x + self.pos_emb[:, :seq_len, :] + x = self.pos_drop(x) + + # apply encoder layer (calls nn.TransformerEncoderLayer.forward); + x = super().forward(src=x, src_mask=x_mask_w_cls) # (batch_dim, 1+seq_len, D) + + # CLS token is expected to hold spatial information for each frame + x = x[:, 0, :] # (batch_dim, D) + + return x + + def _init_weights(self, m): + if isinstance(m, nn.Linear): + trunc_normal_(m.weight, std=.02) + if isinstance(m, nn.Linear) and m.bias is not None: + nn.init.constant_(m.bias, 0) + elif isinstance(m, nn.LayerNorm): + nn.init.constant_(m.bias, 0) + nn.init.constant_(m.weight, 1.0) + + @torch.jit.ignore + def no_weight_decay(self): + return {'cls_token', 'pos_emb'} + + +class SpatialTransformerEncoderLayer(BaseEncoderLayer): + ''' Aggregates spatial dimensions by applying attention individually to each frame. ''' + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + + def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None) -> torch.Tensor: + ''' x is of shape (B*S, D, t, h, w) where S is the number of segments. + if specified x_mask (B*S, t, h, w), 0=masked, 1=kept + Returns a tensor of shape (B*S, t, D) pooling spatial information for each frame. 
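+        Internally, time is folded into the batch dimension and the h*w spatial tokens
+        of each frame are attended jointly, so the CLS token pools per-frame spatial
+        information only.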
''' + BS, D, t, h, w = x.shape + + # time as a batch dimension and flatten spatial dimensions as sequence + x = einops.rearrange(x, 'BS D t h w -> (BS t) (h w) D') + # similar to mask + if x_mask is not None: + x_mask = einops.rearrange(x_mask, 'BS t h w -> (BS t) (h w)') + + # apply encoder layer (BaseEncoderLayer.forward) - it will add CLS token and output its representation + x = super().forward(x=x, x_mask=x_mask) # (B*S*t, D) + + # reshape back to (B*S, t, D) + x = einops.rearrange(x, '(BS t) D -> BS t D', BS=BS, t=t) + + # (B*S, t, D) + return x + + +class TemporalTransformerEncoderLayer(BaseEncoderLayer): + ''' Aggregates temporal dimension with attention. Also used with pos emb as global aggregation + in both streams. ''' + + def __init__(self, *args, **kwargs): + super().__init__(*args, **kwargs) + + def forward(self, x): + ''' x is of shape (B*S, t, D) where S is the number of segments. + Returns a tensor of shape (B*S, D) pooling temporal information. ''' + BS, t, D = x.shape + + # apply encoder layer (BaseEncoderLayer.forward) - it will add CLS token and output its representation + x = super().forward(x) # (B*S, D) + + return x # (B*S, D) + + +class AveragePooling(nn.Module): + + def __init__(self, avg_pattern: str, then_permute_pattern: str = None) -> None: + ''' patterns are e.g. "bs t d -> bs d" ''' + super().__init__() + # TODO: need to register them as buffers (but fails because these are strings) + self.reduce_fn = 'mean' + self.avg_pattern = avg_pattern + self.then_permute_pattern = then_permute_pattern + + def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None) -> torch.Tensor: + x = einops.reduce(x, self.avg_pattern, self.reduce_fn) + if self.then_permute_pattern is not None: + x = einops.rearrange(x, self.then_permute_pattern) + return x diff --git a/postprocessing/mmaudio/ext/synchformer/synchformer.py b/postprocessing/mmaudio/ext/synchformer/synchformer.py new file mode 100644 index 0000000..8cd7026 --- /dev/null +++ b/postprocessing/mmaudio/ext/synchformer/synchformer.py @@ -0,0 +1,55 @@ +import logging +from typing import Any, Mapping + +import torch +from torch import nn + +from .motionformer import MotionFormer + + +class Synchformer(nn.Module): + + def __init__(self): + super().__init__() + + self.vfeat_extractor = MotionFormer(extract_features=True, + factorize_space_time=True, + agg_space_module='TransformerEncoderLayer', + agg_time_module='torch.nn.Identity', + add_global_repr=False) + + # self.vfeat_extractor = instantiate_from_config(vfeat_extractor) + # self.afeat_extractor = instantiate_from_config(afeat_extractor) + # # bridging the s3d latent dim (1024) into what is specified in the config + # # to match e.g. the transformer dim + # self.vproj = instantiate_from_config(vproj) + # self.aproj = instantiate_from_config(aproj) + # self.transformer = instantiate_from_config(transformer) + + def forward(self, vis): + B, S, Tv, C, H, W = vis.shape + vis = vis.permute(0, 1, 3, 2, 4, 5) # (B, S, C, Tv, H, W) + # feat extractors return a tuple of segment-level and global features (ignored for sync) + # (B, S, tv, D), e.g. 
(B, 7, 8, 768) + vis = self.vfeat_extractor(vis) + return vis + + def load_state_dict(self, sd: Mapping[str, Any], strict: bool = True): + # discard all entries except vfeat_extractor + sd = {k: v for k, v in sd.items() if k.startswith('vfeat_extractor')} + + return super().load_state_dict(sd, strict) + + +if __name__ == "__main__": + model = Synchformer().cuda().eval() + sd = torch.load('./ext_weights/synchformer_state_dict.pth', weights_only=True) + model.load_state_dict(sd) + + vid = torch.randn(2, 7, 16, 3, 224, 224).cuda() + features = model.extract_vfeats(vid, for_loop=False).detach().cpu() + print(features.shape) + + # extract and save the state dict only + # sd = torch.load('./ext_weights/sync_model_audioset.pt')['model'] + # torch.save(sd, './ext_weights/synchformer_state_dict.pth') diff --git a/postprocessing/mmaudio/ext/synchformer/utils.py b/postprocessing/mmaudio/ext/synchformer/utils.py new file mode 100644 index 0000000..a797eb9 --- /dev/null +++ b/postprocessing/mmaudio/ext/synchformer/utils.py @@ -0,0 +1,92 @@ +from hashlib import md5 +from pathlib import Path + +import requests +from tqdm import tqdm + +PARENT_LINK = 'https://a3s.fi/swift/v1/AUTH_a235c0f452d648828f745589cde1219a' +FNAME2LINK = { + # S3: Synchability: AudioSet (run 2) + '24-01-22T20-34-52.pt': + f'{PARENT_LINK}/sync/sync_models/24-01-22T20-34-52/24-01-22T20-34-52.pt', + 'cfg-24-01-22T20-34-52.yaml': + f'{PARENT_LINK}/sync/sync_models/24-01-22T20-34-52/cfg-24-01-22T20-34-52.yaml', + # S2: Synchformer: AudioSet (run 2) + '24-01-04T16-39-21.pt': + f'{PARENT_LINK}/sync/sync_models/24-01-04T16-39-21/24-01-04T16-39-21.pt', + 'cfg-24-01-04T16-39-21.yaml': + f'{PARENT_LINK}/sync/sync_models/24-01-04T16-39-21/cfg-24-01-04T16-39-21.yaml', + # S2: Synchformer: AudioSet (run 1) + '23-08-28T11-23-23.pt': + f'{PARENT_LINK}/sync/sync_models/23-08-28T11-23-23/23-08-28T11-23-23.pt', + 'cfg-23-08-28T11-23-23.yaml': + f'{PARENT_LINK}/sync/sync_models/23-08-28T11-23-23/cfg-23-08-28T11-23-23.yaml', + # S2: Synchformer: LRS3 (run 2) + '23-12-23T18-33-57.pt': + f'{PARENT_LINK}/sync/sync_models/23-12-23T18-33-57/23-12-23T18-33-57.pt', + 'cfg-23-12-23T18-33-57.yaml': + f'{PARENT_LINK}/sync/sync_models/23-12-23T18-33-57/cfg-23-12-23T18-33-57.yaml', + # S2: Synchformer: VGS (run 2) + '24-01-02T10-00-53.pt': + f'{PARENT_LINK}/sync/sync_models/24-01-02T10-00-53/24-01-02T10-00-53.pt', + 'cfg-24-01-02T10-00-53.yaml': + f'{PARENT_LINK}/sync/sync_models/24-01-02T10-00-53/cfg-24-01-02T10-00-53.yaml', + # SparseSync: ft VGGSound-Full + '22-09-21T21-00-52.pt': + f'{PARENT_LINK}/sync/sync_models/22-09-21T21-00-52/22-09-21T21-00-52.pt', + 'cfg-22-09-21T21-00-52.yaml': + f'{PARENT_LINK}/sync/sync_models/22-09-21T21-00-52/cfg-22-09-21T21-00-52.yaml', + # SparseSync: ft VGGSound-Sparse + '22-07-28T15-49-45.pt': + f'{PARENT_LINK}/sync/sync_models/22-07-28T15-49-45/22-07-28T15-49-45.pt', + 'cfg-22-07-28T15-49-45.yaml': + f'{PARENT_LINK}/sync/sync_models/22-07-28T15-49-45/cfg-22-07-28T15-49-45.yaml', + # SparseSync: only pt on LRS3 + '22-07-13T22-25-49.pt': + f'{PARENT_LINK}/sync/sync_models/22-07-13T22-25-49/22-07-13T22-25-49.pt', + 'cfg-22-07-13T22-25-49.yaml': + f'{PARENT_LINK}/sync/sync_models/22-07-13T22-25-49/cfg-22-07-13T22-25-49.yaml', + # SparseSync: feature extractors + 'ResNetAudio-22-08-04T09-51-04.pt': + f'{PARENT_LINK}/sync/ResNetAudio-22-08-04T09-51-04.pt', # 2s + 'ResNetAudio-22-08-03T23-14-49.pt': + f'{PARENT_LINK}/sync/ResNetAudio-22-08-03T23-14-49.pt', # 3s + 'ResNetAudio-22-08-03T23-14-28.pt': + 
f'{PARENT_LINK}/sync/ResNetAudio-22-08-03T23-14-28.pt', # 4s + 'ResNetAudio-22-06-24T08-10-33.pt': + f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T08-10-33.pt', # 5s + 'ResNetAudio-22-06-24T17-31-07.pt': + f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T17-31-07.pt', # 6s + 'ResNetAudio-22-06-24T23-57-11.pt': + f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T23-57-11.pt', # 7s + 'ResNetAudio-22-06-25T04-35-42.pt': + f'{PARENT_LINK}/sync/ResNetAudio-22-06-25T04-35-42.pt', # 8s +} + + +def check_if_file_exists_else_download(path, fname2link=FNAME2LINK, chunk_size=1024): + '''Checks if file exists, if not downloads it from the link to the path''' + path = Path(path) + if not path.exists(): + path.parent.mkdir(exist_ok=True, parents=True) + link = fname2link.get(path.name, None) + if link is None: + raise ValueError(f'Cant find the checkpoint file: {path}.', + f'Please download it manually and ensure the path exists.') + with requests.get(fname2link[path.name], stream=True) as r: + total_size = int(r.headers.get('content-length', 0)) + with tqdm(total=total_size, unit='B', unit_scale=True) as pbar: + with open(path, 'wb') as f: + for data in r.iter_content(chunk_size=chunk_size): + if data: + f.write(data) + pbar.update(chunk_size) + + +def get_md5sum(path): + hash_md5 = md5() + with open(path, 'rb') as f: + for chunk in iter(lambda: f.read(4096 * 8), b''): + hash_md5.update(chunk) + md5sum = hash_md5.hexdigest() + return md5sum diff --git a/postprocessing/mmaudio/ext/synchformer/video_model_builder.py b/postprocessing/mmaudio/ext/synchformer/video_model_builder.py new file mode 100644 index 0000000..5fab804 --- /dev/null +++ b/postprocessing/mmaudio/ext/synchformer/video_model_builder.py @@ -0,0 +1,277 @@ +#!/usr/bin/env python3 +# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. +# Copyright 2020 Ross Wightman +# Modified Model definition + +from collections import OrderedDict +from functools import partial + +import torch +import torch.nn as nn +from timm.layers import trunc_normal_ + +from . 
import vit_helper + + +class VisionTransformer(nn.Module): + """ Vision Transformer with support for patch or hybrid CNN input stage """ + + def __init__(self, cfg): + super().__init__() + self.img_size = cfg.DATA.TRAIN_CROP_SIZE + self.patch_size = cfg.VIT.PATCH_SIZE + self.in_chans = cfg.VIT.CHANNELS + if cfg.TRAIN.DATASET == "Epickitchens": + self.num_classes = [97, 300] + else: + self.num_classes = cfg.MODEL.NUM_CLASSES + self.embed_dim = cfg.VIT.EMBED_DIM + self.depth = cfg.VIT.DEPTH + self.num_heads = cfg.VIT.NUM_HEADS + self.mlp_ratio = cfg.VIT.MLP_RATIO + self.qkv_bias = cfg.VIT.QKV_BIAS + self.drop_rate = cfg.VIT.DROP + self.drop_path_rate = cfg.VIT.DROP_PATH + self.head_dropout = cfg.VIT.HEAD_DROPOUT + self.video_input = cfg.VIT.VIDEO_INPUT + self.temporal_resolution = cfg.VIT.TEMPORAL_RESOLUTION + self.use_mlp = cfg.VIT.USE_MLP + self.num_features = self.embed_dim + norm_layer = partial(nn.LayerNorm, eps=1e-6) + self.attn_drop_rate = cfg.VIT.ATTN_DROPOUT + self.head_act = cfg.VIT.HEAD_ACT + self.cfg = cfg + + # Patch Embedding + self.patch_embed = vit_helper.PatchEmbed(img_size=224, + patch_size=self.patch_size, + in_chans=self.in_chans, + embed_dim=self.embed_dim) + + # 3D Patch Embedding + self.patch_embed_3d = vit_helper.PatchEmbed3D(img_size=self.img_size, + temporal_resolution=self.temporal_resolution, + patch_size=self.patch_size, + in_chans=self.in_chans, + embed_dim=self.embed_dim, + z_block_size=self.cfg.VIT.PATCH_SIZE_TEMP) + self.patch_embed_3d.proj.weight.data = torch.zeros_like( + self.patch_embed_3d.proj.weight.data) + + # Number of patches + if self.video_input: + num_patches = self.patch_embed.num_patches * self.temporal_resolution + else: + num_patches = self.patch_embed.num_patches + self.num_patches = num_patches + + # CLS token + self.cls_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) + trunc_normal_(self.cls_token, std=.02) + + # Positional embedding + self.pos_embed = nn.Parameter( + torch.zeros(1, self.patch_embed.num_patches + 1, self.embed_dim)) + self.pos_drop = nn.Dropout(p=cfg.VIT.POS_DROPOUT) + trunc_normal_(self.pos_embed, std=.02) + + if self.cfg.VIT.POS_EMBED == "joint": + self.st_embed = nn.Parameter(torch.zeros(1, num_patches + 1, self.embed_dim)) + trunc_normal_(self.st_embed, std=.02) + elif self.cfg.VIT.POS_EMBED == "separate": + self.temp_embed = nn.Parameter(torch.zeros(1, self.temporal_resolution, self.embed_dim)) + + # Layer Blocks + dpr = [x.item() for x in torch.linspace(0, self.drop_path_rate, self.depth)] + if self.cfg.VIT.ATTN_LAYER == "divided": + self.blocks = nn.ModuleList([ + vit_helper.DividedSpaceTimeBlock( + attn_type=cfg.VIT.ATTN_LAYER, + dim=self.embed_dim, + num_heads=self.num_heads, + mlp_ratio=self.mlp_ratio, + qkv_bias=self.qkv_bias, + drop=self.drop_rate, + attn_drop=self.attn_drop_rate, + drop_path=dpr[i], + norm_layer=norm_layer, + ) for i in range(self.depth) + ]) + else: + self.blocks = nn.ModuleList([ + vit_helper.Block(attn_type=cfg.VIT.ATTN_LAYER, + dim=self.embed_dim, + num_heads=self.num_heads, + mlp_ratio=self.mlp_ratio, + qkv_bias=self.qkv_bias, + drop=self.drop_rate, + attn_drop=self.attn_drop_rate, + drop_path=dpr[i], + norm_layer=norm_layer, + use_original_code=self.cfg.VIT.USE_ORIGINAL_TRAJ_ATTN_CODE) + for i in range(self.depth) + ]) + self.norm = norm_layer(self.embed_dim) + + # MLP head + if self.use_mlp: + hidden_dim = self.embed_dim + if self.head_act == 'tanh': + # logging.info("Using TanH activation in MLP") + act = nn.Tanh() + elif self.head_act == 'gelu': + # logging.info("Using GELU 
activation in MLP") + act = nn.GELU() + else: + # logging.info("Using ReLU activation in MLP") + act = nn.ReLU() + self.pre_logits = nn.Sequential( + OrderedDict([ + ('fc', nn.Linear(self.embed_dim, hidden_dim)), + ('act', act), + ])) + else: + self.pre_logits = nn.Identity() + + # Classifier Head + self.head_drop = nn.Dropout(p=self.head_dropout) + if isinstance(self.num_classes, (list, )) and len(self.num_classes) > 1: + for a, i in enumerate(range(len(self.num_classes))): + setattr(self, "head%d" % a, nn.Linear(self.embed_dim, self.num_classes[i])) + else: + self.head = nn.Linear(self.embed_dim, + self.num_classes) if self.num_classes > 0 else nn.Identity() + + # Initialize weights + self.apply(self._init_weights) + + def _init_weights(self, m): + if isinstance(m, nn.Linear): + trunc_normal_(m.weight, std=.02) + if isinstance(m, nn.Linear) and m.bias is not None: + nn.init.constant_(m.bias, 0) + elif isinstance(m, nn.LayerNorm): + nn.init.constant_(m.bias, 0) + nn.init.constant_(m.weight, 1.0) + + @torch.jit.ignore + def no_weight_decay(self): + if self.cfg.VIT.POS_EMBED == "joint": + return {'pos_embed', 'cls_token', 'st_embed'} + else: + return {'pos_embed', 'cls_token', 'temp_embed'} + + def get_classifier(self): + return self.head + + def reset_classifier(self, num_classes, global_pool=''): + self.num_classes = num_classes + self.head = (nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity()) + + def forward_features(self, x): + # if self.video_input: + # x = x[0] + B = x.shape[0] + + # Tokenize input + # if self.cfg.VIT.PATCH_SIZE_TEMP > 1: + # for simplicity of mapping between content dimensions (input x) and token dims (after patching) + # we use the same trick as for AST (see modeling_ast.ASTModel.forward for the details): + + # apply patching on input + x = self.patch_embed_3d(x) + tok_mask = None + + # else: + # tok_mask = None + # # 2D tokenization + # if self.video_input: + # x = x.permute(0, 2, 1, 3, 4) + # (B, T, C, H, W) = x.shape + # x = x.reshape(B * T, C, H, W) + + # x = self.patch_embed(x) + + # if self.video_input: + # (B2, T2, D2) = x.shape + # x = x.reshape(B, T * T2, D2) + + # Append CLS token + cls_tokens = self.cls_token.expand(B, -1, -1) + x = torch.cat((cls_tokens, x), dim=1) + # if tok_mask is not None: + # # prepend 1(=keep) to the mask to account for the CLS token as well + # tok_mask = torch.cat((torch.ones_like(tok_mask[:, [0]]), tok_mask), dim=1) + + # Interpolate positinoal embeddings + # if self.cfg.DATA.TRAIN_CROP_SIZE != 224: + # pos_embed = self.pos_embed + # N = pos_embed.shape[1] - 1 + # npatch = int((x.size(1) - 1) / self.temporal_resolution) + # class_emb = pos_embed[:, 0] + # pos_embed = pos_embed[:, 1:] + # dim = x.shape[-1] + # pos_embed = torch.nn.functional.interpolate( + # pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2), + # scale_factor=math.sqrt(npatch / N), + # mode='bicubic', + # ) + # pos_embed = pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) + # new_pos_embed = torch.cat((class_emb.unsqueeze(0), pos_embed), dim=1) + # else: + new_pos_embed = self.pos_embed + npatch = self.patch_embed.num_patches + + # Add positional embeddings to input + if self.video_input: + if self.cfg.VIT.POS_EMBED == "separate": + cls_embed = self.pos_embed[:, 0, :].unsqueeze(1) + tile_pos_embed = new_pos_embed[:, 1:, :].repeat(1, self.temporal_resolution, 1) + tile_temporal_embed = self.temp_embed.repeat_interleave(npatch, 1) + total_pos_embed = tile_pos_embed + tile_temporal_embed + total_pos_embed = 
torch.cat([cls_embed, total_pos_embed], dim=1) + x = x + total_pos_embed + elif self.cfg.VIT.POS_EMBED == "joint": + x = x + self.st_embed + else: + # image input + x = x + new_pos_embed + + # Apply positional dropout + x = self.pos_drop(x) + + # Encoding using transformer layers + for i, blk in enumerate(self.blocks): + x = blk(x, + seq_len=npatch, + num_frames=self.temporal_resolution, + approx=self.cfg.VIT.APPROX_ATTN_TYPE, + num_landmarks=self.cfg.VIT.APPROX_ATTN_DIM, + tok_mask=tok_mask) + + ### v-iashin: I moved it to the forward pass + # x = self.norm(x)[:, 0] + # x = self.pre_logits(x) + ### + return x, tok_mask + + # def forward(self, x): + # x = self.forward_features(x) + # ### v-iashin: here. This should leave the same forward output as before + # x = self.norm(x)[:, 0] + # x = self.pre_logits(x) + # ### + # x = self.head_drop(x) + # if isinstance(self.num_classes, (list, )) and len(self.num_classes) > 1: + # output = [] + # for head in range(len(self.num_classes)): + # x_out = getattr(self, "head%d" % head)(x) + # if not self.training: + # x_out = torch.nn.functional.softmax(x_out, dim=-1) + # output.append(x_out) + # return output + # else: + # x = self.head(x) + # if not self.training: + # x = torch.nn.functional.softmax(x, dim=-1) + # return x diff --git a/postprocessing/mmaudio/ext/synchformer/vit_helper.py b/postprocessing/mmaudio/ext/synchformer/vit_helper.py new file mode 100644 index 0000000..6af730a --- /dev/null +++ b/postprocessing/mmaudio/ext/synchformer/vit_helper.py @@ -0,0 +1,399 @@ +#!/usr/bin/env python3 +# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. +# Copyright 2020 Ross Wightman +# Modified Model definition +"""Video models.""" + +import math + +import torch +import torch.nn as nn +from einops import rearrange, repeat +from timm.layers import to_2tuple +from torch import einsum +from torch.nn import functional as F + +default_cfgs = { + 'vit_1k': + 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth', + 'vit_1k_large': + 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_224-4ee7a4dc.pth', +} + + +def qkv_attn(q, k, v, tok_mask: torch.Tensor = None): + sim = einsum('b i d, b j d -> b i j', q, k) + # apply masking if provided, tok_mask is (B*S*H, N): 1s - keep; sim is (B*S*H, H, N, N) + if tok_mask is not None: + BSH, N = tok_mask.shape + sim = sim.masked_fill(tok_mask.view(BSH, 1, N) == 0, + float('-inf')) # 1 - broadcasts across N + attn = sim.softmax(dim=-1) + out = einsum('b i j, b j d -> b i d', attn, v) + return out + + +class DividedAttention(nn.Module): + + def __init__(self, dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0.): + super().__init__() + self.num_heads = num_heads + head_dim = dim // num_heads + self.scale = head_dim**-0.5 + self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) + self.proj = nn.Linear(dim, dim) + + # init to zeros + self.qkv.weight.data.fill_(0) + self.qkv.bias.data.fill_(0) + self.proj.weight.data.fill_(1) + self.proj.bias.data.fill_(0) + + self.attn_drop = nn.Dropout(attn_drop) + self.proj_drop = nn.Dropout(proj_drop) + + def forward(self, x, einops_from, einops_to, tok_mask: torch.Tensor = None, **einops_dims): + # num of heads variable + h = self.num_heads + + # project x to q, k, v vaalues + q, k, v = self.qkv(x).chunk(3, dim=-1) + q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) + if tok_mask is not None: + # replicate token mask across heads (b, n) 
-> (b, h, n) -> (b*h, n) -- same as qkv but w/o d + assert len(tok_mask.shape) == 2 + tok_mask = tok_mask.unsqueeze(1).expand(-1, h, -1).reshape(-1, tok_mask.shape[1]) + + # Scale q + q *= self.scale + + # Take out cls_q, cls_k, cls_v + (cls_q, q_), (cls_k, k_), (cls_v, v_) = map(lambda t: (t[:, 0:1], t[:, 1:]), (q, k, v)) + # the same for masking + if tok_mask is not None: + cls_mask, mask_ = tok_mask[:, 0:1], tok_mask[:, 1:] + else: + cls_mask, mask_ = None, None + + # let CLS token attend to key / values of all patches across time and space + cls_out = qkv_attn(cls_q, k, v, tok_mask=tok_mask) + + # rearrange across time or space + q_, k_, v_ = map(lambda t: rearrange(t, f'{einops_from} -> {einops_to}', **einops_dims), + (q_, k_, v_)) + + # expand CLS token keys and values across time or space and concat + r = q_.shape[0] // cls_k.shape[0] + cls_k, cls_v = map(lambda t: repeat(t, 'b () d -> (b r) () d', r=r), (cls_k, cls_v)) + + k_ = torch.cat((cls_k, k_), dim=1) + v_ = torch.cat((cls_v, v_), dim=1) + + # the same for masking (if provided) + if tok_mask is not None: + # since mask does not have the latent dim (d), we need to remove it from einops dims + mask_ = rearrange(mask_, f'{einops_from} -> {einops_to}'.replace(' d', ''), + **einops_dims) + cls_mask = repeat(cls_mask, 'b () -> (b r) ()', + r=r) # expand cls_mask across time or space + mask_ = torch.cat((cls_mask, mask_), dim=1) + + # attention + out = qkv_attn(q_, k_, v_, tok_mask=mask_) + + # merge back time or space + out = rearrange(out, f'{einops_to} -> {einops_from}', **einops_dims) + + # concat back the cls token + out = torch.cat((cls_out, out), dim=1) + + # merge back the heads + out = rearrange(out, '(b h) n d -> b n (h d)', h=h) + + ## to out + x = self.proj(out) + x = self.proj_drop(x) + return x + + +class DividedSpaceTimeBlock(nn.Module): + + def __init__(self, + dim=768, + num_heads=12, + attn_type='divided', + mlp_ratio=4., + qkv_bias=False, + drop=0., + attn_drop=0., + drop_path=0., + act_layer=nn.GELU, + norm_layer=nn.LayerNorm): + super().__init__() + + self.einops_from_space = 'b (f n) d' + self.einops_to_space = '(b f) n d' + self.einops_from_time = 'b (f n) d' + self.einops_to_time = '(b n) f d' + + self.norm1 = norm_layer(dim) + + self.attn = DividedAttention(dim, + num_heads=num_heads, + qkv_bias=qkv_bias, + attn_drop=attn_drop, + proj_drop=drop) + + self.timeattn = DividedAttention(dim, + num_heads=num_heads, + qkv_bias=qkv_bias, + attn_drop=attn_drop, + proj_drop=drop) + + # self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() + self.drop_path = nn.Identity() + self.norm2 = norm_layer(dim) + mlp_hidden_dim = int(dim * mlp_ratio) + self.mlp = Mlp(in_features=dim, + hidden_features=mlp_hidden_dim, + act_layer=act_layer, + drop=drop) + self.norm3 = norm_layer(dim) + + def forward(self, + x, + seq_len=196, + num_frames=8, + approx='none', + num_landmarks=128, + tok_mask: torch.Tensor = None): + time_output = self.timeattn(self.norm3(x), + self.einops_from_time, + self.einops_to_time, + n=seq_len, + tok_mask=tok_mask) + time_residual = x + time_output + + space_output = self.attn(self.norm1(time_residual), + self.einops_from_space, + self.einops_to_space, + f=num_frames, + tok_mask=tok_mask) + space_residual = time_residual + self.drop_path(space_output) + + x = space_residual + x = x + self.drop_path(self.mlp(self.norm2(x))) + return x + + +class Mlp(nn.Module): + + def __init__(self, + in_features, + hidden_features=None, + out_features=None, + act_layer=nn.GELU, + drop=0.): + super().__init__() + out_features = out_features or in_features + hidden_features = hidden_features or in_features + self.fc1 = nn.Linear(in_features, hidden_features) + self.act = act_layer() + self.fc2 = nn.Linear(hidden_features, out_features) + self.drop = nn.Dropout(drop) + + def forward(self, x): + x = self.fc1(x) + x = self.act(x) + x = self.drop(x) + x = self.fc2(x) + x = self.drop(x) + return x + + +class PatchEmbed(nn.Module): + """ Image to Patch Embedding + """ + + def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): + super().__init__() + img_size = img_size if type(img_size) is tuple else to_2tuple(img_size) + patch_size = img_size if type(patch_size) is tuple else to_2tuple(patch_size) + num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) + self.img_size = img_size + self.patch_size = patch_size + self.num_patches = num_patches + + self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) + + def forward(self, x): + B, C, H, W = x.shape + x = self.proj(x).flatten(2).transpose(1, 2) + return x + + +class PatchEmbed3D(nn.Module): + """ Image to Patch Embedding """ + + def __init__(self, + img_size=224, + temporal_resolution=4, + in_chans=3, + patch_size=16, + z_block_size=2, + embed_dim=768, + flatten=True): + super().__init__() + self.height = (img_size // patch_size) + self.width = (img_size // patch_size) + ### v-iashin: these two are incorrect + # self.frames = (temporal_resolution // z_block_size) + # self.num_patches = self.height * self.width * self.frames + self.z_block_size = z_block_size + ### + self.proj = nn.Conv3d(in_chans, + embed_dim, + kernel_size=(z_block_size, patch_size, patch_size), + stride=(z_block_size, patch_size, patch_size)) + self.flatten = flatten + + def forward(self, x): + B, C, T, H, W = x.shape + x = self.proj(x) + if self.flatten: + x = x.flatten(2).transpose(1, 2) + return x + + +class HeadMLP(nn.Module): + + def __init__(self, n_input, n_classes, n_hidden=512, p=0.1): + super(HeadMLP, self).__init__() + self.n_input = n_input + self.n_classes = n_classes + self.n_hidden = n_hidden + if n_hidden is None: + # use linear classifier + self.block_forward = nn.Sequential(nn.Dropout(p=p), + nn.Linear(n_input, n_classes, bias=True)) + else: + # use simple MLP classifier + self.block_forward = nn.Sequential(nn.Dropout(p=p), + nn.Linear(n_input, n_hidden, bias=True), + nn.BatchNorm1d(n_hidden), nn.ReLU(inplace=True), + nn.Dropout(p=p), + nn.Linear(n_hidden, n_classes, bias=True)) + print(f"Dropout-NLP: {p}") + 
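+    # Editor's sketch (not part of the original code): minimal usage, assuming a 768-d
+    # feature input and 10 target classes:
+    #   head = HeadMLP(n_input=768, n_classes=10)   # dropout -> linear -> BN -> ReLU -> dropout -> linear
+    #   logits = head(torch.randn(4, 768))          # -> (4, 10)
+    # Passing n_hidden=None reduces the head to a plain dropout + linear classifier.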
+ def forward(self, x): + return self.block_forward(x) + + +def _conv_filter(state_dict, patch_size=16): + """ convert patch embedding weight from manual patchify + linear proj to conv""" + out_dict = {} + for k, v in state_dict.items(): + if 'patch_embed.proj.weight' in k: + v = v.reshape((v.shape[0], 3, patch_size, patch_size)) + out_dict[k] = v + return out_dict + + +def adapt_input_conv(in_chans, conv_weight, agg='sum'): + conv_type = conv_weight.dtype + conv_weight = conv_weight.float() + O, I, J, K = conv_weight.shape + if in_chans == 1: + if I > 3: + assert conv_weight.shape[1] % 3 == 0 + # For models with space2depth stems + conv_weight = conv_weight.reshape(O, I // 3, 3, J, K) + conv_weight = conv_weight.sum(dim=2, keepdim=False) + else: + if agg == 'sum': + print("Summing conv1 weights") + conv_weight = conv_weight.sum(dim=1, keepdim=True) + else: + print("Averaging conv1 weights") + conv_weight = conv_weight.mean(dim=1, keepdim=True) + elif in_chans != 3: + if I != 3: + raise NotImplementedError('Weight format not supported by conversion.') + else: + if agg == 'sum': + print("Summing conv1 weights") + repeat = int(math.ceil(in_chans / 3)) + conv_weight = conv_weight.repeat(1, repeat, 1, 1)[:, :in_chans, :, :] + conv_weight *= (3 / float(in_chans)) + else: + print("Averaging conv1 weights") + conv_weight = conv_weight.mean(dim=1, keepdim=True) + conv_weight = conv_weight.repeat(1, in_chans, 1, 1) + conv_weight = conv_weight.to(conv_type) + return conv_weight + + +def load_pretrained(model, + cfg=None, + num_classes=1000, + in_chans=3, + filter_fn=None, + strict=True, + progress=False): + # Load state dict + assert (f"{cfg.VIT.PRETRAINED_WEIGHTS} not in [vit_1k, vit_1k_large]") + state_dict = torch.hub.load_state_dict_from_url(url=default_cfgs[cfg.VIT.PRETRAINED_WEIGHTS]) + + if filter_fn is not None: + state_dict = filter_fn(state_dict) + + input_convs = 'patch_embed.proj' + if input_convs is not None and in_chans != 3: + if isinstance(input_convs, str): + input_convs = (input_convs, ) + for input_conv_name in input_convs: + weight_name = input_conv_name + '.weight' + try: + state_dict[weight_name] = adapt_input_conv(in_chans, + state_dict[weight_name], + agg='avg') + print( + f'Converted input conv {input_conv_name} pretrained weights from 3 to {in_chans} channel(s)' + ) + except NotImplementedError as e: + del state_dict[weight_name] + strict = False + print( + f'Unable to convert pretrained {input_conv_name} weights, using random init for this layer.' + ) + + classifier_name = 'head' + label_offset = cfg.get('label_offset', 0) + pretrain_classes = 1000 + if num_classes != pretrain_classes: + # completely discard fully connected if model num_classes doesn't match pretrained weights + del state_dict[classifier_name + '.weight'] + del state_dict[classifier_name + '.bias'] + strict = False + elif label_offset > 0: + # special case for pretrained weights with an extra background class in pretrained weights + classifier_weight = state_dict[classifier_name + '.weight'] + state_dict[classifier_name + '.weight'] = classifier_weight[label_offset:] + classifier_bias = state_dict[classifier_name + '.bias'] + state_dict[classifier_name + '.bias'] = classifier_bias[label_offset:] + + loaded_state = state_dict + self_state = model.state_dict() + all_names = set(self_state.keys()) + saved_names = set([]) + for name, param in loaded_state.items(): + param = param + if 'module.' 
in name: + name = name.replace('module.', '') + if name in self_state.keys() and param.shape == self_state[name].shape: + saved_names.add(name) + self_state[name].copy_(param) + else: + print(f"didnt load: {name} of shape: {param.shape}") + print("Missing Keys:") + print(all_names - saved_names) diff --git a/postprocessing/mmaudio/mmaudio.py b/postprocessing/mmaudio/mmaudio.py new file mode 100644 index 0000000..e153b09 --- /dev/null +++ b/postprocessing/mmaudio/mmaudio.py @@ -0,0 +1,120 @@ +import gc +import logging + +import torch + +from .eval_utils import (ModelConfig, VideoInfo, all_model_cfg, generate, load_image, + load_video, make_video, setup_eval_logging) +from .model.flow_matching import FlowMatching +from .model.networks import MMAudio, get_my_mmaudio +from .model.sequence_config import SequenceConfig +from .model.utils.features_utils import FeaturesUtils + +persistent_offloadobj = None + +def get_model(persistent_models = False, verboseLevel = 1) -> tuple[MMAudio, FeaturesUtils, SequenceConfig]: + torch.backends.cuda.matmul.allow_tf32 = True + torch.backends.cudnn.allow_tf32 = True + + global device, persistent_offloadobj, persistent_net, persistent_features_utils, persistent_seq_cfg + + log = logging.getLogger() + + device = 'cpu' #"cuda" + # if torch.cuda.is_available(): + # device = 'cuda' + # elif torch.backends.mps.is_available(): + # device = 'mps' + # else: + # log.warning('CUDA/MPS are not available, running on CPU') + dtype = torch.bfloat16 + + model: ModelConfig = all_model_cfg['large_44k_v2'] + # model.download_if_needed() + + setup_eval_logging() + + seq_cfg = model.seq_cfg + if persistent_offloadobj == None: + from accelerate import init_empty_weights + # with init_empty_weights(): + net: MMAudio = get_my_mmaudio(model.model_name) + net.load_weights(torch.load(model.model_path, map_location=device, weights_only=True)) + net.to(device, dtype).eval() + log.info(f'Loaded weights from {model.model_path}') + feature_utils = FeaturesUtils(tod_vae_ckpt=model.vae_path, + synchformer_ckpt=model.synchformer_ckpt, + enable_conditions=True, + mode=model.mode, + bigvgan_vocoder_ckpt=model.bigvgan_16k_path, + need_vae_encoder=False) + feature_utils = feature_utils.to(device, dtype).eval() + feature_utils.device = "cuda" + + pipe = { "net" : net, "clip" : feature_utils.clip_model, "syncformer" : feature_utils.synchformer, "vocode" : feature_utils.tod.vocoder, "vae" : feature_utils.tod.vae } + from mmgp import offload + offloadobj = offload.profile(pipe, profile_no=4, verboseLevel=2) + if persistent_models: + persistent_offloadobj = offloadobj + persistent_net = net + persistent_features_utils = feature_utils + persistent_seq_cfg = seq_cfg + + else: + offloadobj = persistent_offloadobj + net = persistent_net + feature_utils = persistent_features_utils + seq_cfg = persistent_seq_cfg + + if not persistent_models: + persistent_offloadobj = None + persistent_net = None + persistent_features_utils = None + persistent_seq_cfg = None + + return net, feature_utils, seq_cfg, offloadobj + +@torch.inference_mode() +def video_to_audio(video, prompt: str, negative_prompt: str, seed: int, num_steps: int, + cfg_strength: float, duration: float, video_save_path , persistent_models = False, verboseLevel = 1): + + global device + + net, feature_utils, seq_cfg, offloadobj = get_model(persistent_models, verboseLevel ) + + rng = torch.Generator(device="cuda") + if seed >= 0: + rng.manual_seed(seed) + else: + rng.seed() + fm = FlowMatching(min_sigma=0, inference_mode='euler', num_steps=num_steps) + + 
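+    # Load the video and derive the CLIP and Synchformer frame tensors; the clip's
+    # actual duration is written back into seq_cfg so that the latent/clip/sync
+    # sequence lengths used by the network match the video before generation starts.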
video_info = load_video(video, duration) + clip_frames = video_info.clip_frames + sync_frames = video_info.sync_frames + duration = video_info.duration_sec + clip_frames = clip_frames.unsqueeze(0) + sync_frames = sync_frames.unsqueeze(0) + seq_cfg.duration = duration + net.update_seq_lengths(seq_cfg.latent_seq_len, seq_cfg.clip_seq_len, seq_cfg.sync_seq_len) + + audios = generate(clip_frames, + sync_frames, [prompt], + negative_text=[negative_prompt], + feature_utils=feature_utils, + net=net, + fm=fm, + rng=rng, + cfg_strength=cfg_strength, + offloadobj = offloadobj + ) + audio = audios.float().cpu()[0] + + make_video(video, video_info, video_save_path, audio, sampling_rate=seq_cfg.sampling_rate) + offloadobj.unload_all() + if not persistent_models: + offloadobj.release() + + torch.cuda.empty_cache() + gc.collect() + return video_save_path diff --git a/postprocessing/mmaudio/model/__init__.py b/postprocessing/mmaudio/model/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/model/embeddings.py b/postprocessing/mmaudio/model/embeddings.py new file mode 100644 index 0000000..297feb4 --- /dev/null +++ b/postprocessing/mmaudio/model/embeddings.py @@ -0,0 +1,49 @@ +import torch +import torch.nn as nn + +# https://github.com/facebookresearch/DiT + + +class TimestepEmbedder(nn.Module): + """ + Embeds scalar timesteps into vector representations. + """ + + def __init__(self, dim, frequency_embedding_size, max_period): + super().__init__() + self.mlp = nn.Sequential( + nn.Linear(frequency_embedding_size, dim), + nn.SiLU(), + nn.Linear(dim, dim), + ) + self.dim = dim + self.max_period = max_period + assert dim % 2 == 0, 'dim must be even.' + + with torch.autocast('cuda', enabled=False): + self.freqs = nn.Buffer( + 1.0 / (10000**(torch.arange(0, frequency_embedding_size, 2, dtype=torch.float32) / + frequency_embedding_size)), + persistent=False) + freq_scale = 10000 / max_period + self.freqs = freq_scale * self.freqs + + def timestep_embedding(self, t): + """ + Create sinusoidal timestep embeddings. + :param t: a 1-D Tensor of N indices, one per batch element. + These may be fractional. + :param dim: the dimension of the output. + :param max_period: controls the minimum frequency of the embeddings. + :return: an (N, D) Tensor of positional embeddings. 
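+        Note: `dim` and `max_period` are taken from the module attributes set in
+        __init__; the frequency table `self.freqs` is precomputed there and rescaled
+        by 10000 / max_period.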
+ """ + # https://github.com/openai/glide-text2im/blob/main/glide_text2im/nn.py + + args = t[:, None].float() * self.freqs[None] + embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) + return embedding + + def forward(self, t): + t_freq = self.timestep_embedding(t).to(t.dtype) + t_emb = self.mlp(t_freq) + return t_emb diff --git a/postprocessing/mmaudio/model/flow_matching.py b/postprocessing/mmaudio/model/flow_matching.py new file mode 100644 index 0000000..e7c65de --- /dev/null +++ b/postprocessing/mmaudio/model/flow_matching.py @@ -0,0 +1,71 @@ +import logging +from typing import Callable, Optional + +import torch +from torchdiffeq import odeint + +log = logging.getLogger() + + +# Partially from https://github.com/gle-bellier/flow-matching +class FlowMatching: + + def __init__(self, min_sigma: float = 0.0, inference_mode='euler', num_steps: int = 25): + # inference_mode: 'euler' or 'adaptive' + # num_steps: number of steps in the euler inference mode + super().__init__() + self.min_sigma = min_sigma + self.inference_mode = inference_mode + self.num_steps = num_steps + + # self.fm = ExactOptimalTransportConditionalFlowMatcher(sigma=min_sigma) + + assert self.inference_mode in ['euler', 'adaptive'] + if self.inference_mode == 'adaptive' and num_steps > 0: + log.info('The number of steps is ignored in adaptive inference mode ') + + def get_conditional_flow(self, x0: torch.Tensor, x1: torch.Tensor, + t: torch.Tensor) -> torch.Tensor: + # which is psi_t(x), eq 22 in flow matching for generative models + t = t[:, None, None].expand_as(x0) + return (1 - (1 - self.min_sigma) * t) * x0 + t * x1 + + def loss(self, predicted_v: torch.Tensor, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor: + # return the mean error without reducing the batch dimension + reduce_dim = list(range(1, len(predicted_v.shape))) + target_v = x1 - (1 - self.min_sigma) * x0 + return (predicted_v - target_v).pow(2).mean(dim=reduce_dim) + + def get_x0_xt_c( + self, + x1: torch.Tensor, + t: torch.Tensor, + Cs: list[torch.Tensor], + generator: Optional[torch.Generator] = None + ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: + x0 = torch.empty_like(x1).normal_(generator=generator) + + xt = self.get_conditional_flow(x0, x1, t) + return x0, x1, xt, Cs + + def to_prior(self, fn: Callable, x1: torch.Tensor) -> torch.Tensor: + return self.run_t0_to_t1(fn, x1, 1, 0) + + def to_data(self, fn: Callable, x0: torch.Tensor) -> torch.Tensor: + return self.run_t0_to_t1(fn, x0, 0, 1) + + def run_t0_to_t1(self, fn: Callable, x0: torch.Tensor, t0: float, t1: float) -> torch.Tensor: + # fn: a function that takes (t, x) and returns the direction x0->x1 + + if self.inference_mode == 'adaptive': + return odeint(fn, x0, torch.tensor([t0, t1], device=x0.device, dtype=x0.dtype)) + elif self.inference_mode == 'euler': + x = x0 + steps = torch.linspace(t0, t1 - self.min_sigma, self.num_steps + 1) + for ti, t in enumerate(steps[:-1]): + flow = fn(t, x) + next_t = steps[ti + 1] + dt = next_t - t + x = x + dt * flow + + return x diff --git a/postprocessing/mmaudio/model/low_level.py b/postprocessing/mmaudio/model/low_level.py new file mode 100644 index 0000000..c8326a8 --- /dev/null +++ b/postprocessing/mmaudio/model/low_level.py @@ -0,0 +1,95 @@ +import torch +from torch import nn +from torch.nn import functional as F + + +class ChannelLastConv1d(nn.Conv1d): + + def forward(self, x: torch.Tensor) -> torch.Tensor: + x = x.permute(0, 2, 1) + x = super().forward(x) + x = x.permute(0, 2, 1) + return x + + +# 
https://github.com/Stability-AI/sd3-ref +class MLP(nn.Module): + + def __init__( + self, + dim: int, + hidden_dim: int, + multiple_of: int = 256, + ): + """ + Initialize the FeedForward module. + + Args: + dim (int): Input dimension. + hidden_dim (int): Hidden dimension of the feedforward layer. + multiple_of (int): Value to ensure hidden dimension is a multiple of this value. + + Attributes: + w1 (ColumnParallelLinear): Linear transformation for the first layer. + w2 (RowParallelLinear): Linear transformation for the second layer. + w3 (ColumnParallelLinear): Linear transformation for the third layer. + + """ + super().__init__() + hidden_dim = int(2 * hidden_dim / 3) + hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) + + self.w1 = nn.Linear(dim, hidden_dim, bias=False) + self.w2 = nn.Linear(hidden_dim, dim, bias=False) + self.w3 = nn.Linear(dim, hidden_dim, bias=False) + + def forward(self, x): + return self.w2(F.silu(self.w1(x)) * self.w3(x)) + + +class ConvMLP(nn.Module): + + def __init__( + self, + dim: int, + hidden_dim: int, + multiple_of: int = 256, + kernel_size: int = 3, + padding: int = 1, + ): + """ + Initialize the FeedForward module. + + Args: + dim (int): Input dimension. + hidden_dim (int): Hidden dimension of the feedforward layer. + multiple_of (int): Value to ensure hidden dimension is a multiple of this value. + + Attributes: + w1 (ColumnParallelLinear): Linear transformation for the first layer. + w2 (RowParallelLinear): Linear transformation for the second layer. + w3 (ColumnParallelLinear): Linear transformation for the third layer. + + """ + super().__init__() + hidden_dim = int(2 * hidden_dim / 3) + hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of) + + self.w1 = ChannelLastConv1d(dim, + hidden_dim, + bias=False, + kernel_size=kernel_size, + padding=padding) + self.w2 = ChannelLastConv1d(hidden_dim, + dim, + bias=False, + kernel_size=kernel_size, + padding=padding) + self.w3 = ChannelLastConv1d(dim, + hidden_dim, + bias=False, + kernel_size=kernel_size, + padding=padding) + + def forward(self, x): + return self.w2(F.silu(self.w1(x)) * self.w3(x)) diff --git a/postprocessing/mmaudio/model/networks.py b/postprocessing/mmaudio/model/networks.py new file mode 100644 index 0000000..d8a1cc0 --- /dev/null +++ b/postprocessing/mmaudio/model/networks.py @@ -0,0 +1,477 @@ +import logging +from dataclasses import dataclass +from typing import Optional + +import torch +import torch.nn as nn +import torch.nn.functional as F + +from ..ext.rotary_embeddings import compute_rope_rotations +from .embeddings import TimestepEmbedder +from .low_level import MLP, ChannelLastConv1d, ConvMLP +from .transformer_layers import (FinalBlock, JointBlock, MMDitSingleBlock) + +log = logging.getLogger() + + +@dataclass +class PreprocessedConditions: + clip_f: torch.Tensor + sync_f: torch.Tensor + text_f: torch.Tensor + clip_f_c: torch.Tensor + text_f_c: torch.Tensor + + +# Partially from https://github.com/facebookresearch/DiT +class MMAudio(nn.Module): + + def __init__(self, + *, + latent_dim: int, + clip_dim: int, + sync_dim: int, + text_dim: int, + hidden_dim: int, + depth: int, + fused_depth: int, + num_heads: int, + mlp_ratio: float = 4.0, + latent_seq_len: int, + clip_seq_len: int, + sync_seq_len: int, + text_seq_len: int = 77, + latent_mean: Optional[torch.Tensor] = None, + latent_std: Optional[torch.Tensor] = None, + empty_string_feat: Optional[torch.Tensor] = None, + v2: bool = False) -> None: + super().__init__() + + self.v2 = v2 + 
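+        # Cache the model dimensions and the default sequence lengths; the sequence
+        # lengths can be overridden later via update_seq_lengths() when the target
+        # audio duration changes.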
self.latent_dim = latent_dim + self._latent_seq_len = latent_seq_len + self._clip_seq_len = clip_seq_len + self._sync_seq_len = sync_seq_len + self._text_seq_len = text_seq_len + self.hidden_dim = hidden_dim + self.num_heads = num_heads + + if v2: + self.audio_input_proj = nn.Sequential( + ChannelLastConv1d(latent_dim, hidden_dim, kernel_size=7, padding=3), + nn.SiLU(), + ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=7, padding=3), + ) + + self.clip_input_proj = nn.Sequential( + nn.Linear(clip_dim, hidden_dim), + nn.SiLU(), + ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), + ) + + self.sync_input_proj = nn.Sequential( + ChannelLastConv1d(sync_dim, hidden_dim, kernel_size=7, padding=3), + nn.SiLU(), + ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), + ) + + self.text_input_proj = nn.Sequential( + nn.Linear(text_dim, hidden_dim), + nn.SiLU(), + MLP(hidden_dim, hidden_dim * 4), + ) + else: + self.audio_input_proj = nn.Sequential( + ChannelLastConv1d(latent_dim, hidden_dim, kernel_size=7, padding=3), + nn.SELU(), + ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=7, padding=3), + ) + + self.clip_input_proj = nn.Sequential( + nn.Linear(clip_dim, hidden_dim), + ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), + ) + + self.sync_input_proj = nn.Sequential( + ChannelLastConv1d(sync_dim, hidden_dim, kernel_size=7, padding=3), + nn.SELU(), + ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1), + ) + + self.text_input_proj = nn.Sequential( + nn.Linear(text_dim, hidden_dim), + MLP(hidden_dim, hidden_dim * 4), + ) + + self.clip_cond_proj = nn.Linear(hidden_dim, hidden_dim) + self.text_cond_proj = nn.Linear(hidden_dim, hidden_dim) + self.global_cond_mlp = MLP(hidden_dim, hidden_dim * 4) + # each synchformer output segment has 8 feature frames + self.sync_pos_emb = nn.Parameter(torch.zeros((1, 1, 8, sync_dim))) + + self.final_layer = FinalBlock(hidden_dim, latent_dim) + + if v2: + self.t_embed = TimestepEmbedder(hidden_dim, + frequency_embedding_size=hidden_dim, + max_period=1) + else: + self.t_embed = TimestepEmbedder(hidden_dim, + frequency_embedding_size=256, + max_period=10000) + self.joint_blocks = nn.ModuleList([ + JointBlock(hidden_dim, + num_heads, + mlp_ratio=mlp_ratio, + pre_only=(i == depth - fused_depth - 1)) for i in range(depth - fused_depth) + ]) + + self.fused_blocks = nn.ModuleList([ + MMDitSingleBlock(hidden_dim, num_heads, mlp_ratio=mlp_ratio, kernel_size=3, padding=1) + for i in range(fused_depth) + ]) + + if latent_mean is None: + # these values are not meant to be used + # if you don't provide mean/std here, we should load them later from a checkpoint + assert latent_std is None + latent_mean = torch.ones(latent_dim).view(1, 1, -1).fill_(float('nan')) + latent_std = torch.ones(latent_dim).view(1, 1, -1).fill_(float('nan')) + else: + assert latent_std is not None + assert latent_mean.numel() == latent_dim, f'{latent_mean.numel()=} != {latent_dim=}' + if empty_string_feat is None: + empty_string_feat = torch.zeros((text_seq_len, text_dim)) + self.latent_mean = nn.Parameter(latent_mean.view(1, 1, -1), requires_grad=False) + self.latent_std = nn.Parameter(latent_std.view(1, 1, -1), requires_grad=False) + + self.empty_string_feat = nn.Parameter(empty_string_feat, requires_grad=False) + self.empty_clip_feat = nn.Parameter(torch.zeros(1, clip_dim), requires_grad=True) + self.empty_sync_feat = nn.Parameter(torch.zeros(1, sync_dim), requires_grad=True) + + self.initialize_weights() + self.initialize_rotations() + + def 
initialize_rotations(self): + base_freq = 1.0 + latent_rot = compute_rope_rotations(self._latent_seq_len, + self.hidden_dim // self.num_heads, + 10000, + freq_scaling=base_freq, + device=self.device) + clip_rot = compute_rope_rotations(self._clip_seq_len, + self.hidden_dim // self.num_heads, + 10000, + freq_scaling=base_freq * self._latent_seq_len / + self._clip_seq_len, + device=self.device) + + self.latent_rot = latent_rot #, persistent=False) + self.clip_rot = clip_rot #, persistent=False) + + def update_seq_lengths(self, latent_seq_len: int, clip_seq_len: int, sync_seq_len: int) -> None: + self._latent_seq_len = latent_seq_len + self._clip_seq_len = clip_seq_len + self._sync_seq_len = sync_seq_len + self.initialize_rotations() + + def initialize_weights(self): + + def _basic_init(module): + if isinstance(module, nn.Linear): + torch.nn.init.xavier_uniform_(module.weight) + if module.bias is not None: + nn.init.constant_(module.bias, 0) + + self.apply(_basic_init) + + # Initialize timestep embedding MLP: + nn.init.normal_(self.t_embed.mlp[0].weight, std=0.02) + nn.init.normal_(self.t_embed.mlp[2].weight, std=0.02) + + # Zero-out adaLN modulation layers in DiT blocks: + for block in self.joint_blocks: + nn.init.constant_(block.latent_block.adaLN_modulation[-1].weight, 0) + nn.init.constant_(block.latent_block.adaLN_modulation[-1].bias, 0) + nn.init.constant_(block.clip_block.adaLN_modulation[-1].weight, 0) + nn.init.constant_(block.clip_block.adaLN_modulation[-1].bias, 0) + nn.init.constant_(block.text_block.adaLN_modulation[-1].weight, 0) + nn.init.constant_(block.text_block.adaLN_modulation[-1].bias, 0) + for block in self.fused_blocks: + nn.init.constant_(block.adaLN_modulation[-1].weight, 0) + nn.init.constant_(block.adaLN_modulation[-1].bias, 0) + + # Zero-out output layers: + nn.init.constant_(self.final_layer.adaLN_modulation[-1].weight, 0) + nn.init.constant_(self.final_layer.adaLN_modulation[-1].bias, 0) + nn.init.constant_(self.final_layer.conv.weight, 0) + nn.init.constant_(self.final_layer.conv.bias, 0) + + # empty string feat shall be initialized by a CLIP encoder + nn.init.constant_(self.sync_pos_emb, 0) + nn.init.constant_(self.empty_clip_feat, 0) + nn.init.constant_(self.empty_sync_feat, 0) + + def normalize(self, x: torch.Tensor) -> torch.Tensor: + # return (x - self.latent_mean) / self.latent_std + return x.sub_(self.latent_mean).div_(self.latent_std) + + def unnormalize(self, x: torch.Tensor) -> torch.Tensor: + # return x * self.latent_std + self.latent_mean + return x.mul_(self.latent_std).add_(self.latent_mean) + + def preprocess_conditions(self, clip_f: torch.Tensor, sync_f: torch.Tensor, + text_f: torch.Tensor) -> PreprocessedConditions: + """ + cache computations that do not depend on the latent/time step + i.e., the features are reused over steps during inference + """ + assert clip_f.shape[1] == self._clip_seq_len, f'{clip_f.shape=} {self._clip_seq_len=}' + assert sync_f.shape[1] == self._sync_seq_len, f'{sync_f.shape=} {self._sync_seq_len=}' + assert text_f.shape[1] == self._text_seq_len, f'{text_f.shape=} {self._text_seq_len=}' + + bs = clip_f.shape[0] + + # B * num_segments (24) * 8 * 768 + num_sync_segments = self._sync_seq_len // 8 + sync_f = sync_f.view(bs, num_sync_segments, 8, -1) + self.sync_pos_emb + sync_f = sync_f.flatten(1, 2) # (B, VN, D) + + # extend vf to match x + clip_f = self.clip_input_proj(clip_f) # (B, VN, D) + sync_f = self.sync_input_proj(sync_f) # (B, VN, D) + text_f = self.text_input_proj(text_f) # (B, VN, D) + + # upsample the sync 
features to match the audio + sync_f = sync_f.transpose(1, 2) # (B, D, VN) + sync_f = F.interpolate(sync_f, size=self._latent_seq_len, mode='nearest-exact') + sync_f = sync_f.transpose(1, 2) # (B, N, D) + + # get conditional features from the clip side + clip_f_c = self.clip_cond_proj(clip_f.mean(dim=1)) # (B, D) + text_f_c = self.text_cond_proj(text_f.mean(dim=1)) # (B, D) + + return PreprocessedConditions(clip_f=clip_f, + sync_f=sync_f, + text_f=text_f, + clip_f_c=clip_f_c, + text_f_c=text_f_c) + + def predict_flow(self, latent: torch.Tensor, t: torch.Tensor, + conditions: PreprocessedConditions) -> torch.Tensor: + """ + for non-cacheable computations + """ + assert latent.shape[1] == self._latent_seq_len, f'{latent.shape=} {self._latent_seq_len=}' + + clip_f = conditions.clip_f + sync_f = conditions.sync_f + text_f = conditions.text_f + clip_f_c = conditions.clip_f_c + text_f_c = conditions.text_f_c + + latent = self.audio_input_proj(latent) # (B, N, D) + global_c = self.global_cond_mlp(clip_f_c + text_f_c) # (B, D) + + global_c = self.t_embed(t).unsqueeze(1) + global_c.unsqueeze(1) # (B, D) + extended_c = global_c + sync_f + + + + self.latent_rot = self.latent_rot.to("cuda") + self.clip_rot = self.clip_rot.to("cuda") + for block in self.joint_blocks: + latent, clip_f, text_f = block(latent, clip_f, text_f, global_c, extended_c, + self.latent_rot, self.clip_rot) # (B, N, D) + + for block in self.fused_blocks: + latent = block(latent, extended_c, self.latent_rot) + self.latent_rot = self.latent_rot.to("cpu") + self.clip_rot = self.clip_rot.to("cpu") + + # should be extended_c; this is a minor implementation error #55 + flow = self.final_layer(latent, global_c) # (B, N, out_dim), remove t + return flow + + def forward(self, latent: torch.Tensor, clip_f: torch.Tensor, sync_f: torch.Tensor, + text_f: torch.Tensor, t: torch.Tensor) -> torch.Tensor: + """ + latent: (B, N, C) + vf: (B, T, C_V) + t: (B,) + """ + conditions = self.preprocess_conditions(clip_f, sync_f, text_f) + flow = self.predict_flow(latent, t, conditions) + return flow + + def get_empty_string_sequence(self, bs: int) -> torch.Tensor: + return self.empty_string_feat.unsqueeze(0).expand(bs, -1, -1) + + def get_empty_clip_sequence(self, bs: int) -> torch.Tensor: + return self.empty_clip_feat.unsqueeze(0).expand(bs, self._clip_seq_len, -1) + + def get_empty_sync_sequence(self, bs: int) -> torch.Tensor: + return self.empty_sync_feat.unsqueeze(0).expand(bs, self._sync_seq_len, -1) + + def get_empty_conditions( + self, + bs: int, + *, + negative_text_features: Optional[torch.Tensor] = None) -> PreprocessedConditions: + if negative_text_features is not None: + empty_text = negative_text_features + else: + empty_text = self.get_empty_string_sequence(1) + + empty_clip = self.get_empty_clip_sequence(1) + empty_sync = self.get_empty_sync_sequence(1) + conditions = self.preprocess_conditions(empty_clip, empty_sync, empty_text) + conditions.clip_f = conditions.clip_f.expand(bs, -1, -1) + conditions.sync_f = conditions.sync_f.expand(bs, -1, -1) + conditions.clip_f_c = conditions.clip_f_c.expand(bs, -1) + if negative_text_features is None: + conditions.text_f = conditions.text_f.expand(bs, -1, -1) + conditions.text_f_c = conditions.text_f_c.expand(bs, -1) + + return conditions + + def ode_wrapper(self, t: torch.Tensor, latent: torch.Tensor, conditions: PreprocessedConditions, + empty_conditions: PreprocessedConditions, cfg_strength: float) -> torch.Tensor: + t = t * torch.ones(len(latent), device=latent.device, dtype=latent.dtype) + + if 
cfg_strength < 1.0: + return self.predict_flow(latent, t, conditions) + else: + return (cfg_strength * self.predict_flow(latent, t, conditions) + + (1 - cfg_strength) * self.predict_flow(latent, t, empty_conditions)) + + def load_weights(self, src_dict) -> None: + if 't_embed.freqs' in src_dict: + del src_dict['t_embed.freqs'] + if 'latent_rot' in src_dict: + del src_dict['latent_rot'] + if 'clip_rot' in src_dict: + del src_dict['clip_rot'] + + a,b = self.load_state_dict(src_dict, strict=True, assign= True) + pass + + @property + def device(self) -> torch.device: + return self.latent_mean.device + + @property + def latent_seq_len(self) -> int: + return self._latent_seq_len + + @property + def clip_seq_len(self) -> int: + return self._clip_seq_len + + @property + def sync_seq_len(self) -> int: + return self._sync_seq_len + + +def small_16k(**kwargs) -> MMAudio: + num_heads = 7 + return MMAudio(latent_dim=20, + clip_dim=1024, + sync_dim=768, + text_dim=1024, + hidden_dim=64 * num_heads, + depth=12, + fused_depth=8, + num_heads=num_heads, + latent_seq_len=250, + clip_seq_len=64, + sync_seq_len=192, + **kwargs) + + +def small_44k(**kwargs) -> MMAudio: + num_heads = 7 + return MMAudio(latent_dim=40, + clip_dim=1024, + sync_dim=768, + text_dim=1024, + hidden_dim=64 * num_heads, + depth=12, + fused_depth=8, + num_heads=num_heads, + latent_seq_len=345, + clip_seq_len=64, + sync_seq_len=192, + **kwargs) + + +def medium_44k(**kwargs) -> MMAudio: + num_heads = 14 + return MMAudio(latent_dim=40, + clip_dim=1024, + sync_dim=768, + text_dim=1024, + hidden_dim=64 * num_heads, + depth=12, + fused_depth=8, + num_heads=num_heads, + latent_seq_len=345, + clip_seq_len=64, + sync_seq_len=192, + **kwargs) + + +def large_44k(**kwargs) -> MMAudio: + num_heads = 14 + return MMAudio(latent_dim=40, + clip_dim=1024, + sync_dim=768, + text_dim=1024, + hidden_dim=64 * num_heads, + depth=21, + fused_depth=14, + num_heads=num_heads, + latent_seq_len=345, + clip_seq_len=64, + sync_seq_len=192, + **kwargs) + + +def large_44k_v2(**kwargs) -> MMAudio: + num_heads = 14 + return MMAudio(latent_dim=40, + clip_dim=1024, + sync_dim=768, + text_dim=1024, + hidden_dim=64 * num_heads, + depth=21, + fused_depth=14, + num_heads=num_heads, + latent_seq_len=345, + clip_seq_len=64, + sync_seq_len=192, + v2=True, + **kwargs) + + +def get_my_mmaudio(name: str, **kwargs) -> MMAudio: + if name == 'small_16k': + return small_16k(**kwargs) + if name == 'small_44k': + return small_44k(**kwargs) + if name == 'medium_44k': + return medium_44k(**kwargs) + if name == 'large_44k': + return large_44k(**kwargs) + if name == 'large_44k_v2': + return large_44k_v2(**kwargs) + + raise ValueError(f'Unknown model name: {name}') + + +if __name__ == '__main__': + network = get_my_mmaudio('small_16k') + + # print the number of parameters in terms of millions + num_params = sum(p.numel() for p in network.parameters()) / 1e6 + print(f'Number of parameters: {num_params:.2f}M') diff --git a/postprocessing/mmaudio/model/sequence_config.py b/postprocessing/mmaudio/model/sequence_config.py new file mode 100644 index 0000000..1426901 --- /dev/null +++ b/postprocessing/mmaudio/model/sequence_config.py @@ -0,0 +1,58 @@ +import dataclasses +import math + + +@dataclasses.dataclass +class SequenceConfig: + # general + duration: float + + # audio + sampling_rate: int + spectrogram_frame_rate: int + latent_downsample_rate: int = 2 + + # visual + clip_frame_rate: int = 8 + sync_frame_rate: int = 25 + sync_num_frames_per_segment: int = 16 + sync_step_size: int = 8 + 
sync_downsample_rate: int = 2 + + @property + def num_audio_frames(self) -> int: + # we need an integer number of latents + return self.latent_seq_len * self.spectrogram_frame_rate * self.latent_downsample_rate + + @property + def latent_seq_len(self) -> int: + return int( + math.ceil(self.duration * self.sampling_rate / self.spectrogram_frame_rate / + self.latent_downsample_rate)) + + @property + def clip_seq_len(self) -> int: + return int(self.duration * self.clip_frame_rate) + + @property + def sync_seq_len(self) -> int: + num_frames = self.duration * self.sync_frame_rate + num_segments = (num_frames - self.sync_num_frames_per_segment) // self.sync_step_size + 1 + return int(num_segments * self.sync_num_frames_per_segment / self.sync_downsample_rate) + + +CONFIG_16K = SequenceConfig(duration=8.0, sampling_rate=16000, spectrogram_frame_rate=256) +CONFIG_44K = SequenceConfig(duration=8.0, sampling_rate=44100, spectrogram_frame_rate=512) + +if __name__ == '__main__': + assert CONFIG_16K.latent_seq_len == 250 + assert CONFIG_16K.clip_seq_len == 64 + assert CONFIG_16K.sync_seq_len == 192 + assert CONFIG_16K.num_audio_frames == 128000 + + assert CONFIG_44K.latent_seq_len == 345 + assert CONFIG_44K.clip_seq_len == 64 + assert CONFIG_44K.sync_seq_len == 192 + assert CONFIG_44K.num_audio_frames == 353280 + + print('Passed') diff --git a/postprocessing/mmaudio/model/transformer_layers.py b/postprocessing/mmaudio/model/transformer_layers.py new file mode 100644 index 0000000..28c17e3 --- /dev/null +++ b/postprocessing/mmaudio/model/transformer_layers.py @@ -0,0 +1,202 @@ +from typing import Optional + +import torch +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange +from einops.layers.torch import Rearrange + +from ..ext.rotary_embeddings import apply_rope +from ..model.low_level import MLP, ChannelLastConv1d, ConvMLP + + +def modulate(x: torch.Tensor, shift: torch.Tensor, scale: torch.Tensor): + return x * (1 + scale) + shift + + +def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor): + # training will crash without these contiguous calls and the CUDNN limitation + # I believe this is related to https://github.com/pytorch/pytorch/issues/133974 + # unresolved at the time of writing + q = q.contiguous() + k = k.contiguous() + v = v.contiguous() + out = F.scaled_dot_product_attention(q, k, v) + out = rearrange(out, 'b h n d -> b n (h d)').contiguous() + return out + + +class SelfAttention(nn.Module): + + def __init__(self, dim: int, nheads: int): + super().__init__() + self.dim = dim + self.nheads = nheads + + self.qkv = nn.Linear(dim, dim * 3, bias=True) + self.q_norm = nn.RMSNorm(dim // nheads) + self.k_norm = nn.RMSNorm(dim // nheads) + + self.split_into_heads = Rearrange('b n (h d j) -> b h n d j', + h=nheads, + d=dim // nheads, + j=3) + + def pre_attention( + self, x: torch.Tensor, + rot: Optional[torch.Tensor]) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]: + # x: batch_size * n_tokens * n_channels + qkv = self.qkv(x) + q, k, v = self.split_into_heads(qkv).chunk(3, dim=-1) + q = q.squeeze(-1) + k = k.squeeze(-1) + v = v.squeeze(-1) + q = self.q_norm(q) + k = self.k_norm(k) + + if rot is not None: + q = apply_rope(q, rot) + k = apply_rope(k, rot) + + return q, k, v + + def forward( + self, + x: torch.Tensor, # batch_size * n_tokens * n_channels + ) -> torch.Tensor: + q, v, k = self.pre_attention(x) + out = attention(q, k, v) + return out + + +class MMDitSingleBlock(nn.Module): + + def __init__(self, + dim: int, + nhead: int, + mlp_ratio: float 
= 4.0, + pre_only: bool = False, + kernel_size: int = 7, + padding: int = 3): + super().__init__() + self.norm1 = nn.LayerNorm(dim, elementwise_affine=False) + self.attn = SelfAttention(dim, nhead) + + self.pre_only = pre_only + if pre_only: + self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 2 * dim, bias=True)) + else: + if kernel_size == 1: + self.linear1 = nn.Linear(dim, dim) + else: + self.linear1 = ChannelLastConv1d(dim, dim, kernel_size=kernel_size, padding=padding) + self.norm2 = nn.LayerNorm(dim, elementwise_affine=False) + + if kernel_size == 1: + self.ffn = MLP(dim, int(dim * mlp_ratio)) + else: + self.ffn = ConvMLP(dim, + int(dim * mlp_ratio), + kernel_size=kernel_size, + padding=padding) + + self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim, bias=True)) + + def pre_attention(self, x: torch.Tensor, c: torch.Tensor, rot: Optional[torch.Tensor]): + # x: BS * N * D + # cond: BS * D + modulation = self.adaLN_modulation(c) + if self.pre_only: + (shift_msa, scale_msa) = modulation.chunk(2, dim=-1) + gate_msa = shift_mlp = scale_mlp = gate_mlp = None + else: + (shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, + gate_mlp) = modulation.chunk(6, dim=-1) + + x = modulate(self.norm1(x), shift_msa, scale_msa) + q, k, v = self.attn.pre_attention(x, rot) + return (q, k, v), (gate_msa, shift_mlp, scale_mlp, gate_mlp) + + def post_attention(self, x: torch.Tensor, attn_out: torch.Tensor, c: tuple[torch.Tensor]): + if self.pre_only: + return x + + (gate_msa, shift_mlp, scale_mlp, gate_mlp) = c + x = x + self.linear1(attn_out) * gate_msa + r = modulate(self.norm2(x), shift_mlp, scale_mlp) + x = x + self.ffn(r) * gate_mlp + + return x + + def forward(self, x: torch.Tensor, cond: torch.Tensor, + rot: Optional[torch.Tensor]) -> torch.Tensor: + # x: BS * N * D + # cond: BS * D + x_qkv, x_conditions = self.pre_attention(x, cond, rot) + attn_out = attention(*x_qkv) + x = self.post_attention(x, attn_out, x_conditions) + + return x + + +class JointBlock(nn.Module): + + def __init__(self, dim: int, nhead: int, mlp_ratio: float = 4.0, pre_only: bool = False): + super().__init__() + self.pre_only = pre_only + self.latent_block = MMDitSingleBlock(dim, + nhead, + mlp_ratio, + pre_only=False, + kernel_size=3, + padding=1) + self.clip_block = MMDitSingleBlock(dim, + nhead, + mlp_ratio, + pre_only=pre_only, + kernel_size=3, + padding=1) + self.text_block = MMDitSingleBlock(dim, nhead, mlp_ratio, pre_only=pre_only, kernel_size=1) + + def forward(self, latent: torch.Tensor, clip_f: torch.Tensor, text_f: torch.Tensor, + global_c: torch.Tensor, extended_c: torch.Tensor, latent_rot: torch.Tensor, + clip_rot: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]: + # latent: BS * N1 * D + # clip_f: BS * N2 * D + # c: BS * (1/N) * D + x_qkv, x_mod = self.latent_block.pre_attention(latent, extended_c, latent_rot) + c_qkv, c_mod = self.clip_block.pre_attention(clip_f, global_c, clip_rot) + t_qkv, t_mod = self.text_block.pre_attention(text_f, global_c, rot=None) + + latent_len = latent.shape[1] + clip_len = clip_f.shape[1] + text_len = text_f.shape[1] + + joint_qkv = [torch.cat([x_qkv[i], c_qkv[i], t_qkv[i]], dim=2) for i in range(3)] + + attn_out = attention(*joint_qkv) + x_attn_out = attn_out[:, :latent_len] + c_attn_out = attn_out[:, latent_len:latent_len + clip_len] + t_attn_out = attn_out[:, latent_len + clip_len:] + + latent = self.latent_block.post_attention(latent, x_attn_out, x_mod) + if not self.pre_only: + clip_f = self.clip_block.post_attention(clip_f, c_attn_out, c_mod) + 
text_f = self.text_block.post_attention(text_f, t_attn_out, t_mod) + + return latent, clip_f, text_f + + +class FinalBlock(nn.Module): + + def __init__(self, dim, out_dim): + super().__init__() + self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 2 * dim, bias=True)) + self.norm = nn.LayerNorm(dim, elementwise_affine=False) + self.conv = ChannelLastConv1d(dim, out_dim, kernel_size=7, padding=3) + + def forward(self, latent, c): + shift, scale = self.adaLN_modulation(c).chunk(2, dim=-1) + latent = modulate(self.norm(latent), shift, scale) + latent = self.conv(latent) + return latent diff --git a/postprocessing/mmaudio/model/utils/__init__.py b/postprocessing/mmaudio/model/utils/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/model/utils/distributions.py b/postprocessing/mmaudio/model/utils/distributions.py new file mode 100644 index 0000000..1d526a5 --- /dev/null +++ b/postprocessing/mmaudio/model/utils/distributions.py @@ -0,0 +1,46 @@ +from typing import Optional + +import numpy as np +import torch + + +class DiagonalGaussianDistribution: + + def __init__(self, parameters, deterministic=False): + self.parameters = parameters + self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) + self.logvar = torch.clamp(self.logvar, -30.0, 20.0) + self.deterministic = deterministic + self.std = torch.exp(0.5 * self.logvar) + self.var = torch.exp(self.logvar) + if self.deterministic: + self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device) + + def sample(self, rng: Optional[torch.Generator] = None): + # x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) + + r = torch.empty_like(self.mean).normal_(generator=rng) + x = self.mean + self.std * r + + return x + + def kl(self, other=None): + if self.deterministic: + return torch.Tensor([0.]) + else: + if other is None: + + return 0.5 * torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar + else: + return 0.5 * (torch.pow(self.mean - other.mean, 2) / other.var + + self.var / other.var - 1.0 - self.logvar + other.logvar) + + def nll(self, sample, dims=[1, 2, 3]): + if self.deterministic: + return torch.Tensor([0.]) + logtwopi = np.log(2.0 * np.pi) + return 0.5 * torch.sum(logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, + dim=dims) + + def mode(self): + return self.mean diff --git a/postprocessing/mmaudio/model/utils/features_utils.py b/postprocessing/mmaudio/model/utils/features_utils.py new file mode 100644 index 0000000..2947c30 --- /dev/null +++ b/postprocessing/mmaudio/model/utils/features_utils.py @@ -0,0 +1,174 @@ +from typing import Literal, Optional +import json +import open_clip +import torch +import torch.nn as nn +import torch.nn.functional as F +from einops import rearrange +from open_clip import create_model_from_pretrained, create_model +from torchvision.transforms import Normalize + +from ...ext.autoencoder import AutoEncoderModule +from ...ext.mel_converter import get_mel_converter +from ...ext.synchformer.synchformer import Synchformer +from ...model.utils.distributions import DiagonalGaussianDistribution + + +def patch_clip(clip_model): + # a hack to make it output last hidden states + # https://github.com/mlfoundations/open_clip/blob/fc5a37b72d705f760ebbc7915b84729816ed471f/src/open_clip/model.py#L269 + def new_encode_text(self, text, normalize: bool = False): + cast_dtype = self.transformer.get_cast_dtype() + + x = self.token_embedding(text).to(cast_dtype) # [batch_size, n_ctx, d_model] + 
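+        # the rest mirrors open_clip's encode_text but stops before EOT pooling and the text
+        # projection, so per-token hidden states are returned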
+ x = x + self.positional_embedding.to(cast_dtype) + x = self.transformer(x, attn_mask=self.attn_mask) + x = self.ln_final(x) # [batch_size, n_ctx, transformer.width] + return F.normalize(x, dim=-1) if normalize else x + + clip_model.encode_text = new_encode_text.__get__(clip_model) + return clip_model + +def get_model_config(model_name): + with open("ckpts/DFN5B-CLIP-ViT-H-14-378/open_clip_config.json", 'r', encoding='utf-8') as f: + return json.load(f)["model_cfg"] + +class FeaturesUtils(nn.Module): + + def __init__( + self, + *, + tod_vae_ckpt: Optional[str] = None, + bigvgan_vocoder_ckpt: Optional[str] = None, + synchformer_ckpt: Optional[str] = None, + enable_conditions: bool = True, + mode=Literal['16k', '44k'], + need_vae_encoder: bool = True, + ): + super().__init__() + self.device ="cuda" + if enable_conditions: + old_get_model_config = open_clip.factory.get_model_config + open_clip.factory.get_model_config = get_model_config + with open("ckpts/DFN5B-CLIP-ViT-H-14-378/open_clip_config.json", 'r', encoding='utf-8') as f: + override_preprocess = json.load(f)["preprocess_cfg"] + + self.clip_model = create_model('DFN5B-CLIP-ViT-H-14-378', pretrained='ckpts/DFN5B-CLIP-ViT-H-14-378/open_clip_pytorch_model.bin', force_preprocess_cfg= override_preprocess) + open_clip.factory.get_model_config = old_get_model_config + + # self.clip_model = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14-384', return_transform=False) + self.clip_preprocess = Normalize(mean=[0.48145466, 0.4578275, 0.40821073], + std=[0.26862954, 0.26130258, 0.27577711]) + self.clip_model = patch_clip(self.clip_model) + + self.synchformer = Synchformer() + self.synchformer.load_state_dict( + torch.load(synchformer_ckpt, weights_only=True, map_location='cpu')) + + self.tokenizer = open_clip.get_tokenizer('ViT-H-14-378-quickgelu') # same as 'ViT-H-14' + else: + self.clip_model = None + self.synchformer = None + self.tokenizer = None + + if tod_vae_ckpt is not None: + self.mel_converter = get_mel_converter(mode) + self.tod = AutoEncoderModule(vae_ckpt_path=tod_vae_ckpt, + vocoder_ckpt_path=bigvgan_vocoder_ckpt, + mode=mode, + need_vae_encoder=need_vae_encoder) + else: + self.tod = None + + def compile(self): + if self.clip_model is not None: + self.clip_model.encode_image = torch.compile(self.clip_model.encode_image) + self.clip_model.encode_text = torch.compile(self.clip_model.encode_text) + if self.synchformer is not None: + self.synchformer = torch.compile(self.synchformer) + self.decode = torch.compile(self.decode) + self.vocode = torch.compile(self.vocode) + + def train(self, mode: bool) -> None: + return super().train(False) + + @torch.inference_mode() + def encode_video_with_clip(self, x: torch.Tensor, batch_size: int = -1) -> torch.Tensor: + assert self.clip_model is not None, 'CLIP is not loaded' + # x: (B, T, C, H, W) H/W: 384 + b, t, c, h, w = x.shape + assert c == 3 and h == 384 and w == 384 + x = self.clip_preprocess(x) + x = rearrange(x, 'b t c h w -> (b t) c h w') + outputs = [] + if batch_size < 0: + batch_size = b * t + for i in range(0, b * t, batch_size): + outputs.append(self.clip_model.encode_image(x[i:i + batch_size], normalize=True)) + x = torch.cat(outputs, dim=0) + # x = self.clip_model.encode_image(x, normalize=True) + x = rearrange(x, '(b t) d -> b t d', b=b) + return x + + @torch.inference_mode() + def encode_video_with_sync(self, x: torch.Tensor, batch_size: int = -1) -> torch.Tensor: + assert self.synchformer is not None, 'Synchformer is not loaded' + # x: (B, T, C, H, W) H/W: 384 + + 
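+        # note: despite the 384 in the comment above, the sync branch expects 224x224 crops, as asserted below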
b, t, c, h, w = x.shape + assert c == 3 and h == 224 and w == 224 + + # partition the video + segment_size = 16 + step_size = 8 + num_segments = (t - segment_size) // step_size + 1 + segments = [] + for i in range(num_segments): + segments.append(x[:, i * step_size:i * step_size + segment_size]) + x = torch.stack(segments, dim=1) # (B, S, T, C, H, W) + + outputs = [] + if batch_size < 0: + batch_size = b + x = rearrange(x, 'b s t c h w -> (b s) 1 t c h w') + for i in range(0, b * num_segments, batch_size): + outputs.append(self.synchformer(x[i:i + batch_size])) + x = torch.cat(outputs, dim=0) + x = rearrange(x, '(b s) 1 t d -> b (s t) d', b=b) + return x + + @torch.inference_mode() + def encode_text(self, text: list[str]) -> torch.Tensor: + assert self.clip_model is not None, 'CLIP is not loaded' + assert self.tokenizer is not None, 'Tokenizer is not loaded' + # x: (B, L) + tokens = self.tokenizer(text).to(self.device) + return self.clip_model.encode_text(tokens, normalize=True) + + @torch.inference_mode() + def encode_audio(self, x) -> DiagonalGaussianDistribution: + assert self.tod is not None, 'VAE is not loaded' + # x: (B * L) + mel = self.mel_converter(x) + dist = self.tod.encode(mel) + + return dist + + @torch.inference_mode() + def vocode(self, mel: torch.Tensor) -> torch.Tensor: + assert self.tod is not None, 'VAE is not loaded' + return self.tod.vocode(mel) + + @torch.inference_mode() + def decode(self, z: torch.Tensor) -> torch.Tensor: + assert self.tod is not None, 'VAE is not loaded' + return self.tod.decode(z.transpose(1, 2)) + + # @property + # def device(self): + # return next(self.parameters()).device + + @property + def dtype(self): + return next(self.parameters()).dtype diff --git a/postprocessing/mmaudio/model/utils/parameter_groups.py b/postprocessing/mmaudio/model/utils/parameter_groups.py new file mode 100644 index 0000000..89c3993 --- /dev/null +++ b/postprocessing/mmaudio/model/utils/parameter_groups.py @@ -0,0 +1,72 @@ +import logging + +log = logging.getLogger() + + +def get_parameter_groups(model, cfg, print_log=False): + """ + Assign different weight decays and learning rates to different parameters. + Returns a parameter group which can be passed to the optimizer. 
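+    With the backbone and embedding branches commented out below, all trainable parameters
+    end up in a single group using the base learning rate and weight decay.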
+ """ + weight_decay = cfg.weight_decay + # embed_weight_decay = cfg.embed_weight_decay + # backbone_lr_ratio = cfg.backbone_lr_ratio + base_lr = cfg.learning_rate + + backbone_params = [] + embed_params = [] + other_params = [] + + # embedding_names = ['summary_pos', 'query_init', 'query_emb', 'obj_pe'] + # embedding_names = [e + '.weight' for e in embedding_names] + + # inspired by detectron2 + memo = set() + for name, param in model.named_parameters(): + if not param.requires_grad: + continue + # Avoid duplicating parameters + if param in memo: + continue + memo.add(param) + + if name.startswith('module'): + name = name[7:] + + inserted = False + # if name.startswith('pixel_encoder.'): + # backbone_params.append(param) + # inserted = True + # if print_log: + # log.info(f'{name} counted as a backbone parameter.') + # else: + # for e in embedding_names: + # if name.endswith(e): + # embed_params.append(param) + # inserted = True + # if print_log: + # log.info(f'{name} counted as an embedding parameter.') + # break + + # if not inserted: + other_params.append(param) + + parameter_groups = [ + # { + # 'params': backbone_params, + # 'lr': base_lr * backbone_lr_ratio, + # 'weight_decay': weight_decay + # }, + # { + # 'params': embed_params, + # 'lr': base_lr, + # 'weight_decay': embed_weight_decay + # }, + { + 'params': other_params, + 'lr': base_lr, + 'weight_decay': weight_decay + }, + ] + + return parameter_groups diff --git a/postprocessing/mmaudio/model/utils/sample_utils.py b/postprocessing/mmaudio/model/utils/sample_utils.py new file mode 100644 index 0000000..d44cf27 --- /dev/null +++ b/postprocessing/mmaudio/model/utils/sample_utils.py @@ -0,0 +1,12 @@ +from typing import Optional + +import torch + + +def log_normal_sample(x: torch.Tensor, + generator: Optional[torch.Generator] = None, + m: float = 0.0, + s: float = 1.0) -> torch.Tensor: + bs = x.shape[0] + s = torch.randn(bs, device=x.device, generator=generator) * s + m + return torch.sigmoid(s) diff --git a/postprocessing/mmaudio/runner.py b/postprocessing/mmaudio/runner.py new file mode 100644 index 0000000..7668f89 --- /dev/null +++ b/postprocessing/mmaudio/runner.py @@ -0,0 +1,609 @@ +""" +trainer.py - wrapper and utility functions for network training +Compute loss, back-prop, update parameters, logging, etc. 
+""" +import os +from pathlib import Path +from typing import Optional, Union + +import torch +import torch.distributed +import torch.optim as optim +# from av_bench.evaluate import evaluate +# from av_bench.extract import extract +# from nitrous_ema import PostHocEMA +from omegaconf import DictConfig +from torch.nn.parallel import DistributedDataParallel as DDP + +from .model.flow_matching import FlowMatching +from .model.networks import get_my_mmaudio +from .model.sequence_config import CONFIG_16K, CONFIG_44K +from .model.utils.features_utils import FeaturesUtils +from .model.utils.parameter_groups import get_parameter_groups +from .model.utils.sample_utils import log_normal_sample +from .utils.dist_utils import (info_if_rank_zero, local_rank, string_if_rank_zero) +from .utils.log_integrator import Integrator +from .utils.logger import TensorboardLogger +from .utils.time_estimator import PartialTimeEstimator, TimeEstimator +from .utils.video_joiner import VideoJoiner + + +class Runner: + + def __init__(self, + cfg: DictConfig, + log: TensorboardLogger, + run_path: Union[str, Path], + for_training: bool = True, + latent_mean: Optional[torch.Tensor] = None, + latent_std: Optional[torch.Tensor] = None): + self.exp_id = cfg.exp_id + self.use_amp = cfg.amp + self.enable_grad_scaler = cfg.enable_grad_scaler + self.for_training = for_training + self.cfg = cfg + + if cfg.model.endswith('16k'): + self.seq_cfg = CONFIG_16K + mode = '16k' + elif cfg.model.endswith('44k'): + self.seq_cfg = CONFIG_44K + mode = '44k' + else: + raise ValueError(f'Unknown model: {cfg.model}') + + self.sample_rate = self.seq_cfg.sampling_rate + self.duration_sec = self.seq_cfg.duration + + # setting up the model + empty_string_feat = torch.load('./ext_weights/empty_string.pth', weights_only=True)[0] + self.network = DDP(get_my_mmaudio(cfg.model, + latent_mean=latent_mean, + latent_std=latent_std, + empty_string_feat=empty_string_feat).cuda(), + device_ids=[local_rank], + broadcast_buffers=False) + if cfg.compile: + # NOTE: though train_fn and val_fn are very similar + # (early on they are implemented as a single function) + # keeping them separate and compiling them separately are CRUCIAL for high performance + self.train_fn = torch.compile(self.train_fn) + self.val_fn = torch.compile(self.val_fn) + + self.fm = FlowMatching(cfg.sampling.min_sigma, + inference_mode=cfg.sampling.method, + num_steps=cfg.sampling.num_steps) + + # ema profile + if for_training and cfg.ema.enable and local_rank == 0: + self.ema = PostHocEMA(self.network.module, + sigma_rels=cfg.ema.sigma_rels, + update_every=cfg.ema.update_every, + checkpoint_every_num_steps=cfg.ema.checkpoint_every, + checkpoint_folder=cfg.ema.checkpoint_folder, + step_size_correction=True).cuda() + self.ema_start = cfg.ema.start + else: + self.ema = None + + self.rng = torch.Generator(device='cuda') + self.rng.manual_seed(cfg['seed'] + local_rank) + + # setting up feature extractors and VAEs + if mode == '16k': + self.features = FeaturesUtils( + tod_vae_ckpt=cfg['vae_16k_ckpt'], + bigvgan_vocoder_ckpt=cfg['bigvgan_vocoder_ckpt'], + synchformer_ckpt=cfg['synchformer_ckpt'], + enable_conditions=True, + mode=mode, + need_vae_encoder=False, + ) + elif mode == '44k': + self.features = FeaturesUtils( + tod_vae_ckpt=cfg['vae_44k_ckpt'], + synchformer_ckpt=cfg['synchformer_ckpt'], + enable_conditions=True, + mode=mode, + need_vae_encoder=False, + ) + self.features = self.features.cuda().eval() + + if cfg.compile: + self.features.compile() + + # hyperparameters + 
self.log_normal_sampling_mean = cfg.sampling.mean + self.log_normal_sampling_scale = cfg.sampling.scale + self.null_condition_probability = cfg.null_condition_probability + self.cfg_strength = cfg.cfg_strength + + # setting up logging + self.log = log + self.run_path = Path(run_path) + vgg_cfg = cfg.data.VGGSound + if for_training: + self.val_video_joiner = VideoJoiner(vgg_cfg.root, self.run_path / 'val-sampled-videos', + self.sample_rate, self.duration_sec) + else: + self.test_video_joiner = VideoJoiner(vgg_cfg.root, + self.run_path / 'test-sampled-videos', + self.sample_rate, self.duration_sec) + string_if_rank_zero(self.log, 'model_size', + f'{sum([param.nelement() for param in self.network.parameters()])}') + string_if_rank_zero( + self.log, 'number_of_parameters_that_require_gradient: ', + str( + sum([ + param.nelement() + for param in filter(lambda p: p.requires_grad, self.network.parameters()) + ]))) + info_if_rank_zero(self.log, 'torch version: ' + torch.__version__) + self.train_integrator = Integrator(self.log, distributed=True) + self.val_integrator = Integrator(self.log, distributed=True) + + # setting up optimizer and loss + if for_training: + self.enter_train() + parameter_groups = get_parameter_groups(self.network, cfg, print_log=(local_rank == 0)) + self.optimizer = optim.AdamW(parameter_groups, + lr=cfg['learning_rate'], + weight_decay=cfg['weight_decay'], + betas=[0.9, 0.95], + eps=1e-6 if self.use_amp else 1e-8, + fused=True) + if self.enable_grad_scaler: + self.scaler = torch.amp.GradScaler(init_scale=2048) + self.clip_grad_norm = cfg['clip_grad_norm'] + + # linearly warmup learning rate + linear_warmup_steps = cfg['linear_warmup_steps'] + + def warmup(currrent_step: int): + return (currrent_step + 1) / (linear_warmup_steps + 1) + + warmup_scheduler = optim.lr_scheduler.LambdaLR(self.optimizer, lr_lambda=warmup) + + # setting up learning rate scheduler + if cfg['lr_schedule'] == 'constant': + next_scheduler = optim.lr_scheduler.LambdaLR(self.optimizer, lr_lambda=lambda _: 1) + elif cfg['lr_schedule'] == 'poly': + total_num_iter = cfg['iterations'] + next_scheduler = optim.lr_scheduler.LambdaLR(self.optimizer, + lr_lambda=lambda x: + (1 - (x / total_num_iter))**0.9) + elif cfg['lr_schedule'] == 'step': + next_scheduler = optim.lr_scheduler.MultiStepLR(self.optimizer, + cfg['lr_schedule_steps'], + cfg['lr_schedule_gamma']) + else: + raise NotImplementedError + + self.scheduler = optim.lr_scheduler.SequentialLR(self.optimizer, + [warmup_scheduler, next_scheduler], + [linear_warmup_steps]) + + # Logging info + self.log_text_interval = cfg['log_text_interval'] + self.log_extra_interval = cfg['log_extra_interval'] + self.save_weights_interval = cfg['save_weights_interval'] + self.save_checkpoint_interval = cfg['save_checkpoint_interval'] + self.save_copy_iterations = cfg['save_copy_iterations'] + self.num_iterations = cfg['num_iterations'] + if cfg['debug']: + self.log_text_interval = self.log_extra_interval = 1 + + # update() is called when we log metrics, within the logger + self.log.batch_timer = TimeEstimator(self.num_iterations, self.log_text_interval) + # update() is called every iteration, in this script + self.log.data_timer = PartialTimeEstimator(self.num_iterations, 1, ema_alpha=0.9) + else: + self.enter_val() + + def train_fn( + self, + clip_f: torch.Tensor, + sync_f: torch.Tensor, + text_f: torch.Tensor, + a_mean: torch.Tensor, + a_std: torch.Tensor, + ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: + # sample + a_randn = 
torch.empty_like(a_mean).normal_(generator=self.rng) + x1 = a_mean + a_std * a_randn + bs = x1.shape[0] # batch_size * seq_len * num_channels + + # normalize the latents + x1 = self.network.module.normalize(x1) + + t = log_normal_sample(x1, + generator=self.rng, + m=self.log_normal_sampling_mean, + s=self.log_normal_sampling_scale) + x0, x1, xt, (clip_f, sync_f, text_f) = self.fm.get_x0_xt_c(x1, + t, + Cs=[clip_f, sync_f, text_f], + generator=self.rng) + + # classifier-free training + samples = torch.rand(bs, device=x1.device, generator=self.rng) + null_video = (samples < self.null_condition_probability) + clip_f[null_video] = self.network.module.empty_clip_feat + sync_f[null_video] = self.network.module.empty_sync_feat + + samples = torch.rand(bs, device=x1.device, generator=self.rng) + null_text = (samples < self.null_condition_probability) + text_f[null_text] = self.network.module.empty_string_feat + + pred_v = self.network(xt, clip_f, sync_f, text_f, t) + loss = self.fm.loss(pred_v, x0, x1) + mean_loss = loss.mean() + return x1, loss, mean_loss, t + + def val_fn( + self, + clip_f: torch.Tensor, + sync_f: torch.Tensor, + text_f: torch.Tensor, + x1: torch.Tensor, + ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]: + bs = x1.shape[0] # batch_size * seq_len * num_channels + # normalize the latents + x1 = self.network.module.normalize(x1) + t = log_normal_sample(x1, + generator=self.rng, + m=self.log_normal_sampling_mean, + s=self.log_normal_sampling_scale) + x0, x1, xt, (clip_f, sync_f, text_f) = self.fm.get_x0_xt_c(x1, + t, + Cs=[clip_f, sync_f, text_f], + generator=self.rng) + + # classifier-free training + samples = torch.rand(bs, device=x1.device, generator=self.rng) + # null mask is for when a video is provided but we decided to ignore it + null_video = (samples < self.null_condition_probability) + # complete mask is for when a video is not provided or we decided to ignore it + clip_f[null_video] = self.network.module.empty_clip_feat + sync_f[null_video] = self.network.module.empty_sync_feat + + samples = torch.rand(bs, device=x1.device, generator=self.rng) + null_text = (samples < self.null_condition_probability) + text_f[null_text] = self.network.module.empty_string_feat + + pred_v = self.network(xt, clip_f, sync_f, text_f, t) + + loss = self.fm.loss(pred_v, x0, x1) + mean_loss = loss.mean() + return loss, mean_loss, t + + def train_pass(self, data, it: int = 0): + + if not self.for_training: + raise ValueError('train_pass() should not be called when not training.') + + self.enter_train() + with torch.amp.autocast('cuda', enabled=self.use_amp, dtype=torch.bfloat16): + clip_f = data['clip_features'].cuda(non_blocking=True) + sync_f = data['sync_features'].cuda(non_blocking=True) + text_f = data['text_features'].cuda(non_blocking=True) + video_exist = data['video_exist'].cuda(non_blocking=True) + text_exist = data['text_exist'].cuda(non_blocking=True) + a_mean = data['a_mean'].cuda(non_blocking=True) + a_std = data['a_std'].cuda(non_blocking=True) + + # these masks are for non-existent data; masking for CFG training is in train_fn + clip_f[~video_exist] = self.network.module.empty_clip_feat + sync_f[~video_exist] = self.network.module.empty_sync_feat + text_f[~text_exist] = self.network.module.empty_string_feat + + self.log.data_timer.end() + if it % self.log_extra_interval == 0: + unmasked_clip_f = clip_f.clone() + unmasked_sync_f = sync_f.clone() + unmasked_text_f = text_f.clone() + x1, loss, mean_loss, t = self.train_fn(clip_f, sync_f, text_f, a_mean, a_std) + + 
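+            # the unmasked_* copies taken above are reused in the log_extra_interval block further down
+            # to sample a demo clip with the original (non-dropped) conditions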
self.train_integrator.add_dict({'loss': mean_loss}) + + if it % self.log_text_interval == 0 and it != 0: + self.train_integrator.add_scalar('lr', self.scheduler.get_last_lr()[0]) + self.train_integrator.add_binned_tensor('binned_loss', loss, t) + self.train_integrator.finalize('train', it) + self.train_integrator.reset_except_hooks() + + # Backward pass + self.optimizer.zero_grad(set_to_none=True) + if self.enable_grad_scaler: + self.scaler.scale(mean_loss).backward() + self.scaler.unscale_(self.optimizer) + grad_norm = torch.nn.utils.clip_grad_norm_(self.network.parameters(), + self.clip_grad_norm) + self.scaler.step(self.optimizer) + self.scaler.update() + else: + mean_loss.backward() + grad_norm = torch.nn.utils.clip_grad_norm_(self.network.parameters(), + self.clip_grad_norm) + self.optimizer.step() + + if self.ema is not None and it >= self.ema_start: + self.ema.update() + self.scheduler.step() + self.integrator.add_scalar('grad_norm', grad_norm) + + self.enter_val() + with torch.amp.autocast('cuda', enabled=self.use_amp, + dtype=torch.bfloat16), torch.inference_mode(): + try: + if it % self.log_extra_interval == 0: + # save GT audio + # unnormalize the latents + x1 = self.network.module.unnormalize(x1[0:1]) + mel = self.features.decode(x1) + audio = self.features.vocode(mel).cpu()[0] # 1 * num_samples + self.log.log_spectrogram('train', f'spec-gt-r{local_rank}', mel.cpu()[0], it) + self.log.log_audio('train', + f'audio-gt-r{local_rank}', + audio, + it, + sample_rate=self.sample_rate) + + # save audio from sampling + x0 = torch.empty_like(x1[0:1]).normal_(generator=self.rng) + clip_f = unmasked_clip_f[0:1] + sync_f = unmasked_sync_f[0:1] + text_f = unmasked_text_f[0:1] + conditions = self.network.module.preprocess_conditions(clip_f, sync_f, text_f) + empty_conditions = self.network.module.get_empty_conditions(x0.shape[0]) + cfg_ode_wrapper = lambda t, x: self.network.module.ode_wrapper( + t, x, conditions, empty_conditions, self.cfg_strength) + x1_hat = self.fm.to_data(cfg_ode_wrapper, x0) + x1_hat = self.network.module.unnormalize(x1_hat) + mel = self.features.decode(x1_hat) + audio = self.features.vocode(mel).cpu()[0] + self.log.log_spectrogram('train', f'spec-r{local_rank}', mel.cpu()[0], it) + self.log.log_audio('train', + f'audio-r{local_rank}', + audio, + it, + sample_rate=self.sample_rate) + except Exception as e: + self.log.warning(f'Error in extra logging: {e}') + if self.cfg.debug: + raise + + # Save network weights and checkpoint if needed + save_copy = it in self.save_copy_iterations + + if (it % self.save_weights_interval == 0 and it != 0) or save_copy: + self.save_weights(it) + + if it % self.save_checkpoint_interval == 0 and it != 0: + self.save_checkpoint(it, save_copy=save_copy) + + self.log.data_timer.start() + + @torch.inference_mode() + def validation_pass(self, data, it: int = 0): + self.enter_val() + with torch.amp.autocast('cuda', enabled=self.use_amp, dtype=torch.bfloat16): + clip_f = data['clip_features'].cuda(non_blocking=True) + sync_f = data['sync_features'].cuda(non_blocking=True) + text_f = data['text_features'].cuda(non_blocking=True) + video_exist = data['video_exist'].cuda(non_blocking=True) + text_exist = data['text_exist'].cuda(non_blocking=True) + a_mean = data['a_mean'].cuda(non_blocking=True) + a_std = data['a_std'].cuda(non_blocking=True) + + clip_f[~video_exist] = self.network.module.empty_clip_feat + sync_f[~video_exist] = self.network.module.empty_sync_feat + text_f[~text_exist] = self.network.module.empty_string_feat + a_randn = 
torch.empty_like(a_mean).normal_(generator=self.rng) + x1 = a_mean + a_std * a_randn + + self.log.data_timer.end() + loss, mean_loss, t = self.val_fn(clip_f.clone(), sync_f.clone(), text_f.clone(), x1) + + self.val_integrator.add_binned_tensor('binned_loss', loss, t) + self.val_integrator.add_dict({'loss': mean_loss}) + + self.log.data_timer.start() + + @torch.inference_mode() + def inference_pass(self, + data, + it: int, + data_cfg: DictConfig, + *, + save_eval: bool = True) -> Path: + self.enter_val() + with torch.amp.autocast('cuda', enabled=self.use_amp, dtype=torch.bfloat16): + clip_f = data['clip_features'].cuda(non_blocking=True) + sync_f = data['sync_features'].cuda(non_blocking=True) + text_f = data['text_features'].cuda(non_blocking=True) + video_exist = data['video_exist'].cuda(non_blocking=True) + text_exist = data['text_exist'].cuda(non_blocking=True) + a_mean = data['a_mean'].cuda(non_blocking=True) # for the shape only + + clip_f[~video_exist] = self.network.module.empty_clip_feat + sync_f[~video_exist] = self.network.module.empty_sync_feat + text_f[~text_exist] = self.network.module.empty_string_feat + + # sample + x0 = torch.empty_like(a_mean).normal_(generator=self.rng) + conditions = self.network.module.preprocess_conditions(clip_f, sync_f, text_f) + empty_conditions = self.network.module.get_empty_conditions(x0.shape[0]) + cfg_ode_wrapper = lambda t, x: self.network.module.ode_wrapper( + t, x, conditions, empty_conditions, self.cfg_strength) + x1_hat = self.fm.to_data(cfg_ode_wrapper, x0) + x1_hat = self.network.module.unnormalize(x1_hat) + mel = self.features.decode(x1_hat) + audio = self.features.vocode(mel).cpu() + for i in range(audio.shape[0]): + video_id = data['id'][i] + if (not self.for_training) and i == 0: + # save very few videos + self.test_video_joiner.join(video_id, f'{video_id}', audio[i].transpose(0, 1)) + + if data_cfg.output_subdir is not None: + # validation + if save_eval: + iter_naming = f'{it:09d}' + else: + iter_naming = 'val-cache' + audio_dir = self.log.log_audio(iter_naming, + f'{video_id}', + audio[i], + it=None, + sample_rate=self.sample_rate, + subdir=Path(data_cfg.output_subdir)) + if save_eval and i == 0: + self.val_video_joiner.join(video_id, f'{iter_naming}-{video_id}', + audio[i].transpose(0, 1)) + else: + # full test set, usually + audio_dir = self.log.log_audio(f'{data_cfg.tag}-sampled', + f'{video_id}', + audio[i], + it=None, + sample_rate=self.sample_rate) + + return Path(audio_dir) + + @torch.inference_mode() + def eval(self, audio_dir: Path, it: int, data_cfg: DictConfig) -> dict[str, float]: + with torch.amp.autocast('cuda', enabled=False): + if local_rank == 0: + extract(audio_path=audio_dir, + output_path=audio_dir / 'cache', + device='cuda', + batch_size=32, + audio_length=8) + output_metrics = evaluate(gt_audio_cache=Path(data_cfg.gt_cache), + pred_audio_cache=audio_dir / 'cache') + for k, v in output_metrics.items(): + # pad k to 10 characters + # pad v to 10 decimal places + self.log.log_scalar(f'{data_cfg.tag}/{k}', v, it) + self.log.info(f'{data_cfg.tag}/{k:<10}: {v:.10f}') + else: + output_metrics = None + + return output_metrics + + def save_weights(self, it, save_copy=False): + if local_rank != 0: + return + + os.makedirs(self.run_path, exist_ok=True) + if save_copy: + model_path = self.run_path / f'{self.exp_id}_{it}.pth' + torch.save(self.network.module.state_dict(), model_path) + self.log.info(f'Network weights saved to {model_path}.') + + # if last exists, move it to a shadow copy + model_path = self.run_path / 
f'{self.exp_id}_last.pth' + if model_path.exists(): + shadow_path = model_path.with_name(model_path.name.replace('last', 'shadow')) + model_path.replace(shadow_path) + self.log.info(f'Network weights shadowed to {shadow_path}.') + + torch.save(self.network.module.state_dict(), model_path) + self.log.info(f'Network weights saved to {model_path}.') + + def save_checkpoint(self, it, save_copy=False): + if local_rank != 0: + return + + checkpoint = { + 'it': it, + 'weights': self.network.module.state_dict(), + 'optimizer': self.optimizer.state_dict(), + 'scheduler': self.scheduler.state_dict(), + 'ema': self.ema.state_dict() if self.ema is not None else None, + } + + os.makedirs(self.run_path, exist_ok=True) + if save_copy: + model_path = self.run_path / f'{self.exp_id}_ckpt_{it}.pth' + torch.save(checkpoint, model_path) + self.log.info(f'Checkpoint saved to {model_path}.') + + # if ckpt_last exists, move it to a shadow copy + model_path = self.run_path / f'{self.exp_id}_ckpt_last.pth' + if model_path.exists(): + shadow_path = model_path.with_name(model_path.name.replace('last', 'shadow')) + model_path.replace(shadow_path) # moves the file + self.log.info(f'Checkpoint shadowed to {shadow_path}.') + + torch.save(checkpoint, model_path) + self.log.info(f'Checkpoint saved to {model_path}.') + + def get_latest_checkpoint_path(self): + ckpt_path = self.run_path / f'{self.exp_id}_ckpt_last.pth' + if not ckpt_path.exists(): + info_if_rank_zero(self.log, f'No checkpoint found at {ckpt_path}.') + return None + return ckpt_path + + def get_latest_weight_path(self): + weight_path = self.run_path / f'{self.exp_id}_last.pth' + if not weight_path.exists(): + self.log.info(f'No weight found at {weight_path}.') + return None + return weight_path + + def get_final_ema_weight_path(self): + weight_path = self.run_path / f'{self.exp_id}_ema_final.pth' + if not weight_path.exists(): + self.log.info(f'No weight found at {weight_path}.') + return None + return weight_path + + def load_checkpoint(self, path): + # This method loads everything and should be used to resume training + map_location = 'cuda:%d' % local_rank + checkpoint = torch.load(path, map_location={'cuda:0': map_location}, weights_only=True) + + it = checkpoint['it'] + weights = checkpoint['weights'] + optimizer = checkpoint['optimizer'] + scheduler = checkpoint['scheduler'] + if self.ema is not None: + self.ema.load_state_dict(checkpoint['ema']) + self.log.info(f'EMA states loaded from step {self.ema.step}') + + map_location = 'cuda:%d' % local_rank + self.network.module.load_state_dict(weights) + self.optimizer.load_state_dict(optimizer) + self.scheduler.load_state_dict(scheduler) + + self.log.info(f'Global iteration {it} loaded.') + self.log.info('Network weights, optimizer states, and scheduler states loaded.') + + return it + + def load_weights_in_memory(self, src_dict): + self.network.module.load_weights(src_dict) + self.log.info('Network weights loaded from memory.') + + def load_weights(self, path): + # This method loads only the network weight and should be used to load a pretrained model + map_location = 'cuda:%d' % local_rank + src_dict = torch.load(path, map_location={'cuda:0': map_location}, weights_only=True) + + self.log.info(f'Importing network weights from {path}...') + self.load_weights_in_memory(src_dict) + + def weights(self): + return self.network.module.state_dict() + + def enter_train(self): + self.integrator = self.train_integrator + self.network.train() + return self + + def enter_val(self): + self.network.eval() + return self 
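For orientation before the next file: below is a minimal, hypothetical sketch (not part of this patch) of how the pieces defined in networks.py and used by Runner.inference_pass fit together for classifier-free-guidance sampling. The FlowMatching arguments, import paths and checkpoint name are assumptions inferred from how runner.py and download_utils.py use them.

```
import torch

from postprocessing.mmaudio.model.flow_matching import FlowMatching
from postprocessing.mmaudio.model.networks import get_my_mmaudio

# smallest variant: 20-dim latents, latent_seq_len == 250 (see CONFIG_16K)
net = get_my_mmaudio('small_16k').cuda().eval()
# real use needs pretrained weights first, e.g. (path assumed):
# net.load_weights(torch.load('ckpts/mmaudio_small_16k.pth', map_location='cpu', weights_only=True))

# sampler settings are assumed; runner.py reads them from its Hydra config
fm = FlowMatching(0.0, inference_mode='euler', num_steps=25)

bs = 1
# unconditional placeholders; real CLIP / Synchformer / text features come from FeaturesUtils
clip_f = net.get_empty_clip_sequence(bs)
sync_f = net.get_empty_sync_sequence(bs)
text_f = net.get_empty_string_sequence(bs)

conditions = net.preprocess_conditions(clip_f, sync_f, text_f)
empty_conditions = net.get_empty_conditions(bs)

x0 = torch.randn(bs, net.latent_seq_len, 20, device='cuda')  # (B, N, latent_dim)
ode_fn = lambda t, x: net.ode_wrapper(t, x, conditions, empty_conditions, cfg_strength=4.5)
x1_hat = net.unnormalize(fm.to_data(ode_fn, x0))  # audio latents, ready for FeaturesUtils.decode/vocode
```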
diff --git a/postprocessing/mmaudio/sample.py b/postprocessing/mmaudio/sample.py new file mode 100644 index 0000000..30858e7 --- /dev/null +++ b/postprocessing/mmaudio/sample.py @@ -0,0 +1,90 @@ +import json +import logging +import os +import random + +import numpy as np +import torch +from hydra.core.hydra_config import HydraConfig +from omegaconf import DictConfig, open_dict +from tqdm import tqdm + +from .data.data_setup import setup_test_datasets +from .runner import Runner +from .utils.dist_utils import info_if_rank_zero +from .utils.logger import TensorboardLogger + +local_rank = int(os.environ['LOCAL_RANK']) +world_size = int(os.environ['WORLD_SIZE']) + + +def sample(cfg: DictConfig): + # initial setup + num_gpus = world_size + run_dir = HydraConfig.get().run.dir + + # wrap python logger with a tensorboard logger + log = TensorboardLogger(cfg.exp_id, + run_dir, + logging.getLogger(), + is_rank0=(local_rank == 0), + enable_email=cfg.enable_email and not cfg.debug) + + info_if_rank_zero(log, f'All configuration: {cfg}') + info_if_rank_zero(log, f'Number of GPUs detected: {num_gpus}') + + # cuda setup + torch.cuda.set_device(local_rank) + torch.backends.cudnn.benchmark = cfg.cudnn_benchmark + + # number of dataloader workers + info_if_rank_zero(log, f'Number of dataloader workers (per GPU): {cfg.num_workers}') + + # Set seeds to ensure the same initialization + torch.manual_seed(cfg.seed) + np.random.seed(cfg.seed) + random.seed(cfg.seed) + + # setting up configurations + info_if_rank_zero(log, f'Configuration: {cfg}') + info_if_rank_zero(log, f'Batch size (per GPU): {cfg.batch_size}') + + # construct the trainer + runner = Runner(cfg, log=log, run_path=run_dir, for_training=False).enter_val() + + # load the last weights if needed + if cfg['weights'] is not None: + info_if_rank_zero(log, f'Loading weights from the disk: {cfg["weights"]}') + runner.load_weights(cfg['weights']) + cfg['weights'] = None + else: + weights = runner.get_final_ema_weight_path() + if weights is not None: + info_if_rank_zero(log, f'Automatically finding weight: {weights}') + runner.load_weights(weights) + + # setup datasets + dataset, sampler, loader = setup_test_datasets(cfg) + data_cfg = cfg.data.ExtractedVGG_test + with open_dict(data_cfg): + if cfg.output_name is not None: + # append to the tag + data_cfg.tag = f'{data_cfg.tag}-{cfg.output_name}' + + # loop + audio_path = None + for curr_iter, data in enumerate(tqdm(loader)): + new_audio_path = runner.inference_pass(data, curr_iter, data_cfg) + if audio_path is None: + audio_path = new_audio_path + else: + assert audio_path == new_audio_path, 'Different audio path detected' + + info_if_rank_zero(log, f'Inference completed. 
Audio path: {audio_path}') + output_metrics = runner.eval(audio_path, curr_iter, data_cfg) + + if local_rank == 0: + # write the output metrics to run_dir + output_metrics_path = os.path.join(run_dir, f'{data_cfg.tag}-output_metrics.json') + with open(output_metrics_path, 'w') as f: + json.dump(output_metrics, f, indent=4) diff --git a/postprocessing/mmaudio/utils/__init__.py b/postprocessing/mmaudio/utils/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/postprocessing/mmaudio/utils/dist_utils.py b/postprocessing/mmaudio/utils/dist_utils.py new file mode 100644 index 0000000..f4f4e32 --- /dev/null +++ b/postprocessing/mmaudio/utils/dist_utils.py @@ -0,0 +1,17 @@ +import os +from logging import Logger + +from .logger import TensorboardLogger + +local_rank = int(os.environ['LOCAL_RANK']) if 'LOCAL_RANK' in os.environ else 0 +world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1 + + +def info_if_rank_zero(logger: Logger, msg: str): + if local_rank == 0: + logger.info(msg) + + +def string_if_rank_zero(logger: TensorboardLogger, tag: str, msg: str): + if local_rank == 0: + logger.log_string(tag, msg) diff --git a/postprocessing/mmaudio/utils/download_utils.py b/postprocessing/mmaudio/utils/download_utils.py new file mode 100644 index 0000000..1d193ef --- /dev/null +++ b/postprocessing/mmaudio/utils/download_utils.py @@ -0,0 +1,84 @@ +import hashlib +import logging +from pathlib import Path + +import requests +from tqdm import tqdm + +log = logging.getLogger() + +links = [ + { + 'name': 'mmaudio_small_16k.pth', + 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_small_16k.pth', + 'md5': 'af93cde404179f58e3919ac085b8033b', + }, + { + 'name': 'mmaudio_small_44k.pth', + 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_small_44k.pth', + 'md5': 'babd74c884783d13701ea2820a5f5b6d', + }, + { + 'name': 'mmaudio_medium_44k.pth', + 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_medium_44k.pth', + 'md5': '5a56b6665e45a1e65ada534defa903d0', + }, + { + 'name': 'mmaudio_large_44k.pth', + 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_large_44k.pth', + 'md5': 'fed96c325a6785b85ce75ae1aafd2673' + }, + { + 'name': 'mmaudio_large_44k_v2.pth', + 'url': 'https://huggingface.co/hkchengrex/MMAudio/resolve/main/weights/mmaudio_large_44k_v2.pth', + 'md5': '01ad4464f049b2d7efdaa4c1a59b8dfe' + }, + { + 'name': 'v1-16.pth', + 'url': 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/v1-16.pth', + 'md5': '69f56803f59a549a1a507c93859fd4d7' + }, + { + 'name': 'best_netG.pt', + 'url': 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/best_netG.pt', + 'md5': 'eeaf372a38a9c31c362120aba2dde292' + }, + { + 'name': 'v1-44.pth', + 'url': 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/v1-44.pth', + 'md5': 'fab020275fa44c6589820ce025191600' + }, + { + 'name': 'synchformer_state_dict.pth', + 'url': + 'https://github.com/hkchengrex/MMAudio/releases/download/v0.1/synchformer_state_dict.pth', + 'md5': '5b2f5594b0730f70e41e549b7c94390c' + }, +] + + +def download_model_if_needed(model_path: Path): + base_name = model_path.name + + for link in links: + if link['name'] == base_name: + target_link = link + break + else: + raise ValueError(f'No link found for {base_name}') + + model_path.parent.mkdir(parents=True, exist_ok=True) + if not model_path.exists() or hashlib.md5(open(model_path, + 'rb').read()).hexdigest() != target_link['md5']: + 
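+        # (re)download when the file is missing or its md5 does not match the published checksum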
log.info(f'Downloading {base_name} to {model_path}...') + r = requests.get(target_link['url'], stream=True) + total_size = int(r.headers.get('content-length', 0)) + block_size = 1024 + t = tqdm(total=total_size, unit='iB', unit_scale=True) + with open(model_path, 'wb') as f: + for data in r.iter_content(block_size): + t.update(len(data)) + f.write(data) + t.close() + if total_size != 0 and t.n != total_size: + raise RuntimeError('Error while downloading %s' % base_name) diff --git a/postprocessing/mmaudio/utils/email_utils.py b/postprocessing/mmaudio/utils/email_utils.py new file mode 100644 index 0000000..3de5f44 --- /dev/null +++ b/postprocessing/mmaudio/utils/email_utils.py @@ -0,0 +1,50 @@ +import logging +import os +from datetime import datetime + +import requests +# from dotenv import load_dotenv +from pytz import timezone + +from .timezone import my_timezone + +_source = 'USE YOURS' +_target = 'USE YOURS' + +log = logging.getLogger() + +_fmt = "%Y-%m-%d %H:%M:%S %Z%z" + + +class EmailSender: + + def __init__(self, exp_id: str, enable: bool): + self.exp_id = exp_id + self.enable = enable + if enable: + load_dotenv() + self.MAILGUN_API_KEY = os.getenv('MAILGUN_API_KEY') + if self.MAILGUN_API_KEY is None: + log.warning('MAILGUN_API_KEY is not set') + self.enable = False + + def send(self, subject, content): + if self.enable: + subject = str(subject) + content = str(content) + try: + return requests.post(f'https://api.mailgun.net/v3/{_source}/messages', + auth=('api', self.MAILGUN_API_KEY), + data={ + 'from': + f'🤖 ', + 'to': [f'{_target}'], + 'subject': + f'[{self.exp_id}] {subject}', + 'text': + ('\n\n' + content + '\n\n\n' + + datetime.now(timezone(my_timezone)).strftime(_fmt)), + }, + timeout=20) + except Exception as e: + log.error(f'Failed to send email: {e}') diff --git a/postprocessing/mmaudio/utils/log_integrator.py b/postprocessing/mmaudio/utils/log_integrator.py new file mode 100644 index 0000000..8479c8f --- /dev/null +++ b/postprocessing/mmaudio/utils/log_integrator.py @@ -0,0 +1,112 @@ +""" +Integrate numerical values for some iterations +Typically used for loss computation / logging to tensorboard +Call finalize and create a new Integrator when you want to display/log +""" +from typing import Callable, Union + +import torch + +from .logger import TensorboardLogger +from .tensor_utils import distribute_into_histogram + + +class Integrator: + + def __init__(self, logger: TensorboardLogger, distributed: bool = True): + self.values = {} + self.counts = {} + self.hooks = [] # List is used here to maintain insertion order + + # for binned tensors + self.binned_tensors = {} + self.binned_tensor_indices = {} + + self.logger = logger + + self.distributed = distributed + self.local_rank = torch.distributed.get_rank() + self.world_size = torch.distributed.get_world_size() + + def add_scalar(self, key: str, x: Union[torch.Tensor, int, float]): + if isinstance(x, torch.Tensor): + x = x.detach() + if x.dtype in [torch.long, torch.int, torch.bool]: + x = x.float() + + if key not in self.values: + self.counts[key] = 1 + self.values[key] = x + else: + self.counts[key] += 1 + self.values[key] += x + + def add_dict(self, tensor_dict: dict[str, torch.Tensor]): + for k, v in tensor_dict.items(): + self.add_scalar(k, v) + + def add_binned_tensor(self, key: str, x: torch.Tensor, indices: torch.Tensor): + if key not in self.binned_tensors: + self.binned_tensors[key] = [x.detach().flatten()] + self.binned_tensor_indices[key] = [indices.detach().flatten()] + else: + 
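+            # later calls accumulate; finalize() concatenates everything and bins it with distribute_into_histogram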
self.binned_tensors[key].append(x.detach().flatten()) + self.binned_tensor_indices[key].append(indices.detach().flatten()) + + def add_hook(self, hook: Callable[[torch.Tensor], tuple[str, torch.Tensor]]): + """ + Adds a custom hook, i.e. compute new metrics using values in the dict + The hook takes the dict as argument, and returns a (k, v) tuple + e.g. for computing IoU + """ + self.hooks.append(hook) + + def reset_except_hooks(self): + self.values = {} + self.counts = {} + + # Average and output the metrics + def finalize(self, prefix: str, it: int, ignore_timer: bool = False) -> None: + + for hook in self.hooks: + k, v = hook(self.values) + self.add_scalar(k, v) + + # for the metrics + outputs = {} + for k, v in self.values.items(): + avg = v / self.counts[k] + if self.distributed: + # Inplace operation + if isinstance(avg, torch.Tensor): + avg = avg.cuda() + else: + avg = torch.tensor(avg).cuda() + torch.distributed.reduce(avg, dst=0) + + if self.local_rank == 0: + avg = (avg / self.world_size).cpu().item() + outputs[k] = avg + else: + # Simple does it + outputs[k] = avg + + if (not self.distributed) or (self.local_rank == 0): + self.logger.log_metrics(prefix, outputs, it, ignore_timer=ignore_timer) + + # for the binned tensors + for k, v in self.binned_tensors.items(): + x = torch.cat(v, dim=0) + indices = torch.cat(self.binned_tensor_indices[k], dim=0) + hist, count = distribute_into_histogram(x, indices) + + if self.distributed: + torch.distributed.reduce(hist, dst=0) + torch.distributed.reduce(count, dst=0) + if self.local_rank == 0: + hist = hist / count + else: + hist = hist / count + + if (not self.distributed) or (self.local_rank == 0): + self.logger.log_histogram(f'{prefix}/{k}', hist, it) diff --git a/postprocessing/mmaudio/utils/logger.py b/postprocessing/mmaudio/utils/logger.py new file mode 100644 index 0000000..bd8cea2 --- /dev/null +++ b/postprocessing/mmaudio/utils/logger.py @@ -0,0 +1,231 @@ +""" +Dumps things to tensorboard and console +""" + +import datetime +import logging +import math +import os +from collections import defaultdict +from pathlib import Path +from typing import Optional, Union + +import matplotlib.pyplot as plt +import numpy as np +import torch +import torchaudio +from PIL import Image +from pytz import timezone +from torch.utils.tensorboard import SummaryWriter + +from .email_utils import EmailSender +from .time_estimator import PartialTimeEstimator, TimeEstimator +from .timezone import my_timezone + + +def tensor_to_numpy(image: torch.Tensor): + image_np = (image.numpy() * 255).astype('uint8') + return image_np + + +def detach_to_cpu(x: torch.Tensor): + return x.detach().cpu() + + +def fix_width_trunc(x: float): + return ('{:.9s}'.format('{:0.9f}'.format(x))) + + +def plot_spectrogram(spectrogram: np.ndarray, title=None, ylabel="freq_bin", ax=None): + if ax is None: + _, ax = plt.subplots(1, 1) + if title is not None: + ax.set_title(title) + ax.set_ylabel(ylabel) + ax.imshow(spectrogram, origin="lower", aspect="auto", interpolation="nearest") + + +class TensorboardLogger: + + def __init__(self, + exp_id: str, + run_dir: Union[Path, str], + py_logger: logging.Logger, + *, + is_rank0: bool = False, + enable_email: bool = False): + self.exp_id = exp_id + self.run_dir = Path(run_dir) + self.py_log = py_logger + self.email_sender = EmailSender(exp_id, enable=(is_rank0 and enable_email)) + if is_rank0: + self.tb_log = SummaryWriter(run_dir) + else: + self.tb_log = None + + # Get current git info for logging + try: + import git + repo = git.Repo(".") + 
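+            # GitPython is optional: any failure here is caught below and git_info falls back to 'None'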
git_info = str(repo.active_branch) + ' ' + str(repo.head.commit.hexsha) + except (ImportError, RuntimeError, TypeError): + print('Failed to fetch git info. Defaulting to None') + git_info = 'None' + + self.log_string('git', git_info) + + # log the SLURM job id if available + job_id = os.environ.get('SLURM_JOB_ID', None) + if job_id is not None: + self.log_string('slurm_job_id', job_id) + self.email_sender.send(f'Job {job_id} started', f'Job started {run_dir}') + + # used when logging metrics + self.batch_timer: TimeEstimator = None + self.data_timer: PartialTimeEstimator = None + + self.nan_count = defaultdict(int) + + def log_scalar(self, tag: str, x: float, it: int): + if self.tb_log is None: + return + if math.isnan(x) and 'grad_norm' not in tag: + self.nan_count[tag] += 1 + if self.nan_count[tag] == 10: + self.email_sender.send( + f'Nan detected in {tag} @ {self.run_dir}', + f'Nan detected in {tag} at iteration {it}; run_dir: {self.run_dir}') + else: + self.nan_count[tag] = 0 + self.tb_log.add_scalar(tag, x, it) + + def log_metrics(self, + prefix: str, + metrics: dict[str, float], + it: int, + ignore_timer: bool = False): + msg = f'{self.exp_id}-{prefix} - it {it:6d}: ' + metrics_msg = '' + for k, v in sorted(metrics.items()): + self.log_scalar(f'{prefix}/{k}', v, it) + metrics_msg += f'{k: >10}:{v:.7f},\t' + + if self.batch_timer is not None and not ignore_timer: + self.batch_timer.update() + avg_time = self.batch_timer.get_and_reset_avg_time() + data_time = self.data_timer.get_and_reset_avg_time() + + # add time to tensorboard + self.log_scalar(f'{prefix}/avg_time', avg_time, it) + self.log_scalar(f'{prefix}/data_time', data_time, it) + + est = self.batch_timer.get_est_remaining(it) + est = datetime.timedelta(seconds=est) + if est.days > 0: + remaining_str = f'{est.days}d {est.seconds // 3600}h' + else: + remaining_str = f'{est.seconds // 3600}h {(est.seconds%3600) // 60}m' + eta = datetime.datetime.now(timezone(my_timezone)) + est + eta_str = eta.strftime('%Y-%m-%d %H:%M:%S %Z%z') + time_msg = f'avg_time:{avg_time:.3f},data:{data_time:.3f},remaining:{remaining_str},eta:{eta_str},\t' + msg = f'{msg} {time_msg}' + + msg = f'{msg} {metrics_msg}' + self.py_log.info(msg) + + def log_histogram(self, tag: str, hist: torch.Tensor, it: int): + if self.tb_log is None: + return + # hist should be a 1D tensor + hist = hist.cpu().numpy() + fig, ax = plt.subplots() + x_range = np.linspace(0, 1, len(hist)) + ax.bar(x_range, hist, width=1 / (len(hist) - 1)) + ax.set_xticks(x_range) + ax.set_xticklabels(x_range) + plt.tight_layout() + self.tb_log.add_figure(tag, fig, it) + plt.close() + + def log_image(self, prefix: str, tag: str, image: np.ndarray, it: int): + image_dir = self.run_dir / f'{prefix}_images' + image_dir.mkdir(exist_ok=True, parents=True) + + image = Image.fromarray(image) + image.save(image_dir / f'{it:09d}_{tag}.png') + + def log_audio(self, + prefix: str, + tag: str, + waveform: torch.Tensor, + it: Optional[int] = None, + *, + subdir: Optional[Path] = None, + sample_rate: int = 16000) -> Path: + if subdir is None: + audio_dir = self.run_dir / prefix + else: + audio_dir = self.run_dir / subdir / prefix + audio_dir.mkdir(exist_ok=True, parents=True) + + if it is None: + name = f'{tag}.flac' + else: + name = f'{it:09d}_{tag}.flac' + + torchaudio.save(audio_dir / name, + waveform.cpu().float(), + sample_rate=sample_rate, + channels_first=True) + return Path(audio_dir) + + def log_spectrogram( + self, + prefix: str, + tag: str, + spec: torch.Tensor, + it: Optional[int], + *, + subdir: 
Optional[Path] = None, + ): + if subdir is None: + spec_dir = self.run_dir / prefix + else: + spec_dir = self.run_dir / subdir / prefix + spec_dir.mkdir(exist_ok=True, parents=True) + + if it is None: + name = f'{tag}.png' + else: + name = f'{it:09d}_{tag}.png' + + plot_spectrogram(spec.cpu().float()) + plt.tight_layout() + plt.savefig(spec_dir / name) + plt.close() + + def log_string(self, tag: str, x: str): + self.py_log.info(f'{tag} - {x}') + if self.tb_log is None: + return + self.tb_log.add_text(tag, x) + + def debug(self, x): + self.py_log.debug(x) + + def info(self, x): + self.py_log.info(x) + + def warning(self, x): + self.py_log.warning(x) + + def error(self, x): + self.py_log.error(x) + + def critical(self, x): + self.py_log.critical(x) + + self.email_sender.send(f'Error occurred in {self.run_dir}', x) + + def complete(self): + self.email_sender.send(f'Job completed in {self.run_dir}', 'Job completed') diff --git a/postprocessing/mmaudio/utils/synthesize_ema.py b/postprocessing/mmaudio/utils/synthesize_ema.py new file mode 100644 index 0000000..eb36e39 --- /dev/null +++ b/postprocessing/mmaudio/utils/synthesize_ema.py @@ -0,0 +1,19 @@ +from typing import Optional + +# from nitrous_ema import PostHocEMA +from omegaconf import DictConfig + +from ..model.networks import get_my_mmaudio + + +def synthesize_ema(cfg: DictConfig, sigma: float, step: Optional[int]): + vae = get_my_mmaudio(cfg.model) + emas = PostHocEMA(vae, + sigma_rels=cfg.ema.sigma_rels, + update_every=cfg.ema.update_every, + checkpoint_every_num_steps=cfg.ema.checkpoint_every, + checkpoint_folder=cfg.ema.checkpoint_folder) + + synthesized_ema = emas.synthesize_ema_model(sigma_rel=sigma, step=step, device='cpu') + state_dict = synthesized_ema.ema_model.state_dict() + return state_dict diff --git a/postprocessing/mmaudio/utils/tensor_utils.py b/postprocessing/mmaudio/utils/tensor_utils.py new file mode 100644 index 0000000..b650955 --- /dev/null +++ b/postprocessing/mmaudio/utils/tensor_utils.py @@ -0,0 +1,14 @@ +import torch + + +def distribute_into_histogram(loss: torch.Tensor, + t: torch.Tensor, + num_bins: int = 25) -> tuple[torch.Tensor, torch.Tensor]: + loss = loss.detach().flatten() + t = t.detach().flatten() + t = (t * num_bins).long() + hist = torch.zeros(num_bins, device=loss.device) + count = torch.zeros(num_bins, device=loss.device) + hist.scatter_add_(0, t, loss) + count.scatter_add_(0, t, torch.ones_like(loss)) + return hist, count diff --git a/postprocessing/mmaudio/utils/time_estimator.py b/postprocessing/mmaudio/utils/time_estimator.py new file mode 100644 index 0000000..62ff3ca --- /dev/null +++ b/postprocessing/mmaudio/utils/time_estimator.py @@ -0,0 +1,72 @@ +import time + + +class TimeEstimator: + + def __init__(self, total_iter: int, step_size: int, ema_alpha: float = 0.7): + self.avg_time_window = [] # window-based average + self.exp_avg_time = None # exponential moving average + self.alpha = ema_alpha # for exponential moving average + + self.last_time = time.time() # would not be accurate for the first iteration but well + self.total_iter = total_iter + self.step_size = step_size + + self._buffering_exp = True + + # call this at a fixed interval + # does not have to be every step + def update(self): + curr_time = time.time() + time_per_iter = curr_time - self.last_time + self.last_time = curr_time + + self.avg_time_window.append(time_per_iter) + + if self._buffering_exp: + if self.exp_avg_time is not None: + # discard the first iteration call to not pollute the ema + self._buffering_exp = False + 
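distribute_into_histogram() above buckets per-element losses by their normalised timestep so the binned averages can later be passed to log_histogram(). A small usage sketch, assuming the module path used in this diff and timesteps lying in [0, 1):

```
import torch
from postprocessing.mmaudio.utils.tensor_utils import distribute_into_histogram

loss = torch.rand(4096)            # per-element losses
t = torch.rand(4096)               # matching timesteps, assumed to lie in [0, 1)

hist, count = distribute_into_histogram(loss, t, num_bins=25)
avg_loss_per_bin = hist / count.clamp(min=1)   # guard against empty bins
print(avg_loss_per_bin.shape)                  # torch.Size([25])
```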
self.exp_avg_time = time_per_iter + else: + self.exp_avg_time = self.alpha * self.exp_avg_time + (1 - self.alpha) * time_per_iter + + def get_est_remaining(self, it: int): + if self.exp_avg_time is None: + return 0 + + remaining_iter = self.total_iter - it + return remaining_iter * self.exp_avg_time / self.step_size + + def get_and_reset_avg_time(self): + avg = sum(self.avg_time_window) / len(self.avg_time_window) / self.step_size + self.avg_time_window = [] + return avg + + +class PartialTimeEstimator(TimeEstimator): + """ + Used where the start_time and the end_time do not align + """ + + def update(self): + raise RuntimeError('Please use start() and end() for PartialTimeEstimator') + + def start(self): + self.last_time = time.time() + + def end(self): + assert self.last_time is not None, 'Please call start() before calling end()' + curr_time = time.time() + time_per_iter = curr_time - self.last_time + self.last_time = None + + self.avg_time_window.append(time_per_iter) + + if self._buffering_exp: + if self.exp_avg_time is not None: + # discard the first iteration call to not pollute the ema + self._buffering_exp = False + self.exp_avg_time = time_per_iter + else: + self.exp_avg_time = self.alpha * self.exp_avg_time + (1 - self.alpha) * time_per_iter diff --git a/postprocessing/mmaudio/utils/timezone.py b/postprocessing/mmaudio/utils/timezone.py new file mode 100644 index 0000000..4c7f0e6 --- /dev/null +++ b/postprocessing/mmaudio/utils/timezone.py @@ -0,0 +1 @@ +my_timezone = 'US/Central' diff --git a/postprocessing/mmaudio/utils/video_joiner.py b/postprocessing/mmaudio/utils/video_joiner.py new file mode 100644 index 0000000..1a05ae8 --- /dev/null +++ b/postprocessing/mmaudio/utils/video_joiner.py @@ -0,0 +1,66 @@ +from pathlib import Path +from typing import Union + +import torch +from torio.io import StreamingMediaDecoder, StreamingMediaEncoder + + +class VideoJoiner: + + def __init__(self, src_root: Union[str, Path], output_root: Union[str, Path], sample_rate: int, + duration_seconds: float): + self.src_root = Path(src_root) + self.output_root = Path(output_root) + self.sample_rate = sample_rate + self.duration_seconds = duration_seconds + + self.output_root.mkdir(parents=True, exist_ok=True) + + def join(self, video_id: str, output_name: str, audio: torch.Tensor): + video_path = self.src_root / f'{video_id}.mp4' + output_path = self.output_root / f'{output_name}.mp4' + merge_audio_into_video(video_path, output_path, audio, self.sample_rate, + self.duration_seconds) + + +def merge_audio_into_video(video_path: Union[str, Path], output_path: Union[str, Path], + audio: torch.Tensor, sample_rate: int, duration_seconds: float): + # audio: (num_samples, num_channels=1/2) + + frame_rate = 24 + # read the video + reader = StreamingMediaDecoder(video_path) + reader.add_basic_video_stream( + frames_per_chunk=int(frame_rate * duration_seconds), + # buffer_chunk_size=1, # does not work with this -- extracted audio would be too short + format="rgb24", + frame_rate=frame_rate, + ) + + reader.fill_buffer() + video_chunk = reader.pop_chunks()[0] + t, _, h, w = video_chunk.shape + + writer = StreamingMediaEncoder(output_path) + writer.add_audio_stream( + sample_rate=sample_rate, + num_channels=audio.shape[-1], + encoder="libmp3lame", + ) + writer.add_video_stream(frame_rate=frame_rate, + width=w, + height=h, + format="rgb24", + encoder="libx264", + encoder_format="yuv420p") + + with writer.open(): + writer.write_audio_chunk(0, audio.float()) + writer.write_video_chunk(1, video_chunk) + + +if 
__name__ == '__main__': + # Usage example + import sys + audio = torch.randn(16000 * 4, 1) + merge_audio_into_video(sys.argv[1], sys.argv[2], audio, 16000, 4) diff --git a/rife/IFNet_HDv3.py b/postprocessing/rife/IFNet_HDv3.py similarity index 100% rename from rife/IFNet_HDv3.py rename to postprocessing/rife/IFNet_HDv3.py diff --git a/rife/RIFE_HDv3.py b/postprocessing/rife/RIFE_HDv3.py similarity index 100% rename from rife/RIFE_HDv3.py rename to postprocessing/rife/RIFE_HDv3.py diff --git a/rife/inference.py b/postprocessing/rife/inference.py similarity index 100% rename from rife/inference.py rename to postprocessing/rife/inference.py diff --git a/rife/ssim.py b/postprocessing/rife/ssim.py similarity index 100% rename from rife/ssim.py rename to postprocessing/rife/ssim.py diff --git a/preprocessing/matanyone/app.py b/preprocessing/matanyone/app.py index c9df1b4..a6146d8 100644 --- a/preprocessing/matanyone/app.py +++ b/preprocessing/matanyone/app.py @@ -395,19 +395,20 @@ def show_outputs(): return gr.update(visible=True), gr.update(visible=True) def add_audio_to_video(video_path, audio_path, output_path): - try: - video_input = ffmpeg.input(video_path) - audio_input = ffmpeg.input(audio_path) + pass + # try: + # video_input = ffmpeg.input(video_path) + # audio_input = ffmpeg.input(audio_path) - _ = ( - ffmpeg - .output(video_input, audio_input, output_path, vcodec="copy", acodec="aac") - .run(overwrite_output=True, capture_stdout=True, capture_stderr=True) - ) - return output_path - except ffmpeg.Error as e: - print(f"FFmpeg error:\n{e.stderr.decode()}") - return None + # _ = ( + # ffmpeg + # .output(video_input, audio_input, output_path, vcodec="copy", acodec="aac") + # .run(overwrite_output=True, capture_stdout=True, capture_stderr=True) + # ) + # return output_path + # except ffmpeg.Error as e: + # print(f"FFmpeg error:\n{e.stderr.decode()}") + # return None def generate_video_from_frames(frames, output_path, fps=30, gray2rgb=False, audio_path=""): @@ -542,7 +543,7 @@ def teleport_to_video_tab(tab_state): return gr.Tabs(selected="video_gen") -def display(tabs, tab_state, model_choice, vace_video_input, vace_video_mask, vace_image_refs, video_prompt_video_guide_trigger): +def display(tabs, tab_state, model_choice, vace_video_input, vace_video_mask, vace_image_refs): # my_tab.select(fn=load_unload_models, inputs=[], outputs=[]) media_url = "https://github.com/pq-yang/MatAnyone/releases/download/media/" @@ -879,7 +880,7 @@ def display(tabs, tab_state, model_choice, vace_video_input, vace_video_mask, va alpha_output_button = gr.Button(value="Alpha Mask Output", visible=False, elem_classes="new_button") export_image_btn.click( fn=export_image, inputs= [vace_image_refs, foreground_image_output], outputs= [vace_image_refs]).then( #video_prompt_video_guide_trigger, - fn=teleport_to_video_tab, inputs= [], outputs= [tabs]) + fn=teleport_to_video_tab, inputs= [tab_state], outputs= [tabs]) # first step: get the image information extract_frames_button.click( diff --git a/requirements.txt b/requirements.txt index a948cf7..54f5d2b 100644 --- a/requirements.txt +++ b/requirements.txt @@ -17,7 +17,7 @@ gradio==5.23.0 numpy>=1.23.5,<2 einops moviepy==1.0.3 -mmgp==3.4.9 +mmgp==3.5.0 peft==0.15.0 mutagen pydantic==2.10.6 @@ -37,3 +37,8 @@ opencv-python pygame>=2.1.0 sounddevice>=0.4.0 # rembg==2.0.65 +torchdiffeq >= 0.2.5 +# 'nitrous-ema', +# 'hydra_colorlog', +tensordict >= 0.6.1 +open_clip_torch >= 2.29.0 \ No newline at end of file diff --git a/wan/diffusion_forcing.py b/wan/diffusion_forcing.py 
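The TimeEstimator defined earlier keeps both a windowed average and an exponential moving average of the per-iteration time; update() is expected to be called once every step_size iterations rather than on every iteration. A usage sketch with illustrative numbers:

```
import time
from postprocessing.mmaudio.utils.time_estimator import TimeEstimator

total_iter, step_size = 10_000, 100
estimator = TimeEstimator(total_iter, step_size)

for it in range(0, total_iter, step_size):
    time.sleep(0.01)                 # stands in for step_size training iterations
    estimator.update()
    if it > 0 and it % 1_000 == 0:
        avg = estimator.get_and_reset_avg_time()   # seconds per single iteration
        eta = estimator.get_est_remaining(it)      # estimated seconds left for the run
        print(f"it {it}: {avg:.4f}s/iter, ~{eta:.0f}s remaining")
```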
index 774a444..4a737f1 100644 --- a/wan/diffusion_forcing.py +++ b/wan/diffusion_forcing.py @@ -19,6 +19,7 @@ from wan.utils.utils import calculate_new_dimensions from .utils.fm_solvers import (FlowDPMSolverMultistepScheduler, get_sampling_sigmas, retrieve_timesteps) from .utils.fm_solvers_unipc import FlowUniPCMultistepScheduler +from wgp import update_loras_slists class DTT2V: @@ -216,6 +217,7 @@ class DTT2V: slg_start = 0.0, slg_end = 1.0, callback = None, + loras_slists = None, **bbargs ): self._interrupt = False @@ -316,8 +318,9 @@ class DTT2V: updated_num_steps= len(step_matrix) if callback != None: + update_loras_slists(self.model, loras_slists, updated_num_steps) callback(-1, None, True, override_num_inference_steps = updated_num_steps) - if self.model.enable_cache: + if self.model.enable_cache == "tea": x_count = 2 if self.do_classifier_free_guidance else 1 self.model.previous_residual = [None] * x_count time_steps_comb = [] @@ -328,8 +331,10 @@ class DTT2V: if overlap_noise > 0 and valid_interval_start < predix_video_latent_length: timestep[:, valid_interval_start:predix_video_latent_length] = overlap_noise time_steps_comb.append(timestep) - self.model.compute_teacache_threshold(self.model.cache_start_step, time_steps_comb, self.model.teacache_multiplier) + self.model.compute_teacache_threshold(self.model.cache_start_step, time_steps_comb, self.model.cache_multiplier) del time_steps_comb + else: + trans.enable_cache == None from mmgp import offload freqs = get_rotary_pos_embed(latents.shape[1 :], enable_RIFLEx= False) kwrags = { diff --git a/fantasytalking/infer.py b/wan/fantasytalking/infer.py similarity index 96% rename from fantasytalking/infer.py rename to wan/fantasytalking/infer.py index e7bdc6f..80d1945 100644 --- a/fantasytalking/infer.py +++ b/wan/fantasytalking/infer.py @@ -10,7 +10,7 @@ def parse_audio(audio_path, num_frames, fps = 23, device = "cuda"): fantasytalking = FantasyTalkingAudioConditionModel(None, 768, 2048).to(device) from mmgp import offload from accelerate import init_empty_weights - from fantasytalking.model import AudioProjModel + from .model import AudioProjModel torch.set_grad_enabled(False) diff --git a/fantasytalking/model.py b/wan/fantasytalking/model.py similarity index 100% rename from fantasytalking/model.py rename to wan/fantasytalking/model.py diff --git a/fantasytalking/utils.py b/wan/fantasytalking/utils.py similarity index 100% rename from fantasytalking/utils.py rename to wan/fantasytalking/utils.py diff --git a/wan/image2video.py b/wan/image2video.py index 51656e4..3c14fc5 100644 --- a/wan/image2video.py +++ b/wan/image2video.py @@ -283,8 +283,12 @@ class WanI2V: # evaluation mode - - if sample_solver == 'unipc': + if sample_solver == 'causvid': + sample_scheduler = FlowMatchScheduler(num_inference_steps=sampling_steps, shift=shift, sigma_min=0, extra_one_step=True) + timesteps = torch.tensor([1000, 934, 862, 756, 603, 410, 250, 140, 74])[:sampling_steps].to(self.device) + sample_scheduler.timesteps =timesteps + sample_scheduler.sigmas = torch.cat([sample_scheduler.timesteps / 1000, torch.tensor([0.], device=self.device)]) + elif sample_solver == 'unipc' or sample_solver == "": sample_scheduler = FlowUniPCMultistepScheduler( num_train_timesteps=self.num_train_timesteps, shift=1, @@ -303,7 +307,7 @@ class WanI2V: device=self.device, sigmas=sampling_sigmas) else: - raise NotImplementedError("Unsupported solver.") + raise NotImplementedError("Unsupported scheduler.") # sample videos latent = noise @@ -317,10 +321,16 @@ class WanI2V: 
"audio_proj": audio_proj.to(self.dtype), "audio_context_lens": audio_context_lens, }) - - if self.model.enable_cache: - self.model.previous_residual = [None] * (3 if audio_cfg_scale !=None else 2) - self.model.compute_teacache_threshold(self.model.cache_start_step, timesteps, self.model.teacache_multiplier) + cache_type = self.model.enable_cache + if cache_type != None: + x_count = 3 if audio_cfg_scale !=None else 2 + self.model.previous_residual = [None] * x_count + if cache_type == "tea": + self.model.compute_teacache_threshold(self.model.cache_start_step, timesteps, self.model.cache_multiplier) + else: + self.model.compute_magcache_threshold(self.model.cache_start_step, timesteps, self.model.cache_multiplier) + self.model.accumulated_err, self.model.accumulated_steps, self.model.accumulated_ratio = [0.0] * x_count, [0] * x_count, [1.0] * x_count + self.model.one_for_all = x_count > 2 # self.model.to(self.device) if callback != None: diff --git a/wan/modules/model.py b/wan/modules/model.py index edf04e2..d276b2f 100644 --- a/wan/modules/model.py +++ b/wan/modules/model.py @@ -646,6 +646,7 @@ class WanModel(ModelMixin, ConfigMixin): for k,v in sd.items(): k = k.replace("lora_unet_blocks_","diffusion_model.blocks.") + k = k.replace("lora_unet__blocks_","diffusion_model.blocks.") for s,t in zip(src_list, tgt_list): k = k.replace(s,t) @@ -653,33 +654,15 @@ class WanModel(ModelMixin, ConfigMixin): k = k.replace("lora_up","lora_B") k = k.replace("lora_down","lora_A") - if "alpha" in k: - alphas[k] = v - else: - new_sd[k] = v + new_sd[k] = v - new_alphas = {} - for k,v in new_sd.items(): - if "lora_B" in k: - dim = v.shape[1] - elif "lora_A" in k: - dim = v.shape[0] - else: - continue - alpha_key = k[:-len("lora_X.weight")] +"alpha" - if alpha_key in alphas: - scale = alphas[alpha_key] / dim - new_alphas[alpha_key] = scale - else: - print(f"Lora alpha'{alpha_key}' is missing") - new_sd.update(new_alphas) sd = new_sd from wgp import test_class_i2v if not test_class_i2v(model_type): new_sd = {} # convert loras for i2v to t2v for k,v in sd.items(): - if any(layer in k for layer in ["cross_attn.k_img", "cross_attn.v_img"]): + if any(layer in k for layer in ["cross_attn.k_img", "cross_attn.v_img", "img_emb."]): continue new_sd[k] = v sd = new_sd @@ -849,7 +832,7 @@ class WanModel(ModelMixin, ConfigMixin): block.projector.bias = nn.Parameter(torch.zeros(dim)) if fantasytalking_dim > 0: - from fantasytalking.model import WanCrossAttentionProcessor + from wan.fantasytalking.model import WanCrossAttentionProcessor for block in self.blocks: block.cross_attn.processor = WanCrossAttentionProcessor(fantasytalking_dim, dim) @@ -891,6 +874,66 @@ class WanModel(ModelMixin, ConfigMixin): self._lock_dtype = dtype + def compute_magcache_threshold(self, start_step, timesteps = None, speed_factor =0): + def nearest_interp(src_array, target_length): + src_length = len(src_array) + if target_length == 1: return np.array([src_array[-1]]) + scale = (src_length - 1) / (target_length - 1) + mapped_indices = np.round(np.arange(target_length) * scale).astype(int) + return src_array[mapped_indices] + num_inference_steps = len(timesteps) + if len(self.def_mag_ratios) != num_inference_steps*2: + mag_ratio_con = nearest_interp(self.def_mag_ratios[0::2], num_inference_steps) + mag_ratio_ucon = nearest_interp(self.def_mag_ratios[1::2], num_inference_steps) + interpolated_mag_ratios = np.concatenate([mag_ratio_con.reshape(-1, 1), mag_ratio_ucon.reshape(-1, 1)], axis=1).reshape(-1) + self.mag_ratios = interpolated_mag_ratios + 
else: + self.mag_ratios = self.def_mag_ratios + + + best_deltas = None + best_threshold = 0.01 + best_diff = 1000 + best_signed_diff = 1000 + target_nb_steps= int(len(timesteps) / speed_factor) + threshold = 0.01 + x_id_max = 1 + while threshold <= 0.6: + nb_steps = 0 + diff = 1000 + accumulated_err, accumulated_steps, accumulated_ratio = [0] * x_id_max , [0] * x_id_max, [1.0] * x_id_max + for i, t in enumerate(timesteps): + if i<=start_step: + skip = False + x_should_calc = [True] * x_id_max + else: + x_should_calc = [] + for cur_x_id in range(x_id_max): + cur_mag_ratio = self.mag_ratios[i * 2 + cur_x_id] # conditional and unconditional in one list + accumulated_ratio[cur_x_id] *= cur_mag_ratio # magnitude ratio between current step and the cached step + accumulated_steps[cur_x_id] += 1 # skip steps plus 1 + cur_skip_err = np.abs(1-accumulated_ratio[cur_x_id]) # skip error of current steps + accumulated_err[cur_x_id] += cur_skip_err # accumulated error of multiple steps + if accumulated_err[cur_x_id] best_diff: + break + threshold += 0.01 + self.magcache_thresh = best_threshold + print(f"Mag Cache, best threshold found:{best_threshold:0.2f} with gain x{len(timesteps)/(target_nb_steps - best_signed_diff):0.2f} for a target of x{speed_factor}") + return best_threshold def compute_teacache_threshold(self, start_step, timesteps = None, speed_factor =0): modulation_dtype = self.time_projection[1].weight.dtype @@ -1073,48 +1116,84 @@ class WanModel(ModelMixin, ConfigMixin): kwargs['context_scale'] = vace_context_scale hints_list = [ [ [sub_c] for sub_c in c] for _ in range(len(x_list)) ] del c - should_calc = True - if self.enable_cache: - if x_id != 0: - should_calc = self.should_calc - else: - if current_step <= self.cache_start_step or current_step == self.num_steps-1: + x_should_calc = None + if self.enable_cache != None: + if self.enable_cache == "mag": + if current_step <= self.cache_start_step: should_calc = True - self.accumulated_rel_l1_distance = 0 + elif self.one_for_all and x_id != 0: # not joint pass, not main pas, one for all + assert len(x_list) == 1 + should_calc = self.should_calc else: - rescale_func = np.poly1d(self.coefficients) - delta = abs(rescale_func(((e-self.previous_modulated_input).abs().mean() / self.previous_modulated_input.abs().mean()).cpu().item())) - self.accumulated_rel_l1_distance += delta - if self.accumulated_rel_l1_distance < self.rel_l1_thresh: - should_calc = False - self.teacache_skipped_steps += 1 - # print(f"Teacache Skipped Step no {current_step} ({self.teacache_skipped_steps}/{current_step}), delta={delta}" ) - else: + x_should_calc = [] + for i in range(1 if self.one_for_all else len(x_list)): + cur_x_id = i if joint_pass else x_id + cur_mag_ratio = self.mag_ratios[current_step * 2 + cur_x_id] # conditional and unconditional in one list + self.accumulated_ratio[cur_x_id] *= cur_mag_ratio # magnitude ratio between current step and the cached step + self.accumulated_steps[cur_x_id] += 1 # skip steps plus 1 + cur_skip_err = np.abs(1-self.accumulated_ratio[cur_x_id]) # skip error of current steps + self.accumulated_err[cur_x_id] += cur_skip_err # accumulated error of multiple steps + if self.accumulated_err[cur_x_id] torch.Tensor: + if pv_accum_dtype == None: + pv_accum_dtype = "fp32+fp16" if sg2pp else "fp32+fp32" + """ SageAttention with INT8 quantization for Q and K, FP8 PV with FP32 accumulation, implemented using CUDA. 
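compute_magcache_threshold() above searches for the smallest error threshold whose simulated skip pattern matches the requested speed-up. A condensed, hedged sketch of that calibration idea; the function below is illustrative, only the conditional-pass ratios are considered, and the original's bookkeeping and tie-breaking details are simplified:

```
import numpy as np

def calibrate_magcache_threshold(mag_ratios: np.ndarray, num_steps: int,
                                 speed_factor: float, start_step: int = 0) -> float:
    # Scan candidate thresholds and keep the one whose simulated number of
    # executed steps lands closest to num_steps / speed_factor.
    target_steps = int(num_steps / speed_factor)
    best_threshold, best_diff = 0.01, float("inf")
    threshold = 0.01
    while threshold <= 0.6:
        executed, acc_err, acc_ratio = 0, 0.0, 1.0
        for i in range(num_steps):
            if i <= start_step:
                executed += 1
                continue
            acc_ratio *= mag_ratios[2 * i]      # conditional-pass ratio for this step
            acc_err += abs(1.0 - acc_ratio)     # accumulated skip error
            if acc_err < threshold:
                continue                        # cheap enough: reuse the cached residual
            executed += 1                       # recompute and reset the accumulators
            acc_err, acc_ratio = 0.0, 1.0
        diff = abs(executed - target_steps)
        if diff < best_diff:
            best_diff, best_threshold = diff, threshold
        threshold += 0.01
    return best_threshold
```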
@@ -687,6 +707,12 @@ def sageattn_qk_int8_pv_fp8_cuda( assert q.device == k.device == v.device, "All tensors must be on the same device." assert q.dtype == k.dtype == v.dtype, "All tensors must have the same dtype." + # if sg2pp: + # cuda_major_version, cuda_minor_version = get_cuda_version() + # if(cuda_major_version, cuda_minor_version) < (12, 8) and pv_accum_dtype == 'fp32+fp16': + # warnings.warn("cuda version < 12.8, change pv_accum_dtype to 'fp32+fp32'") + # pv_accum_dtype = 'fp32+fp32' + # FIXME(DefTruth): make sage attention work compatible with distributed # env, for example, xDiT which launch by torchrun. Without this workaround, # sage attention will run into illegal memory access error after first @@ -742,8 +768,18 @@ def sageattn_qk_int8_pv_fp8_cuda( if pv_accum_dtype == 'fp32+fp32' and smooth_v: warnings.warn("pv_accum_dtype is 'fp32+fp32', smooth_v will be ignored.") smooth_v = False + if sg2pp: + if pv_accum_dtype == 'fp32+fp16' and smooth_v: + warnings.warn("pv_accum_dtype is 'fp32+fp16', smooth_v will be ignored.") + smooth_v = False - v_fp8, v_scale, vm = per_channel_fp8(v, tensor_layout=tensor_layout, smooth_v=smooth_v) + quant_v_scale_max = 448.0 + if pv_accum_dtype == 'fp32+fp16': + quant_v_scale_max = 2.25 + + v_fp8, v_scale, vm = per_channel_fp8(v, tensor_layout=tensor_layout, scale_max=quant_v_scale_max, smooth_v=smooth_v) + else: + v_fp8, v_scale, vm = per_channel_fp8(v, tensor_layout=tensor_layout, smooth_v=smooth_v) del v o = torch.empty(q_size, dtype=dtype, device=q_device) if pv_accum_dtype == "fp32": @@ -753,6 +789,9 @@ def sageattn_qk_int8_pv_fp8_cuda( lse = _qattn_sm89.qk_int8_sv_f8_accum_f32_fuse_v_scale_attn(q_int8, k_int8, v_fp8, o, q_scale, k_scale, v_scale, _tensor_layout, _is_caual, _qk_quant_gran, sm_scale, _return_lse) elif pv_accum_dtype == "fp32+fp32": lse = _qattn_sm89.qk_int8_sv_f8_accum_f32_fuse_v_scale_attn_inst_buf(q_int8, k_int8, v_fp8, o, q_scale, k_scale, v_scale, _tensor_layout, _is_caual, _qk_quant_gran, sm_scale, _return_lse) + elif pv_accum_dtype == "fp32+fp16": + lse = _qattn_sm89.qk_int8_sv_f8_accum_f16_fuse_v_scale_attn_inst_buf(q_int8, k_int8, v_fp8, o, q_scale, k_scale, v_scale, _tensor_layout, _is_caual, _qk_quant_gran, sm_scale, _return_lse) + o = o[..., :head_dim_og] diff --git a/wan/text2video.py b/wan/text2video.py index dfa456b..0ef917a 100644 --- a/wan/text2video.py +++ b/wan/text2video.py @@ -28,6 +28,7 @@ from wan.modules.posemb_layers import get_rotary_pos_embed from .utils.vace_preprocessor import VaceVideoProcessor from wan.utils.basic_flowmatch import FlowMatchScheduler from wan.utils.utils import get_outpainting_frame_location +from wgp import update_loras_slists def optimized_scale(positive_flat, negative_flat): @@ -231,16 +232,16 @@ class WanT2V: canvas = canvas.to(device) return ref_img.to(device), canvas - def prepare_source(self, src_video, src_mask, src_ref_images, total_frames, image_size, device, keep_frames= [], start_frame = 0, fit_into_canvas = None, pre_src_video = None, inject_frames = [], outpainting_dims = None, any_background_ref = False): + def prepare_source(self, src_video, src_mask, src_ref_images, total_frames, image_size, device, keep_video_guide_frames= [], start_frame = 0, fit_into_canvas = None, pre_src_video = None, inject_frames = [], outpainting_dims = None, any_background_ref = False): image_sizes = [] - trim_video = len(keep_frames) + trim_video_guide = len(keep_video_guide_frames) def conv_tensor(t, device): return t.float().div_(127.5).add_(-1).permute(3, 0, 1, 2).to(device) for i, 
(sub_src_video, sub_src_mask, sub_pre_src_video) in enumerate(zip(src_video, src_mask,pre_src_video)): prepend_count = 0 if sub_pre_src_video == None else sub_pre_src_video.shape[1] num_frames = total_frames - prepend_count - num_frames = min(num_frames, trim_video) if trim_video > 0 else num_frames + num_frames = min(num_frames, trim_video_guide) if trim_video_guide > 0 and sub_src_video != None else num_frames if sub_src_mask is not None and sub_src_video is not None: src_video[i] = conv_tensor(sub_src_video[:num_frames], device) src_mask[i] = conv_tensor(sub_src_mask[:num_frames], device) @@ -253,14 +254,14 @@ class WanT2V: if src_video_shape[1] != total_frames: src_video[i] = torch.cat( [src_video[i], src_video[i].new_zeros(src_video_shape[0], total_frames -src_video_shape[1], *src_video_shape[-2:])], dim=1) src_mask[i] = torch.cat( [src_mask[i], src_mask[i].new_ones(src_video_shape[0], total_frames -src_video_shape[1], *src_video_shape[-2:])], dim=1) - src_mask[i] = torch.clamp((src_mask[i][:1, :, :, :] + 1) / 2, min=0, max=1) + src_mask[i] = torch.clamp((src_mask[i][:, :, :, :] + 1) / 2, min=0, max=1) image_sizes.append(src_video[i].shape[2:]) elif sub_src_video is None: if prepend_count > 0: src_video[i] = torch.cat( [sub_pre_src_video, torch.zeros((3, num_frames, image_size[0], image_size[1]), device=device)], dim=1) src_mask[i] = torch.cat( [torch.zeros_like(sub_pre_src_video), torch.ones((3, num_frames, image_size[0], image_size[1]), device=device)] ,1) else: - src_video[i] = torch.zeros((3, num_frames, image_size[0], image_size[1]), device=device) + src_video[i] = torch.zeros((3, total_frames, image_size[0], image_size[1]), device=device) src_mask[i] = torch.ones_like(src_video[i], device=device) image_sizes.append(image_size) else: @@ -274,7 +275,7 @@ class WanT2V: src_video[i] = torch.cat( [src_video[i], src_video[i].new_zeros(src_video_shape[0], total_frames -src_video_shape[1], *src_video_shape[-2:])], dim=1) src_mask[i] = torch.cat( [src_mask[i], src_mask[i].new_ones(src_video_shape[0], total_frames -src_video_shape[1], *src_video_shape[-2:])], dim=1) image_sizes.append(src_video[i].shape[2:]) - for k, keep in enumerate(keep_frames): + for k, keep in enumerate(keep_video_guide_frames): if not keep: src_video[i][:, k:k+1] = 0 src_mask[i][:, k:k+1] = 1 @@ -328,6 +329,7 @@ class WanT2V: input_masks = None, input_ref_images = None, input_video=None, + denoising_strength = 1.0, target_camera=None, context_scale=None, width = 1280, @@ -354,7 +356,10 @@ class WanT2V: return_latent_slice = None, overlap_noise = 0, conditioning_latents_size = 0, + keep_frames_parsed = [], model_filename = None, + model_type = None, + loras_slists = None, **bbargs ): r""" @@ -420,6 +425,10 @@ class WanT2V: cam_emb = get_camera_embedding(target_camera) cam_emb = cam_emb.to(dtype=self.dtype, device=self.device) + if denoising_strength < 1. 
and input_frames != None: + height, width = input_frames.shape[-2:] + source_latents = self.vae.encode([input_frames])[0] + if vace : # vace context encode input_frames = [u.to(self.device) for u in input_frames] @@ -464,11 +473,12 @@ class WanT2V: # evaluation mode - if False: + if sample_solver == 'causvid': sample_scheduler = FlowMatchScheduler(num_inference_steps=sampling_steps, shift=shift, sigma_min=0, extra_one_step=True) - timesteps = torch.tensor([1000, 934, 862, 756, 603, 410, 250, 140, 74, 0])[:sampling_steps].to(self.device) + timesteps = torch.tensor([1000, 934, 862, 756, 603, 410, 250, 140, 74])[:sampling_steps].to(self.device) sample_scheduler.timesteps =timesteps - elif sample_solver == 'unipc': + sample_scheduler.sigmas = torch.cat([sample_scheduler.timesteps / 1000, torch.tensor([0.], device=self.device)]) + elif sample_solver == 'unipc' or sample_solver == "": sample_scheduler = FlowUniPCMultistepScheduler( num_train_timesteps=self.num_train_timesteps, shift=1, use_dynamic_shifting=False) sample_scheduler.set_timesteps( sampling_steps, device=self.device, shift=shift) @@ -484,11 +494,32 @@ class WanT2V: device=self.device, sigmas=sampling_sigmas) else: - raise NotImplementedError("Unsupported solver.") + raise NotImplementedError(f"Unsupported Scheduler {sample_solver}") + # sample videos latents = noise[0] del noise + + injection_denoising_step = 0 + inject_from_start = False + if denoising_strength < 1 and input_frames != None: + if len(keep_frames_parsed) == 0 or all(keep_frames_parsed): keep_frames_parsed = [] + injection_denoising_step = int(sampling_steps * (1. - denoising_strength) ) + latent_keep_frames = [] + if source_latents.shape[1] < latents.shape[1] or len(keep_frames_parsed) > 0: + inject_from_start = True + if len(keep_frames_parsed) >0 : + latent_keep_frames =[keep_frames_parsed[0]] + for i in range(1, len(keep_frames_parsed), 4): + latent_keep_frames.append(all(keep_frames_parsed[i:i+4])) + else: + timesteps = timesteps[injection_denoising_step:] + if hasattr(sample_scheduler, "timesteps"): sample_scheduler.timesteps = timesteps + if hasattr(sample_scheduler, "sigmas"): sample_scheduler.sigmas= sample_scheduler.sigmas[injection_denoising_step:] + injection_denoising_step = 0 + + batch_size = 1 if target_camera != None: shape = list(latents.shape[1:]) @@ -511,11 +542,16 @@ class WanT2V: # overlapped_latents_size = 3 z_reactive = [ zz[0:16, 0:overlapped_latents_size + ref_images_count].clone() for zz in z] - - if self.model.enable_cache: + cache_type = self.model.enable_cache + if cache_type != None: x_count = 3 if phantom else 2 - self.model.previous_residual = [None] * x_count - self.model.compute_teacache_threshold(self.model.cache_start_step, timesteps, self.model.teacache_multiplier) + self.model.previous_residual = [None] * x_count + if cache_type == "tea": + self.model.compute_teacache_threshold(self.model.cache_start_step, timesteps, self.model.cache_multiplier) + else: + self.model.compute_magcache_threshold(self.model.cache_start_step, timesteps, self.model.cache_multiplier) + self.model.accumulated_err, self.model.accumulated_steps, self.model.accumulated_ratio = [0.0] * x_count, [0] * x_count, [1.0] * x_count + self.model.one_for_all = x_count > 2 if callback != None: callback(-1, None, True) @@ -524,8 +560,31 @@ class WanT2V: if chipmunk: self.model.setup_chipmunk() + updated_num_steps= len(timesteps) + if callback != None: + update_loras_slists(self.model, loras_slists, updated_num_steps) + callback(-1, None, True, 
override_num_inference_steps = updated_num_steps) + + scheduler_kwargs = {} if isinstance(sample_scheduler, FlowMatchScheduler) else {"generator": seed_g} + for i, t in enumerate(tqdm(timesteps)): timestep = [t] + + if denoising_strength < 1 and input_frames != None and i <= injection_denoising_step: + sigma = t / 1000 + noise = torch.randn( *target_shape, dtype=torch.float32, device=self.device, generator=seed_g) + if inject_from_start: + new_latents = latents.clone() + new_latents[:, :source_latents.shape[1] ] = noise[:, :source_latents.shape[1] ] * sigma + (1 - sigma) * source_latents + for latent_no, keep_latent in enumerate(latent_keep_frames): + if not keep_latent: + new_latents[:, latent_no:latent_no+1 ] = latents[:, latent_no:latent_no+1] + latents = new_latents + new_latents = None + else: + latents = noise * sigma + (1 - sigma) * source_latents + noise = None + if overlapped_latents != None : overlap_noise_factor = overlap_noise / 1000 latent_noise_factor = t / 1000 @@ -610,7 +669,6 @@ class WanT2V: noise_pred_uncond *= alpha noise_pred = noise_pred_uncond + guide_scale * (noise_pred_text - noise_pred_uncond) noise_pred_uncond, noise_pred_cond, noise_pred_text, pos_it, pos_i, neg = None, None, None, None, None, None - scheduler_kwargs = {} if isinstance(sample_scheduler, FlowMatchScheduler) else {"generator": seed_g} temp_x0 = sample_scheduler.step( noise_pred[:, :target_shape[1]].unsqueeze(0), t, diff --git a/wan/utils/utils.py b/wan/utils/utils.py index b2ee774..f9882e6 100644 --- a/wan/utils/utils.py +++ b/wan/utils/utils.py @@ -57,6 +57,33 @@ def resample(video_fps, video_frames_count, max_target_frames_count, target_fps, frame_ids = frame_ids[:max_target_frames_count] return frame_ids +import os +from datetime import datetime + +def get_file_creation_date(file_path): + # On Windows + if os.name == 'nt': + return datetime.fromtimestamp(os.path.getctime(file_path)) + # On Unix/Linux/Mac (gets last status change, not creation) + else: + stat = os.stat(file_path) + return datetime.fromtimestamp(stat.st_birthtime if hasattr(stat, 'st_birthtime') else stat.st_mtime) + +def get_video_info(video_path): + import cv2 + cap = cv2.VideoCapture(video_path) + + # Get FPS + fps = cap.get(cv2.CAP_PROP_FPS) + + # Get resolution + width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) + height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) + frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) + cap.release() + + return fps, width, height, frame_count + def get_video_frame(file_name, frame_no): decord.bridge.set_bridge('torch') reader = decord.VideoReader(file_name) diff --git a/wgp.py b/wgp.py index e9e4f31..ce2d2b8 100644 --- a/wgp.py +++ b/wgp.py @@ -40,13 +40,14 @@ logging.set_verbosity_error from preprocessing.matanyone import app as matanyone_app from tqdm import tqdm import requests + global_queue_ref = [] AUTOSAVE_FILENAME = "queue.zip" PROMPT_VARS_MAX = 10 -target_mmgp_version = "3.4.9" -WanGP_version = "6.31" -settings_version = 2 +target_mmgp_version = "3.5.0" +WanGP_version = "6.5" +settings_version = 2.1 prompt_enhancer_image_caption_model, prompt_enhancer_image_caption_processor, prompt_enhancer_llm_model, prompt_enhancer_llm_tokenizer = None, None, None, None from importlib.metadata import version @@ -58,8 +59,16 @@ lock = threading.Lock() current_task_id = None task_id = 0 vmc_event_handler = matanyone_app.get_vmc_event_handler() +unique_id = 0 +unique_id_lock = threading.Lock() +offloadobj = None +wan_model = None - +def get_unique_id(): + global unique_id + with unique_id_lock: + 
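With denoising_strength < 1 and an input video, generate() above behaves as video2video: the schedule is truncated and denoising starts from a noised copy of the encoded source latents. A minimal sketch of that setup, assuming seed_g lives on the same device as the latents (the helper name is illustrative):

```
import torch

def start_from_source_video(source_latents: torch.Tensor, timesteps: torch.Tensor,
                            sampling_steps: int, denoising_strength: float,
                            seed_g: torch.Generator):
    # Skip the earliest (noisiest) steps in proportion to (1 - denoising_strength).
    injection_step = int(sampling_steps * (1.0 - denoising_strength))
    timesteps = timesteps[injection_step:]
    # Wan timesteps are on a 0..1000 scale, so t / 1000 acts as the noise level sigma.
    sigma = timesteps[0].item() / 1000.0
    noise = torch.randn(source_latents.shape, generator=seed_g,
                        dtype=torch.float32, device=source_latents.device)
    latents = noise * sigma + (1.0 - sigma) * source_latents
    return latents, timesteps
```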
unique_id += 1 + return str(time.time()+unique_id) def download_ffmpeg(): if os.name != 'nt': return @@ -145,18 +154,55 @@ def process_prompt_and_add_tasks(state, model_choice): model_filename = state["model_filename"] model_type = state["model_type"] - inputs = state.get(model_type, None) + inputs = get_model_settings(state, model_type) if model_choice != model_type or inputs ==None: raise gr.Error("Webform can not be used as the App has been restarted since the form was displayed. Please refresh the page") inputs["state"] = state + gen = get_gen_info(state) inputs["model_type"] = model_type inputs.pop("lset_name") if inputs == None: gr.Warning("Internal state error: Could not retrieve inputs for the model.") - gen = get_gen_info(state) queue = gen.get("queue", []) return get_queue_table(queue) + model_type = get_base_model_type(model_type) + inputs["model_filename"] = model_filename + + mode = inputs["mode"] + if mode == "edit": + edit_video_source =gen.get("edit_video_source", None) + edit_overrides =gen.get("edit_overrides", None) + + for k in ["image_start", "image_end", "image_refs", "video_guide", "audio_guide", "video_mask"]: + inputs[k] = None + inputs.update(edit_overrides) + del gen["edit_video_source"], gen["edit_overrides"] + inputs["video_source"]= edit_video_source + prompt = [] + + spatial_upsampling = inputs.get("spatial_upsampling","") + if len(spatial_upsampling) >0: prompt += ["Spatial Upsampling"] + temporal_upsampling = inputs.get("temporal_upsampling","") + if len(temporal_upsampling) >0: prompt += ["Temporal Upsampling"] + MMAudio_setting = inputs.get("MMAudio_setting",0) + seed = inputs.get("seed",None) + repeat_generation= inputs.get("repeat_generation",1) + if repeat_generation > 1 and (MMAudio_setting == 0 or seed != -1): + gr.Info("It is useless to generate more than one sample if you don't use MMAudio with a random seed") + return + if MMAudio_setting !=0: prompt += ["MMAudio"] + if len(prompt) == 0: + gr.Info("You must choose at leat one Post Processing Method") + return + inputs["prompt"] = ", ".join(prompt) + add_video_task(**inputs) + gen["prompts_max"] = 1 + gen.get("prompts_max",0) + state["validate_success"] = 1 + queue= gen.get("queue", []) + return update_queue_data(queue) + + prompt = inputs["prompt"] if len(prompt) ==0: gr.Info("Prompt cannot be empty.") @@ -167,8 +213,6 @@ def process_prompt_and_add_tasks(state, model_choice): if len(errors) > 0: gr.Info("Error processing prompt template: " + errors) return - model_type = get_base_model_type(model_type) - inputs["model_filename"] = model_filename model_filename = get_model_filename(model_type) prompts = prompt.replace("\r", "").split("\n") prompts = [prompt.strip() for prompt in prompts if len(prompt.strip())>0 and not prompt.startswith("#")] @@ -193,14 +237,27 @@ def process_prompt_and_add_tasks(state, model_choice): video_mask = inputs["video_mask"] video_source = inputs["video_source"] frames_positions = inputs["frames_positions"] - keep_frames_video_source = inputs["keep_frames_video_source"] keep_frames_video_guide= inputs["keep_frames_video_guide"] + keep_frames_video_source = inputs["keep_frames_video_source"] + denoising_strength= inputs["denoising_strength"] sliding_window_size = inputs["sliding_window_size"] sliding_window_overlap = inputs["sliding_window_overlap"] sliding_window_discard_last_frames = inputs["sliding_window_discard_last_frames"] video_length = inputs["video_length"] - - + num_inference_steps= inputs["num_inference_steps"] + skip_steps_cache_type= 
inputs["skip_steps_cache_type"] + MMAudio_setting = inputs["MMAudio_setting"] + + if skip_steps_cache_type == "mag": + if model_type in ["sky_df_1.3B", "sky_df_14B", "sky_df_720p_14B"]: + gr.Info("Mag Cache is not supported with Diffusion Forcing") + return + if num_inference_steps > 50: + gr.Info("Mag Cache maximum number of steps is 50") + return + + if MMAudio_setting != 0 and server_config.get("mmaudio_enabled", 0) != 0 and video_length <16: #should depend on the architecture + gr.Info("MMAudio can generate an Audio track only if the Video is at least 1s long") if "F" in video_prompt_type: if len(frames_positions.strip()) > 0: positions = frames_positions.split(" ") @@ -226,15 +283,6 @@ def process_prompt_and_add_tasks(state, model_choice): if video_source == None: gr.Info("You must provide a Source Video file to continue") return - elif "G" in image_prompt_type: - gen = get_gen_info(state) - file_list = gen.get("file_list",[]) - choice = gen.get("selected",-1) - if choice >=0 and len(file_list)>0: - video_source = file_list[choice] - else: - gr.Info("Please Select a generated Video as a Video to continue") - return else: video_source = None @@ -269,7 +317,9 @@ def process_prompt_and_add_tasks(state, model_choice): else: video_mask = None - keep_frames_video_guide= inputs["keep_frames_video_guide"] + if not "G" in video_prompt_type: + denoising_strength = 1.0 + _, error = parse_keep_frames_video_guide(keep_frames_video_guide, video_length) if len(error) > 0: gr.Info(f"Invalid Keep Frames property: {error}") @@ -278,11 +328,13 @@ def process_prompt_and_add_tasks(state, model_choice): video_guide = None video_mask = None keep_frames_video_guide = "" + denoising_strength = 1.0 if "S" in image_prompt_type: if image_start == None or isinstance(image_start, list) and len(image_start) == 0: gr.Info("You must provide a Start Image") + return if not isinstance(image_start, list): image_start = [image_start] if not all( not isinstance(img[0], str) for img in image_start) : @@ -358,6 +410,7 @@ def process_prompt_and_add_tasks(state, model_choice): "frames_positions": frames_positions, "keep_frames_video_source": keep_frames_video_source, "keep_frames_video_guide": keep_frames_video_guide, + "denoising_strength": denoising_strength, "image_prompt_type": image_prompt_type, "video_prompt_type": video_prompt_type, } @@ -422,7 +475,6 @@ def process_prompt_and_add_tasks(state, model_choice): inputs.update(override_inputs) add_video_task(**inputs) - gen = get_gen_info(state) gen["prompts_max"] = len(prompts) + gen.get("prompts_max",0) state["validate_success"] = 1 queue= gen.get("queue", []) @@ -1253,12 +1305,12 @@ def _parse_args(): ) - parser.add_argument( - "--teacache", - type=float, - default=-1, - help="teacache speed multiplier" - ) + # parser.add_argument( + # "--teacache", + # type=float, + # default=-1, + # help="teacache speed multiplier" + # ) parser.add_argument( "--frames", @@ -1581,13 +1633,17 @@ model_signatures = {"t2v": "text2video_14B", "t2v_1.3B" : "text2video_1.3B", " "hunyuan" : "hunyuan_video_720", "hunyuan_i2v" : "hunyuan_video_i2v_720", "hunyuan_custom" : "hunyuan_video_custom_720", "hunyuan_custom_audio" : "hunyuan_video_custom_audio", "hunyuan_custom_edit" : "hunyuan_video_custom_edit", "hunyuan_avatar" : "hunyuan_video_avatar" } +def are_model_types_compatible(model_type1, model_type2): + return get_base_model_type(model_type1) == get_base_model_type(model_type2) + def get_model_finetune_def(model_type): return finetunes.get(model_type, None ) def 
get_base_model_type(model_type): finetune_def = get_model_finetune_def(model_type) if finetune_def == None: - return model_type + return model_type if model_type in model_types else None + # return model_type else: return finetune_def["architecture"] @@ -1757,15 +1813,24 @@ def get_settings_file_name(model_type): return os.path.join(args.settings, model_type + "_settings.json") def fix_settings(model_type, ui_defaults): + video_settings_version = ui_defaults.get("settings_version", 0) prompts = ui_defaults.get("prompts", "") if len(prompts) > 0: ui_defaults["prompt"] = prompts image_prompt_type = ui_defaults.get("image_prompt_type", None) - if image_prompt_type !=None and not isinstance(image_prompt_type, str): - ui_defaults["image_prompt_type"] = "S" if image_prompt_type == 0 else "SE" + if image_prompt_type != None : + if not isinstance(image_prompt_type, str): + image_prompt_type = "S" if image_prompt_type == 0 else "SE" + + if video_settings_version <= 2: + image_prompt_type = image_prompt_type.replace("G","") + ui_defaults["image_prompt_type"] = image_prompt_type + + if "lset_name" in ui_defaults: del ui_defaults["lset_name"] + model_type = get_base_model_type(model_type) - if model_type == None: - return + + if model_type == None: return video_prompt_type = ui_defaults.get("video_prompt_type", "") if model_type in ["hunyuan_custom", "hunyuan_custom_edit", "hunyuan_custom_audio", "hunyuan_avatar", "phantom_14B", "phantom_1.3B"]: @@ -1775,6 +1840,22 @@ def fix_settings(model_type, ui_defaults): video_prompt_type = video_prompt_type.replace("I", "") ui_defaults["video_prompt_type"] = video_prompt_type + tea_cache_setting = ui_defaults.get("tea_cache_setting", None) + tea_cache_start_step_perc = ui_defaults.get("tea_cache_start_step_perc", None) + + if tea_cache_setting != None: + del ui_defaults["tea_cache_setting"] + if tea_cache_setting > 0: + ui_defaults["skip_steps_multiplier"] = tea_cache_setting + ui_defaults["skip_steps_cache_type"] = "tea" + else: + ui_defaults["skip_steps_multiplier"] = 1.75 + ui_defaults["skip_steps_cache_type"] = "" + + if tea_cache_start_step_perc != None: + del ui_defaults["tea_cache_start_step_perc"] + ui_defaults["skip_steps_start_step_perc"] = tea_cache_start_step_perc + def get_default_settings(model_type): def get_default_prompt(i2v): if i2v: @@ -1805,8 +1886,8 @@ def get_default_settings(model_type): "negative_prompt": "", "activated_loras": [], "loras_multipliers": "", - "tea_cache": 0.0, - "tea_cache_start_step_perc": 0, + "skip_steps_multiplier": 1.5, + "skip_steps_start_step_perc": 20, "RIFLEx_setting": 0, "slg_switch": 0, "slg_layers": [9], @@ -1865,7 +1946,7 @@ def get_default_settings(model_type): ui_defaults.update({ "guidance_scale": 7.5, "flow_shift": 5, - "tea_cache_start_step_perc": 25, + "skip_steps_start_step_perc": 25, "video_length": 129, "video_prompt_type": "I", }) @@ -1881,7 +1962,7 @@ def get_default_settings(model_type): with open(defaults_filename, "r", encoding="utf-8") as f: ui_defaults = json.load(f) fix_settings(model_type, ui_defaults) - + default_seed = args.seed if default_seed > -1: ui_defaults["seed"] = default_seed @@ -1912,7 +1993,10 @@ model_types += finetunes.keys() transformer_types = server_config.get("transformer_types", []) -transformer_type = transformer_types[0] if len(transformer_types) > 0 else model_types[0] +transformer_type = server_config.get("last_model_type", None) +if transformer_type != None and not transformer_type in model_types and not transformer_type in finetunes: transformer_type = None +if 
transformer_type == None: + transformer_type = transformer_types[0] if len(transformer_types) > 0 else model_types[0] transformer_quantization =server_config.get("transformer_quantization", "int8") @@ -2096,6 +2180,16 @@ def download_models(model_filename, model_type): } process_files_def(**enhancer_def) + if server_config.get("mmaudio_enabled", 0) != 0: + enhancer_def = { + "repoId" : "DeepBeepMeep/Wan2.1", + "sourceFolderList" : [ "mmaudio", "DFN5B-CLIP-ViT-H-14-378" ], + "fileList" : [ ["mmaudio_large_44k_v2.pth", "synchformer_state_dict.pth", "v1-44.pth"],["open_clip_config.json", "open_clip_pytorch_model.bin"]] + } + process_files_def(**enhancer_def) + + + def download_file(url,filename): if url.startswith("https://huggingface.co/") and "/resolve/main/" in url: url = url[len("https://huggingface.co/"):] @@ -2244,9 +2338,12 @@ def setup_loras(model_type, transformer, lora_dir, lora_preselected_preset, spl dir_loras.sort() loras += [element for element in dir_loras if element not in loras ] - dir_presets = glob.glob( os.path.join(lora_dir , "*.lset") ) + dir_presets_settings = glob.glob( os.path.join(lora_dir , "*.json") ) + dir_presets_settings.sort() + dir_presets = glob.glob( os.path.join(lora_dir , "*.lset") ) dir_presets.sort() - loras_presets = [ Path(Path(file_path).parts[-1]).stem for file_path in dir_presets] + # loras_presets = [ Path(Path(file_path).parts[-1]).stem for file_path in dir_presets_settings + dir_presets] + loras_presets = [ Path(file_path).parts[-1] for file_path in dir_presets_settings + dir_presets] if transformer !=None: loras = offload.load_loras_into_model(transformer, loras, activate_all_loras=False, check_only= True, preprocess_sd=get_loras_preprocessor(transformer, model_type), split_linear_modules_map = split_linear_modules_map) #lora_multiplier, @@ -2510,6 +2607,7 @@ def apply_changes( state, preload_model_policy_choice = 1, UI_theme_choice = "default", enhancer_enabled_choice = 0, + mmaudio_enabled_choice = 0, fit_canvas_choice = 0, preload_in_VRAM_choice = 0, depth_anything_v2_variant_choice = "vitl", @@ -2540,10 +2638,12 @@ def apply_changes( state, "UI_theme" : UI_theme_choice, "fit_canvas": fit_canvas_choice, "enhancer_enabled" : enhancer_enabled_choice, + "mmaudio_enabled" : mmaudio_enabled_choice, "preload_in_VRAM" : preload_in_VRAM_choice, "depth_anything_v2_variant": depth_anything_v2_variant_choice, "notification_sound_enabled" : notification_sound_enabled_choice, - "notification_sound_volume" : notification_sound_volume_choice + "notification_sound_volume" : notification_sound_volume_choice, + "last_model_type" : state["model_type"] } if Path(server_config_filename).is_file(): @@ -2556,7 +2656,7 @@ def apply_changes( state, server_config["compile"] = old_server_config["compile"] with open(server_config_filename, "w", encoding="utf-8") as writer: - writer.write(json.dumps(server_config)) + writer.write(json.dumps(server_config, indent=4)) changes = [] for k, v in server_config.items(): @@ -2579,14 +2679,14 @@ def apply_changes( state, transformer_types = server_config["transformer_types"] model_filename = get_model_filename(transformer_type, transformer_quantization, transformer_dtype_policy) state["model_filename"] = model_filename - if all(change in ["attention_mode", "vae_config", "boost", "save_path", "metadata_type", "clear_file_list", "fit_canvas", "depth_anything_v2_variant", "notification_sound_enabled", "notification_sound_volume"] for change in changes ): + if all(change in ["attention_mode", "vae_config", "boost", "save_path", 
"metadata_type", "clear_file_list", "fit_canvas", "depth_anything_v2_variant", "notification_sound_enabled", "notification_sound_volume", "mmaudio_enabled"] for change in changes ): model_choice = gr.Dropdown() else: reload_needed = True model_choice = generate_dropdown_model_list(transformer_type) header = generate_header(state["model_type"], compile=compile, attention_mode= attention_mode) - return "
The new configuration has been successfully applied
", header, model_choice, gr.Row(visible= server_config["enhancer_enabled"] == 1) + return "
The new configuration has been successfully applied
", header, model_choice, gr.Row(visible= server_config["enhancer_enabled"] == 1), gr.Row(visible= server_config["mmaudio_enabled"] > 0) @@ -2666,9 +2766,10 @@ def build_callback(state, pipe, send_cmd, status, num_inference_steps): return callback def abort_generation(state): gen = get_gen_info(state) - if "in_progress" in gen and wan_model != None: - - wan_model._interrupt= True + if "in_progress" in gen: # and wan_model != None: + if wan_model != None: + wan_model._interrupt= True + gen["abort"] = True msg = "Processing Request to abort Current Generation" gen["status"] = msg gr.Info(msg) @@ -2692,7 +2793,7 @@ def refresh_gallery(state): #, msg queue = gen.get("queue", []) abort_interactive = not gen.get("abort", False) if not in_progress or len(queue) == 0: - return gr.Gallery(selected_index=choice, value = file_list), gr.HTML("", visible= False), gr.Button(visible=True), gr.Button(visible=False), gr.Row(visible=False), update_queue_data(queue), gr.Button(interactive= abort_interactive), gr.Button(visible= False) + return gr.Gallery(selected_index=choice, value = file_list), gr.HTML("", visible= False), gr.Button(visible=True), gr.Button(visible=False), gr.Row(visible=False), gr.Row(visible=False), update_queue_data(queue), gr.Button(interactive= abort_interactive), gr.Button(visible= False) else: task = queue[0] start_img_md = "" @@ -2746,10 +2847,11 @@ def refresh_gallery(state): #, msg border-radius: 6px; box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1); """ - + if params.get("mode", None) in ['edit'] : onemorewindow_visible = False + gen_buttons_visible = True html = f"" + thumbnails + "
" + prompt + "
" html_output = gr.HTML(html, visible= True) - return gr.Gallery(selected_index=choice, value = file_list), html_output, gr.Button(visible=False), gr.Button(visible=True), gr.Row(visible=True), update_queue_data(queue), gr.Button(interactive= abort_interactive), gr.Button(visible= onemorewindow_visible) + return gr.Gallery(selected_index=choice, value = file_list), html_output, gr.Button(visible=False), gr.Button(visible=True), gr.Row(visible=True), gr.Row(visible= gen_buttons_visible), update_queue_data(queue), gr.Button(interactive= abort_interactive), gr.Button(visible= onemorewindow_visible) @@ -2769,17 +2871,138 @@ def finalize_generation(state): gen_in_progress = False return gr.Gallery(selected_index=choice), gr.Button(interactive= True), gr.Button(visible= True), gr.Button(visible= False), gr.Column(visible= False), gr.HTML(visible= False, value="") +def get_default_video_info(): + return "Please Select a Video" -def select_video(state , event_data: gr.EventData): + +def get_file_list(state, input_file_list): + gen = get_gen_info(state) + with lock: + if "file_list" in gen: + file_list = gen["file_list"] + file_settings_list = gen["file_settings_list"] + else: + file_list = [] + file_settings_list = [] + if input_file_list != None: + for file_path in input_file_list: + if isinstance(file_path, tuple): file_path = file_path[0] + file_settings, _ = get_settings_from_file(state, file_path, False, False, False) + if file_settings != None: + file_list.append(file_path) + file_settings_list.append(file_settings) + + gen["file_list"] = file_list + gen["file_settings_list"] = file_settings_list + return file_list, file_settings_list + +def set_file_choice(gen, file_list, choice): + gen["last_selected"] = (choice + 1) >= len(file_list) + gen["selected"] = choice + +def select_video(state, input_file_list, event_data: gr.EventData): data= event_data._data gen = get_gen_info(state) + file_list, file_settings_list = get_file_list(state, input_file_list) if data!=None: choice = data.get("index",0) - file_list = gen.get("file_list", []) - gen["last_selected"] = (choice + 1) >= len(file_list) - gen["selected"] = choice - return + set_file_choice(gen, file_list, choice) + + if len(file_list) > 0: + configs = file_settings_list[choice] + from wan.utils.utils import get_video_info, get_file_creation_date + file_name = file_list[choice] + fps, width, height, frames_count = get_video_info(file_name) + video_model_name = configs.get("type", "Unknown model") + if "-" in video_model_name: video_model_name = video_model_name[video_model_name.find("-")+2:] + video_prompt = configs.get("prompt", "")[:200] + video_video_prompt_type = configs.get("video_prompt_type", "") + video_image_prompt_type = configs.get("image_prompt_type", "") + map_video_prompt = {"V" : "Control Video", "A" : "Mask Video", "I" : "Reference Images"} + map_image_prompt = {"V" : "Source Video", "L" : "Last Video", "S" : "Start Image", "E" : "End Image"} + video_other_prompts = [ v for s,v in map_image_prompt.items() if s in video_image_prompt_type] + [ v for s,v in map_video_prompt.items() if s in video_video_prompt_type] + video_model_type = configs.get("model_type", "t2v") + if any_audio_track(video_model_type): video_other_prompts += ["Audio Source"] + video_other_prompts = ", ".join(video_other_prompts) + video_resolution = configs.get("resolution", "") + f" (real: {width}x{height})" + video_length = configs.get("video_length", 0) + original_fps= int(video_length/frames_count*fps) + video_length_summary = f"{video_length} frames" + 
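select_video() above relies on the new get_video_info() helper added in wan/utils/utils.py to report the properties of the file that was actually written alongside the saved generation settings. A quick usage sketch with an illustrative path:

```
from wan.utils.utils import get_video_info

fps, width, height, frame_count = get_video_info("outputs/example.mp4")  # illustrative path
print(f"{width}x{height} @ {fps:.2f} fps, {frame_count} frames (~{frame_count / fps:.1f}s)")
```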
video_window_no = configs.get("window_no", 0) + if video_window_no > 0: video_length_summary +=f", Window no {video_window_no }" + video_length_summary += " (" + if video_length != frames_count: video_length_summary += f"real: {frames_count} frames, " + video_length_summary += f"{frames_count/fps:.1f}s, {round(fps)} fps)" + video_seed = configs.get("seed", -1) + video_MMAudio_seed = configs.get("MMAudio_seed", video_seed) + video_guidance_scale = configs.get("video_guidance_scale", 1) + video_embedded_guidance_scale = configs.get("video_embedded_guidance_scale ", 1) + if get_model_family(video_model_type) == "hunyuan": + video_guidance_scale = video_embedded_guidance_scale + video_guidance_label = "Embedded Guidance Scale" + else: + video_guidance_label = "Guidance Scale" + video_flow_shift = configs.get("flow_shift", 1) + video_num_inference_steps = configs.get("num_inference_steps", 0) + video_creation_date = str(get_file_creation_date(file_name)) + if "." in video_creation_date: video_creation_date = video_creation_date[:video_creation_date.rfind(".")] + video_generation_time = str(configs.get("generation_time", "0")) + "s" + video_activated_loras = "
".join(configs.get("activated_loras", [])) + video_temporal_upsampling = configs.get("temporal_upsampling", "") + video_spatial_upsampling = configs.get("spatial_upsampling", "") + video_MMAudio_setting = configs.get("MMAudio_setting", 0) + video_MMAudio_prompt = configs.get("MMAudio_prompt", "") + video_MMAudio_neg_prompt = configs.get("MMAudio_neg_prompt", "") + values = [video_model_name, video_prompt] + labels = ["Model", "Text Prompt"] + if len(video_other_prompts) >0 : + values += [video_other_prompts] + labels += ["Other Prompts"] + values += [video_resolution, video_length_summary, video_seed, video_guidance_scale, video_flow_shift, video_num_inference_steps] + labels += [ "Resolution", "Video Length", "Seed", video_guidance_label, "Flow Shift", "Num Inference steps"] + + video_skip_steps_cache_type = configs.get("skip_steps_cache_type", "") + video_skip_steps_multiplier = configs.get("skip_steps_multiplier", 0) + video_skip_steps_cache_start_step_perc = configs.get("skip_steps_start_step_perc", 0) + if len(video_skip_steps_cache_type) > 0: + video_skip_steps_cache = "TeaCache" if video_skip_steps_cache_type == "tea" else "MagCache" + video_skip_steps_cache += f" x{video_skip_steps_multiplier }" + if video_skip_steps_cache_start_step_perc >0: video_skip_steps_cache += f", Start from {video_skip_steps_cache_start_step_perc}%" + values += [ video_skip_steps_cache ] + labels += [ "Skip Steps" ] + + if len(video_spatial_upsampling) > 0: + video_temporal_upsampling += " " + video_spatial_upsampling + if len(video_temporal_upsampling) > 0: + values += [ video_temporal_upsampling ] + labels += [ "Upsampling" ] + if video_MMAudio_setting != 0: + values += [ f'Prompt="{video_MMAudio_prompt}", Neg Prompt="{video_MMAudio_neg_prompt}", Seed={video_MMAudio_seed}' ] + labels += [ "MMAudio" ] + + if len(video_activated_loras) > 0: + values += [video_activated_loras] + labels += ["Loras"] + values += [ video_creation_date, video_generation_time ] + labels += [ "Creation Date", "Generation Time" ] + + table_style = """ + """ + rows = [f"{label}{value}" for label, value in zip(labels, values)] + html = f"{table_style}" + "".join(rows) + "
" + else: + html = get_default_video_info() + visible= len(file_list) > 0 + return choice, html, gr.update(visible=visible), gr.update(visible=visible) def expand_slist(slist, num_inference_steps ): new_slist= [] @@ -3111,7 +3334,11 @@ def preprocess_video(height, width, video_in, max_frames, start_frame=0, fit_can return torch.stack(torch_frames) - +def update_loras_slists(trans, slists, num_inference_steps ): + slists = [ expand_slist(slist, num_inference_steps ) if isinstance(slist, list) else slist for slist in slists ] + nos = [str(l) for l in range(len(slists))] + offload.activate_loras(trans, nos, slists ) + def parse_keep_frames_video_guide(keep_frames, video_length): def absolute(n): @@ -3121,7 +3348,7 @@ def parse_keep_frames_video_guide(keep_frames, video_length): return max(0, video_length + n) else: return min(n-1, video_length-1) - + keep_frames = keep_frames.strip() if len(keep_frames) == 0: return [True] *video_length, "" frames =[False] *video_length @@ -3156,6 +3383,201 @@ def parse_keep_frames_video_guide(keep_frames, video_length): frames= frames[0: i+1] return frames, error + +def perform_temporal_upsampling(sample, previous_last_frame, temporal_upsampling, fps): + exp = 0 + if temporal_upsampling == "rife2": + exp = 1 + elif temporal_upsampling == "rife4": + exp = 2 + output_fps = fps + if exp > 0: + from postprocessing.rife.inference import temporal_interpolation + if previous_last_frame != None: + sample = torch.cat([previous_last_frame, sample], dim=1) + previous_last_frame = sample[:, -1:].clone() + sample = temporal_interpolation( os.path.join("ckpts", "flownet.pkl"), sample, exp, device=processing_device) + sample = sample[:, 1:] + else: + sample = temporal_interpolation( os.path.join("ckpts", "flownet.pkl"), sample, exp, device=processing_device) + previous_last_frame = sample[:, -1:].clone() + + output_fps = output_fps * 2**exp + return sample, previous_last_frame, output_fps + + +def perform_spatial_upsampling(sample, spatial_upsampling): + from wan.utils.utils import resize_lanczos + if spatial_upsampling == "lanczos1.5": + scale = 1.5 + else: + scale = 2 + sample = (sample + 1) / 2 + h, w = sample.shape[-2:] + h *= scale + h = round(h/16) * 16 + w *= scale + w = round(w/16) * 16 + h = int(h) + w = int(w) + frames_to_upsample = [sample[:, i] for i in range( sample.shape[1]) ] + def upsample_frames(frame): + return resize_lanczos(frame, h, w).unsqueeze(1) + sample = torch.cat(process_images_multithread(upsample_frames, frames_to_upsample, "upsample", wrap_in_list = False), dim=1) + frames_to_upsample = None + sample.mul_(2).sub_(1) + return sample + +def any_audio_track(model_type): + base_model_type = get_base_model_type(model_type) + return base_model_type in ["fantasy", "hunyuan_avatar", "hunyuan_custom_audio"] + +def get_available_filename(target_path, video_source, suffix = ""): + name, extension = os.path.splitext(os.path.basename(video_source)) + name+= suffix + full_path= os.path.join(target_path, f"{name}{extension}") + if not os.path.exists(full_path): + return full_path + counter = 2 + while True: + full_path= os.path.join(target_path, f"{name}({counter}){extension}") + if not os.path.exists(full_path): + return full_path + counter += 1 + +def edit_video( + send_cmd, + state, + video_source, + seed, + temporal_upsampling, + spatial_upsampling, + MMAudio_setting, + MMAudio_prompt, + MMAudio_neg_prompt, + repeat_generation, + **kwargs + ): + + + gen = get_gen_info(state) + + if gen.get("abort", False): return + abort = False + + + + configs, _ = 
get_settings_from_file(state, video_source, False, False, False) + if configs == None: return + has_already_audio = False + tempAudioFileName = None + if MMAudio_setting == 0: + import subprocess + has_already_audio = configs.get("MMAudio_setting",0) != 0 or any_audio_track(configs["model_type"]) + if has_already_audio: + # extract audio from video + tempAudioFileName = get_available_filename(save_path, video_source[:-4]+".mkv", "_audio") + try: + subprocess.run([ 'ffmpeg', '-v', 'quiet', '-i', video_source, '-vn', '-acodec', 'copy', '-y', tempAudioFileName ], check=True, capture_output=True) + except: + tempAudioFileName= None + + + with lock: + file_list = gen["file_list"] + file_settings_list = gen["file_settings_list"] + + import random + if seed == None or seed <0: + seed = random.randint(0, 999999999) + + from wan.utils.utils import get_video_info + fps, width, height, frames_count = get_video_info(video_source) + + sample = None + + if len(temporal_upsampling) > 0 or len(spatial_upsampling) > 0: + send_cmd("progress", [0, get_latest_status(state,"Upsampling")]) + sample = get_resampled_video(video_source, 0, 1000, fps) + sample = sample.float().div_(127.5).sub_(1.).permute(-1,0,1,2) + frames_count = sample.shape[1] + + output_fps = round(fps) + if len(temporal_upsampling) > 0: + sample, previous_last_frame, output_fps = perform_temporal_upsampling(sample, None, temporal_upsampling, fps) + configs["temporal_upsampling"] = temporal_upsampling + frames_count = sample.shape[1] + + + if len(spatial_upsampling) > 0: + sample = perform_spatial_upsampling(sample, spatial_upsampling ) + configs["spatial_upsampling"] = spatial_upsampling + + any_mmaudio = MMAudio_setting != 0 and server_config.get("mmaudio_enabled", 0) != 0 and frames_count >=output_fps + tmp_path = None + any_change = False + if sample != None: + video_path =get_available_filename(save_path, video_source, "_tmp") if any_mmaudio or tempAudioFileName != None else get_available_filename(save_path, video_source, "_post") + cache_video( tensor=sample[None], save_file=video_path, fps=output_fps, nrow=1, normalize=True, value_range=(-1, 1)) + + if any_mmaudio or tempAudioFileName: tmp_path = video_path + any_change = True + else: + video_path = video_source + + repeat_no = 0 + extra_generation = 0 + initial_total_windows = 0 + any_change_initial = any_change + while not gen.get("abort", False): + any_change = any_change_initial + extra_generation += gen.get("extra_orders",0) + gen["extra_orders"] = 0 + total_generation = repeat_generation + extra_generation + gen["total_generation"] = total_generation + if repeat_no >= total_generation: break + repeat_no +=1 + gen["repeat_no"] = repeat_no + + if any_mmaudio: + send_cmd("progress", [0, get_latest_status(state,"MMAudio Soundtrack Generation")]) + from postprocessing.mmaudio.mmaudio import video_to_audio + new_video_path = get_available_filename(save_path, video_source, "_post") + video_to_audio(video_path, prompt = MMAudio_prompt, negative_prompt = MMAudio_neg_prompt, seed = seed, num_steps = 25, cfg_strength = 4.5, duration= frames_count /output_fps, video_save_path = new_video_path , persistent_models = server_config.get("mmaudio_enabled", 0) == 2, verboseLevel = verbose_level) + configs["MMAudio_setting"] = MMAudio_setting + configs["MMAudio_prompt"] = MMAudio_prompt + configs["MMAudio_neg_prompt"] = MMAudio_neg_prompt + configs["MMAudio_seed"] = seed + any_change = True + elif tempAudioFileName != None: + # combine audio file and new video file + new_video_path = 
get_available_filename(save_path, video_source, "_post") + os.system('ffmpeg -v quiet -y -i "{}" -i "{}" -c copy "{}"'.format(video_path, tempAudioFileName, new_video_path)) + else: + new_video_path = video_path + if tmp_path != None: + os.remove(tmp_path) + if tempAudioFileName != None: + os.remove(tempAudioFileName) + + if any_change: + print(f"Postprocessed video saved to Path: "+ new_video_path) + with lock: + file_list.append(new_video_path) + file_settings_list.append(configs) + + if configs != None: + from mutagen.mp4 import MP4 + file = MP4(new_video_path) + file.tags['©cmt'] = [json.dumps(configs)] + file.save() + + send_cmd("output") + seed = random.randint(0, 999999999) + + + clear_status(state) + def generate_video( task, send_cmd, @@ -3168,12 +3590,14 @@ def generate_video( guidance_scale, audio_guidance_scale, flow_shift, + sample_solver, embedded_guidance_scale, repeat_generation, multi_prompts_gen_type, multi_images_gen_type, - tea_cache_setting, - tea_cache_start_step_perc, + skip_steps_cache_type, + skip_steps_multiplier, + skip_steps_start_step_perc, activated_loras, loras_multipliers, image_prompt_type, @@ -3187,6 +3611,7 @@ def generate_video( frames_positions, video_guide, keep_frames_video_guide, + denoising_strength, video_guide_outpainting, video_mask, control_net_weight, @@ -3200,6 +3625,9 @@ def generate_video( remove_background_images_ref, temporal_upsampling, spatial_upsampling, + MMAudio_setting, + MMAudio_prompt, + MMAudio_neg_prompt, RIFLEx_setting, slg_switch, slg_layers, @@ -3210,15 +3638,18 @@ def generate_video( prompt_enhancer, state, model_type, - model_filename - + model_filename, + mode, ): global wan_model, offloadobj, reload_needed gen = get_gen_info(state) torch.set_grad_enabled(False) - - file_list = gen["file_list"] - file_settings_list = gen["file_settings_list"] + if mode == "edit": + edit_video(send_cmd, state, video_source, seed, temporal_upsampling, spatial_upsampling, MMAudio_setting, MMAudio_prompt, MMAudio_neg_prompt, repeat_generation) + return + with lock: + file_list = gen["file_list"] + file_settings_list = gen["file_settings_list"] prompt_no = gen["prompt_no"] @@ -3227,6 +3658,7 @@ def generate_video( # gr.Info("Unable to generate a Video while a new configuration is being applied.") # return + if "P" in preload_model_policy and not "U" in preload_model_policy: while wan_model == None: time.sleep(1) @@ -3266,12 +3698,14 @@ def generate_video( trans = get_transformer_model(wan_model) temp_filename = None + base_model_type = get_base_model_type(model_type) prompts = prompt.split("\n") prompts = [part for part in prompts if len(prompt)>0] loras = state["loras"] + loras_slists = [] if len(loras) > 0 or transformer_loras_filenames != None: def is_float(element: any) -> bool: if element is None: @@ -3281,7 +3715,8 @@ def generate_video( return True except ValueError: return False - list_mult_choices_nums = [] + loras_list_mult_choices_nums = [] + loras_multipliers = loras_multipliers.strip(" \r\n") if len(loras_multipliers) > 0: loras_mult_choices_list = loras_multipliers.replace("\r", "").split("\n") loras_mult_choices_list = [multi for multi in loras_mult_choices_list if len(multi)>0 and not multi.startswith("#")] @@ -3296,21 +3731,26 @@ def generate_video( if not is_float(smult): raise gr.Error(f"Lora sub value no {i+1} ({smult}) in Multiplier definition '{multlist}' is invalid") slist.append(float(smult)) + loras_slists.append(slist) slist = expand_slist(slist, num_inference_steps ) - list_mult_choices_nums.append(slist) + 
loras_list_mult_choices_nums.append(slist) else: if not is_float(mult): raise gr.Error(f"Lora Multiplier no {i+1} ({mult}) is invalid") - list_mult_choices_nums.append(float(mult)) - if len(list_mult_choices_nums ) < len(activated_loras): - list_mult_choices_nums += [1.0] * ( len(activated_loras) - len(list_mult_choices_nums ) ) - loras_selected = [ lora for lora in loras if os.path.basename(lora) in activated_loras] + mult = float(mult) + loras_slists.append(mult) + loras_list_mult_choices_nums.append(mult) + if len(loras_list_mult_choices_nums ) < len(activated_loras): + loras_list_mult_choices_nums += [1.0] * ( len(activated_loras) - len(loras_list_mult_choices_nums ) ) + lora_dir = get_lora_dir(model_type) + loras_selected = [ os.path.join(lora_dir, lora) for lora in activated_loras] + pinnedLora = profile !=5 and transformer_loras_filenames == None #False # # # split_linear_modules_map = getattr(trans,"split_linear_modules_map", None) if transformer_loras_filenames != None: loras_selected += transformer_loras_filenames - list_mult_choices_nums.append(1.) - offload.load_loras_into_model(trans, loras_selected, list_mult_choices_nums, activate_all_loras=True, preprocess_sd=get_loras_preprocessor(trans, model_filename), pinnedLora=pinnedLora, split_linear_modules_map = split_linear_modules_map) + loras_list_mult_choices_nums.append(1.) + offload.load_loras_into_model(trans, loras_selected, loras_list_mult_choices_nums, activate_all_loras=True, preprocess_sd=get_loras_preprocessor(trans, base_model_type), pinnedLora=pinnedLora, split_linear_modules_map = split_linear_modules_map) errors = trans._loras_errors if len(errors) > 0: error_files = [msg for _ , msg in errors] @@ -3318,7 +3758,7 @@ def generate_video( seed = None if seed == -1 else seed # negative_prompt = "" # not applicable in the inference original_filename = model_filename - model_filename = get_model_filename(get_base_model_type(model_type)) + model_filename = get_model_filename(base_model_type) image2video = test_class_i2v(model_type) current_video_length = video_length @@ -3327,6 +3767,7 @@ def generate_video( device_mem_capacity = torch.cuda.get_device_properties(None).total_memory / 1048576 diffusion_forcing = "diffusion_forcing" in model_filename + t2v = base_model_type in ["t2v"] ltxv = "ltxv" in model_filename vace = "Vace" in model_filename phantom = "phantom" in model_filename @@ -3380,14 +3821,29 @@ def generate_video( update_task_thumbnails(task, locals()) send_cmd("output") joint_pass = boost ==1 #and profile != 1 and profile != 3 - # TeaCache - if args.teacache > 0: - tea_cache_setting = args.teacache - trans.enable_cache = tea_cache_setting > 0 - if trans.enable_cache: - trans.teacache_multiplier = tea_cache_setting + trans.enable_cache = None if len(skip_steps_cache_type) == 0 else skip_steps_cache_type + + if trans.enable_cache != None: + trans.cache_multiplier = skip_steps_multiplier + trans.cache_start_step = int(skip_steps_start_step_perc*num_inference_steps/100) + + if trans.enable_cache == "mag": + trans.magcache_thresh = 0 + trans.magcache_K = 2 + if get_model_family(model_type) == "wan": + if image2video: + trans.def_mag_ratios = np.array([1.0]*2+[1.0124, 1.02213, 1.00166, 1.0041, 0.99791, 1.00061, 0.99682, 0.99762, 0.99634, 0.99685, 0.99567, 0.99586, 0.99416, 0.99422, 0.99578, 0.99575, 0.9957, 0.99563, 0.99511, 0.99506, 0.99535, 0.99531, 0.99552, 0.99549, 0.99541, 0.99539, 0.9954, 0.99536, 0.99489, 0.99485, 0.99518, 0.99514, 0.99484, 0.99478, 0.99481, 0.99479, 0.99415, 0.99413, 0.99419, 0.99416, 
0.99396, 0.99393, 0.99388, 0.99386, 0.99349, 0.99349, 0.99309, 0.99304, 0.9927, 0.9927, 0.99228, 0.99226, 0.99171, 0.9917, 0.99137, 0.99135, 0.99068, 0.99063, 0.99005, 0.99003, 0.98944, 0.98942, 0.98849, 0.98849, 0.98758, 0.98757, 0.98644, 0.98643, 0.98504, 0.98503, 0.9836, 0.98359, 0.98202, 0.98201, 0.97977, 0.97978, 0.97717, 0.97718, 0.9741, 0.97411, 0.97003, 0.97002, 0.96538, 0.96541, 0.9593, 0.95933, 0.95086, 0.95089, 0.94013, 0.94019, 0.92402, 0.92414, 0.90241, 0.9026, 0.86821, 0.86868, 0.81838, 0.81939])#**(0.5)# In our papaer, we utilize the sqrt to smooth the ratio, which has little impact on the performance and can be deleted. + else: + trans.def_mag_ratios = np.array([1.0]*2+[1.02504, 1.03017, 1.00025, 1.00251, 0.9985, 0.99962, 0.99779, 0.99771, 0.9966, 0.99658, 0.99482, 0.99476, 0.99467, 0.99451, 0.99664, 0.99656, 0.99434, 0.99431, 0.99533, 0.99545, 0.99468, 0.99465, 0.99438, 0.99434, 0.99516, 0.99517, 0.99384, 0.9938, 0.99404, 0.99401, 0.99517, 0.99516, 0.99409, 0.99408, 0.99428, 0.99426, 0.99347, 0.99343, 0.99418, 0.99416, 0.99271, 0.99269, 0.99313, 0.99311, 0.99215, 0.99215, 0.99218, 0.99215, 0.99216, 0.99217, 0.99163, 0.99161, 0.99138, 0.99135, 0.98982, 0.9898, 0.98996, 0.98995, 0.9887, 0.98866, 0.98772, 0.9877, 0.98767, 0.98765, 0.98573, 0.9857, 0.98501, 0.98498, 0.9838, 0.98376, 0.98177, 0.98173, 0.98037, 0.98035, 0.97678, 0.97677, 0.97546, 0.97543, 0.97184, 0.97183, 0.96711, 0.96708, 0.96349, 0.96345, 0.95629, 0.95625, 0.94926, 0.94929, 0.93964, 0.93961, 0.92511, 0.92504, 0.90693, 0.90678, 0.8796, 0.87945, 0.86111, 0.86189]) + else: + if width * height >= 1280* 720: + trans.def_mag_ratios = np.array([1.0]+[1.0754, 1.27807, 1.11596, 1.09504, 1.05188, 1.00844, 1.05779, 1.00657, 1.04142, 1.03101, 1.00679, 1.02556, 1.00908, 1.06949, 1.05438, 1.02214, 1.02321, 1.03019, 1.00779, 1.03381, 1.01886, 1.01161, 1.02968, 1.00544, 1.02822, 1.00689, 1.02119, 1.0105, 1.01044, 1.01572, 1.02972, 1.0094, 1.02368, 1.0226, 0.98965, 1.01588, 1.02146, 1.0018, 1.01687, 0.99436, 1.00283, 1.01139, 0.97122, 0.98251, 0.94513, 0.97656, 0.90943, 0.85703, 0.75456]) + else: + trans.def_mag_ratios = np.array([1.0]+[1.06971, 1.29073, 1.11245, 1.09596, 1.05233, 1.01415, 1.05672, 1.00848, 1.03632, 1.02974, 1.00984, 1.03028, 1.00681, 1.06614, 1.05022, 1.02592, 1.01776, 1.02985, 1.00726, 1.03727, 1.01502, 1.00992, 1.03371, 0.9976, 1.02742, 1.0093, 1.01869, 1.00815, 1.01461, 1.01152, 1.03082, 1.0061, 1.02162, 1.01999, 0.99063, 1.01186, 1.0217, 0.99947, 1.01711, 0.9904, 1.00258, 1.00878, 0.97039, 0.97686, 0.94315, 0.97728, 0.91154, 0.86139, 0.76592]) + + + elif trans.enable_cache == "tea": trans.rel_l1_thresh = 0 - trans.cache_start_step = int(tea_cache_start_step_perc*num_inference_steps/100) if get_model_family(model_type) == "wan": if image2video: if '720p' in model_filename: @@ -3411,7 +3867,7 @@ def generate_video( audio_scale = None audio_context_lens = None if (fantasy or hunyuan_avatar or hunyuan_custom_audio) and audio_guide != None: - from fantasytalking.infer import parse_audio + from wan.fantasytalking.infer import parse_audio import librosa duration = librosa.get_duration(path=audio_guide) current_video_length = min(int(fps * duration // 4) * 4 + 5, current_video_length) @@ -3432,11 +3888,13 @@ def generate_video( torch.set_grad_enabled(False) global save_path os.makedirs(save_path, exist_ok=True) - abort = False gc.collect() torch.cuda.empty_cache() wan_model._interrupt = False - gen["abort"] = False + abort = False + if gen.get("abort", False): + return + # gen["abort"] = False gen["prompt"] = 
prompt repeat_no = 0 extra_generation = 0 @@ -3466,17 +3924,18 @@ def generate_video( gen["extra_orders"] = 0 total_generation = repeat_generation + extra_generation gen["total_generation"] = total_generation - if repeat_no >= total_generation: - break + if repeat_no >= total_generation: break repeat_no +=1 gen["repeat_no"] = repeat_no src_video, src_mask, src_ref_images = None, None, None prefix_video = None - prefix_video_frames_count = 0 + source_video_overlap_frames_count = 0 + source_video_frames_count = 0 frames_already_processed = None pre_video_guide = None overlapped_latents = None context_scale = None + keep_frames_parsed = [] window_no = 0 extra_windows = 0 guide_start_frame = 0 @@ -3530,9 +3989,9 @@ def generate_video( sliding_window = sliding_window or extra_windows > 0 if sliding_window and window_no > 0: num_frames_generated -= reuse_frames - if (max_frames_to_generate - prefix_video_frames_count - num_frames_generated) < latent_size: + if (max_frames_to_generate - source_video_overlap_frames_count - num_frames_generated) < latent_size: break - current_video_length = min(sliding_window_size, ((max_frames_to_generate - num_frames_generated - prefix_video_frames_count + reuse_frames + discard_last_frames) // latent_size) * latent_size + 1 ) + current_video_length = min(sliding_window_size, ((max_frames_to_generate - num_frames_generated - source_video_overlap_frames_count + reuse_frames + discard_last_frames) // latent_size) * latent_size + 1 ) total_windows = initial_total_windows + extra_windows gen["total_windows"] = total_windows @@ -3563,32 +4022,39 @@ def generate_video( prefix_video = prefix_video .permute(3, 0, 1, 2) prefix_video = prefix_video .float().div_(127.5).sub_(1.) # c, f, h, w pre_video_guide = prefix_video[:, -reuse_frames:] - prefix_video_frames_count = pre_video_guide.shape[1] + source_video_overlap_frames_count = pre_video_guide.shape[1] + source_video_frames_count = prefix_video.shape[1] if vace and sample_fit_canvas != None: image_size = pre_video_guide.shape[-2:] guide_start_frame = prefix_video.shape[1] sample_fit_canvas = None - if vace: - image_refs_copy = image_refs[nb_frames_positions:].copy() if image_refs != None and len(image_refs) > nb_frames_positions else None # required since prepare_source do inplace modifications - video_guide_copy = video_guide - video_mask_copy = video_mask - keep_frames_parsed, error = parse_keep_frames_video_guide(keep_frames_video_guide, max_frames_to_generate) + if (vace or t2v) and video_guide != None : + keep_frames_parsed, error = parse_keep_frames_video_guide(keep_frames_video_guide, source_video_frames_count + max_frames_to_generate) if len(error) > 0: raise gr.Error(f"invalid keep frames {keep_frames_video_guide}") keep_frames_parsed = keep_frames_parsed[guide_start_frame: guide_start_frame + current_video_length] + if t2v and "G" in video_prompt_type: + video_guide_processed = preprocess_video(width = image_size[1], height=image_size[0], video_in=video_guide, max_frames= len(keep_frames_parsed) if guide_start_frame == 0 else len(keep_frames_parsed) - reuse_frames, start_frame = guide_start_frame, fit_canvas= sample_fit_canvas) + if sample_fit_canvas != None: + image_size = video_guide_processed.shape[-3: -1] + sample_fit_canvas = None + src_video = video_guide_processed.float().div_(127.5).sub_(1.).permute(-1,0,1,2) + + if vace : + image_refs_copy = image_refs[nb_frames_positions:].copy() if image_refs != None and len(image_refs) > nb_frames_positions else None # required since prepare_source do inplace 
modifications context_scale = [ control_net_weight] - video_guide_copy2 = video_mask_copy2 = None + video_guide_processed = video_mask_processed = video_guide_processed2 = video_mask_processed2 = None if "V" in video_prompt_type: process_map = { "Y" : "depth", "W": "scribble", "X": "inpaint", "Z": "flow"} process_outside_mask = process_map.get(filter_letters(video_prompt_type, "YWX"), None) preprocess_type, preprocess_type2 = "vace", None - process_map = { "D" : "depth", "P": "pose", "S": "scribble", "F": "flow", "C": "gray", "M": "inpaint", "U": "identity"} - for process_num, process_letter in enumerate( filter_letters(video_prompt_type, "PDSFCMU")): + process_map = { "P": "pose", "D" : "depth", "S": "scribble", "L": "flow", "C": "gray", "M": "inpaint", "U": "identity"} + for process_num, process_letter in enumerate( filter_letters(video_prompt_type, "PDSLCMU")): if process_num == 0: preprocess_type = process_map.get(process_letter, "vace") else: preprocess_type2 = process_map.get(process_letter, None) - process_names = { "pose": "Open Pose", "depth": "Depth Mask", "scribble" : "Shapes", "flow" : "Flow Map", "gray" : "Gray Levels", "inpaint" : "Inpaint Mask", "U": "Identity Mask", "vace" : "Vace Data"} + process_names = { "pose": "Open Pose", "depth": "Depth Mask", "scribble" : "Shapes", "flow" : "Flow Map", "gray" : "Gray Levels", "inpaint" : "Inpaint Mask", "identity": "Identity Mask", "vace" : "Vace Data"} status_info = "Extracting " + process_names[preprocess_type] extra_process_list = ([] if preprocess_type2==None else [preprocess_type2]) + ([] if process_outside_mask==None or process_outside_mask == preprocess_type else [process_outside_mask]) if len(extra_process_list) == 1: @@ -3596,28 +4062,28 @@ def generate_video( elif len(extra_process_list) == 2: status_info += ", " + process_names[extra_process_list[0]] + " and " + process_names[extra_process_list[1]] send_cmd("progress", [0, get_latest_status(state, status_info)]) - video_guide_copy, video_mask_copy = preprocess_video_with_mask(video_guide, video_mask, height=image_size[0], width = image_size[1], max_frames= len(keep_frames_parsed) if guide_start_frame == 0 else len(keep_frames_parsed) - reuse_frames, start_frame = guide_start_frame, fit_canvas = sample_fit_canvas, target_fps = fps, process_type = preprocess_type, expand_scale = mask_expand, RGB_Mask = True, negate_mask = "N" in video_prompt_type, process_outside_mask = process_outside_mask, outpainting_dims = outpainting_dims, proc_no =1 ) + video_guide_processed, video_mask_processed = preprocess_video_with_mask(video_guide, video_mask, height=image_size[0], width = image_size[1], max_frames= len(keep_frames_parsed) if guide_start_frame == 0 else len(keep_frames_parsed) - reuse_frames, start_frame = guide_start_frame, fit_canvas = sample_fit_canvas, target_fps = fps, process_type = preprocess_type, expand_scale = mask_expand, RGB_Mask = True, negate_mask = "N" in video_prompt_type, process_outside_mask = process_outside_mask, outpainting_dims = outpainting_dims, proc_no =1 ) if preprocess_type2 != None: - video_guide_copy2, video_mask_copy2 = preprocess_video_with_mask(video_guide, video_mask, height=image_size[0], width = image_size[1], max_frames= len(keep_frames_parsed) if guide_start_frame == 0 else len(keep_frames_parsed) - reuse_frames, start_frame = guide_start_frame, fit_canvas = sample_fit_canvas, target_fps = fps, process_type = preprocess_type2, expand_scale = mask_expand, RGB_Mask = True, negate_mask = "N" in video_prompt_type, process_outside_mask = 
process_outside_mask, outpainting_dims = outpainting_dims, proc_no =2 ) + video_guide_processed2, video_mask_processed2 = preprocess_video_with_mask(video_guide, video_mask, height=image_size[0], width = image_size[1], max_frames= len(keep_frames_parsed) if guide_start_frame == 0 else len(keep_frames_parsed) - reuse_frames, start_frame = guide_start_frame, fit_canvas = sample_fit_canvas, target_fps = fps, process_type = preprocess_type2, expand_scale = mask_expand, RGB_Mask = True, negate_mask = "N" in video_prompt_type, process_outside_mask = process_outside_mask, outpainting_dims = outpainting_dims, proc_no =2 ) - if video_guide_copy != None: + if video_guide_processed != None: if sample_fit_canvas != None: - image_size = video_guide_copy.shape[-3: -1] + image_size = video_guide_processed.shape[-3: -1] sample_fit_canvas = None - refresh_preview["video_guide"] = Image.fromarray(video_guide_copy[0].cpu().numpy()) - if video_guide_copy2 != None: - refresh_preview["video_guide"] = [refresh_preview["video_guide"], Image.fromarray(video_guide_copy2[0].cpu().numpy())] - if video_mask_copy != None: - refresh_preview["video_mask"] = Image.fromarray(video_mask_copy[0].cpu().numpy()) + refresh_preview["video_guide"] = Image.fromarray(video_guide_processed[0].cpu().numpy()) + if video_guide_processed2 != None: + refresh_preview["video_guide"] = [refresh_preview["video_guide"], Image.fromarray(video_guide_processed2[0].cpu().numpy())] + if video_mask_processed != None: + refresh_preview["video_mask"] = Image.fromarray(video_mask_processed[0].cpu().numpy()) frames_to_inject_parsed = frames_to_inject[guide_start_frame: guide_start_frame + current_video_length] - src_video, src_mask, src_ref_images = wan_model.prepare_source([video_guide_copy] if video_guide_copy2 == None else [video_guide_copy, video_guide_copy2], - [video_mask_copy] if video_guide_copy2 == None else [video_mask_copy, video_mask_copy2], - [image_refs_copy] if video_guide_copy2 == None else [image_refs_copy, image_refs_copy], + src_video, src_mask, src_ref_images = wan_model.prepare_source([video_guide_processed] if video_guide_processed2 == None else [video_guide_processed, video_guide_processed2], + [video_mask_processed] if video_guide_processed2 == None else [video_mask_processed, video_mask_processed2], + [image_refs_copy] if video_guide_processed2 == None else [image_refs_copy, image_refs_copy], current_video_length, image_size = image_size, device ="cpu", - keep_frames=keep_frames_parsed, + keep_video_guide_frames=keep_frames_parsed, start_frame = guide_start_frame, - pre_src_video = [pre_video_guide] if video_guide_copy2 == None else [pre_video_guide, pre_video_guide], + pre_src_video = [pre_video_guide] if video_guide_processed2 == None else [pre_video_guide, pre_video_guide], fit_into_canvas = sample_fit_canvas, inject_frames= frames_to_inject_parsed, outpainting_dims = outpainting_dims, @@ -3653,7 +4119,7 @@ def generate_video( send_cmd("output") if window_no == 1: - conditioning_latents_size = ( (prefix_video_frames_count-1) // latent_size) + 1 if prefix_video_frames_count > 0 else 0 + conditioning_latents_size = ( (source_video_overlap_frames_count-1) // latent_size) + 1 if source_video_overlap_frames_count > 0 else 0 else: conditioning_latents_size = ( (reuse_frames-1) // latent_size) + 1 @@ -3664,10 +4130,9 @@ def generate_video( progress_args = [0, merge_status_context(status, "Encoding Prompt")] send_cmd("progress", progress_args) - if trans.enable_cache: - trans.teacache_counter = 0 + if trans.enable_cache != None: 
trans.num_steps = num_inference_steps - trans.teacache_skipped_steps = 0 + trans.cache_skipped_steps = 0 trans.previous_residual = None trans.previous_modulated_input = None @@ -3683,12 +4148,14 @@ def generate_video( input_ref_images= src_ref_images, input_masks = src_mask, input_video= pre_video_guide if diffusion_forcing or ltxv or hunyuan_custom_edit else source_video, + denoising_strength=denoising_strength, target_camera= target_camera, frame_num=(current_video_length // latent_size)* latent_size + 1, height = height, width = width, fit_into_canvas = fit_canvas == 1, shift=flow_shift, + sample_solver=sample_solver, sampling_steps=num_inference_steps, guide_scale=guidance_scale, embedded_guidance_scale=embedded_guidance_scale, @@ -3717,12 +4184,15 @@ def generate_video( return_latent_slice= return_latent_slice, overlap_noise = sliding_window_overlap_noise, conditioning_latents_size = conditioning_latents_size, + keep_frames_parsed = keep_frames_parsed, model_filename = model_filename, + model_type = base_model_type, + loras_slists = loras_slists, ) except Exception as e: if temp_filename!= None and os.path.isfile(temp_filename): os.remove(temp_filename) - offload.last_offload_obj.unload_all() + offloadobj.unload_all() offload.unload_loras_from_model(trans) # if compile: # cache_size = torch._dynamo.config.cache_size_limit @@ -3754,15 +4224,15 @@ def generate_video( trans.previous_residual = None trans.previous_modulated_input = None - if trans.enable_cache: - print(f"Teacache Skipped Steps:{trans.teacache_skipped_steps}/{trans.num_steps}" ) + if trans.enable_cache != None : + print(f"Skipped Steps:{trans.cache_skipped_steps}/{trans.num_steps}" ) if samples != None: if isinstance(samples, dict): overlapped_latents = samples.get("latent_slice", None) samples= samples["x"] samples = samples.to("cpu") - offload.last_offload_obj.unload_all() + offloadobj.unload_all() gc.collect() torch.cuda.empty_cache() @@ -3810,47 +4280,15 @@ def generate_video( sample = sample[: , reuse_frames:] guide_start_frame -= reuse_frames - exp = 0 if len(temporal_upsampling) > 0 or len(spatial_upsampling) > 0: - progress_args = [(num_inference_steps , num_inference_steps) , status + " - Upsampling" , num_inference_steps] - send_cmd("progress", progress_args) - - if temporal_upsampling == "rife2": - exp = 1 - elif temporal_upsampling == "rife4": - exp = 2 - output_fps = fps - if exp > 0: - from rife.inference import temporal_interpolation - if sliding_window and window_no > 1: - sample = torch.cat([previous_last_frame, sample], dim=1) - previous_last_frame = sample[:, -1:].clone() - sample = temporal_interpolation( os.path.join("ckpts", "flownet.pkl"), sample, exp, device=processing_device) - sample = sample[:, 1:] - else: - sample = temporal_interpolation( os.path.join("ckpts", "flownet.pkl"), sample, exp, device=processing_device) - previous_last_frame = sample[:, -1:].clone() - - output_fps = output_fps * 2**exp + send_cmd("progress", [0, get_latest_status(state,"Upsampling")]) + + output_fps = fps + if len(temporal_upsampling) > 0: + sample, previous_last_frame, output_fps = perform_temporal_upsampling(sample, previous_last_frame if sliding_window and window_no > 1 else None, temporal_upsampling, fps) if len(spatial_upsampling) > 0: - from wan.utils.utils import resize_lanczos # need multithreading or to do lanczos with cuda - if spatial_upsampling == "lanczos1.5": - scale = 1.5 - else: - scale = 2 - sample = (sample + 1) / 2 - h, w = sample.shape[-2:] - h *= scale - w *= scale - h = int(h) - w = int(w) - 
frames_to_upsample = [sample[:, i] for i in range( sample.shape[1]) ] - def upsample_frames(frame): - return resize_lanczos(frame, h, w).unsqueeze(1) - sample = torch.cat(process_images_multithread(upsample_frames, frames_to_upsample, "upsample", wrap_in_list = False), dim=1) - frames_to_upsample = None - sample.mul_(2).sub_(1) + sample = perform_spatial_upsampling(sample, spatial_upsampling ) if sliding_window : if frames_already_processed == None: @@ -3866,25 +4304,32 @@ def generate_video( else: file_name = f"{time_flag}_seed{seed}_{sanitize_file_name(save_prompt[:100]).strip()}.mp4" video_path = os.path.join(save_path, file_name) - - if audio_guide == None: - cache_video( tensor=sample[None], save_file=video_path, fps=output_fps, nrow=1, normalize=True, value_range=(-1, 1)) - else: + any_mmaudio = MMAudio_setting != 0 and server_config.get("mmaudio_enabled", 0) != 0 and sample.shape[1] >=fps + if audio_guide != None or any_mmaudio : save_path_tmp = video_path[:-4] + "_tmp.mp4" cache_video( tensor=sample[None], save_file=save_path_tmp, fps=output_fps, nrow=1, normalize=True, value_range=(-1, 1)) - final_command = [ "ffmpeg", "-y", "-i", save_path_tmp, "-i", audio_guide, "-c:v", "libx264", "-c:a", "aac", "-shortest", "-loglevel", "warning", "-nostats", video_path, ] - import subprocess - subprocess.run(final_command, check=True) + if any_mmaudio: + send_cmd("progress", [0, get_latest_status(state,"MMAudio Soundtrack Generation")]) + from postprocessing.mmaudio.mmaudio import video_to_audio + video_to_audio(save_path_tmp, prompt = MMAudio_prompt, negative_prompt = MMAudio_neg_prompt, seed = seed, num_steps = 25, cfg_strength = 4.5, duration= sample.shape[1] /fps, video_save_path = video_path, persistent_models = server_config.get("mmaudio_enabled", 0) == 2, verboseLevel = verbose_level) + else: + final_command = [ "ffmpeg", "-y", "-i", save_path_tmp, "-i", audio_guide, "-c:v", "libx264", "-c:a", "aac", "-shortest", "-loglevel", "warning", "-nostats", video_path, ] + import subprocess + subprocess.run(final_command, check=True) os.remove(save_path_tmp) + else: + cache_video( tensor=sample[None], save_file=video_path, fps=output_fps, nrow=1, normalize=True, value_range=(-1, 1)) end_time = time.time() inputs = get_function_arguments(generate_video, locals()) inputs.pop("send_cmd") inputs.pop("task") + inputs.pop("mode") inputs["model_filename"] = original_filename inputs["model_type"] = model_type configs = prepare_inputs_dict("metadata", inputs) + if sliding_window: configs["window_no"] = window_no configs["prompt"] = "\n".join(original_prompts) if prompt_enhancer_image_caption_model != None and prompt_enhancer !=None and len(prompt_enhancer)>0: configs["enhanced_prompt"] = "\n".join(prompts) @@ -3900,8 +4345,9 @@ def generate_video( file.save() print(f"New video saved to Path: "+video_path) - file_list.append(video_path) - file_settings_list.append(configs) + with lock: + file_list.append(video_path) + file_settings_list.append(configs) # Play notification sound for single video try: @@ -3923,10 +4369,11 @@ def generate_video( offload.unload_loras_from_model(trans) def prepare_generate_video(state): + if state.get("validate_success",0) != 1: - return gr.Button(visible= True), gr.Button(visible= False), gr.Column(visible= False) + return gr.Button(visible= True), gr.Button(visible= False), gr.Column(visible= False), gr.update(visible=False) else: - return gr.Button(visible= False), gr.Button(visible= True), gr.Column(visible= True) + return gr.Button(visible= False), gr.Button(visible= 
True), gr.Column(visible= True), gr.update(visible= False) def generate_preview(latents): import einops @@ -4160,24 +4607,25 @@ def process_tasks(state): if len(queue) == 0: gen["status_display"] = False return - gen = get_gen_info(state) - clear_file_list = server_config.get("clear_file_list", 0) - file_list = gen.get("file_list", []) - file_settings_list = gen.get("file_settings_list", []) - if clear_file_list > 0: - file_list_current_size = len(file_list) - keep_file_from = max(file_list_current_size - clear_file_list, 0) - files_removed = keep_file_from - choice = gen.get("selected",0) - choice = max(choice- files_removed, 0) - file_list = file_list[ keep_file_from: ] - file_settings_list = file_settings_list[ keep_file_from: ] - else: - file_list = [] - choice = 0 - gen["selected"] = choice - gen["file_list"] = file_list - gen["file_settings_list"] = file_settings_list + with lock: + gen = get_gen_info(state) + clear_file_list = server_config.get("clear_file_list", 0) + file_list = gen.get("file_list", []) + file_settings_list = gen.get("file_settings_list", []) + if clear_file_list > 0: + file_list_current_size = len(file_list) + keep_file_from = max(file_list_current_size - clear_file_list, 0) + files_removed = keep_file_from + choice = gen.get("selected",0) + choice = max(choice- files_removed, 0) + file_list = file_list[ keep_file_from: ] + file_settings_list = file_settings_list[ keep_file_from: ] + else: + file_list = [] + choice = 0 + gen["selected"] = choice + gen["file_list"] = file_list + gen["file_settings_list"] = file_settings_list start_time = time.time() @@ -4243,7 +4691,8 @@ def process_tasks(state): if abort: gen["abort"] = False status = "Video Generation Aborted", "Video Generation Aborted" - yield gr.Text(), gr.Text() + # yield gr.Text(), gr.Text() + yield time.time() , time.time() gen["status"] = status queue[:] = [item for item in queue if item['id'] != task['id']] @@ -4361,20 +4810,53 @@ def one_more_window(state): def get_new_preset_msg(advanced = True): if advanced: - return "Enter here a Name for a Lora Preset or Choose one in the List" + return "Enter here a Name for a Lora Preset or a Settings or Choose one" else: - return "Choose a Lora Preset in this List to Apply a Special Effect" + return "Choose a Lora Preset or a Settings file in this List" + +def compute_lset_choices(loras_presets): + # lset_choices = [ (preset, preset) for preset in loras_presets] + lset_list = [] + settings_list = [] + for item in loras_presets: + if item.endswith(".lset"): + lset_list.append(item) + else: + settings_list.append(item) + sep = '\u2500' + indent = chr(160) * 4 + lset_choices = [] + if len(settings_list) > 0: + settings_list.sort() + lset_choices += [( (sep*16) +"Settings" + (sep*17), ">settings")] + lset_choices += [ ( indent + os.path.splitext(preset)[0], preset) for preset in settings_list ] + if len(lset_list) > 0: + lset_list.sort() + lset_choices += [( (sep*18) + "Lsets" + (sep*18), ">lset")] + lset_choices += [ ( indent + os.path.splitext(preset)[0], preset) for preset in lset_list ] + return lset_choices -def validate_delete_lset(lset_name): - if len(lset_name) == 0 or lset_name == get_new_preset_msg(True) or lset_name == get_new_preset_msg(False): +def get_lset_name(state, lset_name): + presets = state["loras_presets"] + if len(lset_name) == 0 or lset_name.startswith(">") or lset_name== get_new_preset_msg(True) or lset_name== get_new_preset_msg(False): return "" + if lset_name in presets: return lset_name + choices = compute_lset_choices(presets) + for label, 
value in choices: + if label == lset_name: return value + return lset_name + +def validate_delete_lset(state, lset_name): + lset_name = get_lset_name(state, lset_name) + if len(lset_name) == 0: gr.Info(f"Choose a Preset to delete") return gr.Button(visible= True), gr.Checkbox(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= False), gr.Button(visible= False) else: return gr.Button(visible= False), gr.Checkbox(visible= False), gr.Button(visible= False), gr.Button(visible= False), gr.Button(visible= True), gr.Button(visible= True) -def validate_save_lset(lset_name): - if len(lset_name) == 0 or lset_name == get_new_preset_msg(True) or lset_name == get_new_preset_msg(False): +def validate_save_lset(state, lset_name): + lset_name = get_lset_name(state, lset_name) + if len(lset_name) == 0: gr.Info("Please enter a name for the preset") return gr.Button(visible= True), gr.Checkbox(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= False), gr.Button(visible= False),gr.Checkbox(visible= False) else: @@ -4384,65 +4866,93 @@ def cancel_lset(): return gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= False), gr.Button(visible= False), gr.Button(visible= False), gr.Checkbox(visible= False) - def save_lset(state, lset_name, loras_choices, loras_mult_choices, prompt, save_lset_prompt_cbox): + lset_name = os.path.splitext(lset_name)[0] + loras_presets = state["loras_presets"] loras = state["loras"] if state.get("validate_success",0) == 0: pass - if len(lset_name) == 0 or lset_name == get_new_preset_msg(True) or lset_name == get_new_preset_msg(False): - gr.Info("Please enter a name for the preset") - lset_choices =[("Please enter a name for a Lora Preset","")] + lset_name = get_lset_name(state, lset_name) + if len(lset_name) == 0: + gr.Info("Please enter a name for the preset / settings file") + lset_choices =[("Please enter a name for a Lora Preset / Settings file","")] else: lset_name = sanitize_file_name(lset_name) + lset_name = lset_name.replace('\u2500',"").strip() - loras_choices_files = [ Path(loras[int(choice_no)]).parts[-1] for choice_no in loras_choices ] - lset = {"loras" : loras_choices_files, "loras_mult" : loras_mult_choices} - if save_lset_prompt_cbox!=1: - prompts = prompt.replace("\r", "").split("\n") - prompts = [prompt for prompt in prompts if len(prompt)> 0 and prompt.startswith("#")] - prompt = "\n".join(prompts) - if len(prompt) > 0: - lset["prompt"] = prompt - lset["full_prompt"] = save_lset_prompt_cbox ==1 + if save_lset_prompt_cbox ==2: + lset = collect_model_settings(state) + extension = ".json" + else: + loras_choices_files = [ Path(loras[int(choice_no)]).parts[-1] for choice_no in loras_choices ] + lset = {"loras" : loras_choices_files, "loras_mult" : loras_mult_choices} + if save_lset_prompt_cbox!=1: + prompts = prompt.replace("\r", "").split("\n") + prompts = [prompt for prompt in prompts if len(prompt)> 0 and prompt.startswith("#")] + prompt = "\n".join(prompts) + if len(prompt) > 0: + lset["prompt"] = prompt + lset["full_prompt"] = save_lset_prompt_cbox ==1 + extension = ".lset" + if lset_name.endswith(".json") or lset_name.endswith(".lset"): lset_name = os.path.splitext(lset_name)[0] + old_lset_name = lset_name + ".json" + if not old_lset_name in loras_presets: + old_lset_name = lset_name + ".lset" + if not old_lset_name in loras_presets: old_lset_name = "" + lset_name = lset_name + extension - lset_name_filename = lset_name + 
".lset" - full_lset_name_filename = os.path.join(get_lora_dir(state["model_type"]), lset_name_filename) + lora_dir = get_lora_dir(state["model_type"]) + full_lset_name_filename = os.path.join(lora_dir, lset_name ) with open(full_lset_name_filename, "w", encoding="utf-8") as writer: writer.write(json.dumps(lset, indent=4)) - if lset_name in loras_presets: - gr.Info(f"Lora Preset '{lset_name}' has been updated") + if len(old_lset_name) > 0 : + if save_lset_prompt_cbox ==2: + gr.Info(f"Settings File '{lset_name}' has been updated") + else: + gr.Info(f"Lora Preset '{lset_name}' has been updated") + if old_lset_name != lset_name: + pos = loras_presets.index(old_lset_name) + loras_presets[pos] = lset_name + shutil.move( os.path.join(lora_dir, old_lset_name), get_available_filename(lora_dir, old_lset_name + ".bkp" ) ) else: - gr.Info(f"Lora Preset '{lset_name}' has been created") - loras_presets.append(Path(Path(lset_name_filename).parts[-1]).stem ) - lset_choices = [ ( preset, preset) for preset in loras_presets ] - lset_choices.append( (get_new_preset_msg(), "")) + if save_lset_prompt_cbox ==2: + gr.Info(f"Settings File '{lset_name}' has been created") + else: + gr.Info(f"Lora Preset '{lset_name}' has been created") + loras_presets.append(lset_name) state["loras_presets"] = loras_presets + + lset_choices = compute_lset_choices(loras_presets) + lset_choices.append( (get_new_preset_msg(), "")) return gr.Dropdown(choices=lset_choices, value= lset_name), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= False), gr.Button(visible= False), gr.Checkbox(visible= False) def delete_lset(state, lset_name): loras_presets = state["loras_presets"] - lset_name_filename = os.path.join( get_lora_dir(state["model_type"]), sanitize_file_name(lset_name) + ".lset" ) - if len(lset_name) > 0 and lset_name != get_new_preset_msg(True) and lset_name != get_new_preset_msg(False): + lset_name = get_lset_name(state, lset_name) + if len(lset_name) > 0: + lset_name_filename = os.path.join( get_lora_dir(state["model_type"]), sanitize_file_name(lset_name)) if not os.path.isfile(lset_name_filename): raise gr.Error(f"Preset '{lset_name}' not found ") os.remove(lset_name_filename) - pos = loras_presets.index(lset_name) + lset_choices = compute_lset_choices(loras_presets) + pos = next( (i for i, item in enumerate(lset_choices) if item[1]==lset_name ), -1) gr.Info(f"Lora Preset '{lset_name}' has been deleted") loras_presets.remove(lset_name) else: - pos = len(loras_presets) - gr.Info(f"Choose a Preset to delete") + pos = -1 + gr.Info(f"Choose a Preset / Settings File to delete") state["loras_presets"] = loras_presets - lset_choices = [ (preset, preset) for preset in loras_presets] + lset_choices = compute_lset_choices(loras_presets) lset_choices.append((get_new_preset_msg(), "")) - return gr.Dropdown(choices=lset_choices, value= lset_choices[pos][1]), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= False), gr.Checkbox(visible= False) + selected_lset_name = "" if pos < -1 else lset_choices[pos][1] + return gr.Dropdown(choices=lset_choices, value= selected_lset_name), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= True), gr.Button(visible= False), gr.Checkbox(visible= False) def refresh_lora_list(state, lset_name, loras_choices): loras_names = state["loras_names"] @@ -4462,13 +4972,10 @@ def refresh_lora_list(state, lset_name, 
loras_choices): if lora_id!= None: lora_names_selected.append(lora_id) - lset_choices = [ (preset, preset) for preset in loras_presets] + lset_choices = compute_lset_choices(loras_presets) lset_choices.append((get_new_preset_msg( state["advanced"]), "")) - if lset_name in loras_presets: - pos = loras_presets.index(lset_name) - else: - pos = len(loras_presets) - lset_name ="" + if not lset_name in loras_presets: + lset_name = "" if wan_model != None: errors = getattr(get_transformer_model(wan_model), "_loras_errors", "") @@ -4479,33 +4986,56 @@ def refresh_lora_list(state, lset_name, loras_choices): gr.Info("Lora List has been refreshed") - return gr.Dropdown(choices=lset_choices, value= lset_choices[pos][1]), gr.Dropdown(choices=new_loras_choices, value= lora_names_selected) + return gr.Dropdown(choices=lset_choices, value= lset_name), gr.Dropdown(choices=new_loras_choices, value= lora_names_selected) + +def update_lset_type(state, lset_name): + return 1 if lset_name.endswith(".lset") else 2 + def apply_lset(state, wizard_prompt_activated, lset_name, loras_choices, loras_mult_choices, prompt): state["apply_success"] = 0 - if len(lset_name) == 0 or lset_name== get_new_preset_msg(True) or lset_name== get_new_preset_msg(False): - gr.Info("Please choose a preset in the list or create one") + lset_name = get_lset_name(state, lset_name) + if len(lset_name) == 0: + gr.Info("Please choose a Lora Preset or Setting File in the list or create one") + return wizard_prompt_activated, loras_choices, loras_mult_choices, prompt, gr.update(), gr.update(), gr.update() else: - loras = state["loras"] - loras_choices, loras_mult_choices, preset_prompt, full_prompt, error = extract_preset(state["model_type"], lset_name, loras) - if len(error) > 0: - gr.Info(error) + current_model_type = state["model_type"] + if lset_name.endswith(".lset"): + loras = state["loras"] + loras_choices, loras_mult_choices, preset_prompt, full_prompt, error = extract_preset(current_model_type, lset_name, loras) + if len(error) > 0: + gr.Info(error) + else: + if full_prompt: + prompt = preset_prompt + elif len(preset_prompt) > 0: + prompts = prompt.replace("\r", "").split("\n") + prompts = [prompt for prompt in prompts if len(prompt)>0 and not prompt.startswith("#")] + prompt = "\n".join(prompts) + prompt = preset_prompt + '\n' + prompt + gr.Info(f"Lora Preset '{lset_name}' has been applied") + state["apply_success"] = 1 + wizard_prompt_activated = "on" + + return wizard_prompt_activated, loras_choices, loras_mult_choices, prompt, get_unique_id(), gr.update(), gr.update() else: - if full_prompt: - prompt = preset_prompt - elif len(preset_prompt) > 0: - prompts = prompt.replace("\r", "").split("\n") - prompts = [prompt for prompt in prompts if len(prompt)>0 and not prompt.startswith("#")] - prompt = "\n".join(prompts) - prompt = preset_prompt + '\n' + prompt - gr.Info(f"Lora Preset '{lset_name}' has been applied") - state["apply_success"] = 1 - wizard_prompt_activated = "on" - - return wizard_prompt_activated, loras_choices, loras_mult_choices, prompt + configs, any_video_file = get_settings_from_file(state, os.path.join(get_lora_dir(current_model_type), lset_name), True, True, True) + if configs == None: + gr.Info("File not supported") + return [gr.update()] * 7 + + model_type = configs["model_type"] + configs["lset_name"] = lset_name + gr.Info(f"Settings File '{lset_name}' has been applied") + if model_type == current_model_type: + set_model_settings(state, current_model_type, configs) + return *[gr.update()] * 4, gr.update(), 
gr.update(), get_unique_id() + else: + set_model_settings(state, model_type, configs) + return *[gr.update()] * 4, gr.update(), generate_dropdown_model_list(model_type), gr.update() def extract_prompt_from_wizard(state, variables_names, prompt, wizard_prompt, allow_null_values, *args): @@ -4635,7 +5165,7 @@ visible= False def switch_advanced(state, new_advanced, lset_name): state["advanced"] = new_advanced loras_presets = state["loras_presets"] - lset_choices = [ (preset, preset) for preset in loras_presets] + lset_choices = compute_lset_choices(loras_presets) lset_choices.append((get_new_preset_msg(new_advanced), "")) if lset_name== get_new_preset_msg(True) or lset_name== get_new_preset_msg(False) or lset_name=="": lset_name = get_new_preset_msg(new_advanced) @@ -4658,37 +5188,58 @@ def prepare_inputs_dict(target, inputs ): if target == "state": return inputs + + if "lset_name" in inputs: + inputs.pop("lset_name") + unsaved_params = ["image_start", "image_end", "image_refs", "video_guide", "video_source", "video_mask", "audio_guide"] for k in unsaved_params: inputs.pop(k) - model_filename = state["model_filename"] model_type = state["model_type"] inputs["type"] = f"WanGP v{WanGP_version} by DeepBeepMeep - " + get_model_name(model_type) inputs["settings_version"] = settings_version + base_model_type = get_base_model_type(model_type) + if model_type != base_model_type: + inputs["base_model_type"] = base_model_type + diffusion_forcing = base_model_type in ["sky_df_1.3B", "sky_df_14B", "sky_df_720p_14B"] if target == "settings": return inputs - model_filename = get_model_filename(get_base_model_type(model_type)) + model_filename = get_model_filename(base_model_type) - if not (test_class_i2v(model_type) or "diffusion_forcing" in model_filename or "ltxv" in model_filename or "recammaster" in model_filename or "Vace" in model_filename): + if not get_model_family(model_type) == "wan" or diffusion_forcing: + inputs.pop("sample_solver") + + if not (test_class_i2v(base_model_type) or diffusion_forcing or "ltxv" in model_filename or "recammaster" in model_filename or "Vace" in model_filename): inputs.pop("image_prompt_type") + if base_model_type in ["fantasy", "hunyuan_custom_audio", "hunyuan_avatar"] or server_config.get("mmaudio_enabled", 0) == 0: + unsaved_params = ["MMAudio_setting", "MMAudio_prompt", "MMAudio_neg_prompt"] + for k in unsaved_params: + inputs.pop(k) + + video_prompt_type = inputs["video_prompt_type"] + if not base_model_type in ["tv2"]: + inputs.pop("denoising_strength") + if not server_config.get("enhancer_enabled", 0) == 1: inputs.pop("prompt_enhancer") - if not "recam" in model_filename and not "diffusion_forcing" in model_filename: + if not "recam" in model_filename and not diffusion_forcing: inputs.pop("model_mode") if not "Vace" in model_filename and not "phantom" in model_filename and not "hunyuan_video_custom" in model_filename: unsaved_params = ["keep_frames_video_guide", "video_prompt_type", "remove_background_images_ref", "mask_expand"] + if base_model_type in ["t2v"]: unsaved_params = unsaved_params[2:] for k in unsaved_params: inputs.pop(k) if not "Vace" in model_filename: inputs.pop("frames_positions") + inputs.pop("video_guide_outpainting") - if not ("diffusion_forcing" in model_filename or "ltxv" in model_filename): + if not ("diffusion_forcing" in model_filename or "ltxv" in model_filename or "Vace" in model_filename): unsaved_params = ["keep_frames_video_source"] for k in unsaved_params: inputs.pop(k) @@ -4717,34 +5268,136 @@ def get_function_arguments(func, 
locals): kwargs[k] = locals[k] return kwargs -def export_settings(state): + +def init_generate(state, input_file_list, last_choice): + gen = get_gen_info(state) + file_list, file_settings_list = get_file_list(state, input_file_list) + + set_file_choice(gen, file_list, last_choice) + return get_unique_id(), "" + +def video_to_control_video(state, input_file_list, choice): + file_list, file_settings_list = get_file_list(state, input_file_list) + if len(file_list) == 0 or choice == None or choice < 0 or choice > len(file_list): return gr.update() + gr.Info("Select Video was copied to Control Video input") + return file_list[choice] + +def video_to_source_video(state, input_file_list, choice): + file_list, file_settings_list = get_file_list(state, input_file_list) + if len(file_list) == 0 or choice == None or choice < 0 or choice > len(file_list): return gr.update() + gr.Info("Select Video was copied to Source Video input") + return file_list[choice] + +def apply_post_processing(state, input_file_list, choice, PP_temporal_upsampling, PP_spatial_upsampling, PP_MMAudio_setting, PP_MMAudio_prompt, PP_MMAudio_neg_prompt, PP_MMAudio_seed, PP_repeat_generation): + gen = get_gen_info(state) + file_list, file_settings_list = get_file_list(state, input_file_list) + if len(file_list) == 0 or choice == None or choice < 0 or choice > len(file_list) : + return gr.update(), gr.update() + + overrides = { + "temporal_upsampling":PP_temporal_upsampling, + "spatial_upsampling":PP_spatial_upsampling, + "MMAudio_setting" : PP_MMAudio_setting, + "MMAudio_prompt" : PP_MMAudio_prompt, + "MMAudio_neg_prompt": PP_MMAudio_neg_prompt, + "seed": PP_MMAudio_seed, + "repeat_generation": PP_repeat_generation, + } + + gen["edit_video_source"] = file_list[choice] + gen["edit_overrides"] = overrides + + in_progress = gen.get("in_progress", False) + return "edit", get_unique_id() if not in_progress else gr.update(), get_unique_id() if in_progress else gr.update() + +def eject_video_from_gallery(state, input_file_list, choice): + gen = get_gen_info(state) + file_list, file_settings_list = get_file_list(state, input_file_list) + with lock: + if len(file_list) == 0 or choice == None or choice < 0 or choice > len(file_list) : + return gr.update(), gr.update(), gr.update() + + extend_list = file_list[choice + 1:] # inplace List change + file_list[:] = file_list[:choice] + file_list.extend(extend_list) + + extend_list = file_settings_list[choice + 1:] + file_settings_list[:] = file_settings_list[:choice] + file_settings_list.extend(extend_list) + choice = min(choice, len(file_list)) + return gr.Gallery(value = file_list, selected_index= choice), gr.update() if len(file_list) >0 else get_default_video_info(), gr.Row(visible= len(file_list) > 0) + +def add_videos_to_gallery(state, input_file_list, choice, files_to_load): + gen = get_gen_info(state) + file_list, file_settings_list = get_file_list(state, input_file_list) + with lock: + valid_files_count = 0 + invalid_files_count = 0 + for file_path in files_to_load: + file_settings, _ = get_settings_from_file(state, file_path, False, False, False) + if file_settings != None: + file_list.append(file_path) + file_settings_list.append(file_settings) + valid_files_count +=1 + else: + invalid_files_count +=1 + + if valid_files_count== 0 and invalid_files_count ==0: + gr.Info("No Video to Add") + else: + txt = "" + if valid_files_count > 0: + txt = f"{valid_files_count} files were added. " if valid_files_count > 1 else f"One file was added." 
+ if invalid_files_count > 0: + txt += f"Unable to add {invalid_files_count} files which were invalid. " if invalid_files_count > 1 else f"Unable to add one file which was invalid." + gr.Info(txt) + if choice != None and choice <= 0: + choice = len(file_list) + gen["selected"] = choice + return gr.Gallery(value = file_list, selected_index=choice, preview= True), gr.Files(value=[]), gr.Tabs(selected="video_info") + +def get_model_settings(state, model_type): + all_settings = state.get("all_settings", None) + return None if all_settings == None else all_settings.get(model_type, None) + +def set_model_settings(state, model_type, settings): + all_settings = state.get("all_settings", None) + if all_settings == None: + all_settings = {} + state["all_settings"] = all_settings + all_settings[model_type] = settings + +def collect_model_settings(state): model_filename = state["model_filename"] model_type = state["model_type"] - settings = state[model_type] + settings = get_model_settings(state, model_type) settings["state"] = state settings = prepare_inputs_dict("metadata", settings) settings["model_filename"] = model_filename settings["model_type"] = model_type - text = json.dumps(settings, indent=4) + return settings + +def export_settings(state): + model_type = state["model_type"] + text = json.dumps(collect_model_settings(state), indent=4) text_base64 = base64.b64encode(text.encode('utf8')).decode('utf-8') return text_base64, sanitize_file_name(model_type + "_" + datetime.fromtimestamp(time.time()).strftime("%Y-%m-%d-%Hh%Mm%Ss") + ".json") -def use_video_settings(state, files): + +def use_video_settings(state, input_file_list, choice): gen = get_gen_info(state) - choice = gen.get("selected",-1) - file_list = gen.get("file_list", None) - if file_list !=None and choice >=0 and len(file_list)>0: - file_settings_list = gen["file_settings_list"] + file_list, file_settings_list = get_file_list(state, input_file_list) + if choice != None and choice >=0 and len(file_list)>0: configs = file_settings_list[choice] model_type = configs["model_type"] - defaults = state.get(model_type, None) + defaults = get_model_settings(state, model_type) defaults = get_default_settings(model_type) if defaults == None else defaults defaults.update(configs) current_model_type = state["model_type"] prompt = configs.get("prompt", "") - state[model_type] = defaults + set_model_settings(state, model_type, defaults) gr.Info(f"Settings Loaded from Video with prompt '{prompt[:100]}'") - if model_type == current_model_type: + if are_model_types_compatible(model_type,current_model_type): return gr.update(), str(time.time()) else: return generate_dropdown_model_list(model_type), gr.update() @@ -4753,14 +5406,10 @@ def use_video_settings(state, files): return gr.update(), gr.update() -def load_settings_from_file(state, file_path): - gen = get_gen_info(state) - if file_path==None: - return gr.update(), gr.update(), None - +def get_settings_from_file(state, file_path, allow_json, merge_with_defaults, switch_type_if_compatible): configs = None tags = None - if file_path.endswith(".json"): + if file_path.endswith(".json") and allow_json: try: with open(file_path, 'r', encoding='utf-8') as f: configs = json.load(f) @@ -4776,33 +5425,58 @@ def load_settings_from_file(state, file_path): if tags != None: configs = json.loads(tags) if configs == None: - gr.Info("File not supported") - return gr.update(), gr.update(), None + return None, False - prompt = configs.get("prompt", "") current_model_filename = state["model_filename"] 
current_model_type = state["model_type"] + model_type = configs.get("model_type", None) + if get_base_model_type(model_type) == None: + model_type = configs.get("base_model_type", None) + if model_type == None: model_filename = configs.get("model_filename", current_model_filename) model_type = get_model_type(model_filename) if model_type == None: model_type = current_model_type - elif not model_type in model_types: + elif not model_type in model_types and not model_type in finetune_def: model_type = current_model_type - defaults = state.get(model_type, None) - if defaults != None: - fix_settings(model_type, defaults) - defaults = get_default_settings(model_type) if defaults == None else defaults - defaults.update(configs) - state[model_type]= defaults - if tags != None: + fix_settings(model_type, configs) + if switch_type_if_compatible and are_model_types_compatible(model_type,current_model_type): + model_type = current_model_type + if merge_with_defaults: + defaults = get_model_settings(state, model_type) + defaults = get_default_settings(model_type) if defaults == None else defaults + defaults.update(configs) + configs = defaults + configs["model_type"] = model_type + + return configs, tags != None + +def load_settings_from_file(state, file_path): + gen = get_gen_info(state) + if file_path==None: + return gr.update(), gr.update(), None + + configs, any_video_file = get_settings_from_file(state, file_path, True, True, True) + if configs == None: + gr.Info("File not supported") + return gr.update(), gr.update(), None + + current_model_type = state["model_type"] + model_type = configs["model_type"] + prompt = configs.get("prompt", "") + + if any_video_file: gr.Info(f"Settings Loaded from Video generated with prompt '{prompt[:100]}'") else: gr.Info(f"Settings Loaded from Settings file with prompt '{prompt[:100]}'") + if model_type == current_model_type: + set_model_settings(state, current_model_type, configs) return gr.update(), str(time.time()), None else: + set_model_settings(state, model_type, configs) return generate_dropdown_model_list(model_type), gr.update(), None def save_inputs( @@ -4817,12 +5491,14 @@ def save_inputs( guidance_scale, audio_guidance_scale, flow_shift, + sample_solver, embedded_guidance_scale, repeat_generation, multi_prompts_gen_type, multi_images_gen_type, - tea_cache_setting, - tea_cache_start_step_perc, + skip_steps_cache_type, + skip_steps_multiplier, + skip_steps_start_step_perc, loras_choices, loras_multipliers, image_prompt_type, @@ -4837,6 +5513,7 @@ def save_inputs( frames_positions, video_guide, keep_frames_video_guide, + denoising_strength, video_mask, control_net_weight, control_net_weight2, @@ -4849,6 +5526,9 @@ def save_inputs( remove_background_images_ref, temporal_upsampling, spatial_upsampling, + MMAudio_setting, + MMAudio_prompt, + MMAudio_neg_prompt, RIFLEx_setting, slg_switch, slg_layers, @@ -4857,6 +5537,7 @@ def save_inputs( cfg_star_switch, cfg_zero_step, prompt_enhancer, + mode, state, ): @@ -4876,7 +5557,7 @@ def save_inputs( gr.Info("New Default Settings saved") elif target == "state": - state[model_type] = cleaned_inputs + set_model_settings(state, model_type, cleaned_inputs) def download_loras(): from huggingface_hub import snapshot_download @@ -4960,15 +5641,20 @@ def change_model(state, model_choice): return model_filename = get_model_filename(model_choice, transformer_quantization, transformer_dtype_policy) state["model_filename"] = model_filename + server_config["last_model_type"] = model_choice + with open(server_config_filename, 
"w", encoding="utf-8") as writer: + writer.write(json.dumps(server_config, indent=4)) + state["model_type"] = model_choice header = generate_header(model_choice, compile=compile, attention_mode=attention_mode) + return header def fill_inputs(state): - prefix = state["model_type"] - ui_defaults = state.get(prefix, None) + model_type = state["model_type"] + ui_defaults = get_model_settings(state, model_type) if ui_defaults == None: - ui_defaults = get_default_settings(prefix) + ui_defaults = get_default_settings(model_type) return generate_video_tab(update_form = True, state_dict = state, ui_defaults = ui_defaults) @@ -5037,16 +5723,16 @@ def refresh_video_prompt_type_video_mask(video_prompt_type, video_prompt_type_vi return video_prompt_type, gr.update(visible= visible), gr.update(visible= visible ) def refresh_video_prompt_type_video_guide(state, video_prompt_type, video_prompt_type_video_guide): - video_prompt_type = del_in_sequence(video_prompt_type, "PDSFCMUV") + video_prompt_type = del_in_sequence(video_prompt_type, "PDSLCMGUV") video_prompt_type = add_to_sequence(video_prompt_type, video_prompt_type_video_guide) visible = "V" in video_prompt_type mask_visible = visible and "A" in video_prompt_type and not "U" in video_prompt_type vace = get_base_model_type(state["model_type"]) in ("vace_1.3B","vace_14B") - return video_prompt_type, gr.update(visible = visible), gr.update(visible = visible),gr.update(visible= (visible or "F" in video_prompt_type) and vace), gr.update(visible= visible and not "U" in video_prompt_type), gr.update(visible= mask_visible), gr.update(visible= mask_visible) + return video_prompt_type, gr.update(visible = visible), gr.update(visible = visible), gr.update(visible = visible and "G" in video_prompt_type), gr.update(visible= (visible or "F" in video_prompt_type) and vace), gr.update(visible= visible and not "U" in video_prompt_type), gr.update(visible= mask_visible), gr.update(visible= mask_visible) -def refresh_video_prompt_video_guide_trigger(state, video_prompt_type, video_prompt_type_video_guide): - video_prompt_type_video_guide = video_prompt_type_video_guide.split("#")[0] - return refresh_video_prompt_type_video_guide(state, video_prompt_type, video_prompt_type_video_guide) +# def refresh_video_prompt_video_guide_trigger(state, video_prompt_type, video_prompt_type_video_guide): +# video_prompt_type_video_guide = video_prompt_type_video_guide.split("#")[0] +# return refresh_video_prompt_type_video_guide(state, video_prompt_type, video_prompt_type_video_guide) def refresh_preview(state): gen = get_gen_info(state) @@ -5109,6 +5795,71 @@ def refresh_video_guide_outpainting_row(video_guide_outpainting_checkbox, video_ return gr.update(visible=video_guide_outpainting_checkbox), video_guide_outpainting +custom_resolutions = None +def get_resolution_choices(current_resolution_choice): + global custom_resolutions + + resolution_file = "resolutions.json" + if custom_resolutions == None and os.path.isfile(resolution_file) : + with open(resolution_file, 'r', encoding='utf-8') as f: + try: + resolution_choices = json.load(f) + except Exception as e: + print(f'Invalid "{resolution_file}" : {e}') + resolution_choices = None + if resolution_choices == None: + pass + elif not isinstance(resolution_choices, list): + print(f'"{resolution_file}" should be a list of 2 elements lists ["Label","WxH"]') + resolution_choices == None + else: + for tup in resolution_choices: + if not isinstance(tup, list) or len(tup) != 2 or not isinstance(tup[0], str) or not isinstance(tup[1], str): 
+ print(f'"{resolution_file}" contains an invalid list of two elements: {tup}') + resolution_choices == None + break + res_list = tup[1].split("x") + if len(res_list) != 2 or not is_integer(res_list[0]) or not is_integer(res_list[1]): + print(f'"{resolution_file}" contains a resolution value that is not in the format "WxH": {tup[1]}') + resolution_choices == None + break + custom_resolutions = resolution_choices + else: + resolution_choices = custom_resolutions + if resolution_choices == None: + resolution_choices=[ + # 1080p + ("1920x832 (21:9, 1080p)", "1920x832"), + ("832x1920 (9:21, 1080p)", "832x1920"), + # 720p + ("1280x720 (16:9, 720p)", "1280x720"), + ("720x1280 (9:16, 720p)", "720x1280"), + ("1024x1024 (1:1, 720p)", "1024x1024"), + ("1280x544 (21:9, 720p)", "1280x544"), + ("544x1280 (9:21, 720p)", "544x1280"), + ("1104x832 (4:3, 720p)", "1104x832"), + ("832x1104 (3:4, 720p)", "832x1104"), + ("960x960 (1:1, 720p)", "960x960"), + # 540p + ("960x544 (16:9, 540p)", "960x544"), + ("544x960 (9:16, 540p)", "544x960"), + # 480p + ("832x480 (16:9, 480p)", "832x480"), + ("480x832 (9:16, 480p)", "480x832"), + ("832x624 (4:3, 480p)", "832x624"), + ("624x832 (3:4, 480p)", "624x832"), + ("720x720 (1:1, 480p)", "720x720"), + ("512x512 (1:1, 480p)", "512x512"), + ] + + found = False + for label, res in resolution_choices: + if current_resolution_choice == res: + found = True + break + if not found: + resolution_choices.append( (current_resolution_choice, current_resolution_choice )) + return resolution_choices def generate_video_tab(update_form = False, state_dict = None, ui_defaults = None, model_choice = None, header = None, main = None): global inputs_names #, advanced @@ -5129,8 +5880,8 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non gen = dict() gen["queue"] = [] state_dict["gen"] = gen - - model_filename = get_model_filename( get_base_model_type(model_type) ) + base_model_type = get_base_model_type(model_type) + model_filename = get_model_filename( base_model_type ) preset_to_load = lora_preselected_preset if lora_preset_model == model_type else "" loras, loras_names, loras_presets, default_loras_choices, default_loras_multis_str, default_lora_preset_prompt, default_lora_preset = setup_loras(model_type, None, get_lora_dir(model_type), preset_to_load, None) @@ -5179,18 +5930,23 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non modal_image_display = gr.HTML(label="Full Resolution Image") preview_column_no = gr.Text(visible=False, value=-1, elem_id="preview_column_no") with gr.Row(visible= True): #len(loras)>0) as presets_column: - lset_choices = [ (preset, preset) for preset in loras_presets ] + [(get_new_preset_msg(advanced_ui), "")] + lset_choices = compute_lset_choices(loras_presets) + [(get_new_preset_msg(advanced_ui), "")] with gr.Column(scale=6): lset_name = gr.Dropdown(show_label=False, allow_custom_value= True, scale=5, filterable=True, choices= lset_choices, value=launch_preset) with gr.Column(scale=1): - with gr.Row(height=17): - apply_lset_btn = gr.Button("Apply Lora Preset", size="sm", min_width= 1) + with gr.Row(height=17): + apply_lset_btn = gr.Button("Apply", size="sm", min_width= 1) refresh_lora_btn = gr.Button("Refresh", size="sm", min_width= 1, visible=advanced_ui or not only_allow_edit_in_advanced) + if len(launch_preset) == 0 : + lset_type = 2 + else: + lset_type = 1 if launch_preset.endswith(".lset") else 2 save_lset_prompt_drop= gr.Dropdown( choices=[ - ("Save Prompt Comments Only", 0), - ("Save Full 
Prompt", 1) - ], show_label= False, container=False, value =1, visible= False + # ("Save Loras & Only Prompt Comments", 0), + ("Save Only Loras & Full Prompt", 1), + ("Save All the Settings", 2) + ], show_label= False, container=False, value = lset_type, visible= False ) with gr.Row(height=17, visible=False) as refresh2_row: refresh_lora_btn2 = gr.Button("Refresh", size="sm", min_width= 1) @@ -5198,13 +5954,14 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non with gr.Row(height=17, visible=advanced_ui or not only_allow_edit_in_advanced) as preset_buttons_rows: confirm_save_lset_btn = gr.Button("Go Ahead Save it !", size="sm", min_width= 1, visible=False) confirm_delete_lset_btn = gr.Button("Go Ahead Delete it !", size="sm", min_width= 1, visible=False) - save_lset_btn = gr.Button("Save", size="sm", min_width= 1) - delete_lset_btn = gr.Button("Delete", size="sm", min_width= 1) + save_lset_btn = gr.Button("Save", size="sm", min_width= 1, visible = True) + delete_lset_btn = gr.Button("Delete", size="sm", min_width= 1, visible = True) cancel_lset_btn = gr.Button("Don't do it !", size="sm", min_width= 1 , visible=False) - + #confirm_save_lset_btn, confirm_delete_lset_btn, save_lset_btn, delete_lset_btn, cancel_lset_btn if not update_form: state = gr.State(state_dict) trigger_refresh_input_type = gr.Text(interactive= False, visible= False) + t2v = base_model_type in ["t2v"] diffusion_forcing = "diffusion_forcing" in model_filename ltxv = "ltxv" in model_filename ltxv_distilled = "ltxv" in model_filename and "distilled" in model_filename @@ -5221,12 +5978,12 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non sliding_window_enabled = test_any_sliding_window(model_type) multi_prompts_gen_type_value = ui_defaults.get("multi_prompts_gen_type_value",0) prompt_label, wizard_prompt_label = get_prompt_labels(multi_prompts_gen_type_value) - + any_video_source = True with gr.Column(visible= test_class_i2v(model_type) or diffusion_forcing or ltxv or recammaster or vace) as image_prompt_column: if vace: image_prompt_type_value= ui_defaults.get("image_prompt_type","") image_prompt_type_value = "" if image_prompt_type_value == "S" else image_prompt_type_value - image_prompt_type = gr.Radio( [("New Video", ""),("Continue Video File", "V"),("Continue Last Video", "L"),("Continue Selected Video", "G")], value =image_prompt_type_value, label="Source Video", show_label= False, visible= True , scale= 3) + image_prompt_type = gr.Radio( [("New Video", ""),("Continue Video File", "V"),("Continue Last Video", "L")], value =image_prompt_type_value, label="Source Video", show_label= False, visible= True , scale= 3) image_start = gr.Gallery(visible = False) image_end = gr.Gallery(visible = False) @@ -5306,32 +6063,43 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non video_source = gr.Video(value=None, visible=False) model_mode = gr.Dropdown(value=None, visible=False) keep_frames_video_source = gr.Text(visible=False) + any_video_source = False - with gr.Column(visible= vace or phantom or hunyuan_video_custom or hunyuan_video_avatar or hunyuan_video_custom_edit ) as video_prompt_column: + with gr.Column(visible= vace or phantom or hunyuan_video_custom or hunyuan_video_avatar or hunyuan_video_custom_edit or t2v) as video_prompt_column: video_prompt_type_value= ui_defaults.get("video_prompt_type","") video_prompt_type = gr.Text(value= video_prompt_type_value, visible= False) + any_control_video = True with gr.Row(): - if vace: 
+ if t2v: + video_prompt_type_video_guide = gr.Dropdown( + choices=[ + ("Use Text Prompt Only", ""), + ("Video to Video guided by Text Prompt", "GUV"), + ], + value=filter_letters(video_prompt_type_value, "GUV"), + label="Video to Video", scale = 2, show_label= False, visible= True + ) + elif vace: video_prompt_type_video_guide = gr.Dropdown( choices=[ ("No Control Video", ""), ("Transfer Human Motion", "PV"), ("Transfer Depth", "DV"), ("Transfer Shapes", "SV"), - ("Transfer Flow", "FV"), + ("Transfer Flow", "LV"), ("Recolorize", "CV"), ("Perform Inpainting", "MV"), ("Use Vace raw format", "V"), ("Keep Unchanged", "UV"), ("Transfer Human Motion & Depth", "PDV"), ("Transfer Human Motion & Shapes", "PSV"), - ("Transfer Human Motion & Flow", "PFV"), + ("Transfer Human Motion & Flow", "PLV"), ("Transfer Depth & Shapes", "DSV"), - ("Transfer Depth & Flow", "DFV"), - ("Transfer Shapes & Flow", "SFV"), + ("Transfer Depth & Flow", "DLV"), + ("Transfer Shapes & Flow", "SLV"), ], - value=filter_letters(video_prompt_type_value, "PDSFCMUV"), - label="Control Video Process", scale = 2, visible= True + value=filter_letters(video_prompt_type_value, "PDSLCMGUV"), + label="Control Video Process", scale = 2, visible= True, show_label= True, ) elif hunyuan_video_custom_edit: video_prompt_type_video_guide = gr.Dropdown( @@ -5339,22 +6107,24 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non ("Inpaint Control Video", "MV"), ("Transfer Human Motion", "PMV"), ], - value=filter_letters(video_prompt_type_value, "PDSFCMUV"), - label="Video to Video", scale = 3, visible= True + value=filter_letters(video_prompt_type_value, "PDSLCMUV"), + label="Video to Video", scale = 3, visible= True, show_label= True, ) else: + any_control_video = False video_prompt_type_video_guide = gr.Dropdown(visible= False) - video_prompt_video_guide_trigger = gr.Text(visible=False, value="") - - if hunyuan_video_custom_edit: + # video_prompt_video_guide_trigger = gr.Text(visible=False, value="") + if t2v: + video_prompt_type_video_mask = gr.Dropdown(value = "", visible = False) + elif hunyuan_video_custom_edit: video_prompt_type_video_mask = gr.Dropdown( choices=[ ("Masked Area", "A"), ("Non Masked Area", "NA"), ], value= filter_letters(video_prompt_type_value, "NA"), - visible= "V" in video_prompt_type_value, + visible= "V" in video_prompt_type_value, label="Area Processed", scale = 2 ) else: @@ -5376,7 +6146,9 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non visible= "V" in video_prompt_type_value and not "U" in video_prompt_type_value and not hunyuan_video_custom, label="Area Processed", scale = 2 ) - if vace: + if t2v: + video_prompt_type_image_refs = gr.Dropdown(value="", visible =False) + elif vace: video_prompt_type_image_refs = gr.Dropdown( choices=[ ("None", ""), @@ -5396,7 +6168,9 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non ) video_guide = gr.Video(label= "Control Video", visible= "V" in video_prompt_type_value, value= ui_defaults.get("video_guide", None),) + denoising_strength = gr.Slider(0, 1, value= ui_defaults.get("denoising_strength" ,0.5), step=0.01, label="Denoising Strength (the Lower the Closer to the Control Video)", visible = "G" in video_prompt_type_value, show_reset_button= False) keep_frames_video_guide = gr.Text(value=ui_defaults.get("keep_frames_video_guide","") , visible= "V" in video_prompt_type_value, scale = 2, label= "Frames to keep in Control Video (empty=All, 1=first, a:b for a range, space to separate 
values)" ) #, -1=last + with gr.Column(visible= ("V" in video_prompt_type_value or "F" in video_prompt_type_value) and vace) as video_guide_outpainting_col: video_guide_outpainting_value = ui_defaults.get("video_guide_outpainting","#") video_guide_outpainting = gr.Text(value=video_guide_outpainting_value , visible= False) @@ -5476,38 +6250,15 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non visible= True ) with gr.Row(): - if test_class_i2v(model_type): - if server_config.get("fit_canvas", 0) == 1: - label = "Max Resolution (as it maybe less depending on video width / height ratio)" - else: - label = "Max Resolution (as it maybe less depending on video width / height ratio)" + if server_config.get("fit_canvas", 0) == 1: + label = "Max Resolution (as it maybe less depending on video width / height ratio)" else: - label = "Max Resolution (as it maybe less depending on video width / height ratio)" + label = "Max Resolution (pixels will be reallocated depending on video width / height ratio)" + current_resolution_choice = ui_defaults.get("resolution","832x480") + resolution_choices= get_resolution_choices(current_resolution_choice) resolution = gr.Dropdown( - choices=[ - # 1080p - ("1920x832 (21:9, 1080p)", "1920x832"), - ("832x1920 (9:21, 1080p)", "832x1920"), - # 720p - ("1280x720 (16:9, 720p)", "1280x720"), - ("720x1280 (9:16, 720p)", "720x1280"), - ("1024x1024 (1:1, 720p)", "1024x024"), - ("1280x544 (21:9, 720p)", "1280x544"), - ("544x1280 (9:21, 720p)", "544x1280"), - ("1104x832 (4:3, 720p)", "1104x832"), - ("832x1104 (3:4, 720p)", "832x1104"), - ("960x960 (1:1, 720p)", "960x960"), - # 480p - ("960x544 (16:9, 540p)", "960x544"), - ("544x960 (9:16, 540p)", "544x960"), - ("832x480 (16:9, 480p)", "832x480"), - ("480x832 (9:16, 480p)", "480x832"), - ("832x624 (4:3, 480p)", "832x624"), - ("624x832 (3:4, 480p)", "624x832"), - ("720x720 (1:1, 480p)", "720x720"), - ("512x512 (1:1, 480p)", "512x512"), - ], - value=ui_defaults.get("resolution","832x480"), + choices = resolution_choices, + value= current_resolution_choice, label= label ) with gr.Row(): @@ -5550,7 +6301,16 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non guidance_scale = gr.Slider(1.0, 20.0, value=ui_defaults.get("guidance_scale",5), step=0.5, label="Guidance Scale", visible=not (hunyuan_t2v or hunyuan_i2v)) audio_guidance_scale = gr.Slider(1.0, 20.0, value=ui_defaults.get("audio_guidance_scale",5), step=0.5, label="Audio Guidance", visible=fantasy) embedded_guidance_scale = gr.Slider(1.0, 20.0, value=6.0, step=0.5, label="Embedded Guidance Scale", visible=(hunyuan_t2v or hunyuan_i2v)) - flow_shift = gr.Slider(0.0, 25.0, value=ui_defaults.get("flow_shift",3), step=0.1, label="Shift Scale") + flow_shift = gr.Slider(1.0, 25.0, value=ui_defaults.get("flow_shift",3), step=0.1, label="Shift Scale") + with gr.Row(visible = get_model_family(model_type) == "wan" and not diffusion_forcing ) as sample_solver_row: + sample_solver = gr.Dropdown( value=ui_defaults.get("sample_solver",""), + choices=[ + ("unipc", ""), + ("dpm++", "dpm++"), + ("causvid", "causvid"), + ], visible= True, label= "Sampler Solver / Scheduler" + ) + with gr.Row(visible = vace): control_net_weight = gr.Slider(0.0, 2.0, value=ui_defaults.get("control_net_weight",1), step=0.1, label="Control Net Weight #1", visible=vace) control_net_weight2 = gr.Slider(0.0, 2.0, value=ui_defaults.get("control_net_weight2",1), step=0.1, label="Control Net Weight #2", visible=vace) @@ -5568,51 +6328,97 @@ def 
generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non label="Activated Loras" ) loras_multipliers = gr.Textbox(label="Loras Multipliers (1.0 by default) separated by space characters or carriage returns, line that starts with # are ignored", value=launch_multis_str) - with gr.Tab("Speed", visible = not ltxv) as speed_tab: + with gr.Tab("Steps Skipping", visible = not ltxv) as speed_tab: with gr.Column(): - gr.Markdown("Tea Cache accelerates by skipping intelligently some steps, the more steps are skipped the lower the quality of the video (Tea Cache consumes also VRAM)") + gr.Markdown("Tea Cache and Mag Cache accelerate the Video Generation by skipping intelligently some steps, the more steps are skipped the lower the quality of the video.") + gr.Markdown("Steps Skipping consumes also VRAM. It is recommended not to skip at least the first 10% steps.") - tea_cache_setting = gr.Dropdown( + skip_steps_cache_type = gr.Dropdown( + choices=[ + ("None", ""), + ("Tea Cache", "tea"), + ("Mag Cache", "mag"), + ], + value=ui_defaults.get("skip_steps_cache_type",""), + visible=True, + label="Skip Steps Cache Type" + ) + + skip_steps_multiplier = gr.Dropdown( choices=[ - ("Tea Cache Disabled", 0), ("around x1.5 speed up", 1.5), ("around x1.75 speed up", 1.75), ("around x2 speed up", 2.0), ("around x2.25 speed up", 2.25), ("around x2.5 speed up", 2.5), ], - value=float(ui_defaults.get("tea_cache_setting",0)), + value=float(ui_defaults.get("skip_steps_multiplier",1.75)), visible=True, - label="Tea Cache Global Acceleration" + label="Skip Steps Cache Global Acceleration" ) - tea_cache_start_step_perc = gr.Slider(0, 100, value=ui_defaults.get("tea_cache_start_step_perc",0), step=1, label="Tea Cache starting moment in % of generation") + skip_steps_start_step_perc = gr.Slider(0, 100, value=ui_defaults.get("skip_steps_start_step_perc",0), step=1, label="Skip Steps starting moment in % of generation") with gr.Tab("Upsampling"): + with gr.Column(): gr.Markdown("Upsampling - postprocessing that may improve fluidity and the size of the video") - temporal_upsampling = gr.Dropdown( - choices=[ - ("Disabled", ""), - ("Rife x2 frames/s", "rife2"), - ("Rife x4 frames/s", "rife4"), - ], - value=ui_defaults.get("temporal_upsampling", ""), - visible=True, - scale = 1, - label="Temporal Upsampling" - ) - spatial_upsampling = gr.Dropdown( - choices=[ - ("Disabled", ""), - ("Lanczos x1.5", "lanczos1.5"), - ("Lanczos x2.0", "lanczos2"), - ], - value=ui_defaults.get("spatial_upsampling", ""), - visible=True, - scale = 1, - label="Spatial Upsampling" - ) + def gen_upsampling_dropdowns(temporal_upsampling, spatial_upsampling , element_class= None, max_height= None): + temporal_upsampling = gr.Dropdown( + choices=[ + ("Disabled", ""), + ("Rife x2 frames/s", "rife2"), + ("Rife x4 frames/s", "rife4"), + ], + value=temporal_upsampling, + visible=True, + scale = 1, + label="Temporal Upsampling", + elem_classes= element_class + # max_height = max_height + ) + spatial_upsampling = gr.Dropdown( + choices=[ + ("Disabled", ""), + ("Lanczos x1.5", "lanczos1.5"), + ("Lanczos x2.0", "lanczos2"), + ], + value=spatial_upsampling, + visible=True, + scale = 1, + label="Spatial Upsampling", + elem_classes= element_class + # max_height = max_height + ) + return temporal_upsampling, spatial_upsampling + temporal_upsampling, spatial_upsampling = gen_upsampling_dropdowns(ui_defaults.get("temporal_upsampling", ""), ui_defaults.get("spatial_upsampling", "")) + + with gr.Tab("MMAudio", visible = 
server_config.get("mmaudio_enabled", 0) != 0 and not fantasy and not hunyuan_video_avatar and not hunyuan_video_custom_audio) as mmaudio_tab: + with gr.Column(): + gr.Markdown("Add a soundtrack based on the content of the Generated Video") + def gen_mmaudio_dropdowns(MMAudio_setting, MMAudio_prompt, MMAudio_neg_prompt, MMAudio_seed = None, element_class = None, max_height = None ): + with gr.Row(max_height=max_height): + MMAudio_setting = gr.Dropdown( + choices=[ + ("Disabled", 0), + ("Enabled", 1), + ], + value=MMAudio_setting, + visible=True, + scale = 1, + label="MMAudio", + elem_classes= element_class, + # max_height = max_height + ) + if MMAudio_seed != None: + MMAudio_seed = gr.Slider(-1, 999999999, value=MMAudio_seed, step=1, scale=3, label="Seed (-1 for random)") + with gr.Row(max_height=max_height): + MMAudio_prompt = gr.Text(MMAudio_prompt, label="Prompt (1 or 2 keywords)", elem_classes= element_class) + MMAudio_neg_prompt = gr.Text(MMAudio_neg_prompt, label="Negative Prompt (1 or 2 keywords)", elem_classes= element_class) + + return MMAudio_setting, MMAudio_prompt, MMAudio_neg_prompt, MMAudio_seed + MMAudio_setting, MMAudio_prompt, MMAudio_neg_prompt, _ = gen_mmaudio_dropdowns(ui_defaults.get("MMAudio_setting", 0), ui_defaults.get("MMAudio_prompt", ""), ui_defaults.get("MMAudio_neg_prompt", "")) + with gr.Tab("Quality", visible = not ltxv) as quality_tab: with gr.Column(visible = not (hunyuan_i2v or hunyuan_t2v or hunyuan_video_custom or hunyuan_video_avatar) ) as skip_layer_guidance_row: @@ -5707,33 +6513,63 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non with gr.Row(): save_settings_btn = gr.Button("Set Settings as Default", visible = not args.lock_config) - export_settings_from_file_btn = gr.Button("Export Settings to File", visible = not args.lock_config) - use_video_settings_btn = gr.Button("Use Selected Video Settings", visible = not args.lock_config) + export_settings_from_file_btn = gr.Button("Export Settings to File") with gr.Row(): settings_file = gr.File(height=41,label="Load Settings From Video / Json") settings_base64_output = gr.Text(interactive= False, visible=False, value = "") settings_filename = gr.Text(interactive= False, visible=False, value = "") + + mode = gr.Text(value="", visible = False) - if not update_form: - with gr.Column(): + with gr.Column(): + if not update_form: gen_status = gr.Text(interactive= False, label = "Status") status_trigger = gr.Text(interactive= False, visible=False) - output = gr.Gallery( label="Generated videos", show_label=False, elem_id="gallery" , columns=[3], rows=[1], object_fit="contain", height=450, selected_index=0, interactive= False) + default_files = [] + output = gr.Gallery(value =default_files, label="Generated videos", show_label=False, elem_id="gallery" , columns=[3], rows=[1], object_fit="contain", height=450, selected_index=0, interactive= False) output_trigger = gr.Text(interactive= False, visible=False) refresh_form_trigger = gr.Text(interactive= False, visible=False) + fill_wizard_prompt_trigger = gr.Text(interactive= False, visible=False) + + with gr.Accordion("Video Info and Late Post Processing", open=False) as video_info_accordion: + with gr.Tabs() as video_info_tabs: + with gr.Tab("Information", id="video_info"): + video_info = gr.HTML(visible=True, min_height=100, value=get_default_video_info()) + with gr.Row(visible= False) as video_buttons_row: + video_info_extract_settings_btn = gr.Button("Extract Settings", size ="sm") + video_info_to_control_video_btn = gr.Button("To 
Control Video", size ="sm", visible = any_control_video ) + video_info_to_video_source_btn = gr.Button("To Video Source", size ="sm", visible = any_video_source) + video_info_eject_video_btn = gr.Button("Eject Video", size ="sm") + with gr.Tab("Post Processing", id= "post_processing") as video_postprocessing_tab: + with gr.Group(elem_classes= "postprocess"): + with gr.Column(): + PP_temporal_upsampling, PP_spatial_upsampling = gen_upsampling_dropdowns("", "", element_class ="postprocess") + PP_MMAudio_setting, PP_MMAudio_prompt, PP_MMAudio_neg_prompt, _ = gen_mmaudio_dropdowns( 0, "" , "", None, element_class ="postprocess" ) + PP_MMAudio_seed = gr.Slider(-1, 999999999, value=-1, step=1, label="Seed (-1 for random)") + PP_repeat_generation = gr.Slider(1, 25.0, value=1, step=1, label="Number of Sample Videos to Generate") + + video_info_postprocessing_btn = gr.Button("Apply Upscaling & MMAudio", size ="sm", visible=True) + with gr.Tab("Add Videos", id= "video_add"): + files_to_load = gr.Files(label= "Files to Load in Gallery", height=120) + with gr.Row(): + video_info_add_videos_btn = gr.Button("Add Videos", size ="sm") + + if not update_form: generate_btn = gr.Button("Generate") + generate_trigger = gr.Text(visible = False) add_to_queue_btn = gr.Button("Add New Prompt To Queue", visible = False) + add_to_queue_trigger = gr.Text(visible = False) with gr.Column(visible= False) as current_gen_column: with gr.Accordion("Preview", open=False) as queue_accordion: preview = gr.Image(label="Preview", height=200, show_label= False) preview_trigger = gr.Text(visible= False) gen_info = gr.HTML(visible=False, min_height=1) - with gr.Row(): - onemoresample_btn = gr.Button("One More Sample Please !") + with gr.Row() as current_gen_buttons_row: + onemoresample_btn = gr.Button("One More Sample Please !", visible = True) onemorewindow_btn = gr.Button("Extend this Sample Please !", visible = False) - abort_btn = gr.Button("Abort") + abort_btn = gr.Button("Abort", visible = True) with gr.Accordion("Queue Management", open=False) as queue_accordion: with gr.Row( ): queue_df = gr.DataFrame( @@ -5764,11 +6600,11 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non single_hidden_trigger_btn = gr.Button("trigger_countdown", visible=False, elem_id="trigger_info_single_btn") extra_inputs = prompt_vars + [wizard_prompt, wizard_variables_var, wizard_prompt_activated_var, video_prompt_column, image_prompt_column, - prompt_column_advanced, prompt_column_wizard_vars, prompt_column_wizard, lset_name, advanced_row, speed_tab, quality_tab, + prompt_column_advanced, prompt_column_wizard_vars, prompt_column_wizard, lset_name, save_lset_prompt_drop, advanced_row, speed_tab, mmaudio_tab, quality_tab, sliding_window_tab, misc_tab, prompt_enhancer_row, inference_steps_row, skip_layer_guidance_row, video_prompt_type_video_guide, video_prompt_type_video_mask, video_prompt_type_image_refs, video_guide_outpainting_col,video_guide_outpainting_top, video_guide_outpainting_bottom, video_guide_outpainting_left, video_guide_outpainting_right, - video_guide_outpainting_checkbox, video_guide_outpainting_row, show_advanced] # presets_column, + video_guide_outpainting_checkbox, video_guide_outpainting_row, show_advanced, video_info_to_control_video_btn, video_info_to_video_source_btn, sample_solver_row] # presets_column, if update_form: locals_dict = locals() gen_inputs = [state_dict if k=="state" else locals_dict[k] for k in inputs_names] + [state_dict] + extra_inputs @@ -5776,11 +6612,12 @@ def 
generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non else: target_state = gr.Text(value = "state", interactive= False, visible= False) target_settings = gr.Text(value = "settings", interactive= False, visible= False) + last_choice = gr.Number(value =-1, interactive= False, visible= False) image_prompt_type.change(fn=refresh_image_prompt_type, inputs=[state, image_prompt_type], outputs=[image_start, image_end, video_source, keep_frames_video_source] ) - video_prompt_video_guide_trigger.change(fn=refresh_video_prompt_video_guide_trigger, inputs=[state, video_prompt_type, video_prompt_video_guide_trigger], outputs=[video_prompt_type, video_prompt_type_video_guide, video_guide, keep_frames_video_guide, video_guide_outpainting_col, video_prompt_type_video_mask, video_mask, mask_expand]) + # video_prompt_video_guide_trigger.change(fn=refresh_video_prompt_video_guide_trigger, inputs=[state, video_prompt_type, video_prompt_video_guide_trigger], outputs=[video_prompt_type, video_prompt_type_video_guide, video_guide, keep_frames_video_guide, denoising_strength, video_guide_outpainting_col, video_prompt_type_video_mask, video_mask, mask_expand]) video_prompt_type_image_refs.input(fn=refresh_video_prompt_type_image_refs, inputs = [state, video_prompt_type, video_prompt_type_image_refs], outputs = [video_prompt_type, image_refs, remove_background_images_ref, frames_positions, video_guide_outpainting_col]) - video_prompt_type_video_guide.input(fn=refresh_video_prompt_type_video_guide, inputs = [state, video_prompt_type, video_prompt_type_video_guide], outputs = [video_prompt_type, video_guide, keep_frames_video_guide, video_guide_outpainting_col, video_prompt_type_video_mask, video_mask, mask_expand]) + video_prompt_type_video_guide.input(fn=refresh_video_prompt_type_video_guide, inputs = [state, video_prompt_type, video_prompt_type_video_guide], outputs = [video_prompt_type, video_guide, keep_frames_video_guide, denoising_strength, video_guide_outpainting_col, video_prompt_type_video_mask, video_mask, mask_expand]) video_prompt_type_video_mask.input(fn=refresh_video_prompt_type_video_mask, inputs = [video_prompt_type, video_prompt_type_video_mask], outputs = [video_prompt_type, video_mask, mask_expand]) multi_prompts_gen_type.select(fn=refresh_prompt_labels, inputs=multi_prompts_gen_type, outputs=[prompt, wizard_prompt]) video_guide_outpainting_top.input(fn=update_video_guide_outpainting, inputs=[video_guide_outpainting, video_guide_outpainting_top, gr.State(0)], outputs = [video_guide_outpainting] ) @@ -5788,21 +6625,10 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non video_guide_outpainting_left.input(fn=update_video_guide_outpainting, inputs=[video_guide_outpainting, video_guide_outpainting_left,gr.State(2)], outputs = [video_guide_outpainting] ) video_guide_outpainting_right.input(fn=update_video_guide_outpainting, inputs=[video_guide_outpainting, video_guide_outpainting_right,gr.State(3)], outputs = [video_guide_outpainting] ) video_guide_outpainting_checkbox.input(fn=refresh_video_guide_outpainting_row, inputs=[video_guide_outpainting_checkbox, video_guide_outpainting], outputs= [video_guide_outpainting_row,video_guide_outpainting]) - show_advanced.change(fn=switch_advanced, inputs=[state, show_advanced, lset_name], outputs=[advanced_row, preset_buttons_rows, refresh_lora_btn, refresh2_row ,lset_name ]).then( + show_advanced.change(fn=switch_advanced, inputs=[state, show_advanced, lset_name], outputs=[advanced_row, preset_buttons_rows, 
refresh_lora_btn, refresh2_row ,lset_name]).then( fn=switch_prompt_type, inputs = [state, wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, *prompt_vars], outputs = [wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, prompt_column_advanced, prompt_column_wizard, prompt_column_wizard_vars, *prompt_vars]) queue_df.select( fn=handle_celll_selection, inputs=state, outputs=[queue_df, modal_image_display, modal_container]) - save_lset_btn.click(validate_save_lset, inputs=[lset_name], outputs=[apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn,confirm_save_lset_btn, cancel_lset_btn, save_lset_prompt_drop]) - confirm_save_lset_btn.click(fn=validate_wizard_prompt, inputs =[state, wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, *prompt_vars] , outputs= [prompt]).then( - save_lset, inputs=[state, lset_name, loras_choices, loras_multipliers, prompt, save_lset_prompt_drop], outputs=[lset_name, apply_lset_btn,refresh_lora_btn, delete_lset_btn, save_lset_btn, confirm_save_lset_btn, cancel_lset_btn, save_lset_prompt_drop]) - delete_lset_btn.click(validate_delete_lset, inputs=[lset_name], outputs=[apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn,confirm_delete_lset_btn, cancel_lset_btn ]) - confirm_delete_lset_btn.click(delete_lset, inputs=[state, lset_name], outputs=[lset_name, apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn,confirm_delete_lset_btn, cancel_lset_btn ]) - cancel_lset_btn.click(cancel_lset, inputs=[], outputs=[apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn, confirm_delete_lset_btn,confirm_save_lset_btn, cancel_lset_btn,save_lset_prompt_drop ]) - apply_lset_btn.click(apply_lset, inputs=[state, wizard_prompt_activated_var, lset_name,loras_choices, loras_multipliers, prompt], outputs=[wizard_prompt_activated_var, loras_choices, loras_multipliers, prompt]).then( - fn = fill_wizard_prompt, inputs = [state, wizard_prompt_activated_var, prompt, wizard_prompt], outputs = [ wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, prompt_column_advanced, prompt_column_wizard, prompt_column_wizard_vars, *prompt_vars] - ) - refresh_lora_btn.click(refresh_lora_list, inputs=[state, lset_name,loras_choices], outputs=[lset_name, loras_choices]) - refresh_lora_btn2.click(refresh_lora_list, inputs=[state, lset_name,loras_choices], outputs=[lset_name, loras_choices]) - output.select(select_video, state, None ) + output.select(select_video, [state, output], outputs=[last_choice, video_info, video_buttons_row, video_postprocessing_tab] ) preview_trigger.change(refresh_preview, inputs= [state], outputs= [preview]) def refresh_status_async(state, progress=gr.Progress()): @@ -5835,7 +6661,7 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non output_trigger.change(refresh_gallery, inputs = [state], - outputs = [output, gen_info, generate_btn, add_to_queue_btn, current_gen_column, queue_df, abort_btn, onemorewindow_btn]) + outputs = [output, gen_info, generate_btn, add_to_queue_btn, current_gen_column, current_gen_buttons_row, queue_df, abort_btn, onemorewindow_btn]) preview_column_no.input(show_preview_column_modal, inputs=[state, preview_column_no], outputs=[preview_column_no, modal_image_display, modal_container]) @@ -5849,14 +6675,34 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non save_settings_btn.click( fn=validate_wizard_prompt, inputs =[state, wizard_prompt_activated_var, 
wizard_variables_var, prompt, wizard_prompt, *prompt_vars] , outputs= [prompt]).then( save_inputs, inputs =[target_settings] + gen_inputs, outputs = []) - use_video_settings_btn.click(fn=validate_wizard_prompt, + video_info_extract_settings_btn.click(fn=validate_wizard_prompt, inputs= [state, wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, *prompt_vars] , outputs= [prompt] ).then(fn=save_inputs, inputs =[target_state] + gen_inputs, outputs= None - ).then( fn=use_video_settings, inputs =[state, output] , outputs= [model_choice, refresh_form_trigger]) + ).then( fn=use_video_settings, inputs =[state, output, last_choice] , outputs= [model_choice, refresh_form_trigger]) + video_info_add_videos_btn.click(fn=add_videos_to_gallery, inputs =[state, output, last_choice, files_to_load], outputs = [output, files_to_load, video_info_tabs] ) + video_info_eject_video_btn.click(fn=eject_video_from_gallery, inputs =[state, output, last_choice], outputs = [output, video_info, video_buttons_row] ) + video_info_to_control_video_btn.click(fn=video_to_control_video, inputs =[state, output, last_choice], outputs = [video_guide] ) + video_info_to_video_source_btn.click(fn=video_to_source_video, inputs =[state, output, last_choice], outputs = [video_source] ) + video_info_postprocessing_btn.click(fn=apply_post_processing, inputs =[state, output, last_choice, PP_temporal_upsampling, PP_spatial_upsampling, PP_MMAudio_setting, PP_MMAudio_prompt, PP_MMAudio_neg_prompt, PP_MMAudio_seed, PP_repeat_generation], outputs = [mode, generate_trigger, add_to_queue_trigger ] ) + save_lset_btn.click(validate_save_lset, inputs=[state, lset_name], outputs=[apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn,confirm_save_lset_btn, cancel_lset_btn, save_lset_prompt_drop]) + delete_lset_btn.click(validate_delete_lset, inputs=[state, lset_name], outputs=[apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn,confirm_delete_lset_btn, cancel_lset_btn ]) + confirm_save_lset_btn.click(fn=validate_wizard_prompt, inputs =[state, wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, *prompt_vars] , outputs= [prompt]).then( + fn=save_inputs, + inputs =[target_state] + gen_inputs, + outputs= None).then( + fn=save_lset, inputs=[state, lset_name, loras_choices, loras_multipliers, prompt, save_lset_prompt_drop], outputs=[lset_name, apply_lset_btn,refresh_lora_btn, delete_lset_btn, save_lset_btn, confirm_save_lset_btn, cancel_lset_btn, save_lset_prompt_drop]) + confirm_delete_lset_btn.click(delete_lset, inputs=[state, lset_name], outputs=[lset_name, apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn,confirm_delete_lset_btn, cancel_lset_btn ]) + cancel_lset_btn.click(cancel_lset, inputs=[], outputs=[apply_lset_btn, refresh_lora_btn, delete_lset_btn, save_lset_btn, confirm_delete_lset_btn,confirm_save_lset_btn, cancel_lset_btn,save_lset_prompt_drop ]) + apply_lset_btn.click(fn=save_inputs, inputs =[target_state] + gen_inputs, outputs= None).then(fn=apply_lset, + inputs=[state, wizard_prompt_activated_var, lset_name,loras_choices, loras_multipliers, prompt], outputs=[wizard_prompt_activated_var, loras_choices, loras_multipliers, prompt, fill_wizard_prompt_trigger, model_choice, refresh_form_trigger]) + refresh_lora_btn.click(refresh_lora_list, inputs=[state, lset_name,loras_choices], outputs=[lset_name, loras_choices]) + refresh_lora_btn2.click(refresh_lora_list, inputs=[state, lset_name,loras_choices], outputs=[lset_name, loras_choices]) + + 
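
The wiring above routes every action through `save_inputs` with `[target_state] + gen_inputs` before doing anything else, so the per-model settings store stays in sync with the form. Below is a minimal, standalone sketch (not part of the patch) of that store as introduced by `get_model_settings` / `set_model_settings`: one dict per model type under `state["all_settings"]`, with defaults pulled in lazily. The example keys and values are stand-ins, not the app's real defaults.

```python
# Standalone sketch of the per-model settings store shared by fill_inputs(),
# save_inputs(target="state") and use_video_settings(). Illustrative only;
# the default values below are stand-ins.

def get_model_settings(state, model_type):
    # One settings dict per model type, kept under state["all_settings"].
    all_settings = state.get("all_settings")
    return None if all_settings is None else all_settings.get(model_type)

def set_model_settings(state, model_type, settings):
    state.setdefault("all_settings", {})[model_type] = settings

# Typical round trip: read the stored settings (or fall back to defaults),
# update them from the form, write them back. Each model type keeps its own
# copy, which is what allows switching models without losing the previous
# model's settings.
state = {}
defaults = {"resolution": "832x480", "num_inference_steps": 30}  # stand-in defaults
settings = get_model_settings(state, "t2v") or dict(defaults)
settings["resolution"] = "1280x720"
set_model_settings(state, "t2v", settings)
assert get_model_settings(state, "t2v")["resolution"] == "1280x720"
```
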
lset_name.select(fn=update_lset_type, inputs=[state, lset_name], outputs=save_lset_prompt_drop) export_settings_from_file_btn.click(fn=validate_wizard_prompt, inputs= [state, wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, *prompt_vars] , outputs= [prompt] @@ -5883,6 +6729,11 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non ).then(fn=load_settings_from_file, inputs =[state, settings_file] , outputs= [model_choice, refresh_form_trigger, settings_file]) + fill_wizard_prompt_trigger.change( + fn = fill_wizard_prompt, inputs = [state, wizard_prompt_activated_var, prompt, wizard_prompt], outputs = [ wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, prompt_column_advanced, prompt_column_wizard, prompt_column_wizard_vars, *prompt_vars] + ) + + refresh_form_trigger.change(fn= fill_inputs, inputs=[state], outputs=gen_inputs + extra_inputs @@ -5906,8 +6757,10 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non ).then(fn= preload_model_when_switching, inputs=[state], outputs=[gen_status]) + + generate_btn.click(fn = init_generate, inputs = [state, output, last_choice], outputs=[generate_trigger, mode]) - generate_btn.click(fn=validate_wizard_prompt, + generate_trigger.change(fn=validate_wizard_prompt, inputs= [state, wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, *prompt_vars] , outputs= [prompt] ).then(fn=save_inputs, @@ -5918,7 +6771,7 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non outputs= queue_df ).then(fn=prepare_generate_video, inputs= [state], - outputs= [generate_btn, add_to_queue_btn, current_gen_column] + outputs= [generate_btn, add_to_queue_btn, current_gen_column, current_gen_buttons_row] ).then(fn=activate_status, inputs= [state], outputs= [status_trigger], @@ -6031,7 +6884,9 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non ) - add_to_queue_btn.click(fn=validate_wizard_prompt, + add_to_queue_btn.click(fn = lambda : (get_unique_id(), ""), inputs = None, outputs=[add_to_queue_trigger, mode]) + # gr.on(triggers=[add_to_queue_btn.click, add_to_queue_trigger.change],fn=validate_wizard_prompt, + add_to_queue_trigger.change(fn=validate_wizard_prompt, inputs =[state, wizard_prompt_activated_var, wizard_variables_var, prompt, wizard_prompt, *prompt_vars] , outputs= [prompt] ).then(fn=save_inputs, @@ -6056,8 +6911,8 @@ def generate_video_tab(update_form = False, state_dict = None, ui_defaults = Non ) return ( state, loras_choices, lset_name, state, - video_guide, video_mask, image_refs, video_prompt_video_guide_trigger, prompt_enhancer - ) + video_guide, video_mask, image_refs, prompt_enhancer_row, mmaudio_tab + ) def generate_download_tab(lset_name,loras_choices, state): @@ -6074,7 +6929,7 @@ def generate_download_tab(lset_name,loras_choices, state): download_loras_btn.click(fn=download_loras, inputs=[], outputs=[download_status_row, download_status]).then(fn=refresh_lora_list, inputs=[state, lset_name,loras_choices], outputs=[lset_name, loras_choices]) -def generate_configuration_tab(state, blocks, header, model_choice, prompt_enhancer_row): +def generate_configuration_tab(state, blocks, header, model_choice, prompt_enhancer_row, mmaudio_tab): gr.Markdown("Please click Apply Changes at the bottom so that the changes are effective. 
Some choices below may be locked if the app has been launched by specifying a config preset.") with gr.Column(): with gr.Tabs(): @@ -6114,7 +6969,7 @@ def generate_configuration_tab(state, blocks, header, model_choice, prompt_enhan ("Flash" + check("flash")+ ": good quality - requires additional install (usually complex to set up on Windows without WSL)", "flash"), ("Xformers" + check("xformers")+ ": good quality - requires additional install (usually complex, may consume less VRAM to set up on Windows without WSL)", "xformers"), ("Sage" + check("sage")+ ": 30% faster but slightly worse quality - requires additional install (usually complex to set up on Windows without WSL)", "sage"), - ("Sage2" + check("sage2")+ ": 40% faster but slightly worse quality - requires additional install (usually complex to set up on Windows without WSL)", "sage2"), + ("Sage2/2++" + check("sage2")+ ": 40% faster but slightly worse quality - requires additional install (usually complex to set up on Windows without WSL)", "sage2"), ], value= attention_mode, label="Attention Type", @@ -6149,15 +7004,6 @@ def generate_configuration_tab(state, blocks, header, model_choice, prompt_enhan label="Keep Previously Generated Videos when starting a new Generation Batch" ) - enhancer_enabled_choice = gr.Dropdown( - choices=[ - ("On", 1), - ("Off", 0), - ], - value=server_config.get("enhancer_enabled", 0), - label="Prompt Enhancer (if enabled, 8 GB of extra models will be downloaded)" - ) - UI_theme_choice = gr.Dropdown( choices=[ ("Blue Sky", "default"), @@ -6274,6 +7120,25 @@ def generate_configuration_tab(state, blocks, header, model_choice, prompt_enhan label="Profile (for power users only, not needed to change it)" ) preload_in_VRAM_choice = gr.Slider(0, 40000, value=server_config.get("preload_in_VRAM", 0), step=100, label="Number of MB of Models that are Preloaded in VRAM (0 will use Profile default)") + with gr.Tab("Extensions"): + enhancer_enabled_choice = gr.Dropdown( + choices=[ + ("Off", 0), + ("On", 1), + ], + value=server_config.get("enhancer_enabled", 0), + label="Prompt Enhancer (if enabled, 8 GB of extra models will be downloaded)" + ) + + mmaudio_enabled_choice = gr.Dropdown( + choices=[ + ("Off", 0), + ("Turned On but unloaded from RAM after usage", 1), + ("Turned On and kept in RAM for fast loading", 2), + ], + value=server_config.get("mmaudio_enabled", 0), + label="MMAudio (if enabled, 10 GB of extra models will be downloaded)" + ) with gr.Tab("Notifications"): gr.Markdown("### Notification Settings") @@ -6319,13 +7184,14 @@ def generate_configuration_tab(state, blocks, header, model_choice, prompt_enhan preload_model_policy_choice, UI_theme_choice, enhancer_enabled_choice, + mmaudio_enabled_choice, fit_canvas_choice, preload_in_VRAM_choice, depth_anything_v2_variant_choice, notification_sound_enabled_choice, notification_sound_volume_choice ], - outputs= [msg , header, model_choice, prompt_enhancer_row] + outputs= [msg , header, model_choice, prompt_enhancer_row, mmaudio_tab] ) def generate_about_tab(): @@ -6512,6 +7378,21 @@ def get_js(): def create_ui(): global vmc_event_handler css = """ + .postprocess div, + .postprocess span, + .postprocess label, + .postprocess input, + .postprocess select, + .postprocess textarea { + font-size: 12px !important; + padding: 0px !important; + border: 5px !important; + border-radius: 0px !important; + --form-gap-width: 0px !important; + box-shadow: none !important; + --layout-gap: 0px !important; + } + .postprocess span {margin-top:4px;margin-bottom:4px} #model_list{ 
background-color:black; padding:1px} @@ -6836,17 +7717,17 @@ def create_ui(): header = gr.Markdown(generate_header(transformer_type, compile, attention_mode), visible= True) with gr.Row(): ( state, loras_choices, lset_name, state, - video_guide, video_mask, image_refs, video_prompt_type_video_trigger, prompt_enhancer_row + video_guide, video_mask, image_refs, prompt_enhancer_row, mmaudio_tab ) = generate_video_tab(model_choice=model_choice, header=header, main = main) with gr.Tab("Guides", id="info") as info_tab: generate_info_tab() with gr.Tab("Video Mask Creator", id="video_mask_creator") as video_mask_creator: - matanyone_app.display(main_tabs, tab_state, model_choice, video_guide, video_mask, image_refs, video_prompt_type_video_trigger) + matanyone_app.display(main_tabs, tab_state, model_choice, video_guide, video_mask, image_refs) if not args.lock_config: with gr.Tab("Downloads", id="downloads") as downloads_tab: generate_download_tab(lset_name, loras_choices, state) with gr.Tab("Configuration", id="configuration") as configuration_tab: - generate_configuration_tab(state, main, header, model_choice, prompt_enhancer_row) + generate_configuration_tab(state, main, header, model_choice, prompt_enhancer_row, mmaudio_tab) with gr.Tab("About"): generate_about_tab()
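
For reference, here is a minimal, self-contained Gradio sketch (not taken from the patch) of the hidden-trigger pattern the new wiring relies on: the visible Generate button only writes a fresh unique id into a hidden `gr.Text`, and the actual pipeline hangs off that Text's `change` event. This is what lets `apply_post_processing()` start or queue an "edit" job by returning a new trigger value instead of clicking the button. Component and function names below are illustrative stand-ins.

```python
# Minimal sketch of the hidden-trigger pattern: a button bumps a hidden Text,
# and the pipeline is wired to that Text's .change event, so any code path can
# start the same pipeline by producing a fresh unique id.
import time
import gradio as gr

def get_unique_id():
    # Stand-in helper: any changing value works, since the hidden Text only
    # has to fire its .change event.
    return str(time.time())

def run_pipeline(mode):
    return f"pipeline started (mode={mode!r})"

with gr.Blocks() as demo:
    mode = gr.Text(value="", visible=False)
    generate_trigger = gr.Text(value="", visible=False)
    status = gr.Text(label="Status")
    generate_btn = gr.Button("Generate")

    # The click handler only produces a new trigger value and resets the mode...
    generate_btn.click(fn=lambda: (get_unique_id(), ""), inputs=None,
                       outputs=[generate_trigger, mode])
    # ...and the heavy work hangs off the trigger, so anything that writes a new
    # value into generate_trigger (a button, a post-processing handler, ...)
    # starts it with whatever mode was set alongside.
    generate_trigger.change(fn=run_pipeline, inputs=[mode], outputs=[status])

demo.launch()
```
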