Mirror of https://github.com/Wan-Video/Wan2.1.git, synced 2025-11-04 14:16:57 +00:00

Commit 28fc48db2d (parent e0666a3e6d): stuff and more stuff

Custom Resolutions Instructions.txt (new file, 16 lines)
@@ -0,0 +1,16 @@
You can override the resolutions offered by WanGP by creating a file named "resolutions.json" in the main WanGP folder.

This file contains a list of two-element sublists. Each sublist has the format ["Label", "WxH"], where W and H are respectively the width and height of the resolution. Make sure that W and H are multiples of 16, and place the letter "x" between the two dimensions.

Below is a sample "resolutions.json" file:

[
    ["1280x720 (16:9, 720p)", "1280x720"],
    ["720x1280 (9:16, 720p)", "720x1280"],
    ["1024x1024 (1:1, 720p)", "1024x1024"],
    ["1280x544 (21:9, 720p)", "1280x544"],
    ["544x1280 (9:21, 720p)", "544x1280"],
    ["1104x832 (4:3, 720p)", "1104x832"],
    ["832x1104 (3:4, 720p)", "832x1104"],
    ["960x960 (1:1, 720p)", "960x960"],
    ["832x480 (16:9, 480p)", "832x480"]
]
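If you want to sanity-check the file before launching WanGP, a minimal sketch along these lines can load and validate it (the helper name and the check are illustrative, not part of WanGP):

```python
import json
from pathlib import Path

def load_resolutions(path="resolutions.json"):
    """Load a WanGP-style resolutions file and verify each entry (sketch)."""
    entries = json.loads(Path(path).read_text())
    for label, dims in entries:
        width, height = (int(v) for v in dims.lower().split("x"))
        # Both dimensions are expected to be multiples of 16.
        if width % 16 or height % 16:
            raise ValueError(f"{label}: {width}x{height} is not a multiple of 16")
    return entries

print(load_resolutions())
```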
README.md (17 lines changed)
@@ -11,7 +11,7 @@ WanGP supports the Wan (and derived models), Hunyuan Video and LTX Video models
- Very Fast on the latest GPUs
- Easy to use Full Web based interface
- Auto download of the required model adapted to your specific architecture
- Tools integrated to facilitate Video Generation: Mask Editor, Prompt Enhancer, Temporal and Spatial Generation
- Tools integrated to facilitate Video Generation: Mask Editor, Prompt Enhancer, Temporal and Spatial Generation, MMAudio, Vew
- Loras Support to customize each model
- Queuing system: make your shopping list of videos to generate and come back later

@@ -20,6 +20,21 @@ WanGP supports the Wan (and derived models), Hunyuan Video and LTX Video models
**Follow DeepBeepMeep on Twitter/X to get the Latest News**: https://x.com/deepbeepmeep

## 🔥 Latest Updates
### July 2 2025: WanGP v6.5, WanGP takes care of you: lots of quality-of-life features:
- View directly inside WanGP the properties (seed, resolution, length, most settings...) of past generations
- In one click, use the newly generated video as a Control Video or Source Video to be continued
- Manage multiple settings for the same model and switch between them using a dropdown box
- WanGP will keep the last generated videos in the Gallery and will remember the last model you used if you restart the app but kept the Web page open

Taking care of your life is not enough; you want new stuff to play with?
- MMAudio directly inside WanGP: add an audio soundtrack that matches the content of your video. This is a low-VRAM MMAudio, and 6 GB of VRAM should be sufficient
- Forgot to upsample your video during generation? Want to try another MMAudio variation? Fear not: you can also apply upsampling or add an MMAudio track once the video generation is done. Even better, you can ask WanGP for multiple MMAudio variations and pick the one you like best
- MagCache support: a new step-skipping approach, supposed to be better than TeaCache. It makes a difference if you usually generate with a high number of steps
- SageAttention2++ support: not just compatibility, but also slightly reduced VRAM usage
- Video2Video in Wan Text2Video: this is the paradox, a text2video model becomes a video2video model if you start the denoising process later on an existing video (see the sketch after this list)
- FusioniX upsampler: an illustration of Video2Video in Text2Video. Use the FusioniX text2video model with an output resolution of 1080p and a denoising strength of 0.25 and you will get one of the best upsamplers (in only 2/3 steps, though you will need lots of VRAM). Increase the denoising strength and you will get one of the best video restorers
- Preliminary support for multiple Wan Samplers
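The Video2Video-in-Text2Video trick above boils down to encoding an existing video into latents, noising them only partially, and running just the tail of the denoising schedule. A rough sketch of that idea is below; it is illustrative only, not WanGP's actual code, and the blend formula is an assumption in the style of flow-matching samplers:

```python
import torch

def start_denoising_later(video_latents: torch.Tensor,
                          sigmas: torch.Tensor,
                          denoising_strength: float = 0.25):
    """Noise encoded video latents so sampling can resume part-way through the schedule."""
    num_steps = len(sigmas)
    # A strength of 0.25 means only the last ~25% of the steps are actually run.
    start_step = int(num_steps * (1.0 - denoising_strength))
    sigma = sigmas[start_step]
    noise = torch.randn_like(video_latents)
    # Blend clean latents with noise at the chosen noise level (assumed parameterization).
    noisy_latents = (1.0 - sigma) * video_latents + sigma * noise
    return noisy_latents, start_step
```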
### June 23 2025: WanGP v6.3, Vace Unleashed. Thought we couldn't squeeze Vace even more?
- Multithreaded preprocessing when possible for faster generations
- Multithreaded Lanczos frame upsampling as a bonus
@@ -206,17 +206,38 @@ https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_T2V_14B_lightx2v_cfg

## Macro System (Advanced)

Create multiple prompts from templates using macros:
Create multiple prompts from templates using macros. This allows you to generate variations of a sentence by defining lists of values for different variables.

**Syntax Rule:**

Define your variables on a single line starting with `!`. Each complete variable definition, including its name and values, **must be separated by a colon (`:`)**.

**Format:**

```
! {Subject}="cat","woman","man", {Location}="forest","lake","city", {Possessive}="its","her","his"
! {Variable1}="valueA","valueB" : {Variable2}="valueC","valueD"
This is a template using {Variable1} and {Variable2}.
```

**Example:**

The following macro will generate three distinct prompts by cycling through the values for each variable.

**Macro Definition:**

```
! {Subject}="cat","woman","man" : {Location}="forest","lake","city" : {Possessive}="its","her","his"
In the video, a {Subject} is presented. The {Subject} is in a {Location} and looks at {Possessive} watch.
```

This generates:
1. "In the video, a cat is presented. The cat is in a forest and looks at its watch."
2. "In the video, a woman is presented. The woman is in a lake and looks at her watch."
3. "In the video, a man is presented. The man is in a city and looks at his watch."

**Generated Output:**

```
In the video, a cat is presented. The cat is in a forest and looks at its watch.
In the video, a woman is presented. The woman is in a lake and looks at her watch.
In the video, a man is presented. The man is in a city and looks at his watch.
```
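If you want to preview the prompt list a macro will produce, the expansion is easy to reproduce outside WanGP. Below is a minimal sketch (a hypothetical helper, not WanGP's implementation) that parses the `!` line and cycles through the values:

```python
import re

def expand_macro(macro_line: str, template: str) -> list[str]:
    """Expand a '! {Var}="a","b" : {Var2}="c","d"' macro line against a template (sketch)."""
    variables = {}
    for definition in macro_line.lstrip("! ").split(":"):
        name = re.search(r"\{(\w+)\}", definition).group(1)
        values = re.findall(r'"([^"]*)"', definition)
        variables[name] = values
    count = min(len(v) for v in variables.values())
    prompts = []
    for i in range(count):
        prompt = template
        for name, values in variables.items():
            # Substitute the i-th value of every variable into the template.
            prompt = prompt.replace("{" + name + "}", values[i])
        prompts.append(prompt)
    return prompts

macro = '! {Subject}="cat","woman","man" : {Location}="forest","lake","city" : {Possessive}="its","her","his"'
template = ("In the video, a {Subject} is presented. "
            "The {Subject} is in a {Location} and looks at {Possessive} watch.")
print("\n".join(expand_macro(macro, template)))
```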
## Troubleshooting
@@ -949,11 +949,15 @@ class HunyuanVideoPipeline(DiffusionPipeline):
|
||||
# width = width or self.transformer.config.sample_size * self.vae_scale_factor
|
||||
# to deal with lora scaling and other possible forward hooks
|
||||
trans = self.transformer
|
||||
if trans.enable_cache:
|
||||
teacache_multiplier = trans.teacache_multiplier
|
||||
if trans.enable_cache == "tea":
|
||||
teacache_multiplier = trans.cache_multiplier
|
||||
trans.accumulated_rel_l1_distance = 0
|
||||
trans.rel_l1_thresh = 0.1 if teacache_multiplier < 2 else 0.15
|
||||
# trans.cache_start_step = int(tea_cache_start_step_perc*num_inference_steps/100)
|
||||
elif trans.enable_cache == "mag":
|
||||
trans.compute_magcache_threshold(trans.cache_start_step, num_inference_steps, trans.cache_multiplier)
|
||||
trans.accumulated_err, trans.accumulated_steps, trans.accumulated_ratio = 0, 0, 1.0
|
||||
else:
|
||||
trans.enable_cache = None
|
||||
# 1. Check inputs. Raise error if not correct
|
||||
self.check_inputs(
|
||||
prompt,
|
||||
|
||||
@@ -934,10 +934,15 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline):
|
||||
|
||||
transformer = self.transformer
|
||||
|
||||
if transformer.enable_cache:
|
||||
teacache_multiplier = transformer.teacache_multiplier
|
||||
if transformer.enable_cache == "tea":
|
||||
teacache_multiplier = transformer.cache_multiplier
|
||||
transformer.accumulated_rel_l1_distance = 0
|
||||
transformer.rel_l1_thresh = 0.1 if teacache_multiplier < 2 else 0.15
|
||||
elif transformer.enable_cache == "mag":
|
||||
transformer.compute_magcache_threshold(transformer.cache_start_step, num_inference_steps, transformer.cache_multiplier)
|
||||
transformer.accumulated_err, transformer.accumulated_steps, transformer.accumulated_ratio = 0, 0, 1.0
|
||||
else:
|
||||
transformer.enable_cache = None
|
||||
|
||||
# 1. Check inputs. Raise error if not correct
|
||||
self.check_inputs(
|
||||
@@ -1136,7 +1141,7 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline):
|
||||
if self._interrupt:
|
||||
return [None]
|
||||
|
||||
if transformer.enable_cache:
|
||||
if transformer.enable_cache == "tea":
|
||||
cache_size = round( infer_length / frames_per_batch )
|
||||
transformer.previous_residual = [None] * latent_items
|
||||
cache_all_previous_residual = [None] * latent_items
|
||||
@@ -1144,6 +1149,8 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline):
|
||||
cache_should_calc = [True] * cache_size
|
||||
cache_accumulated_rel_l1_distance = [0.] * cache_size
|
||||
cache_teacache_skipped_steps = [0] * cache_size
|
||||
elif transformer.enable_cache == "mag":
|
||||
transformer.previous_residual = [None] * latent_items
|
||||
|
||||
|
||||
with self.progress_bar(total=num_inference_steps) as progress_bar:
|
||||
@@ -1180,7 +1187,7 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline):
|
||||
img_ref_len = (latent_model_input.shape[-1] // 2) * (latent_model_input.shape[-2] // 2) * ( 1)
|
||||
img_all_len = (latents_all.shape[-1] // 2) * (latents_all.shape[-2] // 2) * latents_all.shape[-3]
|
||||
|
||||
if transformer.enable_cache and cache_size > 1:
|
||||
if transformer.enable_cache == "tea" and cache_size > 1:
|
||||
for l in range(latent_items):
|
||||
if cache_all_previous_residual[l] != None:
|
||||
bsz = cache_all_previous_residual[l].shape[0]
|
||||
@@ -1297,7 +1304,7 @@ class HunyuanVideoAudioPipeline(DiffusionPipeline):
|
||||
pred_latents[:, :, p] += latents[:, :, iii]
|
||||
counter[:, :, p] += 1
|
||||
|
||||
if transformer.enable_cache and cache_size > 1:
|
||||
if transformer.enable_cache == "tea" and cache_size > 1:
|
||||
for l in range(latent_items):
|
||||
if transformer.previous_residual[l] != None:
|
||||
bsz = transformer.previous_residual[l].shape[0]
|
||||
|
||||
@@ -494,7 +494,7 @@ class MMSingleStreamBlock(nn.Module):
|
||||
|
||||
class HYVideoDiffusionTransformer(ModelMixin, ConfigMixin):
|
||||
def preprocess_loras(self, model_type, sd):
|
||||
if model_type != "i2v" :
|
||||
if model_type != "hunyuan_i2v" :
|
||||
return sd
|
||||
new_sd = {}
|
||||
for k,v in sd.items():
|
||||
@@ -797,6 +797,59 @@ class HYVideoDiffusionTransformer(ModelMixin, ConfigMixin):
|
||||
for block in self.single_blocks:
|
||||
block.disable_deterministic()
|
||||
|
||||
def compute_magcache_threshold(self, start_step, num_inference_steps = 0, speed_factor =0):
|
||||
def nearest_interp(src_array, target_length):
|
||||
src_length = len(src_array)
|
||||
if target_length == 1:
|
||||
return np.array([src_array[-1]])
|
||||
scale = (src_length - 1) / (target_length - 1)
|
||||
mapped_indices = np.round(np.arange(target_length) * scale).astype(int)
|
||||
return src_array[mapped_indices]
|
||||
|
||||
if len(self.def_mag_ratios) != num_inference_steps:
|
||||
self.mag_ratios = nearest_interp(self.def_mag_ratios, num_inference_steps)
|
||||
else:
|
||||
self.mag_ratios = self.def_mag_ratios
|
||||
|
||||
best_deltas = None
|
||||
best_threshold = 0.01
|
||||
best_diff = 1000
|
||||
best_signed_diff = 1000
|
||||
target_nb_steps= int(num_inference_steps / speed_factor)
|
||||
threshold = 0.01
|
||||
while threshold <= 0.6:
|
||||
nb_steps = 0
|
||||
diff = 1000
|
||||
accumulated_err, accumulated_steps, accumulated_ratio = 0, 0, 1.0
|
||||
for i in range(num_inference_steps):
|
||||
if i<=start_step:
|
||||
skip = False
|
||||
else:
|
||||
cur_mag_ratio = self.mag_ratios[i] # conditional and unconditional in one list
|
||||
accumulated_ratio *= cur_mag_ratio # magnitude ratio between current step and the cached step
|
||||
accumulated_steps += 1 # skip steps plus 1
|
||||
cur_skip_err = np.abs(1-accumulated_ratio) # skip error of current steps
|
||||
accumulated_err += cur_skip_err # accumulated error of multiple steps
|
||||
if accumulated_err<threshold and accumulated_steps<=self.magcache_K:
|
||||
skip = True
|
||||
else:
|
||||
skip = False
|
||||
accumulated_err, accumulated_steps, accumulated_ratio = 0, 0, 1.0
|
||||
if not skip:
|
||||
nb_steps += 1
|
||||
signed_diff = target_nb_steps - nb_steps
|
||||
diff = abs(signed_diff)
|
||||
if diff < best_diff:
|
||||
best_threshold = threshold
|
||||
best_diff = diff
|
||||
best_signed_diff = signed_diff
|
||||
elif diff > best_diff:
|
||||
break
|
||||
threshold += 0.01
|
||||
self.magcache_thresh = best_threshold
|
||||
print(f"Mag Cache, best threshold found:{best_threshold:0.2f} with gain x{num_inference_steps/(target_nb_steps - best_signed_diff):0.2f} for a target of x{speed_factor}")
|
||||
return best_threshold
|
||||
|
||||
def forward(
|
||||
self,
|
||||
x: torch.Tensor,
|
||||
@@ -925,25 +978,38 @@ class HYVideoDiffusionTransformer(ModelMixin, ConfigMixin):
|
||||
if self.enable_cache:
|
||||
if x_id == 0:
|
||||
self.should_calc = True
|
||||
inp = img[0:1]
|
||||
vec_ = vec[0:1]
|
||||
( img_mod1_shift, img_mod1_scale, _ , _ , _ , _ , ) = self.double_blocks[0].img_mod(vec_).chunk(6, dim=-1)
|
||||
normed_inp = self.double_blocks[0].img_norm1(inp)
|
||||
normed_inp = normed_inp.to(torch.bfloat16)
|
||||
modulated_inp = modulate( normed_inp, shift=img_mod1_shift, scale=img_mod1_scale )
|
||||
del normed_inp, img_mod1_shift, img_mod1_scale
|
||||
if step_no <= self.cache_start_step or step_no == self.num_steps-1:
|
||||
self.accumulated_rel_l1_distance = 0
|
||||
if self.enable_cache == "mag":
|
||||
if step_no > self.cache_start_step:
|
||||
cur_mag_ratio = self.mag_ratios[step_no]
|
||||
self.accumulated_ratio = self.accumulated_ratio*cur_mag_ratio
|
||||
cur_skip_err = np.abs(1-self.accumulated_ratio)
|
||||
self.accumulated_err += cur_skip_err
|
||||
self.accumulated_steps += 1
|
||||
if self.accumulated_err<=self.magcache_thresh and self.accumulated_steps<=self.magcache_K:
|
||||
self.should_calc = False
|
||||
self.cache_skipped_steps += 1
|
||||
else:
|
||||
self.accumulated_ratio, self.accumulated_steps, self.accumulated_err = 1.0, 0, 0
|
||||
else:
|
||||
coefficients = [7.33226126e+02, -4.01131952e+02, 6.75869174e+01, -3.14987800e+00, 9.61237896e-02]
|
||||
rescale_func = np.poly1d(coefficients)
|
||||
self.accumulated_rel_l1_distance += rescale_func(((modulated_inp-self.previous_modulated_input).abs().mean() / self.previous_modulated_input.abs().mean()).cpu().item())
|
||||
if self.accumulated_rel_l1_distance < self.rel_l1_thresh:
|
||||
self.should_calc = False
|
||||
self.teacache_skipped_steps += 1
|
||||
else:
|
||||
inp = img[0:1]
|
||||
vec_ = vec[0:1]
|
||||
( img_mod1_shift, img_mod1_scale, _ , _ , _ , _ , ) = self.double_blocks[0].img_mod(vec_).chunk(6, dim=-1)
|
||||
normed_inp = self.double_blocks[0].img_norm1(inp)
|
||||
normed_inp = normed_inp.to(torch.bfloat16)
|
||||
modulated_inp = modulate( normed_inp, shift=img_mod1_shift, scale=img_mod1_scale )
|
||||
del normed_inp, img_mod1_shift, img_mod1_scale
|
||||
if step_no <= self.cache_start_step or step_no == self.num_steps-1:
|
||||
self.accumulated_rel_l1_distance = 0
|
||||
self.previous_modulated_input = modulated_inp
|
||||
else:
|
||||
coefficients = [7.33226126e+02, -4.01131952e+02, 6.75869174e+01, -3.14987800e+00, 9.61237896e-02]
|
||||
rescale_func = np.poly1d(coefficients)
|
||||
self.accumulated_rel_l1_distance += rescale_func(((modulated_inp-self.previous_modulated_input).abs().mean() / self.previous_modulated_input.abs().mean()).cpu().item())
|
||||
if self.accumulated_rel_l1_distance < self.rel_l1_thresh:
|
||||
self.should_calc = False
|
||||
self.cache_skipped_steps += 1
|
||||
else:
|
||||
self.accumulated_rel_l1_distance = 0
|
||||
self.previous_modulated_input = modulated_inp
|
||||
else:
|
||||
self.should_calc = True
|
||||
|
||||
|
||||
@@ -584,10 +584,10 @@ def main():
|
||||
# If teacache => reset counters
|
||||
if trans.enable_cache:
|
||||
trans.teacache_counter = 0
|
||||
trans.teacache_multiplier = args.teacache
|
||||
trans.cache_multiplier = args.teacache
|
||||
trans.cache_start_step = int(args.teacache_start * args.steps / 100.0)
|
||||
trans.num_steps = args.steps
|
||||
trans.teacache_skipped_steps = 0
|
||||
trans.cache_skipped_steps = 0
|
||||
trans.previous_residual_uncond = None
|
||||
trans.previous_residual_cond = None
|
||||
|
||||
|
||||
postprocessing/mmaudio/__init__.py (new file, 0 lines)
postprocessing/mmaudio/data/__init__.py (new file, 0 lines)
postprocessing/mmaudio/data/av_utils.py (new file, 188 lines)
@@ -0,0 +1,188 @@
from dataclasses import dataclass
|
||||
from fractions import Fraction
|
||||
from pathlib import Path
|
||||
from typing import Optional
|
||||
|
||||
import av
|
||||
import numpy as np
|
||||
import torch
|
||||
from av import AudioFrame
|
||||
|
||||
|
||||
@dataclass
|
||||
class VideoInfo:
|
||||
duration_sec: float
|
||||
fps: Fraction
|
||||
clip_frames: torch.Tensor
|
||||
sync_frames: torch.Tensor
|
||||
all_frames: Optional[list[np.ndarray]]
|
||||
|
||||
@property
|
||||
def height(self):
|
||||
return self.all_frames[0].shape[0]
|
||||
|
||||
@property
|
||||
def width(self):
|
||||
return self.all_frames[0].shape[1]
|
||||
|
||||
@classmethod
|
||||
def from_image_info(cls, image_info: 'ImageInfo', duration_sec: float,
|
||||
fps: Fraction) -> 'VideoInfo':
|
||||
num_frames = int(duration_sec * fps)
|
||||
all_frames = [image_info.original_frame] * num_frames
|
||||
return cls(duration_sec=duration_sec,
|
||||
fps=fps,
|
||||
clip_frames=image_info.clip_frames,
|
||||
sync_frames=image_info.sync_frames,
|
||||
all_frames=all_frames)
|
||||
|
||||
|
||||
@dataclass
|
||||
class ImageInfo:
|
||||
clip_frames: torch.Tensor
|
||||
sync_frames: torch.Tensor
|
||||
original_frame: Optional[np.ndarray]
|
||||
|
||||
@property
|
||||
def height(self):
|
||||
return self.original_frame.shape[0]
|
||||
|
||||
@property
|
||||
def width(self):
|
||||
return self.original_frame.shape[1]
|
||||
|
||||
|
||||
def read_frames(video_path: Path, list_of_fps: list[float], start_sec: float, end_sec: float,
|
||||
need_all_frames: bool) -> tuple[list[np.ndarray], list[np.ndarray], Fraction]:
|
||||
output_frames = [[] for _ in list_of_fps]
|
||||
next_frame_time_for_each_fps = [0.0 for _ in list_of_fps]
|
||||
time_delta_for_each_fps = [1 / fps for fps in list_of_fps]
|
||||
all_frames = []
|
||||
|
||||
# container = av.open(video_path)
|
||||
with av.open(video_path) as container:
|
||||
stream = container.streams.video[0]
|
||||
fps = stream.guessed_rate
|
||||
stream.thread_type = 'AUTO'
|
||||
for packet in container.demux(stream):
|
||||
for frame in packet.decode():
|
||||
frame_time = frame.time
|
||||
if frame_time < start_sec:
|
||||
continue
|
||||
if frame_time > end_sec:
|
||||
break
|
||||
|
||||
frame_np = None
|
||||
if need_all_frames:
|
||||
frame_np = frame.to_ndarray(format='rgb24')
|
||||
all_frames.append(frame_np)
|
||||
|
||||
for i, _ in enumerate(list_of_fps):
|
||||
this_time = frame_time
|
||||
while this_time >= next_frame_time_for_each_fps[i]:
|
||||
if frame_np is None:
|
||||
frame_np = frame.to_ndarray(format='rgb24')
|
||||
|
||||
output_frames[i].append(frame_np)
|
||||
next_frame_time_for_each_fps[i] += time_delta_for_each_fps[i]
|
||||
|
||||
output_frames = [np.stack(frames) for frames in output_frames]
|
||||
return output_frames, all_frames, fps
|
||||
|
||||
|
||||
def reencode_with_audio(video_info: VideoInfo, output_path: Path, audio: torch.Tensor,
|
||||
sampling_rate: int):
|
||||
container = av.open(output_path, 'w')
|
||||
output_video_stream = container.add_stream('h264', video_info.fps)
|
||||
output_video_stream.codec_context.bit_rate = 10 * 1e6 # 10 Mbps
|
||||
output_video_stream.width = video_info.width
|
||||
output_video_stream.height = video_info.height
|
||||
output_video_stream.pix_fmt = 'yuv420p'
|
||||
|
||||
output_audio_stream = container.add_stream('aac', sampling_rate)
|
||||
|
||||
# encode video
|
||||
for image in video_info.all_frames:
|
||||
image = av.VideoFrame.from_ndarray(image)
|
||||
packet = output_video_stream.encode(image)
|
||||
container.mux(packet)
|
||||
|
||||
for packet in output_video_stream.encode():
|
||||
container.mux(packet)
|
||||
|
||||
# convert float tensor audio to numpy array
|
||||
audio_np = audio.numpy().astype(np.float32)
|
||||
audio_frame = AudioFrame.from_ndarray(audio_np, format='flt', layout='mono')
|
||||
audio_frame.sample_rate = sampling_rate
|
||||
|
||||
for packet in output_audio_stream.encode(audio_frame):
|
||||
container.mux(packet)
|
||||
|
||||
for packet in output_audio_stream.encode():
|
||||
container.mux(packet)
|
||||
|
||||
container.close()
|
||||
|
||||
|
||||
|
||||
import subprocess
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
import torch
|
||||
|
||||
def remux_with_audio(video_path: Path, output_path: Path, audio: torch.Tensor, sampling_rate: int):
|
||||
"""Remux video with new audio using FFmpeg."""
|
||||
with tempfile.NamedTemporaryFile(suffix='.wav', delete=False) as f:
|
||||
temp_path = Path(f.name)
|
||||
|
||||
try:
|
||||
# Write audio as WAV
|
||||
import torchaudio
|
||||
torchaudio.save(str(temp_path), audio.unsqueeze(0) if audio.dim() == 1 else audio, sampling_rate)
|
||||
|
||||
# Remux with FFmpeg
|
||||
subprocess.run([
|
||||
'ffmpeg', '-i', str(video_path), '-i', str(temp_path),
|
||||
'-c:v', 'copy', '-c:a', 'aac', '-map', '0:v', '-map', '1:a',
|
||||
'-shortest', '-y', str(output_path)
|
||||
], check=True, capture_output=True)
|
||||
|
||||
finally:
|
||||
temp_path.unlink(missing_ok=True)
|
||||
|
||||
def remux_with_audio_old(video_path: Path, audio: torch.Tensor, output_path: Path, sampling_rate: int):
|
||||
"""
|
||||
NOTE: I don't think we can get the exact video duration right without re-encoding
|
||||
so we are not using this but keeping it here for reference
|
||||
"""
|
||||
video = av.open(video_path)
|
||||
output = av.open(output_path, 'w')
|
||||
input_video_stream = video.streams.video[0]
|
||||
output_video_stream = output.add_stream(template=input_video_stream)
|
||||
output_audio_stream = output.add_stream('aac', sampling_rate)
|
||||
|
||||
duration_sec = audio.shape[-1] / sampling_rate
|
||||
|
||||
for packet in video.demux(input_video_stream):
|
||||
# We need to skip the "flushing" packets that `demux` generates.
|
||||
if packet.dts is None:
|
||||
continue
|
||||
# We need to assign the packet to the new stream.
|
||||
packet.stream = output_video_stream
|
||||
output.mux(packet)
|
||||
|
||||
# convert float tensor audio to numpy array
|
||||
audio_np = audio.numpy().astype(np.float32)
|
||||
audio_frame = av.AudioFrame.from_ndarray(audio_np, format='flt', layout='mono')
|
||||
audio_frame.sample_rate = sampling_rate
|
||||
|
||||
for packet in output_audio_stream.encode(audio_frame):
|
||||
output.mux(packet)
|
||||
|
||||
for packet in output_audio_stream.encode():
|
||||
output.mux(packet)
|
||||
|
||||
video.close()
|
||||
output.close()
postprocessing/mmaudio/data/data_setup.py (new file, 174 lines)
@@ -0,0 +1,174 @@
import logging
|
||||
import random
|
||||
|
||||
import numpy as np
|
||||
import torch
|
||||
from omegaconf import DictConfig
|
||||
from torch.utils.data import DataLoader, Dataset
|
||||
from torch.utils.data.dataloader import default_collate
|
||||
from torch.utils.data.distributed import DistributedSampler
|
||||
|
||||
from .eval.audiocaps import AudioCapsData
|
||||
from .eval.video_dataset import MovieGen, VGGSound
|
||||
from .extracted_audio import ExtractedAudio
|
||||
from .extracted_vgg import ExtractedVGG
|
||||
from .mm_dataset import MultiModalDataset
|
||||
from ..utils.dist_utils import local_rank
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
# Re-seed randomness every time we start a worker
|
||||
def worker_init_fn(worker_id: int):
|
||||
worker_seed = torch.initial_seed() % (2**31) + worker_id + local_rank * 1000
|
||||
np.random.seed(worker_seed)
|
||||
random.seed(worker_seed)
|
||||
log.debug(f'Worker {worker_id} re-seeded with seed {worker_seed} in rank {local_rank}')
|
||||
|
||||
|
||||
def load_vgg_data(cfg: DictConfig, data_cfg: DictConfig) -> Dataset:
|
||||
dataset = ExtractedVGG(tsv_path=data_cfg.tsv,
|
||||
data_dim=cfg.data_dim,
|
||||
premade_mmap_dir=data_cfg.memmap_dir)
|
||||
|
||||
return dataset
|
||||
|
||||
|
||||
def load_audio_data(cfg: DictConfig, data_cfg: DictConfig) -> Dataset:
|
||||
dataset = ExtractedAudio(tsv_path=data_cfg.tsv,
|
||||
data_dim=cfg.data_dim,
|
||||
premade_mmap_dir=data_cfg.memmap_dir)
|
||||
|
||||
return dataset
|
||||
|
||||
|
||||
def setup_training_datasets(cfg: DictConfig) -> tuple[Dataset, DistributedSampler, DataLoader]:
|
||||
if cfg.mini_train:
|
||||
vgg = load_vgg_data(cfg, cfg.data.ExtractedVGG_val)
|
||||
audiocaps = load_audio_data(cfg, cfg.data.AudioCaps)
|
||||
dataset = MultiModalDataset([vgg], [audiocaps])
|
||||
if cfg.example_train:
|
||||
video = load_vgg_data(cfg, cfg.data.Example_video)
|
||||
audio = load_audio_data(cfg, cfg.data.Example_audio)
|
||||
dataset = MultiModalDataset([video], [audio])
|
||||
else:
|
||||
# load the largest one first
|
||||
freesound = load_audio_data(cfg, cfg.data.FreeSound)
|
||||
vgg = load_vgg_data(cfg, cfg.data.ExtractedVGG)
|
||||
audiocaps = load_audio_data(cfg, cfg.data.AudioCaps)
|
||||
audioset_sl = load_audio_data(cfg, cfg.data.AudioSetSL)
|
||||
bbcsound = load_audio_data(cfg, cfg.data.BBCSound)
|
||||
clotho = load_audio_data(cfg, cfg.data.Clotho)
|
||||
dataset = MultiModalDataset([vgg] * cfg.vgg_oversample_rate,
|
||||
[audiocaps, audioset_sl, bbcsound, freesound, clotho])
|
||||
|
||||
batch_size = cfg.batch_size
|
||||
num_workers = cfg.num_workers
|
||||
pin_memory = cfg.pin_memory
|
||||
sampler, loader = construct_loader(dataset,
|
||||
batch_size,
|
||||
num_workers,
|
||||
shuffle=True,
|
||||
drop_last=True,
|
||||
pin_memory=pin_memory)
|
||||
|
||||
return dataset, sampler, loader
|
||||
|
||||
|
||||
def setup_test_datasets(cfg):
|
||||
dataset = load_vgg_data(cfg, cfg.data.ExtractedVGG_test)
|
||||
|
||||
batch_size = cfg.batch_size
|
||||
num_workers = cfg.num_workers
|
||||
pin_memory = cfg.pin_memory
|
||||
sampler, loader = construct_loader(dataset,
|
||||
batch_size,
|
||||
num_workers,
|
||||
shuffle=False,
|
||||
drop_last=False,
|
||||
pin_memory=pin_memory)
|
||||
|
||||
return dataset, sampler, loader
|
||||
|
||||
|
||||
def setup_val_datasets(cfg: DictConfig) -> tuple[Dataset, DataLoader, DataLoader]:
|
||||
if cfg.example_train:
|
||||
dataset = load_vgg_data(cfg, cfg.data.Example_video)
|
||||
else:
|
||||
dataset = load_vgg_data(cfg, cfg.data.ExtractedVGG_val)
|
||||
|
||||
val_batch_size = cfg.batch_size
|
||||
val_eval_batch_size = cfg.eval_batch_size
|
||||
num_workers = cfg.num_workers
|
||||
pin_memory = cfg.pin_memory
|
||||
_, val_loader = construct_loader(dataset,
|
||||
val_batch_size,
|
||||
num_workers,
|
||||
shuffle=False,
|
||||
drop_last=False,
|
||||
pin_memory=pin_memory)
|
||||
_, eval_loader = construct_loader(dataset,
|
||||
val_eval_batch_size,
|
||||
num_workers,
|
||||
shuffle=False,
|
||||
drop_last=False,
|
||||
pin_memory=pin_memory)
|
||||
|
||||
return dataset, val_loader, eval_loader
|
||||
|
||||
|
||||
def setup_eval_dataset(dataset_name: str, cfg: DictConfig) -> tuple[Dataset, DataLoader]:
|
||||
if dataset_name.startswith('audiocaps_full'):
|
||||
dataset = AudioCapsData(cfg.eval_data.AudioCaps_full.audio_path,
|
||||
cfg.eval_data.AudioCaps_full.csv_path)
|
||||
elif dataset_name.startswith('audiocaps'):
|
||||
dataset = AudioCapsData(cfg.eval_data.AudioCaps.audio_path,
|
||||
cfg.eval_data.AudioCaps.csv_path)
|
||||
elif dataset_name.startswith('moviegen'):
|
||||
dataset = MovieGen(cfg.eval_data.MovieGen.video_path,
|
||||
cfg.eval_data.MovieGen.jsonl_path,
|
||||
duration_sec=cfg.duration_s)
|
||||
elif dataset_name.startswith('vggsound'):
|
||||
dataset = VGGSound(cfg.eval_data.VGGSound.video_path,
|
||||
cfg.eval_data.VGGSound.csv_path,
|
||||
duration_sec=cfg.duration_s)
|
||||
else:
|
||||
raise ValueError(f'Invalid dataset name: {dataset_name}')
|
||||
|
||||
batch_size = cfg.batch_size
|
||||
num_workers = cfg.num_workers
|
||||
pin_memory = cfg.pin_memory
|
||||
_, loader = construct_loader(dataset,
|
||||
batch_size,
|
||||
num_workers,
|
||||
shuffle=False,
|
||||
drop_last=False,
|
||||
pin_memory=pin_memory,
|
||||
error_avoidance=True)
|
||||
return dataset, loader
|
||||
|
||||
|
||||
def error_avoidance_collate(batch):
|
||||
batch = list(filter(lambda x: x is not None, batch))
|
||||
return default_collate(batch)
|
||||
|
||||
|
||||
def construct_loader(dataset: Dataset,
|
||||
batch_size: int,
|
||||
num_workers: int,
|
||||
*,
|
||||
shuffle: bool = True,
|
||||
drop_last: bool = True,
|
||||
pin_memory: bool = False,
|
||||
error_avoidance: bool = False) -> tuple[DistributedSampler, DataLoader]:
|
||||
train_sampler = DistributedSampler(dataset, rank=local_rank, shuffle=shuffle)
|
||||
train_loader = DataLoader(dataset,
|
||||
batch_size,
|
||||
sampler=train_sampler,
|
||||
num_workers=num_workers,
|
||||
worker_init_fn=worker_init_fn,
|
||||
drop_last=drop_last,
|
||||
persistent_workers=num_workers > 0,
|
||||
pin_memory=pin_memory,
|
||||
collate_fn=error_avoidance_collate if error_avoidance else None)
|
||||
return train_sampler, train_loader
|
||||
postprocessing/mmaudio/data/eval/__init__.py (new file, 0 lines)
postprocessing/mmaudio/data/eval/audiocaps.py (new file, 39 lines)
@@ -0,0 +1,39 @@
import logging
|
||||
import os
|
||||
from collections import defaultdict
|
||||
from pathlib import Path
|
||||
from typing import Union
|
||||
|
||||
import pandas as pd
|
||||
import torch
|
||||
from torch.utils.data.dataset import Dataset
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
class AudioCapsData(Dataset):
|
||||
|
||||
def __init__(self, audio_path: Union[str, Path], csv_path: Union[str, Path]):
|
||||
df = pd.read_csv(csv_path).to_dict(orient='records')
|
||||
|
||||
audio_files = sorted(os.listdir(audio_path))
|
||||
audio_files = set(
|
||||
[Path(f).stem for f in audio_files if f.endswith('.wav') or f.endswith('.flac')])
|
||||
|
||||
self.data = []
|
||||
for row in df:
|
||||
self.data.append({
|
||||
'name': row['name'],
|
||||
'caption': row['caption'],
|
||||
})
|
||||
|
||||
self.audio_path = Path(audio_path)
|
||||
self.csv_path = Path(csv_path)
|
||||
|
||||
log.info(f'Found {len(self.data)} matching audio files in {self.audio_path}')
|
||||
|
||||
def __getitem__(self, idx: int) -> torch.Tensor:
|
||||
return self.data[idx]
|
||||
|
||||
def __len__(self):
|
||||
return len(self.data)
|
||||
postprocessing/mmaudio/data/eval/moviegen.py (new file, 131 lines)
@@ -0,0 +1,131 @@
import json
|
||||
import logging
|
||||
import os
|
||||
from pathlib import Path
|
||||
from typing import Union
|
||||
|
||||
import torch
|
||||
from torch.utils.data.dataset import Dataset
|
||||
from torchvision.transforms import v2
|
||||
from torio.io import StreamingMediaDecoder
|
||||
|
||||
from ...utils.dist_utils import local_rank
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
_CLIP_SIZE = 384
|
||||
_CLIP_FPS = 8.0
|
||||
|
||||
_SYNC_SIZE = 224
|
||||
_SYNC_FPS = 25.0
|
||||
|
||||
|
||||
class MovieGenData(Dataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
video_root: Union[str, Path],
|
||||
sync_root: Union[str, Path],
|
||||
jsonl_root: Union[str, Path],
|
||||
*,
|
||||
duration_sec: float = 10.0,
|
||||
read_clip: bool = True,
|
||||
):
|
||||
self.video_root = Path(video_root)
|
||||
self.sync_root = Path(sync_root)
|
||||
self.jsonl_root = Path(jsonl_root)
|
||||
self.read_clip = read_clip
|
||||
|
||||
videos = sorted(os.listdir(self.video_root))
|
||||
videos = [v[:-4] for v in videos] # remove extensions
|
||||
self.captions = {}
|
||||
|
||||
for v in videos:
|
||||
with open(self.jsonl_root / (v + '.jsonl')) as f:
|
||||
data = json.load(f)
|
||||
self.captions[v] = data['audio_prompt']
|
||||
|
||||
if local_rank == 0:
|
||||
log.info(f'{len(videos)} videos found in {video_root}')
|
||||
|
||||
self.duration_sec = duration_sec
|
||||
|
||||
self.clip_expected_length = int(_CLIP_FPS * self.duration_sec)
|
||||
self.sync_expected_length = int(_SYNC_FPS * self.duration_sec)
|
||||
|
||||
self.clip_augment = v2.Compose([
|
||||
v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC),
|
||||
v2.ToImage(),
|
||||
v2.ToDtype(torch.float32, scale=True),
|
||||
])
|
||||
|
||||
self.sync_augment = v2.Compose([
|
||||
v2.Resize((_SYNC_SIZE, _SYNC_SIZE), interpolation=v2.InterpolationMode.BICUBIC),
|
||||
v2.CenterCrop(_SYNC_SIZE),
|
||||
v2.ToImage(),
|
||||
v2.ToDtype(torch.float32, scale=True),
|
||||
v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
|
||||
])
|
||||
|
||||
self.videos = videos
|
||||
|
||||
def sample(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
video_id = self.videos[idx]
|
||||
caption = self.captions[video_id]
|
||||
|
||||
reader = StreamingMediaDecoder(self.video_root / (video_id + '.mp4'))
|
||||
reader.add_basic_video_stream(
|
||||
frames_per_chunk=int(_CLIP_FPS * self.duration_sec),
|
||||
frame_rate=_CLIP_FPS,
|
||||
format='rgb24',
|
||||
)
|
||||
reader.add_basic_video_stream(
|
||||
frames_per_chunk=int(_SYNC_FPS * self.duration_sec),
|
||||
frame_rate=_SYNC_FPS,
|
||||
format='rgb24',
|
||||
)
|
||||
|
||||
reader.fill_buffer()
|
||||
data_chunk = reader.pop_chunks()
|
||||
|
||||
clip_chunk = data_chunk[0]
|
||||
sync_chunk = data_chunk[1]
|
||||
if clip_chunk is None:
|
||||
raise RuntimeError(f'CLIP video returned None {video_id}')
|
||||
if clip_chunk.shape[0] < self.clip_expected_length:
|
||||
raise RuntimeError(f'CLIP video too short {video_id}')
|
||||
|
||||
if sync_chunk is None:
|
||||
raise RuntimeError(f'Sync video returned None {video_id}')
|
||||
if sync_chunk.shape[0] < self.sync_expected_length:
|
||||
raise RuntimeError(f'Sync video too short {video_id}')
|
||||
|
||||
# truncate the video
|
||||
clip_chunk = clip_chunk[:self.clip_expected_length]
|
||||
if clip_chunk.shape[0] != self.clip_expected_length:
|
||||
raise RuntimeError(f'CLIP video wrong length {video_id}, '
|
||||
f'expected {self.clip_expected_length}, '
|
||||
f'got {clip_chunk.shape[0]}')
|
||||
clip_chunk = self.clip_augment(clip_chunk)
|
||||
|
||||
sync_chunk = sync_chunk[:self.sync_expected_length]
|
||||
if sync_chunk.shape[0] != self.sync_expected_length:
|
||||
raise RuntimeError(f'Sync video wrong length {video_id}, '
|
||||
f'expected {self.sync_expected_length}, '
|
||||
f'got {sync_chunk.shape[0]}')
|
||||
sync_chunk = self.sync_augment(sync_chunk)
|
||||
|
||||
data = {
|
||||
'name': video_id,
|
||||
'caption': caption,
|
||||
'clip_video': clip_chunk,
|
||||
'sync_video': sync_chunk,
|
||||
}
|
||||
|
||||
return data
|
||||
|
||||
def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
return self.sample(idx)
|
||||
|
||||
def __len__(self):
|
||||
return len(self.captions)
|
||||
postprocessing/mmaudio/data/eval/video_dataset.py (new file, 197 lines)
@@ -0,0 +1,197 @@
import json
|
||||
import logging
|
||||
import os
|
||||
from pathlib import Path
|
||||
from typing import Union
|
||||
|
||||
import pandas as pd
|
||||
import torch
|
||||
from torch.utils.data.dataset import Dataset
|
||||
from torchvision.transforms import v2
|
||||
from torio.io import StreamingMediaDecoder
|
||||
|
||||
from ...utils.dist_utils import local_rank
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
_CLIP_SIZE = 384
|
||||
_CLIP_FPS = 8.0
|
||||
|
||||
_SYNC_SIZE = 224
|
||||
_SYNC_FPS = 25.0
|
||||
|
||||
|
||||
class VideoDataset(Dataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
video_root: Union[str, Path],
|
||||
*,
|
||||
duration_sec: float = 8.0,
|
||||
):
|
||||
self.video_root = Path(video_root)
|
||||
|
||||
self.duration_sec = duration_sec
|
||||
|
||||
self.clip_expected_length = int(_CLIP_FPS * self.duration_sec)
|
||||
self.sync_expected_length = int(_SYNC_FPS * self.duration_sec)
|
||||
|
||||
self.clip_transform = v2.Compose([
|
||||
v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC),
|
||||
v2.ToImage(),
|
||||
v2.ToDtype(torch.float32, scale=True),
|
||||
])
|
||||
|
||||
self.sync_transform = v2.Compose([
|
||||
v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC),
|
||||
v2.CenterCrop(_SYNC_SIZE),
|
||||
v2.ToImage(),
|
||||
v2.ToDtype(torch.float32, scale=True),
|
||||
v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
|
||||
])
|
||||
|
||||
# to be implemented by subclasses
|
||||
self.captions = {}
|
||||
self.videos = sorted(list(self.captions.keys()))
|
||||
|
||||
def sample(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
video_id = self.videos[idx]
|
||||
caption = self.captions[video_id]
|
||||
|
||||
reader = StreamingMediaDecoder(self.video_root / (video_id + '.mp4'))
|
||||
reader.add_basic_video_stream(
|
||||
frames_per_chunk=int(_CLIP_FPS * self.duration_sec),
|
||||
frame_rate=_CLIP_FPS,
|
||||
format='rgb24',
|
||||
)
|
||||
reader.add_basic_video_stream(
|
||||
frames_per_chunk=int(_SYNC_FPS * self.duration_sec),
|
||||
frame_rate=_SYNC_FPS,
|
||||
format='rgb24',
|
||||
)
|
||||
|
||||
reader.fill_buffer()
|
||||
data_chunk = reader.pop_chunks()
|
||||
|
||||
clip_chunk = data_chunk[0]
|
||||
sync_chunk = data_chunk[1]
|
||||
if clip_chunk is None:
|
||||
raise RuntimeError(f'CLIP video returned None {video_id}')
|
||||
if clip_chunk.shape[0] < self.clip_expected_length:
|
||||
raise RuntimeError(
|
||||
f'CLIP video too short {video_id}, expected {self.clip_expected_length}, got {clip_chunk.shape[0]}'
|
||||
)
|
||||
|
||||
if sync_chunk is None:
|
||||
raise RuntimeError(f'Sync video returned None {video_id}')
|
||||
if sync_chunk.shape[0] < self.sync_expected_length:
|
||||
raise RuntimeError(
|
||||
f'Sync video too short {video_id}, expected {self.sync_expected_length}, got {sync_chunk.shape[0]}'
|
||||
)
|
||||
|
||||
# truncate the video
|
||||
clip_chunk = clip_chunk[:self.clip_expected_length]
|
||||
if clip_chunk.shape[0] != self.clip_expected_length:
|
||||
raise RuntimeError(f'CLIP video wrong length {video_id}, '
|
||||
f'expected {self.clip_expected_length}, '
|
||||
f'got {clip_chunk.shape[0]}')
|
||||
clip_chunk = self.clip_transform(clip_chunk)
|
||||
|
||||
sync_chunk = sync_chunk[:self.sync_expected_length]
|
||||
if sync_chunk.shape[0] != self.sync_expected_length:
|
||||
raise RuntimeError(f'Sync video wrong length {video_id}, '
|
||||
f'expected {self.sync_expected_length}, '
|
||||
f'got {sync_chunk.shape[0]}')
|
||||
sync_chunk = self.sync_transform(sync_chunk)
|
||||
|
||||
data = {
|
||||
'name': video_id,
|
||||
'caption': caption,
|
||||
'clip_video': clip_chunk,
|
||||
'sync_video': sync_chunk,
|
||||
}
|
||||
|
||||
return data
|
||||
|
||||
def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
try:
|
||||
return self.sample(idx)
|
||||
except Exception as e:
|
||||
log.error(f'Error loading video {self.videos[idx]}: {e}')
|
||||
return None
|
||||
|
||||
def __len__(self):
|
||||
return len(self.captions)
|
||||
|
||||
|
||||
class VGGSound(VideoDataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
video_root: Union[str, Path],
|
||||
csv_path: Union[str, Path],
|
||||
*,
|
||||
duration_sec: float = 8.0,
|
||||
):
|
||||
super().__init__(video_root, duration_sec=duration_sec)
|
||||
self.video_root = Path(video_root)
|
||||
self.csv_path = Path(csv_path)
|
||||
|
||||
videos = sorted(os.listdir(self.video_root))
|
||||
if local_rank == 0:
|
||||
log.info(f'{len(videos)} videos found in {video_root}')
|
||||
self.captions = {}
|
||||
|
||||
df = pd.read_csv(csv_path, header=None, names=['id', 'sec', 'caption',
|
||||
'split']).to_dict(orient='records')
|
||||
|
||||
videos_no_found = []
|
||||
for row in df:
|
||||
if row['split'] == 'test':
|
||||
start_sec = int(row['sec'])
|
||||
video_id = str(row['id'])
|
||||
# this is how our videos are named
|
||||
video_name = f'{video_id}_{start_sec:06d}'
|
||||
if video_name + '.mp4' not in videos:
|
||||
videos_no_found.append(video_name)
|
||||
continue
|
||||
|
||||
self.captions[video_name] = row['caption']
|
||||
|
||||
if local_rank == 0:
|
||||
log.info(f'{len(videos)} videos found in {video_root}')
|
||||
log.info(f'{len(self.captions)} useable videos found')
|
||||
if videos_no_found:
|
||||
log.info(f'{len(videos_no_found)} found in {csv_path} but not in {video_root}')
|
||||
log.info(
|
||||
'A small amount is expected, as not all videos are still available on YouTube')
|
||||
|
||||
self.videos = sorted(list(self.captions.keys()))
|
||||
|
||||
|
||||
class MovieGen(VideoDataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
video_root: Union[str, Path],
|
||||
jsonl_root: Union[str, Path],
|
||||
*,
|
||||
duration_sec: float = 10.0,
|
||||
):
|
||||
super().__init__(video_root, duration_sec=duration_sec)
|
||||
self.video_root = Path(video_root)
|
||||
self.jsonl_root = Path(jsonl_root)
|
||||
|
||||
videos = sorted(os.listdir(self.video_root))
|
||||
videos = [v[:-4] for v in videos] # remove extensions
|
||||
self.captions = {}
|
||||
|
||||
for v in videos:
|
||||
with open(self.jsonl_root / (v + '.jsonl')) as f:
|
||||
data = json.load(f)
|
||||
self.captions[v] = data['audio_prompt']
|
||||
|
||||
if local_rank == 0:
|
||||
log.info(f'{len(videos)} videos found in {video_root}')
|
||||
|
||||
self.videos = videos
|
||||
postprocessing/mmaudio/data/extracted_audio.py (new file, 88 lines)
@@ -0,0 +1,88 @@
import logging
|
||||
from pathlib import Path
|
||||
from typing import Union
|
||||
|
||||
import pandas as pd
|
||||
import torch
|
||||
from tensordict import TensorDict
|
||||
from torch.utils.data.dataset import Dataset
|
||||
|
||||
from ..utils.dist_utils import local_rank
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
class ExtractedAudio(Dataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
tsv_path: Union[str, Path],
|
||||
*,
|
||||
premade_mmap_dir: Union[str, Path],
|
||||
data_dim: dict[str, int],
|
||||
):
|
||||
super().__init__()
|
||||
|
||||
self.data_dim = data_dim
|
||||
self.df_list = pd.read_csv(tsv_path, sep='\t').to_dict('records')
|
||||
self.ids = [str(d['id']) for d in self.df_list]
|
||||
|
||||
log.info(f'Loading precomputed mmap from {premade_mmap_dir}')
|
||||
# load precomputed memory mapped tensors
|
||||
premade_mmap_dir = Path(premade_mmap_dir)
|
||||
td = TensorDict.load_memmap(premade_mmap_dir)
|
||||
log.info(f'Loaded precomputed mmap from {premade_mmap_dir}')
|
||||
self.mean = td['mean']
|
||||
self.std = td['std']
|
||||
self.text_features = td['text_features']
|
||||
|
||||
log.info(f'Loaded {len(self)} samples from {premade_mmap_dir}.')
|
||||
log.info(f'Loaded mean: {self.mean.shape}.')
|
||||
log.info(f'Loaded std: {self.std.shape}.')
|
||||
log.info(f'Loaded text features: {self.text_features.shape}.')
|
||||
|
||||
assert self.mean.shape[1] == self.data_dim['latent_seq_len'], \
|
||||
f'{self.mean.shape[1]} != {self.data_dim["latent_seq_len"]}'
|
||||
assert self.std.shape[1] == self.data_dim['latent_seq_len'], \
|
||||
f'{self.std.shape[1]} != {self.data_dim["latent_seq_len"]}'
|
||||
|
||||
assert self.text_features.shape[1] == self.data_dim['text_seq_len'], \
|
||||
f'{self.text_features.shape[1]} != {self.data_dim["text_seq_len"]}'
|
||||
assert self.text_features.shape[-1] == self.data_dim['text_dim'], \
|
||||
f'{self.text_features.shape[-1]} != {self.data_dim["text_dim"]}'
|
||||
|
||||
self.fake_clip_features = torch.zeros(self.data_dim['clip_seq_len'],
|
||||
self.data_dim['clip_dim'])
|
||||
self.fake_sync_features = torch.zeros(self.data_dim['sync_seq_len'],
|
||||
self.data_dim['sync_dim'])
|
||||
self.video_exist = torch.tensor(0, dtype=torch.bool)
|
||||
self.text_exist = torch.tensor(1, dtype=torch.bool)
|
||||
|
||||
def compute_latent_stats(self) -> tuple[torch.Tensor, torch.Tensor]:
|
||||
latents = self.mean
|
||||
return latents.mean(dim=(0, 1)), latents.std(dim=(0, 1))
|
||||
|
||||
def get_memory_mapped_tensor(self) -> TensorDict:
|
||||
td = TensorDict({
|
||||
'mean': self.mean,
|
||||
'std': self.std,
|
||||
'text_features': self.text_features,
|
||||
})
|
||||
return td
|
||||
|
||||
def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
data = {
|
||||
'id': str(self.df_list[idx]['id']),
|
||||
'a_mean': self.mean[idx],
|
||||
'a_std': self.std[idx],
|
||||
'clip_features': self.fake_clip_features,
|
||||
'sync_features': self.fake_sync_features,
|
||||
'text_features': self.text_features[idx],
|
||||
'caption': self.df_list[idx]['caption'],
|
||||
'video_exist': self.video_exist,
|
||||
'text_exist': self.text_exist,
|
||||
}
|
||||
return data
|
||||
|
||||
def __len__(self):
|
||||
return len(self.ids)
|
||||
postprocessing/mmaudio/data/extracted_vgg.py (new file, 101 lines)
@@ -0,0 +1,101 @@
import logging
|
||||
from pathlib import Path
|
||||
from typing import Union
|
||||
|
||||
import pandas as pd
|
||||
import torch
|
||||
from tensordict import TensorDict
|
||||
from torch.utils.data.dataset import Dataset
|
||||
|
||||
from ..utils.dist_utils import local_rank
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
class ExtractedVGG(Dataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
tsv_path: Union[str, Path],
|
||||
*,
|
||||
premade_mmap_dir: Union[str, Path],
|
||||
data_dim: dict[str, int],
|
||||
):
|
||||
super().__init__()
|
||||
|
||||
self.data_dim = data_dim
|
||||
self.df_list = pd.read_csv(tsv_path, sep='\t').to_dict('records')
|
||||
self.ids = [d['id'] for d in self.df_list]
|
||||
|
||||
log.info(f'Loading precomputed mmap from {premade_mmap_dir}')
|
||||
# load precomputed memory mapped tensors
|
||||
premade_mmap_dir = Path(premade_mmap_dir)
|
||||
td = TensorDict.load_memmap(premade_mmap_dir)
|
||||
log.info(f'Loaded precomputed mmap from {premade_mmap_dir}')
|
||||
self.mean = td['mean']
|
||||
self.std = td['std']
|
||||
self.clip_features = td['clip_features']
|
||||
self.sync_features = td['sync_features']
|
||||
self.text_features = td['text_features']
|
||||
|
||||
if local_rank == 0:
|
||||
log.info(f'Loaded {len(self)} samples.')
|
||||
log.info(f'Loaded mean: {self.mean.shape}.')
|
||||
log.info(f'Loaded std: {self.std.shape}.')
|
||||
log.info(f'Loaded clip_features: {self.clip_features.shape}.')
|
||||
log.info(f'Loaded sync_features: {self.sync_features.shape}.')
|
||||
log.info(f'Loaded text_features: {self.text_features.shape}.')
|
||||
|
||||
assert self.mean.shape[1] == self.data_dim['latent_seq_len'], \
|
||||
f'{self.mean.shape[1]} != {self.data_dim["latent_seq_len"]}'
|
||||
assert self.std.shape[1] == self.data_dim['latent_seq_len'], \
|
||||
f'{self.std.shape[1]} != {self.data_dim["latent_seq_len"]}'
|
||||
|
||||
assert self.clip_features.shape[1] == self.data_dim['clip_seq_len'], \
|
||||
f'{self.clip_features.shape[1]} != {self.data_dim["clip_seq_len"]}'
|
||||
assert self.sync_features.shape[1] == self.data_dim['sync_seq_len'], \
|
||||
f'{self.sync_features.shape[1]} != {self.data_dim["sync_seq_len"]}'
|
||||
assert self.text_features.shape[1] == self.data_dim['text_seq_len'], \
|
||||
f'{self.text_features.shape[1]} != {self.data_dim["text_seq_len"]}'
|
||||
|
||||
assert self.clip_features.shape[-1] == self.data_dim['clip_dim'], \
|
||||
f'{self.clip_features.shape[-1]} != {self.data_dim["clip_dim"]}'
|
||||
assert self.sync_features.shape[-1] == self.data_dim['sync_dim'], \
|
||||
f'{self.sync_features.shape[-1]} != {self.data_dim["sync_dim"]}'
|
||||
assert self.text_features.shape[-1] == self.data_dim['text_dim'], \
|
||||
f'{self.text_features.shape[-1]} != {self.data_dim["text_dim"]}'
|
||||
|
||||
self.video_exist = torch.tensor(1, dtype=torch.bool)
|
||||
self.text_exist = torch.tensor(1, dtype=torch.bool)
|
||||
|
||||
def compute_latent_stats(self) -> tuple[torch.Tensor, torch.Tensor]:
|
||||
latents = self.mean
|
||||
return latents.mean(dim=(0, 1)), latents.std(dim=(0, 1))
|
||||
|
||||
def get_memory_mapped_tensor(self) -> TensorDict:
|
||||
td = TensorDict({
|
||||
'mean': self.mean,
|
||||
'std': self.std,
|
||||
'clip_features': self.clip_features,
|
||||
'sync_features': self.sync_features,
|
||||
'text_features': self.text_features,
|
||||
})
|
||||
return td
|
||||
|
||||
def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
data = {
|
||||
'id': self.df_list[idx]['id'],
|
||||
'a_mean': self.mean[idx],
|
||||
'a_std': self.std[idx],
|
||||
'clip_features': self.clip_features[idx],
|
||||
'sync_features': self.sync_features[idx],
|
||||
'text_features': self.text_features[idx],
|
||||
'caption': self.df_list[idx]['label'],
|
||||
'video_exist': self.video_exist,
|
||||
'text_exist': self.text_exist,
|
||||
}
|
||||
|
||||
return data
|
||||
|
||||
def __len__(self):
|
||||
return len(self.ids)
|
||||
postprocessing/mmaudio/data/extraction/__init__.py (new file, 0 lines)
postprocessing/mmaudio/data/extraction/vgg_sound.py (new file, 193 lines)
@@ -0,0 +1,193 @@
import logging
|
||||
import os
|
||||
from pathlib import Path
|
||||
from typing import Optional, Union
|
||||
|
||||
import pandas as pd
|
||||
import torch
|
||||
import torchaudio
|
||||
from torch.utils.data.dataset import Dataset
|
||||
from torchvision.transforms import v2
|
||||
from torio.io import StreamingMediaDecoder
|
||||
|
||||
from ...utils.dist_utils import local_rank
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
_CLIP_SIZE = 384
|
||||
_CLIP_FPS = 8.0
|
||||
|
||||
_SYNC_SIZE = 224
|
||||
_SYNC_FPS = 25.0
|
||||
|
||||
|
||||
class VGGSound(Dataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
root: Union[str, Path],
|
||||
*,
|
||||
tsv_path: Union[str, Path] = 'sets/vgg3-train.tsv',
|
||||
sample_rate: int = 16_000,
|
||||
duration_sec: float = 8.0,
|
||||
audio_samples: Optional[int] = None,
|
||||
normalize_audio: bool = False,
|
||||
):
|
||||
self.root = Path(root)
|
||||
self.normalize_audio = normalize_audio
|
||||
if audio_samples is None:
|
||||
self.audio_samples = int(sample_rate * duration_sec)
|
||||
else:
|
||||
self.audio_samples = audio_samples
|
||||
effective_duration = audio_samples / sample_rate
|
||||
# make sure the duration is close enough, within 15ms
|
||||
assert abs(effective_duration - duration_sec) < 0.015, \
|
||||
f'audio_samples {audio_samples} does not match duration_sec {duration_sec}'
|
||||
|
||||
videos = sorted(os.listdir(self.root))
|
||||
videos = set([Path(v).stem for v in videos]) # remove extensions
|
||||
self.labels = {}
|
||||
self.videos = []
|
||||
missing_videos = []
|
||||
|
||||
# read the tsv for subset information
|
||||
df_list = pd.read_csv(tsv_path, sep='\t', dtype={'id': str}).to_dict('records')
|
||||
for record in df_list:
|
||||
id = record['id']
|
||||
label = record['label']
|
||||
if id in videos:
|
||||
self.labels[id] = label
|
||||
self.videos.append(id)
|
||||
else:
|
||||
missing_videos.append(id)
|
||||
|
||||
if local_rank == 0:
|
||||
log.info(f'{len(videos)} videos found in {root}')
|
||||
log.info(f'{len(self.videos)} videos found in {tsv_path}')
|
||||
log.info(f'{len(missing_videos)} videos missing in {root}')
|
||||
|
||||
self.sample_rate = sample_rate
|
||||
self.duration_sec = duration_sec
|
||||
|
||||
self.expected_audio_length = audio_samples
|
||||
self.clip_expected_length = int(_CLIP_FPS * self.duration_sec)
|
||||
self.sync_expected_length = int(_SYNC_FPS * self.duration_sec)
|
||||
|
||||
self.clip_transform = v2.Compose([
|
||||
v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC),
|
||||
v2.ToImage(),
|
||||
v2.ToDtype(torch.float32, scale=True),
|
||||
])
|
||||
|
||||
self.sync_transform = v2.Compose([
|
||||
v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC),
|
||||
v2.CenterCrop(_SYNC_SIZE),
|
||||
v2.ToImage(),
|
||||
v2.ToDtype(torch.float32, scale=True),
|
||||
v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
|
||||
])
|
||||
|
||||
self.resampler = {}
|
||||
|
||||
def sample(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
video_id = self.videos[idx]
|
||||
label = self.labels[video_id]
|
||||
|
||||
reader = StreamingMediaDecoder(self.root / (video_id + '.mp4'))
|
||||
reader.add_basic_video_stream(
|
||||
frames_per_chunk=int(_CLIP_FPS * self.duration_sec),
|
||||
frame_rate=_CLIP_FPS,
|
||||
format='rgb24',
|
||||
)
|
||||
reader.add_basic_video_stream(
|
||||
frames_per_chunk=int(_SYNC_FPS * self.duration_sec),
|
||||
frame_rate=_SYNC_FPS,
|
||||
format='rgb24',
|
||||
)
|
||||
reader.add_basic_audio_stream(frames_per_chunk=2**30, )
|
||||
|
||||
reader.fill_buffer()
|
||||
data_chunk = reader.pop_chunks()
|
||||
|
||||
clip_chunk = data_chunk[0]
|
||||
sync_chunk = data_chunk[1]
|
||||
audio_chunk = data_chunk[2]
|
||||
|
||||
if clip_chunk is None:
|
||||
raise RuntimeError(f'CLIP video returned None {video_id}')
|
||||
if clip_chunk.shape[0] < self.clip_expected_length:
|
||||
raise RuntimeError(
|
||||
f'CLIP video too short {video_id}, expected {self.clip_expected_length}, got {clip_chunk.shape[0]}'
|
||||
)
|
||||
|
||||
if sync_chunk is None:
|
||||
raise RuntimeError(f'Sync video returned None {video_id}')
|
||||
if sync_chunk.shape[0] < self.sync_expected_length:
|
||||
raise RuntimeError(
|
||||
f'Sync video too short {video_id}, expected {self.sync_expected_length}, got {sync_chunk.shape[0]}'
|
||||
)
|
||||
|
||||
# process audio
|
||||
sample_rate = int(reader.get_out_stream_info(2).sample_rate)
|
||||
audio_chunk = audio_chunk.transpose(0, 1)
|
||||
audio_chunk = audio_chunk.mean(dim=0) # mono
|
||||
if self.normalize_audio:
|
||||
abs_max = audio_chunk.abs().max()
|
||||
audio_chunk = audio_chunk / abs_max * 0.95
|
||||
if abs_max <= 1e-6:
|
||||
raise RuntimeError(f'Audio is silent {video_id}')
|
||||
|
||||
# resample
|
||||
if sample_rate == self.sample_rate:
|
||||
audio_chunk = audio_chunk
|
||||
else:
|
||||
if sample_rate not in self.resampler:
|
||||
# https://pytorch.org/audio/stable/tutorials/audio_resampling_tutorial.html#kaiser-best
|
||||
self.resampler[sample_rate] = torchaudio.transforms.Resample(
|
||||
sample_rate,
|
||||
self.sample_rate,
|
||||
lowpass_filter_width=64,
|
||||
rolloff=0.9475937167399596,
|
||||
resampling_method='sinc_interp_kaiser',
|
||||
beta=14.769656459379492,
|
||||
)
|
||||
audio_chunk = self.resampler[sample_rate](audio_chunk)
|
||||
|
||||
if audio_chunk.shape[0] < self.expected_audio_length:
|
||||
raise RuntimeError(f'Audio too short {video_id}')
|
||||
audio_chunk = audio_chunk[:self.expected_audio_length]
|
||||
|
||||
# truncate the video
|
||||
clip_chunk = clip_chunk[:self.clip_expected_length]
|
||||
if clip_chunk.shape[0] != self.clip_expected_length:
|
||||
raise RuntimeError(f'CLIP video wrong length {video_id}, '
|
||||
f'expected {self.clip_expected_length}, '
|
||||
f'got {clip_chunk.shape[0]}')
|
||||
clip_chunk = self.clip_transform(clip_chunk)
|
||||
|
||||
sync_chunk = sync_chunk[:self.sync_expected_length]
|
||||
if sync_chunk.shape[0] != self.sync_expected_length:
|
||||
raise RuntimeError(f'Sync video wrong length {video_id}, '
|
||||
f'expected {self.sync_expected_length}, '
|
||||
f'got {sync_chunk.shape[0]}')
|
||||
sync_chunk = self.sync_transform(sync_chunk)
|
||||
|
||||
data = {
|
||||
'id': video_id,
|
||||
'caption': label,
|
||||
'audio': audio_chunk,
|
||||
'clip_video': clip_chunk,
|
||||
'sync_video': sync_chunk,
|
||||
}
|
||||
|
||||
return data
|
||||
|
||||
def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
|
||||
try:
|
||||
return self.sample(idx)
|
||||
except Exception as e:
|
||||
log.error(f'Error loading video {self.videos[idx]}: {e}')
|
||||
return None
|
||||
|
||||
def __len__(self):
|
||||
return len(self.labels)
|
||||
postprocessing/mmaudio/data/extraction/wav_dataset.py (new file, 132 lines)
@@ -0,0 +1,132 @@
import logging
|
||||
import os
|
||||
from pathlib import Path
|
||||
from typing import Union
|
||||
|
||||
import open_clip
|
||||
import pandas as pd
|
||||
import torch
|
||||
import torchaudio
|
||||
from torch.utils.data.dataset import Dataset
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
class WavTextClipsDataset(Dataset):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
root: Union[str, Path],
|
||||
*,
|
||||
captions_tsv: Union[str, Path],
|
||||
clips_tsv: Union[str, Path],
|
||||
sample_rate: int,
|
||||
num_samples: int,
|
||||
normalize_audio: bool = False,
|
||||
reject_silent: bool = False,
|
||||
tokenizer_id: str = 'ViT-H-14-378-quickgelu',
|
||||
):
|
||||
self.root = Path(root)
|
||||
self.sample_rate = sample_rate
|
||||
self.num_samples = num_samples
|
||||
self.normalize_audio = normalize_audio
|
||||
self.reject_silent = reject_silent
|
||||
self.tokenizer = open_clip.get_tokenizer(tokenizer_id)
|
||||
|
||||
audios = sorted(os.listdir(self.root))
|
||||
audios = set([
|
||||
Path(audio).stem for audio in audios
|
||||
if audio.endswith('.wav') or audio.endswith('.flac')
|
||||
])
|
||||
self.captions = {}
|
||||
|
||||
# read the caption tsv
|
||||
df_list = pd.read_csv(captions_tsv, sep='\t', dtype={'id': str}).to_dict('records')
|
||||
for record in df_list:
|
||||
id = record['id']
|
||||
caption = record['caption']
|
||||
self.captions[id] = caption
|
||||
|
||||
# read the clip tsv
|
||||
df_list = pd.read_csv(clips_tsv, sep='\t', dtype={
|
||||
'id': str,
|
||||
'name': str
|
||||
}).to_dict('records')
|
||||
self.clips = []
|
||||
for record in df_list:
|
||||
record['id'] = record['id']
|
||||
record['name'] = record['name']
|
||||
id = record['id']
|
||||
name = record['name']
|
||||
if name not in self.captions:
|
||||
log.warning(f'Audio {name} not found in {captions_tsv}')
|
||||
continue
|
||||
record['caption'] = self.captions[name]
|
||||
self.clips.append(record)
|
||||
|
||||
log.info(f'Found {len(self.clips)} audio files in {self.root}')
|
||||
|
||||
self.resampler = {}
|
||||
|
||||
    def __getitem__(self, idx: int) -> dict[str, torch.Tensor]:
        try:
            clip = self.clips[idx]
            audio_name = clip['name']
            audio_id = clip['id']
            caption = clip['caption']
            start_sample = clip['start_sample']
            end_sample = clip['end_sample']

            audio_path = self.root / f'{audio_name}.flac'
            if not audio_path.exists():
                audio_path = self.root / f'{audio_name}.wav'
                assert audio_path.exists()

            audio_chunk, sample_rate = torchaudio.load(audio_path)
            audio_chunk = audio_chunk.mean(dim=0)  # mono
            abs_max = audio_chunk.abs().max()

            if self.reject_silent and abs_max < 1e-6:
                log.warning(f'Rejecting silent audio {audio_name}')
                return None

            if self.normalize_audio:
                audio_chunk = audio_chunk / abs_max * 0.95

            audio_chunk = audio_chunk[start_sample:end_sample]

            # resample
            if sample_rate == self.sample_rate:
                audio_chunk = audio_chunk
            else:
                if sample_rate not in self.resampler:
                    # https://pytorch.org/audio/stable/tutorials/audio_resampling_tutorial.html#kaiser-best
                    self.resampler[sample_rate] = torchaudio.transforms.Resample(
                        sample_rate,
                        self.sample_rate,
                        lowpass_filter_width=64,
                        rolloff=0.9475937167399596,
                        resampling_method='sinc_interp_kaiser',
                        beta=14.769656459379492,
                    )
                audio_chunk = self.resampler[sample_rate](audio_chunk)

            if audio_chunk.shape[0] < self.num_samples:
                raise ValueError('Audio is too short')
            audio_chunk = audio_chunk[:self.num_samples]

            tokens = self.tokenizer([caption])[0]

            output = {
                'waveform': audio_chunk,
                'id': audio_id,
                'caption': caption,
                'tokens': tokens,
            }

            return output
        except Exception as e:
            log.error(f'Error reading {audio_path}: {e}')
            return None

    def __len__(self):
        return len(self.clips)
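The class above expects two tab-separated files keyed the same way (the clip's `name` must match a caption `id`). A sketch of their minimal layout, with column names taken from the reader code and example values invented for illustration:

# Illustrative only: column names come from the reader above, the values are invented.
import pandas as pd

pd.DataFrame([{'id': '0001', 'caption': 'rain falling on a tin roof'}]).to_csv(
    'captions.tsv', sep='\t', index=False)
pd.DataFrame([{'id': '0001', 'name': '0001', 'start_sample': 0, 'end_sample': 16000 * 8}]).to_csv(
    'clips.tsv', sep='\t', index=False)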
45
postprocessing/mmaudio/data/mm_dataset.py
Normal file
@ -0,0 +1,45 @@
import bisect

import torch
from torch.utils.data.dataset import Dataset


# modified from https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html#ConcatDataset
class MultiModalDataset(Dataset):
    datasets: list[Dataset]
    cumulative_sizes: list[int]

    @staticmethod
    def cumsum(sequence):
        r, s = [], 0
        for e in sequence:
            l = len(e)
            r.append(l + s)
            s += l
        return r

    def __init__(self, video_datasets: list[Dataset], audio_datasets: list[Dataset]):
        super().__init__()
        self.video_datasets = list(video_datasets)
        self.audio_datasets = list(audio_datasets)
        self.datasets = self.video_datasets + self.audio_datasets

        self.cumulative_sizes = self.cumsum(self.datasets)

    def __len__(self):
        return self.cumulative_sizes[-1]

    def __getitem__(self, idx):
        if idx < 0:
            if -idx > len(self):
                raise ValueError("absolute value of index should not exceed dataset length")
            idx = len(self) + idx
        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
        if dataset_idx == 0:
            sample_idx = idx
        else:
            sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
        return self.datasets[dataset_idx][sample_idx]

    def compute_latent_stats(self) -> tuple[torch.Tensor, torch.Tensor]:
        return self.video_datasets[0].compute_latent_stats()
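A minimal usage sketch for the concatenation wrapper above; `video_ds` and `audio_ds` are placeholder dataset instances, not objects defined in this commit:

# Hypothetical example: the index space is video datasets first, then audio datasets.
mixed = MultiModalDataset(video_datasets=[video_ds], audio_datasets=[audio_ds])
print(len(mixed))               # len(video_ds) + len(audio_ds)
sample = mixed[len(video_ds)]   # first item of the audio dataset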
148
postprocessing/mmaudio/data/utils.py
Normal file
@ -0,0 +1,148 @@
import logging
import os
import random
import tempfile
from pathlib import Path
from typing import Any, Optional, Union

import torch
import torch.distributed as dist
from tensordict import MemoryMappedTensor
from torch.utils.data import DataLoader
from torch.utils.data.dataset import Dataset
from tqdm import tqdm

from ..utils.dist_utils import local_rank, world_size

scratch_path = Path(os.environ['SLURM_SCRATCH'] if 'SLURM_SCRATCH' in os.environ else '/dev/shm')
shm_path = Path('/dev/shm')

log = logging.getLogger()


def reseed(seed):
    random.seed(seed)
    torch.manual_seed(seed)


def local_scatter_torch(obj: Optional[Any]):
    if world_size == 1:
        # Just one worker. Do nothing.
        return obj

    array = [obj] * world_size
    target_array = [None]
    if local_rank == 0:
        dist.scatter_object_list(target_array, scatter_object_input_list=array, src=0)
    else:
        dist.scatter_object_list(target_array, scatter_object_input_list=None, src=0)
    return target_array[0]


class ShardDataset(Dataset):

    def __init__(self, root):
        self.root = root
        self.shards = sorted(os.listdir(root))

    def __len__(self):
        return len(self.shards)

    def __getitem__(self, idx):
        return torch.load(os.path.join(self.root, self.shards[idx]), weights_only=True)


def get_tmp_dir(in_memory: bool) -> Path:
    return shm_path if in_memory else scratch_path


def load_shards_and_share(data_path: Union[str, Path], ids: list[int],
                          in_memory: bool) -> MemoryMappedTensor:
    if local_rank == 0:
        with tempfile.NamedTemporaryFile(prefix='shared-tensor-', dir=get_tmp_dir(in_memory)) as f:
            log.info(f'Loading shards from {data_path} into {f.name}...')
            data = load_shards(data_path, ids=ids, tmp_file_path=f.name)
            data = share_tensor_to_all(data)
            torch.distributed.barrier()
            f.close()  # why does the context manager not close the file for me?
    else:
        log.info('Waiting for the data to be shared with me...')
        data = share_tensor_to_all(None)
        torch.distributed.barrier()

    return data


def load_shards(
    data_path: Union[str, Path],
    ids: list[int],
    *,
    tmp_file_path: str,
) -> Union[torch.Tensor, dict[str, torch.Tensor]]:

    id_set = set(ids)
    shards = sorted(os.listdir(data_path))
    log.info(f'Found {len(shards)} shards in {data_path}.')
    first_shard = torch.load(os.path.join(data_path, shards[0]), weights_only=True)

    log.info(f'Rank {local_rank} created file {tmp_file_path}')
    first_item = next(iter(first_shard.values()))
    log.info(f'First item shape: {first_item.shape}')
    mm_tensor = MemoryMappedTensor.empty(shape=(len(ids), *first_item.shape),
                                         dtype=torch.float32,
                                         filename=tmp_file_path,
                                         existsok=True)
    total_count = 0
    used_index = set()
    id_indexing = {i: idx for idx, i in enumerate(ids)}
    # faster with no workers; otherwise we need to set_sharing_strategy('file_system')
    loader = DataLoader(ShardDataset(data_path), batch_size=1, num_workers=0)
    for data in tqdm(loader, desc='Loading shards'):
        for i, v in data.items():
            if i not in id_set:
                continue

            # tensor_index = ids.index(i)
            tensor_index = id_indexing[i]
            if tensor_index in used_index:
                raise ValueError(f'Duplicate id {i} found in {data_path}.')
            used_index.add(tensor_index)
            mm_tensor[tensor_index] = v
            total_count += 1

    assert total_count == len(ids), f'Expected {len(ids)} tensors, got {total_count}.'
    log.info(f'Loaded {total_count} tensors from {data_path}.')

    return mm_tensor


def share_tensor_to_all(x: Optional[MemoryMappedTensor]) -> MemoryMappedTensor:
    """
    x: the tensor to be shared; None if local_rank != 0
    return: the shared tensor
    """

    # there is no need to share your stuff with anyone if you are alone; must be in memory
    if world_size == 1:
        return x

    if local_rank == 0:
        assert x is not None, 'x must not be None if local_rank == 0'
    else:
        assert x is None, 'x must be None if local_rank != 0'

    if local_rank == 0:
        filename = x.filename
        meta_information = (filename, x.shape, x.dtype)
    else:
        meta_information = None

    filename, data_shape, data_type = local_scatter_torch(meta_information)
    if local_rank == 0:
        data = x
    else:
        data = MemoryMappedTensor.from_filename(filename=filename,
                                                dtype=data_type,
                                                shape=data_shape)

    return data
259
postprocessing/mmaudio/eval_utils.py
Normal file
@ -0,0 +1,259 @@
import dataclasses
import logging
from pathlib import Path
from typing import Optional

import numpy as np
import torch
# from colorlog import ColoredFormatter
from PIL import Image
from torchvision.transforms import v2

from .data.av_utils import ImageInfo, VideoInfo, read_frames, reencode_with_audio, remux_with_audio
from .model.flow_matching import FlowMatching
from .model.networks import MMAudio
from .model.sequence_config import CONFIG_16K, CONFIG_44K, SequenceConfig
from .model.utils.features_utils import FeaturesUtils
from .utils.download_utils import download_model_if_needed

log = logging.getLogger()


@dataclasses.dataclass
class ModelConfig:
    model_name: str
    model_path: Path
    vae_path: Path
    bigvgan_16k_path: Optional[Path]
    mode: str
    synchformer_ckpt: Path = Path('ckpts/mmaudio/synchformer_state_dict.pth')

    @property
    def seq_cfg(self) -> SequenceConfig:
        if self.mode == '16k':
            return CONFIG_16K
        elif self.mode == '44k':
            return CONFIG_44K
        raise ValueError(f'Unknown mode: {self.mode}')

    def download_if_needed(self):
        download_model_if_needed(self.model_path)
        download_model_if_needed(self.vae_path)
        if self.bigvgan_16k_path is not None:
            download_model_if_needed(self.bigvgan_16k_path)
        download_model_if_needed(self.synchformer_ckpt)


small_16k = ModelConfig(model_name='small_16k',
                        model_path=Path('./weights/mmaudio_small_16k.pth'),
                        vae_path=Path('./ext_weights/v1-16.pth'),
                        bigvgan_16k_path=Path('./ext_weights/best_netG.pt'),
                        mode='16k')
small_44k = ModelConfig(model_name='small_44k',
                        model_path=Path('./weights/mmaudio_small_44k.pth'),
                        vae_path=Path('./ext_weights/v1-44.pth'),
                        bigvgan_16k_path=None,
                        mode='44k')
medium_44k = ModelConfig(model_name='medium_44k',
                         model_path=Path('./weights/mmaudio_medium_44k.pth'),
                         vae_path=Path('./ext_weights/v1-44.pth'),
                         bigvgan_16k_path=None,
                         mode='44k')
large_44k = ModelConfig(model_name='large_44k',
                        model_path=Path('./weights/mmaudio_large_44k.pth'),
                        vae_path=Path('./ext_weights/v1-44.pth'),
                        bigvgan_16k_path=None,
                        mode='44k')
large_44k_v2 = ModelConfig(model_name='large_44k_v2',
                           model_path=Path('ckpts/mmaudio/mmaudio_large_44k_v2.pth'),
                           vae_path=Path('ckpts/mmaudio/v1-44.pth'),
                           bigvgan_16k_path=None,
                           mode='44k')
all_model_cfg: dict[str, ModelConfig] = {
    'small_16k': small_16k,
    'small_44k': small_44k,
    'medium_44k': medium_44k,
    'large_44k': large_44k,
    'large_44k_v2': large_44k_v2,
}


def generate(
    clip_video: Optional[torch.Tensor],
    sync_video: Optional[torch.Tensor],
    text: Optional[list[str]],
    *,
    negative_text: Optional[list[str]] = None,
    feature_utils: FeaturesUtils,
    net: MMAudio,
    fm: FlowMatching,
    rng: torch.Generator,
    cfg_strength: float,
    clip_batch_size_multiplier: int = 40,
    sync_batch_size_multiplier: int = 40,
    image_input: bool = False,
    offloadobj=None,
) -> torch.Tensor:
    device = feature_utils.device
    dtype = feature_utils.dtype

    bs = len(text)
    if clip_video is not None:
        clip_video = clip_video.to(device, dtype, non_blocking=True)
        clip_features = feature_utils.encode_video_with_clip(
            clip_video, batch_size=bs * clip_batch_size_multiplier)
        if image_input:
            clip_features = clip_features.expand(-1, net.clip_seq_len, -1)
    else:
        clip_features = net.get_empty_clip_sequence(bs)

    if sync_video is not None and not image_input:
        sync_video = sync_video.to(device, dtype, non_blocking=True)
        sync_features = feature_utils.encode_video_with_sync(
            sync_video, batch_size=bs * sync_batch_size_multiplier)
    else:
        sync_features = net.get_empty_sync_sequence(bs)

    if text is not None:
        text_features = feature_utils.encode_text(text)
    else:
        text_features = net.get_empty_string_sequence(bs)

    if negative_text is not None:
        assert len(negative_text) == bs
        negative_text_features = feature_utils.encode_text(negative_text)
    else:
        negative_text_features = net.get_empty_string_sequence(bs)
    if offloadobj is not None:
        offloadobj.ensure_model_loaded("net")
    x0 = torch.randn(bs,
                     net.latent_seq_len,
                     net.latent_dim,
                     device=device,
                     dtype=dtype,
                     generator=rng)
    preprocessed_conditions = net.preprocess_conditions(clip_features, sync_features, text_features)
    empty_conditions = net.get_empty_conditions(
        bs, negative_text_features=negative_text_features if negative_text is not None else None)

    cfg_ode_wrapper = lambda t, x: net.ode_wrapper(t, x, preprocessed_conditions, empty_conditions,
                                                   cfg_strength)
    x1 = fm.to_data(cfg_ode_wrapper, x0)
    x1 = net.unnormalize(x1)
    spec = feature_utils.decode(x1)
    audio = feature_utils.vocode(spec)
    return audio


LOGFORMAT = "[%(log_color)s%(levelname)-8s%(reset)s]: %(log_color)s%(message)s%(reset)s"


def setup_eval_logging(log_level: int = logging.INFO):
    logging.root.setLevel(log_level)
    # formatter = ColoredFormatter(LOGFORMAT)
    formatter = None
    stream = logging.StreamHandler()
    stream.setLevel(log_level)
    stream.setFormatter(formatter)
    log = logging.getLogger()
    log.setLevel(log_level)
    log.addHandler(stream)


_CLIP_SIZE = 384
_CLIP_FPS = 8.0

_SYNC_SIZE = 224
_SYNC_FPS = 25.0


def load_video(video_path: Path, duration_sec: float, load_all_frames: bool = True) -> VideoInfo:

    clip_transform = v2.Compose([
        v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC),
        v2.ToImage(),
        v2.ToDtype(torch.float32, scale=True),
    ])

    sync_transform = v2.Compose([
        v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC),
        v2.CenterCrop(_SYNC_SIZE),
        v2.ToImage(),
        v2.ToDtype(torch.float32, scale=True),
        v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])

    output_frames, all_frames, orig_fps = read_frames(video_path,
                                                      list_of_fps=[_CLIP_FPS, _SYNC_FPS],
                                                      start_sec=0,
                                                      end_sec=duration_sec,
                                                      need_all_frames=load_all_frames)

    clip_chunk, sync_chunk = output_frames
    clip_chunk = torch.from_numpy(clip_chunk).permute(0, 3, 1, 2)
    sync_chunk = torch.from_numpy(sync_chunk).permute(0, 3, 1, 2)

    clip_frames = clip_transform(clip_chunk)
    sync_frames = sync_transform(sync_chunk)

    clip_length_sec = clip_frames.shape[0] / _CLIP_FPS
    sync_length_sec = sync_frames.shape[0] / _SYNC_FPS

    if clip_length_sec < duration_sec:
        log.warning(f'Clip video is too short: {clip_length_sec:.2f} < {duration_sec:.2f}')
        log.warning(f'Truncating to {clip_length_sec:.2f} sec')
        duration_sec = clip_length_sec

    if sync_length_sec < duration_sec:
        log.warning(f'Sync video is too short: {sync_length_sec:.2f} < {duration_sec:.2f}')
        log.warning(f'Truncating to {sync_length_sec:.2f} sec')
        duration_sec = sync_length_sec

    clip_frames = clip_frames[:int(_CLIP_FPS * duration_sec)]
    sync_frames = sync_frames[:int(_SYNC_FPS * duration_sec)]

    video_info = VideoInfo(
        duration_sec=duration_sec,
        fps=orig_fps,
        clip_frames=clip_frames,
        sync_frames=sync_frames,
        all_frames=all_frames if load_all_frames else None,
    )
    return video_info


def load_image(image_path: Path) -> ImageInfo:
    clip_transform = v2.Compose([
        v2.Resize((_CLIP_SIZE, _CLIP_SIZE), interpolation=v2.InterpolationMode.BICUBIC),
        v2.ToImage(),
        v2.ToDtype(torch.float32, scale=True),
    ])

    sync_transform = v2.Compose([
        v2.Resize(_SYNC_SIZE, interpolation=v2.InterpolationMode.BICUBIC),
        v2.CenterCrop(_SYNC_SIZE),
        v2.ToImage(),
        v2.ToDtype(torch.float32, scale=True),
        v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ])

    frame = np.array(Image.open(image_path))

    clip_chunk = torch.from_numpy(frame).unsqueeze(0).permute(0, 3, 1, 2)
    sync_chunk = torch.from_numpy(frame).unsqueeze(0).permute(0, 3, 1, 2)

    clip_frames = clip_transform(clip_chunk)
    sync_frames = sync_transform(sync_chunk)

    video_info = ImageInfo(
        clip_frames=clip_frames,
        sync_frames=sync_frames,
        original_frame=frame,
    )
    return video_info


def make_video(source_path, video_info: VideoInfo, output_path: Path, audio: torch.Tensor, sampling_rate: int):
    # reencode_with_audio(video_info, output_path, audio, sampling_rate)
    remux_with_audio(source_path, output_path, audio, sampling_rate)
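For orientation, a minimal sketch of how these helpers fit together. Loading `net`, `feature_utils` and `fm` lives in other modules of this commit and is elided here, so treat the call sequence as illustrative rather than a runnable script; the sampling rate literal is likewise an assumption.

# Illustrative call sequence only; model and feature-extractor loading is elided.
cfg = all_model_cfg['large_44k_v2']
cfg.download_if_needed()
seq_cfg = cfg.seq_cfg  # 16 kHz or 44.1 kHz sequence settings

video_info = load_video(Path('input.mp4'), duration_sec=8.0)
# audio = generate(video_info.clip_frames.unsqueeze(0),
#                  video_info.sync_frames.unsqueeze(0),
#                  ['rain on a tin roof'],
#                  feature_utils=feature_utils, net=net, fm=fm,
#                  rng=torch.Generator(device='cuda').manual_seed(0), cfg_strength=4.5)
# make_video('input.mp4', video_info, Path('output.mp4'), audio.float().cpu()[0], sampling_rate=44100)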
1
postprocessing/mmaudio/ext/__init__.py
Normal file
@ -0,0 +1 @@

1
postprocessing/mmaudio/ext/autoencoder/__init__.py
Normal file
@ -0,0 +1 @@
from .autoencoder import AutoEncoderModule
52
postprocessing/mmaudio/ext/autoencoder/autoencoder.py
Normal file
@ -0,0 +1,52 @@
from typing import Literal, Optional

import torch
import torch.nn as nn

from ..autoencoder.vae import VAE, get_my_vae
from ..bigvgan import BigVGAN
from ..bigvgan_v2.bigvgan import BigVGAN as BigVGANv2
from ...model.utils.distributions import DiagonalGaussianDistribution


class AutoEncoderModule(nn.Module):

    def __init__(self,
                 *,
                 vae_ckpt_path,
                 vocoder_ckpt_path: Optional[str] = None,
                 mode: Literal['16k', '44k'],
                 need_vae_encoder: bool = True):
        super().__init__()
        self.vae: VAE = get_my_vae(mode).eval()
        vae_state_dict = torch.load(vae_ckpt_path, weights_only=True, map_location='cpu')
        self.vae.load_state_dict(vae_state_dict)
        self.vae.remove_weight_norm()

        if mode == '16k':
            assert vocoder_ckpt_path is not None
            self.vocoder = BigVGAN(vocoder_ckpt_path).eval()
        elif mode == '44k':
            self.vocoder = BigVGANv2.from_pretrained('nvidia/bigvgan_v2_44khz_128band_512x',
                                                     use_cuda_kernel=False)
            self.vocoder.remove_weight_norm()
        else:
            raise ValueError(f'Unknown mode: {mode}')

        for param in self.parameters():
            param.requires_grad = False

        if not need_vae_encoder:
            del self.vae.encoder

    @torch.inference_mode()
    def encode(self, x: torch.Tensor) -> DiagonalGaussianDistribution:
        return self.vae.encode(x)

    @torch.inference_mode()
    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.vae.decode(z)

    @torch.inference_mode()
    def vocode(self, spec: torch.Tensor) -> torch.Tensor:
        return self.vocoder(spec)
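A small sketch of the intended encode/decode/vocode round trip. The checkpoint paths are placeholders that must exist locally, and the 80-bin mel layout is an assumption inferred from the 16 kHz VAE definition later in this commit.

# Hypothetical usage; checkpoint paths are placeholders and must exist locally.
import torch

ae = AutoEncoderModule(vae_ckpt_path='./ext_weights/v1-16.pth',
                       vocoder_ckpt_path='./ext_weights/best_netG.pt',
                       mode='16k')
mel = torch.randn(1, 80, 512)     # assumed (batch, mel bins, frames) layout for the 16k VAE
dist = ae.encode(mel)             # DiagonalGaussianDistribution over latents
recon = ae.decode(dist.mean)      # back to a mel-spectrogram
wav = ae.vocode(recon)            # BigVGAN renders the mel to waveform samples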
168
postprocessing/mmaudio/ext/autoencoder/edm2_utils.py
Normal file
@ -0,0 +1,168 @@
|
||||
# Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
|
||||
#
|
||||
# This work is licensed under a Creative Commons
|
||||
# Attribution-NonCommercial-ShareAlike 4.0 International License.
|
||||
# You should have received a copy of the license along with this
|
||||
# work. If not, see http://creativecommons.org/licenses/by-nc-sa/4.0/
|
||||
"""Improved diffusion model architecture proposed in the paper
|
||||
"Analyzing and Improving the Training Dynamics of Diffusion Models"."""
|
||||
|
||||
import numpy as np
|
||||
import torch
|
||||
|
||||
#----------------------------------------------------------------------------
|
||||
# Variant of constant() that inherits dtype and device from the given
|
||||
# reference tensor by default.
|
||||
|
||||
_constant_cache = dict()
|
||||
|
||||
|
||||
def constant(value, shape=None, dtype=None, device=None, memory_format=None):
|
||||
value = np.asarray(value)
|
||||
if shape is not None:
|
||||
shape = tuple(shape)
|
||||
if dtype is None:
|
||||
dtype = torch.get_default_dtype()
|
||||
if device is None:
|
||||
device = torch.device('cpu')
|
||||
if memory_format is None:
|
||||
memory_format = torch.contiguous_format
|
||||
|
||||
key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format)
|
||||
tensor = _constant_cache.get(key, None)
|
||||
if tensor is None:
|
||||
tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device)
|
||||
if shape is not None:
|
||||
tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape))
|
||||
tensor = tensor.contiguous(memory_format=memory_format)
|
||||
_constant_cache[key] = tensor
|
||||
return tensor
|
||||
|
||||
|
||||
def const_like(ref, value, shape=None, dtype=None, device=None, memory_format=None):
|
||||
if dtype is None:
|
||||
dtype = ref.dtype
|
||||
if device is None:
|
||||
device = ref.device
|
||||
return constant(value, shape=shape, dtype=dtype, device=device, memory_format=memory_format)
|
||||
|
||||
|
||||
#----------------------------------------------------------------------------
|
||||
# Normalize given tensor to unit magnitude with respect to the given
|
||||
# dimensions. Default = all dimensions except the first.
|
||||
|
||||
|
||||
def normalize(x, dim=None, eps=1e-4):
|
||||
if dim is None:
|
||||
dim = list(range(1, x.ndim))
|
||||
norm = torch.linalg.vector_norm(x, dim=dim, keepdim=True, dtype=torch.float32)
|
||||
norm = torch.add(eps, norm, alpha=np.sqrt(norm.numel() / x.numel()))
|
||||
return x / norm.to(x.dtype)
|
||||
|
||||
|
||||
class Normalize(torch.nn.Module):
|
||||
|
||||
def __init__(self, dim=None, eps=1e-4):
|
||||
super().__init__()
|
||||
self.dim = dim
|
||||
self.eps = eps
|
||||
|
||||
def forward(self, x):
|
||||
return normalize(x, dim=self.dim, eps=self.eps)
|
||||
|
||||
|
||||
#----------------------------------------------------------------------------
|
||||
# Upsample or downsample the given tensor with the given filter,
|
||||
# or keep it as is.
|
||||
|
||||
|
||||
def resample(x, f=[1, 1], mode='keep'):
|
||||
if mode == 'keep':
|
||||
return x
|
||||
f = np.float32(f)
|
||||
assert f.ndim == 1 and len(f) % 2 == 0
|
||||
pad = (len(f) - 1) // 2
|
||||
f = f / f.sum()
|
||||
f = np.outer(f, f)[np.newaxis, np.newaxis, :, :]
|
||||
f = const_like(x, f)
|
||||
c = x.shape[1]
|
||||
if mode == 'down':
|
||||
return torch.nn.functional.conv2d(x,
|
||||
f.tile([c, 1, 1, 1]),
|
||||
groups=c,
|
||||
stride=2,
|
||||
padding=(pad, ))
|
||||
assert mode == 'up'
|
||||
return torch.nn.functional.conv_transpose2d(x, (f * 4).tile([c, 1, 1, 1]),
|
||||
groups=c,
|
||||
stride=2,
|
||||
padding=(pad, ))
|
||||
|
||||
|
||||
#----------------------------------------------------------------------------
|
||||
# Magnitude-preserving SiLU (Equation 81).
|
||||
|
||||
|
||||
def mp_silu(x):
|
||||
return torch.nn.functional.silu(x) / 0.596
|
||||
|
||||
|
||||
class MPSiLU(torch.nn.Module):
|
||||
|
||||
def forward(self, x):
|
||||
return mp_silu(x)
|
||||
|
||||
|
||||
#----------------------------------------------------------------------------
|
||||
# Magnitude-preserving sum (Equation 88).
|
||||
|
||||
|
||||
def mp_sum(a, b, t=0.5):
|
||||
return a.lerp(b, t) / np.sqrt((1 - t)**2 + t**2)
|
||||
|
||||
|
||||
#----------------------------------------------------------------------------
|
||||
# Magnitude-preserving concatenation (Equation 103).
|
||||
|
||||
|
||||
def mp_cat(a, b, dim=1, t=0.5):
|
||||
Na = a.shape[dim]
|
||||
Nb = b.shape[dim]
|
||||
C = np.sqrt((Na + Nb) / ((1 - t)**2 + t**2))
|
||||
wa = C / np.sqrt(Na) * (1 - t)
|
||||
wb = C / np.sqrt(Nb) * t
|
||||
return torch.cat([wa * a, wb * b], dim=dim)
|
||||
|
||||
|
||||
#----------------------------------------------------------------------------
|
||||
# Magnitude-preserving convolution or fully-connected layer (Equation 47)
|
||||
# with force weight normalization (Equation 66).
|
||||
|
||||
|
||||
class MPConv1D(torch.nn.Module):
|
||||
|
||||
def __init__(self, in_channels, out_channels, kernel_size):
|
||||
super().__init__()
|
||||
self.out_channels = out_channels
|
||||
self.weight = torch.nn.Parameter(torch.randn(out_channels, in_channels, kernel_size))
|
||||
|
||||
self.weight_norm_removed = False
|
||||
|
||||
def forward(self, x, gain=1):
|
||||
assert self.weight_norm_removed, 'call remove_weight_norm() before inference'
|
||||
|
||||
w = self.weight * gain
|
||||
if w.ndim == 2:
|
||||
return x @ w.t()
|
||||
assert w.ndim == 3
|
||||
return torch.nn.functional.conv1d(x, w, padding=(w.shape[-1] // 2, ))
|
||||
|
||||
def remove_weight_norm(self):
|
||||
w = self.weight.to(torch.float32)
|
||||
w = normalize(w) # traditional weight normalization
|
||||
w = w / np.sqrt(w[0].numel())
|
||||
w = w.to(self.weight.dtype)
|
||||
self.weight.data.copy_(w)
|
||||
|
||||
self.weight_norm_removed = True
|
||||
return self
|
||||
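The magnitude-preserving primitives in this file are built so that unit-variance inputs keep roughly unit variance; a quick numerical sanity check (illustrative, not part of the commit) is:

# Illustrative check of the magnitude-preserving scaling used above.
import torch

x, y = torch.randn(4096, 256), torch.randn(4096, 256)
print(torch.nn.functional.silu(x).std())                   # ~0.596 for N(0,1) inputs
print((torch.nn.functional.silu(x) / 0.596).std())         # ~1.0, i.e. mp_silu
print((x.lerp(y, 0.5) / (0.5**2 + 0.5**2) ** 0.5).std())   # ~1.0, i.e. mp_sum with t=0.5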
369
postprocessing/mmaudio/ext/autoencoder/vae.py
Normal file
@ -0,0 +1,369 @@
|
||||
import logging
|
||||
from typing import Optional
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
|
||||
from ...ext.autoencoder.edm2_utils import MPConv1D
|
||||
from ...ext.autoencoder.vae_modules import (AttnBlock1D, Downsample1D, ResnetBlock1D,
|
||||
Upsample1D, nonlinearity)
|
||||
from ...model.utils.distributions import DiagonalGaussianDistribution
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
DATA_MEAN_80D = [
|
||||
-1.6058, -1.3676, -1.2520, -1.2453, -1.2078, -1.2224, -1.2419, -1.2439, -1.2922, -1.2927,
|
||||
-1.3170, -1.3543, -1.3401, -1.3836, -1.3907, -1.3912, -1.4313, -1.4152, -1.4527, -1.4728,
|
||||
-1.4568, -1.5101, -1.5051, -1.5172, -1.5623, -1.5373, -1.5746, -1.5687, -1.6032, -1.6131,
|
||||
-1.6081, -1.6331, -1.6489, -1.6489, -1.6700, -1.6738, -1.6953, -1.6969, -1.7048, -1.7280,
|
||||
-1.7361, -1.7495, -1.7658, -1.7814, -1.7889, -1.8064, -1.8221, -1.8377, -1.8417, -1.8643,
|
||||
-1.8857, -1.8929, -1.9173, -1.9379, -1.9531, -1.9673, -1.9824, -2.0042, -2.0215, -2.0436,
|
||||
-2.0766, -2.1064, -2.1418, -2.1855, -2.2319, -2.2767, -2.3161, -2.3572, -2.3954, -2.4282,
|
||||
-2.4659, -2.5072, -2.5552, -2.6074, -2.6584, -2.7107, -2.7634, -2.8266, -2.8981, -2.9673
|
||||
]
|
||||
|
||||
DATA_STD_80D = [
|
||||
1.0291, 1.0411, 1.0043, 0.9820, 0.9677, 0.9543, 0.9450, 0.9392, 0.9343, 0.9297, 0.9276, 0.9263,
|
||||
0.9242, 0.9254, 0.9232, 0.9281, 0.9263, 0.9315, 0.9274, 0.9247, 0.9277, 0.9199, 0.9188, 0.9194,
|
||||
0.9160, 0.9161, 0.9146, 0.9161, 0.9100, 0.9095, 0.9145, 0.9076, 0.9066, 0.9095, 0.9032, 0.9043,
|
||||
0.9038, 0.9011, 0.9019, 0.9010, 0.8984, 0.8983, 0.8986, 0.8961, 0.8962, 0.8978, 0.8962, 0.8973,
|
||||
0.8993, 0.8976, 0.8995, 0.9016, 0.8982, 0.8972, 0.8974, 0.8949, 0.8940, 0.8947, 0.8936, 0.8939,
|
||||
0.8951, 0.8956, 0.9017, 0.9167, 0.9436, 0.9690, 1.0003, 1.0225, 1.0381, 1.0491, 1.0545, 1.0604,
|
||||
1.0761, 1.0929, 1.1089, 1.1196, 1.1176, 1.1156, 1.1117, 1.1070
|
||||
]
|
||||
|
||||
DATA_MEAN_128D = [
|
||||
-3.3462, -2.6723, -2.4893, -2.3143, -2.2664, -2.3317, -2.1802, -2.4006, -2.2357, -2.4597,
|
||||
-2.3717, -2.4690, -2.5142, -2.4919, -2.6610, -2.5047, -2.7483, -2.5926, -2.7462, -2.7033,
|
||||
-2.7386, -2.8112, -2.7502, -2.9594, -2.7473, -3.0035, -2.8891, -2.9922, -2.9856, -3.0157,
|
||||
-3.1191, -2.9893, -3.1718, -3.0745, -3.1879, -3.2310, -3.1424, -3.2296, -3.2791, -3.2782,
|
||||
-3.2756, -3.3134, -3.3509, -3.3750, -3.3951, -3.3698, -3.4505, -3.4509, -3.5089, -3.4647,
|
||||
-3.5536, -3.5788, -3.5867, -3.6036, -3.6400, -3.6747, -3.7072, -3.7279, -3.7283, -3.7795,
|
||||
-3.8259, -3.8447, -3.8663, -3.9182, -3.9605, -3.9861, -4.0105, -4.0373, -4.0762, -4.1121,
|
||||
-4.1488, -4.1874, -4.2461, -4.3170, -4.3639, -4.4452, -4.5282, -4.6297, -4.7019, -4.7960,
|
||||
-4.8700, -4.9507, -5.0303, -5.0866, -5.1634, -5.2342, -5.3242, -5.4053, -5.4927, -5.5712,
|
||||
-5.6464, -5.7052, -5.7619, -5.8410, -5.9188, -6.0103, -6.0955, -6.1673, -6.2362, -6.3120,
|
||||
-6.3926, -6.4797, -6.5565, -6.6511, -6.8130, -6.9961, -7.1275, -7.2457, -7.3576, -7.4663,
|
||||
-7.6136, -7.7469, -7.8815, -8.0132, -8.1515, -8.3071, -8.4722, -8.7418, -9.3975, -9.6628,
|
||||
-9.7671, -9.8863, -9.9992, -10.0860, -10.1709, -10.5418, -11.2795, -11.3861
|
||||
]
|
||||
|
||||
DATA_STD_128D = [
|
||||
2.3804, 2.4368, 2.3772, 2.3145, 2.2803, 2.2510, 2.2316, 2.2083, 2.1996, 2.1835, 2.1769, 2.1659,
|
||||
2.1631, 2.1618, 2.1540, 2.1606, 2.1571, 2.1567, 2.1612, 2.1579, 2.1679, 2.1683, 2.1634, 2.1557,
|
||||
2.1668, 2.1518, 2.1415, 2.1449, 2.1406, 2.1350, 2.1313, 2.1415, 2.1281, 2.1352, 2.1219, 2.1182,
|
||||
2.1327, 2.1195, 2.1137, 2.1080, 2.1179, 2.1036, 2.1087, 2.1036, 2.1015, 2.1068, 2.0975, 2.0991,
|
||||
2.0902, 2.1015, 2.0857, 2.0920, 2.0893, 2.0897, 2.0910, 2.0881, 2.0925, 2.0873, 2.0960, 2.0900,
|
||||
2.0957, 2.0958, 2.0978, 2.0936, 2.0886, 2.0905, 2.0845, 2.0855, 2.0796, 2.0840, 2.0813, 2.0817,
|
||||
2.0838, 2.0840, 2.0917, 2.1061, 2.1431, 2.1976, 2.2482, 2.3055, 2.3700, 2.4088, 2.4372, 2.4609,
|
||||
2.4731, 2.4847, 2.5072, 2.5451, 2.5772, 2.6147, 2.6529, 2.6596, 2.6645, 2.6726, 2.6803, 2.6812,
|
||||
2.6899, 2.6916, 2.6931, 2.6998, 2.7062, 2.7262, 2.7222, 2.7158, 2.7041, 2.7485, 2.7491, 2.7451,
|
||||
2.7485, 2.7233, 2.7297, 2.7233, 2.7145, 2.6958, 2.6788, 2.6439, 2.6007, 2.4786, 2.2469, 2.1877,
|
||||
2.1392, 2.0717, 2.0107, 1.9676, 1.9140, 1.7102, 0.9101, 0.7164
|
||||
]
|
||||
|
||||
|
||||
class VAE(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
data_dim: int,
|
||||
embed_dim: int,
|
||||
hidden_dim: int,
|
||||
):
|
||||
super().__init__()
|
||||
|
||||
if data_dim == 80:
|
||||
self.data_mean = nn.Buffer(torch.tensor(DATA_MEAN_80D, dtype=torch.float32))
|
||||
self.data_std = nn.Buffer(torch.tensor(DATA_STD_80D, dtype=torch.float32))
|
||||
elif data_dim == 128:
|
||||
self.data_mean = nn.Buffer(torch.tensor(DATA_MEAN_128D, dtype=torch.float32))
|
||||
self.data_std = nn.Buffer(torch.tensor(DATA_STD_128D, dtype=torch.float32))
|
||||
|
||||
self.data_mean = self.data_mean.view(1, -1, 1)
|
||||
self.data_std = self.data_std.view(1, -1, 1)
|
||||
|
||||
self.encoder = Encoder1D(
|
||||
dim=hidden_dim,
|
||||
ch_mult=(1, 2, 4),
|
||||
num_res_blocks=2,
|
||||
attn_layers=[3],
|
||||
down_layers=[0],
|
||||
in_dim=data_dim,
|
||||
embed_dim=embed_dim,
|
||||
)
|
||||
self.decoder = Decoder1D(
|
||||
dim=hidden_dim,
|
||||
ch_mult=(1, 2, 4),
|
||||
num_res_blocks=2,
|
||||
attn_layers=[3],
|
||||
down_layers=[0],
|
||||
in_dim=data_dim,
|
||||
out_dim=data_dim,
|
||||
embed_dim=embed_dim,
|
||||
)
|
||||
|
||||
self.embed_dim = embed_dim
|
||||
# self.quant_conv = nn.Conv1d(2 * embed_dim, 2 * embed_dim, 1)
|
||||
# self.post_quant_conv = nn.Conv1d(embed_dim, embed_dim, 1)
|
||||
|
||||
self.initialize_weights()
|
||||
|
||||
def initialize_weights(self):
|
||||
pass
|
||||
|
||||
def encode(self, x: torch.Tensor, normalize: bool = True) -> DiagonalGaussianDistribution:
|
||||
if normalize:
|
||||
x = self.normalize(x)
|
||||
moments = self.encoder(x)
|
||||
posterior = DiagonalGaussianDistribution(moments)
|
||||
return posterior
|
||||
|
||||
def decode(self, z: torch.Tensor, unnormalize: bool = True) -> torch.Tensor:
|
||||
dec = self.decoder(z)
|
||||
if unnormalize:
|
||||
dec = self.unnormalize(dec)
|
||||
return dec
|
||||
|
||||
def normalize(self, x: torch.Tensor) -> torch.Tensor:
|
||||
return (x - self.data_mean) / self.data_std
|
||||
|
||||
def unnormalize(self, x: torch.Tensor) -> torch.Tensor:
|
||||
return x * self.data_std + self.data_mean
|
||||
|
||||
def forward(
|
||||
self,
|
||||
x: torch.Tensor,
|
||||
sample_posterior: bool = True,
|
||||
rng: Optional[torch.Generator] = None,
|
||||
normalize: bool = True,
|
||||
unnormalize: bool = True,
|
||||
) -> tuple[torch.Tensor, DiagonalGaussianDistribution]:
|
||||
|
||||
posterior = self.encode(x, normalize=normalize)
|
||||
if sample_posterior:
|
||||
z = posterior.sample(rng)
|
||||
else:
|
||||
z = posterior.mode()
|
||||
dec = self.decode(z, unnormalize=unnormalize)
|
||||
return dec, posterior
|
||||
|
||||
def load_weights(self, src_dict) -> None:
|
||||
self.load_state_dict(src_dict, strict=True)
|
||||
|
||||
@property
|
||||
def device(self) -> torch.device:
|
||||
return next(self.parameters()).device
|
||||
|
||||
def get_last_layer(self):
|
||||
return self.decoder.conv_out.weight
|
||||
|
||||
def remove_weight_norm(self):
|
||||
for name, m in self.named_modules():
|
||||
if isinstance(m, MPConv1D):
|
||||
m.remove_weight_norm()
|
||||
log.debug(f"Removed weight norm from {name}")
|
||||
return self
|
||||
|
||||
|
||||
class Encoder1D(nn.Module):
|
||||
|
||||
def __init__(self,
|
||||
*,
|
||||
dim: int,
|
||||
ch_mult: tuple[int] = (1, 2, 4, 8),
|
||||
num_res_blocks: int,
|
||||
attn_layers: list[int] = [],
|
||||
down_layers: list[int] = [],
|
||||
resamp_with_conv: bool = True,
|
||||
in_dim: int,
|
||||
embed_dim: int,
|
||||
double_z: bool = True,
|
||||
kernel_size: int = 3,
|
||||
clip_act: float = 256.0):
|
||||
super().__init__()
|
||||
self.dim = dim
|
||||
self.num_layers = len(ch_mult)
|
||||
self.num_res_blocks = num_res_blocks
|
||||
self.in_channels = in_dim
|
||||
self.clip_act = clip_act
|
||||
self.down_layers = down_layers
|
||||
self.attn_layers = attn_layers
|
||||
self.conv_in = MPConv1D(in_dim, self.dim, kernel_size=kernel_size)
|
||||
|
||||
in_ch_mult = (1, ) + tuple(ch_mult)
|
||||
self.in_ch_mult = in_ch_mult
|
||||
# downsampling
|
||||
self.down = nn.ModuleList()
|
||||
for i_level in range(self.num_layers):
|
||||
block = nn.ModuleList()
|
||||
attn = nn.ModuleList()
|
||||
block_in = dim * in_ch_mult[i_level]
|
||||
block_out = dim * ch_mult[i_level]
|
||||
for i_block in range(self.num_res_blocks):
|
||||
block.append(
|
||||
ResnetBlock1D(in_dim=block_in,
|
||||
out_dim=block_out,
|
||||
kernel_size=kernel_size,
|
||||
use_norm=True))
|
||||
block_in = block_out
|
||||
if i_level in attn_layers:
|
||||
attn.append(AttnBlock1D(block_in))
|
||||
down = nn.Module()
|
||||
down.block = block
|
||||
down.attn = attn
|
||||
if i_level in down_layers:
|
||||
down.downsample = Downsample1D(block_in, resamp_with_conv)
|
||||
self.down.append(down)
|
||||
|
||||
# middle
|
||||
self.mid = nn.Module()
|
||||
self.mid.block_1 = ResnetBlock1D(in_dim=block_in,
|
||||
out_dim=block_in,
|
||||
kernel_size=kernel_size,
|
||||
use_norm=True)
|
||||
self.mid.attn_1 = AttnBlock1D(block_in)
|
||||
self.mid.block_2 = ResnetBlock1D(in_dim=block_in,
|
||||
out_dim=block_in,
|
||||
kernel_size=kernel_size,
|
||||
use_norm=True)
|
||||
|
||||
# end
|
||||
self.conv_out = MPConv1D(block_in,
|
||||
2 * embed_dim if double_z else embed_dim,
|
||||
kernel_size=kernel_size)
|
||||
|
||||
self.learnable_gain = nn.Parameter(torch.zeros([]))
|
||||
|
||||
def forward(self, x):
|
||||
|
||||
# downsampling
|
||||
hs = [self.conv_in(x)]
|
||||
for i_level in range(self.num_layers):
|
||||
for i_block in range(self.num_res_blocks):
|
||||
h = self.down[i_level].block[i_block](hs[-1])
|
||||
if len(self.down[i_level].attn) > 0:
|
||||
h = self.down[i_level].attn[i_block](h)
|
||||
h = h.clamp(-self.clip_act, self.clip_act)
|
||||
hs.append(h)
|
||||
if i_level in self.down_layers:
|
||||
hs.append(self.down[i_level].downsample(hs[-1]))
|
||||
|
||||
# middle
|
||||
h = hs[-1]
|
||||
h = self.mid.block_1(h)
|
||||
h = self.mid.attn_1(h)
|
||||
h = self.mid.block_2(h)
|
||||
h = h.clamp(-self.clip_act, self.clip_act)
|
||||
|
||||
# end
|
||||
h = nonlinearity(h)
|
||||
h = self.conv_out(h, gain=(self.learnable_gain + 1))
|
||||
return h
|
||||
|
||||
|
||||
class Decoder1D(nn.Module):
|
||||
|
||||
def __init__(self,
|
||||
*,
|
||||
dim: int,
|
||||
out_dim: int,
|
||||
ch_mult: tuple[int] = (1, 2, 4, 8),
|
||||
num_res_blocks: int,
|
||||
attn_layers: list[int] = [],
|
||||
down_layers: list[int] = [],
|
||||
kernel_size: int = 3,
|
||||
resamp_with_conv: bool = True,
|
||||
in_dim: int,
|
||||
embed_dim: int,
|
||||
clip_act: float = 256.0):
|
||||
super().__init__()
|
||||
self.ch = dim
|
||||
self.num_layers = len(ch_mult)
|
||||
self.num_res_blocks = num_res_blocks
|
||||
self.in_channels = in_dim
|
||||
self.clip_act = clip_act
|
||||
self.down_layers = [i + 1 for i in down_layers] # each downlayer add one
|
||||
|
||||
# compute in_ch_mult, block_in and curr_res at lowest res
|
||||
block_in = dim * ch_mult[self.num_layers - 1]
|
||||
|
||||
# z to block_in
|
||||
self.conv_in = MPConv1D(embed_dim, block_in, kernel_size=kernel_size)
|
||||
|
||||
# middle
|
||||
self.mid = nn.Module()
|
||||
self.mid.block_1 = ResnetBlock1D(in_dim=block_in, out_dim=block_in, use_norm=True)
|
||||
self.mid.attn_1 = AttnBlock1D(block_in)
|
||||
self.mid.block_2 = ResnetBlock1D(in_dim=block_in, out_dim=block_in, use_norm=True)
|
||||
|
||||
# upsampling
|
||||
self.up = nn.ModuleList()
|
||||
for i_level in reversed(range(self.num_layers)):
|
||||
block = nn.ModuleList()
|
||||
attn = nn.ModuleList()
|
||||
block_out = dim * ch_mult[i_level]
|
||||
for i_block in range(self.num_res_blocks + 1):
|
||||
block.append(ResnetBlock1D(in_dim=block_in, out_dim=block_out, use_norm=True))
|
||||
block_in = block_out
|
||||
if i_level in attn_layers:
|
||||
attn.append(AttnBlock1D(block_in))
|
||||
up = nn.Module()
|
||||
up.block = block
|
||||
up.attn = attn
|
||||
if i_level in self.down_layers:
|
||||
up.upsample = Upsample1D(block_in, resamp_with_conv)
|
||||
self.up.insert(0, up) # prepend to get consistent order
|
||||
|
||||
# end
|
||||
self.conv_out = MPConv1D(block_in, out_dim, kernel_size=kernel_size)
|
||||
self.learnable_gain = nn.Parameter(torch.zeros([]))
|
||||
|
||||
def forward(self, z):
|
||||
# z to block_in
|
||||
h = self.conv_in(z)
|
||||
|
||||
# middle
|
||||
h = self.mid.block_1(h)
|
||||
h = self.mid.attn_1(h)
|
||||
h = self.mid.block_2(h)
|
||||
h = h.clamp(-self.clip_act, self.clip_act)
|
||||
|
||||
# upsampling
|
||||
for i_level in reversed(range(self.num_layers)):
|
||||
for i_block in range(self.num_res_blocks + 1):
|
||||
h = self.up[i_level].block[i_block](h)
|
||||
if len(self.up[i_level].attn) > 0:
|
||||
h = self.up[i_level].attn[i_block](h)
|
||||
h = h.clamp(-self.clip_act, self.clip_act)
|
||||
if i_level in self.down_layers:
|
||||
h = self.up[i_level].upsample(h)
|
||||
|
||||
h = nonlinearity(h)
|
||||
h = self.conv_out(h, gain=(self.learnable_gain + 1))
|
||||
return h
|
||||
|
||||
|
||||
def VAE_16k(**kwargs) -> VAE:
|
||||
return VAE(data_dim=80, embed_dim=20, hidden_dim=384, **kwargs)
|
||||
|
||||
|
||||
def VAE_44k(**kwargs) -> VAE:
|
||||
return VAE(data_dim=128, embed_dim=40, hidden_dim=512, **kwargs)
|
||||
|
||||
|
||||
def get_my_vae(name: str, **kwargs) -> VAE:
|
||||
if name == '16k':
|
||||
return VAE_16k(**kwargs)
|
||||
if name == '44k':
|
||||
return VAE_44k(**kwargs)
|
||||
raise ValueError(f'Unknown model: {name}')
|
||||
|
||||
|
||||
if __name__ == '__main__':
    # note: get_my_vae() only understands '16k' and '44k'; '16k' is used here for the smoke test
    network = get_my_vae('16k')

    # print the number of parameters in terms of millions
    num_params = sum(p.numel() for p in network.parameters()) / 1e6
    print(f'Number of parameters: {num_params:.2f}M')
117
postprocessing/mmaudio/ext/autoencoder/vae_modules.py
Normal file
@ -0,0 +1,117 @@
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
from einops import rearrange
|
||||
|
||||
from ...ext.autoencoder.edm2_utils import (MPConv1D, mp_silu, mp_sum, normalize)
|
||||
|
||||
|
||||
def nonlinearity(x):
|
||||
# swish
|
||||
return mp_silu(x)
|
||||
|
||||
|
||||
class ResnetBlock1D(nn.Module):
|
||||
|
||||
def __init__(self, *, in_dim, out_dim=None, conv_shortcut=False, kernel_size=3, use_norm=True):
|
||||
super().__init__()
|
||||
self.in_dim = in_dim
|
||||
out_dim = in_dim if out_dim is None else out_dim
|
||||
self.out_dim = out_dim
|
||||
self.use_conv_shortcut = conv_shortcut
|
||||
self.use_norm = use_norm
|
||||
|
||||
self.conv1 = MPConv1D(in_dim, out_dim, kernel_size=kernel_size)
|
||||
self.conv2 = MPConv1D(out_dim, out_dim, kernel_size=kernel_size)
|
||||
if self.in_dim != self.out_dim:
|
||||
if self.use_conv_shortcut:
|
||||
self.conv_shortcut = MPConv1D(in_dim, out_dim, kernel_size=kernel_size)
|
||||
else:
|
||||
self.nin_shortcut = MPConv1D(in_dim, out_dim, kernel_size=1)
|
||||
|
||||
def forward(self, x: torch.Tensor) -> torch.Tensor:
|
||||
|
||||
# pixel norm
|
||||
if self.use_norm:
|
||||
x = normalize(x, dim=1)
|
||||
|
||||
h = x
|
||||
h = nonlinearity(h)
|
||||
h = self.conv1(h)
|
||||
|
||||
h = nonlinearity(h)
|
||||
h = self.conv2(h)
|
||||
|
||||
if self.in_dim != self.out_dim:
|
||||
if self.use_conv_shortcut:
|
||||
x = self.conv_shortcut(x)
|
||||
else:
|
||||
x = self.nin_shortcut(x)
|
||||
|
||||
return mp_sum(x, h, t=0.3)
|
||||
|
||||
|
||||
class AttnBlock1D(nn.Module):
|
||||
|
||||
def __init__(self, in_channels, num_heads=1):
|
||||
super().__init__()
|
||||
self.in_channels = in_channels
|
||||
|
||||
self.num_heads = num_heads
|
||||
self.qkv = MPConv1D(in_channels, in_channels * 3, kernel_size=1)
|
||||
self.proj_out = MPConv1D(in_channels, in_channels, kernel_size=1)
|
||||
|
||||
def forward(self, x):
|
||||
h = x
|
||||
y = self.qkv(h)
|
||||
y = y.reshape(y.shape[0], self.num_heads, -1, 3, y.shape[-1])
|
||||
q, k, v = normalize(y, dim=2).unbind(3)
|
||||
|
||||
q = rearrange(q, 'b h c l -> b h l c')
|
||||
k = rearrange(k, 'b h c l -> b h l c')
|
||||
v = rearrange(v, 'b h c l -> b h l c')
|
||||
|
||||
h = F.scaled_dot_product_attention(q, k, v)
|
||||
h = rearrange(h, 'b h l c -> b (h c) l')
|
||||
|
||||
h = self.proj_out(h)
|
||||
|
||||
return mp_sum(x, h, t=0.3)
|
||||
|
||||
|
||||
class Upsample1D(nn.Module):
|
||||
|
||||
def __init__(self, in_channels, with_conv):
|
||||
super().__init__()
|
||||
self.with_conv = with_conv
|
||||
if self.with_conv:
|
||||
self.conv = MPConv1D(in_channels, in_channels, kernel_size=3)
|
||||
|
||||
def forward(self, x):
|
||||
x = F.interpolate(x, scale_factor=2.0, mode='nearest-exact') # support 3D tensor(B,C,T)
|
||||
if self.with_conv:
|
||||
x = self.conv(x)
|
||||
return x
|
||||
|
||||
|
||||
class Downsample1D(nn.Module):
|
||||
|
||||
def __init__(self, in_channels, with_conv):
|
||||
super().__init__()
|
||||
self.with_conv = with_conv
|
||||
if self.with_conv:
|
||||
# no asymmetric padding in torch conv, must do it ourselves
|
||||
self.conv1 = MPConv1D(in_channels, in_channels, kernel_size=1)
|
||||
self.conv2 = MPConv1D(in_channels, in_channels, kernel_size=1)
|
||||
|
||||
def forward(self, x):
|
||||
|
||||
if self.with_conv:
|
||||
x = self.conv1(x)
|
||||
|
||||
x = F.avg_pool1d(x, kernel_size=2, stride=2)
|
||||
|
||||
if self.with_conv:
|
||||
x = self.conv2(x)
|
||||
|
||||
return x
|
||||
21
postprocessing/mmaudio/ext/bigvgan/LICENSE
Normal file
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2022 NVIDIA CORPORATION.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
1
postprocessing/mmaudio/ext/bigvgan/__init__.py
Normal file
@ -0,0 +1 @@
from .bigvgan import BigVGAN
120
postprocessing/mmaudio/ext/bigvgan/activations.py
Normal file
@ -0,0 +1,120 @@
|
||||
# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license.
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import torch
|
||||
from torch import nn, sin, pow
|
||||
from torch.nn import Parameter
|
||||
|
||||
|
||||
class Snake(nn.Module):
|
||||
'''
|
||||
Implementation of a sine-based periodic activation function
|
||||
Shape:
|
||||
- Input: (B, C, T)
|
||||
- Output: (B, C, T), same shape as the input
|
||||
Parameters:
|
||||
- alpha - trainable parameter
|
||||
References:
|
||||
- This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda:
|
||||
https://arxiv.org/abs/2006.08195
|
||||
Examples:
|
||||
>>> a1 = Snake(256)
|
||||
>>> x = torch.randn(256)
|
||||
>>> x = a1(x)
|
||||
'''
|
||||
def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False):
|
||||
'''
|
||||
Initialization.
|
||||
INPUT:
|
||||
- in_features: shape of the input
|
||||
- alpha: trainable parameter
|
||||
alpha is initialized to 1 by default, higher values = higher-frequency.
|
||||
alpha will be trained along with the rest of your model.
|
||||
'''
|
||||
super(Snake, self).__init__()
|
||||
self.in_features = in_features
|
||||
|
||||
# initialize alpha
|
||||
self.alpha_logscale = alpha_logscale
|
||||
if self.alpha_logscale: # log scale alphas initialized to zeros
|
||||
self.alpha = Parameter(torch.zeros(in_features) * alpha)
|
||||
else: # linear scale alphas initialized to ones
|
||||
self.alpha = Parameter(torch.ones(in_features) * alpha)
|
||||
|
||||
self.alpha.requires_grad = alpha_trainable
|
||||
|
||||
self.no_div_by_zero = 0.000000001
|
||||
|
||||
def forward(self, x):
|
||||
'''
|
||||
Forward pass of the function.
|
||||
Applies the function to the input elementwise.
|
||||
Snake ∶= x + 1/a * sin^2 (xa)
|
||||
'''
|
||||
alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # line up with x to [B, C, T]
|
||||
if self.alpha_logscale:
|
||||
alpha = torch.exp(alpha)
|
||||
x = x + (1.0 / (alpha + self.no_div_by_zero)) * pow(sin(x * alpha), 2)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
class SnakeBeta(nn.Module):
|
||||
'''
|
||||
A modified Snake function which uses separate parameters for the magnitude of the periodic components
|
||||
Shape:
|
||||
- Input: (B, C, T)
|
||||
- Output: (B, C, T), same shape as the input
|
||||
Parameters:
|
||||
- alpha - trainable parameter that controls frequency
|
||||
- beta - trainable parameter that controls magnitude
|
||||
References:
|
||||
- This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda:
|
||||
https://arxiv.org/abs/2006.08195
|
||||
Examples:
|
||||
>>> a1 = SnakeBeta(256)
|
||||
>>> x = torch.randn(256)
|
||||
>>> x = a1(x)
|
||||
'''
|
||||
def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False):
|
||||
'''
|
||||
Initialization.
|
||||
INPUT:
|
||||
- in_features: shape of the input
|
||||
- alpha - trainable parameter that controls frequency
|
||||
- beta - trainable parameter that controls magnitude
|
||||
alpha is initialized to 1 by default, higher values = higher-frequency.
|
||||
beta is initialized to 1 by default, higher values = higher-magnitude.
|
||||
alpha will be trained along with the rest of your model.
|
||||
'''
|
||||
super(SnakeBeta, self).__init__()
|
||||
self.in_features = in_features
|
||||
|
||||
# initialize alpha
|
||||
self.alpha_logscale = alpha_logscale
|
||||
if self.alpha_logscale: # log scale alphas initialized to zeros
|
||||
self.alpha = Parameter(torch.zeros(in_features) * alpha)
|
||||
self.beta = Parameter(torch.zeros(in_features) * alpha)
|
||||
else: # linear scale alphas initialized to ones
|
||||
self.alpha = Parameter(torch.ones(in_features) * alpha)
|
||||
self.beta = Parameter(torch.ones(in_features) * alpha)
|
||||
|
||||
self.alpha.requires_grad = alpha_trainable
|
||||
self.beta.requires_grad = alpha_trainable
|
||||
|
||||
self.no_div_by_zero = 0.000000001
|
||||
|
||||
def forward(self, x):
|
||||
'''
|
||||
Forward pass of the function.
|
||||
Applies the function to the input elementwise.
|
||||
SnakeBeta ∶= x + 1/b * sin^2 (xa)
|
||||
'''
|
||||
alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # line up with x to [B, C, T]
|
||||
beta = self.beta.unsqueeze(0).unsqueeze(-1)
|
||||
if self.alpha_logscale:
|
||||
alpha = torch.exp(alpha)
|
||||
beta = torch.exp(beta)
|
||||
x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2)
|
||||
|
||||
return x
|
||||
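Since alpha defaults to 1, the Snake activation above reduces to x + sin²(x); a tiny self-check (illustrative only):

# Illustrative check: with alpha = 1, Snake(x) == x + sin(x) ** 2 (up to the 1e-9 guard).
import torch

act = Snake(in_features=8, alpha=1.0)
x = torch.randn(2, 8, 16)                  # (B, C, T)
print(torch.allclose(act(x), x + torch.sin(x) ** 2, atol=1e-6))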
6
postprocessing/mmaudio/ext/bigvgan/alias_free_torch/__init__.py
Normal file
@ -0,0 +1,6 @@
|
||||
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
from .filter import *
|
||||
from .resample import *
|
||||
from .act import *
|
||||
28
postprocessing/mmaudio/ext/bigvgan/alias_free_torch/act.py
Normal file
@ -0,0 +1,28 @@
|
||||
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import torch.nn as nn
|
||||
from .resample import UpSample1d, DownSample1d
|
||||
|
||||
|
||||
class Activation1d(nn.Module):
|
||||
def __init__(self,
|
||||
activation,
|
||||
up_ratio: int = 2,
|
||||
down_ratio: int = 2,
|
||||
up_kernel_size: int = 12,
|
||||
down_kernel_size: int = 12):
|
||||
super().__init__()
|
||||
self.up_ratio = up_ratio
|
||||
self.down_ratio = down_ratio
|
||||
self.act = activation
|
||||
self.upsample = UpSample1d(up_ratio, up_kernel_size)
|
||||
self.downsample = DownSample1d(down_ratio, down_kernel_size)
|
||||
|
||||
# x: [B,C,T]
|
||||
def forward(self, x):
|
||||
x = self.upsample(x)
|
||||
x = self.act(x)
|
||||
x = self.downsample(x)
|
||||
|
||||
return x
|
||||
95
postprocessing/mmaudio/ext/bigvgan/alias_free_torch/filter.py
Normal file
@ -0,0 +1,95 @@
|
||||
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
import math
|
||||
|
||||
if 'sinc' in dir(torch):
|
||||
sinc = torch.sinc
|
||||
else:
|
||||
# This code is adopted from adefossez's julius.core.sinc under the MIT License
|
||||
# https://adefossez.github.io/julius/julius/core.html
|
||||
# LICENSE is in incl_licenses directory.
|
||||
def sinc(x: torch.Tensor):
|
||||
"""
|
||||
Implementation of sinc, i.e. sin(pi * x) / (pi * x)
|
||||
__Warning__: Different to julius.sinc, the input is multiplied by `pi`!
|
||||
"""
|
||||
return torch.where(x == 0,
|
||||
torch.tensor(1., device=x.device, dtype=x.dtype),
|
||||
torch.sin(math.pi * x) / math.pi / x)
|
||||
|
||||
|
||||
# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License
|
||||
# https://adefossez.github.io/julius/julius/lowpass.html
|
||||
# LICENSE is in incl_licenses directory.
|
||||
def kaiser_sinc_filter1d(cutoff, half_width, kernel_size): # return filter [1,1,kernel_size]
|
||||
even = (kernel_size % 2 == 0)
|
||||
half_size = kernel_size // 2
|
||||
|
||||
#For kaiser window
|
||||
delta_f = 4 * half_width
|
||||
A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95
|
||||
if A > 50.:
|
||||
beta = 0.1102 * (A - 8.7)
|
||||
elif A >= 21.:
|
||||
beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.)
|
||||
else:
|
||||
beta = 0.
|
||||
window = torch.kaiser_window(kernel_size, beta=beta, periodic=False)
|
||||
|
||||
# ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio
|
||||
if even:
|
||||
time = (torch.arange(-half_size, half_size) + 0.5)
|
||||
else:
|
||||
time = torch.arange(kernel_size) - half_size
|
||||
if cutoff == 0:
|
||||
filter_ = torch.zeros_like(time)
|
||||
else:
|
||||
filter_ = 2 * cutoff * window * sinc(2 * cutoff * time)
|
||||
# Normalize filter to have sum = 1, otherwise we will have a small leakage
|
||||
# of the constant component in the input signal.
|
||||
filter_ /= filter_.sum()
|
||||
filter = filter_.view(1, 1, kernel_size)
|
||||
|
||||
return filter
|
||||
|
||||
|
||||
class LowPassFilter1d(nn.Module):
|
||||
def __init__(self,
|
||||
cutoff=0.5,
|
||||
half_width=0.6,
|
||||
stride: int = 1,
|
||||
padding: bool = True,
|
||||
padding_mode: str = 'replicate',
|
||||
kernel_size: int = 12):
|
||||
# kernel_size should be even number for stylegan3 setup,
|
||||
# in this implementation, odd number is also possible.
|
||||
super().__init__()
|
||||
if cutoff < -0.:
|
||||
raise ValueError("Minimum cutoff must be larger than zero.")
|
||||
if cutoff > 0.5:
|
||||
raise ValueError("A cutoff above 0.5 does not make sense.")
|
||||
self.kernel_size = kernel_size
|
||||
self.even = (kernel_size % 2 == 0)
|
||||
self.pad_left = kernel_size // 2 - int(self.even)
|
||||
self.pad_right = kernel_size // 2
|
||||
self.stride = stride
|
||||
self.padding = padding
|
||||
self.padding_mode = padding_mode
|
||||
filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size)
|
||||
self.register_buffer("filter", filter)
|
||||
|
||||
#input [B, C, T]
|
||||
def forward(self, x):
|
||||
_, C, _ = x.shape
|
||||
|
||||
if self.padding:
|
||||
x = F.pad(x, (self.pad_left, self.pad_right),
|
||||
mode=self.padding_mode)
|
||||
out = F.conv1d(x, self.filter.expand(C, -1, -1),
|
||||
stride=self.stride, groups=C)
|
||||
|
||||
return out
|
||||
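
Note: to make the filter design above concrete, a small sketch (values are illustrative; import paths assumed from this commit's layout). The kernel is a Kaiser-windowed sinc normalized to unit sum, and the module form also decimates by `stride`:

import torch
from postprocessing.mmaudio.ext.bigvgan.alias_free_torch.filter import LowPassFilter1d, kaiser_sinc_filter1d

kernel = kaiser_sinc_filter1d(cutoff=0.25, half_width=0.3, kernel_size=12)
print(kernel.shape, float(kernel.sum()))     # torch.Size([1, 1, 12]), ~1.0

lpf = LowPassFilter1d(cutoff=0.25, half_width=0.3, stride=2, kernel_size=12)
y = lpf(torch.randn(1, 4, 1000))             # -> [1, 4, 500], half-rate output
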
@ -0,0 +1,49 @@
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
# LICENSE is in incl_licenses directory.

import torch.nn as nn
from torch.nn import functional as F
from .filter import LowPassFilter1d
from .filter import kaiser_sinc_filter1d


class UpSample1d(nn.Module):
    def __init__(self, ratio=2, kernel_size=None):
        super().__init__()
        self.ratio = ratio
        self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size
        self.stride = ratio
        self.pad = self.kernel_size // ratio - 1
        self.pad_left = self.pad * self.stride + (self.kernel_size - self.stride) // 2
        self.pad_right = self.pad * self.stride + (self.kernel_size - self.stride + 1) // 2
        filter = kaiser_sinc_filter1d(cutoff=0.5 / ratio,
                                      half_width=0.6 / ratio,
                                      kernel_size=self.kernel_size)
        self.register_buffer("filter", filter)

    # x: [B, C, T]
    def forward(self, x):
        _, C, _ = x.shape

        x = F.pad(x, (self.pad, self.pad), mode='replicate')
        x = self.ratio * F.conv_transpose1d(
            x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C)
        x = x[..., self.pad_left:-self.pad_right]

        return x


class DownSample1d(nn.Module):
    def __init__(self, ratio=2, kernel_size=None):
        super().__init__()
        self.ratio = ratio
        self.kernel_size = int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size
        self.lowpass = LowPassFilter1d(cutoff=0.5 / ratio,
                                       half_width=0.6 / ratio,
                                       stride=ratio,
                                       kernel_size=self.kernel_size)

    def forward(self, x):
        xx = self.lowpass(x)

        return xx
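
Note: a minimal round-trip sketch for the two resamplers above (ratio 2 is the default used throughout BigVGAN; shapes are illustrative):

import torch
from postprocessing.mmaudio.ext.bigvgan.alias_free_torch.resample import UpSample1d, DownSample1d

up, down = UpSample1d(ratio=2), DownSample1d(ratio=2)
x = torch.randn(2, 8, 512)      # [B, C, T]
hi = up(x)                      # [2, 8, 1024]  anti-aliased 2x oversampling
lo = down(hi)                   # [2, 8, 512]   back to the original rate
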
32
postprocessing/mmaudio/ext/bigvgan/bigvgan.py
Normal file
@ -0,0 +1,32 @@
from pathlib import Path

import torch
import torch.nn as nn
from omegaconf import OmegaConf

from ...ext.bigvgan.models import BigVGANVocoder

_bigvgan_vocoder_path = Path(__file__).parent / 'bigvgan_vocoder.yml'


class BigVGAN(nn.Module):

    def __init__(self, ckpt_path, config_path=_bigvgan_vocoder_path):
        super().__init__()
        vocoder_cfg = OmegaConf.load(config_path)
        self.vocoder = BigVGANVocoder(vocoder_cfg).eval()
        vocoder_ckpt = torch.load(ckpt_path, map_location='cpu', weights_only=True)['generator']
        self.vocoder.load_state_dict(vocoder_ckpt)

        self.weight_norm_removed = False
        self.remove_weight_norm()

    @torch.inference_mode()
    def forward(self, x):
        assert self.weight_norm_removed, 'call remove_weight_norm() before inference'
        return self.vocoder(x)

    def remove_weight_norm(self):
        self.vocoder.remove_weight_norm()
        self.weight_norm_removed = True
        return self
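
Note: a hedged usage sketch for the wrapper above. The checkpoint path is a placeholder (no weights ship with this commit); the input is an 80-bin mel spectrogram as configured in the bundled bigvgan_vocoder.yml:

import torch
from postprocessing.mmaudio.ext.bigvgan.bigvgan import BigVGAN

vocoder = BigVGAN('path/to/bigvgan_generator.pt')   # hypothetical checkpoint path
mel = torch.randn(1, 80, 200)                       # [B, num_mels, frames]
wav = vocoder(mel)                                  # [1, 1, samples], tanh-bounded waveform
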
63
postprocessing/mmaudio/ext/bigvgan/bigvgan_vocoder.yml
Normal file
@ -0,0 +1,63 @@
|
||||
resblock: '1'
|
||||
num_gpus: 0
|
||||
batch_size: 64
|
||||
num_mels: 80
|
||||
learning_rate: 0.0001
|
||||
adam_b1: 0.8
|
||||
adam_b2: 0.99
|
||||
lr_decay: 0.999
|
||||
seed: 1234
|
||||
upsample_rates:
|
||||
- 4
|
||||
- 4
|
||||
- 2
|
||||
- 2
|
||||
- 2
|
||||
- 2
|
||||
upsample_kernel_sizes:
|
||||
- 8
|
||||
- 8
|
||||
- 4
|
||||
- 4
|
||||
- 4
|
||||
- 4
|
||||
upsample_initial_channel: 1536
|
||||
resblock_kernel_sizes:
|
||||
- 3
|
||||
- 7
|
||||
- 11
|
||||
resblock_dilation_sizes:
|
||||
- - 1
|
||||
- 3
|
||||
- 5
|
||||
- - 1
|
||||
- 3
|
||||
- 5
|
||||
- - 1
|
||||
- 3
|
||||
- 5
|
||||
activation: snakebeta
|
||||
snake_logscale: true
|
||||
resolutions:
|
||||
- - 1024
|
||||
- 120
|
||||
- 600
|
||||
- - 2048
|
||||
- 240
|
||||
- 1200
|
||||
- - 512
|
||||
- 50
|
||||
- 240
|
||||
mpd_reshapes:
|
||||
- 2
|
||||
- 3
|
||||
- 5
|
||||
- 7
|
||||
- 11
|
||||
use_spectral_norm: false
|
||||
discriminator_channel_mult: 1
|
||||
num_workers: 4
|
||||
dist_config:
|
||||
dist_backend: nccl
|
||||
dist_url: tcp://localhost:54341
|
||||
world_size: 1
|
||||
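
Note: one consequence of this config worth spelling out: the upsample_rates multiply out to the hop length the vocoder assumes between mel frames, so each 80-bin mel frame becomes 4 * 4 * 2 * 2 * 2 * 2 = 256 audio samples. A minimal check, assuming the file path used elsewhere in this commit:

from omegaconf import OmegaConf

cfg = OmegaConf.load('postprocessing/mmaudio/ext/bigvgan/bigvgan_vocoder.yml')
hop = 1
for r in cfg.upsample_rates:
    hop *= r
print(hop)   # 256 samples of audio per mel frame
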
18
postprocessing/mmaudio/ext/bigvgan/env.py
Normal file
@ -0,0 +1,18 @@
# Adapted from https://github.com/jik876/hifi-gan under the MIT license.
# LICENSE is in incl_licenses directory.

import os
import shutil


class AttrDict(dict):
    def __init__(self, *args, **kwargs):
        super(AttrDict, self).__init__(*args, **kwargs)
        self.__dict__ = self


def build_env(config, config_name, path):
    t_path = os.path.join(path, config_name)
    if config != t_path:
        os.makedirs(path, exist_ok=True)
        shutil.copyfile(config, os.path.join(path, config_name))
21
postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_1
Normal file
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2020 Jungil Kong
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
21
postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_2
Normal file
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2020 Edward Dixon
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
201
postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_3
Normal file
@ -0,0 +1,201 @@
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
29
postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_4
Normal file
@ -0,0 +1,29 @@
|
||||
BSD 3-Clause License
|
||||
|
||||
Copyright (c) 2019, Seungwon Park 박승원
|
||||
All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this
|
||||
list of conditions and the following disclaimer.
|
||||
|
||||
2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
3. Neither the name of the copyright holder nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
|
||||
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
|
||||
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
|
||||
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
16
postprocessing/mmaudio/ext/bigvgan/incl_licenses/LICENSE_5
Normal file
@ -0,0 +1,16 @@
|
||||
Copyright 2020 Alexandre Défossez
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
|
||||
associated documentation files (the "Software"), to deal in the Software without restriction,
|
||||
including without limitation the rights to use, copy, modify, merge, publish, distribute,
|
||||
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all copies or
|
||||
substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT
|
||||
NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
|
||||
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
255
postprocessing/mmaudio/ext/bigvgan/models.py
Normal file
@ -0,0 +1,255 @@
|
||||
# Copyright (c) 2022 NVIDIA CORPORATION.
|
||||
# Licensed under the MIT license.
|
||||
|
||||
# Adapted from https://github.com/jik876/hifi-gan under the MIT license.
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
from torch.nn import Conv1d, ConvTranspose1d
|
||||
from torch.nn.utils.parametrizations import weight_norm
|
||||
from torch.nn.utils.parametrize import remove_parametrizations
|
||||
|
||||
from ...ext.bigvgan import activations
|
||||
from ...ext.bigvgan.alias_free_torch import *
|
||||
from ...ext.bigvgan.utils import get_padding, init_weights
|
||||
|
||||
LRELU_SLOPE = 0.1
|
||||
|
||||
|
||||
class AMPBlock1(torch.nn.Module):
|
||||
|
||||
def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5), activation=None):
|
||||
super(AMPBlock1, self).__init__()
|
||||
self.h = h
|
||||
|
||||
self.convs1 = nn.ModuleList([
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=dilation[0],
|
||||
padding=get_padding(kernel_size, dilation[0]))),
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=dilation[1],
|
||||
padding=get_padding(kernel_size, dilation[1]))),
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=dilation[2],
|
||||
padding=get_padding(kernel_size, dilation[2])))
|
||||
])
|
||||
self.convs1.apply(init_weights)
|
||||
|
||||
self.convs2 = nn.ModuleList([
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=1,
|
||||
padding=get_padding(kernel_size, 1))),
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=1,
|
||||
padding=get_padding(kernel_size, 1))),
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=1,
|
||||
padding=get_padding(kernel_size, 1)))
|
||||
])
|
||||
self.convs2.apply(init_weights)
|
||||
|
||||
self.num_layers = len(self.convs1) + len(self.convs2) # total number of conv layers
|
||||
|
||||
if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.Snake(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
"activation incorrectly specified. check the config file and look for 'activation'."
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
acts1, acts2 = self.activations[::2], self.activations[1::2]
|
||||
for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2):
|
||||
xt = a1(x)
|
||||
xt = c1(xt)
|
||||
xt = a2(xt)
|
||||
xt = c2(xt)
|
||||
x = xt + x
|
||||
|
||||
return x
|
||||
|
||||
def remove_weight_norm(self):
|
||||
for l in self.convs1:
|
||||
remove_parametrizations(l, 'weight')
|
||||
for l in self.convs2:
|
||||
remove_parametrizations(l, 'weight')
|
||||
|
||||
|
||||
class AMPBlock2(torch.nn.Module):
|
||||
|
||||
def __init__(self, h, channels, kernel_size=3, dilation=(1, 3), activation=None):
|
||||
super(AMPBlock2, self).__init__()
|
||||
self.h = h
|
||||
|
||||
self.convs = nn.ModuleList([
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=dilation[0],
|
||||
padding=get_padding(kernel_size, dilation[0]))),
|
||||
weight_norm(
|
||||
Conv1d(channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
1,
|
||||
dilation=dilation[1],
|
||||
padding=get_padding(kernel_size, dilation[1])))
|
||||
])
|
||||
self.convs.apply(init_weights)
|
||||
|
||||
self.num_layers = len(self.convs) # total number of conv layers
|
||||
|
||||
if activation == 'snake': # periodic nonlinearity with snake function and anti-aliasing
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.Snake(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
elif activation == 'snakebeta': # periodic nonlinearity with snakebeta function and anti-aliasing
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
"activation incorrectly specified. check the config file and look for 'activation'."
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
for c, a in zip(self.convs, self.activations):
|
||||
xt = a(x)
|
||||
xt = c(xt)
|
||||
x = xt + x
|
||||
|
||||
return x
|
||||
|
||||
def remove_weight_norm(self):
|
||||
for l in self.convs:
|
||||
remove_parametrizations(l, 'weight')
|
||||
|
||||
|
||||
class BigVGANVocoder(torch.nn.Module):
|
||||
# this is our main BigVGAN model. Applies anti-aliased periodic activation for resblocks.
|
||||
def __init__(self, h):
|
||||
super().__init__()
|
||||
self.h = h
|
||||
|
||||
self.num_kernels = len(h.resblock_kernel_sizes)
|
||||
self.num_upsamples = len(h.upsample_rates)
|
||||
|
||||
# pre conv
|
||||
self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3))
|
||||
|
||||
# define which AMPBlock to use. BigVGAN uses AMPBlock1 as default
|
||||
resblock = AMPBlock1 if h.resblock == '1' else AMPBlock2
|
||||
|
||||
# transposed conv-based upsamplers. does not apply anti-aliasing
|
||||
self.ups = nn.ModuleList()
|
||||
for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
|
||||
self.ups.append(
|
||||
nn.ModuleList([
|
||||
weight_norm(
|
||||
ConvTranspose1d(h.upsample_initial_channel // (2**i),
|
||||
h.upsample_initial_channel // (2**(i + 1)),
|
||||
k,
|
||||
u,
|
||||
padding=(k - u) // 2))
|
||||
]))
|
||||
|
||||
# residual blocks using anti-aliased multi-periodicity composition modules (AMP)
|
||||
self.resblocks = nn.ModuleList()
|
||||
for i in range(len(self.ups)):
|
||||
ch = h.upsample_initial_channel // (2**(i + 1))
|
||||
for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
|
||||
self.resblocks.append(resblock(h, ch, k, d, activation=h.activation))
|
||||
|
||||
# post conv
|
||||
if h.activation == "snake": # periodic nonlinearity with snake function and anti-aliasing
|
||||
activation_post = activations.Snake(ch, alpha_logscale=h.snake_logscale)
|
||||
self.activation_post = Activation1d(activation=activation_post)
|
||||
elif h.activation == "snakebeta": # periodic nonlinearity with snakebeta function and anti-aliasing
|
||||
activation_post = activations.SnakeBeta(ch, alpha_logscale=h.snake_logscale)
|
||||
self.activation_post = Activation1d(activation=activation_post)
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
"activation incorrectly specified. check the config file and look for 'activation'."
|
||||
)
|
||||
|
||||
self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
|
||||
|
||||
# weight initialization
|
||||
for i in range(len(self.ups)):
|
||||
self.ups[i].apply(init_weights)
|
||||
self.conv_post.apply(init_weights)
|
||||
|
||||
def forward(self, x):
|
||||
# pre conv
|
||||
x = self.conv_pre(x)
|
||||
|
||||
for i in range(self.num_upsamples):
|
||||
# upsampling
|
||||
for i_up in range(len(self.ups[i])):
|
||||
x = self.ups[i][i_up](x)
|
||||
# AMP blocks
|
||||
xs = None
|
||||
for j in range(self.num_kernels):
|
||||
if xs is None:
|
||||
xs = self.resblocks[i * self.num_kernels + j](x)
|
||||
else:
|
||||
xs += self.resblocks[i * self.num_kernels + j](x)
|
||||
x = xs / self.num_kernels
|
||||
|
||||
# post conv
|
||||
x = self.activation_post(x)
|
||||
x = self.conv_post(x)
|
||||
x = torch.tanh(x)
|
||||
|
||||
return x
|
||||
|
||||
def remove_weight_norm(self):
|
||||
print('Removing weight norm...')
|
||||
for l in self.ups:
|
||||
for l_i in l:
|
||||
remove_parametrizations(l_i, 'weight')
|
||||
for l in self.resblocks:
|
||||
l.remove_weight_norm()
|
||||
remove_parametrizations(self.conv_pre, 'weight')
|
||||
remove_parametrizations(self.conv_post, 'weight')
|
||||
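
Note: the vocoder above can be built directly from the bundled YAML; a sketch (it mirrors what the BigVGAN wrapper in bigvgan.py does, minus checkpoint loading, so the output here comes from randomly initialized weights):

import torch
from omegaconf import OmegaConf
from postprocessing.mmaudio.ext.bigvgan.models import BigVGANVocoder

h = OmegaConf.load('postprocessing/mmaudio/ext/bigvgan/bigvgan_vocoder.yml')
voc = BigVGANVocoder(h).eval()
voc.remove_weight_norm()                       # as done by the wrapper before inference
wav = voc(torch.randn(1, h.num_mels, 50))      # [1, 1, 50 * 256]
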
31
postprocessing/mmaudio/ext/bigvgan/utils.py
Normal file
@ -0,0 +1,31 @@
# Adapted from https://github.com/jik876/hifi-gan under the MIT license.
# LICENSE is in incl_licenses directory.

import os

import torch
from torch.nn.utils.parametrizations import weight_norm


def init_weights(m, mean=0.0, std=0.01):
    classname = m.__class__.__name__
    if classname.find("Conv") != -1:
        m.weight.data.normal_(mean, std)


def apply_weight_norm(m):
    classname = m.__class__.__name__
    if classname.find("Conv") != -1:
        weight_norm(m)


def get_padding(kernel_size, dilation=1):
    return int((kernel_size * dilation - dilation) / 2)


def load_checkpoint(filepath, device):
    assert os.path.isfile(filepath)
    print("Loading '{}'".format(filepath))
    checkpoint_dict = torch.load(filepath, map_location=device)
    print("Complete.")
    return checkpoint_dict
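
Note: get_padding above returns the padding that keeps the time axis unchanged for a stride-1 dilated Conv1d; a quick sanity check:

from postprocessing.mmaudio.ext.bigvgan.utils import get_padding

assert get_padding(7, 1) == 3   # matches the padding=3 of conv_pre / conv_post in models.py
assert get_padding(3, 5) == 5   # kernel/dilation pairs used inside the AMP residual blocks
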
21
postprocessing/mmaudio/ext/bigvgan_v2/LICENSE
Normal file
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2024 NVIDIA CORPORATION.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
0
postprocessing/mmaudio/ext/bigvgan_v2/__init__.py
Normal file
126
postprocessing/mmaudio/ext/bigvgan_v2/activations.py
Normal file
@ -0,0 +1,126 @@
|
||||
# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license.
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import torch
|
||||
from torch import nn, sin, pow
|
||||
from torch.nn import Parameter
|
||||
|
||||
|
||||
class Snake(nn.Module):
|
||||
"""
|
||||
Implementation of a sine-based periodic activation function
|
||||
Shape:
|
||||
- Input: (B, C, T)
|
||||
- Output: (B, C, T), same shape as the input
|
||||
Parameters:
|
||||
- alpha - trainable parameter
|
||||
References:
|
||||
- This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda:
|
||||
https://arxiv.org/abs/2006.08195
|
||||
Examples:
|
||||
>>> a1 = snake(256)
|
||||
>>> x = torch.randn(256)
|
||||
>>> x = a1(x)
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False
|
||||
):
|
||||
"""
|
||||
Initialization.
|
||||
INPUT:
|
||||
- in_features: shape of the input
|
||||
- alpha: trainable parameter
|
||||
alpha is initialized to 1 by default, higher values = higher-frequency.
|
||||
alpha will be trained along with the rest of your model.
|
||||
"""
|
||||
super(Snake, self).__init__()
|
||||
self.in_features = in_features
|
||||
|
||||
# Initialize alpha
|
||||
self.alpha_logscale = alpha_logscale
|
||||
if self.alpha_logscale: # Log scale alphas initialized to zeros
|
||||
self.alpha = Parameter(torch.zeros(in_features) * alpha)
|
||||
else: # Linear scale alphas initialized to ones
|
||||
self.alpha = Parameter(torch.ones(in_features) * alpha)
|
||||
|
||||
self.alpha.requires_grad = alpha_trainable
|
||||
|
||||
self.no_div_by_zero = 0.000000001
|
||||
|
||||
def forward(self, x):
|
||||
"""
|
||||
Forward pass of the function.
|
||||
Applies the function to the input elementwise.
|
||||
Snake ∶= x + 1/a * sin^2 (xa)
|
||||
"""
|
||||
alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # Line up with x to [B, C, T]
|
||||
if self.alpha_logscale:
|
||||
alpha = torch.exp(alpha)
|
||||
x = x + (1.0 / (alpha + self.no_div_by_zero)) * pow(sin(x * alpha), 2)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
class SnakeBeta(nn.Module):
|
||||
"""
|
||||
A modified Snake function which uses separate parameters for the magnitude of the periodic components
|
||||
Shape:
|
||||
- Input: (B, C, T)
|
||||
- Output: (B, C, T), same shape as the input
|
||||
Parameters:
|
||||
- alpha - trainable parameter that controls frequency
|
||||
- beta - trainable parameter that controls magnitude
|
||||
References:
|
||||
- This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda:
|
||||
https://arxiv.org/abs/2006.08195
|
||||
Examples:
|
||||
>>> a1 = snakebeta(256)
|
||||
>>> x = torch.randn(256)
|
||||
>>> x = a1(x)
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False
|
||||
):
|
||||
"""
|
||||
Initialization.
|
||||
INPUT:
|
||||
- in_features: shape of the input
|
||||
- alpha - trainable parameter that controls frequency
|
||||
- beta - trainable parameter that controls magnitude
|
||||
alpha is initialized to 1 by default, higher values = higher-frequency.
|
||||
beta is initialized to 1 by default, higher values = higher-magnitude.
|
||||
alpha will be trained along with the rest of your model.
|
||||
"""
|
||||
super(SnakeBeta, self).__init__()
|
||||
self.in_features = in_features
|
||||
|
||||
# Initialize alpha
|
||||
self.alpha_logscale = alpha_logscale
|
||||
if self.alpha_logscale: # Log scale alphas initialized to zeros
|
||||
self.alpha = Parameter(torch.zeros(in_features) * alpha)
|
||||
self.beta = Parameter(torch.zeros(in_features) * alpha)
|
||||
else: # Linear scale alphas initialized to ones
|
||||
self.alpha = Parameter(torch.ones(in_features) * alpha)
|
||||
self.beta = Parameter(torch.ones(in_features) * alpha)
|
||||
|
||||
self.alpha.requires_grad = alpha_trainable
|
||||
self.beta.requires_grad = alpha_trainable
|
||||
|
||||
self.no_div_by_zero = 0.000000001
|
||||
|
||||
def forward(self, x):
|
||||
"""
|
||||
Forward pass of the function.
|
||||
Applies the function to the input elementwise.
|
||||
SnakeBeta ∶= x + 1/b * sin^2 (xa)
|
||||
"""
|
||||
alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # Line up with x to [B, C, T]
|
||||
beta = self.beta.unsqueeze(0).unsqueeze(-1)
|
||||
if self.alpha_logscale:
|
||||
alpha = torch.exp(alpha)
|
||||
beta = torch.exp(beta)
|
||||
x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2)
|
||||
|
||||
return x
|
||||
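
Note: a small sketch of the two activations above, applied channel-wise to a [B, C, T] tensor (shapes illustrative; import path assumed from this commit's layout):

import torch
from postprocessing.mmaudio.ext.bigvgan_v2.activations import Snake, SnakeBeta

x = torch.randn(1, 16, 100)                    # [B, C, T]
y1 = Snake(16, alpha_logscale=True)(x)         # x + (1/a) * sin^2(a * x), one alpha per channel
y2 = SnakeBeta(16, alpha_logscale=True)(x)     # x + (1/b) * sin^2(a * x), separate magnitude term
assert y1.shape == y2.shape == x.shape
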
@ -0,0 +1,77 @@
|
||||
# Copyright (c) 2024 NVIDIA CORPORATION.
|
||||
# Licensed under the MIT license.
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
from alias_free_activation.torch.resample import UpSample1d, DownSample1d
|
||||
|
||||
# load fused CUDA kernel: this enables importing anti_alias_activation_cuda
|
||||
from alias_free_activation.cuda import load
|
||||
|
||||
anti_alias_activation_cuda = load.load()
|
||||
|
||||
|
||||
class FusedAntiAliasActivation(torch.autograd.Function):
|
||||
"""
|
||||
Assumes filter size 12, replication padding on upsampling/downsampling, and logscale alpha/beta parameters as inputs.
|
||||
The hyperparameters are hard-coded in the kernel to maximize speed.
|
||||
NOTE: The fused kernel is incorrect for Activation1d with different hyperparameters.
|
||||
"""
|
||||
|
||||
@staticmethod
|
||||
def forward(ctx, inputs, up_ftr, down_ftr, alpha, beta):
|
||||
activation_results = anti_alias_activation_cuda.forward(
|
||||
inputs, up_ftr, down_ftr, alpha, beta
|
||||
)
|
||||
|
||||
return activation_results
|
||||
|
||||
@staticmethod
|
||||
def backward(ctx, output_grads):
|
||||
raise NotImplementedError
|
||||
return output_grads, None, None
|
||||
|
||||
|
||||
class Activation1d(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
activation,
|
||||
up_ratio: int = 2,
|
||||
down_ratio: int = 2,
|
||||
up_kernel_size: int = 12,
|
||||
down_kernel_size: int = 12,
|
||||
fused: bool = True,
|
||||
):
|
||||
super().__init__()
|
||||
self.up_ratio = up_ratio
|
||||
self.down_ratio = down_ratio
|
||||
self.act = activation
|
||||
self.upsample = UpSample1d(up_ratio, up_kernel_size)
|
||||
self.downsample = DownSample1d(down_ratio, down_kernel_size)
|
||||
|
||||
self.fused = fused # Whether to use fused CUDA kernel or not
|
||||
|
||||
def forward(self, x):
|
||||
if not self.fused:
|
||||
x = self.upsample(x)
|
||||
x = self.act(x)
|
||||
x = self.downsample(x)
|
||||
return x
|
||||
else:
|
||||
if self.act.__class__.__name__ == "Snake":
|
||||
beta = self.act.alpha.data # Snake uses same params for alpha and beta
|
||||
else:
|
||||
beta = (
|
||||
self.act.beta.data
|
||||
) # Snakebeta uses different params for alpha and beta
|
||||
alpha = self.act.alpha.data
|
||||
if (
|
||||
not self.act.alpha_logscale
|
||||
): # Exp baked into cuda kernel, cancel it out with a log
|
||||
alpha = torch.log(alpha)
|
||||
beta = torch.log(beta)
|
||||
|
||||
x = FusedAntiAliasActivation.apply(
|
||||
x, self.upsample.filter, self.downsample.lowpass.filter, alpha, beta
|
||||
)
|
||||
return x
|
||||
@ -0,0 +1,23 @@
/* coding=utf-8
 * Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <torch/extension.h>

extern "C" torch::Tensor fwd_cuda(torch::Tensor const &input, torch::Tensor const &up_filter, torch::Tensor const &down_filter, torch::Tensor const &alpha, torch::Tensor const &beta);

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("forward", &fwd_cuda, "Anti-Alias Activation forward (CUDA)");
}
@ -0,0 +1,246 @@
|
||||
/* coding=utf-8
|
||||
* Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
#include <ATen/ATen.h>
|
||||
#include <cuda.h>
|
||||
#include <cuda_runtime.h>
|
||||
#include <cuda_fp16.h>
|
||||
#include <cuda_profiler_api.h>
|
||||
#include <ATen/cuda/CUDAContext.h>
|
||||
#include <torch/extension.h>
|
||||
#include "type_shim.h"
|
||||
#include <assert.h>
|
||||
#include <cfloat>
|
||||
#include <limits>
|
||||
#include <stdint.h>
|
||||
#include <c10/macros/Macros.h>
|
||||
|
||||
namespace
|
||||
{
|
||||
// Hard-coded hyperparameters
|
||||
// WARP_SIZE and WARP_BATCH must match the return values batches_per_warp and
|
||||
constexpr int ELEMENTS_PER_LDG_STG = 1; //(WARP_ITERATIONS < 4) ? 1 : 4;
|
||||
constexpr int BUFFER_SIZE = 32;
|
||||
constexpr int FILTER_SIZE = 12;
|
||||
constexpr int HALF_FILTER_SIZE = 6;
|
||||
constexpr int UPSAMPLE_REPLICATION_PAD = 5; // 5 on each side, matching torch impl
|
||||
constexpr int DOWNSAMPLE_REPLICATION_PAD_LEFT = 5; // matching torch impl
|
||||
constexpr int DOWNSAMPLE_REPLICATION_PAD_RIGHT = 6; // matching torch impl
|
||||
|
||||
template <typename input_t, typename output_t, typename acc_t>
|
||||
__global__ void anti_alias_activation_forward(
|
||||
output_t *dst,
|
||||
const input_t *src,
|
||||
const input_t *up_ftr,
|
||||
const input_t *down_ftr,
|
||||
const input_t *alpha,
|
||||
const input_t *beta,
|
||||
int batch_size,
|
||||
int channels,
|
||||
int seq_len)
|
||||
{
|
||||
// Up and downsample filters
|
||||
input_t up_filter[FILTER_SIZE];
|
||||
input_t down_filter[FILTER_SIZE];
|
||||
|
||||
// Load data from global memory including extra indices reserved for replication paddings
|
||||
input_t elements[2 * FILTER_SIZE + 2 * BUFFER_SIZE + 2 * UPSAMPLE_REPLICATION_PAD] = {0};
|
||||
input_t intermediates[2 * FILTER_SIZE + 2 * BUFFER_SIZE + DOWNSAMPLE_REPLICATION_PAD_LEFT + DOWNSAMPLE_REPLICATION_PAD_RIGHT] = {0};
|
||||
|
||||
// Output stores downsampled output before writing to dst
|
||||
output_t output[BUFFER_SIZE];
|
||||
|
||||
// blockDim/threadIdx = (128, 1, 1)
|
||||
// gridDim/blockIdx = (seq_blocks, channels, batches)
|
||||
int block_offset = (blockIdx.x * 128 * BUFFER_SIZE + seq_len * (blockIdx.y + gridDim.y * blockIdx.z));
|
||||
int local_offset = threadIdx.x * BUFFER_SIZE;
|
||||
int seq_offset = blockIdx.x * 128 * BUFFER_SIZE + local_offset;
|
||||
|
||||
// intermediate have double the seq_len
|
||||
int intermediate_local_offset = threadIdx.x * BUFFER_SIZE * 2;
|
||||
int intermediate_seq_offset = blockIdx.x * 128 * BUFFER_SIZE * 2 + intermediate_local_offset;
|
||||
|
||||
// Get values needed for replication padding before moving pointer
|
||||
const input_t *right_most_pntr = src + (seq_len * (blockIdx.y + gridDim.y * blockIdx.z));
|
||||
input_t seq_left_most_value = right_most_pntr[0];
|
||||
input_t seq_right_most_value = right_most_pntr[seq_len - 1];
|
||||
|
||||
// Move src and dst pointers
|
||||
src += block_offset + local_offset;
|
||||
dst += block_offset + local_offset;
|
||||
|
||||
// Alpha and beta values for snake activations. Applies exp by default
|
||||
alpha = alpha + blockIdx.y;
|
||||
input_t alpha_val = expf(alpha[0]);
|
||||
beta = beta + blockIdx.y;
|
||||
input_t beta_val = expf(beta[0]);
|
||||
|
||||
#pragma unroll
|
||||
for (int it = 0; it < FILTER_SIZE; it += 1)
|
||||
{
|
||||
up_filter[it] = up_ftr[it];
|
||||
down_filter[it] = down_ftr[it];
|
||||
}
|
||||
|
||||
// Apply replication padding for upsampling, matching torch impl
|
||||
#pragma unroll
|
||||
for (int it = -HALF_FILTER_SIZE; it < BUFFER_SIZE + HALF_FILTER_SIZE; it += 1)
|
||||
{
|
||||
int element_index = seq_offset + it; // index for element
|
||||
if ((element_index < 0) && (element_index >= -UPSAMPLE_REPLICATION_PAD))
|
||||
{
|
||||
elements[2 * (HALF_FILTER_SIZE + it)] = 2 * seq_left_most_value;
|
||||
}
|
||||
if ((element_index >= seq_len) && (element_index < seq_len + UPSAMPLE_REPLICATION_PAD))
|
||||
{
|
||||
elements[2 * (HALF_FILTER_SIZE + it)] = 2 * seq_right_most_value;
|
||||
}
|
||||
if ((element_index >= 0) && (element_index < seq_len))
|
||||
{
|
||||
elements[2 * (HALF_FILTER_SIZE + it)] = 2 * src[it];
|
||||
}
|
||||
}
|
||||
|
||||
// Apply upsampling strided convolution and write to intermediates. It reserves DOWNSAMPLE_REPLICATION_PAD_LEFT for replication padding of the downsampling conv later
|
||||
#pragma unroll
|
||||
for (int it = 0; it < (2 * BUFFER_SIZE + 2 * FILTER_SIZE); it += 1)
|
||||
{
|
||||
input_t acc = 0.0;
|
||||
int element_index = intermediate_seq_offset + it; // index for intermediate
|
||||
#pragma unroll
|
||||
for (int f_idx = 0; f_idx < FILTER_SIZE; f_idx += 1)
|
||||
{
|
||||
if ((element_index + f_idx) >= 0)
|
||||
{
|
||||
acc += up_filter[f_idx] * elements[it + f_idx];
|
||||
}
|
||||
}
|
||||
intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] = acc;
|
||||
}
|
||||
|
||||
// Apply activation function. It reserves DOWNSAMPLE_REPLICATION_PAD_LEFT and DOWNSAMPLE_REPLICATION_PAD_RIGHT for replication padding of the downsampling conv later
|
||||
double no_div_by_zero = 0.000000001;
|
||||
#pragma unroll
|
||||
for (int it = 0; it < 2 * BUFFER_SIZE + 2 * FILTER_SIZE; it += 1)
|
||||
{
|
||||
intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] += (1.0 / (beta_val + no_div_by_zero)) * sinf(intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] * alpha_val) * sinf(intermediates[it + DOWNSAMPLE_REPLICATION_PAD_LEFT] * alpha_val);
|
||||
}
|
||||
|
||||
// Apply replication padding before downsampling conv from intermediates
|
||||
#pragma unroll
|
||||
for (int it = 0; it < DOWNSAMPLE_REPLICATION_PAD_LEFT; it += 1)
|
||||
{
|
||||
intermediates[it] = intermediates[DOWNSAMPLE_REPLICATION_PAD_LEFT];
|
||||
}
|
||||
#pragma unroll
|
||||
for (int it = DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE; it < DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE + DOWNSAMPLE_REPLICATION_PAD_RIGHT; it += 1)
|
||||
{
|
||||
intermediates[it] = intermediates[DOWNSAMPLE_REPLICATION_PAD_LEFT + 2 * BUFFER_SIZE + 2 * FILTER_SIZE - 1];
|
||||
}
|
||||
|
||||
// Apply downsample strided convolution (assuming stride=2) from intermediates
|
||||
#pragma unroll
|
||||
for (int it = 0; it < BUFFER_SIZE; it += 1)
|
||||
{
|
||||
input_t acc = 0.0;
|
||||
#pragma unroll
|
||||
for (int f_idx = 0; f_idx < FILTER_SIZE; f_idx += 1)
|
||||
{
|
||||
// Add constant DOWNSAMPLE_REPLICATION_PAD_RIGHT to match torch implementation
|
||||
acc += down_filter[f_idx] * intermediates[it * 2 + f_idx + DOWNSAMPLE_REPLICATION_PAD_RIGHT];
|
||||
}
|
||||
output[it] = acc;
|
||||
}
|
||||
|
||||
// Write output to dst
|
||||
#pragma unroll
|
||||
for (int it = 0; it < BUFFER_SIZE; it += ELEMENTS_PER_LDG_STG)
|
||||
{
|
||||
int element_index = seq_offset + it;
|
||||
if (element_index < seq_len)
|
||||
{
|
||||
dst[it] = output[it];
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
template <typename input_t, typename output_t, typename acc_t>
|
||||
void dispatch_anti_alias_activation_forward(
|
||||
output_t *dst,
|
||||
const input_t *src,
|
||||
const input_t *up_ftr,
|
||||
const input_t *down_ftr,
|
||||
const input_t *alpha,
|
||||
const input_t *beta,
|
||||
int batch_size,
|
||||
int channels,
|
||||
int seq_len)
|
||||
{
|
||||
if (seq_len == 0)
|
||||
{
|
||||
return;
|
||||
}
|
||||
else
|
||||
{
|
||||
// Use 128 threads per block to maximize GPU utilization
|
||||
constexpr int threads_per_block = 128;
|
||||
constexpr int seq_len_per_block = 4096;
|
||||
int blocks_per_seq_len = (seq_len + seq_len_per_block - 1) / seq_len_per_block;
|
||||
dim3 blocks(blocks_per_seq_len, channels, batch_size);
|
||||
dim3 threads(threads_per_block, 1, 1);
|
||||
|
||||
anti_alias_activation_forward<input_t, output_t, acc_t>
|
||||
<<<blocks, threads, 0, at::cuda::getCurrentCUDAStream()>>>(dst, src, up_ftr, down_ftr, alpha, beta, batch_size, channels, seq_len);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
extern "C" torch::Tensor fwd_cuda(torch::Tensor const &input, torch::Tensor const &up_filter, torch::Tensor const &down_filter, torch::Tensor const &alpha, torch::Tensor const &beta)
|
||||
{
|
||||
// Input is a 3d tensor with dimensions [batches, channels, seq_len]
|
||||
const int batches = input.size(0);
|
||||
const int channels = input.size(1);
|
||||
const int seq_len = input.size(2);
|
||||
|
||||
// Output
|
||||
auto act_options = input.options().requires_grad(false);
|
||||
|
||||
torch::Tensor anti_alias_activation_results =
|
||||
torch::empty({batches, channels, seq_len}, act_options);
|
||||
|
||||
void *input_ptr = static_cast<void *>(input.data_ptr());
|
||||
void *up_filter_ptr = static_cast<void *>(up_filter.data_ptr());
|
||||
void *down_filter_ptr = static_cast<void *>(down_filter.data_ptr());
|
||||
void *alpha_ptr = static_cast<void *>(alpha.data_ptr());
|
||||
void *beta_ptr = static_cast<void *>(beta.data_ptr());
|
||||
void *anti_alias_activation_results_ptr = static_cast<void *>(anti_alias_activation_results.data_ptr());
|
||||
|
||||
DISPATCH_FLOAT_HALF_AND_BFLOAT(
|
||||
input.scalar_type(),
|
||||
"dispatch anti alias activation_forward",
|
||||
dispatch_anti_alias_activation_forward<scalar_t, scalar_t, float>(
|
||||
reinterpret_cast<scalar_t *>(anti_alias_activation_results_ptr),
|
||||
reinterpret_cast<const scalar_t *>(input_ptr),
|
||||
reinterpret_cast<const scalar_t *>(up_filter_ptr),
|
||||
reinterpret_cast<const scalar_t *>(down_filter_ptr),
|
||||
reinterpret_cast<const scalar_t *>(alpha_ptr),
|
||||
reinterpret_cast<const scalar_t *>(beta_ptr),
|
||||
batches,
|
||||
channels,
|
||||
seq_len););
|
||||
return anti_alias_activation_results;
|
||||
}
|
||||
@ -0,0 +1,29 @@
|
||||
/* coding=utf-8
|
||||
* Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
/* This code is copied from NVIDIA apex:
|
||||
* https://github.com/NVIDIA/apex
|
||||
* with minor changes. */
|
||||
|
||||
#ifndef TORCH_CHECK
|
||||
#define TORCH_CHECK AT_CHECK
|
||||
#endif
|
||||
|
||||
#ifdef VERSION_GE_1_3
|
||||
#define DATA_PTR data_ptr
|
||||
#else
|
||||
#define DATA_PTR data
|
||||
#endif
|
||||
@ -0,0 +1,86 @@
|
||||
# Copyright (c) 2024 NVIDIA CORPORATION.
|
||||
# Licensed under the MIT license.
|
||||
|
||||
import os
|
||||
import pathlib
|
||||
import subprocess
|
||||
|
||||
from torch.utils import cpp_extension
|
||||
|
||||
"""
|
||||
Setting this param to a list can generate different compilation commands (with a different order of architectures) and lead to recompilation of the fused kernels.
|
||||
Set it to an empty string to avoid recompilation and assign the arch flags explicitly in extra_cuda_cflags below.
|
||||
"""
|
||||
os.environ["TORCH_CUDA_ARCH_LIST"] = ""
|
||||
|
||||
|
||||
def load():
|
||||
# Check if cuda 11 is installed for compute capability 8.0
|
||||
cc_flag = []
|
||||
_, bare_metal_major, _ = _get_cuda_bare_metal_version(cpp_extension.CUDA_HOME)
|
||||
if int(bare_metal_major) >= 11:
|
||||
cc_flag.append("-gencode")
|
||||
cc_flag.append("arch=compute_80,code=sm_80")
|
||||
|
||||
# Build path
|
||||
srcpath = pathlib.Path(__file__).parent.absolute()
|
||||
buildpath = srcpath / "build"
|
||||
_create_build_dir(buildpath)
|
||||
|
||||
# Helper function to build the kernels.
|
||||
def _cpp_extention_load_helper(name, sources, extra_cuda_flags):
|
||||
return cpp_extension.load(
|
||||
name=name,
|
||||
sources=sources,
|
||||
build_directory=buildpath,
|
||||
extra_cflags=[
|
||||
"-O3",
|
||||
],
|
||||
extra_cuda_cflags=[
|
||||
"-O3",
|
||||
"-gencode",
|
||||
"arch=compute_70,code=sm_70",
|
||||
"--use_fast_math",
|
||||
]
|
||||
+ extra_cuda_flags
|
||||
+ cc_flag,
|
||||
verbose=True,
|
||||
)
|
||||
|
||||
extra_cuda_flags = [
|
||||
"-U__CUDA_NO_HALF_OPERATORS__",
|
||||
"-U__CUDA_NO_HALF_CONVERSIONS__",
|
||||
"--expt-relaxed-constexpr",
|
||||
"--expt-extended-lambda",
|
||||
]
|
||||
|
||||
sources = [
|
||||
srcpath / "anti_alias_activation.cpp",
|
||||
srcpath / "anti_alias_activation_cuda.cu",
|
||||
]
|
||||
anti_alias_activation_cuda = _cpp_extention_load_helper(
|
||||
"anti_alias_activation_cuda", sources, extra_cuda_flags
|
||||
)
|
||||
|
||||
return anti_alias_activation_cuda
|
||||
|
||||
|
||||
def _get_cuda_bare_metal_version(cuda_dir):
|
||||
raw_output = subprocess.check_output(
|
||||
[cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True
|
||||
)
|
||||
output = raw_output.split()
|
||||
release_idx = output.index("release") + 1
|
||||
release = output[release_idx].split(".")
|
||||
bare_metal_major = release[0]
|
||||
bare_metal_minor = release[1][0]
|
||||
|
||||
return raw_output, bare_metal_major, bare_metal_minor
|
||||
|
||||
|
||||
def _create_build_dir(buildpath):
|
||||
try:
|
||||
os.mkdir(buildpath)
|
||||
except OSError:
|
||||
if not os.path.isdir(buildpath):
|
||||
print(f"Creation of the build directory {buildpath} failed")
|
||||
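For orientation, a minimal sketch of how this JIT builder is typically invoked; the import path is inferred from this diff's layout and is an assumption, and nvcc plus ninja matching the installed PyTorch are required:

# Hypothetical usage sketch; the package path below is assumed from the file layout in this diff.
from postprocessing.mmaudio.ext.bigvgan_v2.alias_free_activation.cuda import load as cuda_load

# The first call JIT-compiles anti_alias_activation_cuda into a ./build directory next to
# the sources; later calls reuse the cached ninja build.
anti_alias_activation_cuda = cuda_load.load()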
@ -0,0 +1,92 @@
|
||||
/* coding=utf-8
|
||||
* Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
|
||||
*
|
||||
* Licensed under the Apache License, Version 2.0 (the "License");
|
||||
* you may not use this file except in compliance with the License.
|
||||
* You may obtain a copy of the License at
|
||||
*
|
||||
* http://www.apache.org/licenses/LICENSE-2.0
|
||||
*
|
||||
* Unless required by applicable law or agreed to in writing, software
|
||||
* distributed under the License is distributed on an "AS IS" BASIS,
|
||||
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
* See the License for the specific language governing permissions and
|
||||
* limitations under the License.
|
||||
*/
|
||||
|
||||
#include <ATen/ATen.h>
|
||||
#include "compat.h"
|
||||
|
||||
#define DISPATCH_FLOAT_HALF_AND_BFLOAT(TYPE, NAME, ...) \
|
||||
switch (TYPE) \
|
||||
{ \
|
||||
case at::ScalarType::Float: \
|
||||
{ \
|
||||
using scalar_t = float; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
case at::ScalarType::Half: \
|
||||
{ \
|
||||
using scalar_t = at::Half; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
case at::ScalarType::BFloat16: \
|
||||
{ \
|
||||
using scalar_t = at::BFloat16; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
default: \
|
||||
AT_ERROR(#NAME, " not implemented for '", toString(TYPE), "'"); \
|
||||
}
|
||||
|
||||
#define DISPATCH_FLOAT_HALF_AND_BFLOAT_INOUT_TYPES(TYPEIN, TYPEOUT, NAME, ...) \
|
||||
switch (TYPEIN) \
|
||||
{ \
|
||||
case at::ScalarType::Float: \
|
||||
{ \
|
||||
using scalar_t_in = float; \
|
||||
switch (TYPEOUT) \
|
||||
{ \
|
||||
case at::ScalarType::Float: \
|
||||
{ \
|
||||
using scalar_t_out = float; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
case at::ScalarType::Half: \
|
||||
{ \
|
||||
using scalar_t_out = at::Half; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
case at::ScalarType::BFloat16: \
|
||||
{ \
|
||||
using scalar_t_out = at::BFloat16; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
default: \
|
||||
AT_ERROR(#NAME, " not implemented for '", toString(TYPEOUT), "'"); \
|
||||
} \
|
||||
break; \
|
||||
} \
|
||||
case at::ScalarType::Half: \
|
||||
{ \
|
||||
using scalar_t_in = at::Half; \
|
||||
using scalar_t_out = at::Half; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
case at::ScalarType::BFloat16: \
|
||||
{ \
|
||||
using scalar_t_in = at::BFloat16; \
|
||||
using scalar_t_out = at::BFloat16; \
|
||||
__VA_ARGS__; \
|
||||
break; \
|
||||
} \
|
||||
default: \
|
||||
AT_ERROR(#NAME, " not implemented for '", toString(TYPEIN), "'"); \
|
||||
}
|
||||
@ -0,0 +1,6 @@
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
# LICENSE is in incl_licenses directory.

from .filter import *
from .resample import *
from .act import *
@ -0,0 +1,32 @@
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
# LICENSE is in incl_licenses directory.

import torch.nn as nn

from .resample import (DownSample1d, UpSample1d)


class Activation1d(nn.Module):

    def __init__(
        self,
        activation,
        up_ratio: int = 2,
        down_ratio: int = 2,
        up_kernel_size: int = 12,
        down_kernel_size: int = 12,
    ):
        super().__init__()
        self.up_ratio = up_ratio
        self.down_ratio = down_ratio
        self.act = activation
        self.upsample = UpSample1d(up_ratio, up_kernel_size)
        self.downsample = DownSample1d(down_ratio, down_kernel_size)

    # x: [B, C, T]
    def forward(self, x):
        x = self.upsample(x)
        x = self.act(x)
        x = self.downsample(x)

        return x
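A minimal usage sketch of the upsample → activate → downsample flow above; nn.SiLU stands in for the Snake/SnakeBeta activations used elsewhere in this diff, and the import path is assumed from the diff's layout:

import torch
import torch.nn as nn

# Hypothetical sketch; path assumed from this diff's layout.
from postprocessing.mmaudio.ext.bigvgan_v2.alias_free_activation.torch.act import Activation1d

act = Activation1d(activation=nn.SiLU(), up_ratio=2, down_ratio=2)
x = torch.randn(4, 32, 1000)   # [B, C, T]
y = act(x)                     # anti-aliased activation, shape preserved: [4, 32, 1000]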
@ -0,0 +1,101 @@
|
||||
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
import math
|
||||
|
||||
if "sinc" in dir(torch):
|
||||
sinc = torch.sinc
|
||||
else:
|
||||
# This code is adopted from adefossez's julius.core.sinc under the MIT License
|
||||
# https://adefossez.github.io/julius/julius/core.html
|
||||
# LICENSE is in incl_licenses directory.
|
||||
def sinc(x: torch.Tensor):
|
||||
"""
|
||||
Implementation of sinc, i.e. sin(pi * x) / (pi * x)
|
||||
__Warning__: Different from julius.sinc, the input is multiplied by `pi`!
|
||||
"""
|
||||
return torch.where(
|
||||
x == 0,
|
||||
torch.tensor(1.0, device=x.device, dtype=x.dtype),
|
||||
torch.sin(math.pi * x) / math.pi / x,
|
||||
)
|
||||
|
||||
|
||||
# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License
|
||||
# https://adefossez.github.io/julius/julius/lowpass.html
|
||||
# LICENSE is in incl_licenses directory.
|
||||
def kaiser_sinc_filter1d(
|
||||
cutoff, half_width, kernel_size
|
||||
): # return filter [1,1,kernel_size]
|
||||
even = kernel_size % 2 == 0
|
||||
half_size = kernel_size // 2
|
||||
|
||||
# For kaiser window
|
||||
delta_f = 4 * half_width
|
||||
A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95
|
||||
if A > 50.0:
|
||||
beta = 0.1102 * (A - 8.7)
|
||||
elif A >= 21.0:
|
||||
beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21.0)
|
||||
else:
|
||||
beta = 0.0
|
||||
window = torch.kaiser_window(kernel_size, beta=beta, periodic=False)
|
||||
|
||||
# ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio
|
||||
if even:
|
||||
time = torch.arange(-half_size, half_size) + 0.5
|
||||
else:
|
||||
time = torch.arange(kernel_size) - half_size
|
||||
if cutoff == 0:
|
||||
filter_ = torch.zeros_like(time)
|
||||
else:
|
||||
filter_ = 2 * cutoff * window * sinc(2 * cutoff * time)
|
||||
"""
|
||||
Normalize filter to have sum = 1, otherwise we will have a small leakage of the constant component in the input signal.
|
||||
"""
|
||||
filter_ /= filter_.sum()
|
||||
filter = filter_.view(1, 1, kernel_size)
|
||||
|
||||
return filter
|
||||
|
||||
|
||||
class LowPassFilter1d(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
cutoff=0.5,
|
||||
half_width=0.6,
|
||||
stride: int = 1,
|
||||
padding: bool = True,
|
||||
padding_mode: str = "replicate",
|
||||
kernel_size: int = 12,
|
||||
):
|
||||
"""
|
||||
kernel_size should be an even number for the StyleGAN3 setup; in this implementation an odd number is also possible.
|
||||
"""
|
||||
super().__init__()
|
||||
if cutoff < -0.0:
|
||||
raise ValueError("Minimum cutoff must be larger than zero.")
|
||||
if cutoff > 0.5:
|
||||
raise ValueError("A cutoff above 0.5 does not make sense.")
|
||||
self.kernel_size = kernel_size
|
||||
self.even = kernel_size % 2 == 0
|
||||
self.pad_left = kernel_size // 2 - int(self.even)
|
||||
self.pad_right = kernel_size // 2
|
||||
self.stride = stride
|
||||
self.padding = padding
|
||||
self.padding_mode = padding_mode
|
||||
filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size)
|
||||
self.register_buffer("filter", filter)
|
||||
|
||||
# Input [B, C, T]
|
||||
def forward(self, x):
|
||||
_, C, _ = x.shape
|
||||
|
||||
if self.padding:
|
||||
x = F.pad(x, (self.pad_left, self.pad_right), mode=self.padding_mode)
|
||||
out = F.conv1d(x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C)
|
||||
|
||||
return out
|
||||
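A short sketch of the filter construction and the low-pass module defined above; the import path is assumed from this diff's layout:

import torch

# Hypothetical sketch; path assumed from this diff's layout.
from postprocessing.mmaudio.ext.bigvgan_v2.alias_free_activation.torch.filter import (
    LowPassFilter1d, kaiser_sinc_filter1d)

filt = kaiser_sinc_filter1d(cutoff=0.25, half_width=0.3, kernel_size=12)
print(filt.shape)              # torch.Size([1, 1, 12]); taps are normalized to sum to 1

lowpass = LowPassFilter1d(cutoff=0.25, half_width=0.3, stride=2, kernel_size=12)
x = torch.randn(2, 8, 512)     # [B, C, T]
y = lowpass(x)                 # filtered and decimated: [2, 8, 256]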
@ -0,0 +1,54 @@
|
||||
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import torch.nn as nn
|
||||
from torch.nn import functional as F
|
||||
|
||||
from .filter import (LowPassFilter1d,
|
||||
kaiser_sinc_filter1d)
|
||||
|
||||
|
||||
class UpSample1d(nn.Module):
|
||||
|
||||
def __init__(self, ratio=2, kernel_size=None):
|
||||
super().__init__()
|
||||
self.ratio = ratio
|
||||
self.kernel_size = (int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size)
|
||||
self.stride = ratio
|
||||
self.pad = self.kernel_size // ratio - 1
|
||||
self.pad_left = self.pad * self.stride + (self.kernel_size - self.stride) // 2
|
||||
self.pad_right = (self.pad * self.stride + (self.kernel_size - self.stride + 1) // 2)
|
||||
filter = kaiser_sinc_filter1d(cutoff=0.5 / ratio,
|
||||
half_width=0.6 / ratio,
|
||||
kernel_size=self.kernel_size)
|
||||
self.register_buffer("filter", filter)
|
||||
|
||||
# x: [B, C, T]
|
||||
def forward(self, x):
|
||||
_, C, _ = x.shape
|
||||
|
||||
x = F.pad(x, (self.pad, self.pad), mode="replicate")
|
||||
x = self.ratio * F.conv_transpose1d(
|
||||
x, self.filter.expand(C, -1, -1), stride=self.stride, groups=C)
|
||||
x = x[..., self.pad_left:-self.pad_right]
|
||||
|
||||
return x
|
||||
|
||||
|
||||
class DownSample1d(nn.Module):
|
||||
|
||||
def __init__(self, ratio=2, kernel_size=None):
|
||||
super().__init__()
|
||||
self.ratio = ratio
|
||||
self.kernel_size = (int(6 * ratio // 2) * 2 if kernel_size is None else kernel_size)
|
||||
self.lowpass = LowPassFilter1d(
|
||||
cutoff=0.5 / ratio,
|
||||
half_width=0.6 / ratio,
|
||||
stride=ratio,
|
||||
kernel_size=self.kernel_size,
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
xx = self.lowpass(x)
|
||||
|
||||
return xx
|
||||
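A round-trip sketch of the two resamplers above; the import path is assumed from this diff's layout:

import torch

# Hypothetical sketch; path assumed from this diff's layout.
from postprocessing.mmaudio.ext.bigvgan_v2.alias_free_activation.torch.resample import (
    DownSample1d, UpSample1d)

up = UpSample1d(ratio=2)       # kernel_size defaults to 12 for ratio=2
down = DownSample1d(ratio=2)

x = torch.randn(1, 16, 800)    # [B, C, T]
x_up = up(x)                   # [1, 16, 1600]
x_rt = down(x_up)              # back to [1, 16, 800], band-limited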
439
postprocessing/mmaudio/ext/bigvgan_v2/bigvgan.py
Normal file
@ -0,0 +1,439 @@
|
||||
# Copyright (c) 2024 NVIDIA CORPORATION.
|
||||
# Licensed under the MIT license.
|
||||
|
||||
# Adapted from https://github.com/jik876/hifi-gan under the MIT license.
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import json
|
||||
import os
|
||||
from pathlib import Path
|
||||
from typing import Dict, Optional, Union
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
from huggingface_hub import PyTorchModelHubMixin, hf_hub_download
|
||||
from torch.nn import Conv1d, ConvTranspose1d
|
||||
from torch.nn.utils.parametrizations import weight_norm
|
||||
from torch.nn.utils.parametrize import remove_parametrizations
|
||||
|
||||
from ...ext.bigvgan_v2 import activations
|
||||
from ...ext.bigvgan_v2.alias_free_activation.torch.act import \
|
||||
Activation1d as TorchActivation1d
|
||||
from ...ext.bigvgan_v2.env import AttrDict
|
||||
from ...ext.bigvgan_v2.utils import get_padding, init_weights
|
||||
|
||||
|
||||
def load_hparams_from_json(path) -> AttrDict:
|
||||
with open(path) as f:
|
||||
data = f.read()
|
||||
return AttrDict(json.loads(data))
|
||||
|
||||
|
||||
class AMPBlock1(torch.nn.Module):
|
||||
"""
|
||||
AMPBlock applies Snake / SnakeBeta activation functions with trainable parameters that control periodicity, defined for each layer.
|
||||
AMPBlock1 has an additional self.convs2 containing Conv1d layers with a fixed dilation=1, one applied after each layer in self.convs1
|
||||
|
||||
Args:
|
||||
h (AttrDict): Hyperparameters.
|
||||
channels (int): Number of convolution channels.
|
||||
kernel_size (int): Size of the convolution kernel. Default is 3.
|
||||
dilation (tuple): Dilation rates for the convolutions. Each dilation layer has two convolutions. Default is (1, 3, 5).
|
||||
activation (str): Activation function type. Should be either 'snake' or 'snakebeta'. Default is None.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
h: AttrDict,
|
||||
channels: int,
|
||||
kernel_size: int = 3,
|
||||
dilation: tuple = (1, 3, 5),
|
||||
activation: str = None,
|
||||
):
|
||||
super().__init__()
|
||||
|
||||
self.h = h
|
||||
|
||||
self.convs1 = nn.ModuleList([
|
||||
weight_norm(
|
||||
Conv1d(
|
||||
channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
stride=1,
|
||||
dilation=d,
|
||||
padding=get_padding(kernel_size, d),
|
||||
)) for d in dilation
|
||||
])
|
||||
self.convs1.apply(init_weights)
|
||||
|
||||
self.convs2 = nn.ModuleList([
|
||||
weight_norm(
|
||||
Conv1d(
|
||||
channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
stride=1,
|
||||
dilation=1,
|
||||
padding=get_padding(kernel_size, 1),
|
||||
)) for _ in range(len(dilation))
|
||||
])
|
||||
self.convs2.apply(init_weights)
|
||||
|
||||
self.num_layers = len(self.convs1) + len(self.convs2) # Total number of conv layers
|
||||
|
||||
# Select which Activation1d, lazy-load cuda version to ensure backward compatibility
|
||||
if self.h.get("use_cuda_kernel", False):
|
||||
from alias_free_activation.cuda.activation1d import \
|
||||
Activation1d as CudaActivation1d
|
||||
|
||||
Activation1d = CudaActivation1d
|
||||
else:
|
||||
Activation1d = TorchActivation1d
|
||||
|
||||
# Activation functions
|
||||
if activation == "snake":
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.Snake(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
elif activation == "snakebeta":
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
"activation incorrectly specified. check the config file and look for 'activation'."
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
acts1, acts2 = self.activations[::2], self.activations[1::2]
|
||||
for c1, c2, a1, a2 in zip(self.convs1, self.convs2, acts1, acts2):
|
||||
xt = a1(x)
|
||||
xt = c1(xt)
|
||||
xt = a2(xt)
|
||||
xt = c2(xt)
|
||||
x = xt + x
|
||||
|
||||
return x
|
||||
|
||||
def remove_weight_norm(self):
|
||||
for l in self.convs1:
|
||||
remove_parametrizations(l, 'weight')
|
||||
for l in self.convs2:
|
||||
remove_parametrizations(l, 'weight')
|
||||
|
||||
|
||||
class AMPBlock2(torch.nn.Module):
|
||||
"""
|
||||
AMPBlock applies Snake / SnakeBeta activation functions with trainable parameters that control periodicity, defined for each layer.
|
||||
Unlike AMPBlock1, AMPBlock2 does not contain extra Conv1d layers with fixed dilation=1
|
||||
|
||||
Args:
|
||||
h (AttrDict): Hyperparameters.
|
||||
channels (int): Number of convolution channels.
|
||||
kernel_size (int): Size of the convolution kernel. Default is 3.
|
||||
dilation (tuple): Dilation rates for the convolutions. Each dilation layer has two convolutions. Default is (1, 3, 5).
|
||||
activation (str): Activation function type. Should be either 'snake' or 'snakebeta'. Default is None.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
h: AttrDict,
|
||||
channels: int,
|
||||
kernel_size: int = 3,
|
||||
dilation: tuple = (1, 3, 5),
|
||||
activation: str = None,
|
||||
):
|
||||
super().__init__()
|
||||
|
||||
self.h = h
|
||||
|
||||
self.convs = nn.ModuleList([
|
||||
weight_norm(
|
||||
Conv1d(
|
||||
channels,
|
||||
channels,
|
||||
kernel_size,
|
||||
stride=1,
|
||||
dilation=d,
|
||||
padding=get_padding(kernel_size, d),
|
||||
)) for d in dilation
|
||||
])
|
||||
self.convs.apply(init_weights)
|
||||
|
||||
self.num_layers = len(self.convs) # Total number of conv layers
|
||||
|
||||
# Select which Activation1d, lazy-load cuda version to ensure backward compatibility
|
||||
if self.h.get("use_cuda_kernel", False):
|
||||
from alias_free_activation.cuda.activation1d import \
|
||||
Activation1d as CudaActivation1d
|
||||
|
||||
Activation1d = CudaActivation1d
|
||||
else:
|
||||
Activation1d = TorchActivation1d
|
||||
|
||||
# Activation functions
|
||||
if activation == "snake":
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.Snake(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
elif activation == "snakebeta":
|
||||
self.activations = nn.ModuleList([
|
||||
Activation1d(
|
||||
activation=activations.SnakeBeta(channels, alpha_logscale=h.snake_logscale))
|
||||
for _ in range(self.num_layers)
|
||||
])
|
||||
else:
|
||||
raise NotImplementedError(
|
||||
"activation incorrectly specified. check the config file and look for 'activation'."
|
||||
)
|
||||
|
||||
def forward(self, x):
|
||||
for c, a in zip(self.convs, self.activations):
|
||||
xt = a(x)
|
||||
xt = c(xt)
|
||||
x = xt + x
|
||||
return x
|
||||
|
||||
def remove_weight_norm(self):
|
||||
for l in self.convs:
|
||||
remove_parametrizations(l, 'weight')
|
||||
|
||||
|
||||
class BigVGAN(
|
||||
torch.nn.Module,
|
||||
PyTorchModelHubMixin,
|
||||
library_name="bigvgan",
|
||||
repo_url="https://github.com/NVIDIA/BigVGAN",
|
||||
docs_url="https://github.com/NVIDIA/BigVGAN/blob/main/README.md",
|
||||
pipeline_tag="audio-to-audio",
|
||||
license="mit",
|
||||
tags=["neural-vocoder", "audio-generation", "arxiv:2206.04658"],
|
||||
):
|
||||
"""
|
||||
BigVGAN is a neural vocoder model that applies anti-aliased periodic activation for residual blocks (resblocks).
|
||||
New in BigVGAN-v2: it can optionally use optimized CUDA kernels for AMP (anti-aliased multi-periodicity) blocks.
|
||||
|
||||
Args:
|
||||
h (AttrDict): Hyperparameters.
|
||||
use_cuda_kernel (bool): If set to True, loads optimized CUDA kernels for AMP. This should be used for inference only, as training is not supported with CUDA kernels.
|
||||
|
||||
Note:
|
||||
- The `use_cuda_kernel` parameter should be used for inference only, as training with CUDA kernels is not supported.
|
||||
- Ensure that the activation function is correctly specified in the hyperparameters (h.activation).
|
||||
"""
|
||||
|
||||
def __init__(self, h: AttrDict, use_cuda_kernel: bool = False):
|
||||
super().__init__()
|
||||
self.h = h
|
||||
self.h["use_cuda_kernel"] = use_cuda_kernel
|
||||
|
||||
# Select which Activation1d, lazy-load cuda version to ensure backward compatibility
|
||||
if self.h.get("use_cuda_kernel", False):
|
||||
from alias_free_activation.cuda.activation1d import \
|
||||
Activation1d as CudaActivation1d
|
||||
|
||||
Activation1d = CudaActivation1d
|
||||
else:
|
||||
Activation1d = TorchActivation1d
|
||||
|
||||
self.num_kernels = len(h.resblock_kernel_sizes)
|
||||
self.num_upsamples = len(h.upsample_rates)
|
||||
|
||||
# Pre-conv
|
||||
self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3))
|
||||
|
||||
# Define which AMPBlock to use. BigVGAN uses AMPBlock1 as default
|
||||
if h.resblock == "1":
|
||||
resblock_class = AMPBlock1
|
||||
elif h.resblock == "2":
|
||||
resblock_class = AMPBlock2
|
||||
else:
|
||||
raise ValueError(
|
||||
f"Incorrect resblock class specified in hyperparameters. Got {h.resblock}")
|
||||
|
||||
# Transposed conv-based upsamplers. does not apply anti-aliasing
|
||||
self.ups = nn.ModuleList()
|
||||
for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
|
||||
self.ups.append(
|
||||
nn.ModuleList([
|
||||
weight_norm(
|
||||
ConvTranspose1d(
|
||||
h.upsample_initial_channel // (2**i),
|
||||
h.upsample_initial_channel // (2**(i + 1)),
|
||||
k,
|
||||
u,
|
||||
padding=(k - u) // 2,
|
||||
))
|
||||
]))
|
||||
|
||||
# Residual blocks using anti-aliased multi-periodicity composition modules (AMP)
|
||||
self.resblocks = nn.ModuleList()
|
||||
for i in range(len(self.ups)):
|
||||
ch = h.upsample_initial_channel // (2**(i + 1))
|
||||
for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
|
||||
self.resblocks.append(resblock_class(h, ch, k, d, activation=h.activation))
|
||||
|
||||
# Post-conv
|
||||
activation_post = (activations.Snake(ch, alpha_logscale=h.snake_logscale)
|
||||
if h.activation == "snake" else
|
||||
(activations.SnakeBeta(ch, alpha_logscale=h.snake_logscale)
|
||||
if h.activation == "snakebeta" else None))
|
||||
if activation_post is None:
|
||||
raise NotImplementedError(
|
||||
"activation incorrectly specified. check the config file and look for 'activation'."
|
||||
)
|
||||
|
||||
self.activation_post = Activation1d(activation=activation_post)
|
||||
|
||||
# Whether to use bias for the final conv_post. Default to True for backward compatibility
|
||||
self.use_bias_at_final = h.get("use_bias_at_final", True)
|
||||
self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3, bias=self.use_bias_at_final))
|
||||
|
||||
# Weight initialization
|
||||
for i in range(len(self.ups)):
|
||||
self.ups[i].apply(init_weights)
|
||||
self.conv_post.apply(init_weights)
|
||||
|
||||
# Final tanh activation. Defaults to True for backward compatibility
|
||||
self.use_tanh_at_final = h.get("use_tanh_at_final", True)
|
||||
|
||||
def forward(self, x):
|
||||
# Pre-conv
|
||||
x = self.conv_pre(x)
|
||||
|
||||
for i in range(self.num_upsamples):
|
||||
# Upsampling
|
||||
for i_up in range(len(self.ups[i])):
|
||||
x = self.ups[i][i_up](x)
|
||||
# AMP blocks
|
||||
xs = None
|
||||
for j in range(self.num_kernels):
|
||||
if xs is None:
|
||||
xs = self.resblocks[i * self.num_kernels + j](x)
|
||||
else:
|
||||
xs += self.resblocks[i * self.num_kernels + j](x)
|
||||
x = xs / self.num_kernels
|
||||
|
||||
# Post-conv
|
||||
x = self.activation_post(x)
|
||||
x = self.conv_post(x)
|
||||
# Final tanh activation
|
||||
if self.use_tanh_at_final:
|
||||
x = torch.tanh(x)
|
||||
else:
|
||||
x = torch.clamp(x, min=-1.0, max=1.0) # Bound the output to [-1, 1]
|
||||
|
||||
return x
|
||||
|
||||
def remove_weight_norm(self):
|
||||
try:
|
||||
print("Removing weight norm...")
|
||||
for l in self.ups:
|
||||
for l_i in l:
|
||||
remove_parametrizations(l_i, 'weight')
|
||||
for l in self.resblocks:
|
||||
l.remove_weight_norm()
|
||||
remove_parametrizations(self.conv_pre, 'weight')
|
||||
remove_parametrizations(self.conv_post, 'weight')
|
||||
except ValueError:
|
||||
print("[INFO] Model already removed weight norm. Skipping!")
|
||||
pass
|
||||
|
||||
# Additional methods for huggingface_hub support
|
||||
def _save_pretrained(self, save_directory: Path) -> None:
|
||||
"""Save weights and config.json from a Pytorch model to a local directory."""
|
||||
|
||||
model_path = save_directory / "bigvgan_generator.pt"
|
||||
torch.save({"generator": self.state_dict()}, model_path)
|
||||
|
||||
config_path = save_directory / "config.json"
|
||||
with open(config_path, "w") as config_file:
|
||||
json.dump(self.h, config_file, indent=4)
|
||||
|
||||
@classmethod
|
||||
def _from_pretrained(
|
||||
cls,
|
||||
*,
|
||||
model_id: str,
|
||||
revision: str,
|
||||
cache_dir: str,
|
||||
force_download: bool,
|
||||
proxies: Optional[Dict],
|
||||
resume_download: bool,
|
||||
local_files_only: bool,
|
||||
token: Union[str, bool, None],
|
||||
map_location: str = "cpu", # Additional argument
|
||||
strict: bool = False, # Additional argument
|
||||
use_cuda_kernel: bool = False,
|
||||
**model_kwargs,
|
||||
):
|
||||
"""Load Pytorch pretrained weights and return the loaded model."""
|
||||
|
||||
# Download and load hyperparameters (h) used by BigVGAN
|
||||
if os.path.isdir(model_id):
|
||||
print("Loading config.json from local directory")
|
||||
config_file = os.path.join(model_id, "config.json")
|
||||
else:
|
||||
config_file = hf_hub_download(
|
||||
repo_id=model_id,
|
||||
filename="config.json",
|
||||
revision=revision,
|
||||
cache_dir=cache_dir,
|
||||
force_download=force_download,
|
||||
proxies=proxies,
|
||||
resume_download=resume_download,
|
||||
token=token,
|
||||
local_files_only=local_files_only,
|
||||
)
|
||||
h = load_hparams_from_json(config_file)
|
||||
|
||||
# instantiate BigVGAN using h
|
||||
if use_cuda_kernel:
|
||||
print(
|
||||
f"[WARNING] You have specified use_cuda_kernel=True during BigVGAN.from_pretrained(). Only inference is supported (training is not implemented)!"
|
||||
)
|
||||
print(
|
||||
f"[WARNING] You need nvcc and ninja installed in your system that matches your PyTorch build is using to build the kernel. If not, the model will fail to initialize or generate incorrect waveform!"
|
||||
)
|
||||
print(
|
||||
f"[WARNING] For detail, see the official GitHub repository: https://github.com/NVIDIA/BigVGAN?tab=readme-ov-file#using-custom-cuda-kernel-for-synthesis"
|
||||
)
|
||||
model = cls(h, use_cuda_kernel=use_cuda_kernel)
|
||||
|
||||
# Download and load pretrained generator weight
|
||||
if os.path.isdir(model_id):
|
||||
print("Loading weights from local directory")
|
||||
model_file = os.path.join(model_id, "bigvgan_generator.pt")
|
||||
else:
|
||||
print(f"Loading weights from {model_id}")
|
||||
model_file = hf_hub_download(
|
||||
repo_id=model_id,
|
||||
filename="bigvgan_generator.pt",
|
||||
revision=revision,
|
||||
cache_dir=cache_dir,
|
||||
force_download=force_download,
|
||||
proxies=proxies,
|
||||
resume_download=resume_download,
|
||||
token=token,
|
||||
local_files_only=local_files_only,
|
||||
)
|
||||
|
||||
checkpoint_dict = torch.load(model_file, map_location=map_location, weights_only=True)
|
||||
|
||||
try:
|
||||
model.load_state_dict(checkpoint_dict["generator"])
|
||||
except RuntimeError:
|
||||
print(
|
||||
f"[INFO] the pretrained checkpoint does not contain weight norm. Loading the checkpoint after removing weight norm!"
|
||||
)
|
||||
model.remove_weight_norm()
|
||||
model.load_state_dict(checkpoint_dict["generator"])
|
||||
|
||||
return model
|
||||
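An end-to-end inference sketch for the class above; the Hugging Face repo id is only an example and is not pinned by this diff:

import torch

# Hypothetical inference sketch; the repo id is an example, not part of this diff.
from postprocessing.mmaudio.ext.bigvgan_v2.bigvgan import BigVGAN

model = BigVGAN.from_pretrained("nvidia/bigvgan_v2_44khz_128band_512x", use_cuda_kernel=False)
model.remove_weight_norm()
model.eval()

mel = torch.randn(1, model.h.num_mels, 100)   # [B, num_mels, frames]
with torch.inference_mode():
    wav = model(mel)                          # [B, 1, frames * prod(h.upsample_rates)]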
18
postprocessing/mmaudio/ext/bigvgan_v2/env.py
Normal file
@ -0,0 +1,18 @@
|
||||
# Adapted from https://github.com/jik876/hifi-gan under the MIT license.
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import os
|
||||
import shutil
|
||||
|
||||
|
||||
class AttrDict(dict):
|
||||
def __init__(self, *args, **kwargs):
|
||||
super(AttrDict, self).__init__(*args, **kwargs)
|
||||
self.__dict__ = self
|
||||
|
||||
|
||||
def build_env(config, config_name, path):
|
||||
t_path = os.path.join(path, config_name)
|
||||
if config != t_path:
|
||||
os.makedirs(path, exist_ok=True)
|
||||
shutil.copyfile(config, os.path.join(path, config_name))
|
||||
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2020 Jungil Kong
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2020 Edward Dixon
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
201
postprocessing/mmaudio/ext/bigvgan_v2/incl_licenses/LICENSE_3
Normal file
@ -0,0 +1,201 @@
|
||||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "[]"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright [yyyy] [name of copyright owner]
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
@ -0,0 +1,29 @@
|
||||
BSD 3-Clause License
|
||||
|
||||
Copyright (c) 2019, Seungwon Park 박승원
|
||||
All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are met:
|
||||
|
||||
1. Redistributions of source code must retain the above copyright notice, this
|
||||
list of conditions and the following disclaimer.
|
||||
|
||||
2. Redistributions in binary form must reproduce the above copyright notice,
|
||||
this list of conditions and the following disclaimer in the documentation
|
||||
and/or other materials provided with the distribution.
|
||||
|
||||
3. Neither the name of the copyright holder nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
|
||||
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
|
||||
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
|
||||
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
|
||||
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
@ -0,0 +1,16 @@
|
||||
Copyright 2020 Alexandre Défossez
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
|
||||
associated documentation files (the "Software"), to deal in the Software without restriction,
|
||||
including without limitation the rights to use, copy, modify, merge, publish, distribute,
|
||||
sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all copies or
|
||||
substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT
|
||||
NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
|
||||
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
|
||||
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2023-present, Descript
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2023 Charactr Inc.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
@ -0,0 +1,21 @@
|
||||
MIT License
|
||||
|
||||
Copyright (c) 2023 Amphion
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
|
||||
31
postprocessing/mmaudio/ext/bigvgan_v2/utils.py
Normal file
@ -0,0 +1,31 @@
|
||||
# Adapted from https://github.com/jik876/hifi-gan under the MIT license.
|
||||
# LICENSE is in incl_licenses directory.
|
||||
|
||||
import os
|
||||
|
||||
import torch
|
||||
from torch.nn.utils import weight_norm
|
||||
|
||||
|
||||
def init_weights(m, mean=0.0, std=0.01):
|
||||
classname = m.__class__.__name__
|
||||
if classname.find("Conv") != -1:
|
||||
m.weight.data.normal_(mean, std)
|
||||
|
||||
|
||||
def apply_weight_norm(m):
|
||||
classname = m.__class__.__name__
|
||||
if classname.find("Conv") != -1:
|
||||
weight_norm(m)
|
||||
|
||||
|
||||
def get_padding(kernel_size, dilation=1):
|
||||
return int((kernel_size * dilation - dilation) / 2)
|
||||
|
||||
|
||||
def load_checkpoint(filepath, device):
|
||||
assert os.path.isfile(filepath)
|
||||
print(f"Loading '{filepath}'")
|
||||
checkpoint_dict = torch.load(filepath, map_location=device)
|
||||
print("Complete.")
|
||||
return checkpoint_dict
|
||||
106
postprocessing/mmaudio/ext/mel_converter.py
Normal file
@ -0,0 +1,106 @@
|
||||
# Reference: # https://github.com/bytedance/Make-An-Audio-2
|
||||
from typing import Literal
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
from librosa.filters import mel as librosa_mel_fn
|
||||
|
||||
|
||||
def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, *, norm_fn):
|
||||
return norm_fn(torch.clamp(x, min=clip_val) * C)
|
||||
|
||||
|
||||
def spectral_normalize_torch(magnitudes, norm_fn):
|
||||
output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn)
|
||||
return output
|
||||
|
||||
|
||||
class MelConverter(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
sampling_rate: float,
|
||||
n_fft: int,
|
||||
num_mels: int,
|
||||
hop_size: int,
|
||||
win_size: int,
|
||||
fmin: float,
|
||||
fmax: float,
|
||||
norm_fn,
|
||||
):
|
||||
super().__init__()
|
||||
self.sampling_rate = sampling_rate
|
||||
self.n_fft = n_fft
|
||||
self.num_mels = num_mels
|
||||
self.hop_size = hop_size
|
||||
self.win_size = win_size
|
||||
self.fmin = fmin
|
||||
self.fmax = fmax
|
||||
self.norm_fn = norm_fn
|
||||
|
||||
mel = librosa_mel_fn(sr=self.sampling_rate,
|
||||
n_fft=self.n_fft,
|
||||
n_mels=self.num_mels,
|
||||
fmin=self.fmin,
|
||||
fmax=self.fmax)
|
||||
mel_basis = torch.from_numpy(mel).float()
|
||||
hann_window = torch.hann_window(self.win_size)
|
||||
|
||||
self.register_buffer('mel_basis', mel_basis)
|
||||
self.register_buffer('hann_window', hann_window)
|
||||
|
||||
@property
|
||||
def device(self):
|
||||
return self.mel_basis.device
|
||||
|
||||
def forward(self, waveform: torch.Tensor, center: bool = False) -> torch.Tensor:
|
||||
waveform = waveform.clamp(min=-1., max=1.).to(self.device)
|
||||
|
||||
waveform = torch.nn.functional.pad(
|
||||
waveform.unsqueeze(1),
|
||||
[int((self.n_fft - self.hop_size) / 2),
|
||||
int((self.n_fft - self.hop_size) / 2)],
|
||||
mode='reflect')
|
||||
waveform = waveform.squeeze(1)
|
||||
|
||||
spec = torch.stft(waveform,
|
||||
self.n_fft,
|
||||
hop_length=self.hop_size,
|
||||
win_length=self.win_size,
|
||||
window=self.hann_window,
|
||||
center=center,
|
||||
pad_mode='reflect',
|
||||
normalized=False,
|
||||
onesided=True,
|
||||
return_complex=True)
|
||||
|
||||
spec = torch.view_as_real(spec)
|
||||
spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9))
|
||||
spec = torch.matmul(self.mel_basis, spec)
|
||||
spec = spectral_normalize_torch(spec, self.norm_fn)
|
||||
|
||||
return spec
|
||||
|
||||
|
||||
def get_mel_converter(mode: Literal['16k', '44k']) -> MelConverter:
|
||||
if mode == '16k':
|
||||
return MelConverter(sampling_rate=16_000,
|
||||
n_fft=1024,
|
||||
num_mels=80,
|
||||
hop_size=256,
|
||||
win_size=1024,
|
||||
fmin=0,
|
||||
fmax=8_000,
|
||||
norm_fn=torch.log10)
|
||||
elif mode == '44k':
|
||||
return MelConverter(sampling_rate=44_100,
|
||||
n_fft=2048,
|
||||
num_mels=128,
|
||||
hop_size=512,
|
||||
win_size=2048,
|
||||
fmin=0,
|
||||
fmax=44100 / 2,
|
||||
norm_fn=torch.log)
|
||||
else:
|
||||
raise ValueError(f'Unknown mode: {mode}')
|
||||
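A small usage sketch of the two presets above, using the module path from this diff's file listing:

import torch

# Hypothetical sketch of get_mel_converter from the module above.
from postprocessing.mmaudio.ext.mel_converter import get_mel_converter

converter = get_mel_converter('16k')            # 80 mel bins, n_fft 1024, hop 256, 16 kHz
waveform = torch.randn(2, 16000).clamp(-1, 1)   # [B, samples]
mel = converter(waveform)                       # [2, 80, frames], log10-compressed mel spectrogram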
35
postprocessing/mmaudio/ext/rotary_embeddings.py
Normal file
@ -0,0 +1,35 @@
|
||||
from typing import Union
|
||||
|
||||
import torch
|
||||
from einops import rearrange
|
||||
from torch import Tensor
|
||||
|
||||
# Ref: https://github.com/black-forest-labs/flux/blob/main/src/flux/math.py
|
||||
# Ref: https://github.com/lucidrains/rotary-embedding-torch
|
||||
|
||||
|
||||
def compute_rope_rotations(length: int,
|
||||
dim: int,
|
||||
theta: int,
|
||||
*,
|
||||
freq_scaling: float = 1.0,
|
||||
device: Union[torch.device, str] = 'cpu') -> Tensor:
|
||||
assert dim % 2 == 0
|
||||
|
||||
with torch.amp.autocast(device_type='cuda', enabled=False):
|
||||
pos = torch.arange(length, dtype=torch.float32, device=device)
|
||||
freqs = 1.0 / (theta**(torch.arange(0, dim, 2, dtype=torch.float32, device=device) / dim))
|
||||
freqs *= freq_scaling
|
||||
|
||||
rot = torch.einsum('..., f -> ... f', pos, freqs)
|
||||
rot = torch.stack([torch.cos(rot), -torch.sin(rot), torch.sin(rot), torch.cos(rot)], dim=-1)
|
||||
rot = rearrange(rot, 'n d (i j) -> 1 n d i j', i=2, j=2)
|
||||
return rot
|
||||
|
||||
|
||||
def apply_rope(x: Tensor, rot: Tensor) -> Tensor:
|
||||
with torch.amp.autocast(device_type='cuda', enabled=False):
|
||||
_x = x.float()
|
||||
_x = _x.view(*_x.shape[:-1], -1, 1, 2)
|
||||
x_out = rot[..., 0] * _x[..., 0] + rot[..., 1] * _x[..., 1]
|
||||
return x_out.reshape(*x.shape).to(dtype=x.dtype)
|
||||
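A shape-level sketch of the RoPE helpers above, using the module path from this diff's file listing:

import torch

# Hypothetical sketch of the helpers defined above.
from postprocessing.mmaudio.ext.rotary_embeddings import apply_rope, compute_rope_rotations

seq_len, num_heads, head_dim = 128, 8, 64
rot = compute_rope_rotations(seq_len, head_dim, theta=10_000)   # [1, seq_len, head_dim // 2, 2, 2]

q = torch.randn(2, num_heads, seq_len, head_dim)                # [B, H, N, D]
q_rot = apply_rope(q, rot)                                      # same shape, positions encoded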
183
postprocessing/mmaudio/ext/stft_converter.py
Normal file
@ -0,0 +1,183 @@
|
||||
# Reference: # https://github.com/bytedance/Make-An-Audio-2
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torchaudio
|
||||
from einops import rearrange
|
||||
from librosa.filters import mel as librosa_mel_fn
|
||||
|
||||
|
||||
def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, norm_fn=torch.log10):
|
||||
return norm_fn(torch.clamp(x, min=clip_val) * C)
|
||||
|
||||
|
||||
def spectral_normalize_torch(magnitudes, norm_fn):
|
||||
output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn)
|
||||
return output
|
||||
|
||||
|
||||
class STFTConverter(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
sampling_rate: float = 16_000,
|
||||
n_fft: int = 1024,
|
||||
num_mels: int = 128,
|
||||
hop_size: int = 256,
|
||||
win_size: int = 1024,
|
||||
fmin: float = 0,
|
||||
fmax: float = 8_000,
|
||||
norm_fn=torch.log,
|
||||
):
|
||||
super().__init__()
|
||||
self.sampling_rate = sampling_rate
|
||||
self.n_fft = n_fft
|
||||
self.num_mels = num_mels
|
||||
self.hop_size = hop_size
|
||||
self.win_size = win_size
|
||||
self.fmin = fmin
|
||||
self.fmax = fmax
|
||||
self.norm_fn = norm_fn
|
||||
|
||||
mel = librosa_mel_fn(sr=self.sampling_rate,
|
||||
n_fft=self.n_fft,
|
||||
n_mels=self.num_mels,
|
||||
fmin=self.fmin,
|
||||
fmax=self.fmax)
|
||||
mel_basis = torch.from_numpy(mel).float()
|
||||
hann_window = torch.hann_window(self.win_size)
|
||||
|
||||
self.register_buffer('mel_basis', mel_basis)
|
||||
self.register_buffer('hann_window', hann_window)
|
||||
|
||||
@property
|
||||
def device(self):
|
||||
return self.hann_window.device
|
||||
|
||||
def forward(self, waveform: torch.Tensor) -> torch.Tensor:
|
||||
# input: batch_size * length
|
||||
bs = waveform.shape[0]
|
||||
waveform = waveform.clamp(min=-1., max=1.)
|
||||
|
||||
spec = torch.stft(waveform,
|
||||
self.n_fft,
|
||||
hop_length=self.hop_size,
|
||||
win_length=self.win_size,
|
||||
window=self.hann_window,
|
||||
center=True,
|
||||
pad_mode='reflect',
|
||||
normalized=False,
|
||||
onesided=True,
|
||||
return_complex=True)
|
||||
|
||||
spec = torch.view_as_real(spec)
|
||||
# print('After stft', spec.shape, spec.min(), spec.max(), spec.mean())
|
||||
|
||||
power = spec.pow(2).sum(-1)
|
||||
angle = torch.atan2(spec[..., 1], spec[..., 0])
|
||||
|
||||
print('power', power.shape, power.min(), power.max(), power.mean())
|
||||
print('angle', angle.shape, angle.min(), angle.max(), angle.mean())
|
||||
|
||||
# print('mel', self.mel_basis.shape, self.mel_basis.min(), self.mel_basis.max(),
|
||||
# self.mel_basis.mean())
|
||||
|
||||
# spec = rearrange(spec, 'b f t c -> (b c) f t')
|
||||
|
||||
# spec = self.mel_transform(spec)
|
||||
|
||||
# spec = torch.matmul(self.mel_basis, spec)
|
||||
|
||||
# print('After mel', spec.shape, spec.min(), spec.max(), spec.mean())
|
||||
|
||||
# spec = spectral_normalize_torch(spec, self.norm_fn)
|
||||
|
||||
# print('After norm', spec.shape, spec.min(), spec.max(), spec.mean())
|
||||
|
||||
# compute magnitude
|
||||
# magnitude = torch.sqrt((spec**2).sum(-1))
|
||||
# normalize by magnitude
|
||||
# scaled_magnitude = torch.log10(magnitude.clamp(min=1e-5)) * 10
|
||||
# spec = spec / magnitude.unsqueeze(-1) * scaled_magnitude.unsqueeze(-1)
|
||||
|
||||
# power = torch.log10(power.clamp(min=1e-5)) * 10
|
||||
power = torch.log10(power.clamp(min=1e-5))
|
||||
|
||||
print('After scaling', power.shape, power.min(), power.max(), power.mean())
|
||||
|
||||
spec = torch.stack([power, angle], dim=-1)
|
||||
|
||||
# spec = rearrange(spec, '(b c) f t -> b c f t', b=bs)
|
||||
spec = rearrange(spec, 'b f t c -> b c f t', b=bs)
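# returned layout: (batch, 2, freq_bins, frames) with channel 0 = log10 power and channel 1 = phase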
|
||||
|
||||
# spec[:, :, 400:] = 0
|
||||
|
||||
return spec
|
||||
|
||||
def invert(self, spec: torch.Tensor, length: int) -> torch.Tensor:
|
||||
bs = spec.shape[0]
|
||||
|
||||
# spec = rearrange(spec, 'b c f t -> (b c) f t')
|
||||
# print(spec.shape, self.mel_basis.shape)
|
||||
# spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution
|
||||
# spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec
|
||||
|
||||
# spec = self.invmel_transform(spec)
|
||||
|
||||
spec = rearrange(spec, 'b c f t -> b f t c', b=bs).contiguous()
|
||||
|
||||
# spec[..., 0] = 10**(spec[..., 0] / 10)
|
||||
|
||||
power = spec[..., 0]
|
||||
power = 10**power
|
||||
|
||||
# print('After unscaling', spec[..., 0].shape, spec[..., 0].min(), spec[..., 0].max(),
|
||||
# spec[..., 0].mean())
|
||||
|
||||
unit_vector = torch.stack([
|
||||
torch.cos(spec[..., 1]),
|
||||
torch.sin(spec[..., 1]),
|
||||
], dim=-1)
|
||||
|
||||
spec = torch.sqrt(power).unsqueeze(-1) * unit_vector  # magnitude times (cos, sin); unsqueeze is needed so the shapes broadcast
|
||||
|
||||
# spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous()
|
||||
spec = torch.view_as_complex(spec)
|
||||
|
||||
waveform = torch.istft(
|
||||
spec,
|
||||
self.n_fft,
|
||||
length=length,
|
||||
hop_length=self.hop_size,
|
||||
win_length=self.win_size,
|
||||
window=self.hann_window,
|
||||
center=True,
|
||||
normalized=False,
|
||||
onesided=True,
|
||||
return_complex=False,
|
||||
)
|
||||
|
||||
return waveform
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
converter = STFTConverter(sampling_rate=16000)
|
||||
|
||||
signal = torchaudio.load('./output/ZZ6GRocWW38_000090.wav')[0]
|
||||
# resample signal at 44100 Hz
|
||||
# signal = torchaudio.transforms.Resample(16_000, 44_100)(signal)
|
||||
|
||||
L = signal.shape[1]
|
||||
print('Input signal', signal.shape)
|
||||
spec = converter(signal)
|
||||
|
||||
print('Final spec', spec.shape)
|
||||
|
||||
signal_recon = converter.invert(spec, length=L)
|
||||
print('Output signal', signal_recon.shape, signal_recon.min(), signal_recon.max(),
|
||||
signal_recon.mean())
|
||||
|
||||
print('MSE', torch.nn.functional.mse_loss(signal, signal_recon))
|
||||
torchaudio.save('./output/ZZ6GRocWW38_000090_recon.wav', signal_recon, 16000)
234
postprocessing/mmaudio/ext/stft_converter_mel.py
Normal file
@@ -0,0 +1,234 @@
# Reference: # https://github.com/bytedance/Make-An-Audio-2
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torchaudio
|
||||
from einops import rearrange
|
||||
from librosa.filters import mel as librosa_mel_fn
|
||||
|
||||
|
||||
def dynamic_range_compression_torch(x, C=1, clip_val=1e-5, norm_fn=torch.log10):
|
||||
return norm_fn(torch.clamp(x, min=clip_val) * C)
|
||||
|
||||
|
||||
def spectral_normalize_torch(magnitudes, norm_fn):
|
||||
output = dynamic_range_compression_torch(magnitudes, norm_fn=norm_fn)
|
||||
return output
|
||||
|
||||
|
||||
class STFTConverter(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
sampling_rate: float = 16_000,
|
||||
n_fft: int = 1024,
|
||||
num_mels: int = 128,
|
||||
hop_size: int = 256,
|
||||
win_size: int = 1024,
|
||||
fmin: float = 0,
|
||||
fmax: float = 8_000,
|
||||
norm_fn=torch.log,
|
||||
):
|
||||
super().__init__()
|
||||
self.sampling_rate = sampling_rate
|
||||
self.n_fft = n_fft
|
||||
self.num_mels = num_mels
|
||||
self.hop_size = hop_size
|
||||
self.win_size = win_size
|
||||
self.fmin = fmin
|
||||
self.fmax = fmax
|
||||
self.norm_fn = norm_fn
|
||||
|
||||
mel = librosa_mel_fn(sr=self.sampling_rate,
|
||||
n_fft=self.n_fft,
|
||||
n_mels=self.num_mels,
|
||||
fmin=self.fmin,
|
||||
fmax=self.fmax)
|
||||
mel_basis = torch.from_numpy(mel).float()
|
||||
hann_window = torch.hann_window(self.win_size)
|
||||
|
||||
self.register_buffer('mel_basis', mel_basis)
|
||||
self.register_buffer('hann_window', hann_window)
|
||||
|
||||
@property
|
||||
def device(self):
|
||||
return self.hann_window.device
|
||||
|
||||
def forward(self, waveform: torch.Tensor) -> torch.Tensor:
|
||||
# input: batch_size * length
|
||||
bs = waveform.shape[0]
|
||||
waveform = waveform.clamp(min=-1., max=1.)
|
||||
|
||||
spec = torch.stft(waveform,
|
||||
self.n_fft,
|
||||
hop_length=self.hop_size,
|
||||
win_length=self.win_size,
|
||||
window=self.hann_window,
|
||||
center=True,
|
||||
pad_mode='reflect',
|
||||
normalized=False,
|
||||
onesided=True,
|
||||
return_complex=True)
|
||||
|
||||
spec = torch.view_as_real(spec)
|
||||
# print('After stft', spec.shape, spec.min(), spec.max(), spec.mean())
|
||||
|
||||
power = (spec.pow(2).sum(-1))**(0.5)
|
||||
angle = torch.atan2(spec[..., 1], spec[..., 0])
|
||||
|
||||
print('power 1', power.shape, power.min(), power.max(), power.mean())
|
||||
print('angle 1', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2])
|
||||
|
||||
# print('mel', self.mel_basis.shape, self.mel_basis.min(), self.mel_basis.max(),
|
||||
# self.mel_basis.mean())
|
||||
|
||||
# spec = self.mel_transform(spec)
|
||||
|
||||
# power = torch.matmul(self.mel_basis, power)
|
||||
|
||||
spec = rearrange(spec, 'b f t c -> (b c) f t')
|
||||
spec = self.mel_basis.unsqueeze(0) @ spec
|
||||
spec = rearrange(spec, '(b c) f t -> b f t c', b=bs)
|
||||
|
||||
power = (spec.pow(2).sum(-1))**(0.5)
|
||||
angle = torch.atan2(spec[..., 1], spec[..., 0])
|
||||
|
||||
print('power', power.shape, power.min(), power.max(), power.mean())
|
||||
print('angle', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2])
|
||||
|
||||
# print('After mel', spec.shape, spec.min(), spec.max(), spec.mean())
|
||||
|
||||
# spec = spectral_normalize_torch(spec, self.norm_fn)
|
||||
|
||||
# print('After norm', spec.shape, spec.min(), spec.max(), spec.mean())
|
||||
|
||||
# compute magnitude
|
||||
# magnitude = torch.sqrt((spec**2).sum(-1))
|
||||
# normalize by magnitude
|
||||
# scaled_magnitude = torch.log10(magnitude.clamp(min=1e-5)) * 10
|
||||
# spec = spec / magnitude.unsqueeze(-1) * scaled_magnitude.unsqueeze(-1)
|
||||
|
||||
# power = torch.log10(power.clamp(min=1e-5)) * 10
|
||||
power = torch.log10(power.clamp(min=1e-8))
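# log-compress the mel-projected magnitude (clamped to avoid log of zero)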
|
||||
|
||||
print('After scaling', power.shape, power.min(), power.max(), power.mean())
|
||||
|
||||
# spec = torch.stack([power, angle], dim=-1)
|
||||
|
||||
# spec = rearrange(spec, '(b c) f t -> b c f t', b=bs)
|
||||
# spec = rearrange(spec, 'b f t c -> b c f t', b=bs)
|
||||
|
||||
# spec[:, :, 400:] = 0
|
||||
|
||||
return power, angle
|
||||
# return spec[..., 0], spec[..., 1]
|
||||
|
||||
def invert(self, spec: torch.Tensor, length: int) -> torch.Tensor:
|
||||
|
||||
power, angle = spec
|
||||
|
||||
bs = power.shape[0]
|
||||
|
||||
# spec = rearrange(spec, 'b c f t -> (b c) f t')
|
||||
# print(spec.shape, self.mel_basis.shape)
|
||||
# spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution
|
||||
# spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec
|
||||
|
||||
# spec = self.invmel_transform(spec)
|
||||
|
||||
# spec = rearrange(spec, 'b c f t -> b f t c', b=bs).contiguous()
|
||||
|
||||
# spec[..., 0] = 10**(spec[..., 0] / 10)
|
||||
|
||||
# power = spec[..., 0]
|
||||
power = 10**power
|
||||
|
||||
# print('After unscaling', spec[..., 0].shape, spec[..., 0].min(), spec[..., 0].max(),
|
||||
# spec[..., 0].mean())
|
||||
|
||||
unit_vector = torch.stack([
|
||||
torch.cos(angle),
|
||||
torch.sin(angle),
|
||||
], dim=-1)
|
||||
|
||||
spec = power.unsqueeze(-1) * unit_vector
|
||||
|
||||
# power = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), power).solution
|
||||
spec = rearrange(spec, 'b f t c -> (b c) f t')
|
||||
spec = torch.linalg.pinv(self.mel_basis.unsqueeze(0)) @ spec
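# the pseudo-inverse of the mel filterbank approximately maps the mel-domain
# spectrogram back onto the linear-frequency STFT bins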
|
||||
# spec = torch.linalg.lstsq(self.mel_basis.unsqueeze(0), spec).solution
|
||||
spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous()
|
||||
|
||||
power = (spec.pow(2).sum(-1))**(0.5)
|
||||
angle = torch.atan2(spec[..., 1], spec[..., 0])
|
||||
|
||||
print('power 2', power.shape, power.min(), power.max(), power.mean())
|
||||
print('angle 2', angle.shape, angle.min(), angle.max(), angle.mean(), angle[:, :2, :2])
|
||||
|
||||
# spec = rearrange(spec, '(b c) f t -> b f t c', b=bs).contiguous()
|
||||
spec = torch.view_as_complex(spec)
|
||||
|
||||
waveform = torch.istft(
|
||||
spec,
|
||||
self.n_fft,
|
||||
length=length,
|
||||
hop_length=self.hop_size,
|
||||
win_length=self.win_size,
|
||||
window=self.hann_window,
|
||||
center=True,
|
||||
normalized=False,
|
||||
onesided=True,
|
||||
return_complex=False,
|
||||
)
|
||||
|
||||
return waveform
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
converter = STFTConverter(sampling_rate=16000)
|
||||
|
||||
signal = torchaudio.load('./output/ZZ6GRocWW38_000090.wav')[0]
|
||||
# resample signal at 44100 Hz
|
||||
# signal = torchaudio.transforms.Resample(16_000, 44_100)(signal)
|
||||
|
||||
L = signal.shape[1]
|
||||
print('Input signal', signal.shape)
|
||||
spec = converter(signal)
|
||||
|
||||
power, angle = spec
|
||||
|
||||
# print(power.shape, angle.shape)
|
||||
# print(power, power.min(), power.max(), power.mean())
|
||||
# power = power.clamp(-1, 1)
|
||||
# angle = angle.clamp(-1, 1)
|
||||
|
||||
import matplotlib.pyplot as plt
|
||||
|
||||
# Visualize power
|
||||
plt.figure()
|
||||
plt.imshow(power[0].detach().numpy(), aspect='auto', origin='lower')
|
||||
plt.colorbar()
|
||||
plt.title('Power')
|
||||
plt.xlabel('Time')
|
||||
plt.ylabel('Frequency')
|
||||
plt.savefig('./output/power.png')
|
||||
|
||||
# Visualize angle
|
||||
plt.figure()
|
||||
plt.imshow(angle[0].detach().numpy(), aspect='auto', origin='lower')
|
||||
plt.colorbar()
|
||||
plt.title('Angle')
|
||||
plt.xlabel('Time')
|
||||
plt.ylabel('Frequency')
|
||||
plt.savefig('./output/angle.png')
|
||||
|
||||
# print('Final spec', spec.shape)
|
||||
|
||||
signal_recon = converter.invert(spec, length=L)
|
||||
print('Output signal', signal_recon.shape, signal_recon.min(), signal_recon.max(),
|
||||
signal_recon.mean())
|
||||
|
||||
print('MSE', torch.nn.functional.mse_loss(signal, signal_recon))
|
||||
torchaudio.save('./output/ZZ6GRocWW38_000090_recon.wav', signal_recon, 16000)
21
postprocessing/mmaudio/ext/synchformer/LICENSE
Normal file
@@ -0,0 +1,21 @@
MIT License
|
||||
|
||||
Copyright (c) 2024 Vladimir Iashin
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in all
|
||||
copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
|
||||
SOFTWARE.
1
postprocessing/mmaudio/ext/synchformer/__init__.py
Normal file
@@ -0,0 +1 @@
# from .synchformer import Synchformer
84
postprocessing/mmaudio/ext/synchformer/divided_224_16x4.yaml
Normal file
@@ -0,0 +1,84 @@
TRAIN:
|
||||
ENABLE: True
|
||||
DATASET: Ssv2
|
||||
BATCH_SIZE: 32
|
||||
EVAL_PERIOD: 5
|
||||
CHECKPOINT_PERIOD: 5
|
||||
AUTO_RESUME: True
|
||||
CHECKPOINT_EPOCH_RESET: True
|
||||
CHECKPOINT_FILE_PATH: /checkpoint/fmetze/neurips_sota/40944587/checkpoints/checkpoint_epoch_00035.pyth
|
||||
DATA:
|
||||
NUM_FRAMES: 16
|
||||
SAMPLING_RATE: 4
|
||||
TRAIN_JITTER_SCALES: [256, 320]
|
||||
TRAIN_CROP_SIZE: 224
|
||||
TEST_CROP_SIZE: 224
|
||||
INPUT_CHANNEL_NUM: [3]
|
||||
MEAN: [0.5, 0.5, 0.5]
|
||||
STD: [0.5, 0.5, 0.5]
|
||||
PATH_TO_DATA_DIR: /private/home/mandelapatrick/slowfast/data/ssv2
|
||||
PATH_PREFIX: /datasets01/SomethingV2/092720/20bn-something-something-v2-frames
|
||||
INV_UNIFORM_SAMPLE: True
|
||||
RANDOM_FLIP: False
|
||||
REVERSE_INPUT_CHANNEL: True
|
||||
USE_RAND_AUGMENT: True
|
||||
RE_PROB: 0.0
|
||||
USE_REPEATED_AUG: False
|
||||
USE_RANDOM_RESIZE_CROPS: False
|
||||
COLORJITTER: False
|
||||
GRAYSCALE: False
|
||||
GAUSSIAN: False
|
||||
SOLVER:
|
||||
BASE_LR: 1e-4
|
||||
LR_POLICY: steps_with_relative_lrs
|
||||
LRS: [1, 0.1, 0.01]
|
||||
STEPS: [0, 20, 30]
|
||||
MAX_EPOCH: 35
|
||||
MOMENTUM: 0.9
|
||||
WEIGHT_DECAY: 5e-2
|
||||
WARMUP_EPOCHS: 0.0
|
||||
OPTIMIZING_METHOD: adamw
|
||||
USE_MIXED_PRECISION: True
|
||||
SMOOTHING: 0.2
|
||||
SLOWFAST:
|
||||
ALPHA: 8
|
||||
VIT:
|
||||
PATCH_SIZE: 16
|
||||
PATCH_SIZE_TEMP: 2
|
||||
CHANNELS: 3
|
||||
EMBED_DIM: 768
|
||||
DEPTH: 12
|
||||
NUM_HEADS: 12
|
||||
MLP_RATIO: 4
|
||||
QKV_BIAS: True
|
||||
VIDEO_INPUT: True
|
||||
TEMPORAL_RESOLUTION: 8
|
||||
USE_MLP: True
|
||||
DROP: 0.0
|
||||
POS_DROPOUT: 0.0
|
||||
DROP_PATH: 0.2
|
||||
IM_PRETRAINED: True
|
||||
HEAD_DROPOUT: 0.0
|
||||
HEAD_ACT: tanh
|
||||
PRETRAINED_WEIGHTS: vit_1k
|
||||
ATTN_LAYER: divided
|
||||
MODEL:
|
||||
NUM_CLASSES: 174
|
||||
ARCH: slow
|
||||
MODEL_NAME: VisionTransformer
|
||||
LOSS_FUNC: cross_entropy
|
||||
TEST:
|
||||
ENABLE: True
|
||||
DATASET: Ssv2
|
||||
BATCH_SIZE: 64
|
||||
NUM_ENSEMBLE_VIEWS: 1
|
||||
NUM_SPATIAL_CROPS: 3
|
||||
DATA_LOADER:
|
||||
NUM_WORKERS: 4
|
||||
PIN_MEMORY: True
|
||||
NUM_GPUS: 8
|
||||
NUM_SHARDS: 4
|
||||
RNG_SEED: 0
|
||||
OUTPUT_DIR: .
|
||||
TENSORBOARD:
|
||||
ENABLE: True
400
postprocessing/mmaudio/ext/synchformer/motionformer.py
Normal file
@@ -0,0 +1,400 @@
import logging
|
||||
from pathlib import Path
|
||||
|
||||
import einops
|
||||
import torch
|
||||
from omegaconf import OmegaConf
|
||||
from timm.layers import trunc_normal_
|
||||
from torch import nn
|
||||
|
||||
from .utils import check_if_file_exists_else_download
|
||||
from .video_model_builder import VisionTransformer
|
||||
|
||||
FILE2URL = {
|
||||
# cfg
|
||||
'motionformer_224_16x4.yaml':
|
||||
'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/motionformer_224_16x4.yaml',
|
||||
'joint_224_16x4.yaml':
|
||||
'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/joint_224_16x4.yaml',
|
||||
'divided_224_16x4.yaml':
|
||||
'https://raw.githubusercontent.com/facebookresearch/Motionformer/bf43d50/configs/SSV2/divided_224_16x4.yaml',
|
||||
# ckpt
|
||||
'ssv2_motionformer_224_16x4.pyth':
|
||||
'https://dl.fbaipublicfiles.com/motionformer/ssv2_motionformer_224_16x4.pyth',
|
||||
'ssv2_joint_224_16x4.pyth':
|
||||
'https://dl.fbaipublicfiles.com/motionformer/ssv2_joint_224_16x4.pyth',
|
||||
'ssv2_divided_224_16x4.pyth':
|
||||
'https://dl.fbaipublicfiles.com/motionformer/ssv2_divided_224_16x4.pyth',
|
||||
}
|
||||
|
||||
|
||||
class MotionFormer(VisionTransformer):
|
||||
''' This class serves three purposes:
|
||||
1. Renames the class to MotionFormer.
|
||||
2. Downloads the cfg from the original repo and patches it if needed.
|
||||
3. Takes care of feature extraction by redefining .forward()
|
||||
- if `extract_features=True` and `factorize_space_time=False`,
|
||||
the output is of shape (B, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8
|
||||
- if `extract_features=True` and `factorize_space_time=True`, the output is of shape (B*S, D)
|
||||
and spatial and temporal transformer encoder layers are used.
|
||||
- if `extract_features=True` and `factorize_space_time=True` as well as `add_global_repr=True`
|
||||
the output is of shape (B, D) and spatial and temporal transformer encoder layers
|
||||
are used as well as the global representation is extracted from segments (extra pos emb
|
||||
is added).
|
||||
'''
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
extract_features: bool = False,
|
||||
ckpt_path: str = None,
|
||||
factorize_space_time: bool = None,
|
||||
agg_space_module: str = None,
|
||||
agg_time_module: str = None,
|
||||
add_global_repr: bool = True,
|
||||
agg_segments_module: str = None,
|
||||
max_segments: int = None,
|
||||
):
|
||||
self.extract_features = extract_features
|
||||
self.ckpt_path = ckpt_path
|
||||
self.factorize_space_time = factorize_space_time
|
||||
|
||||
if self.ckpt_path is not None:
|
||||
check_if_file_exists_else_download(self.ckpt_path, FILE2URL)
|
||||
ckpt = torch.load(self.ckpt_path, map_location='cpu')
|
||||
mformer_ckpt2cfg = {
|
||||
'ssv2_motionformer_224_16x4.pyth': 'motionformer_224_16x4.yaml',
|
||||
'ssv2_joint_224_16x4.pyth': 'joint_224_16x4.yaml',
|
||||
'ssv2_divided_224_16x4.pyth': 'divided_224_16x4.yaml',
|
||||
}
|
||||
# init from motionformer ckpt or from our Stage I ckpt
|
||||
# depending on whether the feat extractor was pre-trained on AVCLIPMoCo or not, we need to
|
||||
# load the state dict differently
|
||||
was_pt_on_avclip = self.ckpt_path.endswith(
|
||||
'.pt') # checks if it is a stage I ckpt (FIXME: a bit generic)
|
||||
if self.ckpt_path.endswith(tuple(mformer_ckpt2cfg.keys())):
|
||||
cfg_fname = mformer_ckpt2cfg[Path(self.ckpt_path).name]
|
||||
elif was_pt_on_avclip:
|
||||
# TODO: this is a hack, we should be able to get the cfg from the ckpt (earlier ckpt didn't have it)
|
||||
s1_cfg = ckpt.get('args', None) # Stage I cfg
|
||||
if s1_cfg is not None:
|
||||
s1_vfeat_extractor_ckpt_path = s1_cfg.model.params.vfeat_extractor.params.ckpt_path
|
||||
# if the stage I ckpt was initialized from a motionformer ckpt or train from scratch
|
||||
if s1_vfeat_extractor_ckpt_path is not None:
|
||||
cfg_fname = mformer_ckpt2cfg[Path(s1_vfeat_extractor_ckpt_path).name]
|
||||
else:
|
||||
cfg_fname = 'divided_224_16x4.yaml'
|
||||
else:
|
||||
cfg_fname = 'divided_224_16x4.yaml'
|
||||
else:
|
||||
raise ValueError(f'ckpt_path {self.ckpt_path} is not supported.')
|
||||
else:
|
||||
was_pt_on_avclip = False
|
||||
cfg_fname = 'divided_224_16x4.yaml'
|
||||
# logging.info(f'No ckpt_path provided, using {cfg_fname} config.')
|
||||
|
||||
if cfg_fname in ['motionformer_224_16x4.yaml', 'divided_224_16x4.yaml']:
|
||||
pos_emb_type = 'separate'
|
||||
elif cfg_fname == 'joint_224_16x4.yaml':
|
||||
pos_emb_type = 'joint'
|
||||
|
||||
self.mformer_cfg_path = Path(__file__).absolute().parent / cfg_fname
|
||||
|
||||
check_if_file_exists_else_download(self.mformer_cfg_path, FILE2URL)
|
||||
mformer_cfg = OmegaConf.load(self.mformer_cfg_path)
|
||||
logging.info(f'Loading MotionFormer config from {self.mformer_cfg_path.absolute()}')
|
||||
|
||||
# patch the cfg (from the default cfg defined in the repo `Motionformer/slowfast/config/defaults.py`)
|
||||
mformer_cfg.VIT.ATTN_DROPOUT = 0.0
|
||||
mformer_cfg.VIT.POS_EMBED = pos_emb_type
|
||||
mformer_cfg.VIT.USE_ORIGINAL_TRAJ_ATTN_CODE = True
|
||||
mformer_cfg.VIT.APPROX_ATTN_TYPE = 'none' # guessing
|
||||
mformer_cfg.VIT.APPROX_ATTN_DIM = 64 # from ckpt['cfg']
|
||||
|
||||
# finally init VisionTransformer with the cfg
|
||||
super().__init__(mformer_cfg)
|
||||
|
||||
# load the ckpt now if ckpt is provided and not from AVCLIPMoCo-pretrained ckpt
|
||||
if (self.ckpt_path is not None) and (not was_pt_on_avclip):
|
||||
_ckpt_load_status = self.load_state_dict(ckpt['model_state'], strict=False)
|
||||
if len(_ckpt_load_status.missing_keys) > 0 or len(
|
||||
_ckpt_load_status.unexpected_keys) > 0:
|
||||
logging.warning(f'Loading exact vfeat_extractor ckpt from {self.ckpt_path} failed.' \
|
||||
f'Missing keys: {_ckpt_load_status.missing_keys}, ' \
|
||||
f'Unexpected keys: {_ckpt_load_status.unexpected_keys}')
|
||||
else:
|
||||
logging.info(f'Loading vfeat_extractor ckpt from {self.ckpt_path} succeeded.')
|
||||
|
||||
if self.extract_features:
|
||||
assert isinstance(self.norm,
|
||||
nn.LayerNorm), 'early x[:, 1:, :] may not be safe for per-tr weights'
|
||||
# pre-logits are Sequential(nn.Linear(emb, emb), act) and `act` is tanh but see the logger
|
||||
self.pre_logits = nn.Identity()
|
||||
# we don't need the classification head (saving memory)
|
||||
self.head = nn.Identity()
|
||||
self.head_drop = nn.Identity()
|
||||
# avoiding code duplication (used only if agg_*_module is TransformerEncoderLayer)
|
||||
transf_enc_layer_kwargs = dict(
|
||||
d_model=self.embed_dim,
|
||||
nhead=self.num_heads,
|
||||
activation=nn.GELU(),
|
||||
batch_first=True,
|
||||
dim_feedforward=self.mlp_ratio * self.embed_dim,
|
||||
dropout=self.drop_rate,
|
||||
layer_norm_eps=1e-6,
|
||||
norm_first=True,
|
||||
)
|
||||
# define adapters if needed
|
||||
if self.factorize_space_time:
|
||||
if agg_space_module == 'TransformerEncoderLayer':
|
||||
self.spatial_attn_agg = SpatialTransformerEncoderLayer(
|
||||
**transf_enc_layer_kwargs)
|
||||
elif agg_space_module == 'AveragePooling':
|
||||
self.spatial_attn_agg = AveragePooling(avg_pattern='BS D t h w -> BS D t',
|
||||
then_permute_pattern='BS D t -> BS t D')
|
||||
if agg_time_module == 'TransformerEncoderLayer':
|
||||
self.temp_attn_agg = TemporalTransformerEncoderLayer(**transf_enc_layer_kwargs)
|
||||
elif agg_time_module == 'AveragePooling':
|
||||
self.temp_attn_agg = AveragePooling(avg_pattern='BS t D -> BS D')
|
||||
elif 'Identity' in agg_time_module:
|
||||
self.temp_attn_agg = nn.Identity()
|
||||
# define a global aggregation layer (aggregate over segments)
|
||||
self.add_global_repr = add_global_repr
|
||||
if add_global_repr:
|
||||
if agg_segments_module == 'TransformerEncoderLayer':
|
||||
# we can reuse the same layer as for temporal factorization (B, dim_to_agg, D) -> (B, D)
|
||||
# we need to add pos emb (PE) because previously we added the same PE for each segment
|
||||
pos_max_len = max_segments if max_segments is not None else 16 # 16 = 10sec//0.64sec + 1
|
||||
self.global_attn_agg = TemporalTransformerEncoderLayer(
|
||||
add_pos_emb=True,
|
||||
pos_emb_drop=mformer_cfg.VIT.POS_DROPOUT,
|
||||
pos_max_len=pos_max_len,
|
||||
**transf_enc_layer_kwargs)
|
||||
elif agg_segments_module == 'AveragePooling':
|
||||
self.global_attn_agg = AveragePooling(avg_pattern='B S D -> B D')
|
||||
|
||||
if was_pt_on_avclip:
|
||||
# we need to filter out the state_dict of the AVCLIP model (has both A and V extractors)
|
||||
# and keep only the state_dict of the feat extractor
|
||||
ckpt_weights = dict()
|
||||
for k, v in ckpt['state_dict'].items():
|
||||
if k.startswith(('module.v_encoder.', 'v_encoder.')):
|
||||
k = k.replace('module.', '').replace('v_encoder.', '')
|
||||
ckpt_weights[k] = v
|
||||
_load_status = self.load_state_dict(ckpt_weights, strict=False)
|
||||
if len(_load_status.missing_keys) > 0 or len(_load_status.unexpected_keys) > 0:
|
||||
logging.warning(f'Loading exact vfeat_extractor ckpt from {self.ckpt_path} failed. \n' \
|
||||
f'Missing keys ({len(_load_status.missing_keys)}): ' \
|
||||
f'{_load_status.missing_keys}, \n' \
|
||||
f'Unexpected keys ({len(_load_status.unexpected_keys)}): ' \
|
||||
f'{_load_status.unexpected_keys} \n' \
|
||||
f'temp_attn_agg are expected to be missing if ckpt was pt contrastively.')
|
||||
else:
|
||||
logging.info(f'Loading vfeat_extractor ckpt from {self.ckpt_path} succeeded.')
|
||||
|
||||
# patch_embed is not used in MotionFormer, only patch_embed_3d, because cfg.VIT.PATCH_SIZE_TEMP > 1
|
||||
# but it is used to calculate the number of patches, so we need to keep it
|
||||
self.patch_embed.requires_grad_(False)
|
||||
|
||||
def forward(self, x):
|
||||
'''
|
||||
x is of shape (B, S, C, T, H, W) where S is the number of segments.
|
||||
'''
|
||||
# Batch, Segments, Channels, T=frames, Height, Width
|
||||
B, S, C, T, H, W = x.shape
|
||||
# Motionformer expects a tensor of shape (1, B, C, T, H, W).
|
||||
# The first dimension (1) is a dummy dimension used only to build the input tensor and won't be used:
|
||||
# see `video_model_builder.video_input`.
|
||||
# x = x.unsqueeze(0) # (1, B, S, C, T, H, W)
|
||||
|
||||
orig_shape = (B, S, C, T, H, W)
|
||||
x = x.view(B * S, C, T, H, W) # flatten batch and segments
|
||||
x = self.forward_segments(x, orig_shape=orig_shape)
|
||||
# unpack the segments (using rest dimensions to support different shapes e.g. (BS, D) or (BS, t, D))
|
||||
x = x.view(B, S, *x.shape[1:])
|
||||
# x is now of shape (B*S, D) or (B*S, t, D) if `self.temp_attn_agg` is `Identity`
|
||||
|
||||
return x # x is (B, S, ...)
|
||||
|
||||
def forward_segments(self, x, orig_shape: tuple) -> torch.Tensor:
|
||||
'''x is of shape (1, BS, C, T, H, W) where S is the number of segments.'''
|
||||
x, x_mask = self.forward_features(x)
|
||||
|
||||
assert self.extract_features
|
||||
|
||||
# (BS, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8
|
||||
x = x[:,
|
||||
1:, :] # without the CLS token for efficiency (should be safe for LayerNorm and FC)
|
||||
x = self.norm(x)
|
||||
x = self.pre_logits(x)
|
||||
if self.factorize_space_time:
|
||||
x = self.restore_spatio_temp_dims(x, orig_shape) # (B*S, D, t, h, w) <- (B*S, t*h*w, D)
|
||||
|
||||
x = self.spatial_attn_agg(x, x_mask) # (B*S, t, D)
|
||||
x = self.temp_attn_agg(
|
||||
x) # (B*S, D) or (BS, t, D) if `self.temp_attn_agg` is `Identity`
|
||||
|
||||
return x
|
||||
|
||||
def restore_spatio_temp_dims(self, feats: torch.Tensor, orig_shape: tuple) -> torch.Tensor:
|
||||
'''
|
||||
feats are of shape (B*S, T, D) where T = 1 + (224 // 16) * (224 // 16) * 8
|
||||
Our goal is to make them of shape (B*S, t, h, w, D) where h, w are the spatial dimensions.
|
||||
From `self.patch_embed_3d`, it follows that we could reshape feats with:
|
||||
`feats.transpose(1, 2).view(B*S, D, t, h, w)`
|
||||
'''
|
||||
B, S, C, T, H, W = orig_shape
|
||||
D = self.embed_dim
|
||||
|
||||
# num patches in each dimension
|
||||
t = T // self.patch_embed_3d.z_block_size
|
||||
h = self.patch_embed_3d.height
|
||||
w = self.patch_embed_3d.width
|
||||
|
||||
feats = feats.permute(0, 2, 1) # (B*S, D, T)
|
||||
feats = feats.view(B * S, D, t, h, w) # (B*S, D, t, h, w)
|
||||
|
||||
return feats
|
||||
|
||||
|
||||
class BaseEncoderLayer(nn.TransformerEncoderLayer):
|
||||
'''
|
||||
This is a wrapper around nn.TransformerEncoderLayer that adds a CLS token
|
||||
to the sequence and outputs the CLS token's representation.
|
||||
This base class parents both SpatialEncoderLayer and TemporalEncoderLayer for the RGB stream
|
||||
and the FrequencyEncoderLayer and TemporalEncoderLayer for the audio stream.
|
||||
We also, optionally, add a positional embedding to the input sequence which
|
||||
allows to reuse it for global aggregation (of segments) for both streams.
|
||||
'''
|
||||
|
||||
def __init__(self,
|
||||
add_pos_emb: bool = False,
|
||||
pos_emb_drop: float = None,
|
||||
pos_max_len: int = None,
|
||||
*args_transformer_enc,
|
||||
**kwargs_transformer_enc):
|
||||
super().__init__(*args_transformer_enc, **kwargs_transformer_enc)
|
||||
self.cls_token = nn.Parameter(torch.zeros(1, 1, self.self_attn.embed_dim))
|
||||
trunc_normal_(self.cls_token, std=.02)
|
||||
|
||||
# add positional embedding
|
||||
self.add_pos_emb = add_pos_emb
|
||||
if add_pos_emb:
|
||||
self.pos_max_len = 1 + pos_max_len # +1 (for CLS)
|
||||
self.pos_emb = nn.Parameter(torch.zeros(1, self.pos_max_len, self.self_attn.embed_dim))
|
||||
self.pos_drop = nn.Dropout(pos_emb_drop)
|
||||
trunc_normal_(self.pos_emb, std=.02)
|
||||
|
||||
self.apply(self._init_weights)
|
||||
|
||||
def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None):
|
||||
''' x is of shape (B, N, D); if provided x_mask is of shape (B, N)'''
|
||||
batch_dim = x.shape[0]
|
||||
|
||||
# add CLS token
|
||||
cls_tokens = self.cls_token.expand(batch_dim, -1, -1) # expanding to match batch dimension
|
||||
x = torch.cat((cls_tokens, x), dim=-2) # (batch_dim, 1+seq_len, D)
|
||||
if x_mask is not None:
|
||||
cls_mask = torch.ones((batch_dim, 1), dtype=torch.bool,
|
||||
device=x_mask.device) # 1=keep; 0=mask
|
||||
x_mask_w_cls = torch.cat((cls_mask, x_mask), dim=-1) # (batch_dim, 1+seq_len)
|
||||
B, N = x_mask_w_cls.shape
|
||||
# torch expects (N, N) or (B*num_heads, N, N) mask (sadness ahead); torch treats True as "masked out", hence the inversion below
|
||||
x_mask_w_cls = x_mask_w_cls.reshape(B, 1, 1, N)\
|
||||
.expand(-1, self.self_attn.num_heads, N, -1)\
|
||||
.reshape(B * self.self_attn.num_heads, N, N)
|
||||
assert x_mask_w_cls.dtype == x_mask_w_cls.bool().dtype, 'x_mask_w_cls.dtype != bool'
|
||||
x_mask_w_cls = ~x_mask_w_cls # invert mask (1=mask)
|
||||
else:
|
||||
x_mask_w_cls = None
|
||||
|
||||
# add positional embedding
|
||||
if self.add_pos_emb:
|
||||
seq_len = x.shape[
|
||||
1] # (don't even think about moving it before the CLS token concatenation)
|
||||
assert seq_len <= self.pos_max_len, f'Seq len ({seq_len}) > pos_max_len ({self.pos_max_len})'
|
||||
x = x + self.pos_emb[:, :seq_len, :]
|
||||
x = self.pos_drop(x)
|
||||
|
||||
# apply encoder layer (calls nn.TransformerEncoderLayer.forward);
|
||||
x = super().forward(src=x, src_mask=x_mask_w_cls) # (batch_dim, 1+seq_len, D)
|
||||
|
||||
# CLS token is expected to hold spatial information for each frame
|
||||
x = x[:, 0, :] # (batch_dim, D)
|
||||
|
||||
return x
|
||||
|
||||
def _init_weights(self, m):
|
||||
if isinstance(m, nn.Linear):
|
||||
trunc_normal_(m.weight, std=.02)
|
||||
if isinstance(m, nn.Linear) and m.bias is not None:
|
||||
nn.init.constant_(m.bias, 0)
|
||||
elif isinstance(m, nn.LayerNorm):
|
||||
nn.init.constant_(m.bias, 0)
|
||||
nn.init.constant_(m.weight, 1.0)
|
||||
|
||||
@torch.jit.ignore
|
||||
def no_weight_decay(self):
|
||||
return {'cls_token', 'pos_emb'}
|
||||
|
||||
|
||||
class SpatialTransformerEncoderLayer(BaseEncoderLayer):
|
||||
''' Aggregates spatial dimensions by applying attention individually to each frame. '''
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None) -> torch.Tensor:
|
||||
''' x is of shape (B*S, D, t, h, w) where S is the number of segments.
|
||||
if specified x_mask (B*S, t, h, w), 0=masked, 1=kept
|
||||
Returns a tensor of shape (B*S, t, D) pooling spatial information for each frame. '''
|
||||
BS, D, t, h, w = x.shape
|
||||
|
||||
# time as a batch dimension and flatten spatial dimensions as sequence
|
||||
x = einops.rearrange(x, 'BS D t h w -> (BS t) (h w) D')
|
||||
# similar to mask
|
||||
if x_mask is not None:
|
||||
x_mask = einops.rearrange(x_mask, 'BS t h w -> (BS t) (h w)')
|
||||
|
||||
# apply encoder layer (BaseEncoderLayer.forward) - it will add CLS token and output its representation
|
||||
x = super().forward(x=x, x_mask=x_mask) # (B*S*t, D)
|
||||
|
||||
# reshape back to (B*S, t, D)
|
||||
x = einops.rearrange(x, '(BS t) D -> BS t D', BS=BS, t=t)
|
||||
|
||||
# (B*S, t, D)
|
||||
return x
|
||||
|
||||
|
||||
class TemporalTransformerEncoderLayer(BaseEncoderLayer):
|
||||
''' Aggregates temporal dimension with attention. Also used with pos emb as global aggregation
|
||||
in both streams. '''
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
def forward(self, x):
|
||||
''' x is of shape (B*S, t, D) where S is the number of segments.
|
||||
Returns a tensor of shape (B*S, D) pooling temporal information. '''
|
||||
BS, t, D = x.shape
|
||||
|
||||
# apply encoder layer (BaseEncoderLayer.forward) - it will add CLS token and output its representation
|
||||
x = super().forward(x) # (B*S, D)
|
||||
|
||||
return x # (B*S, D)
|
||||
|
||||
|
||||
class AveragePooling(nn.Module):
|
||||
|
||||
def __init__(self, avg_pattern: str, then_permute_pattern: str = None) -> None:
|
||||
''' patterns are e.g. "bs t d -> bs d" '''
|
||||
super().__init__()
|
||||
# TODO: need to register them as buffers (but fails because these are strings)
|
||||
self.reduce_fn = 'mean'
|
||||
self.avg_pattern = avg_pattern
|
||||
self.then_permute_pattern = then_permute_pattern
|
||||
|
||||
def forward(self, x: torch.Tensor, x_mask: torch.Tensor = None) -> torch.Tensor:
|
||||
x = einops.reduce(x, self.avg_pattern, self.reduce_fn)
|
||||
if self.then_permute_pattern is not None:
|
||||
x = einops.rearrange(x, self.then_permute_pattern)
|
||||
return x
55
postprocessing/mmaudio/ext/synchformer/synchformer.py
Normal file
@@ -0,0 +1,55 @@
import logging
|
||||
from typing import Any, Mapping
|
||||
|
||||
import torch
|
||||
from torch import nn
|
||||
|
||||
from .motionformer import MotionFormer
|
||||
|
||||
|
||||
class Synchformer(nn.Module):
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
|
||||
self.vfeat_extractor = MotionFormer(extract_features=True,
|
||||
factorize_space_time=True,
|
||||
agg_space_module='TransformerEncoderLayer',
|
||||
agg_time_module='torch.nn.Identity',
|
||||
add_global_repr=False)
|
||||
|
||||
# self.vfeat_extractor = instantiate_from_config(vfeat_extractor)
|
||||
# self.afeat_extractor = instantiate_from_config(afeat_extractor)
|
||||
# # bridging the s3d latent dim (1024) into what is specified in the config
|
||||
# # to match e.g. the transformer dim
|
||||
# self.vproj = instantiate_from_config(vproj)
|
||||
# self.aproj = instantiate_from_config(aproj)
|
||||
# self.transformer = instantiate_from_config(transformer)
|
||||
|
||||
def forward(self, vis):
|
||||
B, S, Tv, C, H, W = vis.shape
|
||||
vis = vis.permute(0, 1, 3, 2, 4, 5) # (B, S, C, Tv, H, W)
|
||||
# feat extractors return a tuple of segment-level and global features (ignored for sync)
|
||||
# (B, S, tv, D), e.g. (B, 7, 8, 768)
|
||||
vis = self.vfeat_extractor(vis)
|
||||
return vis
|
||||
|
||||
def load_state_dict(self, sd: Mapping[str, Any], strict: bool = True):
|
||||
# discard all entries except vfeat_extractor
|
||||
sd = {k: v for k, v in sd.items() if k.startswith('vfeat_extractor')}
|
||||
|
||||
return super().load_state_dict(sd, strict)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
model = Synchformer().cuda().eval()
|
||||
sd = torch.load('./ext_weights/synchformer_state_dict.pth', weights_only=True)
|
||||
model.load_state_dict(sd)
|
||||
|
||||
vid = torch.randn(2, 7, 16, 3, 224, 224).cuda()
|
||||
features = model(vid).detach().cpu()  # this trimmed Synchformer exposes only forward()
|
||||
print(features.shape)
|
||||
|
||||
# extract and save the state dict only
|
||||
# sd = torch.load('./ext_weights/sync_model_audioset.pt')['model']
|
||||
# torch.save(sd, './ext_weights/synchformer_state_dict.pth')
92
postprocessing/mmaudio/ext/synchformer/utils.py
Normal file
@@ -0,0 +1,92 @@
from hashlib import md5
|
||||
from pathlib import Path
|
||||
|
||||
import requests
|
||||
from tqdm import tqdm
|
||||
|
||||
PARENT_LINK = 'https://a3s.fi/swift/v1/AUTH_a235c0f452d648828f745589cde1219a'
|
||||
FNAME2LINK = {
|
||||
# S3: Synchability: AudioSet (run 2)
|
||||
'24-01-22T20-34-52.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/24-01-22T20-34-52/24-01-22T20-34-52.pt',
|
||||
'cfg-24-01-22T20-34-52.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/24-01-22T20-34-52/cfg-24-01-22T20-34-52.yaml',
|
||||
# S2: Synchformer: AudioSet (run 2)
|
||||
'24-01-04T16-39-21.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/24-01-04T16-39-21/24-01-04T16-39-21.pt',
|
||||
'cfg-24-01-04T16-39-21.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/24-01-04T16-39-21/cfg-24-01-04T16-39-21.yaml',
|
||||
# S2: Synchformer: AudioSet (run 1)
|
||||
'23-08-28T11-23-23.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/23-08-28T11-23-23/23-08-28T11-23-23.pt',
|
||||
'cfg-23-08-28T11-23-23.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/23-08-28T11-23-23/cfg-23-08-28T11-23-23.yaml',
|
||||
# S2: Synchformer: LRS3 (run 2)
|
||||
'23-12-23T18-33-57.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/23-12-23T18-33-57/23-12-23T18-33-57.pt',
|
||||
'cfg-23-12-23T18-33-57.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/23-12-23T18-33-57/cfg-23-12-23T18-33-57.yaml',
|
||||
# S2: Synchformer: VGS (run 2)
|
||||
'24-01-02T10-00-53.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/24-01-02T10-00-53/24-01-02T10-00-53.pt',
|
||||
'cfg-24-01-02T10-00-53.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/24-01-02T10-00-53/cfg-24-01-02T10-00-53.yaml',
|
||||
# SparseSync: ft VGGSound-Full
|
||||
'22-09-21T21-00-52.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/22-09-21T21-00-52/22-09-21T21-00-52.pt',
|
||||
'cfg-22-09-21T21-00-52.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/22-09-21T21-00-52/cfg-22-09-21T21-00-52.yaml',
|
||||
# SparseSync: ft VGGSound-Sparse
|
||||
'22-07-28T15-49-45.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/22-07-28T15-49-45/22-07-28T15-49-45.pt',
|
||||
'cfg-22-07-28T15-49-45.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/22-07-28T15-49-45/cfg-22-07-28T15-49-45.yaml',
|
||||
# SparseSync: only pt on LRS3
|
||||
'22-07-13T22-25-49.pt':
|
||||
f'{PARENT_LINK}/sync/sync_models/22-07-13T22-25-49/22-07-13T22-25-49.pt',
|
||||
'cfg-22-07-13T22-25-49.yaml':
|
||||
f'{PARENT_LINK}/sync/sync_models/22-07-13T22-25-49/cfg-22-07-13T22-25-49.yaml',
|
||||
# SparseSync: feature extractors
|
||||
'ResNetAudio-22-08-04T09-51-04.pt':
|
||||
f'{PARENT_LINK}/sync/ResNetAudio-22-08-04T09-51-04.pt', # 2s
|
||||
'ResNetAudio-22-08-03T23-14-49.pt':
|
||||
f'{PARENT_LINK}/sync/ResNetAudio-22-08-03T23-14-49.pt', # 3s
|
||||
'ResNetAudio-22-08-03T23-14-28.pt':
|
||||
f'{PARENT_LINK}/sync/ResNetAudio-22-08-03T23-14-28.pt', # 4s
|
||||
'ResNetAudio-22-06-24T08-10-33.pt':
|
||||
f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T08-10-33.pt', # 5s
|
||||
'ResNetAudio-22-06-24T17-31-07.pt':
|
||||
f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T17-31-07.pt', # 6s
|
||||
'ResNetAudio-22-06-24T23-57-11.pt':
|
||||
f'{PARENT_LINK}/sync/ResNetAudio-22-06-24T23-57-11.pt', # 7s
|
||||
'ResNetAudio-22-06-25T04-35-42.pt':
|
||||
f'{PARENT_LINK}/sync/ResNetAudio-22-06-25T04-35-42.pt', # 8s
|
||||
}
|
||||
|
||||
|
||||
def check_if_file_exists_else_download(path, fname2link=FNAME2LINK, chunk_size=1024):
|
||||
'''Checks if file exists, if not downloads it from the link to the path'''
|
||||
path = Path(path)
|
||||
if not path.exists():
|
||||
path.parent.mkdir(exist_ok=True, parents=True)
|
||||
link = fname2link.get(path.name, None)
|
||||
if link is None:
|
||||
raise ValueError(f'Cannot find the checkpoint file: {path}. '
                 f'Please download it manually and ensure the path exists.')
|
||||
with requests.get(fname2link[path.name], stream=True) as r:
|
||||
total_size = int(r.headers.get('content-length', 0))
|
||||
with tqdm(total=total_size, unit='B', unit_scale=True) as pbar:
|
||||
with open(path, 'wb') as f:
|
||||
for data in r.iter_content(chunk_size=chunk_size):
|
||||
if data:
|
||||
f.write(data)
|
||||
pbar.update(chunk_size)
|
||||
|
||||
|
||||
def get_md5sum(path):
|
||||
hash_md5 = md5()
|
||||
with open(path, 'rb') as f:
|
||||
for chunk in iter(lambda: f.read(4096 * 8), b''):
|
||||
hash_md5.update(chunk)
|
||||
md5sum = hash_md5.hexdigest()
|
||||
return md5sum
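# Minimal usage sketch (the './ext_weights' cache directory is a hypothetical example path):
# download a checkpoint listed in FNAME2LINK if it is not cached yet, then hash it.
#   check_if_file_exists_else_download('./ext_weights/24-01-04T16-39-21.pt')
#   print(get_md5sum('./ext_weights/24-01-04T16-39-21.pt'))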
277
postprocessing/mmaudio/ext/synchformer/video_model_builder.py
Normal file
@@ -0,0 +1,277 @@
#!/usr/bin/env python3
|
||||
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
|
||||
# Copyright 2020 Ross Wightman
|
||||
# Modified Model definition
|
||||
|
||||
from collections import OrderedDict
|
||||
from functools import partial
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
from timm.layers import trunc_normal_
|
||||
|
||||
from . import vit_helper
|
||||
|
||||
|
||||
class VisionTransformer(nn.Module):
|
||||
""" Vision Transformer with support for patch or hybrid CNN input stage """
|
||||
|
||||
def __init__(self, cfg):
|
||||
super().__init__()
|
||||
self.img_size = cfg.DATA.TRAIN_CROP_SIZE
|
||||
self.patch_size = cfg.VIT.PATCH_SIZE
|
||||
self.in_chans = cfg.VIT.CHANNELS
|
||||
if cfg.TRAIN.DATASET == "Epickitchens":
|
||||
self.num_classes = [97, 300]
|
||||
else:
|
||||
self.num_classes = cfg.MODEL.NUM_CLASSES
|
||||
self.embed_dim = cfg.VIT.EMBED_DIM
|
||||
self.depth = cfg.VIT.DEPTH
|
||||
self.num_heads = cfg.VIT.NUM_HEADS
|
||||
self.mlp_ratio = cfg.VIT.MLP_RATIO
|
||||
self.qkv_bias = cfg.VIT.QKV_BIAS
|
||||
self.drop_rate = cfg.VIT.DROP
|
||||
self.drop_path_rate = cfg.VIT.DROP_PATH
|
||||
self.head_dropout = cfg.VIT.HEAD_DROPOUT
|
||||
self.video_input = cfg.VIT.VIDEO_INPUT
|
||||
self.temporal_resolution = cfg.VIT.TEMPORAL_RESOLUTION
|
||||
self.use_mlp = cfg.VIT.USE_MLP
|
||||
self.num_features = self.embed_dim
|
||||
norm_layer = partial(nn.LayerNorm, eps=1e-6)
|
||||
self.attn_drop_rate = cfg.VIT.ATTN_DROPOUT
|
||||
self.head_act = cfg.VIT.HEAD_ACT
|
||||
self.cfg = cfg
|
||||
|
||||
# Patch Embedding
|
||||
self.patch_embed = vit_helper.PatchEmbed(img_size=224,
|
||||
patch_size=self.patch_size,
|
||||
in_chans=self.in_chans,
|
||||
embed_dim=self.embed_dim)
|
||||
|
||||
# 3D Patch Embedding
|
||||
self.patch_embed_3d = vit_helper.PatchEmbed3D(img_size=self.img_size,
|
||||
temporal_resolution=self.temporal_resolution,
|
||||
patch_size=self.patch_size,
|
||||
in_chans=self.in_chans,
|
||||
embed_dim=self.embed_dim,
|
||||
z_block_size=self.cfg.VIT.PATCH_SIZE_TEMP)
|
||||
self.patch_embed_3d.proj.weight.data = torch.zeros_like(
|
||||
self.patch_embed_3d.proj.weight.data)
|
||||
|
||||
# Number of patches
|
||||
if self.video_input:
|
||||
num_patches = self.patch_embed.num_patches * self.temporal_resolution
|
||||
else:
|
||||
num_patches = self.patch_embed.num_patches
|
||||
self.num_patches = num_patches
|
||||
|
||||
# CLS token
|
||||
self.cls_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim))
|
||||
trunc_normal_(self.cls_token, std=.02)
|
||||
|
||||
# Positional embedding
|
||||
self.pos_embed = nn.Parameter(
|
||||
torch.zeros(1, self.patch_embed.num_patches + 1, self.embed_dim))
|
||||
self.pos_drop = nn.Dropout(p=cfg.VIT.POS_DROPOUT)
|
||||
trunc_normal_(self.pos_embed, std=.02)
|
||||
|
||||
if self.cfg.VIT.POS_EMBED == "joint":
|
||||
self.st_embed = nn.Parameter(torch.zeros(1, num_patches + 1, self.embed_dim))
|
||||
trunc_normal_(self.st_embed, std=.02)
|
||||
elif self.cfg.VIT.POS_EMBED == "separate":
|
||||
self.temp_embed = nn.Parameter(torch.zeros(1, self.temporal_resolution, self.embed_dim))
|
||||
|
||||
# Layer Blocks
|
||||
dpr = [x.item() for x in torch.linspace(0, self.drop_path_rate, self.depth)]
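# stochastic depth: the drop-path rate grows linearly from 0 to DROP_PATH across the blocks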
|
||||
if self.cfg.VIT.ATTN_LAYER == "divided":
|
||||
self.blocks = nn.ModuleList([
|
||||
vit_helper.DividedSpaceTimeBlock(
|
||||
attn_type=cfg.VIT.ATTN_LAYER,
|
||||
dim=self.embed_dim,
|
||||
num_heads=self.num_heads,
|
||||
mlp_ratio=self.mlp_ratio,
|
||||
qkv_bias=self.qkv_bias,
|
||||
drop=self.drop_rate,
|
||||
attn_drop=self.attn_drop_rate,
|
||||
drop_path=dpr[i],
|
||||
norm_layer=norm_layer,
|
||||
) for i in range(self.depth)
|
||||
])
|
||||
else:
|
||||
self.blocks = nn.ModuleList([
|
||||
vit_helper.Block(attn_type=cfg.VIT.ATTN_LAYER,
|
||||
dim=self.embed_dim,
|
||||
num_heads=self.num_heads,
|
||||
mlp_ratio=self.mlp_ratio,
|
||||
qkv_bias=self.qkv_bias,
|
||||
drop=self.drop_rate,
|
||||
attn_drop=self.attn_drop_rate,
|
||||
drop_path=dpr[i],
|
||||
norm_layer=norm_layer,
|
||||
use_original_code=self.cfg.VIT.USE_ORIGINAL_TRAJ_ATTN_CODE)
|
||||
for i in range(self.depth)
|
||||
])
|
||||
self.norm = norm_layer(self.embed_dim)
|
||||
|
||||
# MLP head
|
||||
if self.use_mlp:
|
||||
hidden_dim = self.embed_dim
|
||||
if self.head_act == 'tanh':
|
||||
# logging.info("Using TanH activation in MLP")
|
||||
act = nn.Tanh()
|
||||
elif self.head_act == 'gelu':
|
||||
# logging.info("Using GELU activation in MLP")
|
||||
act = nn.GELU()
|
||||
else:
|
||||
# logging.info("Using ReLU activation in MLP")
|
||||
act = nn.ReLU()
|
||||
self.pre_logits = nn.Sequential(
|
||||
OrderedDict([
|
||||
('fc', nn.Linear(self.embed_dim, hidden_dim)),
|
||||
('act', act),
|
||||
]))
|
||||
else:
|
||||
self.pre_logits = nn.Identity()
|
||||
|
||||
# Classifier Head
|
||||
self.head_drop = nn.Dropout(p=self.head_dropout)
|
||||
if isinstance(self.num_classes, (list, )) and len(self.num_classes) > 1:
|
||||
for a, i in enumerate(range(len(self.num_classes))):
|
||||
setattr(self, "head%d" % a, nn.Linear(self.embed_dim, self.num_classes[i]))
|
||||
else:
|
||||
self.head = nn.Linear(self.embed_dim,
|
||||
self.num_classes) if self.num_classes > 0 else nn.Identity()
|
||||
|
||||
# Initialize weights
|
||||
self.apply(self._init_weights)
|
||||
|
||||
def _init_weights(self, m):
|
||||
if isinstance(m, nn.Linear):
|
||||
trunc_normal_(m.weight, std=.02)
|
||||
if isinstance(m, nn.Linear) and m.bias is not None:
|
||||
nn.init.constant_(m.bias, 0)
|
||||
elif isinstance(m, nn.LayerNorm):
|
||||
nn.init.constant_(m.bias, 0)
|
||||
nn.init.constant_(m.weight, 1.0)
|
||||
|
||||
@torch.jit.ignore
|
||||
def no_weight_decay(self):
|
||||
if self.cfg.VIT.POS_EMBED == "joint":
|
||||
return {'pos_embed', 'cls_token', 'st_embed'}
|
||||
else:
|
||||
return {'pos_embed', 'cls_token', 'temp_embed'}
|
||||
|
||||
def get_classifier(self):
|
||||
return self.head
|
||||
|
||||
def reset_classifier(self, num_classes, global_pool=''):
|
||||
self.num_classes = num_classes
|
||||
self.head = (nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity())
|
||||
|
||||
def forward_features(self, x):
|
||||
# if self.video_input:
|
||||
# x = x[0]
|
||||
B = x.shape[0]
|
||||
|
||||
# Tokenize input
|
||||
# if self.cfg.VIT.PATCH_SIZE_TEMP > 1:
|
||||
# for simplicity of mapping between content dimensions (input x) and token dims (after patching)
|
||||
# we use the same trick as for AST (see modeling_ast.ASTModel.forward for the details):
|
||||
|
||||
# apply patching on input
|
||||
x = self.patch_embed_3d(x)
|
||||
tok_mask = None
|
||||
|
||||
# else:
|
||||
# tok_mask = None
|
||||
# # 2D tokenization
|
||||
# if self.video_input:
|
||||
# x = x.permute(0, 2, 1, 3, 4)
|
||||
# (B, T, C, H, W) = x.shape
|
||||
# x = x.reshape(B * T, C, H, W)
|
||||
|
||||
# x = self.patch_embed(x)
|
||||
|
||||
# if self.video_input:
|
||||
# (B2, T2, D2) = x.shape
|
||||
# x = x.reshape(B, T * T2, D2)
|
||||
|
||||
# Append CLS token
|
||||
cls_tokens = self.cls_token.expand(B, -1, -1)
|
||||
x = torch.cat((cls_tokens, x), dim=1)
|
||||
# if tok_mask is not None:
|
||||
# # prepend 1(=keep) to the mask to account for the CLS token as well
|
||||
# tok_mask = torch.cat((torch.ones_like(tok_mask[:, [0]]), tok_mask), dim=1)
|
||||
|
||||
# Interpolate positional embeddings
|
||||
# if self.cfg.DATA.TRAIN_CROP_SIZE != 224:
|
||||
# pos_embed = self.pos_embed
|
||||
# N = pos_embed.shape[1] - 1
|
||||
# npatch = int((x.size(1) - 1) / self.temporal_resolution)
|
||||
# class_emb = pos_embed[:, 0]
|
||||
# pos_embed = pos_embed[:, 1:]
|
||||
# dim = x.shape[-1]
|
||||
# pos_embed = torch.nn.functional.interpolate(
|
||||
# pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2),
|
||||
# scale_factor=math.sqrt(npatch / N),
|
||||
# mode='bicubic',
|
||||
# )
|
||||
# pos_embed = pos_embed.permute(0, 2, 3, 1).view(1, -1, dim)
|
||||
# new_pos_embed = torch.cat((class_emb.unsqueeze(0), pos_embed), dim=1)
|
||||
# else:
|
||||
new_pos_embed = self.pos_embed
|
||||
npatch = self.patch_embed.num_patches
|
||||
|
||||
# Add positional embeddings to input
|
||||
if self.video_input:
|
||||
if self.cfg.VIT.POS_EMBED == "separate":
|
||||
cls_embed = self.pos_embed[:, 0, :].unsqueeze(1)
|
||||
tile_pos_embed = new_pos_embed[:, 1:, :].repeat(1, self.temporal_resolution, 1)
|
||||
tile_temporal_embed = self.temp_embed.repeat_interleave(npatch, 1)
|
||||
total_pos_embed = tile_pos_embed + tile_temporal_embed
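# 'separate' positional embedding: the spatial embedding is tiled over frames and the temporal embedding is repeated over patches before summing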
|
||||
total_pos_embed = torch.cat([cls_embed, total_pos_embed], dim=1)
|
||||
x = x + total_pos_embed
|
||||
elif self.cfg.VIT.POS_EMBED == "joint":
|
||||
x = x + self.st_embed
|
||||
else:
|
||||
# image input
|
||||
x = x + new_pos_embed
|
||||
|
||||
# Apply positional dropout
|
||||
x = self.pos_drop(x)
|
||||
|
||||
# Encoding using transformer layers
|
||||
for i, blk in enumerate(self.blocks):
|
||||
x = blk(x,
|
||||
seq_len=npatch,
|
||||
num_frames=self.temporal_resolution,
|
||||
approx=self.cfg.VIT.APPROX_ATTN_TYPE,
|
||||
num_landmarks=self.cfg.VIT.APPROX_ATTN_DIM,
|
||||
tok_mask=tok_mask)
|
||||
|
||||
### v-iashin: I moved it to the forward pass
|
||||
# x = self.norm(x)[:, 0]
|
||||
# x = self.pre_logits(x)
|
||||
###
|
||||
return x, tok_mask
|
||||
|
||||
# def forward(self, x):
|
||||
# x = self.forward_features(x)
|
||||
# ### v-iashin: here. This should leave the same forward output as before
|
||||
# x = self.norm(x)[:, 0]
|
||||
# x = self.pre_logits(x)
|
||||
# ###
|
||||
# x = self.head_drop(x)
|
||||
# if isinstance(self.num_classes, (list, )) and len(self.num_classes) > 1:
|
||||
# output = []
|
||||
# for head in range(len(self.num_classes)):
|
||||
# x_out = getattr(self, "head%d" % head)(x)
|
||||
# if not self.training:
|
||||
# x_out = torch.nn.functional.softmax(x_out, dim=-1)
|
||||
# output.append(x_out)
|
||||
# return output
|
||||
# else:
|
||||
# x = self.head(x)
|
||||
# if not self.training:
|
||||
# x = torch.nn.functional.softmax(x, dim=-1)
|
||||
# return x
399
postprocessing/mmaudio/ext/synchformer/vit_helper.py
Normal file
@@ -0,0 +1,399 @@
#!/usr/bin/env python3
|
||||
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
|
||||
# Copyright 2020 Ross Wightman
|
||||
# Modified Model definition
|
||||
"""Video models."""
|
||||
|
||||
import math
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
from einops import rearrange, repeat
|
||||
from timm.layers import to_2tuple
|
||||
from torch import einsum
|
||||
from torch.nn import functional as F
|
||||
|
||||
default_cfgs = {
|
||||
'vit_1k':
|
||||
'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_p16_224-80ecf9dd.pth',
|
||||
'vit_1k_large':
|
||||
'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_p16_224-4ee7a4dc.pth',
|
||||
}
|
||||
|
||||
|
||||
def qkv_attn(q, k, v, tok_mask: torch.Tensor = None):
|
||||
sim = einsum('b i d, b j d -> b i j', q, k)
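# scaled dot-product similarity between queries and keys (q is pre-scaled by the caller)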
|
||||
# apply masking if provided, tok_mask is (B*S*H, N): 1s - keep; sim is (B*S*H, H, N, N)
|
||||
if tok_mask is not None:
|
||||
BSH, N = tok_mask.shape
|
||||
sim = sim.masked_fill(tok_mask.view(BSH, 1, N) == 0,
|
||||
float('-inf')) # 1 - broadcasts across N
|
||||
attn = sim.softmax(dim=-1)
|
||||
out = einsum('b i j, b j d -> b i d', attn, v)
|
||||
return out
|
||||
|
||||
|
||||
class DividedAttention(nn.Module):
|
||||
|
||||
def __init__(self, dim, num_heads=8, qkv_bias=False, attn_drop=0., proj_drop=0.):
|
||||
super().__init__()
|
||||
self.num_heads = num_heads
|
||||
head_dim = dim // num_heads
|
||||
self.scale = head_dim**-0.5
|
||||
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
|
||||
self.proj = nn.Linear(dim, dim)
|
||||
|
||||
# zero-init qkv; proj weights start at 1 and proj bias at 0
|
||||
self.qkv.weight.data.fill_(0)
|
||||
self.qkv.bias.data.fill_(0)
|
||||
self.proj.weight.data.fill_(1)
|
||||
self.proj.bias.data.fill_(0)
|
||||
|
||||
self.attn_drop = nn.Dropout(attn_drop)
|
||||
self.proj_drop = nn.Dropout(proj_drop)
|
||||
|
||||
def forward(self, x, einops_from, einops_to, tok_mask: torch.Tensor = None, **einops_dims):
|
||||
# num of heads variable
|
||||
h = self.num_heads
|
||||
|
||||
# project x to q, k, v values
|
||||
q, k, v = self.qkv(x).chunk(3, dim=-1)
|
||||
q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v))
|
||||
if tok_mask is not None:
|
||||
# replicate token mask across heads (b, n) -> (b, h, n) -> (b*h, n) -- same as qkv but w/o d
|
||||
assert len(tok_mask.shape) == 2
|
||||
tok_mask = tok_mask.unsqueeze(1).expand(-1, h, -1).reshape(-1, tok_mask.shape[1])
|
||||
|
||||
# Scale q
|
||||
q *= self.scale
|
||||
|
||||
# Take out cls_q, cls_k, cls_v
|
||||
(cls_q, q_), (cls_k, k_), (cls_v, v_) = map(lambda t: (t[:, 0:1], t[:, 1:]), (q, k, v))
|
||||
# the same for masking
|
||||
if tok_mask is not None:
|
||||
cls_mask, mask_ = tok_mask[:, 0:1], tok_mask[:, 1:]
|
||||
else:
|
||||
cls_mask, mask_ = None, None
|
||||
|
||||
# let CLS token attend to key / values of all patches across time and space
|
||||
cls_out = qkv_attn(cls_q, k, v, tok_mask=tok_mask)
|
||||
|
||||
# rearrange across time or space
|
||||
q_, k_, v_ = map(lambda t: rearrange(t, f'{einops_from} -> {einops_to}', **einops_dims),
|
||||
(q_, k_, v_))
|
||||
|
||||
# expand CLS token keys and values across time or space and concat
|
||||
r = q_.shape[0] // cls_k.shape[0]
|
||||
cls_k, cls_v = map(lambda t: repeat(t, 'b () d -> (b r) () d', r=r), (cls_k, cls_v))
|
||||
|
||||
k_ = torch.cat((cls_k, k_), dim=1)
|
||||
v_ = torch.cat((cls_v, v_), dim=1)
|
||||
|
||||
# the same for masking (if provided)
|
||||
if tok_mask is not None:
|
||||
# since mask does not have the latent dim (d), we need to remove it from einops dims
|
||||
mask_ = rearrange(mask_, f'{einops_from} -> {einops_to}'.replace(' d', ''),
|
||||
**einops_dims)
|
||||
cls_mask = repeat(cls_mask, 'b () -> (b r) ()',
|
||||
r=r) # expand cls_mask across time or space
|
||||
mask_ = torch.cat((cls_mask, mask_), dim=1)
|
||||
|
||||
# attention
|
||||
out = qkv_attn(q_, k_, v_, tok_mask=mask_)
|
||||
|
||||
# merge back time or space
|
||||
out = rearrange(out, f'{einops_to} -> {einops_from}', **einops_dims)
|
||||
|
||||
# concat back the cls token
|
||||
out = torch.cat((cls_out, out), dim=1)
|
||||
|
||||
# merge back the heads
|
||||
out = rearrange(out, '(b h) n d -> b n (h d)', h=h)
|
||||
|
||||
## to out
|
||||
x = self.proj(out)
|
||||
x = self.proj_drop(x)
|
||||
return x
|
||||
|
||||
|
||||
class DividedSpaceTimeBlock(nn.Module):
|
||||
|
||||
def __init__(self,
|
||||
dim=768,
|
||||
num_heads=12,
|
||||
attn_type='divided',
|
||||
mlp_ratio=4.,
|
||||
qkv_bias=False,
|
||||
drop=0.,
|
||||
attn_drop=0.,
|
||||
drop_path=0.,
|
||||
act_layer=nn.GELU,
|
||||
norm_layer=nn.LayerNorm):
|
||||
super().__init__()
|
||||
|
||||
self.einops_from_space = 'b (f n) d'
|
||||
self.einops_to_space = '(b f) n d'
|
||||
self.einops_from_time = 'b (f n) d'
|
||||
self.einops_to_time = '(b n) f d'
|
||||
|
||||
self.norm1 = norm_layer(dim)
|
||||
|
||||
self.attn = DividedAttention(dim,
|
||||
num_heads=num_heads,
|
||||
qkv_bias=qkv_bias,
|
||||
attn_drop=attn_drop,
|
||||
proj_drop=drop)
|
||||
|
||||
self.timeattn = DividedAttention(dim,
|
||||
num_heads=num_heads,
|
||||
qkv_bias=qkv_bias,
|
||||
attn_drop=attn_drop,
|
||||
proj_drop=drop)
|
||||
|
||||
# self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
|
||||
self.drop_path = nn.Identity()
|
||||
self.norm2 = norm_layer(dim)
|
||||
mlp_hidden_dim = int(dim * mlp_ratio)
|
||||
self.mlp = Mlp(in_features=dim,
|
||||
hidden_features=mlp_hidden_dim,
|
||||
act_layer=act_layer,
|
||||
drop=drop)
|
||||
self.norm3 = norm_layer(dim)
|
||||
|
||||
def forward(self,
|
||||
x,
|
||||
seq_len=196,
|
||||
num_frames=8,
|
||||
approx='none',
|
||||
num_landmarks=128,
|
||||
tok_mask: torch.Tensor = None):
|
||||
time_output = self.timeattn(self.norm3(x),
|
||||
self.einops_from_time,
|
||||
self.einops_to_time,
|
||||
n=seq_len,
|
||||
tok_mask=tok_mask)
|
||||
time_residual = x + time_output
|
||||
|
||||
space_output = self.attn(self.norm1(time_residual),
|
||||
self.einops_from_space,
|
||||
self.einops_to_space,
|
||||
f=num_frames,
|
||||
tok_mask=tok_mask)
|
||||
space_residual = time_residual + self.drop_path(space_output)
|
||||
|
||||
x = space_residual
|
||||
x = x + self.drop_path(self.mlp(self.norm2(x)))
|
||||
return x
|
||||
|
||||
|
||||
class Mlp(nn.Module):
|
||||
|
||||
def __init__(self,
|
||||
in_features,
|
||||
hidden_features=None,
|
||||
out_features=None,
|
||||
act_layer=nn.GELU,
|
||||
drop=0.):
|
||||
super().__init__()
|
||||
out_features = out_features or in_features
|
||||
hidden_features = hidden_features or in_features
|
||||
self.fc1 = nn.Linear(in_features, hidden_features)
|
||||
self.act = act_layer()
|
||||
self.fc2 = nn.Linear(hidden_features, out_features)
|
||||
self.drop = nn.Dropout(drop)
|
||||
|
||||
def forward(self, x):
|
||||
x = self.fc1(x)
|
||||
x = self.act(x)
|
||||
x = self.drop(x)
|
||||
x = self.fc2(x)
|
||||
x = self.drop(x)
|
||||
return x
|
||||
|
||||
|
||||
class PatchEmbed(nn.Module):
|
||||
""" Image to Patch Embedding
|
||||
"""
|
||||
|
||||
def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
|
||||
super().__init__()
|
||||
img_size = img_size if type(img_size) is tuple else to_2tuple(img_size)
|
||||
patch_size = patch_size if type(patch_size) is tuple else to_2tuple(patch_size)
|
||||
num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0])
|
||||
self.img_size = img_size
|
||||
self.patch_size = patch_size
|
||||
self.num_patches = num_patches
|
||||
|
||||
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
|
||||
|
||||
def forward(self, x):
|
||||
B, C, H, W = x.shape
|
||||
x = self.proj(x).flatten(2).transpose(1, 2)
|
||||
return x
|
||||
|
||||
|
||||
class PatchEmbed3D(nn.Module):
|
||||
""" Image to Patch Embedding """
|
||||
|
||||
def __init__(self,
|
||||
img_size=224,
|
||||
temporal_resolution=4,
|
||||
in_chans=3,
|
||||
patch_size=16,
|
||||
z_block_size=2,
|
||||
embed_dim=768,
|
||||
flatten=True):
|
||||
super().__init__()
|
||||
self.height = (img_size // patch_size)
|
||||
self.width = (img_size // patch_size)
|
||||
### v-iashin: these two are incorrect
|
||||
# self.frames = (temporal_resolution // z_block_size)
|
||||
# self.num_patches = self.height * self.width * self.frames
|
||||
self.z_block_size = z_block_size
|
||||
###
|
||||
self.proj = nn.Conv3d(in_chans,
|
||||
embed_dim,
|
||||
kernel_size=(z_block_size, patch_size, patch_size),
|
||||
stride=(z_block_size, patch_size, patch_size))
|
||||
self.flatten = flatten
|
||||
|
||||
def forward(self, x):
|
||||
B, C, T, H, W = x.shape
|
||||
x = self.proj(x)
|
||||
if self.flatten:
|
||||
x = x.flatten(2).transpose(1, 2)
|
||||
return x
|
||||
|
||||
|
||||
class HeadMLP(nn.Module):
|
||||
|
||||
def __init__(self, n_input, n_classes, n_hidden=512, p=0.1):
|
||||
super(HeadMLP, self).__init__()
|
||||
self.n_input = n_input
|
||||
self.n_classes = n_classes
|
||||
self.n_hidden = n_hidden
|
||||
if n_hidden is None:
|
||||
# use linear classifier
|
||||
self.block_forward = nn.Sequential(nn.Dropout(p=p),
|
||||
nn.Linear(n_input, n_classes, bias=True))
|
||||
else:
|
||||
# use simple MLP classifier
|
||||
self.block_forward = nn.Sequential(nn.Dropout(p=p),
|
||||
nn.Linear(n_input, n_hidden, bias=True),
|
||||
nn.BatchNorm1d(n_hidden), nn.ReLU(inplace=True),
|
||||
nn.Dropout(p=p),
|
||||
nn.Linear(n_hidden, n_classes, bias=True))
|
||||
print(f"Dropout-NLP: {p}")
|
||||
|
||||
def forward(self, x):
|
||||
return self.block_forward(x)
|
||||
|
||||
|
||||
def _conv_filter(state_dict, patch_size=16):
|
||||
""" convert patch embedding weight from manual patchify + linear proj to conv"""
|
||||
out_dict = {}
|
||||
for k, v in state_dict.items():
|
||||
if 'patch_embed.proj.weight' in k:
|
||||
v = v.reshape((v.shape[0], 3, patch_size, patch_size))
|
||||
out_dict[k] = v
|
||||
return out_dict
|
||||
|
||||
|
||||
def adapt_input_conv(in_chans, conv_weight, agg='sum'):
|
||||
conv_type = conv_weight.dtype
|
||||
conv_weight = conv_weight.float()
|
||||
O, I, J, K = conv_weight.shape
|
||||
if in_chans == 1:
|
||||
if I > 3:
|
||||
assert conv_weight.shape[1] % 3 == 0
|
||||
# For models with space2depth stems
|
||||
conv_weight = conv_weight.reshape(O, I // 3, 3, J, K)
|
||||
conv_weight = conv_weight.sum(dim=2, keepdim=False)
|
||||
else:
|
||||
if agg == 'sum':
|
||||
print("Summing conv1 weights")
|
||||
conv_weight = conv_weight.sum(dim=1, keepdim=True)
|
||||
else:
|
||||
print("Averaging conv1 weights")
|
||||
conv_weight = conv_weight.mean(dim=1, keepdim=True)
|
||||
elif in_chans != 3:
|
||||
if I != 3:
|
||||
raise NotImplementedError('Weight format not supported by conversion.')
|
||||
else:
|
||||
if agg == 'sum':
|
||||
print("Summing conv1 weights")
|
||||
repeat = int(math.ceil(in_chans / 3))
|
||||
conv_weight = conv_weight.repeat(1, repeat, 1, 1)[:, :in_chans, :, :]
|
||||
conv_weight *= (3 / float(in_chans))
|
||||
else:
|
||||
print("Averaging conv1 weights")
|
||||
conv_weight = conv_weight.mean(dim=1, keepdim=True)
|
||||
conv_weight = conv_weight.repeat(1, in_chans, 1, 1)
|
||||
conv_weight = conv_weight.to(conv_type)
|
||||
return conv_weight
|
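# Added illustration (hedged; not part of the original file): adapt_input_conv above
# collapses or tiles a pretrained 3-channel stem so it can accept a different number
# of input channels. A shape-only sketch with a random stand-in weight:
def _demo_adapt_input_conv():
    w = torch.randn(64, 3, 16, 16)                      # (out, in=3, kH, kW)
    w_gray = adapt_input_conv(1, w.clone())             # RGB summed -> (64, 1, 16, 16)
    w_5ch = adapt_input_conv(5, w.clone(), agg='sum')   # tiled and rescaled -> (64, 5, 16, 16)
    assert w_gray.shape == (64, 1, 16, 16) and w_5ch.shape == (64, 5, 16, 16)
    return w_gray, w_5ch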
||||
|
||||
|
||||
def load_pretrained(model,
|
||||
cfg=None,
|
||||
num_classes=1000,
|
||||
in_chans=3,
|
||||
filter_fn=None,
|
||||
strict=True,
|
||||
progress=False):
|
||||
# Load state dict
|
||||
assert (f"{cfg.VIT.PRETRAINED_WEIGHTS} not in [vit_1k, vit_1k_large]")
|
||||
state_dict = torch.hub.load_state_dict_from_url(url=default_cfgs[cfg.VIT.PRETRAINED_WEIGHTS])
|
||||
|
||||
if filter_fn is not None:
|
||||
state_dict = filter_fn(state_dict)
|
||||
|
||||
input_convs = 'patch_embed.proj'
|
||||
if input_convs is not None and in_chans != 3:
|
||||
if isinstance(input_convs, str):
|
||||
input_convs = (input_convs, )
|
||||
for input_conv_name in input_convs:
|
||||
weight_name = input_conv_name + '.weight'
|
||||
try:
|
||||
state_dict[weight_name] = adapt_input_conv(in_chans,
|
||||
state_dict[weight_name],
|
||||
agg='avg')
|
||||
print(
|
||||
f'Converted input conv {input_conv_name} pretrained weights from 3 to {in_chans} channel(s)'
|
||||
)
|
||||
except NotImplementedError as e:
|
||||
del state_dict[weight_name]
|
||||
strict = False
|
||||
print(
|
||||
f'Unable to convert pretrained {input_conv_name} weights, using random init for this layer.'
|
||||
)
|
||||
|
||||
classifier_name = 'head'
|
||||
label_offset = cfg.get('label_offset', 0)
|
||||
pretrain_classes = 1000
|
||||
if num_classes != pretrain_classes:
|
||||
# completely discard fully connected if model num_classes doesn't match pretrained weights
|
||||
del state_dict[classifier_name + '.weight']
|
||||
del state_dict[classifier_name + '.bias']
|
||||
strict = False
|
||||
elif label_offset > 0:
|
||||
# special case for pretrained weights with an extra background class in pretrained weights
|
||||
classifier_weight = state_dict[classifier_name + '.weight']
|
||||
state_dict[classifier_name + '.weight'] = classifier_weight[label_offset:]
|
||||
classifier_bias = state_dict[classifier_name + '.bias']
|
||||
state_dict[classifier_name + '.bias'] = classifier_bias[label_offset:]
|
||||
|
||||
loaded_state = state_dict
|
||||
self_state = model.state_dict()
|
||||
all_names = set(self_state.keys())
|
||||
saved_names = set([])
|
||||
for name, param in loaded_state.items():
|
||||
param = param
|
||||
if 'module.' in name:
|
||||
name = name.replace('module.', '')
|
||||
if name in self_state.keys() and param.shape == self_state[name].shape:
|
||||
saved_names.add(name)
|
||||
self_state[name].copy_(param)
|
||||
else:
|
||||
print(f"didnt load: {name} of shape: {param.shape}")
|
||||
print("Missing Keys:")
|
||||
print(all_names - saved_names)
|
||||
120
postprocessing/mmaudio/mmaudio.py
Normal file
@ -0,0 +1,120 @@
|
||||
import gc
|
||||
import logging
|
||||
|
||||
import torch
|
||||
|
||||
from .eval_utils import (ModelConfig, VideoInfo, all_model_cfg, generate, load_image,
|
||||
load_video, make_video, setup_eval_logging)
|
||||
from .model.flow_matching import FlowMatching
|
||||
from .model.networks import MMAudio, get_my_mmaudio
|
||||
from .model.sequence_config import SequenceConfig
|
||||
from .model.utils.features_utils import FeaturesUtils
|
||||
|
||||
persistent_offloadobj = None
|
||||
|
||||
def get_model(persistent_models=False, verboseLevel=1) -> tuple[MMAudio, FeaturesUtils, SequenceConfig, object]:
|
||||
torch.backends.cuda.matmul.allow_tf32 = True
|
||||
torch.backends.cudnn.allow_tf32 = True
|
||||
|
||||
global device, persistent_offloadobj, persistent_net, persistent_features_utils, persistent_seq_cfg
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
device = 'cpu' #"cuda"
|
||||
# if torch.cuda.is_available():
|
||||
# device = 'cuda'
|
||||
# elif torch.backends.mps.is_available():
|
||||
# device = 'mps'
|
||||
# else:
|
||||
# log.warning('CUDA/MPS are not available, running on CPU')
|
||||
dtype = torch.bfloat16
|
||||
|
||||
model: ModelConfig = all_model_cfg['large_44k_v2']
|
||||
# model.download_if_needed()
|
||||
|
||||
setup_eval_logging()
|
||||
|
||||
seq_cfg = model.seq_cfg
|
||||
if persistent_offloadobj is None:
|
||||
from accelerate import init_empty_weights
|
||||
# with init_empty_weights():
|
||||
net: MMAudio = get_my_mmaudio(model.model_name)
|
||||
net.load_weights(torch.load(model.model_path, map_location=device, weights_only=True))
|
||||
net.to(device, dtype).eval()
|
||||
log.info(f'Loaded weights from {model.model_path}')
|
||||
feature_utils = FeaturesUtils(tod_vae_ckpt=model.vae_path,
|
||||
synchformer_ckpt=model.synchformer_ckpt,
|
||||
enable_conditions=True,
|
||||
mode=model.mode,
|
||||
bigvgan_vocoder_ckpt=model.bigvgan_16k_path,
|
||||
need_vae_encoder=False)
|
||||
feature_utils = feature_utils.to(device, dtype).eval()
|
||||
feature_utils.device = "cuda"
|
||||
|
||||
pipe = { "net" : net, "clip" : feature_utils.clip_model, "syncformer" : feature_utils.synchformer, "vocode" : feature_utils.tod.vocoder, "vae" : feature_utils.tod.vae }
|
||||
from mmgp import offload
|
||||
offloadobj = offload.profile(pipe, profile_no=4, verboseLevel=2)
|
||||
if persistent_models:
|
||||
persistent_offloadobj = offloadobj
|
||||
persistent_net = net
|
||||
persistent_features_utils = feature_utils
|
||||
persistent_seq_cfg = seq_cfg
|
||||
|
||||
else:
|
||||
offloadobj = persistent_offloadobj
|
||||
net = persistent_net
|
||||
feature_utils = persistent_features_utils
|
||||
seq_cfg = persistent_seq_cfg
|
||||
|
||||
if not persistent_models:
|
||||
persistent_offloadobj = None
|
||||
persistent_net = None
|
||||
persistent_features_utils = None
|
||||
persistent_seq_cfg = None
|
||||
|
||||
return net, feature_utils, seq_cfg, offloadobj
|
||||
|
||||
@torch.inference_mode()
|
||||
def video_to_audio(video, prompt: str, negative_prompt: str, seed: int, num_steps: int,
|
||||
cfg_strength: float, duration: float, video_save_path, persistent_models=False, verboseLevel=1):
|
||||
|
||||
global device
|
||||
|
||||
net, feature_utils, seq_cfg, offloadobj = get_model(persistent_models, verboseLevel )
|
||||
|
||||
rng = torch.Generator(device="cuda")
|
||||
if seed >= 0:
|
||||
rng.manual_seed(seed)
|
||||
else:
|
||||
rng.seed()
|
||||
fm = FlowMatching(min_sigma=0, inference_mode='euler', num_steps=num_steps)
|
||||
|
||||
video_info = load_video(video, duration)
|
||||
clip_frames = video_info.clip_frames
|
||||
sync_frames = video_info.sync_frames
|
||||
duration = video_info.duration_sec
|
||||
clip_frames = clip_frames.unsqueeze(0)
|
||||
sync_frames = sync_frames.unsqueeze(0)
|
||||
seq_cfg.duration = duration
|
||||
net.update_seq_lengths(seq_cfg.latent_seq_len, seq_cfg.clip_seq_len, seq_cfg.sync_seq_len)
|
||||
|
||||
audios = generate(clip_frames,
|
||||
sync_frames, [prompt],
|
||||
negative_text=[negative_prompt],
|
||||
feature_utils=feature_utils,
|
||||
net=net,
|
||||
fm=fm,
|
||||
rng=rng,
|
||||
cfg_strength=cfg_strength,
|
||||
offloadobj = offloadobj
|
||||
)
|
||||
audio = audios.float().cpu()[0]
|
||||
|
||||
make_video(video, video_info, video_save_path, audio, sampling_rate=seq_cfg.sampling_rate)
|
||||
offloadobj.unload_all()
|
||||
if not persistent_models:
|
||||
offloadobj.release()
|
||||
|
||||
torch.cuda.empty_cache()
|
||||
gc.collect()
|
||||
return video_save_path
|
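# Added usage sketch (hedged; not part of the original file). The file names below are
# placeholders for illustration only; WanGP itself calls video_to_audio with its own
# paths and user-chosen settings.
def _demo_video_to_audio():
    return video_to_audio(
        video="example_input.mp4",                 # hypothetical input clip
        prompt="rain falling on a tin roof",
        negative_prompt="",
        seed=42,
        num_steps=25,
        cfg_strength=4.5,
        duration=8.0,
        video_save_path="example_with_audio.mp4",  # hypothetical output path
        persistent_models=False,
    )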
||||
0
postprocessing/mmaudio/model/__init__.py
Normal file
49
postprocessing/mmaudio/model/embeddings.py
Normal file
@ -0,0 +1,49 @@
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
|
||||
# https://github.com/facebookresearch/DiT
|
||||
|
||||
|
||||
class TimestepEmbedder(nn.Module):
|
||||
"""
|
||||
Embeds scalar timesteps into vector representations.
|
||||
"""
|
||||
|
||||
def __init__(self, dim, frequency_embedding_size, max_period):
|
||||
super().__init__()
|
||||
self.mlp = nn.Sequential(
|
||||
nn.Linear(frequency_embedding_size, dim),
|
||||
nn.SiLU(),
|
||||
nn.Linear(dim, dim),
|
||||
)
|
||||
self.dim = dim
|
||||
self.max_period = max_period
|
||||
assert dim % 2 == 0, 'dim must be even.'
|
||||
|
||||
with torch.autocast('cuda', enabled=False):
|
||||
self.freqs = nn.Buffer(
|
||||
1.0 / (10000**(torch.arange(0, frequency_embedding_size, 2, dtype=torch.float32) /
|
||||
frequency_embedding_size)),
|
||||
persistent=False)
|
||||
freq_scale = 10000 / max_period
|
||||
self.freqs = freq_scale * self.freqs
|
||||
|
||||
def timestep_embedding(self, t):
|
||||
"""
|
||||
Create sinusoidal timestep embeddings.
|
||||
:param t: a 1-D Tensor of N indices, one per batch element.
|
||||
These may be fractional.
|
||||
:param dim: the dimension of the output.
|
||||
:param max_period: controls the minimum frequency of the embeddings.
|
||||
:return: an (N, D) Tensor of positional embeddings.
|
||||
"""
|
||||
# https://github.com/openai/glide-text2im/blob/main/glide_text2im/nn.py
|
||||
|
||||
args = t[:, None].float() * self.freqs[None]
|
||||
embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
|
||||
return embedding
|
||||
|
||||
def forward(self, t):
|
||||
t_freq = self.timestep_embedding(t).to(t.dtype)
|
||||
t_emb = self.mlp(t_freq)
|
||||
return t_emb
|
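# Added illustration (hedged; not part of the original file): a shape-only sketch of the
# embedder. Elsewhere in this commit the v2 network constructs it with
# frequency_embedding_size equal to hidden_dim and max_period=1 (v1 uses 256 and 10000).
def _demo_timestep_embedder():
    embed = TimestepEmbedder(dim=128, frequency_embedding_size=128, max_period=1)
    t = torch.rand(4)                 # one scalar timestep per batch element
    out = embed(t)
    assert out.shape == (4, 128)
    return out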
||||
71
postprocessing/mmaudio/model/flow_matching.py
Normal file
@ -0,0 +1,71 @@
|
||||
import logging
|
||||
from typing import Callable, Optional
|
||||
|
||||
import torch
|
||||
from torchdiffeq import odeint
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
# Partially from https://github.com/gle-bellier/flow-matching
|
||||
class FlowMatching:
|
||||
|
||||
def __init__(self, min_sigma: float = 0.0, inference_mode='euler', num_steps: int = 25):
|
||||
# inference_mode: 'euler' or 'adaptive'
|
||||
# num_steps: number of steps in the euler inference mode
|
||||
super().__init__()
|
||||
self.min_sigma = min_sigma
|
||||
self.inference_mode = inference_mode
|
||||
self.num_steps = num_steps
|
||||
|
||||
# self.fm = ExactOptimalTransportConditionalFlowMatcher(sigma=min_sigma)
|
||||
|
||||
assert self.inference_mode in ['euler', 'adaptive']
|
||||
if self.inference_mode == 'adaptive' and num_steps > 0:
|
||||
log.info('The number of steps is ignored in adaptive inference mode ')
|
||||
|
||||
def get_conditional_flow(self, x0: torch.Tensor, x1: torch.Tensor,
|
||||
t: torch.Tensor) -> torch.Tensor:
|
||||
# which is psi_t(x), eq 22 in flow matching for generative models
|
||||
t = t[:, None, None].expand_as(x0)
|
||||
return (1 - (1 - self.min_sigma) * t) * x0 + t * x1
|
||||
|
||||
def loss(self, predicted_v: torch.Tensor, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
|
||||
# return the mean error without reducing the batch dimension
|
||||
reduce_dim = list(range(1, len(predicted_v.shape)))
|
||||
target_v = x1 - (1 - self.min_sigma) * x0
|
||||
return (predicted_v - target_v).pow(2).mean(dim=reduce_dim)
|
||||
|
||||
def get_x0_xt_c(
|
||||
self,
|
||||
x1: torch.Tensor,
|
||||
t: torch.Tensor,
|
||||
Cs: list[torch.Tensor],
|
||||
generator: Optional[torch.Generator] = None
|
||||
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
|
||||
x0 = torch.empty_like(x1).normal_(generator=generator)
|
||||
|
||||
xt = self.get_conditional_flow(x0, x1, t)
|
||||
return x0, x1, xt, Cs
|
||||
|
||||
def to_prior(self, fn: Callable, x1: torch.Tensor) -> torch.Tensor:
|
||||
return self.run_t0_to_t1(fn, x1, 1, 0)
|
||||
|
||||
def to_data(self, fn: Callable, x0: torch.Tensor) -> torch.Tensor:
|
||||
return self.run_t0_to_t1(fn, x0, 0, 1)
|
||||
|
||||
def run_t0_to_t1(self, fn: Callable, x0: torch.Tensor, t0: float, t1: float) -> torch.Tensor:
|
||||
# fn: a function that takes (t, x) and returns the direction x0->x1
|
||||
|
||||
if self.inference_mode == 'adaptive':
|
||||
return odeint(fn, x0, torch.tensor([t0, t1], device=x0.device, dtype=x0.dtype))
|
||||
elif self.inference_mode == 'euler':
|
||||
x = x0
|
||||
steps = torch.linspace(t0, t1 - self.min_sigma, self.num_steps + 1)
|
||||
for ti, t in enumerate(steps[:-1]):
|
||||
flow = fn(t, x)
|
||||
next_t = steps[ti + 1]
|
||||
dt = next_t - t
|
||||
x = x + dt * flow
|
||||
|
||||
return x
|
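# Added illustration (hedged; not part of the original file): with a constant unit
# velocity field, the Euler branch above carries x from 0 at t=0 to roughly 1 at t=1,
# which is a quick sanity check of run_t0_to_t1 / to_data.
def _demo_euler_flow():
    fm = FlowMatching(min_sigma=0.0, inference_mode='euler', num_steps=25)
    x0 = torch.zeros(2, 4)
    constant_flow = lambda t, x: torch.ones_like(x)   # dx/dt = 1 everywhere
    x1 = fm.to_data(constant_flow, x0)
    assert torch.allclose(x1, torch.ones_like(x1))
    return x1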
||||
95
postprocessing/mmaudio/model/low_level.py
Normal file
@ -0,0 +1,95 @@
|
||||
import torch
|
||||
from torch import nn
|
||||
from torch.nn import functional as F
|
||||
|
||||
|
||||
class ChannelLastConv1d(nn.Conv1d):
|
||||
|
||||
def forward(self, x: torch.Tensor) -> torch.Tensor:
|
||||
x = x.permute(0, 2, 1)
|
||||
x = super().forward(x)
|
||||
x = x.permute(0, 2, 1)
|
||||
return x
|
||||
|
||||
|
||||
# https://github.com/Stability-AI/sd3-ref
|
||||
class MLP(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
dim: int,
|
||||
hidden_dim: int,
|
||||
multiple_of: int = 256,
|
||||
):
|
||||
"""
|
||||
Initialize the FeedForward module.
|
||||
|
||||
Args:
|
||||
dim (int): Input dimension.
|
||||
hidden_dim (int): Hidden dimension of the feedforward layer.
|
||||
multiple_of (int): Value to ensure hidden dimension is a multiple of this value.
|
||||
|
||||
Attributes:
|
||||
w1 (ColumnParallelLinear): Linear transformation for the first layer.
|
||||
w2 (RowParallelLinear): Linear transformation for the second layer.
|
||||
w3 (ColumnParallelLinear): Linear transformation for the third layer.
|
||||
|
||||
"""
|
||||
super().__init__()
|
||||
hidden_dim = int(2 * hidden_dim / 3)
|
||||
hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
|
||||
|
||||
self.w1 = nn.Linear(dim, hidden_dim, bias=False)
|
||||
self.w2 = nn.Linear(hidden_dim, dim, bias=False)
|
||||
self.w3 = nn.Linear(dim, hidden_dim, bias=False)
|
||||
|
||||
def forward(self, x):
|
||||
return self.w2(F.silu(self.w1(x)) * self.w3(x))
|
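# Added worked example (hedged; not part of the original file): the gated MLP above
# shrinks the requested hidden width to 2/3 and rounds it up to a multiple of
# `multiple_of`, SwiGLU-style. For dim=448 and hidden_dim=4*448=1792:
#   int(2 * 1792 / 3) = 1194  ->  next multiple of 256 = 1280.
def _demo_mlp_hidden_rounding():
    mlp = MLP(dim=448, hidden_dim=4 * 448)
    assert mlp.w1.out_features == 1280 and mlp.w2.in_features == 1280
    return mlp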
||||
|
||||
|
||||
class ConvMLP(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
dim: int,
|
||||
hidden_dim: int,
|
||||
multiple_of: int = 256,
|
||||
kernel_size: int = 3,
|
||||
padding: int = 1,
|
||||
):
|
||||
"""
|
||||
Initialize the FeedForward module.
|
||||
|
||||
Args:
|
||||
dim (int): Input dimension.
|
||||
hidden_dim (int): Hidden dimension of the feedforward layer.
|
||||
multiple_of (int): Value to ensure hidden dimension is a multiple of this value.
|
||||
|
||||
Attributes:
|
||||
w1 (ColumnParallelLinear): Linear transformation for the first layer.
|
||||
w2 (RowParallelLinear): Linear transformation for the second layer.
|
||||
w3 (ColumnParallelLinear): Linear transformation for the third layer.
|
||||
|
||||
"""
|
||||
super().__init__()
|
||||
hidden_dim = int(2 * hidden_dim / 3)
|
||||
hidden_dim = multiple_of * ((hidden_dim + multiple_of - 1) // multiple_of)
|
||||
|
||||
self.w1 = ChannelLastConv1d(dim,
|
||||
hidden_dim,
|
||||
bias=False,
|
||||
kernel_size=kernel_size,
|
||||
padding=padding)
|
||||
self.w2 = ChannelLastConv1d(hidden_dim,
|
||||
dim,
|
||||
bias=False,
|
||||
kernel_size=kernel_size,
|
||||
padding=padding)
|
||||
self.w3 = ChannelLastConv1d(dim,
|
||||
hidden_dim,
|
||||
bias=False,
|
||||
kernel_size=kernel_size,
|
||||
padding=padding)
|
||||
|
||||
def forward(self, x):
|
||||
return self.w2(F.silu(self.w1(x)) * self.w3(x))
|
||||
477
postprocessing/mmaudio/model/networks.py
Normal file
@ -0,0 +1,477 @@
|
||||
import logging
|
||||
from dataclasses import dataclass
|
||||
from typing import Optional
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
|
||||
from ..ext.rotary_embeddings import compute_rope_rotations
|
||||
from .embeddings import TimestepEmbedder
|
||||
from .low_level import MLP, ChannelLastConv1d, ConvMLP
|
||||
from .transformer_layers import (FinalBlock, JointBlock, MMDitSingleBlock)
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
@dataclass
|
||||
class PreprocessedConditions:
|
||||
clip_f: torch.Tensor
|
||||
sync_f: torch.Tensor
|
||||
text_f: torch.Tensor
|
||||
clip_f_c: torch.Tensor
|
||||
text_f_c: torch.Tensor
|
||||
|
||||
|
||||
# Partially from https://github.com/facebookresearch/DiT
|
||||
class MMAudio(nn.Module):
|
||||
|
||||
def __init__(self,
|
||||
*,
|
||||
latent_dim: int,
|
||||
clip_dim: int,
|
||||
sync_dim: int,
|
||||
text_dim: int,
|
||||
hidden_dim: int,
|
||||
depth: int,
|
||||
fused_depth: int,
|
||||
num_heads: int,
|
||||
mlp_ratio: float = 4.0,
|
||||
latent_seq_len: int,
|
||||
clip_seq_len: int,
|
||||
sync_seq_len: int,
|
||||
text_seq_len: int = 77,
|
||||
latent_mean: Optional[torch.Tensor] = None,
|
||||
latent_std: Optional[torch.Tensor] = None,
|
||||
empty_string_feat: Optional[torch.Tensor] = None,
|
||||
v2: bool = False) -> None:
|
||||
super().__init__()
|
||||
|
||||
self.v2 = v2
|
||||
self.latent_dim = latent_dim
|
||||
self._latent_seq_len = latent_seq_len
|
||||
self._clip_seq_len = clip_seq_len
|
||||
self._sync_seq_len = sync_seq_len
|
||||
self._text_seq_len = text_seq_len
|
||||
self.hidden_dim = hidden_dim
|
||||
self.num_heads = num_heads
|
||||
|
||||
if v2:
|
||||
self.audio_input_proj = nn.Sequential(
|
||||
ChannelLastConv1d(latent_dim, hidden_dim, kernel_size=7, padding=3),
|
||||
nn.SiLU(),
|
||||
ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=7, padding=3),
|
||||
)
|
||||
|
||||
self.clip_input_proj = nn.Sequential(
|
||||
nn.Linear(clip_dim, hidden_dim),
|
||||
nn.SiLU(),
|
||||
ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1),
|
||||
)
|
||||
|
||||
self.sync_input_proj = nn.Sequential(
|
||||
ChannelLastConv1d(sync_dim, hidden_dim, kernel_size=7, padding=3),
|
||||
nn.SiLU(),
|
||||
ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1),
|
||||
)
|
||||
|
||||
self.text_input_proj = nn.Sequential(
|
||||
nn.Linear(text_dim, hidden_dim),
|
||||
nn.SiLU(),
|
||||
MLP(hidden_dim, hidden_dim * 4),
|
||||
)
|
||||
else:
|
||||
self.audio_input_proj = nn.Sequential(
|
||||
ChannelLastConv1d(latent_dim, hidden_dim, kernel_size=7, padding=3),
|
||||
nn.SELU(),
|
||||
ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=7, padding=3),
|
||||
)
|
||||
|
||||
self.clip_input_proj = nn.Sequential(
|
||||
nn.Linear(clip_dim, hidden_dim),
|
||||
ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1),
|
||||
)
|
||||
|
||||
self.sync_input_proj = nn.Sequential(
|
||||
ChannelLastConv1d(sync_dim, hidden_dim, kernel_size=7, padding=3),
|
||||
nn.SELU(),
|
||||
ConvMLP(hidden_dim, hidden_dim * 4, kernel_size=3, padding=1),
|
||||
)
|
||||
|
||||
self.text_input_proj = nn.Sequential(
|
||||
nn.Linear(text_dim, hidden_dim),
|
||||
MLP(hidden_dim, hidden_dim * 4),
|
||||
)
|
||||
|
||||
self.clip_cond_proj = nn.Linear(hidden_dim, hidden_dim)
|
||||
self.text_cond_proj = nn.Linear(hidden_dim, hidden_dim)
|
||||
self.global_cond_mlp = MLP(hidden_dim, hidden_dim * 4)
|
||||
# each synchformer output segment has 8 feature frames
|
||||
self.sync_pos_emb = nn.Parameter(torch.zeros((1, 1, 8, sync_dim)))
|
||||
|
||||
self.final_layer = FinalBlock(hidden_dim, latent_dim)
|
||||
|
||||
if v2:
|
||||
self.t_embed = TimestepEmbedder(hidden_dim,
|
||||
frequency_embedding_size=hidden_dim,
|
||||
max_period=1)
|
||||
else:
|
||||
self.t_embed = TimestepEmbedder(hidden_dim,
|
||||
frequency_embedding_size=256,
|
||||
max_period=10000)
|
||||
self.joint_blocks = nn.ModuleList([
|
||||
JointBlock(hidden_dim,
|
||||
num_heads,
|
||||
mlp_ratio=mlp_ratio,
|
||||
pre_only=(i == depth - fused_depth - 1)) for i in range(depth - fused_depth)
|
||||
])
|
||||
|
||||
self.fused_blocks = nn.ModuleList([
|
||||
MMDitSingleBlock(hidden_dim, num_heads, mlp_ratio=mlp_ratio, kernel_size=3, padding=1)
|
||||
for i in range(fused_depth)
|
||||
])
|
||||
|
||||
if latent_mean is None:
|
||||
# these values are not meant to be used
|
||||
# if you don't provide mean/std here, we should load them later from a checkpoint
|
||||
assert latent_std is None
|
||||
latent_mean = torch.ones(latent_dim).view(1, 1, -1).fill_(float('nan'))
|
||||
latent_std = torch.ones(latent_dim).view(1, 1, -1).fill_(float('nan'))
|
||||
else:
|
||||
assert latent_std is not None
|
||||
assert latent_mean.numel() == latent_dim, f'{latent_mean.numel()=} != {latent_dim=}'
|
||||
if empty_string_feat is None:
|
||||
empty_string_feat = torch.zeros((text_seq_len, text_dim))
|
||||
self.latent_mean = nn.Parameter(latent_mean.view(1, 1, -1), requires_grad=False)
|
||||
self.latent_std = nn.Parameter(latent_std.view(1, 1, -1), requires_grad=False)
|
||||
|
||||
self.empty_string_feat = nn.Parameter(empty_string_feat, requires_grad=False)
|
||||
self.empty_clip_feat = nn.Parameter(torch.zeros(1, clip_dim), requires_grad=True)
|
||||
self.empty_sync_feat = nn.Parameter(torch.zeros(1, sync_dim), requires_grad=True)
|
||||
|
||||
self.initialize_weights()
|
||||
self.initialize_rotations()
|
||||
|
||||
def initialize_rotations(self):
|
||||
base_freq = 1.0
|
||||
latent_rot = compute_rope_rotations(self._latent_seq_len,
|
||||
self.hidden_dim // self.num_heads,
|
||||
10000,
|
||||
freq_scaling=base_freq,
|
||||
device=self.device)
|
||||
clip_rot = compute_rope_rotations(self._clip_seq_len,
|
||||
self.hidden_dim // self.num_heads,
|
||||
10000,
|
||||
freq_scaling=base_freq * self._latent_seq_len /
|
||||
self._clip_seq_len,
|
||||
device=self.device)
|
||||
|
||||
self.latent_rot = latent_rot  # plain tensor, not registered as a buffer
|
||||
self.clip_rot = clip_rot  # plain tensor, not registered as a buffer
|
||||
|
||||
def update_seq_lengths(self, latent_seq_len: int, clip_seq_len: int, sync_seq_len: int) -> None:
|
||||
self._latent_seq_len = latent_seq_len
|
||||
self._clip_seq_len = clip_seq_len
|
||||
self._sync_seq_len = sync_seq_len
|
||||
self.initialize_rotations()
|
||||
|
||||
def initialize_weights(self):
|
||||
|
||||
def _basic_init(module):
|
||||
if isinstance(module, nn.Linear):
|
||||
torch.nn.init.xavier_uniform_(module.weight)
|
||||
if module.bias is not None:
|
||||
nn.init.constant_(module.bias, 0)
|
||||
|
||||
self.apply(_basic_init)
|
||||
|
||||
# Initialize timestep embedding MLP:
|
||||
nn.init.normal_(self.t_embed.mlp[0].weight, std=0.02)
|
||||
nn.init.normal_(self.t_embed.mlp[2].weight, std=0.02)
|
||||
|
||||
# Zero-out adaLN modulation layers in DiT blocks:
|
||||
for block in self.joint_blocks:
|
||||
nn.init.constant_(block.latent_block.adaLN_modulation[-1].weight, 0)
|
||||
nn.init.constant_(block.latent_block.adaLN_modulation[-1].bias, 0)
|
||||
nn.init.constant_(block.clip_block.adaLN_modulation[-1].weight, 0)
|
||||
nn.init.constant_(block.clip_block.adaLN_modulation[-1].bias, 0)
|
||||
nn.init.constant_(block.text_block.adaLN_modulation[-1].weight, 0)
|
||||
nn.init.constant_(block.text_block.adaLN_modulation[-1].bias, 0)
|
||||
for block in self.fused_blocks:
|
||||
nn.init.constant_(block.adaLN_modulation[-1].weight, 0)
|
||||
nn.init.constant_(block.adaLN_modulation[-1].bias, 0)
|
||||
|
||||
# Zero-out output layers:
|
||||
nn.init.constant_(self.final_layer.adaLN_modulation[-1].weight, 0)
|
||||
nn.init.constant_(self.final_layer.adaLN_modulation[-1].bias, 0)
|
||||
nn.init.constant_(self.final_layer.conv.weight, 0)
|
||||
nn.init.constant_(self.final_layer.conv.bias, 0)
|
||||
|
||||
# empty string feat shall be initialized by a CLIP encoder
|
||||
nn.init.constant_(self.sync_pos_emb, 0)
|
||||
nn.init.constant_(self.empty_clip_feat, 0)
|
||||
nn.init.constant_(self.empty_sync_feat, 0)
|
||||
|
||||
def normalize(self, x: torch.Tensor) -> torch.Tensor:
|
||||
# return (x - self.latent_mean) / self.latent_std
|
||||
return x.sub_(self.latent_mean).div_(self.latent_std)
|
||||
|
||||
def unnormalize(self, x: torch.Tensor) -> torch.Tensor:
|
||||
# return x * self.latent_std + self.latent_mean
|
||||
return x.mul_(self.latent_std).add_(self.latent_mean)
|
||||
|
||||
def preprocess_conditions(self, clip_f: torch.Tensor, sync_f: torch.Tensor,
|
||||
text_f: torch.Tensor) -> PreprocessedConditions:
|
||||
"""
|
||||
cache computations that do not depend on the latent/time step
|
||||
i.e., the features are reused over steps during inference
|
||||
"""
|
||||
assert clip_f.shape[1] == self._clip_seq_len, f'{clip_f.shape=} {self._clip_seq_len=}'
|
||||
assert sync_f.shape[1] == self._sync_seq_len, f'{sync_f.shape=} {self._sync_seq_len=}'
|
||||
assert text_f.shape[1] == self._text_seq_len, f'{text_f.shape=} {self._text_seq_len=}'
|
||||
|
||||
bs = clip_f.shape[0]
|
||||
|
||||
# B * num_segments (24) * 8 * 768
|
||||
num_sync_segments = self._sync_seq_len // 8
|
||||
sync_f = sync_f.view(bs, num_sync_segments, 8, -1) + self.sync_pos_emb
|
||||
sync_f = sync_f.flatten(1, 2) # (B, VN, D)
|
||||
|
||||
# extend vf to match x
|
||||
clip_f = self.clip_input_proj(clip_f) # (B, VN, D)
|
||||
sync_f = self.sync_input_proj(sync_f) # (B, VN, D)
|
||||
text_f = self.text_input_proj(text_f) # (B, VN, D)
|
||||
|
||||
# upsample the sync features to match the audio
|
||||
sync_f = sync_f.transpose(1, 2) # (B, D, VN)
|
||||
sync_f = F.interpolate(sync_f, size=self._latent_seq_len, mode='nearest-exact')
|
||||
sync_f = sync_f.transpose(1, 2) # (B, N, D)
|
||||
|
||||
# get conditional features from the clip side
|
||||
clip_f_c = self.clip_cond_proj(clip_f.mean(dim=1)) # (B, D)
|
||||
text_f_c = self.text_cond_proj(text_f.mean(dim=1)) # (B, D)
|
||||
|
||||
return PreprocessedConditions(clip_f=clip_f,
|
||||
sync_f=sync_f,
|
||||
text_f=text_f,
|
||||
clip_f_c=clip_f_c,
|
||||
text_f_c=text_f_c)
|
||||
|
||||
def predict_flow(self, latent: torch.Tensor, t: torch.Tensor,
|
||||
conditions: PreprocessedConditions) -> torch.Tensor:
|
||||
"""
|
||||
for non-cacheable computations
|
||||
"""
|
||||
assert latent.shape[1] == self._latent_seq_len, f'{latent.shape=} {self._latent_seq_len=}'
|
||||
|
||||
clip_f = conditions.clip_f
|
||||
sync_f = conditions.sync_f
|
||||
text_f = conditions.text_f
|
||||
clip_f_c = conditions.clip_f_c
|
||||
text_f_c = conditions.text_f_c
|
||||
|
||||
latent = self.audio_input_proj(latent) # (B, N, D)
|
||||
global_c = self.global_cond_mlp(clip_f_c + text_f_c) # (B, D)
|
||||
|
||||
global_c = self.t_embed(t).unsqueeze(1) + global_c.unsqueeze(1)  # (B, 1, D)
|
||||
extended_c = global_c + sync_f
|
||||
|
||||
|
||||
|
||||
self.latent_rot = self.latent_rot.to("cuda")
|
||||
self.clip_rot = self.clip_rot.to("cuda")
|
||||
for block in self.joint_blocks:
|
||||
latent, clip_f, text_f = block(latent, clip_f, text_f, global_c, extended_c,
|
||||
self.latent_rot, self.clip_rot) # (B, N, D)
|
||||
|
||||
for block in self.fused_blocks:
|
||||
latent = block(latent, extended_c, self.latent_rot)
|
||||
self.latent_rot = self.latent_rot.to("cpu")
|
||||
self.clip_rot = self.clip_rot.to("cpu")
|
||||
|
||||
# should be extended_c; this is a minor implementation error #55
|
||||
flow = self.final_layer(latent, global_c) # (B, N, out_dim), remove t
|
||||
return flow
|
||||
|
||||
def forward(self, latent: torch.Tensor, clip_f: torch.Tensor, sync_f: torch.Tensor,
|
||||
text_f: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
|
||||
"""
|
||||
latent: (B, N, C)
|
||||
vf: (B, T, C_V)
|
||||
t: (B,)
|
||||
"""
|
||||
conditions = self.preprocess_conditions(clip_f, sync_f, text_f)
|
||||
flow = self.predict_flow(latent, t, conditions)
|
||||
return flow
|
||||
|
||||
def get_empty_string_sequence(self, bs: int) -> torch.Tensor:
|
||||
return self.empty_string_feat.unsqueeze(0).expand(bs, -1, -1)
|
||||
|
||||
def get_empty_clip_sequence(self, bs: int) -> torch.Tensor:
|
||||
return self.empty_clip_feat.unsqueeze(0).expand(bs, self._clip_seq_len, -1)
|
||||
|
||||
def get_empty_sync_sequence(self, bs: int) -> torch.Tensor:
|
||||
return self.empty_sync_feat.unsqueeze(0).expand(bs, self._sync_seq_len, -1)
|
||||
|
||||
def get_empty_conditions(
|
||||
self,
|
||||
bs: int,
|
||||
*,
|
||||
negative_text_features: Optional[torch.Tensor] = None) -> PreprocessedConditions:
|
||||
if negative_text_features is not None:
|
||||
empty_text = negative_text_features
|
||||
else:
|
||||
empty_text = self.get_empty_string_sequence(1)
|
||||
|
||||
empty_clip = self.get_empty_clip_sequence(1)
|
||||
empty_sync = self.get_empty_sync_sequence(1)
|
||||
conditions = self.preprocess_conditions(empty_clip, empty_sync, empty_text)
|
||||
conditions.clip_f = conditions.clip_f.expand(bs, -1, -1)
|
||||
conditions.sync_f = conditions.sync_f.expand(bs, -1, -1)
|
||||
conditions.clip_f_c = conditions.clip_f_c.expand(bs, -1)
|
||||
if negative_text_features is None:
|
||||
conditions.text_f = conditions.text_f.expand(bs, -1, -1)
|
||||
conditions.text_f_c = conditions.text_f_c.expand(bs, -1)
|
||||
|
||||
return conditions
|
||||
|
||||
def ode_wrapper(self, t: torch.Tensor, latent: torch.Tensor, conditions: PreprocessedConditions,
|
||||
empty_conditions: PreprocessedConditions, cfg_strength: float) -> torch.Tensor:
|
||||
t = t * torch.ones(len(latent), device=latent.device, dtype=latent.dtype)
|
||||
|
||||
if cfg_strength < 1.0:
|
||||
return self.predict_flow(latent, t, conditions)
|
||||
else:
|
||||
return (cfg_strength * self.predict_flow(latent, t, conditions) +
|
||||
(1 - cfg_strength) * self.predict_flow(latent, t, empty_conditions))
|
||||
|
||||
def load_weights(self, src_dict) -> None:
|
||||
if 't_embed.freqs' in src_dict:
|
||||
del src_dict['t_embed.freqs']
|
||||
if 'latent_rot' in src_dict:
|
||||
del src_dict['latent_rot']
|
||||
if 'clip_rot' in src_dict:
|
||||
del src_dict['clip_rot']
|
||||
|
||||
self.load_state_dict(src_dict, strict=True, assign=True)
|
||||
pass
|
||||
|
||||
@property
|
||||
def device(self) -> torch.device:
|
||||
return self.latent_mean.device
|
||||
|
||||
@property
|
||||
def latent_seq_len(self) -> int:
|
||||
return self._latent_seq_len
|
||||
|
||||
@property
|
||||
def clip_seq_len(self) -> int:
|
||||
return self._clip_seq_len
|
||||
|
||||
@property
|
||||
def sync_seq_len(self) -> int:
|
||||
return self._sync_seq_len
|
||||
|
||||
|
||||
def small_16k(**kwargs) -> MMAudio:
|
||||
num_heads = 7
|
||||
return MMAudio(latent_dim=20,
|
||||
clip_dim=1024,
|
||||
sync_dim=768,
|
||||
text_dim=1024,
|
||||
hidden_dim=64 * num_heads,
|
||||
depth=12,
|
||||
fused_depth=8,
|
||||
num_heads=num_heads,
|
||||
latent_seq_len=250,
|
||||
clip_seq_len=64,
|
||||
sync_seq_len=192,
|
||||
**kwargs)
|
||||
|
||||
|
||||
def small_44k(**kwargs) -> MMAudio:
|
||||
num_heads = 7
|
||||
return MMAudio(latent_dim=40,
|
||||
clip_dim=1024,
|
||||
sync_dim=768,
|
||||
text_dim=1024,
|
||||
hidden_dim=64 * num_heads,
|
||||
depth=12,
|
||||
fused_depth=8,
|
||||
num_heads=num_heads,
|
||||
latent_seq_len=345,
|
||||
clip_seq_len=64,
|
||||
sync_seq_len=192,
|
||||
**kwargs)
|
||||
|
||||
|
||||
def medium_44k(**kwargs) -> MMAudio:
|
||||
num_heads = 14
|
||||
return MMAudio(latent_dim=40,
|
||||
clip_dim=1024,
|
||||
sync_dim=768,
|
||||
text_dim=1024,
|
||||
hidden_dim=64 * num_heads,
|
||||
depth=12,
|
||||
fused_depth=8,
|
||||
num_heads=num_heads,
|
||||
latent_seq_len=345,
|
||||
clip_seq_len=64,
|
||||
sync_seq_len=192,
|
||||
**kwargs)
|
||||
|
||||
|
||||
def large_44k(**kwargs) -> MMAudio:
|
||||
num_heads = 14
|
||||
return MMAudio(latent_dim=40,
|
||||
clip_dim=1024,
|
||||
sync_dim=768,
|
||||
text_dim=1024,
|
||||
hidden_dim=64 * num_heads,
|
||||
depth=21,
|
||||
fused_depth=14,
|
||||
num_heads=num_heads,
|
||||
latent_seq_len=345,
|
||||
clip_seq_len=64,
|
||||
sync_seq_len=192,
|
||||
**kwargs)
|
||||
|
||||
|
||||
def large_44k_v2(**kwargs) -> MMAudio:
|
||||
num_heads = 14
|
||||
return MMAudio(latent_dim=40,
|
||||
clip_dim=1024,
|
||||
sync_dim=768,
|
||||
text_dim=1024,
|
||||
hidden_dim=64 * num_heads,
|
||||
depth=21,
|
||||
fused_depth=14,
|
||||
num_heads=num_heads,
|
||||
latent_seq_len=345,
|
||||
clip_seq_len=64,
|
||||
sync_seq_len=192,
|
||||
v2=True,
|
||||
**kwargs)
|
||||
|
||||
|
||||
def get_my_mmaudio(name: str, **kwargs) -> MMAudio:
|
||||
if name == 'small_16k':
|
||||
return small_16k(**kwargs)
|
||||
if name == 'small_44k':
|
||||
return small_44k(**kwargs)
|
||||
if name == 'medium_44k':
|
||||
return medium_44k(**kwargs)
|
||||
if name == 'large_44k':
|
||||
return large_44k(**kwargs)
|
||||
if name == 'large_44k_v2':
|
||||
return large_44k_v2(**kwargs)
|
||||
|
||||
raise ValueError(f'Unknown model name: {name}')
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
network = get_my_mmaudio('small_16k')
|
||||
|
||||
# print the number of parameters in terms of millions
|
||||
num_params = sum(p.numel() for p in network.parameters()) / 1e6
|
||||
print(f'Number of parameters: {num_params:.2f}M')
|
||||
58
postprocessing/mmaudio/model/sequence_config.py
Normal file
@ -0,0 +1,58 @@
|
||||
import dataclasses
|
||||
import math
|
||||
|
||||
|
||||
@dataclasses.dataclass
|
||||
class SequenceConfig:
|
||||
# general
|
||||
duration: float
|
||||
|
||||
# audio
|
||||
sampling_rate: int
|
||||
spectrogram_frame_rate: int
|
||||
latent_downsample_rate: int = 2
|
||||
|
||||
# visual
|
||||
clip_frame_rate: int = 8
|
||||
sync_frame_rate: int = 25
|
||||
sync_num_frames_per_segment: int = 16
|
||||
sync_step_size: int = 8
|
||||
sync_downsample_rate: int = 2
|
||||
|
||||
@property
|
||||
def num_audio_frames(self) -> int:
|
||||
# we need an integer number of latents
|
||||
return self.latent_seq_len * self.spectrogram_frame_rate * self.latent_downsample_rate
|
||||
|
||||
@property
|
||||
def latent_seq_len(self) -> int:
|
||||
return int(
|
||||
math.ceil(self.duration * self.sampling_rate / self.spectrogram_frame_rate /
|
||||
self.latent_downsample_rate))
|
||||
|
||||
@property
|
||||
def clip_seq_len(self) -> int:
|
||||
return int(self.duration * self.clip_frame_rate)
|
||||
|
||||
@property
|
||||
def sync_seq_len(self) -> int:
|
||||
num_frames = self.duration * self.sync_frame_rate
|
||||
num_segments = (num_frames - self.sync_num_frames_per_segment) // self.sync_step_size + 1
|
||||
return int(num_segments * self.sync_num_frames_per_segment / self.sync_downsample_rate)
|
||||
|
||||
|
||||
CONFIG_16K = SequenceConfig(duration=8.0, sampling_rate=16000, spectrogram_frame_rate=256)
|
||||
CONFIG_44K = SequenceConfig(duration=8.0, sampling_rate=44100, spectrogram_frame_rate=512)
|
||||
|
||||
if __name__ == '__main__':
|
||||
assert CONFIG_16K.latent_seq_len == 250
|
||||
assert CONFIG_16K.clip_seq_len == 64
|
||||
assert CONFIG_16K.sync_seq_len == 192
|
||||
assert CONFIG_16K.num_audio_frames == 128000
|
||||
|
||||
assert CONFIG_44K.latent_seq_len == 345
|
||||
assert CONFIG_44K.clip_seq_len == 64
|
||||
assert CONFIG_44K.sync_seq_len == 192
|
||||
assert CONFIG_44K.num_audio_frames == 353280
|
||||
|
||||
print('Passed')
|
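# Added worked example (hedged; not part of the original file), spelling out the 16 kHz
# numbers asserted above for duration = 8.0 s:
#   latent_seq_len   = ceil(8.0 * 16000 / 256 / 2)           = 250
#   num_audio_frames = 250 * 256 * 2                          = 128000
#   clip_seq_len     = int(8.0 * 8)                           = 64
#   sync frames      = 8.0 * 25 = 200; segments = (200 - 16) // 8 + 1 = 24
#   sync_seq_len     = int(24 * 16 / 2)                       = 192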
||||
202
postprocessing/mmaudio/model/transformer_layers.py
Normal file
@ -0,0 +1,202 @@
|
||||
from typing import Optional
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
from einops import rearrange
|
||||
from einops.layers.torch import Rearrange
|
||||
|
||||
from ..ext.rotary_embeddings import apply_rope
|
||||
from ..model.low_level import MLP, ChannelLastConv1d, ConvMLP
|
||||
|
||||
|
||||
def modulate(x: torch.Tensor, shift: torch.Tensor, scale: torch.Tensor):
|
||||
return x * (1 + scale) + shift
|
||||
|
||||
|
||||
def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor):
|
||||
# training will crash without these contiguous calls and the CUDNN limitation
|
||||
# I believe this is related to https://github.com/pytorch/pytorch/issues/133974
|
||||
# unresolved at the time of writing
|
||||
q = q.contiguous()
|
||||
k = k.contiguous()
|
||||
v = v.contiguous()
|
||||
out = F.scaled_dot_product_attention(q, k, v)
|
||||
out = rearrange(out, 'b h n d -> b n (h d)').contiguous()
|
||||
return out
|
||||
|
||||
|
||||
class SelfAttention(nn.Module):
|
||||
|
||||
def __init__(self, dim: int, nheads: int):
|
||||
super().__init__()
|
||||
self.dim = dim
|
||||
self.nheads = nheads
|
||||
|
||||
self.qkv = nn.Linear(dim, dim * 3, bias=True)
|
||||
self.q_norm = nn.RMSNorm(dim // nheads)
|
||||
self.k_norm = nn.RMSNorm(dim // nheads)
|
||||
|
||||
self.split_into_heads = Rearrange('b n (h d j) -> b h n d j',
|
||||
h=nheads,
|
||||
d=dim // nheads,
|
||||
j=3)
|
||||
|
||||
def pre_attention(
|
||||
self, x: torch.Tensor,
|
||||
rot: Optional[torch.Tensor]) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
|
||||
# x: batch_size * n_tokens * n_channels
|
||||
qkv = self.qkv(x)
|
||||
q, k, v = self.split_into_heads(qkv).chunk(3, dim=-1)
|
||||
q = q.squeeze(-1)
|
||||
k = k.squeeze(-1)
|
||||
v = v.squeeze(-1)
|
||||
q = self.q_norm(q)
|
||||
k = self.k_norm(k)
|
||||
|
||||
if rot is not None:
|
||||
q = apply_rope(q, rot)
|
||||
k = apply_rope(k, rot)
|
||||
|
||||
return q, k, v
|
||||
|
||||
def forward(
|
||||
self,
|
||||
x: torch.Tensor, # batch_size * n_tokens * n_channels
|
||||
) -> torch.Tensor:
|
||||
q, k, v = self.pre_attention(x, rot=None)
|
||||
out = attention(q, k, v)
|
||||
return out
|
||||
|
||||
|
||||
class MMDitSingleBlock(nn.Module):
|
||||
|
||||
def __init__(self,
|
||||
dim: int,
|
||||
nhead: int,
|
||||
mlp_ratio: float = 4.0,
|
||||
pre_only: bool = False,
|
||||
kernel_size: int = 7,
|
||||
padding: int = 3):
|
||||
super().__init__()
|
||||
self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
|
||||
self.attn = SelfAttention(dim, nhead)
|
||||
|
||||
self.pre_only = pre_only
|
||||
if pre_only:
|
||||
self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 2 * dim, bias=True))
|
||||
else:
|
||||
if kernel_size == 1:
|
||||
self.linear1 = nn.Linear(dim, dim)
|
||||
else:
|
||||
self.linear1 = ChannelLastConv1d(dim, dim, kernel_size=kernel_size, padding=padding)
|
||||
self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
|
||||
|
||||
if kernel_size == 1:
|
||||
self.ffn = MLP(dim, int(dim * mlp_ratio))
|
||||
else:
|
||||
self.ffn = ConvMLP(dim,
|
||||
int(dim * mlp_ratio),
|
||||
kernel_size=kernel_size,
|
||||
padding=padding)
|
||||
|
||||
self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim, bias=True))
|
||||
|
||||
def pre_attention(self, x: torch.Tensor, c: torch.Tensor, rot: Optional[torch.Tensor]):
|
||||
# x: BS * N * D
|
||||
# cond: BS * D
|
||||
modulation = self.adaLN_modulation(c)
|
||||
if self.pre_only:
|
||||
(shift_msa, scale_msa) = modulation.chunk(2, dim=-1)
|
||||
gate_msa = shift_mlp = scale_mlp = gate_mlp = None
|
||||
else:
|
||||
(shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp,
|
||||
gate_mlp) = modulation.chunk(6, dim=-1)
|
||||
|
||||
x = modulate(self.norm1(x), shift_msa, scale_msa)
|
||||
q, k, v = self.attn.pre_attention(x, rot)
|
||||
return (q, k, v), (gate_msa, shift_mlp, scale_mlp, gate_mlp)
|
||||
|
||||
def post_attention(self, x: torch.Tensor, attn_out: torch.Tensor, c: tuple[torch.Tensor]):
|
||||
if self.pre_only:
|
||||
return x
|
||||
|
||||
(gate_msa, shift_mlp, scale_mlp, gate_mlp) = c
|
||||
x = x + self.linear1(attn_out) * gate_msa
|
||||
r = modulate(self.norm2(x), shift_mlp, scale_mlp)
|
||||
x = x + self.ffn(r) * gate_mlp
|
||||
|
||||
return x
|
||||
|
||||
def forward(self, x: torch.Tensor, cond: torch.Tensor,
|
||||
rot: Optional[torch.Tensor]) -> torch.Tensor:
|
||||
# x: BS * N * D
|
||||
# cond: BS * D
|
||||
x_qkv, x_conditions = self.pre_attention(x, cond, rot)
|
||||
attn_out = attention(*x_qkv)
|
||||
x = self.post_attention(x, attn_out, x_conditions)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
class JointBlock(nn.Module):
|
||||
|
||||
def __init__(self, dim: int, nhead: int, mlp_ratio: float = 4.0, pre_only: bool = False):
|
||||
super().__init__()
|
||||
self.pre_only = pre_only
|
||||
self.latent_block = MMDitSingleBlock(dim,
|
||||
nhead,
|
||||
mlp_ratio,
|
||||
pre_only=False,
|
||||
kernel_size=3,
|
||||
padding=1)
|
||||
self.clip_block = MMDitSingleBlock(dim,
|
||||
nhead,
|
||||
mlp_ratio,
|
||||
pre_only=pre_only,
|
||||
kernel_size=3,
|
||||
padding=1)
|
||||
self.text_block = MMDitSingleBlock(dim, nhead, mlp_ratio, pre_only=pre_only, kernel_size=1)
|
||||
|
||||
def forward(self, latent: torch.Tensor, clip_f: torch.Tensor, text_f: torch.Tensor,
|
||||
global_c: torch.Tensor, extended_c: torch.Tensor, latent_rot: torch.Tensor,
|
||||
clip_rot: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
|
||||
# latent: BS * N1 * D
|
||||
# clip_f: BS * N2 * D
|
||||
# c: BS * (1/N) * D
|
||||
x_qkv, x_mod = self.latent_block.pre_attention(latent, extended_c, latent_rot)
|
||||
c_qkv, c_mod = self.clip_block.pre_attention(clip_f, global_c, clip_rot)
|
||||
t_qkv, t_mod = self.text_block.pre_attention(text_f, global_c, rot=None)
|
||||
|
||||
latent_len = latent.shape[1]
|
||||
clip_len = clip_f.shape[1]
|
||||
text_len = text_f.shape[1]
|
||||
|
||||
joint_qkv = [torch.cat([x_qkv[i], c_qkv[i], t_qkv[i]], dim=2) for i in range(3)]
|
||||
|
||||
attn_out = attention(*joint_qkv)
|
||||
x_attn_out = attn_out[:, :latent_len]
|
||||
c_attn_out = attn_out[:, latent_len:latent_len + clip_len]
|
||||
t_attn_out = attn_out[:, latent_len + clip_len:]
|
||||
|
||||
latent = self.latent_block.post_attention(latent, x_attn_out, x_mod)
|
||||
if not self.pre_only:
|
||||
clip_f = self.clip_block.post_attention(clip_f, c_attn_out, c_mod)
|
||||
text_f = self.text_block.post_attention(text_f, t_attn_out, t_mod)
|
||||
|
||||
return latent, clip_f, text_f
|
||||
|
||||
|
||||
class FinalBlock(nn.Module):
|
||||
|
||||
def __init__(self, dim, out_dim):
|
||||
super().__init__()
|
||||
self.adaLN_modulation = nn.Sequential(nn.SiLU(), nn.Linear(dim, 2 * dim, bias=True))
|
||||
self.norm = nn.LayerNorm(dim, elementwise_affine=False)
|
||||
self.conv = ChannelLastConv1d(dim, out_dim, kernel_size=7, padding=3)
|
||||
|
||||
def forward(self, latent, c):
|
||||
shift, scale = self.adaLN_modulation(c).chunk(2, dim=-1)
|
||||
latent = modulate(self.norm(latent), shift, scale)
|
||||
latent = self.conv(latent)
|
||||
return latent
|
||||
0
postprocessing/mmaudio/model/utils/__init__.py
Normal file
46
postprocessing/mmaudio/model/utils/distributions.py
Normal file
@ -0,0 +1,46 @@
|
||||
from typing import Optional
|
||||
|
||||
import numpy as np
|
||||
import torch
|
||||
|
||||
|
||||
class DiagonalGaussianDistribution:
|
||||
|
||||
def __init__(self, parameters, deterministic=False):
|
||||
self.parameters = parameters
|
||||
self.mean, self.logvar = torch.chunk(parameters, 2, dim=1)
|
||||
self.logvar = torch.clamp(self.logvar, -30.0, 20.0)
|
||||
self.deterministic = deterministic
|
||||
self.std = torch.exp(0.5 * self.logvar)
|
||||
self.var = torch.exp(self.logvar)
|
||||
if self.deterministic:
|
||||
self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device)
|
||||
|
||||
def sample(self, rng: Optional[torch.Generator] = None):
|
||||
# x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device)
|
||||
|
||||
r = torch.empty_like(self.mean).normal_(generator=rng)
|
||||
x = self.mean + self.std * r
|
||||
|
||||
return x
|
||||
|
||||
def kl(self, other=None):
|
||||
if self.deterministic:
|
||||
return torch.Tensor([0.])
|
||||
else:
|
||||
if other is None:
|
||||
|
||||
return 0.5 * torch.pow(self.mean, 2) + self.var - 1.0 - self.logvar
|
||||
else:
|
||||
return 0.5 * (torch.pow(self.mean - other.mean, 2) / other.var +
|
||||
self.var / other.var - 1.0 - self.logvar + other.logvar)
|
||||
|
||||
def nll(self, sample, dims=[1, 2, 3]):
|
||||
if self.deterministic:
|
||||
return torch.Tensor([0.])
|
||||
logtwopi = np.log(2.0 * np.pi)
|
||||
return 0.5 * torch.sum(logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var,
|
||||
dim=dims)
|
||||
|
||||
def mode(self):
|
||||
return self.mean
|
||||
174
postprocessing/mmaudio/model/utils/features_utils.py
Normal file
@ -0,0 +1,174 @@
|
||||
from typing import Literal, Optional
|
||||
import json
|
||||
import open_clip
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
from einops import rearrange
|
||||
from open_clip import create_model_from_pretrained, create_model
|
||||
from torchvision.transforms import Normalize
|
||||
|
||||
from ...ext.autoencoder import AutoEncoderModule
|
||||
from ...ext.mel_converter import get_mel_converter
|
||||
from ...ext.synchformer.synchformer import Synchformer
|
||||
from ...model.utils.distributions import DiagonalGaussianDistribution
|
||||
|
||||
|
||||
def patch_clip(clip_model):
|
||||
# a hack to make it output last hidden states
|
||||
# https://github.com/mlfoundations/open_clip/blob/fc5a37b72d705f760ebbc7915b84729816ed471f/src/open_clip/model.py#L269
|
||||
def new_encode_text(self, text, normalize: bool = False):
|
||||
cast_dtype = self.transformer.get_cast_dtype()
|
||||
|
||||
x = self.token_embedding(text).to(cast_dtype) # [batch_size, n_ctx, d_model]
|
||||
|
||||
x = x + self.positional_embedding.to(cast_dtype)
|
||||
x = self.transformer(x, attn_mask=self.attn_mask)
|
||||
x = self.ln_final(x) # [batch_size, n_ctx, transformer.width]
|
||||
return F.normalize(x, dim=-1) if normalize else x
|
||||
|
||||
clip_model.encode_text = new_encode_text.__get__(clip_model)
|
||||
return clip_model
|
||||
|
||||
def get_model_config(model_name):
|
||||
with open("ckpts/DFN5B-CLIP-ViT-H-14-378/open_clip_config.json", 'r', encoding='utf-8') as f:
|
||||
return json.load(f)["model_cfg"]
|
||||
|
||||
class FeaturesUtils(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
tod_vae_ckpt: Optional[str] = None,
|
||||
bigvgan_vocoder_ckpt: Optional[str] = None,
|
||||
synchformer_ckpt: Optional[str] = None,
|
||||
enable_conditions: bool = True,
|
||||
mode=Literal['16k', '44k'],
|
||||
need_vae_encoder: bool = True,
|
||||
):
|
||||
super().__init__()
|
||||
self.device ="cuda"
|
||||
if enable_conditions:
|
||||
old_get_model_config = open_clip.factory.get_model_config
|
||||
open_clip.factory.get_model_config = get_model_config
|
||||
with open("ckpts/DFN5B-CLIP-ViT-H-14-378/open_clip_config.json", 'r', encoding='utf-8') as f:
|
||||
override_preprocess = json.load(f)["preprocess_cfg"]
|
||||
|
||||
self.clip_model = create_model('DFN5B-CLIP-ViT-H-14-378', pretrained='ckpts/DFN5B-CLIP-ViT-H-14-378/open_clip_pytorch_model.bin', force_preprocess_cfg= override_preprocess)
|
||||
open_clip.factory.get_model_config = old_get_model_config
|
||||
|
||||
# self.clip_model = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14-384', return_transform=False)
|
||||
self.clip_preprocess = Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
|
||||
std=[0.26862954, 0.26130258, 0.27577711])
|
||||
self.clip_model = patch_clip(self.clip_model)
|
||||
|
||||
self.synchformer = Synchformer()
|
||||
self.synchformer.load_state_dict(
|
||||
torch.load(synchformer_ckpt, weights_only=True, map_location='cpu'))
|
||||
|
||||
self.tokenizer = open_clip.get_tokenizer('ViT-H-14-378-quickgelu') # same as 'ViT-H-14'
|
||||
else:
|
||||
self.clip_model = None
|
||||
self.synchformer = None
|
||||
self.tokenizer = None
|
||||
|
||||
if tod_vae_ckpt is not None:
|
||||
self.mel_converter = get_mel_converter(mode)
|
||||
self.tod = AutoEncoderModule(vae_ckpt_path=tod_vae_ckpt,
|
||||
vocoder_ckpt_path=bigvgan_vocoder_ckpt,
|
||||
mode=mode,
|
||||
need_vae_encoder=need_vae_encoder)
|
||||
else:
|
||||
self.tod = None
|
||||
|
||||
def compile(self):
|
||||
if self.clip_model is not None:
|
||||
self.clip_model.encode_image = torch.compile(self.clip_model.encode_image)
|
||||
self.clip_model.encode_text = torch.compile(self.clip_model.encode_text)
|
||||
if self.synchformer is not None:
|
||||
self.synchformer = torch.compile(self.synchformer)
|
||||
self.decode = torch.compile(self.decode)
|
||||
self.vocode = torch.compile(self.vocode)
|
||||
|
||||
def train(self, mode: bool) -> None:
|
||||
return super().train(False)
|
||||
|
||||
@torch.inference_mode()
|
||||
def encode_video_with_clip(self, x: torch.Tensor, batch_size: int = -1) -> torch.Tensor:
|
||||
assert self.clip_model is not None, 'CLIP is not loaded'
|
||||
# x: (B, T, C, H, W) H/W: 384
|
||||
b, t, c, h, w = x.shape
|
||||
assert c == 3 and h == 384 and w == 384
|
||||
x = self.clip_preprocess(x)
|
||||
x = rearrange(x, 'b t c h w -> (b t) c h w')
|
||||
outputs = []
|
||||
if batch_size < 0:
|
||||
batch_size = b * t
|
||||
for i in range(0, b * t, batch_size):
|
||||
outputs.append(self.clip_model.encode_image(x[i:i + batch_size], normalize=True))
|
||||
x = torch.cat(outputs, dim=0)
|
||||
# x = self.clip_model.encode_image(x, normalize=True)
|
||||
x = rearrange(x, '(b t) d -> b t d', b=b)
|
||||
return x
|
||||
|
||||
@torch.inference_mode()
|
||||
def encode_video_with_sync(self, x: torch.Tensor, batch_size: int = -1) -> torch.Tensor:
|
||||
assert self.synchformer is not None, 'Synchformer is not loaded'
|
||||
# x: (B, T, C, H, W) H/W: 384
|
||||
|
||||
b, t, c, h, w = x.shape
|
||||
assert c == 3 and h == 224 and w == 224
|
||||
|
||||
# partition the video
|
||||
segment_size = 16
|
||||
step_size = 8
|
||||
num_segments = (t - segment_size) // step_size + 1
|
||||
segments = []
|
||||
for i in range(num_segments):
|
||||
segments.append(x[:, i * step_size:i * step_size + segment_size])
|
||||
x = torch.stack(segments, dim=1) # (B, S, T, C, H, W)
|
||||
|
||||
outputs = []
|
||||
if batch_size < 0:
|
||||
batch_size = b
|
||||
x = rearrange(x, 'b s t c h w -> (b s) 1 t c h w')
|
||||
for i in range(0, b * num_segments, batch_size):
|
||||
outputs.append(self.synchformer(x[i:i + batch_size]))
|
||||
x = torch.cat(outputs, dim=0)
|
||||
x = rearrange(x, '(b s) 1 t d -> b (s t) d', b=b)
|
||||
return x
|
||||
|
||||
@torch.inference_mode()
|
||||
def encode_text(self, text: list[str]) -> torch.Tensor:
|
||||
assert self.clip_model is not None, 'CLIP is not loaded'
|
||||
assert self.tokenizer is not None, 'Tokenizer is not loaded'
|
||||
# x: (B, L)
|
||||
tokens = self.tokenizer(text).to(self.device)
|
||||
return self.clip_model.encode_text(tokens, normalize=True)
|
||||
|
||||
@torch.inference_mode()
|
||||
def encode_audio(self, x) -> DiagonalGaussianDistribution:
|
||||
assert self.tod is not None, 'VAE is not loaded'
|
||||
# x: (B * L)
|
||||
mel = self.mel_converter(x)
|
||||
dist = self.tod.encode(mel)
|
||||
|
||||
return dist
|
||||
|
||||
@torch.inference_mode()
|
||||
def vocode(self, mel: torch.Tensor) -> torch.Tensor:
|
||||
assert self.tod is not None, 'VAE is not loaded'
|
||||
return self.tod.vocode(mel)
|
||||
|
||||
@torch.inference_mode()
|
||||
def decode(self, z: torch.Tensor) -> torch.Tensor:
|
||||
assert self.tod is not None, 'VAE is not loaded'
|
||||
return self.tod.decode(z.transpose(1, 2))
|
||||
|
||||
# @property
|
||||
# def device(self):
|
||||
# return next(self.parameters()).device
|
||||
|
||||
@property
|
||||
def dtype(self):
|
||||
return next(self.parameters()).dtype
|
||||
72
postprocessing/mmaudio/model/utils/parameter_groups.py
Normal file
@ -0,0 +1,72 @@
|
||||
import logging
|
||||
|
||||
log = logging.getLogger()
|
||||
|
||||
|
||||
def get_parameter_groups(model, cfg, print_log=False):
|
||||
"""
|
||||
Assign different weight decays and learning rates to different parameters.
|
||||
Returns a parameter group which can be passed to the optimizer.
|
||||
"""
|
||||
weight_decay = cfg.weight_decay
|
||||
# embed_weight_decay = cfg.embed_weight_decay
|
||||
# backbone_lr_ratio = cfg.backbone_lr_ratio
|
||||
base_lr = cfg.learning_rate
|
||||
|
||||
backbone_params = []
|
||||
embed_params = []
|
||||
other_params = []
|
||||
|
||||
# embedding_names = ['summary_pos', 'query_init', 'query_emb', 'obj_pe']
|
||||
# embedding_names = [e + '.weight' for e in embedding_names]
|
||||
|
||||
# inspired by detectron2
|
||||
memo = set()
|
||||
for name, param in model.named_parameters():
|
||||
if not param.requires_grad:
|
||||
continue
|
||||
# Avoid duplicating parameters
|
||||
if param in memo:
|
||||
continue
|
||||
memo.add(param)
|
||||
|
||||
if name.startswith('module'):
|
||||
name = name[7:]
|
||||
|
||||
inserted = False
|
||||
# if name.startswith('pixel_encoder.'):
|
||||
# backbone_params.append(param)
|
||||
# inserted = True
|
||||
# if print_log:
|
||||
# log.info(f'{name} counted as a backbone parameter.')
|
||||
# else:
|
||||
# for e in embedding_names:
|
||||
# if name.endswith(e):
|
||||
# embed_params.append(param)
|
||||
# inserted = True
|
||||
# if print_log:
|
||||
# log.info(f'{name} counted as an embedding parameter.')
|
||||
# break
|
||||
|
||||
# if not inserted:
|
||||
other_params.append(param)
|
||||
|
||||
parameter_groups = [
|
||||
# {
|
||||
# 'params': backbone_params,
|
||||
# 'lr': base_lr * backbone_lr_ratio,
|
||||
# 'weight_decay': weight_decay
|
||||
# },
|
||||
# {
|
||||
# 'params': embed_params,
|
||||
# 'lr': base_lr,
|
||||
# 'weight_decay': embed_weight_decay
|
||||
# },
|
||||
{
|
||||
'params': other_params,
|
||||
'lr': base_lr,
|
||||
'weight_decay': weight_decay
|
||||
},
|
||||
]
|
||||
|
||||
return parameter_groups
|
||||
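As a quick illustration of how the returned groups plug into an optimizer, here is a minimal sketch; the SimpleNamespace config and the tiny Linear model are stand-ins for the real Hydra config and MMAudio network, and the import assumes the repository root is on PYTHONPATH:

from types import SimpleNamespace

import torch
import torch.optim as optim

from postprocessing.mmaudio.model.utils.parameter_groups import get_parameter_groups

# hypothetical config: only the two fields read by get_parameter_groups are required
cfg = SimpleNamespace(weight_decay=1e-2, learning_rate=1e-4)
model = torch.nn.Linear(8, 8)  # stand-in for the MMAudio network

groups = get_parameter_groups(model, cfg, print_log=False)
optimizer = optim.AdamW(groups, lr=cfg.learning_rate, weight_decay=cfg.weight_decay)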
12
postprocessing/mmaudio/model/utils/sample_utils.py
Normal file
@@ -0,0 +1,12 @@
from typing import Optional

import torch


def log_normal_sample(x: torch.Tensor,
                      generator: Optional[torch.Generator] = None,
                      m: float = 0.0,
                      s: float = 1.0) -> torch.Tensor:
    bs = x.shape[0]
    s = torch.randn(bs, device=x.device, generator=generator) * s + m
    return torch.sigmoid(s)
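Despite its name, log_normal_sample draws the flow-matching timestep from a logit-normal distribution: z ~ N(m, s²) per batch element, then t = sigmoid(z), so every value lies in (0, 1). A minimal sketch of the call (shapes are illustrative only):

import torch

from postprocessing.mmaudio.model.utils.sample_utils import log_normal_sample

latents = torch.randn(4, 345, 40)       # (batch, seq_len, channels); only batch size and device are used
rng = torch.Generator().manual_seed(0)
t = log_normal_sample(latents, generator=rng, m=0.0, s=1.0)
print(t.shape)                          # torch.Size([4]); every value strictly between 0 and 1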
609
postprocessing/mmaudio/runner.py
Normal file
@@ -0,0 +1,609 @@
"""
runner.py - wrapper and utility functions for network training
Compute loss, back-prop, update parameters, logging, etc.
"""
import os
from pathlib import Path
from typing import Optional, Union

import torch
import torch.distributed
import torch.optim as optim
# from av_bench.evaluate import evaluate
# from av_bench.extract import extract
# from nitrous_ema import PostHocEMA
from omegaconf import DictConfig
from torch.nn.parallel import DistributedDataParallel as DDP

from .model.flow_matching import FlowMatching
from .model.networks import get_my_mmaudio
from .model.sequence_config import CONFIG_16K, CONFIG_44K
from .model.utils.features_utils import FeaturesUtils
from .model.utils.parameter_groups import get_parameter_groups
from .model.utils.sample_utils import log_normal_sample
from .utils.dist_utils import (info_if_rank_zero, local_rank, string_if_rank_zero)
from .utils.log_integrator import Integrator
from .utils.logger import TensorboardLogger
from .utils.time_estimator import PartialTimeEstimator, TimeEstimator
from .utils.video_joiner import VideoJoiner


class Runner:

    def __init__(self,
                 cfg: DictConfig,
                 log: TensorboardLogger,
                 run_path: Union[str, Path],
                 for_training: bool = True,
                 latent_mean: Optional[torch.Tensor] = None,
                 latent_std: Optional[torch.Tensor] = None):
        self.exp_id = cfg.exp_id
        self.use_amp = cfg.amp
        self.enable_grad_scaler = cfg.enable_grad_scaler
        self.for_training = for_training
        self.cfg = cfg

        if cfg.model.endswith('16k'):
            self.seq_cfg = CONFIG_16K
            mode = '16k'
        elif cfg.model.endswith('44k'):
            self.seq_cfg = CONFIG_44K
            mode = '44k'
        else:
            raise ValueError(f'Unknown model: {cfg.model}')

        self.sample_rate = self.seq_cfg.sampling_rate
        self.duration_sec = self.seq_cfg.duration

        # setting up the model
        empty_string_feat = torch.load('./ext_weights/empty_string.pth', weights_only=True)[0]
        self.network = DDP(get_my_mmaudio(cfg.model,
                                          latent_mean=latent_mean,
                                          latent_std=latent_std,
                                          empty_string_feat=empty_string_feat).cuda(),
                           device_ids=[local_rank],
                           broadcast_buffers=False)
        if cfg.compile:
            # NOTE: though train_fn and val_fn are very similar
            # (early on they are implemented as a single function)
            # keeping them separate and compiling them separately are CRUCIAL for high performance
            self.train_fn = torch.compile(self.train_fn)
            self.val_fn = torch.compile(self.val_fn)

        self.fm = FlowMatching(cfg.sampling.min_sigma,
                               inference_mode=cfg.sampling.method,
                               num_steps=cfg.sampling.num_steps)

        # ema profile
        if for_training and cfg.ema.enable and local_rank == 0:
            self.ema = PostHocEMA(self.network.module,
                                  sigma_rels=cfg.ema.sigma_rels,
                                  update_every=cfg.ema.update_every,
                                  checkpoint_every_num_steps=cfg.ema.checkpoint_every,
                                  checkpoint_folder=cfg.ema.checkpoint_folder,
                                  step_size_correction=True).cuda()
            self.ema_start = cfg.ema.start
        else:
            self.ema = None

        self.rng = torch.Generator(device='cuda')
        self.rng.manual_seed(cfg['seed'] + local_rank)

        # setting up feature extractors and VAEs
        if mode == '16k':
            self.features = FeaturesUtils(
                tod_vae_ckpt=cfg['vae_16k_ckpt'],
                bigvgan_vocoder_ckpt=cfg['bigvgan_vocoder_ckpt'],
                synchformer_ckpt=cfg['synchformer_ckpt'],
                enable_conditions=True,
                mode=mode,
                need_vae_encoder=False,
            )
        elif mode == '44k':
            self.features = FeaturesUtils(
                tod_vae_ckpt=cfg['vae_44k_ckpt'],
                synchformer_ckpt=cfg['synchformer_ckpt'],
                enable_conditions=True,
                mode=mode,
                need_vae_encoder=False,
            )
        self.features = self.features.cuda().eval()

        if cfg.compile:
            self.features.compile()

        # hyperparameters
        self.log_normal_sampling_mean = cfg.sampling.mean
        self.log_normal_sampling_scale = cfg.sampling.scale
        self.null_condition_probability = cfg.null_condition_probability
        self.cfg_strength = cfg.cfg_strength

        # setting up logging
        self.log = log
        self.run_path = Path(run_path)
        vgg_cfg = cfg.data.VGGSound
        if for_training:
            self.val_video_joiner = VideoJoiner(vgg_cfg.root, self.run_path / 'val-sampled-videos',
                                                self.sample_rate, self.duration_sec)
        else:
            self.test_video_joiner = VideoJoiner(vgg_cfg.root,
                                                 self.run_path / 'test-sampled-videos',
                                                 self.sample_rate, self.duration_sec)
        string_if_rank_zero(self.log, 'model_size',
                            f'{sum([param.nelement() for param in self.network.parameters()])}')
        string_if_rank_zero(
            self.log, 'number_of_parameters_that_require_gradient: ',
            str(
                sum([
                    param.nelement()
                    for param in filter(lambda p: p.requires_grad, self.network.parameters())
                ])))
        info_if_rank_zero(self.log, 'torch version: ' + torch.__version__)
        self.train_integrator = Integrator(self.log, distributed=True)
        self.val_integrator = Integrator(self.log, distributed=True)

        # setting up optimizer and loss
        if for_training:
            self.enter_train()
            parameter_groups = get_parameter_groups(self.network, cfg, print_log=(local_rank == 0))
            self.optimizer = optim.AdamW(parameter_groups,
                                         lr=cfg['learning_rate'],
                                         weight_decay=cfg['weight_decay'],
                                         betas=[0.9, 0.95],
                                         eps=1e-6 if self.use_amp else 1e-8,
                                         fused=True)
            if self.enable_grad_scaler:
                self.scaler = torch.amp.GradScaler(init_scale=2048)
            self.clip_grad_norm = cfg['clip_grad_norm']

            # linearly warmup learning rate
            linear_warmup_steps = cfg['linear_warmup_steps']

            def warmup(current_step: int):
                return (current_step + 1) / (linear_warmup_steps + 1)

            warmup_scheduler = optim.lr_scheduler.LambdaLR(self.optimizer, lr_lambda=warmup)

            # setting up learning rate scheduler
            if cfg['lr_schedule'] == 'constant':
                next_scheduler = optim.lr_scheduler.LambdaLR(self.optimizer, lr_lambda=lambda _: 1)
            elif cfg['lr_schedule'] == 'poly':
                total_num_iter = cfg['iterations']
                next_scheduler = optim.lr_scheduler.LambdaLR(self.optimizer,
                                                             lr_lambda=lambda x:
                                                             (1 - (x / total_num_iter))**0.9)
            elif cfg['lr_schedule'] == 'step':
                next_scheduler = optim.lr_scheduler.MultiStepLR(self.optimizer,
                                                                cfg['lr_schedule_steps'],
                                                                cfg['lr_schedule_gamma'])
            else:
                raise NotImplementedError

            self.scheduler = optim.lr_scheduler.SequentialLR(self.optimizer,
                                                             [warmup_scheduler, next_scheduler],
                                                             [linear_warmup_steps])

            # Logging info
            self.log_text_interval = cfg['log_text_interval']
            self.log_extra_interval = cfg['log_extra_interval']
            self.save_weights_interval = cfg['save_weights_interval']
            self.save_checkpoint_interval = cfg['save_checkpoint_interval']
            self.save_copy_iterations = cfg['save_copy_iterations']
            self.num_iterations = cfg['num_iterations']
            if cfg['debug']:
                self.log_text_interval = self.log_extra_interval = 1

            # update() is called when we log metrics, within the logger
            self.log.batch_timer = TimeEstimator(self.num_iterations, self.log_text_interval)
            # update() is called every iteration, in this script
            self.log.data_timer = PartialTimeEstimator(self.num_iterations, 1, ema_alpha=0.9)
        else:
            self.enter_val()

    def train_fn(
        self,
        clip_f: torch.Tensor,
        sync_f: torch.Tensor,
        text_f: torch.Tensor,
        a_mean: torch.Tensor,
        a_std: torch.Tensor,
    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
        # sample
        a_randn = torch.empty_like(a_mean).normal_(generator=self.rng)
        x1 = a_mean + a_std * a_randn
        bs = x1.shape[0]  # batch_size * seq_len * num_channels

        # normalize the latents
        x1 = self.network.module.normalize(x1)

        t = log_normal_sample(x1,
                              generator=self.rng,
                              m=self.log_normal_sampling_mean,
                              s=self.log_normal_sampling_scale)
        x0, x1, xt, (clip_f, sync_f, text_f) = self.fm.get_x0_xt_c(x1,
                                                                   t,
                                                                   Cs=[clip_f, sync_f, text_f],
                                                                   generator=self.rng)

        # classifier-free training
        samples = torch.rand(bs, device=x1.device, generator=self.rng)
        null_video = (samples < self.null_condition_probability)
        clip_f[null_video] = self.network.module.empty_clip_feat
        sync_f[null_video] = self.network.module.empty_sync_feat

        samples = torch.rand(bs, device=x1.device, generator=self.rng)
        null_text = (samples < self.null_condition_probability)
        text_f[null_text] = self.network.module.empty_string_feat

        pred_v = self.network(xt, clip_f, sync_f, text_f, t)
        loss = self.fm.loss(pred_v, x0, x1)
        mean_loss = loss.mean()
        return x1, loss, mean_loss, t

    def val_fn(
        self,
        clip_f: torch.Tensor,
        sync_f: torch.Tensor,
        text_f: torch.Tensor,
        x1: torch.Tensor,
    ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        bs = x1.shape[0]  # batch_size * seq_len * num_channels
        # normalize the latents
        x1 = self.network.module.normalize(x1)
        t = log_normal_sample(x1,
                              generator=self.rng,
                              m=self.log_normal_sampling_mean,
                              s=self.log_normal_sampling_scale)
        x0, x1, xt, (clip_f, sync_f, text_f) = self.fm.get_x0_xt_c(x1,
                                                                   t,
                                                                   Cs=[clip_f, sync_f, text_f],
                                                                   generator=self.rng)

        # classifier-free training
        samples = torch.rand(bs, device=x1.device, generator=self.rng)
        # null mask is for when a video is provided but we decided to ignore it
        null_video = (samples < self.null_condition_probability)
        # complete mask is for when a video is not provided or we decided to ignore it
        clip_f[null_video] = self.network.module.empty_clip_feat
        sync_f[null_video] = self.network.module.empty_sync_feat

        samples = torch.rand(bs, device=x1.device, generator=self.rng)
        null_text = (samples < self.null_condition_probability)
        text_f[null_text] = self.network.module.empty_string_feat

        pred_v = self.network(xt, clip_f, sync_f, text_f, t)

        loss = self.fm.loss(pred_v, x0, x1)
        mean_loss = loss.mean()
        return loss, mean_loss, t

    def train_pass(self, data, it: int = 0):

        if not self.for_training:
            raise ValueError('train_pass() should not be called when not training.')

        self.enter_train()
        with torch.amp.autocast('cuda', enabled=self.use_amp, dtype=torch.bfloat16):
            clip_f = data['clip_features'].cuda(non_blocking=True)
            sync_f = data['sync_features'].cuda(non_blocking=True)
            text_f = data['text_features'].cuda(non_blocking=True)
            video_exist = data['video_exist'].cuda(non_blocking=True)
            text_exist = data['text_exist'].cuda(non_blocking=True)
            a_mean = data['a_mean'].cuda(non_blocking=True)
            a_std = data['a_std'].cuda(non_blocking=True)

            # these masks are for non-existent data; masking for CFG training is in train_fn
            clip_f[~video_exist] = self.network.module.empty_clip_feat
            sync_f[~video_exist] = self.network.module.empty_sync_feat
            text_f[~text_exist] = self.network.module.empty_string_feat

            self.log.data_timer.end()
            if it % self.log_extra_interval == 0:
                unmasked_clip_f = clip_f.clone()
                unmasked_sync_f = sync_f.clone()
                unmasked_text_f = text_f.clone()
            x1, loss, mean_loss, t = self.train_fn(clip_f, sync_f, text_f, a_mean, a_std)

            self.train_integrator.add_dict({'loss': mean_loss})

            if it % self.log_text_interval == 0 and it != 0:
                self.train_integrator.add_scalar('lr', self.scheduler.get_last_lr()[0])
                self.train_integrator.add_binned_tensor('binned_loss', loss, t)
                self.train_integrator.finalize('train', it)
                self.train_integrator.reset_except_hooks()

        # Backward pass
        self.optimizer.zero_grad(set_to_none=True)
        if self.enable_grad_scaler:
            self.scaler.scale(mean_loss).backward()
            self.scaler.unscale_(self.optimizer)
            grad_norm = torch.nn.utils.clip_grad_norm_(self.network.parameters(),
                                                       self.clip_grad_norm)
            self.scaler.step(self.optimizer)
            self.scaler.update()
        else:
            mean_loss.backward()
            grad_norm = torch.nn.utils.clip_grad_norm_(self.network.parameters(),
                                                       self.clip_grad_norm)
            self.optimizer.step()

        if self.ema is not None and it >= self.ema_start:
            self.ema.update()
        self.scheduler.step()
        self.integrator.add_scalar('grad_norm', grad_norm)

        self.enter_val()
        with torch.amp.autocast('cuda', enabled=self.use_amp,
                                dtype=torch.bfloat16), torch.inference_mode():
            try:
                if it % self.log_extra_interval == 0:
                    # save GT audio
                    # unnormalize the latents
                    x1 = self.network.module.unnormalize(x1[0:1])
                    mel = self.features.decode(x1)
                    audio = self.features.vocode(mel).cpu()[0]  # 1 * num_samples
                    self.log.log_spectrogram('train', f'spec-gt-r{local_rank}', mel.cpu()[0], it)
                    self.log.log_audio('train',
                                       f'audio-gt-r{local_rank}',
                                       audio,
                                       it,
                                       sample_rate=self.sample_rate)

                    # save audio from sampling
                    x0 = torch.empty_like(x1[0:1]).normal_(generator=self.rng)
                    clip_f = unmasked_clip_f[0:1]
                    sync_f = unmasked_sync_f[0:1]
                    text_f = unmasked_text_f[0:1]
                    conditions = self.network.module.preprocess_conditions(clip_f, sync_f, text_f)
                    empty_conditions = self.network.module.get_empty_conditions(x0.shape[0])
                    cfg_ode_wrapper = lambda t, x: self.network.module.ode_wrapper(
                        t, x, conditions, empty_conditions, self.cfg_strength)
                    x1_hat = self.fm.to_data(cfg_ode_wrapper, x0)
                    x1_hat = self.network.module.unnormalize(x1_hat)
                    mel = self.features.decode(x1_hat)
                    audio = self.features.vocode(mel).cpu()[0]
                    self.log.log_spectrogram('train', f'spec-r{local_rank}', mel.cpu()[0], it)
                    self.log.log_audio('train',
                                       f'audio-r{local_rank}',
                                       audio,
                                       it,
                                       sample_rate=self.sample_rate)
            except Exception as e:
                self.log.warning(f'Error in extra logging: {e}')
                if self.cfg.debug:
                    raise

        # Save network weights and checkpoint if needed
        save_copy = it in self.save_copy_iterations

        if (it % self.save_weights_interval == 0 and it != 0) or save_copy:
            self.save_weights(it)

        if it % self.save_checkpoint_interval == 0 and it != 0:
            self.save_checkpoint(it, save_copy=save_copy)

        self.log.data_timer.start()

    @torch.inference_mode()
    def validation_pass(self, data, it: int = 0):
        self.enter_val()
        with torch.amp.autocast('cuda', enabled=self.use_amp, dtype=torch.bfloat16):
            clip_f = data['clip_features'].cuda(non_blocking=True)
            sync_f = data['sync_features'].cuda(non_blocking=True)
            text_f = data['text_features'].cuda(non_blocking=True)
            video_exist = data['video_exist'].cuda(non_blocking=True)
            text_exist = data['text_exist'].cuda(non_blocking=True)
            a_mean = data['a_mean'].cuda(non_blocking=True)
            a_std = data['a_std'].cuda(non_blocking=True)

            clip_f[~video_exist] = self.network.module.empty_clip_feat
            sync_f[~video_exist] = self.network.module.empty_sync_feat
            text_f[~text_exist] = self.network.module.empty_string_feat
            a_randn = torch.empty_like(a_mean).normal_(generator=self.rng)
            x1 = a_mean + a_std * a_randn

            self.log.data_timer.end()
            loss, mean_loss, t = self.val_fn(clip_f.clone(), sync_f.clone(), text_f.clone(), x1)

            self.val_integrator.add_binned_tensor('binned_loss', loss, t)
            self.val_integrator.add_dict({'loss': mean_loss})

        self.log.data_timer.start()

    @torch.inference_mode()
    def inference_pass(self,
                       data,
                       it: int,
                       data_cfg: DictConfig,
                       *,
                       save_eval: bool = True) -> Path:
        self.enter_val()
        with torch.amp.autocast('cuda', enabled=self.use_amp, dtype=torch.bfloat16):
            clip_f = data['clip_features'].cuda(non_blocking=True)
            sync_f = data['sync_features'].cuda(non_blocking=True)
            text_f = data['text_features'].cuda(non_blocking=True)
            video_exist = data['video_exist'].cuda(non_blocking=True)
            text_exist = data['text_exist'].cuda(non_blocking=True)
            a_mean = data['a_mean'].cuda(non_blocking=True)  # for the shape only

            clip_f[~video_exist] = self.network.module.empty_clip_feat
            sync_f[~video_exist] = self.network.module.empty_sync_feat
            text_f[~text_exist] = self.network.module.empty_string_feat

            # sample
            x0 = torch.empty_like(a_mean).normal_(generator=self.rng)
            conditions = self.network.module.preprocess_conditions(clip_f, sync_f, text_f)
            empty_conditions = self.network.module.get_empty_conditions(x0.shape[0])
            cfg_ode_wrapper = lambda t, x: self.network.module.ode_wrapper(
                t, x, conditions, empty_conditions, self.cfg_strength)
            x1_hat = self.fm.to_data(cfg_ode_wrapper, x0)
            x1_hat = self.network.module.unnormalize(x1_hat)
            mel = self.features.decode(x1_hat)
            audio = self.features.vocode(mel).cpu()
            for i in range(audio.shape[0]):
                video_id = data['id'][i]
                if (not self.for_training) and i == 0:
                    # save very few videos
                    self.test_video_joiner.join(video_id, f'{video_id}', audio[i].transpose(0, 1))

                if data_cfg.output_subdir is not None:
                    # validation
                    if save_eval:
                        iter_naming = f'{it:09d}'
                    else:
                        iter_naming = 'val-cache'
                    audio_dir = self.log.log_audio(iter_naming,
                                                   f'{video_id}',
                                                   audio[i],
                                                   it=None,
                                                   sample_rate=self.sample_rate,
                                                   subdir=Path(data_cfg.output_subdir))
                    if save_eval and i == 0:
                        self.val_video_joiner.join(video_id, f'{iter_naming}-{video_id}',
                                                   audio[i].transpose(0, 1))
                else:
                    # full test set, usually
                    audio_dir = self.log.log_audio(f'{data_cfg.tag}-sampled',
                                                   f'{video_id}',
                                                   audio[i],
                                                   it=None,
                                                   sample_rate=self.sample_rate)

        return Path(audio_dir)

    @torch.inference_mode()
    def eval(self, audio_dir: Path, it: int, data_cfg: DictConfig) -> dict[str, float]:
        with torch.amp.autocast('cuda', enabled=False):
            if local_rank == 0:
                extract(audio_path=audio_dir,
                        output_path=audio_dir / 'cache',
                        device='cuda',
                        batch_size=32,
                        audio_length=8)
                output_metrics = evaluate(gt_audio_cache=Path(data_cfg.gt_cache),
                                          pred_audio_cache=audio_dir / 'cache')
                for k, v in output_metrics.items():
                    # pad k to 10 characters
                    # pad v to 10 decimal places
                    self.log.log_scalar(f'{data_cfg.tag}/{k}', v, it)
                    self.log.info(f'{data_cfg.tag}/{k:<10}: {v:.10f}')
            else:
                output_metrics = None

        return output_metrics

    def save_weights(self, it, save_copy=False):
        if local_rank != 0:
            return

        os.makedirs(self.run_path, exist_ok=True)
        if save_copy:
            model_path = self.run_path / f'{self.exp_id}_{it}.pth'
            torch.save(self.network.module.state_dict(), model_path)
            self.log.info(f'Network weights saved to {model_path}.')

        # if last exists, move it to a shadow copy
        model_path = self.run_path / f'{self.exp_id}_last.pth'
        if model_path.exists():
            shadow_path = model_path.with_name(model_path.name.replace('last', 'shadow'))
            model_path.replace(shadow_path)
            self.log.info(f'Network weights shadowed to {shadow_path}.')

        torch.save(self.network.module.state_dict(), model_path)
        self.log.info(f'Network weights saved to {model_path}.')

    def save_checkpoint(self, it, save_copy=False):
        if local_rank != 0:
            return

        checkpoint = {
            'it': it,
            'weights': self.network.module.state_dict(),
            'optimizer': self.optimizer.state_dict(),
            'scheduler': self.scheduler.state_dict(),
            'ema': self.ema.state_dict() if self.ema is not None else None,
        }

        os.makedirs(self.run_path, exist_ok=True)
        if save_copy:
            model_path = self.run_path / f'{self.exp_id}_ckpt_{it}.pth'
            torch.save(checkpoint, model_path)
            self.log.info(f'Checkpoint saved to {model_path}.')

        # if ckpt_last exists, move it to a shadow copy
        model_path = self.run_path / f'{self.exp_id}_ckpt_last.pth'
        if model_path.exists():
            shadow_path = model_path.with_name(model_path.name.replace('last', 'shadow'))
            model_path.replace(shadow_path)  # moves the file
            self.log.info(f'Checkpoint shadowed to {shadow_path}.')

        torch.save(checkpoint, model_path)
        self.log.info(f'Checkpoint saved to {model_path}.')

    def get_latest_checkpoint_path(self):
        ckpt_path = self.run_path / f'{self.exp_id}_ckpt_last.pth'
        if not ckpt_path.exists():
            info_if_rank_zero(self.log, f'No checkpoint found at {ckpt_path}.')
            return None
        return ckpt_path

    def get_latest_weight_path(self):
        weight_path = self.run_path / f'{self.exp_id}_last.pth'
        if not weight_path.exists():
            self.log.info(f'No weight found at {weight_path}.')
            return None
        return weight_path

    def get_final_ema_weight_path(self):
        weight_path = self.run_path / f'{self.exp_id}_ema_final.pth'
        if not weight_path.exists():
            self.log.info(f'No weight found at {weight_path}.')
            return None
        return weight_path

    def load_checkpoint(self, path):
        # This method loads everything and should be used to resume training
        map_location = 'cuda:%d' % local_rank
        checkpoint = torch.load(path, map_location={'cuda:0': map_location}, weights_only=True)

        it = checkpoint['it']
        weights = checkpoint['weights']
        optimizer = checkpoint['optimizer']
        scheduler = checkpoint['scheduler']
        if self.ema is not None:
            self.ema.load_state_dict(checkpoint['ema'])
            self.log.info(f'EMA states loaded from step {self.ema.step}')

        self.network.module.load_state_dict(weights)
        self.optimizer.load_state_dict(optimizer)
        self.scheduler.load_state_dict(scheduler)

        self.log.info(f'Global iteration {it} loaded.')
        self.log.info('Network weights, optimizer states, and scheduler states loaded.')

        return it

    def load_weights_in_memory(self, src_dict):
        self.network.module.load_weights(src_dict)
        self.log.info('Network weights loaded from memory.')

    def load_weights(self, path):
        # This method loads only the network weight and should be used to load a pretrained model
        map_location = 'cuda:%d' % local_rank
        src_dict = torch.load(path, map_location={'cuda:0': map_location}, weights_only=True)

        self.log.info(f'Importing network weights from {path}...')
        self.load_weights_in_memory(src_dict)

    def weights(self):
        return self.network.module.state_dict()

    def enter_train(self):
        self.integrator = self.train_integrator
        self.network.train()
        return self

    def enter_val(self):
        self.network.eval()
        return self
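A rough sketch of how Runner is meant to be driven during training; everything here (cfg, log, run_dir, loader) is a placeholder for the project's actual Hydra config, TensorboardLogger, and dataloader, so treat it as pseudocode rather than the repo's entry point:

# hypothetical outer loop around Runner.train_pass (not part of the repository)
runner = Runner(cfg, log=log, run_path=run_dir, for_training=True)

it = 0
while it < cfg['num_iterations']:
    for data in loader:                  # each batch carries clip/sync/text features and latent stats
        runner.train_pass(data, it=it)   # forward, loss, backward, scheduler/EMA step, logging
        it += 1
        if it >= cfg['num_iterations']:
            break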
90
postprocessing/mmaudio/sample.py
Normal file
@@ -0,0 +1,90 @@
import json
import logging
import os
import random

import numpy as np
import torch
from hydra.core.hydra_config import HydraConfig
from omegaconf import DictConfig, open_dict
from tqdm import tqdm

from .data.data_setup import setup_test_datasets
from .runner import Runner
from .utils.dist_utils import info_if_rank_zero
from .utils.logger import TensorboardLogger

local_rank = int(os.environ['LOCAL_RANK'])
world_size = int(os.environ['WORLD_SIZE'])


def sample(cfg: DictConfig):
    # initial setup
    num_gpus = world_size
    run_dir = HydraConfig.get().run.dir

    # wrap python logger with a tensorboard logger
    log = TensorboardLogger(cfg.exp_id,
                            run_dir,
                            logging.getLogger(),
                            is_rank0=(local_rank == 0),
                            enable_email=cfg.enable_email and not cfg.debug)

    info_if_rank_zero(log, f'All configuration: {cfg}')
    info_if_rank_zero(log, f'Number of GPUs detected: {num_gpus}')

    # cuda setup
    torch.cuda.set_device(local_rank)
    torch.backends.cudnn.benchmark = cfg.cudnn_benchmark

    # number of dataloader workers
    info_if_rank_zero(log, f'Number of dataloader workers (per GPU): {cfg.num_workers}')

    # Set seeds to ensure the same initialization
    torch.manual_seed(cfg.seed)
    np.random.seed(cfg.seed)
    random.seed(cfg.seed)

    # setting up configurations
    info_if_rank_zero(log, f'Configuration: {cfg}')
    info_if_rank_zero(log, f'Batch size (per GPU): {cfg.batch_size}')

    # construct the trainer
    runner = Runner(cfg, log=log, run_path=run_dir, for_training=False).enter_val()

    # load the last weights if needed
    if cfg['weights'] is not None:
        info_if_rank_zero(log, f'Loading weights from the disk: {cfg["weights"]}')
        runner.load_weights(cfg['weights'])
        cfg['weights'] = None
    else:
        weights = runner.get_final_ema_weight_path()
        if weights is not None:
            info_if_rank_zero(log, f'Automatically finding weight: {weights}')
            runner.load_weights(weights)

    # setup datasets
    dataset, sampler, loader = setup_test_datasets(cfg)
    data_cfg = cfg.data.ExtractedVGG_test
    with open_dict(data_cfg):
        if cfg.output_name is not None:
            # append to the tag
            data_cfg.tag = f'{data_cfg.tag}-{cfg.output_name}'

    # loop
    audio_path = None
    for curr_iter, data in enumerate(tqdm(loader)):
        new_audio_path = runner.inference_pass(data, curr_iter, data_cfg)
        if audio_path is None:
            audio_path = new_audio_path
        else:
            assert audio_path == new_audio_path, 'Different audio path detected'

    info_if_rank_zero(log, f'Inference completed. Audio path: {audio_path}')
    output_metrics = runner.eval(audio_path, curr_iter, data_cfg)

    if local_rank == 0:
        # write the output metrics to run_dir
        output_metrics_path = os.path.join(run_dir, f'{data_cfg.tag}-output_metrics.json')
        with open(output_metrics_path, 'w') as f:
            json.dump(output_metrics, f, indent=4)
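Because sample.py reads LOCAL_RANK and WORLD_SIZE from the environment at import time, it assumes a torchrun-style launcher. For a quick single-process experiment, the same effect can be approximated by setting the variables by hand before importing; a sketch, not an officially supported path:

import os

# pretend to be rank 0 of a single-process job so the module-level
# int(os.environ['LOCAL_RANK']) / int(os.environ['WORLD_SIZE']) lookups do not raise KeyError
os.environ.setdefault('LOCAL_RANK', '0')
os.environ.setdefault('WORLD_SIZE', '1')

from postprocessing.mmaudio.sample import sample  # import only after the env vars are in place
# sample(cfg) can then be called with a Hydra-composed DictConfig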
0
postprocessing/mmaudio/utils/__init__.py
Normal file
17
postprocessing/mmaudio/utils/dist_utils.py
Normal file
@@ -0,0 +1,17 @@
import os
from logging import Logger

from .logger import TensorboardLogger

local_rank = int(os.environ['LOCAL_RANK']) if 'LOCAL_RANK' in os.environ else 0
world_size = int(os.environ['WORLD_SIZE']) if 'WORLD_SIZE' in os.environ else 1


def info_if_rank_zero(logger: Logger, msg: str):
    if local_rank == 0:
        logger.info(msg)


def string_if_rank_zero(logger: TensorboardLogger, tag: str, msg: str):
    if local_rank == 0:
        logger.log_string(tag, msg)
Some files were not shown because too many files have changed in this diff.