In the paper, it is stated that MuseTalk training was performed on 2 NVIDIA H20 GPUs, and that the UNet model was initially trained with L1 and perceptual losses for 200,000 steps. However, the paper does not specify the batch_size or gradient_accumulation_steps, both of which impact training speed. Could you provide the specific values used?
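For context on why both numbers matter: together with the GPU count, they determine the effective batch size consumed per optimizer step, which is what governs how much data 200,000 steps actually covers. A minimal sketch with placeholder values (the real values are exactly what this issue is asking for):

```python
# Hypothetical illustration only -- batch_size_per_gpu and
# gradient_accumulation_steps below are placeholders, NOT the
# paper's actual values.

num_gpus = 2                      # from the paper: 2x NVIDIA H20
batch_size_per_gpu = 8            # placeholder value
gradient_accumulation_steps = 4   # placeholder value

# Samples consumed per optimizer step:
effective_batch_size = num_gpus * batch_size_per_gpu * gradient_accumulation_steps
print(effective_batch_size)  # 64 in this example

# Total samples seen over the reported 200,000 steps:
print(200_000 * effective_batch_size)  # 12,800,000 in this example
```

Different (batch_size, gradient_accumulation) combinations yielding the same effective batch size can still differ substantially in wall-clock speed, which is why the exact split matters for reproducing the training schedule.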