how to do re-enactment with an mp4 video of myself after training? #30
Comments
Hi @jryebread. I provide instructions for extracting 3DMM expression coefficients from a monocular video here: https://github.com/YuelangX/Multiview-3DMM-Fitting. You can refer to it.
@YuelangX Hi, thank you. I set up all the files for Multiview-3DMM-Fitting, but how do I get the params needed for param_files in reenactment.yml? The reenactment script asserts that len(params) == len(images), but your multiview preprocessor only outputs images and cameras, so I am confused about how to get params.npz.
It is also needed for pose_code_path: 'mini_demo_dataset/031/params/0000/params.npz'
@jryebread You need to run the multiview/monocular fitting step (https://github.com/YuelangX/Multiview-3DMM-Fitting?tab=readme-ov-file#multiview-monocular-fitting) to generate the landmarks and params.
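Since the reenactment script asserts len(params) == len(images), it can help to verify the fitting output before rendering. Below is a minimal, hypothetical sketch of such a check; it assumes each array in params.npz is stacked along axis 0 with one entry per frame, and that driving frames are JPEGs in a single directory (neither layout is confirmed by the thread).

```python
# Hypothetical sanity check mirroring the reenactment script's
# len(params) == len(images) assertion. File layout is an assumption.
import glob
import os
import numpy as np

def count_param_frames(params_path):
    """Return the number of frames stored in a params.npz archive.

    Assumes every array in the archive has one leading entry per frame.
    """
    with np.load(params_path) as data:
        lengths = {data[key].shape[0] for key in data.files}
    if len(lengths) != 1:
        raise ValueError(f"inconsistent frame counts across keys: {lengths}")
    return lengths.pop()

def check_reenactment_inputs(params_path, images_dir):
    """Raise if the fitted params and the image sequence disagree in length."""
    n_params = count_param_frames(params_path)
    n_images = len(glob.glob(os.path.join(images_dir, "*.jpg")))
    if n_params != n_images:
        raise ValueError(
            f"{n_params} parameter frames vs {n_images} images; "
            "re-run the fitting on the same frame sequence")
    return n_params
```

If the counts disagree, the usual cause is that frame extraction and fitting were run on different cuts of the video.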
Hi, I'm confused about how to test one of the existing datasets and get a front-facing re-enactment of one of the NeRSemble avatars using an MP4 input video of myself. Can someone guide me through this?
I already trained and ran one of the examples on the mini dataset, but I don't understand how to use my own driving video for re-enactment.
The instructions say "the trained avatar can be reenacted by a sequence of expression coefficients" — what does this mean? How can I input my own MP4 video for reenactment? Is there a script to convert an MP4 video into the required input the model needs?
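To illustrate what "a sequence of expression coefficients" means in practice: the fitting step produces one 3DMM coefficient vector per video frame, and reenactment renders the avatar frame by frame from those vectors. A minimal sketch, assuming the archive stores the coefficients under an "exp" key with shape (num_frames, dim) — the key name and the renderer call are assumptions, not the repo's actual API:

```python
# Illustrative only: iterate per-frame 3DMM expression codes from a
# fitted params.npz. The "exp" key and array layout are assumptions.
import numpy as np

def iter_expression_codes(params_path):
    with np.load(params_path) as data:
        exp = data["exp"]  # assumed shape: (num_frames, dim)
    for frame_idx in range(exp.shape[0]):
        yield frame_idx, exp[frame_idx]

# Hypothetical driving loop:
# for idx, code in iter_expression_codes("params.npz"):
#     frame = avatar.render(code)  # placeholder for the repo's renderer
```

So "using your own MP4" amounts to extracting its frames, fitting 3DMM coefficients to them with the Multiview-3DMM-Fitting repo, and feeding the resulting per-frame vectors to the trained avatar.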