dataset #22
Yes, you can follow the instructions to preprocess your monocular video for training. Actually, I tested on a set of monocular data, and the rendering results from the side view look a lot worse. (attachments: freeview.mp4, source.mp4)
Hi @YuelangX, for monocular data, how do you get the camera params (extrinsic, intrinsic)?
You could refer to https://github.com/YuelangX/Multiview-3DMM-Fitting. The camera params are manually specified.
@YuelangX I am trying to use this repo with my own monocular data, not from the NeRSemble dataset. Do you know a way to generate the camera params for the dataset structure required in your repo Multiview-3DMM-Fitting? As in, frame1.jpg - camera1.npz, frame2.jpg - camera2.npz, etc.
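For reference, since the camera params are manually specified rather than estimated, writing one `.npz` per frame can be as simple as saving the same fixed intrinsic/extrinsic matrices under every frame's filename. A minimal sketch — the key names (`intrinsic`, `extrinsic`), the focal length/principal point, and the frame count are assumptions for illustration, not confirmed against Multiview-3DMM-Fitting:

```python
import numpy as np

# Hypothetical fixed camera for a static monocular capture.
# Values below are placeholders; tune them to your footage.
intrinsic = np.array([[1500.0,    0.0, 256.0],   # fx, 0, cx
                      [   0.0, 1500.0, 256.0],   # 0, fy, cy
                      [   0.0,    0.0,   1.0]], dtype=np.float32)

# Identity rotation, zero translation: camera at the world origin.
extrinsic = np.eye(4, dtype=np.float32)[:3]      # 3x4 [R|t]

num_frames = 539  # matches the frame count mentioned later in this thread
for i in range(1, num_frames + 1):
    np.savez(f"camera{i}.npz", intrinsic=intrinsic, extrinsic=extrinsic)
```

Whether the fitting code expects exactly these array names and shapes is worth checking against `preprocess_monocular_video.py` before running a full preprocess.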
Oh, my bad. I thought the camera params were required as input; I misread and thought the repository creates them.
@YuelangX sorry to disturb you again, but I have a question. I am currently training on a monocular video (539 frames, preprocessed with Multiview-3DMM-Fitting) and am at 600 epochs. I used checkpoint 600 to do self-reenactment on my monocular video, and this is the result I get: This does not look like Gaussian Splatting; I would expect to see larger splats everywhere. Problems I know of: the camera params are not correct, since they are manually set in Multiview-3DMM-Fitting/preprocess_monocular_video.py. I also noticed that there are no low-res landmarks in myDataset/mySubject/landmarks/*/ after preprocessing. Do you have an idea what could be wrong here? I know training is not finished, but I would expect a different intermediate result.
@NikoBele1 This seems strange. Are the results during training also like this?
@YuelangX thanks for replying. Tracking looks fine, aside from the strange cropping. Is that probably because of the manual camera parameters I used from Multiview-3DMM-Fitting/preprocess_monocular_video.py?
@NikoBele1 Hi, did you figure out a fix to your issue with a custom character? I have the same issue.
@jryebread What issue do you have? The cropping, or the other weird reenactment?
Can I use RGB video from a monocular camera for training? We look forward to hearing from you, thank you.