
Have any code to process my own datasets? #1

Open
kafei123456 opened this issue Nov 10, 2023 · 6 comments

@kafei123456

No description provided.

@kafei123456 (Author)

Now I have a new question: how much GPU memory is needed? My GPU has 8 GB, and it reports an out-of-memory error.

@zhuomanliu (Member)

> Now I have a new question: how much GPU memory is needed? My GPU has 8 GB, and it reports an out-of-memory error.

Thanks for your attention to our paper. We ran the experiments on a GPU with 24 GB of memory (an RTX 3090). The memory usage depends on the settings of N_rand and N_views; it is suggested to decrease these hyperparameters if GPU memory is limited.
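For intuition, a rough memory model (purely illustrative, not the repository's actual accounting) shows why lowering N_rand or N_views helps: the per-step activation footprint scales roughly linearly with both. The samples_per_ray and feat_dim values below are made-up illustration numbers.

```python
# Back-of-envelope model: activation memory grows roughly linearly in
# N_rand (rays per view) and N_views (reference views). samples_per_ray
# and feat_dim are made-up illustrative values, not the repo's numbers.
def approx_activation_bytes(n_rand, n_views, samples_per_ray=64,
                            feat_dim=256, bytes_per_float=4):
    return n_rand * n_views * samples_per_ray * feat_dim * bytes_per_float

# Halving N_rand roughly halves the footprint; so does halving N_views.
full = approx_activation_bytes(1024, 4)
halved = approx_activation_bytes(512, 4)
```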

@zhuomanliu (Member)

May I ask what type of dataset you own? Is it RGB frames/video or depth scans?

@Zhong2017WHU

> May I ask what type of dataset you own? Is it RGB frames/video or depth scans?

I have several datasets for 3D reconstruction. They are all RGB frames and can be aligned using COLMAP. So I wonder how to transform my own datasets into the format that this code requires.

@JuliusQv

> May I ask what type of dataset you own? Is it RGB frames/video or depth scans?
>
> I have several datasets for 3D reconstruction. They are all RGB frames and can be aligned using COLMAP. So I wonder how to transform my own datasets into the format that this code requires.

Same question.

@zhuomanliu (Member)

> May I ask what type of dataset you own? Is it RGB frames/video or depth scans?
>
> I have several datasets for 3D reconstruction. They are all RGB frames and can be aligned using COLMAP. So I wonder how to transform my own datasets into the format that this code requires.

For the camera settings, we use colmap2nerf.py from Instant-NGP to convert the COLMAP outputs (cameras.txt, images.txt) to transforms.json.
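For reference, an Instant-NGP-style transforms.json produced by colmap2nerf.py looks roughly like the sketch below. Field names follow the Instant-NGP convention; extra intrinsics fields (e.g. fl_x, cx) may also appear depending on the script version, and the values here are placeholders.

```python
import json

# Minimal sketch of the Instant-NGP-style transforms.json layout.
# "transform_matrix" is a 4x4 camera-to-world pose per frame.
transforms = {
    "camera_angle_x": 0.6911,          # horizontal field of view in radians
    "frames": [
        {
            "file_path": "./images/0001.png",
            "transform_matrix": [[1, 0, 0, 0],
                                 [0, 1, 0, 0],
                                 [0, 0, 1, 0],
                                 [0, 0, 0, 1]],
        },
    ],
}

serialized = json.dumps(transforms, indent=2)
```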

As for distance supervision, you can first load the depths obtained by COLMAP (*.geometric.bin) and convert them to distance values using the conversion function convert_d(d, scene_info, out='dist') provided in utils/math.py.
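A minimal reader for COLMAP's *.geometric.bin depth maps, mirroring the read_array helper COLMAP ships in its Python scripts (the format is an ASCII "width&height&channels&" header followed by raw column-major float32 data), might look like this; the resulting array is what you would then feed to the repo's convert_d:

```python
import numpy as np

def read_colmap_depth(path):
    """Read a COLMAP *.geometric.bin depth map.

    The file starts with an ASCII header "width&height&channels&",
    followed by float32 values stored in column-major order.
    """
    with open(path, "rb") as fid:
        header = b""
        delims = 0
        while delims < 3:                 # read up to the third '&'
            byte = fid.read(1)
            header += byte
            if byte == b"&":
                delims += 1
        width, height, channels = (int(x) for x in
                                   header.decode().split("&")[:3])
        data = np.fromfile(fid, np.float32)
    data = data.reshape((width, height, channels), order="F")
    return np.transpose(data, (1, 0, 2)).squeeze()  # (H, W) depth map
```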

As for the hyperparameters sphere_center and radius, they can be computed from the mesh obtained by COLMAP: sphere_center is the center of the mesh, and radius is half the maximum side length of the mesh's bounding box.
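A minimal sketch of that computation, assuming "center of the mesh" means the bounding-box center and taking the mesh vertices as an (N, 3) array (the function name is made up for illustration):

```python
import numpy as np

def sphere_from_mesh(vertices):
    """Compute sphere_center and radius from mesh vertices:
    center of the axis-aligned bounding box, and half its
    longest edge, per the rule described above."""
    vertices = np.asarray(vertices, dtype=np.float64)
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    sphere_center = (vmin + vmax) / 2.0
    radius = (vmax - vmin).max() / 2.0
    return sphere_center, radius
```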

Then you can rewrite the dataloader script data/load_${dataset}.py according to your own dataset.
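As a starting point, a hypothetical loader skeleton (the function name, arguments, and return signature are made up, not the repo's actual interface) could parse transforms.json and derive the focal length from camera_angle_x via the standard pinhole relation:

```python
import json
import numpy as np

def load_custom(basedir, W=800, H=800):
    """Hypothetical loader skeleton for a data/load_*.py-style script:
    read camera poses from an Instant-NGP-style transforms.json."""
    with open(f"{basedir}/transforms.json") as fp:
        meta = json.load(fp)
    # (N, 4, 4) camera-to-world matrices, one per frame
    poses = np.array([fr["transform_matrix"] for fr in meta["frames"]],
                     dtype=np.float32)
    # Standard pinhole relation: focal = 0.5 * W / tan(0.5 * fov_x)
    focal = 0.5 * W / np.tan(0.5 * meta["camera_angle_x"])
    paths = [fr["file_path"] for fr in meta["frames"]]
    # Actual image loading, train/test splits, and distance maps would
    # be filled in here to match the repo's loader interface.
    return paths, poses, (H, W, focal)
```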
