Leveraging additional information from nuscenes #38
you are free to do so
you can also consider other modalities as long as they're provided by the nuScenes dataset
So the full nuScenes dataset will be provided at challenge/llama_adapter_v2_multimodal7B/data/nuscene, right? And I guess the annotations, such as the map expansion, lidar seg... will not be available.
No, we won't provide the "full" nuScenes dataset here; you can refer to BEVFormer or other repos to download the full dataset. And no, those annotations (bbox, map, lidar seg) must not be used during inference.
Do you mean that you will not provide the full nuScenes dataset in this repo, but on the test server we can somehow access the full nuScenes dataset (without any annotations)?
Not sure what you mean by "on the test server we can somehow access the full nuscenes dataset". We do not restrict the input modalities or the number of history frames for model inference, but we don't allow using any human-labelled annotations or nuScenes-provided ground-truth annotations (including but not limited to bbox, map, and lidar seg).
Thanks!
Is this the opposite of the Specific Rules on the Hugging Face homepage?
There's a typo there; we'll fix it soon.
Hi there, we're considering whether the agent could benefit from more than just keyframe images to generate the right answers. Is it possible to use extra information from nuScenes in this task, such as continuous frame images or radar points, which could be used to obtain more precise velocity information?
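For what it's worth, here is a minimal sketch of the kind of velocity estimation the question describes: fitting a line through an agent's positions across consecutive frames. The positions and timestamps below are illustrative, not real nuScenes data, and `estimate_velocity` is a hypothetical helper, not part of the challenge code or the nuscenes-devkit.

```python
# Hypothetical sketch: estimating an agent's velocity from positions observed
# in consecutive frames (e.g. nuScenes keyframes at 2 Hz, or denser sweeps).
# The data below is made up for illustration.
import numpy as np

def estimate_velocity(positions, timestamps):
    """Least-squares linear fit of position over time -> velocity (m/s).

    positions:  (N, 2) array of x, y in metres
    timestamps: (N,) array of seconds
    """
    positions = np.asarray(positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    t = t - t[0]                      # shift origin for conditioning
    A = np.stack([t, np.ones_like(t)], axis=1)
    # Solve A @ [v, p0] ~= positions for each coordinate; the slope is velocity
    coef, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coef[0]

# Example: an agent moving at 5 m/s along x, sampled at the 2 Hz keyframe rate
pos = [(0.0, 0.0), (2.5, 0.0), (5.0, 0.0)]
ts = [0.0, 0.5, 1.0]
print(estimate_velocity(pos, ts))     # ~[5. 0.]
```

Using more than two frames makes the estimate less sensitive to per-frame localisation noise than a simple two-point difference; radar returns could serve a similar purpose via their Doppler velocities.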