Planning-oriented Autonomous Driving

[CVPR 2023 Best Paper]

[Demo video: UniAD.mp4]

[Teaser figure]

Table of Contents:

  1. Highlights
  2. News
  3. Getting Started
  4. Results and Models
  5. TODO List
  6. License
  7. Citation

Highlights

  • 🚘 Planning-oriented philosophy: UniAD is a Unified Autonomous Driving algorithm framework built around a planning-oriented philosophy. Instead of a standalone modular design or plain multi-task learning, we cast a series of tasks, spanning perception, prediction and planning, into a hierarchical pipeline oriented toward planning.
  • 🏆 SOTA performance: All tasks within UniAD achieve SOTA performance, especially on prediction and planning (motion: 0.71m minADE, occ: 63.4% IoU, planning: 0.31% avg.Col.).

News

  • Paper Title Change: To avoid confusion with "goal-point" navigation in robotics, we changed the title from "Goal-oriented" to "Planning-oriented", as suggested by the reviewers. Thank you!

  • [2023/06/12] Bugfix [Ref: OpenDriveLab#21]: Previously, the performance of the stage1 model (track_map) could not be reproduced when training from scratch, because loss_past_traj was mistakenly added and img_neck and BN were frozen. With loss_past_traj removed and img_neck and BN unfrozen during training, the reported results can be reproduced (AMOTA: 0.393, stage1_train_log). A config-level sketch of the fix appears after this list.

  • [2023/04/18] New feature: You can replace BEVFormer with other BEV encoding methods, e.g., LSS, as long as you provide bev_embed and bev_pos in track_train and track_inference. Make sure your BEV features have the same shape as ours. A minimal encoder sketch appears after this list.

  • [2023/04/18] Base-model checkpoints are released.

  • [2023/03/29] Code & model initial release v1.0

  • [2023/03/21] 🚀🚀 UniAD is accepted by CVPR 2023, as an Award Candidate (12 out of 2360 accepted papers)!

  • [2022/12/21] UniAD paper is available on arXiv.
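
For the 2023/06/12 bugfix above, the change amounts to something like the following mmdet-style config fragment. This is a hedged sketch: the field names (freeze_img_neck, freeze_bn, loss_past_traj_weight) are assumptions for illustration; consult OpenDriveLab#21 and the released stage-1 config for the exact keys.

```python
# Illustrative config fragment (field names are assumptions, not
# guaranteed UniAD keys): keep img_neck and BatchNorm trainable and
# disable the mistakenly added past-trajectory loss in stage-1 training.
model = dict(
    freeze_img_neck=False,   # previously frozen by mistake
    freeze_bn=False,         # keep BatchNorm layers updating
    pts_bbox_head=dict(
        loss_past_traj_weight=0.0,  # effectively removes loss_past_traj
    ),
)
```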

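And for the 2023/04/18 BEV-encoder swap, here is a minimal sketch of a drop-in replacement. Everything in it (class name, tensor layouts, default sizes) is an illustrative assumption; the only real contract is returning bev_embed and bev_pos with the same shapes BEVFormer produces for the UniAD track head.

```python
import torch
import torch.nn as nn

class MyBEVEncoder(nn.Module):
    """Hypothetical LSS-style stand-in for BEVFormer (illustrative only)."""

    def __init__(self, bev_h=200, bev_w=200, embed_dims=256):
        super().__init__()
        self.bev_h, self.bev_w = bev_h, bev_w
        # Learnable positional encoding over the BEV grid (an assumption;
        # use whatever positional scheme your encoder defines).
        self.bev_pos = nn.Parameter(torch.randn(1, embed_dims, bev_h, bev_w))
        self.proj = nn.Linear(embed_dims, embed_dims)

    def forward(self, img_feats):
        # img_feats: (batch, embed_dims, bev_h, bev_w) after your own
        # image-to-BEV view transform (e.g., LSS lift-splat).
        b = img_feats.shape[0]
        # Flatten to (bev_h * bev_w, batch, embed_dims); match this layout
        # to whatever UniAD's track_train / track_inference expect.
        bev_embed = self.proj(img_feats.flatten(2).permute(2, 0, 1))
        bev_pos = self.bev_pos.expand(b, -1, -1, -1)
        return bev_embed, bev_pos
```
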
Getting Started

Results and Pre-trained Models

UniAD is trained in two stages. Pre-trained checkpoints for both stages will be released, and the results of each model are listed in the tables below.

Stage1: Perception training

We first train the perception modules (i.e., track and map) to obtain a stable weight initialization for the next stage. BEV features are aggregated over 5 frames (queue_length = 5).

| Method | Encoder | Tracking AMOTA | Mapping IoU-lane | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: |
| UniAD-B | R101 | 0.390 | 0.297 | base-stage1 | base-stage1 |

Stage2: End-to-end training

We optimize all task modules together, including track, map, motion, occupancy and planning. BEV features are aggregated over 3 frames (queue_length = 3).

| Method | Encoder | Tracking AMOTA | Mapping IoU-lane | Motion minADE | Occupancy IoU-n. | Planning avg.Col. | Config | Download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| UniAD-B | R101 | 0.358 | 0.317 | 0.709 | 64.1 | 0.25 | base-stage2 | base-stage2 |

Checkpoint Usage

  • Download the checkpoints you need into the UniAD/ckpts/ directory.
  • You can evaluate these checkpoints to reproduce the results, following the evaluation section in TRAIN_EVAL.md.
  • You can also initialize your own model with the released weights: change the load_from field to path/of/ckpt in the config (see the sketch below) and follow the train section in TRAIN_EVAL.md to start training.
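
For example, warm-starting from a downloaded checkpoint is a one-line config change. A minimal sketch, assuming a placeholder filename rather than any specific released checkpoint:

```python
# In your config file: initialize model weights from the downloaded
# checkpoint before training. The path below is a placeholder.
load_from = 'ckpts/your_downloaded_checkpoint.pth'
```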

Model Structure

The overall pipeline of UniAD is controlled by uniad_e2e.py, which coordinates all the task modules under UniAD/projects/mmdet3d_plugin/uniad/dense_heads. If you are interested in the implementation of a specific task module, please refer to its corresponding file, e.g., motion_head.
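
As a rough illustration of that coordination, the sketch below mirrors the hierarchical control flow (perception, then prediction, then planning). It is not the actual uniad_e2e.py code; the function and argument names are invented for clarity.

```python
def uniad_forward(imgs, bev_encoder, track_head, map_head,
                  motion_head, occ_head, planning_head):
    """Minimal sketch of UniAD's planning-oriented pipeline (illustrative)."""
    bev_embed, bev_pos = bev_encoder(imgs)                    # shared BEV features
    track_out = track_head(bev_embed, bev_pos)                # agent tracking queries
    map_out = map_head(bev_embed)                             # online map queries
    motion_out = motion_head(bev_embed, track_out, map_out)   # per-agent motion forecasts
    occ_out = occ_head(bev_embed, motion_out)                 # future occupancy
    return planning_head(bev_embed, motion_out, occ_out)      # ego plan
```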

TODO List

  • All configs & checkpoints [Soon]
  • Upgrade the implementation of MapFormer from Panoptic SegFormer to TopoNet, which features the vectorized map representations and topology reasoning.
  • Support larger batch size
  • [Long-term] Improve flexibility for future extensions
  • Fix bug: Unable to reproduce the results of stage1 track-map model when training from scratch. [Ref: OpenDriveLab#21]
  • Visualization codes
  • Separating BEV encoder and tracking module
  • Base-model configs & checkpoints
  • Code initialization

License

All assets and code are under the Apache 2.0 license unless specified otherwise.

Citation

If UniAD helps your research, please consider citing our paper with the following BibTeX:

@inproceedings{hu2023_uniad,
 title={Planning-oriented Autonomous Driving}, 
 author={Yihan Hu and Jiazhi Yang and Li Chen and Keyu Li and Chonghao Sima and Xizhou Zhu and Siqi Chai and Senyao Du and Tianwei Lin and Wenhai Wang and Lewei Lu and Xiaosong Jia and Qiang Liu and Jifeng Dai and Yu Qiao and Hongyang Li},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
 year={2023},
}

Related resources

Awesome
