
Joint Monocular 3D Vehicle Detection and Tracking

We present a novel framework that jointly detects and tracks 3D vehicle bounding boxes. Our approach leverages 3D pose estimation to learn 2D patch association over time and uses temporal information from tracking to obtain stable 3D estimates.

Joint Monocular 3D Vehicle Detection and Tracking
Hou-Ning Hu, Qi-Zhi Cai, Dequan Wang, Ji Lin, Min Sun, Philipp Krähenbühl, Trevor Darrell, Fisher Yu.
In ICCV, 2019.

Paper Website

Quick start

In this section, you will train a model from scratch, test our pretrained models, and reproduce our evaluation results.

Execution

For running a whole pipeline (training and testing):

# Generate predicted bounding boxes for object proposals

# Step 00 (Optional) - Training on GTA dataset
./run_train.sh

# Step 01 - Generate bounding boxes
./run_test.sh

# For 3D AP evaluation, please follow the step list in [3d-tracking](../3d-tracking/object-ap-eval)
# The bbox results reported by the 3D AP tool correspond to the 2D detection evaluation.
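The 2D detection evaluation above scores predicted bounding boxes by their overlap with ground truth. As an illustration only (this is not the kitti-object-eval-python implementation, just a minimal sketch of the underlying metric), the intersection-over-union of two axis-aligned boxes can be computed as:

```python
def iou_2d(box_a, box_b):
    """Intersection-over-union of two axis-aligned 2D boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    """
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (e.g. 0.7 for cars in the KITTI protocol), and AP is then computed from the resulting precision-recall curve.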

License

Third-party datasets are subject to their respective licenses.

If you use our code/models in your research, please cite our paper:

@inproceedings{Hu3DT19,
author = {Hu, Hou-Ning and Cai, Qi-Zhi and Wang, Dequan and Lin, Ji and Sun, Min and Krähenbühl, Philipp and Darrell, Trevor and Yu, Fisher},
title = {Joint Monocular 3D Vehicle Detection and Tracking},
booktitle = {ICCV},
year = {2019}
}

Acknowledgements

We thank faster-rcnn.pytorch for the detection codebase and kitti-object-eval-python for the 3D AP calculation tool.