CVPR 2020

Workshop on Autonomous Driving

June 19, 2020

The Washington State Convention Center
Seattle, WA

About the Workshop

The CVPR 2020 Workshop on Autonomous Driving (WAD) aims to bring together researchers and engineers from academia and industry to discuss the latest advances in perception for autonomous driving. This one-day workshop will feature regular paper presentations, invited speakers, and technical benchmark challenges that present the current state of the art, as well as the limitations and future directions of computer vision for autonomous driving, arguably the most promising application of computer vision and AI in general. Previous editions of the workshop at CVPR attracted hundreds of researchers, and this year multiple industry sponsors have joined our organizing efforts to push its success to a new level.


Paper Submission

We solicit paper submissions on novel methods and application scenarios of computer vision for autonomous vehicles. We accept papers on a variety of topics, including autonomous navigation and exploration, ADAS, UAVs, deep learning, calibration, and SLAM. Papers will be peer reviewed under a double-blind policy, and the submission deadline is March 20th, 2020. Accepted papers will be presented at the poster session, some as orals, and one paper will receive the best paper award.

Challenge Track

We host a challenge to assess the current status of computer vision algorithms in solving the environmental perception problems for autonomous driving. We have prepared a number of large-scale, finely annotated datasets, collected and annotated by the Berkeley DeepDrive Consortium and others. Based on these datasets, we have defined a set of four realistic problems and encourage new algorithms and pipelines to be invented for autonomous driving.

Call for Papers

Important Dates

  • Workshop paper submission deadline: March 20th, 2020 (extended from March 14th, 2020)
  • Notification to authors: April 15th, 2020 (extended from April 9th, 2020)
  • Camera-ready deadline: April 19th, 2020

Topics Covered

Topics of the papers include but are not limited to:

  • Autonomous navigation and exploration
  • Vision based advanced driving assistance systems, driver monitoring and advanced interfaces
  • Vision systems for unmanned aerial and underwater vehicles
  • Deep Learning, machine learning, and image analysis techniques in vehicle technology
  • Performance evaluation of vehicular applications
  • On-board calibration of acquisition systems (e.g., cameras, radars, lidars)
  • 3D reconstruction and understanding
  • Vision based localization (e.g., place recognition, visual odometry, SLAM)

Presentation Guidelines

All accepted papers will be presented as posters. The guidelines for the posters are the same as at the main conference.

Submission Guidelines

  • We solicit short papers on autonomous vehicle topics
  • Submitted manuscript should follow the CVPR 2019 paper template
  • The page limit is 8 pages (excluding references)
  • We accept dual submissions, but the manuscript must contain substantial original contents not submitted to any other conference, workshop or journal
  • Submissions will be rejected without review if they:
    • contain more than 8 pages (excluding references)
    • violate the double-blind policy or violate the dual-submission policy
  • The accepted papers will be linked at the workshop webpage and also in the main conference proceedings if the authors agree
  • Papers will be peer reviewed under a double-blind policy and must be submitted online through the CMT submission system

We host challenges to assess the current status of computer vision algorithms in solving the environmental perception problems for autonomous driving. We have prepared a number of large-scale, finely annotated datasets, collected and annotated by Berkeley DeepDrive and Argo AI. Based on these datasets, we have defined a set of realistic problems and encourage new algorithms and pipelines to be invented for autonomous driving.



Challenge 1: Argoverse Motion Forecasting and 3D Tracking Challenge

Our first two challenges are the Argoverse Motion Forecasting and 3D Tracking challenges. Argo AI is offering $5,000 in prizes for the Motion Forecasting and 3D Tracking competitions on Argoverse. See more details on the Motion Forecasting and 3D Tracking leaderboards. The competitions will end on June 10th, and winning methods will be highlighted during the workshop.

Challenge 2: BDD100K Tracking

We are hosting a multi-object tracking challenge based on BDD100K, the largest open driving video dataset, as part of the CVPR 2020 Workshop on Autonomous Driving. This is a large-scale tracking challenge under the most diverse driving conditions. Understanding the temporal association of objects within videos is one of the fundamental yet challenging tasks for autonomous driving. The BDD100K MOT dataset provides diverse driving scenarios with complicated occlusion and reappearance patterns, which makes it a great testbed for the reliability of MOT algorithms in real scenes. We provide 2,000 fully annotated 40-second sequences covering different weather conditions, times of day, and scene types. We encourage participants from both academia and industry, and the winning teams will be awarded certificates. Details are available on the challenge webpage.


BDD100K Dataset from Berkeley DeepDrive

The BDD100K dataset is a large collection of 100K driving videos with diverse scene types and weather conditions. Along with the video data, we also released annotations at different levels on 100K keyframes, including image tagging, object detection, instance segmentation, drivable area, and lane marking. The challenges hosted at CVPR 2018 and AI Challenger 2018 based on BDD data attracted hundreds of teams competing to build the best object recognition and segmentation algorithms for autonomous driving.

Argoverse by Argo AI

Argoverse is the first large-scale self-driving data collection to include HD maps with geometric and semantic metadata — such as lane centerlines, lane direction, and driveable area. All of the detail we provide makes it possible to develop more accurate perception algorithms, which in turn will enable self-driving vehicles to safely navigate complex city streets.


General Chairs

Program Chairs


Agata Mosinska - RetinAI Medical
Ali Armin - Data61
Caglayan Dicle - nuTonomy
Carlos Becker - Pix4D
Chuong Nguyen - Data61
Eduardo Romera - Universidad de Alcala de Henares
Fatemeh Sadat Saleh - Australian National University (ANU)
George Siogkas - Panasonic Automotive Europe
Helge Rhodin - EPFL
Holger Caesar - nuTonomy
Hsun-Hsien Chang - nuTonomy
Isinsu Katircioglu - EPFL
Joachim Hugonot - EPFL
Kailun Yang
Kashyap Chitta - MPI-IS and University of Tuebingen
Luis Herranz - Computer Vision Center
Mårten Wadenbäck - Chalmers University of Technology and the University of Gothenburg
Mateusz Kozinski - EPFL

Miaomiao Liu - Australian National University
Mohammad Sadegh Aliakbarian - Australian National University
Pablo Márquez Neila - University of Bern
Prof. Dr. Riad I. Hammoud - BAE Systems
Shaodi You - University of Amsterdam
Shuai Zheng - VAITL
Shuxuan Guo - EPFL
Sina Samangooei - Five AI Ltd.
Sourabh Vora - nuTonomy
Thomas Probst - ETH Zurich
Timo Rehfeld - Mercedes-Benz R&D
Venice Liong - nuTonomy
Victor Constantin - EPFL
Wei Wang - EPFL
Xiangyu Chen - Shanghai Jiao Tong University
Xuming He - ShanghaiTech University
Yinlin Hu - EPFL
Zeeshan Hayder - Data61


Diamond Sponsors