About the Workshop
The CVPR 2020 Workshop on Autonomous Driving (WAD) aims to gather researchers and engineers from academia and industry to discuss the latest advances in perception for autonomous driving. In this one-day workshop, we will have regular paper presentations, invited speakers, and technical benchmark challenges to present the current state of the art, as well as the limitations and future directions for computer vision in autonomous driving, arguably the most promising application of computer vision and AI in general. The previous editions of the workshop at CVPR attracted hundreds of researchers. This year, multiple industry sponsors have also joined our organizing efforts to push its success to a new level.
We solicit paper submissions on novel methods and application scenarios of computer vision for autonomous vehicles. We accept papers on a variety of topics, including autonomous navigation and exploration, ADAS, UAVs, deep learning, calibration, and SLAM. Papers will be peer reviewed under a double-blind policy, and the submission deadline is March 20th, 2020. Accepted papers will be presented at the poster session, some as orals, and one paper will receive the best paper award.
We host a challenge to assess the current status of computer vision algorithms in solving the environmental perception problems for autonomous driving. We have prepared a number of large-scale datasets with fine annotation, collected and annotated by the Berkeley Deep Driving Consortium and others. Based on these datasets, we have defined a set of four realistic problems and encourage new algorithms and pipelines to be invented for autonomous driving.
Raquel Urtasun - UofT and Uber
Andreas Geiger - MPI / University of Tübingen
Trevor Darrell - UC Berkeley
Byron Boots - University of Washington
Andreas Wendel - Kodiak Robotics
Emilio Frazzoli - ETH Zürich / Aptiv
Dengxin Dai - ETH Zürich
- Workshop paper submission deadline: March 20th, 2020 (extended from March 14th, 2020)
- Notification to authors: April 15th, 2020 (extended from April 9th, 2020)
- Camera ready deadline: April 19th, 2020
Topics Covered
Topics of the papers include but are not limited to:
- Autonomous navigation and exploration
- Vision-based advanced driving assistance systems, driver monitoring and advanced interfaces
- Vision systems for unmanned aerial and underwater vehicles
- Deep learning, machine learning, and image analysis techniques in vehicle technology
- Performance evaluation of vehicular applications
- On-board calibration of acquisition systems (e.g., cameras, radars, lidars)
- 3D reconstruction and understanding
- Vision-based localization (e.g., place recognition, visual odometry, SLAM)
Presentation Guidelines
All accepted papers will be presented as posters. The guidelines for the posters are the same as at the main conference.
- We solicit short papers on autonomous vehicle topics
- Submitted manuscripts should follow the CVPR 2019 paper template
- The page limit is 8 pages (excluding references)
- We accept dual submissions, but the manuscript must contain substantial original content not submitted to any other conference, workshop or journal
- Submissions will be rejected without review if they:
- contain more than 8 pages (excluding references)
- violate the double-blind policy or the dual-submission policy

The accepted papers will be linked on the workshop webpage and, if the authors agree, also included in the main conference proceedings. Papers will be peer reviewed under a double-blind policy and must be submitted online through the CMT submission system at: https://cmt3.research.microsoft.com/WAD2020
We host challenges to assess the current status of computer vision algorithms in solving the environmental perception problems for autonomous driving. We have prepared a number of large-scale datasets with fine annotation, collected and annotated by Berkeley DeepDrive and Argo AI. Based on these datasets, we have defined a set of realistic problems and encourage new algorithms and pipelines to be invented for autonomous driving.
Challenge 1: Argoverse Motion Forecasting and 3D Tracking Challenge
Our first two challenges are the Argoverse Motion Forecasting and 3D Tracking challenges. Argo AI is offering $5,000 in prizes for the Motion Forecasting and 3D Tracking competitions on Argoverse. See more details on the Motion Forecasting and 3D Tracking leaderboards. The competitions will end on June 10th. Winning methods will be highlighted during the workshop.
Challenge 2: BDD100K Tracking
We are hosting a multi-object tracking (MOT) challenge based on BDD100K, the largest open driving video dataset, as part of the CVPR 2020 Workshop on Autonomous Driving. This is a large-scale tracking challenge under the most diverse driving conditions. Understanding the temporal association of objects within videos is one of the fundamental yet challenging tasks for autonomous driving. The BDD100K MOT dataset provides diverse driving scenarios with complicated occlusion and reappearance patterns, which serve as a great testbed for the reliability of MOT algorithms in real scenes. We provide 2,000 fully annotated 40-second sequences under different weather conditions, times of day, and scene types. We encourage participants from both academia and industry, and the winning teams will be awarded certificates for their achievement. The challenge webpage: https://bdd-data.berkeley.edu/wad-2020.html

Datasets
BDD100K Dataset from Berkeley DeepDrive
The BDD100K dataset is a large collection of 100K driving videos with diverse scene types and weather conditions. Along with the video data, we also released annotations at different levels on 100K keyframes, including image tagging, object detection, instance segmentation, drivable area, and lane marking. In 2018, the challenges hosted at CVPR 2018 and AI Challenger 2018 based on BDD data attracted hundreds of teams competing for the best object recognition and segmentation algorithms for autonomous driving.
Argoverse by Argo AI
Argoverse is the first large-scale self-driving data collection to include HD maps with geometric and semantic metadata, such as lane centerlines, lane direction, and driveable area. All of this detail makes it possible to develop more accurate perception algorithms, which in turn will enable self-driving vehicles to safely navigate complex city streets.

Organizers
Agata Mosinska - RetinAI Medical
Ali Armin - Data61
Caglayan Dicle - nuTonomy
Carlos Becker - Pix4D
Chuong Nguyen - Data61
Eduardo Romera - Universidad de Alcala de Henares
Fatemeh Sadat Saleh - Australian National University (ANU)
George Siogkas - Panasonic Automotive Europe
Helge Rhodin - EPFL
Holger Caesar - nuTonomy
Hsun-Hsien Chang - nuTonomy
Isinsu Katircioglu - EPFL
Joachim Hugonot - EPFL
Kashyap Chitta - MPI-IS and University of Tuebingen
Luis Herranz - Computer Vision Center
Mårten Wadenbäck - Chalmers University of Technology and the University of Gothenburg
Mateusz Kozinski - EPFL
Miaomiao Liu - Australian National University
Mohammad Sadegh Aliakbarian - Australian National University
Pablo Márquez Neila - University of Bern
Prof. Dr. Riad I. Hammoud - BAE Systems
Shaodi You - University of Amsterdam
Shuai Zheng - VAITL
Shuxuan Guo - EPFL
Sina Samangooei - Five AI Ltd.
Sourabh Vora - nuTonomy
Thomas Probst - ETH Zurich
Timo Rehfeld - Mercedes-Benz R&D
Venice Liong - nuTonomy
Victor Constantin - EPFL
Wei Wang - EPFL
Xiangyu Chen - Shanghai Jiao Tong University
Xuming He - ShanghaiTech University
Yinlin Hu - EPFL
Zeeshan Hayder - Data61