B | AI Infrastructure Stream | Case Study
Monday, September 29
03:00 PM - 03:30 PM
Live in Berlin
The perception and classification of moving objects are crucial for autonomous vehicles performing collision avoidance in dynamic environments. LiDARs and cameras greatly enhance scene understanding, but they provide no direct motion information and degrade under adverse weather. Radar sensors remain reliable in these conditions, including rain, fog, and snow, and measure Doppler velocities, delivering direct information about dynamic objects. In our work, we address the problem of perceiving and classifying moving agents in sparse and noisy radar point clouds to enhance scene interpretation for safety-critical tasks. Our approaches build on attention-based methods that incorporate radar-specific information, such as Doppler velocity, to address several tasks: moving object segmentation, moving instance segmentation, semantic segmentation, and moving instance tracking. We optimize the backbone architecture to reduce information loss and incorporate instance and motion cues to improve segmentation quality. We further exploit temporal information within single scans and propose advanced modules for processing sparse and noisy radar data to increase accuracy.
In sum, our approaches achieve superior performance on multiple benchmarks spanning diverse environments, and we provide model-agnostic modules that enhance scene understanding.
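To make the core idea concrete, here is a minimal, hypothetical sketch (not the speaker's actual architecture) of single-head self-attention applied to a radar point cloud in which the Doppler velocity is simply one of the per-point input features; the random weight matrices stand in for learned parameters, and all names are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def radar_self_attention(points, d_model=8, seed=0):
    """Toy single-head self-attention over a radar point cloud.

    points: (N, 4) array of [x, y, z, doppler_velocity] per point.
    Returns per-point features of shape (N, d_model).
    """
    rng = np.random.default_rng(seed)
    n, d_in = points.shape
    # Random projections stand in for learned query/key/value weights.
    Wq = rng.normal(size=(d_in, d_model))
    Wk = rng.normal(size=(d_in, d_model))
    Wv = rng.normal(size=(d_in, d_model))
    q, k, v = points @ Wq, points @ Wk, points @ Wv
    # Each point attends to every other point in the (sparse) scan,
    # so motion cues from the Doppler channel can propagate globally.
    attn = softmax(q @ k.T / np.sqrt(d_model), axis=-1)  # (N, N)
    return attn @ v

# Toy scan: three static points (zero Doppler) and one moving point.
scan = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [2.0, 2.0, 0.0, 0.0],
    [1.5, 0.5, 0.0, 5.0],  # Doppler velocity 5 m/s -> moving object
])
features = radar_self_attention(scan)
print(features.shape)  # (4, 8)
```

Because attention computes pairwise interactions between all points, it is well suited to sparse, irregular radar scans where convolution-style neighborhoods are hard to define; a real system would learn the projection weights and stack such layers inside a segmentation backbone.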
Matthias Zeller is a Ph.D. Candidate at CARIAD SE, Mönsheim, Germany, and the Photogrammetry and Robotics Lab at the University of Bonn, headed by Cyrill Stachniss. He has over 3 years of experience in the automotive industry focusing on deep learning. His current research focuses on radar-based scene understanding for self-driving vehicles.