22/11/2021

Semantic-Guided Radar-Vision Fusion for Depth Estimation and Object Detection

Wei-Yu Lee, Ljubomir Jovanov, Wilfried Philips

Keywords: radar-vision fusion, sensor fusion, depth estimation, object detection, semantic segmentation

Abstract: In the last decade, radar has been gaining importance in the perception modules of cars and infrastructure, owing to its robustness against various weather and lighting conditions. Although radar has numerous advantages, the properties of its output signal also make the development of fusion schemes a challenging task. Most prior work does not exploit the full potential of fusion due to the abstraction, sparsity, and low quality of radar data. In this paper, we propose a novel fusion scheme that overcomes this limitation by introducing semantic understanding to assist the fusion process. The sparse radar point cloud and the vision data are transformed into robust and reliable depth maps, which are then fused in a multi-scale detection network to further exploit the complementary information. In our experiments, we evaluate the proposed fusion scheme on both depth estimation and 2D object detection. The quantitative and qualitative results compare favourably to the state of the art and demonstrate the effectiveness of the proposed scheme. Ablation studies further confirm the effectiveness of the proposed components.
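The abstract's pipeline (lifting sparse radar returns to a dense depth map, then fusing with the vision branch) can be sketched in a few lines. This is a minimal illustration, not the authors' method: the tiny grid, the 3x3 dilation standing in for a learned depth-completion network, and the confidence-weighted blend are all simplifying assumptions.

```python
# Hedged sketch of radar-vision depth fusion. All shapes, the dilation
# step, and the confidence weighting are illustrative assumptions, not
# the paper's actual network.
import numpy as np

H, W = 8, 16  # tiny image grid for illustration

# Sparse radar returns as (row, col, depth in metres), already projected
# into the camera frame (the projection itself is omitted here).
radar_points = [(2, 3, 12.0), (5, 10, 34.5), (6, 1, 7.2)]

# Crude densification: dilate each radar return into a 3x3 neighbourhood,
# standing in for a learned completion/refinement module.
dense_radar = np.zeros((H, W))
for r, c, d in radar_points:
    r0, r1 = max(r - 1, 0), min(r + 2, H)
    c0, c1 = max(c - 1, 0), min(c + 2, W)
    dense_radar[r0:r1, c0:c1] = d

# Stand-in for a monocular depth estimate from the vision branch.
vision_depth = np.full((H, W), 20.0)

# Confidence-weighted fusion: trust radar where it has support,
# fall back to the vision estimate elsewhere.
radar_conf = (dense_radar > 0).astype(float)
fused = radar_conf * dense_radar + (1.0 - radar_conf) * vision_depth

print(fused[2, 3])   # radar-supported pixel -> 12.0
print(fused[0, 0])   # vision-only pixel -> 20.0
```

In the paper this fusion is learned and guided by semantic segmentation rather than a fixed binary confidence; the sketch only conveys the complementary roles of the two depth sources.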

The talk and the respective paper were published at the BMVC 2021 virtual conference.
