A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation
[article] · 2022 · arXiv pre-print
Multi-sensor fusion is essential for an accurate and reliable autonomous driving system. Recent approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection throws away the semantic density of camera features, hindering the effectiveness of such methods, especially for semantic-oriented tasks (such as 3D scene segmentation). In this paper, we break this deeply-rooted convention with BEVFusion, an efficient and generic multi-task multi-sensor fusion framework.
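The point-level fusion baseline that the abstract critiques can be illustrated with a short sketch: project each LiDAR point into the camera image and attach the feature vector at the corresponding pixel. The function name, array shapes, and variable names below are hypothetical, and this is only a minimal illustration of the general technique, not BEVFusion's method or any specific paper's implementation.

```python
import numpy as np

def point_level_fusion(points, cam_feats, K, T_lidar2cam):
    """Augment LiDAR points with per-point camera features (illustrative).

    points:      (N, 3) LiDAR xyz coordinates
    cam_feats:   (C, H, W) camera feature map
    K:           (3, 3) camera intrinsic matrix
    T_lidar2cam: (4, 4) LiDAR-to-camera extrinsic transform
    """
    n = points.shape[0]
    # Homogeneous coordinates, then transform into the camera frame.
    homo = np.concatenate([points, np.ones((n, 1))], axis=1)   # (N, 4)
    cam_pts = (T_lidar2cam @ homo.T).T[:, :3]                  # (N, 3)
    in_front = cam_pts[:, 2] > 0                               # visible points only

    # Perspective projection to pixel coordinates.
    uvw = (K @ cam_pts.T).T
    uv = uvw[:, :2] / np.where(uvw[:, 2:3] != 0, uvw[:, 2:3], 1.0)

    # Nearest-neighbor lookup into the feature map, clamped to bounds.
    C, H, W = cam_feats.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    sampled = cam_feats[:, v, u].T                             # (N, C)
    sampled[~in_front] = 0.0                                   # no valid camera view

    # Each point keeps only one feature vector: the projection discards
    # all camera pixels that no LiDAR point lands on.
    return np.concatenate([points, sampled], axis=1)           # (N, 3 + C)
```

Because a LiDAR sweep is far sparser than the image, most camera features are never sampled here, which is the loss of "semantic density" the abstract refers to.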
arXiv:2205.13542v2
fatcat:qtunylgozjcvrdrjzdk23xjpve