Dense depth perception is critical for autonomous driving and other robotics applications. However, modern LiDAR sensors provide only sparse depth measurements. It is thus necessary to complete the sparse LiDAR data, and a synchronized guidance RGB image is often used to facilitate this completion. Many neural networks have been designed for this task. However, they often naïvely fuse the LiDAR data and RGB image information by performing feature concatenation or element-wise addition.

arXiv:1908.01238v1
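To make the two naive fusion strategies concrete, the following sketch shows what feature concatenation and element-wise addition look like on hypothetical encoder features; the array names, shapes, and random inputs are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

# Hypothetical feature maps from two encoder branches (channels-first: C x H x W).
# Shapes and values are placeholders for illustration only.
rgb_feat = np.random.rand(16, 8, 8)    # features from the guidance RGB image
lidar_feat = np.random.rand(16, 8, 8)  # features from the sparse LiDAR depth

# Naive fusion 1: channel-wise concatenation (doubles the channel count).
fused_concat = np.concatenate([rgb_feat, lidar_feat], axis=0)  # shape (32, 8, 8)

# Naive fusion 2: element-wise addition (keeps the channel count,
# but requires both branches to have identical shapes).
fused_add = rgb_feat + lidar_feat  # shape (16, 8, 8)
```

Both operations treat the two modalities uniformly at every spatial location, which is the limitation the abstract points to: neither mechanism lets the network weight RGB guidance differently where LiDAR measurements are missing or unreliable.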