Parametric Dense Visual SLAM

Steven Lovegrove, Andrew Davison
Existing work in the field of monocular Simultaneous Localisation and Mapping (SLAM) has largely centred around sparse feature-based representations of the world. By tracking salient image patches across many frames of video, both the positions of the features and the motion of the camera can be inferred live. Within the visual SLAM community, there has been a focus on both increasing the number of features that can be tracked across an image and efficiently managing and adjusting this map of features in order to improve camera trajectory and feature location accuracy. Although prior research has looked at augmenting this map with more sophisticated features such as edgelets or planar patches, no incremental real-time system has yet made use of every pixel in the image to maximise camera trajectory estimation accuracy.

Moreover, across many practical domains, these feature-based representations of the world fall short. In robotics, sparse feature-based models do not allow a robot to reason about free space and are of limited use for interaction. In augmented reality, sparse models do not allow us to place virtual objects behind real ones and cannot enable virtual characters to interact with real objects.

In this research we show how a dense surface model offers many advantages, and we explore different methods of reasoning about dense surfaces compared with a sparse feature-based map. We continue by developing different methods for dense tracking and constrained dense SLAM in applications such as spherical mosaicing. Finally, we show how live dense tracking can be tightly integrated with dense reconstruction to create a 6 DOF monocular live dense SLAM system which outperforms the current state of the art in many respects.
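To give a flavour of what "using every pixel" means in practice, the sketch below shows a whole-image photometric cost: alignment is found by minimising intensity differences over all overlapping pixels rather than matching a sparse set of features. This is a minimal illustration only, not the thesis's method: the function names are invented, and a 1D integer shift searched by brute force stands in for the full 6 DOF warp that a real dense tracker would minimise with Gauss-Newton iterations.

```python
import numpy as np

def photometric_cost(img_ref, img_cur, shift):
    """Mean squared intensity difference over all overlapping pixels
    for a horizontal integer shift (a stand-in for a 6 DOF warp)."""
    if shift > 0:
        a, b = img_ref[:, shift:], img_cur[:, :-shift]
    elif shift < 0:
        a, b = img_ref[:, :shift], img_cur[:, -shift:]
    else:
        a, b = img_ref, img_cur
    diff = a.astype(np.float64) - b.astype(np.float64)
    return (diff ** 2).mean()

def align(img_ref, img_cur, max_shift=5):
    """Brute-force search over integer shifts; dense SLAM systems
    instead minimise this kind of cost iteratively over a pose."""
    shifts = range(-max_shift, max_shift + 1)
    return min(shifts, key=lambda s: photometric_cost(img_ref, img_cur, s))
```

Because the cost aggregates evidence from every pixel, it remains well-conditioned even where individual patches are textureless, which is the core advantage the abstract claims over sparse feature tracking.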
doi:10.25560/9618