A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
File type: application/pdf
Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos
[article] arXiv pre-print, 2018
Learning to predict scene depth from RGB inputs is a challenging task for both indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion, where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive, and most ubiquitous sensor for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach which produces higher quality […]
arXiv:1811.06152v1
fatcat:zkrl6iv4wbbrroepf5tkpfynzu