602 Hits in 5.3 sec

Self-Supervised Deep Pose Corrections for Robust Visual Odometry [article]

Brandon Wagstaff, Valentin Peretroukhin, Jonathan Kelly
2020-02-27 · arXiv · pre-print
We present a self-supervised deep pose correction (DPC) network that applies pose corrections to a visual odometry estimator to improve its accuracy. ... Instead of regressing inter-frame pose changes directly, we build on prior work that uses data-driven learning to regress pose corrections that account for systematic errors due to violations of modelling ... Fig. 1: Our self-supervised deep pose correction (DPC) network regresses a pose correction to a classical VO estimator. ...
arXiv:2002.12339v1 · fatcat:wcqxk2hzazah7euhgnathnhmlm

A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence [article]

Changhao Chen, Bing Wang, Chris Xiaoxuan Lu, Niki Trigoni, Andrew Markham
2020-06-29 · arXiv · pre-print
In this work, we provide a comprehensive survey, and propose a new taxonomy for localization and mapping using deep learning.  ...  their structure for real-world applications.  ...  Similar to unsupervised VO, Visual-inertial odometry can also be solved in a self-supervised fashion using novel view synthesis.  ... 
arXiv:2006.12567v2 · fatcat:snb2byqamfcblauw5lzccb7umy

Deep Learning for Underwater Visual Odometry Estimation

Bernardo Teixeira, Hugo Silva, Anibal Matos, Eduardo Silva
2020 · IEEE Access (IEEE)
... application domains, has prompted a great volume of recent research concerning Deep Learning architectures tailored for visual odometry estimation. ... Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry estimate drift. ... SfMLearner [66] is a solution that established an influential framework for Deep Learning for Visual Odometry research. ...
doi:10.1109/access.2020.2978406 · fatcat:zjjpiqgol5bclksbob6lnrf2lu

Multi-Sensor Fusion Self-Supervised Deep Odometry and Depth Estimation

Yingcai Wan, Qiankun Zhao, Cheng Guo, Chenlong Xu, Lijing Fang
2022-03-02 · Remote Sensing (MDPI)
We first capture dense features and solve the pose by deep visual odometry (DVO), and then combine the pose estimation pipeline with deep inertial odometry (DIO) by the extended Kalman filter (EKF) method  ...  Specifically, we use the strengths of learning-based visual-inertial odometry (VIO) and depth estimation to build an end-to-end self-supervised learning architecture.  ...  to each pixel in the 2D image and accurate pose with DeepVIO, which combines deep visual odometry (DVO) with deep inertial odometry (DIO).  ... 
doi:10.3390/rs14051228 · fatcat:srqcx7oo4fhztjq4qrutqosaau

Stereo Visual Odometry Pose Correction through Unsupervised Deep Learning

Sumin Zhang, Shouyi Lu, Rui He, Zhipeng Bao
2021-07-11 · Sensors (MDPI)
To solve this challenge, we combine the multiview geometry constraints of the classical stereo VO system with the robustness of deep learning to present an unsupervised pose correction network for the  ...  At the heart of VSLAM is visual odometry (VO), which uses continuous images to estimate the camera's ego-motion.  ...  Through training in these challenging scenes, the stereo visual odometry pose correction network has better robustness.  ... 
doi:10.3390/s21144735 · fatcat:zwwh7j4lzrb5xlw65xv4v3flda

D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry [article]

Nan Yang, Lukas von Stumberg, Rui Wang, Daniel Cremers
2020-03-28 · arXiv · pre-print
We propose D3VO as a novel framework for monocular visual odometry that exploits deep networks on three levels -- deep depth, pose and uncertainty estimation.  ...  We first propose a novel self-supervised monocular depth estimation network trained on stereo videos without any external supervision.  ...  Strobl for their constructive comments.  ... 
arXiv:2003.01060v2 · fatcat:ph5rwawodfhdhhx62wnmyqxjee

Self-Supervised 3D Keypoint Learning for Ego-motion Estimation [article]

Jiexiong Tang, Rares Ambrus, Vitor Guizilini, Sudeep Pillai, Hanme Kim, Patric Jensfelt, Adrien Gaidon
2020-11-18 · arXiv · pre-print
We describe how our self-supervised keypoints can be integrated into state-of-the-art visual odometry frameworks for robust and accurate ego-motion estimation of autonomous vehicles in real-world conditions. ... Detecting and matching robust viewpoint-invariant keypoints is critical for visual SLAM and Structure-from-Motion. ... Finally, in our third contribution, we show results comparable to state-of-the-art. Figure 1: Self-Supervised 3D Keypoints for Robust Visual Odometry. ...
arXiv:1912.03426v3 · fatcat:sxfvfyh75beivbiuzvb7prx2gq

Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction [article]

Huangying Zhan, Ravi Garg, Chamara Saroj Weerasekera, Kejie Li, Harsh Agarwal, Ian Reid
2018-04-05 · arXiv · pre-print
competitive results for visual odometry; (ii) deep feature-based warping loss improves upon simple photometric warp loss for both single view depth estimation and visual odometry.  ...  In this paper, we explore the use of stereo sequences for learning depth and visual odometry.  ...  This work was supported by the UoA Scholarship to HZ and KL, the ARC Laureate Fellowship FL130100102 to IR and the Australian Centre of Excellence for Robotic Vision CE140100016.  ... 
arXiv:1803.03893v3 · fatcat:uqdpu4ypafbq3fkieqfpowvhbi

Deep Online Correction for Monocular Visual Odometry [article]

Jiaxin Zhang, Wei Sui, Xinggang Wang, Wenming Meng, Hongmei Zhu, Qian Zhang
2021-03-18 · arXiv · pre-print
In this work, we propose a novel deep online correction (DOC) framework for monocular visual odometry. ... The whole pipeline has two stages: first, depth maps and initial poses are obtained from convolutional neural networks (CNNs) trained in a self-supervised manner. ... Visual Odometry Evaluation: For the KITTI odometry dataset, we take the Pose-CNN from Monodepth2 as our baseline. ...
arXiv:2103.10029v1 · fatcat:ngotxv2mzzbt3ii3vnrecaz3jy

DF-VO: What Should Be Learnt for Visual Odometry? [article]

Huangying Zhan, Chamara Saroj Weerasekera, Jia-Wang Bian, Ravi Garg, Ian Reid
2021-03-01 · arXiv · pre-print
Recent studies show that deep neural networks can learn scene depths and relative camera poses in a self-supervised manner without acquiring ground truth labels. ... Multi-view geometry-based methods have dominated monocular Visual Odometry over the last few decades for their superior performance, while remaining vulnerable to dynamic and low-texture scenes. ... Acknowledgment: This work was supported by the UoA Scholarship to HZ, the ARC Laureate Fellowship FL130100102 to IR and the Australian Centre of Excellence for Robotic Vision CE140100016. ...
arXiv:2103.00933v1 · fatcat:gs4bsysoozelfmv7mdntx3xiba

Deep Learning for Visual SLAM in Transportation Robotics: A review

Chao Duan, Steffen Junginger, Jiahao Huang, Kairong Jin, Kerstin Thurow
2019-12-12 · Transportation Safety and Environment (Oxford University Press)
The outstanding research results of deep learning visual odometry and deep learning loop closure detection are summarized. ... Finally, future development directions of visual SLAM based on deep learning are discussed. ... Recently, both supervised and unsupervised deep learning methods have been applied to visual SLAM problems such as visual odometry [10, 11] and loop closure [12, 13]. ...
doi:10.1093/tse/tdz019 · fatcat:c5tj64xro5ftvcw6qwz7rgrgky

Visual Odometry Revisited: What Should Be Learnt? [article]

Huangying Zhan, Chamara Saroj Weerasekera, Jiawang Bian, Ian Reid
2020-02-18 · arXiv · pre-print
In this work we present a monocular visual odometry (VO) algorithm which leverages geometry-based methods and deep learning.  ...  With the deep predictions, we design a simple but robust frame-to-frame VO algorithm (DF-VO) which outperforms pure deep learning-based and geometry-based methods.  ...  Deep learning for VO: For supervised learning, Agrawal et al.  ... 
arXiv:1909.09803v4 · fatcat:2yf6ozwtmbgj3bcete3ob4qgt4

Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments [article]

Dan Barnes, Will Maddern, Geoffrey Pascoe, Ingmar Posner
2018-03-05 · arXiv · pre-print
We present a self-supervised approach to ignoring "distractors" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. ... At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. ... The number of output channels and filter dimensions are also detailed for each block. Fig. 6: Input data for ephemerality-aware visual odometry. ...
arXiv:1711.06623v2 · fatcat:tg7no3np45dj7nau5cxmlisf2i

Privileged label enhancement with multi-label learning

Wenfang Zhu, Xiuyi Jia, Weiwei Li
2020 · Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI)
VINS-Mono [Qin et al., 2018] fuses preintegrated IMU measurements with visual feature observations to achieve accurate pose estimation. Supervised/Self-Supervised learning methods.  ...  Some self-supervised learning methods were proposed for releasing the pressure of collecting quantities of ground truth labels for supervised learning. Shamwell et al.  ... 
doi:10.24963/ijcai.2020/325 · dblp:conf/ijcai/WeiHHML20 · fatcat:mbkbqui6vndivibwldz5acwes4

Tune your Place Recognition: Self-Supervised Domain Calibration via Robust SLAM [article]

Pierre-Yves Lajoie, Giovanni Beltrame
2022-03-08 · arXiv · pre-print
To this end, we propose a completely self-supervised domain calibration procedure based on robust pose graph estimation from Simultaneous Localization and Mapping (SLAM) as the supervision signal without  ...  Visual place recognition techniques based on deep learning, which have imposed themselves as the state-of-the-art in recent years, do not always generalize well to environments that are visually different  ...  Using a single calibration sequence through a new environment, our proposed self-supervised technique for visual place recognition verifies putative loop closures using recent progress in robust pose graph  ... 
arXiv:2203.04446v1 · fatcat:oiwe5iuj65c6jjtl53sfz2qssm
Showing results 1–15 of 602