363 hits in 7.7 sec

Unsupervised Learning of Depth Estimation and Visual Odometry for Sparse Light Field Cameras

S. Tejaswi Digumarti
<span title="2021-03-21">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We introduce a generalised encoding of sparse LFs that allows unsupervised learning of odometry and depth.  ...  We consider emerging sparse light field (LF) cameras, which capture a subset of the 4D LF function describing the set of light rays passing through a plane.  ...  Our key contributions are: • We generalize unsupervised odometry and depth estimation to operate on sparse 4D LFs; • We introduce an encoding scheme for sparse LFs appropriate to odometry and shape estimation  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.11322v1">arXiv:2103.11322v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/dccr5ipy4nep3gck47egyux6wu">fatcat:dccr5ipy4nep3gck47egyux6wu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210331070944/https://arxiv.org/pdf/2103.11322v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/ca/87/ca87f36b5c0ff56fd0a4d72a407a394e62570509.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.11322v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Deep Learning for Visual SLAM in Transportation Robotics: A review

Chao Duan, Steffen Junginger, Jiahao Huang, Kairong Jin, Kerstin Thurow
<span title="2019-12-12">2019</span> <i title="Oxford University Press (OUP)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/4wonywgrhnfpvgrgh2usfqtgqq" style="color: black;">Transportation Safety and Environment</a> </i> &nbsp;
The outstanding research results of deep learning visual odometry and deep learning loop closure detect are summarized.  ...  In this paper, the latest research progress of deep learning applied to the field of visual SLAM is reviewed.  ...  Recently both supervised deep learning methods and unsupervised methods are applied for visual SLAM problems such as visual odometry [10, 11] and loop closure [12, 13] .  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1093/tse/tdz019">doi:10.1093/tse/tdz019</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/c5tj64xro5ftvcw6qwz7rgrgky">fatcat:c5tj64xro5ftvcw6qwz7rgrgky</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200229031953/https://watermark.silverchair.com/tdz019.pdf?token=AQECAHi208BE49Ooan9kkhW_Ercy7Dm3ZL_9Cf3qfKAc485ysgAAAlQwggJQBgkqhkiG9w0BBwagggJBMIICPQIBADCCAjYGCSqGSIb3DQEHATAeBglghkgBZQMEAS4wEQQM5wygHfhSgKDdjqXiAgEQgIICB1Gd7qXIJ3r5jZR_EzJxpYmCQ77s2pohUXHY2-v99rk8pTr_ujTX-41eVNXVwXJOxrbWrMfqV3W0dz6HYbpwO9SAbbBTWJZaFrCk549AHjrzJ4uT7YiUg2JC5ShHqff9v1csUbKap8rWGTXVa8awhWEZsMIZ9tXHEhvoo7oY6BoqKSfyUae8MjvJhM6i_lkVqhrYdM6IYzPBBEhYSXqDaLpuc0XkQArP2oIAHtvenzeeX3DE9HuAcuIGYSiR3CGtWqKm3BkoL-T9XHdz3uRb5DK2J_mc_G3FuWyC2kstHpPGm5SEDYZqussVw1Dg0FIWx0l1qZBrlHKAwCK1g6bugDlmPcntcNZnJGmVMTvT7lCOilCcekNQlPE2zynFPgGYVi8qi2kvbZcyrCmOjV1cc3DhLNPlfWUBhHtnqzFUpl1Bmm_H1BNgfzMUj7b5IBd0TWj-vAAp5r-77TXFR6fQAUCtNMDjgP9TtjO87vyaO4b6pz25anfDTdxDsLMa_U6NjIt4Q7pMnfwoi4v7OmG2feFU2DDMWsNO9Etnskdiq7w82DrYD1Vb8G3N7dDMY5n3jtQdbj-3slqnOJ2b1IIFrzP8bwYEvXQO7ayT4ZiEAEOTM7-W8HPhuWEaC4mcC_8quibErs8w_4Vup509A6C8rmVc-9Jzt83tjdDosegKhfbsFT32Qbnxyg" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/38/55/38557810527edfa588081b1ee7d5ac79e30a7a4a.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1093/tse/tdz019"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> oup.com </button> </a>

Unsupervised Learning-based Depth Estimation aided Visual SLAM Approach

Mingyang Geng, Suning Shang, Bo Ding, Huaimin Wang, Pengfei Zhang, Lei Zhang
<span title="2019-01-22">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
and add training constraints for the task of monocular depth and camera motion estimation.  ...  Recently, deep learning technologies have achieved great success in the visual SLAM area, which can directly learn high-level features from the visual inputs and improve the estimation accuracy of the  ...  This work was supported by the National Natural Science Foundation of China (grant numbers 61751208, 61502510, and 61773390), the Outstanding Natural Science Foundation of Hunan Province (grant number  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.07288v1">arXiv:1901.07288v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/amnkyh7k3ndjhmnspkfann5msm">fatcat:amnkyh7k3ndjhmnspkfann5msm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200826142329/https://arxiv.org/pdf/1901.07288v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/3f/3e/3f3e3b70f160050742e6635bdb1f10792cfc27ca.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1901.07288v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

DFineNet: Ego-Motion Estimation and Depth Refinement from Sparse, Noisy Depth Input with RGB Guidance

Yilun Zhang, Ty Nguyen, Ian D. Miller, Shreyas S. Shivakumar, Steven Chen, Camillo J. Taylor, Vijay Kumar
<span title="2019-08-14">2019</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We propose an end-to-end learning algorithm that is capable of using sparse, noisy input depth for refinement and depth completion.  ...  Depth estimation is an important capability for autonomous vehicles to understand and reconstruct 3D environments as well as avoid obstacles during the execution.  ...  RGB image (1st) is overlaid with sparse, noisy depth input for visualization pose simultaneously, and argue for jointly estimating pose and depth to improve estimations in both domains.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.06397v4">arXiv:1903.06397v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/hlfq2v2c75ap3dswxvixl4rjqy">fatcat:hlfq2v2c75ap3dswxvixl4rjqy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200911065558/https://arxiv.org/pdf/1903.06397v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/8e/1c/8e1c2ef0816598866e110362583c1c4f570401d3.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1903.06397v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Survey on Deep Learning for Localization and Mapping: Towards the Age of Spatial Machine Intelligence

Changhao Chen, Bing Wang, Chris Xiaoxuan Lu, Niki Trigoni, Andrew Markham
<span title="2020-06-29">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
A wide range of topics are covered, from learning odometry estimation, mapping, to global localization and simultaneous localization and mapping (SLAM).  ...  In this work, we provide a comprehensive survey, and propose a new taxonomy for localization and mapping using deep learning.  ...  Visual Odometry Visual odometry (VO) estimates the ego-motion of a camera, and integrates the relative motion between images into global poses.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.12567v2">arXiv:2006.12567v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/snb2byqamfcblauw5lzccb7umy">fatcat:snb2byqamfcblauw5lzccb7umy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200929081602/https://arxiv.org/pdf/2006.12567v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/9d/d2/9dd202ba65241df4a9ad643b5fe9d369d54d2821.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.12567v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Survey of Simultaneous Localization and Mapping with an Envision in 6G Wireless Networks

Baichuan Huang, Jun Zhao, Jingbin Liu
<span title="2020-02-14">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
For Lidar or visual SLAM, the survey illustrates the basic type and product of sensors, open source system in sort and history, deep learning embedded, the challenge and future.  ...  It's very friendly for new researchers to hold the development of SLAM and learn it very obviously.  ...  GeoNet [196] is a jointly unsupervised learning framework for monocular depth, optical flow and ego-motion estimation from videos.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.05214v4">arXiv:1909.05214v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/itnluvkewfd6fel7x65wdgig3e">fatcat:itnluvkewfd6fel7x65wdgig3e</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200321164709/https://arxiv.org/pdf/1909.05214v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1909.05214v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Deep Learning for Underwater Visual Odometry Estimation

Bernardo Teixeira, Hugo Silva, Anibal Matos, Eduardo Silva
<span title="">2020</span> <i title="Institute of Electrical and Electronics Engineers (IEEE)"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/q7qi7j4ckfac7ehf3mjbso4hne" style="color: black;">IEEE Access</a> </i> &nbsp;
application domains, has prompted a great a volume of recent research concerning Deep Learning architectures tailored for visual odometry estimation.  ...  Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry estimate drift.  ...  SfMLearner [66] is an unsupervised learning pipeline for depth and egomotion estimation.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.2978406">doi:10.1109/access.2020.2978406</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zjjpiqgol5bclksbob6lnrf2lu">fatcat:zjjpiqgol5bclksbob6lnrf2lu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20201108155952/https://ieeexplore.ieee.org/ielx7/6287639/8948470/09024043.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0f/46/0f461a2d2b9ebd9507242714b52b009240c477c0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/access.2020.2978406"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> ieee.com </button> </a>

Towards Better Generalization: Joint Depth-Pose Learning without PoseNet

Wang Zhao, Shaohui Liu, Yezhi Shu, Yong-Jin Liu
<span title="2021-09-03">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
in indoor environments and long-sequence visual odometry application.  ...  depth-pose learning methods under a variety of challenging scenarios, and achieves state-of-the-art results among self-supervised learning-based methods on KITTI Odometry and NYUv2 dataset.  ...  Acknowledgements This work was partially supported by NSFC (61725204, 61521002), BNRist and MOE-Key Laboratory of Pervasive Computing.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.01314v2">arXiv:2004.01314v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/ria3yrzwafcdxoux2y2ctmguke">fatcat:ria3yrzwafcdxoux2y2ctmguke</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210907155654/https://arxiv.org/pdf/2004.01314v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/9c/50/9c50d19707244af585314eb2f1a705f706082005.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2004.01314v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Improving Monocular Visual Odometry Using Learned Depth

Libo Sun, Wei Yin, Enze Xie, Zhengrong Li, Changming Sun, Chunhua Shen
<span title="2022-04-04">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
With a sparse depth map and an RGB image input, the depth estimation module can generate accurate scale-consistent depth for dense mapping.  ...  Monocular visual odometry (VO) is an important task in robotics and computer vision.  ...  ACKNOWLEDGMENTS We thank the editor and the reviewers for their constructive comments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.01268v1">arXiv:2204.01268v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wzjfyftcfvdgpc2i7s2dhbnpc4">fatcat:wzjfyftcfvdgpc2i7s2dhbnpc4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220406205233/https://arxiv.org/pdf/2204.01268v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/00/62/00622d2594af8e033a2bafbd53354e03b2c30ad5.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2204.01268v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Special Issue on Robot Vision

Jana Košecká, Eric Marchand, Peter Corke
<span title="">2015</span> <i title="SAGE Publications"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/uhsvnr5ecvb4die3422lvgaz6q" style="color: black;">The international journal of robotics research</a> </i> &nbsp;
Happily, many techniques have matured over this period and become an integral part of many robotic vision systems, for example visual odometry, visual Simultaneous Localization and Mapping (SLAM), visual  ...  These issues nicely summarize the highlights and progress of the past 12 years of research devoted to the use of visual perception for robotics.  ...  This suggests a promising approach for learning models of semantically meaningful objects in an unsupervised setting, as well as enabling this strategy to attain reliable visual odometry estimates in the  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1177/0278364915574960">doi:10.1177/0278364915574960</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tjdlotredzddbi6mtsn34urkay">fatcat:tjdlotredzddbi6mtsn34urkay</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170924101835/https://hal.inria.fr/hal-01142837/document" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b8/6a/b86a798f5c1be82441b7a5839a9e909965ac42dd.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1177/0278364915574960"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> sagepub.com </button> </a>

VCIP 2020 Index

<span title="2020-12-01">2020</span> <i title="IEEE"> 2020 IEEE International Conference on Visual Communications and Image Processing (VCIP) </i> &nbsp;
Cane with Visual Odometry for Real-tim Indoor Navigation of Blind People Hu, Menghan Wearable Visually Assistive Device for Blind People to Appreciate Real-world Scene and Screen Image Hu, Min-Chun  ...  Audio-Visual Saliency Prediction for Omnidirectional Video with Spatial Audio Deng, Huiping Fast Geometry Estimation for Phase-coding Structured Light Field Dhollande, Nicolas Prediction-Aware  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/vcip49819.2020.9301896">doi:10.1109/vcip49819.2020.9301896</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bdh7cuvstzgrbaztnahjdp5s5y">fatcat:bdh7cuvstzgrbaztnahjdp5s5y</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210105234653/https://ieeexplore.ieee.org/ielx7/9301747/9301748/09301896.pdf?tp=&amp;arnumber=9301896&amp;isnumber=9301748&amp;ref=" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/25/ad/25ad6bc8038c7153c18c5e98fc8bf3e4c6a888fd.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/vcip49819.2020.9301896"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>

SelfVIO: Self-Supervised Deep Monocular Visual-Inertial Odometry and Depth Estimation

Yasin Almalioglu, Mehmet Turan, Alp Eren Sari, Muhamad Risqi U. Saputra, Pedro P. B. de Gusmão, Andrew Markham, Niki Trigoni
<span title="2020-07-23">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In the last decade, numerous supervised deep learning approaches requiring large amounts of labeled data have been proposed for visual-inertial odometry (VIO) and depth map estimation.  ...  The proposed approach is able to perform VIO without the need for IMU intrinsic parameters and/or the extrinsic calibration between the IMU and the camera. estimation and single-view depth recovery network  ...  The proposed unsupervised deep learning approach consists of depth generation, visual odometry, inertial odometry, visual-inertial fusion, spatial transformer, and target discrimination modules.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.09968v2">arXiv:1911.09968v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vxucv3n6mred3p6pnh5w4wrvki">fatcat:vxucv3n6mred3p6pnh5w4wrvki</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200725153601/https://arxiv.org/pdf/1911.09968v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4a/ca/4acad3cad158f5dcd0bd272a035c7e6ee1a234d7.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/1911.09968v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Special issue on Robot Vision

<span title="">2013</span> <i title="SAGE Publications"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/uhsvnr5ecvb4die3422lvgaz6q" style="color: black;">The international journal of robotics research</a> </i> &nbsp;
This suggests a promising approach for learning models of semantically meaningful objects in an unsupervised setting, as well as enabling this strategy to attain reliable visual odometry estimates in the  ...  A visual odometrybased system uses calibrated fish-eye imagery and sparse structured lighting to produce 3D textured surface models of the pipe's internal surface using a sparse bundle adjustment framework  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1177/0278364913484107">doi:10.1177/0278364913484107</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/5ne2ajnlfvf4rj277mzscwn5xm">fatcat:5ne2ajnlfvf4rj277mzscwn5xm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20181103191605/http://eprints.qut.edu.au/84113/7/84113.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/7d/0d/7d0de583577116c8144afcec26e8253c7c76bd48.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1177/0278364913484107"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> sagepub.com </button> </a>

Special Issue on Robot Vision

Kazunori Umeda
<span title="2003-06-20">2003</span> <i title="Fuji Technology Press Ltd."> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/sjxcfqgyfjdozdr45pnobtazxu" style="color: black;">Journal of Robotics and Mechatronics</a> </i> &nbsp;
This indicates the high degree of research activity in this field.  ...  Robot vision is an essential key technology in robotics and mechatronics. The number of studies on robot vision is wide-ranging, and this topic remains a hot vital target.  ...  This suggests a promising approach for learning models of semantically meaningful objects in an unsupervised setting, as well as enabling this strategy to attain reliable visual odometry estimates in the  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.20965/jrm.2003.p0253">doi:10.20965/jrm.2003.p0253</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/u46ey4aqbfb7leiimgnw45ggye">fatcat:u46ey4aqbfb7leiimgnw45ggye</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170924101835/https://hal.inria.fr/hal-01142837/document" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b8/6a/b86a798f5c1be82441b7a5839a9e909965ac42dd.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.20965/jrm.2003.p0253"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> Publisher / doi.org </button> </a>

Deep Visual Teach and Repeat on Path Networks

Tristan Swedish, Ramesh Raskar
<span title="">2018</span> <i title="IEEE"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/ilwxppn4d5hizekyd3ndvy2mii" style="color: black;">2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)</a> </i> &nbsp;
We propose an approach for solving Visual Teach and Repeat tasks for routes that consist of discrete directions along path networks using deep learning.  ...  Our method is efficient for both storing or following paths and enables sharing of visual path specifications between parties without sharing visual data explicitly.  ...  Acknowledgements We would like to thank the reviewers for their helpful comments.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvprw.2018.00203">doi:10.1109/cvprw.2018.00203</a> <a target="_blank" rel="external noopener" href="https://dblp.org/rec/conf/cvpr/SwedishR18.html">dblp:conf/cvpr/SwedishR18</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/oisq2azadfd7ne2lp4ublf5bgq">fatcat:oisq2azadfd7ne2lp4ublf5bgq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200318224212/http://openaccess.thecvf.com/content_cvpr_2018_workshops/papers/w30/Swedish_Deep_Visual_Teach_CVPR_2018_paper.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/45/95/4595619bd2ed7031bdadbce640cbfb0ace5a40f0.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1109/cvprw.2018.00203"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> ieee.com </button> </a>
Showing results 1–15 of 363