Filters

Accurate 3D maps from depth images and motion sensors via nonlinear Kalman filtering [article]

Thibault Hervier, Silvère Bonnabel, François Goulette
2012-05-16 · arXiv · pre-print
This paper investigates the use of depth images as localisation sensors for 3D map building.  ...  Experiments with a Kinect sensor and a three-axis gyroscope show a clear improvement in the accuracy of the localisation, and thus in the accuracy of the built 3D map.  ...  In order to improve the localisation, and thus the accuracy of the final 3D maps, we propose to combine the information from successive depth images and motion sensors.  ...  (a simplified scalar fusion sketch follows this entry)
arXiv:1205.3727v1 · fatcat:zekrphxhcrfsnhpknkujfpb3d4
Fulltext PDF: https://archive.org/download/arxiv-1205.3727/1205.3727.pdf
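
To make the fusion pattern in the entry above concrete, here is a deliberately simplified, hypothetical sketch: a scalar heading filter in which an integrated gyroscope rate drives the prediction and a heading estimate obtained by registering consecutive depth images (e.g. via ICP) drives the correction. The paper itself works with the full 3D pose and a nonlinear filter; every symbol and number below is illustrative only.

```python
import numpy as np

# Hypothetical 1-DOF illustration of the idea above: a gyroscope drives the
# prediction step of a Kalman filter and an accumulated heading derived from
# registering consecutive depth images (e.g. ICP) drives the update.
# The actual paper estimates the full 3D pose with a nonlinear filter;
# this scalar sketch only shows the predict/update pattern.

def predict(theta, P, omega, dt, q_gyro):
    """Propagate the heading with the measured angular rate omega."""
    theta = theta + omega * dt          # integrate the gyro
    P = P + q_gyro * dt                 # process noise grows the uncertainty
    return theta, P

def update(theta, P, theta_icp, r_icp):
    """Correct the heading with an ICP-derived heading measurement."""
    K = P / (P + r_icp)                 # scalar Kalman gain
    theta = theta + K * (theta_icp - theta)
    P = (1.0 - K) * P
    return theta, P

theta, P = 0.0, 1e-2
for omega, theta_icp in [(0.10, 0.011), (0.12, 0.023), (0.09, 0.031)]:
    theta, P = predict(theta, P, omega, dt=0.1, q_gyro=1e-4)
    theta, P = update(theta, P, theta_icp, r_icp=1e-3)
print(theta, P)
```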

An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation

Yuanhao Qu, Minghao Yang, Jiaqing Zhang, Wu Xie, Baohua Qiang, Jinlong Chen
2021-02-25 · Sensors (MDPI)
the mainstream technologies of multi-sensor fusion methods, including various combinations of sensors and several widely recognized multi-modal sensor datasets.  ...  This work summarizes the multi-sensor fusion methods for mobile agents' navigation by: (1) analyzing and comparing the advantages and disadvantages of a single sensor in the task of navigation; (2) introducing  ...  the measurement vector, f(·) is the nonlinear mapping equation from the previous state to the current state, h(·) is the nonlinear mapping equation between state and measurement, Q is the covariance  ...  (a minimal EKF built from these pieces is sketched after this entry)
doi:10.3390/s21051605 · pmid:33668886 · pmcid:PMC7956205 · fatcat:7twh225phbbupjd4fv2fw6twum
Fulltext PDF: https://web.archive.org/web/20210301143816/https://res.mdpi.com/d_attachment/sensors/sensors-21-01605/article_deploy/sensors-21-01605-v2.pdf
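
The fragment above lists the standard ingredients of the nonlinear state-space model behind such filters: a transition function f(·), a measurement function h(·), and the noise covariances. A minimal, generic extended-Kalman-filter step built from exactly those pieces could look as follows (textbook form with R as the measurement-noise covariance; this is an illustrative sketch, not code from the survey):

```python
import numpy as np

# Minimal, generic EKF step for the model quoted above:
#   x_k = f(x_{k-1}) + w,  w ~ N(0, Q)   (nonlinear state transition)
#   z_k = h(x_k)     + v,  v ~ N(0, R)   (nonlinear measurement model)
# F_jac and H_jac return the Jacobians of f and h at the linearization point.

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    # predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # update
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy usage: range-only measurement of a static 2D position (a nonlinear h)
f = lambda x: x                                    # static state
h = lambda x: np.array([np.hypot(x[0], x[1])])     # range to the origin
F_jac = lambda x: np.eye(2)
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
x, P = np.array([3.0, 4.0]), np.eye(2)
x, P = ekf_step(x, P, np.array([5.2]), f, h, F_jac, H_jac,
                Q=1e-3 * np.eye(2), R=np.array([[1e-2]]))
```

The linearization through the Jacobians F and H is what separates the EKF from the linear Kalman filter, and it is also the source of the accuracy and divergence issues that several entries in this listing discuss.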

Simultaneous Localization and Mapping Algorithm Based on the Asynchronous Fusion of Laser and Vision Sensors

Kexin Xing, Xingsheng Zhang, Yegui Lin, Wenqi Ci, Wei Dong
2022-05-24 · Frontiers in Neurorobotics (Frontiers Media SA)
In this paper, a simultaneous localization and mapping algorithm based on the weighted asynchronous fusion of laser and vision sensors is proposed for an assistant robot.  ...  When compared to the synchronous fusion method, the asynchronous fusion algorithm has a more accurate prior, faster operation speed, higher pose-estimation frequency, and higher positioning accuracy  ...  As shown in Figure 3D, the asynchronous Kalman filtering and weighting algorithm has better tracking accuracy than the other algorithms.  ...  (an asynchronous-update sketch follows this entry)
doi:10.3389/fnbot.2022.866294 · pmid:35686119 · pmcid:PMC9172619 · fatcat:n3keld6dvjfefheujhayeljoeu
Fulltext PDF: https://web.archive.org/web/20220526190535/https://fjfsdata01prod.blob.core.windows.net/articles/files/866294/pubmed-zip/.versions/2/.package-entries/fnbot-16-866294-r1/fnbot-16-866294.pdf?sv=2018-03-28&sr=b&sig=fydw9MQ0ogRh1stNOQEz6%2F2mfML7hyXbLD5EAeBhoRA%3D&se=2022-05-26T19%3A06%3A04Z&sp=r&rscd=attachment%3B%20filename%2A%3DUTF-8%27%27fnbot-16-866294.pdf
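
As a rough illustration of the asynchronous idea described above, the following hedged sketch runs a single constant-velocity Kalman filter that predicts forward to each measurement's timestamp and applies whichever sensor update arrives next, weighted by that sensor's noise. The sensor names, models, rates and noise values are placeholders, not the weighting scheme of the paper.

```python
import numpy as np

# Asynchronous fusion sketch: "laser" and "vision" measurements arrive at their
# own timestamps; the filter predicts to each arrival time, then updates with
# that sensor's noise variance. All numbers are illustrative.

def predict(x, P, dt, q):
    F = np.array([[1.0, dt], [0.0, 1.0]])                     # constant velocity
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, r):
    H = np.array([[1.0, 0.0]])                                # both sensors observe position
    S = H @ P @ H.T + r                                       # innovation covariance (1x1)
    K = P @ H.T @ np.linalg.inv(S)                            # Kalman gain (2x1)
    x = x + K @ (np.atleast_1d(z) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# measurements arrive out of lock-step: (time, sensor, value, noise variance)
stream = [(0.02, "vision", 0.10, 4e-2), (0.05, "laser", 0.12, 1e-3),
          (0.07, "vision", 0.15, 4e-2), (0.10, "laser", 0.21, 1e-3)]

x, P, t = np.zeros(2), np.eye(2), 0.0
for t_meas, sensor, z, r in stream:
    x, P = predict(x, P, t_meas - t, q=1e-2)   # advance to the measurement time
    x, P = update(x, P, z, r)                  # weight by that sensor's noise
    t = t_meas
print(x)
```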

Quantitative Depth Recovery from Time-Varying Optical Flow in a Kalman Filter Framework [chapter]

John Barron, Wang Kay Jacky Ngai, Hagen Spies
2003 · Lecture Notes in Computer Science (Springer Berlin Heidelberg)
We present a Kalman filter framework for recovering depth from the time-varying optical flow fields generated by a camera translating over a scene by a known amount.  ...  Synthetic data made from ray-traced cubical, cylindrical and spherical primitives are used in the optical flow calculation and allow a quantitative error analysis of the recovered depth.  ...  Conclusions: We have presented a new algorithm to compute dense, accurate depth using a Kalman filter framework.  ...  (a per-pixel sketch of the idea follows this entry)
doi:10.1007/3-540-36586-9_22 · fatcat:7rxzeiw2kbcfbm5af5stglc3me
Fulltext PDF: https://web.archive.org/web/20110401055731/http://www.csd.uwo.ca/faculty/barron/PAPERS/x.pdf
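
The entry above recovers depth from optical flow produced by a known camera translation. A minimal per-pixel sketch of that idea, assuming a purely lateral translation T per frame and focal length f so that the flow of a static point is approximately u = f·T/Z: each frame then yields an inverse-depth observation that a scalar Kalman filter fuses over time. All numbers are illustrative, and this is far simpler than the time-varying formulation of the chapter.

```python
import numpy as np

# Each flow measurement u gives an inverse-depth observation rho = u / (f * T),
# which a scalar Kalman filter fuses across frames for one pixel.

f_px, T = 500.0, 0.05            # focal length [px], known translation per frame [m]
rho, P = 1.0 / 5.0, 1.0          # initial inverse depth [1/m] and its variance
R_flow = 0.5                     # flow measurement noise [px^2]

for u in [2.9, 3.1, 3.0, 3.2]:   # measured flow at one pixel over four frames
    rho_meas = u / (f_px * T)             # inverse-depth observation from this frame
    R_rho = R_flow / (f_px * T) ** 2      # propagate flow noise to inverse depth
    K = P / (P + R_rho)                   # scalar gain (static scene: no prediction step)
    rho = rho + K * (rho_meas - rho)
    P = (1.0 - K) * P

print("estimated depth [m]:", 1.0 / rho)
```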

Monocular feature-based periodic motion estimation for surgical guidance

Stephen Tully, George Kantor, Howie Choset
2013 · IEEE International Conference on Robotics and Automation (IEEE)
Our approach uses a bank of Kalman filters to estimate FFT parameters that encode the periodic motion of visually detected features.  ...  To ensure convergent estimation for this highly nonlinear problem, we have developed an iterative update procedure that treats the Kalman filter measurement update step as an optimization problem.  ...  Due to the nonlinearity of this mapping task, the conventionally adopted extended Kalman filter (EKF) is susceptible to divergence.  ...  (a generic iterated-update sketch follows this entry)
doi:10.1109/icra.2013.6631201 · dblp:conf/icra/TullyKC13 · fatcat:uvyzjlf5qzdubcvyporrhvxjfq
Fulltext PDF: https://web.archive.org/web/20170413090716/http://stephentully.net/papers/icra13periodic.pdf
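
The iterative update mentioned above, which treats the measurement update as an optimization problem, is commonly realized as an iterated-EKF-style update: Gauss-Newton iterations on the MAP cost J(x) = (z - h(x))ᵀR⁻¹(z - h(x)) + (x - x_pred)ᵀP⁻¹(x - x_pred), re-linearizing h(·) at each iterate. A generic hedged sketch (not the paper's filter bank over FFT parameters):

```python
import numpy as np

# Iterated measurement update: re-linearize h() around the running estimate.
# With n_iter=1 this reduces to the standard EKF update.

def iterated_update(x_pred, P, z, h, H_jac, R, n_iter=5):
    x = x_pred.copy()
    for _ in range(n_iter):
        H = H_jac(x)                               # re-linearize at the current iterate
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # innovation includes the linearization-point correction term
        x = x_pred + K @ (z - h(x) - H @ (x_pred - x))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x, P_new
```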

State of the Art in Vision-Based Localization Techniques for Autonomous Navigation Systems

Yusra Alkendi, Lakmal Seneviratne, Yahya Zweiri
2021 · IEEE Access (IEEE)
Existing filtering-based VIO solutions use the nonlinear filter framework (Kalman filter), where errors are linearized to produce accurate pose estimation.  ...  Motion is estimated in Large-Scale Direct SLAM [47] by an image-alignment method that relies on the depth map.  ... 
doi:10.1109/access.2021.3082778 · fatcat:bgt6qrpdcngnrisgnday74ohsm
Fulltext PDF: https://web.archive.org/web/20210717231650/https://ieeexplore.ieee.org/ielx7/6287639/6514899/09438708.pdf

Visual analytics of 3D LiDAR point clouds in robotics operating systems

Alia Mohd Azri, Shuzlina Abdul-Rahman, Raseeda Hamzah, Zalilah Abd Aziz, Nordin Abu Bakar
2020-04-01 · Bulletin of Electrical Engineering and Informatics (Institute of Advanced Engineering and Science)
Existing algorithms such as Grid Mapping and Monte Carlo have limitations in dealing with 3D environment data that have led to less accurate estimation.  ...  In this study, an experiment on Simultaneous Localization and Mapping (SLAM) using point cloud data derived from Light Detection and Ranging (LiDAR) technology is conducted.  ...  ACKNOWLEDGEMENTS: The authors would like to thank the Institute of Quality & Knowledge Advancement (InQKA) for the publication fee and the Faculty of Computer & Mathematical Sciences for all the given support  ... 
doi:10.11591/eei.v9i2.2061 · fatcat:pavgdbc74rabxilrvomjaxgele
Fulltext PDF: https://web.archive.org/web/20200506183800/http://beei.org/index.php/EEI/article/download/2061/1397

A critique on previous work in vision aided navigation

Charles Murcott, Francois Du Plessis, Johan Meyer
2011 · IEEE Africon '11 (IEEE)
While the motion estimations from the cameras are not error-free, this method is made highly effective because of the complementary nature of the errors from the cameras and INS.  ...  Several improvements and updates are proposed for the existing systems. GPS receivers have allowed for accurate navigation for many vehicles and robotic platforms.  ...  Landmarks are extracted and stored in a probabilistic 3D map.  ... 
doi:10.1109/afrcon.2011.6072126 · fatcat:qatd6kedxjcpzbpu2z35pt27ga
Fulltext PDF: https://web.archive.org/web/20170922233116/https://ujcontent.uj.ac.za/vital/access/services/Download/uj:6259/CONTENT1

An Overview to Visual Odometry and Visual SLAM: Applications to Mobile Robotics

Khalid Yousif, Alireza Bab-Hadiashar, Reza Hoseinnezhad
2015-11-13 · Intelligent Industrial Systems (Springer Nature)
We discuss and compare the basics of the most common SLAM methods, such as Extended Kalman Filter SLAM (EKF-SLAM), the Particle Filter and the most recent RGB-D SLAM.  ...  The topics we discuss range from basic localization techniques such as wheel odometry and dead reckoning, to the more advanced Visual Odometry (VO) and Simultaneous Localization and Mapping (SLAM) techniques  ...  The authors also propose a new nonlinear distributed filtering approach called the "Derivative-free distributed nonlinear Kalman Filter".  ... 
doi:10.1007/s40903-015-0032-7 · fatcat:jndhsezkhjepdizyaq4odtvgba
Fulltext PDF: https://web.archive.org/web/20180728215649/https://link.springer.com/content/pdf/10.1007%2Fs40903-015-0032-7.pdf

Vision-Based Pole-Like Obstacle Detection and Localization for Urban Mobile Robots

Stefano Sabatini, Matteo Corno, Simone Fiorenti, Sergio Matteo Savaresi
2018 · IEEE Intelligent Vehicles Symposium (IV)
The approach described here is based on identifying poles as long vertical structures in the image and locating them with respect to the robot using Kalman-filter-based depth estimation.  ...  Such obstacles, due to their thin structure, may be difficult to detect with common active sensors such as lasers.  ...  Considering now a moving camera in a static 3D environment, it is useful to map the motion of the points projected onto the image plane to the known motion of the camera.  ...  (the standard motion-field relation is sketched after this entry)
doi:10.1109/ivs.2018.8500452 · dblp:conf/ivs/SabatiniCFS18 · fatcat:sjdlk7qu6nhjrm2s2wvb7me7yu
Fulltext PDF: https://web.archive.org/web/20200505152759/https://re.public.polimi.it/retrieve/handle/11311/1062220/462693/Vision-based%20street%20pole-like%20detection%20for%20a%20mobile%20robot.pdf
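
The last fragment above refers to mapping the image-plane motion of projected points to the known motion of the camera. The classical motion-field relation does exactly this (shown here in normalized image coordinates, under one common sign convention); only the translational part depends on the depth Z, which is what a Kalman-filter depth estimator can exploit. A hedged sketch with illustrative numbers:

```python
import numpy as np

# Image-plane velocity (u, v) of a static point at normalized image position
# (x, y) and depth Z, for a camera with linear velocity t and angular velocity w.
# Classical motion-field equation; sign conventions vary between texts.

def motion_field(x, y, Z, t, w):
    tx, ty, tz = t
    wx, wy, wz = w
    u = (-tx + x * tz) / Z + wx * x * y - wy * (1 + x**2) + wz * y
    v = (-ty + y * tz) / Z + wx * (1 + y**2) - wy * x * y - wz * x
    return np.array([u, v])

# Example: pure lateral translation; closer points (smaller Z) move faster.
print(motion_field(0.1, -0.05, 5.0,  t=(0.2, 0.0, 0.0), w=(0.0, 0.0, 0.0)))
print(motion_field(0.1, -0.05, 20.0, t=(0.2, 0.0, 0.0), w=(0.0, 0.0, 0.0)))
```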

Simultaneous Localization and Mapping (SLAM) and Data Fusion in Unmanned Aerial Vehicles: Recent Advances and Challenges

Abhishek Gupta, Xavier Fernando
2022-03-28 · Drones (MDPI)
We then discuss SLAM techniques such as Kalman filters and extended Kalman filters to address scene perception, mapping, and localization in UAVs.  ...  We begin with an introduction to applications where UAV localization is necessary, followed by an analysis of multimodal sensor data fusion to fuse the information gathered from different sensors mounted  ...  Search Space Reduction in Nonlinear Systems Using Extended Kalman Filters: A filtering technique widely used to solve nonlinear tracking problems is the extended Kalman filter (EKF).  ... 
doi:10.3390/drones6040085 · fatcat:abl72hadbze3tgwl3xysbcnpoq
Fulltext PDF: https://web.archive.org/web/20220613004648/https://mdpi-res.com/d_attachment/drones/drones-06-00085/article_deploy/drones-06-00085-v2.pdf?version=1648539467

Integration of 3D and 2D imaging data for assured navigation in unknown environments

Evan Dill, Maarten Uijt de Haag
2010 · IEEE/ION Position, Location and Navigation Symposium (IEEE)
Also, the 2D imagery suffers from an unknown depth when estimating the position from consecutive image frames.  ...  With these features, consecutive observations in the 3D and 2D image frames can be used to compute and estimate position and orientation change of the sensors.  ...  The interpolated points will be used at the exact times that the Kalman filter updates, and will create a very accurate comparison of motion between the 3D imager data and the IMU data. 4.1 Wavelet  ... 
doi:10.1109/plans.2010.5507244 · fatcat:j2su7efgyfejnmpsgmg2owzfl4
Fulltext PDF: https://web.archive.org/web/20170922031031/https://etd.ohiolink.edu/!etd.send_file?accession=ohiou1299616166&disposition=inline

Survey of Datafusion Techniques for Laser and Vision Based Sensor Integration for Autonomous Navigation

Prasanna Kolar, Patrick Benavidez, Mo Jamshidi
2020-04-12 · Sensors (MDPI)
(RGB) and Time-of-flight (TOF) cameras that use optical technology and review the efficiency of using fused data from multiple sensors rather than a single sensor in autonomous navigation tasks like mapping  ...  This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot and detail the use of LiDAR and cameras to accomplish robot navigation.  ...  The standard heuristic for the nonlinear filtering problem is the Extended Kalman Filter (EKF). This technique is naturally the most sought-after filtering and estimation approach for nonlinear systems.  ... 
doi:10.3390/s20082180 · pmid:32290582 · pmcid:PMC7218742 · fatcat:ngfckkvp3fc23o7idvivlhdzbq
Fulltext PDF: https://web.archive.org/web/20200421100232/https://res.mdpi.com/d_attachment/sensors/sensors-20-02180/article_deploy/sensors-20-02180-v2.pdf

Navigation in Difficult Environments: Multi-Sensor Fusion Techniques [chapter]

Andrey Soloviev, Mikel M. Miller
2011-10-28 · Sensors: Theory, Algorithms, and Applications (Springer New York)
, for example, features extracted from images of laser scanners and video cameras.  ...  This paper focuses on multi-sensor fusion for navigation in difficult environments where none of the existing navigation technologies can satisfy requirements for accurate and reliable navigation if used  ...  depth for a general 3D case.  ... 
doi:10.1007/978-0-387-88619-0_9 · fatcat:tycp46bmnzerxhxxavnilmnpcu
Fulltext PDF: https://web.archive.org/web/20170225165328/http://www.dtic.mil/dtic/tr/fulltext/u2/a581024.pdf

Visual odometry based on stereo image sequences with RANSAC-based outlier rejection scheme

Bernd Kitt, Andreas Geiger, Henning Lategahn
2010 · IEEE Intelligent Vehicles Symposium (IEEE)
We employ an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme, which yields robust frame-to-frame motion estimation even in dynamic environments.  ...  In this paper we propose a novel approach for estimating the egomotion of the vehicle from a sequence of stereo images.  ...  Here, we make use of the trifocal tensor's ability to map two corresponding feature points x_A ↔ x_B in images A and B into image C. Figure 1 illustrates this procedure graphically.  ...  (a generic RANSAC loop is sketched after this entry)
doi:10.1109/ivs.2010.5548123 · dblp:conf/ivs/KittGL10 · fatcat:z3cyiz3skzfv7clkr2lwxm3mxu
Fulltext PDF: https://web.archive.org/web/20101226054451/http://rainsoft.de:80/publications/iv10b.pdf
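
The RANSAC-based outlier rejection mentioned above follows the usual hypothesize-and-verify pattern: fit a model to a minimal random sample, count the inliers, keep the best hypothesis, and refit on its inliers before (in the paper) feeding the result to the Iterated Sigma Point Kalman Filter. Below is a generic sketch with a toy line-fitting model standing in for the frame-to-frame motion model; the fit/error functions and thresholds are placeholders.

```python
import numpy as np

# Generic RANSAC skeleton: hypothesize from a minimal sample, verify by inlier
# count, keep the best hypothesis, refit on all of its inliers.

def ransac(data, fit, error, n_min, threshold, n_iters=200, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(data), dtype=bool)
    for _ in range(n_iters):
        sample = data[rng.choice(len(data), n_min, replace=False)]
        model = fit(sample)
        inliers = error(model, data) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit(data[best_inliers]), best_inliers   # refit on the best inlier set

# toy usage: robust 2D line fit y = a*x + b with injected gross outliers
def fit_line(pts):
    A = np.column_stack([pts[:, 0], np.ones(len(pts))])
    return np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]

def line_error(model, pts):
    a, b = model
    return np.abs(pts[:, 1] - (a * pts[:, 0] + b))

x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + 0.05 * np.random.default_rng(1).standard_normal(50)
y[::5] += 8.0                                      # inject 20% outliers
model, inliers = ransac(np.column_stack([x, y]), fit_line, line_error,
                        n_min=2, threshold=0.5)
print(model, inliers.sum())
```
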
Showing results 1–15 of 1,474