Reducing drift in visual odometry by inferring sun direction using a Bayesian Convolutional Neural Network

Valentin Peretroukhin, Lee Clement, Jonathan Kelly
2017 2017 IEEE International Conference on Robotics and Automation (ICRA)  
We leverage recent advances in Bayesian Convolutional Neural Networks to train and implement a sun detection model that infers a three-dimensional sun direction vector from a single RGB image.  ...  We present a method to incorporate global orientation information from the sun into a visual odometry pipeline using only the existing image stream, where the sun is typically not visible.  ...  INDIRECT SUN DETECTION USING A BAYESIAN CONVOLUTIONAL NEURAL NETWORK: We use a Convolutional Neural Network (CNN) to infer the direction of the sun.  ... 
doi:10.1109/icra.2017.7989235 dblp:conf/icra/PeretroukhinCK17 fatcat:luwfbtob5fhcfka3njyerhx2ve
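
The sketch below illustrates the core idea behind such a model: keeping dropout active at test time (Monte Carlo dropout) turns repeated forward passes of a CNN into samples from an approximate posterior, yielding both a mean sun direction and an uncertainty estimate. It assumes PyTorch; the class name, layer sizes, and sample count are illustrative and are not the authors' network.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SunDirectionCNN(nn.Module):
    """Toy CNN regressing a unit sun-direction vector from an RGB image."""
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.p_drop = p_drop
        self.head = nn.Linear(32 * 4 * 4, 3)

    def forward(self, x):
        h = self.features(x).flatten(1)
        # Dropout stays on even at inference time (Monte Carlo dropout).
        h = F.dropout(h, p=self.p_drop, training=True)
        return F.normalize(self.head(h), dim=1)

def mc_sun_direction(model, image, n_samples=32):
    """Mean direction and per-axis variance over stochastic forward passes."""
    with torch.no_grad():
        samples = torch.stack([model(image) for _ in range(n_samples)])
    return F.normalize(samples.mean(0), dim=1), samples.var(0)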

Approaches, Challenges, and Applications for Deep Visual Odometry: Toward to Complicated and Emerging Areas [article]

Ke Wang, Sai Ma, Junlan Chen, Fan Ren
2020 arXiv   pre-print
Through the literature decomposition, analysis, and comparison, we finally put forward a number of open issues and raise some future research directions in this field.  ...  Visual odometry (VO) is a prevalent way to deal with the relative localization problem, which is becoming increasingly mature and accurate, but it tends to be fragile under challenging environments.  ...  [89] used Bayesian Convolutional Neural Network to generate sun direction information in the image and then incorporated this sun direction information into stereo visual odometry pipeline to reduce  ... 
arXiv:2009.02672v1 fatcat:zdnwt4lpmvbiromtpcxhxxpjxa

Learning Multiplicative Interactions with Bayesian Neural Networks for Visual-Inertial Odometry [article]

Kashmira Shinde, Jongseok Lee, Matthias Humt, Aydin Sezgin, Rudolph Triebel
2020 arXiv   pre-print
The proposed network makes use of a multi-head self-attention mechanism that learns multiplicative interactions between multiple streams of information.  ...  This paper presents an end-to-end multi-modal learning approach for monocular Visual-Inertial Odometry (VIO), which is specifically designed to exploit sensor complementarity in the light of sensor degradation  ...  The authors acknowledge the support by the Helmholtz Association's Initiative and Networking Fund (INF) under the Helmholtz AI platform grant agreement (contract number ID ZT-I-PF-5-1), the project ARCHES  ... 
arXiv:2007.07630v1 fatcat:pcilesijxbaunlw5jol4sg7h5i
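
As a rough picture of how self-attention can fuse several streams of information, the snippet below treats the visual and inertial feature vectors as two tokens and lets multi-head attention decide how they interact before a pose regression head. It assumes PyTorch; the dimensions, module names, and 6-DoF output are assumptions for illustration, not the paper's architecture.

import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pose_head = nn.Linear(dim, 6)  # translation + rotation (axis-angle)

    def forward(self, visual_feat, inertial_feat):
        # Each modality becomes one token; attention weights capture how the
        # streams should be combined, including multiplicative interactions.
        tokens = torch.stack([visual_feat, inertial_feat], dim=1)  # (B, 2, dim)
        fused, _ = self.attn(tokens, tokens, tokens)
        return self.pose_head(fused.mean(dim=1))  # relative 6-DoF pose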

Exploiting Sparse Semantic HD Maps for Self-Driving Vehicle Localization [article]

Wei-Chiu Ma, Ignacio Tartavull, Ioan Andrei Bârsan, Shenlong Wang, Min Bai, Gellert Mattyus, Namdar Homayounfar, Shrinidhi Kowshika Lakshmikanth, Andrei Pokrovsky, Raquel Urtasun
2019 arXiv   pre-print
Towards this goal, we formulate the problem in a Bayesian filtering framework, and exploit lanes, traffic signs, as well as vehicle dynamics to localize robustly with respect to a sparse semantic map.  ...  In this paper we propose a novel semantic localization algorithm that exploits multiple sensors and has precision on the order of a few centimeters.  ...  The presence of traffic signs helps reduce uncertainty along the longitudinal direction, but signs could be as sparse as every 1km, during which INS drift could be as large as 7 meters.  ... 
arXiv:1908.03274v1 fatcat:gdbrztrdlnebtmbqp364hopiky
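
The Bayesian filtering structure referred to here boils down to alternating a dynamics-driven prediction with a map-based measurement update. The skeleton below, over a discretised pose belief, is only meant to show that structure; the grid resolution, motion kernel, and lane/sign likelihoods are placeholders rather than the paper's formulation.

import numpy as np
from scipy.ndimage import convolve

def predict(belief, motion_kernel):
    """Diffuse the discretised pose belief according to the vehicle dynamics."""
    return convolve(belief, motion_kernel, mode="constant")

def update(belief, lane_likelihood, sign_likelihood):
    """Multiply the prior by per-cell observation likelihoods and renormalise."""
    posterior = belief * lane_likelihood * sign_likelihood
    return posterior / posterior.sum()

# Example: a coarse grid over (longitudinal, lateral, heading) offsets.
belief = np.full((50, 20, 36), 1.0 / (50 * 20 * 36))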

Computer Vision for Autonomous Vehicles: Problems, Datasets and State of the Art [article]

Joel Janai, Fatma Güney, Aseem Behl, Andreas Geiger
2021 arXiv   pre-print
This book attempts to narrow this gap by providing a survey on the state-of-the-art datasets and techniques.  ...  Recent years have witnessed enormous progress in AI-related fields such as computer vision, machine learning, and autonomous vehicles.  ...  [181] present a direct sparse approach for monocular visual odometry.  ... 
arXiv:1704.05519v3 fatcat:xiintiarqjbfldheeg2hsydyra

milliEgo

Chris Xiaoxuan Lu, Muhamad Risqi U. Saputra, Peijun Zhao, Yasin Almalioglu, Pedro P. B. de Gusmao, Changhao Chen, Ke Sun, Niki Trigoni, Andrew Markham
2020 Proceedings of the 18th Conference on Embedded Networked Sensor Systems  
Figure 1: Our proposed milliEgo uses a low-cost COTS mmWave radar and IMU coupled with a deep neural network model to accurately  ...  Although currently dominated by optical techniques e.g., visual-inertial odometry, these suffer from challenges with scene illumination or featureless surfaces.  ...  Lastly, when applying recent advances in deep neural networks (DNNs) as used in visual or lidar odometry, computational load can be significant which hampers their adoption on mobile, wearable devices  ... 
doi:10.1145/3384419.3430776 dblp:conf/sensys/LuSZAGC0TM20 fatcat:4n7jmbkmfvfoxnay3geuqbvi64

milliEgo: Single-chip mmWave Radar Aided Egomotion Estimation via Deep Sensor Fusion [article]

Chris Xiaoxuan Lu, Muhamad Risqi U. Saputra, Peijun Zhao, Yasin Almalioglu, Pedro P. B. de Gusmao, Changhao Chen, Ke Sun, Niki Trigoni, Andrew Markham
2020 arXiv   pre-print
Although currently dominated by optical techniques e.g., visual-inertial odometry, these suffer from challenges with scene illumination or featureless surfaces.  ...  Secondly, to robustly fuse mmWave pose estimates with additional sensors, e.g. inertial or visual sensors, we introduce a mixed attention approach to deep fusion.  ...  Lastly, when applying recent advances in deep neural networks (DNNs) as used in visual or lidar odometry, computational load can be significant which hampers their adoption on mobile, wearable devices  ... 
arXiv:2006.02266v2 fatcat:svh23pzogbcglg42urhmfdl47i
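
A much-simplified stand-in for this kind of learned sensor fusion is a per-channel gate that decides how strongly the radar and inertial features contribute before pose regression; the paper's mixed-attention block is more elaborate, and the PyTorch module below (names and sizes included) is an assumption made purely for illustration.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, 2 * dim), nn.Sigmoid())
        self.pose_head = nn.Linear(2 * dim, 6)

    def forward(self, radar_feat, imu_feat):
        joint = torch.cat([radar_feat, imu_feat], dim=-1)
        # Channel-wise weights down-weight whichever modality is degraded.
        return self.pose_head(self.gate(joint) * joint)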

A Novel Online Approach for Drift Covariance Estimation of Odometries Used in Intelligent Vehicle Localization

Mostafa Osman, Ahmed Hussein, Abdulla Al-Kaff, Fernando García, Dongpu Cao
2019 Sensors  
In this paper, the drift error in an odometry is modeled and a Drift Covariance Estimation (DCE) algorithm is introduced.  ...  A lot of different odometries (visual, inertial, wheel encoders) have been introduced through the past few years for autonomous vehicle localization.  ...  Funding: This research was supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R).  ... 
doi:10.3390/s19235178 pmid:31779211 pmcid:PMC6928711 fatcat:nutyyrckjvgtnj67hvneo2av7q
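
One simple way to picture online drift-covariance estimation is to accumulate the sample covariance of per-step odometry increments against a reference; the small class below is an illustrative sketch under that assumption, not the paper's DCE algorithm.

import numpy as np

class DriftCovarianceEstimator:
    def __init__(self, dim=3):
        self.dim = dim
        self.errors = []  # per-step increment errors, e.g. (dx, dy, dyaw)

    def add_step(self, odom_increment, reference_increment):
        self.errors.append(np.asarray(odom_increment) - np.asarray(reference_increment))

    def covariance(self):
        if len(self.errors) < 2:
            return np.eye(self.dim) * 1e-6  # weak prior until enough samples
        return np.cov(np.stack(self.errors), rowvar=False)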

State of the Art in Vision-Based Localization Techniques for Autonomous Navigation Systems

Yusra Alkendi, Lakmal Seneviratne, Yahya Zweiri
2021 IEEE Access  
[98] have combined a recurrent CNN with a Bayesian CNN to infer the sun direction to improve VO ego-motion estimation.  ...  • Deep neural network was used to establish 2D-2D/3D-2D correspondences for pose estimation. • Framework did not suffer from scale-drift issue.  ... 
doi:10.1109/access.2021.3082778 fatcat:bgt6qrpdcngnrisgnday74ohsm

An Overview of Perception and Decision-Making in Autonomous Systems in the Era of Learning [article]

Yang Tang, Chaoqiang Zhao, Jianrui Wang, Chongzhen Zhang, Qiyu Sun, Weixing Zheng, Wenli Du, Feng Qian, Juergen Kurths
2020 arXiv   pre-print
Finally, we examine the several challenges and promising directions discussed and concluded in related research for future works in the era of computer science, automatic control, and robotics.  ...  Autonomous systems possess the features of inferring their own ego-motion, autonomously understanding their surroundings, and planning trajectories.  ...  [128] used recurrent convolutional neural networks (RNN) for camera localization. Then, Xue et al.  ... 
arXiv:2001.02319v3 fatcat:z3zhp2cyonfqtlttl2y57572uy

A Critical Analysis of Image-based Camera Pose Estimation Techniques [article]

Meng Xu, Youchen Wang, Bin Xu, Jun Zhang, Jian Ren, Stefan Poslad, Pengfei Xu
2022 arXiv   pre-print
improvements in their algorithms such as loss functions, neural network structures.  ...  Furthermore, we summarise what are the popular datasets used for camera localization and compare the quantitative and qualitative results of these methods with detailed performance metrics.  ...  In contrast, the RPE is well-suited for measuring the drift of a visual odometry system, for example, the drift per second [10] . Absolute pose error (APE).  ... 
arXiv:2201.05816v1 fatcat:5wskhyskivh5bh67icaj3pc5i4
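
The relative pose error (RPE) mentioned in the snippet is usually computed as the RMSE of the translational error between ground-truth and estimated relative motions over a fixed frame (or time) interval; a generic sketch follows, with the function name and 4x4 pose convention chosen for the example.

import numpy as np

def relative_pose_error(gt_poses, est_poses, delta=1):
    """RMSE of translational drift over a fixed interval of `delta` frames.

    gt_poses, est_poses: sequences of 4x4 homogeneous camera poses.
    """
    errors = []
    for i in range(len(gt_poses) - delta):
        gt_rel = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        est_rel = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        err = np.linalg.inv(gt_rel) @ est_rel
        errors.append(np.linalg.norm(err[:3, 3]))
    return float(np.sqrt(np.mean(np.square(errors))))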

Coarse Semantic-based Motion Removal for Robust Mapping in Dynamic Environments

Shuo Wang, Xudong Lv, Junbao Li, Dong Ye
2020 IEEE Access  
We propose a novel method for keypoints selection to lower the negative effect brought by moving objects during map construction.  ...  For each frame in a sequence, objects in a scenario will be classified into two motion states, non-static and static, according to the category prediction from the moving object detection thread.  ...  RPE measures the local accuracy of the trajectory over a fixed time interval. RPE corresponds to the drift of trajectory which can be used to evaluate visual odometry.  ... 
doi:10.1109/access.2020.2989317 fatcat:uyebw54jr5bctn3ztcu4lmqcda

Event-Based Sensing and Signal Processing in the Visual, Auditory, and Olfactory Domain: A Review

Mohammad-Hassan Tayarani-Najaran, Michael Schmuker
2021 Frontiers in Neural Circuits  
Our aim is to facilitate research in event-based sensing and signal processing by providing a comprehensive overview of the research performed previously as well as highlighting conceptual advantages,  ...  In this paper we highlight the advantages and challenges of event-based sensing and signal processing in the visual, auditory and olfactory domains.  ...  The camera is then used for visual odometry of a robot.  ... 
doi:10.3389/fncir.2021.610446 pmid:34135736 pmcid:PMC8203204 fatcat:qjvv6czzufazthcyvs7go5pnj4

Scanning the Issue

Azim Eskandarian
2022 IEEE transactions on intelligent transportation systems (Print)  
The authors take into account the integration of blockchain and federated learning in vehicular networks as a direction for future research.  ...  This article first presents the state-of-the-art communication technologies, standards, and protocols in vehicular networks (either inter-vehicle networking or in-vehicle networking) along with several  ...  Motivated by a degenerate case caused by a large bias of an MEMS IMU, the authors redesign a system model of visual-inertial odometry in a framework of extended Kalman filter.  ... 
doi:10.1109/tits.2022.3141513 fatcat:gvywr655cvgolg7rfjrqmt33b4
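
Filtering-based VIO of the kind described here rests on the standard EKF predict/update cycle, with IMU biases kept in the state so they can be estimated online; the bare-bones pair of functions below shows only that cycle, and the state layout, Jacobians, and noise matrices are left abstract rather than reproducing the article's system model.

import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state and covariance with the (linearised) dynamics F."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z of model H and noise R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new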

A multirange architecture for collision-free off-road robot navigation

Pierre Sermanet, Raia Hadsell, Marco Scoffier, Matt Grimes, Jan Ben, Ayse Erkan, Chris Crudele, Urs Miller, Yann LeCun
2009 Journal of Field Robotics  
Localization is performed using a combination of GPS, wheel odometry, IMU, and a high-speed, low-complexity rotational visual odometry module.  ...  Instead of using a dynamical model of the robot for short-range planning, the system uses a large lookup table of physically-possible trajectory segments recorded on the robot in a wide variety of driving  ...  Acknowledgements This work was supported in part by the DARPA LAGR program under contract HR001105C0038.  ... 
doi:10.1002/rob.20270 fatcat:whztx6qearde7fyukx4hslx644
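
The trajectory-table idea amounts to scoring a set of pre-recorded, physically feasible segments against the current cost map and executing the cheapest one; the toy function below illustrates that selection step only, with the table layout and cost interface invented for the example.

def best_segment(trajectory_table, cost_map):
    """Pick the pre-recorded segment whose swept cells have the lowest cost.

    trajectory_table: {command: sequence of (x, y) points along the segment}
    cost_map: callable (x, y) -> traversal cost
    """
    def segment_cost(points):
        return sum(cost_map(x, y) for x, y in points)
    return min(trajectory_table, key=lambda cmd: segment_cost(trajectory_table[cmd]))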