26,578 Hits in 6.5 sec

A Visual Navigation Perspective for Category-Level Object Pose Estimation [article]

Jiaxin Guo, Fangxun Zhong, Rong Xiong, Yunhui Liu, Yue Wang, Yiyi Liao
2022 arXiv   pre-print
This paper studies category-level object pose estimation based on a single monocular image.  ...  In this paper, we take a deeper look at the inference of analysis-by-synthesis from the perspective of visual navigation, and investigate what a good navigation policy is for this specific task.  ...  Object Pose Estimation as Visual Navigation Our goal is to improve the inference procedure of the analysis-by-synthesis pipeline for category-level object pose estimation.  ... 
arXiv:2203.13572v1 fatcat:cgxidaypf5dwlm6mgmw5rvueqa

An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation

Yuanhao Qu, Minghao Yang, Jiaqing Zhang, Wu Xie, Baohua Qiang, Jinlong Chen
2021 Sensors  
This work summarizes multi-sensor fusion methods for mobile agents' navigation by: (1) analyzing and comparing the advantages and disadvantages of a single sensor in the task of navigation; (2) introducing  ...  Despite the high performance of single-sensor navigation methods, multi-sensor fusion can still improve the perception and navigation abilities of mobile agents.  ...  To realize autonomous navigation, it is also necessary to perform motion pose estimation for the mobile agent [13].  ... 
doi:10.3390/s21051605 pmid:33668886 pmcid:PMC7956205 fatcat:7twh225phbbupjd4fv2fw6twum
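As a toy illustration of the gyro-plus-absolute-sensor fusion idea this survey covers (not a method from the paper; the function name, `alpha` weight, and signal shapes are all illustrative), a first-order complementary filter integrates a high-rate but drifting gyro and corrects it with slow absolute heading fixes:

```python
def complementary_filter(gyro_rates, heading_fixes, dt, alpha=0.98):
    """Fuse high-rate gyro yaw rates (rad/s) with noisy absolute
    heading fixes (rad, e.g. magnetometer/GNSS course) into one
    heading estimate per time step."""
    heading = heading_fixes[0]  # initialize from the first absolute fix
    fused = []
    for rate, fix in zip(gyro_rates, heading_fixes):
        # high-pass the gyro integration, low-pass the absolute fix:
        # the gyro dominates short-term motion, the fix bounds drift
        heading = alpha * (heading + rate * dt) + (1.0 - alpha) * fix
        fused.append(heading)
    return fused
```

With `alpha` close to 1 the estimate follows the smooth gyro; lowering it pulls the estimate harder toward the absolute sensor, at the cost of passing through more of its noise.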

Utilizing Semantic Visual Landmarks for Precise Vehicle Navigation [article]

Varun Murali, Han-Pang Chiu, Supun Samarasekera, Rakesh Kumar
2018 arXiv   pre-print
The class category that the visual feature belongs to is extracted from a pre-trained deep learning network trained for semantic segmentation.  ...  This paper presents a new approach for integrating semantic information for vision-based vehicle navigation.  ...  For example, Figure 1 shows the result of using an off-the-shelf video segmentation tool to classify object categories from a street scene.  ... 
arXiv:1801.00858v1 fatcat:7it7zmjtgndr5hzc7izdk3m4y4

Monocular Visual SLAM for Underwater Navigation in Turbid and Dynamic Environments

Chinthaka Amarasinghe, Asanga Ratnaweera, Sanjeeva Maitripala
2020 American Journal of Mechanical Engineering  
In this paper, we propose UW-SLAM (Underwater SLAM), a new monocular visual SLAM algorithm focused on the underwater environment, which addresses turbidity and dynamism.  ...  Although many algorithms have been developed in recent years, especially in the ground and aerial robotics communities, directly applying those methods to underwater navigation remains challenging due to the visual  ...  The pose is estimated with the Perspective-Three-Point (P3P) formulation, using the method of Gao et al. [38].  ... 
doi:10.12691/ajme-8-2-5 fatcat:iclpghxltrc5lljb3qknka3oiu
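The P3P formulation cited in this abstract solves for camera pose from three 2D-3D correspondences with known intrinsics. As a hedged, self-contained sketch of the same registration problem (the classic Direct Linear Transform, not Gao et al.'s P3P solver), the following estimates a full 3x4 projection matrix from six or more correspondences:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Estimate a 3x4 camera projection matrix P from n >= 6
    correspondences between 3D points X (n, 3) and their image
    projections x (n, 2), via SVD of the stacked DLT constraints."""
    rows = []
    for Xw, (u, v) in zip(X, x):
        Xh = np.append(Xw, 1.0)  # homogeneous 3D point
        rows.append(np.concatenate([np.zeros(4), -Xh, v * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
    # the solution is the right singular vector associated with the
    # smallest singular value, i.e. the (near-)null space of the system
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```

P is recovered only up to scale, which does not affect reprojection; P3P, by contrast, exploits known intrinsics to get by with just three points (plus one to disambiguate) at the price of solving a quartic instead of a linear system.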

Feature-based visual navigation integrity monitoring for urban autonomous platforms

Shizhuang Wang, Xingqun Zhan, Yuanwen Fu, Yawei Zhai
2020 Aerospace Systems  
and safety-assured pose estimates.  ...  Visual navigation systems have increasingly been adopted in many urban safety-critical applications, such as urban air mobility and highly automated vehicles, for which they must continuously provide accurate  ...  Integrity provides a direct way to quantify the level of safety from a navigation perspective, and it is therefore employed to measure localization safety for urban autonomous systems.  ... 
doi:10.1007/s42401-020-00057-8 fatcat:awo4u3bu3japhngqz3zdhvdjmm

A fully-autonomous aerial robotic solution for the 2016 International Micro Air Vehicle competition

Carlos Sampedro, Hriday Bavle, Alejandro Rodriguez-Ramos, Adrian Carrio, Ramon A. Suarez Fernandez, Jose Luis Sanchez-Lopez, Pascual Campoy
2017 2017 International Conference on Unmanned Aircraft Systems (ICUAS)  
with objects, landing autonomously on a moving platform, etc.  ...  appropriate capabilities of the proposed system for performing high-level missions and its flexibility for being adapted to a wide variety of applications.  ...  the Mission Planning, Navigation and Mapping, Pose Estimation and Object Recognition capabilities.  ... 
doi:10.1109/icuas.2017.7991442 fatcat:cntwxackondvhpp6nvsqms2tjq

Fusion of vision, 3D gyro and GPS for camera dynamic registration

Z. Hu, U. Keiichi, H. Lu, F. Lamosa
2004 Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004.  
This paper presents a novel framework for hybrid camera pose tracking in outdoor navigation systems.  ...  Our system combines vision, GPS and 3D inertial gyroscope sensors to obtain accurate and robust camera pose estimation results.  ...  visual performance of virtual objects in the AR space.  ... 
doi:10.1109/icpr.2004.1334539 dblp:conf/icpr/HuKLL04 fatcat:qqj33g5azzhftlkkqhpusiigxq


E. Blettery, P. Lecat, A. Devaux, V. Gouet-Brunet, F. Saly-Giocanti, M. Brédif, L. Delavoipière, S. Conord, F. Moret
2020 ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
This article presents a spatio-temporal web application dedicated to the co-exploitation of heterogeneous data spatialized in a common 3D environment, providing several paradigms for supporting their co-visualization  ...  Through census records as statistical data and aerial imagery as visual data, a group of historians and sociologists assessed the relevance of the joint exploitation of those heterogeneous data within the  ...  We would also like to thank Martyna Poreba and Nathalie Abadie for their help in making available the images of Nanterre, and Benjamin Lecat for his work on the census data.  ... 
doi:10.5194/isprs-annals-vi-4-w1-2020-45-2020 fatcat:t4yqyxdhbjg4toa3aarvq3unsy

6-PACK: Category-level 6D Pose Tracker with Anchor-Based Keypoints [article]

Chen Wang, Roberto Martín-Martín, Danfei Xu, Jun Lv, Cewu Lu, Li Fei-Fei, Silvio Savarese, Yuke Zhu
2019 arXiv   pre-print
We present 6-PACK, a deep learning approach to category-level 6D object pose tracking on RGB-D data.  ...  Our experiments show that our method substantially outperforms existing methods on the NOCS category-level 6D pose estimation benchmark and supports a physical robot to perform simple vision-based closed-loop  ...  CONCLUSION We presented 6-PACK, a category-level 6D object pose tracker.  ... 
arXiv:1910.10750v1 fatcat:kl36mdjjgrd3dh4nwmtxtyl5jy

A Comprehensive Review on 3D Object Detection and 6D Pose Estimation with Deep Learning

Sabera Hoque, MD. Yasir Arafat, Shuxiang Xu, Ananda Maiti, Yuchen Wei
2021 IEEE Access  
CATEGORY-LEVEL 6 DOF POSE ESTIMATION Sahin et al. [207] cover various challenges for 6D pose estimation, such as viewpoint inconsistency, objects (both textured and texture-less), curbs, cluttered scenes  ...  For category-level 6D pose estimation, two-level bounding-box-based alternative methods have been developed that directly output the 6D pose without the use of any PnP solver, but consist of ResNet (Residual Neural  ... 
doi:10.1109/access.2021.3114399 fatcat:kvdwsslqxff3lkh27tsdsciqma

A mobile indoor navigation system interface adapted to vision-based localization

Andreas Möller, Matthias Kranz, Robert Huitl, Stefan Diewald, Luis Roalter
2012 Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia - MUM '12  
The particular requirements of a navigation user interface for a vision-based system, however, have not been investigated so far.  ...  The additional indicators showed potential for making users choose distinctive reference images for reliable localization.  ...  Acknowledgments This research project has been supported by the space agency of the German Aerospace Center with funds from the Federal Ministry of Economics and Technology on the basis of a resolution  ... 
doi:10.1145/2406367.2406372 dblp:conf/mum/MollerKHDR12 fatcat:ux7y767l6faphjldwvjv2ey5oa

Survey on Computer Vision for UAVs: Current Developments and Trends

Christoforos Kanellakis, George Nikolakopoulos
2017 Journal of Intelligent and Robotic Systems  
To this end, visual sensing techniques have been integrated into the control pipeline of UAVs in order to enhance their navigation and guidance skills.  ...  The aim of this article is to present a comprehensive literature review on vision-based applications for UAVs, focusing mainly on current developments and trends.  ... 
doi:10.1007/s10846-017-0483-z fatcat:2abqia3mlnedrlonxhlsjanffy

NAVIG: Guidance system for the visually impaired using virtual augmented reality

Brian F.G. Katz, Florian Dramas, Gaëtan Parseihian, Olivier Gutierrez, Slim Kammoun, Adrien Brilhault, Lucie Brunet, Mathieu Gallay, Bernard Oriola, Malika Auvray, Philippe Truillet, Michel Denis (+2 others)
2012 Technology and Disability  
Finding one's way to an unknown destination, navigating complex routes, finding inanimate objects: these are all tasks that can be challenging for the visually impaired.  ...  scale and small scale, through a combination of a Global Navigation Satellite System (GNSS) and rapid visual recognition with which the precise position of the user can be determined.  ...  The NAVIG consortium includes IRIT, LIMSI, CerCo, SpikeNet Technology, NAVOCAP, CESDV - Institute for Young Blind, and the community of Grand Toulouse.  ... 
doi:10.3233/tad-2012-0344 fatcat:jyaiefidwvhj3bnk7fxffw356u

DRACO: Weakly Supervised Dense Reconstruction And Canonicalization of Objects [article]

Rahul Sajnani, AadilMehdi Sanchawala, Krishna Murthy Jatavallabhula, Srinath Sridhar, K. Madhava Krishna
2020 arXiv   pre-print
Canonical shape reconstruction, estimating 3D object shape in a coordinate space canonicalized for scale, rotation, and translation parameters, is an emerging paradigm that holds promise for a multitude  ...  We present DRACO, a method for Dense Reconstruction And Canonicalization of Object shape from one or more RGB images.  ...  Recently, canonicalization, the process of mapping object instances to a category-level container, has emerged as a useful tool for category-level understanding [11] [12] [13] [14] (Fig. 3).  ... 
arXiv:2011.12912v1 fatcat:of7rkwpfvza5rpaqzhd2jx23py
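The canonicalization described above maps object instances into a shared, normalized frame. DRACO learns this mapping; as a purely geometric sketch of the idea (not the paper's method), one can canonicalize a point cloud for translation, scale, and axis alignment with PCA:

```python
import numpy as np

def canonicalize(points):
    """Map a point cloud (n, 3) into a canonical frame: centroid at
    the origin, principal axes aligned with the coordinate axes,
    and the largest coordinate extent scaled to 1."""
    centered = points - points.mean(axis=0)        # remove translation
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ Vt.T                      # rotate onto principal axes
    return aligned / np.abs(aligned).max()         # normalize scale
```

The PCA axes leave a sign and ordering ambiguity that a learned canonicalizer resolves consistently across a category; this sketch only fixes translation and scale deterministically.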

Review and classification of vision-based localisation techniques in unknown environments

Amani Ben-Afia, Vincent Gay-Bellile, Anne-Christine Escher, Daniel Salos, Laurent Soulier, Lina Deambrogio, Christophe Macabiau
2014 IET radar, sonar & navigation  
Localization can be defined as the process of estimating an object's pose (position and attitude) relative to a reference frame, based on sensor inputs.  ...  The localization system's performance is evaluated based on its accuracy, defined as the degree of conformance of estimated or measured information at a given time to a defined reference value which is  ...  The objective of this study is to describe and compare recent vision techniques for localization and provide guidelines for the integration of visual information with other navigation measurements such  ... 
doi:10.1049/iet-rsn.2013.0389 fatcat:hpun3pzn6nhuvdw4eyxbtjbedq