
Situational Fusion of Visual Representation for Visual Navigation [article]

Bokui Shen, Danfei Xu, Yuke Zhu, Leonidas J. Guibas, Li Fei-Fei, Silvio Savarese
2021 arXiv   pre-print
A complex visual navigation task puts an agent in different situations which call for a diverse range of visual perception abilities.  ...  Therefore, utilizing the appropriate visual perception abilities based on a situational understanding of the visual environment can empower these navigation models in unseen visual environments.  ...  Acknowledgement: We thank Andrey Kurenkov and Ajay Mandlekar for helpful comments.  ... 
arXiv:1908.09073v2 fatcat:zjhb3pft6jhibipvpi5sitlqfa

Combination of active sensing and sensor fusion for collision avoidance in mobile robots [chapter]

Terence Chek Hion Heng, Yoshinori Kuno, Yoshiaki Shirai
1997 Lecture Notes in Computer Science  
To fully utilise the strengths of both the sonar and visual sensing systems, this paper proposes a fusion of navigation methods involving both the sonar and visual systems as primary sources to produce  ...  Presently, mobile robots are navigated by a number of methods, using navigating systems such as the sonar-sensing system or the visual-sensing system.  ...  In this way, through active sensor fusion [10], the robot will be able to navigate more intelligently in real-life situations.  ...
doi:10.1007/3-540-63508-4_169 fatcat:jqhuf2bznbfuzidcddz32pclcq

NAVIG: Guidance system for the visually impaired using virtual augmented reality

Brian F.G. Katz, Florian Dramas, Gaëtan Parseihian, Olivier Gutierrez, Slim Kammoun, Adrien Brilhault, Lucie Brunet, Mathieu Gallay, Bernard Oriola, Malika Auvray, Philippe Truillet, Michel Denis (+2 others)
2012 Technology and Disability  
Finding one's way to an unknown destination, navigating complex routes, finding inanimate objects; these are all tasks that can be challenging for the visually impaired.  ...  scale and small scale, through a combination of a Global Navigation Satellite System (GNSS) and rapid visual recognition with which the precise position of the user can be determined.  ...  The NAVIG consortium includes IRIT, LIMSI, CerCo, SpikeNet Technology, NAVOCAP, CESDV - Institute for Young Blind, and the community of Grand Toulouse.  ...
doi:10.3233/tad-2012-0344 fatcat:jyaiefidwvhj3bnk7fxffw356u

Multi-modality Image Fusion via Generalized Riesz-wavelet Transformation

2014 KSII Transactions on Internet and Information Systems  
To preserve the spatial consistency of low-level features, the generalized Riesz-wavelet transform (GRWT) is adopted for fusing multi-modality images.  ...  A performance analysis of the proposed method applied to real-world images demonstrates that it is competitive with state-of-the-art fusion methods, especially in combining structural information.  ...  Programs Foundation of Ministry of Education of China (Grant No. 20090073110045).  ...
doi:10.3837/tiis.2014.11.026 fatcat:4ru4bvy5ynaezeemhnygpuhwme

Autonomous swarm of heterogeneous robots for surveillance operations

Georgios Orfanidis, Savvas Apostolidis, Athanasios Kapoutsis, Konstantinos Ioannidis, Elias Kosmatopoulos, Stefanos Vrochidis, Ioannis Kompatsiaris
2019 Zenodo  
Dealing with these issues from the counter-threat perspective, the proposed project focuses on designing and developing a complete system that utilizes the capabilities of multiple UxVs for  ...  Finally, the operator is informed properly according to the visual identification modules and the outcomes of the UxVs' operations.  ...  The authors would like to thank the ROBORDER consortium for their valuable overall contribution.  ...
doi:10.5281/zenodo.3497083 fatcat:kjoslohqr5dqxptqb4g2l6uxdm

NAVIG: augmented reality guidance system for the visually impaired

Brian F. G. Katz, Slim Kammoun, Gaëtan Parseihian, Olivier Gutierrez, Adrien Brilhault, Malika Auvray, Philippe Truillet, Michel Denis, Simon Thorpe, Christophe Jouffrais
2012 Virtual Reality  
Navigating complex routes and finding objects of interest are challenging tasks for the visually impaired.  ...  This system was developed in relation to guidance directives developed through participative design with potential users and educators for the visually impaired.  ...  Acknowledgments The NAVIG consortium includes IRIT, LIMSI, CerCo, SpikeNet Technology, NAVOCAP, CESDV - Institute for Young Blind, and the community of Grand Toulouse.  ...
doi:10.1007/s10055-012-0213-6 fatcat:gq67mbayjnh63k3f4gzzv4gk64

Autonomous Vehicles Navigation with Visual Target Tracking: Technical Approaches

Zhen Jia, Arjuna Balasuriya, Subhash Challa
2008 Algorithms  
Next, the increasing trend of using data fusion for visual-target-tracking-based autonomous vehicle navigation is discussed.  ...  It can be concluded that it is necessary to develop robust visual-target-tracking-based navigation algorithms for the broad applications of autonomous vehicles.  ...  Data fusion has been widely used in visual target tracking for autonomous vehicle navigation.  ...
doi:10.3390/a1020153 fatcat:rasy5qe7zjdi5flbhezlaeblcu

Auxiliary Tasks Speed Up Learning PointGoal Navigation [article]

Joel Ye, Dhruv Batra, Erik Wijmans, Abhishek Das
2020 arXiv   pre-print
PointGoal Navigation is an embodied task that requires agents to navigate to a specified point in an unseen environment.  ...  To overcome this, we use attention to combine representations learnt from individual auxiliary tasks.  ...  The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the  ... 
arXiv:2007.04561v2 fatcat:qc4xvbl6dbdphkex67xwjppq6y

Context-Aware Assistive Indoor Navigation of Visually Impaired Persons

Chathurika S. Silva, Prasad Wimalaratne
2020 Sensors and materials  
This paper presents an approach for context awareness in navigation for visually impaired persons via sensor-based obstacle detection, obstacle recognition, sensor fusion, and walking context analysis.  ...  A fuzzy logic model is used for safety aspect handling during visually impaired navigation.  ...  Work has been reported in the field of indoor navigation for visually impaired persons with the use of a multitude of sensors.  ... 
doi:10.18494/sam.2020.2646 fatcat:buc4ljr46bc3dijxru2wipajeq

MVP: Unified Motion and Visual Self-Supervised Learning for Large-Scale Robotic Navigation [article]

Marvin Chancán, Michael Milford
2020 arXiv   pre-print
Conversely, recent reinforcement learning (RL) based methods for visual navigation rely on the quality of GPS data reception, which may not be reliable when directly using it as ground truth across multiple  ...  In this paper, we propose a novel motion and visual perception approach, dubbed MVP, that unifies these two sensor modalities for large-scale, target-driven navigation tasks.  ...  GPS data reception situations (drifted for better visualization).  ... 
arXiv:2003.00667v1 fatcat:6d6xs2ri3rarbj6te3fry44eoa

PREFACE: TECHNICAL COMMISSION IV ON SPATIAL INFORMATION SCIENCE

S. Zlatanova, S. Dragicevic, G. Sithole
2020 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
is coordinated by 10 working groups (WG) as follows - WG1: Strengthen the work on multidimensional spatial model and representations towards seamless data fusion; WG2: Advance the semantic modelling,  ...  The situation also provides an excellent opportunity to connect the work of the Commission to addressing an important global problem.  ...
doi:10.5194/isprs-archives-xliii-b4-2020-7-2020 fatcat:3w644mlzsnah5balgrpgnfqb7u

Clustering and visualization of non-classified points from LiDAR data for helicopter navigation

Ferdinand Eisenkeil, Tobias Schafhitzel, Uwe Kühne, Oliver Deussen, Ivan Kadar
2014 Signal Processing, Sensor/Information Fusion, and Target Recognition XXIII  
Only a few lines indicate the position of threatening unclassified points, where a hazardous situation for the helicopter could occur if it comes too close.  ...  Cluster stability is a key feature to provide a smooth and undistracting visualization for the pilot.  ...  For fusion of two clusters, we apply a conservative Boolean operation and adapt the cluster over multiple sensor frames.  ...
doi:10.1117/12.2050497 fatcat:2xbgngw7qnbsnbhev72cswjfbe

An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation

Yuanhao Qu, Minghao Yang, Jiaqing Zhang, Wu Xie, Baohua Qiang, Jinlong Chen
2021 Sensors  
This work summarizes the multi-sensor fusion methods for mobile agents' navigation by: (1) analyzing and comparing the advantages and disadvantages of a single sensor in the task of navigation; (2) introducing  ...  In spite of the high performance of the single-sensor navigation method, multi-sensor fusion methods still potentially improve the perception and navigation abilities of mobile agents.  ...  VINet uses an end-to-end trainable method for VIO which performs fusion of the data at an intermediate feature-representation level.  ... 
doi:10.3390/s21051605 pmid:33668886 pmcid:PMC7956205 fatcat:7twh225phbbupjd4fv2fw6twum

Preface: Technical Commission IV on Spatial Information Science

S. Zlatanova, S. Dragicevic, G. Sithole
2020 ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
is coordinated by 10 working groups (WG) as follows - WG1: Strengthen the work on multidimensional spatial model and representations towards seamless data fusion; WG2: Advance the semantic modelling,  ...  The situation also provides an excellent opportunity to connect the work of the Commission to addressing an important global problem.  ...
doi:10.5194/isprs-annals-v-4-2020-7-2020 fatcat:b3f6ssatongszg3osn7fd7z3ae

Deep Learning for Underwater Visual Odometry Estimation

Bernardo Teixeira, Hugo Silva, Anibal Matos, Eduardo Silva
2020 IEEE Access  
Additionally, an extension of current work is proposed, in the form of a visual-inertial sensor fusion network aimed at correcting visual odometry estimate drift.  ...  Robot visual-based navigation faces several additional difficulties in the underwater context, which severely hinder both its robustness and the possibility for persistent autonomy in underwater mobile  ...  ). • Integration of visual-inertial fusion within end-to-end deep learning for underwater robot navigation pipelines.  ... 
doi:10.1109/access.2020.2978406 fatcat:zjjpiqgol5bclksbob6lnrf2lu