183 results (showing 1–15)

Ceiling-Vision-Based Mobile Object Self-Localization: A Composite Framework

Alfredo Cuzzocrea
2021 Journal of Visual Language and Computing  
Vision-based self-localization of mobile objects is currently an open research field [29], and an increasing number of new methods are continuously proposed.  ...  A composite framework for supporting mobile object self-localization based on ceiling vision is presented  ...
doi:10.18293/jvlc2021-n2-019 fatcat:55nc6mgj6jen7bdjfhvirbmt5u

Monocular vision SLAM for indoor aerial vehicles

Koray Celik, Soon-Jo Chung, Matthew Clausman, Arun K. Somani
2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM.  ...  We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities.  ...  We integrate this ranging technique with SLAM to achieve autonomous indoor navigation of an MAV.  ...
doi:10.1109/iros.2009.5354050 dblp:conf/iros/CelikCCS09 fatcat:abxdddwkjjb57h3q2cmdoxnhsy
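The ranging idea mentioned in this entry can be illustrated with a minimal sketch: under a flat-floor assumption, a pinhole camera at known height and tilt maps the image row of a floor feature to a forward range. This is only a generic illustration of monocular ranging, not the paper's orthogonality-based formulation; all camera parameters below are hypothetical.

```python
import math

def ground_plane_range(v_pixel, cam_height_m, tilt_rad, fy, cy):
    """Estimate forward range to a floor point seen at image row v_pixel.

    Assumes a pinhole camera at height cam_height_m above a flat floor,
    pitched down by tilt_rad, with vertical focal length fy and principal
    point row cy (hypothetical values, for illustration only).
    """
    # Angle of the ray through row v_pixel, measured from the optical axis.
    ray_angle = math.atan2(v_pixel - cy, fy)
    # Total depression angle of that ray below the horizontal.
    depression = tilt_rad + ray_angle
    if depression <= 0:
        raise ValueError("ray does not intersect the floor")
    # Horizontal distance to the floor intersection.
    return cam_height_m / math.tan(depression)

# Example with made-up intrinsics: 640x480 camera, fy ~ 500 px.
print(ground_plane_range(v_pixel=400, cam_height_m=1.2,
                         tilt_rad=math.radians(10), fy=500.0, cy=240.0))
```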

Monocular Vision SLAM for Indoor Aerial Vehicles

Koray Çelik, Arun K. Somani
2013 Journal of Electrical and Computer Engineering  
By exploiting the architectural orthogonality of indoor environments, we introduce a new method to estimate range and vehicle states from a monocular camera for vision-based SLAM.  ...  We experimentally validate the proposed algorithms on a fully self-contained micro aerial vehicle (MAV) with sophisticated on-board image processing and SLAM capabilities.  ...  An MAV with vision-based online simultaneous localization and mapping (SLAM) capabilities can pave the way for an ultimate GPS-free navigation tool for both urban outdoors and architectural indoors.  ...
doi:10.1155/2013/374165 fatcat:cwpgimadibaprl4uhnvkvjyblq

Application of Augmented Reality and Robotic Technology in Broadcasting: A Survey

Dingtian Yan, Huosheng Hu
2017 Robotics  
As an innovative technique, Augmented Reality (AR) has been gradually deployed in the broadcasting, videography, and cinematography industries.  ...  In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting.  ...
doi:10.3390/robotics6030018 fatcat:rtutdrwjlvb3zaybx4mbudxxb4

ESPEE: Event-Based Sensor Pose Estimation Using an Extended Kalman Filter

Fabien Colonnier, Luca Della Vedova, Garrick Orchard
2021 Sensors  
In this paper, we present an event-based algorithm that relies on an Extended Kalman Filter for 6-Degree of Freedom sensor pose estimation.  ...  Event-based vision sensors show great promise for use in embedded applications requiring low-latency passive sensing at a low computational cost.  ...  The authors of [28] proposed an event-based method for tracking the relative 6-DoF pose between an event-based sensor and an object in the world.  ... 
doi:10.3390/s21237840 pmid:34883852 fatcat:imjzxox6qrepzdq5j7ahy3ikvy
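As a rough illustration of the filtering machinery this entry refers to, the sketch below shows a generic Extended Kalman Filter predict/update skeleton in NumPy. It is not the event-driven ESPEE formulation or its 6-DoF pose parameterization; the motion and measurement models here are toy placeholders.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Generic EKF skeleton: x is the state estimate, P its covariance."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)
        self.P = np.asarray(P0, dtype=float)

    def predict(self, f, F, Q):
        """Propagate through motion model f with Jacobian F and process noise Q."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        """Correct with measurement z, model h, Jacobian H, measurement noise R."""
        y = np.asarray(z, dtype=float) - h(self.x)   # innovation
        S = H @ self.P @ H.T + R                     # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Toy 1D constant-velocity example with position-only measurements.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
ekf = ExtendedKalmanFilter(x0=[0.0, 1.0], P0=np.eye(2))
ekf.predict(f=lambda x: F @ x, F=F, Q=0.01 * np.eye(2))
ekf.update(z=[0.12], h=lambda x: H @ x, H=H, R=np.array([[0.05]]))
print(ekf.x)
```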

A Vision based Vehicle Detection System

Himanshu Chandel, Sonia Vatta
2015 Communications on Applied Electronics  
This research work investigates techniques for monocular vision-based vehicle detection and presents a system that can robustly detect and track vehicles in images.  ...  The system consists of three major modules: shape analysis based on the Histogram of Oriented Gradients (HOG) as the main feature descriptor, a machine-learning part based on a support vector machine  ...  A part is any element of an object or scene that can be reliably detected using only local image evidence. In a part-based model, each part represents local visual properties.  ...
doi:10.5120/cae2015651767 fatcat:kzxnbjx2hjgqfdgh4hg4qa5frq
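A minimal sketch of the HOG-plus-SVM pipeline named in this entry, using scikit-image and scikit-learn. The training patches and labels are random placeholders; a real detector would train on cropped vehicle/background examples and scan the classifier over a sliding window with non-maximum suppression.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(gray_patch):
    """HOG descriptor of a fixed-size grayscale patch (e.g. 64x64)."""
    return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Placeholder training data: random patches standing in for cropped
# vehicle (label 1) and background (label 0) examples.
rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64))
labels = np.array([1] * 20 + [0] * 20)

X = np.array([hog_descriptor(p) for p in patches])
clf = LinearSVC(C=1.0).fit(X, labels)

# Classify a new candidate patch (e.g. one window of a sliding-window scan).
candidate = rng.random((64, 64))
print("vehicle" if clf.predict([hog_descriptor(candidate)])[0] == 1 else "background")
```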

Low-Cost Visually Intelligent Robots with EoT

David Moloney, Dexmont Pena, Aubrey Dunne, Alireza Dehghani, Gary Baugh, Sam Caulfield, Kevin Lee, Xiaofan Xu, Maximilian Müller, Remi Gastaud, Ovidiu Vesa, Oscar Deniz Suarez (+2 others)
2016 Zenodo  
This paper describes the development of a low-cost, low-power visual-intelligence and robotics platform based on the H2020 EoT (Eyes of Things) platform and the Movidius Myriad2 VPU (Vision Processing Unit), together with the associated machine vision, communications, and motor-control libraries and the Movidius Fathom deep-learning framework.  ...
doi:10.5281/zenodo.56226 fatcat:xlnwhwhtmrc4hmshbggrylrkkm

State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence

Xue-Bo Jin, Ting-Li Su, Jian-Lei Kong, Yu-Ting Bai, Bei-Bei Miao, Chao Dou
2018 Applied Sciences  
The performance of an autonomous car heavily depends on the accuracy and reliability of its environmental perception technologies, including self-localization and perception of obstacles.  ...  The tracking algorithms are based mainly on estimation theory. Here, the target is a general one, either the robot itself or others.  ...  The vision-based approach is shown in Figure 5. Natural landmarks, such as corridors, edges, doors, walls, and ceiling lights, are objects or features that are part of the environment.  ...
doi:10.3390/app8030379 fatcat:zx2u5ox4ivcvtm2vb2yg2kmbqm

IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning

Jacky Chow, Derek Lichti, Jeroen Hol, Giovanni Bellusci, Henk Luinge
2014 Robotics  
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields.  ...  Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid "continuous stop-and-go" mobile mapping system called Scannect  ...
doi:10.3390/robotics3030247 fatcat:3uqczkikubb2ff664ry65ms2jq

Simultaneous Localization and Mapping: A Survey of Current Trends in Autonomous Driving

Guillaume Bresson, Zayed Alsayed, Li Yu, Sebastien Glaser
2017 IEEE Transactions on Intelligent Vehicles  
We mostly focus on approaches building and reusing long-term maps in various conditions (weather, season, etc.).  ...  The growing interest regarding self-driving cars has given new directions to localization and mapping techniques.  ...  The main advantage of a sphere is to cover a given area and not only one position. An online registration method based on monocular inputs serves to localize the vehicle.  ... 
doi:10.1109/tiv.2017.2749181 fatcat:ohjoahw24zakrmrleg6vogzg3q

Enabling Flexibility in Manufacturing by Integrating Shopfloor and Process Perception for Mobile Robot Workers

Angelos Christos Bavelos, Niki Kousi, Christos Gkournelos, Konstantinos Lotsaris, Sotiris Aivaliotis, George Michalos, Sotiris Makris
2021 Applied Sciences  
This paper presents a smart execution control framework for enabling the autonomous operation of flexible mobile robot workers.  ...

Sliding Window Mapping for Omnidirectional RGB-D Sensors

Nicolas Dalmedico, Marco Antônio Simões Teixeira, Higor Barbosa Santos, Rafael de Castro Martins Nogueira, Lúcia Valéria Ramos de Arruda, Flávio Neves, Pipa, Ramos, Schneider de Oliveira
2019 Sensors  
The mapping strategy is based on two environment maps, a local map for instantaneous perception, and a global map for perception memory.  ...  This paper presents an omnidirectional RGB-D (RGB + Distance fusion) sensor prototype using an actuated LIDAR (Light Detection and Ranging) and an RGB camera.  ...  The mobile robot's skills (such as self-localization and safe navigation through an environment) depend on the quality of its sensors, which support the robot's perception of obstacles, objects, people  ... 
doi:10.3390/s19235121 pmid:31766772 pmcid:PMC6928814 fatcat:j3jgkpktnzabnluyivwfn6pxfa
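The local-map/global-map split described in this entry can be sketched with a toy occupancy-grid example: each local perception grid is blended into a persistent global grid at the robot's current offset. This is a simplification for illustration only, not the paper's omnidirectional RGB-D pipeline; grid sizes and the blending weight are arbitrary.

```python
import numpy as np

GLOBAL_SIZE = 200   # cells
LOCAL_SIZE = 20     # cells

# 0 = unknown, values in (0, 1] = occupancy evidence (illustrative encoding).
global_map = np.zeros((GLOBAL_SIZE, GLOBAL_SIZE))

def integrate_local(global_map, local_map, origin_row, origin_col, decay=0.7):
    """Fold a local perception map into the global memory map.

    The local map is blended into the global map at the robot's current
    grid offset; 'decay' weights new evidence against stored evidence.
    """
    r, c = origin_row, origin_col
    h, w = local_map.shape
    window = global_map[r:r + h, c:c + w]
    global_map[r:r + h, c:c + w] = decay * local_map + (1 - decay) * window
    return global_map

# Fake local map with one detected obstacle cell.
local_map = np.zeros((LOCAL_SIZE, LOCAL_SIZE))
local_map[10, 5] = 1.0
integrate_local(global_map, local_map, origin_row=90, origin_col=100)
print(global_map[100, 105])   # evidence now stored in the global map
```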

Fireground location understanding by semantic linking of visual objects and building information models

Florian Vandecasteele, Bart Merci, Steven Verstockt
2017 Fire Safety Journal
This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques.  ...  Based on these matches, estimations can be generated of camera, objects and event positions in the BIM model, transforming it from a static source of information into a rich, dynamic data provider.  ...  In our framework, contrarily, we focus on computer vision based object detection and localization and semantically compare these results with the location of objects in the building model.  ... 
doi:10.1016/j.firesaf.2017.03.083 fatcat:wrnkmyfq4zcabd6gilruhbkcfa

Visual detection of vehicles using a bag-of-features approach

Pedro Pinto, Ana Tome, Vitor Santos
2013 13th International Conference on Autonomous Robot Systems
Section IV introduces the Visual-Perception Layer based on Monocular Vision.  ...  Due to limitations found in image-based mobile robot localization approaches regarding illumination changes, and aiming at the development of an efficient self-localization solution that can work in places  ...  actions (e.g., the uncertainty of the robot's self-localization and the expected object pose).  ...
doi:10.1109/robotica.2013.6623539 fatcat:ialsxj53yzfkfe5f766krtkkrq
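A hedged sketch of a generic bag-of-visual-words classifier of the kind this entry names, using ORB descriptors, a k-means vocabulary, and a linear SVM. It is not the paper's exact detector, and the images here are random placeholders standing in for vehicle and background crops.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

orb = cv2.ORB_create(nfeatures=200)

def orb_descriptors(gray_u8):
    """ORB descriptors for one 8-bit grayscale image (None if no keypoints)."""
    _, desc = orb.detectAndCompute(gray_u8, None)
    return desc

def bof_histogram(desc, kmeans):
    """Quantize descriptors against the visual vocabulary into a normalized histogram."""
    hist = np.zeros(kmeans.n_clusters)
    if desc is not None:
        for word in kmeans.predict(desc.astype(np.float32)):
            hist[word] += 1
        hist /= max(hist.sum(), 1.0)
    return hist

# Placeholder data: random 8-bit images standing in for vehicle / non-vehicle crops.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (96, 96), dtype=np.uint8) for _ in range(20)]
labels = np.array([1] * 10 + [0] * 10)

# Build the visual vocabulary from all training descriptors.
all_desc = np.vstack([d for d in map(orb_descriptors, images) if d is not None])
kmeans = KMeans(n_clusters=32, n_init=10).fit(all_desc.astype(np.float32))

# Train a linear SVM on the bag-of-features histograms and classify one image.
X = np.array([bof_histogram(orb_descriptors(im), kmeans) for im in images])
clf = LinearSVC().fit(X, labels)
print(clf.predict([bof_histogram(orb_descriptors(images[0]), kmeans)]))
```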

Machine Vision for intelligent Semi-Autonomous Transport (MV-iSAT)

Kenneth Mapanga, Veera Ragavan Sampath Kumar
2012 Procedia Engineering  
It is hoped that the iSAT platform will provide the basis for future work on advanced FPGA-based machine-vision algorithms for mobile robotics.  ...  The primary focus was to develop a vision-based system suitable for the navigation and mapping of an indoor, single-floor environment.  ...  The primary objective of using hardware-software co-design to develop a real-time machine vision system for a mobile robot has been met.  ...
doi:10.1016/j.proeng.2012.07.190 fatcat:23upxzsiufg3rjosxuijpbpmya