3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
[article]
2017
arXiv
pre-print
Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well ...
They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. ...
Google's self-driving car project [18] relies on a combination of lasers, radars, and cameras in the form of a roof-mounted sensor pod to navigate pre-mapped environments. ...
arXiv:1708.09839v1
fatcat:sa2ze4tkg5dqng44u2tjtf2ygu
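The entry above hinges on knowing each camera's pose on the vehicle. As a minimal sketch of the underlying geometry only (not the authors' calibration pipeline), assuming a hypothetical four-camera surround rig with shared intrinsics K and yaw-only extrinsics, one can project a vehicle-frame point into each camera of the 360-degree rig (cheirality check only; the image-bounds check is omitted):

```python
import numpy as np

def cam_from_body(yaw):
    """Rotation taking body-frame points (x forward, y left, z up) into a
    horizontally mounted camera yawed by `yaw`, whose frame follows the
    usual optical convention (z forward, x right, y down). Translation
    between cameras is omitted for brevity."""
    c, s = np.cos(yaw), np.sin(yaw)
    # Columns are the camera's right, down, and forward axes in body coords.
    R_body_from_cam = np.array([[  s, 0.0,   c],
                                [ -c, 0.0,   s],
                                [0.0, -1.0, 0.0]])
    return R_body_from_cam.T

def project(K, R_cb, p_body):
    """Pinhole projection; returns (u, v), or None if the point lies
    behind the camera."""
    p_cam = R_cb @ p_body
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
p = np.array([5.0, 1.0, 0.0])               # 5 m ahead of the car, 1 m left
for i, yaw in enumerate(np.deg2rad([0, 90, 180, 270])):
    print(f"camera {i}:", project(K, cam_from_body(yaw), p))
```

The rear and right cameras return None for this point, which is the cheirality test a surround system would use before attempting feature matching.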
Positioning and perception in LIDAR point clouds
2021
Digital signal processing (Print)
Acknowledgment The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the Autonomous Systems National Laboratory Program. ...
Visual odometry using LIDARs: Recently, several visual-odometry algorithms have been proposed to compute the motion of a vehicle in real time using only the continuously streaming data gathered by the LIDAR ...
These methods first estimate the trajectories of the camera and Lidar sensors either by visual odometry and scan matching techniques, or by exploiting IMU and GNSS measurements. ...
doi:10.1016/j.dsp.2021.103193
fatcat:assc6y3epfc5zjcke5pikfs6ea
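The snippets above mention estimating sensor trajectories by scan matching. As a hedged sketch of the core step only (the paper's actual method is not reproduced here), one point-to-point ICP iteration matches scans by nearest neighbors and solves the rigid fit in closed form:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration: match every source point to its
    nearest target point, then solve for the rigid (R, t) minimizing the
    squared error via the Kabsch/SVD closed form.
    source, target: (N, 3) and (M, 3) arrays of LIDAR points."""
    _, idx = cKDTree(target).query(source)       # nearest-neighbor matches
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    H = (source - mu_s).T @ (matched - mu_t)     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy check: the target is the source rotated 5 degrees about z and shifted.
rng = np.random.default_rng(0)
src = rng.uniform(-10, 10, size=(500, 3))
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
tgt = src @ R_true.T + np.array([0.5, -0.2, 0.0])
R, t = icp_step(src, tgt)
print(np.round(R, 3), np.round(t, 3))
```

In practice this step is iterated until the matches stabilize; real LIDAR odometry pipelines add outlier rejection and point-to-plane residuals on top of this skeleton.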
A Review of Visual Odometry Methods and Its Applications for Autonomous Driving
[article]
2020
arXiv
pre-print
This paper presents a recent review of methods pertinent to visual odometry, with an emphasis on autonomous driving. ...
In attempts to develop exclusive vision-based systems, visual odometry is often considered as a key element to achieve motion estimation and self-localisation, in place of wheel odometry or inertial measurements ...
Current autonomous vehicles rely on a variety of sensors to achieve self-localisation and obstacle avoidance. These can include a combination of laser scanners, radar, GPS, and camera. ...
arXiv:2009.09193v1
fatcat:qwc2ov2vanbovby377i23hu7vm
Review of visual odometry: types, approaches, challenges, and applications
2016
SpringerPlus
Therefore, various sensors, techniques, and systems for mobile robot positioning, such as wheel odometry, laser/ultrasonic odometry, global positioning system (GPS), global navigation satellite system (GNSS ...
VO is compared with the most common localization sensors and techniques, such as inertial navigation systems, global positioning systems, and laser sensors. ...
Different sensors and techniques, such as wheel odometry, GPS, INS, sonar and laser sensors, and visual sensors, can be utilized for localization tasks. Each technique has its own drawbacks. ...
doi:10.1186/s40064-016-3573-7
pmid:27843754
pmcid:PMC5084145
fatcat:pfha22xk35gvra22i6sx4ccq2m
Visual mapping for natural gas pipe inspection
2014
The international journal of robotics research
In this work, we introduce a visual odometry-based system using calibrated fisheye imagery and sparse structured lighting to produce high-resolution 3D textured surface models of the inner pipe wall. ...
Our work extends state-of-the-art visual odometry and mapping for fisheye systems to incorporate weak geometric constraints based on prior knowledge of the pipe components into a sparse bundle adjustment ...
A sample dense 3D appearance map of the internal pipe structure was produced for one of the datasets using the visual odometry and scene reconstruction results. ...
doi:10.1177/0278364914550133
fatcat:qwtnmicstbfkvnb54yctx7i66i
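The abstract above folds weak geometric priors about the pipe into a sparse bundle adjustment. A minimal sketch of that idea follows, with hypothetical variable packing and weights, and with camera poses held fixed (unlike a real BA): the least-squares residual stacks pinhole reprojection errors with a softly weighted penalty pulling 3D points toward a known pipe radius.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, obs_uv, f, pipe_radius, weight):
    """Stacked residual for a toy 'BA with weak constraints':
    - reprojection error of each 3D point (camera at the origin, looking
      down the pipe axis z; obs_uv are pixel offsets from the principal
      point),
    - a weakly weighted prior that points lie near the cylinder
      hypot(x, y) = pipe_radius."""
    pts = x.reshape(-1, 3)
    uv = f * pts[:, :2] / pts[:, 2:]
    reproj = (uv - obs_uv).ravel()
    radial = np.hypot(pts[:, 0], pts[:, 1]) - pipe_radius
    return np.concatenate([reproj, weight * radial])

obs = np.array([[100.0, 0.0], [0.0, 100.0]])       # observed pixel offsets
x0 = np.array([[0.9, 0.0, 7.0], [0.0, 0.9, 7.0]]).ravel()
sol = least_squares(residuals, x0, args=(obs, 700.0, 1.0, 5.0))
print(sol.x.reshape(-1, 3))
```

The `weight` argument is what makes the constraint "weak": the optimizer trades reprojection accuracy against the cylinder prior instead of enforcing the pipe shape exactly.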
Perception for a river mapping robot
2011
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems
We describe three key components that use computer vision, laser scanning, and inertial sensing to follow the river without the use of a prior map, estimate motion of the rotorcraft, ensure collision-free ...
We present a multimodal perception system to be used for the active exploration and mapping of a river from a small rotorcraft flying a few meters above the water. ...
We use the approach to visual odometry presented in [20] . This approach makes use of the stereo setup by triangulating 3D points based on matches in one stereo pair. ...
doi:10.1109/iros.2011.6095040
dblp:conf/iros/ChambersANRKCHSS11
fatcat:rncd337d6za5vjihyerwoxx6ae
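The visual-odometry approach quoted in this entry triangulates 3D points from matches in one stereo pair. Assuming a rectified pair (a simplification; the cited approach [20] is not reproduced here), the geometry reduces to similar triangles:

```python
import numpy as np

def triangulate_rectified(uL, vL, uR, f, cx, cy, baseline):
    """Back-project a stereo match (same row in both rectified images) to a
    3D point in the left-camera frame. Similar triangles give the depth
    Z = f * B / d, with disparity d = uL - uR in pixels."""
    d = uL - uR
    if d <= 0:
        return None                      # no valid disparity
    Z = f * baseline / d
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])

# Example: f = 700 px, 12 cm baseline, 8 px disparity -> depth 10.5 m.
print(triangulate_rectified(uL=360.0, vL=250.0, uR=352.0,
                            f=700.0, cx=320.0, cy=240.0, baseline=0.12))
```

Because depth grows as 1/d, small disparities (distant points) carry large depth uncertainty, which is why such systems weight or discard far points.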
Key technologies for intelligent and safer cars - From motion estimation to predictive collision avoidance
2010
2010 IEEE International Symposium on Industrial Electronics
In this paper, we present approaches to two of the major tasks for autonomous driving in urban environments: self-localization and egomotion estimation, and detection of dynamic objects such as cars and ...
For each of these tasks we present a summary of the techniques we employ and results on real data. All modules have been implemented and tested on our autonomous car platform SmartTer. ...
Fig. 3: Recovered 3D map and camera positions (top view).
Fig. 4: Comparison between visual odometry (red dashed line) and ground truth (black solid line). The entire trajectory is 3 km long. ...
doi:10.1109/isie.2010.5636880
fatcat:ernnktdq5zeqpfib3r2glg2era
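Fig. 4 of this entry compares a visual-odometry trajectory against ground truth over 3 km. One common way to quantify such a comparison (a sketch of a standard metric, not necessarily the one used in the paper) is the root-mean-square absolute trajectory error over time-associated positions, assuming both trajectories are expressed in the same world frame:

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two
    time-associated (N, 3) position sequences in a common world frame."""
    err = estimated - ground_truth
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

# Toy check: a constant 0.5 m lateral offset yields an RMSE of exactly 0.5 m.
gt = np.column_stack([np.linspace(0.0, 3000.0, 100),
                      np.zeros(100), np.zeros(100)])
est = gt + np.array([0.0, 0.5, 0.0])
print(ate_rmse(est, gt))
```

When the two trajectories live in different frames, a rigid alignment is applied first; that variant is the ATE metric popularized by SLAM benchmarks.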
EU Long-term Dataset with Multiple Sensors for Autonomous Driving
[article]
2020
arXiv
pre-print
One of the major purposes of using sensors is to provide environment perception for vehicle understanding, learning and reasoning, and ultimately interacting with the environment. ...
while exploiting ROS (Robot Operating System)-based software to process the sensory data. ...
Furthermore, as we take privacy very seriously and handle personal data in line with the EU's data protection law (i.e. the General Data Protection Regulation (GDPR)), we used deep learning-based methods ...
arXiv:1909.03330v3
fatcat:bncxrkpoevajdk6s6mnujylzi4
Stereo vision based indoor/outdoor navigation for flying robots
2013
2013 IEEE/RSJ International Conference on Intelligent Robots and Systems
Keyframe-based stereo odometry is fused with IMU data, compensating for time delays that are induced by the vision pipeline. The system state estimate is used for control and on-board 3D mapping. ...
Autonomous waypoint navigation, obstacle avoidance and flight control is implemented on-board. The system does not require a special environment, artificial markers or an external reference system. ...
Furthermore, we would like to thank Supercomputing Systems (SCS) in Switzerland as well as Uwe Franke and Stefan Gehrig (Daimler AG, Germany) for the FPGA implementation of SGM. ...
doi:10.1109/iros.2013.6696922
dblp:conf/iros/SchmidTRHS13
fatcat:4tozyfoznra2nc5uvh6doi6vse
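This entry fuses delayed vision measurements with high-rate IMU data. One common way to handle the latency (a hedged sketch under strong simplifications, not necessarily the authors' filter) is to buffer IMU-propagated states and interpolate the state at the image timestamp before applying the visual update:

```python
from bisect import bisect_left
import numpy as np

class StateBuffer:
    """Buffer of (timestamp, state) pairs produced by high-rate IMU
    propagation. When a camera measurement arrives with an older stamp
    (vision-pipeline latency), the state at that stamp is linearly
    interpolated so the visual update is applied at the right point in
    time. Real filters also re-propagate past the update; omitted here."""
    def __init__(self):
        self.times, self.states = [], []

    def push(self, t, x):
        self.times.append(t)
        self.states.append(np.asarray(x, dtype=float))

    def state_at(self, t):
        i = bisect_left(self.times, t)
        if i == 0 or i == len(self.times):
            raise ValueError("timestamp outside buffered range")
        t0, t1 = self.times[i - 1], self.times[i]
        w = (t - t0) / (t1 - t0)
        return (1 - w) * self.states[i - 1] + w * self.states[i]

buf = StateBuffer()
for k in range(10):                       # 100 Hz IMU-propagated states
    buf.push(k * 0.01, [0.1 * k, 0.0, 0.0])
print(buf.state_at(0.033))                # state at a delayed image stamp
```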
State of the Art in Vision-Based Localization Techniques for Autonomous Navigation Systems
2021
IEEE Access
They provided a strategy to improve localization using laser maps with line fitting on Radar-based SLAM. ...
The five main categories of single-based approaches are wheel odometry, inertial odometry, radar odometry, visual odometry (VO), and laser-based odometry. ...
doi:10.1109/access.2021.3082778
fatcat:bgt6qrpdcngnrisgnday74ohsm
SelfVIO: Self-Supervised Deep Monocular Visual-Inertial Odometry and Depth Estimation
[article]
2020
arXiv
pre-print
In this study, we introduce a novel self-supervised deep learning-based VIO and depth map recovery approach (SelfVIO) using adversarial training and self-adaptive visual-inertial sensor fusion. ...
... approaches on the KITTI, EuRoC, and Cityscapes datasets. ...
Although learning-based methods use raw input data similar to the dense VO and VIO methods, they also extract features related to odometry, depth, and optical flow without explicit mathematical modeling ...
arXiv:1911.09968v2
fatcat:vxucv3n6mred3p6pnh5w4wrvki
IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning
2014
Robotics
A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. ...
A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. ...
Chow prepared the manuscript; acquired the datasets using the laser scanner, Kinects, and IMU; developed software for manipulating, processing, and integrating data from the sensors; formulated the modified ...
doi:10.3390/robotics3030247
fatcat:3uqczkikubb2ff664ry65ms2jq
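This entry couples a 5-point visual-odometry formulation into an implicit IEKF. The classical 5-point step itself, sketched here with OpenCV's off-the-shelf implementation rather than the paper's tightly coupled variant, estimates the essential matrix from matched image points and decomposes it into relative rotation and a translation direction:

```python
import numpy as np
import cv2

def relative_pose(pts1, pts2, K):
    """Classical 5-point step: estimate the essential matrix from matched
    pixel coordinates with RANSAC, then decompose it into a rotation and a
    unit-norm translation direction (scale is unobservable)."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

# Synthetic check: two views separated by a pure translation t_true.
rng = np.random.default_rng(1)
P = rng.uniform([-2, -2, 4], [2, 2, 8], size=(50, 3))   # points, view-1 frame
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.3, 0.0, 0.1])
h1 = P @ K.T
pts1 = h1[:, :2] / h1[:, 2:]
h2 = (P - t_true) @ K.T                                 # camera moved by t_true
pts2 = h2[:, :2] / h2[:, 2:]
R, t = relative_pose(pts1, pts2, K)
# Expect R ~ identity and t proportional to -t_true (x2 = R x1 + t convention).
print(np.round(R, 2), np.round(t.ravel(), 2))
```

The scale ambiguity of `t` is exactly what the paper's tight coupling with the IMU resolves: inertial measurements provide the metric scale that a monocular 5-point solution cannot.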
3D Sensors for Sewer Inspection: A Quantitative Review and Analysis
2021
Sensors
The acquired point clouds from the sensors are compared with reference 3D models using the cloud-to-mesh metric. ...
The 3D reconstruction performance of the sensors is assessed in both a laboratory setup and in an outdoor above-ground setup. ...
[Table excerpt: surveyed sewer-inspection systems by year and sensing principle, including laser profilometry (2016-2018), LiDAR with dense stereo matching (2017), and ToF with visual-inertial odometry (2018). ...]
doi:10.3390/s21072553
pmid:33917392
fatcat:qxvtr6vghfhdriwj7fmyu62xuu
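The review above scores sensors with the cloud-to-mesh metric, i.e., the distance from each captured point to the nearest point on a reference mesh surface. A minimal sketch using the trimesh library (an assumed tooling choice, not the one stated in the paper):

```python
import numpy as np
import trimesh

def cloud_to_mesh_distances(points, mesh):
    """Distance from every captured point to the closest point on the
    surface of the reference mesh (the 'cloud-to-mesh' metric)."""
    _, distances, _ = trimesh.proximity.closest_point(mesh, points)
    return distances

# Toy example: sample a unit-sphere mesh and add a 1% radial error.
mesh = trimesh.creation.icosphere(subdivisions=3, radius=1.0)
pts = mesh.sample(1000) * 1.01
d = cloud_to_mesh_distances(pts, mesh)
print(f"mean {d.mean():.4f}, RMS {np.sqrt((d ** 2).mean()):.4f}")
```

Unlike cloud-to-cloud distances, this metric does not penalize a sparse reference: every query point is measured against the continuous surface, which is why the paper compares sensor outputs against reference 3D models.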
Large-scale Localization Datasets in Crowded Indoor Spaces
[article]
2021
arXiv
pre-print
We present a benchmark of modern visual localization algorithms on these challenging datasets showing superior performance of structure-based methods using robust image features. ...
They were captured in a large shopping mall and a large metro station in Seoul, South Korea, using a dedicated mapping platform consisting of 10 cameras and 2 laser scanners. ...
For calibration between the base and the cameras, we employ an online self-calibration via SFM rather than an offline method. ...
arXiv:2105.08941v1
fatcat:rpyu66x5h5h2zgigsifxmkbz6u
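The benchmark above reports superior performance for structure-based localization methods. The core pose-recovery step of such methods, sketched here with OpenCV's PnP solver under the assumption that 2D-3D matches are already established (feature extraction and matching are omitted), looks like this:

```python
import numpy as np
import cv2

def localize(query_pts_2d, map_pts_3d, K):
    """Structure-based localization step: given query-image keypoints
    already matched to 3D map points, recover the camera pose with
    PnP inside RANSAC."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_pts_3d.astype(np.float64), query_pts_2d.astype(np.float64),
        K, distCoeffs=None, reprojectionError=3.0, confidence=0.999)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# Synthetic check: reprojecting known map points recovers the identity pose.
rng = np.random.default_rng(2)
pts3d = rng.uniform([-2, -2, 4], [2, 2, 8], size=(30, 3))
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
h = pts3d @ K.T
pts2d = h[:, :2] / h[:, 2:]
R, t = localize(pts2d, pts3d, K)
print(np.round(R, 2), np.round(t.ravel(), 2))   # ~identity, ~zero
```

The robust image features the abstract credits matter precisely because crowded scenes produce many wrong 2D-3D matches; the RANSAC loop around PnP is what tolerates them.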