Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints

Usman Qayyum, Jonghyuk Kim
2021 Sensors  
This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently happen in outdoor environments, due to the short  ...  In depth drop conditions, only the partial 5-degrees-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor.  ...  Consequently, the RGB-D sensor would act virtually as a monocular camera, causing a depth dropout problem, limiting the usability of RGB-D sensors in outdoor flying conditions.  ... 
doi:10.3390/s21175913 pmid:34502806 fatcat:chlnewub5nhixc5p4tejak25ya
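The directional constraint above lends itself to a compact illustration: when depth drops out, the visual pipeline still yields a unit translation direction, which can be compared against the metric translation predicted by inertial integration. Below is a minimal numpy sketch of such a scale-free residual; the function and variable names (direction_residual, t_ins, t_cam) are illustrative, not from the paper.

    import numpy as np

    def direction_residual(t_ins, t_cam_unit):
        # Drop the unknown scale by normalizing the INS-predicted translation,
        # then compare directions; a filter would feed this into its update step.
        return t_ins / np.linalg.norm(t_ins) - t_cam_unit

    t_ins = np.array([2.0, 0.5, -0.1])        # metric translation from inertial prediction
    t_cam = np.array([0.96, 0.26, -0.05])
    t_cam /= np.linalg.norm(t_cam)            # direction-only visual measurement
    print(direction_residual(t_ins, t_cam))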

Monocular Depth Estimation by Learning from Heterogeneous Datasets [article]

Akhil Gurram, Onay Urfalioglu, Ibrahim Halfaoui, Fahd Bouzaraa and Antonio M. Lopez
2018 arXiv   pre-print
Monocular Depth Estimation is especially interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for continuous calibration strategies  ...  State-of-the-art methods for Monocular Depth Estimation are based on Convolutional Neural Networks (CNNs).  ...  and its ACCIO agency.  ... 
arXiv:1803.08018v2 fatcat:bkocmpyg6za7rhwotqnwhc5z4a

SelfVIO: Self-Supervised Deep Monocular Visual-Inertial Odometry and Depth Estimation [article]

Yasin Almalioglu, Mehmet Turan, Alp Eren Sari, Muhamad Risqi U. Saputra, Pedro P. B. de Gusmão, Andrew Markham, Niki Trigoni
2020 arXiv   pre-print
The proposed approach is able to perform VIO without the need for IMU intrinsic parameters and/or the extrinsic calibration between the IMU and the camera.  ...  estimation and single-view depth recovery network  ...  SelfVIO learns to jointly estimate 6 degrees-of-freedom (6-DoF) ego-motion and a depth map of the scene from unlabeled monocular RGB image sequences and inertial measurement unit (IMU) readings.  ...  data captured by an exteroceptive RGB camera sensor.  ... 
arXiv:1911.09968v2 fatcat:vxucv3n6mred3p6pnh5w4wrvki
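SelfVIO's self-supervision follows the familiar view-synthesis recipe: predicted depth and ego-motion are judged by how well one frame can be warped into another. The numpy sketch below computes that photometric error for grayscale images with nearest-neighbour sampling; it is a simplified stand-in for the paper's differentiable, adversarially trained pipeline, and all names are illustrative.

    import numpy as np

    def photometric_error(I_t, I_s, depth_t, K, R, t):
        # Back-project target pixels to 3D, move them into the source frame,
        # re-project, and compare intensities (nearest-neighbour sampling).
        H, W = depth_t.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
        pts = depth_t.reshape(1, -1) * (np.linalg.inv(K) @ pix)
        proj = K @ (R @ pts + t.reshape(3, 1))
        z = np.maximum(proj[2], 1e-6)
        us, vs = np.round(proj[0] / z).astype(int), np.round(proj[1] / z).astype(int)
        ok = (proj[2] > 0) & (us >= 0) & (us < W) & (vs >= 0) & (vs < H)
        return np.abs(I_s[vs[ok], us[ok]] - I_t.reshape(-1)[ok]).mean()

    # Sanity check: identity pose and flat depth should give zero error.
    I = np.random.rand(48, 64)
    print(photometric_error(I, I, np.ones((48, 64)), np.eye(3), np.eye(3), np.zeros(3)))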

A Hybrid Sparse-Dense Monocular SLAM System for Autonomous Driving [article]

Louis Gallagher, Varun Ravi Kumar, Senthil Yogamani, John B. McDonald
2021 arXiv   pre-print
In this paper, we present a system for incrementally reconstructing a dense 3D model of the geometry of an outdoor environment using a single monocular camera attached to a moving vehicle.  ...  Our novel contributions include design of hybrid sparse-dense camera tracking and loop closure, and scale estimation improvements in dense depth prediction.  ...  A calibrated monocular camera is inexpensive, lightweight, and can be used as a direction sensor providing rich photometric and geometric measurements of the scene at every frame.  ... 
arXiv:2108.07736v1 fatcat:pk4vxikqd5fq5fwhdsa5ktqebe
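One common way to realize the scale estimation mentioned above is to align the scale-ambiguous CNN depth to the metrically consistent sparse SLAM depths via a ratio of medians. A minimal sketch, assuming a boolean mask marking pixels covered by the sparse map (all names are hypothetical):

    import numpy as np

    def median_scale(sparse_depth, dense_pred, mask):
        # Robust global scale: ratio of medians over sparsely mapped pixels.
        return np.median(sparse_depth[mask]) / np.median(dense_pred[mask])

    pred = np.random.uniform(1.0, 10.0, (4, 4))   # scale-ambiguous prediction
    sparse = 2.5 * pred                           # SLAM depths at true scale
    mask = np.zeros_like(pred, dtype=bool)
    mask[::2, ::2] = True                         # sparse coverage pattern
    print(median_scale(sparse, pred, mask))       # ~2.5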

Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review

De Jong Yeong, Gustavo Velasco-Hernandez, John Barry, Joseph Walsh
2021 Sensors  
We present an overview of the three primary categories of sensor calibration and review existing open-source calibration packages for multi-sensor calibration and their compatibility with numerous commercial  ...  Sensor calibration is the foundation block of any autonomous system and its constituent sensors and must be performed correctly before sensor fusion and obstacle detection processes may be implemented.  ...  Reference [181] proposed a two-stage 3D obstacle detection architecture, named 3D cross-view fusion (3D-CVF).  ... 
doi:10.3390/s21062140 pmid:33803889 pmcid:PMC8003231 fatcat:j52leqrvwnhu5brd7lxgozvwya
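As a concrete instance of why extrinsic calibration must precede fusion: projecting lidar points into a camera image requires the lidar-to-camera transform before the intrinsics can be applied. A minimal pinhole-projection sketch in numpy (matrix names and values are illustrative):

    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_lidar, K):
        # Extrinsics first (lidar frame -> camera frame), then intrinsics.
        pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
        pts_cam = (T_cam_lidar @ pts_h.T)[:3]
        front = pts_cam[2] > 0                  # keep points in front of the camera
        uvw = K @ pts_cam[:, front]
        return (uvw[:2] / uvw[2]).T, pts_cam[2, front]   # pixel coords, depths

    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    T = np.eye(4)                               # placeholder extrinsic calibration
    uv, z = project_lidar_to_image(np.array([[1.0, 0.2, 5.0]]), T, K)
    print(uv, z)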

Gated2Depth: Real-time Dense Lidar from Gated Images [article]

Tobias Gruber, Frank Julca-Aguilar, Mario Bijelic, Werner Ritter, Klaus Dietmayer, Felix Heide
2019 arXiv   pre-print
The proposed replacement for scanning lidar systems is real-time, handles back-scatter and provides dense depth at long ranges.  ...  We depart from point scanning and demonstrate that it is possible to turn a low-cost CMOS gated imager into a dense depth camera with at least 80 m range, by learning depth from three gated images.  ...  We thank Robert Bühler, Stefanie Walz and Yao Wang for help processing the large dataset. We thank Fahim Mannan for fruitful discussions and comments on the manuscript.  ... 
arXiv:1902.04997v3 fatcat:lnqqlfjd7jexxldes35g3atm3e

Stereo Hybrid Event-Frame (SHEF) Cameras for 3D Perception [article]

Ziwei Wang, Liyuan Pan, Yonhon Ng, Zheyu Zhuang, Robert Mahony
2021 arXiv   pre-print
...  sensor and allowing for stereo depth estimation.  ...  However, conventional cameras have drawbacks such as low dynamic range, motion blur and latency due to the underlying frame-based mechanism.  ...  ACKNOWLEDGMENTS The authors would like to thank Pieter van Goor for helping with ground truth depth collection, and thank Prophesee for providing the event camera that was used in the work.  ... 
arXiv:2110.04988v1 fatcat:4xfm2m4cvrh2bktlsctn2xlyky
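Once events and frames are registered into a stereo pair, depth follows from the usual triangulation relation depth = f · B / d. A minimal sketch, with focal length in pixels and baseline in metres (the values are made up):

    import numpy as np

    def disparity_to_depth(disparity, focal_px, baseline_m):
        # depth = f * B / d; undefined (NaN) where disparity is zero.
        d = np.where(disparity > 0, disparity, np.nan)
        return focal_px * baseline_m / d

    print(disparity_to_depth(np.array([32.0, 8.0, 0.0]), focal_px=700.0, baseline_m=0.12))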

Depth Estimation from Monocular Images and Sparse Radar Data

Juan-Ting Lin, Dengxin Dai, Luc Van Gool
2020 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  
We give a comprehensive study of the fusion between RGB images and Radar measurements from different aspects and propose a working solution based on these observations.  ...  The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.  ...  Given the recently released nuScenes dataset [11] consisting of RGB, LiDAR, and Radar measurements, we are able to conduct experiments on cross-modality sensor fusion between RGB camera and Radar.  ... 
doi:10.1109/iros45743.2020.9340998 fatcat:gaqtmolqezb37fasdcxlownhmu
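A common working solution in this setting is to rasterize the projected radar returns into a sparse depth channel and fuse it with the image at the input. A minimal sketch of that channel construction (function and variable names are hypothetical; no claim is made that this matches the paper's exact fusion stage):

    import numpy as np

    def radar_depth_channel(uv, depths, H, W):
        # Scatter radar returns into a sparse map, keeping the nearest
        # return when several land on the same pixel.
        chan = np.zeros((H, W), dtype=np.float32)
        for (u, v), z in zip(uv.astype(int), depths):
            if 0 <= u < W and 0 <= v < H and (chan[v, u] == 0 or z < chan[v, u]):
                chan[v, u] = z
        return chan

    rgb = np.zeros((240, 320, 3), dtype=np.float32)
    uv = np.array([[100, 120], [100, 120]])
    z = np.array([14.2, 13.8])
    fused = np.dstack([rgb, radar_depth_channel(uv, z, 240, 320)])   # (H, W, 4) input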

Depth Estimation from Monocular Images and Sparse Radar Data [article]

Juan-Ting Lin, Dengxin Dai, Luc Van Gool
2020 arXiv   pre-print
We give a comprehensive study of the fusion between RGB images and Radar measurements from different aspects and propose a working solution based on these observations.  ...  The experiments are conducted on the nuScenes dataset, one of the first datasets featuring Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.  ...  Given the recently released nuScenes dataset [11] consisting of RGB, LiDAR, and Radar measurements, we are able to conduct experiments on cross-modality sensor fusion between RGB camera and Radar.  ... 
arXiv:2010.00058v1 fatcat:5rs6mpy3vrhalddgpihkmxci6y

LiDAR guided Small obstacle Segmentation [article]

Aasheesh Singh, Aditya Kamireddypalli, Vineet Gandhi, K Madhava Krishna
2020 arXiv   pre-print
We stress that precise calibration between LiDAR and camera is crucial for this task and thus propose a novel Hausdorff-distance-based calibration refinement method over extrinsic parameters.  ...  In this paper, we present a method to reliably detect such obstacles through a multi-modal framework of sparse LiDAR (VLP-16) and monocular vision.  ...  Software setup: the sensors and software used for recording the data are as follows: ZED stereo camera (only left feed); Velodyne Puck (VLP-16); vehicle: Mahindra E2O electric car  ... 
arXiv:2003.05970v1 fatcat:nwg7gwpypfa5plly4ttn6felii
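The Hausdorff distance underlying the calibration refinement is easy to state: the largest nearest-neighbour gap between two point sets, e.g. projected lidar edge points versus detected image edges, minimized over candidate extrinsics. A toy numpy sketch (the point sets are made up):

    import numpy as np

    def hausdorff(A, B):
        # Symmetric Hausdorff distance between point sets A (N,2) and B (M,2).
        D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return max(D.min(axis=1).max(), D.min(axis=0).max())

    lidar_edges = np.array([[10.0, 12.0], [40.0, 44.0]])   # projected lidar boundary pixels
    image_edges = np.array([[11.0, 12.5], [39.0, 45.0]])   # detected image edges
    print(hausdorff(lidar_edges, image_edges))             # lower = better-aligned extrinsics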

Camera-based vehicle velocity estimation from monocular video [article]

Moritz Kampelmühler, Michael G. Müller, Christoph Feichtenhofer
2018 arXiv   pre-print
Another contribution is an explorative study of features for monocular vehicle velocity estimation.  ...  We find that lightweight trajectory-based features outperform depth and motion cues extracted from deep ConvNets, especially for far-distance predictions where current disparity and optical flow estimators  ...  We are grateful for discussions with Axel Pinz. The GPUs used for this research were donated by NVIDIA.  ... 
arXiv:1802.07094v1 fatcat:bqvijrjdtvc3jbcblo5sl47rce
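To make "trajectory-based features" concrete: even a least-squares slope of tracked range over time is a serviceable velocity cue. A minimal sketch; the names and numbers are illustrative and not the paper's actual feature set:

    import numpy as np

    def velocity_from_track(times, ranges):
        # Least-squares slope of range over time; negative = approaching.
        A = np.vstack([times, np.ones_like(times)]).T
        slope, _ = np.linalg.lstsq(A, ranges, rcond=None)[0]
        return slope

    t = np.array([0.0, 0.1, 0.2, 0.3])
    r = np.array([30.0, 29.2, 28.5, 27.7])      # metres to the lead vehicle
    print(velocity_from_track(t, r))            # about -7.7 m/s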

Forecasting Time-to-Collision from Monocular Video: Feasibility, Dataset, and Challenges [article]

Aashi Manglik, Xinshuo Weng, Eshed Ohn-Bar, Kris M. Kitani
2020 arXiv   pre-print
We explore the possibility of using a single monocular camera to forecast the time to collision between a suitcase-shaped robot being pushed by its user and other nearby pedestrians.  ...  Our results show that our proposed multi-stream CNN is the best model for predicting time to near-collision.  ...  ACKNOWLEDGMENT This work was sponsored in part by NIDILRR (90DPGE0003), JST CREST (JPMJCR14E1) and NSF NRI (1637927).  ... 
arXiv:1903.09102v3 fatcat:muyxmzvpdrfrzhjz45sj74ddoy
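A classical monocular baseline for this task reads time-to-collision off the expansion rate of an object's image size: for constant closing speed, TTC ≈ Δt · h / Δh. A minimal sketch of that heuristic (the paper's learned multi-stream CNN replaces, rather than implements, it):

    import numpy as np

    def ttc_from_scale(h_prev, h_curr, dt):
        # Expansion-rate TTC; infinite when the object is not growing in the image.
        dh = h_curr - h_prev
        return np.inf if dh <= 0 else dt * h_curr / dh

    print(ttc_from_scale(h_prev=40.0, h_curr=44.0, dt=0.1))   # ~1.1 s to collision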

Multimodal End-to-End Autonomous Driving [article]

Yi Xiao, Felipe Codevilla, Akhil Gurram, Onay Urfalioglu, Antonio M. López
2019 arXiv   pre-print
We will compare the use of RGBD information by means of early, mid and late fusion schemes, both in multisensory and single-sensor (monocular depth estimation) settings.  ...  So far, most proposals relying on this paradigm assume RGB images as input sensor data.  ...  [10] , where different fusion schemes for RGB and Far Infrared (FIR) calibrated images are compared.  ... 
arXiv:1906.03199v1 fatcat:ak2lqcopc5hubnjfkz7yosenje
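The fusion schemes compared in the paper differ mainly in where the modalities meet. A minimal numpy sketch of the two extremes, with placeholder predictions (the actual networks and merge rules in the paper may differ):

    import numpy as np

    rgb   = np.random.rand(240, 320, 3).astype(np.float32)
    depth = np.random.rand(240, 320, 1).astype(np.float32)   # e.g. monocular depth estimate

    # Early fusion: a single network sees a 4-channel RGBD input.
    early_input = np.concatenate([rgb, depth], axis=-1)      # (240, 320, 4)

    # Late fusion: per-modality branches predict separately; outputs are merged.
    pred_rgb, pred_depth = np.array([0.1, 0.6]), np.array([0.2, 0.5])
    late_output = 0.5 * (pred_rgb + pred_depth)              # e.g. steering, throttle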

Deep Learning-Based Monocular Depth Estimation Methods—A State-of-the-Art Review

Faisal Khan, Saqib Salahuddin, Hossein Javidnia
2020 Sensors  
Relevant datasets and 13 state-of-the-art deep learning-based approaches for monocular depth estimation are reviewed, evaluated and discussed.  ...  The recent approaches for monocular depth estimation mostly rely on Convolutional Neural Networks (CNN).  ...  They do not require the alignment and calibration that are important for multi-camera or multi-sensor depth measurement systems.  ... 
doi:10.3390/s20082272 pmid:32316336 pmcid:PMC7219073 fatcat:kfio24wembgxhkheafhd74gra4
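Evaluations in this literature share a small set of standard metrics: absolute relative error, RMSE, and threshold accuracy δ < 1.25. A minimal numpy implementation of these (valid-pixel masking omitted for brevity):

    import numpy as np

    def depth_metrics(pred, gt):
        # Standard monocular-depth evaluation metrics.
        ratio = np.maximum(pred / gt, gt / pred)
        return {
            "abs_rel": float(np.mean(np.abs(pred - gt) / gt)),
            "rmse": float(np.sqrt(np.mean((pred - gt) ** 2))),
            "delta<1.25": float(np.mean(ratio < 1.25)),
        }

    gt = np.array([2.0, 5.0, 10.0])
    pred = np.array([2.2, 4.6, 11.0])
    print(depth_metrics(pred, gt))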

End-to-End Learning of Semantic Grid Estimation Deep Neural Network with Occupancy Grids

Ozgur Erkent, Christian Wolf, Christian Laugier
2019 Unmanned Systems  
It integrates an occupancy grid, which computes the grid states with a Bayesian filter approach, with semantic segmentation information from monocular RGB images, which is obtained with  ...  The proposed method is tested on various datasets (KITTI, Inria-Chroma and SYNTHIA) and different deep neural network architectures are compared.  ...  We thank Jean-Alix David and Jérôme Lussereau for their assistance with data collection.  ... 
doi:10.1142/s2301385019410036 fatcat:egfnhvlqgfh7dddxtfydtvsnxy
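The Bayesian filter behind such occupancy grids is usually run in log-odds form, which turns each measurement update into an addition. A minimal single-cell sketch; the inverse sensor model probabilities here are invented:

    import numpy as np

    def update_cell(logodds, p_meas):
        # Log-odds Bayesian occupancy update: additive per measurement.
        return logodds + np.log(p_meas / (1.0 - p_meas))

    l = 0.0                              # prior p = 0.5
    for p in (0.7, 0.7, 0.4):            # two "occupied" hits, one "free" miss
        l = update_cell(l, p)
    print(1.0 / (1.0 + np.exp(-l)))      # posterior occupancy, ~0.78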
Showing results 1 — 15 out of 760 results