
Estimating Drivable Collision-Free Space from Monocular Video

Jian Yao, Srikumar Ramalingam, Yuichi Taguchi, Yohei Miki, Raquel Urtasun
2015 IEEE Winter Conference on Applications of Computer Vision (WACV)  
In this paper we propose a novel algorithm for estimating the drivable collision-free space for autonomous navigation of on-road and on-water vehicles.  ...  represents a column in the image and its label denotes a position that separates the free space from the obstacles.  ...  We labeled the drivable collision-free space for all images. Boat: We mounted a GoPro camera on a boat and collected monocular video sequences while maneuvering the boat near a dock.  ... 
doi:10.1109/wacv.2015.62 dblp:conf/wacv/YaoRTMU15 fatcat:4thuxhdtx5gb5eo3qady4z2xky

Real-time estimation of drivable image area based on monocular vision

A. Miranda Neto, A. Correa Victorino, I. Fantoni, J. V. Ferreira
2013 IEEE Intelligent Vehicles Symposium (IV)  
Applying the DP to estimation of drivable image areas has not been done yet, making the concept unique.  ...  In this way, this work proposes a drivable region detection algorithm that generates the region of interest from a dynamic threshold search method and from a drag process (DP).  ...  Estimation of Drivable Image Area: From the image processing and sky removal steps, in order to obtain a multimodal 2D drivability free-area, w_FA, i.e., free-navigable area detection, the algorithm performs  ... 
doi:10.1109/ivs.2013.6629448 dblp:conf/ivs/NetoVFF13 fatcat:z2jv4s3plfdo3k3mcqafpmuxf4

Free Space Estimation using Occupancy Grids and Dynamic Object Detection [article]

Raghavender Sahdev
2017 arXiv   pre-print
In this paper we present an approach to estimate Free Space from a Stereo image pair using stochastic occupancy grids.  ...  Dynamic Objects are detected in successive images based on an idea similar to tracking of foreground objects from the background objects based on motion flow.  ...  [14] proposed an approach to estimate free space from a monocular video sequence.  ... 
arXiv:1708.04989v1 fatcat:v6tgbm5zvbhubf6tv3kyssd3lu

Safe Visual Navigation via Deep Learning and Novelty Detection

Charles Richter, Nicholas Roy
2017 Robotics: Science and Systems XIII  
different from their training data.  ...  A video illustrating our approach is available at: visual navigation.  ...  We define the function d_free(m̂_t, a_t), which returns the length of known free space in map estimate m̂_t, along the ray extending from some robot pose along action a_t, averaged over a number of equally-spaced  ... 
doi:10.15607/rss.2017.xiii.064 dblp:conf/rss/RichterR17 fatcat:amapfytrxndurpbogjhlpri26u

Vision-based navigation of omnidirectional mobile robots

Marco Ferro, Antonio Paolillo, Andrea Cherubini, Marilena Vendittelli
2019 IEEE Robotics and Automation Letters  
Information from a monocular camera, encoders, and an inertial measurement unit is used to achieve the task.  ...  This paper considers the problem of collision-free navigation of omnidirectional mobile robots in environments with obstacles.  ...  This reduces the risk of collisions with undetected obstacles (see the accompanying video).  ... 
doi:10.1109/lra.2019.2913077 fatcat:b2zrmcwzbbg7vafjd4ycsmxsiq

Detecting Road Obstacles by Erasing Them [article]

Krzysztof Lis, Sina Honari, Pascal Fua, Mathieu Salzmann
2021 arXiv   pre-print
Instead, we select image patches and inpaint them with the surrounding road texture, which tends to remove obstacles from those patches.  ...  We also contribute a new dataset for monocular road obstacle detection, and show that our approach outperforms the state-of-the-art methods on both our new dataset and the standard Fishyscapes Lost &  ...  Figure 2. Drivable space from semantic segmentation. Top: Input images.  ... 
arXiv:2012.13633v2 fatcat:df7z43iofjanpafl7gebk7mt7u

Vision-Based High Speed Driving with a Deep Dynamic Observer [article]

Paul Drews, Grady Williams, Brian Goldfain, Evangelos A. Theodorou, James M. Rehg
2018 arXiv   pre-print
A video of these results can be found at  ...  A particle filter uses this dynamic observation model to localize in a schematic map, and MPC is used to drive aggressively using this particle filter based state estimate.  ...  Instead of using an existing whole image or key point based SLAM system, the monocular camera images are used as the input to a convolutional neural network in order to directly regress the free space  ... 
arXiv:1812.02071v2 fatcat:ynrpcs57xbfczhjmctx3xboz6u

A Sensor for Urban Driving Assistance Systems Based on Dense Stereovision [chapter]

Sergiu Nedevschi, Radu Danescu, Tiberiu Marita, Florin Oniga, Ciprian Pocol, Silviu Bota, Cristian Vance
2008 Stereo Vision  
Each DEM cell is labeled as drivable if it is closer to the road surface than its estimated height uncertainty, or as non-drivable otherwise.  ...  -Monocular video sensors: employed in the visual or in the infrared light spectrum, the visual sensors can have a high field of view and can extract almost any kind of information relevant for driving  ... 
doi:10.5772/5891 fatcat:kefr673c3rdrdii2zobmftf4ny

Persistent self-supervised learning principle: from stereo to monocular vision for obstacle avoidance [article]

Kevin van Hecke, Guido de Croon, Laurens van der Maaten, Daniel Hennes, Dario Izzo
2016 arXiv   pre-print
A strategy is introduced that has the robot switch from stereo vision based flight to monocular flight, with stereo vision purely used as 'training wheels' to avoid imminent collisions.  ...  Over time it will learn to also estimate distances based on monocular appearance cues.  ...  Monocular disparity estimation The monocular disparity estimator forms a function from the image's pixel values to the average disparity in the image.  ... 
arXiv:1603.08047v1 fatcat:zn6aoeyjqrhsfmeq3uda66mnqa

Where can I drive? A System Approach: Deep Ego Corridor Estimation for Robust Automated Driving [article]

Thomas Michalke, Di Feng, Claudius Gläser, Fabian Timm
2021 arXiv   pre-print
More recently, data-driven approaches have been proposed that target the drivable area / freespace mainly in inner-city applications.  ...  Therefore, we propose to classify specifically a drivable corridor of the ego lane on pixel level with a deep learning approach.  ...  There are many different solutions to lane detection, which we divide into three categories: data from stationary video and/or lidar sensors, data from high-definition maps, and data from the vehicle's  ... 
arXiv:2004.07639v2 fatcat:pritfc2mmfhnfm2gqnhuzsrlci

Garment Avatars: Realistic Cloth Driving using Pattern Registration [article]

Oshri Halimi, Fabian Prada, Tuur Stuyck, Donglai Xiang, Timur Bagautdinov, He Wen, Ron Kimmel, Takaaki Shiratori, Chenglei Wu, Yaser Sheikh
2022 arXiv   pre-print
Here, we propose an end-to-end pipeline for building drivable representations for clothing.  ...  We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application, where a garment is being reconstructed from two views, and a user can pick and swap garment design as they wish  ...  We include the full videos in the supplementary material. 7.2 Driving from sparse observations. 7.2.1 Driving from pose.  ... 
arXiv:2206.03373v1 fatcat:wvp4bqr4ybfx3hwpwewjv5exyu

A Versatile and Efficient Reinforcement Learning Framework for Autonomous Driving [article]

Guan Wang, Haoyi Niu, Desheng Zhu, Jianming Hu, Xianyuan Zhan, Guyue Zhou
2022 arXiv   pre-print
space and lane boundary estimation efficiently based on the input image x from the monocular camera, as supervised input state-encoder. (2) Distributed RL-based Decision Making: using the learned representation  ...  from monocular RGB images and is widely used in autonomous driving systems.  ... 
arXiv:2110.11573v2 fatcat:fhjv37a6i5apxgghwe6ve27arm

Instance-Aware Predictive Navigation in Multi-Agent Environments [article]

Jinkun Cao, Xin Wang, Trevor Darrell, Fisher Yu
2021 arXiv   pre-print
We adopt a novel multi-instance event prediction module to estimate the possible interaction among agents in the ego-centric view, conditioned on the selected action sequence of the ego-vehicle.  ...  Instead, we use cues of multi-modal future state possibility from only ego-centric monocular observations.  ...  We develop an action selection method to ensure the ego-vehicle drives within drivable areas and avoids collisions in the environment.  ... 
arXiv:2101.05893v1 fatcat:tyogyo2dlbbadbzauqiq6t6g5y

Persistent self-supervised learning: From stereo to monocular vision for obstacle avoidance

Kevin van Hecke, Guido de Croon, Laurens van der Maaten, Daniel Hennes, Dario Izzo
2018 International Journal of Micro Air Vehicles  
The information in x_f and the function space F may not allow for a perfect estimate of g(x_g).  ...  Monocular disparity estimation: The monocular disparity estimator forms a function from the image's pixel values to the average disparity in the image.  ... 
doi:10.1177/1756829318756355 fatcat:ate5sovpirblnakqjn4sjacttm

Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation [article]

Gregory Kahn, Adam Villaflor, Bosen Ding, Pieter Abbeel, Sergey Levine
2018 arXiv   pre-print
Videos of the experiments and code can be found at  ...  interpolating between model-free and model-based.  ...  In contrast, our approach learns from scratch to navigate using monocular images solely in the real-world.  ... 
arXiv:1709.10489v3 fatcat:gojv4oqs7ze35noymyti46il2i