
Accurate and robust ego-motion estimation using expectation maximization

G. Dubbelman, W. van der Mark, F.C.A. Groen
2008 IEEE/RSJ International Conference on Intelligent Robots and Systems  
In this contribution, stereo vision is used to generate a number of minimal-set motion hypotheses.  ...  A novel robust visual-odometry technique, called EM-SE(3), is presented and compared against random sample consensus (RANSAC) for ego-motion estimation.  ...  INTRODUCTION In this article the focus is on robust ego-motion estimation of a moving vehicle using an onboard stereo rig; this is also known as stereo-based visual odometry.  ... 
doi:10.1109/iros.2008.4650944 dblp:conf/iros/DubbelmanMG08 fatcat:rnvlm2hdivgsvi3aggtr7gmaci
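
The entry above compares EM-SE(3) against a RANSAC baseline built from minimal-set motion hypotheses. Below is a minimal sketch, not the authors' implementation, of such a baseline: minimal sets of three stereo-triangulated point matches give rigid-motion hypotheses via the Kabsch/SVD alignment, and the hypothesis with the most inliers is kept and refined. Function names, the iteration count, and the inlier threshold are illustrative.

```python
# Hypothetical sketch of RANSAC rigid ego-motion estimation from stereo-triangulated
# 3D point correspondences; thresholds and names are illustrative, not the paper's.
import numpy as np

def rigid_from_minimal_set(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_ego_motion(P, Q, iters=200, thresh=0.05):
    """P, Q: (N, 3) matched 3D points from two consecutive stereo frames."""
    best_R, best_t, best_inliers = np.eye(3), np.zeros(3), np.zeros(len(P), bool)
    for _ in range(iters):
        idx = np.random.choice(len(P), 3, replace=False)       # minimal set
        R, t = rigid_from_minimal_set(P[idx], Q[idx])
        err = np.linalg.norm((P @ R.T + t) - Q, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_R, best_t, best_inliers = R, t, inliers
    if best_inliers.sum() >= 3:                                 # refine on all inliers
        best_R, best_t = rigid_from_minimal_set(P[best_inliers], Q[best_inliers])
    return best_R, best_t, best_inliers
```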

A New Anthropomorphic Visual Sensor
(Japanese title: A Visual Sensor with Human Retinal Characteristics)

Cheon Woo SHIN, Seiji INOKUCHI
1995 Transactions of the Society of Instrument and Control Engineers  
The motion equation that relates the egomotion and/or the motion of the object in the scene to the optical flow is considerably simplified if the velocity is represented in a polar coordinate system, as  ...  This paper describes the development of an anthropomorphic visual sensor with retina-like structure to perform the polar mapping.  ...  O'Brien: Motion Stereo Using Ego-motion Complex Logarithmic Mapping, IEEE PAMI, 9-3, 356/369 (1987) 24) M. Tistarelli and G.  ... 
doi:10.9746/sicetr1965.31.1817 fatcat:3vvcmjl7bvcb7mtkrz6pyk2brq
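
The snippet above notes that the motion equations simplify when velocity is expressed in polar coordinates and that the sensor performs a retina-like polar mapping. As a rough illustration only, not the sensor's actual mapping, here is the standard complex-logarithmic (log-polar) coordinate transform, under which camera zoom and rotation about the optical axis become simple shifts.

```python
# Illustrative log-polar (complex-logarithmic) mapping of image coordinates
# relative to the image centre; the clamp radius r_min is an assumed parameter.
import numpy as np

def log_polar_map(x, y, r_min=1.0):
    """Map Cartesian coordinates (relative to the fovea) to log-polar (u, v)."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    u = np.log(np.maximum(r, r_min))   # clamp near the fovea to avoid log(0)
    return u, theta
```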

Unsupervised Monocular Depth Learning in Dynamic Scenes [article]

Hanhan Li, Ariel Gordon, Hang Zhao, Vincent Casser, Anelia Angelova
2020 arXiv   pre-print
We present a method for jointly training the estimation of depth, ego-motion, and a dense 3D translation field of objects relative to the scene, with monocular photometric consistency being the sole source  ...  At inference time, a depth map is obtained from a single frame, whereas a 3D motion map and ego-motion are obtained from two consecutive frames.  ...  The latter predicts a 3D translation map T_obj(u, v) at the original resolution for the moving objects and a 6D ego-motion vector M_ego.  ... 
arXiv:2010.16404v2 fatcat:dwf7hypltnbijn3somdt6hv2zu
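
The entry describes monocular photometric consistency as the sole training signal for depth and ego-motion. Below is a minimal sketch of that signal, assuming known intrinsics K and ignoring the per-object translation field, occlusion handling, and the differentiable bilinear sampling used in practice; it only shows how depth and ego-motion let one frame be compared against another.

```python
# Assumed-notation sketch of a photometric-consistency loss: target pixels are
# back-projected with predicted depth, moved by predicted ego-motion (R, t),
# reprojected into the source frame, and compared photometrically.
import numpy as np

def photometric_loss(I_tgt, I_src, depth, K, R, t):
    """I_tgt, I_src: (H, W) grayscale images; depth: (H, W); K: 3x3 intrinsics."""
    H, W = I_tgt.shape
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x HW
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)                 # back-project
    pts = R @ pts + t.reshape(3, 1)                                     # apply ego-motion
    proj = K @ pts
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)
    valid = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H) & (proj[2] > 0)
    diff = np.abs(I_tgt.reshape(-1)[valid] - I_src[v2[valid], u2[valid]])
    return diff.mean() if valid.any() else 0.0
```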

Reciprocal-wedge transform for space-variant sensing

F. Tong, Ze-Nian Li
1995 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Index Terms: active vision, ego motion, motion stereo, navigation, reciprocal-wedge transform, space-variant sensing. Linear features are also preserved in the S-RWT.  ...  As examples of initial applications, the RWT is used for finding road directions in navigation, and for recovering depth in motion stereo.  ...  The work on longitudinal motion stereo is also extended to more general ego motions, especially circular movements (rotations).  ... 
doi:10.1109/34.391393 fatcat:vhyi75uefjhsnj4xf3lwuk4u3i
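
As an illustration of the transform this entry is about, the sketch below uses the form (u, v) = (1/x, y/x), an assumed formulation that is at least consistent with the snippet's claim that linear features are preserved: the line a*x + b*y + c = 0 maps to the line c*u + b*v + a = 0. The exact variant used in the paper (e.g., the S-RWT) may differ.

```python
# Assumed reciprocal-wedge transform of image coordinates on the x != 0 half-plane.
import numpy as np

def rwt(x, y, eps=1e-6):
    """Reciprocal-wedge transform (u, v) = (1/x, y/x) with a guard at x = 0."""
    x = np.where(np.abs(x) < eps, eps, x)   # avoid the singular column x = 0
    return 1.0 / x, y / x
```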

ClusterSLAM: A SLAM Backend for Simultaneous Rigid Body Clustering and Motion Estimation

Jiahui Huang, Sheng Yang, Zishuo Zhao, Yu-Kun Lai, Shimin Hu
2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
We present a practical backend for stereo visual SLAM which can simultaneously discover individual rigid bodies and compute their motions in dynamic environments.  ...  Specifically, our algorithm builds a noise-aware motion affinity matrix upon landmarks, and uses agglomerative clustering for distinguishing those rigid bodies.  ...  Output: the cluster assignments θ : i → q (q = 0 for the static cluster), the MAP relative positions of 3D landmarks w.r.t. their cluster (X_{q,i}), the ego-motion of the stereo camera (P^c_t), and the trajectory  ... 
doi:10.1109/iccv.2019.00597 dblp:conf/iccv/HuangYZLH19 fatcat:r33c7b53kfeqlemxbqtk3w6abq
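
A hedged sketch of the clustering step the snippet describes: a landmark motion-affinity matrix is turned into distances and landmarks are grouped into rigid bodies with agglomerative clustering. The noise-aware construction of the affinity matrix from landmark trajectories is the paper's actual contribution and is assumed as an input here; linkage method and cut distance are illustrative.

```python
# Agglomerative clustering of landmarks from a motion-affinity matrix (sketch).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_rigid_bodies(affinity, cut_distance=0.5):
    """affinity: (N, N) symmetric matrix in [0, 1]; returns one label per landmark."""
    dist = 1.0 - affinity                        # high affinity -> small distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=cut_distance, criterion="distance")
```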

Radar and stereo vision fusion for multitarget tracking on the special Euclidean group

Josip Ćesić, Ivan Marković, Igor Cvišić, Ivan Petrović
2016 Robotics and Autonomous Systems  
For example, works based on a monocular camera use radar for finding regions of interest in the image [4][5][6][7], process image and radar data separately [8][9][10], use motion stereo to reconstruct  ...  We use a multisensor setup consisting of a radar and a stereo camera mounted on top of a vehicle.  ...  matrix obtained from the ego-motion algorithm.  ... 
doi:10.1016/j.robot.2016.05.001 fatcat:fivxxn4fwjcl3dc5ikv6bh5ij4

Unsupervised Learning of Depth from Monocular Videos Using 3D-2D Corresponding Constraints

Fusheng Jin, Yu Zhao, Chuanbing Wan, Ye Yuan, Shuliang Wang
2021 Remote Sensing  
This paper proposes a depth prediction method for AMP based on unsupervised learning, which can learn from video sequences and simultaneously estimate the depth structure of the scene and the ego-motion  ...  The depth map of the target view and the ego-motion are estimated by CNN networks, respectively.  ...  The depth estimation network predicted a one-channel depth map at four different scales from a single three-channel image, while the ego-motion estimation network predicted the six-DOF ego-motion from three  ... 
doi:10.3390/rs13091764 fatcat:hin47ftalfghfo5ixenjh7rnei
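
The entry mentions that the depth network predicts at four different scales. As a small, purely illustrative sketch (the weighting scheme is assumed, not from the paper), the per-scale photometric losses can be combined into a single training objective, with coarser scales down-weighted.

```python
# Assumed multi-scale loss aggregation: per_scale_losses[0] is the finest scale.
def total_loss(per_scale_losses, decay=0.5):
    """Weighted sum of per-scale photometric losses with a decaying factor."""
    return sum((decay ** s) * loss for s, loss in enumerate(per_scale_losses))
```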

ClusterSLAM: A SLAM backend for simultaneous rigid body clustering and motion estimation

Jiahui Huang, Sheng Yang, Zishuo Zhao, Yu-Kun Lai, Shi-Min Hu
2021 Computational Visual Media  
Abstract: We present a practical backend for stereo visual SLAM which can simultaneously discover individual rigid bodies and compute their motions in dynamic environments.  ...  Specifically, our algorithm builds a noise-aware motion affinity matrix from landmarks, and uses agglomerative clustering to distinguish rigid bodies.  ...  Output: cluster assignments θ : i → q (q = 0 for the static cluster), the MAP relative positions of 3D landmarks w.r.t. their cluster (X_{q,i}), the ego-motion of the stereo camera (P^c_t), and the trajectory  ... 
doi:10.1007/s41095-020-0195-3 fatcat:hip62un4fzc4bchigrgj7aaoce

Early Cognitive Vision: Using Gestalt-Laws for Task-Dependent, Active Image-Processing

Florentin Wörgötter, Norbert Krüger, Nicolas Pugeault, Dirk Calow, Markus Lappe, Karl Pauwels, Marc Van Hulle, Sovira Tan, Alan Johnston
2004 Natural Computing  
We will ask to what degree such strategies are also useful in a computer vision context.  ...  In addition, we will try to show that it is possible to employ multiple strategies in parallel to arrive at a flexible and robust computer vision system based on recurrent feedback loops and using information  ...  In addition, the properties of such a complex logarithmic mapping lead to scale invariance for objects that increase in image size in proportion to visual eccentricity, and to rotational invariance.  ... 
doi:10.1023/b:naco.0000036817.38320.fe fatcat:ryl3o7lkyjfq5nqngzx42ekrie

A Sensor for Urban Driving Assistance Systems Based on Dense Stereovision [chapter]

Sergiu Nedevschi, Radu Danescu, Tiberiu Marita, Florin Oniga, Ciprian Pocol, Silviu Bota, Cristian Vance
2008 Stereo Vision  
Our system uses a 3D point set for scene representation; therefore the preferred output is the Z map.  ...  Dense stereo information allows us to compute and track an unstructured elevation map, which provides drivable areas in the case when no lane markings or other road-delimiting features are present or visible  ... 
doi:10.5772/5891 fatcat:kefr673c3rdrdii2zobmftf4ny
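
The entry's "Z map" is per-pixel depth obtained from dense disparity, and the elevation map is a height grid built from the resulting 3D points. The sketch below uses assumed conventions (pinhole stereo with focal length in pixels and baseline in metres; y up, z forward; cell size illustrative), not the authors' pipeline.

```python
# Sketch: depth (the "Z map") from disparity via Z = f * B / d, plus a coarse
# elevation grid holding the maximum point height per ground cell.
import numpy as np

def z_map_from_disparity(disparity, focal_px, baseline_m, min_disp=0.5):
    """Depth in metres per pixel; pixels below min_disp are marked invalid (NaN)."""
    d = np.where(disparity > min_disp, disparity, np.nan)
    return focal_px * baseline_m / d

def elevation_map(points_xyz, cell=0.2, extent=20.0):
    """Max height per ground cell from an (N, 3) point set (x right, y up, z forward)."""
    n = int(2 * extent / cell)
    grid = np.full((n, n), np.nan)
    ix = ((points_xyz[:, 0] + extent) / cell).astype(int)
    iz = (points_xyz[:, 2] / cell).astype(int)
    ok = (ix >= 0) & (ix < n) & (iz >= 0) & (iz < n)
    for x, z, y in zip(ix[ok], iz[ok], points_xyz[ok, 1]):
        grid[z, x] = y if np.isnan(grid[z, x]) else max(grid[z, x], y)
    return grid
```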

A Study of the Rao-Blackwellised Particle Filter for Efficient and Accurate Vision-Based SLAM

Robert Sim, Pantelis Elinas, James J. Little
2007 International Journal of Computer Vision  
solutions using vision-based sensing.  ...  The main contributions of our work are the introduction of a new robot motion model utilizing structure from motion (SFM) methods and a novel mixture proposal distribution that combines local and global  ...  We can do this using structure from motion techniques taking advantage of our stereo camera setup.  ... 
doi:10.1007/s11263-006-0021-0 fatcat:o4pevzcn7bcedhwdwzjlkh52uq
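
The snippet describes a Rao-Blackwellised particle filter whose motion model comes from structure from motion and whose proposal mixes local and global components. The following is a generic, hedged sketch of one RBPF update step; the SFM motion model, the mixture proposal, and the per-particle landmark maps that constitute the paper's contribution are abstracted behind the motion_model and likelihood callables, which are assumptions of this sketch.

```python
# Generic RBPF step: predict each pose particle, reweight by observation
# likelihood, and resample when the effective sample size drops.
import numpy as np

def rbpf_step(particles, weights, motion_model, likelihood, observation):
    """particles: (N, d) pose samples; weights: (N,); returns the updated set."""
    particles = np.array([motion_model(p) for p in particles])              # predict
    weights = weights * np.array([likelihood(p, observation) for p in particles])
    weights = weights / weights.sum()                                        # normalise
    n = len(particles)
    if 1.0 / np.sum(weights ** 2) < n / 2:                                   # low ESS
        idx = np.random.choice(n, size=n, p=weights)                         # resample
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```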

Event-based Vision: A Survey

Guillermo Gallego, Tobi Delbruck, Garrick Michael Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Jorg Conradt, Kostas Daniilidis, Davide Scaramuzza
2020 IEEE Transactions on Pattern Analysis and Machine Intelligence  
traditional cameras: high temporal resolution (in the order of µs), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion  ...  We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow  ...  ., latency and motion blur). Table 3 classifies the related work using these complexity axes. Tracking and Mapping: Let us focus on methods that address the tracking-and-mapping problem. Cook et al.  ... 
doi:10.1109/tpami.2020.3008413 pmid:32750812 fatcat:vlxvlv4uynh5rpw4qlmaywqlqq

Event-based Vision: A Survey [article]

Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Joerg Conradt, Kostas Daniilidis, Davide Scaramuzza
2020 arXiv   pre-print
We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow  ...  outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (in the order of microseconds), low power consumption, and do not suffer from motion  ...  In [129] a learning-based approach for segmentation using motion-compensation is proposed: ANNs are used to estimate depth, ego-motion, segmentation masks of independently moving objects and object 3D  ... 
arXiv:1904.08405v2 fatcat:ffh6el7ojfg6jag5qm2hwnhweu

Event-aided Direct Sparse Odometry [article]

Javier Hidalgo-Carrió and Guillermo Gallego and Davide Scaramuzza
2022 arXiv   pre-print
The method recovers a semi-dense 3D map using photometric bundle adjustment. EDS is the first method to perform 6-DOF VO using events and frames with a direct approach.  ...  We introduce EDS, a direct monocular visual odometry using events and frames. Our algorithm leverages the event generation model to track the camera motion in the blind time between frames.  ...  However such a comparison is yet to be performed for 6-DOF camera tracking (i.e., ego-motion estimation).  ... 
arXiv:2204.07640v2 fatcat:td7wajmutvfenj4oyp2h66dv44
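
EDS "leverages the event generation model": an event-camera pixel emits an event of polarity ±1 whenever its log intensity changes by a contrast threshold C since that pixel's last event. Below is a simplified frame-to-frame sketch of that standard model, not the paper's tracker; per-event timestamps and refractory effects are omitted, and the threshold value is illustrative. The ref buffer would initially be set to the first log-intensity frame.

```python
# Simplified event-generation model between two log-intensity frames.
import numpy as np

def events_from_frames(log_I_curr, ref, C=0.2):
    """Emit (y, x, polarity) events; 'ref' holds each pixel's log intensity at its
    last event and is updated in place."""
    events = []
    diff = log_I_curr - ref
    ys, xs = np.where(np.abs(diff) >= C)
    for y, x in zip(ys, xs):
        pol = 1 if diff[y, x] > 0 else -1
        n = int(abs(diff[y, x]) // C)            # several events if the change >> C
        events.extend([(y, x, pol)] * n)
        ref[y, x] += pol * n * C
    return events
```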

Efficient tracking and ego-motion recovery using gait analysis

Huiyu Zhou, Andrew M. Wallace, Patrick R. Green
2009 Signal Processing  
In the second phase, this gait model is employed within a "predict-correct" framework using a maximum a posteriori, expectation-maximization (MAP-EM) strategy to obtain robust estimates of the ego-motion  ...  Experiments on synthetic and real image sequences show that the use of the gait model results in more efficient tracking.  ...  Acknowledgment We thank Iain Wallace for helping to generate the computer game scenarios using the Quake game engine.  ... 
doi:10.1016/j.sigpro.2009.04.010 fatcat:f4plmzzubbdxnbintwry2daazi
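
As a loose, one-dimensional caricature only (the form of the gait model, its parameters, and the blending gain are all assumptions of this sketch, not the paper's MAP-EM formulation), the "predict-correct" idea in the entry can be read as: a periodic gait model predicts the camera oscillation for the next frame, and the prediction is corrected by the motion actually measured from image features.

```python
# Assumed 1-D predict-correct step with a sinusoidal gait prior.
import numpy as np

def predict_correct(t, measured, amplitude=0.02, freq=2.0, phase=0.0, gain=0.6):
    """Blend the gait-model prediction with the measurement (gain in [0, 1])."""
    predicted = amplitude * np.sin(2 * np.pi * freq * t + phase)   # gait prior
    return predicted + gain * (measured - predicted)               # correction
```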
Showing results 1–15 of 240.