2,799 Hits in 8.1 sec

Visual odometry based on Random Finite Set Statistics in urban environment

Feihu Zhang, Guang Chen, Hauke Stahle, Christian Buckl, Alois Knoll
2012 IEEE Intelligent Vehicles Symposium
The method is based on two phases: a preprocessing phase to extract features from the image and transform the coordinates from the image space to vehicle coordinates, and a tracking phase to estimate the ego-motion  ...  In previous work, we presented a visual odometry solution that estimates frame-to-frame motion from a single camera based on Random Finite Set (RFS) Statistics.  ...  a three-dimensional Cartesian coordinate system with origin at the optical center of the camera, and the image coordinates [u, v]^T are defined in a two-dimensional Cartesian coordinate system with  ...
doi:10.1109/ivs.2012.6232201 dblp:conf/ivs/ZhangCSBK12 fatcat:o2z4j25lvrhe7aufp3erhgl3qi
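The snippet above defines image coordinates [u, v]^T by projecting from a Cartesian frame centered at the camera's optical center. A minimal pinhole-projection sketch of that mapping, with hypothetical intrinsics (fx, fy, cx, cy not taken from the paper):

```python
import numpy as np

# Hypothetical intrinsics: focal lengths and principal point (pixels).
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3-D point in the camera frame to pixel coordinates [u, v]."""
    p = K @ point_cam
    return p[:2] / p[2]          # perspective division by depth

uv = project(np.array([1.0, 0.5, 5.0]))   # point 5 m in front of the camera
```

The further transform from image to vehicle coordinates would additionally require the camera's extrinsic pose on the vehicle, which the snippet does not specify.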

An Ego-Motion Detection System Employing Directional-Edge-Based Motion Field Representations

Jia Hao, Tadashi Shibata
2010 IEICE Transactions on Information and Systems
As a result, the problems of motion ambiguity as well as motion field distortion caused by camera shaking during video capture have been resolved.  ...  Two kinds of feature vectors, the global motion vector and the component distribution vectors, are generated from a motion field at two different scales and perspectives.  ...  Ego-motion detection is carried out in three stages: (1) motion field generation, (2) hierarchical vector representation, and (3) multiple-clue template matching.  ... 
doi:10.1587/transinf.e93.d.94 fatcat:ieybg5fr7jginj6hb4rv6cbxyu

Determination of Ego-Motion from Matched Points

C. G. Harris
1987 Proceedings of the Alvey Vision Conference
We propose an algorithm for the estimation of the motion of a camera moving through a static environment (i.e. the ego-motion) from matched points on two images.  ...  The algorithm correctly weights the observations by minimising point mis-match distances on the image-plane. Prior knowledge concerning the camera motion may also be included.  ...  ego-motion estimate.  ...
doi:10.5244/c.1.26 dblp:conf/bmvc/000287 fatcat:g32jngof7fbgrin5s2awwdt7re
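Harris estimates ego-motion from matched points on two images. One standard way to relate such matches is the epipolar constraint x2^T E x1 = 0 with essential matrix E = [t]_x R. A numpy sketch on synthetic data (the rotation, translation, and point cloud below are all hypothetical, not from the paper):

```python
import numpy as np

def skew(t):
    """Cross-product matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical ego-motion: rotation about the y-axis plus a translation.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.0, 0.2])
E = skew(t) @ R                      # essential matrix for this motion

# Synthetic matched points: scene points seen in both camera frames.
rng = np.random.default_rng(0)
X1 = rng.uniform(-1.0, 1.0, (10, 3)) + np.array([0.0, 0.0, 5.0])
X2 = X1 @ R.T + t                    # coordinates in the second frame
x1 = X1 / X1[:, 2:3]                 # normalized image coordinates
x2 = X2 / X2[:, 2:3]

# Every correctly matched pair satisfies the epipolar constraint.
residuals = np.einsum('ni,ij,nj->n', x2, E, x1)
```

In practice (and in Harris's formulation) the motion is recovered by minimizing mismatch distances on the image plane over R and t, rather than evaluating a known E as done here.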

Rover navigation using stereo ego-motion

Clark F. Olson, Larry H. Matthies, Marcel Schoppers, Mark W. Maimone
2003 Robotics and Autonomous Systems  
Promising techniques for position estimation by determining the camera ego-motion from monocular or stereo sequences have been previously described.  ...  In addition, we show that a system based on only camera ego-motion estimates will accumulate errors with super-linear growth in the distance traveled, owing to increasing orientation errors.  ...  This paper is an expanded version of previous work on stereo ego-motion that has appeared in the IEEE Computer Society Conference on Computer Vision and Pattern Recognition [17] and the IEEE International  ... 
doi:10.1016/s0921-8890(03)00004-6 fatcat:tpzpp7xnkfhozgck7pim4wafku

Spatio-temporal prediction of collision candidates for static and dynamic objects in monocular image sequences

Alexander Schaub, Darius Burschka
2013 IEEE Intelligent Vehicles Symposium (IV)
The rotational component of this sparse optical flow due to the ego-motion of the camera is compensated using motion parameters estimated directly from the images.  ...  A sparse motion field is calculated by tracking point features using the Kanade-Lucas-Tomasi method.  ...  In the case of dynamic scenes, we observe multiple relative camera-to-surface motions with different epipoles.  ...
doi:10.1109/ivs.2013.6629605 dblp:conf/ivs/SchaubB13 fatcat:k6xq3j7cnbe6vgcwe737tywagy
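The entry above compensates the rotational flow component induced by camera ego-motion. Because rotational flow is independent of scene depth, it can be removed exactly by rotating each image ray back by the inverse camera rotation. A sketch with a hypothetical in-plane (roll) rotation, not taken from the paper:

```python
import numpy as np

# Hypothetical small camera rotation between two frames (roll about z).
theta = 0.02
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

def derotate(x):
    """Remove the rotational flow component: rotate each normalized
    image point's ray back by R^-1 and re-project to the image plane."""
    rays = np.column_stack([x, np.ones(len(x))])   # homogeneous rays
    back = rays @ R                                 # apply R^T to each ray
    return back[:, :2] / back[:, 2:3]

# For a purely rotating camera, the compensated flow vanishes exactly,
# regardless of depth.
x1 = np.array([[0.1, 0.2], [-0.3, 0.05]])
rays2 = np.column_stack([x1, np.ones(2)]) @ R.T     # rotate rays forward
x2 = rays2[:, :2] / rays2[:, 2:3]
residual_flow = derotate(x2) - x1
```

Any flow remaining after derotation is due to translation (and scene depth) or to independently moving objects, which is what makes the compensation useful for collision prediction.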

Visual odometry based on a Bernoulli filter

Feihu Zhang, Daniel Clarke, Alois Knoll
2015 International Journal of Control, Automation and Systems  
In contrast to other approaches, the ego-motion vector is considered as the state of an extended target, while the features are considered as multiple measurements originating from the target.  ...  In this paper, we propose a Bernoulli filter for estimating a vehicle's trajectory under the random finite set (RFS) framework.  ...  Much work has been done utilizing the Structure-from-Motion (SfM) technique [1]. It refers to the process of estimating three-dimensional information from two-dimensional images.  ...
doi:10.1007/s12555-014-0192-3 fatcat:33qrfg7amzdfnkab3l7p3yj4xy

Tracking a varying number of people with a visually-controlled robotic head

Yutong Ban, Xavier Alameda-Pineda, Fabien Badeig, Sileye Ba, Radu Horaud
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Multi-person tracking with a robotic platform is one of the cornerstones of human-robot interaction. Challenges arise from occlusions, appearance changes and a time-varying number of people.  ...  person of interest within the field of view.  ...  points from the background in order to estimate camera motions [19].  ...
doi:10.1109/iros.2017.8206274 dblp:conf/iros/BanABBH17 fatcat:jbmupkkrtvc6lodh2nvx4eflra

Egocentric Object Tracking: An Odometry-Based Solution [chapter]

Stefano Alletto, Giuseppe Serra, Rita Cucchiara
2015 Lecture Notes in Computer Science  
This is no easy task: in this paper, we show how current state-of-the-art visual tracking algorithms fail if challenged with a first-person sequence recorded from a wearable camera attached to a moving  ...  Tracking objects moving around a person is one of the key steps in human visual augmentation: we could estimate their locations when they are out of our field of view, know their position, distance or  ...  Even a still object could bounce in and out of the camera field of view due to ego-motion, or a still background can be all but still, having significant apparent motion.  ...
doi:10.1007/978-3-319-23234-8_63 fatcat:yjcwfa4l5bau7bizgbkv3yafzy

First-Person Animal Activity Recognition from Egocentric Videos

Yumi Iwashita, Asamichi Takamine, Ryo Kurazume, M.S. Ryoo
2014 22nd International Conference on Pattern Recognition
We implemented multiple baseline approaches to recognize activities from such videos while utilizing multiple types of global/local motion features.  ...  Our new dataset consists of 10 activities containing a heavy/fair amount of ego-motion.  ...  The future work includes using multiple cameras on a dog. The dataset was collected with a camera on the dog, and we found out that the position of the camera clearly influences the view.  ... 
doi:10.1109/icpr.2014.739 dblp:conf/icpr/IwashitaTKR14 fatcat:53q2tr7ocbdsbhwbdr4kf4frcm

Real-time Motion Tracking from a Mobile Robot

Boyoon Jung, Gaurav S. Sukhatme
2009 International Journal of Social Robotics  
We propose a set of algorithms for multiple motion tracking from a mobile robot equipped with a monocular camera and a laser rangefinder.  ...  The estimates are fused with the depth information from the laser rangefinder to estimate the partial 3D position.  ...  There are three challenges: 1) Compensation for the robot ego-motion: The ego-motion of the robot was directly measured using corresponding feature sets in two consecutive images obtained from a camera  ... 
doi:10.1007/s12369-009-0038-y fatcat:tllbe7e2ivclxbelbwjeqvrmei

Pixel-Wise Motion Segmentation For SLAM in Dynamic Environments

Thorsten Hempel, Ayoub Al-Hamadi
2020 IEEE Access  
SCENE FLOW ESTIMATION: Scene flow was introduced in [18] as a three-dimensional motion field of points and has since been approached multiple times on stereo [4], [19] and RGB-D systems [5], [20]–[23]  ...  The three-dimensional movement of the projected points is estimated by a transformation matrix. Most scene flow vectors are caused by the camera motion between c_t and c_{t-1}.  ...
doi:10.1109/access.2020.3022506 fatcat:ro4ekkmfbjgnbevvkv4hp2rh6i

Constrained Multiple Planar Reconstruction for Automatic Camera Calibration of Intelligent Vehicles

Sang Jun Lee, Jeawoo Lee, Wonju Lee, Cheolhun Jang
2021 Sensors  
The proposed method jointly optimizes reprojection errors of image features projected from multiple planar surfaces and, finally, significantly reduces errors in camera extrinsic parameters.  ...  In intelligent vehicles, extrinsic camera calibration should preferably be conducted on a regular basis to deal with unpredictable mechanical changes or variations in weight load distribution.  ...  estimate camera motion.  ...
doi:10.3390/s21144643 fatcat:w63mxb6zqjdcjps3dzwmvc4abu

Camera-based platform and sensor motion tracking for data fusion in a landmine detection system

Wannes van der Mark, Johan C. van den Heuvel, Eric den Breejen, Frans C. A. Groen, Grant R. Gerhart, Charles M. Shoemaker, Douglas W. Gage
2003 Unmanned Ground Vehicle Technology V  
To estimate the relative position and orientation of sensors, techniques from camera calibration are used. The platform motion is estimated from tracked features on the ground.  ...  In this paper a vision-based approach is presented which can estimate the relative sensor pose and position together with the vehicle motion.  ...  Another problem for LS motion estimation is the presence of outliers: features whose motion does not correspond to the ego-motion of the cameras.  ...
doi:10.1117/12.486838 fatcat:pggr5aimazf5jnjhdpbxpu2hyq
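The outlier problem mentioned in the snippet above (features whose motion does not follow the camera ego-motion) is commonly handled with a RANSAC-style hypothesize-and-verify loop rather than plain least squares. A toy numpy sketch on synthetic feature flows (the flow values, inlier threshold, and iteration count are all hypothetical):

```python
import numpy as np

# Synthetic feature flows: most follow a common ego-motion translation,
# a few are outliers moving independently.
rng = np.random.default_rng(2)
flows = np.tile([4.0, -1.0], (30, 1))        # ego-motion flow vectors
flows[:6] = rng.uniform(-20.0, 20.0, (6, 2)) # independently moving features

best, best_inliers = None, 0
for i in rng.integers(0, len(flows), 50):    # 50 random single-sample hypotheses
    cand = flows[i]
    inliers = np.linalg.norm(flows - cand, axis=1) < 1.0  # consensus test
    if inliers.sum() > best_inliers:
        best_inliers = int(inliers.sum())
        best = flows[inliers].mean(axis=0)   # refit on the consensus set
```

A plain least-squares average over all 30 flows would be pulled toward the outliers; the consensus step excludes them before the final fit.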

Pose and Motion from Omnidirectional Optical Flow and a Digital Terrain Map

Ronen Lerner, Oleg Kupervasser, Ehud Rivlin
2006 IEEE/RSJ International Conference on Intelligent Robots and Systems
An algorithm for pose and motion estimation using corresponding features in omnidirectional images and a digital terrain map is proposed.  ...  The first is polydioptric cameras, while the second is catadioptric cameras.  ...  In the ego-motion integration approach, the motion of the camera with respect to itself is estimated.  ...
doi:10.1109/iros.2006.282569 dblp:conf/iros/LernerKR06 fatcat:ant3j4izbbhhph2a7yteo27dqy

Group action recognition in soccer videos

Yu Kong, Xiaoqin Zhang, Qingdi Wei, Weiming Hu, Yunde Jia
2008 Proceedings of the International Conference on Pattern Recognition (ICPR)
Due to inaccurate ego-motion estimation, the optical flow cannot reflect the accurate motion of objects.  ...  This paper presents a novel approach for recognizing group action with a moving camera. In our approach, ego-motion is estimated by the Kanade-Lucas-Tomasi feature sets on successive frames.  ...  These features are used to estimate the ego-motion of the camera. The optical flow is then computed on frames after ego-motion compensation.  ...
doi:10.1109/icpr.2008.4761001 dblp:conf/icpr/KongZWHJ08 fatcat:hj6z2ndddbelvlb6xchlc5rilq
Showing results 1 — 15 out of 2,799 results