
Plane Equation Features in Depth Sensor Tracking

Mika Taskinen, Tero Säntti, Teijo Lehtonen
2017 Proceedings of the 14th International Joint Conference on e-Business and Telecommunications  
Depth sensors have usually been used in augmented reality as mesh builders and in some cases as feature extractors for tracking.  ...  The emergence of depth sensors has made it possible to track not only monocular cues but also the actual depth values of the environment.  ...  ACKNOWLEDGEMENTS The research has been carried out during the MARIN2 project (Mobile Mixed Reality Applications for Professional Use) funded by Tekes (The Finnish Funding Agency for Innovation) in collaboration  ... 
doi:10.5220/0006425700170024 dblp:conf/sigmap/TaskinenSL17 fatcat:lysa3jlagzd3vp4gafnry6a35a

New Fast Fall Detection Method Based on Spatio-Temporal Context Tracking of Head by Using Depth Images

Lei Yang, Yanyun Ren, Huosheng Hu, Bo Tian
2015 Sensors  
tracking over three-dimensional (3D) depth images that are captured by the Kinect sensor.  ...  The dense spatio-temporal context (STC) algorithm is then applied to track the head position and the distance from the head to floor plane is calculated in every following frame of the depth image.  ...  equation have been determined, all depth points should be substituted into the floor plane equation.  ... 
doi:10.3390/s150923004 pmid:26378540 pmcid:PMC4610487 fatcat:36kyjrmclfhwxdkbptti6dzlt4
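The fall detector above computes, in every frame, the distance from the tracked head position to the floor plane. A minimal sketch of that distance computation, assuming the plane is given in ax + by + cz + d = 0 form (the coefficients and head position below are hypothetical):

```python
import numpy as np

def point_to_plane_distance(point, plane):
    """Perpendicular distance from a 3D point to the plane ax + by + cz + d = 0."""
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + d) / np.sqrt(a * a + b * b + c * c)

# Hypothetical floor plane (y = 0) and a head position 1.6 m above it.
floor = (0.0, 1.0, 0.0, 0.0)
head = (0.2, 1.6, 2.5)
distance = point_to_plane_distance(head, floor)  # 1.6 m
```

A fall event would then be flagged when this distance stays below a threshold for a sustained number of frames.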

Tracking-by-synthesis using point features and pyramidal blurring

Gilles Simon
2011 10th IEEE International Symposium on Mixed and Augmented Reality  
In particular, it is drift-free, viewpoint invariant and easy to combine with physical sensors such as GPS and inertial sensors.  ...  While edge features have been used successfully within the tracking-by-synthesis framework, point features have, to our knowledge, never been used.  ...  The depth of the scene points that are in focus on the sensor image plane is called the focus distance.  ... 
doi:10.1109/ismar.2011.6162875 fatcat:wqpaphh3pvdnbhaatxsw3oteri

VISUAL SERVOING FOR ROBOTIC ASSEMBLY [chapter]

Brad Nelson, N. P. Papanikolopoulos, P. K. Khosla
1993 Visual Servoing  
Visual feedback has traditionally been used in the assembly process to a very limited extent.  ...  In this paper we present some of the issues pertaining to the introduction of visual servoing techniques into the assembly process and solutions we have demonstrated to these problems.  ...  Figure 1: The pin-hole camera model with the image plane moved in front of the focal plane to simplify signs in the equations.  ... 
doi:10.1142/9789814503709_0005 fatcat:7saqer5sczbapoksuuxmntgrhu
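The pin-hole model with the image plane moved in front of the focal plane, as in Figure 1 of this chapter, projects a camera-frame point (X, Y, Z) to (fX/Z, fY/Z) with no sign flip. A minimal sketch (the focal length and point below are hypothetical):

```python
import numpy as np

def project_pinhole(P, f):
    """Project a 3D camera-frame point onto an image plane at z = f.
    Placing the plane in front of the focal point keeps the signs positive."""
    X, Y, Z = P
    return np.array([f * X / Z, f * Y / Z])

p = project_pinhole((0.5, 0.25, 2.0), f=0.01)  # 10 mm focal length
```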

AN RGB-D DATA PROCESSING FRAMEWORK BASED ON ENVIRONMENT CONSTRAINTS FOR MAPPING INDOOR ENVIRONMENTS

W. Darwish, W. Li, S. Tang, Y. Li, W. Chen
2019 ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
To overcome the self-repetitive structure of indoor environments, the proposed framework uses novel description functions for both line and plane features extracted from RGB and depth images for further  ...  The visual RGB-D SLAM system and the default sensor tracking system (SensorFusion) were used to assess the performance of the proposed framework.  ...  ACKNOWLEDGEMENTS The work described in this paper was substantially supported by a grant from The National Key Research and Development Program of China (No. 2016YFB0502101).  ... 
doi:10.5194/isprs-annals-iv-2-w5-263-2019 fatcat:bb2ufzoqcnb6rlzhqr7v5cmjse

Natural interface in augmented reality interactive simulations

Pier Paolo Valentini
2012 Virtual and Physical Prototyping  
A depth map is a data structure containing the distance from the sensor of each acquired point (pixel) along a direction perpendicular to the image plane.  ...  It is important to underline that the tracking algorithm is performed by the depth camera only, so the coordinates of the tracked points can be easily transformed from the image plane and depth distance  ...  In order to increase the level of realism, the objects in the scene have to behave in a physically correct way. In particular, they have to move according to Newton's laws.  ... 
doi:10.1080/17452759.2012.682332 fatcat:4x3cwji6crejbfnlqufgn757ea
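The depth map described above stores, per pixel, the distance along the axis perpendicular to the image plane, so each pixel can be back-projected into camera space with standard pinhole intrinsics. A minimal sketch (the intrinsic values are hypothetical, loosely modelled on a consumer depth sensor):

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Lift a pixel (u, v) with depth z into a 3D camera-space point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical intrinsics for a 640x480 depth image.
P = backproject(400, 300, 1.5, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```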

Real time gaze estimation with a consumer depth camera

Li Sun, Zicheng Liu, Ming-Ting Sun
2015 Information Sciences  
In particular, the proposed system uses only a consumer depth camera (Kinect sensor) positioned at a distance from the subject.  ...  A parameterized iris model is then used to locate the center of the iris for gaze feature extraction, which can handle low-quality eye images.  ...  image plane, c) anchor point position p_a on the image plane and d) depth value z_a of the anchor point P_a in the camera space.  ... 
doi:10.1016/j.ins.2015.02.004 fatcat:v3ylj6grs5aivcmtkz6h5p2ivu

Tracking an RGB-D Camera Using Points and Planes

Esra Ataer-Cansizoglu, Yuichi Taguchi, Srikumar Ramalingam, Tyler Garaas
2013 IEEE International Conference on Computer Vision Workshops  
By fitting planes, we implicitly take care of the noise in the depth data that is typical in many commercially available 3D sensors.  ...  The inliers are used to refine the plane equation, resulting in the corresponding plane measurement π_k^j.  ... 
doi:10.1109/iccvw.2013.14 dblp:conf/iccvw/CansizogluTRG13 fatcat:or33rp4qzrahjmtoa42ng43sfy
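The refinement step quoted above, refitting the plane equation to its inlier points, is commonly done as a total-least-squares fit via SVD. A minimal sketch under that assumption (the sample points are synthetic):

```python
import numpy as np

def refine_plane(inliers):
    """Refit a plane n·x + d = 0 to inlier points by total least squares (SVD)."""
    pts = np.asarray(inliers, dtype=float)
    centroid = pts.mean(axis=0)
    # The normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    d = -n.dot(centroid)
    return n, d

# Noisy samples from the plane z = 1 (normal is ±[0, 0, 1]).
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(100, 2))
z = 1.0 + rng.normal(0.0, 1e-3, size=100)
n, d = refine_plane(np.column_stack([xy, z]))
```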

A Multisource Heterogeneous Data Fusion Method for Pedestrian Tracking

Zhenlian Shi, Yanfeng Sun, Linxin Xiong, Yongli Hu, Baocai Yin
2015 Mathematical Problems in Engineering  
In our method, a RGB-D sequence is used to position the target locally by fusing the texture and depth features.  ...  A camera calibration process is used to map the inertial sensor position onto the video image plane, where the visual tracking position and the mapped position are fused using a similarity feature to obtain  ...  Figure 7: Binary image representation of the depth feature. Inertial sensor position with depth correction.  ... 
doi:10.1155/2015/150541 fatcat:j7dzvqhqkndw5fiwvrs5d5puaq

OpenPTrack: Open source multi-camera calibration and people tracking for RGB-D camera networks

Matteo Munaro, Filippo Basso, Emanuele Menegatti
2016 Robotics and Autonomous Systems  
It allows tracking people in large volumes at sensor frame rate and currently supports a heterogeneous set of 3D sensors.  ...  Here we detail how a cascade of algorithms working on depth point clouds and color, infrared and disparity images is used to perform people detection from different types of sensors and in any indoor light  ...  Acknowledgements The authors would like to thank Jeff Burke, Alexander Horn and Randy Illum for the extensive collaboration in designing and testing OpenPTrack.  ... 
doi:10.1016/j.robot.2015.10.004 fatcat:xb3dgwn7offhhjbvoc63frarai

Micropositioning of a weakly calibrated microassembly system using coarse-to-fine visual servoing strategies

S.J. Ralis, B. Vikramaditya, B.J. Nelson
2000 IEEE Transactions on Electronics Packaging Manufacturing  
The combination of robust visual tracking and depth estimation within a supervisory control architecture is used to perform high-speed, automatic microinsertions in three dimensions.  ...  The supervisory logic-based controller selects the relevant sensor and tracking strategy to be used at a particular stage in the assembly process, allowing the system to take full advantage of the individual  ...  the vision sensor as well as the number of features tracked and their locations on the image plane; and is a velocity vector in task space.  ... 
doi:10.1109/6104.846935 fatcat:wga3v4pw3re57isxpyr6yyoc2u

Hand Gesture Recognition with Leap Motion [article]

Youchen Du, Shenglan Liu, Lin Feng, Menghui Chen, Jie Wu
2017 arXiv   pre-print
A series of features is extracted from the Leap Motion tracking data; we feed these features along with a HOG feature extracted from sensor images into a multi-class SVM classifier to recognize the performed gesture  ...  The recent introduction of depth cameras like the Leap Motion Controller allows researchers to exploit the depth information to recognize hand gestures more robustly.  ...  Comparison among tracking-data features: we reconstruct the calculations of features like A, D, E based on equation 1, equation 2 and equation  ...  From Table 1, we observe that both A+D+T and A+D+E+T outperform  ... 
arXiv:1711.04293v1 fatcat:gewjauapvzcjnjz7obcwvjcrfq

3D depth image analysis for indoor fall detection of elderly people

Lei Yang, Yanyun Ren, Wenqiang Zhang
2016 Digital Communications and Networks  
This paper presents a new fall detection method of elderly people in a room environment based on shape analysis of 3D depth images captured by a Kinect sensor.  ...  The initial floor plane information is obtained by V disparity map, and the floor plane equation is estimated by the least square method.  ...  Then the floor plane equation is estimated by the least squares method as follows.  ... 
doi:10.1016/j.dcan.2015.12.001 fatcat:3clcqromgbagno5ag3wpk4llzi
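The entry above estimates the floor plane equation by the least-squares method. A minimal sketch of one common formulation, fitting z = a·x + b·y + c to candidate floor points (the sample data are synthetic):

```python
import numpy as np

def fit_floor_plane(points):
    """Estimate the floor plane z = a*x + b*y + c by ordinary least squares."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef  # (a, b, c)

# Synthetic points sampled from the plane z = 0.1*x - 0.2*y + 1.0.
rng = np.random.default_rng(1)
xy = rng.uniform(-2.0, 2.0, size=(50, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 1.0
coef = fit_floor_plane(np.column_stack([xy, z]))
```

Note that this parameterization cannot represent vertical planes, which is acceptable for a floor.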

Force and vision resolvability for assimilating disparate sensory feedback

B.J. Nelson, P.K. Khosla
1996 IEEE Transactions on Robotics and Automation  
Depth can be resolved using a single feature, but not accurately relative to directions parallel to the image plane. Figure 11 shows a plot of resolvability in depth versus baseline and depth.  ...  Feature tracking: the measurement of the motion of the features on the image plane must be done continuously and quickly.  ... 
doi:10.1109/70.538976 fatcat:qtnljztot5gzzk6agaymkr6of4