
HumanEva: Synchronized Video and Motion Capture Dataset and Baseline Algorithm for Evaluation of Articulated Human Motion

Leonid Sigal, Alexandru O. Balan, Michael J. Black
2009 International Journal of Computer Vision  
A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms.  ...  We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle  ...  This project was supported in part by gifts from Honda Research Institute and Intel Corporation. Funding for portions of this work was also provided by NSF grants IIS-0534858 and IIS-0535075.  ... 
doi:10.1007/s11263-009-0273-6 fatcat:owba5vjntrcjnm4ks6jvq4xn7y

Survey on Video Analysis of Human Walking Motion

S. Nissi Paul, Y. Jayanta Singh
2014 International Journal of Signal Processing, Image Processing and Pattern Recognition  
The task of analyzing human walking can be divided into three distinct subtasks: human detection or segmentation, motion tracking, and walking pose analysis.  ...  This paper presents a survey of different methodologies used for human walking motion analysis, approaches used for human detection or segmentation, various tracking methods, approaches for pose estimation  ...  applications refer to the task of full body motion capture without the need of markers or any specialist suits.  ... 
doi:10.14257/ijsip.2014.7.3.10 fatcat:fr7h75fowraffnbo4v3gt7v45e

Capturing Detailed Deformations of Moving Human Bodies [article]

He Chen, Hyojoon Park, Kutay Macit, Ladislav Kavan
2021 arXiv   pre-print
The key idea behind our system is a new type of motion capture suit which contains a special pattern with checkerboard-like corners and two-letter codes.  ...  Our experiments demonstrate highly accurate captures of a wide variety of human poses, including challenging motions such as yoga, gymnastics, or rolling on the ground.  ...  ACKNOWLEDGMENTS We thank Marianne Kavan and Katey Blumenthal for their performances and consultation on applications in medicine; to Daniel  ... 
arXiv:2102.07343v3 fatcat:fhs67vamdjhylp57bz6xjst4xy

A system for articulated tracking incorporating a clothing model

Bodo Rosenhahn, Uwe Kersting, Katie Powell, Reinhard Klette, Gisela Klette, Hans-Peter Seidel
2006 Machine Vision and Applications  
Pose results are compared with results obtained from a commercially available marker based tracking system.  ...  This leads to a simultaneous estimation of pose, joint angles, cloth draping parameters and wind forces. An error functional is formalized to minimize the involved parameters simultaneously.  ...  Acknowledgements We gratefully acknowledge funding by the DFG project RO2497/1 and the Max-Planck Center for visual computing and communication.  ... 
doi:10.1007/s00138-006-0046-y fatcat:m4rfotn45fb2dlnowfv36mfira

The Visual Analysis of Human Movement: A Survey

D.M Gavrila
1999 Computer Vision and Image Understanding  
The ability to recognize humans and their activities by vision is key for a machine to interact intelligently and effortlessly with a human-inhabited environment.  ...  This survey identifies a number of promising applications and provides an overview of recent developments in this domain.  ...  Dorner [21] tracked articulated 3-D hand motion (palm motion and finger bending/unbending) with a single camera. Her system requires colored markers on the joints and cannot handle occlusions.  ... 
doi:10.1006/cviu.1998.0716 fatcat:ep75ekg4h5elzebmm3pasqlqdy

Markerless Articulated Human Body Tracking from Multi-view Video with GPU-PSO [chapter]

Luca Mussi, Spela Ivekovic, Stefano Cagnoni
2010 Lecture Notes in Computer Science  
In this paper, we describe the GPU implementation of a markerless full-body articulated human motion tracking system from multi-view video sequences acquired in a studio environment.  ...  We model the human body pose with a skeleton-driven subdivision-surface human body model.  ...  A. Hilton from the CSSVP, University of Surrey, for the test sequences, and Mr A. Patney from University of California, Davis, for sharing his CUDA implementation of the Catmull-Clark subdivision. S.  ... 
doi:10.1007/978-3-642-15323-5_9 fatcat:xtln4kqynrcgbkb72ithkaqxke

Learning Markerless Human Pose Estimation from Multiple Viewpoint Video [chapter]

Matthew Trumble, Andrew Gilbert, Adrian Hilton, John Collomosse
2016 Lecture Notes in Computer Science  
A manifold embedding is learned via Gaussian Processes for the CNN descriptor and articulated pose spaces enabling regression and so estimation of human pose from MVV input.  ...  We present a novel human performance capture technique capable of robustly estimating the pose (articulated joint positions) of a performer observed passively via multiple view-point video (MVV).  ...  The Ballet dataset is courtesy of the EU FP7 RE@CT project.  ... 
doi:10.1007/978-3-319-49409-8_70 fatcat:w5xpa3wiqvfq5jfs3sbt6uirli

4D Mesh Reconstruction from Time-Varying Voxelized Geometry through ARAP Tracking [article]

Ludovic Blache, Mathieu Desbrun, Celine Loscos, Laurent Lucas
2015 Eurographics State of the Art Reports  
We present a method to derive a time-evolving triangle mesh representation from a sequence of binary volumetric data representing an arbitrary motion.  ...  We propose an automated tracking approach to convert the raw input sequence into a single, animated mesh.  ...  The resulting series of static poses are not well-suited for subsequent editing as they are devoid of any temporal coherency.  ... 
doi:10.2312/egsh.20151002 fatcat:zenuwfwswbeizajsqoic6vyor4

Towards Implicit Correspondence in Signed Distance Field Evolution

Miroslava Slavcheva, Maximilian Baust, Slobodan Ilic
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
We demonstrate that our system is able to preserve texture throughout articulated motion sequences, and evaluate its geometric accuracy on public data.  ...  We propose an energy functional based on a novel data term, which aligns the lowest-frequency Laplacian eigenfunction representations of the input and target shapes.  ...  It, however, does not preserve correspondences and is therefore ill-suited for tasks such as surface registration and motion tracking. Pons et al.  ... 
doi:10.1109/iccvw.2017.103 dblp:conf/iccvw/SlavchevaBI17 fatcat:zaqweb6vrvhkblklip66hw4io4

Folding paper with anthropomorphic robot hands using real-time physics-based modeling

Christof Elbrechter, Robert Haschke, Helge Ritter
2012 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012)  
The first concerns real-time modeling and visual tracking. Our technique not only models the bending of a sheet of paper, but also paper crease lines which allows us to monitor deformations.  ...  The ability to manipulate deformable objects, such as textiles or paper, is a major prerequisite to bringing the capabilities of articulated robot hands closer to the level of manual intelligence exhibited  ...  While there exist good state-of-the-art solutions for rigid objects, the implementation of real-time vision frameworks that can track articulated or deformable objects still poses many challenges.  ... 
doi:10.1109/humanoids.2012.6651522 dblp:conf/humanoids/ElbrechterHR12 fatcat:vtfyeqpfmraq7lxravhx444hge

A clustering compression method for 3D Human motion capture data

hou Kai, ian Feng, ao Guo, en Zhong
2014 2014 9th International Conference on Computer Science & Education  
In this paper, a compression method for 3D human motion data is proposed. We represent and compress the motion data using the clustering method and principal component analysis.  ...  The compressed data is adapted to network transmission with shorter time in order to maximize the use of network bandwidth and computational performance of local machines.  ...  We also would like to thank the anonymous reviewers for their helpful comments and suggestions.  ... 
doi:10.1109/iccse.2014.6926568 fatcat:jyvo7ztcnzb2zjax62wy7japye

Real time motion capture using a single time-of-flight camera

Varun Ganapathi, Christian Plagemann, Daphne Koller, Sebastian Thrun
2010 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition  
Markerless tracking of human pose is a hard yet relevant problem.  ...  In this paper, we derive an efficient filtering algorithm for tracking human pose at 4-10 frames per second using a stream of monocular depth images.  ...  This work was supported by NSF (ISS 0917151), MURI (N000140710747), and the Boeing company.  ... 
doi:10.1109/cvpr.2010.5540141 dblp:conf/cvpr/GanapathiPKT10 fatcat:mqqf6xugffagbdqpi3pznzeicq

Fast and robust hand tracking using detection-guided optimization

Srinath Sridhar, Franziska Mueller, Antti Oulasvirta, Christian Theobalt
2015 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
In this paper, we present a fast method for accurately tracking rapid and complex articulations of the hand using a single depth camera.  ...  Markerless tracking of hands and fingers is a promising enabler for human-computer interaction.  ...  Acknowledgments: This research was funded by the ERC Starting Grant projects CapReal (335545) and COM-PUTED (637991), and the Academy of Finland. We would like to thank Christian Richardt.  ... 
doi:10.1109/cvpr.2015.7298941 dblp:conf/cvpr/0002MOT15 fatcat:2fvmhiodkndnjktuie5f33wzji

A Probabilistic Framework for Learning Kinematic Models of Articulated Objects

J. Sturm, C. Stachniss, W. Burgard
2011 The Journal of Artificial Intelligence Research  
In particular, we present a set of parametric and non-parametric edge models and how they can robustly be estimated from noisy pose observations.  ...  Further, we demonstrate that our approach has a broad set of applications, in particular for the emerging fields of mobile manipulation and service robotics.  ...  Further, the authors would like to thank Vijay Pradeep and Kurt Konolige from Willow Garage who inspired the authors to work on this subject, and contributed to the experiments with the motion capture  ... 
doi:10.1613/jair.3229 fatcat:bxxtovothrf7xafckk4rojcsum

3D Human Pose Estimation in RGBD Images for Robotic Task Learning [article]

Christian Zimmermann, Tim Welschehold, Christian Dornhege, Wolfram Burgard, Thomas Brox
2018 arXiv   pre-print
We combine the system with our learning from demonstration framework to instruct a service robot without the need of markers.  ...  We propose an approach to estimate 3D human pose in real world units from a single RGBD image and show that it exceeds performance of monocular 3D pose estimation approaches from color as well as pose  ...  [15] use an articulated model of the human body to track teacher actions.  ... 
arXiv:1803.02622v2 fatcat:nwqmktqcc5c3tlufdoauqsm35m
Showing results 1 — 15 out of 2,331 results