
Visibility Constrained Generative Model for Depth-based 3D Facial Pose Tracking [article]

Lu Sheng, Jianfei Cai, Tat-Jen Cham, Vladimir Pavlovic, King Ngi Ngan
2019 arXiv pre-print
In this paper, we propose a generative framework that unifies depth-based 3D facial pose tracking and on-the-fly face model adaptation in unconstrained scenarios with heavy occlusions and arbitrary  ...  Moreover, unlike prior art that employed ICP-based facial pose estimation, we propose, to improve robustness to occlusions, a ray visibility constraint that regularizes the pose based on the face model's  ...  Occlusion handling is vital for robust 3D facial pose tracking.  ...
arXiv:1905.02114v1

A Generative Model for Depth-Based Robust 3D Facial Pose Tracking

Lu Sheng, Jianfei Cai, Tat-Jen Cham, Vladimir Pavlovic, King Ngi Ngan
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We consider the problem of depth-based robust 3D facial pose tracking under unconstrained scenarios with heavy occlusions and arbitrary facial expression variations.  ...  Unlike previous depth-based discriminative or data-driven methods that require sophisticated training or manual intervention, we propose a generative framework that unifies pose tracking and face model  ...  Related Work With the popularity of consumer-level depth sensors, apart from RGB-based facial pose tracking systems [22, 3, 16, 4, 14, 15, 21, 33, 42], a variety of 3D facial pose tracking and  ...
doi:10.1109/cvpr.2017.489 dblp:conf/cvpr/ShengCCPN17

HeadFusion: 360° Head Pose tracking combining 3D Morphable Model and 3D Reconstruction

Yu Yu, Kenneth Funes Mora, Jean-Marc Odobez
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses, which precludes a large set of applications  ...  Index Terms: Head pose, 3D head reconstruction, 3D morphable model.  ...  of samples, able to provide very accurate head pose estimates for near-frontal head poses, but which has difficulty tracking heads otherwise; • an online reconstructed 3D head model based on a  ...
doi:10.1109/tpami.2018.2841403 pmid:29993569

Robust and Accurate 3D Head Pose Estimation through 3DMM and Online Head Model Reconstruction

Yu Yu, Kenneth Alberto Funes Mora, Jean-Marc Odobez
2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)
Accurate and robust 3D head pose estimation is important for face-related analysis.  ...  The approach includes a robust online 3DMM fitting step based on multi-view observation samples, as well as smooth and face-neutral synthetic samples generated from the reconstructed 3D head model.  ...  A number of depth-based approaches have been proposed for head pose estimation [7]–[9].  ...
doi:10.1109/fg.2017.90 dblp:conf/fgr/YuMO17

A Combined Generalized and Subject-Specific 3D Head Pose Estimation

David Joseph Tan, Federico Tombari, Nassir Navab
2015 International Conference on 3D Vision
We propose a real-time method for 3D head pose estimation from RGB-D sequences.  ...  Such a framework is learned once from a generic dataset of 3D head models and refined online to adapt the forest to the specific characteristics of each subject.  ...  Generalized model-based tracking: from a set of CAD models for different subjects, the objective is to build a generalized temporal tracker based on Random Forests [3] that performs the pose estimation  ...
doi:10.1109/3dv.2015.62 dblp:conf/3dim/TanTN15

Real-time 3D face tracking based on active appearance model constrained by depth data

Nikolai Smolyanskiy, Christian Huitema, Lin Liang, Sean Eron Anderson
2014 Image and Vision Computing  
The Active Appearance Model (AAM) is an algorithm for fitting a generative model of object shape and appearance to an input image.  ...  AAMs allow accurate, real-time tracking of human faces in 2D and can be extended to track faces in 3D by constraining the fitting with a linear 3D morphable model.  ...  The constrained 2D + 3D AAM produces only valid facial alignments and estimates 3D tracking parameters (head pose, expressions).  ...
doi:10.1016/j.imavis.2014.08.005

A review of motion analysis methods for human Nonverbal Communication Computing

Dimitris Metaxas, Shaoting Zhang
2013 Image and Vision Computing  
In general, nonverbal communication research offers high-level principles that might explain how people organize, display, adapt and understand such behaviors for communicative purposes and social goals  ...  Models of nonverbal behaviors in interaction are essential for collaboration tools, human-computer and virtual interaction and other assistive technologies designed to support people in real-world activities  ...  Acknowledgments The authors would like to thank all the reviewers for their constructive suggestions.  ... 
doi:10.1016/j.imavis.2013.03.005

FaceCept3D: Real Time 3D Face Tracking and Analysis

Sergey Tulyakov, Radu-Laurentiu Vieriu, Enver Sangineto, Nicu Sebe
2015 IEEE International Conference on Computer Vision Workshop (ICCVW)
We present an open-source, cross-platform technology for 3D face tracking and analysis.  ...  It contains a full stack of components for complete face understanding: detection, head pose tracking, and facial expression and action unit recognition.  ...  In other words, facial expressions in general (and Ekman's six prototypical ones in  ...  A pipeline for tracking the head pose and recognizing facial expressions.  ...
doi:10.1109/iccvw.2015.13 dblp:conf/iccvw/TulyakovVSS15

Robust facial landmark detection and tracking across poses and expressions for in-the-wild monocular video

Shuang Liu, Yongqiang Zhang, Xiaosong Yang, Daming Shi, Jian J. Zhang
2017 Computational Visual Media  
We present a novel approach for automatically detecting and tracking facial landmarks across poses and expressions in in-the-wild monocular video data, e.g., YouTube videos and smartphone recordings.  ...  Since 2D regression-based methods are sensitive to unstable initialization, and the temporal and spatial coherence of videos is ignored, we utilize a coarse-to-dense 3D facial expression reconstruction  ...  Acknowledgements This work was supported by the Harbin Institute of Technology Scholarship Fund 2016 and the National Centre for Computer Animation, Bournemouth University.  ...
doi:10.1007/s41095-016-0068-y

Reconstructing detailed dynamic face geometry from monocular video

Pablo Garrido, Levi Valgaerts, Chenglei Wu, Christian Theobalt
2013 ACM Transactions on Graphics  
Our approach tracks accurate sparse 2D features between automatically selected key frames to animate a parametric blend shape model, which is further refined in pose, expression and shape by temporally  ...  Our approach captures detailed, dynamic, spatio-temporally coherent 3D face geometry without the need for markers.  ...  Acknowledgements We gratefully acknowledge all our actors for their participation in the recordings and thank the reviewers for their helpful comments.  ... 
doi:10.1145/2508363.2508380

Facial Landmark Detection: A Literature Survey

Yue Wu, Qiang Ji
2018 International Journal of Computer Vision  
We classify facial landmark detection algorithms into three major categories: holistic methods, Constrained Local Model (CLM) methods, and regression-based methods.  ...  They are hence important for various facial analysis tasks.  ...  Then, a 3D template is matched to the testing face to estimate the 3D locations of 20 landmarks. In [69], dense 3D features are estimated from the 3D point cloud generated with a depth sensor.  ...
doi:10.1007/s11263-018-1097-z

Dense 3D face alignment from 2D video for real-time use

László A. Jeni, Jeffrey F. Cohn, Takeo Kanade
2017 Image and Vision Computing  
The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model.  ...  The software is available online at  ...  representational power and robustness to illumination and pose, but are not feasible for generic fitting and real-time use.  ...  [24] propose a method for regressing facial landmarks from 2D video. Pose and facial expression are recovered by fitting a user-specific blendshape model to them.  ...
doi:10.1016/j.imavis.2016.05.009 pmid:29731533 pmcid:PMC5931713

Unconstrained realtime facial performance capture

Pei-Lun Hsieh, Chongyang Ma, Jihun Yu, Hao Li
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We introduce a realtime facial tracking system specifically designed for performance capture in unconstrained settings using a consumer-level RGB-D sensor.  ...  Our framework provides uninterrupted 3D facial tracking, even in the presence of extreme occlusions such as those caused by hair, hand-to-face gestures, and wearable accessories.  ...  Acknowledgements We would like to thank Mikhail Smirnov for architecting the codebase; Ethan Yu, Frances Chen, Iris Wu, and Lamont Grant for being our capture models; Justin Solomon for the fruitful discussions  ... 
doi:10.1109/cvpr.2015.7298776 dblp:conf/cvpr/HsiehMYL15

Real-time facial feature tracking from 2D+3D video streams

Filareti Tsalakanidou, Sotiris Malassiotis
2010 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video
It is based on local feature detectors constrained by a 3D shape model, using techniques that make it robust under pose variation and partial occlusion.  ...  This paper presents a completely automated 3D facial feature tracking system using 2D+3D image sequences recorded by a real-time 3D sensor.  ...  To achieve real-time performance we use feature-based 3D pose estimation followed by iterative tracking of 81 facial points using local appearance and surface geometry information.  ...
doi:10.1109/3dtv.2010.5506261

Facial performance sensing head-mounted display

Hao Li, Laura Trutoiu, Kyle Olszewski, Lingyu Wei, Tristan Trutna, Pei-Lun Hsieh, Aaron Nicholls, Chongyang Ma
2015 ACM Transactions on Graphics  
To map the input signals to a 3D face model, we perform a single-instance offline training session for each person.  ...  The resulting animations are visually on par with cutting-edge depth-sensor-driven facial performance capture systems and hence are suitable for social interactions in virtual worlds.  ...  The authors would also like to thank Chris Twigg and Douglas Lanman for their help in revising the paper, Gio Nakpil and Scott Parish for the 3D models, and Fei Sha for the discussions on machine learning  ...
doi:10.1145/2766939