7,150 Hits in 8.4 sec

Using mutual information to indicate facial poses in video sequences

Georgios Goudelis, Anastasios Tefas, Ioannis Pitas
2009 Proceedings of the ACM International Conference on Image and Video Retrieval - CIVR '09  
The proposed method uses a novel pose estimation algorithm based on mutual information to extract any required facial pose from video sequences.  ...  Estimation of the facial pose in video sequences is one of the major issues in many vision systems such as face-based biometrics, scene understanding for humans, and others.  ...  It is based on mutual information and evaluates the information content of each facial image (contained in a video frame) of facial poses in comparison to a given ground truth image.  ... 
doi:10.1145/1646396.1646429 dblp:conf/civr/GoudelisTP09 fatcat:fdmdaxhz4fbfxlonzt6yepdg4a
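Several of the results above use mutual information (MI) as a similarity measure between facial images. A minimal sketch of how such a score can be computed from a joint intensity histogram (the function name and bin count are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two equally sized
    grayscale images from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An image compared with itself yields its own (binned) entropy, while two unrelated images score near zero, which is what makes MI usable as a pose-matching criterion against a ground-truth image.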

Automated Facial Pose Extraction From Video Sequences Based on Mutual Information

G. Goudelis, A. Tefas, I. Pitas
2008 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
The proposed method uses a novel pose estimation algorithm based on mutual information to extract any required facial poses from video sequences.  ...  Estimation of the facial pose in video sequences is one of the major issues in many vision systems such as face-based biometrics, scene understanding for humans, and others.  ... 
doi:10.1109/tcsvt.2008.918457 fatcat:5nukkj6hdnfilbgukdgfesx6dy

Face Recognition and Retrieval in Video [chapter]

Caifeng Shan
2010 Studies in Computational Intelligence  
Video data provides rich and redundant information, which can be exploited to resolve the inherent ambiguities of image-based recognition like sensitivity to low resolution, pose variations and occlusion  ...  Face recognition has also been considered in the content-based video retrieval setup, for example, character-based video search.  ...  Psychological studies [60, 83, 95] indicate that facial dynamics play an important role in the face recognition process, and both static and dynamic facial information are used in the human visual system  ... 
doi:10.1007/978-3-642-12900-1_9 fatcat:53dy35g7hzaaxocwoymcpad5s4

Empirical mode decomposition-based facial pose estimation inside video sequences

Chunmei Qing
2010 Optical Engineering: The Journal of SPIE  
While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode functions (IMFs)  ...  We describe a new pose-estimation algorithm via integration of the strengths of both empirical mode decomposition (EMD) and mutual information.  ...  Recently, mutual information (MI) has been used to extract facial poses from video sequences [8]. MI is widely used as a powerful tool for finding similarities between two entities.  ... 
doi:10.1117/1.3359510 fatcat:nxewtysmxnbvjhjrkqpi4papqq

Probabilistic Multiple Face Detection and Tracking Using Entropy Measures

E. Loutas, I. Pitas, C. Nikou
2004 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
The likelihood estimation process is the core of a multiple face detection scheme used to initialize the tracking process.  ...  The resulting system was tested on real image sequences and is robust to significant partial occlusion and illumination changes.  ...  The image sequences were obtained using a simple video-conference camera. They can contain multiple faces per video shot.  ... 
doi:10.1109/tcsvt.2003.819178 fatcat:fp2zfgcy7jbzlelokdnbvltncy

Voice puppetry

Matthew Brand
1999 Proceedings of the 26th annual conference on Computer graphics and interactive techniques - SIGGRAPH '99  
We introduce a method for predicting a control signal from another related signal, and apply it to voice puppetry: generating full facial animation from expressive information in an audio track.  ...  Animation is produced by using audio to drive the model, which induces a probability distribution over the manifold of possible facial motions.  ...  Psychologists and storytellers alike have observed that there is a good deal of mutual information between vocal and facial gesture [27].  ... 
doi:10.1145/311535.311537 dblp:conf/siggraph/Brand99 fatcat:eavefpn2hnar3oszbwfmj5hbrm

Detecting local audio-visual synchrony in monologues utilizing vocal pitch and facial landmark trajectories

Steven Cadavid, Mohamed Abdel-Mottaleb, Daniel S. Messinger, Mohammad H. Mahoor, Lorraine E. Bahrick
2009 Procedings of the British Machine Vision Conference 2009  
The synchrony between the audio and visual feature vectors is computed using Gaussian mutual information.  ...  Experimental results indicate that the proposed approach is successful in detecting facial regions that demonstrate synchrony, and in distinguishing between synchronous and asynchronous sequences.  ...  We use this measure of Gaussian mutual information to compute the temporal contingency between a video and audio signal.  ... 
doi:10.5244/c.23.10 dblp:conf/bmvc/CadavidAMMB09 fatcat:j7sj3l3mkrhx7d45t5qh2qrnae
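The Gaussian mutual information used above as an audio-visual synchrony measure has a closed form for jointly Gaussian vectors: I(X;Y) = ½ log(det Σxx · det Σyy / det Σ), where Σ is the joint covariance. A sketch of estimating it from paired feature sequences (function name and array shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def gaussian_mutual_information(x, y):
    """Gaussian MI estimate between feature sequences x (n, dx)
    and y (n, dy): 0.5 * log(det(Cxx) * det(Cyy) / det(C)),
    with C the sample covariance of the stacked features."""
    dx = x.shape[1]
    c = np.cov(np.hstack([x, y]), rowvar=False)  # joint covariance
    _, logdet = np.linalg.slogdet(c)
    _, logdet_x = np.linalg.slogdet(c[:dx, :dx])
    _, logdet_y = np.linalg.slogdet(c[dx:, dx:])
    return 0.5 * (logdet_x + logdet_y - logdet)
```

Strongly correlated sequences (e.g., vocal pitch and lip landmarks moving together) give a large value, while independent sequences score near zero, which is what allows thresholding it as a synchrony detector.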

High detail flexible viewpoint facial video from monocular input using static geometric proxies

Markus Kettern, David Blumenthal-Barby, Peter Eisert
2013 Proceedings of the 6th International Conference on Computer Vision / Computer Graphics Collaboration Techniques and Applications - MIRAGE '13  
We use the term flexible-viewpoint to indicate that the viewpoint can be arbitrarily chosen (as in free-viewpoint video), but from a restricted set of viewing directions.  ...  Furthermore, we show how model-based tracking over the whole video sequence provides precise head pose estimates for each video frame.  ...  Acknowledgement: The work presented in this paper has been funded by the Seventh Framework Programme EU project RE@CT (grant agreement no. 288369).  ... 
doi:10.1145/2466715.2466716 dblp:conf/mirage/KetternBE13 fatcat:7bvhzplemjhz7krmtxxhxt24ua

A Novel Joint Chaining Graph Model for Human Pose Estimation on 2D Action Videos and Facial Pose Estimation on 3D Images

D. Ratna Kishore, M. Chandra Mohan, Akepogu Ananda Rao
2017 International Journal of Image, Graphics and Signal Processing  
To overcome these issues, we introduce an ensemble chaining graph model to estimate arbitrary human poses in 2D video sequences and evaluate facial expressions in 3D images.  ...  Joint human pose estimation in 2D motion video sequences and 3D facial pose estimation are challenging issues in computer vision due to noise, large deformation, illumination and complex backgrounds.  ...  Probabilistic-based Human Pose Detection in 2D Action Video Sequences: In this proposed model, 2D video action sequences are used to detect human poses accurately using an enhanced chaining graph model with  ... 
doi:10.5815/ijigsp.2017.03.03 fatcat:b7whitsqmjeapocntasninwybq

Dynamics of facial actions for assessing smile genuineness

Michal Kawulok, Jakub Nalepa, Jolanta Kawulok, Bogdan Smolka, Zezhi Li
2021 PLoS ONE  
attributed to the use of facial action units.  ...  In this work, we explore the possibilities of extracting discriminative features directly from the dynamics of facial action units to differentiate between genuine and posed smiles.  ...  From a video sequence, facial AUs are recognized (in a frame-wise manner, indicated by multiple blocks in the diagram), and the smile intensity is estimated for every frame to determine smile onset, apex  ... 
doi:10.1371/journal.pone.0244647 pmid:33400708 fatcat:5kcp4i7ts5hbzhgjbf4hrfw5ta

FACE RECOGNITION FROM VIDEO: A REVIEW

JEREMIAH R. BARR, KEVIN W. BOWYER, PATRICK J. FLYNN, SOMA BISWAS
2012 International journal of pattern recognition and artificial intelligence  
The ensuing results have demonstrated that videos possess unique properties that allow both humans and automated systems to perform recognition accurately in difficult viewing conditions.  ...  An overview of the most popular and difficult publicly available face video databases is provided to complement these discussions.  ...  The opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of our sponsors.  ... 
doi:10.1142/s0218001412660024 fatcat:xztw7hmpsjacbogyn22axiq4tq

Identifying User-Specific Facial Affects from Spontaneous Expressions with Minimal Annotation

Michael Xuelin Huang, Grace Ngai, Kien A. Hua, Stephen C.F. Chan, Hong Va Leong
2016 IEEE Transactions on Affective Computing  
The conventional approach relies on the use of key frames in recorded affect sequences and requires an expert observer to identify and annotate the frames.  ...  The most indicative facial gestures are identified and extracted from the facial response video, and the association between gesture and affect labels is determined by the distribution of the gesture over  ...  ACKNOWLEDGMENT The authors wish to thank the experiment subjects for their time and effort.  ... 
doi:10.1109/taffc.2015.2495222 fatcat:6h4d72q3ybcldgate4mmin7obu

Physiological parameter monitoring of drivers based on video data and independent vector analysis

Zhenyu Guo, Z. Jane Wang, Zhiqi Shen
2014 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
Here we propose using advanced facial landmark and pose estimation, and independent vector analysis to extract heart rate variability.  ...  In this paper, to maintain the driver's comfort and enhance driving safety, we propose a non-contact, video-based approach to continuously monitor the driver's heart rate variability under real-world  ...  Since the Viola & Jones face detector used in previous methods failed in the testing video, we use the facial landmark estimator to get the whole facial area for the ICA algorithm.  ... 
doi:10.1109/icassp.2014.6854428 dblp:conf/icassp/GuoWS14 fatcat:gbx4noccoja5hecaxjme7d2vky

Mutual Information Regularized Identity-aware Facial ExpressionRecognition in Compressed Video [article]

Xiaofeng Liu, Linghao Jin, Xu Han, Jane You
2021 arXiv   pre-print
In this paper, we aim to learn a facial expression representation in the compressed video domain from which inter-subject variations are eliminated.  ...  Specifically, we propose a novel collaborative min-min game for mutual information (MI) minimization in latent space.  ...  Specifically, the video in CK+ [34, 53] consists of a sequence which shifts from the neutral expression to an apex facial expression.  ... 
arXiv:2010.10637v2 fatcat:xykdonwk4ffezfspqyp7rx6oxq

S3VAE: Self-Supervised Sequential VAE for Representation Disentanglement and Data Generation

Yizhe Zhu, Martin Renqiang Min, Asim Kadav, Hans Peter Graf
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We propose a sequential variational autoencoder to learn disentangled representations of sequential data (e.g., video and audio) under self-supervision.  ...  With the supervision of the signals, our model can easily disentangle the representation of an input sequence into static factors and dynamic factors (i.e., time-invariant and time-varying parts).  ...  The characters in the generated videos of Monkeynet fail to follow the pose and action in V m , and many artifacts appear.  ... 
doi:10.1109/cvpr42600.2020.00657 dblp:conf/cvpr/ZhuMKG20 fatcat:xzf6a7pg2zdjziykxslnc3fqre
Showing results 1 — 15 out of 7,150 results