Joint Representation and Estimator Learning for Facial Action Unit Intensity Estimation

Yong Zhang, Baoyuan Wu, Weiming Dong, Zhifeng Li, Wei Liu, Bao-Gang Hu, Qiang Ji
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Facial action unit (AU) intensity is an index to characterize human expressions.  ...  First, rather than keeping the image representation fixed, it simultaneously learns the representation and the intensity estimator to achieve an optimal solution.  ...  Acknowledgments: This work is partially supported by the National Key R&D Program of China (Grant No. 2018YFC0807500) and by NSFC Nos. 61832016 and 61720106006.  ...
doi:10.1109/cvpr.2019.00357 dblp:conf/cvpr/ZhangWDLLHJ19 fatcat:plywf27qonbb3jhbzs5g2vry7u

Joint Action Unit localisation and intensity estimation through heatmap regression [article]

Enrique Sanchez-Lozano, Georgios Tzimiropoulos, Michel Valstar
2018 arXiv   pre-print
This paper proposes a supervised learning approach to jointly perform facial Action Unit (AU) localisation and intensity estimation.  ...  Contrary to previous works that try to learn an unsupervised representation of the Action Unit regions, we propose to directly and jointly estimate all AU intensities through heatmap regression, along  ...  Conclusions: In this paper, we have presented a simple yet efficient method for facial Action Unit intensity estimation, through jointly localising the Action Units along with their intensity estimation  ...
arXiv:1805.03487v2 fatcat:jk4tkx6qavhwdmbex3vvineg44

G2RL: Geometry-Guided Representation Learning for Facial Action Unit Intensity Estimation

Yingruo Fan, Zhaojiang Lin
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Facial action unit (AU) intensity estimation aims to measure the intensity of different facial muscle movements.  ...  To this end, we propose a novel geometry-guided representation learning (G2RL) method for facial AU intensity estimation.  ...  Facial Action Unit Intensity Estimation: The focus of most existing studies has been on facial expression recognition or facial action unit detection, whereas relatively few works have investigated  ...
doi:10.24963/ijcai.2020/102 dblp:conf/ijcai/FanL20 fatcat:snei4otj5fcupn2re6vef3ikzq

Deep Structured Learning for Facial Action Unit Intensity Estimation

Robert Walecki, Ognjen Rudovic, Vladimir Pavlovic, Bjoern Schuller, Maja Pantic
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units, AUs) that are structurally dependent.  ...  We show that joint learning of the deep features and the target output structure results in significant performance gains compared to existing deep structured models for analysis of facial expressions.  ...  Related Work (Facial Action Unit Intensity Estimation): Estimation of AU intensity is often posed as a multi-class problem approached using Neural Networks [16], Adaboost [2], SVMs [26], and belief  ...
doi:10.1109/cvpr.2017.605 dblp:conf/cvpr/WaleckiRPSP17 fatcat:e66unqqze5dghkaj76tbdifohe

A Transfer Learning approach to Heatmap Regression for Action Unit intensity estimation [article]

Ioanna Ntinou and Enrique Sanchez and Adrian Bulat and Michel Valstar and Georgios Tzimiropoulos
2020 arXiv   pre-print
Action Units (AUs) are geometrically-based atomic facial muscle movements known to produce appearance changes at specific facial locations.  ...  To accommodate the joint modelling of AU intensity, we propose variable-size heatmaps, with their amplitude and size varying according to the labelled intensity.  ...  Acknowledgments: The work of Ioanna Ntinou was supported by the Horizon Centre for Doctoral Training, School of Computer Science, University of Nottingham.  ...
arXiv:2004.06657v1 fatcat:jkwovffbuvetpgxe4qnqrjk6fy
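
The snippet above describes heatmaps whose amplitude and size vary with the labelled AU intensity. Below is a minimal NumPy sketch of that idea, assuming a 64x64 map, a hypothetical AU centre (cx, cy), and a discrete intensity label in 0-5; the linear scaling of amplitude and sigma is an illustrative assumption, not the authors' exact formulation.

    import numpy as np

    def variable_size_heatmap(cx, cy, intensity, size=64, base_sigma=1.5):
        # 2D Gaussian whose peak value and spatial extent both grow with the
        # labelled AU intensity (0..5); intensity 0 yields an empty map.
        # The linear scaling is an assumption for illustration only.
        ys, xs = np.mgrid[0:size, 0:size]
        amplitude = intensity / 5.0                 # peak value in [0, 1]
        sigma = base_sigma * (1.0 + intensity)      # wider blob for stronger AUs
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        return (amplitude * np.exp(-d2 / (2.0 * sigma ** 2))).astype(np.float32)

    # Example: an AU centred at (20, 32) with intensity level 3.
    hm = variable_size_heatmap(20, 32, intensity=3)
    print(hm.shape, float(hm.max()))

A regression network trained against such targets can then read off both where an AU fires and how strongly, which is the joint localisation-and-intensity idea this entry refers to.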

Copula Ordinal Regression for Joint Estimation of Facial Action Unit Intensity

Robert Walecki, Ognjen Rudovic, Vladimir Pavlovic, Maja Pantic
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Joint modeling of the intensity of facial action units (AUs) from face images is challenging due to the large number of AUs (30+) and their intensity levels (6).  ...  Consequently, the COR model achieves the joint learning and inference of intensities of multiple AUs, while being computationally tractable.  ...  Specifically, the FACS defines a unique set of 30+ atomic non-overlapping facial muscle actions named Action Units (AUs) [19].  ...
doi:10.1109/cvpr.2016.530 dblp:conf/cvpr/WaleckiRPP16 fatcat:6gg3rhr2jjer3l75cvmtt6uyoi
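
As background for the copula ordinal regression entry above, here is a minimal sketch of the per-AU ordinal (cumulative-logit) marginal that such models build on, assuming 6 intensity levels and hypothetical cut-point values; the copula that couples the AUs is not shown, so this illustrates the ordinal building block only.

    import numpy as np

    def ordinal_level_probs(score, thresholds):
        # Cumulative-logit model for one AU: P(y <= k) = sigmoid(theta_k - score).
        # `score` stands in for a scalar projection of the face features;
        # `thresholds` are 5 increasing cut-points separating the 6 levels.
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        cdf = np.concatenate(([0.0], sigmoid(thresholds - score), [1.0]))
        return np.diff(cdf)  # P(y = 0), ..., P(y = 5); sums to 1

    probs = ordinal_level_probs(score=0.7,
                                thresholds=np.array([-2.0, -0.5, 0.5, 1.5, 2.5]))
    print(probs, probs.sum())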

Variational Gaussian Process Auto-Encoder for Ordinal Prediction of Facial Action Units [article]

Stefanos Eleftheriadis, Ognjen Rudovic, Marc P. Deisenroth, Maja Pantic
2016 arXiv   pre-print
We further evaluate the model on the tasks of feature fusion and joint ordinal prediction of facial action units.  ...  We demonstrate the representation abilities of our model on benchmark datasets from machine learning and affect analysis.  ...  This work has been funded by the European Community Horizon 2020 under grant agreement no. 645094 (SEWA), and no. 688835 (DE-ENIGMA). MPD has been supported by a Google Faculty Research Award.  ... 
arXiv:1608.04664v2 fatcat:iwfhyyyn35h3posjeogyz7ssw4

Weakly-Supervised Deep Convolutional Neural Network Learning for Facial Action Unit Intensity Estimation

Yong Zhang, Weiming Dong, Bao-Gang Hu, Qiang Ji
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Facial action unit (AU) intensity estimation plays an important role in affective computing and human-computer interaction.  ...  To provide additional supervision for model learning, we exploit naturally existing constraints on AUs, including relative appearance similarity, temporal intensity ordering, facial symmetry, and contrastive  ...  The support of CSC and RPI is greatly appreciated.  ... 
doi:10.1109/cvpr.2018.00246 dblp:conf/cvpr/ZhangDHJ18 fatcat:kqwf4hfjt5cnhbrjzxmqsawbo4
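
The entry above lists "temporal intensity ordering" among the weak-supervision constraints. A minimal sketch of how such an ordering constraint can be expressed as a hinge penalty between two frames with a known relative order follows; the function name and zero margin are illustrative assumptions, not the authors' exact loss.

    def temporal_ordering_penalty(pred_earlier, pred_later, increasing=True, margin=0.0):
        # Zero when the predicted intensities respect the known temporal order
        # (e.g., intensity rising toward a labelled peak frame); otherwise the
        # penalty grows linearly with the size of the violation.
        diff = pred_later - pred_earlier if increasing else pred_earlier - pred_later
        return max(0.0, margin - diff)

    # Frames on the rising side of a peak: predicting 1.2 then 0.8 violates the order.
    print(temporal_ordering_penalty(1.2, 0.8, increasing=True))  # 0.4, penalised
    print(temporal_ordering_penalty(0.8, 1.2, increasing=True))  # 0.0, consistent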

2019 Index IEEE Transactions on Affective Computing Vol. 10

2020 IEEE Transactions on Affective Computing  
...  Copula Ordinal Regression Framework for Joint Estimation of Facial Action Unit Intensity. Wang, S., +, T-AFFC April-June 2019, 155-166.  ...
doi:10.1109/taffc.2019.2957904 fatcat:55yc25prhrgelmtih2dlf3ilsq

Variational Gaussian Process Auto-Encoder for Ordinal Prediction of Facial Action Units [chapter]

Stefanos Eleftheriadis, Ognjen Rudovic, Marc Peter Deisenroth, Maja Pantic
2017 Lecture Notes in Computer Science  
We further evaluate the model on the tasks of feature fusion and joint ordinal prediction of facial action units.  ...  We demonstrate the representation abilities of our model on benchmark datasets from machine learning and affect analysis.  ...  This work has been funded by the European Community Horizon 2020 under grant agreement no. 645094 (SEWA), and no. 688835 (DE-ENIGMA). MPD has been supported by a Google Faculty Research Award.  ... 
doi:10.1007/978-3-319-54184-6_10 fatcat:4ujqaspn4feejni45iaym7j73m

Weakly Supervised Learning for Facial Behavior Analysis : A Review [article]

Gnana Praveen R, Eric Granger, Patrick Cardinal
2021 arXiv   pre-print
labeling process is highly vulnerable to ambiguity of expressions or action units, especially for intensities, due to the bias induced by the domain experts.  ...  The labeling process for huge training data demands a lot of human support with strong domain expertise in facial expressions or action units, which is difficult to obtain in real-time environments. Moreover,  ...  AU Intensity Estimation: Yong et al. [81] designed a deep convolutional neural network for intensity estimation of Action Units (AUs) using annotations of only peak and valley frames.  ...
arXiv:2101.09858v1 fatcat:fv3nwbr43vfvrlf655jipvhui4

DeepCoder: Semi-parametric Variational Autoencoders for Automatic Facial Action Coding [article]

Dieu Linh Tran, Robert Walecki, Ognjen Rudovic, Stefanos Eleftheriadis, Bjørn Schuller, Maja Pantic
2017 arXiv   pre-print
The human face exhibits an inherent hierarchy in its representations (i.e., holistic facial expressions can be encoded via a set of facial action units (AUs) and their intensity).  ...  Potentially, this makes VAEs a suitable approach for learning facial features for AU intensity estimation.  ...  Related Work (Facial Action Unit Intensity Estimation): Estimation of AU intensity is often posed as a multiclass problem approached using Neural Networks [19], Adaboost [3], SVMs [32], and belief  ...
arXiv:1704.02206v2 fatcat:fhqmrhkwz5evdglag26kx4dkgq

FATAUVA-Net: An Integrated Deep Learning Framework for Facial Attribute Recognition, Action Unit Detection, and Valence-Arousal Estimation

Wei-Yi Chang, Shih-Huan Hsu, Jen-Hsien Chien
2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Facial expression recognition has been investigated for many years, and two models have been widely used: Action Units (AUs) and the Valence-Arousal space (V-A space).  ...  In this paper, we propose an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation.  ...  of our single network for facial attribute recognition, action unit detection, and valence-arousal estimation.  ...
doi:10.1109/cvprw.2017.246 dblp:conf/cvpr/ChangHC17 fatcat:dqxksqqer5az3dvdzsvgqdjdhm

Modeling Dynamics of Facial Behavior for Mental Health Assessment [article]

Minh Tran, Ellen Bradley, Michelle Matvey, Joshua Woolley, Mohammad Soleymani
2021 arXiv   pre-print
Facial action unit (FAU) intensities are popular descriptors for the analysis of facial behavior. However, FAUs are sparsely represented when only a few are activated at a time.  ...  We evaluate the usefulness of our learned representations on two downstream tasks: schizophrenia symptom estimation and depression severity regression.  ...  [35], consists of a set of static features obtained from facial Action Unit intensities that have been shown to be useful for schizophrenia symptom classification.  ...
arXiv:2108.09934v1 fatcat:j625nk2yw5enzjfekbg5ehv2xa

FEAFA: A Well-Annotated Dataset for Facial Expression Analysis and 3D Facial Animation [article]

Yanfu Yan, Ke Lu, Jian Xue, Pengcheng Gao, Jiayi Lyu
2019 arXiv   pre-print
action units, including only their absence, presence, or a five-level intensity according to the Facial Action Coding System.  ...  FACS action descriptors and 2 asymmetrical FACS action descriptors, and each action unit or action descriptor is well-annotated with a floating point number between 0 and 1.  ...  AU Detection and AU Intensity Estimation: There are quite a few state-of-the-art techniques for AU detection [11, 12] and AU intensity estimation [13, 14] based on deep learning.  ...
arXiv:1904.01509v1 fatcat:hwugmg65qndq7hh3cdiq2rph6m