Fusion of feature sets and classifiers for facial expression recognition
2013
Expert Systems with Applications
This paper presents a novel method for facial expression recognition that employs the combination of two different feature sets in an ensemble approach. ...
approaches that employ single feature sets and single classifiers. ...
Acknowledgments: The authors would like to acknowledge the National Council for Scientific and Technological Development (CNPq) for the financial support under Grants 471.496/2007-3 and 309.295/2007 ...
doi:10.1016/j.eswa.2012.07.074
fatcat:s5tdtb7zanalvezugsca2d2uru
Integrating Facial Expression and Body Gesture in Videos for Emotion Recognition
2014
IEICE Transactions on Information and Systems
To further extract the common emotion features from both the facial expression feature set and the gesture feature set, the SCCA method is applied and the extracted emotion features are used for the bimodal emotion ...
cuboids feature descriptor to extract the facial expression and gesture emotion features [1], [2]. ...
doi:10.1587/transinf.e97.d.610
fatcat:wyn4hiwzejdyfdqimwbo5pdr34
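A minimal sketch of the canonical-correlation idea behind the entry above: project the facial expression feature set and the gesture feature set into a shared subspace and classify in that subspace. Plain CCA from scikit-learn stands in for the paper's sparse CCA (SCCA), and the arrays X_face, X_gesture, and y are hypothetical placeholders.

```python
# Minimal sketch: extract correlated "common" emotion features from two
# modalities with CCA, then classify the concatenated projections.
# Plain CCA stands in for the paper's sparse CCA (SCCA); the data are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_face = rng.normal(size=(200, 60))      # facial expression features (hypothetical)
X_gesture = rng.normal(size=(200, 40))   # body gesture features (hypothetical)
y = rng.integers(0, 6, size=200)         # six emotion classes

cca = CCA(n_components=10)
F_face, F_gesture = cca.fit_transform(X_face, X_gesture)

# Bimodal emotion recognition on the shared subspace.
clf = SVC(kernel="rbf").fit(np.hstack([F_face, F_gesture]), y)
print(clf.score(np.hstack([F_face, F_gesture]), y))
```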
A Multi-Layer Fusion-Based Facial Expression Recognition Approach with Optimal Weighted AUs
2017
Applied Sciences
Taking these typical features and classifiers in the facial expression area as a basis, we fully analyse their fusion performance. ...
to an individual feature/classifier and some state-of-the-art methods, including a recent deep-learning-based expression recognition method. ...
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/app7020112
fatcat:wht5drnqyfbixprq7rzmsril4m
Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal
2016
MATEC Web of Conferences
First, fuse the speech signal features and facial expression signal features, obtain sample sets by sampling with replacement, and then train classifiers with BP neural networks (BPNN). ...
Experiments show that the method improves the accuracy of emotion recognition by exploiting the advantages of both decision-level fusion and feature-level fusion, and brings the whole fusion process close ...
Acknowledgment: The authors wish to deeply thank the graduate students who collaborated in the experiments and in the development of the system. ...
doi:10.1051/matecconf/20166103012
fatcat:wqrjiwzpjfdvzdwyvpr2zhsvvq
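A minimal sketch of the pipeline described in the entry above, under assumed data shapes: speech and facial expression features are fused at the feature level, bootstrap sample sets are drawn by sampling with replacement, one BP-style network (scikit-learn's MLPClassifier as a stand-in) is trained per sample set, and the individual decisions are fused by majority vote.

```python
# Sketch: feature-level fusion of speech and face features, bootstrap sampling,
# one neural network per bootstrap set, majority-vote decision fusion.
# Array names and sizes are hypothetical.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_speech = rng.normal(size=(300, 30))
X_face = rng.normal(size=(300, 50))
y = rng.integers(0, 6, size=300)

X = np.hstack([X_speech, X_face])                # feature-level fusion

ensemble = []
for _ in range(5):                               # five bootstrap replicas
    idx = rng.integers(0, len(X), size=len(X))   # sampling with replacement
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    ensemble.append(net.fit(X[idx], y[idx]))

# Decision-level fusion: majority vote over the individual networks.
votes = np.stack([net.predict(X) for net in ensemble])
fused = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print((fused == y).mean())
```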
Emotion recognition from multi-modal information
2013
2013 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference
A variety of theoretical backgrounds and applications, ranging from salient emotional features and emotional-cognitive models to multi-modal data fusion strategies, is surveyed for emotion recognition on these ...
In this paper, we present a survey on theoretical and practical work offering new and broad views of the latest research in emotion recognition from multi-modal information including facial and vocal ...
In feature-level fusion [55]-[57], facial and vocal features are concatenated to construct a joint feature vector, and are then modeled by a single classifier for emotion recognition. ...
doi:10.1109/apsipa.2013.6694347
dblp:conf/apsipa/WuLWC13
fatcat:mnewdisdhfhq7b376ntz5bb5wm
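The feature-level fusion scheme quoted above reduces to concatenating the facial and vocal feature vectors and training one classifier on the joint vector; below is a minimal sketch with hypothetical feature arrays and dimensions.

```python
# Feature-level fusion sketch: concatenate facial and vocal feature vectors
# into a joint vector and train a single classifier on it.
# X_facial, X_vocal, and y are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_facial = rng.normal(size=(400, 68))    # e.g., geometric landmark features
X_vocal = rng.normal(size=(400, 39))     # e.g., MFCC statistics
y = rng.integers(0, 6, size=400)

X_joint = np.concatenate([X_facial, X_vocal], axis=1)   # joint feature vector
clf = SVC(kernel="linear")
print(cross_val_score(clf, X_joint, y, cv=5).mean())
```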
Emotion recognition using bimodal data fusion
2011
Proceedings of the 12th International Conference on Computer Systems and Technologies - CompSysTech '11
This paper proposes a bimodal system for emotion recognition that uses face and speech analysis. ...
The paper presents the best performing models and the results of the proposed recognition system. ...
Based on the previous feature selection procedures, we find a relevant set of features for each facial expression category. ...
doi:10.1145/2023607.2023629
dblp:conf/compsystech/DatcuR11
fatcat:aaymo77t2vhnzjv45avaajzque
Recognition of Facial Expressions in the Presence of Occlusion
2001
Proceedings of the British Machine Vision Conference 2001
We present a new approach for the recognition of facial expressions from video sequences in the presence of occlusion. ...
The proposed approach is based on a localised representation of facial features, and on data fusion. The experiments show that the proposed approach is robust to partial occlusion of the face. ...
The framework for automatic recognition of facial expressions, shown in Figure 1, consists of a feature point tracker, a feature extractor, a group of local classifiers, and a fusion module. ...
doi:10.5244/c.15.23
dblp:conf/bmvc/BourelCL01
fatcat:t3ek5hai6nhwngst2qf373frbi
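A minimal sketch of the local-classifier-plus-fusion idea from the entry above: one classifier per facial region, with region posteriors averaged while occluded regions are skipped. The region split, classifiers, and array names are assumptions, not the paper's exact components.

```python
# Sketch: train one classifier per facial region, then fuse region-level class
# probabilities by averaging, skipping regions flagged as occluded.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
regions = {                               # hypothetical per-region feature blocks
    "eyes": rng.normal(size=(250, 20)),
    "mouth": rng.normal(size=(250, 20)),
    "brows": rng.normal(size=(250, 20)),
}
y = rng.integers(0, 6, size=250)

local_clfs = {name: LogisticRegression(max_iter=1000).fit(X, y)
              for name, X in regions.items()}

def fuse(sample_regions, occluded=()):
    """Average class posteriors over the regions that are visible."""
    probs = [local_clfs[name].predict_proba(x.reshape(1, -1))
             for name, x in sample_regions.items() if name not in occluded]
    return np.mean(probs, axis=0).argmax()

sample = {name: X[0] for name, X in regions.items()}
print(fuse(sample, occluded=("mouth",)))   # mouth occluded, still classifies
```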
Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition
2017
Computational Intelligence and Neuroscience
The combination of facial expressions and EEG information for emotion recognition compensates for their defects as single information sources. ...
This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram and facial expression. ...
The four emotion states are detected by both facial expression and EEG. Emotion recognition is based on a decision-level fusion of both EEG and facial expression detection. ...
doi:10.1155/2017/2107451
pmid:29056963
pmcid:PMC5625811
fatcat:swc3vn66xjhi3agfu2dwrmyupa
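A minimal sketch of decision-level fusion as described above: each modality (EEG and facial expression) yields class posteriors from its own classifier, and the posteriors are combined with a weighted sum before taking the argmax. The classifiers, weights, and feature arrays are assumptions.

```python
# Decision-level fusion sketch: combine per-modality class probabilities
# with a weighted sum. Data and weights are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_eeg = rng.normal(size=(200, 32))       # e.g., band-power features
X_face = rng.normal(size=(200, 48))      # e.g., appearance features
y = rng.integers(0, 4, size=200)         # four emotion states

eeg_clf = RandomForestClassifier(n_estimators=100).fit(X_eeg, y)
face_clf = SVC(probability=True).fit(X_face, y)

w_eeg, w_face = 0.4, 0.6                 # assumed modality weights
fused_prob = (w_eeg * eeg_clf.predict_proba(X_eeg)
              + w_face * face_clf.predict_proba(X_face))
fused_pred = fused_prob.argmax(axis=1)
print((fused_pred == y).mean())
```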
Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram
2017
Mathematical Problems in Engineering
of facial expressions. ...
This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. ...
An SVM is a binary classifier that discriminates between only two classes of objects, whereas facial expression recognition is a multiclass problem. ...
doi:10.1155/2017/7206041
fatcat:imyvx5morrf6jedthcr6rh3zje
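The snippet above points out that an SVM is inherently binary while expression recognition is multiclass. A standard remedy, not specific to this paper, is to decompose the problem into several binary SVMs, for example one-vs-rest as sketched below with placeholder data.

```python
# One-vs-rest decomposition of a multiclass expression problem into binary SVMs.
# Feature array and label set are hypothetical.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))          # e.g., LBP/Gabor histogram features
y = rng.integers(0, 7, size=300)         # seven basic expressions

# One binary SVM per expression class; prediction picks the highest decision score.
ovr_svm = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale"))
ovr_svm.fit(X, y)
print(ovr_svm.predict(X[:5]), y[:5])
```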
Facial expression recognition in the wild based on multimodal texture features
2016
Journal of Electronic Imaging (JEI)
We train linear support vector machine and partial least squares classifiers for those kinds of features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW ...
We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. ...
Yao et al. [16] combined the CNN model with facial action unit aware features and achieved the state-of-the-art result for facial expression recognition in videos. ...
doi:10.1117/1.jei.25.6.061407
fatcat:gtn6qycghzhjhnnd2uoauopuei
Semantic Audiovisual Data Fusion for Automatic Emotion Recognition
[chapter]
2015
Emotion Recognition
Two types of models based on geometric face features for facial expression recognition are used, depending on the presence or absence of speech. ...
The results from facial expression recognition and from emotion recognition from speech are combined using a bimodal semantic data fusion model that determines the most probable emotion ...
The research reported here is part of the Interactive Collaborative Information Systems (ICIS) project, supported by the Dutch Ministry of Economic Affairs, grant nr: BSIK03024. ...
doi:10.1002/9781118910566.ch16
fatcat:r3m2okqa5refjdvs4z3tum3k7m
Research on facial expression recognition based on Multimodal data fusion and neural network
[article]
2021
arXiv
pre-print
sub-neural networks to extract data features, using a multimodal data feature fusion mechanism to improve the accuracy of facial expression recognition. ...
In this paper, a neural network algorithm of facial expression recognition based on multimodal data fusion is proposed. ...
Acknowledgment: The authors received no financial support for the research, authorship, and/or publication of this article. ...
arXiv:2109.12724v1
fatcat:e4fwldphsnerfohxwmx2deh4ma
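A minimal sketch of the general architecture described above, under assumed dimensions: one sub-network per modality extracts features, the features are concatenated (feature-level fusion), and a shared head predicts the expression class. This is an illustration, not the paper's network.

```python
# Sketch: per-modality sub-networks, concatenated features, shared classifier head.
# Layer sizes, input dimensions, and class count are assumptions.
import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    def __init__(self, dim_a=128, dim_b=64, hidden=64, n_classes=7):
        super().__init__()
        self.branch_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.branch_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, xa, xb):
        fa = self.branch_a(xa)                 # modality-A sub-network
        fb = self.branch_b(xb)                 # modality-B sub-network
        fused = torch.cat([fa, fb], dim=1)     # multimodal feature fusion
        return self.head(fused)

model = MultimodalFusionNet()
logits = model(torch.randn(8, 128), torch.randn(8, 64))
print(logits.shape)                            # torch.Size([8, 7])
```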
Facial Expression Recognition via Sparse Representation
2012
IEICE Transactions on Information and Systems
decision level fusion, facial expression recognition ...
The features of "important" training samples are selected to represent the test sample. Furthermore, the fuzzy integral is utilized to fuse the individual classifiers for facial components. ...
Conclusions: A facial-parts-based sparse representation classification method is proposed for facial expression recognition, and the fusion of multiple classifiers is realized with the aid of the fuzzy integral ...
doi:10.1587/transinf.e95.d.2347
fatcat:ddola44gdjb5zc2t5mugchfkhe
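A minimal sketch of sparse representation classification as referenced above: the test sample is coded over a dictionary of training samples with an L1-regularized fit, and the class with the smallest reconstruction residual wins. The solver choice and parameters are assumptions, and the paper's fuzzy-integral fusion over facial components is omitted.

```python
# Sparse representation classification (SRC) sketch: L1-regularized coding over
# a dictionary of training samples, then minimum-residual class assignment.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import normalize

def src_predict(X_train, y_train, x_test, alpha=0.01):
    D = normalize(X_train, axis=1).T          # dictionary: one column per training sample
    coef = Lasso(alpha=alpha, max_iter=5000).fit(D, x_test).coef_
    residuals = {}
    for c in np.unique(y_train):
        coef_c = np.where(y_train == c, coef, 0.0)   # keep class-c coefficients only
        residuals[c] = np.linalg.norm(x_test - D @ coef_c)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(120, 64))          # hypothetical facial-part features
y_train = rng.integers(0, 6, size=120)
print(src_predict(X_train, y_train, X_train[0]))
```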
Multi-classifier Fusion Based Facial Expression Recognition Approach
2014
KSII Transactions on Internet and Information Systems
This paper proposes a facial expression recognition approach based on multi-classifier fusion with stacking algorithm. ...
Facial expression recognition is an important part in emotional interaction between human and machine. ...
Note that the meta-classifier sees only the probability estimates for each classifier and class, across the set of fusion samples. ...
doi:10.3837/tiis.2014.01.012
fatcat:idipgj5smvfdpcyqgnl2pbnikm
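A minimal sketch of the stacking scheme described above, where the meta-classifier is trained only on the per-class probability estimates produced by the base classifiers on cross-validated fusion samples. The base learners here are assumptions, not the paper's exact ensemble.

```python
# Stacking sketch: base classifiers emit per-class probabilities on held-out
# folds, and a meta-classifier is trained only on those probabilities.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 80))            # hypothetical expression features
y = rng.integers(0, 7, size=300)

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",          # meta-classifier sees only probabilities
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```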
Speech-Driven Automatic Facial Expression Synthesis
2008
2008 3DTV Conference: The True Vision - Capture, Transmission and Display of 3D Video
This paper focuses on the problem of automatically generating speech synchronous facial expressions for 3D talking heads. The proposed system is speaker and language independent. ...
The HMM-based classifier has lower recognition rates than the GMM-based classifier. However, the fusion of the two classifiers achieves an 83.80% recognition rate on average. ...
Introduction: Facial expressions and the emotional states of a person are related, since gestures and facial expressions are used to express an emotion. ...
doi:10.1109/3dtv.2008.4547861
fatcat:rc5gqmljtbh2dl4bjowhwl2kgi
Showing results 1 — 15 out of 11,854 results