
Time-Delay Neural Network for Continuous Emotional Dimension Prediction From Facial Expression Sequences

Hongying Meng, Nadia Bianchi-Berthouze, Yangdong Deng, Jinkuang Cheng, John P. Cosmas
2016 IEEE Transactions on Cybernetics  
In this paper, a novel two-stage automatic system is proposed to continuously predict affective dimension values from facial expression videos.  ...  Automatic continuous affective state prediction from naturalistic facial expressions is a very challenging research topic, but one that is very important in human-computer interaction.  ...  In this paper, we propose to use a two-stage model with a TDNN for continuous dimensional emotion prediction from facial expression image sequences.  ... 
doi:10.1109/tcyb.2015.2418092 pmid:25910269 fatcat:wdqhwbsj6fcwjmqg7mgowyuabm
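The TDNN idea in this entry can be sketched as a regressor over a short delay line of past frames. The shapes, random weights, and first-frame padding below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def tdnn_predict(features, weights, bias=0.0):
    """Predict one affective value per frame from a delay line of past frames.

    features: (T, D) per-frame feature vectors
    weights:  (delay, D) one weight vector per delay step (illustrative shapes)
    """
    delay = weights.shape[0]
    preds = np.zeros(len(features))
    for t in range(len(features)):
        for d in range(delay):
            # repeat the first frame for positions before the sequence start
            preds[t] += weights[d] @ features[max(t - d, 0)]
        preds[t] += bias
    return preds

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 8))               # 20 frames of 8-D features
pred = tdnn_predict(feats, 0.1 * rng.normal(size=(5, 8)))
print(pred.shape)  # (20,)
```

In a trained system the delay-line weights would be learned; here they only show how each output depends on a fixed window of past frames.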

Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [article]

Sevegni Odilon Clement Allognon, Alessandro L. Koerich, Alceu de S. Britto Jr
2020 arXiv   pre-print
In this paper, we present a new model for continuous emotion recognition based on facial expression recognition by using an unsupervised learning approach based on transfer learning and autoencoders.  ...  In recent years, deep neural networks have been used with great success in recognizing emotions.  ...  [9] used recurrent neural networks (RNN) to study emotion recognition from extracted facial features.  ... 
arXiv:2001.11976v1 fatcat:eds6v363k5envakntu2qe7ffby

Multi-modal Dimensional Emotion Recognition using Recurrent Neural Networks

Shizhe Chen, Qin Jin
2015 Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge - AVEC '15  
This paper presents our effort for the Audio/Visual Emotion Challenge (AVEC2015), whose goal is to explore utilizing audio, visual and physiological signals to continuously predict the value of the emotion  ...  Our system applies recurrent neural networks (RNNs) to model temporal information.  ...  In this paper, we use temporal models, recurrent neural networks, to predict continuous dimensional values and explore many variations to improve performance.  ... 
doi:10.1145/2808196.2811638 dblp:conf/mm/ChenJ15 fatcat:rxqecowahfalln6bcmo77x6ryq
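The temporal modeling this entry describes can be sketched as a minimal Elman-style recurrent cell that emits one regression value (e.g. arousal) per frame. All weights and shapes below are made up for illustration; the challenge systems use trained RNN/LSTM variants:

```python
import numpy as np

def rnn_regress(x_seq, W_xh, W_hh, w_out):
    """Run a plain recurrent cell over a feature sequence and output one
    continuous value per frame (shapes are illustrative)."""
    h = np.zeros(W_hh.shape[0])
    preds = []
    for x in x_seq:
        h = np.tanh(W_xh @ x + W_hh @ h)   # carry temporal context forward
        preds.append(w_out @ h)            # linear readout per frame
    return np.array(preds)

rng = np.random.default_rng(1)
seq = rng.normal(size=(30, 6))             # 30 frames of 6-D features
out = rnn_regress(seq,
                  0.1 * rng.normal(size=(4, 6)),   # input-to-hidden
                  0.1 * rng.normal(size=(4, 4)),   # hidden-to-hidden
                  rng.normal(size=4))              # hidden-to-output
print(out.shape)  # (30,)
```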

Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks

Thomas Teixeira, Éric Granger, Alessandro Lameiras Koerich
2021 Applied Sciences  
This paper investigates the suitability of state-of-the-art deep learning architectures based on convolutional neural networks (CNNs) to deal with long video sequences captured in the wild for continuous  ...  emotion space, where continuous values of valence and arousal must be predicted.  ...  a variety of emotions along with records of facial expressions from single subjects.  ... 
doi:10.3390/app112411738 fatcat:f44i3gzcqjashc3yk5qd2wmpwy

Learning Hierarchical Emotion Context for Continuous Dimensional Emotion Recognition from Video Sequences

Qirong Mao, Qing Zhu, Qiyu Rao, Hongjie Jia, Sidian Luo
2019 IEEE Access  
In this paper, a novel three-stage method is proposed to learn hierarchical emotion context information (feature- and label-level contexts) for predicting affective dimension values from video sequences  ...  Our framework highlights that incorporating both feature/label-level dependencies and context information is a promising research direction for predicting continuous dimensional emotion.  ...  For continuous dimensional emotion recognition, the challenge is to build systems that can continuously (i.e., over time) analyze and predict affective emotion in dimensional space.  ... 
doi:10.1109/access.2019.2916211 fatcat:52lfmwa3h5cpxnijy64cejz3ie

Decoupling Temporal Dynamics for Naturalistic Affect Recognition in a Two-Stage Regression Framework

Yona Falinie A. Gaus, Hongying Meng, Asim Jan
2017 2017 3rd IEEE International Conference on Cybernetics (CYBCONF)  
Time-Delay Neural Network (TDNN), Long Short-Term Memory (LSTM) or Kalman Filter (KF) models have been frequently explored, but in an isolated way.  ...  Automatic continuous affect recognition from multiple modalities is one of the most active research areas in affective computing.  ...  The delay property in TDNN nodes can be set to the number of past instances of emotional expression, making it a perfect fit for modeling continuous emotion recognition.  ... 
doi:10.1109/cybconf.2017.7985772 dblp:conf/cybconf/GausMJ17 fatcat:jxxbewad45d3hen6pjwptnlhim
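The second-stage temporal smoothing this entry mentions (here the KF variant) can be sketched as a scalar random-walk Kalman filter run over noisy first-stage predictions. The process and measurement noise values `q` and `r` are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """Filter noisy per-frame affect predictions z with a scalar
    random-walk state model (illustrative second regression stage)."""
    x, p = z[0], 1.0                      # state estimate and its variance
    out = np.empty(len(z))
    out[0] = x
    for t in range(1, len(z)):
        p += q                            # predict: state drifts slowly
        k = p / (p + r)                   # Kalman gain
        x += k * (z[t] - x)               # correct with the new measurement
        p *= (1 - k)
        out[t] = x
    return out

rng = np.random.default_rng(2)
noisy = 0.5 + 0.2 * rng.normal(size=200)  # noisy first-stage valence track
smooth = kalman_smooth(noisy)
```

With these settings the filter behaves like an exponential moving average, so the smoothed track has a markedly lower variance than the raw predictions.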

Multi-modal Conditional Attention Fusion for Dimensional Emotion Prediction [article]

Shizhe Chen, Qin Jin
2017 arXiv   pre-print
Continuous dimensional emotion prediction is a challenging task in which fusing multiple modalities, such as early fusion or late fusion, usually achieves state-of-the-art performance.  ...  Long short-term memory recurrent neural networks (LSTM-RNNs) are applied as the basic uni-modality model to capture long-term time dependencies.  ...  For example, for neural networks, model-level fusion could be the concatenation of different hidden layers from different modalities [15].  ... 
arXiv:1709.02251v1 fatcat:gyk5bx76xnahnfz6gq2c6pw2ka

Multi-modal Conditional Attention Fusion for Dimensional Emotion Prediction

Shizhe Chen, Qin Jin
2016 Proceedings of the 2016 ACM on Multimedia Conference - MM '16  
Continuous dimensional emotion prediction is a challenging task in which fusing multiple modalities, such as early fusion or late fusion, usually achieves state-of-the-art performance.  ...  Long short-term memory recurrent neural networks (LSTM-RNNs) are applied as the basic uni-modality model to capture long-term time dependencies.  ...  For example, for neural networks, model-level fusion could be the concatenation of different hidden layers from different modalities [15].  ... 
doi:10.1145/2964284.2967286 dblp:conf/mm/ChenJ16 fatcat:pr7w32js75hl7hjhjc7knpfy5q
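The conditional (attention-based) fusion described in both versions of this paper can be sketched as per-frame softmax weights over uni-modal predictions. In the actual model the attention scores are produced from LSTM hidden states; here they are passed in directly as an assumption:

```python
import numpy as np

def attention_fuse(preds, scores):
    """Fuse frame-level predictions from M modalities with time-varying weights.

    preds:  (M, T) per-modality predictions (e.g. audio, visual)
    scores: (M, T) unnormalized attention scores per frame
    """
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)     # softmax across modalities per frame
    return (w * preds).sum(axis=0)        # weighted combination per frame

audio = np.array([0.2, 0.4, 0.6])
video = np.array([0.4, 0.2, 0.0])
fused = attention_fuse(np.stack([audio, video]),
                       np.zeros((2, 3)))  # equal scores -> plain average
print(fused)  # [0.3 0.3 0.3]
```

With uniform scores the fusion reduces to late fusion by averaging; learned scores let the model lean on whichever modality is more reliable at each frame.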

A benchmark of dynamic versus static methods for facial action unit detection

L. Alharbawee, N. Pugeault
2021 The Journal of Engineering  
Action unit activations are localized movements of individual facial muscles that occur over time and together constitute a natural facial expression event.  ...  models from different CNN architectures for local deep visual learning for AU image analysis.  ...  the evolution of huge neural networks.  ... 
doi:10.1049/tje2.12001 fatcat:g7qidt3uevbxtpc3nyfkgj7o4q

Affective computing using speech and eye gaze: a review and bimodal system proposal for continuous affect prediction [article]

Jonny O'Dwyer, Niall Murray, Ronan Flynn
2018 arXiv   pre-print
From a review of the literature, eye gaze, with features extracted from video, is a modality that has remained largely unexploited for continuous affect prediction.  ...  This work presents a review of the literature within the emotion classification and continuous affect prediction sub-fields of affective computing for both speech and eye gaze modalities.  ...  The review of continuous affect prediction clearly shows arousal and valence as the emotion dimensions of choice for two-dimensional affect prediction (Mencattini et al., 2017; Ringeval et al., 2013b;  ... 
arXiv:1805.06652v1 fatcat:mkwhbbocxnhtropgsjj5v3fo5u

An Emotion-embedded Visual Attention Model for Dimensional Emotion Context Learning

Yuhao Tang, Qirong Mao, Hongjie Jia, Heping Song, Yongzhao Zhan
2019 IEEE Access  
In this paper, we propose an emotion-embedded visual attention model (EVAM) to learn emotion context information for predicting affective dimension values from video sequences.  ...  Third, the k-means algorithm is adapted to embed the previous emotion into the attention model to produce more robust time-series predictions, which emphasizes the influence of previous emotion on the current affective  ...  Dimensional emotion recognition is typically much more complicated than categorical emotion recognition, as the emotion labels are continuous values and we need to track emotion states over time.  ... 
doi:10.1109/access.2019.2911714 fatcat:a2nsjbzelnfyrd227p5x4qfs6i

Robust continuous prediction of human emotions using multiscale dynamic cues

Jérémie Nicolle, Vincent Rapp, Kévin Bailly, Lionel Prevost, Mohamed Chetouani
2012 Proceedings of the 14th ACM international conference on Multimodal interaction - ICMI '12  
This paper details our response to the Audio/Visual Emotion Challenge (AVEC'12) whose goal is to continuously predict four affective signals describing human emotions (namely valence, arousal, expectancy  ...  For selecting features, we introduce a new correlation-based measure that takes into account a possible delay between the labels and the data and significantly increases robustness.  ...  (ANR) in the frame of its Technological Research CONTINT program (IMMEMO, project number ANR-09-CORD-012), the French FUI project PRAMAD2 (project number J11P159) and the Cap Digital Business cluster for  ... 
doi:10.1145/2388676.2388783 dblp:conf/icmi/NicolleRBPC12 fatcat:4irs75zx55apllfqyt6cjmb4oq
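The delay-aware correlation measure this entry introduces can be sketched as Pearson correlation maximized over a range of label lags; the function name and lag range below are hypothetical, not the paper's exact formulation:

```python
import numpy as np

def max_lagged_corr(feature, labels, max_lag=50):
    """Correlation between a feature track and annotator labels, maximized
    over a 0..max_lag frame delay of the labels (illustrative measure)."""
    best = -1.0
    for lag in range(max_lag + 1):
        a = feature[:-lag] if lag else feature   # drop trailing frames
        b = labels[lag:]                         # shift labels back in time
        best = max(best, np.corrcoef(a, b)[0, 1])
    return best

rng = np.random.default_rng(3)
x = rng.normal(size=200)
delayed = np.concatenate([np.zeros(7), x[:-7]])  # labels lag features by 7 frames
print(round(max_lagged_corr(x, delayed, max_lag=20), 3))  # 1.0
```

A plain zero-lag correlation would score this pair near zero, which is why compensating for annotator reaction delay matters for feature selection.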

Estimation of Affective Level in the Wild with Multiple Memory Networks

Jianshu Li, Yunpeng Chen, Shengtao Xiao, Jian Zhao, Sujoy Roy, Jiashi Feng, Shuicheng Yan, Terence Sim
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
A carefully designed deep convolutional neural network (a variation of residual network) for affective level estimation of facial expressions is first implemented as a baseline.  ...  Next we use multiple memory networks to model the temporal relations between the frames. Finally, ensemble models are used to combine the predictions from multiple memory networks.  ...  It also allows for understanding how facial expressions transform over time, which is useful for estimating more subtle facial expressions.  ... 
doi:10.1109/cvprw.2017.244 dblp:conf/cvpr/LiCXZRFYS17 fatcat:j2u3viumufdbzkvvno4yycqo5a

Prediction of asynchronous dimensional emotion ratings from audiovisual and physiological data

Fabien Ringeval, Florian Eyben, Eleni Kroupi, Anil Yuce, Jean-Philippe Thiran, Touradj Ebrahimi, Denis Lalanne, Björn Schuller
2015 Pattern Recognition Letters  
predict emotion from several (asynchronous) raters in continuous time domains, i.e., arousal and valence.  ...  Features are extracted with various window sizes for each modality, and performance for automatic emotion prediction is compared for different architectures of neural networks and fusion approaches  ...  been supported by the Swiss National Science Foundation through the National Centre for Competence in Research (NCCR) on Interactive Multimodal Information Management (IM2).  ... 
doi:10.1016/j.patrec.2014.11.007 fatcat:igplp6lowvhnfejpz5hry5j4nq

Expression EEG Multimodal Emotion Recognition Method Based on the Bidirectional LSTM and Attention Mechanism

Yifeng Zhao, Deyun Chen, Kaijian Xia
2021 Computational and Mathematical Methods in Medicine  
Firstly, facial expression features are extracted based on the bilinear convolution network (BCN), and EEG signals are transformed into three groups of frequency band image sequences, and BCN is used to  ...  Experimental results show that the attention mechanism can enhance the visual effect of the image, and compared with other methods, the proposed method can extract emotion features from expressions and  ...  For facial expressions, BCN is used to extract facial expression features.  ... 
doi:10.1155/2021/9967592 pmid:34055043 pmcid:PMC8131147 fatcat:aj5tuzic6bgtjlbgh4v7uspfii