A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
High-Level Geometry-based Features of Video Modality for Emotion Prediction
2016
Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge - AVEC '16
First, we propose to improve the performance of the multimodal prediction with low-level features by adding high-level geometry-based features, namely head pose and expression signature. ...
The results show that our high-level features improve the performance of the multimodal prediction of arousal and that the subjects fusion works well in unimodal prediction but generalizes poorly in multimodal ...
High-level geometry-based feature extraction: Figure 2 gives an overview of the high-level geometry-based feature extraction used for the video modality. ...
doi:10.1145/2988257.2988262
dblp:conf/mm/WeberBSS16
fatcat:it7oo4m255dnzclilojq3pylxa
Context-Aware Emotion Recognition in the Wild Using Spatio-Temporal and Temporal-Pyramid Models
2021
Sensors
In this paper, we introduce a multi-modal flexible system for video-based emotion recognition in the wild. ...
The key contribution of this study is that it proposes the use of face feature extraction with context-aware and statistical information for emotion recognition. ...
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/s21072344
pmid:33801739
pmcid:PMC8036494
fatcat:h5roj3vztvbtharh2chqycsfzi
A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances
[article]
2022
arXiv
pre-print
Thus, the fusion of physical information and physiological signals can provide useful features of emotional states and lead to higher accuracy. ...
Firstly, we introduce two typical emotion models followed by commonly used databases for affective computing. ...
, and another BDBN for extracting high-level multimodal features of both video and psycho-physiological modalities. ...
arXiv:2203.06935v3
fatcat:h4t3omkzjvcejn2kpvxns7n2qe
Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-related Applications
[article]
2016
arXiv
pre-print
Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. ...
We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. ...
Global geometric features, for both RGB and 3D modalities, usually describe the face deformation based on the location of specific fiducial points. ...
arXiv:1606.03237v1
fatcat:t55kncgy6fgsvgi42pdgleu43m
Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications
2016
IEEE Transactions on Pattern Analysis and Machine Intelligence
Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. ...
We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. ...
Global geometric features, for both RGB and 3D modalities, usually describe the face deformation based on the location of specific fiducial points. ...
doi:10.1109/tpami.2016.2515606
pmid:26761193
pmcid:PMC7426891
fatcat:ezwkw2bmhbdtlffz3uz3m3hoiy
On the Effect of Observed Subject Biases in Apparent Personality Analysis from Audio-visual Signals
[article]
2019
arXiv
pre-print
We base our study on the ChaLearn First Impressions dataset, consisting of one-person conversational videos. ...
on prediction ability for apparent personality estimation. ...
We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU used for this research. ...
arXiv:1909.05568v2
fatcat:rxh4sftv75cd7bcozmr365p3me
AV+EC 2015
2015
Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge - AVEC '15
The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the audio, video and physiological emotion recognition communities, to compare ...
We present the first Audio-Visual + Emotion recognition Challenge and workshop (AV + EC 2015) aimed at comparison of multimedia processing and machine learning methods for automatic audio, visual and physiological ...
key to achieve high performance in the prediction of emotional arousal and valence from spontaneous recordings, as all modalities contribute to the prediction of emotion. ...
doi:10.1145/2808196.2811642
dblp:conf/mm/RingevalSVJMLCP15
fatcat:og3k5mutvjcijb4xieg63u4phq
Hybrid Mutimodal Fusion for Dimensional Emotion Recognition
[article]
2021
arXiv
pre-print
The goal of MuSe-Stress sub-challenge is to predict the level of emotional arousal and valence in a time-continuous manner from audio-visual recordings and the goal of MuSe-Physio sub-challenge is to predict ...
For the MuSe-Physio sub-challenge, we first extract the audio-visual features and the bio-signal features from multiple modalities. ...
According to Eq. 7, the emotion predictions of the audio modality and of the video modality are calculated. ...
arXiv:2110.08495v1
fatcat:rf3673qxtneqffivql634l5z4q
Facial Expression and Peripheral Physiology Fusion to Decode Individualized Affective Experience
[article]
2018
arXiv
pre-print
video-based stimuli. ...
Furthermore, we gained prediction improvement for affective experience by considering the effect of individualized resting dynamics. ...
Inference Models for Affective Experience Prediction
Prediction based on emotional video stimulus content: We performed a binary across-individual classification on video stimuli with positive or negative ...
arXiv:1811.07392v1
fatcat:xq2bvudmdfferjcakmt62oshri
Multimodal Sentiment Analysis: A Comparison Study
2018
Journal of Computer Science
A large number of videos are uploaded online every day. Video files contain text, visual, and audio features that complement each other. ...
Nowadays, sentiment analysis is replacing the old web-based and traditional survey methods conducted by different companies to find public opinion about entities like products and services ...
This research project was conducted with full compliance of research ethics norms of Arab Open University -Kuwait. ...
doi:10.3844/jcssp.2018.804.818
fatcat:wgchjlvjavenxptlitnpgktb7q
Evaluation of Texture and Geometry for Dimensional Facial Expression Recognition
2011
2011 International Conference on Digital Image Computing: Techniques and Applications
Distributions of arousal and valence for different emotions obtained via the feature extraction process are compared with those obtained from subjective ground truth values assigned by viewers. ...
It is not fully known whether fusion of geometric and texture features will result in better dimensional representation of spontaneous emotions. ...
ACKNOWLEDGMENT The research in this paper uses the USTC-NVIE database, collected under the sponsorship of the 863 project of China, and the Semaine Database, collected for the Semaine project (www.semaine-db.eu ...
doi:10.1109/dicta.2011.110
dblp:conf/dicta/ZhangTC11
fatcat:fgstqm75g5bfhcu6dsm5gjch6u
Depression recognition based on dynamic facial and vocal expression features using partial least square regression
2013
Proceedings of the 3rd ACM international workshop on Audio/visual emotion challenge - AVEC '13
Predicted values of visual and vocal clues are further combined at decision level for final decision. ...
then predict the depression scale for an unseen one. ...
ASC concentrates fully on continuous affect recognition of the valence and arousal dimensions, where the level of affect has to be predicted for each frame of the recording, while DSC requires predicting ...
doi:10.1145/2512530.2512532
dblp:conf/mm/Meng0WYAW13
fatcat:5idd6yngb5cvnicgjbzly5vymq
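The decision-level fusion this entry describes (combining per-modality predicted values into a final decision) can be sketched as a weighted average. Everything below, including the function name, the weights, and the example inputs, is illustrative and not taken from the paper:

```python
# Hypothetical sketch of decision-level fusion: per-modality depression-scale
# predictions are combined with a weighted average. The weights and example
# values are invented for the demo, not from the paper.

def fuse_decisions(visual_pred: float, vocal_pred: float,
                   w_visual: float = 0.6, w_vocal: float = 0.4) -> float:
    """Combine per-modality predictions at the decision level."""
    assert abs(w_visual + w_vocal - 1.0) < 1e-9  # weights form a convex combination
    return w_visual * visual_pred + w_vocal * vocal_pred

print(fuse_decisions(12.0, 18.0))
```

In practice the weights would be tuned on a development set rather than fixed by hand.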
Facial Landmark-Based Emotion Recognition via Directed Graph Neural Network
2020
Electronics
In this paper, we propose a graph convolution neural network that utilizes landmark features for FER, which we called a directed graph neural network (DGNN). ...
Our experimental results proved the effectiveness of the proposed method for datasets such as CK+ (96.02%), MMI (69.4%), and AFEW (32.64%). ...
Acknowledgments: The authors would like to thank the support of Inha University.
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/electronics9050764
fatcat:bp73dqwbdrddxmnjofik6dnoxu
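A directed graph network over facial landmarks, as in this entry, treats landmarks as nodes and aggregates features along directed edges. The sketch below is a generic one-step graph convolution in NumPy, not the paper's DGNN; the edge list, shapes, and weights are all made up:

```python
import numpy as np

# Illustrative sketch (not the paper's DGNN): one directed graph-convolution
# step over facial-landmark nodes. Landmarks are nodes; directed edges say
# which landmark's features flow into which. All shapes/values are demo-only.

rng = np.random.default_rng(0)
num_landmarks, feat_in, feat_out = 5, 2, 4

X = rng.normal(size=(num_landmarks, feat_in))       # e.g. (x, y) per landmark
A = np.zeros((num_landmarks, num_landmarks))
for src, dst in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:  # directed ring
    A[dst, src] = 1.0                               # dst aggregates from src
A += np.eye(num_landmarks)                          # self-loops

deg = A.sum(axis=1, keepdims=True)
A_norm = A / deg                                    # normalise by in-degree

W = rng.normal(size=(feat_in, feat_out))
H = np.maximum(A_norm @ X @ W, 0.0)                 # ReLU(Â X W)
print(H.shape)
```

A real FER model would stack several such layers and pool the node features before a softmax over emotion classes.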
Online Affect Tracking with Multimodal Kalman Filters
2016
Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge - AVEC '16
Leveraging the inter-correlations between arousal and valence, we use the predicted arousal as an additional feature to improve valence predictions. ...
Furthermore, we propose a conditional framework to select Kalman filters of different modalities while tracking. ...
These video features can be classified into appearance and geometry based. ...
doi:10.1145/2988257.2988259
dblp:conf/mm/SomandepalliGNB16
fatcat:yyoh4rdo7fcobh5m5lw6qmvzzq
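The idea in this entry of feeding the arousal estimate into the valence prediction can be sketched with two scalar Kalman filters. This is a minimal generic filter, not the paper's multimodal implementation; the noise parameters and the 0.3 coupling weight are invented:

```python
# Minimal scalar Kalman filter sketch (not the paper's implementation).
# Illustrates the snippet's idea: track arousal first, then feed the arousal
# estimate in as an extra input when forming the valence observation.
# Noise parameters and the 0.3 coupling weight are demo values.

class ScalarKalman:
    def __init__(self, q=0.01, r=0.1):
        self.x, self.p = 0.0, 1.0   # state estimate and its variance
        self.q, self.r = q, r       # process and observation noise

    def step(self, z):
        self.p += self.q                    # predict: variance grows
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # update toward observation z
        self.p *= (1.0 - k)
        return self.x

arousal_kf, valence_kf = ScalarKalman(), ScalarKalman()
for a_obs, v_obs in [(0.5, -0.2), (0.6, -0.1), (0.7, 0.0)]:
    a_hat = arousal_kf.step(a_obs)
    # augment the valence observation with the tracked arousal estimate
    v_hat = valence_kf.step(0.7 * v_obs + 0.3 * a_hat)
print(round(a_hat, 3), round(v_hat, 3))
```

The paper's conditional framework would additionally switch between per-modality filters at each step; that selection logic is omitted here.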
A computer vision based image processing system for depression detection among students for counseling
2019
Indonesian Journal of Electrical Engineering and Computer Science
Classification of these facial features is done using an SVM classifier. The level of depression is identified by calculating the amount of negative emotion present in the entire video. ...
To predict depression, a video of the student is captured, from which the face of the student is extracted. Then, using Gabor filters, the facial features are extracted. ...
, Coimbatore for the support extended in carrying out this work. ...
doi:10.11591/ijeecs.v14.i1.pp503-512
fatcat:hwn43xof3fg5bptkyj75yzju7y
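The Gabor-filter feature step in this entry's pipeline can be sketched as below. The kernel parameters, patch size, and feature definition are arbitrary demo choices, not those of the paper, and the face-detection and SVM stages are omitted:

```python
import numpy as np

# Illustrative sketch of the Gabor-feature step described in the snippet.
# Kernel parameters are arbitrary demo values, not the paper's.

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a Gabor filter: a Gaussian-windowed cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    return np.exp(-(xr**2 + (-x * np.sin(theta) + y * np.cos(theta))**2)
                  / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)

def gabor_response(patch, kernel):
    """Mean absolute filter response over a face patch (a crude feature)."""
    from numpy.lib.stride_tricks import sliding_window_view
    windows = sliding_window_view(patch, kernel.shape)
    return float(np.abs((windows * kernel).sum(axis=(-1, -2))).mean())

patch = np.random.default_rng(1).random((32, 32))   # stand-in for a face crop
features = [gabor_response(patch, gabor_kernel(theta=t))
            for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
print(len(features))  # one feature per filter orientation
```

In the pipeline described above, feature vectors like `features` would then be passed to an SVM classifier, and the depression level estimated from the fraction of frames classified as negative emotions.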
Showing results 1 — 15 out of 3,117 results