
AVEC 2012

Björn Schuller, Michel Valstar, Florian Eyben, Roddy Cowie, Maja Pantic
2012 Proceedings of the 14th ACM international conference on Multimodal interaction - ICMI '12  
We present the second Audio-Visual Emotion recognition Challenge and workshop (AVEC 2012), which aims to bring together researchers from the audio and video analysis communities around the topic of emotion  ...  This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.  ...  This should improve the reproducibility of the baseline results.  ... 
doi:10.1145/2388676.2388776 dblp:conf/icmi/SchullerVECP12 fatcat:oad7xx265zc7ximh73zotpfnwe

Strength modelling for real-world automatic continuous affect recognition from audiovisual signals

Jing Han, Zixing Zhang, Nicholas Cummins, Fabien Ringeval, Björn Schuller
2017 Image and Vision Computing  
To highlight the effectiveness and robustness of the proposed approach, extensive experiments have been carried out on two time- and value-continuous spontaneous emotion databases (RECOLA and SEMAINE) using  ...  audio and video signals.  ...  We further thank the NVIDIA Corporation for their support of this research by Tesla K40-type GPU donation.  ... 
doi:10.1016/j.imavis.2016.11.020 fatcat:am4uvadrsbbmreoialq54nckve

Cognitive Behaviour Analysis Based on Facial Information Using Depth Sensors [chapter]

Juan Manuel Fernandez Montenegro, Barbara Villarini, Athanasios Gkelias, Vasileios Argyriou
2018 Lecture Notes in Computer Science  
This work uses novel features based on a non-linear dimensionality reduction, t-SNE, applied on facial landmarks and depth data.  ...  In healthcare, cognitive and emotional behaviour analysis helps to improve the quality of life of patients and their families.  ...  This work uses novel features based on a non-linear dimensionality reduction technique, i.e., t-SNE, which is applied on facial landmarks and depth data.  ... 
doi:10.1007/978-3-319-91863-1_2 fatcat:vzdlsnybazgtrolc5uscqi5qya
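The entry above applies t-SNE to facial landmarks and depth data. As a rough illustration of that idea (not the authors' pipeline; the data shapes and parameters here are invented, and scikit-learn is assumed):

```python
# Illustrative sketch: t-SNE on flattened facial-landmark coordinates to
# obtain a low-dimensional, non-linear feature embedding per frame.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Hypothetical data: 60 frames, 68 landmarks with (x, y, depth) each.
landmarks = rng.normal(size=(60, 68, 3))
X = landmarks.reshape(60, -1)          # one flat vector per frame

# Embed each frame into 2-D; the embedding serves as a compact feature.
tsne = TSNE(n_components=2, perplexity=10, init="pca", random_state=0)
features = tsne.fit_transform(X)
print(features.shape)                  # (60, 2)
```

Note that t-SNE is transductive: it embeds the given set of frames jointly rather than learning a reusable mapping for unseen frames.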

On-line emotion recognition in a 3-D activation-valence-time continuum using acoustic and linguistic cues

Florian Eyben, Martin Wöllmer, Alex Graves, Björn Schuller, Ellen Douglas-Cowie, Roddy Cowie
2009 Journal on Multimodal User Interfaces  
For many applications of emotion recognition, such as virtual agents, the system must select responses while the user is speaking. This requires reliable on-line recognition of the user's affect.  ...  We also investigate the benefits of including linguistic features on the signal frame level obtained by a keyword spotter.  ...  Section 6 introduces the naturalistic emotion database used for experimental evaluations in Sect. 7. Section 8 shows results obtained on this database.  ... 
doi:10.1007/s12193-009-0032-6 fatcat:tyi7adiax5hwfmlel4nf7ta2fi

A review of affective computing: From unimodal analysis to multimodal fusion

Soujanya Poria, Erik Cambria, Rajiv Bajpai, Amir Hussain
2017 Information Fusion  
In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities.  ...  Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze.  ...  In this section, we describe the literature of unimodal affect analysis primarily focusing on visual, audio and textual modalities. The following section focuses on multimodal fusion.  ... 
doi:10.1016/j.inffus.2017.02.003 fatcat:ytebhjxlz5bvxcdghg4wxbvr6a
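One common fusion strategy surveyed in reviews like the one above is decision-level (late) fusion: each unimodal classifier produces class posteriors, which are then combined. A minimal sketch, with hypothetical weights and scores:

```python
# Decision-level (late) fusion: weighted average of per-modality
# class-probability vectors. All numbers below are made up.
import numpy as np

def late_fusion(prob_audio, prob_visual, prob_text, weights=(0.3, 0.4, 0.3)):
    """Weighted average of per-modality class-probability vectors."""
    stacked = np.stack([prob_audio, prob_visual, prob_text])
    fused = np.average(stacked, axis=0, weights=weights)
    return fused / fused.sum()         # renormalise to a distribution

# Hypothetical posteriors over (neutral, happy, sad):
fused = late_fusion(np.array([0.5, 0.3, 0.2]),
                    np.array([0.2, 0.6, 0.2]),
                    np.array([0.4, 0.4, 0.2]))
print(fused)            # [0.35 0.45 0.20] -> predicts class 1 ("happy")
```

Feature-level (early) fusion would instead concatenate the modality feature vectors before a single classifier; late fusion trades joint modelling for robustness when one modality is missing or noisy.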

A Bimodal Approach for Speech Emotion Recognition using Audio and Text

Oxana Verkholyak, Anastasia Dvoynikova, Alexey Karpov
2021 Journal of Internet Services and Information Security  
We compare relative performance of unimodal vs. bimodal systems, analyze their effectiveness on different levels of annotation agreement, and discuss the effect of reduction of training data size on the  ...  This paper presents a novel bimodal speech emotion recognition system based on analysis of acoustic and linguistic information.  ...  Baseline system does not use any normalization and dimensionality reduction strategies and performs classification directly on the extracted features.  ... 
doi:10.22667/jisis.2021.02.28.080 dblp:journals/jisis/VerkholyakDK21 fatcat:d6antopnsvglnj6ehiklx46ccq

Multimodal Affect Recognition: Current Approaches and Challenges [chapter]

Hussein Al Osman, Tiago H. Falk
2017 Emotion and Attention Recognition Based on Biological Signals and Images  
However, the multimodal approach presents challenges pertaining to the fusion of individual signals, dimensionality of the feature space, and incompatibility of collected signals in terms of time resolution  ...  In this chapter, we explore the aforementioned challenges while presenting the latest scholarship on the topic. Hence, we first discuss the various modalities used in affect classification.  ...  [table excerpt: VAM (2008), natural, 19 subjects, visual and audio, dimensional labeling via SAM; SEMAINE (2010), induced, 20 subjects, visual and audio]  ... 
doi:10.5772/65683 fatcat:du7u2lfx4nhkzf5d7zq7g5ofty

Prediction of Emotion Change From Speech

Zhaocheng Huang, Julien Epps
2018 Frontiers in ICT  
However, further work is needed to achieve effective emotion change prediction performances on the SEMAINE database, due to the large number of non-change frames in the absolute emotion ratings.  ...  database, achieving 0.74 vs. 0.71 for arousal and 0.41 vs. 0.37 for valence in concordance correlation coefficients.  ...  , 2013), based on audio and video signals.  ... 
doi:10.3389/fict.2018.00011 fatcat:sg6cx6v3qzdjxcdvdyngkuoq4e
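The scores above (0.74 vs. 0.71 for arousal, etc.) are concordance correlation coefficients (CCC), the standard agreement measure for continuous affect prediction. A small self-contained sketch, with made-up rating sequences:

```python
# Lin's concordance correlation coefficient (CCC): penalises both poor
# correlation and systematic offset/scale differences between sequences.
import numpy as np

def ccc(x, y):
    """CCC between a gold-standard trace x and a predicted trace y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                # population variances
    cov = ((x - mx) * (y - my)).mean()       # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

gold = [0.1, 0.4, 0.3, 0.8, 0.6]             # hypothetical arousal ratings
pred = [0.2, 0.5, 0.2, 0.7, 0.6]             # hypothetical predictions
print(round(ccc(gold, pred), 3))             # 0.921
```

Unlike Pearson correlation, CCC drops below 1 even for perfectly correlated sequences if their means or variances differ, which is why it is preferred for continuous emotion evaluation.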

Multi-stream LSTM-HMM decoding and histogram equalization for noise robust keyword spotting

Martin Wöllmer, Erik Marchi, Stefano Squartini, Björn Schuller
2011 Cognitive Neurodynamics  
The proposed techniques are evaluated on the SEMAINE database-a corpus containing emotionally colored conversations with a cognitive system for "Sensitive Artificial Listening".  ...  Histogram Equalization is an efficient method to reduce the mismatch between clean and noisy conditions by normalizing all moments of the probability distribution of the feature vector components.  ...  Acknowledgments The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) and from the Federal Republic of Germany through the  ... 
doi:10.1007/s11571-011-9166-9 pmid:22942915 pmcid:PMC3179540 fatcat:lfqo5jvmavgfdasr3my2j4xjvi
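The histogram equalization idea described in the snippet, mapping noisy feature values so their empirical distribution matches a clean reference, can be sketched as quantile matching. This is an illustrative reconstruction with synthetic data, not the paper's implementation:

```python
# Histogram equalization for feature normalization: transform noisy values
# so their empirical distribution matches a clean reference distribution.
import numpy as np

def histogram_equalize(noisy, reference):
    """Quantile-match `noisy` onto the distribution of `reference`."""
    noisy = np.asarray(noisy, float)
    ranks = noisy.argsort().argsort()            # rank of each value
    cdf = (ranks + 0.5) / len(noisy)             # empirical CDF positions
    ref_sorted = np.sort(np.asarray(reference, float))
    ref_cdf = (np.arange(len(ref_sorted)) + 0.5) / len(ref_sorted)
    return np.interp(cdf, ref_cdf, ref_sorted)   # inverse reference CDF

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, 1000)               # clean-condition feature
noisy = rng.normal(3.0, 2.0, 1000)               # shifted/scaled by noise
equalized = histogram_equalize(noisy, clean)
print(round(equalized.mean(), 1), round(equalized.std(), 1))  # near 0 and 1
```

Because the mapping is monotone in rank, it normalizes all moments of the feature distribution (as the abstract states), not just mean and variance as in cepstral mean-variance normalization.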

Towards a Better Gold Standard

Chen Wang, Phil Lopes, Thierry Pun, Guillaume Chanel
2018 Proceedings of the 2018 on Audio/Visual Emotion Challenge and Workshop - AVEC'18  
The proposed method's performance is evaluated on the RECOLA database containing audio, video and physiological recordings.  ...  Unsupervised dimensionality reduction approaches could be used to determine a gold standard annotation from multiple annotations.  ...  Recently, databases such as SEMAINE [29] and RECOLA [36] with time-continuous emotion ratings have shifted the methods from classification to regression to predict continuous emotion in several emotion  ... 
doi:10.1145/3266302.3266307 dblp:conf/mm/WangLPC18 fatcat:hxplxhysxzctdiftxvzvs4tvju
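The snippet's idea of deriving a gold standard from multiple annotations via unsupervised dimensionality reduction can be illustrated with the first principal component across annotators. This is a hypothetical sketch with synthetic ratings, not the paper's actual method:

```python
# Fuse several annotators' continuous ratings into one gold-standard trace
# using the first principal component (computed via SVD).
import numpy as np

def pca_gold_standard(ratings):
    """ratings: (n_annotators, n_frames) -> fused trace of length n_frames."""
    R = np.asarray(ratings, float)
    centered = R - R.mean(axis=1, keepdims=True)     # remove annotator bias
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    gold = vt[0]                                     # dominant shared pattern
    # Fix the sign so the fused trace correlates positively with the mean.
    if np.corrcoef(gold, centered.mean(axis=0))[0, 1] < 0:
        gold = -gold
    return gold

t = np.linspace(0, 2 * np.pi, 100)
true_emotion = np.sin(t)                             # hidden "true" trace
rng = np.random.default_rng(1)
annotators = true_emotion + 0.2 * rng.normal(size=(4, 100))  # 4 noisy raters
gold = pca_gold_standard(annotators)
print(round(np.corrcoef(gold, true_emotion)[0, 1], 2))       # close to 1
```

Compared with plain averaging, a projection-based fusion can down-weight an annotator who disagrees with the shared structure, though the component's scale is arbitrary and usually needs renormalizing to the original rating range.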

Combining Long Short-Term Memory and Dynamic Bayesian Networks for Incremental Emotion-Sensitive Artificial Listening

Martin Wöllmer, Björn Schuller, Florian Eyben, Gerhard Rigoll
2010 IEEE Journal on Selected Topics in Signal Processing  
and emotional history to predict the affective state of the user.  ...  Experimental evaluations on the SAL corpus of non-prototypical real-life emotional speech data consider a number of variants of our recognition framework: continuous emotion estimation from low-level feature  ...  ACKNOWLEDGMENT The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 211486 (SEMAINE).  ... 
doi:10.1109/jstsp.2010.2057200 fatcat:7fixrqz6nnbpfe4cbzlugcujvm

Emotion Understanding Using Multimodal Information Based on Autobiographical Memories for Alzheimer's Patients [chapter]

Juan Manuel Fernandez Montenegro, Athanasios Gkelias, Vasileios Argyriou
2017 Lecture Notes in Computer Science  
This work uses novel EEG features based on quaternions, facial landmarks and the combination of them.  ...  Alzheimer Disease (AD) early detection is considered of high importance for improving the quality of life of patients and their families.  ...  For example, DEAP dataset provides EEG and face recordings of participants while they watch musical videos just for the analysis of human affective states [9] ; SEMAINE database aims to provide voice  ... 
doi:10.1007/978-3-319-54407-6_17 fatcat:7lzqnhqcw5b2rdt6wxolntgnnm

Robust Correlated and Individual Component Analysis

Yannis Panagakis, Mihalis A. Nicolaou, Stefanos Zafeiriou, Maja Pantic
2016 IEEE Transactions on Pattern Analysis and Machine Intelligence  
audio-visual prediction of interest and conflict), iii) face clustering, and iv) the temporal alignment of facial expressions.  ...  Experimental results on 2 synthetic and 7 real world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, outperforming other state-of-the-art methods  ...  setting of the SEMAINE database.  ... 
doi:10.1109/tpami.2015.2497700 pmid:26552077 fatcat:lea2y4hduzfozexfp7n57mjnua

Continuous Analysis of Affect from Voice and Face [chapter]

Hatice Gunes, Mihalis A. Nicolaou, Maja Pantic
2011 Computer Analysis of Human Behavior  
This chapter aims to (i) give a brief overview of the existing efforts and the major accomplishments in modeling and analysis of emotional expressions in dimensional and continuous space while focusing  ...  on open issues and new challenges in the field, and (ii) introduce a representative approach for H.  ...  The section starts with the description of the naturalistic database used in the experimental studies.  ... 
doi:10.1007/978-0-85729-994-9_10 fatcat:2awlffumuvcp3fs6e75ltokdeq

Automatic, Dimensional and Continuous Emotion Recognition

Hatice Gunes, Maja Pantic
2010 International Journal of Synthetic Emotions  
It must be carried out differently in the case of acted behaviour than in the case of spontaneous behaviour (see the previous section of this article), and both configuration and temporal analysis of the  ...  As described previously, the FeelTrace annotation tool is often used to annotate audio and audio-visual recordings (e.g., in the case of the SAL database).  ... 
doi:10.4018/jse.2010101605 fatcat:hipfyafiybfl5fk2ag6gvflm24
Showing results 1–15 of 108.