18,451 Hits in 5.5 sec

Exploring Multimodal Visual Features for Continuous Affect Recognition

Bo Sun, Siming Cao, Liandong Li, Jun He, Lejun Yu
2016 Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge - AVEC '16  
As visual features are very important in emotion recognition, we try a variety of handcrafted and deep visual features.  ...  This paper presents our work in the Emotion Sub-Challenge of the 6th Audio/Visual Emotion Challenge and Workshop (AVEC 2016), whose goal is to explore utilizing audio, visual and physiological signals  ...  To accelerate research in automatic continuous affect recognition from audio, video and physiological data, the Audio/Visual Emotion Challenge and Workshop (AVEC) aimed at comparison of multimedia processing  ... 
doi:10.1145/2988257.2988270 dblp:conf/mm/SunCLHY16 fatcat:cxxsk3jw4zhipgmovqmgnmnuz4

2011 Annual Index

2012 IEEE Transactions on Affective Computing  
-Mar. 2011 10-21 Exploring Fusion Methods for Multimodal Emotion Recognition with Missing Data. Wagner, Johannes, +, T-AFFC Oct.  ... 
doi:10.1109/t-affc.2012.7 fatcat:ejirms67dza3bmqub3jege7xtm

Exploring the contextual factors affecting multimodal emotion recognition in videos [article]

Prasanta Bhattacharya, Raj Kumar Gupta, Yinping Yang
2021 arXiv   pre-print
While multimodal emotion recognition techniques are gaining research attention, there is a lack of deeper understanding of how visual and non-visual features can be used to better recognize emotions in  ...  Multimodal features performed particularly better for male speakers in recognizing most emotions.  ...  All errors that remain are our sole responsibility. 1 Exploring the contextual factors affecting multimodal emotion recognition in videos Supplemental Material Appendix A Multimodal Emotion Features  ... 
arXiv:2004.13274v5 fatcat:nb4tst75czbqrji2sbaqijjjqa

Multimodal Affect Recognition: Current Approaches and Challenges [chapter]

Hussein Al Osman, Tiago H. Falk
2017 Emotion and Attention Recognition Based on Biological Signals and Images  
Many factors render multimodal affect recognition approaches appealing. First, humans employ a multimodal approach in emotion recognition.  ...  However, the multimodal approach presents challenges pertaining to the fusion of individual signals, dimensionality of the feature space, and incompatibility of collected signals in terms of time resolution  ...  Moreover, new sensors and wearable technologies are emerging continuously, which may open doors for new affect-recognition modalities.  ... 
doi:10.5772/65683 fatcat:du7u2lfx4nhkzf5d7zq7g5ofty

Emotion representation, analysis and synthesis in continuous space: A survey

Hatice Gunes, Bjorn Schuller, Maja Pantic, Roddy Cowie
2011 Face and Gesture 2011  
Therefore, affective and behavioural computing researchers have recently invested increased effort in exploring how to best model, analyse, interpret and respond to the subtlety, complexity and continuity  ...  Despite major advances within the affective computing research field, modelling, analysing, interpreting and responding to naturalistic human affective behaviour still remains a challenge for automated  ...  For further details on how features are extracted for each communicative modality, and how multicue and multimodal fusion is achieved for affect analysis purposes please see [14] , [50] , [51] .  ... 
doi:10.1109/fg.2011.5771357 dblp:conf/fgr/GunesSPC11 fatcat:5vykm64mavg3bc4kf3oe22b7ki

Multi-stream Confidence Analysis for Audio-Visual Affect Recognition [chapter]

Zhihong Zeng, Jilin Tu, Ming Liu, Thomas S. Huang
2005 Lecture Notes in Computer Science  
affect recognition.  ...  In this paper, we explore the development of a computing algorithm that uses audio and visual sensors to recognize a speaker's affective state.  ...  Lawrence Chen for collecting the valuable data in this paper for audio-visual affect recognition.  ... 
doi:10.1007/11573548_123 fatcat:se3zhvwbqrcbbega2qhudnk55q

Audiovisual Information Fusion in Human–Computer Interfaces and Intelligent Environments: A Survey

Shankar T. Shivappa, Mohan Manubhai Trivedi, Bhaskar D. Rao
2010 Proceedings of the IEEE  
In this paper we describe the fusion strategies and the corresponding models used in audiovisual tasks such as speech recognition, tracking, biometrics, affective state recognition and meeting scene analysis  ...  Intelligent systems with audio-visual sensors should be capable of achieving similar goals. The audio-visual information fusion strategy is a key component in designing such systems.  ...  We sincerely thank the reviewers for their valuable advice, which has helped us enhance the content as well as the presentation of the paper.  ... 
doi:10.1109/jproc.2010.2057231 fatcat:lfzgfmn2hjdq7h6o5txva3oapq

A review of affective computing: From unimodal analysis to multimodal fusion

Soujanya Poria, Erik Cambria, Rajiv Bajpai, Amir Hussain
2017 Information Fusion  
In this paper, we focus mainly on the use of audio, visual and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities.  ...  Multimodality is defined by the presence of more than one modality or channel, e.g., visual, audio, text, gestures, and eye gaze.  ...  In this section, we present various studies on the use of visual features for multimodal affect analysis.  ... 
doi:10.1016/j.inffus.2017.02.003 fatcat:ytebhjxlz5bvxcdghg4wxbvr6a

Linear and Non-Linear Multimodal Fusion for Continuous Affect Estimation In-the-Wild

Yona Falinie A. Gaus, Hongying Meng
2018 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018)  
Automatic continuous affect recognition from multiple modalities in the wild is arguably one of the most challenging research areas in affective computing.  ...  performance improvement for all affect estimation.  ...  In the literature of continuous affect recognition, there are typically two modalities present to estimate affect: the audio and visual modalities [10] .  ... 
doi:10.1109/fg.2018.00079 dblp:conf/fgr/GausM18 fatcat:p44l7hhgtfecddykm4ectr2f5i

A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances [article]

Yan Wang, Wei Song, Wei Tao, Antonio Liotta, Dawei Yang, Xinlei Li, Shuyong Gao, Yixuan Sun, Weifeng Ge, Wei Zhang, Wenqiang Zhang
2022 arXiv   pre-print
baseline dataset, fusion strategies for multimodal affective analysis, and unsupervised learning models.  ...  Next, we survey and taxonomize state-of-the-art unimodal affect recognition and multimodal affective analysis in terms of their detailed architectures and performances.  ...  (a) Feature-level fusion for visual-audio emotion recognition adopted from [349] ; (b) Feature-level fusion for text-audio emotion recognition adopted from [350] ; (c) Feature-level fusion for visual-audio-text  ... 
arXiv:2203.06935v3 fatcat:h4t3omkzjvcejn2kpvxns7n2qe

Audio-Visual Affect Recognition

Z. Zeng, J. Tu, M. Liu, T.S. Huang, B. Pianfetti, D. Roth, S. Levinson
2007 IEEE transactions on multimedia  
In this paper, we present our efforts toward audio-visual affect recognition on 11 affective states customized for HCI application (four cognitive/motivational and seven basic affective states) of 20 nonactor  ...  For person-dependent recognition, we apply the voting method to combine the frame-based classification results from both audio and visual channels.  ...  Chen for collecting the valuable data in this paper for audio-visual affect recognition.  ... 
doi:10.1109/tmm.2006.886310 fatcat:un7fcgr26fhjbnd3g4zxil7fym

Multimodal Relational Tensor Network for Sentiment and Emotion Classification [article]

Saurav Sahay, Shachi H Kumar, Rui Xia, Jonathan Huang, Lama Nachman
2018 arXiv   pre-print
We present the results of our model on the CMU-MOSEI dataset and show that our model outperforms many baselines and state-of-the-art methods for sentiment classification and emotion recognition.  ...  Understanding affect from video segments has brought researchers from the language, audio and video domains together.  ...  However, this hasn't been widely researched in conversational multimodal audio-visual and textual context for continuous recognition of sentiments and emotions.  ... 
arXiv:1806.02923v1 fatcat:sohdzj7qejatnhnilvtlts7yaq

From the Lab to the Real World: Affect Recognition Using Multiple Cues and Modalities [chapter]

Hatice Gunes, Massimo Piccardi, Maja Pantic
2008 Affective Computing  
Acknowledgement The authors would like to thank Arman Savran, Bulent Sankur, Guillaume Chanel, Rana el Kaliouby, Rosalind Picard and Thorsten Spexard for granting permission to use figures from their works  ...  Such issues are yet to be explored in multimodal affect recognition.  ...  The chapter then explores further issues in data acquisition, data annotation, feature extraction, and multimodal affective state recognition.  ... 
doi:10.5772/6180 fatcat:wpyowqavajdabp3r6ljyrani4q

Construction of Spontaneous Emotion Corpus from Indonesian TV Talk Shows and Its Application on Multimodal Emotion Recognition

Nurul LUBIS, Dessi LESTARI, Sakriani SAKTI, Ayu PURWARIANTI, Satoshi NAKAMURA
2018 IEICE transactions on information and systems  
We perform multimodal emotion recognition utilizing the predictions of three modalities: acoustic, semantic, and visual.  ...  When compared to the unimodal result, in the multimodal feature combination, we attain identical accuracy for arousal at 92.6%, and a significant improvement for the valence classification task at  ...  For example, the decision of which visual features to use in the combination does not affect the performance of the end multimodal model, most probably due to its suboptimal performance in comparison to  ... 
doi:10.1587/transinf.2017edp7362 fatcat:sxkvqnmp7fcp7av3avhp2iqmge

Survey on audiovisual emotion recognition: databases, features, and data fusion strategies

Chung-Hsien Wu, Jen-Chun Lin, Wen-Li Wei
2014 APSIPA Transactions on Signal and Information Processing  
Facial and vocal features and audiovisual bimodal data fusion methods for emotion recognition are then surveyed and discussed.  ...  Conclusions outline and address some of the existing emotion recognition issues.  ...  To recognize continuously valued affective dimensions, Schuller et al.  ... 
doi:10.1017/atsip.2014.11 fatcat:6ujyy4sv55ezvdfbn3rt3leki4
Showing results 1 — 15 out of 18,451 results