2,644 Hits in 5.9 sec

Multimodal Emotion Recognition Model using Physiological Signals [article]

Yuxuan Zhao, Xinyan Cao, Jinlong Lin, Dunshan Yu, Xixin Cao
2019 arXiv   pre-print
1D convolutional neural network model and a biologically inspired multimodal fusion model which integrates multimodal information on the decision level for emotion recognition.  ...  Compared with the single-modal recognition, the multimodal fusion model improves the accuracy of emotion recognition by 5% ~ 25%, and the fusion result of EEG signals (decomposed into four frequency bands  ...  model on decision level for multimodal emotion recognition.  ... 
arXiv:1911.12918v1 fatcat:wggkkuakifedpilhnal5osfoz4

Multimodal emotion recognition using a hierarchical fusion convolutional neural network

Yong Zhang, Cheng Cheng, Yidie Zhang
2021 IEEE Access  
Considering the complexity of recording electroencephalogram signals, some researchers have applied deep learning to find new features for emotion recognition.  ...  However, the extraction of hierarchical features with convolutional neural networks for multimodal emotion recognition remains unexplored.  ...  EXPERIMENTS AND RESULT ANALYSIS This paper focuses on the feature extraction and fusion method for multimodal physiological signals.  ... 
doi:10.1109/access.2021.3049516 fatcat:rnynnapfnjdldi54rolcspglxi

A Systematic Review on Affective Computing: Emotion Models, Databases, and Recent Advances [article]

Yan Wang, Wei Song, Wei Tao, Antonio Liotta, Dawei Yang, Xinlei Li, Shuyong Gao, Yixuan Sun, Weifeng Ge, Wei Zhang, Wenqiang Zhang
2022 arXiv   pre-print
Thus, the fusion of physical information and physiological signals can provide useful features of emotional states and lead to higher accuracy.  ...  baseline dataset, fusion strategies for multimodal affective analysis, and unsupervised learning models.  ...  (a) Feature-level fusion for visual-audio emotion recognition adopted from [360] ; (b) Feature-level fusion for text-audio emotion recognition adopted from [361] ; (c) Feature-level fusion for visual-audio-text  ... 
arXiv:2203.06935v3 fatcat:h4t3omkzjvcejn2kpvxns7n2qe

Automatic Emotion Recognition Using Temporal Multimodal Deep Learning

Bahareh Nakisa, Mohammad Naim Rastgoo, Andry Rakotonirainy, Frederic Maire, Vinod Chandran
2020 IEEE Access  
ACKNOWLEDGEMENT Frederic Maire acknowledges continued support from the Queensland University of Technology (QUT) through the Centre for Robotics.  ...  B) Multimodal data fusion for emotion classification Using multimodal input has improved the accuracy of emotion recognition compared to using inputs of a single modality.  ... 
doi:10.1109/access.2020.3027026 fatcat:brbdfo5xijgb7ewufmpqqwnkim

Advances in Multimodal Emotion Recognition Based on Brain–Computer Interfaces

Zhipeng He, Zina Li, Fuzhou Yang, Lei Wang, Jingcong Li, Chengju Zhou, Jiahui Pan
2020 Brain Sciences  
Finally, we identify several important issues and research directions for multimodal emotion recognition based on BCI.  ...  This paper primarily discusses the progress of research into multimodal emotion recognition based on BCI and reviews three types of multimodal affective BCI (aBCI): aBCI based on a combination of behavior  ...  At the fusion level, they use feature-level fusion to combine the extracted spatial features from the EEG signals with the temporal features extracted from the physiological signals.  ... 
doi:10.3390/brainsci10100687 pmid:33003397 pmcid:PMC7600724 fatcat:juzx77asgrh2zpl3s2jvw6tdcq
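The feature-level fusion mentioned in the entry above (combining features extracted from EEG with features from other physiological signals) amounts to concatenating per-trial feature vectors before classification. A minimal sketch, with hypothetical feature dimensions:

```python
import numpy as np

def feature_level_fusion(eeg_features, peripheral_features):
    """Concatenate per-trial feature vectors from two modalities
    into one joint feature vector (feature-level fusion)."""
    return np.concatenate([eeg_features, peripheral_features], axis=-1)

# Hypothetical features: 32 EEG-derived values and 8 peripheral
# (e.g. skin conductance, heart rate) values per trial.
eeg = np.random.rand(10, 32)        # 10 trials x 32 EEG features
peripheral = np.random.rand(10, 8)  # 10 trials x 8 peripheral features
fused = feature_level_fusion(eeg, peripheral)
print(fused.shape)  # (10, 40)
```

The fused matrix is then fed to a single classifier, in contrast to decision-level fusion, where each modality gets its own classifier.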

Cross-Subject Multimodal Emotion Recognition based on Hybrid Fusion

Yucel Cimtay, Erhan Ekmekcioglu, Seyma Caglar Ozhan
2020 IEEE Access  
Similarly, our approach yields a maximum one-subject-out accuracy of 91.5% and a mean accuracy of 53.8% on the Database for Emotion Analysis using Physiological Signals (DEAP) for varying numbers of emotion  ...  This method follows a hybrid fusion strategy and yields a maximum one-subject-out accuracy of 81.2% and a mean accuracy of 74.2% on our bespoke multimodal emotion dataset (LUMED-2) for 3 emotion classes  ...  ACKNOWLEDGMENT We would like to thank the creators of DEAP dataset for openly sharing the dataset with us and the wider research community.  ... 
doi:10.1109/access.2020.3023871 fatcat:6pv6ftzbxzb57jejybadv43x6u

Multimodal Affect Recognition: Current Approaches and Challenges [chapter]

Hussein Al Osman, Tiago H. Falk
2017 Emotion and Attention Recognition Based on Biological Signals and Images  
However, the multimodal approach presents challenges pertaining to the fusion of individual signals, dimensionality of the feature space, and incompatibility of collected signals in terms of time resolution  ...  Many factors render multimodal affect recognition approaches appealing. First, humans employ a multimodal approach in emotion recognition.  ...  Physiological modality Physiological signals can be used for affect recognition through the detection of biological patterns that are reflective of emotional expressions.  ... 
doi:10.5772/65683 fatcat:du7u2lfx4nhkzf5d7zq7g5ofty

Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition

Yongrui Huang, Jianhao Yang, Pengkai Liao, Jiahui Pan
2017 Computational Intelligence and Neuroscience  
This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram and facial expression.  ...  Emotion recognition is based on two decision-level fusion methods of both EEG and facial expression detections by using a sum rule or a production rule.  ...  The goal of our research is to perform a multimodal fusion between EEGs and peripheral physiological signals for emotion recognition.  ... 
doi:10.1155/2017/2107451 pmid:29056963 pmcid:PMC5625811 fatcat:swc3vn66xjhi3agfu2dwrmyupa
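The two decision-level rules named in this entry, the sum rule and the product rule, combine the class-probability vectors emitted by the EEG and facial-expression detectors. A minimal sketch, with made-up probabilities (not values from the paper):

```python
import numpy as np

def sum_rule(p_eeg, p_face):
    """Decision-level fusion: average the class-probability vectors."""
    fused = (p_eeg + p_face) / 2.0
    return fused / fused.sum()

def product_rule(p_eeg, p_face):
    """Decision-level fusion: multiply element-wise, then renormalize."""
    fused = p_eeg * p_face
    return fused / fused.sum()

# Hypothetical probabilities over three emotion classes from each detector.
p_eeg = np.array([0.6, 0.3, 0.1])
p_face = np.array([0.5, 0.2, 0.3])
print(int(np.argmax(sum_rule(p_eeg, p_face))))      # 0
print(int(np.argmax(product_rule(p_eeg, p_face))))  # 0
```

The product rule penalizes classes that any single modality considers unlikely, while the sum rule is more forgiving of one modality's low confidence.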

Multimodal Emotion Recognition using Deep Learning

Sharmeen M. Saleem Abdullah, Siddeeq Y. Ameen, Mohammed Sadeeq, Subhi Zeebaree
2021 Journal of Applied Science and Technology Trends  
This paper presents a review of emotion recognition from multimodal signals using deep learning and compares their applications across current studies.  ...  This should help researchers better understand physiological signals, the current state of the science, and its open problems in emotional awareness.  ...  A multimodal fusion model can achieve emotion detection by integrating physiological signals in various ways [58] .  ... 
doi:10.38094/jastt20291 fatcat:2ofkuynxebgb5glhsaii5zcq4u

Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

Zhiyan Han, Jian Wang, S.A. Hamouda, M. Mirzaei, Z. Yu
2016 MATEC Web of Conferences  
Experiments show the method improves the accuracy of emotion recognition by giving full play to the advantages of decision level fusion and feature level fusion, and makes the whole fusion process close  ...  This paper describes a novel multimodal emotion recognition algorithm, and takes speech signal and facial expression signal as the research subjects.  ...  The work is also supported by grant from the National Natural Science Foundation of China (No. 61503038, No. 61403042).  ... 
doi:10.1051/matecconf/20166103012 fatcat:wqrjiwzpjfdvzdwyvpr2zhsvvq

Emotion Recognition from Multimodal Physiological Signals for Emotion Aware Healthcare Systems

Değer Ayata, Yusuf Yaslan, Mustafa E. Kamasak
2020 Journal of Medical and Biological Engineering  
Purpose The purpose of this paper is to propose a novel emotion recognition algorithm from multimodal physiological signals for emotion aware healthcare systems.  ...  Results indicate that using multiple sources of physiological signals and their fusion increases the accuracy rate of emotion recognition.  ...  It is shown that decision level fusion from multiple classifiers (one per signal source) improved the accuracy rate of emotion recognition both for arousal and valence dimensions.  ... 
doi:10.1007/s40846-019-00505-7 fatcat:zyvnt2ajhzh7rfreobdkd4yfau

Emotional Analysis Model for Social Hot Topics of Professional Migrant Workers

Gefeng Pang, Anze Bao, Xin Ning
2022 Computational Intelligence and Neuroscience  
It analyzes human body structure and emotional signals, and then combines them with visual and physiological signals to create multimodal emotional data.  ...  Text makes up a large portion of network data because it is the vehicle for people's direct expression of emotions and opinions.  ...  Emotion Recognition Based on Multimodal Information Feature Level and Decision Level Fusion.  ... 
doi:10.1155/2022/3812055 pmid:35140770 pmcid:PMC8820853 fatcat:pq4nroprerclhbhmpezlsf4bda

Emotion Recognition Using Multimodal Deep Learning [chapter]

Wei Liu, Wei-Long Zheng, Bao-Liang Lu
2016 Lecture Notes in Computer Science  
We demonstrate that high level representation features extracted by the Bimodal Deep AutoEncoder (BDAE) are effective for emotion recognition.  ...  To enhance the performance of affective models and reduce the cost of acquiring physiological signals for real-world applications, we adopt multimodal deep learning approach to construct affective models  ...  Lu et al. used two different fusion strategies for combining EEG and eye movement data: feature level fusion and decision level fusion [10] .  ... 
doi:10.1007/978-3-319-46672-9_58 fatcat:dw3wkv3hpvgavmpx7qu6yitte4
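The Bimodal Deep AutoEncoder (BDAE) named in this entry learns a shared hidden representation from two modalities. A minimal, untrained sketch of that structure, modality-specific encoders feeding one shared bottleneck; the layer sizes and random weights are arbitrary illustrations, not the configuration from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    """One dense layer with tanh activation."""
    return np.tanh(x @ w)

# Hypothetical dimensions: 32-d EEG features and 10-d eye-movement
# features, each encoded to 16-d, merged into an 8-d shared code.
w_eeg = rng.normal(size=(32, 16))
w_eye = rng.normal(size=(10, 16))
w_shared = rng.normal(size=(32, 8))  # 16 + 16 concatenated -> 8

def shared_representation(eeg, eye):
    """Encode each modality separately, concatenate the codes, and
    map them to a joint bottleneck (the BDAE-style shared features)."""
    h = np.concatenate([layer(eeg, w_eeg), layer(eye, w_eye)], axis=-1)
    return layer(h, w_shared)

z = shared_representation(rng.normal(size=(5, 32)), rng.normal(size=(5, 10)))
print(z.shape)  # (5, 8)
```

In the actual approach the weights are trained to reconstruct both modalities, and the bottleneck activations `z` are then used as features for the emotion classifier.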

A Survey on Physiological Signal Based Emotion Recognition [article]

Zeeshan Ahmad, Naimul Khan
2022 arXiv   pre-print
and their comparison, data preprocessing techniques for each physiological signal, data splitting techniques for improving the generalization of emotion recognition models and different multimodal fusion  ...  Physiological Signals are the most reliable form of signals for emotion recognition, as they cannot be controlled deliberately by the subject.  ...  In section VIII, it is explained that commonly practiced fusion techniques for emotion recognition are feature level and decision level fusion.  ... 
arXiv:2205.10466v1 fatcat:llfz5bjml5gctjuj6tnbjx2rwe
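Among the data-splitting techniques this survey covers, subject-wise splitting is the one that tests cross-subject generalization: each fold holds out every trial of one subject. A minimal sketch with made-up subject labels:

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_indices, test_indices) pairs where each fold holds
    out all trials of exactly one subject, so the model is always
    evaluated on a subject it never saw during training."""
    for held_out in sorted(set(subject_ids)):
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train, test

# Hypothetical labels: six trials recorded from three subjects.
subjects = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = list(leave_one_subject_out(subjects))
print(len(folds))  # 3 folds, one per subject
print(folds[0])    # ([2, 3, 4, 5], [0, 1])
```

Random trial-wise splits, by contrast, let trials from the same subject appear in both train and test sets, which inflates reported accuracy.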

Performance Comparison of the KNN and SVM Classification Algorithms in the Emotion Detection System EMOTICA

Kone Chaka, Nhan Le Thanh, Remi Flamary, Cecile Belleudy
2018 International Journal of Sensor Networks and Data Communications  
The Emotica (EMOTIon CApture) system is a multimodal emotion recognition system that uses physiological signals.  ...  A DLF (Decision Level Fusion) approach with a voting method is used in this system to merge monomodal decisions into a multimodal detection.  ...  To this end, three techniques (signal-level fusion, feature-level fusion and decision-level fusion) have been proposed in the literature.  ... 
doi:10.4172/2090-4886.1000153 fatcat:tipzphq6dvbi3n2tzt2t4lf2pe
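The voting-based DLF described in this entry reduces to a majority vote over the labels emitted by the monomodal classifiers. A minimal sketch, with hypothetical emotion labels:

```python
from collections import Counter

def vote_fusion(monomodal_decisions):
    """Decision-level fusion by majority vote: each monomodal
    classifier (e.g. one KNN or SVM per physiological signal)
    emits a label, and the most frequent label wins."""
    counts = Counter(monomodal_decisions)
    return counts.most_common(1)[0][0]

# Hypothetical decisions from three single-signal classifiers.
print(vote_fusion(["joy", "joy", "sadness"]))  # joy
```

Unlike the sum or product rules, voting needs only hard labels from each classifier, not calibrated probabilities.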
Showing results 1 — 15 out of 2,644 results