Deep Multimodal Learning for Emotion Recognition in Spoken Language [article]

Yue Gu, Shuhong Chen, Ivan Marsic
2018 arXiv pre-print
In this paper, we present a novel deep multimodal framework to predict human emotions from sentence-level spoken language. Our architecture has two distinctive characteristics. First, it extracts high-level features from both text and audio via a hybrid deep multimodal structure, which considers the spatial information from text, the temporal information from audio, and high-level associations from low-level handcrafted features. Second, we fuse all features using a three-layer deep neural network to learn the correlations across modalities, and we train the feature extraction and fusion modules together, allowing optimal global fine-tuning of the entire structure. We evaluated the proposed framework on the IEMOCAP dataset. Our results show promising performance, achieving 60.4% weighted accuracy over five emotion categories.
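The sketch below illustrates the kind of architecture the abstract describes: a text branch for spatial information, an audio branch for temporal information, a branch for handcrafted low-level features, and a three-layer fusion network trained end to end with the extractors. It is a minimal illustration only, assuming PyTorch; the branch types (1-D CNN for text, LSTM for audio), all layer sizes, and all feature dimensions are placeholder assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a hybrid text/audio emotion model with a three-layer fusion DNN.
# All dimensions and branch choices are illustrative assumptions.
import torch
import torch.nn as nn


class TextBranch(nn.Module):
    """Extracts spatial features from word embeddings with a 1-D convolution."""
    def __init__(self, embed_dim=300, out_dim=128):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, out_dim, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)

    def forward(self, x):               # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)           # -> (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))
        return self.pool(x).squeeze(-1) # (batch, out_dim)


class AudioBranch(nn.Module):
    """Captures temporal structure in frame-level acoustic features with an LSTM."""
    def __init__(self, feat_dim=40, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)

    def forward(self, x):               # x: (batch, frames, feat_dim)
        _, (h, _) = self.lstm(x)
        return h[-1]                    # (batch, hidden_dim)


class HandcraftedBranch(nn.Module):
    """Maps utterance-level handcrafted features to a higher-level representation."""
    def __init__(self, feat_dim=88, out_dim=64):
        super().__init__()
        self.fc = nn.Linear(feat_dim, out_dim)

    def forward(self, x):               # x: (batch, feat_dim)
        return torch.relu(self.fc(x))


class MultimodalEmotionNet(nn.Module):
    """Fuses all branches with a three-layer DNN; the whole model trains jointly."""
    def __init__(self, num_classes=5):
        super().__init__()
        self.text = TextBranch()
        self.audio = AudioBranch()
        self.handcrafted = HandcraftedBranch()
        fused_dim = 128 + 128 + 64
        self.fusion = nn.Sequential(
            nn.Linear(fused_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, text_emb, audio_frames, handcrafted):
        h = torch.cat([self.text(text_emb),
                       self.audio(audio_frames),
                       self.handcrafted(handcrafted)], dim=-1)
        return self.fusion(h)           # logits over the emotion categories


if __name__ == "__main__":
    model = MultimodalEmotionNet()
    logits = model(torch.randn(2, 50, 300),   # word embeddings
                   torch.randn(2, 200, 40),   # acoustic frames
                   torch.randn(2, 88))        # handcrafted features
    print(logits.shape)                       # torch.Size([2, 5])
```

Because the fusion layers and both extractor branches sit in one module, a single optimizer updates them together, which is the "global fine-tuning" property the abstract emphasizes.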
arXiv:1802.08332v1