Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis

Hai Pham, Thomas Manzini, Paul Pu Liang, Barnabas Poczos
2018 · arXiv preprint
Multimodal machine learning is a core research area spanning the language, visual, and acoustic modalities. The central challenge in multimodal learning is learning representations that can process and relate information from multiple modalities. In this paper, we propose two methods for unsupervised learning of joint multimodal representations using sequence to sequence (Seq2Seq) methods: a Seq2Seq Modality Translation Model and a Hierarchical Seq2Seq Modality Translation Model. We also explore several variations on the multimodal inputs and outputs of these Seq2Seq models. Our experiments on multimodal sentiment analysis using the CMU-MOSI dataset indicate that our methods learn informative multimodal representations that outperform the baselines on multimodal sentiment analysis, most notably in the bimodal case, where our model improves F1 score by twelve points. We also discuss future directions for multimodal Seq2Seq methods.
arXiv:1807.03915v2
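
The abstract's core idea is to translate one modality's sequence into another's with a Seq2Seq model and reuse the encoder's final state as a joint multimodal representation for sentiment prediction. Below is a minimal PyTorch sketch of that idea; the LSTM choice, layer sizes, feature dimensions, and the regression head are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ModalityTranslationSeq2Seq(nn.Module):
    """Encode a source modality and decode a target modality.

    The encoder's final hidden state doubles as the joint
    representation fed to a sentiment regression head.
    """

    def __init__(self, src_dim, tgt_dim, hidden_dim=128):
        super().__init__()
        self.encoder = nn.LSTM(src_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(tgt_dim, hidden_dim, batch_first=True)
        self.out_proj = nn.Linear(hidden_dim, tgt_dim)   # reconstruct target-modality features
        self.sentiment_head = nn.Linear(hidden_dim, 1)   # regression on MOSI-style scores

    def forward(self, src, tgt):
        # src: (batch, src_len, src_dim), e.g. word embeddings
        # tgt: (batch, tgt_len, tgt_dim), e.g. acoustic features
        _, (h, c) = self.encoder(src)
        dec_out, _ = self.decoder(tgt, (h, c))           # teacher-forced decoding
        recon = self.out_proj(dec_out)                   # translated target sequence
        sentiment = self.sentiment_head(h[-1])           # joint representation -> sentiment
        return recon, sentiment

# Toy usage: translate a text sequence into an acoustic sequence
# (dimensions 300 and 74 are placeholders, not the paper's values).
model = ModalityTranslationSeq2Seq(src_dim=300, tgt_dim=74)
src = torch.randn(8, 20, 300)
tgt = torch.randn(8, 20, 74)
recon, sentiment = model(src, tgt)
translation_loss = nn.functional.mse_loss(recon, tgt)   # unsupervised translation objective
```

In this sketch, the translation loss provides the unsupervised signal for representation learning, while the sentiment head can be trained on labeled data afterward or jointly; the hierarchical variant named in the abstract would stack such encoders, which is not shown here.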