A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis
2018
Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)
unpublished
Multimodal machine learning is a core research area spanning the language, visual and acoustic modalities. The central challenge in multimodal learning involves learning representations that can process and relate information from multiple modalities. In this paper, we propose two methods for unsupervised learning of joint multimodal representations using sequence to sequence (Seq2Seq) methods: a Seq2Seq Modality Translation Model and a Hierarchical Seq2Seq Modality Translation Model. We also
doi:10.18653/v1/w18-3308
fatcat:nokbghuw6rfjnh743yqn7xqybq