4 Hits in 1.2 sec

Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis [article]

Hai Pham, Thomas Manzini, Paul Pu Liang, Barnabás Póczos
2018 arXiv   pre-print
on multimodal sentiment analysis, specifically in the Bimodal case where our model is able to improve F1 Score by twelve points.  ...  In this paper, we propose two methods for unsupervised learning of joint multimodal representations using sequence to sequence (Seq2Seq) methods: a Seq2Seq Modality Translation Model and a Hierarchical  ...  Specific thanks to Louis-Philippe Morency and Amir Zadeh for their helpful discussions and thoughtful critiques.  ... 
arXiv:1807.03915v2 fatcat:6zzjqlxfdfa2zdsvhu35xvprle
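
The snippet above names a Seq2Seq Modality Translation Model. A minimal sketch of that idea in PyTorch (not the authors' released code; the feature dimensions and the sentiment head are placeholder assumptions): an encoder reads one modality, a decoder is trained to reconstruct another, and the encoder's final state doubles as the joint representation.

import torch
import torch.nn as nn

class ModalityTranslator(nn.Module):
    """Sketch: encode modality A, translate to modality B."""
    def __init__(self, src_dim=300, tgt_dim=74, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(src_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(tgt_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_dim)      # predicts target-modality features
        self.sentiment = nn.Linear(hidden, 1)      # hypothetical downstream head

    def forward(self, src, tgt_in):
        _, (h, c) = self.encoder(src)              # h is the joint representation
        dec_out, _ = self.decoder(tgt_in, (h, c))  # teacher-forced decoding
        return self.out(dec_out), self.sentiment(h.squeeze(0))

model = ModalityTranslator()
text = torch.randn(8, 20, 300)                     # e.g. word embeddings
audio = torch.randn(8, 20, 74)                     # e.g. aligned acoustic features
recon, score = model(text, audio)
translation_loss = nn.functional.mse_loss(recon, audio)

The translation objective is what makes the representation learning unsupervised; a sentiment head like the one sketched here would be fit separately on labeled data.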

Seq2Seq2Sentiment: Multimodal Sequence to Sequence Models for Sentiment Analysis

Hai Pham, Thomas Manzini, Paul Pu Liang, Barnabás Póczos
2018 Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML)   unpublished
on multimodal sentiment analysis, specifically in the Bimodal case where our model is able to improve F1 Score by twelve points.  ...  In this paper, we propose two methods for unsupervised learning of joint multimodal representations using sequence to sequence (Seq2Seq) methods: a Seq2Seq Modality Translation Model and a Hierarchical  ...  Specific thanks to Louis-Philippe Morency and Amir Zadeh for their helpful discussions and thoughtful critiques.  ... 
doi:10.18653/v1/w18-3308 fatcat:nokbghuw6rfjnh743yqn7xqybq

Multimodal Language Analysis with Recurrent Multistage Fusion [article]

Paul Pu Liang, Ziyin Liu, Amir Zadeh, Louis-Philippe Morency
2018 arXiv   pre-print
The RMFN displays state-of-the-art performance in modeling human multimodal language across three public datasets relating to multimodal sentiment analysis, emotion recognition, and speaker traits recognition  ...  We provide visualizations to show that each stage of fusion focuses on a different subset of multimodal signals, learning increasingly discriminative multimodal representations.  ...  Acknowledgements: The authors thank Yao Chong Lim, Venkata Ramana Murthy Oruganti, Zhun Liu, Ying Shen, Volkan Cirik, and the anonymous reviewers for their constructive comments on this paper.  ... 
arXiv:1808.03920v1 fatcat:z6l6eod5sfd6bmers4f7aibfwa
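
The multistage fusion idea in this snippet can be sketched loosely (a simplified reading, not the paper's exact RMFN architecture; the dimensions and stage count are assumptions): at each of several stages an attention mask highlights a subset of the concatenated multimodal features, and an LSTM cell integrates the highlighted signal with what earlier stages built.

import torch
import torch.nn as nn

class MultistageFusion(nn.Module):
    def __init__(self, dims=(300, 74, 35), hidden=64, stages=3):
        super().__init__()
        total = sum(dims)                               # concatenated modality features
        self.stages = stages
        self.attend = nn.Linear(hidden + total, total)  # stage-wise highlight mask
        self.cell = nn.LSTMCell(total, hidden)          # integrates across stages

    def forward(self, feats):                           # feats: (batch, total)
        h = feats.new_zeros(feats.size(0), self.cell.hidden_size)
        c = torch.zeros_like(h)
        for _ in range(self.stages):
            mask = torch.sigmoid(self.attend(torch.cat([h, feats], dim=-1)))
            h, c = self.cell(mask * feats, (h, c))      # fuse the highlighted subset
        return h                                        # fused multimodal representation

fusion = MultistageFusion()
z = fusion(torch.randn(8, 300 + 74 + 35))               # one fused vector per example

Because the mask is recomputed from the running state h, successive stages can attend to different subsets of the signals, which is the behavior the abstract's visualizations describe.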

COBRA: Contrastive Bi-Modal Representation Algorithm [article]

Vishaal Udandarao, Abhishek Maiti, Deepak Srivatsav, Suryatej Reddy Vyalla, Yifang Yin, Rajiv Ratn Shah
2020 arXiv   pre-print
Existing approaches generate latent embeddings for each modality in a joint fashion by representing them in a common manifold.  ...  We hypothesize that these embeddings retain the intra-class relationships but are unable to preserve the inter-class dynamics.  ...  [40] proposed Seq2Seq2Sentiment, an unsupervised method for learning joint multi-modal representations using sequence to sequence models. Wang et al.  ... 
arXiv:2005.03687v2 fatcat:nfz7mrdv6jfyhil6vroqpegkie
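
To make the contrastive framing concrete, here is a generic bi-modal contrastive loss (an illustration of the general technique, not COBRA's exact objective; the projection sizes and temperature are assumptions): matched cross-modal pairs are pulled together in a shared space and mismatched pairs are pushed apart, which is one way to preserve inter-class structure.

import torch
import torch.nn as nn
import torch.nn.functional as F

proj_a = nn.Linear(512, 128)                     # modality-A projection head
proj_b = nn.Linear(300, 128)                     # modality-B projection head

def bimodal_contrastive(a, b, temperature=0.07):
    za = F.normalize(proj_a(a), dim=-1)          # unit-norm shared-space embeddings
    zb = F.normalize(proj_b(b), dim=-1)
    logits = za @ zb.t() / temperature           # all pairwise similarities
    labels = torch.arange(a.size(0))             # matched pairs sit on the diagonal
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

loss = bimodal_contrastive(torch.randn(16, 512), torch.randn(16, 300))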