Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities
[article] · 2020 · arXiv pre-print
Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed through the language, visual, and acoustic modalities. The central challenge in multimodal learning is inferring joint representations that can process and relate information from these modalities. However, existing work learns joint representations by requiring all modalities as input, and as a result the learned representations may be sensitive to noisy or missing modalities at test time. […]
arXiv:1812.07809v2
fatcat:fvxwz2xhsnfp7ny6shbzola6gm
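The title refers to learning joint representations by cyclically translating between modalities. As a rough illustration of that idea only (a hypothetical sketch, not the authors' exact architecture or code), the following PyTorch-style example translates a language feature sequence into a visual feature sequence and back, and treats the translator's encoder state as the joint representation; all module names, dimensions, and loss choices here are assumptions for illustration.

```python
# Hypothetical sketch: cyclic translation between two modality sequences.
# Not the paper's implementation; it only illustrates the idea of using
# forward + back translation to learn a joint representation.
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Seq2Seq-style translator: source-modality sequence -> target-modality sequence."""
    def __init__(self, src_dim, tgt_dim, hidden_dim=64):
        super().__init__()
        self.encoder = nn.GRU(src_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, tgt_dim)

    def forward(self, src):
        enc_out, h = self.encoder(src)        # h acts as the candidate joint representation
        dec_out, _ = self.decoder(enc_out, h) # decode a target-modality sequence
        return self.out(dec_out), h.squeeze(0)

def cyclic_translation_loss(fwd, bwd, lang, visual):
    """Forward translation (language -> visual) plus cyclic back-translation
    (predicted visual -> language). Only the language input is needed to
    compute the joint representation at test time."""
    mse = nn.MSELoss()
    visual_hat, joint = fwd(lang)   # language -> visual
    lang_hat, _ = bwd(visual_hat)   # predicted visual -> language (cycle)
    return mse(visual_hat, visual) + mse(lang_hat, lang), joint

if __name__ == "__main__":
    # Toy shapes: batch of 8 sequences, 20 time steps, 300-d language / 35-d visual features.
    lang = torch.randn(8, 20, 300)
    visual = torch.randn(8, 20, 35)
    fwd, bwd = Translator(300, 35), Translator(35, 300)
    loss, joint = cyclic_translation_loss(fwd, bwd, lang, visual)
    loss.backward()
    print(loss.item(), joint.shape)  # joint (8 x 64) could feed a sentiment predictor
```

In this sketch, the joint representation is produced from the language modality alone, which is the robustness property the abstract motivates: the visual (or acoustic) stream is only needed as a translation target during training, so a noisy or missing modality at test time does not affect the representation.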