A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2020. The file type is application/pdf.
Multimodal Routing: Improving Local and Global Interpretability of Multimodal Language Analysis
2020
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Human language can be expressed through multiple sources of information, known as modalities, including tone of voice, facial gestures, and spoken words. Recent multimodal learning methods with strong performance on human-centric tasks such as sentiment analysis and emotion recognition are often black boxes, offering very limited interpretability. In this paper we propose Multimodal Routing, which dynamically adjusts weights between input modalities and output representations differently for each [abstract truncated in this record]
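The abstract describes per-sample routing weights between input modalities and output representations. The paper's exact procedure is not reproduced in this record; the following is only a minimal sketch of the general idea, assuming a simple softmax-normalized agreement score between modality embeddings and learned output "concept" vectors (all names and shapes here are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def route(features, concepts):
    """Sketch of dynamic routing between modalities and output concepts.

    features: (n_modalities, d) per-sample modality embeddings
    concepts: (n_concepts, d) hypothetical learned output concept vectors
    """
    # Agreement score between each modality and each concept.
    scores = features @ concepts.T                         # (n_modalities, n_concepts)
    # Softmax per modality, so each modality's weights sum to 1;
    # these per-sample weights are what would make routing interpretable.
    r = np.exp(scores - scores.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)
    # Each output concept is an r-weighted combination of modality features.
    out = r.T @ features                                   # (n_concepts, d)
    return r, out

rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 8))     # e.g. text, audio, vision embeddings
concepts = rng.normal(size=(2, 8))  # e.g. two sentiment concepts
r, out = route(feats, concepts)
print(r.shape, out.shape)           # (3, 2) (2, 8)
```

Because the weights `r` are computed anew for every input sample, inspecting them gives a per-example (local) view of which modality drove which output, in the spirit of the interpretability claim in the abstract.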
doi:10.18653/v1/2020.emnlp-main.143
pmid:33969363
pmcid:PMC8106385
fatcat:fknnqv6a6zbx7fub66j6mvilvy