A copy of this work was preserved in the Wayback Machine (captured 2020); the original URL is also available. File type: application/pdf.
Probing Representations Learned by Multimodal Recurrent and Transformer Models
[article] arXiv pre-print, 2019
Recent literature shows that large-scale language modeling provides excellent reusable sentence representations with both recurrent and self-attentive architectures. However, there has been less clarity on the commonalities and differences in the representational properties induced by the two architectures. It has also been shown that visual information serves as one means of grounding sentence representations. In this paper, we present a meta-study assessing the representational ...
arXiv:1908.11125v1
fatcat:b5ktvnumqverbouczrspqgcwo4