A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2015; you can also visit the original URL.
The file type is application/pdf.
Learning Abstract Concept Embeddings from Multi-Modal Data: Since You Probably Can't See What I Mean
2014
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal …
doi:10.3115/v1/d14-1032
dblp:conf/emnlp/HillK14
fatcat:ebszwwkcv5hi3kok2bnkuxbcam