Massively Multilingual Sparse Word Representations

Gábor Berend
2020 International Conference on Learning Representations  
In this paper, we introduce MAMUS for constructing multilingual sparse word representations. Our algorithm operates by determining a shared set of semantic units that are reutilized across languages, giving it a competitive edge in both speed and evaluation performance. We demonstrate that the proposed algorithm is competitive with strong baselines through a series of rigorous experiments on downstream applications spanning dependency parsing, document classification, and natural language inference. Additionally, our experiments relying on the QVEC-CCA evaluation score suggest that the proposed sparse word representations are more interpretable than alternative approaches. Finally, we release our multilingual sparse word representations for the typologically diverse set of 27 languages on which we conducted our experiments.
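The core idea of reusing a shared set of semantic units across languages can be illustrated with generic sparse coding over a common dictionary. The sketch below is not the paper's MAMUS algorithm; it is a minimal, assumed workflow using scikit-learn, with toy random vectors standing in for the dense embeddings of two languages: a dictionary of "semantic units" is learned once, then the same dictionary is reused to sparse-code both languages.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Toy dense embeddings for two "languages" (rows = words, cols = dims).
# In practice these would be pretrained monolingual embeddings.
X_en = rng.normal(size=(40, 16))
X_de = rng.normal(size=(40, 16))

# Learn a dictionary of shared "semantic units" from one language...
dict_learner = DictionaryLearning(
    n_components=24, alpha=0.5, max_iter=50, random_state=0
)
dict_learner.fit(X_en)
D = dict_learner.components_  # shared dictionary, shape (24, 16)

# ...then reuse the SAME dictionary to sparse-code both languages,
# so their sparse codes live in one shared semantic space.
A_en = sparse_encode(X_en, D, alpha=0.5)
A_de = sparse_encode(X_de, D, alpha=0.5)

# The lasso penalty drives many coefficients to exactly zero.
print(A_en.shape, A_de.shape, (A_de == 0).mean())
```

Because only the dictionary-learning step is run once and the per-language step is a cheap sparse-coding pass, reusing the dictionary is what yields the speed advantage the abstract alludes to.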