Word Representations Concentrate and This is Good News!
2020
Proceedings of the 24th Conference on Computational Natural Language Learning (CoNLL)
This article establishes that, unlike the legacy tf*idf representation, recent natural language representations (word embedding vectors) tend to exhibit a so-called concentration of measure phenomenon: as both the representation size p and the database size n grow large, their behavior is similar to that of large-dimensional Gaussian random vectors. This phenomenon may have important consequences, as machine learning algorithms for natural language data could be amenable to …
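The concentration claim in the abstract can be illustrated with a small numerical sketch (not taken from the paper; the values of p and n below are arbitrary choices for demonstration): for Gaussian vectors x ~ N(0, I_p), the normalized norm ||x||/sqrt(p) clusters ever more tightly around 1 as the dimension p grows.

```python
import numpy as np

# Hedged illustration of concentration of measure for large-dimensional
# Gaussian random vectors: the normalized norm ||x|| / sqrt(p) stays close
# to 1, with fluctuations shrinking roughly like 1/sqrt(p).
rng = np.random.default_rng(0)

n = 2_000  # number of sampled vectors (stand-in for a "database size")
for p in (10, 100, 10_000):
    X = rng.standard_normal((n, p))          # rows: i.i.d. N(0, I_p) vectors
    norms = np.linalg.norm(X, axis=1) / np.sqrt(p)
    print(f"p={p:>6}: mean={norms.mean():.3f}, std={norms.std():.4f}")
```

The printed standard deviation shrinks by roughly an order of magnitude each time p grows by two orders, while the mean stays near 1, which is the sense in which high-dimensional representations "concentrate".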
doi:10.18653/v1/2020.conll-1.25