A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
Adapting lexical representation and OOV handling from written to spoken language with word embedding
2015
Interspeech 2015
unpublished
Word embeddings have become ubiquitous in NLP, especially when using neural networks. One of the assumptions behind such representations is that words with similar properties have similar representations, allowing for better generalization by subsequent models. In the standard setting, two kinds of training corpora are used: a very large unlabeled corpus for learning the word embedding representations, and an in-domain training corpus with gold labels for training classifiers on the target NLP task.
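As a rough illustration of the two-corpus setting the abstract describes (not the paper's actual method for spoken-language adaptation or OOV handling), the sketch below pretrains word embeddings on an unlabeled corpus and then uses them as features for a classifier trained on a small labeled corpus. The corpora, labels, and hyperparameters are placeholders; it assumes gensim >= 4.0 and scikit-learn.

```python
# Minimal sketch of the standard two-corpus setting:
# (1) learn word embeddings from a large unlabeled corpus, then
# (2) use them as features for a classifier trained on a smaller
#     in-domain corpus with gold labels.
# All data and hyperparameters here are illustrative placeholders.
import numpy as np
from gensim.models import Word2Vec          # assumes gensim >= 4.0
from sklearn.linear_model import LogisticRegression

# (1) Large unlabeled corpus (toy tokenized sentences for illustration).
unlabeled_corpus = [
    ["the", "meeting", "starts", "at", "noon"],
    ["please", "send", "the", "report", "today"],
    ["the", "report", "covers", "spoken", "language"],
]
embedding = Word2Vec(unlabeled_corpus, vector_size=50, window=3,
                     min_count=1, epochs=20)

def sentence_vector(tokens, model):
    """Average the embeddings of in-vocabulary tokens; OOV tokens are skipped."""
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    if not vectors:
        return np.zeros(model.vector_size)
    return np.mean(vectors, axis=0)

# (2) Small in-domain corpus with gold labels (hypothetical binary task).
labeled_sentences = [
    (["the", "meeting", "starts", "at", "noon"], 1),
    (["please", "send", "the", "report", "today"], 0),
]
X = np.array([sentence_vector(toks, embedding) for toks, _ in labeled_sentences])
y = np.array([label for _, label in labeled_sentences])

classifier = LogisticRegression().fit(X, y)
print(classifier.predict(X))
```

Here OOV handling is reduced to simply skipping unknown tokens, which is the naive baseline the paper's title suggests it improves upon.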
doi:10.21437/interspeech.2015-58
fatcat:dcfk74pbl5bytk5urghczrd25i