A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Learning Compressed Sentence Representations for On-Device Text Processing
2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems. The learned representations are generally assumed to be continuous and real-valued, giving rise to a large memory footprint and slow retrieval speed, which hinders their applicability to low-resource (memory and computation) platforms, such as mobile devices. In this paper, we propose four different strategies to transform continuous and generic sentence embeddings into a binarized form, while preserving their rich semantic information.
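To illustrate the kind of compression the abstract describes, the sketch below binarizes a continuous embedding via random projection followed by sign thresholding, then compares codes with Hamming distance. This is a generic baseline for illustration only, not one of the paper's four learned strategies; the dimensions and threshold scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, BITS = 4096, 256
# Shared random projection matrix, fixed across all sentences (an assumption
# of this sketch; the paper instead learns the transformation).
PROJ = rng.standard_normal((DIM, BITS))

def binarize(v):
    """Compress a float embedding into a 256-bit code: project, then
    threshold at zero. 4096 float32 values (16 KB) become 32 bytes."""
    bits = (v @ PROJ) > 0
    return np.packbits(bits)

def hamming(a, b):
    """Similarity search over binary codes reduces to cheap bit operations."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

v1 = rng.standard_normal(DIM)
v2 = v1 + 0.05 * rng.standard_normal(DIM)   # a near-duplicate sentence
v3 = rng.standard_normal(DIM)               # an unrelated sentence

c1, c2, c3 = binarize(v1), binarize(v2), binarize(v3)
# Near-duplicates land much closer in Hamming space than unrelated pairs,
# so retrieval quality survives the compression.
assert hamming(c1, c2) < hamming(c1, c3)
```

The memory saving here is 512x per vector (16 KB down to 32 bytes), and XOR-plus-popcount distance is far cheaper than float dot products, which is the motivation for on-device use.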
doi:10.18653/v1/p19-1011
dblp:conf/acl/ShenCSZYTCC19
fatcat:34mqfel4snfrtfqpsf4qi43e7e