A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Conformal retrofitting via Riemannian manifolds: distilling task-specific graphs into pretrained embeddings
[article]
2020
arXiv pre-print
Pretrained (language) embeddings are versatile, task-agnostic feature representations of entities, like words, that are central to many machine learning applications. These representations can be enriched through retrofitting, a class of methods that incorporate task-specific domain knowledge encoded as a graph over a subset of these entities. However, existing retrofitting algorithms face two limitations: they overfit the observed graph by failing to represent relationships with missing
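As background for the abstract above: classic retrofitting (in the Euclidean setting of Faruqui et al., 2015, not the Riemannian variant this paper proposes) pulls each pretrained vector toward the average of its graph neighbours while keeping it close to its original value. A minimal sketch, with illustrative parameter names (`alpha`, `beta`, `iters`) chosen here for exposition:

```python
import numpy as np

def retrofit(embeddings, graph, alpha=1.0, beta=1.0, iters=10):
    """Sketch of Euclidean retrofitting: iteratively move each vector
    toward a weighted average of its pretrained value and its graph
    neighbours' current values.

    embeddings: dict mapping entity -> np.ndarray (pretrained vectors)
    graph: dict mapping entity -> list of neighbour entities
    """
    new = {k: v.copy() for k, v in embeddings.items()}
    for _ in range(iters):
        for word, neighbours in graph.items():
            nbrs = [n for n in neighbours if n in new]
            if not nbrs:
                continue  # no observed neighbours: keep pretrained vector
            # weighted average of the pretrained vector and neighbour vectors
            num = alpha * embeddings[word] + beta * sum(new[n] for n in nbrs)
            new[word] = num / (alpha + beta * len(nbrs))
    return new
```

For example, two entities connected in the knowledge graph end up closer after retrofitting than their pretrained vectors were; the overfitting the abstract mentions arises because entities absent from the graph are never updated at all.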
arXiv:2010.04842v1
fatcat:miqno2kjhng2roup6d5mabgzta