A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages [article]

Clara Vania, Yova Kementchedjhieva, Anders Søgaard, Adam Lopez
2019 · arXiv pre-print
Parsers are available for only a handful of the world's languages, since they require lots of training data. How far can we get with just a small amount of training data? We systematically compare a set of simple strategies for improving low-resource parsers: data augmentation, which has not been tested before; cross-lingual training; and transliteration. Experimenting on three typologically diverse low-resource languages---North Sámi, Galician, and Kazakh---we find that (1) when only the low-resource treebank is available, data augmentation is very helpful; (2) when a related high-resource treebank is available, cross-lingual training is helpful and complements data augmentation; and (3) when the high-resource treebank uses a different writing system, transliteration into a shared orthographic space is also very helpful.
arXiv:1909.02857v1