Vania, Kementchedjhieva, Søgaard, and Lopez (2019). A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In Proceedings of EMNLP-IJCNLP 2019.

Abstract: Parsers are available for only a handful of the world's languages, since they require lots of training data. How far can we get with just a small amount of training data? We systematically compare a set of simple strategies for improving low-resource parsers: data augmentation, which has not been tested before; cross-lingual training; and transliteration. Experimenting […]

doi:10.18653/v1/d19-1102
dblp:conf/emnlp/VaniaKSL19
fatcat:w6tt4bygofefxbtgdepnoktfku