"Diversity and Uncertainty in Moderation" are the Key to Data Selection for Multilingual Few-shot Transfer [article]

Shanu Kumar, Sandipan Dandapat, Monojit Choudhury
2022 arXiv pre-print
Few-shot transfer often shows substantial gains over zero-shot transfer, making it a practically useful trade-off between fully supervised and unsupervised learning approaches for systems based on multilingual pretrained models. This paper explores various strategies for selecting data for annotation that can result in better few-shot transfer. The proposed approaches rely on multiple measures such as data entropy using an n-gram language model, predictive entropy, and gradient embedding. We propose a loss embedding method for sequence labeling tasks, which induces diversity and uncertainty sampling similar to gradient embedding. The proposed data selection strategies are evaluated and compared for POS tagging, NER, and NLI tasks for up to 20 languages. Our experiments show that the gradient and loss embedding-based strategies consistently outperform random data selection baselines, with gains varying with the initial performance of the zero-shot transfer. Furthermore, the proposed method shows similar trends in improvement even when the model is fine-tuned using a lower proportion of the original task-specific labeled training data for zero-shot transfer.
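As context for the uncertainty measures the abstract mentions, the following is a minimal sketch of predictive-entropy-based selection: score each unlabeled example by the Shannon entropy of the model's predictive distribution and pick the highest-entropy ones for annotation. This is a generic illustration of the technique, not the paper's implementation; the function names and the top-k selection rule are assumptions for the example.

```python
import numpy as np

def predictive_entropy(probs):
    # Shannon entropy of each example's predictive distribution
    # (rows of `probs`); small epsilon guards against log(0).
    probs = np.asarray(probs, dtype=float)
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

def select_most_uncertain(probs, k):
    # Indices of the k examples the model is most uncertain about,
    # i.e. those with the highest predictive entropy.
    ent = predictive_entropy(probs)
    return np.argsort(-ent)[:k].tolist()

# A confident prediction vs. a near-uniform one: the uniform
# distribution has higher entropy, so it is selected first.
probs = [[0.98, 0.01, 0.01],
         [1/3, 1/3, 1/3]]
print(select_most_uncertain(probs, 1))  # → [1]
```

The paper's gradient- and loss-embedding strategies go further by also encouraging diversity among the selected examples, rather than ranking on uncertainty alone.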
arXiv:2206.15010v1