Muppet: Massive Multi-task Representations with Pre-Finetuning

Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning. Pre-finetuning is massively multi-task learning (around 50 datasets, over 4.8 million total labeled examples), and is designed to encourage learning of representations that generalize better to many different tasks. We show that pre-finetuning consistently improves performance for pretrained discriminators (e.g. RoBERTa) and generation models (e.g. BART) on a wide range of tasks (sentence prediction, commonsense reasoning, MRC, etc.), while also significantly improving sample efficiency during fine-tuning. We also show that large-scale multi-tasking is crucial; pre-finetuning can hurt performance when few tasks are used, up until a critical point (usually above 15) after which performance improves linearly in the number of tasks.
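The abstract describes pre-finetuning as a single massively multi-task stage applied to a pretrained model before task-specific fine-tuning. Below is a minimal sketch of that idea, assuming a shared encoder with one classification head per task and losses from all tasks mixed into each optimizer update. The encoder, task names, and random data are placeholders, and the paper's specific loss scaling and heterogeneous batching are not reproduced here.

# Minimal sketch of a pre-finetuning loop: one shared encoder, one head per task,
# with losses from several tasks combined in every optimization step.
# Encoder, tasks, and data are placeholders, not the paper's actual setup.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Stand-in for a pretrained encoder such as RoBERTa (random weights here)."""
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))
        return h.mean(dim=1)  # pooled sequence representation

# Hypothetical task suite: task name -> number of classes.
TASKS = {"nli": 3, "sentiment": 2, "topic": 10}

encoder = SharedEncoder()
heads = nn.ModuleDict({name: nn.Linear(128, n_cls) for name, n_cls in TASKS.items()})
optimizer = torch.optim.AdamW(list(encoder.parameters()) + list(heads.parameters()), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

def fake_batch(task, batch_size=8, seq_len=16):
    """Random stand-in data; real pre-finetuning draws from ~50 labeled datasets."""
    x = torch.randint(0, 1000, (batch_size, seq_len))
    y = torch.randint(0, TASKS[task], (batch_size,))
    return x, y

for step in range(100):
    optimizer.zero_grad()
    total_loss = 0.0
    # Accumulate losses from every task before stepping, so each update
    # reflects a mixture of tasks rather than a single dataset.
    for task in TASKS:
        x, y = fake_batch(task)
        logits = heads[task](encoder(x))
        total_loss = total_loss + loss_fn(logits, y)
    total_loss.backward()
    optimizer.step()

After this stage, the task heads would be discarded and the shared encoder fine-tuned on each downstream task as usual.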
doi:10.18653/v1/2021.emnlp-main.468