Language Models are Few-shot Multilingual Learners
Proceedings of the 1st Workshop on Multilingual Representation Learning
General-purpose language models have demonstrated impressive capabilities, performing on par with state-of-the-art approaches on a range of downstream natural language processing (NLP) tasks and benchmarks when inferring instructions from very few examples. Here, we evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages without any parameter updates. We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones.

doi:10.18653/v1/2021.mrl-1.1
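To make the in-context setup concrete, here is a minimal sketch (not code from the paper) of how a few-shot cross-lingual prompt might be assembled: English labeled examples are concatenated as demonstrations, followed by an unlabeled non-English query. The prompt template, the example sentences, and the label names are illustrative assumptions.

# Minimal sketch of few-shot in-context cross-lingual classification.
# Template, examples, and labels are illustrative assumptions, not the
# paper's actual prompts or datasets.

# A handful of English labeled examples serve as in-context demonstrations.
english_examples = [
    ("The movie was fantastic and moving.", "positive"),
    ("I regret buying this product.", "negative"),
    ("An absolutely delightful experience.", "positive"),
]

# Non-English test input (Indonesian for "This food is really delicious.").
test_sentence = "Makanan ini benar-benar enak."

def build_prompt(examples, query):
    """Concatenate labeled demonstrations, then the unlabeled query."""
    blocks = [f"Sentence: {text}\nLabel: {label}" for text, label in examples]
    blocks.append(f"Sentence: {query}\nLabel:")
    return "\n\n".join(blocks)

prompt = build_prompt(english_examples, test_sentence)
print(prompt)
# The prompt would be fed to a frozen pre-trained model (e.g. a GPT-style
# generator), and the generated continuation read off as the predicted
# class label; no parameter updates are involved.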