Investigating Post-pretraining Representation Alignment for Cross-Lingual Question Answering [article]

Fahim Faisal, Antonios Anastasopoulos
2021, arXiv pre-print
Human knowledge is collectively encoded in the roughly 6500 languages spoken around the world, but it is not distributed equally across languages. Hence, for information-seeking question answering (QA) systems to adequately serve speakers of all languages, they need to operate cross-lingually. In this work we investigate the capabilities of multilingually pre-trained language models on cross-lingual QA. We find that explicitly aligning the representations across languages with a post-hoc fine-tuning step generally leads to improved performance. We additionally investigate the effect of data size as well as the language choice in this fine-tuning step, also releasing a dataset for evaluating cross-lingual QA systems. Code and dataset are publicly available here:
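To make the idea of a post-hoc alignment fine-tuning step concrete, below is a minimal sketch in PyTorch. It assumes a multilingual encoder (mBERT here), a handful of toy parallel sentence pairs, mean pooling over token embeddings, and a mean-squared-error alignment loss; the paper's actual objective, data, and hyperparameters may differ, so treat this as an illustration of the general technique rather than the authors' implementation.

```python
# Sketch of post-hoc representation alignment on parallel sentences.
# Assumptions (not from the paper): mBERT encoder, mean pooling,
# MSE alignment loss, toy English-Spanish pairs.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # any multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Hypothetical parallel data: (source sentence, target-language sentence).
parallel_pairs = [
    ("Where is the library?", "¿Dónde está la biblioteca?"),
    ("The sky is blue.", "El cielo es azul."),
]

def mean_pool(last_hidden_state, attention_mask):
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(1) / mask.sum(1)

model.train()
for src, tgt in parallel_pairs:
    batch_src = tokenizer(src, return_tensors="pt")
    batch_tgt = tokenizer(tgt, return_tensors="pt")
    emb_src = mean_pool(model(**batch_src).last_hidden_state,
                        batch_src["attention_mask"])
    emb_tgt = mean_pool(model(**batch_tgt).last_hidden_state,
                        batch_tgt["attention_mask"])
    # Pull representations of parallel sentences together.
    loss = torch.nn.functional.mse_loss(emb_src, emb_tgt)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

After this alignment step, the encoder would typically be fine-tuned on the downstream QA task (e.g., English SQuAD-style data) and evaluated cross-lingually.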
arXiv:2109.12028v1