A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL. The file type is application/pdf.
RobBERT: a Dutch RoBERTa-based Language Model
2020
Findings of the Association for Computational Linguistics: EMNLP 2020
Pre-trained language models have been dominating the field of natural language processing in recent years, and have led to significant performance gains for various complex natural language tasks. One of the most prominent pre-trained language models is BERT, which was released as an English as well as a multilingual version. Although multilingual BERT performs well on many tasks, recent studies show that BERT models trained on a single language significantly outperform the multilingual version.
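Since the record describes a released pretrained checkpoint, a minimal sketch of querying such a RoBERTa-based Dutch masked language model with the Hugging Face transformers library is shown below. The hub identifier pdelobelle/robbert-v2-dutch-base is an assumption for illustration; check the authors' release for the actual checkpoint name.

```python
# Minimal sketch: fill a masked token in a Dutch sentence with a
# RoBERTa-style model via Hugging Face transformers.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "pdelobelle/robbert-v2-dutch-base"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Dutch sentence with one masked position ("There is a ___ in my garden.")
text = f"Er staat een {tokenizer.mask_token} in mijn tuin."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and take the highest-scoring vocabulary entry.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
top_id = logits[0, mask_pos].argmax().item()
print(tokenizer.decode([top_id]).strip())
```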
doi:10.18653/v1/2020.findings-emnlp.292
fatcat:4eby3tmflbcvvfnyqalmupcqqa