A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL.
The file type is application/pdf.
AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing
[article] · 2021 · arXiv pre-print
Transformer-based pretrained language models (T-PTLMs) have achieved great success in almost every NLP task. The evolution of these models started with GPT and BERT. These models are built on top of transformers, self-supervised learning, and transfer learning. T-PTLMs learn universal language representations from large volumes of text data using self-supervised learning and transfer this knowledge to downstream tasks. These models provide good background knowledge to downstream tasks, which avoids training downstream models from scratch.
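As a minimal sketch of the pretrain-then-fine-tune pattern the abstract describes, the snippet below loads a pretrained BERT checkpoint and adapts it to a downstream classification task using the Hugging Face transformers library; the model name, the two-label sentiment setup, and the example sentence are illustrative assumptions, not part of the survey itself.

```python
# Sketch of transfer learning with a T-PTLM (assumes `torch` and
# `transformers` are installed); details here are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load a T-PTLM (BERT) whose universal language representations were
# learned via self-supervised pretraining on large text corpora.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # new task head, randomly initialized for the downstream task
)

# Transfer the pretrained knowledge to a downstream task (a hypothetical
# two-class sentiment example) instead of training from scratch.
inputs = tokenizer("A wonderfully clear survey.", return_tensors="pt")
labels = torch.tensor([1])  # hypothetical label: 1 = positive

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # a fine-tuning optimizer step would follow
```

Only the small task head starts from random weights; the encoder reuses its pretrained parameters, which is what lets fine-tuning succeed with far less labeled data than training a downstream model from scratch.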
arXiv:2108.05542v2
fatcat:4uyj6uut65d37hfi7yss2fek6q