Neural Transfer Learning with Transformers for Social Science Text Analysis
[article] · 2021 · arXiv pre-print
In recent years, there have been substantial increases in the prediction performance of natural language processing models on text-based supervised learning tasks. Deep learning models that are based on the Transformer architecture (Vaswani et al., 2017) and are applied in a transfer learning setting have contributed especially to this development. As Transformer-based models for transfer learning have the potential to achieve higher prediction accuracies with relatively few training data instances …
arXiv:2102.02111v1
fatcat:5ulwuvuwlncdhc6uiwaghymmym
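The transfer-learning setting the abstract describes, taking a Transformer encoder pretrained on large unlabeled corpora and fine-tuning it on a small labeled dataset, can be illustrated with a minimal sketch using the Hugging Face `transformers` library. The model name, the hypothetical two-class label scheme, and the example sentences below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of Transformer-based transfer learning for text
# classification. Model name, labels, and texts are assumptions
# for illustration, not the paper's setup.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "distilbert-base-uncased"  # pretrained encoder (assumed choice)

# Tiny labeled set standing in for a social-science annotation task.
texts = [
    "The senator praised the new trade bill.",
    "Protesters clashed with police downtown.",
]
labels = torch.tensor([0, 1])  # e.g. 0 = policy, 1 = protest (hypothetical)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Loads pretrained weights and adds a fresh classification head on top.
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2
)

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = AdamW(model.parameters(), lr=2e-5)

# Fine-tune: a few gradient steps adapt the pretrained encoder and
# train the new head on the small labeled sample.
model.train()
for _ in range(3):
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Predict with the fine-tuned model.
model.eval()
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds.tolist())
```

The point of the pattern is that only the small classification head is trained from scratch; the encoder starts from pretrained weights, which is why comparatively few labeled instances can suffice.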