A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Semi-Supervised Sequence Modeling with Cross-View Training
2018
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from task-specific labeled data during the main training phase. We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
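The idea in the abstract can be illustrated with a minimal sketch: on labeled batches the model trains normally, while on unlabeled batches auxiliary prediction heads that only see restricted views of the Bi-LSTM states (e.g., forward-only or backward-only) are trained to match the full-view head's predictions, which improves the shared encoder. This is a hypothetical PyTorch sketch, not the paper's released code; all names (CVTTagger, aux_fwd, aux_bwd) and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes only (assumptions, not from the paper).
VOCAB, EMB, HID, NUM_CLASSES = 1000, 64, 128, 5

class CVTTagger(nn.Module):
    """Bi-LSTM encoder with a primary (full-view) head and two
    auxiliary heads that each see a restricted view of the states."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.bilstm = nn.LSTM(EMB, HID, batch_first=True, bidirectional=True)
        self.primary = nn.Linear(2 * HID, NUM_CLASSES)  # sees fwd + bwd states
        self.aux_fwd = nn.Linear(HID, NUM_CLASSES)      # forward states only
        self.aux_bwd = nn.Linear(HID, NUM_CLASSES)      # backward states only

    def forward(self, tokens):
        h, _ = self.bilstm(self.embed(tokens))          # (B, T, 2*HID)
        fwd, bwd = h[..., :HID], h[..., HID:]
        return self.primary(h), self.aux_fwd(fwd), self.aux_bwd(bwd)

model = CVTTagger()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def supervised_step(tokens, gold_tags):
    # Labeled batch: ordinary cross-entropy on the primary head.
    logits, _, _ = model(tokens)
    loss = F.cross_entropy(logits.reshape(-1, NUM_CLASSES),
                           gold_tags.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

def cvt_step(tokens):
    # Unlabeled batch: auxiliary (restricted-view) heads are trained to
    # match the primary head's soft predictions, which are treated as
    # fixed targets; gradients flow into the shared encoder via the aux heads.
    logits, aux_f, aux_b = model(tokens)
    target = F.softmax(logits, dim=-1).detach()
    loss = sum(F.kl_div(F.log_softmax(a, dim=-1), target, reduction="batchmean")
               for a in (aux_f, aux_b))
    opt.zero_grad(); loss.backward(); opt.step()
```

In the paper, training alternates between batches of labeled and unlabeled data; the sketch above mirrors that by exposing one step function for each case.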
doi:10.18653/v1/d18-1217
dblp:conf/emnlp/ClarkLML18
fatcat:s3ghgvb2brcbbiawfrf3xddvzi