Quasi-Multitask Learning: an Efficient Surrogate for Obtaining Model Ensembles
2020
Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing
We propose quasi-multitask learning (Q-MTL), a simple, easy-to-implement modification of standard multitask learning in which the tasks to be modeled are identical. With this small change to a standard neural classifier, we obtain benefits similar to those of an ensemble of classifiers at a fraction of the required resources. We illustrate this through a series of sequence labeling experiments over a diverse set of languages, showing that applying Q-MTL consistently increases performance.
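To make the idea concrete, a minimal PyTorch-style sketch of one plausible Q-MTL setup for sequence labeling is given below: a shared encoder feeds several identical output heads, each head is trained on the same labels, and inference averages the heads' distributions like a lightweight ensemble. All names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class QMTLTagger(nn.Module):
    """Hypothetical Q-MTL tagger: one shared encoder, k identical heads.

    The "quasi" part of Q-MTL is that every head solves the *same* task,
    so the heads differ only in their random initialization.
    """
    def __init__(self, vocab_size, num_labels, hidden=128, num_heads=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        # k identical classification heads over the shared representation
        self.heads = nn.ModuleList(
            nn.Linear(2 * hidden, num_labels) for _ in range(num_heads)
        )

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        # One (batch, seq_len, num_labels) logit tensor per head
        return [head(states) for head in self.heads]

def qmtl_loss(logits_per_head, gold_labels):
    # Every head receives the same supervision; head losses are summed.
    ce = nn.CrossEntropyLoss()
    return sum(ce(logits.flatten(0, 1), gold_labels.flatten())
               for logits in logits_per_head)

def predict(model, token_ids):
    # Ensemble-like inference: average the heads' output distributions,
    # paying for only one encoder forward pass.
    probs = [logits.softmax(-1) for logits in model(token_ids)]
    return torch.stack(probs).mean(0).argmax(-1)
```

Compared with a true ensemble of k independently trained networks, this sketch shares the embedding and encoder parameters across heads, which is where the resource savings in the abstract would come from.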
doi:10.18653/v1/2020.sustainlp-1.13
fatcat:s5v75la2wbbitnduldsjlxjtra