Transductive Auxiliary Task Self-Training for Neural Multi-Task Models

Johannes Bjerva, Katharina Kann, Isabelle Augenstein
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), 2019
Multi-task learning and self-training are two common ways to improve a machine learning model's performance in settings with limited training data. Drawing heavily on ideas from those two approaches, we suggest transductive auxiliary task self-training: training a multi-task model on (i) a combination of main and auxiliary task training data, and (ii) test instances with auxiliary task labels which a single-task version of the model has previously generated. We perform extensive experiments on 86 combinations of languages and tasks. Our results show that, on average, transductive auxiliary task self-training improves absolute accuracy by up to 9.56% over the pure multi-task model for dependency relation tagging and by up to 13.03% for semantic tagging.
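
As a brief illustration of the procedure summarized above, the following Python sketch spells out the three steps; train_fn and the model's predict method are hypothetical stand-ins for whatever training and inference routines the underlying tagger exposes, not the authors' implementation.

def transductive_aux_self_training(
    train_fn,     # hypothetical: trains a model on a {task_name: labelled_data} dict
    main_train,   # main-task training pairs, e.g. (tokens, dependency-relation tags)
    aux_train,    # auxiliary-task training pairs, e.g. (tokens, semantic tags)
    test_inputs,  # unlabelled test instances (tokens only)
):
    # (1) Train a single-task model on the auxiliary task alone.
    aux_model = train_fn({"aux": aux_train})

    # (2) Self-training step: label the test instances with that model's predictions.
    pseudo_labelled = [(x, aux_model.predict(x)) for x in test_inputs]

    # (3) Transductive step: train the multi-task model on the main-task data and
    #     on auxiliary-task data augmented with the pseudo-labelled test instances.
    return train_fn({"main": main_train, "aux": aux_train + pseudo_labelled})

The returned multi-task model is then evaluated on the main task over the same test instances, which is what makes the setup transductive: the test inputs (though not their main-task labels) are seen during training.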
doi:10.18653/v1/d19-6128 dblp:conf/acl-deeplo/BjervaKA19