Sequence to sequence learning has recently emerged as a new paradigm in supervised learning. To date, most of its applications have focused on only one task, and little work has explored this framework for multiple tasks. This paper examines three multi-task learning (MTL) settings for sequence to sequence models: (a) the one-to-many setting, where the encoder is shared between several tasks such as machine translation and syntactic parsing; (b) the many-to-one setting, useful when only the decoder can be shared.
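To make the one-to-many setting concrete, here is a minimal sketch (not the paper's implementation) of the parameter-sharing pattern it describes: a single encoder produces one representation that feeds several task-specific decoders. All class and variable names, and the toy mean-embedding encoder, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class Encoder:
    """Shared encoder: maps a token-id sequence to a fixed-size state.

    A real seq2seq encoder would be recurrent; a mean of embeddings is
    used here only to keep the sharing pattern visible.
    """
    def __init__(self, vocab_size, dim):
        self.emb = rng.normal(size=(vocab_size, dim))

    def encode(self, token_ids):
        return self.emb[token_ids].mean(axis=0)

class Decoder:
    """Task-specific head: projects the shared state to task logits."""
    def __init__(self, dim, out_size):
        self.W = rng.normal(size=(dim, out_size))

    def decode(self, state):
        return state @ self.W

# One-to-many: one encoder, several task decoders (hypothetical sizes).
enc = Encoder(vocab_size=100, dim=8)
dec_translate = Decoder(dim=8, out_size=100)  # e.g. target-language vocabulary
dec_parse = Decoder(dim=8, out_size=20)       # e.g. parse-action inventory

state = enc.encode([3, 14, 15])               # the same state serves both tasks
translation_logits = dec_translate.decode(state)
parse_logits = dec_parse.decode(state)
print(translation_logits.shape, parse_logits.shape)
```

The many-to-one setting inverts this picture: several encoders would map their inputs into a common state space consumed by one shared decoder.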