Inducing Transformer's Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks

Yichen Jiang, Mohit Bansal
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
Systematic compositionality is an essential mechanism in human language, allowing the recombination of known parts to create novel expressions. However, existing neural models have been shown to lack this basic ability in learning symbolic structures. Motivated by the failure of a Transformer model on the SCAN compositionality challenge (Lake and Baroni, 2018), which requires parsing a command into actions, we propose two auxiliary sequence prediction tasks as additional training supervision. These
automatically-generated sequences are more representative of the underlying compositional symbolic structures of the input data. During inference, the model jointly predicts the next action and the next tokens in the auxiliary sequences at each step. Experiments on the SCAN dataset show that our method encourages the Transformer to understand compositional structures of the command, improving its accuracy on multiple challenging splits from ≤ 10% to 100%. With only 418 (5%) training instances, our approach still achieves 97.8% accuracy on the MCD1 split. Therefore, we argue that compositionality can be induced in Transformers given minimal but proper guidance. We also show that a better result is achieved using less contextualized vectors as the attention's query, providing insights into architecture choices in achieving systematic compositionality. Finally, we show positive generalization results on the grounded-SCAN task (Ruis et al., 2020).
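To make the joint-prediction setup concrete, below is a minimal sketch of a per-step multi-task loss that sums a next-action cross-entropy with cross-entropies for the auxiliary sequence heads. This is an illustration under stated assumptions, not the paper's implementation: the function names and the single uniform `aux_weight` are hypothetical, and the actual model shares a Transformer decoder across the heads rather than taking precomputed logits.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, target):
    # Negative log-likelihood of the target class under the softmax.
    return -np.log(softmax(logits)[target] + 1e-12)

def joint_step_loss(action_logits, aux_logits_list, action_target,
                    aux_targets, aux_weight=1.0):
    """Per-step training loss: next-action prediction plus auxiliary
    next-token predictions (uniform aux_weight is an assumption)."""
    loss = cross_entropy(action_logits, action_target)
    for logits, tgt in zip(aux_logits_list, aux_targets):
        loss += aux_weight * cross_entropy(logits, tgt)
    return loss
```

At inference time only the action head's argmax is emitted; the auxiliary heads serve purely as extra training signal that steers the shared representations toward the command's compositional structure.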
doi:10.18653/v1/2021.emnlp-main.505