A copy of this work was available on the public web and has been preserved in the Wayback Machine (captured 2021). File type: application/pdf.
SHAPE: Shifted Absolute Position Embedding for Transformers
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
release stage: unpublished
Position representation is crucial for building position-aware representations in Transformers. Existing position representations suffer from a lack of generalization to test data with unseen lengths or high computational cost. We investigate shifted absolute position embedding (SHAPE) to address both issues. The basic idea of SHAPE is to achieve shift invariance, which is a key property of recent successful position representations, by randomly shifting absolute positions during training. We …
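As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below adds a random per-sequence offset to absolute positions during training before computing standard sinusoidal encodings. The function names, the `max_offset` parameter, and the PyTorch framing are assumptions made for illustration only.

```python
import math
import torch


def sinusoidal_encoding(positions: torch.Tensor, d_model: int) -> torch.Tensor:
    """Sinusoidal absolute position encoding for integer positions of shape (batch, seq_len).

    Note: sines and cosines are concatenated rather than interleaved, a common variant.
    """
    inv_freq = torch.exp(
        -math.log(10000.0) * torch.arange(0, d_model, 2, device=positions.device) / d_model
    )
    angles = positions.unsqueeze(-1).float() * inv_freq  # (batch, seq_len, d_model // 2)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)


def shifted_positions(batch_size: int, seq_len: int, max_offset: int, training: bool) -> torch.Tensor:
    """Absolute positions 0..seq_len-1, shifted by a random per-sequence offset at training time."""
    positions = torch.arange(seq_len).unsqueeze(0).expand(batch_size, -1)
    if training:
        # One random offset per sequence, shared by all of its tokens,
        # so relative distances between tokens are left unchanged.
        offsets = torch.randint(0, max_offset + 1, (batch_size, 1))
        positions = positions + offsets
    return positions


# Usage sketch: shift positions, encode them, and add the result to token embeddings.
pos = shifted_positions(batch_size=32, seq_len=128, max_offset=100, training=True)
pe = sinusoidal_encoding(pos, d_model=512)  # (32, 128, 512)
```

Because every token in a sequence receives the same offset, relative distances are preserved while the absolute values vary across training examples, which is the intuition behind encouraging shift invariance without modifying the attention mechanism itself.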
doi:10.18653/v1/2021.emnlp-main.266
fatcat:fnwua4n6cjggjmtxjkptxgxpoq