PermuteFormer: Efficient Relative Position Encoding for Long Sequences

Peng Chen
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021)
A recent variation of Transformer, Performer, scales Transformer to longer sequences with a linear attention mechanism. However, it is not compatible with relative position encoding, which has advantages over absolute position encoding. In this paper, we discuss possible ways to add relative position encoding to Performer. Based on the analysis, we propose PermuteFormer, a Performer-based model with relative position encoding that scales linearly on long sequences. PermuteFormer applies a position-dependent transformation on queries and keys to encode positional information into the attention module. This transformation is carefully crafted so that the final output of self-attention is not affected by the absolute positions of tokens. PermuteFormer introduces negligible computational overhead by design, so it runs as fast as Performer. We evaluate PermuteFormer on Long-Range Arena, a dataset for long sequences, as well as WikiText-103, a language modeling dataset. The experiments show that PermuteFormer uniformly improves the performance of Performer with almost no computational overhead and outperforms vanilla Transformer on most of the tasks.
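The following is a minimal NumPy sketch of the idea described in the abstract, not the authors' implementation: a position-dependent permutation is applied to the feature dimension of the queries' and keys' features, so that query-key dot products, and therefore the attention output, depend only on relative positions. The names feature_map, base_perm, and permute are illustrative, and the feature map is a simplified stand-in for Performer's positive random features.

import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, d_feat = 6, 8, 16

def feature_map(x, proj):
    # Simplified positive random-feature map in the spirit of Performer (illustrative only).
    return np.exp(x @ proj - np.sum(x**2, axis=-1, keepdims=True) / 2)

proj = rng.normal(size=(d_model, d_feat)) / np.sqrt(d_model)
q = rng.normal(size=(seq_len, d_model))
k = rng.normal(size=(seq_len, d_model))
v = rng.normal(size=(seq_len, d_model))

phi_q, phi_k = feature_map(q, proj), feature_map(k, proj)

# One fixed permutation of the feature dimension; the token at position i uses its i-th power.
base_perm = rng.permutation(d_feat)

def permute(features, steps):
    # Apply the base permutation `steps` times (a real implementation would
    # precompute the composed index instead of looping).
    idx = np.arange(d_feat)
    for _ in range(steps):
        idx = base_perm[idx]
    return features[idx]

phi_q = np.stack([permute(phi_q[i], i) for i in range(seq_len)])
phi_k = np.stack([permute(phi_k[j], j) for j in range(seq_len)])

# Non-causal linear attention: time scales linearly with sequence length.
kv = phi_k.T @ v                       # (d_feat, d_model)
z = phi_k.sum(axis=0)                  # (d_feat,)
out = (phi_q @ kv) / (phi_q @ z)[:, None]

# Because permutations compose, the dot product of the permuted phi_q[i] and
# phi_k[j] depends only on j - i, so `out` is unchanged if every position is
# shifted by the same offset, i.e. absolute positions do not affect the output.

Shifting all positions by a constant (using i + c and j + c in the two permute calls) leaves out numerically identical, which is the relative-position property the abstract refers to.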
doi:10.18653/v1/2021.emnlp-main.828