Linearized Relative Positional Encoding

Zhen Qin, Weixuan Sun, Kaiyue Lu, Hui Deng, Dongxu Li, Xiaodong Han, Yuchao Dai, Lingpeng Kong, Yiran Zhong

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Relative positional encoding is widely used in vanilla and linear transformers to represent positional information. However, existing encoding methods of a vanilla transformer are not always directly applicable to a linear transformer, because the latter requires a decomposition of the query and key representations into separate kernel functions. Nevertheless, principles for designing encoding methods suitable for linear transformers remain understudied. In this work, we put together a variety of existing linear relative positional encoding approaches under a canonical form and further propose a family of linear relative positional encoding algorithms via unitary transformation. Our formulation leads to a principled framework that can be used to develop new relative positional encoding methods that preserve linear space-time complexity. Equipped with different models, the proposed linearized relative positional encoding (LRPE) family derives effective encoding for various applications. Experiments show that compared with existing methods, LRPE achieves state-of-the-art performance in language modeling, text classification, and image classification. Meanwhile, it highlights a general paradigm for designing a broader range of relative positional encoding methods that are applicable to linear transformers.
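To make the abstract's core idea concrete, below is a minimal illustrative sketch (not the paper's reference implementation) of how a unitary, rotation-style relative positional encoding can be applied to kernelized queries and keys inside linear attention while keeping linear space-time complexity. The rotary-style transform is assumed here as one instance of the LRPE family; the feature map (elu + 1), function names, and shapes are illustrative assumptions.

```python
# Illustrative sketch: unitary (rotation-based) relative positional encoding
# inside kernelized linear attention. Names, shapes, and the feature map are
# assumptions for demonstration, not the paper's official implementation.
import numpy as np

def rotary_encode(x, base=10000.0):
    """Apply a position-dependent rotation (unitary transform) to x of shape (seq, dim)."""
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)           # (half,)
    angles = np.arange(seq)[:, None] * freqs[None, :]   # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Rotating paired coordinates makes inner products between positions depend
    # only on their relative offset, which is the property LRPE formalizes.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def linear_attention_with_lrpe(q, k, v):
    """Non-causal linear attention with a unitary relative encoding.

    q, k: (seq, dim); v: (seq, dim_v). Feature map phi(x) = elu(x) + 1 (a common choice).
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1
    q_f, k_f = rotary_encode(phi(q)), rotary_encode(phi(k))
    # Associativity trick: compute K^T V once, so cost is O(seq * dim * dim_v),
    # i.e. linear in sequence length rather than quadratic.
    kv = k_f.T @ v                        # (dim, dim_v)
    z = q_f @ k_f.sum(axis=0)             # (seq,) normalizer
    # Guard against tiny/negative normalizers in this toy, un-tuned sketch.
    return (q_f @ kv) / np.maximum(z, 1e-6)[:, None]

# Toy usage
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
out = linear_attention_with_lrpe(q, k, v)
print(out.shape)  # (8, 4)
```

The point of the sketch is the order of operations: the unitary transform is applied after the kernel feature map, so the query-key interaction encodes relative position while the attention itself is still computed via the (K^T V) factorization that keeps complexity linear in sequence length.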

Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2023
State: Published - 1 Sep 2023
Externally published: Yes
