Abstract
Transformers have emerged as a powerful tool for a broad range of natural
language processing tasks. A key component that drives the impressive
performance of Transformers is the self-attention mechanism that encodes the
influence or dependence of other tokens on each specific token. While
beneficial, the quadratic complexity of self-attention on the input sequence
length has limited its application to longer sequences -- a topic being
actively studied in the community. To address this limitation, we propose
Nyströmformer -- a model that exhibits favorable scalability as a function
of sequence length. Our idea is based on adapting the Nyström method to
approximate standard self-attention with $O(n)$ complexity. The scalability of
Nyströmformer enables application to longer sequences with thousands of
tokens. We perform evaluations on multiple downstream tasks on the GLUE
benchmark and IMDB reviews with standard sequence length, and find that our
Nyströmformer performs comparably to, or in a few cases even slightly better
than, standard self-attention. On longer sequence tasks in the Long Range Arena
(LRA) benchmark, Nyströmformer performs favorably relative to other
efficient self-attention methods. Our code is available at
https://github.com/mlpen/Nystromformer.
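
To make the core idea concrete, below is a minimal sketch of Nyström-approximated softmax attention. It is an illustration under assumptions rather than the paper's exact implementation: landmarks are taken as segment means of the queries and keys, the pseudoinverse is computed with NumPy's pinv (the released code may use an iterative approximation), and batch/multi-head dimensions and auxiliary components are omitted.

```python
# Minimal sketch of Nystrom-approximated softmax attention (NumPy).
# Landmark selection via segment means and the use of np.linalg.pinv are
# simplifying assumptions for illustration; see the repository for the
# authors' actual implementation.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, num_landmarks=8):
    """Approximate softmax(Q K^T / sqrt(d)) V with cost linear in n."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    m = num_landmarks

    # Landmarks: segment means of queries and keys (assumes n % m == 0).
    Q_tilde = Q.reshape(m, n // m, d).mean(axis=1)   # (m, d)
    K_tilde = K.reshape(m, n // m, d).mean(axis=1)   # (m, d)

    F = softmax(Q @ K_tilde.T * scale)               # (n, m)
    A = softmax(Q_tilde @ K_tilde.T * scale)         # (m, m)
    B = softmax(Q_tilde @ K.T * scale)               # (m, n)

    # Nystrom reconstruction: F A^+ (B V).
    return F @ (np.linalg.pinv(A) @ (B @ V))

# Usage: compare against exact attention on a small random example.
rng = np.random.default_rng(0)
n, d = 64, 16
Q, K, V = rng.normal(size=(3, n, d))
approx = nystrom_attention(Q, K, V, num_landmarks=8)
exact = softmax(Q @ K.T / np.sqrt(d)) @ V
print(np.abs(approx - exact).mean())
```

The key point the sketch illustrates is that no n-by-n matrix is ever formed: the three softmax factors are n-by-m, m-by-m, and m-by-n, so for a fixed number of landmarks m the cost grows linearly with the sequence length n.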