@misc{ristea2022septr,
abstract = {Following the successful application of vision transformers in multiple
computer vision tasks, these models have drawn the attention of the signal
processing community. This is because signals are often represented as
spectrograms (e.g. through Discrete Fourier Transform) which can be directly
provided as input to vision transformers. However, naively applying
transformers to spectrograms is suboptimal. Since the axes represent distinct
dimensions, i.e. frequency and time, we argue that a better approach is to
separate the attention dedicated to each axis. To this end, we propose the
Separable Transformer (SepTr), an architecture that employs two transformer
blocks in a sequential manner, the first attending to tokens within the same
time interval, and the second attending to tokens within the same frequency
bin. We conduct experiments on three benchmark data sets, showing that our
separable architecture outperforms conventional vision transformers and other
state-of-the-art methods. Unlike standard transformers, SepTr linearly scales
the number of trainable parameters with the input size, thus having a lower
memory footprint. Our code is available as open source at
https://github.com/ristea/septr.},
added-at = {2022-07-11T20:01:22.000+0200},
author = {Ristea, Nicolae-Catalin and Ionescu, Radu Tudor and Khan, Fahad Shahbaz},
biburl = {https://www.bibsonomy.org/bibtex/2124825765eca6c389cb3c0fe78ff73a4/simonh},
description = {[2203.09581] SepTr: Separable Transformer for Audio Spectrogram Processing},
interhash = {d7dc77d250ee0403738887385f1e4699},
intrahash = {124825765eca6c389cb3c0fe78ff73a4},
keywords = {},
  note = {cite arxiv:2203.09581. Comment: Accepted at INTERSPEECH 2022},
timestamp = {2022-07-12T10:09:13.000+0200},
title = {SepTr: Separable Transformer for Audio Spectrogram Processing},
url = {http://arxiv.org/abs/2203.09581},
year = 2022
}
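The abstract describes SepTr's core idea: two sequential attention stages over a spectrogram, the first attending across frequency within each time interval (vertical), the second attending across time within each frequency bin (horizontal). The following is a minimal NumPy sketch of that separable-attention idea only, not the authors' implementation (the paper's transformer blocks include learned projections, multi-head attention, and MLP layers); all function names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x):
    # Single-head scaled dot-product self-attention over the
    # second-to-last axis; x has shape (..., n_tokens, d).
    d = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(d)
    return softmax(scores) @ x

def separable_attention(spec_tokens):
    # spec_tokens: (freq_bins, time_steps, d) token grid from a spectrogram.
    # Stage 1 (vertical): attend across frequency within each time interval.
    y = attention(np.swapaxes(spec_tokens, 0, 1))   # (time, freq, d)
    y = np.swapaxes(y, 0, 1)                        # back to (freq, time, d)
    # Stage 2 (horizontal): attend across time within each frequency bin.
    return attention(y)                             # (freq, time, d)

out = separable_attention(np.random.rand(4, 5, 8))
print(out.shape)  # (4, 5, 8)
```

Because each stage attends only along one axis, the attention matrices have size `time × time` or `freq × freq` rather than `(time·freq) × (time·freq)`, which is the source of the lower memory footprint claimed in the abstract.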