
Linear Transformers Are Secretly Fast Weight Programmers.

Imanol Schlag, Kazuki Irie, and Jürgen Schmidhuber. ICML, volume 139 of Proceedings of Machine Learning Research, pages 9355-9366. PMLR, (2021)

Other publications of authors with the same name

Training and Generating Neural Networks in Compressed Weight Space. CoRR, (2021)
RWTH ASR Systems for LibriSpeech: Hybrid vs Attention. INTERSPEECH, pages 231-235. ISCA, (2019)
Improving Baselines in the Wild. CoRR, (2021)
SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention. CoRR, (2023)
Images as Weight Matrices: Sequential Image Generation Through Synaptic Learning Rules. ICLR, OpenReview.net, (2023)
The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization. ICLR, OpenReview.net, (2022)
Training Language Models for Long-Span Cross-Sentence Evaluation. ASRU, pages 419-426. IEEE, (2019)
A Comparison of Transformer and LSTM Encoder Decoder Models for ASR. ASRU, pages 8-15. IEEE, (2019)
On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition. INTERSPEECH, pages 3800-3804. ISCA, (2019)
Unsupervised Learning of Temporal Abstractions With Slot-Based Transformers. Neural Comput., 35 (4): 593-626 (2023)