Approximating Two-Layer Feedforward Networks for Efficient Transformers.

, , and . EMNLP (Findings), page 674-692. Association for Computational Linguistics, (2023)

Other publications of authors with the same name

RWTH ASR Systems for LibriSpeech: Hybrid vs Attention., , , , , , , and . INTERSPEECH, page 231-235. ISCA, (2019)

The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization., , and . ICLR, OpenReview.net, (2022)

Training Language Models for Long-Span Cross-Sentence Evaluation., , , and . ASRU, page 419-426. IEEE, (2019)

A Comparison of Transformer and LSTM Encoder Decoder Models for ASR., , , , and . ASRU, page 8-15. IEEE, (2019)

Images as Weight Matrices: Sequential Image Generation Through Synaptic Learning Rules., and . ICLR, OpenReview.net, (2023)

Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers., , , and . CoRR, (2024)

Exploring the Promise and Limits of Real-Time Recurrent Learning., , and . ICLR, OpenReview.net, (2024)

Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules., , and . NeurIPS, (2022)

Linear Transformers Are Secretly Fast Weight Programmers., , and . ICML, volume 139 of Proceedings of Machine Learning Research, page 9355-9366. PMLR, (2021)

The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention., , and . ICML, volume 162 of Proceedings of Machine Learning Research, page 9639-9659. PMLR, (2022)