
Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks.

Róbert Csordás, Sjoerd van Steenkiste, and Jürgen Schmidhuber. ICLR, OpenReview.net, (2021)


Other publications of authors with the same name

A Generalist Neural Algorithmic Learner., , , , , , , , , and 5 other author(s). LoG, volume 198 of Proceedings of Machine Learning Research, page 2. PMLR, (2022)

The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization. Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. ICLR, OpenReview.net, (2022)

Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control. Róbert Csordás and Jürgen Schmidhuber. ICLR (Poster), OpenReview.net, (2019)

A Modern Self-Referential Weight Matrix That Learns to Modify Itself. Kazuki Irie, Imanol Schlag, Róbert Csordás, and Jürgen Schmidhuber. ICML, volume 162 of Proceedings of Machine Learning Research, page 9660-9677. PMLR, (2022)

The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention. Kazuki Irie, Róbert Csordás, and Jürgen Schmidhuber. ICML, volume 162 of Proceedings of Machine Learning Research, page 9639-9659. PMLR, (2022)

Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions. Kazuki Irie, Róbert Csordás, and Jürgen Schmidhuber. EMNLP, page 9455-9465. Association for Computational Linguistics, (2023)

MoEUT: Mixture-of-Experts Universal Transformers. Róbert Csordás, Kazuki Irie, Jürgen Schmidhuber, Christopher Potts, and Christopher D. Manning. CoRR, (2024)

Topological Neural Discrete Representation Learning à la Kohonen. Kazuki Irie, Róbert Csordás, and Jürgen Schmidhuber. CoRR, (2023)

Going Beyond Linear Transformers with Recurrent Fast Weight Programmers. Kazuki Irie, Imanol Schlag, Róbert Csordás, and Jürgen Schmidhuber. NeurIPS, page 7703-7717. (2021)

The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers. Róbert Csordás, Kazuki Irie, and Jürgen Schmidhuber. EMNLP (1), page 619-634. Association for Computational Linguistics, (2021)