Author of the publication

On the Ability and Limitations of Transformers to Recognize Formal Languages.

Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. EMNLP (1), pages 7096-7116. Association for Computational Linguistics, (2020)

Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication will be displayed. You can also use the button next to the name to display publications already assigned to that person.

No persons found for author name Bhattamishra, Satwik
add a person with the name Bhattamishra, Satwik

Other publications of authors with the same name

On the Computational Power of Transformers and Its Implications in Sequence Modeling. CoNLL, pages 455-475. Association for Computational Linguistics, (2020)
Unsung Challenges of Building and Deploying Language Technologies for Low Resource Language Communities. CoRR, (2019)
Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation. NAACL-HLT (1), pages 3609-3619. Association for Computational Linguistics, (2019)
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions. ICLR, OpenReview.net, (2024)
On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages. COLING, pages 1481-1494. International Committee on Computational Linguistics, (2020)
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions. ACL (1), pages 5767-5791. Association for Computational Linguistics, (2023)
Separations in the Representational Capabilities of Transformers and Recurrent Architectures. CoRR, (2024)
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions. CoRR, (2023)
Revisiting the Compositional Generalization Abilities of Neural Sequence Models. ACL (2), pages 424-434. Association for Computational Linguistics, (2022)