Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation.

NAACL-HLT (1), pp. 3609-3619. Association for Computational Linguistics, (2019)


Other publications by persons with the same name (Bhattamishra, Satwik)

A Formal Framework for Understanding Length Generalization in Transformers. CoRR, (2024)

On the Computational Power of Transformers and Its Implications in Sequence Modeling. CoNLL, pp. 455-475. Association for Computational Linguistics, (2020)

A Formal Framework for Understanding Length Generalization in Transformers. ICLR, OpenReview.net, (2025)

Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation. NAACL-HLT (1), pp. 3609-3619. Association for Computational Linguistics, (2019)

Unsung Challenges of Building and Deploying Language Technologies for Low Resource Language Communities. CoRR, (2019)

Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions. ICLR, OpenReview.net, (2024)

Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions. CoRR, (2023)

Are NLP Models really able to Solve Simple Math Word Problems? NAACL-HLT, pp. 2080-2094. Association for Computational Linguistics, (2021)

Revisiting the Compositional Generalization Abilities of Neural Sequence Models. ACL (2), pp. 424-434. Association for Computational Linguistics, (2022)

MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations. EMNLP, pp. 2167-2189. Association for Computational Linguistics, (2023)