
Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation.

NAACL-HLT (1), pp. 3609-3619. Association for Computational Linguistics, (2019)


Other publications by persons with the same name (Bhattamishra, Satwik)

On the Computational Power of Transformers and Its Implications in Sequence Modeling. CoNLL, pp. 455-475. Association for Computational Linguistics, (2020)
A Formal Framework for Understanding Length Generalization in Transformers. CoRR, (2024)
Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation. NAACL-HLT (1), pp. 3609-3619. Association for Computational Linguistics, (2019)
Unsung Challenges of Building and Deploying Language Technologies for Low Resource Language Communities. CoRR, (2019)
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions. ICLR, OpenReview.net, (2024)
On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages. COLING, pp. 1481-1494. International Committee on Computational Linguistics, (2020)
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions. ACL (1), pp. 5767-5791. Association for Computational Linguistics, (2023)
Separations in the Representational Capabilities of Transformers and Recurrent Architectures. CoRR, (2024)
On the Ability and Limitations of Transformers to Recognize Formal Languages. EMNLP (1), pp. 7096-7116. Association for Computational Linguistics, (2020)
Inshrinkerator: Compressing Deep Learning Training Checkpoints via Dynamic Quantization. SoCC, pp. 1012-1031. ACM, (2024)