
To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed.

 

Other publications by persons with the same name

When Do You Need Billions of Words of Pretraining Data?, , , and . CoRR, (2020)

Pretraining Language Models with Human Preferences., , , , , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, pp. 17506-17533. PMLR, (2023)

What Do NLP Researchers Believe? Results of the NLP Community Metasurvey., , , , , , , , , and 1 other author(s). ACL (1), pp. 16334-16368. Association for Computational Linguistics, (2023)

Instruction Induction: From Few Examples to Natural Language Task Descriptions., , , and . ACL (1), pp. 1935-1952. Association for Computational Linguistics, (2023)

Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling., , , , , , , , , and 6 other author(s). ACL (1), pp. 4465-4476. Association for Computational Linguistics, (2019)

Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks., , and . CoRR, (2018)

The Capacity for Moral Self-Correction in Large Language Models., , , , , , , , , and 39 other author(s). CoRR, (2023)

Probing What Different NLP Tasks Teach Machines about Function Word Comprehension., , , , , , , , , and 2 other author(s). CoRR, (2019)

What Do NLP Researchers Believe? Results of the NLP Community Metasurvey., , , , , , , , , and 1 other author(s). CoRR, (2022)

Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data., , and . Insights, pp. 82-87. Association for Computational Linguistics, (2020)