

Other publications by persons with the same name

- Prometheus: Inducing Fine-grained Evaluation Capability in Language Models., , , , , , , , , and 1 other author. CoRR, (2023)
- How Well Do Large Language Models Truly Ground?, , , , , , and . CoRR, (2023)
- Fixed Input Parameterization for Efficient Prompting., , , , and . ACL (Findings), pp. 8428-8441. Association for Computational Linguistics, (2023)
- The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models., , , , , , , , , and 22 other authors. CoRR, (2024)
- Prometheus: Inducing Fine-Grained Evaluation Capability in Language Models., , , , , , , , , and 1 other author. ICLR, OpenReview.net, (2024)
- Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2., , , , , , , , , and 1 other author. CoRR, (2023)
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging., , , , , , , , and . CoRR, (2023)
- Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts., , and . TL4NLP, volume 203 of Proceedings of Machine Learning Research, pp. 52-62. PMLR, (2022)
- Knowledge Unlearning for Mitigating Privacy Risks in Language Models., , , , , , and . ACL (1), pp. 14389-14408. Association for Computational Linguistics, (2023)
- Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners., , , , and . ICLR, OpenReview.net, (2023)