How Well Do Large Language Models Truly Ground?

, , , , , , and . NAACL-HLT, pages 2437-2465. Association for Computational Linguistics, (2024)


Other publications of authors with the same name

How Well Do Large Language Models Truly Ground?, , , , , , and . CoRR, (2023)

The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models., , , , , , , , , and 22 other author(s). CoRR, (2024)

CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification., , , , and . EACL (System Demonstrations), pages 195-208. Association for Computational Linguistics, (2023)

Mind the Gap! Injecting Commonsense Knowledge for Abstractive Dialogue Summarization., , , , , and . COLING, pages 6285-6300. International Committee on Computational Linguistics, (2022)

The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning., , , , , , and . EMNLP, pages 12685-12708. Association for Computational Linguistics, (2023)

Semiparametric Token-Sequence Co-Supervision., , , , , , and . ACL (1), pages 3864-3882. Association for Computational Linguistics, (2024)