How Many Demonstrations Do You Need for In-context Learning?

EMNLP (Findings), pages 11149-11159. Association for Computational Linguistics, 2023.

Other publications of authors with the same name

How Many Demonstrations Do You Need for In-context Learning? EMNLP (Findings), pages 11149-11159. Association for Computational Linguistics, 2023.

Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement. CoRR, 2024.

OPTune: Efficient Online Preference Tuning. CoRR, 2024.

Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness. ACL (1), pages 5186-5200. Association for Computational Linguistics, 2024.

Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning. CoRR, 2023.

Why Propagate Alone? Parallel Use of Labels and Features on Graphs. ICLR, OpenReview.net, 2022.

Automated Data Curation for Robust Language Model Fine-Tuning. CoRR, 2024.

Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning. ACL (Findings), pages 16189-16211. Association for Computational Linguistics, 2024.

GOAT: A Global Transformer on Large-scale Graphs. ICML, volume 202 of Proceedings of Machine Learning Research, pages 17375-17390. PMLR, 2023.

PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer. EMNLP, pages 13512-13525. Association for Computational Linguistics, 2023.