Author of the publication

Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed. You can also use the button next to a name to display some publications already assigned to that person.


Other publications of authors with the same name

GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model., , , , , , , and . ACL (industry), page 134-148. Association for Computational Linguistics, (2023)

AlignBench: Benchmarking Chinese Alignment of Large Language Models., , , , , , , , , and 7 other author(s). CoRR, (2023)

Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers., , , , , , , , and . CoRR, (2022)

AlignBench: Benchmarking Chinese Alignment of Large Language Models., , , , , , , , , and 8 other author(s). ACL (1), page 11621-11640. Association for Computational Linguistics, (2024)

Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method., , , , , , and . ACL (Findings), page 9678-9696. Association for Computational Linguistics, (2023)

GLM-130B: An Open Bilingual Pre-trained Model., , , , , , , , , and 8 other author(s). CoRR, (2022)

GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model., , , , , , , , , and 2 other author(s). CoRR, (2023)

ChatGLM: A Family of Large Language Models from GLM-130B to GLM-4 All Tools., , , , , , , , , and 45 other author(s). CoRR, (2024)

GLM-130B: An Open Bilingual Pre-trained Model., , , , , , , , , and 9 other author(s). ICLR, OpenReview.net, (2023)

OAG-Bench: A Human-Curated Benchmark for Academic Graph Mining., , , , , , , , , and 12 other author(s). KDD, page 6214-6225. ACM, (2024)