Author of the publication

Please choose the person to relate this publication to.

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed. You can also use the button next to a name to show some publications already assigned to that person.


Other publications of authors with the same name

Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Scientific Question Answering., , , and . CoRR, (2018)
Learning to Attend On Essential Terms: An Enhanced Retriever-Reader Model for Open-domain Question Answering., , , and . NAACL-HLT (1), page 335-344. Association for Computational Linguistics, (2019)
MixKD: Towards Efficient Distillation of Large-scale Language Models., , , , , , and . CoRR, (2020)
Less is More: Task-aware Layer-wise Distillation for Language Model Compression., , , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, page 20852-20867. PMLR, (2023)
Truncated Diffusion Probabilistic Models and Diffusion-based Adversarial Auto-Encoders., , , and . ICLR, OpenReview.net, (2023)
A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models., , , , and . ACL (1), page 2763-2775. Association for Computational Linguistics, (2022)
Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning., , , , , and . EMNLP (1), page 8980-8994. Association for Computational Linguistics, (2020)
Finetuning Pretrained Transformers into RNNs., , , , , , , , and . EMNLP (1), page 10630-10643. Association for Computational Linguistics, (2021)
Less is More: Task-aware Layer-wise Distillation for Language Model Compression., , , , , and . CoRR, (2022)
What Makes Good In-Context Examples for GPT-3?, , , , , and . CoRR, (2021)