Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training.

, , , , , , , and . ECCV (27), volume 13687 of Lecture Notes in Computer Science, pages 69-87. Springer, 2022.

Other publications of authors with the same name

InheritSumm: A General, Versatile and Compact Summarizer by Distilling from GPT. , , , , , , and . EMNLP (Findings), pages 13879-13892. Association for Computational Linguistics, 2023.

CLIP-Event: Connecting Text and Images with Event Structures. , , , , , , , , and . CVPR, pages 16399-16408. IEEE, 2022.

Narrate Dialogues for Better Summarization. , , and . EMNLP (Findings), pages 3565-3575. Association for Computational Linguistics, 2022.

ParaTag: A Dataset of Paraphrase Tagging for Fine-Grained Labels, NLG Evaluation, and Data Augmentation. , , , , and . EMNLP, pages 7111-7122. Association for Computational Linguistics, 2022.

Supervised Knowledge Makes Large Language Models Better In-context Learners. , , , , , , , , , and 1 other author(s). CoRR, 2023.

Cross-lingual Alignment vs Joint Training: A Comparative Study and A Simple Unified Framework. , , , , , and . ICLR, OpenReview.net, 2020.

ERBench: An Entity-Relationship based Automatically Verifiable Hallucination Benchmark for Large Language Models. , , , , , , and . CoRR, 2024.

Enhancing Factual Consistency of Abstractive Summarization. , , , , , , and . NAACL-HLT, pages 718-733. Association for Computational Linguistics, 2021.

A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining. , , , and . EMNLP (Findings), volume EMNLP 2020 of Findings of ACL, pages 194-203. Association for Computational Linguistics, 2020.

Low-resource Cross-lingual Event Type Detection via Distant Supervision with Minimal Effort. , , , , , , and . COLING, pages 70-82. Association for Computational Linguistics, 2018.