Author of the publication

Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication are displayed. You can also use the button next to the name to show publications already assigned to that person.


Other publications of authors with the same name

Non-Autoregressive Semantic Parsing for Compositional Task-Oriented Dialog., , , , , and . NAACL-HLT, page 2969-2978. Association for Computational Linguistics, (2021)

CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training., , , , , , and . NAACL-HLT (Findings), page 2402-2420. Association for Computational Linguistics, (2022)

VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding., , , , , , , and . EMNLP (1), page 6787-6800. Association for Computational Linguistics, (2021)

Pre-training via Paraphrasing., , , , , and . NeurIPS, (2020)

Scaling Laws for Generative Mixed-Modal Language Models., , , , , , , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, page 265-279. PMLR, (2023)

Conversational Semantic Parsing., , , , , , , , , and 1 other author(s). EMNLP (1), page 5026-5035. Association for Computational Linguistics, (2020)

Better Fine-Tuning by Reducing Representational Collapse., , , , , and . ICLR, OpenReview.net, (2021)

Towards Language Agnostic Universal Representations., , and . ACL (1), page 4033-4041. Association for Computational Linguistics, (2019)

Retrieval-Augmented Multimodal Language Modeling, , , , , , , , and . (2022) cite arxiv:2211.12561. Comment: Published at ICML 2023. Blog post available at https://cs.stanford.edu/~myasu/blog/racm3/.

Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning., , and . CoRR, (2020)