
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time.

ICML, volume 162 of Proceedings of Machine Learning Research, pages 23965-23998. PMLR, 2022.


Other publications of authors with the same name

Probing Text Models for Common Ground with Visual Representations. CoRR, 2020.
Robust fine-tuning of zero-shot models. CoRR, 2021.
CLIP on Wheels: Zero-Shot Object Navigation as Object Localization and Exploration. CoRR, 2022.
Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping. CoRR, 2020.
Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus. EMNLP (1), pages 1286-1305. Association for Computational Linguistics, 2021.
Contrasting Contrastive Self-Supervised Representation Learning Models. CoRR, 2021.
CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation. CVPR, pages 23171-23181. IEEE, 2023.
Large-Scale Representation Learning from Visually Grounded Untranscribed Speech. CoNLL, pages 55-65. Association for Computational Linguistics, 2019.
Finetuning Pretrained Transformers into RNNs. EMNLP (1), pages 10630-10643. Association for Computational Linguistics, 2021.
TaskWeb: Selecting Better Source Tasks for Multi-task NLP. EMNLP, pages 11032-11052. Association for Computational Linguistics, 2023.