Author of the publication

Petals: Collaborative Inference and Fine-tuning of Large Models.

ACL (demo), pages 558-568. Association for Computational Linguistics, (2023)


Other publications of authors with the same name

8-bit Optimizers via Block-wise Quantization. ICLR, OpenReview.net, (2022)

Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR, (2022)

Training Transformers Together. CoRR, (2022)

High Performance Natural Language Processing. EMNLP (Tutorial Abstracts), pages 24-27. Association for Computational Linguistics, (2020)

The case for 4-bit precision: k-bit Inference Scaling Laws. ICML, volume 202 of Proceedings of Machine Learning Research, pages 7750-7774. PMLR, (2023)

LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR, (2022)

Training Transformers Together. NeurIPS (Competition and Demos), volume 176 of Proceedings of Machine Learning Research, pages 335-342. PMLR, (2021)

Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model. EMNLP, pages 15038-15061. Association for Computational Linguistics, (2023)

Petals: Collaborative Inference and Fine-tuning of Large Models. CoRR, (2022)

The case for 4-bit precision: k-bit Inference Scaling Laws. (2022). arXiv:2212.09720