Author of the publication

ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers.

CoRR, (2023)


Other publications of authors with the same name

Understanding INT4 Quantization for Transformer Models: Latency Speedup, Composability, and Failure Cases., , , , and . CoRR, (2023)ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers., , , , , and . NeurIPS, (2022)Understanding Int4 Quantization for Language Models: Latency Speedup, Composability, and Failure Cases., , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, page 37524-37539. PMLR, (2023)ZeRO-Offload: Democratizing Billion-Scale Model Training., , , , , , , and . USENIX ATC, page 551-564. USENIX Association, (2021)ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers., , , , , and . CoRR, (2023)DeepSpeed- Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale., , , , , , , , , and 1 other author(s). SC, page 46:1-46:15. IEEE, (2022)DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale., , , , , , , and . ICML, volume 162 of Proceedings of Machine Learning Research, page 18332-18346. PMLR, (2022)Fault-tolerant 3-D network-on-chip design using dynamic link sharing., , , and . DATE, page 1195-1200. IEEE, (2016)