ZeroQuant-HERO: Hardware-Enhanced Robust Optimized Post-Training Quantization Framework for W8A8 Transformers. CoRR (2023)

Other publications of authors with the same name

SHARP: An Adaptable, Energy-Efficient Accelerator for Recurrent Neural Networks. ACM Trans. Embed. Comput. Syst., 22 (2): 30:1-30:23 (March 2023)

Understanding INT4 Quantization for Transformer Models: Latency Speedup, Composability, and Failure Cases. CoRR (2023)

Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model. CoRR (2022)

ZeroQuant: Efficient and Affordable Post-Training Quantization for Large-Scale Transformers. NeurIPS (2022)

ZeroQuant(4+2): Redefining LLMs Quantization with a New FP6-Centric Strategy for Diverse Generative Tasks. CoRR (2023)

Understanding Int4 Quantization for Language Models: Latency Speedup, Composability, and Failure Cases. ICML, volume 202 of Proceedings of Machine Learning Research, pages 37524-37539. PMLR (2023)

ZeRO-Offload: Democratizing Billion-Scale Model Training. USENIX Annual Technical Conference, pages 551-564. USENIX Association (2021)

DeepSpeed-FastGen: High-throughput Text Generation for LLMs via MII and DeepSpeed-Inference. CoRR (2024)

DeepSpeed-Inference: Enabling Efficient Inference of Transformer Models at Unprecedented Scale. SC, pages 46:1-46:15. IEEE (2022)