Author of the publication

Flash-LLM: Enabling Low-Cost and Highly-Efficient Large Generative Model Inference With Unstructured Sparsity.

Proc. VLDB Endow. 17(2): 211-224 (2023)


Other publications of authors with the same name

DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies. CoRR (2023)
Flash-LLM: Enabling Low-Cost and Highly-Efficient Large Generative Model Inference With Unstructured Sparsity. Proc. VLDB Endow. 17(2): 211-224 (2023)
JSidentify: a hybrid framework for detecting plagiarism among JavaScript code in online mini games. ICSE (SEIP), pages 211-220. ACM (2020)
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models. CoRR (2024)
Binary Neural Network for Automated Visual Surface Defect Detection. Sensors 21(20): 6868 (2021)
Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity. CoRR (2023)
Quant-LLM: Accelerating the Serving of Large Language Models via FP6-Centric Algorithm-System Co-Design on Modern GPUs. USENIX ATC, pages 699-713. USENIX Association (2024)