MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization.

, , , , , , , and . HPCA, pages 124-138. IEEE, (2024)

Other publications of authors with the same name

Efficient Accelerator/Network Co-Search With Circular Greedy Reinforcement Learning., , and . IEEE Trans. Circuits Syst. II Express Briefs, 70 (7): 2615-2619 (July 2023)
$A^2Q$: Aggregation-Aware Quantization for Graph Neural Networks., , , , , , , and . ICLR, OpenReview.net, (2023)
MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization., , , , , , , and . HPCA, pages 124-138. IEEE, (2024)
Hardware Acceleration of CNN with One-Hot Quantization of Weights and Activations., , , , and . DATE, pages 971-974. IEEE, (2020)
Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA., , , and . IEEE Trans. Comput. Aided Des. Integr. Circuits Syst., 41 (5): 1436-1447 (2022)
Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA., , , and . CoRR, (2021)
EBERT: Efficient BERT Inference with Dynamic Structured Pruning., , , and . ACL/IJCNLP (Findings), volume ACL/IJCNLP 2021 of Findings of ACL, pages 4814-4823. Association for Computational Linguistics, (2021)
A2Q: Aggregation-Aware Quantization for Graph Neural Networks., , , , , , , and . CoRR, (2023)
MEGA: A Memory-Efficient GNN Accelerator Exploiting Degree-Aware Mixed-Precision Quantization., , , , , , , and . CoRR, (2023)
A System-Level Solution for Low-Power Object Detection., , , , , , , , , and 1 other author(s). ICCV Workshops, pages 2461-2468. IEEE, (2019)