Author of the publication

Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication will be displayed. You can also use the button next to the name to display some publications already assigned to the person.


Other publications of authors with the same name

An FPGA-based RNN-T Inference Accelerator with PIM-HBM., , , , , , and . FPGA, page 146-152. ACM, (2022)
The Breakthrough Memory Solutions for Improved Performance on LLM Inference., , , , , , , , , and 13 other author(s). IEEE Micro, 44 (3): 40-48 (May 2024)
An Architecture of Sparse Length Sum Accelerator in AxDIMM., , , and . AICAS, page 1-4. IEEE, (2022)
MViD: Sparse Matrix-Vector Multiplication in Mobile DRAM for Accelerating Recurrent Neural Networks., , , , , , , , , and . IEEE Trans. Computers, 69 (7): 955-967 (2020)
GraNDe: Efficient Near-Data Processing Architecture for Graph Neural Networks., , , , , and . IEEE Trans. Computers, 73 (10): 2391-2404 (October 2024)
MVP: An Efficient CNN Accelerator with Matrix, Vector, and Processing-Near-Memory Units., , , , , , and . ACM Trans. Design Autom. Electr. Syst., 27 (5): 42:1-42:25 (2022)
GraNDe: Near-Data Processing Architecture With Adaptive Matrix Mapping for Graph Convolutional Networks., , , , , and . IEEE Comput. Archit. Lett., 21 (2): 45-48 (2022)
CLAY: CXL-based Scalable NDP Architecture Accelerating Embedding Layers., , , , , , , and . ICS, page 338-351. ACM, (2024)
Samsung PIM/PNM for Transformer Based AI: Energy Efficiency on PIM/PNM Cluster., , , , , , , , , and 11 other author(s). HCS, page 1-31. IEEE, (2023)
Duplex: A Device for Large Language Models with Mixture of Experts, Grouped Query Attention, and Continuous Batching., , , , , , , , and . CoRR, (2024)