Author of the publication

Specializing for Efficiency: Customizing AI Inference Processors on FPGAs.

ICM, page 62-65. IEEE, (2021)

Other publications of authors with the same name

Hardware acceleration of novel chaos-based image encryption for IoT applications. ICM, page 1-4. IEEE, (2017)
Embracing Diversity: Enhanced DSP Blocks for Low-Precision Deep Learning on FPGAs. FPL, page 35-42. IEEE Computer Society, (2018)
A Whole New World: How to Architect Beyond-FPGA Reconfigurable Acceleration Devices? FPL, page 265-270. IEEE, (2023)
RAD-Sim: Rapid Architecture Exploration for Novel Reconfigurable Acceleration Devices. FPL, page 438-444. IEEE, (2022)
Compute-Capable Block RAMs for Efficient Deep Learning Acceleration on FPGAs. FCCM, page 88-96. IEEE, (2021)
Scalable Low-Latency Persistent Neural Machine Translation on CPU Server with Multiple FPGAs. FPT, page 307-310. IEEE, (2019)
Why Compete When You Can Work Together: FPGA-ASIC Integration for Persistent RNNs. FCCM, page 199-207. IEEE, (2019)
Field-Programmable Gate Array Architecture for Deep Learning: Survey & Future Directions. CoRR, (2024)
You Cannot Improve What You Do not Measure: FPGA vs. ASIC Efficiency Gaps for Convolutional Neural Network Inference. ACM Trans. Reconfigurable Technol. Syst., 11 (3): 20:1-20:23 (2018)