Author of the publication

Please choose a person to relate this publication to

To distinguish between persons with the same name, the academic degree and the title of an important publication will be displayed. You can also use the button next to the name to display some publications already assigned to the person.


Other publications of authors with the same name

Efficient AI System Design With Cross-Layer Approximate Computing, and 30 other author(s). Proc. IEEE, 108 (12): 2232-2250 (2020)

A pulsed low-voltage swing latch for reduced power dissipation in high-frequency microprocessors. ISLPED, page 85-88. ACM, (2006)

Across the Stack Opportunities for Deep Learning Acceleration, and 21 other author(s). ISLPED, page 35:1-35:2. ACM, (2018)

A Scalable Multi-TeraOPS Deep Learning Processor Core for AI Training and Inference, and 21 other author(s). VLSI Circuits, page 35-36. IEEE, (2018)

A 45 nm SOI Embedded DRAM Macro for the POWER™ Processor 32 MByte On-Chip L3 Cache, and 1 other author(s). IEEE J. Solid State Circuits, 46 (1): 64-75 (2011)

A 7nm 4-Core AI Chip with 25.6TFLOPS Hybrid FP8 Training, 102.4TOPS INT4 Inference and Workload-Aware Throttling, and 34 other author(s). ISSCC, page 144-146. IEEE, (2021)

A 45nm SOI embedded DRAM macro for POWER7™ 32MB on-chip L3 cache. ISSCC, page 342-343. IEEE, (2010)

A 7-nm Four-Core Mixed-Precision AI Chip With 26.2-TFLOPS Hybrid-FP8 Training, 104.9-TOPS INT4 Inference, and Workload-Aware Throttling, and 34 other author(s). IEEE J. Solid State Circuits, 57 (1): 182-197 (2022)

RaPiD: AI Accelerator for Ultra-low Precision Training and Inference, and 44 other author(s). ISCA, page 153-166. IEEE, (2021)

A 3.0 TFLOPS 0.62V Scalable Processor Core for High Compute Utilization AI Training and Inference, and 33 other author(s). VLSI Circuits, page 1-2. IEEE, (2020)