
Other publications of authors with the same name

The IBM eServer z990 floating-point unit., , , , , , and . IBM J. Res. Dev., 48 (3-4): 311-322 (2004)

Efficient AI System Design With Cross-Layer Approximate Computing., , , , , , , , , and 30 other author(s). Proc. IEEE, 108 (12): 2232-2250 (2020)

4GHz+ low-latency fixed-point and binary floating-point execution units for the POWER6 processor., , , , , , , , and . ISSCC, page 1728-1734. IEEE, (2006)

Exponent monitoring for low-cost concurrent error detection in FPU control logic., , , and . VTS, page 235-240. IEEE Computer Society, (2011)

A 7nm 4-Core AI Chip with 25.6TFLOPS Hybrid FP8 Training, 102.4TOPS INT4 Inference and Workload-Aware Throttling., , , , , , , , , and 34 other author(s). ISSCC, page 144-146. IEEE, (2021)

DLFloat: A 16-b Floating Point Format Designed for Deep Learning Training and Inference., , , , , , and . ARITH, page 92-95. IEEE, (2019)

A Scalable Multi-TeraOPS Deep Learning Processor Core for AI Training and Inference., , , , , , , , , and 21 other author(s). VLSI Circuits, page 35-36. IEEE, (2018)

64-bit prefix adders: Power-efficient topologies and design solutions., , , and . CICC, page 179-182. IEEE, (2009)

A 7-nm Four-Core Mixed-Precision AI Chip With 26.2-TFLOPS Hybrid-FP8 Training, 104.9-TOPS INT4 Inference, and Workload-Aware Throttling., , , , , , , , , and 34 other author(s). IEEE J. Solid State Circuits, 57 (1): 182-197 (2022)

RaPiD: AI Accelerator for Ultra-low Precision Training and Inference., , , , , , , , , and 44 other author(s). ISCA, page 153-166. IEEE, (2021)