A 17-95.6 TOPS/W Deep Learning Inference Accelerator with Per-Vector Scaled 4-bit Quantization for Transformers in 5nm. VLSI Technology and Circuits, pages 16-17. IEEE, 2022.

Other publications of authors with the same name

STAxCache: An approximate, energy efficient STT-MRAM cache. DATE, pages 356-361. IEEE, 2017.

A 0.11 PJ/OP, 0.32-128 Tops, Scalable Multi-Chip-Module-Based Deep Neural Network Accelerator Designed with A High-Productivity VLSI Methodology. Hot Chips Symposium, pages 1-24. IEEE, 2019.

Reading spin-torque memory with spin-torque sensors. NANOARCH, pages 40-41. IEEE Computer Society, 2013.

VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference. MLSys, mlsys.org, 2021.

Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update. CoRR, 2021.

DWM-TAPESTRI - an energy efficient all-spin cache using domain wall shift based writes. DATE, pages 1825-1830. EDA Consortium San Jose, CA, USA / ACM DL, 2013.

VESPA: Variability emulation for System-on-Chip performance analysis. DATE, pages 2-7. IEEE, 2011.

Optimal Clipping and Magnitude-aware Differentiation for Improved Quantization-aware Training. ICML, volume 162 of Proceedings of Machine Learning Research, pages 19123-19138. PMLR, 2022.

SPINDLE: SPINtronic deep learning engine for large-scale neuromorphic computing. ISLPED, pages 15-20. ACM, 2014.