
Edge Inference Engine for Deep & Random Sparse Neural Networks with 4-bit Cartesian-Product MAC Array and Pipelined Activation Aligner. HCS, page 1-21. IEEE, (2021)

Other publications of authors with the same name

Multicoated Supermasks Enhance Hidden Networks. ICML, volume 162 of Proceedings of Machine Learning Research, page 17045-17055. PMLR, (2022)
Selective Fine-Tuning on a Classifier Ensemble: Realizing Adaptive Neural Networks With a Diversified Multi-Exit Architecture. IEEE Access, (2021)
Quantization Error-Based Regularization in Neural Networks. SGAI Conf., volume 10630 of Lecture Notes in Computer Science, page 137-142. Springer, (2017)
Dither NN: An Accurate Neural Network with Dithering for Low Bit-Precision Hardware. FPT, page 6-13. IEEE, (2018)
Area and Energy Optimization for Bit-Serial Log-Quantized DNN Accelerator with Shared Accumulators. MCSoC, page 237-243. IEEE Computer Society, (2018)
In-memory area-efficient signal streaming processor design for binary neural networks. MWSCAS, page 116-119. IEEE, (2017)
A 3D-Stacked SRAM using Inductive Coupling with Low-Voltage Transmitter and 12:1 SerDes. ISCAS, page 1-5. IEEE, (2020)
QUEST: Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96-MB 3-D SRAM Using Inductive Coupling Technology in 40-nm CMOS. IEEE J. Solid State Circuits, 54 (1): 186-196 (2019)
ProgressiveNN: Achieving Computational Scalability with Dynamic Bit-Precision Adjustment by MSB-first Accumulative Computation. Int. J. Netw. Comput., 11 (2): 338-353 (2021)
Logarithmic Compression for Memory Footprint Reduction in Neural Network Training. CANDAR, page 291-297. IEEE Computer Society, (2017)