Author of the publication

Deep compression and EIE: Efficient inference engine on compressed deep neural network.

, , , , , , and . Hot Chips Symposium, page 1-6. IEEE, (2016)


Other publications of authors with the same name

Dark Memory and Accelerator-Rich System Optimization in the Dark Silicon Era., , , , and . IEEE Des. Test, 34 (2): 39-50 (2017)

Transforming a linear algebra core to an FFT accelerator., , and . ASAP, page 175-184. IEEE Computer Society, (2013)

EIE: Efficient Inference Engine on Compressed Deep Neural Network., , , , , , and . ISCA, page 243-254. IEEE Computer Society, (2016)

Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators., , , and . CoRR, (2020)

Algorithm, Architecture, and Floating-Point Unit Codesign of a Matrix Factorization Accelerator., , and . IEEE Trans. Computers, 63 (8): 1854-1867 (2014)

Retrospective: EIE: Efficient Inference Engine on Sparse and Compressed Neural Network., , , , , , and . CoRR, (2023)

Improving energy efficiency of DRAM by exploiting half page row access., , , , and . MICRO, page 27:1-27:12. IEEE Computer Society, (2016)

On the Efficiency of Register File versus Broadcast Interconnect for Collective Communications in Data-Parallel Hardware Accelerators., , and . SBAC-PAD, page 19-26. IEEE Computer Society, (2012)

Rethinking Floating Point Overheads for Mixed Precision DNN Accelerators., , , , and . MLSys, mlsys.org, (2021)

Codesign Tradeoffs for High-Performance, Low-Power Linear Algebra Architectures., , and . IEEE Trans. Computers, 61 (12): 1724-1736 (2012)