Training deep neural networks with low precision multiplications

, , and . (2014). arXiv:1412.7024v5. Comment: 10 pages, 5 figures. Accepted as a workshop contribution at ICLR 2015.

Other publications of authors with the same name

Hardware description and synthesis of control-intensive reconfigurable dataflow architectures (abstract only)., and . FPGA, page 274-275. ACM, (2013)

Temporal Logic Explanations for Dynamic Decision Systems Using Anchors and Monte Carlo Tree Search (Abstract Reprint)., , and . AAAI, page 22694. AAAI Press, (2024)

ASIP Accelerator for LUT-based Neural Networks Inference., , and . NEWCAS, page 524-528. IEEE, (2022)

Quark: An Integer RISC-V Vector Processor for Sub-Byte Quantized DNN Inference., , , , , , , , , and 1 other author(s). ISCAS, page 1-5. IEEE, (2023)

RISC-V Barrel Processor for Deep Neural Network Acceleration., , , , and . ISCAS, page 1-5. IEEE, (2021)

BinaryConnect: Training Deep Neural Networks with binary weights during propagations., , and . NIPS, page 3123-3131. (2015)

Bit-Slicing FPGA Accelerator for Quantized Neural Networks., , , and . ISCAS, page 1-5. IEEE, (2019)

Max-hashing fragments for large data sets detection.. ReConFig, page 1-6. IEEE, (2013)

Synchronized-transfer-level design methodology applied to hardware matrix multiplication., and . ReConFig, page 1-7. IEEE, (2012)

Two-level configuration for FPGA: A new design methodology based on a computing fabric., , , and . ISCAS, page 265-268. IEEE, (2012)