Industrially proving the SPIRIT consortium specifications for design chain integration.

, , , , , , , , , and . DATE Designers' Forum, page 142-147. European Design and Automation Association, Leuven, Belgium, (2006)

Other publications of authors with the same name

Flexible Instruction Set Architecture for Programmable Look-up Table based Processing-in-Memory., , , and . ICCD, page 66-73. IEEE, (2021)

The IANET Hardware Accelerator for Audio and Visual Data Classification., , , and . SoCC, page 48-53. IEEE, (2020)

Implementation and Evaluation of Deep Neural Networks in Commercially Available Processing in Memory Hardware., , , , and . SoCC, page 1-6. IEEE, (2022)

A 0.36pJ/bit, 17Gbps OOK receiver in 45-nm CMOS for inter and intra-chip wireless interconnects., , , , , and . SoCC, page 132-137. IEEE, (2017)

POLAR: Performance-aware On-device Learning Capable Programmable Processing-in-Memory Architecture for Low-Power ML Applications., , , , and . DSD, page 889-898. IEEE, (2022)

Industrially proving the SPIRIT consortium specifications for design chain integration., , , , , , , , , and . DATE Designers' Forum, page 142-147. European Design and Automation Association, Leuven, Belgium, (2006)

pPIM: A Programmable Processor-in-Memory Architecture With Precision-Scaling for Deep Learning., , , , , and . IEEE Comput. Archit. Lett., 19 (2): 118-121 (2020)

A 0.24pJ/bit, 16Gbps OOK Transmitter Circuit in 45-nm CMOS for Inter and Intra-Chip Wireless Interconnects., , , , , and . ACM Great Lakes Symposium on VLSI, page 69-74. ACM, (2018)

FlutPIM: A Look-up Table-based Processing in Memory Architecture with Floating-point Computation Support for Deep Learning Applications., , , , and . ACM Great Lakes Symposium on VLSI, page 207-211. ACM, (2023)

CNNET: A Configurable Hardware Accelerator for Efficient Inference of 8-bit Fixed-Point CNNs., and . SoCC, page 1-6. IEEE, (2023)