
Liquid state machine based pattern recognition on FPGA with firing-activity dependent power gating and approximate computing.

, , and . ISCAS, pp. 361-364. IEEE, (2016)


Other publications by persons with the same name

Visage: enabling timely analytics for drone imagery., , , , , , , , , and . MobiCom, pp. 789-803. ACM, (2021)
GradiVeQ: Vector Quantization for Bandwidth-Efficient Gradient Aggregation in Distributed CNN Training., , , , , , , , and . NeurIPS, pp. 5129-5139. (2018)
PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication., , , , , and . ICLR, OpenReview.net, (2022)
BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Partition-Parallelism and Random Boundary Node Sampling., , , , and . MLSys, mlsys.org, (2022)
Liquid state machine based pattern recognition on FPGA with firing-activity dependent power gating and approximate computing., , and . ISCAS, pp. 361-364. IEEE, (2016)
DeepStore: In-Storage Acceleration for Intelligent Queries., , , , , , , , , and . MICRO, pp. 224-238. ACM, (2019)
Accelerating distributed reinforcement learning with in-switch computing., , , , , and . ISCA, pp. 279-291. ACM, (2019)
Pipe-SGD: A Decentralized Pipelined SGD Framework for Distributed Deep Net Training., , , , , and . NeurIPS, pp. 8056-8067. (2018)
Doing more with less: training large DNN models on commodity servers for the masses., , , and . HotOS, pp. 119-127. ACM, (2021)
A Network-Centric Hardware/Algorithm Co-Design to Accelerate Distributed Training of Deep Neural Networks., , , , , , , , , and . MICRO, pp. 175-188. IEEE Computer Society, (2018)