
NetBooster: Empowering Tiny Deep Learning By Standing on the Shoulders of Deep Giants. DAC, page 1-6. IEEE, (2023)


Other publications of authors with the same name

Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks. ICLR, OpenReview.net, (2020)
Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference. CVPR, page 14431-14442. IEEE, (2023)
GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design. HPCA, page 460-474. IEEE, (2022)
ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design. HPCA, page 273-286. IEEE, (2023)
When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models. CoRR, (2024)
G-CoS: GNN-Accelerator Co-Search Towards Both Better Accuracy and Efficiency. ICCAD, page 1-9. IEEE, (2021)
DIAN: Differentiable Accelerator-Network Co-Search Towards Maximal DNN Efficiency. ISLPED, page 1-6. IEEE, (2021)
FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training. NeurIPS, (2020)
I-GCN: A Graph Convolutional Network Accelerator with Runtime Locality Enhancement through Islandization. MICRO, page 1051-1063. ACM, (2021)
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark. ICLR, OpenReview.net, (2021)