Author of the publication

Shfl-BW: accelerating deep neural network inference with tensor-core aware weight pruning.

DAC, page 1153-1158. ACM, (2022)


Other publications of authors with the same name

A Min-Max Optimization Framework for Multi-task Deep Neural Network Compression. ISCAS, page 1-5. IEEE, (2024)
Improving Noise Tolerance of Hardware Accelerated Artificial Neural Networks. ICMLA, page 797-801. IEEE, (2018)
Noisy Computations during Inference: Harmful or Helpful? CoRR, (2018)
Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting. CVPR, page 10259-10269. IEEE, (2023)
Pruning Parameterization with Bi-level Optimization for Efficient Semantic Segmentation on the Edge. CVPR, page 15402-15412. IEEE, (2023)
Sanity Checks for Lottery Tickets: Does Your Winning Ticket Really Win the Jackpot? NeurIPS, page 12749-12760. (2021)
Effective Model Sparsification by Scheduled Grow-and-Prune Methods. ICLR, OpenReview.net, (2022)
SPViT: Enabling Faster Vision Transformers via Latency-Aware Soft Token Pruning. ECCV (11), volume 13671 of Lecture Notes in Computer Science, page 620-640. Springer, (2022)
Data Level Lottery Ticket Hypothesis for Vision Transformers. IJCAI, page 1378-1386. ijcai.org, (2023)