Other publications by persons with the same name

Don't just prune by magnitude! Your mask topology is a secret weapon. NeurIPS, 2023.
HARDSEA: Hybrid Analog-ReRAM Clustering and Digital-SRAM In-Memory Computing Accelerator for Dynamic Sparse Self-Attention in Transformer. IEEE Trans. Very Large Scale Integr. Syst., 32 (2): 269-282 (February 2024).
Hypergraph-Enhanced Self-Supervised Robust Graph Learning for Social Recommendation. ICASSP, pp. 5545-5549. IEEE, 2024.
Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training. ICML, volume 139 of Proceedings of Machine Learning Research, pp. 6989-7000. PMLR, 2021.
Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph. ICLR, OpenReview.net, 2023.
Achieving Personalized Federated Learning with Sparse Local Models. CoRR, 2022.
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance. Int. J. Comput. Vis., 131 (10): 2635-2648 (October 2023).
Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. CoRR, 2023.
Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs. CoRR, 2023.
More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity. CoRR, 2022.