
Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing.

, , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, pp. 15200-15238. PMLR, (2023)


Other publications by people with the same name

On the SDEs and Scaling Rules for Adaptive Gradient Algorithms., , , and . NeurIPS, (2022)
Gradient Descent Maximizes the Margin of Homogeneous Neural Networks., and . ICLR, OpenReview.net, (2020)
Reconciling Modern Deep Learning with Traditional Optimization Analyses: The Intrinsic Learning Rate., , and . NeurIPS, (2020)
Gradient Descent on Two-layer Nets: Margin Maximization and Simplicity Bias., , , and . NeurIPS, pp. 12978-12991. (2021)
Understanding Incremental Learning of Gradient Descent: A Fine-grained Analysis of Matrix Sensing., , , , and . ICML, volume 202 of Proceedings of Machine Learning Research, pp. 15200-15238. PMLR, (2023)
Theoretical Analysis of Auto Rate-Tuning by Batch Normalization., , and . ICLR (Poster), OpenReview.net, (2019)
Single-Source Bottleneck Path Algorithm Faster than Sorting for Sparse Graphs., , and . ICALP, volume 107 of LIPIcs, pp. 43:1-43:14. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, (2018)
Fine-grained Complexity Meets IP = PSPACE., , , , and . SODA, pp. 1-20. SIAM, (2019)
Why (and When) does Local SGD Generalize Better than SGD?, , , and . ICLR, OpenReview.net, (2023)
Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning., , and . CoRR, (2020)