Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and Beyond. ICLR, OpenReview.net, (2022)

Other publications of authors with the same name

Are Transformers universal approximators of sequence-to-sequence functions? ICLR, OpenReview.net, (2020)

Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study. CoRR, (2023)

Minimax Bounds on Stochastic Batched Convex Optimization. COLT, volume 75 of Proceedings of Machine Learning Research, pages 3065-3162. PMLR, (2018)

Does SGD really happen in tiny subspaces? CoRR, (2024)

Trajectory Alignment: Understanding the Edge of Stability Phenomenon via Bifurcation Theory. CoRR, (2023)

Provable Memorization via Deep Neural Networks using Sub-linear Parameters. COLT, volume 134 of Proceedings of Machine Learning Research, pages 3627-3661. PMLR, (2021)

Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity. NeurIPS, pages 15532-15543. (2019)

On the Training Instability of Shuffling SGD with Batch Normalization. ICML, volume 202 of Proceedings of Machine Learning Research, pages 37787-37845. PMLR, (2023)

Minimum Width for Universal Approximation. ICLR, OpenReview.net, (2021)

Linear attention is (maybe) all you need (to understand Transformer optimization). ICLR, OpenReview.net, (2024)