Mirrorless Mirror Descent: A Natural Derivation of Mirror Descent.

AISTATS, volume 130 of Proceedings of Machine Learning Research, pages 2305-2313. PMLR, (2021)

Other publications of authors with the same name

Lower Bounds for Non-Convex Stochastic Optimization. CoRR, (2019)
A Stochastic Newton Algorithm for Distributed Convex Optimization. NeurIPS, pages 26818-26830. (2021)
Graph Oracle Models, Lower Bounds, and Gaps for Parallel Stochastic Optimization. NeurIPS, pages 8505-8515. (2018)
Minibatch vs Local SGD for Heterogeneous Distributed Learning. NeurIPS, (2020)
The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication. COLT, volume 134 of Proceedings of Machine Learning Research, pages 4386-4437. PMLR, (2021)
Training Well-Generalizing Classifiers for Fairness Metrics and Other Data-Dependent Constraints. ICML, volume 97 of Proceedings of Machine Learning Research, pages 1397-1405. PMLR, (2019)
The Gradient Complexity of Linear Regression. COLT, volume 125 of Proceedings of Machine Learning Research, pages 627-647. PMLR, (2020)
Towards Optimal Communication Complexity in Distributed Non-Convex Optimization. NeurIPS, (2022)
Is Local SGD Better than Minibatch SGD? ICML, volume 119 of Proceedings of Machine Learning Research, pages 10334-10343. PMLR, (2020)
Two Losses Are Better Than One: Faster Optimization Using a Cheaper Proxy. ICML, volume 202 of Proceedings of Machine Learning Research, pages 37273-37292. PMLR, (2023)