Author of the publication

Exact expressions for double descent and implicit regularization via surrogate random design.

NeurIPS, 2020.

Other publications of authors with the same name

Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification. ICLR, OpenReview.net, 2021.
Skip-Gram - Zipf + Uniform = Vector Additivity. ACL (1), pages 69-76, Association for Computational Linguistics, 2017.
Generalization Bounds using Lower Tail Exponents in Stochastic Optimizers. ICML, volume 162 of Proceedings of Machine Learning Research, pages 8774-8795, PMLR, 2022.
Multiplicative Noise and Heavy Tails in Stochastic Optimization. ICML, volume 139 of Proceedings of Machine Learning Research, pages 4262-4274, PMLR, 2021.
Good Classifiers are Abundant in the Interpolating Regime. AISTATS, volume 130 of Proceedings of Machine Learning Research, pages 3376-3384, PMLR, 2021.
Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data. CoRR, 2020.
Unified Acceleration Method for Packing and Covering Problems via Diameter Reduction. ICALP, volume 55 of LIPIcs, pages 50:1-50:13, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2016.
On the Computational Inefficiency of Large Batch Sizes for Stochastic Gradient Descent. CoRR, 2018.
Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxy Data. CoRR, 2016.
Large batch size training of neural networks with adversarial training and second-order information. CoRR, 2018.