Author of the publication

Feature Learning in Infinite-Width Neural Networks

, and . (2020). arXiv:2011.14522. Comment: 4th paper in the Tensor Programs series. Appearing in ICML 2021.



Other publications by authors with the same name

Lie Access Neural Turing Machine. CoRR, (2016).
High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation. , , , , , and . NeurIPS, (2022).
Efficient Computation of Deep Nonlinear Infinite-Width Neural Networks that Learn Features. , , and . ICLR, OpenReview.net, (2022).
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes. , , , , , , , , and . ICLR (Poster), OpenReview.net, (2019).
Denoised Smoothing: A Provable Defense for Pretrained Classifiers. , , , , and . NeurIPS, (2020).
Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes. NeurIPS, pages 9947-9960, (2019).
A Fine-Grained Spectral Perspective on Neural Networks. , and . (2019). arXiv:1907.10599. Comment: 12 pages of main text, 14 figures, 39 pages including appendix.
Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes. (2019). arXiv:1910.12478. Comment: Appearing in NeurIPS 2019; 10 pages of main text; 12 figures, 11 programs; 73 pages total.
Bayesian Deep Convolutional Networks with Many Channels are Gaussian Processes. , , , , , , , , and . (2018). arXiv:1810.05148. Comment: Published as a conference paper at ICLR 2019.
A Mean Field Theory of Batch Normalization. , , , , and . (2019). arXiv:1902.08129. Comment: To appear in ICLR 2019.