
Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks

Ziwei Ji and Matus Telgarsky. (2019). arXiv:1909.12292.

Abstract

Recent work has revealed that overparameterized networks trained by gradient descent achieve arbitrarily low training error, and sometimes even low test error. The required width, however, is always polynomial in at least one of the sample size $n$, the (inverse) training error $1/\epsilon$, and the (inverse) failure probability $1/\delta$. This work shows that $O(1/\epsilon)$ iterations of gradient descent on a two-layer network of any width exceeding $\mathrm{polylog}(n, 1/\epsilon, 1/\delta)$, given $\Omega(1/\epsilon^2)$ training examples, suffice to achieve a test error of $\epsilon$. The analysis further relies upon a margin property of the limiting kernel, which is guaranteed positive and can distinguish between true labels and random labels.
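To make the setting concrete, below is a minimal sketch, not the paper's construction or analysis: a two-layer ReLU network trained by full-batch gradient descent on the logistic loss, with the output weights frozen at random signs (a common simplification in this line of work, assumed here). The data, width, step size, and iteration count are illustrative placeholders, not values prescribed by the theorem.

```python
# Toy sketch of the setting: a width-m two-layer ReLU network, hidden layer
# trained by gradient descent, output layer frozen at random +/-1 weights.
# All constants below are placeholders, not the paper's prescribed values.
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 200, 10, 256                           # samples, input dim, hidden width
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # unit-norm inputs
y = np.where(X[:, 0] >= 0.0, 1.0, -1.0)          # toy binary labels in {-1, +1}

W = rng.standard_normal((m, d))                  # trained hidden-layer weights
a = rng.choice([-1.0, 1.0], size=m)              # fixed random output weights

def predict(W, X):
    # f(x) = (1/sqrt(m)) * sum_j a_j * relu(w_j . x)
    return np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m)

lr, steps = 0.5, 500
for _ in range(steps):
    margins = y * predict(W, X)
    # gradient of the average logistic loss log(1 + exp(-margin)) w.r.t. f(x_i)
    coef = -y / (1.0 + np.exp(np.clip(margins, -50.0, 50.0))) / n   # shape (n,)
    active = (X @ W.T > 0.0).astype(float)                          # ReLU gates, (n, m)
    grad_W = (active * coef[:, None] * a[None, :]).T @ X / np.sqrt(m)
    W -= lr * grad_W

train_err = np.mean(y * predict(W, X) <= 0)
print(f"training error: {train_err:.3f}")
```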

Description

[1909.12292] Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks
