Abstract
Recent works have shown that on sufficiently over-parametrized neural nets,
gradient descent with relatively large initialization optimizes a prediction
function in the RKHS of the Neural Tangent Kernel (NTK). This analysis leads to
global convergence results but does not apply when a standard ℓ2
regularizer is added, which is useful in practice. We show that sample
efficiency can indeed depend on the presence of the regularizer: we construct a
simple distribution in d dimensions which an optimally regularized neural net
learns with O(d) samples, whereas the NTK requires Ω(d^2) samples to learn. To
prove this, we establish two analysis tools: i) for multi-layer feedforward
ReLU nets, we show that the global minimizer of a weakly-regularized
cross-entropy loss is the max normalized margin solution among all neural nets,
which generalizes well; ii) we develop a new technique for proving lower bounds
for kernel methods, which relies on showing that the kernel cannot focus on
informative features. Motivated by our generalization results, we study whether
the regularized global optimum is attainable. We prove that for infinite-width
two-layer nets, noisy gradient descent optimizes the regularized neural net
loss to a global minimum in a polynomial number of iterations.
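
As an illustrative sketch of tool (i), in notation of our own choosing (the exact regularizer power r and the degree-H homogeneity normalization below are our assumptions, not quoted from the abstract): for a ReLU net f(Θ; x) that is positive-homogeneous of degree H in its parameters Θ, the weakly-regularized cross-entropy loss and the normalized margin can be written as

% Sketch in our notation; r > 0 is an assumed fixed power of the norm,
% and the margin is normalized by \|\Theta\|^H for a degree-H homogeneous net.
\[
  L_\lambda(\Theta) = \sum_{i=1}^{n} \log\bigl(1 + \exp(-y_i f(\Theta; x_i))\bigr) + \lambda \|\Theta\|_2^{r},
  \qquad
  \gamma(\Theta) = \min_{1 \le i \le n} \frac{y_i f(\Theta; x_i)}{\|\Theta\|_2^{H}}.
\]

Under this formalization, the margin result says that global minimizers Θ_λ of L_λ achieve γ(Θ_λ) → max_Θ γ(Θ) as λ → 0⁺, i.e., vanishing regularization selects the maximum normalized margin solution.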