Regularization Matters: Generalization and Optimization of Neural Nets
v.s. their Induced Kernel
C. Wei, J. Lee, Q. Liu, and T. Ma. (2018). cite arxiv:1810.05369. Comment: version 2: title changed from the original "On the Margin Theory of Feedforward Neural Networks"; substantial changes from the old version of the paper, including a new lower bound on NTK sample complexity. Version 3: reorganized the NTK lower bound proof.
Abstract
Recent works have shown that on sufficiently over-parametrized neural nets,
gradient descent with relatively large initialization optimizes a prediction
function in the RKHS of the Neural Tangent Kernel (NTK). This analysis leads to
global convergence results but does not work when there is a standard l2
regularizer, which is useful to have in practice. We show that sample
efficiency can indeed depend on the presence of the regularizer: we construct a
simple distribution in d dimensions which the optimal regularized neural net
learns with O(d) samples but the NTK requires Ω(d^2) samples to learn. To
prove this, we establish two analysis tools: i) for multi-layer feedforward
ReLU nets, we show that the global minimizer of a weakly-regularized
cross-entropy loss is the max normalized margin solution among all neural nets,
which generalizes well; ii) we develop a new technique for proving lower bounds
for kernel methods, which relies on showing that the kernel cannot focus on
informative features. Motivated by our generalization results, we study whether
the regularized global optimum is attainable. We prove that for infinite-width
two-layer nets, noisy gradient descent optimizes the regularized neural net
loss to a global minimum in polynomial iterations.
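
Below is a minimal sketch (not the authors' code, and assuming PyTorch) of the abstract's first tool: the global minimizer of a weakly-regularized cross-entropy loss approaches the max normalized-margin solution. It trains a two-layer bias-free ReLU net with logistic loss plus an l2 penalty of strength lam, shrinks lam, and tracks the normalized margin min_i y_i f(x_i) / ||theta||^2 (the net is 2-homogeneous in its parameters). The data, width, learning rate, and iteration count are illustrative choices, not taken from the paper.

# Sketch: weak l2 regularization and the normalized margin of a two-layer ReLU net.
import torch

torch.manual_seed(0)
d, n, width = 10, 200, 512
X = torch.randn(n, d)
y = (X[:, 0] > 0).float() * 2 - 1              # +/-1 labels from a single coordinate

def make_net():
    # two-layer ReLU net without biases, so f is 2-homogeneous in its parameters
    return torch.nn.Sequential(
        torch.nn.Linear(d, width, bias=False),
        torch.nn.ReLU(),
        torch.nn.Linear(width, 1, bias=False),
    )

def sq_param_norm(net):
    return sum(p.pow(2).sum() for p in net.parameters())

def normalized_margin(net):
    out = net(X).squeeze(1)
    return (y * out).min() / sq_param_norm(net)  # divide by ||theta||^2 by homogeneity

for lam in [1e-1, 1e-2, 1e-3, 1e-4]:
    net = make_net()
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    for _ in range(5000):
        opt.zero_grad()
        logits = net(X).squeeze(1)
        loss = torch.nn.functional.softplus(-y * logits).mean()  # logistic (cross-entropy) loss
        loss = loss + lam * sq_param_norm(net)                   # weak l2 regularizer
        loss.backward()
        opt.step()
    print(f"lambda={lam:.0e}  normalized margin={normalized_margin(net).item():.4f}")

As lam is decreased, the reported normalized margin should grow toward the maximum attainable value, which is the quantity the paper's generalization bound is stated in terms of.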
Description
[1810.05369] Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
%0 Journal Article
%1 wei2018regularization
%A Wei, Colin
%A Lee, Jason D.
%A Liu, Qiang
%A Ma, Tengyu
%D 2018
%K deep-learning generalization optimization readings theory regularisation
%T Regularization Matters: Generalization and Optimization of Neural Nets
v.s. their Induced Kernel
%U http://arxiv.org/abs/1810.05369
%X Recent works have shown that on sufficiently over-parametrized neural nets,
gradient descent with relatively large initialization optimizes a prediction
function in the RKHS of the Neural Tangent Kernel (NTK). This analysis leads to
global convergence results but does not work when there is a standard l2
regularizer, which is useful to have in practice. We show that sample
efficiency can indeed depend on the presence of the regularizer: we construct a
simple distribution in d dimensions which the optimal regularized neural net
learns with O(d) samples but the NTK requires Ω(d^2) samples to learn. To
prove this, we establish two analysis tools: i) for multi-layer feedforward
ReLU nets, we show that the global minimizer of a weakly-regularized
cross-entropy loss is the max normalized margin solution among all neural nets,
which generalizes well; ii) we develop a new technique for proving lower bounds
for kernel methods, which relies on showing that the kernel cannot focus on
informative features. Motivated by our generalization results, we study whether
the regularized global optimum is attainable. We prove that for infinite-width
two-layer nets, noisy gradient descent optimizes the regularized neural net
loss to a global minimum in polynomial iterations.
@article{wei2018regularization,
abstract = {Recent works have shown that on sufficiently over-parametrized neural nets,
gradient descent with relatively large initialization optimizes a prediction
function in the RKHS of the Neural Tangent Kernel (NTK). This analysis leads to
global convergence results but does not work when there is a standard l2
regularizer, which is useful to have in practice. We show that sample
efficiency can indeed depend on the presence of the regularizer: we construct a
simple distribution in d dimensions which the optimal regularized neural net
learns with O(d) samples but the NTK requires \Omega(d^2) samples to learn. To
prove this, we establish two analysis tools: i) for multi-layer feedforward
ReLU nets, we show that the global minimizer of a weakly-regularized
cross-entropy loss is the max normalized margin solution among all neural nets,
which generalizes well; ii) we develop a new technique for proving lower bounds
for kernel methods, which relies on showing that the kernel cannot focus on
informative features. Motivated by our generalization results, we study whether
the regularized global optimum is attainable. We prove that for infinite-width
two-layer nets, noisy gradient descent optimizes the regularized neural net
loss to a global minimum in polynomial iterations.},
added-at = {2019-09-25T04:57:14.000+0200},
author = {Wei, Colin and Lee, Jason D. and Liu, Qiang and Ma, Tengyu},
biburl = {https://www.bibsonomy.org/bibtex/2285be15db3ec6e94309e9da3faaff130/kirk86},
description = {[1810.05369] Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel},
interhash = {9205f93e0c8a9f1686d96105aaa7d591},
intrahash = {285be15db3ec6e94309e9da3faaff130},
keywords = {deep-learning generalization optimization readings theory regularisation},
note = {cite arxiv:1810.05369Comment: version 2: title changed from originally "On the Margin Theory of Feedforward Neural Networks". Substantial changes from old version of paper, including a new lower bound on NTK sample complexity version 3: reorganized NTK lower bound proof},
timestamp = {2019-09-26T16:00:39.000+0200},
title = {Regularization Matters: Generalization and Optimization of Neural Nets
v.s. their Induced Kernel},
url = {http://arxiv.org/abs/1810.05369},
year = 2018
}