Abstract
This work studies the generalization error of deep neural networks via their
classification margin. Our approach is based on the Jacobian matrix of a deep
neural network and can be applied to networks with arbitrary non-linearities
and pooling layers, and to networks with different architectures such as
feed-forward networks and residual networks. Our analysis
leads to the conclusion that a bounded spectral norm of the network's Jacobian
matrix in the neighbourhood of the training samples is crucial for a deep
neural network of arbitrary depth and width to generalize well. This is a
significant improvement over the current bounds in the literature, which imply
that the generalization error grows with either the width or the depth of the
network. Moreover, it shows that the recently proposed batch normalization and
weight normalization re-parametrizations enjoy good generalization properties,
and leads to a novel network regularizer based on the network's Jacobian
matrix. The analysis is supported with experimental results on the MNIST,
CIFAR-10, LaRED and ImageNet datasets.
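
The abstract states that the analysis leads to a regularizer based on the network's Jacobian matrix but does not give its exact form. As a rough illustration only, the sketch below penalizes the squared Frobenius norm of the input-output Jacobian around training samples, which upper-bounds the squared spectral norm highlighted in the abstract. The function name, the Frobenius surrogate, and the weighting constant are assumptions for illustration, not the paper's exact construction.

```python
import torch


def jacobian_frobenius_penalty(model, x):
    """Illustrative penalty on the input-output Jacobian at the inputs `x`.

    Uses the squared Frobenius norm as a tractable upper bound on the
    squared spectral norm. Assumes each output row depends only on its own
    input sample (e.g. no cross-sample operations such as batch-norm in
    training mode).
    """
    x = x.clone().requires_grad_(True)
    out = model(x)                      # shape: (batch, num_classes)
    penalty = x.new_zeros(())
    for k in range(out.shape[1]):
        # Gradient of the k-th output coordinate w.r.t. the inputs gives
        # the k-th row of the Jacobian for every sample in the batch.
        grad_k, = torch.autograd.grad(
            out[:, k].sum(), x, create_graph=True, retain_graph=True)
        penalty = penalty + grad_k.pow(2).sum()
    return penalty / x.shape[0]


# Usage sketch (`net`, `criterion`, `lambda_jac` are illustrative names):
# loss = criterion(net(x), y) + lambda_jac * jacobian_frobenius_penalty(net, x)
```

The per-coordinate backward passes make this exact but costly for many classes; in practice one might estimate the norm with random projections instead.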