How to Start Training: The Effect of Initialization and Architecture
B. Hanin and D. Rolnick (2018). arXiv:1803.01719. Comment: Final version, 16 pages, accepted at NIPS 2018.
Abstract
We identify and study two common failure modes for early training in deep
ReLU nets. For each we give a rigorous proof of when it occurs and how to avoid
it, for fully connected and residual architectures. The first failure mode,
exploding/vanishing mean activation length, can be avoided by initializing
weights from a symmetric distribution with variance 2/fan-in and, for ResNets,
by correctly weighting the residual modules. We prove that the second failure
mode, exponentially large variance of activation length, never occurs in
residual nets once the first failure mode is avoided. In contrast, for fully
connected nets, we prove that this failure mode can happen and is avoided by
keeping constant the sum of the reciprocals of layer widths. We demonstrate
empirically the effectiveness of our theoretical results in predicting when
networks are able to start training. In particular, we note that many popular
initializations fail our criteria, whereas correct initialization and
architecture allows much deeper networks to be trained.
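The initialization criterion from the abstract can be illustrated with a minimal NumPy sketch (not from the paper; layer widths, depth, and batch size below are arbitrary choices for illustration). Weights are drawn from a symmetric distribution with variance 2/fan-in, and the mean squared activation length is tracked layer by layer; under this criterion it should stay roughly constant with depth rather than exploding or vanishing.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_style_init(fan_in, fan_out):
    # Symmetric (zero-mean) weights with variance 2/fan-in:
    # the condition identified for avoiding exploding/vanishing
    # mean activation length in deep ReLU nets.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def mean_sq_lengths(x, widths):
    # Forward pass through a fully connected ReLU net; records the
    # mean squared activation per layer, which this initialization
    # keeps roughly constant in expectation.
    lengths = []
    for fan_in, fan_out in zip(widths[:-1], widths[1:]):
        W = he_style_init(fan_in, fan_out)
        x = np.maximum(x @ W, 0.0)  # ReLU
        lengths.append(float(np.mean(x ** 2)))
    return lengths

widths = [512] * 21                       # 20 hidden layers, constant width
x = rng.normal(size=(256, widths[0]))     # inputs with mean squared length ~1
lengths = mean_sq_lengths(x, widths)
```

Note that the small sum of reciprocal widths here (20/512) also keeps the variance of the activation length controlled, matching the paper's second criterion for fully connected nets; with very narrow layers the per-run fluctuations would be much larger even with correct variance 2/fan-in.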
@misc{hanin2018start,
abstract = {We identify and study two common failure modes for early training in deep
ReLU nets. For each we give a rigorous proof of when it occurs and how to avoid
it, for fully connected and residual architectures. The first failure mode,
exploding/vanishing mean activation length, can be avoided by initializing
weights from a symmetric distribution with variance 2/fan-in and, for ResNets,
by correctly weighting the residual modules. We prove that the second failure
mode, exponentially large variance of activation length, never occurs in
residual nets once the first failure mode is avoided. In contrast, for fully
connected nets, we prove that this failure mode can happen and is avoided by
keeping constant the sum of the reciprocals of layer widths. We demonstrate
empirically the effectiveness of our theoretical results in predicting when
networks are able to start training. In particular, we note that many popular
initializations fail our criteria, whereas correct initialization and
architecture allows much deeper networks to be trained.},
added-at = {2022-05-04T09:54:15.000+0200},
author = {Hanin, Boris and Rolnick, David},
biburl = {https://www.bibsonomy.org/bibtex/2105e1589de3488fbd1b723d9d8941af8/adulny},
description = {[1803.01719] How to Start Training: The Effect of Initialization and Architecture},
interhash = {2a2d75131904efa454360eff2c519129},
intrahash = {105e1589de3488fbd1b723d9d8941af8},
keywords = {architecture deep-learning from:adulny initialization},
  note = {cite arxiv:1803.01719. Comment: Final Version, 16p, Accepted NIPS 2018},
timestamp = {2022-05-04T09:54:15.000+0200},
title = {How to Start Training: The Effect of Initialization and Architecture},
url = {http://arxiv.org/abs/1803.01719},
year = 2018
}