Abstract
Training a neural network is synonymous with learning the values of the
weights. In contrast, we demonstrate that randomly weighted neural networks
contain subnetworks that achieve impressive performance without ever training
the weight values. We show that, hidden in a randomly weighted Wide ResNet-50,
there is a subnetwork (with random weights) that is smaller than, but matches
the performance of, a ResNet-34 trained on ImageNet. Not only do these
"untrained subnetworks" exist, but we provide an algorithm to effectively find
them. We empirically show that as randomly weighted neural networks with fixed
weights grow wider and deeper, an "untrained subnetwork" approaches the
accuracy of a network with learned weights.
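
The abstract does not spell out the search algorithm itself. As a rough, non-authoritative illustration of the general idea, a subnetwork of a fixed, randomly weighted layer can be selected by learning a score for each weight and keeping only the top-scoring fraction in the forward pass. The sketch below assumes a score-based top-k masking approach with a straight-through gradient estimator; the names TopKMask and ScoredLinear and the sparsity parameter are illustrative, not the paper's API.

# Minimal sketch (assumptions stated above): weights stay frozen at their
# random initialization; only per-weight scores are trained. The forward pass
# keeps the top-(1 - sparsity) fraction of weights by score magnitude, and
# gradients reach the scores through a straight-through estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function

class TopKMask(Function):
    # Binary mask over the highest-magnitude scores; identity gradient backward.
    @staticmethod
    def forward(ctx, scores, sparsity):
        k = int((1.0 - sparsity) * scores.numel())
        mask = torch.zeros_like(scores)
        _, idx = scores.abs().flatten().topk(k)
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the gradient to the scores unchanged.
        return grad_output, None

class ScoredLinear(nn.Module):
    # Linear layer whose random weights are never trained; only scores are.
    def __init__(self, in_features, out_features, sparsity=0.5):
        super().__init__()
        w = torch.empty(out_features, in_features)
        nn.init.kaiming_normal_(w)
        self.weight = nn.Parameter(w, requires_grad=False)  # fixed random weights
        self.scores = nn.Parameter(torch.rand_like(w))      # trainable scores
        self.sparsity = sparsity

    def forward(self, x):
        mask = TopKMask.apply(self.scores, self.sparsity)
        return F.linear(x, self.weight * mask)

# Usage: optimize only the scores; the weight values keep their random init.
layer = ScoredLinear(784, 10)
opt = torch.optim.SGD([layer.scores], lr=0.1)

Only the scores enter the optimizer, so the procedure never trains the weight values; it merely uncovers which fixed random weights to keep, which is one concrete way an "untrained subnetwork" could be found.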