Abstract
Wide neural networks with random weights and biases are Gaussian processes,
as observed by Neal (1995) for shallow networks, and more recently by Lee et
al. (2018) and Matthews et al. (2018) for deep fully-connected networks, as
well as by Novak et al. (2019) and Garriga-Alonso et al. (2019) for deep
convolutional networks. We show that this Neural Network-Gaussian Process
correspondence surprisingly extends to all modern feedforward or recurrent
neural networks composed of multilayer perceptrons, RNNs (e.g., LSTMs, GRUs),
(nD or graph) convolutions, pooling, skip connections, attention, batch
normalization, and/or layer normalization. More generally, we introduce a
language for expressing neural network computations, and our result encompasses
all such expressible neural networks. This work serves as a tutorial on the
*tensor programs* technique formulated in Yang (2019) and elucidates the
Gaussian Process results obtained there. We provide open-source implementations
of the Gaussian Process kernels of simple RNN, GRU, transformer, and
batchnorm+ReLU network at github.com/thegregyang/GP4A.
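To make the correspondence concrete, here is a minimal numerical sketch (illustrative only, not code from the GP4A repository) of the classic shallow case due to Neal (1995): for a one-hidden-layer ReLU network with i.i.d. Gaussian weights and biases, the covariance of the outputs over random initializations approaches, as the width grows, the analytic NNGP (arc-cosine) kernel. The width, the number of sampled networks, and the hyperparameters sigma_w, sigma_b, sigma_v below are assumed, illustrative choices.

# Sketch: compare the empirical output covariance of wide random ReLU networks
# against the analytic one-hidden-layer NNGP kernel (assumed hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
d, n, n_nets = 3, 8192, 4000          # input dim, hidden width, number of random networks
sigma_w, sigma_b, sigma_v = 1.0, 0.5, 1.0

x1 = rng.normal(size=d)
x2 = rng.normal(size=d)
X = np.stack([x1, x2])                # two inputs, shape (2, d)

# Analytic NNGP kernel: preactivation covariance, then the ReLU (arc-cosine) expectation
# E[relu(u) relu(v)] = sqrt(K11*K22)/(2*pi) * (sin(theta) + (pi - theta)*cos(theta)).
Lam = sigma_w**2 / d * X @ X.T + sigma_b**2           # 2x2 preactivation covariance
std = np.sqrt(np.diag(Lam))
cos_t = np.clip(Lam / np.outer(std, std), -1.0, 1.0)
theta = np.arccos(cos_t)
K_analytic = sigma_v**2 * np.outer(std, std) / (2 * np.pi) * (
    np.sin(theta) + (np.pi - theta) * np.cos(theta))

# Monte Carlo: sample many finite-width networks and estimate the output covariance.
outs = np.empty((n_nets, 2))
for i in range(n_nets):
    W = rng.normal(0, sigma_w / np.sqrt(d), size=(n, d))
    b = rng.normal(0, sigma_b, size=n)
    v = rng.normal(0, sigma_v / np.sqrt(n), size=n)
    h = np.maximum(W @ X.T + b[:, None], 0)           # hidden activations, shape (n, 2)
    outs[i] = v @ h                                   # scalar output for each input

K_empirical = outs.T @ outs / n_nets                  # outputs are zero-mean
print("analytic NNGP kernel:\n", K_analytic)
print("empirical covariance over random networks:\n", K_empirical)

At this width the two 2x2 matrices should agree to within Monte Carlo error; the deep and recurrent kernels treated in the paper are obtained by iterating this kind of Gaussian-integral computation layer by layer.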
Description
[1910.12478] Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes