Abstract
We describe the class of convexified convolutional neural networks (CCNNs),
which capture the parameter sharing of convolutional neural networks in a
convex manner. By representing the nonlinear convolutional filters as vectors
in a reproducing kernel Hilbert space, we can express the CNN parameters as a
low-rank matrix; relaxing the rank constraint then yields a convex optimization
problem. For learning two-layer convolutional neural networks, we prove that
the generalization error obtained by a convexified CNN converges to that of the
best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise
manner. Empirically, CCNNs achieve performance competitive with CNNs trained by
backpropagation, SVMs, fully-connected neural networks, stacked denoising
auto-encoders, and other baseline methods.
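
As a rough illustration of the convex formulation, the following minimal
sketch trains one layer by projected gradient descent, assuming the low-rank
parameter constraint is relaxed to a nuclear-norm ball and a squared loss is
used as the convex surrogate; the function names, array shapes, and
hyperparameters are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def project_nuclear_ball(A, radius):
        # Euclidean projection onto {A : ||A||_* <= radius}: take an SVD and
        # project the singular values onto the simplex of size `radius`
        # (numpy returns singular values in descending order already).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        if s.sum() <= radius:
            return A
        cssv = np.cumsum(s) - radius
        idx = np.arange(1, len(s) + 1)
        rho = idx[s - cssv / idx > 0][-1]
        theta = cssv[rho - 1] / rho
        s = np.maximum(s - theta, 0.0)
        return (U * s) @ Vt

    def train_ccnn_layer(Z, Y, radius=50.0, lr=0.1, iters=500):
        """Projected gradient descent on a nuclear-norm-constrained objective.

        Z : (n, P, d) kernelized patch features, P patches per example
        Y : (n, k) one-hot label matrix

        A CNN with r filters w_j and linear output weights a_j corresponds
        to A = sum_j w_j a_j^T, a matrix of rank <= r; the nuclear-norm
        ball is a convex relaxation of that rank constraint, so the whole
        problem below is convex in A.
        """
        n, P, d = Z.shape
        k = Y.shape[1]
        A = np.zeros((d, P * k))
        for _ in range(iters):
            A3 = A.reshape(d, P, k)
            pred = np.einsum('npd,dpk->nk', Z, A3)   # per-class scores
            resid = pred - Y
            grad = np.einsum('npd,nk->dpk', Z, resid).reshape(d, P * k) / n
            A = project_nuclear_ball(A - lr * grad, radius)
        return A

After training, a rank-r factorization of the learned matrix (e.g. its top-r
SVD) can be read as recovered filters and output weights, in the spirit of the
paper's recovery step; the exact post-processing here is likewise a sketch.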