Abstract
Deep Convolutional Neural Networks (DCNNs) are currently the method of choice
for both generative and discriminative learning in computer vision
and machine learning. The success of DCNNs can be attributed to the careful
selection of their building blocks (e.g., residual blocks, rectifiers,
sophisticated normalization schemes, to mention but a few). In this paper, we
propose $\Pi$-Nets, a new class of DCNNs. $\Pi$-Nets are polynomial neural
networks, i.e., the output is a high-order polynomial of the input. $\Pi$-Nets
can be implemented using a special kind of skip connection, and their parameters
can be represented via high-order tensors. We empirically demonstrate that
$\Pi$-Nets have better representation power than standard DCNNs and even
produce good results without non-linear activation functions across a
large battery of tasks and signals, i.e., images, graphs, and audio. When used
in conjunction with activation functions, $\Pi$-Nets produce state-of-the-art
results in challenging tasks, such as image generation. Lastly, our framework
elucidates why recent generative models, such as StyleGAN, improve upon their
predecessors, e.g., ProGAN.
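
The abstract describes the core idea: the network output is a high-order polynomial of the input, realized through multiplicative skip connections. The sketch below is a minimal, illustrative PyTorch interpretation of that idea, not the paper's exact parameterization; the module name, layer sizes, and the specific recursion (a Hadamard product between an injected linear term and the running representation, plus an additive skip) are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PolynomialBlock(nn.Module):
    """Minimal sketch: a degree-N polynomial expansion of the input,
    built from multiplicative (Hadamard-product) skip connections.
    Names and dimensions are illustrative, not taken from the paper."""

    def __init__(self, in_dim, hidden_dim, out_dim, degree=3):
        super().__init__()
        self.degree = degree
        # One linear map per degree re-injects the raw input z at every step.
        self.inject = nn.ModuleList(
            [nn.Linear(in_dim, hidden_dim, bias=False) for _ in range(degree)]
        )
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, z):
        # x starts as a first-order (linear) term in z.
        x = self.inject[0](z)
        for n in range(1, self.degree):
            # The Hadamard product raises the polynomial degree by one;
            # the additive skip keeps all lower-order terms.
            x = self.inject[n](z) * x + x
        # The output is a degree-`degree` polynomial of z,
        # with no non-linear activation functions anywhere.
        return self.out(x)

# Example: a cubic polynomial of a 64-dimensional input.
net = PolynomialBlock(in_dim=64, hidden_dim=128, out_dim=10, degree=3)
y = net(torch.randn(8, 64))
```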