Deep Learning and the Information Bottleneck Principle
N. Tishby and N. Zaslavsky (2015). arXiv:1503.02406. Comment: 5 pages, 2 figures, invited paper to ITW 2015; 2015 IEEE Information Theory Workshop (ITW) (IEEE ITW 2015).
Abstract
Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the
information bottleneck (IB) principle. We first show that any DNN can be
quantified by the mutual information between the layers and the input and
output variables. Using this representation we can calculate the optimal
information theoretic limits of the DNN and obtain finite sample generalization
bounds. The advantage of getting closer to the theoretical limit is
quantifiable both by the generalization bound and by the network's simplicity.
We argue that both the optimal architecture, number of layers and
features/connections at each layer, are related to the bifurcation points of
the information bottleneck tradeoff, namely, relevant compression of the input
layer with respect to the output layer. The hierarchical representations at the
layered network naturally correspond to the structural phase transitions along
the information curve. We believe that this new insight can lead to new
optimality bounds and deep learning algorithms.
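The abstract's central quantity is the mutual information between a layer's representation and the input or output variables. As a rough illustration only (this sketch is not from the paper; the binning scheme and function names here are generic assumptions, in the spirit of later empirical follow-up work), mutual information between discretized layer activations and labels can be estimated from a plug-in joint histogram:

```python
import numpy as np

def mutual_information(x_ids, t_ids):
    """Plug-in estimate of I(X; T) in bits for two integer-coded discrete variables."""
    joint = np.zeros((x_ids.max() + 1, t_ids.max() + 1))
    for x, t in zip(x_ids, t_ids):
        joint[x, t] += 1
    joint /= joint.sum()                       # empirical joint distribution p(x, t)
    px = joint.sum(axis=1, keepdims=True)      # marginal p(x)
    pt = joint.sum(axis=0, keepdims=True)      # marginal p(t)
    nz = joint > 0                             # avoid log(0) on empty cells
    return float((joint[nz] * np.log2(joint[nz] / (px @ pt)[nz])).sum())

def bin_activations(t, n_bins=30):
    """Discretize continuous activations (one row per sample), then code each
    unique binned row as a single symbol so the layer becomes a discrete variable."""
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    binned = np.digitize(t, edges)
    _, ids = np.unique(binned, axis=0, return_inverse=True)
    return ids
```

For example, a perfectly informative uniform 4-symbol representation gives 2 bits, while a constant representation gives 0 bits. Plug-in estimates like this are biased for small samples and fine bins, which is one reason estimating these quantities for real DNN layers is itself a research question.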
Description
[1503.02406] Deep Learning and the Information Bottleneck Principle
%0 Generic
%1 tishby2015learning
%A Tishby, Naftali
%A Zaslavsky, Noga
%D 2015
%K compression generalization information optimization readings sparsity
%T Deep Learning and the Information Bottleneck Principle
%U http://arxiv.org/abs/1503.02406
%X Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the
information bottleneck (IB) principle. We first show that any DNN can be
quantified by the mutual information between the layers and the input and
output variables. Using this representation we can calculate the optimal
information theoretic limits of the DNN and obtain finite sample generalization
bounds. The advantage of getting closer to the theoretical limit is
quantifiable both by the generalization bound and by the network's simplicity.
We argue that both the optimal architecture, number of layers and
features/connections at each layer, are related to the bifurcation points of
the information bottleneck tradeoff, namely, relevant compression of the input
layer with respect to the output layer. The hierarchical representations at the
layered network naturally correspond to the structural phase transitions along
the information curve. We believe that this new insight can lead to new
optimality bounds and deep learning algorithms.
@misc{tishby2015learning,
abstract = {Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the
information bottleneck (IB) principle. We first show that any DNN can be
quantified by the mutual information between the layers and the input and
output variables. Using this representation we can calculate the optimal
information theoretic limits of the DNN and obtain finite sample generalization
bounds. The advantage of getting closer to the theoretical limit is
quantifiable both by the generalization bound and by the network's simplicity.
We argue that both the optimal architecture, number of layers and
features/connections at each layer, are related to the bifurcation points of
the information bottleneck tradeoff, namely, relevant compression of the input
layer with respect to the output layer. The hierarchical representations at the
layered network naturally correspond to the structural phase transitions along
the information curve. We believe that this new insight can lead to new
optimality bounds and deep learning algorithms.},
author = {Tishby, Naftali and Zaslavsky, Noga},
biburl = {https://www.bibsonomy.org/bibtex/27aa25b7d9b220a879c56978ca8d744f8/kirk86},
description = {[1503.02406] Deep Learning and the Information Bottleneck Principle},
keywords = {compression generalization information optimization readings sparsity},
note = {arXiv:1503.02406. Comment: 5 pages, 2 figures, invited paper to ITW 2015; 2015 IEEE Information Theory Workshop (ITW) (IEEE ITW 2015)},
title = {Deep Learning and the Information Bottleneck Principle},
url = {http://arxiv.org/abs/1503.02406},
year = 2015
}