Open Access Proceedings Article

Deep learning and the information bottleneck principle

TLDR
It is argued that the optimal architecture (the number of layers and the features/connections at each layer) is related to the bifurcation points of the information bottleneck tradeoff, namely the relevant compression of the input layer with respect to the output layer.
Abstract
Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between the layers and the input and output variables. Using this representation we can calculate the optimal information-theoretic limits of the DNN and obtain finite-sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that the optimal architecture (the number of layers and the features/connections at each layer) is related to the bifurcation points of the information bottleneck tradeoff, namely the relevant compression of the input layer with respect to the output layer. The hierarchical representations of the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.
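
The quantities at the heart of this analysis are the mutual information values I(X;T) and I(T;Y) between a hidden layer T and the input X and label Y. As a minimal sketch of how one point on the information plane can be estimated in practice, here is an activation-binning estimator; binning is a common but approximate choice that the paper itself does not prescribe, and all function names and the bin count are illustrative:

```python
import numpy as np

def discrete_mutual_information(a, b):
    """Mutual information I(A;B) in bits between two arrays of integer labels."""
    joint = np.zeros((a.max() + 1, b.max() + 1))
    for ai, bi in zip(a, b):
        joint[ai, bi] += 1
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)   # marginal of A
    pb = joint.sum(axis=0, keepdims=True)   # marginal of B
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

def information_plane_point(x_ids, layer_activations, y_ids, n_bins=30):
    """Estimate (I(X;T), I(T;Y)) for one hidden layer T.

    x_ids, y_ids: integer identifiers of inputs / labels.
    layer_activations: (n_samples, n_units) array of activations.
    Binning turns the continuous T into a discrete variable; the bin
    count is a free parameter that strongly affects the estimate.
    """
    edges = np.linspace(layer_activations.min(), layer_activations.max(), n_bins + 1)
    binned = np.digitize(layer_activations, edges[1:-1])
    # Treat each distinct binned activation pattern as one state of T.
    _, t_ids = np.unique(binned, axis=0, return_inverse=True)
    return (discrete_mutual_information(x_ids, t_ids),
            discrete_mutual_information(t_ids, y_ids))
```

Computing one such pair per layer, and per training epoch, traces where each layer sits relative to the optimal IB tradeoff curve.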


Citations
Journal Article

A novel tensor-information bottleneck method for multi-input single-output applications

TL;DR: A novel tensor information channel is proposed, which extends the current single-input single-output matrix information channel to a more practical multi-input single-output tensor information channel and thereby allows a wider range of practical applications.
Journal Article

Improving VAE-based Representation Learning

TL;DR: It is shown that when the decoder prefers to learn local features, the remaining global features are well captured by the latent variable, which significantly improves performance on a downstream classification task.
Posted Content

PIE: Pseudo-Invertible Encoder

TL;DR: This work introduces a new class of likelihood-based autoencoders with a pseudo-bijective architecture, called Pseudo Invertible Encoders, and provides a theoretical explanation of their principles.
Journal Article

DEMI: Discriminative Estimator of Mutual Information

TL;DR: It is shown theoretically that the proposed method and other variational approaches are equivalent when they achieve their optimum, even though the proposed approach does not itself optimize a variational bound.
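
The discriminative idea behind such estimators can be illustrated concretely: train a classifier to distinguish samples of the joint distribution p(x, y) from pairs drawn from the product of marginals p(x)p(y); with balanced classes, the classifier's posterior odds estimate the density ratio whose expected log under the joint is the mutual information. A toy sketch of this general recipe follows; it is not DEMI's actual implementation, and the Gaussian data and all names are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy correlated data: y = x + noise, so I(X;Y) > 0
# (analytically about 0.8 nats for this noise level).
n = 20000
x = rng.normal(size=(n, 1))
y = x + 0.5 * rng.normal(size=(n, 1))
y_shuffled = rng.permutation(y)  # breaks the dependence: ~ p(x)p(y)

def phi(x, y):
    # Quadratic features, so a linear classifier can represent the
    # Gaussian log density ratio, which is quadratic in (x, y).
    return np.hstack([x, y, x * y, x ** 2, y ** 2])

features = np.vstack([phi(x, y), phi(x, y_shuffled)])
labels = np.concatenate([np.ones(n), np.zeros(n)])
clf = LogisticRegression(max_iter=1000).fit(features, labels)

# With balanced classes, p/(1-p) estimates p(x, y) / (p(x) p(y)),
# and its mean log under the joint estimates I(X;Y).
p = clf.predict_proba(phi(x, y))[:, 1]
print(f"Estimated I(X;Y) ~ {np.mean(np.log(p / (1 - p))):.3f} nats")
```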
Journal Article

An exploration of mutual information based on emotion-cause pair extraction

TL;DR: In this article, Li et al. explore how emotion-cause pairs and non-emotion-cause pairs differ in their mutual information, and further probe the relations among mutual information, emotion-cause pairs, and their relative distance.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, state-of-the-art ImageNet classification performance was achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
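
That layer stack can be written down directly. Below is a minimal PyTorch sketch of the described architecture; the filter sizes follow the commonly cited AlexNet configuration, and this is an illustration rather than the authors' exact setup, which also used local response normalization and split weights across two GPUs:

```python
import torch
import torch.nn as nn

# Five convolutional layers (some followed by max-pooling) and
# three fully-connected layers ending in a 1000-way classifier.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 1000),  # logits; softmax is applied in the loss
)

logits = alexnet_like(torch.randn(1, 3, 227, 227))
print(logits.shape)  # torch.Size([1, 1000])
```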
Book

Elements of information theory

TL;DR: The authors examine the role of entropy, inequalities, and randomness in the design and construction of codes.
Journal Article

Reducing the Dimensionality of Data with Neural Networks

TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Journal Article

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Book

Learning Deep Architectures for AI

TL;DR: This work discusses the motivations and principles of learning algorithms for deep architectures, in particular those that exploit unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks for constructing deeper models such as Deep Belief Networks; a minimal sketch of such a building block follows.
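
As a concrete instance of that single-layer building block, here is a minimal NumPy sketch of training a binary Restricted Boltzmann Machine with one step of contrastive divergence (CD-1); the toy data, hyperparameters, and function names are illustrative assumptions, not the book's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, b_vis, b_hid, lr=0.05):
    """One CD-1 update on a batch of binary visible vectors v0."""
    # Positive phase: hidden activations given the data.
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step of reconstruction.
    pv1 = sigmoid(h0 @ W.T + b_vis)
    ph1 = sigmoid(pv1 @ W + b_hid)
    # Gradient approximation: data correlations minus model correlations.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)

# Toy usage: 64 visible units, 16 hidden units, random binary data.
n_vis, n_hid = 64, 16
W = 0.01 * rng.normal(size=(n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
data = (rng.random((256, n_vis)) < 0.3).astype(float)
for _ in range(100):
    cd1_step(data, W, b_vis, b_hid)
```

Stacking such RBMs greedily, each trained on the hidden activities of the one below, is the Deep Belief Network construction the review describes.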