Deep learning and the information bottleneck principle
Naftali Tishby, Noga Zaslavsky
pp. 1–5
TLDR
It is argued that the optimal architecture, that is, the number of layers and the features/connections at each layer, is related to the bifurcation points of the information bottleneck tradeoff, namely, the relevant compression of the input layer with respect to the output layer.
Abstract
Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle. We first show that any DNN can be quantified by the mutual information between its layers and the input and output variables. Using this representation, we can calculate the optimal information-theoretic limits of the DNN and obtain finite-sample generalization bounds. The advantage of getting closer to the theoretical limit is quantifiable both by the generalization bound and by the network's simplicity. We argue that the optimal architecture, that is, the number of layers and the features/connections at each layer, is related to the bifurcation points of the information bottleneck tradeoff, namely, the relevant compression of the input layer with respect to the output layer. The hierarchical representations of the layered network naturally correspond to the structural phase transitions along the information curve. We believe that this new insight can lead to new optimality bounds and deep learning algorithms.
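For reference, the information bottleneck tradeoff invoked in the abstract is usually stated as a variational problem over the encoder distribution. A minimal sketch of the standard IB Lagrangian, with notation assumed from the original Tishby, Pereira, and Bialek formulation rather than spelled out on this page:

```latex
% Information bottleneck: find a compressed representation T of the input X
% that preserves information about the label Y. The tradeoff parameter
% \beta \ge 0 controls compression against prediction.
\min_{p(t \mid x)} \; \mathcal{L}[p(t \mid x)] = I(X;T) - \beta\, I(T;Y),
\qquad \text{subject to the Markov chain } Y \to X \to T .
```

The bifurcation points mentioned above are the critical values of \(\beta\) at which the optimal solution changes structure as one moves along the information curve.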
Citations
Journal Article
Image Reconstruction is a New Frontier of Machine Learning
TL;DR: This special issue focuses on data-driven tomographic reconstruction and covers the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images, and then to extracted diagnostic features/readings.
Posted Content
InfoVAE: Information Maximizing Variational Autoencoders
TL;DR: The proposed model is shown to significantly improve the quality of the variational posterior and to make effective use of the latent features regardless of the flexibility of the decoding distribution, and it is demonstrated to outperform competing approaches on multiple performance metrics.
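The summary above does not spell out the mechanism; a common instantiation of InfoVAE replaces part of the ELBO's KL term with a divergence between the aggregated posterior and the prior, often maximum mean discrepancy (MMD). A minimal sketch of an MMD estimate between posterior and prior samples, assuming PyTorch; the kernel bandwidth and function names here are illustrative, not taken from the paper:

```python
import torch

def rbf_kernel(a: torch.Tensor, b: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Gaussian (RBF) kernel matrix between two batches of latent codes."""
    sq_dists = torch.cdist(a, b) ** 2            # pairwise squared distances
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd(z_posterior: torch.Tensor, z_prior: torch.Tensor) -> torch.Tensor:
    """Simple (biased) MMD^2 estimate between q(z) samples and p(z) samples."""
    k_qq = rbf_kernel(z_posterior, z_posterior).mean()
    k_pp = rbf_kernel(z_prior, z_prior).mean()
    k_qp = rbf_kernel(z_posterior, z_prior).mean()
    return k_qq + k_pp - 2 * k_qp

# Usage sketch: penalize mismatch between the aggregated posterior and a N(0, I) prior.
z_q = torch.randn(128, 16)          # stand-in for encoder outputs
z_p = torch.randn(128, 16)          # samples from the prior
penalty = mmd(z_q, z_p)             # added to the reconstruction loss with a weight
```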
Proceedings Article
Manifold Mixup: Better Representations by Interpolating Hidden States
Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, Yoshua Bengio
TL;DR: Manifold Mixup leverages semantic interpolations of hidden states as an additional training signal, yielding neural networks with smoother decision boundaries at multiple levels of representation; as a result, networks trained with Manifold Mixup learn class representations with fewer directions of variance.
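As a concrete illustration of the interpolation step, here is a minimal sketch of mixing hidden states at one layer, assuming PyTorch; the split of the network into `features`/`classifier` and the Beta parameter are illustrative choices, not taken from the paper:

```python
import torch
from torch.distributions import Beta

def manifold_mixup_step(features, classifier, x, y_onehot, alpha: float = 2.0):
    """One forward pass with mixup applied to hidden states.

    features  : module mapping inputs to hidden states (the chosen mixing layer)
    classifier: module mapping hidden states to logits
    """
    lam = Beta(alpha, alpha).sample()            # mixing coefficient in (0, 1)
    h = features(x)                              # hidden representations
    perm = torch.randperm(x.size(0))             # random pairing within the batch
    h_mix = lam * h + (1 - lam) * h[perm]        # interpolate hidden states
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # interpolate soft targets
    logits = classifier(h_mix)
    log_probs = torch.log_softmax(logits, dim=1)
    return -(y_mix * log_probs).sum(dim=1).mean()        # soft-target cross-entropy
```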
Proceedings Article
On the Information Bottleneck Theory of Deep Learning
Andrew M. Saxe, Yamini Bansal, Joel Dapello, Madhu Advani, Artemy Kolchinsky, Brendan D. Tracey, David D. Cox
TL;DR: This article showed that the information-plane trajectory is predominantly a function of the neural nonlinearity employed: double-sided saturating nonlinearities such as tanh yield a compression phase as neural activations enter the saturation regime, whereas linear activation functions and single-sided saturating nonlinearities such as ReLU do not.
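The information-plane trajectories in this line of work are typically estimated by discretizing activations and counting. A minimal sketch of that binning estimator, assuming NumPy; the bin count is an illustrative choice, and the identity I(X;T) = H(T) holds here because the layer T is a deterministic function of the input X:

```python
import numpy as np

def binned_entropy(activations: np.ndarray, n_bins: int = 30) -> float:
    """Estimate H(T) by discretizing a layer's activations into bins.

    activations: array of shape (n_samples, n_units), e.g. tanh outputs in [-1, 1].
    """
    lo, hi = activations.min(), activations.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    digitized = np.digitize(activations, edges)          # bin index per unit
    # Each distinct row of bin indices is one discrete state of T.
    _, counts = np.unique(digitized, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# For a deterministic layer T = f(X), the binned estimate of I(X;T) equals H(T):
acts = np.tanh(np.random.randn(4096, 8))
print(binned_entropy(acts))   # bits the binned layer carries about its input sample
```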
Proceedings Article
Neural Sign Language Translation
TL;DR: This work formalizes SLT in the framework of Neural Machine Translation (NMT) for both end-to-end and pretrained settings (using expert knowledge), making it possible to jointly learn the spatial representations, the underlying language model, and the mapping between sign and spoken language.
References
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: State-of-the-art image classification performance was achieved with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax.
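A minimal sketch of that layer layout, assuming PyTorch and a 3×224×224 input; the channel widths follow the commonly cited AlexNet configuration, and details such as local response normalization and dropout are omitted:

```python
import torch.nn as nn

# Five conv layers (some followed by max pooling) + three fully connected layers.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),   # 1000-way output; softmax is applied inside the loss
)
```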
Book
Elements of information theory
Thomas M. Cover, Joy A. Thomas
TL;DR: The authors examine the role of entropy, inequalities, and randomness in the design and construction of codes.
Journal Article
Reducing the Dimensionality of Data with Neural Networks
TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
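A minimal sketch of the autoencoder shape involved, assuming PyTorch; the original work initializes these weights with layer-wise RBM pretraining, whereas this sketch only shows the encoder/decoder pair that replaces PCA's linear projection (the 784-1000-500-250-30 sizes follow the paper's MNIST setup):

```python
import torch.nn as nn

code_dim = 30   # size of the low-dimensional code

encoder = nn.Sequential(                 # 784 -> 30, nonlinear analogue of PCA
    nn.Linear(784, 1000), nn.Sigmoid(),
    nn.Linear(1000, 500), nn.Sigmoid(),
    nn.Linear(500, 250), nn.Sigmoid(),
    nn.Linear(250, code_dim),
)
decoder = nn.Sequential(                 # 30 -> 784, reconstructs the input
    nn.Linear(code_dim, 250), nn.Sigmoid(),
    nn.Linear(250, 500), nn.Sigmoid(),
    nn.Linear(500, 1000), nn.Sigmoid(),
    nn.Linear(1000, 784),
)
autoencoder = nn.Sequential(encoder, decoder)  # trained to minimize reconstruction error
```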
Journal Article
Representation Learning: A Review and New Perspectives
TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Book
Learning Deep Architectures for AI
TL;DR: This book discusses the motivations and principles of learning algorithms for deep architectures, in particular those that exploit unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks for constructing deeper models such as Deep Belief Networks.
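Since the Restricted Boltzmann Machine is the building block named above, here is a minimal sketch of one contrastive-divergence (CD-1) update for a binary RBM, assuming NumPy; the variable names and learning rate are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_vis, b_hid, lr=0.01, rng=np.random.default_rng()):
    """One CD-1 step for a binary RBM; v0 has shape (batch, n_visible)."""
    # Positive phase: sample hidden units given the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)
    # Gradient approximation: data correlations minus reconstruction correlations.
    batch = v0.shape[0]
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / batch
    b_vis += lr * (v0 - p_v1).mean(axis=0)
    b_hid += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_vis, b_hid
```

Stacking such layers, each trained on the hidden activations of the one below, is how the Deep Belief Networks mentioned above are constructed.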