Deep Haar scattering networks
TLDR
In this article, an orthogonal Haar scattering transform is proposed to obtain sparse representations of training data with an algorithm of polynomial complexity, even when the graph connectivity is unknown.
Abstract
An orthogonal Haar scattering transform is a deep network computed with a hierarchy of additions, subtractions and absolute values over pairs of coefficients. Unsupervised learning optimizes Haar pairs to obtain sparse representations of training data with an algorithm of polynomial complexity. For signals defined on a graph, a Haar scattering is computed by cascading orthogonal Haar wavelet transforms on the graph, with Haar wavelets having connected supports. It defines a representation which is invariant to local displacements of signal values on the graph. When the graph connectivity is unknown, unsupervised Haar learning can provide a consistent estimation of connected wavelet supports. Classification results are given on image data bases, defined on regular grids or graphs, with a connectivity which may be known or unknown.
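The core operation in the abstract admits a minimal sketch: one Haar scattering layer maps each pair of coefficients (a, b) to (a + b, |a - b|), and layers are cascaded. This illustration pairs adjacent coefficients for simplicity; in the paper the pairing itself is optimized by unsupervised learning, so the code below is a sketch of the layer arithmetic only, with hypothetical function names.

```python
import numpy as np

def haar_scattering_layer(x):
    """One orthogonal Haar scattering layer: each pair of coefficients
    (a, b) is mapped to the sum a + b and the absolute difference |a - b|.
    Adjacent coefficients are paired here for illustration; the paper
    learns the pairing from training data."""
    a, b = x[0::2], x[1::2]
    return np.concatenate([a + b, np.abs(a - b)])

def haar_scattering(x, depth):
    """Cascade `depth` layers; len(x) must be a power of two."""
    for _ in range(depth):
        x = haar_scattering_layer(x)
    return x
```

For example, `haar_scattering_layer(np.array([1.0, 3.0, 2.0, 2.0]))` gives `[4.0, 4.0, 2.0, 0.0]`: each output is unchanged when the two values within a pair are swapped, which is the source of the invariance to local displacements mentioned in the abstract.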
Citations
Journal Article
Geometric Deep Learning: Going beyond Euclidean data
TL;DR: In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques.
Journal Article
Geometric deep learning: going beyond Euclidean data
TL;DR: Deep neural networks are used to solve a broad range of problems in computer vision, natural-language processing, and audio analysis, where the invariances of the underlying structures are built into the networks used to model them.
Journal Article
On invariance and selectivity in representation learning
TL;DR: This paper builds on the idea that data representations learned in an unsupervised manner can be key to solving the problem of learning "good" representations, which can lower the need for labeled data in machine learning.
Posted Content
On Invariance and Selectivity in Representation Learning
TL;DR: In this paper, the authors discuss data representations that can be learned automatically from data, are invariant to transformations, and are at the same time selective, in the sense that two points have the same representation only if one is a transformation of the other.
Journal Article
GSNs: generative stochastic networks
Guillaume Alain, Yoshua Bengio, Li Yao, Jason Yosinski, Éric Thibodeau-Laufer, Saizheng Zhang, Pascal Vincent +6 more
TL;DR: A novel training principle for generative probabilistic models that is an alternative to maximum likelihood, offering an interesting justification for dependency networks and generalized pseudolikelihood, and defining an appropriate joint distribution and sampling mechanism even when the conditionals are not consistent.
References
Journal Article
Reducing the Dimensionality of Data with Neural Networks
TL;DR: In this article, an effective way of initializing the weights is described that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Journal Article
A fast learning algorithm for deep belief nets
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Posted Content
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
TL;DR: This work proposes a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified linear unit, and derives a robust initialization method that explicitly accounts for the rectifier nonlinearities.
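The PReLU summarized above has a simple elementwise form: f(x) = x when x > 0 and f(x) = a·x otherwise, where the slope a is learned during training. The standalone function below is an illustrative sketch, not the paper's implementation; in the paper a is a per-channel parameter, initialized to 0.25.

```python
import numpy as np

def prelu(x, a=0.25):
    """Parametric ReLU: identity for positive inputs, slope `a` for
    negative inputs. `a` is learned in the paper (one per channel);
    0.25 matches the paper's initialization."""
    return np.where(x > 0, x, a * x)
```

With a = 0, this reduces to the ordinary ReLU; with a fixed small constant, to a leaky ReLU. Letting a be learned is the generalization the TL;DR refers to.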
Journal Article
Representation Learning: A Review and New Perspectives
TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Journal Article
Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups
Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdelrahman Mohamed, Navdeep Jaitly, Andrew W. Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, Brian Kingsbury +10 more
TL;DR: This article provides an overview of progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.