SciSpace (formerly Typeset)
Author

Geoffrey E. Hinton

Bio: Geoffrey E. Hinton is an academic researcher from Google. The author has contributed to research in topics including artificial neural networks and generative models. The author has an h-index of 157 and has co-authored 414 publications receiving 409,047 citations. Previous affiliations of Geoffrey E. Hinton include the Canadian Institute for Advanced Research and the Max Planck Society.


Papers
Journal Article (DOI)
TL;DR: This work shows how to improve a state-of-the-art neural network language model that converts the previous "context" words into feature vectors and combines these feature vectors linearly to predict the feature vector of the next word.
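
As a rough illustration of the prediction step described in that TL;DR, the sketch below linearly combines the feature vectors of the context words to predict the next word's feature vector and then scores candidate words against that prediction. The embedding matrix, per-position combination matrices and dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, context = 1000, 50, 3

E = rng.normal(size=(vocab_size, dim))      # one feature vector per word (assumed)
C = rng.normal(size=(context, dim, dim))    # one linear map per context position (assumed)

def predict_next_word_scores(context_ids):
    """Combine the context word vectors linearly, then score every word by
    similarity between its feature vector and the predicted feature vector."""
    pred = np.zeros(dim)
    for k, w in enumerate(context_ids):
        pred += C[k] @ E[w]                 # linear combination of context features
    return E @ pred                         # logits; a softmax would give probabilities

scores = predict_next_word_scores([12, 7, 42])
probs = np.exp(scores - scores.max())
probs /= probs.sum()
```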

23 citations

Patent
21 Jun 2013
TL;DR: In this paper, a neural network for labelling aerial images is trained by optimizing an objective function that compensates for noise in the map images, handling both omission noise and registration noise.
Abstract: A system and method for labelling aerial images. A neural network generates predicted map data. The parameters of the neural network are trained by optimizing an objective function which compensates for noise in the map images. The function compensates for both omission noise and registration noise.
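
A minimal sketch of a noise-compensating objective in the spirit of the abstract: the observed map label is treated as a noisy version of the true label, with assumed omission and commission rates. The parametrisation and rates are illustrative, not taken from the patent, and registration noise (small misalignments of the labels) would additionally require marginalising over local translations, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_label_nll(p_pred, observed, theta_omit=0.1, theta_commit=0.01):
    """Negative log-likelihood of the observed labels under a simple noise model:
    a true positive is omitted from the map with probability theta_omit and a
    true negative is spuriously labelled with probability theta_commit
    (both rates are illustrative assumptions)."""
    # probability that the *observed* label is 1, given the network's prediction
    p_obs1 = p_pred * (1 - theta_omit) + (1 - p_pred) * theta_commit
    eps = 1e-12
    return -np.mean(observed * np.log(p_obs1 + eps)
                    + (1 - observed) * np.log(1 - p_obs1 + eps))

# toy usage: predictions and (noisy) observed labels for one patch of pixels
p_pred = np.clip(rng.random(16 * 16), 1e-6, 1 - 1e-6)
observed = (rng.random(16 * 16) < 0.3).astype(float)
loss = noisy_label_nll(p_pred, observed)
```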

22 citations

01 Jan 1998
TL;DR: In this article, the authors describe a hierarchical generative model that selects, from a large collection of available linear units, an appropriate subset to model each observation; the selection mechanism is a corresponding network of binary units, each of which gates the output of a linear unit.
Abstract: We describe a hierarchical generative model that selects from a large collection of available linear units an appropriate subset to model each observation. The selection mechanism is a corresponding network of binary units each of which gates the output of a linear unit. Inference in the binary network is intractable, but the statistics required to learn maximum-likelihood model parameters can be approximated with Gibbs sampling, even if the sampling is so brief that the Markov chain is far from equilibrium.
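
A toy, flattened sketch of the selection mechanism: binary gates choose which linear units contribute to the reconstruction, and a brief Gibbs sweep resamples each gate from its conditional distribution. The Gaussian noise model, the gate prior, and the simplified linear activities are assumptions for illustration, not the paper's full hierarchical model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, dim, sigma2, prior_on = 20, 10, 0.1, 0.2

W = rng.normal(size=(dim, n_units))          # columns are the available linear units

def brief_gibbs_gates(x, n_sweeps=3):
    """Resample each binary gate from its conditional distribution given the
    others; as the abstract notes, a few sweeps of this far-from-equilibrium
    chain are enough to collect the statistics used for learning."""
    s = (rng.random(n_units) < prior_on).astype(float)   # initial gate pattern
    a = W.T @ x                                          # simplified linear activities (assumption)
    for _ in range(n_sweeps):
        for i in range(n_units):
            s_off = s.copy(); s_off[i] = 0.0
            recon_off = W @ (s_off * a)                  # reconstruction with unit i gated off
            recon_on = recon_off + W[:, i] * a[i]        # ... and with unit i gated on
            # log-odds of gate i being on: prior term + Gaussian likelihood term
            log_odds = (np.log(prior_on / (1 - prior_on))
                        + (np.sum((x - recon_off) ** 2)
                           - np.sum((x - recon_on) ** 2)) / (2 * sigma2))
            p_on = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30)))
            s[i] = float(rng.random() < p_on)
    return s

gates = brief_gibbs_gates(rng.normal(size=dim))
```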

22 citations

01 Jan 2001
TL;DR: A new way of interpreting ICA as a probability density model and a new way of fitting this model to data are presented, which suggests different generalizations of the basic ICA algorithm that preserve the computationally attractive property that the hidden activities are a simple deterministic function of the observed data.
Abstract: We present a new way of interpreting ICA as a probability density model and a new way of fitting this model to data. The advantage of our approach is that it suggests simple, novel extensions to overcomplete, undercomplete and multilayer non-linear versions of ICA.

1. ICA AS A CAUSAL GENERATIVE MODEL

Factor analysis is based on a causal generative model in which an observation vector is generated in three stages. First, the activities of the factors (also known as latent or hidden variables) are chosen independently from one-dimensional Gaussian priors. Next, these hidden activities are multiplied by a matrix of weights (the "factor loading" matrix) to produce a noise-free observation vector. Finally, independent Gaussian "sensor noise" is added to each component of the noise-free observation vector. Given an observation vector and a factor loading matrix, it is tractable to compute the posterior distribution of the hidden activities because this distribution is Gaussian, though it generally has off-diagonal terms in the covariance matrix, so it is not as simple as the prior distribution over hidden activities. ICA can also be viewed as a causal generative model [1, 2] that differs from factor analysis in two ways. First, the priors over the hidden activities remain independent but they are non-Gaussian. By itself, this modification would make it intractable to compute the posterior distribution over hidden activities. Tractability is restored by eliminating sensor noise and by using the same number of factors as input dimensions. This ensures that the posterior distribution over hidden activities collapses to a point. Interpreting ICA as a type of causal generative model suggests a number of ways in which it might be generalized, for instance to deal with more hidden units than input dimensions. Most of these generalizations retain marginal independence of the hidden activities and add sensor noise, but fail to preserve the property that the posterior distribution collapses to a point. As a result, inference is intractable and crude approximations are needed to model the posterior distribution, e.g., a MAP estimate in [3], a Laplace approximation in [4, 5] or more sophisticated variational approximations in [6]. (Funded by the Wellcome Trust and the Gatsby Charitable Foundation.)

2. ICA AS AN ENERGY-BASED DENSITY MODEL

We now describe a very different way of interpreting ICA as a probability density model; in the next section we describe how to fit the model to data. The advantage of our energy-based view is that it suggests different generalizations of the basic ICA algorithm which preserve the computationally attractive property that the hidden activities are a simple deterministic function of the observed data. Instead of viewing the hidden factors as stochastic latent variables in a causal generative model, we view them as deterministic functions of the data, with their own parameters. The hidden factors are then used to assign an energy to each possible observation vector.
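
A minimal sketch of the energy-based view for the square, noiseless case described above: the hidden activities are a deterministic linear function of the data, each activity contributes an energy term, and the log-determinant of the filter matrix normalises the density exactly. The heavy-tailed log-cosh prior is an illustrative assumption, not the paper's specific choice.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
W = rng.normal(size=(dim, dim))          # filter matrix: rows map the data to hidden activities

def energy(x, W):
    """Energy-based ICA: hidden activities s = W x are a deterministic function
    of the data, and each contributes -log p(s_j) to the energy. A smooth,
    heavy-tailed (log-cosh) prior is assumed here for illustration."""
    s = W @ x
    return np.sum(np.log(np.cosh(s)))    # -log prior of each activity, up to a constant

def log_density(x, W):
    """For square, noiseless ICA the density normalises exactly:
    log p(x) = -E(x) + log |det W| + const."""
    return -energy(x, W) + np.log(abs(np.linalg.det(W)))

x = rng.normal(size=dim)
print(log_density(x, W))
```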

22 citations

Proceedings Article
01 Dec 1997
TL;DR: In this article, the authors describe a hierarchical, generative model that can be viewed as a non-linear generalisation of factor analysis and can be implemented in a neural network; the model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections.
Abstract: We first describe a hierarchical, generative model that can be viewed as a non-linear generalisation of factor analysis and can be implemented in a neural network. The model performs perceptual inference in a probabilistically consistent manner by using top-down, bottom-up and lateral connections. These connections can be learned using simple rules that require only locally available information. We then show how to incorporate lateral connections into the generative model. The model extracts a sparse, distributed, hierarchical representation of depth from simplified random-dot stereograms and the localised disparity detectors in the first hidden layer form a topographic map. When presented with image patches from natural scenes, the model develops topographically organised local feature detectors.
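
A toy sketch, under assumed and much-simplified update rules rather than the paper's algorithm, of perceptual inference that reconciles bottom-up reconstruction error with a top-down prediction from the layer above, using rectified units; the lateral connections that produce the topographic maps are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h1, d_h2 = 8, 6, 4

W1 = rng.normal(scale=0.3, size=(d_h1, d_in))   # generative weights: h1 reconstructs x via W1.T
W2 = rng.normal(scale=0.3, size=(d_h2, d_h1))   # generative weights: h2 predicts h1 via W2.T

def rectify(u):
    return np.maximum(u, 0.0)                   # rectified units, as in non-linear factor analysis

def perceptual_inference(x, n_iters=10, lr=0.2):
    """Each hidden layer settles to activities that reconcile bottom-up evidence
    from the layer below with the top-down prediction from the layer above
    (an assumed, illustrative update rule, not the paper's exact procedure)."""
    h1 = rectify(W1 @ x)
    h2 = rectify(W2 @ h1)
    for _ in range(n_iters):
        recon_err = x - W1.T @ h1                        # bottom-up: reconstruction error of x
        top_down = W2.T @ h2 - h1                        # top-down: pull toward prediction from h2
        h1 = rectify(h1 + lr * (W1 @ recon_err + top_down))
        h2 = rectify(h2 + lr * (W2 @ (h1 - W2.T @ h2)))
    return h1, h2

h1, h2 = perceptual_inference(rng.normal(size=d_in))
```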

21 citations


Cited by
Proceedings Article (DOI)
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; an ensemble of their residual nets won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
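
The core idea can be sketched in a few lines: a block learns a residual function F(x) and adds the identity shortcut, so the output is F(x) + x. The fully-connected toy below stands in for the paper's convolutional blocks with batch normalisation; dimensions and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64
W1 = rng.normal(scale=0.05, size=(dim, dim))
W2 = rng.normal(scale=0.05, size=(dim, dim))

def relu(u):
    return np.maximum(u, 0.0)

def residual_block(x):
    """Residual learning: the stacked layers learn a residual function F(x)
    and the block outputs F(x) + x through an identity shortcut, which keeps
    very deep stacks easy to optimise."""
    f = W2 @ relu(W1 @ x)     # the learned residual F(x)
    return relu(f + x)        # identity shortcut connection

y = residual_block(rng.normal(size=dim))
```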

123,388 citations

Proceedings Article
01 Jan 2015
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Abstract: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.
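
The update rule itself is short; the sketch below follows the description above, with the default hyper-parameters reported in the paper (the toy objective is only for demonstration).

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (first
    moment) and squared gradient (second moment), with bias correction for
    their initialisation at zero."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy usage: minimise f(theta) = ||theta||^2
theta = np.ones(3)
m = np.zeros(3)
v = np.zeros(3)
for t in range(1, 201):
    grad = 2 * theta                      # gradient of the toy objective
    theta, m, v = adam_step(theta, grad, m, v, t)
```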

111,197 citations

Proceedings Article
03 Dec 2012
TL;DR: The authors trained a large, deep convolutional neural network for ImageNet classification, consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art error rates.
Abstract: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected layers we employed a recently-developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.
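
A minimal sketch of the dropout regulariser mentioned in the abstract, written in the common "inverted dropout" form that scales activations at training time; the original formulation instead scales them at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    """Randomly zero each unit with probability p_drop during training so that
    units cannot co-adapt; scale the survivors so the expected activation is
    unchanged (inverted-dropout convention)."""
    if not training:
        return activations
    mask = (rng.random(activations.shape) >= p_drop).astype(activations.dtype)
    return activations * mask / (1.0 - p_drop)

h = rng.normal(size=(4, 8))               # a batch of hidden activations
h_train = dropout(h, p_drop=0.5, training=True)
h_test = dropout(h, training=False)
```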

73,978 citations

Journal Article (DOI)
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
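
A minimal single-step sketch of the mechanism described above: multiplicative gates control access to an additively updated cell state, the "constant error carousel". This uses the now-standard variant with a forget gate, which the 1997 paper did not yet include; weights and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 16

# one weight matrix per gate plus the cell input, acting on [x_t, h_{t-1}]
Wf, Wi, Wo, Wc = (rng.normal(scale=0.1, size=(n_hidden, n_in + n_hidden)) for _ in range(4))
bf, bi, bo, bc = (np.zeros(n_hidden) for _ in range(4))

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def lstm_step(x_t, h_prev, c_prev):
    """One LSTM step: gates decide what enters and leaves the cell state c,
    whose additive update lets error flow across long time lags."""
    z = np.concatenate([x_t, h_prev])
    f = sigmoid(Wf @ z + bf)                    # forget gate
    i = sigmoid(Wi @ z + bi)                    # input gate
    o = sigmoid(Wo @ z + bo)                    # output gate
    c = f * c_prev + i * np.tanh(Wc @ z + bc)   # gated, additive cell update
    h = o * np.tanh(c)
    return h, c

h = np.zeros(n_hidden); c = np.zeros(n_hidden)
for _ in range(5):
    h, c = lstm_step(rng.normal(size=n_in), h, c)
```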

72,897 citations

Proceedings Article
01 Jan 2015
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Abstract: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
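
The depth argument can be made concrete with a small calculation: stacking 3x3 filters grows the receptive field while using fewer parameters than a single large filter covering the same receptive field. The channel count below is illustrative, not a specific VGG configuration.

```python
def stacked_3x3_stats(n_layers, channels):
    """Each extra 3x3 layer grows the receptive field by 2 pixels, so three
    3x3 layers see a 7x7 region while needing fewer parameters than one
    7x7 filter (assuming equal input and output channel counts)."""
    receptive_field = 1 + 2 * n_layers
    params_stacked = n_layers * (3 * 3 * channels * channels)
    params_single = receptive_field ** 2 * channels * channels
    return receptive_field, params_stacked, params_single

rf, p_stack, p_single = stacked_3x3_stats(n_layers=3, channels=256)
print(rf, p_stack, p_single)   # 7, 1,769,472 parameters vs 3,211,264 for a single 7x7 layer
```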

49,914 citations