
Geoffrey E. Hinton

Researcher at Google

Publications: 426
Citations: 501,778

Geoffrey E. Hinton is an academic researcher at Google. His work centers on artificial neural networks and generative models. He has an h-index of 157 and has co-authored 414 publications receiving 409,047 citations. Previous affiliations include the Canadian Institute for Advanced Research and the Max Planck Society.

Papers
Journal ArticleDOI

Glove-Talk: a neural network interface between a data-glove and a speech synthesizer

TL;DR: To illustrate the potential of multilayer neural networks for adaptive interfaces, a VPL Data-Glove was connected to a DECtalk speech synthesizer via five neural networks, implementing a hand-gesture-to-speech system. The system demonstrates that neural networks can learn the complex mappings required in a high-bandwidth interface that adapts to the individual user.
Journal ArticleDOI

Modeling the manifolds of images of handwritten digits

TL;DR: Two new methods, based on principal components analysis and factor analysis respectively, are described for modeling the manifolds of digitized images of handwritten digits; both construct locally linear low-dimensional approximations to the underlying data manifold.
Proceedings ArticleDOI

Products of experts

TL;DR: This training algorithm suggests a biologically plausible way of learning neural population codes by maximizing the probabilities that the individual models assign to the observed data.
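The core idea of a product of experts can be sketched in a few lines: each expert assigns a probability to every state, the individual probabilities are multiplied pointwise, and the result is renormalized. The expert distributions below are hypothetical values for illustration, not data from the paper.

```python
import numpy as np

# Hypothetical discrete domain: each "expert" assigns a probability
# to each of 5 states (rows are experts, columns are states).
expert_probs = np.array([
    [0.1, 0.4, 0.2, 0.2, 0.1],   # expert 1
    [0.3, 0.3, 0.2, 0.1, 0.1],   # expert 2
    [0.2, 0.5, 0.1, 0.1, 0.1],   # expert 3
])

# A product of experts multiplies the individual probabilities pointwise
# and renormalizes, so a state only keeps high probability when every
# expert considers it plausible.
product = expert_probs.prod(axis=0)
poe = product / product.sum()

print(poe.round(3))
```

Note how the product sharpens the distribution: state 1, which all three experts rank highly, dominates the combined model, while states any single expert vetoes are suppressed.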
Proceedings Article

Binary Coding of Speech Spectrograms Using a Deep Auto-encoder

TL;DR: This paper reports an exploration of the layer-by-layer learning strategy for training a multi-layer generative model of patches of speech spectrograms, and shows that the binary codes learned produce a log-spectral distortion approximately 2 dB lower than a subband vector quantization technique over the entire frequency range of wide-band speech.
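The binary-coding step of such an auto-encoder can be sketched as a forward pass through a logistic code layer whose activations are rounded to 0/1. The weights and input below are random stand-ins for a trained encoder and a real spectrogram patch; this is a minimal illustration, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins for a flattened spectrogram patch and learned encoder weights.
patch = rng.standard_normal(64)
W_enc = rng.standard_normal((16, 64)) * 0.1
b_enc = np.zeros(16)

# The code layer uses logistic units; thresholding their activations at 0.5
# yields the compact binary code used to represent the patch.
activations = sigmoid(W_enc @ patch + b_enc)
binary_code = (activations > 0.5).astype(int)

print(binary_code)
```

In the paper's setting, the encoder weights come from layer-by-layer generative pretraining followed by fine-tuning; only the thresholding idea is shown here.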

Experiments on Learning by Back Propagation.

TL;DR: The learning procedure can discover appropriate weights in this kind of network and can also determine an optimal schedule for varying the nonlinearity of the units during a search.
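Back propagation itself fits in a short script: forward pass, error derivative, and chain-rule propagation through each layer. The sketch below trains a one-hidden-layer network on XOR, a standard task a single-layer network cannot solve; the architecture, learning rate, and seed are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of logistic units, trained by plain gradient descent.
W1 = rng.standard_normal((2, 4))
b1 = np.zeros(4)
W2 = rng.standard_normal((4, 1))
b2 = np.zeros(1)
lr = 1.0

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error derivative through each layer
    # using the chain rule (derivative of the logistic is s * (1 - s)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(((out - y) ** 2).mean())
print(f"final mean squared error: {mse:.4f}")
```

The hidden layer is what lets the network represent XOR at all; back propagation is simply the efficient way of computing the gradient that the weight updates need.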