
Geoffrey E. Hinton

Researcher at Google

Publications: 426
Citations: 501,778

Geoffrey E. Hinton is an academic researcher at Google. His research focuses on topics including artificial neural networks and generative models. He has an h-index of 157 and has co-authored 414 publications receiving 409,047 citations. Previous affiliations of Geoffrey E. Hinton include the Canadian Institute for Advanced Research and the Max Planck Society.

Papers
Journal Article (DOI)

Semantic hashing

TL;DR: A deep graphical model of the word-count vectors obtained from a large set of documents is learned so that each document is mapped to a compact binary code, with semantically similar documents mapped to nearby codes. Retrieval then reduces to finding documents whose codes lie within a small Hamming distance of the query's code, which is extremely fast even for very large document collections.
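The retrieval side of this idea can be illustrated with a toy sketch (hypothetical random codes standing in for the codes the deep model would learn): each document is a short binary code, and lookup is a Hamming-ball search around the query's code.

```python
import numpy as np

# Toy illustration of retrieval with binary semantic codes
# (random codes here, not the learned deep model's output).

def hamming(a, b):
    """Number of bit positions where two binary codes differ."""
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(1000, 32))   # 32-bit codes for 1000 "documents"
query = codes[42].copy()
query[0] ^= 1                                 # query code differs from doc 42 by one bit

# Retrieve every document within Hamming distance 2 of the query.
hits = [i for i, c in enumerate(codes) if hamming(query, c) <= 2]
print(42 in hits)  # True: the near-duplicate document is retrieved
```

In the actual scheme, nearby codes correspond to semantically similar documents, so this cheap bit-level search doubles as semantic search.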
Book

Distributed representations

TL;DR: This report describes distributed representations, a type of representation that is less familiar and harder to think about than local representations but that makes use of the processing abilities of networks of simple, neuron-like computing elements.
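The contrast between the two kinds of representation can be sketched with illustrative vectors (invented for this example, not taken from the report): in a local code each concept gets its own dedicated unit, while in a distributed code each concept is a pattern of activity over many shared units, so related concepts overlap.

```python
import numpy as np

# Local code: one dedicated unit per concept.
local_cat = np.array([1, 0, 0, 0])
local_dog = np.array([0, 1, 0, 0])

# Distributed code: each concept is a pattern over shared feature units,
# so "cat" and "dog" can share units (e.g. furry, four-legged).
dist_cat = np.array([1, 1, 0, 1])
dist_dog = np.array([1, 1, 1, 0])

print(local_cat @ local_dog)  # 0 -- local codes share nothing
print(dist_cat @ dist_dog)    # 2 -- distributed codes expose similarity
```

The overlap is what gives distributed representations their automatic generalization: what the network learns about one pattern partially transfers to patterns that share units with it.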
Posted Content

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer

TL;DR: This work introduces a Sparsely-Gated Mixture-of-Experts (MoE) layer consisting of up to thousands of feed-forward sub-networks, in which a trainable gating network selects a sparse combination of experts to process each example. The MoE is applied to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora.
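A minimal sketch of the mechanism, assuming tanh feed-forward experts and top-k softmax gating (an illustration of the idea, not the authors' implementation): the gate scores all experts, only the k best are evaluated, and their outputs are mixed with renormalized gate weights.

```python
import numpy as np

def moe_layer(x, expert_weights, gate_weights, k=2):
    """x: (d,) input; expert_weights: list of (d, h) matrices;
    gate_weights: (d, n_experts) gating matrix."""
    scores = x @ gate_weights                  # one gating score per expert
    topk = np.argsort(scores)[-k:]             # indices of the k best experts
    g = np.exp(scores[topk] - scores[topk].max())
    g /= g.sum()                               # softmax over the selected experts only
    # Only the selected experts are evaluated -- this sparsity lets total
    # capacity grow with the number of experts without growing the
    # per-example computation.
    return sum(gi * np.tanh(x @ expert_weights[i]) for gi, i in zip(g, topk))

rng = np.random.default_rng(0)
d, h, n_experts = 8, 4, 16
experts = [rng.standard_normal((d, h)) for _ in range(n_experts)]
gate = rng.standard_normal((d, n_experts))
y = moe_layer(rng.standard_normal(d), experts, gate, k=2)
print(y.shape)  # (4,): a weighted mix of 2 of the 16 experts' outputs
```

With 16 experts and k=2, only an eighth of the expert parameters are touched per example, which is how such layers reach very large parameter counts at modest compute.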
Posted Content

Big Self-Supervised Models are Strong Semi-Supervised Learners

TL;DR: The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
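The third step can be sketched as a standard distillation loss (a hedged illustration, not the authors' code; the temperature value and batch shapes are arbitrary): the fine-tuned teacher's softened predictions on unlabeled data serve as targets for a smaller student.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-softened softmax along the last axis."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions, averaged over an (unlabeled) batch."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature))
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

rng = np.random.default_rng(0)
t_logits = rng.standard_normal((32, 10))   # teacher outputs on an unlabeled batch
s_logits = rng.standard_normal((32, 10))   # student outputs on the same batch
loss = distillation_loss(t_logits, s_logits)
print(loss > 0)  # True: cross-entropy against soft targets is positive here
```

Because the targets come from the teacher rather than from labels, this step consumes only unlabeled examples, which is what makes the recipe label-efficient.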
Book Chapter (DOI)

Transforming auto-encoders

TL;DR: It is argued that neural networks can be used to learn features that output a whole vector of instantiation parameters, and that this is a much more promising way of dealing with variations in position, orientation, scale, and lighting than the methods currently employed in the neural networks community.
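A minimal sketch of the capsule idea (illustrative only, not the paper's architecture; the sigmoid presence unit and 2-D position output are simplifying assumptions): each capsule outputs a probability that its visual entity is present plus explicit instantiation parameters, so a known transformation can be applied directly to those parameters instead of being relearned from pixels.

```python
import numpy as np

def capsule_output(presence_logit, position):
    """A toy capsule: probability the entity is present, plus its
    explicit instantiation parameters (here, a 2-D position)."""
    p = 1.0 / (1.0 + np.exp(-presence_logit))
    return p, np.asarray(position, dtype=float)

p, pos = capsule_output(2.0, [3.0, -1.0])

# A known global translation acts directly on the instantiation
# parameters -- no pixel-level relearning needed.
shift = np.array([1.5, 0.5])
pos_shifted = pos + shift
print(p > 0.5, pos_shifted.tolist())
```

The point of the toy: because position is represented explicitly rather than implicitly in activity patterns, equivariance under translation is trivial arithmetic on the output vector.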