Geoffrey E. Hinton
Researcher at Google
Publications - 426
Citations - 501,778
Geoffrey E. Hinton is an academic researcher at Google. He has contributed to research topics including artificial neural networks and generative models. He has an h-index of 157 and has co-authored 414 publications receiving 409,047 citations. His previous affiliations include the Canadian Institute for Advanced Research and the Max Planck Society.
Papers
Posted Content
Unsupervised part representation by Flow Capsules.
TL;DR: This work proposes a novel self-supervised method for learning part descriptors of an image, exploiting motion as a powerful perceptual cue for part definition, and using an expressive decoder for part generation together with layered, occlusion-aware image formation.
Using EM for Reinforcement Learning
Peter Dayan, Geoffrey E. Hinton +1 more
TL;DR: This work discusses Hinton's (1989) relative payoff procedure (RPP), a static reinforcement learning algorithm whose foundation is not stochastic gradient ascent, and shows circumstances under which applying the RPP is guaranteed to increase the mean return.
Proceedings Article
Learning Hierarchical Structures with Linear Relational Embedding
TL;DR: This work presents Linear Relational Embedding, a new method of learning a distributed representation of concepts from data consisting of instances of relations between given concepts, and shows how LRE can be used effectively to find compact distributed representations for variable-sized recursive data structures, such as trees and lists.
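The summary above describes LRE's core idea: concepts are represented as vectors and relations as linear operators, learned so that applying a relation's matrix to one concept's vector lands near the related concept's vector. A minimal sketch of that idea, with illustrative dimensions and synthetic data (not the paper's actual training setup or datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: a hidden "true" linear relation maps concept
# vectors to related concept vectors; we recover a matrix R from
# (a, b) instance pairs of that relation.
dim, n_pairs = 4, 50
true_R = rng.normal(size=(dim, dim))
A = rng.normal(size=(n_pairs, dim))   # concept vectors a_i
B = A @ true_R.T                      # related concepts b_i = R a_i

# Learn R by gradient descent on the mean squared error ||R a_i - b_i||^2.
R = np.zeros((dim, dim))
lr = 0.05
for _ in range(1000):
    pred = A @ R.T
    grad = (pred - B).T @ A / n_pairs  # d(mean loss)/dR
    R -= lr * grad

final_loss = np.mean(np.sum((A @ R.T - B) ** 2, axis=1))
print(final_loss)
```

After training, `R` applied to any concept vector approximately reproduces its relation partner; in LRE the concept vectors themselves are also learned jointly with the relation matrices, which this sketch omits for brevity.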
Posted Content
Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions
TL;DR: It is found that CapsNets always perform better than convolutional networks, and that the resulting perturbations can cause the input image to appear visually more like the target class and hence become non-adversarial.
Journal Article
The ups and downs of Hebb synapses.
TL;DR: Within the neural network community, the "Hebbian" approach of using the product of pre- and postsynaptic activities to drive learning was seen as inferior to error-driven methods that use the product of the presynaptic activity and the derivative of the objective with respect to the postsynaptic activity, that is, the rate at which the objective function changes as the postsynaptic activity is changed.
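The contrast in this summary can be written as two weight-update rules. A minimal sketch for a single linear unit, with illustrative variable names and a squared-error objective chosen for the example (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

pre = rng.normal(size=3)   # presynaptic activities
w = rng.normal(size=3)     # synaptic weights onto one unit
post = w @ pre             # postsynaptic activity (linear unit)
lr = 0.05

# Hebbian rule: weight change proportional to the product of
# pre- and postsynaptic activity.
dw_hebb = lr * post * pre

# Error-driven rule: replace the postsynaptic activity with the
# derivative of an objective w.r.t. that activity. For squared error
# E = 0.5 * (post - target)^2, we have dE/dpost = post - target.
target = 1.0
dE_dpost = post - target
dw_error = -lr * dE_dpost * pre   # gradient-descent step

print(dw_hebb, dw_error)
```

The error-driven step provably moves `post` toward `target` (for a small enough learning rate), whereas the Hebbian step reinforces whatever activity pattern is present regardless of any objective, which is the distinction the abstract draws.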