Journal ArticleDOI

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
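The procedure amounts to gradient descent on this error measure: inputs are propagated forward through the layers, the output vector is compared with the desired one, and the error derivatives are passed backward through the chain rule to yield a weight adjustment for every connection, including those feeding the hidden units. The sketch below illustrates the idea in NumPy; the single hidden layer of sigmoid units, the squared-error measure, the XOR task, the layer sizes, and the learning rate are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of error back-propagation: one hidden layer of sigmoid
# units trained by gradient descent on the squared difference between the
# actual and desired output vectors. Hyperparameters and the XOR task are
# illustrative choices, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR, which a network without hidden units cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for input -> hidden and hidden -> output.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

lr = 0.5
for epoch in range(5000):
    # Forward pass: hidden and output activations.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Difference between actual and desired output vectors.
    err = y - Y

    # Backward pass: propagate error derivatives through the output
    # and hidden layers (chain rule with the sigmoid derivative).
    delta_out = err * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    # Gradient-descent weight adjustments.
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

After training, the printed outputs should approach the desired vector (0, 1, 1, 0), with the hidden units having formed the internal features needed to capture the task's regularities.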


Citations
Journal ArticleDOI

Self-organizing neural network that discovers surfaces in random-dot stereograms

TL;DR: The authors' simulations show that when the learning procedure is applied to adjacent patches of two-dimensional images, it allows a neural network that has no prior knowledge of the third dimension to discover depth in random dot stereograms of curved surfaces.
Journal ArticleDOI

A complementary systems account of word learning: neural and behavioural evidence

TL;DR: A novel theory of the cognitive and neural processes by which adults learn new spoken words is presented, building on neurocomputational accounts of lexical processing and spoken word recognition, and on complementary learning systems (CLS) models of memory.
Journal ArticleDOI

Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition

TL;DR: It is argued that the expressive power of current Bayesian models must be developed in conjunction with mechanistic considerations to offer substantive explanations of cognition, and this unification will better facilitate lasting contributions to psychological theory, avoiding the pitfalls that have plagued previous theoretical movements.
Journal ArticleDOI

Toward an Integration of Deep Learning and Neuroscience

TL;DR: In this paper, the authors argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes.
Journal Article

Breaking the curse of dimensionality with convex neural networks

TL;DR: In this paper, the authors consider neural networks with a single hidden layer and non-decreasing, positively homogeneous activation functions, such as the rectified linear unit, and provide a detailed theoretical analysis of their generalization performance, studying both the approximation and the estimation errors.