Journal Article

Equivalence of backpropagation and contrastive Hebbian learning in a layered network

Xiaohui Xie, H. Sebastian Seung
- 01 Feb 2003 - Neural Computation
- Vol. 15, Iss. 2, pp. 441-454
TLDR
To investigate the relationship between backpropagation and contrastive Hebbian learning, a special case in which they are identical is considered: a multilayer perceptron with linear output units, to which weak feedback connections have been added. The equivalence suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.
Abstract
Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.
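The correspondence described in the abstract is easy to check numerically. Below is a minimal NumPy sketch, not the authors' code, of the paper's setting: a one-hidden-layer perceptron with linear output units and weak feedback of strength gamma. The layer sizes, the fixed-point iteration scheme, and the specific constants are illustrative assumptions; for small gamma, the contrastive Hebbian weight changes (clamped minus free correlations, rescaled per layer by a power of 1/gamma) agree with the backpropagation gradients up to terms of order gamma.

```python
# Minimal sketch: contrastive Hebbian learning vs. backprop in a
# one-hidden-layer perceptron with linear outputs and weak feedback.
# All sizes and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden weights
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output weights
gamma = 1e-3                             # weak feedback strength

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def free_phase(x, n_steps=200):
    """Relax to a fixed point with only the input clamped.
    Feedback to the hidden layer is gamma * W2.T @ y."""
    h = np.zeros(n_hid)
    y = np.zeros(n_out)
    for _ in range(n_steps):
        h = sigmoid(W1 @ x + gamma * W2.T @ y)
        y = W2 @ h                        # linear output units
    return h, y

def clamped_phase(x, d):
    """Fixed point with the output clamped to the target d."""
    h = sigmoid(W1 @ x + gamma * W2.T @ d)
    return h, d

x = rng.normal(size=n_in)
d = rng.normal(size=n_out)
h_f, y_f = free_phase(x)
h_c, y_c = clamped_phase(x, d)

# Contrastive Hebbian updates: clamped minus free correlations,
# with the hidden layer rescaled by 1/gamma.
dW2_chl = np.outer(y_c, h_c) - np.outer(y_f, h_f)
dW1_chl = (np.outer(h_c, x) - np.outer(h_f, x)) / gamma

# Backprop gradients of the squared error 0.5 * ||d - y||^2 for the
# corresponding feedforward pass (the gamma -> 0 limit).
h = sigmoid(W1 @ x)
e = d - W2 @ h                            # output error signal
dW2_bp = np.outer(e, h)
dW1_bp = np.outer((W2.T @ e) * h * (1 - h), x)

print(np.max(np.abs(dW2_chl - dW2_bp)))  # ~ O(gamma)
print(np.max(np.abs(dW1_chl - dW1_bp)))  # ~ O(gamma)
```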


Citations
Book

Deep Learning: Methods and Applications

Li Deng, Dong Yu
TL;DR: This monograph provides an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks, including natural language and text processing, information retrieval, and multimodal information processing empowered by multi-task deep learning.
Journal Article

Random synaptic feedback weights support error backpropagation for deep learning.

TL;DR: A surprisingly simple mechanism is presented that assigns blame by multiplying errors by random synaptic weights; it can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks.
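For concreteness, here is a minimal sketch of the mechanism this paper describes (feedback alignment); the network shape, sigmoid hidden units, and squared-error loss are assumptions for illustration, not details taken from the paper. The backward pass multiplies the output error by a fixed random matrix B instead of the transposed forward weights.

```python
# Minimal feedback-alignment sketch: the backward pass uses a fixed
# random matrix B in place of W2.T. Sizes and loss are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))   # fixed random feedback weights

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = rng.normal(size=n_in)
d = rng.normal(size=n_out)

h = sigmoid(W1 @ x)
e = d - W2 @ h                            # output error

# Backprop would propagate e through W2.T; feedback alignment
# multiplies it by the random matrix B instead.
delta_hidden = (B @ e) * h * (1 - h)
dW2 = np.outer(e, h)
dW1 = np.outer(delta_hidden, x)
```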
Journal Article

Backpropagation and the brain

TL;DR: It is argued that the key principles underlying backprop may indeed have a role in brain function: feedback connections may induce neural activities whose differences can be used to locally approximate error signals and hence drive effective learning in deep networks in the brain.
Journal Article

Toward an Integration of Deep Learning and Neuroscience.

TL;DR: In this paper, the authors argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes.
Journal Article

Short-term memory for serial order: a recurrent neural network model.

TL;DR: An alternative model is presented, according to which sequence information is encoded through sustained patterns of activation within a recurrent neural network architecture, which provides a parsimonious account for numerous benchmark characteristics of immediate serial recall.
References
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result of the weight adjustments, hidden units come to represent important features of the task domain.
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal Article

Neurons with graded response have collective computational properties like those of two-state neurons.

TL;DR: A model of a large network of "neurons" with a graded response (or sigmoid input-output relation) is studied and shown to have collective computational properties in very close correspondence with the earlier stochastic model based on McCulloch-Pitts neurons.
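A minimal sketch of the graded-response dynamics this reference describes; the tanh nonlinearity, time step, and coupling scale are illustrative assumptions. With symmetric couplings the network relaxes toward a fixed point, which is the property the contrastive Hebbian procedure above relies on.

```python
# Minimal sketch of graded-response (continuous Hopfield) dynamics:
# internal state u, graded output V = tanh(u), symmetric couplings T.
# Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 10
T = rng.normal(size=(n, n))
T = (T + T.T) / 2                  # symmetric weights: energy decreases
np.fill_diagonal(T, 0.0)
I = rng.normal(size=n)             # constant input currents

u = np.zeros(n)
dt = 0.05
for _ in range(2000):
    V = np.tanh(u)                 # graded (sigmoid-shaped) response
    u += dt * (-u + T @ V + I)     # leaky relaxation dynamics
# At the fixed point, u = T @ tanh(u) + I.
```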