Proceedings Article

Theory of the backpropagation neural network

Hecht-Nielsen
pp. 593-605
TL;DR
A speculative neurophysiological model illustrating how the backpropagation neural network architecture might plausibly be implemented in the mammalian brain for corticocortical learning between nearby regions of the cerebral cortex is presented.
Abstract
The author presents a survey of the basic theory of the backpropagation neural network architecture, covering architectural design, performance measurement, function approximation capability, and learning. The survey includes previously known material as well as some new results, namely, a formulation of the backpropagation neural network architecture that makes it a valid neural network (past formulations violated the locality-of-processing restriction) and a proof that the backpropagation mean-squared-error function exists and is differentiable. Also included is a theorem showing that any L_2 function from (0, 1)^n to R^m can be implemented to any desired degree of accuracy with a three-layer backpropagation neural network. The author presents a speculative neurophysiological model illustrating how the backpropagation neural network architecture might plausibly be implemented in the mammalian brain for corticocortical learning between nearby regions of the cerebral cortex.
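To make the surveyed machinery concrete, below is a minimal sketch of a three-layer backpropagation network trained by gradient descent on a squared-error loss. The sigmoid nonlinearity, layer sizes, learning rate, and toy XOR data are illustrative assumptions, not details taken from the paper.

```python
# Minimal three-layer backpropagation sketch (assumed details: sigmoid
# units, 8 hidden units, learning rate 0.5, toy XOR data).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
Y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

n_in, n_hid, n_out = 2, 8, 1
W1 = rng.normal(0.0, 1.0, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 1.0, (n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.5

for _ in range(20000):
    # Forward pass through the two weight layers.
    H = sigmoid(X @ W1 + b1)
    P = sigmoid(H @ W2 + b2)

    # Backward pass: chain rule for the squared-error loss.
    dP = (P - Y) * P * (1 - P)       # output-layer delta
    dH = (dP @ W2.T) * H * (1 - H)   # hidden-layer delta

    W2 -= lr * (H.T @ dP); b2 -= lr * dP.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

print("final MSE:", float(np.mean((P - Y) ** 2)))
```

Each update is local in the sense the paper emphasizes: a weight's change depends only on the activity at its two ends and the delta propagated back to its destination unit.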


Citations
Journal Article

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.
Book Chapter

Neural Networks for Pattern Recognition

TL;DR: The chapter discusses two important directions of research for improving learning algorithms: dynamic node generation, as used by the cascade-correlation algorithm, and the design of learning algorithms in which the choice of parameters is not an issue.
Posted Content

Overcoming catastrophic forgetting in neural networks

TL;DR: It is shown that it is possible to overcome this limitation of connectionist models and train networks that can maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
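The mechanism in that summary, selectively slowing learning on weights important for previous tasks, can be pictured as a quadratic penalty anchored at the old weights. This is a hedged sketch in the spirit of elastic weight consolidation, not the paper's code; the `importance` array stands in for the per-weight Fisher-information estimates the method uses.

```python
import numpy as np

def penalized_grad(grad_new, w, w_old, importance, lam=1.0):
    """Gradient of L_new(w) + (lam / 2) * sum_i F_i * (w_i - w_old_i)^2.

    `importance` (F_i) estimates how much each weight mattered for the
    previous task; a large F_i pulls w_i back toward its old value,
    slowing its learning on the new task, while unimportant weights
    remain free to move.
    """
    return grad_new + lam * importance * (w - w_old)

# One gradient step on the new task under the penalty (hypothetical use):
# w -= step * penalized_grad(grad_new, w, w_old, importance, lam=100.0)
```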
Journal Article

Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources

TL;DR: The challenges of using deep learning for remote-sensing data analysis are analyzed, recent advances are reviewed, and resources are provided that the authors hope will make deep learning in remote sensing seem ridiculously simple.
Journal Article

Deep Learning-Based Classification of Hyperspectral Data

TL;DR: The concept of deep learning is introduced into hyperspectral data classification for the first time, and a new way of classifying with spatial-dominated information is proposed: a hybrid of principal component analysis (PCA), a deep learning architecture, and logistic regression.
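As a rough sketch of such a hybrid pipeline, the spectral bands can be compressed with PCA before fitting a logistic-regression output; the paper's middle stage, a deep architecture trained on the reduced features, is omitted here, and the data below are random placeholders rather than hyperspectral imagery.

```python
# Hedged sketch: PCA for spectral reduction + logistic regression.
# The deep feature-learning stage from the paper is not reproduced.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))   # 200 pixels x 100 spectral bands (fake)
y = rng.integers(0, 3, size=200)  # 3 placeholder land-cover classes

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),          # spectral dimensionality reduction
    LogisticRegression(max_iter=1000),
)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```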
References
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result of these adjustments, internal "hidden" units come to represent important features of the task domain.
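In standard notation (reconstructed here, not quoted from the paper), that repeated weight adjustment is the gradient-descent delta rule:

```latex
% Squared-error measure and its gradient-descent weight update;
% eta is the learning rate, x_i the activity feeding weight w_{ij}.
E = \tfrac{1}{2} \sum_j (d_j - y_j)^2, \qquad
\Delta w_{ij} = -\eta \,\frac{\partial E}{\partial w_{ij}} = \eta\, \delta_j\, x_i,
\qquad
\delta_j =
\begin{cases}
(d_j - y_j)\, f'(\mathrm{net}_j), & j \text{ an output unit},\\[2pt]
f'(\mathrm{net}_j) \sum_k \delta_k\, w_{jk}, & j \text{ a hidden unit}.
\end{cases}
```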
Journal Article

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
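One standard form of that statement, for a continuous target on a compact set (the paper also treats Borel measurable maps under an appropriate metric):

```latex
% Single hidden layer with an arbitrary squashing function \sigma:
% for every continuous f : K \to \mathbb{R}^m with K \subset \mathbb{R}^n
% compact and every \varepsilon > 0, there exist a width N and
% parameters A \in \mathbb{R}^{N \times n}, b \in \mathbb{R}^{N},
% C \in \mathbb{R}^{m \times N} such that
\sup_{x \in K} \bigl\| f(x) - C\,\sigma(Ax + b) \bigr\| < \varepsilon .
```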
Journal Article

Neurons with graded response have collective computational properties like those of two-state neurons.

TL;DR: A model for a large network of "neurons" with a graded response (or sigmoid input-output relation) is studied and shown to have collective properties in very close correspondence with the earlier stochastic model based on McCulloch-Pitts neurons.
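The graded-response dynamics in question are the continuous Hopfield equations, given here in their textbook form rather than as a quotation:

```latex
% u_i: internal state of neuron i; V_j = g(u_j): its sigmoid output;
% T_{ij}: symmetric connection strengths; C_i, R_i, I_i: capacitance,
% resistance, and external input in the circuit interpretation.
C_i \,\frac{\mathrm{d}u_i}{\mathrm{d}t} = \sum_j T_{ij} V_j - \frac{u_i}{R_i} + I_i,
\qquad V_j = g(u_j).
```

With symmetric T these dynamics monotonically decrease an energy function, which is what yields collective (associative-memory) behavior matching the two-state model.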
Journal Article

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
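The gradient-following algorithm referenced here is real-time recurrent learning; a standard rendering of its forward sensitivity recursion (reconstructed, not quoted) is:

```latex
% p^k_{ij}(t) = \partial y_k(t) / \partial w_{ij} is carried forward in
% time with the network state; s_k is unit k's net input, z_j the signal
% entering weight w_{ij}, e_k(t) the output error, \eta the learning rate.
p^k_{ij}(t+1) = f'\!\bigl(s_k(t)\bigr)\Bigl[\sum_{l} w_{kl}\, p^l_{ij}(t) + \delta_{ik}\, z_j(t)\Bigr],
\qquad
\Delta w_{ij}(t) = \eta \sum_{k} e_k(t)\, p^k_{ij}(t).
```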