Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual and desired output vectors; as a result, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
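The procedure summarized in the abstract can be sketched in code: compute the network's output, measure the squared error against the desired output, and propagate the error derivative backwards to adjust each layer's weights. This is a minimal illustration, not the paper's exact network or notation; the XOR task, layer sizes, learning rate, and sigmoid nonlinearity are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a task whose solution requires hidden units to learn a feature
# that is not present in any single raw input.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

def total_error():
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    return float(((Y - T) ** 2).sum())

initial = total_error()
for _ in range(5000):
    # Forward pass: compute hidden and output activations.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: derivative of the squared error with respect to
    # each unit's net input, propagated from output back to hidden layer.
    dY = 2 * (Y - T) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    # Weight adjustments by gradient descent.
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

final = total_error()
```

After training, the error measure should be much smaller than at initialization, with the hidden layer having learned an internal feature (roughly, an exclusive-or detector) that no single input unit carries.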


Citations

Fusion of face and speech data for person identity verification

TL;DR: This work evaluates different binary classification schemes (support vector machine, multilayer perceptron, C4.5 decision tree, Fisher's linear discriminant, Bayesian classifier) for carrying out the fusion of experts when taking a final decision on identity authentication.

What are artificial neural networks?

TL;DR: Artificial neural networks have been applied to problems ranging from speech recognition to prediction of protein secondary structure, classification of cancers and gene prediction.

Spintronic Nanodevices for Bioinspired Computing

TL;DR: This paper shows how spintronics can be used for bioinspired computing, and reviews the different approaches that have been proposed, the recent advances in this direction, and the challenges toward fully integrated spintronic complementary metal-oxide-semiconductor (CMOS) bioinspired hardware.

Understanding adversarial training: Increasing local stability of supervised models through robust optimization

TL;DR: The proposed framework generalizes adversarial training, as well as previous approaches for increasing the local stability of ANNs, and increases the robustness of the network to existing adversarial examples while making it harder to generate new ones.

Does the nervous system depend on kinesthetic information to control natural limb movements?

TL;DR: In this paper, the authors draw together two groups of experimental studies on the control of human movement through peripheral feedback and centrally generated signals of motor commands, concluding that subjects can perceive their motor commands under various conditions, but that this is inadequate for normal movement.