Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, leading the hidden units to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
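The procedure in the abstract can be sketched in a few dozen lines: run a forward pass through a network with one hidden layer, compare the output to the target, and propagate the error derivative backwards to adjust every weight by gradient descent. This is a minimal illustrative sketch, not the paper's implementation; the network sizes, learning rate, sigmoid activation, and the XOR task are all assumptions chosen for the demonstration.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
n_in, n_hid = 2, 3
# w1[j] holds the weights from the inputs to hidden unit j (last entry is the bias);
# w2 holds the weights from the hidden units to the single output (last entry is the bias).
w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w2 = [random.uniform(-1, 1) for _ in range(n_hid + 1)]

def forward(x):
    # forward pass: hidden activations, then the output activation
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + ws[-1]) for ws in w1]
    y = sigmoid(sum(w * hj for w, hj in zip(w2, h)) + w2[-1])
    return h, y

# XOR: a task a single-layer perceptron cannot learn, because it
# requires a hidden-unit re-representation of the inputs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def mean_squared_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

error_before = mean_squared_error()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # back-propagate: dE/dy = (y - t), times the sigmoid derivative y(1 - y)
        dy = (y - t) * y * (1 - y)
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        # gradient-descent weight updates
        for j in range(n_hid):
            w2[j] -= lr * dy * h[j]
            for i in range(n_in):
                w1[j][i] -= lr * dh[j] * x[i]
            w1[j][-1] -= lr * dh[j]
        w2[-1] -= lr * dy

error_after = mean_squared_error()
```

After training, the squared error over the four XOR cases is far below its initial value, which is exactly the quantity the procedure minimizes.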


Citations
Journal Article

Creating artificial neural networks that generalize

TL;DR: A technique to test the hypothesis that multilayered, feed-forward networks with few units in the first hidden layer generalize better than networks with many units in the first hidden layer finds the hypothesis to be false for networks trained with noisy inputs.
Journal Article

Comparing different classifiers for automatic age estimation

TL;DR: The aim of this work is to design classifiers that accept the model-based representation of unseen images and produce an estimate of the age of the person in the corresponding face image; the results indicate that machines can estimate the age of a person almost as reliably as humans.
Journal Article

The parallel distributed processing approach to semantic cognition

TL;DR: Simulation models capture semantic cognitive processes and their development and disintegration, encompassing domain-specific patterns of generalization in young children, and the restructuring of conceptual knowledge as a function of experience.
Proceedings Article

Robust Scene Text Recognition with Automatic Rectification

TL;DR: This article proposes a robust text recognizer with automatic rectification (RARE), which consists of a Spatial Transformer Network (STN) and a Sequence Recognition Network (SRN).
Dissertation

Bayesian methods for adaptive models

TL;DR: The Bayesian framework for model comparison and regularisation is demonstrated by studying interpolation and classification problems modelled with both linear and non-linear models, and it is shown that the careful incorporation of error bar information into a classifier's predictions yields improved performance.