Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
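The procedure the abstract describes is gradient descent on an error measure, with the error derivative propagated backwards through the layers. Below is a minimal NumPy sketch of that idea on a toy task; it is not the authors' original implementation, and the network size, sigmoid activation, learning rate, and XOR task are all illustrative assumptions.

```python
# Minimal back-propagation sketch for one hidden layer
# (assumptions: sigmoid units, squared error, batch gradient descent).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR. Shapes and learning rate are arbitrary choices.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

for step in range(5000):
    # Forward pass: hidden units come to encode features of the task.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Error measure: difference between actual and desired output vectors.
    err = y - Y

    # Backward pass: propagate the error derivative through each layer.
    dy = err * y * (1 - y)            # dE/d(pre-activation of output)
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0)
    dh = (dy @ W2.T) * h * (1 - h)    # dE/d(pre-activation of hidden)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)

    # Repeatedly adjust the weights against the gradient.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(y, 2))  # outputs should approach the desired 0/1 targets
```

Each iteration performs the two passes the paper describes: a forward pass that computes the hidden and output activations, and a backward pass that propagates the output error back through the weights before adjusting them.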


Citations
Journal ArticleDOI

Connectionist models of face processing: A survey

TL;DR: One advantage of these models over some non-connectionist approaches is that analyzable features emerge naturally from image-based codes, and hence the problem of selecting and segmenting features from faces can be avoided.
Proceedings ArticleDOI

Deep multiple instance learning for image classification and auto-annotation

TL;DR: This paper attempts to model deep learning in a weakly supervised (multiple instance learning) framework, in which each image follows a dual multi-instance assumption: its object proposals and its candidate text annotations can each be regarded as an instance set.
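The multiple-instance assumption behind that summary can be made concrete: an image is a "bag", its proposals or annotations are "instances", and the bag is positive for a label if at least one instance is. Everything below (shapes, scores, and the way the two instance sets are combined) is a hypothetical illustration, not the paper's code.

```python
# Illustrative multiple-instance pooling; all values are made up.
import numpy as np

def bag_probability(instance_probs):
    """Aggregate per-instance label probabilities into a bag-level probability.

    Under the standard MIL assumption, a bag (here, an image) is positive
    for a label if at least one of its instances is, so we pool with a max.
    """
    return instance_probs.max(axis=0)

# An image as two instance sets over the same two labels:
proposal_probs = np.array([[0.10, 0.70],
                           [0.30, 0.20],
                           [0.05, 0.90]])   # 3 object proposals x 2 labels
annotation_probs = np.array([[0.60, 0.10],
                             [0.20, 0.40]])  # 2 text annotations x 2 labels

# One simple way to combine evidence from both instance sets:
image_probs = bag_probability(proposal_probs) * bag_probability(annotation_probs)
print(image_probs)  # bag-level probability per label
```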
Journal ArticleDOI

Rules or connections in past-tense inflections: what does the evidence rule out?

TL;DR: Evidence supports all three connectionist predictions: gradual acquisition of the past tense inflection; graded sensitivity to phonological and semantic content; and a single, integrated mechanism for regular and irregular forms, dependent jointly on phonology and semantics.
Proceedings ArticleDOI

DSAC — Differentiable RANSAC for Camera Localization

TL;DR: In this article, a differentiable version of RANSAC, called DSAC, is applied to the problem of camera localization, where deep learning has so far failed to improve on traditional approaches.
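The core move that makes RANSAC differentiable is replacing the hard argmax over model hypotheses with probabilistic selection, so the expected task loss has a gradient with respect to the hypothesis scores. A tiny sketch of that idea follows; the scores and losses are made-up stand-ins, not values from the paper.

```python
# DSAC's key idea in miniature: soft hypothesis selection instead of argmax.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())  # shift for numerical stability
    return e / e.sum()

hypothesis_scores = np.array([2.0, 0.5, 1.2])  # e.g. scored candidate camera poses
hypothesis_losses = np.array([0.1, 0.9, 0.4])  # task loss of each hypothesis

# Vanilla RANSAC would pick argmax(hypothesis_scores), which has no gradient.
# DSAC instead selects hypotheses probabilistically and trains on the
# expected loss, which is differentiable in the scores.
p = softmax(hypothesis_scores)
expected_loss = (p * hypothesis_losses).sum()
print(p, expected_loss)
```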
Journal ArticleDOI

Recent Progress on Generative Adversarial Networks (GANs): A Survey

TL;DR: The basic theory of GANs and the differences among recent generative models are analyzed and summarized, and the derived models of GANs are classified and introduced one by one.
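For context, the basic GAN theory such surveys build on is the original minimax game between a generator G and a discriminator D; this is the standard formulation rather than anything specific to the survey:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$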