Journal Article

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; in doing so, internal ‘hidden’ units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
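The procedure the abstract describes can be sketched in a few lines. The sketch below is a minimal illustration, assuming a single hidden layer of sigmoid units, a squared-error measure, plain gradient descent, and the XOR task as an example a perceptron cannot solve; the network size, learning rate, and task are illustrative choices, not taken from the paper.

```python
# Minimal back-propagation sketch: one hidden layer, sigmoid units,
# squared-error measure, full-batch gradient descent on XOR.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR task: desired outputs T for each input row of X.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

# Connection weights and biases: input -> hidden and hidden -> output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for step in range(20000):
    # Forward pass: hidden activations and actual output vector.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Backward pass: propagate error derivatives of the squared-error
    # measure 0.5 * sum((y - T)**2) back through the net.
    dy = (y - T) * y * (1 - y)        # dE/d(net input of output units)
    dh = (dy @ W2.T) * h * (1 - h)    # dE/d(net input of hidden units)

    # Adjust each weight in proportion to the negative error gradient.
    W2 -= lr * h.T @ dy
    b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh
    b1 -= lr * dh.sum(axis=0)

# Outputs should approach the desired vector [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```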


Citations
Proceedings Article

Dendritic cortical microcircuits approximate the backpropagation algorithm

TL;DR: A novel view of learning in dendritic cortical circuits is introduced, addressing how the brain may solve the long-standing synaptic credit-assignment problem: error-driven synaptic plasticity adapts the network towards a globally desired output.
Proceedings Article

Finding approximate local minima faster than gradient descent

TL;DR: In this paper, a non-convex second-order optimization algorithm is proposed that is guaranteed to return an approximate local minimum in time which scales linearly in the underlying dimension and the number of training examples.
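The cited algorithm itself is not reproduced here; the sketch below only illustrates the kind of linear-time curvature probe such accelerated non-convex methods build on: a Hessian-vector product formed from two gradient evaluations, used to check whether a near-stationary point hides a direction of negative curvature (a saddle) rather than being an approximate local minimum. The test function, tolerances, and probe direction are illustrative assumptions, not taken from the cited paper.

```python
# Illustrative curvature probe (not the cited paper's algorithm):
# a finite-difference Hessian-vector product costs two gradient
# evaluations, i.e. time linear in the dimension.
import numpy as np

def grad(f, x, eps=1e-6):
    """Numerical gradient of f at x (autodiff would be used in practice)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def hvp(f, x, v, eps=1e-4):
    """Approximate Hessian-vector product H(x) @ v via gradient differences."""
    return (grad(f, x + eps * v) - grad(f, x - eps * v)) / (2 * eps)

# A simple non-convex function with a saddle point at the origin.
f = lambda x: x[0] ** 2 - x[1] ** 2
x = np.array([0.0, 0.0])

# The gradient vanishes at x, but probing along v reveals negative
# curvature, so x is a saddle, not an approximate local minimum.
v = np.array([0.0, 1.0])
print(v @ hvp(f, x, v))   # ~ -2.0
```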
Book

Hierarchical Neural Networks for Image Interpretation

Sven Behnke
TL;DR: The results show clear improvements from supervised learning in the recognition of meter values and in the application of Matrix Codes.
Posted Content

Measuring the Effects of Data Parallelism on Neural Network Training

TL;DR: This work experimentally characterizes the effect of increasing the batch size on training time, measured as the number of steps needed to reach a goal out-of-sample error; it studies how this relationship varies with the training algorithm, model, and data set, and finds extremely large variation between workloads.
Journal Article

Survey on deep learning for radiotherapy.

TL;DR: The concept of deep learning is explained, addressing it in the broader context of machine learning, and the most common network architectures are presented, with a more specific focus on convolutional neural networks.