Journal ArticleDOI

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the network's internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
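As a minimal illustration of the procedure described above, the sketch below trains a small two-layer network of sigmoid units by gradient descent on the squared difference between the actual and desired output vectors. The layer sizes, learning rate, and toy XOR task are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal back-propagation sketch for a two-layer network of sigmoid units.
# Hidden-layer size, learning rate, and the XOR task are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR, which cannot be solved without hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([[0], [1], [1], [0]], dtype=float)        # desired output vectors

W1 = rng.normal(0.0, 0.5, (2, 4))                      # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1))                      # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                           # hidden unit states
    y = sigmoid(h @ W2 + b2)                           # actual output vector
    error = y - D                                      # derivative of E = 0.5 * sum((y - d)^2)

    # Backward pass: propagate the error derivative through each layer
    delta_out = error * y * (1 - y)                    # dE/d(net input) at output units
    delta_hid = (delta_out @ W2.T) * h * (1 - h)       # dE/d(net input) at hidden units

    # Gradient-descent weight adjustments
    W2 -= lr * h.T @ delta_out
    b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid
    b1 -= lr * delta_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```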


Citations
Journal ArticleDOI

Object Detection and Image Segmentation with Deep Learning on Earth Observation Data: A Review-Part I: Evolution and Recent Trends

Thorsten Hoeser, +1 more
22 May 2020
TL;DR: This review traces the evolution of deep learning (DL) for image segmentation and object detection with convolutional neural networks (CNNs), starting in 2012, when a CNN set new standards in image recognition, and continuing until late 2019.
Proceedings ArticleDOI

Extractive Summarization using Continuous Vector Space Models

TL;DR: This paper proposes the use of continuous vector representations for semantically aware representations of sentences as a basis for measuring similarity and evaluates different compositions for sentence representation on a standard dataset using the ROUGE evaluation measures.

Learning-Based Methods for Comparing Sequences, with Applications to Audio-to-MIDI Alignment and Matching

Colin Raffel
Proceedings Article

Decoupled neural interfaces using synthetic gradients

TL;DR: It is demonstrated that in addition to predicting gradients, the same framework can be used to predict inputs, resulting in models which are decoupled in both the forward and backwards pass -- amounting to independent networks which co-learn such that they can be composed into a single functioning corporation. (A minimal illustrative sketch of the synthetic-gradient idea follows the citation list below.)
Journal ArticleDOI

SELDI-TOF-MS ProteinChip array profiling of tears from patients with dry eye

TL;DR: The SELDI-TOF-MS technology appears ideally suited to the mass screening of peptides and proteins in tears and may become a very useful tool in the search for potential biomarkers for diagnosis and new therapeutics in ocular diseases such as dry eye.
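Relating to the "Decoupled neural interfaces using synthetic gradients" entry above, the sketch below illustrates the core idea under stated assumptions: a small auxiliary module predicts the gradient at a hidden layer so the lower layer can update without waiting for the true backward pass, while the module itself is regressed toward the true gradient. The layer sizes, optimizers, and toy regression task are assumptions for illustration, not the authors' architecture.

```python
# Sketch of training with a synthetic-gradient module (illustrative assumptions only).
import torch
import torch.nn as nn

torch.manual_seed(0)

lower = nn.Sequential(nn.Linear(10, 32), nn.ReLU())   # layer to be decoupled
upper = nn.Linear(32, 1)                              # rest of the network
sg = nn.Linear(32, 32)                                # synthetic-gradient predictor
opt_lower = torch.optim.SGD(lower.parameters(), lr=0.01)
opt_upper = torch.optim.SGD(upper.parameters(), lr=0.01)
opt_sg = torch.optim.SGD(sg.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for step in range(200):
    x = torch.randn(16, 10)
    target = x.sum(dim=1, keepdim=True)               # toy regression target

    # 1) Lower layer updates immediately using the *predicted* gradient.
    h = lower(x)
    synthetic_grad = sg(h.detach())
    opt_lower.zero_grad()
    h.backward(synthetic_grad.detach())
    opt_lower.step()

    # 2) Upper layer does a normal update and yields the *true* gradient w.r.t. h.
    h_detached = h.detach().requires_grad_(True)
    loss = loss_fn(upper(h_detached), target)
    opt_upper.zero_grad()
    loss.backward()
    opt_upper.step()

    # 3) The synthetic-gradient module is regressed toward the true gradient.
    sg_loss = loss_fn(sg(h.detach()), h_detached.grad.detach())
    opt_sg.zero_grad()
    sg_loss.backward()
    opt_sg.step()
```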