Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
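The procedure described in the abstract can be sketched in a few lines of NumPy: run a forward pass, compare the actual output vector with the desired one, and adjust the weights against the error gradient. This is a minimal illustration, not the paper's original formulation; the 2-4-1 sigmoid architecture, XOR task, learning rate, and epoch count are all illustrative assumptions. XOR is a task the hidden units must solve by creating new features, since no direct input-output weights suffice.

```python
import numpy as np

# Minimal back-propagation sketch for a small layered net of sigmoid units.
# The 2-4-1 architecture, XOR task, learning rate, and epoch count are
# illustrative choices, not taken from the paper.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # input vectors
T = np.array([[0.], [1.], [1.], [0.]])                  # desired outputs (XOR)

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)  # hidden -> output weights
lr = 0.5
E_hist = []

for _ in range(5000):
    # Forward pass: compute hidden and output activations.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    E_hist.append(0.5 * np.sum((Y - T) ** 2))  # error measure to minimize

    # Backward pass: propagate error derivatives from output back to hidden layer.
    dY = (Y - T) * Y * (1 - Y)        # dE/d(net input) at the output units
    dH = (dY @ W2.T) * H * (1 - H)    # dE/d(net input) at the hidden units

    # Adjust weights in proportion to the negative error gradient.
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
```

After training, the error measure `E_hist[-1]` is far below its initial value, and the hidden activations `H` encode learned features of the task domain.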


Citations
Proceedings ArticleDOI

A Deep Relevance Matching Model for Ad-hoc Retrieval

TL;DR: The Deep Relevance Matching Model (DRMM) employs a joint deep architecture at the query term level for relevance matching, combining matching histogram mapping, a feed-forward matching network, and a term gating network.
Journal ArticleDOI

Stable Architectures for Deep Neural Networks

TL;DR: New forward propagation techniques inspired by systems of ordinary differential equations (ODEs) are proposed that overcome the instability of very deep networks and lead to well-posed learning problems for arbitrarily deep architectures.
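The ODE view summarized above treats each residual layer as one forward-Euler step of a continuous dynamical system. The sketch below illustrates that idea only; the layer count, width, step size, and weight scale are assumptions for the example, and the stability-enforcing constraints the paper itself proposes (e.g. structured layer matrices) are not implemented here.

```python
import numpy as np

# Illustrative sketch: forward propagation through a residual network viewed
# as forward-Euler integration of  dY/dt = tanh(Y K(t) + b(t)).
# Depth, width, step size h, and weight scale are assumed for the example.
rng = np.random.default_rng(1)

def ode_forward(Y, Ks, bs, h):
    """Propagate features Y through len(Ks) Euler steps of size h."""
    for K, b in zip(Ks, bs):
        Y = Y + h * np.tanh(Y @ K + b)   # residual update = one Euler step
    return Y

depth, width = 8, 3
Ks = [rng.normal(0.0, 0.1, (width, width)) for _ in range(depth)]
bs = [np.zeros(width) for _ in range(depth)]

Y0 = rng.normal(size=(5, width))   # 5 example feature vectors
Y = ode_forward(Y0, Ks, bs, h=0.1)
```

A small step size and modest weights keep each update a small perturbation of the identity, which is the intuition behind why such forward propagation stays stable as depth grows.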
Journal ArticleDOI

Stacked Autoencoders for Unsupervised Feature Learning and Multiple Organ Detection in a Pilot Study Using 4D Patient Data

TL;DR: Deep learning methods are applied to organ identification in magnetic resonance medical images: visual and temporal hierarchical features are learned to categorize object classes from an unlabeled multimodal DCE-MRI dataset, so that only weakly supervised training is required for the classifier.
Journal ArticleDOI

On the use of deep learning for computational imaging

TL;DR: This paper relates deep-learning-inspired solutions to the original computational imaging formulation, uses that relationship to derive design insights, principles, and caveats of broader applicability, and explores how the machine learning process is aided by the physics of imaging when ill-posedness and uncertainty become particularly severe.
Posted Content

Video (language) modeling: a baseline for generative models of natural videos.

TL;DR: For the first time, it is shown that a strong baseline model for unsupervised feature learning using video data can predict non-trivial motions over short video sequences.