Journal Article

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; in doing so, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
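
To make the procedure concrete, here is a minimal sketch in Python/NumPy of the idea the abstract describes: a forward pass through one layer of hidden units, a squared-error measure between the actual and desired output vectors, and repeated gradient-descent weight adjustments. The XOR task, layer sizes, learning rate, and sigmoid units are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a classic task a single-layer perceptron cannot solve,
# but which becomes solvable once hidden units create new features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)   # desired outputs

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                                  # illustrative learning rate

for epoch in range(10000):
    # Forward pass: hidden and output activations.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)

    # Error measure: half the summed squared difference between
    # the actual output vector Y and the desired output vector T.
    E = 0.5 * np.sum((Y - T) ** 2)

    # Backward pass: propagate error derivatives layer by layer
    # (sigmoid derivative is y * (1 - y)).
    dY = (Y - T) * Y * (1 - Y)        # dE / d(net input of output units)
    dH = (dY @ W2.T) * H * (1 - H)    # dE / d(net input of hidden units)

    # Gradient-descent weight adjustments.
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))   # approaches [[0], [1], [1], [0]]

After training, the hidden units have learned internal features of the task (here, combinations of the two inputs) that no single input unit carries on its own, which is the point the abstract makes about back-propagation creating useful new features.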


Citations
Proceedings Article

PiCANet: Learning Pixel-Wise Contextual Attention for Saliency Detection

TL;DR: A pixel-wise contextual attention network (PiCANet) is proposed that learns to selectively attend to informative context locations for each pixel, generating an attention map in which each weight reflects the contextual relevance of the corresponding location.
Book

Neural Network Methods in Natural Language Processing

TL;DR: Neural networks are a family of powerful machine-learning models that have been widely applied to natural language processing tasks such as machine translation, syntactic parsing, and multi-task learning.
Journal Article

Self-organizing semantic maps

TL;DR: Self-organized formation of topographic maps for abstract data, such as words, is demonstrated, and it is argued that a similar process may be at work in the brain.
Journal Article

Overview of deep learning in medical imaging

TL;DR: It is shown that machine learning with feature input (feature-based ML) was dominant before the introduction of deep learning, and that the essential difference between ML before and after deep learning is that deep learning learns from image data directly, without object segmentation or feature extraction; this is the source of its power.
Journal Article

Conservation and prediction of solvent accessibility in protein families

TL;DR: A neural network system is introduced that predicts the relative solvent accessibility of each residue from evolutionary profiles of amino-acid substitutions derived from multiple sequence alignments; the most reliably predicted 50% of residues are predicted as accurately as by automatic homology modeling.