Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps the hidden units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
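As a rough illustration of the procedure the abstract describes, here is a minimal NumPy sketch of back-propagation for a one-hidden-layer network of sigmoid units trained by gradient descent on squared error. The XOR task, network size, learning rate, and initialization are our own illustrative choices, not the paper's.

# Minimal back-propagation sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: a tiny task whose regularities require hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # desired output vectors

W1 = rng.normal(scale=1.0, size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=1.0, size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0                                 # learning rate (our choice)

for _ in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)             # hidden-unit activations
    Y = sigmoid(H @ W2 + b2)             # actual output vector
    E = Y - T                            # difference from desired output

    # Backward pass: propagate error derivatives toward the input.
    dY = E * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)

    # Gradient-descent weight adjustments.
    W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))  # should approach [[0], [1], [1], [0]]

After training, the hidden units have developed internal feature detectors that the input-output pairs alone do not supply, which is the point the abstract makes about hidden representations.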


Citations
Journal ArticleDOI

A spatially explicit deep learning neural network model for the prediction of landslide susceptibility

TL;DR: A comparative analysis using the Wilcoxon signed-rank test revealed a significant improvement in landslide prediction by the spatially explicit DL model over quadratic discriminant analysis, Fisher's linear discriminant analysis, and a multi-layer perceptron neural network.
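For readers unfamiliar with the significance test mentioned above, a minimal sketch of a Wilcoxon signed-rank comparison between two models' paired errors, using scipy.stats.wilcoxon; the error values below are made-up placeholders, not the study's data.

# Hedged sketch: paired comparison of two models with the Wilcoxon
# signed-rank test. The numbers are placeholders, not the paper's data.
import numpy as np
from scipy.stats import wilcoxon

errors_dl  = np.array([0.10, 0.08, 0.12, 0.07, 0.09, 0.11, 0.06, 0.10])
errors_mlp = np.array([0.14, 0.12, 0.15, 0.11, 0.13, 0.16, 0.10, 0.12])

stat, p = wilcoxon(errors_dl, errors_mlp)
print(f"W={stat}, p={p:.4f}")  # a small p-value indicates a significant difference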
Journal ArticleDOI

Analysis of drought severity-area-frequency curves using a general circulation model and scenario uncertainty

TL;DR: In this article, the authors investigated the impact of climate change on severity-area-frequency (SAF) curves for annual droughts in the Kansabati River basin, India.
Journal ArticleDOI

Computational principles of learning in the neocortex and hippocampus.

TL;DR: An overview of the computational approach towards understanding the different contributions of the neocortex and hippocampus in learning and memory is presented, based on a set of principles derived from converging biological, psychological, and computational constraints.
Proceedings ArticleDOI

ApproxANN: an approximate computing framework for artificial neural network

TL;DR: This work proposes a novel approximate computing framework for ANNs, namely ApproxANN, which characterizes the impact of neurons on the output quality in an effective and efficient manner, and judiciously determines how to approximate the computation and memory accesses of less critical neurons to achieve the maximum energy-efficiency gain under a given quality constraint.
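The general scheme the TL;DR describes, scoring neuron criticality and approximating the least critical neurons while a quality constraint holds, can be sketched as follows. This is only a hedged illustration of that idea, not ApproxANN's published algorithm; the network, the quantization used as the "approximation", and all thresholds are our own choices.

# Hedged sketch of criticality-guided approximation (not ApproxANN itself).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 8))                 # placeholder inputs
W1 = rng.normal(size=(8, 16))                 # input -> hidden weights
W2 = rng.normal(size=(16, 1))                 # hidden -> output weights

def forward(W1, W2):
    return np.tanh(X @ W1) @ W2

baseline = forward(W1, W2)

def quality(y):                               # proxy metric: deviation from exact output
    return -np.mean((y - baseline) ** 2)

def approx_neuron(W1, j):                     # coarsely quantize neuron j's weights
    W = W1.copy()
    W[:, j] = np.round(W[:, j], 1)
    return W

# Step 1: characterize each neuron's impact on output quality.
scores = [quality(forward(approx_neuron(W1, j), W2)) for j in range(16)]

# Step 2: greedily approximate the least critical neurons while the
# cumulative quality stays above a chosen bound.
QUALITY_BOUND = -1e-3
for j in np.argsort(scores)[::-1]:            # least impact first
    candidate = approx_neuron(W1, j)
    if quality(forward(candidate, W2)) >= QUALITY_BOUND:
        W1 = candidate                        # accept the approximation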
Posted Content

Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey

TL;DR: This survey presents the mechanisms and properties of explaining systems for deep neural networks in computer-vision tasks, gives a comprehensive overview of the taxonomy of related studies, and compares several survey papers that deal with explainability in general.