Journal ArticleDOI

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
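The procedure the abstract describes can be sketched in code: run the inputs forward through the network, compare the output with the desired output, and propagate the error backward to adjust every weight. The sketch below is an illustrative assumption, not the paper's own implementation — the 2-2-1 network size, sigmoid units, learning rate, and XOR task are all choices made here for brevity.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A 2-2-1 network: two inputs, two sigmoid hidden units, one sigmoid output."""
    def __init__(self, rng):
        # Small random initial weights; index 2 of each row is the bias.
        self.w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
        self.w_o = [rng.uniform(-1, 1) for _ in range(3)]

    def forward(self, x):
        # Forward pass: compute hidden activations, then the output.
        self.h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.w_h]
        self.y = sigmoid(self.w_o[0] * self.h[0]
                         + self.w_o[1] * self.h[1] + self.w_o[2])
        return self.y

    def backward(self, x, target, lr=0.5):
        # Output delta: dE/dnet for E = (y - t)^2 / 2 with a sigmoid output.
        d_o = (self.y - target) * self.y * (1 - self.y)
        # Hidden deltas: the output error propagated back through w_o.
        d_h = [d_o * self.w_o[i] * self.h[i] * (1 - self.h[i]) for i in range(2)]
        # Gradient-descent weight updates.
        for i in range(2):
            self.w_o[i] -= lr * d_o * self.h[i]
        self.w_o[2] -= lr * d_o
        for i in range(2):
            self.w_h[i][0] -= lr * d_h[i] * x[0]
            self.w_h[i][1] -= lr * d_h[i] * x[1]
            self.w_h[i][2] -= lr * d_h[i]

# XOR: not linearly separable, so the hidden units must learn a useful feature.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = TinyNet(random.Random(0))

def total_error():
    return sum((net.forward(x) - t) ** 2 for x, t in data)

e_before = total_error()
for _ in range(5000):
    for x, t in data:
        net.forward(x)
        net.backward(x, t)
e_after = total_error()
```

After training, `e_after` is lower than `e_before`: repeated weight adjustment reduces the output-error measure, which is exactly the behaviour the abstract claims. XOR is the classic illustration because a network with no hidden layer (a perceptron) cannot solve it at all.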

Citations
Journal ArticleDOI

Remote sensing of forest change using artificial neural networks

TL;DR: The results of the study indicate that the artificial neural network (ANN) estimates conifer mortality more accurately than the other approaches and offers a viable alternative for change detection in remote sensing.
Journal ArticleDOI

Machine learning methods for better water quality prediction

TL;DR: A neuro-fuzzy inference system augmented with a wavelet de-noising technique (WDT-ANFIS) is recommended; it relies on historical data for each water quality parameter, exhibited a significant improvement in prediction accuracy for all the water quality parameters, and outperformed all the other recommended models.
Proceedings Article

Neural Trojans

TL;DR: This work shows that embedding hidden malicious functionality, i.e., neural Trojans, into a neural IP is an effective attack, and provides three mitigation techniques: input anomaly detection, re-training, and input preprocessing.
Book ChapterDOI

Large-scale Multi-label Text Classification - Revisiting Neural Networks

TL;DR: This paper proposes a comparably simple NN approach with recently proposed learning techniques for large-scale multi-label text classification, shows that BP-MLL's ranking loss can be efficiently and effectively replaced with the commonly used cross-entropy error function, and demonstrates that several advances in neural-network training developed in the realm of deep learning can be effectively employed in this setting.
Journal ArticleDOI

Deep residual learning-based fault diagnosis method for rotating machinery

TL;DR: The results of this study suggest that the proposed intelligent fault diagnosis method for rotating machinery offers a new and promising approach; it significantly improves the information flow throughout the network and is well suited to processing machinery vibration signals of variable sequential length.