Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal 'hidden' units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
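The procedure the abstract describes reduces, for a small feed-forward network, to a forward pass, layer-by-layer propagation of error derivatives, and a gradient step on the weights. The sketch below, in NumPy, illustrates this for a two-layer network of sigmoid units; the XOR task, layer sizes, learning rate, and iteration count are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of back-propagation as described in the abstract:
# a two-layer network of sigmoid units trained by gradient descent on
# the squared difference between actual and desired outputs.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: XOR, a classic example that requires hidden units.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights for input->hidden and hidden->output connections.
W1 = rng.normal(scale=0.5, size=(2, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))
lr = 0.5

for step in range(20000):
    # Forward pass: compute hidden and output activations.
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)

    # Error measure: half the summed squared difference between the
    # actual output vector and the desired output vector.
    E = 0.5 * np.sum((O - Y) ** 2)

    # Backward pass: propagate error derivatives layer by layer.
    dO = (O - Y) * O * (1 - O)        # dE/d(net input of output units)
    dH = (dO @ W2.T) * H * (1 - H)    # dE/d(net input of hidden units)

    # Adjust weights in proportion to the negative error gradient.
    W2 -= lr * H.T @ dO
    W1 -= lr * X.T @ dH

print("final error:", E, "outputs:", O.ravel())
```

As the error falls, the hidden units develop intermediate features of the inputs that the output unit can combine, which is the behaviour the abstract attributes to the weight adjustments.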


Citations
Journal Article

Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations.

TL;DR: Proposes a concept of resistive processing unit (RPU) devices that could accelerate DNN training by orders of magnitude while using much less power, making it possible to tackle Big Data problems with trillions of parameters that are impossible to address today.
Book Chapter

Deep learning of representations: looking forward

TL;DR: This paper examines some of the challenges ahead for deep learning: scaling algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data.
Journal Article

Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery

TL;DR: The proposed STDCNN has three parts: one part involves a transferred DCNN with deep architecture; another part is designed to analyze multispectral images; and the final part fuses the first two parts into a classification layer, which can produce better land-use maps for real-world urban applications.
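As a rough illustration of the three-part structure that TL;DR describes (a transferred deep CNN branch, a branch for the multispectral bands, and a layer that fuses both into a land-use classifier), the PyTorch sketch below wires those pieces together; the backbone choice, band count, class count, and layer sizes are invented for the example and are not the authors' STDCNN.

```python
# A rough sketch of a three-part network: transferred CNN branch,
# multispectral branch, and a fusion/classification layer.
import torch
import torch.nn as nn
import torchvision.models as models

class ThreePartSketch(nn.Module):
    def __init__(self, n_spectral_bands=4, n_classes=10):
        super().__init__()
        # Part 1: a transferred DCNN backbone (ResNet-18 here);
        # pretrained weights would be loaded for real transfer learning.
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose 512-d features
        self.transferred = backbone
        # Part 2: a small CNN over the multispectral bands.
        self.spectral = nn.Sequential(
            nn.Conv2d(n_spectral_bands, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Part 3: fuse both feature vectors into a classification layer.
        self.classifier = nn.Linear(512 + 16, n_classes)

    def forward(self, rgb, spectral):
        feats = torch.cat([self.transferred(rgb), self.spectral(spectral)], dim=1)
        return self.classifier(feats)

model = ThreePartSketch()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 4, 64, 64))
print(logits.shape)  # torch.Size([2, 10])
```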
Proceedings Article

Is it a bug or an enhancement?: a text-based approach to classify change requests

TL;DR: This paper investigates whether the text of issues posted in bug tracking systems is enough to classify them into corrective maintenance and other kinds of activities, and shows that alternating decision trees, naive Bayes classifiers, and logistic regression can be used to accurately distinguish bugs from other kinds of change requests.
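As a hedged illustration of that kind of text-based classification (not the paper's exact setup or data), the snippet below trains one of the mentioned model families, a naive Bayes classifier, on a handful of invented issue descriptions using scikit-learn.

```python
# A small sketch of classifying change requests from their text alone.
# The example issues and labels are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

issues = [
    "app crashes with null pointer exception on startup",
    "please add dark mode to the settings page",
    "wrong total shown after applying a discount code",
    "support exporting reports as CSV",
]
labels = ["bug", "enhancement", "bug", "enhancement"]

# Bag-of-words features feeding a multinomial naive Bayes classifier.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(issues, labels)

print(clf.predict(["error: division by zero when the cart is empty"]))
```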
Journal Article

Scene Classification via a Gradient Boosting Random Convolutional Network Framework

TL;DR: Proposes a gradient boosting random convolutional network (GBRCN) framework for scene classification that can effectively combine many deep neural networks and provide more accurate classification results than state-of-the-art methods.