Journal ArticleDOI

Learning representations by back-propagating errors

TLDR
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the network's hidden units come to represent important features of the task domain.
Abstract
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
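The weight-adjustment loop the abstract describes can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's exact procedure: the network sizes, learning rate, sigmoid activations, and the XOR task are assumptions chosen for brevity (XOR is a classic case where the hidden units must learn a useful internal feature).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny task: XOR, which a network with no hidden units cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5                                  # learning rate (assumed value)

# Error before any training, for comparison.
initial_loss = float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

for step in range(5000):
    # Forward pass: compute the actual output vector.
    h = sigmoid(X @ W1 + b1)              # hidden-unit activations
    out = sigmoid(h @ W2 + b2)            # output vector
    err = out - y                         # difference from the desired output

    # Backward pass: propagate the error signal through each layer.
    d_out = err * out * (1 - out)         # sigmoid derivative at the output
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer

    # Gradient-descent weight adjustments.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

final_loss = float(np.mean((out - y) ** 2))
```

After training, the hidden activations `h` have come to encode internal features (roughly, OR- and AND-like detectors) that make XOR linearly separable for the output unit.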


Citations
Journal ArticleDOI

Deep learning in robotics: a review of recent research

TL;DR: This paper presents a review of the applications, benefits, and limitations of deep learning vis-à-vis physical robotic systems, using contemporary research as exemplars; it is intended to communicate recent advances to the wider robotics community and to inspire further interest in and application of deep learning in robotics.
Journal ArticleDOI

Analysis of Recurrent Neural Networks for Probabilistic Modeling of Driver Behavior

TL;DR: This paper reveals that the strong performance of recurrent networks stems from their ability to identify recent trends in the ego-vehicle's state; recurrent networks are shown to perform as well as feedforward networks given longer histories as inputs.
Journal ArticleDOI

Development of effective and efficient rainfall-runoff models using integration of deterministic, real-coded genetic algorithms and artificial neural network techniques

TL;DR: This paper presents a new approach that employs real-coded genetic algorithms (GAs) to train artificial neural network (ANN) rainfall-runoff models, overcoming the poor performance on low-magnitude flows of ANN models trained with the popular back-propagation (BP) method.
Journal ArticleDOI

LSTM-Based EEG Classification in Motor Imagery Tasks

TL;DR: A one-dimension aggregate approximation (1d-AX) is employed to achieve robust classification, and, inspired by the classical common spatial pattern, a channel-weighting technique is further deployed to enhance the effectiveness of the proposed classification framework.
Proceedings ArticleDOI

Semantically enhanced software traceability using deep learning techniques

TL;DR: A tracing network architecture that utilizes Word Embedding and Recurrent Neural Network models to generate trace links is presented, and it significantly outperformed state-of-the-art tracing methods, including the Vector Space Model and Latent Semantic Indexing.