Journal ArticleDOI

Learning long-term dependencies with gradient descent is difficult

TLDR
This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching onto information for long periods.
Abstract
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
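
The trade-off described above can be made concrete with a small numerical experiment: in a vanilla recurrent network the error signal that reaches the first time step is a product of per-step Jacobians, so its norm tends to shrink (or grow) exponentially with the length of the dependency. The following Python sketch is illustrative only and not taken from the paper; the hidden size, weight scale, and sequence lengths are arbitrary assumptions.

# Minimal sketch (not from the paper): how the gradient reaching the first
# time step of a vanilla RNN, h_t = tanh(W h_{t-1} + U x_t), behaves as the
# sequence length grows. Backpropagation through time multiplies the error
# signal by W^T diag(1 - h_t^2) at every step, so its norm is governed by
# repeated products of these Jacobians.
import numpy as np

rng = np.random.default_rng(0)
hidden = 50
scale = 0.9  # assumed weight scale; smaller values vanish faster, larger ones explode
W = scale * rng.standard_normal((hidden, hidden)) / np.sqrt(hidden)
U = rng.standard_normal((hidden, 1)) / np.sqrt(hidden)

for T in (10, 50, 100, 200):
    # forward pass over a random input sequence
    hs = [np.zeros(hidden)]
    xs = rng.standard_normal((T, 1))
    for t in range(T):
        hs.append(np.tanh(W @ hs[-1] + U @ xs[t]))
    # backpropagate a unit error signal from the last step to the first
    grad = np.ones(hidden)                      # stand-in for dL/dh_T
    for t in range(T, 0, -1):
        grad = W.T @ ((1.0 - hs[t] ** 2) * grad)
    print(f"T={T:4d}  ||dL/dh_0|| = {np.linalg.norm(grad):.3e}")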


Citations
Proceedings ArticleDOI

Long Short Term Memory Recurrent Neural Network Classifier for Intrusion Detection

TL;DR: This paper applies a Long Short Term Memory (LSTM) architecture to a Recurrent Neural Network (RNN), trains the IDS model on the KDD Cup 1999 dataset, and confirms that the deep learning approach is effective for IDS.
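
As a rough illustration of the kind of model this summary describes, here is a hedged PyTorch sketch of an LSTM classifier over sequences of connection records. The feature count, class count, and hyperparameters are assumptions chosen for illustration and do not reproduce the paper's architecture or its KDD Cup 1999 preprocessing.

# Hedged sketch of an LSTM-based intrusion-detection classifier in PyTorch.
# The 41 input features and 5 output classes mirror common KDD Cup 1999
# setups but are assumptions here, as are all hyperparameters.
import torch
import torch.nn as nn

class LSTMIntrusionDetector(nn.Module):
    def __init__(self, n_features=41, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                   # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])     # classify from the last hidden state

model = LSTMIntrusionDetector()
dummy = torch.randn(8, 20, 41)              # 8 sequences of 20 connection records
print(model(dummy).shape)                   # torch.Size([8, 5])
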
Posted Content

Deep Learning of Representations: Looking Forward

TL;DR: In this paper, the authors examine some of the challenges of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data.
Proceedings Article

Exact solutions to the nonlinear dynamics of learning in deep linear neural networks

TL;DR: In this article, the authors show that deep linear networks exhibit nonlinear learning phenomena similar to those seen in simulations of nonlinear networks, including long plateaus followed by rapid transitions to lower error solutions, and faster convergence from greedy unsupervised pretraining initial conditions than from random initial conditions.
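
The staged dynamics mentioned here (long plateaus followed by rapid transitions) are easy to reproduce in a toy setting. The Python sketch below trains a two-layer linear network W2·W1 to match an assumed 3×3 teacher map by plain gradient descent from a small random initialization; the teacher, learning rate, and iteration count are illustrative assumptions rather than the paper's setup.

# Minimal sketch of stage-like learning in a deep linear network: with a
# small initialization, the loss sits on a plateau, then drops sharply as
# each singular mode of the teacher map is learned in turn.
import numpy as np

rng = np.random.default_rng(1)
A = np.diag([3.0, 1.0, 0.3])           # assumed teacher map with separated singular values
W1 = 1e-3 * rng.standard_normal((3, 3))
W2 = 1e-3 * rng.standard_normal((3, 3))
lr = 0.02

for step in range(4001):
    E = W2 @ W1 - A                    # residual of the composed linear map
    if step % 500 == 0:
        print(f"step {step:5d}  loss {0.5 * np.sum(E ** 2):.4f}")
    # simultaneous gradient step on 0.5 * ||W2 W1 - A||_F^2
    W2, W1 = W2 - lr * (E @ W1.T), W1 - lr * (W2.T @ E)
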
Journal ArticleDOI

Going deeper into action recognition

TL;DR: This survey provides a comprehensive review of the notable steps taken towards recognizing human actions, starting with the pioneering methods that use handcrafted representations and then moving into the realm of deep-learning-based approaches.
Proceedings ArticleDOI

DataStories at SemEval-2017 Task 4: Deep LSTM with Attention for Message-level and Topic-based Sentiment Analysis.

TL;DR: Two deep-learning systems that competed at SemEval-2017 Task 4 “Sentiment Analysis in Twitter” are presented, which use Long Short-Term Memory networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages.
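
Below is a hedged PyTorch sketch of the general recipe this summary describes: word embeddings, an LSTM, attention over the time steps, and a message-level sentiment head. The vocabulary size, dimensions, and exact attention form are illustrative assumptions, and the pretrained Twitter embeddings are not included.

# Hedged sketch: LSTM over word embeddings with a simple attention pooling
# layer for 3-way message-level sentiment. All sizes are assumptions.
import torch
import torch.nn as nn

class AttentiveLSTMSentiment(nn.Module):
    def __init__(self, vocab=30000, emb=100, hidden=128, n_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)            # one attention score per time step
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                     # tokens: (batch, time) word ids
        h, _ = self.lstm(self.emb(tokens))         # (batch, time, hidden)
        weights = torch.softmax(self.att(h).squeeze(-1), dim=1)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)   # attention-weighted summary
        return self.head(context)

model = AttentiveLSTMSentiment()
print(model(torch.randint(0, 30000, (4, 25))).shape)        # torch.Size([4, 3])
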
References
Journal ArticleDOI

Optimization by Simulated Annealing

TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
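
As a concrete illustration of the technique, here is a minimal Python simulated-annealing loop on a small random travelling-salesman instance, using 2-city swap moves, the Metropolis acceptance rule, and a geometric cooling schedule. The instance, schedule, and move set are assumptions for illustration and do not reproduce the paper's experiments.

# Minimal simulated-annealing sketch: minimize the length of a random tour
# by accepting downhill swaps always and uphill swaps with probability
# exp(-increase / T), while the temperature T is slowly lowered.
import math
import random

random.seed(0)
n = 20
cities = [(random.random(), random.random()) for _ in range(n)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % n]])
               for i in range(n))

order = list(range(n))
cost = tour_length(order)
T = 1.0
while T > 1e-3:
    for _ in range(100):
        i, j = random.sample(range(n), 2)
        order[i], order[j] = order[j], order[i]       # propose a 2-city swap
        new_cost = tour_length(order)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / T):
            cost = new_cost                           # accept the move
        else:
            order[i], order[j] = order[j], order[i]   # reject: undo the swap
    T *= 0.95                                         # geometric cooling
print(f"final tour length: {cost:.3f}")
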
Book ChapterDOI

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
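
For reference, here is a minimal Python sketch of the generalized delta rule on the classic XOR task, the kind of problem that requires learning an internal representation in the hidden layer. Layer sizes, learning rate, and epoch count are illustrative assumptions.

# Minimal sketch of the generalized delta rule (error backpropagation) for a
# two-layer sigmoid network on XOR. The deltas are the per-unit error terms
# that propagate backwards from the output layer to the hidden layer.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)   # hidden layer (8 units assumed)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)   # output layer
lr = 1.0

for epoch in range(10000):
    h = sigmoid(X @ W1 + b1)                          # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)               # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)                # hidden-layer delta
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))                       # typically approaches [0, 1, 1, 0]
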
Book

Learning internal representations by error propagation

TL;DR: This book chapter introduces the generalized delta rule for learning internal representations by error propagation, discusses the problem it addresses, and illustrates the rule with simulation results and further generalizations.
Journal ArticleDOI

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
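
The Python sketch below illustrates the core idea behind such a forward-mode, continually running algorithm, commonly known as real-time recurrent learning (RTRL): a sensitivity matrix dh/dW is carried forward in time, so the gradient of the instantaneous error is available at every step without storing or backpropagating through the past. The tiny network, toy prediction target, and fixed readout are assumptions for illustration.

# Hedged RTRL sketch for a small vanilla RNN, training only the recurrent
# weights W online. P holds dh/dvec(W) and is updated forward in time.
import numpy as np

rng = np.random.default_rng(0)
n = 5                                    # hidden units (assumed)
W = 0.1 * rng.standard_normal((n, n))    # recurrent weights (trained)
U = rng.standard_normal(n)               # input weights (kept fixed here)
v = rng.standard_normal(n)               # readout weights (kept fixed here)
lr = 0.01

h = np.zeros(n)
P = np.zeros((n, n * n))                 # sensitivity matrix dh/dvec(W)

for t in range(2000):
    x = np.sin(0.1 * t)                  # toy input signal
    target = np.sin(0.1 * (t + 1))       # predict the next input value
    h_prev = h
    h = np.tanh(W @ h_prev + U * x)
    # Sensitivity update: dh/dW = diag(1 - h^2) (W dh_prev/dW + direct term),
    # where the direct term has d z_i / d W_jk = delta_ij * h_prev_k.
    direct = np.kron(np.eye(n), h_prev)              # shape (n, n*n)
    P = (1.0 - h ** 2)[:, None] * (W @ P + direct)
    err = v @ h - target                             # instantaneous error
    grad_W = (err * v) @ P                           # dL/dvec(W) via the sensitivities
    W -= lr * grad_W.reshape(n, n)

print(f"final squared error: {err ** 2:.4f}")
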
Journal ArticleDOI

Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm

TL;DR: A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization; the algorithm is quite costly in terms of function evaluations, but its cost can be predicted in advance and depends only slightly on the starting point.