Journal Article

Learning long-term dependencies with gradient descent is difficult

TLDR
This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and it exposes a trade-off between efficient learning by gradient descent and latching onto information for long periods.
Abstract
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
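
As a minimal, illustrative sketch of the gradient decay described above (not code from the paper): the Python snippet below propagates a tanh RNN state h_t = tanh(W h_{t-1}) and accumulates the Jacobian product d h_t / d h_0. With recurrent weights small enough for the network to latch state stably, the gradient norm shrinks roughly geometrically with the time lag; the hidden size, weight scale and horizon are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20                                               # hidden size (illustrative)
    W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))  # small recurrent weights
    h = rng.normal(size=n)

    # d h_t / d h_0 is a product of per-step Jacobians diag(1 - h_t^2) @ W
    J = np.eye(n)
    for t in range(1, 51):
        h = np.tanh(W @ h)
        J = np.diag(1.0 - h**2) @ W @ J
        if t % 10 == 0:
            print(f"t = {t:3d}   ||d h_t / d h_0|| = {np.linalg.norm(J):.3e}")

Increasing the weight scale avoids the decay but makes the stored state unstable, which is exactly the trade-off the abstract refers to.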


Citations
Journal Article

Short-Term Load Forecasting Based on Deep Neural Networks Using LSTM Layer

TL;DR: The proposed short-term load forecasting (STLF) method, based on a deep neural network with an LSTM layer, is expected to contribute to stable power system operation by providing precise load forecasts.
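
A hedged sketch of the kind of pipeline this TL;DR describes, assuming a tf.keras LSTM regressor that maps a 24-step hourly load window to the next hour's load; the window length, layer sizes, synthetic data and training settings are invented for illustration and are not the paper's model.

    import numpy as np
    import tensorflow as tf

    X = np.random.rand(1000, 24, 1).astype("float32")   # 24 past hourly loads
    y = np.random.rand(1000, 1).astype("float32")       # next-hour load

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(24, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print(model.predict(X[:1], verbose=0).shape)        # (1, 1): one forecast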
Journal Article

Sequence-based modeling of deep learning with LSTM and GRU networks for structural damage detection of floating offshore wind turbine blades

TL;DR: A sequence-based deep learning model for structural damage detection of floating offshore wind turbine (FOWT) blades, built on Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) neural networks, enables engineers to harness vast amounts of digital information to improve the safety of structures.
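
For illustration only: a minimal GRU sequence classifier over simulated multi-channel sensor windows with a binary damaged/undamaged label. Sequence length, channel count and all hyperparameters are assumptions, not the paper's configuration.

    import numpy as np
    import tensorflow as tf

    X = np.random.rand(512, 200, 3).astype("float32")             # 200 steps, 3 sensors
    y = np.random.randint(0, 2, size=(512, 1)).astype("float32")  # damaged or not

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(200, 3)),
        tf.keras.layers.GRU(32),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)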
Journal Article

Semantic-based padding in convolutional neural networks for improving the performance in natural language processing. A case of study in sentiment analysis

TL;DR: The proposed semantic-based padding improves on the results achieved when no padding strategy is applied, and when the model uses pre-trained word embeddings it surpasses the state of the art.
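
The snippet below only shows where a padding strategy enters a convolutional text classifier, using ordinary post-padding via tf.keras; the paper's semantic-based placement policy itself is not reproduced. Token ids, vocabulary size and labels are toy assumptions.

    import numpy as np
    import tensorflow as tf

    seqs = [[4, 7, 2], [9, 1], [3, 8, 5, 6]]            # token-id sequences
    X = tf.keras.utils.pad_sequences(seqs, maxlen=6, padding="post")
    y = np.array([1, 0, 1], dtype="float32")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(6,)),
        tf.keras.layers.Embedding(input_dim=10, output_dim=8),
        tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(X, y, epochs=2, verbose=0)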
Journal Article

Deep learning-based BCI for gait decoding from EEG with LSTM recurrent neural network.

TL;DR: The results support, for the first time, the use of a memory-based deep learning classifier to decode walking activity from non-invasive brain recordings, and suggest that such a classifier could provide a more effective input for devices that restore locomotion in impaired people.
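
A hedged sketch of a memory-based (stacked LSTM) classifier over windowed multi-channel signals, standing in for the EEG gait decoder this TL;DR mentions; channel count, window length, class count and hyperparameters are all invented.

    import numpy as np
    import tensorflow as tf

    X = np.random.rand(400, 250, 32).astype("float32")  # 250-sample windows, 32 channels
    y = np.random.randint(0, 3, size=(400,))            # three toy gait classes

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(250, 32)),
        tf.keras.layers.LSTM(64, return_sequences=True),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)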
Journal Article

Combination of Deep Recurrent Neural Networks and Conditional Random Fields for Extracting Adverse Drug Reactions from User Reviews.

TL;DR: A novel model uniting recurrent neural architectures and conditional random fields is proposed for identifying adverse drug reactions (ADRs) in free-form text, showing improvements over state-of-the-art methods for ADR extraction.
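
In such a model the conditional random field assigns the most likely tag sequence given per-token emission scores (produced by the recurrent network) and tag-transition scores; the core inference step is Viterbi decoding, sketched below in plain NumPy with random toy scores standing in for learned ones.

    import numpy as np

    def viterbi(emissions, transitions):
        """Most likely tag path given emission scores (T x K) and
        tag-to-tag transition scores (K x K), all in the log domain."""
        T, K = emissions.shape
        score = emissions[0].copy()           # best score ending in each tag
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            cand = score[:, None] + transitions   # prev tag -> next tag
            back[t] = cand.argmax(axis=0)
            score = cand.max(axis=0) + emissions[t]
        path = [int(score.argmax())]
        for t in range(T - 1, 0, -1):
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    # Toy scores standing in for BiLSTM emissions over 3 tags (e.g. O, B-ADR, I-ADR)
    rng = np.random.default_rng(1)
    print(viterbi(rng.normal(size=(5, 3)), rng.normal(size=(3, 3))))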
References
Journal Article

Optimization by Simulated Annealing

TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
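
A minimal simulated-annealing sketch in Python: Metropolis acceptance with a geometric cooling schedule, minimizing a toy multimodal one-dimensional function. The proposal width, schedule and objective are illustrative choices, not the paper's procedure.

    import math, random

    def f(x):
        return x * x + 10 * math.sin(3 * x)   # many local minima

    random.seed(0)
    x, T, best = 5.0, 10.0, 5.0
    for step in range(20000):
        x_new = x + random.gauss(0.0, 0.5)    # local random proposal
        dE = f(x_new) - f(x)
        # Always accept downhill moves; accept uphill with prob e^(-dE/T)
        if dE < 0 or random.random() < math.exp(-dE / T):
            x = x_new
            if f(x) < f(best):
                best = x
        T = max(1e-3, T * 0.9995)             # geometric cooling
    print(f"x = {best:.4f}, f(x) = {f(best):.4f}")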
Book Chapter

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
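
The generalized delta rule is what is now called backpropagation. As a self-contained illustration (not the chapter's code), the tiny sigmoid network below learns XOR, with the deltas computed layer by layer; hidden size, learning rate and epoch count are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sig(X @ W1 + b1)                  # forward pass
        out = sig(h @ W2 + b2)
        d2 = (out - y) * out * (1 - out)      # output-layer delta
        d1 = (d2 @ W2.T) * h * (1 - h)        # error propagated backwards
        W2 -= 0.5 * h.T @ d2;  b2 -= 0.5 * d2.sum(axis=0)
        W1 -= 0.5 * X.T @ d1;  b1 -= 0.5 * d1.sum(axis=0)

    print(np.round(out.ravel(), 2))           # typically close to [0, 1, 1, 0]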
Journal Article

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
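
A minimal sketch of the real-time recurrent learning idea: a small fully recurrent tanh network carries the sensitivity tensor P[k, i, j] = dy_k/dw_ij forward in time and takes an online gradient step at every tick. The toy next-value prediction task, network size and learning rate are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, lr = 4, 1, 0.05                 # units, inputs, learning rate
    W = rng.normal(scale=0.3, size=(n, n + m))
    y = np.zeros(n)
    P = np.zeros((n, n, n + m))           # P[k, i, j] = d y_k / d w_ij

    for t in range(500):
        x = np.array([np.sin(0.2 * t)])   # toy input stream
        target = np.sin(0.2 * (t + 1))    # unit 0 predicts the next input
        z = np.concatenate([y, x])
        y_new = np.tanh(W @ z)
        # Sensitivity update: P'[k] = f'(s_k) * (sum_l W[k,l] P[l] + e_k z^T)
        P = np.einsum('kl,lij->kij', W[:, :n], P)
        P[np.arange(n), np.arange(n), :] += z
        P *= (1.0 - y_new**2)[:, None, None]
        y = y_new
        err = y[0] - target
        W -= lr * err * P[0]              # online gradient step for unit 0's loss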
Journal Article

Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm

TL;DR: A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization; the algorithm is quite costly in terms of function evaluations, but its cost can be predicted in advance and depends only slightly on the starting point.