Journal Article
Learning long-term dependencies with gradient descent is difficult
TL;DR: This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching onto information for long periods.
Abstract: Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching onto information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
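The abstract refers to the difficulty gradient descent has with long temporal dependencies. Below is a minimal numerical sketch, my own illustration rather than code from the paper, of the underlying effect: the gradient of a recurrent state with respect to an input T steps in the past is a product of T Jacobians, and in the contracting regime that product shrinks exponentially with T.

```python
import numpy as np

# Track how the Jacobian d h_t / d h_0 of a simple tanh RNN decays with time.
rng = np.random.default_rng(0)
n = 10
W = 0.9 * rng.normal(size=(n, n)) / np.sqrt(n)   # recurrent weights, roughly contracting
h = rng.normal(size=n)

jacobian_product = np.eye(n)
for t in range(1, 101):
    h = np.tanh(W @ h)
    # Jacobian of h_t w.r.t. h_{t-1}: diag(1 - h_t^2) @ W
    jacobian_product = np.diag(1.0 - h ** 2) @ W @ jacobian_product
    if t % 20 == 0:
        print(f"t = {t:3d}   ||dh_t/dh_0|| = {np.linalg.norm(jacobian_product):.3e}")
```

The printed norm falls by many orders of magnitude over 100 steps, which is the numerical face of the trade-off discussed in the paper.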
Citations
Posted Content
Deep Learning for Medical Image Segmentation.
TL;DR: An overview of current state-of-the-art deep learning architectures and optimisation techniques is provided, and the ADNI hippocampus MRI dataset is used as an example to compare the effectiveness and efficiency of different convolutional architectures on the task of patch-based 3-dimensional hippocampal segmentation.
Journal Article
LSTM-Based Battery Remaining Useful Life Prediction With Multi-Channel Charging Profiles
TL;DR: Novel RUL prediction techniques based on long short-term memory (LSTM) estimate remaining useful life even in the presence of the capacity regeneration phenomenon, using multiple measurable signals from the battery management system, such as voltage, current and temperature charging profiles, whose patterns vary with aging.
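A minimal sketch, assuming a PyTorch-style setup, of the kind of model this TL;DR describes: an LSTM that consumes multi-channel charging profiles (voltage, current, temperature) and regresses a remaining-useful-life value. Layer sizes and names are illustrative assumptions, not the architecture from the cited paper.

```python
import torch
import torch.nn as nn

class MultiChannelLSTM(nn.Module):
    def __init__(self, n_channels=3, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # scalar RUL estimate

    def forward(self, x):                        # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # summarise the sequence by its last step

# Toy usage: a batch of 8 charging profiles, 200 samples long, 3 channels each.
model = MultiChannelLSTM()
profiles = torch.randn(8, 200, 3)
rul_estimate = model(profiles)
print(rul_estimate.shape)   # torch.Size([8, 1])
```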
Journal Article
A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification
L.A. Feldkamp, G.V. Puskorius, +1 more
TL;DR: It is shown that a single time-lagged recurrent net can be trained to produce excellent one-time-step predictions for two different time series and also to be robust to severe errors in the input sequence.
Proceedings Article
Dependency Sensitive Convolutional Neural Networks for Modeling Sentences and Documents
TL;DR: DSCNN hierarchically builds textual representations by processing pretrained word embeddings via Long Short-Term Memory networks and subsequently extracting features with convolution operators; it does not rely on parsers or expensive phrase labeling, and thus is not restricted to sentence-level tasks.
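A minimal sketch, assuming a PyTorch-style setup, of the pipeline this TL;DR describes: word embeddings fed through an LSTM, with a convolution and max-pooling over the LSTM outputs to extract features for classification. Dimensions and layer names are illustrative assumptions, not the exact DSCNN configuration.

```python
import torch
import torch.nn as nn

class LSTMThenConv(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=128, n_filters=100, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # would be initialised from pretrained vectors
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.conv = nn.Conv1d(hidden, n_filters, kernel_size=3, padding=1)
        self.classify = nn.Linear(n_filters, n_classes)

    def forward(self, tokens):                           # tokens: (batch, seq_len) of word ids
        h, _ = self.lstm(self.embed(tokens))             # (batch, seq_len, hidden)
        feats = torch.relu(self.conv(h.transpose(1, 2))) # (batch, n_filters, seq_len)
        pooled = feats.max(dim=2).values                 # max-pool over time
        return self.classify(pooled)

model = LSTMThenConv()
logits = model(torch.randint(0, 10000, (4, 30)))
print(logits.shape)   # torch.Size([4, 2])
```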
Journal Article
One Vector is Not Enough: Entity-Augmented Distributed Semantics for Discourse Relations
Yangfeng Ji, Jacob Eisenstein, +1 more
TL;DR: This work computes distributed meaning representations for each discourse argument by composition up the syntactic parse tree and performs a downward compositional pass to capture the meaning of coreferent entity mentions.
References
Journal Article
Optimization by Simulated Annealing
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Book Chapter
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
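The chapter's central technique is the generalized delta rule (error backpropagation). Below is a minimal numpy sketch, my own illustration rather than code from the chapter: output error is propagated backwards through a hidden layer to update both weight matrices of a two-layer sigmoid network.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))                        # toy inputs
y = (X[:, :1] * X[:, 1:] > 0).astype(float)       # toy targets (sign-agreement task)

W1, W2 = rng.normal(size=(2, 3)), rng.normal(size=(3, 1))
lr = 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    h = sigmoid(X @ W1)                            # hidden activations
    out = sigmoid(h @ W2)                          # network output
    delta_out = (out - y) * out * (1 - out)        # delta at the output layer
    delta_hid = (delta_out @ W2.T) * h * (1 - h)   # delta propagated back to the hidden layer
    W2 -= lr * h.T @ delta_out
    W1 -= lr * X.T @ delta_hid
```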
Journal Article
A learning algorithm for continually running fully recurrent neural networks
Ronald J. Williams, David Zipser, +1 more
TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
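The algorithm described here is real-time recurrent learning: sensitivities of the hidden state with respect to every weight are carried forward in time, so the exact gradient is available at each step. Below is a minimal numpy sketch, my own illustration rather than code from the paper, with toy input and teacher signals.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2                                       # hidden units, input size
W = rng.normal(scale=0.5, size=(n, n + m + 1))    # recurrent + input + bias weights
h = np.zeros(n)                                   # hidden state
P = np.zeros((n, n, n + m + 1))                   # P[k, i, j] = d h_k / d W[i, j]
lr = 0.05

for t in range(200):
    x = rng.normal(size=m)                        # toy input stream
    target = np.sin(0.1 * t)                      # toy teacher signal for unit 0
    u = np.concatenate([h, x, [1.0]])             # concatenated input to every unit
    h_new = np.tanh(W @ u)

    # Forward-propagate sensitivities: dz_k/dW_ij = delta_{ki} u_j + sum_l W[k,l] P[l,i,j]
    dz = np.einsum('kl,lij->kij', W[:, :n], P)
    dz[np.arange(n), np.arange(n), :] += u
    P = (1.0 - h_new ** 2)[:, None, None] * dz

    # Squared error on unit 0; the exact gradient comes from the stored sensitivities.
    err = h_new[0] - target
    W -= lr * err * P[0]
    h = h_new
```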
Journal Article
Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm
TL;DR: A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization, which is quite costly in terms of function evaluations, but its cost can be predicted in advance, depending only slightly on the starting point.
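The two annealing references describe the same basic mechanism. Below is a minimal sketch, my own illustration rather than the algorithm as specified in either paper, of simulated annealing on a multimodal function of continuous variables: random perturbations are accepted with a temperature-dependent probability, and because the temperature schedule is fixed, the total number of function evaluations is known in advance.

```python
import numpy as np

def rastrigin(x):
    # A standard multimodal test function with many local minima.
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, size=2)
fx = rastrigin(x)
best_x, best_f = x.copy(), fx
T, cooling, steps_per_T = 10.0, 0.95, 100

for _ in range(200):                  # 200 temperature stages * 100 steps = 20000 evaluations, fixed in advance
    for _ in range(steps_per_T):
        candidate = x + rng.normal(scale=0.5, size=x.shape)
        fc = rastrigin(candidate)
        # Metropolis acceptance: always take improvements, sometimes take uphill moves.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    T *= cooling                      # lower the temperature on a geometric schedule

print(f"best point {best_x}, value {best_f:.4f}")
```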