Journal ArticleDOI
Learning long-term dependencies with gradient descent is difficult
TL;DR
This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on information for long periods.
Abstract
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
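The vanishing-gradient effect behind this trade-off can be illustrated numerically. The sketch below (an illustration, not code from the paper; the dimension, weight scale, and step count are arbitrary choices) back-propagates a gradient through a linear recurrence h_t = W h_{t-1}: the chain rule multiplies the gradient by W at every step, so when the spectral norm of W is below 1 the gradient norm shrinks exponentially with the length of the dependency.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)   # rescale so the spectral norm of W is 0.9 < 1

grad = np.ones(n)                  # stand-in for dL/dh_T at the final step
norms = []
for t in range(50):                # propagate back through 50 time steps
    grad = W.T @ grad              # chain rule through h_t = W h_{t-1}
    norms.append(np.linalg.norm(grad))

print(f"gradient norm after 10 steps: {norms[9]:.3e}")
print(f"gradient norm after 50 steps: {norms[49]:.3e}")
```

Because each step can scale the norm by at most 0.9, the gradient after 50 steps is orders of magnitude smaller than after 10; with a spectral norm above 1 the same loop would instead show exploding gradients.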
Citations
Journal ArticleDOI
Can deep learning beat numerical weather prediction?
Martin G. Schultz, Clara Betancourt, Bing Gong, Felix Kleinert, Michael Langguth, L. H. Leufen, Amirpasha Mozaffari, Scarlet Stadtler +7 more
TL;DR: In this article, the authors examine whether deep learning methods that have proved successful in image recognition, speech recognition, robotics, strategic games, and other artificial-intelligence tasks can compete with numerical weather prediction.
Posted Content
Physics-Guided Machine Learning for Scientific Discovery: An Application in Simulating Lake Temperature Profiles
Xiaowei Jia, Jared Willard, Anuj Karpatne, Jordan S. Read, Jacob A. Zwart, Michael Steinbach, Vipin Kumar +6 more
TL;DR: This article proposes a physics-guided recurrent neural network model (PGRNN) that combines RNNs and physics-based models to leverage their complementary strengths and improve the modeling of physical processes.
Journal ArticleDOI
An LSTM-Based Method with Attention Mechanism for Travel Time Prediction
TL;DR: The experimental results show that the proposed model achieves better accuracy than Long Short-Term Memory and other baseline methods, and the case study suggests that the attention mechanism effectively exploits departure-time information.
Proceedings ArticleDOI
On Estimating Air Pollution from Photos Using Convolutional Neural Network
TL;DR: An effective convolutional neural network is devised to estimate air quality from photos, and a modified activation function is developed for photo-based air pollution estimation that alleviates the vanishing-gradient issue.
Journal ArticleDOI
Water quality prediction based on recurrent neural network and improved evidence theory: a case study of Qiantang River, China
TL;DR: The RNNs-DS algorithm has been deployed in the authors' self-developed water-environment monitoring and forecasting system, where it provides effective support for early risk assessment and prevention in the water environment.
References
Journal ArticleDOI
Optimization by Simulated Annealing
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Book ChapterDOI
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal ArticleDOI
A learning algorithm for continually running fully recurrent neural networks
Ronald J. Williams, David Zipser
TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal ArticleDOI
Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm
TL;DR: A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization, which is quite costly in terms of function evaluations, but its cost can be predicted in advance, depending only slightly on the starting point.