Journal ArticleDOI
Learning long-term dependencies with gradient descent is difficult
TLDR
This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on information for long periods.
Abstract
Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
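The difficulty the abstract describes can be seen numerically: backpropagating through a recurrence multiplies the gradient by the recurrent Jacobian at every step, so when that Jacobian's largest singular value is below 1 the gradient shrinks geometrically with the time span. A minimal sketch (illustrative, not code from the paper), assuming a linear recurrence h_t = W h_{t-1} with W chosen to have spectral norm 0.9:

```python
# Illustrative sketch: repeatedly applying the chain rule through a linear
# recurrence h_t = W h_{t-1} shrinks the gradient geometrically when the
# spectral norm of W is below 1 -- the "vanishing gradient" trade-off above.
import numpy as np

rng = np.random.default_rng(0)
Q = np.linalg.qr(rng.standard_normal((8, 8)))[0]  # random orthogonal matrix
W = 0.9 * Q                                       # spectral norm exactly 0.9

grad = np.eye(8)           # d h_T / d h_T at the final step
norms = []
for _ in range(50):        # backpropagate 50 steps into the past
    grad = grad @ W        # one chain-rule step through the recurrence
    norms.append(np.linalg.norm(grad, 2))

print(norms[0], norms[-1])  # decays like 0.9**t: ~0.9 down to ~0.005
```

With the norm above 1 the same loop explodes instead; only near 1 can the gradient carry information across long spans, which is the latching/learning tension the abstract names.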
Citations
Journal ArticleDOI
Deep learning for in vitro prediction of pharmaceutical formulations.
TL;DR: In this article, two types of dosage forms were chosen as model systems and evaluation criteria suitable for pharmaceutics were applied to assess the performance of the models, and an automatic dataset selection algorithm was developed for selecting the representative data as validation and test datasets.
Posted Content
Multi-View Representation Learning: A Survey from Shallow Methods to Deep Methods
TL;DR: This survey aims to provide an insightful overview of the theoretical basis and state-of-the-art developments in the field of multi-view representation learning and to help researchers find the most appropriate tools for particular applications.
Journal ArticleDOI
Predicting flood susceptibility using LSTM neural networks
TL;DR: A local spatial sequential long short-term memory neural network (LSS-LSTM) for flood susceptibility prediction in Shangyou County, China is proposed, which can not only capture both attribution information of flood conditioning factors and local spatial information of flooding data, but also retain the powerful sequential modelling capability to deal with flood spatial relationship.
Proceedings ArticleDOI
Anchor Diffusion for Unsupervised Video Object Segmentation
TL;DR: Inspired by the non-local operators, a technique to establish dense correspondences between pixel embeddings of a reference "anchor" frame and the current one is introduced, which allows the learning of pairwise dependencies at arbitrarily long distances without conditioning on intermediate frames.
Journal ArticleDOI
Predicting online shopping behaviour from clickstream data using deep learning
TL;DR: Estimates of revenue impact together with results of standard classifier performance metrics evidence the viability of RNN-based clickstream modeling and guide employing deep recurrent learners for campaign targeting.
References
Journal ArticleDOI
Optimization by Simulated Annealing
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
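The analogy this entry describes is straightforward to sketch: candidate moves that worsen the objective are accepted with Boltzmann probability exp(-delta/T), and the temperature T is slowly lowered. A minimal, self-contained illustration (assumed toy objective x^2 and cooling schedule; not the paper's code):

```python
# Minimal simulated-annealing sketch: minimize f(x) = x^2.
# Uphill moves are accepted with probability exp(-delta/T); the
# temperature T is lowered geometrically ("annealed") each step.
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=5000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)   # random neighbouring state
        fc = f(cand)
        # always accept improvements; accept worse moves with prob exp(-delta/T)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # cooling schedule
    return best, fbest

best, fbest = simulated_annealing(lambda x: x * x, x0=5.0)
print(fbest)  # close to the global minimum at 0
```

At high temperature the walk explores freely; as T falls it behaves increasingly like greedy descent, which is the statistical-mechanics analogy the TL;DR points to.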
Book ChapterDOI
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal ArticleDOI
A learning algorithm for continually running fully recurrent neural networks
Ronald J. Williams, David Zipser, et al.
TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal ArticleDOI
Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm
TL;DR: A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization, which is quite costly in terms of function evaluations, but its cost can be predicted in advance, depending only slightly on the starting point.