Journal ArticleDOI
Learning long-term dependencies with gradient descent is difficult
TL;DR: This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on to information for long periods.
Abstract: Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production, or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on to information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
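The difficulty described in the abstract can be illustrated with a small numerical sketch (a hypothetical example, not code from the paper): in backpropagation through time, the gradient is scaled at every step by the Jacobian of the state transition, so when that Jacobian's spectral radius is below 1 the gradient of an error at time T with respect to inputs far in the past shrinks roughly geometrically with the temporal distance.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 50   # temporal distance of the dependency (number of time steps)
n = 10   # hidden state size

# Random recurrent weight matrix, rescaled so its spectral radius is 0.9 (< 1),
# the regime in which the network can reliably latch state but gradients vanish.
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Backpropagating through T steps of a linear recurrence h_t = W h_{t-1}
# multiplies the gradient by W^T at every step.
grad = np.ones(n)
norms = []
for _ in range(T):
    grad = W.T @ grad
    norms.append(np.linalg.norm(grad))

print(f"gradient norm after 1 step:  {norms[0]:.3f}")
print(f"gradient norm after {T} steps: {norms[-1]:.2e}")
# The norm decays roughly like 0.9**T, so the learning signal from
# dependencies T steps back becomes negligible next to short-term terms.
```

This is the trade-off the paper identifies: a spectral radius below 1 is what lets the network robustly store (latch) information, yet it is exactly the condition under which the long-range gradient contributions vanish.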
Citations
Proceedings Article
Preventing Gradient Explosions in Gated Recurrent Units
TL;DR: This paper finds a condition under which the dynamics of the GRU change drastically and proposes a learning method that prevents the exploding gradient problem and improves modeling accuracy.
Journal ArticleDOI
Development and application of a deep learning–based sparse autoencoder framework for structural damage identification
Chathurdara Sri Nadith Pathirage, Jun Li, Ling Li, Hong Hao, Wanquan Liu, Ruhua Wang, et al.
TL;DR: A deep sparse autoencoders based deep neural network structure is proposed to enhance the capability and performance of the dimensionality reduction and relationship learning components with a pre-training scheme for structural damage identification.
Posted Content
A Hierarchical Recurrent Encoder-Decoder For Generative Context-Aware Query Suggestion
Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, Jian-Yun Nie, et al.
TL;DR: In this paper, a hierarchical recurrent encoder-decoder architecture is proposed to model the order of queries in the context while avoiding data sparsity, which can be used to generate context-aware query suggestions.
Journal ArticleDOI
CT-image of rock samples super resolution using 3D convolutional neural network
TL;DR: A novel network named the three-dimensional super-resolution convolutional neural network (3DSRCNN) is proposed to realize voxel super resolution for CT images and to address practical training problems such as slow convergence and insufficient memory.
Journal ArticleDOI
Robust combination of neural networks and hidden Markov models for speech recognition
Edmondo Trentin, Marco Gori
TL;DR: Experimental results in speaker-independent, continuous speech recognition over Italian digit-strings validate the novel hybrid framework, allowing for improved recognition performance over HMMs with mixtures of Gaussian components, as well as over Bourlard and Morgan's paradigm.
References
Journal ArticleDOI
Optimization by Simulated Annealing
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
Book ChapterDOI
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal ArticleDOI
A learning algorithm for continually running fully recurrent neural networks
Ronald J. Williams, David Zipser
TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal ArticleDOI
Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm
TL;DR: A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization. It is quite costly in terms of function evaluations, but its cost can be predicted in advance, depending only slightly on the starting point.