Journal ArticleDOI
Learning long-term dependencies with gradient descent is difficult
TL;DR: This work shows why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on to information for long periods.
Abstract: Recurrent neural networks can be used to map input sequences to output sequences, as in recognition, production, or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on to information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
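The difficulty the abstract describes can be illustrated numerically. A minimal NumPy sketch (an illustration under a simplifying assumption, not code from the paper): for a linear recurrence, the gradient contribution from a state T steps in the past involves a product of T recurrent Jacobians, so if the recurrent weight matrix has spectral norm below 1, that contribution shrinks exponentially with T.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10

# Recurrent weight matrix, scaled so its largest singular value is 0.9 (< 1).
W = rng.standard_normal((n, n))
W *= 0.9 / np.linalg.norm(W, 2)

# Back-propagate an error vector through 50 time steps (linear case:
# each step multiplies by the recurrent Jacobian W).
v = rng.standard_normal(n)
norms = []
for t in range(50):
    v = W @ v
    norms.append(np.linalg.norm(v))

# The gradient signal decays at least as fast as 0.9**t, so information
# about events far in the past contributes almost nothing to the update.
print(norms[0], norms[-1])
```

Conversely, a spectral norm above 1 makes the same product explode; reliably storing (latching) a bit of state for long periods pushes the dynamics toward the contracting regime, which is exactly the regime in which gradients vanish.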
Citations
Proceedings ArticleDOI
Hardware Trojans classification for gate-level netlists using multi-layer neural networks
TL;DR: A machine-learning-based hardware-Trojan detection method for gate-level netlists using multi-layer neural networks; the proposed method achieves up to a 100% true positive rate.
Journal ArticleDOI
A new unsupervised data mining method based on the stacked autoencoder for chemical process fault diagnosis
Shaodong Zheng, Jinsong Zhao, et al.
TL;DR: A new unsupervised data mining method based on deep learning is proposed for isolating different conditions of chemical process, including normal operations and faults, and thus labeled database can be created efficiently for constructing fault diagnosis model.
Journal ArticleDOI
Deep Neural Network for Respiratory Sound Classification in Wearable Devices Enabled by Patient Specific Model Tuning
Jyotibdha Acharya, Arindam Basu, et al.
TL;DR: A deep CNN-RNN model that classifies respiratory sounds from Mel-spectrograms is proposed; it achieves a state-of-the-art score on the ICBHI’17 dataset, and the deep learning models are shown to learn domain-specific knowledge when pre-trained with breathing data, producing significantly superior performance compared to generalized models.
Posted Content
Sales Demand Forecast in E-commerce using a Long Short-Term Memory Neural Network Methodology
TL;DR: This work introduces several product grouping strategies to supplement the LSTM learning schemes, in situations where sales patterns in a product portfolio are disparate, and achieves competitive results on category level and super-departmental level datasets, outperforming state-of-the-art techniques.
Proceedings ArticleDOI
LSTM networks for data-aware remaining time prediction of business process instances
TL;DR: In this paper, an approach based on deep recurrent neural networks (specifically LSTMs) is proposed to predict the completion time of business process instances under service level agreement constraints.
References
Journal ArticleDOI
Optimization by Simulated Annealing
TL;DR: There is a deep and useful connection between statistical mechanics and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters), and a detailed analogy with annealing in solids provides a framework for optimization of very large and complex systems.
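The annealing analogy in that summary reduces to a simple acceptance rule: always accept moves that lower the objective, accept uphill moves with probability exp(-delta/T), and slowly lower the temperature T. A minimal sketch for illustration (the test function and schedule are assumptions, not the authors' setup):

```python
import math
import random

def f(x):
    # Multimodal 1-D test function: quadratic bowl plus oscillation.
    return x * x + 3.0 * math.sin(5.0 * x)

random.seed(0)
x = 4.0            # starting point
temp = 2.0         # initial "temperature"

for step in range(5000):
    candidate = x + random.uniform(-0.5, 0.5)   # random local move
    delta = f(candidate) - f(x)
    # Metropolis criterion: downhill moves are always accepted,
    # uphill moves with probability exp(-delta / temp).
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.999   # geometric cooling schedule

print(x, f(x))     # settles near a low-lying minimum of f
```

The early high-temperature phase lets the search jump between basins; as the temperature falls, the walk freezes into a low-lying one, which is why the method can escape local minima that plain descent cannot.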
Book ChapterDOI
Learning internal representations by error propagation
TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Journal ArticleDOI
A learning algorithm for continually running fully recurrent neural networks
Ronald J. Williams, David Zipser
TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
Journal ArticleDOI
Minimizing multimodal functions of continuous variables with the “simulated annealing” algorithm
TL;DR: A new global optimization algorithm for functions of continuous variables is presented, derived from the “Simulated Annealing” algorithm recently introduced in combinatorial optimization, which is quite costly in terms of function evaluations, but its cost can be predicted in advance, depending only slightly on the starting point.