Journal ArticleDOI

Financial time series forecasting model based on CEEMDAN and LSTM

TLDR
Two hybrid forecasting models are proposed which combine two kinds of empirical mode decomposition (EMD) with long short-term memory (LSTM) networks, achieving better performance in one-step-ahead forecasting of financial time series.
Abstract
In order to improve the accuracy of stock market price forecasting, two hybrid forecasting models are proposed in this paper which combine two kinds of empirical mode decomposition (EMD) with the long short-term memory (LSTM) network. A financial time series is a non-linear and non-stationary random signal, which can be decomposed into several intrinsic mode functions of different time scales by the original EMD and by the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN). To capture the effect of historical data on the prediction result, an LSTM prediction model is established for each characteristic series obtained from the EMD and CEEMDAN decompositions. The final prediction results are obtained by reconstructing the individual prediction series. The forecasting performance of the proposed models is verified by linear regression analysis of the major global stock market indices. Compared with a single LSTM model, a support vector machine (SVM), a multi-layer perceptron (MLP) and other hybrid models, the experimental results show that the proposed models deliver better performance in one-step-ahead forecasting of financial time series.
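The decompose → per-component forecast → reconstruct pipeline described in the abstract can be sketched as follows. This is a minimal illustration under stated substitutions, not the paper's implementation: a moving-average split stands in for the CEEMDAN intrinsic mode functions (a real implementation might use PyEMD's CEEMDAN class), and a least-squares autoregressive model stands in for the per-component LSTM. All function names here are hypothetical.

```python
import numpy as np

def decompose(series, window=5):
    """Stand-in for CEEMDAN: split the series into a smooth 'trend'
    component (moving average) and a 'detail' residual.  A real CEEMDAN
    would produce several IMFs; like IMFs, these components sum back
    to the original series exactly."""
    kernel = np.ones(window) / window
    padded = np.pad(series, (window - 1, 0), mode="edge")
    trend = np.convolve(padded, kernel, mode="valid")
    detail = series - trend
    return [trend, detail]

def forecast_component(comp, lag=3):
    """Stand-in for a per-component LSTM: a least-squares AR(lag) model
    fitted on the component, returning a one-step-ahead prediction."""
    X = np.stack([comp[i:len(comp) - lag + i] for i in range(lag)], axis=1)
    y = comp[lag:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return comp[-lag:] @ coef

def hybrid_one_step_forecast(series):
    """Decompose, forecast each component, and reconstruct by summing --
    the same pipeline shape as the paper's CEEMDAN-LSTM model."""
    components = decompose(series)
    return sum(forecast_component(c) for c in components)

# Toy usage on a noisy sine "price" series
rng = np.random.default_rng(0)
t = np.arange(200)
series = np.sin(t / 10.0) + 0.1 * rng.standard_normal(200)
print(hybrid_one_step_forecast(series))
```

The structural point mirrored here is the reconstruction step: each component is forecast independently and the component-level predictions are then recombined into the final one-step-ahead prediction.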


Citations
Journal ArticleDOI

Applications of deep learning in stock market prediction: recent progress

TL;DR: A review of recent work on deep learning models for stock market prediction that categorizes the different data sources, the various neural network structures, and the commonly used evaluation metrics, to help interested researchers keep up with the latest progress and easily reproduce previous studies as baselines.
Journal ArticleDOI

Multi-step wind speed forecasting based on hybrid multi-stage decomposition model and long short-term memory neural network

TL;DR: A combination of two signal decomposition strategies, variational mode decomposition (VMD) and singular spectral analysis (SSA), with modulation signal theory is proposed; the results indicate that the ensemble learning framework is robust and reliable for the wind speed forecasting task.
Journal ArticleDOI

Stock price prediction using deep learning and frequency decomposition

TL;DR: The practical findings confirm this claim and indicate that a CNN alongside LSTM with CEEMD or EMD can enhance prediction accuracy and outperform other counterparts, and that the suggested algorithm with CEEMD performs better than with EMD.
Journal ArticleDOI

Forecasting oil production using ensemble empirical model decomposition based Long Short-Term Memory neural network

TL;DR: An ensemble empirical mode decomposition (EEMD) based Long Short-Term Memory (LSTM) learning paradigm is proposed for oil production forecasting; experiments demonstrate that it is capable of giving nearly perfect production forecasts.
Journal ArticleDOI

Carbon price forecasting based on CEEMDAN and LSTM

TL;DR: Li et al., as discussed by the authors, proposed a hybrid model combining Variational Mode Decomposition (VMD) for carbon price forecasting, but their method still needs to be optimized for practice.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
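The Adam update the TL;DR summarizes — exponential moving averages of the gradient and its square, bias correction, then a per-parameter scaled step — can be sketched in a few lines. The hyperparameter defaults follow the paper's suggested values; `adam_step` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: biased first/second moment estimates, bias
    correction, then a per-parameter scaled gradient step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)          # bias-corrected first moment
    v_hat = v / (1 - b2**t)          # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2 starting from x = 5
theta, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)   # approaches 0
```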
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
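A single forward step of the LSTM cell in its now-standard form can be sketched as below. This is a simplified illustration rather than the exact 1997 architecture (which lacked the forget gate added in later work); the cell state `c` plays the role of the constant error carousel mentioned above, and the function names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One forward step of a standard LSTM cell.  W maps the concatenated
    [x, h] vector to the four gate pre-activations; c is the cell state
    whose additive update lets error flow across long time lags."""
    z = W @ np.concatenate([x, h]) + b
    H = h.size
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:4 * H])    # candidate cell update
    c = f * c + i * g              # additive cell-state update
    h = o * np.tanh(c)             # new hidden state
    return h, c
```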
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
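The mechanism the TL;DR refers to is simple to sketch. Shown here is the "inverted dropout" formulation commonly used in practice (a hedged sketch, not the paper's exact formulation): each activation is zeroed with probability p during training and survivors are rescaled so the expected activation is unchanged, while at test time activations pass through untouched.

```python
import numpy as np

def dropout(a, p=0.5, rng=None, train=True):
    """Inverted dropout: during training, zero each activation with
    probability p and rescale survivors by 1/(1-p) so the expected
    activation is unchanged; at test time, return activations as-is."""
    if not train:
        return a
    rng = rng or np.random.default_rng()
    mask = rng.random(a.shape) >= p
    return a * mask / (1.0 - p)
```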
Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
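The weight-adjustment rule described above can be sketched for the simplest case of a single linear layer under squared error: forward pass, output error, gradient via the chain rule, then a descent step. The helper name is hypothetical and this omits the hidden layers where back-propagation's chain rule really earns its keep.

```python
import numpy as np

def backprop_step(W, x, target, lr=0.1):
    """One weight update for a single linear layer under squared error:
    forward pass, output error, gradient of the error with respect to
    the weights via the chain rule, then a gradient-descent step."""
    y = W @ x                      # forward pass
    err = y - target               # output error
    grad = np.outer(err, x)        # dE/dW by the chain rule
    return W - lr * grad
```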
Posted Content

Adam: A Method for Stochastic Optimization

TL;DR: In this article, adaptive estimates of lower-order moments are used for first-order gradient-based optimization of stochastic objective functions.