Autoregressive and neural network models: a comparative study with linearly lagged series

TL;DR: In this article, the authors compare the performance of autoregressive (AR), neural network (NN) and long short-term memory (LSTM) models in forecasting linearly lagged time series.
Abstract: Time series analysis such as stock price forecasting is an important part of financial research. In this regard, autoregressive (AR) and neural network (NN) models offer contrasting approaches to time series modeling. Although AR models remain widely used, NN models and their variant long short-term memory (LSTM) networks have grown in popularity. In this paper, we compare the performance of AR, NN, and LSTM models in forecasting linearly lagged time series. To test the models, we carry out extensive numerical experiments based on simulated data. The results of the experiments reveal that despite the inherent advantage of AR models in modeling linearly lagged data, NN models perform just as well, if not better, than AR models. Furthermore, the NN models outperform LSTMs on the same data. We find that a simple multi-layer perceptron can achieve highly accurate out-of-sample forecasts. The study shows that NN models perform well even in the case of linearly lagged time series.
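As a rough illustration of the kind of experiment the abstract describes, the sketch below simulates a linearly lagged (AR(2)) series and compares one-step forecasts from a fitted AR model against a small multi-layer perceptron. The coefficients, noise level, and network size are illustrative assumptions, not the paper's actual experimental setup.

```python
# Hedged sketch: simulate a linearly lagged series and compare AR vs. MLP
# one-step forecasts. All parameters here are illustrative assumptions.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, split = 1200, 1000
y = np.zeros(n)
for t in range(2, n):          # AR(2): y_t = 0.6*y_{t-1} - 0.3*y_{t-2} + noise
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.1)
train, test = y[:split], y[split:]

# AR benchmark: fit an AR(2) on the training segment, forecast one step ahead
ar_fit = AutoReg(train, lags=2).fit()
c, a1, a2 = ar_fit.params                       # intercept, lag-1, lag-2 coefficients
ar_pred = c + a1 * y[split - 1:-1] + a2 * y[split - 2:-2]

# NN benchmark: a small MLP fed the same two lagged values as inputs
X = np.column_stack([y[1:-1], y[:-2]])          # features [y_{t-1}, y_{t-2}]
target = y[2:]
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:split - 2], target[:split - 2])
mlp_pred = mlp.predict(X[split - 2:])

print("AR  out-of-sample MSE:", mean_squared_error(test, ar_pred))
print("MLP out-of-sample MSE:", mean_squared_error(test, mlp_pred))
```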
Citations
Book Chapter
01 Jan 2022
TL;DR: An empirical study using different configurations of generalized autoregressive conditionally heteroskedastic (GARCH) time series shows that DL models can achieve a significant degree of accuracy in fitting and forecasting AR-GARCH time series.
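Only as a hedged sketch of the kind of data that study works with, the snippet below simulates a GARCH(1,1) return series with arbitrary illustrative parameters (omega = 0.05, alpha = 0.10, beta = 0.85); the chapter's actual configurations are not reproduced here.

```python
# Hedged sketch: simulate a GARCH(1,1) series; parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
omega, alpha, beta = 0.05, 0.10, 0.85
n = 2000
r = np.zeros(n)                                 # simulated returns
sigma2 = np.zeros(n)                            # conditional variance
sigma2[0] = omega / (1.0 - alpha - beta)        # unconditional variance as a start
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
# r can then be windowed into (lagged inputs -> next value) pairs and fed to a
# DL model, as in the sketch following the paper's abstract above.
```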
References
Journal Article
TL;DR: A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Abstract: Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.
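For orientation only, here is a minimal sketch of an LSTM forecaster built with Keras (the library referenced further down this list), applied to a toy noisy sine series; the window length, unit count, and training settings are illustrative assumptions, not taken from this paper or the one above.

```python
# Hedged sketch: a small Keras LSTM trained to predict the next value of a
# toy series from a sliding window; settings are illustrative only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 3000)) + rng.normal(scale=0.05, size=3000)

window = 50                                     # past steps fed to the network
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                          # shape (samples, timesteps, 1)

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    layers.LSTM(32),                            # gated units carry long-range state
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:2500], y[:2500], epochs=5, batch_size=64, verbose=0)
print("held-out MSE:", model.evaluate(X[2500:], y[2500:], verbose=0))
```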

72,897 citations

Proceedings Article
01 Jan 2010
TL;DR: The current relationship between statistics and Python and open source more generally is discussed, outlining how the statsmodels package fills a gap in this relationship.
Abstract: Statsmodels is a library for statistical and econometric analysis in Python. This paper discusses the current relationship between statistics and Python and open source more generally, outlining how the statsmodels package fills a gap in this relationship. An overview of statsmodels is provided, including a discussion of the overarching design and philosophy, what can be found in the package, and some usage examples. The paper concludes with a look at what the future holds.
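As a small usage sketch of the autoregressive interface that statsmodels exposes, assuming a made-up toy series and an arbitrary lag order of 3:

```python
# Hedged sketch: fit and forecast with statsmodels' AutoReg on toy data.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))             # toy series (random walk)

res = AutoReg(y[:450], lags=3).fit()            # AR(3) on the first 450 points
print(res.summary())                            # coefficient table and diagnostics
forecast = res.predict(start=450, end=499)      # out-of-sample forecast for the rest
```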

3,116 citations

Book
22 Dec 2017
TL;DR: Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library and builds your understanding through intuitive explanations and practical examples to apply deep learning in your own projects.
Abstract: Summary Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications. About the Technology Machine learning has made remarkable progress in recent years. We went from near-unusable speech and image recognition, to near-human accuracy. We went from machines that couldn't beat a serious Go player, to defeating a world champion. Behind this progress is deep learning, a combination of engineering advances, best practices, and theory that enables a wealth of previously impossible smart applications. About the Book Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples. You'll explore challenging concepts and practice with applications in computer vision, natural-language processing, and generative models. By the time you finish, you'll have the knowledge and hands-on skills to apply deep learning in your own projects. What's Inside: deep learning from first principles; setting up your own deep-learning environment; image-classification models; deep learning for text and sequences; neural style transfer, text generation, and image generation. About the Reader Readers need intermediate Python skills. No previous experience with Keras, TensorFlow, or machine learning is required. About the Author François Chollet works on deep learning at Google in Mountain View, CA. He is the creator of the Keras deep-learning library, as well as a contributor to the TensorFlow machine-learning framework. He also does deep-learning research, with a focus on computer vision and the application of machine learning to formal reasoning. His papers have been published at major conferences in the field, including the Conference on Computer Vision and Pattern Recognition (CVPR), the Conference and Workshop on Neural Information Processing Systems (NIPS), the International Conference on Learning Representations (ICLR), and others.

868 citations

Journal Article
TL;DR: It is concluded that RNNs are capable of modelling seasonality directly if the series in the dataset possess homogeneous seasonal patterns; otherwise, a deseasonalisation step is recommended.
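A rough sketch of the recommended deseasonalisation step, using statsmodels' classical decomposition on a made-up monthly-style series; the period and data are illustrative assumptions, not taken from that study.

```python
# Hedged sketch: remove the seasonal component before feeding a series to an RNN.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
t = np.arange(600)
series = pd.Series(10 * np.sin(2 * np.pi * t / 12) + 0.05 * t + rng.normal(size=600))

decomp = seasonal_decompose(series, model="additive", period=12)
deseasonalised = series - decomp.seasonal       # train the RNN on this series
# After forecasting, the seasonal component is added back to the predictions.
```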

450 citations

Journal Article
01 Jun 2002
TL;DR: In modeling the stochastic nature of reliability data, both the ARIMA and the recurrent neural network (RNN) models outperform the feed-forward model in terms of lower predictive errors and a higher percentage of correct reversal detection; however, both models perform better with short-term forecasting.
Abstract: This paper aims to investigate suitable time series models for repairable system failure analysis. A comparative study of the Box-Jenkins autoregressive integrated moving average (ARIMA) models and artificial neural network models in predicting failures is carried out. The neural network architectures evaluated are the multi-layer feed-forward network and the recurrent network. Simulation results on a set of compressor failures showed that, in modeling the stochastic nature of reliability data, both the ARIMA and the recurrent neural network (RNN) models outperform the feed-forward model in terms of lower predictive errors and a higher percentage of correct reversal detection. However, both models perform better with short-term forecasting. The effect of varying the damped feedback weights in the recurrent net is also investigated, and it was found that the RNN at the optimal weighting factor gives satisfactory performance compared to the ARIMA model.
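As a hedged sketch of the Box-Jenkins side of such a comparison, the snippet below fits an illustrative ARIMA(1,1,1) with statsmodels to made-up failure-time data; the paper's compressor dataset and model orders are not reproduced here.

```python
# Hedged sketch: ARIMA forecasting on toy failure-time data; order is illustrative.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
tbf = np.cumsum(rng.exponential(scale=5.0, size=120))   # toy cumulative failure times

model = ARIMA(tbf[:100], order=(1, 1, 1)).fit()
print(model.forecast(steps=20))                          # short-term forecasts
```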

300 citations