
Showing papers on "Moving-average model published in 2017"


Journal ArticleDOI
TL;DR: The results of this study show that non-parametric based hybrid models generally outperform the other models and have more robust forecast performances.

86 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that the proposed hybrid model has the best forecasting performance in the comparisons of all the involved mainstream wind speed forecasting models.

80 citations


Journal ArticleDOI
TL;DR: A prediction method based on wavelet transform and multi-model fusion is proposed, in which an improved free search algorithm optimizes the predictive model parameters, in order to improve the prediction accuracy of chaotic time series.
Abstract: In order to improve the prediction accuracy of chaotic time series, a prediction method based on wavelet transform and multi-model fusion is proposed. The chaotic time series is decomposed and reconstructed by wavelet transform, yielding approximation components and detail components. According to the different characteristics of each component, least squares support vector machine (LSSVM) is used as the predictive model for the approximation components, with an improved free search algorithm used to optimize the predictive model parameters. An autoregressive integrated moving average (ARIMA) model is used as the predictive model for the detail components. The predictive values of the multiple models are fused by the Gauss–Markov algorithm; the error variance of the fused result is smaller than that of any single model, so the prediction accuracy is improved. The method is evaluated on two typical chaotic time series, the Lorenz and Mackey–Glass series. The simulation results show that the proposed method yields better predictions.

74 citations
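The Gauss–Markov fusion step described in this abstract amounts to inverse-variance weighting of (approximately unbiased) component predictions. A minimal sketch under that assumption; the function name and two-predictor setup are illustrative, not the authors' code:

```python
def gauss_markov_fuse(preds, variances):
    """Fuse unbiased predictions by inverse-variance (Gauss-Markov) weighting.

    Weights are proportional to 1/variance, so the fused estimate minimises
    the combined error variance, which is never larger than the smallest
    single-model variance.
    """
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    fused = sum((w / total) * p for w, p in zip(inv, preds))
    fused_var = 1.0 / total
    return fused, fused_var

# usage: the low-variance predictor dominates the fused value
fused, var = gauss_markov_fuse([10.0, 11.0], [4.0, 1.0])
```

With variances 4 and 1, the weights are 0.2 and 0.8, so the fused value is 10.8 and the fused variance 0.8, below both inputs.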


Journal ArticleDOI
15 Aug 2017-Energy
TL;DR: In this article, the authors developed a new forecasting system for hourly electricity load in six Italian macro-regions, based on a dynamic regression model in which important external predictors are included in a seasonal autoregressive integrated moving average process with exogenous variables (SARIMAX). Specifically, the external variables are lagged hourly loads and calendar effects.

67 citations
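The dynamic-regression core of such a SARIMAX system can be illustrated with a single lagged-load regressor. A hedged pure-Python sketch (in practice one would use a full SARIMAX implementation, e.g. the one in statsmodels; the function and data here are illustrative):

```python
def fit_lag_regression(loads, lag):
    """Closed-form least squares for the dynamic-regression core
    y_t = a + b * y_{t-lag}, e.g. lag=24 for hourly load with a daily cycle.
    A full SARIMAX model would add further lags, calendar dummies and a
    seasonal ARIMA error structure on top of this."""
    x, y = loads[:-lag], loads[lag:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# usage: a perfectly daily-periodic load is explained exactly by its 24 h lag
loads = [float(t % 24) for t in range(100)]
a, b = fit_lag_regression(loads, 24)
```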


Journal ArticleDOI
TL;DR: The purpose of this article is to apply to time series a well‐defined local measure of serial dependence called the local Gaussian autocorrelation, which generally works well also for nonlinear models, and it can distinguish between positive and negative dependence.
Abstract: The traditional and most used measure for serial dependence in a time series is the autocorrelation function. This measure gives a complete characterization of dependence for a Gaussian time series, but it often fails for nonlinear time series models as, for instance, the generalized autoregressive conditional heteroskedasticity model (GARCH), where it is zero for all lags. The autocorrelation function is an example of a global measure of dependence. The purpose of this article is to apply to time series a well-defined local measure of serial dependence called the local Gaussian autocorrelation. It generally works well also for nonlinear models, and it can distinguish between positive and negative dependence. We use this measure to construct a test of independence based on the bootstrap technique. This procedure requires the choice of a bandwidth parameter that is calculated using a cross validation algorithm. To ensure the validity of the test, asymptotic properties are derived for the test functional and for the bootstrap procedure, together with a study of its power for different models. We compare the proposed test with one based on the ordinary autocorrelation and with one based on the Brownian distance correlation. The new test performs well. Finally, there are also two empirical examples.

20 citations
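As a much-simplified illustration of such a resampling-based independence test, the sketch below uses the ordinary lag-1 autocorrelation with a permutation null rather than the paper's local Gaussian autocorrelation, bandwidth selection and bootstrap; names and parameters are illustrative:

```python
import random

def lag1_acf(x):
    """Ordinary (global) sample autocorrelation at lag 1."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def permutation_independence_test(x, n_perm=199, seed=0):
    """p-value for H0: independence. Shuffling destroys serial structure, so
    under H0 the observed |ACF(1)| should not be extreme among permutations."""
    rng = random.Random(seed)
    obs = abs(lag1_acf(x))
    count = 1  # include the observed statistic
    y = list(x)
    for _ in range(n_perm):
        rng.shuffle(y)
        if abs(lag1_acf(y)) >= obs:
            count += 1
    return count / (n_perm + 1)

# usage: a strongly dependent AR(1)-like series should be rejected
rng = random.Random(42)
series, prev = [], 0.0
for _ in range(200):
    prev = 0.9 * prev + rng.gauss(0.0, 1.0)
    series.append(prev)
p = permutation_independence_test(series)
```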


Journal ArticleDOI
TL;DR: A changepoint model, which can detect either a mean shift or a trend change while accounting for autocorrelation in short time series, was investigated with simulations and a new method is proposed. The results imply that it is not possible to detect the autocorrelation reliably and that the estimate of the autocorrelation parameter is biased.
Abstract: In this study, a changepoint model, which can detect either a mean shift or a trend change when accounting for autocorrelation in short time-series, was investigated with simulations and a new method is proposed. The changepoint hypotheses were tested using a likelihood ratio test. The test statistic does not follow a known distribution and depends on the length of the time-series and the autocorrelation. The results imply that it is not possible to detect autocorrelation and that the estimate of the autocorrelation parameter is biased. It is therefore recommended to use critical values from the empirical distribution for a fixed autocorrelation.

18 citations
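For independent Gaussian errors, the mean-shift version of such a likelihood ratio test reduces to comparing residual sums of squares over candidate changepoints. A hedged sketch (not the paper's code; it omits the trend-change alternative and the autocorrelation adjustment that the paper is actually about):

```python
import math

def mean_shift_lr(x):
    """Maximised log-likelihood-ratio statistic for a single mean shift.

    For independent Gaussian errors, -2 log LR = n * log(RSS0 / RSS1(k)),
    maximised over the changepoint k. As the abstract notes, the null
    distribution is nonstandard, so critical values must come from an
    empirical (simulated) distribution.
    """
    n = len(x)
    mean = sum(x) / n
    rss0 = sum((v - mean) ** 2 for v in x)
    best, best_k = -1.0, None
    for k in range(2, n - 1):  # keep at least two points in each segment
        left, right = x[:k], x[k:]
        ml, mr = sum(left) / k, sum(right) / (n - k)
        rss1 = (sum((v - ml) ** 2 for v in left)
                + sum((v - mr) ** 2 for v in right))
        if rss1 <= 0.0:  # perfect split
            return float("inf"), k
        stat = n * math.log(rss0 / rss1)
        if stat > best:
            best, best_k = stat, k
    return best, best_k

# usage: a clear level shift halfway through a short, wiggly series
x = [0.1 * (-1) ** t for t in range(20)]
for t in range(10, 20):
    x[t] += 5.0
stat, k = mean_shift_lr(x)
```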


Proceedings ArticleDOI
18 Jun 2017
TL;DR: A hybrid prediction method based on Empirical Mode Decomposition (EMD) and the Auto-Regressive Moving Average (ARMA) model is proposed for wind speed sequences; its prediction accuracy is higher than that of the traditional ARMA model.
Abstract: Due to the non-stationary and nonlinear characteristics of wind speed sequences, the prediction of wind speed is difficult. This paper proposes a hybrid prediction method based on Empirical Mode Decomposition (EMD) and the Auto-Regressive Moving Average (ARMA) model for wind speed sequences. Firstly, the method uses EMD to decompose the original wind speed sequence into a series of Intrinsic Mode Functions (IMFs) and a residue, which are more stable than the original sequence. Then, an ARMA model is used to predict each subsequence; the prediction of the wind speed is obtained by adding the predicted results of the subsequences. The method proposed in this article is more accurate than the traditional ARMA model.

13 citations
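The decompose-predict-sum structure of this hybrid can be sketched as follows, with EMD itself omitted (the components are assumed given) and a least-squares AR(1) standing in for the per-component ARMA models:

```python
def ar1_forecast(series):
    """One-step forecast from an AR(1) fitted by least squares (a stand-in
    here for the ARMA models fitted to each EMD component)."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    denom = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / denom if denom else 0.0
    a = my - b * mx
    return a + b * series[-1]

def hybrid_forecast(components):
    """Forecast each decomposed component separately and sum the results,
    mirroring the decompose-predict-recombine structure of EMD-ARMA."""
    return sum(ar1_forecast(c) for c in components)

# usage: a geometric "IMF" plus a constant "residue"
forecast = hybrid_forecast([[1.0, 0.5, 0.25, 0.125, 0.0625],
                            [2.0, 2.0, 2.0, 2.0, 2.0]])
```

The geometric component follows y = 0.5x exactly, so its forecast is 0.03125; the constant residue forecasts itself, giving 2.03125 in total.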


Proceedings ArticleDOI
01 Aug 2017
TL;DR: Two methods, the Box-Jenkins time series ARIMA (Auto Regressive Integrated Moving Average) approach and Multiple Linear Regression (MLR), are used for prediction of state-wise rainfall on monthly scales for India.
Abstract: India is an agricultural country where crop productivity contributes a major share of the economy. To understand crop productivity, prediction of rainfall is required and necessary. Forecasting is challenging work, and forecasting climate is much more troublesome. In this proposed work two methods are used, the Box-Jenkins time series ARIMA (Auto Regressive Integrated Moving Average) approach and Multiple Linear Regression (MLR), for prediction of state-wise rainfall on monthly scales. The performances of these algorithms are compared using standard performance metrics to find out which algorithm gives the most accurate results.

11 citations


Journal Article
TL;DR: In this article, a hybrid methodology between empirical mode decomposition with the Moving Average Model (EMD-MA) is used to improve forecasting performance in financial time series, which can forecast non-stationary and non-linear time series without a need to use any transformation method.
Abstract: Recently, forecasting time series has attracted considerable attention in the field of analyzing financial time series data, specifically within the stock market index. Moreover, stock market forecasting is a challenging area of financial time-series forecasting. In this study, a hybrid methodology combining Empirical Mode Decomposition with the Moving Average model (EMD-MA) is used to improve forecasting performance in financial time series. The strength of this EMD-MA lies in its ability to forecast non-stationary and non-linear time series without a need to use any transformation method. Moreover, EMD-MA has a relatively high accuracy and offers a new forecasting method in time series. The daily stock market time series data of 10 countries is applied to show the forecasting performance of the proposed EMD-MA. Based on the five forecast accuracy measures, the results indicate that EMD-MA forecasting performance is superior to the traditional Moving Average forecasting model.

11 citations


01 Jan 2017
TL;DR: In this paper, the authors proposed a principled approach to estimate invertible functional time series by fitting functional moving average processes, where the idea is to estimate the coefficient operators in a functional linear filter.
Abstract: Functional time series have become an integral part of both functional data and time series analysis. Important contributions to methodology, theory and application for the prediction of future trajectories and the estimation of functional time series parameters have been made in the recent past. This paper continues this line of research by proposing a first principled approach to estimate invertible functional time series by fitting functional moving average processes. The idea is to estimate the coefficient operators in a functional linear filter. To do this a functional Innovations Algorithm is utilized as a starting point to estimate the corresponding moving average operators via suitable projections into principal directions. In order to establish consistency of the proposed estimators, asymptotic theory is developed for increasing subspaces of these principal directions. For practical purposes, several strategies to select the number of principal directions to include in the estimation procedure as well as the choice of order of the functional moving average process are discussed. Their empirical performance is evaluated through simulations and an application to vehicle traffic data.

9 citations
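The scalar Innovations Algorithm underlying the functional version can be sketched as follows (the classical Brockwell–Davis recursion; the paper's actual contribution, projecting the operators onto principal directions of function space, is not reproduced here):

```python
def innovations(gamma, N):
    """Innovations algorithm for a zero-mean stationary series.

    gamma[h] are autocovariances for h = 0..N. Returns theta, where
    theta[n][i] is the coefficient of the lag-i innovation in the order-n
    one-step predictor, and the one-step MSEs v[0..N]. For an invertible
    MA(q) process, theta[n][i] converges to the MA coefficient at lag i."""
    v = [gamma[0]]
    theta = {0: {}}
    for n in range(1, N + 1):
        theta[n] = {}
        for k in range(n):
            s = gamma[n - k]
            for j in range(k):
                s -= theta[k].get(k - j, 0.0) * theta[n].get(n - j, 0.0) * v[j]
            theta[n][n - k] = s / v[k]
        v.append(gamma[0] - sum(theta[n][n - j] ** 2 * v[j] for j in range(n)))
    return theta, v

# usage: MA(1) with theta = 0.5 and unit noise variance, so
# gamma(0) = 1.25, gamma(1) = 0.5, gamma(h) = 0 for h >= 2
gamma = [1.25, 0.5] + [0.0] * 19
theta, v = innovations(gamma, 20)
```

Here theta[20][1] has converged to the true MA coefficient 0.5, the one-step MSE v[20] to the noise variance 1, and all higher-lag coefficients vanish.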


Journal ArticleDOI
TL;DR: The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods.
Abstract: Objective Direct reading instruments are valuable tools for measuring exposure as they provide real-time measurements for rapid decision making. However, their use is limited to general survey applications in part due to issues related to their performance. Moreover, statistical analysis of real-time data is complicated by autocorrelation among successive measurements, non-stationary time series, and the presence of left-censoring due to limit-of-detection (LOD). A Bayesian framework is proposed that accounts for non-stationary autocorrelation and LOD issues in exposure time-series data in order to model workplace factors that affect exposure and estimate summary statistics for tasks or other covariates of interest. Method A spline-based approach is used to model non-stationary autocorrelation with relatively few assumptions about autocorrelation structure. Left-censoring is addressed by integrating over the left tail of the distribution. The model is fit using Markov-Chain Monte Carlo within a Bayesian paradigm. The method can flexibly account for hierarchical relationships, random effects and fixed effects of covariates. The method is implemented using the rjags package in R, and is illustrated by applying it to real-time exposure data. Estimates for task means and covariates from the Bayesian model are compared to those from conventional frequentist models including linear regression, mixed-effects, and time-series models with different autocorrelation structures. Simulation studies are also conducted to evaluate method performance. Results Simulation studies with percent of measurements below the LOD ranging from 0 to 50% showed the lowest root mean squared errors for task means and the least biased standard deviations from the Bayesian model compared to the frequentist models across all levels of LOD. In the application, task means from the Bayesian model were similar to means from the frequentist models, while the standard deviations were different.
Parameter estimates for covariates were significant in some frequentist models, but in the Bayesian model their credible intervals contained zero; such discrepancies were observed in multiple datasets. Variance components from the Bayesian model reflected substantial autocorrelation, consistent with the frequentist models, except for the auto-regressive moving average model. Plots of means from the Bayesian model showed good fit to the observed data. Conclusion The proposed Bayesian model provides an approach for modeling non-stationary autocorrelation in a hierarchical modeling framework to estimate task means, standard deviations, quantiles, and parameter estimates for covariates that are less biased and have better performance characteristics than some of the contemporary methods.


Journal ArticleDOI
TL;DR: In this article, a vector double autoregressive (VDAR) model is proposed for multivariate time series as a straightforward extension of the univariate case, and sufficient ergodicity conditions are given for the model.

Book ChapterDOI
01 Jan 2017
TL;DR: In this paper, the assumption of independent observations is given up in the class of mixed models which combine fixed and random effects, and which are suited for both nested and longitudinal (i.e., time series) data.
Abstract: From a purely statistical point of view, one major difference between time series and data sets as discussed in the previous chapters is that temporally consecutive measurements are usually highly dependent, thus violating the assumption of identically and independently distributed observations on which most of conventional statistical inference relies. Before we dive deeper into this topic, we note that the independence assumption is not only violated in time series but also in a number of other common test situations. Hence, beyond the area of time series, statistical models and methods have been developed to deal with such scenarios. Most importantly, the assumption of independent observations is given up in the class of mixed models which combine fixed and random effects, and which are suited for both nested and longitudinal (i.e., time series) data (see, e.g., Khuri et al. 1998; West et al. 2006, for more details). Aarts et al. (2014) discuss these models specifically in the context of neuroscience, where dependent and nested data other than time series frequently occur, e.g., when we have recordings from multiple neurons, nested within animals, nested within treatment groups, thus introducing dependencies. Besides including random effects, mixed models can account for dependency by allowing for much more flexible (parameterized) forms for the involved covariance matrices. For instance, in a regression model like Eq. (2.6) we may assume a full covariance matrix for the error terms [instead of the scalar form assumed in Eq. (2.6)] that captures some of the correlations among observations. Taking such a full covariance structure for Σ into account, under the multivariate normal model the ML estimator for parameters β becomes (West et al. 2006)
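The excerpt ends before the displayed formula. Under a multivariate normal model with known full covariance Σ, the ML estimator for β is the standard generalized least squares estimator, β̂ = (XᵀΣ⁻¹X)⁻¹XᵀΣ⁻¹y; assuming that is the omitted equation, a small generic numerical sketch (not the book's code) is:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small linear systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gls(X, y, Sigma):
    """beta_hat = (X' Sigma^-1 X)^-1 X' Sigma^-1 y, computed by solving
    Sigma z = x column-wise instead of inverting Sigma explicitly."""
    n, p = len(X), len(X[0])
    cols = [[X[i][j] for i in range(n)] for j in range(p)]
    Z = [solve(Sigma, c) for c in cols]   # columns of Sigma^-1 X
    w = solve(Sigma, y)                   # Sigma^-1 y
    XtSiX = [[sum(X[i][a] * Z[b][i] for i in range(n)) for b in range(p)]
             for a in range(p)]
    XtSiy = [sum(X[i][a] * w[i] for i in range(n)) for a in range(p)]
    return solve(XtSiX, XtSiy)

# usage: exact linear data y = 2 + 3x with an AR(1)-style error covariance
X = [[1.0, float(t)] for t in range(4)]
y = [2.0 + 3.0 * t for t in range(4)]
Sigma = [[0.5 ** abs(i - j) for j in range(4)] for i in range(4)]
beta = gls(X, y, Sigma)
```

Because y lies exactly in the column space of X, GLS recovers the coefficients [2, 3] for any positive-definite Σ.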

Journal ArticleDOI
TL;DR: An optimal forecasting model building algorithm combined with model filter and candidate model pool is proposed, in which a univariate linear time series forecasting model is built.
Abstract: As the Box-Jenkins method could not grasp the non-stationary characteristics of time series exactly, nor identify the optimal forecasting model order quickly and precisely, a self-adaptive processing and forecasting algorithm for univariate linear time series is proposed. A self-adaptive series characteristic test framework which employs a variety of statistical tests is constructed to solve the problem of inaccurate identification and inadequate processing of non-stationary characteristics of time series. To achieve favorable forecasts, an optimal forecasting model building algorithm combined with a model filter and a candidate model pool is proposed, in which a univariate linear time series forecasting model is built. Experimental data demonstrates that the proposed algorithm outperforms the comparative method in all forecasting performance statistics.

Journal ArticleDOI
TL;DR: In this paper, the Caputo fractional derivative was used to re-specify the hybrid Phillips curve as a dynamic process of inflation with memory, and the results indicate that the model performs well against a traditional hybrid Phillips Curve, an integrated moving average model and a naive random walk model in quasi-in-sample forecasts.
Abstract: This paper adopts the Caputo fractional derivative to re-specify the hybrid Phillips curve as a dynamic process of inflation with memory. The Caputo fractional derivative contains a non-integer differencing order, providing the same insight for persistence as emphasized in the Autoregressive Fractionally Integrated Moving Average (ARFIMA) time series models. We utilize the hybrid Phillips curve with memory to forecast US inflation during 1967–2014. The results indicate that our model performs well against a traditional hybrid Phillips curve, an integrated moving average model and a naive random walk model in quasi-in-sample forecasts. In out-of-sample forecasts based on Consumer Price Index (CPI) and Personal Consumption Expenditure (PCE) data, we find that the forecasting performance of Phillips curve models depends on the sample period. Our model with CPI data can outperform others in out-of-sample forecasts during and after the most recent financial crisis (2006–2014).
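The long-memory mechanism shared with ARFIMA comes from fractional differencing: the operator (1 − B)^d with non-integer d has slowly decaying weights. A small sketch of those weights (illustrative of the ARFIMA connection, not the paper's Caputo-derivative formulation):

```python
def fracdiff_weights(d, n):
    """Coefficients pi_k in the expansion (1 - B)^d = sum_k pi_k B^k,
    via pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k. For non-integer d the
    weights decay slowly, which is the long-memory mechanism in ARFIMA."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# usage: d = 1 recovers ordinary first differencing, 1 - B
w_int = fracdiff_weights(1.0, 3)
w_frac = fracdiff_weights(0.4, 2)
```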

Proceedings ArticleDOI
24 Aug 2017
TL;DR: Numerical tests on time series reveal that the proposed Riemannian multi-manifold feature-generation scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
Abstract: This paper introduces Riemannian multi-manifold modeling in the context of brain-network analytics: Brain-network time series yield features which are modeled as points lying in or close to a union of a finite number of submanifolds within a known Riemannian manifold. Distinguishing disparate time series amounts thus to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for brain-network time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points into the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Based on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their Riemannian-geometry properties. Numerical tests on time series, synthetically generated from real brain-network structural connectivity matrices, reveal that the proposed scheme outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.

Journal ArticleDOI
TL;DR: In this article, the authors derived the stationarity and invertibility conditions of the dynamic conditional correlation (or DCC) model from a vector random coefficient moving average process (VRMC).
Abstract: One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of asymptotic properties of the Quasi-Maximum Likelihood Estimators (QMLE). To date, the statistical properties of the QMLE of the DCC parameters have purportedly been derived under highly restrictive and unverifiable regularity conditions. The paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions of the DCC model. The derivation of DCC from a vector random coefficient moving average process raises three important issues, as follows: (i) demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) provides the motivation, which is presently missing, for standardization of the conditional covariance model to obtain the conditional correlation model; and (iii) shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions, especially stationarity and invertibility, should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters. Several new results are also derived for univariate models, including a novel conditional volatility model expressed in terms of standardized shocks rather than returns shocks, as well as the associated stationarity and invertibility conditions.

Journal ArticleDOI
12 Jun 2017
TL;DR: In this article, an evaluation of the moving average model and the autoregressive moving average (ARMA) model for prediction of industrial electricity consumption in Nigeria is presented. The results show that the ARMA model, with a coefficient of determination of 66.0% and an RMSE of 68.628, gives better prediction performance than the moving average model, with a coefficient of determination of 42.6% and an RMSE of 84.749.
Abstract: In this paper, an evaluation of the moving average model and the autoregressive moving average (ARMA) model for prediction of industrial electricity consumption in Nigeria is presented. Industrial electricity consumption data obtained from the Central Bank of Nigeria (CBN) Statistical Bulletin for the years 1979-2014 are used to determine the model parameters and the prediction performance in terms of Root Mean Square Error (RMSE) and coefficient of determination (r²) values. The results show that the Autoregressive Moving Average (ARMA) model, with a coefficient of determination of 66.0% and an RMSE of 68.628, gives better prediction performance than the Moving Average model, with a coefficient of determination of 42.6% and an RMSE of 84.749. However, a coefficient of determination of 66% is not particularly adequate for acceptable prediction accuracy. In that case, for better prediction accuracy for industrial electricity consumption in Nigeria, other models may need to be examined apart from the two models considered in this paper.
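The two performance metrics used in this evaluation can be computed as follows (a generic sketch, not the authors' code):

```python
import math

def rmse(actual, pred):
    """Root Mean Square Error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def r_squared(actual, pred):
    """Coefficient of determination, 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# usage
actual = [1.0, 2.0, 3.0, 4.0]
pred = [1.5, 2.5, 2.5, 3.5]
err = rmse(actual, pred)
r2 = r_squared(actual, pred)
```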

Book ChapterDOI
14 Jun 2017
TL;DR: This work presents a method to find alternatives to an ANN trained with the raw data, and evaluates the performance of all alternatives to decide, on the validation subset, which of them could improve the performance, on the test subset, of the ANN trained with the raw data.
Abstract: Several tasks in science, engineering, or finance involve sequences of values over time (time series). This paper focuses on univariate time series, in which unknown future values are obtained from k previous (and known) values. To fit a model between independent variables (present and past values) and dependent variables (future values), Artificial Neural Networks, which are data driven, can achieve good performance. In this work, we present a method to find alternatives to the ANN trained with the raw data. The method is based on transforming the original time series into the time series of differences between two consecutive values and the time series of increments (−1, 0, +1) between two consecutive values. The three ANNs obtained can be applied individually, or combined to get a fourth alternative resulting from the combination of the others. The method evaluates the performance of all alternatives and decides, on the validation subset, which of the alternatives could improve the performance, on the test subset, of the ANN trained with the raw data.
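The two series transformations described in this abstract can be sketched as:

```python
def differences(series):
    """Time series of differences between consecutive values."""
    return [b - a for a, b in zip(series, series[1:])]

def increments(series):
    """Time series of signed increments (-1, 0, +1) between consecutive values."""
    return [(d > 0) - (d < 0) for d in differences(series)]

# usage
diffs = differences([3.0, 5.0, 5.0, 2.0])
signs = increments([3.0, 5.0, 5.0, 2.0])
```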

Book ChapterDOI
01 Jul 2017
TL;DR: A wavelet–ARIMA algorithm for predicting vehicle speed is proposed and the experimental results show that with the increase of the sample data loss rate, the error of the three padding algorithms increases, but the PMM error curve is more gentle.
Abstract: Aiming at the problem that missing data and noise disturbance degrade floating-car speed prediction, this chapter analyses the filling accuracy of the regression filling method, the EM method, and the PMM method at data-loss rates of 5, 10, 20, and 30%. Using the strong time-domain and frequency-domain resolution of the wavelet transform, the original data is denoised by the translation-invariant wavelet transform and combined with the Auto-Regressive Integrated Moving Average (ARIMA) model for time series prediction, yielding a wavelet–ARIMA algorithm for predicting vehicle speed. The experimental results show that as the sample data loss rate increases, the error of the three padding algorithms increases, but the PMM error curve is more gentle. Compared with the un-denoised ARIMA model, the wavelet–ARIMA model is more accurate for predicting the speed of the floating car.

Patent
11 Jan 2017
TL;DR: In this article, a method for predicting the performance of a battery of a new energy automobile was proposed. The method comprises the following steps: 1) modeling the linear part of a time sequence using an autoregressive moving average model and obtaining the remainder terms; 2) building a neural network model from the obtained remainder terms; and 3) combining the above results to obtain a mixed model.
Abstract: The invention discloses a method for predicting the performance of a battery of a new energy automobile. The method comprises the following steps: 1, modeling the linear part of a time sequence using an autoregressive moving average model, obtaining the remainder terms at the same time; 2, building a neural network model from the obtained remainder terms, i.e., modeling the non-linear part; 3, combining the above results to obtain a mixed model. According to the invention, the method is reasonable in design, employs the mixed model, and combines the autoregressive moving average model and the neural network model. The method can capture not only the linear part of the time sequence but also a non-linear time sequence. Compared with the prior art, the method combines the results of the two parts to obtain higher prediction precision, so the method has a good application prospect.

Patent
21 Jul 2017
TL;DR: In this paper, a time slice parameter identification-based dynamic simulation model verification method is proposed, which comprises the steps of S1, establishing a nonlinear autoregressive moving average with exogenous variable (NARMAX) representation equation of the motion equation of the to-be-verified model; S2, designing and planning an actual navigation test or a simulation test, and obtaining data used for model verification; S3, identifying feature parameters of the motion equation by using a steady-state response method based on the NARMAX model in combination with field test data and simulation test data.
Abstract: The invention discloses a time slice parameter identification-based dynamic simulation model verification method. The method comprises the steps of S1, establishing a nonlinear autoregressive moving average with exogenous variable (NARMAX) representation equation of the motion equation of the to-be-verified model; S2, designing and planning an actual navigation test or a simulation test, and obtaining data used for model verification; S3, identifying feature parameters of the motion equation by using a steady-state response method based on the NARMAX model in combination with field test data and simulation test data; and S4, according to a confidence distribution level of estimated values and reference true values of the identified feature parameters, taking an estimated value in a confidence level conversion form as a quantitative representation of a confidence level of model verification. The method has the advantages of simple principle, easy realization, high accuracy and the like.

Posted Content
TL;DR: The proposed framework outperforms classical and state-of-the-art techniques in clustering brain-network states/structures hidden beneath synthetic fMRI time series and brain-activity signals generated from real brain- network structural connectivity matrices.
Abstract: This paper advocates Riemannian multi-manifold modeling in the context of network-wide non-stationary time-series analysis. Time-series data, collected sequentially over time and across a network, yield features which are viewed as points in or close to a union of multiple submanifolds of a Riemannian manifold, and distinguishing disparate time series amounts to clustering multiple Riemannian submanifolds. To support the claim that exploiting the latent Riemannian geometry behind many statistical features of time series is beneficial to learning from network data, this paper focuses on brain networks and puts forth two feature-generation schemes for network-wide dynamic time series. The first is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of appropriately defined observability matrices, to points into the Grassmann manifold. The second utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Capitalizing on recently developed research on clustering Riemannian submanifolds, an algorithm is provided for distinguishing time series based on their geometrical properties, revealed within Riemannian feature spaces. Extensive numerical tests demonstrate that the proposed framework outperforms classical and state-of-the-art techniques in clustering brain-network states/structures hidden beneath synthetic fMRI time series and brain-activity signals generated from real brain-network structural connectivity matrices.

Posted Content
TL;DR: In this paper, the authors introduce a new method, based on the free deterministic equivalent Z-score, to assess the goodness of fit of parameter estimates for compound Wishart models; it generalizes a statistical hypothesis test for the one-dimensional moving average model.
Abstract: We introduce a new method to assess the goodness of fit of parameter estimates for compound Wishart models. Our method is based on the free deterministic equivalent Z-score, which we introduce in this paper. Furthermore, an application to the two-dimensional autoregressive moving-average model is provided. Our proposed method is a generalization of a statistical hypothesis test for the one-dimensional moving average model based on fluctuations of real compound Wishart matrices, a recent result by Hasegawa, Sakuma and Yoshida.

Journal Article
TL;DR: In this article, a method for handling a seasonal autoregressive integrated moving average (SARIMA) model with correlated residuals was proposed: a model of the residuals was added to the SARIMA component to obtain a new, adequate model.
Abstract: This work proposed a method for handling a seasonal autoregressive integrated moving average (SARIMA) model with correlated residuals. A SARIMA model was fitted to rainfall data, and the residuals obtained were found to be correlated, indicating the inadequacy of the model. The correlated residual series was modelled separately using Box-Jenkins autoregressive moving average (ARMA) methods. The resulting residual model was added to the SARIMA component to obtain a new model. The new model was re-fitted to the data, and the residuals obtained were found to be uncorrelated, confirming the adequacy of the new model.
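The two-stage idea above can be sketched in a few lines of Python: a primary model's forecast is corrected by an autoregressive term fitted to its correlated residuals. This is a minimal sketch using an AR(1) residual model with a Yule-Walker-style coefficient; the function names and the AR(1) choice are illustrative assumptions, not the paper's full SARIMA+ARMA procedure.

```python
# Illustrative two-stage fit: a primary model's correlated residuals are
# modelled by an AR(1) term whose coefficient is the lag-1 sample
# autocorrelation (Yule-Walker for AR(1)); the term is added back to the
# primary forecast.

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a sequence."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def corrected_forecast(primary_forecast, residuals):
    """Add an AR(1) residual correction to a primary model's forecast."""
    phi = lag1_autocorr(residuals)
    return primary_forecast + phi * residuals[-1]

# Toy usage: residuals with clear positive serial correlation, so the
# last (positive) residual pushes the corrected forecast upward.
resid = [0.5, 0.6, 0.55, 0.7, 0.65, 0.8, 0.75, 0.9]
print(corrected_forecast(10.0, resid) > 10.0)  # True
```

If the corrected model's own residuals are still correlated, the same step can in principle be repeated, which mirrors the re-fitting check described in the abstract.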

Journal ArticleDOI
TL;DR: It is shown that the results of this model reduce to the autocovariance and autocorrelation of the standard ARMA model as a special case.
Abstract: The generalized ARMA (GARMA) model is a new class of model that has been introduced to reveal some unknown features of certain time series data. The objective of this paper is to derive the autocovariance and autocorrelation structure of the GARMA(1,3;δ,1) model in order to study the behaviour of the model. It is shown that the results of this model reduce to the autocovariance and autocorrelation of the standard ARMA model as a special case. Numerical examples are used to illustrate the behaviour of the autocovariance and autocorrelation at different δ values and to show the various structures that the model can represent.
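The standard-ARMA special case mentioned above can be illustrated with the textbook closed-form autocorrelation of an ARMA(1,1) process; this sketch uses that classical formula, not the paper's GARMA(1,3;δ,1) derivation, and the function name is illustrative.

```python
# Theoretical autocorrelation of a standard ARMA(1,1) process
# x_t = phi*x_{t-1} + e_t + theta*e_{t-1}, the kind of closed form the
# GARMA autocovariance is said to reduce to as a special case.

def arma11_acf(phi, theta, nlags):
    """Return [rho(0), ..., rho(nlags)] for an ARMA(1,1) process."""
    rho = [1.0]
    # Closed form at lag 1 (Box-Jenkins):
    rho.append((1 + phi * theta) * (phi + theta)
               / (1 + 2 * phi * theta + theta ** 2))
    for _ in range(2, nlags + 1):
        rho.append(phi * rho[-1])  # geometric AR-driven decay beyond lag 1
    return rho

acf = arma11_acf(0.5, 0.3, 4)
print(acf[2] == 0.5 * acf[1])  # True: decay after lag 1 is governed by phi
```

Setting theta = 0 collapses the formula to the pure AR(1) autocorrelation phi**k, a small sanity check on the "special case" behaviour.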

Journal ArticleDOI
21 Sep 2017
TL;DR: In this paper, the authors proposed an extension of the singular spectrum analysis (SSA) technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series.
Abstract: Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
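Two of the SSA building blocks used in the pre-processing above, embedding a series into its trajectory matrix and diagonal averaging a matrix back into a series, can be sketched in plain Python. The SVD, eigenvector clustering, and grouping steps in between are omitted here, and the function names are illustrative, not from the paper.

```python
# SSA plumbing: embedding and its inverse. A series of length N and a
# window L give an L x K Hankel trajectory matrix (K = N - L + 1);
# diagonal averaging (hankelization) maps any such matrix back to a series.

def trajectory_matrix(x, L):
    """Embed series x into an L x K Hankel matrix, K = len(x) - L + 1."""
    K = len(x) - L + 1
    return [[x[i + j] for j in range(K)] for i in range(L)]

def diagonal_average(M):
    """Average the anti-diagonals of M back into a series."""
    L, K = len(M), len(M[0])
    n = L + K - 1
    sums, counts = [0.0] * n, [0] * n
    for i in range(L):
        for j in range(K):
            sums[i + j] += M[i][j]
            counts[i + j] += 1
    return [s / c for s, c in zip(sums, counts)]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
print(diagonal_average(trajectory_matrix(x, 3)) == x)  # True: exact round trip
```

In full SSA, diagonal averaging is applied to the elementary matrices produced by the SVD (grouped, in this paper, after a clustering step on the eigenvector pairs), and the reconstructed components are then fed to the ARIMA model.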

Proceedings ArticleDOI
01 May 2017
TL;DR: The new method proves an effective and simpler way to select the order of an FTSM than separately listing the forecasts produced by each Nth-order model.
Abstract: The choice of the order of a Fuzzy Time Series model has, to some extent, an influence on the accuracy of the forecast. This paper establishes a relationship between the autoregressive model and the fuzzy time series model so as to apply autocorrelation theory to the selection of the order of a Fuzzy Time Series Model (FTSM). Therefore, instead of listing the forecasts generated by each Nth-order model in turn, we can choose the order of an FTSM with a simpler method. To verify the effectiveness of the proposed method, macula data from 1701 to 1722 were used in the experiment. The results show that the new method is an effective way to select the order of an FTSM.
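The autocorrelation machinery borrowed from AR-model theory can be sketched as follows: compute the sample autocorrelation function and take as the order the largest lag whose autocorrelation is significant. This is a generic illustration of that idea; the significance band and the "largest significant lag" rule are common conventions assumed here, not necessarily the paper's exact criterion.

```python
# Sample autocorrelation function and a simple order-selection rule:
# lags whose autocorrelation exceeds the approximate 95% band 2/sqrt(n)
# are treated as significant.

def sample_acf(x, max_lag):
    """Sample autocorrelations [rho(1), ..., rho(max_lag)] of series x."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    acf = []
    for k in range(1, max_lag + 1):
        ck = sum((x[t] - mean) * (x[t - k] - mean) for t in range(k, n)) / n
        acf.append(ck / c0)
    return acf

def suggest_order(x, max_lag, band=None):
    """Largest lag whose |autocorrelation| exceeds the significance band."""
    if band is None:
        band = 2.0 / len(x) ** 0.5
    significant = [k + 1 for k, r in enumerate(sample_acf(x, max_lag))
                   if abs(r) > band]
    return max(significant) if significant else 1

# Toy usage: an alternating series has strong autocorrelation at every lag.
x = [float(i % 2) for i in range(40)]
print(suggest_order(x, 5))  # 5
```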

01 Jun 2017
TL;DR: The method used in this paper overcomes the problem of outliers in time series and is better than Box-Jenkins in terms of time to reach optimality.
Abstract: Model identification is an important and complicated step within the autoregressive integrated moving average (ARIMA) methodology framework. This step is especially difficult for integrated series. In this article we first investigate the Box-Jenkins methodology and its shortcomings in model detection, and then discuss the problem of outliers in time series, which the proposed optimization method overcomes. The method used in this paper is better than Box-Jenkins in terms of time to reach optimality.
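One common way outliers disrupt identification is by distorting the sample statistics that Box-Jenkins relies on, so flagging them beforehand helps. The sketch below uses a robust median/MAD z-score for that flagging step; this is a standard technique assumed for illustration, not the paper's own optimization method.

```python
# Flag outliers with a modified z-score based on the median and the
# median absolute deviation (MAD), which a single spike cannot distort
# the way it distorts the mean and standard deviation.

def _median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def robust_outliers(x, threshold=3.5):
    """Indices of points whose modified z-score exceeds the threshold."""
    med = _median(x)
    mad = _median([abs(v - med) for v in x])
    if mad == 0:
        return []
    # 0.6745 rescales MAD to match the standard deviation under normality.
    return [i for i, v in enumerate(x)
            if 0.6745 * abs(v - med) / mad > threshold]

x = [1.0, 1.1, 0.9, 1.0, 8.0, 1.05, 0.95, 1.0]
print(robust_outliers(x))  # [4]: only the spike is flagged
```

After flagging, the outlying points can be replaced or down-weighted before the identification step, so the sample ACF/PACF reflect the underlying process rather than the spikes.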