
Showing papers on "Moving-average model published in 2013"




Journal ArticleDOI
TL;DR: A thorough evaluation of the time series properties of a data set is recommended, and various modeling avenues are suggested, including some that may be unfamiliar to most dendrochronologists, such as generalized autoregressive conditional heteroscedasticity (GARCH) models.

57 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic theory of least squares estimation in a threshold moving average model is studied; a resampling method is also provided to tabulate the limiting distribution of the estimated threshold in practice, the first successful effort in this direction.
Abstract: This paper studies the asymptotic theory of least squares estimation in a threshold moving average model. Under some mild conditions, it is shown that the estimator of the threshold is n-consistent and its limiting distribution is related to a two-sided compound Poisson process, whereas the estimators of the other coefficients are strongly consistent and asymptotically normal. This paper also provides a resampling method to tabulate the limiting distribution of the estimated threshold in practice, the first successful effort in this direction; this resampling method is a contribution to the threshold literature. Simulation studies are also carried out to assess the performance of least squares estimation in finite samples.

36 citations
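A minimal sketch of profile least squares for a first-order threshold moving average model (an assumed specification; the paper's exact model, asymptotics, and resampling scheme are not reproduced here). For each candidate threshold on a grid, the regime coefficients are estimated by conditional least squares, and the threshold minimizing the sum of squares is selected.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def simulate_tma(n, theta1, theta2, r):
    """Simulate X_t = e_t + theta * e_{t-1}, with theta switching on X_{t-1} <= r."""
    e = rng.standard_normal(n + 1)
    x = np.zeros(n + 1)
    for t in range(1, n + 1):
        theta = theta1 if x[t - 1] <= r else theta2
        x[t] = e[t] + theta * e[t - 1]
    return x[1:]

def css(params, x, r):
    """Conditional sum of squares: recover residuals recursively given a threshold."""
    theta1, theta2 = params
    e = np.zeros(len(x))
    for t in range(1, len(x)):
        theta = theta1 if x[t - 1] <= r else theta2
        e[t] = x[t] - theta * e[t - 1]
    return np.sum(e[1:] ** 2)

x = simulate_tma(500, theta1=0.6, theta2=-0.4, r=0.0)

best = None
for r in np.quantile(x, np.linspace(0.15, 0.85, 40)):  # threshold grid
    fit = minimize(css, x0=[0.0, 0.0], args=(x, r), method="Nelder-Mead")
    if best is None or fit.fun < best[0]:
        best = (fit.fun, r, fit.x)

print("estimated threshold:", best[1], "coefficients:", best[2])
```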


Journal ArticleDOI
TL;DR: In this article, the authors consider continuous-time moving average processes observed on a lattice, which are stationary time series, and show asymptotic normality of the sample mean, the sample autocovariances, and the sample autocorrelations.

28 citations


Journal ArticleDOI
01 Nov 2013
TL;DR: A robust fuzzy clustering model for classifying time series, based on an autoregressive metric, that uses an autoregressive parameterization capable of representing a large class of time series and suitably neutralizes the negative influence of "outlier" time series.
Abstract: We propose a robust fuzzy clustering model for classifying time series, based on an autoregressive metric. In particular, we suggest a clustering procedure which: 1) considers an autoregressive parameterization of the time series, capable of representing a large class of time series; 2) inherits the benefits of the partitioning around medoids approach, classifying time series into classes characterized by prototypal observed time series (the "medoid" time series), which synthesize the structural information of each cluster; 3) inherits the benefits of the fuzzy approach, capturing the vague (fuzzy) behaviour of particular time series, such as "middle" time series (time series with intermediate features with respect to the considered clusters over the whole time period) and "switching" time series (time series with a pattern typical of a given cluster during a certain time period and a completely different pattern, similar to that of another cluster, in another time period); 4) is capable of suitably neutralizing the negative influence of "outlier" time series on the clustering procedure, i.e., the "outlier" time series are classified into the so-called "noise cluster" and therefore the cluster structure is not altered. To illustrate the effectiveness of the proposed model, a simulation study and an application to real time series are carried out.

23 citations
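As a rough illustration of the general idea only (the authors' model additionally uses medoid prototypes and a noise cluster, which are omitted here), one can represent each series by fitted autoregressive coefficients and run a standard fuzzy c-means in that coefficient space; all names and settings below are illustrative.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)

def simulate_ar1(phi, n=300):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def ar_features(series, lags=2):
    """Autoregressive parameterization of a time series (constant dropped)."""
    return AutoReg(series, lags=lags).fit().params[1:]

# two groups of series with clearly different dynamics
X = np.array([ar_features(simulate_ar1(phi)) for phi in [0.8] * 5 + [-0.5] * 5])

def fuzzy_cmeans(X, c=2, m=2.0, iters=50):
    U = rng.dirichlet(np.ones(c), size=len(X))        # initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)      # standard FCM membership update
    return U, centers

U, centers = fuzzy_cmeans(X)
print(np.round(U, 2))  # memberships: crisp for typical series, split for ambiguous ones
```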


Journal ArticleDOI
TL;DR: In this article, the one-step generalized method of moments (GMM) estimation methods considered in Lee (2007a) and Liu, Lee, and Bollinger (2010) are introduced to a spatial autoregressive model that has a spatial moving average process in the disturbance term (SARMA(1,1) for short).
Abstract: In this paper, we introduce the one-step generalized method of moments (GMM) estimation methods considered in Lee (2007a) and Liu, Lee, and Bollinger (2010) to a spatial autoregressive model that has a spatial moving average process in the disturbance term (SARMA(1,1) for short). First, we determine the set of the best linear and quadratic moment functions for GMM estimation. Second, we show that the GMM estimator (GMME) formulated from this set is the most efficient estimator within the class of GMMEs formulated from the set of linear and quadratic moment functions. Our analytical results show that the GMME can be asymptotically equivalent to the maximum likelihood estimator (MLE) when the disturbance term is i.i.d. normal. When the disturbance term is simply i.i.d., the one-step GMME can be more efficient than the quasi-MLE (QMLE). Through an extensive Monte Carlo study, we compare its finite sample properties against those of the MLE, the QMLE and the estimators suggested in Fingleton (2008).

17 citations


Patent
25 Dec 2013
TL;DR: In this paper, a photovoltaic generation power ultra-short-term prediction method based on a time series model is proposed, comprising: collecting and normalizing historical power data; establishing a fitting equation from the normalized data; determining the order of the model, namely the values of p and q, from the fitting equation and the residual variance; determining the value of A; and establishing an auto-regressive moving average model.
Abstract: The invention discloses a photovoltaic generation power ultra-short-term prediction method based on a time series model. The method comprises the following steps: collecting and normalizing historical power data of a photovoltaic power station; establishing a fitting equation from the normalized historical power data, and determining the order of the model, namely the values of p and q, from the established fitting equation and the residual variance; determining the value of A; and establishing an auto-regressive moving average model. By building the prediction model for ultra-short-term prediction of photovoltaic generation power from the historical power data, the fitting equation, and the auto-regressive moving average model, the method achieves accurate short-term prediction of photovoltaic generation power from the existing model and the existing data.

13 citations
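A minimal sketch of the workflow described above on synthetic data (the patent does not specify an implementation; the statsmodels estimator, the AIC-based order search, and all values below are stand-ins):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
power = 50 + 10 * np.sin(np.linspace(0, 20, 400)) + rng.standard_normal(400)

# step 1: normalize the historical power data to [0, 1]
norm = (power - power.min()) / (power.max() - power.min())

# step 2: determine the model order (p, q) via an information criterion
best = None
for p in range(3):
    for q in range(3):
        res = ARIMA(norm, order=(p, 0, q)).fit()
        if best is None or res.aic < best.aic:
            best = res

# step 3: ultra-short-term forecast, mapped back to the original scale
forecast = best.forecast(steps=4)
print(forecast * (power.max() - power.min()) + power.min())
```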


Journal ArticleDOI
TL;DR: In this article, a multivariate Laplace moving average (MLA) model is used to model the multivariate load on a vehicle, and the model is validated by an analysis of the resulting damage index.

13 citations


Posted Content
TL;DR: The results of Cumby and Huizinga (1992) are used, as discussed in this paper, to extend the L-B-P-B-G approach to a much wider range of hypotheses and settings: (a) tests for the presence of autocorrelation of order p through q, where under the null hypothesis there may be autocorrelation of order p-1 or less; and (b) tests following estimation in which regressors are endogenous and estimation is by IV or GMM methods.
Abstract: Testing for the presence of autocorrelation in a time series is a common task for researchers working with time series data. The standard Q test statistic, introduced by Box and Pierce (1970) and refined by Ljung and Box (1978), is applicable to univariate time series and to testing for residual autocorrelation under the assumption of strict exogeneity. Breusch (1978) and Godfrey (1978) in effect extended the L-B-P approach to testing for autocorrelation in the residuals of models with weakly exogenous regressors. However, each of these readily available tests has important limitations. We use the results of Cumby and Huizinga (1992) to extend the implementation of the Q test statistic of L-B-P-B-G to cover a much wider range of hypotheses and settings: (a) tests for the presence of autocorrelation of order p through q, where under the null hypothesis there may be autocorrelation of order p-1 or less; (b) tests following estimation in which regressors are endogenous and estimation is by IV or GMM methods; and (c) tests following estimation using panel data. We show that the Cumby-Huizinga test, although developed for the large-T setting, is formally identical to the test developed by Arellano and Bond (1991) for AR(2) in a large-N panel setting.

12 citations
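The generalized Cumby-Huizinga test itself is not part of common Python libraries; as a baseline, the sketch below runs the standard Ljung-Box and Breusch-Godfrey tests the abstract starts from, using statsmodels on synthetic data.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_ljungbox, acorr_breusch_godfrey

rng = np.random.default_rng(3)
e = rng.standard_normal(200)
y = np.convolve(e, [1.0, 0.5], mode="full")[:200]   # MA(1): autocorrelated by construction

# Ljung-Box Q test on the series itself (univariate, strict-exogeneity setting)
print(acorr_ljungbox(y, lags=[5]))

# Breusch-Godfrey test on residuals from a regression with weakly exogenous regressors
X = sm.add_constant(rng.standard_normal(200))
ols_res = sm.OLS(y, X).fit()
print(acorr_breusch_godfrey(ols_res, nlags=5))
```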


Journal Article
TL;DR: In this paper, a study was designed to examine the behavior of the stock price of Nigerian Breweries Plc over time and to fit an Autoregressive Integrated Moving Average filter for predicting the stock price of Nigerian Breweries Plc.
Abstract: The financial system of any economy is seen to be divided between the financial intermediaries (banks, insurance companies and pension funds) and the markets (bond and stock markets). This study was designed to examine the behavior of the stock price of Nigerian Breweries Plc over time and to fit an Autoregressive Integrated Moving Average filter for predicting the stock price of Nigerian Breweries Plc. The data were collected from the Nigerian Stock Exchange and the Central Securities Clearing System (CSCS). A time plot was used to detect the presence of time series components in the daily stock prices of Nigerian Breweries from 2008 to 2012 and to check whether the series is stationary. The structure of dependency was measured using the autocovariance, the autocorrelation and the partial autocorrelation. An autoregressive model and a moving average model were fitted to the stationary series to predict future stock prices. The Akaike Information Criterion (AIC) was used to determine the order of the fitted autoregressive model. Diagnostic checks were carried out to assess the fit of the fitted autoregressive model. The time plot showed an irregular upward trend. A first difference of the non-stationary series made the series stationary. The plots of the autocorrelation and partial autocorrelation showed that stationarity had been introduced into the original non-stationary series, with most of the plotted points decaying to zero sharply. The plot of the Akaike Information Criterion showed that the order of the fitted autoregressive model was 8. The ARIMA model diagnostic check showed that the fitted ARIMA model had a reasonable fit for the original series. Predicted stock prices range from 138.66 to 141.49.

11 citations
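A minimal sketch of the same pipeline on synthetic data (the NSE/CSCS price series is not reproduced here; the random-walk stand-in, the order search, and the statsmodels calls are illustrative):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ar_model import ar_select_order
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
price = 100 + np.cumsum(0.1 + 0.5 * rng.standard_normal(1000))   # trending walk

print("ADF p-value, levels:", adfuller(price)[1])                # non-stationary
diff = np.diff(price)                                            # first difference
print("ADF p-value, 1st difference:", adfuller(diff)[1])         # stationary

sel = ar_select_order(diff, maxlag=12, ic="aic")                 # AIC chooses the AR order
p = max(sel.ar_lags) if sel.ar_lags else 0
res = ARIMA(price, order=(p, 1, 0)).fit()                        # AR(p) on differenced levels
print(res.summary().tables[0])                                   # diagnostic overview
print("predicted prices:", res.forecast(steps=5))
```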


Journal ArticleDOI
TL;DR: In this article, the one-step generalized method of moments (GMM) estimation method was introduced for spatial models that impose a spatial moving average process for the disturbance term, and the set of best linear and quadratic moment functions for GMM estimation was determined.


Patent
11 Dec 2013
TL;DR: In this paper, a multi-model dynamic soft measuring modeling method comprises the steps of establishing multiple sub models by utilizing a self-adaptive fuzzy core clustering method and a least square support vector machine; then taking a probability distribution function constructed by a proof synthesis rule as a weight factor to perform fusing on sub model output to obtain the output of multiple models; finally performing dynamic estimation on predicted errors of the multiple models by combining an autoregression moving average model.
Abstract: A multi-model dynamic soft measuring modeling method comprises the steps of establishing multiple sub models by utilizing a self-adaptive fuzzy core clustering method and a least square support vector machine; then taking a probability distribution function constructed by a proof synthesis rule as a weight factor to perform fusing on sub model output to obtain the output of multiple models; finally performing dynamic estimation on predicted errors of the multiple models by combining an autoregression moving average model.

Journal Article
TL;DR: Most machine learning, data mining and statistical methods rely on the assumption that the analyzed data points are independent and identically distributed (i.i.d.), but this assumption is often violated because of the phenomenon of autocorrelation.
Abstract: Most machine learning, data mining and statistical methods rely on the assumption that the analyzed data points are independent and identically distributed (i.i.d.). More specifically, the individual examples included in the training data are assumed to be drawn independently from each other from the same probability distribution. However, cases where this assumption is violated can be easily found: For example, species are distributed non-randomly across a wide range of spatial scales. The i.i.d. assumption is often violated because of the phenomenon of autocorrelation. The cross-correlation of an attribute with itself is typically referred to as autocorrelation: This is the most general definition found in the literature. Specifically, in spatial analysis, spatial autocorrelation has been defined as the correlation among data values, which is strictly due to the relative location proximity of the objects that the data refer to. It is justified by Tobler’s first law of geography [1] according to which “everything is related to everything else, but near things are more related than distant things”. In network studies, autocorrelation is defined by the homophily principle [2] as the tendency of nodes with similar values to be linked with each other.
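A tiny numerical illustration of the definition (synthetic data; the 5-point moving average below is used only to induce dependence):

```python
import numpy as np

rng = np.random.default_rng(11)
iid = rng.standard_normal(1000)
smoothed = np.convolve(iid, np.ones(5) / 5, mode="valid")  # locally averaged: autocorrelated

def lag1_autocorr(x):
    """Correlation of a series with a one-step-lagged copy of itself."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("i.i.d. series:", lag1_autocorr(iid))         # near 0
print("smoothed series:", lag1_autocorr(smoothed))  # clearly positive
```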

Journal ArticleDOI
TL;DR: In this article, a Vector Autoregressive model is used to determine the structural relationship between two or more variables, and the forecast values from the VAR model are more realistic and more closely reflect the current economic reality in Nigeria, as indicated by the forecast evaluation tools.
Abstract: Correlation and regression are the traditional approaches to determining the relationship between two or more variables. When there are multiple variables and the dependent variable is modeled with explanatory variables, a Vector Autoregressive (VAR) model can be used to determine the structural relationship between the variables. If these variables are co-integrated, the VAR model is not appropriate; our focus, however, is on the structural relationship and on measuring the forecast performance of a VAR model and a time series regression with lagged explanatory variables. Some Nigerian economic series (government revenue and expenditure, inflation rates and investment) were analysed, with the Root Mean Square Forecast Error (RMSFE) and the Mean Absolute Percentage Forecast Error (MAPFE) as measurement criteria. The VAR model was found to be better than the time series regression with lagged explanatory variables, as indicated by the diagnostic tools. The forecast values from the VAR model are more realistic and closely reflect the current economic reality in Nigeria, as indicated by the forecast evaluation tools.
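A minimal sketch with synthetic stand-ins (the Nigerian revenue, expenditure, inflation, and investment series are not reproduced; the variable names, lag order, and holdout length below are illustrative): fit a VAR, forecast, and score with RMSFE and MAPFE.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
levels = np.cumsum(rng.standard_normal((121, 2)), axis=0)        # two integrated series
data = pd.DataFrame(np.diff(levels, axis=0), columns=["revenue", "expenditure"])

train, test = data.iloc[:-8], data.iloc[-8:]                     # hold out 8 periods
res = VAR(train).fit(4)                                          # VAR(4) on the training set
fcst = res.forecast(train.values[-res.k_ar:], steps=8)

rmsfe = np.sqrt(((test.values - fcst) ** 2).mean(axis=0))
mapfe = np.abs((test.values - fcst) / test.values).mean(axis=0) * 100
print("RMSFE:", rmsfe)
print("MAPFE (%):", mapfe)
```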

Posted Content
TL;DR: The objective of the paper is to establish the appropriateness of integrating, in predictive simulation, an econometric estimation of a given variable into a standard moving average process (a linear algorithm with constant positive weights on distributed lags).
Abstract: The objective of the paper is to establish the appropriateness of integrating, in predictive simulation, an econometric estimation of a given variable into a standard moving average process (a linear algorithm with constant positive weights on distributed lags). The empirical research relates to the Romanian input-output tables collapsed into ten sectors. The database, concerning the final output during the years 1989-2009, is analyzed herein.

Dissertation
10 Dec 2013
TL;DR: In this article, the effect of cross-sectional aggregation on demand forecasting is evaluated and the results indicate that performance improvements achieved through the aggregation approach are a function of the aggregation level, the smoothing constant value used for SES and the process parameters.
Abstract: Demand forecasting performance is subject to the uncertainty underlying the time series an organisation is dealing with. There are many approaches that may be used to reduce demand uncertainty and consequently improve forecasting (and inventory control) performance. An intuitively appealing such approach that is known to be effective is demand aggregation. One approach is to aggregate demand in lower-frequency 'time buckets'; such an approach is often referred to in the academic literature as temporal aggregation. Another approach discussed in the literature is cross-sectional aggregation, which involves aggregating different time series to obtain higher-level forecasts.

This research discusses whether it is appropriate to use the original (non-aggregated) data to generate a forecast, or whether one should rather aggregate the data first and then generate a forecast. This Ph.D. thesis reveals the conditions under which each approach leads to superior performance as judged by forecast accuracy. Throughout this work, it is assumed that the underlying structure of the demand time series follows an AutoRegressive Integrated Moving Average (ARIMA) process.

In the first part of the research, the effect of temporal aggregation on demand forecasting is analysed. It is assumed that the non-aggregate demand follows an autoregressive moving average process of order one, ARMA(1,1). Additionally, the associated special cases of a first-order autoregressive process, AR(1), and a moving average process of order one, MA(1), are also considered, and a Single Exponential Smoothing (SES) procedure is used to forecast demand. These demand processes are often encountered in practice and SES is one of the standard estimators used in industry. Theoretical Mean Squared Error expressions are derived for the aggregate and the non-aggregate demand in order to contrast the relevant forecasting performances. The theoretical analysis is validated by an extensive numerical investigation and experimentation with an empirical dataset. The results indicate that performance improvements achieved through the aggregation approach are a function of the aggregation level, the smoothing constant value used for SES and the process parameters.

In the second part of the research, the effect of cross-sectional aggregation on demand forecasting is evaluated. More specifically, the relative effectiveness of the top-down (TD) and bottom-up (BU) approaches is compared for forecasting the aggregate and sub-aggregate demands. It is assumed that the sub-aggregate demand follows either an ARMA(1,1) or a non-stationary Integrated Moving Average process of order one, IMA(1,1), and an SES procedure is used to extrapolate future requirements. Such demand processes are often encountered in practice and, as discussed above, SES is one of the standard estimators used in industry (in addition to being the optimal estimator for an IMA(1,1) process). Theoretical Mean Squared Errors are derived for the BU and TD approaches in order to contrast the relevant forecasting performances. The theoretical analysis is supported by an extensive numerical investigation at both the aggregate and sub-aggregate levels, in addition to empirical validation on a real dataset from a European superstore.
The results show that the superiority of each approach is a function of the series autocorrelation, the cross-correlation between series and the comparison level. Finally, for both parts of the research, valuable insights are offered to practitioners and an agenda for further research in this area is provided.
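A minimal sketch of the temporal-aggregation experiment's idea (assumed setup: simulated ARMA(1,1) demand, SES on the original series versus SES on non-overlapping aggregated buckets; the smoothing constant, aggregation level, and MSE comparison below are illustrative, and the thesis's analytical MSE expressions are not reproduced):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

rng = np.random.default_rng(6)
# ARMA(1,1) demand with phi = 0.5, theta = 0.3 (statsmodels sign convention)
demand = 50 + ArmaProcess([1, -0.5], [1, 0.3]).generate_sample(1200, distrvs=rng.standard_normal)

def ses_one_step_mse(x, alpha):
    fit = SimpleExpSmoothing(x).fit(smoothing_level=alpha, optimized=False)
    return np.mean((x[1:] - fit.fittedvalues[1:]) ** 2)   # one-step-ahead errors

agg = demand.reshape(-1, 3).sum(axis=1)                   # buckets of 3 periods

print("non-aggregate MSE:", ses_one_step_mse(demand, alpha=0.2))
print("aggregate MSE (per bucket):", ses_one_step_mse(agg, alpha=0.2))
```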

Journal ArticleDOI
TL;DR: The out-of-sample experiment for the 50 stocks of the Shanghai 50 Composite Index shows that the FFT model is superior to the classic moving average model in terms of both volume prediction and Volume-weighted Average Price (VWAP) tracking accuracy.
Abstract: We propose a model for decomposing a volume series based on the Fast Fourier Transform (FFT) algorithm. By setting a threshold on the power spectrum, the model extracts the periodic and nonperiodic components from the original volume series and then predicts them. By analyzing samples from four major stock indices, we find that thresholds that are either too small or too large degrade the performance of the FFT model; appropriate thresholds are found at approximately the 93rd to 95th percentile for the four indices studied. The out-of-sample experiment for the 50 stocks of the Shanghai 50 Composite Index shows that the FFT model is superior to the classic moving average model in terms of both volume prediction and Volume-Weighted Average Price (VWAP) tracking accuracy. Meanwhile, for almost all of the 50 stocks, the FFT model outperforms the Bialkowski et al. (2008) model in terms of volume-prediction accuracy. The two models perform comparably in terms of the VWAP tracking error.
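A minimal sketch of the decomposition step (assumed details; the prediction stage and the VWAP tracking strategy are not reproduced): frequencies whose spectral power exceeds a high percentile threshold form the periodic component, and the remainder is nonperiodic. The 94th percentile mirrors the 93rd-95th range reported above.

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(512)
volume = (10 + np.sin(2 * np.pi * t / 16) + 0.5 * np.sin(2 * np.pi * t / 5)
          + 0.3 * rng.standard_normal(512))

spec = np.fft.rfft(volume)
power = np.abs(spec) ** 2
threshold = np.percentile(power, 94)            # tune between the 93rd and 95th percentiles

periodic_spec = np.where(power >= threshold, spec, 0)
periodic = np.fft.irfft(periodic_spec, n=len(volume))
nonperiodic = volume - periodic                 # left for a separate (nonperiodic) predictor

print("share of variance in the periodic part:", 1 - nonperiodic.var() / volume.var())
```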


Proceedings ArticleDOI
23 Jul 2013
TL;DR: An approach that combines the ARMA model's difference-equation form and transfer form (with Green's function) so that new predictions incorporate the newest observation instead of requiring a new model to be re-established, yielding higher forecasting accuracy with less computation.
Abstract: This paper proposes an updated-prediction ARMA (autoregressive moving average) model to address a disadvantage of the traditional approach: once a model is built, the k-step-ahead forecasts made from time t do not incorporate the newest information arriving at time t + 1. To this end, we adopt an approach that combines the ARMA model's difference-equation form and transfer form (with Green's function), so that a new prediction reflects the change from the newest observation instead of requiring a new model to be re-established. Furthermore, this method obtains higher forecasting accuracy with less computation. Finally, we run an experiment on a time series dataset to demonstrate the model's efficiency and effectiveness.
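The paper's Green's-function updating scheme is not reproduced here; as a rough practical analogue with the same goal (updated forecasts that absorb the newest observation without re-estimating the model), statsmodels can append new data to a fitted ARMA filter with the coefficients held fixed.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
e = rng.standard_normal(301)
y = np.zeros(301)
for t in range(1, 301):
    y[t] = 0.7 * y[t - 1] + e[t] + 0.3 * e[t - 1]   # ARMA(1,1) sample path

res = ARIMA(y[:300], order=(1, 0, 1)).fit()
print("forecast from time t:", res.forecast(3))

# fold in the observation at t + 1 without re-estimating the coefficients
res_updated = res.append(y[300:301], refit=False)
print("forecast from time t + 1:", res_updated.forecast(3))
```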

Book ChapterDOI
01 Jan 2013
TL;DR: A class of nonlinear time series models in which the underlying process shows a threshold structure where each regime follows a vector moving average model is proposed.
Abstract: In this chapter we propose a class of nonlinear time series models in which the underlying process has a threshold structure where each regime follows a vector moving average model. We call this class of processes Threshold Vector Moving Average. The stochastic structure is presented, and alternative model specifications are proposed. The invertibility of the model is discussed in detail and, in this context, empirical examples are presented to show some features that distinguish the stochastic structure under analysis from other linear and nonlinear time series models widely investigated in the literature.

Book ChapterDOI
01 Jan 2013
TL;DR: Simulations demonstrate the excellent performance of the MML criteria in comparison to standard moving average inference procedures in terms of both parameter estimation and order selection, particularly for small sample sizes.
Abstract: This paper presents a novel approach to estimating a moving average model of unknown order from an observed time series based on the minimum message length principle (MML). The nature of the exact Fisher information matrix for moving average models leads to problems when used in the standard Wallace–Freeman message length approximation, and this is overcome by utilising the asymptotic form of the information matrix. By exploiting the link between partial autocorrelations and invertible moving average coefficients an efficient procedure for finding the MML moving average coefficient estimates is derived. The MML estimating equations are shown to be free of solutions at the boundary of the invertibility region that result in the troublesome “pile-up” effect in maximum likelihood estimation. Simulations demonstrate the excellent performance of the MML criteria in comparison to standard moving average inference procedures in terms of both parameter estimation and order selection, particularly for small sample sizes.
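The link mentioned above can be sketched as follows (an assumed Durbin-Levinson-type reparameterization in the spirit the abstract describes; the paper's MML estimating equations are not reproduced): every point of (-1, 1)^q maps to an invertible MA(q) coefficient vector, so an optimizer can search an unconstrained box instead of the invertibility region itself.

```python
import numpy as np

def pacf_to_ma(r):
    """Map parameters in (-1, 1)^q to coefficients of an invertible MA(q)."""
    theta = np.zeros(0)
    for rk in r:
        new = np.empty(len(theta) + 1)
        new[:-1] = theta + rk * theta[::-1]   # Durbin-Levinson-style update
        new[-1] = rk
        theta = new
    return theta

theta = pacf_to_ma([0.5, -0.3])
# invertibility check: roots of 1 + theta_1 z + theta_2 z^2 lie outside the unit circle
roots = np.roots(np.r_[theta[::-1], 1.0])
print(theta, np.all(np.abs(roots) > 1))
```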

Book
25 Apr 2013
TL;DR: In this article, the authors focus on the modeling and analysis of crop yield over space and time, and quantify the variability in yield explained by genetics and space-time (environment) factors, and study how spatio-temporal information could be incorporated and also utilized in modeling and forecasting yield.
Abstract: Space and time are often vital components of research data sets. Accounting for and utilizing the space and time information in statistical models becomes beneficial when the response variable in question is shown to have space and time dependence. This work focuses on the modeling and analysis of crop yield over space and time. Specifically, two different yield data sets were used. The first yield and environmental data set was collected across selected counties in Kansas from yield performance tests conducted over multiple years. The second yield data set was a survey data set collected by the USDA across the US from 1900 to 2009. The objectives of our study were to investigate crop yield trends in space and time, quantify the variability in yield explained by genetics and space-time (environment) factors, and study how spatio-temporal information could be incorporated and utilized in modeling and forecasting yield. Based on the format of these data sets, the trends of irrigated and dryland crops were analyzed by employing time series statistical techniques. Some traditional linear regressions and smoothing techniques are first used to obtain the yield function. These models were then improved by incorporating time and space information either as explanatory variables or as auto- or cross-correlations adjusted for in the residual covariance structures. In addition, a multivariate time series modeling approach was conducted to demonstrate how the space and time correlation information can be utilized to model and forecast yield and related variables. The conclusion from this research clearly emphasizes the importance of the space and time components of data sets in research analysis, partly because they can often adjust (make up) for underlying variables and factor effects that are not measured or not well understood.

Posted Content
TL;DR: It is demonstrated that, as sample size increases, the accuracy of maximum-likelihood estimates (MLE) ultimately improves by orders of magnitude beyond that of variogram regression.
Abstract: Estimation of autocorrelations and spectral densities is of fundamental importance in many fields of science, from identifying pulsar signals in astronomy to measuring heart beats in medicine. In circumstances where one is interested in specific autocorrelation functions that do not fit into any simple families of models, such as autoregressive moving average (ARMA), estimating model parameters is generally approached in one of two ways: by fitting the model autocorrelation function to a non-parametric autocorrelation estimate via regression analysis, or by fitting the model autocorrelation function directly to the data via maximum likelihood. Prior literature suggests that variogram regression yields parameter estimates of comparable quality to maximum likelihood. In this letter we demonstrate that, as sample size increases, the accuracy of the maximum-likelihood estimates (MLE) ultimately improves by orders of magnitude beyond that of variogram regression. For relatively continuous and Gaussian processes, this improvement can occur for sample sizes of less than 100. Moreover, even where the accuracy of these methods is comparable, the MLE remains almost universally better and, more critically, variogram regression does not provide reliable confidence intervals: inaccurate regression parameter estimates are typically accompanied by underestimated standard errors, whereas likelihood provides reliable confidence intervals.
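A rough sketch of the comparison for a Gaussian AR(1) (a stand-in for the letter's continuous-time models; fixing the variance at its sample value is a simplification): estimate the correlation parameter either by least-squares regression on the empirical variogram or by exact Gaussian maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.linalg import toeplitz

rng = np.random.default_rng(12)
phi_true, n = 0.8, 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()

sigma2 = x.var()
lags = np.arange(1, 21)

# (1) variogram regression: for AR(1), gamma(h) = sigma2 * (1 - phi**h)
vario = np.array([0.5 * np.mean((x[h:] - x[:-h]) ** 2) for h in lags])
phi_vario = minimize_scalar(
    lambda p: np.sum((vario - sigma2 * (1 - p ** lags)) ** 2),
    bounds=(0.01, 0.99), method="bounded").x

# (2) exact Gaussian MLE with the stationary AR(1) correlation matrix
def negloglik(p):
    C = sigma2 * toeplitz(p ** np.arange(n))
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + x @ np.linalg.solve(C, x))

phi_mle = minimize_scalar(negloglik, bounds=(0.01, 0.99), method="bounded").x
print("variogram regression:", phi_vario, " MLE:", phi_mle)
```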

01 Jan 2013
TL;DR: In this paper, a combined integer-valued moving average model of order 2 with Poisson innovations, denoted PCINMA(2), is introduced, and some properties of this process, such as the expectation, variance and autocovariance function, are considered.
Abstract: In this paper, we introduce a new combined integer-valued moving average model of order 2 with Poisson innovations, denoted by PCINMA(2). We consider some properties of this process, such as its expectation, variance and autocovariance function. Stationarity and ergodicity are established. We estimate the unknown parameters using Yule-Walker estimation, and use simulation to assess the performance of the Yule-Walker estimators.
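A minimal sketch under assumed moment equations for a plain Poisson INMA(2) with binomial thinning (the paper's combined variant adds structure not reproduced here): simulate the process, then recover the parameters from the sample mean and autocovariances.

```python
import numpy as np
from scipy.optimize import fsolve

rng = np.random.default_rng(9)
lam, a1, a2 = 3.0, 0.4, 0.2
eps = rng.poisson(lam, 10000)
thin = lambda counts, a: rng.binomial(counts, a)        # binomial thinning operator

# X_t = eps_t + a1 o eps_{t-1} + a2 o eps_{t-2}
X = eps[2:] + thin(eps[1:-1], a1) + thin(eps[:-2], a2)

mu = X.mean()
g1 = np.cov(X[1:], X[:-1])[0, 1]
g2 = np.cov(X[2:], X[:-2])[0, 1]

def moment_equations(p):
    lam_, a1_, a2_ = p
    return (lam_ * (1 + a1_ + a2_) - mu,     # E[X]
            lam_ * (a1_ + a1_ * a2_) - g1,   # gamma(1)
            lam_ * a2_ - g2)                 # gamma(2)

print(fsolve(moment_equations, x0=[1.0, 0.1, 0.1]))  # should be near (3.0, 0.4, 0.2)
```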

Journal ArticleDOI
TL;DR: This article proposes a unified estimation method of minimal dimension using an Akaike information criterion for situations in which the dimension for multiple regressors is unknown, and presents an analysis using real data from the housing price index showing that this approach is an alternative for multiple time series modeling.
Abstract: Time series which have more than one time dependent variable require building an appropriate model in which the variables not only have relationships with each other, but also depend on previous values in time. Based on developments for a sufficient dimension reduction, we investigate a new class of multiple time series models without parametric assumptions. First, for the dependent and independent time series, we simply use a univariate time series central subspace to estimate the autoregressive lags of the series. Secondly, we extract the successive directions to estimate the time series central subspace for regressors which include past lags of dependent and independent series in a mutual information multiple-index time series. Lastly, we estimate a multiple time series model for the reduced directions. In this article, we propose a unified estimation method of minimal dimension using an Akaike information criterion, for situations in which the dimension for multiple regressors is unknown. We present an analysis using real data from the housing price index showing that our approach is an alternative for multiple time series modeling. In addition, we check the accuracy for the multiple time series central subspace method using three simulated data sets.

01 Jan 2013
TL;DR: In this paper, a proof of this proposition is presented through a case study of the Kenyan market, in which the dollar exchange and interbank lending rates in Kenya are analyzed following the procedure described in an earlier simulation study.
Abstract: A previous study on co-integration proposed that if two series follow a Generalized Autoregressive Conditional Heteroskedasticity (GARCH(1,1)) model, then the two series are co-integrated. However, that proposition was previously supported only by a simulation study. In this paper, the proposition is examined through a case study of the Kenyan market: the dollar exchange and interbank lending rates in Kenya are analyzed. The procedure described in the simulation study is carefully followed, and consequently all the tests and justifications given follow. Unit root tests (Augmented Dickey Fuller (ADF), Phillips Perron (PP) and Kwiatkowski Phillips Schmidt Shin (KPSS)) on the data indicate non-stationarity. Differencing is applied to attain stationarity. The co-integrating factor is then estimated to be -0.490747, with stationary residuals. Relatively similar R2 and adjusted R2 values indicate adequacy of the model. This supports the proposition, and also shows that co-integration models can be used to analyse time series data with high volatility and heteroskedasticity. It is recommended that a similar study be undertaken with a combination of Autoregressive Moving Average (ARMA) and GARCH models to capture both the conditional variance and the conditional expectation properties.
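A minimal sketch of the testing pipeline on synthetic stand-ins (the Kenyan exchange-rate and interbank series, and the GARCH step, are not reproduced): unit-root tests, then an Engle-Granger-style check that the cointegrating residuals are stationary.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(10)
common = np.cumsum(rng.standard_normal(500))             # shared stochastic trend
fx = common + 0.3 * rng.standard_normal(500)
interbank = -0.49 * common + 0.3 * rng.standard_normal(500)

print("ADF p, fx levels:", adfuller(fx)[1])              # unit root not rejected
print("KPSS p, fx levels:", kpss(fx, nlags="auto")[1])   # stationarity rejected

beta = sm.OLS(interbank, sm.add_constant(fx)).fit().params[1]   # cointegrating factor
resid = interbank - beta * fx
print("cointegrating factor:", beta)
print("ADF p, residuals:", adfuller(resid)[1])           # residuals stationary
```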

Journal ArticleDOI
TL;DR: Circular correlation and linear compensation are employed to solve the distortion problem in the original algorithm, and simulation results show that the method can effectively reduce the errors of autocorrelation analysis.
Abstract: Results of the autocorrelation analysis algorithm in LabVIEW differ from the theoretical results. To address this problem, a modification of the autocorrelation analysis is proposed in this paper. In the proposed approach, circular correlation and linear compensation are employed to solve the distortion problem in the original algorithm. Simulation results show that the method can effectively reduce the errors of autocorrelation analysis.
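A minimal sketch of the distortion and its fix (the LabVIEW specifics are not reproduced): an FFT-based autocorrelation computed without zero-padding is circular and wraps the signal around, while padding the transform to at least 2N - 1 points recovers the linear autocorrelation.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
n = len(x)

circular = np.fft.ifft(np.abs(np.fft.fft(x)) ** 2).real            # wraps around
padded = np.fft.ifft(np.abs(np.fft.fft(x, 2 * n - 1)) ** 2).real   # linear, compensated
direct = np.correlate(x, x, mode="full")[n - 1:]                   # reference result

print(circular)     # distorted at nonzero lags (e.g., 24 instead of 20 at lag 1)
print(padded[:n])   # matches the direct linear autocorrelation
print(direct)
```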

Journal ArticleDOI
Hiroyuki Kato1
TL;DR: A moving average of independent random variables with normal distributions is presented that approximates a stochastic process whose sample paths are periodic, a process that cannot be directly represented as a moving average according to the Wold decomposition theorem.
Abstract: This paper presents a moving average of independent random variables with normal distributions that approximates a stochastic process whose sample paths are periodic (we call it the periodic stochastic process). Since the periodic stochastic process does not have a spectral density, it cannot be directly represented as a moving average according to the Wold decomposition theorem. The results of this paper are twofold. First, we point out that the theorem originally proved by Slutzky (1937) is not satisfactory, in the sense that the moving average process he constructed does not converge to any process in L2 as the number of summed white-noise terms goes to infinity, even though its spectral distribution converges weakly to a step function, which is the spectral distribution of a periodic stochastic process. Secondly, we propose a new moving average process that approximates a nontrivial periodic stochastic process in L2 and almost surely.
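A small numerical illustration of the Slutzky effect the paper revisits (iterated moving summation and differencing of white noise concentrates the spectrum and yields quasi-periodic sample paths); the paper's L2-convergent construction itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(13)
x = rng.standard_normal(5000)
for _ in range(20):
    x = x[1:] + x[:-1]        # repeated two-point moving sums
for _ in range(20):
    x = x[1:] - x[:-1]        # followed by repeated differences

spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
peak = np.argmax(spec[1:]) + 1
print("dominant period:", len(x) / peak)   # energy piles up near period 4
```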

Journal ArticleDOI
TL;DR: The estimation method for the network autocorrelation model, the stationarity of the time series, the consistency and effectiveness of the estimation method, and the application of the model are discussed, especially its practical significance for forecasting and control.
Abstract: Efforts to develop data processing or variable evaluation for components of autocorrelation networks or interpersonal relationship networks have been hampered by many obstacles. One important reason is the lagging development of autocorrelation network models (ANM). Autocorrelation network models are used to deal with autocorrelated network data. However, some data are also correlated with their own lagged values, so it is necessary to introduce time series into the ANM. A network autocorrelation model with lagged and auto-correlated indicators or variables is put forward, based on an extension of the existing social network effect model. The estimation method for the network autocorrelation model is illustrated, and the stationarity of the time series and the consistency and effectiveness of the estimation method are discussed. The application of the model is also discussed, especially its practical significance for forecasting and control.