
Showing papers on "Moving-average model published in 1986"


Journal ArticleDOI
TL;DR: In this article, the authors proposed methods of providing suitable standard errors of estimate and prediction which assist in assessing the importance of the coefficients appearing in the derived moving-average (MA) model.
Abstract: In situations where restrictions on the multivariate subset AR model are known, we propose methods of providing suitable standard errors of estimate and prediction which assist in assessing the importance of the coefficients appearing in the 'derived' moving-average (MA) model. The coefficient patterns of the derived moving-average model are proposed as an alternative basis for detecting Granger-causality.

Keywords: Granger-causality; subset autoregression; restricted subset autoregression; 'derived' moving-average representation; asymptotic standard error of estimate

10 citations
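
A minimal sketch of the core computation behind a 'derived' MA representation, simplified to the univariate case (the paper treats the multivariate model; the function name and the example lag pattern below are hypothetical): the MA coefficients implied by a fitted subset AR model are the psi-weights obtained by inverting the AR polynomial.

```python
# Sketch (univariate simplification, not the authors' method): for an AR
# model phi(B) X_t = e_t, the MA representation X_t = sum_j psi_j e_{t-j}
# satisfies the recursion
#   psi_0 = 1,  psi_j = sum_{i=1}^{min(j, p)} phi_i * psi_{j-i}.
# In a subset AR, phi_i = 0 at every excluded lag.
import numpy as np

def derived_ma_weights(phi: dict, n_weights: int = 10) -> np.ndarray:
    """phi maps included lags to coefficients, e.g. {1: 0.5, 4: -0.3}."""
    p = max(phi)                       # highest included lag
    coeffs = np.zeros(p + 1)
    for lag, value in phi.items():
        coeffs[lag] = value
    psi = np.zeros(n_weights + 1)
    psi[0] = 1.0
    for j in range(1, n_weights + 1):
        for i in range(1, min(j, p) + 1):
            psi[j] += coeffs[i] * psi[j - i]
    return psi

# Hypothetical subset AR(4) with only lags 1 and 4 active.
print(derived_ma_weights({1: 0.5, 4: -0.3}, n_weights=8))
```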


Journal ArticleDOI
TL;DR: In this paper, dynamic stationary models for mixed time series and cross-section data are proposed in which each cross-sectional unit draws a parameter set from an infinite population; the models are framed in continuous time.
Abstract: Dynamic stationary models for mixed time series and cross-section data are studied. The models are of simple, standard form except that the unknown coefficients are not assumed constant over the cross-section; instead, each cross-sectional unit draws a parameter set from an infinite population. The models are framed in continuous time, which facilitates the handling of irregularly-spaced series and of observation times that vary over the cross-section, and also covers standard cases in which observations at the same regularly-spaced times are available for each unit. A variety of issues are considered, in particular stationarity and distributional questions, inference about the parameter distributions, and the behaviour of cross-sectionally aggregated data.

4 citations
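
A minimal simulation sketch of the kind of model described (the Ornstein-Uhlenbeck specification, the Gamma population distribution for the mean-reversion rate, and all names below are our illustrative assumptions, not the paper's): each cross-sectional unit draws its own parameter, and continuous time makes irregular, unit-specific observation times straightforward via exact discretization.

```python
# Sketch: a continuous-time AR(1) (Ornstein-Uhlenbeck) panel in which each
# unit draws its own mean-reversion rate alpha from a population
# distribution and is observed at its own irregularly spaced times.
import numpy as np

rng = np.random.default_rng(0)

def simulate_unit(alpha: float, sigma: float, times: np.ndarray) -> np.ndarray:
    """Exact discretization of dX = -alpha*X dt + sigma dW at given times,
    started from the stationary distribution N(0, sigma^2 / (2*alpha))."""
    x = np.empty(len(times))
    x[0] = rng.normal(0.0, sigma / np.sqrt(2.0 * alpha))
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        decay = np.exp(-alpha * dt)
        sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * alpha))
        x[k] = decay * x[k - 1] + rng.normal(0.0, sd)
    return x

# Hypothetical population: alpha ~ Gamma(shape=2, scale=0.5); three units,
# each with its own random observation times on [0, 10].
for unit in range(3):
    alpha = rng.gamma(2.0, 0.5)
    times = np.sort(rng.uniform(0.0, 10.0, size=8))
    print(unit, alpha.round(3), simulate_unit(alpha, 1.0, times).round(2))
```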


Journal ArticleDOI
TL;DR: A second set of structural zeros is identified which leads to further significant computational savings in Pearlman's algorithm, giving a very efficient method for computing the likelihood of a seasonal moving average model and, more generally, of a stationary ARMA model with a seasonal moving average.
Abstract: SUMMARY Pearlman (1980) gives a fast filtering algorithm for an ARMA, i.e. autoregressive-moving average, model. When the algorithm is applied to a seasonal moving average model, significant computational savings can be obtained by taking advantage of the structural zeros noted by Kohn & Ansley (1984) and Melard (1984). In this paper we identify a second set of structural zeros which leads to further significant computational savings. Our results can be applied to produce a fast algorithm for obtaining the likelihood of a stationary ARMA model with a seasonal moving average.

Pearlman (1980) gave an algorithm for filtering observations from a stationary Gaussian ARMA(p, q) model and used it to compute the likelihood of the observations. This algorithm is based on a general fast filtering algorithm for state space models due to Morf, Sidhu & Kailath (1974). Pearlman's algorithm was efficiently implemented by Melard (1984), who pointed out additional computational savings. By using the backward transformation of Ansley (1979), Kohn & Ansley (1985) further refined Pearlman's algorithm by reducing it to filtering a pure moving average for the first N - p observations, where N is the sample size, and then switching to a Cholesky factorization method for the last p observations. For moderate to large values of N, this variant of Pearlman's algorithm is the fastest in the literature for computing the likelihood of an ARMA process. For seasonal moving average models it is clear from Kohn & Ansley (1984) and Melard (1984) that considerable computational savings could be made in Pearlman's algorithm by taking account of structural zeros. These structural zeros were originally obtained for the Cholesky decomposition by Ansley (1979). In this paper we identify a second set of structural zeros, complementary to the first set. Recognizing these zeros makes the algorithm significantly faster, giving a very efficient method for computing the likelihood of a seasonal moving average model and, more generally, of a stationary ARMA model with a seasonal moving average.

2 citations
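
The fast filter itself is intricate, but the structural zeros it exploits are easy to see in a small brute-force sketch (a dense O(n^3) likelihood evaluation for illustration only, not the paper's algorithm; the seasonal MA(1) model and all names here are our choices): for X_t = e_t + Theta*e_{t-s}, the autocovariances vanish everywhere except at lags 0 and s, so the Gaussian likelihood involves only a sparse banded covariance matrix.

```python
# Sketch (numpy only, not Pearlman's fast filter): Gaussian log-likelihood
# of a seasonal MA(1) via its banded covariance,
#   gamma(0) = sigma^2 * (1 + Theta^2),  gamma(s) = sigma^2 * Theta,
# and gamma(k) = 0 at every other lag -- the structural zeros.
import numpy as np

def seasonal_ma1_loglik(x, theta, sigma2, s):
    n = len(x)
    cov = np.zeros((n, n))
    np.fill_diagonal(cov, sigma2 * (1.0 + theta**2))
    i = np.arange(n - s)
    cov[i, i + s] = cov[i + s, i] = sigma2 * theta   # the only nonzero band
    sign, logdet = np.linalg.slogdet(cov)
    quad = x @ np.linalg.solve(cov, x)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

rng = np.random.default_rng(1)
s, theta, sigma2, n = 12, 0.6, 1.0, 120
e = rng.normal(0.0, np.sqrt(sigma2), n + s)
x = e[s:] + theta * e[:-s]           # exact seasonal MA(1) sample path
print(seasonal_ma1_loglik(x, theta, sigma2, s))
```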


Journal ArticleDOI
TL;DR: This paper showed that a non-linear model is more appropriate than a linear model for explaining the variance of the German Deutschemark per U.S. dollar exchange rate time series.

1 citation
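
The TL;DR gives no detail on the method, but a standard way to see that a linear model misses variance structure is to inspect the autocorrelations of the squared series (the McLeod-Li idea). A minimal sketch on a simulated ARCH(1)-type series, purely illustrative and not the paper's analysis:

```python
# Illustrative sketch: a series can be serially uncorrelated (so a linear
# AR/MA fit finds nothing) while its *squares* are strongly autocorrelated,
# the signature of nonlinear variance dynamics often seen in exchange rates.
import numpy as np

def acf(x, nlags):
    x = x - x.mean()
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(2)
n, a0, a1 = 2000, 0.2, 0.5            # hypothetical ARCH(1) parameters
x = np.zeros(n)
for t in range(1, n):
    x[t] = rng.normal(0.0, np.sqrt(a0 + a1 * x[t - 1]**2))

print("ACF of x:  ", acf(x, 5).round(2))     # near zero: looks linear/white
print("ACF of x^2:", acf(x**2, 5).round(2))  # clearly nonzero: variance dynamics
```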


Journal ArticleDOI
T. Nirmalan, N. Singh
TL;DR: In this article, the autocorrelation function of a function of a bilinear process is derived, which can be used for identification as well as for testing the linearity of the process.
Abstract: One of the problems in bilinear time series (BLTS) analysis is that of identification. Unlike linear models, identification in BLTS modelling is not always based on the autocorrelation function (or spectrum), since it is sometimes misleading. The authors therefore derive in this note the autocorrelation function of a function of a bilinear process, which can be used for identification as well as for testing linearity.

1 citation
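
A minimal simulation sketch of the phenomenon motivating the note (the subdiagonal bilinear model below is a standard textbook example, not necessarily the one the authors analyse): the process itself is second-order white noise, so its ordinary ACF is useless for identification, while the ACF of a function of the process, here its square, reveals the structure.

```python
# Simulate the bilinear model X_t = b * X_{t-2} * e_{t-1} + e_t, whose
# ordinary autocorrelations all vanish (misleading for identification),
# and compare with the autocorrelations of its square.
import numpy as np

def acf(x, nlags):
    x = x - x.mean()
    denom = x @ x
    return np.array([x[:-k] @ x[k:] / denom for k in range(1, nlags + 1)])

rng = np.random.default_rng(4)
n, b = 5000, 0.5
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = b * x[t - 2] * e[t - 1] + e[t]

print("ACF of X:  ", acf(x, 4).round(2))     # ~ 0 at all lags: white-noise-like
print("ACF of X^2:", acf(x**2, 4).round(2))  # nonzero: reveals bilinearity
```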


Dissertation
01 Jan 1986
TL;DR: Pierce and Kopecky, as discussed by the authors, showed that the residuals from a fitted autoregressive time series model may be used to test for normality of the error term in such a model; it is demonstrated empirically that a sample size of N = 20 is adequate for the application of the test.
Abstract: Residuals in normal regression theory are used to test for normality of the unknown error term. This test examines the normal probability plot of the residuals, or suitable modifications of these residuals, for departure from linearity. Noticeable nonlinearity of this plot indicates that the residuals, and hence the unknown errors which they estimate, are not normal. Such a test is subjective at best. However, these plots are now a standard feature of most statistical packages, such as Minitab. A large sample result of Pierce and Kopecky, combined with tables of Stephens, provides an easily applied goodness-of-fit test for normality of the error distribution in ordinary least squares regression. This study uses simulation to examine the validity of applying the (large sample) test to samples of small and moderate size. Extensive Monte Carlo runs indicate that a sample size of N = 20 is large enough to justify the use of the test. Pierce shows that the same test, using the residuals after fitting an autoregressive time series model, may be used to test for normality of the error term in such a model. It is demonstrated empirically that a sample size of N = 20 again is adequate for the application of the test.
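
A minimal sketch of the procedure whose small-sample validity the thesis examines (the AR(1) setup is our illustrative choice, and scipy's built-in Anderson-Darling test, which uses Stephens-style critical values adjusted for estimated mean and variance, stands in for the exact Pierce-Kopecky statistic):

```python
# Sketch: fit an AR(1) by least squares, then apply an EDF goodness-of-fit
# test for normality to the residuals, at the small sample size N = 20
# considered in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulate a short AR(1) series with normal errors.
n, phi = 20, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Least-squares AR(1) fit and residuals.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]

result = stats.anderson(resid, dist='norm')
print("A-D statistic:   ", round(result.statistic, 3))
print("5% critical value:", result.critical_values[2])  # reject normality if exceeded
```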