
Showing papers on "Moving-average model published in 1980"


Journal ArticleDOI
TL;DR: In this paper, it is shown that there is an innovation process such that the random variables generated by the linear, additive first-order autoregressive scheme Xn = pXn-1 + ∊n are marginally distributed as gamma(λ, k) variables if 0 ≤ p ≤ 1.

Abstract: It is shown that there is an innovation process {∊n} such that the random variables {Xn} generated by the linear, additive first-order autoregressive scheme Xn = pXn-1 + ∊n are marginally distributed as gamma(λ, k) variables if 0 ≤ p ≤ 1. This first-order autoregressive gamma sequence is useful for modelling a wide range of observed phenomena. Properties of sums of random variables from this process are studied, as are Laplace-Stieltjes transforms of adjacent variables and joint moments of variables at different separations. The process is not time-reversible and has a zero-defect which makes parameter estimation straightforward. Other positive-valued variables generated by the first-order autoregressive scheme are studied, as well as extensions of the scheme for generating sequences with given marginal distributions and negative serial correlations.

328 citations
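The innovation process for general gamma(λ, k) marginals has a compound (shot-noise) form, but the k = 1 special case with exponential marginals admits a particularly simple innovation: ∊n is zero with probability p and Exp(λ) otherwise. A minimal simulation sketch under that assumption (illustrative code, not the authors'):

```python
import random

def ear1_sample(n, p, lam, seed=0):
    """Simulate the exponential-marginal (k = 1) special case of the scheme
    Xn = p*X(n-1) + eps_n: the innovation eps_n is 0 with probability p and
    Exp(lam) otherwise, which preserves the Exp(lam) marginal."""
    rng = random.Random(seed)
    x = rng.expovariate(lam)            # start in the stationary distribution
    out = []
    for _ in range(n):
        eps = 0.0 if rng.random() < p else rng.expovariate(lam)
        x = p * x + eps                 # when eps = 0, Xn = p*X(n-1) exactly
        out.append(x)
    return out

xs = ear1_sample(100_000, p=0.5, lam=2.0, seed=1)
print(sum(xs) / len(xs))    # ≈ 1/lam = 0.5, the Exp(lam) mean
```

The runs on which ∊n = 0, so that Xn/Xn-1 equals p exactly, are the "zero-defect" the abstract says makes estimating p straightforward.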


Journal ArticleDOI
Steven Kay
TL;DR: In this paper, a noise compensation technique was proposed to correct the estimated reflection coefficients for the effect of white noise, assuming the noise variance is known or can be estimated, and simulation results indicate that a significant decrease in the degrading effects of noise may be realized using the noise compensation method.
Abstract: The autoregressive spectral estimator possesses excellent resolution properties for time series which satisfy the "all-pole" assumption. When noise is added to the time series under analysis, the resolution of the spectral estimator decreases rapidly as the signal-to-noise ratio decreases. The usual approach to this problem is to model the resulting time series by the more appropriate autoregressive-moving average process and to use standard time series analysis techniques to identify the autoregressive parameters. This standard technique, however, does not result in a positive-definite autocorrelation matrix. As a result, it is shown that the resulting spectral estimator may exhibit a large increase in variance. An alternative approach, termed the noise compensation technique, is proposed. It attempts to correct the estimated reflection coefficients for the effect of white noise, assuming the noise variance is known or can be estimated. Simulation results indicate that a significant decrease in the degrading effects of noise may be realized using the noise compensation technique.

154 citations
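One simple way to realize the compensation idea, assuming the white-noise variance is known, is to subtract it from the zero-lag autocovariance before running the Levinson-Durbin recursion. The sketch below illustrates that idea on synthetic data; it is not Kay's exact algorithm, which corrects the reflection coefficients within the recursion itself.

```python
import numpy as np

def autocov(x, maxlag):
    """Biased sample autocovariances r(0), ..., r(maxlag)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(maxlag + 1)])

def levinson(r):
    """Levinson-Durbin recursion: monic AR polynomial (1, a1, ..., ap) and
    prediction-error variance from autocovariances r(0), ..., r(p)."""
    p = len(r) - 1
    a = np.zeros(p + 1)
    a[0] = 1.0
    e = r[0]
    for k in range(1, p + 1):
        acc = r[k] + np.dot(a[1:k], r[1:k][::-1])
        ref = -acc / e                      # reflection coefficient
        a[1:k] = a[1:k] + ref * a[1:k][::-1]
        a[k] = ref
        e *= 1.0 - ref ** 2
    return a, e

# AR(2) signal observed in heavy white noise.
rng = np.random.default_rng(0)
n = 20_000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.5 * x[t - 1] - 0.75 * x[t - 2] + rng.standard_normal()
sigma2_noise = 4.0
y = x + np.sqrt(sigma2_noise) * rng.standard_normal(n)

r = autocov(y, 2)
a_plain, _ = levinson(r)                # ignores the noise: biased toward zero
r_comp = r.copy()
r_comp[0] -= sigma2_noise               # compensate the zero-lag autocovariance
a_comp, _ = levinson(r_comp)
print("uncompensated AR estimates:", -a_plain[1:])
print("compensated AR estimates:  ", -a_comp[1:])   # close to (1.5, -0.75)
```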


Journal ArticleDOI
TL;DR: In this paper, the effect of system size and shape on the theoretical space-time autocorrelation function for first-order STARMA models is described, and initial estimates for the STAR(11) and STMA(11) models are presented.
Abstract: The effect of system size and shape on the theoretical space-time autocorrelation function is described for first-order STARMA models. Figures and tables are presented to assist in identification considerations, which include model interpretation, patterns of the theoretical space-time autocorrelation and partial autocorrelation functions, and initial estimation for the STAR(11) and STMA(11) models.

126 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider forecasting a contemporal linear aggregate yt of a vector time series Zt = (Z1t, ..., Zkt)'. They first discuss the case where Zt follows a stationary multiple moving average process and propose a measure of the efficiency of aggregation.

79 citations


Journal ArticleDOI
TL;DR: In this article, the asymptotic distribution of the estimates provided by these two methods is derived, and their asymptotic covariance structure is shown to be in accordance with a remark of Parzen (1974).
Abstract: The concept of the inverse correlation function of a stationary process xt was first introduced by Cleveland (1972), who also introduced the autoregressive and the window methods for estimating this function. The asymptotic distribution of the estimates provided by these two methods is derived and their asymptotic covariance structure is shown to be in accordance with a remark of Parzen (1974). The results are extended to show that the two procedures suggested by Durbin (1959, 1961) for estimating the parameters of a moving average model are asymptotically efficient, relative to maximum likelihood in the Gaussian case. Some key words: Akaike's information criterion; Autoregressive spectral estimate; Inverse correlation function; Inverse covariance function; Moving average model; Window spectral estimate.

62 citations
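Cleveland's autoregressive method mentioned above can be sketched as follows: fit an AR(p) model, then read the inverse autocovariances off the lagged products of the fitted coefficients, since the inverse spectrum of an AR(p) process is proportional to |A(e^{-iw})|^2. An illustrative sketch (the function name and the sanity check are ours):

```python
import numpy as np

def inverse_acf_from_ar(a, maxlag):
    """Inverse autocorrelations implied by a fitted AR polynomial
    a = (1, a1, ..., ap): the inverse spectrum of an AR(p) process is
    proportional to |A(e^{-iw})|^2, whose Fourier coefficients are the
    lagged products sum_j a_j a_{j+k}."""
    a = np.asarray(a, float)
    p = len(a) - 1
    ri = np.array([np.dot(a[:p + 1 - k], a[k:]) if k <= p else 0.0
                   for k in range(maxlag + 1)])
    return ri / ri[0]

# Sanity check: for an AR(1) with coefficient phi, the inverse ACF matches
# the ordinary ACF of an MA(1), with lag-1 value -phi/(1 + phi^2).
phi = 0.6
ri = inverse_acf_from_ar([1.0, -phi], maxlag=3)
print(np.round(ri, 4))   # ≈ [1, -0.4412, 0, 0]
```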


Journal ArticleDOI
TL;DR: An introduction to the identification, estimation, and diagnostic checking of discrete linear transfer functions to model the interrelationships between input and output time series.
Abstract: Time series analysis methods have been applied to a large number of practical problems, including modeling and forecasting economic time series and process and quality control. One aspect of time series analysis is the use of discrete linear transfer functions to model the interrelationships between input and output time series. This paper is an introduction to the identification, estimation, and diagnostic checking of these models. Some aspects of forecasting with transfer function models are also discussed. A survey of intervention analysis models in which the input series is an indicator variable corresponding to an isolated event thought to influence the output is also given. Familiarity with univariate autoregressive integrated moving average modeling is assumed. Extensions to more general multiple time series analysis methods are also briefly discussed.

56 citations
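The identification step for transfer function models is commonly done by prewhitening: fit an AR model to the input, apply the same filter to both input and output, and estimate the impulse-response weights from the cross-covariances. A hedged sketch of that standard procedure on synthetic data (parameter choices and names are ours):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000

# Input: AR(1) series; output: known impulse-response weights plus white noise.
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
v_true = np.array([0.0, 2.0, 1.0])          # transfer-function weights v0, v1, v2
y = np.convolve(x, v_true)[:n] + 0.5 * rng.standard_normal(n)

# Prewhiten: fit AR(1) to the input, apply the same filter to input and output.
phi = np.dot(x[:-1], x[1:]) / np.dot(x, x)
alpha = x[1:] - phi * x[:-1]                # approximately white
beta = y[1:] - phi * y[:-1]

# Impulse-response estimates: v_k = cross-cov(alpha, beta; k) / var(alpha).
s2 = np.dot(alpha, alpha) / len(alpha)
v_hat = [np.dot(alpha[:len(alpha) - k], beta[k:]) / (len(alpha) * s2)
         for k in range(4)]
print(np.round(v_hat, 2))   # close to [0, 2, 1, 0]
```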


Journal ArticleDOI
TL;DR: In this article, exact analytical expressions are given for the transformation that converts a generalized regression problem into a simple regression problem, available for a variety of models such as purely heteroscedastic models, the first-order Markov process, and error components models.

27 citations
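For the first-order Markov (AR(1) error) case mentioned, one such exact transformation is the Prais-Winsten form, which turns GLS into OLS on transformed data. A sketch, with the equality to full GLS checked numerically (synthetic data; not the paper's derivation):

```python
import numpy as np

def prais_winsten_transform(y, X, rho):
    """Transform a regression with AR(1) errors u_t = rho*u_{t-1} + e_t so
    that OLS on (y*, X*) is the exact GLS estimator."""
    y = np.asarray(y, float)
    X = np.asarray(X, float)
    c = np.sqrt(1.0 - rho ** 2)
    y_star = np.concatenate([[c * y[0]], y[1:] - rho * y[:-1]])
    X_star = np.vstack([c * X[:1], X[1:] - rho * X[:-1]])
    return y_star, X_star

# Synthetic regression with AR(1) errors, then check the transform against
# full GLS computed from the error covariance matrix rho^|i-j|.
rng = np.random.default_rng(0)
n, rho = 200, 0.8
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
u = np.zeros(n)
u[0] = rng.standard_normal() / np.sqrt(1 - rho ** 2)   # stationary start
for t in range(1, n):
    u[t] = rho * u[t - 1] + rng.standard_normal()
y = X @ np.array([1.0, 2.0]) + u

ys, Xs = prais_winsten_transform(y, X, rho)
beta_pw = np.linalg.lstsq(Xs, ys, rcond=None)[0]

Omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Oi = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)
print(np.allclose(beta_pw, beta_gls))   # True: OLS on transformed data = GLS
```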


Journal ArticleDOI
TL;DR: In this paper, the behavior of the sample autocorrelation function, r(k), for an integrated autoregressive moving average time series is examined and the validity of the approximation in moderate-sized samples is examined.
Abstract: The behavior of the sample autocorrelation function, r(k), for an integrated autoregressive moving average time series is examined. The nonnormal asymptotic distribution of r(k) is characterized as a function of lag k and the parameters of the process. The validity of the approximation in moderate-sized samples is examined.

24 citations
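The qualitative behavior is easy to reproduce: for an integrated series the sample autocorrelations stay near one and decay very slowly, quite unlike the stationary case. A small illustrative sketch (not the paper's analysis):

```python
import numpy as np

def sample_acf(x, maxlag):
    """Sample autocorrelations r(0), ..., r(maxlag)."""
    x = np.asarray(x, float) - np.mean(x)
    d = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / d
                     for k in range(maxlag + 1)])

rng = np.random.default_rng(7)
n = 1000
walk = np.cumsum(rng.standard_normal(n))    # integrated series, ARIMA(0,1,0)
white = rng.standard_normal(n)              # stationary comparison series

acf_walk = sample_acf(walk, 5)
acf_white = sample_acf(white, 5)
print(np.round(acf_walk, 2))    # stays near 1 and decays very slowly
print(np.round(acf_white, 2))   # drops to roughly 0 after lag 0
```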


Journal ArticleDOI
TL;DR: In this article, a method of estimating the parameters of an autoregressive model with real and equal roots in its characteristic equation is developed, which uses the serial autocorrelation function in the estimation process.

20 citations


Journal ArticleDOI
TL;DR: In this paper, the conditions for stationarity and invertibility are determined, and the autocorrelation function and Yule-Walker equations are obtained for the general case and, as particular cases, for special discrete values on various grids in the plane and for orders 1 and 2 in time.
Abstract: Spatially dependent autoregressive models in m dimensions are defined. The conditions for stationarity and invertibility are determined. The autocorrelation function and Yule-Walker equations are obtained for the general case and, as particular cases, for special discrete values on various grids in the plane and for orders 1 and 2 in time. The spectra are obtained for these particular cases, as are some results for the partial autocorrelation function. All results are new. The notation, definitions, and assumptions are those given by Voss et al. (1980). We assume stationarity of zt over time t, where zt is an m-dimensional vector, and a covariance structure as given by Hannan (1970), with all covariances existing. Nonstationary models will be considered in later papers.

13 citations
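In the ordinary one-dimensional case that the spatial equations generalize, the Yule-Walker equations give the autocorrelations of an AR(2) by a simple recursion; a minimal sketch for orientation (illustrative only, not the paper's spatial setting):

```python
import numpy as np

def ar2_acf(phi1, phi2, maxlag):
    """Theoretical autocorrelations of a stationary AR(2) process from the
    Yule-Walker recursion rho_k = phi1*rho_{k-1} + phi2*rho_{k-2}, k >= 2."""
    rho = np.zeros(maxlag + 1)
    rho[0] = 1.0
    rho[1] = phi1 / (1.0 - phi2)            # Yule-Walker equation at lag 1
    for k in range(2, maxlag + 1):
        rho[k] = phi1 * rho[k - 1] + phi2 * rho[k - 2]
    return rho

rho = ar2_acf(0.5, 0.3, 4)
print(np.round(rho, 4))    # ≈ [1, 0.7143, 0.6571, 0.5429, 0.4686]
```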


Journal ArticleDOI
TL;DR: It is shown that Bartlett's asymptotic formula for the variance of the sample autocorrelations of moving average processes is a large overestimate when considering finite sample sizes.

Abstract: In this paper we express the sample autocorrelations for a moving average process of order q as a function of its own theoretical autocorrelations and the sample autocorrelations for the generating white noise series. Approximate analytic expressions are then obtained for the moments of the sample autocorrelations of the moving average process. Using these expressions, together with numerical evidence, we show that Bartlett's asymptotic formula for the variance of the sample autocorrelations of moving average processes, which is used widely in identifying these processes, is a large overestimate when considering finite sample sizes. Our approach is for motivational purposes and so is purely formal, the amount of mathematics presented being kept to a minimum.
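The comparison is easy to reproduce by Monte Carlo: simulate many short MA(1) series, compute the lag-2 sample autocorrelation in each, and compare its empirical variance with Bartlett's asymptotic value (1 + 2*rho1^2)/n. An illustrative sketch (parameter choices are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 0.8, 50, 20_000
rho1 = theta / (1 + theta ** 2)
bartlett_var = (1 + 2 * rho1 ** 2) / n      # asymptotic Var(r(k)) for k > q = 1

r2 = np.empty(reps)
for i in range(reps):
    e = rng.standard_normal(n + 1)
    x = e[1:] + theta * e[:-1]              # MA(1) series of length n
    x = x - x.mean()
    r2[i] = np.dot(x[:-2], x[2:]) / np.dot(x, x)   # lag-2 sample autocorrelation

# At this sample size the empirical variance falls below Bartlett's value.
print(f"Bartlett: {bartlett_var:.4f}   empirical: {r2.var():.4f}")
```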

Journal ArticleDOI
TL;DR: In this paper, it was shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.
Abstract: It is shown that the estimation procedure of Walker leads to estimates of the parameters of a Gaussian moving average process which are asymptotically equivalent to the maximum likelihood estimates proposed by Whittle and represented by Godolphin.



Journal ArticleDOI
TL;DR: In this article, a general linear stochastic model was proposed, which assumes a time series to be generated by a linear aggregation of random shocks at various temporal and spatial locations.
Abstract: The paper describes a general linear stochastic model which supposes a time series to be generated by a linear aggregation of random shocks at various temporal and spatial locations. It is a combination of autoregressive and moving average models (ARMA). The autocorrelation functions and power spectra are determined.
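The power spectrum of an ARMA process has the standard rational form f(w) = (sigma^2 / 2*pi) * |theta(e^{-iw})|^2 / |phi(e^{-iw})|^2. A short sketch that evaluates it and checks that it integrates to the process variance in an AR(1) special case (illustrative only, not the paper's space-time setting):

```python
import numpy as np

def arma_spectrum(phi, theta, sigma2, freqs):
    """Power spectrum of an ARMA process phi(B)x_t = theta(B)e_t, with phi
    and theta given as coefficient tuples (1, -phi_1, ...), (1, theta_1, ...)."""
    z = np.exp(-1j * np.asarray(freqs))
    num = np.abs(np.polyval(list(reversed(theta)), z)) ** 2
    den = np.abs(np.polyval(list(reversed(phi)), z)) ** 2
    return sigma2 / (2 * np.pi) * num / den

# AR(1) with phi = 0.6: the spectrum should integrate over (-pi, pi) to the
# process variance 1/(1 - 0.6^2) = 1.5625.
w = np.linspace(-np.pi, np.pi, 100_001)
f = arma_spectrum((1.0, -0.6), (1.0,), 1.0, w)
var = np.sum(f) * (w[1] - w[0])
print(round(var, 4))    # ≈ 1.5625
```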

01 Jun 1980
TL;DR: Lattice solution methods in batch processing and adaptive form are developed for both single and multichannel autoregressive moving average (ARMA) models for linear systems and Volterra series models for nonlinear systems.
Abstract: The problem of obtaining parametric models for linear and nonlinear systems based on observations of the input and output of the system is one of wide ranging interest. For linear systems, moving average (MA) and autoregressive (AR) models have received considerable attention and, based on the Levinson algorithm, a number of very powerful methods involving lattice filter structures have been developed to obtain the model solutions. For nonlinear systems the Volterra series model which is a nonlinear extension of the moving average model is frequently used. The purpose of this research is to extend these techniques to more general linear and nonlinear models. Using the equation error formulation, lattice solution methods in batch processing and adaptive form are developed for both single and multichannel autoregressive moving average (ARMA) models for linear systems and Volterra series models for nonlinear systems. A nonlinear extension of the ARMA model is also considered and is shown in some cases to remedy problems encountered in Volterra modeling of nonlinear systems. Lattice methods are also developed for the nonlinear ARMA model and it is shown that the methods obtained for linear ARMA modeling follow as a special case of the nonlinear results.

Journal ArticleDOI
TL;DR: In this paper, a generalized least squares regression of the periodogram on the autocovariance of a d-dimensional moving average process of order q is proposed, and the estimators have the same asymptotic covariance matrix as those obtained by maximizing a Gaussian likelihood.
Abstract: This paper proposes a method for estimating the autocovariances of a d-dimensional moving average process of order q. The estimators have the same asymptotic covariance matrix as those obtained by maximizing a Gaussian likelihood, and are obtained by performing a generalized least squares regression of the periodogram on the autocovariance, thus extending Parzen's (1971) estimators for d = 1.
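The periodogram the regression is performed on can be computed directly from the FFT; the sketch below is just this standard ingredient, not the paper's generalized least squares procedure:

```python
import numpy as np

def periodogram(x):
    """Periodogram I(w_j) = |sum_t x_t e^{-i w_j t}|^2 / (2 pi n) at the
    Fourier frequencies w_j = 2 pi j / n, computed via the FFT."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    return np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * n)

# Parseval check: summing the ordinates times 2*pi/n recovers the sample
# variance of the series.
rng = np.random.default_rng(1)
x = rng.standard_normal(512)
I = periodogram(x)
print(I.sum() * 2 * np.pi / len(x) - x.var())   # ≈ 0
```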