
Showing papers on "STAR model published in 1989"


Journal ArticleDOI
TL;DR: In this article, a bias correction to the Akaike information criterion, called AICC, is derived for regression and autoregressive time series models; the correction is of particular use when the sample size is small, or when the number of fitted parameters is a moderate to large fraction of the sample size.
Abstract: SUMMARY A bias correction to the Akaike information criterion, AIC, is derived for regression and autoregressive time series models. The correction is of particular use when the sample size is small, or when the number of fitted parameters is a moderate to large fraction of the sample size. The corrected method, called AICC, is asymptotically efficient if the true model is infinite dimensional. Furthermore, when the true model is of finite dimension, AICC is found to provide better model order choices than any other asymptotically efficient method. Applications to nonstationary autoregressive and mixed autoregressive moving average time series models are also discussed.

5,867 citations
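
A note for orientation: in its commonly quoted form (generic symbols, not necessarily the paper's notation), the corrected criterion is

$$\mathrm{AICC} = -2\log L(\hat\theta) + 2k + \frac{2k(k+1)}{n-k-1} = \mathrm{AIC} + \frac{2k(k+1)}{n-k-1},$$

where $n$ is the sample size and $k$ the number of fitted parameters; the extra term vanishes as $n \to \infty$ but penalizes models whose $k$ is a sizable fraction of $n$.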


Journal ArticleDOI
TL;DR: In this paper, a simple yet widely applicable model-building procedure for threshold autoregressive models is proposed; based on some predictive residuals, a simple statistic is proposed to test for threshold nonlinearity and specify the threshold variable.
Abstract: The threshold autoregressive model is one of the nonlinear time series models available in the literature. It was first proposed by Tong (1978) and discussed in detail by Tong and Lim (1980) and Tong (1983). The major features of this class of models are limit cycles, amplitude dependent frequencies, and jump phenomena. Much of the original motivation of the model is concerned with limit cycles of a cyclical time series, and indeed the model is capable of producing asymmetric limit cycles. The threshold autoregressive model, however, has not received much attention in application. This is due to (a) the lack of a suitable modeling procedure and (b) the inability to identify the threshold variable and estimate the threshold values. The primary goal of this article, therefore, is to suggest a simple yet widely applicable model-building procedure for threshold autoregressive models. Based on some predictive residuals, a simple statistic is proposed to test for threshold nonlinearity and specify the ...

977 citations
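
As a minimal illustration of the model class discussed above (a sketch only, not the paper's modeling procedure), a two-regime threshold AR(1) can be simulated as follows; the coefficients phi1, phi2 and the threshold r are arbitrary illustrative values.

import numpy as np

# Two-regime threshold AR(1): the autoregressive coefficient switches according
# to whether the threshold variable y[t-1] lies below or above the threshold r.
rng = np.random.default_rng(0)
phi1, phi2, r, n = 0.6, -0.8, 0.0, 500   # illustrative values only
y = np.zeros(n)
for t in range(1, n):
    phi = phi1 if y[t - 1] <= r else phi2
    y[t] = phi * y[t - 1] + rng.standard_normal()

The testing problem treated in the article is whether such regime dependence is present at all and which lagged variable should play the role of the threshold variable.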


Journal ArticleDOI
TL;DR: In this article, the authors present new evidence about the time-series behavior of stock prices, showing that daily return series exhibit significant levels of second-order dependence, and they cannot be modeled as linear white-noise processes.
Abstract: This article presents new evidence about the time-series behavior of stock prices. Daily return series exhibit significant levels of second-order dependence, and they cannot be modeled as linear white-noise processes. A reasonable return-generating process is empirically shown to be a first-order autoregressive process with conditionally heteroskedastic innovations. In particular, generalized autoregressive conditional heteroskedasticity (GARCH(1,1)) processes fit the data very satisfactorily. Various out-of-sample forecasts of monthly return variances are generated and compared statistically. Forecasts based on the GARCH model are found to be superior. Copyright 1989 by the University of Chicago.

930 citations
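
In standard notation (generic parameter symbols rather than the paper's), the return-generating process described above is

$$r_t = \phi_0 + \phi_1 r_{t-1} + \epsilon_t, \qquad \epsilon_t = \sigma_t z_t, \quad z_t \ \text{i.i.d.}(0,1), \qquad \sigma_t^2 = \omega + \alpha\,\epsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,$$

that is, a first-order autoregression whose innovation variance follows a GARCH(1,1) recursion.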


Journal ArticleDOI
TL;DR: In this paper, it is argued that modeling in different styles will be appropriate for different purposes or different stages in the development of an area of economics; as an example, simulations of a stochastic general equilibrium model are used to shed light on the interpretation of small macroeconomic vector autoregressive models connecting monetary variables to output and prices.
Abstract: It is argued that economists ought to recognize that modeling in different styles will be appropriate for different purposes or different stages in the development of an area of economics. As an example, the paper displays simulations of a stochastic general equilibrium model which shed light on the interpretation of widely discussed small macroeconomic vector autoregressive models connecting monetary variables to output and prices.

142 citations


Journal ArticleDOI
TL;DR: In this paper, periodic autoregressive models are fitted to the quarterly values of seasonally unadjusted real non-durable consumers' expenditure for the United Kingdom and its components.
Abstract: The parameters of a periodic model are allowed to vary according to the time at which observations are made. Periodic autoregressive models are fitted to the quarterly values of seasonally unadjusted real nondurable consumers' expenditure for the United Kingdom and its components. The periodic model offers no improvement over conventional specifications if the aggregate is modeled directly. On the other hand, periodic models generally perform well for the components, which contain additional seasonal information. The choice between a periodic or nonperiodic specification is also shown to have an important influence on the resulting dynamic properties.

124 citations
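
A first-order illustration of the periodic idea (not the specification actually fitted in the paper): for quarterly data,

$$y_t = \phi_{q(t)}\, y_{t-1} + \epsilon_t, \qquad q(t) \in \{1, 2, 3, 4\},$$

where $q(t)$ is the quarter in which observation $t$ falls, so each quarter has its own autoregressive coefficient; higher-order periodic models let every lag coefficient vary with the season in the same way.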


Journal ArticleDOI
TL;DR: For a first-order autoregressive process $Y_t = \beta Y_{t-1} + \epsilon_t$ where the $\epsilon_t$'s are i.i.d. and belong to the domain of attraction of a stable law, the strong consistency of the ordinary least-squares estimator $b_n$ of $\beta$ is obtained for $\beta = 1$, and the limiting distribution of $b_n$ is established as a functional of a Lévy process, as mentioned in this paper.
Abstract: For a first-order autoregressive process $Y_t = \beta Y_{t-1} + \epsilon_t$ where the $\epsilon_t$'s are i.i.d. and belong to the domain of attraction of a stable law, the strong consistency of the ordinary least-squares estimator $b_n$ of $\beta$ is obtained for $\beta = 1$, and the limiting distribution of $b_n$ is established as a functional of a Lévy process. Generalizations to seasonal difference models are also considered. These results are useful in testing for the presence of unit roots when the $\epsilon_t$'s are heavy-tailed.

115 citations
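
A small simulation sketch of this setting, using Student-t innovations with 1.5 degrees of freedom as one convenient heavy-tailed choice in the domain of attraction of a stable law (the degrees of freedom and the sample size are arbitrary):

import numpy as np

# Unit-root AR(1) driven by heavy-tailed innovations, and the OLS estimator b_n of beta.
rng = np.random.default_rng(1)
n = 2000
eps = rng.standard_t(df=1.5, size=n)   # infinite-variance, regularly varying tails
y = np.cumsum(eps)                     # Y_t = Y_{t-1} + eps_t, i.e. beta = 1
y_lag, y_cur = y[:-1], y[1:]
b_n = (y_lag @ y_cur) / (y_lag @ y_lag)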


Journal ArticleDOI
TL;DR: In this article, the authors give sufficient conditions for strong consistency of estimators for the order of general nonstationary autoregressive models based on the minimization of an information criterion a la Akaike's (1969) AIC.
Abstract: We give sufficient conditions for strong consistency of estimators for the order of general nonstationary autoregressive models based on the minimization of an information criterion a la Akaike's (1969) AIC. The case of a time-dependent error variance is also covered by the analysis. Furthermore, the more general case of regressor selection in stochastic regression models is treated.

108 citations
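
A sketch of criterion-based order selection of the kind discussed above, using ordinary least squares and the usual Gaussian AIC-type penalty (the criteria and conditions studied in the paper are more general):

import numpy as np

# Choose the AR order p (1 <= p <= p_max) that minimizes an AIC-type criterion,
# with each AR(p) fitted by least squares.
def select_ar_order(y, p_max):
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_p, best_crit = 1, np.inf
    for p in range(1, p_max + 1):
        X = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])  # lags 1..p
        target = y[p:]
        coef = np.linalg.lstsq(X, target, rcond=None)[0]
        sigma2 = np.mean((target - X @ coef) ** 2)
        crit = (n - p) * np.log(sigma2) + 2 * p        # AIC-type penalty
        if crit < best_crit:
            best_p, best_crit = p, crit
    return best_p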


Journal ArticleDOI
TL;DR: In this paper, the authors show that the least squares estimators of the coefficients of an autoregression of known, finite order are biased to order 1/T, where T is the sample length, unless the observed time series is generated by a unique model for that order.
Abstract: Least squares estimators of the coefficients of an autoregression of known, finite order are biased to order $1/T$, where $T$ is the sample length, unless the observed time series is generated by a unique model for that order. The coefficients of this special model are the fixed point of a linear mapping defined by the bias of the least squares estimator. Separate results are given for models with known mean and unknown mean. The "fixed point models" for different orders of autoregression are least squares approximations to an infinite-order autoregression which is unique but for arbitrary scaling. Explicit expressions are given for the coefficients of the fixed point models at each order. The autocorrelation function and spectral density of the underlying infinite-order process are also presented. Numerical calculations suggest similar properties hold for Yule-Walker estimators. Implications for bootstrapping autoregressive models are discussed.

76 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the largest estimated eigenvalue has distributional properties that allow us to test this unit root hypothesis using critical values tabulated by Dickey (1976).
Abstract: The characteristic equation of a multiple autoregressive time series involves the eigenvalues of a matrix equation which determine if the series is stationary. Suppose one eigenvalue is 1 and the rest are less than 1 in magnitude. We show that ordinary least squares may be used to estimate the matrices involved and that the largest estimated eigenvalue has distributional properties that allow us to test this unit root hypothesis using critical values tabulated by Dickey (1976). See also Fountis (1983). If a single unit root is suspected, a model can be fit whose parameters are constrained to produce an exact unit root. This is the vector analog of differencing in the univariate case. In the fitting process, canonical series can be computed thus extending the work of Box and Tiao (1977) to the unit root case.

60 citations
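
A minimal sketch of the estimation step described above, assuming the data are stored in a T-by-k array Y and a first-order vector autoregression (the paper's setting allows higher orders):

import numpy as np

# OLS fit of the VAR(1) model Y_t = A Y_{t-1} + e_t and the eigenvalues of the
# estimated coefficient matrix; the largest estimated eigenvalue is the quantity
# compared with unit-root critical values.
def var1_eigenvalues(Y):
    Y = np.asarray(Y, dtype=float)
    Y0, Y1 = Y[:-1], Y[1:]                               # lagged and current rows
    A_hat = np.linalg.lstsq(Y0, Y1, rcond=None)[0].T     # k-by-k coefficient matrix
    return np.linalg.eigvals(A_hat)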


Journal ArticleDOI
TL;DR: In this paper, a method based on the assumption of approximately normal forecast errors is shown to give forecasts which perform well in both qualitative and numerical comparisons with two alternative approximations based on naive extrapolation and linearization of the autoregression function.
Abstract: Exact forecasting of the non-linear EXPAR(1) model for several steps ahead involves a sequence of numerical integrations, thus motivating the search for reasonable approximations. A method based on the assumption of approximately normal forecast errors is shown to give forecasts which perform well in both qualitative and numerical comparisons with two alternative approximations based on naive extrapolation and linearization of the autoregression function.

42 citations
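
For reference, the EXPAR(1) model referred to above is usually written (generic symbols) as

$$y_t = \bigl\{\phi + \pi \exp(-\gamma\, y_{t-1}^2)\bigr\}\, y_{t-1} + \epsilon_t, \qquad \gamma > 0,$$

so the effective autoregressive coefficient moves between $\phi + \pi$ for small $|y_{t-1}|$ and $\phi$ for large $|y_{t-1}|$; it is this nonlinearity that makes exact multi-step forecasting require numerical integration.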



Journal ArticleDOI
TL;DR: In this paper, an exact formula for the bias of the parameter estimator of the first-order autoregressive process is provided and the asymptotic bias is derived.
Abstract: The paper provides an exact formula for the bias of the parameter estimator of the first order autoregressive process and derives the asymptotic bias.
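
For orientation, the classical order-$1/T$ approximations (not the exact formula derived in the paper) for the least-squares estimator $\hat\beta$ in the AR(1) model $y_t = \alpha + \beta y_{t-1} + \epsilon_t$ are

$$E(\hat\beta) - \beta \approx -\frac{1+3\beta}{T} \ \text{(mean estimated)}, \qquad E(\hat\beta) - \beta \approx -\frac{2\beta}{T} \ \text{(mean known to be zero)},$$

where $T$ is the sample size.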

Journal ArticleDOI
TL;DR: In this paper the problem of estimating autoregressive moving-average (ARMA) models is dealt with by first estimating a high-order autoregressive (AR) approximation and then using that AR estimate to estimate the ARMA model.
Abstract: In this paper the problem of estimating autoregressive moving-average (ARMA) models is dealt with by first estimating a high-order autoregressive (AR) approximation and then using the AR estimate t ...
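
A sketch of the generic 'long autoregression' two-stage idea referred to above (a Hannan-Rissanen-type recipe written only for illustration; the estimator analysed in the paper may differ in its details):

import numpy as np

# Two-stage ARMA(p, q) estimation: (1) fit a long AR(h), h >> p + q, to obtain
# proxy innovations; (2) regress y_t on p lags of y and q lags of the proxies.
def arma_via_long_ar(y, p, q, h):
    y = np.asarray(y, dtype=float)
    n = len(y)
    X1 = np.column_stack([y[h - k - 1 : n - k - 1] for k in range(h)])   # lags 1..h
    a = np.linalg.lstsq(X1, y[h:], rcond=None)[0]
    e = np.zeros(n)
    e[h:] = y[h:] - X1 @ a                       # proxy innovations
    m = h + q                                    # first usable time index in stage 2
    cols = [y[m - k - 1 : n - k - 1] for k in range(p)]
    cols += [e[m - k - 1 : n - k - 1] for k in range(q)]
    X2 = np.column_stack(cols)
    coef = np.linalg.lstsq(X2, y[m:], rcond=None)[0]
    return coef[:p], coef[p:]                    # AR and MA coefficient estimates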

Journal ArticleDOI
TL;DR: In this article, a Bayesian vector autoregressive technique is used to incorporate intercity wage relations into a local model of wage formation, which requires that a simple prior distribution be specified for the likely values of the coefficients for the variables that represent the intercity relations.

Journal ArticleDOI
TL;DR: In this paper, a simple time series model for bivariate exponential variables having first-order autoregressive structure is presented, the BEAR(1) model, which is an adaptation of the New Exponential Autoregressive model (NEAR(2)).
Abstract: A simple time series model for bivariate exponential variables having first-order autoregressive structure is presented, the BEAR(1) model. The linear random coefficient difference equation model is an adaptation of the New Exponential Autoregressive model (NEAR(2)). The process is Markovian in the bivariate sense and has correlation structure analogous to that of the Gaussian AR(1) bivariate time series model. The model exhibits a full range of positive correlations and cross-correlations. With some modification in either the innovation or the random coefficients, the model admits some negative values for the cross-correlations. The marginal processes are shown to have correlation structure of ARMA(2, 1) models.

Journal ArticleDOI
TL;DR: In this article, three simple stochastic models that can be used to generate correlated negative binomial variates are proposed, two of which are constructed according to the autoregressive scheme of the first-order Markovian process.
Abstract: Three simple stochastic models that can be used to generate correlated negative binomial variates are proposed. Two of the models are constructed according to the autoregressive scheme of the first-order Markovian process. The third model is constructed via the Poisson process from a first-order autoregressive gamma sequence and the resulting process has the long-term correlation structure of the mixed autoregressive moving-average process.

Journal ArticleDOI
TL;DR: A white-noise-driven finite-order AR (autoregressive) model is fitted to match given samples of the third-moment sequence of an arbitrary stationary process, and it is found that a finite set of third-moment sequence samples is not always linearly extendable to an infinite third-moment sequence.
Abstract: Necessary and sufficient conditions are provided for the third-moment sequence of a white-noise-driven finite-order AR (autoregressive) model to match given samples of the third-moment sequence of an arbitrary stationary process. The conditions lead to a set of nonlinear equations that are solved for the model parameters. A method for finding the third-moment sequence of a white-noise-driven AR model from its parameters is also provided. One of the key results is that, unlike a finite set of autocorrelation samples, a finite set of third-moment sequence samples is not always linearly extendable to an infinite third-moment sequence.
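
For concreteness, the third-moment sequence referred to above is, for a zero-mean stationary process $x(t)$, the double sequence $R(m,n) = E\{x(t)\,x(t+m)\,x(t+n)\}$; 'matching' means choosing the AR parameters and driving noise so that the model's $R(m,n)$ agrees with the given samples of this sequence.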

Journal ArticleDOI
TL;DR: In this paper, the Hannan-Rissanen procedure for recursive order determination of an autoregressive moving-average process provides non-parametric estimators of the coefficients b(u), say, of the moving average representation of a stationary process by auto-regressive model fitting, and also that of the cross-covariances, c(u) between the process and its linear innovations.
Abstract: The Hannan-Rissanen procedure for recursive order determination of an autoregressive moving-average process provides ‘non-parametric’ estimators of the coefficients b(u), say, of the moving-average representation of a stationary process by autoregressive model fitting, and also that of the cross-covariances, c(u), between the process and its linear innovations. An alternative ‘autoregressive’ estimator of the b(u) is obtained by inverting the autoregressive transfer function. Some uses of these estimators are discussed, and their asymptotic distributions are derived by requiring that the order k of the fitted autoregression approaches infinity simultaneously with the length T of the observed time series. The question of bias in estimating the parameters is also examined.
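
One way to make the inversion mentioned above concrete: if the fitted autoregression of order $k$ has polynomial $\hat a(z) = 1 - \hat a_1 z - \cdots - \hat a_k z^k$, the 'autoregressive' estimates of the moving-average coefficients are the coefficients of the expansion $1/\hat a(z) = \sum_{u \ge 0} \hat b(u) z^u$, obtained recursively from

$$\hat b(0) = 1, \qquad \hat b(u) = \sum_{j=1}^{\min(u,k)} \hat a_j\, \hat b(u-j), \quad u \ge 1.$$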

Journal ArticleDOI
TL;DR: This paper is a review of nonlinear processes used in time series analysis and presents some new original results about the stationary distribution of a first-order nonlinear autoregressive process.
Abstract: The paper is a review of nonlinear processes used in time series analysis and presents some new original results about the stationary distribution of a first-order nonlinear autoregressive process. The following models are considered: nonlinear autoregressive processes, threshold AR processes, threshold MA processes, bilinear models, autoregressive models with random parameters (including doubly stochastic models), exponential AR models, generalized threshold models and smooth transition autoregressive models. Some tests for linearity of processes are also presented.
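
Since smooth transition autoregressive (STAR) models are the topic of this listing, it is worth recording a commonly used two-regime logistic (LSTAR) form (generic notation, not specific to this paper):

$$y_t = \phi_1' w_t + (\phi_2' w_t)\, G(y_{t-d}; \gamma, c) + \epsilon_t, \qquad G(s; \gamma, c) = \frac{1}{1 + \exp\{-\gamma (s - c)\}},$$

with $w_t = (1, y_{t-1}, \ldots, y_{t-p})'$; as $\gamma \to \infty$ the transition function approaches a step and the model reduces to a threshold autoregression.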

Journal ArticleDOI
TL;DR: In this article, the authors apply vector autoregressive modeling techniques to examine a wage transmission hypothesis and produce a model that quantifies the magnitude and timing of intercity interdependencies in the determination of wage rates.
Abstract: This study applies vector autoregressive modeling techniques to examine a wage transmission hypothesis. The techniques produce a model that quantifies the magnitude and timing of intercity interdependencies in the determination of wage rates. The Granger-Sims notion of causality is used to establish and test the statistical significance of the intercity wage relations. Impulse response functions provide a graphic depiction of the dynamics of the relations determined by the vector autoregressive model. A Granger-causal structure is found that is consistent with the hypothesis of downward wage diffusion through an urban hierarchy of cities. The approach used here holds great promise in many areas of regional science research.

Proceedings ArticleDOI
23 May 1989
TL;DR: The difficult nonlinear optimization problem is approached by first recursively solving for the maximum likelihood estimates of the data covariances subject to certain structural constraints, and then using these estimates in the Yule-Walker equations to obtain the autoregressive process parameter estimates.
Abstract: A novel method for the maximum likelihood estimation of autoregressive process parameters is presented. The approach is suited to applications in which the available data vector length is of the same order of magnitude as the autoregressive process model order, and it provides more accurate results than approximate methods that yield the maximum likelihood estimates only in the limit of long data records. The difficult nonlinear optimization problem is approached by first recursively solving for the maximum likelihood estimates of the data covariances subject to certain structural constraints, and then using these estimates in the Yule-Walker equations to obtain the autoregressive process parameter estimates. Experimental results demonstrate the potential of the method for autoregressive process power spectral density estimation using short data records.
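
A sketch of the final Yule-Walker step described above, here fed with ordinary sample autocovariances rather than the constrained maximum likelihood covariance estimates the method constructs:

import numpy as np
from scipy.linalg import toeplitz

# Solve the Yule-Walker equations for AR(p) coefficients from autocovariance
# estimates r[0..p]; sample autocovariances are used purely for illustration.
def yule_walker(x, p):
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    R = toeplitz(r[:p])               # p x p Toeplitz autocovariance matrix
    a = np.linalg.solve(R, r[1:])     # AR coefficient estimates
    sigma2 = r[0] - a @ r[1:]         # innovation variance estimate
    return a, sigma2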

Journal ArticleDOI
TL;DR: Methods for the estimation of the covariance density and conditional intensity function of point processes are discussed, and alternative, computationally efficient estimation algorithms that always lead to positive semidefinite estimates, and are therefore adequate for autoregressive spectral analysis, are presented.
Abstract: The use of autoregressive modelling has acquired great importance in time series analysis and in principle it may also be applicable in the spectral analysis of point processes, with similar advantages over the nonparametric approach. Most of the methods used for autoregressive spectral analysis require positive semidefinite estimates of the covariance function, while current methods for the estimation of the covariance density function of a point process given a realization over the interval [0,T] do not guarantee a positive semidefinite estimate. This paper discusses methods for the estimation of the covariance density and conditional intensity function of point processes and presents alternative, computationally efficient estimation algorithms that always lead to positive semidefinite estimates and are therefore adequate for autoregressive spectral analysis. Autoregressive spectral modelling of point processes from Yule-Walker type equations and Levinson recursion, combined with the minimum AIC or CAT principle, is illustrated with neurobiological data.

Journal ArticleDOI
01 Jan 1989
TL;DR: An autoregressive model is presented for the modeling of fish behavior in a water tank on the basis of the following assumption : the schooling behavior of fish can be decomposed into two components: the motion of a representative and the variation of the school size.
Abstract: This paper deals with mathematical modeling of fish behavior in a water tank. The model presented in our earlier papers describes the behavior of each individual in a school and is therefore applicable only to small schools. In this paper, an aggregation model is presented on the basis of the following assumption: the schooling behavior of fish can be decomposed into two components. One is the motion of a representative of the school. The other is the variation of the school size. Since the model for the representative was presented previously, the present study is directed toward the model for the school size. An autoregressive model is presented for this purpose. The model order is determined by minimizing the AIC. The unknown parameters are estimated by applying the least squares algorithm. The validity of the autoregressive model is examined by a residual whiteness test and a simulation study.

Journal ArticleDOI
TL;DR: In this article, the asymptotic properties of the estimated predictor of a k-dimensional, pth order autoregressive process with dependent error variables and a general set-up of the roots have been considered.
Abstract: In this paper the asymptotic properties of the estimated predictor of a k-dimensional, pth order autoregressive process with dependent error variables and a general set-up of the roots have been considered. An expression for the mean-square-error of the estimated predictor has also been obtained.


Journal ArticleDOI
TL;DR: In this article, the authors deal with the problem of testing whether an outlier is present or not: if some parameters x are assumed to be known, a UMPU test can be given, and when x is unknown it is replaced by a suitable estimator to obtain a corresponding test.
Abstract: Let a realization of a time series be given which coincides with that of an autoregressive process except for a single observation. The location of this value is assumed to be known. In order to describe the outlier, a slippage model is used. In this paper we deal with the problem of testing whether an outlier is present or not. If some parameters x are assumed to be known, we are able to give a UMPU test. In the case of unknown x, we replace x by a suitable estimator and obtain a corresponding test. Several tests are introduced and the asymptotic distributions of the test statistics are derived. A comparison of the asymptotic power functions shows which of these tests is best.
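
One standard way to write such a slippage model (generic notation; the paper's parametrization may differ): the observed series is $z_t = y_t + \Delta\,\mathbf{1}\{t = t_0\}$, where $y_t$ is the underlying autoregressive process, $t_0$ is the known location of the suspect observation, and the testing problem is $H_0\colon \Delta = 0$ against $H_1\colon \Delta \neq 0$.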

Proceedings ArticleDOI
13 Dec 1989
TL;DR: In this paper, the identification problem of nonstationarity for autoregressive models is addressed, with the largest multiplicity of all the distinct roots on the unit circle being determined by the criteria when the corresponding model is nonstationary.
Abstract: The paper is concerned with the identification problem of nonstationarity for autoregressive models. Several principles are proposed and used as criteria to identify nonstationarity for autoregressive models, with the largest multiplicity of all the distinct roots on the unit circle being determined by the criteria when the corresponding model is nonstationary. The necessary and sufficient conditions for an autoregressive model to be asymptotically stationary are given.