
Showing papers on "STAR model" published in 2003


Journal ArticleDOI
TL;DR: In this article, the authors proposed best spatial 2SLS estimators that are asymptotically optimal instrumental variable (IV) estimators for estimating a cross-sectional spatial model with both a spatial lag and spatially autoregressive disturbances.
Abstract: Estimation of a cross-sectional spatial model containing both a spatial lag of the dependent variable and spatially autoregressive disturbances is considered. Kelejian and Prucha (1998) described a generalized two-stage least squares procedure for estimating such a spatial model. Their estimator is, however, not asymptotically optimal. We propose best spatial 2SLS estimators that are asymptotically optimal instrumental variable (IV) estimators. An associated goodness-of-fit (or overidentification) test is available. We suggest computationally simple and tractable numerical procedures for constructing the optimal instruments.
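
As a point of reference for the procedure being improved upon, here is a minimal numpy sketch of a plain spatial 2SLS step for y = rho*W*y + X*beta + u using the conventional instrument set [X, WX, W^2 X], not the asymptotically optimal instruments derived in the paper; the ring-shaped weight matrix and simulated data are purely illustrative.

```python
# Minimal sketch (not the paper's optimal-IV estimator): a basic spatial 2SLS
# step for y = rho*W*y + X*beta + u, using the standard instrument set
# [X, WX, W^2 X]; the weight matrix and data below are illustrative.
import numpy as np

def spatial_2sls(y, X, W):
    Wy = W @ y
    Z = np.column_stack([Wy, X])                               # spatial lag + exogenous X
    H = np.column_stack([X, W @ X[:, 1:], W @ W @ X[:, 1:]])   # instruments (intercept kept once)
    P = H @ np.linalg.solve(H.T @ H, H.T)                      # projection onto instrument space
    Z_hat = P @ Z
    return np.linalg.solve(Z_hat.T @ Z, Z_hat.T @ y)           # (rho, beta) estimates

# toy data: row-normalized ring weight matrix, one regressor plus intercept
rng = np.random.default_rng(0)
n = 200
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
rho, beta = 0.4, np.array([1.0, 0.5])
y = np.linalg.solve(np.eye(n) - rho * W, X @ beta + rng.normal(size=n))
print(spatial_2sls(y, X, W))    # approximately [0.4, 1.0, 0.5]
```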

375 citations


Journal ArticleDOI
TL;DR: A class of generalized autoregressive moving average (GARMA) models is developed that extends the univariate Gaussian ARMA time series model to a flexible observation-driven model for non-Gaussian time series data.
Abstract: A class of generalized autoregressive moving average (GARMA) models is developed that extends the univariate Gaussian ARMA time series model to a flexible observation-driven model for non-Gaussian time series data. The dependent variable is assumed to have a conditional exponential family distribution given the past history of the process. The model estimation is carried out using an iteratively reweighted least squares algorithm. Properties of the model, including stationarity and marginal moments, are either derived explicitly or investigated using Monte Carlo simulation. The relationship of the GARMA model to other models is shown, including the autoregressive models of Zeger and Qaqish, the moving average models of Li, and the reparameterized generalized autoregressive conditional heteroscedastic (GARCH) model (providing the formula for its fourth marginal moment, not previously derived). The model is demonstrated by the application of the GARMA model with a negative binomial conditional distribution to …
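
A minimal sketch of what an observation-driven GARMA-type recursion looks like, assuming a GARMA(1, 1) with a Poisson conditional distribution and log link; the clipping constant used to handle zero counts and all parameter values are illustrative choices, not those of the paper.

```python
# Minimal sketch of a Poisson GARMA(1, 1) with log link, simulated forward.
# The clipping constant c and the parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, beta0, phi, theta, c = 500, 1.0, 0.5, 0.2, 0.1

y = np.zeros(n)
mu = np.zeros(n)
mu[0] = np.exp(beta0)
y[0] = rng.poisson(mu[0])
for t in range(1, n):
    y_star = max(y[t - 1], c)                      # clip zeros so the log is defined
    eta = (beta0
           + phi * (np.log(y_star) - beta0)        # AR term on the link scale
           + theta * np.log(y_star / mu[t - 1]))   # MA term: past "residual" on the link scale
    mu[t] = np.exp(eta)                            # conditional mean
    y[t] = rng.poisson(mu[t])                      # conditional exponential-family draw

print(y[:20], mu.mean())
```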

276 citations


Book ChapterDOI
01 Jan 2003
TL;DR: The VAR model has proven to be especially useful for describing the dynamic behavior of economic and financial time series and for forecasting, and often provides superior forecasts to those from univariate time series models and elaborate theory-based simultaneous equations models.
Abstract: The vector autoregression (VAR) model is one of the most successful, flexible, and easy to use models for the analysis of multivariate time series. It is a natural extension of the univariate autoregressive model to dynamic multivariate time series. The VAR model has proven to be especially useful for describing the dynamic behavior of economic and financial time series and for forecasting. It often provides superior forecasts to those from univariate time series models and elaborate theory-based simultaneous equations models. Forecasts from VAR models are quite flexible because they can be made conditional on the potential future paths of specified variables in the model.
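
A minimal sketch of the VAR workflow described above, using the VAR class from statsmodels on simulated bivariate data; the lag order and data-generating coefficients are illustrative.

```python
# Minimal sketch: fit a VAR to simulated bivariate data with statsmodels and
# produce multi-step forecasts; the VAR(1) coefficients below are illustrative.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(2)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])                 # data-generating VAR(1) matrix
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.5, size=2)

results = VAR(y).fit(2)                    # fit a VAR(2) by least squares
print(results.params)                      # intercepts and lag coefficient matrices
print(results.forecast(y[-2:], steps=5))   # forecasts conditional on the last 2 observations
```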

248 citations


Journal ArticleDOI
TL;DR: In this paper, the so-called time-varying smooth transition autoregressive (TV-STAR) model is used both for describing simultaneous nonlinearity and structural change and for distinguishing between these features.
Abstract: Nonlinear regime-switching behavior and structural change are often perceived as competing alternatives to linearity. In this article we study the so-called time-varying smooth transition autoregressive (TV-STAR) model, which can be used both for describing simultaneous nonlinearity and structural change and for distinguishing between these features. Two modeling strategies for empirical specification of TV-STAR models are developed. Monte Carlo simulations show that neither of the two strategies dominates the other. A specific-to-general-to-specific procedure is best suited for obtaining a first impression of the importance of nonlinearity and/or structural change for a particular time series. A specific-to-general procedure is most useful in careful specification of a model with nonlinear and/or time-varying properties. An empirical application to a large dataset of U.S. macroeconomic time series illustrates the relative merits of both modeling strategies.
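
A minimal sketch of the two ingredients the TV-STAR model combines, regime switching through a logistic transition in a lagged observation and structural change through a logistic transition in time, here reduced to a first-order process with illustrative parameter values rather than the paper's specification.

```python
# Minimal sketch of a TV-STAR(1): regime switching driven by y_{t-1} through a
# logistic transition G, with the regime structure itself drifting over time
# through a second logistic transition F in t/T. All values are illustrative.
import numpy as np

def logistic(s, gamma, c):
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

rng = np.random.default_rng(3)
T = 400
phi = {"early_low": 0.2, "early_high": 0.8,   # AR(1) coefficients in the four regimes
       "late_low": -0.3, "late_high": 0.5}

y = np.zeros(T)
for t in range(1, T):
    G = logistic(y[t - 1], gamma=5.0, c=0.0)     # nonlinearity: depends on y_{t-1}
    F = logistic(t / T, gamma=10.0, c=0.5)       # structural change: depends on time
    early = (1 - G) * phi["early_low"] + G * phi["early_high"]
    late = (1 - G) * phi["late_low"] + G * phi["late_high"]
    coef = (1 - F) * early + F * late            # smoothly time-varying STAR coefficient
    y[t] = coef * y[t - 1] + rng.normal(scale=0.5)
```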

212 citations


Journal ArticleDOI
TL;DR: In this paper, a fully parametric approach is taken and a marginal distribution for the counts is specified, where conditional on past observations the mean is autoregressive; a variety of models based on the double Poisson distribution of Efron (1986) is introduced, which in a first step add an additional dispersion parameter and in a second step make this dispersion parameter time-varying.
Abstract: This paper introduces and evaluates new models for time series count data. The Autoregressive Conditional Poisson model (ACP) makes it possible to deal with issues of discreteness, overdispersion (variance greater than the mean) and serial correlation. A fully parametric approach is taken and a marginal distribution for the counts is specified, where conditional on past observations the mean is autoregressive. This makes it possible to attain improved inference on coefficients of exogenous regressors relative to static Poisson regression, which is the main concern of the existing literature, while modelling the serial correlation in a flexible way. A variety of models based on the double Poisson distribution of Efron (1986) is introduced: a first extension adds an additional dispersion parameter, and a second makes this dispersion parameter time-varying. All models are estimated using maximum likelihood, which makes the usual tests available. In this framework autocorrelation can be tested with a straightforward likelihood ratio test, whose simplicity is in sharp contrast with test procedures in the latent variable time series count model of Zeger (1988). The models are applied to the time series of monthly polio cases in the U.S. between 1970 and 1983, as well as to the daily number of $0.75 price-change durations of the IBM stock. A $0.75 price-change duration is defined as the time it takes the stock price to move by at least $0.75. The variable of interest is the daily number of such durations, which is a measure of intradaily volatility, since the more volatile the stock price is within a day, the larger the counts will be. The ACP models provide good density forecasts of this measure of volatility.
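
A minimal sketch of an ACP(1, 1) under the assumption of a conditional Poisson distribution with mean recursion lambda_t = omega + alpha*y_{t-1} + beta*lambda_{t-1}, together with a maximum likelihood fit; the double Poisson extensions with (time-varying) dispersion are not implemented, and all values are illustrative.

```python
# Minimal sketch of an ACP(1, 1): conditional Poisson counts with an
# autoregressive conditional mean, plus maximum-likelihood estimation.
# Parameter values and starting values are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(4)
omega, alpha, beta, n = 0.5, 0.3, 0.5, 1000

lam = np.empty(n)
y = np.empty(n, dtype=int)
lam[0] = omega / (1 - alpha - beta)            # unconditional mean as starting value
y[0] = rng.poisson(lam[0])
for t in range(1, n):
    lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
    y[t] = rng.poisson(lam[t])

def neg_loglik(params, y):
    w, a, b = params
    lam_t = np.mean(y)                         # initialize the recursion at the sample mean
    ll = 0.0
    for t in range(1, len(y)):
        lam_t = w + a * y[t - 1] + b * lam_t
        ll += y[t] * np.log(lam_t) - lam_t - gammaln(y[t] + 1)   # Poisson log-density
    return -ll

fit = minimize(neg_loglik, x0=[0.3, 0.2, 0.4], args=(y,),
               bounds=[(1e-6, None), (0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B")
print(fit.x)   # should be close to (0.5, 0.3, 0.5)
```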

160 citations


Journal ArticleDOI
TL;DR: The central idea is to transform a Gaussian vector autoregressive process into the desired multivariate time-series input process, which the authors presume to have a VARTA (Vector-Autoregressive-To-Anything) distribution.
Abstract: We present a model for representing stationary multivariate time-series input processes with marginal distributions from the Johnson translation system and an autocorrelation structure specified through some finite lag. We then describe how to generate data accurately to drive computer simulations. The central idea is to transform a Gaussian vector autoregressive process into the desired multivariate time-series input process, which we presume to have a VARTA (Vector-Autoregressive-To-Anything) distribution. We manipulate the autocorrelation structure of the Gaussian vector autoregressive process so that we achieve the desired autocorrelation structure for the simulation input process. We call this the correlation-matching problem and solve it by an algorithm that incorporates a numerical-search procedure and a numerical-integration technique. An illustrative example is included.
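
A minimal sketch of the VARTA transformation itself, driving a Gaussian VAR(1) base process through the normal CDF and a Johnson SU inverse CDF; the correlation-matching step that is the paper's central numerical problem is omitted, and the base-process and Johnson parameters are illustrative.

```python
# Minimal sketch of the VARTA idea: simulate a Gaussian VAR(1) base process,
# then map each component through Phi and a Johnson SU inverse CDF to obtain
# the target marginals. The correlation-matching adjustment of the base
# autocorrelations is omitted; all parameter values are illustrative.
import numpy as np
from scipy.stats import norm, johnsonsu

rng = np.random.default_rng(5)
A = np.array([[0.6, 0.1],
              [0.0, 0.5]])                       # base VAR(1) coefficient matrix
n = 1000
z = np.zeros((n, 2))
for t in range(1, n):
    z[t] = A @ z[t - 1] + rng.normal(size=2)

z_std = (z - z.mean(axis=0)) / z.std(axis=0)     # approximately N(0, 1) marginals
u = norm.cdf(z_std)                              # uniform marginals
x = johnsonsu.ppf(u, a=1.0, b=2.0)               # target Johnson SU marginals
print(x.mean(axis=0), x.std(axis=0))
```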

131 citations


Journal ArticleDOI
TL;DR: In this paper, two competing types of multistep predictors, i.e., plug-in and direct predictors are considered in autoregressive (AR) processes, and asymptotic expressions for the mean-squared prediction errors (MSPEs) of these two predictors were obtained in stationary cases.
Abstract: In this paper, two competing types of multistep predictors, i.e., plug-in and direct predictors, are considered in autoregressive (AR) processes. When a working model AR(k) is used for the h-step prediction with h > 1, the plug-in predictor is obtained by repeatedly using the fitted (by least squares) AR(k) model with unknown future values replaced by their own forecasts, and the direct predictor is obtained by estimating the h-step prediction model's coefficients directly by linear least squares. Under rather mild conditions, asymptotic expressions for the mean-squared prediction errors (MSPEs) of these two predictors are obtained in stationary cases. In addition, we also extend these results to models with deterministic time trends. Based on these expressions, the performances of the plug-in and direct predictors are compared. Finally, two examples are given to illustrate that some stationary case results on these MSPEs cannot be generalized to the nonstationary case.
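
A minimal sketch contrasting the two predictors on simulated AR(2) data: the plug-in forecast iterates a one-step least squares fit, while the direct forecast regresses y_t on lags h through h+k-1; the working order, horizon, and data-generating coefficients are illustrative.

```python
# Minimal sketch comparing the plug-in (iterated) and direct h-step predictors
# from a working AR(k) fitted by least squares; data and settings are illustrative.
import numpy as np

rng = np.random.default_rng(6)
n, k, h = 500, 2, 4
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal()

def lagmat(y, k, h):
    """Design matrix with rows [y_{t-h}, ..., y_{t-h-k+1}]; target y_t."""
    T = len(y)
    X = np.column_stack([y[h + k - 1 - j: T - j] for j in range(h, h + k)])
    return X, y[h + k - 1:]

# plug-in: fit the one-step AR(k), then iterate the recursion h times
X1, y1 = lagmat(y, k, h=1)
a = np.linalg.lstsq(X1, y1, rcond=None)[0]
state = list(y[-k:][::-1])                       # most recent observation first
for _ in range(h):
    state.insert(0, a @ np.array(state[:k]))     # replace unknown futures by forecasts
plug_in = state[0]

# direct: regress y_t on (y_{t-h}, ..., y_{t-h-k+1}) and predict in one shot
Xh, yh = lagmat(y, k, h=h)
b = np.linalg.lstsq(Xh, yh, rcond=None)[0]
direct = b @ y[-k:][::-1]

print(plug_in, direct)
```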

129 citations


Journal ArticleDOI
01 Jan 2003
TL;DR: In this article, the authors investigated the finite sample properties of estimators for spatial autoregressive models where the disturbance terms may follow a spatial autoregressive process, and found that the FGS2SLS estimator is virtually as efficient as the maximum likelihood estimator.
Abstract: The article investigates the finite sample properties of estimators for spatial autoregressive models where the disturbance terms may follow a spatial autoregressive process. In particular we investigate the finite sample behavior of the feasible generalized spatial two-stage least squares (FGS2SLS) estimator introduced by Kelejian and Prucha (1998), the maximum likelihood (ML) estimator, as well as that of several other estimators. We find that the FGS2SLS estimator is virtually as efficient as the ML estimator. This is important because the ML estimator is computationally burdensome, and may even be forbidding in large samples, while the FGS2SLS estimator remains computationally feasible in large samples.

104 citations


Journal ArticleDOI
TL;DR: Using the formal linearity test of Luukkonen, Saikkonen and Terasvirta (Biometrika, 75, 491-499, 1988) as a diagnostic tool, the empirical findings suggest that the linear autoregressive (AR) model is inadequate for describing the real exchange rate behaviour of 11 Asian economies.
Abstract: Utilizing the formal linearity test of Luukkonen, Saikkonen and Terasvirta (Biometrika, 75, 491-499, 1988) as a diagnostic tool, the empirical findings suggest that the linear autoregressive (AR) model is inadequate for describing the real exchange rate behaviour of 11 Asian economies. It is noted that the conventional battery of diagnostic tests is capable of identifying the inadequacy of the linear model in only three of these series. Moreover, linearity has been formally rejected in favour of the non-linear smooth transition autoregressive (STAR) model. The finding of non-linearity in the data generating process of these real exchange rates suggests that the use of a linear framework in empirical modelling and statistical testing procedures in the field of exchange rates may lead to inappropriate policy conclusions.

77 citations


Posted Content
TL;DR: In this paper, the authors consider testing a type of linear restriction implied by rational expectations hypotheses in a cointegrated vector autoregressive model for I(1) variables when, in addition, there is a restriction on the deterministic drift term.
Abstract: In this note we consider testing a type of linear restriction implied by rational expectations hypotheses in a cointegrated vector autoregressive model for I(1) variables when, in addition, there is a restriction on the deterministic drift term.

71 citations


Journal ArticleDOI
TL;DR: In this article, the least squares predictor obtained by fitting a finite-order autoregressive (AR) model was used for predicting the future of the observed time series (referred to as the same-realization prediction), and moment bounds for the inverse sample covariance matrix with an increasing dimension were established under various conditions.

Journal ArticleDOI
TL;DR: The authors proposed a method to obtain reduced-bias estimators for single and multiple regressor models by employing an augmented regression, adding a proxy for the errors in the autoregressive model.
Abstract: Standard predictive regressions produce biased coefficient estimates in small samples when the regressors are Gaussian first-order autoregressive with errors that are correlated with the error series of the dependent variable; see Stambaugh (1999) for the single-regressor model. This paper proposes a direct and convenient method to obtain reduced-bias estimators for single and multiple regressor models by employing an augmented regression, adding a proxy for the errors in the autoregressive model. We derive bias expressions for both the ordinary least squares and our reduced-bias estimated coefficients. For the standard errors of the estimated predictive coefficients we develop a heuristic estimator which performs well in simulations, for both the single-predictor model and an important specification of the multiple-predictor model. The effectiveness of our method is demonstrated by simulations and by empirical estimates of common predictive models in finance. Our empirical results show that some of the predictive variables that were significant under ordinary least squares become insignificant under our estimation procedure.
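
A minimal sketch of the augmented-regression idea for a single predictor: bias-correct the predictor's AR(1) coefficient, form proxy residuals, and include them as an extra regressor. The first-order Kendall-type correction used here is an illustrative choice, and the simulated data are not from the paper.

```python
# Minimal sketch of the augmented-regression idea for a single predictor:
# bias-correct the AR(1) coefficient of the predictor, form proxy residuals,
# and add them to the predictive regression. The first-order Kendall-type
# correction and the simulated data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n, rho, beta = 200, 0.95, 0.0
u = rng.normal(size=n)
v = -0.9 * u + 0.4 * rng.normal(size=n)                  # errors correlated across equations
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + v[t]                         # persistent predictor
y = beta * np.concatenate(([0.0], x[:-1])) + u           # y_t = beta * x_{t-1} + u_t

X_lag, Y = x[:-1], y[1:]

def ols(X, z):
    X = np.column_stack([np.ones(len(z)), X])
    return np.linalg.lstsq(X, z, rcond=None)[0]

# step 1: AR(1) fit for the predictor and a bias-corrected rho
_, rho_hat = ols(x[:-1], x[1:])
rho_c = rho_hat + (1 + 3 * rho_hat) / n                  # Kendall-type bias correction
theta_c = x[1:].mean() - rho_c * x[:-1].mean()
v_c = x[1:] - theta_c - rho_c * x[:-1]                   # proxy for the AR(1) errors

# step 2: augmented predictive regression y_t ~ x_{t-1} + v_c
coef_aug = ols(np.column_stack([X_lag, v_c]), Y)
coef_ols = ols(X_lag, Y)
print("OLS beta:", coef_ols[1], "  reduced-bias beta:", coef_aug[1])
```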

Journal ArticleDOI
TL;DR: Empirical evidence is provided to show that different algorithms produce substantially different estimates for the same model, and the interpretation of the model can differ according to the choice of algorithm.
Abstract: The paper investigates several empirical issues regarding quasi-maximum likelihood estimation of smooth transition autoregressive (STAR) models with GARCH errors (STAR-GARCH) and STAR models with smooth transition GARCH errors (STAR-STGARCH). Empirical evidence is provided to show that different algorithms produce substantially different estimates for the same model. Consequently, the interpretation of the model can differ according to the choice of algorithm. Convergence, the choice of different algorithms for maximizing the likelihood function, and the sensitivity of the estimates to outliers and extreme observations, are examined using daily data for S&P 500, Hang Seng and Nikkei 225 for the period January 1986 to April 2000.

Posted Content
TL;DR: In this article, a sequence of misspecification tests for a flexible nonlinear time series model is considered and the results show that the tests have size close to the nominal one and a good power.
Abstract: This paper considers a sequence of misspecification tests for a flexible nonlinear time series model. The model is a generalization of both the smooth transition autoregressive (STAR) and the autoregressive artificial neural network (AR-ANN) models. The tests are Lagrange multiplier (LM) type tests of parameter constancy against the alternative of smoothly changing ones, of serial independence, and of constant variance of the error term against the hypothesis that the variance changes smoothly between regimes. The small sample behaviour of the proposed tests is evaluated by a Monte-Carlo study and the results show that the tests have size close to the nominal one and a good power.

Posted Content
TL;DR: In this paper, the problem of detecting randomness in the coefficients of an AR(p) model against the alternative of a random coefficient autoregressive [RCAR(p)] model is considered.
Abstract: The problem of detecting randomness in the coefficients of an AR(p) model, that is, the problem of testing ordinary AR(p) dependence against the alternative of a random coefficient autoregressive [RCAR(p)] model is considered. A nonstandard LAN property is established for RCAR(p) models in the vicinity of AR(p) ones. Two main problems arise in this context. The first problem is related to the statistical model itself: Gaussian assumptions are highly unrealistic in a nonlinear context, and innovation densities should be treated as nuisance parameters. The resulting semiparametric model however appears to be severely nonadaptive. In contrast with the linear ARMA case, pseudo-Gaussian likelihood methods here are invalid under non-Gaussian densities; even the innovation variance cannot be estimated without a strict loss of efficiency. This problem is solved using a general result by Hallin and Werker, which provides semiparametrically efficient central sequences without going through explicit tangent space calculations. The second problem is related to the fact that the testing problem under study is intrinsically one-sided, while the case of multiparameter one-sided alternatives is not covered by classical asymptotic theory under LAN. A concept of locally asymptotically most stringent somewhere efficient test is proposed in order to cope with this one-sided nature of the problem.

Posted Content
14 May 2003
TL;DR: In this article, the authors compare the forecasting performance of linear autoregressive models, autoregressive models with structural breaks, self-exciting threshold autoregressive models, and Markov switching autoregressive models in terms of point, interval, and density forecasts for growth rates of industrial production of the G7 countries.
Abstract: We compare the forecasting performance of linear autoregressive models, autoregressive models with structural breaks, self-exciting threshold autoregressive models, and Markov switching autoregressive models in terms of point, interval, and density forecasts for h-month growth rates of industrial production of the G7 countries, for the period January 1960-December 2000. The results of point forecast evaluation tests support the established notion in the forecasting literature of the favorable performance of the linear AR model. By contrast, the Markov switching models render more accurate interval and density forecasts than the other models, including the linear AR model. This encouraging finding supports the idea that non-linear models may outperform linear competitors in terms of describing the uncertainty around future realizations of a time series.
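
A minimal sketch of fitting one of the competing model classes, a two-regime Markov switching autoregression, using the MarkovAutoregression class from statsmodels on simulated data; the regime means, switching probability, and settings are illustrative, and the interval/density forecast evaluation step is not shown.

```python
# Minimal sketch: fit a two-regime Markov switching AR(1) to a simulated
# growth-rate-like series with statsmodels; data and settings are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n, regime = 400, 0
y = np.zeros(n)
for t in range(1, n):
    if rng.random() < 0.05:                    # occasional regime switches
        regime = 1 - regime
    mu = 0.5 if regime == 0 else -0.5          # regime-specific mean
    y[t] = mu + 0.3 * (y[t - 1] - mu) + rng.normal(scale=0.3)

mod = sm.tsa.MarkovAutoregression(y, k_regimes=2, order=1, switching_ar=False)
res = mod.fit()
print(res.summary())
print(res.smoothed_marginal_probabilities[:10, 0])   # P(regime 0) for the first observations
```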

Journal ArticleDOI
TL;DR: The theoretical development of a new threshold autoregressive model based on trended time series is presented and a nonlinear economic model is used to derive the specification of the empirical econometric model.
Abstract: This paper presents the theoretical development of a new threshold autoregressive model based on trended time series. The theoretical arguments underlying the model are outlined and a nonlinear economic model is used to derive the specification of the empirical econometric model. Estimation and testing issues are considered and analysed. Additionally we apply the model to the empirical investigation of U.S. GDP.

Journal ArticleDOI
TL;DR: In this paper, the identification problem for continuous time vector autoregressive models is characterised as an inverse problem involving a certain block triangular matrix, facilitating the derivation of an improved sufficient condition for the restrictions the parameters must satisfy in order that they be identified on the basis of equispaced discrete data.
Abstract: This note exposits the problem of aliasing in identifying finite parameter continuous time stochastic models, including econometric models, on the basis of discrete data. The identification problem for continuous time vector autoregressive models is characterised as an inverse problem involving a certain block triangular matrix, facilitating the derivation of an improved sufficient condition for the restrictions the parameters must satisfy in order that they be identified on the basis of equispaced discrete data. Sufficient conditions already exist in the literature but these conditions are not sharp and rule out plausible time series behaviour.

Journal ArticleDOI
TL;DR: Bayesian methods of analysis are developed for a new class of threshold autoregressive models: endogenous delay threshold and strong evidence is found in favor of the Endogenous Delay Threshold Autoregressive (EDTAR) model over linear and traditional threshold Autoregressions.
Abstract: We develop Bayesian methods of analysis for a new class of threshold autoregressive models: endogenous delay threshold. We apply our methods to the commonly used sunspot data set and find strong evidence in favor of the Endogenous Delay Threshold Autoregressive (EDTAR) model over linear and traditional threshold autoregressions.

Journal ArticleDOI
TL;DR: In this paper, the authors derive the exact likelihood equations for the model parameters; these are related to the Yule-Walker equations and involve simple functions of the data, the model parameters and the autocovariances up to the order of the model.
Abstract: The multi-variate t distribution provides a viable framework for modelling volatile time-series data; it includes the multi-variate Cauchy and normal distributions as special cases. For multi-variate t autoregressive models, we study the nature of the innovation distribution and the prediction error variance; the latter is nonconstant and satisfies a kind of generalized autoregressive conditionally heteroscedastic model. We derive the exact likelihood equations for the model parameters; they are related to the Yule–Walker equations and involve simple functions of the data, the model parameters and the autocovariances up to the order of the model. The maximum likelihood estimators are obtained by alternately solving two linear systems and are illustrated using the lynx data. The simplicity of these equations contributes greatly to our theoretical understanding of the likelihood function and the ensuing estimators. Their range of applications is not limited to the parameters of autoregressive models; in fact, they are applicable to the parameters of ARMA models and covariance matrices of stochastic processes whose finite-dimensional distributions are multi-variate t.

Journal ArticleDOI
TL;DR: In this paper, auxiliary variables, called concomitants, are used to remove omitted-variable and measurement-error biases from the coefficients of an equation with the unknown 'true' functional form.
Abstract: The parameter estimates based on an econometric equation are biased and can also be inconsistent when relevant regressors are omitted from the equation or when included regressors are measured with error. This problem gets complicated when the 'true' functional form of the equation is unknown. Here, we demonstrate how auxiliary variables, called concomitants, can be used to remove omitted-variable and measurement-error biases from the coefficients of an equation with the unknown 'true' functional form. The method is specifically designed for panel data. Numerical algorithms for enacting this procedure are presented and an illustration is given using a practical example of forecasting small-area employment from nonlinear autoregressive models.

Dissertation
01 Jan 2003
TL;DR: In this paper, Autoregressive Conditional Heteroscedastic (ARCH) and Generalized Autoregressive Conditional Heteroscedastic (GARCH) models are defined and compared to the class of Autoregressive Moving Average models.
Abstract: Autoregressive and Moving Average time series models and their combination are reviewed. Autoregressive Conditional Heteroscedastic (ARCH) and Generalized Autoregressive Conditional Heteroscedastic (GARCH) models are extensions of these models. These are defined and compared to the class of Autoregressive Moving Average models. Maximum likelihood estimation of parameters is examined. Conditions for existence and stationarity of GARCH models are discussed and the moments of the observations and the conditional variance are derived. Characteristics of low order GARCH models are explored further through simulations with different initial parameter values. As examples, GARCH models with different orders are fitted to the Standard & Poor's 500 Stock Price Index.
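
A minimal sketch of a GARCH(1, 1) simulation with the covariance-stationarity condition alpha + beta < 1 checked explicitly; the parameter values are illustrative, not those fitted to the S&P 500 data.

```python
# Minimal sketch: simulate a GARCH(1, 1) process and compare the sample
# variance with the unconditional variance; parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(9)
omega, alpha, beta, n = 0.05, 0.08, 0.90, 2000
assert alpha + beta < 1, "covariance stationarity requires alpha + beta < 1"

eps = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = omega / (1 - alpha - beta)          # unconditional variance as start value
for t in range(1, n):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.normal()

print(eps.var(), omega / (1 - alpha - beta))    # sample vs theoretical variance
```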

Journal ArticleDOI
TL;DR: In this article, the authors derived the asymptotic distribution of residual autocorrelations in the autoregressive conditional duration model and derived a portmanteau goodness-of-fit statistic for this kind of model.

Posted Content
TL;DR: In this paper, the authors explore some aspects of the analysis of latent component structure in non-stationary time series based on time-varying autoregressive (TVAR) models that incorporate uncertainty on model order.
Abstract: We explore some aspects of the analysis of latent component structure in non-stationary time series based on time-varying autoregressive (TVAR) models that incorporate uncertainty on model order. Our modelling approach assumes that the AR coefficients evolve in time according to a random walk and that the model order may also change in time following a discrete random walk. In addition, we use a conjugate prior structure on the autoregressive coefficients and a discrete uniform prior on model order. Simulation from the posterior distribution of the model parameters can be obtained via standard forward filtering backward simulation algorithms. Aspects of implementation and inference on decompositions, latent structure and model order are discussed for a synthetic series and for an electroencephalogram (EEG) trace previously analysed using fixed order TVAR models.
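
A minimal sketch of the state evolution underlying a TVAR model, with AR coefficients drifting as a slow random walk; model-order uncertainty, the conjugate priors, and the forward filtering backward simulation step discussed in the paper are omitted, and the drift scale is an illustrative assumption.

```python
# Minimal sketch: simulate a time-varying AR(2) whose coefficients follow a
# slow random walk; the drift scale and initial coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(10)
n, p = 500, 2
phi = np.array([1.2, -0.5])                      # initial AR(2) coefficients
y = np.zeros(n)
for t in range(p, n):
    phi = phi + rng.normal(scale=0.002, size=p)  # random-walk drift in the coefficients
    y[t] = phi @ y[t - p:t][::-1] + rng.normal() # AR step with the current coefficients
print(phi)                                       # coefficients at the end of the sample
```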

Journal ArticleDOI
TL;DR: In this article, the first-order autoregressive Mittag-Leffler process was applied to weekly stream flows of the Kallada River in Kerala, India.

Posted Content
TL;DR: In this article, two simple tests to distinguish between unit root processes and stationary nonlinear processes are proposed, and two F type test statistics for the joint unit root and linearity hypothesis against a specific nonlinear alternative.
Abstract: In this paper two simple tests to distinguish between unit root processes and stationary nonlinear processes are proposed. New limit distribution results are provided, together with two F type test statistics for the joint unit root and linearity hypothesis against a specific nonlinear alternative. Nonlinearity is defined through the smooth transition autoregressive model. Due to occasional size distortion in small samples, a simple bootstrap method is proposed for estimating the p-values of the tests. Power simulations show that the two proposed tests have at least the same or higher power than the corresponding Dickey-Fuller tests. Finally, as an example, the tests are applied on the seasonally adjusted U.S. monthly unemployment rate. The linear unit root hypothesis is strongly rejected, showing considerable evidence that the series is better described by a stationary smooth transition autoregressive process than a random walk.

Journal ArticleDOI
TL;DR: An approximation of Parzen's optimal predictor is constructed in a reproducing kernel space framework, which does not require estimation of the operator of the autoregressive representation.
Abstract: We study the statistical prediction of a continuous time stochastic process admitting a functional autoregressive representation. We construct an approximation of Parzen's optimal predictor in a reproducing kernel space framework. This approach does not require estimation of the operator of the autoregressive representation.

Journal ArticleDOI
TL;DR: This paper shows that data generated from an exponential smooth transition autoregressive model can exhibit the long memory property, whether in raw or temporally aggregated form, confirming Granger and Terasvirta's suggestion that other non-linear models with this property are worth searching for.
Abstract: Granger and Terasvirta provided an abstract example of a non-linear model that can generate data with the misleading linear property of long memory. They suggested that other non-linear models with this property are worth searching for. The empirical results of this article indicate that data generated from an exponential smooth transition autoregressive model can exhibit the long memory property whether in raw or temporally aggregated form.
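
A minimal sketch of the mechanism in question: an ESTAR(1) process that is nearly a random walk inside the transition band can produce slowly decaying sample autocorrelations reminiscent of long memory; the parameter values are illustrative, not those used in the article.

```python
# Minimal sketch: simulate an ESTAR(1) process and inspect its sample
# autocorrelations, whose slow decay can mimic long memory; values are illustrative.
import numpy as np

rng = np.random.default_rng(11)
n, phi1, phi2, gamma = 5000, 0.99, -0.6, 0.1
y = np.zeros(n)
for t in range(1, n):
    G = 1.0 - np.exp(-gamma * y[t - 1] ** 2)        # exponential transition function
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 1] * G + rng.normal()

def acf(x, lags):
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, lags + 1)])

print(acf(y, 50))   # autocorrelations decay slowly, resembling long memory
```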

Journal ArticleDOI
TL;DR: In this article, the numerical solution of seemingly unrelated regression (SUR) models with vector autoregressive disturbances is considered, and an orthogonal transformation is applied to reduce the model to one with smaller dimensions.

01 Jan 2003
TL;DR: A detailed study of aggregated network traffic using time series analysis techniques develops a classification scheme for the traces and finds that in many cases there is a “sweet spot”, a degree of aggregation at which predictability is maximized.
Abstract: This paper describes a detailed study of aggregated network traffic using time series analysis techniques. The study is based on three sets of packet traces: 175 short-period WAN traces from the NLANR PMA archive (NLANR), 34 long-period WAN traces from NLANR archive (AUCKLAND), and the four Bellcore LAN and WAN traces (BC). We binned the packets with different bin sizes to produce a set of time series estimating the consumed bandwidth. We studied these series using the following time series techniques: summary statistics, time series structure, the autocorrelation function, the histogram, and the power spectral density. Using a qualitative approach, we developed a classification scheme for the traces using the results of our analyses. We believe that this classification scheme will be helpful for others studying these freely available traces. We studied the predictability of the traces by choosing representatives of the different classes and then applying a wide variety of linear time series models to them. We found considerable variation in predictability. Some network traffic is essentially white noise while other traffic can be predicted with considerable accuracy. The choice of predictive model is also relatively context-dependent, although autoregressive models tend to do well. Predictability is also affected by the bin size used. As might be expected, it is often the case that predictability increases as bin size grows. However, we also found that in many cases there is a “sweet spot”, a degree of aggregation at which predictability is maximized.
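
A minimal sketch of the binning-and-prediction workflow: aggregate a packet trace into bandwidth series at several bin sizes, fit an autoregressive model to each, and compare out-of-sample forecast error. The synthetic Poisson-like trace below only illustrates the mechanics; it is not one of the NLANR or Bellcore traces and has no long-range dependence, so no sweet spot should be expected from it.

```python
# Minimal sketch: bin a synthetic packet trace at several bin sizes, fit an AR
# model to each bandwidth series, and compare forecast error across bin sizes.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(12)
arrivals = np.cumsum(rng.exponential(scale=0.001, size=200_000))   # packet times (s)
sizes = rng.choice([64, 576, 1500], size=arrivals.size)            # packet sizes (bytes)

for bin_size in (0.01, 0.1, 1.0):                                  # seconds per bin
    edges = np.arange(0.0, arrivals[-1] + bin_size, bin_size)
    bw = np.histogram(arrivals, bins=edges, weights=sizes)[0] / bin_size  # bytes/s
    train, test = bw[:-50], bw[-50:]
    res = AutoReg(train, lags=4).fit()
    pred = res.predict(start=len(train), end=len(bw) - 1)           # 50-step-ahead forecast
    nmse = np.mean((pred - test) ** 2) / np.var(test)
    print(f"bin {bin_size:>5}s: normalized forecast MSE = {nmse:.3f}")
```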