scispace - formally typeset

Showing papers on "STAR model" published in 2000


Journal ArticleDOI
TL;DR: In this paper, the local linear regression technique is applied to estimation of functional-coefficient regression models for time series data, and a new bootstrap test for the goodness of fit of models and a bandwidth selector based on newly defined cross-validatory estimation for the expected forecasting errors are proposed.
Abstract: The local linear regression technique is applied to estimation of functional-coefficient regression models for time series data. The models include threshold autoregressive models and functional-coefficient autoregressive models as special cases but with the added advantages such as depicting finer structure of the underlying dynamics and better postsample forecasting performance. Also proposed are a new bootstrap test for the goodness of fit of models and a bandwidth selector based on newly defined cross-validatory estimation for the expected forecasting errors. The proposed methodology is data-analytic and of sufficient flexibility to analyze complex and multivariate nonlinear structures without suffering from the “curse of dimensionality.” The asymptotic properties of the proposed estimators are investigated under the α-mixing condition. Both simulated and real data examples are used for illustration.
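The core estimation idea can be sketched in a few lines, though not with the authors' exact procedure: simulate a functional-coefficient AR process, then recover the coefficient function at a point by kernel-weighted local linear least squares. The model, Gaussian kernel, and bandwidth below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a functional-coefficient AR process:
# y_t = a(y_{t-2}) * y_{t-1} + eps_t, with a(u) = 0.6 * exp(-u^2).
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * np.exp(-y[t - 2] ** 2) * y[t - 1] + 0.3 * rng.standard_normal()

def local_linear_coef(y, u, h):
    """Local linear estimate of the coefficient function a(.) at the point u."""
    resp = y[2:]      # y_t
    x1 = y[1:-1]      # y_{t-1}, the variable carrying the functional coefficient
    u_t = y[:-2]      # y_{t-2}, the smoothing variable
    sw = np.exp(-0.25 * ((u_t - u) / h) ** 2)  # sqrt of Gaussian kernel weights
    # Local linear expansion: a(u_t) ~ a(u) + a'(u) * (u_t - u)
    X = np.column_stack([x1, x1 * (u_t - u)])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], resp * sw, rcond=None)
    return beta[0]    # estimate of a(u)

a_hat = local_linear_coef(y, 0.0, h=0.5)  # true value is a(0) = 0.6
```

The weighted least squares is implemented by scaling both design and response by the square root of the kernel weights, which is equivalent to minimizing the kernel-weighted sum of squares.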

462 citations


Journal ArticleDOI
TL;DR: The Gaussian mixture transition distribution model is generalized to the mixture autoregressive (MAR) model for the modelling of non‐linear time series and appears to capture features of the data better than other competing models do.
Abstract: We generalize the Gaussian mixture transition distribution (GMTD) model introduced by Le and co-workers to the mixture autoregressive (MAR) model for the modelling of non-linear time series. The models consist of a mixture of K stationary or non-stationary AR components. The advantages of the MAR model over the GMTD model include a fuller range of shape-changing predictive distributions and the ability to handle cycles and conditional heteroscedasticity in the time series. The stationarity conditions and autocorrelation function are derived. The estimation is easily done via a simple EM algorithm and the model selection problem is addressed. The shape-changing feature of the conditional distributions makes these models capable of modelling time series with multimodal conditional distributions and with heteroscedasticity. The models are applied to two real data sets and compared with other competing models. The MAR models appear to capture features of the data better than other competing models do.
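A minimal simulation conveys the mechanism: at each step the next observation is drawn from one of two AR(1) components chosen with fixed mixing weights. The weights, coefficients, and scales below are invented for illustration, and the paper's EM estimation is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-component MAR(1): each step draws its next value from one of two
# AR(1) components, chosen at random with fixed mixing weights.
weights = [0.7, 0.3]
phi = [0.5, -0.8]    # AR coefficients of the two components
sigma = [0.5, 1.5]   # component innovation scales

n = 2000
y = np.zeros(n)
for t in range(1, n):
    k = rng.choice(2, p=weights)  # pick a component
    y[t] = phi[k] * y[t - 1] + sigma[k] * rng.standard_normal()
```

Because the two components have different scales, the conditional distribution of each step is a two-component normal mixture, which is what gives the MAR family its conditional heteroscedasticity and multimodality.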

286 citations


Journal ArticleDOI
TL;DR: The theoretical results can be used to develop diagnostics for deciding if a time series can be modelled by some linear autoregressive model, and for selecting among several candidate models.
Abstract: We give a general formulation of a non-Gaussian conditional linear AR(1) model subsuming most of the non-Gaussian AR(1) models that have appeared in the literature. We derive some general results giving properties for the stationary process mean, variance and correlation structure, and conditions for stationarity. These results highlight similarities and differences with the Gaussian AR(1) model, and unify many separate results appearing in the literature. Examples illustrate the wide range of properties that can appear under the conditional linear autoregressive assumption. These results are used in analysing three real data sets, illustrating general methods of estimation, model diagnostics and model selection. In particular, we show that the theoretical results can be used to develop diagnostics for deciding if a time series can be modelled by some linear autoregressive model, and for selecting among several candidate models.

142 citations


Journal ArticleDOI
TL;DR: A new method for the analysis of linear models that have autoregressive errors is proposed, which is not only relevant in the behavioral sciences for analyzing small-sample time-series intervention models, but is also appropriate for a wide class of small- sample linear model problems.
Abstract: A new method for the analysis of linear models that have autoregressive errors is proposed. The approach is not only relevant in the behavioral sciences for analyzing small-sample time-series intervention models, but it is also appropriate for a wide class of small-sample linear model problems in which there is interest in inferential statements regarding all regression parameters and autoregressive parameters in the model. The methodology includes a double application of bootstrap procedures. The 1st application is used to obtain bias-adjusted estimates of the autoregressive parameters. The 2nd application is used to estimate the standard errors of the parameter estimates. Theoretical and Monte Carlo results are presented to demonstrate asymptotic and small-sample properties of the method; examples that illustrate advantages of the new approach over established time-series methods are described.
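The flavor of the method can be sketched for an AR(1) with a residual bootstrap. For brevity this sketch reuses a single bootstrap sample for both the bias adjustment and the standard error, whereas the paper applies the bootstrap twice; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_ols(y):
    """OLS estimate of the AR(1) coefficient."""
    return np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

# Simulate an AR(1) series with phi = 0.6
n, phi = 200, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

phi_hat = ar1_ols(y)
resid = y[1:] - phi_hat * y[:-1]

# Bootstrap: regenerate series from resampled residuals, re-estimate phi.
B = 500
boot = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=n - 1, replace=True)
    yb = np.zeros(n)
    for t in range(1, n):
        yb[t] = phi_hat * yb[t - 1] + e[t - 1]
    boot[b] = ar1_ols(yb)

# Bias adjustment (the paper's first application of the bootstrap) ...
phi_adj = 2 * phi_hat - boot.mean()
# ... and a standard error (the paper's second application).
se_hat = boot.std(ddof=1)
```

The adjustment uses the bootstrap analogue of the bias, mean(boot) - phi_hat, and subtracts it from the original estimate.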

120 citations


Posted Content
TL;DR: In this paper, a general approach to predicting multiple time series subject to Markovian shifts in the regime is proposed; the feasibility of the proposed forecasting techniques in empirical research is demonstrated and their forecast accuracy is evaluated.
Abstract: While there has been a great deal of interest in the modelling of non-linearities and regime shifts in economic time series, there is no clear consensus regarding the forecasting abilities of these models. In this paper we develop a general approach to predict multiple time series subject to Markovian shifts in the regime. The feasibility of the proposed forecasting techniques in empirical research is demonstrated and their forecast accuracy is evaluated.

117 citations


Posted Content
TL;DR: In this article, it is shown that structural changes in the covariance matrix help identify the change points and that the number of change points can be consistently estimated via the information criterion approach; tools for constructing confidence intervals for change points in multiple time series are also provided.
Abstract: This paper analyzes vector autoregressive models (VAR) with multiple structural changes. One distinct feature of this paper is the explicit consideration of structural changes in the variance-covariance matrix, in addition to changes in the autoregressive coefficients. The model is estimated by the quasi-maximum likelihood method. It is shown that shifts in the covariance matrix help identify the change points. We obtain consistency, rate of convergence, and limiting distributions for the estimated change points and the estimated regression coefficients and variance-covariance matrix. We also show that the number of change points can be consistently estimated via the information criterion approach. The paper provides tools for constructing confidence intervals for change points in multiple time series. The result is also useful for analyzing volatility changes in economic time series.
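In the simplest univariate case, a single break in the variance only, the quasi-likelihood idea reduces to a grid search over candidate break dates. This toy version (the break location, scales, and trimming are invented for illustration) shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(3)

# Series with a single variance break at t = 120 (scale 1.0 -> 3.0).
n, true_break = 200, 120
x = np.concatenate([rng.standard_normal(true_break),
                    3.0 * rng.standard_normal(n - true_break)])

def var_change_point(x, trim=20):
    """Quasi-maximum-likelihood estimate of a single variance break date."""
    n = len(x)
    best_k, best_ll = None, -np.inf
    for k in range(trim, n - trim):
        s1 = x[:k].var()
        s2 = x[k:].var()
        # Gaussian quasi-log-likelihood with separate variances per segment
        ll = -0.5 * (k * np.log(s1) + (n - k) * np.log(s2))
        if ll > best_ll:
            best_ll, best_k = ll, k
    return best_k

k_hat = var_change_point(x)
```

Each candidate split gets the concentrated Gaussian quasi-likelihood with segment-specific variances; the maximizing date is the break estimate.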

94 citations


Journal ArticleDOI
TL;DR: A linear model with time varying parameters controlled by a neural network to analyze and forecast nonlinear time series with the advantage of naturally incorporating linear multivariate thresholds and smooth transitions between regimes is considered.
Abstract: This paper considers a linear model with time varying parameters controlled by a neural network to analyze and forecast nonlinear time series. We show that this formulation, called neural coefficient smooth transition autoregressive model, is in close relation to the threshold autoregressive model and the smooth transition autoregressive model with the advantage of naturally incorporating linear multivariate thresholds and smooth transitions between regimes. In our proposal, the neural-network output is used to induce a partition of the input space, with smooth and multivariate thresholds. This also allows the choice of good initial values for the training algorithm.
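The univariate special case the authors relate their model to, a logistic smooth transition AR (STAR), can be simulated in a few lines; the coefficients and transition parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def G(s, gamma=5.0, c=0.0):
    """Logistic transition function: smooth switch between two regimes."""
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

# Two-regime LSTAR(1): the AR coefficient moves smoothly from 0.8 to -0.5
# as the transition variable y_{t-1} crosses the threshold c = 0.
n = 1000
y = np.zeros(n)
for t in range(1, n):
    g = G(y[t - 1])
    coef = 0.8 * (1 - g) + (-0.5) * g
    y[t] = coef * y[t - 1] + 0.3 * rng.standard_normal()
```

Letting gamma grow large makes G approach a step function, recovering the threshold AR model as a limiting case, which is the relationship the abstract refers to.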

76 citations


Journal ArticleDOI
TL;DR: In this article, a nonparametric version of the Akaike information criterion is developed to determine the order of the model and to select the optimal bandwidth, and a hypothesis testing technique, based on the residual sum of squares and F-test, is proposed to detect whether certain coefficients in the model are really varying or whether any variables are significant.
Abstract: In this paper, we analyze the biochemical oxygen demand data collected over two years from McDowell Creek, Charlotte, North Carolina, U.S.A., by fitting an autoregressive model with time-dependent coefficients. The local linear smoothing technique is developed and implemented to estimate the coefficient functions of the autoregressive model. A nonparametric version of the Akaike information criterion is developed to determine the order of the model and to select the optimal bandwidth. We also propose a hypothesis testing technique, based on the residual sum of squares and F-test, to detect whether certain coefficients in the model are really varying or whether any variables are significant. The approximate null distributions of the test are provided. The proposed model has several advantages: it is determined completely by the data, it is easy to implement, and it provides better predictions. Copyright © 2000 John Wiley & Sons, Ltd.

54 citations


Journal ArticleDOI
TL;DR: In this paper, a non-linear threshold autoregressive (TAR) model was used to describe the wave height of sea states at Figueira da Foz, on the Portuguese coast.

45 citations


Proceedings ArticleDOI
23 Jul 2000
TL;DR: It is shown that the criterion can be used to determine the update coefficient, the model order and the estimation algorithm for an adaptive (non-stationary) autoregressive model.
Abstract: A criterion, similar to the information criterion of a stationary autoregressive (AR) model, is introduced for an adaptive (non-stationary) autoregressive model. It is applied to nonstationary EEG data. It is shown that the criterion can be used to determine the update coefficient, the model order and the estimation algorithm.

45 citations


Journal ArticleDOI
TL;DR: Time varying ARMA and ARMAX models are proposed for input-output modeling of nonlinear deterministic and stochastic systems with coefficients estimated by a random walk Kalman filter (RWKF).
Abstract: Time varying ARMA (autoregressive moving average) and ARMAX (autoregressive moving average with exogenous inputs) models are proposed for input-output modeling of nonlinear deterministic and stochastic systems. The coefficients of these models are estimated by a random walk Kalman filter (RWKF). This method requires no prior assumption on the nature of the model coefficients, and is suitable for real-time implementation since no off-line training is needed. A simulation example illustrates the method. Goodness of performance is judged by the quality of the residuals, histograms, autocorrelation functions and the Kolmogorov-Smirnov test.
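For the simplest case, an AR(1) whose coefficient drifts over time, the random walk Kalman filter reduces to scalar recursions. The noise variances q and r below are hand-picked tuning values for this sketch, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate an AR(1) whose coefficient drifts slowly from 0.2 to 0.8.
n = 400
true_phi = np.linspace(0.2, 0.8, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = true_phi[t] * y[t - 1] + 0.3 * rng.standard_normal()

# Random walk Kalman filter: the state is the AR coefficient, modelled as
# a random walk; the observation equation is y_t = phi_t * y_{t-1} + v_t.
q, r = 1e-4, 0.09  # state and observation noise variances (tuning choices)
phi_hat, P = 0.0, 1.0
est = np.zeros(n)
for t in range(1, n):
    P = P + q                    # predict: random walk adds variance q
    H = y[t - 1]                 # time-varying observation "design" scalar
    S = H * P * H + r            # innovation variance
    K = P * H / S                # Kalman gain
    phi_hat = phi_hat + K * (y[t] - H * phi_hat)  # measurement update
    P = (1 - K * H) * P
    est[t] = phi_hat
```

Because the state equation is just a random walk, no model of how the coefficients evolve is needed, which is the property the abstract highlights.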

Journal ArticleDOI
TL;DR: In this article, the authors investigate empirical properties of competing devices to test for autoregressive dynamics in case of heteroskedastic errors, and show that bootstrapped versions of least-squares-based statistics have better empirical size and comparable power properties.
Abstract: A puzzling characteristic of asset returns for various frequencies is the often observed positive autocorrelation at lag one. To some extent this can be explained by standard asset pricing models when assuming time-varying risk premia. However, one often finds better results when directly fitting an autoregressive model, for which there is little economic foundation. One may ask whether the underlying process does in fact contain an autoregressive component. It is therefore of interest to have a statistical test at hand that performs well under the stylized facts of financial returns. In this paper, we investigate empirical properties of competing devices to test for autoregressive dynamics in case of heteroskedastic errors. For the volatility process we assume GARCH, TGARCH and stochastic volatility. The results indicate that standard quasi-maximum-likelihood inference for the autoregressive parameter is negatively affected by misspecification of the volatility process. We show that bootstrapped versions of least-squares-based statistics have better empirical size and comparable power properties. Applied to German stock return data, the alternative tests yield very different p-values for a considerable number of stock return processes.

01 Jan 2000
TL;DR: The class of time-varying autoregressive (TVAR) models and a range of related recent developments of Bayesian time series modelling are reviewed.
Abstract: We review the class of time-varying autoregressive (TVAR) models and a range of related recent developments of Bayesian time series modelling.


Journal ArticleDOI
TL;DR: Two types of estimators are provided, having bias of order O(h) or O(h^2) respectively, for small sampling interval h, which are computationally efficient and always yield a stable autoregressive polynomial.
Abstract: We extend our two earlier continuous-time estimation methods for continuous-time autoregressive (CAR) model to derive estimators using only finely sampled discrete-time data. The approach is based on the approximation of derivatives by divided differences, coupled with some bias correction. Two types of estimators are provided, having bias of order O(h) or O(h^2) respectively, for small sampling interval h. The procedures are computationally efficient and always yield a stable autoregressive polynomial. Simulations show that their biases are quite low.
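For a CAR(1) (Ornstein-Uhlenbeck) process the divided-difference idea is a single regression. This sketch shows only the uncorrected O(h)-bias version; the paper's bias correction is omitted and the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate a CAR(1) process dX = -a*X dt + dW, sampled at a fine
# interval h, using its exact discrete-time transition.
a, h, n = 1.5, 0.01, 20000
rho = np.exp(-a * h)
s = np.sqrt((1 - rho ** 2) / (2 * a))
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + s * rng.standard_normal()

# Divided-difference estimator: regress (x_{t+h} - x_t)/h on x_t.
# The slope estimates -a, with bias of order O(h) (no correction here).
dx = (x[1:] - x[:-1]) / h
a_hat = -np.dot(dx, x[:-1]) / np.dot(x[:-1], x[:-1])
```

The expected slope is (e^(-a*h) - 1)/h, which differs from -a by roughly a^2*h/2, which is the O(h) bias the paper's correction targets.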

Proceedings ArticleDOI
01 Jan 2000
TL;DR: This work combines hidden Markov models, independent component analysis (ICA) and generalised autoregressive models (GAR) into a single generative model for the analysis of nonstationary multivariate time series.
Abstract: Much research in unsupervised learning builds on the idea of using generative models for modelling the probability distribution over a set of observations. These approaches suggest that powerful new data analysis tools may be derived by combining existing models using a probabilistic generative framework. We follow this approach and combine hidden Markov models (HMMs), independent component analysis (ICA) and generalised autoregressive models (GAR) into a single generative model for the analysis of nonstationary multivariate time series. Our motivation for this work derives from our desire to analyse biomedical signals which are known to be highly non-stationary. Moreover, in signals such as the electroencephalogram (EEG), for example, we have a number of sensors (electrodes) which detect signals emanating from a number of cortical sources via an unknown mixing process. This naturally fits an ICA approach which is further enhanced by noting that the sources themselves are characterised by their dynamic content. This leads us to the use of generalised autoregressive (GAR) processes to model the sources.

Posted Content
TL;DR: In this article, a survey of recent developments related to the smooth transition autoregressive [STAR] time series model and several of its variants is presented, focusing on new methods for testing for STAR nonlinearity, model evaluation, and forecasting.
Abstract: This paper surveys recent developments related to the smooth transition autoregressive [STAR] time series model and several of its variants. We put emphasis on new methods for testing for STAR nonlinearity, model evaluation, and forecasting. Several useful extensions of the basic STAR model, which concern multiple regimes, time-varying nonlinear properties, and models for vector time series, are also reviewed.

Posted Content
TL;DR: In this paper, it was shown that the bias of estimated parameters in autoregressive models can increase as the sample size grows, and this bias is also a nonmonotonic function of the largest auto-regressive root.
Abstract: It is shown that the bias of estimated parameters in autoregressive models can increase as the sample size grows. This bias is also a nonmonotonic function of the largest autoregressive root, contrary to what asymptotic approximations had indicated so far in the literature. These unusual results are due to the effect of the initial sample observations that are typically neglected in theoretical asymptotic analysis, in spite of their empirical relevance. Implications for practical economic modelling are considered, including a comparison of the likely inaccuracies of parameter estimates in alternative models based on competing macroeconomic theories.
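The basic small-sample bias is easy to reproduce by Monte Carlo. This sketch conditions on a zero initial observation, whereas the paper's point concerns precisely the treatment of initial observations, so it illustrates only the textbook downward bias, not the paper's nonmonotonicity results.

```python
import numpy as np

rng = np.random.default_rng(7)

def ols_ar1(y):
    """OLS estimate of the AR(1) coefficient."""
    return np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

# Monte Carlo: downward bias of OLS in a near-unit-root AR(1), small n.
phi, n, reps = 0.9, 50, 2000
ests = np.empty(reps)
for r in range(reps):
    y = np.zeros(n)  # conditions on y_0 = 0
    for t in range(1, n):
        y[t] = phi * y[t - 1] + rng.standard_normal()
    ests[r] = ols_ar1(y)

bias = ests.mean() - phi  # negative: OLS underestimates phi
```

The average estimate falls noticeably short of the true 0.9, consistent with the well-known approximation that the bias is of order -(1 + 3*phi)/n.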

Journal ArticleDOI
TL;DR: This work considers the autoregressive estimation for periodically correlated processes, using the parametrization given by the partial autocorrelation function, and proposes an estimation of these parameters by extending the sample partial autocorrelation method to this situation.
Abstract: We consider the autoregressive estimation for periodically correlated processes, using the parametrization given by the partial autocorrelation function. We propose an estimation of these parameters by extending the sample partial autocorrelation method to this situation. A comparison with other methods is made. Relationships with the stationary multivariate case are discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors explore the use of Schweppe-type weights for a class of weighted Wilcoxon estimates and apply the corresponding estimates to an autoregressive time series model.
Abstract: In this paper we explore the use of Schweppe-type weights for a class of weighted Wilcoxon estimates and apply the corresponding estimates to an autoregressive time series model. This special class of estimates is essentially the autoregressive analog of the HBR-estimates proposed by Chang et al. (1999) in the linear regression context. Assuming a stationary finite second moment autoregressive model of order p, asymptotic linearity properties are derived for the HBR-estimate. Based on these properties, the HBR-estimate is shown to be asymptotically normal at rate n^(1/2). Tests of general linear hypotheses as well as standard errors for confidence interval procedures can be based on such results. In a linear regression setting, the HBR-estimate is highly efficient and inherits a totally bounded influence function and a 50% breakdown point. Examples and a Monte Carlo study over innovated and additive outlier models indicate that these properties of the HBR-estimate are preserved in an autoregressive time...

01 Jan 2000
TL;DR: In this article, the authors consider a class of non-linear models for time series analysis based on mixtures of local autoregressive models, which they call MixAR models.
Abstract: We consider a novel class of non-linear models for time series analysis based on mixtures of local autoregressive models, which we call MixAR models. MixAR models are constructed so that at any given point in time, one of a number of alternative AR models describes its dynamics. The driving AR model is randomly selected from the set of m possible models according to a state (lag vector) dependent probability distribution. Thus, the MixAR process is a Markov chain with a transition kernel that takes the form of a mixture distribution with non-constant (state dependent) weights. This structure gives MixAR models considerable flexibility, as will be indicated both theoretically and via example. The theoretical aspects of MixAR models that we examine include stochastic stability of MixAR processes, parameter estimation algorithms, and approximation of quite general underlying prediction functions, when the true process is not of the MixAR family. We complement this study with some numerical examples, which seem to indicate that the out-of-sample performance is competitive, in spite of the fairly large number of parameters in MixAR models. Prediction results on benchmark time series are compared to linear and non-linear models.

Journal ArticleDOI
TL;DR: In this article, the Lagrange multiplier test was used for the alternative of a nonlinear continuous-time autoregressive model with the instantaneous mean having one degree of nonlinearity.
Abstract: We have implemented a Lagrange multiplier test specifically for the alternative of a nonlinear continuous-time autoregressive model with the instantaneous mean having one degree of nonlinearity. The test is then extended to testing for the alternative of general nonlinear continuous-time autoregressive models with multiple degrees of nonlinearity. The performance of the test in the finite-sample case is compared with several existing tests for nonlinearity including Keenan's (1985) test, Petruccelli & Davies' (1986) test and Tsay's (1986, 1989) tests. The comparison is based on simulated data from some linear autoregressive models, self-exciting threshold autoregressive models, bilinear models and the nonlinear continuous-time autoregressive models which the Lagrange multiplier test is designed to detect. The Lagrange multiplier test outperforms the other tests in detecting the model for which it is designed. Compared with the other tests, the test has excellent power in detecting bilinear models, but seems less powerful in detecting self-exciting threshold autoregressive nonlinearity. The test is further illustrated with the Hong Kong beach water quality data.

Proceedings Article
01 Jan 2000
TL;DR: In this paper, an artificial fuzzy neural network based on B-spline membership functions is presented as an alternative to the stock prediction method based on autoregressive (AR) models.
Abstract: Most models for the time series of stock prices have centered on autoregressive (AR) processes. Traditionally, fundamental Box-Jenkins analysis [3] has been the mainstream methodology used to develop time series models. We first briefly describe the development of a classical AR model for stock price forecasting. A fuzzy regression model is then introduced. Following this description, an artificial fuzzy neural network based on B-spline membership functions is presented as an alternative to the stock prediction method based on AR models. Finally, we present our preliminary results and some further experiments that we performed.

Posted Content
20 Jan 2000
TL;DR: This paper develops and implements a framework that can be used to assess the absorption rate of shocks in nonlinear models; the current-depth-of-recession model of Beaudry and Koop, the floor-and-ceiling model of Pesaran and Potter, and a multivariate STAR model are used to illustrate the various concepts.
Abstract: A key feature of many nonlinear time series models is that they allow for the possibility that the model structure experiences changes, depending on, for example, the state of the economy or of the financial market. A common property of these models is that it generally is difficult, if not impossible, to fully understand and interpret the structure of the model by considering the estimated values of the model parameters only. To shed light on the characteristics of a nonlinear model it can then be useful to consider the effects of shocks on the future patterns of a time series variable. Most interest in such impulse response analysis has concentrated on measuring the persistence of shocks, or the magnitude of their (ultimate) effect on the time series variable. Interestingly, far less attention has been given to measuring the speed at which this final effect is attained, that is, how fast shocks are "absorbed" by a time series. In this paper we develop and implement a framework that can be used to assess the absorption rate of shocks in nonlinear models. The floor-and-ceiling model for output growth of Pesaran and Potter (1997) and a multivariate smooth transition model for income, consumption and investment are used to illustrate the various concepts.

Journal ArticleDOI
TL;DR: In this paper, a semiparametric instrumental variables estimator for autoregressive time series models with martingale difference sequences that satisfy an additional symmetry condition on their fourth order moments is proposed.
Abstract: This paper analyzes autoregressive time series models where the errors are assumed to be martingale difference sequences that satisfy an additional symmetry condition on their fourth order moments. Under these conditions Quasi Maximum Likelihood estimators of the autoregressive parameters are no longer efficient in the GMM sense. The main result of the paper is the construction of efficient semiparametric instrumental variables estimators for the autoregressive parameters. The optimal instruments are linear functions of the innovation sequence. It is shown that a frequency domain approximation of the optimal instruments leads to an estimator which only depends on the data periodogram and an unknown linear filter. Semiparametric methods to estimate the optimal filter are proposed. The procedure is equivalent to GMM estimators where lagged observations are used as instruments. Due to the additional symmetry assumption on the fourth moments the number of instruments is allowed to grow at the same rate as the sample. No lag truncation parameters are needed to implement the estimator which makes it particularly appealing from an applied point of view.

Journal ArticleDOI
TL;DR: In this paper, seasonal autoregressive models with an intercept or linear trend are discussed and the main focus is on the models in which the intercept or trend parameters do not depend on the season.
Abstract: Seasonal autoregressive models with an intercept or linear trend are discussed. The main focus of this paper is on the models in which the intercept or trend parameters do not depend on the season. One of the most important results from this study is the asymptotic distribution for the ordinary least squares estimator of the autoregressive parameter obtained under nearly integrated condition, and another is the approximation to the limiting distribution of the t-statistic under the null for testing the unit root hypothesis.

Journal ArticleDOI
09 Apr 2000
TL;DR: A channel model that employs signal-dependent autoregressive filters to accurately and efficiently model signal nonlinearities and media noise characteristics, such as signal-dependence and correlation, which are common in magnetic recording signals is evaluated.
Abstract: Recently, work has been published on a channel model that employs signal-dependent autoregressive filters to accurately and efficiently model signal nonlinearities and media noise characteristics, such as signal-dependence and correlation, which are common in magnetic recording signals. This paper evaluates the model's performance using 42 sets of spin-stand data that cover a large range of head/media manufacturers and operating points.


Posted Content
TL;DR: In this paper, the asymptotic properties of the ordinary least squares estimator for spatial autoregressive models are investigated, and it is shown that this estimator is biased as well as inconsistent for the parameters regardless of the distribution of the error term.
Abstract: This paper investigates the asymptotic properties of the ordinary least squares estimator for spatial autoregressive models. We show that this estimator is biased as well as inconsistent for the parameters regardless of the distribution of the error term. Illustrative examples are also provided.
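A small simulation makes the inconsistency visible: in a pure spatial AR model y = rho*W*y + eps, the regressor W*y is correlated with the error, so OLS of y on W*y stays biased no matter how many replications are averaged. The ring weight matrix and parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def ring_weights(n, k=2):
    """Row-normalized weight matrix: each unit's k neighbours on each side."""
    W = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            W[i, (i - d) % n] = 1.0
            W[i, (i + d) % n] = 1.0
    return W / W.sum(axis=1, keepdims=True)

# Spatial AR: y = rho * W y + eps, generated from its reduced form.
n, rho, reps = 60, 0.5, 500
W = ring_weights(n)
A = np.linalg.inv(np.eye(n) - rho * W)
ests = np.empty(reps)
for r in range(reps):
    eps = rng.standard_normal(n)
    y = A @ eps                                # reduced form y = (I - rho W)^{-1} eps
    Wy = W @ y
    ests[r] = np.dot(Wy, y) / np.dot(Wy, Wy)   # OLS slope of y on Wy
mean_est = ests.mean()                         # stays away from the true rho
```

The average OLS estimate sits well above the true rho = 0.5, illustrating the simultaneity bias the paper formalizes.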