
Showing papers in "Econometrics Journal in 2004"


Journal ArticleDOI
TL;DR: In this article, Monte Carlo methods are used to examine the small sample bias of the MLE in the tobit, truncated regression and Weibull survival models as well as the binary probit and logit and ordered probit discrete choice models.
Abstract: Summary The nonlinear fixed-effects model has two shortcomings, one practical and one methodological. The practical obstacle relates to the difficulty of computing the MLE of the coefficients of non-linear models with possibly thousands of dummy variable coefficients. In fact, in many models of interest to practitioners, computing the MLE of the parameters of the fixed-effects model is feasible even in panels with very large numbers of groups. The result, though not new, appears not to be well known. The more difficult, methodological issue is the incidental parameters problem, which raises questions about the statistical properties of the ML estimator. There is relatively little empirical evidence on the behaviour of the MLE in the presence of fixed effects, and that which has been obtained has focused almost exclusively on binary choice models. In this paper, we use Monte Carlo methods to examine the small sample bias of the MLE in the tobit, truncated regression and Weibull survival models as well as the binary probit and logit and ordered probit discrete choice models. We find that the estimator in the continuous response models behaves quite differently from the familiar and oft-cited results.
Among our findings are: first, a widely accepted result suggesting that the probit estimator is relatively well behaved appears to be incorrect; second, the estimators of the slopes in the tobit model, unlike the probit and logit models studied previously, appear to be largely unaffected by the incidental parameters problem, but a surprising result related to the disturbance variance estimator arises instead; third, lest one jump to the conclusion that the finite sample bias is restricted to discrete choice models, we submit evidence on the truncated regression, which is yet unlike the tobit in that regard: it appears to be biased towards zero; fourth, we find in the Weibull model that the biases in a vector of coefficients need not be in the same direction; fifth, in a result apparently not examined previously, the estimated asymptotic standard errors for the ML estimators appear uniformly to be downward biased when the model contains fixed effects. In sum, the finite sample behaviour of the fixed effects estimator is much more varied than the received literature would suggest.

789 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP), and show that combining forecasts adds value, and can even dominate the best individual device.
Abstract: Summary We consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially mis-specified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis.
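As a toy illustration of why a simple average of differentially mis-specified devices can dominate each one, the sketch below (all numbers invented) pits two forecasters with opposite systematic biases against their equal-weight combination after a location shift:

```python
import numpy as np

rng = np.random.default_rng(0)

# DGP with a location shift: the mean jumps from 0 to 2 at t = 100 (invented numbers).
n = 200
mu = np.where(np.arange(n) < 100, 0.0, 2.0)
y = mu + rng.normal(scale=0.5, size=n)

# Two differentially mis-specified forecasting devices: each tracks the mean
# with an opposite systematic bias (a stand-in for omitted-variable errors).
f1 = mu + 1.0 + rng.normal(scale=0.5, size=n)   # biased upward
f2 = mu - 1.0 + rng.normal(scale=0.5, size=n)   # biased downward
f_avg = 0.5 * (f1 + f2)                          # equal-weight combination

mse = lambda f: np.mean((y - f) ** 2)
print(mse(f1), mse(f2), mse(f_avg))  # the average attains the smallest MSE
```

Because the biases offset in the average, the combination beats both individual devices even though neither model nests the DGP.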

404 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that both univariate and multivariate panel cointegration tests can be substantially oversized in the presence of cross-unit co-integration, and propose a test for crossunit cointegrations that performs well in practice and can be used to decide upon the usefulness of panel methods.
Abstract: Summary Existing panel cointegration tests rule out cross-unit cointegrating relationships, while economic theory and empirical observation argue strongly in favour of their presence. Using an extensive set of simulation experiments, we show that both univariate and multivariate panel cointegration tests can be substantially oversized in the presence of cross-unit cointegration. We also propose a test for cross-unit cointegration that performs well in practice and can be used to decide upon the usefulness of panel methods.

346 citations


Journal ArticleDOI
TL;DR: In this paper, a bias-corrected version of 2SLS based on the Jackknife principle was proposed and the MSE of this method was shown to be approximately the same as that of the Nagar-type estimator.
Abstract: Summary In this paper, we consider parameter estimation in a linear simultaneous equations model. It is well known that two-stage least squares (2SLS) estimators may perform poorly when the instruments are weak. In this case 2SLS tends to suffer from substantial small-sample bias. It is also known that LIML and Nagar-type estimators are less biased than 2SLS but suffer from large small-sample variability. We construct a bias-corrected version of 2SLS based on the Jackknife principle. Using higher-order expansions we show that the MSE of our Jackknife 2SLS estimator is approximately the same as the MSE of the Nagar-type estimator. We also compare the Jackknife 2SLS with an estimator suggested by Fuller (Econometrica 45, 933–54) that significantly decreases the small-sample variability of LIML. Monte Carlo simulations show that even in relatively large samples the MSE of LIML and Nagar can be substantially larger than that of Jackknife 2SLS. The Jackknife 2SLS estimator and Fuller's estimator give the best overall performance. Based on our Monte Carlo experiments we conduct informal statistical tests of the accuracy of approximate bias and MSE formulas. We find that higher-order expansions traditionally used to rank LIML, 2SLS and other IV estimators are unreliable when identification of the model is weak. Overall, our results show that only estimators with well-defined finite sample moments should be used when identification of the model is weak.
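The paper applies the jackknife to 2SLS itself; as a minimal sketch of the underlying principle only (not the authors' estimator), the delete-one bias correction below removes the O(1/n) bias of the plug-in variance estimator, where it happens to do so exactly:

```python
import numpy as np

def jackknife_correct(estimator, x):
    """Delete-one jackknife bias correction: n*theta_hat - (n-1)*mean(theta_(-i))."""
    n = len(x)
    theta_hat = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return n * theta_hat - (n - 1) * loo.mean()

x = np.array([1.2, 0.7, 3.1, 2.4, 0.3, 1.9])      # illustrative data (invented)
plug_in = lambda v: np.mean((v - v.mean()) ** 2)  # biased (divide-by-n) variance estimator

corrected = jackknife_correct(plug_in, x)
print(corrected, np.var(x, ddof=1))  # the correction recovers the unbiased (n-1) estimator
```

For the variance the correction is exact; for 2SLS it removes the leading bias term, which is the idea the paper develops with higher-order expansions.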

339 citations


Journal ArticleDOI
Q. Farooq Akram
TL;DR: In this paper, the authors explore the possibility of a non-linear relationship between oil prices and the Norwegian exchange rate, which leads to an econometrically well specified and interpretable exchange rate model that also has strong predictive properties.
Abstract: Summary Previous empirical studies have suggested an ambiguous relationship between crude oil prices and exchange rates. In contrast to these studies, we explore the possibility of a non-linear relationship between oil prices and the Norwegian exchange rate. We reveal a negative relationship between oil prices and the value of the Norwegian exchange rate that is relatively strong when oil prices are below 14 dollars and are falling. Allowance for this non-linear relationship leads to an econometrically well specified and interpretable exchange rate model that also has strong predictive properties. Notably, this model substantially improves the forecasts compared with those from a similar model but with linear oil price effects and a random walk model. We undertake an extensive evaluation of our findings to demonstrate their robustness.

220 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of estimating marginal likelihoods for mixture and Markov switching models is discussed, where the importance density is constructed in an unsupervised manner from the MCMC draws using a mixture of complete data posteriors.
Abstract: Summary This paper discusses the problem of estimating marginal likelihoods for mixture and Markov switching models. Estimation is based on the method of bridge sampling (Meng and Wong 1996; Statistica Sinica 11, 552–86), where Markov chain Monte Carlo (MCMC) draws from the posterior density are combined with an i.i.d. sample from an importance density. The importance density is constructed in an unsupervised manner from the MCMC draws using a mixture of complete data posteriors. Whereas the importance sampling estimator as well as the reciprocal importance sampling estimator are sensitive to the tail behaviour of the importance density, we demonstrate that the bridge sampling estimator is far more robust. Our case studies range from computing marginal likelihoods for a mixture of multivariate normal distributions and testing for the inhomogeneity of a discrete-time Poisson process to testing for the presence of Markov switching and order selection in the MSAR model.

168 citations


Journal ArticleDOI
TL;DR: In this article, the problem of forecasting in dynamic factor models using Bayesian model averaging is considered and theoretical justifications for averaging across models, as opposed to selecting a single model, are given.
Abstract: This paper considers the problem of forecasting in dynamic factor models using Bayesian model averaging. Theoretical justifications for averaging across models, as opposed to selecting a single model, are given. Practical methods for implementing Bayesian model averaging with factor models are described. These methods involve algorithms which simulate from the space defined by all possible models. We discuss how these simulation algorithms can also be used to select the model with the highest marginal likelihood (or highest value of an information criterion) in an efficient manner. We apply these methods to the problem of forecasting GDP and inflation using quarterly U.S. data on 162 time series. For both GDP and inflation, we find that the models which contain factors do out-forecast an AR(p), but only by a relatively small amount and only at short horizons. We attribute these findings to the presence of structural instability and the fact that lags of the dependent variable seem to contain most of the information relevant for forecasting. Relative to the small forecasting gains provided by including factors, the gains provided by using Bayesian model averaging over forecasting methods based on a single model are appreciable.
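The averaging step itself is simple once marginal likelihoods are in hand: posterior model probabilities are proportional to the marginal likelihoods (under a uniform model prior), and the combined forecast is the weighted sum. A hedged sketch with invented numbers:

```python
import numpy as np

# Illustrative log marginal likelihoods for three candidate models (invented numbers)
# and each model's point forecast of the target variable.
log_ml = np.array([-130.2, -128.9, -131.5])
forecasts = np.array([1.8, 2.1, 1.6])

# Posterior model probabilities under a uniform model prior: p(M_k | y) ∝ p(y | M_k).
# Subtract the maximum before exponentiating for numerical stability.
w = np.exp(log_ml - log_ml.max())
w /= w.sum()

bma_forecast = w @ forecasts  # Bayesian-model-averaged forecast
print(w, bma_forecast)
```

The paper's contribution is the hard part this sketch assumes away: simulating over the model space so these weights can be computed when the number of candidate models is astronomically large.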

148 citations


Journal ArticleDOI
TL;DR: In this article, the authors used copula functions to obtain a flexible bivariate parametric model for non-negative integer-valued data (counts) and recovered the distribution of the difference in the two counts from a specified bivariate count distribution.
Abstract: Summary This paper makes three contributions. Firstly, it uses copula functions to obtain a flexible bivariate parametric model for non-negative integer-valued data (counts). Secondly, it recovers the distribution of the difference in the two counts from a specified bivariate count distribution. Thirdly, the methods are applied to counts that are measured with error. Specifically, we model the determinants of the difference between the self-reported number of doctor visits (measured with error) and true number of doctor visits (also available in the data used).
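A minimal sketch of the copula idea, using a Gaussian copula with Poisson marginals (the specific copula, means and correlation here are illustrative assumptions, not the paper's specification):

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(0)
n, rho = 5000, 0.7
lam1, lam2 = 3.0, 5.0   # Poisson means for the two counts (invented)

# Gaussian copula: draw correlated standard normals, map to uniforms via the
# normal CDF, then to counts via the Poisson quantile function (inverse CDF).
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = norm.cdf(z)
y1 = poisson.ppf(u[:, 0], lam1).astype(int)
y2 = poisson.ppf(u[:, 1], lam2).astype(int)

diff = y1 - y2  # the difference of the two counts, whose distribution the paper recovers
print(y1.mean(), y2.mean(), np.corrcoef(y1, y2)[0, 1])
```

The copula leaves the marginals exactly Poisson while inducing dependence between the two counts, which is the flexibility the paper exploits.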

142 citations


Journal ArticleDOI
TL;DR: In this article, a family of parametric, single-equation cointegration estimators that arise in the context of the autoregressive distributed lag (ADL) models are evaluated by means of Monte Carlo simulations.
Abstract: Summary This paper deals with a family of parametric, single-equation cointegration estimators that arise in the context of the autoregressive distributed lag (ADL) models. We particularly focus on a subclass of the ADL models, those that do not involve lagged values of the dependent variable, referred to as augmented static (AS) models. The general ADL and the restricted AS models give rise to the ADL and dynamic OLS (DOLS) estimators, respectively. The relative performance of these estimators is assessed by means of Monte Carlo simulations in the context of a triangular data generation process (DGP) where the cointegration error and the error that drives the regressor follow a VAR(1) process. The results suggest that ADL fares consistently better than DOLS, both in terms of estimation precision and reliability of statistical inferences. This is due to the fact that DOLS, as opposed to ADL, does not fully correct for the second-order asymptotic bias effects of cointegration, since a ‘truncation bias’ always remains. As a result, the performance of DOLS approaches that of ADL, as the number of lagged values of the first difference of the regressor in the AS model increases. Another set of Monte Carlo simulations suggests that the commonly used information criteria select the correct order of the ADL model quite frequently, thus making the employment of ADL over DOLS quite appealing and feasible. Additional results suggest that ADL re-emerges as the optimal estimator within a wider class of asymptotically efficient estimators including, apart from DOLS, the semiparametric fully modified least squares (FMLS) estimator of Phillips and Hansen (1990, Review of Economic Studies 57, 99–125), the non-linear parametric estimator (PL) of Phillips and Loretan (1991, Review of Economic Studies 58, 407–36) and the system-based maximum likelihood estimator (JOH) of Johansen (1991, Econometrica 59, 1551–80). 
All the aforementioned results are robust to alternative models for the error term, such as vector autoregressions of higher order, or vector moving average processes.

122 citations


Journal ArticleDOI
TL;DR: In this paper, the asymptotic properties of double-stage quantile regression estimators with random regressors are investigated, where the first stage is based on quantile regressions with the same quantile as in the second stage, which ensures robustness of the estimation procedure.
Abstract: We present the asymptotic properties of double-stage quantile regression estimators with random regressors, where the first stage is based on quantile regressions with the same quantile as in the second stage, which ensures robustness of the estimation procedure. We derive invariance properties with respect to the reformulation of the dependent variable. We propose a consistent estimator of the variance-covariance matrix of the new estimator. Finally, we investigate finite sample properties of this estimator by using Monte Carlo simulations.

111 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed statistical tests that can be used to test linearity in cointegrating smooth transition regression models and extended previous similar tests by considering I(1) regressors instead of stationary or mixing regressors.
Abstract: Summary This paper develops statistical tests that can be used to test linearity in cointegrating smooth transition regression models. These tests extend previous similar tests by considering I(1) regressors instead of stationary or mixing regressors and they also allow for more general transition mechanisms than in previous studies. As is typical in cointegrating regressions, the regressors and errors of the model can be serially and contemporaneously correlated. In order to allow for this feature, an endogeneity correction based on a leads-and-lags approach is employed. The proposed tests are very simple to use because ordinary least squares techniques and standard chi-square limiting distributions apply. Simulation experiments indicate that the tests have reasonable finite sample properties. Empirical applications to a U.K. money demand function illustrate the practical usefulness of the tests.

Journal ArticleDOI
TL;DR: In this paper, a computationally simple maximum likelihood procedure for multivariate fractionally integrated time series models is introduced, which allows efficient estimation of the memory parameters of fractional models or efficient testing of the hypothesis that two or more series are integrated of the same possibly fractional order.
Abstract: Summary A computationally simple maximum likelihood procedure for multivariate fractionally integrated time series models is introduced. This allows, e.g., efficient estimation of the memory parameters of fractional models or efficient testing of the hypothesis that two or more series are integrated of the same possibly fractional order. In particular, we show the existence of a local time domain maximum likelihood estimator and its asymptotic normality, and under Gaussianity asymptotic efficiency. The likelihood-based test statistics (Wald, likelihood ratio and Lagrange multiplier) are derived and shown to be asymptotically equivalent and chi-squared distributed under local alternatives, and under Gaussianity locally most powerful. The finite sample properties of the likelihood ratio test are evaluated by Monte Carlo experiments, which show that rejection frequencies are very close to the asymptotic local power for samples as small as n= 100.

Journal ArticleDOI
TL;DR: In this article, the effects of innovational outliers and additive outliers in cointegrated vector autoregressive models are examined and it is analyzed how outliers can be modelled with dummy variables.
Abstract: Summary The effects of innovational outliers and additive outliers in cointegrated vector autoregressive models are examined, and it is analysed how outliers can be modelled with dummy variables. A Monte Carlo simulation illustrates that additive outliers are more distortionary than innovational outliers, and that misspecified dummies may distort inference on the cointegration rank in finite samples. These findings question the common practice in applied cointegration analyses of including unrestricted dummy variables to account for large residuals. Instead, it is suggested to focus on additive outliers, or to test the adequacy of a particular specification of dummies prior to testing for the cointegration rank. The points are illustrated on a UK money demand data set.

Journal ArticleDOI
TL;DR: In this paper, an extension of the multivariate stochastic cycle is proposed to account for phase shifts between individual cyclical components. But the authors do not consider the impact of phase shifts in the business cycle relationships among output, total hours worked and real wage in the United States.
Abstract: Summary Stochastic cycles are often incorporated in structural time series models to identify the cyclical components in macroeconomic time series. This paper proposes an extension of the multivariate stochastic cycle to account for phase shifts between individual cyclical components. The properties of the stochastic cycle allow phase shifts to be modelled in an entirely symmetrical way. As a result, cross correlations between cyclical components can be expressed in terms of phase shifts and phase-adjusted associations. An application demonstrates the role of phase shifts in the business cycle relationships among output, total hours worked and the real wage in the United States.

Journal ArticleDOI
TL;DR: In this article, conditions for stationarity, geometric ergodicity as well as existence of moments are derived using a general multivariate Markov process and it is shown that imposing parametric restrictions on only one of the regimes of the non-linear vector autoregression is sufficient to ensure higher-order moments and linear cointegrating relations which are geometrically ergodic and hence also stationary.
Abstract: Summary Cointegration is studied for a non-linear autoregressive process characterized by discontinuous and regime-dependent equilibrium or error correction. Here the disequilibrium, as measured by the norm of linear ‘stable’ or cointegrating relations, determines the regime and hence the equilibrium correction of the process. Importantly, switching between regimes is thereby allowed to be caused endogenously. The transition function may be either observable as in, e.g. threshold processes, or unobservable when transition probabilities are specified as in, e.g. autoregressive conditional root processes. Conditions for stationarity, geometric ergodicity as well as existence of moments are derived using a general multivariate Markov process. From these conditions it is shown that imposing parametric restrictions on only one of the regimes of the non-linear vector autoregression is sufficient to ensure higher-order moments and linear cointegrating relations which are geometrically ergodic and hence also stationary. Additionally, estimation is considered when the cointegrating relations are known, and asymptotic theory is provided for this case. Based on many existing empirical analyses of, e.g. real exchange rates and interest rate spreads, the proposed dynamics appear to be desirable. This is also reflected in the included analysis of the German term structure, where empirical evidence is found for discontinuous threshold error correction as opposed to classic linear error correction.

Journal ArticleDOI
TL;DR: In this article, the authors make an important distinction between the discrete and continuous time frameworks and test for duration dependence in business and stock market cycles, and compare their results for business cycles with those of Diebold and Rudebusch (1990, 1991).
Abstract: Summary In this paper, we discuss discrete-time tests for duration dependence. Two of our test statistics are new to the econometrics literature, and we make an important distinction between the discrete and continuous time frameworks. We then test for duration dependence in business and stock market cycles, and compare our results for business cycles with those of Diebold and Rudebusch (1990, 1991). Our null hypothesis is that once an expansion or contraction has exceeded some minimum duration, the probability of a turning point is independent of its age—a proposition that dates back to Fisher (1925) and McCulloch (1975).
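Under the null hypothesis, durations beyond the minimum are geometric, so the discrete-time hazard is constant in age. A simple simulation (not the paper's test statistics) makes the point:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3                                   # constant termination probability (invented)
d = rng.geometric(p, size=100_000)        # durations under the null: no duration dependence

# Discrete-time hazard at age k: P(turning point at age k | survived to age k).
for k in range(1, 6):
    at_risk = np.sum(d >= k)
    hazard = np.sum(d == k) / at_risk
    print(k, round(hazard, 3))            # each estimate sits close to p = 0.3
```

Duration dependence would show up as hazards that rise or fall with age; a test compares the estimated age-specific hazards against this flat benchmark.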

Journal ArticleDOI
TL;DR: In this article, the authors extend the standard variance component models to the analysis of multivariate counts, defining the dependence among counts through a set of correlated random coefficients, which is carried out by numerical integration through an EM algorithm without parametric assumptions upon the random coefficients distribution.
Abstract: Summary The analysis of overdispersed counts has been the focus of a wide range of literature, with the general objective of providing reliable parameter estimates in the presence of heterogeneity or dependence among subjects. In this paper we extend the standard variance component models to the analysis of multivariate counts, defining the dependence among counts through a set of correlated random coefficients. Estimation is carried out by numerical integration through an EM algorithm without parametric assumptions upon the random coefficients distribution. The proposed model is computationally parsimonious and, when applied to a real dataset, seems to produce better results than parametric models. A simulation study has been carried out to investigate the behaviour of the proposed models in a series of empirical situations.

Journal ArticleDOI
TL;DR: In this article, a new method for constructing confidence intervals for impulse response functions and half-lives of nearly nonstationary processes is proposed based on inverting the acceptance region of the likelihood ratio statistic under a sequence of null hypotheses.
Abstract: Summary Many economic time series are characterized by high persistence which typically requires nonstandard limit theory for inference. This paper proposes a new method for constructing confidence intervals for impulse response functions and half-lives of nearly nonstationary processes. It is based on inverting the acceptance region of the likelihood ratio statistic under a sequence of null hypotheses of possible values for the impulse response or the half-life. This paper shows the consistency of the restricted estimator of the localizing constant which ensures the validity of the asymptotic inference. The proposed method is used to study the persistence of shocks to real exchange rates.
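The inversion idea can be sketched with an ordinary Wald t-test in an AR(1), collecting every null value of the root that the test fails to reject; the paper instead inverts the likelihood ratio statistic with local-to-unity asymptotics, so the sketch below (with an invented DGP) only illustrates the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulate a persistent AR(1): y_t = rho * y_{t-1} + e_t (rho and n invented).
rho_true, n = 0.95, 200
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = rho_true * y[t - 1] + e[t]

x, z = y[1:], y[:-1]
rho_hat = (z @ x) / (z @ z)                                   # OLS estimate of rho
s = np.sqrt(np.sum((x - rho_hat * z) ** 2) / (n - 2) / (z @ z))  # its standard error

# Invert the acceptance region: keep every rho0 the t-test fails to reject at 5%.
grid = np.arange(0.80, 1.001, 0.001)
accepted = grid[np.abs((rho_hat - grid) / s) <= 1.96]
ci = accepted.min(), accepted.max()

# Map the interval for rho into an interval for the half-life log(0.5)/log(rho).
half_life = lambda r: np.log(0.5) / np.log(r) if r < 1 else np.inf
print(ci, half_life(ci[1]), half_life(ci[0]))
```

Because the half-life is a monotone transform of the root, inverting the test for the root directly delivers an interval for the half-life, which is the construction the paper makes rigorous for nearly nonstationary processes.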

Journal ArticleDOI
TL;DR: A new approach to stochastic frontier models is proposed, viz., a Markov switching structure to accommodate cross-sectional parameter heterogeneity and temporal variation in the parameters and technical inefficiency distributions.
Abstract: Summary In this paper, we propose a new approach to stochastic frontier models, viz., a Markov switching structure to accommodate cross-sectional parameter heterogeneity and temporal variation in the parameters and technical inefficiency distributions. Markov chain Monte Carlo techniques are developed and implemented for Bayesian inference on parameters and technical efficiency. We illustrate the new methods by estimating world production frontiers using international panel data on 59 countries observed for 26 years.

Journal ArticleDOI
TL;DR: This paper shows that the asymptotic distributions of LM-type linearity tests against Smooth Transition Autoregressive (STAR) models, in the presence of a unit root, are non-standard, and that using standard χ2 critical values may lead to incorrect inference, as the tails of the distribution of the tests will be thicker than those of the χ2.
Abstract: Summary This paper shows that the asymptotic distributions of LM-type linearity tests against Smooth Transition Autoregressive (STAR) models, in the presence of a unit root, are non-standard and using standard χ2 critical values may lead to incorrect inference as the tails of the distribution of tests will be thicker than the χ2. This finding also indicates that one needs to test for stationarity prior to applying linearity tests.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the effect of misspecification on the large sample properties of change-point estimators and the validity of tests of the null hypothesis of linearity versus the alternative of a structural break.
Abstract: In this paper we investigate the consequences of misspecification on the large sample properties of change-point estimators and the validity of tests of the null hypothesis of linearity versus the alternative of a structural break. Specifically this paper concentrates on the interaction of structural breaks in the mean and variance of a time series when either of the two is omitted from the estimation and inference procedures. Our analysis considers the case of a break in mean under omitted-regime-dependent heteroscedasticity and that of a break in variance under an omitted mean shift. The large and finite sample properties of the resulting least-squares-based estimators are investigated and the impact of the two types of misspecification on inferences about the presence or absence of a structural break subsequently analysed.

Journal ArticleDOI
TL;DR: In this article, a flexible approximation approach based on Taylor expansion is proposed for a parametrized transformation function (like the Box-Cox model), and a semi-parametric approach (combining a semiparametric linear-index estimator and nonparametric regression) are proposed for the case of an unspecified transformation function.
Abstract: Summary This paper considers estimation of a transformation model in which the transformed dependent variable is subject to classical measurement error. We consider cases in which the transformation function is known and unspecified. In special cases (e.g. log and square-root transformations), least-squares or non-linear least-squares estimators are applicable. A flexible approximation approach (based on Taylor expansion) is proposed for a parametrized transformation function (like the Box-Cox model), and a semi-parametric approach (combining a semi-parametric linear-index estimator and non-parametric regression) is proposed for the case of an unspecified transformation function. The methods are applied to the estimation of earnings equations, using wage data from the Current Population Survey (CPS).
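For reference, the Box-Cox family that the flexible approximation approach parametrizes, with its log limit at λ = 0 (the wage numbers are invented):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transformation: (y**lam - 1) / lam, with the log limit at lam = 0."""
    y = np.asarray(y, dtype=float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

wages = np.array([12.0, 25.0, 40.0])   # illustrative hourly wages (invented)
print(box_cox(wages, 1.0))             # lam = 1: just a linear shift, y - 1
print(box_cox(wages, 0.0))             # lam = 0: the log transformation
print(box_cox(wages, 1e-8))            # small lam approaches the log limit
```

The log and square-root cases the abstract singles out are the λ = 0 and λ = 1/2 members of this family, which is why least-squares or non-linear least-squares estimators apply there.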

Journal ArticleDOI
TL;DR: In this article, the authors extend the well-known Sims, Stock and Watson (SSW) analysis on estimation and testing in vector autoregressive process (VARs) with integer unit roots and deterministic components to a more general set-up where non-stationary fractionally integrated (NFI) processes are considered.
Abstract: Summary In this paper, we extend the well-known Sims, Stock and Watson (SSW) analysis (Sims et al. 1990; Econometrica 56, 113–44) of estimation and testing in vector autoregressions (VARs) with integer unit roots and deterministic components to a more general set-up where non-stationary fractionally integrated (NFI) processes are considered. In particular, we focus on partial VAR models where the conditioning variables are NFI, since this is the only finite-lag VAR model compatible with such processes. We show how SSW's conclusions remain valid. This means that whenever a block of coefficients in the partial VAR can be written as coefficients on zero-mean I(0) regressors in models including a constant term, they will have a joint asymptotic normal distribution. Monte Carlo simulations and an empirical application of our theoretical results are also provided.

Journal ArticleDOI
TL;DR: The results apply to many of the dynamic factor models that have appeared in the literature and to many worthwhile generalizations of those models.
Abstract: Summary We consider identification of a class of dynamic factor models. We show that identification holds under reasonably general conditions. The results apply to many of the dynamic factor models that have appeared in the literature and to many worthwhile generalizations of those models.

Journal ArticleDOI
TL;DR: In this paper, the authors developed the likelihood-ratio test for linear restrictions implied by rational expectations hypotheses in a cointegrated vector autoregressive model for I(1) variables, when the constant or linear term is restricted to the cointegration space.
Abstract: Summary In this note we develop the likelihood-ratio test for some linear restrictions implied by rational expectations hypotheses in a cointegrated vector autoregressive model for I(1) variables, when the constant or linear term is restricted to the cointegration space.

Journal ArticleDOI
TL;DR: The Richard–Zhang accelerated importance sampler is extended for the simulation estimation of dynamic discrete choice panel models; it is demonstrated to be adequate and to improve upon the Geweke–Hajivassiliou–Keane sampler for lengthy time-series panels by Monte Carlo means.
Abstract: Summary With long time series for dynamic discrete choice panel models, the Geweke–Hajivassiliou–Keane sampler has been observed to have large biases and root-mean-square errors. The Richard–Zhang accelerated importance sampler is extended for the simulation estimation of such models. It is demonstrated by Monte Carlo means to be adequate and to improve upon the Geweke–Hajivassiliou–Keane sampler for lengthy time-series panels. Empirical applications of the proposed method to firms' dividend decisions illustrate the practical value of the accelerated importance sampler.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the forecasting performance of long memory and Markov switching models for forecasting the quarterly Consumer Price inflation rate in Portugal in the period 1968-1998, and found that long memory models may capture some in-sample features of the data, but their forecasting performance is relatively poor when shifts occur in the series.
Abstract: Summary Recent research has focused on the links between long memory and structural breaks, stressing the memory properties that may arise in models with parameter changes. In this paper, we question the implications of this result for forecasting. We contribute to this research by comparing the forecasting abilities of long memory and Markov switching models. Two approaches are employed: a Monte Carlo study and an empirical comparison using the quarterly consumer price inflation rate in Portugal over the period 1968–1998. Although long memory models may capture some in-sample features of the data, we find that their forecasting performance is relatively poor when shifts occur in the series, compared with simple linear and Markov switching models.

Journal ArticleDOI
TL;DR: In this article, it is shown that it is possible to characterise the cointegrating structure of a partially non-stationary, cointegrated, I(1) time series via the canonical correlations between the future and, the present and past, of the first differences of that series.
Abstract: Summary In this paper we show that it is possible to characterise the cointegrating structure of a partially non-stationary, cointegrated, I(1) time series via the canonical correlations between the future and the present and past of the first differences of that series. This leads to a consideration of different model-free non-parametric methodologies for identifying the cointegrating rank. An adaptation of existing techniques using a novel method of spectral estimation gives rise both to a new applied tool and to an alternative analytical framework that unifies current hypothesis-testing approaches. An investigation of the eigenstructure of a multivariate version of von Neumann's ratio also leads to the development of an entirely new model-free cointegrating rank selection criterion. All the procedures considered are easily implemented, and the practical relevance of the theoretical results obtained, which are founded on asymptotic arguments, is demonstrated by means of a simulation study.

Journal ArticleDOI
TL;DR: This article examined the effect of X-11 seasonal adjustment on periodic autoregressive processes, using both analytic techniques and simulation, and concluded that adjustment reduces (but does not eliminate) periodicity in the coefficients of a stationary PAR(1) process.
Abstract: Summary This paper examines the effect of X-11 seasonal adjustment on periodic autoregressive processes, using both analytic techniques and simulation. Analytical results show that adjustment reduces (but does not eliminate) periodicity in the coefficients of a stationary PAR(1) process, and it generally moves the coefficients towards unity. A nonstationary periodically integrated process is converted into a process with a conventional unit root and induced periodic heteroscedasticity. Simulations confirm that, for finite samples, evidence of periodicity in the coefficients and in residual heteroscedasticity may remain after adjustment, but periodic variation in long-run coefficients is annihilated. The overall conclusion is that adjustment alters, but does not destroy, periodic properties. Keywords: Seasonal adjustment, X-11, Periodic processes, Unit root tests.