
Showing papers in "Econometric Theory in 1997"


Journal ArticleDOI
TL;DR: In this article, the authors proposed estimating multiple break points one by one, as opposed to all simultaneously, and showed that the estimated break points are T-consistent, the same rate as simultaneous estimation.
Abstract: Sequential (one-by-one) rather than simultaneous estimation of multiple breaks is investigated in this paper. The advantage of this method lies in its computational savings and its robustness to misspecification in the number of breaks. The number of least-squares regressions required to compute all of the break points is of order T, the sample size. Each estimated break point is shown to be consistent for one of the true ones despite underspecification of the number of breaks. More interestingly and somewhat surprisingly, the estimated break points are shown to be T-consistent, the same rate as the simultaneous estimation. Limiting distributions are also derived. Unlike simultaneous estimation, the limiting distributions are generally not symmetric and are influenced by regression parameters of all regimes. A simple method is introduced to obtain break point estimators that have the same limiting distributions as those obtained via simultaneous estimation. Finally, a procedure is proposed to consistently estimate the number of breaks. Multiple breaks may exist in the trend function of many economic time series, as suggested by the studies of Burdekin and Siklos (1995), Cooper (1995), Garcia and Perron (1996), Lumsdaine and Papell (1995), and others. This paper presents some theory and methods for making inferences in the presence of multiple breaks with unknown break dates. The focus is the sequential method, which identifies break points one by one as opposed to all simultaneously. A number of issues arise in the presence of multiple breaks. These include the determination of the number of breaks, estimation of the break points given the number, and statistical analysis of the resulting estimators. These issues were examined by Bai and Perron (1994) using a different approach to estimation. The major results of Bai and Perron (1994) assume simultaneous estimation, which estimates all of the breaks at the same time.
In this paper, we study an alternative method, which sequentially identifies the break points. The procedure estimates one break point even if multiple breaks exist. The number of least-squares regressions required to compute all of the break points is of order T, the sample size.
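The sequential idea can be illustrated on a toy mean-shift model: locate one break by least squares, then search the resulting segments for further breaks. This is a minimal sketch under simplifying assumptions, not the paper's estimator; `best_break` and `sequential_breaks` are hypothetical names, and a simple shift-in-mean regression stands in for the general setup.

```python
import numpy as np

def best_break(y, lo, hi, min_seg=5):
    """Single break point in y[lo:hi] minimizing the total SSR of a
    mean-shift model (one mean per segment)."""
    best_k, best_ssr = None, np.inf
    for k in range(lo + min_seg, hi - min_seg):
        left, right = y[lo:k], y[k:hi]
        ssr = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

def sequential_breaks(y, m, min_seg=5):
    """Estimate m break points one at a time: find the single best break,
    then repeatedly split whichever existing segment yields the largest
    SSR reduction. Requires O(T) regressions per break, not O(T^m)."""
    bounds = [0, len(y)]
    breaks = []
    for _ in range(m):
        candidates = []
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            if hi - lo >= 2 * min_seg:
                k = best_break(y, lo, hi, min_seg)
                if k is not None:
                    seg = y[lo:hi]
                    ssr0 = ((seg - seg.mean()) ** 2).sum()
                    left, right = y[lo:k], y[k:hi]
                    ssr1 = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
                    candidates.append((ssr0 - ssr1, k))
        if not candidates:
            break
        _, k = max(candidates)
        breaks.append(k)
        bounds = sorted(bounds + [k])
    return sorted(breaks)

# Mean-shift series with true breaks at t = 60 and t = 140.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(3, 1, 80), rng.normal(-2, 1, 60)])
print(sequential_breaks(y, 2))
```

Note that the first pass fits a one-break model even though two breaks exist; consistent with the abstract, each sequentially estimated break still lands near one of the true dates.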

786 citations



Journal ArticleDOI
TL;DR: In this paper, two new classes of probability distributions are introduced that radically simplify the process of developing variance components structures for extreme-value and logistic distributions, and they are shown to be computationally simpler and far more tractable than alternatives such as estimation by simulated moments.
Abstract: Two new classes of probability distributions are introduced that radically simplify the process of developing variance components structures for extreme-value and logistic distributions. When one of these new variates is added to an extreme-value (logistic) variate, the resulting distribution is also extreme value (logistic). Thus, quite complicated variance structures can be generated by recursively adding components having this new distribution, and the result will retain a marginal extreme-value (logistic) distribution. It is demonstrated that the computational simplicity of extreme-value error structures extends to the introduction of heterogeneity in duration, selection bias, limited-dependent- and qualitative-variable models. The usefulness of these new classes of distributions is illustrated with the examples of nested logit, multivariate risk, and competing risk models, where important generalizations to conventional stochastic structures are developed. The new models are shown to be computationally simpler and far more tractable than alternatives such as estimation by simulated moments. These results will be of considerable use to applied microeconomic researchers who have been hampered by computational difficulties in constructing more sophisticated estimators.

431 citations


Journal ArticleDOI
TL;DR: A key theme is that the conditionally optimal forecast is biased under asymmetric loss and that the conditionally optimal amount of bias is time varying in general and depends on higher order conditional moments.
Abstract: Prediction problems involving asymmetric loss functions arise routinely in many fields, yet the theory of optimal prediction under asymmetric loss is not well developed. We study the optimal prediction problem under general loss structures and characterize the optimal predictor. We compute the optimal predictor analytically in two leading tractable cases and show how to compute it numerically in less tractable cases. A key theme is that the conditionally optimal forecast is biased under asymmetric loss and that the conditionally optimal amount of bias is time varying in general and depends on higher order conditional moments. Thus, for example, volatility dynamics (e.g., GARCH effects) are relevant for optimal point prediction under asymmetric loss. More generally, even for models with linear conditional-mean structure, the optimal point predictor is in general nonlinear under asymmetric loss, which provides a link with the broader nonlinear time series literature.
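One standard tractable case illustrates the optimal-bias theme: under lin-lin (check) loss, the optimal point forecast is a conditional quantile rather than the conditional mean, so it is biased whenever the loss is asymmetric. A hedged numerical sketch (the lin-lin example is a textbook case, not necessarily one of the paper's two leading cases; `linlin_loss` is a hypothetical name):

```python
import numpy as np

def linlin_loss(forecast, y, a, b):
    """Lin-lin (check) loss: a penalizes underprediction (y above the
    forecast), b penalizes overprediction."""
    e = y - forecast
    return np.where(e > 0, a * e, -b * e).mean()

# Draws standing in for the conditional predictive distribution of y.
rng = np.random.default_rng(1)
draws = rng.normal(0.0, 1.0, 200_000)

# Under lin-lin loss the optimal point forecast is the a/(a+b) conditional
# quantile, not the conditional mean; with a > b (underprediction costlier)
# the optimal forecast is biased upward.
a, b = 2.0, 1.0
opt = np.quantile(draws, a / (a + b))
mean = draws.mean()
print(opt, mean)
print(linlin_loss(opt, draws, a, b), linlin_loss(mean, draws, a, b))
```

The first-order condition for minimizing expected lin-lin loss, -a·P(y > f) + b·P(y < f) = 0, gives F(f) = a/(a+b), which is where the quantile comes from.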

346 citations


Journal ArticleDOI
TL;DR: In this paper, the mean and exponential statistics of Andrews and Ploberger (1994, Econometrica 62, 1383–1414) and the supremum statistic of Andrews (1993, Econometrica 61, 821–856) were extended to allow trending and unit root regressors.
Abstract: In this paper, test statistics for detecting a break at an unknown date in the trend function of a dynamic univariate time series are proposed. The tests are based on the mean and exponential statistics of Andrews and Ploberger (1994, Econometrica 62, 1383–1414) and the supremum statistic of Andrews (1993, Econometrica 61, 821–856). Their results are extended to allow trending and unit root regressors. Asymptotic results are derived for both I(0) and I(1) errors. When the errors are highly persistent and it is not known which asymptotic theory (I(0) or I(1)) provides a better approximation, a conservative approach based on nearly integrated asymptotics is provided. Power of the mean statistic is shown to be nonmonotonic with respect to the break magnitude and is dominated by the exponential and supremum statistics. Versions of the tests applicable to first differences of the data are also proposed. The tests are applied to some macroeconomic time series, and the null hypothesis of a stable trend function is rejected in many cases.
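The mechanics of a supremum-type scan over candidate break dates can be sketched as follows. This is an illustrative fixed-regressor version with a deterministic trend, not the paper's trending/unit-root asymptotic theory; `sup_f_trend_break` is a hypothetical name, and the statistic's null distribution is nonstandard, so ordinary F critical values would not apply.

```python
import numpy as np

def sup_f_trend_break(y, trim=0.15):
    """Sup-F scan for a one-time break in a linear trend: fit
    y_t = mu + beta*t under the null, then for each candidate date k allow
    the intercept and slope to shift, and take the maximum F over k."""
    T = len(y)
    t = np.arange(T, dtype=float)
    X0 = np.column_stack([np.ones(T), t])
    ssr0 = np.sum((y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]) ** 2)
    stats = []
    for k in range(int(trim * T), int((1 - trim) * T)):
        d = (t >= k).astype(float)
        X1 = np.column_stack([X0, d, d * (t - k)])   # level shift and slope shift
        ssr1 = np.sum((y - X1 @ np.linalg.lstsq(X1, y, rcond=None)[0]) ** 2)
        stats.append(((ssr0 - ssr1) / 2) / (ssr1 / (T - X1.shape[1])))
    return max(stats)

rng = np.random.default_rng(7)
t = np.arange(200, dtype=float)
stable = 0.5 * t + rng.normal(0, 1, 200)
broken = stable + 0.8 * np.clip(t - 120, 0.0, None)   # slope change at t = 120
print(sup_f_trend_break(broken), sup_f_trend_break(stable))
```

The trimming parameter keeps candidate breaks away from the sample edges, where the shifted regressors are nearly collinear with the full-sample trend.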

221 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric identification and estimation procedure for an Ito diffusion process based on discrete sampling observations is proposed, which avoids any functional form specification for either the drift function or the diffusion function.
Abstract: In this paper, we propose a nonparametric identification and estimation procedure for an Ito diffusion process based on discrete sampling observations. The nonparametric kernel estimator for the diffusion function developed in this paper deals with general Ito diffusion processes and avoids any functional form specification for either the drift function or the diffusion function. It is shown that under certain regularity conditions the nonparametric diffusion function estimator is pointwise consistent and asymptotically follows a normal mixture distribution. Under stronger conditions, a consistent nonparametric estimator of the drift function is also derived based on the diffusion function estimator and the marginal density of the process. An application of the nonparametric technique to a short-term interest rate model involving Canadian daily 3-month treasury bill rates is also undertaken. The estimation results provide evidence for rejecting the common parametric or semiparametric specifications for both the drift and diffusion functions.
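The kernel idea can be sketched with a Nadaraya-Watson estimate of the squared diffusion function from discretely sampled increments, in the spirit of (but not identical to) the paper's construction; `diffusion_sq_nw` is a hypothetical name and the Ornstein-Uhlenbeck test case is an assumption for illustration.

```python
import numpy as np

def diffusion_sq_nw(x_grid, X, dt, h):
    """Nadaraya-Watson estimate of the squared diffusion function:
    sigma^2(x) ~ E[(X_{t+dt} - X_t)^2 | X_t = x] / dt, smoothing squared
    increments with a Gaussian kernel of bandwidth h."""
    dX2 = np.diff(X) ** 2
    lhs = X[:-1]
    out = []
    for x in x_grid:
        w = np.exp(-0.5 * ((lhs - x) / h) ** 2)
        out.append((w @ dX2) / (w.sum() * dt))
    return np.array(out)

# Ornstein-Uhlenbeck path dX = -X dt + sigma dW: the true diffusion
# function is the constant sigma^2, which the estimator should recover.
rng = np.random.default_rng(2)
dt, sigma, n = 0.01, 0.5, 100_000
shocks = rng.normal(0.0, np.sqrt(dt), n - 1)
X = np.empty(n)
X[0] = 0.0
for i in range(n - 1):
    X[i + 1] = X[i] - X[i] * dt + sigma * shocks[i]
est = diffusion_sq_nw(np.array([-0.2, 0.0, 0.2]), X, dt, h=0.1)
print(est)  # should sit close to sigma**2 = 0.25 at each grid point
```

The drift contributes only O(dt) to the squared increment, which is why the conditional second moment of increments, scaled by 1/dt, isolates the diffusion function.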

205 citations


Journal ArticleDOI
TL;DR: The authors considered estimation of multiplicative, unobserved component panel data models without imposing a strict exogeneity assumption on the conditioning variables, and proposed a method of moments estimators for nonnegative explained variables, including count variables, continuously distributed nonnegative outcomes and even binary variables.
Abstract: This paper considers estimation of multiplicative, unobserved components panel data models without imposing a strict exogeneity assumption on the conditioning variables. The method of moments estimators proposed have significant robustness properties. They require only a conditional mean assumption and apply to models with lagged dependent variables and to finite distributed lag models with arbitrary feedback from the explained to future values of the explanatory variables. The model is particularly suited to nonnegative explained variables, including count variables, continuously distributed nonnegative outcomes, and even binary variables. The general model can also be applied to certain nonlinear Euler equations.

201 citations


Journal ArticleDOI
TL;DR: In this paper, projections are proposed as a means of identifying and estimating the components (endogenous and exogenous) of an additive nonlinear ARX model, and the estimates are nonparametric in nature and involve averaging of kernel-type estimates.
Abstract: We propose projections as a means of identifying and estimating the components (endogenous and exogenous) of an additive nonlinear ARX model. The estimates are nonparametric in nature and involve averaging of kernel-type estimates. Such estimates have recently been treated informally in a univariate time series situation. Here we extend the scope to nonlinear ARX models and present a rigorous theory, including the derivation of asymptotic normality for the projection estimates under a precise set of regularity conditions.

116 citations


Journal ArticleDOI
TL;DR: In this paper, the Cox-Ingersoll-Ross diffusion model for the term structure of interest rates is considered, and estimation of its parameters from observations at equidistant time points is studied.
Abstract: The Cox-Ingersoll-Ross model is a diffusion process suitable for modeling the term structure of interest rates. In this paper, we consider estimation of the parameters of this process from observations at equidistant time points. We study two estimators based on conditional least squares as well as a one-step improvement of these, two weighted conditional least-squares estimators, and the maximum likelihood estimator. Asymptotic properties of the various estimators are discussed, and we also compare their performance in a simulation study.
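Because the CIR process has a conditional mean that is linear in the current state, the simplest conditional least-squares estimator reduces to an OLS regression of X_{t+Δ} on X_t. A minimal sketch on Euler-simulated data (`cir_cls` is a hypothetical name; this is the plain CLS idea, not the paper's weighted or one-step-improved variants):

```python
import numpy as np

def cir_cls(X, dt):
    """Conditional least squares for the CIR model
    dX = kappa*(theta - X) dt + sigma*sqrt(X) dW. The exact conditional
    mean E[X_{t+dt} | X_t] = X_t*b + theta*(1 - b), with b = exp(-kappa*dt),
    is linear in X_t, so CLS is an OLS regression of X_{t+dt} on X_t."""
    A = np.column_stack([np.ones(len(X) - 1), X[:-1]])
    a, b = np.linalg.lstsq(A, X[1:], rcond=None)[0]
    return -np.log(b) / dt, a / (1.0 - b)   # (kappa_hat, theta_hat)

# Euler-simulated CIR path (reflected at zero, which here essentially
# never binds since 2*kappa*theta far exceeds sigma**2).
rng = np.random.default_rng(3)
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.1, 0.01, 200_000
shocks = rng.normal(0.0, np.sqrt(dt), n - 1)
X = np.empty(n)
X[0] = theta
for i in range(n - 1):
    X[i + 1] = abs(X[i] + kappa * (theta - X[i]) * dt + sigma * np.sqrt(X[i]) * shocks[i])
k_hat, th_hat = cir_cls(X, dt)
print(k_hat, th_hat)  # near kappa = 2.0 and theta = 0.05
```

The diffusion parameter sigma is not identified by the conditional mean alone; it would come from the conditional variance, which is where the weighted CLS and maximum likelihood estimators compared in the paper differ.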

111 citations


Journal ArticleDOI
TL;DR: In this article, central limit theorems are presented for triangular arrays of mixingale and near-epoch-dependent random variables, improving existing results for the near-epoch-dependent case in various respects.
Abstract: This paper presents central limit theorems for triangular arrays of mixingale and near-epoch-dependent random variables. The central limit theorem for near-epoch-dependent random variables improves results from the literature in various respects. The approach is to define a suitable Bernstein blocking scheme and apply a martingale difference central limit theorem, which in combination with weak dependence conditions renders the result. The most important application of this central limit theorem is the improvement of the conditions that have to be imposed for asymptotic normality of minimization estimators.

109 citations


Journal ArticleDOI
TL;DR: In this article, the problem of inference on the moving average impact matrix and on its row and column spaces in cointegrated I(1) VAR processes is addressed, and the choice of bases (i.e., the identification) of these spaces, which is of interest in the definition of the common trend structure of the system, is discussed.
Abstract: This paper addresses the problem of inference on the moving average impact matrix and on its row and column spaces in cointegrated I(1) VAR processes. The choice of bases (i.e., the identification) of these spaces, which is of interest in the definition of the common trend structure of the system, is discussed. Maximum likelihood estimators and their asymptotic distributions are derived, making use of a relation between properly normalized bases of orthogonal spaces, a result that may be of separate interest. Finally, Wald-type tests are given, and their use in connection with existing likelihood ratio tests is discussed.

Journal ArticleDOI
TL;DR: In this article, the authors considered the solution of multivariate linear rational expectations models using the quadratic determinantal equation (QDE) method and showed that all possible classes of solutions (namely, the unique stable solution, multiple stable solutions and the case where no stable solution exists) can be characterized using the QDE method.
Abstract: This paper considers the solution of multivariate linear rational expectations models. It is described how all possible classes of solutions (namely, the unique stable solution, multiple stable solutions, and the case where no stable solution exists) of such models can be characterized using the quadratic determinantal equation (QDE) method of Binder and Pesaran (1995, in M.H. Pesaran & M. Wickens [eds.], Handbook of Applied Econometrics: Macroeconomics, pp. 139–187. Oxford: Basil Blackwell). To this end, some further theoretical results regarding the QDE method expanding on previous work are presented. In addition, numerical techniques are discussed allowing reasonably fast determination of the dimension of the solution set of the model under consideration using the QDE method. The paper also proposes a new, fully recursive solution method for models involving lagged dependent variables and current and future expectations. This new method is entirely straightforward to implement, fast, and applicable also to high-dimensional problems possibly involving coefficient matrices with a high degree of singularity.

Journal ArticleDOI
TL;DR: In this paper, the analysis of cointegrated time series using principal component methods is considered. But the authors do not consider the normalisation imposed by the triangular error correction model, nor the specification of a finite order vector autoregression.
Abstract: This paper considers the analysis of cointegrated time series using principal components methods. These methods have the advantage of neither requiring the normalisation imposed by the triangular error correction model, nor the specification of a finite order vector autoregression. An asymptotically efficient estimator of the cointegrating vectors is given, along with tests for cointegration and tests of certain linear restrictions on the cointegrating vectors. An illustrative application is provided.


Journal ArticleDOI
TL;DR: In this article, a simple semiparametric estimator of the moments of the density function of the latent variable's unobserved random component is proposed. The results can be used as starting values for parametric estimators, for specification testing including tests of latent error skewness and kurtosis, and to estimate coefficients of discrete explanatory variables in the model.
Abstract: Latent variable discrete choice model estimation and interpretation depend on the density function of the latent variable's unobserved random component. This paper provides a simple semiparametric estimator of the moments of this density. The results can be used as starting values for parametric estimators, to estimate the appropriate location and scaling for semiparametric estimators, for specification testing including tests of latent error skewness and kurtosis, and to estimate coefficients of discrete explanatory variables in the model.

Journal ArticleDOI
TL;DR: In this paper, pseudomaximum likelihood estimators for vector autoregressive models are used to determine the cointegration rank of a multivariate time series process using pseudolikelihood ratio tests.
Abstract: This paper considers pseudomaximum likelihood estimators for vector autoregressive models. These estimators are used to determine the cointegration rank of a multivariate time series process using pseudolikelihood ratio tests. The asymptotic distributions of these tests depend on nuisance parameters if the pseudolikelihood is non-Gaussian. This even holds if the likelihood is correctly specified. The nuisance parameters have a natural interpretation and can be consistently estimated. Some simulation results illustrate the usefulness of the tests: non-Gaussian pseudolikelihood ratio tests generally have a higher power than the Gaussian test of Johansen if the innovations demonstrate leptokurtic behavior.

Journal ArticleDOI
TL;DR: This paper developed an algorithm for the exact Gaussian estimation of a mixed-order continuous-time dynamic model, with unobservable stochastic trends, from a sample of mixed stock and flow data.
Abstract: This paper develops an algorithm for the exact Gaussian estimation of a mixed-order continuous-time dynamic model, with unobservable stochastic trends, from a sample of mixed stock and flow data. Its application yields exact maximum likelihood estimates when the innovations are Brownian motion and either the model is closed or the exogenous variables are polynomials in time of degree not exceeding two, and it can be expected to yield very good estimates under much more general circumstances. The paper includes detailed formulae for the implementation of the algorithm, when the model comprises a mixture of first- and second-order differential equations and both the endogenous and exogenous variables are a mixture of stocks and flows.

Journal ArticleDOI
Oliver Linton1
TL;DR: This article developed order T−1 asymptotic expansions for the quasi-maximum likelihood estimator (QMLE) and a two-step approximate QMLE in the GARCH(1,1) model.
Abstract: We develop order T−1 asymptotic expansions for the quasi-maximum likelihood estimator (QMLE) and a two-step approximate QMLE in the GARCH(1,1) model. We calculate the approximate mean and skewness and, hence, the Edgeworth-B distribution function. We suggest several methods of bias reduction based on these approximations.

Journal ArticleDOI
TL;DR: In this article, the empirical cumulant generating function is used to estimate the parameters of a distribution from data that are independent and identically distributed (i.i.d.).
Abstract: This paper deals with the use of the empirical cumulant generating function to consistently estimate the parameters of a distribution from data that are independent and identically distributed (i.i.d.). The technique is particularly suited to situations where the density function is unknown or unbounded in parameter space. We prove asymptotic equivalence of our technique to that of the empirical characteristic function and outline a six-step procedure for its implementation. Extensions of the approach to non-i.i.d. situations are considered along with a discussion of suitable applications and a worked example.
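The idea can be illustrated for a normal sample, whose cumulant generating function K(t) = μt + σ²t²/2 is linear in (μ, σ²), so matching the empirical CGF on a grid of t values reduces to least squares. A minimal sketch, not the paper's six-step procedure (`ecgf_fit_normal` is a hypothetical name, and a general distribution would require nonlinear optimization):

```python
import numpy as np

def ecgf_fit_normal(x, t_grid):
    """Estimate (mu, sigma^2) of a normal sample by matching the empirical
    cumulant generating function K_hat(t) = log mean(exp(t*x)) to the
    normal CGF K(t) = mu*t + sigma^2 * t^2 / 2 on a grid of t values.
    Because K(t) is linear in (mu, sigma^2), least squares suffices."""
    K_hat = np.log(np.mean(np.exp(np.outer(t_grid, x)), axis=1))
    A = np.column_stack([t_grid, 0.5 * t_grid ** 2])
    mu, var = np.linalg.lstsq(A, K_hat, rcond=None)[0]
    return mu, var

rng = np.random.default_rng(4)
x = rng.normal(1.0, 2.0, 50_000)         # true mu = 1, sigma^2 = 4
t_grid = np.linspace(-0.5, 0.5, 21)
mu_hat, var_hat = ecgf_fit_normal(x, t_grid)
print(mu_hat, var_hat)
```

Keeping the grid of t values near zero matters in practice: for large |t| the empirical CGF is dominated by a few extreme observations and becomes very noisy.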

Journal ArticleDOI
TL;DR: In this article, the authors considered vector-valued nonstationary time series models, in particular, autoregressive models, whose nonstationarity is driven by a few non-stationary (induced by unit roots) trends, in such a way that some of the linear combinations of the components of the vector model will be stationary.
Abstract: This paper considers vector-valued nonstationary time series models, in particular, autoregressive models, whose nonstationarity is driven by a few nonstationary (induced by “unit roots”) trends, in such a way that some of the linear combinations of the components of the vector model will be stationary. Models of this form are called cointegrated models. These stationary linear combinations are called cointegrating relationships. Asymptotic inference problems associated with the parameters of the cointegrating relationships when the remaining parameters are treated as unknown nuisance parameters are considered. Similarly, inference problems associated with the unit roots are considered. All possible unit roots, including complex ones, together with their possible multiplicities, are allowed. The framework under which the asymptotic inference problems are dealt with is the one described in LeCam (1986, Asymptotic Methods in Statistical Decision Theory) and LeCam and Yang (1990, Asymptotics in Statistics: Some Basic Concepts), though it will be seen that the usual normal or mixed normal situations do not apply in the present context.

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo study is conducted to evaluate the potential effects of kernel choice, data-based bandwidth selection, and prewhitening on the power property of the PP test in finite samples.
Abstract: This study examines several important practical issues concerning nonparametric estimation of the innovation variance for the Phillips-Perron (PP) test. A Monte Carlo study is conducted to evaluate the potential effects of kernel choice, data-based bandwidth selection, and prewhitening on the power property of the PP test in finite samples. The Monte Carlo results are instructive. Although the kernel choice is found to make little difference, data-based bandwidth selection and prewhitening can lead to power gains for the PP test. The combined use of both the Andrews (1991, Econometrica 59, 817–858) data-based bandwidth selection procedure and the Andrews and Monahan (1992, Econometrica 60, 953–966) prewhitening procedure performs particularly well. With the combined use of these two procedures, the PP test displays relatively good power in comparison with the augmented Dickey-Fuller test.
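The objects being compared — kernel weighting and AR(1) prewhitening of the long-run (innovation) variance estimate — can be sketched as follows. This is a generic Bartlett-kernel estimator with a fixed bandwidth, an illustrative stand-in rather than the Andrews (1991) data-based selection rule; `longrun_var` is a hypothetical name.

```python
import numpy as np

def longrun_var(u, bandwidth, prewhiten=True):
    """Bartlett-kernel long-run variance estimator of the kind used in the
    PP correction. With prewhitening, u is first filtered by a fitted
    AR(1); the residual long-run variance is then recolored by
    1/(1 - rho_hat)^2, in the spirit of Andrews and Monahan (1992)."""
    rho = 0.0
    if prewhiten:
        rho = (u[:-1] @ u[1:]) / (u[:-1] @ u[:-1])
        u = u[1:] - rho * u[:-1]
    T = len(u)
    lrv = (u @ u) / T
    for j in range(1, int(bandwidth) + 1):
        w = 1.0 - j / (bandwidth + 1.0)          # Bartlett weights
        lrv += 2.0 * w * (u[:-j] @ u[j:]) / T
    return lrv / (1.0 - rho) ** 2

# AR(1) errors with rho = 0.7: true long-run variance is 1/(1 - 0.7)^2.
rng = np.random.default_rng(5)
T, rho = 100_000, 0.7
e = rng.normal(0.0, 1.0, T)
u = np.empty(T)
u[0] = e[0]
for t in range(1, T):
    u[t] = rho * u[t - 1] + e[t]
print(longrun_var(u, bandwidth=10))              # near 1/(1 - 0.7)**2
print(longrun_var(u, bandwidth=10, prewhiten=False))
```

With persistent errors and a modest bandwidth, the unwhitened estimate is biased downward by truncation, while the prewhitened one lands much closer to the true long-run variance — the flavor of gain the Monte Carlo study documents for the PP test.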

Journal ArticleDOI
TL;DR: In this article, tests for the null of cointegration in the presence of I(1) and I(2) variables are introduced, using residuals from Park's (1992, Econometrica 60, 119–143) canonical cointegrating regression (CCR) and the leads-and-lags regression of Saikkonen (1991) and of Stock and Watson (1993).
Abstract: This paper introduces tests for the null of cointegration in the presence of I(1) and I(2) variables. These tests use residuals from Park's (1992, Econometrica 60, 119–143) canonical cointegrating regression (CCR) and the leads-and-lags regression of Saikkonen (1991, Econometric Theory 7, 1–21) and Stock and Watson (1993, Econometrica 61, 783–820). Asymptotic theory for CCR in the presence of I(1) and I(2) variables is also introduced. The distributions of the cointegration tests are nonstandard, and hence their percentiles are tabulated by using simulation. Monte Carlo simulation results to study the finite sample performance of the CCR estimates and the cointegration tests are also reported.


Journal ArticleDOI
Hyungtaik Ahn1
TL;DR: In this article, a theory of estimating parameters of a generated regressor model in which some explanatory variables in the equation of interest are the unknown conditional means of certain observable variables given other observable regressors was developed.
Abstract: This paper develops a theory of estimating parameters of a generated regressor model in which some explanatory variables in the equation of interest are the unknown conditional means of certain observable variables given other observable regressors. The paper imposes a weak nonparametric restriction on the form of the conditional means and maintains a single-index assumption on the distribution of the dependent variable in the equation of interest. The estimation method follows a two-step approach: The first step estimates the conditional means in the index nonparametrically, and the second step estimates the parameters by an analytically convenient weighted average derivative method. It is established that the two-step estimator is root-n-consistent and asymptotically normal. The asymptotic variance exceeds that of the one-step hypothetical estimator, which would be obtainable if the first-step regression were known.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the properties of various approximation methods for solving stochastic dynamic programs in structural estimation problems and show that approximating the expected value of the maximum of available choices by the maximum of expected values frequently has poor properties.
Abstract: This paper examines the properties of various approximation methods for solving stochastic dynamic programs in structural estimation problems. The problem addressed is evaluating the expected value of the maximum of available choices. The paper shows that approximating this by the maximum of expected values frequently has poor properties. It also shows that choosing a convenient distributional assumption for the errors and then solving exactly conditional on that assumption leads to small approximation errors even if the distribution is misspecified.
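The central point — that replacing E[max] by the max of expected values is a poor approximation — is easy to demonstrate with type-I extreme-value errors, for which E[max] has the closed logsumexp form familiar from logit models. A small Monte Carlo sketch (`emax_gap` is a hypothetical name):

```python
import numpy as np

def emax_gap(v, scale=1.0, n_draws=200_000, seed=6):
    """Monte Carlo E[max_j (v_j + e_j)] for iid type-I extreme-value
    errors e_j, versus the naive max_j E[v_j + e_j]. The closed form is
    E[max] = gamma*scale + scale*log(sum(exp(v/scale))), so with J tied
    values the naive rule understates E[max] by scale*log(J)."""
    v = np.asarray(v, dtype=float)
    rng = np.random.default_rng(seed)
    e = rng.gumbel(0.0, scale, size=(n_draws, len(v)))
    emax = (v + e).max(axis=1).mean()
    naive = v.max() + np.euler_gamma * scale   # E[v_j + e_j] = v_j + gamma*scale
    return emax, naive

emax, naive = emax_gap([1.0, 1.0, 1.0])
print(emax - naive)   # approximately log(3): the naive rule understates E[max]
```

The gap does not shrink with more data — it is a structural error of the max-of-expectations approximation, which is why solving exactly under a convenient error distribution fares so much better.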

Journal ArticleDOI
TL;DR: In this article, a method for constructing an explicit set of minimal sufficient statistics, based on partial scores and likelihood ratios, is given, and the difference in dimension between parameter and statistic and the curvature of these models have important consequences for inference.
Abstract: Curved exponential models have the property that the dimension of the minimal sufficient statistic is larger than the number of parameters in the model. Many econometric models share this feature. The first part of the paper shows that, in fact, econometric models with this property are necessarily curved exponential. A method for constructing an explicit set of minimal sufficient statistics, based on partial scores and likelihood ratios, is given. The difference in dimension between parameter and statistic and the curvature of these models have important consequences for inference. It is not the purpose of this paper to contribute significantly to the theory of curved exponential models, other than to show that the theory applies to many econometric models and to highlight some multivariate aspects. Using the methods developed in the first part, we show that demand systems, the single structural equation model, the seemingly unrelated regressions, and autoregressive models are all curved exponential models.

Journal ArticleDOI
TL;DR: In this paper, general formulae are provided for the effect of nonnormality on the density and distribution functions of a statistic characterized as a ratio of polynomials of arbitrary degree in a random vector.
Abstract: A typical statistic encountered can be characterized as a ratio of polynomials of arbitrary degree in a random vector. This vector may possess any admissible cumulant structure. We provide in this paper general formulae for the effect of nonnormality on the density and distribution functions of this ratio. The results appear in terms of generalized cumulants, a theory developed by McCullagh (1984, Biometrika 71, 461-476). With the aid of suitable notation, the expressions are applied to the distributions of tests for heteroskedasticity and autocorrelation, the least-squares estimator of the autoregressive coefficient in a dynamic model, and tests for linear restrictions.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the semiparametric efficiency of conditional maximum likelihood estimation in panel models whose nonparametric component is the unknown distribution of the fixed effect, showing that the estimator achieves the semiparametric efficiency bound when the complete sufficient statistic does not depend on the parameter of interest.
Abstract: This paper investigates the semiparametric efficiency of the conditional maximum likelihood estimation in some panel models. The nonparametric component of the model is the unknown distribution of the fixed effect. For the exponential panel model, there exists a complete sufficient statistic for the fixed effect. When the complete sufficient statistic does not depend on the parameter of interest, the conditional maximum likelihood estimator (CMLE) achieves the semiparametric efficiency bound. In particular, the CMLE is semiparametrically efficient for the panel Poisson regression model and the panel negative binomial model.

Journal ArticleDOI
Mehmet Caner1
TL;DR: In this paper, the authors extended the univariate results of Chan and Tran (1989, Econometric Theory 5, 354–362) and Phillips (1990, Econometric Theory 6, 44–62) to multivariate time series, developing the limit theory for the least-squares estimate of a VAR(1) for a random walk with independent and identically distributed errors whose distributions are in the domain of attraction of a stable law.
Abstract: This paper generalizes the univariate results of Chan and Tran (1989, Econometric Theory 5, 354–362) and Phillips (1990, Econometric Theory 6, 44–62) to multivariate time series. We develop the limit theory for the least-squares estimate of a VAR(1) for a random walk with independent and identically distributed errors and for I(1) processes with weakly dependent errors whose distributions are in the domain of attraction of a stable law. The limit laws are represented by functionals of a stable process. A semiparametric correction is used in order to asymptotically eliminate the “bias” term in the limit law. These results are also an extension of the multivariate limit theory for square-integrable disturbances derived by Phillips and Durlauf (1986, Review of Economic Studies 53, 473–495). Potential applications include tests for multivariate unit roots and cointegration.