
Showing papers by "Victor Chernozhukov" published in 2007


Journal ArticleDOI
TL;DR: The authors developed a framework for performing estimation and inference in econometric models with partial identification, focusing particularly on models characterized by moment inequalities and equalities, and developed methods for analyzing the asymptotic properties of sample criterion functions under set identification.
Abstract: This paper develops a framework for performing estimation and inference in econometric models with partial identification, focusing particularly on models characterized by moment inequalities and equalities. Applications of this framework include the analysis of game-theoretic models, revealed preference restrictions, regressions with missing and corrupted data, auction models, structural quantile regressions, and asset pricing models. Specifically, we provide estimators and confidence regions for the set of minimizers Θ_I of an econometric criterion function Q(θ). In applications, the criterion function embodies testable restrictions on economic models. A parameter value θ that describes an economic model satisfies these restrictions if Q(θ) attains its minimum at this value. Interest therefore focuses on the set of minimizers, called the identified set. We use the inversion of the sample analog, Q_n(θ), of the population criterion, Q(θ), to construct estimators and confidence regions for the identified set, and develop consistency, rates of convergence, and inference results for these estimators and regions. To derive these results, we develop methods for analyzing the asymptotic properties of sample criterion functions under set identification.

632 citations
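
To make the criterion-inversion idea above concrete, here is a minimal sketch under assumptions of my own (an interval-censored mean with two moment inequalities and a fixed, uncalibrated cutoff; none of this is taken from the paper): the estimated identified set is the collection of parameter values where the sample criterion Q_n(θ) falls below a cutoff.

```python
import numpy as np

# Hypothetical example: Y is interval-censored, E[Y_L] <= theta <= E[Y_U],
# so the identified set is the interval [E[Y_L], E[Y_U]].  The population
# criterion Q(theta) = max(E[Y_L] - theta, 0)^2 + max(theta - E[Y_U], 0)^2
# is zero exactly on that interval.
rng = np.random.default_rng(0)
n = 1000
y_lower = rng.normal(0.0, 1.0, n)
y_upper = y_lower + rng.uniform(0.5, 1.5, n)

def Q_n(theta):
    """Sample analog of the criterion based on the two moment inequalities."""
    m1 = y_lower.mean() - theta      # should be <= 0
    m2 = theta - y_upper.mean()      # should be <= 0
    return max(m1, 0.0) ** 2 + max(m2, 0.0) ** 2

# Level-set (inversion) estimator: keep theta values where n * Q_n(theta) is
# below a cutoff c_n.  In practice c_n would be calibrated (e.g. by
# subsampling or bootstrap); here it is a fixed illustrative number.
grid = np.linspace(-2.0, 3.0, 501)
c_n = 1.0
mask = np.array([n * Q_n(t) <= c_n for t in grid])
estimated_set = grid[mask]
print("estimated identified set ~ [%.2f, %.2f]" % (estimated_set.min(), estimated_set.max()))
```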


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method to address the problem of lack of monotonicity in estimation of conditional and structural quantile functions, also known as the quantile crossing problem.
Abstract: This paper proposes a method to address the longstanding problem of lack of monotonicity in estimation of conditional and structural quantile functions, also known as the quantile crossing problem (Bassett and Koenker (1982)). The method consists in sorting or monotone rearranging the original estimated non-monotone curve into a monotone rearranged curve. We show that the rearranged curve is closer to the true quantile curve than the original curve in finite samples, establish a functional delta method for rearrangement-related operators, and derive functional limit theory for the entire rearranged curve and its functionals. We also establish validity of the bootstrap for estimating the limit law of the entire rearranged curve and its functionals. Our limit results are generic in that they apply to every estimator of a monotone function, provided that the estimator satisfies a functional central limit theorem and the function satisfies some smoothness conditions. Consequently, our results apply to estimation of other econometric functions with monotonicity restrictions, such as demand, production, distribution, and structural distribution functions. We illustrate the results with an application to estimation of structural distribution and quantile functions using data on Vietnam veteran status and earnings.

487 citations
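
A minimal sketch of the sorting step described above, on simulated data of my own (the toy quantile curve is hypothetical, not the paper's): on an evenly spaced probability grid, the monotone rearrangement of an estimated quantile curve amounts to sorting its values.

```python
import numpy as np

# Hypothetical estimated conditional quantile curve u -> Q_hat(u | x) that is
# not monotone in u (it "crosses"), evaluated on an even probability grid.
u = np.linspace(0.01, 0.99, 99)
q_hat = 1.0 + 2.0 * u + 0.3 * np.sin(12 * u)   # toy non-monotone estimate

# Increasing rearrangement: on an even grid this is just sorting the values.
# The rearranged curve is monotone and, per the paper's finite-sample result,
# no farther from the true quantile curve than the original estimate.
q_rearranged = np.sort(q_hat)

print("original monotone?  ", bool(np.all(np.diff(q_hat) >= 0)))
print("rearranged monotone?", bool(np.all(np.diff(q_rearranged) >= 0)))
```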


Journal ArticleDOI
TL;DR: In this article, the authors consider nonparametric identification and estimation of a model that is monotonic in a nonseparable scalar disturbance that is independent of the instruments.

219 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the implications of the statistical large sample theory for the computational complexity of Bayesian and quasi-Bayesian estimation carried out using Metropolis random walks and showed that the running time of the algorithm in large samples is bounded in probability by a polynomial in the parameter dimension $d$, and, in particular, is of stochastic order $d^2$ in the leading cases after the burn-in period.
Abstract: In this paper we examine the implications of the statistical large sample theory for the computational complexity of Bayesian and quasi-Bayesian estimation carried out using Metropolis random walks. Our analysis is motivated by the Laplace-Bernstein-Von Mises central limit theorem, which states that in large samples the posterior or quasi-posterior approaches a normal density. Using the conditions required for the central limit theorem to hold, we establish polynomial bounds on the computational complexity of general Metropolis random walk methods in large samples. Our analysis covers cases where the underlying log-likelihood or extremum criterion function is possibly non-concave, discontinuous, and with increasing parameter dimension. However, the central limit theorem restricts the deviations from continuity and log-concavity of the log-likelihood or extremum criterion function in a very specific manner. Under minimal assumptions required for the central limit theorem to hold under the increasing parameter dimension, we show that the Metropolis algorithm is theoretically efficient even for the canonical Gaussian walk, which is studied in detail. Specifically, we show that the running time of the algorithm in large samples is bounded in probability by a polynomial in the parameter dimension $d$, and, in particular, is of stochastic order $d^2$ in the leading cases after the burn-in period. We then give applications to exponential families, curved exponential families, and Z-estimation of increasing dimension.

63 citations
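
For concreteness, a bare-bones version of the canonical Gaussian random-walk Metropolis algorithm that the complexity analysis above concerns; the target density, step size, and chain length are illustrative choices of mine, not the paper's.

```python
import numpy as np

def metropolis_random_walk(log_post, theta0, n_steps, step):
    """Canonical Gaussian random-walk Metropolis chain targeting exp(log_post)."""
    rng = np.random.default_rng(1)
    d = theta0.size
    chain = np.empty((n_steps, d))
    theta, lp = theta0.copy(), log_post(theta0)
    for t in range(n_steps):
        proposal = theta + step * rng.standard_normal(d)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = proposal, lp_prop
        chain[t] = theta
    return chain

# Toy quasi-posterior: in large samples the posterior is approximately normal,
# which is the structure the polynomial running-time bound exploits.
d = 10
log_post = lambda th: -0.5 * np.sum(th ** 2)
chain = metropolis_random_walk(log_post, theta0=np.ones(d), n_steps=5000, step=2.4 / np.sqrt(d))
print("posterior mean estimate:", chain[2500:].mean(axis=0).round(2))
```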


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method to address the problem of lack of monotonicity in estimation of conditional and structural quantile functions, also known as the quantile crossing problem.
Abstract: This paper proposes a method to address the longstanding problem of lack of monotonicity in estimation of conditional and structural quantile functions, also known as the quantile crossing problem. The method consists in sorting or monotone rearranging the original estimated non-monotone curve into a monotone rearranged curve. We show that the rearranged curve is closer to the true quantile curve in finite samples than the original curve, establish a functional delta method for rearrangement-related operators, and derive functional limit theory for the entire rearranged curve and its functionals. We also establish validity of the bootstrap for estimating the limit law of the entire rearranged curve and its functionals. Our limit results are generic in that they apply to every estimator of a monotone econometric function, provided that the estimator satisfies a functional central limit theorem and the function satisfies some smoothness conditions. Consequently, our results apply to estimation of other econometric functions with monotonicity restrictions, such as demand, production, distribution, and structural distribution functions. We illustrate the results with an application to estimation of structural quantile functions using data on Vietnam veteran status and earnings.

52 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics.
Abstract: This paper applies a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish–Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics. In addition to satisfying monotonicity, required of distribution and quantile functions, the procedure often delivers strikingly better approximations to the distribution and quantile functions of the sample mean than the original Edgeworth–Cornish–Fisher expansions.

46 citations
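
As an illustration, under simplifying assumptions of my own (the textbook second-order Cornish-Fisher skewness correction for a standardized sample mean, which is not a formula quoted from the paper), the expansion can decrease in the left tail for small n, and sorting its values over the probability grid restores monotonicity:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical setup: second-order Cornish-Fisher approximation to the
# quantiles of a standardized sample mean of n iid draws with skewness gamma.
n, gamma = 4, 3.0
p = np.linspace(0.001, 0.999, 999)
z = norm.ppf(p)
q_cf = z + gamma * (z ** 2 - 1) / (6 * np.sqrt(n))   # can decrease in the left tail

# Increasing rearrangement: sort the approximate quantiles over the p-grid.
q_monotone = np.sort(q_cf)

print("Cornish-Fisher monotone?", bool(np.all(np.diff(q_cf) >= 0)))
print("rearranged monotone?    ", bool(np.all(np.diff(q_monotone) >= 0)))
```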


Journal ArticleDOI
TL;DR: In this paper, the authors consider asymptotic and finite-sample confidence bounds in instrumental variables quantile regressions of wages on schooling with relatively weak instruments.

42 citations


Posted Content
TL;DR: In this article, the authors proposed a method to address the problem of lack of monotonicity in estimation of conditional and structural quantile functions, also known as the quantile crossing problem.
Abstract: This paper proposes a method to address the longstanding problem of lack of monotonicity in estimation of conditional and structural quantile functions, also known as the quantile crossing problem. The method consists in sorting or monotone rearranging the original estimated non-monotone curve into a monotone rearranged curve. We show that the rearranged curve is closer to the true quantile curve in finite samples than the original curve, establish a functional delta method for rearrangement-related operators, and derive functional limit theory for the entire rearranged curve and its functionals. We also establish validity of the bootstrap for estimating the limit law of the entire rearranged curve and its functionals. Our limit results are generic in that they apply to every estimator of a monotone econometric function, provided that the estimator satisfies a functional central limit theorem and the function satisfies some smoothness conditions. Consequently, our results apply to estimation of other econometric functions with monotonicity restrictions, such as demand, production, distribution, and structural distribution functions. We illustrate the results with an application to estimation of structural quantile functions using data on Vietnam veteran status and earnings.

29 citations


Posted Content
TL;DR: In this article, the authors show that the original estimate of a target function can always be improved with no harm using rearrangement techniques, and they illustrate the results with a computational example and an empirical example dealing with age-height growth charts.
Abstract: Suppose that a target function is monotonic, namely, weakly increasing, and an original estimate of the target function is available, which is not weakly increasing. Many common estimation methods used in statistics produce such estimates. We show that these estimates can always be improved with no harm using rearrangement techniques: The rearrangement methods, univariate and multivariate, transform the original estimate to a monotonic estimate, and the resulting estimate is closer to the true curve in common metrics than the original estimate. We illustrate the results with a computational example and an empirical example dealing with age-height growth charts.

23 citations
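
The "improved with no harm" property described above can be summarized (in notation of my own) by the weak inequality, valid in any L^p norm,

$$ \| f^{*} - f_0 \|_p \;\le\; \| f - f_0 \|_p, \qquad p \in [1, \infty], $$

where f_0 is the true weakly increasing function, f the original estimate, and f* its increasing rearrangement.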


Journal ArticleDOI
TL;DR: In this article, the computational complexity of Bayesian and quasi-Bayesian estimation in large samples carried out using a basic Metropolis random walk was studied and it was shown that the running time of the algorithm is bounded in probability by a polynomial in the parameter dimension d, and in particular is of stochastic order d^2 in the leading cases after the burn-in period.
Abstract: This paper studies the computational complexity of Bayesian and quasi-Bayesian estimation in large samples carried out using a basic Metropolis random walk. The framework covers cases where the underlying likelihood or extremum criterion function is possibly non-concave, discontinuous, and of increasing dimension. Using a central limit framework to provide structural restrictions for the problem, it is shown that the algorithm is computationally efficient. Specifically, it is shown that the running time of the algorithm in large samples is bounded in probability by a polynomial in the parameter dimension d, and in particular is of stochastic order d^2 in the leading cases after the burn-in period. The reason is that, in large samples, a central limit theorem implies that the posterior or quasi-posterior approaches a normal density, which restricts the deviations from continuity and concavity in a specific manner, so that the computational complexity is polynomial. An application to exponential and curved exponential families of increasing dimension is given.

21 citations


Posted Content
TL;DR: In this paper, the authors show that the original estimate of a target function can always be improved with no harm using rearrangement techniques, and they illustrate the results with a computational example and an empirical example dealing with age-height growth charts.
Abstract: Suppose that a target function is monotonic, namely, weakly increasing, and an original estimate of the target function is available, which is not weakly increasing. Many common estimation methods used in statistics produce such estimates. We show that these estimates can always be improved with no harm using rearrangement techniques: The rearrangement methods, univariate and multivariate, transform the original estimate to a monotonic estimate, and the resulting estimate is closer to the true curve in common metrics than the original estimate. We illustrate the results with a computational example and an empirical example dealing with age-height growth charts.

Journal ArticleDOI
TL;DR: In this article, it was shown that the class of tests covered by this admissibility result contains the Anderson and Rubin (1949) test, and that the test proposed by Moreira (2003) belongs to the closure of (i.e., can be interpreted as a limiting case of) the class of tests covered by the admissibility result.
Abstract: This paper studies a model widely used in the weak instruments literature and establishes admissibility of the weighted average power likelihood ratio tests recently derived by Andrews, Moreira, and Stock (2004). The class of tests covered by this admissibility result contains the Anderson and Rubin (1949) test. Thus, there is no conventional statistical sense in which the Anderson and Rubin (1949) test "wastes degrees of freedom". In addition, it is shown that the test proposed by Moreira (2003) belongs to the closure of (i.e., can be interpreted as a limiting case of) the class of tests covered by our admissibility result.
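
For reference, in the standard linear instrumental variables model y = Yβ + u with an n × k instrument matrix Z, the Anderson and Rubin (1949) statistic for testing H_0: β = β_0 discussed above takes the familiar form (notation mine, not quoted from the paper)

$$ AR(\beta_0) \;=\; \frac{(y - Y\beta_0)' P_Z (y - Y\beta_0)\,/\,k}{(y - Y\beta_0)' M_Z (y - Y\beta_0)\,/\,(n - k)}, \qquad P_Z = Z(Z'Z)^{-1}Z', \quad M_Z = I - P_Z. $$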

Posted Content
TL;DR: In this paper, the authors studied the natural monotonization of estimated conditional quantile curves induced by sampling from the estimated non-monotone model and then taking the resulting conditional quantile curves, which by construction are monotone in the probability.
Abstract: The most common approach to estimating conditional quantile curves is to fit a curve, typically linear, pointwise for each quantile. Linear functional forms, coupled with pointwise fitting, are used for a number of reasons including parsimony of the resulting approximations and good computational properties. The resulting fits, however, may not respect a logical monotonicity requirement - that the quantile curve be increasing as a function of probability. This paper studies the natural monotonization of these empirical curves induced by sampling from the estimated non-monotone model, and then taking the resulting conditional quantile curves that by construction are monotone in the probability. This construction of monotone quantile curves may be seen as a bootstrap and also as a monotonic rearrangement of the original non-monotone function. It is shown that the monotonized curves are closer to the true curves in finite samples, for any sample size. Under correct specification, the rearranged conditional quantile curves have the same asymptotic distribution as the original non-monotone curves. Under misspecification, however, the asymptotics of the rearranged curves may partially differ from the asymptotics of the original non-monotone curves. An analogous procedure is developed to monotonize the estimates of conditional distribution functions. The results are derived by establishing the compact (Hadamard) differentiability of the monotonized quantile and probability curves with respect to the original curves in discontinuous directions, tangentially to a set of continuous functions. In doing so, the compact differentiability of the rearrangement-related operators is established.
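
A minimal sketch of the construction described above, under illustrative assumptions of my own (pointwise linear quantile regressions fit with statsmodels on simulated data; none of the numbers come from the paper): draw U uniformly, evaluate the fitted curve x'β̂(U), and take empirical quantiles of the simulated outcomes, which are monotone by construction.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data (illustrative): y = 1 + x + heteroskedastic noise.
rng = np.random.default_rng(2)
n = 100
x = rng.uniform(0, 2, n)
y = 1 + x + (0.2 + 0.5 * x) * rng.standard_normal(n)
X = sm.add_constant(x)

# Pointwise linear quantile regression on a probability grid; the fitted
# curves u -> x0'beta_hat(u) may cross at some covariate values.
u_grid = np.linspace(0.05, 0.95, 19)
betas = np.array([sm.QuantReg(y, X).fit(q=u).params for u in u_grid])

x0 = np.array([1.0, 0.1])            # evaluation point (with constant term)
q_original = betas @ x0              # possibly non-monotone in u

# Monotonization by sampling from the estimated model: draw U uniformly over
# the fitted grid, form x0'beta_hat(U), and take empirical quantiles.
draws = q_original[rng.integers(0, len(u_grid), 10_000)]
q_rearranged = np.quantile(draws, u_grid)   # monotone by construction

print("original monotone?  ", bool(np.all(np.diff(q_original) >= 0)))
print("rearranged monotone?", bool(np.all(np.diff(q_rearranged) >= 0)))
```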

Posted Content
TL;DR: In this paper, the authors apply a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics.
Abstract: This paper applies a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics. Besides satisfying the logical monotonicity, required of distribution and quantile functions, the procedure often delivers strikingly better approximations to the distribution and quantile functions of the sample mean than the original Edgeworth-Cornish-Fisher expansions.

Report SeriesDOI
TL;DR: In this paper, the authors show that these estimates can always be improved with no harm using rearrangement techniques, and they illustrate the results with a computational example and an empirical example dealing with age-height growth charts.
Abstract: Suppose that a target function f_0 : R → R is monotonic, namely, weakly increasing, and an original estimate f of the target function is available, which is not weakly increasing. Many common estimation methods used in statistics produce such estimates f. We show that these estimates can always be improved with no harm using rearrangement techniques: The rearrangement methods, univariate and multivariate, transform the original estimate to a monotonic estimate f*, and the resulting estimate is closer to the true curve f_0 in common metrics than the original estimate f. We illustrate the results with a computational example and an empirical example dealing with age-height growth charts.

Posted Content
TL;DR: In this paper, the authors apply a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics.
Abstract: This paper applies a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics. Besides satisfying the logical monotonicity, required of distribution and quantile functions, the procedure often delivers strikingly better approximations to the distribution and quantile functions of the sample mean than the original Edgeworth-Cornish-Fisher expansions.

Journal ArticleDOI
TL;DR: In this paper, the authors apply a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics.
Abstract: This paper applies a regularization procedure called increasing rearrangement to monotonize Edgeworth and Cornish-Fisher expansions and any other related approximations of distribution and quantile functions of sample statistics. Besides satisfying the logical monotonicity, required of distribution and quantile functions, the procedure often delivers strikingly better approximations to the distribution and quantile functions of the sample mean than the original Edgeworth-Cornish-Fisher expansions.

Journal ArticleDOI
TL;DR: In this paper, the authors study the large sample properties of posterior-based inference in the curved exponential family under increasing dimensions and establish conditions under which the posterior distribution is approximately normal, which implies various good properties of estimation and inference procedures based on the posterior.
Abstract: In this paper, we study the large-sample properties of posterior-based inference in the curved exponential family under increasing dimensions. The curved structure arises from the imposition of various restrictions on the model, such as moment restrictions, and plays a fundamental role in econometrics and other branches of data analysis. We establish conditions under which the posterior distribution is approximately normal, which in turn implies various good properties of estimation and inference procedures based on the posterior. In the process, we also revisit and improve upon previous results for the exponential family under increasing dimensions by making use of concentration of measure. We also discuss a variety of applications to high-dimensional versions of classical econometric models, including the multinomial model with moment restrictions, seemingly unrelated regression equations, and single structural equation models. In our analysis, both the parameter dimensions and the number of moments are increasing with the sample size.
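
The approximate normality result described above can be summarized, in generic notation of my own rather than the paper's exact statement, as a total-variation approximation of the posterior by a normal law,

$$ \big\| \, P(\theta \in \cdot \mid \text{data}) \;-\; N\big(\hat\theta_n, \, n^{-1} \hat J_n^{-1}\big) \, \big\|_{\mathrm{TV}} \;\xrightarrow{\;p\;}\; 0, $$

where \hat\theta_n is an efficient estimator (for example the posterior mean) and \hat J_n an estimate of the information matrix, with the parameter dimension allowed to grow with n under the paper's conditions.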

01 Jan 2007
TL;DR: In this paper, it was shown that the class of tests covered by this admissibility result contains the Anderson and Rubin (1949, Annals of Mathematical Statistics 20,46-63) test.
Abstract: This paper studies a model widely used in the weak instruments literature and establishes admissibility of the weighted average power likelihood ratio tests recently derived by Andrews, Moreira, and Stock (2004, NBER Technical Working Paper 199). The class of tests covered by this admissibility result contains the Anderson and Rubin (1949, Annals of Mathematical Statistics 20, 46-63) test. Thus, there is no conventional statistical sense in which the Anderson and Rubin (1949) test "wastes degrees of freedom." In addition, it is shown that the test proposed by Moreira (2003, Econometrica 71, 1027-1048) belongs to the closure of (i.e., can be interpreted as a limiting case of) the class of tests covered by our admissibility result.