
Showing papers in "Econometrica in 2002"


Journal ArticleDOI
TL;DR: This article developed a Ricardian trade model that incorporates realistic geographic features into general equilibrium and delivered simple structural equations for bilateral trade with parameters relating to absolute advantage, comparative advantage, and geographic barriers.
Abstract: We develop a Ricardian trade model that incorporates realistic geographic features into general equilibrium. It delivers simple structural equations for bilateral trade with parameters relating to absolute advantage, to comparative advantage (promoting trade), and to geographic barriers (resisting it). We estimate the parameters with data on bilateral trade in manufactures, prices, and geography from 19 OECD countries in 1990. We use the model to explore various issues such as the gains from trade, the role of trade in spreading the benefits of new technology, and the effects of tariff reduction.
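For concreteness, the bilateral trade equation the abstract refers to takes a form along the following lines (the notation here is our own shorthand for illustration, not necessarily the paper's):

\[
\frac{X_{ni}}{X_n} \;=\; \frac{T_i\,(c_i\,d_{ni})^{-\theta}}{\sum_{k} T_k\,(c_k\,d_{nk})^{-\theta}}
\]

Here X_{ni}/X_n is importer n's expenditure share on goods from country i, T_i captures absolute advantage, c_i is i's input cost, d_{ni} ≥ 1 is the geographic barrier, and θ governs the strength of comparative advantage; taking logs yields a gravity-style estimating equation.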

3,782 citations


Journal ArticleDOI
TL;DR: In this article, the convergence rate for the factor estimates is established, allowing for consistent estimation of the number of factors, and panel information criteria are proposed that consistently estimate that number.
Abstract: In this paper we develop some econometric theory for factor models of large dimensions. The focus is the determination of the number of factors (r), which is an unresolved issue in the rapidly growing literature on multifactor models. We first establish the convergence rate for the factor estimates that will allow for consistent estimation of r. We then propose some panel criteria and show that the number of factors can be consistently estimated using the criteria. The theory is developed under the framework of large cross-sections (N) and large time dimensions (T). No restriction is imposed on the relation between N and T. Simulations show that the proposed criteria have good finite sample properties in many configurations of the panel data encountered in practice.
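A minimal sketch of how a criterion of this type can be computed, assuming principal-components factor estimates and one of the penalty functions proposed in this literature; the specific penalty, simulated data, and function names below are illustrative:

```python
import numpy as np

def select_num_factors(X, kmax):
    """Choose the number of factors r for a (T x N) panel X by minimizing an
    information criterion of the form log V(k) + k * g(N, T), where V(k) is the
    average squared residual from a k-factor principal-components fit."""
    T, N = X.shape
    X = X - X.mean(axis=0)                                # demean each series
    U, S, Vt = np.linalg.svd(X, full_matrices=False)      # principal components
    penalty = (N + T) / (N * T) * np.log(min(N, T))       # an ICp2-style penalty
    ic = []
    for k in range(1, kmax + 1):
        fit = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]       # k-factor common component
        V_k = np.mean((X - fit) ** 2)                     # average squared residual
        ic.append(np.log(V_k) + k * penalty)
    return int(np.argmin(ic)) + 1

# toy example with 3 true factors
rng = np.random.default_rng(0)
T, N, r = 200, 100, 3
F, L = rng.normal(size=(T, r)), rng.normal(size=(N, r))
X = F @ L.T + rng.normal(size=(T, N))
print(select_num_factors(X, kmax=8))    # typically selects 3
```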

2,863 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply the axioms of revealed preference to the altruistic actions of subjects and find that over 98% of the subjects made choices that are consistent with utility maximization.
Abstract: Subjects in economic laboratory experiments have clearly expressed an interest in behaving unselfishly. They cooperate in prisoners’ dilemma games, they give to public goods, and they leave money on the table when bargaining. While some are tempted to call this behavior irrational, economists should ask if this unselfish and altruistic behavior is indeed self-interested. That is, can subjects’ concerns for altruism or fairness be expressed in the economists’ language of a well-behaved preference ordering? If so, then behavior is consistent and meets our definition of rationality. This paper explores this question by applying the axioms of revealed preference to the altruistic actions of subjects. If subjects adhere to these axioms, such as GARP, then we can infer that a continuous, convex, and monotonic utility function could have generated their choices. This means that an economic model is sufficient to understand the data and that, in fact, altruism is rational. We do this by offering subjects several opportunities to share a surplus with another anonymous subject. However, the costs of sharing and the surplus available vary across decisions. This price and income variation creates budgets for altruistic activity that allow us to test for an underlying preference ordering. We found that subjects exhibit a significant degree of rationally altruistic behavior. Over 98% of our subjects made choices that are consistent with utility maximization. Only a quarter of subjects are selfish money-maximizers, and the rest show varying degrees of altruism. Perhaps most strikingly, almost half of the subjects exhibited behavior that is exactly consistent with one of three standard CES utility functions: perfectly selfish, perfect substitutes, or Leontief. Those with Leontief preferences are always dividing the surplus equally, while those with perfect substitutes preferences give everything away when the price of giving is less than one, but keep everything when the price of giving is greater than one. Using the data on choices, we estimated a population of utility functions and applied these to predict the results of other studies. We found that our results could successfully characterize the outcomes of other studies, indicating still further that altruism can be captured in an economic model.
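The revealed-preference test described here can be illustrated with a short sketch; the budgets, bundles, and function name below are illustrative assumptions:

```python
import numpy as np

def violates_garp(prices, choices):
    """Check GARP for bundles chosen from linear budgets.

    prices[i], choices[i] are the price vector and chosen bundle in observation i.
    Bundle i is directly revealed preferred to j if p_i . x_i >= p_i . x_j.
    GARP fails if x_i is revealed preferred to x_j (via the transitive closure)
    while x_i is strictly more expensive than x_j at prices p_j (p_j.x_j > p_j.x_i).
    """
    p, x = np.asarray(prices, float), np.asarray(choices, float)
    n = len(p)
    # direct (weak) revealed preference relation
    R = np.array([[p[i] @ x[i] >= p[i] @ x[j] for j in range(n)] for i in range(n)])
    # transitive closure (Warshall's algorithm)
    for k in range(n):
        R = R | (R[:, [k]] & R[[k], :])
    for i in range(n):
        for j in range(n):
            if R[i, j] and p[j] @ x[j] > p[j] @ x[i]:
                return True          # GARP violation found
    return False

# two observations consistent with maximizing a monotone utility function
prices  = [[1.0, 2.0], [2.0, 1.0]]
choices = [[4.0, 1.0], [1.0, 4.0]]
print(violates_garp(prices, choices))   # False
```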

1,742 citations


Journal ArticleDOI
TL;DR: In this paper, a structural model of optimal life-cycle consumption expenditures in the presence of realistic labor income uncertainty is estimated, using synthetic cohort techniques and Consumer Expenditure Survey data to construct average age-profiles of consumption and income for typical households.
Abstract: This paper estimates a structural model of optimal life-cycle consumption expenditures in the presence of realistic labor income uncertainty. We employ synthetic cohort techniques and Consumer Expenditure Survey data to construct average age-profiles of consumption and income over the working lives of typical households across different education and occupation groups. The model fits the profiles quite well. In addition to providing reasonable estimates of the discount rate and risk aversion, we find that consumer behavior changes strikingly over the life cycle. Young consumers behave as buffer-stock agents. Around age 40, the typical household starts accumulating liquid assets for retirement and its behavior mimics more closely that of a certainty equivalent consumer. Our methodology provides a natural decomposition of saving and wealth into its precautionary and life-cycle components.

1,223 citations


Journal ArticleDOI
TL;DR: The standard envelope theorems apply to choice sets with convex and topological structure, providing sufficient conditions for the value function to be differentiable in a parameter and characterizing its derivative as mentioned in this paper.
Abstract: The standard envelope theorems apply to choice sets with convex and topological structure, providing sufficient conditions for the value function to be differentiable in a parameter and characterizing its derivative. This paper studies optimization with arbitrary choice sets and shows that the traditional envelope formula holds at any differentiability point of the value function. We also provide conditions for the value function to be, variously, absolutely continuous, left- and right-differentiable, or fully differentiable. These results are applied to mechanism design, convex programming, continuous optimization problems, saddle-point problems, problems with parameterized constraints, and optimal stopping problems.
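In notation adopted here purely for illustration, with value function V(t) = sup over x in X of f(x, t) and maximizer x*(t), the traditional envelope formula referred to in the abstract is

\[
V'(t) \;=\; \left.\frac{\partial f(x, t)}{\partial t}\right|_{x = x^{*}(t)},
\]

which, under the paper's result, holds at any point t where V is differentiable, with no convexity or topological structure imposed on the choice set X.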

1,183 citations


Journal ArticleDOI
Alvin E. Roth
TL;DR: In this article, the authors make the case that experimental and computational economics are natural complements to game theory in the work of design, and that some of the challenges facing both markets involve related kinds of complementarities.
Abstract: Economists have lately been called upon not only to analyze markets, but to design them. Market design involves a responsibility for detail, a need to deal with all of a market's complications, not just its principal features. Designers therefore cannot work only with the simple conceptual models used for theoretical insights into the general working of markets. Instead, market design calls for an engineering approach. Drawing primarily on the design of the entry level labor market for American doctors (the National Resident Matching Program), and of the auctions of radio spectrum conducted by the Federal Communications Commission, this paper makes the case that experimental and computational economics are natural complements to game theory in the work of design. The paper also argues that some of the challenges facing both markets involve dealing with related kinds of complementarities, and that this suggests an agenda for future theoretical research.

968 citations


Journal ArticleDOI
TL;DR: In this article, a continuous-time intertemporal version of multiple-priors utility, under which aversion to ambiguity is admissible, is formulated; in a representative agent asset market setting, the model delivers restrictions on excess returns that reflect a premium for risk and a separate premium for ambiguity.
Abstract: Models of utility in stochastic continuous-time settings typically assume that beliefs are represented by a probability measure, hence ruling out a priori any concern with ambiguity. This paper formulates a continuous-time intertemporal version of multiple-priors utility, where aversion to ambiguity is admissible. In a representative agent asset market setting, the model delivers restrictions on excess returns that admit interpretations reflecting a premium for risk and a separate premium for ambiguity.

861 citations


Journal ArticleDOI
TL;DR: In this article, the authors used Hermite polynomials to construct an explicit sequence of closed-form functions and showed that it converges to the true (but unknown) likelihood function.
Abstract: When a continuous-time diffusion is observed only at discrete dates, in most cases the transition distribution and hence the likelihood function of the observations is not explicitly computable. Using Hermite polynomials, I construct an explicit sequence of closed-form functions and show that it converges to the true (but unknown) likelihood function. I document that the approximation is very accurate and prove that maximizing the sequence results in an estimator that converges to the true maximum likelihood estimator and shares its asymptotic properties. Monte Carlo evidence reveals that this method outperforms other approximation schemes in situations relevant for financial models.

823 citations


Journal ArticleDOI
TL;DR: This paper develops a Roy model of mobility and earnings in which workers choose which of the 50 states (plus the District of Columbia) to live and work in, together with an alternative econometric methodology that combines Lee's (1983) parametric maximum order statistic approach to reduce the dimensionality of the error terms with more recent work on semiparametric estimation of selection models.
Abstract: Self-selected migration presents one potential explanation for why observed returns to a college education in local labor markets vary widely even though U.S. workers are highly mobile. To assess the impact of self-selection on estimated returns, this paper first develops a Roy model of mobility and earnings where workers choose in which of the 50 states (plus the District of Columbia) to live and work. Available estimation methods are either infeasible for a selection model with so many alternatives or place potentially severe restrictions on earnings and the selection process. This paper develops an alternative econometric methodology that combines Lee's (1983) parametric maximum order statistic approach to reduce the dimensionality of the error terms with more recent work on semiparametric estimation of selection models (e.g., Ahn and Powell (1993)). The resulting semiparametric correction is easy to implement and can be adapted to a variety of other polychotomous choice problems. The empirical work, which uses 1990 U.S. Census data, confirms the role of comparative advantage in mobility decisions. The results suggest that self-selection of higher educated individuals to states with higher returns to education generally leads to upward biases in OLS estimates of the returns to education in state-specific labor markets. While the estimated returns to a college education are significantly biased, correcting for the bias does not narrow the range of returns across states. Consistent with the finding that the corrected return to a college education differs across the U.S., the relative state-to-state migration flows of college- versus high school-educated individuals respond strongly to differences in the return to education and amenities across states.

638 citations


Journal ArticleDOI
TL;DR: In this article, the authors prove the existence of a symmetric equilibrium in a circular city in which businesses and housing can both be located anywhere in the city; in this equilibrium, firms balance the external benefits from locating near other producers against the costs of longer commutes for workers.
Abstract: We prove the existence of a symmetric equilibrium in a circular city in which businesses and housing can both be located anywhere in the city. In this equilibrium, firms balance the external benefits from locating near other producers against the costs of longer commutes for workers. An equilibrium city need not take the form of a central business district surrounded by a residential area. We propose a general algorithm for constructing equilibria, and use it to study the way land use is affected by changes in the model's underlying parameters.

613 citations


Journal ArticleDOI
TL;DR: In this paper, the effects of JTPA training programs on the distribution of earnings are investigated using a new instrumental variable (IV) method that measures program impacts on quantiles; the quantile treatment effects (QTE) estimator reduces to quantile regression when selection for treatment is exogenously determined.
Abstract: This paper reports estimates of the effects of JTPA training programs on the distribution of earnings. The estimation uses a new instrumental variable (IV) method that measures program impacts on quantiles. The quantile treatment effects (QTE) estimator reduces to quantile regression when selection for treatment is exogenously determined. QTE can be computed as the solution to a convex linear programming problem, although this requires first-step estimation of a nuisance function. We develop distribution theory for the case where the first step is estimated nonparametrically. For women, the empirical results show that the JTPA program had the largest proportional impact at low quantiles. Perhaps surprisingly, however, JTPA training raised the quantiles of earnings for men only in the upper half of the trainee earnings distribution.

Journal ArticleDOI
TL;DR: In this article, an equilibrium search model with on-the-job-search is presented, where firms make take-it-or-leave-it wage offers to workers conditional on their characteristics and they can respond to the outside job offers received by their employees.
Abstract: We construct and estimate an equilibrium search model with on-the-job search. Firms make take-it-or-leave-it wage offers to workers conditional on their characteristics, and they can respond to the outside job offers received by their employees. Unobserved worker productive heterogeneity is introduced in the form of cross-worker differences in a "competence" parameter. On the other side of the market, firms also are heterogeneous with respect to their marginal productivity of labor. The model delivers a theory of steady-state wage dispersion driven by heterogeneous worker abilities and firm productivities, as well as by matching frictions. The structural model is estimated using matched employer and employee French panel data. The exogenous distributions of worker and firm heterogeneity components are nonparametrically estimated. We use this structural estimation to provide a decomposition of cross-employee wage variance. We find that the share of the cross-sectional wage variance that is explained by person effects varies across skill groups. Specifically, this share lies close to 40% for high-skilled white collars, and quickly decreases to 0% as the observed skill level decreases. The contribution of market imperfections to wage dispersion is typically around 50%.

Journal ArticleDOI
TL;DR: In this paper, the perspective from game theory complements standard economic theory in examining how organization and procedure affect efficiency, incentives, and market performance in wholesale power markets.
Abstract: Liberalization of infrastructure industries presents classic economic issues about how organization and procedure affect market performance. These issues are examined in wholesale power markets. The perspective from game theory complements standard economic theory to examine effects on efficiency and incentives.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the nature of price competition among firms that produce differentiated products and compete in markets that are limited in extent; they propose an instrumental variables series estimator for the matrix of cross-price response coefficients, demonstrate that the estimator is consistent, and derive its asymptotic distribution.
Abstract: We investigate the nature of price competition among firms that produce differentiated products and compete in markets that are limited in extent. We propose an instrumental variables series estimator for the matrix of cross price response coefficients, demonstrate that our estimator is consistent, and derive its asymptotic distribution. Our semiparametric approach allows us to discriminate among models of global competition, in which all products compete with all others, and local competition, in which products compete only with their neighbors. We apply our semiparametric estimator to data from U.S. wholesale gasoline markets and find that, in this market, competition is highly localized.

Journal ArticleDOI
TL;DR: In this article, the authors considered a dynamic panel AR(1) model with fixed effects when both n and T are large and showed that a relatively simple fix to OLS or the MLE results in an asymptotically unbiased estimator.
Abstract: We consider a dynamic panel AR(1) model with fixed effects when both n and T are large. Under the “T fixed n large” asymptotic approximation, the ordinary least squares (OLS) or Gaussian maximum likelihood estimator (MLE) is known to be inconsistent due to the well-known incidental parameter problem. We consider an alternative asymptotic approximation where n and T grow at the same rate. It is shown that, although OLS or the MLE is asymptotically biased, a relatively simple fix to OLS or the MLE results in an asymptotically unbiased estimator. Under the assumption of Gaussian innovations, the bias-corrected MLE is shown to be asymptotically efficient by a Hajek type convolution theorem.
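A small simulation sketch of the incidental-parameter bias and a simple plug-in correction of the kind the abstract describes; the correction formula and simulation design below are illustrative assumptions rather than the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, rho = 500, 10, 0.5

# simulate a dynamic panel y_{it} = alpha_i + rho * y_{i,t-1} + eps_{it}
alpha = rng.normal(size=(n, 1))
y = np.zeros((n, T + 1))
y[:, 0] = (alpha[:, 0] + rng.normal(size=n)) / (1 - rho)   # rough start near stationarity
for t in range(1, T + 1):
    y[:, t] = alpha[:, 0] + rho * y[:, t - 1] + rng.normal(size=n)

# within-group (fixed-effects) OLS of y_t on y_{t-1}: demean within each unit
ylag, ycur = y[:, :-1], y[:, 1:]
ylag_d = ylag - ylag.mean(axis=1, keepdims=True)
ycur_d = ycur - ycur.mean(axis=1, keepdims=True)
rho_fe = np.sum(ylag_d * ycur_d) / np.sum(ylag_d ** 2)

# simple first-order bias adjustment: the within estimator's bias is roughly
# -(1 + rho)/T, so add back a plug-in estimate of that bias
rho_bc = (T + 1) / T * rho_fe + 1 / T

print(f"within estimate: {rho_fe:.3f}, bias-corrected: {rho_bc:.3f}, truth: {rho}")
```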

Journal ArticleDOI
Thierry Magnac, David Thesmar
TL;DR: In this paper, the authors show that dynamic discrete choice models cannot be identified unless the following structural parameters are set: the distribution function of unobserved preference shocks, the discount rate, and the current and future preferences in one (reference) alternative.
Abstract: In this paper, we analyze the nonparametric identification of dynamic discrete choice models. Our methodology is based on the insight of Hotz and Miller (1993) that Bellman equations can be interpreted as moment conditions. We consider cases with and without unobserved heterogeneity. Not only do we show that these models are not identified (Rust (1994)), we are also able to determine their exact degree of underidentification. We begin with the case without correlated unobserved heterogeneity. Using Bellman equations as moment conditions, we show that utility functions in each alternative cannot be (nonparametrically) identified as long as the following structural parameters are not set: the distribution function of unobserved preference shocks, the discount rate, and the current and future preferences in one (reference) alternative. We also investigate how exclusion or parametric restrictions can provide identifying restrictions. As the identification proof is constructive, a simple method of moments estimator can be derived and overidentifying restrictions can be tested. Provided that one is willing to make stronger identifying assumptions, dynamic discrete choice modelling is thus little different from the continuous case. Bellman equations can be used to recover deep structural parameters as are Euler equations. We continue by exploring a case where the unobserved component of preferences is correlated over time. Even if the functional degree of underidentification of this model is larger, we present reasonable identifying assumptions that lead to the same identification results as without unobserved heterogeneity. The same methodology using moment conditions is applied. This paper expands upon the work in Rust (1994), where the generic nonidentification result is stated. We use a slightly different model. In our case, agents' preferences have unobservable and possibly persistent components. The constructive aspect of our proof allows us to interpret Rust's underidentification result and to propose identifying restrictions. On the technical side, the insights for our identification strategy are borrowed from the works of Hotz and Miller (1993), Hotz et al. (1994), and Altug and Miller (1998).

Journal ArticleDOI
TL;DR: This article showed that the assumption of an unobserved index crossing a threshold that defines the selection model is equivalent to the independence and monotonicity assumptions at the center of the LATE approach.
Abstract: The selection model and instrumental variable, local average treatment effect (LATE) framework are widely interpreted as alternative, competing frameworks. This note shows that the assumption of an unobserved index crossing a threshold that defines the selection model is equivalent to the independence and monotonicity assumptions at the center of the LATE approach. The underlying assumptions of the two approaches are equivalent.
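In notation adopted here purely for illustration, the threshold-crossing selection model in question can be written as

\[
D \;=\; \mathbf{1}\{\,\mu(Z) \ge U\,\}, \qquad U \ \text{independent of}\ Z,
\]

and the note's result is that this representation imposes exactly the same restrictions on the data as the independence and monotonicity assumptions of the LATE framework, and conversely.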

Journal ArticleDOI
TL;DR: In this paper, the authors establish global convergence results for stochastic fictitious play for four classes of games: games with an interior ESS, zero sum games, potential games, and supermodular games.
Abstract: We establish global convergence results for stochastic fictitious play for four classes of games: games with an interior ESS, zero sum games, potential games, and supermodular games. We do so by appealing to techniques from stochastic approximation theory, which relate the limit behavior of a stochastic process to the limit behavior of a differential equation defined by the expected motion of the process. The key result in our analysis of supermodular games is that the relevant differential equation defines a strongly monotone dynamical system. Our analyses of the other cases combine Lyapunov function arguments with a discrete choice theory result: that the choice probabilities generated by any additive random utility model can be derived from a deterministic model based on payoff perturbations that depend nonlinearly on the vector of choice probabilities.
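A small simulation sketch of stochastic fictitious play with logit (perturbed) best responses in a 2x2 zero-sum game; the payoff matrix, smoothing parameter, and update rule shown are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# row player's payoffs in matching pennies (a zero-sum game)
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
eta = 0.1          # scale of the payoff perturbations (logit smoothing)

def logit_br(payoffs, eta):
    """Perturbed best response: choice probabilities from a logit rule."""
    z = payoffs / eta
    z -= z.max()
    p = np.exp(z)
    return p / p.sum()

# empirical beliefs: each player's running average of the opponent's play
belief_about_col = np.array([0.5, 0.5])
belief_about_row = np.array([0.5, 0.5])

for t in range(1, 20001):
    # each player smooth-best-responds to beliefs about the other
    p_row = logit_br(A @ belief_about_col, eta)
    p_col = logit_br(-(A.T @ belief_about_row), eta)   # column player earns -A'
    a_row = rng.choice(2, p=p_row)
    a_col = rng.choice(2, p=p_col)
    # update empirical frequencies (beliefs) with step size 1/t
    belief_about_col += (np.eye(2)[a_col] - belief_about_col) / t
    belief_about_row += (np.eye(2)[a_row] - belief_about_row) / t

print(belief_about_row, belief_about_col)   # both near (0.5, 0.5) in this game
```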

Journal ArticleDOI
TL;DR: In this article, the authors examined inference on regressions when interval data are available on one variable, the other variables being measured precisely, and found that the IMMI Assumptions alone imply simple nonparametric bounds on E(y|x, v) and E(v|x) and combined with a semiparametric binary regression model yield an identification region for the parameters that may be estimated consistently by modified maximum score (MMS) method.
Abstract: This paper examines inference on regressions when interval data are available on one variable, the other variables being measured precisely. Let a population be characterized by a distribution P(y, x, v, v0, v1), where y ∈ R^1, x ∈ R^k, and the real variables (v, v0, v1) satisfy v0 ≤ v ≤ v1. Let a random sample be drawn from P and the realizations of (y, x, v0, v1) be observed, but not those of v. The problem of interest may be to infer E(y|x, v) or E(v|x). This analysis maintains Interval (I), Monotonicity (M), and Mean Independence (MI) assumptions: (I) P(v0 ≤ v ≤ v1) = 1; (M) E(y|x, v) is monotone in v; (MI) E(y|x, v, v0, v1) = E(y|x, v). No restrictions are imposed on the distribution of the unobserved values of v within the observed intervals [v0, v1]. It is found that the IMMI assumptions alone imply simple nonparametric bounds on E(y|x, v) and E(v|x). These assumptions, invoked when y is binary and combined with a semiparametric binary regression model, yield an identification region for the parameters that may be estimated consistently by a modified maximum score (MMS) method. The IMMI assumptions combined with a parametric model for E(y|x, v) or E(v|x) yield an identification region that may be estimated consistently by a modified minimum-distance (MMD) method. Monte Carlo methods are used to characterize the finite-sample performance of these estimators. Empirical case studies are performed using interval wealth data in the Health and Retirement Study and interval income data in the Current Population Survey.
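A minimal sketch of the simplest of these bounds: under assumption (I), E(v0|x) ≤ E(v|x) ≤ E(v1|x), so cell averages of the interval endpoints bound E(v|x). The bracketed data simulated below are illustrative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5000
x = rng.integers(0, 4, size=n)                 # a discrete covariate
v = 10 + 2 * x + rng.normal(size=n)            # latent variable, never observed exactly
v0 = np.floor(v / 5) * 5                       # only the bracket [v0, v1] is reported
v1 = v0 + 5

df = pd.DataFrame({"x": x, "v0": v0, "v1": v1})
bounds = df.groupby("x").agg(lower=("v0", "mean"), upper=("v1", "mean"))
print(bounds)    # for each x: E(v0|x) <= E(v|x) <= E(v1|x)
```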

Journal ArticleDOI
TL;DR: In this paper, the effects of progressive income taxes and education finance in a dynamic heterogeneous-agent economy are explored, both theoretically and quantitatively, in a model that yields complete analytical solutions.
Abstract: This paper studies the effects of progressive income taxes and education finance in a dynamic heterogeneous-agent economy. Such redistributive policies entail distortions to labor supply and savings, but also serve as partial substitutes for missing credit and insurance markets. The resulting tradeoffs for growth and efficiency are explored, both theoretically and quantitatively, in a model that yields complete analytical solutions. Progressive education finance always leads to higher income growth than taxes and transfers, but at the cost of lower insurance. Overall efficiency is assessed using a new measure that properly reflects aggregate resources and idiosyncratic risks but, unlike a standard social welfare function, does not reward equality per se. Simulations using empirical parameter estimates show that the efficiency costs and benefits of redistribution are generally of the same order of magnitude, resulting in plausible values for the optimal rates. Aggregate income and aggregate welfare provide only crude lower and upper bounds around the true efficiency tradeoff.

Journal ArticleDOI
TL;DR: This article used a belief elicitation procedure (proper scoring rule) to elicit subject beliefs directly and found that the stated beliefs of the subjects differ dramatically from the type of empirical or historical beliefs usually used as proxies for them.
Abstract: This paper investigates belief learning. Unlike other investigators who have been forced to use observable proxies to approximate unobserved beliefs, we have, using a belief elicitation procedure (proper scoring rule), elicited subject beliefs directly. As a result, we were able to perform a more direct test of the proposition that people behave in a manner consistent with belief learning. What we find is interesting. First, to the extent that subjects tend to "belief learn," the beliefs they use are the stated beliefs we elicit from them and not the "empirical beliefs" posited by fictitious play or Cournot models. Second, we present evidence that the stated beliefs of our subjects differ dramatically, both quantitatively and qualitatively, from the type of empirical or historical beliefs usually used as proxies for them. Third, our belief elicitation procedures allow us to examine how far we can be led astray when we are forced to infer the value of parameters using observable proxies for variables previously thought to be unobservable. By transforming a heretofore unobservable into an observable, we can see directly how parameter estimates change when this new information is introduced. Again, we demonstrate that such differences can be dramatic. Finally, our belief learning model using stated beliefs outperforms both a reinforcement and EWA model when all three models are estimated using our data.
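A sketch of one common proper scoring rule (a quadratic, Brier-type rule) of the kind used to elicit beliefs; the payoff scale and function name are illustrative assumptions, not the experiment's actual payment rule:

```python
import numpy as np

def quadratic_score(reported_beliefs, outcome, max_pay=1.0):
    """Pay the subject based on reported probabilities over K outcomes.

    A quadratic (Brier-type) rule: payment is maximal when probability 1 is
    placed on the realized outcome, and truthful reporting maximizes expected
    payment for a risk-neutral subject, which is what makes the rule 'proper'.
    """
    b = np.asarray(reported_beliefs, float)
    realized = np.zeros_like(b)
    realized[outcome] = 1.0
    return max_pay * (1.0 - 0.5 * np.sum((b - realized) ** 2))

print(quadratic_score([0.7, 0.3], outcome=0))   # 0.91
print(quadratic_score([0.7, 0.3], outcome=1))   # 0.51
```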

Journal ArticleDOI
TL;DR: In this article, the authors proposed a correction factor for the likelihood ratio test for cointegration in the vector autoregressive model to improve the finite sample properties of the test.
Abstract: With the cointegration formulation of economic long-run relations the test for cointegrating rank has become a useful econometric tool. The limit distribution of the test is often a poor approximation to the finite sample distribution and it is therefore relevant to derive an approximation to the expectation of the likelihood ratio test for cointegration in the vector autoregressive model in order to improve the finite sample properties. The correction factor depends on moments of functions of the random walk, which are tabulated by simulation, and functions of the parameters, which are estimated. From this approximation we propose a correction factor with the purpose of improving the small sample performance of the test. The correction is found explicitly in a number of simple models and its usefulness is illustrated by some simulation experiments.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce a friction into a standard model that helps resolve these anomalies, which is that international loans are imperfectly enforceable; any country can renege on its debts and suffer the consequences for future borrowing.
Abstract: Backus, Kehoe, and Kydland (1992), Baxter and Crucini (1995), and Stockman and Tesar (1995) find two major discrepancies between standard international business cycle models with complete markets and the data: In the models, cross-country correlations are much higher for consumption than for output, while in the data the opposite is true; and cross-country correlations of employment and investment are negative, while in the data they are positive. This paper introduces a friction into a standard model that helps resolve these anomalies. The friction is that international loans are imperfectly enforceable; any country can renege on its debts and suffer the consequences for future borrowing. To solve for equilibrium in this economy with endogenous incomplete markets, the methods of Marcet and Marimon (1999) are extended. Incorporating the friction helps resolve the anomalies more than does exogenously restricting the assets that can be traded.

Journal ArticleDOI
TL;DR: A novel statistic is proposed for testing the structural parameters in Instrumental Variables Regression; it is straightforward to compute and has a limiting distribution that is pivotal, with a degrees-of-freedom parameter equal to the number of tested parameters.
Abstract: We propose a novel statistic for testing the structural parameters in Instrumental Variables Regression. The statistic is straightforward to compute and has a limiting distribution that is pivotal with a degrees of freedom parameter that is equal to the number of tested parameters. It therefore differs from the Anderson-Rubin statistic, whose limiting distribution is pivotal but has a degrees of freedom parameter that is equal to the number of instruments, and from the likelihood-based Wald, Likelihood Ratio, and Lagrange Multiplier statistics, whose limiting distributions are not pivotal. We analyze the relationship between the statistic and the concentrated likelihood of the structural parameters and show that its limiting distribution is not affected by weak instruments. We discuss examples of the non-standard shapes of the asymptotically pivotal confidence sets that can be constructed using the statistic and investigate its power properties. To show its importance for practical purposes, we apply the statistic to the Angrist-Krueger (1991) data and find results similar to those in Staiger and Stock (1997). This discussion paper has resulted in a publication in Econometrica, 2002, 70(5), 1781-1803.

Journal ArticleDOI
TL;DR: It is proved that what really matters in transmission of information is the local behavior of the utilities of the senders at the ideal point of the policy maker (receiver), not the distances between the ideal points of players.
Abstract: In previous work on cheap talk, uncertainty has almost always been modeled using a single-dimensional state variable. In this paper we prove that the dimensionality of the uncertain variable has an important qualitative impact on results and yields interesting insights into the "mechanics" of information transmission. Contrary to the unidimensional case, if there is more than one sender, full revelation of information in all states of nature is generically possible, even when the conflict of interest is arbitrarily large. What really matters in transmission of information is the local behavior of senders' indifference curves at the ideal point of the receiver, not the proximity of players' ideal points.

Journal ArticleDOI
TL;DR: In this article, the authors consider an approach to the Durbin problem involving a martingale transformation of the parametric empirical process suggested by Khmaladze (1981) and show that it can be adapted to a wide variety of inference problems involving quantile regression process.
Abstract: Tests based on the quantile regression process can be formulated like the classical Kolmogorov–Smirnov and Cramer–von–Mises tests of goodness–of–fit employing the theory of Bessel processes as in Kiefer (1959). However, it is frequently desirable to formulate hypotheses involving unknown nuisance parameters, thereby jeopardizing the distribution free character of these tests. We characterize this situation as “the Durbin problem” since it was posed in Durbin (1973), for parametric empirical processes. In this paper we consider an approach to the Durbin problem involving a martingale transformation of the parametric empirical process suggested by Khmaladze (1981) and show that it can be adapted to a wide variety of inference problems involving the quantile regression process. In particular, we suggest new tests of the location shift and location–scale shift models that underlie much of classical econometric inference. The methods are illustrated with a reanalysis of data on unemployment durations from the Pennsylvania Reemployment Bonus Experiments. The Pennsylvania experiments, conducted in 1988–89, were designed to test the efficacy of cash bonuses paid for early reemployment in shortening the duration of insured unemployment spells.

Journal ArticleDOI
TL;DR: In this article, the authors consider a general specification of the latent demand and information structure for first-price, second-price, ascending (English), and descending (Dutch) auctions, address identification of a series of nested models, and derive testable restrictions enabling discrimination between models on the basis of observed data.
Abstract: This paper presents new identification results for models of first-price, second-price, ascending (English), and descending (Dutch) auctions. We consider a general specification of the latent demand and information structure, nesting both private values and common values models, and allowing correlated types as well as ex ante asymmetry. We address identification of a series of nested models and derive testable restrictions enabling discrimination between models on the basis of observed data. The simplest model, symmetric independent private values, is nonparametrically identified even if only the transaction price from each auction is observed. For richer models, identification and testable restrictions may be obtained when additional information of one or more of the following types is available: (i) the identity of the winning bidder or other bidders; (ii) one or more bids in addition to the transaction price; (iii) exogenous variation in the number of bidders; (iv) bidder-specific covariates. While many private values (PV) models are nonparametrically identified and testable with commonly available data, identification of common values (CV) models requires stringent assumptions. Nonetheless, the PV model can be tested against the CV alternative, even when neither model is identified.

Report SeriesDOI
TL;DR: In this paper, the authors provide easy-to-verify sufficient conditions for the consistency and asymptotic normality of a class of semiparametric optimization estimators in which the criterion function does not obey standard smoothness conditions and simultaneously depends on nonparametric estimators that can themselves depend on the parameters to be estimated.
Abstract: We provide easy to verify sufficient conditions for the consistency and asymptotic normality of a class of semiparametric optimization estimators where the criterion function does not obey standard smoothness conditions and simultaneously depends on some nonparametric estimators that can themselves depend on the parameters to be estimated. Our results extend existing theories such as those of Pakes and Pollard (1989), Andrews (1994a), and Newey (1994). We also show that bootstrap provides asymptotically correct confidence regions for the finite dimensional parameters. We apply our results to two examples: a 'hit rate' and a partially linear median regression with some endogenous regressors.

Journal ArticleDOI
TL;DR: In this article, a new specification test for IV estimators is developed, adopting a particular second-order approximation of Bekker; the test compares the forward (conventional) 2SLS estimator of the coefficient of the right-hand-side endogenous variable with the reverse 2SLS estimator of the same unknown parameter obtained when the normalization is changed.
Abstract: We develop a new specification test for IV estimators adopting a particular second order approximation of Bekker. The new specification test compares the difference of the forward (conventional) 2SLS estimator of the coefficient of the right-hand side endogenous variable with the reverse 2SLS estimator of the same unknown parameter when the normalization is changed. Under the null hypothesis that conventional first order asymptotics provide a reliable guide to inference, the two estimates should be very similar. Our test sees whether the resulting difference in the two estimates satisfies the results of second order asymptotic theory. Essentially the same idea is applied to develop another new specification test using second-order unbiased estimators of the type first proposed by Nagar. If the forward and reverse Nagar-type estimators are not significantly different we recommend estimation by LIML, which we demonstrate is the optimal linear combination of the Nagar-type estimators (to second order). We also demonstrate the high degree of similarity for k-class estimators between the approach of Bekker and the Edgeworth expansion approach of Rothenberg. An empirical example and Monte Carlo evidence demonstrate the operation of the new specification test.
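A minimal sketch of the forward-versus-reverse 2SLS comparison underlying the test; the data-generating process is illustrative, and the informal comparison below omits the paper's second-order test statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 2000, 0.8

# one endogenous regressor x, instruments z, structural equation y = x*beta + u
z = rng.normal(size=(n, 3))
v = rng.normal(size=n)
u = 0.5 * v + rng.normal(size=n)              # endogeneity: u correlated with x
x = z @ np.array([1.0, 0.5, 0.2]) + v
y = x * beta + u

def tsls(y_, x_, z_):
    """2SLS with a single right-hand-side endogenous variable (no intercept)."""
    xhat = z_ @ np.linalg.lstsq(z_, x_, rcond=None)[0]   # first-stage fitted values
    return (xhat @ y_) / (xhat @ x_)

beta_forward = tsls(y, x, z)          # regress y on x
beta_reverse = 1.0 / tsls(x, y, z)    # regress x on y, then invert the coefficient

# if first-order asymptotics are a reliable guide, the two estimates should be close
print(beta_forward, beta_reverse, beta_forward - beta_reverse)
```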

Journal ArticleDOI
TL;DR: This paper proves that the NPL method produces the maximum likelihood estimator under the same conditions as NFXP, defines a class of consistent sequential estimators that encompasses the MLE and the Hotz-Miller estimator, and obtains a recursive expression for their asymptotic covariance matrices.
Abstract: This paper proposes a procedure for the estimation of discrete Markov decision models and studies its statistical and computational properties. Our Nested Pseudo-Likelihood method (NPL) is similar to Rust's Nested Fixed Point algorithm (NFXP), but the order of the two nested algorithms is swapped. First, we prove that NPL produces the Maximum Likelihood Estimator under the same conditions as NFXP. Our procedure requires fewer policy iterations at the expense of more likelihood-climbing iterations. We focus on a class of infinite-horizon, partial likelihood problems for which NPL results in large computational gains. Second, based on this algorithm, we define a class of consistent and asymptotically equivalent Sequential Policy Iteration (PI) estimators, which encompasses both Hotz-Miller's CCP estimator and the partial Maximum Likelihood estimator. This presents the researcher with a "menu" of sequential estimators reflecting a trade-off between finite-sample precision and computational cost. Using actual and simulated data we compare the relative performance of these estimators. In all our experiments the benefits in terms of precision of using a 2-stage PI estimator instead of 1-stage (i.e., Hotz-Miller) are very significant. More interestingly, the benefits of MLE relative to 2-stage PI are small.
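A compact illustration of the swapped nesting that NPL uses, on a toy renewal model with logit shocks; the model primitives, parameter values, and helper functions below are assumptions for illustration, not the paper's specification:

```python
import numpy as np
from scipy.optimize import minimize_scalar

S, beta, gamma = 10, 0.95, 0.5772156649    # states, discount factor, Euler's constant
F0 = np.eye(S, k=1); F0[-1, -1] = 1.0      # action 0: state drifts up by one
F1 = np.zeros((S, S)); F1[:, 0] = 1.0      # action 1: state resets to zero

def utilities(theta):
    x = np.arange(S)
    return np.column_stack([-0.2 * x, -theta * np.ones(S)])   # u(x,0), u(x,1)

def psi(theta, P):
    """Policy-iteration mapping: given CCPs P (S x 2), return updated CCPs."""
    u = utilities(theta)
    Fbar = P[:, [0]] * F0 + P[:, [1]] * F1            # transition under policy P
    e = gamma - np.log(P)                             # E[shock | action chosen], logit
    V = np.linalg.solve(np.eye(S) - beta * Fbar, np.sum(P * (u + e), axis=1))
    v = np.column_stack([u[:, 0] + beta * F0 @ V, u[:, 1] + beta * F1 @ V])
    v -= v.max(axis=1, keepdims=True)
    return np.exp(v) / np.exp(v).sum(axis=1, keepdims=True)

# fake data: states and actions drawn from the model's own CCPs at theta = 1.5
P_true = np.full((S, 2), 0.5)
for _ in range(200):
    P_true = psi(1.5, P_true)                         # fixed point = true CCPs
rng = np.random.default_rng(0)
xs = rng.integers(0, S, size=5000)
acts = (rng.random(5000) < P_true[xs, 1]).astype(int)

# NPL iterations: maximize the pseudo-likelihood given the current CCPs, then
# update the CCPs with one application of psi (order swapped relative to NFXP)
P = np.full((S, 2), 0.5)
for k in range(5):
    nll = lambda th: -np.sum(np.log(psi(th, P)[xs, acts]))
    theta_hat = minimize_scalar(nll, bounds=(0.1, 5.0), method="bounded").x
    P = psi(theta_hat, P)
    print(f"NPL iteration {k + 1}: theta_hat = {theta_hat:.3f}")
```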