
Showing papers in "Econometrica" in 2005


Journal ArticleDOI
TL;DR: In this article, the authors disentangle the impact of schools and teachers in influencing achievement with special attention given to the potential problems of omitted or mismeasured variables and of student and school selection.
Abstract: This paper disentangles the impact of schools and teachers in influencing achievement with special attention given to the potential problems of omitted or mismeasured variables and of student and school selection. Unique matched panel data from the UTD Texas Schools Project permit the identification of teacher quality based on student performance along with the impact of specific, measured components of teachers and schools. Semiparametric lower bound estimates of the variance in teacher quality based entirely on within-school heterogeneity indicate that teachers have powerful effects on reading and mathematics achievement, though little of the variation in teacher quality is explained by observable characteristics such as education or experience. The results suggest that the effects of a costly ten student reduction in class size are smaller than the benefit of moving one standard deviation up the teacher quality distribution, highlighting the importance of teacher effectiveness in the determination of school quality.

3,076 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose and axiomatize a model of preferences over acts such that the decision maker prefers act f to act g if and only if Eμφ(Eπu ◦ f) ≥ Eμφ(Eπu ◦ g), where E is the expectation operator, u is a vN-M utility function, φ is an increasing transformation, and μ is a subjective probability over the set Π of probability measures π that the decision maker thinks are relevant given his subjective information.
Abstract: We propose and axiomatize a model of preferences over acts such that the decision maker prefers act f to act g if and only if Eμφ(Eπu ◦ f) ≥ Eμφ(Eπu ◦ g), where E is the expectation operator, u is a vN-M utility function, φ is an increasing transformation, and μ is a subjective probability over the set Π of probability measures π that the decision maker thinks are relevant given his subjective information. A key feature of our model is that it achieves a separation between ambiguity, identified as a characteristic of the decision maker’s subjective information, and ambiguity attitude, a characteristic of the decision maker’s tastes. We show that attitudes towards risk are characterized by the shape of u, as usual, while attitudes towards ambiguity are characterized by the shape of φ. We also derive φ(x) = −(1/α)e^(−αx) as the special case of constant ambiguity aversion. Ambiguity itself is defined behaviorally and is shown to be characterized by properties of the subjective set of measures Π. This characterization of ambiguity is formally related to the definitions of subjective ambiguity advanced by Epstein-Zhang (2001) and Ghirardato-Marinacci (2002). One advantage of this model is that the well-developed machinery for dealing with risk attitudes can be applied as well to ambiguity attitudes. The model is also distinct from many in the literature on ambiguity in that it allows smooth, rather than kinked, indifference curves. This leads to different behavior and improved tractability, while still sharing the main features (e.g., Ellsberg’s Paradox, etc.). The Maxmin EU model (e.g., Gilboa and Schmeidler (1989)) with a given set of measures may be seen as an extreme case of our model with infinite ambiguity aversion. Two illustrative applications to portfolio choice are offered.
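
The representation above is straightforward to evaluate numerically. Below is a minimal Python sketch (not from the paper) that computes Eμφ(Eπu ◦ f) for an Ellsberg-style two-color urn, using the constant-ambiguity-aversion transformation φ(x) = −(1/α)e^(−αx) mentioned in the abstract; the set of measures Π, the prior μ over it, the utility function, and all parameter values are illustrative assumptions.

```python
import numpy as np

def smooth_ambiguity_value(act, measures, mu, u, phi):
    """Compute E_mu[ phi( E_pi[ u(act) ] ) ] for an act given as state-contingent payoffs."""
    expected_utils = np.array([pi @ u(act) for pi in measures])  # E_pi[u(f)] for each pi in Pi
    return mu @ phi(expected_utils)

# Illustrative primitives (assumed, not from the paper)
alpha = 2.0                                              # ambiguity aversion
u = lambda x: np.sqrt(x)                                 # vN-M utility (risk aversion)
phi = lambda x: -(1.0 / alpha) * np.exp(-alpha * x)      # constant ambiguity aversion

# Two states (ball is Red / Black); the urn's composition is uncertain.
measures = [np.array([0.4, 0.6]), np.array([0.6, 0.4])]  # set Pi of relevant priors
mu = np.array([0.5, 0.5])                                # subjective probability over Pi

bet_on_red = np.array([100.0, 0.0])   # act: pays 100 if the ball is Red, 0 if Black

print("ambiguous urn  :", smooth_ambiguity_value(bet_on_red, measures, mu, u, phi))
print("known 50/50 urn:", smooth_ambiguity_value(bet_on_red, [np.array([0.5, 0.5])],
                                                 np.array([1.0]), u, phi))
```

Because φ is concave, the bet on the ambiguous urn is valued below the identical bet on a known 50/50 urn, the Ellsberg-type pattern the abstract alludes to; with a linear φ the two valuations coincide.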

1,285 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study how intermediation and asset prices in over-the-counter markets are aected by illiquidity associated with search and bargaining, and compute explicitly the prices at which investors trade with each other as well as marketmakers' bid and ask prices in a dynamic model with strategic agents.
Abstract: We study how intermediation and asset prices in over-the-counter markets are affected by illiquidity associated with search and bargaining. We compute explicitly the prices at which investors trade with each other as well as marketmakers’ bid and ask prices in a dynamic model with strategic agents. Bid-ask spreads are lower if investors can more easily find other investors, or have easier access to multiple marketmakers. With a monopolistic marketmaker, bid-ask spreads are higher if investors have easier access to the marketmaker. We characterize endogenous search and welfare, and discuss empirical implications.

1,066 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a model of quantile treatment effects (QTE) in the presence of endogeneity and obtained conditions for identification of the QTE without functional form assumptions.
Abstract: The ability of quantile regression models to characterize the heterogeneous impact of variables on different points of an outcome distribution makes them appealing in many economic applications. However, in observational studies, the variables of interest (e.g., education, prices) are often endogenous, making conventional quantile regression inconsistent and hence inappropriate for recovering the causal effects of these variables on the quantiles of economic outcomes. In order to address this problem, we develop a model of quantile treatment effects (QTE) in the presence of endogeneity and obtain conditions for identification of the QTE without functional form assumptions. The principal feature of the model is the imposition of conditions that restrict the evolution of ranks across treatment states. This feature allows us to overcome the endogeneity problem and recover the true QTE through the use of instrumental variables. The proposed model can also be equivalently viewed as a structural simultaneous equation model with nonadditive errors, where QTE can be interpreted as the structural quantile effects (SQE).
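
One common way to operationalize this identification result (a grid-based "inverse" quantile regression, which is not described in the abstract itself) is: for each candidate value of the structural quantile effect, subtract it from the outcome and check whether the instrument drops out of an ordinary quantile regression. The Python sketch below illustrates that logic for a single binary treatment; the data-generating process, the grid, and the use of statsmodels' QuantReg are illustrative assumptions, not the authors' code.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(0)
n, tau = 5000, 0.5

# Illustrative DGP with an endogenous binary treatment D and instrument Z.
z = rng.binomial(1, 0.5, n)
v = rng.normal(size=n)
d = (0.5 * z + v > 0).astype(float)      # treatment correlated with the unobservable v
y = 1.0 * d + v + rng.normal(size=n)     # median treatment effect is 1.0 by construction

grid = np.linspace(0.0, 2.0, 41)
stats = []
for a in grid:
    # Quantile regression of y - a*d on a constant and the instrument.
    res = QuantReg(y - a * d, sm.add_constant(z)).fit(q=tau)
    stats.append(abs(res.params[1]))      # |coefficient on Z|: should vanish at the true effect

qte_hat = grid[int(np.argmin(stats))]
print("estimated median treatment effect:", qte_hat)
```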

902 citations


Journal ArticleDOI
TL;DR: In this article, the marginal treatment effect (MTE) is used to unify the nonparametric literature on treatment effects with the econometric literature on structural estimation using a non-parametric analog of a policy invariant parameter; to generate a variety of treatment effects from a common semiparametric functional form; and to explore what policy questions commonly used estimators in the treatment effect literature answer.
Abstract: This paper uses the marginal treatment effect (MTE) to unify the nonparametric literature on treatment effects with the econometric literature on structural estimation using a nonparametric analog of a policy invariant parameter; to generate a variety of treatment effects from a common semiparametric functional form; to organize the literature on alternative estimators; and to explore what policy questions commonly used estimators in the treatment effect literature answer. A fundamental asymmetry intrinsic to the method of instrumental variables (IV) is noted. Recent advances in IV estimation allow for heterogeneity in responses but not in choices, and the method breaks down when both choice and response equations are heterogeneous in a general way.
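
For readers unfamiliar with the MTE, the unifying idea can be stated compactly: conventional treatment parameters are weighted averages of the MTE, with weights determined by the estimator or policy question at hand. A standard statement of this relationship, in our own simplified notation, is:

```latex
% Marginal treatment effect and treatment parameters as weighted averages of it
\mathrm{MTE}(x, u_D) \;=\; E\!\left[\, Y_1 - Y_0 \mid X = x,\; U_D = u_D \,\right]

\Delta^{j}(x) \;=\; \int_0^1 \mathrm{MTE}(x, u)\,\omega^{j}(x, u)\,du ,
\qquad \text{e.g. } \omega^{\mathrm{ATE}}(x, u) = 1 .
```

Different parameters (the average treatment effect, treatment on the treated, IV estimands, policy-relevant effects) correspond to different weight functions ω^j, which is the sense in which a single MTE function unifies the treatment-effect and structural literatures described in the abstract.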

685 citations


Journal ArticleDOI
TL;DR: The authors applied mixed logit to combined revealed and stated preference data on commuter choices of whether to pay a toll for congestion-free express travel and found that motorists exhibit high values of travel time and reliability, and substantial heterogeneity in those values.
Abstract: We apply recent econometric advances to study the distribution of commuters' preferences for speedy and reliable highway travel. Our analysis applies mixed logit to combined revealed and stated preference data on commuter choices of whether to pay a toll for congestion-free express travel. We find that motorists exhibit high values of travel time and reliability, and substantial heterogeneity in those values. We suggest that road pricing policies designed to cater to such varying preferences can improve efficiency and reduce the disparity of welfare impacts compared with recent pricing experiments.
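
Mixed logit with random coefficients is typically estimated by simulated maximum likelihood: each commuter's choice probability is averaged over draws of individual-specific coefficients. The sketch below shows that computation for a binary pay-toll versus free-route choice with random values of time and reliability; the lognormal mixing distributions, variable names, and parameter values are illustrative assumptions, not the authors' specification.

```python
import numpy as np

def simulated_loglik(theta, choice, dtime, dreli, dcost, n_draws=200, seed=0):
    """Simulated log-likelihood of a binary mixed logit.
    choice: 1 if the toll (express) route was chosen.
    dtime, dreli, dcost: toll-route minus free-route time, unreliability, and cost.
    Coefficients on time and reliability are (negative) lognormal random coefficients."""
    mu_t, sig_t, mu_r, sig_r, b_cost = theta
    rng = np.random.default_rng(seed)               # fixed draws across evaluations
    n = choice.shape[0]
    bt = -np.exp(mu_t + sig_t * rng.normal(size=(n, n_draws)))
    br = -np.exp(mu_r + sig_r * rng.normal(size=(n, n_draws)))
    v = bt * dtime[:, None] + br * dreli[:, None] + b_cost * dcost[:, None]
    p_toll = 1.0 / (1.0 + np.exp(-v))               # logit probability of the toll route
    p_choice = np.where(choice[:, None] == 1, p_toll, 1.0 - p_toll)
    return np.log(p_choice.mean(axis=1)).sum()      # average over draws, then log and sum

# Tiny illustration on simulated data (assumed values, not the paper's estimates).
rng = np.random.default_rng(1)
n = 1000
dtime = rng.uniform(-20, -5, n)      # express route saves 5-20 minutes
dreli = rng.uniform(-10, 0, n)       # and is less variable
dcost = rng.uniform(1, 8, n)         # but costs $1-$8 more
choice = rng.binomial(1, 0.4, n)     # placeholder choices
theta0 = np.array([-2.5, 0.5, -3.0, 0.5, -0.3])
print(simulated_loglik(theta0, choice, dtime, dreli, dcost))
# In practice one maximizes this function over theta, e.g. with scipy.optimize.minimize
# applied to its negative, keeping the simulation draws fixed across iterations.
```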

467 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the "ex post equivalence" question: when is interim implementation on all possible type spaces equivalent to requiring ex post implementation on the same type space.
Abstract: The mechanism design literature assumes too much common knowledge of the environment among the players and planner. We relax this assumption by studying implementation on richer type spaces, with more higher order uncertainty. We study the "ex post equivalence" question: when is interim implementation on all possible type spaces equivalent to requiring ex post implementation on the same type space?

448 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the driving forces behind informal sanctions in cooperation games and the extent to which theories of fairness and reciprocity capture these forces, and they find that cooperators' punishment is almost exclusively targeted toward the defectors, but the latter also impose a considerable amount of spiteful punishment on the cooperators.
Abstract: This paper investigates the driving forces behind informal sanctions in cooperation games and the extent to which theories of fairness and reciprocity capture these forces. We find that cooperators' punishment is almost exclusively targeted toward the defectors, but the latter also impose a considerable amount of spiteful punishment on the cooperators. However, spiteful punishment vanishes if the punishers can no longer affect the payoff differences between themselves and the punished individual, whereas the cooperators even increase the resources devoted to punishment in this case. Our data also discriminate between different fairness principles. Fairness theories that are based on the assumption that players compare their own payoff to the group's average or the group's total payoff cannot explain the fact that cooperators target their punishment at the defectors. Fairness theories that assume that players aim to minimize payoff inequalities cannot explain the fact that cooperators punish defectors even if payoff inequalities cannot be reduced. Therefore, retaliation, i.e., the desire to harm those who committed unfair acts, seems to be the most important motive behind fairness-driven informal sanctions.

443 citations


Journal ArticleDOI
TL;DR: A dynamic matching model of demand under uncertainty in which patients learn from prescription experience about the effectiveness of alternative drugs is estimated, indicating that while there is substantial heterogeneity in drug efficacy across patients, learning enables patients and their doctors to dramatically reduce the costs of uncertainty in pharmaceutical markets.
Abstract: Exploiting a rich panel data set on anti-ulcer drug prescriptions, we measure the effects of uncertainty and learning in the demand for pharmaceutical drugs. We estimate a dynamic matching model of demand under uncertainty in which patients learn from prescription experience about the effectiveness of alternative drugs. Unlike previous models, we allow drugs to have distinct symptomatic and curative effects, and endogenize treatment length by allowing drug choices to affect patients’ underlying probability of recovery. We find that drugs’ rankings along these dimensions differ, with high symptomatic effects for drugs with the highest market shares and high curative effects for drugs with the greatest medical efficacy. Our results also indicate that while there is substantial heterogeneity in drug efficacy across patients, learning enables patients and their doctors to dramatically reduce the costs of uncertainty in pharmaceutical markets.
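
The learning mechanism the abstract describes can be illustrated with a standard normal-normal Bayesian updating rule: each prescription yields a noisy signal of a drug's match quality for a patient, and beliefs sharpen with experience. The sketch below is a generic illustration of that mechanism, not the paper's structural model; all priors and noise parameters are assumed.

```python
import numpy as np

def update_beliefs(prior_mean, prior_var, signal, signal_var):
    """One normal-normal Bayesian update of beliefs about a drug's match quality."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / signal_var)
    post_mean = post_var * (prior_mean / prior_var + signal / signal_var)
    return post_mean, post_var

rng = np.random.default_rng(1)
true_match_quality = 0.8            # assumed: this drug happens to work well for this patient
mean, var = 0.0, 1.0                # common prior before any prescriptions
for t in range(5):                  # five prescription spells
    signal = true_match_quality + rng.normal(scale=0.5)   # noisy symptomatic outcome
    mean, var = update_beliefs(mean, var, signal, 0.25)    # signal variance = 0.5**2
    print(f"after spell {t+1}: belief mean={mean:.2f}, variance={var:.3f}")
```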

432 citations


Journal ArticleDOI
TL;DR: The authors compare three market structures for monetary economies: bargaining (search equilibrium), price taking (competitive equilibrium), and price posting (competitive search equilibrium) and show that under bargaining, trade and entry are both inefficient, and inflation implies first-order welfare losses.
Abstract: We compare three market structures for monetary economies: bargaining (search equilibrium); price taking (competitive equilibrium); and price posting (competitive search equilibrium). We also extend work on the microfoundations of money by allowing a general matching technology and entry. We study how equilibrium and the effects of policy depend on market structure. Under bargaining, trade and entry are both inefficient, and inflation implies first-order welfare losses. Under price taking, the Friedman rule solves the first inefficiency but not the second, and inflation may actually improve welfare. Under posting, the Friedman rule yields the first best, and inflation implies second-order welfare losses.

399 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that people are born with different abilities to turn effort into output, that is, with different skills, and that these skills evolve stochastically.
Abstract: People are born with different abilities to turn effort into output—that is, with different skills. These skills evolve stochastically. Talented people may awake one day with crippling back pain or chronic fatigue syndrome that renders them low skilled. Unemployed people may suddenly find a good job opportunity. Some people learn faster or forget slower than others. It is plausible to think of skills at birth as private information (as in Mirrlees (1971)), but these kinds of changes in skills are also difficult for outside observers to verify directly.

Journal ArticleDOI
TL;DR: In this paper, two new methods for estimating models with nonseparable errors and endogenous regressors were proposed, one estimating the response of the conditional mean of the dependent variable to a change in the explanatory variable while conditioning on an external variable and then undoing the conditioning.
Abstract: We propose two new methods for estimating models with nonseparable errors and endogenous regressors. The first method estimates a local average response. One estimates the response of the conditional mean of the dependent variable to a change in the explanatory variable while conditioning on an external variable and then undoes the conditioning. The second method estimates the nonseparable function and the joint distribution of the observable and unobservable explanatory variables. An external variable is used to impose an equality restriction, at two points of support, on the conditional distribution of the unobservable random term given the regressor and the external variable. Our methods apply to cross sections, but our lead examples involve panel data cases in which the choice of the external variable is guided by the assumption that the distribution of the unobservable variables is exchangeable in the values of the endogenous variable for members of a group.

Journal ArticleDOI
TL;DR: General model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks are developed, which are both easy-to-implement and highly accurate in empirically realistic situations.
Abstract: We develop general model-free adjustment procedures for the calculation of unbiased volatility loss functions based on practically feasible realized volatility benchmarks. The procedures, which exploit recent nonparametric asymptotic distributional results, are both easy-to-implement and highly accurate in empirically realistic situations. We also illustrate that properly accounting for the measurement errors in the volatility forecast evaluations reported in the existing literature can result in markedly higher estimates for the true degree of return volatility predictability.
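
The benchmark the abstract refers to, realized volatility, is the sum of squared intraday returns, and forecast evaluation then amounts to a loss computation or regression against that benchmark. The sketch below computes realized variance and a Mincer-Zarnowitz style evaluation regression; it illustrates the objects involved, not the paper's specific unbiasedness adjustment, and all data are simulated.

```python
import numpy as np

def realized_variance(intraday_returns):
    """Daily realized variance: sum of squared high-frequency returns within each day."""
    return np.sum(intraday_returns ** 2, axis=1)

rng = np.random.default_rng(2)
n_days, n_intra = 500, 78                        # e.g. 78 five-minute returns per day
true_daily_vol = 0.01 * np.exp(0.5 * rng.normal(size=n_days))        # latent volatility
r = true_daily_vol[:, None] / np.sqrt(n_intra) * rng.normal(size=(n_days, n_intra))

rv = realized_variance(r)                        # feasible benchmark for the latent variance
forecast = true_daily_vol ** 2 * np.exp(0.2 * rng.normal(size=n_days))   # a noisy forecast

# Mincer-Zarnowitz evaluation regression: rv_t = a + b * forecast_t + error_t
X = np.column_stack([np.ones(n_days), forecast])
a_hat, b_hat = np.linalg.lstsq(X, rv, rcond=None)[0]
print(f"intercept={a_hat:.6f}, slope={b_hat:.3f}")   # an unbiased forecast would suggest (0, 1)
```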

Journal ArticleDOI
TL;DR: In this paper, the authors derived a lower bound for the volatility of the permanent component of investors' marginal utility of wealth or, more generally, asset pricing kernels, based on return properties of long-term zero-coupon bonds, risk-free bonds, and other risky securities.
Abstract: We derive a lower bound for the volatility of the permanent component of investors' marginal utility of wealth or, more generally, asset pricing kernels. The bound is based on return properties of long-term zero-coupon bonds, risk-free bonds, and other risky securities. We find the permanent component of the pricing kernel to be very volatile; its volatility is at least as large as the volatility of the stochastic discount factor. A related measure for the transitory component suggests that it is considerably less important. We also show that, for many cases where the pricing kernel is a function of consumption, innovations to consumption need to have permanent effects.

Journal ArticleDOI
TL;DR: In this paper, the authors extend the standard model of general equilibrium with incomplete markets to allow for default and punishment by thinking of assets as pools, and show that refined equilibrium always exists in their model, and that default, in conjunction with refinement, opens the door to a theory of endogenous assets.
Abstract: We extend the standard model of general equilibrium with incomplete markets to allow for default and punishment by thinking of assets as pools. The equilibrating variables include expected delivery rates, along with the usual prices of assets and commodities. By reinterpreting the variables, our model encompasses a broad range of adverse selection and signalling phenomena in a perfectly competitive, general equilibrium framework. Perfect competition eliminates the need for lenders to compute how the size of their loan or the price they quote might affect default rates. It also makes for a simple equilibrium refinement, which we propose in order to rule out irrational pessimism about deliveries of untraded assets. We show that refined equilibrium always exists in our model, and that default, in conjunction with refinement, opens the door to a theory of endogenous assets. The market chooses the promises, default penalties, and quantity constraints of actively traded assets.

ReportDOI
TL;DR: In this article, the authors develop a simple algorithm for computing an “oblivious equilibrium,” in which each firm is assumed to make decisions based only on its own state and knowledge of the long run average industry state, but where firms ignore current information about competitors' states.
Abstract: We propose an approximation method for analyzing Ericson and Pakes (1995)-style dynamic models of imperfect competition. We develop a simple algorithm for computing an “oblivious equilibrium,” in which each firm is assumed to make decisions based only on its own state and knowledge of the long run average industry state, but where firms ignore current information about competitors’ states. We prove that, as the market becomes large, if the equilibrium distribution of firm states obeys a certain “light-tail” condition, then oblivious equilibria closely approximate Markov perfect equilibria. We develop bounds that can be computed to assess the accuracy of the approximation for any given applied problem. Through computational experiments, we find that the method often generates useful approximations for industries with hundreds of firms and in some cases even tens of firms.
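
The algorithm the abstract describes has a simple fixed-point structure: guess a long-run average industry state, solve each firm's single-agent dynamic program against that fixed aggregate, recompute the stationary distribution the resulting policy induces, and iterate. The toy Python sketch below illustrates only that loop, with a deliberately simplified profit function and state transition; it is not the Ericson-Pakes model or the authors' implementation, and all primitives are assumed.

```python
import numpy as np

S, beta, delta, cost = 10, 0.95, 0.3, 0.2      # quality states 0..9, discount, decay, investment cost

def profit(s, s_bar):
    # Toy profit: own quality relative to the average industry state (assumed functional form).
    return s / (1.0 + s_bar)

def transition_matrix(policy):
    """Transition over a firm's own state given an invest (1) / not-invest (0) policy."""
    P = np.zeros((S, S))
    for s in range(S):
        up = 0.6 if policy[s] == 1 else 0.0    # investing raises quality with probability 0.6
        P[s, min(s + 1, S - 1)] += up
        P[s, max(s - 1, 0)] += delta
        P[s, s] += 1.0 - up - delta
    return P

def solve_firm_problem(s_bar, tol=1e-8):
    """Value iteration against a fixed long-run average industry state s_bar."""
    V = np.zeros(S)
    while True:
        Q = np.zeros((S, 2))
        for a in (0, 1):
            P = transition_matrix(np.full(S, a))
            Q[:, a] = profit(np.arange(S), s_bar) - cost * a + beta * P @ V
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)
        V = V_new

s_bar = 5.0                                    # initial guess of the average state
for it in range(200):
    policy = solve_firm_problem(s_bar)
    P = transition_matrix(policy)
    pi = np.full(S, 1.0 / S)                   # stationary distribution of the induced chain
    for _ in range(5000):
        pi = pi @ P
    s_bar_new = pi @ np.arange(S)
    if abs(s_bar_new - s_bar) < 1e-6:
        break
    s_bar = 0.5 * s_bar + 0.5 * s_bar_new      # damped update for stability
print("oblivious long-run average state:", round(s_bar, 3))
```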

Journal ArticleDOI
TL;DR: This paper studies a game of strategic experimentation with two-armed bandits whose risky arm might yield a payoff only after some exponentially distributed random time and characterizes the unique symmetric Markovian equilibrium of the game, which is in mixed strategies.
Abstract: This paper studies a game of strategic experimentation with two-armed bandits whose risky arm might yield a payoff only after some exponentially distributed random time. Because of free-riding, there is an inefficiently low level of experimentation in any equilibrium where the players use stationary Markovian strategies with posterior beliefs as the state variable. After characterizing the unique symmetric Markovian equilibrium of the game, which is in mixed strategies, we construct a variety of pure-strategy equilibria. There is no equilibrium where all players use simple cut-off strategies. Equilibria where players switch finitely often between the roles of experimenter and free-rider all lead to the same pattern of information acquisition; the efficiency of these equilibria depends on the way players share the burden of experimentation among them. In equilibria where players switch roles infinitely often, they can acquire an approximately efficient amount of information, but the rate at which it is acquired still remains inefficient; moreover, the expected payoff of an experimenter exhibits the novel feature that it rises as players become more pessimistic. Finally, over the range of beliefs where players use both arms a positive fraction of the time, the symmetric equilibrium is dominated by any asymmetric one in terms of aggregate payoffs.

Journal ArticleDOI
TL;DR: In this paper, a Generalized Method of Moments (GMM) Lagrange multiplier statistic is proposed, which uses the Jacobian at the evaluated parameter value instead of the expected Jacobian.
Abstract: We propose a Generalized Method of Moments (GMM) Lagrange multiplier statistic, i.e., the K-statistic, that uses the Jacobian at the evaluated parameter value instead of the expected Jacobian. To obtain its limit behavior, we use a novel assumption that brings GMM closer to maximum likelihood and which is easily satisfied. The usual asymptotic χ2 distribution of the K-statistic then holds under a wider set of circumstances, like weak and many instrument asymptotics and combinations thereof, than the standard full rank case for the Jacobian. The behavior of the K-statistic can be spurious around inflexion points and the maximum of the objective function since the moment conditions are then not satisfied. Combinations of the K-statistic with statistics that test the validity of the moment equations overcome the spurious behavior. We conduct a power comparison to test for the risk aversion parameter in a stochastic discount factor model and construct its confidence set for observed consumption growth and asset return series.

Journal ArticleDOI
TL;DR: In this article, the properties of least squares estimators for cross-section data with common shocks, such as macroeconomic shocks, have been analyzed, and necessary and sufficient conditions are given for consistency.
Abstract: This paper considers regression models for cross-section data that exhibit cross-section dependence due to common shocks, such as macroeconomic shocks. The paper analyzes the properties of least squares (LS) estimators in this context. The results of the paper allow for any form of cross-section dependence and heterogeneity across population units. The probability limits of the LS estimators are determined, and necessary and sufficient conditions are given for consistency. The asymptotic distributions of the estimators are found to be mixed normal after recentering and scaling. The t, Wald, and F statistics are found to have asymptotic standard normal, χ2, and scaled χ2 distributions, respectively, under the null hypothesis when the conditions required for consistency of the parameter under test hold. However, the absolute values of t, Wald, and F statistics are found to diverge to infinity under the null hypothesis when these conditions fail. Confidence intervals exhibit similarly dichotomous behavior. Hence, common shocks are found to be innocuous in some circumstances, but quite problematic in others. Models with factor structures for errors and regressors are considered. Using the general results, conditions are determined under which consistency of the LS estimators holds and fails in models with factor structures. The results are extended to cover heterogeneous and functional factor structures in which common factors have different impacts on different population units.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the effect of employer-provided health insurance on job mobility rates and economic welfare using a search, matching, and bargaining framework, and find that workers at jobs with health insurance are less likely to leave those jobs, even after conditioning on the wage rate.
Abstract: We investigate the effect of employer-provided health insurance on job mobility rates and economic welfare using a search, matching, and bargaining framework. In our model, health insurance coverage decisions are made in a cooperative manner that recognizes the productivity effects of health insurance as well as its nonpecuniary value to the employee. The resulting equilibrium is one in which not all employment matches are covered by health insurance, wages at jobs providing health insurance are larger (in a stochastic sense) than those at jobs without health insurance, and workers at jobs with health insurance are less likely to leave those jobs, even after conditioning on the wage rate. We estimate the model using the 1996 panel of the Survey of Income and Program Participation, and find that the employer-provided health insurance system does not lead to any serious inefficiencies in mobility decisions.

Journal ArticleDOI
TL;DR: This paper studies the Self Sufficiency Project (SSP) welfare demonstration, in which members of a randomly assigned treatment group could receive a subsidy for full-time work, but only if they began working full time within 12 months of random assignment.
Abstract: In the Self Sufficiency Project (SSP) welfare demonstration, members of a randomly assigned treatment group could receive a subsidy for full-time work. The subsidy was available for 3 years, but only to people who began working full time within 12 months of random assignment. A simple optimizing model suggests that the eligibility rules created an “establishment” incentive to find a job and leave welfare within a year of random assignment, and an “entitlement” incentive to choose work over welfare once eligibility was established. Building on this insight, we develop an econometric model of welfare participation that allows us to separate the two effects and estimate the impact of the earnings subsidy on welfare entry and exit rates among those who achieved eligibility. The combination of the two incentives explains the time profile of the experimental impacts, which peaked 15 months after random assignment and faded relatively quickly. Our findings suggest that about half of the peak impact of SSP was attributable to the establishment incentive. Despite the extra work effort generated by SSP, the program had no lasting impact on wages and little or no long-run effect on welfare participation.

Journal ArticleDOI
TL;DR: In this paper, the authors combine the microeconomic-labor and macroeconomic-equilibrium views of matching in labor markets, and obtain two new equilibrium implications of job matching and search frictions for wage inequality.
Abstract: This paper brings together the microeconomic-labor and the macroeconomic-equilibrium views of matching in labor markets. We nest a job matching model a la Jovanovic (1984) into a Mortensen and Pissarides (1994)-type equilibrium search environment. The resulting framework preserves the implications of job matching theory for worker turnover and wage dynamics, and it also allows for aggregation and general equilibrium analysis. We obtain two new equilibrium implications of job matching and search frictions for wage inequality. First, learning about match quality and worker turnover map Gaussian output noise into an ergodic wage distribution of empirically accurate shape: unimodal, skewed, with a Paretian right tail. Second, high idiosyncratic productivity risk hinders learning and sorting, and reduces wage inequality. The equilibrium solutions for the wage distribution and for the aggregate worker flows (quits to unemployment and to other jobs, displacements, hires) provide the likelihood function of the model in closed form.

Journal ArticleDOI
TL;DR: In this article, the authors analyzed the conditions under which consistent estimation can be achieved in instrumental variables (IV) regression when the available instruments are weak and the number of instruments, K n, goes to infinity with the sample size.
Abstract: This paper analyzes the conditions under which consistent estimation can be achieved in instrumental variables (IV) regression when the available instruments are weak and the number of instruments, K n , goes to infinity with the sample size. We show that consistent estimation depends importantly on the strength of the instruments as measured by r,,, the rate of growth of the so-called concentration parameter, and also on K n . In particular, when K n → ∞, the concentration parameter can grow, even if each individual instrument is only weakly correlated with the endogenous explanatory variables, and consistency of certain estimators can be established under weaker conditions than have previously been assumed in the literature. Hence, the use of many weak instruments may actually improve the performance of certain point estimators. More specifically, we find that the limited information maximum likelihood (LIML) estimator and the bias-corrected two-stage least squares (B2SLS) estimator are consistent when √K n /r n → 0, while the two-stage least squares (2SLS) estimator is consistent only if K n /r n → 0 as n → ∞. These consistency results suggest that LIML and B2SLS are more robust to instrument weakness than 2SLS.
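
The consistency contrast between 2SLS and LIML is easy to see in a small Monte Carlo. The Python sketch below generates a single endogenous regressor with many individually weak instruments and compares the two estimators, computing LIML by direct minimization of the least variance ratio; the sample size, instrument strength, and grid are illustrative assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, beta = 2000, 100, 1.0        # sample size, number of weak instruments, true coefficient

def one_draw():
    Z = rng.normal(size=(n, K))
    pi = np.full(K, 0.02)                            # each instrument individually weak
    u = rng.normal(size=n)
    x = Z @ pi + 0.8 * u + 0.6 * rng.normal(size=n)  # endogenous regressor
    y = beta * x + u
    # Projections onto the instrument space (without forming the n x n projection matrix).
    B = np.linalg.solve(Z.T @ Z, Z.T @ np.column_stack([y, x]))
    Py, Px = (Z @ B).T
    yy, xy, xx = y @ y, x @ y, x @ x
    yPy, xPy, xPx = y @ Py, x @ Py, x @ Px
    b_2sls = xPy / xPx
    # LIML: minimize the least variance ratio e'e / e'(I - P_Z)e over b, where e = y - b*x.
    def ratio(b):
        ee = yy - 2 * b * xy + b * b * xx
        ePe = yPy - 2 * b * xPy + b * b * xPx
        return ee / (ee - ePe)
    grid = np.linspace(0.0, 2.0, 401)
    b_liml = min(grid, key=ratio)
    return b_2sls, b_liml

draws = np.array([one_draw() for _ in range(50)])
print("mean 2SLS estimate:", draws[:, 0].mean().round(3))   # typically biased toward OLS here
print("mean LIML estimate:", draws[:, 1].mean().round(3))   # typically much closer to 1.0
```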

Journal ArticleDOI
TL;DR: In this article, a structural model of the following decisions by individuals: where to submit applications, which school to attend, and what field to study, is presented, and the model also includes decisions by schools as to which students to accept and how much financial aid to offer.
Abstract: This paper addresses how changing the admission and financial aid rules at colleges affects future earnings. I estimate a structural model of the following decisions by individuals: where to submit applications, which school to attend, and what field to study. The model also includes decisions by schools as to which students to accept and how much financial aid to offer. Simulating how black educational choices would change were they to face the white admission and aid rules shows that race-based advantages had little effect on earnings. However, removing race-based advantages does affect black educational outcomes. In particular, removing advantages in admissions substantially decreases the number of black students at top-tier schools, while removing advantages in financial aid causes a decrease in the number of blacks who attend college.

Journal ArticleDOI
TL;DR: The authors consider a general equilibrium model in which the distinction between uncertainty and risk is formalized by assuming agents have incomplete preferences over state-contingent consumption bundles, as in Bewley (1986).
Abstract: This paper considers a general equilibrium model in which the distinction between uncertainty and risk is formalized by assuming agents have incomplete preferences over state-contingent consumption bundles, as in Bewley (1986). Without completeness, individual decision making depends on a set of probability distributions over the state space. A bundle is preferred to another if and only if it has larger expected utility for all probabilities in this set. When preferences are complete this set is a singleton, and the model reduces to standard expected utility. In this setting, we characterize Pareto optima and equilibria, and show that the presence of uncertainty generates robust indeterminacies in equilibrium prices and allocations for any specification of initial endowments. We derive comparative statics results linking the degree of uncertainty with changes in equilibria. Despite the presence of robust indeterminacies, we show that equilibrium prices and allocations vary continuously with underlying fundamentals. Equilibria in a standard risk economy are thus robust to adding small degrees of uncertainty. Finally, we give conditions under which some assets are not traded due to uncertainty aversion.

Report SeriesDOI
TL;DR: The authors used revealed preference inequalities to provide the tightest possible (best) nonparametric bounds on predicted consumer responses to price changes using consumer-level data over a finite set of relative price changes.
Abstract: This paper uses revealed preference inequalities to provide the tightest possible (best) nonparametric bounds on predicted consumer responses to price changes using consumer-level data over a finite set of relative price changes. These responses are allowed to vary nonparametrically across the income distribution. This is achieved by combining the theory of revealed preference with the semiparametric estimation of consumer expansion paths (Engel curves). We label these expansion path based bounds on demand responses as E-bounds. Deviations from revealed preference restrictions are measured by preference perturbations which are shown to usefully characterize taste change and to provide a stochastic environment within which violations of revealed preference inequalities can be assessed.
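
The building block behind these bounds is the revealed preference inequality itself: with a finite set of observed prices and quantities, one first checks whether the data satisfy the Generalized Axiom of Revealed Preference (GARP) before tightening demand bounds. A minimal GARP check is sketched below in Python; it is the standard Afriat-Varian construction, not the paper's E-bounds procedure, and the example data are made up.

```python
import numpy as np

def satisfies_garp(prices, quantities):
    """Check GARP for T observed price-quantity pairs (one row per observation,
    one column per good)."""
    P = np.asarray(prices, dtype=float)
    Q = np.asarray(quantities, dtype=float)
    expend = P @ Q.T                        # expend[i, j] = cost of bundle j at prices i
    own = np.diag(expend)                   # own[i] = cost of bundle i at its own prices
    R = own[:, None] >= expend              # direct revealed preference: i R0 j
    for k in range(len(own)):               # Warshall transitive closure of R0
        R = R | (R[:, [k]] & R[[k], :])
    strictly_cheaper = own[None, :] > expend.T   # [i, j]: at prices j, bundle i costs strictly less
    return not np.any(R & strictly_cheaper)      # GARP: no i R j while i is strictly cheaper at j

# Illustrative data: two goods, three observations (made up, not from the paper).
prices     = [[1.0, 2.0], [2.0, 1.0], [1.0, 1.0]]
quantities = [[4.0, 1.0], [1.0, 4.0], [2.0, 2.0]]
print(satisfies_garp(prices, quantities))        # True: these choices are consistent
```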

Journal ArticleDOI
TL;DR: In this article, the authors introduce a class of strategies that generalize examples constructed in two-player games under imperfect private monitoring, and provide a simple and sharp characterization of equilibrium payoffs using those strategies.
Abstract: We introduce a class of strategies that generalizes examples constructed in two-player games under imperfect private monitoring. A sequential equilibrium is belief-free if, after every private history, each player's continuation strategy is optimal independently of his belief about his opponents' private histories. We provide a simple and sharp characterization of equilibrium payoffs using those strategies. While such strategies support a large set of payoffs, they are not rich enough to generate a folk theorem in most games besides the prisoner's dilemma, even when noise vanishes.

Journal ArticleDOI
TL;DR: In this article, the authors show that the bootstrap does not consistently estimate the asymptotic distribution of the maximum score estimator of a single-parameter estimator within a cube-root convergence class.
Abstract: This paper shows that the bootstrap does not consistently estimate the asymptotic distribution of the maximum score estimator. The theory developed also applies to other estimators within a cube-root convergence class. For some single-parameter estimators in this class, the results suggest a simple method for inference based upon the bootstrap.
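
For context, the maximum score estimator the abstract refers to (Manski's estimator for a binary choice model under a conditional median restriction) maximizes the number of correctly signed predictions, which is why it converges at the cube-root rate. A minimal sketch of the estimator itself, with the coefficient on the first regressor normalized to one and a grid search over the remaining parameter, is below; the data and normalization are illustrative, and the sketch says nothing about the paper's inference results.

```python
import numpy as np

def max_score(y, X, grid):
    """Maximum score estimator for binary y with the scale normalization b[0] = 1.
    Maximizes sum_i (2*y_i - 1) * sign(x_i' b) over b = (1, b2) on a grid."""
    best_b2, best_score = None, -np.inf
    for b2 in grid:
        index = X[:, 0] + b2 * X[:, 1]
        score = np.sum((2 * y - 1) * np.sign(index))
        if score > best_score:
            best_b2, best_score = b2, score
    return best_b2

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 2))
eps = rng.standard_cauchy(size=n)                      # heavy-tailed, median-zero errors
y = (X[:, 0] + 0.5 * X[:, 1] + eps > 0).astype(int)    # true b2 = 0.5

print("maximum score estimate of b2:", max_score(y, X, np.linspace(-2, 2, 401)))
```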

Journal ArticleDOI
TL;DR: An asymptotic theory is developed for a class of kernel-based smoothed nonparametric entropy measures of serial dependence in a time-series context and used to derive the limiting distribution of Granger and Lin's normalized entropy measure of serial dependence, which was previously not available in the literature.
Abstract: Entropy is a classical statistical concept with appealing properties. Establishing asymptotic distribution theory for smoothed nonparametric entropy measures of dependence has so far proved challenging. In this paper, we develop an asymptotic theory for a class of kernel-based smoothed nonparametric entropy measures of serial dependence in a time-series context. We use this theory to derive the limiting distribution of Granger and Lin's (1994) normalized entropy measure of serial dependence, which was previously not available in the literature. We also apply our theory to construct a new entropy-based test for serial dependence, providing an alternative to Robinson's (1991) approach. To obtain accurate inferences, we propose and justify a consistent smoothed bootstrap procedure. The naive bootstrap is not consistent for our test. Our test is useful in, for example, testing the random walk hypothesis, evaluating density forecasts, and identifying important lags of a time series. It is asymptotically locally more powerful than Robinson's (1991) test, as is confirmed in our simulation. An application to the daily S&P 500 stock price index illustrates our approach.
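
A kernel-smoothed entropy measure of serial dependence can be illustrated in a few lines: estimate the joint density of (X_t, X_{t-j}) and the marginals with kernels, and average the log density ratio, the sample analogue of the Kullback-Leibler divergence between the joint density and the product of marginals (zero under serial independence). The Python sketch below is that generic construction, not Granger and Lin's exact normalization or the paper's test statistic; the AR(1) example data are made up.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kernel_entropy_dependence(x, lag=1):
    """Kernel-based entropy (mutual information) measure of serial dependence at a given lag."""
    a, b = x[lag:], x[:-lag]
    joint = gaussian_kde(np.vstack([a, b]))
    f_a, f_b = gaussian_kde(a), gaussian_kde(b)
    log_ratio = np.log(joint(np.vstack([a, b]))) - np.log(f_a(a)) - np.log(f_b(b))
    return log_ratio.mean()            # roughly 0 under independence, positive under dependence

rng = np.random.default_rng(5)
e = rng.normal(size=2000)
x_iid = e                                        # serially independent series
x_ar1 = np.zeros(2000)
for t in range(1, 2000):
    x_ar1[t] = 0.6 * x_ar1[t - 1] + e[t]         # serially dependent AR(1) series

print("iid series  :", round(kernel_entropy_dependence(x_iid), 4))
print("AR(1) series:", round(kernel_entropy_dependence(x_ar1), 4))
```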

Journal ArticleDOI
TL;DR: In this article, the existence of equilibria in distributional strategies for a wide class of private value auctions, including double auctions, was shown, and the existence proof established new connections among existence techniques for discontinuous Bayesian games.
Abstract: We show existence of equilibria in distributional strategies for a wide class of private value auctions, including the first general existence result for double auctions. The set of equilibria is invariant to the tie-breaking rule. The model incorporates multiple unit demands, all standard pricing rules, reserve prices, entry costs, and stochastic demand and supply. Valuations can be correlated and asymmetrically distributed. For double auctions, we show further that at least one equilibrium involves a positive volume of trade. The existence proof establishes new connections among existence techniques for discontinuous Bayesian games.