
Showing papers in "Econometrica in 1999"


Journal ArticleDOI
TL;DR: In this article, a longitudinal sample of over one million French workers from more than five hundred thousand employing firms was used to study wage variation in France and found that person effects are a very important source of wage variation.
Abstract: We study a longitudinal sample of over one million French workers from more than five hundred thousand employing firms. We decompose real total annual compensation per worker into components related to observable employee characteristics, personal heterogeneity, firm heterogeneity, and residual variation. Except for the residual, all components may be correlated in an arbitrary fashion. At the level of the individual, we find that person effects, especially those not related to observables like education, are a very important source of wage variation in France. Firm effects, while important, are not as important as person effects. At the level of firms, we find that enterprises that hire high-wage workers are more productive but not more profitable. They are also more capital and high-skilled employee intensive. Enterprises that pay higher wages, controlling for person effects, are more productive and more profitable. They are also more capital intensive but are not more high-skilled labor intensive. We find that person effects explain about 90% of inter-industry wage differentials and about 75% of the firm-size wage effect while firm effects explain relatively little of either differential.

1,527 citations


Journal ArticleDOI
TL;DR: Experience-weighted attraction (EWA) learning, as introduced in this paper, includes reinforcement learning and belief learning as special cases and hybridizes their key elements, allowing attractions to begin and grow flexibly as choice reinforcement does while reinforcing unchosen strategies substantially as belief-based models implicitly do.
Abstract: In ‘experience-weighted attraction’ (EWA) learning, strategies have attractions that reflect initial predispositions, are updated based on payoff experience, and determine choice probabilities according to some rule (e.g., logit). A key feature is a parameter δ that weights the strength of hypothetical reinforcement of strategies that were not chosen according to the payoff they would have yielded, relative to reinforcement of chosen strategies according to received payoffs. The other key features are two discount rates, φ and ρ, which separately discount previous attractions, and an experience weight. EWA includes reinforcement learning and weighted fictitious play (belief learning) as special cases, and hybridizes their key elements. When δ= 0 and ρ= 0, cumulative choice reinforcement results. When δ= 1 and ρ=φ, levels of reinforcement of strategies are exactly the same as expected payoffs given weighted fictitious play beliefs. Using three sets of experimental data, parameter estimates of the model were calibrated on part of the data and used to predict a holdout sample. Estimates of δ are generally around .50, φ around .8 − 1, and ρ varies from 0 to φ. Reinforcement and belief-learning special cases are generally rejected in favor of EWA, though belief models do better in some constant-sum games. EWA is able to combine the best features of previous approaches, allowing attractions to begin and grow flexibly as choice reinforcement does, but reinforcing unchosen strategies substantially as belief-based models implicitly do.
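The updating rule described in the abstract can be sketched directly from its parameters: previous attractions are depreciated by φ, the experience weight by ρ, and unchosen strategies receive their hypothetical payoffs weighted by δ. A minimal sketch under those assumptions (the function names and the logit response parameter `lam` are illustrative, not from the paper):

```python
import math

def ewa_update(attractions, n_exp, payoffs, chosen, delta, phi, rho):
    """One EWA step: A_j(t) = (phi*N(t-1)*A_j(t-1) + w_j*pi_j(t)) / N(t),
    where N(t) = rho*N(t-1) + 1 and w_j is 1 for the chosen strategy, delta otherwise.
    `payoffs[j]` is the payoff strategy j would have earned this round."""
    n_new = rho * n_exp + 1.0
    new_attr = []
    for j, a in enumerate(attractions):
        weight = 1.0 if j == chosen else delta  # unchosen strategies get delta-weighted payoff
        new_attr.append((phi * n_exp * a + weight * payoffs[j]) / n_new)
    return new_attr, n_new

def logit_choice_probs(attractions, lam=1.0):
    """Logit choice rule: P_j proportional to exp(lam * A_j)."""
    exps = [math.exp(lam * a) for a in attractions]
    total = sum(exps)
    return [e / total for e in exps]
```

Setting `delta=0` and `rho=0` recovers cumulative choice reinforcement, matching the special case named in the abstract.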

1,450 citations


Journal ArticleDOI
TL;DR: In this article, a regression limit theory for nonstationary panel data with large numbers of cross section (n) and time series (T) observations is developed, and the relationship between these multidimensional limits is explored.
Abstract: This paper develops a regression limit theory for nonstationary panel data with large numbers of cross section (n) and time series (T) observations. The limit theory allows for both sequential limits, wherein T→∞ followed by n→∞, and joint limits where T, n→∞ simultaneously; and the relationship between these multidimensional limits is explored. The panel structures considered allow for no time series cointegration, heterogeneous cointegration, homogeneous cointegration, and near-homogeneous cointegration. The paper explores the existence of long-run average relations between integrated panel vectors when there is no individual time series cointegration and when there is heterogeneous cointegration. These relations are parameterized in terms of the matrix regression coefficient of the long-run average covariance matrix. In the case of homogeneous and near homogeneous cointegrating panels, a panel fully modified regression estimator is developed and studied. The limit theory enables us to test hypotheses about the long run average parameters both within and between subgroups of the full population.

1,399 citations


Journal ArticleDOI
TL;DR: In this paper, the authors take stock of the advances and directions for research on the incomplete contracting front and illustrate some of the main ideas of the incomplete contract literature through an example, and offer methodological insights on the standard approach to modeling incomplete contracts; in particular, they discuss a tension between two assumptions made in the literature, namely rationality and the existence of transaction costs.
Abstract: The paper takes stock of the advances and directions for research on the incomplete contracting front. It first illustrates some of the main ideas of the incomplete contract literature through an example. It then offers methodological insights on the standard approach to modeling incomplete contracts; in particular it discusses a tension between two assumptions made in the literature, namely rationality and the existence of transaction costs. Last, it argues that, contrary to what is commonly argued, the complete contract methodology need not be unable to account for standard institutions such as authority and ownership; and it concludes with a discussion of the research agenda.

1,179 citations


Journal ArticleDOI
TL;DR: A method that has been used to extend standard algorithms for Bayesian intervals in reduced-form models to the overidentified case is shown to be incorrect, and the authors show how to obtain correct Bayesian intervals.
Abstract: We show how correctly to extend known methods for generating error bands in reduced form VAR's to overidentified models. We argue that the conventional pointwise bands common in the literature should be supplemented with measures of shape uncertainty, and we show how to generate such measures. We focus on bands that characterize the shape of the likelihood. Such bands are not classical confidence regions. We explain that classical confidence regions mix information about parameter location with information about model fit, and hence can be misleading as summaries of the implications of the data for the location of parameters. Because classical confidence regions also present conceptual and computational problems in multivariate time series models, we suggest that likelihood-based bands, rather than approximate confidence bands based on asymptotic theory, be standard in reporting results for this type of model.

988 citations


Journal ArticleDOI
TL;DR: The authors characterize the optimal security design in several cases and demonstrate circumstances under which standard debt is optimal and show that the riskiness of the debt is increasing in the issuer's retention costs for assets.
Abstract: We consider the problem of the design and sale of a security backed by specified assets. Given access to higher-return investments, the issuer has an incentive to raise capital by securitizing part of these assets. At the time the security is issued, the issuer’s or underwriter’s private information regarding the payoff of the security may cause illiquidity, in the form of a downward-sloping demand curve for the security. The severity of this illiquidity depends upon the sensitivity of the value of the issued security to the issuer’s private information. Thus, the security-design problem involves a tradeoff between the retention cost of holding cash flows not included in the security design, and the liquidity cost of including the cash flows and making the security design more sensitive to the issuer’s private information. We characterize the optimal security design in several cases. We also demonstrate circumstances under which standard debt is optimal and show that the riskiness of the debt is increasing in the issuer’s retention costs for assets.

722 citations


Journal ArticleDOI
TL;DR: A game is better-reply secure if for every nonequilibrium strategy x * and every payoff vector limit u * resulting from strategies approaching x *, some player i has a strategy yielding a payoff strictly above u i * even if the others deviate slightly from x *.
Abstract: A game is better-reply secure if for every nonequilibrium strategy x * and every payoff vector limit u * resulting from strategies approaching x * , some player i has a strategy yielding a payoff strictly above u i * even if the others deviate slightly from x * . If strategy spaces are compact and convex, payoffs are quasiconcave in the owner's strategy, and the game is better-reply secure, then a pure strategy Nash equilibrium exists. Better-reply security holds in many economic games. It also permits new results on the existence of symmetric and mixed strategy Nash equilibria.

648 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed and structurally estimated a sequential model of high school attendance and work decisions, and found that working while in school reduces school performance, and that even the most restrictive prohibition on working while attending high school would have only a limited impact on the high school graduation rates of white males.
Abstract: In this paper, we develop and structurally estimate a sequential model of high school attendance and work decisions. The model's estimates imply that youths who drop out of high school have different traits than those who graduate—they have lower school ability and/or motivation, they have lower expectations about the rewards from graduation, they have a comparative advantage at jobs that are done by nongraduates, and they place a higher value on leisure and have a lower consumption value of school attendance. We also found that working while in school reduces school performance. However, policy experiments based on the model's estimates indicate that even the most restrictive prohibition on working while attending high school would have only a limited impact on the high school graduation rates of white males.

557 citations


Journal ArticleDOI
TL;DR: In this article, a simple two-step nonparametric estimator for a triangular simultaneous equation model is presented, which employs series approximations that exploit the additive structure of the model.
Abstract: This paper presents a simple two-step nonparametric estimator for a triangular simultaneous equation model. Our approach employs series approximations that exploit the additive structure of the model. The first step comprises the nonparametric estimation of the reduced form and the corresponding residuals. The second step is the estimation of the primary equation via nonparametric regression with the reduced form residuals included as a regressor. We derive consistency and asymptotic normality results for our estimator, including optimal convergence rates. Finally we present an empirical example, based on the relationship between the hourly wage rate and annual hours worked, which illustrates the utility of our approach.

522 citations


Journal ArticleDOI
TL;DR: In this paper, a dynamic search framework is developed to analyze the intertemporal labor force participation behavior of married women, using longitudinal data to allow for a rich dynamic structure, and the sensitivity to alternative distributional assumptions is evaluated using linear probability and probit models.
Abstract: A dynamic search framework is developed to analyze the intertemporal labor force participation behavior of married women, using longitudinal data to allow for a rich dynamic structure. The sensitivity to alternative distributional assumptions is evaluated using linear probability and probit models. The dynamic probit models are estimated using maximum simulated likelihood (MSL) estimation, to overcome the computational difficulties inherent in maximum likelihood estimation of models with nontrivial error structures. The results find that participation decisions are characterized by significant state dependence, unobserved heterogeneity, and negative serial correlation in the error component. The hypothesis that fertility decisions are exogenous to women's participation decisions is rejected when dynamics are ignored; however, there is no evidence against this hypothesis in dynamic model specifications. Women's participation response is stronger to permanent than current nonlabor income, reflecting unobserved taste factors.

501 citations


Journal ArticleDOI
TL;DR: Caballero et al. as discussed by the authors showed that realistic time-to-build aspects of investment are not in contradiction with the view that investment episodes are lumpy in nature.
Abstract: We are grateful to Olivier Blanchard, Whitney Newey, James Stock, an editor, three anonymous referees, and seminar participants at Brown, CEPR-Champoussin, Chicago, Columbia, Econometric Society Meetings (Caracas and Tokyo), EFCC, Harvard, IMPA, LSE, NBER, Princeton, Rochester, SITE, Toronto, U. de Chile, and Yale for their comments. Financial support to Caballero from the National Science and Sloan Foundations and to Engel from FONDECYT (Grant 195-510) and the Mellon Foundation (Grant 9608) is gratefully acknowledged. Since plants' entry is excluded from their sample, these statistics are likely to represent lower bounds on the degree of lumpiness in plants' investment patterns. We use the word "project" to emphasize that the actual implementation of a project may cover more than a year-observation; realistic time-to-build aspects of investment are not in contradiction with the view that investment episodes are lumpy in nature. See Chirinko (1994) for a survey of the empirical investment literature.

Journal ArticleDOI
TL;DR: In this article, the authors consider a generalized method of moments (GMM) estimation problem in which one has a vector of moment conditions, some of which are correct and some incorrect.
Abstract: This paper considers a generalized method of moments (GMM) estimation problem in which one has a vector of moment conditions, some of which are correct and some incorrect. The paper introduces several procedures for consistently selecting the correct moment conditions. The procedures also can consistently determine whether there is a sufficient number of correct moment conditions to identify the unknown parameters of interest. The paper specifies moment selection criteria that are GMM analogues of the widely used BIC and AIC model selection criteria. (The latter is not consistent.) The paper also considers downward and upward testing procedures. All of the moment selection procedures discussed in this paper are based on the minimized values of the GMM criterion function for different vectors of moment conditions. The procedures are applicable in time-series and cross-sectional contexts. Application of the results of the paper to instrumental variables estimation problems yields consistent procedures for selecting instrumental variables.
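The BIC-analogue criterion described above trades the minimized GMM J statistic off against a bonus for using more moment conditions. A hedged sketch of that selection logic, assuming the criterion takes the form J_n(c) − (|c| − p)·ln n for a moment set c with |c| moments and p parameters (the function names and dictionary interface are illustrative, not the paper's notation):

```python
import math

def msc_bic(j_stat, num_moments, num_params, n):
    """BIC-type moment selection criterion: J_n(c) - (|c| - p) * ln(n).
    Smaller is better; the second term rewards larger sets of (correct) moments,
    while an incorrect moment inflates J_n and is eventually penalized."""
    return j_stat - (num_moments - num_params) * math.log(n)

def select_moments(candidates, num_params, n):
    """candidates maps a moment-set label to (minimized J statistic, number of moments);
    returns the label minimizing the criterion."""
    return min(candidates,
               key=lambda c: msc_bic(candidates[c][0], candidates[c][1], num_params, n))
```

With two candidate sets whose J statistics are both small, the criterion favors the larger set, consistent with the paper's goal of retaining as many correct moments as possible.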

Journal ArticleDOI
TL;DR: In this paper, the authors established the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space, where the boundary may be linear, curved, and/or kinked.
Abstract: This paper establishes the asymptotic distribution of an extremum estimator when the true parameter lies on the boundary of the parameter space. The boundary may be linear, curved, and/or kinked. Typically the asymptotic distribution is a function of a multivariate normal distribution in models without stochastic trends and a function of a multivariate Brownian motion in models with stochastic trends. The results apply to a wide variety of estimators and models. Examples treated in the paper are: (i) quasi-ML estimation of a random coefficients regression model with some coefficient variances equal to zero and (ii) LS estimation of an augmented Dickey-Fuller regression with unit root and time trend parameters on the boundary of the parameter space.

Journal ArticleDOI
TL;DR: This paper found that the non-employed are very heterogeneous, so that any single division into "unemployment" and "out-of-the-labor force" is unlikely to fully capture the variety of degrees of labor force attachment.
Abstract: Although the unemployment rate is one of the most widely cited and closely monitored economic statistics, the definition and measurement of unemployment remain controversial. An important issue is whether non-employed persons who display a marginal attachment to the labor force (for example, those who are available for and desire work but are not searching for work) should be classified as unemployed or non-participants. Although this issue has been extensively debated, it has never been tested empirically. This paper carries out empirical tests of this and related hypotheses using a unique longitudinal data set from Canada. We find within the marginally attached a "waiting" group whose behavior indicates that they would be more appropriately classified as unemployed rather than out-of-the-labor force. The remainder of the marginally attached exhibit behavior between that of the unemployed and the balance of non-participants, suggesting that the desire for work among non-searchers conveys substantial information about labor force attachment and future employment status. Our methods also apply to heterogeneity within the unemployed, and we investigate behavioral variation linked to differences in job search methods and reasons for entry into unemployment. Although those using "passive" job search do exhibit behavior somewhat distinct from "active" searchers, our results reject the practice of classifying passive job searchers as out-of-the-labor force. Overall, our results indicate that the non-employed are very heterogeneous, so that any single division into "unemployment" and "out-of-the-labor force" is unlikely to fully capture the variety of degrees of labor force attachment.

Journal ArticleDOI
TL;DR: In this article, the authors argue that these two views of growth may capture different phases of a single growth experience, and they argue that both investment and innovation are essential in sustaining growth indefinitely, and yet they move in an asynchronized way; only one appears to play a dominant role in each phase.
Abstract: The neoclassical growth model focuses on factor accumulation as an engine of growth, while the neo-Schumpetarian growth model stresses innovation. This paper argues that these two views of growth may capture different phases of a single growth experience. In the model presented below, the balanced growth path is unstable and the economy achieves sustainable growth through cycles under an empirically plausible condition, perpetually moving back and forth between two phases. One phase is characterized by higher output growth, higher investment, no innovation, and a competitive market structure. The other phase is characterized by lower output growth, lower investment, high innovation, and a more monopolistic market structure. Both investment and innovation are essential in sustaining growth indefinitely, and yet they move in an asynchronized way; only one of them appears to play a dominant role in each phase. The economy grows faster along the cycles than along the (unstable) balanced growth path.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a new equilibrium concept for political games based on the fact of factional conflict within parties, where each party is supposed to consist of reformists, militants, and opportunists: each faction has a complete preference order on policy space, but together they can only agree on a partial order.
Abstract: Why do both left and right political parties typically propose progressive income taxation schemes in political competition? Analysis of this problem has been hindered by the two-dimensionality of the issue space. To give parties a choice over a domain that contains both progressive and regressive income tax policies requires an issue space that is at least two-dimensional. Nash equilibrium in pure strategies of the standard two-party game, whose players have complete preferences over a two-dimensional policy space, generically fails to exist. I introduce a new equilibrium concept for political games, based on the fact of factional conflict within parties. Each party is supposed to consist of reformists, militants, and opportunists: each faction has a complete preference order on policy space, but together they can only agree on a partial order. Nash equilibria of the two-party game, where the policy space consists of all quadratic income tax functions, and each party is represented by its partial order, exist, and it is shown that, in such equilibria, both parties propose progressive income taxation.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the strategic options facing workers in labor markets with centralized market clearing mechanisms such as those in the entry level labor markets of a number of professions and demonstrate that stating preferences that reverse the true preference order of two acceptable firms is not beneficial in a low-information environment, but submitting a truncation of the true preferences may be.
Abstract: We consider the strategic options facing workers in labor markets with centralized market clearing mechanisms such as those in the entry level labor markets of a number of professions. If workers do not have detailed information about the preferences of other workers and firms, the scope of potentially profitable strategic behavior is considerably reduced, although not entirely eliminated. Specifically, we demonstrate that stating preferences that reverse the true preference order of two acceptable firms is not beneficial in a low information environment, but submitting a truncation of the true preferences may be. This gives some insight into the successful operation of these market mechanisms.
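Centralized clearinghouses of the kind studied here typically run a deferred acceptance algorithm, and the strategic question is which preference list a worker should submit to it. As a hedged illustration of the mechanism being manipulated (the paper analyzes the reporting strategies, not this code; names are illustrative), a minimal worker-proposing deferred acceptance sketch:

```python
def deferred_acceptance(worker_prefs, firm_prefs):
    """Worker-proposing deferred acceptance (Gale-Shapley), one position per firm.
    worker_prefs: dict worker -> ordered list of acceptable firms
    firm_prefs:   dict firm   -> ordered list of acceptable workers
    Returns dict firm -> worker (the worker-optimal stable matching)."""
    rank = {f: {w: i for i, w in enumerate(ps)} for f, ps in firm_prefs.items()}
    next_proposal = {w: 0 for w in worker_prefs}  # index of next firm to propose to
    match = {}                                    # firm -> currently held worker
    free = list(worker_prefs)
    while free:
        w = free.pop()
        prefs = worker_prefs[w]
        if next_proposal[w] >= len(prefs):
            continue                  # w has exhausted acceptable firms; stays unmatched
        f = prefs[next_proposal[w]]
        next_proposal[w] += 1
        if w not in rank[f]:
            free.append(w)            # f finds w unacceptable
        elif f not in match:
            match[f] = w              # f tentatively holds w
        elif rank[f][w] < rank[f][match[f]]:
            free.append(match[f])     # f upgrades; displaced worker proposes again
            match[f] = w
        else:
            free.append(w)            # f rejects w
    return match
```

Truncation, in this setting, means submitting only a top segment of one's true list to the mechanism; the abstract's result is that this can be profitable in a low-information environment, while reversing the order of two acceptable firms cannot.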

Journal ArticleDOI
TL;DR: The work was supported by the MIT Center for Energy and Environmental Policy Research, the U.S. Dept. of Energy, and the National Science Foundation.
Abstract: Supported by the MIT Center for Energy and Environmental Policy Research, the U.S. Dept. of Energy and the National Science Foundation.

Journal ArticleDOI
TL;DR: In this article, the authors search for solutions to various classes of allocation problems and show that the core correspondence of Roth and Postlewaite 1977 is strategy-proof in the context of housing markets.
Abstract: IN THIS PAPER WE SEARCH for solutions to various classes of allocation problems. We hand results pertaining to housing markets are much more encouraging. Roth 1982b shows that in the context of housing markets the core correspondence, which is shown to Ž. Ž . be single-valued by Roth and Postlewaite 1977 , is strategy-proof. Moreover Ma 1994 shows that it is the only solution that is Pareto efficient, individually rational, and strategy-proof.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a systematic treatment of the asymptotic properties of weighted M-estimators under variable probability stratified sampling and show that the unweighted estimator is more efficient than the weighted estimator under a generalized conditional information matrix equality.
Abstract: I provide a systematic treatment of the asymptotic properties of weighted M-estimators under variable probability stratified sampling. The characterization of the sampling scheme and representation of the objective function allow for a straightforward analysis. Simple, consistent asymptotic variance matrix estimators are proposed for a large class of problems. When stratification is based on exogenous variables, I show that the unweighted M-estimator is more efficient than the weighted estimator under a generalized conditional information matrix equality. When population frequencies are known, a more efficient weighting is possible. I also show how the results carry over to multinomial sampling.
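The weighting the abstract refers to corrects for variable-probability stratification by reweighting each observation with the ratio of its stratum's population share to its sample share. A minimal sketch for the special case of estimating a population mean (the function names and share inputs are illustrative, assuming known population frequencies):

```python
def ipw_weights(pop_shares, sample_shares):
    """Stratum weights w_s = Q_s / H_s: population share over sample share.
    Strata oversampled relative to the population get weights below one."""
    return {s: pop_shares[s] / sample_shares[s] for s in pop_shares}

def weighted_mean(values, strata, weights):
    """Weighted M-estimator of the population mean: each observation is
    weighted by its stratum's weight."""
    num = sum(weights[s] * v for v, s in zip(values, strata))
    den = sum(weights[s] for s in strata)
    return num / den
```

In the example below, stratum 'a' is oversampled (80% of the sample but 50% of the population), so the unweighted mean would be biased toward 'a', while the weighted estimator recovers the population mean.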

Journal ArticleDOI
TL;DR: In this paper, the authors define a general equilibrium model with exchange and club formation, where agents trade multiple private goods widely in the market, can belong to several clubs, and care about the characteristics of the other members of their clubs.
Abstract: This paper defines a general equilibrium model with exchange and club formation. Agents trade multiple private goods widely in the market, can belong to several clubs, and care about the characteristics of the other members of their clubs. The space of agents is a continuum, but clubs are finite. It is shown that (i) competitive equilibria exist, and (ii) the core coincides with the set of equilibrium states. The central subtlety is in modeling club memberships and expressing the notion that membership choices are consistent across the population.

Journal ArticleDOI
TL;DR: In this article, it is shown that the linkage principle does not extend to the multi-unit auction setting, and an analysis of the equilibrium bidding strategies is carried out for the general two-agent/two-unit Vickrey auction in order to provide economic insight into the nature of the failure.
Abstract: It is shown that the linkage principle (Milgrom and Weber (1982)) does not extend to the multi-unit auction setting. An analysis of the equilibrium bidding strategies is carried out for the general two-agent/two-unit Vickrey auction in order to provide economic insight into the nature of the failure. In addition, an explicit counterexample is provided. Both authors acknowledge support from the Binational Science Foundation (Grant #9500023/1). Reny also acknowledges support from the National Science Foundation (Grant #SBR-970932) and the University of Pittsburgh's Faculty of Arts and Sciences.

Journal ArticleDOI
TL;DR: In this article, the baseline hazard function and the distribution of the unobserved heterogeneity are estimated nonparametrically for the proportional hazard model with the assumption that the heterogeneity distribution does not belong to known, finite-dimensional, parametric families.
Abstract: The proportional hazard model with unobserved heterogeneity gives the hazard function of a random variable conditional on covariates and a second random variable representing unobserved heterogeneity. This paper shows how to estimate the baseline hazard function and the distribution of the unobserved heterogeneity nonparametrically. The baseline hazard function and heterogeneity distribution are assumed to satisfy smoothness conditions but are not assumed to belong to known, finite-dimensional, parametric families. Existing estimators assume that the baseline hazard function or heterogeneity distribution belongs to a known parametric family. Thus, the estimators presented here are more general than existing ones.
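The model conditions the hazard on covariates and an unobserved heterogeneity term that enters multiplicatively. A minimal sketch of that structure, using an illustrative parametric Weibull baseline even though the paper's contribution is to estimate the baseline hazard and the heterogeneity distribution nonparametrically (names and parameter values here are assumptions for illustration):

```python
import math

def mph_hazard(t, x, beta, baseline, v):
    """Mixed proportional hazard: theta(t | x, v) = v * lambda0(t) * exp(x'beta),
    where v is the unobserved heterogeneity multiplier and lambda0 the baseline."""
    index = sum(b * xi for b, xi in zip(beta, x))
    return v * baseline(t) * math.exp(index)

def weibull_baseline(t, a=1.5):
    # Illustrative parametric baseline lambda0(t) = a * t**(a-1); the paper
    # would leave this function unspecified up to smoothness conditions.
    return a * t ** (a - 1)
```

The multiplicative form is what makes the three components (baseline, covariate index, heterogeneity) separately identifiable under the paper's conditions.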

Journal ArticleDOI
TL;DR: This article showed that the extent of the divergence between the variance of logarithms and the Lorenz criterion can be extremely large, and that its likelihood is far from negligible.
Abstract: The variance of logarithms is a widely used inequality measure which is well known to disagree with the Lorenz criterion. Up to now, the extent and likelihood of this inconsistency were thought to be vanishingly small. We find that this view is mistaken: the extent of the disagreement can be extremely large; the likelihood is far from negligible.

Journal ArticleDOI
TL;DR: In this article, the authors provide a simple characterization of the sets of Nash and of subgame perfect equilibrium payoffs in two-player repeated games, and show that the set of feasible repeated game payoffs is typically larger than the convex hull of the underlying stage-game payoffs.
Abstract: When players have identical time preferences, the set of feasible repeated game payoffs coincides with the convex hull of the underlying stage- game payoffs. Moreover, all feasible and individually rational payoffs can be sustained by equilibria if the players are sufficiently patient. Neither of these facts generalizes to the case of different time preferences. First, players can mutually benefit from trading payoffs across time. Hence, the set of feasible repeated game payoffs is typically larger than the convex hull of the underlying stage-game payoffs. Second, it is not usually the case that every trade plan that guarantees individually rational payoffs can be sustained by an equilibrium, no matter how patient the players are. This paper provides a simple characterization of the sets of Nash and of subgame perfect equilibrium payoffs in two-player repeated games.

Journal ArticleDOI
TL;DR: In this article, the authors study a model in which two carriers choose networks to connect cities and compete for customers, and show that if carriers compete aggressively (e.g., Bertrand-like behavior), one carrier operating a single hub-spoke network is an equilibrium outcome.
Abstract: We study a model in which two carriers choose networks to connect cities and compete for customers. We show that if carriers compete aggressively (e.g., Bertrand-like behavior), one carrier operating a single hub-spoke network is an equilibrium outcome. Competing hub-spoke networks are not an equilibrium outcome, although duopoly equilibria in nonhub networks can exist. If carriers do not compete aggressively, an equilibrium with competing hub-spoke networks exists as long as the number of cities is not too small. We provide conditions under which all equilibria consist of hub-spoke networks.

Journal ArticleDOI
TL;DR: In this article, the authors consider the case where preferences are separable only with respect to elements of some partition of the set of components and these partitions vary across individuals, and they characterize the libertarian social choice function and show that no superset of the top-separable domain admits strategyproof non-dictatorial social choice functions.
Abstract: We consider strategyproof social choice functions defined over product domains. If preferences are strict orderings and separable, then strategyproof social choice functions must be decomposable provided that the domain of preferences is rich. We provide several characterization results in the case where preferences are separable only with respect to the elements of some partition of the set of components and these partitions vary across individuals. We characterize the libertarian social choice function and show that no superset of the top-separable domain admits strategyproof non-dictatorial social choice functions.

Journal ArticleDOI
TL;DR: In this article, the authors studied first-price common value auctions where an insider is better informed than other bidders (outsiders) about the value of the item and showed that insiders make substantially greater profits, conditional on winning, than outsiders, and increase their bids in response to more rivals.
Abstract: Bidding is studied in first-price common value auctions where an insider is better informed than other bidders (outsiders) about the value of the item. With inexperienced bidders, having an insider does not materially reduce the severity of the winner's curse compared to auctions with a symmetric information structure (SIS). In contrast, super-experienced bidders, who have largely overcome the winner's curse, satisfy the comparative static predictions of equilibrium bidding theory: (i) average seller's revenue is larger with an insider than in SIS auctions, (ii) insiders make substantially greater profits, conditional on winning, than outsiders, and (iii) insiders increase their bids in response to more rivals. Further, changes in insiders' bids are consistent with directional learning theory (Selten and Buchta (1994)).

Journal ArticleDOI
TL;DR: In this article, the authors study preferences over Savage acts that map states to opportunity sets and satisfy the Savage axioms and show that preference over opportunity sets may exhibit a preference for flexibility due to an implicit uncertainty about future preferences reflecting anticipated unforeseen contingencies.
Abstract: We study preferences over Savage acts that map states to opportunity sets and satisfy the Savage axioms. Preferences over opportunity sets may exhibit a preference for flexibility due to an implicit uncertainty about future preferences reflecting anticipated unforeseen contingencies. The main result of this paper characterizes maximization of the expected indirect utility in terms of an ‘‘Indirect Stochastic Dominance’’ axiom that expresses a preference for ‘‘more opportunities in expectation.’’ The key technical tool of the paper, a version of Möbius inversion, has been imported from the theory of nonadditive belief functions; it allows an alternative representation using Choquet integration, and yields a simple proof of Kreps' (1979) classic result.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of public goods problem where a group of individuals must decide on a level of public good that is produced according to constant returns to scale up to some capacity constraint.
Abstract: In this paper, we consider the following classical public goods problem. A group of individuals must decide on a level of public good that is produced according to constant returns to scale up to some capacity constraint. In addition to deciding the level of public good, the group must decide how to tax the individuals in the group in order to cover the cost. The distribution of the burden of taxation is important because different individuals have different marginal rates of substitution between the private good (taxes) and the public good, and may have different incomes as well. These individual marginal rates of substitution are private information; that is, each individual knows his or her own marginal rate of substitution, but not those of the other members of the group. Adopting a Bayesian mechanism design framework, we assume that the distribution of marginal rates of substitution is common knowledge.