
Showing papers in "Econometrica in 2010"


Journal ArticleDOI
TL;DR: In this paper, the elasticity of substitution between investments in one period and stocks of skills in that period is estimated to assess the benefits of early investment in children compared to later remediation.
Abstract: This paper formulates and estimates multistage production functions for children's cognitive and noncognitive skills. Skills are determined by parental environments and investments at different stages of childhood. We estimate the elasticity of substitution between investments in one period and stocks of skills in that period to assess the benefits of early investment in children compared to later remediation. We establish nonparametric identification of a general class of production technologies based on nonlinear factor models with endogenous inputs. A by-product of our approach is a framework for evaluating childhood and schooling interventions that does not rely on arbitrarily scaled test scores as outputs and recognizes the differential effects of the same bundle of skills in different tasks. Using the estimated technology, we determine optimal targeting of interventions to children with different parental and personal birth endowments. Substitutability decreases in later stages of the life cycle in the production of cognitive skills. It is roughly constant across stages of the life cycle in the production of noncognitive skills. This finding has important implications for the design of policies that target the disadvantaged. For most configurations of disadvantage it is optimal to invest relatively more in the early stages of childhood than in later stages.
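
For intuition, the key object being estimated can be illustrated with a one-skill CES stage technology (a simplified sketch; the paper's actual technology has multiple skills, stages, and inputs):

$$ \theta_{t+1} = \left[ \gamma\,\theta_t^{\phi} + (1-\gamma)\,I_t^{\phi} \right]^{1/\phi}, \qquad \sigma = \frac{1}{1-\phi}, \quad \phi \le 1, $$

where $\theta_t$ is the current skill stock, $I_t$ is investment, and $\sigma$ is the elasticity of substitution. A lower $\sigma$ at later stages is the formal sense in which late remediation substitutes poorly for early investment.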

1,050 citations


Journal ArticleDOI
TL;DR: Preliminary results of this paper were first presented at Chernozhukov's invited Cowles Foundation lecture at the North American meetings of the Econometric Society in June 2009.
Abstract: Date: First version: June 2009; this version October 28, 2010. Preliminary results of this paper were first presented at Chernozhukov's invited Cowles Foundation lecture at the North American meetings of the Econometric Society in June of 2009. We thank seminar participants at Brown, Columbia, Harvard-MIT, the Dutch Econometric Study Group, Fuqua School of Business, and NYU for helpful comments. We also thank Denis Chetverikov, JB Doyle, and Joonhwan Lee for their thorough reading of this paper and helpful feedback.

690 citations


Journal ArticleDOI
TL;DR: This article developed a new framework for examining the determinants of wage distributions that emphasizes within-industry reallocation, labor market frictions, and differences in workforce composition across firms.
Abstract: This paper develops a new framework for examining the determinants of wage distributions that emphasizes within-industry reallocation, labor market frictions, and differences in workforce composition across firms. More productive firms pay higher wages and exporting increases the wage paid by a firm with a given productivity. The opening of trade enhances wage inequality and can either raise or reduce unemployment. While wage inequality is higher in a trade equilibrium than in autarky, gradual trade liberalization first increases and later decreases inequality.

668 citations


ReportDOI
TL;DR: The authors construct a new index of media slant that measures the similarity of a news outlet's language to that of a congressional Republican or Democrat, and find that readers have an economically significant preference for like-minded news.
Abstract: We construct a new index of media slant that measures the similarity of a news outlet's language to that of a congressional Republican or Democrat. We estimate a model of newspaper demand that incorporates slant explicitly, estimate the slant that would be chosen if newspapers independently maximized their own profits, and compare these profit-maximizing points with firms' actual choices. We find that readers have an economically significant preference for like-minded news. Firms respond strongly to consumer preferences, which account for roughly 20 percent of the variation in measured slant in our sample. By contrast, the identity of a newspaper's owner explains far less of the variation in slant.

554 citations


Journal ArticleDOI
TL;DR: This paper used regression discontinuity to examine the long-run impacts of the mita, an extensive forced mining labor system in effect in Peru and Bolivia between 1573 and 1812.
Abstract: This study utilizes regression discontinuity to examine the long-run impacts of the mita, an extensive forced mining labor system in effect in Peru and Bolivia between 1573 and 1812. Results indicate that a long-run mita effect lowers household consumption by around 25% and increases the prevalence of stunted growth in children by around 6 percentage points in subjected districts today. Using data from the Spanish Empire and Peruvian Republic to trace channels of institutional persistence, I show that the mita's influence has persisted through its impacts on land tenure and public goods provision. Mita districts historically had fewer large landowners and lower educational attainment. Today, they are less integrated into road networks and their residents are substantially more likely to be subsistence farmers.
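
A stylized version of the boundary regression discontinuity behind these results (the notation here is illustrative, not the paper's exact specification) is

$$ c_{idb} = \alpha + \gamma\,\mathrm{mita}_d + X_{id}'\beta + f(\mathrm{geo}_d) + \varphi_b + \varepsilon_{idb}, $$

where $c_{idb}$ is an outcome for household $i$ in district $d$ along boundary segment $b$, $\mathrm{mita}_d$ indicates a subjected district, $f(\cdot)$ is a smooth polynomial in geographic location, and $\gamma$ is identified by comparing observations just inside and just outside the mita boundary.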

528 citations


Journal ArticleDOI
TL;DR: In this article, a simple analytical structure in which state capacities are modeled as forward-looking investments by the government is presented, and the authors link these state capacity investments to patterns of development and growth.
Abstract: The absence of state capacities to raise revenue and to support markets is a key factor in explaining the persistence of weak states. This paper reports on an ongoing project to investigate the incentive to invest in such capacities. The paper sets out a simple analytical structure in which state capacities are modeled as forward-looking investments by government. The approach highlights some determinants of state building including the risk of external or internal conflict, the degree of political instability, and dependence on natural resources. Throughout, we link these state capacity investments to patterns of development and growth.

403 citations


Journal ArticleDOI
TL;DR: In this paper, a new class of confidence sets and tests based on generalized moment selection (GMS) procedures are introduced, and the power of GMS tests is compared to that of subsampling, m out of n bootstrap and plug-in asymptotic (PA) tests.
Abstract: The topic of this paper is inference in models in which parameters are defined by moment inequalities and/or equalities. The parameters may or may not be identified. This paper introduces a new class of confidence sets and tests based on generalized moment selection (GMS). GMS procedures are shown to have correct asymptotic size in a uniform sense and are shown not to be asymptotically conservative. The power of GMS tests is compared to that of subsampling, m out of n bootstrap, and “plug-in asymptotic” (PA) tests. The latter three procedures are the only general procedures in the literature that have been shown to have correct asymptotic size (in a uniform sense) for the moment inequality/equality model. GMS tests are shown to have asymptotic power that dominates that of subsampling, m out of n bootstrap, and PA tests. Subsampling and m out of n bootstrap tests are shown to have asymptotic power that dominates that of PA tests.
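
As a rough sketch of the GMS idea (a simplification, not the paper's exact procedure; the moment matrix, selection rule, and tuning choices below are illustrative assumptions): the test statistic penalizes violated inequalities, and the bootstrap critical value retains only the moments that appear close to binding, judged against a slowly diverging threshold.

```python
import numpy as np

def gms_test(m, alpha=0.05, B=999, seed=0):
    """Sketch of a generalized moment selection (GMS) test of
    H0: E[m_j] >= 0 for all j, given an n x J matrix of moment evaluations.
    Illustrative only; tuning choices follow common recommendations."""
    rng = np.random.default_rng(seed)
    n, J = m.shape
    mbar, sig = m.mean(0), m.std(0, ddof=1)
    t = np.sqrt(n) * mbar / sig
    stat = np.sum(np.minimum(t, 0.0) ** 2)      # sum of squared violations
    kappa = np.sqrt(2.0 * np.log(np.log(n)))    # slowly diverging threshold
    keep = t <= kappa                           # moments judged close to binding
    boot = np.empty(B)
    for b in range(B):
        mb = m[rng.integers(0, n, n)]
        tb = np.sqrt(n) * (mb.mean(0) - mbar) / sig
        boot[b] = np.sum(np.minimum(tb[keep], 0.0) ** 2)
    crit = np.quantile(boot, 1.0 - alpha)
    return stat, crit, stat > crit

# toy usage: one comfortably satisfied moment, one violated in population
rng = np.random.default_rng(1)
m = np.column_stack([rng.normal(0.5, 1, 500), rng.normal(-0.2, 1, 500)])
print(gms_test(m))
```

Dropping clearly slack moments is what gives GMS its power advantage over plug-in asymptotics, which treat every inequality as binding.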

329 citations


Journal ArticleDOI
TL;DR: In this article, the authors study a continuous-time principal-agent model in which a risk-neutral agent with limited liability must exert unobservable effort to reduce the likelihood of large but relatively infrequent losses.
Abstract: We study a continuous-time principal-agent model in which a risk-neutral agent with limited liability must exert unobservable effort to reduce the likelihood of large but relatively infrequent losses. Firm size can be decreased at no cost or increased subject to adjustment costs. In the optimal contract, investment takes place only if a long enough period of time elapses with no losses occurring. Then, if good performance continues, the agent is paid. As soon as a loss occurs, payments to the agent are suspended, and so is investment if further losses occur. Accumulated bad performance leads to downsizing. We derive explicit formulae for the dynamics of firm size and its asymptotic growth rate, and we provide conditions under which firm size eventually goes to zero or grows without bounds.

308 citations


Journal ArticleDOI
TL;DR: This paper presented a parsimonious characterization of risk taking behavior by estimating a finite mixture model for three different experimental data sets, two Swiss and one Chinese, over a large number of real gains and losses.
Abstract: It has long been recognized that there is considerable heterogeneity in individual risk taking behavior, but little is known about the distribution of risk taking types. We present a parsimonious characterization of risk taking behavior by estimating a finite mixture model for three different experimental data sets, two Swiss and one Chinese, over a large number of real gains and losses. We find two major types of individuals: In all three data sets, the choices of roughly 80% of the subjects exhibit significant deviations from linear probability weighting of varying strength, consistent with prospect theory. Twenty percent of the subjects weight probabilities near linearly and behave essentially as expected value maximizers. Moreover, individuals are cleanly assigned to one type with probabilities close to unity. The reliability and robustness of our classification suggest using a mix of preference theories in applied economic modeling.
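
For context, one common parametric probability weighting function in this literature (used here purely to illustrate what "linear" versus "nonlinear" weighting means, not necessarily the paper's exact specification) is the linear-in-log-odds form

$$ w(p) = \frac{\delta\,p^{\gamma}}{\delta\,p^{\gamma} + (1-p)^{\gamma}}, $$

which reduces to linear weighting $w(p) = p$ at $\gamma = \delta = 1$ (expected value maximization when utility is linear), while $\gamma < 1$ produces the inverse-S overweighting of small probabilities associated with prospect theory.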

289 citations


Journal ArticleDOI
TL;DR: In this paper, simulation-based estimators for static, discrete games of complete information are proposed, with identification established under weak functional form assumptions using exclusion restrictions and an identification-at-infinity approach.
Abstract: We discuss the identification and estimation of discrete games of complete information. Following Bresnahan and Reiss (1990, 1991), a discrete game is a generalization of a standard discrete choice model where utility depends on the actions of other players. Using recent algorithms to compute all of the Nash equilibria to a game, we propose simulation-based estimators for static, discrete games. We demonstrate that the model is identified under weak functional form assumptions using exclusion restrictions and an identification at infinity approach. Monte Carlo evidence demonstrates that the estimator can perform well in moderately sized samples. As an application, we study entry decisions by construction contractors to bid on highway projects in California. We find that an equilibrium is more likely to be observed if it maximizes joint profits, has a higher Nash product, uses mixed strategies, and is not Pareto dominated by another equilibrium.
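
A minimal sketch of the building block behind such simulation estimators, for a two-player entry game (the payoff parameters and the equal-weight selection rule below are illustrative assumptions, not the paper's application):

```python
import numpy as np

def pure_equilibria(alpha, delta, eps):
    """All pure-strategy Nash equilibria of a two-player entry game.
    Entering (a_i = 1) pays alpha[i] + delta[i] * a_j + eps[i]; staying
    out pays 0. delta[i] < 0 captures the rival's competitive effect,
    which guarantees at least one pure-strategy equilibrium exists."""
    eqs = []
    for a0 in (0, 1):
        for a1 in (0, 1):
            pi0 = alpha[0] + delta[0] * a1 + eps[0]
            pi1 = alpha[1] + delta[1] * a0 + eps[1]
            if a0 == (pi0 >= 0) and a1 == (pi1 >= 0):
                eqs.append((a0, a1))
    return eqs

# frequency simulator: average outcome probabilities over payoff shocks,
# weighting multiple equilibria equally (one possible selection rule)
rng = np.random.default_rng(0)
freq, R = {}, 10_000
for _ in range(R):
    eqs = pure_equilibria([0.5, 0.5], [-1.0, -1.0], rng.normal(size=2))
    for e in eqs:
        freq[e] = freq.get(e, 0.0) + 1.0 / len(eqs)
print({k: round(v / R, 3) for k, v in sorted(freq.items())})
```

For some shock draws both (enter, out) and (out, enter) are equilibria, which is exactly the multiplicity that the paper's equilibrium selection evidence speaks to.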

285 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified.
Abstract: In weighted moment condition models, we show a subtle link between identification and estimability that limits the practical usefulness of estimators based on these models. In particular, if it is necessary for (point) identification that the weights take arbitrarily large values, then the parameter of interest, though point identified, cannot be estimated at the regular (parametric) rate and is said to be irregularly identified. This rate depends on relative tail conditions and can be as slow in some examples as n^{-1/4}. This nonstandard rate of convergence can lead to numerical instability and/or large standard errors. We examine two weighted model examples: (i) the binary response model under mean restriction introduced by Lewbel (1997) and further generalized to cover endogeneity and selection, where the estimator in this class of models is weighted by the density of a special regressor, and (ii) the treatment effect model under exogenous selection (Rosenbaum and Rubin (1983)), where the resulting estimator of the average treatment effect is one that is weighted by a variant of the propensity score. Without strong relative support conditions, these models, similar to well known “identified at infinity” models, lead to estimators that converge at slower than parametric rate, since essentially, to ensure point identification, one requires some variables to take values on sets with arbitrarily small probabilities, or thin sets. For the two models above, we derive some rates of convergence and propose that one conducts inference using rate adaptive procedures that are analogous to Andrews and Schafgans (1998) for the sample selection model.
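
For the second example, the weighted moment in question is the familiar inverse-propensity-score representation of the average treatment effect (standard notation, not specific to this paper):

$$ \mathrm{ATE} = E\!\left[\frac{D\,Y}{p(X)} - \frac{(1-D)\,Y}{1-p(X)}\right], \qquad p(X) = P(D = 1 \mid X), $$

so point identification can require the weights $1/p(X)$ and $1/(1-p(X))$ to grow without bound as $p(X)$ approaches 0 or 1, which is precisely the thin-set problem that slows the convergence rate.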

Journal ArticleDOI
TL;DR: In this article, the authors consider truthful implementation of the socially efficient allocation in an independent private-value environment in which agents receive private information over time, and propose a suitable generalization of the pivot mechanism, based on the marginal contribution of each agent.
Abstract: We consider truthful implementation of the socially efficient allocation in an independent private-value environment in which agents receive private information over time. We propose a suitable generalization of the pivot mechanism, based on the marginal contribution of each agent. In the dynamic pivot mechanism, the ex-post incentive and ex-post participation constraints are satisfied for all agents after all histories. In an environment with diverse preferences it is the unique mechanism satisfying ex-post incentive, ex-post participation and efficient exit conditions. We develop the dynamic pivot mechanism in detail for a repeated auction of a single object in which each bidder learns over time her true valuation of the object. We show that the dynamic pivot mechanism is equivalent to a modified second price auction.

Journal ArticleDOI
TL;DR: In this article, the authors provide a novel approach to ordering signals based on the property that more informative signals lead to greater variability of conditional expectations and propose two nested information criteria (supermodular precision and integral precision) by combining this approach with two variability orders (dispersive and convex orders).
Abstract: This paper provides a novel approach to ordering signals based on the property that more informative signals lead to greater variability of conditional expectations. We define two nested information criteria (supermodular precision and integral precision) by combining this approach with two variability orders (dispersive and convex orders). We relate precision criteria with orderings based on the value of information to a decision maker. We then use precision to study the incentives of an auctioneer to supply private information. Using integral precision, we obtain two results: (i) a more precise signal yields a more efficient allocation; (ii) the auctioneer provides less than the efficient level of information. Supermodular precision allows us to extend the previous analysis to the case in which supplying information is costly and to obtain an additional finding: (iii) there is a complementarity between information and competition, so that both the socially efficient and the auctioneer's optimal choice of precision increase with the number of bidders.

Journal ArticleDOI
TL;DR: In this article, a tractable characterization of the sharp identification region of the parameters of incomplete econometric models with convex moment predictions is provided; examples include finite games of complete and incomplete information in the presence of multiple equilibria.
Abstract: We provide a tractable characterization of the sharp identification region of the parameters θ in a broad class of incomplete econometric models. Models in this class have set-valued predictions that yield a convex set of conditional or unconditional moments for the observable model variables. In short, we call these models with convex moment predictions. Examples include static, simultaneous-move finite games of complete and incomplete information in the presence of multiple equilibria; best linear predictors with interval outcome and covariate data; and random utility models of multinomial choice in the presence of interval regressors data. Given a candidate value for θ, we establish that the convex set of moments yielded by the model predictions can be represented as the Aumann expectation of a properly defined random set.

Journal ArticleDOI
TL;DR: In this article, a novel bootstrap procedure is introduced to perform inference in a wide class of partially identified econometric models, where the objective of the inferential procedure is to cover the identified set with a prespecified probability.
Abstract: This paper introduces a novel bootstrap procedure to perform inference in a wide class of partially identified econometric models. We consider econometric models defined by finitely many weak moment inequalities, which encompass many applications of economic interest. The objective of our inferential procedure is to cover the identified set with a prespecified probability. We compare our bootstrap procedure, a competing asymptotic approximation, and subsampling procedures in terms of the rate at which they achieve the desired coverage level, also known as the error in the coverage probability. Under certain conditions, we show that our bootstrap procedure and the asymptotic approximation have the same order of error in the coverage probability, which is smaller than the one obtained by using subsampling. This implies that inference based on our bootstrap and asymptotic approximation should eventually be more precise than inference based on subsampling. A Monte Carlo study confirms this finding in a small sample simulation.

ReportDOI
TL;DR: In this article, the effects of private information on both the intensive and extensive margins (the terms and probability of trade) were analyzed in a competitive search setting, where agents choose where to apply, and they match bilaterally.
Abstract: We study economies with adverse selection, plus the frictions in competitive search theory. With competitive search, principals post terms of trade (contracts), then agents choose where to apply, and they match bilaterally. Search allows us to analyze the effects of private information on both the intensive and extensive margins (the terms and probability of trade). There always exists a separating equilibrium where each type applies to a different contract. The equilibrium is unique in terms of payoffs. It is not generally efficient. We provide an algorithm for constructing equilibrium. Three applications illustrate the usefulness of the approach, and contrast our results with those in standard contract and search theory.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a general model of dynamic competition that accounts for learning-by-doing and organizational forgetting, show how these fundamentals shape industry structure and dynamics, and show that forgetting does not simply negate learning.
Abstract: Learning-by-doing and organizational forgetting have been shown to be important in a variety of industrial settings. This paper provides a general model of dynamic competition that accounts for these economic fundamentals and shows how they shape industry structure and dynamics. Previously obtained results regarding the dominance properties of firms' pricing behavior no longer hold in this more general setting. We show that forgetting does not simply negate learning. Rather, learning and forgetting are distinct economic forces. In particular, a model with learning and forgetting can give rise to aggressive pricing behavior, market dominance, and multiple equilibria, whereas a model with learning alone cannot.

Journal ArticleDOI
TL;DR: In the context of decision under uncertainty, this article proposed axioms that the two notions of rationality might satisfy, which allow a joint representation by a single set of prior probabilities and a single utility index.
Abstract: A decision maker (DM) is characterized by two binary relations. The first reflects choices that are rational in an "objective" sense: the DM can convince others that she is right in making them. The second relation models choices that are rational in a "subjective" sense: the DM cannot be convinced that she is wrong in making them. In the context of decision under uncertainty, we propose axioms that the two notions of rationality might satisfy. These axioms allow a joint representation by a single set of prior probabilities and a single utility index. It is "objectively rational" to choose f in the presence of g if and only if the expected utility of f is at least as high as that of g given each and every prior in the set. It is "subjectively rational" to choose f rather than g if and only if the minimal expected utility of f (with respect to all priors in the set) is at least as high as that of g. In other words, the objective and subjective rationality relations admit, respectively, a representation à la Bewley (2002) and à la Gilboa and Schmeidler (1989). Our results thus provide a bridge between these two classic models, as well as a novel foundation for the latter.
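
In symbols, with a set of priors $C$ and utility index $u$, the two relations described in the abstract are

$$ f \succsim^{*} g \iff \int u(f)\,dp \ \ge\ \int u(g)\,dp \quad \text{for every } p \in C \qquad (\text{objective, à la Bewley}), $$

$$ f \succsim g \iff \min_{p \in C} \int u(f)\,dp \ \ge\ \min_{p \in C} \int u(g)\,dp \qquad (\text{subjective, à la Gilboa–Schmeidler}), $$

with the same $C$ and $u$ appearing in both representations.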

Journal ArticleDOI
TL;DR: This work shows that each signal can be modeled in reduced form as a measure over ex post utility functions without reference to a state space, provides a measure of comparative contemplation costs, and characterizes the special case of the representation where contemplation is costless.
Abstract: We study preferences over menus which can be represented as if the individual is uncertain of her tastes, but is able to engage in costly contemplation before selecting an alternative from a menu. Since contemplation is costly, our key axiom, aversion to contingent planning, reflects the individual's preference to learn the menu from which she will be choosing prior to engaging in contemplation about her tastes for the alternatives. Our representation models contemplation strategies as subjective signals over a subjective state space. The subjectivity of the state space and the information structure in our representation makes it difficult to identify them from the preference. To overcome this issue, we show that each signal can be modeled in reduced form as a measure over ex post utility functions without reference to a state space. We show that in this reduced-form representation, the set of measures and their costs are uniquely identified. Finally, we provide a measure of comparative contemplation costs and characterize the special case of our representation where contemplation is costless.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a model of annuity contract choice and estimate it using data from the U.K. annuity market, which allows for private information about mortality risk as well as heterogeneity in preferences over different contract options.
Abstract: Much of the extensive empirical literature on insurance markets has focused on whether adverse selection can be detected. Once detected, however, there has been little attempt to quantify its welfare cost or to assess whether and what potential government interventions may reduce these costs. To do so, we develop a model of annuity contract choice and estimate it using data from the U.K. annuity market. The model allows for private information about mortality risk as well as heterogeneity in preferences over different contract options. We focus on the choice of length of guarantee among individuals who are required to buy annuities. The results suggest that asymmetric information along the guarantee margin reduces welfare relative to a first-best symmetric information benchmark by about £127 million per year or about 2 percent of annuitized wealth. We also find that by requiring that individuals choose the longest guarantee period allowed, mandates could achieve the first-best allocation. However, we estimate that other mandated guarantee lengths would have detrimental effects on welfare. Since determining the optimal mandate is empirically difficult, our findings suggest that achieving welfare gains through mandatory social insurance may be harder in practice than simple theory may suggest.

Journal ArticleDOI
TL;DR: An emerging literature in time series econometrics concerns the modeling of potentially nonlinear temporal dependence in stationary Markov chains using copula functions; this paper obtains sufficient conditions for a geometric rate of mixing in models of this kind.
Abstract: An emerging literature in time series econometrics concerns the modeling of potentially nonlinear temporal dependence in stationary Markov chains using copula functions. We obtain sufficient conditions for a geometric rate of mixing in models of this kind. Geometric beta-mixing is established under a rather strong sufficient condition that rules out asymmetry and tail dependence in the copula function. Geometric rho-mixing is obtained under a weaker condition that permits both asymmetry and tail dependence. We verify one or both of these conditions for a range of parametric copula functions that are popular in applied work.
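
As a concrete instance of the class of models being studied, here is a simulation sketch of a copula-based Markov chain using the Clayton family, which exhibits asymmetry and lower-tail dependence (an illustration built from the closed-form conditional inverse, not the paper's code):

```python
import numpy as np

def clayton_markov_chain(n, theta=2.0, seed=0):
    """Simulate a stationary Markov chain with Uniform(0,1) marginals whose
    consecutive pairs (U_{t-1}, U_t) follow a Clayton copula with theta > 0,
    C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    rng = np.random.default_rng(seed)
    u = np.empty(n)
    u[0] = rng.uniform()
    for t in range(1, n):
        w = rng.uniform()  # inverted through the conditional copula C(v | u)
        u[t] = (u[t-1] ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0)
                + 1.0) ** (-1.0 / theta)
    return u

# dependence between consecutive observations strengthens with theta
for theta in (0.5, 2.0, 8.0):
    u = clayton_markov_chain(20_000, theta)
    print(theta, round(np.corrcoef(u[:-1], u[1:])[0, 1], 3))
```

Because Clayton is asymmetric with tail dependence, it is the kind of copula covered by the paper's weaker rho-mixing condition but not by the stronger beta-mixing condition.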

Journal ArticleDOI
TL;DR: In this article, the conditional probability of a positive response is obtained by evaluating a given distribution function (F) at a linear combination of the predictor variables, and the information bound is zero unless F is logistic.
Abstract: This paper considers a panel data model for predicting a binary outcome. The conditional probability of a positive response is obtained by evaluating a given distribution function (F) at a linear combination of the predictor variables. One of the predictor variables is unobserved. It is a random effect that varies across individuals but is constant over time. The semiparametric aspect is that the conditional distribution of the random effect, given the predictor variables, is unrestricted. This paper has two results. If the support of the observed predictor variables is bounded, then identification is possible only in the logistic case. Even if the support is unbounded, so that (from Manski (1987)) identification holds quite generally, the information bound is zero unless F is logistic. Hence consistent estimation at the standard √n rate is possible only in the logistic case.
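
In symbols, following the abstract's description (generic notation), the model is

$$ P(y_{it} = 1 \mid x_i, c_i) = F(x_{it}'\beta + c_i), $$

where $c_i$ is the time-constant individual effect whose distribution given $x_i$ is unrestricted. The result is that $\sqrt{n}$-consistent estimation of $\beta$ is possible only when $F$ is the logistic CDF $F(z) = e^z/(1 + e^z)$; for any other $F$, the semiparametric information bound is zero.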

Journal ArticleDOI
TL;DR: In this paper, the role of search frictions in markets with price competition and how they lead to sorting of heterogeneous agents is investigated; positive assortative matching obtains when complementarities in match values outweigh complementarities in the search technology.
Abstract: We investigate the role of search frictions in markets with price competition and how it leads to sorting of heterogeneous agents. There are two aspects of value creation: the match value when two agents actually trade and the probability of trading governed by the search technology. We show that positive assortative matching obtains when complementarities in the former outweigh complementarities in the latter. This happens if and only if the match-value function is root-supermodular, that is, its nth root is supermodular, where n reflects the elasticity of substitution of the search technology. This condition is weaker than the condition required for positive assortative matching in markets with random search.
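
Written out, the key condition from the abstract is that the match-value function $f$ is root-supermodular: $f^{1/n}$ is supermodular, that is,

$$ f(x', y')^{1/n} + f(x, y)^{1/n} \ \ge\ f(x', y)^{1/n} + f(x, y')^{1/n} \qquad \text{for all } x' \ge x,\ y' \ge y, $$

where $n$ reflects the elasticity of substitution of the search technology. A larger $n$ makes the condition stronger; in the limit it approaches log-supermodularity, the familiar requirement for positive assortative matching under random search.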

Journal ArticleDOI
TL;DR: In this paper, the authors show that the random priority (random serial dictatorship) mechanism becomes equivalent to the probabilistic serial mechanism when the number of copies of each object type approaches infinity: the random assignments in the two mechanisms converge to each other.
Abstract: The random priority (random serial dictatorship) mechanism is a common method for assigning objects. The mechanism is easy to implement and strategy-proof. However, this mechanism is inefficient, because all agents may be made better off by another mechanism that increases their chances of obtaining more preferred objects. This form of inefficiency is eliminated by a mechanism called probabilistic serial, but this mechanism is not strategy-proof. Thus, which mechanism to employ in practical applications is an open question. We show that these mechanisms become equivalent when the market becomes large. More specifically, given a set of object types, the random assignments in these mechanisms converge to each other as the number of copies of each object type approaches infinity. Thus, the inefficiency of the random priority mechanism becomes small in large markets. Our result gives some rationale for the common use of the random priority mechanism in practical problems such as student placement in public schools.
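
A small sketch of the two mechanisms and the convergence result (the preference profile is an arbitrary illustration in the style of the literature's standard counterexample; random priority is estimated by Monte Carlo over orderings, and the eating algorithm assumes total supply covers all agents):

```python
import numpy as np

def random_priority_mc(prefs, supply, draws=20_000, seed=0):
    """RP assignment probabilities estimated by Monte Carlo: draw an ordering,
    let each agent take a copy of her favorite remaining object type."""
    rng = np.random.default_rng(seed)
    n, m = len(prefs), len(supply)
    probs = np.zeros((n, m))
    for _ in range(draws):
        left = list(supply)
        for i in rng.permutation(n):
            for o in prefs[i]:
                if left[o] > 0:
                    left[o] -= 1
                    probs[i, o] += 1
                    break
    return probs / draws

def probabilistic_serial(prefs, supply):
    """Simultaneous eating: every agent eats her favorite remaining object
    at unit speed from time 0 to 1 (assumes sum(supply) > len(prefs))."""
    n, m = len(prefs), len(supply)
    probs = np.zeros((n, m))
    left = np.array(supply, float)
    t = 0.0
    while t < 1.0 - 1e-9:
        target = [next(o for o in prefs[i] if left[o] > 1e-9) for i in range(n)]
        eaters = np.bincount(target, minlength=m)
        dt = min([1.0 - t] + [left[o] / eaters[o] for o in range(m) if eaters[o]])
        for i in range(n):
            probs[i, target[i]] += dt
        left -= eaters * dt
        t += dt
    return probs

# base economy: two agents rank a > b > null, two rank b > a > null, one copy
# of a and b; q-fold replicas shrink the maximal RP-versus-PS probability gap.
for q in (1, 2, 4):
    prefs = [(0, 1, 2)] * (2 * q) + [(1, 0, 2)] * (2 * q)
    supply = [q, q, 4 * q]                      # the null object is never scarce
    gap = np.abs(random_priority_mc(prefs, supply) -
                 probabilistic_serial(prefs, supply)).max()
    print(q, round(gap, 3))
```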

Journal ArticleDOI
TL;DR: In this article, the authors provide characterizations of agent-proposing deferred acceptance allocation rules in terms of non-wastefulness, population monotonicity, and weak Maskin monotonicity.
Abstract: The deferred acceptance algorithm is often used to allocate indivisible objects when monetary transfers are not allowed. We provide two characterizations of agent-proposing deferred acceptance allocation rules. Two new axioms—individually rational monotonicity and weak Maskin monotonicity—are essential to our analysis. An allocation rule is the agent-proposing deferred acceptance rule for some acceptant substitutable priority if and only if it satisfies non-wastefulness and individually rational monotonicity. An alternative characterization is in terms of non-wastefulness, population monotonicity, and weak Maskin monotonicity. We also offer an axiomatization of the deferred acceptance rule generated by an exogenously specified priority structure. We apply our results to characterize efficient deferred acceptance rules.
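
For reference, a minimal sketch of the agent-proposing deferred acceptance rule that the axioms characterize (the example priorities and capacities below are illustrative):

```python
def deferred_acceptance(prefs, priority, capacity):
    """Agent-proposing deferred acceptance. prefs[i]: agent i's ranked list
    of objects; priority[o]: agents ranked best-first at object o;
    capacity[o]: number of units of o. Returns agent -> object (or None)."""
    rank = {o: {i: r for r, i in enumerate(p)} for o, p in priority.items()}
    nxt = {i: 0 for i in prefs}            # next object agent i proposes to
    held = {o: [] for o in priority}       # tentatively accepted agents
    free = list(prefs)
    match = {i: None for i in prefs}
    while free:
        i = free.pop()
        if nxt[i] >= len(prefs[i]):
            continue                        # exhausted her list; stays unmatched
        o = prefs[i][nxt[i]]
        nxt[i] += 1
        held[o].append(i)
        held[o].sort(key=lambda j: rank[o][j])
        if len(held[o]) > capacity[o]:
            free.append(held[o].pop())      # reject the lowest-priority holder
    for o, agents in held.items():
        for i in agents:
            match[i] = o
    return match

# tiny example: two objects with one unit each, three agents
prefs = {1: ['a', 'b'], 2: ['a', 'b'], 3: ['b', 'a']}
priority = {'a': [2, 1, 3], 'b': [1, 3, 2]}
print(deferred_acceptance(prefs, priority, {'a': 1, 'b': 1}))
```

Because acceptances are only tentative until the algorithm stops, a later proposer with higher priority can displace an earlier one, which is the source of the rule's stability-like monotonicity properties.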

Journal ArticleDOI
TL;DR: In this article, the impact of two types of financial frictions on long-term average savings and investment rates is quantitatively investigated, and they find that the calibrated model with both frictions produces a savings-investment correlation and a volume of capital flows close to the data.
Abstract: Unlike the prediction of a frictionless open economy model, long-term average savings and investment rates are highly correlated across countries—a puzzle first identified by Feldstein and Horioka (1980). We quantitatively investigate the impact of two types of financial frictions on this correlation. One is limited enforcement, where contracts are enforced by the threat of default penalties. The other is limited spanning, where the only asset available is noncontingent bonds. We find that the calibrated model with both frictions produces a savings–investment correlation and a volume of capital flows close to the data. To solve the puzzle, the limited enforcement friction needs low default penalties under which capital flows are much lower than those in the data, and the limited spanning friction needs to exogenously restrict capital flows to the observed level. When combined, the two frictions interact to endogenously restrict capital flows and thereby solve the Feldstein–Horioka puzzle.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the outcome in terms of interim expected probabilities of trade and interim expected transfers of any Bayesian mechanism can also be obtained with a dominant strategy mechanism.
Abstract: We prove—in the standard independent private-values model—that the outcome, in terms of interim expected probabilities of trade and interim expected transfers, of any Bayesian mechanism can also be obtained with a dominant-strategy mechanism.

Journal ArticleDOI
TL;DR: In this article, the authors combine dynamic social choice and strategic experimentation to study the following question: How does a society, a committee, or, more generally, a group of individuals with potentially heterogeneous preferences, experiment with new opportunities?
Abstract: This paper combines dynamic social choice and strategic experimentation to study the following question: How does a society, a committee, or, more generally, a group of individuals with potentially heterogeneous preferences, experiment with new opportunities? Each voter recognizes that, during experimentation, other voters also learn about their preferences. As a result, pivotal voters today are biased against experimentation because it reduces their likelihood of remaining pivotal. This phenomenon reduces equilibrium experimentation below the socially efficient level, and may even result in a negative option value of experimentation. However, one can restore efficiency by designing a voting rule that depends deterministically on time. Another main result is that even when payoffs of a reform are independently distributed across the population, good news about any individual's payoff increases other individuals' incentives to experiment with that reform, due to a positive voting externality.

Journal ArticleDOI
TL;DR: In this paper, the authors consider a model of strategic trading with asymmetric information of an asset whose value follows a Brownian motion, where an insider continuously observes a signal that tracks the evolution of the asset's fundamental value.
Abstract: We consider a model of strategic trading with asymmetric information of an asset whose value follows a Brownian motion. An insider continuously observes a signal that tracks the evolution of the asset's fundamental value. The value of the asset is publicly revealed at a random time. The equilibrium has two regimes separated by an endogenously determined time T. In [0, T), the insider gradually transfers her information to the market. By time T, all her information has been transferred and the price agrees with the market value of the asset. In the interval [T, ∞), the insider trades large volumes and reveals her information immediately, so market prices track the market value perfectly. Despite this market efficiency, the insider is able to collect strictly positive rents after T.

Journal ArticleDOI
TL;DR: In this article, the authors introduce a distinction between two types of uncertainty, choice uncertainty and trade uncertainty, both of which could lead to exchange asymmetry, and design an experiment where the two treatments impact differently on trade uncertainty while controlling for choice uncertainty.
Abstract: Simple exchange experiments have revealed that participants trade their endowment less frequently than standard demand theory would predict. List (2003a) finds that the most experienced dealers acting in a well-functioning market are not subject to this exchange asymmetry, suggesting that a significant amount of market experience is required to overcome it. In order to understand this market-experience effect, we introduce a distinction between two types of uncertainty, choice uncertainty and trade uncertainty, both of which could lead to exchange asymmetry. We conjecture that trade uncertainty is most important for exchange asymmetry. To test this conjecture, we design an experiment where the two treatments impact differently on trade uncertainty, while controlling for choice uncertainty. Supporting our conjecture, we find that “forcing” subjects to give away their endowment in a series of exchanges eliminates exchange asymmetry in a subsequent test. We discuss why markets might not provide sufficient incentives for learning to overcome exchange asymmetry.