Showing papers in "Econometrica" in 2004
••
TL;DR: In this article, the authors evaluate a Kenyan project in which school-based mass treatment with deworming drugs was randomly phased into schools, rather than to individuals, allowing estimation of overall program effects.
Abstract: This brief summarizes "Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities," a study set in Kenya covering 1998-99. Intestinal helminths (hookworm, roundworm, whipworm, and schistosomiasis) infect more than one-quarter of the world's population. Studies in which medical treatment is randomized at the individual level potentially doubly underestimate the benefits of treatment, missing externality benefits to the comparison group from reduced disease transmission, and therefore also underestimating benefits for the treatment group. The authors evaluate a Kenyan project in which school-based mass treatment with deworming drugs was randomly phased into schools, rather than to individuals, allowing estimation of overall program effects. The program reduced school absenteeism in treatment schools by one-quarter, and was far cheaper than alternative ways of boosting school participation. Deworming substantially improved health and school participation among untreated children in both treatment schools and neighboring schools, and these externalities are large enough to justify fully subsidizing treatment.
2,020 citations
••
TL;DR: In this article, the authors used political reservations for women in India to study the impact of women's leadership on policy decisions and found that women invest more in infrastructure that is directly relevant to the needs of their own genders.
Abstract: This paper uses political reservations for women in India to study the impact of women's leadership on policy decisions. Since the mid-1990's, one third of Village Council head positions in India have been randomly reserved for a woman: In these councils only women could be elected to the position of head. Village Councils are responsible for the provision of many local public goods in rural areas. Using a dataset we collected on 265 Village Councils in West Bengal and Rajasthan, we compare the type of public goods provided in reserved and unreserved Village Councils. We show that the reservation of a council seat affects the types of public goods provided. Specifically, leaders invest more in infrastructure that is directly relevant to the needs of their own genders.
1,471 citations
••
TL;DR: In this paper, a new methodology called PANIC (Panel Analysis of Nonstationarity in Idiosyncratic and Common components) is proposed to detect whether the nonstationarity of a series is pervasive or variable-specific.
Abstract: This paper develops a new methodology that makes use of the factor structure of large dimensional panels to understand the nature of nonstationarity in the data. We refer to it as PANIC—Panel Analysis of Nonstationarity in Idiosyncratic and Common components. PANIC can detect whether the nonstationarity in a series is pervasive, or variable-specific, or both. It can determine the number of independent stochastic trends driving the common factors. PANIC also permits valid pooling of individual statistics and thus panel tests can be constructed. A distinctive feature of PANIC is that it tests the unobserved components of the data instead of the observed series. The key to PANIC is consistent estimation of the space spanned by the unobserved common factors and the idiosyncratic errors without knowing a priori whether these are stationary or integrated processes. We provide a rigorous theory for estimation and inference and show that the tests have good finite sample properties.
1,255 citations
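A minimal numerical sketch of the PANIC idea, under simplifying assumptions of our own (the function name, the fixed factor count r, and the use of off-the-shelf ADF p-values in place of the paper's pooled tests and critical values are all illustrative, not the authors' procedure): difference the panel, extract factors by principal components, re-cumulate, and examine the common and idiosyncratic components separately.

```python
# Illustrative sketch of a PANIC-style decomposition (not the authors' code).
import numpy as np
from statsmodels.tsa.stattools import adfuller

def panic_sketch(X, r=1):
    """X: (T, N) panel of observed series; r: assumed number of common factors."""
    dX = np.diff(X, axis=0)                    # work in first differences
    dX = dX - dX.mean(axis=0)
    u, s, vt = np.linalg.svd(dX, full_matrices=False)
    f = u[:, :r] * s[:r]                       # estimated differenced factors
    lam = vt[:r].T                             # factor loadings
    de = dX - f @ lam.T                        # differenced idiosyncratic parts
    F = np.cumsum(f, axis=0)                   # re-cumulated common factors
    E = np.cumsum(de, axis=0)                  # re-cumulated idiosyncratic components
    # unit-root checks on the unobserved components, not the observed series
    p_common = [adfuller(F[:, j])[1] for j in range(r)]
    p_idio = [adfuller(E[:, i])[1] for i in range(E.shape[1])]
    return p_common, p_idio
```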
••
TL;DR: In this paper, asymptotic properties of the maximum likelihood and quasi-maximum likelihood estimators for the spatial autoregressive model are investigated; the rates of convergence of these estimators may depend on some general features of the spatial weights matrix of the model.
Abstract: This paper investigates asymptotic properties of the maximum likelihood estimator and the quasi-maximum likelihood estimator for the spatial autoregressive model. The rates of convergence of those estimators may depend on some general features of the spatial weights matrix of the model. It is important to make the distinction with different spatial scenarios. Under the scenario that each unit will be influenced by only a few neighboring units, the estimators may have √n-rate of convergence and be asymptotically normal. When each unit can be influenced by many neighbors, irregularity of the information matrix may occur and various components of the estimators may have different rates of convergence.
905 citations
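For reference, the model in question can be written in standard notation (ours, not necessarily the paper's) as

\[
Y_n = \lambda W_n Y_n + X_n \beta + \varepsilon_n ,
\]

where \(W_n\) is the \(n \times n\) spatial weights matrix; the (quasi-)log-likelihood contains the Jacobian term \(\ln|I_n - \lambda W_n|\), which is where the structure of \(W_n\) (few versus many influential neighbors per unit) enters the asymptotics.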
••
TL;DR: In this paper, the authors provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance, based on a fixed interval of time (e.g., a day or week) over which the number of high-frequency returns is allowed to go to infinity.
Abstract: This paper analyses multivariate high frequency financial data using realized covariation. We provide a new asymptotic distribution theory for standard methods such as regression, correlation analysis, and covariance. It will be based on a fixed interval of time (e.g., a day or week), allowing the number of high frequency returns during this period to go to infinity. Our analysis allows us to study how high frequency correlations, regressions, and covariances change through time. In particular we provide confidence intervals for each of these quantities.
717 citations
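A minimal sketch of the objects involved (our construction; the paper's contribution is the asymptotic distribution theory for these quantities, not their computation): given M high-frequency returns on two assets within a fixed interval, the realized covariation matrix and the derived realized correlation and regression coefficient are computed as follows.

```python
# Illustrative computation of realized covariation-based quantities.
import numpy as np

def realized_quantities(r):
    """r: (M, 2) array of high-frequency returns for two assets in one interval."""
    rcov = r.T @ r                                          # realized covariation matrix
    rcorr = rcov[0, 1] / np.sqrt(rcov[0, 0] * rcov[1, 1])   # realized correlation
    rbeta = rcov[0, 1] / rcov[1, 1]                         # realized regression of asset 1 on 2
    return rcov, rcorr, rbeta
```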
••
TL;DR: In this paper, the authors compared prospect theory with traditional neoclassical theory and found that prospect theory adequately organizes behavior among inexperienced consumers, but consumers with intense market experience behave largely in accordance with traditional predictions.
Abstract: Several experimental studies have provided evidence that suggest indifference curves have a kink around the current endowment level. These results, which clearly contradict closely held economic doctrines, have led some influential commentators to call for an entirely new economic paradigm to displace conventional neoclassical theory—e.g., prospect theory, which invokes psychological effects. This paper pits neoclassical theory against prospect theory by investigating data drawn from more than 375 subjects actively participating in a well-functioning marketplace. The pattern of results suggests that prospect theory adequately organizes behavior among inexperienced consumers, but consumers with intense market experience behave largely in accordance with neoclassical predictions. Moreover, the data are consistent with the notion that consumers learn to overcome the endowment effect in situations beyond specific problems they have previously encountered. This “transference of behavior” across domains has important implications in both a positive and normative sense.
521 citations
••
TL;DR: In this article, the authors distinguish financial intermediaries according to whether they issue complete contingent contracts or incomplete contracts, and they argue that there may be a role for regulating liquidity provision in an economy in which markets for aggregate risks are incomplete.
Abstract: A complex financial system comprises both financial markets and financial intermediaries. We distinguish financial intermediaries according to whether they issue complete contingent contracts or incomplete contracts. Intermediaries such as banks that issue incomplete contracts, e.g., demand deposits, are subject to runs, but this does not imply a market failure. A sophisticated financial system—a system with complete markets for aggregate risk and limited market participation—is incentive-efficient, if the intermediaries issue complete contingent contracts, or else constrained-efficient, if they issue incomplete contracts. We argue that there may be a role for regulating liquidity provision in an economy in which markets for aggregate risks are incomplete.
494 citations
••
TL;DR: In this paper, the authors characterize a class of polarization measures that fit into what they call the identityalienation framework, and simultanously satisfies a set of axioms, and provide sample estimators of population polarization indices that can be used to compare polarization across time or entities.
Abstract: Polarization is studied for the case in which distributions can be described using density functions. First, the main theorem uniquely characterizes a class of polarization measures that fits into what we call the "identity-alienation" framework, and simultaneously satisfies a set of axioms. Second, we provide sample estimators of population polarization indices that can be used to compare polarization across time or entities. Distribution-free statistical inference results are also used in order to ensure that the orderings of polarization across entities are not simply due to sampling noise. An illustration of the use of these tools using data from 21 countries shows that polarization and inequality orderings can often differ in practice.
426 citations
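For reference, polarization measures of this kind are commonly written (in notation of ours that may differ from the paper's) as

\[
P_\alpha(f) \;=\; \iint f(x)^{1+\alpha}\, f(y)\, |y - x| \, dy \, dx ,
\]

where \(f(x)^\alpha\) captures identification with similar individuals and \(|y - x|\) captures alienation between them; the axioms restrict the sensitivity parameter \(\alpha\) to an intermediate range (roughly between 0.25 and 1).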
••
TL;DR: The authors study the behavior of agents who are susceptible to temptation in infinite horizon consumption problems under uncertainty, and define and characterize dynamic self-control preferences, which are recursive and separable.
Abstract: To study the behavior of agents who are susceptible to temptation in infinite horizon consumption problems under uncertainty, we define and characterize dynamic self-control (DSC) preferences. DSC preferences are recursive and separable. In economies with DSC agents, equilibria exist but may be inefficient; in such equilibria, steady state consumption is independent of initial endowments and increases in self-control. Increasing the preference for commitment while keeping self-control constant increases the equity premium. Removing nonbinding constraints changes equilibrium allocations and prices. Debt contracts can be sustained even if the only feasible punishment for default is the termination of the contract.
389 citations
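As background, and in our summary rather than the paper's notation: DSC preferences extend to a recursive setting the static Gul-Pesendorfer representation, in which a menu \(A\) is evaluated as

\[
U(A) \;=\; \max_{x \in A}\,[\,u(x) + v(x)\,] \;-\; \max_{y \in A} v(y) ,
\]

with \(u\) the commitment utility and \(v\) the temptation utility, so the second term is the forgone temptation value (the cost of exercising self-control).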
••
TL;DR: The authors show that, in the absence of third-party enforcement of contracts, long-term relationships between trading parties emerge endogenously and are associated with a fundamental change in the nature of market interactions: markets come to resemble a collection of bilateral trading islands rather than a competitive market.
Abstract: We provide evidence that long-term relationships between trading parties emerge endogenously in the absence of third party enforcement of contracts and are associated with a fundamental change in the nature of market interactions. Without third party enforcement, the vast majority of trades are initiated with private offers and the parties share the gains from trade equally. Low effort or bad quality is penalized by the termination of the relationship, wielding a powerful effect on contract enforcement. Successful long-term relations exhibit generous rent sharing and high effort (quality) from the very beginning of the relationship. In the absence of third-party enforcement, markets resemble a collection of bilateral trading islands rather than a competitive market. If contracts are third party enforceable, rent sharing and long-term relations are absent and the vast majority of trades are initiated with public offers. Most trades take place in one-shot transactions and the contracting parties are indifferent with regard to the identity of their trading partner.
379 citations
••
TL;DR: In this paper, the authors show that when the price impact of trades is permanent and time-independent, only linear price-impact functions rule out quasi-arbitrage and thus support viable market prices.
Abstract: In an environment where trading volume affects security prices and where prices are uncertain when trades are submitted, quasi-arbitrage is the availability of a series of trades that generate infinite expected profits with an infinite Sharpe ratio. We show that when the price impact of trades is permanent and time-independent, only linear price-impact functions rule out quasi-arbitrage and thus support viable market prices. When trades also have a temporary price impact, only the permanent price impact must be linear, while the temporary one can be of a more general form. We also extend the analysis to a time-dependent framework.
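In our notation (a sketch of the setup rather than the paper's exact statement): if a trade of size \(q\) moves the price permanently from \(p\) to \(p + f(q)\), the main result says that ruling out quasi-arbitrage forces the permanent impact to be linear,

\[
f(q) = \lambda q \quad \text{for some constant } \lambda ,
\]

while any additional temporary impact component may take a more general form.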
••
TL;DR: The authors examined changes in the distribution of wages using bounds to allow for the impact of nonrandom selection into work and found evidence of an increase in inequality within education groups, changes in educational differentials, and increases in the relative wages of women.
Abstract: This paper examines changes in the distribution of wages using bounds to allow for the impact of nonrandom selection into work. We show that worst case bounds can be informative. However, because employment rates in the United Kingdom are often low, they are not informative about changes in educational or gender wage differentials. Thus we explore ways to tighten these bounds using restrictions motivated from economic theory. With these assumptions, we find convincing evidence of an increase in inequality within education groups, changes in educational differentials, and increases in the relative wages of women.
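For context, the worst-case bounds referred to can be stated as follows (our notation): when the wage \(W\) is observed only for those in work (\(E = 1\)), then for any wage level \(w\) and covariates \(x\),

\[
P(W \le w,\, E=1 \mid x) \;\le\; P(W \le w \mid x) \;\le\; P(W \le w,\, E=1 \mid x) + P(E=0 \mid x) ,
\]

so the bounds widen as the employment rate falls, which is why the paper tightens them with restrictions motivated by economic theory.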
••
TL;DR: In this paper, the authors estimate a structural consumer-level demand system for satellite, basic cable, premium cable, and local antenna using micro data on almost 30,000 households in 317 markets, including extensive controls for unobserved product quality and allowing the distribution of unobserved tastes to follow a fully flexible multivariate normal distribution.
Abstract: This paper examines direct broadcast satellites (DBS) as a competitor to cable. We first estimate a structural consumer level demand system for satellite, basic cable, premium cable and local antenna using micro data on almost 30,000 households in 317 markets, including extensive controls for unobserved product quality and allowing the distribution of unobserved tastes to follow a fully flexible multivariate normal distribution. The estimated elasticity of expanded basic is about −1.5, with the demand for premium cable and DBS more elastic. The results identify strong correlations in the taste for different products not captured in conventional logit models. Estimates of the supply response of cable suggest that without DBS entry cable prices would be about 15 percent higher and cable quality would fall. We find a welfare gain of between $127 and $190 per year (aggregate $2.5 billion) for satellite buyers, and about $50 (aggregate $3 billion) for cable subscribers.
••
TL;DR: In this article, the effects of financial market globalization on the inequality of nations are investigated, and the model is tractable enough to allow for a complete characterization of the stable steady states.
Abstract: This paper investigates the effects of financial market globalization on the inequality of nations. The world economy consists of inherently identical countries, which differ only in their levels of capital stock. Each country is represented by the standard overlapping generations model, modified only to incorporate credit market imperfection. An integration of financial markets affects the set of stable steady states, as it changes the balance between the equalizing force of the diminishing returns technology and the unequalizing force of the wealth-dependent borrowing constraint. The model is tractable enough to allow for a complete characterization of the stable steady states. In the absence of the international financial market, the world economy has a unique steady state, which is symmetric and stable. In the presence of the international financial market, symmetry-breaking occurs under some conditions. That is, the symmetric steady state loses its stability and stable asymmetric steady states come to exist. In the stable asymmetric steady states, the world economy is endogenously divided into the rich and poor countries; the borrowing constraints are binding in the poor but not in the rich; the world output is smaller, the rich are richer and the poor are poorer in any of the stable asymmetric steady states than in the (unstable) symmetric steady state.
••
TL;DR: This paper examines the problem of measuring intellectual influence based on data on citations between scholarly publications and finds that the properties of invariance to reference intensity, weak homogeneity, weak consistency, and invariance to splitting of journals characterize a unique ranking method.
Abstract: This paper examines the problem of measuring intellectual influence based on data on citations between scholarly publications. We follow an axiomatic approach and find that the properties of invariance to reference intensity, weak homogeneity, weak consistency, and invariance to splitting of journals characterize a unique ranking method. This method is different from those regularly used in economics and other social sciences.
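The uniquely characterized method is eigenvector-based; the following rough numerical sketch is our own construction in that spirit (the function name and the simplified treatment of self-citations are illustrative, not the paper's algorithm): normalize each citing journal's citations by its total references, then rank journals by the principal eigenvector of the resulting matrix.

```python
# Rough sketch of an invariant, eigenvector-based journal ranking (our construction).
import numpy as np

def invariant_ranking(C):
    """C[i, j] = citations from journal j to journal i (self-citations ignored here)."""
    A = C / C.sum(axis=0, keepdims=True)   # invariance to reference intensity
    w, v = np.linalg.eig(A)
    top = np.argmax(w.real)                # principal eigenvector (eigenvalue near 1)
    scores = np.abs(v[:, top].real)
    return scores / scores.sum()           # normalized influence scores
```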
••
TL;DR: In this article, the authors present a root n consistent estimator for nonlinear models with measurement errors in the explanatory variables, applicable when one repeated observation of each mismeasured regressor is available.
Abstract: This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the “true” value of the regressors thanks to a useful property of the Fourier transform: The Fourier transform converts the integral equations that relate the distribution of the unobserved “true” variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the “true,” unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach.
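One classical device in this literature, stated here in our notation and not necessarily in the paper's exact form, is Kotlarski-type identification: with two measurements \(x_1 = x^* + \varepsilon_1\) and \(x_2 = x^* + \varepsilon_2\) satisfying suitable independence and zero-mean conditions, the characteristic function of the unobserved \(x^*\) is recovered as

\[
\phi_{x^*}(t) \;=\; \exp\!\left( \int_0^t \frac{E\big[\, i\, x_1\, e^{\,i s x_2} \big]}{E\big[\, e^{\,i s x_2} \big]} \, ds \right),
\]

which is one way the Fourier transform turns the deconvolution problem into algebraic relations among observable moments, in the spirit the abstract describes.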
••
TL;DR: The theory of global games has shown that coordination games with multiple equilibria may have a unique equilibrium if certain parameters of the payoff function are private information instead of common knowledge; the authors report an experiment designed to test this theory's predictions.
Abstract: The theory of global games has shown that coordination games with multiple equilibria may have a unique equilibrium if certain parameters of the payoff function are private information instead of common knowledge. We report the results of an experiment designed to test the predictions of this theory. Comparing sessions with common and private information, we observe only small differences in behavior. For common information, subjects coordinate on threshold strategies that deviate from the global game solution towards the payoff-dominant equilibrium. For private information, thresholds are closer to the global game solution than for common information. Variations in the payoff function affect behavior as predicted by comparative statics of the global game solution. Predictability of coordination points is about the same for both information conditions.
••
TL;DR: In this paper, the authors consider bilateral matching problems where each person views those on the other side of the market as either acceptable or unacceptable: an acceptable mate is preferred to remaining single, and the latter to an unacceptable mate; all acceptable mates are welfare-wise identical.
Abstract: We consider bilateral matching problems where each person views those on the other side of the market as either acceptable or unacceptable: an acceptable mate is preferred to remaining single, and the latter to an unacceptable mate; all acceptable mates are welfare-wise identical. Using randomization, many efficient and fair matching methods define strategyproof revelation mechanisms. Randomly selecting a priority ordering of the participants is a simple example. Equalizing as much as possible the probability of getting an acceptable mate across all participants stands out for its normative and incentives properties: the profile of probabilities is Lorenz dominant, and the revelation mechanism is group-strategyproof for each side of the market. Our results apply to the random assignment problem as well.
••
TL;DR: This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions, generalizing the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994).
Abstract: This paper proposes an asymptotically efficient method for estimating models with conditional moment restrictions. Our estimator generalizes the maximum empirical likelihood estimator (MELE) of Qin and Lawless (1994). Using a kernel smoothing method, we efficiently incorporate the information implied by the conditional moment restrictions into our empirical likelihood-based procedure. This yields a one-step estimator which avoids estimating optimal instruments. Our likelihood ratio-type statistic for parametric restrictions does not require the estimation of variance, and achieves asymptotic pivotalness implicitly. The estimation and testing procedures we propose are normalization invariant. Simulation results suggest that our new estimator works remarkably well in finite samples.
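A rough sketch of the smoothing step, in our notation and under standard kernel assumptions: given conditional moment restrictions \(E[g(Z, \theta_0) \mid X] = 0\), kernel weights

\[
w_{ij} \;=\; \frac{K\big((X_i - X_j)/h\big)}{\sum_{k=1}^{n} K\big((X_i - X_k)/h\big)}
\]

are used to form locally smoothed moments \(\sum_j w_{ij}\, g(Z_j, \theta)\), from which the empirical likelihood is built, so the conditional information enters without explicit estimation of optimal instruments.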
••
TL;DR: In this paper, the authors provide a new Bayesian procedure for drawing inferences about the nature and number of decision rules present in a population, and use it to analyze the behaviors of laboratory subjects confronted with a difficult dynamic stochastic decision problem.
Abstract: Different people may use different strategies, or decision rules, when solving complex decision problems. We provide a new Bayesian procedure for drawing inferences about the nature and number of decision rules present in a population, and use it to analyze the behaviors of laboratory subjects confronted with a difficult dynamic stochastic decision problem. Subjects practiced before playing for money. Based on money round decisions, our procedure classifies subjects into three types, which we label “Near Rational,”“Fatalist,” and “Confused.” There is clear evidence of continuity in subjects' behaviors between the practice and money rounds: types who performed best in practice also tended to perform best when playing for money. However, the agreement between practice and money play is far from perfect. The divergences appear to be well explained by a combination of type switching (due to learning and/or increased effort in money play) and errors in our probabilistic type assignments.
••
TL;DR: In this paper, the authors extend the Glosten-Milgrom model of a single strategic informed trader and competitive market makers by allowing the informed trader to optimize his times of trading.
Abstract: This paper analyzes models of securities markets with a single strategic informed trader and competitive market makers. In one version, uninformed trades arrive as a Brownian motion and market makers see only the order imbalance, as in Kyle (1985). In the other version, uninformed trades arrive as a Poisson process and market makers see individual trades. This is similar to the Glosten–Milgrom (1985) model, except that we allow the informed trader to optimize his times of trading. We show there is an equilibrium in the Glosten–Milgrom-type model in which the informed trader plays a mixed strategy (a point process with stochastic intensity). In this equilibrium, informed and uninformed trades arrive probabilistically, as Glosten and Milgrom assume. We study a sequence of such markets in which uninformed trades become smaller and arrive more frequently, approximating a Brownian motion. We show that the equilibria of the Glosten–Milgrom model converge to the equilibrium of the Kyle model.
••
TL;DR: In this paper, the authors generalize a known result on valuing defaultable securities, demonstrating that one can always value defaultable claims using expected risk-adjusted discounting provided that the expectation is taken under a slightly modified probability measure.
Abstract: Previous research has shown that under a suitable no-jump condition, the price of a defaultable security is equal to its risk-neutral expected discounted cash flows if a modified discount rate is introduced to account for the possibility of default. Below, we generalize this result by demonstrating that one can always value defaultable claims using expected risk-adjusted discounting provided that the expectation is taken under a slightly modified probability measure. This new probability measure puts zero probability on paths where default occurs prior to the maturity, and is thus only absolutely continuous with respect to the risk-neutral probability measure. After establishing the general result and discussing its relation with the existing literature, we investigate several examples for which the no-jump condition fails. Each example illustrates the power of our general formula by providing simple analytic solutions for the prices of defaultable securities.
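In standard notation (a sketch of the classical case that the paper generalizes): under the no-jump condition, a claim paying \(X\) at maturity \(T\) provided no default has occurred is priced as

\[
S_t \;=\; E^{Q}\!\left[ e^{-\int_t^T (r_u + h_u)\, du}\, X \;\middle|\; \mathcal{F}_t \right],
\]

where \(r\) is the short rate and \(h\) the risk-neutral default intensity; the paper's point is that a formula of this form remains valid in general once the expectation is taken under the modified measure described above.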
••
TL;DR: In this article, a speaker wishes to persuade a listener to accept a certain request, and the conditions under which the request is justified, from the listener's point of view, depend on the values of two aspects.
Abstract: A speaker wishes to persuade a listener to accept a certain request. The conditions under which the request is justified, from the listener's point of view, depend on the values of two aspects. The values of the aspects are known only to the speaker and the listener can check the value of at most one. A mechanism specifies a set of messages that the speaker can send and a rule that determines the listener's response, namely, which aspect he checks and whether he accepts or rejects the speaker's request. We study mechanisms that maximize the probability that the listener accepts the request when it is justified and rejects the request when it is unjustified, given that the speaker maximizes the probability that his request is accepted. We show that a simple optimal mechanism exists and can be found by solving a linear programming problem in which the set of constraints is derived from what we call the L-principle.
••
[...]
TL;DR: In this paper, the authors show that the equilibria survive even if the simultaneous-play assumption is relaxed to allow for a large variety of extensive modifications, such as sequential play with partial and differential revelation of information, commitments, multiple revisions of choices, cheap talk announcements, and more.
Abstract: With many semi-anonymous players, the equilibria of simultaneous-move games are extensively robust. This means that the equilibria survive even if the simultaneous-play assumption is relaxed to allow for a large variety of extensive modifications. Such modifications include sequential play with partial and differential revelation of information, commitments, multiple revisions of choices, cheap talk announcements, and more.
••
TL;DR: In this article, a simple and consistent estimation procedure for conditional moment restrictions is proposed, which is directly based on the definition of the conditional moments and does not require the selection of any user-chosen number.
Abstract: In econometrics, models stated as conditional moment restrictions are typically estimated by means of the generalized method of moments (GMM). The GMM estimation procedure can render inconsistent estimates since the number of arbitrarily chosen instruments is finite. In fact, consistency of the GMM estimators relies on additional assumptions that imply unclear restrictions on the data generating process. This article introduces a new, simple and consistent estimation procedure for these models that is directly based on the definition of the conditional moments. The main feature of our procedure is its simplicity, since its implementation does not require the selection of any user-chosen number, and statistical inference is straightforward since the proposed estimator is asymptotically normal. In addition, we suggest an asymptotically efficient estimator constructed by carrying out one Newton–Raphson step in the direction of the efficient GMM estimator.
••
TL;DR: In this article, the existence of pure strategy equilibria in monotone bidding functions in first-price auctions with asymmetric bidders, interdependent values, and affiliated one-dimensional signals was established.
Abstract: We establish the existence of pure strategy equilibria in monotone bidding functions in first-price auctions with asymmetric bidders, interdependent values, and affiliated one-dimensional signals. By extending a monotonicity result due to Milgrom and Weber (1982), we show that single crossing can fail only when ties occur at winning bids or when bids are individually irrational. We avoid these problems by considering limits of ever finer finite bid sets such that no two bidders have a common serious bid, and by recalling that single crossing is needed only at individually rational bids. Two examples suggest that our results cannot be extended to multidimensional signals or to second-price auctions.
••
TL;DR: In this article, the authors examine a model of an evolutionary environment in which Nature optimally builds relative consumption effects into preferences in order to compensate for incomplete environmental information; preferences exhibit such effects when a person's satisfaction with his or her own consumption appears to depend on how much others are consuming.
Abstract: Preferences exhibit relative consumption effects if a person's satisfaction with their own consumption appears to depend upon how much others are consuming. This paper examines a model of an evolutionary environment in which Nature optimally builds relative consumption effects into preferences in order to compensate for incomplete environmental information.
••
TL;DR: In this paper, the authors generalize the local Whittle estimator, which converges slowly and can have a large finite-sample bias, by approximating the logarithm of the short-run spectral component by a polynomial rather than a constant.
Abstract: The local Whittle (or Gaussian semiparametric) estimator of long range dependence, proposed by Künsch (1987) and analyzed by Robinson (1995a), has a relatively slow rate of convergence and a finite sample bias that can be large. In this paper, we generalize the local Whittle estimator to circumvent these problems. Instead of approximating the short-run component of the spectrum, ϕ(λ), by a constant in a shrinking neighborhood of frequency zero, we approximate its logarithm by a polynomial. This leads to a “local polynomial Whittle” (LPW) estimator. We specify a data-dependent adaptive procedure that adjusts the degree of the polynomial to the smoothness of ϕ(λ) at zero and selects the bandwidth. The resulting “adaptive LPW” estimator is shown to achieve the optimal rate of convergence, which depends on the smoothness of ϕ(λ) at zero, up to a logarithmic factor.
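For reference, in standard notation that may differ from the paper's: the local Whittle estimator of the memory parameter \(d\) minimizes

\[
R(G, d) \;=\; \frac{1}{m} \sum_{j=1}^{m} \left[ \log\big(G \lambda_j^{-2d}\big) + \frac{I(\lambda_j)}{G \lambda_j^{-2d}} \right]
\]

over the first \(m\) Fourier frequencies \(\lambda_j\), where \(I(\lambda_j)\) is the periodogram; the LPW estimator replaces the constant \(\log G\) with a polynomial in \(\lambda_j\) so as to track the curvature of \(\log \phi(\lambda)\) near zero.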
••
TL;DR: In this paper, the authors show that an efficient mechanism exists in an environment where first the final outcome (e.g., allocation of the goods) is determined, then the agents observe their own outcome-decision payoffs, and then final transfers are made.
Abstract: Agents' valuations are interdependent if they depend on the signals, or types, of all agents. Under the implicit assumption that agents cannot observe their outcome-decision payoffs, previous literature has shown that with interdependent valuations and independent signals, efficient design is impossible. This paper shows that an efficient mechanism exists in an environment where first the final outcome (e.g., allocation of the goods) is determined, then the agents observe their own outcome-decision payoffs, and then final transfers are made.
••
TL;DR: In this paper, the authors established consistency and asymptotic normality of the quasi-maximum likelihood estimator in the linear ARCH model and allowed the parameters to be in the region where no stationary version of the process exists.
Abstract: We establish consistency and asymptotic normality of the quasi-maximum likelihood estimator in the linear ARCH model. Contrary to the existing literature, we allow the parameters to be in the region where no stationary version of the process exists. This implies that the estimator is always asymptotically normal.
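In standard notation (our sketch, not necessarily the paper's): the linear ARCH(q) model is

\[
x_t = \sigma_t z_t, \qquad \sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i x_{t-i}^2 ,
\]

and the QMLE maximizes the Gaussian quasi-log-likelihood

\[
\hat{\theta} \;=\; \arg\max_{\theta}\; -\frac{1}{2} \sum_{t} \left[ \log \sigma_t^2(\theta) + \frac{x_t^2}{\sigma_t^2(\theta)} \right] ;
\]

the result is that this estimator is asymptotically normal even when the parameters lie in the region with no stationary solution.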