
Showing papers in "Econometrica in 2001"


Journal ArticleDOI
TL;DR: In this paper, a modified information criterion (MIC) with a sample-dependent penalty factor is proposed to select appropriate truncation lags for unit root tests when the errors have a moving-average root close to -1.
Abstract: It is widely known that when there are errors with a moving-average root close to -1, a high order augmented autoregression is necessary for unit root tests to have good size, but that information criteria such as the AIC and the BIC tend to select a truncation lag (k) that is very small. We consider a class of Modified Information Criteria (MIC) with a penalty factor that is sample dependent. It takes into account the fact that the bias in the sum of the autoregressive coefficients is highly dependent on k and adapts to the type of deterministic components present. We use a local asymptotic framework in which the moving-average root is local to -1 to document how the MIC performs better in selecting appropriate values of k. In Monte Carlo experiments, the MIC is found to yield huge size improvements to the DF-GLS and the feasible point optimal P_T test developed in Elliott, Rothenberg, and Stock (1996). We also extend the M tests developed in Perron and Ng (1996) to allow for GLS detrending of the data. The MIC along with GLS detrended data yield a set of tests with desirable size and power properties.
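For readers who want the shape of the criterion: as we recall the paper's formulation (so treat this as a sketch rather than a quotation), the MIC selects k to minimize

MIC(k) = \ln \hat{\sigma}_k^2 + \frac{C_T (\hat{\tau}_T(k) + k)}{T - k_{\max}}, \qquad \hat{\tau}_T(k) = \hat{\sigma}_k^{-2} \hat{\beta}_0^2 \sum_{t=k_{\max}+1}^{T} \tilde{y}_{t-1}^2,

where \tilde{y}_t is the (GLS-)detrended series, \hat{\beta}_0 is the coefficient on \tilde{y}_{t-1} in the augmented autoregression, \hat{\sigma}_k^2 is the residual variance, and C_T = 2 gives the MAIC variant. The data-dependent term \hat{\tau}_T(k) is what lets the penalty adapt when a large negative moving-average root inflates the bias in the sum of the autoregressive coefficients, which a fixed AIC/BIC penalty cannot do.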

4,084 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that identifying conditions invoked in previous applications of regression discontinuity methods are often overly strong and that treatment effects can be nonparametrically identified under an RD design by a weak functional form restriction.
Abstract: The regression discontinuity (RD) data design is a quasi-experimental design with the defining characteristic that the probability of receiving treatment changes discontinuously as a function of one or more underlying variables. This data design arises frequently in economic and other applications but is only infrequently exploited as a source of identifying information in evaluating effects of a treatment. In the first application and discussion of the RD method, Thistlethwaite and Campbell (1960) study the effect of student scholarships on career aspirations, using the fact that awards are only made if a test score exceeds a threshold. More recently, Van der Klaauw (1997) estimates the effect of financial aid offers on students' decisions to attend a particular college, taking into account administrative rules that set the aid amount partly on the basis of a discontinuous function of the students' grade point average and SAT score. Angrist and Lavy (1999) estimate the effect of class size on student test scores, taking advantage of a rule stipulating that another classroom be added when the average class size exceeds a threshold level. Finally, Black (1999) uses an RD approach to estimate parents' willingness to pay for higher quality schools by comparing housing prices near geographic school attendance boundaries. Regression discontinuity methods have potentially broad applicability in economic research, because geographic boundaries or rules governing programs often create discontinuities in the treatment assignment mechanism that can be exploited under the method. Although there have been several discussions and applications of RD methods in the literature, important questions still remain concerning sources of identification and ways of estimating treatment effects under minimal parametric restrictions. Here, we show that identifying conditions invoked in previous applications of RD methods are often overly strong and that treatment effects can be nonparametrically identified under an RD design by a weak functional form restriction. The restriction is unusual in that it requires imposing continuity assumptions in order to take advantage of the known discontinuity in the treatment assignment mechanism. We also propose a way of nonparametrically estimating treatment effects and offer an interpretation of the Wald estimator as an RD estimator.
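As an illustration of the fuzzy-RD logic described above (the ratio of the outcome jump to the treatment-probability jump at the threshold), here is a minimal local linear sketch in Python. It is our illustration in the spirit of the paper's nonparametric proposal, not the authors' exact estimator; the function name `fuzzy_rd_estimate`, the rectangular-kernel bandwidth `h`, and the simple least-squares fits are choices made for brevity.

```python
import numpy as np

def fuzzy_rd_estimate(x, y, d, cutoff=0.0, h=1.0):
    """Fuzzy RD via local linear fits on each side of the cutoff.

    x: running variable, y: outcome, d: treatment indicator (0/1).
    Returns the Wald-type ratio: (jump in E[y|x]) / (jump in E[d|x]).
    Illustrative sketch only.
    """
    def boundary_fit(z, w, side):
        # local linear regression of w on (z - cutoff) within bandwidth h
        mask = (np.abs(z - cutoff) <= h) & ((z >= cutoff) if side == "right" else (z < cutoff))
        zc = z[mask] - cutoff
        X = np.column_stack([np.ones(zc.size), zc])
        beta, *_ = np.linalg.lstsq(X, w[mask], rcond=None)
        return beta[0]  # intercept = boundary value of the regression function

    jump_y = boundary_fit(x, y, "right") - boundary_fit(x, y, "left")
    jump_d = boundary_fit(x, d, "right") - boundary_fit(x, d, "left")
    return jump_y / jump_d
```

The returned ratio is the Wald-type estimand the abstract refers to; in a sharp design the denominator is one, and the numerator alone is the treatment effect at the cutoff.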

2,577 citations


Journal ArticleDOI
TL;DR: The authors empirically examined the ready-to-eat cereal industry and concluded that the prices in the industry are consistent with noncollusive pricing behavior, despite the high price-cost margins.
Abstract: The ready-to-eat cereal industry is characterized by high concentration, high price-cost margins, large advertising-to-sales ratios, and numerous introductions of new products. Previous researchers have concluded that the ready-to-eat cereal industry is a classic example of an industry with nearly collusive pricing behavior and intense nonprice competition. This paper empirically examines this conclusion. In particular, I estimate price-cost margins, but more importantly I am able empirically to separate these margins into three sources: (i) that which is due to product differentiation; (ii) that which is due to multi-product firm pricing; and (iii) that due to potential price collusion. The results suggest that given the demand for different brands of cereal, the first two effects explain most of the observed price-cost margins. I conclude that prices in the industry are consistent with noncollusive pricing behavior, despite the high price-cost margins. Leading firms are able to maintain a portfolio of differentiated products and influence the perceived product quality. It is these two factors that lead to high price-cost margins.

1,595 citations


Journal ArticleDOI
TL;DR: In this paper, the author reviews a set of recent studies that have attempted to measure the causal effect of education on labor market earnings by using institutional features of the education system as exogenous determinants of schooling outcomes.
Abstract: This paper reviews a set of recent studies that have attempted to measure the causal effect of education on labor market earnings by using institutional features of the supply side of the education system as exogenous determinants of schooling outcomes. A simple theoretical model that highlights the role of comparative advantage in the optimal schooling decision is presented and used to motivate an extended discussion of econometric issues, including the properties of ordinary least squares and instrumental variables estimators. A review of studies that have used compulsory schooling laws, differences in the accessibility of schools, and similar features as instrumental variables for completed education, reveals that the resulting estimates of the return to schooling are typically as big or bigger than the corresponding ordinary least squares estimates. One interpretation of this finding is that marginal returns to education among the low-education subgroups typically affected by supply-side innovations tend to be relatively high, reflecting their high marginal costs of schooling, rather than low ability that limits their return to education.
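Since the review turns on the contrast between OLS and IV estimates of the schooling coefficient, a bare-bones 2SLS sketch may help fix ideas. This is generic textbook 2SLS, not any particular study's specification; the function name and inputs are ours.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Generic 2SLS: Z instruments X; both should include a constant column.

    beta_hat = (X' Pz X)^{-1} X' Pz y  with  Pz = Z (Z'Z)^{-1} Z'.
    Illustrative only; a real schooling application needs controls,
    robust standard errors, and a credible instrument.
    """
    PzX = Z @ np.linalg.solve(Z.T @ Z, Z.T @ X)      # first stage: project X on Z
    beta = np.linalg.solve(PzX.T @ X, PzX.T @ y)     # second stage
    return beta
```

With schooling in X instrumented by, say, compulsory-schooling dummies in Z, the IV estimate recovers the return for those whose schooling the instrument actually shifts, which is the margin the review's interpretation emphasizes.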

1,470 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the implications of imperfect information for term structures of credit spreads on corporate bonds and derive the conditional distribution of the assets, given accounting data and survivorship.
Abstract: We study the implications of imperfect information for term structures of credit spreads on corporate bonds. We suppose that bond investors cannot observe the issuer's assets directly, and receive instead only periodic and imperfect accounting reports. For a setting in which the assets of the firm are a geometric Brownian motion until informed equityholders optimally liquidate, we derive the conditional distribution of the assets, given accounting data and survivorship. Contrary to the perfect-information case, there exists a default-arrival intensity process. That intensity is calculated in terms of the conditional distribution of assets. Credit yield spreads are characterized in terms of accounting information. Generalizations are provided.
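The key object is the default-arrival intensity that exists only because information is imperfect. As we recall the paper's headline formula (a sketch, not a quotation): with log assets diffusing with volatility \sigma and liquidation at a boundary v_B, the intensity takes the form

\lambda_t = \tfrac{1}{2} \sigma^2 \, \partial_x f_t(v_B),

where f_t is the conditional density of log assets given the accounting reports and survival to date, and \partial_x f_t its spatial derivative at the boundary. Under perfect information the conditional density collapses onto the true asset level and no such intensity exists, which is the contrast the abstract draws.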

1,373 citations


Journal ArticleDOI
TL;DR: In this article, the authors study a two-period model in which an ex ante inferior choice may tempt the decision-maker in the second period; individuals have preferences over sets of alternatives that represent second-period choices.
Abstract: We study a two-period model where ex ante inferior choice may tempt the decision-maker in the second period. Individuals have preferences over sets of alternatives that represent second period choices. Our axioms yield a representation that identifies the individual's commitment ranking, temptation ranking, and cost of self-control. An agent has a preference for commitment if she strictly prefers a subset of alternatives to the set itself. An agent has self-control if she resists temptation and chooses an option with higher ex ante utility. We introduce comparative measures of preference for commitment and self-control and relate them to our representations.
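The representation the axioms deliver is worth writing down (as it is usually stated in the literature following this paper):

U(A) = \max_{x \in A} \{ u(x) + v(x) \} - \max_{y \in A} v(y),

where u is the commitment ranking, v the temptation ranking, and \max_{y \in A} v(y) - v(x) the cost of self-control incurred by choosing x. A preference for commitment appears whenever removing tempting alternatives raises U; self-control is exercised when the maximizer of u + v differs from the maximizer of v.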

1,142 citations


Journal ArticleDOI
TL;DR: In this article, the authors develop an asymptotic theory of inference for an unrestricted two-regime TAR model with an autoregressive unit root, which is based on a new set of tools that combine unit root and empirical process methods.
Abstract: This paper develops an asymptotic theory of inference for an unrestricted two-regime threshold autoregressive (TAR) model with an autoregressive unit root. We find that the asymptotic null distribution of Wald tests for a threshold is nonstandard and different from the stationary case, and suggest basing inference on a bootstrap approximation. We also study the asymptotic null distributions of tests for an autoregressive unit root, and find that they are nonstandard and dependent on the presence of a threshold effect. We propose both asymptotic and bootstrap-based tests. These tests and distribution theory allow for the joint consideration of nonlinearity (thresholds) and nonstationarity (unit roots). Our limit theory is based on a new set of tools that combine unit root asymptotics with empirical process methods. We work with a particular two-parameter empirical process that converges weakly to a two-parameter Brownian motion. Our limit distributions involve stochastic integrals with respect to this two-parameter process. This theory is entirely new and may find applications in other contexts. We illustrate the methods with an application to the U.S. monthly unemployment rate. We find strong evidence of a threshold effect. The point estimates suggest that the threshold effect is in the short-run dynamics, rather than in the dominant root. While the conventional ADF test for a unit root is insignificant, our TAR unit root tests are arguably significant. The evidence is quite strong that the unemployment rate is not a unit root process, and there is considerable evidence that the series is a stationary TAR process.
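To make the Wald test for a threshold concrete, here is a minimal sup-Wald sketch in Python. It is our simplification, not the paper's procedure: no trimming conventions, a homoskedastic Wald form, and none of the bootstrap refinements the abstract recommends.

```python
import numpy as np

def sup_wald_tar(dy, X, z, grid):
    """Sup-Wald statistic for a two-regime threshold in dy = X theta + e.

    dy: differenced series (n,), X: regressors (n, k),
    z: threshold variable (n,), grid: candidate thresholds
    (in practice trimmed so each regime keeps enough observations).
    """
    n = X.shape[0]
    resid0 = dy - X @ np.linalg.lstsq(X, dy, rcond=None)[0]
    ssr0 = resid0 @ resid0          # restricted (no-threshold) fit
    stats = []
    for lam in grid:
        lo = z < lam
        # split every regressor across the two regimes
        Xs = np.column_stack([X * lo[:, None], X * (~lo)[:, None]])
        resid1 = dy - Xs @ np.linalg.lstsq(Xs, dy, rcond=None)[0]
        ssr1 = resid1 @ resid1
        stats.append(n * (ssr0 - ssr1) / ssr1)
    return max(stats)
```

Because the threshold parameter is unidentified under the null, the sup statistic has a nonstandard distribution, which is why inference is based on a bootstrap approximation rather than chi-squared critical values.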

719 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate strategic sophistication, the extent to which behavior in games reflects attempts to predict others' decisions, taking their incentives into account, by studying subjects' initial responses to normal-form games with various patterns of iterated dominance and unique pure-strategy equilibria without dominance, using a computer interface that allowed them to search for hidden payoff information.
Abstract: This paper reports experiments designed to study strategic sophistication, the extent to which behavior in games reflects attempts to predict others' decisions, taking their incentives into account. We studied subjects' initial responses to normal-form games with various patterns of iterated dominance and unique pure-strategy equilibria without dominance, using a computer interface that allowed them to search for hidden payoff information, while recording their searches. Monitoring subjects' information searches along with their decisions allows us to better understand how their decisions are determined, and subjects' deviations from the search patterns suggested by equilibrium analysis help to predict their deviations from equilibrium decisions.

665 citations


ReportDOI
TL;DR: In this paper, a randomized evaluation of a project in Kenya suggests that school-based mass treatment with deworming drugs reduced school absenteeism in treatment schools by one quarter, with gains especially large among the youngest children.
Abstract: Intestinal helminths - including hookworm, roundworm, schistosomiasis, and whipworm - infect more than one-quarter of the world's population. A randomized evaluation of a project in Kenya suggests that school-based mass treatment with deworming drugs reduced school absenteeism in treatment schools by one quarter, with gains especially large among the youngest children. Deworming is found to be cheaper than alternative ways of boosting school participation. By reducing disease transmission, deworming creates substantial externality health and school participation benefits among untreated children in the treatment schools and among children in neighboring schools. These externalities are large enough to justify fully subsidizing treatment. We do not find evidence that deworming improves academic test scores. Existing experimental studies in which treatment is randomized among individuals in the same school find small and insignificant deworming treatment effects on education; however, these studies underestimate true treatment effects if deworming creates positive externalities for the control group and reduces treatment-group attrition.

552 citations


Journal ArticleDOI
TL;DR: In this article, the role of speculation in the formation of bubbles and crashes in laboratory asset markets was investigated, and it was found that much of the trading activity that accompanies bubble formation, in markets where speculation is possible, is due to the fact that there is no other activity available for participants in the experiment.
Abstract: We report the results of an experiment designed to study the role of speculation in the formation of bubbles and crashes in laboratory asset markets. In a setting in which speculation is not possible, bubbles and crashes are observed. The results suggest that the departures from fundamental values are not caused by the lack of common knowledge of rationality leading to speculation, but rather by behavior that itself exhibits elements of irrationality. Much of the trading activity that accompanies bubble formation, in markets where speculation is possible, is due to the fact that there is no other activity available for participants in the experiment.

496 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study efficient Bayes-Nash incentive compatible mechanisms in a social choice setting that allows for informational and allocative externalities, and show that such mechanisms exist only if a congruence condition relating private and social rates of information substitution is satisfied.
Abstract: We study efficient, Bayes-Nash incentive compatible mechanisms in a social choice setting that allows for informational and allocative externalities. We show that such mechanisms exist only if a congruence condition relating private and social rates of information substitution is satisfied. If signals are multi-dimensional, the congruence condition is determined by an integrability constraint, and it can hold only in nongeneric cases where values are private or a certain symmetry assumption holds. If signals are one-dimensional, the congruence condition reduces to a monotonicity constraint and it can be generically satisfied. We apply the results to the study of multi-object auctions, and we discuss why such auctions cannot be reduced to one-dimensional models without loss of generality.

Journal ArticleDOI
TL;DR: The PPP puzzle is based on empirical evidence that international price differences for individual goods (LOOP) or baskets of goods (PPP) appear highly persistent or even nonstationary as discussed by the authors.
Abstract: The PPP puzzle is based on empirical evidence that international price differences for individual goods (LOOP) or baskets of goods (PPP) appear highly persistent or even nonstationary. The present consensus is that these price differences have a half-life that is of the order of five years at best, and infinity at worst. This seems unreasonable in a world where transportation and transaction costs appear so low as to encourage arbitrage and the convergence of price gaps over much shorter horizons, typically days or weeks. However, current empirics rely on a particular choice of methodology, involving (i) relatively low-frequency monthly, quarterly, or annual data, and (ii) a linear model specification. In fact, these methodological choices are not innocent, and they can be shown to bias analysis towards findings of slow convergence and a random walk. Intuitively, if we suspect that the actual adjustment horizon is of the order of days, then monthly and annual data cannot be expected to reveal it. If we suspect arbitrage costs are high enough to produce a substantial “band of inaction,” then a linear model will fail to support convergence if the process spends considerable time random-walking in that band. Thus, when testing for PPP or LOOP, model specification and data sampling should not proceed without consideration of the actual institutional context and logistical framework of markets.
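The "band of inaction" argument is easy to see in a simulation. The sketch below is ours, with arbitrary parameters: it generates a process that random-walks inside a band and mean-reverts outside it, then fits a misspecified linear AR(1). The fitted root is close to one and the implied half-life is long, even though adjustment outside the band is fast.

```python
import numpy as np

rng = np.random.default_rng(0)
T, band, rho_out = 10_000, 1.0, 0.7   # arbitrary illustrative parameters
q = np.zeros(T)
for t in range(1, T):
    prev = q[t - 1]
    # random walk inside the band, mean reversion outside it
    pulled = rho_out * prev if abs(prev) > band else prev
    q[t] = pulled + 0.1 * rng.standard_normal()

rho_linear = np.polyfit(q[:-1], q[1:], 1)[0]   # misspecified linear AR(1)
print("linear AR(1) coefficient:", rho_linear)
print("implied half-life (periods):", np.log(0.5) / np.log(rho_linear))
```

This is the bias the abstract describes: the linear model averages over the random-walk band and the fast-adjusting region, producing a near-unit root and an exaggerated half-life.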

Journal ArticleDOI
TL;DR: In this paper, the decision problem of a hyperbolic consumer who faces stochastic income and a borrowing constraint is solved by using the bounded variation calculus to derive the Hyperbolic Euler Relation, a natural generalization of the standard exponential Euler relation.
Abstract: Laboratory and field studies of time preference find that discount rates are much greater in the short-run than in the long-run. Hyperbolic discount functions capture this property. This paper solves the decision problem of a hyperbolic consumer who faces stochastic income and a borrowing constraint. The paper uses the bounded variation calculus to derive the Hyperbolic Euler Relation, a natural generalization of the standard Exponential Euler Relation. The Hyperbolic Euler Relation implies that consumers act as if they have endogenous rates of time preference that rise and fall with the future marginal propensity to consume (e.g., discount rates that endogenously range from 5% to 41% for the example discussed in the paper).
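For reference, the relation has (as we recall it, so treat as a sketch) the form

u'(c_t) = E_t\left[ R \delta \left( \beta \frac{\partial c_{t+1}}{\partial x_{t+1}} + 1 - \frac{\partial c_{t+1}}{\partial x_{t+1}} \right) u'(c_{t+1}) \right],

where \beta < 1 is the short-run discount factor, \delta the long-run factor, and \partial c_{t+1}/\partial x_{t+1} the future marginal propensity to consume out of cash-on-hand x. The effective discount factor \delta(\beta \cdot MPC + 1 - MPC) shrinks when the future MPC is high, which is exactly the endogenous time preference the abstract describes; setting \beta = 1 recovers the standard Exponential Euler Relation.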

Report SeriesDOI
TL;DR: In this paper, the conditional variance of the income shocks is modelled as a parsimonious ARCH process with both observable and unobserved heterogeneity, and the empirical analysis is conducted on data drawn from the 1967-1992 Panel Study of Income Dynamics.
Abstract: Recent theoretical work has shown the importance of measuring microeconomic uncertainty for models of both general and partial equilibrium under imperfect insurance. In this paper the assumption of i.i.d. income innovations used in previous empirical studies is removed and the focus of the analysis is placed on models for the conditional variance of income shocks, which is related to the measure of risk emphasized by the theory. We first discriminate amongst various models of earnings determination that separate income shocks into idiosyncratic transitory and permanent components. We allow for education- and time-specific differences in the stochastic process for earnings and for measurement error. The conditional variance of the income shocks is modelled as a parsimonious ARCH process with both observable and unobserved heterogeneity. The empirical analysis is conducted on data drawn from the 1967–1992 Panel Study of Income Dynamics. We find strong evidence of sizeable ARCH effects as well as evidence of unobserved heterogeneity in the variances.
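A generic specification in the spirit described (our notation, not the paper's) is

\mathrm{Var}_{t-1}(\varepsilon_{it}) = \alpha \varepsilon_{i,t-1}^2 + x_{it}'\gamma + \nu_i,

where \varepsilon_{it} is the income innovation, x_{it} carries observable heterogeneity such as education and time effects, and \nu_i is unobserved heterogeneity in variances. "Sizeable ARCH effects" then means \alpha is economically and statistically significant, so a bad income draw this year raises the risk a household faces next year.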

Journal ArticleDOI
TL;DR: In this paper, the author analyzes a class of games of incomplete information where each agent has private information about her own type, and the types are drawn from an atomless joint probability distribution.
Abstract: This paper analyzes a class of games of incomplete information where each agent has private information about her own type, and the types are drawn from an atomless joint probability distribution. The main result establishes existence of pure strategy Nash equilibria (PSNE) under a condition we call the single crossing condition (SCC), roughly described as follows: whenever each opponent uses a nondecreasing strategy (in the sense that higher types choose higher actions), a player's best response strategy is also nondecreasing. When the SCC holds, a PSNE exists in every finite-action game. Further, for games with continuous payoffs and a continuum of actions, there exists a sequence of PSNE to finite-action games that converges to a PSNE of the continuum-action game. These convergence and existence results also extend to some classes of games with discontinuous payoffs, such as first-price auctions, where bidders may be heterogeneous and reserve prices are permitted. Finally, the paper characterizes the SCC based on properties of utility functions and probability distributions over types. Applications include first-price, multi-unit, and all-pay auctions; pricing games with incomplete information about costs; and noisy signaling games.

Journal ArticleDOI
TL;DR: In this paper, an asymptotic theory for nonlinear regression with integrated processes is developed, and sufficient conditions for weak consistency are given and a limit distribution theory is provided, which is mixed normal with mixing variates that depend on the sojourn time of the limiting Brownian motion of the integrated process.
Abstract: An asymptotic theory is developed for nonlinear regression with integrated processes. The models allow for nonlinear effects from unit root time series and therefore deal with the case of parametric nonlinear cointegration. The theory covers integrable and asymptotically homogeneous functions. Sufficient conditions for weak consistency are given and a limit distribution theory is provided. The rates of convergence depend on the properties of the nonlinear regression function, and are shown to be as slow as n^{1/4} for integrable functions, and to be generally polynomial in n^{1/2} for homogeneous functions. For regressions with integrable functions, the limiting distribution theory is mixed normal with mixing variates that depend on the sojourn time of the limiting Brownian motion of the integrated process.

Journal ArticleDOI
TL;DR: In this article, the authors consider the case where the null hypothesis may lie on the boundary of the maintained hypothesis and there may be a nuisance parameter that appears under the alternative hypothesis, but not under the null.
Abstract: This paper considers testing problems where several of the standard regularity conditions fail to hold. We consider the case where (i) parameter vectors in the null hypothesis may lie on the boundary of the maintained hypothesis and (ii) there may be a nuisance parameter that appears under the alternative hypothesis, but not under the null. The paper establishes the asymptotic null and local alternative distributions of quasi-likelihood ratio, rescaled quasi-likelihood ratio, Wald, and score tests in this case. The results apply to tests based on a wide variety of extremum estimators and apply to a wide variety of models. Examples treated in the paper are: (i) tests of the null hypothesis of no conditional heteroskedasticity in a GARCH(1, 1) regression model and (ii) tests of the null hypothesis that some random coefficients have variances equal to zero in a random coefficients regression model with (possibly) correlated random coefficients.

Journal ArticleDOI
TL;DR: This paper develops Bayesian estimation of nonlinear stochastic differential equations from discretely sampled observations, using tuned MCMC methods together with the Euler-Maruyama discretization scheme.
Abstract: This paper is concerned with the Bayesian estimation of nonlinear stochastic differential equations when observations are discretely sampled. The estimation framework relies on the introduction of latent auxiliary data to complete the missing diffusion between each pair of measurements. Tuned Markov chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm, in conjunction with the Euler-Maruyama discretization scheme, are used to sample the posterior distribution of the latent data and the model parameters. Techniques for computing the likelihood function, the marginal likelihood, and diagnostic measures (all based on the MCMC output) are developed. Examples using simulated and real data are presented and discussed in detail.
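A stripped-down sketch of the augmentation step may clarify the mechanics. This is a generic random-walk Metropolis-Hastings update under the Euler-Maruyama transition density, not the paper's tuned sampler; the drift `mu`, diffusion `sigma`, parameter vector `theta`, and step size are placeholders of ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_loglik(path, dt, theta, mu, sigma):
    """Euler-Maruyama transition log-likelihood along a discretized path."""
    x, xn = path[:-1], path[1:]
    mean = x + mu(x, theta) * dt
    var = sigma(x, theta) ** 2 * dt
    return np.sum(-0.5 * np.log(2 * np.pi * var) - (xn - mean) ** 2 / (2 * var))

def update_latent_block(path, dt, theta, mu, sigma, step=0.05):
    """Random-walk MH update of the latent points between two observations
    (endpoints held fixed). Sketch of the data-augmentation step only."""
    prop = path.copy()
    prop[1:-1] += step * rng.standard_normal(len(path) - 2)
    log_ratio = (euler_loglik(prop, dt, theta, mu, sigma)
                 - euler_loglik(path, dt, theta, mu, sigma))
    return prop if np.log(rng.uniform()) < log_ratio else path
```

In a full sampler this update alternates with draws of the parameters given the completed path, and the number of latent points per observation interval controls the discretization error.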

Journal ArticleDOI
TL;DR: In this article, the authors derived simple mean-square error criteria that can be minimized to choose the instrument set, and developed these criteria for two-stage least squares, limited information maximum likelihood (LIML), and a bias adjusted version of 2SLS.
Abstract: Properties of instrumental variable estimators are sensitive to the choice of valid instruments, even in large cross-section applications. In this paper we address this problem by deriving simple mean-square error criteria that can be minimized to choose the instrument set. We develop these criteria for two-stage least squares (2SLS), limited information maximum likelihood (LIML), and a bias adjusted version of 2SLS (B2SLS). We give a theoretical derivation of the mean-square error and show optimality. In Monte Carlo experiments we find that the instrument choice generally yields an improvement in performance. Also, in the Angrist and Krueger (1991) returns to education application, when the instrument set is chosen in the way we consider, it turns out that both 2SLS and LIML give similar (large) returns to education.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a new test of a parametric model of a conditional mean function against a nonparametric alternative, which adapts to the unknown smoothness of the alternative model and is uniformly consistent against alternatives whose distance from the parametric models converges to zero at the fastest possible rate.
Abstract: We develop a new test of a parametric model of a conditional mean function against a nonparametric alternative. The test adapts to the unknown smoothness of the alternative model and is uniformly consistent against alternatives whose distance from the parametric model converges to zero at the fastest possible rate. This rate is slower than n^{-1/2}. Some existing tests have nontrivial power against restricted classes of alternatives whose distance from the parametric model decreases at the rate n^{-1/2}. There are, however, sequences of alternatives against which these tests are inconsistent and ours is consistent. As a consequence, there are alternative models for which the finite-sample power of our test greatly exceeds that of existing tests. This conclusion is illustrated by the results of some Monte Carlo experiments.

Journal ArticleDOI
TL;DR: In this article, the authors extend Kreps' (1979) analysis of preference for flexibility, reinterpreted by Kreps (1992) as a model of unforeseen contingencies, and obtain uniqueness results that were not possible in Kreps' model.
Abstract: We extend Kreps' (1979) analysis of preference for flexibility, reinterpreted by Kreps (1992) as a model of unforeseen contingencies. We enrich the choice set, consequently obtaining uniqueness results that were not possible in Kreps' model. We consider several representations and allow the agent to prefer commitment in some contingencies. In the representations, the agent acts as if she had coherent beliefs about a set of possible future (ex post) preferences, each of which is an expected-utility preference. We show that this set of ex post preferences, called the subjective state space, is essentially unique given the restriction that all ex post preferences are expected-utility preferences, and is minimal even without this restriction. Because the subjective state space is identified, the way ex post utilities are aggregated into an ex ante ranking is also essentially unique. Hence when a representation that is additive across states exists, the additivity is meaningful in the sense that all representations are intrinsically additive. Uniqueness enables us to show that the size of the subjective state space provides a measure of the agent's uncertainty about future contingencies and that the way the states are aggregated indicates whether these contingencies lead to a desire for flexibility or commitment.
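In the additive case, the representation has the form (sketched from memory, so treat as indicative)

V(A) = \int_S \max_{\beta \in A} U(\beta, s)\, d\mu(s),

where A is a menu, S is the subjective state space, each U(\cdot, s) is an expected-utility function, and \mu aggregates the ex post states. Preference for flexibility arises because enlarging the menu can only raise the inner maximum; allowing some states to enter with negative weight is what accommodates the preference for commitment the abstract mentions.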

Journal ArticleDOI
TL;DR: In this article, the authors analyzed long-term debt and optimal policy in the fiscal theory and found that the maturity structure of the debt matters, and that the expected pattern of future state-contingent debt sales, repurchases and redemptions is important for the effects of a debt operation.
Abstract: The fiscal theory says that the price level is determined by the ratio of nominal debt to the present value of real primary surpluses. I analyze long-term debt and optimal policy in the fiscal theory. I find that the maturity structure of the debt matters. For example, it determines whether news of future deficits implies current inflation or future inflation. When long-term debt is present, the government can trade current inflation for future inflation by debt operations; this tradeoff is not present if the government rolls over short-term debt. The maturity structure of outstanding debt acts as a "budget constraint" determining which periods' price levels the government can affect by debt variation alone. In addition, debt policy, the expected pattern of future state-contingent debt sales, repurchases, and redemptions, matters crucially for the effects of a debt operation. I solve for optimal debt policies to minimize the variance of inflation. I find cases in which long-term debt helps to stabilize inflation. I also find that the optimal policy produces time series that are similar to U.S. surplus and debt time series. To understand the data, I must assume that debt policy offsets the inflationary impact of cyclical surplus shocks, rather than causing price level disturbances by policy-induced shocks. Shifting the objective from price level variance to inflation variance, the optimal policy produces much less volatile inflation at the cost of a unit root in the price level; this is consistent with the stabilization of U.S. inflation after the gold standard was abandoned.

Journal ArticleDOI
TL;DR: In this article, the sensitivity of bidders demanding multiple units of a homogeneous commodity to the demand reduction incentives inherent in uniform price auctions was investigated, and the behavioral process underlying these differences along with dynamic Vickrey auctions designed to eliminate the inefficiencies resulting from demand reduction in the uniform price auction was explored.
Abstract: We experimentally investigate the sensitivity of bidders demanding multiple units of a homogeneous commodity to the demand reduction incentives inherent in uniform price auctions. There is substantial demand reduction in both sealed bid and ascending price clock auctions with feedback regarding rivals' drop-out prices. Although both auctions have the same normal form representation, bidding is much closer to equilibrium in the ascending price auctions. We explore the behavioral process underlying these differences along with dynamic Vickrey auctions designed to eliminate the inefficiencies resulting from demand reduction in the uniform price auctions.
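A stylized example (ours, not the experimental design) shows the incentive at work. Two identical units are sold to two bidders, each valuing every unit at 10, in a uniform price auction where the price equals the highest rejected bid. If both bidders bid (10, 10), each wins one unit at a price of 10 and earns nothing. If each instead bids (10, 0), each still wins one unit, but the highest rejected bid is now 0, so each earns 10. Shading the bid on later units lowers the price paid on earlier ones, which is the demand-reduction incentive the experiment is built around.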

Journal ArticleDOI
TL;DR: In this article, the revelation principle is applied to contracting problems between a principal and a single agent, where the principal may optimally use a direct mechanism under which truthful revelation is an optimal strategy for the agent.
Abstract: This paper extends the revelation principle to environments in which the mechanism designer cannot fully commit to the outcome induced by the mechanism. We show that he may optimally use a direct mechanism under which truthful revelation is an optimal strategy for the agent. In contrast with the conventional revelation principle, however, the agent may not use this strategy with probability one. Our results apply to contracting problems between a principal and a single agent. By reducing such problems to well-defined programming problems they provide a basic tool for studying imperfect commitment.

Journal ArticleDOI
TL;DR: In this article, the authors compare three stag hunt games that have the same best-response correspondence and the same expected payoff from the mixed equilibrium, but differ in the incentive to play a best response rather than an inferior response.
Abstract: This paper reports an experiment comparing three stag hunt games that have the same best-response correspondence and the same expected payoff from the mixed equilibrium, but differ in the incentive to play a best response rather than an inferior response. In each game, risk dominance conflicts with payoff dominance and selects an inefficient pure strategy equilibrium. We find statistically and economically significant evidence that the differences in the incentive to optimize help explain observed behavior.
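For readers unfamiliar with the tension, a generic stag hunt (illustrative parameters, not the experiment's) looks like this, with payoffs listed as (row, column):

          Stag      Hare
Stag      9, 9      0, 8
Hare      8, 0      7, 7

(Stag, Stag) is payoff dominant, but Hare is the safer action: (Hare, Hare) is risk dominant because the product of deviation losses, 7 x 7, exceeds 1 x 1. Rescaling the payoff differences changes the incentive to play a best response while preserving the best-response correspondence and the mixed-equilibrium probabilities, which is the kind of manipulation the experiment exploits.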

Journal ArticleDOI
TL;DR: In this article, a stochastic algorithm for computing symmetric Markov perfect equilibria is proposed, which computes equilibrium policy and value functions, and generates a transition kernel for the evolution of the state of the system.
Abstract: This paper introduces a stochastic algorithm for computing symmetric Markov perfect equilibria. The algorithm computes equilibrium policy and value functions, and generates a transition kernel for the (stochastic) evolution of the state of the system. It has two features that together imply that it need not be subject to the curse of dimensionality. First, the integral that determines continuation values is never calculated; rather it is approximated by a simple average of returns from past outcomes of the algorithm, an approximation whose computational burden is not tied to the dimension of the state space. Second, iterations of the algorithm update value and policy functions at a single point (rather than at all possible points) in the state space. Random draws from a distribution set by the updated policies determine the location of the next iteration's updates. This selection only repeatedly hits the recurrent class of points, a subset whose cardinality is not directly tied to that of the state space. Numerical results for industrial organization problems show that our algorithm can increase speed and decrease memory requirements by several orders of magnitude.
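The two dimension-breaking features translate into very little code. The sketch below is ours, with a hypothetical `step` primitive standing in for the model (per-period payoff, a simulated successor state, and a continuation value read off the current estimates); the discount factor 0.925 is arbitrary.

```python
import numpy as np
from collections import defaultdict

def run_stochastic_algorithm(initial_state, step, n_iter=100_000):
    """Sketch of an asynchronous stochastic algorithm in the spirit described
    above. `step(state, V)` is a user-supplied (hypothetical) model primitive
    returning (payoff, next_state, continuation_value_estimate)."""
    V = defaultdict(float)      # value estimates, stored only for visited states
    counts = defaultdict(int)   # visit counts for the running average
    s = initial_state
    for _ in range(n_iter):
        payoff, s_next, cont = step(s, V)
        counts[s] += 1
        target = payoff + 0.925 * cont        # one simulated return, no integral
        V[s] += (target - V[s]) / counts[s]   # running average at the visited state
        s = s_next                            # next update located by simulation
    return V
```

The integral over successor states is never formed: each visit contributes one simulated return to a running average, and updates happen only at the state actually visited, so memory and work scale with the recurrent class rather than the full state space.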

Journal ArticleDOI
TL;DR: In this article, a behavioral definition of ambiguity in an abstract setting where objects of choice are Savage-style acts is proposed, and axioms are described that deliver probabilistic sophistication of preference on the set of unambiguous acts.
Abstract: This paper suggests a behavioral definition of (subjective) ambiguity in an abstract setting where objects of choice are Savage-style acts. Then axioms are described that deliver probabilistic sophistication of preference on the set of unambiguous acts. In particular, both the domain and the values of the decision-maker's probability measure are derived from preference. It is argued that the noted result also provides a decision-theoretic foundation for the Knightian distinction between risk and ambiguity.

Journal ArticleDOI
TL;DR: In this paper, the authors compare two different models in a common environment: the first model has liquidity constraints, in which consumers save a single asset that they cannot sell short, and the second model has debt constraints in that consumers cannot borrow so much that they would want to default, but is otherwise a standard complete markets model.
Abstract: This paper compares two different models in a common environment. The first model has liquidity constraints in that consumers save a single asset that they cannot sell short. The second model has debt constraints in that consumers cannot borrow so much that they would want to default, but is otherwise a standard complete markets model. Both models share the features that individuals are unable to completely insure against idiosyncratic shocks and that interest rates are lower than subjective discount rates. In a stochastic environment, the two models have quite different dynamic properties, with the debt constrained model exhibiting simple stochastic steady states, while the liquidity constrained model has greater persistence of shocks.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the allocations associated with equilibria relative to any ad hoc set of feasible mechanisms can be reproduced as equilibria relative to (some subset of) the set of menus.
Abstract: In the common agency problem, multiple mechanism designers simultaneously attempt to control the behavior of a single privately informed agent. The paper shows that the allocations associated with equilibria relative to any ad hoc set of feasible mechanisms can be reproduced as equilibria relative to (some subset of) the set of menus. Furthermore, equilibria relative to the set of menus are weakly robust in the sense that it is possible to find continuation equilibria so that the equilibrium allocations persist even when the set of feasible mechanisms is enlarged.

Journal ArticleDOI
TL;DR: In this paper, the authors presented a characterization of the equilibrium value set of a Ramsey tax model and developed a dynamic programming method for a class of policy games between the government and a continuum of households.
Abstract: This paper presents a full characterization of the equilibrium value set of a Ramsey tax model. More generally, it develops a dynamic programming method for a class of policy games between the government and a continuum of households. By selectively incorporating Euler conditions into a strategic dynamic programming framework, we wed two technologies that are usually considered competing alternatives, resulting in a substantial simplification of the problem.