
Showing papers in "The American Economic Review in 2007"


Journal ArticleDOI
TL;DR: In this paper, a dynamic stochastic general equilibrium (DSGE) model for the US economy is proposed, which incorporates many types of real and nominal frictions: sticky nominal price and wage setting, habit formation in consumption, investment adjustment costs, variable capital utilisation and fixed costs in production.
Abstract: We estimate a dynamic stochastic general equilibrium (DSGE) model for the US economy. The model incorporates many types of real and nominal frictions: sticky nominal price and wage setting, habit formation in consumption, investment adjustment costs, variable capital utilisation and fixed costs in production. It also contains many types of shocks including productivity, labour supply, investment, preference, cost-push and monetary policy shocks. Using Bayesian estimation techniques, the relative importance of the various frictions and shocks in explaining the US business cycle is empirically investigated. We also show that this model is able to outperform standard VAR and BVAR models in out-of-sample prediction.
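The Bayesian estimation the abstract refers to is, at heart, posterior sampling over model parameters. A minimal sketch of that machinery, using a toy AR(1) model and a random-walk Metropolis sampler in place of the paper's full DSGE likelihood (all parameter values here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy AR(1) "economy": y_t = rho * y_{t-1} + eps_t
rho_true, sigma = 0.9, 1.0
T = 200
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.normal(0, sigma)

def log_posterior(rho):
    """Gaussian AR(1) likelihood plus a flat prior on (-1, 1)."""
    if not -1 < rho < 1:
        return -np.inf
    resid = y[1:] - rho * y[:-1]
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis: the same idea DSGE estimation uses at scale.
draws, rho_cur, lp_cur = [], 0.0, log_posterior(0.0)
for _ in range(20000):
    rho_prop = rho_cur + rng.normal(0, 0.05)
    lp_prop = log_posterior(rho_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        rho_cur, lp_cur = rho_prop, lp_prop
    draws.append(rho_cur)

posterior = np.array(draws[5000:])  # drop burn-in
print(f"posterior mean of rho: {posterior.mean():.3f} (true {rho_true})")
```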

3,115 citations


Posted Content
TL;DR: The phenomenon of superstars, wherein relatively small numbers of people earn enormous amounts of money and seem to dominate the fields in which they are engaged, is examined from the standpoint of an economist.
Abstract: Who in recent years has not felt his gorge rise upon learning the staggeringly high salary of a shortstop, a movie star, an opera singer? A basketball player on a losing team earns $1.2 million; an author sells the paperback rights to his book for $800,000; a television interviewer switches networks and signs a contract calling for her to receive an annual income of just under $2 million. And the gorge continues to rise. The spectacle of people doing work that doesn't always seem overweighted with significance for annual (and, in the case of rock singers, sometimes nightly) sums of money that figure to exceed what you and I may earn in our lifetimes: this, as they say nowadays, does not give off good vibes. What's going on here? What we are talking about, of course, is the phenomenon of superstars, wherein relatively small numbers of people earn enormous amounts of money and seem to dominate the fields in which they are engaged. This phenomenon appears to be increasingly important in the modern world; certainly, with the breakdown of economic privacy, it is an increasingly visible phenomenon. The very word superstar implies inflation in our most precious currency, language; to be a star would have been sufficient in my youth. Yet we appear to be stuck with the term. As for the phenomenon itself, viewed from the standpoint of an economist, it may not be as puzzling as it at first glimpse seems. The first thing to be said in this connection is that certain economic activities admit extreme concentration of both personal reward and market size among a handful of participants. Every economic activity supports considerable diversity of talent and significant inequality in the personal distribution of rewards. Activities where superstars are found differ from those in which most of us make our livings by supporting much less diversity and much more inequality in the distribution of earnings. The bulk of earnings goes to relatively small numbers of practitioners, typically the few regarded as among the best in their fields. Similar distributions of earnings in the industrial sector would ultimately come to the attention of the Federal Trade Commission or the

2,091 citations


Journal ArticleDOI
TL;DR: Cunha and Heckman discuss the technology of skill formation.
Abstract: Flavio Cunha and James Heckman. May 2007. "The Technology of Skill Formation." Chicago: American Economic Association.

1,885 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate the generalized second-price (GSP) auction, a new mechanism used by search engines to sell online advertising, and show that the corresponding generalized English auction has a unique equilibrium, with the same payoffs to all players as the dominant strategy equilibrium of VCG.
Abstract: We investigate the "generalized second-price" (GSP) auction, a new mechanism used by search engines to sell online advertising. Although GSP looks similar to the Vickrey-Clarke-Groves (VCG) mechanism, its properties are very different. Unlike the VCG mechanism, GSP generally does not have an equilibrium in dominant strategies, and truth-telling is not an equilibrium of GSP. To analyze the properties of GSP, we describe the generalized English auction that corresponds to GSP and show that it has a unique equilibrium. This is an ex post equilibrium, with the same payoffs to all players as the dominant strategy equilibrium of VCG.
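The GSP and VCG payment rules the abstract contrasts can be written in a few lines. A minimal sketch for the standard position-auction setting, with hypothetical click-through rates and bids; the VCG rule charges each slot the externality it imposes on the bidders below:

```python
# Position auction with click-through rates alpha_1 >= alpha_2 >= ...
# Bidders are ranked by bid; slot i goes to the i-th highest bidder.
alphas = [0.30, 0.20, 0.10]                          # hypothetical clicks per slot
bids = sorted([10.0, 7.0, 4.0, 2.0], reverse=True)   # hypothetical per-click bids

S = len(alphas)

# GSP: the bidder in slot i pays the (i+1)-th highest bid per click.
gsp = [bids[i + 1] * alphas[i] for i in range(S)]

# VCG: the bidder in slot i pays the externality imposed on lower slots:
# sum over j >= i of (alpha_j - alpha_{j+1}) * b_{j+1}, with alpha_{S+1} = 0.
padded = alphas + [0.0]
vcg = [sum((padded[j] - padded[j + 1]) * bids[j + 1] for j in range(i, S))
       for i in range(S)]

for i in range(S):
    print(f"slot {i+1}: GSP pays {gsp[i]:.2f}, VCG pays {vcg[i]:.2f}")
```

For the last slot the two rules coincide; higher up, GSP charges more than VCG at the same bids, which is consistent with the paper's observation that truth-telling is not an equilibrium of GSP.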

1,406 citations


Journal ArticleDOI
TL;DR: In this article, the productivity gains from reducing tariffs on final goods and from reducing tariffs on intermediate inputs are estimated. The authors show that a 10 percentage point fall in input tariffs leads to a productivity gain of 12 percent for firms that import their inputs.
Abstract: This paper estimates the productivity gains from reducing tariffs on final goods and from reducing tariffs on intermediate inputs. Lower output tariffs can increase productivity by inducing tougher import competition, whereas cheaper imported inputs can raise productivity via learning, variety, and quality effects. We use Indonesian manufacturing census data from 1991 to 2001, which include plant-level information on imported inputs. The results show that a 10 percentage point fall in input tariffs leads to a productivity gain of 12 percent for firms that import their inputs, at least twice as high as any gains from reducing output tariffs. (JEL F12, F13, L16, O14, O19, O24)
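The headline estimate comes from plant-level regressions of productivity on tariffs. A hedged sketch of that kind of specification on simulated data; the variable names and data-generating process are hypothetical, the assumed coefficient is chosen to match the headline number, and the paper's actual regressions include fixed effects and controls omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical plant-year data: output tariffs, input tariffs, importer dummy.
output_tariff = rng.uniform(0, 0.4, n)
input_tariff = rng.uniform(0, 0.4, n)
importer = rng.integers(0, 2, n)

# Assumed coefficient of -1.2 on the interaction, so a 10-point tariff
# fall raises importers' log TFP by 0.12 (about 12 percent).
log_tfp = (-0.3 * output_tariff
           - 1.2 * input_tariff * importer
           + rng.normal(0, 0.5, n))

# OLS of log TFP on output tariffs and the input-tariff x importer term.
X = np.column_stack([np.ones(n), output_tariff, input_tariff * importer])
beta, *_ = np.linalg.lstsq(X, log_tfp, rcond=None)
print(f"input-tariff x importer coefficient: {beta[2]:.2f}")
print(f"implied gain from a 10pp tariff cut: {-beta[2] * 0.10:+.1%}")
```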

1,303 citations


Posted Content
TL;DR: In the guessing game, out-of-equilibrium behavior can be explained by players engaging in a finite depth of reasoning about one another's beliefs: a player may select a number at random or pick one that is salient to him (zero order), or best-respond to beliefs about the others' choices (higher orders).
Abstract: Consider the following game: a large number of players have to state simultaneously a number in the closed interval [0, 100]. The winner is the person whose chosen number is closest to the mean of all chosen numbers multiplied by a parameter p, where p is a predetermined positive parameter of the game; p is common knowledge. The payoff to the winner is a fixed amount, which is independent of the stated number and p. If there is a tie, the prize is divided equally among the winners. The other players, whose chosen numbers are further away, receive nothing.¹ The game is played for four rounds by the same group of players. After each round, all chosen numbers, the mean, p times the mean, the winning numbers, and the payoffs are presented to the subjects. For 0 ≤ p < 1, there exists only one Nash equilibrium: all players announce zero. Also for the repeated supergame, all Nash equilibria induce the same announcements and payoffs as in the one-shot game. Thus, game theory predicts an unambiguous outcome. The structure of the game is favorable for investigating whether and how a player's mental process incorporates the behavior of the other players in conscious reasoning. An explanation proposed for out-of-equilibrium behavior involves subjects engaging in a finite depth of reasoning on players' beliefs about one another. In the simplest case, a player selects a strategy at random without forming beliefs or picks a number that is salient to him (zero-order belief). A somewhat more sophisticated player forms first-order beliefs on the behavior of the other players: he thinks that the others select a number at random, and he chooses his best response to this belief. Or he forms second-order beliefs on the first-order beliefs of the others, and maybe nth-order beliefs about the (n − 1)th-order beliefs of the others, but only up to a finite n, called the depth of reasoning. The idea that players employ finite depths of reasoning has been studied by various theorists (see, e.g., Kenneth Binmore, 1987, 1988; Reinhard Selten, 1991; Robert Aumann, 1992; Michael Bacharach, 1992; Cristina Bicchieri, 1993; Dale O. Stahl, 1993). There is also the famous discussion of newspaper competitions by John M. Keynes (1936, p. 156), who describes the mental process of competitors confronted with picking the face that is closest to the mean preference of all competitors.² Keynes's game, which he considered a Gedankenexperiment, has p = 1. However, with p = 1, one cannot distinguish between different steps of reasoning by actual subjects in an experiment. There are some experimental studies in which reasoning processes have been analyzed in ways similar to the analysis in this paper. Judith Mehta et al. (1994), who studied behavior in two-person coordination games, suggest that players coordinate by either applying depth of reasoning of order 1 or by picking a focal point (Thomas C. Schelling, 1964), which they call "Schelling salience." Stahl and Paul W. Wilson (1994) analyzed behavior in symmetric 3 × 3 games and concluded that subjects were using depths of reasoning of orders 1 or 2 or a Nash-equilibrium strategy.
* Department of Economics, Universitat Pompeu Fabra, Balmes 132, Barcelona 08008, Spain. Financial support from Deutsche Forschungsgemeinschaft (DFG) through Sonderforschungsbereich 303 and a postdoctoral fellowship from the University of Pittsburgh are gratefully acknowledged. I thank Reinhard Selten, Dieter Balkenborg, Ken Binmore, John Duffy, Michael Mitzkewitz, Alvin Roth, Karim Sadrieh, Chris Starmer, and two anonymous referees for helpful discussions and comments. I learned about the guessing game in a game-theory class given by Roger Guesnerie, who used the game as a demonstration experiment.
¹ The game is mentioned, for example, by Herve Moulin (1986), as an example to explain rationalizability, and by Mario H. Simonsen (1988).
² This metaphor is frequently mentioned in the macroeconomic literature (see, e.g., Roman Frydman, 1982).
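The finite-depth-of-reasoning account is easy to make concrete. A minimal sketch, assuming level-0 players choose uniformly on [0, 100] (so their mean is 50) and each higher level best-responds to the level below; p = 2/3 is a common parameterization in such experiments:

```python
p = 2 / 3  # the multiplier; a common choice in guessing-game experiments

# A level-k player best-responds to the belief that everyone else
# reasons at level k-1, giving the choice p**k * 50.
for k in range(6):
    print(f"level {k}: chooses {p**k * 50:.2f}")

# Iterating the best response drives choices to the unique Nash
# equilibrium of zero for any 0 <= p < 1.
guess = 50.0
for _ in range(200):
    guess = p * guess
print(f"after many rounds of reasoning: {guess:.6f}")
```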

1,221 citations


Journal ArticleDOI
TL;DR: In this reply, the authors acknowledge coding and data errors uncovered by Fisher et al. in Deschenes and Greenstone (2007) and reassess that paper's estimates of the impact of climate change on the US agricultural sector.
Abstract: Fisher et al. (2012) (hereafter, FHRS) have uncovered coding and data errors in our paper, Deschenes and Greenstone (2007) (hereafter, DG). We acknowledge and are embarrassed by these mistakes. We are grateful to FHRS for uncovering them. We hope that this Reply will also contribute to advancing the literature on the vital question of the impact of climate change on the US agricultural sector. FHRS' main critiques of DG are as follows: (i) there are errors in the weather data and climate change projections used by DG; (ii) the climate change projections are based on the Hadley 2 model and scenarios, rather than the more recent Hadley 3 model and scenarios; (iii) standard errors are biased due to spatial correlation; (iv) the inclusion of state by year fixed effects does not leave enough weather variation to obtain meaningful estimates of the relationship between agricultural profits and weather; (v) storage and inventory adjustment in response to yield shocks invalidate the use of annual profit data; and (vi) FHRS argue that a better-specified hedonic model produces robust estimates, unlike the results reported in DG. Four of these critiques have little basis, and we respond to them here in the introduction. Specifically, with respect to: (ii) the more recent daily climate predictions were not available when we wrote DG; nevertheless, the most important issue is providing reliable estimates, and in this note we report estimates based on the climate model we used in DG and a more recent one that we gained access to in the meantime. (iii) In the primary table on agricultural profits, DG reports two sets of standard errors, with the first clustered at the county level and the second based on a variance-covariance matrix that accounts for spatial correlation, using the method proposed in Conley (1999). Thus, the claim of FHRS seems overblown. Nevertheless, to ease comparisons of papers in this literature, this note will adopt the FHRS convention of reporting estimated standard errors clustered at the county and state levels; we find that inference is largely unaffected by the choice between these different assumptions about the variance-covariance matrix.

920 citations


Journal ArticleDOI
TL;DR: The authors use reference-dependent utility models to study preferences over monetary risk, and show that a prior expectation to take on risk decreases aversion to both the anticipated and additional risk. Because the reference point equals recent probabilistic beliefs about outcomes, the environment shapes attitudes toward modest-scale risk.
Abstract: We use Koszegi and Rabin's (2006) model of reference-dependent utility, and an extension of it that applies to decisions with delayed consequences, to study preferences over monetary risk. Because our theory equates the reference point with recent probabilistic beliefs about outcomes, it predicts specific ways in which the environment influences attitudes toward modest-scale risk. It replicates "classical" prospect theory-including the prediction of distaste for insuring losses-when exposure to risk is a surprise, but implies first-order risk aversion when a risk, and the possibility of insuring it, are anticipated. A prior expectation to take on risk decreases aversion to both the anticipated and additional risk. For large-scale risk, the model allows for standard "consumption utility" to dominate reference-dependent "gain-loss utility," generating nearly identical risk aversion across situations.
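The mechanism in the abstract, a reference point equal to recent probabilistic beliefs, can be illustrated numerically. A minimal sketch of Koszegi-Rabin-style gain-loss utility with linear consumption utility and hypothetical parameters (eta, lambda); it is a simplification of the paper's full model, but it shows a surprised agent rejecting a better-than-fair gamble that an agent who expected to bear the risk accepts:

```python
# Gain-loss function mu(): gains scaled by eta, losses by eta * lambda.
ETA, LAM = 1.0, 2.0  # hypothetical loss-aversion parameters

def mu(x):
    return ETA * x if x >= 0 else ETA * LAM * x

def utility(choice, reference):
    """Expected consumption utility plus expected gain-loss utility,
    where the reference point is a probability distribution (beliefs)."""
    return sum(pc * (c + sum(pr * mu(c - r) for r, pr in reference))
               for c, pc in choice)

gamble = [(120, 0.5), (-100, 0.5)]  # better-than-fair 50-50 bet
safe = [(0, 1.0)]

# Surprise risk: the agent expected no risk (reference = safe outcome).
print("surprised agent:  take =", utility(gamble, safe),
      " reject =", utility(safe, safe))

# Anticipated risk: the agent expected to take the gamble.
print("expectant agent:  take =", utility(gamble, gamble),
      " reject =", utility(safe, gamble))
```

With these numbers the surprised agent prefers rejecting (0 versus -30), while the expectant agent prefers taking the gamble (-45 versus -70), since rejecting now feels like a loss relative to expectations.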

901 citations


Journal ArticleDOI
TL;DR: In this article, the authors embed a model of imperfect competition and variable markups in a quantitative model of international trade and find that when their model is parameterized to match salient features of the data on international trade, it can reproduce deviations from relative purchasing power parity similar to those observed in the data because firms choose to price-to-market.
Abstract: International relative prices across industrialized countries show large and systematic deviations from relative purchasing power parity. We embed a model of imperfect competition and variable markups in a quantitative model of international trade. We find that when our model is parameterized to match salient features of the data on international trade and market structure in the United States, it can reproduce deviations from relative purchasing power parity similar to those observed in the data because firms choose to price-to-market. We then examine how pricing-to-market depends on the presence of international trade costs and various features of market structure.

741 citations


Journal ArticleDOI
TL;DR: In this article, the authors study a multisector model of growth with differences in TFP growth rates across sectors and derive sufficient conditions for the coexistence of structural change, characterized by sectoral labor reallocation, and balanced aggregate growth.
Abstract: We study a multisector model of growth with differences in TFP growth rates across sectors and derive sufficient conditions for the coexistence of structural change, characterized by sectoral labor reallocation, and balanced aggregate growth. The conditions are weak restrictions on the utility and production functions. Along the balanced growth path, labor employed in the production of consumption goods gradually moves to the sector with the lowest TFP growth rate, until in the limit it is the only sector with nontrivial employment of this kind. The employment shares of intermediate and capital goods remain constant during the reallocation process.
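The reallocation mechanism can be seen in a stripped-down, labor-only sketch. A minimal illustration, assuming two sectors with y_i = A_i * n_i and CES preferences with elasticity of substitution below one (complements); the paper's full model, with capital, intermediates, and its balanced-growth conditions, is much richer:

```python
import numpy as np

sigma = 0.5                   # hypothetical elasticity of substitution (< 1)
g = np.array([0.03, 0.01])    # hypothetical sectoral TFP growth rates
A0 = np.ones(2)

# With y_i = A_i * n_i and symmetric CES utility, the optimal allocation
# gives labor shares proportional to A_i ** (sigma - 1); for sigma < 1,
# labor flows toward the sector with the lowest TFP growth.
for t in [0, 25, 50, 100, 200]:
    A = A0 * np.exp(g * t)
    weights = A ** (sigma - 1)
    share_slow = weights[1] / weights.sum()
    print(f"t={t:3d}: labor share of slow-growth sector = {share_slow:.3f}")
```

The share of the slow-growth sector rises from 0.5 toward 1, mirroring the limit result in the abstract; with sigma = 1 (Cobb-Douglas) the shares would stay constant.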

721 citations


Posted Content
TL;DR: Card and Krueger present a meta-analysis of time-series minimum-wage studies.
Abstract: David Card and Alan B. Krueger, "Time-Series Minimum-Wage Studies: A Meta-analysis," The American Economic Review, Vol. 85, No. 2, Papers and Proceedings of the Hundredth and Seventh Annual Meeting of the American Economic Association, Washington, DC, January 6-8, 1995 (May, 1995), pp. 238-243. Published by the American Economic Association. Stable URL: http://www.jstor.org/stable/2117925

Journal ArticleDOI
TL;DR: This article found that after German reunification, East Germans are more in favor of redistribution and state intervention than West Germans, even after controlling for economic incentives, and this effect is especially strong for older cohorts, who lived under Communism for a longer time period.
Abstract: Preferences for redistribution, as well as the generosities of welfare states, differ significantly across countries. In this paper, we test whether there exists a feedback process of the economic regime on individual preferences. We exploit the “experiment” of German separation and reunification to establish exogeneity of the economic system. From 1945 to 1990, East Germans lived under a Communist regime with heavy state intervention and extensive redistribution. We find that, after German reunification, East Germans are more in favor of redistribution and state intervention than West Germans, even after controlling for economic incentives. This effect is especially strong for older cohorts, who lived under Communism for a longer time period. We further find that East Germans’ preferences converge towards those of West Germans. We calculate that it will take one to two generations for preferences to converge completely.

Journal ArticleDOI
TL;DR: In this article, the authors considered a prototypical New Keynesian model, in which the equilibrium is undetermined if monetary policy is "passive" and extended the likelihood-based estimation of dynamic equilibrium models to allow for indeterminacies and sunspot fluctuations.
Abstract: This paper considers a prototypical New Keynesian model, in which the equilibrium is undetermined if monetary policy is "passive." The likelihood-based estimation of dynamic equilibrium models is extended to allow for indeterminacies and sunspot fluctuations. We construct posterior weights for the determinacy and indeterminacy region of the parameter space and estimates for the propagation of fundamental and sunspot shocks. According to the estimated model, U.S. monetary policy post-1982 is consistent with determinacy, whereas the pre-Volcker policy is not. We find that before 1979 indeterminacy substantially altered the propagation of shocks.

Journal ArticleDOI
TL;DR: In this paper, the authors develop techniques to analyze equilibria when players are motivated, in part, by a desire to avoid guilt, offering a formal approach within psychological game theory.
Abstract: “A clear conscience is a good pillow.” Why does this old proverb contain an insight? The emotion of guilt holds a key. Psychologists report that “the prototypical cause of guilt would be the infliction of harm, loss, or distress on a relationship partner” (Roy Baumeister, Arlene M. Stillwell, and Todd F. Heatherton 1994, 245; June Price Tangney 1995). Moreover, guilt is unpleasant and may affect behavior to render the associated pangs counterfactual. Baumeister, Stillwell, and Heatherton state, “If people feel guilt for hurting their partners ... and for failing to live up to their expectations, they will alter their behavior (to avoid guilt) in ways that seem likely to maintain and strengthen the relationship.” Avoided guilt is the down of the sound sleeper’s bolster. How can guilt be modeled? How are human interaction and economic outcomes influenced? We offer a formal approach for providing answers. Start with an extensive game form which associates a monetary outcome with each end node. Say that player i lets player j down if as a result of i’s choice of strategy, j gets a lower monetary payoff than j expected to get before play started. Player i’s guilt may depend on how much he lets j down. Player i’s guilt may also depend on how much j believes i believes he lets j down. We develop techniques to analyze equilibria when players are motivated, in part, by a desire to avoid guilt. The intellectual home for our exercise is what has been called psychological game theory. This framework—originally developed by John Geanakoplos, David Pearce, and Ennio Stacchetti (1989) and recently extended by Battigalli and Dufwenberg (2005) (henceforth B&D)—allows players’ utilities to depend on beliefs (about choices, states of nature,

Journal ArticleDOI
TL;DR: It is shown that as the random/network-based meeting ratio varies, the resulting degree distributions can be ordered in the sense of stochastic dominance, which allows us to infer how the formation process affects average utility in the network.
Abstract: We present a dynamic model of network formation where nodes find other nodes with whom to form links in two ways: some are found uniformly at random, while others are found by searching locally through the current structure of the network (e.g., meeting friends of friends). This combination of meeting processes results in a spectrum of features exhibited by large social networks, including the presence of more high- and low-degree nodes than when links are formed independently at random, having low distances between nodes in the network, and having high clustering of links on a local level. We fit the model to data from six networks and impute the relative ratio of random to network-based meetings in link formation, which turns out to vary dramatically across applications. We show that as the random/network-based meeting ratio varies, the resulting degree distributions can be ordered in the sense of stochastic dominance, which allows us to infer how the formation process affects average utility in the network.
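The two meeting channels in the abstract translate directly into a growth algorithm. A minimal sketch, not the paper's exact meeting process or fitting procedure: each entrant meets some nodes uniformly at random and some through the neighbors of those random meetings, which pushes the degree distribution away from the purely uniform-random benchmark:

```python
import random
from collections import Counter

random.seed(0)

def grow_network(n, m_rand=1, m_net=1):
    """Each entering node meets m_rand nodes uniformly at random and up to
    m_net more among the neighbors of those meetings ("friends of friends"),
    then links to all of them."""
    nbrs = {0: {1}, 1: {0}}
    for i in range(2, n):
        found = set(random.sample(range(i), min(m_rand, i)))
        # search locally through the current structure of the network
        candidates = set().union(*(nbrs[j] for j in found)) - found - {i}
        if candidates:
            found |= set(random.sample(sorted(candidates),
                                       min(m_net, len(candidates))))
        nbrs[i] = set(found)
        for j in found:
            nbrs[j].add(i)
    return nbrs

net = grow_network(5000)
degrees = Counter(len(v) for v in net.values())
for d in sorted(degrees)[:8]:
    print(f"degree {d:2d}: {degrees[d]} nodes")
print("max degree:", max(degrees))
```

Raising the network-based share of meetings fattens the upper tail of the degree distribution, the direction of the stochastic-dominance ordering the abstract describes.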

Journal ArticleDOI
TL;DR: In this article, the authors present the results from a dictator game where the distribution phase is preceded by a production phase, and estimate simultaneously the prevalence of three principles of distributive justice among the players and the distribution of the weight they attach to fairness.
Abstract: A core question in the contemporary debate on distributive justice is how to understand fairness in situations involving production. Important theories of distributive justice, such as strict egalitarianism, liberal egalitarianism, and libertarianism, provide different answers to this question. This paper presents the results from a dictator game where the distribution phase is preceded by a production phase. Each player's contribution is a result of a freely chosen investment level and an exogenously given rate of return. We estimate simultaneously the prevalence of three principles of distributive justice among the players and the distribution of the weight they attach to fairness.

Journal ArticleDOI
TL;DR: In this paper, the authors present a simple condition, for the case of equal numbers of economic and VAR shocks, for checking when the state space system of an economic model and the associated VAR match up and when they do not.
Abstract: The dynamics of a linear (or linearized) dynamic stochastic economic model can be expressed in terms of matrices (A, B, C, D) that define a state space system for a vector of observables. An associated state space system (Â, B̂, Ĉ, D̂) determines a vector autoregression for those same observables. We present a simple condition for checking when these two state space systems match up, and when they do not, when there are equal numbers of economic and VAR shocks. We illustrate our condition with a permanent income example. (JEL C32, E32)
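Once the model is in (A, B, C, D) form, the check is a small linear-algebra computation. A minimal sketch with a hypothetical one-state, one-observable system; when D is square and invertible, the key object is the eigenvalues of A - B D^{-1} C:

```python
import numpy as np

def var_exists(A, B, C, D):
    """Check the condition discussed in the paper: with equal numbers of
    economic and VAR shocks (D square and invertible), the observables
    admit an infinite-order VAR representation when the eigenvalues of
    A - B D^{-1} C are all strictly inside the unit circle."""
    M = A - B @ np.linalg.inv(D) @ C
    return bool(np.all(np.abs(np.linalg.eigvals(M)) < 1))

# Toy one-state, one-observable system (hypothetical numbers).
A = np.array([[0.9]])
B = np.array([[1.0]])
C = np.array([[0.5]])
D = np.array([[1.0]])
print("VAR representation exists:", var_exists(A, B, C, D))
```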

Journal ArticleDOI
TL;DR: Building on joint work with Rachel Kranton on identity, Akerlof argues that norms, captured by utility functions that depend on people's notions of what ought to be, supply the missing motivation in macroeconomics.
Abstract: Macroeconomics changed between the early 1960s and the late 1970s. The macroeconomics of the early 1960s was avowedly Keynesian. This was manifested in the textbooks of the time, which showed a remarkable unity from the introductory through the graduate levels. John Maynard Keynes appeared, posthumously, on the cover of Time. Even Milton Friedman was famously—although perhaps misleadingly—quoted: "We are all Keynesians now." A little more than a decade later Robert Lucas and Thomas Sargent (1979) had published "After Keynesian Macroeconomics." The love-fest was over. The decline of the old-style Keynesian economics was due in part to the simultaneous rise in inflation and unemployment in the late 1960s and early 1970s. That occurrence was impossible to reconcile with the simple nonaccelerationist Phillips curves of the time. But Keynesian economics also declined because of a change in economic methodology. The Keynesians had emphasized the dependence of consumption on disposable income and, similarly, of investment on current profits and current cash flow. They posited a Phillips curve, where nominal—rather than real—wage inflation depended upon the unemployment rate, which was used as an indication of the looseness of the labor market. They based these functions on their own introspection regarding how the various actors in the economy would behave. They also brought some discipline into their judgments by estimating statistical relations. But a new school of thought, based on clas
† Presidential Address delivered at the one hundred eighteenth meeting of the American Economic Association, January 6, 2007, Chicago, IL.
* Department of Economics, University of California at Berkeley, 549 Evans Hall, Berkeley, CA 94720 (e-mail: akerlof@econ.berkeley.edu). This paper is based on a long-term research program with Rachel Kranton on the implications of identity for economic behavior. Our previous joint papers (Akerlof and Kranton 2000, 2002, 2005) have explored implications outside of macroeconomics of utility functions dependent on people's notions of what ought to be. Some of this paper—especially Section III ("The Missing Motivation: Norms") and Section IX ("Economic Methodology")—has been directly taken from our joint manuscript, The Missing Motivation: Economics Made Human (Akerlof and Kranton 2006). I am especially grateful to Professor Kranton for extending to me the invitation to join this project, after she had the initial insight in the spring of 1996 that concerns regarding identity were missing from economic theory. I have also benefited from conversations with Robert Shiller, with whom I am coauthoring work on behavioral macroeconomics. In addition, I especially wish to thank Robert Akerlof and Janet Yellen for invaluable advice. I also want to thank Roland Benabou, Alan Blinder, Louis Christofides, Stephen Cosslett, Ernst Fehr, David Hirshleifer, Houston McCulloch, John Morgan, George Perry, Antonio Rangel, Paola Sapienza, Robert Solow, Dennis Snower, and Luigi Zingales, and seminar participants at the IMF, the World Bank, Ohio State University, Vanderbilt University, the University of California at Berkeley, the Munich Behavioral Economics Summer Camp, the 2006 Macroeconomics and Individual Decision Making Conference of the NBER and the Federal Reserve Bank of Boston, and at the Social Interactions, Identity, and Well-Being, and Institutions, Organizations, and Growth groups of the CIAR. I am also grateful to Marina Halac for invaluable research assistance and to the Canadian Institute for Advanced Research and to the National Science Foundation under Research Grant SES 04-17871 for invaluable financial support.
1 See, for example, Paul A. Samuelson (1964), Thomas F. Dernburg and Duncan M. McDougall (1967), and Gardner Ackley (1961). The econometric model of Lawrence R. Klein and Arthur S. Goldberger (1955) provides a useful synopsis of the variables that the early Keynesians thought most important for a macroeconomic model, and how they would be included.
2 Time, December 31, 1965. His appearance on the cover was especially remarkable because Time covers are rarely posthumous. Keynes had died in 1946.
3 But in a later disclaimer, Friedman said, almost surely correctly, that he had been quoted out of context. See http://www.libertyhaven.com/thinkers/miltonfriedman/miltonexkeynesian.html, which quotes Friedman (1968), Dollars and Sense, 15.
4 The treatment of consumption in The General Theory, as we shall see below, was typical of such thinking. Keynes first discusses the dependence of consumption on current income, which he clearly sees as the primary determinant of current consumption; but, in addition, he makes a long list of other factors that will alter the relation between consumption and current income.
5 A good example of this methodology can be seen in Alban W. Phillips's (1958) mixture of light theory and statistical analysis in his estimation of the relation between wage inflation and unemployment.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the normative criteria that guide the allocation of a policy task to an elected politician versus an independent bureaucrat, and find that the bureaucrat is preferable for technical tasks for which ability is more important than effort, or if there is great uncertainty about whether the policymaker has the required abilities.
Abstract: This paper investigates the normative criteria that guide the allocation of a policy task to an elected politician versus an independent bureaucrat. The bureaucrat is preferable for technical tasks for which ability is more important than effort, or if there is great uncertainty about whether the policymaker has the required abilities. The optimal allocation of redistributive tasks is ambiguous, and depends on how the bureaucrat can be instructed. But irrespective of the normative conclusion, the politician prefers not to delegate redistributive policies.


Journal ArticleDOI
TL;DR: In this article, a quantitative model of consumer bankruptcy with three key features: life-cycle component, idiosyncratic earnings uncertainty, and expense uncertainty is presented, and the authors find that transitory and persistent earnings shocks have very different implications for evaluating bankruptcy rules.
Abstract: Consumer bankruptcy provides partial insurance against bad luck, but, by driving up interest rates, makes life-cycle smoothing more difficult. We argue that to assess this trade-off one needs a quantitative model of consumer bankruptcy with three key features: life-cycle component, idiosyncratic earnings uncertainty, and expense uncertainty (exogenous negative shocks to household balance sheets). We find that transitory and persistent earnings shocks have very different implications for evaluating bankruptcy rules. More persistent shocks make the bankruptcy option more desirable. Larger transitory shocks have the opposite effect. Our findings suggest the current US bankruptcy system may be desirable for reasonable parameter values. (JEL D14, D91, K35)

Journal ArticleDOI
TL;DR: The saliency of group membership is manipulated by making the group present as an audience in the corresponding room, or not. Group membership increases the aggressive stance of the hosts, and the effect of this increased aggressiveness on outcomes depends on the game: in the Battle of the Sexes it leads to coordination on an efficient, alternating outcome; in the Prisoner's Dilemma, to conflict and inefficient outcomes.
Abstract: People who are members of a group, and identify with it, behave differently from people in isolation. The way in which the behavior differs depends in subtle ways on the way in which the nature of the group is perceived, as well as on its saliency, and also on the way in which people perceive that the behavior of others is affected by the group. We study these hypotheses in a strategic experimental environment. Participants are allocated randomly to two groups (Row and Column players), and a room is assigned to each group. The saliency of the group membership is manipulated by making the group present as an audience in the corresponding room, or not. We use two stage games, the Battle of the Sexes and Prisoner's Dilemma. We show that the saliency of the group affects behavior of members, as well as the behavior of people in the other group, and that participants anticipate these effects. Group membership increases the aggressive stance of the hosts (people who have their group members in the audience). The effect on the outcomes of this increased aggressive stance depends on the game: in the Battle of the Sexes, the aggressiveness of hosts leads to coordination on an efficient, alternating outcome; in the Prisoner's Dilemma, it leads to conflict and inefficient outcomes.

Journal ArticleDOI
TL;DR: In this article, the authors argue that consumption risk explains none of the cross-sectional variation in the expected returns of their portfolios, and they conclude that there is no spread in these covariances, and that the statistical insignificance of the factor betas implies that LV's measure of the SDF is also uncorrelated with the excess returns.
Abstract: Hanno Lustig and Adrien Verdelhan (2007) claim that aggregate consumption growth risk explains the excess returns to borrowing US dollars to finance lending in other currencies. They reach this conclusion after estimating a consumption-based asset pricing model using data on the returns of portfolios of short-term foreign-currency denominated money market securities sorted according to their interest differential with the United States. Based on their evidence and additional US data, I argue that consumption risk explains none of the cross-sectional variation in the expected returns of their portfolios. Standard theory predicts that the expected excess return of an asset, E(R_t), is given by −cov(R_t, m_t), where m_t denotes some proposed stochastic discount factor (SDF). Therefore, any risk-based explanation of the cross-section of returns relies on significant spread, across portfolios, in the covariance between the returns and the SDF. For the SDFs that Lustig and Verdelhan (henceforth, LV) calibrate and estimate in their 2007 article, it is impossible to reject that there is no spread in these covariances. In fact, it is impossible to reject that these covariances are all zero. LV's SDF is linear in a vector of risk factors, so they implement a widely used two-pass procedure to estimate its parameters. The first pass is a series of time series regressions of each portfolio's excess return on the risk factors. These regressions determine the factor betas, β. When there are n portfolios and k risk factors, β is an n × k matrix. In LV's case n = 8 and k = 3. None of the individual elements of LV's estimate, β̂, is statistically different from zero. For each of the three factors, we also cannot reject the hypothesis that all eight of the relevant elements of β̂ are jointly zero. Confronted with this evidence alone, it would be reasonable to conclude that LV's model does not explain currency portfolios sorted on interest rates. The statistical insignificance of the factor betas implies that LV's measure of the SDF is also uncorrelated with the excess returns that they study. To demonstrate this, I consider three calibrations of the parameters of the SDF in order to construct time series for m_t: (i) the SDF parameters corresponding to LV's two-pass estimates, (ii) LV's Generalized Method of Moments (GMM, Lars P. Hansen 1982) estimates of the SDF parameters, and (iii) Motohiro Yogo's (2006) estimates of the
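The two-pass procedure the comment discusses is straightforward to sketch on simulated data; the numbers below are hypothetical stand-ins for LV's eight currency portfolios and three factors. Burnside's point is that if the first-pass betas are statistically indistinguishable from zero, the second pass has nothing to price:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, k = 300, 8, 3   # periods, portfolios, risk factors (as in LV)

# Hypothetical simulated data in place of the actual currency portfolios.
factors = rng.normal(size=(T, k))
true_beta = rng.normal(scale=0.5, size=(n, k))
returns = factors @ true_beta.T + rng.normal(scale=2.0, size=(T, n))

# First pass: time-series regression of each portfolio's excess return
# on the factors, yielding the n x k matrix of factor betas.
X = np.column_stack([np.ones(T), factors])
coefs = np.linalg.lstsq(X, returns, rcond=None)[0]
beta_hat = coefs[1:].T                      # n x k

# Second pass: cross-sectional regression of average returns on the
# estimated betas to recover the factor risk prices.
mean_ret = returns.mean(axis=0)
risk_prices = np.linalg.lstsq(beta_hat, mean_ret, rcond=None)[0]
print("estimated factor risk prices:", np.round(risk_prices, 3))
```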

Journal ArticleDOI
TL;DR: In this article, the authors develop a tractable framework for the analysis of the relationship between contractual incompleteness, technological complementarities, and technology adoption, where a firm chooses its technology and investment levels in contractible activities by suppliers of intermediate inputs, anticipating payoffs from an ex post bargaining game.
Abstract: We develop a tractable framework for the analysis of the relationship between contractual incompleteness, technological complementarities, and technology adoption. In our model, a firm chooses its technology and investment levels in contractible activities by suppliers of intermediate inputs. Suppliers then choose investments in noncontractible activities, anticipating payoffs from an ex post bargaining game. We show that greater contractual incompleteness leads to the adoption of less advanced technologies, and that the impact of contractual incompleteness is more pronounced when there is greater complementarity among the intermediate inputs. We study a number of applications of the main framework and show that the mechanism proposed in the paper can generate sizable productivity differences across countries with different contracting institutions, and that differences in contracting institutions lead to endogenous comparative advantage differences. (JEL D86, O33)

Journal ArticleDOI
TL;DR: The results suggest that children on the margin of placement tend to have better outcomes when they remain at home, especially older children.
Abstract: Little is known about the effects of placing children who are abused or neglected into foster care. This paper uses the placement tendency of child protection investigators as an instrumental variable to identify causal effects of foster care on long-term outcomes--including juvenile delinquency, teen motherhood, and employment--among children in Illinois where a rotational assignment process effectively randomizes families to investigators. Large marginal treatment effect estimates suggest caution in the interpretation, but the results suggest that children on the margin of placement tend to have better outcomes when they remain at home, especially older children.
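The identification strategy, using investigators' placement tendencies as an instrument under rotational (effectively random) assignment, can be sketched as follows. All numbers and the assumed treatment effect are hypothetical, and the paper's actual implementation is far more careful:

```python
import numpy as np

rng = np.random.default_rng(0)
n_kids, n_inv = 20000, 200

# Hypothetical data: kids rotationally assigned to investigators who
# differ in their propensity to place children in foster care.
inv = rng.integers(0, n_inv, n_kids)
strictness = rng.normal(0, 1, n_inv)[inv]
confounder = rng.normal(0, 1, n_kids)            # unobserved family factors
placed = (strictness + confounder + rng.normal(0, 1, n_kids) > 0).astype(float)
outcome = -0.2 * placed + confounder + rng.normal(0, 1, n_kids)  # assumed effect

# Instrument: leave-one-out mean placement rate of the child's investigator.
sums = np.bincount(inv, weights=placed, minlength=n_inv)
counts = np.bincount(inv, minlength=n_inv)
z = (sums[inv] - placed) / (counts[inv] - 1)

# IV (Wald) estimate with a single instrument, versus naive OLS.
iv_est = np.cov(z, outcome)[0, 1] / np.cov(z, placed)[0, 1]
ols_est = np.cov(placed, outcome)[0, 1] / np.var(placed)
print(f"OLS (biased by selection):     {ols_est:+.3f}")
print(f"IV via investigator tendency:  {iv_est:+.3f}  (assumed truth -0.2)")
```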

Journal ArticleDOI
TL;DR: This article developed a structural econometric model to estimate risk preferences from data on deductible choices in auto insurance contracts and found that women are more risk averse than men, risk aversion exhibits a U-shape with respect to age, and proxies for income and wealth are positively associated with absolute risk aversion.
Abstract: We develop a structural econometric model to estimate risk preferences from data on deductible choices in auto insurance contracts. We account for adverse selection by modeling unobserved heterogeneity in both risk (claim rate) and risk aversion. We find large and skewed heterogeneity in risk attitudes. In addition, women are more risk averse than men, risk aversion exhibits a U-shape with respect to age, and proxies for income and wealth are positively associated with absolute risk aversion. Finally, unobserved heterogeneity in risk aversion is greater than that of risk, and, as we illustrate, has important implications for insurance pricing.

Journal ArticleDOI
TL;DR: In this article, the authors conducted a randomized field experiment in a setting in which workers were free to choose their working times and their efforts during working time, and found that only loss averse individuals exhibit a significantly negative effort response to the wage increase and that the degree of loss aversion predicts the size of the negative effort response.
Abstract: Most previous studies on intertemporal labor supply found very small or insignificant substitution effects. It is not clear, however, whether these results are due to institutional constraints on workers' labor supply choices or whether the behavioral assumptions of the standard life cycle model with time separable preferences are empirically invalid. We conducted a randomized field experiment in a setting in which workers were free to choose their working times and their efforts during working time. We document a large positive wage elasticity of overall labor supply and an even larger wage elasticity of labor hours, which implies that the wage elasticity of effort per hour is negative. While the standard life cycle model cannot explain the negative effort elasticity, we show that a modified neoclassical model with preference spillovers across periods and a model with reference dependent, loss averse preferences are consistent with the evidence. With the help of a further experiment we can show that only loss averse individuals exhibit a significantly negative effort response to the wage increase and that the degree of loss aversion predicts the size of the negative effort response.

Journal ArticleDOI
TL;DR: In this article, the intrinsic motivation of bureaucrats is investigated, and three primary results are shown: they should be biased, sometimes this bias takes the form of advocating for their clients more than would their principal, while in other cases they are more hostile to their interests.
Abstract: Many individuals are motivated to exert effort because they care about their jobs, rather than because there are monetary consequences to their actions. The intrinsic motivation of bureaucrats is the focus of this paper, and three primary results are shown. First, bureaucrats should be biased. Second, sometimes this bias takes the form of advocating for their clients more than would their principal, while in other cases, they are more hostile to their interests. For a range of bureaucracies, those who are biased against clients lead to more efficient outcomes. Third, self-selection need not produce the desired bias. Instead, selection to bureaucracies is likely to be bifurcated, in the sense that it becomes composed of those who are most preferred by the principal, and those who are least preferred.

Posted Content
TL;DR: This paper developed a simple model of the determination of output, the stock market and the term structure of interest rates, which is an extension of the IS-LM model and borrows from it the assumption that output is determined by aggregate demand and that the price level can only adjust over time to its equilibrium value.
Abstract: This paper develops a simple model of the determination of output, the stock market and the term structure of interest rates. The model is an extension of the IS-LM model and borrows from it the assumption that output is determined by aggregate demand and that the price level can only adjust over time to its equilibrium value. However, whereas the IS-LM emphasizes the interaction between "the interest rate" and output, this model emphasizes the interaction between asset values and output. Asset values, rather than the interest rate, are the main determinants of aggregate demand and output. Current and anticipated output and income are in turn the main determinants of asset values. It is this interaction that the model intends to capture; its goal is to characterize the joint response of asset values and output to changes in the environment, such as changes or announcement of changes in monetary and fiscal policy. As the above brief description makes clear, anticipations are central to the story; the assumption made in this paper will be one of rational expectations. The paper is organized as follows. Section I describes the model, and Sections II-IV characterize the behavior of the economy under the extreme but convenient assumption that prices are fixed forever. Sections V and VI extend the analysis to the case where prices adjust over time to their equilibrium value.

Journal ArticleDOI
TL;DR: In this article, the authors examine the risk situation facing individuals in the labor market and examine how it alters individuals' consumption-saving decision, assuming that individuals do not know their pro-les exactly at the beginning of life, but learn in a Bayesian way with successive income observations.
Abstract: In this paper we examine the risk situation facing individuals in the labor market. The current consensus in the literature is that the labor income process has a large random walk component. We argue two points. First, the estimates of persistence from income data appear to be upward biased due to the omission of heterogeneity in income pro…les across the population that would be implied, for example, by a human capital model with heterogeneity. When we allow for dierences in pro…les, the estimated persistence falls from 0.99 to about 0.8. Moreover, the main evidence against pro…le heterogeneity in the existing literature— that the autocorrelations of income changes are small and typically negative— is also replicated by the pro…le heterogeneity model we estimate, casting doubt on the previous interpretation of this evidence. Second, we embed this process in a life-cycle model to examine how it alters individuals'consumption-saving decision. We assume that— as seems plausible— individuals do not know their pro…les exactly at the beginning of life, but learn in a Bayesian way with successive income observations. We …nd that learning is very slow and aects consumption decision throughout the life-cycle. The model generates substantial rise in consumption inequality over the life-cycle, which matches empirical observations (Deaton and Paxson 1994). Moreover, the shape of the age-inequality pro…le is non-concave as in the data, but unlike in a model with very persistent shocks. Finally, the consumption pro…les of college graduates are steeper than those of high-school graduates in the model consistent with the data because they face a wider dispersion of, and hence uncertainty about, income growth rates. Overall this evidence indicates that income shocks may be signi…cantly less persistent than what is currently assumed.
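The slow-learning result has a simple normal-normal illustration. A minimal sketch, assuming income growth is observed each period with transitory noise and the agent updates Gaussian beliefs about his own growth rate (all parameter values hypothetical): when the noise dwarfs the dispersion of growth rates, beliefs barely move for decades.

```python
import numpy as np

rng = np.random.default_rng(0)

beta_true = 0.02            # individual's true income growth rate
sig_eps = 0.15              # std. dev. of transitory income shocks
mu, var = 0.015, 0.01**2    # prior beliefs about own growth rate

# Normal-normal Bayesian updating from successive income-growth
# observations y_t = beta + eps_t; learning is slow because each noisy
# observation carries little information about the underlying profile.
for t in range(1, 41):
    y = beta_true + rng.normal(0, sig_eps)
    precision = 1 / var + 1 / sig_eps**2
    mu = (mu / var + y / sig_eps**2) / precision
    var = 1 / precision
    if t in (1, 10, 20, 40):
        print(f"t={t:2d}: belief = {mu:.4f} +/- {np.sqrt(var):.4f}")
```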