
Showing papers in "The American Economic Review in 2001"


Journal ArticleDOI
TL;DR: Acemoglu, Johnson, and Robinson as discussed by the authors used estimates of potential European settler mortality as an instrument for institutional variation in former European colonies today, and they followed the lead of Curtin who compiled data on the death rates faced by European soldiers in various overseas postings.
Abstract: In Acemoglu, Johnson, and Robinson, henceforth AJR (2001), we advanced the hypothesis that the mortality rates faced by Europeans in different parts of the world after 1500 affected their willingness to establish settlements and choice of colonization strategy. Places that were relatively healthy (for Europeans) were—when they fell under European control—more likely to receive better economic and political institutions. In contrast, places where European settlers were less likely to go were more likely to have “extractive” institutions imposed. We also posited that this early pattern of institutions has persisted over time and influences the extent and nature of institutions in the modern world. On this basis, we proposed using estimates of potential European settler mortality as an instrument for institutional variation in former European colonies today. Data on settlers themselves are unfortunately patchy—particularly because not many went to places they believed, with good reason, to be most unhealthy. We therefore followed the lead of Curtin (1989, 1998), who compiled data on the death rates faced by European soldiers in various overseas postings. Curtin’s data were based on pathbreaking data collection and statistical work initiated by the British military in the mid-nineteenth century. These data became part of the foundation of both contemporary thinking about public health (for soldiers and for civilians) and the life insurance industry (as actuaries and executives considered the

6,495 citations
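The instrumental-variables strategy described above can be sketched on synthetic data. Everything below is illustrative: the variable names and the manual two-stage least squares are a minimal sketch of the technique, not AJR's actual series or estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # synthetic sample of former colonies

# Synthetic data; names (mortality, institutions, log_gdp) are illustrative.
mortality = rng.normal(size=n)                        # instrument Z
institutions = -0.8 * mortality + rng.normal(size=n)  # endogenous regressor X
log_gdp = 1.0 * institutions + rng.normal(size=n)     # outcome Y, true beta = 1.0

def two_sls(y, x, z):
    """Manual two-stage least squares with a single instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    # First stage: project the endogenous regressor on the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    # Second stage: regress the outcome on the fitted values.
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]

beta = two_sls(log_gdp, institutions, mortality)
print(round(beta, 2))  # should land near the true coefficient of 1.0
```

The key design choice is that only variation in institutions predicted by settler mortality is used in the second stage, which is what purges the endogeneity.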


Journal ArticleDOI
TL;DR: In this article, the authors used the Jakarta Stock Exchange's reaction to news about former President Suharto's health to assess the value of political connections and found that as much as a quarter of a firm's share price may be accounted for by political connections.
Abstract: While political connections have been widely discussed in the literature on corruption, little work has been done to assess the value of these connections. This paper uses the Jakarta Stock Exchange's reaction to news about former President Suharto's health to address this issue. By examining the difference in share price reactions of firms with varying degrees of political exposure, a market valuation of the proportion of a firm’s value derived from political connections is inferred. The implied value is very high, suggesting that as much as a quarter of a firm’s share price may be accounted for by political connections. (JEL D21, G14)

2,560 citations
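The identification in this event study comes from comparing share-price reactions of more- and less-connected firms on the same news dates. A minimal sketch on made-up numbers (none of this is the paper's actual Jakarta Stock Exchange data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic event-day returns for two portfolios across six hypothetical
# "bad health news" episodes; magnitudes are invented for illustration.
events = 6
connected = rng.normal(-0.02, 0.005, size=events)     # politically exposed firms
unconnected = rng.normal(-0.005, 0.005, size=events)  # less exposed firms

# The value of connections is inferred from the *difference* in reactions,
# which nets out the market-wide response to the news itself.
diff = connected.mean() - unconnected.mean()
print(f"mean differential event return: {diff:.3f}")
```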


Journal ArticleDOI
TL;DR: This article found that the canonical model is not supported in any society studied, and that group-level differences in economic organization and the degree of market integration explain a substantial portion of the behavioral variation across societies.
Abstract: We can summarize our results as follows. First, the canonical model is not supported in any society studied. Second, there is considerably more behavioral variability across groups than had been found in previous cross-cultural research, and the canonical model fails in a wider variety of ways than in previous experiments. Third, group-level differences in economic organization and the degree of market integration explain a substantial portion of the behavioral variation across societies: the higher the degree of market integration and the higher the payoffs to cooperation, the greater the level of cooperation in experimental games. Fourth, individual-level economic and demographic variables do not explain behavior either within or across groups. Fifth, behavior in the experiments is generally consistent with economic patterns of everyday life in these societies.

2,019 citations


Journal ArticleDOI
TL;DR: In this article, the authors develop a theoretical model to divide trade's impact on pollution into scale, technique and composition effects and then examine this theory using data on sulfur dioxide concentrations from the Global Environment Monitoring Project.
Abstract: This paper sets out a theory of how openness to international goods markets affects pollution concentrations. We develop a theoretical model to divide trade's impact on pollution into scale, technique and composition effects and then examine this theory using data on sulfur dioxide concentrations from the Global Environment Monitoring Project. We find international trade creates relatively small changes in pollution concentrations when it alters the composition, and hence the pollution intensity, of national output. Our estimates of the associated technique and scale effects created by trade imply a net reduction in pollution from these sources. Combining our estimates of scale, composition and technique effects yields a somewhat surprising conclusion: freer trade appears to be good for the environment.

1,916 citations


Journal ArticleDOI
TL;DR: Di Tella et al. as mentioned in this paper showed that the costs of inflation in terms of unemployment can be measured by the relative size of the weights attached to these variables in social well-being.
Abstract: Modern macroeconomics textbooks rest upon the assumption of a social welfare function defined on inflation, p, and unemployment, U. However, no formal evidence for the existence of such a function has been presented in the literature. Although an optimal policy rule cannot be chosen unless the parameters of the presumed W(p, U) function are known, that has not prevented its use in a large theoretical literature in macroeconomics. This paper has two aims. The first is to show that citizens care about these two variables. We present evidence that inflation and unemployment belong in a well-being function. The second is to calculate the costs of inflation in terms of unemployment. We measure the relative size of the weights attached to these variables in social well-being. Policy implications emerge. Economists have often puzzled over the costs of inflation. Survey evidence presented in Robert J. Shiller (1997) shows that, when asked how they feel about inflation, individuals report a number of unconventional costs, like exploitation, national prestige, and loss of morale. Skeptics wonder. One textbook concludes: “we shall see that standard characterizations of the policy maker’s objective function put more weight on the costs of inflation than is suggested by our understanding of the effects of inflation; in doing so, they probably reflect political realities and the heavy political costs of high inflation” (Blanchard and Fischer, 1989 pp. 567–68). Since reducing inflation is often costly, in terms of extra unemployment, some observers have argued that the industrial democracies’ concern with nominal price stability is excessive—and have urged different monetary policies. This paper proposes a new approach. It examines how survey respondents’ reports of their well-being vary as levels of unemployment and inflation vary. 
Because the survey responses are available across time and countries, we are able to quantify how self-reported well-being alters with unemployment and inflation rates. Only a few economists have looked at patterns in subjective happiness and life satisfaction. Richard Easterlin (1974) helped to begin the literature. Later contributions include David Morawetz et al. (1977), Robert H. Frank (1985), Ronald Inglehart (1990), Yew-Kwang Ng (1996), Andrew J. Oswald (1997), and Liliana Winkelmann and Rainer Winkelmann (1998). More recently Ng (1997) discusses the measurability of happiness, and Daniel Kahneman et al. (1997) provide an axiomatic defense of experienced utility, and propose applications to economics. Our paper also borders on work in the psychology literature; see, for example, Edward Diener (1984), William Pavot et al. (1991), and David Myers (1993). Section I describes the main data source and the estimation strategy. This relies on a regression-adjusted measure of well-being in a particular year and country—the level not explained by individual personal characteristics. This residual macroeconomic well-being measure is the paper's focus.

* Di Tella: Harvard Business School, Morgan Hall, Soldiers Field, Boston, MA 02163; MacCulloch: STICERD, London School of Economics, London WC2A 2AE, England; Oswald: Department of Economics, University of Warwick, Coventry CV4 7AL, England. For helpful discussions, we thank George Akerlof, Danny Blanchflower, Andrew Clark, Ben Friedman, Duncan Gallie, Sebastian Galiani, Ed Glaeser, Berndt Hayo, Daniel Kahneman, Guillermo Mondino, Steve Nickell, Julio Rotemberg, Hyun Shin, John Whalley, three referees, and seminar participants at Oxford University, Harvard Business School, and the NBER Behavioral Macro Conference in 1998. The third author is grateful to the Leverhulme Trust and the Economic and Social Research Council for research support.
1. See, for example, Olivier Blanchard and Stanley Fischer (1989), Michael Burda and Charles Wyplosz (1993), and Robert E. Hall and John Taylor (1997). Early influential papers include Robert J. Barro and David Gordon (1983).
2. N. Gregory Mankiw (1997) describes the question "How costly is inflation?" as one of the four major unsolved problems of macroeconomics.
3. A recent contribution to this debate in the United States is Paul Krugman's piece, "Stable Prices and Fast Growth: Just Say No," The Economist, August 31, 1996.

1,757 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a growth model with the property that human capital accumulation can account for all observed growth, which is consistent with evidence on individual productivities as measured by census earnings data.
Abstract: This paper describes a growth model with the property that human capital accumulation can account for all observed growth. The model is shown to be consistent with evidence on individual productivities as measured by census earnings data. The central hypothesis is that we learn more when we interact with more productive people.

1,493 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the proportion invested in stocks depends strongly on the proportion of stock funds in the plan and that some investors follow the "1/n" strategy.
Abstract: There is a worldwide trend toward defined contribution saving plans and growing interest in privatized social security plans. In both environments, individuals are given some responsibility to make their own asset-allocation decisions, raising concerns about how well they do at this task. This paper investigates one aspect of the task, namely diversification. It is shown that some investors follow the "1/n strategy": they divide their contributions evenly across the funds offered in the plan. Consistent with this naive notion of diversification, it is found that the proportion invested in stocks depends strongly on the proportion of stock funds in the plan.

1,418 citations
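The "1/n" heuristic described above has a direct arithmetic consequence: if contributions are split evenly across the menu, the resulting equity share equals the fraction of stock funds offered. A small sketch (the fund menus are hypothetical):

```python
def one_over_n_equity_share(menu):
    """Equity share implied by splitting contributions evenly across `menu`,
    a list of fund types such as 'stock' or 'bond'."""
    return sum(1 for fund in menu if fund == "stock") / len(menu)

# A plan offering one stock and one bond fund puts a 1/n investor at 50% equity;
# a menu tilted toward stock funds mechanically raises the equity share.
print(one_over_n_equity_share(["stock", "bond"]))                    # 0.5
print(one_over_n_equity_share(["stock", "stock", "stock", "bond"]))  # 0.75
```

This is exactly the pattern the paper tests: equity exposure tracking the composition of the plan menu rather than the investor's risk preferences.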


Journal ArticleDOI
TL;DR: This article points out that academic discussions of skill and skill formation focus almost exclusively on measures of cognitive ability and ignore noncognitive skills, a neglect due largely to the lack of reliable measures of noncognitive traits.
Abstract: It is common knowledge outside of academic journals that motivation, tenacity, trustworthiness, and perseverance are important traits for success in life. Thomas Edison wrote that "genius is 1 percent inspiration and 99 percent perspiration." Most parents read the Aesop fable of the "Tortoise and The Hare" to their young children at about the same time they read them the story of "The Little Train That Could." Numerous instances can be cited of high-IQ people who failed to achieve success in life because they lacked self-discipline and low-IQ people who succeeded by virtue of persistence, reliability, and self-discipline. The value of trustworthiness has recently been demonstrated when market systems were extended to Eastern European societies with traditions of corruption and deceit. It is thus surprising that academic discussions of skill and skill formation almost exclusively focus on measures of cognitive ability and ignore noncognitive skills. The early literature on human capital (e.g. Gary Becker, 1964) contrasted cognitive-ability models of earnings with human capital models, ignoring noncognitive traits entirely. The signaling literature (e.g., Michael Spence, 1974) emphasized that education was a signal of a one-dimensional ability, usually interpreted as a cognitive skill. Most discussions of ability bias in the estimated return to education treat omitted ability as cognitive ability and attempt to proxy the missing ability by cognitive tests. Most assessments of school reforms stress the gain from reforms as measured by the ability of students to perform on a standardized achievement test. Widespread use of standardized achievement and ability tests for admissions and educational evaluation are premised on the belief that the skills that can be tested are essential for success in schooling, a central premise of the educational-testing movement since its inception.
Much of the neglect of noncognitive skills in analyses of earnings, schooling, and other lifetime outcomes is due to the lack of any reliable measure of them. Many different personality and motivational traits are lumped into the category of noncognitive skills. Psychologists have developed batteries of tests to measure noncognitive skills (e.g., Robert Sternberg, 1985). These tests are used by companies to screen workers but are not yet used to ascertain college readiness or to evaluate the effectiveness of schools or reforms of schools. The literature on cognitive tests ascertains that one dominant factor ("g") summarizes cognitive tests and their effects on outcomes. No single factor has yet emerged to date in the literature on noncognitive skills, and it is unlikely that one will ever be found, given the diversity of traits subsumed under the category of noncognitive skills. Studies by Samuel Bowles and Herbert Gintis (1976), Rick Edwards (1976), and Roger Klein et al. (1991) demonstrate that job stability and dependability are traits most valued by employers as ascertained by supervisor ratings and questions of employers, although they present no direct evidence on wages and educational
Discussants: Susan Mayer, University of Chicago; Cecilia Rouse, Princeton University; Nan Maxwell, California State University-Hayward; Janet Currie, University of California-Los Angeles.

1,418 citations


Journal ArticleDOI
TL;DR: A comprehensive analysis of the ownership and control structure of East Asian corporations, with West European corporations as benchmarks, is presented in this article, where the authors find evidence of systematic expropriation of the outside shareholders of corporations at the base of extensive corporate pyramids.
Abstract: Whereas most U.S. corporations are widely held, the predominant form of ownership in East Asia is control by a family, which often supplies a top manager. These features of "crony capitalism" are actually more pronounced in Western Europe. In both regions, the salient agency problem is expropriation of outside shareholders by controlling shareholders. Dividends provide evidence on this. Group-affiliated corporations in Europe pay higher dividends than in Asia, dampening insider expropriation. Dividend rates are higher in Europe, but lower in Asia, when there are multiple large shareholders, suggesting that they dampen expropriation in Europe, but exacerbate it in Asia. (JEL G34, G35) Failures in East Asian corporate governance have recently attracted wide attention through being blamed for the East Asian financial crisis. Based only on journalistic anecdotes, the accusations of "crony capitalism" met regional scepticism and are now being shrugged off as East Asian economies recover. This paper provides a comprehensive analysis of the ownership and control structure of East Asian corporations, with West European corporations as benchmarks. We document that the problems of East Asian corporate governance are, if anything, more severe and intractable than suggested by commentators at the height of the financial crisis. These problems we locate in an extraordinary concentration of control, whereby eight groups control more than one-quarter of the corporations in the nine most advanced East Asian economies. This control is obscured behind layers of corporations, hence insulated against the forces of competition on less-than-transparent capital markets. By examining how dividend behavior is related to the structure of ownership and control, we find evidence of systematic expropriation of the outside shareholders of corporations at the base of extensive corporate pyramids. Thus, the controlling share

1,333 citations


Journal ArticleDOI
TL;DR: This article found that each primary school constructed per 1,000 children led to an average increase of 0.12 to 0.19 years of education, as well as a 1.5 to 2.7 percent increase in wages.
Abstract: Between 1973 and 1978, the Indonesian government engaged in one of the largest school construction programs on record. Combining differences across regions in the number of schools constructed with differences across cohorts induced by the timing of the program suggests that each primary school constructed per 1,000 children led to an average increase of 0.12 to 0.19 years of education, as well as a 1.5 to 2.7 percent increase in wages. This implies estimates of economic returns to education ranging from 6.8 to 10.6 percent.

1,316 citations
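The identification combines two differences: regions with more versus less school construction, and older (unexposed) versus younger (exposed) cohorts. A 2x2 difference-in-differences sketch of that logic, with made-up numbers rather than the paper's Indonesian data:

```python
# Average years of education by (region program intensity, cohort);
# all figures are invented for illustration.
years_of_education = {
    ("high_program", "young"): 9.1,   # exposed cohort, many new schools
    ("high_program", "old"): 8.0,     # too old to benefit
    ("low_program", "young"): 8.6,    # exposed cohort, few new schools
    ("low_program", "old"): 7.7,
}

# Cohort change within each region type...
change_high = years_of_education[("high_program", "young")] - years_of_education[("high_program", "old")]
change_low = years_of_education[("low_program", "young")] - years_of_education[("low_program", "old")]

# ...and the difference between them is attributed to the program.
did = change_high - change_low
print(round(did, 2))  # 0.2 extra years attributable to the program in this toy example
```

The subtraction of the low-program cohort change removes secular trends in schooling common to all regions.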


Journal ArticleDOI
TL;DR: In this paper, a benchmark resource allocation problem is extended to allow for model misspecification, and two robust control problems and the recursivity of the associated preference orderings are analyzed.
Abstract: The following sections are included: Introduction; A Benchmark Resource Allocation Problem; Model Misspecification; Two Robust Control Problems; Recursivity of the Multiplier Formulation; Two Preference Orderings; Recursivity of the Preference Orderings; Concluding Remarks.

Journal ArticleDOI
TL;DR: In this paper, Bernanke and Gertler argue that changes in asset prices should affect monetary policy only to the extent that they affect the central bank's forecast of inflation; once that predictive content has been accounted for, there should be no additional response of monetary policy to asset-price volatility.
Abstract: In recent decades, asset booms and busts have been important factors in macroeconomic fluctuations in both industrial and developing countries. In light of this experience, how, if at all, should central bankers respond to asset price volatility? We have addressed this issue in previous work (Bernanke and Gertler, 1999). The context of our earlier study was the relatively new, but increasingly popular, monetary-policy framework known as inflation-targeting (see e.g., Bernanke and Frederic Mishkin, 1997). In an inflation-targeting framework, publicly announced medium-term inflation targets provide a nominal anchor for monetary policy, while allowing the central bank some flexibility to help stabilize the real economy in the short run. The inflation-targeting approach gives a specific answer to the question of how central bankers should respond to asset prices: Changes in asset prices should affect monetary policy only to the extent that they affect the central bank’s forecast of inflation. To a first approximation, once the predictive content of asset prices for inflation has been accounted for, there should be no additional response of monetary policy to asset-price fluctuations. In use now for about a decade, inflation-targeting has generally performed well in practice. However, so far this approach has not often been stress-tested by large swings in asset prices. Our earlier research employed simulations of a small, calibrated macroeconomic model to examine how an inflation-targeting policy (defined as one in which the central bank’s instrument interest rate responds primarily to changes in expected inflation) might fare in the face of a boom-and-bust cycle in asset prices. 
We found that an aggressive inflation-targeting policy rule (in our simulations, one in which the coefficient relating the instrument interest rate to expected inflation is 2.0) substantially stabilizes both output and inflation in scenarios in which a bubble in stock prices develops and then collapses, as well as in scenarios in which technology shocks drive stock prices. Intuitively, inflation-targeting central banks automatically accommodate productivity gains that lift stock prices, while offsetting purely speculative increases or decreases in stock values whose primary effects are through aggregate demand. Conditional on a strong policy response to expected inflation, we found little if any additional gains from allowing an independent response of central-bank policy to the level of asset prices. In our view, there are good reasons, outside of our formal model, to worry about attempts by central banks to influence asset prices, including the fact that (as history has shown) the effects of such attempts on market psychology are dangerously unpredictable. Hence, we concluded that inflation-targeting central banks need not respond to asset prices, except insofar as they affect the inflation forecast. In the spirit of recent work on robust control, the exercises in our earlier paper analyzed the performance of policy rules in worst-case
Discussants: Robert Shiller, Yale University; Glenn Rudebusch, Federal Reserve Bank of San Francisco; Kenneth Rogoff, Harvard University.
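A stylized version of the instrument rule described above can be written in a few lines. The parameter values (a 2% natural rate, the 2.0 coefficient on expected inflation, a zero weight on asset prices) are illustrative of the rule's form, not the calibration of the authors' model:

```python
def policy_rate(expected_inflation, asset_price_gap=0.0,
                r_star=0.02, phi_pi=2.0, phi_a=0.0):
    """Stylized instrument rule: the policy rate responds to expected
    inflation with coefficient phi_pi and to asset prices with phi_a
    (zero here, reflecting the paper's recommendation)."""
    return r_star + phi_pi * expected_inflation + phi_a * asset_price_gap

# Because phi_pi > 1, a rise in expected inflation raises the *real* rate,
# which is what leans against demand-driven asset booms automatically.
print(policy_rate(0.03))
print(policy_rate(0.03, asset_price_gap=0.10))  # unchanged: asset prices get no weight
```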

Journal ArticleDOI
TL;DR: In this paper, the authors examine the influence that telecommunications infrastructure exerts on economic development.
Abstract: This paper examines the influence that telecommunications infrastructure exerts on economic development. This important question has become topical in connection with the debate over information superhighways. The present study analyzes the influence of telecommunications infrastructure for 21 OECD countries over the past 20 years. A structural equation model is estimated in which investment in telecommunications infrastructure is treated as an endogenous variable, and a micro model specifies the supply of and demand for telecommunications investment. The micro model is then estimated jointly with the macro growth equation. The authors find a positive causal relationship between telecommunications investment and economic development once a critical mass of telecommunications infrastructure exists.

Journal ArticleDOI
TL;DR: In this article, the authors report further empirical evidence on the relative efficiency of public and private enterprises and study the performance of government-owned and privately-owned corporations over longer time periods.
Abstract: For some, it is an article of faith that government-owned firms must be less efficient or, at least, less profitable than privately owned firms. Maxim Boycko et al. (1996) argue that politicians cause government-owned firms to employ excess labor inputs. Anne O. Krueger (1990) suggests that such firms may be pressured to hire politically connected people rather than those best qualified to perform desired tasks. More generally, government-owned firms are thought to forgo maximum profit in the pursuit of social and political objectives, such as wealth redistribution. In addition, the residual cash flow claims of these firms are not readily transferable like the shares of a private corporation. This impairs residual claimant incentives to monitor managers and, ultimately, degrades firm performance. As a consequence, one expects government-owned firms to be technically less efficient and, therefore, less profitable than private firms. The implication is that in competitive markets without significant externalities private ownership is the superior organizational form. The view that government firms are inherently less efficient than private ones, however, remains controversial among economists. John Vickers and George Yarrow (1991), among others, point out that agency problems arise in private firms as well as public ones. In most large private corporations managers own little of the stock. Because monitoring managers is costly, a divergence arises between their objectives and those of private shareholders. Private shareholders typically hold a small stake in any one firm and it may not pay any shareholder to bear the cost of monitoring management. It follows from this discussion that whether government firms are more or less efficient than private firms is primarily an empirical issue. To date the body of empirical evidence is mixed. In this paper, we report further empirical evidence on the relative efficiency of public and private enterprises. 
We build on previous work by studying larger samples over longer time periods and by allowing for additional factors that influence firm performance. In particular, our analysis controls for time-series variation in the general level of economic activity that may otherwise confound performance comparisons between government and private firms. We approach the issue in two different ways. First, using accounting numbers, we conduct a large-sample cross-sectional comparison of government-owned and privately owned corporations. This comparison is similar in design to that of Boardman and Vining (1989), but our sample is three times the size of theirs and includes three

Journal ArticleDOI
TL;DR: A survey of U.S. universities supports this view, emphasizing the embryonic state of most technologies licensed and the need for inventor cooperation in commercialization, which gives rise to a moral-hazard problem with inventor effort.
Abstract: Proponents of the Bayh-Dole Act argue that industrial use of federally funded research would be reduced without university patent licensing. Our survey of U.S. universities supports this view, emphasizing the embryonic state of most technologies licensed and the need for inventor cooperation in commercialization. Thus, for most university inventions, there is a moral-hazard problem with inventor effort. For such inventions, development does not occur unless the inventor's income is tied to the licensee's output by payments such as royalties or equity. Sponsored research from the licensee cannot by itself solve this problem.

Journal ArticleDOI
TL;DR: This article finds that a large experimental literature by and large supports economists' skepticism of subjective questions, and that these findings cast serious doubt on attempts to use subjective data as dependent variables, because the measurement error appears to be correlated with a large set of characteristics and behaviors.
Abstract: Four main messages emerge from the study of subjective survey data. First, a large experimental literature by and large supports economists' skepticism of subjective questions. Second, put in an econometric framework, these findings cast serious doubts on attempts to use subjective data as dependent variables, because the measurement error appears to be correlated with a large set of characteristics and behaviors. Third, these data may be useful as explanatory variables. Finally, the empirical work suggests that subjective variables are useful in practice for explaining differences in behavior across individuals. Changes in answers to these questions, however, do not appear useful in explaining changes in behavior.

Journal ArticleDOI
TL;DR: In this paper, two modifications are introduced into the standard real-business-cycle model: habit preferences and a two-sector technology with limited inter-sectoral factor mobility, which is consistent with the observed mean risk-free rate, equity premium, and Sharpe ratio on equity.
Abstract: Two modifications are introduced into the standard real-business-cycle model: habit preferences and a two-sector technology with limited intersectoral factor mobility. The model is consistent with the observed mean risk-free rate, equity premium, and Sharpe ratio on equity. In addition, its business-cycle implications represent a substantial improvement over the standard model. It accounts for persistence in output, comovement of employment across different sectors over the business cycle, the evidence of "excess sensitivity" of consumption growth to output growth, and the "inverted leading-indicator property of interest rates," that interest rates are negatively correlated with future output.

Journal ArticleDOI
TL;DR: The authors developed a theory of political transitions inspired by the experiences of Western Europe and Latin America, where the initially disenfranchised poor can contest power by threatening revolution, especially when the opportunity cost is low.
Abstract: We develop a theory of political transitions inspired by the experiences of Western Europe and Latin America. Nondemocratic societies are controlled by a rich elite. The initially disenfranchised poor can contest power by threatening revolution, especially when the opportunity cost is low, for example, during recessions. The threat of revolution may force the elite to democratize. Democracy may not consolidate because it is redistributive, and so gives the elite an incentive to mount a coup. Highly unequal societies are less likely to consolidate democracy, and may end up oscillating between regimes and suffer substantial fiscal volatility.

Journal ArticleDOI
TL;DR: In this paper, a simple model of process innovation is proposed, where firms learn about their ideal production process by making prototypes and switch to mass-production and relocate to specialised cities with lower costs.
Abstract: A simple model of process innovation is proposed, where firms learn about their ideal production process by making prototypes. We build around this a dynamic general equilibrium model, and derive conditions under which diversified and specialised cities coexist. New products are developed in diversified cities, trying processes borrowed from different activities. On finding their ideal process, firms switch to mass-production and relocate to specialised cities with lower costs. When in equilibrium, this configuration welfare-dominates those with only diversified or only specialised cities. We find strong evidence of this relocation pattern in establishment relocations across French employment areas, 1993-1996.

Journal ArticleDOI
TL;DR: This paper argued that the benefits of trade created by currency union may swamp any costs of forgoing independent monetary policy, since national money seems to be a significant barrier to international trade in the data.
Abstract: Europeans are proceeding with Economic and Monetary Union (EMU); a number of countries in the Americas are pursuing dollarization. Why? Conventional wisdom is that the costs are high, since members of currency unions cannot employ domestic monetary policy to smooth business cycles. More intriguingly, most economists think that the economic benefits from currency union are low. We argue below that conventional wisdom may be wrong, since national money seems to be a significant barrier to international trade in the data. Currency unions lower these monetary barriers to trade and are thus associated with higher trade and welfare; we estimate that EMU will cause European trade to rise by over 50 percent. The benefits of trade created by currency union may swamp any costs of forgoing independent monetary policy.
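The headline trade effect comes from a gravity-style regression in which a currency-union dummy enters a log-trade equation, so the percentage effect is recovered as exp(beta) - 1. A minimal sketch on synthetic data, where the true effect is set to +50 percent by construction (this is not the paper's dataset or full specification, which includes many gravity controls):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500  # synthetic country pairs

# Currency-union indicator and log bilateral trade; the union effect is
# built in as log(1.5), i.e. a 50 percent trade boost.
union = rng.integers(0, 2, size=n).astype(float)
log_trade = 1.0 + np.log(1.5) * union + rng.normal(0, 0.3, size=n)

# OLS of log trade on the union dummy.
X = np.column_stack([np.ones(n), union])
beta = np.linalg.lstsq(X, log_trade, rcond=None)[0][1]

effect = 100 * (np.exp(beta) - 1)
print(f"implied trade effect: {effect:.0f}%")  # should recover roughly +50%
```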

Journal ArticleDOI
TL;DR: The starting point for the economic debate is the thesis that the 1990's are a mirror image of the 1970's, when an unfavorable series of "supply shocks" led to stagflation - slower growth and higher inflation as discussed by the authors.
Abstract: The resurgence of the American economy since 1995 has outrun all but the most optimistic expectations. Economic forecasting models have been seriously off track and growth projections have been revised to reflect a more sanguine outlook only recently. It is not surprising that the unusual combination of more rapid growth and slower inflation in the 1990's has touched off a strenuous debate among economists about whether improvements in America's economic performance can be sustained. The starting point for the economic debate is the thesis that the 1990's are a mirror image of the 1970's, when an unfavorable series of "supply shocks" led to stagflation -- slower growth and higher inflation. In this view, the development of information technology (IT) is one of a series of positive, but temporary, shocks. The competing perspective is that IT has produced a fundamental change in the U.S. economy, leading to a permanent improvement in growth prospects.

Journal ArticleDOI
TL;DR: In this paper, the authors compare a winner-take-all system to a proportional system, where the spoils of office are split among candidates proportionally to their share of the vote.
Abstract: Politicians who care about the spoils of office may underprovide a public good because its benefits cannot be targeted to voters as easily as pork-barrel spending. We compare a winner-take-all system--where all the spoils go to the winner--to a proportional system--where the spoils of office are split among candidates proportionally to their share of the vote. In a winner-take-all system the public good is provided less often than in a proportional system when the public good is particularly desirable. We then consider the electoral college system and show that it is particularly subject to this inefficiency.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the Taylor rule in the context of a simple, but widely used, optimizing model of the monetary transmission mechanism, which allows one to reach clear conclusions about economic welfare.
Abstract: The rule proposed by Taylor (1993) takes the form i_t = 2 + p_t + 0.5(p_t - 2) + 0.5(y_t - y*_t), where i_t denotes the Fed’s operating target for the federal funds rate, p_t is the inflation rate (measured by the GDP deflator), y_t is the log of real GDP, and y*_t is the log of potential output (identified empirically with a linear trend). The rule has since been subject to considerable attention, both as an account of actual policy in the United States and elsewhere, and as a prescription for desirable policy. Taylor argues for the rule’s normative significance both on the basis of simulations and on the ground that it describes U.S. policy in a period in which monetary policy is widely judged to have been unusually successful (Taylor, 1999), suggesting that the rule is worth adopting as a principle of behavior. Here I wish to consider to what extent this prescription resembles the sort of policy that economic theory would recommend. I consider the question in the context of a simple, but widely used, optimizing model of the monetary transmission mechanism, which allows one to reach clear conclusions about economic welfare. The model is highly stylized but incorporates important features of more realistic models and allows me to make several points that are of more general validity. Out of concern for the robustness of the conclusions reached, the analysis here addresses only broad, qualitative features of the Taylor rule and attempts to identify features of a desirable policy rule that are likely to hold under a variety of model specifications.
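The Taylor rule described in the abstract can be sketched numerically. The 2-percent equilibrium real rate, 2-percent inflation target, and 0.5 reaction coefficients below are Taylor's (1993) benchmark values, assumed here for illustration (the output gap is expressed in percentage points):

```python
def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Federal funds rate target under the Taylor (1993) rule.

    inflation:  p_t, inflation over the past year, in percent
    output_gap: y_t - y*_t, percent deviation of real GDP from potential
    r_star:     assumed equilibrium real interest rate, in percent
    pi_star:    assumed inflation target, in percent
    """
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Taylor's benchmark case: 2% inflation and a closed output gap
# imply a 4% nominal funds rate (a 2% real rate).
print(taylor_rule(2.0, 0.0))  # -> 4.0
```

At the benchmark, the rule is consistent with the long-run equilibrium; when inflation rises above target, the nominal rate rises more than one-for-one, so the real rate tightens.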

Journal ArticleDOI
TL;DR: In this article, the authors study a contest with multiple (not necessarily equal) prizes and show that for any number of contestants having linear, convex or concave cost functions, and for any distribution of abilities, it is optimal for the designer to allocate the entire prize sum to a single ''first'' prize.
Abstract: We study a contest with multiple (not necessarily equal) prizes. Contestants have private information about an ability parameter that affects their costs of bidding. The contestant with the highest bid wins the first prize, the contestant with the second-highest bid wins the second prize, and so on until all the prizes are allocated. All contestants incur their respective costs of bidding. The contest's designer maximizes the expected sum of bids. Our main results are: 1) We display bidding equilibria for any number of contestants having linear, convex or concave cost functions, and for any distribution of abilities. 2) If the cost functions are linear or concave, then, no matter what the distribution of abilities is, it is optimal for the designer to allocate the entire prize sum to a single "first" prize. 3) We give necessary and sufficient conditions ensuring that several prizes are optimal if contestants have a convex cost function.

Journal ArticleDOI
TL;DR: This paper examined the role of sectors in aggregate convergence for fourteen OECD countries during 1970-87 and found that manufacturing shows little evidence of either labor productivity or multifactor productivity convergence, while other sectors, especially services, are driving the aggregate convergence result.
Abstract: This paper examines the role of sectors in aggregate convergence for fourteen OECD countries during 1970-87. The major finding is that manufacturing shows little evidence of either labor productivity or multifactor productivity convergence, while other sectors, especially services, are driving the aggregate convergence result. To determine the robustness of the convergence results, the paper introduces a new measure of multifactor productivity which avoids many problems inherent to traditional measures of total factor productivity when comparing productivity levels. The lack of convergence in manufacturing is robust to the method of calculating multifactor productivity. Copyright 1996 by American Economic Association.
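As an illustration of the traditional productivity measure whose level-comparison problems the paper addresses (this is the conventional Cobb-Douglas measure, not the paper's new one, and the capital share alpha and input figures are assumptions for illustration):

```python
def tfp_level(output, capital, labor, alpha=1 / 3):
    """Traditional total-factor-productivity level under an assumed
    Cobb-Douglas technology Y = A * K^alpha * L^(1 - alpha), so that
    A = Y / (K^alpha * L^(1 - alpha)).  alpha is the capital share,
    commonly set near 1/3."""
    return output / (capital ** alpha * labor ** (1 - alpha))

# Relative multifactor productivity of two countries in the same sector
# (hypothetical output and input levels in comparable units)
a = tfp_level(output=120.0, capital=300.0, labor=100.0)
b = tfp_level(output=100.0, capital=250.0, labor=100.0)
print(a / b)
```

Note the sensitivity that motivates the paper's alternative measure: the level comparison a / b depends directly on the assumed share parameter alpha, which need not be common across countries or sectors.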

Journal ArticleDOI
TL;DR: The authors examined a panel of U.S. and Canadian manufacturing industries to test the models and found support for either model, depending on whether we estimate based on within or between variation, the preponderance of the evidence supports national product.
Abstract: An increasing returns model where varieties are linked to firms predicts home market effects: increases in a country's share of demand cause disproportionate increases in its share of output. In contrast, a constant returns model with national product differentiation predicts a less than proportionate increase. We examine a panel of U.S. and Canadian manufacturing industries to test the models. Although we find support for either model, depending on whether we estimate based on within or between variation, the preponderance of the evidence supports national product differentiation.

Journal ArticleDOI
TL;DR: For a country that chooses not to "permanently" fix its exchange rate through a currency board, or a common currency, or some kind of dollarization, the only alternative monetary policy that can work well in the long run is one based on the trinity of (i) a flexible exchange rate, (ii) an inflation target, and (iii) a monetary policy rule as mentioned in this paper.
Abstract: For a country that chooses not to "permanently" fix its exchange rate through a currency board, or a common currency, or some kind of dollarization, the only alternative monetary policy that can work well in the long run is one based on the trinity of (i) a flexible exchange rate, (ii) an inflation target, and (iii) a monetary policy rule. While not often put into this three-part format, the desirability of such a monetary policy in an open economy is, in my view, the clear implication of three corresponding strands of recent monetary research: (i) research on fixed-exchange-rate regimes, including the influential 1995 article "The Mirage of Fixed Exchange Rates" by Maurice Obstfeld and Kenneth Rogoff and the many analyses of the breakdown of fixed-exchange-rate regimes in the late 1990's; (ii) research on the practical success with inflation targeting by Ben Bernanke et al. (1999); and (iii) research on the benefits of simple monetary-policy rules (see, e.g., Taylor, 1999a). This clear policy implication, however, does not end the debate about how exchange rates should be taken into account in formulating monetary policy. Even if one excludes capital controls and sterilized exchange-market intervention from consideration because they are not effective or attractive ways to de-link exchange-rate movements from the domestic interest rate, a crucial question remains: How should the instruments of monetary policy (the interest rate or a monetary aggregate) react to the exchange rate? Should policymakers avoid any reaction and focus instead on domestic indicators such as inflation and real GDP? Or is the "rule of thumb" that "a substantial appreciation of the real exchange rate ... furnishes a prima facie case for relaxing monetary policy," as characterized by Obstfeld and Rogoff (1995, p. 93), a better monetary-policy rule?
Or perhaps policymakers should heed the Obstfeld-Rogoff warning that "substantial departures from PPP [purchasing-power parity], in the short run and even over decades" make such a policy reaction to the exchange rate undesirable. More generally, if one accepts the trinity concept of monetary policy in an open economy, then what is the role of the exchange rate in the monetary-policy rule?
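The policy question posed above can be made concrete with a stylized interest-rate rule that appends an exchange-rate reaction term to a Taylor-type rule. All coefficients below are illustrative assumptions, not values taken from the paper:

```python
def open_economy_rule(inflation, output_gap, real_appreciation,
                      g_pi=1.5, g_y=0.5, g_q=0.25, r_star=2.0):
    """Illustrative open-economy interest-rate rule (all coefficients are
    assumed for illustration).

    A positive g_q lowers the interest rate when the real exchange rate
    appreciates -- the Obstfeld-Rogoff "rule of thumb" of relaxing policy
    after a substantial real appreciation.  Setting g_q = 0 recovers a
    purely domestic rule reacting only to inflation and the output gap.
    """
    return (r_star + g_pi * inflation + g_y * output_gap
            - g_q * real_appreciation)

# Same domestic conditions, with and without an exchange-rate reaction:
print(open_economy_rule(2.0, 0.0, 4.0))           # reacts to appreciation
print(open_economy_rule(2.0, 0.0, 4.0, g_q=0.0))  # domestic indicators only
```

The debate summarized in the abstract is precisely over whether g_q should be zero or positive, given that large and persistent deviations from purchasing-power parity can make the appreciation signal misleading.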

Journal ArticleDOI
TL;DR: The military drawdown program of the early 1990s provides an opportunity to obtain estimates of personal discount rates based on large numbers of people making real choices involving large sums as mentioned in this paper, and most of the separatees selected the lump sum, saving taxpayers $1.7 billion in separation costs.
Abstract: The military drawdown program of the early 1990s provides an opportunity to obtain estimates of personal discount rates based on large numbers of people making real choices involving large sums. The program offered over 65,000 separatees the choice between an annuity and a lump-sum payment. Despite break-even discount rates exceeding 17 percent, most of the separatees selected the lump sum--saving taxpayers $1.7 billion in separation costs. Estimates of discount rates range from 0 to over 30 percent and vary with education, age, race, sex, number of dependents, ability test score, and the size of payment.
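The break-even discount rate in such a choice is the rate at which the annuity's present value equals the lump sum; an individual who takes the lump sum reveals a personal discount rate above it. The dollar amounts below are hypothetical, not the program's actual figures:

```python
def annuity_pv(payment, rate, years):
    """Present value of a level annual annuity at discount rate `rate`."""
    if rate == 0:
        return payment * years
    return payment * (1 - (1 + rate) ** -years) / rate

def break_even_rate(lump_sum, payment, years, lo=0.0, hi=1.0, tol=1e-8):
    """Discount rate equating the annuity's present value with the lump
    sum; above this rate the lump sum is the better deal.  Solved by
    bisection, since the present value is decreasing in the rate."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if annuity_pv(payment, mid, years) > lump_sum:
            lo = mid  # annuity still worth more -> raise the rate
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical offer: $50,000 lump sum vs. $8,000 per year for 18 years
print(round(break_even_rate(50_000, 8_000, 18), 4))
```

A separatee choosing the lump sum in this hypothetical example reveals a personal discount rate above roughly 15 percent, which is how the paper infers discount rates from the observed choices.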

Journal ArticleDOI
TL;DR: In this article, the authors describe recent empirical work and its relation to theory for one prominent class of principals venture capitalists (VCs), and the empirical studies indicate that VCs attempt to mitigate principal-agent conflicts in the three ways suggested by theory.
Abstract: Theoretical work on the principal-agent problem in financial contracting focuses on the conflicts of interest between an agent/entrepreneur with a venture that needs financing, and a principal/investor providing funds for the venture. Theory has identified three primary ways that the investor/principal can mitigate these conflicts - structuring financial contracts, pre-investment screening, and post-investment monitoring and advising. In this paper, we describe recent empirical work and its relation to theory for one prominent class of principals: venture capitalists (VCs). The empirical studies indicate that VCs attempt to mitigate principal-agent conflicts in the three ways suggested by theory. The evidence also shows that contracting, screening, and monitoring are closely interrelated. In screening, the VCs identify areas where they can add value through monitoring and support. In contracting, the VCs allocate rights in order to facilitate monitoring and minimize the impact of identified risks. Also, the equity allocated to VCs provides incentives to engage in costly support activities that increase upside values, rather than just minimizing potential losses. There is room for future empirical research to study these activities in greater detail for VCs, for other intermediaries such as banks, and within firms.

Journal ArticleDOI
TL;DR: In this paper, the authors extended their analysis to the case of a small open economy and showed that under certain conditions, the monetary policy design problem for the small open economy is isomorphic to the problem of the closed economy that they considered earlier.
Abstract: In Clarida et al. (1999; hereafter CGG), we presented a normative analysis of monetary policy within a simple optimization-based closed-economy framework. We derived the optimal policy rule and, among other things, characterized the gains from commitment. Also, we made precise the implications for the kind of instrument feedback rule that a central bank should follow in practice. In this paper we show how our analysis extends to the case of a small open economy. Openness complicates the problem of monetary management to the extent the central bank must take into account the impact of the exchange rate on real activity and inflation. How to factor the exchange rate into the overall design of monetary policy accordingly becomes a central consideration. Here we show that, under certain conditions, the monetary-policy design problem for the small open economy is isomorphic to the problem of the closed economy that we considered earlier. Hence, all our qualitative results for the closed economy carry over to this case. Openness does affect the parameters of the model, suggesting quantitative implications. Though the general form of the optimal interest-rate feedback rule remains the same as in the closed-economy case, for example, how aggressively a central bank should adjust the interest rate in response to inflationary pressures depends on the degree of openness. In addition, openness gives rise to an important distinction between domestic inflation and consumer price inflation (as defined by the CPI). To the extent that there is perfect exchange-rate pass-through, we find that the central bank should target domestic inflation and allow the exchange rate to float, despite the impact of the resulting exchange-rate variability on the CPI (Kosuke Aoki, 1999; Galí and Tommaso Monacelli, 2000).