
Showing papers in "Journal of Economic Perspectives in 2009"


Journal ArticleDOI
TL;DR: The financial market turmoil in 2007 and 2008 has led to the most severe financial crisis since the Great Depression and threatens to have large repercussions on the real economy, as discussed in this paper. The bursting of the housing bubble forced banks to write down several hundred billion dollars in bad loans caused by mortgage delinquencies; at the same time, the stock market capitalization of the major banks declined by more than twice as much.
Abstract: The financial market turmoil in 2007 and 2008 has led to the most severe financial crisis since the Great Depression and threatens to have large repercussions on the real economy. The bursting of the housing bubble forced banks to write down several hundred billion dollars in bad loans caused by mortgage delinquencies. At the same time, the stock market capitalization of the major banks declined by more than twice as much. While the overall mortgage losses are large on an absolute scale, they are still relatively modest compared to the $8 trillion of US stock market wealth lost between October 2007, when the stock market reached an all-time high, and October 2008. This paper attempts to explain the economic mechanisms that caused losses in the mortgage market to amplify into such large dislocations and turmoil in the financial markets, and describes common economic threads that explain the plethora of market declines, liquidity dry-ups, defaults, and bailouts that occurred after the crisis broke in summer 2007. To understand these threads, it is useful to recall some key factors leading up to the housing bubble. The US economy was experiencing a low interest rate environment, both because of large capital inflows from abroad, especially from Asian countries, and because the Federal Reserve had adopted a lax interest rate policy. Asian countries bought US securities both to peg the exchange rates at an export-friendly level and to hedge against a depreciation of their own currencies against the dollar, a lesson learned from the Southeast Asian crisis of the late 1990s. The Federal Reserve Bank feared a deflationary period after the bursting of the Internet bubble and thus did not counteract the buildup of the housing bubble. At the same time, the banking system underwent an important transformation.

2,434 citations


Journal ArticleDOI
Richard S.J. Tol
TL;DR: Greenhouse gas emissions are fundamental both to the world's energy system and to its food production, as discussed by the author; climate change is the mother of all externalities: larger, more complex, and more uncertain than any other environmental problem.
Abstract: Greenhouse gas emissions are fundamental both to the world’s energy system and to its food production. The production of CO2, the predominant gas implicated in climate change, is intrinsic to fossil fuel combustion; specifically, thermal energy is generated by breaking the chemical bonds in the hydrocarbons oil, coal, and natural gas and oxidizing the components to CO2 and H2O. One cannot have cheap energy without carbon dioxide emissions. Similarly, methane (CH4) emissions, an important greenhouse gas in its own right, are necessary to prevent the build-up of hydrogen in anaerobic digestion and decomposition. One cannot have beef, mutton, dairy, or rice without methane emissions. Climate change is the mother of all externalities: larger, more complex, and more uncertain than any other environmental problem. The sources of greenhouse gas emissions are more diffuse than those of any other environmental problem. Every company, every farm, every household emits some greenhouse gases. The effects are similarly pervasive. Weather affects agriculture, energy use, health, and many aspects of nature—which in turn affects everything and everyone. The causes and consequences of climate change are very diverse, and those in low-income countries who contribute least to climate change are most vulnerable to its effects. Climate change is also a long-term problem. Some greenhouse gases have an atmospheric lifetime measured in tens of thousands of years. The quantities of emissions involved are enormous. In 2000, carbon dioxide emissions alone (and excluding land use change) were 24 billion metric tons of carbon dioxide (tCO2).

1,054 citations


Journal ArticleDOI
Marc Rysman
TL;DR: In the case of a video game system, as discussed in this paper, the intermediary is the console producer, while the two sets of agents are consumers and video game developers; neither consumers nor game developers will be interested in the PlayStation if the other party is not.
Abstract: At a local Best Buy, a child places a new Sony PlayStation 3 on the cashier’s counter while the parents dig out their Visa card. The gaming system and the payment card may appear to have little connection other than this purchase. However, these two items share an important characteristic that is generating a series of economic insights and has important implications for strategic decision making and economic policymaking. Both video game systems and payment cards are examples of two-sided markets. Broadly speaking, a two-sided market is one in which 1) two sets of agents interact through an intermediary or platform, and 2) the decisions of each set of agents affect the outcomes of the other set of agents, typically through an externality. In the case of a video game system, the intermediary is the console producer—Sony in the scenario above—while the two sets of agents are consumers and video game developers. Neither consumers nor game developers will be interested in the PlayStation if the other party is not. Similarly, a successful payment card requires both consumer usage and merchant acceptance, where both consumers and merchants value each other's participation. Many more products fit into this paradigm, such as search engines, newspapers, and almost any advertiser-supported media (examples in which consumers often negatively value, rather than positively value, the participation of the other side), as well as most software or title-based operating systems and consumer electronics. Malls, which seek retailers and consumers; convention organizers, which seek buyers and sellers; dating services, which seek men and women; and The Journal of Economic Perspectives, which seeks content and readership, all experience the economics of two-sided markets. The multi-sided nature of many Internet and high-technology markets, as well as

1,039 citations


Journal ArticleDOI
TL;DR: In a typical leveraged buyout transaction, as mentioned in this paper, the private equity firm buys majority control of an existing or mature firm. This arrangement is distinct from that of venture capital firms, which typically invest in young or emerging companies and typically do not obtain majority control.
Abstract: In a leveraged buyout, a company is acquired by a specialized investment firm using a relatively small portion of equity and a relatively large portion of outside debt financing. The leveraged buyout investment firms today refer to themselves (and are generally referred to) as private equity firms. In a typical leveraged buyout transaction, the private equity firm buys majority control of an existing or mature firm. This arrangement is distinct from venture capital firms that typically invest in young or emerging companies, and typically do not obtain majority control. In this paper, we focus specifically on private equity firms and the leveraged buyouts in which they invest, and we will use the terms private equity and leveraged buyout interchangeably. Leveraged buyouts first emerged as an important phenomenon in the 1980s. As leveraged buyout activity increased in that decade, Jensen (1989) predicted that the leveraged buyout organizations would eventually become the dominant corporate organizational form. He argued that the private equity firm itself combined concentrated ownership stakes in its portfolio companies, high-powered incentives for the private equity firm professionals, and a lean, efficient organization with minimal overhead costs. The private equity firm then applied performance-based managerial compensation, highly leveraged capital structures (often relying on junk bond financing), and active governance to the companies in which it invested. According to Jensen, these structures were

598 citations


Journal ArticleDOI
TL;DR: The run on Northern Rock, as discussed by the authors, looked like a classic bank run, with depositors waiting in line outside the branch offices of a United Kingdom bank to withdraw their money.
Abstract: In September 2007, television viewers and newspaper readers around the world saw pictures of what looked like an old-fashioned bank run—that is, depositors waiting in line outside the branch offices of a United Kingdom bank called Northern Rock to withdraw their money. The previous U.K. bank run before Northern Rock was in 1866 at Overend Gurney, a London bank that overreached itself in the railway and docks boom of the 1860s. Bank runs were not uncommon in the United States up through the 1930s, but they have been rare since the start of deposit insurance backed by the Federal Deposit Insurance Corporation. In contrast, deposit insurance in the United Kingdom was a partial affair, funded by the banking industry itself and insuring only a part of the deposits—at the time of the run, U.K. bank deposits were fully insured only up to 2,000 pounds, and then only 90 percent of the deposits up to an upper limit of 35,000 pounds. When faced with a run, the incentive to withdraw one’s deposits from a U.K. bank was therefore very strong. For economists, the run on Northern Rock at first seemed to offer a rare opportunity to study at close quarters all the elements involved in their theoretical models of bank runs: the futility of public statements of reassurance, the mutually reinforcing anxiety of depositors, as well as the power of the media in galvanizing and channeling that anxiety through the power of television images. However, the storyline of the Northern Rock bank run does not fit the conventional narrative. On September 13, 2007, the BBC’s evening television news broadcast first broke the news that Northern Rock had sought the Bank of England’s support. The next morning, the Bank of England announced that it would provide emergency liquidity support. It was only after that announcement—that is, after the

529 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine how the process of securitization allowed trillions of dollars of risky assets to be transformed into securities that were widely considered to be safe, and argue that two key features of the structured finance machinery fueled its spectacular growth.
Abstract: The essence of structured finance activities is the pooling of economic assets like loans, bonds, and mortgages, and the subsequent issuance of a prioritized capital structure of claims, known as tranches, against these collateral pools. As a result of the prioritization scheme used in structuring claims, many of the manufactured tranches are far safer than the average asset in the underlying pool. This ability of structured finance to repackage risks and to create “safe” assets from otherwise risky collateral led to a dramatic expansion in the issuance of structured securities, most of which were viewed by investors to be virtually risk-free and certified as such by the rating agencies. At the core of the recent financial market crisis has been the discovery that these securities are actually far riskier than originally advertised. We examine how the process of securitization allowed trillions of dollars of risky assets to be transformed into securities that were widely considered to be safe, and argue that two key features of the structured finance machinery fueled its spectacular growth. First, we show that most securities could only have received high credit ratings if the rating agencies were extraordinarily confident about their ability to estimate the underlying securities’ default risks, and how likely defaults were to be correlated. Using the prototypical structured finance security—the collateralized debt obligation (CDO)—as an example, we illustrate that issuing a capital structure amplifies errors in evaluating the risk of the underlying securities.
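
The amplification result invites a quick numerical check. The sketch below is ours, not the authors' model: it assumes a one-factor Gaussian copula for default correlation, a 100-loan pool with zero recovery, and a hypothetical senior tranche attached at 15 percent of pool losses, and shows how sensitive the "safe" senior tranche is to small errors in the assumed default probability and correlation.

```python
# Illustrative sketch (assumptions ours): senior-tranche risk under a
# one-factor Gaussian copula, a standard textbook default-correlation model.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def senior_tranche_hit_prob(p_default, rho, n_loans=100,
                            attachment=0.15, n_sims=50_000):
    """Probability that pool losses reach the hypothetical senior tranche."""
    threshold = norm.ppf(p_default)                   # default if asset value < threshold
    common = rng.standard_normal((n_sims, 1))         # systematic factor
    idio = rng.standard_normal((n_sims, n_loans))     # idiosyncratic factors
    assets = np.sqrt(rho) * common + np.sqrt(1 - rho) * idio
    pool_loss = (assets < threshold).mean(axis=1)     # zero recovery assumed
    return (pool_loss > attachment).mean()

for p, rho in [(0.05, 0.10), (0.06, 0.10), (0.05, 0.30)]:
    print(f"default prob {p:.0%}, correlation {rho:.2f}: "
          f"P(senior tranche hit) = {senior_tranche_hit_prob(p, rho):.4f}")
```

Small upward revisions in either parameter move the senior tranche's loss probability by far more, proportionally, than they move the risk of the average loan, which is the amplification mechanism the paper describes.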

463 citations


Journal ArticleDOI
TL;DR: In this article, the authors calculate the present value of the already-promised pension liabilities of the 50 U.S. states, assuming that states cannot default on pension benefits that workers have already earned.
Abstract: As of December 2008, state governments had approximately $1.94 trillion set aside in pension funds for their employees. How does the value of these assets compare to the present value of states' pension liabilities? Just as future Social Security and Medicare liabilities do not appear in the headline numbers of the U.S. federal debt, the financial liability from underfunded public pensions does not appear in the headline numbers of state debt. If pensions are underfunded, then the gap between pension assets and liabilities is off-balance-sheet government debt. We show that government accounting standards require states to use procedures that severely understate their liabilities. We then discuss the true economic funding of state public pension plans. Using market-based discount rates that reflect the risk profile of the pension liabilities, we calculate that the present value of the already-promised pension liabilities of the 50 U.S. states amounts to $5.17 trillion, assuming that states cannot default on pension benefits that workers have already earned. Net of the $1.94 trillion in assets, these pensions are underfunded by $3.23 trillion. This "pension debt" dwarfs the states' publicly traded debt of $0.94 trillion. And we show that even before the market collapse of 2008, the system was economically severely underfunded, though public actuarial reports presented the plans' funding status in a more favorable light.
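
The accounting point turns on the discount rate. The sketch below is a hypothetical illustration, not the authors' calculation: all of the numbers (an 8 percent actuarial expected-return rate, a 4 percent default-free rate, a 30-year level benefit stream) are assumptions chosen only to show the direction and rough size of the effect.

```python
# Hypothetical illustration of the discounting point: the same promised
# benefit stream looks far smaller when discounted at an assumed 8%
# expected asset return than at a ~4% default-free rate appropriate for
# liabilities that (by assumption here) cannot be defaulted on.
def present_value(payment, years, rate):
    return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

benefit = 100.0   # annual promised benefit (arbitrary units)
horizon = 30      # years of payments (hypothetical)

pv_actuarial = present_value(benefit, horizon, 0.08)
pv_riskfree = present_value(benefit, horizon, 0.04)

print(f"PV at 8% actuarial rate: {pv_actuarial:,.0f}")
print(f"PV at 4% risk-free rate: {pv_riskfree:,.0f}")
print(f"Understatement factor:   {pv_riskfree / pv_actuarial:.2f}x")
```

Under these invented parameters the liability discounted at the default-free rate is roughly 1.5 times the actuarial figure, which is the qualitative pattern behind the paper's $5.17 trillion versus official estimates.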

334 citations


Journal ArticleDOI
TL;DR: Mankiw, Weinzierl, and Yagan explore the interplay between tax theory and tax policy, identify key lessons policymakers might take from the academic literature on how taxes ought to be designed, and discuss the extent to which these lessons are reflected in actual tax policy.
Abstract: The optimal design of a tax system is a topic that has long fascinated economic theorists and flummoxed economic policymakers. This paper explores the interplay between tax theory and tax policy. It identifies key lessons policymakers might take from the academic literature on how taxes ought to be designed, and it discusses the extent to which these lessons are reflected in actual tax policy. We begin with a brief overview of how economists think about optimal tax policy, based largely on the foundational work of Ramsey (1927) and Mirrlees (1971). We then put forward eight general lessons suggested by optimal tax theory as it has developed in recent decades: 1) Optimal marginal tax rate schedules depend on the distribution of ability; 2) The optimal marginal tax schedule could decline at high incomes; 3) A flat tax, with a universal lump-sum transfer, could be close to optimal; 4) The optimal extent of redistribution rises with wage inequality; 5) Taxes should depend on personal characteristics as well as income; 6) Only final goods ought to be taxed, and typically they ought to be taxed uniformly; 7) Capital income ought to be untaxed, at least in expectation; and 8) In stochastic dynamic economies, optimal tax policy requires increased sophistication. For each lesson, we discuss its theoretical underpinnings and the extent to which it is consistent with actual tax policy. To preview our conclusions, we find that there has been considerable change in the theory and practice of taxation over the past several decades—although the two paths have been far from parallel.

298 citations


Journal ArticleDOI
TL;DR: In this article, the authors illuminate recent developments in the world oil market from the perspective of economic theory, focusing on the role of speculators in the recent spike in the price of oil.
Abstract: The world oil market is regarded by many as a puzzle. Why are oil prices so volatile? What is OPEC and what does OPEC do? Where are oil prices headed in the long run? Is “peak oil” a genuine concern? Why did oil prices spike in the summer of 2008, and what role did speculators play? Any attempt to answer these questions must be informed and disciplined by economics. Such is the purpose of this essay: to illuminate recent developments in the world oil market from the perspective of economic theory.

284 citations


Journal ArticleDOI
TL;DR: Online advertising accounts for almost 9 percent of all advertising in the United States as discussed by the authors, and this share is expected to increase as more media is consumed over the Internet and as more advertisers shift spending to online technologies.
Abstract: Online advertising accounts for almost 9 percent of all advertising in the United States. This share is expected to increase as more media is consumed over the Internet and as more advertisers shift spending to online technologies. The expansion of Internet-based advertising is transforming the advertising business by providing more efficient methods of matching advertisers and consumers and transforming the media business by providing a source of revenue for online media firms that competes with traditional media firms. The precipitous decline of the newspaper industry is one manifestation of the symbiotic relationship between online content and advertising. Online advertising is provided by a series of interlocking multisided platforms that facilitate the matching of advertisers and consumers. These intermediaries increasingly make use of detailed individual data, predictive methods, and matching algorithms to create more efficient matches between consumers and advertisers. Some of their methods raise public policy issues that require balancing benefits from providing consumers more valuable advertising against the possible loss of valuable privacy.

282 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on how the market for college education has re-sorted students among schools as the costs of distance and information have fallen, and demonstrate that the stakes associated with choosing a college are greater today than they were four decades ago because very selective colleges are offering very large per-student resources and per-student subsidies.
Abstract: Over the past few decades, the average college has not become more selective: the reverse is true, though not dramatically. People who believe that college selectivity is increasing may be extrapolating from the experience of a small number of colleges such as members of the Ivy League, Stanford, Duke, and so on. These colleges have experienced rising selectivity, but their experience turns out to be the exception rather than the rule. Only the top 10 percent of colleges are substantially more selective now than they were in 1962. Moreover, at least 50 percent of colleges are substantially less selective now than they were in 1962. To understand changing selectivity, we must focus on how the market for college education has re-sorted students among schools as the costs of distance and information have fallen. In the past, students' choices were very sensitive to the distance of a college from their home, but today, students, especially high-aptitude students, are far more sensitive to a college's resources and student body. It is the consequent re-sorting of students among colleges that has, at once, caused selectivity to rise in a small number of colleges while simultaneously causing it to fall in other colleges. This has had profound implications for colleges' resources, tuition, and subsidies for students. I demonstrate that the stakes associated with choosing a college are greater today than they were four decades ago because very selective colleges are offering very large per-student resources and per-student subsidies, enabling admitted students to make massive human capital investments.

Journal ArticleDOI
TL;DR: In this article, the authors examine the data on online crime; discuss the collective action aspects of the problem; demonstrate how agile attackers shift across national borders as earlier targets wise up to their tactics; describe ways to improve law-enforcement coordination; and explore how defenders' incentives affect the outcomes.
Abstract: This paper will focus on online crime, which has taken off as a serious industry since about 2004. Until then, much of the online nuisance came from amateur hackers who defaced websites and wrote malicious software in pursuit of bragging rights. But now criminal networks have emerged -- online black markets in which the bad guys trade with each other, with criminals taking on specialized roles. Just as in Adam Smith's pin factory, specialization has led to impressive productivity gains, even though the subject is now bank card PINs rather than metal ones. Someone who can collect bank card and PIN data, electronic banking passwords, and the information needed to apply for credit in someone else's name can sell these data online to anonymous brokers. The brokers in turn sell the credentials to specialist cashiers who steal and then launder the money. We will examine the data on online crime; discuss the collective-action aspects of the problem; demonstrate how agile attackers shift across national borders as earlier targets wise up to their tactics; describe ways to improve law-enforcement coordination; and explore how defenders' incentives affect the outcomes.

Journal ArticleDOI
TL;DR: In this article, the authors provide longitudinal measures that separate changes in income inequality into changes that permanently move income to new levels and those that reflect only transitory change; they refer to the latter as changes in "income instability" and discuss how the instability of individual earnings and family income has evolved over the last quarter century.
Abstract: The inequality of earnings and of family incomes in the United States has increased since the late 1970s (Autor, Katz, and Kearney, 2008). This increase in cross-sectional inequality has largely been interpreted as a growing disparity in permanent incomes between those with high incomes and those with low incomes. However, growing inequality could equally well be a result of growing income instability. If workers experience increasingly large fluctuations in earnings from year to year, this would also increase the measured inequality of earnings from year to year. For example, if everyone maintained the same level of permanent income but some experienced a $500 increase in their income in the first year and then a reversal to $500 below their permanent incomes in the following year, while an equal number experienced the opposite (a drop in income followed by an offsetting rise), measured inequality would rise though nothing would have happened to the dispersion of permanent incomes. While this example is an extreme one, it points to the fact that the large rise in earnings inequality between the 1970s and the 1990s could reflect either a rise in disparity of permanent incomes, a rise in earnings instability, or some portion of both. Without longitudinal data, it is impossible to distinguish between these two very different explanations for increased cross-sectional measures of inequality. In this paper, we provide longitudinal measures that separate changes in income inequality into changes that permanently change income to new levels and those that only reflect transitory change. We refer to the latter as changes in “income instability” and discuss how the instability of individual earnings and family income in the United States has evolved over the last quarter century.
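
The $500 example generalizes to a one-line variance identity: with independent components, the cross-sectional variance of observed income equals the variance of permanent income plus the variance of the transitory shock. A toy simulation (all figures invented) makes the identification problem concrete:

```python
# A toy version of the paper's $500 example: measured cross-sectional
# inequality rises with transitory "instability" even when the dispersion
# of permanent incomes is held fixed. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
permanent = rng.normal(50_000, 10_000, n)   # fixed permanent incomes

for sigma_transitory in [0, 500, 5_000]:
    observed = permanent + rng.normal(0, sigma_transitory, n)
    print(f"transitory sd = {sigma_transitory:>5}: "
          f"cross-sectional sd of observed income = {observed.std():,.0f}")

# Variances add: sd(observed) ~ sqrt(10_000**2 + sigma_transitory**2),
# so a single-year snapshot cannot separate the two components --
# hence the paper's need for longitudinal data.
```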

Journal ArticleDOI
TL;DR: In the early stages of the crisis, the situation often arose in which a well-capitalized bank was forced to make sudden large loans based on previously committed lines of credit as mentioned in this paper.
Abstract: In summer 2007, U.S. and global financial markets found themselves facing a potential financial crisis, and the U.S. Federal Reserve found itself in a difficult situation. It was becoming clear that banks and other financial institutions would ultimately lose tens or even hundreds of billions of dollars from their exposure to subprime mortgage market loans. Bank lending is closely tied to bank capital or net worth—specifically, bank regulators require that loans not exceed a certain multiple of capital. Thus, the Federal Reserve faced the danger of a sharp contraction in credit and bank lending in a way that threatened a deep recession or worse. When this kind of event happens, the job of the central bank is to assure that financial institutions have the necessary funds to conduct their daily business; that they have the “liquidity” they need to make timely payments and transfers. Modern financial institutions need to replenish their funding every day. In the United States alone, literally trillions of dollars are transferred between banks each day to support the $50 trillion credit outstanding in the economy as a whole. Commercial banks require funds to initiate the mortgages, auto loans, and credit card debt they then sell into financial markets, while investment banks finance much of their activity with daily borrowing. In the early stages of the crisis, the situation often arose in which a well-capitalized bank was forced to make sudden large loans based on previously committed lines of credit. In this circumstance, central bank actions can ease

Journal ArticleDOI
TL;DR: In this paper, the authors revisit two pieces of conventional wisdom in the current debate about poverty, paying close attention to the price data underlying these findings: that the poor pay more than households of higher income for the goods and services they purchase; and that poverty rates have remained essentially flat since the late 1960s, raising questions about the success of the policies implemented to reduce poverty.
Abstract: In this paper, we revisit two pieces of conventional wisdom in the current debate about poverty, paying close attention to the price data underlying these findings: that the poor pay more than households of higher income for the goods and services they purchase; and that poverty rates, at least as measured by the U.S. Census, have remained essentially flat since the late 1960s, raising questions about the success of the policies implemented to reduce poverty. By examining scanner data on thousands of household purchases, we find that the poor pay less —not more—for the goods they purchase. And by extending the advances on price measurement in the recent decade back to the 1970s, we find that current poverty rates are less than half of the official numbers.

Journal ArticleDOI
TL;DR: This article finds that the disparity in life satisfaction between residents of transition and non-transition countries is much larger among the elderly, and that deterioration in public goods provision, an increase in macroeconomic volatility, and a mismatch of the human capital of residents educated before transition, which disproportionately affected the aged population, explain a great deal of the difference between transition countries and other countries with similar income and other macroeconomic conditions.
Abstract: Despite strong growth performance in transition economies in the last decade, residents of transition countries report abnormally low levels of life satisfaction. Using data from the World Values Survey and other sources, we study various explanations of this phenomenon. First, we document that the disparity in life satisfaction between residents of transition and non-transition countries is much larger among the elderly. Second, we find that deterioration in public goods provision, an increase in macroeconomic volatility, and a mismatch of human capital of residents educated before transition which disproportionately affected the aged population explain a great deal of the difference in life satisfaction between transition countries and other countries with similar income and other macroeconomic conditions. The rest of the gap is explained by the difference in the quality of the samples. As in other countries, life satisfaction in transition countries is strongly related to income; but, due to a higher non-response of high-income individuals in transition countries, the survey-data estimates of the recent increase in life satisfaction, driven by 10-year sustained economic growth in the transition region, are biased downwards. The evidence suggests that if the region keeps growing at current rates, life satisfaction in transition countries will catch up with the "normal" level in the near future.
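
The non-response argument is easy to see in a toy simulation. Everything below is invented for illustration: satisfaction is assumed to rise with log income, and richer respondents are assumed less likely to answer, which is enough to bias the survey mean downward.

```python
# Illustrative only (all parameters invented): differential survey
# non-response by income biases measured life satisfaction downward
# when satisfaction rises with income, as the paper argues.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
income = rng.lognormal(mean=10, sigma=0.8, size=n)
# Assume satisfaction increases in log income, plus noise.
satisfaction = 2.0 + 0.5 * np.log(income) + rng.normal(0, 1, n)

# Assume richer people are less likely to answer the survey.
p_respond = np.clip(0.9 - 0.1 * (np.log(income) - 10), 0.2, 0.95)
responded = rng.random(n) < p_respond

print(f"true mean satisfaction:     {satisfaction.mean():.3f}")
print(f"surveyed mean satisfaction: {satisfaction[responded].mean():.3f}")
```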

Journal ArticleDOI
TL;DR: Why, for example, should average grades in English be much higher than average grades in chemistry? And what is going on when a department's grading practices change markedly relative to other departments?
Abstract: The term “grade inflation” covers a multitude of phenomena, some of which are even alleged to be sins. Continuing increases in average grades have been widely documented in many universities over the last several decades (for example, Sabot and Wakeman-Linn, 1991; Johnson, 2003). Conversely, cases of grade deflation are rare and short-lived, although in some settings, such as first-year law courses, some universities have held to a strict curve. Also widely documented, and often associated with grade inflation, are systematic differences in grade levels by field of study, with a common belief that the sciences and math grade harder than the social sciences, which in turn grade harder than the humanities—and that economics behaves more like the natural sciences than like the social sciences. The general persistence of these relative differences in grades seems to us more interesting and more difficult to explain than the persistence of modest grade inflation in general, and it is the principal focus of this paper. Why, for example, should average grades in English be much higher than average grades in chemistry? And what is going on when a department’s grading practices change markedly relative to other departments?

We begin with an overview of some evidence on grade inflation by department and course level, focusing in particular on detailed data that we have from the University of Michigan. Grades in undergraduate arts and sciences courses at the University of Michigan have, with a few exceptions, been rising slowly and steadily since at least 1992. But our main focus is to explore some possible reasons for the highly (but not perfectly) stable differences in the grading practices of departments. Perhaps surprisingly, we uncover a story that is much richer and more interesting than some variant of “the sciences (which are virtuous or mean, depending on your point of view) grade tough, and the humanities (which are the opposite, again depending) grade easy.”

Our basic story is fairly simple. Grades are an element of an intra-university economy that determines, among other things, enrollments and the sizes of departments. Departments supply courses and students demand them, although the payment from students to faculty is mediated by the university administration, and there are also nonpecuniary rewards and costs associated with teaching. Departments generally would prefer small classes populated by excellent and highly motivated students. The dean, meanwhile, would like to see departments supply some target quantity of credit hours—the more the better, other things equal—and will penalize departments that don’t do enough teaching. In this framework, grades are one mechanism that departments can use to influence the number of students who will take a given class. But both the costs and consequences of different grading policies vary systematically across departments and courses. Grading is always at least somewhat costly, but the cost is greater the greater are the opportunities for students to quarrel with the fairness of the grading standards and methods that faculty use. On the demand side, some courses have close substitutes while others do not, and one would expect the grade-elasticity of demand to behave in the usual way. This framework leads to several hypotheses about relative grades across departments and courses.

First, the distribution of grades is likely to be lower where courses are required, and where there are agreed-upon and readily assessed criteria—right or wrong answers—for grading. By contrast, departments that evaluate student performance using interpretative methods will tend to have higher grades, because using these methods increases the personal cost to instructors of assigning and defending low grades. Second, upper-division classes are likely to have higher grades than lower-division classes, both because students have selected into the upper-division courses where their performance is likely to be stronger and because faculty want to support (and may even like) their student majors. Third, grades can be used in conjunction with other tools to attract students to departments that have low enrollments and to deter students from courses of study that are congested. We find some evidence in support of each of these patterns.

As it happens, the consequence of the preceding tendencies is that, indeed, the sciences (mostly) grade harder than the humanities. But there are some surprises. For example, consistent with our framework but not consistent with the notion that the humanities grade softer than the sciences for some intrinsic reason, we find that at Michigan introductory physics and chemistry labs grade much easier than second-year French courses.

We find relative grades to be both more interesting and more amenable to analysis than the low rate of general grade inflation. Inflation, we expect, arises from two complementary features of the landscape: for any instructor in any course, grading a little more softly than expected is costless, and it makes students happy. The instructor may gain some benefit in teaching evaluations (Johnson, 2003), but even if not, the opportunity to (in effect) print money is one that at least some instructors will find appealing. As long as some faculty respond to this opportunity, others in the department will be under pressure to adjust to the new norms for grades, and at least some other departments will endeavor to follow this trend in order to maintain market share and perhaps also to avoid the unpleasantness of widespread student grumbling. This story is hard to verify or to refute, in part because general grade inflation has proceeded without interruption for so long, but the key ingredients are surely in place.

We conclude with a discussion of implications for further research and for academic policy. We argue that differential grading standards have potentially serious negative consequences for the ideal of liberal education. At the same time, we conclude that any discussion of a policy response to grade inflation must begin by recognizing that American colleges and universities are now in at least the fifth decade of well-documented grade inflation and differences in grading norms by field. Current grading behavior must and will be interpreted in the context of current norms and expectations about grades, not according to some dimly imagined (anyone who actually remembers it is retired) age of uniform standards across departments. Proposals that attempt to alter grading behavior will face the costs of acting against prevailing customs and expectations, whether in altering pre-existing patterns of grades across departments within a college or university or in attempting to alter grades in one institution while recognizing that other universities may not change.
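
The "grade-elasticity of demand" in the framework above can be illustrated with a toy calculation of our own (it is not the authors' model): under a constant-elasticity response of enrollment to expected grades, the same softening of grades moves enrollment far more in courses with close substitutes than in required courses.

```python
# Toy illustration (ours, not the authors' model) of grade-elastic demand:
# courses with close substitutes respond more to a change in expected
# grades than required courses do. Elasticities are invented.
def enrollment(base, mean_grade, elasticity, reference_grade=3.0):
    # Constant-elasticity response of enrollment to the expected grade.
    return base * (mean_grade / reference_grade) ** elasticity

for label, elasticity in [("required course (few substitutes)", 0.5),
                          ("elective with close substitutes ", 4.0)]:
    before = enrollment(100, 3.0, elasticity)
    after = enrollment(100, 3.3, elasticity)   # grading 0.3 points softer
    print(f"{label}: {before:.0f} -> {after:.0f} students "
          f"({(after / before - 1):+.0%})")
```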

Journal ArticleDOI
TL;DR: The United States is moving closer to enacting a policy to reduce domestic emissions of greenhouse gases, and a key element in any plan to reduce emissions will be to place a price on greenhouse gas emissions as discussed by the authors.
Abstract: The United States is moving closer to enacting a policy to reduce domestic emissions of greenhouse gases. A key element in any plan to reduce emissions will be to place a price on greenhouse gas emissions. This paper discusses the different approaches that can be taken to price emissions and assesses their strengths and weaknesses.

Journal ArticleDOI
TL;DR: Surveying definitions of economics from contemporary principles of economics textbooks, the authors find that economics is variously defined as the study of the economy, the study of the coordination process, the study of the effects of scarcity, the science of choice, and the study of human behavior.
Abstract: Modern economists do not subscribe to a homogeneous definition of their subject. Surveying definitions of economics from contemporary principles of economics textbooks, we find that economics is the study of the economy, the study of the coordination process, the study of the effects of scarcity, the science of choice, and the study of human behavior. At a time when economists are tackling subjects as diverse as growth, auctions, crime, and religion with a methodological toolkit that includes real analysis, econometrics, laboratory experiments, and historical case studies, and when they are debating the explanatory roles of rationality and behavioral norms, any concise definition of economics is likely to be inadequate. This lack of agreement on a definition does not necessarily pose a problem for the subject. Economists are generally guided by pragmatic considerations of what works or by methodological views emanating from various sources, not by formal definitions: to repeat the comment attributed to Jacob Viner, economics is what economists do. However, the way the definition of economics has evolved is more than a historical curiosity. At times, definitions are used to justify what economists are doing. Definitions can also reflect the direction in which their authors want to see the subject move and can even influence practice.

Journal ArticleDOI
TL;DR: The average nominal share prices of common stocks traded on the New York Stock Exchange have remained constant at approximately $35 per share since the Great Depression as a result of stock splits as mentioned in this paper.
Abstract: The average nominal share prices of common stocks traded on the New York Stock Exchange have remained constant at approximately $35 per share since the Great Depression as a result of stock splits. It is surprising that U.S. firms actively maintained constant nominal prices for their shares while general prices in the economy went up more than tenfold. This is especially puzzling given that commissions paid by investors on trading ten $35 shares are about ten times those paid on a single $350 share. We review potential explanations including signaling and optimal trading ranges and find that none of the existing theories are able to explain the observed constant nominal prices. We suggest that the evidence is consistent with the idea that customs and norms can explain the nominal price puzzle.
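
The commission claim is straightforward arithmetic under the per-share commission schedules of the era; the per-share rate below is invented purely for illustration.

```python
# Hypothetical per-share commission (the rate is invented) illustrating
# the abstract's point: trading a $350 position as ten $35 shares costs
# about ten times as much as trading it as one $350 share.
commission_per_share = 0.10   # dollars per share, assumed for illustration

position = 350.0
for share_price in (35.0, 350.0):
    n_shares = position / share_price
    cost = n_shares * commission_per_share
    print(f"{n_shares:4.0f} share(s) at ${share_price:>6.2f}: "
          f"commission ${cost:.2f}")
```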

Journal ArticleDOI
TL;DR: In this article, the authors document the potential and actual savings that consumers realize from four particular types of purchasing behavior: purchasing on sale; buying in bulk (at a lower per-unit price); buying generic brands; and choosing outlets.
Abstract: This paper documents the potential and actual savings that consumers realize from four particular types of purchasing behavior: purchasing on sale; buying in bulk (at a lower per unit price); buying generic brands; and choosing outlets. How much can and do households save through each of these behaviors? How do these patterns vary with consumer demographics? We use data collected by a marketing firm on all food purchases brought into the home for a large, nationally representative sample of U.K. households in 2006. We are interested in how consumer choice affects the measurement of price changes. In particular, a standard price index based on a fixed basket of goods will overstate the rise in the true cost of living because it does not properly consider sales and bulk purchasing. According to our measures, the extent of this bias might be of the same or even greater magnitude than the better-known substitution and outlet biases.
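
The direction of the bias is easy to see in a stylized two-period example (all numbers invented): a fixed-basket index priced at the regular shelf price registers the full shelf-price increase, while the prices households actually pay rise by less when a growing share of purchases is made on sale.

```python
# Illustrative numbers (not from the paper): a fixed-basket index priced
# at the regular shelf price overstates inflation when households shift
# more of their purchases onto sale prices.
regular = {"period0": 1.00, "period1": 1.10}     # shelf price, +10%
discount = 0.30                                  # sale price = 30% off
sale_share = {"period0": 0.20, "period1": 0.40}  # hypothetical shift to sales

def avg_paid(period):
    s = sale_share[period]
    return (1 - s) * regular[period] + s * regular[period] * (1 - discount)

fixed_basket_inflation = regular["period1"] / regular["period0"] - 1
paid_price_inflation = avg_paid("period1") / avg_paid("period0") - 1

print(f"Fixed-basket (shelf-price) inflation: {fixed_basket_inflation:.1%}")
print(f"Inflation in prices actually paid:    {paid_price_inflation:.1%}")
```

Under these invented numbers the shelf-price index shows 10 percent inflation while prices actually paid rise about 3 percent, the kind of gap the paper argues can rival the better-known substitution and outlet biases.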

Journal ArticleDOI
TL;DR: In this article, the author focuses not on the moderate emission reductions that can be achieved using existing technologies but on the breakthrough technologies that are needed to reduce emissions dramatically, noting that it is possible that the revolution needed to dramatically reduce emissions of greenhouse gases will fail.
Abstract: Emissions of CO2 and other greenhouse gases can be reduced significantly using existing technologies, but stabilizing concentrations will require a technological revolution--a "revolution" because it will require fundamental change, achieved within a relatively short period of time. Inspiration for a climate-technology revolution is often drawn from the Apollo space program or the Manhattan Project, but averting dangerous climate change cannot be "solved" by a single new technology, deployed by a single government. The technological changes needed to address climate change fundamentally will have to be pervasive; they will have to involve markets; and they will have to be global in scope. My focus in this paper is not on the moderate emission reductions that can be achieved using existing technologies, but on the breakthrough technologies that are needed to reduce emissions dramatically. The challenges are formidable. Indeed, it is possible that the revolution needed to dramatically reduce emissions of greenhouse gases will fail. Should the climate change abruptly, the incentive to "engineer" the climate will be strong. There will be a climate-technology revolution, but its nature will depend on the institutions we develop to address the challenge we face.

Journal ArticleDOI
TL;DR: The authors studied the demographics and consumption patterns of those who subscribe to adult entertainment websites and found that free access offers consumers an extra benefit: online payments tend to create records documenting the fact of a customer's purchase; consumers of free content may feel more confident that their purchases will remain confidential.
Abstract: This paper studies the adult online entertainment industry, particularly the consumption side of the market. In particular, it focuses on the demographics and consumption patterns of those who subscribe to adult entertainment websites. On the surface, this business would seem to face a number of obstacles. Regulatory and legal barriers have already been mentioned. In addition, those charging for access to adult entertainment face competition from similar content available without a fee. In the context of adult entertainment, free access offers consumers an extra benefit: online payments tend to create records documenting the fact of a customer's purchase; consumers of free content may feel more confident that their purchases will remain confidential. More broadly, measured levels of religiosity in America are high. On the other hand, social critics often argue that the rise of Internet pornography is contributing to a coarsening of American culture. Do consumption patterns of online adult entertainment reveal two separate Americas? Or is the consumption of online adult entertainment widespread, regardless of legal barriers, potential for embarrassment, and even religious conviction?

Journal ArticleDOI
TL;DR: In this article, the authors focus on the pricing aspect of the "net neutrality" debate, in particular, the de facto ban on fees levied by Internet service providers on content providers to reach users.
Abstract: This paper focuses on the pricing aspect of the "net neutrality" debate -- in particular, the de facto ban on fees levied by Internet service providers on content providers to reach users. This "zero-price" rule may prove desirable for several reasons. Using a two-sided market analysis, we suggest that it subsidizes creativity and innovation in new content creation -- goals shared by copyright and patent laws. The rule also helps to solve a coordination problem: since Internet service providers do not completely internalize the effects of their own pricing decisions, lack of regulation may lead to even higher fees charged by all. Finally, allowing for such fees runs the risk of creating horizontally differentiated Internet service providers with different libraries of accessible content, thereby foreclosing consumers and leading to Internet fragmentation.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the effect of the implemented policy on patterns of course choice and grade inflation and identify the extent to which the change in student behavior resulted in an increase in the university-wide mean grade.
Abstract: Grade inflation and high grade levels have been subjects of concern and public debate in recent decades. In the mid-1990s, Cornell University's Faculty Senate had a number of discussions about grade inflation and what might be done about it. In April 1996, the Faculty Senate voted to adopt a new grade reporting policy which had two parts: 1) the publication of course median grades on the Internet; and 2) the reporting of course median grades in students' transcripts. The policy change followed the determination of a university committee that "it is desirable for Cornell University to provide more information to the reader of a transcript and produce more meaningful letter grades." It was hoped that "More accurate recognition of performance may encourage students to take courses in which the median grade is relatively low." The median grade policy has remained to date only partially implemented: median grades have been reported online since 1998 but do not yet appear in transcripts. We evaluate the effect of the implemented policy on patterns of course choice and grade inflation. Specifically, we test two related hypotheses: First, all else being equal, the availability of online grade information will lead to increased enrollment into leniently graded courses. Second, high-ability students will be less attracted to the leniently graded courses than their peers. Building on these results we perform an exercise that identifies the extent to which the change in student behavior resulted in an increase in the university-wide mean grade.

Journal ArticleDOI
TL;DR: The authors show that while the application rate to four-year colleges has steadily increased over the last several decades, the decline in cohort size between 1982 and 1992 left the number of applicants practically unchanged between those two years; from 1992 to 2004, by contrast, the number of applicants grew from 1.19 million to 1.71 million students, an increase of 44 percent, as rising application rates and growing cohort size reinforced each other.
Abstract: During the last several decades, it has become increasingly difficult to gain entry into an American four-year college or university. Growing numbers of students compete for admission to such schools: the number of college applicants has doubled since the early 1970s, while school sizes have changed little. This increase is due both to the increasing fraction of high school graduates applying for college and more recently to the increase in the size of the college-aged cohorts. Using data from the Digest of Education Statistics (Snyder, Dillow, and Hoffman, 2009) and various National Center for Education Statistics (NCES) surveys, we summarize these trends in Table 1. The table shows that while the application rate to four-year colleges has steadily increased over the last several decades, the decline in cohort size between 1982 and 1992 left the number of applicants practically unchanged between the two years. From 1992 to 2004, on the other hand, the number of applicants to four-year colleges grew from 1.19 million to 1.71 million students, an increase of 44 percent, as rising application rates and growing cohort size reinforced each other. The pattern was slightly different for selective private and public colleges, which saw the number of applicants rise by 10 to 15 percent over the 1980s despite declining cohort size. While the application rate to selective privates dipped slightly between 1992 and 2004, the number of applicants still grew by 30,000, or 18 percent, due to growing cohort size.

[Table 1: Supply and Demand Trends in College-Going (thousands)]

In the face of growing demand, the supply of admission slots at four-year colleges did not keep pace. According to our calculations using data from the Annual Survey of Colleges, a near-census of four-year postsecondary institutions in the United States conducted by the College Board, the top 20 private universities and top 20 liberal arts colleges saw only a 0.7 percent change in average undergraduate enrollment from 1986 to 2003. Those ranked 21 to 50 also experienced relatively little growth (4.9 percent and 6.8 percent at private universities and liberal arts colleges, respectively). In contrast, other private four-year institutions grew nearly 16 percent during the period. Public institutions showed more expansion during this period, with enrollments increasing 15.2 percent at the top 20 public universities, 10.5 percent at public universities ranked 21 to 47, and 12.8 percent at other public institutions. This increase in enrollment at the most selective public institutions appears largely driven by transfer students, many assumed to be from public two-year colleges. However, when focusing on the sizes of the incoming freshmen classes, the change in enrollment at public institutions has been much smaller. Because fewer than 500,000 slots were added in total at four-year schools from 1992 to 2004, supply did not keep pace with demand, and college selectivity increased. High school seniors today are subject to more competition than at any time in the recent past.

The increased overall demand for a college education, presumably, can largely be explained by the dramatic increases in the value of such an education since the 1970s (Heckman, Lochner, and Todd, 2006; Goldin and Katz, 2008). The increased demand for admission to selective schools in particular is plausibly related to the fact that the particular institution a student attends has become increasingly important.
Since 1970, income distribution has widened among college-educated workers, and Hoxby and Long (1999) find that nearly half of the explained growth in this dispersion is due to the increasing concentration of peer and financial resources at more selective colleges and universities relative to other institutions. Other work has also documented this increasing segmentation within higher education (Hoxby, 1997, this issue; Bound, Lovenheim, and Turner, 2008). The spread of information through the advent of the U.S. News and World Report and other ranking systems has also given students, their families, and society more data with which to evaluate college quality. As emphasized by Hoxby in this issue, the college market has shifted from regional in focus to national. Also, as more workers are college educated, employers may view the average college-educated worker as less productive than in the past. Under this signaling type of framework, a degree from an elite college becomes more valuable. All of these factors likely play a role in increasing the number of high school graduates who consider elite colleges.

This paper begins by documenting the trends of increasing competition in higher education, including how these increases have varied across groups, from the perspective of both institutions and students. It then explores the ways in which this phenomenon has influenced student behavior, in terms of academic preparation and high school activities, standardized test-taking, and college application behavior. Evidence from multiple sources suggests that a significant fraction of students are increasingly searching for ways to maximize their likelihood of admittance into a selective institution. As theory would predict, students have been driven to invest more in signals of ability and to raise their qualifications with the hope of increasing their chances of gaining entry into a selective institution. Increased competition has also driven students to alter their approaches to the college application. The extent of student reactions has differed along the ability distribution and by region, as the returns to such investments and changes in application approaches also vary by student.

Finally, the paper explores whether such student reactions to growing competition have translated into longer-term effects on the amount that students learn. From a theoretical point of view, the increased competition could have induced high school students to work harder and learn more or, alternatively, could have led to the reverse by prompting investments in nonproductive signals. Credible evidence on the net effect of increased competition is, needless to say, difficult to find. However, comparisons across regions of the country where competition is more versus less severe provide little evidence that increased competition has had positive effects on what students learn, and even provide some suggestive evidence that the reverse might be true.

Journal ArticleDOI
TL;DR: The authors introduce the American Legal Realism, a jurisprudential movement of lawyers, judges, and law professors that flourished in the early 20th century, and develop a perspective on judging that can usefully be understood as the modern manifestation of American legal realism.
Abstract: Economists have made great progress in understanding the incentives and behavior of actors who operate outside of traditional economic markets, including voters, legislators, and bureaucrats. The incentives and behavior of judges, however, remain largely opaque. Do judges act as neutral third-party enforcers of substantive decisions made by others? Are judges “ordinary” policymakers who advance whatever outcomes they favor without any special consideration for law as such? Emerging recent scholarship has started to explore more nuanced conceptions of how law, facts, and judicial preferences may interact to influence judicial decisions. This work develops a perspective on judging that can usefully be understood as the modern manifestation of American Legal Realism, a jurisprudential movement of lawyers, judges, and law professors that flourished in the early twentieth century. The purpose of this essay is to introduce, in simplified form, the Realist account of judicial decision making; to contras...

Journal ArticleDOI
TL;DR: For more than a century, diversified long-horizon investors in America's stock market have invariably received much higher returns than investors in bonds: a return gap averaging some six percent per year that Rajnish Mehra and Edward Prescott (1985) labeled the "equity premium puzzle," as discussed by the authors.
Abstract: (Introduction, initial paragraphs) For more than a century, diversified long-horizon investors in America’s stock market have invariably received much higher returns than investors in bonds: a return gap averaging some six percent per year that Rajnish Mehra and Edward Prescott (1985) labeled the “equity premium puzzle.” The existence of this equity return premium has been known for generations: more than eighty years ago financial analyst Edgar L. Smith (1924) publicized the fact that long-horizon investors in diversified equities got a very good deal relative to investors in debt: consistently higher long-run average returns with less risk. It was true, Smith wrote three generations ago, that each individual company’s stock was very risky: “subject to the temporary hazard of hard times, and [to the hazard of] a radical change in the arts or of poor corporate management.” But these risks could be managed via diversification across stocks: “effectively eliminated through the application of the same principles which make the writing of fire and life insurance policies profitable.” Edgar L. Smith was right. Common stocks have consistently been extremely attractive as long-term investments. Over the half century before Smith wrote, the Cowles Commission index of American stock prices deflated by consumer prices shows an average real return on equities of 6.5 percent per year—compared to an average real long-term government bond return of 3.6 percent and an average real bill return of 4.5 percent. Since the start of the twentieth century, the Cowles Commission index linked to the Standard and Poor’s Composite shows an average real equity return of 6.0 percent per year, compared to a real bill return of 1.6 percent per year and a real long-term government bond return of 1.8 percent per year. Since World War II, equity returns have averaged 6.9 percent per year, bill returns 1.4 percent per year, and bond returns 1.1 percent per year. Similar gaps between stock and bond and bill returns have typically existed in other economies. Mehra (2003) reports an annual equity return premium of 4.6 percent in post-World War II Britain, 3.3 percent in Japan since 1970, and 6.6 percent and 6.3 percent respectively in Germany and Britain since the mid-1970s.
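
To see how large the quoted gap is once compounded, here is a quick check using the post-1900 real returns cited above (6.0 percent for equities, 1.8 percent for long-term bonds); the holding horizons are our own choice.

```python
# Compounding the return gap quoted in the abstract: $1 held in stocks
# versus $1 held in long-term bonds, at the cited post-1900 real returns.
equity_real, bond_real = 0.060, 0.018
for years in (25, 50, 100):
    stocks = (1 + equity_real) ** years
    bonds = (1 + bond_real) ** years
    print(f"{years:>3} yrs: $1 in stocks -> ${stocks:8.2f}, "
          f"$1 in bonds -> ${bonds:6.2f}, ratio {stocks / bonds:6.1f}x")
```

Over 50 years the stock investor ends with roughly seven to eight times the bond investor's wealth, which is what makes the premium a puzzle rather than a curiosity.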

Journal ArticleDOI
TL;DR: In this paper, the authors describe the contracts between private equity funds and investors and the returns earned by investors, and explore one potential answer (and probably the most controversial): that some investors are fooled.
Abstract: As a step towards understanding whether a private equity governance structure reduces overall agency conflicts relative to a public equity governance structure (as is often argued), this paper describes the contracts between private equity funds and investors, and the returns earned by investors. The paper sets the stage with a puzzle: the average performance of private equity funds is above that of the Standard and Poor's 500 - the main public stock market index - before fees are charged, but below that benchmark after fees are charged. Why are the payments to private equity buyout funds so large? Why does the marginal investor invest in buyout funds? I explore one potential answer (and probably the most controversial): that some investors are fooled. I show that the fee contracts for these funds are opaque. Considering this, and the way that compensation contracts bury in their details costly provisions that are difficult to justify on the basis of proper incentive alignment, it would be premature to assert that the agency conflicts are lower in private equity than in public equity.

Journal ArticleDOI
TL;DR: The principal force behind the many changes in household finances during the past several decades has been an expansion of financial opportunities, as discussed by the authors; such opportunities can yield benefits in terms of household economic security.
Abstract: The principal force behind the many changes in household finances during the past several decades has been an expansion of financial opportunities. More elaborate tools for assessing and pricing risk, increased lending to households without strong collateral, and technologies that allow households to access a wide array of investment opportunities more easily have all enabled more people to engage in more financial activities. The shift in employer-based retirement benefits from defined benefit plans toward defined contribution plans has also given many households more direct control of their finances. Such opportunities can yield benefits in terms of household economic security. The democratization of credit and development of new lending approaches increased the options for families looking to borrow against future income or accumulated home equity in order to enjoy a smoother path of consumption. Indeed, a wide range of indicators showed significantly less aggregate economic volatility between the early 1980s and mid-2000s than during the preceding two decades, a phenomenon linked by some researchers (including me) to this type of financial innovation. New financial opportunities also allowed households to choose to take more risks in pursuit of higher expected utility, an important reminder that reducing risk is not always good and increasing risk is not always bad. However, the financial crisis that began in 2007 has powerfully illustrated that expanded financial opportunities can also pose dangers for households. By increasing the scope for investment in risky assets, people may end up with larger swings