
Showing papers in "Economic Inquiry in 2002"


Journal ArticleDOI
TL;DR: This paper examines the relationship between disaster risk and long-run economic growth in a macroeconomic framework, arguing that although a higher probability of capital destruction due to natural disasters may reduce physical capital investment, disasters also provide an opportunity to update the capital stock and can raise the relative return to human capital.
Abstract: I. INTRODUCTION Risks to life and property exist, in varying degrees, in every country of the world. Numerous studies on the relationship between risk and expected losses and economic decisions are available and generally widely known, (1) but to our knowledge there are no empirical studies that evaluate the effects of natural hazards on long-run economic growth in a macroeconomic framework. (2) Despite the vast empirical literature that examines the linkages between long-run average growth rates, economic policies, and political and institutional factors, the relationship between disaster risk and long-run growth has not been empirically examined. There is, however, a body of research that has examined the effects of natural disasters on economic variables in the short run. Tol and Leek (1999) provide a summary of the recent studies that assess the immediate repercussions of natural disasters on economic activity. The empirical findings in this literature (Albala-Bertrand, 1993; Dacy and Kunreuther, 1969; Otero and Marti, 1995) report that gross domestic product (GDP) is generally found to increase in the periods immediately following a natural disaster. This result arises because most of the damage caused by disasters is reflected in the loss of capital and durable goods. Because losses to the stock of capital are not subtracted from GDP while the spending to replace them is counted, GDP increases in periods immediately following a natural disaster. Our article extends the short-run analysis by examining the possible linkages among disasters, investment decisions, total factor productivity, and long-run economic growth. Because disaster risks differ substantially from country to country, it is reasonable to question whether there exists some relationship between disasters and long-run macroeconomic activity. On cursory examination, one might conclude that a higher probability of capital destruction due to natural disasters reduces physical capital investment and therefore curtails long-run growth. However, such analysis is only partial and may be misleading. Disaster risk may reduce physical capital investment, but disasters also provide an opportunity to update the capital stock, thus encouraging the adoption of new technologies. Furthermore, an endogenous growth framework also suggests that disaster risk could potentially lead to higher rates of growth. In this type of model, individuals invest in physical and human capital, but there is a positive externality associated with human capital accumulation. If disasters reduce the expected return to physical capital, then there is a correspondingly higher relative return to human capital. The higher relative return to human capital may lead to an increased emphasis on human capital investment, which may have a positive effect on growth. We present some initial evidence regarding the relationship between disasters and economic growth in Figures 1 through 4. These figures show the simple relationship between the number of natural disasters and long-run economic growth using a sample of 89 countries. The vertical axis represents the average annual growth rate of per capita GDP over the 1960-90 period. Data on per capita GDP are taken from Summers and Heston (1994). Along the horizontal axes are four different measures of the propensity for natural disasters. The disaster data in Figures 1 and 3 are historical information from Davis (1992) covering 190 years of the world's worst recorded natural disasters.
Figures 2 and 4 represent more current and detailed information on natural disaster events for the period 1960 through 1990 from the Center for Research on the Epidemiology of Disasters (CRED) (EMDAT, 2000). Figures 1 and 2 show the natural log of one plus the total number of disaster events from Davis and CRED, respectively. (3) However, because larger countries may be subject to more disasters, we present the natural log of one plus the number of disasters normalized by land area from Davis and CRED in Figures 3 and 4. …
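The sketch below illustrates how the disaster-propensity measures plotted on the horizontal axes can be constructed from raw counts; the country records and values are hypothetical, not the Davis or CRED data.

```python
import math

# Hypothetical country records: a disaster count (as in Davis or CRED) and land area.
countries = [
    {"name": "A", "disasters": 12, "area_km2": 300_000},
    {"name": "B", "disasters": 3, "area_km2": 1_200_000},
]

for c in countries:
    # Figures 1 and 2: natural log of one plus the total number of disaster events.
    c["log_events"] = math.log(1 + c["disasters"])
    # Figures 3 and 4: the same transformation applied to events normalized by land area.
    c["log_events_per_area"] = math.log(1 + c["disasters"] / c["area_km2"])

print(countries)
```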

900 citations


Journal ArticleDOI
TL;DR: A central contribution of public choice theory to the analysis of government activity is in viewing the activities of government, not as determined by some single altruistic dictator, but rather as the result of a process involving individual political agents who react to the incentives they face as discussed by the authors.
Abstract: A central contribution of public choice theory to the analysis of government activity is in viewing the activities of government, not as determined by some single altruistic dictator, but rather as the result of a process involving individual political agents who react to the incentives they face. Federal disaster relief, administered by the Federal Emergency Management Agency (FEMA), is one activity that is ripe for political influence due to the process of disaster declaration and relief. After a disaster strikes a particular state, the governor makes a request to the president for disaster assistance. Following a governor's request, the president then decides whether to declare the state or region a disaster area. Only after a disaster has been declared by the president can disaster relief be given. FEMA is in charge of determining the level of relief funding for the area, but additional appropriations are determined by Congress in cases requiring large amounts of funding beyond FEMA's allocated budget. The Act that governs the rules of federal disaster declaration and expenditures gives the president the authority to declare a disaster without the approval of Congress.

372 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an empirical investigation of the popular "political repression boosts FDI" hypothesis and arrive at the conclusion that the hypothesis is not supported and that multinational enterprises rather appear to be attracted by countries in which civil and political freedom is respected.
Abstract: Multinational enterprises are often accused of having a preference for investing in countries in which the working populations' civil and political rights are largely disregarded. This article presents an empirical investigation of the popular “political repression boosts FDI” hypothesis and arrives at the conclusion that the hypothesis is not supported. On the contrary, multinational enterprises appear instead to be attracted to countries in which civil and political freedom is respected.

303 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined person-to-person transactions within the Internet market known as eBay and found that eBay transactions exhibit characteristics similar to transactions in more conventional markets, namely, prices are higher when there is less quantity supplied (when fewer of the items are available the same day), prices are lower during periods of lower demand (times less likely to have high traffic), and sellers with higher shipping and handling costs receive lower prices, and sellers failing to provide information about shipping/handling fees (i.e., larger information asymmetries) receive fewer bids.
Abstract: I. INTRODUCTION Without transmission of credible information, asymmetries may lead to underproduction of goods or even market failure. Reputation mitigates inefficiencies associated with information asymmetries by providing an informative signal of quality. (1) The difficulty in quantifying reputation means that few studies can analyze empirically the role of reputation in markets. Our analysis of a quantified, market-observed measure of reputation provides direct evidence of the effect of a seller's reputation on the terms of a one-time real-world transaction, thereby contributing empirical support to a fundamental economic principle. This study empirically examines person-to-person transactions within the Internet market known as eBay. In this virtual market of unseen participants and products, buyers and sellers face the risks of repudiation, as the counterparty may deny the agreement after the fact. Buyers assume risks associated with lack of seller integrity and asymmetric information about the particular product, as the buyer is typically required to send payment before the seller ships the product. In addition, for many eBay transactions, the cost of enforcing a contract is high relative to the transaction's value, resulting in a practical absence of legal enforcement. (2) By providing a history of trade execution information, eBay benefits market participants by reducing information asymmetries while achieving substantial transaction cost economies. Market participants relate personal experiences, which eBay uses to calculate a numerical reputation measure for each user. Market participants, in turn, can use this reputation measure to assess counterparty risk and adjust bidding behavior accordingly. In a sample of 460 auctions held between January 1998 and July 1998, we find a positive relation between prices and eBay's reputation measure. Higher-reputation sellers experience higher auction prices, ceteris paribus. Our findings suggest that repeat players are rewarded for building reputation. Consistent with the belief that the high-reputation seller's value of future transactions outweighs the value of taking advantage of the buyer in the current transaction, buyers are willing to pay more to a higher-reputation seller. This article contributes to the literature not only by providing quantitative support of long-accepted reputation theories but also by illustrating the use of nontraditional markets as a natural laboratory for experiments. This article is an example of how a newly formed electronic market can provide the elements necessary for analytical research. We find that eBay transactions for this item exhibit characteristics similar to transactions in more conventional markets, namely (1) prices are higher when there is less quantity supplied (when fewer of the items are available the same day), (2) prices are lower during periods of lower demand (times less likely to have high traffic), (3) sellers with higher shipping and handling costs receive lower prices, and (4) sellers failing to provide information about shipping and handling fees (i.e., larger information asymmetries) receive fewer bids. Dramatic innovations in online market structure and increasing availability of online market data should enable researchers to examine directly other traditionally non-quantifiable economic ideas. This article is organized as follows. Section II describes how reputation can be used to facilitate transactions in the presence of asymmetric information.
Section III describes the eBay market, summarizes the listing and bidding processes, and discusses the reputation mechanism for this market. Section IV presents the price and reputation descriptive statistics associated with a consistently auctioned item and reports the empirical findings of how this item's highest bid price varies with the level of the seller's reputation. We conclude in section V with a discussion of eBay's continued attempts to add value to the market through recent structural changes. …
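A stylized sketch of the kind of price regression the abstract describes, relating winning prices to the seller's reputation measure and to supply and shipping-cost controls. The data are simulated and the variable names invented; this is not the authors' specification or sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 460  # mirrors the abstract's sample size, but the records below are simulated

reputation = rng.poisson(40, n)             # seller's numerical reputation measure
shipping = rng.uniform(2, 8, n)             # shipping and handling charge
same_day_listings = rng.integers(1, 6, n)   # identical items available the same day
price = (50 + 2.0 * np.log1p(reputation) - 0.8 * shipping
         - 1.5 * same_day_listings + rng.normal(0, 3, n))

# OLS of price on a constant, log(1 + reputation), shipping cost, and same-day supply
X = np.column_stack([np.ones(n), np.log1p(reputation), shipping, same_day_listings])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
print(dict(zip(["const", "log_reputation", "shipping", "same_day"], coef.round(2))))
```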

206 citations


Journal ArticleDOI
TL;DR: The authors investigated the influence of source country school quality on the returns to education of immigrants and found that immigrants from Japan and northern Europe receive high returns and immigrants from Central America receive low returns.
Abstract: Using the U.S. labor market as a common point of reference, this article investigates the influence of source country school quality on the returns to education of immigrants. Based on 1980 and 1990 census data, we first estimate country-of-origin specific returns to education. Results reveal that immigrants from Japan and northern Europe receive high returns and immigrants from Central America receive low returns. Next we examine the relationship between school quality measures and these returns. Holding per capita GDP and other factors constant, immigrants from countries with lower pupil-teacher ratios and greater expenditures per pupil earn higher returns to education.

154 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a model of lotto demand that focuses on the maximum possible prize, test it against the traditional effective-price model using data from the U.K. National Lottery, and find that jackpot considerations exert an influence over and above that of variations in effective price.
Abstract: Existing lotto demand models utilize effective price, computed as the face value of a ticket minus the expected value of prize money per ticket, as their primary explanatory variable. By contrast, this article proposes a key role for consumption benefit or "fun" in the demand for gambling in general and lotto demand in particular. It develops an alternative model of lotto demand that focuses on the maximum possible prize. When this is tested against the traditional model using data from the U.K. National Lottery, we find that jackpot considerations exert an influence over and above that of variations in effective price.
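As a small illustration of the two candidate explanatory variables, the sketch below computes the effective price of a ticket and contrasts it with the jackpot-based measure; the ticket price, payout rate, and jackpot are invented numbers, not U.K. National Lottery figures.

```python
# Effective price = face value of a ticket minus the expected value of prize money per ticket.
# The alternative model instead focuses on the maximum possible prize (the jackpot).
def effective_price(face_value, total_prize_fund, tickets_sold):
    expected_prize_per_ticket = total_prize_fund / tickets_sold
    return face_value - expected_prize_per_ticket

face_value = 1.00                              # hypothetical ticket price
tickets_sold = 65_000_000
prize_fund = 0.45 * face_value * tickets_sold  # assumed 45% payout rate
jackpot = 10_000_000                           # maximum possible prize, the rival regressor

print(effective_price(face_value, prize_fund, tickets_sold))  # 0.55
print(jackpot)
```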

134 citations


Journal ArticleDOI
TL;DR: Building on the Hamilton (1989) Markov switching model, the authors examine whether monetary policy has the same effect in expansions and recessions and provide evidence that, as suggested by models with sticky prices or finance constraints, interest rate changes have larger effects during recessions.
Abstract: HUNTLEY SCHALLER (*) By building on the Hamilton (1989) Markov switching model, we examine questions like: Does monetary policy have the same effect in expansions and recessions? Given that the economy is currently in a recession, does a fall in interest rates increase the probability of an expansion? Does monetary policy have an incremental effect on the growth rate within a given state, or does it only affect the economy if it is sufficiently strong to induce a state change (e.g., from recession to expansion)? As suggested by models with sticky prices or finance constraints, interest rate changes have larger effects during recessions. (JEL E52, E32) Much of the recent work [in macroeconomics] has proceeded ... under the assumption that variables follow linear stochastic processes with constant coefficients. ... [As a result] some of the richness of the Burns-Mitchell analysis, such as its focus on asymmetries between recessions and expansions ... may well have been lost. --Blanchard and Fischer (1989, 7) I. INTRODUCTION For decades macroeconomists have debated whether monetary policy has the same effect on real output in expansions and recessions. As far back as the 1930s, Keynes and Pigou debated whether monetary policy would have less effect on output during a severe economic downturn. In the 1960s, there were active debates on a very different proposition, namely, whether the rightward portion of the aggregate supply curve was vertical, so that monetary policy would have less effect on real output during expansions. In this article, we provide a new type of evidence on whether monetary policy has different effects depending on whether the economy is in an expansion or recession. Empirical evidence on this issue is particularly relevant in light of new theoretical work in macroeconomics that predicts asymmetric effects of demand shocks conditional on the state of the economy. Two examples of this work are S-s-type models of price adjustment and models in which there are agency costs of financial intermediation. (1) The intuition for the latter class of models is simple. (2) When there is information asymmetry in financial markets, agents may behave as if they were constrained. For a variety of reasons, these finance constraints are more likely to bind during recessions when the net worth of agents is low. An increase in interest rates will then have two effects on investment: the standard effect of increasing the cost of capital and therefore reducing investment demand and an additional effect of reducing liquidity (e.g., by increasing debt service obligations) and thus reducing investment demand for constrained agents. As a result, monetary policy actions that change interest rates will have greater effects during a recession. (3) The S-s-type price adjustment models of Ball and Mankiw (1994), Caballero and Engel (1992), and Tsiddon (1993) lead to a convex aggregate supply curve and therefore also imply that monetary policy will have stronger effects during recessions. One of the more frequently cited empirical papers on the potentially asymmetric effects of monetary policy is Cover (1992), which finds evidence that positive monetary shocks have different effects from negative monetary shocks. We are looking at a different type of asymmetry--namely, between booms and recessions. We study asymmetries using an extension of the Markov switching model developed by Hamilton (1989), estimated over the period 1955-93. 
In Hamilton's econometric specification, the growth rate of output depends on a state variable that corresponds to an expansion or recession. This approach has several advantages. First, unlike linear projections, it allows for nonlinearities and asymmetries. Second, in estimating the recession coefficients, it gives greater relative weight to observations that most clearly correspond to recessions (and similarly for the expansion coefficients). …
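A minimal simulation of a two-state Markov switching growth process of the kind the article builds on, with a state-dependent effect of interest-rate changes. The transition probabilities and coefficients are illustrative values, not estimates from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-state Markov switching process in the spirit of Hamilton (1989): output growth
# has a state-dependent mean, and an interest-rate change has a state-dependent effect.
P = np.array([[0.90, 0.10],    # expansion -> {expansion, recession}
              [0.25, 0.75]])   # recession -> {expansion, recession}
mu = np.array([0.8, -0.4])     # mean growth in expansion / recession (illustrative)
beta = np.array([-0.1, -0.3])  # effect of a rate increase, larger (more negative) in recession

T = 200
state = 0
growth = np.empty(T)
d_rate = rng.normal(0, 1, T)   # hypothetical interest-rate changes
for t in range(T):
    state = rng.choice(2, p=P[state])                       # draw next regime
    growth[t] = mu[state] + beta[state] * d_rate[t] + rng.normal(0, 0.5)

print(growth[:5].round(2))
```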

128 citations


Journal ArticleDOI
TL;DR: This paper ranked core journals in economics using the textbook citation method and found that the top nine journals or core journals from this study correlate closely in rank with the results of two comparison studies.
Abstract: GAINES H. LINER (*) This article ranks core journals in economics using the textbook citation method. Rankings are produced from citations in graduate-level microeconomics, macroeconomics, and econometrics textbooks. Textbooks used in the study were chosen through responses from a survey of professors in top-tier economics departments. The top nine journals or core journals from this study correlate closely in rank with the results of two comparison studies. Second-tier journals identified in this study correlate less closely in rank with second-tier journals in comparison studies. (JEL A) I. INTRODUCTION Between 1970 and 1990 the number of published pages in economics journals more than doubled (Laband and Piette 1994), and during the early 1970s budgets of libraries devoted to journals grew, in percentage terms, several-fold relative to those devoted to books (Liebowitz and Palmer 1984). Laband and Piette (1994) reported that between 1976 and 1985 at least 55 new economics journals commenced publication. Although not all journals are considered general interest journals, many emphasize the importance of economic theory in their submission instructions to authors. With this increased emphasis on research, scholars and administrators alike have acquired an increased sensitivity to how journals rank in quality, influence, or impact. The variety of methods used to rank journals, departments, and authors attests to the increased concern individuals place on such issues. The Social Sciences Citation Index (SSCI; Institute for Scientific Information 1986) has figured prominently as a source of data in a variety of studies. Burton and Phimister (1995), Laband and Piette (1994), Liebowitz and Palmer (1984), and Stigler et al. (1995) provide recent examples. Another approach occasionally used in ranking of economics departments involves choosing citation data from the "premier," "core," or "principal" journals. Departmental rankings are then based on the number of journal articles or pages published by members of their faculty in these journals. Graves et al. (1982), Burkitt and Baimbridge (1995), Conroy et al. (1995), Scott and Mitias (1996), and Dusansky and Vernon (1998) have used variations of this approach. Others have surveyed deans and chairpersons for their own rankings of journals. See Enomoto and Ghosh (1993) for one recent example. A potential weakness of this approach is that survey respondents can be influenced by recent and not-so-recent literature on the subject and by each other. Some earlier studies have attempted to rank journals by their quality. See Liebowitz and Palmer (1984) for a review and Beed and Beed (1996) for a criticism of attempts to measure journal quality. Because an objective ranking of journals by their quality is difficult if not impossible, most recent studies mentioned above have attempted to rank journals based on their "impact" or their "influence." Ranking the number of citations to journal articles in other journal articles provides only one path to measuring the potential impact a journal has on individuals and the profession and ultimately the broader community. Davis (1998) pointed out that the often-used SSCI data underrepresent journals that share content with other social sciences. A second problem noted is that "not all of any journal's citations are identified by citing journal. In every case the list is truncated, and some portion of a journal's citations are simply entered as 'all other'" (Davis, 1998, 62).
Moreover, the percentage of citations in the "all other" category is not necessarily the same from journal to journal. This might bias raw counts of citations as well as attempts to adjust for "impact" by adjusting for the number of pages or the number of characters in the cited journals. However, these biases tend to have less impact on rankings of core journals than on rankings of less highly ranked journals. In attempts to rank authors of journal articles using SSCI data, Alexander and Mabry (1994) noted that the SSCI does not include in the counts the second and subsequent authors of multiple-author articles. …

99 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used matched March Current Population Surveys (CPSs) to estimate the effects of minimum wages on the probabilities of various transitions in the family income distribution, such as transitions into and out of poverty.
Abstract: I. INTRODUCTION One of the most compelling rationales for a national minimum wage floor is to raise the incomes of poor or near-poor families with members in the workforce. This general point, however, is often missed in the debates over the merits of a higher minimum wage. In contrast to these oft-stated distributional goals, much of the focus in such debates has been on the employment effects of minimum wages--especially among the teenage population. In large part, this focus is probably attributable to the extensive body of economic research on the effects of minimum wages on employment of low-skilled workers. However, although negative employment effects represent a cost of minimum wages, such costs do not necessarily imply that minimum wages constitute bad social policy. In particular, the employment losses associated with a higher minimum wage may be more than offset by positive effects on low-income families, especially if minimum wages are a significant factor in helping move families out of poverty. (1) This is not to argue that research on employment effects of minimum wages is irrelevant. But such research may be more important as a test of the theory of labor demand and as a method of learning how employers and individuals adjust to exogenous wage increases than as a method of assessing the wisdom of the policy. In addition, we do not mean to suggest that the short-run effects of minimum wages on the incomes of poor families should be the sole criterion for evaluating such policies. Other studies have found evidence suggesting, for example, that minimum wage increases reduce school enrollment rates and training (Neumark and Wascher, 1996a; Hashimoto, 1982), factors that may affect longer-run earnings or earnings growth; these deleterious longer-run effects might offset the benefits of shorter-run effects of minimum wages on family incomes. Nonetheless, our perception is that potential increases in the incomes of poor families provide the main motivation for raising the minimum wage, making it important to assess the evidence on whether minimum wage increases achieve this goal. In this regard, there are two questions that pertain to the influence of minimum wages on family incomes generally and on poverty in particular. First, there is the question of the effects of minimum wages on low-wage workers--that is, do the wage gains received by employed workers more than offset the lost earnings suffered by those who lose or cannot find jobs? (2) Second, there is the question of how minimum wages affect workers in different parts of the family income distribution. Because many (roughly speaking, a large minority of) minimum wage workers are in relatively affluent families (Gramlich, 1976; Card and Krueger, 1995; Burkhauser et al., 1996), which workers gain and which lose will have an important influence on the effects of minimum wages on the distribution of family incomes. In this article we present evidence on the effects of minimum wages on family incomes, focusing in particular--but not solely--on the effectiveness of minimum wages in reducing poverty. Using matched March Current Population Surveys (CPSs), we estimate the effects of minimum wages on the probabilities of various transitions in the family income distribution, such as transitions into and out of poverty. 
Given that federal changes in minimum wages may be confounded with other aggregate-level shocks influencing family income, we rely heavily on state-level changes in minimum wages to identify minimum wage effects. In a nutshell, our empirical strategy is to compare rates of transition through the family income distribution in states in which minimum wages do and do not increase. For example, if poor families are more likely to escape from poverty when minimum wages increase in their states of residence, we would infer that minimum wage hikes help families move out of poverty. On the other hand, if transitions into poverty are more common when minimum wages increase, we would infer that the disemployment effects of minimum wages play a dominant role among the low-income population. …
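A stylized sketch of the comparison just described: transition rates out of and into poverty computed separately for families in states that did and did not raise their minimum wage. The records are fabricated for illustration and stand in for the matched March CPS files.

```python
import pandas as pd

# Each row is a family observed in two consecutive March CPS files (fabricated data).
matched_cps = pd.DataFrame({
    "poor_year1":  [1, 1, 1, 1, 0, 0, 1, 1, 0, 0],
    "poor_year2":  [0, 1, 0, 1, 0, 1, 1, 0, 0, 1],
    "mw_increase": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],  # state raised its minimum wage
})

for raised, grp in matched_cps.groupby("mw_increase"):
    initially_poor = grp[grp.poor_year1 == 1]
    initially_nonpoor = grp[grp.poor_year1 == 0]
    escape_rate = (initially_poor.poor_year2 == 0).mean()    # transitions out of poverty
    entry_rate = (initially_nonpoor.poor_year2 == 1).mean()  # transitions into poverty
    print(f"minimum wage increase={raised}: escape={escape_rate:.2f}, entry={entry_rate:.2f}")
```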

95 citations


Journal ArticleDOI
TL;DR: Leslie S. Stratton estimates wage models that permit both differential wage growth and selection effects, noting that comparisons of cross-section and panel results suggest that less than 20% of the marital wage differential is attributable to individual-specific components or selectivity.
Abstract: Leslie S. Stratton (*) I. INTRODUCTION Wage analyses almost universally indicate that married men earn more than do single men, even after controlling for observable human capital characteristics. The same appears to be true for cohabiting men. Research has failed as yet to reach a consensus regarding the nature of these differentials. The evidence is consistent with a number of alternative explanations. First, marriage could increase men's market productivity because economies of scale and increased specialization within a multiperson household give them more time and energy to devote to market-related activities. If this is the mechanism driving the marital wage differential, then cohabiting men may also experience a wage boost, albeit a smaller one, as cohabitation is a less stable relationship and less likely to engender specialization. Wages could jump at the start of a relationship, but an increased focus on market-related activities is more likely to cause wages to rise at a faster pace. Alternatively, men who marry or cohabit may be inherently different from men who do not. If men are selected into relationships based on their earnings ability, then cross-section wage analyses will indicate a wage differential for married and cohabiting men, but difference estimation will not. The goal of this analysis is to shed light on the nature of the marital and cohabitation wage premiums for men by estimating wage models that permit both differential wage growth and selection effects. II. LITERATURE REVIEW Evidence of a marital wage premium for men abounds. A thorough literature review is beyond the scope of this article, but incorporation of a marital dummy in wage specifications for men is fairly standard. Hill (1979) provides one much-cited work. Empirical estimates in the range of 10% to 30% are typical. A number of researchers have explored the nature of this wage differential. One explanation focuses on the marital decision itself, the argument being that men who are inherently more productive are more sought-after marriage partners and hence more likely to marry. This is the selection model. Attempts, such as that by Nakosteen and Zimmer (1987), to estimate a model in which marital status and wages are endogenously determined have yielded inconclusive results and are sensitive to identification restrictions. An alternative approach, less subject to specification error, is to estimate a fixed effects specification that eliminates all individual-specific time-invariant characteristics. (1) Researchers Korenman and Neumark (1991), Bartlett and Callahan (1984), Daniel (1991), and Gray (1997) have employed this technique and continue to observe a significant marital wage differential, indicating that selection effects do not explain the entire differential. Gray (1997) finds some evidence that the premium has declined over time, as he does not find a significant marital wage differential using data from the National Longitudinal Survey of Youth (NLSY), the most recent cohort of data tested. However, Daniel (1991) also uses data from the NLSY, and he finds a significant marital wage differential of about the same magnitude as that obtained from older cohorts. Generally speaking, a comparison of cross-section and panel results suggests that less than 20% of the marital wage differential is attributable to individual-specific components or selectivity. If selection alone does not explain the differential, then wages must increase following marriage.
Wages could be higher because they jump or because they rise more rapidly following marriage. Empirical estimates reported by Kenny (1983) using the Coleman-Rossi Retrospective Life Histories Study and by Korenman and Neumark (1991), Loh (1996), and Gray (1997) using the National Longitudinal Study of Young Men suggest that the growth rate of wages increases on marriage. Once again, results from the NLSY are mixed, with Daniel (1991) finding faster wage growth following marriage and Gray (1997) finding no marital wage differential. …
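A stylized contrast between a pooled cross-section estimate of the marital wage premium and a fixed-effects (within) estimate that differences out time-invariant ability, in the spirit of the panel studies cited above. All data-generating values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n, periods = 500, 2
ability = rng.normal(0, 0.3, n)                   # unobserved; raises wages and marriage odds
married_t2 = (rng.random(n) < 0.3 + 0.5 * (ability > 0)).astype(float)
married = np.column_stack([np.zeros(n), married_t2])   # nobody is married in period 1
true_premium = 0.05
log_wage = 2.0 + ability[:, None] + true_premium * married + rng.normal(0, 0.1, (n, periods))

# Pooled cross-section: regress log wage on marital status (absorbs selection on ability)
b_cross = np.polyfit(married.ravel(), log_wage.ravel(), 1)[0]

# Within (fixed-effects) estimator: difference out the individual-specific component
dx = married[:, 1] - married[:, 0]
dy = log_wage[:, 1] - log_wage[:, 0]
b_fe = np.polyfit(dx, dy, 1)[0]

print(round(b_cross, 3), round(b_fe, 3))  # the cross-section estimate exceeds the within estimate
```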

88 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the impact of corruption on an economy with a hierarchical government and show that when the after-tax relative profitability of the formal sector as compared to that of the informal sector is high enough, adding a layer of government increases the total amount of corruption.
Abstract: This article studies the impact of corruption on an economy with a hierarchical government. In particular, we study whether centralizing corruption within the higher level of government increases or decreases the total amount of corruption. We show that when the after-tax relative profitability of the formal sector as compared to that of the informal sector is high enough, adding a layer of government increases the total amount of corruption. By contrast, for high-enough public wages and/or an efficient monitoring technology of the bureaucratic system, centralization of corruption at the top of the government hierarchy redistributes bribe income from the lower level to the upper level. In the process, total corruption is reduced and the formal sector of the economy expands.

Journal ArticleDOI
TL;DR: This article studies the impact of technical change on productivity in a specific activity observed at the micro level, scholarly publishing, examining whether increasing ease of communication has altered scholars' choices about their methods of production and whether those choices yielded changes in the productivity of scholarly activity consistent with increasing access to new communications technologies.
Abstract: I. INTRODUCTION AND MOTIVATION In the past several decades, investment that makes communication easier has increased rapidly in the United States and other developed economies. Since the mid-1980s a combination of technical change and deregulation has also reduced long-distance telephone rates in the United States by 50%, as reported by Allen (1995). Pitney-Bowes (1997) reported that fax usage increased by 20% between 1996 and 1997 alone, and electronic mail (e-mail), unknown before 1985, is ubiquitous today. Popular discussion of a wide range of additional examples of rapidly declining prices and explosive growth of the use of telecommunications is provided by Cairncross (1997). Attempts to measure the impact on aggregate total factor and labor productivity of these supposedly productivity-enhancing investments in broad-reaching technical improvements, such as computing machinery and communications equipment, have not met with great success, as in Morrison (1997), but see Greenan and Mairesse (1996). (1) An alternative to measuring effects on the broader economy is to measure the impact of generalized technical change on productivity in specific activities observed at the micro level. An earlier literature, including Griliches (1958) and Trajtenberg (1989), has clearly traced the effect of specific innovations in raising productivity in specific sectors of the economy. This article expands on that tradition by trying to identify the effect of the recent broad revolution in communications on one activity--scholarly publishing. We propose studying scholarly publishing before and after technical change greatly lowered communication costs. We examine in particular whether increasing ease of communication has altered scholars' choices about their methods of production, and whether those methods yielded changes in the productivity of scholarly activity that are consistent with increasing access to new communications technologies. (2) In section II we discuss a model of the production process in scholarly writing in relation to the cost of communications; in section III we describe the unique data set we assembled to examine the relation between technical change and scholarly productivity. Section IV presents the results of using these data to test the hypotheses that we develop, and section V compares coauthored to solo-authored studies and suggests an explanation for the increased prevalence of the former. Section VI offers a consistent explanation for most of the results. II. A MODEL OF SCHOLARLY PRODUCTION The example that we use in this study of the impact of technology is the nature and outcomes of the choices of co-workers by authors of scholarly publications in economics. The importance of team research has been stressed by a number of authors studying the economics of innovation, including Dasgupta (1988), who also presents a research summary, so that our specific example has broader implications for the study of technical change. Has the decline in the cost of communication altered scholars' choices in a way consistent with these technologies increasing scholarly productivity? In examining scholarly productivity, we focus on research output. In particular, we view scholars as having three production choices: (1) work solo, s; (2) work with close-by coauthor(s), c; or (3) work with distant coauthor(s), d.
In the model in this section, the scholar is assumed to choose a production technology that maximizes his or her scholarly productivity, measured as the quality of the article produced. In section VI we adopt an alternative characterization of a scholar's choices. We assume that the scholar has a wide range of potential research activities to choose among and has perfect knowledge of the productivity P (valued in dollars) of all potential matches $s$, $c_i \in C$, and $d_i \in D$. (3) Each match generates one solo-equivalent article per period. …
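A minimal sketch of the choice problem just described: the scholar knows the dollar-valued productivity P of working solo and of each potential close-by or distant coauthor match, and picks the best option. The names and values are invented.

```python
# Productivity P (in dollars) of each potential match; all numbers are made up.
P_solo = 1.0
P_close = {"c1": 1.3, "c2": 0.9}    # close-by coauthors in C
P_distant = {"d1": 1.6, "d2": 1.1}  # distant coauthors in D

options = {"solo": P_solo, **P_close, **P_distant}
best = max(options, key=options.get)   # each match yields one solo-equivalent article
print(best, options[best])
```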

Journal ArticleDOI
Jonathan W. Leland
TL;DR: In this article, the authors consider a variant of the Discounted Utility model of intertemporal choice under uncertainty, where agents base their decisions on judgments regarding the similarity or dissimilarity of prizes and probabilities across alternatives.
Abstract: I. INTRODUCTION Evidence accumulated over many years reveals the inadequacies of the Expected Utility Hypothesis as a descriptive model of choice under uncertainty. Over a much shorter period of time, evidence has accumulated revealing systematic violations of the standard model of choice over time, the Discounted Utility model. In Rubinstein (1988), Azipurua et al. (1993), and Leland (1994, 1998), choice anomalies under uncertainty occur because agents base their decisions on judgments regarding the similarity or dissimilarity of prizes and probabilities across alternatives. (1) This article specifies conditions under which such a procedure implies analogous violations of the Discounted Utility model in intertemporal settings. II. CHOICE ANOMALIES UNDER UNCERTAINTY AND OVER TIME Axioms assumed in models of choice place restrictions on what agents can choose across different pairs of alternatives. The independence axiom, for example, requires that for risky or riskless options $L_1$, $L_2$, and $L_3$, if $L_1$ is weakly preferred to $L_2$, then the lottery $\{L_1, p; L_3, 1-p\}$ must be weakly preferred to the lottery $\{L_2, p; L_3, 1-p\}$ for any $p$. One consequence of this requirement is that preferences between simple lotteries $\{\$x_1, p_1; \$0, 1-p_1\}$ and $\{\$x_2, p_2; \$0, 1-p_2\}$ must be invariant to changes in the values of $p_1$ and $p_2$ that leave their ratio undisturbed. In choices between S and R and between S' and R', for example, independence requires either the choice of S and S' or the choice of R and R'.

S: {$3000, 0.90; $0, 0.10}    R: {$6000, 0.45; $0, 0.55}
S': {$3000, 0.02; $0, 0.98}   R': {$6000, 0.01; $0, 0.99}

The stationarity assumption of the Discounted Utility model of intertemporal choice places restrictions on how agents can choose between pairs of intertemporal prospects. Consider simple intertemporal prospects $T_j$ and $T_k$, shown below, where $T_j$ offers an increment to consumption $x_j$ in time period $t_j$ and $T_k$ offers an increment to consumption $x_k$ in time period $t_k$.

$T_j$: $\{x_j, t_j\}$    $T_k$: $\{x_k, t_k\}$

Assuming, for simplicity, a common baseline level of consumption per period, $c$, agents deciding between these options according to the Discounted Utility model will choose as follows, where $U(\cdot)$ is a concave, ratio-scaled utility function, $\delta$ is the one-period discount factor, and $\succ$ and $\sim$ denote strict preference and indifference, respectively: (2)

(1) $T_j \succ (\sim)\ T_k$ iff $\delta^{t_j} U(c + x_j) + \delta^{t_k} U(c) > (=)\ \delta^{t_j} U(c) + \delta^{t_k} U(c + x_k)$.

Dividing through by $\delta^{t_j}$ and rearranging terms yields the following expression:

(2) $T_j \succ (\sim)\ T_k$ iff $U(c + x_j) - U(c) > (=)\ \delta^{t_k - t_j}\,[U(c + x_k) - U(c)]$.

Expression 2 reveals that the only way discounting enters into the decision is through the absolute difference in the time periods. As such, agents given choices between $T_1$ and $T_2$ and between $T_{11}$ and $T_{12}$, shown below, must either select $T_1$ and $T_{11}$ or $T_2$ and $T_{12}$ since the absolute time interval in both choices is identically 1 period.

$T_1$: {$20, 1 month}      $T_2$: {$25, 2 months}
$T_{11}$: {$20, 11 months}   $T_{12}$: {$25, 12 months}

Neither the restrictions implied by the independence axiom nor those following from stationarity hold empirically.
Instead, regarding the independence axiom, Kahneman and Tversky (1979), among others, find that individuals choosing the safer option S over R nevertheless choose the riskier option R' over S' as the difference between probabilities declines, their ratio held constant. This phenomenon is referred to as the common ratio effect. They also find that for lotteries involving losses, the opposite pattern, RS', obtains. This phenomenon is referred to as the reflection effect. …
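To make the stationarity restriction concrete, the sketch below evaluates the two choice pairs from the abstract under a Discounted Utility decision rule; the square-root utility function, baseline consumption, and discount factor are illustrative assumptions, not values from the article.

```python
# Discounted Utility evaluation of the choice pairs from the abstract.
def du_gain(x, t, c=100.0, delta=0.99):
    """Discounted gain delta**t * (U(c + x) - U(c)) from receiving x in month t."""
    u = lambda w: w ** 0.5  # a concave, ratio-scaled utility function (assumed)
    return delta ** t * (u(c + x) - u(c))

pairs = [(("T_1", 20, 1), ("T_2", 25, 2)),
         (("T_11", 20, 11), ("T_12", 25, 12))]
for (name_a, x_a, t_a), (name_b, x_b, t_b) in pairs:
    choice = name_a if du_gain(x_a, t_a) > du_gain(x_b, t_b) else name_b
    print(f"{name_a} vs {name_b}: choose {choice}")
# Because only the one-month gap between the dates matters under stationarity,
# the same option (earlier-smaller or later-larger) is chosen in both pairs
# for any fixed U, c, and delta.
```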

Journal ArticleDOI
TL;DR: In this article, the authors evaluate the short-run and long-run factors that influence customer price response to RTPs and find that consumers are more likely to opt for RTP.
Abstract: I. INTRODUCTION With the emergence of competition in the electric industry, wholesale prices are proving highly volatile. During the summers of 1998 and 1999, prices in the Midwest soared to $7000 or more per megawatt-hour compared to a typical summer price of $30-$50. One factor contributing to this volatility is that relatively few retail customers pay real-time prices (RTPs) that vary with changing supply and demand conditions. As a result, retail use is not deterred by the spikes in wholesale price, exacerbating wholesale volatility. Without providing an opportunity for retail response, there will be pressure on policy makers to curb wholesale volatility with such measures as the price caps approved by the Federal Energy Regulatory Commission in New York and California as a result of high electricity prices during 2000. Increasingly, electric utilities offer their largest industrial and commercial customers hourly prices that vary with changes in real-time supply and demand. (1) The purpose of this article is to estimate and evaluate the short-run and long-run factors that influence customer price response to RTPs. To date, there have been relatively few studies of response to RTP rates, and those have been largely limited to short-run estimates. This study advances the research by evaluating how customer response to RTP changes with experience on the rates. In addition, there is consideration of the sensitivity of price response to changes in the levels of both prices and temperatures. Finally, there is an examination of customer characteristics, particularly on-site generation, that provide flexibility in responding to price changes. The data include 110 industrial customers served by Duke Power, a division of Duke Energy Corporation. Some of the participating customers have been on RTP rates for as long as six years. Utilities need information on short-run response to know the magnitude of demand relief that price responsiveness may provide when capacity is tight, as well as to determine unit commitment and spinning reserve. They need long-run response to plan additions to capacity. Knowledge of individual customer response can help utilities target those customers who are most likely to opt for RTP. Responsive customers may include those with interruptible (batch) production processes and with on-site generation. In addition to utilities, policy makers need to know price response to determine the role of RTP in achieving efficient electric resource use. (2) This study adopts the modeling and estimation procedures set forth by Herriges et al. (1993) and King and Shatrawka (1994). Both employed a nested constant elasticity of substitution (CES) functional form. Herriges et al. (1993) estimated response to an RTP rate offered by the Niagara Mohawk Company. At the time, there were only 15 customers with usable data. One strength of the study was the availability of a control group on non-RTP rates. King and Shatrawka (1994) employed a much larger sample of 150 customers served by the Midlands Electric Company (Great Britain) but did not have a control group. Both studies provided short-run estimates only. Patrick and Wolak (1997) also used Midlands data to estimate demand elasticities from a Generalized McFadden functional form. Though their findings appear to be consistent with the other studies, it is difficult to compare their results directly given the differing functional form.
(3) Their study used several years of data and did include temperature but provided only short-run estimates. Additionally, there were no consistent findings regarding temperature. Their study proposed the analysis of customer learning for future work. (4) O'Sheasy (1997) and Gupta and Danielson (1998) have raised the possibility that price response depends on the level of price. O'Sheasy (1997) suggests a demand curve with vertical segments over certain price ranges. …

Journal ArticleDOI
TL;DR: In this paper, the authors examine field data from 214 multiunit sports card auctions carried out in an active marketplace and find that predicted strategic bidding is more pronounced for higher-valued cards and for dealers, consistent with nondealers finding that the cognitive effort required to bid strategically exceeds the benefits, especially for low-valued cards.
Abstract: I. INTRODUCTION Positive opportunity costs of mental effort may invalidate the predictions of traditional models of rational (or hyperrational) agents. Conlisk (1996) uses deliberation costs as a recurring theme when discussing four important reasons for incorporating bounded rationality in economic models. Smith and Walker (1993) and Smith and Szidarovszky (1999) present effort models that predict individual behavior will more closely match the predictions of rational-behavior theories as (1) the stakes of the decision increase, and (2) the decision costs decrease. Smith and Walker (1993) find evidence of these two effects in a comprehensive review of 31 published laboratory experiments. Camerer and Hogarth (1999) extend Smith and Walker's survey by examining 74 experimental papers and find evidence in favor of the cognitive-effort theory, noting that "higher levels of incentives have the largest effects in judgment and decision tasks" (p. 34). Although the laboratory evidence is compelling, there has been little verification of these predictions outside the laboratory. (1) The present article fills this gap by examining field data from 214 multiunit sports card auctions carried out in an active marketplace: on the floor of a sports card show. We auctioned four types of trading cards with book values ranging from $3 to $70, providing significant variation in the stakes of the auction. Our auctions also included two distinct types of subjects: Some auctions had sports card dealers bidding against each other, whereas others had individual card collectors as the participants. This variation allows us to explore the second dimension of decision-cost theory: Do dealers, who commonly participate in sports card auctions and therefore likely require less effort to bid optimally, bid more rationally than nondealers?
Our measure of "rational" bidding comes from multiunit auction theory, which predicts strategic "demand reduction" in uniform-price auctions. (2) For each type of bidder and each type of card, we measure this demand-reduction behavior in the uniform-price auction relative to a control (the multiunit Vickrey [1961] auction) where bidders are predicted to fully reveal their demands. Our sports card data, generated from sales of 428 cards with a combined book value of nearly $10,000, provide two major insights. First, we find that the predicted strategic behavior is considerably greater when the auctioned cards have higher values. Second, dealers exhibit more of the predicted strategic behavior than do nondealers, for both lower and higher priced cards. One conjecture to explain this finding is that nondealers may find that the cognitive effort required to bid strategically exceeds the benefits, especially for low-valued cards. By contrast, dealers have more experience with auctions and make their living by buying/selling/trading cards, so their cognitive costs are likely much lower than those of nondealers. These two findings are consistent with recent theoretical models of monetary rewards and decision costs, and extend previous experimental evidence from the laboratory into the field. II. EXPERIMENTAL DESIGN As mentioned, recent theoretical literature has suggested that demand reduction inherent in uniform-price auctions with multiunit demand can induce inefficient allocations and possible reductions in auction revenue. To avoid the inefficiencies associated with the uniform-price auction, theorists have identified an alternative mechanism, the generalized Vickrey auction, which gives bidders a dominant strategy of revealing their true valuations for all units of the good. In this multiunit Vickrey auction, as in the uniform-price auction, each bidder can submit up to n different sealed bids on individual units, and the highest n bids are declared winners. If a bidder submits one or more of the winning bids, his or her price for the first unit equals the highest rejected bid submitted by someone else, and his or her price for the kth unit equals the kth highest of the rejected bids submitted by others. …
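A minimal sketch of the multiunit Vickrey pricing rule just described: the highest n bids win, and a winner's price for the kth unit won is the kth-highest rejected bid submitted by other bidders. The bidders and bid amounts are hypothetical.

```python
def vickrey_prices(bids_by_bidder, n_units):
    # Pool all bids; the highest n_units bids win.
    all_bids = sorted(((b, name) for name, bs in bids_by_bidder.items() for b in bs),
                      reverse=True)
    winners = all_bids[:n_units]
    prices = {}
    for name in {who for _, who in winners}:
        # Rejected bids submitted by *other* bidders, highest first.
        rejected_by_others = sorted((b for b, who in all_bids[n_units:] if who != name),
                                    reverse=True)
        units_won = sum(1 for _, who in winners if who == name)
        prices[name] = rejected_by_others[:units_won]  # kth unit priced at kth such bid
    return prices

bids = {"dealer": [30, 22], "collector1": [25, 10], "collector2": [18, 8]}
print(vickrey_prices(bids, n_units=2))  # dealer and collector1 each win one unit
```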

Journal ArticleDOI
TL;DR: The authors reexamine Lave and Elias's finding that the introduction of the 65-mph speed limit was associated with a statistically significant 3.4%-5.1% decline in statewide fatality rates and are unable to confirm it, finding only a statistically insignificant decline in adopting states.
Abstract: Michael Greenstone (*) I. INTRODUCTION In 1987 the federal government allowed states to raise speed limits from 55 mph to 65 mph on a single category of roads, rural interstates. Of the 47 states with rural interstate roads, 40 adopted the higher speed limit within a year. Garber and Graham (1990) and Ashenfelter and Greenstone (2002) document that the fatality rate on rural interstates increased dramatically subsequent to its introduction. These studies conclude that higher speed limits lead to higher fatality rates. In a recent issue of Economic Inquiry, Lave and Elias (1997) (henceforth L&E) argue that the full effect of the 65-mph speed limit cannot be inferred by examining rural interstates in isolation from other roads. They conjecture that the higher speed limits caused reallocations of drivers and state police that counterbalanced the increased fatality rates on rural interstates. In particular, they posit that the reduced travel times available on rural interstates induced drivers to switch from dangerous side roads to the safer rural interstates. Additionally, they contend that the higher limits freed the state police from speed enforcement, which allowed them to concentrate on activities with greater impacts on fatality rates. Thus, L&E's hypothesis is that the full effect of an increase in the speed limit requires an examination of statewide fatality rates. Figure 1 graphically depicts L&E's hypothesis. L&E present evidence in favor of their hypothesized causal chain. First, in support of the driver reallocation conjecture, they find that between 1986 and 1988 vehicle miles of travel (VMT) increased on rural interstates where the speed limit was increased to 65 mph. Second, they present anecdotal evidence in favor of the trooper reallocation conjecture. Third, they show that the introduction of the 65-mph limit was associated with a statistically significant decline in statewide fatality rates of 3.4%-5.1%. Thus, L&E's surprising conclusion is that through driver and trooper reallocations the 1987 increase in speed limits reduced fatality rates. This study reexamines L&E's empirical results and is unable to confirm them. It shows that the statewide fatality rate declined by a statistically insignificant amount in adopting states after 1987. This finding holds when the specification is virtually identical to the one that L&E fit and when alternative specifications are estimated. Although this result directly contradicts L&E's primary finding, the source of the discrepancy cannot be determined because their data were unavailable. It remains puzzling that the large increase in rural interstate fatality rates is not observable in statewide fatality rates. An explanation that potentially reconciles this finding with L&E's is that the increase on rural interstates was counterbalanced, but not swamped, by fatality declines induced by the hypothesized reallocations. Consequently, this article also explores whether the 65-mph speed limit caused the reallocations that are the conjectured sources of the statewide decline in fatality rates. If these links are not supported by the data, it indicates that the statistically insignificant decline in statewide fatality rates cannot be causally related to the two reallocations and, in turn, to the higher speed limit. The links between the 65-mph speed limit and the two reallocations are tested separately. First, the results suggest that VMT did not increase on rural interstates where the speed limit was raised. 
This finding is derived from a regression on 1982-1990 data, whereas L&E only compared unadjusted means from 1986 and 1988. Moreover, it holds when the comparison is to rural interstates in states that did not adopt the higher limit and to both other states and other categories of roads. These results fail to provide an empirical basis for the driver reallocation conjecture. …

Journal ArticleDOI
TL;DR: In this article, the authors focus on how people might learn to forecast relevant prices and whether the learning process permits convergence to rational expectations equilibrium, which is the most basic question a macroeconomist might ask about learning.
Abstract: I. INTRODUCTION In recent years economists have begun to investigate how people might learn equilibrium behavior. Microeconomists following Binmore (1987) and Fudenberg and Kreps (1988) consider learning models with roots in Cournot (1838) and Brown (1951). Numerous laboratory studies test and refine the microeconomists' learning models; see Camerer (1998) for a recent survey. There is also a separate theoretical macroeconomics literature on learning following Marcet and Sargent (1989a, 1989b, 1989c) and Sargent (1994); see Evans and Honkapohja (1997) for a recent survey. The focus is on how people might learn to forecast relevant prices and whether the learning process permits convergence to rational expectations equilibrium. We are not aware of any laboratory work intended to test and refine the learning models favored by macroeconomists. (1) The current study is intended to fill that gap. We gather laboratory evidence on the most basic questions a macroeconomist might ask about learning: Can people learn to forecast prices rationally? If there are obstacles to learning, are they transient or innate characteristics of human behavior? What sorts of environments reduce or enlarge those obstacles? Additional questions might be asked about the effects of learning observable in the usual macroeconomic and financial field data and about forecasting in a self-referential macroeconomic setting. Our work does not address such questions directly, but it does lay a foundation for later investigations of these additional questions. Available evidence on the basic questions is rather disquieting. An extensive cognitive psychology literature, following Kahneman, Slovic, and Tversky (1973), finds that human forecasts are bedeviled by many systematic biases, such as the anchoring and adjustment heuristic, the availability and representativeness heuristics, base rate neglect, and confirmatory and hindsight biases; see Rabin (1998) and Camerer (1998) for recent surveys. There is also a small experimental economics literature on forecasting prices and rational expectations that reaches generally negative conclusions. Garner (1982) presents 12 subjects over 44 periods with a continuous forecasting task that implicitly requires the estimation of seven coefficients in a third-order autoregressive linear stochastic model. He rejects stronger versions of rational expectations but finds some predictive power in weaker versions. Williams (1987) finds autocorrelated and adaptive forecast errors by traders in simple asset markets. However, the true data-generating process is not stationary in this task and is unknown even to the experimenter, which makes it difficult to identify individually rational behavior. Dwyer et al. (1993) test subjects' forecasts of an exogenous random walk. They find excess forecast variance but no systematic positive or negative forecast bias for this nonstationary task. A possible objection to both strands of the empirical literature is that neither provides good opportunities for learning. Most of the cognitive studies frame the tasks in ways that do not immediately engage subjects' forecasting experience, offer no salient reward, or provide little feedback that would allow subjects to improve performance. The three economics articles just cited have relatively few trials with complicated or nonstationary processes. Our study, by contrast, presents laboratory subjects with a moderately difficult forecasting task in several stationary learning environments.
We examine human learning in an individual choice task called Orange Juice Futures price forecasting (OJF). The OJF task has a form and complexity similar to the forecasting tasks in macroeconomists' models: Subjects must implicitly learn the coefficients of two independent variables in a linear stochastic process. The task is based on the observation of Roll (1984) that the price of Florida orange juice futures depends systematically on only two exogenous variables: the local weather hazard and the competing supply from Brazil. …
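For readers who want a concrete sense of the learning framework being tested, the following sketch simulates least-squares learning of a two-variable linear stochastic process in the spirit of the OJF task. The coefficients, variable ranges, noise level, and number of periods are invented for illustration and are not the parameters used in the experiment.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data-generating process in the spirit of the OJF task:
    # price_t = b0 + b1 * weather_t + b2 * brazil_t + noise_t
    b_true = np.array([50.0, -2.0, -1.5])
    T = 480

    X, y, errors = [], [], []
    theta = np.zeros(3)                      # learner's current coefficient estimates
    for t in range(T):
        x = np.array([1.0, rng.uniform(0, 10), rng.uniform(0, 10)])
        price = x @ b_true + rng.normal(0, 2.0)
        errors.append(price - x @ theta)     # forecast error with current estimates
        X.append(x); y.append(price)
        if t >= 3:                           # re-estimate by OLS on all data seen so far
            theta, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

    errors = np.array(errors)
    print("mean |error|, periods 1-40   :", np.abs(errors[:40]).mean())
    print("mean |error|, periods 441-480:", np.abs(errors[-40:]).mean())

Forecast errors are large early on, while the coefficients are poorly identified, and shrink as the sample accumulates; whether human subjects converge similarly is the question the experiment poses.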

Journal ArticleDOI
TL;DR: In this paper, a cross-section of share contracts is analyzed in an attempt to determine how (or whether) they vary according to the attributes of the contracting parties or the products being contracted for.
Abstract: I. INTRODUCTION At least since Adam Smith, economists have been intrigued by share contracts. There has been a proliferation of models, most of which tend to explain the share contract as resulting from some mix of optimal risk-bearing and optimal effort motivation. (1) A large number of researchers have attempted to test these models empirically. Sharecropping contracts, not surprisingly, are the most intensively examined (see the bibliography in Knoeber [2000]), but similar studies have been conducted in many other areas. For example, Martin (1988) and Lafontaine (1992) examine franchise arrangements, Hallagan (1978) investigates contracts used to lease gold claims, Leffler and Rucker (1991) analyze a sample of private timber sale contracts, Goldberg and Erickson (1987) study long-term contracts for the sale of petroleum coke, Aggarwal and Samwick (1999) investigate incentive contracts between firms and their executives, and Chisholm (1997), Goldberg (1997), and Weinstein (1998) all examine profit-sharing contracts between film companies and the "talent." This article differs in an important respect from most previous studies, which analyze a cross-section of contracts in an attempt to determine how (or whether) they vary according to the attributes of the contracting parties or the products being contracted for. Instead, it investigates an area where a sudden technology shock led to the rapid and widespread replacement of one form of contracting by another. The industry is the motion picture business, and the shock was the arrival of sound. During the silent film era, the vast majority of first-run feature films were rented to cinemas for flat daily or weekly payments. Within two years of the release of the first sound picture, revenue-sharing contracts were the norm, and they remain the norm to this day. (2) My goal is to investigate the degree to which the standard economic concerns--moral hazard, risk sharing, and measurement problems--can explain this change. Because the contracting parties remained the same, the question becomes, what was altered in the nature of the product, or in its provision, so as to have altered correspondingly the incentives faced? I conclude the following. First, the advent of sound fundamentally changed the inputs--live music and other acts, supplied by the exhibitor and central to the show, were replaced by a soundtrack and short sound films, supplied by the film company. The benefit of deterring exhibitor shirking through the use of flat rental fees (which made the exhibitor full residual claimant) declined accordingly. The party whose effort makes the largest contribution to marginal product generally receives the largest proportion of the residual claims, and in the decades following the arrival of sound, the proportion of residual claims collected by film companies rose steadily. In addition, average revenue per film increased with sound, whereas the cost of ensuring that exhibitors reported attendance revenue honestly--done by locating a film company's representative in the theater--remained the same. The ex post division of revenue (necessary for revenue sharing) thus became cheaper on a per film basis. Finally, uncertainty about the value (in terms of expected attendance revenue) of the early sound films appears to have raised the cost of negotiating lump-sum rental fees, and thus promoted experiments with revenue sharing.
However, although this may have contributed initially, it cannot explain the practice's persistence--uncertainty about film values declined as talking films became better known. II. THE NATURE OF THE PROBLEM A film company contracts with an exhibitor so that they may jointly produce the final good: a movie presentation. (3) To that end, each supplies essential inputs. The film company provides the movie (itself assembled from a variety of inputs) and some sort of national advertising support. The exhibitor provides the theater (which consists of seating, projection equipment, a refreshment stand, and other support activities) and some form of local advertising. …

Journal ArticleDOI
TL;DR: The authors examined the empirical implications of aggregation bias when measuring the productive impact of computers and found that both sources of bias are important, especially as one moves from the sector to the economy level, and when the elasticities of all types of non-computer capital are incorrectly restricted to be equal.
Abstract: This paper examines the empirical implications of aggregation bias when measuring the productive impact of computers. To isolate two specific aggregation problems relating to “aggregation in variables” and “aggregation in relations,” we compare various production function estimates across a range of specifications, econometric estimators, and data levels. The results show that both sources of bias are important, especially as one moves from the sector to the economy level, and when the elasticities of all types of non-computer capital are incorrectly restricted to be equal. In terms of computers, however, the estimated elasticity is surprisingly stable between industry and sector regressions and does not appear to be biased by the incorporation of a restrictive measure of non-computer capital. The data consistently show that computers have a large impact on output.
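A stylized illustration of the "aggregation in variables" problem: the sketch below generates synthetic Cobb-Douglas data in which two types of non-computer capital carry different output elasticities, then compares an unrestricted log-linear regression with one that forces a single aggregate non-computer capital stock. All variable names, elasticities, and distributions are hypothetical and are not drawn from the paper's data.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000
    # Hypothetical economy: log Y = 0.1 + 0.45 log L + 0.05 log C + 0.30 log K1 + 0.15 log K2 + e
    logL  = rng.normal(4.0, 0.5, n)
    logC  = rng.normal(1.0, 0.8, n)                  # computer capital
    logK1 = rng.normal(3.0, 0.6, n)                  # non-computer capital, type 1
    logK2 = 0.5 * logK1 + rng.normal(1.0, 0.6, n)    # non-computer capital, type 2 (correlated with type 1)
    logY  = 0.1 + 0.45*logL + 0.05*logC + 0.30*logK1 + 0.15*logK2 + rng.normal(0, 0.1, n)

    def ols(regressors, y):
        X = np.column_stack([np.ones(len(y))] + list(regressors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta

    # Unrestricted: each non-computer capital type enters separately
    print("unrestricted:", ols([logL, logC, logK1, logK2], logY).round(3))

    # Restricted ("aggregation in variables"): one aggregate non-computer capital stock
    logK_agg = np.log(np.exp(logK1) + np.exp(logK2))
    print("restricted:  ", ols([logL, logC, logK_agg], logY).round(3))

Because computer capital is drawn independently of the other regressors in this synthetic design, its coefficient is largely unaffected by the mis-aggregation of the other capital types, loosely mirroring the stability the abstract reports for the computer elasticity.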

Journal ArticleDOI
TL;DR: In this paper, the authors analyze the dynamics of a game of sequential bidding in the presence of stochastic scale effects, either economies or diseconomies of scale, and show that economies of scale give rise to declining expected equilibrium prices, whereas the converse is not generally true.
Abstract: We analyze the dynamics of a game of sequential bidding in the presence of stochastic scale effects, either economies or diseconomies of scale. We show that economies of scale give rise to declining expected equilibrium prices, whereas the converse is not generally true. Moreover, first- and second-price auctions are not always revenue equivalent. Economies of scale make second-price auctions more profitable for the seller, whereas revenue equivalence may be preserved in the case of diseconomies.

Journal ArticleDOI
TL;DR: In this paper, the authors analyze the provision of public intermediate inputs in a general equilibrium model, showing that departures from first-best provision are undesirable when taxes are optimal and deriving a second-best rule, with revenue and excess-burden feedback, when they are not.
Abstract: This article examines the provision of public resources using a general equilibrium model. Resources of this kind are intermediate inputs that enter a whole range of production functions simultaneously. In contrast to collectively consumed goods, a departure from the first best is not desirable for public resources when the economy levies optimal taxes. When taxes are not optimal, the second-best rule must incorporate feedback from revenue and from the net social losses of taxation (the excess burden). The article also examines the application of these results to cost-benefit analysis and the interpretation of the resulting estimates of the social rate of return on public capital.
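To make the flavor of such rules concrete, here is a generic textbook-style statement of first-best versus second-best provision of a public intermediate input G; it is only an illustration of the kind of condition described in the abstract, not the article's own formula. MCF denotes the marginal cost of public funds and R(G) the tax-revenue feedback from providing the input.

\[
\sum_i \frac{\partial F_i}{\partial G} = MC_G \quad\text{(first best, optimal taxes)},
\qquad
\sum_i \frac{\partial F_i}{\partial G} = MCF\left( MC_G - \frac{\partial R}{\partial G} \right) \quad\text{(second best, distortionary taxes)},
\]

where the left-hand side sums the input's marginal products across the production functions it enters.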

Journal ArticleDOI
TL;DR: Yun et al. as discussed by the authors investigated whether drivers are changing their behavior to mitigate the detrimental safety effects of CAFE, which is essentially another application of Peltzman's (1975) offsetting behavior hypothesis.
Abstract: John M. Yun (*) 1. INTRODUCTION Consumers value many attributes of an automobile. The problem for regulators is that many features are highly interrelated, such as safety and fuel economy improvements. Both features are highly correlated with vehicle weight. Greater weight increases the level of safety, in terms of crashworthiness, but decreases the fuel economy. Therefore, regulatory attempts to improve fuel economy, such as the Corporate Average Fuel Economy (CAFE) Standards, (1) are likely to alter average vehicle weight, which will alter the safety of a vehicle. Crandall and Graham (1989) find the reduction in weight attributable to CAFE results in a 14%-28% increase in occupant fatality risk. This article seeks to determine whether drivers are changing their behavior to mitigate the detrimental safety effects of CAFE, which is essentially another application of Peltzman's (1975) offsetting behavior hypothesis. CAFE, through a reduction in passenger car weight, will increase the ex ante cost of risky driving (2) relative to its expected benefit. The reason is that, once an accident occurs, the probability of serious injury or death has increased. The offsetting behavior hypothesis is that consumers, aware of the increased vulnerability, will exercise greater due care and reduce the amount of risky driving on the road as a result of CAFE. However, CAFE has not just affected the average passenger car weight. Godek (1997) and Yun (1999) indicate that CAFE has positively affected the relative sales of light trucks (3) to passenger cars. If we assume that light trucks are basically heavy cars with greater visibility, the same arguments made for offsetting behavior on the part of passenger car drivers can be applied here. (4) The main difference is that truck drivers will be more aggressive and risky because trucks are heavier, which lowers the cost of risky driving. Thus, if CAFE induces more trucks to be on the road, according to the offsetting behavior hypothesis, CAFE will increase the amount of aggressive truck driving while simultaneously decreasing the amount of aggressive car driving. Although the article will focus on the offsetting behavior from changes in passenger car weight, it is just as relevant for changes in relative truck miles driven. Section II reviews the recent research linking automobile weight changes with CAFE and recent empirical studies that apply offsetting behavior to automobile regulations. (5) Section III details the methodology used to test the offsetting behavior hypothesis. Section IV contains a discussion of the data. The results are reported in section V, and section VI concludes the article. II. BACKGROUND The CAFE standards were initially set at 18.0 mpg for passenger cars and between 15.8 and 17.2 mpg for various light trucks. These standards have steadily increased to the current levels of 27.5 mpg for passenger cars and 20.7 mpg for light trucks. (6) The key impact of CAFE on consumer safety is a reduction in the average passenger car weight. Figure 1 illustrates the fall in weight since CAFE's passage in 1975. Crandall and Graham (1989) attribute 14% of the fall in weight to CAFE. (7) Using single-car accident data, they estimate the elasticity of highway occupant fatality rates with respect to average car weight is between -1.22 and -2.30, which implies a 17%-32.2% increase in the fatality rate due to CAFE.
Using the upper and lower bounds, Crandall and Graham (1989) estimate that CAFE is responsible for 2,200-3,900 more deaths over the life span of the cars from a given model year. (8) However, there is no attempt to explicitly measure possible offsetting behavior effects. Not all studies find a positive relationship between weight and safety. The General Accounting Office (GAO) (1991) acknowledges a theoretical link between automobile size and safety; however, it finds no direct empirical link between the two. …
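Reading the 14% figure as the CAFE-induced percentage reduction in average car weight, the 17%-32.2% range follows directly from the elasticity bounds:

\[
\frac{\Delta F}{F} \;\approx\; \varepsilon_{F,W}\,\frac{\Delta W}{W},
\qquad
(-1.22)\times(-0.14)\approx 0.17,
\qquad
(-2.30)\times(-0.14)\approx 0.322,
\]

where \(\varepsilon_{F,W}\) is the elasticity of the occupant fatality rate with respect to average car weight and \(\Delta W/W = -0.14\) is the weight reduction attributed to CAFE.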

Journal ArticleDOI
TL;DR: In this paper, the authors consider the case of four agents with uniform consumer density and inelastic demand and find that the equilibrium prediction is of substantial but limited help in explaining the experimental data.
Abstract: Nicolaas J. Vriend (*) I. INTRODUCTION Despite the popularity of simple location models in industrial economics and voting theory, in the tradition following Hotelling (1929), and despite the recent rise of the experimental method in economics, there have been only a few experimental tests of such models. Brown-Kruse et al. (1993) and Brown-Kruse and Schenk (1999) study models with elastic demand, while Collins and Sherstyuk (2000) focus on the simpler case--which we address, too--where demand is inelastic. Collins and Sherstyuk implement a model with three agents who choose locations on a line segment with a uniform density of consumers (who, due to the assumption of inelastic demand, can also be seen as voters with one vote each). It is well known that this game has no pure-strategy equilibrium (see Eaton and Lipsey [1975]). Normalizing the line segment to the unit interval, the unique mixed-strategy equilibrium prescribes uniform randomization over the middle two quarters for all firms (see Shaked [1982]). As it is a well-established fact that experimental subjects have difficulties in randomizing (see, for example, Rapoport and Budescu [1997]), it is not very surprising that Collins and Sherstyuk do not find strong support for the equilibrium hypothesis. Their empirical distribution of choices is M-shaped and has a considerably larger support than the equilibrium distribution. In this study we analyze the case of four agents, with uniform consumer density and inelastic demand. As the equilibrium of this case implies that two firms locate at the edge of the first and second quarters, and the other two at the edge of the third and fourth quarters, this setup seems ideal for the investigation of various matters. First, there is no other number of agents where the equilibrium prediction has a better chance to be valid. (1) With three competitors, the unique symmetric equilibrium is mixed; with five the unique equilibrium configuration is asymmetric and implies unequal payoffs; and with six and more agents the equilibrium configurations cease to be unique. Thus, only the two- and the four-agent cases yield unique pure and symmetric equilibrium configurations that give identical payoffs to all agents. Second, the equilibrium in the four-agent case has a property that makes it interesting from a behavioral and empirical point of view. Not only is the focal midpoint empty, but the whole middle segment of the "linear city" is empty as well. That is, notwithstanding the nice theoretical properties, the equilibrium is not entirely intuitive and also conflicts with casual empirical evidence. There are no cities without shops in the center, nor are there democracies without parties located in the political middle ground. The picture we discern in our experimental data can be summarized in the following experimental result. In the four-seller case, the equilibrium prediction is of substantial but limited help. About one-third of all choices are clustered around the equilibrium locations, but in no session do we observe convergence to equilibrium. At the same time, the focal midpoint exerts a considerable attraction, with almost 10% of all choices clustered around it. Consumers profit from this considerably in the form of lower transportation costs. The remainder of the article is organized as follows. In section II we give a theoretical account of the model we implement. In section III we present the experimental design.
In section IV we analyze the results and explain the data, and section V concludes. II. THEORY Consider a "linear city" in which four firms produce a good at constant marginal cost. The price of the good is fixed (due to some unmodeled features of the market). Consequently, costs can be normalized to zero and the price to one. To sell the goods, the firms simultaneously choose a location on the unit interval (0, 1). …
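A quick numerical check of the pure-strategy equilibrium described above (two firms at 1/4 and two at 3/4), under the stated assumptions of uniform consumer density, inelastic unit demand, and purchases from the nearest firm. The grid sizes are arbitrary; this is an illustration, not part of the experimental software.

    import numpy as np

    def shares(locs, m=100001):
        # Uniform consumers on [0,1]; each buys from the nearest firm, ties split equally.
        grid = np.linspace(0.0, 1.0, m)
        d = np.abs(grid[:, None] - np.asarray(locs, float)[None, :])
        closest = d == d.min(axis=1, keepdims=True)
        return (closest / closest.sum(axis=1, keepdims=True)).mean(axis=0)

    candidate = [0.25, 0.25, 0.75, 0.75]      # two firms at 1/4, two at 3/4
    print("payoffs at candidate equilibrium:", shares(candidate).round(3))

    # No unilateral deviation by firm 0 should beat its equilibrium payoff of 1/4.
    best_dev = max(shares([x, 0.25, 0.75, 0.75])[0] for x in np.linspace(0, 1, 201))
    print("firm 0's best deviation payoff  :", round(best_dev, 3))

Each firm earns one-quarter of the market at the candidate locations, and no unilateral deviation by a single firm does strictly better, which is why the middle segment of the linear city stays empty in equilibrium.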

Journal ArticleDOI
TL;DR: In this paper, the role of formal and informal institutions as constraints on the conversion of forest land to agriculture in developing countries is analyzed, and it is shown that if institutions raise the costs of land conversion, then it is possible to utilize an agricultural household model to formalize the resulting impacts on the amount of converted land used by all farming households.
Abstract: 1. INTRODUCTION In many tropical regions a key factor influencing deforestation is thought to be the lack of effective property rights and other institutional structures controlling access to and use of forests. (1) Where such institutions exist, they "limit" access to and conversion of forest land, thus acting as a deterrent to deforestation. In the absence of formal ownership rules, traditional common property regimes in some forested regions have also proven to be effective in controlling the "open access" deforestation problem (Gibson, 2001; Larson and Bromley, 1990; Richards, 1997). In short, formal and informal institutions can influence the process of forest loss by imposing increased costs of conversion on farmers who clear forest land. This article is concerned with analyzing the role of formal and informal institutions as constraints on the conversion of forest land to agriculture in developing countries. The perspective on institutions adopted here follows the approach of North (1990), who defines institutions as "humanly devised constraints that shape human interaction" and that "affect the performance of the economy by their effect on the costs of exchange and production." In analyzing the relation between institutional constraints and the amount of forest land converted for use by smallholders, this article makes several contributions. First, it demonstrates that if institutions raise the costs of land conversion, then it is possible to utilize an agricultural household model to formalize the resulting impacts on the amount of converted land used by all farming households. Moreover, the equilibrium level of land cleared will differ under conditions of no institutional constraints--that is, the pure open access situation--compared to conditions where effective institutions exist to control land conversion. Because institutions raise the cost of land clearing, more land should be converted under pure open access. (2) This in turn implies that the existence of institutional constraints prevents the adjustment of the stock of converted land to the long-run equilibrium "desired" by agricultural households, which is the amount of land that could be cleared under open access. A dynamic panel analysis is therefore employed to test the hypothesis that the presence of institutions to control agricultural conversion can significantly affect deforestation. The model of land expansion is applied to the case of Mexico in the pre-NAFTA (North American Free Trade Agreement) reform era, 1960-85. During this period, the existence of ejido, or communal land ownership, for the vast majority of forest land meant that strong institutional controls may have restricted the rate of adjustment in the amount of new land converted and thus limited agricultural expansion (Sarukhan and Larson, 2001). The analysis also has direct relevance for the post-NAFTA period, particularly because the 1992 land reforms sanction changes in the traditional ejido land ownership structure. II. A PURE OPEN ACCESS MODEL OF FOREST LAND CONVERSION The following model of forest conversion is based on an approach similar to that of Cropper et al. (1999), Lopez (1997, 1998a), and Panayotou and Sungsuwan (1994). Assume that the economic behavior of all J rural smallholder households in the agricultural sector of a developing country can be summarized by the behavior of a representative jth household. Although the representative household is utility-maximizing, it is a price taker in both input and output markets.
Farm and off-farm labor of the household are assumed to be perfect substitutes, such that the opportunity cost of the household's time (i.e., its wage rate) is exogenously determined. The household's behavior is therefore recursive in the sense that the production decisions are made first and then the consumption decisions (Singh et al., 1986). In any time period, t, let the profit function of the representative agricultural household's production decisions be defined as (1) max [pi](p,w,[w. …
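Because equation (1) is truncated in this excerpt, the following is only a generic sketch of the kind of recursive household problem being described; the functional form and notation are illustrative, not the article's exact specification:

\[
\pi_t(p, w, c) \;=\; \max_{L_t,\;\Delta A_t}\;\; p\,F(L_t,\,A_{t-1}+\Delta A_t) \;-\; w\,L_t \;-\; c(I)\,\Delta A_t ,
\]

where \(L_t\) is labor, \(A_t\) is the stock of converted (cleared) land, \(\Delta A_t\) is new clearing, and \(c(I)\) is the per-unit cost of converting forest land, increasing in the strength of formal or informal institutional constraints \(I\). Under pure open access \(c(I)\) falls to the raw clearing cost, so the household's desired long-run stock of converted land is larger.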

Journal ArticleDOI
TL;DR: In this article, the authors used baseball card prices, a relatively neglected source of market evaluation of players reflecting their value to the ultimate consumer, to measure the important but intangible characteristic of "star quality" or charisma possessed by some players.
Abstract: I. INTRODUCTION Measuring productivity presents problems in many areas of economics. This is especially true in sports economics, where appropriate indicators of individual productivity (apart from team productivity) have been the subject of much debate and where the issue of fairness in compensation has become especially contentious in recent years due to the extremely high salaries being paid to participants. Baseball has been studied more widely than any other sport, largely because of the vast amount of data that has been assembled for it and its fairly well-defined indicators--hits, home runs, earned run averages (ERAs), and so on. However, even in this sport, controversies over productivity and compensation continue. Problems were severe enough in 1994 to have resulted in a 232-day general strike and the first-ever cancellation of the World Series, events that are estimated to have cost Major League Baseball (MLB) and the economy close to $1 billion. (1) This research will utilize baseball card prices, a relatively neglected source of market evaluation of players reflecting their value to the ultimate consumer, to measure the important but intangible characteristic of "star quality" or charisma possessed by some players. Star quality, which exists over and above productivity as indicated in official game statistics, is generally acknowledged to bring fans to the stadiums and impact team revenues in a significant way. This research will determine a numerical measure of star quality as the residual in a model of baseball card prices based on traditional performance variables. We shall use this in the computation of a total productivity figure for a sample of baseball players for the period leading up to the strike of 1994. From this we will derive a measure of the players' marginal revenue product (MRP) and examine the issue of monopsonistic exploitation by comparing this expanded MRP to the players' salaries. II. MONOPSONISTIC EXPLOITATION The institutional arrangements and operating policies of MLB have acted to restrict mobility in that labor market and have led to numerous charges of monopsonistic exploitation over the years. Historically, players were drafted into a team and reserved to employment by that team alone unless they were traded to another team at the discretion of management. However, several lawsuits and labor grievances in the 1970s resulted in the introduction of free agency, which limits a team's control over a player to six years. The MLB Players Association argues that even this shortened restrictive period, together with the collusive operating policies permitted under baseball's special exemption from antitrust laws, gives rise to monopsonistic exploitation. This implies that many players receive less than their MRP to their team. This sentiment was an element in the players' rejection of management's position in the 1994 contract negotiations, which led to the strike that canceled part of two seasons. Previous Research on Measuring Productivity Roger Noll (1974) was one of the first researchers to study baseball productivity and one of the few to address the issue of star quality. Noll attempted to capture the star quality effect by entering the number of star players on a team as an explanatory variable in his attendance regression. Other researchers have advanced the technique for determining productivity from performance statistics but have not addressed the star quality factor directly. 
The productivity study of Scully (1974) focused on a player's slugging percentage (total number of bases divided by total at-bats) as the statistic with the greatest influence on a team's winning percentage, and hence on team revenues. Using 1968-69 data, Scully found that a one-point increase in a team's slugging percentage increased total team revenue by over $9,500. Medoff (1976) focused on runs scored as the major indicator of player performance as it contributed to team revenues in 1972-74. …
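The residual-based measurement of star quality described above can be sketched as follows; every variable, coefficient, and sample size here is invented purely to show the mechanics, and the specification is not the article's.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 300
    # Hypothetical player data (all values made up for illustration)
    career_hr  = rng.integers(10, 600, n).astype(float)   # career home runs
    career_avg = rng.normal(0.270, 0.025, n)               # batting average
    all_star   = rng.integers(0, 12, n).astype(float)      # all-star appearances
    charisma   = rng.normal(0.0, 0.4, n)                    # unobserved "star quality"
    log_price  = (1.0 + 0.004*career_hr + 4.0*career_avg
                  + 0.05*all_star + charisma + rng.normal(0, 0.1, n))

    # Regress log card price on observable performance; the residual proxies star quality.
    X = np.column_stack([np.ones(n), career_hr, career_avg, all_star])
    beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
    star_quality = log_price - X @ beta
    print("corr(residual, charisma):", round(np.corrcoef(star_quality, charisma)[0, 1], 2))

The residual recovers the unobserved premium almost perfectly here only because the synthetic data contain no other omitted factors; with real card prices it would also absorb measurement error and omitted performance dimensions.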

Journal ArticleDOI
TL;DR: Kandil et al. as mentioned in this paper compared asymmetry on the demand and supply sides in the face of two specific demand shocks: monetary and government spending shocks, and found that demand-side asymmetry produces a positive correlation between the asymmetric effects of demand shocks on real output growth and nominal wage and price inflation.
Abstract: Magda Kandil (*) 1. INTRODUCTION Recent research on business cycles has produced evidence that demonstrates the asymmetric effects of monetary shocks on economic variables. (1) Specifically, the effect of expansionary monetary shocks on economic variables may be different from that of contractionary shocks. The empirical evidence has stimulated efforts to provide an adequate theoretical explanation of the observed asymmetry. Sources of asymmetry have varied sharply in the theoretical literature. Primary explanations may be classified into supply-side and demand-side sources of asymmetry. A strand of the theoretical literature has viewed asymmetry of economic fluctuations as a supply-side phenomenon. Demand shifts along a kinked supply curve are likely to produce varying effects on the economy. Competing models offer explanations for the kinked shape of the supply curve. In one direction, some have explained the asymmetric shape of the supply curve by conditions in the labor market. This possibility arises in the context of models that advocate the contractual nominal wage rigidity explanation of business cycles. The stipulation of wage contracts may allow for asymmetric wage indexation. For example, if wages are more responsive to expansionary demand shocks compared to contractionary shocks, asymmetric wage indexation is consistent with a kinked supply curve in labor and output markets. Consequently, expansionary demand shocks are likely to move the economy along a steeper supply curve. (2) Alternatively, supply-side asymmetry may be attributed to conditions in the product market that differentiate price adjustment in the upward and downward directions. Faced with menu costs, firms may be inclined toward more frequent and larger adjustments of prices in the upward compared to the downward direction. This possibility also is consistent with a steeper supply curve in the face of positive demand shocks compared to negative shocks. (3) Another line of theoretical explanations of asymmetric economic fluctuations has emphasized possible asymmetry on the demand side of the economy. Conditions in the money and/or goods markets may differentiate the response of aggregate demand to positive and negative shocks. (4) This possibility differentiates the size of aggregate demand shifts in the face of expansionary and contractionary shocks. In the absence of supply-side asymmetry, demand-side asymmetry differentiates the severity of economic fluctuations in the face of positive and negative shocks. The primary difference between demand-side and supply-side explanations concerns the output and inflationary effects in response to asymmetry. Supply-side asymmetry produces a trade-off between the asymmetric effects of demand shocks on output and their inflationary effects on wages and/or prices. The response of output growth is larger in the face of demand contraction compared to expansion. Concurrently, the inflationary effect of demand shifts exceeds the deflationary effect on nominal wage and/or price. In contrast, demand-side asymmetry produces a positive correlation between the asymmetric effects of demand shocks on real output growth and nominal wage and price inflation. That is, larger output contraction relative to expansion correlates with smaller inflation relative to deflation of the nominal wage and price in the face of demand shocks.
The purpose of this investigation is to contrast asymmetry on the demand and supply sides in the face of two specific demand shocks: monetary and government spending shocks. Demand and/or supply conditions may differentiate the expansionary and contractionary effects of monetary and fiscal policies. The article's analysis will highlight the specifics of this asymmetry. The empirical evidence sheds some light on potential explanations of this asymmetry as follows. Conditions in the credit market and the behavior of Ricardian consumers may differentiate the effects of expansionary and contractionary government spending shocks on aggregate demand. …

Journal ArticleDOI
TL;DR: This article examines the relationship between gift yields and the tendency to give cash across groups of givers, in an effort to learn the motivation behind the decision to give cash.
Abstract: I. INTRODUCTION Each year individuals in the United States transfer between $50 and $72 billion in resources to friends and family members in the form of noncash holiday gifts, despite the fact that holiday gift recipients apparently value their noncash gifts at about 10% less than the prices paid by the givers. (1) Cash gifts are rather rare, accounting for under 15% of gifts to college-aged recipients. The ubiquity of noncash gifts poses a puzzle to the idea that gift givers are rational, with at least three possible explanations. First, gift-givers' choices of gifts versus cash may not be determined by a comparison of costs and benefits. Second, gift givers may not be directly concerned with the utility of their recipients, that is, they may be paternalistic rather than altruistic, in the sense of the distinction raised by Pollak (1988). (2) Third, gift givers may be rational and nonpaternalistic but may behave as if recipients valued $x in cash less than a noncash gift they value at $x, which one could alternatively view as a stigma associated with cash gifts or a "surplus" associated with noncash gifts. The strategy for studying the prevalence of noncash gifts is first to analyze a new and relatively large data set on holiday gift giving to college students. In particular, I examine how average gift yields (the ratio of recipient valuation to the price the recipient estimates the giver paid) and the tendency to give cash vary with giver and recipient characteristics. The new data set has information about over 3400 cash and noncash gifts to college students, of which 2400 are usable for extensive analysis. I examine how yields and the tendency to give cash vary by the relationship between giver and recipient, the frequency of contact between giver and recipient, the ethnicity and religion of the recipient, and the price of the gift. Second, I examine the relationship between yields and the tendency to give cash, across groups of givers, in an effort to learn the motivation behind the decision to give cash. I find a strong negative relationship between average yields on noncash gifts and the tendency to give cash. Cash giving is more likely from givers who tend to give unwanted gifts, indicating that givers are concerned with the utility of their recipients and, in turn, that the decision to give cash is economic. Though cash gifts are more likely from less efficient noncash gift givers, they are nevertheless surprisingly rare, if givers are attempting to maximize recipient utility associated with material aspects of the gifts. This motivates the third goal, estimation of a simple structural model of the decision to give cash, incorporating the possibility of both flat and variable relative stigma attached to giving cash. (4) I find strong evidence that the gift/cash decision is made as if influenced by a relative stigma of giving cash that I am able to parameterize and quantify. Relative stigma resolves the puzzle of infrequent cash gifts despite givers' apparent tendency toward efficient gift giving. Holiday gift-giving behavior is interesting both intrinsically and as a potentially significant form of intrafamily resource transfer. Cox (1987) reports that bequests totaled over $70 billion, and inter vivos transfers topped $100 billion in 1979 (both scaled to 1993 dollars using the consumer price index). On the basis of retail spending, annual holiday gift giving (excluding cash gifts) is estimated at $50-72 billion.
(5) Holiday gift giving is not just a sizable component of intrafamily (and friend) resource transfer; because of its pervasiveness it may facilitate operative intergenerational links among individuals not otherwise likely to engage in bequests or inter vivos transfers. Economists have recently postulated various motives for gift giving, including signaling and the correction of externalities among intimate individuals. (6) The idea that the choice of gift versus cash--the object of the present study--is influenced by economic factors need not be inconsistent with those other motives. …
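As a small numerical illustration of the yield measure defined above (recipient valuation divided by the recipient's estimate of the price paid), using entirely made-up gift records rather than the article's data:

    import numpy as np

    # Hypothetical (valuation, estimated price paid) pairs in dollars
    gifts = np.array([[35., 40.], [18., 25.], [60., 55.], [22., 30.], [45., 50.]])
    yields = gifts[:, 0] / gifts[:, 1]        # yield = valuation / estimated price
    print("per-gift yields:", yields.round(2))
    print("average yield  :", round(yields.mean(), 2))

Average yields below one correspond to the roughly 10% valuation shortfall, and the associated deadweight loss, cited in the introduction.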

Journal ArticleDOI
TL;DR: Wilson et al. as discussed by the authors investigated the relationship between suspect behavior and collusion in procurement auctions, a context where collusion is often suspected and has been frequently observed, examining two environments: a set cost regime that contains some features critical to standard government procurement auctions for goods, and an endogenous cost regime that exhibits some critical elements of procurement bidding for construction contracts.
Abstract: Bart J. Wilson (*) I. INTRODUCTION Questioning the value of efforts to enforce federal anticonspiracy laws raises spirited debate among antitrust economists. Although everyone recognizes the welfare costs of successful conspiracies, commentators such as Armentano (1990) and Cohen and Scheffman (1989) argue that conspiratorial arrangements are pervasively ineffective and thus that social resources spent prosecuting collusion are wasted. Others, including Marvel et al. (1988), express more ambivalence about the seriousness of conspiracies as a social problem but believe that the government manages to detect only the most ineffective arrangements. Still other economists clearly believe in the effectiveness of current government efforts (see, e.g., Werden [1989] and Froeb et al. [1993]). Ultimately, the pervasiveness of conspiratorial behavior and the magnitude of the consequent damages are empirical issues. However, the illegality of collusion complicates the collection of relevant data. Tip-offs or complaints frequently expose conspiracies, but good reasons exist for suspecting that conspiracies detected in this fashion tend to be the least profitable. A conspirator likely "rats" on co-conspirators only if he or she is disenchanted with the scheme, suggesting that the conspiracy is about to collapse anyway. Similarly suspect are conspiracies that are sufficiently clumsy to raise the protests of buyers. The detection problem would be vastly simplified if conspiracies could be identified in the absence of a "smoking gun." Posner (1969) argues that such explicit evidence of conspiratorial behavior is unnecessary because conspiring firms exhibit identifiable patterns of activity that are alone sufficient to determine illegality. Kuhlman (1969) and Gallo (1977) go still further and advocate the continuous computerized monitoring of pricing and sales data to detect conspiracies. However, as Marshall and Meurer (1998) observe, a potentially condemning defect of any attempt to detect collusion via behavior is that, depending on underlying conditions, virtually any "suspect" pattern of behavior can be generated as a noncollusive Nash equilibrium. Identical prices, for example, are often cited as an indication of coordinated activity (e.g., Mund [1960]). (1) But in games, such as the one modeled by Anton and Yao (1992), agents submit identical bids in a Nash equilibrium in the absence of collusion. Furthermore, some evidence from collusion cases prosecuted by the Department of Justice suggests that identical prices are rarely part of collusive arrangements, except when the industry is not very concentrated (Comanor and Schankerman, 1976). Market rotations, a second pricing scheme that is typically cited as a suspect pattern, may also be consistent with a subgame perfect noncooperative Nash equilibrium. Zona (1986), for example, identifies a number of approximate equilibria involving contract rotations when sellers have steeply increasing cost structures. Also, Lang and Rosenthal (1991) characterize a noncooperative static equilibrium for a multiproduct contracting environment, where sellers both divide the market and submit noncompetitively high bids in the market where they do not expect to win. (2) Still other pricing patterns are possibly suspect. Porter and Zona (1993) report evidence of bid rigging in procurement auctions from the pattern of losing bids. The intuition is that in a competitive environment, bids should be correlated with costs.
When prices are fixed, however, this correlation breaks down. This article reports a laboratory experiment designed to explore the relationship between suspect behavior and collusion. We focus on coordinated behavior in procurement auctions, a context where collusion is often suspected and has been frequently observed. (4) Two environments are examined: a set cost regime that contains some features critical to standard government procurement auctions for goods, and an endogenous cost regime, which exhibits some critical elements of procurement bidding for construction contracts. …

Journal ArticleDOI
TL;DR: In this paper, it was shown that in finite economies, if agents have incomplete information about their relative position in the trade cycle or when the barter and autarky equilibria of the one-shot trading round support a monetary equilibrium with repeated trades, then fiat exchange may arise.
Abstract: The state of the art of rendering fiat money valuable is either to impose a boundary condition or to make the boundary condition unimportant through an infinite sequence of markets so as to circumvent backward induction. We show fiat exchange may nevertheless arise in finite economies if agents have incomplete information about their relative position in the trade cycle or when the barter and autarky equilibria of the one-shot trading round support a monetary equilibrium with repeated trades.

Journal ArticleDOI
TL;DR: In this paper, the authors analyze the properties of a Barro-Gordon (1983a) model of monetary policy, in which some agents form rational and some adaptive expectations, and show that a higher proportion of agents with adaptive expectations generally slows down the disinflation process but, as in Sargent (1999), also allows for a lower long-run inflation rate.
Abstract: I. INTRODUCTION Whether or not expectations of inflation are rational is an open question. Rational forecasts require knowledge and information that some agents may not find worthwhile acquiring. Instead, because past inflation is a cheap and potentially informative signal about the policies of the central bank, those agents with less information may resort to extrapolation from past inflation to a greater extent than those with more information. In other words, for all agents, expectations have a rational (forward-looking) and an adaptive (backward-looking) component. Differences across agents in terms of information can lead to a separation between those who form more rational and those who form more adaptive expectations. (1) A simpler heterogeneity--agents with purely rational or with purely adaptive expectations--has been adopted in some models. (2) In that context, when a central bank is the major source of information about monetary policy, it could potentially influence the proportion of agents in each group. A natural question is: What are the central bank's preferences regarding the distribution? More specifically, does the central bank prefer many or few agents with rational expectations when introducing a monetary regime designed to reduce inflation? We will analyze the properties of a Barro-Gordon (1983a) model of monetary policy, in which some agents form rational and some adaptive expectations. In that setting, a higher proportion of agents with adaptive expectations generally slows down the disinflation process but, as in Sargent (1999), it also allows for a lower long-run inflation rate. An implication of this is that the central bank will prefer a higher proportion of agents who form rational expectations if it disinflates from a high level of inflation, but not so if it disinflates from a moderate or low inflation level. It is generally recognized that expectations do not adjust instantaneously to reflect the new conditions following a change in monetary regime. Instead, as in Lewis (1989), Wieland (2000), and Mankiw et al. (1987), agents, including the central bank, observe the unfolding macro developments and form estimates of the parameters that characterize the new environment. A number of interesting questions are addressed in that set-up. What is the speed and correctness of learning? Can the central bank use its control over monetary aggregates to generate observations useful in the learning process, that is, can learning be an active process? When does equilibrium have an inflationary bias? The structure of our model is simpler in the sense that the central bank has direct control over inflation and, aside from a random shock, the output response to changes in prices is clear. The simpler set-up makes the model analytically tractable and allows us to solve explicitly for steady-state inflation and a parameter that captures the speed of convergence of inflation to the steady state. We can compare our results with the numerical estimates of other more complicated models, most notably that of Sargent (1999). Our explicit solution helps clarify insights about the dynamics of disinflations. Models of parameter uncertainty and learning have also addressed questions about gradual versus rapid disinflation. With results similar to ours, Balvers and Cosimano (1994) find that rapid disinflation is preferred when inflation is high, although in their framework rapid reduction in money growth is warranted to facilitate learning.
In our model, rational agents are preferred because, with high initial inflation, the benefit from rapid disinflation with more rational agents outweighs the cost of higher steady-state inflation. Similar to Cukierman and Meltzer (1986) and Cosimano and Van Huyck (1993), in our model there is a possibility for the central bank to influence the beliefs of the population. The process of expectation formation, however, is not modeled formally, which contributes substantially to the analytical tractability of the model. …
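The claim that a larger adaptive share slows disinflation can be illustrated with a deliberately stripped-down simulation of the expectation mix alone; it abstracts entirely from the central bank's optimization and the output trade-off in the Barro-Gordon analysis, and the gain, target, and initial inflation are arbitrary.

    import numpy as np

    def disinflation_path(share_adaptive, pi0=10.0, target=2.0, gain=0.25, T=60):
        # Rational agents expect the announced target immediately; adaptive agents
        # update their expectation toward last period's realized inflation with a
        # fixed gain. Simplifying assumption (illustration only): realized inflation
        # equals the economy-wide average expectation each period.
        exp_adaptive = pi0
        path = []
        for _ in range(T):
            pi = share_adaptive * exp_adaptive + (1 - share_adaptive) * target
            exp_adaptive += gain * (pi - exp_adaptive)
            path.append(pi)
        return np.array(path)

    for mu in (0.2, 0.5, 0.8):
        path = disinflation_path(mu)
        first_below = int(np.argmax(path < 3.0)) if (path < 3.0).any() else None
        print(f"adaptive share {mu:.1f}: first period with inflation below 3% = {first_below}")

With a small adaptive share, inflation is near the target within a few periods; with a large adaptive share the disinflation stretches out to several dozen periods, which is the cost the central bank weighs against the long-run benefit discussed above.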