
Showing papers in "Economic Inquiry in 2004"


Journal ArticleDOI
TL;DR: In this article, the effects of financial development on the sources of growth in different groups of countries were investigated using GMM dynamic panel techniques, showing that finance has a strong positive influence on productivity growth primarily in more developed economies.
Abstract: This article studies the effects of financial development on the sources of growth in different groups of countries. Recent theoretical work shows that financial development may affect productivity and capital accumulation in different ways in industrial versus developing countries. This hypothesis is tested with panel data from 74 countries using GMM dynamic panel techniques. Results are consistent with the hypothesis: finance has a strong positive influence on productivity growth primarily in more developed economies. In less developed economies, the effect of finance on output growth occurs primarily through capital accumulation.
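A rough sketch of the kind of dynamic panel specification and GMM moment conditions that typically underlie such estimates; the regressor set and instrument choices below are illustrative assumptions rather than the article's exact equations.

```latex
% Dynamic panel growth regression (illustrative variable set):
\[
y_{i,t} - y_{i,t-1} = (\alpha - 1)\,y_{i,t-1} + \beta\,FD_{i,t} + \gamma' X_{i,t} + \mu_i + \lambda_t + \varepsilon_{i,t},
\]
% where y is log output (or productivity), FD is financial development, X are controls,
% and \mu_i is a country effect. First-differencing removes \mu_i, and suitably lagged
% levels serve as GMM instruments for the differenced equation:
\[
\mathbb{E}\!\left[\,y_{i,t-s}\,\Delta\varepsilon_{i,t}\,\right] = 0, \qquad s \ge 2 .
\]
```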

392 citations


Journal ArticleDOI
TL;DR: The worker fatality risk variable constructed for this article uses BLS data on total worker deaths by both occupation and industry over the 1992-97 period, rather than death risks by occupation or industry alone as in past studies.
Abstract: The worker fatality risk variable constructed for this article uses BLS data on total worker deaths by both occupation and industry over the 1992-97 period rather than death risks by occupation or industry alone, as in past studies. The subsequent estimates using 1997 CPS data indicate a value of life of $4.7 million for the full sample, $7.0 million for blue-collar males, and $8.5 million for blue-collar females. Unlike previous estimates, these values account for the influence of clustering of the job risk variable and compensating differentials for both workers' compensation and nonfatal job risks.
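As a hedged illustration of how a value of statistical life is usually backed out of a compensating-differentials wage equation of this kind; the functional form and the risk scaling (deaths per 100,000 workers) are assumptions for exposition, not the article's exact specification.

```latex
% Hedonic wage equation with fatality risk p, nonfatal injury risk q, and workers'
% compensation WC (standard errors clustered by the occupation-industry risk cell):
\[
\ln w_i = \beta_0 + \beta_1 p_i + \beta_2 q_i + \beta_3 WC_i + \gamma' X_i + u_i .
\]
% If p is measured as deaths per 100{,}000 workers per year and E is annual earnings,
% the implied value of a statistical life is
\[
VSL = \frac{\partial E}{\partial p} \approx \hat\beta_1\,\bar{E}\times 100{,}000 .
\]
```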

175 citations


Journal ArticleDOI
TL;DR: The impact of trade on the skilled-unskilled wage gap has attracted substantial research interest in the developed world, primarily due to the observed pattern of increasing inequality over the past decade; this article turns to the Southern side of that debate with a simple model of fragmentation.
Abstract: I. INTRODUCTION The impact of trade on the skilled-unskilled wage gap has attracted a lot of research interest in the developed world, primarily due to the observed pattern of increasing inequality over the past decade. The importance of two competing candidates, namely, trade and technology, responsible for such a phenomenon has been highlighted through numerous theoretical and empirical writings on this issue. Among them several papers have tried to demonstrate that a more open trade regime in the developed countries (North) has led to the relative decline of the unskilled wage and/or employment via the standard Stolper-Samuelson effect. A representative sample of this growing literature is hard to construct. Interested readers may look at Berman et al. (1994), Leamer (1995), and Jones and Engerman (1996) for a general idea about the ongoing debate. Although a huge body of literature has been developed looking at the consequences of a liberalized trade regime for the labor force in the North, the mirror image of the event, that is, the Southern experience, has been somewhat neglected. A standard theoretical presumption would be that the South, being an exporter of unskilled labor-intensive products to the North, must experience a decline in the degree of wage inequality in a liberalized regime of international trade. The empirical literature on the consequences of liberal trade policies of the North on the Southern wage rate is not as extensive as its counterpart for the North. However, some systematic studies have been conducted for East Asia and Latin America and exhibit conflicting patterns. The most notable work in this area is due to Donald Robbins, who in a series of papers (Robbins 1994a; 1994b; 1995a; 1995b; 1996a; 1996b; Robbins and Zveglich 1995) has demonstrated that although inequality has been brought down to some extent in East Asia, Latin America in general has experienced an increasing wage gap between the skilled and the unskilled following a more open trade and investment regime. Wood (1997) eloquently summarizes the empirical findings and criticizes the conventional wisdom associated with the Stolper-Samuelson result, which predicts gradual eradication of wage disparity in the South. At a theoretical level, hardly any attempt has been made to "model" the Southern response to a changed trade and investment environment other than the antiquated application of the two-by-two Stolper-Samuelson result. Except for an elegant piece by Feenstra and Hanson (1995), there has been a dearth of analyses that specifically incorporate the structural features of the developing countries, such as pattern of trade, characteristics of labor markets, structure of production, nature of capital mobility, and so on. In this entire debate on trade and wage inequality the two-by-two, Heckscher-Ohlin, or Stolper-Samuelson arguments are always taken at face value. The basic idea that commodity price movements have predictable consequences for factor-price movements can be utilized in a more complex description of reality, and the celebrated Stolper-Samuelson-type arguments could be used to derive many interesting results. One purpose of this article is to pursue this line of argument. It is quite possible that the Southern example does not contradict the conventional wisdom, as has been claimed in Wood (1997), but rather points toward a naive application of a standard theorem in contexts that do not properly specify the salient structural features of an economy.
Recently, several authors, such as Jones and Kierzkowski (1998), Deardorff (1998), and Harris (1998), have analyzed the issue of fragmentation in world trade, whereby different countries increasingly specialize in different fragments of production activities. Sharp declines in transportation and communication costs make it possible for the production process to be fragmented and traded across the globe. This article builds a simple model of fragmentation in which a market opens for trading a specific intermediate good. …
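For reference, the price-to-factor-price logic the article exploits can be sketched with the textbook two-sector zero-profit conditions; this is a generic illustration, not the article's own multi-sector structure.

```latex
% Zero-profit conditions in proportional changes (hats), with cost shares \theta:
\[
\theta_{s1}\,\hat{w}_s + \theta_{u1}\,\hat{w}_u = \hat{p}_1, \qquad
\theta_{s2}\,\hat{w}_s + \theta_{u2}\,\hat{w}_u = \hat{p}_2 .
\]
% If sector 1 is skill-intensive, a rise in p_1/p_2 raises the skilled wage w_s more than
% proportionally and lowers the unskilled wage w_u (the Stolper-Samuelson magnification
% effect); richer production structures can overturn the naive prediction for the South.
```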

156 citations


Journal ArticleDOI
TL;DR: In this paper, the author evaluates the long-term economic implications of raising the CAFE standard by 3.0 miles per gallon (MPG) above current levels, under alternative assumptions about whether the current standard is binding.
Abstract: I. INTRODUCTION AND BACKGROUND In 1975 the U.S. government enacted legislation regulating the fuel efficiency of new motor vehicles. The apparent objective of this law is to reduce American dependence on foreign oil. After large increases in the price of petroleum in the late 1990s, and with continued conflict in the Middle East, corporate average fuel economy (CAFE) standards once again became a topic of interest. A number of proposals for changing the CAFE standards were discussed in Congress in early 2002, culminating in a defeat in the Senate of an amendment that would have required a 50% increase in the relevant CAFE standards. In place of that increase, the Senate voted to require the executive branch to examine the impact of further increases in the CAFE standard. This work evaluates the long-term economic implications of raising the standard by 3.0 miles per gallon (MPG) above current levels. In industry parlance, this approach is sometimes referred to as "technology forcing." I choose 3.0 MPG because it reflects the focus of a May 2001 report by the vice president's task force on energy policy and because it reflects several legislative proposals in Congress. (1) The long term refers to a length of time such that manufacturers can adjust vehicle technologies and powertrain designs to reduce the amount of fuel required to move a given amount of mass or to achieve a given amount of performance or acceleration per gallon of fuel consumed. Previous work on CAFE standards, such as Kleit (1990) and Thorpe (1997), focused on short-term responses to higher CAFE standards, where technology forcing was not an option for manufacturers. The analysis is conducted under two different scenarios. The first scenario is that CAFE standards are not binding in the current marketplace. The second scenario takes account of the current impact of CAFE standards and then analyzes the costs and benefits of increasing the standards. The costs of CAFE standards are broken down into two areas: the changes in consumer and producer surplus, and the increase in externalities caused by the increased driving that higher CAFE standards induce. The plan of this article is as follows. Section II reviews the history of CAFE standards and briefly discusses the rationale for the regulation. Section III develops a model in which the current CAFE standard is assumed to be nonbinding. Section IV provides estimates of the impacts for a long-term 3.0 MPG CAFE increase under the assumption that the current standard is not binding. Section V then revises the model to take into account the arguably more realistic assumption that the existing CAFE standard was in fact binding. It then reports estimates for a long-term 3.0 MPG increase. Section VI provides a brief cost-benefit analysis of CAFE increases, and section VII provides a summary and conclusion. II. BACKGROUND ON AUTOMOBILE FUEL ECONOMY STANDARDS A Brief History of the CAFE Program The CAFE program, as enacted in 1975, called for all manufacturers selling more than 10,000 autos per year in the United States to reach the mandated CAFE levels. CAFE levels rose from 19.0 MPG in 1978 to 27.5 MPG in 1985 and later years. A manufacturer's domestic and foreign cars are placed in separate CAFE categories, based on the domestic content of the vehicle. If a car has over 75% American content, it is considered domestic and placed in the domestic pool. Otherwise, it is placed in the foreign car pool (see Kleit 1990 for a discussion).
Light trucks (pickup trucks, sport-utility vehicles [SUVs], and minivans) were placed in a different CAFE pool than cars. When CAFE standards were originally passed, these vehicles represented a small fraction of the relevant market. By 2001, however, such vehicles made up approximately one-half of the sales of personal vehicles. In 2001, light trucks were required to reach 20.7 MPG. (There is no domestic and foreign division in the CAFE regulation for light trucks. …
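A small back-of-the-envelope illustration of the "increased driving" externality channel counted among the costs; the gasoline price and rebound logic here are assumptions for arithmetic only.

```latex
% Per-mile fuel cost at gasoline price P and fuel economy MPG:
\[
c = P / MPG .
\]
% Example with P = \$1.50 per gallon: at 27.5 MPG, c \approx 5.5 cents per mile;
% at 30.5 MPG (a 3.0 MPG increase), c \approx 4.9 cents per mile. The roughly 10\%
% drop in the per-mile cost of driving induces additional miles traveled (the rebound
% effect), which raises congestion and accident externalities even as fuel use per mile falls.
```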

118 citations


Journal ArticleDOI
TL;DR: Using more complete data for one segment of the pharmaceutical industry (anti-infectives), this article finds that the relationship between pharmaceutical prices and the number of sellers is more like that found in other industries; previous research had shown generic entry to have a mixed impact, with generic prices falling rapidly upon entry whereas branded prices tend to increase or decrease only slightly.
Abstract: A fundamental question in industrial organization regards the relationship between price and the number of sellers. This relationship has been particularly important in the pharmaceutical industry where legislative changes were specifically designed to foster competition. Previous research on the pharmaceutical industry has shown generic entry has a mixed impact; generic prices fall rapidly with generic entry, whereas branded prices tend to increase or decrease only slightly. Using more complete data, focused on one segment of the pharmaceutical industry—anti-infectives—we find that the relationship between pharmaceutical prices and the number of sellers is more like that found in other industries. (JEL L11, L65, D4)

109 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a dynamic model of an agent's decision to purchase or sell a good under the realistic conditions of uncertainty, irreversibility, and learning over time.
Abstract: Hicksian welfare theory is static in nature, but many decisions are made in a dynamic environment. We present a dynamic model of an agent's decision to purchase or sell a good under the realistic conditions of uncertainty, irreversibility, and learning over time. Her willingness to pay (WTP) contains both the intrinsic value of the good, as in Hicksian theory, and a commitment cost associated with delaying to obtain more information. The Hicksian equivalence between WTP/willingness to accept (WTA) and compensating and equivalent variations no longer holds. A divergence between WTP and WTA may arise, and observed WTP values are not always appropriate for welfare analysis. (JEL D60, D83)
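A minimal two-period sketch of the commitment-cost idea, in assumed notation rather than the authors' model: purchasing today is irreversible and forgoes the option of waiting for more information.

```latex
% V = uncertain intrinsic (Hicksian) value; \delta = discount factor; p_2 = next-period price.
% Value of committing now:   \mathbb{E}[V] - WTP
% Value of waiting:          \delta\,\mathbb{E}[\max(V - p_2,\,0)]
% The dynamic WTP equates the two:
\[
WTP = \mathbb{E}[V] \;-\; \underbrace{\delta\,\mathbb{E}\bigl[\max(V - p_2,\,0)\bigr]}_{\text{commitment cost}} ,
\]
% so observed WTP departs from the static Hicksian measure whenever delay has option value,
% and WTP and WTA can diverge even when income effects are small.
```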

91 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a simple model of entry mode choice and evaluate its main testable implication using data on foreign investors in Eastern European countries and the successor states of the former Soviet Union.
Abstract: How does the preferred entry mode of foreign investors depend on their technological capability relative to that of their rivals? The authors develop a simple model of entry mode choice and evaluate its main testable implication using data on foreign investors in Eastern European countries and the successor states of the former Soviet Union. The model considers competition between two asymmetric foreign investors and captures the following tradeoffs: while a joint venture helps a foreign investor secure a better position in the product market compared with its rival, it also requires that profits be shared with the local partner. The model predicts that the efficient foreign investor is less likely to choose a joint venture and more likely to enter directly relative to the inefficient investor. The authors' empirical analysis supports this prediction: foreign investors with more sophisticated technologies and marketing skills (relative to other firms in their industry) tend to prefer direct entry to joint ventures. This empirical finding is robust to controlling for host country-specific effects and other commonly cited determinants of entry mode.
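The tradeoff the model captures can be written as a simple payoff comparison; the notation and sharing rule below are assumptions for illustration.

```latex
% \pi^{D} = product-market profit from direct entry; \pi^{JV} > \pi^{D} = profit when a
% local joint-venture partner improves the investor's position; s = partner's profit share.
% Direct entry is chosen when
\[
\pi^{D} \;>\; (1 - s)\,\pi^{JV} .
\]
% The model's testable implication is that this condition is more easily met by the
% technologically efficient investor, so efficient firms enter directly while less
% efficient rivals accept the joint venture.
```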

82 citations


Journal ArticleDOI
TL;DR: This article investigated the relationship between the US dollar exchange rate and its fundamentals across different exchange rate regimes using data going back to the late 1800s or early 1900s for six industrialized countries and found that the relative importance of exchange rates and fundamentals in restoring the long-run equilibrium level implied by the exchange rate-monetary fundamentals model varies significantly over time and is affected by the nominal exchange rate regime in operation.
Abstract: We investigate the dynamic relationship between the US dollar exchange rate and its fundamentals across different exchange rate regimes using data going back to the late 1800s or early 1900s for six industrialized countries. For these countries there is evidence of a long-run relation between the nominal exchange rate and monetary fundamentals consistent with conventional theories of exchange rate determination. We employ a Markov-switching vector equilibrium correction model that allows for regime shifts in the entire set of parameters and the variance-covariance matrix. Our results suggest that the relative importance of exchange rates and fundamentals in restoring the long-run equilibrium level implied by the exchange rate-monetary fundamentals model varies significantly over time and is affected by the nominal exchange rate regime in operation.
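Schematically, a Markov-switching vector equilibrium correction model of the kind described takes the following form; the lag length and notation are illustrative.

```latex
% s_t \in \{1,\dots,M\} is an unobserved regime following a first-order Markov chain.
\[
\Delta y_t = \nu(s_t) + \alpha(s_t)\,\beta' y_{t-1}
           + \sum_{k=1}^{p-1} \Gamma_k(s_t)\,\Delta y_{t-k} + u_t,
\qquad u_t \sim N\bigl(0,\ \Sigma(s_t)\bigr),
\]
% where y_t stacks the nominal exchange rate and the monetary fundamentals,
% \beta' y_{t-1} is the long-run (cointegrating) deviation, and all parameters plus the
% covariance matrix switch with the regime, so the speed of adjustment \alpha(s_t)
% can differ across exchange rate regimes.
```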

75 citations


Journal ArticleDOI
TL;DR: In this article, the authors test for convergence of political freedom using Freedom House's (2002) indices of political rights and civil liberties in 136 countries from 1972 to 2001 and find that the level of freedom is significantly related to the legal system, education, economic freedom, and natural resources.
Abstract: This article tests for convergence of freedom using Freedom House's (2002) indices of political rights and civil liberties in 136 countries from 1972 to 2001. Time-series tests, using structural breaks, are employed to test for stochastic and β-convergence. Cross-section tests are performed to examine the impact of legal systems, education, natural resources, economic freedom, and other variables. We find that political freedoms are converging for one-half of the countries. Additionally, we find that the level of freedom is significantly related to the legal system, education, economic freedom, and natural resources.
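As a rough illustration of the two convergence concepts tested; the specific regressions below are generic forms assumed for exposition.

```latex
% Cross-section \beta-convergence: countries starting with less freedom gain more,
\[
\Delta F_{i} = \alpha + \beta\,F_{i,1972} + \gamma' X_i + \varepsilon_i, \qquad \beta < 0 .
\]
% Stochastic convergence: the gap to the cross-country average, d_{i,t} = F_{i,t} - \bar F_t,
% is stationary around a (possibly broken) deterministic trend,
\[
d_{i,t} = \mu_i + \delta_i t + \theta_i DU_{i,t} + \rho_i\, d_{i,t-1} + e_{i,t}, \qquad |\rho_i| < 1 ,
\]
% where DU_{i,t} is a structural-break dummy.
```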

64 citations


Journal ArticleDOI
TL;DR: In this article, the authors developed a quantitative general equilibrium model to assess the growth effects of adopting a flat tax plan similar to the one proposed by Hall and Rabushka (1995).
Abstract: I. INTRODUCTION Recently, the debate concerning fundamental U.S. tax reform has led to a number of proposals that involve a shift toward a consumption-based system. (1) One such proposal is the so-called flat tax of Hall and Rabushka (1995). The flat tax would apply a single tax rate to all labor income above a given threshold and to all capital income after fully expensing investment expenditures. Hall and Rabushka (1995) argue that the adoption of their proposal would provide an enormous boost to the U.S. economy by dramatically improving incentives to engage in productive activities and would save taxpayers hundreds of billions of dollars in compliance and administration costs. (2) In this article, we develop a quantitative general equilibrium model to assess the growth effects of adopting a flat tax plan similar to the one proposed by Hall and Rabushka (1995). The model captures many of the features of the current U.S. tax code, such as graduated personal tax rates, a standard personal deduction, separate tax rates applied to personal and business income, double taxation of business income, and differential tax treatment of physical and human capital. Under appropriate parameter settings, the model can exhibit either endogenous or exogenous long-run growth. Our choice of functional forms facilitates a closed-form solution to the model. This allows us to characterize explicitly the economy's transition path following the reform. A central issue in the debate over fundamental tax reform is the effect that such a reform would have on economic growth. The present analysis builds on the work of Stokey and Rebelo (1995), who use an endogenous growth framework to identify the key model features and parameters that are important for determining the quantitative impacts of distortionary taxes on long-run growth. (3) Our study differs from theirs and the bulk of the dynamic tax literature in one fundamental respect. Here we evaluate the growth effects of shifting from a graduated-rate tax system to a flat-rate system. Stokey and Rebelo (1995) consider only flat-rate systems in which the marginal tax rate is equal to the average tax rate. We approximate the graduated-rate tax system in the U.S. economy by an empirical tax rate function that allows the personal tax rate to depend positively on household taxable income. In equilibrium, household decisions are influenced by both the level and slope of the tax schedule. To implement the flat tax reform, we shift the parameters of the tax rate function to flatten the marginal tax schedule while maintaining revenue neutrality. Our methodology treats the Hall-Rabushka proposal as one that in effect moves a representative household from one tax rate schedule to another. We view this setup as a reasonable approximation to gauge the growth effects of tax reform at the macro level. A more elaborate setup would of course allow for household heterogeneity at a given point in time. In this regard, we note that Caucutt et al. (2003) have recently examined the growth effects of graduated-rate taxes in a two-period overlapping generations model that includes both skilled and unskilled agents. In their model, a flatter tax schedule increases the fraction of skilled agents in the economy. The share of total output devoted to education rises accordingly and the economy's long-run growth rate is observed to increase.
The quantitative implications of their model are difficult to compare to ours, however, because their two-period framework implies a very long time horizon between household decisions (about 30 years). Moreover, their model abstracts from many of the features of the U.S. tax code that we include here. In addition to examining the consequences of flattening the marginal tax schedule, we investigate the growth effects attributable to other parts of the flat tax proposal, such as allowing full investment expensing, eliminating the double taxation of business income, and increasing the standard personal deduction. …
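One way to see the experiment is through a parametric average tax rate function; the power form below is an assumption chosen for illustration, and the article's own function may differ.

```latex
% Average personal tax rate rising with taxable income y relative to mean income \bar y:
\[
\tau(y) = \eta\,(y/\bar{y})^{\phi}, \qquad \phi > 0 \ \text{(graduated)},
\]
% so total taxes are T(y) = \tau(y)\,y and the marginal rate exceeds the average rate:
\[
T'(y) = (1+\phi)\,\tau(y) > \tau(y) .
\]
% The flat tax experiment sets \phi = 0 and adjusts \eta (together with the standard
% deduction) to hold revenue constant, flattening the marginal schedule that household
% labor supply and saving decisions respond to.
```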

62 citations


Journal ArticleDOI
TL;DR: In this article, the authors present new evidence on the ongoing solvency debate, utilizing an empirical framework that allows them to test whether there have been threshold effects in the U.S. deficit.
Abstract: I. INTRODUCTION There has been an ongoing debate on whether the U.S. fiscal policy is sustainable in the long run. In addressing this issue, a number of studies have examined whether the U.S. public finances are compatible with the government's intertemporal solvency constraint (Hamilton and Flavin 1986; Trehan and Walsh 1988; Hakkio and Rush 1991; among others). The requirement of budget processes to be sustainable implies effectively that Ponzi games are ruled out as a viable option of government finance. In other words, further new borrowing cannot be used indefinitely as a method of financing interest payments on existing debt. Therefore, the solvency constraint requires that any changes in taxes and government spending be followed by adjustments in future taxation and/or spending that equal the original change in present value. This solvency constraint imposes testable restrictions on the time-series properties of key fiscal aggregates. In this article we present new evidence on this ongoing solvency debate utilizing an empirical framework that allows us to test whether there have been threshold effects in the U.S. deficit. Unlike existing empirical studies that focus only on the identification of regime shifts in U.S. fiscal policy, we also offer an explanation and provide evidence as to why these regime shifts might occur. Specifically, we argue that for fiscal authorities to be able to meet the solvency constraint, they would intervene through deficit cuts only when the government budget deficit becomes very large. Therefore, we expect a mean reverting dynamic behavior for deficits only when they are above some threshold value. We test for this hypothesis using the threshold unit root empirical methodology recently developed by Caner and Hansen (2001). The article is organized as follows. Section II reviews the empirical evidence. Section III describes the empirical methodology, and section IV presents the empirical results, which are analyzed and interpreted. Section V summarizes and concludes. II. INTERTEMPORAL SOLVENCY CONDITION: EMPIRICAL UNDERPINNINGS One strand of the empirical literature focuses on the stationarity of the stock of public debt. Uctum and Wickens (1997) deal with stochastic interest rates and primary surpluses that are allowed to be either exogenous or endogenous to the stock of public debt. They show that a necessary and sufficient condition for the intertemporal solvency condition to hold is a stationary discounted stock of public debt. Another strand of the empirical literature has concentrated on the dynamics of the undiscounted inclusive-of-interest deficit, or alternatively on the long-run relationship between government spending and tax revenues. Trehan and Walsh (1988) show that government spending, inclusive of interest payments, and government revenues should be cointegrated with a cointegrating vector equal to [-1, 1]'. They present evidence that supports this restriction. Hakkio and Rush (1991) point out that a necessary condition for intertemporal solvency is the existence of cointegration between government expenditure, inclusive of interest payments, and government revenues, with a cointegrating vector equal to [1, -β]', and 0 < β ≤ 1. Quintos (1995) expands on Hakkio and Rush (1991) and shows that cointegration is not necessary for the intertemporal balance condition to hold. Specifically, she distinguishes between a weak and a strong sustainability condition.
The former implies that government solvency holds, but the undiscounted debt process is exploding at a rate that is less than the growth rate of the economy. Although this case is consistent with deficit sustainability, it is inconsistent with the ability of the government to market its debt in the long run, especially if the focus is on the ratio of debt to gross national product (GNP) or on debt per capita (see also Hakkio and Rush 1991). In contrast, strong sustainability implies that the undiscounted public debt is finite in the long run. …
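The threshold unit root setup of Caner and Hansen (2001) applied to a deficit series can be sketched as follows; the choice of threshold variable is an assumption for illustration.

```latex
% Two-regime threshold autoregression for the deficit d_t, with threshold variable
% Z_{t-1} (e.g., a lagged change in the deficit) and threshold \lambda:
\[
\Delta d_t = \theta_1' x_{t-1}\,\mathbf{1}\{Z_{t-1} < \lambda\}
           + \theta_2' x_{t-1}\,\mathbf{1}\{Z_{t-1} \ge \lambda\} + e_t,
\qquad x_{t-1} = (d_{t-1},\,1,\,\Delta d_{t-1},\dots,\Delta d_{t-k})' .
\]
% The solvency story corresponds to a unit root (no adjustment) being admissible in the
% low-deficit regime, with mean reversion appearing only once the deficit crosses the threshold.
```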

Journal ArticleDOI
TL;DR: In this article, international and intersectoral R&D spillover effects on the total factor productivity growth of manufacturing and nonmanufacturing sectors are investigated, based on a pooled time-series data set of 14 OECD economies and 3 East Asian economies.
Abstract: This study empirically explores international and intersectoral R&D spillover effects on the total factor productivity growth of manufacturing and nonmanufacturing sectors based on a pooled time-series data set of 14 OECD economies and 3 East Asian economies--Korea, Singapore, and Taiwan. The study finds that foreign manufacturing R&D has a strong influence on domestic productivity growth in both sectors and that domestic manufacturing R&D has a substantial intersectoral R&D spillover effect on domestic nonmanufacturing productivity growth. The social rates of return to manufacturing R&D are estimated to be two to six times greater than the private rates of return. (JEL D24, O33, F10) I. INTRODUCTION Research and development (R&D) activity, which provides a key to success in productivity competition, has been performed disproportionately across sectors and across economies. This observation may reflect the nature of the production technology of each sector and the needs of each country at its respective developmental stage. As discussed in the R&D spillover literature, if there are linkages through the use of technology-embodied intermediate goods or through other transmission mechanisms across sectors and across economies, R&D investments in one sector of a country could bring about productivity gains not only in the performing sector but also in other sectors and in other countries. The R&D activities in the Organisation for Economic Co-operation and Development (OECD) and East Asian economies have been heavily concentrated in the manufacturing sectors in the past two decades, even though the share of the manufacturing sector is relatively small in comparison with the rest of the aggregate economy in these countries. (1) One intriguing issue is to identify the magnitude of influence of manufacturing R&D on the productivity of the remaining nonmanufacturing sector as well as its influence on the productivity of other countries. This study addresses this and other related issues in a two-sector empirical model of international and intersectoral R&D spillovers on the total factor productivity (TFP) of manufacturing and nonmanufacturing sectors based on the pooled time-series data set of the 14 OECD economies and three East Asian newly industrialized economies (NIEs) over the period 1980 to 1995. (2) The two different groups of countries in the sample--the OECD and the East Asian economies--increase the variation in the cross-section. Given the recent success and substantial increase in the indigenous R&D investments in the manufacturing sectors of the East Asian economies, this study investigates further whether international R&D spillovers arise from these countries as well. Most empirical studies on international R&D spillovers have focused on R&D effects across business sectors of OECD economies, including the pioneering work of Coe and Helpman (1995). (3) Few empirical studies exist that allow for the simultaneous presence of international and intersectoral R&D spillovers, and they all focus on spillovers across disaggregate manufacturing sectors within and among OECD countries. Keller (2002) estimates international and intersectoral R&D spillovers using industry-level data of 13 manufacturing industries in 8 major countries, where his findings show significant presence of R&D spillovers, both domestically and internationally.
(4) Using a panel cointegration estimation procedure and using transaction intensity and patent space to construct R&D spillover variables, Frantzen (2002) confirms the findings of Keller (2002). This study complements the literature on international and intersectoral R&D spillover effects by considering R&D spillovers between the manufacturing and nonmanufacturing sectors, both within and across countries, an issue that has not been addressed in previous studies. …
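Schematically, the two-sector spillover regressions described here relate sectoral TFP growth to domestic and foreign R&D stocks; the exact variable construction is the study's own, and the forms below are assumed for exposition.

```latex
% Manufacturing (m) and nonmanufacturing (n) TFP growth in country i:
\[
\Delta \ln TFP^{m}_{i,t} = \alpha_m + \beta_1 \ln S^{d,m}_{i,t} + \beta_2 \ln S^{f,m}_{i,t} + \varepsilon^{m}_{i,t},
\]
\[
\Delta \ln TFP^{n}_{i,t} = \alpha_n + \gamma_1 \ln S^{d,m}_{i,t} + \gamma_2 \ln S^{f,m}_{i,t} + \gamma_3 \ln S^{d,n}_{i,t} + \varepsilon^{n}_{i,t},
\]
% where S^{d,m} and S^{f,m} are domestic and foreign manufacturing R&D stocks and S^{d,n}
% the domestic nonmanufacturing stock; \gamma_1 captures the intersectoral spillover and
% \beta_2, \gamma_2 the international spillovers.
```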

Journal ArticleDOI
TL;DR: In this paper, the authors consider the situation of a bidder choosing between two auctions that differ only in regard to whether the auction is being conducted according to an ascending or sealed bid format.
Abstract: I. INTRODUCTION Auctions have become a pervasive method of exchange in the online world, as each day thousands of auctions take place online and this trade volume totals billions of dollars worth of goods per year. (1) This large volume of auction transactions implies the existence of a large number of sellers competing for buyers. The obvious implication is that the competition among the sellers for the pool of potential buyers can be fierce, and any competitive edge a seller can find could be important. One such competitive edge a seller might exploit is using an auction design that attracts bidders away from their competitors. When designing a real auction or modeling a theoretical one, the entry decision of prospective bidders is rarely considered. Most auction analysis is performed assuming that a certain number of bidders will participate for certain or perhaps that the number of bidders is unknown and randomly determined. It should be clear, however, that the most crucial part of a successful auction is encouraging as many bidders as possible to participate. In general this should be expected to have a positive effect on revenue (at least in noncommon value environments), and in certain types of auctions it may help combat the possibility of bidder collusion. Because there are typically competing auctions available for similar goods or even outside options that bidders can pursue when auctions are for unique goods, it is important to understand how the aspects of an auction format can affect the entry decision of a bidder. Consider a bidder who is faced with the choice of entering one of two auctions for similar or even identical objects. How does this bidder make the decision of which auction to enter? The obvious answer is that the bidder will enter into the auction that maximizes his or her expected utility so long as that expected utility is greater than some reservation value. The real question, then, is how are these expected utilities constructed? Profit from participating in the auction is an obvious argument. There are also a number of environmental considerations that might affect this decision that would be difficult to account for precisely, such as the reputation and trustworthiness of the auctioneer, quality of the advertisement for the auction, and things of this nature. It is also possible that the format of the auction itself can have an impact on the preferences of the bidder. This latter point will be the issue of this study. The particular focus will be looking at bidder preferences between the two most common standard auction formats used in the field: the sealed bid first price auction (hereafter abbreviated as the sealed bid auction) and the ascending or English auction. The other reason we are interested in comparing these two auction formats rather than the ascending and second price or first price and descending is that it seems reasonable to expect bidders to have preferences between the two due to the strategic differences between them. Such differences lead to substantial differences in terms of the difficulty of deciding how to bid and also in the possibility that an outcome leads to a bidder experiencing some form of regret. Regret may occur in a first price auction when a bidder loses to a bid that is below his value, as he may think that if only he had bid higher, he could have won. Although such feelings of regret may be considered irrational, that does not preclude their existence.
Such a scenario should not reasonably occur in an ascending auction. If one considers the situation of a bidder choosing between two auctions that differ only in regard to whether the auction is being conducted according to an ascending or sealed bid format, it is not immediately obvious which would be the most preferred even assuming a standard symmetric independent private values environment. Were all bidders risk-neutral, then of course revenue equivalence would hold and the bidders would be indifferent. …
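A small simulation sketch of the risk-neutral benchmark invoked in the last sentence, under the standard symmetric independent private values setting with uniform values (an illustrative setup of my own, not the article's experiment): expected revenue and a bidder's expected surplus coincide across the sealed bid first price and ascending formats, so any entry preference must come from risk attitudes or regret rather than expected payoffs.

```python
import numpy as np

# Benchmark: n risk-neutral bidders, values iid Uniform(0, 1).
# First-price equilibrium bid: b(v) = v * (n - 1) / n.
# Ascending (English) auction: the winner pays the second-highest value.
rng = np.random.default_rng(0)
n, draws = 4, 200_000
values = rng.uniform(size=(draws, n))

fp_revenue = values.max(axis=1) * (n - 1) / n         # winner pays own bid
eng_revenue = np.sort(values, axis=1)[:, -2]           # winner pays 2nd-highest value
print("first-price revenue:", fp_revenue.mean())       # both ~ (n-1)/(n+1) = 0.60
print("English revenue:    ", eng_revenue.mean())

# Bidder 0's expected surplus is also the same in both formats (revenue equivalence).
wins = values[:, 0] == values.max(axis=1)
surplus_fp = np.where(wins, values[:, 0] / n, 0.0)             # v - b(v) = v/n when winning
surplus_eng = np.where(wins, values[:, 0] - eng_revenue, 0.0)  # v - 2nd-highest when winning
print("bidder 0 surplus, first-price:", surplus_fp.mean())     # both ~ 1/(n(n+1)) = 0.05
print("bidder 0 surplus, English:    ", surplus_eng.mean())
```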

Journal ArticleDOI
TL;DR: In this article, the authors showed that the full prices and full budgets specification is sufficient to satisfy the constraints on demand parameters that arise because choice is made subject to two binding constraints, regardless of the individual's position in the labor market.
Abstract: I. INTRODUCTION At least since the work of Becker (1965), it has been recognized widely that time may play an important role in consumer demand. One of the areas where this may be most significant empirically is in the valuation of natural resource amenities through their associated recreation demands. A principal reason is that recreation is a time-intensive commodity, so that the value of the time spent in recreation would be expected to play a large role in its overall value. The potential bias to consumer's surplus estimates of the net economic value of recreation from excluding time "prices" from the demand model has been recognized from the outset, for example by Knetsch (1963) and Clawson (1959). To incorporate time into recreation demand models, researchers have generally adopted the practice suggested by McConnell (1975) of defining the "full" price of recreation as the money cost of a trip plus its monetized time cost, though the use of "full" budgets also called for by the Becker approach is less widespread. Bockstael et al. (1987) pointed out quite clearly that both full budgets and full prices are required in the demand function if the recreationist is jointly choosing labor supply with recreation at an exogenous marginal wage, but that the structure of demand was unclear if the individual was not making such marginal labor supply choices (i.e., if he or she was working fixed hours). Subsequently, Larson and Shaikh (2001) showed that the full prices--full budgets specification is sufficient to satisfy the constraints on demand parameters that arise because choice is made subject to two binding constraints, regardless of the individual's position in the labor market. They also pointed out that models that use full prices but only money income (in effect ignoring the role of time as a resource constraint) cannot be consistent with the requirements of choice subject to two constraints. Although the outlines of how time should enter the structure of demand are becoming clear, a major unresolved issue is how to determine its value in practice. Several studies have followed the basic logic of Becker's early work, assuming that the opportunity cost of recreation time (i.e., the value of other time forgone in favor of recreation) is an exogenous parameter, such as the average wage rate or, more commonly, some fraction thereof. This fraction is either chosen arbitrarily or estimated as part of the recreation demand model, as in Cesario (1976), McConnell and Strand (1981), and Smith et al. (1983). Such assumptions may be erroneous for a variety of reasons. Assuming the average wage is the appropriate opportunity cost of time presumes that the individual faces no constraints on hours worked, derives no utility or disutility from work, and has a linear wage function, as has been noted by Chiswick (1967), Bockstael et al. (1987), and Smith et al. (1983). This is unlikely to be true for many people. Assuming some constant fraction of the wage is the opportunity cost of time, especially an arbitrarily chosen fraction, also seems likely to be incorrect for most people. For all of these reasons, an individual's average wage does not necessarily reveal anything about the shadow value of discretionary leisure time, either as an upper or lower bound. 
In a recent advance, Feather and Shaw (2000) adapted the Heckman (1974) labor supply model to include information supplied by recreationists about their labor supply status, in particular whether they felt under- or over-employed relative to their desired number of hours. This information is used to identify subgroups in the data for which the shadow value of leisure time is less than the wage or more than the wage, which helps in the estimation of both a wage function and a shadow value of leisure time function. This approach produces individual-specific estimates of the opportunity cost of leisure time, based on their demographics and labor market decisions, which are used in a subsequent recreation demand analysis. …
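Schematically, the "full price, full budget" demand structure discussed above can be written as follows; the notation is assumed for illustration.

```latex
% Trip demand with money cost c and time cost t per trip, shadow value of time \rho,
% money income M, and total discretionary time T:
\[
x = f\bigl(\underbrace{c + \rho\,t}_{\text{full price}},\ \underbrace{M + \rho\,T}_{\text{full income}}\bigr).
\]
% With fixed work hours, \rho need not equal the wage; the Feather and Shaw (2000)
% approach estimates \rho from reported over-/under-employment status rather than
% imposing an arbitrary fraction of the wage.
```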

Journal ArticleDOI
TL;DR: In this article, the authors developed a model in which the equilibrium balance between the two ways of acquiring desired goods is determined by the values of various parameters governing the productive and conflictual processes involved.
Abstract: Joint exchange and raiding can emerge in a world of mutual raiding when the appropriated production is less valuable to the appropriator than to the defender and the defense is not too inferior to attack. The amounts of resources allocated to production and raiding and the amounts of goods exchanged reciprocally are determined endogenously. The model reduces to pure exchange and pure raiding as special cases. Pure exchange emerges when the usability of appropriation is sufficiently low. Pure raiding emerges if the defense is sufficiently inferior to attack. The results of the model are intermediate between the results of the two extreme cases. (JEL C6, C72, D51, D72, D74, F10) I. INTRODUCTION In the American Civil War, each side to some extent looted the other. Still, some true reciprocal exchange between the two sides was conducted by blockade runners, whereby the South exported cotton and imported manufactured goods. Varying across time and space, pure market exchange may be supplemented by or grow out of such activities as war, piracy, (1) corruption, (2) extortion, (3) crime, (4) plundering, or theft, referred to as raiding or mutual raiding. Generally, agents (individuals, groups, companies, nations) can meet their consumption needs by exchange of goods in accordance with comparative advantage, or else by attempts at confiscating the other's production, resources, goods, or trade flows. Each route has been the subject of separate study. This article develops a model in which the equilibrium balance between the two ways of acquiring desired goods is determined by the values of various parameters governing the productive and conflictual processes involved. How can exchange, conducted voluntarily and with mutual advantage, emerge in a world where appropriation and defense through mutual raiding appear to be more natural means of survival? (5) Classical economics describes how agents exchange goods reciprocally, voluntarily, and nonviolently. (6) Political economy describes how agents appropriate and defend one productive resource, without reciprocal exchange. (7) A few authors attempt to integrate production, exchange, and appropriation. (8) This article lets two agents exchange goods reciprocally and raid each other's production mutually. Hausken (2003, 33-36) classifies the 78 possible models (in the exhaustive spirit of Rapoport and Guyer 1966) for how one or several of the five objects--production, resources, consumption goods, exports, and imports--can be set under attack. As is also the case for Rider (1999), Anderton et al. (1999), and Anderton (1999), this article utilizes ratio forms of the contest success function. (9) Reduction to a pure exchange model, presented in section II, or a pure raiding model, occurs by adjusting parameters. Also assuming raiding of production, Rider (1999) claims that pure exchange is impossible, (10) which is correct when the production is equally valuable to the appropriator and defender. Rider shows that in the absence of pure exchange, it could turn out--and, in fact, it inevitably turns out--that the advantages of diversified consumption achieved by mutual raiding exceed the wastage of fighting effort. If his result were universal, one would not observe pure exchange in the real world, which in fact we do. Accordingly, this article shows that Rider's (1999) result is not inevitable, even when accepting his assumption that each agent produces only one of the two desired goods.
We believe that the two characteristics "value of appropriation" and "superiority of defense over attack" (11) are realistic and help explain the empirics as well as reciprocal exchange in the real world. (12) Regarding the first characteristic, Grossman and Kim (1995, 1279) present the following argument, which runs contrary to most contest models, which let appropriated objects be equally valuable to the appropriator and the defender: "We also allow for the possibility that predation is destructive, by which we mean that in any appropriative interaction the predator gains less than the prey loses. …
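The two characteristics can be made concrete with a ratio-form contest success function of the kind the article says it uses; the exact parameterization below is an illustrative assumption.

```latex
% Agent 1 attacks with effort a; agent 2 defends its production with effort d.
% Share of agent 2's output appropriated:
\[
p(a,d) = \frac{a}{a + \theta d}, \qquad \theta \ge 0 \ \ \text{(superiority of defense over attack)},
\]
% and only a fraction \phi \in [0,1] of appropriated output is usable by the appropriator
% (the value of appropriation). In this language, pure exchange emerges when \phi is
% sufficiently low and pure raiding when \theta is sufficiently low.
```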

Journal ArticleDOI
TL;DR: In this article, the authors use the Survey of Consumer Finances (SCF) data set to study consumer use of credit cards and find that 74% of households hold at least one credit card.
Abstract: I. INTRODUCTION The credit card market has expanded rapidly over the past 20 years. In 1983, 65% of households held at least one card; by 1998, about 74% did so. The size of real balances on these cards more than tripled in this period. Analysts have proposed a number of explanations for the increase, but untangling the main causes is complicated by the unique nature of the credit card financial product. An individual who obtains a credit card has obtained the right to borrow a certain amount, called the limit, with no questions asked, under predetermined payback rules. Thus credit cards involve both assets and liabilities. The available limits are best thought of as assets, which can be used by a consumer to hedge against future income shortfalls or just to facilitate paying for goods and thereby reduce the need to carry cash. The asset and limit components of credit cards make them similar to financial instruments. For example, the asset component is essentially an option, a subject that has been studied extensively in the corporate finance literature. The seminal work is Black and Scholes (1973), who developed a method for valuing an option. Washam and Davis (1998) extend this method to the problem of valuing the liquidity of credit resources. This body of literature suggests that for some people, a credit card can be viewed as a source of liquid funds for business purposes. Moreover, once a credit card is used, it becomes a debt liability. In effect, these are loans that the user has taken on to cover an income shortfall or simply to make a purchase for which borrowing makes sense. Again, a corporate equivalent would be line-of-credit borrowing; see Myers (1989) or Miller et al. (1998) for an introduction. These similarities raise the question of what sort of sample one should analyze for a study of consumer use of credit cards. In all likelihood, many average consumers who use credit cards are actually business entrepreneurs who use the cards to finance small business. Considering that wealth is an important element of any study of credit cards, one faces the further problem that small business wealth is hard to measure accurately. Some research (Lindh and Ohlsson 1998) suggests that access to liquid credit, such as credit cards, can make the difference between becoming self-employed or not. Dunn and Holtz-Eakin (2000) argue that family financial capital can ease the transition to self-employment; although they do not discuss credit availability in their study, their results are strong evidence that credit cards would also ease that transition. Other studies that focus on the role of liquid credit and entrepreneurial activity include Meyer (1990), Blanchflower and Oswald (1998), Cox and Jappelli (1990), and Evans and Jovanovic (1989). Our focus here is not on the use of credit cards as a business financing tool; however, it is apparent from this literature that business finance certainly plays a powerful role in determining the use of credit cards by some owners. The research suggests that entrepreneurs would be more likely to be using credit cards for financing business rather than for consumer purposes. For this reason, it is important to separate the self-employed from the non-self-employed in our sample.
Looking specifically at consumers, the unusual nature of the credit card financial product raises interesting questions for empirical researchers who hope, as we do, to provide some estimates of the responsiveness of credit card demand to standard price and income effects. The object of this article is to raise these questions and then answer some of them, estimating regression models using data from the Survey of Consumer Finances (SCF). Specifically, we take two empirical approaches. First, we model credit card demand as a two-stage process, with a consumer obtaining limits in the first stage and then borrowing some fraction of those limits in the second. …
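The two-stage structure can be sketched as a pair of reduced-form equations; these are illustrative forms, not the authors' exact specification.

```latex
% Stage 1: the credit limit held, given income y, wealth w, and demographics z:
\[
L_i = \alpha_0 + \alpha_1 y_i + \alpha_2 w_i + \alpha_3' z_i + u_i .
\]
% Stage 2: balances as a fraction of the limit, given the interest rate r:
\[
B_i / L_i = \beta_0 + \beta_1 r_i + \beta_2 y_i + \beta_3' z_i + v_i ,
\]
% estimated on the non-self-employed SCF sample so that small-business borrowing does not
% contaminate the consumer price and income responses.
```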

Journal ArticleDOI
TL;DR: In this article, the authors present a spatio-temporal analysis of state-level tax competition in the context of the elimination of state EIG taxes and their replacement with the pick-up tax.
Abstract: Since 1976, more than 30 states have eliminated their "death" taxes and many others have reduced them. This unexplored case of interstate tax competition presents a unique opportunity to develop a new, more satisfying definition of competitor based on historical elderly migration patterns. Using data from 1967 onward, we outline the recent history of state death tax competition and present a spatial econometric analysis. Interstate tax competition is evident and grows stronger when using migration-based definitions of competitors. The article concludes with still more evidence of interstate tax competition: the recent movement by states to effectively revive their death taxes. (JEL H7, D7) I. INTRODUCTION The federal estate tax has received wide attention, especially because its elimination has been a centerpiece of President George W. Bush's tax proposals. (1) Largely overlooked, however, is the quiet revolution that has been taking place at the state level. Since 1976, 31 of the 48 contiguous states have repealed their "death" or estate, inheritance, and gift (EIG) taxes, instead relying only on the pick-up tax whereby states capture a portion of the federal estate tax liability but do not increase the overall liability of the estate. (2) Of the remaining 12 states that still have EIG taxes, 2 have enacted legislation that will eliminate them by 2005 and others are considering doing so. Beyond the outright elimination of EIG taxes, many states have also acted to reduce them in a variety of ways, for example by exempting certain beneficiaries, such as the spouse. This trend is noteworthy for several reasons. Foremost, it appears to us to be a prime example of intense interstate tax competition due to the growing size and political influence of the elderly population. There is additional political pressure because states may worry that high EIG taxes will drive the high-income elderly to move to bordering states or retirement havens. The stakes may be substantial. For example, Longino and Crown (1989) estimate that Florida had a net gain of $5 billion in income from the elderly migrants it received between 1985 and 1990, and Sastry (1992, 73) estimates that one new job is created for every 2.5 elderly migrants it receives. State EIG tax competition also provides us with a unique opportunity to explore alternative definitions of competitor states beyond simple geography because the movements of the tax base--the elderly--are fairly easy to track via historical migration data. Yet no research to our knowledge has explored interstate EIG tax competition. Furthermore, these widespread changes in state EIG taxes provide substantial cross-sectional and time-series variation that has been mostly overlooked by researchers interested in the behavioral effects of estate taxes. (3) State EIG tax policy is also still in flux. Current and proposed changes in the federal estate tax have substantial revenue consequences for states, effectively eliminating EIG tax revenues for those that rely solely on the pick-up tax. How will the states react to this change? At present, several states have enacted or are considering legislation that would effectively decouple their EIG taxes from the federal estate tax and so would preserve the revenue source. Indeed, as the federal estate tax is slowly eliminated, EIG taxes in states that continue to use them may be the only remaining taxes on bequests. Yet little is known about them. Our research seeks to fill this void.
Using state-level data from 1967 to the present, we first describe the brief history and geographical pattern of these widespread reductions in state EIG taxes. To our knowledge, ours is the first research to document this phenomenon in a systematic way. We begin with a chronology of the states that completely eliminated their EIG taxes (thereby choosing to rely instead only on the pick-up tax). However, there are many other ways that the states may reduce their EIG taxes, such as increasing exemptions or reducing tax rates. …
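A sketch of the spatial econometric specification this implies, with competitor weights built from elderly migration flows rather than simple contiguity; the exact weighting scheme is the authors' own, and the form below is illustrative.

```latex
% EIG tax policy of state i as a function of its competitors' policies:
\[
\tau_{i,t} = \rho \sum_{j \ne i} w_{ij}\,\tau_{j,t} + \beta' X_{i,t} + \mu_i + \varepsilon_{i,t},
\]
% where w_{ij} is the share of state i's elderly migration flow linking it to state j
% (from historical migration data) and \rho > 0 indicates interstate tax competition;
% geographic contiguity weights are the usual benchmark for comparison.
```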

Journal ArticleDOI
TL;DR: In this article, exit discrimination is defined as the involuntary termination of employment due to racial characteristics holding productivity constant, and the authors test for exit discrimination in the National Football League using a panel study on career length.
Abstract: Exit discrimination is defined as the involuntary termination of employment due to racial characteristics holding productivity constant. We test for exit discrimination in the National Football League (NFL) using a panel study on career length. Our analysis focuses on six positional groups: defensive backs, defensive linemen, linebackers, running backs, tight ends and wide receivers. In our analysis, in addition to race, we include performance variables to determine their importance in determining career length. Using both parametric and semi-parametric hazard models, we find no evidence of exit discrimination in the NFL.
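The kind of hazard specification such a career-length test relies on can be sketched generically as a proportional hazards form; the covariates and baseline hazard below are assumptions for exposition.

```latex
% Hazard of exiting the NFL after t seasons for player i:
\[
h_i(t) = h_0(t)\,\exp\bigl(\delta\,Black_i + \gamma' Performance_i + \lambda' Position_i\bigr).
\]
% Exit discrimination would show up as \delta > 0 (systematically shorter careers for
% black players) after conditioning on performance and position; the article reports no
% evidence of such an effect in either parametric or semi-parametric versions.
```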

Journal ArticleDOI
TL;DR: In this article, the authors compare Harrington's past-compliance targeting and Friesen's optimal targeting against random auditing and find a production possibility frontier between compliance and minimizing inspections: optimal targeting generates the lowest inspection rates as predicted, random auditing achieves the highest compliance, and past-compliance targeting is intermediate.
Abstract: Conditional audit rules are designed to achieve regulatory compliance with fewer inspections than required by random auditing. A regulator places individuals into audit pools that differ in probability of audit or severity of fine and specifies transition rules between pools. Future pool assignment is conditional on current audit results. We conduct an experiment to compare two specific schemes--Harrington's past-compliance targeting and Friesen's optimal targeting--against random auditing. We find a production possibility frontier between compliance and minimizing inspections. Optimal targeting generates the lowest inspection rates as predicted, but random auditing achieves the highest compliance. Past-compliance targeting is intermediate.
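A minimal sketch of what such a conditional audit rule looks like as a mechanism, plus the steady-state arithmetic for the inspection burden; the pool parameters and the assumed behavior (violate in the lenient pool, comply in the strict pool) are illustrative assumptions, not the experiment's design.

```python
AUDIT_PROB = {"lenient": 0.1, "strict": 0.7}   # pools differ in audit probability
FINE       = {"lenient": 0.0, "strict": 4.0}   # and/or in severity of fine

def next_pool(pool: str, audited: bool, compliant: bool) -> str:
    """Transition rule: future pool assignment is conditional on current audit results."""
    if audited and not compliant:
        return "strict"          # detected violators are targeted more heavily next period
    if audited and compliant and pool == "strict":
        return "lenient"         # firms found compliant in the strict pool are released
    return pool                  # otherwise stay in the current pool

print(next_pool("lenient", audited=True, compliant=False))   # -> "strict"

# Long-run inspection burden if firms violate in the lenient pool and comply in the
# strict pool: balancing the flows between pools gives the strict-pool share, and the
# overall audit rate is the pool-weighted average of the two audit probabilities.
pl, ps = AUDIT_PROB["lenient"], AUDIT_PROB["strict"]
share_strict = pl / (pl + ps)
print("share of firms in strict pool:", share_strict)                              # 0.125
print("overall inspection rate:      ", (1 - share_strict) * pl + share_strict * ps)  # 0.175
```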

Journal ArticleDOI
TL;DR: In this article, a new empirical framework is applied to allow for inferences of price transmission at three different time horizons: instantaneous, the short run, and the long run.
Abstract: I. INTRODUCTION Numerous studies have investigated market linkages and price transmission mechanisms in major international equity markets, employing the analytical framework of the vector autoregression (VAR) or its variant, the error-correction model (ECM). (1) Studies such as Von Furstenberg and Jeon (1989), Eun and Shim (1989), and Koch and Koch (1991) focus on the short-run dynamic pattern of price transmission; others like Taylor and Tonks (1989) and Francis and Leachman (1998) are primarily interested in the long-run pattern of price transmission. More recently, an increasing number of studies explore both long- and short-run patterns of price transmission. Included in this last set are the works of Malliaris and Urrutia (1992), Arshanapalli and Doukas (1993), Masih and Masih (2001), and Bessler and Yang (2003), among others. This study extends the examination of international price transmission to stock index futures markets. The article contributes to the existing literature in three aspects. First, a relatively new empirical framework is applied to allow for inferences of price transmission at three different time horizons: instantaneous, the short run, and the long run. Building on recent advances in statistical analysis of causal modeling using directed acyclic graphs (DAGs) as in Spirtes et al. (2000), Pearl (1995, 2000), and Swanson and Granger (1997), this study is able to explore the contemporaneous causal pattern underlying the correlations among market innovations. The existence of strong contemporaneous correlations among market innovations has been well documented in the United States and international stock markets by Agmon (1972), Eun and Shim (1989), Koch and Koch (1991), Hasbrouck (1995), and Bessler and Yang (2003). It is also well recognized by Agmon (1972, 849) and Eun and Shim (1989, 246) that contemporaneous correlations among market innovations reflect the phenomenon that new information in one market is transmitted to and shared by other markets in contemporaneous time, due to immediate responses to price changes between markets. However, more in-depth analysis on exactly how instantaneous price transmission among market innovations is conducted in international equity markets has not yet been well addressed in the existing literature. Although Bessler and Yang (2003) touch on the issue, the necessity of imposing constraints in the spirit of the block-recursive structure noted by Koch and Koch (1991) in the DAG analysis of VAR innovations is proposed and discussed thoroughly in this study. Second, innovation accounting analysis is more thoroughly explored in the study. Innovation accounting tools (i.e., impulse response analysis and forecast error variance decomposition) have been commonly used to summarize the dynamic pattern of price transmission among international financial markets. The importance of the factorization of innovations (i.e., VAR residuals) in yielding sound inference has been well acknowledged theoretically by Bernanke (1986), Sims (1986), and Swanson and Granger (1997). The application of the DAG technique, as discussed in Swanson and Granger (1997) and explained in the next section, is further key to innovation accounting analysis.
In this study, the instantaneous price transmission pattern between market innovations (as identified by the DAG analysis) provides a data-determined solution to the basic problem of orthogonalization of residuals from the ECM and thus is critical to impulse response analysis and forecast error variance decompositions. Swanson and Granger (1997) argue that, compared to the Choleski decomposition, the DAG-based structural decomposition is sensible but not subjective, because it allows for the properties exhibited by the data. Although several recent studies, such as those by Bessler and Yang (2003), Bessler et al. (2003), Haigh and Bessler (forthcoming), and Yang (2003), have used the DAG-based structural decomposition in a similar setting, this study is the first attempt to respond to the suggestion by Swanson and Granger (1997, 364) of investigating the empirical implications of DAG-based contemporaneous causal modeling. …
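The orthogonalization problem that the DAG-based decomposition addresses can be stated compactly in standard notation, not specific to this article's system.

```latex
% ECM/VAR innovations u_t are contemporaneously correlated: \mathbb{E}[u_t u_t'] = \Sigma.
% Innovation accounting requires orthogonal structural shocks \varepsilon_t with
\[
u_t = A\,\varepsilon_t, \qquad \mathbb{E}[\varepsilon_t \varepsilon_t'] = I ,
\]
% so impulse responses and variance decompositions depend on the chosen A. A Choleski
% factor imposes a recursive ordering chosen by the researcher; the DAG approach instead
% selects the zero restrictions in A from the pattern of partial correlations in \hat\Sigma,
% i.e., from the contemporaneous causal structure suggested by the data.
```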

Journal ArticleDOI
TL;DR: This paper investigated the impact of tax rate volatility on investment in a cross-section of countries, namely, the 15 countries of the European Union (EU), the United States, and Japan.
Abstract: I. INTRODUCTION Spanning a period of nearly 100 years of economic research, a substantial body of literature has developed with the goal of explaining the behavior of investment over time. (1) Although many of these studies have considered the implications of tax policy for investment in an uncertain world, most have also implicitly assumed that the tax policy itself does not contribute to the uncertainty. The problem is that tax policy can be very uncertain in many cases, (2) and to date we know little about the consequences, especially from an empirical standpoint. More generally, empirical evaluations of uncertainty and investment are very limited compared with the development of theoretical analyses (Calcagnini and Saltari 2000), and the case of tax uncertainty is no exception. This article sets out to fill part of the intellectual void by empirically investigating the impact of volatility in effective tax rates on investment in a cross-section of countries, namely, the 15 countries of the European Union (EU), the United States, and Japan. In doing so, I first estimate tax rate volatility using an ARCH specification with data on effective capital tax rates. I then provide panel regression results, using the system generalized method of moments (GMM-Sys) estimator of Arellano and Bover (1995) (see also Blundell and Bond 1998), which suggest that the volatility of effective tax rates on capital has a significant negative impact on investment per worker in these countries. The remainder of the article proceeds as follows. Section II briefly reviews the existing literature on tax policy uncertainty and investment as well as existing empirical studies of uncertainty (in general) and investment. Section III develops the empirical model I employ to estimate the relationship between tax volatility and investment. Section IV then presents an analysis of effective tax rates in the EU countries, the United States, and Japan, followed by a discussion of the data and econometric issues in section V, and an examination of the effects of tax rate volatility on investment in section VI. Section VII provides concluding remarks. II. THEORETICAL FOUNDATIONS Tax Policy Uncertainty and Investment Although most of the voluminous literature on tax policy and investment under uncertainty ignores observed randomness in tax policy, a recent set of literature has begun to explore these issues in some detail, mostly through simulation. (3) The basic premise underlying these studies is that because output price uncertainty tends to retard investment (Pindyck 1988), (4) tax uncertainty might be expected to harm investment as well (Hassett and Metcalf 1999). Further credence to a negative relationship between tax uncertainty and investment is given by the business community's mantra that "they cannot make plans if they don't have confidence in the tax structure" (Bizer and Judd 1989, 223). These simulation studies, however, demonstrate that the impact of tax uncertainty depends crucially on the source and nature of the uncertainty. Contrary perhaps to conventional wisdom, in some cases increased uncertainty can be shown to have positive effects on investment, growth, or welfare. Bizer and Judd (1989) simulate the economic effects of introducing random tax policy in a dynamic general equilibrium model. They find that if random tax rates or credits are serially correlated, the target capital stock falls when taxes are high and rises when taxes are low.
Their more interesting case considers independently and identically distributed random tax shocks. In this case the authors find that randomness in investment tax credits generates large fluctuations in investment, which have the effect of reducing both utility and production (because both are concave functions) as well as revenue. (5) They find that variance in future tax rates, however, is not important for long-term investments and in fact raises nontrivial amounts of revenue at a welfare cost that is never more than the cost associated with raising an equivalent amount of revenue with a permanent increase in a deterministic tax rate. …
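As a rough illustration of the first estimation step described above, the sketch below (simulated series; the sample length and ARCH order are assumptions for illustration, not the article's data or exact specification) fits an AR(1)-ARCH(1) model to an effective capital tax rate series with the `arch` package and extracts the conditional volatility that would then enter the investment-per-worker regression. The GMM-Sys panel stage is not shown.

```python
# Minimal sketch: estimate tax rate volatility with an ARCH specification.
# The fitted conditional volatility would serve as a regressor in the
# subsequent GMM-Sys panel estimation (not shown).
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
tax_rate = 30 + np.cumsum(rng.normal(scale=0.8, size=60))   # hypothetical effective tax rate, in percent

model = arch_model(tax_rate, mean="AR", lags=1, vol="ARCH", p=1, rescale=False)
fit = model.fit(disp="off")
tax_volatility = fit.conditional_volatility                 # volatility series for the panel stage
print(fit.params)
```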

Journal ArticleDOI
TL;DR: The use of standards in exchange agreements involves a great deal more than meeting some criteria or quality level. as discussed by the authors argues that standards are necessary for communication, they facilitate trade, and their use economizes on the cost of information.
Abstract: I. INTRODUCTION The use of standards in exchange agreements involves a great deal more than meeting some criteria or quality level. (1) Standards are necessary for communication, they facilitate trade, and their use economizes on the cost of information. Figuratively speaking, they turn amorphous entities into orderly ones. More concretely, standards bring under a common denominator commodity attributes that may appear disparate. In the absence of standards, long-term relations are best suited to enforce agreements. On the other hand, the state's comparative advantage is in enforcing standardized components of agreements. Standards, along with state enforcement, move the economy toward perfect competition. That standards enhance competition has been understood for a long time. The 1937 Encyclopaedia Britannica states that standardization "enables buyers and sellers to speak the same language and makes it possible to compel competitive sellers to do likewise ... thus putting tenders on an easily comparable basis it promotes fairness in competition." Despite its long standing, this basic understanding of standards has not been incorporated into the economic literature. I am concerned primarily with commodity standards independent of who sets them or whether they are voluntary or mandatory. The basic hypotheses of this article are that new commodity standards created by a fall in the cost of measurement turn private information about commodities into a public good that may also be publicly available; shift self-enforced components of agreements into their contractual, state-enforced components; lead to less vertical integration; increase the incidence of theft; and make the contents of commodities clearer, more comparable, and easier to enforce, and thus make competition more "perfect." The Random House Dictionary's (1967) apt definition of standard is: "an object considered by an authority or general consent as a basis of comparison." "Apples," for instance, meet this dictionary's definition of a standard, because there is general consent about the term. On the other hand, for a "ton," general consent is lacking; the dictionary lists six definitions for it. For the term ton to be used as a basis of comparison, then, an authority must decree what the standard is. In the absence of such a ruling, transactors may hold conflicting views as to the exact meaning of the term. If, however, a court rules that a ton is 1000 kilograms and the ruling is publicized, the term becomes a public good. New users need not spend resources to determine what it means. Consider now a commodity such as Campbell's Tomato Soup. A buyer unacquainted with it may try a can. However, he or she will not deem it necessary to compare it with other cans of Campbell's Tomato Soup. An assumption seems to be at work: that in its desire to protect its brand name, Campbell's will produce all tomato soup cans to the same standard. However, commodity standards do not fully cover every commodity. In terms of what the standard covers, apples differ from Campbell's Tomato Soup. Whereas there is general consent for the meaning of the term apple, the standard does not fully delineate apples. It does not cover the diverse features of the individual specimens. Thus part of the information one gathers when examining an apple is a private good, and it has to be gathered separately for each apple. Standard units as well as graded commodities (discussed in section III), such as eggs, have been around for a long time.
Commodity standards, however, seem to have been rare before the Industrial Revolution and the enhanced level of commodity uniformity that it brought about. One of the earliest industrial standards was for screw threading, suggested by Whitworth in 1841 and introduced in England shortly thereafter (Hemenway [1975], 3). (2) I hypothesize that the emergence of standards brought about an increase in litigation to resolve disputes about the quality of commodities, disputes that in earlier times were resolved primarily by the use of long-term relations. …

Journal ArticleDOI
TL;DR: A number of papers have set out to empirically assess instances of alleged price discrimination as mentioned in this paper, which is said to exist when the same product is sold to different consumers at different prices.
Abstract: I. INTRODUCTION Price discrimination is said to exist when the same product is sold to different consumers at different prices. Most economists also agree that price discrimination is at work when similar--but not identical--products are sold at prices that do not reflect differences in costs. Economics textbooks are rife with examples, such as student or senior citizen discounts, hardcover and paperback versions of books, dinner versus lunch prices at restaurants, airfares with various restrictions, and price spreads of retail gasoline products. On the other hand, some economists have criticized the identification of such apparent "price anomalies" as price discrimination. For example, Lott and Roberts (1991) argue that there are usually cost-based explanations for these phenomena and propose such explanations for the cases of airfare, gasoline, and restaurant prices. Nevertheless, the authors do not challenge the conventional wisdom that price differences that cannot be explained by cost differences are discriminatory. In recent years a number of papers have set out to empirically assess instances of alleged price discrimination. The literature goes to great lengths to control for potential sources of cost variation among different products and thus--to the extent that it is successful--is not subject to the Lott-Roberts critique. Almost all studies conclude that price discrimination is practiced in the particular market they analyze. A skeptic, of course, could always argue that some sources of cost variation have not been accounted for and thus dismiss the results as erroneous. Despite the mild controversy over the methodology and its effectiveness, the basic premise seems universally accepted: The existence of price variation that cannot be explained by cost differences constitutes price discrimination. This is not, however, a formal definition. In fact there is no single, widely accepted definition of price discrimination when products are not homogeneous. Some authors define price discrimination to exist when price-cost margins (absolute differences) between differentiated products are unequal, whereas others prefer to compare price-cost markups (percentage differences). Few authors have discussed the relative merits of one measure over the other. (1) The relationship between the two has never been spelled out, and the choice among them remains arbitrary. In this article I aim to shed some light on this issue and provide some guidelines to empirical researchers interested in price discrimination. I start by providing an overview of the current state of the literature in section II. I introduce the two competing definitions of price discrimination and briefly discuss the justification that has been provided for each. I then review the empirical literature that uses one or the other definition as the basis of testing for price discrimination. I present the basic methodology and discuss the definitional choices made by researchers investigating different market environments. The picture that emerges from this overview is murky and provides little guidance to the empirical practitioner. The absence of controversy on the choice of definition might simply reflect the fact that the two commonly used definitions are equivalent. In section III I show that this is not the case. Specifically, it is possible for one definition to indicate the presence of price discrimination and for the other one to reject it. 
It is also possible for the two measures to take opposite signs; that is, one product may have a higher markup and a lower margin than the other. I also show that, in general, the margin criterion is more likely to indicate the presence of price discrimination. I conclude that the choice of definition has important implications and should be given careful consideration. The challenge is to come up with a consistent way of thinking about price discrimination that can easily be applied in a variety of settings. …
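A small numerical example (prices and costs invented purely for illustration) shows why the choice of definition matters: the two criteria can rank the same pair of products in opposite directions.

```python
# Toy comparison of the two price discrimination criteria:
# absolute margin (p - c) versus percentage markup ((p - c) / c).
products = {"hardcover": {"p": 10.0, "c": 8.0},
            "paperback": {"p": 5.0, "c": 3.2}}

for name, d in products.items():
    margin = d["p"] - d["c"]
    markup = (d["p"] - d["c"]) / d["c"]
    print(f"{name:10s}  margin = {margin:.2f}   markup = {markup:.1%}")

# hardcover: margin 2.00, markup 25.0%; paperback: margin 1.80, markup 56.3%.
# The hardcover has the larger margin but the smaller markup, so the two
# definitions disagree about which product carries the higher price-cost differential.
```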

Journal ArticleDOI
TL;DR: In this article, the authors consider two-player asymmetric games with ratio-form contest success functions and show that the equilibrium effort ratio is equal to the valuation ratio, and that the prize dissipation ratios for the players are the same.
Abstract: I examine players' equilibrium effort levels in two-player asymmetric contests with ratio-form contest success functions. I first characterize the Nash equilibrium of the simultaneous-move game. I show that the equilibrium effort ratio is equal to the valuation ratio, and that the prize dissipation ratios for the players are the same. I also show that the prize dissipation ratio for each player is less than or equal to the minimum of the players' probabilities of winning at the Nash equilibrium and thus never exceeds a half. Then I examine how the equilibrium effort ratio, the prize dissipation ratios, and the players' equilibrium effort levels respond when the players' valuations for the prize or their abilities change. (JEL D72, C72) I. INTRODUCTION A contest is a situation in which players compete with one another by expending irreversible effort to win a prize. Typical examples are various types of rent-seeking contests: competition among firms to win a monopoly rent secured under government protection or by a government procurement contract, competition between domestic and foreign firms to obtain governmental trade policies favorable to them, competition among firms to acquire a rent generated by rights of ownership to an import quota, and competition among firms to capture rents created by governmental decisions to establish tariffs or other trade barriers. Other examples of contests include auctions, patent races, research and development (R&D) competition among firms, litigation, competition for jobs among job candidates, competition among candidates to win promotion to higher ranks, election campaigns between political candidates, and competition between local governments to invite business firms, government institutions, or government-owned corporations into their districts. Naturally, due to their prevalence and importance in economies, such contests have been studied by many economists: Loury (1979), Lee and Wilde (1980), Tullock (1980), Rogerson (1982), Rosen (1986), Appelbaum and Katz (1987), Dixit (1987), Hillman and Riley (1989), Hirshleifer (1989), Ellingsen (1991), Nitzan (1991, 1994), Krishna and Morgan (1997), Che and Gale (1998), Hurley and Shogren (1998), and Konrad (2000), to name a few. In this vast literature on the theory of contests, one of the main issues is: How much effort do the players exert in pursuit of the prize? Indeed, it is of great interest because the players' effort levels determine the profitability of the players' engaging in the contest and, in some cases, they are revenues collected by the contest organizer or bribes given to government officials. Furthermore, they account for other important outcomes of the contest. For example, in a rent-seeking contest, efforts expended by the players are viewed as social costs due to rent-seeking activities, so that total effort level is a measure of economic efficiency. In an R&D contest, effort levels expended by the players--these are R&D expenditures of the firms--determine the expected date of invention. This article also addresses the issue: How much effort do the players exert in pursuit of the prize? But it differs from previous research by dealing with contests with ratio-form contest success functions. (1) Specifically, the novelty of this article is to consider two-player asymmetric contests in which each player's probability of winning is a function of the ratio of the two players' effort levels.
Two-player contests with ratio-form contest success functions or two-player contests that can be best modeled with ratio-form contest success functions are easily observed in the real world. Examples include various types of two-player rent-seeking contests, litigation between a plaintiff and a defendant, election campaigns between two parties or candidates, and R&D competition between two firms. Consider, for example, a rent-seeking contest in which two firms, potential monopolists, compete with each other to win a government monopoly franchise contract. …
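The stated equilibrium properties can be checked numerically for the familiar lottery contest success function $p_1 = x_1/(x_1 + x_2)$, which is a function of the effort ratio and so belongs to the ratio-form family; the valuations below are made up, and the closed-form solution used is the standard Tullock equilibrium rather than anything specific to this article.

```python
# Verify the abstract's claims for a two-player Tullock contest:
# effort ratio equals valuation ratio, prize dissipation ratios are equal,
# and each dissipation ratio is at most min(p1, p2) <= 1/2.
v1, v2 = 8.0, 5.0                               # hypothetical valuations of the prize

x1 = v1**2 * v2 / (v1 + v2) ** 2                # Nash equilibrium efforts
x2 = v1 * v2**2 / (v1 + v2) ** 2
p1, p2 = x1 / (x1 + x2), x2 / (x1 + x2)         # winning probabilities at equilibrium

print(round(x1 / x2, 3), round(v1 / v2, 3))     # 1.6 and 1.6: effort ratio = valuation ratio
print(round(x1 / v1, 3), round(x2 / v2, 3))     # equal prize dissipation ratios
print(x1 / v1 <= min(p1, p2) <= 0.5)            # True: bound on dissipation
```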

Journal ArticleDOI
TL;DR: In this article, the authors investigated the relationship between capacity utilization and high-tech investment for U.S. manufacturing and found that technological change may lead either to lower average utilization by making it cheaper to hold excess capacity or to higher utilization through making further changes in capacity less costly and time consuming.
Abstract: I. INTRODUCTION Capacity utilization is a variable of longstanding macroeconomic interest. Many studies have found it to be a valuable indicator of inflationary pressure. For example, Cecchetti (1995) finds that capacity utilization works as well as or better than other variables in predicting inflation over the next year or two. Similarly, in models of the level of resource utilization above which inflation accelerates, the utilization rate does as well as, and sometimes better than, the unemployment rate in predicting this level. (1) This predictive value may reflect capacity utilization's ability to do "double-duty," picking up the extent of slack in both labor and product markets (Corrado and Mattey 1997). However, in recent years, the capacity utilization and unemployment rates have at times provided different signals about the degree of tightness in resource markets. Notably, in the late 1990s, the decline in the unemployment rate below 4% suggested a relatively tight labor market, but the capacity utilization rate remained unexpectedly flat (Figure 1). Part of this divergence may be due to effects of technology on capacity utilization, as the 1990s saw both an investment boom that broadly increased manufacturing capacity and a shift in the composition of capacity toward high-tech machinery and equipment. In the 1940s and 1950s, manufacturing methods typically involved assembly line production with large-scale fixed units of machinery and equipment. [FIGURE 1 OMITTED] Relationships between inputs and outputs were relatively fixed, and adjustments in capacity were both costly and slow. Modern manufacturing methods, however, build considerable flexibility into the management of capacity. Technologies like numerically controlled machines, programmable controllers, and modular assembly make it easier to adjust the level and composition of output. At the same time, the use of automated design and modular tooling lowers the cost and time needed to expand capacity. While the use of advanced technologies is far from universal, it is increasingly widespread. For example, about three-quarters of plants in equipment-producing industries used at least one advanced technology in 1993; about 30% used five or more. (2) With the investment boom that took place in the second part of the 1990s, these shares are likely higher now. Conceptually, how these advances in technology would affect capacity utilization is not clear a priori. On the one hand, flexible manufacturing makes it easier to ramp production up and down. This may encourage firms to install a broader margin of excess capacity--that is, to operate at lower average utilization--in order to be able to handle upswings in demand. Such a strategy would be favored by declining prices of high-tech capital, which make excess capacity cheap. On the other hand, automated design and modular tooling make it faster and cheaper for firms to expand capacity. This may permit them to reduce the amount of excess capacity they maintain and to operate at higher utilization on average. With these two offsetting forces at work, determining how advances in technology affect capacity utilization at industry levels is ultimately an empirical question. This paper investigates the relationship between capacity utilization and high-tech investment for U.S. manufacturing. The next section discusses conceptual considerations in the relationship between technological change, capital spending, and capacity utilization. 
We show how technological change may lead either to lower average utilization by making it cheaper to hold excess capacity or to higher utilization by making further changes in capacity less costly and time consuming. The third section discusses the data and specification used for our study. The extent of investment in high-tech machinery and equipment has varied importantly across industries and over time. Thus, we use data on 111 manufacturing industries from 1974 to 2000 and panel data techniques to investigate effects of technology on utilization. …
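A stylized version of this kind of panel exercise is sketched below on simulated data; the variable names, panel size, and the specific regression of utilization on a high-tech capital share are assumptions for illustration, not the authors' specification or dataset.

```python
# Illustrative two-way fixed-effects regression: capacity utilization on a
# high-tech capital share, with industry and year effects and standard errors
# clustered by industry. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = [{"ind": i, "year": t, "hitech_share": rng.uniform(0.0, 0.5)}
        for i in range(20) for t in range(1974, 2001)]
df = pd.DataFrame(rows)
df["cap_util"] = 80 + 5 * df["hitech_share"] + rng.normal(scale=2, size=len(df))

fe = smf.ols("cap_util ~ hitech_share + C(ind) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["ind"]})
print(fe.params["hitech_share"], fe.bse["hitech_share"])
```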

Journal ArticleDOI
TL;DR: In this article, the authors reinterpreted the Alchian and Allen result in an n-good world and showed that their result holds more broadly than suggested by Borcherding and Silberberg.
Abstract: I. INTRODUCTION The Alchian and Allen substitution theorem posits that a per unit tax or shipping fee applied to similar goods will increase the relative consumption of the higher-quality good. Originally formulated in Alchian and Allen's 1964 textbook University Economics, the theorem is often called the Shipping the Good Apples Out theorem because of the empirical observation that supermarkets in apple-importing areas (such as Indiana) have a higher proportion of high-quality apples (relative to low-quality apples) than supermarkets in apple-growing areas, such as Washington State. A Washington resident on holiday in Indiana might well conclude that the good apples are getting shipped out. The theoretical basis for the Alchian and Allen result was questioned by Gould and Segall (1969), who demonstrate that the result holds unequivocally only in a two-good world. Borcherding and Silberberg (1978) defend the Alchian and Allen result in an n-good world, but only when the two taxed goods are close substitutes. This special case appears to be all that can be salvaged in terms of theory. (For further discussion, see Umbeck 1980). The purpose of this article is to reinterpret the Alchian and Allen result in an n-good world. This reinterpretation shows that their result holds more broadly than suggested by Borcherding and Silberberg and indeed more broadly than (though not as robustly as) originally claimed by Alchian and Allen. II. BACKGROUND Consider a world with n goods, $x_1, x_2, \ldots, x_n$, the first two of which can be thought of as, respectively, the high-quality and standard-quality versions of some product (e.g., good apples and bad apples). By assumption, then, $p_1 > p_2 > 0$. Following Borcherding and Silberberg, I phrase the Alchian and Allen thesis as $\partial(x_1/x_2)/\partial t > 0$, where $x_1(p_1, p_2, \ldots, U)$ and $x_2(p_1, p_2, \ldots, U)$ are Hicksian (income-compensated) demand functions and $t$ is a per unit charge applied to both goods. (1) The chain rule gives us $\partial x_i/\partial t = \partial x_i/\partial p_1 + \partial x_i/\partial p_2$, and combining this with the quotient rule I get $\partial(x_1/x_2)/\partial t = (1/x_2)(\partial x_1/\partial t) - (x_1/x_2^2)(\partial x_2/\partial t)$. Substituting in the compensated elasticities, $\epsilon_{ij} = (p_j/x_i)(\partial x_i/\partial p_j)$, I arrive at $\partial(x_1/x_2)/\partial t = (x_1/x_2)(\epsilon_{11}/p_1 + \epsilon_{12}/p_2 - \epsilon_{21}/p_1 - \epsilon_{22}/p_2)$. The first term here is always positive, so I will focus attention on the second term, (1) $\epsilon_{11}/p_1 + \epsilon_{12}/p_2 - \epsilon_{21}/p_1 - \epsilon_{22}/p_2$. The Alchian and Allen claim is that (1) is positive. III. A TWO-GOOD WORLD With only two goods, Hicks's (1946, 310-11) third law, $\sum_j \epsilon_{ij} = 0$, reduces to $\epsilon_{ij} = -\epsilon_{ii}$ for $i \neq j$, and one can substitute for $\epsilon_{11}$ and $\epsilon_{21}$ in (1) to get (2) $(\epsilon_{12} - \epsilon_{22})(1/p_2 - 1/p_1)$. The first term here is positive because the two goods in a two-good world must be substitutes ($\epsilon_{12} > 0$) and own-price elasticities are negative ($\epsilon_{22} < 0$); the second term is positive because $p_1 > p_2 > 0$.
I therefore get the Alchian and Allen result: $\partial(x_1/x_2)/\partial t > 0$. The intuitively compelling story is that consumers are substituting out of bad apples and into good apples. …
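The two-good algebra above can be checked symbolically; the short sympy script below (my verification, not part of the article) imposes the two-good version of Hicks's third law and confirms that expression (1) collapses to expression (2).

```python
# Symbolic check that (1) reduces to (2) under the two-good Hicks condition
# eps_11 = -eps_12 and eps_21 = -eps_22.
import sympy as sp

p1, p2 = sp.symbols("p1 p2", positive=True)
e12, e22 = sp.symbols("epsilon12 epsilon22")
e11, e21 = -e12, -e22                            # Hicks's third law with two goods

expr1 = e11 / p1 + e12 / p2 - e21 / p1 - e22 / p2    # expression (1)
expr2 = (e12 - e22) * (1 / p2 - 1 / p1)              # expression (2)
print(sp.simplify(expr1 - expr2) == 0)               # True
```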

Journal ArticleDOI
Daowei Zhang
TL;DR: The Endangered Species Act (ESA), probably the most powerful environmental regulation ever enacted in the United States, is intended to protect species from becoming extinct; as mentioned in this paper, the modern version was enacted in 1973 and has been amended several times since.
Abstract: I. INTRODUCTION The issue of differentiating legitimate public regulation of private property from regulatory takings has become important and controversial in the United States. The Endangered Species Act (ESA), probably the most powerful environmental regulation ever enacted in the United States, is in the center of this controversy. The modern version of the ESA was enacted in 1973, and it has been amended several times since. The ESA is intended to protect species from becoming extinct. The ESA creates two main processes: the designation of species and their critical habitats through listing, and protection. Souder (1995) shows that listing is important because it triggers the four major provisions of the ESA, which are to conserve listed species, avoid jeopardizing them, avoid destruction of critical habitat, and avoid taking them. Under the ESA, no person may take endangered or threatened species. In the ESA, "the term 'take' means to harass, harm, pursue, hunt, shoot, wound, kill, trap, capture, collect, or attempt to engage in any such conduct" (16 USC Section 1532 [19]). Furthermore, the U.S. Department of the Interior has defined the statutory term harm as "an act which actually kills or injures wildlife, including significant habitat modification or degradation where it actually kills or injures wildlife by significantly impairing essential behavioral patterns, including breeding, feeding, or sheltering" (50 CFR Section 17.3 [1995]). This regulatory definition has been upheld by the U.S. Supreme Court (Sweet Home v. Babbitt, II S.Ct. 714 [1995]), and it is the fulcrum on which the government levers regulation of private land. Because habitat modification may be a "take," Flick et al. (1996) indicate that the normal forestry activities of landowners fall within the purview of the U.S. Fish and Wildlife Service on lands with endangered or threatened species. U.S. Government Accounting Office (1995) shows that more than 80% of listed endangered species have some habitat on private lands that are mostly used for forestry or agricultural purposes. Furthermore, the list of endangered or threatened species is growing continually with no limit in sight. Because the ESA prescribes behavior and extracts use rights from the bundle purchased or inherited by private landowners, its potential reach over private land is very large yet uncertain. Few publicly provided incentive programs have been offered to private landowners for protection and enhancement of endangered species until very recently. (1) Because of this "stick" approach to public policy regarding endangered species, the usual presumption is that, other things being equal, landowners will avoid management activities that might attract endangered species onto their lands and possibly develop their lands early. (2) This belief continues to produce advocates for protection of private property rights, not only from private landowner organizations but also from public agencies and some environmental groups. Recently, the U.S. Fish and Wildlife Service, with the support of the Environmental Defense Fund, designed and implemented the Safe Harbor Program, No Surprise Policy, and No Take Regulation, as noted by Zhang (1999). These policies were in part designed to mitigate the existing incentives to manage against endangered species on private lands. 
On the other hand, individuals and groups who want to stop development, construction, or logging may latch onto the ESA as a tool to do so, with little or no concern in fact for listed species. "Not in my back yard" and other motives are served well by the strong ESA as currently formulated. As such, these individuals and groups who can be labeled as bootleggers are no doubt supportive of the current ESA. (3) However, with the exception of Lueck and Michael (2003), there is little empirical evidence in support of the view that weakness in the current endangered species-related regulations impedes good management and stewardship of forest resources. …

Journal ArticleDOI
TL;DR: This article used the concept of forecast encompassing to reexamine the forecasting ability of a large number of financial variables with respect to U.S. real output growth over the 1985:1-1999:4 period.
Abstract: We reconsider the out-of-sample forecasting ability of a large number of financial variables with respect to real output growth over the 1985:1-1999:4 period. We show that models including financial variables display almost no forecasting ability relative to an autoregressive benchmark model over this period according to a mean squared forecast error metric. However, tests based on forecast encompassing indicate that many financial variables do, in fact, contain information that is useful for forecasting real output growth over the 1985:1-1999:4 out-of-sample period. Our results suggest that the extant literature exaggerates the demise of the forecasting power of financial variables with respect to real activity since the mid-1980s. (JEL C22, C53, E44, E32) I. INTRODUCTION It is widely documented that the ability of financial variables to forecast real output growth has broken down, especially since the mid-1980s, according to a mean squared forecast error (MSFE) criterion; see, for example, the recent study of Stock and Watson (2003). In this article, we use the concept of forecast encompassing to reexamine the out-of-sample forecasting ability of a large number of financial variables with respect to U.S. real output growth over the 1985:1-1999:4 period. By focusing on forecast encompassing, the present work complements Stock and Watson (2003), Thoma and Gray (1998), and other recent studies that rely primarily on a relative MSFE criterion to analyze the out-of-sample forecasting ability of financial variables. Forecast encompassing is closely related to the construction of optimal composite forecasts. (1) Consider two sets of out-of-sample forecasts of real output growth, one from an autoregressive distributed lag (ARDL) model that includes a financial variable and one from a simple autoregressive (AR) benchmark model, and consider forming an optimal composite forecast as a convex combination of the forecasts from the two models. If the optimal weight attached to the forecast from the ARDL model is zero, then the ARDL model does not contain information that is useful in the formation of the optimal composite forecast apart from the information already contained in the AR benchmark model. In this case, the AR model forecasts encompass the ARDL model forecasts. However, if the optimal weight attached to the ARDL model forecast is greater than zero, then the ARDL model does contain information useful for forecasting real output growth apart from that already contained in the AR benchmark model. Harvey et al. (1998) develop a test statistic for the null hypothesis that the optimal weight attached to one out-of-sample forecast is zero against the alternative hypothesis that the optimal weight is greater than zero. Clark and McCracken (2001) develop a variant of the Harvey et al. (1998) statistic that accounts for the parameter uncertainty inherent in the formation of forecasts and that Monte Carlo simulations show to be considerably more powerful in detecting forecasting ability than the original statistic. In our applications, we reconsider the forecasting power of 10 of the financial variables used in Stock and Watson (2003) with respect to U.S. real gross domestic product (GDP) and industrial production growth over the 1985:1-1999:4 out-of-sample period. These are 10 of the most popular financial variables in the extant literature: M0, M1, M2, M3, federal funds rate, 3-month Treasury bill rate, term spread, default spread, real stock prices, and dividend yield.
For each financial variable, we construct recursive out-of-sample forecasts of real output growth over the 1985:1-1999:4 period based on an ARDL model that includes a given financial variable as an explanatory variable. Following Stock and Watson (2003), we consider forecasts of both real GDP and industrial production growth. We use the Harvey and colleagues (1998) and Clark and McCracken (2001) statistics to test the null hypothesis that the out-of-sample forecasts of real output growth from an AR benchmark model encompass the forecasts from the ARDL model that includes a given financial variable. …
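The encompassing logic can be sketched numerically. The code below simulates one-step forecast errors and computes a simple Diebold-Mariano-style version of the encompassing statistic; the small-sample correction of Harvey et al. (1998) and the Clark-McCracken variant used in the article are omitted, and the error series are invented.

```python
# Sketch of a forecast encompassing test: under the null that the AR benchmark
# forecasts encompass the ARDL forecasts, d_t = e1_t * (e1_t - e2_t) has mean
# zero; a significantly positive mean says the financial variable adds
# information beyond the benchmark.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
T = 60                                            # pseudo out-of-sample quarters
e1 = rng.normal(scale=1.0, size=T)                # AR benchmark forecast errors (simulated)
e2 = 0.6 * e1 + rng.normal(scale=0.6, size=T)     # ARDL forecast errors (simulated)

d = e1 * (e1 - e2)
t_stat = d.mean() / np.sqrt(d.var(ddof=1) / T)
p_value = 1 - stats.t.cdf(t_stat, df=T - 1)       # one-sided p-value
print(round(t_stat, 2), round(p_value, 4))
```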

Journal ArticleDOI
TL;DR: In this paper, the authors distinguish three spillover effects: negative production externality, anegative or positive consumption external and an increase in the risk of future welfare loss.
Abstract: We model biological invasions as an unintended by-product of capital accumulation. We distinguish three spillover effects: (1) a negative production externality, (2) a negative or positive consumption externality, and (3) an increase in the risk of future welfare loss. We also consider the implications when households self-protect by allocating income to reduce the potential damages from a biological invasion. An optimal output tax for production externalities is straightforward and can be augmented in the case of negative or positive spillover effects on consumer welfare. Policies to correct the effect of invasions on endogenous risk are more difficult to design. (JEL O13, O41, Q2)

Journal ArticleDOI
TL;DR: In this article, the authors examined empirically whether monetary policy, since the October 19, 1987 stock market crash, has been influenced by the stock market and concluded that the Fed considers the stock market only to the extent that it influences inflation and the output gap.
Abstract: Does the Federal Reserve System consider the level of the stock market when setting monetary policy? This paper examines empirically whether monetary policy, since the October 19, 1987 stock market crash, has been influenced by the stock market. We conclude that the Fed considers the stock market only to the extent that it influences inflation and the output gap. As a consequence, Federal Reserve policy accommodated the high stock market valuations of the 1990s, as measured by the S&P 500 P/E ratio.
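A stripped-down version of the kind of reaction-function regression this abstract describes is sketched below on made-up quarterly data; the variable construction and the exact specification are assumptions for illustration (the article's estimation details are not reproduced), and the point of interest is simply whether the P/E coefficient survives once inflation and the output gap are included.

```python
# Illustrative Taylor-type rule augmented with the S&P 500 P/E ratio,
# estimated by OLS on simulated quarterly data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
T = 52                                            # roughly the post-crash quarters
df = pd.DataFrame({
    "inflation": 2.5 + rng.normal(scale=1.0, size=T),
    "output_gap": rng.normal(scale=1.5, size=T),
    "pe_ratio": 20 + rng.normal(scale=5.0, size=T),
})
df["fed_funds"] = (1.0 + 1.5 * df["inflation"] + 0.5 * df["output_gap"]
                   + rng.normal(scale=0.5, size=T))

rule = smf.ols("fed_funds ~ inflation + output_gap + pe_ratio", data=df).fit()
print(rule.params)                                # pe_ratio coefficient is near zero by construction
```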