
Showing papers in "Economic Inquiry in 2003"


Journal ArticleDOI
TL;DR: The authors showed that identical offers in an ultimatum game trigger vastly different rejection rates depending on the other offers available to the proposer, which casts serious doubt on the consequentialist practice in standard economic theory that defines the utility of an action solely in terms of the consequences of this action.
Abstract: I. INTRODUCTION There is by now considerable evidence that fairness considerations affect economic behavior in many important areas. In bilateral bargaining situations, anonymously interacting agents frequently agree on rather egalitarian outcomes although the standard model with purely selfish preferences predicts rather unequal outcomes. (1) In competitive experimental labor markets with incomplete contracts, fairness considerations give rise to efficiency wage effects that generate stable deviations from the perfectly competitive outcome as shown in Fehr and Falk (1999). In several questionnaire studies, for example, in studies by Bewley (1999) and Campbell and Kamlani (1997), personnel managers indicate that despite an excess supply of labor, firms are unwilling to cut wages because they fear that pay cuts are perceived as unfair and hostile by the workers and will hence destroy work morale. Fehr et al. (1997) show that in principal-agent relationships reciprocally fair behavior causes a considerable increase in the set of enforceable contracts and hence large efficiency gains. To examine the forces that affect the perceptions of fairness and the determinants of fair behavior is thus not just of philosophical or academic interest. A common feature of fair behavior in the cited situations is that in response to an act of party A that is favorable for party B, B is willing to take costly actions to return at least part of the favor (positive reciprocity), and in response to an act that is perceived as harmful by B, B is willing to take costly actions to reduce A's material payoff (negative reciprocity). This suggests that reciprocal behavior is an important component of fairness-driven behavior. Reciprocally fair behavior has been shown to prevail in one-shot situations and under rather high-stake levels. (2) In this article we show that identical offers in an ultimatum game trigger vastly different rejection rates depending on the other offers available to the proposer. In particular, a given offer with an unequal distribution of material payoffs is much more likely to be rejected if the proposer could have proposed a more equitable offer than if the proposer could have proposed only more unequal offers. Thus it is not just the material payoff consequence of an offer that determines the acceptance but the set of available, yet not chosen offers is also decisive. This result casts serious doubt on the consequentialist practice in standard economic theory that defines the utility of an action solely in terms of the consequences of this action. It also shows that the recently developed models of fairness by Bolton and Ockenfels (2000) and Fehr and Schmidt (1999) are incomplete to the extent that they neglect "nonconsequentialist" reasons for reciprocally fair actions. These models assume that--in addition to their material self-interest--people also value the distributive consequences of outcomes. The impressive feature of these models is that they are capable of correctly predicting a wide variety of seemingly contradictory facts. They predict, for example, why competitive experimental markets with complete contracts typically converge to the predictions of the selfish model, whereas in bilateral bargaining situations or in markets with incomplete contracts stable deviations in the direction of more equitable outcomes are the rule.
However, despite their predictive success in important areas, our results indicate that legitimate doubts remain as to whether these models capture the phenomenon of reciprocal fairness in a fully satisfactory way. A parsimonious interpretation of our results, which is also suggested by psychological research, can be given in terms of intentions. (3) Identical actions by the proposer are--depending on the available alternatives--likely to signal different information about the intentions of the proposer. Hence, if responders take into account not only the distributive consequences of the proposers' actions but also the fairness of the proposers' intentions, their responses to identical offers may differ. …
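For reference, the consequentialist benchmark criticized here can be written down explicitly. Below is a minimal sketch of the Fehr and Schmidt (1999) inequity-aversion utility for two players (the Bolton-Ockenfels model has a similar flavor); the point relevant to this article is that utility depends only on the realized payoff pair, not on the offers the proposer could have made but did not.

```latex
% Fehr-Schmidt (1999) inequity aversion, two-player case.
% Player i's utility depends only on the realized payoffs (x_i, x_j):
U_i(x_i, x_j) = x_i - \alpha_i \max\{x_j - x_i,\, 0\} - \beta_i \max\{x_i - x_j,\, 0\},
\qquad \beta_i \le \alpha_i, \quad 0 \le \beta_i < 1.
```

Because the set of unchosen offers never enters U_i, such a model predicts identical rejection rates for identical offers, which is exactly the prediction the experiment reported here contradicts.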

601 citations


Journal ArticleDOI
TL;DR: In this article, the authors explored the two-way link between FDI and growth for a panel of 23 developing countries and investigated the impact of liberalization on the dynamics of the FDI-GDP relationship.
Abstract: Using a panel cointegration framework, the article explores the two-way link between FDI and growth for a panel of 23 developing countries. In addition, it investigates the impact of liberalization on the dynamics of the FDI and GDP relationship. A long-run cointegrating relationship is found between FDI and GDP after allowing for heterogeneous country effects. The cointegrating vectors reveal a bidirectional causality between GDP and FDI for more open economies. For relatively closed economies, long-run causality appears unidirectional and runs from GDP to FDI, implying that growth and FDI are not mutually reinforcing under restrictive trade and investment regimes.
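A rough sketch of the kind of panel estimation described might look as follows in Python; the column names (country, year, log_gdp, log_fdi) and the input file are hypothetical, and the authors' actual panel cointegration estimator is more elaborate than this illustration.

```python
# Sketch only: country-by-country Engle-Granger cointegration tests plus a
# fixed-effects estimate of the long-run FDI-GDP relationship.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
from statsmodels.tsa.stattools import coint
from linearmodels.panel import PanelOLS

df = pd.read_csv("fdi_growth_panel.csv")  # hypothetical panel, 23 countries

# 1) Cointegration test between log GDP and log FDI for each country.
for country, g in df.groupby("country"):
    t_stat, p_value, _ = coint(g["log_gdp"], g["log_fdi"])
    print(f"{country}: t = {t_stat:.2f}, p = {p_value:.3f}")

# 2) Long-run relationship allowing heterogeneous country (entity) effects.
panel = df.set_index(["country", "year"])
res = PanelOLS.from_formula("log_gdp ~ 1 + log_fdi + EntityEffects", data=panel).fit()
print(res)
```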

316 citations


Journal ArticleDOI
TL;DR: In this paper, the authors look for evidence of unequal broadband availability in areas with high concentrations of poor, minority, or rural households and find that there is little evidence for unequal availability based on income or on black or Hispanic concentration.
Abstract: The newest dimension of the digital divide is access to broadband (high-speed) Internet service. Using comprehensive U.S. data covering all forms of access technology (chiefly DSL and cable modem), I look for evidence of unequal broadband availability in areas with high concentrations of poor, minority, or rural households. There is little evidence of unequal availability based on income or on black or Hispanic concentration. There is mixed evidence concerning availability based on Native American or Asian concentration. Other findings: Rural location decreases availability; market size, education, Spanish language use, commuting distance, and Bell presence increase availability.

212 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare the behavior of economists and noneconomists in a natural setting, and reach significantly different results from previous studies: 1. Political economists are not more selfish than the average student, but students of business economics are.
Abstract: I. INTRODUCTION Economic science is constantly being accused of having a blind spot. It is said that, compared to efficiency, equity is not given its just weight in the education of economists. Moreover, it is argued that the Homo economicus is too narrowly defined and that it does not explain the behavior of human beings accurately. According to the critics, the consequence of this oversimplified view of human behavior is that the students of economics act in a more selfish way than students of other social sciences. (1) Economists create the type of selfish persons (the Homo economicus) they axiomatically assume in their theories. If this claim indeed holds in reality, the critics are right in emphasizing that economic science makes the much-needed cooperation in the world more difficult. Hirschman (1982, 1466) puts it the following way: "The emphasis on self-interest typical of capitalism makes it more difficult to secure the collective goods and cooperation increasingly needed for the proper functioning of the system in its later stages." There is evidence that students of economics behave more selfishly than other people (e.g., Frank et al., 1993; 1996; Marwell and Ames, 1981; Frank and Schulze, 2000). The results are mainly based on laboratory experiments with students. These studies cannot exclude that economists see the experimental setting as "an IQ test of sorts" (Frank, 1988, 226). Students may play the equilibrium learned in their economics classes, but they do not apply it to real-life situations. In contrast, we use a unique and extremely large data set (more than 96,500 observations) to study the behavior of economics students in a natural setting. At the University of Zurich, every student has to decide each semester whether he or she wants to donate money to two social funds managed by the university. We can observe the decisions of the students over five semesters and compare the behavior of economists with that of students of other disciplines. Most important, the data set enables us to analyze whether a possible difference in behavior is due to indoctrination in economic education or due to selection. Previous studies have had serious difficulties discriminating between the competing hypotheses that behavioral differences emerge because (1) selfish persons choose to study economics (selection hypothesis) or (2) training in economics causes students to act more selfishly (indoctrination hypothesis). The data set used allows us to address these two questions. Moreover, the panel structure of the data enables us to exclude individual heterogeneity by controlling for personal fixed effects. Comparing the behavior of economists and noneconomists in a natural setting, we reach significantly different results from previous studies: 1. Political economists (to use the classical term) are not more selfish than the average student, but students of business economics are. 2. The higher level of selfishness of business students is due to self-selection, not indoctrination. 3. Students of the economic sciences (i.e., both political and business economists) are about as selfish as law students. The willingness of economics students to contribute decreases somewhat during their studies, but to a lesser extent than that of medical and veterinary students. The article proceeds by presenting previous studies in section II. Section III discusses the data used. Section IV presents the analysis and results of our inquiry. Section V draws conclusions. II. PREVIOUS STUDIES Frank et al.
(1993; 1996) seem to have convinced most of the academic community that an economics education has a negative influence on a student's cooperative behavior. (2) But the literature on the topic is much less uniform than the conclusion of Frank et al. (1996, 192), who argue that there is "a heavy burden of proof on those who insist that economics training does not inhibit cooperation. …

181 citations


Journal ArticleDOI
TL;DR: In this paper, a model of the determinants of articles produced by male and female academics is proposed to test how article production is influenced by coauthorship, institutional research orientation, and gender.
Abstract: I. INTRODUCTION The purpose of this article is to test a model of the determinants of articles produced by male and female academics. Of particular interest is how article production is influenced by coauthorship, institutional research orientation, and gender. Several works have identified and demonstrated the trend toward coauthorship in economics and in other disciplines. Using data from the Journal of Economic Literature, Heck and Zaleski (1991) demonstrated that the incidence of coauthorship has increased from about 15% of total articles in 1969 to about 35% in 1989. Durden and Perri (1995) report that as of 1992 the proportion coauthored had grown to more than 38%. Hudson (1996) finds that by 1993 coauthorship rates in the Journal of Political Economy and American Economic Review were 39.6% and 54.9%, respectively, as compared to rates of 6% and 8% in 1950. Why do economists coauthor? Rational interest theory suggests that if scholars collaborate then there must be a utility-enhancing result, other things equal. This could be in the form of higher salaries and increased probabilities of promotion, greater access to funded research, greater mobility in job markets, and the like, which might result if collaboration significantly increases the overall production of published papers. Studies by McDowell and Melvin (1983), Barnett et al. (1988), and Piette and Ross (1992) suggest that coauthorship among scholars may increase article production through the division of labor made necessary by increased complexity in the subject matter and by the need to saturate markets to increase the probability of getting papers accepted for publication. Laband and Tollison (2000) find evidence that increasing rates of coauthorship result from the greater quantitative content of articles, greater requirement for the use of sophisticated econometric techniques, and the fact that cooperation is cheaper in time and other resources as compared to learning what is necessary to publish in another field or discipline. Hamermesh and Oster (2002) discuss nonpecuniary, purely consumption benefits, such as the pleasure obtained from cooperation, and suggest that higher levels of prestige among colleagues may also be relevant. Although many reasons for cooperation have been developed, not much has been done to determine whether the rate of coauthorship among scholars actually affects total article production. Using cross section data, Hansen et al. (1978) and Graves et al. (1982) estimate productivity models, but coauthorship is not included in either study. McDowell and Smith (1992) include a coauthorship variable but find that it neither increases nor decreases article production when articles are discounted by the number of authors. Hollis (2001) uses panel data to examine the relationship between coauthorship and research productivity. He finds that coauthorship leads to better, longer, and more frequent publications. When publications are discounted by the number of authors, however, coauthorship appears negatively related to research output. McDowell and Smith's (1992) and Hollis's (2001) results are very counterintuitive, because a viable reason for coauthorship is to increase efficiency, as shown in Durden and Perri (1995). Durden and Perri (1995) estimate a time-series model, finding that coauthorship is highly instrumental in determining article productivity. 
As far as we know, the only study by economists that specifically analyzes the effect of institutional research orientation on publishing output is that by Graves et al. (1982). Using data from 240 schools, they estimate total pages published per faculty member in the top 24 economics journals between 1974 and 1978. Using a series of independent variables to proxy influences that affect publication, they find that the number of secretaries per faculty member helps and teaching load hinders production. The effects of Ph.D. status, teaching assistance, and faculty-student ratios are mixed, depending on the specification of the model. …

115 citations


Journal ArticleDOI
TL;DR: In this paper, the authors studied how cocaine and marijuana prices and possession violation enforcement affect demand for these drugs, and found that cocaine prices are inversely related to adult cocaine and marijuana demand, that marijuana price effects are statistically insignificant, and that increases in arrest probabilities diminish both types of drug use.
Abstract: This article estimates equations for past year cocaine and marijuana use among adult and juvenile respondents of the 1990-97 National Household Surveys on Drug Abuse. Unlike most previous studies, we control for the monetary price of marijuana, probabilities of arrest for marijuana and cocaine possession, and state fixed effects. Results indicate that cocaine prices are inversely related to adult cocaine and marijuana demand but are unrelated to juvenile drug demand, marijuana price effects are always statistically insignificant, estimated price effects are inflated when state effects are omitted, and increases in each arrest probability diminish both types of drug use. (JEL K42, I18, D12) I. INTRODUCTION The responsiveness of cocaine and marijuana demand to changes in their prices is a key determinant of the effectiveness of illegal drug enforcement policy. By harassing sellers and seizing drugs, enforcement attempts to reduce the consumption of illegal drugs by restricting their supply and thereby raising their prices. Even if enforcement is able to increase drug prices, its success in reducing illegal drug use depends on the elasticity of drug demand with respect to drug prices. Conversely, unless this elasticity is close to zero, legalization of cocaine and marijuana would likely increase their consumption substantially by drastically reducing their prices. A complementary goal of enforcing cocaine and marijuana possession violations is to reduce their demand at prevailing prices. This occurs through both incarceration of drug users who will no longer be able to purchase drugs and deterrence of drug consumption by potential users. Price and enforcement effects may be dissimilar if consumers respond differently to changes in their budget constraints than to changes in expected punishment. In particular, the relative magnitudes of the responses in drug demand to changes in possession arrest probabilities and prices are an important determinant of how enforcement resources can most efficiently be allocated between buyers and sellers. But in spite of this policy relevance, there is little direct evidence on the relationship between arrest probabilities for cocaine and marijuana possession and demand for these drugs. Meanwhile, the relationship between the consumption of cocaine and marijuana is both theoretically and empirically uncertain. In theory, cocaine and marijuana act as substitutes in the production of intoxication but also can provide complementary intoxicating effects. Empirically, this relationship determines whether policies designed to reduce demand for one drug have effects on the other that reinforce or counteract the impacts of policies designed specifically for that other drug. For instance, marijuana possession arrests more than doubled nationally between 1990 and 1997, both in number and as a fraction of total arrests. This might have reinforced any effect of cocaine possession enforcement on cocaine use if the two drugs are complements but had an unintended counteractive effect if they are substitutes. This study provides evidence on the impacts of cocaine and marijuana prices and possession violation enforcement on the demand for these drugs. We analyze data on past year cocaine and marijuana use among 12- to 39-year-old respondents to the annual 1990-97 National Household Surveys on Drug Abuse (NHSDA).
Along with various individual characteristics, the set of explanatory variables includes regional prices of cocaine and marijuana, state-level measures of the probability of arrest for cocaine and marijuana possession, and fixed effects for states and years. Our goals are to estimate the size of the response in the demand for cocaine and marijuana to changes in their prices, to do the same with respect to changes in possession violation enforcement intensity, and to examine whether unmeasured state characteristics can potentially bias estimated price and enforcement effects. …
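A compressed sketch of the type of specification described is shown below; every variable name (used_cocaine, coc_price, mj_price, p_arrest_coc, p_arrest_mj, and so on) is a hypothetical placeholder for the NHSDA-based measures, and the actual estimation in the article is richer than this.

```python
# Sketch: past-year cocaine participation as a function of drug prices,
# possession-arrest probabilities, and state/year fixed effects.
# Variable names and the input file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nhsda_extract.csv")  # hypothetical person-level extract

res = smf.logit(
    "used_cocaine ~ np.log(coc_price) + np.log(mj_price)"
    " + p_arrest_coc + p_arrest_mj"
    " + age + female + C(state) + C(year)",
    data=df,
).fit()
print(res.summary())

# Dropping C(state) and re-estimating illustrates the article's point that
# price effects look inflated when state fixed effects are omitted.
```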

113 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare public- and private-sector wage distributions and examine the extent to which focusing exclusively on differences in mean wages obscures the true nature of the wage relativities.
Abstract: I. INTRODUCTION Institutional, political, and economic factors all contribute to the determination of public-sector wages. For example, the public sector is almost the only place in many advanced economies where unions continue to have a significant presence and an influential voice in setting wages and conditions of work. Political factors, such as desiring to be a good employer, may also influence the public-sector wage structure. As a good employer, governments may offer low-skilled workers higher rates of pay than they might get in the private sector. On the other hand, the public sector may be averse to paying very skilled workers high rates of pay if the public does not like to see public-sector workers earning too much. Economic factors, such as spending and tax receipt limits, can also affect the wage structure in the public sector. All of these factors could make the public-sector labor market less efficient than that of the private sector. From an economic perspective, if government workers are paid more than private-sector workers, ceteris paribus, then taxes are being wasted. If private-sector workers are paid more than government workers, governments cannot recruit and retain appropriately skilled and productive workers. To make the wage structures comparable, many countries use surveys to ascertain the prevailing earnings levels for jobs in the private sector that are similar to public-sector jobs. However, as Fogel and Lewin (1974) and Gregory (1990) point out, these survey methods are fraught with biases that may affect comparability of wages. Indeed, there is little empirical evidence that wages between the two sectors are equal. Bender (1998) and Gregory and Borland (1999) survey the current literature on public-private wage differentials and report that most studies find that central governments pay more on average than the private sector, even after controlling for differences in the productive characteristics of workers. Furthermore, as Elliott et al. (1999) find, comparability in pay is currently important in the debates over public-sector pay reform, particularly in Europe. The institutional and political factors outlined above are waning in importance, and economic considerations are seemingly coming to the fore. Because of perceptions of the overpayment of public-sector workers and the desire to cut government wage bills, politicians in these countries are devising policies to align average rates of pay by slowing the relative rate of public-sector wage growth. The policies should be different, however, if wage distributions are not the same. Indeed, there is some evidence that the distributions differ, as found in the double imbalance concept articulated by Schager (1993), Katz and Krueger (1993), and Elliott and Duffus (1996). This imbalance is characterized by public-sector workers at the lower end of the occupational hierarchy receiving the largest wage premia and workers at the upper end earning less than their private-sector counterparts. If the distributions are incomparable, ceteris paribus, mandating equal average rates of pay growth will not ensure comparability. Policies aimed at realigning wage structures at the ends of the distribution should then be followed to promote comparability. This article compares public- and private-sector wage distributions and examines the extent to which focusing exclusively on differences in mean wages obscures the true nature of the wage relativities.
Data from the British Social Change and Economic Life Initiative (SCELI) are employed to investigate comparability. This data set is well suited for a comparability study given the range of personal and job characteristics that control for differences in public- and private-sector jobs. Moreover, the data were collected in 1986, before substantial reform of public-sector pay in the United Kingdom, reviewed in Bender and Elliott (1999). This offers a test of whether the reform policies were appropriate to promote wage comparability. …
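One simple way to look beyond the mean gap is to estimate the public-sector differential at several points of the conditional wage distribution; the sketch below uses hypothetical column names (log_wage, public, experience, education, female) and is only an illustration of the idea, not the analysis actually performed on the SCELI data.

```python
# Sketch: public-sector wage differential at different quantiles of the
# conditional wage distribution. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sceli_wages.csv")  # hypothetical SCELI extract

formula = "log_wage ~ public + experience + education + female"
for q in (0.10, 0.25, 0.50, 0.75, 0.90):
    res = smf.quantreg(formula, df).fit(q=q)
    print(f"q = {q:.2f}: public premium = {res.params['public']:.3f}")

# A premium that shrinks, or turns negative, as q rises is the 'double
# imbalance' pattern described in the introduction.
```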

80 citations


Journal ArticleDOI
TL;DR: The Coase theorem maintains that where free-market precepts exist, the allocation of property rights does not affect the distribution of resources. The paper finds that the distribution of wins in Major League Baseball is generally consistent with this prediction and therefore suggests a course for those who wish to alter the level of competitive balance: Major League Baseball should increase its focus on expanding the size of its labor pool.
Abstract: The Coase theorem maintains that where free-market precepts exist, the allocation of property rights does not impact the distribution of resources. An application to Major League Baseball suggests that institutions such as free agency and the reverse-order amateur draft would not impact player distributions and therefore would not impact competitive balance. The present study finds that the distribution of wins is generally consistent with the precepts of the Coase theorem and therefore suggests a course for those who wish to alter the level of competitive balance: Major League Baseball should increase its focus on expanding the size of its labor pool. (JEL O15, L83, C22)

79 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore the time-inconsistency problem that arises as a result of the dual nature of the promotion decision and show that the common practice of favoring internal candidates for promotion can be understood as a response by firms to the problem of time inconsistency.
Abstract: I. INTRODUCTION The literature on promotion practices within the firm identifies two distinct roles for a firm's promotion policies. Promotions serve both as a way for the firm to efficiently assign workers to tasks and as a way for the firm to reward prior performance. (1) In this article I explore the time-inconsistency problem that arises as a result of the dual nature of the promotion decision. In particular, I show that the common practice of favoring internal candidates for promotion can be understood as a response by firms to the problem of time inconsistency. (2) Consider a firm in which promotions serve as a reward for prior expenditures of effort and/or prior investments in human capital. The optimal promotion policy for this firm depends on the time frame from which we view the problem. The ex ante optimal promotion rule is the optimal rule when the problem is viewed from a date prior to young workers deciding to work at this firm. This rule takes into account both the assignment aspects of the promotion decision and that promotions serve as a reward for prior performance. There is a second relevant rule, however, because at the date of the promotion decision the effects of promotion serving as a reward for prior performance are in the past. For example, in a world where the possibility of future promotion serves to increase the effort levels of young workers, by the time of the promotion decision those effort choices have already been made. Hence, the ex post optimal rule only takes into account the assignment aspects of the promotion decision. The time-inconsistency problem that arises is now clear. Total profits are maximized when the firm follows the ex ante optimal promotion rule, but in the absence of commitment this is not how the firm behaves. At the time of the promotion decision the firm has an incentive to ignore the effects of promotion serving as a reward for prior performance, and instead follow the ex post optimal rule. In other words, the profits of the firm are decreased because at the time of the promotion decision the firm maximizes current profits, and the anticipation of this behavior by workers has deleterious effects on behavior in prior periods. In this article I explore a specific example of this time-inconsistency problem. Consider a firm's decision concerning whether to fill a managerial position by promotion from within or by hiring an outsider. I show that when confronted with this decision the firm faces the exact situation described. That is, the ex post optimal rule shows no preference for promotion from within over hiring from the outside, whereas the ex ante optimal rule exhibits a preference for promoting from within. The logic here is that the possibility of receiving the higher managerial wage in the future reduces the wage required to attract young workers into the firm, but only the ex ante optimal rule takes this into account. The result is that in the absence of commitment the firm hires from the outside too often. The conclusion is that the establishment of an internal labor market in which the firm favors promotion from within can be understood as a way that the firm avoids this time-inconsistency problem. This result extends a finding in Malcomson (1984). That article also shows that an internal labor market in which hiring from the outside is restricted can arise in a world where promotion serves as a reward for prior performance. In that work, however, there is no time-inconsistency problem. The reason is that Malcomson assumes workers are homogeneous, and thus at the time of the promotion decision it is (weakly) optimal for the firm to have all promotions be from within. In contrast, I assume that workers are heterogeneous and find that if the firm maximizes ex post profits it lowers total profits. The result is an internal labor market that not only limits hiring from the outside but also constrains the firm to behave in a manner different from that which maximizes profits at the time the promotion decision is made. …

75 citations


Journal ArticleDOI
TL;DR: The authors found that among college graduates who do not earn advanced degrees, economics majors generally earn more than similar individuals with other majors, and that among individuals who pursue graduate degree programs in business and law, economics majors earn more than undergraduate majors in most other academic disciplines.
Abstract: Undergraduate advisors in economics departments suggest that the study of economics is good preparation for a variety of careers, including economics, consulting, analysis, and administration, and they argue that economics is a solid prelaw or pre-MBA major. In this article we provide some empirical evidence about each of these contentions. We find that among college graduates who do not earn advanced degrees, economics majors generally earn more than similar individuals with other majors. We show also that among individuals who pursue graduate degree programs in business and law, economics majors earn more than undergraduate majors in most other academic disciplines. (JEL J31)

72 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an approach to analyze the relationship between institutions and aggregate economic performance by using frontier analysis, a form of benchmarking that can be used to assess the effects of various policies and characteristics.
Abstract: I. INTRODUCTION Economic studies of long-run performance are focusing increasingly on political, legal, financial, and social factors. Development is no longer regarded as a gradual, inevitable transformation from self-sufficiency to specialization and participation in the division of labor. Instead, progress follows the creation and evolution of institutions that support social and commercial relationships. The new institutional economics explains that growth requires that the potential hazards of trade (shirking, opportunism, risk, and so on) be controlled by institutions like secure property rights, reliable procedures for resolving disputes, and means of enforcing contracts in the absence of close social ties. These institutions reduce information costs, encourage capital formation and capital mobility, allow risks to be priced and shared, and otherwise facilitate cooperation (North and Thomas, 1973; North, 1990; Drobak and Nye, 1997; Levine, 1997). In particular, political authorities must make credible commitments not to expropriate private resources once investments have been made. (1) Despite widespread agreement that institutions matter, there is no consensus on how they should be incorporated into the analysis. Even the best empirical studies of productivity and growth treat institutional characteristics in an eclectic way. Barro's (1991) influential article, for example, uses the numbers of assassinations and revolutions per capita as proxies for political instability, finding these measures negatively correlated with growth and investment. Scully (1988) regresses growth rates on dummy variables derived from Gastil's (1982) ordinal rankings of political and economic liberty. King and Levine (1993a; 1993b; 1993c) derive various measures of the quality of financial intermediaries and show that these measures are good predictors of growth. This article presents a different approach to analyzing the relationship between institutions and aggregate economic performance. Following the modern productivity literature (see, for example, Fried et al., 1993), we model economic performance with a stochastic production frontier. Frontier analysis is a form of benchmarking. It analyzes a group of branches, firms, nations, or other units by identifying best practices and evaluating each member's performance relative to the best-practice frontier. The results produce not only qualitative rankings of the group members but also numerical efficiency scores that can be used to assess the effects of various policies and characteristics. For this reason, frontier analysis is well suited for studying the effects of legal and political institutions on the economic performance of nations. To capture institutional factors, we use two comprehensive indexes of legal, regulatory, and political conditions. Working with a broad sample of countries from 1975 to 1990, we incorporate a widely used measure of economic freedom along with a new measure of policy stability to represent a country's institutional environment. The new institutional economics suggests that countries with high levels of economic freedom (protection of private property rights, respect for the rule of law, an unhampered price system, and so on) and policy stability (commitment not to change the rules of the game ex post) will be closer to the best-practice frontier. In our model, economic freedom and policy stability affect economic performance by enhancing technical efficiency.
In other words, these institutions do not alter the state of technology, but they allow producers to squeeze more out of current technology. For instance, countries with more stable policies attract more foreign investment than countries with less stable policies, ceteris paribus; this leads to increased competition among producers, which in turn brings efficiency gains. Similarly, countries with lower taxes, milder regulatory burdens, lower inflation, fewer restrictions on foreign ownership, and so on are likely to produce output more efficiently than countries with policies that restrict production, inhibit capital formation, and reduce competition. …
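The general shape of the model described can be summarized as follows; treating the institutional indexes as shifters of the inefficiency term is a sketch of the idea in the abstract, not necessarily the authors' exact parameterization.

```latex
% Stochastic production frontier with a composed error term:
\ln y_{it} = x_{it}'\beta + v_{it} - u_{it},
\qquad v_{it} \sim N(0, \sigma_v^2), \quad u_{it} \ge 0.
% Technical efficiency of country i at time t:
TE_{it} = \exp(-u_{it}) \in (0, 1].
% Institutions (economic freedom F, policy stability S) enter the
% inefficiency term rather than the technology itself:
E[u_{it}] = g(F_{it}, S_{it}), \qquad
\partial g/\partial F < 0, \quad \partial g/\partial S < 0.
```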

Journal ArticleDOI
TL;DR: For central-city black children, each additional mile from the hospital is associated with a 3-percentage-point decline in the probability of having had a checkup, and access to providers is as important as private insurance coverage in predicting use of preventive care.
Abstract: I. INTRODUCTION This article examines the effect of distance to hospital on the utilization of preventive care among children. Many poor children, lacking alternative providers, rely on hospitals and clinics for preventive care. Moreover, in many poor neighborhoods, the majority of private doctors' offices are located in buildings adjacent to a hospital. Thus, hospitals indirectly serve to attract physician services that would otherwise be lacking in the neighborhood. This pattern of service provision raises several concerns. First, it is inefficient for children to be receiving preventive care directly from hospitals because preventive care can be delivered more cheaply in doctors' offices, and doctors' offices also provide greater continuity of care (an index of quality). Second, even when children are receiving care in a doctor's office adjacent to a hospital, continuation of that service may be jeopardized if the hospital were to close. In this study, we use distance to hospital as a proxy for access to medical services to examine the effect of distance to hospital on the use of preventive care services. If the use of these services falls with distance, other things being equal, then we interpret this as evidence of lack of access to alternative providers. Most previous studies of the effects of distance have concentrated on specific geographical areas and/or hospitalizations for specific procedures. We are therefore unaware of any previous studies that have focused on the effects of distance on the utilization of preventive care among children. (1) We use a national sample of children created by matching records from the National Longitudinal Survey of Youth's Child-Mother (NLSCM) file with the American Hospital Association's 1990 Hospital Survey. We allow the effects of distance to vary with race, ethnicity, insurance status, and degree of urbanicity. We control for other factors that might affect utilization of preventive care by including a rich set of control variables and by estimating models that include either city dummy variables or mother fixed effects. In particular, city dummies control for unobserved differences across cities in public transportation services and population density. Mother fixed effects control for unobserved differences in preferences for preventive health care. Robust estimates across models allow us to rule out these alternative explanations for why distance to hospital might affect different groups differentially. These innovations address the important question of whether children who rely on hospitals for preventive care do so because they lack access to other providers. We find that distance to hospital has significant effects on access to preventive care only among central-city black children. For these children, each additional mile from the hospital is associated with a 3% decline in the probability of having had a checkup (from a mean baseline of 74%). This effect is comparable to the 3% increase in the probability of having a checkup that is associated with having private health insurance coverage rather than being uninsured. A striking result is that among these children, the size of the distance effect is similar for both the privately insured and those with Medicaid, suggesting that even black urban children with private health insurance have difficulty obtaining access to preventive care outside the area around hospitals.
Thus, for this group questions of access to providers may be as important as insurance coverage in predicting use of preventive care. II. CHILDREN WHO RELY ON HOSPITALS FOR PRIMARY CARE Prior research suggests that children who are members of minorities, uninsured, covered by Medicaid, or residing in rural areas are all more likely to rely on hospitals for preventive care. This section discusses each of these groups in turn. Bloom (1990) demonstrates that, nationally, black children are twice as likely as white children to receive care in an institutional setting, such as a clinic or emergency room, and that they are more likely to be attended by residents than by staff physicians. …
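A compressed sketch of the comparison described, assuming a child-level file with hypothetical columns (checkup, miles_to_hospital, central_city_black, private_ins, medicaid, city, mother_id); the mother-fixed-effects variant simply swaps the city dummies for mother dummies.

```python
# Sketch: checkup probability as a function of distance to hospital, letting
# the distance effect differ for central-city black children.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nlscm_aha_match.csv")  # hypothetical matched file

spec = ("checkup ~ miles_to_hospital * central_city_black"
        " + private_ins + medicaid + C(city)")
res = smf.logit(spec, data=df).fit()
print(res.summary())

# Replacing C(city) with C(mother_id) gives the mother fixed-effects variant
# that controls for unobserved preferences for preventive care.
```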

Journal ArticleDOI
TL;DR: In this paper, the authors examined a natural experiment in which National Hockey League (NHL) games in the 1999-2000 season had either one or two referees and found that games with two referees have more penalties called, suggesting that the monitoring effect dominates the deterrent effect.
Abstract: I. INTRODUCTION The economic theory of crime predicts that an increase in policing resources will lead to a decrease in the crime rate. Empirically determining the magnitude of this effect has proved difficult, however, because of an endogeneity problem. Although variations in the allocation of policing resources are expected to affect crime rates, the reverse may also hold true (Cornwell and Trumbull, 1994; Levitt, 1997). The endogeneity problem can be avoided to a large degree by studying situations in which changes in the allocation of policing resources occur independent from crime rates. In an innovative and influential study, McCormick and Tollison (1984) apply the economic theory of crime to rules infractions in a sports contest. They analyze a policy change in the Atlantic Coast Conference (ACC) basketball tournaments. In 1979, the ACC increased the number of referees in the tournament games. McCormick and Tollison analyze the effect of increasing the number of referees on the number of fouls called. Their approach is unlikely to suffer from a severe endogeneity bias because the policy change occurred only once and furthermore occurred between tournament seasons. On the other hand, insofar as rules infractions in sports contests are analogous to criminal activity, McCormick and Tollison measure arrests, rather than crimes. The effect of an increase in policing resources on arrests cannot be resolved by theory. For a given crime rate, increases in police budgets and forces enable greater monitoring of criminal activity and consequently lead to more arrests. The crime rate, however, is itself a function of the level of policing resources. Rational criminals realize that greater monitoring increases the probability that their actions will result in arrest and may be deterred from committing crimes. This decreases the crime rate. The net effect on the total number of arrests is ambiguous because it depends on whether the monitoring or deterrent effect dominates. McCormick and Tollison find that increasing the number of referees leads to a reduction in the number of fouls called. This suggests that the deterrent effect dominates the monitoring effect. Shortly after their study was published, it was recognized in surveys of both sports economics, by Cairns et al. (1985), and crime, by Cameron (1988). We examine a natural experiment in sports to further understand the monitoring and deterrent effects. In the 1999-2000 season, the National Hockey League (NHL) games had either one or two referees. (1) We find that games with two referees have more penalties called, suggesting that the monitoring effect dominates the deterrent effect. We then use an instrumental variables technique to determine the effect of the number of referees on the number of infractions actually committed by players in the game. We find that the number of referees does not significantly affect the number of infractions committed. This is direct evidence that the deterrent effect is inconsequential in this context. Our results are unlikely to suffer from an endogeneity bias because the variation in the number of referees is independent of the rate of infractions in the individual games. II. THE NHL EXPERIMENT Enforcement of the rules in an NHL game is done by referees and linesmen. The NHL has historically utilized one referee and two linesmen. 
The linesmen are responsible for identifying infractions that result in a stoppage of play, such as icing (sending the puck from one end of the rink to the other) and off-sides (entering the offensive zone before the puck). The referee is responsible for identifying more severe infractions, such as slashing (swinging a stick at an opponent), hooking (using a stick to impede the progress of an opponent), and fighting (fisticuffs). When one of these infractions is identified by the referee, a penalty is called and the offending player is removed from the ice for a period of time depending on the severity of the penalty. …
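The reduced-form comparison described (penalties called in one- versus two-referee games) can be sketched as a count model; the column names (penalties, two_referees, home_team, away_team) are hypothetical, and the article's instrumental-variables step for infractions actually committed is not reproduced here.

```python
# Sketch: do two-referee games have more penalties called?
# Column names are hypothetical placeholders for 1999-2000 NHL game data.
import pandas as pd
import statsmodels.formula.api as smf

games = pd.read_csv("nhl_1999_2000.csv")  # hypothetical game-level file

res = smf.poisson(
    "penalties ~ two_referees + C(home_team) + C(away_team)",
    data=games,
).fit()
print(res.summary())

# A positive coefficient on two_referees is the pattern reported in the text:
# more referees, more penalties called (monitoring outweighs deterrence).
```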

Journal ArticleDOI
TL;DR: The authors investigated the effect of price volatility in housing markets on homeownership and housing demand and found that risk-averse households prefer to hold less risk at a given expected rate of return.
Abstract: I. INTRODUCTION The purpose of this article is to investigate the effect of heightened price volatility in housing markets on homeownership and housing demand. The literature on housing markets has recognized that buying a home involves consumption and investment motives, as in Ranney (1981), Henderson and Ioannides (1983), and Fu (1995). Purchasing a home requires a concentrated investment in housing stock that for most families is not easily diversified (Caplin et al., 1997). Indeed, in 1995, the typical American household had most of its gross assets in real estate and held no corporate equity, and only households in the top 10% of the wealth distribution held portfolios that were diversified (Tracy et al., 1999). In 1998, despite strong stock market gains, home equity was a dominant investment for most households. Although roughly two-thirds owned their homes, less than half of all households held stock; of those households owning both stocks and homes, 60% had more wealth in home equity than in stocks (Di, 2001). Nominal house prices in U.S. cities have generally risen over time, with some cities experiencing periods of steep house-price appreciation. However, homeowners in many cities have been exposed to considerable downside risk. According to repeat-sales price indexes published by the Federal Home Loan Mortgage Corporation (Freddie Mac), significant one-year declines in home prices are not uncommon. For example, nominal declines of 15%, 5%, and 9% occurred in San Antonio in 1985, Denver in 1987, and Boston in 1990, respectively. Sustained house-price declines have occurred in Texas, California, and the Eastern states. For example, in the Long Beach-Los Angeles metropolitan area, the nominal price of single-family homes declined 22% from 1990 to 1996. (1) Furthermore, there is evidence that losses at the time of sale are surprisingly common. (2) What is not clearly understood is the extent to which families are responsive to such house-price risk. The investment literature has demonstrated that risk-averse households prefer to hold less risk at a given expected rate of return. When house prices are uncertain, homeownership and demand relationships depend on the distribution of the random price variable. (3) Fu (1991) characterizes the price distribution by its first and second moments and shows that the likelihood of homeownership and the demand for owner-occupied housing are increasing in the expected rate of house-price appreciation and decreasing in house-price volatility. These results do not hold, however, for liquidity-constrained households: When households cannot borrow against the expected future gains of housing investment for current consumption, the theoretical effects on housing choices of the expected rate of return and house-price uncertainty are ambiguous (Fu, 1995). Thus, the effect of price volatility on housing choices remains to be determined empirically. The empirical research on housing markets includes a number of micro-level econometric studies on the determinants of homeownership and housing demand. These studies provide a framework for the present study and have shown that a household's after-tax relative cost of homeownership, permanent and transitory income, and family composition are important in explaining housing decisions (e.g., see Rosen, 1979; Dynarski and Sheffrin, 1985; Henderson and Ioannides, 1987; 1989).
In addition, homeownership is negatively impacted by income uncertainty (Haurin, 1991) and by an inability to generate savings for a down payment, as in Henderson and Ioannides (1987), Linneman and Wachter (1989), Haurin et al. (1994), and Gyourko et al. (1999). Nevertheless, despite the importance of owner-occupied housing in the portfolios of homeowners, surprisingly little empirical work exists on the housing choices of families in the presence of house-price uncertainty. Rosen et al. (1984) provide some empirical evidence on the question of housing choices and risk. …

Journal ArticleDOI
TL;DR: This article examined how the widespread adoption of unilateral divorce influenced the prevalence of lethal spousal violence in the United States and found that unrestricted unilateral divorce laws had small and statistically insignificant effects on the amount of domestic violence directed against wives.
Abstract: This study examines how the widespread adoption of unilateral divorce influenced the prevalence of lethal spousal violence in the United States. These evaluations are based on fixed-effects specifications for spousal homicide counts from an annual panel of U.S. states from 1968 to 1978. The results indicate that unrestricted unilateral divorce laws had small and statistically insignificant effects on the amount of lethal spousal violence directed against wives. However, the easy access to divorce created by such laws increased spousal homicides of husbands by approximately 21%. These increases were concentrated in states where the division of marital property favored husbands.
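The fixed-effects count specification described can be sketched roughly as follows; the column names (husband_victims, wife_victims, unilateral, state, year, population) are hypothetical placeholders, and the article's actual estimator and controls may differ.

```python
# Sketch: state-year panel of spousal homicide counts with state and year
# fixed effects and a unilateral-divorce-law indicator.
# Column names and the input file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("spousal_homicides_1968_1978.csv")  # hypothetical panel

res = smf.poisson(
    "husband_victims ~ unilateral + C(state) + C(year)",
    data=panel,
    exposure=panel["population"],
).fit()
print(res.summary())

# Re-running with wife_victims as the outcome corresponds to the small and
# statistically insignificant effect for wives reported in the abstract.
```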

Journal ArticleDOI
TL;DR: This paper showed that metamorphic progress, associated with creation of new industries or technological transformation of existing industries, is of the same or higher order of magnitude as perfective progress (incremental improvement of existing products and methods by incumbent firms) as a source of technological progress.
Abstract: Drunk: Can you help me find my keys? Passerby: Sure, where exactly did you drop them? Drunk: Way over there by the trash can. Passerby: Then why are you searching over here? Drunk: The light's much better under the lamppost. Milton Friedman (Economics 331, 1967) The class laughed after hearing this joke, not yet realizing how well it described the profession for which they were preparing. Even those present who cannot carry memory of a joke home from the barbershop still remember the day they first heard that little joke. The thesis of this article is that the economics profession has spent years looking for technological progress under the familiar lamppost of research and development (R&D) by incumbent firms aimed at improvement in existing commodities or productive methods. Such perfective progress (as we call it) is amenable to hedonic measurement and analysis of firm behavior and market equilibrium in terms of return on investment, public goods, and positive externalities. We show here that metamorphic progress, associated with creation of new industries or technological transformation of existing industries, is of the same or higher order of magnitude as a source of technological progress. We believe that our approach complements Arnold C. Harberger's recent emphasis on the concentration of growth in a few companies in a few industries that are achieving dramatic real cost reductions. He began to formulate his own schema in his 1990 Western Economic Association presidential address and by his 1998 American Economic Association presidential address could report considerable empirical evidence in support of this concentration (Harberger, 1998). Harberger distinguishes between yeast, which makes bread rise evenly, and mushrooms, which pop up unexpectedly in the back yard. In titling this article, we had in mind the Japanese picture of progress by inching up--or Frank Knight's (1944) Crusonia plant, which grows proportionately except as parts are cut off and eaten. (1) In contrast, we emphasize the process of this or that industry leaping forward at any given time--a process that may have prompted Schumpeter's (1934) model of creative destruction. Breakthrough discoveries in science and engineering--particularly invention of a new way of inventing, such as corn hybridization, integrated circuits, and recombinant DNA--typically drive metamorphic progress. These discoveries are rarely well understood in the early years following them. As a result, natural excludability is characteristic of these radical technologies due to the extensive tacit knowledge required to practice them and the lengthy period of learning-by-doing-with at the lab bench required to transfer them. Thus, metamorphic progress cannot be analyzed following Arrow's information as a public good paradigm. The importance of metamorphic progress based on naturally excludable technologies motivates a challenging and exciting research agenda to remove the black box covering the linkages among scientific breakthroughs, high technologies, entry and success in nascent industries, and the movement toward industrial maturity where government statistics and economic research are most likely (coincidentally) to begin. There are real data problems in studying hundreds of private start-up companies in industries still lumped into one or another classification ending in "n.e.c." (not elsewhere classified). 
They are manageable, however, if economists are willing to exploit unconventional sources and methods more familiar to organizational theorists, such as industry directories, financial practitioners' online services, the ISI and other scientific literature databases, and sophisticated matching methods for linking firms and individuals across databases. Before addressing these central issues, we make a necessary digression in the next section to clarify the relationship between metamorphic progress and the supposed acceleration of secular productivity growth post-1995 labeled the new economy by Federal Reserve Chairman Alan Greenspan (2000a, 2000b, 2001) and others. …

Journal ArticleDOI
TL;DR: In this paper, the authors consider three automated pricing algorithms that use price information as an input that could be retrieved from consumer price-gathering technology in an electronic market, and they study the supply-side effects of these three algorithms.
Abstract: I. INTRODUCTION Electronic commerce affords the opportunity to automate the process by which buyers and sellers engage in an exchange at a mutually agreeable price. For example, buyers on the Internet can use software agents to automate the gathering of information on price and other product attributes in online retail markets that are traditionally organized as posted offer markets. (1) This technology obviously reduces the consumer transaction costs of comparing prices, which presumably would induce stronger competition among sellers. However, as Hal Varian remarked in the e-commerce magazine Wired, "Everybody thinks more information is better for consumers. That's not necessarily true" (Bayers, 2000, 215). The same technology that reduces consumer transaction costs can be employed by sellers to automate the monitoring of rivals' prices, which might mitigate or even overwhelm the procompetitive effect of reduced consumer search costs. In addition, electronic markets grant more than just ideal information on competitors' prices. With electronic commerce a seller can commit to implementing an automated algorithm that could possibly sustain tacitly collusive prices. In this article we study the supply-side effects of three automated pricing algorithms that use price information as an input that could be retrieved from consumer price-gathering technology in an electronic market. Two of the three algorithms that we study have the theoretical potential to facilitate collusion: low-price matching and a trigger strategy. The third and presumably more competitive algorithm, which we refer to as undercutting, is drawn from the common retail practice of beating competitors' prices. Using the experimental method, this article explores how sellers deploy these three automated pricing algorithms in electronic markets. More specifically, we ask: (1) Do sellers prefer to set their price manually or to adopt automated algorithms that adjust price more frequently? (2) Are markets with automated pricing algorithms more competitive or less competitive than markets with manually-posted prices? (3) Does increased commitment to an automated pricing mechanism facilitate tacit collusion? To study the impact of automated pricing on seller behavior, we base our work on the model of price dispersion in retail markets by Varian (1980). Thus, our article also explores how well a theory of mixed strategy pricing organizes the behavior of sellers in a market with a nearly continuous stream of differentially informed fully revealing customers. (2) As a preview of the results, we find that (1) sellers employ automated pricing algorithms more often than they manually set their own price, (2a) automated undercutting leads to prices similar to the game-theoretic prediction, (2b) automated low-price matching generates prices significantly higher than the game-theoretic prediction, (2c) automated trigger pricing results in market prices below the game-theoretic prediction, and (3) greater commitment to automated low-price matching shifts prices closer to the joint profit-maximizing outcome. The structure of the article is as follows. Section II presents the three automated pricing algorithms we consider. The experimental design and procedures are in section III. Section IV presents and discusses our results, and section V briefly concludes. II. AUTOMATED PRICING ALGORITHMS The market environment is motivated by the model of sales in Varian (1980).
(3) In this model there are n sellers that supply buyers who each desire to purchase at most one unit of a homogeneous product. Each seller has a commonly known, constant marginal cost c of supplying a unit to a buyer and posts a price p, which the buyer can accept or reject. The private value v for each nonstrategic buyer is assumed to be a random variable, drawn independently from a known uniform distribution with a support [v̲, v̄]. Buyers differ based on the number of firms from which price quotes are received prior to making a purchasing decision. …
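To make the three algorithms concrete, here is a small sketch of how undercutting, low-price matching, and trigger pricing could map observed rival prices into a seller's next posted price; the parameter names (step, floor, trigger_price, collusive_price, punish_price) are illustrative and are not taken from the experimental design.

```python
# Sketch of the three automated pricing rules discussed in the article.
# All parameters and thresholds are illustrative, not the experiment's.

def undercut(rival_prices: list[float], step: float, floor: float) -> float:
    """Post a price just below the lowest rival price, never below the floor."""
    return max(min(rival_prices) - step, floor)

def match_low_price(rival_prices: list[float], floor: float) -> float:
    """Low-price matching: never be undersold, but do not undercut."""
    return max(min(rival_prices), floor)

def trigger(rival_prices: list[float], trigger_price: float,
            collusive_price: float, punish_price: float) -> float:
    """Trigger strategy: post the collusive price until any rival defects
    (posts below the trigger), then revert to the punishment price."""
    return punish_price if min(rival_prices) < trigger_price else collusive_price

# Example tick: rivals currently posting 1.40, 1.55, and 1.60.
rivals = [1.40, 1.55, 1.60]
print(undercut(rivals, step=0.01, floor=1.00))          # just under 1.40
print(match_low_price(rivals, floor=1.00))              # matches 1.40
print(trigger(rivals, trigger_price=1.45,
              collusive_price=1.60, punish_price=1.00))  # punishes at 1.00
```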

Journal ArticleDOI
TL;DR: In this article, the effect of trade liberalization on the use of quotas and antidumping laws can be investigated directly, and the results suggest that treaties that remove or reduce one type of distortion may lead to other policies that are even worse.
Abstract: I. INTRODUCTION It is common to recognize that tariffs have gradually been replaced by nontariff barriers (NTBs). Some authors go even further and argue there is a "Law of Constant Protection" (an expression used by Bhagwati [1988] mainly to dismiss the idea). Baldwin (1984, 600), for instance, writes: "Not only have these measures become more visible as tariffs have declined significantly through successive multilateral trade negotiations but they have been used more extensively by governments to attain the protectionist goals formerly achieved with tariffs." The purpose of this article is to set up a model in which the effect of trade liberalization on the use of quotas and antidumping laws can be investigated directly. We analyze two types of bilateral trade liberalization: tariff reductions and quota elimination. We show, first, that our model is consistent with a progression from the use of tariffs only to the use of quotas (following tariff liberalization) to the use of antidumping laws (when quotas have been jointly tariffied). Second, it is also consistent with a narrowing of the range of industries in which each of these instruments is used. Third, the extent of bilateral tariff liberalization and the ensuing degree of replacement of tariffs by NTBs depend on the combination of two industry-specific characteristics: the government's preferences for domestic firm profits and the importance of international transport cost in the industry. Overall, our results suggest that treaties that remove or reduce one type of distortion may lead to the use of other policies that are even worse, but despite the use of NTBs overall trade is more liberal. To show these results, we use a standard two-country model with two-way trade where the policy maker's objective function is quasi-concave. This last characteristic is important to explain some of the results as it leads to strategic complementarity in the tariff game and to the possibility of interior solutions in the quota game. Our results appear to track three separate sets of empirical facts well. First, evidence shows that there is a clear emergence of quantitative restrictions in the 1960s in manufacturing sectors and in developed economies followed by an explosion in the use of antidumping constraints since the 1980s. (1) The emergence of these two NTBs can be linked to preceding multilateral trade rounds and in particular to the completion of the Kennedy Round. Today, the General Agreement on Tariffs and Trade (GATT, 1990, 10) notes that "despite a recent decline in the number of anti-dumping investigations initiated in the United States and the European Communities, anti-dumping remains (after tariffs) the most frequently invoked trade policies in these countries." Antidumping measures are also spreading to developing countries (Nogues, 1993). Quantitative restrictions are meanwhile on the decline as signatories of the Uruguay Round agreement are now required to "tariffy" existing quotas and constrain their future use. GATT (1990) documents specific import quotas, import licensing restrictions, and items subject to import prohibition that have been eliminated in recent years. Second, the gradual replacement of trade tools has also been accompanied by a reduction of the number of sectors affected by NTBs. 
Whereas very few products were traded without levy before the various GATT rounds, Renner (1971) finds that 7% of the (four-digit) product classes were affected by quantitative restrictions in the United States and in the European Communities in 1970. The number of antidumping cases in the European Communities (initiated and/or ending up with a positive decision) represents a fraction of this number (see GATT, 1993b) mainly concentrated on a very small number of sectors (see Messerlin and Reed, 1995). Of course, the fact that a smaller number of sectors are affected by NTBs does not mean that overall protection necessarily decreases. However, given the high rates of growth in world trade, it seems likely that despite this substitution the overall level of protection has decreased over time. …

Journal ArticleDOI
TL;DR: In this paper, the authors define a family of conceptually consistent income measures, each based only on the expectations held at the time of measurement, that they describe as generalized-Hicksian income.
Abstract: I. INTRODUCTION This article argues that the distinction between expected and unexpected capital gains (losses) is fundamental to the measurement of income. The way capital gains are treated can have a significant impact on major macroeconomic statistics, such as national income and saving, the balance of payments, government income and saving, and depreciation (see Gale and Sabelhaus, 1999; Joisce and Wright, 2001). The recent sale of spectrum licenses is a case in point. The spectrum, a natural asset over which governments enforce property rights, became unexpectedly valuable as a result of the development of mobile telephones. In 2000 and 2001 some governments realized gains of $30 billion or more in a day by auctioning licenses to use sections of the spectrum (see UN Statistics Division, 2000). The concept of income, although widely used, remains vague. It is necessary to inquire why the concept of income is needed and what use it serves. The main purpose of income is to provide guidance to households or other economic units, including government, on the rate at which they can afford to consume when there is uncertainty about future resources. The more successfully measured income meets this requirement, the greater its power as an explanatory variable for the analysis of consumer behavior. The treatment of expected and unexpected capital gains in income measurement is a continuing source of controversy. (1) Even in a perfect foresight setting, the concept of income is not straightforward. Hicks (1946) considered a number of definitions. Two in particular command support in the literature. Hicksian income no. 1 is the maximum amount that can be consumed while maintaining wealth intact. (2) Hicksian income no. 2 is the maximum sustainable level of consumption. The general consensus that emerges from the literature is that, with perfect foresight, all capital gains are included in Hicksian income no. 1. However, Asheim (1996) shows that some capital gains are excluded from Hicksian income no. 2 when the interest rate varies over time. Both income concepts coincide when the interest rate is fixed. When the perfect foresight assumption is relaxed, a distinction must be drawn between expected and unexpected capital gains. It is important that we move beyond a perfect foresight setting because the concept of income has been developed primarily to assist decision taking under uncertainty. If income is meant to act as a budget constraint indicating the resources available for consumption each period, the important question is how should a rational consumer react to unexpected capital gains. This article develops a general theoretical framework for the measurement of Hicksian income no. 1 under uncertainty that is capable of handling all kinds of capital gains on all kinds of assets ranging from money market assets to mineral deposits. (3) First, the concept of income with perfect foresight is developed and analyzed. In the following section, uncertainty is introduced. Income depends on expectations of future receipts that are liable to be revised with the passage of time. The time at which income in a particular period is measured is therefore crucial. Income can be measured at any time, but attention has tended to focus on the beginning and end of the period. Hicks (1946, 178-79) described income as measured at these times as ex ante and ex post income. Ex ante income, as noted by Eisner (1990, p. 1180), is essentially the same as Friedman's (1957) concept of permanent income. 
Ex post income as defined by Hicks--now usually described as Haig-Simons income after two earlier proponents of the concept--is a widely used objective measure familiar to most economists. However, it is conceptually flawed because it utilizes two different and generally inconsistent sets of expectations, those held at the beginning and end of the period. We define a family of conceptually consistent income measures, each based only on the expectations held at the time of measurement, that we describe as generalized-Hicksian income. …
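As a minimal illustration of the distinction the article draws, consider Hicksian income no. 1 in the simplest perfect-foresight case: a single asset, a constant interest rate r, and consumption at the end of each period. The notation is ours, not the article's.

```latex
% Wealth accumulation with end-of-period consumption:
\[
  W_{t+1} = (1+r)\,W_t - C_t .
\]
% Hicksian income no. 1 is the largest consumption consistent with keeping
% wealth intact, W_{t+1} = W_t, which gives
\[
  Y^{H1}_t = r\,W_t .
\]
```

When expectations of future receipts are revised, W_t itself jumps between measurement dates; ex post (Haig-Simons) income adds that revaluation and therefore mixes two sets of expectations, whereas the generalized-Hicksian measures defined in the article use only the expectations held at a single measurement date.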

Journal ArticleDOI
TL;DR: In this paper, the authors examined the information content of asymmetric policy statements for predicting future FOMC policy actions using monthly dummy variables to indicate the prevailing direction of policy asymmetry over a sample period of 1984-2002.
Abstract: I. INTRODUCTION Since 1983, the Federal Open Market Committee (FOMC) has included in its policy directives a statement indicating conditional expectations about the future. Although the specific language used to communicate expectations has evolved over the years, the "bias" statement has persistently been interpreted as an indicator of the likely direction of future changes in the committee's Federal funds rate target and has therefore been carefully monitored by Fed watchers and other financial market participants. As the FOMC has enhanced its efforts to communicate its policy intentions to the public in recent years, the contemporary version of the bias statement has been subject to considerable discussion and scrutiny. From 2000 through 2002, the FOMC's postmeeting press releases included a statement referring to the "balance of risks" in the "foreseeable future." Paralleling the Federal Reserve's dual objectives of price stability and maximum sustainable economic growth, the statements took the form of stating that the concerns of FOMC members about prospective economic developments are tilted toward either "inflation pressures" or "economic weakness." In 2003, the committee expanded this "balance of risks" language further to identify separate risk assessments for both inflation and real economic activity. When it was adopted, the language of the balance-of-risks statement was intended to be more general than the previous statements of policy bias, to avoid giving the impression that the statements directly signaled impending changes in the funds rate target. Nevertheless, as was the case with the earlier language, the balance-of-risks statement has tended to be interpreted as indicating likely future policy moves. In this article, I examine the question of whether such an interpretation might be warranted as an empirical matter. In particular, I examine the information content of asymmetric policy statements for predicting future FOMC policy actions using monthly dummy variables to indicate the prevailing direction of policy asymmetry over a sample period of 1984-2002. The time-series approach facilitates the use of a monthly data set for conditioning the information content of the bias statement on macroeconomic variables thought to be of importance to policy makers. In particular, I use inflation and output data to estimate a baseline Taylor-rule specification for policy and test whether the bias statement provides any additional information for forecasting changes in the FOMC's Federal funds rate target. The evidence presented here shows that the statements of policy asymmetry do indeed convey information that is useful for forecasting changes in the funds rate target. The information content in the bias statement has been a statistically significant factor for predicting changes in the funds rate target over the sample period, even after controlling for responses to policy variables in the Taylor-rule equation. In light of this finding, I estimate an alternative specification in which the variables representing asymmetry are interacted with the parameters of the estimated Taylor-rule equation. From this perspective, statements of policy bias are associated with a greater or lesser degree of responsiveness to inflation and output data. During the sample period considered here, variation in the committee's responses to inflation data has evidently been the predominant factor for explaining the predictive power of asymmetric policy statements. 
This is particularly true for the first half of the sample period, in which the FOMC was actively pursuing a policy of disinflation. II. THE ASYMMETRIC POLICY STATEMENT AND ITS INTERPRETATION A Brief History From 1983 until 1997 statements of an asymmetric bias were included in the FOMC Policy Directive, as a note that "greater reserve restraint" or "lesser reserve restraint" either "would" or "might" be acceptable during the intermeeting period, depending on emerging economic circumstances. …
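A minimal sketch of the kind of test described above, assuming a hypothetical monthly data set with column names d_target (change in the funds-rate target), infl and gap (the Taylor-rule variables), and bias_tight/bias_ease (dummies for the direction of the asymmetric statement); this illustrates the approach rather than reproducing the article's exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fomc_monthly.csv")   # hypothetical monthly data set, 1984-2002

# Baseline Taylor-rule regression and the augmented version with bias dummies.
baseline = smf.ols("d_target ~ infl + gap", data=df).fit()
with_bias = smf.ols("d_target ~ infl + gap + bias_tight + bias_ease", data=df).fit()

# If the bias dummies are jointly significant, the statements carry information
# beyond the standard Taylor-rule variables.
print(with_bias.f_test("bias_tight = 0, bias_ease = 0"))

# The alternative specification interacts the asymmetry dummies with the
# inflation response, letting the bias shift the estimated policy reaction.
interacted = smf.ols("d_target ~ infl * (bias_tight + bias_ease) + gap", data=df).fit()
print(interacted.summary())
```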

Journal ArticleDOI
TL;DR: Building on Chezum and Wimmer, the authors study the effect of certification in the market for young thoroughbreds and find that adverse selection is present in noncertified sales but absent in certified sales.
Abstract: I. INTRODUCTION Researchers have long understood that a variety of mechanisms may be used to overcome problems of adverse selection. Stigler (1961, 224) notes that "[s]ome forms of economic organization may be explicable chiefly as devices for eliminating uncertainties in quality," and Akerlof (1970) cites independent groups, such as the Consumers Union and United Laboratories, that test and certify the quality of goods. There is, however, little empirical evidence that illustrates the effectiveness of such mechanisms. In this article we examine the effect certification has on a market characterized by adverse selection. Using data from the market for young thoroughbreds, we compare the performance of sales where auction houses provide certification services with sales where certification is absent. The thoroughbred racehorse market consists of two distinct types of public auctions: certified and noncertified sales. In a certified sale, auction houses physically inspect the horses nominated to their sales, selling only the horses they conclude are from the upper end of the quality distribution. In noncertified sales, auction houses sell all horses nominated to a sale. Our empirical strategy is to perform tests that indicate whether adverse selection affects market outcomes in either certified or noncertified sales. A finding that adverse selection is present in noncertified sales but absent in certified sales is consistent with the hypothesis that certification alleviates problems of adverse selection. We adopt three approaches to test for the presence of adverse selection. (1) The first approach is in the spirit of Chiappori and Salanie (2000), who examine the relationship between unobservable factors from participation and performance equations. We model adverse selection as a case of sample-selection bias and examine the correlation between errors in participation and price equations. (2) To accomplish this we use a unique data set that allows us to estimate how breeder decisions to sell or retain horses affect market prices. The data set consists of a 10% random sample of all thoroughbreds born in 1993 and includes both horses retained by their breeders, who own a horse at the time of its birth, and horses that breeders chose to sell. Our second test follows Genesove (1993) and Chezum and Wimmer (1997) by examining the relationship between seller characteristics and price. Theoretically, when observable seller characteristics are correlated with seller incentives to select goods adversely, prices should reflect these differences. Chezum and Wimmer observe that some breeders sell all of their horses, whereas others retain a portion to race. They find that market prices are inversely related to the extent of a breeder's involvement in racing, concluding that adverse selection affects market outcomes. We extend this work in two respects. First, we isolate the effect seller characteristics have on price through the decision to sell or retain a horse. Second, we compare the effect seller characteristics have on observed prices in certified and noncertified sales. Our last test follows Bond (1982) who attempts to identify adverse selection in the market for used trucks by comparing the repair records of trucks that were sold with the records of trucks retained by their original owners. 
Because the horses in our sample had not begun their racing careers at the time they were sold, we use data on racetrack earnings as an ex-post measure of quality and compare the quality of horses sold in noncertified sales with that of horses sold in certified sales and of horses retained by their breeders. The results from each approach are consistent with certification alleviating problems of adverse selection. We find that, holding observable attributes constant, horses that would receive unusually high market prices in noncertified sales are even more valuable in other options and are less likely to be sold in a noncertified sale. …
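A two-step sketch of the first test is given below; the variable names and the data file are hypothetical, and the breeder's racing intensity stands in for the seller characteristic that enters the sell/retain decision but is excluded from the price equation. This is an illustration of the sample-selection approach, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("thoroughbreds.csv")   # hypothetical file: 10% sample of the 1993 crop

# Step 1: probit for the breeder's decision to sell (sold = 1) rather than retain.
X_sel = sm.add_constant(df[["sire_fee", "dam_wins", "breeder_racing_intensity"]])
probit = sm.Probit(df["sold"], X_sel).fit()

# Step 2: price equation on sold horses, adding the inverse Mills ratio.
xb = probit.fittedvalues                                   # linear predictor X*beta
mills = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index, name="mills")

sold = df["sold"] == 1
X_price = sm.add_constant(pd.concat(
    [df.loc[sold, ["sire_fee", "dam_wins"]], mills[sold]], axis=1))
price_eq = sm.OLS(np.log(df.loc[sold, "price"]), X_price).fit()

# A significant coefficient on 'mills' signals correlated unobservables across the
# participation and price equations; estimating this separately for certified and
# noncertified sales mirrors the comparison described above.
print(price_eq.summary())
```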

Journal ArticleDOI
TL;DR: In this article, the author studies two types of congressional representation: district representation, reflecting interests related to the politician's constituents, and alma mater affiliation, reflecting the politician's personal interests.
Abstract: Does congressional representation of a university affect the distribution of research funding to universities? This article studies two types of congressional representation: district representation, reflecting interests related to the politician's constituents, and alma mater affiliation, reflecting the politician's personal interests. I find that both types of representation matter and that lobbying efforts by public and private universities may differ. Thus this article suggests politics plays a role in diverting funding that might otherwise be given to other institutions under a more objective process, reducing the potential effectiveness of the funding on research activities.

Journal ArticleDOI
TL;DR: In this article, the authors derive the implications of the nonlinearity hypothesis for time-series models and, by investigating time-series data for an individual country (West Germany) and examining two types of public spending, public investment and public consumption, analyze that country's fiscal policy.
Abstract: I. INTRODUCTION Endogenous growth theories differ from exogenous growth theories with respect to their assumed relationships between economic growth and policy variables. In contrast to their exogenous counterparts, endogenous growth theories suggest various channels through which growth can be affected by government policies. One of these channels is assumed to be due to supply-side effects of productive public spending on infrastructure, human-capital accumulation, legal protection, and so on, which can be viewed as factors for private production. Therefore, increases in productive public spending will increase the economy's overall productivity and thus enhance growth. Empirical cross-country studies investigating the relationship between government and growth--such as Ram (1986), Barro (1991), Easterly and Rebelo (1993), and Levine and Renelt (1992)--have produced rather ambiguous results, ranging from inconclusive to contradictory. An explanation for the failure to empirically resolve this issue could be that the relationship is nonlinear in nature, as is suggested by endogenous growth models like that of Barro (1990). Empirical studies that do not account for such nonlinearities may easily lead to inconclusive--if not misleading--answers to the question of how fiscal policy affects economic growth. By incorporating productive public spending into the production function, Barro (1990) demonstrates theoretically that the relationship between growth and government size (i.e., the share of public spending in gross domestic product, GDP) may well be nonmonotonic. In this case an optimal, growth-maximizing government size may exist, such that additional public spending increases (decreases) growth if the government size is below (above) the optimum. This result--known as the nonlinearity hypothesis--has also been suggested in the theoretical analyses of Barro and Sala-i-Martin (1992), Glomm and Ravikumar (1994), Lau (1995), and Devarajan et al. (1996), among others. Empirical studies investigating the optimal government size--like Barro (1990), Devarajan et al. (1996), Kelly (1997), and Karras (1993; 1996; 1997)--rely on cross-country data. So does Dowrick (1996), the only study explicitly examining the validity of the nonlinearity hypothesis. These empirical studies assume that all countries have the same optimal government size and thus are not able to identify the individual countries' optimal fiscal policy or the consequences from changes in government size. In this article we derive implications of the nonlinearity hypothesis for time-series models. By investigating time-series data of an individual country one can examine the validity of the nonlinearity hypothesis and analyze the country's fiscal policy. Jones (1995), Kocherlakota and Yi (1996; 1997) and Evans (1997) characterize endogenous growth models by the property that a permanent change in policy variables leads to a permanent change in the growth rate. If growth effects due to changes in public spending depend on the deviation of public spending from its optimal level, as the nonlinearity hypothesis suggests, growth effects induced by a change in public spending should vary as government size varies--unless the optimal spending level varies accordingly. Conducting time-series studies, Jones (1995), Evans (1997), and Karras (1998; 1999) conclude against endogenous growth, which contrasts with the conclusions of Kocherlakota and Yi (1996; 1997). 
However, by relying on linear models, all these studies force growth effects from public policy to be constant. Therefore, they prevent any analysis of nonlinear endogenous growth phenomena as implied by Barro-type models. As we will argue, if the nonlinearity hypothesis holds, results from linear models may in fact be misleading. The empirical analysis presented herein is based on quarterly West German time-series data and examines two types of public spending, public investment and public consumption, as proxies for productive public spending. …
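For reference, the textbook form of the Barro (1990) nonlinearity can be written as follows; the notation is the standard one and is not taken from the article.

```latex
% Production uses private capital k and productive public spending g,
\[
  y = A\,k^{1-\alpha} g^{\alpha}, \qquad 0 < \alpha < 1,
\]
% with g financed by a flat income tax at rate \tau = g/y. With CRRA utility
% (parameters \theta, \rho) the balanced-growth rate is
\[
  \gamma(\tau) = \frac{1}{\theta}\Big[(1-\tau)(1-\alpha)\,
      A^{\frac{1}{1-\alpha}}\,\tau^{\frac{\alpha}{1-\alpha}} - \rho\Big],
\]
% which is hump-shaped in \tau and maximized at \tau^{*} = g/y = \alpha: additional
% spending raises growth below the optimum and lowers it above the optimum.
```

This hump shape is the nonlinearity that the time-series tests in the article are designed to detect.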

Journal ArticleDOI
TL;DR: In this paper, the authors quantify the potential financial impact of each individual's death on his or her survivors and measure the degree to which life insurance moderates these consequences, and identify a systematic gender bias: for any given level of financial vulnerability, couples provide significantly more protection for wives than for husbands.
Abstract: Using the 1995 Survey of Consumer Finances and an elaborate life-cycle model, we quantify the potential financial impact of each individual’s death on his or her survivors and measure the degree to which life insurance moderates these consequences. Life insurance is essentially uncorrelated with financial vulnerability at every stage of the life cycle. As a result, the impact of insurance among at-risk households is modest, and substantial uninsured vulnerabilities are widespread, particularly among younger couples. We also identify a systematic gender bias: For any given level of financial vulnerability, couples provide significantly more protection for wives than for husbands. (JEL D10, G22)

Journal ArticleDOI
TL;DR: The authors show that the trust standard and the correlation and corroboration tests are pseudo-criteria for judging the quality of proxies and results based on them because they are capable of supporting fallacious statistical estimates.
Abstract: I. INTRODUCTION Today proxies for cultural, political, and institutional variables abound in econometric studies, and strong policy recommendations are based on results obtained with them. Unlike the traditional data of economists, they are neither market-generated nor the fruit of large official or semi-official statistical projects. The main providers are ideological or public-interest organizations, such as Transparency International and Freedom House, supplemented by scholars, corporations, and miscellaneous organizations. The diverse and perhaps suspect origins of these data are reasons to use them cautiously. Yet even the prudent Robert Barro (1999) takes Freedom House's ratings of international political and civil liberty on faith in his recent study of the determinants of democracy--the trust standard. What I shall call the correlation test is a common way to assess multiple proxies for an intangible variable which also breeds complacency because it often is used as a substitute for direct appraisal of them. Alberto Alesina and Beatrice Weder (1999, p. 10) use it in a study of corruption's relation to foreign aid, claiming that "these relatively high correlations [among proxies for corruption] provide some confidence in the measures of corruption since most of them were compiled by different institutions using very different ... methodologies." The same spirit infuses what I shall call the mutual corroboration test for regression results. It is considered probative that different proxies for the same intangible variable produce similar or at least mutually compatible regression estimates. The correlation and corroboration tests also appear in a recent World Bank study (Kaufman et al., 2000, p. 11) utilizing measures of the quality of governance. The authors write that "if [the data were not informative], we would not expect to see the ... strong agreement across sources about the quality of governance. Particularly striking is the broad consensus that emerges [among many diverse raters]." They acknowledge the conceptual differences among the literally hundreds of measures used in their study (all qualitative) but omit none from their composite indicators. Finally, a pioneer in the new data, Gerald Scully (1992), constructs eight economic freedom indexes plus an average of them to be the economic freedom variable in one of his projects. He cites high rank correlations among them approvingly. These tests use mutual resemblance or consistency as substitutes for directly assessing the suitability of proxies for a particular project. One-by-one evaluation is, to be sure, a laborious and inconvenient task, but ease and convenience are not tests of the validity of evaluation procedures. Correlation among the proxies or results obtained with them must not be confused with aptness of the proxies or results. The correlation and corroboration tests assume, for instance, that only measurement error differentiates proxies from one another. Yet political-institutional variables normally are defined differently from one another. "Good governance" is already a subjective judgment; "broad consensus" allows differences of opinion. Identical policy conclusions cannot follow from regressions based on measures having different meanings, so the quotation from Alesina and Weder is an indictment of their study, not a defense of it. 
The main purpose of this article is to show that the trust standard and the correlation and corroboration tests are pseudo-criteria for judging the quality of proxies and results based on them because they are capable of supporting fallacious statistical estimates. These tests, I show, endorse statistical nonsense in an interesting, familiar case. The nonsense is attributable in part to the inapposite use of proxy regressors in estimations of a simple and well-known model, specifically bivariate regressions often cited in support of the strict free-market school of thought. …
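A stylized simulation, ours rather than the article's, makes the point concrete: the two proxies below track an irrelevant common factor rather than the variable they are meant to measure, yet they correlate strongly with each other (the correlation test) and yield similar, highly significant regression coefficients (the corroboration test), none of which says anything about the intended variable.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                 # the concept the proxies claim to measure
z = rng.normal(size=n)                 # a different trait the raters actually capture
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

proxy1 = z + 0.3 * rng.normal(size=n)  # two "independent" ratings of the concept
proxy2 = z + 0.3 * rng.normal(size=n)

# Correlation test: the proxies agree strongly with each other (about 0.9 here).
print(np.corrcoef(proxy1, proxy2)[0, 1])

# Corroboration test: regressions of y on each proxy give similar, highly
# significant slopes -- yet neither measures the effect of x.
for p in (proxy1, proxy2):
    res = sm.OLS(y, sm.add_constant(p)).fit()
    print(round(res.params[1], 2), res.pvalues[1])
```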

Journal ArticleDOI
TL;DR: In the spirit of Yang and Stitt, who show that the Ramsey (1927) rule for optimal commodity taxation changes in important ways when the assumption of constant production costs is relaxed, the authors show that a fixed charge unambiguously reduces the price of a higher-quality good, relative to a lower-quality version of the same good, only when that good is sold by a perfectly competitive, constant-cost industry.
Abstract: I. INTRODUCTION The Alchian and Allen (1967, 63-64) theorem, sometimes elevated to the status of a third law of demand, as in Bertonazzi et al. (1993), is a clever application of the fundamental principle of economizing behavior. As usually stated, the theorem suggests that by reducing the price of higher quality relative to lower quality versions of the same good, a fixed transportation charge will trigger a predictable shift in the mix of quality grades purchased by consumers in distant markets, as compared to the mix purchased locally. Producers will respond rationally to this difference in behavior. Relatively more high-quality goods will be included in outbound shipments, leaving more low-quality goods to be sold nearer the point of origin. Hence, other things being the same, the theorem predicts that it will be harder to find "good" apples in the State of Washington, a prime apple-growing region, than in, say, New York City, where "bad" apples are comparatively more expensive. The first law of demand still holds, of course, in that fewer apples of both types will be consumed in New York than in Seattle, ceteris paribus. Similarly, given that the cost of a babysitter will be the same no matter where the parents of young children decide to spend the evening out, they are more likely than an otherwise identical childless couple to choose an upscale restaurant over an inexpensive eatery and to go to the theater rather than to the movies. Moreover, if both couples plan to see a play or a concert, the couple with children will opt for better seats. The foregoing examples indicate that the Alchian and Allen theorem is rich in empirical implications. In what follows, however, we show that placed in the context of a market model, its range of applications is narrower than has been acknowledged in the literature heretofore. More specifically, our analysis demonstrates that a fixed charge unambiguously reduces the price of a higher-quality good, relative to a lower-quality version of the same good, only when that good is sold by a perfectly competitive, constant-cost industry. Under increasing-cost conditions or imperfectly competitive industry structures, by contrast, it is possible for relative prices either to be unaffected by the addition of a fixed charge or, indeed, for the lower-quality good to become relatively cheaper in distant markets, depending on the elasticity characteristics of relevant market demand and supply functions. Our analysis is in no way intended as an attack on Alchian and Allen or their fine textbook, which has taught more than one generation of economists to be better price theorists. We do not quarrel in the least with the pedagogical value of their theorem, which has been immense. Our purpose is to identify some plausible scenarios under which the third law of demand may fail to hold. The present article is thus in the spirit of Yang and Stitt (1995), who show that the Ramsey (1927) rule for optimal commodity taxation is changed in important ways when the assumption of constant production costs is relaxed. It is organized as follows. In the next section, the fundamentals of the Alchian and Allen theorem are described in more detail and its predictions are situated within the existing literature. Section III explores the theorem's operations under alternative cost conditions and industry structures. Factors complicating the analysis of the effects of a fixed charge on the relative prices of high- and low-quality goods are also addressed in this section. 
Finally, section IV contains some concluding remarks. II. SHIPPING THE GOOD APPLES OUT To borrow an example from a classic statement of the Alchian and Allen theorem, (1) suppose that "good" Washington apples sell for 10 cents each in Seattle and "bad" apples each cost a nickel there. Because the price of a good apple is twice that of a bad apple, Seattle's consumers must sacrifice the opportunity of buying two bad apples to obtain a good one. …
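Continuing the numbers just given, and assuming for illustration a fixed shipping charge of 5 cents per apple (the size of the charge is our choice, made only to keep the arithmetic simple):

```latex
\[
  \text{Seattle: } \frac{p_{\mathrm{good}}}{p_{\mathrm{bad}}}
    = \frac{\$0.10}{\$0.05} = 2,
  \qquad
  \text{distant market: } \frac{\$0.10 + \$0.05}{\$0.05 + \$0.05} = 1.5 .
\]
```

The fixed charge lowers the relative price of the good apple in the distant market, the substitution toward quality that the theorem predicts and that section III then qualifies under increasing costs and imperfectly competitive structures.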

Journal ArticleDOI
TL;DR: In this article, the authors examine the distribution of federal expenditure and tax shares among the states from 1975 to 1997 and show that there has been a broad continuity in the interstate distribution of federal funds and taxes, with some states receiving more than they contribute in taxes.
Abstract: 1. INTRODUCTION Under a majority voting rule, representatives of specific political districts must bargain with representatives of other districts to build a coalition for enacting legislation that benefits their narrow constituencies. But the theoretical public choice literature warns of potential for unstable majorities. Minimum winning coalitions divide program benefits just among their members, creating incentives for those left out to entice defection by offering rewards to those who leave and form a different coalition. As noted in Mueller (1989), new coalitions emerge, undermining the old ones, leading to cyclical majorities, short-term programs, and highly skewed distributions of program benefits. (1) Despite these dire predictions, there is a general consensus in the literature [such as Tullock (1981)] that programs are more stable and allocations more equal than the theory suggests. (2) In an effort to explain the discrepancy between prediction and observation, researchers have pointed to institutional rules and practices in Congress. For example, as stated in Shepsle and Weingast (1981a) and Weingast and Marshall (1988), the committee system provides a "structure-induced equilibrium" that limits the possible range of vote trading and thereby helps maintain coalitions. Further, Shepsle and Weingast (1981b), Miller and Oppenheimer (1982), and Collie (1988) argue that universalist sharing of program benefits enlarges the winning coalition and extends its political support. (3) In this article we offer additional evidence of broad, stable sharing in many programs enacted by Congress by describing interstate distributions from the Federal Highway Trust Fund (HTF). The allocation formula for the HTF was initiated in 1916, but despite wide divergence across the states in growth of various economic factors over the rest of the twentieth century (such as vehicle registration and population) that might have led to redirection of highway funds, there were comparatively limited HTF allocation adjustments. Analysis of state receipts from the HTF relative to tax payments into the fund reveals that some states collect much more than they contribute, whereas others pay in more than they receive. Even so, interstate ratios of HTF apportionments to payments have remained stable across the years, varying less than changes in highway use measures would suggest. Going beyond this specific program, we examine overall federal expenditure and tax shares among the states from 1975 to 1997 and show that there has been a similar continuity in the interstate distribution of federal funds and taxes. (4) As with the highway program, there is broad, stable sharing of federal expenditures across the states, with some receiving more than they contribute in taxes to the federal government. To better understand this observed stability and use of relatively egalitarian sharing rules and to go beyond existing explanations, we emphasize the desire of politicians to minimize the high transaction costs of negotiating and enforcing political coalitions. Politicians have incentive to prevent unraveling of political agreements to avoid the costs of searching for new coalition partners, reaching agreement on the nature and distribution of program benefits and costs, and verifying compliance. These activities detract from a legislator's ability to address other voter concerns. Moreover, legislators seek to protect constituent benefits accruing from long-term programs that would be lost if coalitions unraveled. 
(5) Accordingly, we argue that politicians assemble greater than minimum-sized coalitions to build broad political support for their legislative programs, offering benefits to a larger constituency in exchange for additional votes. Considerable negotiation over the distribution of program benefits and costs may be required, so that once agreements are reached, politicians will be loath to consider a major reallocation that could undermine the coalition. …

Journal ArticleDOI
TL;DR: As discussed in this paper, the economic theory of regulation has been an exception to the usual progression of economic theories from descriptive to graphical to mathematical formulation; the authors develop a graphical exposition of the theory and apply it to regulatory phenomena such as the deregulation movement and the specific pattern of observed cross-subsidies.
Abstract: I. INTRODUCTION Historically, many if not most economic theories have advanced in three fairly distinct stages, progressing from descriptive to graphical to mathematical formulations, generally in that sequence. The economic theory of regulation, however, has been an exception to this normal order of progression. The fundamental corpus of this theory was laid out initially in the descriptive discussions provided by Stigler (1971), Posner (1974), and others. From these early presentations, however, the theory was subsequently formalized mathematically by Peltzman (1976) and Becker (1983) without passing through the graphical stage of its development. (1) Regardless of the chronological order of the progression of this theory, however, we believe that a fully developed graphical exposition is likely to be of considerable value even now. At least two considerations support this view. Specifically, by making this theory more accessible to a wider audience and by better illustrating the basic underlying mechanics behind it, the likelihood of significant future advancement, we believe, is enhanced. Indeed, currently there remain a number of observed regulatory phenomena that have yet to be satisfactorily explained by the existing theory (e.g., the deregulation movement and the specific pattern of observed cross-subsidies). (2) An improved understanding of the fundamental components of the theory should facilitate its further advancement and lead to an expanded ability to explain a broader range of regulatory practices. We illustrate this point through a variety of applications to provide an improved understanding of, inter alia, the markets likely to be chosen for regulation, the propensity of regulatory benefits to be spread across interest groups, the symbiotic nature of regulation and cross-subsidization, and the economics of deregulation. We also are able to depict graphically the relationship between the economic theory of regulation and the traditional normative model of regulation, which assumes that regulators maximize social welfare. The article is organized as follows. In section II, we briefly describe the regulator's general optimization problem--viz., what precisely is being maximized and what constraints apply to that maximization problem. The latter topic--the constraints--will be of primary interest here. Section III presents the formal graphical analysis of these constraints by deriving what we label the regulator's "benefits budget constraint." As its name suggests, the benefits budget constraint defines the locus of maximum benefits the regulator is able to deliver to the affected interest groups. Importantly, that constraint is determined by the prior, unregulated market equilibrium price and output and by general market parameters, such as demand elasticity, costs, and so on. Section IV describes the resulting regulatory equilibrium. Section V, then, applies the graphical tools developed in sections II-IV to explain a number of commonly observed regulatory phenomena. Here, both the ability to provide new insights and the pedagogical value of the graphical approach are illustrated. Finally, section VI concludes. II. THE REGULATOR'S GENERAL OPTIMIZATION PROBLEM The economic theory of regulation postulates that regulators will attempt to maximize some objective function (most generally, the regulator's utility) by implementing regulatory policies that benefit (and, by necessity, harm) particular interest groups. 
(3) The benefits provided to these groups are then used to "purchase" from them the objects that are directly valued by the regulator--that is, the direct arguments contained in the regulator's objective function. The distribution of these benefits and harms across the affected groups (which is determined by the regulator's choice of a particular regulatory policy) is selected to maximize the value of the regulator's objective function. The ability of any specific interest group to curry regulatory favor, then, will depend on the capacity of that group to deliver the objects of value to the regulator in exchange for the benefits provided by the regulatory process. …

Journal ArticleDOI
TL;DR: In this paper, the authors developed a theoretical model in which some families may be able to achieve a cooperative outcome, which increases the total resources available to children, and derived the conditions under which families can achieve each of these outcomes, and specifically model how policy variables affect whether or not a particular outcome is achieved.
Abstract: I. INTRODUCTION In response to the high levels of poverty faced by children of divorced parents, federal and state governments adopted a number of policies during the 1980s aimed at increasing child support awards and payments. These laws required all states to have numerical child support guidelines to be used by parents who could not agree on their own and to increase state enforcement of delinquent child support awards. Implicit in much of the policy discussion about divorce is the assumption that within marriage parents will jointly act in the best interest of their children, but that after divorce children may not be adequately provided for, and legislative action is required. In the extreme, this view is represented by the stereotype of the deadbeat dad, a term that describes a father who doesn't care enough about his children to continue any involvement with them either in terms of time or money. Similarly, economic models often assume a noncooperative outcome in which the non-custodial father will not provide what he did during marriage (e.g., Weiss and Willis, 1985). Our article makes several contributions to the literature on the determinants of child support by absent fathers. In contrast to almost all of the work on this topic that has been based on noncooperative models, we develop a theoretical model in which some families may be able to achieve a cooperative outcome, which increases the total resources available to children. In particular, our model identifies three different types of outcomes: (1) cooperative and self-enforcing; (2) noncooperative and self-enforcing; and (3) noncooperative and state-enforced. Each of these outcomes has different implications for the well-being of children and for the level of government involvement in divorce settlements. We then derive the conditions under which families can achieve each of these outcomes, and we specifically model how policy variables affect whether or not a particular outcome is achieved, the levels of payments, and levels of compliance (conditional on the type of outcome). Our empirical work provides reduced-form evidence that is consistent with a model that distinguishes these different types of outcomes, and we find that policies can affect the type of outcome, as well as the level of child support awards and payments. Section II of this article provides background and motivations for our topic. We develop a general theoretical framework in section III and compare outcomes under assumptions of symmetric versus asymmetric information. Section IV incorporates the legal environment by modeling the impact of guidelines and state enforcement efforts on the probability the custodial parent (CP) will impose the guidelines, the level of child support awards, and the noncustodial parent's (NCP's) compliance. In section V we interpret comparative statics. Section VI examines the determinants of reduced-form models of the probability of a court-ordered child support award and shows that the predictions of the model are consistent with data on the characteristics of divorce settlements and parental behavior for those with voluntary versus court-ordered settlements. A summary of the findings and policy implications concludes. II. BACKGROUND AND MOTIVATION In contrast to the marriage literature, which predominantly uses cooperative bargaining models, the literature on divorce generally assumes that divorcing parents are unable to reach cooperative agreements. 
For example, Weiss and Willis (1985) model parental expenditures on children after divorce as a principal-agent problem in which the absent father cannot verify the child expenditures made by the mother. This problem of asymmetric information leads to an inefficient, noncooperative outcome. Del Boca and Flinn (1995) also use a noncooperative model to examine the government's choice of compliance outcomes. Others have analyzed compliance with child support awards using an economics of crime framework (Beron, 1988; Beller and Graham, 1991; Chambers, 1979). …

Journal ArticleDOI
TL;DR: As discussed in this article, Feenstra and Hanson (1996a, 1996b, 1999) find that, contrary to the conclusions of earlier studies, the process of replacing domestic with imported inputs, or outsourcing, can have a significant impact on wage inequality.
Abstract: I. INTRODUCTION One of the most policy-relevant academic inquiries in the area of international economics concerns the contribution of imports to the growing wage inequality between skilled and unskilled labor in the United States that has been observed over the past few decades. (1) Research in this area has generated considerable interest, few definite answers, and a plethora of conflicting results that have inspired a fierce debate. On one side of this debate, Borjas and Ramey (1994), Wood (1994, 1995, 1998), Feenstra and Hanson (1996a), as well as Leamer (2000), among others, advocate that the rise in imports that has been observed over the past few decades can account for much of the trend in wage inequality. The detractors, including Bound and Johnson (1992), Lawrence and Slaughter (1993), Berman et al. (1994), and Krugman (1997, 2000), to name a few, argue that the impact of imports on wages was negligible. To further polarize the nature of related research, there seems to be little consensus over the requirements of an appropriate analytical framework. As a result, a wide array of models have been recruited. These include the input-output methods of Sachs and Shatz (1994) and Wood (1995), the behavioral frameworks of Lawrence and Slaughter (1993) and Feenstra and Hanson (1996a, 1999), as well as a more descriptive approach favored by Krugman (2000) and, with added theoretical complexity, Leamer (2000). One of the few unifying elements characterizing the preponderance of research in this area is a collective presumption that any impact of significance that imports may have on the demand for domestic labor can derive solely from their potential to displace demand for domestic output. The premise of this notion is twofold. On one hand, it obtains from the heavy reliance of the majority of studies in this area on the Stolper-Samuelson (Stolper and Samuelson, 1941) theorem, which assumes that traded goods are final. On the other, it derives from some of the earlier studies in this area, such as Lawrence and Slaughter (1993) and Berman et al. (1994), that estimate the manufacturing sector's volume of imports of intermediate goods to be too small to explain the observed trend in wages. As a consequence, a prevalent methodological shortcoming of a good number of studies in this area is their failure to account for the impact on the demand for domestic labor that may derive from downstream processing of imported commodities. (2) Recently, a number of authors have made valuable contributions in efforts to address this issue. Important examples of such research include the work of Feenstra and Hanson (1996a, 1996b, 1999) who relinquish the narrow definition of what constitutes imports of intermediate goods by the manufacturing sector that was adopted in earlier studies (see Feenstra and Hanson, 1996a, 106-7), such as Lawrence and Slaughter (1993), in favor of a more inclusive measure (see Feenstra and Hanson, 1996b, 241-42). Using their "augmented" definition and a suitable analytical framework, Feenstra and Hanson (1996a, 1999) find that contrary to the conclusions of earlier studies, the process of replacing domestic with imported inputs, or outsourcing, can have a significant impact on wage inequality. Related studies in this area using U.K. data, such as Anderton and Brenton (1999), arrive at similar results. The work of these authors accentuates the importance of trade in intermediate goods. 
However, it only accounts for a small portion of all relevant processes that involve imported inputs in domestic production. This is predominantly so because, similarly to previous studies, these contributions concentrate exclusively on the manufacturing sector. As a result, they cannot capture any impact on the domestic demand for labor that may derive from imports that do not compete directly with either the outputs or the inputs of this sector. Additionally, these studies fail to account for the impact on the demand for labor that may derive from the downstream handling of imported commodities other than those they classify as intermediate imports of the manufacturing sector. …