
Showing papers in "Economic Inquiry in 1989"


Journal ArticleDOI
TL;DR: This article examined the impact of average and marginal tax rates on the level and growth of economic activity in sixty-three countries and found that the marginal tax rate has negative effects on economic activity.
Abstract: Data from sixty-three countries are used to examine the impact of average and marginal tax rates on the level and growth of economic activity. Apparent negative effects of tax rates on growth disappear upon controlling for (1) potential endogeneity of average tax rates to per capita income and (2) the relation between economic growth and per capita income. However, controlling for average tax rates, increases in marginal tax rates have negative effects on the level of economic activity. This evidence supports the hypothesis that reductions in the “progressivity” of tax rates induce a parallel shift upward in the growth path.

251 citations


Journal ArticleDOI
TL;DR: This article analyzed the relationship between earnings and the extent of assimilation, cohort quality change, and return migration experienced by the foreign-born population using the longitudinal data available in the Survey of Natural and Social Scientists and Engineers.
Abstract: This paper analyzes the relationship between earnings and the extent of assimilation, cohort quality change, and return migration experienced by the foreign-born population. The study uses the longitudinal data available in the Survey of Natural and Social Scientists and Engineers. The analysis reveals that there was a sizable decline in the skills of this population over the last two decades. In addition, the study shows that return migration is more likely among immigrants who did not perform well in the U.S. labor market.

242 citations


Journal ArticleDOI
TL;DR: In this article, the effectiveness of nominal and real devaluations in trade balance adjustment is reexamined; nominal devaluations are found to produce lasting real devaluations, which in turn have a significant effect on the trade balance.
Abstract: DO DEVALUATIONS IMPROVE THE TRADE BALANCE? THE EVIDENCE REVISITED This paper reexamines the effectiveness of devaluation in trade balance adjustment. The question is addressed in a framework which improves on the previous empirical literature in several respects. The evidence indicates that devaluations have been a successful tool in inducing trade balance adjustment. In particular, nominal devaluations are found to result in significant real devaluations that last for at least three years, and the real devaluation induces significant trade flows that are distributed over a two- to three-year period. The evidence comes from two different samples, 1953-73 and 1975-84, involving twenty-seven countries and sixty devaluation episodes. 1. INTRODUCTION Do devaluations affect real magnitudes, in particular the trade balance? A devaluation may affect the trade balance through two channels: devaluation of the real exchange rate and a direct effect on domestic absorption. The traditional approach stresses the first channel. A nominal devaluation is assumed to change the real exchange rate (a relative price) and thus improve competitiveness. In turn, if relative prices (the terms of trade in a two-country, two-good model) affect the trade balance, devaluation will be successful--in the sense of improving the trade balance, ceteris paribus. The absorption effect becomes the sole or most important channel in the monetary approach. In a world in which all goods and assets are perfect substitutes, prices are exogenously given for the small country, and wages and prices are flexible in both nominal and real terms, a devaluation increases the price level by the same percentage. The increase in the price level reduces real balances and thus domestic absorption.
Dornbusch [1973, 883] has argued that if such a real balance effect is not present, "then it might stand to reason that the effects of devaluation are negligible, not that there must be other powerful avenues through which it exerts its effects" (emphasis added). The controversy over the effects of devaluation on the trade balance arises because according to Frenkel and Johnson [1976, 42] "the monetary approach rejects the emphasis given to the role of relative prices in the analysis of devaluation." Global monetarists argue that neither of the links in the chain postulated by the traditional approach is likely to hold in practice (see Laffer [1977]). The major disagreement centers on the question of the effectiveness of nominal devaluation to affect the real exchange rate and the importance of the latter in influencing trade flows. Theoretical arguments have not settled the issue. The question is as strongly debated today as it was forty years ago; see, for example, Branson [1983], Katseli [1983], Kaldor [1983], Nashashibi [1983], and McKinnon [1981]. Nor does it seem that the available empirical evidence leads to a consistent answer. The best known studies--Cooper [1971], Laffer [1977], Salant [1977], Miles [1979], Gylfason and Risager [1984]--reach contradictory conclusions.(1) Cooper and Gylfason and Risager find that devaluations improve the trade balance or current account, while Laffer and Miles conclude exactly the opposite. Salant finds that devaluations improve the trade balance but not as often as the overall balance of payments. In the face of such diverse findings, Frenkel [1984] suggests that a reexamination of the issue would be useful. Indeed, Himarios [1985] shows that a close examination of Miles's widely quoted study reveals serious deficiencies that, once addressed, lead to opposite results. This paper provides new evidence concerning the effectiveness of devaluation in trade balance adjustment.
The approach differs from past studies in several respects. It differs from Cooper's, Laffer's, and Salant's studies in that it accounts for the effects of other variables that affect the trade balance. …
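The real-devaluation channel discussed above can be sketched numerically. This is an illustrative calculation only, not the paper's estimation framework, and the devaluation and inflation figures below are hypothetical:

```python
def real_devaluation(nominal_dev, home_inflation, foreign_inflation):
    """Percentage change in the real exchange rate e*P*/P produced by a
    nominal devaluation, given home and foreign inflation rates
    (all rates expressed as decimal fractions)."""
    return (1 + nominal_dev) * (1 + foreign_inflation) / (1 + home_inflation) - 1

# Hypothetical episode: a 20% nominal devaluation with 8% home inflation
# and 3% foreign inflation still leaves a sizable real devaluation.
print(f"{real_devaluation(0.20, 0.08, 0.03):.1%}")  # about 14.4%
```

The point of the sketch is simply that a nominal devaluation translates into a real one only to the extent that it is not eroded by the home-foreign inflation differential, which is the empirical question the paper addresses.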

167 citations


Journal ArticleDOI
TL;DR: This paper develops a model, extending Barro and Gordon, that provides a potential explanation for the positive association between inflation and inflation variance in terms of the incentives facing the policymaker in a "discretionary equilibrium." The model can also account for an empirical association between inflation and measures of real output instability.
Abstract: A POSITIVE THEORY OF INFLATION AND INFLATION VARIANCE Empirically, inflation and the variance of inflation are positively associated. This paper develops a model that provides a potential explanation for this relationship in terms of the incentives facing the policymaker in a "discretionary equilibrium." The model can also account for an empirical association between inflation and measures of real output instability. There is, however, no direct causal link whatever from the average rate of inflation to either the variance of inflation or that of real output. I. INTRODUCTION The debate over the use of rules or discretion in monetary policy has been central to macroeconomics for many years (e.g., Simons [1963]; Lucas [1980]; Buiter [1980]). Recently Kydland and Prescott [1977] and Barro and Gordon [1983a,b] have identified "discretion" as the absence of policy commitment in a game between policymakers and the public. They argued that discretionary policymaking will lead governments to create excessive inflation. A policy of low inflation is not consistent with the incentives facing governments, and thus will not be believed by the private sector. Barro and Gordon [1983a] argue that this positive theory of monetary policy can help to explain many features of the trend rate of inflation in modern economies: high and persistent rates of inflation, a positive relationship between inflation and unemployment, and the observed countercyclical behaviour of monetary authorities, among others. These models are based on the premise that there are significant costs to a high but fully anticipated rate of inflation. Another feature of inflation in modern economies, however, is the well-documented fact that inflation and the variability of inflation are positively associated. This phenomenon has been widely observed over different countries at different times.(1) This has led researchers to question the feasibility of a steady and predictable positive rate of inflation.
The finding suggests that high rates of inflation may reduce the ability to forecast future inflation rates. A high average inflation rate may add an unnecessary degree of uncertainty to individual decision making and lead to a misallocation of resources. Friedman [1977] suggests that this may cause output instability and possibly raise the average unemployment rate. A related paper by Logue and Sweeney [1981] establishes that there is a positive relationship between inflation and the variability of economic growth for industrial countries. Given this perspective, the welfare costs associated with inflation might be considerably higher than the traditional costs of anticipated inflation. This paper extends the Barro and Gordon [1983a] model of discretionary monetary policy to take account of the relationship between inflation and measures of inflation and output variability. The extension focussed on is to model an endogenous wage-indexing scheme in the labor market. In the discretionary equilibrium, wage setters not only form rational expectations of the future price level, but also choose an optimal degree of wage indexation. This extension to the basic model has the following properties of a discretionary equilibrium. 1. There is a positive association between the mean rate of inflation and the magnitude of real disturbances in the economy, as well as between the mean inflation rate and the degree of output instability in the economy. 2. In an economy where real disturbances are relatively important, there is a positive association between the mean rate of inflation and the variance of inflation. The explanation behind these results is as follows. The lower is the degree of wage indexation, the greater is the incentive for monetary authorities to cause surprise inflation, and hence the higher is the mean rate of inflation in a discretionary equilibrium.
But the degree of wage indexation is negatively related to the variance of real disturbances in an economy which is subject to both real and nominal disturbances. …
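The link between indexation and mean inflation can be illustrated with a stripped-down version of the Barro-Gordon setup. This is a minimal sketch under a standard quadratic-loss specification, not the paper's exact model: if wage indexation of degree theta mutes the output effect of surprise inflation, the first-order condition of the discretionary policymaker yields a mean inflation rate of lam*(1-theta)*k, where lam weights output in the policymaker's loss function and k is the targeted output gap.

```python
def discretionary_inflation(lam, k, theta):
    """Mean inflation in a discretionary (no-commitment) equilibrium of a
    stylized Barro-Gordon model with wage indexation of degree theta
    (0 = no indexation, 1 = full indexation): pi = lam * (1 - theta) * k."""
    return lam * (1 - theta) * k

# Lower indexation -> bigger temptation to create surprise inflation ->
# higher mean inflation; full indexation removes the inflation bias.
for theta in (0.0, 0.5, 1.0):
    print(theta, discretionary_inflation(lam=0.5, k=4.0, theta=theta))
```

With theta falling when real disturbances loom larger (as the abstract argues), this toy expression reproduces the paper's comparative static: economies with larger real disturbances end up with both higher mean inflation and more inflation variability.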

146 citations


Journal ArticleDOI
TL;DR: In this paper, a quantitative estimate of parental non-altruism is derived from an equilibrium labor market model: approximately 90 percent of all child earnings was implicitly competed away through lower adult wages as families migrated to areas with abundant child labor opportunities.
Abstract: Intergenerational relationships within late nineteenth-century industrial families are analyzed using several large-scale, contemporary household surveys. Nonaltruistic behavior by parents was pervasive. Even among families with positive assets, child labor was common in certain industrial settings, suggesting that child labor (or nonschooling) did not simply reflect parental borrowing constraints. Neither did physical asset transfers offset human capital losses among working youth. A quantitative estimate of parental nonaltruism is derived from an equilibrium labor market model: approximately 90 percent of all child earnings was implicitly competed away through lower adult wages as families migrated to areas with abundant child labor opportunities.

135 citations


Journal ArticleDOI
TL;DR: An examination of the 1982 Tylenol poisonings reveals stock-market losses to Johnson & Johnson that far exceed direct costs and losses shared with other pain-reliever producers, providing support for the Klein and Leffler (1981) theory of brand names as quality-assuring mechanisms.
Abstract: An examination of the 1982 Tylenol poisonings reveals stock market losses to Johnson & Johnson that far exceed direct costs and losses shared with other pain-reliever producers; this evidence provides support for the Klein and Leffler [1981] theory of brand names as quality-assuring mechanisms. Of the subsequent cases, only the 1986 Tylenol poisonings were associated with significant stock market losses. Prior to the 1982 and 1986 Tylenol poisonings, Tylenol was the number one pain reliever, whereas the other pain relievers that were poisoned had a much lower level of brand-name capital to lose.

123 citations


Journal ArticleDOI
TL;DR: Kagel, Levin, Battalio, and Meyer study bidder behavior and the winner's curse in first-price common value auctions for objects of uncertain value, focusing on sealed-bid auctions.
Abstract: FIRST-PRICE COMMON VALUE AUCTIONS: BIDDER BEHAVIOR AND THE "WINNERS CURSE" JOHN H. KAGEL, DAN LEVIN, RAYMOND C. BATTALIO and DONALD J. MEYER(*) Experimental auction markets are characterized by a strong winner's curse in early auction periods as high bidders consistently lose money, failing to account for the adverse selection problem inherent in winning the auction. With experience and bankruptcy on the part of the worst offenders, subjects earn positive average profits, but these are far below Nash equilibrium predictions as a sizable minority of bids exceed the expected value of the item conditional on having the highest estimate of value. Individual bidding behavior is explored to identify the mechanism whereby market outcomes no longer display the worst effects of the winner's curse. I. INTRODUCTION Numerous occurrences of the winner's curse have been reported in bidding for items of uncertain value, resulting in below normal or even negative average profits for bidders. The winner's curse results from bidders' failure to account for the adverse selection problem inherent in winning auctions for items of uncertain value. Capen, Clapp, and Campbell [1971] claim that the winner's curse resulted in low profits for oil companies in the 1960s in bidding on offshore oil and gas leases. Regarding corporate takeovers and mergers, Roll [1986] proposes a hubris hypothesis: acquiring firms generally fall prey to the winner's curse, paying too much on average for their targets. He claims that from the samples he has observed, the hubris hypothesis explains merger data as well as tax factors, synergy, or inefficient target management. Cassing and Douglas [1980] find that many baseball players in the free agency market have been overpaid on account of the winner's curse, and Dessauer [1981] reports a similar finding of overbidding in auctions for the book publishing rights. 
Based on these occurrences, it appears that many agents are not fully cognizant of the intricacies involved with bidding on alternatives that have uncertain worth. The winner's curse results from the fact that although bidders may hold unbiased estimates of the auctioned item's value, this estimate can be overly optimistic given that participants' bids are influenced by their estimates of value. In other words, the winner's curse results from an adverse selection problem that bidders fail to account for fully in submitting their bids. The existence of a winner's curse implies a breakdown of rational expectations on the bidder's part (as discussed by Milgrom [1981]) and identifies a market that is out of equilibrium. A winner's curse need not result from bidding on items with uncertain value provided proper adjustments are made. One adjustment to the adverse selection problem is to deflate the expected value of the item (and hence the bid) before any action is taken. For example, Cox and Isaac [1984] show that agents who maximize expected utility will revise their expectations downward and submit bids that are strictly less than the expected value conditional on the event of winning. When agents behave in this fashion, a winner's curse does not result in the sense of bidders paying more on average than the items are worth. This paper focuses on sealed-bid auctions for objects of uncertain value, a market institution for which theoretical predictions and empirical evidence concerning a winner's curse are mixed. Current theoretical development excludes the possibility of bidders paying more on average than the items are worth; yet some empirical evidence in the sale of oil tracts, as seen in Capen, Clapp, and Campbell [1971], Lohrenz and Dougherty [1983], and Mead, Moseidjord, and Sorenson [1983], suggests that the winner's curse may be present. 
A series of experiments is designed and conducted in order to answer the following empirical research questions: Does the winner's curse exist in this auction framework; and if it does, what is its duration, its relation to agent experience in the market, and its breadth of impact across agents? …
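The adverse selection mechanism behind the winner's curse is easy to reproduce in a Monte Carlo sketch. The value and signal distributions below are hypothetical and do not reproduce the experimental design; the sketch only shows that a bidder who bids his own unbiased signal wins precisely when his estimate is the most optimistic, and so loses money on average:

```python
import random

def simulate(n_bidders=6, n_auctions=10000, eps=30.0, markdown=0.0, seed=1):
    """Average profit of the winning bidder in a first-price common value
    auction where each bidder submits (own signal - markdown)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_auctions):
        v = rng.uniform(50, 250)                  # common value of the item
        bids = [rng.uniform(v - eps, v + eps) - markdown
                for _ in range(n_bidders)]        # unbiased signals, marked down
        total += v - max(bids)                    # winner pays the high bid
    return total / n_auctions

# Naive bidding (markdown = 0) loses money on average: the high bid tends
# to come from the most optimistic signal.
print(simulate(markdown=0.0))
# Discounting the signal enough restores positive average profits.
print(simulate(markdown=30.0))
```

This is the adjustment Cox and Isaac [1984] describe: deflating the estimate (and hence the bid) before acting eliminates the curse in the sense of bidders no longer paying more on average than the item is worth.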

70 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the welfare cost of rationing by waiting and showed that such individually rational adjustments only increase the welfare costs of waiting, and thus are socially self-defeating.
Abstract: With price controls and rationing by waiting, rational consumers increase the quantity bought per purchase. This individually rational response is socially wasteful and the cost of making it is a deadweight loss. This cost plus the value of time spent in queues may exceed the total rent transferred from suppliers to consumers by price controls; i.e., the value of resources spent competing for the rent may exceed the rent itself. This point is illustrated by an empirical application to gasoline price controls. Rent seeking exhausts an estimated 116 percent of the rent transferred. The efficiency of the price system is often illustrated by pointing to the obvious wastes associated with alternative allocation mechanisms, for example the queues that accompany price controls and first come/first served allocations. Although rationing by waiting wastes resources in a dramatically visible fashion, the magnitude of the waste and the factors that determine it seldom have been examined empirically.(1) Moreover, existing theory on the subject is incomplete; unless the first come/first served rule is precisely specified, consumers will compete for available supplies in ways other than waiting. In the model developed below queues cause consumers to compete by increasing amounts bought per trip to the market. The analysis demonstrates that such individually rational adjustments only increase the welfare cost of rationing by waiting, and thus are socially self-defeating. The theoretical model is illustrated empirically by estimating the welfare costs of a market-wide ceiling on gasoline prices. This application was motivated by the availability of two key pieces of information, the value of time spent in gasoline queues and the effect of waiting lines on amounts bought per purchase. This information was obtained from a survey of motorists conducted during the era of federal gasoline price regulations.
To focus on competition among consumers, one of the inefficiencies that results from price controls is ignored: the supply reduction leading to the …
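The rent-dissipation accounting behind the abstract can be made explicit in a small sketch. The cost magnitudes below are hypothetical illustrations; only the 116 percent ratio is from the paper:

```python
def dissipation_ratio(time_cost, adjustment_cost, rent_transferred):
    """Share of the controlled-price rent dissipated by competition among
    consumers: value of queueing time plus the deadweight cost of buying
    more per purchase, relative to the rent transferred from suppliers."""
    return (time_cost + adjustment_cost) / rent_transferred

# Hypothetical magnitudes chosen to match the paper's estimate that
# rent seeking exhausts 116 percent of the rent transferred.
print(f"{dissipation_ratio(time_cost=90, adjustment_cost=26, rent_transferred=100):.0%}")
```

A ratio above 100 percent is the paper's central point: the resources consumers burn competing for the rent can exceed the rent itself, so the transfer is more than fully dissipated.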

51 citations


Journal ArticleDOI
TL;DR: Using data on more than 3000 male college basketball players, their coaches, and their skill levels, this paper finds a positive and significant relation between the authors' ability to replicate an individual coach's allocation of playing time across players and that coach's winning percentage.
Abstract: COACHING TEAM PRODUCTION The actual function of managers is the subject of much debate. Using data on more than 3000 male college basketball players, their coaches, and their skill levels, we find a positive and significant relation between our ability to replicate an individual coach's allocation of playing time across players and his winning percentage. Although this result does not answer the central question of just what it is that managers contribute, the results do support the property rights paradigm: managers are the employees of workers; and more generally, sports data can be used to help understand related economic processes where quantifiable measures of inputs and outputs are more costly to obtain. "What is meant by performance? Input energy, initiative, work attitude, perspiration, rate of exhaustion? Or output?...sometimes by inspecting a team member's input activity we can better judge his output effect...It is not always the case that watching input activity is the only or best means of detecting, measuring, or monitoring output effects of each team member, but in some cases it is a useful way." I. INTRODUCTION A major tenet of the Alchian-Demsetz [1972] representation of the firm is that managers are hired by workers to prevent shirking and malfeasance in their own ranks. Absent overseers, individual workers bear only a fraction of the cost of shirking, and hence, everyone undersupplies labor relative to its opportunity cost. In this world, the managerial function is to monitor inputs and meter rewards, thereby reducing the incentive to shirk and raising each worker's marginal productivity. The more dependence there is in the marginal products of laborers, the more important the role of management. This view of supervision is simultaneously intuitively pleasing and difficult to quantify. 
This paper investigates the role of management in a setting where, indisputably, there is team production: intercollegiate basketball.(1) Data on 3012 college basketball players across ten years and sixty-five teams are used to replicate one aspect of coaching, the allocation of playing time. Next, we attempt to link coaching decisions to winning. The goal is to determine if coaches who manage well, by our standards, are successful. Section II motivates the paper. In section III the empirical methodology is detailed; the monitoring function is estimated and related to coaching success. Section IV contains a summary and conclusion. II. COACHING DECISIONS The essence of team production is interdependence and the inherent immeasurability of marginal productivity across workers. However, this does not mean that proxies for marginal productivity cannot be developed. Managers monitor worker inputs and deduce marginal products accordingly. Of course, some inputs are more cheaply quantified than others. Attitude is hard to measure, but heart rate is easy. Sweat is observed cheaply, but workers can feign exhaustion. In the famous Chinese boatpullers fable, the coxswain uses his whip to insure that coolies pull efficiently, to persuade each worker to give the appropriate effort.(2) McManus [1975, 341] relates a story told by Cheung: "...boats are pulled upstream by a team of coolies prodded by an overseer with a whip....an American lady, horrified at the sight of the overseer whipping the men as they strained at their harness, demanded that something be done about the brutality. She was quickly informed... `Those men own the rights to draw boats over this stretch of water and they have hired the overseer and given him his duties.'" Here the monitor uses his vision, intuition, and experience to determine shirking, counseling the loafers with his whip. But employing subjective observation is just one way to monitor. …

49 citations


Journal ArticleDOI
TL;DR: In this paper, the authors analyze the rate of return on a Stradivarius from 1803 to 1986, finding annual rates of return on individual violins of under one percent (for example, 0.92 percent per year for one violin, falling to 0.77 percent after brokerage fees).
Abstract: CAPITAL GAINS AND THE RATE OF RETURN ON A STRADIVARIUS I. INTRODUCTION No name in the annals of violin-making has evoked more interest and sense of mystery than that of Antonio Stradivari. During his long lifetime (1644-1737) Stradivari probably made about 1,100 instruments, with about 700 violins being accounted for currently. Stradivariuses have been sold in 1986 at auction for over $300,000. Fakes are everywhere. In spite of the wealth of price data on Stradivariuses, no systematic economic analysis of this data has been made. The central question for this paper is: "What is the rate of return on a Stradivarius?" "Rate of return" means the annual percentage increase in the price of a Stradivarius, as in the case of collectables. After converting all prices into dollars with 1967 purchasing power, the following equation is applied: P_t = P_0 e^(rt), where P_t is the most recent price, P_0 is the preceding transaction price, r is the rate of return, and t is the number of years elapsing between the transaction dates of P_t and P_0. In section II the rate of return on individual violins for four different periods is determined. Stradivari developed his skills over time, so that each violin is not an identical product. Regression analysis is used to show how these differences in quality are reflected in differences in price, but not in differences in rates of return (section III). In section IV the rate of return on a Stradivarius from 1803 to 1986 is determined. Finally, in section V other factors are considered that may influence the rate of return on a Stradivarius. II. THE RATE OF RETURN ON INDIVIDUAL VIOLINS Here data on sale prices of individual violins are examined, where there is more than one transaction over time. In addition, each of the sales is assigned to one of four periods--periods generally accepted by musicologists such as Henley [1961, 15-16] as encapsulating stages in Stradivari's development.
Data are shown in Tables I to IV. The Amati Period, 1665-90 Stradivari was apprenticed at the age of fourteen in the workshop of the famous Nicolo Amati where he received "on-the-job training." All creations during this period are worked on the pattern of Amati. Table I shows three violins having at least two sales. To understand the table, consider the first row labeled "Soames." The violin was made by Stradivari in 1684. Transactions took place in 1907 and 1973 for $8,571 and $15,778, respectively. All prices are in 1967 dollars. The equation discussed above yields $15,778 = $8,571 e^(r·66). Solving for r (the rate of return) gives 0.92 percent. In other words, the price of the Soames violin increased annually at a rate of 0.92 percent. Typically there is a brokerage fee of 10 percent for individuals (6 percent for dealers) to sell a violin at Sotheby's or Christie's auction house. This, of course, reduces the rate of return on a violin. For example, to determine the rate of return on the Soames after brokerage costs, 10 percent was deducted from the 1973 price, changing it from $15,778 to $14,201. After eliminating brokerage costs, the price increases by 0.77 percent annually. The Nachez and Mercury violins were similarly analyzed. The Mercury had three different sales, so that there are two rates of return. The Long Period, 1691-1700 During this period Stradivari made numerous changes in his art. The scroll is worked in more detail and the varnish is golden yellow. Stradivari modified the pattern by making the violin a little longer and slightly more narrow. Table II shows data on four violins. Except for the 1933-36 period, the Falmouth had similar rates of return even though the interval between sales varied significantly: seventy-four, six, and forty-six years for the first, second, and fourth transactions, respectively. …
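The Soames calculation above follows directly from rearranging P_t = P_0 e^(rt) to r = ln(P_t/P_0)/t. The prices and dates are from the text; the function itself is just that rearrangement:

```python
import math

def annual_return(p0, pt, years, brokerage=0.0):
    """Continuously compounded annual return implied by two sale prices,
    from P_t = P_0 * e^(r*t); brokerage (a fraction) is deducted from
    the later sale price before computing r."""
    net = pt * (1.0 - brokerage)
    return math.log(net / p0) / years

# Soames violin: sold 1907 for $8,571 and 1973 for $15,778 (1967 dollars).
r_gross = annual_return(8571, 15778, 66)
r_net = annual_return(8571, 15778, 66, brokerage=0.10)
print(f"{r_gross:.2%}")  # 0.92% per year
print(f"{r_net:.2%}")    # 0.77% per year after a 10% brokerage fee
```

The same function applied to any pair of consecutive transactions reproduces the per-violin rates of return reported in Tables I to IV.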

34 citations


Journal ArticleDOI
TL;DR: The authors point to a serious problem with the growth-salary linkage often asserted in the bureaucracy literature: there is no direct empirical evidence of the asserted linkage, which also seems inconsistent with the idea that the civil service system shields federal white-collar employees from the uncertain fluctuations of political outcomes.
Abstract: AGENCY GROWTH, SALARIES AND THE PROTECTED BUREAUCRAT I. INTRODUCTION In recent years, increased attention has been directed towards understanding bureaucratic behavior and its implications for economic performance. A key element of this literature is the application of utility maximization for deriving implications about bureaucratic behavior. Representative is Niskanen [1971, 38], who argues that bureaucratic decisions are made rationally to maximize utility, which is assumed to be a function of salary, perquisites, reputation, power, patronage, and output. Implicit in the literature is an activist bureaucracy which engages in purposeful action to advance its interests. An important part of some of the bureaucracy literature is the notion that government employees are a source of the documented growth of government. It is commonly asserted that government employees, acting in their own self-interest, will foster the growth of their organization. Growth of the organization is seen as a means for increasing salaries, as well as the other arguments in the utility function. The positive link between salaries and growth is emphasized by Downs [1967, 11] in his classic study of bureaucracy: "Any organization experiencing rapid overall growth provides many more opportunities for promotion in any given time period than a static one." Additional references to the importance of agency growth for salaries include Niskanen [1971, 38-41; 1975, 619], Tullock [1974, 127], and Heclo [1977, 131]. Building on that theme, other researchers such as Bush and Denzau [1977], Borcherding, Bush, and Spann [1977], and Bennett and Orzechowski [1983] have argued that government employees will use their power to influence election outcomes to promote government growth and, hence, salaries. 
Despite the often critical nature of the salary-growth relationship in recent discussions of bureaucratic behavior and the growth of government, there is no direct empirical evidence of the asserted linkage. Related evidence provided, for example, by Wolfinger and Rosenstone [1980, 97-101] that government employees have higher voter participation rates than do private employees is not conclusive support for the growth-salary assertion. Voting results, alone, do not permit one to distinguish the underlying reasons for the higher participation rates, and there are many. More importantly, for the federal government, at least, the assertion that salaries of government employees are significantly affected by agency growth seems inconsistent with the idea that the civil service system shields federal white-collar employees from the uncertain fluctuations of political outcomes. Skepticism about the growth-salary linkage due to the constraints of the civil service system is noted by Wilson [1980, 375]: "Few officials need fear for their jobs, and their salaries are determined by government-wide laws and regulations, rather than by the size, rate of growth, or 'success' ... of the organization." Further, while senior agency officials would be in the best positions to promote the growth of their agencies, within the federal government the salaries of high GS (General Schedule) level and Senior Executive Service personnel are capped by law and may not exceed those paid to officials in Executive Level V. The restriction limits the salaries of senior managers regardless of the growth of the agency. One could, however, argue that managers promote growth to motivate their subordinates or that medium- and lower-level employees are the primary proponents of agency growth. But as Kaufman [1981, 79] notes, "most officers and employees, once they have attained career status, could do nearly as well materially without pushing themselves, as they could by working at capacity." 
The absence of incentives and resulting inertia can contribute to the often-cited tension between career bureaucrats and political appointees. The above discussion points out a serious problem with the growth-salary linkage often asserted in the bureaucracy literature. …

Journal ArticleDOI
TL;DR: In this article, it is observed that the traditional definition of money (currency plus demand deposits) shows no evidence of structural change, and yields prediction root mean square errors for both real GNP growth and inflation over 1983-87Q2 that are nearly as low as or lower than the standard errors of estimate obtained for 1961-82.
Abstract: THE EMPIRICAL RELIABILITY OF MONETARY AGGREGATES AS INDICATORS: 1983-1987 I. INTRODUCTION It is widely believed that monetary aggregates have failed to predict real growth and inflation over 1983-87. This presumed breakdown of previously reliable linkages between money growth and future output and inflation has been variously attributed (by the present authors, among others) to changes in money demand induced by regulatory change and to parameter instability due to structural change. This paper observes that these disputations may be moot, since the traditional definition of money (currency plus demand deposits) shows no evidence of structural change, and yields prediction root mean square errors for both real GNP growth and inflation over 1983-87Q2 that are nearly as low as, or lower than, the standard errors of estimate obtained for 1961-82. Part of the so-called "breakdown" in the monetary indicators -- especially in the case of M1 -- may be explained by the fact that current M1 (or M1B) is defined much like the "old" M2, and current M1A is defined much like "old" M1. Thus, it is probably not too surprising that use of M1 as a monetary indicator does not yield consistent predictive power over a period of time in which it experienced redefinition. If there is a mystery in the 1980s, it is not why M1A has done so well but why economists abandoned it for broader M1B (currency, demand deposits and other checkable deposits or OCDs). (1) With the nationwide introduction of negotiable order of withdrawal (NOW) accounts on January 1, 1981, M1A fell by 5.5 percent (a 22.1 percent per annum rate) in the first quarter, while M1B rose at a 3.1 percent per annum rate. At the time, the Federal Reserve System expected M1A demand to shift down as households chose to substitute from demand deposits to the newly available (in most states) OCDs. 
Accordingly, the sharp drop in M1A was expected to be reflected in a once-for-all upward shift in its velocity with no effect on nominal income or its components. (2) However, consistent with the shock-absorber view of money demand, even if the long-run demand for M1A was unchanged, a sharp decrease in its quantity would induce an equal contemporaneous increase in its velocity. (3) Contrary to the Federal Reserve's expectation, the shock absorber view would thus predict that the actual value of velocity would temporarily exceed its long-run equilibrium level so that nominal income would tend to fall or grow less rapidly as M1A velocity adjusted to the money shock. Figure 1 shows that, compared to a relatively small drop in M1 velocity, the contemporaneous velocity of M1A moved sharply in the first quarter of 1981. (4) The shock-absorber hypothesis suggests that contemporaneous velocity movements would be dominated by money supply shocks and thus attributes the different movements of M1A and M1 velocity to differences in magnitude and signs of the shocks in M1A and M1. What Milton Friedman [1983] calls "leading velocity" is a crude way of allowing nominal GNP to adjust to past money shocks. Panels a and b in Figure 2 illustrate leading velocity for lags between money and GNP of one and four quarters respectively. The longer the adjustment lag, the more leading velocity becomes a smooth, trend-dominated series for M1A. (5) However, M1B continues to display a sizeable break from its historical pattern. This observation suggests that the recent behavior of the economy may be consistent with that indicated by movements in M1A, and that the choice to switch to M1B as the standard definition of the "narrow" money supply was unfortunate and a major source of recent forecasting failure. (6) This paper runs a race among M1A, M1, and M2 by comparing out-of-sample forecasting performance and tests of structural stability. 
(7) An explanation is also offered for the observed departure of M1 velocity from its historical trend. Based on a large battery of conventional tests, the results are remarkably favorable to the continued reliability of M1A as a useful indicator of future economic performance and for its relevance as a tool in monetary policy. …
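The out-of-sample "race" the paper describes can be illustrated with a small simulation. Everything below is synthetic and illustrative — the series, the sample split, and the helper `oos_rmse` are our own constructions for exposition, not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quarterly growth series: "target" stands in for real GNP
# growth, while "m_a" and "m_b" are two candidate monetary indicators.
# m_b is m_a plus noise, so it should forecast less well.
n = 120
m_a = rng.normal(0.01, 0.005, n)
m_b = m_a + rng.normal(0.0, 0.004, n)            # noisier indicator
target = 0.5 * np.roll(m_a, 1) + rng.normal(0, 0.003, n)
target[0] = target[1]

def oos_rmse(indicator, target, split=80):
    """Fit y_t = a + b * x_{t-1} on the first `split` observations,
    then compute the out-of-sample root mean square prediction error."""
    x, y = indicator[:-1], target[1:]
    b, a = np.polyfit(x[:split], y[:split], 1)   # slope, intercept
    pred = a + b * x[split:]
    return float(np.sqrt(np.mean((y[split:] - pred) ** 2)))

print(oos_rmse(m_a, target), oos_rmse(m_b, target))
```

Comparing the two RMSEs against the in-sample standard error of estimate is the flavor of test the abstract reports for M1A versus M1 and M2.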

Journal ArticleDOI
TL;DR: In the absence of some constitutional restraints upon such rent seeking, race is bound to be politicized as discussed by the authors, which can explain the existence of many government policies concerning race that are not apparently motivated by economic gain.
Abstract: Rules governing social and economic interactions among ethnic groups are modeled as public goods. The publicness of social rules can explain why race has been so consistently politicized. The potential gains from public provision attract political entrepreneurs into the field. In the absence of some constitutional restraints upon such rent seeking, race is bound to be politicized. In addition, the model can explain the existence of many government policies concerning race that are not apparently motivated by economic gain. Finally, government enforcement of ethnic economic cartels can explain some of the persistent differences in earnings across ethnic groups.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the relationship between the way rational expectations are employed in practice and the argument initially put forth to justify its use and found that standard and aggregate rational expectations typically yield systematically different equilibria and that the size of the difference depends positively on the degree of synergism.
Abstract: This paper investigates the relationship between the way rational expectations is employed in practice and the argument initially put forth to justify its use. In practice rational expectations has meant that the expectations of each agent taken separately are consistent with the predictions of the theory. This differs from the argument frequently used by proponents of rational expectations that on an aggregate level expectations should be consistent with the theory. The primary findings are that standard and aggregate rational expectations typically yield systematically different equilibria and that the size of the difference depends positively on the degree of synergism.

Journal ArticleDOI
TL;DR: SCHWARTZ as discussed by the authors argued that the secondary markets are thin and transactions of even moderate size cannot be executed at quoted prices, and the price quotations may also misrepresent the prospect of repayment for the following reason.
Abstract: SCHWARTZ: INTERNATIONAL DEBTS The official strategy on the international debt problem was breached in May 1987 when Citicorp, acknowledging the likelihood of credit losses on its Latin American loans, announced that it was increasing its reserves by $3 billion. Nineteen banks between 26 May and 8 July also announced additions to their loan loss reserves, in aggregate amounting to over $12 billion, according to Musumeci and Sinkey [1988]. In May 1988, the General Accounting Office (GAO) estimated that reserves of U.S. banks against their troubled loans to LDCs totaled about $21 billion, whereas adequate provisioning would require reserves of $49 billion (see the Wall Street Journal, 19 May 1988). The GAO used prices on secondary debt markets in determining the adequacy of loan reserves. The banks and regulators, however, dispute the GAO's report (see the Wall Street Journal, 13 May 1988). The secondary markets, they contend, are thin, and transactions of even moderate size cannot be executed at quoted prices. The price quotations may also misrepresent the prospect of repayment for the following reason. By holding a foreign country's debt, a creditor bank has been subject to the implied obligation to lend new money pro rata to that country. However, by selling the debt on the secondary market, a creditor bank is released from that obligation, and hence may accept a lower price than would otherwise be the case. On the other hand, as Sachs and Huizinga [1987,579-87] note, the decline in bank equity prices since 1982 closely matches the secondary market valuation of LDC exposure. The qualifications concerning the significance of the secondary market prices of foreign debt did not apply to prices of foreign bonds that in the past traded at a discount in the market. Those prices reflected solely the probability of repayment of the bond. The additions to loan loss reserves in 1987 reduced the banks' reported income and their book capital. 
Equity was reduced by a transfer to reserves, and plans to issue equity were announced by many banks following their additions to reserves. The reserves, however, are not charged against current income for tax purposes. Should write-offs occur in the future or agreements be reached with the debtors to reduce outstanding debt, the losses will be charged to reserves with no effect on reported income. Latin American debt has not been written down because of the additions to reserves, and no forgiveness is involved for LDC debts. The banks face difficulties in reducing their developing country debt by sales, swaps, and write-offs. Loan agreements may prohibit repurchase of the debt by the borrower, hence limiting the sales option. (14) Swaps for local currency or equity have had only small effects on debt reduction. J.P. Morgan and Company earlier this year devised a program with Mexico that had the blessing of the U.S. Treasury. Morgan swapped $400 million of its Mexican government debt for $263 million twenty-year Mexican government bonds. The Mexican issue was collateralized by twenty-year U.S. Treasury zero-coupon bonds (at time of sale, priced to pay 8.41 percent annually) that Mexico bought with $2.56 billion of its international reserves. Swaps by other banks participating in the program reduced Mexico's bank debt by about $1 billion, although a $10 billion reduction had been forecast. Morgan regarded the swap as advantageous because it also served to reduce its obligation to supply future loans to Mexico. For other banks the size of the discount on the swap made the deal unattractive. (15) The third way for the banks to reduce exposure to developing-country debt is write-offs. Write-offs have limited appeal. They risk inciting political pressure to force debt forgiveness. The regulators, the U.S. 
government, and international-agency lenders continue to pressure the banks to provide additional loans to the Latin American debtors that are either in arrears on interest payments or pay a large percentage of their trade surplus to cover service charges. …

Journal ArticleDOI
TL;DR: The authors analyzes bargaining and Pigovian taxation solutions to inefficiencies from production externalities with free entry, and shows that the Coase Theorem remains valid if the property rights holder can act like a command economy planner.
Abstract: This paper analyzes bargaining and Pigovian taxation solutions to inefficiencies from production externalities with free entry. The Coase Theorem fails in a decentralized context but remains valid if the property rights holder can act like a command economy planner. A less powerful price-taking rights holder's objective function is nonconcave, causing an inefficient bargaining outcome. Bargaining complicates Pigovian taxes with a nonlinear tax scheme required to sustain the optimum. Polluting firms pay a franchise tax whose revenue is given lump sum to consumers and face a marginal charge only on excess output, which thus raises no revenue in equilibrium.

Journal ArticleDOI
Paul Evans1
TL;DR: The authors investigated whether in the steady state the real interest rate is an increasing function of both the government debt and government spending and found no evidence of such a relationship using data from the period between January 1981 and March 1986.
Abstract: A TEST OF STEADY-STATE GOVERNMENT-DEBT NEUTRALITY This paper investigates whether in the steady state the real interest rate is an increasing function of both the government debt and government spending. Using data from the period between January 1981 and March 1986, the paper finds no evidence of such a relationship. These data afford an especially powerful test because the ratio of federal debt to trend output, which had fallen from 93 to 33 percent between January 1948 and December 1980, reversed course after the enactment of the Economic Recovery Tax Act, reaching 47 percent in March 1986. I. INTRODUCTION In conventional macroeconomic analysis, government debt is not neutral because households view it as contributing to their net wealth. As a result, the larger the government debt is, the wealthier households feel and the more they consume. More consumption in turn spells less investment over time and thus a lower steady-state capital stock. As a result, output is ultimately lower and the real interest rate higher. In principle, however, households need not view government debt as net wealth.(1) If households accurately foresee the future taxes that will service the government debt, if they face perfect capital markets, if taxes do not distort their decisions, and if they internalize future generations, then they treat the future taxes servicing the government debt as an exact offset. Consequently, government debt is neutral since it does not make households feel wealthier and hence affects nothing.(2) Nevertheless, because most macroeconomists doubt that households accurately foresee future servicing taxes, have access to perfect capital markets, and internalize future generations, they do not model government debt as neutral. Like perfect competition, however, government-debt neutrality may be a good approximation in many applications even if the assumptions underlying it are unrealistic. 
Therefore, empirical analysis rather than introspection should determine how one models government debt. This paper provides such an empirical analysis. The empirical analysis focuses on the steady-state relationship between government debt and the real interest rate.(3) In the steady state, a larger government debt leads to a smaller capital stock and hence a higher marginal product of capital if households view government debt as net wealth. Consequently, the real interest rate is also higher. If instead government debt is neutral, nothing should happen to the real interest rate. U.S. data from the period between January 1981 and March 1986 have been used here to investigate whether a larger government debt leads to a higher real interest rate in the steady state and hence whether government debt is neutral. These data afford an especially powerful test because the ratio of federal debt to trend output, which had fallen from 94 to 33 percent between January 1948 and December 1980, reversed course after the enactment of the Economic Recovery Tax Act, reaching 47 percent by March 1986. Moreover, this upward trend is likely to continue well beyond 1986. Such an enormous increase in federal debt should have raised the steady-state real interest rate appreciably if households view government debt as net wealth to any important extent. In the absence of evidence that the steady-state real interest rate did rise, government debt is judged to be neutral in the steady state. Section II formulates the model, which is estimated in sections III and IV. Section V summarizes the paper and then briefly reviews an extensive literature consistent with government-debt neutrality. II. FORMULATION OF THE ECONOMETRIC MODEL The appendix demonstrates that if households view government debt as net wealth, the real interest rate should be an increasing function of both the government debt and government spending in the steady state (see also Evans [1988] for this analysis). …
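The steady-state relation being tested can be sketched schematically. The notation below is ours, chosen for illustration; the paper's actual specification may differ:

```latex
% Illustrative sketch of the steady-state relation under test
% (our notation, not necessarily the paper's):
%   r_t : ex ante real interest rate
%   b_t : ratio of federal debt to trend output
%   g_t : ratio of government spending to trend output
\[
  r_t \;=\; \alpha + \beta\, b_t + \gamma\, g_t + \varepsilon_t .
\]
% If households treat government debt as net wealth, the theory
% predicts \beta > 0 and \gamma > 0; steady-state government-debt
% neutrality corresponds to the null hypothesis \beta = 0.
```

Under this reading, the large post-1981 swing in the debt ratio gives the test its power: a sizable movement in \(b_t\) with no detectable response in \(r_t\) is evidence for \(\beta = 0\).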

Journal ArticleDOI
TL;DR: In this article, the authors use a competitive interest group theory of the apartheid state to formalize a collective choice analysis of apartheid as endogenous policy, where the "level" of apartheid is conceived as a continuous variable that is determined by the relative influence of competing interest groups within the white polity and by the costs of maintaining and defending apartheid institutions.
Abstract: AN ECONOMIC THEORY OF APARTHEID Apartheid is a regulatory system designed to effect redistributions in favor of white workers and farmers at the expense of black workers and white capitalists. This paper uses a competitive interest group theory of the apartheid state to formalize a collective choice analysis of apartheid as endogenous policy. The "level" of apartheid is conceived as a continuous variable that is determined by the relative influence of competing interest groups within the white polity and by the costs of maintaining and defending apartheid institutions. Some empirical implications of this approach are explored. I. INTRODUCTION Since South African apartheid embodies essentially a complex web of politically determined restrictions affecting the level of employment and mobility of the black labor force, it is not surprising that most economic analyses of apartheid institutions have focused on their evident allocative inefficiency and distributional inequity (see Nattrass [1981, 31-32]). Some economists have hinted at the possibility of formally modeling the economic, social and political institutions of apartheid as endogenous products of a process of collective choice. In other words, the question has been raised of the existence of an economic "rationale" for apartheid. Thus, Richard Porter [1978] and subsequently Mats Lundahl [1982] have initiated rigorous analysis of what Porter terms a "South African-type" economy--a competitive market system, the operation of which is constrained by state-imposed apartheid regulation. While the Porter-Lundahl characterization of the South African-type economy is an invaluable starting point for an economic analysis of apartheid, it falls short of providing a satisfactory explanation for the existence and extent of South African discriminatory policies. 
The inevitable contradiction between economic efficiency and the political requirements of apartheid leads both Porter and Lundahl to treat the goals of apartheid as exogenously determined by political considerations.(1) The purpose of this paper is to show how it is possible to provide an explicitly economic explanation of apartheid institutions. Apartheid is viewed here as a vector of policies which can be varied in intensity along a continuum. Thus the level of apartheid regulation is treated as a continuous endogenous variable, responsive to the relative influences of interest groups which receive benefits or incur costs associated with apartheid. The intention is not to claim originality in recognizing the economic motives for apartheid policy (which have been well documented in the literature, as indicated in section II), but rather to formalize this approach (in section III) and to suggest some methods of testing its implications (in section IV). The effort to formalize an analysis of apartheid in terms of a competitive interest group model of social choice should be of interest not only to students of South Africa, but also to economists more generally concerned with endogenous public policy. The South African case provides a laboratory for this type of analysis because the boundaries which delineate interest groups are very clearly drawn. II. 
A BRIEF POLITICAL ECONOMY OF APARTHEID It has long been recognized by economists and scholars of South Africa that the apartheid system, representing a comprehensive set of regulations affecting all aspects of economic, social and political life, arose essentially as a response on the part of the white working class to the threat of black labor market competition.(2) Although apartheid itself is a phenomenon of the post-1948 era of National Party rule, it is really only the most recent phase of a long history of white labor elitism and black exclusion, brought about through the medium of an interventionist, statist polity characterized by a racially limited franchise. The harnessing of the instruments of state power to further the interests of the white labor force, at the expense not only of blacks but also of white capitalists, dates from the earliest period of industrialization in South Africa, which followed the discovery of precious minerals in the 1860s and 1870s. …

Journal ArticleDOI
TL;DR: Karpels et al. as mentioned in this paper examined the effect of market forces on the stock market reaction to unanticipated events such as the 1979 DC-10 crash and found that market forces would compel producers to invest in product safety even in situations when the technical safety aspects of the product are beyond the comprehension of the average consumer.
Abstract: MARKET FORCES AND AIRCRAFT SAFETY: AN EXTENSION GORDON V. KARELS(*) Recent work by Chalk focuses on whether market forces provide safe products via stock market reaction to unanticipated events. Chalk finds a $200 million decline in McDonnell Douglas stockholder wealth related to the 1979 DC-10 crash in Chicago. That decline far exceeds reasonable estimates of regulatory and liability costs, suggesting a market penalty for unsafe products. His results, however, do not seem consistent with the notion of efficient markets, as American Airlines maintenance procedures were quickly identified as the likely cause of the crash. This study finds no resulting shareholder wealth loss for American Airlines or McDonnell Douglas. I. INTRODUCTION In a recent article in this journal, Andrew Chalk [1986] attempts to test whether market forces provide safety when products are too complex to permit buyer repurchase inspection. To do so he uses the rather ingenious method of examining the abnormal returns to McDonnell Douglas stock prices following the May 25, 1979, crash of a DC-10 just outside O'Hare Airport in Chicago. Efficient market theory predicts that perceived safety problems in products will result in a fall in the firm's stock price due primarily to a product demand effect. If this is the case, market forces would compel producers to invest in product safety even in situations when the technical safety aspects of the product are beyond the comprehension of the average consumer, as is the case with large commercial aircraft. Chalk's study finds large and statistically significant negative abnormal returns to McDonnell Douglas stockholders on the order of $200 million. In almost every case where news is released just after the crash (both favorable and unfavorable with respect to McDonnell Douglas), Chalk finds statistically significant abnormal returns to McDonnell Douglas stock. 
What is interesting about this particular event was that faulty maintenance and not a product defect was ultimately blamed as the principal cause of the crash. Initially the general safety of the DC-10 was questioned, but subsequent evidence indicated that the maintenance procedure of an American Airlines mechanic (who later moved on to Continental and taught the technique there) of removing the engine and pylon in one step instead of separately, caused stress which led to the engine tearing loose. American Airlines and Continental ended up paying fines of $500,000 and $100,000 respectively for this procedure, but American did not admit guilt for the crash in paying the fine.(1) Given that most of the blame for the crash fell on American's maintenance procedure, the finding of significant cumulative abnormal returns by Chalk is surprising in the context of efficient markets. Evidence of faulty maintenance procedures became public information within two weeks after the crash. Because of the initial uncertainty over the cause of the crash, significant abnormal returns for particular event days are likely, but those negative abnormal returns should be offset in later periods as the true cause becomes known to investors. Thus the finding of significant cumulative abnormal returns to McDonnell Douglas would not be consistent with the semistrong form of efficient markets. The purpose of this note is to replicate and extend the findings of Chalk's study by examining the market forces that affected the other major party involved in subsequent litigation, American Airlines Corporation. Following Chalk's argument, one would expect to see the market impose substantial costs on American if it is believed that travel on that airline is less safe than previously thought. Travelers could substitute towards other airlines where possible, reducing cash flow and hence the value of the firm. 
The size of the demand effect should have little to do with American's investment in aircraft safety. …
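The market-model event-study machinery that both Chalk and this note rely on can be sketched in a few lines. The returns below are simulated and the window lengths, betas, and event date are illustrative assumptions, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylized market-model event study. An estimation period is used to
# fit the firm's alpha and beta; abnormal returns (AR) and cumulative
# abnormal returns (CAR) are then computed over the event window.
est_len, win = 120, 11                     # estimation period, event window
mkt = rng.normal(0.0005, 0.01, est_len + win)
firm = 0.0002 + 1.1 * mkt + rng.normal(0, 0.008, est_len + win)
firm[est_len + 5] -= 0.04                  # large negative "event-day" return

# Estimate alpha and beta on the pre-event period only.
beta, alpha = np.polyfit(mkt[:est_len], firm[:est_len], 1)

# Abnormal returns and their cumulative sum over the event window.
ar = firm[est_len:] - (alpha + beta * mkt[est_len:])
car = ar.cumsum()
print(round(float(car[-1]), 4))
```

The efficient-markets point in the note maps directly onto this object: a one-day shock should show up in the day's AR, but if the bad news is later reversed, the CAR over a longer window should drift back toward zero.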

Journal ArticleDOI
TL;DR: This article showed that the excess profits from filter rules between the US and Canadian dollar during the 1950s are the result of intervention by the Bank of Canada, which is a central bank that leans against the wind in those markets.
Abstract: Research shows that filter rules in foreign exchange markets yield higher than normal profits. Other work indicates that central banks lean against the wind in those markets, and some claim that this intervention generates profits for private speculators. This study, which uses daily data for exchange rates and official reserves, indicates that excess profits from filter rules between the U.S. and Canadian dollar during the 1950s are the result of intervention by the Bank of Canada.
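A minimal sketch of an x-percent filter rule of the kind studied here, applied to a simulated price path. The exact trading conventions of filter rules vary across studies; this is one common variant, not necessarily the authors', and the random-walk price series means no systematic profit should be expected — the point is only the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)

def filter_rule_positions(prices, x=0.01):
    """x-percent filter rule: go long after the price rises x percent
    above its most recent trough, go short after it falls x percent
    below its most recent peak. Returns a position series in {-1, 0, 1}."""
    pos, positions = 0, []
    peak = trough = prices[0]
    for p in prices:
        peak, trough = max(peak, p), min(trough, p)
        if pos <= 0 and p >= trough * (1 + x):
            pos, peak = 1, p            # buy signal; reset the peak
        elif pos >= 0 and p <= peak * (1 - x):
            pos, trough = -1, p         # sell signal; reset the trough
        positions.append(pos)
    return np.array(positions)

# Simulated exchange-rate path (geometric random walk).
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.005, 500)))
pos = filter_rule_positions(prices)
returns = pos[:-1] * np.diff(np.log(prices))   # hold position into next day
print(round(float(returns.sum()), 4))
```

The study's question is then whether such rule profits on actual U.S.–Canadian dollar data line up with days of central-bank intervention, measured from official reserve changes.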

Journal ArticleDOI
TL;DR: In this paper, the authors show that if the government changes the tax on wage income, what happens to aggregate labor supply? This question is at the core of debates between proponents of supply-side and Keynesian approaches to government fiscal policy.
Abstract: TAX RATES AND LABOR SUPPLY IN FISCAL EQUILIBRIUM I. INTRODUCTION If the government changes the tax on wage income, what happens to aggregate labor supply? This question is at the core of debates between proponents of supply-side and Keynesian approaches to government fiscal policy. It is unfortunate, therefore, that previous attempts to resolve the issue have obscured decisive assumptions regarding the preference relation between leisure and public spending. One argument, exposited recently by Gwartney and Stroup [1983; 1986] and Ehrenberg and Smith [1988, 179-80], descends from an older literature represented by the works of Friedman [1949; 1954], Goode [1949], Scitovsky [1951], and Bailey [1954], and emphasizes the importance of a balanced-budget framework for addressing the question. This approach reveals the presence of an income effect caused by the change in government spending that must accompany the tax change. At optimum, according to this view, this income effect exactly offsets the income effect of the tax change so that only the substitution effect of the tax remains. As a consequence, a balanced-budget increase in the wage tax unambiguously decreases economy-wide labor supply, provided the increase in public spending is valued the same as the forgone private spending. A different approach was initiated by Winston [1965] and subsequently elaborated upon by Lindbeck [1982], Fullerton [1982], Hanson and Stuart [1983], Bohanon and Van Cott [1986], and Gahvari [1986]. This approach stresses the importance of the preference relation between public spending and private spending, rather than the role of the public spending income effect, in determining the change in aggregate labor supply. As a special example, Gahvari [1986] assumes a preference structure which implies that public spending does not have any influence on labor supply so there is only the partial equilibrium effect of the tax. 
However, the general conclusion of this line of reasoning is that the theoretical ambiguity of the labor supply response arises from both tax and spending effects. In this paper a simple, formal model is used to develop a careful accounting of the various income and substitution effects. The model is sufficiently general to permit a rigorous comparison of earlier studies and to expose implicit assumptions responsible for their conclusions. We show that the two approaches outlined above are associated, respectively, with the focal cases of "compensated independence" and "ordinary independence" between leisure and public spending. The plan of the paper is as follows. Section II contains a model of labor supply in the presence of wage taxation and public spending. In section III, the effect on aggregate labor supply of a balanced-budget change in wage taxation is analyzed. Section IV provides an interpretation of several previous analyses of these issues. Section V contains a summary of our results. II. THE MODEL Consider an economy with n identical consumers who derive utility from leisure (l), a pure private good (x) which serves as numeraire, and a publicly provided good (z). The utility function u(l, x, z) is assumed to be twice continuously differentiable and strictly quasi-concave. Everyone is endowed with T units of time which are allocated either to labor (market work) or leisure (nonmarket activities). The marginal product of labor in producing the private good is the constant, real (gross-of-tax) wage rate W. Because agents are identical, we can confine attention to allocations of equal consumption. As a consequence, the production possibilities frontier, assumed to be linear, can be expressed in per capita terms as Wl + x + (P/n)z = WT, (1) where P is the constant marginal cost of z. 
With identical agents and only two goods not publicly provided, it may be assumed without loss of generality that public spending is financed by a wage tax with constant (marginal and average) ad valorem rate t. …
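The per capita problem described in the abstract can be written out as follows. The household and government budget constraints below are our reconstruction of the standard balanced-budget setup, not quoted from the paper; together they imply the resource constraint (1):

```latex
% Household problem, taking the publicly provided good z as given:
\[
  \max_{l,\,x}\; u(l, x, z)
  \qquad \text{s.t.} \qquad
  x = (1 - t)\,W\,(T - l),
\]
% Government budget balance (per capita):
\[
  \frac{P}{n}\, z = t\, W\, (T - l).
\]
% Adding the two constraints recovers the production possibilities
% frontier stated in the abstract:
%   x + (P/n) z = W (T - l)  \Longrightarrow  W l + x + (P/n) z = W T. (1)
```

The balanced-budget experiment of section III then varies \(t\) and \(z\) together along the government budget line, which is what generates the offsetting spending-side income effect the paper emphasizes.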

Journal ArticleDOI
TL;DR: This paper developed a synthesized macroeconomic model that incorporates the local-global informational asymmetries of an island economy into a setting characterized by endogenous wage indexation, where agents are unable both to filter out the separate influences of demand and supply shocks on observed output prices and to distinguish between the separate price effects of local and aggregate disturbances.
Abstract: This paper develops a synthesized macroeconomic model that incorporates the local-global informational asymmetries of an “islands” economy into a setting characterized by endogenous wage indexation. In such an economy, agents are unable both to filter out the separate influences of demand and supply shocks on observed output prices and to distinguish between the separate price effects of local and aggregate disturbances, so that optimal wage indexation depends upon both the variances of supply and demand disturbances and the information-conditioned forecasts of agents. As a result, optimal monetary policy generally depends upon the variances of local and aggregate supply and demand.

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that a change in the stochastic process generating money can alter the relationship between money and inflation and between inflation and interest rates, and that the extent to which inflation is forecastable depends significantly on the extent of money persistence and forecastability.
Abstract: This paper demonstrates that a change in the stochastic process generating money can alter the relationships between money and inflation and between inflation and interest rates. The extent to which inflation is forecastable is shown to depend significantly on the extent to which money is forecastable. Thus, the greater the persistence and forecastability of money, the greater the likelihood of observing a statistically significant Fisher effect. U.S. data over the 1953–86 period are used to demonstrate that instability in the Fisher effect coincides with changes in the stochastic process generating money. There is a significantly stronger Fisher effect during a subsample in which money—and hence inflation—are more predictable.
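The link between money's forecastability and inflation's can be illustrated with a toy AR(1) simulation. All parameter values and the mapping from money growth to inflation below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(rho, n=2000):
    """Money growth follows an AR(1) with persistence rho; inflation
    tracks money growth plus noise. Returns the R-squared from
    forecasting inflation with lagged money growth."""
    m = np.zeros(n)
    for t in range(1, n):
        m[t] = rho * m[t - 1] + rng.normal(0, 1)
    infl = m + rng.normal(0, 0.5, n)

    x, y = m[:-1], infl[1:]            # lagged money vs. next-period inflation
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return 1 - resid.var() / y.var()

# Highly persistent money makes inflation far more forecastable.
print(simulate(0.9), simulate(0.1))
```

In the paper's terms, a sample in which money behaves like the high-rho case is one in which expected inflation is well measured, so a Fisher effect is far more likely to show up as statistically significant.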

Journal ArticleDOI
TL;DR: In this paper, the authors estimate sex discrimination with two plausible methods of controlling for a major unobservable, acceptance of male and female traditional roles in the household, and find little discrimination and possibly favoritism toward women.
Abstract: Current decomposition estimates of sex discrimination by employers are not robust. Many “unobservables,” like motivation and attitudes toward work, are left unmeasured. We estimate sex discrimination with two plausible methods of controlling for a major unobservable: acceptance of male and female traditional roles in the household. The methods offer enormously different estimates of sex discrimination. One estimates sex discrimination at over 61 percent of the female wage; the other finds little sex discrimination and possibly favoritism toward women. The range in estimates is so large that point estimates of sex discrimination by employers are of little use to policymakers.
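For readers unfamiliar with the decomposition technique the abstract criticizes, here is a stylized Oaxaca-style wage decomposition on synthetic data (all names and numbers are illustrative, not from the study): the mean log-wage gap splits into a part explained by observed characteristics and an unexplained residual often labeled "discrimination".

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# Synthetic samples with one observed characteristic (say, experience)
x_m = rng.normal(12, 4, n)                       # men
x_f = rng.normal(10, 4, n)                       # women
logw_m = 1.0 + 0.05 * x_m + rng.normal(0, 0.3, n)
logw_f = 0.8 + 0.05 * x_f + rng.normal(0, 0.3, n)

def ols(y, x):
    """Intercept-and-slope OLS via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_m, b_f = ols(logw_m, x_m), ols(logw_f, x_f)

gap = logw_m.mean() - logw_f.mean()
# Part due to different endowments, priced at the male coefficients
explained = b_m[1] * (x_m.mean() - x_f.mean())
# Residual part: different intercepts and coefficients ("discrimination")
unexplained = (b_m[0] - b_f[0]) + (b_m[1] - b_f[1]) * x_f.mean()
```

The identity gap = explained + unexplained holds exactly; the abstract's point is that the unexplained share is extremely sensitive to which unobservables enter the regressions.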

Journal ArticleDOI
TL;DR: In this paper, the authors present a simple model in which a presumption does exist about the location of internationally mobile capital, the relative concentration of production, and the volume of international trade in a commodity during the course of a cycle in its price relative to the prices of other commodities.
Abstract: CO-MOVEMENTS IN RELATIVE COMMODITY PRICES AND INTERNATIONAL CAPITAL FLOWS: A SIMPLE MODEL Suppose a number of countries produce a commodity which employs local labor and a type of capital that is internationally mobile. Within the framework of a specific-factors model the paper argues that there is a presumption about the international movement of capital when the relative price of the commodity produced with that capital rises on world markets. Capital flows towards countries less heavily involved in producing the commodity; internal labor flows contribute towards worldwide industry dispersion; and the volume of international trade in that commodity tends to fall. I. INTRODUCTION Fluctuations in the composition of world demand may cause a particular commodity to experience alternating periods during which its price is high relative to other commodities and periods during which its price is relatively low. A number of countries may share not only in the production of such a commodity but also in the use of some factor (call it capital) specific to its production but capable of being relocated from one country to another. That is, some productive activities may combine local factors with inputs that have international markets. What can be said about the location of internationally mobile capital as shifts in demand result in changes in relative commodity prices? Is there anything systematic about the likely degree of international concentration or diversification of production over periods in which a commodity's price is alternately high and low? The purpose of this article is to sketch out a simple model in which a presumption does exist about the location of internationally mobile capital, the relative concentration of production, and the volume of international trade in a commodity during the course of a cycle in its price relative to the prices of other commodities.
The model is a variant of the sector-specific general equilibrium production model in which one type of capital is internationally mobile so that rates of return remain equalized among countries. The presumption is that when a commodity's price is relatively high compared with other commodities, real capital specifically used to produce that commodity tends to leave regions that are relatively large producers. Furthermore, this tendency of international capital mobility to encourage a dispersion of world production when a commodity's price is relatively high and a concentration of the world's production when price is low is enhanced by the mobility within each country of factors used jointly with other sectors. Finally, international trade in a commodity experiencing such a price cycle tends to be "second best" compared with own production in the sense that the volume of trade tends to contract precisely when the commodity's price is high relative to other commodities. II. THE PRESUMPTION ABOUT INTERNATIONAL CAPITAL FLOWS Production structures within countries are assumed to follow the sector-specific model as described in Jones [1975]; mobile labor is combined with each of several types of specific capital goods in producing outputs. In one activity capital is assumed to be internationally mobile, although retaining its sectoral specificity. Such mobility serves to equalize the return to this factor in all areas in which it is employed. Although commodities are traded, technologies and factor endowments are not assumed to be the same, so the returns to local specific factors and national wage rates can differ from country to country. Although the argument can be posed in the context of a many-country trading world, the logic is more easily revealed in a two-country setting. Suppose both home and foreign countries each produce a number of traded commodities and one of these, say x1, makes use of a specific factor, say K1, which is mobile internationally.
Such mobility ensures that the rental on type-1 capital at home, r1, is kept in line with the foreign return, r1*. …

Journal ArticleDOI
TL;DR: In this paper, the authors used the Alexander-Kurland model to study the relationship between the probability that a man is the biological father of a woman's children and the amount of money he invests in her children.
Abstract: INVESTMENT IN SISTER'S CHILDREN AS BEHAVIOR TOWARDS RISK I. INTRODUCTION A man in some societies invests much in the children of his sister. An explanation has been suggested by biologist Richard Alexander and examined by anthropologist Jeffrey Kurland and others. [1] They suggest that if a man has serious doubts about being the biological father of his wife's children, he may wish to invest instead in his sister's children with whom he is sure to share some genes. Since the Alexander-Kurland model forms the foundation for our work, it will be useful to present it in greater detail. Define the "relatedness" of two persons (say, Dick and Jane) as the fraction of genes they have in common. Dick and Jane have a relatedness of 1/2 if they are full siblings or a relatedness of 1/4 if they are half siblings. [2] The relatedness of Dick to one of Jane's children is 1/4 if he and Jane shared the same father. If Dick and Jane did not share the same father then the relatedness of Dick to one of Jane's children would be 1/8. The relatedness of Dick to one of his wife's children is 1/2 if he in fact is the father. It is zero if he is not. Suppose that everyone in a society has the same probability that he was fathered by his mother's husband. Call this probability the paternity probability and denote it by ρ. The lowest paternity probability among well-documented societies is probably that of the matrilineal Nayar group of castes in the Central Kerala region of India (see Gough [1961, 298-384]). Nayar women in Central Kerala may have had as many as twelve "visiting husbands" at any one time; each husband stayed with his wife on nights informally agreed upon by the woman's regular husbands. A man had more than one "wife" but lived in the household of his mother and sisters. Men ordinarily invested nothing beyond ceremonial gifts in their wives' children and instead devoted their full resources to the support of the children of their sister.
The residence and investment patterns of Nayar men may be related to their traditional occupation as soldiers. The potential connection between the military profession of the male Nayars (entailing long absences from the home) and their paternity probability was suggested early in the literature. (For references see De Moubray [1931, 46-47]). Although the Nayar may be an extreme case, the best available evidence suggests that they are not unique in having a ρ much below what we would expect to observe in developed nations today. For example, in some matrilineal societies such as the Dobu, the Truk and the Trobriand Islanders, adultery was known to have been common (Fortune [1963, 7], Malinowski [1932, 98], Schneider [1961, 213]). In others, such as the Ashanti, the separate residences of husbands and wives increased the cost of monitoring the wife's behavior and hence increased the likelihood of adultery. Among the Truk of Micronesia, sisters were free to have sexual intercourse with each other's husbands (even though the husbands were not blood relatives) (see Schneider [1961, 230]). In societies with high divorce rates, such as the Navaho and Truk, a woman may have had children living in her household who were fathered by different men, even if she had never committed adultery (Aberle [1961, 129], Schneider [1961, 213]). If a man could invest solely in those children in his wife's household whom he had fathered, then the high divorce rates would not matter. But some forms of investment, such as maintenance of his wife's dwelling, were "household goods" that benefitted all members of the household. So to the extent that the investment by a man in his wife's household was not easily divisible between the various children, a man would face a low "effective paternity probability." Note that this would occur even if he knew with complete certainty which of his wife's children was his (Kurland [1979, 161]).
The Nayar, the Dobu, the Trobriand Islanders, the Ashanti and the Navaho are among the societies where one would expect to find a low paternity probability. …
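The relatedness arithmetic in the abstract can be collected in a few lines. One common simplification, which is an assumption of this sketch rather than a claim about the paper's model, is that each child is fathered by the mother's husband with probability rho and by a distinct outsider otherwise, so siblings share a father with probability rho squared. Under that assumption the figures above (1/2 or 0 for a wife's child; 1/4 or 1/8 for a sister's child) yield Kurland's well-known threshold rho = 2 - sqrt(3), roughly 0.27.

```python
import math

def r_wifes_child(rho):
    """Expected relatedness to wife's child: 1/2 if he is the father, else 0."""
    return rho * 0.5

def r_sisters_child(rho):
    """Expected relatedness to sister's child: 1/4 if the siblings share a
    father, 1/8 if not; P(same father) = rho**2 under the simplifying
    assumption stated in the lead-in."""
    p_same = rho ** 2
    return p_same * 0.25 + (1.0 - p_same) * 0.125

# Indifference point: rho/2 = (1 + rho**2)/8  =>  rho = 2 - sqrt(3)
threshold = 2.0 - math.sqrt(3.0)
```

Below the threshold, as plausibly among the Nayar, investment in a sister's children dominates investment in a wife's children in expected-relatedness terms.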

Journal ArticleDOI
TL;DR: A detailed framework for economic software evaluation was created and applied to each program in this paper, where the reviewer was given a total of 31 questions, in five different categories to guide the software review.
Abstract: GRADING SOFTWARE PROGRAMS ACCOMPANYING SELECTED PRINCIPLES TEXTS An increasing number of non-statistical software packages are being written as supplementary instructional material for economics Principles texts. This paper reviews the software programs currently available as ancillary material to ten major Principles texts. To avoid simply listing what the programs do, a detailed framework for economic software evaluation was created and applied to each program. This evaluation instrument gives the reviewer a total of 31 questions, in five different categories, to guide the software review. A summary table is presented which allows direct comparison of each package across each of the five evaluation categories. I. INTRODUCTION An increasing number of Principles textbooks include non-statistical computer-assisted instructional (CAI) software packages as part of the supplementary instructional material available to the student. The diskette(s) containing the program(s) and any written documentation are usually provided either free or for a small charge to students buying the textbook. The software combines one or both of the following two program types: Tutorials, in which the student is presented with a monolog, incorporating text and graphics, that reviews important concepts covered in the relevant textbook chapter, and Drills, in which the student is presented with a series of questions designed to allow a self-test of their understanding of the material covered in the relevant chapter (1). This paper reviews the Tutorial and Drill software programs currently available as ancillary material to ten major Principles texts. The programs reviewed here range from packages containing Tutorials or Drills alone, to those offering some combination of the two. The question that launched this review was, "Are the Tutorial and Drill programs supplied with Principles texts created following sound pedagogical principles?" II.
EVALUATION INSTRUMENT An Instructional Software Evaluation Form was developed in an effort to answer that question (2). The form lists 31 questions a reviewer should ask when working through a program, its documentation, and the student/teacher manual. The questions are divided into five different sets. (3) The first set of questions is on General Issues regarding the general design of the program. The second set of questions deals with the Economics Content of the tutorials and drills. The third set of questions is the most critical, covering the Instructional Quality of the program. The fourth set of questions covers the Technical Quality of the program itself, as well as any documentation provided. For those packages that contain drill questions designed to assess the student's understanding, the fifth set of questions covers the likely Effectiveness of the Assessment Measures on learning. The reviewer answers each question in a category by assigning a whole number ranging from -3 to +3 (including zero). When all the questions in a single category are answered, the numerical rankings are summed, and that sum compared to the reviewer's subjective "grading scale" for each category. For example, if a category has seven questions, my grading scale is set up in the following way: a Sum less than or equal to 0 receives a Grade of F; a Sum between 1 and 6 receives a Grade of D; a Sum between 7 and 13 receives a Grade of C; a Sum between 14 and 20 receives a Grade of B; a Sum of 21 receives a Grade of A. By assigning points to the letter grade received in each category, a GPA can be calculated for each program. For example, a Grade of F is assigned 0 points; D, 1 point; C, 2 points; B, 3 points; and A, 4 points. III. RESULTS Preliminary Comments Space limitations do not allow a question-by-question comparison among all ten of the programs reviewed. Table I shows a summary of the Grades assigned in each of the five categories. …
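The grading scheme described for a seven-question category can be written down directly; the thresholds and grade points are those given in the abstract, while the function names are mine.

```python
def category_grade(scores):
    """Grade one seven-question category; each score is a whole number in -3..3."""
    assert len(scores) == 7 and all(-3 <= s <= 3 for s in scores)
    total = sum(scores)
    if total <= 0:
        return "F"
    if total <= 6:
        return "D"
    if total <= 13:
        return "C"
    if total <= 20:
        return "B"
    return "A"  # total == 21

GRADE_POINTS = {"F": 0, "D": 1, "C": 2, "B": 3, "A": 4}

def gpa(grades):
    """GPA across the five evaluation categories."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)
```

For example, a program graded A, B, C, D, F across the five categories earns a GPA of 2.0 under this scheme.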

Journal ArticleDOI
TL;DR: In this article, the authors present an empirical analysis of the industrial structure of the federal government and examine the relationship between measured industrial structure and bureau monopoly power, which is consistent with the focus of most existing theories that rely on the assumption of monopoly power.
Abstract: INDUSTRIAL STRUCTURE AND MONOPOLY POWER IN THE FEDERAL BUREAUCRACY: AN EMPIRICAL ANALYSIS I. INTRODUCTION In theories of public supply of services, bureau monopoly power plays a key role in determining output, budget, and cost levels. That bureaus have considerable monopoly power has generally been taken for granted. (1) Margolis [1975] and Kaufman [1976], however, question the validity of this assumption. They suggest that bureaus compete, not only in the broad sense of the "invisible hand" as proposed by McKean [1965], but also in the narrower economic sense of supplying substitutable services. Borcherding [1988] comments on "...the need to establish the strength and effectiveness of competition within the public sector..." in order to determine the role of competition in public sector resource allocation. To date there has been no direct empirical test of the assertion that individual bureaus have monopoly power. This paper presents such a test by first estimating the industrial structure of the federal sector, and by next examining the relationship between measured industrial structure and bureau monopoly power. This investigation is limited to estimating federal industrial structure, as distinct from market structure. That is, the characteristics of bureau supply are examined, but demand characteristics are not. This approach is consistent with the focus of most existing theories that rely on the assumption of monopoly power. These theories model bureaucratic managerial behavior in supply, taking the legislative role to be primarily one of production monitoring. (2) II. MEASURING INDUSTRIAL STRUCTURE IN THE FEDERAL SECTOR Most bureaus are considered as independent organizations analogous to the firm in the private sector. The concept of a public sector industry is also analogous to that of a private sector industry, that is, a collection of producing organizations that supply a similar service and then compete for funds. 
A private sector industry is considered to be highly structured if the distribution of market shares (or some other measure of size) of the firms in the industry is significantly uneven. (3) Two commonly accepted measures of the degree of industry structure in the private sector are the concentration ratio and the Herfindahl index. Of the two measures, the concentration ratio is cited more frequently, primarily because it is readily available. The concentration ratio measures the combined market share of a given number (usually four or eight) of the largest firms in an industry. Because it is a partial and aggregate measure of industry structure, the concentration ratio provides limited information on the firms included in the ratio. The Herfindahl index measures the dispersion of firm size within an industry by summing squared market shares of each firm. It is generally considered a better indicator of overall industry structure and competitive level within an industry than the concentration ratio because it usually incorporates information on market shares of all firms in an industry rather than overall concentration of a few large firms. The more limited use of the Herfindahl index may be attributed to its extensive data requirements. Each of these measures is used to estimate industrial structure of the federal sector. To determine public sector industry structure, budget and expenditure data on federal funds (general and special funds) were obtained from the Budget of the United States Government, Appendix at the agency level for all bureaus active in either of two fiscal years, FY 1985 and FY 1980. (4,5) Data on nearly 300 federal organizations have been examined for each fiscal year. FY 1985 is the most recent fiscal year for which actual rather than estimated data are available. 
FY 1980 was chosen because a five-year interval should be reasonable for comparative purposes, and is consistent with the practice for similar calculations made for the private sector. …
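The two structure measures the paper applies to bureau budget shares are straightforward to compute; the shares below are illustrative, not the paper's data.

```python
def concentration_ratio(shares, k=4):
    """CR_k: combined market share of the k largest firms (or bureaus)."""
    return sum(sorted(shares, reverse=True)[:k])

def herfindahl(shares):
    """Herfindahl index: sum of squared market shares, using all firms."""
    return sum(s ** 2 for s in shares)

# An illustrative "industry" of seven bureaus (shares sum to 1)
shares = [0.30, 0.25, 0.15, 0.10, 0.10, 0.05, 0.05]
cr4 = concentration_ratio(shares)  # 0.80
hhi = herfindahl(shares)           # 0.20
```

As the abstract notes, the Herfindahl index uses every share and so summarizes overall structure, while the concentration ratio ignores everything outside the top few firms.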

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the impact of agricultural supply shocks and government spending on the real interest rate in both closed and open economies, and show that a temporary increase in government spending reduces the resources currently available to the private sector.
Abstract: SUPPLY SHOCKS AND THE INTEREST RATE I. INTRODUCTION A new approach to business cycles that stresses intertemporal substitution possibilities and market clearing equilibria has recently gained prominence. Key models that use this "new classical" framework include those of Lucas [1972], Barro [1976; 1981b; 1987a; 1987g], Kydland and Prescott [1980; 1982], Long and Plosser [1983], and King and Plosser [1984]. Although these models share the basic similarity of intertemporal maximization and market clearing, they differ about which factors cause business cycle fluctuations. However, most of them admit a role for supply shocks such as OPEC oil price shocks and agricultural harvest failures. Most of the empirical work using the market clearing approach concentrates on quantities such as real GNP and ignores the key price variable of the intertemporal substitution approach, the (real) interest rate; exceptions are the papers by Benjamin and Kochin [1984] and Barro [1987a], who use eighteenth and nineteenth century data to examine the effect of government spending on the (nominal) interest rate. This paper extends previous work by studying not only the effects of government spending on the interest rate but also the effects of supply shocks on the interest rate. The classic supply shock, agricultural harvests, is used. Indeed, to the best of our knowledge, this is the first paper to investigate how agricultural supply shocks affect the interest rate, and, as such, it provides additional evidence about the factors that influence interest rates. Although the theory behind this work is cast in the framework of the equilibrium approach, Keynesian models also concur with the theoretical prediction that supply shocks raise the real interest rate. Thus, this empirical investigation of the impact of harvests on the interest rate will command attention from Keynesian as well as new classical macroeconomists. Section II reviews the theory used to motivate the work.
The third section discusses the data. The fourth section presents the empirical results, while the last section contains conclusions. II. THEORY The equilibrium approach to macroeconomics emphasizes the role played by intertemporal optimization: consumers maximize their intertemporal utility functions while firms maximize their profits. As a result, an intertemporal relative price--the ex ante real interest rate--emerges as a crucial variable. Consider the effect on the ex ante real interest rate from a temporary, adverse supply shock, such as a harvest failure in an agricultural, closed economy. Consumers, faced with a temporary shortfall in income and trying to smooth their consumption, generally try to borrow in order to maintain their usual level of consumption. Thus the ex ante real interest rate increases. Much the same argument applies to government spending; a temporary increase in government spending reduces the resources currently available to the private sector. Then, as in the case of a temporary supply shock, the increased demand for borrowing to smooth the reduction in disposable income leads to a higher interest rate. An open economy is different. When faced with a temporary reduction in resources--from either an adverse supply shock or a temporary increase in government spending--the private sector in an open economy is able to borrow from abroad by running a balance of trade deficit. If the country in question is a small part of the world economy, the increased borrowing does not affect the (world) ex ante real interest rate; so the effect on the country's ex ante real interest rate is nil. Returning to the case of a closed economy, supply shocks and changes in government spending affect the ex ante real interest rate only if they are temporary. If they were permanent, consumers would not have the same incentive to borrow, so the interest rate would tend to be constant. …

Journal ArticleDOI
TL;DR: In this article, the authors present examples of three errors in applying supply and demand analysis that are frequently made both by students and by authors of business articles, errors which can have adverse effects on the students of professors who passively use business periodicals in introductory economics courses.
Abstract: WHAT'S WRONG HERE? I. INTRODUCTION In the interest of enhancing the economic way of thinking, many professors encourage students to read the Wall Street Journal or other business periodicals (a passive role), while others, following Kelly [1983], use business periodicals in a more specific way. In both instances, the objective is to inspire students to correctly apply basic economic principles to real world problems in order to make economics relevant and interesting and to increase the retention level of a conventional one-year economics course. Research by Saunders [1980, pp. 10-12] supports the importance of regular reading of business periodicals for achieving these goals. The passive method of using business periodicals (handing out Wall Street Journal or Business Week sign-up sheets and occasionally making periodic references to specific articles) is time efficient for the busy professor, but can be counterproductive unless the professor takes time to alert students to possible errors in economic reasoning that occur more frequently than is desirable in many business articles, particularly articles applying supply and demand analysis. Most students will make the assumption that the economic analysis reported in the press is correct. In those instances where journalists (or experts) apply improper economic reasoning in their reporting, the article may serve the undesired outcome of reinforcing similar errors in thinking by students reading the article. This confusion is further compounded when, in the same article, errors are reported in one segment while other arguments correctly apply economic concepts. The purpose of this article is to aid professors in using business periodicals in principles courses by presenting examples of three errors in applying supply and demand analysis that are frequently made both by students and authors of business articles.
Incorrect use of supply and demand analysis as evidenced in journalistic reports has several implications that can have adverse effects on the students of professors who passively use business periodicals in introductory economics courses. First, the student who has been careful to learn the proper terminology and logic will feel uncomfortable and begin to question his or her own understanding when reading a 'professional' who has applied the concept incorrectly. Second, the student who, at the time of reading, does not fully appreciate the principle but is attempting to apply it will unfortunately 'learn' from the incorrect presentation, and thus the article will have a negative effect on the original teaching objective. Finally, if the error made by the author is one made frequently by students, it will become harder for the student to correctly learn economic principles, since the student's erroneous reasoning has been 'verified' by the journalist or the expert. While it would be impossible for a professor to monitor even the Wall Street Journal for all cases of misreporting during a semester, the professor can make students more sophisticated readers and hopefully avoid some of the above problems by presenting examples of typical errors of reasoning students may encounter in their outside reading. The professor can maintain a file of articles that have come to his/her attention or use available supplements that contain examples of journalistic errors in applying economic reasoning. Some brief examples of typical errors that we have recently used effectively in our classes follow. II. EXAMPLES The first typical error involves the failure to carefully distinguish between changes in quantity demanded (quantity supplied) and changes in demand (supply). Principles of economics texts carefully distinguish between changes in quantity demanded and changes in demand.
The terminology, when consistently applied, allows a reader to easily determine whether a change in buyer behavior has resulted from a change in product price or from a change in a non-price determinant of demand. …