
Showing papers in "The American Economic Review in 1983"


Posted Content•
TL;DR: The applied econometrician is like a farmer who notices that the yield is somewhat higher under trees where birds roost, and he uses this as evidence that bird droppings increase yields.
Abstract: Econometricians would like to project the image of agricultural experimenters who divide a farm into a set of smaller plots of land and who select randomly the level of fertilizer to be used on each plot. If some plots are assigned a certain amount of fertilizer while others are assigned none, then the difference between the mean yield of the fertilized plots and the mean yield of the unfertilized plots is a measure of the effect of fertilizer on agricultural yields. The econometrician's humble job is only to determine if that difference is large enough to suggest a real effect of fertilizer, or is so small that it is more likely due to random variation. This image of the applied econometrician's art is grossly misleading. I would like to suggest a more accurate one. The applied econometrician is like a farmer who notices that the yield is somewhat higher under trees where birds roost, and he uses this as evidence that bird droppings increase yields. However, when he presents this finding at the annual meeting of the American Ecological Association, another farmer in the audience objects that he used the same data but came up with the conclusion that moderate amounts of shade increase yields. A bright chap in the back of the room then observes that these two hypotheses are indistinguishable, given the available data. He mentions the phrase "identification problem," which, though no one knows quite what he means, is said with such authority that it is totally convincing. The meeting reconvenes in the halls and in the bars, with heated discussion
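The identification problem Leamer dramatizes can be made concrete with a small simulation (the setup and numbers are ours, purely illustrative, not the paper's): when two candidate causes occur in exactly the same plots, the difference in mean yields is easy to compute, but no regression on the available data can attribute it to either cause.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field data: plots under trees get both bird droppings
# and shade, so the two candidate "treatments" are perfectly collinear.
n = 200
under_tree = rng.integers(0, 2, n)   # 1 if the plot sits under a roosting tree
droppings = under_tree               # droppings occur exactly where birds roost
shade = under_tree                   # ...and so does shade
yield_ = 50 + 5 * under_tree + rng.normal(0, 3, n)

# The difference in mean yields is easy to estimate...
effect = yield_[under_tree == 1].mean() - yield_[under_tree == 0].mean()
print(f"estimated 'treatment' effect: {effect:.2f}")

# ...but a regression on both candidate causes is unidentified:
# the design matrix [1, droppings, shade] is rank-deficient.
X = np.column_stack([np.ones(n), droppings, shade])
print("rank of design matrix:", np.linalg.matrix_rank(X), "out of", X.shape[1])
```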

2,228 citations



Posted Content•
TL;DR: This paper examined the effects of the financial crisis of the 1930s on the path of aggregate output during that period and argued that the financial disruptions of 1930-33 reduced the efficiency of the credit allocation process; and that the resulting higher cost and reduced availability of credit acted to depress aggregate demand.
Abstract: This paper examines the effects of the financial crisis of the 1930s on the path of aggregate output during that period. Our approach is complementary to that of Friedman and Schwartz, who emphasized the monetary impact of the bank failures; we focus on non-monetary (primarily credit-related) aspects of the financial sector--output link and consider the problems of debtors as well as those of the banking system. We argue that the financial disruptions of 1930-33 reduced the efficiency of the credit allocation process; and that the resulting higher cost and reduced availability of credit acted to depress aggregate demand. Evidence suggests that effects of this type can help explain the unusual length and depth of the Great Depression.

1,820 citations


Posted Content•
TL;DR: In this paper, the authors model nonparticipation in welfare programs as the result of welfare stigma--a disutility of participation per se, documented in recipients' negative self-characterizations--and use a utility-maximization model to predict the impact of welfare programs on the low-income population.
Abstract: Perhaps the most basic assumption of the economic theory of consumer demand is that "more is better than less." Virtually all of the major propositions of consumer theory can, in a certain sense, be derived from the assumption that "goods are good." Interestingly, however, this tenet seems to be violated by the behavior of many individuals in the low-income population, for many turn out to be eligible for a positive welfare benefit but do not in fact join the welfare rolls. For example, it has been estimated that in 1970, only about 69 percent of the families eligible for AFDC (Aid to Families with Dependent Children) participated in the program (see Richard Michel, 1980). The corresponding percentage for AFDC-U, the program for which families with an unemployed male are eligible, was only 43 percent and the participation rate in the Food Stamp Program was only 38 percent (see Maurice McDonald, 1977). This phenomenon has puzzled many investigators because such individuals do not locate on the boundaries of their budget sets. Consequently, most investigators ignore the problem when studying the effects of welfare programs on behavior. In this paper, this seemingly irrational rejection of an increase in income is modeled as resulting from welfare stigma--that is, from disutility arising from participation in a welfare program per se.¹ The existence of stigma has been amply documented in the sociological literature (Patrick Horan and Patricia Austin, 1974; Lee Rainwater, 1979), where interviews of recipients have often uncovered feelings of lack of self-respect and "negative self-characterizations" from participation in welfare. Nevertheless, this phenomenon has not been modeled, and many questions consequently remain. When is the disutility of participation strong enough to prevent participation? Shouldn't we expect individuals to weigh the disutility of participation against the potential benefit in their decisions? What is the elasticity of participation with respect to the potential benefit? Also, in a slightly different vein, how are the work disincentives of welfare affected by stigma? These questions have been given scant attention by economists, yet they are crucial for our ability to predict the impact of various welfare programs on the low-income population. Here these questions are addressed by modeling nonparticipation as a utility-maximizing decision. The model is developed and estimated for the AFDC program.² The model posits an individual utility function containing not just disposable income, but
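A minimal sketch of the participation decision the paper describes, under an assumed log-utility specification (the functional form and all numbers are hypothetical, not the paper's estimates): an eligible individual joins the rolls only when the utility gain from the benefit exceeds the fixed disutility of participation, so take-up rises with the potential benefit and falls with stigma.

```python
import numpy as np

rng = np.random.default_rng(1)

def participates(income, benefit, stigma, u=np.log):
    """Join the welfare rolls iff the utility gain from the benefit
    outweighs the fixed disutility (stigma) of participating per se."""
    return u(income + benefit) - stigma > u(income)

incomes = rng.uniform(1_000, 10_000, 10_000)   # hypothetical eligible population
for benefit in (500, 2_000):
    for stigma in (0.05, 0.25):
        rate = participates(incomes, benefit, stigma).mean()
        print(f"benefit={benefit:>5}, stigma={stigma:.2f} -> take-up {rate:.0%}")
```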

1,195 citations


Posted Content•
TL;DR: Optimization theory has been persistently attacked as an acceptable explanation of behavior; the critique has taken various forms, including information processing limitations in computing optima from known preference or utility information, unreliable probability information about complex environmental contingencies, and the absence of a well-defined set of alternatives or consequences.
Abstract: Despite vigorous counterargument by its proponents, optimization theory has been persistently attacked as an acceptable explanation of behavior. In one form or another, these attacks repeat the oldest critique of economics; namely, the ability of agents to maximize successfully. Over the years, this critique has taken various forms which include information processing limitations in computing optima from known preference or utility information, unreliable probability information about complex environmental contingencies, and the absence of a well-defined set of alternatives or consequences, especially in an evolving world that may produce situations that never before existed. These complaints are not new to economics. Indeed, they have been present during the very intellectual sifting process that produced neoclassical optimization and general equilibrium theory. Thus, if we are to further elaborate this critique of conventional theory, the basic issue is whether there is anything new that is worthy of attention by someone well versed in standard tools and concepts. Are we simply advancing more refined or cleverly argued versions of older critiques, or extensions of them to areas not previously emphasized? Such arguments would still represent an attack on the basic rationality postulate of economics (that agents are able to maximize), but without providing a clear alternative to traditional optimization theory. However plausible these arguments might be, ultimately they must be set aside by someone desiring a theoretical understanding of behavior, unless they lead to another modeling structure whose analytical ability can be explored and compared with existing optimization theory. Another argument focuses on the desire to understand the "real" dynamic processes that actually generate observed behavior. In contrast, optimization is thought of as a surrogate theory based on false assumptions about agents' capacity to maximize. Thus, it can be defended only in terms of empirical testability, without really illuminating the underlying processes determining behavior. Nevertheless, even if this view was fully accepted, it is unlikely by itself to cause a major shift away from conventional thinking. The reason is that evolutionary processes have long ago been interpreted as one of the key mechanisms tending to produce optimizing behavior; or conversely, optimizing models will predict the behavior patterns that will survive in an evolutionary process tending to select relatively superior performance.¹ The latter interpretation is in fact one of the dominant justifications for standard models against the criticism of unrealistic assumptions (i.e., the surviving agents of a selection process will behave "as if" they are able to maximize).²
*Department of Economics, Brigham Young University, Provo, UT 84602. I am indebted to Axel Leijonhufvud for constant encouragement about applications to economics, and for numerous stylistic suggestions. Harold Miller helped familiarize me with a broad range of issues across the sociobiological, psychological, and behavioral science literatures. James Buchanan provided stimulating discussion about conceptual issues. I have also benefited from the advice and criticism of Armen Alchian, Ron Batchelder, Bruce Brown, Robert Clower, Daniel Friedman, Jack Hirshleifer, Kai Jeanski, Randy Johnson, Edward Leamer, Stephen Littlechild, John McCall, James McDonald, Richard Nelson, Gerald O'Driscoll, Dennis Packard, Clayne Pope, Lionello Punzo, Ezio Tarantelli, and Sidney Winter. Needless to say, these colleagues are not responsible for inadequacy in the conceptual framework or scope of ideas presented.
¹See in particular Armen Alchian's well-known 1950 paper, and also Sidney Winter, 1964, 1971; Jack Hirshleifer, 1977; Richard Nelson and Winter, 1974.
²A still used reference on the "as if" point of view is Milton Friedman's 1953 paper. Some recent journal illustrations are Benjamin Klein and Keith Leffler, 1981, p. 634; Richard Posner, 1980, p. 5; Hirshleifer, 1977, p. 50; Nelson, 1981, p. 1059. The ultimate extension of this view is to claim not that agents are able to maximize (select most preferred actions), but rather that any ob-

1,193 citations


Posted Content•
TL;DR: In this article, the authors examined the effect of output price uncertainty on the investment decision of a risk-neutral competitive firm which faces convex costs of adjustment and showed that Hartman's results continue to hold using Pindyck's stochastic specification.
Abstract: This paper examines the effect of output price uncertainty on the investment decision of a risk-neutral competitive firm which faces convex costs of adjustment.¹ This issue has been analyzed by Richard Hartman (1972) and by Robert Pindyck (1982), but they reached dramatically different results. Hartman showed that with a linearly homogeneous production function, increased output price uncertainty leads the competitive firm to increase its investment. However, Pindyck found increased output price uncertainty leads to increased investment only if the marginal adjustment cost function is convex; but, if the marginal adjustment cost function is concave, then increased uncertainty will reduce the rate of investment. Pindyck argues that his results differ from Hartman's results because of a different stochastic specification of the price of output. In Hartman's discrete-time model, price is random in each period including the current period, whereas in Pindyck's continuous-time model, the current price is known but the future evolution of prices is stochastic. In this paper, I demonstrate that Hartman's results continue to hold using Pindyck's stochastic specification and that Pindyck's analysis applies to a so-called "target" rate of investment, which in general is not optimal. The model developed herein, which is a special case of Pindyck's model, is used because it can be solved explicitly, unlike Pindyck's more general model. Since Pindyck did not derive an expression for the optimal rate of investment, he used a phase diagram to determine the target capital stock. This target capital stock is determined by the intersection of a locus for which the rate of change of the capital stock is zero, and a locus for which the expected change in the rate of investment is zero. A problem with this stochastic phase diagram approach is that in general there is no reason for the firm to be on the locus with zero expected change in investment, even in the long run. Indeed, in the particular model in this paper, optimal behavior is such that the expected proportional rate of change of investment is (in general, a nonzero) constant over time.
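The direction of Hartman's effect can be illustrated with a toy calculation (the technology, wage, and spread values below are our own assumptions, not the paper's model): with constant returns and a flexibly hired labor input, maximized operating profit is linear in capital with a slope h(p) that is convex in the output price, so by Jensen's inequality a mean-preserving spread in price raises the expected marginal profitability of capital.

```python
import numpy as np

rng = np.random.default_rng(2)

# With constant-returns technology F(K, L) = K**a * L**(1-a) and labor
# hired flexibly at wage w, maximized operating profit equals h(p) * K,
# where h(p) is proportional to p**(1/a) -- convex in the output price p.
a, w = 0.5, 1.0

def h(p):
    """Marginal revenue product of capital after optimizing out labor."""
    return a * ((1 - a) / w) ** ((1 - a) / a) * p ** (1 / a)

# Mean-preserving lognormal spreads around p = 1: E[p] = 1 for every s.
for s in (0.0, 0.2, 0.4):
    p = np.exp(s * rng.standard_normal(1_000_000) - s ** 2 / 2)
    print(f"spread s={s:.1f}: E[p] = {p.mean():.3f}, E[h(p)] = {h(p).mean():.3f}")
```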

1,149 citations




Posted Content•
TL;DR: In this paper, the authors present a model which incorporates uncertainty and reaches the contrary conclusion; that is, in a Nash equilibrium the incumbent firm invests less in the innovation than a challenger.
Abstract: In a recent paper published in this Review, Gilbert and Newbery (1982) show that, because an incumbent firm enjoys greater marginal incentives to engage in R&D (under their assumption of deterministic invention), the incumbent firm will engage in preemptive patenting. Thus the industry will tend to remain monopolized, and by the same firm. They then argue heuristically that this result extends to the case in which innovation is uncertain. One form of this conjecture is that the incumbent patents the innovation more often than not. We briefly review the Gilbert and Newbery argument as well as those in related papers (Gilbert, 1981 and Craswell, 1981). We then present a model which incorporates uncertainty and concludes the contrary; that is, in a Nash equilibrium the incumbent firm invests less in the innovation than a challenger. Consequently, the incumbent firm will patent the innovation less often than not. This result indicates that one need worry far less about persistent monopoly than would be suggested by the Gilbert and Newbery analysis.
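The flavor of the result can be reproduced in a toy memoryless patent race (our own illustrative parameterization, not the authors' model): the incumbent keeps earning its current monopoly flow while no one innovates, and winning merely replaces that flow, so its marginal incentive to hasten discovery is weaker than the challenger's.

```python
import numpy as np

# Memoryless patent race: each firm chooses a Poisson discovery rate at
# flow cost rate**2.  The winner earns monopoly flow PI_WIN forever; the
# incumbent also earns PI_OLD while the race continues, and earns nothing
# if it loses (a drastic innovation).  All numbers are illustrative.
R, PI_OLD, PI_WIN = 0.5, 1.0, 2.0
grid = np.linspace(0.0, 3.0, 3001)

def v_incumbent(x, y):
    return (PI_OLD - x**2 + x * PI_WIN / R) / (R + x + y)

def v_challenger(y, x):
    return (-(y**2) + y * PI_WIN / R) / (R + x + y)

# Find a Nash equilibrium by iterating best responses on the grid.
x = y = 0.5
for _ in range(200):
    x = grid[np.argmax(v_incumbent(grid, y))]
    y = grid[np.argmax(v_challenger(grid, x))]

print(f"incumbent R&D intensity:  {x:.3f}")
print(f"challenger R&D intensity: {y:.3f}")   # exceeds the incumbent's
print(f"P(challenger wins) = {y / (x + y):.2f}")
```

The gap between the two intensities is the replacement effect: the incumbent's prize is only the increment over a flow it already enjoys.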

664 citations




Posted Content•
TL;DR: In this paper, the authors extend the standard Mincerian approach to incorporate school quality as well as quantity, and show that the expected private return to years of schooling using the preferred quality-inclusive specification is only one-half the estimate using the standard procedure, indicating substantial upward bias in the standard estimates.
Abstract: Although much research discusses the quantity of schooling as an important determinant of productivity and earnings, most such research leaves out the quality of schooling, an omission which may bias the estimated returns to schooling. This paper, consequently, attempts to extend the standard Mincerian approach to incorporate school quality as well as quantity. It demonstrates how exclusion of quality in the standard procedure may cause biases in the estimated returns to years of schooling, probably in the upward direction. The paper explores the implications of this extension of the standard model for the case of young Brazilian males. It shows that the estimate of the private return to years of schooling using the preferred quality-inclusive specification is only one-half the estimate using the standard procedure, indicating substantial upward bias in the standard estimates. It further outlines a method for estimating a social return to quality and finds that it exceeds substantially the social return to quantity. The paper then shows why this in turn suggests there may be an equity-productivity trade-off in schooling investments.
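A hedged sketch of the omitted-variable logic on simulated data (all coefficients and the correlation structure are our own choices, tuned only so the bias is visible): when school quality is positively correlated with years of schooling and is left out of the Mincer log-wage regression, the estimated return to a year of schooling absorbs the quality effect.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated workers: school quality rises with years of schooling, so
# omitting it biases the estimated Mincer return upward.
n = 5_000
years = rng.uniform(0, 12, n)
quality = years + rng.normal(0, 2, n)            # correlated with years
log_wage = 0.5 + 0.05 * years + 0.05 * quality + rng.normal(0, 0.3, n)

def ols(y, *regressors):
    X = np.column_stack([np.ones_like(y), *regressors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_standard = ols(log_wage, years)                # quality omitted
b_quality = ols(log_wage, years, quality)        # quality included
print(f"return per year of schooling, standard model: {b_standard[1]:.3f}")
print(f"return per year of schooling, with quality:   {b_quality[1]:.3f}")
```

With these particular choices the standard estimate comes out at roughly twice the quality-inclusive one, mirroring the direction (though not the data source) of the Brazilian result.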

Posted Content•
TL;DR: In this article, the authors analyze the variance-bound methodology used by Shiller and conclude that this approach cannot be used to test the hypothesis of stock market rationality.
Abstract: Perhaps for as long as there has been a stock market, economists have debated whether or not stock prices rationally reflect the "intrinsic" or fundamental values of the underlying companies. At one extreme on this issue is the view expressed in well-known and colorful passages by Keynes that speculative markets are no more than casinos for transferring wealth between the lucky and unlucky. At the other is the Samuelson-Fama Efficient Market Hypothesis that stock prices fully reflect available information and are, therefore, the best estimates of intrinsic values. Robert Shiller has recently entered the debate with a series of empirical studies which claim to show that the volatility of the stock market is too large to be consistent with rationally determined stock prices. In this paper, we analyze the variance-bound methodology used by Shiller and conclude that this approach cannot be used to test the hypothesis of stock market rationality. Resolution of the debate over stock market rationality is essentially an empirical matter. Theory may suggest the correct null hypothesis--in this case, that stock market prices are rational--but it cannot tell us whether or not real-world speculative prices as seen on Wall Street or LaSalle Street are indeed rational. As Paul Samuelson wrote in his seminal paper on efficient markets: "You never get something for nothing. From a nonempirical base of axioms, you never get empirical results. Deductive analysis cannot determine whether the empirical properties of the stochastic model I posit come close to resembling the empirical determinants of today's real-world markets" (1965, p. 42). On this count, the majority of empirical studies report results that are consistent with stock market rationality.¹ There is, for example, considerable evidence that, on average, individual stock prices respond rationally to surprise announcements concerning firm fundamentals, such as dividend and earnings changes, and that prices do not respond to "noneconomic" events such as cosmetic changes in accounting techniques. Stock prices are, however, also known to be considerably more volatile than either dividends or accounting earnings. This fact, perhaps more than any other, has led many, both academic economists and practitioners, to the belief that prices must be moved by waves of "speculative" optimism and pessimism beyond what is reasonably justified by the fundamentals.²
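For reference, the variance-bound idea under scrutiny can be stated in a few lines of simulation (a toy case with i.i.d. dividends and a constant discount rate; the details are ours, not Shiller's or the authors'): the ex post rational price p* discounts realized dividends, and under the rationality null the observed price is its conditional expectation, so Var(p) should not exceed Var(p*).

```python
import numpy as np

rng = np.random.default_rng(4)

# Shiller-style bound: under rationality with a constant discount rate,
# p_t = E_t[p*_t], where p* discounts realized dividends, so the law of
# total variance gives Var(p) <= Var(p*).  Toy check on i.i.d. dividends.
r, T = 0.05, 10_000
d = 1.0 + 0.2 * rng.standard_normal(T)           # dividends, mean 1

# Ex post rational price: discounted sum of future realized dividends,
# computed backwards, with the terminal value tied down at mean(d)/r.
p_star = np.empty(T)
p_star[-1] = d.mean() / r
for t in range(T - 2, -1, -1):
    p_star[t] = (d[t + 1] + p_star[t + 1]) / (1 + r)

p = np.full(T, 1.0 / r)   # rational price with i.i.d. dividends: E[d]/r

print(f"Var(p)  = {p.var():.4f}")
print(f"Var(p*) = {p_star.var():.4f}   (the bound holds in this toy case)")
```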

Posted Content•
TL;DR: The effects of government debt and of government spending financed by current-period taxation both depend upon private sector perceptions, yet the standard approach to modeling private sector consumption-saving behavior makes rather asymmetric assumptions about those perceptions.
Abstract: A current-period tax reduction financed by issuing government debt shifts the timing of tax collection from the current period to the future. If the future taxes implied by government debt are not fully perceived and discounted by the private sector, there will be a "net wealth effect" that increases private sector consumption, thus reducing capital accumulation and growth. If, on the other hand, the implied future taxes are perceived and discounted by the private sector, the current-period tax reduction will be used to increase private saving to pay for the future taxes, and government debt will be absorbed without any real effects on the economy.¹ The effects of government spending financed by current-period taxation also depend upon private sector perceptions.² If the benefits of government spending are ignored, private sector consumption will decrease in accordance with the reduction in permanent disposable income. To the extent that government spending is on consumption-type goods that are perceived as substitutes for privately provided consumption goods, there will be a relatively greater reduction in private sector consumption. To the extent that government spending is on investment-type goods yielding future goods and services that are perceived as substitutes for future privately provided consumption goods, there will be a relatively smaller reduction in private sector consumption. The standard approach to modeling private sector consumption-saving behavior involves a rather asymmetric set of assumptions as to how the private sector perceives the various elements of government fiscal policy.³ Current-period taxes are assumed to be fully perceived, but current-period government spending is implicitly assumed to be completely ignored by the private sector. In considering permanent personal disposable income, the private sector is assumed to be forward-looking in its assessment of income and taxation. The stock of government debt is nevertheless included as part of the stock of private wealth, the implicit assumption
*Associate Professor of Economics, Graduate School of Business, University of Chicago, 1101 East 58th Street, Chicago, IL 60637. I thank Eugene Fama, Levis Kochin, Michael Mussa, Paul Evans, and an anonymous referee for helpful comments. I am particularly grateful to Daniel Benjamin for contributing many hours of discussion, and Laura Lahaye, who was a research assistant and valuable adviser on earlier drafts.
¹The theoretical debate on the "burden of the debt" has been long standing. Gerald O'Driscoll (1977) documents Ricardo's nineteenth-century position. Robert Barro (1974) reopened the debate by introducing the fundamental issue of intergenerational transfers (also discussed in Merton Miller and Charles Upton, 1974). The empirical side of the debate was initiated by Levis Kochin's (1974) attempt to test for the effects of deficits on consumption and by Martin Feldstein's (1974) attempt to test for the effects of Social Security "wealth" on consumption. Other empirical contributions include Jess Yawitz and Laurence Meyer (1976), myself (1978) and J. Earnest Tanner (1978, 1979) with respect to the effects of government deficits and debt, and Barro (1978), Michael Darby (1979), and Dean Leimer and Selig Lesnoy (1982) with respect to Social Security wealth. See also interesting recent papers by John Seater (1982), who generates detailed tests of the effects of deficits and debt on consumption, Charles Plosser (1982), who explores the effects of government spending and debt "shocks" on interest rates, and Feldstein (1982), who attempts tests similar to some in this paper (see fn. 29).
²Martin Bailey's (1962, 1971) development of the effects of government spending on private consumption and aggregate economic activity is the seminal contribution. Paul David and John Scadding (1974) extend Bailey's ideas and provide some supporting empirical evidence. More recently, Willem Buiter (1977) and myself (1978) developed models based on Bailey's earlier work. Barro (1981) has an interesting paper on related issues. See also David Aschauer (1982).
³The "standard approach" incorporates fiscal policy through the concept of personal disposable income and by including the stock of government debt as part of personal wealth. See, for example, the empirical specification of Albert Ando and Franco Modigliani (1963), which has been the basis of most empirical consumption studies since, and Feldstein (1974) for one of the more influential papers based on the Ando-Modigliani specification.
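The debt-neutrality benchmark at issue reduces to simple arithmetic in a two-period setting (all numbers below are hypothetical): if households fully discount the future tax implied by today's deficit, lifetime wealth is identical under tax and debt finance.

```python
# Two-period illustration: a debt-financed tax cut of 100 today implies
# a future tax of 100 * (1 + r); if households discount it, lifetime
# wealth -- and hence consumption -- is unchanged.
r = 0.05
income = (1_000.0, 1_000.0)

def lifetime_wealth(taxes):
    return (income[0] - taxes[0]) + (income[1] - taxes[1]) / (1 + r)

balanced = (200.0, 200.0)                       # taxes levied as spent
deficit = (100.0, 200.0 + 100.0 * (1 + r))      # cut now, repay with interest

print(f"wealth, balanced budget: {lifetime_wealth(balanced):.2f}")
print(f"wealth, deficit finance: {lifetime_wealth(deficit):.2f}")  # identical
```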

Posted Content•
TL;DR: In this paper, the author analyzes the choice among patents, prizes, and direct contracting for research services, paying explicit attention to their informational roles, and shows under what conditions each may be preferred by a welfare-maximizing administrator.
Abstract: Though public intervention in the market for research is virtually universal, economists have paid surprisingly little attention to the choice of the form of research incentive in a given market structure. Many studies concentrate on patents, but any assumption of their superiority over other incentives has been founded on intuition rather than on formal analysis. In this paper I analyze the choice between three of the most common alternative means of public intervention in the research market, namely, patents, prizes, and direct contracting for research services. I show why, and under what conditions, any one of the three may be preferred by a social welfare-maximizing administrator in a competitive economy, using a model that, for the first time, pays explicit attention to differences in the informational roles of each of these alternatives. In the extensive literature on the economics of patents (see Arnold Plant, 1934; Fritz Machlup, 1958; Charles Taylor and Z. A. Silberston, 1973; Morton Kamien and Nancy Schwartz, 1975; and F. M. Scherer, 1977, for valuable surveys), formal analysis weighs the benefits of patents as a solution to the market failure associated with the inappropriability of knowledge against the welfare cost due to the restriction on the use of the knowledge generated, and this tradeoff is optimized by patent life adjustment in William Nordhaus (1969). Scant analytical attention is paid to alternative incentive mechanisms. (An exception is Ben Yu, 1981, who considers the role of prior contracting for inventions.) But as many writers (for example, Dan Usher, 1964; Yoram Barzel, 1968; Joseph Stiglitz, 1969; Carole Kitti, 1973; Glenn Loury, 1979; Partha Dasgupta and Stiglitz, 1980a) have pointed out in various contexts, the incentive offered by an unlimited patent to competitive researchers may be excessive, due to the "common pool problem" discussed further in Section I below. If the patent administrator and researchers share the same information, as implicitly assumed in previous models, then the patent life limitation can be adjusted to provide the optimal patent incentive, given the common pool problem. But in all such models, patents would not be chosen in a fully optimized fiscal system. Researchers and the administrator are assumed to have identical information about the shadow price of potential inventions; a patent is just a means of turning this shadow price into a monetary reward. But monetary compensation can instead be offered directly to researchers by the state. Assuming that patent revenues incur a higher deadweight loss than an equivalent amount of public funds financed by less distortionary means (for example, a minimally efficient tax system), appropriate prizes or government contracts are socially preferable to patents with optimal lives. If the patent is ever to be the optimal incentive mechanism for research, it must possess advantages not captured in existing models. Informal discussions of patents emphasize their informational role. To include the latter as a justification for decentralized invention incentives, I incorporate an ex ante imbalance of information about costs and benefits of research in the model presented in Section II. But this alone is not quite enough. It is further necessary to specify that the terms of the award must be fixed before
*Department of Economics, Economic Growth Center, Box 1987 Yale Station, Yale University, New Haven, CT 06520. I thank, with the usual caveat, Marguerite Alejandro-Wright, Cindy Arfken, Martin Baily, Nuong Brennan, Steven Englander, Robert Evenson, Richard Levin, Richard Nelson, Susan Rose-Ackerman, Denis Wright, and two referees for assistance of various kinds.

Posted Content•
TL;DR: In this paper, a dynamic model is used to understand how sharp changes in energy prices affect investment behavior, employment, and energy use, and they conclude that the data strongly reject the hypothesis of constant returns to scale within their specification of aggregate production, and the data indicate adjustment costs on labor are small.
Abstract: A dynamic model is used to understand how sharp changes in energy prices affect investment behavior, employment, and energy use. The authors discuss the theory behind their model selection, model specifications, estimation methods and data; list parameter estimates and elasticities for the selected model; and describe the simulations. They conclude that (1) the data strongly reject the hypothesis of constant returns to scale within their specification of aggregate production; (2) the data indicate adjustment costs on labor are small; and (3) their results help reconcile some of the conflicting estimates of energy-demand elasticities appearing in recent literature. 23 references, 3 figures, 3 tables.

Posted Content•DOI•
TL;DR: In a first best world, with perfect information concerning the nature of the technology (but where it is still costly to monitor individuals' activities), the compensation scheme would vary from time to time as the environment changed.
Abstract: One of the dominant characteristics of modern capitalist economies is the important role played by competition: not the peculiar static form of pure price competition embodied in the Arrow-Debreu model, but rather a dynamic competition, more akin to the kind of competition represented by sports contests and other races (including patent races). In recent years, there have been several attempts to explain why firms often base the pay of their workers and managers on relative performance. (See, for example, Edward Lazear and Sherwin Rosen, 1981.) Such compensation schemes become desirable when three conditions are satisfied: (a) The input (effort) of workers (managers) must not be directly observable, at least without cost. Thus firms must either expend resources to monitor inputs or devise reward structures in which compensation is a function of variables (such as output or profits) which are themselves functions of inputs but are less costly to observe. (b) The relationship between input and output must be stochastic, so that by observing output, one cannot perfectly infer what the input was. (c) Finally, the stochastic disturbances which affect the relationship between input and output of different firms must be correlated. By looking at the performance of one worker relative to that of others, one can make better inferences about his effort than one can make without using this information. Not only can competition provide a basis of comparison, which enables the design of reward structures that can simultaneously provide a high level of incentives with a relatively low level of risk; but compensation schemes based on relative performance have the further advantage of automatically adjusting incentives to changes in the economic environment. (We refer to this as "built-in flexibility.") In a first best world, with perfect information concerning the nature of the technology (but where it is still costly to monitor individuals' activities), the compensation scheme would vary from time to time as the environment changed. Such changes in the compensation scheme are costly to implement and the information required to do so is seldom available. When a task is easier, the individual's rewards for performing the task should be reduced. If pay is based on relative performance, although all individuals perform better (when they exert the same level of effort), their compensation is automatically adjusted. Thus, teachers frequently grade on the curve and a significant fraction of the pay of successful salesmen often consists of bonuses based on relative performance.
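Condition (c) is the heart of the argument, and a few lines of simulation make it visible (the variances below are our own illustrative choices): when a common shock hits all agents alike, the difference between two agents' outputs filters it out, so relative performance is a far sharper signal of relative effort than absolute output.

```python
import numpy as np

rng = np.random.default_rng(5)

# Output = effort + common shock + idiosyncratic noise.  Comparing two
# workers differences out the common shock, sharpening the inference
# about relative effort.
n = 100_000
effort_a, effort_b = 1.0, 0.0
common = rng.normal(0, 2.0, n)                  # hits both workers alike
out_a = effort_a + common + rng.normal(0, 0.5, n)
out_b = effort_b + common + rng.normal(0, 0.5, n)

print(f"sd of A's own output (absolute evaluation): {out_a.std():.2f}")
print(f"sd of A - B (relative evaluation):          {(out_a - out_b).std():.2f}")
```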


Posted Content•
TL;DR: Partitioning the sample around 1970, the authors show a structural shift: the explanatory power of the world money supply (M1W) increases sharply after 1970 while that of the U.S. money supply (M1US) declines moderately; before 1970, M1W does surprisingly poorly, being insignificantly correlated with American or world prices.
Abstract: Myron Ross correctly points out in his comment that U.S. price inflation is better predicted by the American money supply (M1US) than by the broader ten-country world money supply (M1W) over the whole statistical time-series from 1960 to 1980 provided in McKinnon's 1982 article. Specifically, he showed that American price inflation is more highly correlated with M1US lagged one or two years than with M1W similarly lagged. However, McKinnon's present-tense assertion that "In general, growth in the world money supply is a better predictor of American price inflation than is American money growth" applies only to the weak dollar standard of the 1970's and early 1980's--as his preceding discussion intended, but failed to indicate clearly. Only in this later period of volatile exchange rates and price-level instability in the United States do alternative hard currencies (such as the yen and deutsche mark) become competitive as international stores of value and units of account. Hence, international currency substitution--associated with the ebb and flow of speculation against or for the dollar--significantly destabilized the demand for M1US in the 1970's and 1980's. And only in this later period might one expect the sum of these internationally substitutable monies, M1W, to predict world, and perhaps even American, price inflation better than does M1US. In contrast, during the strong dollar standard of the 1950's and 1960's, the dollar was unchallenged as international money. Exchange rates were (by and large) convincingly fixed: speculation for or against the dollar was incapable of substantially altering U.S. interest rates or of directly affecting the demand for M1US. Because cyclical international influences did not then destabilize the demand for dollars, M1US was by itself a fairly efficient predictor of American prices. Thus Ross's statistical results for the period 1960 to 1980 show somewhat greater predictive strength in M1US, compared to M1W, because the 1960's data outweigh the dissimilar data of the 1970's. Let us instead partition the sample of IMF data around the year 1970, whence began the transition from fixed to fluctuating exchange rates and the maturation of alternative monetary systems in Europe and Japan. After applying similar statistical correlation procedures to those used by Ross, we show a striking structural shift in the world's monetary system: the explanatory power of M1W increases sharply as that of M1US declines moderately (Tan, 1982). For the early period of 1960 to 1970, Table 1 shows simple correlation coefficients between annual percentage changes in money supplies and changes in American and world wholesale price indices. The M1US provides a good explanation of U.S. prices and of world prices one to two years hence, whereas the broader definition of M1W (in which M1US enters with a 50 percent weight) does surprisingly poorly, being insignificantly correlated with American or world prices. Apparently, money growth in industrial countries other than the United States was not then an independent source of worldwide inflationary pressure during the strong dollar standard. Table 2 shows the same simple correlations between percentage changes in prices and money for the weak dollar standard of
* Stanford University. Please note that a typographical error giving the last word in the original title as "Market" appeared on the cover of the June 1982 issue of this Review. The title was correct in the table of contents and on the article.
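The statistical exercise being debated is a comparison of lagged correlations across subsamples. A sketch of the mechanics on fabricated series (the series below are stand-ins; the published tables are computed from IMF data):

```python
import numpy as np

rng = np.random.default_rng(6)

def lagged_corr(money_growth, inflation, lag):
    """Correlation of money growth with inflation `lag` years later."""
    m, p = np.asarray(money_growth), np.asarray(inflation)
    return np.corrcoef(m[:-lag], p[lag:])[0, 1]

# Fabricated annual series for 1960-80 standing in for M1US growth and
# U.S. wholesale-price inflation, with money leading prices by ~2 years.
years = np.arange(1960, 1981)
m1us = rng.normal(5, 2, years.size)
wpi = np.empty(years.size)
wpi[:2] = rng.normal(4, 1, 2)
wpi[2:] = 0.8 * m1us[:-2] + rng.normal(0, 1, years.size - 2)

for label, mask in (("1960-70", years <= 1970), ("1971-80", years > 1970)):
    r = lagged_corr(m1us[mask], wpi[mask], lag=2)
    print(f"{label}: corr(M1US growth_t, inflation_(t+2)) = {r:+.2f}")
```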



Posted Content•DOI•
TL;DR: In this paper, the authors extend the welfare analysis of the transfer problem and show that a transfer from abroad can be immiserizing (and that the donor may improve its welfare) despite market stability.
Abstract: Paul Samuelson's (1952, 1954) classic papers on the transfer problem addressed two separate analytical issues: the "positive" effect of a transfer on the terms of trade; and the welfare effect of the transfer on the donor and the recipient. Since then, a considerable body of literature has grown up on the positive analysis. While Samuelson (1954) himself had extended the 2 x 2 x 2 free trade analysis to allow for tariffs and transport costs, subsequent writers have analyzed other extensions of the model: for example, to allow for nontraded goods as with leisure in Samuelson (1971); or general nontraded goods in John Chipman (1974) and Ronald Jones (1970, 1975). Remarkably, however, the welfare analysis of transfers has not paralleled these developments. Since Wassily Leontief (1936) produced an example of immiserizing transfer from abroad and Samuelson (1947) argued that the example required market instability, the proposition that has monopolized attention has been that a transfer in the conventional 2 x 2 x 2 model in its free trade version cannot immiserize the recipient or enrich the donor as long as world markets are stable (in the Walras sense). Interestingly, Samuelson (1954), who did extend the positive analysis to include tariffs, did not go on to ask whether immiserization of the transfer recipient (and hence symmetrically enrichment of the donor in a two-country model) could now arise consistent with market stability. Recently, the welfare analysis of transfers has been extended in two different directions, both apparently unconnected, and both yielding the conclusion that transfers from abroad can be immiserizing (and that the donor may improve its welfare) despite market stability. One route to this conclusion has been the introduction of a third economic agent (or country) that is outside of the transfer process. In the Appendix of his 1960 paper analyzing the interaction between trade policy and income distribution, Harry Johnson discussed the possibility of welfare-paradoxical redistribution between two factor-income classes (capital and labor) in an open economy, thereby providing what can be interpreted as a treatment of the three-agent transfer problem for the case in which donor and recipient are both completely specialized in the ownership of a single different factor.¹ An independent analysis of the three-agent transfer problem, using a restrictive model with given endowments of goods and fixed coefficients in consumption, was also undertaken in an important paper by David Gale (1974).² Brecher and Bhagwati
*Bhagwati: Department of Economics, Columbia University, New York, NY 10027; Brecher: Department of Economics, Carleton University, Ottawa, ON K1S 5B6; Hatta: Department of Political Economy, The Johns Hopkins University, Baltimore, MD 21218. We thank the National Science Foundation, grant no. 524718, for partial financial support of the research underlying this paper. The paper was written when Brecher and Hatta were visiting Columbia University, 1981-82. Gratefully acknowledged are helpful comments and suggestions from John Chipman, Avinash Dixit, Jacques Dreze, Robert Feenstra, Jacob Frenkel, Ronald Jones, Murray Kemp, Andreu Mas-Colell, Michael Mussa, John Riley, Lars Svensson, and Robert Willig, from anonymous referees, and from seminar participants at Berkeley, Harvard, Minnesota, Rochester, Chicago and the University of California-Los Angeles.
¹After the present paper was submitted for publication, and following its presentation at Rochester, our attention was drawn to this Appendix, which was noticed by a student of Ronald Jones. Subsequently, we learned from Makoto Yano that Motoshige Itoh had pointed out an important related paper by Ryuotaro Komiya and T. Shizuki (1967), whose condition (11) for the Johnson case anticipated our equation (12) below. We are grateful for having both of these references brought to our attention.
²Gale constructs an example in which the donor is enriched along with the recipient. Furthermore, this immediately implies that a reverse transfer will immis-

Posted Content•
TL;DR: In this paper, the change in wage dispersion in response to increases in the relative supply of educated workers in two low-income countries was studied, and the relative contributions to that change of its two components were measured.
Abstract: The objective in this note is to show the change in wage dispersion in response to increases in the relative supply of educated workers in two low-income countries. We also measure the relative contributions to that change of its two components: the effect of the educational expansion on the educational composition of the labor force (holding the educational structure of wages constant), and the resultant compression of that structure (holding composition constant). The paper's analysis is based on three precisely comparable surveys of wage employees conducted in Tanzania in 1971 and 1980, and in Kenya in 1980.
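The two-way decomposition can be written down directly. The sketch below uses hypothetical shares and wages for two education groups (not the survey figures) and splits the change in dispersion into a composition effect, holding the wage structure constant, and a compression effect, holding composition constant:

```python
import numpy as np

# Decompose the change in wage dispersion between two survey years.
# share = fraction of educated workers; wages = (unskilled, educated).
share_71, wages_71 = 0.20, np.array([100.0, 300.0])
share_80, wages_80 = 0.45, np.array([110.0, 220.0])

def dispersion(share, wages):
    """Standard deviation of wages across the two education groups."""
    mean = (1 - share) * wages[0] + share * wages[1]
    return np.sqrt((1 - share) * (wages[0] - mean) ** 2
                   + share * (wages[1] - mean) ** 2)

total = dispersion(share_80, wages_80) - dispersion(share_71, wages_71)
composition = dispersion(share_80, wages_71) - dispersion(share_71, wages_71)
compression = dispersion(share_80, wages_80) - dispersion(share_80, wages_71)
print(f"total change {total:+.1f} = composition {composition:+.1f} "
      f"+ compression {compression:+.1f}")
```

The two terms sum to the total by construction; with these hypothetical numbers, the compression of the wage structure outweighs the composition effect of the expansion.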

Posted Content•
TL;DR: In this article, the authors present a simple model of exchange in capital markets where divergence of opinion not only exists, but is essential, because of its association with endogenous limitations on the number of active market participants.
Abstract: The importance of divergence of opinion in the functioning of capital markets was recognized by early economic writers. In the prevailing models of capital markets, however, differences of opinion either do not exist or do not matter. Thus, although heterogeneity of opinion is allowed in the models developed by Kenneth Arrow, Gerard Debreu, and Peter Diamond, nothing essential would change if all individuals were to hold identical, homogeneous-equivalent average expectations. In the capital asset pricing model (CAPM) of William Sharpe (1964) and John Lintner (1965), homogeneous expectations are assumed at the outset. When they considered the implications of heterogeneity of expectations in the model, both Lintner (1969) and Sharpe (1970) reached similar conclusions; as stated by Sharpe, "in a somewhat superficial sense the equilibrium relationships derived for a world of complete agreement can be said to apply to a world in which there is disagreement, if certain values are considered to be averages" (p. 291). Sharpe's conclusion was that "a model based on disagreement has little value in a positive role" (p. 113). The aim of this paper is to present a simple model of exchange in capital markets where divergence of opinion not only exists, but is essential. It is essential because of its association with endogenous limitations on the number of active market participants. It will be argued that in the models cited above, the significance of divergence of opinion was dismissed because of the failure to recognize the implications of the obvious fact that investors choose not only the size of their holdings in each asset, but also in which assets to invest. Correspondingly, they failed to recognize that in (imperfect) capital markets, equilibrium requires the simultaneous determination of asset prices and the identity of investors trading in each asset. As both Lintner (1969) and Sharpe (1970) recognized, the case of divergent opinions may differ from the case in which there is no such divergence, if only because it implies that investors may seek to sell short assets that they believe to be overrated. Lintner, who pursued the implications of the case in which short sales are not allowed, argued that, when not all investors trade in every asset, the price of an asset will reflect an average of the assessments of only those investors who actually hold the asset. Lintner, however, did not realize that, given that the set of active investors is endogenously determined, this is an incomplete characterization of how asset prices are determined. It leaves an integral question unanswered; namely, what distinguishes the active from the nonactive investor? Lintner was thus led to dismiss an alternative characterization of equilibrium asset prices--the marginal-investor theory--proposed by John Maynard Keynes and John Burr Williams thirty years earlier.¹ Keynes characterized the determination of asset prices in a manner that attests to the importance that he attached to divergence of opinion among investors: "The prices of capital assets move until... they offer an


Posted Content•
TL;DR: In the last five years or so, a number of theorists have begun to apply methods drawn from the theory of industrial organization to international trade, to produce a new genre of trade models as discussed by the authors.
Abstract: Most students of international trade have long had at least a sneaking suspicion that conventional models of comparative advantage do not give an adequate account of world trade. This is especially true of trade in manufactured goods. Both at the macro level of aggregate trade flows and at the micro level of market structure and technology, it is hard to reconcile what we see in manufactures trade with the assumptions of standard trade theory. In particular, much of the world's trade in manufactures is trade between industrial countries with similar relative factor endowments; furthermore, much of the trade between these countries involves two-way exchanges of goods produced with similar factor proportions. Where is the source of comparative advantage? Furthermore, most manufacturing industries are characterized by at least some degree of increasing returns (especially if we include dynamic scale economies associated with R&D). There have long been alternative explanations: the arguments of many observers that much trade among industrial countries is based on scale economies rather than comparative advantage; and the common argument that a protected home market can promote exports. Until recently, however, none of these alternatives was presented in a form which economists would properly call a model: that is, a formal structure in which macro behavior is derived from micro motives. This lack of formalization essentially barred alternatives to comparative advantage, however plausible, from the mainstream of international economics. In the last five years or so, however, there has been a significant change. A number of theorists have begun to apply methods drawn from the theory of industrial organization to international trade, to produce a new genre of trade models. These models offer a new way of looking at trade--and particularly at manufactures trade among the industrial countries. A characteristic feature of the new models is that they often rely on very special assumptions. This is probably inevitable: given the inherent complexity of the world once the great simplifying device of constant returns is dropped, only special assumptions will yield tractable analysis. In spite of the specialness of individual models, however, the new literature on trade is starting to give rise to concepts which look more general than the particular models used to illustrate them. The purpose of this paper is to sketch out two such concepts which I believe are important and more general in application than the particular models in which they have been expressed. The first is the theory of "intraindustry" trade, a view which incorporates scale economies as well as comparative advantage as major causes of trade and gains from trade. The second is the (less well developed) theory of technological competition, which may begin to shed some light on the dynamics of international competition in research-intensive industries.

Report•DOI•
TL;DR: This paper takes issue with some currently fashionable views of why monetary policy has real effects, and resurrects an old theory--the loanable funds theory--giving it new, improved microfoundations.
Abstract: When government expenditures exceed current tax revenues, the resulting deficit must be financed either by issuing bonds, which imply obligations to levy future taxes, or by creating high-powered money. The choice between money and bonds is often thought to be of great moment for both real and nominal variables; that is, monetary policy matters. There is by now a wide empirical consensus that monetary policy has effects on real variables like output and employment. But there is far less agreement about why this is so. The purpose of this paper is to take issue with some currently fashionable views of why money has real effects, and to suggest a new theory, or rather resurrect an old one--the loanable funds theory--and give it new, improved microfoundations.

Posted Content•
TL;DR: The authors review criticisms of concentration-profits studies; in particular, Williamson, McGee, Demsetz, and Peltzman point out that higher profits logically follow as much from lower costs that are associated with higher levels of concentration as from collusion-determined higher prices.
Abstract: The simple proposition that consumer-damaging collusion is more likely to occur when there are fewer competitors has given rise not only to legal restrictions of economic activity thought to restrict output, but also to an enormous amount of empirical work attempting to relate market concentration to the exercise of monopoly power.¹ Most such studies measured market structure by an index of concentration (for example, four-firm concentration or Herfindahl) and performance by accounting profit (for example, net profit divided by assets) or price-cost margins (sales less direct costs divided by sales). From the positive, statistically significant correlations often found between greater concentration and profit (with other factors presumably accounted for), some researchers conclude that increases in concentration are anticompetitive and that concentration, therefore, is bad. The concentration-profits studies have been criticized essentially on two grounds. One is that the positive relationship is not indicative of collusive behavior. In particular, Oliver Williamson (1968), John McGee (1971), Harold Demsetz (1973, 1974), and Sam Peltzman (1977) point out that higher profits could result from efficiencies experienced by large firms, which resulted both in greater market shares and high levels of concentration. Analytical arguments show that higher profits logically follow as much from lower costs that are associated with higher levels of concentration, as from collusion-determined higher prices. Some empirical evidence that supports this belief is presented by Demsetz, Peltzman, and Bradley Gale and Ben Branch (1982). However, their findings have been contested on the grounds that the data used are biased or inadequate, or that the researchers have not demonstrated that the observed higher-profits/greater-concentration relationship was caused by lower costs.² The second criticism is that available data provide an inadequate basis for such conclusions. The concentration numbers are based, usually, on industries defined by the Commerce Department's Standard Industrial Classifications (SIC). The SIC definitions tend to be supply (production) rather than demand determined, include nonhomogeneous products, and exclude sales of similar products that are included in different SIC groups or are imported. The profit data are taken from accounting reports that provide poor measures of economic values. Weiss (1974) describes many of these problems. However, he does not believe that these shortcomings invalidate the studies. As Weiss (1979) concludes:
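For concreteness, the two structure measures these studies rely on are easy to compute from market shares (the industry below is hypothetical):

```python
import numpy as np

def four_firm_ratio(shares):
    """Share of industry sales held by the four largest firms."""
    s = np.sort(np.asarray(shares))[::-1]
    return float(s[:4].sum())

def herfindahl(shares):
    """Sum of squared market shares (on a 0-to-1 scale)."""
    s = np.asarray(shares)
    return float((s ** 2).sum())

# A hypothetical industry's market shares:
shares = [0.30, 0.20, 0.15, 0.10, 0.10, 0.08, 0.07]
print(f"four-firm concentration: {four_firm_ratio(shares):.2f}")
print(f"Herfindahl index:        {herfindahl(shares):.3f}")
```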

Posted Content•
TL;DR: In this note, the author shows that the adverse selection effects of increased collateral requirements identified by Stiglitz and Weiss for risk-averse borrowers can also arise when borrowers are risk neutral.
Abstract: In their 1981 article, Joseph Stiglitz and Andrew Weiss analyze adverse selection and incentive effects in the loan market. The models considered are based on two crucial assumptions: borrowers are subject to limited liability; and lenders cannot distinguish borrowers (projects) of different risk. Stiglitz and Weiss show that a bank that raises its interest rate may suffer adverse selection because only risky borrowers will be willing to borrow at the higher rate. Thus lenders may choose not to raise the interest rate to eliminate excess demand, resulting in the possibility of a "credit rationing equilibrium." Stiglitz and Weiss also consider briefly the role of collateral in such credit rationing models. They conclude that lenders may choose not to use collateral requirements as a rationing device. An increase in collateral requirements, like an increase in the interest rate, potentially leads to a decrease in the lender's expected return on loans because of resulting adverse incentive and selection effects. The purpose of this note is to further investigate the role of collateral in these models. Stiglitz and Weiss' discussion in Section III establishes that adverse selection effects can result from increases in collateral when borrowers are risk averse. I will show by returning to a model they discussed earlier in Section I, that the adverse selection effects can also occur when borrowers are risk neutral. Stiglitz and Weiss outline a model to consider the use of collateral as a rationing device (Section III). In that model, all potential borrowers face the same array of risky projects; each potential borrower chooses (at most) one of those projects to undertake. The individuals are, by assumption, risk averse with decreasing absolute risk aversion, and possess different amounts of initial wealth. Thus, choice of project (if any) to undertake and the method of finance--self-finance from initial wealth or loan finance--will differ from one individual to the next. Stiglitz and Weiss show that, among those who undertake risky projects and who choose borrowing as the method of finance, wealthier individuals undertake riskier projects. An increase in collateral has two effects on the market for loans: those individuals who remain in the market will choose to undertake less-risky projects; and those individuals who drop out of the market are less-wealthy, low-risk borrowers. If the second effect is sufficiently strong, then increased collateral requirements will mean decreased expected returns for the lender. Thus, a credit rationing equilibrium may occur, since lenders may not choose to use collateral requirements (or the interest rate) to eliminate excess demand. The adverse selection effect just described does not occur in this model if the individuals are risk neutral. Consequently, the potential for a credit rationing equilibrium is limited to cases where borrowers are risk averse. To see that increases in collateral requirements can also result in adverse selection if borrowers are risk neutral, consider the model Stiglitz and Weiss used to analyze the adverse selection effects of increases in the interest rate (Section I, pp. 395-99). It differs from the Section III model discussed above in three ways. First, borrowers are risk neutral; second, all projects are loan financed. The analogy to the Section III model is that no individuals have sufficient wealth to self-finance projects.¹ Third, in the Section
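The risk-neutral selection effect can be illustrated with a small numerical example (our own parameterization, not the note's): borrowers share the same mean project return but differ in success probability; because the safest borrowers gain least from limited liability, raising the collateral requirement drives them out first, leaving a riskier pool.

```python
import numpy as np

# Risk-neutral borrowers; every project has the same mean return m but a
# different success probability p (lower p = riskier).  A borrower keeps
# R - (1+r)*B on success (with p * R = m) and forfeits collateral C on
# failure.  All parameter values are illustrative.
m, B, r = 1.0, 1.0, 0.10
repay = (1 + r) * B
p_types = np.linspace(0.30, 0.95, 200)     # population of borrower types

def stays_in_market(p, C):
    """Apply for a loan iff expected payoff m - p*repay - (1-p)*C > 0."""
    return m - p * repay - (1 - p) * C > 0

for C in (0.0, 0.2, 0.4):
    active = stays_in_market(p_types, C)
    pool = p_types[active]
    ret = (pool * repay + (1 - pool) * C).mean()   # lender's expected return
    print(f"C={C:.1f}: {active.mean():5.0%} of types borrow, "
          f"mean success prob {pool.mean():.3f}, lender return {ret:.3f}")
```

With these particular numbers the direct value of the extra collateral outweighs the selection effect on the lender's return; the note's point is that the selection effect exists under risk neutrality and can dominate, so collateral, like the interest rate, may fail as a rationing device.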