
Showing papers in "The American Economic Review in 1979"



Posted Content•
TL;DR: Lecture to the memory of Alfred Nobel, December 8, 1978. (This abstract was borrowed from another version of this item.)
Abstract: Lecture to the memory of Alfred Nobel, December 8, 1978(This abstract was borrowed from another version of this item.)

2,108 citations


Book Chapter•DOI•
TL;DR: This article reported the results of a series of experiments designed to discredit the psychologists' works as applied to economics and suggested that no optimization principles of any sort lie behind even the simplest of human choices and that the uniformities in human choices may result from principles which are of a completely different sort from those that are commonly accepted.
Abstract: A body of data and theory has been developing within psychology which should be of interest to economists. Taken at face value the data are simply inconsistent with preference theory and have broad implications about research priorities within economics. The inconsistency is deeper than the mere lack of transitivity or even stochastic transitivity. It suggests that no optimization principles of any sort lie behind even the simplest of human choices and that the uniformities in human choice behavior which lie behind market behavior may result from principles which are of a completely different sort from those generally accepted. This paper reports the results of a series of experiments designed to discredit the psychologists' works as applied to economics.

1,140 citations


Posted Content•
TL;DR: In this article, the authors developed a model which is a version of the asset view of the exchange rate, in that it emphasizes the role of expectations and rapid adjustment in capital markets, and it combines the Keynesian assumption of sticky prices with the Chicago assumption that there are secular rates of inflation.
Abstract: Much of the recent work on floating exchange rates goes under the name of the "monetary" or "asset" view; the exchange rate is viewed as moving to equilibrate the international demand for stocks of assets, rather than the international demand for flows of goods as under the more traditional view. But within the asset view there are two very different approaches. These approaches have conflicting implications, in particular for the relationship between the exchange rate and the interest rate. The first approach might be called the "Chicago" theory because it assumes that prices are perfectly flexible.[1] As a consequence of the flexible-price assumption, changes in the nominal interest rate reflect changes in the expected inflation rate. When the domestic interest rate rises relative to the foreign interest rate, it is because the domestic currency is expected to lose value through inflation and depreciation. Demand for the domestic currency falls relative to the foreign currency, which causes it to depreciate instantly. This is a rise in the exchange rate, defined as the price of foreign currency. Thus we get a positive relationship between the exchange rate and the nominal interest differential. The second approach might be called the "Keynesian" theory because it assumes that prices are sticky, at least in the short run.[2] As a consequence of the sticky-price assumption, changes in the nominal interest rate reflect changes in the tightness of monetary policy. When the domestic interest rate rises relative to the foreign rate it is because there has been a contraction in the domestic money supply relative to domestic money demand without a matching fall in prices. The higher interest rate at home than abroad attracts a capital inflow, which causes the domestic currency to appreciate instantly. Thus we get a negative relationship between the exchange rate and the nominal interest differential. The Chicago theory is a realistic description when variation in the inflation differential is large, as in the German hyperinflation of the 1920's to which Frenkel first applied it. The Keynesian theory is a realistic description when variation in the inflation differential is small, as in the Canadian float against the United States in the 1950's to which Mundell first applied it. The problem is to develop a model that is a realistic description when variation in the inflation differential is moderate, as it has been among the major industrialized countries in the 1970's. This paper develops a model which is a version of the asset view of the exchange rate, in that it emphasizes the role of expectations and rapid adjustment in capital markets. The innovation is that it combines the Keynesian assumption of sticky prices with the Chicago assumption that there are secular rates of inflation. It then turns out that the exchange rate is negatively related to the nominal interest differential, but positively related to the expected long-run inflation differential. The exchange rate differs from, or "overshoots," its equilibrium value by an amount ...
*Assistant professor, University of California-Berkeley. An earlier version of this paper was presented at the December 1977 meetings of the Econometric Society in New York. I would like to thank Rudiger Dornbusch, Stanley Fischer, Jerry Hausman, Dale Henderson, Franco Modigliani, and George Borts for comments.
[1] See papers by Jacob Frenkel and by John Bilson.
[2] The most elegant asset-view statement of the Keynesian approach is by Rudiger Dornbusch (1976c), to which the present paper owes much. Roots lie in J. Marcus Fleming and Robert Mundell (1964, 1968). They argued that if capital were perfectly mobile, a nonzero interest differential would attract a potentially infinite capital inflow, with a large effect on the exchange rate. More recently, Victor Argy and Michael Porter, Jürg Niehans, Dornbusch (1976a,b,c), Michael Mussa (1976) and Pentti Kouri (1976a,b) have introduced the role of expectations into the Mundell-Fleming framework.
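
The key relationship the abstract describes can be sketched in one line. This is the standard "real interest differential" rendering of such a model; the notation below is assumed for illustration rather than quoted from the paper:

```latex
% e: log exchange rate (price of foreign currency); \bar{e}: equilibrium value
% i - i^*: nominal interest differential
% \pi - \pi^*: expected long-run inflation differential
% \theta: speed of adjustment of goods prices
e - \bar{e} = -\frac{1}{\theta}\left[(i - i^*) - (\pi - \pi^*)\right]
```

Read this way, the exchange rate is negatively related to the nominal interest differential, positively related to the expected inflation differential, and "overshoots" its equilibrium value whenever the real interest differential is nonzero.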

1,102 citations


Posted Content•
TL;DR: Using two hypothetical investors, Mr. Bernoulli and Mr. Cramer, the authors examine whether an astute selection from the E, V efficient set yields almost as great an expected utility as the maximum obtainable, countering the conclusion that mean-variance analysis should be rejected as the criterion for portfolio selection, no matter how economical it is as compared to alternate formal methods of analysis.
Abstract: Suppose that an investor seeks to maximize the expected value of some utility function U(R), where R is the rate of return this period on his portfolio. Frequently it is more convenient or economical for such an investor to determine the set of mean-variance efficient portfolios than it is to find the portfolio which maximizes EU(R). The central problem considered here is this: would an astute selection from the E, V efficient set yield a portfolio with almost as great an expected utility as the maximum obtainable EU? A number of authors have asserted that the right choice of E, V efficient portfolio will give precisely optimum EU if and only if all distributions are normal or U is quadratic. A frequently implied but unstated corollary is that a well-selected point from the E, V efficient set can be trusted to yield almost maximum expected utility if and only if the investor's utility function is approximately quadratic, or if his a priori beliefs are approximately normal. Since statisticians frequently reject the hypothesis that return distributions are normal, and John Pratt and Kenneth Arrow have each shown us absurd implications of a quadratic utility function, some writers have concluded that mean-variance analysis should be rejected as the criterion for portfolio selection, no matter how economical it is as compared to alternate formal methods of analysis. Consider, on the other hand, the following evidence to the contrary. Suppose that two investors, let us call them Mr. Bernoulli and Mr. Cramer, have the same probability beliefs about portfolio returns in the forthcoming period; while their utility functions are, respectively, ...
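
Because the abstract's central question is computational, a small simulation may make it concrete. Everything below (the return distributions, the portfolio grid, the crude frontier construction) is my own illustration, not the authors' procedure; the two utility functions follow the hypothetical investors named in the abstract, Mr. Bernoulli with U(R) = ln(1 + R) and Mr. Cramer with U(R) = (1 + R)^(1/2):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated returns on three assets (clipped to keep 1 + R positive).
assets = np.clip(rng.normal([0.05, 0.08, 0.12], [0.05, 0.12, 0.25],
                            size=(5_000, 3)), -0.95, None)

# Grid of long-only portfolios on the simplex.
weights = [np.array([a, b, 1 - a - b])
           for a in np.linspace(0, 1, 26) for b in np.linspace(0, 1, 26)
           if a + b <= 1]
ports = [assets @ w for w in weights]        # return draws per portfolio
means = np.array([r.mean() for r in ports])
varis = np.array([r.var() for r in ports])

# Crude E,V frontier: portfolios not dominated in (higher mean, lower variance).
frontier = [i for i in range(len(ports))
            if not np.any((means >= means[i]) & (varis < varis[i]))]

for name, U in [("Bernoulli (log)", np.log), ("Cramer (sqrt)", np.sqrt)]:
    eu = np.array([U(1.0 + r).mean() for r in ports])
    print(f"{name}: max EU = {eu.max():.5f}, "
          f"best E,V frontier choice = {eu[frontier].max():.5f}")
```

Comparing the two printed numbers for each investor is exactly the kind of comparison the paper formalizes.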

868 citations


Posted Content•
TL;DR: In this article, an analytical and empirical interpretation of E-K complementarity consistent with basic microeconomics and with the manufacturing process engineering evidence of energy and capital substitutability is presented.
Abstract: Econometric studies have recently produced apparently contradictory evidence regarding substitution possibilities between energy (E) and capital (K). An analytical and empirical interpretation of E-K complementarity consistent with basic microeconomics and with the manufacturing process engineering evidence of E-K substitutability is presented. The notion of utilized capital is developed, and some of the seemingly disparate econometric findings are reconciled. The analysis emphasizes that care must be taken in interpreting and properly comparing alternative elasticity measures.

438 citations


Posted Content•
TL;DR: In this article, a risk-averse, competitive firm facing price uncertainty must choose its level of output before the uncertainty is resolved, and may at the same time buy or sell output in a forward market at a fixed price.
Abstract: Recent models of the competitive firm under uncertainty have explored ways in which the competitive firm's decisions would deviate from those of a firm operating with certainty and have explored the effects of increasing risk aversion on the firm's decisions (see, for example, Agnar Sandmo; David Baron; Hayne Leland; R. N. Batra and Aman Ullah; and the author). These models all assume that the firm's only response to uncertainty is to adjust its output or input levels. In practice, however, a number of institutions have been developed to aid firms in the management of risk. One of the most common methods used to deal with price uncertainty is hedging. Futures markets exist for many agricultural products and some metals. Firms may also use forward contracts to fix the price at which output or inputs are traded in the future. Although there is a large literature dealing with futures markets, few attempts have been made to incorporate futures or forward trading in the theory of the firm. Such a model is developed in this paper. A risk-averse, competitive firm is assumed to face price uncertainty. It must choose its level of output before the price uncertainty is resolved and may, at the same time, buy or sell output in a forward market at a fixed price. The major result of the paper is that the firm will produce a level of output which depends only on the forward price and is, in particular, independent of the firm's degree of risk aversion and the probability distribution of the uncertain price. In addition, if the forward price is less than the expected future price, the firm will generally hedge some, but not all, output in the forward market; it will hedge more, the more risk averse it is; and it will hedge more as the riskiness of the uncertain price increases. Finally, the existence of a forward market will generally induce the firm to produce a greater output than it would have in the absence of such a market.
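
A minimal numerical sketch of the separation result just described, under assumed functional forms (CARA utility and quadratic costs are my choices, not the paper's): optimal output should solve c'(q) = f regardless of risk aversion, while the hedge varies with it.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
spot = rng.normal(1.0, 0.3, 10_000)    # uncertain spot price at sale date
f = 0.9                                # forward price, below E[spot] = 1.0

def cost(q):
    return 0.5 * q ** 2                # c(q) = q^2/2, so c'(q) = q

def neg_eu(x, alpha):
    q, h = x                           # output q, forward sales h
    profit = spot * (q - h) + f * h - cost(q)
    # Minimizing E[exp(-alpha * profit)] maximizes CARA expected utility.
    return np.mean(np.exp(-alpha * profit))

for alpha in (0.5, 2.0, 5.0):          # increasing risk aversion
    res = minimize(neg_eu, x0=[1.0, 0.5], args=(alpha,), method="Nelder-Mead")
    q, h = res.x
    print(f"alpha={alpha}: q* = {q:.3f} (c'(q) = f gives q = {f}), h* = {h:.3f}")
```

Output q* stays at the forward price's level for every alpha, while the hedge h* is partial (f is below the expected price) and rises with risk aversion, matching the abstract's claims.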

429 citations



Posted Content•
TL;DR: In this paper, the authors examine the use of fines and imprisonment to deter individuals from engaging in harmful activities; these sanctions are analyzed separately as well as together, first for identical risk-neutral individuals and then for two groups of risk-neutral individuals who differ by wealth, with the effects of risk aversion on the results also discussed.
Abstract: This paper examines the use of fines and imprisonment to deter individuals from engaging in harmful activities. These sanctions are analyzed separately as well as together, first for identical risk-neutral individuals and then for two groups of risk-neutral individuals who differ by wealth. When fines are used alone and individuals are identical, the optimal fine and probability of apprehension are such that there is some "underdeterrence." If individuals differ by wealth, then the optimal fine for the high wealth group exceeds the fine for the low wealth group. When imprisonment is used alone and individuals are identical, the optimal imprisonment term and probability may be such that there is either underdeterrence or overdeterrence. If individuals differ by wealth, the optimal imprisonment term for the high wealth group may be longer or shorter than the term for the low wealth group. When fines and imprisonment are used together, it is desirable to use the fine to its maximum feasible extent before possibly supplementing it with an imprisonment term. The effects of risk aversion on these results are also discussed.
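
A small numerical sketch of the fines-only, identical-individuals case (uniform gains, a linear enforcement cost, and all parameter values are my illustrative assumptions, not the paper's model): with the fine capped at wealth and apprehension costly, the optimal expected fine falls short of the harm, i.e., there is some underdeterrence.

```python
import numpy as np

H, W = 0.5, 1.0                        # harm per act, wealth (maximum fine)

def enforcement_cost(p):
    return 0.4 * p                     # cost of apprehension probability p

def welfare(p, F):
    t = min(p * F, 1.0)                # risk-neutral individuals act iff gain g > p*F
    gains = 0.5 * (1.0 - t ** 2)       # integral of g over (t, 1], g ~ U[0, 1]
    harm = H * (1.0 - t)               # each committed act imposes harm H
    return gains - harm - enforcement_cost(p)   # fines are pure transfers

ps = np.linspace(0.0, 1.0, 1001)
w_star, p_star = max((welfare(p, W), p) for p in ps)
print(f"optimal p = {p_star:.2f}; expected fine p*F = {p_star * W:.2f} < H = {H}")
```

With these numbers the optimum sets the fine at its maximum W and p = 0.1, so the expected fine (0.1) is well below the harm (0.5): acts with gains between 0.1 and 0.5 go undeterred because deterring them is not worth the extra enforcement cost.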

371 citations


Posted Content•
TL;DR: The bulk of international commerce consists of trade in intermediate goods, raw materials, and goods which require further local processing before reaching the final consumer, as discussed by the authors.
Abstract: The bulk of international commerce consists of trade in intermediate goods, raw materials, and goods which require further local processing before reaching the final consumer. Although this has bec ...

307 citations


Posted Content•DOI•
TL;DR: In this article, the authors show that taking appropriate account of the costs of information provides an explanation of many phenomena which otherwise could not be explained, but that it also casts considerable doubt on a number of presumptions of traditional economics, for example, the efficiency of competition, and the policy prescriptions derived from those presumptions.
Abstract: This paper is concerned with the relationship between information and market equilibrium: with the effect of information on the effective degree of competition, on the level of prices and their dispersion, on the variety and character of products produced by markets, on the one hand; and with the demand for information by consumers and the supply of information by producers, on the other. I shall argue not only that taking appropriate account of the costs of information provides an explanation of many phenomena which otherwise could not be explained, but that it also casts considerable doubt on a number of presumptions of traditional economics, for example, the efficiency of competition, and the policy prescriptions derived from those presumptions. Traditional models of competition with perfect information obviously cannot explain the widely observed phenomena of price distributions, which seem sufficiently persistent that they cannot simply be dismissed as a disequilibrium phenomenon; nor can they explain advertising; nor can they explain why markets in which there are only a few large firms often seem more competitive than markets with many small firms. The work I am about to describe, which attempts to characterize equilibrium in product markets in which information is costly, provides considerable insight into these phenomena. At the same time, examining markets with costly information raises several important conundrums for competitive equilibrium theory: in the simplest of models formulated, no market equilibrium exists. The resolution of this ...

Posted Content•
TL;DR: In this paper, the authors focus on the relationship between job mobility and migration and show that the true effects of human capital variables, job characteristics, and family variables on the decision to migrate are best measured when one takes account of the relationship that exists between migration and job mobility.
Abstract: An important characteristic of the US population is its geographic mobility. In 1970, 18 percent of the population was living in a county that was different from their 1965 county of residence; half of these migrants had also moved across state lines. Previous work on geographic mobility can be classified into two categories. The first is composed of studies that have used aggregate data (for example, Samuel Bowles, Michael Greenwood 1969, Ira Lowry, and Aba Schwartz) to examine the determinants of net or gross migration for SMSAs or other geographic divisions. The second category of research has used data on individuals (for example, Julie DaVanzo, Richard Kaluzny, John Lansing and Eva Mueller, and Solomon Polachek and Francis Horvath) to explore the relationship between an individual's characteristics and his decision to migrate. This article continues the work on the analysis of the individual's decision to migrate but differs from the previous studies by focusing on the relationship between job mobility and migration. First, the proportion of geographic mobility that occurs in conjunction with a job change is calculated. Second, it is shown that the true effects of human capital variables, job characteristics, and family variables on the decision to migrate are best measured when one takes account of the relationship between migration and job mobility. Third, the effect of migration on the wage gains of individuals is studied, and again the need for distinguishing among moves that were associated with quits, layoffs, and transfers is clearly shown. Finally, by using three data sets that encompass different age groups (the National Longitudinal Surveys (NLS) of Young and Mature Men and the Coleman-Rossi Retrospective Life History Study), the importance of the relationship between migration and job mobility is demonstrated at different points in the life cycle. Section I of the article presents some summary statistics on the extent of geographic mobility among the individuals in the samples and documents the relationship between migration and job mobility. In Section II, a framework for analyzing the decision to migrate is discussed. Sections III and IV present the empirical results, while Section V summarizes the analysis. (excerpt)



Posted Content•
TL;DR: In this article, the author provides new empirical evidence on the impact of tax deductibility on the level of personal charitable contributions.
Abstract: In this paper I provide some new empirical evidence on the impact of tax deductibility on the level of personal charitable contributions. In order to provide empirical evidence of this kind we have to consider the broader question of the determination of the level of charitable giving by the household. The "price of contributions," which is dependent upon the tax treatment of those contributions, is only one element of this process. Efforts to quantify the impact of tax deductibility lead to an empirical model which allows us to test a number of hypotheses on the determination of the level of charitable contributions. In Section I, a model of personal charitable giving and its empirical implications are discussed. In Section II, previous attempts to test some of these implications and to quantify the impact of tax deductibility on charitable giving are discussed. Section III presents the specification of the empirical model including a discussion of the data, and Section IV presents empirical results. In Section V the implications of these results are summarized.
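
To make the empirical setup concrete, here is a hedged sketch of the standard constant-elasticity giving equation such studies estimate, with the "price of contributions" equal to one minus the marginal tax rate for an itemizing household; the data and coefficients below are simulated for illustration, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
income = rng.lognormal(mean=10.5, sigma=0.6, size=n)
# Marginal tax rate rises with income, with independent variation
# (e.g., family status, state taxes) so price is not collinear with income.
mtr = np.clip(0.15 + 0.05 * rng.normal(size=n)
              + 0.04 * (np.log(income) - 10.5), 0.05, 0.70)
price = 1.0 - mtr                       # deductibility lowers the price of giving
true_b, true_c = -1.2, 0.8              # assumed elasticities for this demo
log_g = (1.0 + true_b * np.log(price) + true_c * np.log(income)
         + rng.normal(0.0, 0.3, n))     # log(giving) = a + b*log(price) + c*log(income)

X = np.column_stack([np.ones(n), np.log(price), np.log(income)])
coef, *_ = np.linalg.lstsq(X, log_g, rcond=None)
print(f"estimated price elasticity = {coef[1]:.2f} (true value {true_b})")
```

The coefficient on log(price) is the price elasticity of giving, the object such deductibility studies are after.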

Posted Content•
TL;DR: In this article, the authors show that when vertical integration of successive oligopolists is mutually profitable, industry output increases and product price is lowered, and that the welfare gain stemming from vertical integration is further shown to hold not only under Cournot oligopoly but also under a Stackelberg "leader-follower" type of oligopoly.
Abstract: Vertical integration of successive monopolists (with fixed production coefficients) has long been known to provide merging monopolists with greater profit and their customers with greater outputs at lower prices. We contended in our earlier papers that similar welfare attributes apply to mergers between monopolist input suppliers and Cournot-type oligopolists. But what is the result when the input supplier is also an oligopolist? The present paper answers this question. It demonstrates, in particular, that when vertical integration of successive oligopolists is mutually profitable, industry output increases and product price is lowered. The welfare gain stemming from vertical integration is further shown to hold not only under Cournot oligopoly but also under a Stackelberg "leader-follower" type of oligopoly.
I. Independent Upstream-Downstream Oligopolists
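
A tiny worked example of the baseline fact the abstract opens with, for the successive-monopoly case (linear demand and constant cost are my illustrative assumptions): integration eliminates the double markup, raising output and lowering price.

```python
# Inverse demand P = a - b*Q; upstream marginal cost c.
a, b, c = 10.0, 1.0, 2.0

# Separated chain: the downstream monopolist facing input price w sells
# Q(w) = (a - w) / (2b); the upstream monopolist maximizes (w - c) * Q(w),
# which gives w* = (a + c) / 2.
w = (a + c) / 2
q_sep = (a - w) / (2 * b)
p_sep = a - b * q_sep

# Integrated firm: a single monopolist with marginal cost c sells
# Q = (a - c) / (2b).
q_int = (a - c) / (2 * b)
p_int = a - b * q_int

print(f"separated:  Q = {q_sep:.2f}, P = {p_sep:.2f}")
print(f"integrated: Q = {q_int:.2f}, P = {p_int:.2f}  (higher Q, lower P)")
```

With these numbers the separated chain sells Q = 2 at P = 8 while the integrated firm sells Q = 4 at P = 6; the paper's contribution is to show the analogous comparison when both stages are oligopolies.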

Posted Content•
TL;DR: The results show that current methods leave out an important social transfer term, that "value of a life lost" is highly age-dependent, and that the degree of diminishing returns to consumption is crucial in calculations of the economic cost of risks.
Abstract: This study asks two questions: (1) What is the net value to the representative individual over his life-time of activities that alter age-specific mortality risks? (2) What is the cost to the representative individual of activities that take a life at random at a given age? Results, derived from an economic-demographic model with full age-specific accounting have a strong actuarial flavor: alterations in the mortality schedule, caused say by a medical breakthrough, should be assessed on the utility of expected additional life-years, production, and reproduction, less expected social costs of support. Loss of life at a specific age, due to an accident say, should be assessed on the opportunity costs of expected lost years of living, lost production and reproduction, less expected social support costs. The results show that current methods, in general, leave out an important social transfer term, that "value of a life lost" is highly age-dependent, and that the degree of diminishing returns to consumption is crucial in calculations of the economic cost of risks.
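
A minimal sketch of the actuarial accounting the abstract describes, with entirely hypothetical survival, utility, and support-cost profiles: the value of a mortality improvement is the change in expected discounted life-years of utility net of social support costs.

```python
import numpy as np

ages = np.arange(40, 101)
survival = np.exp(-0.001 * (ages - 40) ** 2)    # hypothetical P(survive to age)
discount = 0.97 ** (ages - 40)                  # discounting future years
utility = np.where(ages < 65, 1.0, 0.8)         # utility of a year lived
support = np.where(ages < 65, -0.2, 0.3)        # net social support cost by age

baseline = np.sum(survival * discount * (utility - support))

# A uniform proportional mortality improvement (higher survival at all ages):
survival_improved = survival ** 0.95
improved = np.sum(survival_improved * discount * (utility - support))
print(f"baseline value {baseline:.2f}, after improvement {improved:.2f}")
```

The difference between the two sums is the kind of "expected additional life-years, production, and reproduction, less expected social costs of support" valuation the abstract describes, and it is visibly age-dependent: the same mortality change applied only at older ages would be worth less under these profiles.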

Posted Content•
TL;DR: In this article, some puzzling features of the most conspicuous form of wage bargaining, that done formally by employers and labor unions, deserve further theoretical attention: 1) Collective bargaining agreements are rarely contingent on outside events even though the parties have very imperfect knowledge of prospective economic conditions during the period of the contract.
Abstract: Much recent thought has been devoted to the macroeconomic importance of the existence of wage contracts. Still, some puzzling features of the most conspicuous form of wage bargaining, that done formally by employers and labor unions, deserve further theoretical attention. Among these important features are:
1. Collective bargaining agreements are rarely contingent on outside events even though the parties have very imperfect knowledge of prospective economic conditions during the period of the contract. The only important exception is the indexing of wages to the cost of living.
2. Employers are permitted wide discretion in determining the level of employment when demand shifts unexpectedly. As employment varies, total compensation varies according to a formula established in the agreement.
3. Agreements are not permanent but are renegotiated on a regular cycle.
4. In the process of renegotiation, the current state of demand has little impact on the new wage schedule. On the other hand, current wages in other industries have an important influence. This feature especially has been denied or ignored by economic theorists even though it is a prominent part of the thinking of labor economists on wage determination.

Posted Content•
TL;DR: In this article, the authors ask: if government gets too large, why can't voters band together to stop its growth? Rational, informed, democratic voting processes should provide a limit to the size of the public sector; indeed, they should insure that the public sector is just as large as the voters want it to be.
Abstract: Recent budgetary rhetoric emanating from Washington and other governmental capitals suggests a growing fear that public spending is getting out of control. For long periods of time the government budget has grown more rapidly than GNP in most mixed economies, and observers of these trends have begun to realize that if this process continues, public expenditures will approach very high shares of GNP and income tax rates could get close to unity. These scare stories are counteracted by the simple question that if government gets too large, why can't voters band together to stop its growth? Rational, informed, democratic voting processes should provide a limit to the size of the public sector; indeed they should insure that the public sector is just as large as the voters want it to be. According to what economists have come to know as the "median voter" theory, it is puzzling to know exactly how government spending could ever get too high or out of control. There have been several attempts to explain the apparent anomaly. The major focus of previous efforts has been on some aspect of bureaucratic aggrandizement, either broadly or narrowly construed. William Niskanen (1971), for example, presents a model in which bureaucracies desire to obtain as large a budget as possible for the bureau in which they are employed. (See also his 1975 paper.) Despite competition from other bureaus, the size of the overall governmental budget is larger than socially optimal because the nature of the budget process allows bureaus to act as price-discriminating revenue maximizers. Their ability to use their market power is constrained, both by competition from other bureaus and by the preferences of relevant legislative committees. As is implicit in the title of his work, Bureaucracy and Representative Government, Niskanen's major concern is with the way in which the institutions of representative government (particularly the U.S. federal government) may lead to an overprovision of public services. The model is not directly relevant to the behavior of local governments since it ignores two important constraints on local government spending. One is provided by households' opportunity to vote directly on referenda concerning tax collections, and the other by the ability of households to leave local jurisdictions in response to expenditure-taxation packages which they find to be unsatisfactory. More general in application than Niskanen's work are a number of papers which focus on the ability of public employees to influence the political process so as to increase both wages and the size of the public sector. The implications of this approach have been discussed by a number of authors, but in each case the underlying model has been left unstated or undeveloped. For example, James Buchanan considers the possible ramifications of the right of public employees to vote when he argues: ...




Posted Content•
TL;DR: In this article, a monopolistically competitive model of airline markets is introduced and analyzed, which takes account of the product differentiation effect resulting from variation in flight departure times, and the effects of flight frequency and load factor on service quality.
Abstract: This paper introduces and analyzes a monopolistically competitive model of airline markets which takes account of the product differentiation effect resulting from variation in flight departure times, and the effects of flight frequency and load factor on service quality. The basic results are that (1) when the direct benefits (to consumers) of increasing flight frequency are exhausted, socially optimal choices of price and frequency result in zero profits for the industry, but (2) a noncooperative, free entry equilibrium always results in higher prices, lower load factors, and greater frequency than are socially optimal.

Posted Content•
TL;DR: This paper used detailed data collected from nearly 4,900 rural households (including landless laborers, farmers, and nonagricultural workers) in West Bengal in what may be among the first econometric attempts to estimate labor supply functions in peasant agriculture.
Abstract: The analytical literature on employment, unemployment, and wage determination in poor agrarian economies is large, albeit inconclusive. Empirical work in this area is comparatively scanty. For the most part it relates either to the question of "surplus" labor in peasant agriculture (and other unorganized activities) or to that of labor use and productivity in studies of production functions fitted to farm management data. There have been few systematic empirical studies of labor supply and labor market participation behavior of peasant households. The usual farm management data are not good enough for this purpose, particularly because they exclude the substantial class of landless laborers who do not have a farm. In this paper I have used detailed data collected from nearly 4,900 rural households (including landless laborers, farmers, and nonagricultural workers) in West Bengal in what may be among the first econometric attempts to estimate labor supply functions in peasant agriculture. The data set is part of a very large-scale employment and unemployment survey of households carried out by the National Sample Survey Organization in India for the one-year period of October 1972–September 1973. In Section I the nature of the data is described and the results presented on labor supply behavior. My evidence seems to be against the standard horizontal supply curve of labor assumed in a large part of the development literature. In Section II the factors influencing labor participation rates for rural women are analyzed. Section III contains an analysis of the wage rates quoted as "acceptable" by different groups of respondents. Such answers came in response to hypothetical questions on wage employment to give us some idea of the supply prices of labor.

Posted Content•
TL;DR: In this article, the authors present a theory of indexes which measure the rate of potential improvement in the welfare performance of an industry; since different modes of firms' conduct lead to different indexes, the choice among these indexes should be based on an assessment of the behavior of the industry's firms.
Abstract: This paper presents a theory of indexes which measure the rate of potential improvement in the welfare performance of an industry. These indexes indicate the magnitude of gross social gains achievable from appropriate governmental intervention (for example, antitrust, regulatory and deregulatory actions, or threats thereof). The indexes are local measures which can be calculated from data pertaining to the current industry structure (i.e., market shares and demand elasticities). Surprisingly, the indexes reduce to simple transformations of standard indexes of market concentration[1] and monopoly power (namely the m-firm concentration ratio, the Herfindahl index, and the Lerner index) given familiar sets of assumptions on firm behavior (respectively: collusive price-leadership, quantity Cournot, and pure monopoly). Since different modes of firms' conduct lead to different indexes, the choice among concentration index formulae should be based on an assessment of the behavior of the industry's firms. We find that the potential improvement in welfare performance is as sensitive to mode of conduct and other industry data as it is to the observed market shares. Consequently, our analysis provides a quantification of the idea that concentration per se does not necessarily warrant governmental intervention. Our theory at once provides a general index concept, new rigorously based practical indexes, a conceptual framework for the interpretation of standard indexes, and insights into appropriate criteria for governmental intervention. A rational appraisal of the desirability of a governmental action towards an industry can be phrased as a comparison of the benefits and the costs of the intervention. Each of the many possible governmental actions can conceptually be associated with the vectors q^0 and q of the outputs of the firms in the industry, before and after the intervention, respectively. The gross benefits of each action may be expressed as W(q) - W(q^0), where W(·) is the sum of consumers' and producers' surpluses. While received theory does guide the specification of the social objective function, little can be said at this level of generality about the social cost of governmental action which moves industry outputs from q^0 to q. Even so, it is useful to examine the benefit side of the rational calculus of intervention. It appears that the government regards an industry with high values of the standard concentration indexes as a prime candidate for intervention.[2] Thus, using the cost-benefit vocabulary, the prevailing view seems to be that the concentration indexes are strongly positively correlated with W(q) - W(q^0), where q is the result of appropriate corrective action. In this paper we synthesize the rigorous cost-benefit and the practical index number approaches to the identification of industries where the government's intervention efforts will be well placed. Our aim is to develop tools capable of assessing W(q) - W(q^0). Yet, to ensure that the tools are practical ones, we accept constraints implicit in the index number methodology and confine ourselves to the use of information on only the current situation of the industry. Consequently, we focus on the rate of change of W(·) at q^0; that is, on the current sensitivity of social welfare ...
*American Telephone and Telegraph Company and Princeton University, respectively. This paper was written while we were employed by Bell Laboratories and is partly based on Dansby's doctoral dissertation. We are grateful to W. J. Baumol, A. Weiss, and S. Winter for extremely helpful comments and discussions.
[1] The measurement of industrial concentration is discussed by Morris Adelman, John Blair, and Russell Parker. The data used in these measurements typically come from Bureau of the Census or Federal Trade Commission sources. See J. E. Morton.
[2] Although economists debate the relative merits of various concentration indexes (see Eugene Singer or James Delaney), the government unabashedly uses these indexes to guide intervention activities (see F. M. Scherer).
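
For reference, the three standard indexes the abstract says its welfare-gradient measures reduce to can be computed directly from market shares and a price-cost margin; the helper functions below are my own sketch, not the paper's indexes:

```python
import numpy as np

def concentration_ratio(shares, m):
    """m-firm concentration ratio: sum of the m largest market shares."""
    return np.sort(shares)[::-1][:m].sum()

def herfindahl(shares):
    """Herfindahl index: sum of squared market shares."""
    return np.sum(np.square(shares))

def lerner(price, marginal_cost):
    """Lerner index of monopoly power: (P - MC) / P."""
    return (price - marginal_cost) / price

shares = np.array([0.40, 0.25, 0.15, 0.10, 0.10])
print(f"CR4 = {concentration_ratio(shares, 4):.2f}")
print(f"HHI = {herfindahl(shares):.3f}")
print(f"Lerner at P=10, MC=8: {lerner(10.0, 8.0):.2f}")
```

The paper's point is that which of these (suitably transformed) measures the potential welfare gain depends on the assumed mode of conduct: collusive price-leadership, Cournot, or pure monopoly, respectively.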

Posted Content•
TL;DR: This paper analyzes the long-run effects of changes in Social Security on capital accumulation and the equilibrium wage and interest rates in a growing economy, and shows that an appropriate Social Security system can increase the long-run well-being of the economy by causing the rate of return on capital to converge to the Golden Rule level.
Abstract: The Social Security system has played an important role in the economic life of American families. It not only provides security for the elderly, but is a device for automatic stabilization, a method of income redistribution, as well as an important factor affecting capital accumulation and the supply of labor. The purpose of this paper is to analyze the long-run effects of the Social Security system in a growing economy. The model employed here extends and generalizes the neoclassical life cycle growth models of Peter A. Diamond and Paul A. Samuelson by explicitly allowing for an endogenous retirement decision and bequest motive. I consider an economy in which the population grows at a constant rate. Each individual lives for two periods. In the first period, he works full time, earning an income of w and paying a Social Security tax of T. In the second period, he works a fraction of time and then retires, receiving from the government a pension of z. He is to choose a consumption path, a retirement age, and an amount of bequest so as to maximize his lifetime utility. From these individual decisions and the assumption that the government budget is balanced each period, we derive the aggregate capital and labor supply functions and analyze the effects of changes in Social Security on capital accumulation and the equilibrium wage and interest rates. The present model is similar to that of Martin S. Feldstein in that retirement decisions are assumed endogenous. The main difference is that his is a partial equilibrium analysis while the model presented here is a general equilibrium model capable of analyzing long-run effects. I show that the short-run effects of Social Security depend primarily on the elasticities of the demand and supply of labor, and its long-run effects are influenced as well by the elasticities of savings and bequest. It is further shown that an appropriate Social Security system can increase the long-run well-being of the economy by causing the rate of return on capital to converge to the Golden Rule level. If, however, the tax and pension levels are tied to the individual working-retirement decisions, the system causes distortions in the labor market. Because of this distortional effect, the optimal Social Security does not necessarily lead to the Golden Rule.
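
A hedged sketch of the individual's problem in such a two-period life cycle model (w, T, and z follow the abstract; the budget structure and the balanced-budget condition are standard assumptions, not quoted from the paper):

```latex
% First period: work full time, earn w, pay Social Security tax T, save s.
% Second period: work a fraction l of the period, receive pension z,
% consume c_2 and leave a bequest b; r is the interest rate.
\max_{c_1,\, c_2,\, l,\, b} \; U(c_1, c_2, l, b)
\quad \text{s.t.} \quad
c_1 + s = w - T, \qquad
c_2 + b = (1 + r)\, s + l\, w + z
```

With population growing at rate n and the government budget balanced each period, each retiree's pension is financed by (1 + n) current workers, so z = (1 + n)T; the Golden Rule the abstract refers to is the case where the rate of return on capital converges to n.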

Posted Content•
TL;DR: In this paper, it was shown that the conditions necessary for effective signalling to emerge may not always be satisfied, and that it is possible to make every agent in the market better off simply by raising the price.
Abstract: A common characteristic of a large class of markets is that one side of the market is more informed than the other about the properties of one of the goods being traded. In some instances, this presents no serious problem. If the informed agents deal on a regular basis with the less-informed agents (for example, local grocers, barbers), there may be little incentive for the informed agents to take advantage of their superior information. In other cases, the problem may be avoided if it is profitable for specialists (or some government agency) to provide the information at a relatively low cost (for example, credit agencies, Consumer Reports). Frequently, however, these kinds of market responses provide at best a partial reduction in the informational asymmetry. There may still be substantial benefits to the less-informed agents from acquiring more information. How the market will respond under these circumstances has been the focus of much recent research. Most of the attention, however, has been directed at examining the possibility that a signalling convention will emerge. The essential idea is that sellers of high quality products may choose contracts or invest in observable characteristics which distinguish their products from those of lower quality. Although I believe that signalling is an important and pervasive phenomenon, the conditions necessary for effective signalling to emerge may not always be satisfied. It is important, therefore, that we understand how the allocation of goods is affected in the absence of signalling, when the only variable that agents may use to distinguish quality is the price. This paper provides an overview of some of my recent research on this question. My investigation begins with a welfare analysis of the Walrasian equilibrium. Specifically, the question is whether or not it is necessarily desirable for trade to take place at a price which clears the market. My analysis indicates that it is not. Under some conditions, it may be possible to make every agent in the market better off simply by raising the price. Besides generating some obvious policy implications, this result also suggests that the Walrasian equilibrium may not always be the appropriate equilibrium concept for this model. In a market with homogeneous goods, it is generally argued that independently of how the prices are set, as long as there is a large number of buyers and sellers, competitive pressures will force the price toward a stable Walrasian equilibrium. When an adverse selection problem appears, however, the possibility that some buyers may prefer a price higher than the one which clears the market casts some doubt as to whether such pressures will still be present. It is no longer obvious that the market will clear or even that all trade will take place at a single price. These points can be conveniently illustrated using George Akerlof's model of the used car market. There is a set of cars of varying quality q distributed over an interval [q1, q2] with density f(q). Each agent in the economy has an identical utility function u(c, q; t) = c + tq, where c is consumption of other goods, q is the quality of car he consumes, and t is a parameter equal to his marginal rate of substitution of car quality for consumption. (If an agent does not consume a car, q may be set equal to zero.) The set of agents can be divided into two subsets, those that initially own exactly one car and those that own none. Each owner has the same utility parameter, t = 1; for the nonowners, however, t is distributed continuously over some interval [t1, t2] with density h(t). As long as each owner can directly identify the quality of his own car, the supply curve will have the usual positive slope. A utility maximizing owner with a car of quality q will sell at price p if and only if q ≤ p. As the price rises, therefore, more cars will be supplied. If ...
*Department of Economics, University of Wisconsin. This research was supported by the National Science Foundation under Grant SOC-77-08568.
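
The adverse selection logic in this setup is easy to simulate. The sketch below uses a uniform quality distribution and my own parameter choices; it is an illustration of the Akerlof mechanism the abstract describes, not the author's analysis:

```python
import numpy as np

q_low, q_high = 0.0, 1.0               # quality uniform on [q_low, q_high]

def mean_quality_on_market(p):
    """E[q | q <= p]: owners (t = 1) sell iff quality is at most the price."""
    p = np.clip(p, q_low, q_high)
    return (q_low + p) / 2.0           # conditional mean for a uniform density

for t in (1.5, 2.5):                   # buyer taste for quality
    prices = np.linspace(0.01, 1.0, 1000)
    # A buyer with taste t pays at most t * E[q | q <= p] for a random
    # car from the pool offered at price p.
    wtp = t * mean_quality_on_market(prices)
    sustainable = prices[wtp >= prices]
    if sustainable.size:
        print(f"t = {t}: trade can occur up to p = {sustainable.max():.2f}")
    else:
        print(f"t = {t}: no price sustains trade (market collapses)")
```

Because raising the price raises the average quality offered, buyers' willingness to pay moves with the price itself, which is exactly why the market may fail to clear at any single price.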

Posted Content•
TL;DR: In this article, the authors present a model of the labor force behavior of married women in which both individual and family decision making, and macro labor market conditions, are found to play important roles.
Abstract: The major difference between segmented labor market and human capital theories about the labor force behavior of women lies in the attention paid to micro vs. market-wide or macro variables. In a study such as James Heckman's (1976) which includes no market variables, the demand for the labor of married women in any given education-experience (and hence offered wage) class is implicitly assumed to be infinitely elastic. Thus the observed differences in the labor force behavior of individual women are attributed entirely to differences in supply characteristics such as education and child status. On the other hand, in segmented labor force analyses, such as Barbara Bergmann's and Irma Adelman's study, the macro phenomenon of occupational segregation by sex is seen as the major factor affecting the participation, wage rates, and hours of work of women. These two types of studies lead to different explanations of why the labor force participation of women has increased in recent years. Different sets of government policies aimed at improving the labor force situation of women are also implied. In this paper we present a model of the labor force behavior of married women in which both individual and family decision making, and macro labor market conditions, are found to play important roles. An unemployment variable and an index summarizing the ratio of expected available local job slots for women to the potential female labor force population are incorporated into a marginal utility analysis of the labor force behavior of married women in Canada. The inclusion of the local opportunity for jobs variable is supported by detailed evidence on the labor force segregation of women in Canada. Consistent estimation results are presented for eleven age groups in a probit analysis of whether or not a married woman works, and for eight age groups in equations estimating the offered wage rates and annual hours of work of married women who do work. One unexpected finding is that working wives in Canada tend to work fewer hours per year when paid more per hour. This is contrary to the findings of other researchers for the United States, and has important policy implications. Although it is possible that our results differ from those of other researchers solely because we have analyzed data for another country, we argue in Section V of this paper that the difference in results is more likely due to differences in the form in which the labor supply function for wives is estimated and the choice of the variables which are used to control for child status. Our resulting uncompensated wage elasticities of hours of work are shown to be very similar to those reported by other researchers for men. The data base used in this study is the Family File of the first Public Use Sample to be made available from a Canadian census. Combined grouped R2s are presented showing the extent to which our equations explain the observed macro variations in the labor force behavior of married women classified by various characteristics. Finally, we use our estimated model to see what changes we would expect in the labor force behavior of a hypothetical 41-year-old wife living in a small city in New Brunswick given a variety of changes ...
*Faculty of Business Administration and Commerce, University of Alberta. For further computational results and theoretical arguments supporting various statements in this paper, see our book. The work for this paper was supported in part by the Statistics Canada-SSRCC Programme of 1971 Census Analytical Studies, and by the Faculties of Graduate Studies and Research and of Business Administration and Commerce of the University of Alberta. The empirical results in this paper are primarily based on Public Use Sample Data derived from the 1971 Canadian Census of Population supplied by Statistics Canada. The responsibility for the use and interpretation of these data is entirely ours. We would like to thank T. Daniel, K. Gupta, anonymous referees, and the managing editor for their helpful comments, and James Heckman for making available to us some of his work which had not yet been published.

Posted Content•
TL;DR: The New Jobs Tax Credit, one of the four programs in the 1977 economic stimulus package, was viewed primarily as a countercyclical measure but may also alter the equilibrium unemployment rate, UN, as discussed by the authors.
Abstract: The New Jobs Tax Credit was one of the four programs in the 1977 economic stimulus package. This program, although viewed primarily as a countercyclical measure, may also alter the equilibrium unemployment rate, UN. This paper presents our preliminary analysis of the Department of Labor survey, conducted by the Bureau of the Census, in which firms described their responses to this employment tax credit (ETC). To date, our results indicate the potential for a large employment effect. Ordinary least squares estimates suggest that firms which knew about the program increased employment 3 percent faster than other firms. A second analysis which uses multinomial logit techniques indicates that the ETC shifted the entire distribution of employment growth to the right: slowly growing firms increased employment to capture the credit. Since the firms which knew about the program, however, were not randomly drawn, our results may overstate the program's employment effect. Due to the nature of the survey data, we can only focus on direct employment effects. It is useful, however, to at least mention the other potential effects of the program. First, unlike Comprehensive Employment Training Act (CETA) programs which increase public employment, the ETC should increase employment in the private sector. Within the private sector, the rules of the current ETC program provide an additional stimulus to the growing industries and, to a lesser extent, to small establishments. Second, the long-run structural effects of this two-year program are probably small. A permanent ETC, however, may be able to lower UN of disadvantaged workers.

Posted Content•
TL;DR: In this paper, the authors apply Girton and Roper's model of exchange market pressure to the postwar Brazilian monetary experience and show that a much greater proportion of exchange market pressure was absorbed by exchange rate depreciation than in the Canadian case, where changes in reserves were large relative to exchange rate movements.
Abstract: This study applies Lance Girton and Don Roper's (hereafter G-R) monetary model of exchange market pressure to the postwar Brazilian monetary experience. The model was designed specifically for the Canadian managed float during the period 1952-62. The object of their model is to explain what they term "exchange market pressure"; that is, the pressure on foreign exchange reserves and the exchange rate when there exists an excess of domestic money supply over money demand in a managed floating exchange rate regime. The basic theoretical proposition is that any such excess supply of money can be relieved by an exchange depreciation, a loss in foreign reserves, or, in the context of a managed float, by some combination of the two. In this sense, the G-R managed float model used here is firmly rooted in the modern monetary approach to exchange rates and the balance of payments. Brazil provides a particularly good example for testing this approach, not only because it is in many senses a unique example of a postwar managed float system, but also because it can be treated as a "small, open" economy in the sense that world prices and monetary conditions faced by Brazil are taken as given. This particularly suits the purpose of most modern monetary models which make this assumption and obviates the problems of monetary dependence and neutralization dealt with in the pioneering G-R paper. Specifically, the small-country assumption permits us to devise a simple one-country equation of managed floating which depends upon four essential ingredients: 1) money demand, 2) money supply, 3) purchasing power parity, and 4) monetary equilibrium. Furthermore, in Brazil a much greater proportion of exchange market pressure was absorbed by exchange rate depreciation than in the Canadian case where changes in reserves were large relative to exchange rate movements. In short, postwar Brazil provides a singularly good opportunity to test the monetary model of exchange market pressure. Section I briefly states the essential elements of the monetary model, and derives the equation to be tested for the Brazilian experience from 1955 to 1975. Section II reports empirical results for the exchange market pressure model, and Section III examines the applicability of the relative version of purchasing power parity for the time period considered. Section IV summarizes the results and discusses the merits of the monetary approach in light of the Brazilian experience.
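
A hedged reconstruction of how the four ingredients combine (standard monetary-approach notation; the paper's exact specification may differ):

```latex
% money demand:  m^d = p + \phi y - \lambda i
% money supply:  m = d + r   (logs: domestic credit d, foreign reserves r)
% PPP:           p = p^* + e   (e = price of foreign currency,
%                               so a rise in e is a depreciation)
% equilibrium:   m = m^d
% Differencing and rearranging gives an exchange market pressure equation:
\Delta r - \Delta e = \Delta p^* + \phi\,\Delta y - \lambda\,\Delta i - \Delta d
```

The left-hand side is exchange market pressure: an excess supply of domestic money (a large Δd) must be absorbed by reserve losses (Δr < 0), depreciation (Δe > 0), or some combination of the two, which is the proposition the paper tests on Brazilian data.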