
Showing papers in "The Review of Economics and Statistics in 1978"


Journal ArticleDOI
TL;DR: In this article, the author analyzes all classroom trainees who began training under the Manpower Development and Training Act (MDTA) in the first three months of 1964, matching program records to Social Security earnings histories in order to measure the full inter-temporal impact of training.
Abstract: GOVERNMENTAL post-schooling training programs have become a permanent fixture of the U.S. economy in the last decade. These programs are typically advocated for diverse reasons: (1) to reduce inflation by the provision of more skilled workers to alleviate shortages, (2) to reduce unemployment of certain groups, and (3) to reduce poverty by increasing the skills of certain groups. All of these objectives require that training programs increase the earnings of trainees above what they otherwise would be. For example, alleviating shortages by training more highly skilled workers should increase the earnings of these workers. Likewise, the concern for unemployed workers is derived from a concern for the decreased earnings of these workers; and if trainees subsequently suffer less unemployment, their earnings should be higher. Finally, training programs are intended to reduce poverty by increasing the earnings of low income workers. Evaluating the success of training programs is thus inherently a quantitative assessment of the effect of training on trainee earnings.1 It is an important process both because it helps to inform discussions of public policy by shedding light on the past value of these programs as investments and because it can provide a means of testing our ability to augment the human capital of certain workers. 
Although there have been many studies of the effect of post-school classroom training on earnings it is by now rather widely agreed that very little is reliably known about the actual effects of these programs.2 Three main problems account for this state of affairs: (1) the large sample sizes required to detect relatively small anticipated program effects in a variable with such high variance as earnings, (2) the considerable expense required to keep track of trainees over a long enough period of time to measure the full inter-temporal impact of training, and (3) the extreme difficulty of implementing an adequate experimental design so as to obtain a group against which to reliably compare trainees.3 The purpose of this paper is to report on efforts to cope with this third problem using a data collection system that comes some way towards resolving the first two. The basic idea of this data system is to match the program record on each trainee with the trainee's Social Security earnings history. The Social Security Administration maintains a summary year-by-year earnings history for each Social Security account over the period since 1950 that may be used, under the appropriate confidentiality restrictions, for this purpose.4 In this paper I have concentrated on an analysis of all classroom trainees who started training under the Manpower Development and Training Act (MDTA) in the first three months of 1964 so as to ensure their having completed training in that year. In choosing to analyze trainees from so early a cohort something is clearly lost. On the one hand, the nature of the participants in these early years was considerably different from that in the later years. In particular, programs geared

Received for publication February 9, 1977. Revision accepted for publication August 1, 1977. * Princeton University. This research was supported by ASPER, U.S. Department of Labor, but does not represent an official position of the Department of Labor, its agencies, or staff. 
I would like to thank Gregory Chow, Ronald Ehrenberg, Roger Gordon, Zvi Griliches, George E. Johnson, Nicholas Kiefer, Richard Quandt, and Sherwin Rosen for helpful comments. I also owe a heavy debt to D. Alton Smith for computational and other assistance.

1 See Reid (1976), for example, for a clear analysis of how knowledge of these effects is required in order to establish the impact of government training on the black/white wage differential.
2 Surveys of many of these studies may be found in Stromsdorfer (1972) and O'Neill (1973).
3 For further discussion of these points see Ashenfelter (1975).
4 The idea for using these data to analyze the effectiveness of government training programs is apparently quite an old one, having been suggested by the National Manpower Advisory Committee (U.S. Department of Labor, 1972) to the Secretary of Labor at its first meeting in a letter dated October 10, 1962, the year of passage of the Manpower Development and Training Act. Actual efforts along these lines were ultimately reported by Borus (1967), Commins (1970), Farber (1970), and Prescott and Cooley (1972).
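The evaluation strategy the abstract describes, comparing trainees' earnings before and after training against a comparison group drawn from the same earnings records, can be illustrated with a small difference-in-differences sketch. All numbers below are invented for illustration; the paper's actual estimator and data are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual earnings in dollars (the paper uses Social Security
# earnings histories; these series are simulated, not the paper's data).
pre_trainee = rng.normal(3000, 800, 500)    # trainees, year before training
post_trainee = rng.normal(3900, 900, 500)   # trainees, year after training
pre_comp = rng.normal(3200, 800, 500)       # comparison group, before
post_comp = rng.normal(3500, 900, 500)      # comparison group, after

# Difference-in-differences: the earnings change for trainees minus the
# change for the comparison group nets out common year-to-year growth.
did = (post_trainee.mean() - pre_trainee.mean()) - (post_comp.mean() - pre_comp.mean())
print(f"estimated training effect: ${did:,.0f}")
```

The simple before/after trainee difference alone would overstate the effect here, since the comparison group's earnings also grew; that is exactly the bias an adequate comparison group is meant to remove.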

1,456 citations


Journal ArticleDOI
TL;DR: This paper presents a model of voting behavior that is general enough to incorporate what appear to be most of the theories of voting behavior in the recent literature and that allows one to test one theory against another.
Abstract: AN important question in political economy is how, if at all, economic events affect voting behavior. Although there is by now a fairly large literature devoted to this question, there is no widely agreed upon answer. Kramer (1971), for example, concluded from his analysis of U.S. voting behavior that economic fluctuations have an important influence on congressional elections, whereas Stigler (1973) concluded that they do not. This debate has been continued by Arcelus and Meltzer (1975a, b), Bloom and Price (1975), and Goodman and Kramer (1975). Many of the disagreements in this area are over statistical procedures and the interpretation of empirical results, but it is also clear that there is no single theory of voting behavior to which everyone subscribes. Unfortunately, the distinction between theoretical and empirical disagreements in this literature is often not very sharp, and there has been no systematic testing of one theory against another. This paper has two main purposes. The first is to present a model of voting behavior that is general enough to incorporate what appear to be most of the theories of voting behavior in the recent literature and that allows one to test

916 citations


Journal ArticleDOI
TL;DR: The following sections are included: Introduction; Theoretical Specification of the Export Functions; Results; Conclusion; Appendix; References.
Abstract: The following sections are included: Introduction; Theoretical Specification of the Export Functions; Results; Conclusion; Appendix; References.

507 citations


Journal ArticleDOI
TL;DR: In this article, the authors put forward some simple theoretical hypotheses concerning the nature of the interrelationship between the economy and the polity, particularly with respect to (central) government.
Abstract: IN modern society, where government has assumed a major role in economic affairs and where the electorate has made it increasingly responsible for material well-being, it has become important to analyse the interaction between economic and political systems. Government should no longer be regarded as exogenous to the economic system. This is particularly the case with respect to econometric model building. As some authors have noted, an econometric model may be subject to serious misspecification if an endogenous variable (such as government expenditure) is treated as if it were exogenous.1 The study of politico-economic interdependence also has important consequences for forecasting. As the future course of economic events is strongly dependent on government action, existing macroeconometric models that regard government as exogenous are of limited use for prediction. Furthermore, economic policy advice is often unsuccessful because it does not take political repercussions into account. A deflationary policy, for example, will hardly be adopted by a government just before an election because it carries with it a high risk of leading to government's losing the election. Politico-econometric modelling helps economists concerned with government advising to advance proposals that have a reasonable chance of being put into action. This study puts forward some simple theoretical hypotheses concerning the nature of the interrelationship between the economy and the polity, particularly with respect to (central) government. The basic relationships are reflected in the popularity function, which describes the impact of economic conditions on government popularity; and in the reaction function, which shows how government uses policy instruments to steer the economy in a desired direction. These relationships are econometrically tested with quarterly data for the United States for the period 1953-1975. 
In the model both voters and government are assumed to be utility maximizers, and government's behavior is restricted by various economic, political and administrative constraints. The analysis shows that the government's (or in the case here dealt with, the president's) popularity is significantly reduced when the rate of unemployment and/or of inflation rises, and that it is significantly increased when the growth rate of private consumption rises. Government reacts to changes in its popularity because this is taken as an indicator of future electoral outcome. When popularity is low, it tries to steer the economy so as to increase its re-election chances; when popularity is high enough, it can afford to pursue ideologically oriented policies, which need not always be popular with the electorate. There have been a number of papers that have dealt with the influence of economic variables on election outcomes and on government popularity, most of which are unsatisfactory on theoretical and statistical grounds. There are, on the other hand, only a few that have been concerned with government reaction functions. Moreover, these studies have been either apolitical and interested only in the implied weights of a welfare function (e.g., Friedlaender, 1973); or they have related to only a particular section of the economy (e.g.,

Received for publication June 14, 1976. Revision accepted for publication November 30, 1976. * University of Zurich. A first version of this paper was written during a stay at the Cowles Foundation, Yale University. It was revised in the light of comments received when it was presented at the Cowles Foundation Seminar and at seminars at Princeton University, the University of North Carolina at Chapel Hill and the Center for Study of Public Choice, Virginia Polytechnic Institute and State University. The authors are especially grateful to A. S. Blinder, J. M. Buchanan, R. C. Fair, G. M. Heal, D. F. Hendry, C. Goodrich, G. H. Kramer, G. Kirchgaessner, D. MacRae, W. D. Nordhaus, W. E. Oates, E. R. Tufte, G. Tullock, R. Wagner, and to the anonymous referees.

1 See Crotty (1973), Goldfeld and Blinder (1972), Blinder and Solow (1974, pp. 69-77).
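The popularity function at the heart of such a model can be sketched as an ordinary least squares regression of government popularity on unemployment, inflation, and consumption growth. The series below are simulated with the signs the abstract reports; the coefficients and data are invented, not the authors' estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 92  # quarterly observations, as for 1953-1975

# Hypothetical series standing in for the paper's quarterly U.S. data.
unemp = rng.normal(5.5, 1.2, n)     # unemployment rate (%)
infl = rng.normal(3.0, 2.0, n)      # inflation rate (%)
cons_g = rng.normal(2.5, 1.5, n)    # growth rate of private consumption (%)

# Popularity falls with unemployment and inflation, rises with
# consumption growth (signs as reported; magnitudes invented).
popularity = 60 - 3.0 * unemp - 1.5 * infl + 1.0 * cons_g + rng.normal(0, 2, n)

X = np.column_stack([np.ones(n), unemp, infl, cons_g])
beta, *_ = np.linalg.lstsq(X, popularity, rcond=None)
print("estimated coefficients (const, unemp, infl, cons_g):", beta.round(2))
```

The reaction function would run the other way, regressing policy instruments on lagged popularity, but the estimation mechanics are the same.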

450 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine three cartels to determine the potential gains to producers of forming cartels to market exhaustible resources, calculating both monopolistic and competitive price trajectories. They conclude that cartel formation offers an advantage for petroleum and bauxite, but not for copper.
Abstract: Three cartels are examined to determine the potential gains to producers of forming cartels to market exhaustible resources by calculating both monopolistic and competitive price trajectories. Included in the study are the Organization of Petroleum Exporting Countries (OPEC), International Council of Copper Exporting Countries (CIPEC), and the International Bauxite Association (IBA). An optimal pricing model is described and applied to each of the cartels. Cartels are concluded to have an advantage for petroleum and bauxite, but not for copper. The smaller market shares of CIPEC and short-term lag adjustments seem to be the determining factors rather than resource exhaustion. Future research is needed to determine the effects of competitive firms changing their price expectations and cartel formation of consuming countries. 27 references.
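The competitive benchmark behind such price trajectories is the Hotelling rule: under competition the net price of an exhaustible resource rises at the discount rate, while a monopolist lets marginal revenue, not price, grow at that rate. A minimal sketch with invented parameters (not the study's calibration):

```python
import numpy as np

r, c, T = 0.05, 2.0, 30          # discount rate, unit extraction cost, horizon (assumed)
t = np.arange(T)

# Competition: net price p - c grows at the discount rate r, so
# producers are indifferent about when to extract.
p0 = 5.0                          # assumed initial competitive price
p_comp = c + (p0 - c) * np.exp(r * t)

# Monopoly with isoelastic demand q = A * p**(-eps): marginal revenue is
# MR = p * (1 - 1/eps), and MR - c grows at r, giving the price path
# p = (c + (MR0 - c) * exp(r*t)) / (1 - 1/eps), which starts higher.
eps, mr0 = 2.0, 3.0               # assumed elasticity and initial marginal revenue
p_mon = (c + (mr0 - c) * np.exp(r * t)) / (1 - 1 / eps)

print("competitive price path, years 0/15/29:", p_comp[[0, 15, 29]].round(2))
print("monopoly price path,    years 0/15/29:", p_mon[[0, 15, 29]].round(2))
```

With these assumed numbers the monopoly path starts above the competitive one and rises more slowly, which is the qualitative pattern an optimal cartel pricing model exploits.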

306 citations


Journal ArticleDOI
TL;DR: This paper notes that survey studies have found certain unemployed workers to be more likely to migrate than employed workers, but because those studies do not use adequate controls for other characteristics that may affect migration, the direct contribution of unemployment to migration cannot be determined.
Abstract: Migration is a means of improving the allocation of human resources. People living in places where they are not fully employed or most highly valued are expected to move to destinations having brighter prospects, but both policymakers and migration analysts seem to have mixed opinions as to whether private market forces alone are sufficient to induce them to do so. On the one hand, there has been considerable discussion of policies (e.g., relocation assistance programs) designed to affect migration directly by enabling or inducing the unemployed to move to more promising labor markets, the implication being that the unemployed are not themselves sufficiently responsive to economic conditions. On the other hand, policies of investment in depressed areas are based largely on the premise that expanded local economic opportunities will reduce economically forced outmigration. Available evidence has provided little guidance to policymakers and others concerned with the influence of personal and area unemployment on outmigration. Several studies using survey data (e.g., Saben, 1964; Lansing and Mueller, 1967) have confirmed that certain unemployed workers are more likely to migrate than employed workers; but because these studies do not use adequate controls for other characteristics that may affect migration, the direct contribution of unemployment to migration cannot be determined. Studies using aggregate data to assess the relationship between unemployment and migration (e.g., Lowry, 1966) have obtained mixed results, but unemployment is often measured at the end of the migration period and hence may have been affected by the intervening migration. (excerpt)

305 citations


Journal ArticleDOI
TL;DR: In this paper, the complexity of the structure of strategic groups populating an industry exerts a significant influence on its performance, and the importance of these groups is investigated in a sample of producer-good industries.
Abstract: STATISTICAL analyses of the structureperformance relationship in manufacturing industries have invariably assumed that an industry's member firms differ only in their market shares. This paper demonstrates that this assumption is often incorrect, and that the complexity of the structure of strategic groups populating an industry exerts a significant influence on its performance. In the following sections we explain the sources and significance of strategic groups, derive hypotheses about their influence on an industry's profitability, and test these hypotheses on a sample of producer-good industries.

285 citations



Book ChapterDOI
TL;DR: This paper analyzed how the costs of providing waste removal service vary systematically with the identity of the collector, the degree of competition, and the size of the market served, and developed a theoretical model to analyze the interrelated impact of scale and market structure on cost.
Abstract: HOUSEHOLD refuse may be collected by public agencies or private firms operating in areas of spatial monopoly or not and subject to considerable or little government regulation. These characteristics define the form and scale of the market structure for refuse collection. This paper analyzes how the costs of providing waste removal service vary systematically with the identity of the collector, the degree of competition, and the size of the market served. Early studies of refuse collection either did not address the impact of market structure on costs or neglected to define clearly the service being provided. A major conclusion of these studies is that a higher level of service (i.e., more frequent collection, pickup location more distant from the curb) is more costly than a lower level of service (Hirsch, 1965; Clark et al., 1971; Partridge, 1974).1 More recent studies have shed some light on the interrelationships of scale, market structure and costs, but the evidence so far has been scattered. Young (1972) presented an argument for the existence of some scale economies and for the greater efficiency of the private than of the public collector of refuse, but no empirical tests of these hypotheses were offered. In the one preceding study that investigated the presence of scale economies in refuse collection, estimates did not hold the service level constant (McFarland et al., 1972). Although some scale economies were found, these results may be misleading if small markets tend to demand a higher (and thus more costly) level of service than do large markets. A recent study of a small group of cities in Connecticut (Kemper and Quigley, 1976) investigated the impact of market structure on costs, holding service level constant, but was unable, due to limitations of the data, to incorporate a consideration of scale into the analysis. 
Although private firms were found to collect refuse at lower costs than public agencies, the findings may be spurious if private firms tend to serve larger markets (perhaps encompassing several cities and towns) than do public agencies, or, alternatively, the findings may be an outgrowth of specific regional or local conditions. The remainder of this paper attempts to remedy some of the omissions of preceding studies. The next section defines three distinct market structures, each of which is frequently observed in the real world, and develops a theoretical model to analyze the interrelated impact of scale and market structure on cost. Following this, equations derived from the theoretical framework are estimated and the results are presented. Equations are estimated using a nation-wide data base holding scale and service level effects constant.2 The final section contains policy guidelines and conclusions arising directly from the empirical work.

246 citations


Journal ArticleDOI
TL;DR: In this paper, the authors empirically examine whether security prices and volume are causally related, providing evidence that price change per se and volume for individual securities are positively interrelated.
Abstract: THE purpose of this paper is to empirically examine whether security prices and volume are causally related. Existing models that attempt to analyze the interdependence of price and volume in speculative markets have been generally built on basic underlying demand and supply conditions. Crouch (1970), Clark (1973), Westerfield (1973), and others have postulated that the absolute value of price change is positively and linearly related to volume. The rationale is that given an initial equilibrium position and assuming a net increase (decrease) in demand for a security, the market will eventually clear that security at some price above (below) the equilibrium price. During the process of adjustment transactions are constantly occurring as a function of demand. Under these conditions one would expect a rise in the volume of transactions with either a rise or a fall in the price level. Copeland (1974) refined this theory by establishing the direction of the relationship between the absolute value of price changes and volume under two assumptions of information arrival. Using trading volume as a proxy for the rate of information arrival in the market, Copeland concludes that the relationship is positive and linear if a hypothesis of sequential information is valid. An inverse correlation would support the notion of simultaneous information arrival. An alternative approach is to view a degree of association between price change per se and volume. The research of Ying (1966), Hsu (1973), and Epps and Epps (1976) is in this category. Illustrative of this line of reasoning, Epps and Epps (E&E) developed a theory of financial markets based on a two-parameter portfolio model. Their rationale is that news reaching the market is generally accompanied by a change in the overall average of investors' expectations. 
And if investors are classified as either buyers or sellers (these groups are identified by their response to news entering the market) then the news reaching the market will result in changes in the average expectations of each of the two groups. Combining these two notions, the major premise of their model is that the extent of disagreement between these two groups of investors tends to increase with the absolute value of the overall average change in expectations. Building upon this assumption, E&E are able to imply that their theory (and empirical evidence) supports the notion of positive stochastic dependence between volume and security price change. In this paper additional empirical evidence is provided to support the thesis that price change per se and volume for individual securities are positively interrelated. The main methodological novelty is the use of a direct test of independence/causality developed by Haugh (1976), which is outlined in section II. In section III the results of the empirical tests are presented. The hypotheses are tested on stock data and warrant data. To date no empirical evidence has been published concerning the stochastic process generating warrant volume or the interrelationships between warrant price, warrant volume, stock price, and stock volume. Section IV contains a summary and conclusions.
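The Haugh-style independence test the paper's methodology builds on first whitens each series with a univariate AR fit and then checks whether the residual cross-correlations are jointly zero. A simplified sketch follows; the lag lengths, simulated data, and chi-square approximation are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def ar_residuals(x, p=2):
    """Whiten a series with an AR(p) fit by ordinary least squares."""
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def haugh_statistic(x, y, p=2, m=5):
    """n times the sum of squared residual cross-correlations at lags
    -m..m; approximately chi-square with 2m + 1 degrees of freedom
    when the two series are independent."""
    u, v = ar_residuals(x, p), ar_residuals(y, p)
    n = min(len(u), len(v))
    u = (u[:n] - u[:n].mean()) / u[:n].std()
    v = (v[:n] - v[:n].mean()) / v[:n].std()
    s = 0.0
    for k in range(-m, m + 1):
        if k >= 0:
            r = np.sum(u[k:] * v[:n - k]) / n
        else:
            r = np.sum(u[:n + k] * v[-k:]) / n
        s += r * r
    return n * s

rng = np.random.default_rng(2)
price_change = rng.normal(0, 1, 400)                     # stand-in for price changes
volume = 0.8 * price_change + rng.normal(0, 0.5, 400)    # dependent by construction

print("statistic, dependent pair:  ", round(haugh_statistic(price_change, volume), 1))
print("statistic, independent pair:", round(haugh_statistic(price_change, rng.normal(0, 1, 400)), 1))
```

A large statistic relative to the chi-square critical value rejects independence; the lag at which the cross-correlations are significant is what carries the causal-direction information.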

206 citations




Journal ArticleDOI
TL;DR: In this paper, the authors show that spline functions can be easily fitted by any standard package for ordinary least squares regression, applying the method to the relationship of interest rates to money supply and inflation, and a truncated variant to the relationship of fertility to per capita income.
Abstract: IT sometimes happens that when a new mathematical or statistical procedure is adopted from one discipline into another, it arrives complete with terminology, usage and (these days) computer software devised for specialized application to problems in the parent area. This is probably inevitable, but until the new method is more broadly perceived its application may fall considerably short of its potential in the adopting discipline. A recent example is the spline function. Briefly put, spline functions are a device for approximating the shape of a curvilinear stochastic function without the necessity of pre-specifying the mathematical form of the function. That is, it is unnecessary to restrict the estimate to a straight line, a polynomial of pre-specified degree, an exponential, or any other particular form. Brought over from engineering and the mathematics of interpolation, spline functions have appeared in several places in economic statistics in recent years. Application to economic problems has been made by Barth, Kraft and Kraft (1976), McGee and Carlton (1970) and Poirier (1973, 1976). Buse and Lim (1977) have shown spline functions to be a special case of restricted least squares. Yet because the idea is still wrapped in its original packaging, it is frequently overlooked when it might be a powerful adjunct to research. Moreover, even in some of the work where the spline function has been employed, it has not always been used to best advantage. For example, Barth and others in the article cited above, although admitting that it might improve their analysis to employ a multivariate spline function, were constrained by the fact that the software package at their disposal "unfortunately ... permits only bivariate specification." Yet, in fact, the procedure is readily adapted to bivariate or multivariate analysis. 
In this article we show that by use of appropriately defined composite variables, spline functions are easily fitted by any standard package for ordinary least squares regression. Some of the examples given below were fitted by the familiar SPSS package, others were fitted by members of an undergraduate class in econometrics at the University of Hawaii, using the TSP routine. In the presentation, piece-wise linear regression is employed as a general introduction to the procedure. This is followed by development of the bivariate and the multivariate spline functions. The procedures are then illustrated by their application to the relationship of interest rates to money supply and inflation. Once the spline function is understood as a least squares regression model, additional variations become possible. As an example we present a modified or "truncated" spline function and apply it to the relationship of fertility to per capita income. We conclude with a few general remarks on the limitations of the method.
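The composite-variable trick the article describes amounts to regressing on x together with max(x - k, 0) for each knot k; the coefficient on each composite variable is the change in slope at that knot. A minimal piecewise-linear example with invented data and a single knot:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 200))
# True relation: slope 1.5 below the knot at x = 4, slope -0.5 above,
# continuous at the knot, plus noise (all numbers invented).
y = np.where(x < 4, 2 + 1.5 * x, 8 - 0.5 * (x - 4)) + rng.normal(0, 0.3, 200)

# Composite variables max(x - k, 0), one per knot, fitted by plain OLS.
knots = [4.0]
X = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0) for k in knots])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[1] is the slope below the knot; beta[1] + beta[2] the slope above.
print("slope below knot:", round(beta[1], 2))
print("slope above knot:", round(beta[1] + beta[2], 2))
```

Adding more knots, or composite variables in several regressors, extends the same regression to the bivariate and multivariate cases the article develops, which is why no special-purpose software is needed.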


Journal ArticleDOI
TL;DR: In this paper, the authors use a multiple regression model to analyze the relationship between market rivalry and intermarket contacts of dominant firms in a sample of 187 major banking markets, focusing on the commercial banking industry because it is characterized by firms with relatively homogeneous product mixes that operate in a variety of relatively well defined geographic markets.
Abstract: CLEARLY, one of the most significant institutional developments affecting the organization of American industry in recent years has been the trend toward diversification. Many important industries have been restructured as single product firms have been replaced (often by acquisition) by large conglomerates producing scores of diverse products. The rapid emergence of the conglomerate form of business organization has raised fundamental questions regarding the implications of this trend for the market system, a system in which interfirm competition is the basic regulating device.1 There has been a great deal of controversy within the economics and legal professions about the long-run implications of conglomerate firms for economic performance.2 This is due, in large part, to the fact that there is no theoretical framework and no general empirical evidence that is relevant to the intermarket relationships of multi-product firms. The shortcomings of theory arise from the fact that traditional microeconomic theory focuses only on the interrelationships of firms operating in the same market, while the lack of empirical evidence stems from the fact that appropriate micro level data for testing generally are not available. Although the lack of theoretical framework and data generally has precluded systematic analysis of the competitive effects of diversification,3 a number of intuitively appealing and workable hypotheses have been developed in connection with the conglomerate form of business organization. This study tests one of the major hypothesized consequences of conglomerate dominance: the development of mutual forbearance. 
The hypothesis holds that conglomerate firms that meet in many markets will develop a "live and let live" philosophy since action initiated in any market may induce retaliation in other markets where they are more vulnerable.4 As a consequence, the prevalence of conglomerate firms will mean a reduction in rivalry even in markets with a relatively competitive structure based on traditional measures of market structure. This study uses a multiple regression model to analyze the relationship between market rivalry and intermarket contacts of dominant firms. The study develops a simple model that illustrates the implications of the mutual forbearance hypothesis and discusses the hypothesis in the context of commercial banking. It then sets out the estimating equation and develops a variable that is designed to capture the degree of intermarket contact among dominant firms. Additional variables are developed and used along with the intermarket contact variable in a regression analysis that covers a sample of 187 major banking markets. The study focuses upon the commercial banking industry because it is characterized by firms with relatively homogeneous product mixes that operate in a variety of relatively well defined geographic markets. Furthermore, and of particular importance, the necessary micro level data are available. Finally, the issue is highly relevant in banking today.5 However, by focusing upon the banking industry the results may be subject to question in two respects. First, it may be argued that the diversification subject to investigation in this study is

Received for publication December 22, 1976. Revision accepted for publication June 7, 1977. * University of Florida and Board of Governors, Federal Reserve System, respectively. Support from the Center for Public Policy Research at the University of Florida and from the Board of Governors of the Federal Reserve System is gratefully acknowledged. The opinions are those of the authors and do not necessarily reflect the views of their respective institutions.

1 See Grabowski and Mueller (1970) and Grether (1970).
2 See, for example, Edwards (1955), Stocking (1955), Edwards (1964), Turner (1965), Federal Trade Commission (1969), St. John's Law Review (1970), Steiner (1975).
3 For a test of one consequence of conglomerate firms, see Rhoades (1973) and Rhoades (1974).
4 This hypothesis was first stated by Edwards (1955). In another context, Solomon (1970) suggested it may be important in the banking industry.
5 See, U.S. v. Marine Bancorporation, Inc., et al. (1974) and U.S. v. Connecticut National Bank et al. (1974).

Journal ArticleDOI
TL;DR: In this article, the authors carried out an econometric test for which view of the labor market is more appropriate and concluded that the hypothesis of a labor market in continuous equilibrium must be rejected.
Abstract: AN important question in contemporary economics is whether or not the real wage clears the labor market. Its answer has bearing on issues as diverse as the nature of unemployment, the efficacy of fiscal and monetary policies, and the incidence of income taxes. Unfortunately, consensus as to the correct answer seems to be lacking. While much of modern macroeconomic theory allows for the possibility that the real wage fails to equate the supply and demand of labor (Barro and Grossman, 1971; Korliras, 1975), much analysis is based on the assumption of equilibrium in the labor market (Patinkin, 1965). The purpose of the present paper is to carry out an econometric test for which view of the labor market is more appropriate. Although the model we build is very aggregative and much too crude to be used as a basis for policy, we believe that it provides a first step in making operational the theoretical literature on disequilibrium macro models. Our tentative conclusion is that the hypothesis of a labor market in continuous equilibrium must be rejected. In section II we describe briefly some earlier work on modelling the aggregate supply and demand for labor. It is shown that prior studies either assume equilibrium in the labor market, or deal with disequilibrium inadequately. In section III we specify the disequilibrium model. Section IV contains a discussion of estimation problems, an interpretation of the results, and a comparison with an equilibrium version of the model. A concluding section has a summary and an agenda for future research.

II. Antecedents
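The disequilibrium view can be captured by the short-side rule common in this literature: when the real wage does not clear the market, observed employment is the minimum of labor demand and labor supply at the going wage. A toy illustration with invented linear schedules (not the paper's aggregative model):

```python
import numpy as np

def labor_demand(w):
    """Demand falls with the real wage (hypothetical linear schedule)."""
    return 100 - 8 * w

def labor_supply(w):
    """Supply rises with the real wage (hypothetical linear schedule)."""
    return 20 + 12 * w

wages = np.linspace(1, 7, 13)
# Short-side rule: trade takes place at the minimum of demand and supply.
employment = np.minimum(labor_demand(wages), labor_supply(wages))

w_star = 80 / 20  # market-clearing wage: 100 - 8w = 20 + 12w gives w = 4
print("clearing wage:", w_star)
print("maximum observed employment:", employment.max())
```

Observed employment peaks exactly at the clearing wage and falls on either side, which is why an equilibrium model that forces demand to equal supply at every observed wage is a testable restriction rather than a harmless assumption.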

Journal ArticleDOI
TL;DR: An examination of two types of simple forecasting models using preliminary data compares the merits of optimal versus traditional predictors and indicates the relationship of delay in the availability of data revisions to forecasting accuracy.
Abstract: An examination of two types of simple forecasting models using preliminary data compares the merits of optimal versus traditional predictors and indicates the relationship of delay in the availability of data revisions to forecasting accuracy. The Kalman filter approach is used in the first model, based on the optimal use of data containing errors in forecasting. Several suboptimal predictors, which ignore preliminary data, treat it as error-free, or adjust for bias and serial correlation, are then compared. A significant improvement in accuracy is demonstrated with the optimal use of forecasting models. Whether accuracy will improve with more complex models is not yet known. 10 references.
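The Kalman filter idea, treating each preliminary release as a noisy measurement of the true (later revised) series and filtering it optimally, can be sketched with a scalar local-level filter. All variances and series below are invented for illustration, not the paper's specification:

```python
import numpy as np

def kalman_filter(obs, q=0.5, r=1.0):
    """Local-level model: x_t = x_{t-1} + w_t, y_t = x_t + v_t, with
    Var(w) = q (true-series innovation) and Var(v) = r (measurement
    error in the preliminary release)."""
    x, p = obs[0], 1.0          # initial state estimate and variance
    est = []
    for y in obs:
        p = p + q               # predict: state variance grows by q
        k = p / (p + r)         # Kalman gain
        x = x + k * (y - x)     # update with the preliminary observation
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(4)
truth = np.cumsum(rng.normal(0, 0.7, 100))    # "final" revised series (simulated)
prelim = truth + rng.normal(0, 1.0, 100)      # preliminary releases with error
filtered = kalman_filter(prelim)

# The filtered estimates should track the revised series more closely
# than the raw preliminary figures do.
print("RMSE, raw preliminary:", round(np.sqrt(np.mean((prelim - truth) ** 2)), 2))
print("RMSE, filtered:       ", round(np.sqrt(np.mean((filtered - truth) ** 2)), 2))
```

Treating the release as error-free corresponds to a gain of one; ignoring it, to a gain of zero. The filter's gain sits in between, weighted by the assumed variances, which is the sense in which the predictor is optimal.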

Journal ArticleDOI
TL;DR: In this paper, substitution effects in collective consumption in the public sector are modeled as analogous to consumer choices in the private sector, i.e., as if generated by utility maximization subject to a budget constraint.
Abstract: IN recent years there have been notable advances in the methodology used in empirical studies of state and local public spending. The rather ad hoc econometric studies of the mid-1960s are being replaced by more carefully specified models, e.g., Barr and Davis (1966), Ohls and Wales (1972), Borcherding and Deacon (1972), and Bergstrom and Goodman (1973). Most of these efforts share the common feature that expenditures are viewed as responses to collectively exercised demands. While these studies have yielded insights, all have been partial equilibrium in nature and none has incorporated the possibility of substitution among public services in response to changes in relative costs. A goal of the present paper is to fill this gap by directly modeling and estimating such substitution effects in collective consumption. To accomplish this, it is convenient to view expenditure decisions in the public sector as analogous to consumer choices in the private sector, i.e., as if generated by utility maximization subject to a budget constraint. Quite aside from any advantages this approach holds for empirical analysis, this view of the public decision-making process has been highly attractive to theoretical researchers. Although the utility maximization paradigm has never been subjected to a direct empirical test, it has been employed to predict the effects of intergovernmental grants and spillovers across jurisdictions, and to examine other topics. A second aim of this analysis, therefore, is to provide empirical evidence on the tenability of this view of the local public sector.
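The "community as utility-maximizing consumer" view can be made concrete with a toy example. Under Cobb-Douglas preferences (an assumption made only for this sketch, not the paper's specification), budget shares are constant, so the quantity of each public service falls as its unit cost rises, which is the kind of substitution effect the paper sets out to estimate.

```python
# Toy collective-choice demand: a community with utility
# U = A**alpha * B**(1 - alpha) over two public services, maximized subject
# to the budget constraint cost_a*A + cost_b*B = budget. Cobb-Douglas
# preferences give constant expenditure shares alpha and (1 - alpha).

def public_demand(budget, alpha, cost_a, cost_b):
    """Quantities of services A and B chosen by the community."""
    qty_a = alpha * budget / cost_a
    qty_b = (1 - alpha) * budget / cost_b
    return qty_a, qty_b

# Doubling the unit cost of service A halves its quantity, while spending
# on service B is unaffected:
print(public_demand(100.0, 0.4, 1.0, 1.0))
print(public_demand(100.0, 0.4, 2.0, 1.0))
```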

Journal ArticleDOI
TL;DR: In this article, the authors compare the relationship between capital costs and risk in the Capital Asset Pricing Model (CAPM) with the industrial organization literature and conclude that risk and capital costs of powerful firms are lower than for other firms.
Abstract: RECENTLY, significant research has been conducted on the functioning of the capital market. One thrust of this research (e.g., Fama, 1970) has demonstrated that the capital market is highly efficient, i.e., the prices of securities at a point in time seem to reflect available information, and security prices seem to adjust quickly over time to new information. Another thrust (e.g., Sharpe, 1964; Lintner, 1965; Mossin, 1966; Jensen, 1972) has been the theoretical development and empirical testing of a specific model, the Capital Asset Pricing Model (CAPM), which precisely defines risk and return, gives an economic justification for diversification, and under specific assumptions draws the equilibrium conditions between risk and return in the capital market. Notwithstanding serious econometric difficulties of estimation, these studies picture a highly efficient capital market in which capital funds are allocated based only upon risk and return considerations as determined by a rigorous evaluation of relevant information. The capital market depicted in this recent capital market literature would seem to differ from the capital market depicted in the literature of industrial organization economics. For example, Baumol (1967) and Hall and Weiss (1967) have argued that the major barriers to entry are not in the structure of output markets, but in the capital market. The argument is that to enter and compete effectively in many basic industries, such as automobiles, chemicals, etc., a large sum of capital is necessary, and the capital market will not allocate a large sum to a new entrant. Basically the capital market fails in its allocative function because investment opportunities, albeit opportunities with high profit potential, are "lumpy."
That is, investment opportunities cannot be financed in small discrete amounts by new entrants, but can only be financed in large amounts by existing firms, ensuring basic industries marked by large firms with high market shares. Both the capital market literature and the industrial organization literature are valuable reference points for those who would hope to understand the allocation of capital in the economy and the effects it has upon the condition of entry, level of price and level of output found in industrial markets. The general purpose of this paper is to begin a reconciliation of these literatures with respect to their disparate views of the capital market. Since capital market theory relates capital costs to risk, our specific purpose is to determine if the market power of firms, as measured by size and seller concentration, seems to reduce the riskiness of firms and therefore their capital costs. In section II the difference between book profits and capital costs is stated. In section III, with the aid of the Capital Asset Pricing Model, the relationship between capital costs and risk is presented. In section IV the data and sample of firms are described, and in section V the data analysis is presented, which does in fact suggest that the risk and capital costs of powerful firms are lower than for other firms. Conclusions are presented in section VI.
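The CAPM relation the paper relies on, E[R_i] = r_f + beta_i (E[R_m] - r_f), can be illustrated with a small sketch. The returns and parameters below are invented for illustration; the paper's own estimation of risk and capital costs is more involved.

```python
# Sketch of the CAPM logic: estimate a firm's beta from its returns and the
# market's, then infer the firm's cost of equity capital. A firm with lower
# beta (less systematic risk) has a lower capital cost. Returns are made up.

def beta(firm_returns, market_returns):
    """Beta = cov(firm, market) / var(market)."""
    n = len(firm_returns)
    mf = sum(firm_returns) / n
    mm = sum(market_returns) / n
    cov = sum((f - mf) * (m - mm)
              for f, m in zip(firm_returns, market_returns)) / n
    var = sum((m - mm) ** 2 for m in market_returns) / n
    return cov / var

def capm_cost_of_equity(rf, market_premium, b):
    # E[R_i] = r_f + beta_i * (E[R_m] - r_f)
    return rf + b * market_premium

market = [0.04, -0.02, 0.03, 0.01, -0.01]
low_risk_firm = [0.02, -0.01, 0.015, 0.005, -0.005]  # moves half as much
b = beta(low_risk_firm, market)
print(round(b, 2), capm_cost_of_equity(0.05, 0.06, b))  # beta ~ 0.5
```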

Journal ArticleDOI
TL;DR: In this article, the authors discuss the possible choice between inflation and jobs and the choice between taxes on money balances and taxes of other kinds, and conclude that variability of inflation imposes costs on the economy that we should consider when we choose a macroeconomic policy.
Abstract: TWO separate policy issues have led to discussion of a preferred rate of inflation: the possible choice between inflation and jobs, and the choice between taxes on money balances and taxes of other kinds. Okun (1971) for the first of these issues and Logue and Willett (1976) for the second, among others, have remarked that the discussion is typically conducted as though a more inflationary policy means a rise from one steady rate to a higher steady rate. The world might not be like that; a rise in the average rate of inflation might mean, inevitably, a rise in its variability. Okun presented some evidence to suggest that countries with higher rates of inflation do experience more variable rates and suggested why it might be so; Gordon (1971) called the evidence into question. Logue and Willett presented further results that supported Okun's position; but their results did not support Okun in the case of highly industrialized countries, the ones that concerned him. Below I summarize these findings and report additional evidence that supports Okun for highly industrialized as well as other countries. The reason for Okun's concern is that variability of inflation imposes costs on the economy that we should consider when we choose a macroeconomic policy. Section II supports the claim that there is an empirical relation across countries between the average rate of inflation and the variability of that rate; but it says nothing about the slope, or even the existence, of a functional relation between them that represents an opportunity locus for a single country. In section III, I conclude with a brief discussion of this issue.
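The cross-country exercise amounts to asking whether the mean of a country's inflation series moves with that series' variability. A toy version of the computation, with fabricated country series rather than the paper's data:

```python
# For each country, compute mean inflation and its standard deviation, then
# correlate the two across countries. Under Okun's hypothesis the
# correlation is positive. Country series are fabricated examples.

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def corr(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sum((x - mx) ** 2 for x in xs) ** 0.5 *
                  sum((y - my) ** 2 for y in ys) ** 0.5)

countries = {
    "A": [2, 3, 2, 3, 2],      # low, stable inflation
    "B": [5, 7, 4, 8, 6],      # moderate, more variable
    "C": [10, 18, 6, 20, 12],  # high, highly variable
}
means = [mean(v) for v in countries.values()]
stds = [std(v) for v in countries.values()]
print(round(corr(means, stds), 2))  # positive under Okun's hypothesis
```

As the abstract notes, such a cross-country correlation establishes an empirical association, not an opportunity locus for any single country.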

Journal ArticleDOI
TL;DR: In this paper, the authors give estimates of household demand for money and savings functions for four centrally planned economies (CPEs) and carry out disequilibrium estimation in a model using the demand and supply function specifications developed in these papers.
Abstract: THIS article gives estimates of household demand for money and savings functions for four centrally planned economies (CPEs). The data are post-war annual time series for Czechoslovakia, the GDR, Hungary, and Poland. The estimation of these functions characterizing the demand side of the consumption goods market is a part of the empirical component of our research into macroeconomic equilibrium in CPEs. The supply side of the consumption goods market is analysed in a separate paper (Portes and Winter, 1977), and we are currently carrying out disequilibrium estimation (Goldfeld and Quandt, 1975) in a model using the demand and supply function specifications developed in these papers. It is conventional wisdom that since the early 1950s, the CPEs have suffered chronically from some significant degree of excess demand (repressed inflation), i.e., that buyers have faced quantity constraints (informal or formal rationing) on the markets for goods and labour (e.g., Bush, 1973; Garvy, 1975; Schroeder, 1975). When authors think it necessary to give empirical justification, this is in the form of reference to queues, shortages, "hidden" price increases, quality deterioration, excess liquidity and forced saving, and some data on prices in the small free market sectors of these


Journal ArticleDOI
TL;DR: A number of arguments have been used by social scientists to explain criminal behavior as discussed by the authors, e.g., the "sickness hypothesis" (Horton and Leslie, 1970) while some psychologists look for subconscious roots of human motivation toward crimes (Halleck, 1967).
Abstract: A number of arguments have been advanced by social scientists to explain criminal behavior. For example, some sociologists believe in a "sickness hypothesis" (Horton and Leslie, 1970) while some psychologists look for subconscious roots of human motivation toward crimes (Halleck, 1967). An alternative approach pursued by most economists and subject to objective empirical investigation is to rely on the familiar economic maxim that people are rational and in general do respond to incentives whether they pursue legitimate or illegitimate activities. One of the earliest discussions of such an approach to criminal behavior is found in Bentham's Principles of Penal Law (1962) where he lays down the hypothesis that certainty and severity of punishment deter criminal behavior. Therefore, a reasonable general hypothesis that needs to be tested is that illegitimate behavior can be explained by opportunities as measured by potential gains from legitimate and illegitimate activities. In recent years a number of studies have attempted to investigate the relationship between crime and certain quantifiable opportunities, for example, Fleisher (1966), Ehrlich (1973), Sjoquist (1973), and Swimmer (1974). The works of Becker (1968) and Ehrlich establish the basic theoretical foundation for the empirical work pursued in this paper. We test the opportunity hypothesis and the notion that offenders do respond to incentives whether provided by the market conditions or by the legal system. In particular, we are concerned primarily with the effect of various deterrent measures on criminal activity. An additional issue that is closely linked to the former hypothesis and needs closer empirical scrutiny is whether or not the various deterrent measures like certainty and severity of punishment are complementary. In other words, do certainty and severity go together or are they inversely related to each other for various crimes?
This empirical work has certain unique features. It provides the most comprehensive investigation of the deterrent hypothesis and the certainty-severity trade-off, if any, for the cities of the United States. It covers two time periods, 1960 and 1970, which would help to evaluate the consistency of the results over time.

Journal ArticleDOI
TL;DR: Stromsdorfer et al. as discussed by the authors presented estimates of minimum wage effects on employment of teenagers 14-15, 16-17 and 18-19 years, and decomposes these estimates into scale and substitution components for calculating effects of differential minima.
Abstract: This paper (1) presents estimates of minimum wage effects on employment of teenagers 14-15, 16-17 and 18-19 years, and (2) decomposes these estimates into scale and substitution components for calculating effects of differential minima. In 1972 the House of Representatives approved an amendment to the Fair Labor Standards Act calling for a youth differential, but one was not included in the Senate version. These bills died when the House refused to submit to conference. In 1973 the Administration proposed another amendment containing a youth differential, but the amendment passed by Congress later that year eliminated it. Although attempts to enact a youth differential have failed, it is clear that support for these measures comes from a consensus concerning the relatively adverse effects of existing minima on teenage employment. It is also likely that the question of differential minima is not dead, and it would be nice to have estimates of what the effects might be if a differential were enacted. The point of departure for all minimum wage studies has been that those most adversely affected are those who in the absence of the minimum would have earned the lowest wage. In comparing teenagers to adults, there is fairly consistent evidence that minimum wages reduce teenage-adult employment ratios.1 But there is less evidence among groups of teenagers themselves. The Labor Department Survey (1970) contrasted males and females, white and non-white, for employment of those 16-17 and for those 18-19. Results are mixed. For white males, estimates conform to expectations of more adverse employment effects for younger workers, but results for white females are inconclusive and there is no relationship for non-white teenagers. Jacob Mincer (1976) and Masanori Hashimoto and Mincer (1970) combine all ages 16-19 into one group but distinguish whites from non-whites and find more adverse effects for non-whites.
Similarly, Marvin Kosters and Finis Welch (1972) treat ages 16-19 as a single class and distinguish employment by sex and race. More adverse effects are reported for non-whites than for whites and for females than for males. Each of these studies uses time series estimates of teenage employment from Current Population Surveys. These data contain large sampling errors, especially for age, sex, and race partitions.2 Students and part-time workers are not distinguished, wages are not available, and because the data are for national aggregates, state minimum wage laws are ignored. Here, the data are from the 1 in 100 Public Use Sample of the 1970 Census and refer to teenage employment in the week before the Census was taken. Individual observations are aggregated to state totals. Nationally, students account for half of total teen employment but only one-third of hours worked, and females work 95% as many hours as males. Aggregation weights reflect these differences in hours worked. The Census does not contain reliable wage information, and as is true of all previous studies, there is a problem in estimating legislative effects on costs of teenage employment. Cross-state observations enable us to include state wage laws in our estimates of these costs. Section II describes our procedure for inferring legislative effects on teenage employment costs and provides estimates of associated changes in employment. Section III provides empirical results for a decomposition of estimated effects into scale and substitution components and gives estimated effects of a 20% youth differential extended first to those 14-15 and then to those 14-17 years old. A summary follows. Received for publication July 2, 1976. Revision accepted for publication March 4, 1977. * University of California, Los Angeles, and The Rand Corporation. We are grateful to Ernst Stromsdorfer for suggesting this topic and Dennis DeTray and James P. Smith for their helpful comments.
Support for this project was provided by a contract from ASPER/USDOL. 1 See, for example, the paper by Mincer (1976) and the Labor Department Survey (1970). 2 See Welch (1974), the comment by Siskind (1977) and reply by Welch (1977) for discussions of some of the peculiarities of these data.

Journal ArticleDOI
TL;DR: In this paper, a model is developed to analyze efficiency changes caused by asymmetrical inputs and examine the economic implications of broadening automatic fuel adjustment mechanisms to include the cost of labor, supplies, and purchased power.
Abstract: Automatic fuel adjustment mechanisms (FAM), which allow utilities to charge higher rates as fuel costs increase, are shown to disrupt the balance of economic efficiency provided for by regulatory lag. A model is developed to analyze efficiency changes caused by asymmetrical inputs and to examine the economic implications of broadening FAM to include the cost of labor, supplies, and purchased power. The conclusions are reached that efficiency is promoted by regulatory lag and formal hearings and that policies that circumvent these procedures reward inefficient behavior in terms of utility investment decisions.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the production activity of cities in identifying the supply of labor and use a rich set of environmental attributes to develop new estimates of the hedonic prices for urban amenities.
Abstract: AS consumers choose among cities, they may trade off higher earnings against differences in the consumption of environmental goods. Commuter travel time, crime and air quality, the quality of educational and health facilities, each may involve unpurchased environmental goods. Decisions about the consumption of such goods are made simultaneously with the choice of city of residence. By examining the compensating earnings differentials relative to differences in environmental goods across cities, one can estimate hedonic prices for each environmental attribute. The hedonic prices may be useful in valuing the benefit of environmental improvement, and as weights in constructing an index of the quality of life, as Tobin and Nordhaus (1972) propose. By relating the index of the quality of life to city size, something may be learned about the effect of urban growth on the quality of life. This method is a substantial improvement over the index of Liu (1975). Previous efforts to estimate these hedonic prices by Izraeli (1974) and by Hoch and Drake (1974) have been unsatisfying for several reasons. First, they model only the consumer side, while ignoring the possibility that differences in productivity may influence wage determination. Thus, they do not indicate the conditions necessary for their equations to be identified. Second, they do not include variables representative of a wide range of environmental attributes. Exclusion of important categories of environmental attributes may unduly bias the estimates. Hoch and Drake (1974), for example, focus on climate while ignoring many other environmental attributes. Third, if one includes a wide array of environmental attributes, one is confronted with a serious multicollinearity problem. Kelley (1977) takes explicit account of the demand side of the labor market in estimating hedonic prices for amenities.
His analysis, however, does not account for differences in the cost of living in different cities, apparently assuming that all goods are traded in national markets. In addition, Kelley does not account for differences in labor force quality in different cities. This essay considers the production activity of cities in identifying the supply of labor. A rich set of environmental attributes is then used to develop new estimates of the hedonic prices for urban amenities.
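The basic hedonic regression can be sketched as follows. The cities, wages, and disamenity index below are invented for illustration, and the sketch deliberately ignores the identification, labor-quality, and cost-of-living issues the paper addresses.

```python
# Sketch of the hedonic-price idea: across cities, regress the wage on a
# disamenity measure; the slope is the compensating differential, i.e., the
# implicit (hedonic) price workers charge for enduring the disamenity.

def hedonic_price(wages, disamenity):
    """OLS slope of wage on a disamenity index across cities."""
    n = len(wages)
    mw = sum(wages) / n
    md = sum(disamenity) / n
    cov = sum((w - mw) * (d - md) for w, d in zip(wages, disamenity))
    var = sum((d - md) ** 2 for d in disamenity)
    return cov / var

wage = [5.0, 5.5, 6.0, 6.5, 7.0]            # hourly wage by city
pollution = [20.0, 30.0, 40.0, 50.0, 60.0]  # pollution index by city
print(hedonic_price(wage, pollution))       # wage premium per index point
```

With many amenity variables entered jointly, this becomes a multiple regression, which is where the multicollinearity problem mentioned above arises.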

Journal ArticleDOI
TL;DR: This paper examined the relationship between wage and unemployment rates in twelve cities and found a positive relationship between the two: high hourly wages were paid in high unemployment rate cities, low hourly wages are paid in low unemployment rate regions, and this relationship was a characteristic of equilibrium of the aggregate economy.
Abstract: IN a recent article Hall (1972) examined the relationship between wage and unemployment rates in twelve cities. He observed a positive relationship between the two: high hourly wages were paid in high unemployment rate cities, low hourly wages were paid in low unemployment rate cities. Moreover, this relationship, he argued, was a characteristic of equilibrium of the aggregate economy. The empirical results obtained by Hall were based on twelve observations for the year 1966. Since his model was designed to study the characteristics of equilibrium, the question may justifiably be raised as to whether the year 1966 represented an equilibrium state of the economy; might it not be possible that what Hall observed was a characteristic of disequilibrium instead of equilibrium? This is a particularly important issue, for Tobin (1972, p. 10) has tended to regard the aggregate economy to be in a state of perpetual disequilibrium. Furthermore, as Robert A. Gordon has pointed out, since Hall's results were obtained on the basis of only twelve observations, could not his results be reversed if some of the cities were excluded from his study? Hall's conclusions were based on the results of the least squares regression of the wage rate on the unemployment rate, an appropriate procedure if the direction of causality runs from the latter to the former. Yet his theoretical analysis (correctly) implied that the two variables were jointly determined, an issue to which we will return in section II. But if the unemployment and wage rates are jointly determined then orthogonal regression should be used in order to determine the quantitative relationship between them. The aim of this work is to re-examine and extend Hall's empirical results so as to determine the extent to which they are affected by his estimation technique, his selection of the cities and the year 1966.
Although the necessary data for a direct, straightforward extension of Hall's work are not available, data do exist that permit us to look into these issues.
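The estimation point can be illustrated in a few lines: OLS of the wage on the unemployment rate treats the latter as error-free, while orthogonal regression, appropriate when the two variables are jointly determined, minimizes perpendicular distances (assuming equal error variances). The city observations below are fabricated for the sketch, not Hall's data.

```python
# Compare the OLS slope with the orthogonal-regression slope on the same
# (invented) cross-city data.

def ols_slope(x, y):
    """Ordinary least squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def orthogonal_slope(x, y):
    """Orthogonal regression slope (equal error variances assumed):
    b = (Syy - Sxx + sqrt((Syy - Sxx)**2 + 4*Sxy**2)) / (2*Sxy)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return (syy - sxx + ((syy - sxx) ** 2 + 4 * sxy ** 2) ** 0.5) / (2 * sxy)

unemployment = [3.1, 4.0, 4.8, 5.5, 6.2]  # city unemployment rates
wage = [2.9, 3.2, 3.6, 3.7, 4.1]          # city hourly wages
print(ols_slope(unemployment, wage), orthogonal_slope(unemployment, wage))
```

The orthogonal slope lies between the slope of the wage-on-unemployment regression and the inverse of the unemployment-on-wage regression, so the choice of technique can matter for the estimated relationship.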

Journal ArticleDOI
TL;DR: In this paper, a general model of aggregate household saving behavior is formulated and data from Canada, Germany, Japan, the United Kingdom, and the United States were used to estimate the personal saving function in each of the countries and the results are used to test various hypotheses about personal saving behavior.
Abstract: PERSONAL saving rates, i.e., the ratios of personal saving to personal disposable income, in many industrialized countries have risen dramatically in recent years. A number of attempts to explain the phenomenon of rising saving rates coinciding with price inflation have drawn upon the work of George Katona (1975), who has stressed the feeling of uncertainty and pessimism about the future caused by inflation that, in turn, encourages saving. In this paper a general model of aggregate household saving behavior is formulated. Data on Canada, Germany, Japan, the United Kingdom, and the United States are used to estimate the personal saving function in each of the countries and the results are used to test various hypotheses about personal saving behavior. This paper has two major objectives: to test for a direct influence of inflation on personal saving after taking into account the influence of other relevant factors, including any indirect channels by which inflation may exert an influence (e.g., the level of real liquid assets); and to determine what factors in each country are important for explaining saving behavior.

Journal ArticleDOI
TL;DR: In this article, a cross sectional test of a model of inter-industry wage determinants in manufacturing is presented for the years 1958 and 1967, showing that six independent variables can explain over 72% of the variation in wages in each of the two test years.
Abstract: THE hypothesis that firms with market power pay higher wages than competitive industries has implications that are important to many areas of policy. If firms with market power do pay higher wages, the excessive portion of those wages would be a rather large addition to the social costs of monopoly. In addition, wages that are inconsistent with labor market characteristics and not uniformly sensitive to business cycles will hamper the implementation of macroeconomic stabilization programs. Previous studies in this area have usually tested the market power hypothesis by determining the relationship between industry concentration and industry wages. In addition, studies focusing upon other aspects of interindustry wages, such as the effect of unions and plant size, have included concentration as an explanatory variable in their models. Results of these studies have differed with respect to their findings on the importance of concentration. Some studies found concentration to be important in the wage determination process while others did not. The research reported in this article indicates that the concentration-wage relationship changes significantly over the business cycle, making cross sectional studies sensitive to the year used for the analysis. This finding is consistent with the results of models developed to describe the wage determination process over time. The method of analysis used here focuses upon the concentration-wage relationship at two points in the business cycle. A cross sectional test of a model of interindustry wage determinants in manufacturing is presented for the years 1958 and 1967. The results of these tests indicate that six independent variables can explain over 72% of the variation in wages found in each of the two test years, with each of the six coefficients statistically significant in both years. However, the impact of the independent variables changes considerably over the business cycle.
Of particular interest are findings that support the hypotheses (1) that concentration's effect upon wages appears to change over the business cycle, thus providing support for the "spillover" hypothesis and (2) that the wage-concentration relationship is not linear.

Journal ArticleDOI
TL;DR: In this article, the effects of competition on price during the 1964-1971 period, when gasoline supplies were relatively normal and price wars were common, were surveyed to identify the effect of competition.
Abstract: Retail gasoline prices in 22 cities were surveyed to identify the effects of competition on price during the 1964-1971 period, when gasoline supplies were relatively normal and price wars were common. The informational theory of oligopoly is used to derive regression results and determine their market relevance. The 1965 price restoration move led by Texaco effectively eliminated small marketers selling at supra-competitive levels. The findings support the conclusion that collusive pricing was practiced to some extent during the period until competitive pricing returned in 1970.