
Showing papers in "The Review of Economics and Statistics in 1974"


Journal Article•DOI•
TL;DR: In this paper, the authors focus on the allocation of government expenditures among the states and argue that interstate inequalities in per capita federal spending can be explained in large part as the resultant of a process of maximizing expected electoral votes.
Abstract: THE New Deal years offer a laboratory for testing the hypothesis that political behavior in a democracy can be understood as a rational effort to maximize the prospects of electoral success. This hypothesis is central to the "economic" theories of politics developed and elaborated since the publication of Downs' An Economic Theory of Democracy in 1957, but systematic empirical verification has been meager.1 One of the reasons for this paucity is that in the United States political parties are rarely "in power" unambiguously, and actual policies result from the interaction of many competing objectives. But in the 1930's the Democratic party had control of both houses of Congress, and during much of the period Congress was willing to follow Presidential lead on economic policy. At the same time federal spending rose to unprecedented levels, and considerable discretionary allocative authority was concentrated in the executive branch. Most of the spending was carried out by new agencies under new programs which were clearly identified with the New Deal administration. At a time of grave economic distress, this Presidentially-dominated environment provided a stark simplification of the interaction between political and economic forces. This article focuses on the allocation of government expenditures among the states and argues that interstate inequalities in per capita federal spending can be explained in large part as the resultant of a process of maximizing expected electoral votes. Two recent articles (1969, 1970) by Leonard J. Arrington have raised this issue. Upon examination of a newly discovered set of figures for the years 1933-1939, Arrington was struck by the fact that the per capita distribution of loans and expenditures was not at all equal across the country, and furthermore that these inequalities seem perverse in that they favor states with high income. In particular, the West seems to have received far more than its per capita share of benefits, while the South, far behind in income, received little.

414 citations


Journal Article•DOI•
TL;DR: In this paper, the authors construct and estimate a model which assumes entry is a function of the incentives to enter relative to the level of entry barriers, and introduce those variables considered to be entry barriers directly as determinants of entry rather than the profit rate.
Abstract: ENTRY plays a crucial role in microeconomic models of market structure and performance. However, there has been very little direct empirical investigation of entry and its determinants over a broad cross section of industries. In this paper we construct and estimate a model which assumes entry is a function of the incentives to enter relative to the level of entry barriers. The subject of the analysis is the cross-section differences in entry between the three-digit industries of the Canadian manufacturing sector. Previous studies of entry have either concentrated on only several industries or have attempted to make conclusions about entry conditions by regressing profit rates on variables representing entry barriers. Bain (1956) examined 20 (mostly four-digit) United States manufacturing industries and concluded that the most significant barriers to entry were product differentiation, economies of scale in plant or firm, and control of patents or scarce resources, respectively. Mann's study (1966), which was limited to 30 of the United States manufacturing industries, did not examine the relative importance of the various barriers to entry. Mansfield's (1962) sample was limited to four industries; capital requirements was the only barrier he considered. Most econometric investigations of entry barriers have been indirect tests: they have regressed the profit rate, rather than entry, on those structural characteristics considered to be barriers to entry (Comanor and Wilson 1967, Miller 1969). Unfortunately this specification does not permit reliable conclusions regarding the effectiveness of these variables in deterring entry. There are theoretical reasons for questioning the often assumed strong positive relationship between entry barriers and the true profit rate. Additionally, of course, there is the infamous gap between true and measured profits. This paper's treatment of entry barriers has several important advantages over previous work. Those variables considered to be entry barriers are introduced directly as determinants of entry rather than of the profit rate. This is a direct rather than indirect test of the propensity of these factors to deter entrants. A most important result is that our conclusions are less sensitive to the unavoidable measurement errors in the profit rate. Our estimating equations consider a more extensive list of entry barriers over a larger (71-industry) sample, covering all types of manufacturing.
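As a concrete illustration (a minimal sketch; the variable names are ours, not the authors'), the direct approach amounts to estimating a reduced-form entry equation of the kind

    E_i = β0 + β1·π_i + β2·G_i + γ'B_i + ε_i

where E_i is entry into industry i, π_i the profit incentive to enter, G_i market growth, and B_i a vector of entry-barrier measures (scale economies, capital requirements, product differentiation) entered directly as regressors, rather than regressing the profit rate on B_i and inferring entry conditions indirectly.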

371 citations



Journal Article•DOI•
TL;DR: In this paper, the authors present a review of the hypotheses that have been advanced to explain the substantial inter-industry variance that we observe in the prevalence of multinational corporations in the United Kingdom and Canada.
Abstract: FOREIGN direct investment, which I shall associate with the multinational corporation, varies greatly in its prominence from country to country and sector to sector. The analytical apparatus of international trade and industrial organization supplies some hypotheses to explain this variation, but they have not been drawn together and tested competitively. The purpose of this paper is to explain statistically the substantial inter-industry variance that we observe in the prevalence of multinational corporations. In the first section I review the hypotheses that have been advanced to explain this variance. The second and third sections report tests of these hypotheses on the shares of sales held by foreign-owned enterprises in Canadian and United Kingdom manufacturing industries.

303 citations


Journal Article•DOI•
TL;DR: The authors explain that they avoided quantifying the effect of introducing their elasticities of supply of wheat into consideration of Parker and Klein's questions because the effect was so large that it made those questions meaningless, not because they wished to stay at the level of generality adopted by Parker in his discussion of his assumptions.
Abstract: [Professor Page criti]cizes us for not quantifying the effect of introducing our elasticities into consideration of Parker and Klein's questions (footnote 6). We avoided quantification because the effect was so large that it made Parker and Klein's questions meaningless, not because we wanted to stay at the level of generality adopted by Parker in his discussion of his assumptions.2 In addition, we found that our conclusions allowed us to suggest other implications of our findings. The pattern of slow adjustment to prices on the part of farmers is consistent with an interpretation of the Populist movement that sees the agricultural market out of equilibrium in the early 1890's. We do not wish, as Professor Page appears to think, to disassociate ourselves from this conclusion, but we did not want to claim it as a complete explanation of Populism either.3 The pattern of productivity change that shows up only dimly in our results is intriguing because it differs from the results of other investigators using other methods. We commented on it in the hope of encouraging further work. Our broader aim, in fact, was to encourage economic historians to use the tools of modern economics and econometrics in the analysis of historical questions. We agree that there is no substitute for accurate data and specification, and we continue to think we got as close to this ideal as we could within our budget constraint. We hope also that the controversy over wheat varieties will not obscure the real advances which can be made by the introduction of explicit estimates of the elasticity of supply of wheat into the discussion of late nineteenth century agriculture.
2 It should be noted also that Parker's awareness of some of the limitations of his assumptions does not mean that he did not use them in his research. His quantitative results came from his assumptions, not his reservations.
3 We said just this in the passage Page cites as well as in our original article. It is hard to see how this could have been misunderstood.

258 citations


Journal Article•DOI•
TL;DR: The evidence presented in this paper decisively rejects the assertion that coinsurance is irrelevant to choice; coinsurance clearly does affect the demand for services.
Abstract: THE effect of coinsurance on the demand for medical services has been debated for many years. Some assert that it helps control total expenditures by giving consumers a stake in how much medical care is purchased. Others assert that coinsurance is irrelevant to choice, since the physician makes the decisions about using medical services for his patients. Persons attempting to predict expenditures under various national health insurance plans are naturally interested in how coinsurance affects demand for services. The evidence we present in this paper decisively rejects the assertion that coinsurance is irrelevant to choice; coinsurance clearly does affect the demand for services. Moreover, as we shall show, the impact of coinsurance varies across medical services in a systematic fashion depending upon the time price of the service. In a longer, more detailed version of this paper (Phelps and Newhouse 1973) we have derived expressions relating the responsiveness of demand for medical care services to coinsurance, market prices for medical care, and time costs. In the remainder of this section we sketch the assumptions underlying those derivations. We assume that consumers maximize a utility function in "other goods" (x) and health status (H) subject to a budget constraint. Medical care (h) is a homogeneous commodity that can be purchased in the market at a price of p per unit, and x can be purchased at a price of one per unit. There is a production function for H which uses h and time inputs (t). Denote the opportunity cost of time as w per unit of time, and let T be the amount of productive time available to the person: T = T0 − t·h, where T0 is total time available and is fixed. The consumer's level of health is considered random. This induces him to purchase insurance. The insurance contract specifies a coinsurance rate: the consumer pays C per cent and the insurer pays (100 − C) per cent of all incurred expenses during the period. We are not concerned here with the selection of C (Phelps 1973), but with how the consumer reacts to a random loss, given his insurance policy. Assume that C has been previously chosen, or is imposed; in either event, C is fixed, and the premium (or tax) is prepaid. The total price is then the sum of the money price per unit, C·p, and the time price per unit, w·t.
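Compactly, the setup sketched in the abstract amounts to the following (a hedged rendering in our notation, not the authors' exact formulation): the consumer maximizes U(x, H(h)) subject to the full-income constraint

    x + (C·p + w·t)·h = Y

where Y is full income net of the prepaid premium. The relevant price of medical care is thus the "full price" C·p + w·t, the sum of the out-of-pocket money price and the time price per unit; demand responsiveness to the coinsurance rate C then varies systematically with the share of the time price w·t in the full price.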

183 citations


Journal Article•DOI•
TL;DR: In this paper, the relationship between leverage, market structure, risk, and profitability was analyzed and measured using cross-section data on 228 United States manufacturing firms over a period spanning the 1960's.
Abstract: THIS paper attempts to analyze and measure the relationships among leverage, market structure, risk and profitability. It develops a theoretical model relating these variables and then tests the model using cross-section data on 228 United States manufacturing firms. An additional test is made using data from 85 industries, with both tests covering the 1960's. Recently numerous studies have tested the relationship between market structure and rate of return (Hall and Weiss, 1967; Samuels and Smyth, 1968; Fisher and Hall, 1969; Shepherd, 1971, 1972; Stigler, 1963; Kilpatrick, 1968; Collins and Preston, 1969; and Gale, 1972). Several of these authors have included a risk variable or a financial structure variable or both in a linear regression model. They have commonly represented the degree of risk by the variability of profits over time (hereafter denoted "σ").1 More recently, Gale (1972) has used financial structure (measured as the equity to assets ratio) to represent risk. Still other economists suggest that leverage may have an independent influence on the rate of return, unrelated to risk (Stigler, 1963; Scherer, 1970; Jean, 1970). At this point a more general test may resolve the alternative hypotheses. This paper will test both the Gale hypothesis and the Stigler et al. hypotheses using a simultaneous 3-equation model.
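A sketch of what a simultaneous 3-equation system of this kind might look like (purely illustrative; not the author's exact specification):

    R    = a0 + a1·CONC + a2·LEV + a3·RISK + u1
    LEV  = b0 + b1·R + b2·RISK + u2
    RISK = c0 + c1·R + c2·LEV + c3·CONC + u3

with the rate of return R, leverage LEV, and risk RISK treated as jointly endogenous and market structure CONC as predetermined, so that single-equation least squares would be inconsistent and a simultaneous-equations estimator is required.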

155 citations




Journal Article•DOI•
TL;DR: In this article, the authors consider a one-sector model where imports are assumed to be either final goods which enter the utility functions of consumers, or intermediate goods which are separable from primary factors in the productive process.
Abstract: THE foreign sector in conventional macroeconomic models has not been properly integrated with behavioural relationships in the rest of the economy. The aggregate producing sector is usually depicted as employing primary factors, capital and labor, to produce a single output which simultaneously satisfies the demands of consumers, producers, governments, and foreigners. In this one-sector model, imports are implicitly assumed to be either final goods which enter the utility functions of consumers, or intermediate goods which are separable from primary factors in the productive process. The first assumption conflicts with empirical evidence that the bulk of international trade occurs in intermediate goods, while the second assumption involves a substantive restriction on the form of the technology which ought to be examined and justified empirically rather than assumed a priori. Most goods entering international trade require further processing before delivery to final demand. This processing requires the services of domestic primary factors of production which could be employed elsewhere. An important issue for public policy concerns the effect of trade and trade barriers on the distribution of factor income. If final demand can be satisfied either by employing domestic primary factors or by importing materials, then changes in import prices will, in general, alter the competitive returns to the primary factors. The usual assumption adopted for empirical work, namely that imports are final goods with no close domestic substitutes, rules out any income distribution effects resulting from a change in import prices. Previous investigators have estimated import demand equations by regressing the logarithm of a measure of imports on the logarithm of national income and the logarithm of the ratio of the price of imports to the price of domestic value added.1 While this functional form has the advantage that the parameters measure the price and income elasticities of demand, it is not derivable from an underlying model of optimal behaviour, and it assumes that imports are final goods which are separable from all other commodities in the utility function of the consuming sector. Until very recently most empirically tractable functional forms have imposed separability restrictions a priori. Thus, even if one were to proceed from micro-economic foundations by assuming a constant elasticity of substitution functional form to model the taste or technology of the decision unit, the assumption of separability between imports and alternative factors or commodities would constitute a maintained hypothesis that could not be tested.2 The one output specification provides no way of explaining changes in the relative prices of various categories of final demand. It will only be appropriate for explaining the composition of inputs if the technology is separable with respect to a partitioning between inputs and outputs. In this case, the cost minimizing input bundle is independent of the composition of output, and, for purposes of explaining factor demands, one can pretend that a single output exists. Separability between inputs and outputs implies that marginal rates of substitution between pairs of factors are independent of the composition of output, and marginal rates of
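For reference, the conventional import-demand specification criticized here is the log-linear regression

    ln M_t = α + η·ln Y_t + ε·ln(P_M,t / P_D,t) + u_t

in which the coefficients η and ε are read directly as the income and price elasticities of import demand; the text's objection is that this form is not derived from an underlying model of optimal behaviour and imposes, rather than tests, separability between imports and domestic factors.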

134 citations


Journal Article•DOI•
TL;DR: In this article, a model of the choice among health insurance options which permits quantitative inference about risk aversion from revealed choices is presented. But the model is only exploratory, since several limiting assumptions and functional representations have been adopted.
Abstract: THE dominant type of health insurance contract in the United States contains a formula providing partial reimbursement to the consumer for expenditures on selected goods and services. The consumer pays a predetermined amount per period, the premium. At the beginning of the period specified in the insurance contract, the consumer is uncertain about many future developments. The occurrence of various illnesses, the amount of medical services consumed, and the out-of-pocket ("direct") monetary loss cannot be perfectly foretold. When a consumer chooses health insurance from a set of alternative contracts, he may reveal information of a general nature about preferences for avoiding risk. This paper suggests a model of the choice among health insurance options which permits quantitative inference about risk aversion from revealed choices. The model is based on the theory of expected utility maximization, and more recent developments in Arrow (1963), Pauly (1968), and Zeckhauser (1970). The analysis is only exploratory, since several limiting assumptions and functional representations have been adopted. The model will be seen, however, to have useful application to the Federal Health Benefits Program in which federal employees choose health insurance from a wide range of options. The premium cost to the employee for any option depends on the average experience of all those selecting the option. This program has been in existence since 1960, and has generated important information on the frequency distributions of total and direct expense for various types of consumer unit under various types of insurance.
The Expected Utility Model
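A minimal sketch of the expected-utility comparison such a model formalizes (our notation, not the paper's): the consumer ranks each insurance option by

    EU = Σ_s π_s · U(y − r − C·x_s)

where π_s is the probability of health state s, r the option's premium, C its coinsurance rate, and x_s covered expenditures in state s; observed choices among options with different (r, C) pairs then bound the curvature of U, that is, the degree of risk aversion.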

Journal Article•DOI•
TL;DR: This paper contains a general framework for the specification and estimation of policy preference functions and identifies two of the principal difficulties confronted in applications of the implicit experimental approach.
Abstract: POLICY criterion functions provide a basis for evaluating the desirability of alternative economic outcomes or states. Typically public decision makers must choose between alternative policy proposals which influence different sectors of society in various ways and which have different welfare connotations to these segments of society. We take the objective of economic policy analyses to be that of generating information to aid policy makers in the choice among alternative policy programs. Further, we view the formalized approach of economic analyses to policy making as one which supplements rather than supplants contemporary procedures used in formulating and administering economic policy. In the quantitative analysis of economic policy, two approaches have been advanced with respect to the use of a policy criterion function. The first, which we denote the explicit approach, involves a formally stated objective function as an integral component of the policy analysis. These analyses include not only the various optimizing models of decision making, e.g., Holt (1962), Theil (1968), Prescott (1971) and Chow (1972), but also the work of Fromm (1969) and others who have used an objective function in the explicit evaluation of simulation experiments. The second approach generates, for selected values of the instrument variables, the time paths of the endogenous variables. This approach has been advanced principally by Naylor (1970). While this approach does not involve the representation of a criterion function, such a function may be regarded as a concealed component of the analysis. This approach might be referred to as the implicit approach since (implicitly) a criterion function is used in choosing the policy alternatives for experimentation and in choosing the endogenous or performance variables for which these alternatives are to be compared. In contrasting the first approach with the second it may be argued that explicit specification of the criterion function (or set of criterion functions): (i) does not necessarily involve an arbitrary selection of the policy variable levels to be investigated; (ii) allows the investigator to assist public decision makers with the choice of weights across various arguments or goals entering the criterion function, particularly if a set of criterion functions is examined; (iii) provides an initial and formal basis for interaction between the investigator and the public decision maker; (iv) does not involve averaging over many samples generated by random drawings from the distribution of the stochastic disturbances in the system and as a consequence it is difficult to assess errors in estimating the actual mathematical expectation; and (v) does not typically result in a situation in which public decision makers are inundated with so much data that they cannot realistically make choices. Points (iv) and (v) represent two of the principal difficulties confronted in applications of the implicit experimental approach. The explicit approach usually involves some arbitrariness in the specification of trade-offs between different arguments while the implicit experimental approach involves such elements in the selection of the specific policy alternatives investigated. Since the latter approach is not guided by an optimization or policy improvement method, many different policies must be examined which, of course, aggravates point (iv) and also obviously provides no assurance that "good" policies will be discovered.
Neglecting investigator costs of the two approaches, arbitrariness emanating from the former approach appears less objectionable than the degree of arbitrariness present in the second approach. In view of the above position, this paper contains a general framework for the specification and estimation of policy preference functions.

Journal Article•DOI•
TL;DR: This article examined the redistributional effects of inflation on wealth-holdings of households and corporations, extending the exploratory studies just noted by utilizing information on the moderate United States inflation during the last two decades.
Abstract: HOW important is it to avoid moderate inflation, such as we have suffered intermittently since World War II? Until the 1950's there were virtually no empirical studies of the effects of inflation on the level or distribution of output and wealth, except for the great hyperinflations. Recent years have seen extensive development of the theory of inflation and a few exploratory empirical studies of the costs of moderate, non-run-away inflation, but we still have only limited information on these costs as a basis for policy judgments, for example, when we face the much-discussed trade-off between inflation and unemployment.1 This paper examines the redistributional effects of inflation on wealth-holdings of households and corporations, extending the exploratory studies just noted by utilizing information on the moderate United States inflation during the last two decades.2

Journal Article•DOI•
TL;DR: An explanation of the transmission of economic status from one generation to the next is found in the heritability of IQ, reviving an old theme in United States social theory: the poor are poor because they lack mental skills, and their poverty is particularly intractable because it is rooted in the genetic structure inherited from their parents, who were also poor and "mentally deficient."
Abstract: THE growing disillusionment with compensatory education and other anti-poverty programs has given new life to an old theme in United States social theory: the poor are poor because they lack mental skills. Their poverty is particularly intractable because it is rooted in the genetic structure inherited from their parents who were also poor and "mentally deficient."1 An explanation of transmission of economic status from one generation to the next is thus found in the heritability of IQ. The idea is not new: an earlier wave of genetic interpretations of economic and ethnic inequality followed in the wake of the failures of the purportedly egalitarian educational reforms of the early 20th century Progressive Era.2 The liberal environmentalist counterattack against these interpretations was highly successful; among social scientists, and in the public eye, the genetic position was largely discredited.3 Since the late 1960's, however, public disillusionment with egalitarian social programs has been enhanced by the dissemination of the heritability research of Burt, Jensen, and others, supporting the scientific claims of the genetic interpretation of racial inequality and intergenerational immobility.4 Further evidence has been found in studies such as the Coleman Report, which seemed to indicate that scholastic achievement in schools is not greatly influenced by the level of educational inputs and that differences among children prior to school entry explained most of the nonrandom variance in test scores.5 The version of the genetic argument to which we will address ourselves may be summarized by two propositions: first, that IQ, as measured on standard so-called intelligence tests, is highly heritable; and second, that IQ is a major determinant of income, occupational status, and other dimensions of economic success. If both propositions were correct, it could easily be shown that intergenerational immobility, as measured by the correlation between the economic status of parents and their (adult) children, is attributable in large measure to the genetic inheritance of IQ and its role in determining economic position. The first proposition concerning the heritability of IQ has received careful scrutiny; in fact, the current debate on IQ has been dominated by a concern with IQ's heritability, virtually to the exclusion of questions concerning its economic importance.6 In this paper we
1 Jensen (1969) begins his article on the heritability of IQ with: "Compensatory education has been tried, and apparently it has failed." The most explicit statement of the genetic interpretation of intergenerational immobility is Herrnstein (1971). For a critical review of Herrnstein's interpretation, see Bowles and Gintis (1973).
2 Michael Katz notes the historical tendency of genetic interpretations of social inequality to gain popularity following the failure of educational reform movements (Katz, 1968). On the rise of the genetic interpretation of inequality towards the end of the Progressive Era, see Karier (1972).
3 See, for example, Hunt (1961).
4 See Jensen (1969) and Burt (1958). For a critical review of Jensen's and Burt's estimates, see Light and Smith (1969), Jencks et al. (1972), and Kamin (1973).
5 Coleman et al. (1966). A critique of the statistical bases of the Coleman Report can be found in Bowles and Levin (1968 and 1969). See also Mosteller and Moynihan (1972).
6 Information on the economic success or failure of individuals at either extreme of the IQ distribution, such as the data invoked by Herrnstein, tells us virtually nothing about the overall economic importance of IQ as a determinant of an individual's place in the distribution of income or stratification system.

Journal Article•DOI•
TL;DR: The United Kingdom is an ideal case for this latter purpose since both international trade and direct foreign investment play a large part in its manufacturing sector as discussed by the authors. But these investigations of the association between various dimensions of market structure and some measure of profitability have almost exclusively relied on United States data.
Abstract: DURING the past two decades, there have been many studies of the relationship between market structure, firm conduct and industry performance. Inspired by Bain's seminal works on the influence of seller concentration on rates of return in manufacturing industry (1951) and on barriers to entry (1956), many econometric cross-section studies have been undertaken.1 Yet these investigations of the association between various dimensions of market structure and some measure of profitability have almost exclusively relied on United States data. Few indeed have been the tests of these hypotheses, in either simpler or more complex versions, using data from other industrial economies. Even fewer have yielded results nearly comparable in concreteness to those obtained in United States studies.2 The present paper is an attempt to help fill in the obvious gap in the literature. The objective is two-fold. First, I hope to investigate the influence of some of the major market structure elements on one aspect of performance, price-cost margins, in United Kingdom manufacturing industry.3 Second, I will attempt to evaluate the impact on these relationships of foreign trade and direct foreign investment. The United Kingdom is an ideal case for this latter purpose since both international trade and direct foreign investment play a large part in its manufacturing sector. The findings, I suggest, do substantially buttress the classical hypothesis about the links between structure and profitability.
I. Analytical Framework and Variables

Journal Article•DOI•
TL;DR: The authors do not argue the question of whether concentration ratios are meaningful indices of market structure or whether they are causally related to industrial performance; those assuming the answer to both questions is yes gain support from the most comprehensive review of the empirical evidence on the subject.
Abstract: ECONOMISTS have been alternately fascinated and frustrated with industry or market concentration ratios ever since they were first calculated from census data for the Temporary National Economic Committee for 1937. They are fascinated because concentration ratios are the single best available index of the degree of oligopoly. The frustration stems from the absence of precise coincidence between the Standard Industrial Classification System (SIC) used by the Bureau of the Census and economically relevant markets. Yet, when all is said and done, most industrial organization economists agree that concentration ratios based on SIC industries not only are the best available, but provide useful measures of one dimension of the extent of oligopoly in American industry.1 This is not to imply, of course, that market concentration is the only index of oligopoly or market power. Economic theory suggests and empirical studies verify that entry barriers, product differentiation, and firm conglomeration, among others, also may influence firm conduct and industrial performance. But changes in industrial concentration are uniquely significant because often they reflect, at least partially, changes in other structural variables as well. For example, if entry barriers are declining, because of growing markets or whatever, this tends to become reflected in lower concentration ratios. Hence changes in market concentration may also reflect what is happening to other structural variables affecting the discretionary power of sellers. We shall not here argue the question of whether or not concentration ratios are meaningful indices of market structure or whether they are causally related to industrial performance. Those assuming that the answer to both questions is yes gain aid and comfort from the most comprehensive review of the empirical evidence on the subject.2


Journal Article•DOI•
TL;DR: In this paper, the authors demonstrate a new revealed preference approach to valuing time, based on neoclassical demand theory, without resorting to the use of leisure or the consumer production functions of the Becker model.
Abstract: THIS paper demonstrates a new revealed preference approach to valuing time. The "full-price" demand functions, wherein demand depends upon the sum of money price and time cost, may be obtained from neoclassical demand theory without resorting to the use of leisure or the consumer production functions of the Becker model. The restrictions on these demand functions provide an indirect means of estimating the value of time. In the empirical section, the value of time is estimated from air travel data and is derived, via the indirect procedure developed, from other demand studies.
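In symbols, the full-price demand functions referred to take the form (illustrative notation)

    x_i = f_i(p_i + v·t_i, y)

where p_i is the money price of good i, t_i its time cost per unit, and v the value of time; the restriction that money price and time cost enter only through the sum p_i + v·t_i is what permits v to be estimated indirectly from ordinary demand data.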


Journal Article•DOI•
TL;DR: This article showed that the probability limit of the estimated error variance is smaller when the true value of ρ is used to perform the Orcutt transformation than when any other value ρ0 is used.
Abstract: It is also perhaps worth considering the case in which the alternative model y = Zγ + ω contains the same regressors as the true model (i.e., Z = X). Then the above proof shows that the probability limit of the estimated error variance is smaller when the true value of ρ is used to perform the Orcutt transformation than when any other value ρ0 is used. (This, in fact, constitutes a relatively simple proof of the consistency of the maximum likelihood estimate of ρ, for a correctly specified model.)
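For reference, the Orcutt transformation in question is the standard quasi-differencing step for a regression with AR(1) errors u_t = ρ·u_{t−1} + ε_t:

    y_t − ρ0·y_{t−1} = (x_t − ρ0·x_{t−1})'β + e_t

applied with a trial value ρ0. The result cited above says that, for a correctly specified model, the probability limit of the estimated error variance is minimized at ρ0 = ρ, which is what delivers the consistency of the maximum likelihood estimate of ρ.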

Journal Article•DOI•
TL;DR: Fisher and Temin (1973) showed that a finding that the R and D input elasticity with respect to firm size exceeds unity does not necessarily imply that the R and D output elasticity is greater than unity also.
Abstract: FISHER and TEMIN (1973) have argued recently that many empirical studies1 relating to the Schumpeterian hypothesis are inappropriate for testing that hypothesis. They observe that Schumpeter can be interpreted as hypothesizing that the elasticity of the value of research and development (R and D) output with respect to firm size is greater than unity. On the other hand, the empirical studies have been concerned with investigating the elasticity of R and D inputs with respect to firm size. Fisher and Temin demonstrate that a finding that the R and D input elasticity exceeds unity does not imply that the R and D output elasticity exceeds unity also. Given that public policy formulation should be based on tests of the Schumpeter hypothesis rather than on tests of the R and D input elasticity, their point is well taken. Of course, in defense of the empirical studies, it can be argued that data limitations have restricted testing to the R and D input elasticity, and that most of the researchers have been aware that they were not testing the Schumpeter hypothesis. In a footnote, Fisher and Temin refer to a study of technical change in the pharmaceutical industry that does attempt to test the Schumpeter hypothesis directly.2 This study by Comanor (1965) examines the relationships among firm size, R and D inputs, and technical change in the United States pharmaceutical industry for the period 1955-1960. The amount of technical change accomplished by a firm is measured by its sales during the first two years following introduction of all new chemical entities. An important conclusion of the study was that ". . . there are substantial diseconomies of scale in R and D which are associated with large firm size." (Comanor, 1965, p. 190). Because the pharmaceutical industry is one of the rare industries for which adequate data are available to test the Schumpeter hypothesis directly, we have attempted to test the hypothesis in that industry for the more recent period, 1965-1970. Another reason for our work was to try to overcome some difficulties in Comanor's analysis that raise ambiguities of interpretation. As we shall report, our work leads to results that are essentially opposite those of Comanor. Our finding for the 1965-1970 period, that larger firms were "better" at innovation than smaller firms, has another interesting implication. That is, accepting Comanor's findings of the opposite case for the 1955-1960 period, one might be led to hypothesize that the 1962 Amendments to the Food, Drug and Cosmetic Act inadvertently provided an advantage to larger firms. The 1962 Amendments added a "proof-of-efficacy" requirement to the "proof-of-safety" requirement of the 1938 Food, Drug and Cosmetic Act. In effect, the 1962 Amendments increased the costs associated with introducing new drugs by requiring extensive tests and evaluations not required previously. In order to investigate further the effect of the 1962 Amendments, we applied our model to 1955-1960 data. We found that larger firms were "better" at developing new chemical entities in that period also, but their relative advantage over smaller firms was smaller than in the post-1962 period (see footnote 18). The plan of this paper is as follows: First, we summarize and point out several problems we found with Comanor's study. Section III presents the results from a two-equation model that decomposes the technical change measure
1 Fisher and Temin refer to the studies by Villard (1958), Schmookler (1959), Worley (1961), Mansfield (1964), Scherer (1965), and Comanor (1967).
2 In addition to the 1965 Comanor study, Mansfield (1964) and Scherer (1965) also have made studies that fall into this category.
3 For a summary of both studies, see Vernon (1972).
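The Fisher-Temin elasticity point can be seen in one line (with illustrative constant-elasticity forms of our choosing): if R and D input rises with firm size as R ∝ S^α and R and D output as V ∝ R^γ, then

    d ln V / d ln S = α·γ

so an input elasticity α > 1 implies an output elasticity above unity only if γ > 1/α; diseconomies of scale in R and D (γ < 1) can reverse the inference.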

Journal Article•DOI•
TL;DR: In this article, the authors focus on individual residential property values in one municipality only, thus avoiding the problem that differing government expenditures may also be capitalized, since the level of general government services is the same for all property owners.
Abstract: THE assumption that taxes are capitalized plays a central role in public finance theory. It would seem that the taxation of residential property offers a good opportunity to test this assumption empirically. One need simply investigate whether or not, after holding constant housing and land characteristics, a house with higher taxes sells for a lower price. Indeed there have been many attempts in the literature to estimate the extent to which residential property taxes are capitalized.1 Many of these studies have focused on differences in tax rates existing in neighbouring communities, and have attempted to determine whether in such a setting property values are inversely related to tax rates. The major difficulty with this approach, in which tax rates in different communities are compared, is that government expenditures may also differ from one location to another and may also be capitalized in property values. It is therefore necessary to hypothesize that property values depend on both taxes and expenditures, in which case the relationship comes close to being an identity, with average property values related to average tax rates and average levels of government expenditures.2 It is not surprising that tax rates are found in these cases to be negatively, and government expenditures positively, related to property values. However, it is not clear (two-stage least squares notwithstanding) how much of these effects can be attributed to capitalization and how much is due to the tautological nature of the problem. In this paper we focus on individual residential property values in one municipality only, and thus avoid the major problem discussed above since the level of general government services is the same for all property owners.3,4 Further, restriction to one locale does not imply that effective tax rates will be the same on all houses even though the mill rate is, of course, the same. As is the case in most cities there are wide variations in the ratio of assessed value to market value for residential properties (even within homogeneous housing categories), thus resulting in differences in taxes paid for basically identical housing units.5,6 Consequently, we hypothesize a relationship of the form
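The hypothesized relationship itself is cut off in this excerpt. A standard hedonic-capitalization form, offered only as an illustration and not as the authors' specification, would be

    V_i = β'X_i − λ·(T_i / r) + u_i

where V_i is the market value of house i, X_i its housing and land characteristics, T_i its annual tax bill, r a discount rate, and λ the degree of capitalization (λ = 1 under full capitalization).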

Journal Article•DOI•
TL;DR: In this article, a cross-sectional analysis of the distribution across population classes of "types of work" performed in the United States is presented, focusing on a matrix of coefficients describing the nature of the task involved in each occupation of a very fine occupational classification, the elements of which matrix are referred to as "job characteristics."
Abstract: THIS paper presents a cross-sectional analysis of the distribution across population classes of "types of work" performed in the United States. The novelty of the study resides in its measurement of types of work. The approach adopted is to focus on a matrix of coefficients describing the nature of the task involved in each occupation of a very fine occupational classification, the elements of which matrix are here referred to as "job characteristics." For example, the vector of job characteristics employed includes indicators of physical working conditions, repetitiveness, position on the hierarchy of relationships to people, and skill demands inherent in a job. Sections II and III describe data sources and the process of compiling a data file, respectively. Section VI compares sample mean job characteristics by race and by sex, these being the mean values of the left-hand variables employed in section VII. The latter reports estimates of functions in the general class

    Z_ja = F(g_a) + ε_a    (1)

where Z_ja = 1 if the occupation performed by person a has job characteristic j, and 0 otherwise; g_a is a vector of attributes, or personal characteristics, of person a; and ε_a is a disturbance term. The vector g_a includes race, sex, age, schooling and union membership. The particular specification chosen within the general class (1) is given in section IV, and V is a cautionary note on interpreting the results of sections VI and VII.


Journal Article•DOI•
TL;DR: In this article, the authors show that there is merit in the long-standing but much abused distinction between "informative" and other types of advertising, and that this difference is revealed in a differential effect on economic performance.
Abstract: T HIS paper attempts to show that there is merit in the long-standing but much abused distinction between 'informative' and other types of advertising, and that this difference is revealed in a differential effect on economic performance. After a brief literature survey, section II develops a theoretical distinction between informative and goodwill advertising. Section III outlines a test of the hypothesis that the different kinds of advertising will have opposite effects on market performance. Section IV presents the results of this test. Section V summarizes and relates the results to other recent work on the economics of advertising.




Journal Article•DOI•
TL;DR: The authors employed multiple regression analysis and investigated the quantitative relationship between market structure and a direct measure of excess capacity for 35 American manufacturing industries and found that partial oligopolies experience significantly more excess capacity during periods of growing aggregate demand than do tight oligopolistic or atomistic industries.
Abstract: STUDIES investigating the relationship between market structure and market performance generally focus on allocative efficiency and progressivity. Almost no empirical analysis exists which examines the relationship between market structure and another important dimension of market performance: the degree to which industries experience chronic excess capacity.1 Of the three empirical studies dealing with the relationship between market structure and excess capacity (Bain, 1962; Meehan, 1967; Scherer, 1969), only Bain's directly relates the degree of chronic excess capacity to market structure.2 However, given the small sample employed, Bain's observation that chronic excess capacity did not appear in his six "substantial" or "very high" barriers sample industries and did appear in his three "moderate to low" barriers industries generates only tentative conclusions with respect to the relationship between excess capacity and barriers to entry. This paper employs multiple regression analysis and investigates the quantitative relationship between market structure and a direct measure of excess capacity for 35 American manufacturing industries. In order to capture "chronic" excess capacity, the dependent variable is measured over a period of rising aggregate demand, 1963-1966. The results suggest that partial oligopolies experience significantly more excess capacity during periods of growing aggregate demand than do tight oligopolistic or atomistic industries. Section I of this paper discusses the various hypotheses linking market structure and excess capacity. Section II describes the model and presents the major empirical results. Section III discusses the implications of the empirical results with respect to antitrust policy.
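An illustrative form of such a cross-section regression (our notation; the author's exact specification is not reproduced in this excerpt):

    EXCAP_i = β0 + β1·CONC_i + β2·CONC_i² + γ'Z_i + ε_i

where EXCAP_i is measured excess capacity in industry i over 1963-1966, CONC_i seller concentration, and Z_i other structural controls; the quadratic term allows the inverted-U pattern reported, with partial oligopolies showing the most excess capacity.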

Journal Article•DOI•
TL;DR: In this paper, the authors examined the economic determinants of housing market value and property tax liability and explored the functioning of the real estate brokerage market in the suburban Philadelphia area known as the Main Line.
Abstract: THIS paper is an empirical study of the housing market for 1967-1969 in the suburban Philadelphia area known as the Main Line.1 The basic data2 utilized for this study include 2,143 transactions for residential properties listed by the Main Line Board of Realtors' Multiple Listing Service. Our purpose is to examine the economic determinants of housing market value and property tax liability and to explore the functioning of the real estate brokerage market. The analysis of value is based on a theoretical framework whose assumptions are outlined in section I. Our empirical analysis (section II) attempts to improve upon previous work by explicitly linking the theoretical underpinnings of market equilibrium to the development of the analysis. The results, while reinforcing some of the empirical findings of earlier studies, do represent a research improvement3 since the analysis contemporaneously utilizes individual micro-economic property data, takes into account simultaneity bias between the market value and the property tax variables, and develops estimates for the prop-
1 The Main Line communities considered in this analysis are the six suburban townships stretching west from the boundary of the city of Philadelphia along the tracks of the Penn-Central Railroad: Lower Merion (including Narberth), Radnor, Upper Merion, Easttown, Tredyffrin, and Malvern. In many of our statistical manipulations the last three townships were, for convenience, treated as one unit. However, for many of the "excluded" variables in the two-stage least squares analysis, the finer geographic variables used in this study were, for (1) Lower Merion Township, the following separable areas: Ardmore, Bala-Cynwyd, Bryn Mawr, Gladwyne, Wynnewood (including Penn Wynne, Overbrook Hills, Green Hill Farms), Haverford, Merion, Penn Valley, Rosemont and Narberth; (2) Radnor Township: Radnor, St Davids and Ithan, Villanova and Wayne; (3) Tredyffrin, Easttown, and Malvern Townships: Berwyn, Devon, Malvern, Paoli and Daylesford, Strafford and Colonial Village, and Valley Forge; (4) Upper Merion Township: Gulph Mills and King of Prussia.
2 The basic data for the study were generated by the Main Line Board of Realtors Multiple Listing Service, and provided through Mr Edmund Bossone. After the elimination of about 15% of the properties for which information was incomplete, the number of observations used in the study was 2,143. There appears to have been no systematic bias in the characteristics of houses for which information was incomplete. However, the data do not include sales of property that were unlisted and sold privately, and there is no ready way in which the bias resulting from this omission can be measured. In a strict sense, therefore, the study is concerned only with the determination of values for houses sold through members of the Main Line Board of Realtors during the years 1967-1969. The sixteen basic variables available in the data were: (1) style of house, (2) type of construction, (3) type of heating, (4) number of garages, (5) number of bedrooms, (6) number of bathrooms, (7) age of house, (8) location of house, (9) market value of house, (10) property taxes per year, (11) length of time listed with realtor service, (12) date of sale, (13) original asking price, (14) lot size, (15) brokerage firm handling original listing, and (16) distance to Center City.