
Showing papers in "The Review of Economics and Statistics in 1969"


Journal Article•DOI•
TL;DR: In this paper, the combined problem of optimal portfolio selection and consumption rules for an individual in a continuous-time model was examined, where his income is generated by returns on assets and these returns or instantaneous "growth rates" are stochastic.
Abstract: MOST models of portfolio selection have been one-period models. I examine the combined problem of optimal portfolio selection and consumption rules for an individual in a continuous-time model where his income is generated by returns on assets and these returns or instantaneous "growth rates" are stochastic. P. A. Samuelson has developed a similar model in discrete time for more general probability distributions in a companion paper [8]. I derive the optimality equations for a multi-asset problem when the rates of return are generated by a Wiener Brownian-motion process. A particular case examined in detail is the two-asset model with constant relative risk-aversion or iso-elastic marginal utility. An explicit solution is also found for the case of constant absolute risk-aversion. The general technique employed can be used to examine a wide class of intertemporal economic problems under uncertainty. In addition to the Samuelson paper [8], there is the multi-period analysis of Tobin [9]. Phelps [6] has a model used to determine the optimal consumption rule for a multi-period example where income is partly generated by an asset with an uncertain return. Mirrlees [5] has developed a continuous-time optimal consumption model of the neoclassical type with technical progress as a random variable.
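For reference, the constant-relative-risk-aversion two-asset case mentioned above has a well-known closed-form portfolio rule. In modern notation (not the paper's own symbols), with risky expected return \(\mu\), riskless rate \(r\), return variance \(\sigma^2\), and relative risk aversion \(\gamma\), the optimal fraction of wealth held in the risky asset is constant over time and wealth:

```latex
w^{*} \;=\; \frac{\mu - r}{\gamma \, \sigma^{2}}
```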

4,908 citations


Book Chapter•DOI•
TL;DR: In this paper, the optimal consumption-investment problem for an investor whose utility for consumption over time is a discounted sum of single-period utilities, with the latter being constant over time and exhibiting constant relative risk aversion (power-law functions or logarithmic functions), is discussed.
Abstract: Publisher Summary This chapter reviews the optimal consumption-investment problem for an investor whose utility for consumption over time is a discounted sum of single-period utilities, with the latter being constant over time and exhibiting constant relative risk aversion (power-law functions or logarithmic functions). It presents a generalization of Phelps' model to include portfolio choice and consumption. The explicit form of the optimal solution is derived for the special case of utility functions having constant relative risk aversion. The optimal portfolio decision is independent of time, wealth, and the consumption decision at each stage. Most analyses of portfolio selection, whether they are of the Markowitz–Tobin mean-variance or of more general type, maximize over one period. The chapter only discusses special and easy cases that suffice to illustrate the general principles involved and presents the lifetime model that reveals that investing for many periods does not itself introduce extra tolerance for riskiness at early or any stages of life.

2,369 citations


Journal Article•DOI•

1,066 citations



Journal Article•DOI•
TL;DR: In this paper, the authors estimate the magnitudes in which each of several factors influenced interstate migration over the period 1955-1960, and the most unique explanatory variable employed is the "migrant stock", i.e., the number of persons born in state i (the origin state) and living in state j (the destination state).
Abstract: THE objective of this study is to estimate the magnitudes in which each of several factors influenced interstate migration over the period 1955-1960. Several variables were chosen which might reasonably be expected to explain the movements which occurred, and multiple regression analysis was used on the data. The most unique explanatory variable employed is the "migrant stock," i.e., the number of persons born in state i (the origin state) and living in state j (the destination state).1 It is shown that the failure to include the migrant stock variable in the estimated relationship causes the true direct effect of most other variables to be obscured.

349 citations


Journal Article•DOI•
TL;DR: A number of studies have shown evidence of significant association between certain characteristics of industry structures such as high concentration and substantial entry barriers and variations in industry performance, particularly with respect to profitability as discussed by the authors.
Abstract: A NUMBER of studies have yielded evidence of significant association between certain characteristics of industry structure, such as high concentration and substantial entry barriers, and variations in industry performance, particularly with respect to profitability. In general, these studies tend to confirm the expectation that, other things being equal, profits will tend to be higher in industries in which structural conditions depart substantially from those of the competitive model. However, as Stigler [16, p. 145] has noted, the statistical associations found are usually weak, and a substantial amount of performance diversity is left unexplained. Thus, the typical strength and character of the structure-performance associations, and the importance of individual structural factors in the overall pattern, have remained open to question. Many hypotheses have been suggested; only a few are subject to serious empirical investigation; fewer still have actually been examined. This paper presents a summary report on our efforts to test a small number of fairly straightforward structure-performance hypotheses against the most comprehensive collection of relevant data available, the concentration statistics for 1958 and 1963 [18, 19]. These tests have focused on a single performance measure, the percentage price-cost margin, which we take as an indicator of the ability of firms in an industry to obtain prices in excess of direct costs. We have found a significant association between the price-cost margin and the level of four-firm concentration among four-digit SIC (Standard Industrial Classification) industries; and this association is not eliminated when differences in capital-intensity among industries are taken into account.
We have further found: (1) a tendency for the strength of the concentration-margins association to increase over the period 1958-1963, particularly in industries in which the level of four-firm concentration was stable or increasing; (2) a substantially stronger association between concentration and margins in consumer goods industries, as compared to producer goods industries; and (3) evidence that the principal component of the concentration-margins association in consumer goods industries is a correlation between concentration and margins of the four largest firms alone, in those industries in which these firms have higher margins than their smaller rivals. The first section of this paper establishes the background and framework of our analysis, and the following sections present the evidence of these findings in some detail.

265 citations



Journal Article•DOI•
TL;DR: Farrar and Glauber as mentioned in this paper revisited the problem of multicollinearity in regression analysis and proposed a three-stage hierarchy of increasingly detailed tests for the presence, location, and pattern of multicollinearity.
Abstract: In a recent paper in this Review,1 Farrar and Glauber (hereafter FG) revisit the problem of "multicollinearity" in regression analysis. Viewing the problem of multicollinearity as "both a facet and a symptom of poor experimental design," 2 FG propose "a three-stage hierarchy of increasingly detailed tests for the presence, location, and pattern" 3 of multicollinearity. The first in this series of three tests, on which the other two are conditional, is designed to "provide a useful first measure of the presence and severity of multicollinearity" 4 in the sample on hand. Bartlett's well-known statistic for testing the joint distribution of sample correlations under the assumption of vanishing parent correlations between the variables is used by FG for detecting multicollinearity. Bartlett shows that the (natural) logarithm of the intercorrelation determinant computed from a sample drawn from a multivariate, ortho-normal distribution, multiplied by a factor k, is approximately distributed as chi-square with v = (1/2)n(n - 1) degrees of freedom, where k = -[N - 1 - (1/6)(2n + 5)], N is the sample size and n is the number of variables considered. If the investigator concludes from the first stage that multicollinearity exists and that it is severe enough to warrant some action, FG propose to regress consecutively each explanatory variable on the remaining ones. The resulting F statistics will test "for the dependence of particular variables on other members" 5 of the set of explanatory variables. Finally, the patterns of interdependence among the independent variables are examined by testing the significance of the partial correlations of every pair of explanatory variables, all other variables held constant. The main pillar of this three-level test is, of course, Bartlett's test, which is properly used for making inferences,6 under the null hypothesis that all the population correlations are zero.
Since FG claim, however, that they are not interested in drawing inferences from sample to population ("inferences from sample to population . . . are possible . . . however, little importance is attached to properties of the population from which a set of data has been drawn. Attention focuses largely, if not entirely, on the sample itself" 7), their use of the chi-square statistic is questionable. Moreover, it is neither practical nor necessary to assume orthogonality between parent economic variables, if one wishes to make such inferences. Here we come to the heart of the problem of multicollinearity. One may agree with FG that it is preferable to think of multicollinearity "in terms of [its] severity rather than its existence or nonexistence." 8 If one agrees with this approach, the natural way to proceed is indeed "to define multicollinearity in terms of departures from a hypothesized statistical condition." 9 But what is this hypothesized condition? For FG this condition is "the requirement that explanatory variables be truly independent of one another." 10 However, there is no such requirement for the least-squares solution.
* I share with D. C. Farrar and R. R. Glauber my indebtedness to Professor John R. Meyer, who introduced us to the multicollinearity problem, and I am grateful for his comments on an earlier draft. I am also grateful to Professors J. Johnston, N. Wallace, D. Farrar, and R. Glauber for valuable discussions. I am particularly thankful to D. Farrar, who did not spare his efforts in order to dig out old forgotten data, which enabled me to recompute his regression equations.
1 D. C. Farrar and R. R. Glauber, "Multicollinearity in Regression Analysis: The Problem Revisited," this Review, XLIX (Feb. 1967). 2 Ibid., p. 93. 3 Ibid., p. 104. 4 Ibid., p. 101. 5 Ibid., p. 104. 6 Bartlett originally developed this statistic in order to test for the number of meaningful components that can be extracted from a set of variables. A concise statement is given by Bartlett, "A Note on the Multiplying Factors for Various χ² Approximations," Journal of the Royal Statistical Society (B), XVI, no. 2 (1954), pp. 296-298. 7 D. C. Farrar and R. R. Glauber, op. cit., p. 100. 8 Ibid., p. 106. 9 Ibid., p. 92. 10 Ibid., pp. 92 and 100.
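FG's first-stage screen can be sketched numerically. The following is a minimal illustration of Bartlett's statistic as described above (the function name and the simulated data are my own, not FG's code): it computes k times the log of the correlation determinant and can be compared with a chi-square critical value with n(n - 1)/2 degrees of freedom.

```python
import numpy as np

def bartlett_statistic(X):
    """Bartlett's chi-square statistic for the null hypothesis that all
    population correlations among the columns of X are zero.
    Returns the statistic and its degrees of freedom."""
    N, n = X.shape
    R = np.corrcoef(X, rowvar=False)           # sample correlation matrix
    k = -(N - 1 - (2 * n + 5) / 6.0)           # Bartlett's multiplying factor
    stat = k * np.log(np.linalg.det(R))        # k * ln|R|, nonnegative
    df = n * (n - 1) // 2
    return stat, df

# Strongly collinear data: the statistic should far exceed the 5%
# chi-square critical value for 3 degrees of freedom (about 7.81).
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.01 * rng.normal(size=200)          # near-duplicate of x1
x3 = rng.normal(size=200)
stat, df = bartlett_statistic(np.column_stack([x1, x2, x3]))
```

With near-duplicate columns the correlation determinant collapses toward zero, so the statistic is very large and the null of vanishing parent correlations is rejected, which is exactly the first-stage signal FG rely on.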

142 citations


Journal Article•DOI•
TL;DR: In this article, the authors present evidence that differences in plant size are at least as important as differences in market structure when trying to account for wage differentials among manufacturing industries, where the less competitive industries often have larger firms and larger plants.
Abstract: ECONOMISTS have shown considerable interest in the relationship between product-market competition and wage rates. Most of the analysis has centered on manufacturing, where the less competitive industries often have larger firms and larger plants. This paper presents evidence that differences in plant size are at least as important as differences in market structure when we try to account for wage differentials among manufacturing industries.

136 citations


Journal Article•DOI•
TL;DR: Eisner and Nadiri as discussed by the authors test the hypothesis that the long-run partial elasticity of the flow of capital services, the stock of capital, or the flow of gross investment demand, with respect to the price of output (p) divided by the price of capital services (c), should be unity.
Abstract: ONE of the basic facts of life confronting econometric researchers is that in order to test any hypothesis it is necessary to assume the validity of other assumptions which cannot be tested. An important part of the art of practical econometrics is knowing how much to include in the maintained hypothesis; if too much is assumed there may be little or nothing left to test, while if too little is assumed it may be impossible to reach any conclusions, or else the analysis may become hopelessly complex. In a recent article in this Review,1 Robert Eisner and M. I. Nadiri have examined critically one of the essential maintained hypotheses used by Dale W. Jorgenson, James A. Stephenson, Robert E. Hall, and Calvin D. Siebert in a substantial body of empirical research on the demand for capital goods.2 This assumption maintains that the long-run partial elasticity of the flow of capital services, the stock of capital, the flow of gross investment demand, or the flow of net investment, with respect to the price of output (p) divided by the price of capital services (c), should be unity. By respecifying Jorgenson's model in a logarithmic form, Eisner and Nadiri have produced tests of the hypothesis that the long-run price elasticity of demand for capital stock is unity. Not only do they find that the estimated elasticity with respect to (p/c) is significantly less than one, but all of their preferred point estimates of this parameter are less than 0.16 and in some cases do not differ significantly from zero. The first of the seven conclusions summarized by Eisner and Nadiri is that "the role of relative prices, the critical element in the neoclassical approach, is not confirmed." 3 In principle, the Eisner-Nadiri goal of relaxing and testing crucial maintained hypotheses is a laudable one. Their conclusions, if they can be sustained, have far-reaching implications.
If their estimated elasticities are correct, then fiscal and monetary policy-makers have little, if any, direct influence on investment expenditures. A cautious approach to the importance of the Eisner-Nadiri conclusions would seem justified, however, in view of the fact that others have also undertaken the task of critically examining the maintained hypotheses in the Jorgenson model. While none of the other critics of Jorgenson has defended the precise manner in which he has specified his model, without exception the results have been favorable to the essence of the "neoclassical approach" to investment functions, the assumption that relative prices do matter.4 The next section of this paper is essentially an exercise in detective work aimed at finding out why Eisner and Nadiri obtained results contrary to the body of other research. The analytical method used is to carry the goal of Eisner and Nadiri, relaxing and testing maintained hypotheses, one step further. The maintained hypothesis I relax and test involves the assumption of serially independent errors.5
* Support for this research was provided under contract DACA31-67-C-0141, U.S. Army Corps of Engineers, for the Office of Emergency Planning, and by the National Science Foundation and Ford Foundation through grants to the Cowles Foundation for Research in Economics. I am very grateful to Professors Robert Eisner, Robert J. Gordon, David Grether, Dale Jorgenson, Franco Modigliani, and Marc Nerlove and to the members of the Workshop in Econometrics and Mathematical Economics of the University of Chicago, for criticisms of earlier versions of this paper, and to Petter Frenger for extremely helpful research assistance.
1 [7]. Eisner's criticisms have been amplified in [5] and [6]. 2 This body of research includes [12] [13] [16] [17] [19] [20] [21] [22]. 3 [7], p. 380. 4 See [2] [3] [4] [9]. Some of this evidence is discussed briefly in section III below. The evidence on demand for factors other than capital, and on direct estimation of CES production functions, is also relevant, at least indirectly. See [23] for discussion of this evidence. 5 As I note below, the stochastic assumption I make, that the errors are a first-order autoregressive process, is only one step more general than that used by Eisner and Nadiri. I do not wish to imply that this stochastic assumption is anything more than a minimal improvement; the only reason for not using other types of assumption was my desire to minimize computational problems.
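The relaxation just described, allowing a first-order autoregressive error rather than serially independent errors, can be sketched with a Cochrane-Orcutt-style quasi-differencing loop. This is an illustrative reconstruction under that assumption, not the author's actual procedure; the function, variable names, and simulated data are mine.

```python
import numpy as np

def cochrane_orcutt(y, x, n_iter=20):
    """Fit y = a + b*x with AR(1) errors e_t = rho*e_{t-1} + white noise,
    by iterating: quasi-difference, re-estimate (a, b) by OLS, re-estimate
    rho from the residuals."""
    rho = 0.0
    for _ in range(n_iter):
        ys = y[1:] - rho * y[:-1]              # quasi-differenced series
        xs = x[1:] - rho * x[:-1]
        A = np.column_stack([np.ones_like(xs), xs])
        coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
        a = coef[0] / (1.0 - rho)              # undo intercept scaling a*(1-rho)
        b = coef[1]
        resid = y - a - b * x                  # residuals in original units
        rho = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
    return a, b, rho

# Simulated data: y = 1 + 2x with AR(1) errors, rho = 0.6.
rng = np.random.default_rng(1)
T = 400
x = rng.normal(size=T)
e = np.zeros(T)
eps = rng.normal(size=T)
for t in range(1, T):
    e[t] = 0.6 * e[t - 1] + eps[t]
y = 1.0 + 2.0 * x + e
a, b, rho = cochrane_orcutt(y, x)
```

The point of the exercise mirrors the paper's: if the serial-independence assumption is wrong, re-estimating under the more general AR(1) specification can change the substantive conclusions.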

115 citations


Journal Article•DOI•
TL;DR: In this paper, the authors work out the over- or under-statement of profit and rate of return involved in expensing advertising, evaluate the mis-statement in practice, examine the relationship between recomputed profit rates and advertising, and propose a policy change.
Abstract: SOME of the highest profit rates appear in industries that advertise heavily. These high earnings have been attributed to barriers to entry associated with product differentiation [2, 6]. A possible alternative explanation is that the treatment of long-lived advertising as current expenses leads firms that invest heavily in such intangibles to overstate their rates of return since their equity is understated [1, p. 153; 15, p. 167]. The same practice may result in the understatement of their dollar profits so that they pay less tax than other firms whose investments are all tangible. The purpose of this paper is to work out more precisely the over- or under-statement of profit and rate of return involved in the "expensing" of advertising and to evaluate the mis-statement in practice.1 Part I develops conceptually the conditions under which over- or under-statements can be expected. Part II recomputes dollar profits and rates of return for a variety of industries, estimates the tax avoidance that results, and examines the relationship between recomputed profit rates and advertising. Part III contains a proposal for policy change.

Journal Article•DOI•
TL;DR: In this article, the authors present an input-output based projection for the State of Washington for the year 1980, which is based on the 1963 state of Washington table and is given as a datum.
Abstract: REGIONAL input-output tables have long been acclaimed as useful tools in regional forecasting, especially long-run forecasts, yet surprisingly few regional projections have made a serious attempt to use them. This paper reports on one effort in this direction: an input-output based projection for the State of Washington for the year 1980. Because of the time constraints no effort will be made to describe how the 1963 State of Washington table was developed. Rather, it is given as a datum [3]. In addition, again because of time, not all of the details in the projection process are included. Finally, because few are interested in the results, no bulky tables have been included.


Journal Article•DOI•
TL;DR: In this article, the Lindahl voluntary exchange decision rule is used to decide how much public good to supply to a group of consumers. But it does not consider the distribution of the public goods among the consumers.
Abstract: AS Professor Samuelson has recently redemonstrated, the following two problems cannot be logically separated: (1) how much of a public good it is efficient to produce, (2) how in justice the costs of the good are to be borne by the public.1 Even under the stringent assumptions of constant marginal cost for the public good, and constant marginal utility of income for all consumers, allocative efficiency in no way logically determines how cost burdens should be shared even when the income distribution, before taxes and before public good production, is considered just. As a result, public authorities are generally denied the luxury of sequential, independent, or separable decision rules for allocative efficiency and distributional equity. An omniscient decision maker, interested in maximizing social welfare as defined by some social welfare function, must simultaneously determine the quantity of the public good to produce, the share of the cost burden so generated to be charged each person, and income transfers among individuals or groups. There is, however, one tax-allocation-and-public-good-supply decision rule, namely, the "Lindahl voluntary exchange decision rule," which leaves the initial (i.e., pre-tax, pre-benefit) income distribution unchanged, and hence can argue for separation between allocation and distribution decisions. These conclusions derive from the following propositions: 1) Where the costs of public good production must be shared by individuals in predetermined proportions (by customs or fixed tax laws, etc.) and no direct income transfers are allowed among individuals, the decision of how much public good to supply is ethical. At the "optimal" supply feasible under these conditions, the MC of production need not equal the sum of individual MRS's. In this case the restrictions on transfers and on variable tax rates generally insure that the best feasible outcome is not Pareto-efficient.
2) If either direct lump-sum income transfers or variable cost-sharing tax burdens are allowed, such that the authority deciding how much public good to produce can also vary one of these two factors, then the choice of a final utility distribution dictates a unique Pareto optimal public goods supply decision. This public goods supply will be efficient in the sense that MC = Σ MRS; it need not be true, however, that each individual's MRS equals that individual's marginal cost share. Relaxing either of the restrictions in (1) insures that Pareto efficiency in resource allocation can be achieved. There exists an infinite number of Pareto optimal public goods supplies, each related to a particular utility distribution. 3) If both tax shares are variable and lump-sum income transfers are allowed, then the optimal utility distribution is Pareto-efficient (i.e., MC = Σ MRS) and can be achieved as a "Lindahl solution" to the public good supply problem (i.e., MRS of each individual equals that individual's marginal cost share). In this case also, as in 2, the utility distribution choice determines a particular level of public goods supply. One purpose of this paper is to demonstrate the foregoing propositions. This is done in part I with the aid of ordinary box diagrams. In part II the demonstration is repeated with simple mathematics. Part III summarizes the implications of the foregoing for the theory of taxation and expenditure and particularly for the viability of the theory of the public household as containing separable allocation and distribution branches.
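In standard notation, the efficiency and Lindahl conditions invoked above can be written compactly (this is the usual textbook statement, not the paper's own equations). With m consumers, a public good G, and a private numeraire x:

```latex
% Samuelson efficiency condition for the public good:
\sum_{i=1}^{m} MRS^{\,i}_{G,x} \;=\; MC_G
% Lindahl solution: personalized cost shares t_i satisfy
t_i \, MC_G \;=\; MRS^{\,i}_{G,x}, \qquad \sum_{i=1}^{m} t_i = 1
```

Summing the Lindahl share conditions over individuals recovers the Samuelson condition, which is why the Lindahl rule delivers a Pareto-efficient supply while also pinning down each person's cost share.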

Journal Article•DOI•
TL;DR: In this paper, the authors demonstrate the importance of using disaggregated data even when micro-components exhibit different behaviors, and demonstrate that models estimated from micro data will give generally superior out-of-sample forecasts.
Abstract: IN a previous article with Professor Harold Watts, the authors demonstrated empirically the loss of information in the parameter estimators when data are aggregated prior to computing least-squares regressions [3]. These results came from simulations with a simple economic model containing identical microcomponents. Specifically, in addition to the error term, each component spent 0.9 of its previous income and 0.2 of its cash balance. The main point of our previous paper was that estimation prior to aggregation yielded substantially greater precision in the estimates of the parameters and their standard errors than did estimation of the same parameters after aggregation. The implications of this for hypothesis testing and the development of satisfactory policy response models seemed obvious. On the basis of a variety of evidence, including the paper with Watts and a paper by Orcutt [4], the case for seeking and frequently using disaggregated data seemed strong but one nagging concern remained. Suppose, as seems likely, the microcomponents exhibit different behaviors. In this case it might not be sensible to pool the data and treat it as a single sample from a single universe. However, if estimators from each micro equation are computed separately, would it still be desirable to use disaggregated data instead of data aggregated over all components? This turned out to be the case with identical components but would it be with nonidentical components in which something more than constant terms were different? This paper copes directly with this issue, and we demonstrate the importance of using disaggregated data even when microcomponents exhibit different behaviors. We do not deal with cases where microcomponents have nonlinear relations, but the need for disaggregated data in such cases seems fairly obvious without Monte Carlo experiments. 
If we wish to compare the accuracy of estimation at different levels of aggregation, we need a measure of merit different from the extent of bias and variance of parameter estimators, which we used in our previous study, because in an aggregate model whose components have different behaviors, the expected values of the estimators may be meaningless or nonstationary [Zellner, pp. 3-5]. Therefore, we use the accuracy of the out-of-sample forecasts to measure the merit of the estimated equations. In particular, we forecast the aggregate expenditure for the eight time periods following the last sample period. The root-mean-square forecast errors from models based on data at different levels of aggregation provide the yardstick for comparisons. Our results suggest that models estimated from micro data will give generally superior out-of-sample forecasts. This finding is at variance with the belief that one reaps an "aggregation gain" by aggregating the micro data prior to estimation. The concept of a possible aggregation gain was formalized in a 1960 article in this Review by Grunfeld and Griliches:
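The design can be illustrated in miniature. In this sketch (entirely my own construction, far smaller and simpler than the paper's Monte Carlo design) two micro units have different marginal propensities; estimating each micro equation separately and summing the forecasts beats fitting one equation to the aggregated data when the units' shares shift out of sample:

```python
import numpy as np

# Two micro units with different propensities b1, b2 (no noise, so the
# micro equations are exactly recoverable while the aggregate one is not).
b1, b2 = 0.9, 0.5
x1 = np.array([1.0, 2.0, 3.0, 4.0]);  x2 = np.array([4.0, 3.0, 2.0, 1.0])
y1, y2 = b1 * x1, b2 * x2

# Aggregate model: regress total y on total x over the sample period.
X, Y = x1 + x2, y1 + y2
c = (X @ Y) / (X @ X)                      # least squares, no intercept

# Out-of-sample periods with a different micro mix.
x1f = np.array([5.0, 6.0]);  x2f = np.array([1.0, 2.0])
actual = b1 * x1f + b2 * x2f

# Micro-based forecast: fit each unit separately, then sum the forecasts.
b1_hat = (x1 @ y1) / (x1 @ x1)
b2_hat = (x2 @ y2) / (x2 @ x2)
micro_fc = b1_hat * x1f + b2_hat * x2f
agg_fc = c * (x1f + x2f)

rmse = lambda f: float(np.sqrt(np.mean((f - actual) ** 2)))
rmse_micro, rmse_agg = rmse(micro_fc), rmse(agg_fc)
```

Here the micro forecasts are exact while the aggregate equation, forced to use one averaged coefficient, misses whenever the composition of total x changes, which is the mechanism behind the paper's root-mean-square comparisons.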

Journal Article•DOI•
TL;DR: In this article, the authors show that double deflation can be justified as a fixed-weight linear approximation to an ideal variable-weight logarithmic index under assumptions no more restrictive than those required to justify the notion of real value added itself.
Abstract: The practice which we will refer to here as "double-deflation" is a technique for arriving at a measure of real value added when one has available the value of gross output and materials inputs and also price indices for gross output and for materials inputs. The double deflation technique, despite its rather wide use, has been regarded as crudely empirical, with little, if any, justification from the point of view of theory.1 The purpose of this note is to demonstrate that one can justify double-deflation as a fixed-weight linear approximation to an ideal variable-weight logarithmic index under assumptions no more restrictive than those required to justify the notion of "real value added" itself. Analysis of production relations is simpler if we can restrict ourselves to looking at two inputs at a time. Hence in studies using disaggregated data, it is convenient to consider the contribution of capital and labor to gross output separately from the contribution of materials inputs. In order for such separate treatment to be justified, the production function must be separable. Taking y to be gross output, K to be capital, L to be labor, and M to be materials, the separability condition required is
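As a worked illustration of the mechanics (the function and the numbers are my own, not the paper's): given the value of gross output and of materials inputs, together with their price indices, double deflation defines real value added as deflated gross output minus deflated materials input.

```python
def double_deflated_va(gross_output_value, output_price_index,
                       materials_value, materials_price_index):
    """Real value added = (gross output deflated by its price index)
    minus (materials input deflated by its price index)."""
    real_output = gross_output_value / output_price_index
    real_materials = materials_value / materials_price_index
    return real_output - real_materials

# Example: nominal value added is 100 - 40 = 60, but output prices rose
# 25% while materials prices fell 20% relative to the base year, so real
# value added is 80 - 50 = 30.
real_va = double_deflated_va(100.0, 1.25, 40.0, 0.80)
```

The example shows why the two deflators must be kept separate: deflating nominal value added by a single price index would miss the divergence between output and materials prices.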

Journal Article•DOI•
TL;DR: In this paper, a measure of vertical integration in the corporate sector is developed and calculated for the year 1929 and for the period 1948 through 1965. The conclusion reached on the basis of this empirical evidence is that there has been no discernible increase in the degree of vertical integration in the corporate sector.
Abstract: WHEN the surface of the economist is scratched we generally find a belief that vertical integration in the corporate sector has increased during the past few decades, if not longer. This proposition, however, has not been put to a rigorous empirical test for the entire corporate sector. According to Professor Bain, "We must, in the present state of knowledge, confine ourselves to a few remarks based on miscellaneous scraps of evidence." 1 In this note a measure of vertical integration in the corporate sector is developed. The measure is calculated for the year 1929 and for the period 1948 through 1965. The conclusion reached on the basis of this empirical evidence is that there has not been any discernible increase in the degree of vertical integration in the corporate sector. If anything, there might have been a slight decline. The index we use is the ratio of corporate sales to gross corporate product, standardized to abstract from the changes in output mix. A rise in this index implies a decline in corporate vertical integration and vice versa.2 Because industry sales data are on a consolidated basis by corporation and most of the gross corporate product is on an establishment basis, this series reflects the preponderance of any general movements on the part of corporations to merge with suppliers or customers. If, for example, firm A has a gross corporate product of 500 and sales to firm B of 1000 (firm A's purchased material inputs are 500) and firm B has a gross corporate product of 500 and sales of 1500, then total corporate sales for both firms equal 2500 and total gross corporate product equals 1000. In this instance the ratio of sales to gross corporate product equals 2.5. If these two firms merge, total corporate sales will then be 1500 and gross corporate product will still be 1000. The new ratio of corporate sales to gross corporate product will be 1.5. Vertical integration has caused a decline in our ratio. As is readily apparent, neither pure horizontal integration nor a pure conglomerate movement will affect our ratio.3

There are natural differences among industries which preclude the meaningfulness of comparing the degree of vertical integration in one industry with that of any other industry. Thus a corporation in the service or mining industry will naturally have a much lower sales to gross corporate product ratio than a corporation in the retail or wholesale trade industry. If the proportional mix of total gross product is changing, we could very easily find a change in the aggregate sales to gross product ratio without any changes in this ratio for any specific industry. Any conclusions about changes in the ratio which are due to such changes in the proportional mix imply interindustry comparisons. In order to avoid the mix problem we calculate the aggregate ratio using the proportional mix of one base period. More explicitly, our methodology is as follows: For any year t, total corporate sales, St, is equal to the sum of total corporate sales for each industry i. Thus,
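The numerical merger example above can be checked directly. This is a trivial sketch using the paper's hypothetical firm data; the function name is mine:

```python
def sales_to_gcp_ratio(firms):
    """Ratio of total corporate sales to total gross corporate product.
    Each firm is a (sales, gross_corporate_product) pair; sales between
    units of one consolidated firm are assumed already netted out."""
    total_sales = sum(sales for sales, _ in firms)
    total_gcp = sum(gcp for _, gcp in firms)
    return total_sales / total_gcp

# Before the merger: A sells 1000 (all to B), B sells 1500; each has GCP 500.
before = sales_to_gcp_ratio([(1000, 500), (1500, 500)])   # 2500 / 1000
# After a vertical merger, A's sales to B become intra-firm and disappear
# from consolidated sales; gross corporate product is unchanged.
after = sales_to_gcp_ratio([(1500, 1000)])                # 1500 / 1000
```

The ratio falls from 2.5 to 1.5 purely because intermediate sales are consolidated away, while a horizontal or conglomerate merger would leave both the numerator and the denominator unchanged.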

Journal Article•DOI•
TL;DR: In this paper, the authors make a comparison between the optimality conditions for the case of public goods and the optimal conditions for private goods, and show that when the number of persons on both sides of the market increases, the problem becomes more indeterminate rather than less.
Abstract: 1. THE theory of public goods 1 is sometimes confused with the theory of joint production. This is in the nature of a pun, or a play on words: for, as I have insisted elsewhere, as we increase the number of persons on both sides of the market in the case of mutton and wool, we converge in the usual fashion to the conditions of perfect competition. But when we increase the number of persons in the case of a typical public good, we make the problem more indeterminate rather than less. To elucidate the difference, I shall fill in what appears to be a minor gap in the literature, namely a needed statement in terms of modern welfare economics of the various optimality conditions as they appear in the case of joint products. The analysis is straightforward; and after it is before us, we can clearly see the difference between it and the well-known optimality conditions for the case of public goods. 2. I begin with an examination question given recently at the Massachusetts Institute of Technology: "Corn is produced by land and labor; and so are wool-bearing, mutton-bearing sheep. Assume the totals of available land and labor to be fixed. Write down the various welfare optimality conditions in the case where all people happen always to consume wool and mutton in the same proportions that sheep bear these products. And then, by contrast, write down the conditions that would have to prevail if individuals' indifference contours for wool, mutton, and corn involve the usual variability of proportions." This proved a difficult question for first-year graduate students in economic theory. Still many perceived that in the first case they could essentially work with two rather than three goods, substituting sheep as a kind of composite good for wool and mutton, and thereby ending up with the standard welfare conditions for two ordinary (private) goods, corn and sheep.




Journal Article•DOI•
TL;DR: In this paper, the authors show that the model used by Eisner and Nadiri directly contradicts their own original model [14, 18, 20]: if either model is valid, the other is false, so evidence predicated on the validity of one is not relevant to the validity of the other.
Abstract: IN a recent paper Robert Eisner and M. I. Nadiri [8] have raised several issues that bear on the development of the neoclassical theory of investment behavior. Specifically, they attempt to present evidence on the following issues: (1) Is the elasticity of substitution different from unity? (2) Is the general Pascal form of the distributed lag function, relating investment to its underlying determinants, appropriate? (3) Is replacement proportional to capital stock? These issues are of some significance and deserve further analysis; however, they cannot be resolved on the basis of evidence presented by Eisner and Nadiri. Our first purpose is to demonstrate that the model used by Eisner and Nadiri directly contradicts the model we originally presented [14, 18, 20]. If our model is valid, the Eisner-Nadiri model is false and vice versa. Evidence predicated on the validity of either model is not relevant to the validity of the other. Our second purpose is to consider the Eisner-Nadiri model as a potential contribution to the neoclassical theory of investment behavior in its own right. Unfortunately, even a cursory examination of the internal logic of the Eisner-Nadiri model reveals a theoretical lapse; they combine increasing returns with competitive equilibrium. Aspects of our maintained hypothesis, such as constant returns to scale, can be imposed on the Eisner-Nadiri model, eliminating the internal contradiction in their theoretical analysis. Their results have been reexamined from this point of view by Charles Bischoff [3]. He finds that the stochastic specification they have employed is not supported by their data. For a correctly specified model of the Eisner-Nadiri type, the conclusions that they have reached are reversed. Our examination of the Eisner-Nadiri model leaves open the questions they raise concerning the validity of various aspects of our maintained hypothesis.
The specific issue discussed at length by Eisner and Nadiri, whether the elasticity of substitution is unity, is of interest in judging the validity of the maintained hypothesis underlying our model. In Eisner's extensive series of empirical studies of investment behavior the maintained hypothesis is that the elasticity of substitution is zero. Thus, evidence on the elasticity of substitution bears equally on the validity of our maintained hypothesis and on that of Eisner's "permanent income" theory of investment. Substantial evidence is available on the value to be assigned to the elasticity of substitution. This evidence has been generated by detailed empirical testing extending over nearly a decade. The available evidence for manufacturing industries of the United States in the aggregate or at the two-digit level, where our work has been concentrated, provides strong support for our maintained hypothesis that the elasticity of substitution is unity. This evidence directly contradicts the maintained hypothesis underlying Eisner's research on investment. Our overall conclusion is that important issues remain to be resolved in implementation of the neoclassical theory of investment behavior, but they are not the issues raised by Eisner and Nadiri. We conclude with a brief review of the most significant of these issues.
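The contending maintained hypotheses can be made concrete with the CES production function, of which both are polar cases. This is standard textbook background, not notation from the paper itself:

```latex
% CES production function; sigma = 1/(1+rho) is the elasticity of substitution
Q = A\left[\delta K^{-\rho} + (1-\delta)L^{-\rho}\right]^{-1/\rho},
  \qquad \sigma = \frac{1}{1+\rho}.
% As rho -> 0 (sigma -> 1), the CES form converges to Cobb-Douglas,
% the maintained hypothesis of the neoclassical model:
\lim_{\rho \to 0} Q = A K^{\delta} L^{1-\delta}.
% As rho -> infinity (sigma -> 0), it converges to the fixed-proportions
% (Leontief) form underlying Eisner's maintained hypothesis:
\lim_{\rho \to \infty} Q = A \min(K, L).
```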


Journal Article•DOI•
TL;DR: In this paper, the authors apply Bayesian and sequential decision theory to a normative model of a bank's Government security portfolio selection, handling both the expectationally stochastic and the dynamic aspects of this important decision problem simultaneously and consistently.
Abstract: THE object of this paper is to formulate a normative model for selecting a bank's Government security portfolio. Two major problems arise in constructing a model of bank portfolio selection. First, the model must handle uncertainty. This includes not only uncertain future events but also the decision maker's preferences for the outcomes associated with these events. Second, it must recognize the intertemporal or multi-period character of the decision making process. This means that a decision made in one period will influence subsequent decisions and hence, that subsequent decisions must be considered in arriving at the present one. The present paper applies Bayesian and sequential decision theory to handle both the expectationally stochastic and the dynamic aspects of this important decision problem simultaneously and consistently. No previous model of commercial bank portfolio selection handles either or both problems satisfactorily. Porter's model of bank asset selection recognizes uncertainty by treating future cash flows and security prices as random variables, but it is only one period in length. Moreover, it does not consider the decision maker's preferences. Since the objective function is linear, the model produces a portfolio diversified between securities and loans only through the selection of distribution functions describing the random variables. These transform the function into a nonlinear one upon integration. Cheng's model of bank security portfolio selection is, in effect, a one-period formulation also. It incorporates uncertainty and the decision maker's preferences through Markowitz's efficient portfolio concept. An efficient portfolio is one which maximizes expected return for a given variance of return (or minimizes the variance of return for a given expected return).
As Tobin points out, however, this criterion assumes, quite restrictively, that either the variable return is normally distributed or that the decision maker has a quadratic utility function. Cheng also makes the highly unrealistic assumption that securities are held to maturity. Multi-period bank portfolio selection models are all based on the assumption that future events are known with certainty. One such model, formulated by Chambers and Charnes, attempts to reflect the risk inherent in different portfolio configurations by including the Federal Reserve's capital adequacy formula as a constraint. Used in the supervision of banks, the capital adequacy formula allocates a bank's capital to designated asset categories on a fractional basis. The values of the fractions are designed to measure the percent by which the different asset categories would decline in market value if they had to be liquidated quickly. The choice of these values is somewhat arbitrary. Moreover, the formula itself implicitly assumes a particular preference structure and a certain probabilistic occurrence of future events. Neither assumption is likely to represent accurately either the decision maker's preferences or expectations.
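Markowitz's efficient-portfolio criterion, which Cheng's model borrows, can be sketched for the simplest two-asset case: minimize portfolio variance for given means, variances, and correlation. The function names and all numerical values below are hypothetical illustrations, not part of any model discussed above:

```python
# Minimal sketch of Markowitz's mean-variance idea for two risky assets.
# All parameter values are hypothetical; an actual bank model would cover
# many securities and add balance-sheet constraints.

def portfolio_stats(w, mu1, mu2, s1, s2, rho):
    """Expected return and variance of a portfolio with weight w in asset 1."""
    mean = w * mu1 + (1 - w) * mu2
    var = (w * s1) ** 2 + ((1 - w) * s2) ** 2 + 2 * w * (1 - w) * rho * s1 * s2
    return mean, var

def min_variance_weight(s1, s2, rho):
    """Closed-form weight in asset 1 that minimizes portfolio variance."""
    cov = rho * s1 * s2
    return (s2 ** 2 - cov) / (s1 ** 2 + s2 ** 2 - 2 * cov)

if __name__ == "__main__":
    mu1, mu2 = 0.06, 0.03   # expected returns (hypothetical)
    s1, s2 = 0.20, 0.08     # standard deviations of return
    rho = 0.2               # correlation between the two returns
    w = min_variance_weight(s1, s2, rho)
    print(portfolio_stats(w, mu1, mu2, s1, s2, rho))
```

An efficient frontier is traced by sweeping the target expected return and minimizing variance at each point; the single minimum-variance portfolio above is its left endpoint.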

Journal Article•DOI•
TL;DR: In this paper, the authors report on the validation of two types of data collected in the 1963 Survey of Financial Characteristics (SFC), a study conducted by the Bureau of the Census for the Federal Reserve Board.
Abstract: THIS is the second of two articles reporting on the validation of two types of data collected in the 1963 Survey of Financial Characteristics (SFC), a study conducted by the Bureau of the Census for the Federal Reserve Board [6]. The SFC was designed to obtain information on the complete financial positions of a national probability sample of families, with over-representation of the high-income groups. Earlier, small-scale studies had indicated that data such as those collected in the SFC might be subject to large errors of response [2, 4, 5]. Hence, the Inter-University Committee for Research on Consumer Behavior and the Bureau of the Census, with the cooperation of the Federal Reserve Board, undertook to validate two assets collected in the SFC: savings accounts and common stock. The approach was to compare household interview reports of these two types of assets with institutional records for the same assets. The household interviews were carried out using questionnaires and interview procedures identical to those of the SFC, and with the same interviewing organization. Hence, in addition to providing information on the accuracy of the data that were collected, the results should help in the development of models and procedures to yield more accurate statistics in future financial surveys. It should be stressed that the data in the study do not provide direct estimates of response or nonresponse errors in the corresponding national study of consumer financial characteristics. This is because the validation study differed from the national study in five major respects, as follows:


Journal Article•DOI•
TL;DR: In this paper, the authors formulate and estimate a cost function for the United States local service airline industry, taking into account characteristics of the industry and its regulation by the Civil Aeronautics Board (CAB) that influence the form of the cost function and the method of estimation, and present tentative conclusions.
Abstract: IN this study we formulate and estimate a cost function for the United States local service airline industry. Section I discusses certain characteristics of the industry and its regulation by the Civil Aeronautics Board (CAB) which influence the form of the cost function and the method of estimation chosen. The model is outlined in section II. The data available and the method of estimation are discussed in section III. Some tentative conclusions are presented in section IV.


Journal Article•DOI•
TL;DR: In this paper, the authors present a method for obtaining consistent estimates of the parameters in the Engel relationship, and apply the method to a sample of 816 households in rural Kenya.
Abstract: IN estimating Engel curves, it is common to use total expenditure in place of income as an explanatory variable. Summers [6] has shown that least-squares estimators will then be inconsistent. More recently, Liviatan [4] has shown that using income as an instrumental variable will permit consistent and relatively efficient estimation of the expenditure elasticities. However, when the sample consists of households in a less-developed country (LDC) that produce partly for subsistence, there are further sources of bias. Estimates may be asymptotically biased because of the spread between the buying and selling price of many food items in a partial subsistence economy; a household that is a net buyer of an item is confronted with a higher effective price than a household that is a net seller. Also, the arbitrary valuation of home-produced goods introduces possibly serious errors of observation with respect to both the regressand and regressor, resulting in the usual errors-in-variables bias. This paper discusses these sources of bias, presents a method for obtaining consistent estimates of the parameters in the Engel relationship, and applies the method to a sample of 816 households in rural Kenya.
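The instrumental-variable idea attributed to Liviatan can be illustrated with a deliberately small synthetic data set. The helper functions and numbers below are hypothetical, chosen only to make the attenuation bias of least squares visible:

```python
# Sketch of instrumental-variable estimation of an Engel-curve slope:
# regress item expenditure y on measured total expenditure x, using
# income z as the instrument.  The data are synthetic illustrations.

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    """Sample covariance (dividing by n; the ratio of two covs is unaffected)."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

def ols_slope(x, y):
    """Least-squares slope: biased toward zero when x carries measurement error."""
    return cov(x, y) / cov(x, x)

def iv_slope(z, x, y):
    """Instrumental-variable slope: consistent if z is correlated with true
    expenditure but uncorrelated with the measurement error."""
    return cov(z, y) / cov(z, x)

if __name__ == "__main__":
    true_x = [1.0, 2.0, 3.0, 4.0]              # true total expenditure
    err = [0.5, -0.5, -0.5, 0.5]               # valuation error, uncorrelated with true_x
    x = [t + e for t, e in zip(true_x, err)]   # measured expenditure
    y = [2.0 * t for t in true_x]              # item expenditure; true slope = 2
    z = true_x                                 # income as instrument (stylized)
    print(ols_slope(x, y), iv_slope(z, x, y))
```

With these numbers the least-squares slope is pulled below the true value of 2 while the instrumental-variable slope recovers it, which is the point of Liviatan's estimator; the paper's own contribution is handling the additional LDC-specific biases on top of this.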

Journal Article•DOI•
TL;DR: In this paper, the authors argue that demographic studies of fertility differentials rely on simple correlations or tabular presentations limited to two or three dimensions, while economists' multiple-regression results, though statistically stronger, usually afford little insight into the fundamental relationship between population growth and economic conditions.
Abstract: DISTRIBUTION of the population by urban, rural nonfarm and farm residence is one of the oldest and most established causes of differentials in fertility cited by demographers.1 Geographical region is also frequently mentioned in demographic studies as a source of variation in fertility in the United States. Race and social class, the latter often measured by occupational status, are additional variables popular with both demographers and less specialized sociologists as sources of variation in fertility.2 In most of the studies by demographers, a major difficulty with the factors proposed as causes of variation in fertility is that there is no way of knowing whether the variables are separate and independent explanations of birth rate differentials. To some extent, this is due to the fact that the statistical methodology consists of simple correlations, or more frequently, tabular and graphical presentations, which limit the analysis to two or three dimensions. A more fundamental criticism is that the discussion of the causal relationship between the independent variables and the fertility differentials does not attempt to assay whether basic factors, such as income and economic conditions, and the costs and benefits of having children, are common explanations of the variations in fertility by community of residence, geographical region, class, race, etc. Analyses of birth rate differentials by economists usually are based on a stronger statistical methodology than the studies by demographers but suffer from a similar weakness in their analytical formulation. The independent contribution of the variables to fertility differentials is determined by means of multiple regression or partial correlation analysis. Although meaningful statistically, the regression results usually afford little insight into the fundamental relationship between population growth and economic development and/or economic conditions.
One source of confusion in economic cross-section studies of birth rates may be the unfortunate choice of data. Some economists apparently ignore the demographers' findings that a considerable portion of the variance in fertility within a country is due to geographical differentials, and attempt a cross-section analysis of fertility on a very heterogeneous sample composed of different countries.3 It would seem that a more homogeneous sample of observations within a country is a more propitious beginning for an interpretation of the relationship between birth rates and economic variables. Economists have benefited from one of the findings of demographers. A popular variable for inclusion is the fraction of the population classified as farm or the per cent of the labor force employed in nonagricultural industries. Weintraub suggests that the ratio of the popu-
* This paper is a revision of an earlier version presented at the annual meeting of the Western Economic Association at Corvallis, Oregon, August 1968. It has benefited from a critical reading by our colleague Jerzy F. Karcz, who made a number of helpful suggestions. Our appreciation goes to Dana Burtness, who assisted us in all phases of this study but especially in making our interactions with the IBM 360 Computer pleasant. Additional credit goes to John Danforth and Ken Gralla, who assisted us in the early stages of this study.
1 Ben Franklin noted this causal relationship between birth rate differentials and population distribution as early as 1786.
2 The following references are typical of the analysis of differential fertility by demographers: Donald J. Bogue, The Population of the United States, The Free Press of Glencoe, Illinois (1959), chapter 12, "The Fertility of the United States Population" (contributed by Wilson H. Grabill); Warren S. Thompson, Population Problems, McGraw-Hill Book Co. (1965), chapter 11, "Some Factors Affecting Fertility." As an example in the same vein by a sociologist, refer to T. Lynn Smith, Fundamentals of Population Study, J. B. Lippincott Co. (1960), chapter 13, "Differential Fertility."
3 Typical of the multiple regression cross-section analyses of fertility differentials in different countries conducted by economists are Irma Adelman, "An Econometric Analysis of Population Growth," The American Economic Review, 53, no. 3 (1963); Robert Weintraub, "The Birth Rate and Economic Development: An Empirical Study," Econometrica, 30, no. 4 (Oct. 1962).