
Showing papers in "The Review of Economics and Statistics in 1971"


Journal Article•DOI•
TL;DR: ChowLin distributes a series, changing the frequency to a higher one while maintaining the sum over each period, using the Chow-Lin (1971) or a related procedure.
Abstract: ChowLin distributes a series, changing the frequency to a higher one while maintaining the sum over each period, using the Chow-Lin (1971) or a related procedure. The newer procedure disaggregate.src is a better choice. Chow and Lin (1971), "Best Linear Unbiased Interpolation, Distribution and Extrapolation of Time Series by Related Series," Review of Economics and Statistics, vol. 53, 372-375. Fernandez (1981), "A Methodological Note on the Estimation of Time Series," Review of Economics and Statistics, vol. 63, 471-478. Litterman (1983), "A Random Walk, Markov Model for the Distribution of Time Series," JBES, vol. 1, 169-173. (This abstract was borrowed from another version of this item.)
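The distribution step described above (raising a series to a higher frequency while preserving the sum over each low-frequency period, guided by a related indicator) can be sketched as follows. This is a simplified regression-based variant, effectively Chow-Lin with serially uncorrelated errors rather than the full GLS procedure, and the function name and data are illustrative.

```python
import numpy as np

def distribute(annual, indicator, k=4):
    """Distribute low-frequency totals across k sub-periods each,
    guided by a related high-frequency indicator, so that the
    sub-period estimates sum exactly to the original totals."""
    annual = np.asarray(annual, dtype=float)   # n low-frequency totals
    x = np.asarray(indicator, dtype=float)     # k*n indicator values
    n = annual.size
    C = np.kron(np.eye(n), np.ones((1, k)))    # (n x k*n) aggregation matrix
    # regress the totals on the aggregated indicator
    beta, *_ = np.linalg.lstsq(C @ x.reshape(-1, 1), annual, rcond=None)
    fitted = x * beta[0]                       # high-frequency fitted values
    resid = annual - C @ fitted                # low-frequency residuals
    # spread each period's residual equally across its k sub-periods,
    # which restores the adding-up constraint exactly
    return fitted + np.repeat(resid / k, k)
```

By construction the quarterly estimates add up to the annual figures; the full Chow-Lin procedure would instead distribute the residuals with GLS weights implied by an AR(1) error process.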

1,136 citations


Journal Article•DOI•
TL;DR: In this article, it is shown that perfect arbitraging is not always possible: there is a class of important cases in which arbitraging cannot be usefully implemented, so the belief that arbitraged prices must follow a martingale is unfounded.
Abstract: ROUGHLY speaking, a competitive market of securities, commodities or bonds may be considered efficient if every price already reflects all the relevant information that is available. Arrival of new information causes imperfection, but it is assumed that every such imperfection is promptly arbitraged away. In the special case where there is no risk aversion and the interest rate is zero, it can be shown that if an arbitraged price is implemented, it must follow a "martingale random process," the definition of which will be recalled shortly. However, even on the theoretical level, the problem of market efficiency is not settled by writing the preceding definition. There exists indeed a class of important cases where useful implementation of arbitraging is impossible. The principal purpose of this paper is to describe these cases and to show why I consider them interesting. Moreover, numerous related issues will be solved along the way. Since my purpose is merely to illustrate, I am allowed to restrict the scope of the problem drastically to avoid extraneous complications. I assume, first, that the process of arbitraging starts with a well-defined single price series P0(t) which presumably summarizes the interplay of supply, demand, etc., in the absence of arbitraging. Specifically, the increments of P0(t) will be assumed to be generated by a stationary finite variance process. Further, unless P0(t) is itself a martingale, I assume the purpose of arbitraging is to replace P0(t) by a different process P(t) that is a) a martingale and b) constrained not to drift from P0(t) without bound. Had not P(t) and P0(t) been constrained in some such way, the problem of selecting P(t) would have been logically trivial and economically pointless. Specifically, we shall seek to achieve the smallest possible mean square drift: the variance of P(t) - P0(t) must be bounded for all t's and as small as possible. 
In addition, we shall assume that the martingale P(t) is linear, that is, related to P0(t) linearly; we shall explain how and why. Under these restrictions, the results of this paper fall under three main headings: A) A necessary and sufficient condition for the existence of the required P(t) is roughly as follows: as the lag s increases, the strength of statistical dependence between P0(t) and P0(t+s) must decrease "rapidly," in a sense to be characterized later on. If an arbitraged price series P(t) exists, its intertemporal variability depends upon the process P0(t), and may be either greater or smaller than the variability of P0(t). More often, arbitraging is "destabilizing," but under certain circumstances, it is stabilizing.2 Note that a market specialist, in order to "insure the continuity of the market," must stabilize the variation of price. Under the usual circumstances under which perfect arbitraging would be destabilizing, the specialist prevents arbitraging from working fully and prevents prices from following a martingale. B) The case when the strength of statistical dependence of P0(t) decreases very slowly must be examined. In this case, the belief that perfect arbitraging is possible and leads to a martingale is unfounded. Contrary to what one might have thought, such cases are much more than a mathematical curiosity. Indeed, most economic time series exhibit a "Joseph Effect"
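The definition the abstract promises to recall is the standard one. In conventional notation (a restatement for the reader, with $\mathcal{F}_t$ denoting the information available at time $t$), the martingale property is

```latex
\mathbb{E}\left[\, P(t+s) \mid \mathcal{F}_t \,\right] = P(t) \qquad \text{for all } s > 0,
```

and the arbitraged series is chosen so that the mean square drift $\operatorname{Var}\left[P(t) - P_0(t)\right]$ is bounded in $t$ and as small as possible, per the constraint stated above.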

474 citations




Journal Article•DOI•
TL;DR: In this article, the authors show that exports, through market diversification, tend to stabilize the firm's sales, and the larger the spread of these exports over several markets the more stable the sales.
Abstract: FOLKLORE has it that foreign markets are more risky than domestic markets because of political, economic, and social instability abroad. A normative implication of this belief, sometimes mentioned in the literature, is that a firm must establish itself in the domestic market before venturing into foreign markets; otherwise, it is argued, the inherent instability associated with exports might seriously damage the firm's operations. It is shown here that such implications are at variance with the diversification principle in portfolio theory. Specifically, an individual project might be very risky, yet its incorporation with other projects may decrease the overall risk of the portfolio. The overall risk of a group of projects is affected mainly by the relationships among these projects and only slightly by the individual riskiness of each. The hypothesis advanced in this study was that exports, through market diversification, tend to stabilize the firm's sales, and the larger the spread of these exports over several markets the more stable the sales. This hypothesis was tested on data selected from a sample of about 500 firms in Denmark, the Netherlands, and Israel. Results of the test were consistent with the hypothesis: sales stability and diversification of exports are indeed positively correlated.
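The diversification principle invoked above can be checked with a small numerical example (the weights and covariances below are illustrative, not from the study): a portfolio split across two risky, imperfectly correlated projects has lower variance than either project alone.

```python
import numpy as np

# two projects (e.g., sales in two export markets), each individually risky
var_a, var_b = 4.0, 4.0
cov_ab = -1.0                       # imperfectly (here negatively) correlated
cov = np.array([[var_a, cov_ab],
                [cov_ab, var_b]])
w = np.array([0.5, 0.5])            # split sales equally across the markets

# portfolio variance: w' * Cov * w = 0.25*4 + 0.25*4 + 2*0.25*(-1) = 1.5
portfolio_var = float(w @ cov @ w)
print(portfolio_var)                # 1.5, well below either project's 4.0
```

The reduction comes almost entirely from the covariance term, matching the abstract's point that overall risk is governed mainly by the relationships among projects, not their individual riskiness.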

148 citations


Journal Article•DOI•
TL;DR: This article shows that existing tests of the assumption that replacement investment is proportional to the capital stock do not imply rejection of the following alternative hypothesis: replacement investment varies around some average nonzero level in a way which is systematically related to other short-run economic forces.
Abstract: DURING the past two decades, gross investment has on average been divided approximately equally between net capital accumulation and replacement. Because of its relation to economic growth, the process of net accumulation has received nearly all of the attention in the investment literature. However, gross investment is the important variable for aggregate demand and therefore for stabilization. Moreover, an understanding of the process of replacement and modernization investment is necessary for a correct analysis of expansion investment. Recent econometric studies of investment behavior, both those in the neoclassical tradition and of the flexible accelerator type, rely on the assumption that replacement investment (Ir) is proportional to the capital stock (K). In some studies, this assumption is used to estimate a replacement investment series by finding a constant proportional depreciation rate (δ) which reconciles the gross investment during the period being studied with the capital stock at the starting and ending dates.1 This replacement series is then subtracted from gross investment (Ig) to yield a net investment series (In) which serves as the dependent variable in the regression analysis. In other studies, gross investment is used as the dependent variable; the lagged capital stock is then added to the regressors and its coefficient is assumed to estimate the rate of depreciation. Common to both methods, however, is the assumption that the ratio of replacement investment to the capital stock is constant. This assumption has two important effects. First, it influences the estimated parameters of the net investment behavior. Second, it implies that gross investment can be explained and forecast by a simple mechanical "technological" rule once net investment behavior and the starting capital stock are given. 
As Jorgenson and Stephenson [13] have emphasized, induced replacement investment can be a very important part of the short-run demand effect of changes in the policy variables that influence the accumulation of net capital.2 The assumption that replacement investment is proportional to the capital stock has long been used in an ad hoc way. More recently, Jorgenson [12] has shown that renewal theory implies that, in the long run, if the capital stock is growing at a constant rate, replacement investment approaches a constant proportion of the capital stock, whatever the initial age distribution and replacement rates for individual types of capital goods. He has, moreover, gone further than previous investigators and tested aspects of this theory [Jorgenson and Stephenson, 14]. More specifically, in econometric studies of two-digit investment behavior with Ig as the dependent variable he included the lagged capital stock among the regressors and performed tests on the estimated coefficient, δ. First, he showed that the null hypothesis that Ir is not related to the capital stock (δ = 0) could be rejected at low levels of significance in fifteen of the eighteen industries studied. Second, he showed that the values of δ obtained in this way were generally not appreciably different from the values which reconciled the change in capital stock with the net investment series. It is important to note that these tests do not establish that replacement investment is proportional to the capital stock. In particular, they do not imply rejection of the following alternative hypothesis: Replacement investment varies around some average nonzero level in a way which is systematically related to other short-run economic forces. 
This alternative hypothesis is also not contradicted by * We are grateful for comments from the participants in the Harvard econometrics seminar in the fall term of 1969 and for partial financial support of this research by the National Science Foundation (Grant No. GS-2241). 1 For a description of this method, see Jorgenson and Stephenson [14]. 2 The notion of a constant depreciation rate also enters neoclassical investment theory in a quite different way; the depreciation rate (δ) is a parameter in the user cost of capital. See, e.g., Jorgenson [11].
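The reconciliation step the abstract describes (choosing a constant depreciation rate δ that makes the gross investment series consistent with the starting and ending capital stocks) can be sketched numerically; the function name and data below are illustrative, not from the study.

```python
def solve_delta(K0, KT, Ig, tol=1e-12):
    """Find the constant depreciation rate delta such that iterating
    K_t = (1 - delta) * K_{t-1} + Ig_t from K0 ends exactly at KT."""
    def K_end(d):
        K = K0
        for inv in Ig:
            K = (1.0 - d) * K + inv
        return K
    # K_end is strictly decreasing in d on [0, 1], so bisect
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if K_end(mid) > KT:      # too little depreciation: raise delta
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# illustrative check: recover a known rate from simulated stocks
K0, Ig, true_delta = 100.0, [10.0] * 10, 0.06
KT = K0
for inv in Ig:
    KT = (1.0 - true_delta) * KT + inv
print(round(solve_delta(K0, KT, Ig), 6))  # 0.06
```

The replacement series δ·K is then subtracted from gross investment to obtain the net investment series used as the dependent variable, which is exactly the practice the paper goes on to question.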

144 citations



Journal Article•DOI•
TL;DR: The authors examined the influence of foreign competition on industry profitability and concluded that such competition, as represented by the level of imports, appears to exert a significant and negative effect on industry profit rates.
Abstract: RECENT studies investigating the variability in inter-industry profit rates largely ignore the influence of actual and potential foreign competition [2, 5, 6, 15, 17]. This paper examines the influence of foreign competition on industry profitability and concludes that such competition, as represented by the level of imports, appears to exert a significant and negative effect on industry profit rates. The evidence is consistent with the hypothesis that less restrictive trade policies encourage more competitive pricing behavior in domestic industries. Section I of this paper develops the analytical framework within which to view potential foreign competition. Section II describes the model and presents the major empirical results. Section III discusses the implications of the empirical results with respect to foreign trade policies.

139 citations


Journal Article•DOI•
TL;DR: This reply argues that the accelerator model in any form is inappropriate for explaining the investment behavior of American railroads during this period of volatile financial change, and points to the stability of the coefficients of the financial variables (the effective yield on railroad bonds and the level of retained earnings), whether estimated for the entire period 1897-1914 or the subperiod 1897-1907.
Abstract: An alternative procedure would be to apply some knowledge of economic history to the problem and see if changes in the financial factors affecting adjustment speeds can explain the changes in investment expenditures by themselves. If so, and my results reported in the original article indicate it is so, then a test for the relative significance of changes in financial factors compared to changes in the gaps between desired and actual capital stocks would be to see which model produces the most stable coefficients for subperiods. One of the things I noted in my article was the stability of the coefficients of the financial variables (the effective yield on railroad bonds and the level of retained earnings), whether estimated for the entire period 1897-1914 or the subperiod 1897-1907. I had no more luck than Morgan, however, in finding stable coefficients for any version of the accelerator model. The net effect of Morgan's work is to give independent support to my original argument that the accelerator model in any form is inappropriate for explaining investment behavior of American railroads during this period of volatile financial changes. Unpublished Ph.D. dissertation, "Growth, Stability and Financial Innovation in the American Economy, 1897-1914," University of California, Berkeley, 1968.

117 citations


Journal Article•DOI•
TL;DR: In this paper, the causes of internal migration in Colombia are explored and a model of interregional migration is proposed for a sample of Colombian municipalities, from which they can infer the responsiveness of migration to some economic, demographic and political developments in the rural and urban sectors of the society.
Abstract: THIS study attempts to explore the causes of internal migration in Colombia. Migration rates are first estimated for various groups in the population to clarify who migrates and to where. A model of interregional migration is then set forth and estimated for a sample of Colombian municipalities, from which we can infer the responsiveness of migration to some economic, demographic and political developments in the rural and urban sectors of the society.

112 citations


Journal Article•DOI•
TL;DR: In this paper, the effect of temporal aggregation on estimation and inference in single equation models is analyzed, with particular attention to the problem of using aggregated data to make statistical inferences about short-run behavior.
Abstract: TEMPORAL aggregation problems in econometrics pose an important but relatively unexplored set of issues relevant for analyses of economic behavior and policy problems. When the behavior of individuals, firms or other economic entities is analyzed with temporally aggregated data, it is quite possible that a distorted view of parameters' values, lag structures and other aspects of economic behavior can be obtained. Since policy decisions usually depend critically on views regarding parameter values, lag structures, etc., decisions based on results marred by temporal aggregation effects can produce poor results. Further, as emphasized by Orcutt and others, aggregating data temporally or otherwise usually involves a loss of information. In the context of temporal aggregation, aggregation can lead to (a) lower precision of estimation and prediction, (b) lower power for tests, (c) inability to make short-run forecasts and (d) a reduction of the probability of discovering new hypotheses about short-run behavior from data. It is generally appreciated that when annual data are employed in analyses, it is difficult to obtain satisfactory results pertaining to the intra-year behavior of economic units, for example seasonal effects that are often important in analyzing the variations of such variables as inventories, agricultural prices, agricultural output, etc. Previous work concerned with the theoretical analysis of the effects of temporal aggregation on estimation includes Mundlak's [6] and Engle's [3] analyses of distributed lag schemes, Telser's [8] treatment of autoregressive processes and Zellner's results for stock adjustment models [11, 12]. In all these papers, it is shown that when econometric models are implemented with temporally aggregated data for flow variables or stock data pertaining to periods longer than that considered appropriate on a priori grounds, the results of analyses will usually be marred by temporal aggregation effects. 
Further, empirical analyses of several single equation models using temporally aggregated and disaggregated data have been reported which reveal sensitivity of inferences about lag structures to the level of data aggregation (see, e.g., Bryan [1], Laub [5] and Ranson [7]). While much previous work has concentrated attention on the adverse effects of temporal aggregation, there has not been much attention devoted to the problem of what can be done in analyses when we have to work with temporally aggregated data, perhaps because these are the only data available. The approach to be taken in this paper, also utilized in Zellner [11], is to formulate an economic relation in terms of the time unit, say a week or a month, thought to be appropriate on economic grounds and then to derive logically the implications of the model for explaining the variation of temporally aggregated data. With the implied model for the aggregated data explicitly set forth, the problem of using the aggregated data to make statistical inferences can then be approached. Below we present applications of this approach and make several theoretical and empirical comparisons of results obtained with aggregated data with those obtained from analyses based on disaggregated data. The plan of the paper is as follows: In section II we specify a simple "monthly" model, derive the implied "quarterly" model, and examine its properties. Then inference procedures for the monthly and quarterly versions of the model are compared and some generalizations of the analysis are indicated. In section III numerical results pertaining to a money-multiplier model are presented. Finally, in
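The distortion described above can be illustrated with a small simulation (the model form and parameter values are invented for illustration, not taken from the paper): fitting the same one-lag specification to quarterly sums of a monthly series yields a lag coefficient quite different from the monthly one, so the quarterly regression does not recover the monthly adjustment speed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30000                               # number of "months"
alpha, beta = 0.8, 1.0                  # true monthly lag structure
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha * y[t - 1] + beta * x[t] + 0.5 * rng.normal()

def ols_lag(y, x):
    """Regress y_t on y_{t-1} and x_t; return the lag coefficient."""
    Z = np.column_stack([y[:-1], x[1:]])
    coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
    return coef[0]

# aggregate the flows to "quarterly" sums and refit the same form
yq = y.reshape(-1, 3).sum(axis=1)
xq = x.reshape(-1, 3).sum(axis=1)
a_monthly = ols_lag(y, x)               # close to the true alpha = 0.8
a_quarterly = ols_lag(yq, xq)           # substantially different
```

This is the motivation for the paper's alternative approach: rather than naively refitting the monthly form, derive the model that the monthly specification logically implies for the quarterly aggregates and estimate that.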

Journal Article•DOI•
TL;DR: In this paper, the authors present an econometric investigation of the elasticity of substitution between capital and labor in Swedish manufacturing for the period 1870-1950, built on a general production model that allows the measurement of the price responsiveness of factor utilization.
Abstract: ECONOMETRIC studies of production at the aggregate and semiaggregate levels have concentrated largely on the relation between capital and labor inputs, on the one hand, and some measure of real gross value added on the other. Studies that have gone beyond this scope to include a larger number of inputs have been confined to a highly restrictive class of production functions. Input-output studies using fixed production coefficients and studies in the agricultural field using Cobb-Douglas functions fall within this category.1 A growing number of important economic questions, however, cannot be answered within the traditional models, but require instead a framework that allows a richer specification of the substitution possibilities among factors of production. The question of whether there are differences in the extent of substitutability or complementarity between capital and different skill categories of labor, which has been considered by Bowles [4], Cook [6], and Griliches [8], is one example where a more general production model is required. In addition, it is likely that the estimation of production parameters, in particular the elasticity of substitution (ES) between capital and labor, is biased when factors other than capital and labor are ignored. This paper presents the results of an econometric investigation of these problems using time-series data for Swedish manufacturing for the period 1870-1950. Section II presents some simple evidence that shows the extent of variation in factor output ratios in the data. Variation in these ratios is not consistent with the conditions under which the use of a value-added production function can be justified. Section III then presents a general production model that allows the measurement of the price responsiveness of factor utilization, and section IV discusses the results of estimating the model. 
Finally, in section V the results of the general model are compared with the results obtained using the alternative gross value added framework and using direct production functions.




Journal Article•DOI•
TL;DR: In this article, the authors evaluate the structure of personal income tax exemptions for dependent children under the horizontal equity standard in personal income taxation and propose a new set of equivalence scales that can be applied to the evaluation of aspects of the present tax structure that are relevant to the issue of horizontal equity.
Abstract: THE normative principle of horizontal equity in personal income taxation calls for the levying of equal tax liabilities on taxpaying units enjoying identical pretax levels of economic well-being. Unfortunately, the only straightforward application of this principle is the proposition that tax-paying units identical in every other relevant respect should bear equal tax burdens if and only if they enjoy the same level of pretax money income. Economists have long recognized, of course, that the level of money income is only one of several objective factors that determine a tax-paying unit's level of economic well-being, its "ability to pay." Probably the foremost of the other factors is the number of individuals who must share a given total money income. Economists generally accept the principle that some system of personal exemptions, deductions, tax credits or other technical devices designed to adjust gross money income for the size of the tax-paying unit is necessary to maintain horizontal equity in personal income taxation. The purpose of this paper is to evaluate the structure of personal income tax exemptions for dependent children in the present personal income tax law against the horizontal equity standard stated above.1 A review of the literature reveals that this subject has received only a sparse and elementary treatment. Public finance texts go no further than to suggest the qualitative norm that relatively large families should pay somewhat less taxes than average-size families at a given level of gross money income. 
A very small number of studies take an approach which we shall pursue in greater depth: the application of a large body of work in consumption economics concerned with the determination of equivalent welfare incomes or "equivalence scales" for families that differ in size and composition to a quantitative assessment of the personal income tax exemption structure.2 Unfortunately, the direct adaptation of existing equivalence scale results is inappropriate for the purposes of tax policy, as we shall demonstrate in section II of the paper. The major contribution of this study is the estimation of a new set of equivalence scales that can legitimately be applied to the evaluation of aspects of the present tax structure that are relevant to the issue of horizontal equity. Section II of the paper briefly reviews the traditional methodology for estimating equivalence scales and then proceeds to detail the conceptual and pragmatic problems involved in adapting this methodology to our particular problem. Section III presents the statistical model for estimating our new set of equivalence scales and discusses the data source for the estimates. Section IV reports our empirical results and a final, brief section V treats the policy implications of our findings.


Journal Article•DOI•
TL;DR: This comment argues that Fisher and Temin's regressions, which explain the share of acreage harvested in wheat in 1900 as a function of its relative price in 1899 and, with geometrically declining weights, in earlier years, are misspecified for winter wheat and cannot explain why Populism appeared in the 1890's and not in 1903 or some other time.
Abstract: In a recent article in this Review, Franklin M. Fisher and Peter Temin [2] seek to illuminate the influence of regional specialization and the causes of Populism by estimating supply functions for wheat during the 1867-1914 period. Unfortunately, a fundamental error in their application of the model to the data mars the study. As a result their findings are difficult, if not impossible, to interpret, and I am skeptical that any light has in fact been shed on regional specialization or Populism. The problem arises because the variables specified by the model are different from the variables represented by the data. The model relates the share of acreage planted in wheat to the relative price of wheat, but the data are for acreage harvested in wheat. Significantly, only one price observation is available for each year: the price received by farmers on December 1. Fisher and Temin recognize that the acreage planted in a given year depends upon the price in the previous year ". . . because farmers could not have based their decisions about planting in one year on prices of the following December" [2, p. 137]. Therefore, for example, their regression explains the share of acreage harvested in wheat in 1900 as a function of its relative price in 1899 and, with geometrically declining weights, in previous years. What they overlook is that two kinds of wheat are grown: winter wheat, planted in the autumn and harvested in the late spring; and spring wheat, planted in the spring and harvested in the late summer. Within a given year, say 1900, both kinds of acreage were harvested, but while decisions concerning the acreage of spring wheat were influenced by the price on December 1, 1899, and by prices in previous years, decisions concerning the acreage of winter wheat were influenced by the price on December 1, 1898, and by prices in previous years. 
In the case of winter wheat, which constituted the bulk of the American supply during the 1867-1914 period, the acreage harvested in 1900 could not have been influenced at all by the 1899 price, since decisions concerning the planting of that acreage were made several months before the 1899 price could be observed. In states where winter wheat predominated, which include all those studied except Minnesota, Wisconsin, and the Dakotas, and possibly Iowa [6, pp. 100-101; 7], the functional form that Fisher and Temin impose on their regressions (geometrically declining weights on past prices starting with the previous year) is simply incapable of producing results that can be given a meaningful interpretation. The regression results obtained by Fisher and Temin might well have alerted them to potential problems. They find, for example, long-run elasticities of supply of 0.1633 for Illinois, 1.1211 for Wisconsin, and 10.7640 for Iowa. Given the great similarities of agriculture in these states, differences of this magnitude are simply incredible. Though they assert that ". . . the coefficients of lagged relative price were almost always 'quasi-significant'" [2, p. 143], in 11 of their 34 regressions the ratio of the regression coefficient for the previous period's price to its asymptotic standard error is less than 2. And one must be uneasy about their "quasi-significant" coefficients in any event. Fisher and Temin offer their finding that farmers reacted slowly to a decline in the relative price of wheat as an explanation for the Populist unrest of the 1890's. Populism was hardly popular everywhere or at all times. At the very least it must be shown that the estimated reaction speeds were significantly slower in the areas of Populist strength (e.g., Kansas, Nebraska, and the Dakotas) than elsewhere, but their estimates of reaction speeds show no such systematic differences. 
And even if such differences were apparent, one would still be faced with the task of explaining why Populism appeared in the 1890's and not in 1903 or some other time. Recent work on Populism [1, 3, 4] has advanced considerably beyond such a casual, conjectural, and highly-aggregated approach. Quite apart from its oversight concerning winter wheat, Fisher and Temin's study suffers from a more general lack of information about agriculture during the period considered. For example, they say: "If crops had declined because of poor weather, the farmer's loss of income should have been moderated by a rise in the price of crops attendant on their scarcity" [2, p. 136]. But the price of wheat was established in a world market during this period, and it was common for a short crop at home to correspond with a low price, and therefore low incomes, because of bumper crops in the Ukraine, Argentina, and elsewhere in the same year [5, p. 35]. Indeed, the farmer's position as a supplier in the world market is a crucial background feature of the Populist uprising. Examples of this sort might be multiplied but the conclusion is obvious: though Fisher and Temin have focused attention on an interesting subject, the elasticities of

Journal Article•DOI•
TL;DR: In this article, the authors identify those variables that exert major influences on changes in the composition of capital goods and determine the extent to which these changes arise from differential industry investment rates as distinct from changes in coefficients of the matrix itself.
Abstract: ALTHOUGH homogeneous capital stocks remain a frequent construct in growth theory and the literature on production relations, economists have not missed the fact that trucks are not lathes. Thus considerable effort has gone into specifying the conditions under which aggregation is conceptually permissible.1 Recently, the aggregation of capital services from diverse capital stocks has become an important consideration in the explanation of productivity change for the American economy. Jorgenson and Griliches 2 note the rise in the ratio of equipment stocks to structures stocks over the period 1945-1965. They employ the presumed increase in aggregate capital services from the use of relatively shorter lived assets to explain about 25 per cent of the residual change in the measure of total factor productivity. But very little has been done thus far to identify those variables that exert major influences on changes in the composition of capital goods.3 This paper is directed toward a remedy for this deficiency. In assessing shifts in composition, we have two principal objectives. First, to determine (through the use of an investment matrix) the extent to which changes in the composition of aggregate investment arise from differential industry investment rates as distinct from changes in the coefficients of the matrix itself. Second, to ascertain the determinants of the shifts in aggregate investment, particularly the ratio of equipment to structures. We shall show that the relative prices of capital goods have not been among the principal determinants of such substitution. The theory and estimates of this paper ascribe the changes in composition to variations in the relative costs of capital and labor and to the shift of investment in times of capacity expansion toward new plants with higher ratios of structures to equipment outlays than those for existing plants.

Journal Article•DOI•
TL;DR: In this article, the authors used the BLS data for the adjustment of the French and German national statistics, as well as the Dutch and Belgian national statistics for the same period.
Abstract: unemployment rates quoted in table 1; however, a fuller statement is available on request from the author. The United Kingdom figures for 1960 onward are based on a letter to the author (dated 28/1/71) revising [7]. For earlier years the official statistics have been raised by a percentage based on the revised BLS information. Percentage adjustments have also been made to the French (Institut National de la Statistique et des Etudes Economiques) and German (registration series from the ILO Yearbook of Labour Statistics) figures in similar fashion, except that the 1958 German figure was supplied by the BLS. The Swedish figures for 1953-1961 come from [3]. The Italian figures for 1954-1958 are based on the irregular ISTAT surveys adjusted for seasonality and for comparability with United States definitions (the latter on an absolute basis) on the basis of experience since 1959. The 1953 figure, for reasons described in the supplementary statement, is the average for 1954-1956. The Australian unemployment rates for 1964 and 1965 are those of the Commonwealth Statistician, while those for 1958-1963 are based on an upward percentage adjustment of the Department of Labour and National Service's registration series. The Belgian figures throughout are based on the percentage reductions indicated by [10] and [11]. The Dutch rates are those contained in the ILO Yearbook, since [10] and [11] indicated slight adjustments in conflicting directions.

Journal Article•DOI•
TL;DR: In this paper, the authors investigated factors influencing the number of employed registered nurses and their earnings across states by using a simple model which includes one structural equation for demand, one for labor force participation, and one for geographical location.
Abstract: A better understanding of the professional labor market for registered nurses is important as it relates to current concerns about the rapidly increasing demand for and cost of medical care and the alleged "shortage" of registered nurses and other medical personnel. In addition, investigation of this market may help to further knowledge about the labor market in general, since registered nurses have a number of characteristics in common with various expanding sectors of the labor force: they constitute a profession, at present over 98 per cent are female, and their work is in the service sector. This study investigates factors influencing the number of employed registered nurses and their earnings across states. Several aspects of simultaneous response patterns in this labor market are examined by use of a simple model which includes one structural equation for demand, one for labor force participation, and one for geographical location. Specific aspects of labor force behavior, such as migration or labor force participation, have been examined in many earlier studies, but few efforts have been made to estimate the interaction of labor market responses. Models of the type developed here, if they successfully incorporate the principal structural relationships within a labor market, can allow examination of a whole range of issues in more precise terms than has been possible previously. For the market investigated here, these include the effects of shifting patterns of demand for medical services, changes in the supply of substitutes for registered nurses, increases in the number of nursing schools, and so forth. For a priori specification of the individual structural equations, existing theory and past studies on demand functions, labor force participation, and migration were used, but these were not sufficient to give very precise guidelines.
Therefore, several alternative plausible forms of each equation were initially estimated using cross-sectional data by states for 1950. In general, the coefficients were not very sensitive to the variations in specification examined. The complete model composed of selected forms of the individual structural equations was then estimated by three-stage least squares, using the 1950 data. A number of variants of the model were considered for 1950, and the model was found to be fairly robust. As one test of the stability of the cross-sectional relationships, the parameters were then re-estimated using data for 1960.
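The paper estimates the full system by three-stage least squares. As a minimal illustration of the simultaneity problem such a model addresses, here is a single-equation two-stage least squares sketch on a synthetic demand-and-supply system (every coefficient, shifter name, and sample size below is invented for illustration; this is not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic simultaneous system:
#   demand: q = a0 + a1*w + a2*x_d + u   (x_d = demand shifter, e.g. income)
#   supply: q = b0 + b1*w + b2*x_s + v   (x_s = supply shifter)
x_d = rng.normal(size=n)
x_s = rng.normal(size=n)
u = rng.normal(scale=0.5, size=n)
v = rng.normal(scale=0.5, size=n)
a0, a1, a2 = 1.0, -1.0, 1.0
b0, b1, b2 = 0.0, 1.0, 1.0
# Reduced form for the wage, obtained by equating demand and supply:
w = ((a0 - b0) + a2 * x_d - b2 * x_s + u - v) / (b1 - a1)
q = b0 + b1 * w + b2 * x_s + v

def tsls(y, X, Z):
    """Two-stage least squares: project X on instruments Z, then regress y on the fit."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]

const = np.ones(n)
# Demand equation: w is endogenous; the excluded supply shifter x_s is the instrument.
X = np.column_stack([const, w, x_d])
Z = np.column_stack([const, x_s, x_d])
beta_2sls = tsls(q, X, Z)
beta_ols = np.linalg.lstsq(X, q, rcond=None)[0]
```

OLS on the demand equation is biased because the wage is correlated with the demand disturbance; the instrumented estimate recovers the structural slope. Three-stage least squares additionally pools the cross-equation error covariances, which this single-equation sketch omits.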




Journal Article•DOI•
TL;DR: A variety of solutions have been proposed to cope with the large model problem, two of which are examined in detail in this paper as mentioned in this paper, which can be viewed as a modification of two stage least squares: 1) 2SPC (Two Stage Principal Components), originally proposed by Kloek and Mennes [11], in which a limited number of principal components of the predetermined variables are used in the first stage.
Abstract: Empirical research in economics has seen the development in recent years of large simultaneous equation econometric models, large both in detail and degree of disaggregation and in their demands upon a limited sample of data. In the main these have been models of macro-economic activity estimated from annual or quarterly data in the postwar period. Statistical methods for estimating simultaneous equation models were first developed by researchers at the Cowles Commission [8]. In recent years, the sheer size of such empirical models has brought a new problem to the fore, as the estimation methods previously developed cannot be used without modification. Most of those estimators, those of the k-class and three-stage least squares, involve a first "stage" of regression or estimation using the predetermined variables of the model as regressors.1 But frequently in large models the sample is smaller than the number of predetermined variables, so that a meaningful first-stage regression is not possible.2 This is a "small sample" problem of a different sort: instead of needing more observations in order that the distribution of the estimates will be satisfactorily approximated by their asymptotic distributions, the sample is small relative to the size of the large model, to the extent that the standard simultaneous equation estimators either do not exist or are identical to ordinary least squares.3 A variety of solutions have been proposed to cope with the large-model problem, two of which are examined in detail in this paper. Each can be viewed as a modification of two-stage least squares: 1) 2SPC (Two Stage Principal Components), originally proposed by Kloek and Mennes [11], in which a limited number of principal components of the predetermined variables are used in the first stage.
2) SOIV (Structurally Ordered Instrumental Variables), proposed by Fisher [5, 6], in which a limited number of predetermined variables are selected for the first stage by detailed use of the structure of the model. A complete assessment of the properties of these estimators in a large econometric model requires knowledge of their small-sample distributions. These are in general unknown, although some progress has recently been made for small models by Amemiya [1], Basmann [2], Kadane [9], Mariano [12], Sawa [15], and Takeuchi [17]. Some information might be obtained by Monte Carlo techniques, except that the computational cost of systematically exploring the parameter space of a large model would be prohibitive. Still, a feasible project would be to employ a miniature model with a very few equations, but having more predetermined variables than observations. It is not clear, however, that the distributions would be representative of those in a full-sized model.
1 The limited-information maximum-likelihood method requires extraction of a characteristic root from a matrix of moments of the predetermined variables; the method can be interpreted as a first-stage regression of a synthetic endogenous variable on the predetermined variables.
2 Full-information maximum-likelihood estimators fail to exist when the number of parameters to be estimated in the model exceeds the sample size, another problem which occurs in large models. Because of computational complexity, FIML has not been a feasible estimator for models of even moderate size.
3 In other cases the problem may occur in a less acute form: there may be more observations than predetermined variables, but the excess may be small, and in some sense better estimates may be obtained by using fewer variables in the first stage.
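The failure of an unmodified first stage when predetermined variables outnumber observations, and the 2SPC remedy, can be sketched as follows. This is a toy one-equation example with synthetic data; the dimensions, coefficients, and the choice of five components are all arbitrary, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 40, 60                          # fewer observations than predetermined variables
Z = rng.normal(size=(n, k))            # predetermined variables
pi = 0.25 * rng.normal(size=k)         # reduced-form coefficients (made up)
e = rng.normal(scale=0.5, size=n)
y2 = Z @ pi + e                        # endogenous regressor (reduced form)
u = 0.9 * e + rng.normal(scale=0.1, size=n)   # structural error, correlated with y2
y1 = 2.0 * y2 + u                      # structural equation, true coefficient 2.0

# With k > n a first-stage regression on all of Z fits y2 exactly,
# so "two-stage least squares" collapses to ordinary least squares:
y2_fit_all = Z @ np.linalg.lstsq(Z, y2, rcond=None)[0]
assert np.allclose(y2_fit_all, y2)

# 2SPC: use a few principal components of Z in the first stage instead.
Zc = Z - Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Zc, full_matrices=False)
F = Zc @ Vt[:5].T                      # first 5 principal components
y2_hat = F @ np.linalg.lstsq(F, y2, rcond=None)[0]
beta_2spc = (y2_hat @ y1) / (y2_hat @ y2)
```

The number of components retained is the analyst's choice; too few sacrifices instrument relevance, while using close to n components reproduces the collapse to OLS.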

Journal Article•DOI•
TL;DR: In this article, the results of various experiments with the basic St. Louis equation using the St.Louis variables are presented, and the question asked here is: given the use of the same exogenous policy variables as St Louis are there any changes in the basic regressions which would alter the conclusion regarding the efficacy of monetary and fiscal policy?
Abstract: The equation developed by the Research Department of the Federal Reserve Bank of St. Louis to explain changes in GNP has had a considerable impact on the thinking of monetary economists. It has, in particular, reinforced the position of those who contend that monetary policy has a powerful impact on GNP. On the other hand, the basic St. Louis equation raises serious doubts as to the efficacy of the fiscal impact on GNP. There have been a number of comments on the St. Louis equation which presented alternative specifications of the basic reduced-form equation for GNP. In general, these formulations have revealed a more powerful role for fiscal policy while the strength of monetary policy remained unaltered. Most of these reformulations of the St. Louis equation have specified different monetary and/or fiscal variables than those used by St. Louis. This has given rise to disputes between researchers as to the appropriate monetary and fiscal variables to use in these equations. In this paper the results of various experiments with the basic St. Louis equation using the St. Louis variables are presented. In other words, the question asked here is: given the use of the same exogenous policy variables as St. Louis, are there any changes in the basic regressions which would alter the conclusion regarding the efficacy of monetary and fiscal policy? The results reported below suggest that alternative specifications of the basic St. Louis equation do produce significantly different implications for the efficacy of fiscal policy. In particular, a straightforward division of the sample period produces "Republican" and "Democratic" St. Louis equations with the associated monetary and fiscal multipliers.
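The sample-split experiment can be illustrated with a stylized reduced-form regression of changes in GNP on monetary and fiscal variables. Everything below is synthetic: the data, the subperiod boundary, and the coefficient values are invented to mimic a fiscal multiplier that differs across subperiods, and the specification omits the distributed lags of the actual St. Louis equation:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 120
dM = rng.normal(size=T)   # change in the money stock (synthetic)
dF = rng.normal(size=T)   # high-employment fiscal variable (synthetic)
# Fiscal coefficient differs across two subperiods; the money coefficient does not.
gamma = np.where(np.arange(T) < 60, 0.8, 0.1)
dY = 1.5 * dM + gamma * dF + rng.normal(scale=0.3, size=T)

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(T), dM, dF])
b_full   = ols(dY, X)                 # pooled sample
b_first  = ols(dY[:60], X[:60])       # first subperiod
b_second = ols(dY[60:], X[60:])       # second subperiod
```

In this construction the subperiod regressions recover distinct fiscal coefficients while the money coefficient stays stable, which is the qualitative pattern the paper's sample division is after; a formal comparison would use a Chow test on the split.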


Journal Article•DOI•
TL;DR: In this paper, the author presents a comprehensive quantitative examination of the impact of inflation on income distribution during the last decade, tracing its regional and group-specific effects and discussing implications of the findings for public policy.
Abstract: The inflation associated with the Vietnam War has had a strikingly differential impact on the rates of economic expansion in different regions of the country and on the distribution of income among major groups in the economy. This experience undoubtedly is of interest to economists concerned with the welfare implications of gains and losses in the relative position of principal groups of income recipients. In addition, these differential effects have enlivened the debate over the appropriate stance of public policies with respect to inflation. Consequently, a comprehensive quantitative examination of the impact of inflation on income distribution during the last decade appears to be a worthwhile undertaking. This paper presents the results of such an inquiry. In section II, the origins of the current inflation (in the Vietnam War) and its subsequent progression are traced. The regional impact of inflation is examined in section III. Broad trends in the distribution of personal income and the main factors affecting this distribution are analyzed in sections IV and V, respectively. The experiences of particular groups of income recipients are assessed in sections VI through IX. Finally, some implications of the findings for public policy are discussed in section X.

Journal Article•DOI•
TL;DR: In this paper, a nonlinear iterative least squares (NILES) procedure is used to estimate the coefficient of proportionality when both permanent income and consumption are unknown, and it is shown that NILES is computationally more convenient than other nonlinear procedures.
Abstract: The theory of the aggregative consumption function has undergone several developments since Kuznets' [27] saving-income estimates for the United States showed disagreement with Keynes' [21] "fundamental psychological rule." Noteworthy among these developments are the Relative Income Hypothesis [10], the Life-Cycle Hypothesis [33], and the Permanent Income Hypothesis [13]. Although these different hypotheses are rather complementary, the Permanent Income Hypothesis (PIH) has been more widely studied than the others. According to the PIH, measured income and consumption are each composed of two components, permanent and transitory. Under the assumption of zero correlation between the transitory and permanent components and also between the two transitory components, it postulates that permanent consumption constitutes a constant proportion of permanent income. The basic concepts of permanent income and consumption, although theoretically attractive, involved considerable difficulties in regard to their measurability. This stimulated a number of debates and empirical researches. Some corroborated the PIH, while others showed a strong disagreement over some of its major assumptions. A close investigation of these debates and researches suggests that Friedman's own proposal to use Cagan's [7] distributed lag scheme, as one of the alternatives for constructing an estimate of permanent income in time series studies, has perhaps invited more troubles than it actually solved. Clearly, it does not allow for a separation of the permanent and transitory components from the observed data.1 It is not surprising then that Laumas [28] and Choudhury [8] can obtain significant marginal propensities to consume transitory income for Canada and India respectively, and that Walters [41] can show a relationship between transitory and permanent income.
A cursory look at such studies may sometimes shatter the faith in the strict version of the PIH, but if one studies them more carefully one feels inclined to speak with Ovid: "it often happens that the material is better than the workmanship." The purpose of the present paper is two-fold: first, to improve upon the workmanship and, second, to investigate whether the PIH is applicable to economies with different structures. The improvement in the 'workmanship' lies in the application of the Nonlinear Iterative Least Squares (NILES) procedure2 to estimate the coefficient of proportionality when both permanent income and consumption are unknown. The advantages of this method over the commonly used Ordinary Least Squares procedure are that it overcomes the problem of identification in the consumption function arising from the nonlinearity of parameters and that it yields individual estimates of the parameters with least squares properties. In addition to this, it is computationally more convenient than other nonlinear procedures. This method has been applied on the one hand to the existing approach, in which the estimate of permanent income has been defined in terms of Cagan's distributed lag scheme, and on the other hand to an alternative approach, which considers the PIH on the lines of the classical two-variables linear model where both variables are subject to errors of observation. In neither case do we violate any of Friedman's assumptions. The advantage of the second approach lies in the fact that the estimation of permanent income does not require any extraneous information.
1 Singh [38], pp. 3-6.
2 For the NILES procedure see Wold [43], pp. 433-434 and 438-440.
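The Cagan distributed lag defines permanent income recursively, which makes the consumption function nonlinear in the proportionality coefficient and the adjustment coefficient jointly. A rough sketch of the iterative least-squares idea, here simplified to a grid search over the adjustment coefficient with synthetic data (this is an illustration of the estimation problem, not the authors' exact NILES algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
# Synthetic measured income: a random walk with drift.
income = 100 + np.cumsum(rng.normal(0.5, 1.0, size=T))

def permanent_income(y, lam):
    """Cagan/Koyck distributed lag: Yp_t = lam*Y_t + (1-lam)*Yp_{t-1}."""
    yp = np.empty_like(y)
    yp[0] = y[0]
    for t in range(1, len(y)):
        yp[t] = lam * y[t] + (1 - lam) * yp[t - 1]
    return yp

true_lam, true_k = 0.4, 0.9                    # made-up "true" parameters
cons = true_k * permanent_income(income, true_lam) + rng.normal(scale=0.2, size=T)

# Search the nonlinear parameter; least squares gives k conditionally on lam.
best = None
for lam in np.arange(0.05, 1.0, 0.05):
    yp = permanent_income(income, lam)
    k = (yp @ cons) / (yp @ yp)                # LS slope through the origin
    ssr = np.sum((cons - k * yp) ** 2)
    if best is None or ssr < best[0]:
        best = (ssr, lam, k)
_, lam_hat, k_hat = best
```

A true iterative scheme would alternate between updating the linear and nonlinear parameters until convergence; the grid search makes the same conditional-least-squares structure visible in a few lines.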

Journal Article•DOI•
TL;DR: In this paper, the author uses both time series and cross-section data to determine the extent to which persons of a particular income group spend an increment to income, testing the hypothesis that the marginal propensity to consume decreases as income increases.
Abstract: Ideally, to determine the extent to which persons of a particular income group spend an increment to income, time series data on consumption and disposable income for individual households (panel data) are needed. Unfortunately, such data are not available. Available are time series aggregate data, which do not allow one to determine differential marginal propensities to consume for different income groups, and cross-section data, which do not allow one to trace over time the effects of changes in income on consumption. However, by utilizing both time series and cross-section data, the hypothesis that the marginal propensity to consume decreases as income increases can be tested.
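The hypothesis amounts to a comparison of group-specific regression slopes. A minimal cross-section sketch with simulated households (the income cutoff, MPC values, and noise level are all made up, and the real test combines this with time series evidence as the abstract describes):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
income = rng.uniform(2_000, 20_000, size=n)      # synthetic disposable income
# Build a declining MPC into the simulated data (illustrative, not estimates):
mpc = np.where(income < 10_000, 0.9, 0.6)
cons = 500 + mpc * income + rng.normal(scale=300, size=n)

def slope(y, x):
    """OLS slope of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

low = income < 10_000
mpc_low = slope(cons[low], income[low])          # estimated MPC, lower group
mpc_high = slope(cons[~low], income[~low])       # estimated MPC, upper group
```

With the declining-MPC structure built in, the group regressions recover a larger slope for the lower-income group, which is the pattern the hypothesis predicts in real data.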