
Showing papers in "International Economic Review in 1970"


Journal ArticleDOI
TL;DR: In this paper, the authors show that the partial relative risk aversion function P(t; w) = -tu''(t + w)/u'(t + w) is important when the risk is varied but wealth w remains fixed, while R(t) becomes relevant when wealth and risk are changed in the same proportion.
Abstract: the existing theory by establishing the economic significance of the partial relative risk aversion function. Let u(t) be a utility function for wealth. The functions A(t) = -u''(t)/u'(t) and R(t) = -tu''(t)/u'(t) are the Arrow-Pratt absolute and relative risk aversion functions. The importance of A(t) arises when considering an individual's aversion to risk as wealth is varied but the risk remains unchanged, while R(t) becomes relevant when wealth and the risk are changed in the same proportion. We shall demonstrate that the partial relative risk aversion function P(t; w) = -tu''(t + w)/u'(t + w) is important when the risk is varied but wealth w remains fixed. In addition, we indicate the economic relationships between the functions A, R, and P; present some results about the behavior of P; and relate its behavior to that of A and R. The analysis in this paper is based on Pratt's risk premium, which we feel is the only function that actually measures risk aversion for arbitrary risks.2 Our analysis differs from that of both Arrow and Pratt in that we do not use "infinitesimal" risks. Arrow and Pratt interpret A and R as "local" measures of absolute and relative risk aversion. The results of this paper show that the functions A, R, and P have a significance beyond their interpretation as "local" measures of risk aversion in that they determine the behavior of the risk premium in different comparative static contexts. In Section 2 basic concepts are briefly discussed. Section 3 contains our main results. In it the economic significance of A, R, and the new function P is established through their relationship with the risk premium, and we show how the behavior of these functions is relevant for the theory of risk aversion. Section 4 contains a comparison of our results with those of Arrow and Pratt, and indicates the usefulness of A, P, and R for comparative static analysis of expected utility maximization models.
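
For reference, the three measures can be written in display form, together with Pratt's risk premium; the premium's definition below is the standard Pratt formulation, stated here as an assumption since the abstract does not reproduce it:

```latex
% Arrow-Pratt measures and the partial relative risk aversion function
A(t) = -\frac{u''(t)}{u'(t)}, \qquad
R(t) = -\frac{t\,u''(t)}{u'(t)} = t\,A(t), \qquad
P(t; w) = -\frac{t\,u''(t + w)}{u'(t + w)} = t\,A(t + w).

% Pratt's risk premium \pi(w, \tilde{z}) for a risk \tilde{z} at wealth w:
\mathbb{E}\,u(w + \tilde{z}) = u\bigl(w + \mathbb{E}\tilde{z} - \pi(w, \tilde{z})\bigr).
```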

388 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extend the traditional deterministic model of the firm to the situation in which the price for the firm's product is a random variable, and introduce additional considerations, such as attitudes toward risk, that may help to explain observed behavior.
Abstract: THE NEOCLASSICAL THEORY of the firm assumes that the entrepreneur behaves as if his demand curve, production function, and factor costs are known with certainty. Although it is recognized that the firm may be uncertain about the form of these functions, the entrepreneur is assumed to compress his judgments about a function into a best estimate. He then behaves as if the best estimate represents the function with certainty. The formal consideration of uncertainty about the functions, however, can significantly qualify the results of neoclassical theory. The purpose of this article is to extend the traditional deterministic model of the firm to the situation in which the price for the firm's product is a random variable. The analysis of this situation is important not only because of the generalization of the traditional model, but because it introduces additional considerations, such as attitudes toward risk, that may help to better explain observed behavior. A number of authors have investigated various aspects of the static theory of the firm under demand uncertainty and their major results will be briefly discussed here. Their models can be differentiated by the competitiveness of the economic environment assumed, the nature of the demand uncertainty, and by the behavioral assumptions employed. The models of purely competitive firms will be discussed first and then models of firms in imperfect competition will be considered. Uncertainty is usually introduced into a model of pure competition by assuming that price is uncertain and that the firm can sell any quantity at the price that obtains in the market. Oi [15] assumed that the firm was able to observe price prior to determining output or equivalently that the firm could instantaneously adjust output. With this assumption and an objective of maximizing expected profit the firm produces such that price and marginal cost are equated as in deterministic theory. Oi was concerned with the desirability of price uncertainty and demonstrated that expected profit exceeds the profit that would be obtained with a certain price which is equal to the expected price. He also demonstrated that the firm prefers increased variability of price in certain cases and extended the analysis to the case of a firm with nonlinear risk preferences.2 Nelson [13] presumed that the firm makes its output decision prior to observing price.
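
The logic behind Oi's comparison can be sketched in one line; this is our gloss on the result, not the paper's own derivation:

```latex
% With output chosen after price p is observed, realized profit
%   \pi(p) = \max_q \{ p\,q - c(q) \}
% is a maximum of functions linear in p and hence convex in p.
% Jensen's inequality then gives
\mathbb{E}[\pi(\tilde{p})] \;\ge\; \pi(\mathbb{E}\tilde{p}),
% with strict inequality for nondegenerate price distributions.
```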

284 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply a least squares approach to generate an estimator which, with a normality assumption, is a maximum likelihood estimator, and the relationship of this estimator to certain instrumental variable estimators is set forth.
Abstract: as an independent variable. Lastly, in Zellner [10], it is shown that equations of simultaneous equation models can be brought into a regression form involving some observable and some unobservable independent variables. Given that regression relations containing unobservable independent variables occur quite frequently, and in fact are a special case of "errors in the variables" models, it is important to have good methods for analyzing them. Previous analyses have almost always involved the use of an instrumental variable approach, an approach which leads to estimators with the desirable large sample property of consistency. However, it is not clear that the instrumental variable approach leads to asymptotically efficient estimators for all parameters of a model and the small sample properties of instrumental variable estimators are for the most part unknown. In the present paper, we first consider the specification and interpretation of the models under consideration in Section 2. Then in Section 3 we apply a least squares approach to generate an estimator which, with a normality assumption, is a maximum likelihood estimator. The relationship of this estimator to certain instrumental variable estimators is set forth. Then in Section 4, a Bayesian analysis of the model is presented. Finally, in Section 5 some concluding remarks are presented.
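
A minimal example of the model class in question, written in notation of our own choosing rather than the paper's:

```latex
% Regression on an unobservable independent variable \xi_i,
% observed only through an error-ridden proxy x_i:
y_i = \alpha + \beta \xi_i + u_i ,
x_i = \xi_i + v_i .
% Regressing y on x directly yields an inconsistent estimate of \beta;
% instrumental variables or, under normality, maximum likelihood
% provide consistent alternatives.
```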

260 citations


Journal ArticleDOI
Abstract: THIS PAPER PRESENTS a theory and measurement of the effect of unionism on occupational wage differentials. One possible theory of union occupational "wage policy" is sketched in Section 2 and is essentially an exercise in optimal pricing among multiple related markets. Union effects on skilled-semiskilled-unskilled wage differentials are measured with the use of cross-section data in Section 3. Measurements are derived independently of Section 2 and are interesting in their own right. Thus, the reader may interpret them in ways other than is done here, though they tend to be consistent with the central hypotheses of the optimal pricing model. Major substantive results may be summarized: If production labor is divided into skilled or not-skilled categories, unionism has widened wage differentials, increasing wage rates of union skilled craftsmen compared with nonunion skilled craftsmen by relatively more than corresponding union-nonunion rates for all other production workers. Further disaggregation of production labor into skilled craftsmen, semi-skilled operatives and unskilled laborers indicates unionism has probably increased wage rates of unskilled laborers by at least as much and possibly more than that of skilled craftsmen, confirming a result of several other investigators. However, the union-nonunion differential of unskilled labor is significantly higher than that of semi-skilled operatives. The latter effect is quite small, though this group constitutes a high proportion of all production workers and the outcome is that unionism has most likely widened the occupational wage structure when all three groups are considered together.

105 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe a technique for dealing with a changing seasonal pattern, an aspect of one of the most immanent of all scientific problems; optimum solutions derived from an initial model are valuable as standards of comparison, but they should not be used uncritically as the unique optimal solutions, for no model upon which an optimization procedure can be based can represent the truth.
Abstract: THE SEASONAL ADJUSTMENT of economic time series has recently received a great deal of attention from statisticians. The reason for this is not hard to perceive. The economic policymaker faced with the problem of controlling the level of activity does not wish to mistake a seasonal movement for a long-term or medium-term change in the level of economic activity. Pressure is thus brought to bear on official statisticians for better estimates and this pressure filters through to the theorist. The paper by Shiskin and Eisenpress [9] describing methods of seasonal adjustment used by the U. S. Bureau of the Census provoked much of the ensuing discussion. The first paper to use modern spectral methods to discuss this problem seems to have been Hannan [2] and much recent work seems to have used these techniques. (See Hannan [3, 5], Nerlove [7], Nettheim [8] for example.) These methods seem particularly appropriate, for any model for the seasonal component will surely represent it as a sum of six narrow (frequency) band signals which are amplitude, phase, and possibly frequency modulated. (A more complete discussion of the model is, of course, given below.) To this is added "noise" and the narrow band nature of the signals means that, substantially, only the average noise level over these bands is of significant concern. Thus a spectral treatment of the noise will require the introduction of only relatively few parameters so that more special models add little or nothing in simplicity and efficiency, while they increase the risk of an invalid analysis. The main problem of seasonal adjustment undoubtedly arises from the fact that the seasonal pattern may be changing. The problem of estimating such a changing seasonal pattern is an aspect of one of the most immanent of all scientific problems. The difficulty of the problem is simple to perceive but must be understood. If we construct an estimation procedure which is sensitive to changes in the seasonal component, then we shall have one which is sensitive also to chance fluctuations, i.e., to noise effects. Given an initial model we can optimize. Such optimum solutions may be of great value both for their own sake and as standards against which to compare ad hoc procedures; however they should not be used uncritically as the unique optimal solutions, for no model upon which an optimization procedure can be based can represent the truth. In addition, of course, a suitable optimum criterion may not be easy to produce as it may have to reflect subjective elements difficult to quantify (e.g., the reluctance of an official institution to use methods which may require substantial later revisions of initial estimates). In the next section we describe a technique for dealing with a changing
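
One conventional way to write the six-band representation the abstract invokes for monthly data (the paper's own model is developed in its later sections; this display is only a sketch):

```latex
% Monthly seasonal component as a sum of six narrow-band signals,
% one per harmonic of the annual frequency, with slowly varying
% amplitudes a_j(t) and phases \phi_j(t):
s(t) = \sum_{j=1}^{6} a_j(t) \cos\!\bigl( 2\pi j t / 12 + \phi_j(t) \bigr),
\qquad x(t) = s(t) + \text{noise}.
```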

101 citations


Journal ArticleDOI
TL;DR: In this article, the existence of competitive equilibrium in a market with countably many commodities is proved, and the significance of this generalization is that it opens a door that could lead, eventually, to a theory of decentralized economic growth.
Abstract: In this paper, we prove the existence of competitive equilibrium in a market with countably many commodities. By the term 'market' we mean to emphasize that we are dealing with a pure trade model, in which there is no production. Our result generalizes the existing theory of competitive equilibrium in a finite-dimensional commodity space. The significance of this generalization, we feel, is that it opens a door that could lead, eventually, to a theory of decentralized economic growth.

95 citations


Journal ArticleDOI
TL;DR: In this article, the authors define and analyze the optimum rate of growth when two distinct political targets are assigned, growth and regional income equality, and derive the stability condition of the balanced growth path in this model.
Abstract: rate. The second section presents a theoretical model and derives the stability condition of the balanced growth path in this model. Section 3 deals with the definition and analysis of the optimum rate of growth when two distinct political targets are assigned: growth and regional income equality. Section 4 is the application of the analysis to the Japanese economy in which we shall prove that the balanced growth path was unstable in the 1950's when the attitude of the government was "growth-biased," and further, that without an efficient policy for increasing production factors while taking into account structural requirements, it would be impossible to achieve a high growth rate with regional income equality.


Journal ArticleDOI
TL;DR: The objective here is to analyze the implications of a non-normal distribution of the random elements of a linear program in the framework of probabilistic linear programming, where only the two approaches of chance-constrained programming (CCP) and stochastic linear programming (SLP) are considered.
Abstract: AN ORDINARY LINEAR PROGRAMMING MODEL is said to be chance-constrained if its linear constraints are associated with a set of probability measures indicating the extent of violation of the constraints. When partial violation of the constraints is allowed for, the chance-constrained approach may be viewed as a method for providing appropriate safety margins. This approach has been generalized [9] in recent years in several directions, of which two are especially worth mentioning. First, although for reasons of simplicity solutions restricted to linear decision rules only have often been employed in chance-constrained programming, it is now possible to have solutions of a more general functional form, and this considerably enhances the scope of application of the chance-constrained approach in dynamic models with nonlinear objective functions. Second, it is not necessary in the chance-constrained approach to make the assumption that the decision maker's utility function is quadratic (or of a specific form), as it is required, for example, in the portfolio selection studies by Markowitz and others who base the analysis on the mean and variance of the probability distribution of net returns. Although the extent of violation of the constraints that would be tolerated is preassigned subjectively in this approach by the decision maker before the actual solutions are computed, the tolerance measure may be parametrically varied, as in the revealed preference theory, and the resulting optimal solutions may help the decision maker move to the most preferred solution. This formulation (see [8], [19]) also considerably broadens the scope of applicability of the chance-constrained approach by developing suitable criteria for decisions under risk. Our objective here is to analyze the implications of a non-normal distribution of the random elements (A, b, c) of a linear program in the framework of probabilistic linear programming, where only the two approaches of chance-constrained programming (CCP) and stochastic linear programming (SLP) are considered. It is interesting to note that although the CCP and the SLP approaches are developed with different objectives in mind, they have frequently been applied under the assumption of normality of relevant
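
To see where normality enters, consider the textbook deterministic equivalent of a single chance constraint; this reduction is our illustration of the standard CCP device, not a display from the paper:

```latex
% With only the resource b_i random, b_i ~ N(\mu_i, \sigma_i^2),
% the chance constraint
\Pr\{ a_i' x \le b_i \} \ge \alpha_i
% holds if and only if
a_i' x \le \mu_i + \sigma_i \Phi^{-1}(1 - \alpha_i),
% where \Phi is the standard normal cdf; a non-normal distribution
% for b_i replaces this quantile, and randomness in A or c
% complicates the reduction considerably.
```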

Journal ArticleDOI
TL;DR: In this paper, a more general model permitting both additive and multiplicative errors is proposed, since the multiplicative choice is typically made on grounds of computational convenience rather than for compelling a priori reasons; the estimation method is illustrated on simple artificial examples and on two alternative travel demand functions.
Abstract: either specified to be additive or multiplicative.3 Although there are instances in which there may be compelling a priori reasons for specifying the error as being of a particular type,4 the multiplicative choice is typically made on grounds of computational convenience. The primary purpose of this paper is to introduce a more general model permitting both additive and multiplicative errors. We first present a method of estimation devised to account simultaneously for additive and multiplicative errors. We then illustrate the method on several simple and artificial examples. Finally, we indicate the workability of this new method by computing estimates for two alternative travel demand functions.
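
The three error specifications at issue can be displayed side by side (notation illustrative, not the paper's):

```latex
% Additive-only, multiplicative-only, and the combined specification:
y_t = f(x_t; \beta) + u_t, \qquad
y_t = f(x_t; \beta)\, v_t, \qquad
y_t = f(x_t; \beta)\, v_t + u_t .
```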


Journal ArticleDOI
TL;DR: In this paper, the existence of a generalized factor-price frontier is established rigorously under the assumption that labor is required, either directly or indirectly, in the production of every commodity.
Abstract: production functions. However, this proposition can be established rigorously only after it has been proved that unique equilibrium prices exist for all given admissible values of the own-rates of return. We prove the latter theorem in Section 3. The existence of a generalized factor-price frontier for our model follows immediately. Moreover, as a by-product we obtain a nonsubstitution theorem which states that for given (admissible) own-rates of return, the equilibrium real wage in terms of any numeraire is determined independently of the composition of output.3 Our nonsubstitution theorem is analogous to those proved by Samuelson [6, 7] and Bruno [2] for Leontief models, and by Morishima [4] for a steady-state neoclassical model with equal own-rates of return. These topics are discussed in Section 4. Section 5 is concerned with the properties of the set of admissible own-rates of return. It is shown that our restrictions are a natural generalization of a two-sector model with a single capital good. Furthermore, we obtain our results, including the validity of our main theorem, under the assumption that labor is required, either directly or indirectly, in the production of every commodity; this condition is much weaker than the alternative assumptions that either labor is required to produce every good or that the technology is indecomposable. In Section 6 we prove that another of Bruno's [2] results for the linear

Journal ArticleDOI
TL;DR: The structurally ordered instrumental variables (SOIV) estimator, as discussed in this paper, is an instrumental variables estimator based on a preference ordering of eligible instruments relative to a given endogenous variable; the ordering is then combined with the data to yield a linear combination of instruments for that variable.
Abstract: proposed by Fisher.2 That estimator is an instrumental variables estimator, with the instruments chosen by a fairly elaborate technique intended to combine structural information on the model with information gained from the data. We have named it the "structurally ordered instrumental variables" (SOIV) estimator. Briefly, each endogenous variable is assumed to appear on the left-hand side of one and only one equation of the model, such normalization rules being part of the natural specification. This fact is used to construct a preference ordering of eligible instruments relative to a given endogenous variable; that preference ordering is then combined with the data to yield a linear combination of instruments for that given endogenous variable. Finally, each equation is estimated by replacing all right-hand side endogenous variables by the linear combinations of the instruments so constructed and regressing the left-hand side endogenous variable on those combinations. Fisher's treatment leaves (at least) three questions of some importance unanswered: 1) Most economy-wide models contain not only equations but also identities. This means that not every endogenous variable appears on the left-hand side of an equation. While the identities can always be eliminated by substitution, the estimator may not be invariant to the way in which this is done. How then should identities be treated so as to preserve the rationale for the estimator? 2) In the construction of the preference orderings in practice, ties may (and do) appear. How should such cases be handled? 3) If each right-hand side endogenous variable is replaced by its regression on a particular set of instruments and if those sets are not the same for different right-hand side variables in the same equation, then consistency is not guaranteed.3 Further, it is by no means clear how to compute the asymptotic variance-covariance matrix of the estimates in this case. Yet the use of instruments specific to given endogenous variables is the touchstone
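
Schematically, the final stage described above is a two-stage least squares variant with variable-specific instrument sets; the display below is our summary, with W_j denoting the instrument combination chosen for endogenous variable y_j:

```latex
% Each right-hand side endogenous variable is replaced by its
% regression on its own instrument set W_j:
\hat{y}_j = W_j ( W_j' W_j )^{-1} W_j' y_j .
% The equation is then estimated by OLS of the left-hand side
% variable on the fitted values \hat{Y} and the included exogenous
% variables X:
\hat{\delta} = \bigl( [\hat{Y}\; X]' [\hat{Y}\; X] \bigr)^{-1} [\hat{Y}\; X]' y .
% Question 3) above arises precisely because the W_j may differ
% across the endogenous variables of a single equation.
```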

Journal ArticleDOI
TL;DR: In this article, the one-commodity, n-region spatial-equilibrium model can be formulated as follows: for each of n regions there exists an excess demand curve EDi and an excess supply curve ESi in terms of the local price in each region:
Abstract: THE THEORY OF SPATIALLY SEPARATED MARKETS for one commodity was first investigated by Enke [2] and Samuelson [9] in the early fifties. Enke solved a three region model by electric analogue and Samuelson, after converting the model into a maximizing problem, conjectured some comparative statics results on the basis that each regional excess supply curve was positively sloped. This condition, it will be shown, is the condition for Hicksian perfect stability in the model. Samuelson did not investigate the conditions for dynamic stability and it turns out that very little comparative statics information follows from the knowledge that the spatial system is dynamically stable. In addition, certain comparative statics relations stated by Samuelson are incorrect. The purpose of this paper is to develop this theory rigorously, to derive new results concerning the one-commodity case and to extend the analysis to the m-commodity case. The one-commodity, n-region spatial-equilibrium model can be formulated as follows: For each of n regions there exists an excess demand curve EDi and an excess supply curve ESi in terms of the local price in each region:
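
The abstract breaks off before the displayed conditions; a standard Enke-Samuelson formulation of the equilibrium (our reconstruction, with t_ij the unit transport cost from region i to region j and x_ij the shipment) reads:

```latex
% No-arbitrage price conditions, nonnegative shipments, and
% complementary slackness:
p_j \le p_i + t_{ij}, \qquad x_{ij} \ge 0, \qquad
x_{ij} \bigl( p_i + t_{ij} - p_j \bigr) = 0 ,
% with each region's net shipments out equal to its excess supply
% at the local price:
\sum_j x_{ij} - \sum_j x_{ji} = ES_i(p_i) .
```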

Journal ArticleDOI
TL;DR: In this paper, Amemiya and Fuller [1] have shown that Hannan's estimates have the same asymptotic distribution as the maximum likelihood estimates in the distributed lag model.
Abstract: VARIOUS TECHNIQUES have been proposed for estimating the parameters in distributed lag schemes. Of these, ordinary least squares is known to be inconsistent, and the technique proposed by Koyck [8] and elaborated by Klein [7] has been shown by Amemiya and Fuller [1] to be less efficient than maximum likelihood or Aitken type estimates.2 Liviatan [9] has proposed an instrumental variables approach which is consistent but inefficient. Hannan [5] has proposed a method of estimating regression coefficients by spectral techniques and in [6] has applied the method to the estimation of distributed lag schemes. Amemiya and Fuller [1] have shown that Hannan's estimates have the same asymptotic distribution as the maximum likelihood estimates in the distributed lag model
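
For context, the Koyck scheme referred to above can be sketched as follows (a standard statement, not quoted from the paper):

```latex
% Geometric distributed lag:
y_t = \alpha \sum_{i=0}^{\infty} \lambda^i x_{t-i} + u_t, \qquad 0 < \lambda < 1 .
% The Koyck transformation (subtract \lambda y_{t-1}) gives
y_t = \alpha x_t + \lambda y_{t-1} + (u_t - \lambda u_{t-1}) ,
% where y_{t-1} is correlated with the moving-average error term,
% which is why ordinary least squares is inconsistent here.
```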

Journal ArticleDOI
TL;DR: In this article, it is shown that the optimal tariff structure requires the solution of a system of homogeneous linear equations, so that the solution is unique only up to a scalar factor; it is a structure of optimal tariffs, rather than an optimum tariff, which is relevant to this aspect of international trade analysis.
Abstract: NUMEROUS ARTICLES HAVE APPEARED on the subject of an optimum tariff. We speak of an optimum tariff advisedly, for there has been little attempt to discuss the possibility of a tax on more than one commodity, usually within the context of a two-commodity trade model in which either the import or export is taxed. The best known of the exceptions are those of Graaff [1] and Kemp [2]. As notable as the work of these authors may be, they did not provide satisfactory answers to the many-commodity case. We urge the reader to note that it is a structure of optimal tariffs, rather than an optimum tariff, which is relevant to this aspect of international trade analysis. This paper was motivated by the need to demonstrate a completely general optimal tariff structure in a model which would readily lend itself both to interpretation and practical application. By employing concepts immediately familiar to economists, estimates of known parameters may be inserted directly into the model whenever desired. On the other hand, the principal intellectual interest focused on the question of whether or not the optimal tariff structure might contain some negative elements. Although Graaff, in fact, claimed that in order for the structure of tariffs to be optimal, it might be necessary to have one or more subsidies in the structure, he did not prove that the case might even exist, let alone upon what assumptions such a case would have to rest. Since it will be shown that the optimal tariff structure requires the solution of a system of homogeneous linear equations, the solution is not unique. The vector of optimal tariffs will be arbitrary to the extent of a scalar constant factor. Arbitrary designation of a commodity numeraire upon which the tariff is set at some (arbitrarily) specified level will determine a particular vector of optimal tariffs. Hence the tariff on the numeraire may be regarded as a parameter in terms of which the remaining tariffs can be linearly and uniquely expressed. At first sight the choice of numeraire may appear to be important because it enables us to select a particular solution to the optimal tariff structure from the infinity of solutions inherent in the system of homogeneous equations. It might then appear that we are left open to choose a solution involving negative tariffs. Although it would be quite ridiculous to regard this as an answer, it does at least give us a clue with respect to the question we ought to ask.
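
The algebraic situation described in the last paragraph can be summarized compactly (our notation, for illustration only):

```latex
% Optimality conditions as a homogeneous linear system in the
% tariff vector \tau = (\tau_1, \ldots, \tau_n)':
B \tau = 0 \quad \Longrightarrow \quad \tau = c\, \tau^{*}, \; c \in \mathbb{R},
% so the solution set is a ray. Fixing the tariff on a numeraire
% commodity, \tau_1 = \bar{\tau}_1, pins down c and expresses every
% other tariff linearly and uniquely in terms of \bar{\tau}_1.
```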


Journal ArticleDOI
TL;DR: In this article, the CCP approach and the SLP approach are compared, where the distribution properties of relevant random variables satisfying preassigned tolerance limits are used to specify a deterministic nonlinear program.
Abstract: An ordinary linear programming model is said to be chance-constrained if its linear constraints are associated with a set of measures indicating the extent of violation of the constraints. The CCP approach usually assumes the resource vector to be normally and mutually independently distributed and then derives a deterministic concave programming problem. In the SLP approach the tolerance measure for the linear constraints is not preassigned by the decision maker and the approach seeks to derive the statistical distribution of the optimal solution vector and also of the optimal objective function under the assumption that the set (A, b, c) of parameters contains elements with known probability distributions. Some basic differences of the CCP and the SLP approaches may be noted at the outset. First, the CCP approach utilizes the distribution properties of relevant random variables satisfying preassigned tolerance limits to specify a deterministic nonlinear program, whereas the SLP approach starts from a deterministic linear program (e.g., a program where all random elements are replaced by their expected values) and admits the random variations around its optimal basis to derive the probability distribution of the optimal solution satisfying (if necessary at a later stage) some tolerance measures if and when feasible. Second, nonlinearities are introduced in both approaches, although the initial problem in both cases is a linear programming problem. Third, the CCP approach restricts decision rules within a certain class (e.g.,
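
The SLP idea of studying the distribution of the optimum lends itself to a simple Monte Carlo illustration. The sketch below is our construction with arbitrary numbers, a sampling experiment rather than the analytical distribution theory the paper pursues: sample the resource vector, solve each realized program, and summarize the optimal values.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Base program: min c'x  subject to  A x <= b,  x >= 0.
c = np.array([-3.0, -5.0])            # i.e., maximize 3*x1 + 5*x2
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b_mean = np.array([4.0, 12.0, 18.0])  # expected resource vector

# Sample b around its mean, solve each realized LP, and collect the
# optimal objective values: an empirical stand-in for the distribution
# of the optimum that SLP derives analytically.
values = []
for _ in range(1000):
    b = b_mean + rng.normal(scale=0.5, size=3)
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
    if res.success:
        values.append(-res.fun)       # back to the maximization value

values = np.array(values)
print(f"mean optimum: {values.mean():.3f}, std: {values.std():.3f}")
```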

Journal ArticleDOI
TL;DR: While the correlation between changes in the direction of volume and price fluctuations was inconclusive in general, there was a significant difference between industrial raw materials and foodstuffs in this respect, indicating that increases in supply, caused by plentiful crops, tend to drive prices down, or vice versa.
Abstract: While the correlation between changes in the direction of volume and price fluctuations was inconclusive in general, there was a significant difference between industrial raw materials and foodstuffs in this respect. For industrial materials alone, there was a tendency for the correlations to be positive, though not always high. This indicated that price and volume tended to move up or down together; the conclusion may be drawn that they both tended to be governed by changes in demand. In the case of foodstuffs, the correlation tended to be negative, that is, price and volume tended to move in opposite directions, indicating that increases in supply, caused by plentiful crops, for example, tend to drive prices down, or vice versa. In such cases, changes in the supply factor governed fluctuations in prices and proceeds.


Journal ArticleDOI
TL;DR: In recent years there has been a proliferation of non-market or combined market and institutional theories of wage determination, and impressive attempts have been made to measure the relative wage impact of collective bargaining by examining union and nonunion wages, as discussed by the authors.
Abstract: IN RECENT YEARS there has been a proliferation of non-market or combined market and institutional theories of wage determination. This development was a natural response to the spread of collective bargaining in the 1930's. Impressive attempts have been made to measure the relative wage impact of collective bargaining by examining union and nonunion wages.2 Moreover, considerable effort has been devoted to identifying and measuring the effects of those variables which influence the outcome of collective bargaining. Variables such as profit rates, indices of product market monopoly or labor force skill, and labor costs as a share of total costs have all been used as possible sources of union success. In particular, variables linking union success in one sector with that in another sector are often mentioned (see, for example, [5], [8], [14], [16]). The postulated processes have been described variously as spillover, key bargain, wage leadership, pattern wage adjustment, imitation, and diffusion. According to these theories in their most general form, wage movements in some sectors are influenced not only by traditional market forces in that sector but also by wage movements in some other sector. Although these theories are sometimes restricted to wage determination under collective bargaining, they also have been applied to cases in which allegedly monopsonistic employers administer wages and imitate highly visible economic sectors. We believe that the wage determination literature has moved too far in the direction of institutional theories which often totally exclude market forces, particularly labor supply forces. After all, about 75-80 percent of the American labor force is unorganized. Therefore, one should not overestimate the impact of institutional forces as opposed to market forces on aggregate wages. Moreover, even in the union sector market forces may

Journal ArticleDOI
TL;DR: In this paper, a theoretical framework was proposed to specify the static global equilibrium pattern of specialization when the number of goods (n) exceeds the number of factors (m), on the basis of the Heckscher-Ohlin theory of factor price equalization.
Abstract: GIVEN THE PRODUCTION TECHNOLOGY, we may expect that the pattern of production and trade of a country would be determined by factor endowments and demand patterns in the Heckscher-Ohlin model. Imagine, however, a kind of perfect Heckscher-Ohlin world where the factor prices are equalized in every country. Suppose we are asked what the pattern of production and trade of a country would be, given the data on production technology, factor endowments and demand patterns of every country. If it is a world where the number of goods (n) exceeds the number of factors (m), we cannot say anything definite about the production and trade pattern of a country, i.e., the precise degree of international specialization is indeterminate. When m < n, the factor endowments and the demand patterns of each individual country have no formal place in the Heckscher-Ohlin theory of factor price equalization itself to determine the precise pattern of production and trade of each country. Furthermore, no one has yet rigorously explored the possible implications of the factor price equalization theorem specifically with respect to the determination of patterns of production and trade of each country. The main purpose of this paper is to derive a theoretical framework which can specify the static global equilibrium pattern of specialization when n exceeds m on the basis of the Heckscher-Ohlin theory of factor price equalization. In order to specify the production and trade pattern of each country, we will introduce a simple assumption which seems reasonably realistic that, if the total value of outputs is the same, each country has a tendency to minimize international transaction activities. With this assumption it will be shown that we can eliminate the uncertainty concerning the precise pattern of production and trade of each country, and hence we can deduce a theoretical framework to determine the pattern of production and trade in a multisectoral economy from the factor price equalization theorem.
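
The tie-breaking assumption can be given a schematic form; the notation here is ours, with A the m-by-n matrix of unit factor requirements (common across countries under factor price equalization), v^k country k's endowment vector, and d^k its demand vector at the common prices p:

```latex
% Country k's output vector y^k is chosen to minimize the value of
% international transactions, given full employment of its factors:
\min_{y^k \ge 0} \; \sum_{i=1}^{n} p_i \,\bigl| y_i^k - d_i^k \bigr|
\qquad \text{subject to} \qquad A\, y^k = v^k .
```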

Journal ArticleDOI
TL;DR: The assumption that an individual's ordinal utility function is "homogeneous" can be translated into an equivalent assumption about his preferences, as discussed in this paper, and Tobin [11, (7-11)] seems to make implicit use of the homogeneity assumption.
Abstract: that an individual's preferences can be represented by an ordinal utility function which is "homogeneous" (i.e., an increasing monotonic transformation of a function homogeneous of degree one). Friedman [2] and Modigliani and Brumberg [3] use this assumption in their theories of the consumption function. Radner's turnpike theorem [6] [7] requires that the planner's preferences among terminal states can be represented by a homogeneous utility function. And Tobin [11, (7-11)], in his discussion of portfolio selection for mixed target dates, seems to make implicit use of the homogeneity assumption.2 The assumption that an individual's ordinal utility function is "homogeneous" can be translated into an equivalent assumption about his preferences. Definition. Let U(X) be an ordinal utility function, where X denotes the vector (x1, ..., xT).3 If there exists a function f(X), homogeneous of degree one, and a twice differentiable function F, F' > 0, such that F[U(X)] = f(X),
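
The definition is cut off above; the preference counterpart it points toward is the standard homotheticity condition, stated here as a reminder rather than a quotation:

```latex
% U is an increasing transform of a degree-one homogeneous f
% if and only if preferences are homothetic, i.e. indifference is
% preserved under proportional scaling of consumption vectors:
X \sim Y \;\Longrightarrow\; \lambda X \sim \lambda Y
\qquad \text{for every } \lambda > 0 .
```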