scispace - formally typeset
Search or ask a question

Showing papers in "The Review of Economic Studies in 1977"


Journal ArticleDOI
TL;DR: Bargains and Ripoffs: A Model of Monopolistically Competitive Price Dispersion, this paper, is a model of price dispersion that is based on Salop and Stiglitz's model.
Abstract: Bargains and Ripoffs: A Model of Monopolistically Competitive Price DispersionAuthor(s): Steven Salop and Joseph StiglitzSource: The Review of Economic Studies, Vol. 44, No. 3 (Oct., 1977), pp. 493-510Published by: The Review of Economic Studies Ltd.Stable URL: http://www.jstor.org/stable/2296903Accessed: 15/09/2009 15:43

1,092 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the formulation of equilibrium distributions of sales and advertising prices and develop a model in which sellers have constant average cost curves; approach to prove assumptions on consumer preferences; description of the basic model of consumer pricing.
Abstract: Focuses on the formulation of equilibrium distributions of sales and advertising prices. Development of a model in which sellers have constant average cost curves; Approach to prove assumptions on consumer preferences; Description of the basic model of consumer pricing. (Из Ebsco)

1,037 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the problem of catalog quotes out-of-date prices and suggest that companies should consider putting out catalogs more frequently to keep up with prices.
Abstract: How MUCH? Companies that sell through catalogs run into a pricing problem. Fast rising prices threaten to make catalog quotes out of date. Montgomery Ward & Co. which bought cautiously for its winter catalog, says it is " keeping its fingers crossed " that prices won't surge much in the near future. An official of Oak Supply & Furniture Co., Chicago, complains that the company's catalogs have been obsolete in terms of prices before they got into customers' hands the past two years. Basco Inc., Cherry Hill, N.J. mailed its latest catalog just a few weeks ago but already is reviewing some prices. J. C. Penney Co. says it stands behind prices during the approximately sevenmonth life of its semiannual catalogs. Like other companies, it gets guarantees from many vendors that they won't raise their prices for specified periods. But more catalog sellers tell customers prices are subject to change. Jewelcor Inc. puts such a warning on its catalog's jewelry pages. Some companies, such as Basco, consider putting out catalogs more frequently to keep up with prices. For the last two years Sears Roebuck has issued some 20 special catalogs twice rather than once a year. (Wall Street Journal-10/31/74).

743 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that price formation via the procedure of competitive bidding satisfies a version of the law of large numbers, in both the probabilistic sense and the economic sense.
Abstract: I demonstrate in this paper that price formation via the procedure of competitive bidding satisfies a version of the law of large numbers, in both the probabilistic sense and the economic sense. That is, if in a sealed-tender auction a seller offers to sell at the highest bid an item having a definite but unknown monetary value, and each of many bidders submits a bid based only on his private sample information about the value, where the bidders' samples are independent and identically distributed conditional on the value, then the maximum bid is almost surely equal to the true value. Thus, no bidder knows the true value of the item, yet it is essentially certain that the seller will receive that value as the sale price. Certain regularity assumptions are needed to prove this proposition. I present three examples, two for which the result is valid and another for which it is not.

645 citations



Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of a planner or ethical observer who wants to derive a collective preference ordering over a set of feasible alternatives from the knowledge of individual utility functions.
Abstract: We consider the problem of a planner or ethical observer who wants to derive a collective preference ordering over a set of feasible alternatives from the knowledge of individual utility functions. By assumption, he is concerned with social welfare judgements, not with committee decisions. As a tool of analysis, we use the concept of social welfare functional (SWFL), which was developed by Sen [9] on foundations originally laid down by Arrow [1]. Rather than to compare SWFL’s directly, we treat them somewhat like composite goods and we compare sets of axioms which characterize them. We select five such sets, which differ mainly with respect to the planner’s informational basis. This term refers to an “invariance” axiom which defines in each case the measurability and comparability properties of individual utility functions. Taking up a suggestion of Sen’s [10], we focus our attention on the implications of each informational basis for the equity content of collective choice. Our study does not treat all possible invariance axioms; it does not even exhaust all the most relevant ones. However, we think that it brings about some logical clarification. Among other things, we characterize utilitarianism and the leximin (or lexical maximin) principle by means of two sets of axioms which differ only in one respect, viz. the invariance axiom. The paper is divided into three sections. In Section 1 we describe our problem formally, we discuss our invariance axioms, and we show that some of them are equivalent, in the light of the Review of Economic Studies, 44(2), 199-209, 1977. On this distinction, see Sen [11].

474 citations



Journal ArticleDOI
TL;DR: In this paper, the authors argue that when consumers differ in their information-generating efficiencies and costs, dispersion serves as a device for splitting up the market to permit price discrimination, since less efficient information gatherers will search less and thus will pay a higher price than will efficient searchers.
Abstract: Although economists often assume that commodities form homogeneous categories with a single price, there are in fact heterogeneities within commodity groupings. Many markets for apparently identical commodities are characterized by dispersion in price and differences in durability and other quality measures. The information a buyer requires in order to obtain the lowest price or " best buy must be produced at a cost. For example, various activities for producing this information are reading magazines such as Consumer Reports, consultations with friends and sales personnel, scanning newspaper advertisements and directly sampling store prices. Consumers' search techniques and the efficiency with which they gather information varies. This heterogeneity leads to differences in optimal information-gathering strategies. Those consumers who are more efficient information-gatherers and searchers obtain better buys on average. Although there are clearly private returns to information-gathering, dispersion appears socially wasteful. If there were no dispersion, consumers would not need to engage in this costly learning activity. It is only the failure of the market to price correctly that allows the private returns to search. This waste leads to the suspicion that a monopolistically controlled market will be characterized by a smaller degree of dispersion than a more competitive one. Stigler [19, p. 223] argues that " From the manufacturer's viewpoint, uncertainty concerning his price is clearly disadvantageous, the cost of search is a cost of purchase, and consumption will be smaller the greater the dispersion of prices and the greater the optimum amount of search ". On the other hand, when consumers differ in their information-generating efficiencies and costs, dispersion serves as a device for splitting up the market to permit price discrimination. The basic idea is as follows. 
Suppose that demand conditions are such that the monopolist would like to price discriminate against the less efficient informationgatherers; that is, suppose the submarket consisting of inefficient consumers is more price inelastic. Given these potential gains from discrimination, the monopolist must also discover some method of identifying the inefficient, price inelastic consumers. Simply permitting dispersion is such a method since less-efficient information gatherers will search less and thus on average pay a higher price than will efficient searchers. The very presence of dispersion both splits the market and charges a higher purchase price to the submarket of inefficient searchers. Thus, dispersion acts as a costly device for sorting consumers into submarkets to permit price discrimination. If it is not too costly and demand elasticities vary in the " correct " direction so that the feasible price discrimination is profitable, then dispersion is more profitable than a single price.

326 citations


Journal ArticleDOI
TL;DR: In this article, the role of futures markets as a place where information is exchanged and where people who collect and analyse information about future states of the world can earn a return on their investment in information gathering is explained.
Abstract: It is a fact that futures markets exist in some commodities and not others. Similarly, contingent commodity contracts of the type described by Debreu do not exist for all commodities in all states of the world. Any explanation of this phenomenon must be intimately connected with a theory of what functions these markets serve. The KeynesHicks theory of commodity futures markets is that they provide a mechanism by which risk averse speculators insure other risk averse traders who hold (positive or negative) stocks of a commodity subject to price fluctuation.' We propose a new explanation of the role of futures markets as a place where information is exchanged, and where people who collect and analyse information about future states of the world can earn a return on their investment in information gathering. In particular, it is shown how the private and social incentives for the operation of a futures market depend on how much information spot prices alone can convey from " informed " to " uninformed" traders. (Firms which have information about future states of the world are called "informed ", while firms who do not are called " uninformed ".) In equilibrium, without a futures market, informed firms will use their information about next period's price to make spot market purchases. The commodity purchase is stored in anticipation of a capital gain. Therefore, the trading activity of informed firms in the present spot market makes the spot price a function of their information. Uninformed traders can use the spot price as a statistic which reveals some of the informed traders' information. When the spot price reveals all of the informed traders' information, both types of traders have the same beliefs about next period's price. In this case there will be no incentive to trade based upon differences in beliefs about next period's price. 
In general the spot price will not reveal all of the informed traders' information because there are many other factors (" noise ") which determine the price along with the informed traders' information. This implies that in equilibrium with only a spot market, informed and uninformed traders will have different beliefs about next period's price. The difference in beliefs creates an incentive for futures trading in addition to the usual hedging incentive. When a futures market is introduced uninformed firms will have the futures price as well as the spot price transmitting the informed firms' information to them. This is the informational role of futures markets. The model has the following testable implications. The degree of predictability of a future spot price from only a current spot price determines the private incentives for futures trading in a commodity which has no futures market. For commodities with futures markets the volume of futures trading is directly related to how poorly current and futures prices predict the future spot price, relative to how well various exogenous variables predict the future spot price. " How well " refers to mean square prediction error conditional on available information, not biasedness of the predictions.

320 citations



Journal ArticleDOI
TL;DR: In this paper, the authors show that a policy which reduces all price distortions uniformly will improve the welfare of the economy, if it is stable in the Marshallian sense, and if the good with the highest distortion is substitutable for all the other goods and the economy is stable under the AIM, if the aggregate of income terms weighted by marginal costs (AIM) is positive.
Abstract: The theory of the second best, first formally presented by Lipsey and Lancaster [16], maintains that the abolition of an arbitrarily chosen distortion in an economy with multiple distortions may reduce the welfare of the economy The main objective of the present paper is to formulate some piecemeal policy recommendations which would definitely result in a move towards efficiency In particular, we will prove the following: (a) a policy which reduces all price distortions uniformly will improve the welfare of the economy, if it is stable in the Marshallian sense (b) a policy which brings the highest distortion to the level of the next highest will improve the welfare of the economy, if the good with the highest distortion is substitutable for all the other goods and if the economy is stable in the Marshallian sense Our results integrate the characterization of the second best solution by Green [9], the analysis of the uniform reduction of tariff and excise tax by Foster and Sonnenschein [8] and Bruno [4], and the demonstration by Kemp [15] that the welfare effect of the tariff reduction in the two commodity world is related to the stability of the economy In the present paper, an extensive use of the compensated demand function enables us to reveal the underlying relationship among these seemingly unrelated works' In Section 2, we will define the compensated demand function, and will present its properties used in this paper The model will be presented in Section 3 In Section 4 we will establish that in an economy with constant-cost technology a uniform reduction in excise tax rates improves welfare provided that the aggregate of income terms weighted by marginal costs (AIM) is positive We will also show that a reduction of the highest tax rate to the level of the next highest rate improves the welfare if the AIM is positive and if the good with the highest tax rate is substitutable for all other goods In Section 5 the main theorems will be proved by 
establishing that the positivity of AIM in the propositions of Section 4 can be replaced by another condition if the economy is stable under the Marshallian adjustment mechanism (which is defined in the text) Section 6 will re-evaluate the theory of the second best from our framework (This section can be read independently of Section 5) Throughout this paper, a matrix will be denoted by an upper-case letter; a lower-case bold-faced letter will represent a column vector; its transpose will be shown by a prime; and the ith element of the vector is denoted by the same letter with subscript i, unless stated otherwise


Journal ArticleDOI
TL;DR: In this article, the authors examine the economic issues posed by the existence of guarantees, and make a distinction between situations where buyers and sellers have equal access to information, and a situation where sellers have superior access.
Abstract: The fact that a consumer is frequently uncertain about the quality of a product that he purchases, and is therefore also unsure of the extent to which it will render him the services he might expect of it, is one that is gaining increasing recognition. In an earlier paper [5] I examined in a simple framework the effects of changes in the uncertainty about a product's quality on the consumer's demand, and also touched briefly on the effect of a guarantee. In this paper I want to examine in more detail some of the economic issues posed by the existence of guarantees. There are several useful distinctions that can be drawn as a preliminary to more detailed study. One is a distinction between situations where buyers and sellers have equal access to information, and a situation where sellers have superior access. The first situation is exemplified by a market where trade is in a product whose quality is genuinely random, with the distribution known to both buyers and sellers. Thus if a certain fraction p of the cars from a given factory are generally known to be faulty, then a transaction in which a retailer sells a car to a buyer comes into this category: each knows that the chance of the car being faulty is p, but, because the car is unused, neither knows whether it is actually faulty. Contrast this with a situation where the first owner of the car is reselling it: now there is an important asymmetry, in that the seller is much better informed about the quality of the product than the buyer. This is the case with which Akerlof's very interesting analysis [1] is largely concerned, and which I have also discussed in [4]. My main concern here, however, is with a situation of equal information. Thinking casually about such a situation, it is clear that one can distinguish between the incentive effects and the risk-sharing effects of a guarantee. 
Incentive effects arise because the existence of a guarantee provides the producer with an incentive to improve the quality of his product, at least to the extent of reducing the chances of its falling below the guaranteed level. If the compensation in the event of failure is less than complete, then the consumer also has an incentive to maintain the product. For example, a used car guarantee, under which the buyer and seller will each pay half of any repair bills, provides both parties with incentives to minimize these bills. Of course, if the guarantee is valid for only a limited period of time, then there is the further effect of providing the buyer with an incentive to ensure that if there is to be a failure, it occurs early in the product's life. This may act in opposition to the other effect, and reduce his eagerness to maintain the product. In addition to creating the incentive effects mentioned, a guarantee also acts as a way of sharing the risk associated with uncertainty about the quality of a product: to be efficient in this sense, it will apportion this risk in accordance with the risk-aversion of the participants. My main concern here is with the risk-sharing aspects of guarantees. This is partly because these seem to be the most tractable aspects of the problem, but also stems from a belief that these are the most important aspects. Casual empiricism suggests that except in rather unique cases, a product is usually designed and produced before any attention is given to the choice of guarantee terms: these are then chosen as part of a marketing package. In such situations, the reliability of the product will clearly be independent of the






Journal ArticleDOI
TL;DR: The divide-and-choose game has played an important role in the literature on fair division as mentioned in this paper, and it has been widely used in the real world, where sometimes even prolonged and costly negotiations produce only imperfect agreements.
Abstract: The " divide-and-choose " method has played an important role in the literature on fair division.' This technique for allocating bundles of goods seems impartial, requires little cooperation from agents, and is nearly free of administrative costs. It is therefore somewhat puzzling that it has found so few applications in the real world, where sometimes even prolonged and costly negotiations produce only imperfect agreements. Either the method has drawbacks not yet well understood, or it is underutilized. This paper examines the game that arises when two agents agree to use the divide-and-choose method. The analysis leads to a resolution of the puzzle mentioned above and identifies a class of situations where replacing conventional arbitration procedures with the divide-and-choose method can be strongly recommended. In the sequel, an agent who would prefer another agent's bundle of goods to his own will be said to envy the other agent. An allocation at which no agent envies another will be called a fair allocation.2 A well-known property of the two-person version of the divideand-choose game3 is that each player can insure that he does not envy the other. The divider (D) can accomplish this by dividing so that lie is indifferent to his opponent's choice; the chooser (C) need only choose his most preferred bundle after D divides. It is interesting that the players can insure that the outcome of the game is fair, but more information about the allocations actually generated by the game is needed to judge its usefulness as a fair division device. Conceivably, with players motivated by self-interest, the game could generate an unfair allocation in spite of the above result. To learn more about the divide-and-choose method, I assume that players seek to obtain the most desirable bundle possible. 
They are also assumed to behave noncooperatively, since negotiating a mutually acceptable settlement would be relatively easy if they were willing to cooperate, and the method would then be superfluous. In Section 2 of this paper D's problem is formulated and his optimal non-cooperative strategy is characterized. As is suggested by Kolm [6, p. 61], Luce and Raiffa [8, p. 365] and Singer [12], the common belief that D should divide the bundle so that D is indifferent about C's choice is false. If D knows C's preferences with certainty, under very general conditionsroughly, that players' behaviour can be described by the maximization of continuous and strongly monotonic utility functions and that goods are homogeneous and perfectly divisible-his optimal strategy involves dividing the bundle so that it is C, rather than D, who is indifferent about his choice. Once D's optimal strategy has been characterized, several interesting conclusions follow. As Kolm [6, p. 31] points out, Luce and Raiffa's belief [8, pp. 364-365] that the



Journal ArticleDOI
TL;DR: In this paper, the authors show that aggregate production functions with parameters estimated from factor payments do appear to give good results, at least sometimes, and to do so in an apparently non-trivial way.
Abstract: As a purely theoretical matter, aggregate production functions exist only under conditions too stringent to be believed satisfied by the diverse technological relationships of actual economies. There is a summary discussion and bibliography in Fisher [2]. Yet aggregate production functions estimated from real data do appear to give good results, at least sometimes, and to do so in an apparently non-trivial way. Not only do such estimated relationships give good fits to input and output data, but also the calculated marginal products appear to be related to observed factor payments. Alternatively, production functions with parameters estimated from factor payments turn out to fit input and output data pretty well sometimes. It is not a simple matter to decide why this should be so as a matter of theory. Indeed, the problem is sufficiently complicated that perhaps the most promising mode of attack on it is through the construction and analysis of simulation experiments. By constructing simplified economies in which the conditions for aggregation are known not to be satisfied, we can hope to find out inductively the circumstances under which aggregate production functions appear to give good results in the double sense just discussed. Moreover, such experiments can cast light on other aspects of the estimation of aggregate production functions from underlying non-aggregatable data. The related papers of Houthakker [4], Levhari [6] and Sato [8] do not bear directly on the problems here addressed. Those papers show what aggregate production functions can be expected when the distribution of capital over firms with related technologies is fixed (or changes in very restricted ways). Such fixity of distribution, however, can hardly be expected in the real world and is certainly not true in the world of simulation experiments reported below and in Fisher [3]. 
Here the issue is that of why aggregate production functions should appear to exist at all, rather than that of what form they will take given that a constant distribution of capital over firms ensures their existence. See Fisher [2, pp. 571-574]. The books by Johansen [5] and Sato [9] give excellent discussions of aggregation from several points of view. This programme of research was begun in Fisher [3]; individual firms (with a single homogeneous output and single homogeneous labour but different capital types) were given different Cobb-Douglas production functions, underlying capital and labour data were generated in various ways, and labour was assigned to firms to maximize output. An aggregate Cobb-Douglas production function was then estimated and its wage predictions examined. A number of subsidiary results were found in these experiments, and we shall comment

Journal ArticleDOI
TL;DR: A mathematical model is established whereby optimal accumulation paths of population and the economy can be figured when the basic feature of population dynamics, age dependence, is taken into account and an integral-equation control theory is utilized.
Abstract: A mathematical model is established whereby optimal accumulation paths of population and the economy can be figured when the basic feature of population dynamics age dependence is taken into account. This model series links 2 previously separate disciplines and literatures - i.e. formal policy dealing with the dynamics of age and time. An integral-equation control theory is utilized. Recognition of age structure in analyzing fertility trends is important at both macro- and microeconomic levels. The present model was constructed by embedding an age-disaggregated population in a simple economic growth model where the fertility level and rate of savings can both be influenced by government policy. Such policy should balance the lifetime value of births and capital against the social costs necessary to create them. One the problem has been defined and a dynamic theory developed optimum static theories can be establihsed where the variables are held constant over time.

Journal ArticleDOI
TL;DR: Turnovsky as mentioned in this paper showed that the effect of price uncertainty on a small country's allocation of a single input to the production of two goods using a Ricardian technology is dependent on the choice of numeraire good.
Abstract: Over the past few years a number of papers have been written analysing the effects of price uncertainty on a small trading country.' The conventional approach adopted in these studies is first to pick a numeraire commodity, thereby defining a relative price (terms of trade) which the small country takes as exogenously given. The objective is then to compare various decisions and quantities when this relative price is random with what they would be under certainty. To make this comparison one must introduce a " certainty price " and without exception this has quite naturally and unquestioningly been the arithmetic mean of this relative price. Typical of such studies is a recent article in this Review by one of the present authors, Turnovsky [12]. He considers the impact of price uncertainty on a small country's allocation of a single input to the production of two goods using a Ricardian technology.2 It turns out that by using the arithmetic mean of a relative price as the certainty price, some of his propositions are weakened in so far as they are dependent upon the choice of numeraire good. This is clearly an unsatisfactory situation since this choice is presumably an arbitrary one. The purpose of this paper is two-fold. First we wish to use one of his propositions (proposition 7) to illustrate how the use of the arithmetic mean makes the result dependent upon the choice of numeraire. We also show (Section 3) how this dependence can be avoided if some other measure of central tendency such as the geometric mean is used as the certainty price, in which case the proposition holds unambiguously. The second objective is a somewhat more general one. As already indicated, the procedure followed by Turnovsky is in fact the standard one in this literature. 
Much of it, particularly those papers dealing with the neoclassical technology, makes extensive use of Jensen's inequality, which enables the effects of uncertainty to be determined from the convexity-concavity properties of relevant functions.3 It turns out that in many cases the choice of numeraire influences these properties, leading to the same kind of difficulty as

Journal ArticleDOI
TL;DR: In this article, a probabilistic approach is used for bypassing Arrow's impossibility result on social choice functions, which is a good substitute to the single-peakedness approach.
Abstract: One of the routes for bypassing Arrow's Impossibility result [1] on social choice functions is that taken by Hinich and Ordeshook [8] and Hinich, Ledyard and Ordeshook [7].1 They relax the assumption of a deterministic vote by an individual based on his own preferences, and, rather, assume that given a set of alternatives there is a probability distribution for each individual indicating the probability that he will vote for each alternative or that he will not vote at all. Also a new probabilistic equilibrium concept is used in which expected plurality replaces plurality in the deterministic models.2 The mathematical tool used by Hinich, Ledyard and Ordeshook is game theory. In this paper we use this probabilistic approach for several reasons. For one we consider it a good substitute to the single-peakedness approach. In particular, the assumption about the probability of voting functions being concave in one argument (a precise definition comes later) seems to us to be less severe than single-peakedness. This is so since a probability function can be viewed as a conditional utility function of a candidate (an elaboration on this will come later). In our opinion, this is so even in the one-dimensional case. And since it allows one to get some results in the multi-dimensional case as well, when singlepeakedness is not sufficient, it appears overall to be a better approach. We expand the work of Hinich, Ledyard and Ordeshook in several ways. Our assumptions (see Section 2) are much weaker than theirs. For example, we do not require differentiable utility functions or symmetric probability functions (an assumption which they did not make explicitly, but which is embedded implicitly intheir work). By weakening the assumptions we give up the uniqueness of equilibrium and, in particular, we show that the sets of equilibrium points for each candidate may differ. 
This result, which differs from the " median-position " result stated first by Black [3], may explain, at least in part, why candidates choose somewhat different positions. The existence of different equilibrium points for different candidates no longer gives rise to just one social choice function, but to several. Finally, we use a method of proof (see Section 3) much simpler and much less technical than that used by HLO. This method is based on a recently stated fixed-point theorem [6]. 2. THE MODEL AND SOME BASIC RESULTS



Journal ArticleDOI
Efraim Sadka1
TL;DR: In this article, the authors show that the latter conclusion is erroneous and offer another class of producers for which aggregate production efficiency is desirable, namely those who are to produce at all.
Abstract: An interesting question in public economics is the desirability of production efficiency in an economy where there are no restrictions on the government with respect to commodity and profit taxation. Diamond and Mirrlees [1] considered economies with constant returns-to-scale technologies and concluded, among other things, that overall production efficiency is desirable under fairly general conditions. A subsequent paper by Mirrlees [3] dealt with economies possessing both constant and decreasing returns-to-scale technologies. In that paper Mirrlees derived conditions under which aggregate production efficiency is desirable. When these conditions are not met, then "production efficiency is desirable (only) for a class of producers, namely those who are to produce at all . . ." ([3, p. 107]). We shall show in this note that the latter conclusion is erroneous and offer another class of producers for which production efficiency is desirable. We shall first find a particular class of producers for which production efficiency is desirable. For this purpose let us recall Mirrlees' notation. Let Yj (with elements yj), j = 1, ..., m, be the production set of the jth firm and let G (with elements z) be the public production set. All of these sets are assumed to be closed and convex and to contain the origin. Suppose first that each firm has unique (up to scalar multiplication) supply prices pj(yj) which vary continuously with net outputs yj. Define rj(yj) to be the profit of producer j, namely rj(yj) ≡ yj · pj(yj). This profit function is clearly continuous. The rate of profit tax (possibly negative) imposed on firm j is denoted by 1 - aj < 1. Consumers (with diverse tastes) face consumer prices q. Aggregate net demand is denoted by x{q; a1r1(y1), ..., amrm(ym)}, since it depends on the consumer prices and the net profit of each firm; x is assumed to be continuous in q.
Similarly, the indirect social welfare function is denoted by V{q; a1r1(y1), ..., amrm(ym)}. It is assumed that V does not attain a local unconstrained maximum with respect to q. This assumption trivially holds if there is a commodity that no consumer buys (respectively, sells) and some sell (respectively, buy). The problem at hand is to choose q, z ∈ G, yj ∈ Yj and aj ≥ 0 (j = 1, ..., m) so as to
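The accounting in the notation above is simple enough to check with numbers. The figures below are invented for illustration (they are not from the paper): a two-good net output vector yj, supply prices pj(yj) at that output, the profit rj(yj) ≡ yj · pj(yj), and the after-tax profit aj·rj that enters aggregate demand x and welfare V.

```python
def profit(y, p):
    """rj(yj) = yj . pj(yj): inner product of net outputs (inputs negative)
    and supply prices."""
    return sum(yi * pi for yi, pi in zip(y, p))

y_j = [2.0, -1.0]       # produces 2 units of good 1 using 1 unit of good 2
p_j = [3.0, 4.0]        # supply prices at this net output (assumed numbers)
r_j = profit(y_j, p_j)  # 2*3 + (-1)*4 = 2.0

a_j = 0.7               # firm keeps the share a_j of profit
tax_rate = 1.0 - a_j    # the profit-tax rate 1 - a_j = 0.3
net_profit = a_j * r_j  # 1.4, the amount entering x{...} and V{...}
```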

Journal ArticleDOI
TL;DR: In this article, the authors introduce an econometric model of export pricing and sales behavior, the parameters of which vary with some indicator of excess productive capacity, and they derive aggregate equations for the economy's exports and the problems of estimation and measurement posed by them.
Abstract: This paper has two purposes. The first is to introduce an econometric model of export pricing and sales behaviour, the parameters of which vary with some indicator of excess productive capacity. For the individual firm this is achieved simply by proposing that it follows a demand-constrained regime when levels of working are low, and a supply-constrained regime when full capacity is approached. Application of this sort of model to the whole economy would lead to implausible discontinuities; as overall capacity utilization rose, a switching point would be reached where factors influencing demand (world trade, competitor prices) became suddenly unimportant and those influencing supply (investment, profitability) became suddenly all-important. The second purpose of this paper is to offer a resolution of this conceptual inelegance by postulating that at any one time firms experience a variety of capacity utilization conditions distributed around means which are generally high at peaks of the domestic business cycle and low in troughs. The resulting aggregate equations exhibit no unwonted discontinuities. The potential area of application of this device is large, as all multi-regime models involve some abrupt changes in the behaviour patterns of the agents they describe. It is, however, circumscribed by the technical problems of combining the aggregation procedure with constraints on the agents' behaviour which are essentially stochastic, and by the need to fabricate data on the way the switching variable (capacity utilization) is distributed across individual agents (firms). We shall look at two versions of the export model: the first rests on the assumption that the constraints on firm behaviour share a common stochastic term, thus effectively reducing these technical problems to those of pure aggregation; the second rests on the more general premise that such random elements are independent.
The first suffices to illustrate the important points that weighting or sample-splitting schemes are inappropriate to its estimation, and that there is a real danger of incorrectly rejecting the hypothesis that two regimes are at work if firms differ in their experience of the switching variable. The second model is developed partly to reveal a further source of bias in this direction, but mainly to catalogue the practical problems of obtaining a manageable estimating equation for a model in which independent stochastic constraints coexist with an aggregation problem. This is in fact achieved only by confining all random variables to a non-normal class of distribution functions which is tailored to produce relatively simple expressions for total exports while making full use of the meagre data available on intra-industry variations in capacity utilization. In the first section below, the basic model of the exporting firm is formalized and set in the context of current research on models with varying parameters. In the second section, aggregate equations for the economy's exports are derived and the problems of estimation and measurement posed by them are discussed. Finally, estimated equations for the volume and price of UK exports of manufactured goods are presented, with special emphasis on the distinct effects of cyclical variations in the degree of capacity utilization on export demand and supply.
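The smoothing-by-aggregation idea can be sketched as follows. All functional forms, parameters, and the normal distribution of utilization across firms are assumptions made for the sketch, not the paper's specification: each firm switches abruptly from a demand-constrained to a supply-constrained regime at a utilization threshold, but averaging over a distribution of utilization across firms yields an aggregate export figure that moves smoothly with mean utilization.

```python
import numpy as np

def firm_exports(u, demand=1.0, u_star=0.9):
    """A firm is demand-constrained below the utilization threshold u_star
    and supply-constrained above it; supply shrinks with spare capacity.
    Both the threshold and the supply schedule are illustrative."""
    supply = 5.0 * (1.0 - u)
    return demand if u < u_star else min(demand, supply)

def aggregate_exports(mean_u, spread=0.1, n=10_000, seed=0):
    """Average firm exports over an assumed normal cross-firm distribution
    of capacity utilization, clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    u = np.clip(rng.normal(mean_u, spread, n), 0.0, 1.0)
    return float(np.mean([firm_exports(ui) for ui in u]))

# At a trough (low mean utilization) almost all firms are demand-constrained;
# at a peak a large share hit the supply constraint, so aggregate exports
# fall smoothly despite the kink in each firm's behaviour.
trough, peak = aggregate_exports(0.70), aggregate_exports(0.95)
```

The aggregate response has no discontinuity in mean utilization because the share of supply-constrained firms changes continuously, which is the resolution of the "conceptual inelegance" described in the abstract.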

Journal ArticleDOI
TL;DR: In this article, the authors note Rawls' view that the maximin criterion is unsuitable for determining the just rate of saving, being intended only to hold within generations, and that fair principles of justice would be those chosen by individuals behind a veil of ignorance.
Abstract: Almost invariably, when growth theorists have looked for optimal growth paths they have chosen to maximize the sum (or discounted sum) of utilities over time. Thus it is not surprising that, when considering Rawls' contribution to justice theory, economists' interest has focused upon Rawls' alternative principles for intertemporal distribution, specifically for intergenerational distribution. What is surprising is how Rawls deals with intergenerational distribution, having a different principle of justice to allocate within generations than the process used to allocate between generations, a view he has since confirmed: "I should add that the criterion (maximin) is unsuitable for determining the just rate of saving; it is intended only to hold within generations" [8]. Rawls considers that fair principles of justice would be those chosen by individuals behind a veil of ignorance (i.e. no one knows his position in society, etc.), and he uses the majority of his book [7] to defend the view that individuals would unanimously choose the following principles:
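The contrast between the growth theorists' discounted-sum objective and a maximin criterion applied across generations can be made concrete. The consumption paths, log utility, and discount factor below are all made up for the sketch: a path that sacrifices the earliest generation for growth can win under a discounted sum of utilities while losing under maximin, which ranks paths by the welfare of the worst-off generation.

```python
import math

# Two hypothetical consumption paths across three generations.
paths = {
    "save for growth": [4.0, 8.0, 12.0],  # early sacrifice, later abundance
    "no saving":       [6.0, 6.0, 6.0],   # equal consumption throughout
}

def discounted_sum(path, beta=0.9):
    """Sum of discounted log utilities, the usual growth-theory objective."""
    return sum(beta ** t * math.log(c) for t, c in enumerate(path))

def maximin(path):
    """Maximin ranks a path by its worst-off generation's consumption."""
    return min(path)

util_choice = max(paths, key=lambda k: discounted_sum(paths[k]))
rawls_choice = max(paths, key=lambda k: maximin(paths[k]))
# The discounted-sum planner picks "save for growth"; the maximin
# criterion picks "no saving", since generation 0 is worst off at 4.0.
```

This divergence is exactly why, as the abstract notes, Rawls declines to apply maximin to the just rate of saving: taken across generations it would forbid any saving that leaves the earliest generation worst off.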