
Showing papers in "Economica in 1983"


Journal ArticleDOI
TL;DR: In this article, the authors abstract from the problem of household comparability by considering a population of n households, identical in all respects except for their incomes; the question of ordering social states then becomes one of ranking income distributions over a group of anonymous households or individuals.
Abstract: A large variety of policy questions involve choices between social states and a consequent ordering of the feasible alternatives. When these social states are related to the levels of welfare experienced by individuals or households, two central issues stand out in determining the relative desirability of different social outcomes. One of these is the essentially positive exercise of achieving comparability between households with different characteristics (such as composition or preferences) operating in different environments (for example, facing different price structures). The other concerns the normative judgments implicit in the evaluation of alternative allocations of resources: the emphasis placed on inequality between households and the extent to which greater inequality can be compensated by higher average living standards. This paper focuses on the second of these issues, and in doing so we abstract from the problem of household comparability by considering a population of n households, identical in all respects except for their incomes. The question of ordering social states then becomes one of ranking income distributions over a group of anonymous households or individuals. Borrowing the usual assumptions imposed on consumer preferences, we may suppose that a social ordering of income distributions can be represented by a "welfare function".
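
A standard way to make such a welfare function concrete, offered here as an illustrative sketch rather than the paper's own specification, is an anonymous additive form over incomes:

    W(y_1, ..., y_n) = \sum_{i=1}^{n} U(y_i)

where y_i is household i's income and U is increasing and concave; the symmetry of W captures the anonymity of households, and the concavity of U embodies the weight placed on inequality relative to average living standards.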

1,003 citations







Journal ArticleDOI
TL;DR: In this article, the authors present a method of estimating the union/non-union differential at the individual level and report the first estimates for the United Kingdom based on individual-level data.
Abstract: Considerable attention has been focused in recent years on the empirical analysis of the effect of trade unions on the structure of earnings. Such empirical investigation usually entails measurement of the union/non-union differential in wages or earnings. At the level of the individual this involves comparison of the earnings of union members with estimates of what their earnings would be were they not union members, or, equivalently, examination of differences in earnings between individuals comparable with regard to all other relevant characteristics who differ in their union membership alone. In the United Kingdom all previous attempts at estimation of this differential have used aggregate data at either the industry or occupation level. The conventional methodology involves regression of the logarithm of the average wage on either the proportion of workers who are union members or the proportion covered by a collective agreement, together with a vector of industry characteristics that are deemed to determine the non-union wage. However, several problems are immediately apparent with this methodology. First, the model ignores the possibility of variation in the differential across industries, which has potentially serious consequences. Second, the vector of other characteristics customarily used is somewhat limited and raises doubts concerning the adequacy of the standardization for what might loosely be termed "labour quality". Third, only a sub-sample of industries is ever used owing to lack of data. Those industries excluded are generally smaller (in terms of employment) than those included. If the differential varies with the size of the industry a further bias will be induced. All of these problems are either removed or lessened by the use of individual-level data. (For an analysis of their impact at the industry level see Geroski and Stewart, 1981.) As a result, it is now widely accepted that the appropriate way to measure such differentials is to use micro-data sets at the level of the individual worker. Until recently suitable data have not been available for the United Kingdom and hence such analysis was not possible. This paper presents the first estimates for the United Kingdom based on individual-level data. If the value of the union/non-union differential were the same for all workers in the unionized sector, analysis at the industry level and the individual level would produce similar results. However, the Marshallian laws of derived demand suggest that this will not be the case. Despite this clear prediction of variation in the differential, there has been little systematic investigation of it. This paper also presents such an analysis. Since variation in the differential across industries results in aggregation bias in industry-level studies, such an analysis is of considerable relevance to the evaluation of the evidence from such studies, as well as being of interest in its own right. The paper is laid out as follows. Section I presents a method of estimating the union/non-union differential at the individual level, while Section II presents the first estimates for the United Kingdom based on individual-level data.
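
As a sketch of the two approaches contrasted above (the notation is illustrative, not the paper's): the conventional industry-level methodology regresses

    \ln \bar{w}_j = \alpha + \beta D_j + \gamma' X_j + \varepsilon_j

where \bar{w}_j is the average wage in industry j, D_j is union density or coverage, and X_j is the vector of industry characteristics, whereas with micro-data one estimates

    \ln w_i = \alpha + \delta M_i + \beta' X_i + \varepsilon_i

for individual workers i, with M_i an indicator of union membership, so that the proportionate union/non-union differential is approximately e^\delta - 1.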

113 citations


Journal ArticleDOI
TL;DR: In this paper, a simple model is described with two countries and one final good, and it is shown how the stage of specialization can be determined as an endogenous variable; the price system and the gains from trade are also discussed.
Abstract: The traditional models of the pure theory of international trade have mainly discussed specialization in final goods. That is, the entire vertical processing of a good is done in the country that specializes in that good. Thus, if a country specializes in the production of clothing, the raw materials needed for the production of clothing are also produced in that country. Vertical specialization, on the other hand, means that the countries specialize along the vertical productive spectrum of a good. One country may produce yarn and export it while the other country may produce clothing with imported yarn. The traded goods under vertical specialization are final goods as well as raw materials. A good, typically, has many stages along its vertical spectrum, and a country can specialize at any one of those stages. The question that immediately arises is what determines the particular stage at which the vertical spectrum is broken so that each country can specialize in one part? To put it differently, if a country is seen to be specializing in the production of yarn, there must be some reason why it does not specialize in raw cotton. The purpose of this paper is to answer that question. In Section I, a very simple model is described with two countries and one final good, and it is shown how the stage of specialization can be determined as an endogenous variable. Section II discusses the price system and the gains from trade, while Section III briefly discusses how the model can be extended to incorporate two final goods.

86 citations


Journal ArticleDOI
TL;DR: In this paper, the agent (employee) is assumed to maximize expected utility, which depends on wages and effort (a "bad"), while monitoring of effort and output is costly; the firm's problem is to select a level of monitoring and a compensation package that will minimize the cost of obtaining a desired level of effort, given the agent's preferences and the agent's opportunity cost of accepting employment with the firm (his labour supply constraint).
Abstract: Employees typically have latitude with respect to the effort they supply, and monitoring of effort and output is costly. In this paper we examine the agency problem that arises in these circumstances. The agent (employee) is assumed to maximize expected utility, which depends on wages and effort (a "bad"). We assume that output is a deterministic function of effort, that either effort or output can be accurately monitored, and that monitoring is costly. The firm's problem is to select a level of monitoring and a compensation package that will minimize the cost of obtaining a desired level of effort, given the cost of monitoring, the agent's preferences and the agent's opportunity cost of accepting employment with the firm (his labour supply constraint). The firm chooses a compensation scheme from the set in which compensation is contingent on effort if the agent is monitored, and is equal to a standard
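
A minimal sketch of the firm's problem as described, in notation supplied here for illustration only: choose a monitoring level m and a compensation package w to solve

    \min_{m, w} E[w] + c(m)   subject to   E[u(w, e)] \ge \bar{u}

where c(m) is the cost of monitoring, e is the desired level of effort, and \bar{u} is the utility the agent can obtain elsewhere (the labour supply constraint).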

79 citations



ReportDOI
TL;DR: In this article, the relationship between the demand for international reserves and exchange rate adjustments is empirically investigated for a group of LDCs, and it is shown that countries that have maintained a fixed exchange rate for a long period of time have a different demand function from countries that have occasionally used exchange rate adjustments for correcting payments imbalances.
Abstract: In this paper the relationship between the demand for international reserves and exchange rate adjustments is empirically investigated for a group of LDCs. It is shown that countries that have maintained a fixed exchange rate for a long period of time have a different demand function from countries that have occasionally used exchange rate adjustments for correcting payments imbalances. The dynamics of the adjustment for both groups of countries are also analyzed. The results show that while both groups tend to eliminate reserve disequilibria fast, those countries that have maintained a fixed rate tend to do so more slowly than countries that have occasionally devalued their currency. It is also shown that in the year prior to a devaluation, international reserves have been, on average, 30% below their short-run desired level. These results are important since they indicate that not all LDCs should be aggregated for prediction purposes. The results also have implications for the analysis of the adequacy of international reserves in less developed countries.
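
The adjustment dynamics described are typically captured by a partial-adjustment specification; as an illustrative sketch (notation mine),

    \Delta \ln R_t = \theta (\ln R_t^* - \ln R_{t-1}),   0 < \theta \le 1

where R_t^* is the desired level of reserves and the speed-of-adjustment coefficient \theta is estimated separately for the fixed-rate and the occasionally devaluing groups; the finding reported above is that \theta is high for both groups but lower for the fixed-rate group.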

69 citations


Journal ArticleDOI
TL;DR: The Arrow-Debreu extension is based on the idea that consumer preferences may be defined on state-contingent commodities; as a special but important case, these preferences may be represented by von Neumann-Morgenstern utilities, as mentioned in this paper.
Abstract: In recent years a number of papers have questioned the extension of the main theorems of welfare economics to a world of uncertainty stemming from the work of Arrow (1953) and Debreu (1959). The Arrow-Debreu extension is based on the idea that consumer preferences may be defined on state-contingent commodities; as a special but important case, these preferences may be represented by von Neumann-Morgenstern utilities. With this interpretation of the commodity space the main theorems of welfare economics remain valid in a world of uncertainty. A competitive equilibrium is a Pareto optimum, a Pareto optimum can be sustained as a competitive equilibrium, and the diagnosis of the causes of market failure remains as before. There are several reasons why one may feel uneasy about this construction. A good deal of attention has been paid to the assumption of universality of markets necessary to establish the connection between equilibrium and optimum. On a deeper level perhaps are the doubts that one may have about the criterion of welfare or efficiency. First, should social welfare be a function of individuals' expected utilities, or should the appropriate concept be that of expected welfare? Second, if probability beliefs are not identical across individuals, whose probabilities, if any, should be used for welfare evaluation? In the next section we review some concepts and results in the recent literature on ex post welfare economics, and in Section II it is argued that this literature can provide the foundations for a more satisfactory theory of merit goods, a concept originally introduced by Musgrave (1959). This idea is applied in Section III to a model with a complete set of markets in the Arrow-Debreu sense, and in Sections IV and V to an incomplete markets model, which provides the most natural framework for the study of merit goods. Various remarks on further aspects of the problem are collected in the concluding section of the paper.
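
The two welfare criteria at issue can be written compactly; as an illustrative sketch in notation of my own, with states s, individual beliefs \pi_{is} and utilities u_i:

    ex ante:  W = F(\sum_s \pi_{1s} u_1(x_{1s}), ..., \sum_s \pi_{ns} u_n(x_{ns}))
    ex post:  W = \sum_s \pi_s F(u_1(x_{1s}), ..., u_n(x_{ns}))

The first aggregates individuals' expected utilities; the second takes the expectation of realized social welfare. The two diverge precisely when F is nonlinear or when probability beliefs differ across individuals, which is the second question raised above.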

66 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an alternative approach to estimating payroll tax incidence on employees that substantially improves upon the approach of Brittain: it involves estimating an aggregate demand curve for labour in the United Kingdom and analysing how the curve shifts in response to changes in the payroll tax.
Abstract: The payroll tax should be of interest to empirical researchers for a number of reasons. It is a large and growing source of government revenues in many countries such as the United States, Canada and the United Kingdom. It is also a major source of funds out of which important social security benefits such as unemployment insurance and public pensions are paid. However, while relatively easy to collect and administer, the payroll tax may be distributionally regressive and contradict the equal sacrifice principle based on ability to pay. It is thus of some concern for reasons of equity and public policy to be able to evaluate the actual magnitude of these distributional effects. In addition, there is considerable current interest in alternative social security schemes (see for example Okner, 1975, and Munnell, 1977) arising from the present demographic trends, which, with present financial arrangements, are likely to result in future funding deficits, so that revisions will have to be undertaken. In particular, a number of studies have examined the liability structure and distributional incidence of the payroll tax (Pechman and Okner, 1974; Pechman, 1977; and Johnston and Wixon, 1978). In order to evaluate the distributional implications of the various options available, a better understanding is needed, particularly of the distributional characteristics of the payroll tax. The specific problem this paper addresses is the distributional incidence of the payroll tax and whether the employer portion of the tax is entirely shifted backwards on to labour. That is, do employees bear the full burden of the employer portion of the payroll tax in the form of wage cuts or forgone earnings? The empirical literature in the area of payroll tax shifting is so far fairly limited and is characterized by markedly differing estimates of the incidence. Works by Deran (1966, 1967), Weitenburg (1969), Brittain (1972), Vroman (1974a), Leuthold (1975), and Hamermesh (1979, 1980) employed different data bases and empirical procedures, and obtained widely differing results. In particular, the work of Brittain (1972) has become the standard reference and the basis for a good deal of empirical analysis. The present paper puts forward an alternative approach to estimating payroll tax incidence on employees that substantially improves upon the approach of Brittain. Specifically, it involves estimating an aggregate demand curve for labour in the United Kingdom and analysing how the curve shifts in response to changes in the payroll tax. The result that, in a world of competitive markets, the full incidence of a universal payroll tax is shifted on to an inelastic supply of labour is a standard result in microeconomics (Musgrave and Musgrave, 1976, pp. 404-406). The
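
The "standard result" referred to can be stated with the usual incidence formula; as a sketch (notation mine), if \varepsilon_D and \varepsilon_S denote the elasticities of labour demand and supply, the share of a payroll tax borne by workers is approximately

    \varepsilon_D / (\varepsilon_D - \varepsilon_S)

which tends to one as labour supply becomes perfectly inelastic (\varepsilon_S \to 0), so that the full burden is shifted back onto labour in the form of lower wages.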

Journal ArticleDOI
TL;DR: A Boston University student newspaper said it had uncovered a university policy of selling admissions to medical school and to law school to wealthy applicants, as mentioned in this paper; the university denied that such a policy existed and said that no one had been admitted in consideration of the payment of money.
Abstract: A Boston University student newspaper said today it had uncovered a university policy of selling admissions to medical school and to law school to wealthy applicants. Mr Silber called the newspaper's charges deliberate lies and denied that such a policy existed. 'No one has been admitted to Boston University in consideration of the payment of money,' he said. 'No one has ever bought a place in one of our schools.' [New York Times, 14 March 1978]

Journal ArticleDOI
TL;DR: In this article, the authors trace the transatlantic flow of a technology from Britain, the fermenter of the Industrial Revolution and the world's most advanced country, to the post-colonial United States, still an isolated agrarian-mercantile society.
Abstract: Winner of the 1980 Edelstein Prize given by the Society for the History of Technology (SHOT) and winner of the John H. Dunning Prize in U.S. History sponsored by the American Historical Association. The social impact of a technical innovation--however great its intrinsic significance or originality--is entirely dependent on the extent and rate of its diffusion into practical life. The study of this diffusion--technology transfer--is a recent historical endeavor, but one that has already brought new understanding to past transformations of society and has important implications for future developments, especially in countries now emerging into the industrialized phase. Jeremy's book is central in this line of inquiry. It traces the transatlantic flow of a technology--textile manufacture, one of the first of the mechanized industries--from Britain, the fermenter of the Industrial Revolution and the world's most advanced country, to the post-colonial United States, still an isolated agrarian-mercantile society. But the author shows that by the early 19th century, this flow of technology was already moving in both directions across the Atlantic. The book examines the transfer of four specific technologies: cotton spinning, powerloom weaving, calico printing, and woollen manufacturing. These technologies all made successful transatlantic crossings in spite of the institutional and technical barriers to transfer that Jeremy describes, including industrial secretiveness, the English patent search system, the paucity of technical publications, the prohibitory laws, artisan resistance to technical change, variations in local technical traditions, and changes in the pace and direction of invention. "Transatlantic Industrial Revolution" is firmly based on modern economic theory. It is well illustrated with halftones and line drawings, and its conclusions are supported by numerous primary sources, including British patents, American passenger (immigration) lists, customs records, and the manuscript version of the U.S. 1820 Census of Manufactures, which yielded new estimates of the extent of America's textile expansion.

Journal ArticleDOI
TL;DR: In this paper, the authors show that whether the investment decision is distorted under flow-of-funds corporate tax bases depends on how the financial structure of the firm is determined, and they also extend the analysis to incorporate personal taxes impinging on capital income and the effects of inflation.
Abstract: The literature on the flow-of-funds taxation of companies has concentrated on the real income-earning business activities of firms rather than the financial ones. Furthermore, it has virtually ignored the implications of the financial structure of the firm for the neutrality of flow-of-funds taxation. This paper analyses the taxation of both real and financial activities of the firm in a model of the firm in which the financial structure has been explicitly included. We shall be particularly concerned with the sorts of flow-of-funds tax systems suggested by the Meade Report, the so-called R and R+F bases. We will show that whether the investment decision is distorted under such corporate tax bases depends on how the financial structure of the firm is determined. The R base is neutral under some financial constraints, while the R+F base is neutral under others. We also extend the analysis to incorporate personal taxes impinging on capital income and the effects of inflation. Finally, we introduce financial assets held by the firm. We show that the R base can be generalized to what we call the R+A base. This base will be neutral under the same borrowing constraints for which the R base is neutral. In general, the R+F base as defined in the Meade Report will not capture all the profits arising from financial intermediation and may
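
For orientation, a summary of the Meade Report's standard base definitions (background knowledge, not quoted from this paper): the R base taxes the net cash flow from real transactions, roughly

    R = sales - purchases - wages - capital expenditure

while the R+F base adds the net cash flow from financial transactions other than transactions in the firm's own shares, such as net borrowing and net interest flows.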


Journal ArticleDOI
TL;DR: In this article, the authors examine aspects of labor migration in developing countries using the approach developed by J. R. Harris and M. P. Todaro; in particular, the author attempts to model the process of labor turnover between the urban unemployed and urban firms and between the rural sector and the urban unemployed.
Abstract: Aspects of labor migration in developing countries are examined using the approach developed by J. R. Harris and M. P. Todaro. In particular, the author attempts to model the process of labor turnover between the urban unemployed and urban firms, and between the rural sector and the urban unemployed. The model includes a dynamic system of wage rates and urban unemployment rates.
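
The Harris-Todaro approach referred to turns on an expected-wage equilibrium condition; as a standard sketch (notation mine),

    w_r = [E_u / (E_u + U_u)] w_u

where w_r is the rural wage, w_u the institutionally fixed urban wage, and E_u/(E_u + U_u) the probability of finding an urban job; migration continues until the rural wage equals the expected urban wage, and the turnover processes modelled here govern how that employment probability evolves.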

Journal ArticleDOI
TL;DR: In this article, a simple analysis of the movement of differentials in Britain in the last decade is presented, showing that, however finely the labour force is subdivided, equalization between groups has played a very small role in the overall equalization.
Abstract: Periodic incomes policies have by now become a feature of most Western economies. It has also become clear that there is a trade-off between the features of an incomes policy that make it initially acceptable and the features that make it ultimately effective. For example, most British incomes policies of the 1970s have provided higher permissible proportionate wage increases for low-wage than for high-wage workers. As we argue in Section II below, the primary advantage (and perhaps cause) of such policies is that they can more readily secure the ex ante support of a majority of workers for any specified overall increase in the wage bill or inflation target. If they are effective, however, such policies also result in the narrowing of wage differentials, and this results in new problems. Some economists consider such wage equalization the inevitable cost that an economy must bear for the sake of an effective incomes policy. Others believe the arbitrary narrowing of wage differentials creates industrial relations problems that ultimately cause the incomes policy to fail. In either case, it is important to know how much equalization has been caused by the actual incomes policies and how this compares with what would have been expected from strictly adhered-to policies. In consequence, the first (and major) part of this paper contains a simple analysis of the movement of differentials in Britain in the last decade. Most incomes policies have been explicitly equalizing, but how much equalization occurred, compared with the equalization that would have occurred if the policies had been explicitly adhered to? We show that the 1973 formula of £1 plus 4 per cent was associated with the full equalizing effect it implied, but the 1975 £6 a week was largely submerged in other changes and was associated with less than a third of the equalization implied by the formula. Indeed, such mild equalization as occurred between 1975 and 1977 may be explained partly by the lower inflation rate, which reduced that part of the inequality of instantaneous earnings that is caused by different groups of workers settling at different times of year. Against this, higher unemployment may have been tending to increase inequality at the same time. In 1974 threshold payments were associated with an effect of about one-half of their predicted level. The main force that offset the equalizing nature of the threshold formula and the £6 a week was, we believe, the pressure of employers bidding up the relative earnings of the more skilled workers. Having looked at the overall picture, we then examine more closely which groups have gained and lost from recent changes in relative wages, and what equalization has occurred within the different groups. The groups are defined by occupation and by collective agreement. Our general conclusion is that, however finely the labour force is subdivided, equalization between groups has played a very small role in the overall equalization and the main changes
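
To see why the 1973 formula was equalizing, a quick worked example with illustrative wage levels (mine, not the paper's): under £1 plus 4 per cent, a worker on £20 a week receives £1 + £0.80 = £1.80, a rise of 9 per cent, while a worker on £100 a week receives £1 + £4 = £5, a rise of only 5 per cent. Flat-rate components therefore compress proportionate differentials, and the 1975 £6 a week was flat-rate entirely.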

Journal ArticleDOI
TL;DR: A number of articles have addressed the problem of labour productivity in coalmining during the late nineteenth and early twentieth centuries, two of the most important being A. J. Taylor's (1961) examination of productivity and technological change in Britain and R. H. Walters' (1975) analysis of the same topic for South Wales as discussed by the authors.
Abstract: Naked to the waist, hot and grimy with labour, they squatted on their heels for a few minutes and talked, seeing each other dimly by the light of the safety lamps, while the black coal rose jutting around them, and the props of wood stood like little pillars in the low, black, very dark temple.... The day passed pleasantly enough. There was an ease, a go-as-you-please about the day underground, a delightful camaraderie of men shut off alone from the rest of the world, in a dangerous place, and a variety of labour, holing, loading, timbering, and a glamour of mystery and adventure in the atmosphere... [D. H. Lawrence (1955)] The above quote from D. H. Lawrence, while perhaps over-emphasizing "glamour" and "adventure", illustrates one important aspect of colliery work: the intensity of a collier's effort was primarily self-imposed. It was the work intensity, combined with other factors, that determined labour productivity in the coal mines. A number of articles have addressed the problem of labour productivity in coalmining during the late nineteenth and early twentieth centuries, two of the most important being A. J. Taylor's (1961) examination of productivity and technological change in Britain and R. H. Walters' (1975) analysis of the same topic for South Wales. In addition, there is a rich and growing literature on the broader problem of the existence of a "climacteric", a downward shift in the performance of the British economy in the late nineteenth century, often attributed to the "failure" of British entrepreneurs, as represented by their seeming unwillingness to adopt the best available techniques of production. The purpose of this paper is to specify and estimate an empirical model of labour productivity in coalmining, as well as to provide some suggestive evidence regarding the performance of entrepreneurs in this major industry. We believe our results provide added insight into the determination of labour productivity during this period, lending support to those who argue that the failure of British entrepreneurs has yet to be proved, while providing mixed evidence regarding conclusions drawn by economic historians.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the relationship between inflation uncertainty and the returns on multi-period bonds, testing the hypotheses using ex ante data and measures of inflation uncertainty based on these data.
Abstract: The new wave of research on the effect of inflation on interest rates revolves around the incorporation of inflation uncertainty into the analysis. In a recent paper, Liviatan and Levhari (1977) examine in a single-period model the market risk premium on nominal bonds awarded as compensation for inflation-related uncertainty, where this premium is determined by attitudes of investors towards risk and the risk of inflation. The hypothesis that nominal bonds carry an inflation uncertainty premium could be extended to multi-period bonds. This is also consistent with Hicks's (1946) liquidity premium, which was estimated by Kessel (1965) and McCulloch (1975), who found systematic premia on the average returns of multi-period bonds. This study too is concerned with the possible relationships between inflation uncertainties and the returns on multi-period bonds. We examine the hypotheses using ex ante data and measures of inflation uncertainty based on these data. The two sources of inflation uncertainty that may be associated with the returns on multi-period bonds are the uncertainty about the rate of inflation in the next period, and the uncertainty about expectations of inflation in future periods. For a bond with two periods to maturity, the holding period return is
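
The truncated final sentence refers to the holding-period return on a two-period bond; the standard log-linear identity (a reconstruction, not necessarily the paper's exact expression) is

    h_{t+1} = 2 R_{2,t} - R_{1,t+1}

where R_{2,t} is the yield on the two-period bond at time t and R_{1,t+1} is the one-period yield at which the bond, then having one period to maturity, is revalued.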

ReportDOI
TL;DR: In this article, a general equilibrium model of a small open economy is presented, where the sequence of adjustment and readjustment is modeled as two successive temporary equilibria, and the question of whether optimism is good is posed in terms of an explicit (ex post) welfare evaluation.
Abstract: Assume that an economy is in a state of Keynesian unemployment. Since production is demand-determined, there are bootstrap (multiple) equilibria. Then, the more optimistic agents are about the future, the higher will be their demand today and hence current production. In that limited sense optimism turns out to be unwarranted, which forces a downward adjustment. Is this unwarranted optimism still good? We analyze this question with the help of a general equilibrium model of a small open economy where the sequence of adjustment and readjustment is modeled as two successive temporary equilibria. The question whether optimism is good is posed in terms of an explicit (ex post) welfare evaluation. We find that if the future is Walrasian, the future multiplier is unity, whereas the present multiplier is larger than unity. Then optimism increases ex post welfare. If the future has Keynesian unemployment, optimism still increases ex post welfare, as long as the present multiplier is larger than the future one. A necessary and sufficient condition for this is presented.
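
A minimal sketch of the multiplier comparison (notation mine): in a demand-determined period with marginal propensity to consume c out of current income, the output multiplier is

    m = 1 / (1 - c) > 1

whereas in a Walrasian period output is supply-determined and the multiplier is one; the welfare result above turns on extra demand being multiplied more strongly in the present than in the future, m_present > m_future.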

Journal ArticleDOI
TL;DR: In this article, it was shown that the expected profits of the bidders fall as the quality of the information available to them increases, and that it is thus mainly the seller who gains when all participants try to improve their information.
Abstract: This paper analyses an auction game in which the players' payoffs are determined by the quality of the information available to the players. In particular the analysis is designed to give some insight into the players' incentives to obtain an informational advantage over their competitors. Our interest lies mainly in understanding the economic structure of the problem, and analysing the interactions that occur among the players. We have, therefore, limited ourselves to the analysis of a model with quite restrictive assumptions. The advantage of proceeding in this manner is that it enables us to obtain interesting explicit results with only a limited amount of mathematical computation. The disadvantage is, of course, that it is unclear to what extent the results can be generalized. In the model analysed here it turns out that: (a) Aggregate expected profits of the bidders fall as the quality of the information available to them increases. It is thus mainly the seller who gains when all participants try to improve their information. This seems to be a quite general result, and has been derived under considerably less restrictive assumptions in Case (1979), Reece (1978) and Rothkopf (1969), among others. (b) Even in the absence of collusion, it may turn out that no bidder ever places a bid greater than or equal to the true value of the tract. While it can easily be seen that this result depends strongly on the specific assumptions of the model we analyse, it does illustrate that it will be very difficult to prove that the players of an auction game are acting collusively. (c) Each participant's bidding strategy will be affected by the quality of the information available to his competitors. When this interaction is taken into account, it turns out that an increase in the quality of the information available to any one of them may lead to a fall in his expected profits. This is the main novel result of the paper.

Journal ArticleDOI
TL;DR: A number of authors, most prominently Clower (1965) and Leijonhufvud (1968), have interpreted Keynes's theory of unemployment as a disequilibrium model.
Abstract: A number of authors, most prominently Clower (1965) and Leijonhufvud (1968), have interpreted Keynes's theory of unemployment as a disequilibrium model. Nominal price and wage rigidity prevent the labour market from clearing. The actual level of employment is given by the minimum of the effective demand for labour and the effective supply. "General disequilibrium" models of income and employment determination have been developed by Barro and Grossman (1976) and Malinvaud (1977). The model has been extended to the open economy by Dixit (1978) and to a two-period, rational expectations context by Neary and Stiglitz (1979). The econometric implications of the disequilibrium approach have been developed by Goldfeld and Quandt (1972), Maddala and Nelson (1974) and Rosen and Quandt (1978), among others. In summary, the disequilibrium approach has provided a fruitful context for analysing macroeconomic behaviour. In a competitive Walrasian system the notional supply of labour is defined as the amount of labour a worker wishes to provide when he can buy and sell as much as he would like in all markets, subject only to his budget constraint and given prices and wages. The notional supply of labour, then, is given by l*, where (l*, c*) attains the indirect utility function
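
In symbols (a standard formulation supplied for clarity, not quoted from the truncated text): given prices p, wage w and non-labour income \pi, the notional pair (l*, c*) solves

    \max_{(l, c)} u(c, l)   subject to   p c = w l + \pi

and u(c*, l*) is the indirect utility the worker attains when he can trade as much as he wishes in every market.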

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of product selection in monopolistic competition and show that profit-maximizing decisions generally deviate from the social optimum in terms of quality assessments.
Abstract: The number and variety of products observed at any point of time in a given industry results from a complex selection mechanism where various economic forces interact. Some of these forces proceed from the demand behaviour of the market. For instance, the need for product variety should increase, the larger the degree of income or taste differences between consumers. Similarly, product variety may be more desirable from the consumer's viewpoint, the smaller the degree of substitutability between the quantity of a given product and its quality (a copious meal may not compensate for second-rate food). On the other hand, the opening of a new product line is generally accompanied by important overhead costs; these costs may lead to a restriction in the variety of goods that it would otherwise be desirable to produce. Sometimes also a firm in a given industry may be constrained by technical feasibility or institutional reasons to produce a single, or a restricted, set of products only, though in other industries multi-product firms are allowed to cover the whole spectrum of goods. The product selection mechanism arising in the two types of industries should differ since, in the first case, competition between product lines takes place between different firms, and, in the second, the same competition arises inside the firm. These reasons make the problem of product selection one of the most difficult subjects of monopolistic competition. In spite of this difficulty, recent work in the field has opened avenues for further research. Of particular interest is the study of the welfare implications of this selection mechanism, and its links with price or quality regulation policies. In this respect, contributions by Sheshinski (1976) and Spence (1975) have revealed that, when monopolists have latitude in determining the quality attribute of their product, profit-maximizing decisions generally deviate from the social optimum in terms of quality assessments. On the other hand, Stern (1974) and Meade (1974) have provided examples showing that the variety of products that should be produced according to welfare criteria does not always coincide with the combination of products that is profitable to produce in an unregulated market. It is with these contributions that the present paper is concerned. The difference between social optimum and profitability is imputed by Meade to the interaction between two major forces:

Journal ArticleDOI
TL;DR: In this paper, an empirical method of measuring technological change biases in many-factor production is presented, with an application to postwar Japanese agriculture; the factors that guided the evolution of biases in agriculture are then investigated.
Abstract: Technological change bias in agriculture has important effects on and is affected by other changes in an economy. Labour-saving technological change, for example, enables more farm labour to migrate to the non-farm sector (Kako, 1978). A land-using bias, on the other hand, may stimulate efficiency differentiation among farm groups and lead to a rise in farm land prices (Lee, 1980a), while a machinery-using bias accelerates the rate of investment in agricultural machinery. All of these affect the income distribution as well as the economic growth path of the economy. In spite of its importance, there are relatively few empirical studies of biases in technological change. The purpose of this paper is to present, first, an empirical method of measuring technological change biases in many-factor production with an application to postwar Japanese agriculture, and then to investigate the factors that guided the evolution of biases in agriculture. Sources of biases have been one of the critical concerns of growth economics. Relative factor prices have been asserted to be the prime motivator of biases. However, land size per farm, output price and innovation lags are also important factors affecting technological change biases. The fundamental method used here to test these hypotheses is to measure the biases in four regions among which economic variables have been different or have moved at different rates, and then to examine the relationship between the measured biases and the economic variables. Since the Japanese agriculture sector has undergone substantial technological changes in the rapid economic growth since the Second World War, it provides an ideal case study which is of considerable relevance to similar but less developed countries in South-east Asia. In the first section, we present a theory of measuring biases in many-factor production, using a production function approach. In Section II, the homogeneous translog production function is estimated with micro-farm data on rice production in postwar Japan. Section III analyses the characteristic structure of technological change in postwar Japanese agriculture, and then investigates the sources of the biases following the method presented. In the final section, we draw conclusions about the sources and implications of the measured biases.
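
For concreteness, the kind of specification described (a generic homogeneous translog with a time trend, in notation of my own rather than the paper's) is

    \ln Y = \alpha_0 + \sum_i \alpha_i \ln X_i + (1/2) \sum_i \sum_j \beta_{ij} \ln X_i \ln X_j + \gamma t

and technological change is classified as factor-i-using or factor-i-saving according to whether the estimated cost share S_i rises or falls over time at constant factor prices, i.e. according to the sign of \partial S_i / \partial t.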



Journal ArticleDOI
TL;DR: In this article, the authors present a rigorous solution to the Marshallian consumer surplus path-of-integration problem, showing that the family of expenditure function indices (which includes the Hicksian variations of consumer surplus) is equivalent to Marshallian consumer surplus under suitable restrictions on the admissible paths of integration.
Abstract: It is well known that Marshallian consumer surplus is generally dependent on the path of integration. Silberberg (1972) emphasized this dependency by demonstrating that Marshallian consumer surplus is unbounded above and below for arbitrary paths of integration. Despite the approximation results of Willig (1976), because of the path-dependency problem and the lack of a legitimate a priori restriction on the admissible paths of integration, one could maintain that Marshallian consumer surplus is potentially unreliable and generally invalid as an index of preference. The purpose of this paper is to elucidate a rigorous solution to the Marshallian consumer surplus path-of-integration problem. The family of expenditure function indices (which includes the Hicksian variations of consumer surplus) is examined and shown to be equivalent to Marshallian consumer surplus with certain restrictions on the admissible paths of integration; essentially, the implicit restriction guarantees the order-preserving property of these indices. The logical conclusion is that any a priori restriction on the admissible paths of integration that yields an order-preserving index is legitimate. In addition to the family of expenditure function indices, "monotonic variations" satisfy this principle (Zajac, 1979; Stahl, 1980).
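
The path-dependence at issue can be stated directly; in standard notation (mine), Marshallian consumer surplus for a move from prices p^0 to p^1 is the line integral

    CS = -\int_C \sum_i x_i(p, y) dp_i

along a path C from p^0 to p^1. Its value is independent of C only if \partial x_i / \partial p_j = \partial x_j / \partial p_i for all i, j; Marshallian (uncompensated) demands generally violate this symmetry, while Hicksian demands satisfy it by Slutsky symmetry, which is why the compensating and equivalent variations are path-independent.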

Journal ArticleDOI
TL;DR: The authors examined three judgmental forecasters working for City of London institutions who produce monthly balance-of-payments forecasts and compared them with advanced time series forecasting methods such as the Box-Jenkins ARIMA class of model.
Abstract: Judgmental forecasting is the most common approach adopted within organizations for producing forecasts. Casual evidence suggests that practitioners believe the various approaches adopted under this heading are cheap and flexible, incorporate subtleties that quantitative models cannot, and, most important, are more accurate than simple extrapolative models. Researchers associated with the area of forecasting agree only on the flexibility of the approach. Even the question of cost is disputed, with Mabert (1976), for example, showing that a formalized judgmental approach may have substantially higher costs than the various alternative extrapolative models he considered. Hogarth and Makridakis (1981) offer a good summary of the many sources of bias that can undermine judgmental forecasting performance. Such judgmental forecasts or expectations play an important theoretical role in economic arguments. However, the words of Cragg and Malkiel (1968) still bear repeating some 13 years after they wrote: "the extent of agreement of the significance of expectations is almost matched, however, by the paucity of data that can even be considered reasonable proxies for those forecasts". Most research has considered aggregate economic expectations. Study of disaggregated judgmental economic forecasts has been concentrated largely in the accounting literature, with the thrust of the research directed towards assessing the comparative efficiency of stock market analysts, management forecasts and extrapolative models in forecasting future earnings per share. Typical papers are Green and Segall (1967), Brown and Rozeff (1978) and Ruland (1978). However, a recent working paper from the National Bureau of Economic Research (Zarnowitz, 1982) considers the disaggregated macroeconomic forecasts that were produced for the American Statistical Association Business Outlook Surveys. Given the conflicting results reported in these and other papers, it is difficult to argue that one particular forecasting model or type of institution consistently outperforms its competitors. In areas outside accounting and economics, such as those discussed in Dawes (1977) and Armstrong (1978), the evidence seems to suggest that judgmental methods are worse than their quantitative alternatives. In this paper we examine three judgmental forecasters working for City of London institutions who produce monthly balance of payments forecasts. The market is competitive in that the degree of accuracy of such forecasts is seen as a determinant of exchange dealings and commission income. We attempt to determine the comparative effectiveness of these judgmental forecasters, both between themselves and compared with advanced time series forecasting methods such as the Box-Jenkins ARIMA class of model. As we have mentioned, the issue of whether such judgmental forecasts can improve
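
As an illustration of the kind of comparison the paper undertakes (a minimal sketch, not the authors' procedure: the series and the "judgmental" forecasts below are synthetic stand-ins, and the ARIMA order is an arbitrary choice):

```python
# Compare stand-in judgmental forecasts against a Box-Jenkins ARIMA model
# on one-step-ahead accuracy. Synthetic data replace the monthly
# balance-of-payments series and the analysts' forecasts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 1, 120))           # synthetic monthly series
judgmental = series[12:] + rng.normal(0, 1.5, 108)  # stand-in analyst forecasts

arima_preds = []
for t in range(12, 120):            # expanding-window one-step-ahead forecasts
    model = ARIMA(series[:t], order=(1, 1, 1))
    res = model.fit()
    arima_preds.append(res.forecast(steps=1)[0])

mae_arima = np.mean(np.abs(series[12:] - np.array(arima_preds)))
mae_judge = np.mean(np.abs(series[12:] - judgmental))
print(f"MAE ARIMA: {mae_arima:.3f}  MAE judgmental: {mae_judge:.3f}")
```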

Journal ArticleDOI
TL;DR: In this paper, the authors focus on the variability in the real wage caused by inflation variability, and show that mean output may increase or decrease with inflation variability while efficiency in production nonetheless declines.
Abstract: In recent years economists have paid considerable attention to the relation between the rate of inflation and its temporal variance. The consensus emerging from this discussion is that there exists a strong positive relation between the inflation rate and its temporal variance. (See, for example, Okun, 1971; Gordon, 1971; Jaffe and Kleiman, 1977; and Foster, 1978.) Clearly, the increased variability of the inflation rate is likely to involve social costs that concern inefficiency in consumption and in production. Thus, in his Nobel lecture, Friedman (1977) says: "The growing volatility of inflation and the growing departure of relative prices from the values that market forces alone would set, combine to render the economic system less efficient" (p. 470). In general, the interpretation given to production inefficiency resulting from increased variability in the inflation rate is that of a reduction in output to a level lower than that occurring under price stability. For example, Levi and Makin interpret Friedman's inefficiency argument thus: "From the two components of Friedman's argument we have that high inflation accompanied by ... high variability of inflation will mean a lower output" (Levi and Makin, 1980, p. 1023). It is our purpose in this paper to elaborate on the concept of the production inefficiency caused by increased inflation variability (IV), where we focus on the variability in the real wage caused by inflation variability. We suggest a concept of inefficiency of production that relates average changes in the use of labour to average changes in output. Specifically, it is shown that IV alters output and/or labour employment in a way that is inferior to the relation between output and employment under stable prices. That, we suggest, is the crux of production inefficiency resulting from IV. This production inefficiency will emerge as a result of IV independently of whether output rises or falls as a result of IV. Indeed, our analysis will illustrate that mean output may increase or decrease, so that, by using output alone as an index of efficiency, we may obtain the unlikely result that IV raises production efficiency. One result we derive en route is that, under plausible assumptions about the production function, labour employment may, on average, rise with IV. Notwithstanding this, we show that efficiency in production declines. This result may be used as an interpretation of the inefficiency concept as used by Friedman. Thus, Friedman says: "These developments clearly lower economic efficiency. It is less clear what their effect is on recorded unemployment..." (1977, p. 466). We proceed by considering the basic model and showing the ambiguity of the effect of inflation variability on mean output in Section I. In Section II, we show that IV introduces a production inefficiency defined upon the
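
The ambiguity about mean output can be seen in two lines (an illustrative sketch, notation mine): if employment responds to the realized real wage, L = L(w/p), and output is Y = f(L), then with the real wage made random by inflation variability,

    E[f(L(w/p))] may exceed or fall short of f(L(E[w/p]))

depending on the curvature of the composite function f(L(.)); Jensen's inequality alone cannot sign the effect, which is why mean output is a poor index of the efficiency consequences of inflation variability.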