
Showing papers in "Econometrica in 1976"


Journal Article•DOI•
TL;DR: In this paper, the authors proposed a new measure of poverty, which should avoid some of the shortcomings of the measures currently in use, and used an axiomatic approach to derive the measure.
Abstract: The primary aim of this paper is to propose a new measure of poverty, which should avoid some of the shortcomings of the measures currently in use. An axiomatic approach is used to derive the measure. The conception of welfare in the axiom set is ordinal. The information requirement for the new measure is quite limited, permitting practical use.
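The measure proposed in this paper is commonly written as S = H[I + (1 − I)G_p], where H is the headcount ratio, I the income-gap ratio among the poor, and G_p the Gini coefficient of incomes among the poor. The sketch below is an illustrative implementation under that reading, not code from the paper; the example incomes and poverty line are invented:

```python
import numpy as np

def gini(x):
    """Gini coefficient via the sorted mean-difference formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * n * x.mean())

def sen_index(incomes, z):
    """Sen poverty index S = H * (I + (1 - I) * Gp): H is the headcount
    ratio, I the average income-gap ratio among the poor, and Gp the
    Gini coefficient of incomes among the poor."""
    incomes = np.asarray(incomes, dtype=float)
    poor = incomes[incomes < z]
    if len(poor) == 0:
        return 0.0
    H = len(poor) / len(incomes)
    I = np.mean((z - poor) / z)
    Gp = gini(poor)
    return H * (I + (1 - I) * Gp)

# Hypothetical income vector with poverty line z = 400.
print(sen_index([100, 200, 300, 800, 1500], z=400))
```

Note how the index rises both with the depth of poverty (through I) and with inequality among the poor (through G_p), which is what the ordinal axioms are designed to capture.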

2,678 citations


Journal Article•DOI•
TL;DR: In this article, a theory of financial markets based on a two-parameter portfolio model is shown to imply stochastic dependence between transaction volume and the change in the logarithm of security price from one transaction to the next.
Abstract: A theory of financial markets based on a two-parameter portfolio model is shown to imply stochastic dependence between transaction volume and the change in the logarithm of security price from one transaction to the next. The change in the logarithm of price can therefore be viewed as following a mixture of distributions, with transaction volume as the mixing variable. For common stocks these distributions (of which the distribution of Δ log p is a mixture) appear to have a pronounced excess of frequency near the mean and a deficiency of outliers, relative to the normal. These findings are consistent with the hypothesis that stock price changes over fixed intervals of time follow mixtures of finite-variance distributions.
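A quick way to see why a variance mixture of normals looks "fat-tailed and peaked" relative to a single normal is to simulate it. The sketch below is a stylized illustration, not the paper's model: volume is drawn from an assumed lognormal distribution, and the log-price change is normal with variance proportional to volume.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed mixing variable: per-transaction volume, heavily right-skewed.
volume = rng.lognormal(mean=0.0, sigma=1.0, size=n)

# Conditional on volume, the log-price change is normal with variance
# proportional to volume; unconditionally it is a variance mixture.
dlogp = rng.normal(0.0, np.sqrt(volume))

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

print(excess_kurtosis(dlogp))  # clearly positive: leptokurtic mixture
```

Even though every conditional distribution has finite variance, the unconditional mixture shows the excess kurtosis long observed in stock returns, which is the point of the finite-variance-mixture hypothesis.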

1,013 citations



Book Chapter•DOI•
TL;DR: In this article, the authors derived a distribution that is a generalization of the Pareto distribution and the Weibull distribution used in analyses of equipment failures, and the distribution fits actual data remarkably well compared with the Pareto and the lognormal.
Abstract: The paper derives a function that describes the size distribution of incomes. The two functions most often used are the Pareto and the lognormal. The Pareto function fits the data fairly well towards the higher levels but the fit is poor towards the low income levels. The lognormal fits the lower income levels better but its fit towards the upper end is far from satisfactory. There have been other distributions suggested by Champernowne, Rutherford, and others, but even these do not result in any considerable improvement. The present paper derives a distribution that is a generalization of the Pareto distribution and the Weibull distribution used in analyses of equipment failures. The distribution fits actual data remarkably well compared with the Pareto and the lognormal.
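The distribution derived here is now usually called the Singh–Maddala (or Burr XII) distribution. In one common parameterization (parameterizations vary across sources), its CDF is F(x) = 1 − [1 + (x/b)^a]^(−q), and its upper tail behaves like a Pareto tail (x/b)^(−aq), which is why it inherits the Pareto's good fit at high incomes. A small check under that assumed form:

```python
def sm_cdf(x, a, b, q):
    """Singh-Maddala (Burr XII) CDF: F(x) = 1 - [1 + (x/b)^a]^(-q).
    Parameterization assumed; conventions differ across sources."""
    return 1.0 - (1.0 + (x / b) ** a) ** (-q)

# For large x the survival function is approximately Pareto:
# 1 - F(x) ~ (x/b)^(-a*q).
a, b, q = 2.0, 10.0, 1.5   # illustrative parameter values
x = 1e4
tail = 1.0 - sm_cdf(x, a, b, q)
pareto_tail = (x / b) ** (-a * q)
print(tail, pareto_tail)  # nearly identical far in the tail
```

At low incomes the extra parameter lets the density bend away from the pure power law, which is where the Pareto fit breaks down.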

467 citations


Journal Article•DOI•
TL;DR: In this paper, it is shown that if interpersonal comparisons are made in a certain way, one can construct a social welfare ordering by a method which satisfies suitably modified forms of Arrow's 1963 conditions, together with a strong version of an equity axiom due to Sen.
Abstract: An Arrow social welfare function was designed not to incorporate any interpersonal comparisons. But some notions of equity rest on interpersonal comparisons. It is shown that a generalized social welfare function, incorporating interpersonal comparisons, can satisfy modifications of the Arrow conditions, and also a strong version of an equity axiom due to Sen. One such generalized social welfare function is the lexicographic form of Rawls' difference principle. ARROW (1) investigated the problem of how to amalgamate the personal welfare orderings of the members of a society into a social welfare ordering. His approach was deliberately designed to avoid making any kind of interpersonal comparison. He was then able to show that such an approach must fail as long as one insists on certain other apparently appropriate conditions. It would therefore seem that an obvious way around Arrow's impossibility theorem is to make interpersonal comparisons and to use them in the construction of a social ordering. Moreover, some considerations of equity which many people would think relevant for making social choices are specifically excluded by Arrow's approach. This paper shows how, if interpersonal comparisons are made in a certain way, one can construct a social welfare ordering by a method which satisfies suitably modified forms of Arrow's 1963 conditions. Moreover, as is just as well, given that the interpersonal comparisons are deliberately based on a notion of equity, it is also possible to satisfy an extra condition, which is a kind of equity axiom. The lexicographic extension of Rawls' difference principle, or maximin rule, satisfies all these conditions. In addition, it is the only rule or principle which satisfies a condition which underlies Suppes' grading principle, together with these conditions. Section 2 presents preliminary definitions and notation, and shows how some considerations of equity are excluded by Arrow's approach to social choice.
Section 3 shows how these considerations of equity may be represented by ordinal interpersonal comparisons of the kind discussed in Sen (6), how they are related to an equity axiom due to Sen (7), and how Sen's equity axiom may be generalized. Section 4 defines generalized social welfare functions (GSWF's) and shows how Arrow's conditions can be modified to apply to GSWF's.
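The lexicographic extension of the maximin rule is simple to state operationally: sort each utility profile in ascending order and compare the sorted vectors lexicographically, so the worst-off position is decisive, ties passing to the second worst-off, and so on. A minimal sketch of that comparison (an illustration of leximin generally, not of the paper's formal apparatus):

```python
def leximin_prefers(u, v):
    """True if utility profile u is strictly preferred to v under the
    lexicographic maximin (leximin) rule: compare worst-off positions
    first, then second worst-off, and so on; profiles have equal length."""
    su, sv = sorted(u), sorted(v)
    for a, b in zip(su, sv):
        if a != b:
            return a > b
    return False  # identical sorted profiles are ranked indifferent

print(leximin_prefers([3, 1, 5], [2, 2, 9]))   # False: worst-off gets 1 < 2
print(leximin_prefers([2, 4, 9], [2, 3, 50]))  # True: tie at 2, then 4 > 3
```

The second example shows why this is an ordering and not just maximin: the tie at the bottom is broken one position up, so large gains to the best-off (9 vs. 50) never override the comparison.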

429 citations






Journal Article•DOI•
TL;DR: In this paper, the authors present some econometric evidence on the effects of taxes on married women, a group of growing importance in the American labor force, and present a testable model of labor supply which permits statistical estimation of a "coefficient of tax perception." Unlike previous models of labor supply, it allows for the possibility that the wage may depend on the number of hours worked.
Abstract: PAYROLL AND PROGRESSIVE INCOME taxes play an enormous role in the American fiscal system. It is therefore of some importance to know the extent to which they influence work incentives. The purpose of this study is to present some econometric evidence on the effects of taxes on married women, a group of growing importance in the American labor force. A testable model of labor supply is developed which permits statistical estimation of a "coefficient of tax perception." Unlike previous models of labor supply, it allows for the possibility that the wage may depend on the number of hours worked. Contrary to much of the literature, the results of this paper strongly suggest that marginal tax rates do have an important impact on labor force behavior. This section reviews briefly the past thought on this problem. Section 2 develops a model to explain work decisions when an individual faces a whole set of wage-hour combinations, rather than a given wage independent of the number of hours he works. In Section 3 this model is modified to permit an explicit test of whether or not taxes affect individuals' labor supply decisions. Estimation problems are discussed at length, and the empirical results are presented. A concluding section contains a summary and suggestions for future research.

232 citations


Book Chapter•DOI•
TL;DR: In this paper, a new coordinate system for the Lorenz curve is introduced, with particular attention to a special case of wide empirical validity, and the well-known inequality measures are obtained as functions of the estimated parameters.
Abstract: The Lorenz curve is widely used to represent and analyze the size distribution of income and wealth. The purpose of this paper is to introduce a new coordinate system for the Lorenz curve, with particular attention to a special case of wide empirical validity. Four alternative methods have been used to estimate the proposed Lorenz curve from the grouped observations. The well-known inequality measures are obtained as functions of the estimated parameters of the Lorenz curve. The procedure for estimating the asymptotic standard errors of the inequality measures is also provided. In addition, the frequency distribution is derived from the equation of the Lorenz curve. A new representation of the Lorenz curve is introduced and related to a number of conventional measures of income inequality. The report describes a number of estimation methods and reports some empirical results based on data from the Australian Survey of Consumer Expenditure and Finances.
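For orientation, the basic object being parameterized is the Lorenz curve built from grouped shares, with the Gini coefficient as twice the area between the curve and the diagonal. The sketch below is the standard trapezoidal calculation from grouped data, not the paper's parametric coordinate system; the five-group shares are invented:

```python
import numpy as np

def gini_from_grouped(pop_shares, income_shares):
    """Gini coefficient from grouped data via the trapezoidal rule on
    the Lorenz curve (a standard approximation, not this paper's
    parametric fit)."""
    p = np.concatenate(([0.0], np.cumsum(pop_shares)))
    L = np.concatenate(([0.0], np.cumsum(income_shares)))
    # Area under the Lorenz curve by trapezoids; G = 1 - 2 * area.
    area = np.sum((p[1:] - p[:-1]) * (L[1:] + L[:-1]) / 2.0)
    return 1.0 - 2.0 * area

# Five equal population groups with rising income shares (hypothetical).
print(gini_from_grouped([0.2] * 5, [0.05, 0.10, 0.15, 0.25, 0.45]))
```

Parametric methods like the one in this paper exist precisely because this piecewise-linear approximation understates inequality within groups.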

Journal Article•DOI•
TL;DR: In this article, the authors present new estimates of the price and income elasticities of charitable giving, which are then used with the United States Treasury Tax File to simulate the effects of several possible alternatives to the current tax treatment of charitable contributions.
Abstract: Charitable contributions are an important source of basic finance for a wide variety of private nonprofit organizations that perform quasi-public functions. The tax treatment of charitable contributions substantially influences the volume and distribution of these gifts. The current study presents new estimates of the price and income elasticities of charitable giving. The parameter estimates are then used with the United States Treasury Tax File to simulate the effects of several possible alternatives to the current tax treatment of charitable giving. INDIVIDUAL CHARITABLE CONTRIBUTIONS are an important source of basic finance for a wide variety of private nonprofit organizations. Higher education, research, health care, the visual and performing arts, welfare services, and community and religious activities rely heavily on the voluntary institution. In 1970, American families contributed more than $17 billion for their support. The volume and distribution of charitable gifts is influenced by the personal income tax treatment of charitable contributions. There are today a number of widely discussed proposals for changing these rules. The appropriate tax treatment of such gifts involves a complex series of economic issues. Critical to a resolution of these issues is an understanding of the likely quantitative effects of alternative tax rules: the effects on the total volume of charitable gifts and its distribution among the different types of donees; the effects on the distribution of tax burdens


Journal Article•DOI•
TL;DR: In this article, an experimental study of expectation formation and revision in a time series context was conducted, and it was shown that the speed of adjustment seems to fall in turning point periods.
Abstract: This paper reports on an experimental study of expectation formation and revision in a time series context. In an adaptive expectations framework, it is shown that the speed of adjustment seems to fall in turning point periods. Expectations are considered as probability density functions, and a scoring system is devised and employed that gives subjects an incentive to report a measure of the dispersion of these functions. This measure, which is inversely related to the confidence with which expectations are held, seems to be inversely related to past forecasting performance. THIS PAPER REPORTS an empirical exploration of the way individuals form and hold expectations about future values of time series variables. In the application of economic models in which expectations about the future play a major role in determining behavior, these expectations are rarely directly observable, and the econometrician is generally forced to assume that a technical rule generates expectations as a simple function only of past observations. One way to see what sort of technical rules make sense in such applications might be to attempt to use this indirect approach to discriminate among possible functional forms. Usually, however, this is computationally burdensome and not terribly revealing. Another approach, currently receiving attention, involves direct analysis of real-world expectations data. A third approach, and the one followed here, is to create and analyze an experimental situation in which the rule followed must be technical because no information other than the past history of the time series in question is available. The main reason for the attractiveness of the experimental approach here, however, lies in the two aspects of expectation formation with which this study is principally concerned. The first of these concerns the influence of turning points in a time series context. The basic hypothesis is due to F. M. Fisher [9, p. 48].
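The adaptive expectations framework referred to here is the standard updating rule E_t = E_{t-1} + λ(y_t − E_{t-1}), where λ is the speed of adjustment; the paper's finding is that λ appears to fall around turning points. A minimal sketch of the baseline rule (illustrative series and parameters, not the experiment's data):

```python
def adaptive_expectations(series, lam, e0):
    """Adaptive expectations: E_t = E_{t-1} + lam * (y_t - E_{t-1}).
    Each period the forecast is revised by a fraction lam of the
    latest forecast error; lam is the speed of adjustment."""
    e = e0
    out = []
    for y in series:
        e = e + lam * (y - e)
        out.append(e)
    return out

# A level shift at t = 4: the forecast closes half the gap each period.
print(adaptive_expectations([10, 10, 10, 20, 20], lam=0.5, e0=10))
# [10.0, 10.0, 10.0, 15.0, 17.5]
```

A lower λ at turning points means exactly this kind of gap closes more slowly when the series changes direction.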

Journal Article•DOI•
TL;DR: In this article, the authors establish the relationship between the weak axiom of revealed preference and the negative semidefiniteness of the matrix of substitution terms (NSD), showing that the strong axiom of revealed preference is equivalent to NSD together with symmetry of the substitution matrix.
Abstract: In this paper we provide a statement of the relationship between the weak axiom of revealed preference (WA) and the negative semidefiniteness of the matrix of substitution terms (NSD). As a corollary we determine the relation between WA and the strong axiom of revealed preference (SA). The latter is equivalent to NSD and the symmetry of the matrix of substitution terms. The former, WA, implies NSD but is not implied by NSD. Also, WA is implied by the condition that the matrix of substitution terms is negative definite (ND), but it does not imply ND. Application of these results yields an infinity of demand functions which satisfy WA but not SA.
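The matrix in question is the Slutsky substitution matrix, s_ij = ∂x_i/∂p_j + x_j ∂x_i/∂m. A concrete way to see NSD (and symmetry, hence SA) in a textbook case is to compute the matrix numerically for Cobb–Douglas demand; this is an illustration of the objects the paper discusses, not of its counterexamples:

```python
import numpy as np

def demand(p, m, alpha):
    """Cobb-Douglas demand: x_i = alpha_i * m / p_i (alphas sum to 1)."""
    return alpha * m / p

def slutsky(p, m, alpha, h=1e-6):
    """Slutsky matrix s_ij = dx_i/dp_j + x_j * dx_i/dm by forward
    finite differences."""
    x = demand(p, m, alpha)
    n = len(p)
    S = np.empty((n, n))
    dx_dm = (demand(p, m + h, alpha) - x) / h
    for j in range(n):
        dp = p.copy()
        dp[j] += h
        S[:, j] = (demand(dp, m, alpha) - x) / h + x[j] * dx_dm
    return S

p = np.array([1.0, 2.0, 4.0])
alpha = np.array([0.5, 0.3, 0.2])
S = slutsky(p, 100.0, alpha)
eig = np.linalg.eigvalsh((S + S.T) / 2)
print(eig.max() <= 1e-3)  # True up to finite-difference error: NSD
```

Here the matrix is symmetric and NSD (with the usual zero eigenvalue in the direction of prices), so both WA and SA hold; the paper's point is that NSD without symmetry leaves room for demands satisfying WA but violating SA.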




Book Chapter•DOI•
TL;DR: Marking the centenary of the first edition of Leon Walras's Elements d'Economie Politique Pure, this lecture develops the pure theory of labor-managed and participatory economies within the Walrasian general equilibrium framework.
Abstract: ONE HUNDRED YEARS AGO, the first edition of the Elements d'Economie Politique Pure by Leon Walras was half-way through printing. This anniversary provides special justification for the presentation of a Walras lecture at our congress. It also places a special burden on the author of the lecture to rationalize the choice of his pet subject through appropriate references to the life and works of Walras. I will in due course provide such rationalization for my topic, which is the pure theory of labor management and participatory economies. This topic currently arouses a great deal of interest, at various levels. For some, labor management, or self-management, is a global project of political, social, and economic organization. For others, it is a form of organization that meets a basic human aspiration and should be fostered wherever possible, through modest as well as ambitious projects. For our purpose here, a labor-managed economy is an economy where production is carried out in firms organized by workers who get together and form collectives or partnerships. These firms hire nonlabor inputs, including capital, and sell outputs, under the assumed objective of maximizing the welfare of the members, for which a simple proxy is sometimes found in the return (value added) per worker. The capital can be either publicly or privately owned. To permit easier comparison, I will base this presentation on private ownership. Such economies have been studied by Vanek, whose General Theory of Labour-Managed Market Economies [24] extends comprehensively the seminal contributions of Ward [28] and Domar [5]; and by Meade, who presents a lucid, concise review of that work, as well as his own views on "The Theory of Labour-Managed Firms and Profit Sharing" [16]. I would like to report here on my attempts at studying labor-managed economies with the general equilibrium methodology.
The outcome of these attempts may serve as a yardstick to assess the usefulness of the approach introduced by Walras one hundred years ago. Although my presentation will be largely informal, it rests upon technical analysis that will be made available separately. The presentation will consist of three parts. In the first part, I define a Walrasian equilibrium for a labor-managed market economy and contrast its properties with those of a competitive equilibrium. Next, I discuss the choice of working conditions under labor management and profit maximization. Finally, I take up some of the more intriguing problems raised by risk-bearing.
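The "value added per worker" objective mentioned above is the Ward–Domar dividend criterion: the membership chooses its size L to maximize (p·f(L) − rK)/L rather than profit. A small numeric sketch under invented parameter values and an assumed production function f(L) = A·√L (an illustration of the objective, not of the lecture's equilibrium analysis):

```python
import numpy as np

# Illustrative labor-managed firm: choose membership L to maximize
# value added per worker, (p * f(L) - r * K) / L, with f(L) = A * sqrt(L).
# All parameter values are hypothetical.
p, A, r, K = 2.0, 10.0, 0.5, 100.0

L = np.linspace(1.0, 400.0, 100_000)
dividend = (p * A * np.sqrt(L) - r * K) / L
L_star = L[np.argmax(dividend)]

# First-order condition gives L* = (2 r K / (p A))^2 = 25 here; note the
# dividend-maximizing membership FALLS as the output price p rises,
# Ward's well-known "perverse" supply response.
print(L_star, dividend.max())
```

Contrasting this objective with profit maximization is exactly what makes the Walrasian equilibrium of a labor-managed economy differ from a competitive equilibrium, the comparison taken up in the first part of the lecture.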




Journal Article•DOI•
TL;DR: In this paper, the authors adapted an "old" technique of numerical analysis, Hermite interpolation, to the problem of estimating the Gini index and showed that it usually works in theory and in practice.
Abstract: ECONOMISTS OFTEN SUMMARIZE the income distribution by the Lorenz curve and Gini index. A variety of parametric methods (e.g., [1 and 8]) have been developed to estimate these measures from the grouped income data governments provide (e.g., [3 and 12]). Previously, one of the authors developed a distribution-free approach [5] which yielded accurate bounds on the Gini index. While analogous bounds on the Lorenz curve can be obtained [5 and 10], the resulting curve is not smooth, so a method of interpolation is needed. The purpose of this paper is to adapt an "old" technique of numerical analysis, Hermite interpolation [7 and 13], to our problem and to show that it usually works in theory and in practice. Our paper was motivated by the work of Brittain [2], who also used numerical methods. Unfortunately, his procedure often resulted in estimates of the Gini index which were inconsistent with the above-mentioned bounds. Although the piecewise Hermite interpolation yielded accurate estimates of the Gini index, it is not always convex, as the Lorenz curve must be. Section 5 states conditions for the interpolated curve to be convex or at least increasing over an interval. While these conditions are usually satisfied by real data, a theoretical example illustrates how an error may arise.
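Hermite interpolation suits the Lorenz curve because grouped data pin down both the curve's value at each population share and its slope there (the boundary income divided by the mean). A minimal piecewise cubic Hermite sketch under that setup (generic numerical illustration, not the paper's exact procedure; the test curve L(p) = p² is invented):

```python
import numpy as np

def hermite_lorenz(p_knots, L_knots, slopes, p):
    """Piecewise cubic Hermite interpolation of a Lorenz curve: at each
    knot p_k we match both the ordinate L(p_k) and the slope
    L'(p_k) = x(p_k) / mu (boundary income over the mean)."""
    p = np.asarray(p, dtype=float)
    k = np.clip(np.searchsorted(p_knots, p, side="right") - 1,
                0, len(p_knots) - 2)
    h = p_knots[k + 1] - p_knots[k]
    t = (p - p_knots[k]) / h
    # Standard cubic Hermite basis functions on [0, 1].
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return (h00 * L_knots[k] + h * h10 * slopes[k]
            + h01 * L_knots[k + 1] + h * h11 * slopes[k + 1])

# Sanity check on a curve with known values and slopes: L(p) = p**2.
pk = np.array([0.0, 0.5, 1.0])
Lk = pk**2
sk = 2 * pk
print(hermite_lorenz(pk, Lk, sk, [0.3, 0.7]))  # matches p**2 at both points
```

Matching slopes keeps the fitted curve consistent with the mean incomes of each group, but, as the paper notes, nothing in the construction forces convexity between knots.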

Journal Article•DOI•
TL;DR: In this article, the authors consider the problem of reconciling the conflicting interests of firms and factors in a factor market and propose two alternative methods to reconcile the traditional theories of their behavior.
Abstract: Problems may arise, however, if a firm employs more than one factor of production. This is because the firm's productivity derives from its ability to organize the collective behavior of its factors. This collective behavior often requires that the factors perform their tasks either simultaneously or consecutively and that their workdays bear some appropriate relationship to one another. Yet only by coincidence would the workdays preferred by each factor conform to this relationship. The firm would therefore hardly be content to offer each factor its "going wage" and allow it to choose for itself its hours of work. Yet this is precisely the way firms are assumed to behave in traditional theories of factor markets. We would expect instead that the firm will itself decide both the length of the workday and the rate of compensation for each factor. It must choose these wage-hours combinations not only to maximize its own productivity, but also to lure factors successfully away from competing opportunities elsewhere in the economy. These considerations complicate both the theory of the firm and the theory of the consumer, with results which we will see later in this paper. However, we should first note two alternative methods of reconciling the conflicting interests of firms and factors, methods which could conceivably justify the traditional theories of their behavior. The first method assumes heterogeneous preferences among consumers and heterogeneous technologies among firms. In that case, while the


Journal Article•DOI•
TL;DR: For a model with nonstochastic regressors, this article showed that a systematic inequality relation exists among the test statistics; namely, the value of the Wald statistic is greater than or equal to that of the LR statistic, which, in turn, is greater than or equal to that of the LM statistic.
Abstract: For a model with nonstochastic regressors we show that a systematic inequality relation exists among the test statistics; namely, the value of the Wald statistic is greater than or equal to that of the LR statistic which, in turn, is greater than or equal to that of the LM statistic. When the null hypothesis is true, we find that the Wald, LR, and LM test statistics have identical limiting chi-square distributions. Since for a large sample test the three procedures employ the same critical region, the inequality relation among the test statistics implies that there exists a significance level such that the tests will produce conflicting inferences. These results are parallel to those obtained by Berndt and Savin [2] in the context of a multivariate regression model with independent disturbance vectors. We also consider the Wald and LR tests for a model with a lagged dependent variable. In this case the Wald statistic is not the same as in the nonstochastic regressor case, with the result that the inequality between the Wald and LR test statistics no longer holds. We conclude the paper with an empirical example which illustrates the relation among the test statistics.
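In the normal linear model the three statistics for a restriction can be written in terms of restricted and unrestricted residual sums of squares, in one common formulation W = n(RSS_r − RSS_u)/RSS_u, LR = n·log(RSS_r/RSS_u), LM = n(RSS_r − RSS_u)/RSS_r, and the ordering W ≥ LR ≥ LM then follows from x ≥ log(1 + x) ≥ x/(1 + x). A sketch on simulated data (formulation and data are illustrative, not the paper's example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = rng.normal(size=n)
y = 1.0 + 0.3 * x + rng.normal(size=n)  # true slope 0.3

# Unrestricted OLS of y on a constant and x.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
rss_u = np.sum((y - X @ beta) ** 2)

# Restricted model: slope = 0, i.e., regress on the constant alone.
rss_r = np.sum((y - y.mean()) ** 2)

W = n * (rss_r - rss_u) / rss_u
LR = n * np.log(rss_r / rss_u)
LM = n * (rss_r - rss_u) / rss_r
print(W >= LR >= LM)  # True: the systematic ordering
```

Because all three share the same chi-square critical value in large samples, this ordering is what creates the possibility of conflicting inferences at some significance level.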

