
Showing papers in "Econometrica in 1981"






Journal ArticleDOI
TL;DR: In this paper, several procedures are proposed for testing the specification of an econometric model in the presence of one or more other models which purport to explain the same phenomenon.
Abstract: Several procedures are proposed for testing the specification of an econometric model in the presence of one or more other models which purport to explain the same phenomenon. These procedures are shown to be closely related, but not identical, to the non-nested hypothesis tests recently proposed by Pesaran and Deaton [7], and to have similar asymptotic properties. They are remarkably simple both conceptually and computationally, and, unlike earlier techniques, they may be used to test against several alternative models simultaneously. Some empirical results are presented which suggest that the ability of the tests to reject false hypotheses is likely to be rather good in practice.

1,599 citations
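A widely cited variant of these procedures is commonly known as the J test: the fitted values of the rival model are added as an extra regressor to the model under test, and their coefficient is tested for significance. A minimal sketch on synthetic data (the data-generating process, variable names, and the use of plain OLS standard errors are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic data: the true relation uses x1; the rival model (H1) uses x2.
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)          # correlated rival regressor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

def ols(X, y):
    """OLS coefficients, residuals, and conventional standard errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, resid, se

ones = np.ones(n)
X0 = np.column_stack([ones, x1])            # model under test (H0)
X1 = np.column_stack([ones, x2])            # rival model (H1)

# Fitted values of the rival model ...
b1, _, _ = ols(X1, y)
yhat1 = X1 @ b1

# ... entered as an additional regressor in the H0 specification.
XJ = np.column_stack([X0, yhat1])
bJ, _, seJ = ols(XJ, y)
t_stat = bJ[-1] / seJ[-1]
print(f"J-type t statistic: {t_stat:.2f}")  # a large |t| is evidence against H0
```

Testing against several alternative models at once works in the same spirit: one fitted-value regressor per rival model is added and the block is tested jointly.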




Journal ArticleDOI
TL;DR: In this paper, the authors show how the conventional methods of applied welfare economics can be modified to handle discrete choice situations, focusing on the computation of the excess burden of taxation, and the evaluation of quality change.
Abstract: Economists have been paying increasing attention to the study of situations in which consumers face a discrete rather than a continuous set of choices. Such models are potentially very important in evaluating the impact of government programs upon consumer welfare. But very little has been said in general regarding the tools of applied welfare economics in discrete choice situations. This paper shows how the conventional methods of applied welfare economics can be modified to handle such cases. It focuses on the computation of the excess burden of taxation, and the evaluation of quality change. The results are applied to stochastic utility models, including the popular cases of probit and logit analysis. Throughout, the emphasis is on providing rigorous guidelines for carrying out applied work.

1,003 citations
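In the logit case, welfare calculations of this kind are commonly carried out with "log-sum" terms: expected consumer surplus is the log of the sum of exponentiated systematic utilities, divided by the marginal utility of income. A hedged numerical sketch of valuing a tax on one alternative (the utilities, the marginal utility of income, and the tax are invented for illustration):

```python
import numpy as np

def expected_cs(V, alpha):
    """Expected consumer surplus of a logit choice among alternatives with
    systematic utilities V, given marginal utility of income alpha."""
    return np.log(np.sum(np.exp(V))) / alpha

alpha = 0.5                              # marginal utility of income (illustrative)
V_before = np.array([1.0, 0.5, 0.0])     # three alternatives
V_after = V_before.copy()
V_after[0] -= alpha * 0.4                # a 0.4 money-unit tax on alternative 0

delta_cs = expected_cs(V_after, alpha) - expected_cs(V_before, alpha)
revenue = 0.4 * np.exp(V_after[0]) / np.sum(np.exp(V_after))  # expected revenue per consumer
excess_burden = -delta_cs - revenue

print(f"change in expected consumer surplus: {delta_cs:.3f}")
print(f"excess burden per consumer:          {excess_burden:.3f}")
```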


Journal ArticleDOI
TL;DR: In this article, five procedures for incorporating demographic variables into theoretically plausible demand systems are discussed: translating, scaling, and the Gorman, reverse Gorman, and implicit Prais-Houthakker procedures.
Abstract: In this paper [the authors discuss] five procedures for incorporating demographic variables into theoretically plausible demand systems: translating, scaling, and the Gorman, reverse Gorman, and implicit Prais-Houthakker procedures.... These five procedures are used to incorporate a single demographic variable--the number of children in a household--into the generalized CES demand system using household budget data for the United Kingdom for the period 1966-1972. (excerpt)

599 citations
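Translating and scaling are the two simplest of these procedures: translating shifts each commodity's "committed" quantity by a demographic-dependent amount, while scaling measures each good in demographic-dependent equivalence units, which also rescales effective prices. A small sketch using a linear expenditure (Stone-Geary) base system rather than the generalized CES of the paper; all functional forms and parameter values are illustrative assumptions:

```python
import numpy as np

def les_demand(p, x, gamma, beta):
    """Linear expenditure system: q_i = gamma_i + beta_i * (x - p.gamma) / p_i."""
    supernumerary = x - p @ gamma
    return gamma + beta * supernumerary / p

p = np.array([1.0, 2.0])            # prices
x = 100.0                           # total expenditure
gamma = np.array([5.0, 10.0])       # committed quantities
beta = np.array([0.4, 0.6])         # marginal budget shares
n_children = 2                      # the single demographic variable

# Translating: committed quantities shift with household composition.
t = np.array([1.5, 0.5])                        # per-child translations (invented)
q_translating = les_demand(p, x, gamma + t * n_children, beta)

# Scaling: goods are measured in "equivalent adult" units m_i, so demands are
# m_i times the base demands evaluated at the scaled prices p_i * m_i.
m = 1.0 + np.array([0.2, 0.1]) * n_children     # per-good scales (invented)
q_scaling = m * les_demand(p * m, x, gamma, beta)

print("translating:", q_translating)
print("scaling:    ", q_scaling)
```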


Journal ArticleDOI
TL;DR: In this article, a bidding model is developed which has the market-like features that bidders act as price takers and that prices convey information; in a two-stage extension in which bidders may acquire information at a cost before bidding, the equilibrium price is fully revealing.
Abstract: Most rational expectations market equilibrium models are not models of price formation, and naive mechanisms leading to such equilibria can be severely manipulable. In this paper, a bidding model is developed which has the market-like features that bidders act as price takers and that prices convey information. Higher equilibrium prices convey more favorable information about the quality of the objects being sold than do lower prices. Bidders can benefit from trading only if they have a transactions motive or if they have access to inside information. Apart from exceptional cases, prices are not fully revealing. A two stage model is developed in which bidders may acquire information at a cost before bidding and for which the equilibrium price is fully revealing, resolving a well-known paradox.

518 citations



Journal ArticleDOI
TL;DR: The Arrow-Pratt measures of risk aversion for von Neumann-Morgenstern utility functions have become workhorses for analyzing problems in the microeconomics of uncertainty.
Abstract: The Arrow-Pratt measures of risk aversion for von Neumann-Morgenstern utility functions have become workhorses for analyzing problems in the microeconomics of uncertainty. They have been used to characterize the qualitative properties of demand in insurance and asset markets, to examine the properties of risk taking in taxation models, and to study the interaction between risk and life-cycle savings problems, to name just a few applications. Equally importantly, they have generated the linear risk tolerance class of utility functions which has provided canonical examples in such diverse areas as portfolio theory and the theory of teams. Despite these successes, there have been a number of areas for which the results have been weaker than hoped. It is natural to use the risk aversion measures to compare the behavior of individuals in risky choice situations. For example, consider the individual portfolio choice problem in a two asset world with a riskless asset and a risky asset. If individual A has a uniformly higher Arrow-Pratt coefficient of risk aversion than individual B, then B will always choose a portfolio combination with more wealth invested in the risky asset. But, suppose that both assets are risky. Now, there is no obvious sense in which the more risk averse individual can be said to hold a less risky portfolio, but it seems strange that such a simple alteration should destroy the analytics which support the basic intuition. Similarly, consider the basic insurance problem. If one individual, A, is more risk averse than another, B, in the Arrow-Pratt sense, it follows that A will pay a larger premium to insure against a random loss than will B. Typically, though, an individual evaluates partial rather than total insurance; that is, only some gambles can be insured against and others must be retained. In this case, even when the gambles which are retained are independent from those which are insured, it is no longer true that the individual whose Arrow-Pratt measure of risk aversion is higher will pay a larger insurance premium. The situation is no better when we consider comparative statics exercises for a single individual. Decreasing absolute risk aversion in the sense of Arrow and ...
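The two-asset comparative statics mentioned above are easy to reproduce numerically: with one riskless and one risky asset, the investor with the uniformly larger Arrow-Pratt coefficient -u''/u' holds less of the risky asset. A small sketch with constant relative risk aversion utility (the returns, probabilities, and risk-aversion parameters are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def crra(w, rho):
    """CRRA utility; the Arrow-Pratt relative risk aversion is the constant rho."""
    return np.log(w) if rho == 1 else w ** (1 - rho) / (1 - rho)

def optimal_risky_share(rho, w0=1.0, riskless=1.02,
                        risky=np.array([0.80, 1.30]), prob=np.array([0.5, 0.5])):
    """Expected-utility-maximizing share of wealth placed in the risky asset."""
    def neg_expected_utility(a):
        wealth = w0 * ((1 - a) * riskless + a * risky)
        return -prob @ crra(wealth, rho)
    return minimize_scalar(neg_expected_utility, bounds=(0.0, 1.0),
                           method="bounded").x

for rho in (1.0, 2.0, 5.0):
    print(f"rho = {rho}: share in risky asset = {optimal_risky_share(rho):.2f}")
# Higher rho (uniformly more risk averse in the Arrow-Pratt sense) -> smaller share.
```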

ReportDOI
TL;DR: In this article, a theoretical and empirical analysis of the effects of time and money costs of labor market participation on married women's labor supply behavior is presented. The empirical results indicate that fixed costs of work are of prime importance in determining the labor supply behavior of married women.
Abstract: This study is a theoretical and empirical analysis of the effects of time and money costs of labor market participation on married women's labor supply behavior. The existence of fixed costs implies that individuals are not willing to work less than some minimum number of hours, termed reservation hours. The theoretical properties of the reservation hours function are derived. The empirical analysis develops and estimates labor supply functions when fixed costs are present but cannot be observed in the data. The likelihood function developed to estimate the model is an extension of the statistical model of Heckman (1974) that allows the minimum number of hours supplied to be nonzero and to differ randomly among individuals. The empirical results indicate that fixed costs of work are of prime importance in determining the labor supply behavior of married women. At the sample means, the minimum number of hours a woman is willing to work is about 1,300 per year. The estimated fixed costs an average woman incurs upon entry into the labor market are $920 in 1966 dollars. This represents 28 percent of her yearly earnings. Finally, labor supply parameters estimated with the fixed cost model are compared to those estimated under the conventional assumption of no fixed costs. Large differences in estimated parameters are found, suggesting that the conventional model is seriously misspecified.
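The reservation-hours idea can be illustrated directly: with a fixed money cost of entering work, very short hours are never chosen because the first hours of earnings merely cover the fixed cost, so a person either stays out of the labor force or works at least some minimum amount. A toy sketch (the utility function, wage, and cost figures are invented and are not the paper's estimates):

```python
import numpy as np

def utility(hours, wage=3.0, nonlabor_income=2000.0, fixed_cost=920.0,
            total_hours=5000.0, alpha=0.6):
    """Cobb-Douglas utility over consumption and leisure; any positive amount
    of work incurs the fixed money cost of labor market entry."""
    consumption = nonlabor_income + wage * hours - (fixed_cost if hours > 0 else 0.0)
    leisure = total_hours - hours
    return alpha * np.log(consumption) + (1 - alpha) * np.log(leisure)

grid = np.arange(0, 3001, 10)
utils = np.array([utility(h) for h in grid])

# Reservation hours: the smallest positive hours level at which working is at
# least as good as staying out of the labor force altogether.
working = grid[(grid > 0) & (utils >= utility(0))]
print("optimal hours:     ", grid[np.argmax(utils)])
print("reservation hours: ", working.min() if working.size else None)
```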

Journal ArticleDOI
TL;DR: In this article, a model is developed to capture the sequential nature of activities like research, development, or exploration, in which optimal funding criteria must take account of the fact that subsequent funding decisions will be made throughout the future.
Abstract: The sequential nature of activities like research, development, or exploration requires optimal funding criteria to take account of the fact that subsequent funding decisions will be made throughout the future. Thus, there is a continual possibility of reviewing a project's status, based on the latest information. After setting up a model to capture this feature, optimal funding criteria are investigated. In an important special case, an explicit formula is derived. As well as throwing light upon the nature of development activities, the analysis is also relevant to the general theory of information gathering processes.
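The sequential structure can be illustrated with a stripped-down two-stage dynamic program: fund stage 1, review the project in light of what is learned, and only then decide whether to pay for stage 2. All the figures below and the perfectly revealing review are invented simplifications; the point is that the option to abandon at the review raises the value of starting the project.

```python
# Two-stage R&D project (all figures illustrative).
cost_stage1, cost_stage2 = 30.0, 60.0
payoff = {"good": 200.0, "bad": 10.0}     # terminal value if the project is completed
prior_good = 0.4                          # prior probability the project is "good"

def value_after_review(p_good):
    """Continue to stage 2 only when its expected payoff covers its cost."""
    continue_if_good = max(payoff["good"] - cost_stage2, 0.0)
    continue_if_bad = max(payoff["bad"] - cost_stage2, 0.0)
    return p_good * continue_if_good + (1 - p_good) * continue_if_bad

# Value of starting, with an interim review versus committing both stages upfront.
value_with_review = -cost_stage1 + value_after_review(prior_good)
expected_completion_value = prior_good * payoff["good"] + (1 - prior_good) * payoff["bad"]
value_commit_upfront = -cost_stage1 - cost_stage2 + expected_completion_value

print(f"value with interim review:    {value_with_review:.1f}")    # positive here
print(f"value committing both stages: {value_commit_upfront:.1f}") # negative here
```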

Journal ArticleDOI
TL;DR: In this article, it is shown that the notion of equilibrium with local governments does not have the nice properties of general competitive equilibrium, except under very restrictive assumptions, and that if one tries to generalize the rigorous version of Tiebout's theory, then equilibria may no longer exist or may not be Pareto optimal.
Abstract: The last section of this paper presents a rigorous version of Tiebout's theory of local public goods. It is shown that equilibria exist and are Pareto optimal. This rigorous theory follows closely the more rigorous part of Tiebout's work. This rigorous theory makes a number of very special assumptions which make local public goods essentially private. The body of this paper presents a series of examples, which show that if one tries to generalize the rigorous version of Tiebout's theory in a number of interesting directions, then equilibria may no longer exist or may not be Pareto optimal. The conclusion is that Tiebout's idea does not lead to a satisfactory general theory of local public goods. THE GOAL OF THIS PAPER is to point out that Tiebout's notion of equilibrium with local governments does not have the nice properties of general competitive equilibrium, except under very restrictive assumptions. Tiebout [39] suggested that there are competitive forces which tend to make local governments allocate resources in a Pareto optimal fashion. Consumers choose to live in those towns with the mix of taxes and public goods they prefer. Local governments choose this mix so as to attract inhabitants. This idea may seem intriguing, for it suggests that the invisible hand solves an important part of Samuelson's perplexing public goods problem [32]. Tiebout, in fact, makes an argument which is nearly rigorous. I give a rigorous version of his argument at the end of the paper. However in this rigorous version, so many restrictive assumptions are made that public goods become essentially private. In the body of the paper, I give a series of examples with which I try to convince the reader that one is forced to adopt Tiebout's restrictive assumptions. The idea is that if one changes any of his assumptions, then either equilibria may not exist or may not be Pareto optimal. My examples are presented in the context of a general class of Tiebout models. I consider several subclasses, one of which is the special case considered by Tiebout. In each of the subclasses except that considered by Tiebout, I give a counterexample either to the existence of equilibrium or to its Pareto optimality. The subclasses are so chosen that the difficulties they reveal would be shared by any reasonable Tiebout model which differed from his special case. I believe that my examples controvert Tiebout's suggestion [39, last paragraph] that his theory compares favorably with competitive equilibrium theory. Most of the examples in this paper have already appeared in the literature. I cite related work as I go along. What is new here is that I assemble the examples in a unified argument.




Journal ArticleDOI
TL;DR: Chan, Hayya, and Ord, as discussed by the authors, showed that the residuals from linear regression of a realization of a random walk (the summation of a purely random series) on time have autocovariances which, for a given lag, are a function of time, and therefore that the residuals are not stationary.
Abstract: Econometric analysis of time series data is frequently preceded by regression on time to remove a trend component in the data. The resulting residuals are then treated as a stationary series to which procedures requiring stationarity, such as spectral analysis, can be applied. The objective is often to investigate the dynamics of transitory movements in the system, for example, in econometric models of the business cycle. When the data consist of a deterministic function of time plus a stationary error, then regression residuals will clearly be unbiased estimates of the stationary component. However, if the data are generated by (possibly repeated) summation of a stationary and invertible process, then the series cannot be expressed as a deterministic function of time plus a stationary deviation, even though a least squares trend line and the associated residuals can always be calculated for any given finite sample. In a recent paper, Chan, Hayya, and Ord (1977, hereafter CHO) were able to show that the residuals from linear regression of a realization of a random walk (the summation of a purely random series) on time have autocovariances which for given lag are a function of time, and therefore that the residuals are not stationary. Further, CHO established that the expected sample autocovariance function (the expected autocovariances for given lag averaged over the time interval of the sample) is a function of sample size as well as lag and therefore an artifact of the detrending procedure. This function is characterized by CHO in their figure 1 as being effectively linear in lag (although the exact function is a fifth degree polynomial), with the rate of decay from unity at the origin depending inversely on sample size.
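The CHO pattern described here is easy to reproduce by simulation: detrend independent random walks, average the residual autocorrelations across replications, and the averaged function decays from one at a rate governed by the sample size rather than by any true dynamics. A minimal sketch (the simulation design below is ours, not CHO's):

```python
import numpy as np

rng = np.random.default_rng(0)

def detrended_residuals(n):
    """Residuals from regressing a pure random walk on a linear time trend."""
    walk = np.cumsum(rng.normal(size=n))
    X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, walk, rcond=None)
    return walk - X @ beta

def mean_autocorr(n, lag, reps=2000):
    """Sample autocorrelation at a given lag, averaged over many replications."""
    acs = []
    for _ in range(reps):
        e = detrended_residuals(n)
        acs.append(np.dot(e[:-lag], e[lag:]) / np.dot(e, e))
    return np.mean(acs)

for n in (50, 200):
    row = "  ".join(f"{mean_autocorr(n, lag):5.2f}" for lag in (1, 5, 10, 20))
    print(f"n = {n:3d}: mean autocorrelation at lags 1, 5, 10, 20 -> {row}")
# The decay from unity is much slower for the larger sample, even though the
# increments of every simulated series are purely random.
```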


Journal ArticleDOI
TL;DR: In this paper, the bias of least squares is characterized in two limited dependent variable models, the Tobit model and the truncated regression model (in which the sample contains only non-limit observations), and some evidence on the effect of non-normality is presented.
Abstract: This paper presents a precise characterization of the bias of least squares in two limited dependent variable models, the Tobit model and the truncated regression model. For the cases considered, the method of moments can be used to correct the bias of OLS. For more general cases, the results provide approximations which appear to be relatively robust. In this paper we present a precise characterization of that bias for the particular case in which x_t, as well as the disturbance, is normally distributed. We also show that the bias of the OLS slope estimator can be corrected by dividing each estimate by the sample proportion of non-limit observations. Other structural parameters can be consistently estimated in a similar fashion. We present some evidence on the effect of non-normality with respect to the predictions obtained in the normal model. The case in which the sample contains only non-limit observations (the truncated regression model) is considered elsewhere (Olsen (7)). We analyze the relationship between his results and ours, and derive some predictions of the normal model with respect to the seriousness of "truncation bias."
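The correction described in the abstract, dividing the OLS slope by the sample proportion of non-limit observations, is straightforward to check by simulation under the normality assumptions. A sketch (the parameter values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta0, beta1, sigma = 10_000, 1.0, 2.0, 1.5

# Tobit data-generating process: y* = b0 + b1*x + e, observed y = max(y*, 0),
# with the regressor x (as well as the disturbance) normally distributed.
x = rng.normal(size=n)
y_star = beta0 + beta1 * x + sigma * rng.normal(size=n)
y = np.maximum(y_star, 0.0)

# OLS of the censored y on x, using all observations (limit and non-limit alike).
X = np.column_stack([np.ones(n), x])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rescale the slope by the sample proportion of non-limit observations.
p_nonlimit = np.mean(y > 0)
print(f"true slope:          {beta1:.2f}")
print(f"OLS slope (biased):  {b_ols[1]:.2f}")
print(f"corrected slope:     {b_ols[1] / p_nonlimit:.2f}")
```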

Journal ArticleDOI
TL;DR: This paper analyzes whether the experts' predictions are unbiased and whether complete use was made of all relevant, known information (unbiasedness and completeness being necessary conditions for fully rational expectations); little bias is found in either the half-year or full-year predictions, but extensive underutilization of information, particularly data on monetary growth, occurred.
Abstract: For more than three decades, economic columnist Joseph A. Livingston has canvassed a panel of economists twice a year, eliciting their six-month and twelve-month forecasts for more than a dozen key variables. This study analyzes whether the experts' predictions are unbiased, and whether complete use was made of all relevant, known information (unbiasedness and completeness being necessary conditions for fully rational expectations). Little bias was found in either the half-year or full-year predictions, but extensive underutilization of information, particularly data on monetary growth, occurred. "To prophesy is extremely difficult, especially with respect to the future." Chinese proverb. Do economists' expectations regarding key price and nonprice variables utilize all known, relevant information, in an unbiased, efficient manner? This is a worthy subject for research, for several reasons. Properties of experts' predictions likely form an upper bound for those of laymen. Further, as John Muth [14] has noted, "the character of dynamic processes is typically very sensitive to the way expectations are influenced by the actual course of events" (p. 316); hence, we need to know precisely how events do affect expectations. Finally, the common practice of replacing a variable's (generally unobserved) expectation with a proxy based on its past values will be unbiased (and will not cause bias in other ...
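The unbiasedness part of such an analysis is typically run as a regression of realizations on predictions, testing a zero intercept and a unit slope jointly, while the completeness (efficiency) part regresses the forecast error on information, such as past monetary growth, that was known when the forecast was made. A sketch on synthetic data (the series, coefficients, and variable names are illustrative, not the Livingston data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 120

# Synthetic example: forecasts that underuse the information in lagged money
# growth, so both tests below should reject in this constructed case.
money_growth_lag = rng.normal(5.0, 2.0, size=T)
actual = 2.0 + 0.6 * money_growth_lag + rng.normal(0.0, 1.0, size=T)
forecast = 3.5 + 0.3 * money_growth_lag + rng.normal(0.0, 1.0, size=T)

# Unbiasedness: regress the realization on the prediction and test
# (intercept, slope) = (0, 1) jointly.
unbias = sm.OLS(actual, sm.add_constant(forecast)).fit()
print(unbias.f_test("const = 0, x1 = 1"))

# Completeness: the forecast error should be uncorrelated with known information.
error = actual - forecast
completeness = sm.OLS(error, sm.add_constant(money_growth_lag)).fit()
print(completeness.t_test("x1 = 0"))
```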





Journal ArticleDOI
TL;DR: In this paper, the authors extend the conclusions obtained by Stiglitz and others about the asymptotic wealth distribution in the neo-classical growth model when the saving function is convex, showing that locally stable two-class unegalitarian equilibria may exist along with the egalitarian equilibrium and that they are necessarily Pareto superior to it.
Abstract: This paper extends the conclusions obtained by Stiglitz and others about the asymptotic wealth distribution in the neo-classical growth model when the saving function is convex. It is shown not only that locally stable two-class unegalitarian equilibria may exist along with the egalitarian equilibrium, but also that they necessarily are Pareto superior to it. More generally, the paper also analyzes the class of Pareto optimal unegalitarian equilibria.

Journal ArticleDOI
TL;DR: This paper explores the relationship between various types of separability, particularly weak and implicit separability, and optimal tax rates in the various models discussed in the literature, and shows that empirically calculated tax rates, based on econometric estimates of parameters, will be determined in structure not by the measurements actually made but by arbitrary, untested (and even unconscious) hypotheses chosen by the econometrician for practical convenience.
Abstract: If optimal tax theory is to be the basis for calculating tax rates, a close understanding is required of the relationship between the structure of preferences and the configuration of optimal tax rates. Otherwise hypotheses chosen by the econometrician for practical convenience may completely determine the results, independently of measurement. This paper explores the relationship between various types of separability, particularly weak and implicit separability, and optimal tax rates in the various models discussed in the literature. The use of distance functions and the Antonelli matrix provides a significant unification of previously disparate results. IN THE FINAL ANALYSIS, optimal tax theory should be the basis for actual calculation of tax rates. Although recently there have been great advances in theoretical results and in our understanding of their meaning, we are still some way from a working knowledge of whether uniform commodity taxes are in practice optimal or, if not, which commodities should be discriminated against. Present theoretical formulae do not yield clear-cut results except in special cases and it has recently become clear that optimal rates depend crucially on the detailed structure of consumer preferences. For example, Atkinson and Stiglitz [3] show that with an optimal nonlinear income tax, discriminatory commodity taxes are only necessary to the extent that individual commodities are not weakly separable from leisure. More recently, Deaton [6] has shown that a similar result holds for what is perhaps the most interesting of the standard models, that where there are many consumers and only a linear income tax and proportional commodity taxes are allowed. In this case, separability between goods and leisure, together with linear Engel curves for goods, removes the need for differential commodity taxation. In consequence, nothing can be learned about commodity taxes from consumer demand studies in which commodity demands are explained conditionally on total expenditure and commodity prices and which assume linear Engel curves. All such studies require separability from leisure as a maintained hypothesis and so are consistent with uniform commodity taxation. These results suggest that the prospects for meaningful empirical calculations of tax rates are bleak. Econometricians estimating commodity demand and labor supply equations make generous use of separability assumptions to enable estimation at all. In consequence, it is likely that empirically calculated tax rates, based on econometric estimates of parameters, will be determined in structure, not by the measurements actually made, but by arbitrary, untested (and even unconscious) hypotheses chosen by the econometrician for practical convenience. To remedy this situation, and as a prelude to fruitful empirical work, it is necessary to have a more explicit understanding of how preference structure affects optimal tax rates. Such is the object of this paper. Three different

Journal ArticleDOI
TL;DR: In this article, two theorems are derived about social choice functions defined on comprehensive convex subsets of utility allocation space; together with Pareto optimality and related conditions, they imply that a social choice function must be either utilitarian or egalitarian.
Abstract: Two theorems are derived about social choice functions, which are defined on comprehensive convex subsets of utility allocation space. Theorem 1 asserts that a linearity condition, together with Pareto optimality, implies that a social choice function must be utilitarian. Theorem 2 asserts that a concavity condition, together with Pareto optimality and independence of irrelevant alternatives, implies that a social choice function must be either utilitarian or egalitarian. These linearity and concavity conditions have natural interpretations in terms of the timing of social welfare analysis (before or after the resolution of uncertainties) and its impact on social choices.