
Showing papers in "Econometrica in 1980"


Journal ArticleDOI
TL;DR: In this article, a parameter covariance matrix estimator is presented that is consistent even when the disturbances of a linear regression model are heteroskedastic; the estimator does not depend on a formal model of the structure of the heteroskedasticity.
Abstract: This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator to those of the usual covariance estimator, one obtains a direct test for heteroskedasticity, since in the absence of heteroskedasticity, the two estimators will be approximately equal, but will generally diverge otherwise. The test has an appealing least squares interpretation.
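The estimator described above is usually written as a sandwich formula. Below is a minimal numerical sketch (simulated data and variable names are illustrative, not taken from the paper) comparing the conventional covariance estimate with the heteroskedasticity-consistent one:

```python
# Sketch of the White / HC0 covariance estimator for OLS:
# Var(b) = (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
# Heteroskedastic errors: the variance grows with the second regressor.
u = rng.normal(size=n) * (1.0 + np.abs(X[:, 1]))
y = X @ np.array([1.0, 2.0, -0.5]) + u

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y              # OLS coefficients
e = y - X @ b                      # residuals

V_ols = e @ e / (n - k) * XtX_inv              # conventional estimator
V_hc0 = XtX_inv @ (X.T * e**2) @ X @ XtX_inv   # heteroskedasticity-consistent estimator

print(np.sqrt(np.diag(V_ols)))   # conventional standard errors
print(np.sqrt(np.diag(V_hc0)))   # robust standard errors
```

With heteroskedastic errors the two sets of standard errors diverge, which is the basis of the direct test mentioned in the abstract.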

25,689 citations


Journal ArticleDOI
TL;DR: In this article, it is argued that the style in which the builders of large-scale macroeconomic models construct claims for a connection between these models and reality is inappropriate, to the point that claims for identification in these models cannot be taken seriously.
Abstract: Existing strategies for econometric analysis related to macroeconomics are subject to a number of serious objections, some recently formulated, some old. These objections are summarized in this paper, and it is argued that taken together they make it unlikely that macroeconomic models are in fact overidentified, as the existing statistical theory usually assumes. The implications of this conclusion are explored, and an example of econometric work in a non-standard style, taking account of the objections to the standard style, is presented. THE STUDY OF THE BUSINESS cycle, fluctuations in aggregate measures of economic activity and prices over periods from one to ten years or so, constitutes or motivates a large part of what we call macroeconomics. Most economists would agree that there are many macroeconomic variables whose cyclical fluctuations are of interest, and would agree further that fluctuations in these series are interrelated. It would seem to follow almost tautologically that statistical models involving large numbers of macroeconomic variables ought to be the arena within which macroeconomic theories confront reality and thereby each other. Instead, though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models. In this lecture I intend to discuss some aspects of this situation, attempting both to offer some explanations and to suggest some means for improvement. I will argue that the style in which their builders construct claims for a connection between these models and reality (the style in which "identification" is achieved for these models) is inappropriate, to the point at which claims for identification in these models cannot be taken seriously. This is a venerable assertion, and there are some good old reasons for believing it, but there are also some reasons which have been more recently put forth. After developing the conclusion that the identification claimed for existing large-scale models is incredible, I will discuss what ought to be done in consequence. The line of argument is: large-scale models do perform useful forecasting and policy-analysis functions despite their incredible identification; the restrictions imposed in the usual style of identification are neither essential to constructing a model which can perform these functions nor innocuous; an alternative style of identification is available and practical. Finally we will look at some empirical work based on an alternative style of macroeconometrics. A six-variable dynamic system is estimated without using
[Footnote 1: Research for this paper was supported by NSF Grant Soc-76-02482. Lars Hansen executed the computations. The paper has benefited from comments by many people, especially Thomas J. Sargent.]
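The alternative style of empirical work referred to above is generally associated with unrestricted vector autoregressions estimated equation by equation by least squares. A minimal sketch of that style of estimation (simulated series and an arbitrary lag length, purely for illustration; not the paper's six-variable dataset):

```python
# Hedged sketch of an unrestricted VAR(p) fitted by OLS, with no exclusion
# ("identifying") restrictions imposed on the coefficient matrices.
import numpy as np

def fit_var(Y, p):
    """OLS estimates of Y_t = c + A_1 Y_{t-1} + ... + A_p Y_{t-p} + u_t."""
    T, m = Y.shape
    rows = []
    for t in range(p, T):
        # Regressors: a constant plus p lags of every variable.
        lags = np.concatenate([Y[t - j] for j in range(1, p + 1)])
        rows.append(np.concatenate([[1.0], lags]))
    Z = np.array(rows)                              # (T - p) x (1 + m*p)
    B, *_ = np.linalg.lstsq(Z, Y[p:], rcond=None)   # all equations at once
    resid = Y[p:] - Z @ B
    return B, resid

rng = np.random.default_rng(1)
Y = rng.normal(size=(120, 3)).cumsum(axis=0)   # three illustrative series
B, resid = fit_var(Y, p=2)
print(B.shape, resid.shape)                    # (1 + 3*2, 3) and (118, 3)
```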

11,195 citations


Journal ArticleDOI
TL;DR: In this article, an explicit solution for an important subclass of the model Shiller refers to as the general linear difference model is given, together with the conditions for existence and uniqueness.
Abstract: IN HIS SURVEY ON RATIONAL EXPECTATIONS, R. Shiller indicates that the difficulty of obtaining explicit solutions for linear difference models under rational expectations may have hindered their use [14, p. 27]. The present paper attempts to remedy that problem by giving the explicit solution for an important subclass of the model Shiller refers to as the general linear difference model. Section 1 presents the form of the model for which the solution is derived and shows how particular models can be put in this form. Section 2 gives the solution together with the conditions for existence and uniqueness. 1. THE MODEL: The model is given by (1a), (1b), and (1c) as follows:
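The displayed equations (1a)-(1c) do not survive in this excerpt. For orientation only, here is a canonical form commonly used for linear difference models under rational expectations (an illustrative assumption, not a reconstruction of the paper's own display):

```latex
% Illustrative canonical form only; X_t collects predetermined variables,
% P_t non-predetermined variables, Z_t exogenous variables, and E_t denotes
% the expectation conditional on information available at t.
\begin{pmatrix} X_{t+1} \\ E_t\, P_{t+1} \end{pmatrix}
  = A \begin{pmatrix} X_t \\ P_t \end{pmatrix} + \gamma\, Z_t
```

In formulations of this type, existence and uniqueness of a non-explosive solution are usually tied to counting the eigenvalues of A outside the unit circle against the number of non-predetermined variables.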

2,536 citations


Journal ArticleDOI
TL;DR: In this paper, a wide class of inequality indices is considered and those which are additively decomposable are identified, including the squared coefficient of variation and Theil's two entropy formulas.
Abstract: This paper considers a wide class of inequality indices and identifies those which are additively decomposable. The sub-class of mean independent, additively decomposable measures turns out to be a single parameter family which includes the squared coefficient of variation and Theil's two entropy formulas.
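As a concrete illustration of additive decomposability (a textbook check using Theil's T index, not the paper's notation or proof), the total index splits exactly into a within-group and a between-group term:

```python
# Verify numerically that Theil's T index decomposes into within- and
# between-group components over an arbitrary partition of the population.
import numpy as np

def theil_t(y):
    mu = y.mean()
    return np.mean((y / mu) * np.log(y / mu))

rng = np.random.default_rng(2)
groups = [rng.lognormal(mean=m, sigma=0.4, size=50) for m in (0.0, 0.5, 1.0)]
y = np.concatenate(groups)
n, mu = y.size, y.mean()

within = sum((g.size / n) * (g.mean() / mu) * theil_t(g) for g in groups)
between = sum((g.size / n) * (g.mean() / mu) * np.log(g.mean() / mu) for g in groups)

print(theil_t(y), within + between)   # the two numbers agree
```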

1,566 citations


Journal ArticleDOI
TL;DR: It is demonstrated that commodity-independent compensated price effects must be known to infer the existence of the unobservable interdependent shadow prices of the model with a relatively weak structure imposed on preference orderings.
Abstract: The predictive content of the quantity-quality model of fertility and the empirical information required for verification under a minimal set of restrictions on the utility function are described. It is demonstrated that commodity-independent compensated price effects must be known to infer the existence of the unobservable interdependent shadow prices of the model with a relatively weak structure imposed on preference orderings. A method of using multiple birth events to substitute for these exogenous prices is proposed and applied to household data from India. (Authors)

714 citations


Journal ArticleDOI

663 citations


Journal ArticleDOI
TL;DR: In this article, it is argued that a sound and natural approach to such tests must rely primarily on the out-of-sample forecasting performance of models relating the original (non-prewhitened) series of interest.
Abstract: This paper is concerned with testing for causation, using the Granger definition, in a bivariate time-series context. It is argued that a sound and natural approach to such tests must rely primarily on the out-of-sample forecasting performance of models relating the original (non-prewhitened) series of interest. A specific technique of this sort is presented and employed to investigate the relation between aggregate advertising and aggregate consumption spending. The null hypothesis that advertising does not cause consumption cannot be rejected, but some evidence suggesting that consumption may cause advertising is presented.
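A rough sketch of the out-of-sample idea (simulated data and a deliberately simple forecasting comparison; the paper's actual test statistic and the advertising-consumption data are not reproduced here):

```python
# Compare post-sample one-step forecast errors for y from an autoregression
# with and without lagged x; a lower MSE with x suggests x helps forecast y.
import numpy as np

rng = np.random.default_rng(3)
T = 300
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()   # here x does help

split = 200   # estimation sample ends, forecast evaluation begins

def forecast_errors(use_x):
    errs = []
    for t in range(split, T):
        # Re-fit on data up to t-1, then forecast y[t] one step ahead.
        Z = np.column_stack([np.ones(t - 1), y[:t - 1]] +
                            ([x[:t - 1]] if use_x else []))
        b, *_ = np.linalg.lstsq(Z, y[1:t], rcond=None)
        z_t = np.concatenate([[1.0, y[t - 1]], [x[t - 1]] if use_x else []])
        errs.append(y[t] - z_t @ b)
    return np.array(errs)

mse_restricted = np.mean(forecast_errors(False) ** 2)
mse_with_x = np.mean(forecast_errors(True) ** 2)
print(mse_restricted, mse_with_x)
```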

480 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the stability of coalitions in cooperative games with hedonic coalitions and show that transfers among coalitions may be necessary to attain Pareto optimality.
Abstract: In many economic situations, individuals carry out activities as coalitions, and have personal preferences for belonging to specific groups (coalitions). These situations are studied in the framework of cooperative games with coalition structures, by defining for each player a utility function with two arguments, namely his consumption bundle and the coalition to which (s)he belongs. The optimality analysis brings out a surprising property of the games with hedonic coalitions, namely that transfers among coalitions may be necessary to attain Pareto optimality. Moreover, quite restrictive assumptions are needed to rule out this property. The stability analysis is concerned with the conditions under which no individual has incentives and opportunities to change coalitions. Two concepts of "individual stability" of a coalition structure are introduced, and their existence properties are analyzed. 1.1. Summary. IN MANY ECONOMIC SITUATIONS, individuals carry out activities as coalitions. Thus, individuals organize themselves in firms for production purposes and in clubs for consumption purposes; or they rely upon local communities for the provision of public goods. In such situations, individuals typically have personal preferences for belonging to specific groups (coalitions). First, they are concerned with the size of the group and personalities of its members. Second, they are concerned with qualitative and quantitative characteristics of the group activities: working conditions in the firms, facilities available at the clubs, local public goods. Cooperative games with coalition structures provide a natural framework for a formal analysis of these situations, when the individuals partition themselves into coalitions. A general way of introducing explicitly personal preferences for membership in specific coalitions is to define for each player a utility function with two arguments, namely his consumption bundle and the coalition to which he belongs. It then seems natural to speak about games with "hedonic coalitions." A model of an economy, or cooperative game, with hedonic coalitions is introduced in Section 1.2. The agents organize themselves in coalitions which form a partition (i.e., each agent belongs to one and only one coalition). Each coalition, endowed with a production set, produces public and private goods. Each agent consumes the public goods produced by the coalition to which he belongs, and private goods. His preferences are represented by a utility function which is strictly increasing in private goods and continuous in private as well as public goods, but which depends upon the coalition in an arbitrary way. Our initial interest was to study the stability of coalitions in this model. Section 3 is devoted to that topic. However, in the course of our study, we encountered an

418 citations


Journal ArticleDOI
TL;DR: In this paper, a joint model is proposed that represents both the regression model to be estimated and the process determining when the dependent variable is to be observed, so that nonrandomness in the observed values of the dependent variable can be taken into account.
Abstract: WHEN ESTIMATING REGRESSION MODELS it is very nearly always assumed that the sample is random. The recent literature has begun to deal with the problems which arise when estimating a regression model with samples which may not be random. The most general case in which one only has access to a single nonrandom sample has not been addressed since it is a very imposing problem. The case which has been addressed starts with a random sample but considers the problem of missing values for the dependent variable of a regression. If the determination of which values are to be observed is related to the unobservable error term in the regression, then methods such as ordinary least squares are in general inappropriate. By constructing a joint model which represents both the regression model to be estimated and the process determining when the dependent variable is to be observed, some progress can be made towards taking into account nonrandomness for the observed values of the dependent variable. The actual techniques employed fall into two rough groups, full information maximum likelihood models, and limited information methods which are more easily estimated. In the full information category are two methods. One model combines the probit and the normal regression models, and the other combines the Tobit or limited dependent variable model with the normal regression model. The form of the probit regression model is
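One familiar limited-information approach of the kind the abstract contrasts with full information maximum likelihood is a two-step procedure: a probit equation for whether the dependent variable is observed, followed by least squares on the observed subsample with an inverse-Mills-ratio regressor. A hedged sketch follows (simulated data; not necessarily the specific estimator developed in the paper):

```python
# Two-step selection correction: probit for observation, then augmented OLS.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 2000
z = np.column_stack([np.ones(n), rng.normal(size=n)])   # selection covariates
x = np.column_stack([np.ones(n), rng.normal(size=n)])   # outcome covariates
e_sel, e_out = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], n).T
selected = (z @ np.array([0.2, 1.0]) + e_sel) > 0        # observation rule
y = x @ np.array([1.0, 2.0]) + e_out                     # latent outcome
y_obs, x_obs, z_obs = y[selected], x[selected], z[selected]

# Step 1: probit for the probability of being observed.
def neg_loglik(g):
    p = np.clip(norm.cdf(z @ g), 1e-10, 1 - 1e-10)
    return -(selected * np.log(p) + (~selected) * np.log(1 - p)).sum()
g_hat = minimize(neg_loglik, np.zeros(2)).x

# Step 2: OLS of y on x plus the inverse Mills ratio, on the selected sample.
mills = norm.pdf(z_obs @ g_hat) / norm.cdf(z_obs @ g_hat)
X2 = np.column_stack([x_obs, mills])
beta_hat, *_ = np.linalg.lstsq(X2, y_obs, rcond=None)
print(beta_hat)   # first two entries estimate the outcome coefficients
```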

392 citations



Journal ArticleDOI
TL;DR: This paper examines the implications of the rational expectations hypothesis for applied econometrics and argues that its full force has yet to be appreciated in empirical work; the discussion is general and pays little attention to specific applications of the hypothesis, such as the "efficient markets" literature.
Abstract: The implications for applied econometrics of the assumption that unobservable expectations are formed rationally in Muth's sense are examined. The statistical properties of the resulting models and their distributed lag and time series representations are described. Purely extrapolative forecasts of endogenous variables can be constructed, as alternatives to rational expectations, but are less efficient. Identification and estimation are considered: an order condition is that no more expectations variables than exogenous variables enter the model. Estimation is based on algorithms for nonlinear-in-parameters systems; other approaches are surveyed. Implications for economic policy and econometric policy evaluation are described. EXPECTATIONS VARIABLES ARE WIDELY USED in applied econometrics, since the optimizing behavior of economic agents, which empirical research endeavors to capture, depends in part on their views of the future. Directly observed expectations or anticipations are relatively rare, hence implicit forecasting schemes are used. Most commonly expectations are taken to be extrapolations, that is, weighted averages of past values of the variable under consideration. However, these "are almost surely inaccurate gauges of expectations. Consumers, workers, and businessmen ... do read newspapers and they do know better than to base price expectations on simple extrapolation of price series alone" (Tobin [31, p. 14]). An alternative approach is offered by the rational expectations hypothesis of Muth [15], which assumes that in forming their expectations of endogenous variables, economic agents take account of the interrelationships among variables described by the appropriate economic theory. "Price movements observed and experienced do not necessarily convey information on the basis of which a rational man should alter his view of the future. When a blight destroys half the midwestern corn crop and corn prices subsequently rise, the information conveyed is that blights raise prices. No trader or farmer under these circumstances would change his view of the future of corn prices, much less of their rate of change, unless he is led to reconsider his estimate of the likelihood of blights," again quoting Tobin. This paper examines the implications of the rational expectations hypothesis for applied econometrics, and argues that its full force has yet to be appreciated in empirical work. The discussion is quite general, proceeding in terms of the standard linear simultaneous equation system, and pays little attention to specific applications of the hypothesis, such as the "efficient markets" literature and

Journal ArticleDOI
Jesús Seade
TL;DR: In this article, the effects of entry on output and profits in the Cournot model of oligopoly are examined, asking whether the conventional wisdom that entry lowers per-firm profits and output while expanding industry output holds in the general case.
Abstract: THE PROBLEM OF ENTRY receives a great deal of attention in present-day Industrial Economics. The main question typically asked in this connection, ever since the work of Bain and Sylos-Labini, is what the best strategies are for oligopolists facing the threat of entry into their industry, that is, the implications of potential entry on their optimal policies regarding pricing, investment, research and development, advertising, and so on. Were entry to occur, conventional wisdom says, the effects would be unambiguous: profits per firm, and perhaps also output per firm would fall, while the industry as a whole would become "more competitive" in some sense, in particular expanding output. These effects are commonly taken for granted in discussions on entry, as obvious truths or, at best, as underlying assumptions. The natural question arises of whether this deep-rooted piece of conventional wisdom is in fact correct for the general case, as the behavior of oligopoly is, alas, complex enough to keep many surprises in store. Of course, these remarks are not meant to apply to the limit case where barriers to entry are removed altogether, thus breaking entirely the oligopolistic set-up. The effect on profits, in particular, would in this extreme case be necessarily unambiguous, as they would need to be zero in the new equilibrium, be it perfect or monopolistic competition. This is no more than a definition of equilibrium, but perhaps our intuition draws too heavily on this trivial consideration. Some of the effects of entry we shall be examining, in particular those on output, have been studied before, albeit in a rather limited form. Frank [1], Okuguchi [3], and Ruffin [4] found that certain "reasonable" conditions were sufficient for aggregate output to rise and firm-output to fall as entry occurs in the simple Cournot model of oligopoly. However, these authors do not examine what
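The conventional wisdom the paper re-examines can be seen in the textbook linear-demand Cournot case (a worked illustration only; the paper's analysis concerns the general model, where these conclusions need not hold):

```python
# Symmetric Cournot equilibrium with linear demand P = a - b*Q and unit cost c:
# per-firm output q = (a - c) / (b*(n + 1)). As n rises, q and profit fall
# while industry output Q = n*q rises.
a, b, c = 100.0, 1.0, 20.0

for n in (2, 3, 5, 10):
    q = (a - c) / (b * (n + 1))      # per-firm output
    Q = n * q                        # industry output
    profit = (a - c - b * Q) * q     # (price - cost) * quantity
    print(n, round(q, 2), round(Q, 2), round(profit, 2))
```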


Journal ArticleDOI
TL;DR: In this article, a two-stage estimation method for switching simultaneous equations models where the criterion function determining the switching is of the probit type and the tobit type is discussed.
Abstract: The paper discusses the two-stage estimation method for switching simultaneous equations models where the criterion function determining the switching is of the probit type and the tobit type. It derives the asymptotic covariance matrices of these estimators and shows that when the criterion function is of the probit type the correct covariance matrix is underestimated when the heteroscedasticity introduced in the first step is ignored, whereas the same is not necessarily the case for one of the regimes when the criterion function is of the tobit type.

ReportDOI
TL;DR: In this paper, a three-element variance components model is proposed for analyzing the earnings of young workers in Sweden; the components are interpreted as the effects of differential on-the-job training (OJT) and differential economic ability.
Abstract: The fine structure of earnings is defined by a theoretically meaningful decomposition of the covariance matrix of earnings (or log earnings) time series. A three-element variance components model is proposed for analyzing earnings of young workers. These components are interpreted as the effects of differential on-the-job training (OJT) and differential economic ability. Several properties of these components and relationships between them are deduced from the OJT model. Background noise generated by a nonstationary first-order autoregressive process, with heteroscedastic innovations and time-varying AR parameters, is also assumed present in observed earnings. ML estimates are obtained for all parameters of the model for a sample of Swedish males. The results are consistent with the view that the OJT mechanism is an empirically significant phenomenon in determining individual earnings profiles.

Book ChapterDOI
TL;DR: In this paper, the authors extended and generalized recursive equilibrium theory and established optimality of equilibria and supportability of optima in a direct way, and four economic applications are reformulated as recursive competitive equilibrium and analyzed.
Abstract: Recursive equilibrium theory is extended and generalized. Optimality of equilibria and supportability of optima are established in a direct way. Four economic applications are reformulated as recursive competitive equilibria and analyzed.


Journal ArticleDOI
TL;DR: In this article, the benefits to consumers from price stabilization are analyzed in terms of the convexity-concavity properties of the consumer's indirect utility function, first for a single commodity price and then for an arbitrary number of commodity prices.
Abstract: This paper evaluates the benefits to consumers from price stabilization in terms of the convexity-concavity properties of the consumer's indirect utility function. It is shown that in the case where only a single commodity price is stabilized, the consumer's preference for price instability depends upon four parameters: the income elasticity of demand for the commodity, the price elasticity of demand, the share of the budget spent on the commodity, and the coefficient of relative risk aversion. All of these parameters enter in an intuitive way and the analysis includes the conventional consumer's surplus approach as a special case. The analysis is extended to consider the benefits of stabilizing an arbitrary number of commodity prices. Finally, some issues related to the choice of numeraire and certainty price in this context are discussed.


Journal ArticleDOI
TL;DR: In this paper, an alternative interpretation and a generalization of Sen's poverty index as an "ethical index" are proposed, where ethical indices are indices that are exact for social evaluation functions.
Abstract: ordinal approach to welfare comparisons. Given a poverty line, a priori, this index has several appealing properties: (i) it can be computed using readily available information, (ii) it is sensitive to the percentage of the population that is below the line (the "head-count ratio"), (iii) it depends on the income of the average poor person, and (iv) it depends on the amount of inequality among the poor themselves. In this note, we offer an alternative interpretation and a generalization of Sen's index as an "ethical index." These are indices, usually of inequality, that are exact for social evaluation functions. Each index is thus implied by and implies at least one social evaluation function. Essential to the construction of these ethical indices is the notion of the
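For reference, the standard textbook statement of Sen's index combines the head-count ratio, the income-gap ratio, and the Gini coefficient among the poor (an illustration of the index being generalized, not of the paper's ethical reformulation):

```python
# Sen's poverty index in its common simplified form S = H * (I + (1 - I) * Gp),
# where H is the head-count ratio, I the average income gap of the poor,
# and Gp the Gini coefficient among the poor.
import numpy as np

def gini(y):
    y = np.sort(np.asarray(y, dtype=float))
    n = y.size
    return (2 * np.arange(1, n + 1) - n - 1) @ y / (n * y.sum())

def sen_index(incomes, poverty_line):
    incomes = np.asarray(incomes, dtype=float)
    poor = incomes[incomes < poverty_line]
    if poor.size == 0:
        return 0.0
    H = poor.size / incomes.size                        # head-count ratio
    I = np.mean((poverty_line - poor) / poverty_line)   # income-gap ratio
    return H * (I + (1 - I) * gini(poor))

print(sen_index([4, 6, 8, 12, 20, 35], poverty_line=10))
```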





Journal ArticleDOI
TL;DR: In this paper, a new way of looking at repeated games is introduced which incorporates a bounded memory and rationality, and a resolution of the prisoner's dilemma is given, where the agents only keep some kind of summary or average of the past outcomes or payoffs in their memory.
Abstract: A new way of looking at repeated games is introduced which incorporates a bounded memory and rationality. In these terms, a resolution of the prisoner's dilemma is given. THE GOAL HERE is to give a natural way of introducing dynamics into game theory, or at least for non-cooperative games. Perhaps the main idea in this treatment of dynamics is the way the past is taken into account. We suppose for both mathematical and model theoretic considerations that the agents only keep some kind of summary or average of the past outcomes (or payoffs) in their memory. Decisions are based on this summary. This kind of modeling reflects the fact that there exist substantive bounds to the storing and organizing of information. We give an axiomatization of bounded memory and rationality, with both institutions and people in mind. On the other hand, the hypothesis used in this treatment leads to a tractable mathematics. Differential equations on function spaces which contain little geometry are replaced by a dynamics on a finite dimensional space. And yet dynamics takes the past into account as a kind of substitute for the theory of delay equations. The perspective in this paper is that of no finite horizon and no discounting of the future. There is always a tomorrow in our plans, and it is as important as today. Also there is a history, a beginning of history, but no end. Decisions are based on the effect of past actions of agents, not on promises or binding agreements. However communication is certainly not precluded. Solutions in our games are asymptotic solutions. To be important for us, they must meet the criteria of stability. This criterion is well-defined by virtue of the dynamical foundations of the models. The first section deals with an example, the repeated prisoner's dilemma, in the language of an arms race. Here a class of strategies, "good strategies," is given where the solution is Pareto optimal, stable, and a Nash equilibrium. Thus at least asymptotically, we have a rather robust resolution of the prisoner's dilemma. We show how good strategies with optimal solutions might bifurcate into strategies with the worst solutions.



ReportDOI
TL;DR: This paper extends previous equilibrium business cycle models by incorporating an economy-wide capital market; the relative price that appears in the supply and demand functions in local commodity markets becomes an anticipated real rate of return on earning assets, rather than a ratio of actual to expected prices.
Abstract: Previous equilibrium business cycle models are extended by the incorporation of an economy-wide capital market. This extension alters the information structure of these models and modifies the relative price variable that transmits money shocks to real variables. Monetary effects on nominal and real interest rates are a focus of the analysis. THIS PAPER extends previous equilibrium "business cycle" models of Lucas [10, 11] and myself [4] by incorporating an economy-wide capital market. One aspect of this extension is that the relative price that appears in the supply and demand functions in local commodity markets becomes an anticipated real rate of return on earning assets, rather than a ratio of actual to expected prices. The analysis brings in as a central feature a portfolio balance schedule in the form of an aggregate money demand function. The distinction between the nominal and real rates of return is an important element in the model. From the standpoint of expectation formation, the key aspect of the extended model is that observation of the economy-wide nominal rate of return conveys current global information to individuals. In this respect the present analysis is distinguished from Lucas' [12] model, which considered only local (internal) finance. However, my analysis does not deal with the dynamics of capital accumulation, as considered by Lucas, and does not incorporate any other elements, such as inventory holdings, multi-period lags in the acquisition of information, or the adjustment costs for changing employment that were treated by Sargent [16], that could produce persisting effects of monetary and other disturbances. In order to retain the real effects of monetary surprises in the model, it is necessary that the observation of the current nominal rate of return, together with an observation of a current local commodity price, not convey full information about contemporaneous disturbances. Limitation of current information is achieved in the present framework by introducing a contemporaneously unobserved disturbance to the aggregate money demand function, along with an aggregate money supply shock and an array of disturbances to local excess commodity demands. Aggregate shocks to the commodity market (to the extent that they were not directly and immediately observable) could serve a similar purpose. With respect to the effect of money supply shocks on output, the model yields results that are similar to those generated in earlier models. Notably, incomplete

Book ChapterDOI
TL;DR: In several instances in economics, one is confronted with estimating a set of equations, for example a set of demand equations across different sectors, industries, or regions.
Abstract: In several instances in economics, one is confronted with estimating a set of equations. This could be a set of demand equations across different sectors, industries, or regions.

Journal ArticleDOI
TL;DR: In this paper, three different forms of congestion of production factors are defined and analyzed within an axiomatic theory of production, which is used to characterize a law of variable proportion.
Abstract: : Three different forms of congestion of production factors are defined and analyzed within an axiomatic theory of production. These forms of congestion are used to characterize a law of variable proportion. (Author)