
Showing papers in "Econometrica in 1978"


Journal Article•DOI•
TL;DR: In this article, the result that, under the null hypothesis of no misspecification, an asymptotically efficient estimator must have zero asymptotic covariance with its difference from a consistent but asymptotically inefficient estimator is used to devise specification tests for a number of model specifications in econometrics.
Abstract: Using the result that under the null hypothesis of no misspecification an asymptotically efficient estimator must have zero asymptotic covariance with its difference from a consistent but asymptotically inefficient estimator, specification tests are devised for a number of model specifications in econometrics. Local power is calculated for small departures from the null hypothesis. An instrumental variable test as well as tests for a time series cross section model and the simultaneous equation model are presented. An empirical model provides evidence that unobserved individual factors are present which are not orthogonal to the included right-hand-side variable in a common econometric specification of an individual wage equation.
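A minimal sketch of the statistic this result implies, assuming two estimators and their asymptotic covariance matrices are already in hand: zero covariance between the efficient estimator and the contrast means the variance of the contrast is just the difference of the two variances, from which a chi-squared statistic follows. The function name and the use of a pseudo-inverse are illustrative choices, not the paper's notation.

```python
import numpy as np
from scipy import stats

def specification_test(b_eff, b_cons, V_eff, V_cons):
    """Contrast an estimator that is efficient under the null (b_eff) with
    one that is consistent but inefficient (b_cons).  Zero asymptotic
    covariance between b_eff and q = b_cons - b_eff implies
    Var(q) = V_cons - V_eff."""
    q = b_cons - b_eff
    V_q = V_cons - V_eff
    stat = float(q @ np.linalg.pinv(V_q) @ q)   # pinv guards against a singular V_q
    p_value = stats.chi2.sf(stat, df=len(q))    # chi-squared under the null
    return stat, p_value
```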

16,198 citations


Journal Article•DOI•
TL;DR: In this article, the authors examine the stochastic behavior of equilibrium asset prices in a one-good, pure exchange economy with identical consumers, and derive a functional equation for price as a function of the physical state of the economy.
Abstract: This paper is a theoretical examination of the stochastic behavior of equilibrium asset prices in a one-good, pure exchange economy with identical consumers. The single good in this economy is (costlessly) produced in a number of different productive units; an asset is a claim to all or part of the output of one of these units. Productivity in each unit fluctuates stochastically through time, so that equilibrium asset prices will fluctuate as well. Our objective will be to understand the relationship between these exogenously determined productivity changes and market determined movements in asset prices. Most of our attention will be focused on the derivation and application of a functional equation in the vector of equilibrium asset prices, which is solved for price as a function of the physical state of the economy. This equation is a generalization of the Martingale property of stochastic price sequences, which serves in practice as the defining characteristic of market "efficiency," as that term is used by Fama [7] and others. The model thus serves as a simple context for examining the conditions under which a price series' failure to possess the Martingale property can be viewed as evidence of non-competitive or "irrational" behavior. The analysis is conducted under the assumption that, in Fama's terms, prices "fully reflect all available information," a hypothesis which Muth [13] had earlier termed "rationality of expectations." As Muth made clear, this hypothesis (like utility maximization) is not "behavioral": it does not describe the way agents think about their environment, how they learn, process information, and so forth. It is rather a property likely to be (approximately) possessed by the outcome of this unspecified process of learning and adapting. One would feel more comfortable, then, with rational expectations equilibria if these equilibria were accompanied by some form of "stability theory" which illuminated the forces which move an economy toward equilibrium. The present paper also offers a convenient context for discussing this issue. The conclusions of this paper with respect to the Martingale property precisely replicate those reached earlier by LeRoy (in [10] and [11]), and not surprisingly, since the economic reasoning in [10] and the present paper is the same.
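On a finite Markov chain of productivity states, the functional equation the abstract describes becomes a linear system in the marginal-utility-scaled prices. Below is a minimal sketch under assumed CRRA utility and assumed two-state transition values, not the paper's general setup.

```python
import numpy as np

# Assumed toy inputs: two dividend states and a Markov transition matrix.
beta, gamma = 0.95, 2.0                     # discount factor, CRRA coefficient
d = np.array([1.0, 1.5])                    # dividend (= consumption) in each state
P = np.array([[0.8, 0.2], [0.3, 0.7]])      # state transition probabilities

uprime = d ** (-gamma)                      # marginal utility u'(d)
# Equilibrium condition: p_i u'(d_i) = beta * sum_j P_ij u'(d_j) (p_j + d_j).
# Writing m = p * u'(d), this is linear: (I - beta P) m = beta P (u'(d) * d).
m = np.linalg.solve(np.eye(2) - beta * P, beta * P @ (uprime * d))
p = m / uprime                              # equilibrium asset price in each state
print(p)
```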

4,860 citations


Journal Article•DOI•

4,733 citations


Report•DOI•
TL;DR: In this article, the authors considered the formulation and estimation of simultaneous equation models with both discrete and continuous endogenous variables and proposed a statistical model that is sufficiently rich to encompass the classical simultaneous equation model for continuous endogenous variables and more recent models for purely discrete endogenous variables as special cases of a more general model.
Abstract: This paper considers the formulation and estimation of simultaneous equation models with both discrete and continuous endogenous variables. The statistical model proposed here is sufficiently rich to encompass the classical simultaneous equation model for continuous endogenous variables and more recent models for purely discrete endogenous variables as special cases of a more general model.

1,956 citations


Journal Article•DOI•
TL;DR: In this paper, the authors consider dynamic choice behavior under conditions of uncertainty, with emphasis on the timing of the resolution of uncertainty and provide an axiomatic treatment of the dynamic choice problem which still permits tractable analysis.
Abstract: We consider dynamic choice behavior under conditions of uncertainty, with emphasis on the timing of the resolution of uncertainty. Choice behavior in which an individual distinguishes between lotteries based on the times at which their uncertainty resolves is axiomatized and represented; thus the result is choice behavior which cannot be represented by a single cardinal utility function on the vector of payoffs. Both descriptive and normative treatments of the problem are given and are shown to be equivalent. Various specializations are provided, including an extension of "separable" utility and representation by a single cardinal utility function. Consider the following idealization of a dynamic choice problem with uncertainty. At each time in a finite, discrete sequence of times t = 0, 1, ..., T, an individual must choose an action d_t. His choice is constrained by what we temporarily call the state at time t, x_t. Then some random event takes place, determining an immediate payoff z_t to the individual and the next state x_{t+1}. The probability distribution of the pair (z_t, x_{t+1}) is determined by the action d_t. The standard approach in analyzing this problem, which we will call the payoff vector approach, assumes that the individual's choice behavior is representable as follows: He has a von Neumann-Morgenstern utility function U defined on the vector of payoffs (z_0, z_1, ..., z_T). Each strategy (which is a contingent plan for choosing actions given states) induces a probability distribution on the vector of payoffs. So the individual's choice of action is that specified by any optimal strategy, any strategy which maximizes the expectation of utility among all strategies (assuming sufficient conditions so that an optimal strategy exists). This paper presents an axiomatic treatment of the dynamic choice problem which is more general than the payoff vector approach, but which still permits tractable analysis. The fundamental difference between our treatment and the payoff vector approach lies in our treatment of the temporal resolution of uncertainty: In our models, uncertainty is "dated" by the time of its resolution, and the individual regards uncertainties resolving at different times as being different. For example, consider a situation in which a fair coin is to be flipped. If it comes up heads, the payoff vector will be (z_0, z_1) = (5, 10); if it is tails, the vector will be (5, 0). Because z_0 = 5 in either case, the coin flip can take place at either time 0 or time 1. It will not matter when the flip occurs to someone who has cardinal utility on the vector of payoffs. But an individual can obey our axioms and prefer either one to the other. One justification for our approach is the well known "timeless-temporal" or "joint time-risk" feature of some models (usually models which are not "complete"). For example, preferences on income streams which are induced from primitive preferences on consumption streams in general depend upon when the
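The coin-flip example in the abstract can be made numerical with a recursive evaluation that applies a curvature function to continuation values; with any nonlinear curvature, the early- and late-resolution versions of the same lottery receive different values, which no single cardinal utility on the payoff vector can deliver. The aggregator below is an assumed functional form chosen for illustration, not the paper's axiomatic representation.

```python
import numpy as np

beta = 1.0
p = np.array([0.5, 0.5])               # fair coin
z0, z1 = 5.0, np.array([10.0, 0.0])    # the payoff vectors (5, 10) and (5, 0)

phi = np.sqrt                          # assumed curvature on continuation values
phi_inv = lambda v: v ** 2

# Early resolution: the flip occurs at time 0, so each branch is riskless after it.
early = phi_inv(p @ phi(z0 + beta * z1))
# Late resolution: the flip occurs at time 1, after z0 is received.
late = z0 + beta * phi_inv(p @ phi(z1))

print(early, late)   # the two values differ, so the timing of resolution matters
```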

1,753 citations


Journal Article•DOI•
TL;DR: In this article, the Lagrange multiplier approach is adopted and it is shown that the test against the nth order autoregressive error model is exactly the same as the test against the nth order moving average alternative.
Abstract: Since dynamic regression equations are often obtained from rational distributed lag models and include several lagged values of the dependent variable as regressors, high order serial correlation in the disturbances is frequently a more plausible alternative to the assumption of serial independence than the usual first order autoregressive error model. The purpose of this paper is to examine the problem of testing against general autoregressive and moving average error processes. The Lagrange multiplier approach is adopted and it is shown that the test against the nth order autoregressive error model is exactly the same as the test against the nth order moving average alternative. Some comments are made on the treatment of serial correlation.
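A minimal sketch of the Lagrange multiplier statistic in its familiar auxiliary-regression form: n times the R-squared from regressing the OLS residuals on the original regressors and their lagged residuals. As the abstract notes, the same statistic serves against both the AR(n) and MA(n) alternatives. The function name, and zero-padding of the initial lags, are implementation choices; X is assumed to contain a constant so the residuals have zero mean.

```python
import numpy as np
from scipy import stats

def lm_serial_correlation(y, X, order):
    """LM test against AR(order) or MA(order) errors: n * R^2 from the
    auxiliary regression of OLS residuals on X and lagged residuals."""
    n = len(y)
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    lags = np.column_stack(
        [np.concatenate([np.zeros(k), e[:-k]]) for k in range(1, order + 1)]
    )
    Z = np.column_stack([X, lags])
    u = e - Z @ np.linalg.lstsq(Z, e, rcond=None)[0]
    r2 = 1.0 - (u @ u) / (e @ e)
    stat = n * r2
    return stat, stats.chi2.sf(stat, df=order)  # asymptotic null distribution
```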

1,304 citations


Journal Article•DOI•
TL;DR: An earlier version of this paper was presented at the Third World Congress of the Econometric Society in August 1975.
Abstract: An earlier version of this paper was presented at the Third World Congress of the Econometric Society, August, 1975

956 citations


Journal Article•DOI•

838 citations




Journal Article•DOI•
TL;DR: In this paper, Monte Carlo (MC) is used to estimate posterior moments of both structural and reduced form parameters of an equation system, making use of the prior density, the likelihood, and Bayes' Theorem.
Abstract: Monte Carlo (MC) is used to draw parameter values from a distribution defined on the structural parameter space of an equation system. Making use of the prior density, the likelihood, and Bayes' Theorem, it is possible to estimate posterior moments of both structural and reduced form parameters. The MC method allows a rather liberal choice of prior distributions. The number of elementary operations to be performed need not be an explosive function of the number of parameters involved. The method overcomes some existing difficulties of applying Bayesian methods to medium size models. The method is applied to a small scale macro model. The prior information used stems from considerations regarding short and long run behavior of the model and from extraneous observations on empirical long term ratios of economic variables. Likelihood contours for several parameter combinations are plotted, and some marginal posterior densities are assessed by MC.
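A minimal sketch of the Monte Carlo idea described here: draw parameter vectors from a chosen density on the parameter space, weight each draw by prior times likelihood over that density, and form weighted moments. All function arguments below are placeholders supplied by the user, assumed purely for illustration.

```python
import numpy as np

def mc_posterior_moments(draw, density, prior, likelihood, n, seed=0):
    """Estimate posterior moments by weighted Monte Carlo sampling.
    draw(n, rng) returns an (n, k) array from the chosen density;
    density, prior, likelihood each map a parameter vector to a scalar."""
    rng = np.random.default_rng(seed)
    theta = draw(n, rng)
    w = np.array([prior(t) * likelihood(t) / density(t) for t in theta])
    w /= w.sum()                          # self-normalized weights
    mean = w @ theta
    var = w @ (theta - mean) ** 2         # elementwise posterior variances
    return mean, var
```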

Journal Article•DOI•
TL;DR: In this article, an econometric methodology is proposed to deal with life cycle earnings and mobility among discrete earnings classes, using panel data on male log earnings to estimate an earnings function with permanent and serially correlated transitory components.
Abstract: This paper proposes an econometric methodology to deal with life cycle earnings and mobility among discrete earnings classes. First, we use panel data on male log earnings to estimate an earnings function with permanent and serially correlated transitory components due to both measured and unmeasured variables. Assuming that the error components are normally distributed, we develop statements for the probability that an individual's earnings will fall into a particular but arbitrary time sequence of poverty states. Using these statements, we illustrate the implications of our earnings model for poverty dynamics and compare our approach to Markov chain models of income mobility.
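The probability statements the abstract describes can be approximated by simulating the error-components model (a permanent normal component plus an AR(1) transitory component) and counting the frequency of a given sequence of poverty states. Every parameter value below is assumed purely for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_sim = 5, 200_000
rho, sig_mu, sig_e = 0.4, 0.30, 0.25   # assumed AR(1) coefficient and component scales
xb, poverty_line = 9.8, 9.5            # assumed predicted log earnings and log cutoff

mu = rng.normal(0.0, sig_mu, n_sim)                            # permanent component
v = np.empty((n_sim, T))
v[:, 0] = rng.normal(0.0, sig_e / np.sqrt(1 - rho**2), n_sim)  # stationary start
for t in range(1, T):
    v[:, t] = rho * v[:, t - 1] + rng.normal(0.0, sig_e, n_sim)

poor = (xb + mu[:, None] + v) < poverty_line   # poverty state in each year
print(poor.all(axis=1).mean())                 # e.g. P(poor in all T years)
```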



Journal Article•DOI•
TL;DR: In this paper, an alternative maximum likelihood procedure which incorporates the first observation and the stationarity condition of the error process is proposed, which is similar to the Cochrane-Orcutt procedure and appears to be at least as computationally efficient.
Abstract: The widely used Cochrane-Orcutt and Hildreth-Lu procedures for estimating the parameters of a linear regression model with first-order autocorrelation typically ignore the first observation. An alternative maximum likelihood procedure which incorporates the first observation and the stationarity condition of the error process is proposed in this paper. It is similar to the Cochrane-Orcutt procedure, and appears to be at least as computationally efficient. This estimator is superior to the conventional ones on theoretical grounds, and sampling experiments suggest that it may yield substantially better estimates in some circumstances.
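A grid-search sketch of the likelihood the abstract describes, rather than the authors' iterative procedure: the first observation enters through the Prais-Winsten scaling sqrt(1 - rho^2), and the stationarity of the error process contributes the (1/2) log(1 - rho^2) Jacobian term to the concentrated log-likelihood.

```python
import numpy as np

def exact_ml_ar1(y, X, grid=np.linspace(-0.99, 0.99, 199)):
    """ML for linear regression with AR(1) errors, retaining observation 1."""
    n = len(y)
    best = (-np.inf, None, None)
    for rho in grid:
        w = np.sqrt(1.0 - rho**2)
        y_s = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
        X_s = np.vstack([w * X[:1], X[1:] - rho * X[:-1]])
        b = np.linalg.lstsq(X_s, y_s, rcond=None)[0]
        ssr = np.sum((y_s - X_s @ b) ** 2)
        loglik = 0.5 * np.log(1.0 - rho**2) - 0.5 * n * np.log(ssr / n)
        if loglik > best[0]:
            best = (loglik, rho, b)
    return best   # (log-likelihood, rho_hat, beta_hat)
```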

Journal Article•DOI•
TL;DR: In this article, the authors consider a one-sector dynamic model of an economy with a convex-concave production function and apply the maximum principle in Arrow's form, which is extremely useful for the analysis of the economic processes.
Abstract: NUMEROUS STUDIES OF OPTIMAL MODELS in economic growth theory conducted with the aid of Pontryagin's maximum principle [3] led to important qualitive conclusions about the optimal development of economic systems over a finite or even infinite horizon (the latter is the more natural statement of the problem). At the same time almost all authors have been limited by consideration of production functions of only a narrow class, as a rule the class of concave functions. Concave production functions are known to be a good approximation of economic reality when the economy is in a high state of economic development (for instance, when the ratio of capital K to labor force L is great). However, accurate analysis of growth in certain less developed countries leads one to the conclusion that economic description by a concave function is not always applicable and that it is necessary to expand the class of production functions under consideration for a more adequate description of an economic system. Of special interest in this respect are functions which possess increasing returns to scale at an early stage of economic development and diminishing returns at a later stage. In turn, introduction of such functions generates a number of difficulties of a mathematical character (for example, Mangasarian's theorem on the sufficiency of the Pontryagin's conditions is not valid in this case). At present we do not know works where this problem has been studied definitively even in the one-dimensional case. Meanwhile, in our opinion, it is of considerable interest. In the present paper we consider a one-sector dynamic model of an economy with a convex-concave production function. The study is based on application of a maximum principle in Arrow's form [1] which is extremely useful for the analysis of the economic processes, since it allows taking phase constraints into consideration. Arrow's proposition has not been strictly proved; however, to my knowledge, there does not exist any contradictory examples.

Journal Article•DOI•
TL;DR: In this paper, decision rules for discriminating among alternative regression models are proposed and mutually compared, based on the Akaike Information Criterion as well as the Kullback-Leibler Information Criterion (KLIC).
Abstract: Some decision rules for discriminating among alternative regression models are proposed and mutually compared. They are essentially based on the Akaike Information Criterion as well as the Kullback-Leibler Information Criterion (KLIC): namely, the distance between a postulated model and the true unknown structure is measured by the KLIC. The proposed criteria combine the parsimony of parameters with the goodness of fit. Their relationships with conventional criteria are discussed in terms of a new concept of unbiasedness.
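A minimal sketch of how such a criterion is applied to competing regressions: for a Gaussian linear model the AIC reduces, up to an additive constant shared by all candidates, to n log(SSR/n) plus a 2k penalty, and one selects the candidate with the smallest value. The function name is illustrative.

```python
import numpy as np

def aic_gaussian_ols(y, X):
    """AIC for a Gaussian linear regression, up to an additive constant
    common to all candidate models: n * log(SSR / n) + 2k."""
    n, k = X.shape
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + 2 * k

# Usage: evaluate each candidate design matrix and keep the minimizer,
# trading goodness of fit against parsimony of parameters.
```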

Journal Article•DOI•
TL;DR: SHAZAM as mentioned in this paper is a large-scale program that can be run in batch mode or interactively at a computer terminal and includes a wide variety of output statistics, including autocorrelation, regression on principal components, ridge regression, regressions by matrix decompositions, random number generation for Monte Carlo samples, forecasting, and plotting.
Abstract: to prepare and allow a large number of options. The program can be run in batch mode or interactively at a computer terminal. Computer core storage is dynamically allocated so that large problems are only limited by the size of the machine. SHAZAM is designed to grow so that new algorithms and procedures can easily be added by any programmer familiar with the internal structure of the program. Features of SHAZAM include ordinary least squares, two-stage least squares, seemingly unrelated regressions and iterative estimation of seemingly unrelated regressions, three-stage least squares and iterative three-stage least squares, models with first and second order autocorrelated disturbances, estimation of Box-Cox [1] type nonlinear functional forms, principal components and factor analysis, regression on principal components, ridge regression, regressions by matrix decompositions, random number generation for Monte Carlo samples, forecasting, and plotting. Any set of linear restrictions or hypothesis tests can be used in the estimation. A wide variety of output statistics are available with each procedure. The autocorrelation section of SHAZAM is rather extensive and includes maximum likelihood or least squares estimation by a grid search or iterative Cochrane-Orcutt [2] procedure and inclusion or deletion of initial observations, exact and higher-order Durbin-Watson [4] type tests, tests based on Golub's [6] uncorrelated residuals, Dhrymes [3, p. 199] corrections for lagged dependent variables, Savin-White [7] corrections for missing observations in a time series, Savin-White [8] type simultaneous testing for functional form and autocorrelation, and forecasting using Goldberger's [5] best linear unbiased predictor. A SHAZAM user's manual [9], which is also machine readable, is available from the author on request.

Journal Article•DOI•
TL;DR: In this paper, the Cox test applied by Pesaran to non-nested linear models is extended to cover multivariate nonlinear models whenever full-information maximum likelihood estimation is possible, and it is shown that formal tests can give quite different results from conventional informal selection procedures.
Abstract: In Pesaran [9], the test developed by Cox for comparing separate families of hypotheses was applied to the choice between two non-nested linear single-equation econometric models. In this paper, the analysis is extended to cover multivariate nonlinear models whenever full information maximum likelihood estimation is possible. This allows formal comparisons not only of competing explanatory variables but also of alternative functional forms. The largest part of the paper derives the results and shows that they are recognizable as generalizations of the single-equation case. It is also shown that the calculation of the test statistic involves very little computation beyond that necessary to estimate the models in the first place. The paper concludes with a practical application of the test to the analysis of the U.S. consumption function and it is demonstrated that formal tests can give quite different results to conventional informal selection procedures. Indeed, in the case examined, five alternative hypotheses, some of which appear to perform quite satisfactorily, can all be rejected using the test. THE NEED FOR STATISTICAL PROCEDURES for testing separate families of hypotheses has become more acute with the increased use of econometric techniques in practice. The usual F tests can only be applied to test nested hypotheses, i.e. those which are members of the same family. However, in practice, one is frequently faced with the problem of testing non-nested hypotheses. In an earlier article, Pesaran [9] applied the test developed by Cox [3, 4], for separate families of hypotheses to single-equation linear regression models both with autocorrelated and nonautocorrelated disturbances. In that paper, the question was confined to the selection of appropriate explanators for a given dependent variable. However, in much applied work, the investigator is required not merely to select variables but simultaneously to find an appropriate functional form. This problem can be especially acute since in many areas of research, economic theory can guide us in the choice of variables, but helps very little in the choice of functional form. As computing capacity has increased, and nonlinear estimation has become routine, the use of linearity has become more a matter of choice than of necessity; the criteria for such a choice are thus of considerable practical importance. In this paper, we extend the earlier analysis to cover these problems by deriving the comparable statistics without assuming linearity of the models. This allows formal comparisons of different explanatory variables, of different functional forms, and of the interactions between the two. We also extend the results to cover competing systems of nonlinear equations whenever full-information maximum-likelihood estimation is possible. This allows the test to be applied to non-nested simultaneous equation models as well
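The Cox statistic derived in the paper is involved; as a simpler illustration of what testing non-nested regressions looks like in practice, here is the later J test of Davidson and MacKinnon (1981), which embeds the rival model's fitted values in the maintained model and t-tests their coefficient. This is a related procedure shown only for concreteness, not the statistic this paper derives.

```python
import numpy as np
from scipy import stats

def j_test(y, X1, X2):
    """Non-nested test: add the fitted values of the rival model (X2)
    as an extra regressor in model 1 and t-test its coefficient."""
    yhat2 = X2 @ np.linalg.lstsq(X2, y, rcond=None)[0]
    Z = np.column_stack([X1, yhat2])
    b = np.linalg.lstsq(Z, y, rcond=None)[0]
    e = y - Z @ b
    n, k = Z.shape
    s2 = (e @ e) / (n - k)
    V = s2 * np.linalg.inv(Z.T @ Z)
    t = b[-1] / np.sqrt(V[-1, -1])         # significant t rejects model 1
    return t, 2 * stats.t.sf(abs(t), n - k)
```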



Book Chapter•DOI•
Robert Wilson•
TL;DR: In this article, the meaning of exchange efficiency is examined in the context of an economy in which agents differ in their endowments of information, and definitions of efficiency, and of the core, are proposed which emphasize the role of communication.
Abstract: The meaning of exchange efficiency is examined in the context of an economy in which agents differ in their endowments of information. Definitions of efficiency, and of the core, are proposed which emphasize the role of communication. Opportunities for insurance are preserved by restricting communication, or in a market system by restricting insider trading, prior to the pooling of information for the purposes of production.



Journal Article•DOI•
TL;DR: The authors specify as axioms a small number of properties which a "good" index of inequality should have and examine whether the Lorenz criterion and the various cardinal measures now in use satisfy them; for ranking the relative inequality of two distributions, the cardinality of the usual measures is not only a source of controversy but also redundant.
Abstract: There are two points of contention. One is the issue of cardinality vs. ordinality. Practitioners of the cardinal approach compare distributions by means of summary measures such as a Gini coefficient, variance of logarithms, and the like. For purposes of ranking the relative inequality of two distributions, the cardinality of the usual measures is not only a source of controversy, but it is also redundant. Accordingly, some researchers prefer an ordinal approach, adopting Lorenz domination as their criterion. The difficulty with the Lorenz criterion is its incompleteness, affording rankings of only some pairs of distributions but not others. Current practice in choosing between a cardinal or an ordinal approach is now roughly as follows: Check for Lorenz domination in the hope of making an unambiguous comparison; if Lorenz domination fails, calculate one or more cardinal measures. This raises the second contentious issue: which of the many cardinal measures in existence should one adopt? The properties of existing measures have been discussed extensively in several recent papers. Typically, these studies have started with the measures and then examined their properties. In this paper, we reverse the direction of inquiry. Our approach is to start by specifying as axioms a relatively small number of properties which we believe a "good" index of inequality should have and then examining whether the Lorenz criterion and the various cardinal measures now in use satisfy those properties. The key issue is the reasonableness of the postulated properties. Work to date has shown the barrenness of the Pareto criterion. Only recently have researchers begun to develop an alternative axiomatic structure. The purpose of this paper is to contribute to such a development.
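The two-step practice described here is easy to state computationally: compute Lorenz ordinates, check dominance, and fall back on a summary measure such as the Gini coefficient when the curves cross. A minimal sketch, assuming equal-sized populations so the ordinates compare elementwise.

```python
import numpy as np

def lorenz_ordinates(x):
    """Cumulative income shares at population shares 0, 1/n, ..., 1."""
    s = np.sort(np.asarray(x, dtype=float))
    return np.concatenate([[0.0], np.cumsum(s) / s.sum()])

def lorenz_dominates(x, y):
    """True if x Lorenz-dominates y (equal-sized populations assumed)."""
    Lx, Ly = lorenz_ordinates(x), lorenz_ordinates(y)
    return bool(np.all(Lx >= Ly) and np.any(Lx > Ly))

def gini(x):
    L = lorenz_ordinates(x)
    # twice the area between the diagonal and the Lorenz curve
    return 1.0 - 2.0 * np.trapz(L, dx=1.0 / (len(L) - 1))
```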


Journal Article•DOI•
TL;DR: In this paper, it is shown that the information requirement cannot be substantially reduced for any convergent price mechanism, that is for price mechanisms expressed in terms of a difference or differential equation where the solutions converge to a competitive equilibrium.
Abstract: It is known that the price mechanism whereby the rate of change of a price is proportional to the excess demand of the corresponding commodity need not converge to a competitive equilibrium for a pure exchange economy with more than two commodities. On the other hand, there exist convergent price mechanisms, similar to the Newton iterative process, where the rate of change of the prices is determined by the excess demand and the marginal excess demands of all the commodities. This is a considerable informational requirement. It is shown that this requirement cannot be substantially reduced for any convergent price mechanism, that is, for price mechanisms expressed in terms of a difference or differential equation whose solutions converge to a competitive equilibrium.
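The informational contrast in the abstract, as a minimal sketch with placeholder excess-demand and Jacobian functions supplied by the user: the classical rule updates each price from excess demand alone, while the convergent Newton-like rule also needs all the marginal excess demands.

```python
import numpy as np

def tatonnement_step(p, z, dt=0.01):
    """Classical rule: price change proportional to excess demand z(p).
    Uses only z(p); may fail to converge with three or more commodities."""
    return p + dt * z(p)

def newton_step(p, z, jacobian):
    """Newton-like rule: needs every excess demand and every marginal
    excess demand (the full Jacobian) -- the informational requirement
    the paper shows cannot be substantially reduced."""
    return p - np.linalg.solve(jacobian(p), z(p))
```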

