
Showing papers in "Econometrica in 1987"


Journal ArticleDOI
TL;DR: The relationship between co-integration and error correction models, first suggested in Granger (1981), is here extended and used to develop estimation procedures, tests, and empirical examples.
Abstract: The relationship between co-integration and error correction models, first suggested in Granger (1981), is here extended and used to develop estimation procedures, tests, and empirical examples. If each element of a vector of time series x first achieves stationarity after differencing, but a linear combination a'x is already stationary, the time series x are said to be co-integrated with co-integrating vector a. There may be several such co-integrating vectors so that a becomes a matrix. Interpreting a'x = 0 as a long run equilibrium, co-integration implies that deviations from equilibrium are stationary, with finite variance, even though the series themselves are nonstationary and have infinite variance. The paper presents a representation theorem based on Granger (1983), which connects the moving average, autoregressive, and error correction representations for co-integrated systems. A vector autoregression in differenced variables is incompatible with these representations. Estimation of these models is discussed and a simple but asymptotically efficient two-step estimator is proposed. Testing for co-integration combines the problems of unit root tests and tests with parameters unidentified under the null. Seven statistics are formulated and analyzed. The critical values of these statistics are calculated based on a Monte Carlo simulation. Using these critical values, the power properties of the tests are examined and one test procedure is recommended for application. In a series of examples it is found that consumption and income are co-integrated, wages and prices are not, short and long interest rates are, and nominal GNP is co-integrated with M2, but not M1, M3, or aggregate liquid assets.

27,170 citations
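The co-integration idea in the abstract above can be illustrated with a short simulation: two hypothetical series sharing a common random-walk trend are each nonstationary, yet a suitable linear combination of them is stationary. This is only an illustrative sketch, not the paper's empirical setup; the series, coefficients, and variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5000
# Common stochastic trend: a random walk, stationary only after differencing.
z = np.cumsum(rng.normal(size=T))
# Two hypothetical series that share the trend; each is nonstationary alone.
x = z + rng.normal(size=T)
y = 2.0 * z + rng.normal(size=T)
# The combination y - 2x removes the common trend, so it is stationary:
# (1, -2) plays the role of a co-integrating vector a for (y, x).
equilibrium_error = y - 2.0 * x
# Deviations from the "long run equilibrium" have small, finite variance,
# while the levels wander with variance that grows with the sample.
print(np.var(x), np.var(equilibrium_error))
```

The level series has a sample variance that grows with the sample length, while the equilibrium error's variance stays bounded, which is exactly the contrast the abstract describes.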


ReportDOI
TL;DR: In this article, a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction is described.
Abstract: This paper describes a simple method of calculating a heteroskedasticity and autocorrelation consistent covariance matrix that is positive semi-definite by construction. It also establishes consistency of the estimated covariance matrix under fairly general conditions.

18,117 citations
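The construction described above can be sketched in a few lines: weight the sample autocovariances of the regression scores with Bartlett-kernel weights that decline linearly to zero, which makes the weighted sum positive semi-definite by construction. A minimal Python sketch under stated assumptions; the function name and lag choice are illustrative, not from the paper.

```python
import numpy as np

def newey_west(u, X, lags):
    """Heteroskedasticity and autocorrelation consistent (HAC) covariance
    for OLS coefficients, with Bartlett weights so that the weighted sum
    of autocovariances is positive semi-definite by construction."""
    T, k = X.shape
    scores = X * u[:, None]                 # moment contributions x_t * u_t
    S = scores.T @ scores / T               # lag-0 autocovariance
    for j in range(1, lags + 1):
        w = 1.0 - j / (lags + 1)            # Bartlett kernel weight
        G = scores[j:].T @ scores[:-j] / T  # lag-j autocovariance
        S += w * (G + G.T)
    A = np.linalg.inv(X.T @ X / T)
    return A @ S @ A / T                    # sandwich estimate of Var(beta_hat)
```

Because each Bartlett weight is nonnegative and declines linearly, the middle matrix S is positive semi-definite, so the sandwich estimate is as well.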


Journal ArticleDOI
TL;DR: In this paper, it is shown that simple least squares regression consistently estimates a unit root under very general conditions in spite of the presence of autocorrelated errors. The results, however, are restricted to the unit root case.
Abstract: This paper studies the random walk, in a general time series setting that allows for weakly dependent and heterogeneously distributed innovations. It is shown that simple least squares regression consistently estimates a unit root under very general conditions in spite of the presence of autocorrelated errors. The limiting distribution of the standardized estimator and the associated regression t statistic are found using functional central limit theory. New tests of the random walk hypothesis are developed which permit a wide class of dependent and heterogeneous innovation sequences. A new limiting distribution theory is constructed based on the concept of continuous data recording. This theory, together with an asymptotic expansion that is developed in the paper for the unit root case, explain many of the interesting experimental results recently reported in Evans and Savin (1981, 1984).

2,951 citations
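The consistency claim above is easy to see numerically: regress a random walk on its own lag when the increments are serially correlated, and the least squares slope still lands on the unit root. This is only a sketch; the MA(1) error process below is an invented example, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 20000
e = rng.normal(size=T + 1)
u = e[1:] + 0.5 * e[:-1]   # MA(1) innovations: serially correlated errors
y = np.cumsum(u)           # random walk driven by the dependent noise
# Least squares regression of y_t on y_{t-1}:
rho_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
print(rho_hat)             # close to the true unit root of 1
```

Despite the autocorrelated errors, the estimate converges to one, and it does so at rate T rather than the usual square root of T, which is why nonstandard limiting distributions are needed for inference.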


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of providing incentives over time for an agent with constant absolute risk aversion, and find that the optimal compensation scheme is a linear function of a vector of accounts which count the number of times that each of the N kinds of observable events occurs.
Abstract: We consider the problem of providing incentives over time for an agent with constant absolute risk aversion. The optimal compensation scheme is found to be a linear function of a vector of N accounts which count the number of times that each of the N kinds of observable events occurs. The number N is independent of the number of time periods, so the accounts may entail substantial aggregation. In a continuous time version of the problem, the agent controls the drift rate of a vector of accounts that is subject to frequent, small random fluctuations. The solution is as if the problem were the static one in which the agent controls only the mean of a multivariate normal distribution and the principal is constrained to use a linear compensation rule. If the principal can observe only coarser linear aggregates, such as revenues, costs, or profits, the optimal compensation scheme is then a linear function of those aggregates. The combination of exponential utility, normal distributions, and linear compensation schemes makes computations and comparative statics easy to do, as we illustrate. We interpret our linearity results as deriving in part from the richness of the agent's strategy space, which makes it possible for the agent to undermine and exploit complicated, nonlinear functions of the accounting aggregates.

2,843 citations


Journal ArticleDOI
TL;DR: In this paper, an extension of the ARCH model was proposed to allow the conditional variance to be a determinant of the mean and is called ARCH-M. The model explains and interprets the recent econometric failures of the expectations hypothesis of the term structure.
Abstract: The expectation of the excess holding yield on a long bond is postulated to depend upon its conditional variance. Engle's ARCH model is extended to allow the conditional variance to be a determinant of the mean and is called ARCH-M. Estimation and inference procedures are proposed, and the model is applied to three interest rate data sets. In most cases the ARCH process and the time varying risk premium are highly significant. A collection of LM diagnostic tests reveals the robustness of the model to various specification changes such as alternative volatility or ARCH measures, regime changes, and interest rate formulations. The model explains and interprets the recent econometric failures of the expectations hypothesis of the term structure. Copyright 1987 by The Econometric Society.

2,654 citations


Journal ArticleDOI
TL;DR: In this article, a new theory of choice under risk is proposed, a theory which, in a sense that will become clear, is dual to expected utility theory, hence the title "dual theory."
Abstract: IN THIS ESSAY, a new theory of choice under risk is being proposed. It is a theory which, in a sense that will become clear, is dual to expected utility theory, hence the title "dual theory." Risky prospects are evaluated in this theory by a cardinal numerical scale which resembles an expected utility, except that the roles of payments and probabilities are reversed. This theme-the reversal of the roles of probabilities and payments-will recur throughout the paper. I should emphasize that playing games, with probabilities masquerading as payments and payments masquerading as probabilities, is not my object. Rather, I hope to convince the reader that the dual theory has intrinsic economic significance and that, in some areas, its predictions are superior to those of expected utility theory (while in other areas the reverse will be the case). Two reasons have prompted me to look for an alternative to expected utility theory. The first reason is methodological: In expected utility theory, the agent's attitude towards risk and the agent's attitude towards wealth are forever bonded together. At the level of fundamental principles, risk aversion and diminishing marginal utility of wealth, which are synonymous under expected utility theory, are horses of different colors. The former expresses an attitude towards risk (increased uncertainty hurts) while the latter expresses an attitude towards wealth (the loss of a sheep hurts more when the agent is poor than when the agent is rich). A question arises, therefore, as to whether these two notions can be kept separate from each other in a full-fledged theory of cardinal utility. The dual theory will have this property. The second reason that leads me to look for an alternative to expected utility theory is empirical: Behavior patterns which are systematic, yet inconsistent with expected utility theory, have often been observed.
(Two prominent references, among many others, are Allais (1953) and Kahneman-Tversky (1979).) So deeply

2,382 citations


Journal ArticleDOI
TL;DR: In this paper, a simple, regenerative, optimal stopping model of bus-engine replacement is proposed to describe the behavior of Harold Zurcher, superintendent of maintenance at the Madison (Wisconsin) Metropolitan Bus Company.
Abstract: This paper formulates a simple, regenerative, optimal-stopping model of bus-engine replacement to describe the behavior of Harold Zurcher, superintendent of maintenance at the Madison (Wisconsin) Metropolitan Bus Company. Admittedly, few people are likely to take particular interest in Harold Zurcher and bus engine replacement per se. The author focuses on a specific individual and capital good because it provides a simple, concrete framework to illustrate two ideas: (1) a "bottom-up" approach for modeling replacement investment and (2) a "nested fixed point" algorithm for estimating dynamic programming models of discrete choice.

1,815 citations


Journal ArticleDOI
TL;DR: In this paper, the theory of co-integrated processes is used to show that these estimators have asymptotic properties different from those of least squares estimators in stationary time series.
Abstract: The theory of co-integrated processes is used to show that these estimators have asymptotic properties different from those of least squares estimators in stationary time series.

1,466 citations


Journal ArticleDOI
TL;DR: In this article, the authors make use of the common prior assumption that differences in probability assessments by different individuals are due to the different information that they have (where "information" may be interpreted broadly, to include experience, upbringing, and genetic makeup).
Abstract: If it is common knowledge that the players in a game are Bayesian utility maximizers who treat uncertainty about other players' actions like any other uncertainty, then the outcome is necessarily a correlated equilibrium. Random strategies appear as an expression of each player's uncertainty about what the others will do, not as the result of willful randomization. Use is made of the common prior assumption, according to which differences in probability assessments by different individuals are due to the different information that they have (where "information" may be interpreted broadly, to include experience, upbringing, and genetic makeup). Copyright 1987 by The Econometric Society.

1,283 citations


Journal ArticleDOI
TL;DR: In this paper, the authors re-examine three basic issues in measuring poverty: the choice of the poverty line, the index of poverty, and the relation between poverty and inequality.
Abstract: Official statistics in the United States and the United Kingdom show a rise in poverty between the 1970's and the 1980's but scepticism has been expressed with regard to these findings. In particular, the methods employed in the measurement of poverty have been the subject of criticism. This paper re-examines three basic issues in measuring poverty: the choice of the poverty line, the index of poverty, and the relation between poverty and inequality. One general theme running through the paper is that there is a diversity of judgments which enter the measurement of poverty and that it is necessary to recognize these explicitly in the procedures adopted. There is likely to be disagreement about the choice of poverty line, affecting both its level and its structure. In this situation, we may only be able to make comparisons and not to measure differences, and the comparisons may lead only to a partial rather than a complete ordering. The first section of the paper discusses the stochastic dominance conditions which allow such comparisons, illustrating their application by reference to data for the United States. The choice of poverty measure has been the subject of an extensive literature and a variety of measures have been proposed. In the second section of the paper a different approach is suggested, considering a class of measures satisfying certain general properties and seeking conditions under which all members of the class (which includes many of those proposed) give the same ranking. Those sceptical about measures of poverty often assert that poverty and inequality are being confounded. The third section of the paper distinguishes four different viewpoints and relates them to theories of justice and views of social welfare.

1,201 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic analysis of several theoretic and statistical assumptions used in many empirical models of female labor supply is performed. The two most important assumptions appear to be the Tobit assumption used to control for self-selection into the labor force and exogeneity assumptions on the wife's wage rate and her labor market experience.
Abstract: This study undertakes a systematic analysis of several theoretic and statistical assumptions used in many empirical models of female labor supply. Using a single data set (PSID 1975 labor supply data) the author is able to replicate most of the range of estimated income and substitution effects found in previous studies in this field. He undertakes extensive specification tests and finds that most of this range should be rejected due to statistical and model misspecifications. The two most important assumptions appear to be the Tobit assumption used to control for self-selection into the labor force and exogeneity assumptions on the wife's wage rate and her labor market experience. Copyright 1987 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate testable implications of equilibrium asset pricing models and derive a general representation for asset prices that displays the role of conditioning information, which is then used to examine restrictions implied by asset pricing model on the unconditional moments of asset payoffs and prices.
Abstract: The purpose of this paper is to investigate testable implications of equilibrium asset pricing models. We derive a general representation for asset prices that displays the role of conditioning information. This representation is then used to examine restrictions implied by asset pricing models on the unconditional moments of asset payoffs and prices. In particular, we analyze the effect of information omission on the mean-variance frontier of one-period returns on portfolios of securities. Also, we deduce an information extension of equilibrium pricing functions that is useful in deriving restrictions on the unconditional moments of payoffs and prices.

Journal ArticleDOI
TL;DR: In this paper, an approach is developed based on an innovation due to Phillips (1983) concerning approximations of density functions.
Abstract: An approach is developed based on an innovation due to Phillips (1983) concerning approximations of density functions.

Journal ArticleDOI
TL;DR: In this paper, a new solution concept called divine equilibrium is introduced, which refines the set of sequential equilibria by requiring that off-the-equilibrium-path beliefs satisfy an additional restriction.
Abstract: This paper studies the sequential equilibria of signaling games. It introduces a new solution concept, divine equilibrium, that refines the set of sequential equilibria by requiring that off-the-equilibrium-path beliefs satisfy an additional restriction. This restriction rules out implausible sequential equilibria in many examples. We show that divine equilibria exist by demonstrating that a sequential equilibrium that fails to be divine cannot be in a stable component. However, the stable component of signaling games is typically smaller than the set of divine equilibria. We demonstrate this fact through examples. We also present a characterization of the stable equilibria in generic signaling games.

Journal ArticleDOI
TL;DR: In this article, the authors consider estimation and hypothesis tests for coefficients of linear regression models, where the coefficient estimates are based on location measures defined by an asymmetric least squares criterion function.
Abstract: This paper considers estimation and hypothesis tests for coefficients of linear regression models, where the coefficient estimates are based on location measures defined by an asymmetric least squares criterion function. These asymmetric least squares estimators have properties which are analogous to regression quantile estimators, but are much simpler to calculate, as are the corresponding test statistics. The coefficient estimators can be used to construct test statistics for homoskedasticity and conditional symmetry of the error distribution, and we find these tests compare quite favorably with other commonly used tests of these null hypotheses in terms of local relative efficiency. Consequently, asymmetric least squares estimation provides a convenient and relatively efficient method of summarizing the conditional distribution of a dependent variable given the regressors, and a means of testing whether a linear model is an adequate characterization of the "typical value" for this conditional distribution.
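One way to compute such asymmetric least squares (expectile) estimates is iteratively reweighted least squares: square the residuals as usual, but weight each one by tau or 1 - tau according to its sign. This is a hedged sketch of one possible implementation, not necessarily the authors' algorithm; the function name and iteration count are invented.

```python
import numpy as np

def asymmetric_ls(X, y, tau, n_iter=200):
    """Expectile regression: minimize sum of |tau - 1[r < 0]| * r^2
    via iteratively reweighted least squares (a simple fixed-point scheme)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start at OLS (the tau = 0.5 case)
    for _ in range(n_iter):
        r = y - X @ beta
        w = np.where(r < 0, 1.0 - tau, tau)       # asymmetric squared-error weights
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted least squares step
    return beta
```

At tau = 0.5 the weights are constant, so the estimator reduces exactly to ordinary least squares; larger tau shifts the fitted "typical value" upward, mirroring how regression quantiles trace out the conditional distribution.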

ReportDOI
TL;DR: In this paper, the authors developed two methods for imposing curvature conditions globally in the context of cost function estimation, based on a generalization of a functional form first proposed by McFadden.
Abstract: Empirically estimated flexible functional forms frequently fail to satisfy the appropriate theoretical curvature conditions. Lau and Gallant and Golub have worked out methods for imposing the appropriate curvature conditions locally, but those local techniques frequently fail to yield satisfactory results. We develop two methods for imposing curvature conditions globally in the context of cost function estimation. The first method adapts Lau's technique to a generalization of a functional form first proposed by McFadden. Using this Generalized McFadden functional form, it turns out that imposing the appropriate curvature conditions at one data point imposes the conditions globally. The second method adopts a technique used by McFadden and Barnett, which is based on the fact that a non-negative sum of concave functions will be concave. Our various suggested techniques are illustrated using the U.S. Manufacturing data utilized by Berndt and Khaled.

ReportDOI
TL;DR: This paper showed that saving should be at least as good a predictor of declines in labor income as any other forecast that can be constructed from publicly available information, even when income is stationary in first differences rather than levels.
Abstract: The permanent income hypothesis implies that people save because they rationally expect their permanent income to decline; they save "for a rainy day." It follows that saving should be at least as good a predictor of declines in labor income as any other forecast that can be constructed from publicly available information. The paper tests this hitherto ignored implication of the permanent income hypothesis, using quarterly aggregate data for the period 1953-84 in the United States. By contrast with much of the recent literature, the results here are valid when income is stationary in first differences rather than levels. Copyright 1987 by The Econometric Society.

ReportDOI
TL;DR: In this paper, the authors analyze an aggregative general equilibrium model, in which the use of money is motivated by a cash-in-advance constraint, applied to purchases of a subset of consumption goods.
Abstract: The authors analyze an aggregative general equilibrium model, in which the use of money is motivated by a cash-in-advance constraint, applied to purchases of a subset of consumption goods. The system is subject to both real and monetary shocks, which are economy-wide and observed by all. They develop methods for verifying the existence of, characterizing, and explicitly calculating equilibria. Copyright 1987 by The Econometric Society.

Book ChapterDOI
TL;DR: In this paper, the authors report on a series of experiments examining three key implications of affiliated private value auctions: (i) in a first-price auction, public information about rivals' values increases expected revenue, (ii) an English auction increases expected revenue compared to a first-price auction, and (iii) a second-price auction is isomorphic to an English auction.
Abstract: In affiliated private value auctions, each bidder has perfect information regarding his/her own value for the object at auction, but higher values of the item for one bidder make higher values for other bidders more likely. We report on a series of experiments examining three key implications of these auctions: (i) in a first-price auction, public information about rivals' values increases expected revenue; (ii) an English auction increases expected revenue compared to a first-price auction; and (iii) a second-price auction is isomorphic to an English auction. In examining these issues, we compare predictions of some ad hoc bidding models with Nash equilibrium predictions. In the first-price auction experiments, Nash equilibrium bidding theory organizes the data better than either of two ad hoc bidding models. Public information about others' valuations does increase average revenue, but the increase in revenue is smaller and less reliable than predicted under risk neutral Nash equilibrium bidding. Lower average revenue might be attributed to risk aversion, while the high variability is attributed to a sizable frequency of individual bidding errors relative to the theory. Bidding theory precisely organizes English auction outcomes after a brief initial learning period. The dominant strategy equilibrium does not organize second-price auctions nearly as well, as market prices persistently exceed predicted prices. The difference between English and second-price outcomes is attributed to effects of different information flows, inherent in the structure of the two institutions, on eliminating bidding errors. Revenue impacts of these two institutions, relative to a first-price auction, are examined in light of observed bidding patterns.

Journal ArticleDOI
TL;DR: In this paper, a meaningful behavioral condition on utility functions for wealth is introduced, stating that an undesirable lottery can never be made desirable by the presence of an independent undesirable lottery.
Abstract: A meaningful behavioral condition on utility functions for wealth is introduced and studied; it states that an undesirable lottery can never be made desirable by the presence of an independent undesirable lottery.

Journal ArticleDOI
TL;DR: This paper showed that preference reversal can be consistent with transitive preferences if these preferences violate the independence axiom of expected utility theory and for the class of experiments that were used to produce the evidence concerning preference reversal, elicitation of certainty equivalents is possible if, and only if, the respondent's preferences can be represented by functionals that are linear in the probabilities.
Abstract: This paper shows that: (1) the "preference reversal" phenomenon can be consistent with transitive preferences if these preferences violate the independence axiom of expected utility theory and (2) for the class of experiments that were used to produce the evidence concerning "preference reversal," the elicitation of certainty equivalents is possible if, and only if, the respondent's preferences can be represented by functionals that are linear in the probabilities. Furthermore, a more general class of experiments is not immune to "preference reversal" if nonexpected utility preferences are admitted. Copyright 1987 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this article, a generalization of the multinomial logit (MNL) model is developed for cases where discrete alternatives are ordered, by allowing stochastic correlation among alternatives in close proximity.
Abstract: A generalization of the multinomial logit (MNL) model is developed for cases where discrete alternatives are ordered, by allowing stochastic correlation among alternatives in close proximity. The model belongs to the Generalized Extreme Value class and is therefore consistent with random utility maximization. An extension can handle cases where observations have been selected from a truncated choice set. A two-stage procedure using MNL computer software provides a specification test for MNL against the proposed model. Two empirical applications are briefly described. Copyright 1987 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, it is shown that correlated rationalizability is equivalent to a posteriori equilibrium, a refinement of subjective correlated equilibrium, and a decision-theoretic justification for the equilibrium approach to game theory is provided.
Abstract: We discuss the unity between the two standard approaches to noncooperative solution concepts for games. The decision-theoretic approach starts from the assumption that the rationality of the players is common knowledge. This leads to the notion of correlated rationalizability. It is shown that correlated rationalizability is equivalent to a posteriori equilibrium — a refinement of subjective correlated equilibrium. Hence a decision-theoretic justification for the equilibrium approach to game theory is provided. An analogous equivalence result is proved between independent rationalizability, which is the appropriate concept if each player believes that the others act independently, and conditionally independent a posteriori equilibrium. A characterization of Nash equilibrium is also provided.

Journal ArticleDOI
TL;DR: In this article, the authors characterize the set of incentive-compatible and individually rational trading mechanisms, and give a simple necessary and sufficient condition for such mechanisms to dissolve the partnership ex post efficiently.
Abstract: Several partners jointly own an asset that may be traded among them. Each partner has a valuation for the asset; the valuations are known privately and drawn independently from a common probability distribution. We characterize the set of all incentive-compatible and interim individually rational trading mechanisms, and give a simple necessary and sufficient condition for such mechanisms to dissolve the partnership ex post efficiently. A bidding game is constructed that achieves such dissolution whenever it is possible. Despite incomplete information about the valuation of the asset, a partnership can be dissolved ex post efficiently provided no single partner owns too large a share; this contrasts with Myerson and Satterthwaite's result that ex post efficiency cannot be achieved when the asset is owned by a single party.

Journal ArticleDOI
TL;DR: In this article, the residual variance is estimated by nearest neighbor nonparametric regression and the resulting weighted least squares estimator of the regression coefficients is shown to be adaptive, in the sense of having the same asymptotic distribution, to first order, as estimators based on knowledge of the actual variance function or a finite parameterization of it.
Abstract: In a multiple regression model the residual variance is an unknown function of the explanatory variables, and is estimated by nearest neighbor nonparametric regression. The resulting weighted least squares estimator of the regression coefficients is shown to be adaptive, in the sense of having the same asymptotic distribution, to first order, as estimators based on knowledge of the actual variance function or a finite parameterization of it. A similar result was established by Carroll (1982) using kernel estimation and under substantially more restrictive conditions on the data generating process than ours. Extensions to various other models seem to be possible.
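The two-step idea above can be sketched directly: fit OLS, smooth the squared residuals with a k-nearest-neighbor average to estimate the variance function, then run weighted least squares with the inverse of the estimated variances. This is only an illustrative sketch under invented assumptions (the function name, the choice of k, and smoothing over a single regressor are all ours, not the paper's).

```python
import numpy as np

def adaptive_wls(X, y, k=25):
    """Two-step estimator: smooth squared OLS residuals with a k-nearest-
    neighbor average to estimate the variance function, then reweight."""
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = (y - X @ beta_ols) ** 2                # squared OLS residuals
    x1 = X[:, 1]                                # smooth over a single regressor
    var_hat = np.empty_like(r2)
    for i in range(len(y)):
        neighbors = np.argsort(np.abs(x1 - x1[i]))[:k]
        var_hat[i] = r2[neighbors].mean()       # nearest neighbor variance estimate
    w = 1.0 / var_hat
    WX = X * w[:, None]
    return np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted least squares
```

The adaptivity result says that, to first order, this feasible estimator behaves as if the true variance function had been plugged into the weights.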

Journal ArticleDOI
TL;DR: In this paper, the authors study duopolistic competition in a homogeneous good through time under the assumption that its current desirability is an exponentially-weighted function of accumulated past consumption, which implies that the current price of the good does not decline by as much to accommodate any given level of current consumption.
Abstract: The authors study duopolistic competition in a homogeneous good through time under the assumption that its current desirability is an exponentially-weighted function of accumulated past consumption. This implies that the current price of the good does not decline by as much to accommodate any given level of current consumption. Our analysis is conducted in terms of a differential game. It is found that the equilibrium price corresponding to the open-loop Nash equilibrium strategies approaches the static Cournot equilibrium price while the equilibrium price corresponding to the closed-loop Nash equilibrium strategies, which are subgame perfect, approaches a price below it. Copyright 1987 by The Econometric Society.

Journal ArticleDOI
TL;DR: In this paper, a finite-horizon search model is econometrically implemented using all of the restrictions implied by the theory, following a sample of male high school graduates from graduation to employment.
Abstract: This paper presents a finite-horizon search model that is econometrically implemented using all of the restrictions implied by the theory. Following a sample of male high school graduates from the youth cohort of the National Longitudinal Surveys from graduation to employment, search parameters are estimated. Reservation wages and offer probabilities are estimated to be quite low. Simulations are performed of the impact of changing the parameters on the expected duration of unemployment. Copyright 1987 by The Econometric Society.


Journal ArticleDOI
TL;DR: In this paper, two results are given that are useful for proving the existence of, and for characterizing, separating equilibria in signaling games.
Abstract: Two results are given that are useful for proving the existence of, and for characterizing, separating equilibria in signaling games.

Journal ArticleDOI
TL;DR: In this paper, it was shown that inference remains possible if the disturbances for each panel member are known only to be time-stationary with unbounded support and if the explanatory variables vary enough over time.
Abstract: Andersen (1970) considered the problem of inference on random effects linear models from binary response panel data. He showed that inference is possible if the disturbances for each panel member are known to be white noise with the logistic distribution and if the observed explanatory variables vary over time. A conditional maximum likelihood estimator consistently estimates the model parameters up to scale. The present paper shows that inference remains possible if the disturbances for each panel member are known only to be time-stationary with unbounded support and if the explanatory variables vary enough over time. A conditional version of the maximum score estimator (Manski, 1975, 1985) consistently estimates the model parameters up to scale.