
Showing papers in "Econometrica in 1982"


Journal ArticleDOI
TL;DR: In this article, a new class of stochastic processes called autoregressive conditional heteroscedastic (ARCH) processes is introduced: mean zero, serially uncorrelated processes with nonconstant variances conditional on the past but constant unconditional variances.
Abstract: Traditional econometric models assume a constant one-period forecast variance. To generalize this implausible assumption, a new class of stochastic processes called autoregressive conditional heteroscedastic (ARCH) processes are introduced in this paper. These are mean zero, serially uncorrelated processes with nonconstant variances conditional on the past, but constant unconditional variances. For such processes, the recent past gives information about the one-period forecast variance. A regression model is then introduced with disturbances following an ARCH process. Maximum likelihood estimators are described and a simple scoring iteration formulated. Ordinary least squares maintains its optimality properties in this set-up, but maximum likelihood is more efficient. The relative efficiency is calculated and can be infinite. To test whether the disturbances follow an ARCH process, the Lagrange multiplier procedure is employed. The test is based simply on the autocorrelation of the squared OLS residuals. This model is used to estimate the means and variances of inflation in the U.K. The ARCH effect is found to be significant and the estimated variances increase substantially during the chaotic seventies.
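The defining property of an ARCH process, a conditional variance that moves with the recent past alongside a constant unconditional variance, can be illustrated with a short simulation. This is a generic ARCH(1) sketch with illustrative parameter values, not a reproduction of the paper's U.K. inflation model:

```python
import numpy as np

# ARCH(1): y_t = sigma_t * eps_t,  sigma_t^2 = omega + alpha * y_{t-1}^2,
# eps_t ~ N(0, 1).  omega and alpha are illustrative choices.
rng = np.random.default_rng(0)
omega, alpha = 0.2, 0.5
n = 200_000

y = np.zeros(n)
for t in range(1, n):
    sigma2 = omega + alpha * y[t - 1] ** 2   # conditional variance depends on the past
    y[t] = np.sqrt(sigma2) * rng.standard_normal()

# Constant unconditional variance: omega / (1 - alpha) = 0.4 here.
print(y.var())

# Serially uncorrelated in levels, but the squares are autocorrelated --
# this is the pattern the Lagrange multiplier test detects in squared OLS residuals.
corr_levels = np.corrcoef(y[1:], y[:-1])[0, 1]
corr_squares = np.corrcoef(y[1:] ** 2, y[:-1] ** 2)[0, 1]
print(corr_levels, corr_squares)
```

With these values the sample variance settles near 0.4, the levels show essentially no autocorrelation, and the squares show clear first-order autocorrelation.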

20,728 citations



Journal ArticleDOI
TL;DR: In this article, a general equilibrium model is developed and fitted to U.S. quarterly data for the post-war period; its crucial features are the assumption that more than one time period is required for the construction of new productive capital and a non-time-separable utility function that admits greater intertemporal substitution of leisure.
Abstract: The equilibrium growth model is modified and used to explain the cyclical variances of a set of economic time series, the covariances between real output and the other series, and the autocovariance of output. The model is fitted to quarterly data for the post-war U.S. economy. Crucial features of the model are the assumption that more than one time period is required for the construction of new productive capital, and the non-time-separable utility function that admits greater intertemporal substitution of leisure. The fit is surprisingly good in light of the model's simplicity and the small number of free parameters. THAT WINE IS NOT MADE in a day has long been recognized by economists (e.g., Böhm-Bawerk [6]). But, neither are ships nor factories built in a day. A thesis of this essay is that the assumption of multiple-period construction is crucial for explaining aggregate fluctuations. A general equilibrium model is developed and fitted to U.S. quarterly data for the post-war period. The co-movements of the fluctuations for the fitted model are quantitatively consistent with the corresponding co-movements for U.S. data. In addition, the serial correlations of cyclical output for the model match well with those observed. Our approach integrates growth and business cycle theory. Like standard growth theory, a representative infinitely-lived household is assumed. As fluctuations in employment are central to the business cycle, the stand-in consumer values not only consumption but also leisure. One very important modification to the standard growth model is that multiple periods are required to build new capital goods and only finished capital goods are part of the productive capital stock. Each stage of production requires a period and utilizes resources. Half-finished ships and factories are not part of the productive capital stock.
Section 2 contains a short critique of the commonly used investment technologies, and presents evidence that single-period production, even with adjustment costs, is inadequate. The preference-technology-information structure of the model is presented in Section 3. A crucial feature of preferences is the non-time-separable utility function that admits greater intertemporal substitution of leisure. The exogenous stochastic components in the model are shocks to technology and imperfect indicators of productivity. The two technology shocks differ in their persistence.

5,728 citations


Journal ArticleDOI
TL;DR: In this paper, perfect equilibrium in a bargaining model is examined using a strategic approach; the bargaining situation is described in detail and perfect equilibrium is discussed.
Abstract: Focuses on a study which examined perfect equilibrium in a bargaining model. Overview of the strategic approach adopted for the study; Details of the bargaining situation used; Discussion on perfect equilibrium.

5,139 citations


Journal ArticleDOI
TL;DR: In this article, the consequences and detection of model misspecification when using maximum likelihood techniques for estimation and inference are examined, and the properties of the quasi-maximum likelihood estimator and the information matrix are exploited to yield several useful tests.
Abstract: This paper examines the consequences and detection of model misspecification when using maximum likelihood techniques for estimation and inference. The quasi-maximum likelihood estimator (QMLE) converges to a well defined limit, and may or may not be consistent for particular parameters of interest. Standard tests (Wald, Lagrange Multiplier, or Likelihood Ratio) are invalid in the presence of misspecification, but more general statistics are given which allow inferences to be drawn robustly. The properties of the QMLE and the information matrix are exploited to yield several useful tests for model misspecification.
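The "sandwich" covariance that makes QMLE inference robust can be sketched numerically. Below, a Gaussian quasi-likelihood with constant variance (equivalently, OLS) is fit to data whose errors are actually heteroskedastic; the model and numbers are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])

# Heteroskedastic errors: the constant-variance Gaussian likelihood is misspecified.
e = rng.standard_normal(n) * np.sqrt(0.5 + 2.0 * x ** 2)
y = 1.0 + 2.0 * x + e

beta = np.linalg.lstsq(X, y, rcond=None)[0]   # QMLE under the Gaussian quasi-likelihood
u = y - X @ beta

XtX = X.T @ X
# Classical covariance (valid only under correct specification): sigma^2 (X'X)^{-1}.
classical = np.linalg.inv(XtX) * (u @ u / n)
# Sandwich A^{-1} B A^{-1}, with B built from the squared score contributions.
meat = (X * (u ** 2)[:, None]).T @ X
sandwich = np.linalg.inv(XtX) @ meat @ np.linalg.inv(XtX)

se_classical = np.sqrt(np.diag(classical))
se_robust = np.sqrt(np.diag(sandwich))
print(beta, se_classical, se_robust)
```

The slope estimate remains consistent, but the robust standard error is noticeably larger than the classical one, which is exactly the discrepancy between the two information-matrix estimates that White's tests exploit.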

4,867 citations


Journal ArticleDOI
TL;DR: In this article, the received auction theory is reviewed, a new general auction model is proposed, and the properties of affiliated random variables are developed; the main theorems are presented in Sections 4-8.
Abstract: In Section 2, we review some important results of the received auction theory, introduce a new general auction model, and summarize the results of our analysis. Section 3 contains a formal statement of our model, and develops the properties of affiliated random variables. The various theorems are presented in Sections 4-8. In Section 9, we offer our views on the current state of auction theory. Following Section 9 is a technical appendix dealing with affiliated random variables.

3,857 citations



Journal ArticleDOI
TL;DR: In this article, the authors developed a model of strategic communication in which a better-informed Sender (S) sends a possibly noisy signal to a Receiver (R), who then takes an action that determines the welfare of both.
Abstract: This paper develops a model of strategic communication, in which a better-informed Sender (S) sends a possibly noisy signal to a Receiver (R), who then takes an action that determines the welfare of both. We characterize the set of Bayesian Nash equilibria under standard assumptions, and show that equilibrium signaling always takes a strikingly simple form, in which S partitions the support of the (scalar) variable that represents his private information and introduces noise into his signal by reporting, in effect, only which element of the partition his observation actually lies in. We show under further assumptions that before S observes his private information, the equilibrium whose partition has the greatest number of elements is Pareto-superior to all other equilibria, and that if agents coordinate on this equilibrium, R's equilibrium expected utility rises when agents' preferences become more similar. Since R bases his choice of action on rational expectations, this establishes a sense in which equilibrium signaling is more informative when agents' preferences are more similar.

3,048 citations


Journal ArticleDOI
TL;DR: In this paper, the optimal rate of investment is derived as a function of marginal q adjusted for tax parameters, and this tax-adjusted marginal q is then calculated from data on average q under the actual U.S. tax system of corporate tax rates and depreciation allowances.
Abstract: It is increasingly recognized that Tobin's conjecture that investment is a function of marginal q is equivalent to the firm's optimal capital accumulation problem with adjustment costs. This paper formalizes this idea in a very general fashion and derives the optimal rate of investment as a function of marginal q adjusted for tax parameters. An exact relationship between marginal q and average q is also derived. Marginal q adjusted for tax parameters is then calculated from data on average q assuming the actual U.S. tax system concerning corporate tax rate and depreciation allowances. IN THE LAST DECADE and a half, the literature on investment has been dominated by two theories of investment-the neoclassical theory originated by Jorgenson and the \"q\" theory suggested by Tobin. The neoclassical theory of investment starts from a firm's optimization behavior. The objective of the firm is to maximize the present discounted value of net cash flows subject to the technological constraints summarized by the production function. It seems useful to divide the neoclassical theory into two stages. The earlier version of the neoclassical approach developed by Jorgenson [11] derives the optimal capital stock under constant returns to scale and exogenously given output. To make the rate of investment determinate, the model is completed by a distributed lag function for net investment. This earlier version of the neoclassical investment theory has a couple of drawbacks. The assumption of exogenously given output (which makes the optimal capital stock determinate) is inconsistent with perfect competition. The theory itself cannot determine the rate of investment; rather, it relies on an ad hoc stock adjustment mechanism. Some sort of adjustment costs are introduced implicitly through the distributed lag function for investment.

2,928 citations



Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of how to regulate a monopolistic firm whose costs are unknown to the regulator, and derive an optimal regulatory policy for the case in which the regulator does not know the costs of the firm.
Abstract: We consider the problem of how to regulate a monopolistic firm whose costs are unknown to the regulator. The regulator's objective is to maximize a linear social welfare function of the consumers' surplus and the firm's profit. In the optimal regulatory policy, prices and subsidies are designed as functions of the firm's cost report so that expected social welfare is maximized, subject to the constraints that the firm has nonnegative profit and has no incentive to misrepresent its costs. We explicitly derive the optimal policy and analyze its properties. IN THEIR CLASSIC PAPERS Dupuit [2] and Hotelling [5] considered pricing policies for a bridge that had a fixed cost of construction and zero marginal cost. They demonstrated that the pricing policy that maximizes consumer well-being is to set price equal to marginal cost and to provide a subsidy to the supplier equal to the fixed cost, so that a firm would be willing to provide the bridge. This first-best solution is based on a number of informational assumptions. First, the demand function is assumed to be known to both the regulator and to the firm. While the assumption of complete information may be too strong, the assumption that information about demand is as available to the regulator as it is to the firm does not seem unnatural. A second informational assumption is that the regulator has complete information about the cost of the firm or at least has the same information about cost as does the firm. This assumption is unlikely to be met in reality, since the firm would be expected to have better information about costs than would the regulator. As Weitzman has stated, "An essential feature of the regulatory environment I am trying to describe is uncertainty about the exact specification of each firm's cost function. 
In most cases even the managers and engineers most closely associated with production will be unable to precisely specify beforehand the cheapest way to generate various hypothetical output levels. Because they are yet removed from the production process, the regulators are likely to be vaguer still about a firm's cost function" [12, p. 684]. As this observation suggests, it is natural to expect that a firm would have better information regarding its costs than would a regulator. The purpose of this paper is to develop an optimal regulatory policy for the case in which the regulator does not know the costs of the firm. One strategy that a regulator could use in the absence of full information about costs is to give the firm the title to the total social surplus and to delegate the pricing decision to the firm. In pursuing its own interests, which would then be to maximize the total social surplus, the firm would adopt the same marginal cost pricing strategy that the regulator would have imposed if the regulator had


Journal ArticleDOI
TL;DR: In this article, the authors studied competitive adjustment processes in labor markets with perfect information but heterogeneous firms and workers and showed that equilibrium in such markets exists and is stable, in spite of workers' discrete choices among jobs, provided that all workers are gross substitutes from each firm's standpoint.
Abstract: Competitive adjustment processes in labor markets with perfect information but heterogeneous firms and workers are studied. Generalizing results of Shapley and Shubik [7], and of Crawford and Knoer [1], we show that equilibrium in such markets exists and is stable, in spite of workers' discrete choices among jobs, provided that all workers are gross substitutes from each firm's standpoint. We also generalize Gale and Shapley's [3] result that the equilibrium to which the adjustment process converges is biased in favor of agents on the side of the market that makes offers, beyond the class of economies to which it was extended by Crawford and Knoer [1]. Finally, we use our techniques to establish the existence of equilibrium in a wider class of markets, and some sensible comparative statics results about the effects of adding agents to the market are obtained. THE ARROW-DEBREU THEORY of general economic equilibrium has long been recognized as a powerful and elegant tool for the analysis of resource allocation in market economies. Not all markets fit equally well into the Arrow-Debreu framework, however. Consider, for example, the labor market-or the housing market, which provides an equally good example for most of our purposes. Essential features of the labor market are pervasive uncertainty about market opportunities on the part of participants, extensive heterogeneity, in the sense that job satisfaction and productivity generally differ (and are expected to differ) interactively and significantly across workers and jobs, and large set-up costs and returns to specialization that typically limit workers to one job. All of these features can be fitted formally into the Arrow-Debreu framework. State-contingent general equilibrium theory, for example, provides a starting point for studying the effects of uncertainty. 
But this analysis has been made richer and its explanatory power broadened by the examination of equilibrium with incomplete markets, search theory, and market signaling theory. The purpose of this paper is to attempt some improvements in another dimension: we study the outcome of competitive sorting processes in markets where complete heterogeneity prevails (or may prevail). To do this, we take as given the implications of set-up costs and returns to specialization by assuming that, while firms can hire any number of workers, workers can take at most one job. We also return to the simplification of perfect information. In the customary view of competitive markets, agents take market prices as given and respond noncooperatively to them. In this framework equilibrium cannot exist in general unless the goods traded in each market are truly homogeneous; heterogeneity therefore generally requires a very large number of markets. And since these markets are necessarily extremely thin-in many cases containing only a single agent on each side-the traditional stories supporting the plausibility of price-taking behavior are quite strained.


Journal ArticleDOI
TL;DR: In this article, the authors disaggregated the income of individuals or households into different factor components, such as earnings, investment income, and transfer payments, and considered how to assess the contributions of these sources to total income inequality.
Abstract: This paper disaggregates the income of individuals or households into different factor components, such as earnings, investment income, and transfer payments, and considers how to assess the contributions of these sources to total income inequality. In the approach adopted, a number of basic principles of decomposition are proposed and their implications for the assignment of component contributions are examined.


Journal ArticleDOI
TL;DR: In this paper, the authors consider the possibility of static and dynamic speculation when traders have rational expectations and show that, unless traders have different priors or are able to obtain insurance in the market, speculation relies on inconsistent plans and thus is ruled out by rational expectations.
Abstract: This paper considers the possibility of static and dynamic speculation when traders have rational expectations. Its central theme is that, unless traders have different priors or are able to obtain insurance in the market, speculation relies on inconsistent plans, and thus is ruled out by rational expectations. Its main contribution lies in the integration of the rational expectations equilibrium concept into a model of dynamic asset trading and in the study of the speculation created by potential capital gains. Price bubbles and their martingale properties are examined. It is argued that price bubbles rely on the myopia of traders and that they disappear if traders adopt a truly dynamic maximizing behavior.


Journal ArticleDOI
TL;DR: In this article, the authors point out that the presumed computational infeasibility applies only to standard quadrature techniques such as trapezoidal integration and its improved variants; Gaussian quadrature, on the other hand, is extremely efficient and well within the bounds of computational feasibility on modern computers.
Abstract: A PROBLEM OF ESTIMATION that has long confronted many economists is the difficulty of estimating the parameters of equations with limited dependent variables on cross-section time-series (i.e., panel) data. While there are widely available packaged computer programs for estimating either (a) cross-section probit and Tobit models or (b) simple permanent-transitory, random-effects panel models with continuous dependent variables, there are no available computationally feasible methods of combining these two models. This is because the likelihood function that arises in such a combined model contains multivariate normal integrals whose evaluation is quite difficult, if not impossible, with conventional approximation methods. There is a widespread feeling among those working in the area that one possible method of evaluation, the use of quadrature techniques, is in principle possible but is in practice computationally too burdensome to consider (e.g., Albright et al. [2, p. 13]; Hausman and Wise [6, p. 12]). In this note we point out that this is true only of standard quadrature techniques such as trapezoidal integration or its improved variants; Gaussian quadrature, on the other hand, is extremely efficient and is well within the bounds of computational feasibility on modern computers. In what follows, we state the nature of the integrals that need to be evaluated, provide a brief exposition of Gaussian quadrature, and provide a numerical illustration of its use in
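The efficiency the note claims for Gaussian quadrature is easy to demonstrate with the Gauss-Hermite rule built into NumPy. The integrands below are illustrative stand-ins for the normal integrals that arise in random-effects probit likelihoods:

```python
import numpy as np

# Gauss-Hermite quadrature approximates the integral of exp(-x^2) * f(x)
# by sum(w_i * f(x_i)).  For an expectation under N(0, 1), substitute
# u = sqrt(2) * x and divide by sqrt(pi).
nodes, weights = np.polynomial.hermite.hermgauss(20)

def normal_expectation(f):
    """E[f(u)] for u ~ N(0,1), via a 20-point Gauss-Hermite rule."""
    return (weights * f(np.sqrt(2.0) * nodes)).sum() / np.sqrt(np.pi)

# Two expectations with known closed forms:
m2 = normal_expectation(lambda u: u ** 2)   # E[u^2] = 1 exactly
mc = normal_expectation(np.cos)             # E[cos u] = exp(-1/2)
print(m2, mc)
```

Twenty nodes already give near machine-precision answers here, which is the contrast with trapezoidal-type rules: the same likelihood integral that is hopeless on an even grid needs only a handful of Gaussian nodes per evaluation.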

ReportDOI
TL;DR: This paper investigates the stochastic relation between income and consumption within a panel of about 2,000 households and finds that consumption responds much more strongly to permanent than to transitory movements of income.
Abstract: We investigate the stochastic relation between income and consumption (specifically, consumption of food) within a panel of about 2,000 households. Our major findings are: 1. Consumption responds much more strongly to permanent than to transitory movements of income. 2. The response to transitory income is nonetheless clearly positive. 3. A simple test, independent of our model of consumption, rejects a central implication of the pure life cycle-permanent income hypothesis. The observed covariation of income and consumption is compatible with pure life cycle-permanent income behavior on the part of 80 percent of families and simple proportionality of consumption and income among the remaining 20 percent. As a general matter, our findings support the view that families respond differently to different sources of income variations. In particular, temporary income tax policies have smaller effects on consumption than do other, more permanent changes in income of the same magnitude.

Journal ArticleDOI
TL;DR: In this paper, the Kuhn-Tucker multiplier test statistic is defined and its relationships with the likelihood ratio test and the Wald test are examined, and it is shown that these relationships are the same as in the equality constrained case.
Abstract: This paper considers the problem of testing statistical hypotheses in linear regression models with inequality constraints on the regression coefficients. The Kuhn-Tucker multiplier test statistic is defined and its relationships with the likelihood ratio test and the Wald test are examined. It is shown, in particular, that these relationships are the same as in the equality constrained case. It is emphasized, however, that their common asymptotic distribution is a mixture of chi-square distributions under the null hypothesis.

ReportDOI
TL;DR: In this article, the authors point out some pitfalls in Rosen's procedure, which, if ignored, could lead to major identification problems, and discuss the potential problems inherent in this procedure and provide an example.
Abstract: MANY COMMODITIES can be viewed as bundles of individual attributes for which no explicit markets exist. It is often of interest to estimate structural demand and supply functions for these attributes, but the absence of directly observable attribute prices poses a problem for such estimation. In an influential paper published several years ago, Rosen [3] proposed an estimation procedure to surmount this problem. This procedure has since been used in a number of applications (see, for example, Harrison and Rubinfeld [2] or Witte, et al. [4]). The purpose of this note is to point out certain pitfalls in Rosen's procedure, which, if ignored, could lead to major identification problems. In Section 2 we summarize briefly the key aspects of Rosen's method as it has been applied in the literature. Section 3 discusses the potential problems inherent in this procedure and provides an example. Section 4 concludes with a few suggestions for future research.

Journal ArticleDOI
TL;DR: In this paper, a theory of dynamic optimal resource allocation to R and D in an n-firm industry is developed using differential games, a technique that synthesizes the analytic methods previously applied to the problem: static game theory and optimal control.
Abstract: A theory of dynamic optimal resource allocation to R and D in an n-firm industry is developed using differential games. This technique represents a synthesis of the analytic methods previously applied to the problem: static game theory and optimal control. The use of particular functional forms allows the computation and detailed discussion of the Nash equilibrium in investment rules. THIS PAPER ADDRESSES THE PROBLEM of resource allocation to research and development. Among the important issues that a firm engaging in R and D must evaluate are: uncertainty regarding the feasibility and profitability of a particular innovation; the possibility of a protracted development period; the possibility that a rival firm may innovate first, capturing either a patent or a significant share of the new market; the possibility that a rival firm may imitate the innovation and appropriate some of the profits in the new market. In what follows, we will develop a theory of optimal resource allocation to research and development which incorporates the aspects of R and D enumerated above. We will use a dynamic game theoretic analysis, determining the Nash equilibrium strategies for n identical firms. The availability of perfect patent protection is shown to accelerate development of the innovation, and the effect of increasing rivalry is addressed. The impact of increasing rivalry on Nash equilibrium investment in R and D depends upon the degree of appropriability of rewards. If patent protection is perfect, then increasing the number of Nash rivals results in increased R and D effort. However, when imitation is rewarded, the opposite may be true. Finally, some notions of competitive and perfectly competitive equilibrium are examined in the context of the model developed below.


Journal ArticleDOI
TL;DR: In this article, the authors report a bargaining experiment designed to separate the effects of information not included in the classical model of games of complete information into components attributable to specific bargainers possessing specific information, including effects that depend on whether certain information is common knowledge.
Abstract: A fundamental assumption in much of game theory and economics is that all the relevant information for determining the rational play of a game is contained in its structural description. Recent experimental studies of bargaining have demonstrated effects due to information not included in the classical models of games of complete information. The goal of the experiment reported here is to separate these effects into components that can be attributed to the possession of specific information by specific bargainers, and to assess the extent to which the observed behavior can be characterized as equilibrium behavior. The results of the experiment permit us to identify such component effects, in equilibrium, including effects that depend on whether certain information is common knowledge or not. The paper closes with some speculation on the causes of these effects.

Journal ArticleDOI
TL;DR: In this paper, the authors devise and apply a new method for estimating demand for local public goods from survey data, where individuals' responses to questions about whether they wanted more, less, or the same amount of various public goods are combined with observations of their incomes, tax rates, and the amounts of actual spending in their home communities.
Abstract: We devise and apply a new method for estimating demand for local public goods from survey data. Individuals' responses to questions about whether they wanted more, less, or the same amount of various local public goods are combined with observations of their incomes, tax rates, and the amounts of actual spending in their home communities. Parameter estimates turn out to be quite similar to those found with studies like Bergstrom and Goodman's study based on total expenditures across communities.


Journal ArticleDOI
TL;DR: In this article, the authors study the time path of asset prices in a stationary experimental environment and show that after several replications prices converge to a perfect foresight equilibrium; a sequential market with an "informational trap" and a futures market are also studied.
Abstract: The time path of asset prices is studied within a stationary experimental environment. After several replications prices converge to a perfect foresight equilibrium. A sequential market having an "informational trap" and a futures market are also studied.

Journal ArticleDOI
TL;DR: In this paper, a class of estimators called two stage least absolute deviations estimators (2SLAD) is defined, their asymptotic properties are derived, and the problem of finding the optimal member of the class is considered.
Abstract: In this paper the method of least absolute deviations is applied to the estimation of the parameters of a structural equation in the simultaneous equations model. A class of estimators called two stage least absolute deviations estimators is defined, their asymptotic properties are derived, and the problem of finding the optimal member of the class is considered. IN THIS PAPER WE APPLY the method of least absolute deviations to the estimation of the parameters of a structural equation in the simultaneous equations model. We define a class of estimators called two stage least absolute deviations estimators (2SLAD) and derive their asymptotic properties. They are so named as their relationship to the two stage least squares estimator (2SLS) is analogous to the relationship of the least absolute deviations estimator (LAD) to the least squares estimator (LS) in the standard regression model. The LAD estimation has been extensively studied in the context of the standard regression model and its usefulness is universally recognized. In this paper we show that the advantage of 2SLAD over 2SLS in the simultaneous equations model can be as great as that of LAD over LS in the standard regression model, if 2SLAD is properly defined. This last clause is very important, since the results of this paper indicate that the LAD analogue of 2SLS that has been considered before in the literature is not an appropriate method.
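Setting the two-stage structure aside, the least absolute deviations building block can be cast as a linear program: minimize the sum of positive and negative residual parts subject to the regression identity. This is a generic LAD fit assuming SciPy's linprog, not the paper's 2SLAD procedure:

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """LAD regression via LP: min sum(u_pos + u_neg) s.t. X b + u_pos - u_neg = y."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(2 * n)])        # cost only on residual parts
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)      # b free, residual parts >= 0
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
n = 300
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n)
y[:15] += 25.0   # contaminate 5% of observations with gross outliers

b_lad = lad_fit(X, y)
b_ls = np.linalg.lstsq(X, y, rcond=None)[0]
print(b_lad, b_ls)
```

With the outliers present, the least squares intercept is pulled far from its true value while the LAD fit stays close, the same robustness advantage the paper carries over to the simultaneous equations setting.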