
Showing papers in "Econometrica in 1971"


Journal ArticleDOI
TL;DR: This article defines investment as the act of incurring an immediate cost in the expectation of future rewards; even a firm shutting down a loss-making plant is investing in this sense, since the payments it must make to extract itself from contractual commitments, including severance payments to labor, are the initial expenditure, and the prospective reward is the reduction in future losses.
Abstract: Economics defines investment as the act of incurring an immediate cost in the expectation of future rewards. Firms that construct plants and install equipment, merchants who lay in a stock of goods for sale, and persons who spend time on vocational education are all investors in this sense. Somewhat less obviously, a firm that shuts down a loss-making plant is also "investing": the payments it must make to extract itself from contractual commitments, including severance payments to labor, are the initial expenditure, and the prospective reward is the reduction in future losses.

3,648 citations


Journal ArticleDOI
TL;DR: This article developed several models for limited dependent variables, which are extensions of the multiple probit analysis model and differ from that model by allowing the determination of the size of the variable when it is not zero to depend on different parameters or variables from those determining the probability of its being zero.
Abstract: THIS PAPER DEVELOPS some models for limited dependent variables. The distinguishing feature of these variables is that the range of values which they may assume has a lower bound and that this lowest value occurs in a fair number of observations. This feature should be taken into account in the statistical analysis of observations on such variables. In particular, it renders invalid use of the usual regression model. The second section of this paper develops several models for such variables. Like Tobin's [10] model, they are extensions of the multiple probit analysis model. They differ from that model by allowing the determination of the size of the variable when it is not zero to depend on different parameters or variables from those determining the probability of its being zero. Estimation and discrimination in the models are considered in Section 3. The models, like their prototypes, seem particularly intractable to exact analysis and large sample approximations have to be used. The adequacy of inferences based on these procedures is explored in Section 4 through a small sampling experiment. Limited dependent variables arise naturally in the study of consumer purchases, particularly purchases of durable goods. When a durable good is to be purchased, the amount spent may vary in fine gradations, but for many durables it is probably the case that most consumers in a particular period make no purchase at all. In Section 5 we apply the models to the demand for durable goods to provide an application of the techniques.
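
A minimal sketch of the two-part idea described above (my own construction, with hypothetical variable names and a lognormal size equation, not the paper's exact specification): the zero/nonzero decision and the size of the nonzero purchase are governed by separate parameter vectors, estimated here by a probit for participation and a regression on the positive observations.

```python
# Two-part ("hurdle") sketch: separate parameters for the probability of a zero
# and for the size of the purchase when it is not zero.  Illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = np.column_stack([np.ones(n), rng.normal(size=n)])   # participation covariates
x = np.column_stack([np.ones(n), rng.normal(size=n)])   # size covariates
gamma_true, beta_true = np.array([0.2, 1.0]), np.array([1.5, 0.8])

buy = z @ gamma_true + rng.normal(size=n) > 0            # zero vs. positive purchase
y = np.where(buy, np.exp(x @ beta_true + 0.5 * rng.normal(size=n)), 0.0)

# Part 1: probit for the probability of a nonzero purchase.
def probit_negll(g):
    p = np.clip(norm.cdf(z @ g), 1e-10, 1 - 1e-10)
    return -np.sum(buy * np.log(p) + (~buy) * np.log(1 - p))

gamma_hat = minimize(probit_negll, np.zeros(2)).x

# Part 2: size of the purchase, using the positive observations only.
pos = y > 0
beta_hat, *_ = np.linalg.lstsq(x[pos], np.log(y[pos]), rcond=None)

print("participation parameters:", gamma_hat)
print("size parameters:", beta_hat)
```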

2,808 citations



Journal ArticleDOI
TL;DR: In this paper, it is shown under weak regularity conditions that local identifiability of the unknown parameter vector is equivalent to nonsingularity of the information matrix, which is a measure of the amount of information about the unknown parameters available in the sample.
Abstract: A theory of identification is developed for a general stochastic model whose probability law is determined by a finite number of parameters. It is shown under weak regularity conditions that local identifiability of the unknown parameter vector is equivalent to nonsingularity of the information matrix. The use of "reduced-form" parameters to establish identifiability is also analyzed. The general results are applied to the familiar problem of determining whether the coefficients of a system of linear simultaneous equations are identifiable. THE IDENTIFICATION PROBLEM concerns the possibility of drawing inferences from observed samples to an underlying theoretical structure. An important part of econometric theory involves the derivation of conditions under which a given structure will be identifiable. The basic results for linear simultaneous equation systems under linear parameter constraints were given by Koopmans and Rubin [10] in 1950. Extensions to nonlinear systems and nonlinear constraints were made by Wald [15], Fisher [4, 5, 6], and others. A summary of these results can be found in Fisher's comprehensive study [7]. The identification problem has also been thoroughly analyzed in the context of the classical single-equation errors-in-variables model. The basic papers here are by Neyman [12] and Reiersøl [13]. Most of this previous work on the identification problem has emphasized the special features of the particular model being examined. This has tended to obscure the fact that the problem of structural identification is a very general one. It is not restricted to simultaneous-equation or errors-in-variables models. As Koopmans and Reiersøl [9] emphasize, the identification problem is "a general and fundamental problem arising, in many fields of inquiry, as a concomitant of the scientific procedure that postulates the existence of a structure." In their important paper Koopmans and Reiersøl define the basic characteristics of the general identification problem. In the present paper we shall, in the case of a general parametric model, derive some identifiability criteria. These criteria include the standard rank conditions for linear models as special cases. Our approach is based in part on the information matrix of classical mathematical statistics. Since this matrix is a measure of the amount of information about the unknown parameters available in the sample, it is not surprising that it should be related to identification. For lack of identification is simply the lack of sufficient information to distinguish between alternative structures. The following results make this relationship more precise.
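
A small numeric illustration of the criterion (a toy model of my own, not from the paper): when two parameters enter the likelihood only through their sum, the information matrix is singular and the parameters are not locally identified; with a regressor separating them, the matrix has full rank.

```python
# Toy check of the equivalence between local identifiability and nonsingularity
# of the information matrix, estimated by the average outer product of scores.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
theta = np.array([0.5, 1.0])

def info_matrix(scores):
    return scores.T @ scores / len(scores)

# Identified model: y ~ N(theta1 + theta2 * x, 1); score_i = eps_i * (1, x_i).
y = theta[0] + theta[1] * x + rng.normal(size=n)
eps = y - theta[0] - theta[1] * x
I_ok = info_matrix(eps[:, None] * np.column_stack([np.ones(n), x]))

# Unidentified model: y ~ N(theta1 + theta2, 1); only the sum enters the mean,
# so the score is eps_i * (1, 1) and the information matrix has rank 1.
y2 = theta.sum() + rng.normal(size=n)
eps2 = y2 - theta.sum()
I_bad = info_matrix(eps2[:, None] * np.ones((n, 2)))

print(np.linalg.matrix_rank(I_ok), np.linalg.matrix_rank(I_bad))   # 2 and 1
```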

990 citations




Journal ArticleDOI
TL;DR: This paper studies the applicability and usefulness of the maximum likelihood method and analysis of covariance techniques in the analysis of this type of model, particularly when one of the covariates used is a lagged dependent variable.
Abstract: The paper argues that variance components models are very useful in pooling cross section and time series data because they enable us to extract some information about the regression parameters from the between group and between time-period variation, a source that is often completely eliminated in the commonly used dummy variable techniques. The paper studies the applicability and usefulness of the maximum likelihood method and analysis of covariance techniques in the analysis of this type of model, particularly when one of the covariates used is a lagged dependent variable.
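
A brief sketch of the error-components idea (my own simulated panel with illustrative dimensions, using the true variance components for simplicity; in practice they are estimated, for example by maximum likelihood as studied in the paper): GLS amounts to subtracting only a fraction of each group mean, so between-group variation is used rather than eliminated as in the dummy variable approach.

```python
# Variance-components (random-effects) GLS for y_it = beta*x_it + mu_i + eps_it,
# via the standard quasi-demeaning transformation.  Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 5
beta, sig_mu, sig_eps = 1.0, 1.0, 0.5

mu = rng.normal(0, sig_mu, size=N)                    # individual effects
x = rng.normal(size=(N, T))                           # regressor, independent of mu
y = beta * x + mu[:, None] + rng.normal(0, sig_eps, size=(N, T))

# Quasi-demeaning: subtract only a fraction theta of each group mean, so the
# between-group variation is down-weighted rather than discarded.
theta = 1 - np.sqrt(sig_eps**2 / (sig_eps**2 + T * sig_mu**2))
x_star = (x - theta * x.mean(axis=1, keepdims=True)).ravel()
y_star = (y - theta * y.mean(axis=1, keepdims=True)).ravel()

beta_gls = (x_star @ y_star) / (x_star @ x_star)
print("variance-components GLS estimate of beta:", beta_gls)
```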

486 citations






Journal ArticleDOI
TL;DR: In this paper, the authors consider the case where a seller is aware that its pricing policy will affect the probability of entry of competing suppliers and develop an optimal price policy under the assumption that the entry probability is a non-decreasing function of product price and that the objective is present value maximization.
Abstract: The situation in which a seller is aware that his pricing policy will affect the probability of entry of competing suppliers is studied. The seller's optimal price policy is developed under the assumption that the entry probability is a non-decreasing function of product price and that the objective is present value maximization. It is shown that the optimal pre-entry price tends to fall as the discount rate drops, the market growth rate rises, the post-entry profit possibilities decline, or certain non-price barriers to entry fall. ECONOMISTS HAVE LONG known that maximizing immediate profits is often not the optimal strategy for a firm to pursue if its planning horizon extends beyond the present. A policy for achieving the highest overall reward may dictate the sacrifice of some current gain. This point has played a central role in the development of the theory of a "limit price." The theory deals with determination of the entry-preventing price by a supplier of a market when potential entrants exist. The supplier in question may be a firm or a group of (tacitly) cooperating firms. The high short term profits associated with the pursuit of monopoly pricing must be balanced against the loss of long term profits upon entry of additional suppliers attracted by the high price. In an early paper formalizing the problem, Bain [2] defined the "limit price" as the highest price that the established sellers can set without inducing entry. Modigliani [9] developed a graphical derivation of the limit price and analyzed a number of its determinants. Fisher [6] related these results to Cournot's duopoly model. Recent contributors include Pashigian [10] and Dewey [5]. On the other side of the Atlantic, Harrod [7], in an attack on the "doctrine of excess capacity," argued that a long-run profit maximizing firm would set price to preclude entry. According to Hicks' [8] formalization of Harrod's argument, the firm seeks maximization of a weighted sum of short-run and long-run profits, with the relative weights reflecting the firm's attitudes regarding these periods. It follows from this that the firm may not set price at its entry-preventing level. Explicit criticism of the limit price concept has not been lacking. Williamson [13], while extending the concept of a limit price to a limit price-selling cost frontier, suggested that the deterministic framework be modified to a probabilistic one. In proposing a stochastic approach, he noted that the limit price theory is highly rigid, with a single point or curve dividing certain entry from no entry. Williamson also observed that the assumed optimality of the limit price implied that the firm would be willing to prevent entry at any cost. Stigler [12, p. 227] has pointed out that the attractiveness of entry will depend not only upon the current rate of return to the industry, but also upon the anticipated rate of growth of industry demand. If the latter is large, then the present value of future profits may be sufficiently large
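
A toy two-period illustration of the trade-off (entirely my own construction; the demand curve, entry-probability function, and discount factor are assumed numbers): the seller chooses a pre-entry price balancing current profit against the discounted consequences of a higher entry probability, and ends up below the myopic monopoly price.

```python
# Toy two-period limit-pricing trade-off.  All functional forms are assumptions.
import numpy as np

prices = np.linspace(1.0, 10.0, 1000)

def profit(p):                     # current-period profit with linear demand q = 12 - p
    return p * np.maximum(12 - p, 0)

def entry_prob(p):                 # non-decreasing in price, as the paper assumes
    return np.clip((p - 4.0) / 6.0, 0.0, 1.0)

delta = 0.9                        # discount factor
pi_monopoly = profit(prices).max() # second-period profit if no entry occurs
pi_duopoly = 10.0                  # lower second-period profit if entry occurs

value = profit(prices) + delta * ((1 - entry_prob(prices)) * pi_monopoly
                                  + entry_prob(prices) * pi_duopoly)
p_star = prices[np.argmax(value)]
print("optimal pre-entry price:", round(p_star, 2),
      "vs. myopic monopoly price:", prices[np.argmax(profit(prices))].round(2))
```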

Journal ArticleDOI
TL;DR: In this paper, it is shown that the qualitative difference between optimal consumption decisions in the two different models is very strongly influenced by the shape of the utility function, and that the third derivative plays a rather large role.
Abstract: linear production function, that for some utility functions the optimal initial consumption in the random case decreases for all values of initial wealth as compared with the initial consumption in the deterministic case. For other utility functions the optimal consumption always increases. Hence it seems, from these examples, that two divergent forces are at work. The first is the desire to consume more initially as a hedge against the uncertain future. The second force is the desire to consume less initially so as to increase the future consumption prospects. (It is assumed, of course, that increased inputs increase outputs for all possible random events, or states of the world). The relative strength of each of these forces, as implied by the utility function, is the key to the relationship between random consumption and deterministic consumption in this model. The major conclusion of this paper is that the qualitative difference between optimal consumption decisions in the two different models is very strongly influenced by the shape of the utility function. In particular the third derivative of the utility function plays a rather large role. It is this derivative that determines the attitude toward the skewness of a distribution in the theory of portfolio choices, as may be seen from the analysis of Pratt [7] and Tobin [10]. Even in these models, however, the third derivative cannot be ignored, since ignoring skewness distorts the results. Moreover, there does not seem to be any intuitive economic reason to make any assumptions concerning the third derivative of the utility function. The extent to which the utility function influences savings and consumption decisions is exhibited in a precise manner. It may be shown that the qualitative relationship between random and deterministic consumption depends in general on the initial wealth. It is not true, as one would infer from the papers cited above, that random consumption is always either greater than or less than deterministic consumption independently of the initial wealth. In other words, for many utility functions the initial wealth turns out to be a decisive factor in the qualitative relationship between the random and deterministic case. Naturally this relationship will also normally depend on the probabilistic structure of the model. The key result of this paper is a theorem which gives a necessary and sufficient condition for determining the qualitative relationship between random consumption and deterministic consumption. This condition, which is both necessary and sufficient, is in a particularly simple form in that it depends only on the known parameters of the model (i.e., the production function, the utility function, and the distribution of the random variable) and also on the optimal deterministic policy which, in general, is much simpler to exhibit than its counterpart in the random case.
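
A small numeric illustration in the spirit of the discussion (a two-period example of my own, not the paper's model): with log utility, whose third derivative is positive, adding a zero-mean income risk to the second period lowers optimal first-period consumption, while with quadratic utility, whose third derivative is zero, the choice is unchanged.

```python
# Two-period consumption choice: max u(c1) + E[u(R*(w - c1) + y)], where y is a
# zero-mean income shock in the random case and 0 in the deterministic case.
# Numbers and functional forms are illustrative assumptions.
import numpy as np

w, R = 2.0, 1.05
c_grid = np.linspace(0.05, 1.45, 2000)
shocks = np.array([-0.5, 0.5])          # equal-probability, zero-mean shock

def best_c(u, random):
    y = shocks if random else np.array([0.0])
    c2 = R * (w - c_grid)[:, None] + y[None, :]      # second-period consumption
    value = u(c_grid) + np.mean(u(c2), axis=1)
    return c_grid[np.argmax(value)]

for name, u in [("log (u''' > 0)      ", np.log),
                ("quadratic (u''' = 0)", lambda c: c - 0.1 * c**2)]:
    print(name, ": deterministic c1 =", round(best_c(u, False), 3),
          " random c1 =", round(best_c(u, True), 3))
```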

Journal ArticleDOI
TL;DR: In this paper, the authors examined some modifications required in well known propositions of general equilibrium theory when transactions require resources, and some preliminary remarks on the analysis of money in such economies are also offered.
Abstract: Publisher Summary This paper examines some of the modifications required in well known propositions of general equilibrium theory when transactions require resources. Some preliminary remarks on the analysis of money in such economies are also offered.

Journal ArticleDOI
TL;DR: In this article, the authors considered a model in which a covariance-stationary exogenous process is related to an endogenous process by an unrestricted, infinite, linear distributed lag, and showed that when an underlying continuous time model is sampled at unit intervals to yield endogenous and exogenous discrete-time processes, the discrete time processes are related by a discrete time equivalent of the underlying continuous model.
Abstract: A model is considered in which a covariance-stationary exogenous process is related to an endogenous process by an unrestricted, infinite, linear distributed lag. It is shown that when an underlying continuous time model is sampled at unit intervals to yield endogenous and exogenous discrete time processes, the discrete time processes are related by a discrete time equivalent of the underlying continuous model. The relationship between the underlying continuous lag distribution and its discrete time equivalent is "close" when the exogenous process is "smooth." Even then, however, it is interesting to note that (i) a monotone continuous time distribution does not in general have a monotone discrete time equivalent and (ii) a one-sided continuous time distribution does not in general have a one-sided discrete time equivalent. The implications of the results for statistical practice are considered in the latter part of the paper.

Journal ArticleDOI
TL;DR: In this paper, the authors study a linear simultaneous-equations model in which the components of z(t) are predetermined at time t and the disturbance vector x(t) is not serially correlated.
Abstract: The model considered is By(t) + Fz(t) = x(t), where y(t) and x(t) are vectors of G components, z(t) has K components, B is a G x G matrix, and F is a G x K matrix. The components of z(t) are predetermined at time t and the vector x(t) is not serially correlated, so that (among other requirements) z(t) is uncorrelated with the current disturbance x(t).



Journal ArticleDOI
TL;DR: In this article, a new approach called small-sigma asymptotics is introduced and applied to the choice of k-class estimators of the parameters of a single equation in a system of linear simultaneous stochastic equations.
Abstract: A new approach to the choice of econometric estimators, called small-sigma asymptotics, is introduced and applied to the choice of k-class estimators of the parameters of a single equation in a system of linear simultaneous stochastic equations. I find that when the degree of overidentification is no more than six, the two stage least squares estimator uniformly dominates the limited information maximum likelihood estimator in a certain sense. The small sigma method can be used on many problems in statistics and econometrics. THE STUDY OF simultaneous equation econometric models has led to many alternative estimators to ordinary least squares: single equation limited information maximum likelihood, and two stage least squares, for example. The behavior of these estimators has been difficult to describe, however, and it has been difficult to choose among these estimators. The work described in this paper explores this problem for the case in which lagged dependent variables are not permitted. To be most useful for normative purposes, a description must be detailed enough to give a good approximation and expose differences between estimators, and yet be simple enough to strengthen intuition and yield easily described comparisons. Since detail and simplicity are in conflict, approaches may differ in this respect. This paper introduces a new approach, based on asymptotic series in a scalar multiple, σ, of the variance of the disturbance in the model. As σ → 0 the regression function is an increasingly good description of the random variables generated. Intuitively this is suggested by Gauss's "Theory of Errors": the errors were never intended to be so large as to swamp the regression function. One important approach used in the past is large sample asymptotic theory. This reveals a persistent bias in ordinary least squares, and a large sample asymptotic equivalence between two stage least squares and single equation limited information maximum likelihood. Additionally, Nagar [13] found the 1/T term in the large sample asymptotic bias and the 1/T and 1/T^2 terms of the moment matrix of two stage least squares. Economists have been uneasy, however, about application of large sample theory to samples which may not be "large" in the relevant sense. Additionally large sample asymptotic results often depend on an assumption about the asymptotic behavior of the moment matrix of exogenous variables which is difficult to justify.
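
A rough Monte Carlo sketch of the kind of comparison at stake (my own design with one endogenous regressor and three instruments, not the paper's analytic small-σ expansion): the bias of ordinary least squares and of two stage least squares can be tracked as the disturbance scale σ is shrunk toward zero.

```python
# Monte Carlo: bias of OLS vs. two stage least squares for y1 = beta*y2 + u,
# y2 = Z*pi + v, corr(u, v) != 0, as the disturbance scale sigma shrinks.
# The design is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
T, K, beta = 50, 3, 1.0
Z = rng.normal(size=(T, K))
pi = np.full(K, 0.5)

def one_draw(sigma):
    u0, v0 = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=T).T
    u, v = sigma * u0, sigma * v0
    y2 = Z @ pi + v
    y1 = beta * y2 + u
    b_ols = (y2 @ y1) / (y2 @ y2)
    y2_hat = Z @ np.linalg.lstsq(Z, y2, rcond=None)[0]      # first stage
    b_2sls = (y2_hat @ y1) / (y2_hat @ y2)
    return b_ols, b_2sls

for sigma in (1.0, 0.5, 0.1):
    draws = np.array([one_draw(sigma) for _ in range(2000)])
    bias = draws.mean(axis=0) - beta
    print(f"sigma={sigma}: OLS bias {bias[0]:+.4f}, 2SLS bias {bias[1]:+.4f}")
```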

Book ChapterDOI
TL;DR: As outlined in this chapter, the following sections are included: Introduction, Alternative Estimators, Sampling Results, and Conclusions.
Abstract: The following sections are included: Introduction, Alternative Estimators, Sampling Results, Conclusions, and References.

Book ChapterDOI
TL;DR: Conditional expected utility (CEU), as discussed by the authors, is a theory devoted explicitly to the problem of making decisions when their consequences are uncertain; it is the most familiar example of a theory of measurement in the social sciences.
Abstract: This chapter focuses on conditional expected utility. Unlike most theories of measurement, which can have both physical and behavioral interpretations, the theory of expected utility is devoted explicitly to the problem of making decisions when their consequences are uncertain. It is the most familiar example of a theory of measurement in the social sciences. This chapter illustrates the ideas underlying the theory. The basic entities of the theory are a set of uncertain alternatives and an individual's ordering of them according to his personal preferences. Each specific gamble prescribes a particular contingency between events and their consequences, but numerous other gambles can be constructed from the same chance events and consequences. The set of possible consequences can include many different types of things: the gain or loss of money, the receipt of commodities or commodity bundles, the presentation of emotional stimuli of various sorts, etc.
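
A minimal sketch of the basic objects (my own illustration; the probabilities, consequences, and utility function are assumed, and the chapter's actual axiomatization is not reproduced here): gambles prescribe a consequence for each chance event and are ordered by expected utility.

```python
# Gambles as mappings from chance events to consequences, ordered by expected
# utility.  Probabilities, payoffs, and the utility function are assumptions.
from math import log

events = {"rain": 0.3, "shine": 0.7}           # chance events and their probabilities
utility = lambda money: log(money)              # an assumed (concave) utility of wealth

# Each gamble prescribes a consequence (here, final wealth) for every event.
gambles = {
    "sure thing": {"rain": 100, "shine": 100},
    "weather bet": {"rain": 40, "shine": 150},
}

def expected_utility(gamble):
    return sum(p * utility(gamble[e]) for e, p in events.items())

# The individual's preference ordering over the gambles:
for name in sorted(gambles, key=expected_utility, reverse=True):
    print(f"{name}: EU = {expected_utility(gambles[name]):.3f}")
```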



Journal ArticleDOI
TL;DR: In this paper, the standard error of forecast of a single equation and the covariance matrix of forecasts of a complete system of equations that are appropriate when the exogenous variables in the forecast period are stochastic are presented.
Abstract: This paper presents formulae for the standard error of forecast of a single equation and the covariance matrix of forecasts of a complete system of equations that are appropriate when the exogenous variables in the forecast period are stochastic. The problems of defining forecast intervals and multidimensional forecast regions are also discussed. FORECASTS MADE with econometric models are probabilistic statements. A number of significant papers [1, 4, 5, 6] have developed the idea of a standard error of forecast for a single equation and the error covariance matrix of the forecast for complete systems. In all of these, it is assumed that the exogenous variables in the forecast period are "known constants," not subject to forecasting error. As several of the authors have noted, this is a serious limitation on their results. Fortunately, that limitation is not necessary. This paper presents formulae for the standard error of forecast of a single equation and the covariance matrix of forecasts of a complete system of equations that are appropriate when the exogenous variables in the forecast period are stochastic. The problems of defining forecast intervals and multidimensional forecast regions are also discussed.
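
A sketch of the single-equation case under simplifying assumptions of my own (the coefficient estimator, the exogenous-variable forecast error, and the disturbance are mutually independent, and the formula shown is my own decomposition for this setup rather than the paper's exact result): the forecast-error variance then has four parts, verified here by simulation. V is the covariance of the coefficient estimates, Sigma the covariance of the exogenous forecast error, and sigma2 the disturbance variance.

```python
# Forecast-error variance when the forecast-period exogenous vector is itself
# predicted with error: sigma^2 + x'Vx + beta'Sigma beta + tr(V Sigma),
# under the independence assumptions stated above.  Illustrative numbers.
import numpy as np

def forecast_error_variance(x_true, beta, V, Sigma, sigma2):
    return (sigma2 + x_true @ V @ x_true + beta @ Sigma @ beta
            + np.trace(V @ Sigma))

# Quick Monte Carlo check of the decomposition.
rng = np.random.default_rng(0)
beta = np.array([1.0, -0.5])
V = np.array([[0.04, 0.01], [0.01, 0.09]])       # covariance of beta_hat
Sigma = np.array([[0.25, 0.0], [0.0, 0.16]])     # covariance of exogenous forecast error
sigma2, x_true = 1.0, np.array([2.0, 1.0])

n = 200_000
beta_hat = rng.multivariate_normal(beta, V, size=n)
x_hat = x_true - rng.multivariate_normal(np.zeros(2), Sigma, size=n)
u = rng.normal(0, np.sqrt(sigma2), size=n)

errors = (x_true @ beta + u) - np.einsum("ij,ij->i", x_hat, beta_hat)
print(errors.var(), forecast_error_variance(x_true, beta, V, Sigma, sigma2))
```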

Journal ArticleDOI
TL;DR: In this article, the authors develop approximations of the Gram-Charlier type to the cumulative distribution function of the instrumental variables estimator under classical assumptions; the approximation is good in the special case of two endogenous variables even for small sample sizes over a wide range of parameter values.
Abstract: This paper develops approximations of the Gram-Charlier type to the cumulative distribution function of the instrumental variables estimator on classical assumptions. In the special case where there are only two endogenous variables in the estimated equation, exact values of the cumulative distribution function are computed by numerical integration and compared with the approximations. Although the error in the approximation depends critically on the parameters of the stochastic model, the approximation is good for the special case even for small sample size over a wide range of values of the parameters. THIS PAPER was originally conceived as a study of the finite sample distribution of two stage least squares estimates. Since it was found that the distribution of a more general class of instrumental variables estimates can be discussed in the same way with a trifling complication of the algebra, the paper was modified to cover these estimates. The basic approach is somewhat similar to that of Nagar [15], since it involves expanding the formulae for the estimator as a series of terms of O(1), O(T^(-1/2)), O(T^(-1)), O(T^(-3/2)), etc., and from this a similar expansion is found for the cumulative probability of the form
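
A generic sketch of a Gram-Charlier type A approximation to a standardized CDF from its skewness and excess kurtosis (a textbook form chosen for illustration, not the paper's specific expansion for the instrumental variables estimator), checked against a simulated standardized chi-square.

```python
# Gram-Charlier (type A) CDF approximation:
#   F(x) ~ Phi(x) - phi(x) * [ skew/6 * (x^2 - 1) + ex_kurt/24 * (x^3 - 3x) ]
import numpy as np
from scipy.stats import norm

def gram_charlier_cdf(x, skew, ex_kurt):
    he2 = x**2 - 1
    he3 = x**3 - 3 * x
    return norm.cdf(x) - norm.pdf(x) * (skew / 6 * he2 + ex_kurt / 24 * he3)

# Standardized chi-square(k): skewness sqrt(8/k), excess kurtosis 12/k.
k = 10
rng = np.random.default_rng(0)
z = (rng.chisquare(k, size=1_000_000) - k) / np.sqrt(2 * k)

for x in (-1.0, 0.0, 1.0, 2.0):
    approx = gram_charlier_cdf(x, np.sqrt(8 / k), 12 / k)
    print(f"x={x:+.1f}  Gram-Charlier {approx:.4f}   simulated {np.mean(z <= x):.4f}")
```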




Journal ArticleDOI
TL;DR: In this paper, the Stolper-Samuelson theorem was generalized to the n x n case and the conditions established in these theorems were then interpreted economically in terms of the generalized versions of factor intensity.
Abstract: This paper is concerned with the generalization of the Stolper-Samuelson theorem from the 2 x 2 case to the n x n case. We start by proving theorems establishing the validity of the factor price equalization theorem and the Stolper-Samuelson theorem for the n x n case. The conditions established in these theorems are then interpreted economically in terms of the generalized versions of factor intensity. It may be noted that the above results, apart from being more readily interpretable in economic terms, are of basic mathematical interest.
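
As a baseline for the n x n generalization, a small numeric reminder of the 2 x 2 magnification effect (the cost shares and the price change are my own illustrative numbers): writing the zero-profit conditions in proportional-change form, a rise in the price of the labor-intensive good raises the wage more than proportionally and lowers the return to capital.

```python
# 2 x 2 Stolper-Samuelson magnification effect with illustrative cost shares.
import numpy as np

# theta[j, i] = share of factor i in the unit cost of good j
# factors: (labor, capital); good 0 is labor-intensive, good 1 capital-intensive
theta = np.array([[0.7, 0.3],
                  [0.3, 0.7]])

p_hat = np.array([0.10, 0.0])              # 10% rise in the price of good 0
w_hat = np.linalg.solve(theta, p_hat)      # zero-profit conditions: theta @ w_hat = p_hat

print("proportional change in (wage, rental):", w_hat)   # approx (+0.175, -0.075)
```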