
Showing papers in "International Economic Review in 1984"




ReportDOI
TL;DR: In this article, the authors extended earlier work on the R&D to patents relationship (Pakes-Griliches 1980, and Hausman, Hall, and Griliches, 1984) to a larger but shorter panel of firms and compared weighted nonlinear least squares as well as Poisson-type models as solutions to the problems posed by the discreteness of the dependent variable.
Abstract: This paper extends earlier work on the R&D to patents relationship (Pakes-Griliches 1980, and Hausman, Hall, and Griliches, 1984) to a larger but shorter panel of firms. The focus of the paper is on solving a number of econometric problems associated with the discreteness of the dependent variable and the shortness of the panel in the time dimension. We compare weighted nonlinear least squares as well as Poisson-type models as solutions to the former problem. In attempting to estimate a lag structure on R&D in the absence of a sufficient history of the variable, we take two approaches: first, we use the conditional version of the negative binomial model, and second, we estimate the R&D variable itself as a low order stochastic process and use this information to control for unobserved R&D. R&D itself turns out to be fairly well approximated by a random walk. Neither approach yields strong evidence of a long lag. The available sample, though numerically large, turns out not to be particularly informative on this question. It does reconfirm, however, a significant effect of R&D on patenting (with most of it occurring in the first year) and the presence of rather wide and semi-permanent differences among firms in their patenting policies.
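The count-data approach described here can be sketched with a toy Poisson regression of patent counts on log R&D. The simulated data and variable names are assumptions for illustration; this is not the authors' conditional negative binomial panel estimator.

```python
# Minimal sketch of a count-data regression of patents on R&D.
# Simulated data and names are illustrative; not the paper's estimator.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
log_rd = rng.normal(size=n)                 # stand-in for log R&D spending
patents = rng.poisson(np.exp(0.5 + 0.8 * log_rd))

X = sm.add_constant(log_rd)
poisson_fit = sm.GLM(patents, X, family=sm.families.Poisson()).fit()
print(poisson_fit.params)

# A negative binomial family relaxes the Poisson mean-variance equality,
# in the spirit of the overdispersion and firm-heterogeneity issues discussed.
nb_fit = sm.GLM(patents, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb_fit.params)
```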

561 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a generalized cost function that retains the principal advantages of the neoclassical cost function but does not require that cost minimization subject to market prices be imposed as a maintained hypothesis.
Abstract: Duality theory has been applied increasingly in recent years to study the structure of production. A major advantage of using duality theory is that systems of input demand equations can be derived by simple differentiation, facilitating the use of flexible functional forms. Also, use of duality theory makes it possible to choose a representation of the technology which has desirable properties for estimation, such as exogeneity of the regressors. However, the use of a dual representation of production structure is appropriate only if the corresponding maintained hypothesis concerning economic behavior (e.g., cost minimization or profit maximization) is valid. The perceived advantages of the dual approach to studying production structure have led to some applications in which the maintained hypothesis concerning economic behavior is unlikely to be appropriate. For example, neoclassical cost functions have been used to study the characteristics of production in regulated industries, despite an extensive theoretical literature indicating that a profit maximizing regulated firm would not minimize costs subject to market prices. In this paper we propose a generalized cost function that retains the principal advantages of the neoclassical cost function but does not require that cost minimization subject to market prices be imposed as a maintained hypothesis. In our model, firms are assumed to base their production decisions on unobservable shadow prices which reflect the effects of regulation on the effective prices of inputs. Parametric tests for cost minimization are obtained by expressing shadow prices as functions of market prices. The generalized cost function is estimated with data for regulated electric utilities. The parametric restrictions corresponding to cost minimizing behavior subject to market prices are rejected, implying that the use of a neoclassical cost function is not appropriate in this application.
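A worked sketch of the shadow-price idea, in assumed notation (one common parameterization; not necessarily the authors' exact specification):

```latex
% Shadow prices proportional to observed market prices w_i (assumed notation):
\[
  w_i^{*} = k_i\, w_i , \qquad
  C^{s} = C\!\left(y,\; k_1 w_1, \dots, k_n w_n\right).
\]
% Cost minimization subject to market prices then corresponds to the
% testable parametric restriction
\[
  H_0 :\; k_1 = k_2 = \dots = k_n = 1 .
\]
```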

238 citations


Journal ArticleDOI
Abstract: General Competitive Analysis in an Economy with Private Information, by Edward C. Prescott and Robert M. Townsend. International Economic Review, Vol. 25, No. 1 (Feb. 1984), pp. 1-20. Published by Blackwell Publishing for the Economics Department of the University of Pennsylvania and the Institute of Social and Economic Research, Osaka University. Stable URL: http://www.jstor.org/stable/2648863

232 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the maximum likelihood estimators (MLE) of limited dependent variable models will be inconsistent if normality is a misspecification.
Abstract: Limited dependent variable (LDV) models arise when the dependent variable is restricted in some way. The examples are numerous and contain situations (i) where the dependent variable is restricted and takes a limiting value with a positive probability and (ii) where it can take only one of a finite set of values (e.g., when modeling qualitative responses). For the analysis of LDV models, several estimation and inferential results are presently available (e.g., see Tobin [1958], Amemiya [1973], Fair [1977], Olsen [1978], and Pratt [1981]) and these are being increasingly used in economic modeling. These results are applicable under a set of assumptions which typically include disturbance "normality". The consequences of violation of the normality assumption in LDV situations can be quite severe since, unlike the usual regression model, the maximum likelihood estimators (MLE) can be inconsistent under non-normality. Robinson [1982] has shown this by taking the disturbances as uniform variates. Through a simulation study, Arabmazar and Schmidt [1982] demonstrated that the asymptotic bias (inconsistency) can be quite substantial if normality is wrongly assumed (see also Goldberger [1980], and White [1979]). In addition to the logistic and Student t distributions, which have shapes closely similar to the normal distribution, and the Laplace distribution considered in Goldberger [1980], it can be shown using equation (1.4) of Robinson [1982, p. 28] that under most of the commonly used non-normal distributions in the statistical literature the MLE will be inconsistent if normality is a misspecification. As a matter of general theoretical interest, it is an open question whether the normality assumption is necessary for consistency. (There may be some pathological situations, in particular with all explanatory variables being discrete, in which it is possible to construct non-normal distributions that do not lead to inconsistency; nevertheless, the possibility of inconsistency in the event of non-normality is sufficient reason for testing normality and, for the truncated sample cases, the truncated normal distribution.) Because of sample truncation or censoring, the normality tests developed for
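A small simulation makes the inconsistency point concrete: fit a censored (Tobit-type) regression by Gaussian maximum likelihood when the true disturbances are Laplace, and compare the estimates with the truth. The design below is an illustrative assumption, not taken from the cited studies.

```python
# Illustrative simulation (assumed design): Gaussian-MLE Tobit fit on
# censored data whose true errors are Laplace rather than normal.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
beta0, beta1, sigma = 0.0, 1.0, 1.0
eps = rng.laplace(scale=sigma / np.sqrt(2), size=n)   # Laplace with variance sigma^2
y_star = beta0 + beta1 * x + eps
y = np.maximum(y_star, 0.0)                           # censoring at zero
cens = y_star <= 0.0

def neg_loglik(theta):
    b0, b1, log_s = theta
    s = np.exp(log_s)
    mu = b0 + b1 * x
    ll_unc = stats.norm.logpdf(y[~cens], loc=mu[~cens], scale=s)  # uncensored part
    ll_cen = stats.norm.logcdf(-mu[cens] / s)                     # P(y* <= 0)
    return -(ll_unc.sum() + ll_cen.sum())

res = optimize.minimize(neg_loglik, x0=np.array([0.0, 0.5, 0.0]), method="BFGS")
print("Gaussian-MLE estimates (b0, b1, sigma):", res.x[0], res.x[1], np.exp(res.x[2]))
print("true values:                           ", beta0, beta1, sigma)
```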

210 citations


Journal ArticleDOI
TL;DR: The positions listed in this entry include Distinguished Professor of Economics at Syracuse University, the George Summey, Jr. Professorship of Liberal Arts at Texas A&M University, and visiting appointments at the University of California-San Diego, the University of Arizona, and Université Pantheon-Assas (Paris II).
Abstract: ACADEMIC POSITIONS Distinguished Professor of Economics, Syracuse University, 2005-present. Part-time Chair in Economics, University of Leicester, 2004-present. George Summey, Jr. Professor of Liberal Arts, Texas A&M University, 1993-2005. Visiting Professor, University of California-San Diego, 2002. Visiting Professor, University of Arizona, 1996. Visiting Professor, Université Pantheon-Assas, Paris II, 2003-present. Professor of Economics, Texas A&M University, 1988-2005. Associate Professor of Economics, University of Houston, 1984-1988. Assistant Professor of Economics, University of Houston, 1979-1984.

173 citations



Journal ArticleDOI
TL;DR: In this article, an extremely general procedure for performing a wide variety of model specification tests by running artificial linear regressions was developed, which allows us to develop non-nested hypothesis tests for any set of models which attempt to explain the same dependent variable(s), even when the error specifications of the competing models differ.
Abstract: This paper develops an extremely general procedure for performing a wide variety of model specification tests by running artificial linear regressions. Inference may then be based either on a Lagrange Multiplier statistic from the procedure, or on conventional asymptotic t or F tests based on the artificial regressions. This procedure allows us to develop non-nested hypothesis tests for any set of models which attempt to explain the same dependent variable(s), even when the error specifications of the competing models differ.
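One familiar member of this family of artificial regressions is the Davidson-MacKinnon J test for non-nested linear models: augment one model with the fitted values of its competitor and use an ordinary t statistic on the added regressor. A minimal sketch with simulated data (names and design are assumptions):

```python
# Sketch of a J-test-style artificial regression for non-nested linear
# models y ~ X and y ~ Z; simulated data, illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
x1, z1 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.7 * x1 + rng.normal(size=n)     # data generated by the X model

X = sm.add_constant(x1)
Z = sm.add_constant(z1)

z_fitted = sm.OLS(y, Z).fit().fittedvalues  # fitted values of the competing model
aug = np.column_stack([X, z_fitted])        # artificial regression: X plus Z-fits
j_fit = sm.OLS(y, aug).fit()
print("t statistic on competing fitted values:", j_fit.tvalues[-1])
```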

86 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the exact finite sample distribution of the limited information maximum likelihood estimator in a general and leading single equation case is multivariate Cauchy.
Abstract: It is shown that the exact finite sample distribution of the limited information maximum likelihood (LIML) estimator in a general and leading single equation case is multivariate Cauchy. When the LIML estimator utilizes a known error covariance matrix (LIMLK) it is proved that the same Cauchy distribution still applies. The corresponding result for the instrumental variable (IV) estimator is a form of multivariate t density where the degrees of freedom depend on the number of instruments.
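As a rough illustration of these heavy-tailed exact distributions, the Monte Carlo below simulates the simple IV estimator when the instruments carry no information about the endogenous regressor (an assumed reading of the "leading case"); it is not the paper's LIML derivation.

```python
# Rough illustration (assumed design): the simple IV estimator with
# irrelevant instruments exhibits very heavy, Cauchy/t-like tails.
# Not the paper's LIML analysis, just a loosely related check.
import numpy as np

rng = np.random.default_rng(3)
n, k, reps = 100, 4, 20000
beta_hats = np.empty(reps)
for r in range(reps):
    Z = rng.normal(size=(n, k))               # k instruments
    u = rng.normal(size=n)
    v = 0.5 * u + rng.normal(size=n)          # endogeneity via correlated errors
    x = v                                     # reduced-form coefficients are zero
    y = 1.0 * x + u
    pz_x = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # projection of x on Z
    beta_hats[r] = (pz_x @ y) / (pz_x @ x)

print("1%, 50%, 99% quantiles of the IV estimates:",
      np.quantile(beta_hats, [0.01, 0.5, 0.99]))
```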

79 citations



Journal ArticleDOI
TL;DR: In this article, the authors provide guidelines on how to pool time-series of cross-section data in the simultaneous model and evaluate the small sample performance of various pooling estimators in a two-equation simultaneous model.
Abstract: Empirical studies utilizing time-series of cross-section data are constantly appearing in virtually every field of economics. This has been made easier by the increasing availability of panel data, and the increasing capability of computers in handling large data sets. Some of the earlier studies include Mundlak [1961, 1963] and Hoch [1962] in the production function literature, Kuh [1959] in the investment function literature, and Balestra and Nerlove [1966] in the energy literature. More recent studies include Chamberlain and Griliches [1975], Hausman and Wise [1979], and Lillard and Weiss [1979] in the labor literature, and Griffin [1979] and Pindyck [1980] in the energy literature, to mention only a few. Efforts to develop the econometric theory for pooling time-series of cross-section data have concentrated largely on the single equation error components model. More recently, Avery [1977] and Baltagi [1981b] extended this error components literature to the seemingly unrelated regressions and the simultaneous equations model, respectively. Asymptotic as well as small sample properties of various pooling estimators received adequate investigation in the single equation model. However, the same cannot be said about pooling estimators in the simultaneous equations case. This paper is an attempt to remedy this situation. The small sample performance of various pooling estimators in a two-equation simultaneous model is studied by means of Monte Carlo experiments. The main purpose is to provide the applied researcher with some guidelines on how to pool time-series of cross-section data in the simultaneous model. Monte Carlo studies for the classical simultaneous model are plentiful, as are
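The kind of Monte Carlo experiment described can be sketched in a few lines. The skeleton below compares pooled OLS with the within (fixed effects) estimator in a single-equation error components model; it is only a template, not the paper's simultaneous-equations pooling estimators.

```python
# Minimal error-components Monte Carlo skeleton (single equation only;
# not the paper's simultaneous-equations estimators).
import numpy as np

rng = np.random.default_rng(4)
N, T, reps, beta = 50, 10, 500, 1.0
ols_est, within_est = np.empty(reps), np.empty(reps)

for r in range(reps):
    mu = rng.normal(size=(N, 1))                  # individual effects
    x = 0.8 * mu + rng.normal(size=(N, T))        # regressor correlated with effects
    y = beta * x + mu + rng.normal(size=(N, T))
    ols_est[r] = (x * y).sum() / (x * x).sum()    # pooled OLS ignores the effects
    xd = x - x.mean(axis=1, keepdims=True)        # within transformation
    yd = y - y.mean(axis=1, keepdims=True)
    within_est[r] = (xd * yd).sum() / (xd * xd).sum()

print("pooled OLS bias: ", ols_est.mean() - beta)
print("within bias:     ", within_est.mean() - beta)
```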

Journal ArticleDOI
TL;DR: In this article, a neighborhood turnpike theorem of the McKenzie type was proved for competitive equilibrium paths in a model like that of Bewley, without differentiability assumptions.
Abstract: In his seminal paper, Bewley [1982] integrates growth theory and competitive equilibrium theory, considering a model with finitely many infinitely lived consumers who trade commodities in infinitely many competitive markets from the present through the future. In such a model, a competitive equilibrium path is an optimal growth path. Using this property, he proves a turnpike theorem for competitive equilibrium paths. In growth theory, a social welfare function, with respect to which optimal growth paths are considered, has been assumed to be exogenously given. Bewley shows that in the competitive market system the market mechanism endogenously determines the social welfare function with respect to which an equilibrium path is regarded as an optimal path. In Bewley's model, the future is discounted by a factor ρ. In growth theory, two types of turnpike theorems have been proved when the future is discounted. The first type is called a neighborhood turnpike theorem. It asserts that given ε > 0, there is 0 < ρ' < 1 such that ρ' ≤ ρ < 1 implies that an optimal path at the discount factor ρ eventually stays within the ε-neighborhood of a stationary path. This theorem does not require the social welfare function to be differentiable but only to be strictly concave, and is proved by McKenzie [1979]. The second type is called an asymptotic turnpike theorem. It asserts that there is 0 < ρ' < 1 such that ρ' < ρ < 1 implies that an optimal path at the discount factor ρ converges to a stationary path. This theorem has been proved only under differentiability assumptions (Scheinkman [1976]). Bewley proves an asymptotic turnpike theorem, making use of extensive differentiability assumptions. The purpose of this paper is to synthesize these two important results of Bewley and McKenzie. Namely, we will prove a neighborhood turnpike theorem of McKenzie type in a model like that of Bewley, making no differentiability assumptions. We will make a few generalizations of Bewley's assumptions. Among them, of importance are relaxing his differentiability assumptions, weakening his interiority assumption, and replacing his assumption of decreasing returns to
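In symbols (assumed notation), the neighborhood turnpike property stated above reads:

```latex
% Assumed notation: k_t(rho) an optimal path at discount factor rho,
% k*(rho) the associated stationary path.
\[
  \forall \varepsilon > 0 \;\; \exists \rho' \in (0,1) :\quad
  \rho' \le \rho < 1 \;\Longrightarrow\;
  \exists T \;\; \forall t \ge T :\;
  \bigl\| k_t(\rho) - k^{*}(\rho) \bigr\| < \varepsilon .
\]
```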



Journal ArticleDOI
TL;DR: In this article, the authors investigate the impact of demand uncertainty on the choice of plant capacity by a regulated firm, and find that the Averch-Johnson model of a firm operating at or near an "allowed rate of return constraint" (with the allowed rate being in excess of the cost of capital to the firm) should be replaced by a model of price regulation in which attention is shifted to the "non-negativity of profits constraint", or at least this should be done with respect to the electric utility industry.
Abstract: In this paper, we investigate the impact of demand uncertainty on the choice of plant capacity by a regulated firm. Over the past few years, demand uncertainty has become a major element in the decision-making of utilities, and particularly in their decision-making with respect to capacity choices. In a recent study by SRI [1977], it was reported that to maximize expected consumers' surplus, more generating capacity was required for the electric utility industry when operating under demand uncertainty than under demand certainty. This finding raises the question whether the structure of rate regulation of electric utilities provides the appropriate incentives for them to invest in more capacity under demand uncertainty than under certainty. The present paper addresses such questions. The model of the regulated firm that is employed in this paper derives from the work of Joskow [1974] concerning the recent history of rate regulation in the electric utility industry. Briefly, Joskow reported that during the 1960s and the early 1970s, the impetus for rate reviews in the electric utility industry came mainly from the utilities and not from the public utility commissions (PUCs). Joskow's findings are corroborated by data for the period 1948-1978. A study of all rate cases involving electric utilities in the U.S. for that period shows that of a total of 363 cases, 350 of these were instances of utility-initiated rate reviews and only 13 were cases of PUC-initiated reviews. From the point of view of the formal theory of the regulated firm, this strongly suggests that the Averch-Johnson model of a firm operating at or near an "allowed rate of return constraint" (with the allowed rate of return being in excess of the cost of capital to the firm) should be replaced by a model of price regulation in which attention is shifted to the "non-negativity of profits constraint", or, at least, this should be done with respect to the electric utility industry. With some elaborations, this is basically what we do in this paper, building on the approach adopted in a recent paper on a related topic.


Journal ArticleDOI
TL;DR: The Giffen paradox, as discussed in this paper, concerns the possibility that more of a commodity is consumed when its price rises; Marshall believed that the marginal utility of money must rise with an increase in the price of the Giffen good.
Abstract: Concern over the possibility of an upward sloping demand curve started with Marshall, who credited Robert Giffen, an English statistician, with having first observed that more of a commodity may be consumed when its price rises. Nearly 100 years after Marshall published his remarks, the received analysis of a Giffen good is almost entirely contained in the assertion that in the Slutsky equation for the "own price" effects on consumption of the good, the income term may be sufficiently large to outweigh the negative pure substitution term. Even the elementary question of the impact of this large income effect on the consumption of other commodities seems to have been overlooked. In the standard two-good ($x_1$ and $x_2$) world, for example, if $x_1$ is a Giffen good, i.e., if $\partial x_1/\partial p_1 > 0$, then from the budget constraint, $p_1\,\partial x_1/\partial p_1 + p_2\,\partial x_2/\partial p_1 = -x_1$, and thus $\partial x_2/\partial p_1 < 0$. To satisfy the budget constraint, the large negative income term for $x_1$ must produce a large positive income effect for $x_2$ that outweighs the net substitution term in the Slutsky equation for the effects of changes in $p_1$ on $x_2$. When the price of the Giffen good changes, therefore, not only does the income term outweigh the substitution term for the Giffen good, but a similar result is produced for the cross effect on the other commodity. This simple result seems to have gone unnoticed. Many economists have studied the Giffen paradox but a satisfactory analysis appears to be lacking. Marshall believed that the marginal utility of money must rise with an increase in the price of the Giffen good. George Stigler asserted [1950, p. 327] that the Giffen case is inconsistent with an additively separable utility function. William Gramm [1970] also examined the paradox, but his interpretation was not intended to add to knowledge about the economics of the
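The step invoked above is just differentiation of the two-good budget constraint; written out in standard notation:

```latex
% Differentiate p_1 x_1 + p_2 x_2 = m with respect to p_1 (income m fixed):
\[
  x_1 + p_1 \frac{\partial x_1}{\partial p_1}
      + p_2 \frac{\partial x_2}{\partial p_1} = 0
  \;\;\Longrightarrow\;\;
  p_1 \frac{\partial x_1}{\partial p_1}
      + p_2 \frac{\partial x_2}{\partial p_1} = -x_1 ,
\]
% so a Giffen good (\partial x_1/\partial p_1 > 0) forces
% \partial x_2/\partial p_1 < 0.
```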


ReportDOI
TL;DR: In this paper, the authors combine the two approaches into a unified framework, where the degree to which prices are rigid is determined endogenously, and show that the relationship between the variance of deviations from PPP and aggregate variability is not monotonic.
Abstract: The volatility of the exchange rate under floating rates can be interpreted in terms of approaches that allow for short term price rigidity as well as in terms of models that consider the magnification effect of new information. This paper combines the two approaches into a unified framework, where the degree to which prices are rigid is determined endogenously. It is shown that the variance of percentage deviations from PPP has an upper bound, and that the relationship between the variance of deviations from PPP and the aggregate variability is not monotonic. Allowing for a short-run Phillips curve with optimal indexation, it is also demonstrated that a higher price flexibility will reduce deviations from PPP and output volatility.

Journal ArticleDOI
TL;DR: In this article, the yen-dollar interest differential is tied to the expected rate of appreciation of the dollar against the yen, which, if purchasing power parity holds even in the short run, is determined by the difference in the expected rates of inflation between Japan and the United States.
Abstract:
\[
  r_1 = r_1^{*} + x_1 \qquad (1)
\]
where $r_1$ is the current yen interest rate on bonds with a term to maturity of length 1, $r_1^{*}$ is the dollar interest rate on bonds with a term to maturity of length 1, and $x_1$ is the expected average rate of appreciation of the dollar against the yen over the time period 1. Further, suppose that purchasing power parity holds even in the short run. Then, $x_1$ will be determined by the difference in the expected rates of inflation between Japan and the United States as
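Under short-run purchasing power parity, the truncated final step presumably equates $x_1$ with the expected inflation differential; a hedged completion in assumed notation, with $\pi^{e}$ denoting expected inflation over the same horizon:

```latex
% Hedged completion (assumed notation), not taken verbatim from the paper:
\[
  x_1 = \pi^{e}_{\mathrm{Japan}} - \pi^{e}_{\mathrm{US}}
  \qquad\Longrightarrow\qquad
  r_1 - r_1^{*} = \pi^{e}_{\mathrm{Japan}} - \pi^{e}_{\mathrm{US}} .
\]
```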





Journal ArticleDOI
TL;DR: In this paper, the authors show that an equilibrium relative to the Helpman-Razin Mechanism rarely exists, making their optimality result essentially vacuous, whereas an equilibrium does exist in general relative to Hurwicz's Shared Cost Mechanism, and all equilibrium allocations in the Helpman-Razin model are constrained Pareto optima.
Abstract: The recent literature on economies with an incomplete set of markets has been devoted to the study of the efficiency properties of collective stockholder decision mechanisms for guiding the behavior of firms when the restrictive Ekern-Wilson spanning condition is not satisfied. The results have been essentially negative; a majority voting rule and controlling interest rules will not yield efficient equilibrium allocations in general. However, in a recent paper, Helpman and Razin (1978) suggested a decision rule that assures constrained Pareto optimality of equilibrium allocations. Their rule is patterned on the recent contributions to the theory of incentive compatibility. In this paper, we show that an equilibrium relative to the Helpman-Razin Mechanism rarely exists, making their optimality result essentially vacuous. We then demonstrate that an equilibrium does exist in general relative to the Shared Cost Mechanism developed by Hurwicz (1976), and that all equilibrium allocations in the Helpman-Razin model are constrained Pareto optima. Finally, we suggest that the optimality of equilibrium allocations is as much a consequence of how technology is modeled as of the incentives induced by the decision mechanism. Existence, on the other hand, is very sensitive in general to the decision mechanism adopted.




Journal ArticleDOI
TL;DR: In this article, the authors consider an economy operating under a flexible exchange rate and with perfect myopic foresight in expectations of the inflation rate and the rate of return.
Abstract: The dynamics of open economy models embodying the assumption of perfect foresight have attracted a great deal of attention in the literature over recent years. A familiar conclusion to emerge from earlier analyses of dynamic monetary models under perfect foresight is that these models typically exhibit saddlepoint instability (see, for example, Burmeister and Dobell [1970], Nagatani [1970], Black [1974]). One approach around this instability problem, introduced by Sargent and Wallace [1973], is to assume that following any exogenous shock the system would jump to a stable path which converges to the new equilibrium. This procedure has been adopted by a number of authors (for example, Dornbusch [1976], Gray and Turnovsky [1979], Dornbusch and Fischer [1980], and Burmeister, Flood and Turnovsky [1981]). An implication of the jump is that at the point in time in which it occurs the assumption of perfect myopic foresight does not hold. However, as Gray and Turnovsky [1979] argued, this can be justified on the grounds that when a given shock is completely unanticipated it seems excessively stringent to require that expectations be perfectly accurate at the instant that the shock impinges on the system. Similar justification can be made in the case of pre-announced or anticipated policy changes as considered by Wilson [1979], Boyer and Hodrick [1982], as well as Gray and Turnovsky [1979]. In this case, the initial jump occurs at the time of announcement and the system remains on a continuous new equilibrium path at the time of policy implementation. Further, empirical evidence provides some support for the assumption that a system embodying the rational expectations hypothesis (or its deterministic analogue, perfect myopic foresight) will not arch away from the new equilibrium along a divergent path (see Flood and Garber [1980]). This paper deals with an economy operating under a flexible exchange rate and with perfect myopic foresight in expectations of the inflation rate and the rate of