Journal ArticleDOI

Transcendental logarithmic production frontiers

01 Feb 1973-The Review of Economics and Statistics-Vol. 55, Iss: 1, pp 28-45
TL;DR: This paper focuses on additive and homogeneous production possibility frontiers, which have played an important role in formulating statistical tests of the theory of production, and characterizes the class of production possibility frontiers that are homogeneous and additive.
Abstract: Focuses on additive and homogeneous production possibility frontiers that have played an important role in formulating statistical tests of the theory of production. Characterization of the class of production possibility frontiers that are homogeneous and additive; representation of the production possibility frontier; statistical tests of the theory of production. (From Ebsco)
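
For reference, the transcendental logarithmic (translog) form that gives the paper its title approximates an arbitrary frontier by a function quadratic in the logarithms of its arguments; in the single-output special case (the paper itself treats multi-output production possibility frontiers) it reads:

\ln Y = \alpha_0 + \sum_i \alpha_i \ln X_i + \tfrac{1}{2} \sum_i \sum_j \beta_{ij} \ln X_i \ln X_j, \qquad \beta_{ij} = \beta_{ji}

Setting all \beta_{ij} = 0 recovers the Cobb-Douglas case, which is why the form lends itself to parametric tests of additivity and homogeneity.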
Citations
Posted Content
TL;DR: In this article, the authors develop an estimation algorithm that takes into account the relationship between productivity on the one hand and both input demand and survival on the other, guided by a dynamic equilibrium model that generates the exit and input demand equations needed to correct for the simultaneity and selection problems.
Abstract: Technological change and deregulation have caused a major restructuring of the telecommunications equipment industry over the last two decades. We estimate the parameters of a production function for the equipment industry and then use those estimates to analyze the evolution of plant-level productivity over this period. The restructuring involved significant entry and exit and large changes in the sizes of incumbents. Since firms' choices on whether to liquidate, and on the quantities of inputs demanded should they continue, depend on their productivity, we develop an estimation algorithm that takes into account the relationship between productivity on the one hand, and both input demand and survival on the other. The algorithm is guided by a dynamic equilibrium model that generates the exit and input demand equations needed to correct for the simultaneity and selection problems. A fully parametric estimation algorithm based on these decision rules would be computationally burdensome and would require a host of auxiliary assumptions. So we develop a semiparametric technique which is both consistent with a quite general version of the theoretical framework and easy to use. The algorithm produces markedly different estimates of both production function parameters and productivity movements than traditional estimation procedures. We find an increase in the rate of industry productivity growth after deregulation. This was despite the fact that there was no increase in the average of the plants' rates of productivity growth, and there was actually a fall in our index of the efficiency of the allocation of variable factors conditional on the existing distribution of fixed factors. Deregulation was, however, followed by a reallocation of capital towards more productive establishments (by a downsizing, often shutdown, of unproductive plants and by a disproportionate growth of productive establishments) which more than offset the other factors' negative impacts on aggregate productivity.
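
Since the algorithm is described only verbally here, a compact illustration may help. The following is a minimal Python sketch of the two-step proxy-variable idea in the abstract, with the survival/selection correction omitted and a quadratic polynomial standing in for the nonparametric first stage; the simulated panel, variable names, and grid search are illustrative assumptions, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

# Simulate a balanced panel: n plants over T periods.
n, T = 500, 6
beta_l, beta_k, rho = 0.6, 0.4, 0.7            # true parameters

omega = np.zeros((n, T))                        # Hicks-neutral productivity, AR(1)
omega[:, 0] = rng.normal(size=n)
for t in range(1, T):
    omega[:, t] = rho * omega[:, t - 1] + 0.3 * rng.normal(size=n)

k = np.zeros((n, T))                            # capital is predetermined:
k[:, 0] = rng.normal(size=n)                    # it reflects last period's information
for t in range(1, T):
    k[:, t] = 0.5 * omega[:, t - 1] + rng.normal(size=n)

l = 0.7 * omega + rng.normal(size=(n, T))       # labor responds to current productivity
inv = 0.8 * omega + 0.5 * k                     # investment proxy, monotone in omega
y = beta_l * l + beta_k * k + omega + 0.1 * rng.normal(size=(n, T))

def poly2(a, b):
    """Quadratic basis in two variables (stands in for a nonparametric term)."""
    return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])

# Step 1: y = beta_l * l + phi(k, inv) + e. Inverting the investment rule means
# phi absorbs both beta_k * k and omega, so only beta_l is identified here.
X1 = np.column_stack([l.ravel(), poly2(k.ravel(), inv.ravel())])
coef1, *_ = np.linalg.lstsq(X1, y.ravel(), rcond=None)
beta_l_hat = coef1[0]
phi_hat = (X1[:, 1:] @ coef1[1:]).reshape(n, T)

# Step 2: for a candidate beta_k, the implied omega is phi - beta_k * k; it
# should follow its Markov process. Pick beta_k minimizing the innovation SSR.
def ssr(bk):
    w = phi_hat - bk * k
    w_lag = w[:, :-1].ravel()
    G = np.column_stack([np.ones_like(w_lag), w_lag, w_lag ** 2])
    g_coef, *_ = np.linalg.lstsq(G, w[:, 1:].ravel(), rcond=None)
    resid = (y[:, 1:].ravel() - beta_l_hat * l[:, 1:].ravel()
             - bk * k[:, 1:].ravel() - G @ g_coef)
    return resid @ resid

grid = np.linspace(0.0, 1.0, 201)
beta_k_hat = grid[int(np.argmin([ssr(b) for b in grid]))]
print(f"beta_l_hat = {beta_l_hat:.3f} (true 0.6), beta_k_hat = {beta_k_hat:.3f} (true 0.4)")

On this simulated panel the two steps recover the labor coefficient from the proxy-controlled regression and the capital coefficient from the productivity process; a full implementation would add the exit equation described in the abstract.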

4,380 citations


Cites background from "Transcendental logarithmic production frontiers"

  • ...…of the Cobb-Douglas technology in the empirical work in this paper, it is easy to generalize the estimation algorithm to allow for more general production technologies; translog with neutral efficiency differences across firms would do equally well (see Christensen, Jorgenson, and Lau (1973))....


ReportDOI
TL;DR: In this paper, the empirical focus is on estimating the parameters of a production function for the equipment industry and then using those estimates to analyze the evolution of plant-level productivity.
Abstract: Technological change and deregulation have caused a major restructuring of the telecommunications equipment industry over the last two decades. Our empirical focus is on estimating the parameters of a production function for the equipment industry, and then using those estimates to analyze the evolution of plant-level productivity. The restructuring involved significant entry and exit and large changes in the sizes of incumbents. Firms' choices on whether to liquidate, and on input quantities should they continue, depended on their productivity. This generates a selection and a simultaneity problem when estimating production functions. Our theoretical focus is on providing an estimation algorithm which takes explicit account of these issues. We find that our algorithm produces markedly different and more plausible estimates of production function coefficients than do traditional estimation procedures. Using our estimates we find increases in the rate of aggregate productivity growth after deregulation. Since we have plant-level data we can introduce indices which delve deeper into how this productivity growth occurred. These indices indicate that productivity increases were primarily a result of a reallocation of capital towards more productive establishments.

3,657 citations

Journal ArticleDOI
TL;DR: In this article, the authors study a rich class of noncooperative games, including models of oligopoly competition, macroeconomic coordination failures, arms races, bank runs, technology adoption and diffusion, R&D competition, pretrial bargaining, coordination in teams, and many others.
Abstract: We study a rich class of noncooperative games that includes models of oligopoly competition, macroeconomic coordination failures, arms races, bank runs, technology adoption and diffusion, R&D competition, pretrial bargaining, coordination in teams, and many others. For all these games, the sets of pure strategy Nash equilibria, correlated equilibria, and rationalizable strategies have identical bounds. Also, for a class of models of dynamic adaptive choice behavior that encompasses both best-response dynamics and Bayesian learning, the players' choices lie eventually within the same bounds. These bounds are shown to vary monotonically with certain exogenous parameters. We study the class of (noncooperative) supermodular games introduced by Topkis (1979) and further analyzed by Vives (1985, 1989), who also pointed out the importance of these games in industrial economics. Supermodular games are games in which each player's strategy set is partially ordered, the marginal returns to increasing one's strategy rise with increases in the competitors' strategies (so that the game exhibits "strategic complementarity") and, if a player's strategies are multidimensional, marginal returns to any one component of the player's strategy rise with increases in the other components. This class turns out to encompass many of the most important economic applications of noncooperative game theory. In macroeconomics, Diamond's (1982) search model and Bryant's (1983, 1984) rational expectations models can be represented as supermodular games. In each of these models, more activity by some members of the economy raises the returns to increased levels of activity by others. In oligopoly theory, some models of Bertrand oligopoly with differentiated products qualify as supermodular games. In these games, when a firm's competitors raise their prices, the marginal profitability of the firm's own price increase rises. A similar structure is present in games of new technology adoption such as those of Dybvig and Spatt (1983), Farrell and Saloner (1986), and Katz and Shapiro (1986). When more users hook into a communication system or more manufacturers adopt an interface standard, the marginal return to others of doing the same often rises. Similarly, in some specifications of the bank runs model introduced by Diamond and Dybvig (1983), when more depositors withdraw their funds from a bank, it is more worthwhile for other depositors to do the same. In the warrant exercise...
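
As a concrete illustration of the bounding result, here is a minimal Python sketch of best-response iteration in a symmetric differentiated-products Bertrand duopoly with strategic complements; the linear demand and cost parameters are illustrative assumptions.

# Demand for firm i: q_i = a - b * p_i + d * p_j (d > 0 makes prices
# strategic complements); both firms have unit cost c.
a, b, d, c = 10.0, 2.0, 1.0, 1.0

def best_response(p_other: float) -> float:
    # argmax over p of (p - c) * (a - b * p + d * p_other)
    # First-order condition: a + b*c + d*p_other - 2*b*p = 0
    return (a + b * c + d * p_other) / (2.0 * b)

def iterate(p0: float, rounds: int = 50) -> float:
    # Symmetric game: both players share one best-response map.
    p = p0
    for _ in range(rounds):
        p = best_response(p)
    return p

lo = iterate(0.0)      # start from the smallest admissible strategy
hi = iterate(100.0)    # start from the largest admissible strategy
print(f"lower limit {lo:.4f}, upper limit {hi:.4f}")

Iterating from the smallest and largest strategies yields monotone sequences; their limits bracket every pure strategy Nash equilibrium, correlated equilibrium, and rationalizable profile, and when the two limits coincide, as here, the equilibrium is unique.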

1,795 citations


Additional excerpts

  • ...16 See Christensen, Jorgenson, and Lau (1973)....


  • ...15 See Varian (1978)....


  • ...See also Fudenberg and Tirole (1986)....


Journal ArticleDOI
TL;DR: A key development in the economic theory of index numbers, as discussed in this paper, has been the demonstration that many index number formulas can be explicitly derived from particular aggregator functions, which provides a powerful new basis for selecting an index number procedure.
Abstract: Early in this century economists began to give serious attention to making comparisons using index number techniques. There was extensive debate as to which index number formulas were the most appropriate for carrying out comparisons. The debate was extensive in no small part due to the lack of agreement as to criteria for preferring one formula over another. In recent decades there has been a resurgence of interest in index numbers, resulting from discoveries that the properties of index numbers can be directly related to the properties of the underlying aggregator functions that they represent. The underlying functions - production functions, utility functions, etc. - are the building blocks of economic theory, and the study of relationships between these functions and index number formulas has been referred to by Samuelson and Swamy (1974) as the economic theory of index numbers. A key development in the economic theory of index numbers has been the demonstration that numerous index number formulas can be explicitly derived from particular aggregator functions. This development provides a powerful new basis for selecting an index number procedure. Rather than starting the selection process with a number of plausible index number formulas, one can specify an aggregator function with desirable properties and derive the corresponding index number procedure. The resulting index is termed exact for that particular aggregator function. Diewert (1976) makes a strong case for limiting the consideration of aggregator functions to those which are flexible, i.e. those which can provide a second order approximation to an arbitrary aggregator function. He has termed index numbers that are exact for flexible aggregator functions 'superlative'. There are two superlative index numbers that are of particular interest - the Fisher Ideal index and the Tornqvist-Theil-translog index. Fisher (1922) dubbed the following index Ideal since it best satisfied his several criteria for choosing among index numbers:
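
The formula displayed at this point is lost in the extraction; the Fisher Ideal index it refers to is the geometric mean of the Laspeyres and Paasche indexes, and the Tornqvist-Theil translog index mentioned above is the share-weighted geometric mean of price relatives:

P_F = \left( \frac{\sum_i p_i^1 q_i^0}{\sum_i p_i^0 q_i^0} \cdot \frac{\sum_i p_i^1 q_i^1}{\sum_i p_i^0 q_i^1} \right)^{1/2}, \qquad \ln P_T = \sum_i \tfrac{1}{2} \left( s_i^0 + s_i^1 \right) \ln \frac{p_i^1}{p_i^0}

where s_i^t = p_i^t q_i^t / \sum_j p_j^t q_j^t is the expenditure share of good i in period t. The Fisher index is exact for a homogeneous quadratic aggregator and the Tornqvist index is exact for the translog, which is what makes both 'superlative'.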

1,660 citations

References
Journal ArticleDOI
TL;DR: In this paper, the authors use the Shephard duality theorem to obtain a system of derived demand equations which are linear in the technological parameters, thus facilitating econometric estimation.
Abstract: The paper indicates how the Shephard duality theorem may be utilized in order to obtain a system of derived demand equations which are linear in the technological parameters, thus facilitating econometric estimation. This theorem states that technology may be equivalently represented by either a production function or a cost function, and a proof of the theorem is given. The chosen functional form is a quadratic form in the square roots of input prices and is a generalization of the Leontief cost function. The generalization has the property that it can attain any set of partial elasticities of substitution using a minimal number of parameters.
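
As a pointer for the reader, the functional form described here (Diewert's generalized Leontief cost function) and the demand system it implies via Shephard's lemma can be written as:

C(y, p) = y \sum_i \sum_j b_{ij} \, p_i^{1/2} p_j^{1/2}, \qquad b_{ij} = b_{ji}

x_i(y, p) = \frac{\partial C}{\partial p_i} = y \sum_j b_{ij} \left( \frac{p_j}{p_i} \right)^{1/2}

so the input-output coefficients x_i / y are linear in the unknown b_{ij}, which is what makes the derived demand equations easy to estimate econometrically.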

1,140 citations

Book ChapterDOI
TL;DR: The three-stage least squares (3SLS) method, as developed in this paper, uses the two-stage least squares estimated moment matrix of the structural disturbances to estimate all coefficients of the entire system simultaneously.
Abstract: In simple though approximate terms, the two-stage least squares method of estimating a structural equation consists of two steps, the first of which serves to estimate the moment matrix of the reduced-form disturbances and the second to estimate the coefficients of one single structural equation after its jointly dependent variables are “purified” by means of the moment matrix just mentioned. The three-stage least squares method, which is developed in this paper, goes one step further by using the two-stage least squares estimated moment matrix of the structural disturbances to estimate all coefficients of the entire system simultaneously. The method has full-information characteristics to the extent that, if the moment matrix of the structural disturbances is not diagonal (that is, if the structural disturbances have nonzero “contemporaneous” covariances), the estimation of the coefficients of any identifiable equation gains in efficiency as soon as there are other equations that are over-identified. Further, the method can take account of restrictions on parameters in different structural equations. And it is very simple computationally, apart from the inversion of one big matrix.
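
In modern notation, the estimator the abstract describes stacks the G structural equations y_g = Z_g \delta_g + u_g into Z = \mathrm{diag}(Z_1, \ldots, Z_G), projects on the exogenous variables via P_X = X(X'X)^{-1}X', and uses \hat{\Sigma}, the moment matrix of the 2SLS residuals, in a single generalized least squares step:

\hat{\delta}_{3SLS} = \left[ Z' \left( \hat{\Sigma}^{-1} \otimes P_X \right) Z \right]^{-1} Z' \left( \hat{\Sigma}^{-1} \otimes P_X \right) y

The "one big matrix" the abstract mentions is the bracketed term, whose inversion is the only heavy computation.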

1,067 citations

Journal ArticleDOI
TL;DR: In this article, the authors consider the class of production functions with constant elasticity of substitution independent of factor prices, including the case in which more than two factors enter the production process.
Abstract: The article considers the class of production functions with constant elasticity of substitution irrespective of factor prices, and examines the case in which more than two factors are involved in the production process.
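
For reference, the n-factor constant-elasticity-of-substitution (CES) form the article analyzes is standardly written as:

Y = \gamma \left[ \sum_{i=1}^{n} \delta_i X_i^{-\rho} \right]^{-1/\rho}, \qquad \sigma = \frac{1}{1 + \rho}

where \sigma is the elasticity of substitution, constant and independent of factor prices; \rho \to 0 recovers the Cobb-Douglas limit.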

791 citations