Journal ArticleDOI

Using least squares and tobit in second stage DEA efficiency analyses

01 Sep 2009-European Journal of Operational Research (North-Holland)-Vol. 197, Iss: 2, pp 792-798
TL;DR: Second stage DEA efficiency analyses are examined within the context of a censoring data generating process (DGP) and a fractional data DGP. When efficiency scores are treated as descriptive measures of the relative performance of units in the sample, tobit estimation is inappropriate.
About: This article is published in the European Journal of Operational Research. It was published on 2009-09-01 and has received 705 citations to date. The article focuses on the topics: Ordinary least squares & Heteroscedasticity.
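
The paper's core comparison is between OLS and tobit when DEA efficiency scores are regressed on environmental variables. The following is a minimal sketch of that comparison in Python; the simulated scores, the single environmental variable z, and the hand-rolled censored-normal (tobit) likelihood are illustrative assumptions, not the paper's data or experimental design.

# Hypothetical second-stage regression: DEA-style efficiency scores in (0, 1]
# regressed on one environmental variable, by OLS with White-robust standard
# errors and by a tobit model right-censored at 1.
import numpy as np
from scipy import optimize, stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)                                   # environmental variable
theta = np.clip(0.7 + 0.1 * z + 0.1 * rng.normal(size=n), 0.05, 1.0)  # scores
X = sm.add_constant(z)

ols = sm.OLS(theta, X).fit(cov_type="HC1")               # OLS, robust covariance
print(ols.params, ols.bse)

def tobit_negll(params, y, X, c=1.0):
    """Negative log-likelihood for a normal model right-censored at c."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    cens = y >= c
    ll = stats.norm.logpdf(y[~cens], loc=xb[~cens], scale=sigma).sum()
    ll += stats.norm.logsf(c, loc=xb[cens], scale=sigma).sum()
    return -ll

start = np.r_[np.asarray(ols.params), np.log(theta.std())]
tobit = optimize.minimize(tobit_negll, start, args=(theta, X), method="BFGS")
print(tobit.x[:-1], np.exp(tobit.x[-1]))

The OLS fit treats the scores as descriptive measures of relative performance, while the tobit fit treats scores equal to 1 as censored draws from a latent normal variable; the paper's point is that the censoring interpretation is hard to justify in this setting.
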
Citations

Journal ArticleDOI
TL;DR: In this paper, the authors examine the factors that influence the social performance of hybrid organizations that pursue a social mission and sustain their operations through commercial activities.
Abstract: We examine the factors that influence the social performance of hybrid organizations that pursue a social mission and sustain their operations through commercial activities by studying work integra…

576 citations

Journal ArticleDOI
TL;DR: In this paper, the authors examine the widespread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some environmental variables in a second-stage analysis, and make clear that second-stage OLS estimation is consistent only under very peculiar and unusual assumptions on the data-generating process that limit its applicability.
Abstract: This paper examines the widespread practice where data envelopment analysis (DEA) efficiency estimates are regressed on some environmental variables in a second-stage analysis. In the literature, only two statistical models have been proposed in which second-stage regressions are well-defined and meaningful. In the model considered by Simar and Wilson (J Prod Anal 13:49–78, 2007), truncated regression provides consistent estimation in the second stage, whereas in the model proposed by Banker and Natarajan (Oper Res 56:48–58, 2008a), ordinary least squares (OLS) provides consistent estimation. This paper examines, compares, and contrasts the very different assumptions underlying these two models, and makes clear that second-stage OLS estimation is consistent only under very peculiar and unusual assumptions on the data-generating process that limit its applicability. In addition, we show that in either case, bootstrap methods provide the only feasible means for inference in the second stage. We also comment on ad hoc specifications of second-stage regression equations that ignore the part of the data-generating process that yields data used to obtain the initial DEA estimates.
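
As a point of comparison with the tobit and OLS approaches above, a truncated-regression second stage of the kind Simar and Wilson consider can be estimated by maximum likelihood with a truncated-normal error. The sketch below is not their Algorithm 1 or 2 (which also bootstrap the DEA estimates themselves); it only shows the truncated-normal likelihood, assuming Shephard-type scores with y >= 1, together with a naive case-resampling bootstrap for the slope. The function names are hypothetical.

# Minimal sketch of a second-stage truncated regression (truncation at y >= 1)
# plus a naive pairs bootstrap for the slope; illustrative only.
import numpy as np
from scipy import optimize, stats

def truncreg_negll(params, y, X, lower=1.0):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    # normal log-density renormalised by P(Y >= lower) under the same mean/scale
    ll = stats.norm.logpdf(y, loc=xb, scale=sigma) - stats.norm.logsf(lower, loc=xb, scale=sigma)
    return -ll.sum()

def fit_slope(y, X, start):
    res = optimize.minimize(truncreg_negll, start, args=(y, X), method="BFGS")
    return res.x[1]                     # slope on the environmental variable

# pairs bootstrap (not the Simar-Wilson parametric bootstrap):
# rng = np.random.default_rng(0); idx = rng.integers(0, len(y), len(y))
# slopes.append(fit_slope(y[idx], X[idx], start))
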

543 citations


Cites background or methods from "Using least squares and tobit in second stage DEA efficiency analyses"

  • ...Such a model is conspicuously absent in Hoff (2007), McDonald (2009) and Ramalho et al. (2010)....

  • ...Unfortunately, several topical papers, including Hoff (2007), McDonald (2009), and Ramalho et al. (2010) have recently argued that log-linear specifications (estimated by OLS), censored (i.e., tobit) specifications (estimated by ML), or other particular parametric specifications should be used in…

  • ...McDonald (2009) calls this the “instrumentalist” approach....

  • ...Unfortunately, BN, Hoff (2007), and McDonald (2009) have been cited by a number of empirical researchers as justification for using OLS in second-stage regressions....

  • ...This refrain has been repeated almost verbatim by others, including McDonald (2009, page 797) and Ramalho et al. (2010, Section 2, eighth paragraph). It is true that BN allow for noise, while SW do not. However, as discussed above in Section 3, the noise allowed by BN must be (i) bounded, and (ii) the bounds must be constant. The second assumption—that the bounds must be constant—was shown in Section 3 to be critical to the success of the BN approach. However, this is a strong assumption, akin to assuming homoskedasticity, which is frequently violated with cross-sectional data, and especially with data used in production or cost functions. It is also true that SW assume a truncated normal density in their Assumption A3. Necessarily, the numerous studies that have employed tobit estimation in second-stage regressions have assumed a censored normal density. Again, the goal of SW was to match as closely as possible what empirical researchers have been doing while providing a well-defined statistical model in which a second-stage regression would be meaningful. Other assumptions can be made, or the second stage regression can be estimated non-parametrically using the local ML method discussed by Park et al. (2008). Moreover, as discussed above in Section 3....

Journal ArticleDOI
TL;DR: The five most active DEA subareas in recent years are identified; among them the “two-stage contextual factor evaluation framework” is relatively more active.
Abstract: This study surveys the data envelopment analysis (DEA) literature by applying a citation-based approach. The main goals are to find a set of papers playing the central role in DEA development and to discover the latest active DEA subareas. A directional network is constructed based on citation relationships among academic papers. After assigning an importance index to each link in the citation network, main DEA development paths emerge. We examine various types of main paths, including local main path, global main path, and multiple main paths. The analysis result suggests, as expected, that Charnes et al. (1978) [Charnes A, Cooper WW, Rhodes E. Measuring the efficiency of decision making units. European Journal of Operational Research 1978; 2(6): 429–444] is the most influential DEA paper. The five most active DEA subareas in recent years are identified; among them the “two-stage contextual factor evaluation framework” is relatively more active. Aside from the main path analysis, we summarize basic statistics on DEA journals and researchers. A growth curve analysis hints that the DEA literature’s size will eventually grow to at least double the size of the existing literature.
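
The "importance index" assigned to each citation link in this survey is commonly computed as a traversal count; the sketch below shows one standard choice, the search path count (SPC), without claiming it is the exact index used in the survey. The toy edge list is invented and is not the survey's citation network.

# Search path count (SPC) weights on a toy citation DAG: for each link (u, v),
# SPC = (# source-to-u paths) * (# v-to-sink paths). Main paths then follow
# high-SPC links. Edge list is illustrative only.
import networkx as nx

G = nx.DiGraph([("CCR1978", "BCC1984"), ("CCR1978", "SW2007"),
                ("BCC1984", "SW2007"), ("SW2007", "McDonald2009")])

order = list(nx.topological_sort(G))
down = {v: 1 for v in G}          # paths from the sources down to v
up = {v: 1 for v in G}            # paths from v down to the sinks
for v in order:
    preds = list(G.predecessors(v))
    if preds:
        down[v] = sum(down[u] for u in preds)
for v in reversed(order):
    succs = list(G.successors(v))
    if succs:
        up[v] = sum(up[w] for w in succs)

spc = {(u, v): down[u] * up[v] for u, v in G.edges}
print(spc)   # the SW2007 -> McDonald2009 link lies on 2 source-to-sink paths
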

482 citations


Cites background from "Using least squares and tobit in second stage DEA efficiency analyses"

  • ...The last paper on the local main path, McDonald (2009) [46], is also a work on two-stage analysis....


References
Book
01 Jan 2001
TL;DR: This is the essential companion to Jeffrey Wooldridge's widely-used graduate text Econometric Analysis of Cross Section and Panel Data (MIT Press, 2001).
Abstract: The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research, cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis. Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.

28,298 citations

Journal ArticleDOI
TL;DR: In this article, a parameter covariance matrix estimator is presented that is consistent even when the disturbances of a linear regression model are heteroskedastic and that does not depend on a formal model of the structure of the heteroskedasticity.
Abstract: This paper presents a parameter covariance matrix estimator which is consistent even when the disturbances of a linear regression model are heteroskedastic. This estimator does not depend on a formal model of the structure of the heteroskedasticity. By comparing the elements of the new estimator to those of the usual covariance estimator, one obtains a direct test for heteroskedasticity, since in the absence of heteroskedasticity, the two estimators will be approximately equal, but will generally diverge otherwise. The test has an appealing least squares interpretation.
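
In practice this estimator and the associated test are available directly in statsmodels; the short sketch below fits OLS with an HC0 (White) covariance and runs White's test on simulated heteroskedastic data, which are purely illustrative.

# White (1980) heteroskedasticity-consistent standard errors and White's test.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 1.0 + 0.5 * x + (0.5 + 0.5 * np.abs(x)) * rng.normal(size=300)  # variance depends on x
X = sm.add_constant(x)

fit = sm.OLS(y, X).fit(cov_type="HC0")      # robust (sandwich) covariance
print(fit.bse)                              # robust standard errors

lm_stat, lm_pval, f_stat, f_pval = het_white(fit.resid, X)
print(lm_pval)                              # small p-value flags heteroskedasticity
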

25,689 citations

Journal ArticleDOI
TL;DR: In this article, Box and Cox make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's.
Abstract: [Read at a RESEARCH METHODS MEETING of the SOCIETY, April 8th, 1964, Professor D. V. LINDLEY in the Chair] SUMMARY In the analysis of data it is often assumed that observations y1, y2, ..., yn are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters θ. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality, homoscedasticity and additivity to the transformation are separated. The relation of the present methods to earlier procedures for finding transformations is discussed. The methods are illustrated with examples.
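
The likelihood-based choice of transformation parameter described here is implemented in scipy; the sketch below applies it to simulated positive, right-skewed data (illustrative only).

# Box-Cox transformation with lambda chosen by maximum likelihood (scipy).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = rng.lognormal(mean=0.0, sigma=0.6, size=500)   # positive, right-skewed sample

y_bc, lam = stats.boxcox(y)     # lam maximises the Box-Cox log-likelihood
print(lam)                      # close to 0 here, i.e. roughly a log transform
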

12,158 citations


"Using least squares and tobit in se..." refers methods in this paper

  • ...A more sophisticated approach is to transform the dependent variable by a Box-Cox transformation (Box and Cox, 1964)....

Book
01 Jan 2009

8,216 citations


"Using least squares and tobit in se..." refers background or methods in this paper

  • ...although the parameter estimates of alternative methods differ, the main inferences and marginal effects are often similar (see, for example, Greene, 2008, pp. 781-3 for binary choice models, pp. 873-4 for limited dependent models and p. 876 for heteroskedasticity in limited dependent…

  • ...…and discrete choice models, the best way of measuring fit is not obvious and naive methods can often be constructed that out-perform more appropriate procedures, particularly in unbalanced data situations (see for example Greene, 2008, p.792, for a discussion in the binary choice situation)....

  • ...The properties of OLS, given the data are generated by (4), parallel those of OLS in the linear probability binary discrete choice model (discussed by, for example, Greene, 2008, pp.770-793 and Judge et al., 1988, pp.753-768)....

  • ...The latter problem could be mitigated, to a degree, by applying inequality-restricted least squares, similar to that suggested in the binary choice model, but it is unclear this would be advantageous (see Judge et al., 1988, pp.759-761 and Greene, 2008, p.773, ft.2)....

Book
30 Nov 1997
TL;DR: This book is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners.
Abstract: The second edition of An Introduction to Efficiency and Productivity Analysis is designed to be a general introduction for those who wish to study efficiency and productivity analysis. The book provides an accessible, well-written introduction to the four principal methods involved: econometric estimation of average response models; index numbers; data envelopment analysis (DEA); and stochastic frontier analysis (SFA). For each method, a detailed introduction to the basic concepts is presented, numerical examples are provided, and some of the more important extensions to the basic methods are discussed. Of special interest is the systematic use of detailed empirical applications using real-world data throughout the book. In recent years, there have been a number of excellent advanced-level books published on performance measurement. This book, however, is the first systematic survey of performance measurement with the express purpose of introducing the field to a wide audience of students, researchers, and practitioners. Indeed, the 2nd Edition maintains its uniqueness: (1) It is a well-written introduction to the field. (2) It outlines, discusses and compares the four principal methods for efficiency and productivity analysis in a well-motivated presentation. (3) It provides detailed advice on computer programs that can be used to implement these performance measurement methods. The book contains computer instructions and output listings for the SHAZAM, LIMDEP, TFPIP, DEAP and FRONTIER computer programs. More extensive listings of data and computer instruction files are available on the book's website: (www.uq.edu.au/economics/cepa/crob2005).

7,616 citations


"Using least squares and tobit in se..." refers methods in this paper

  • ...In an interesting paper, Hoff (2007) advocates using tobit and ordinary least squares (OLS) in second stage data envelopment analysis (DEA) efficiency analyses stating “It is firstly concluded that the tobit approach will in most cases be sufficient in representing second stage DEA models....

  • ...Some procedures have been developed that incorporate the influence of efficiency factors in the DEA analysis (see Cooper et al., 2000; Coelli et al., 1999; Fried et al., 1999; Grosskopf, 1996), but the two-stage procedure is very appealing both in terms of its simplicity and the way efficiency is…
