
Showing papers by "William W. Cooper published in 2001"


Journal Article
TL;DR: The newer analytical methods described in this paper make it possible to determine ranges within which all data may be varied for any DMU before a reclassification from efficient to inefficient status (or vice versa) occurs.
Abstract: This paper surveys recently developed analytical methods for studying the sensitivity of DEA results to variations in the data. The focus is on the stability of classification of DMUs (Decision Making Units) into efficient and inefficient performers. Early work on this topic concentrated on developing solution methods and algorithms for conducting such analyses after it was noted that standard approaches for conducting sensitivity analyses in linear programming could not be used in DEA. However, some of the recent work we cover has bypassed the need for such algorithms. Evolving from early work that was confined to studying data variations in only one input or output for only one DMU at a time, the newer methods described in this paper make it possible to determine ranges within which all data may be varied for any DMU before a reclassification from efficient to inefficient status (or vice versa) occurs. Other coverage involves recent extensions which include methods for determining ranges of data variation that can be allowed when all data are varied simultaneously for all DMUs. An initial section delimits the topics to be covered. A final section suggests topics for further research.
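
For readers who want to see what is being classified, each DMU's efficient or inefficient status comes from solving one linear program per DMU. The sketch below shows the standard input-oriented CCR envelopment model via scipy.optimize.linprog; the data and the helper name ccr_efficiency are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the input-oriented CCR (constant returns) DEA model
# whose efficient/inefficient classifications the surveyed sensitivity
# methods examine. Data are illustrative: 2 inputs, 1 output, 3 DMUs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0, 5.0],    # inputs: rows = inputs, cols = DMUs
              [3.0, 6.0, 4.0]])
Y = np.array([[2.0, 3.0, 3.0]])   # outputs: rows = outputs, cols = DMUs

def ccr_efficiency(o):
    """Score of DMU o:  min theta  s.t.  X @ lam <= theta * X[:, o],
    Y @ lam >= Y[:, o],  lam >= 0.  Decision vector = [theta, lam...]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]             # minimize theta
    A_in = np.c_[-X[:, [o]], X]             # X lam - theta * x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]     # -Y lam <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: theta* = {ccr_efficiency(o):.3f}")  # 1.000 => efficient
```

The sensitivity question surveyed above then asks how far the entries of X and Y can be perturbed before some DMU's optimal theta crosses the efficiency threshold of 1.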

252 citations


Journal Article
TL;DR: The Imprecise Data Envelopment Analysis (IDEA) method used here permits dealing not only with imprecise data and exact data but also with weight restrictions, as in the (now) widely used "Assurance Region" (AR) and "cone-ratio envelopment" approaches to DEA.
Abstract: Data Envelopment Analysis (DEA) models, as ordinarily employed, assume that the data for all inputs and outputs are known exactly. In some applications, however, a number of factors may involve imprecise data, which take forms such as ordinal rankings and knowledge only of bounds. Here we provide an example involving a Korean mobile telecommunication company. The Imprecise Data Envelopment Analysis (IDEA) method we use permits us to deal not only with imprecise data and exact data but also with weight restrictions as in the (now) widely used "Assurance Region" (AR) and "cone-ratio envelopment" approaches to DEA. We also show how to transform AR bounds on the variables, obtained from managerial assessments, for instance, into data adjustments. This involves an extended IDEA model, which we refer to as AR-IDEA. All these uses are illustrated by an example application directed to evaluating efficiencies of branch offices of a telecommunication company in Korea.
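
For orientation, Assurance Region constraints of the kind referred to above place ratio bounds on the multipliers (weights) of the DEA model. The following generic statement is drawn from the standard DEA literature, not from the paper's AR-IDEA formulation:

```latex
% Ratio-form DEA model with generic assurance-region (AR) bounds on the
% input multipliers v (analogous bounds may be placed on the output
% multipliers u); the alpha_i, beta_i would come from managerial
% assessments, as the abstract indicates:
\max_{u,v}\; \frac{\sum_{r} u_r y_{ro}}{\sum_{i} v_i x_{io}}
\quad \text{s.t.} \quad
\frac{\sum_{r} u_r y_{rj}}{\sum_{i} v_i x_{ij}} \le 1,\;\; j = 1,\dots,n;
\qquad
\alpha_i \le \frac{v_i}{v_1} \le \beta_i;
\qquad u_r,\, v_i \ge 0.
```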

198 citations


Journal Article
TL;DR: In this article, the authors used data envelopment analysis models to identify inefficiencies in the management of congestion in the textile industry and showed how elimination of such managerial inefficiency could have led to output augmentation without reducing employment.
Abstract: Congestion is said to be present when increases in inputs result in output reductions. An “iron rice bowl” policy instituted in China shortly after the revolution led by Mao Tze Tung resulted in congestion that ultimately led to bankruptcy in the textile industry, and near bankruptcy in other industries. A major policy shift away from the “iron rice bowl” policy in 1990 led to massive layoffs and increasing social tensions. Were these massive layoffs necessary? Extensions of data envelopment analysis models developed in the present paper identified inefficiencies in the management of congestion. Using textiles and automobiles for illustration, it is shown how elimination of such managerial inefficiencies could have led to output augmentation without reducing employment. Thus, even in the presence of congestion, it proved possible to identify additional (managerial) inefficiencies that provided opportunities for improvement. In the heavily congested textile industry, these output augmentations could have been accompanied by reductions in the amounts of capital used (as an added bonus). In any case, we show how to identify and evaluate new types of efficiency, viz., the efficiency with which needed (or desired) inefficiencies are managed.
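
To make the opening definition concrete, a textbook-style formalization from the congestion literature (a paraphrase, not the paper's full model) runs as follows:

```latex
% Paraphrased definition (not the paper's exact statement): congestion is
% present at an observed point (x, y) in the production possibility set T
% when reducing one or more inputs can be associated with increasing one
% or more outputs, without worsening any other input or output:
\exists\, (\hat{x}, \hat{y}) \in T :\quad
\hat{x} \le x,\; \hat{x} \ne x
\quad\text{and}\quad
\hat{y} \ge y,\; \hat{y} \ne y,
% where the movement from x to \hat{x} is what makes \hat{y} attainable
% (this association distinguishes congestion from ordinary slack).
```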

108 citations


Journal Article
TL;DR: Examination of the two approaches that are presently available in the data envelopment analysis (DEA) literature for use in identifying and analyzing congestion shows that the Fare et al. approach can fail in both modes.

99 citations


Journal Article
TL;DR: The present paper removes a limitation of IDEA and AR-IDEA that requires access to actually attained maximum values in the data. It does so by introducing a dummy variable that supplies the needed normalizations on maximal values, in a way that continues to provide linear programming equivalents to the original problems.
Abstract: IDEA (Imprecise Data Envelopment Analysis) extends DEA so it can simultaneously treat exact and imprecise data, where the latter are known only to obey ordinal relations or to lie within prescribed bounds. AR-IDEA extends this further to include AR (Assurance Region) and similar approaches to constraints on the variables. In order to provide one unified approach, a further extension also includes cone-ratio envelopment approaches to simultaneous transformations of the data and constraints on the variables. The present paper removes a limitation of IDEA and AR-IDEA that requires access to actually attained maximum values in the data. This is accomplished by introducing a dummy variable that supplies the needed normalizations on maximal values, and it is done in a way that continues to provide linear programming equivalents to the original problems. This dummy variable can be regarded as a new DMU (Decision Making Unit), referred to as a CMD (Column Maximum DMU).
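
The normalization the CMD supplies can be pictured schematically as follows; this is our illustrative reading of the idea of a column-maximum dummy unit, not the paper's exact construction:

```latex
% Adjoin a dummy DMU d (the CMD) whose value in each output row equals
% that row's maximum over all observed DMUs; scaling by the CMD's values
% then normalizes every entry into (0, 1] without requiring that the
% maxima be actually attained data values:
y_{rd} = \max_{j} y_{rj}, \qquad
\hat{y}_{rj} = \frac{y_{rj}}{y_{rd}} \in (0, 1].
```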

92 citations


Journal Article
TL;DR: This paper constitutes a response to the critique by Cherchye, Kuosmanen and Post leveled at the paper “Comparisons and evaluations of alternative approaches to the treatment of congestion in DEA” by Cooper, Gu and Li.

44 citations


Journal Article
TL;DR: In this paper, the authors present conditions for equivalence of the two approaches to the treatment of congestion available in the data envelopment analysis (DEA) literature, as well as conditions under which invalid results would be secured from one of the two.
Abstract: Two approaches are presently available for the treatment of congestion in the data envelopment analysis (DEA) literature: one due to Fare et al. (The Measurement of Efficiency of Production, Kluwer-Nijhoff Publishing, Norwell, MA, 1985) and one due to Cooper, Seiford and Zhu (CSZ). Conditions for equivalence of these two approaches, as well as conditions under which invalid results would be secured from one of them, are set forth in CSZ. Fare and Grosskopf (FG) (Slacks and congestion: a comment. Socio-Economic Planning Sciences 2000;34:27–33) supply a counterexample to the critically important Theorem 1 in CSZ as well as to its corollary. The counterexample to the corollary is correct, but the counterexample to the theorem rests on a misunderstanding. To avoid such misunderstandings, the theorem and proof are here recast in a more direct and positive manner to (a) clarify the theorem and (b) prove its validity. The paper then goes on to examine other relations and properties of these two approaches that bear on the FG comments, including conditions under which one (but not the other) will yield invalid results, as shown in the example provided by CSZ. It is thus to be noted that equivalence holds only when the FG approach is valid.

33 citations


Journal Article
TL;DR: In this article, the authors respond to Steinmann and Zweifel's criticism that their RAM model fails to supply an overall measure of efficiency satisfying non-negativity as prescribed in the properties labeled P1 and P2 in the article by Cooper et al.
Abstract: The preceding critique by Lukas Steinmann and Peter Zweifel (2000) provides a welcome opportunity to clarify what was done (and intended) in our article on RAM as published in the Journal of Productivity Analysis (Cooper et al., 1999; see also Aida et al., 1998). We respond first to their criticism that our RAM model fails to supply an overall measure of efficiency that satisfies non-negativity as prescribed in the properties we labeled P1 and P2 in the article by Cooper et al. (1999). In particular, as Professors Steinmann and Zweifel exhibit by example, these properties are violated because our RAM measure can assume negative values. After responding to this criticism we turn to the criticism that RAM fails to provide a valid ranking of performance (as prescribed in the property we identified as P6). Finally we turn to the claim by Steinmann and Zweifel that RAM is defective because it is biased against large DMUs, as was found to be the case in their use of RAM in a study of 307 Swiss hospitals. A concluding section then summarizes our response and suggests further topics for research.
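
For reference, the RAM measure under discussion is commonly stated as follows (the form given in Cooper, Park and Pastor, 1999, with m inputs, s outputs, optimal slacks, and observed ranges); the non-negativity criticism above is precisely that this quantity can fall below zero:

```latex
% Range-Adjusted Measure (RAM): inefficiency is the average of the
% optimal slacks s_i^-, s_r^+, each scaled by the observed range of its
% input or output across all DMUs, so that 1 signals full efficiency.
\Gamma = 1 - \frac{1}{m+s}\left(\sum_{i=1}^{m}\frac{s_i^-}{R_i}
       + \sum_{r=1}^{s}\frac{s_r^+}{R_r}\right),
\qquad
R_i = \max_j x_{ij} - \min_j x_{ij}, \quad
R_r = \max_j y_{rj} - \min_j y_{rj}.
```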

33 citations