Author

William W. Cooper

Bio: William W. Cooper is an academic researcher from the University of Texas at Austin. The author has contributed to research in topics: Data envelopment analysis & Linear programming. The author has an h-index of 79, has co-authored 254 publications, and has received 76,641 citations. Previous affiliations of William W. Cooper include Harvard University & Carnegie Mellon University.


Papers
Journal ArticleDOI
TL;DR: An early warning system for regulatory use in insurer insolvency prediction is presented, based on artificial intelligence (in particular, a neural network model) and using financial and other insurer operations data such as those available in the annual statements filed with the National Association of Insurance Commissioners (NAIC).
Abstract: Introduction. The definition and measurement of business risk has been a central theme of the financial and actuarial literature for years. Although the works of Borch (1970), Bierman (1960), Tinsley (1970), and Quirk (1961) dealt with the issue of corporate failure, their models did not lend themselves to empirical testing. Additionally, while the work by Altman (1968), Williams and Goodman (1971), Sinkey (1975), and Altman, Haldeman, and Narayanan (1977) attempted to predict bankruptcy by using discriminant analysis, their approaches were static in nature. Thus, the dynamics of a firm's operations and the changing economic environment were not included in their analyses. Santomero and Vinso (1977) and Vinso (1979) used actuarial ruin theory models to incorporate the dynamic aspects of the cash flow process for assessing the likelihood of bank insolvency. However, there appear to be mathematical problems with their development, making their results difficult to implement in practice. Insolvency within the insurance industry has become a major issue of public debate and concern, and the identification of potentially troubled firms has become a major regulatory research objective. Previous research on the topic of insurer insolvency prediction includes Ambrose and Seward (1988), BarNiv and Hershbarger (1990), BarNiv and MacDonald (1992), Barnoff (1993), and Harrington and Nelson (1986). BarNiv and MacDonald provide a particularly good review of the previous research techniques and results on rating and monitoring insolvency risk for insurers and can be consulted for further background on alternative approaches. The research presented in this article aims to construct an early warning system for regulatory use in insolvency prediction. The approach we utilize is based upon modern methods in artificial intelligence (in particular, a neural network model) and uses financial and other insurer operations data such as those available in the annual statements filed with the National Association of Insurance Commissioners. We also compare the insolvency prediction results obtained using the neural network methodology with those obtained via discriminant analysis, A. M. Best ratings, and the National Association of Insurance Commissioners' Insurance Regulatory Information System ratings.

Situational Overview. In the context of warning of pending insurer insolvency, the regulator has several sources of information. For example, there are reporting and rating services such as the A. M. Best Company, which rates 3,000 property-liability and life-health insurers. However, many of the insurers of interest to state regulators are not rated by Best's or by other rating services (e.g., Moody's or Standard and Poor's). In addition, the National Association of Insurance Commissioners (NAIC) has developed a system called the Insurance Regulatory Information System (IRIS). This system was designed to provide an early warning system for insurer insolvency based upon financial ratios derived from the regulatory annual statement. The IRIS system identifies insurers for further regulatory evaluation if four of the eleven (or twelve, in the case of life insurers) computed financial ratios for a particular company lie outside a given "acceptable" range of values. IRIS uses univariate tests, and the acceptable range of values is determined such that, for any given univariate ratio measure, only approximately 15 percent of all firms have results outside of the particular specified "acceptable" range. The adequacy of IRIS for predicting troubled insurers has been investigated empirically and found not to be strongly predictive. For example, one small-scale comparison study using only five of the IRIS ratio variables from the NAIC data has shown that, by using statistical methods, it is possible to obtain substantial improvements over the IRIS insolvency prediction rates (cf. Barrese, 1990). More recently, the NAIC has implemented a supplementary system based on a number of additional financial ratios …
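
To make the IRIS screening rule described above concrete, the sketch below implements the flagging logic exactly as stated: an insurer is marked for further regulatory evaluation when four or more of its computed ratios fall outside their "acceptable" ranges. The ratio names and range values are illustrative placeholders, not the actual NAIC IRIS definitions.

```python
# Minimal sketch of an IRIS-style screening rule. The ratio names and
# "acceptable" ranges below are hypothetical; the real NAIC IRIS ratios
# and thresholds are not reproduced here.

# Hypothetical acceptable ranges: ratio name -> (low, high)
ACCEPTABLE_RANGES = {
    "ratio_1": (0.0, 9.0),
    "ratio_2": (-0.1, 0.3),
    "ratio_3": (0.5, 1.2),
    # ... one entry per IRIS ratio (eleven, or twelve for life insurers)
}

def iris_flag(ratios: dict, ranges: dict = ACCEPTABLE_RANGES, threshold: int = 4) -> bool:
    """Return True if `threshold` or more ratios lie outside their acceptable range."""
    out_of_range = sum(
        1
        for name, value in ratios.items()
        if name in ranges and not (ranges[name][0] <= value <= ranges[name][1])
    )
    return out_of_range >= threshold
```

Because each ratio is screened in isolation, a rule of this form ignores interactions among the ratios, which is part of the motivation the article gives for moving to a multivariate neural network model.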

179 citations

Journal ArticleDOI
TL;DR: It is shown that some of the need to deal with non-linear problems of congestion in DEA can be avoided by identifying conditions under which they can be replaced by ordinary (deterministic) DEA models.

167 citations

Journal ArticleDOI
TL;DR: In this paper, the authors review past accomplishments of DEA (Data Envelopment Analysis) and some of its future prospects, including extensions to different objectives such as "satisfactory" versus "full" efficiency.
Abstract: This paper covers some of the past accomplishments of DEA (Data Envelopment Analysis) and some of its future prospects. It starts with the “engineering-science” definitions of efficiency and uses the duality theory of linear programming to show how, in DEA, they can be related to the Pareto–Koopmans definitions used in “welfare economics” as well as in the economic theory of production. Some of the models that have now been developed for implementing these concepts are then described and properties of these models and the associated measures of efficiency are examined for weaknesses and strengths along with measures of distance that may be used to determine their optimal values. Relations between the models are also demonstrated en route to delineating paths for future developments. These include extensions to different objectives such as “satisfactory” versus “full” (or “strong”) efficiency. They also include extensions from “efficiency” to “effectiveness” evaluations of performances as well as extensions to evaluate social-economic performances of countries and other entities where “inputs” and “outputs” give way to other categories in which increases and decreases are located in the numerator or denominator of the ratio (=engineering-science) definition of efficiency in a manner analogous to the way output (in the numerator) and input (in the denominator) are usually positioned in the fractional programming form of DEA. Beginnings in each of these extensions are noted and the role of applications in bringing further possibilities to the fore is highlighted.
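
For readers unfamiliar with the "ratio (=engineering-science) definition" referred to above, the fractional programming form of the CCR model, as standardly written in the DEA literature (quoted here in its simplest form rather than from this paper, with the non-Archimedean lower bounds on the weights omitted), places weighted outputs in the numerator and weighted inputs in the denominator:

```latex
\max_{u,v}\; h_0 \;=\; \frac{\sum_{r=1}^{s} u_r\, y_{r0}}{\sum_{i=1}^{m} v_i\, x_{i0}}
\quad\text{subject to}\quad
\frac{\sum_{r=1}^{s} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \;\le\; 1,
\qquad j = 1,\dots,n, \qquad u_r,\, v_i \;\ge\; 0 .
```

The Charnes–Cooper transformation converts this fractional program into a linear program, and it is the duality theory of that linear program that connects the ratio definition to the Pareto–Koopmans characterization discussed in the abstract.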

157 citations

Journal ArticleDOI
TL;DR: A range-adjusted measure of efficiency, as recently developed in data envelopment analysis (DEA), is used to evaluate the performance of entities that supply water services in Japan and its robustness properties are tested and pointed up for uses in improved accountability.
Abstract: A range-adjusted measure (RAM) of efficiency, as recently developed in data envelopment analysis (DEA), is used to evaluate the performance of entities that supply water services in Japan. Its robustness properties are tested and pointed up for uses in improved accountability, and are further pointed up in terms of the potential for help in conducting performance and efficiency audits. The results from DEA are also joined with the Mann–Whitney rank order statistic to show how the two techniques may be jointly used in addressing issues of general policy. The Kanto Region and Kanagawa Prefecture in Japan are used for an illustration.
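
As an illustration of how DEA results can be "joined with the Mann–Whitney rank order statistic" in the way the abstract describes, the sketch below computes a range-adjusted efficiency score from optimal slacks, using the RAM form commonly stated in the DEA literature (not reproduced from this paper), and then compares two groups of scores with SciPy's Mann–Whitney U test. All slack values, data ranges and group memberships are hypothetical.

```python
# Hedged sketch: RAM-style scores compared across two hypothetical groups
# of water utilities. Slacks, ranges and group labels are made up.
import numpy as np
from scipy.stats import mannwhitneyu

def ram_efficiency(input_slacks, output_slacks, input_ranges, output_ranges):
    """Range-adjusted measure: 1 minus the average of optimal slacks scaled by data ranges."""
    terms = np.concatenate([
        np.asarray(input_slacks, dtype=float) / np.asarray(input_ranges, dtype=float),
        np.asarray(output_slacks, dtype=float) / np.asarray(output_ranges, dtype=float),
    ])
    return 1.0 - terms.mean()

# Hypothetical optimal slacks for two utilities (from some DEA solver, not shown here)
score_a = ram_efficiency([0.0, 2.0], [1.0], [10.0, 8.0], [5.0])
score_b = ram_efficiency([3.0, 1.0], [0.0], [10.0, 8.0], [5.0])

# Hypothetical RAM scores for two groups of utilities (e.g., two regions)
group_1 = [0.92, 0.88, 0.95, 0.81, score_a]
group_2 = [0.70, 0.76, 0.83, 0.79, score_b]

# Two-sided Mann-Whitney U test of whether the score distributions differ
stat, p_value = mannwhitneyu(group_1, group_2, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")
```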

157 citations

Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition for the presence of (input) congestion is developed, and a new measure of congestion is generated to provide the basis for a new unified approach to this and other topics in data envelopment analysis.
Abstract: This paper develops a necessary and sufficient condition for the presence of (input) congestion. Relationships between the two congestion methods presently available are discussed. The equivalence between Fare et al. [12], [13] and Brockett et al. [2] holds only when the law of variable proportions is applicable. It is shown that the work of Brockett et al. [2] improves upon the work of Fare et al. [12], [13] in that it not only (1) detects congestion but also (2) determines the amount of congestion and, simultaneously, (3) identifies factors responsible for congestion and distinguishes congestion amounts from other components of inefficiency. These amounts are all obtainable from non-zero slacks in a slightly altered version of the additive model, which we here further extend and modify to obtain additional details. We also generate a new measure of congestion to provide the basis for a new unified approach to this and other topics in data envelopment analysis (DEA).
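
For orientation, the non-zero slacks mentioned above are those of the additive DEA model; the textbook form is sketched below (the paper's "slightly altered version" and its congestion extensions are not reproduced here):

```latex
\begin{aligned}
\max_{\lambda,\,s^-,\,s^+}\quad & \sum_{i=1}^{m} s_i^- + \sum_{r=1}^{s} s_r^+ \\
\text{subject to}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} + s_i^- = x_{i0}, && i = 1,\dots,m, \\
& \sum_{j=1}^{n} \lambda_j y_{rj} - s_r^+ = y_{r0}, && r = 1,\dots,s, \\
& \sum_{j=1}^{n} \lambda_j = 1, \qquad \lambda_j,\; s_i^-,\; s_r^+ \ge 0 .
\end{aligned}
```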

154 citations


Cited by
Journal ArticleDOI
TL;DR: A nonlinear (nonconvex) programming model provides a new definition of efficiency for use in evaluating activities of not-for-profit entities participating in public programs and methods for objectively determining weights by reference to the observational data for the multiple outputs and multiple inputs that characterize such programs.

25,433 citations

Journal ArticleDOI
TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs.
Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."
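
The "new separate variable" referred to in the abstract is commonly presented in later DEA texts as the free variable $u_0$ in the multiplier form of the BCC model; the sketch below is that textbook form with the usual sign convention, not a quotation from the paper:

```latex
\begin{aligned}
\max_{u,\,v,\,u_0}\quad & \sum_{r=1}^{s} u_r\, y_{r0} - u_0 \\
\text{subject to}\quad & \sum_{i=1}^{m} v_i\, x_{i0} = 1, \\
& \sum_{r=1}^{s} u_r\, y_{rj} - u_0 - \sum_{i=1}^{m} v_i\, x_{ij} \le 0, && j = 1,\dots,n, \\
& u_r,\; v_i \ge 0, \qquad u_0 \ \text{free in sign}.
\end{aligned}
```

Under the usual convention, increasing, constant and decreasing returns to scale at the evaluated unit correspond to $u_0^* < 0$, $u_0^* = 0$ and $u_0^* > 0$, respectively, at the optimal solutions; sign conventions vary across texts, so this should be read as an orientation aid rather than as the paper's own statement.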

14,941 citations

Book
31 Jul 1985
TL;DR: The book updates the research agenda with chapters on possibility theory, fuzzy logic and approximate reasoning, expert systems, fuzzy control, fuzzy data analysis, decision making and fuzzy set models in operations research.
Abstract: Fuzzy Set Theory - And Its Applications, Third Edition is a textbook for courses in fuzzy set theory. It can also be used as an introduction to the subject. The character of a textbook is balanced with the dynamic nature of the research in the field by including many useful references to develop a deeper understanding among interested readers. The book updates the research agenda (which has witnessed profound and startling advances since its inception some 30 years ago) with chapters on possibility theory, fuzzy logic and approximate reasoning, expert systems, fuzzy control, fuzzy data analysis, decision making and fuzzy set models in operations research. All chapters have been updated. Exercises are included.

7,877 citations

Journal ArticleDOI
01 May 1981
TL;DR: This book covers detecting influential observations and outliers, detecting and assessing collinearity, and related applications and remedies.
Abstract: 1. Introduction and Overview. 2. Detecting Influential Observations and Outliers. 3. Detecting and Assessing Collinearity. 4. Applications and Remedies. 5. Research Issues and Directions for Extensions. Bibliography. Author Index. Subject Index.

4,948 citations

Book
30 Nov 1999
TL;DR: In this book, the basic CCR model, alternative DEA models, returns to scale, models with restricted multipliers, and the treatment of discretionary, non-discretionary and categorical variables are discussed.
Abstract: List of Tables. List of Figures. Preface. 1. General Discussion. 2. The Basic CCR Model. 3. The CCR Model and Production Correspondence. 4. Alternative DEA Models. 5. Returns to Scale. 6. Models with Restricted Multipliers. 7. Discretionary, Non-Discretionary and Categorical Variables. 8. Allocation Models. 9. Data Variations. Appendices. Index.

4,395 citations