
Showing papers by "William W. Cooper" published in 1994


Journal ArticleDOI
TL;DR: The authors present an early warning system for regulatory use in insolvency prediction based on artificial intelligence (in particular, a neural network model), using financial and other insurer operations data such as those available in the annual statements filed with the National Association of Insurance Commissioners (NAIC).
Abstract: Introduction. The definition and measurement of business risk has been a central theme of the financial and actuarial literature for years. Although the works of Borch (1970), Bierman (1960), Tinsley (1970), and Quirk (1961) dealt with the issue of corporate failure, their models did not lend themselves to empirical testing. Additionally, while the work by Altman (1968), Williams and Goodman (1971), Sinkey (1975), and Altman, Haldeman, and Narayanan (1977) attempted to predict bankruptcy by using discriminant analysis, their approaches were static in nature. Thus, the dynamics of a firm's operations and the changing economic environment were not included in their analyses. Santomero and Vinso (1977) and Vinso (1979) used actuarial ruin theory models to incorporate the dynamic aspects of the cash flow process for assessing the likelihood of bank insolvency. However, there appear to be mathematical problems with their development, making their results difficult to implement in practice. Insolvency within the insurance industry has become a major issue of public debate and concern, and the identification of potentially troubled firms has become a major regulatory research objective. Previous research on the topic of insurer insolvency prediction includes Ambrose and Seward (1988), BarNiv and Hershbarger (1990), BarNiv and MacDonald (1992), Barnoff (1993), and Harrington and Nelson (1986). BarNiv and MacDonald provide a particularly good review of the previous research techniques and results on rating and monitoring insolvency risk for insurers and can be consulted for further background on alternative approaches. The research presented in this article aims to construct an early warning system for regulatory use in insolvency prediction. The approach we utilize is based upon modern methods in artificial intelligence (in particular, a neural network model) and uses financial and other insurer operations data such as those available in the annual statements filed with the National Association of Insurance Commissioners. We also compare the insolvency prediction results obtained with the neural network methodology to those obtained via discriminant analysis, A. M. Best ratings, and the National Association of Insurance Commissioners' Insurance Regulatory Information System ratings.

Situational Overview. In the context of warning of pending insurer insolvency, the regulator has several sources of information. For example, there are reporting and rating services such as the A. M. Best Company, which rates 3,000 property-liability and life-health insurers. However, many of the insurers of interest to state regulators are not rated by Best's or by other rating services (e.g., Moody's or Standard and Poor's). In addition, the National Association of Insurance Commissioners (NAIC) has developed a system called the Insurance Regulatory Information System (IRIS). This system was designed to provide an early warning system for insurer insolvency based upon financial ratios derived from the regulatory annual statement. The IRIS system identifies insurers for further regulatory evaluation if four of the eleven (or twelve, in the case of life insurers) computed financial ratios for a particular company lie outside a given "acceptable" range of values. IRIS uses univariate tests, and the acceptable range of values is determined such that, for any given univariate ratio measure, only approximately 15 percent of all firms have results outside of the particular specified "acceptable" range. The adequacy of IRIS for predicting troubled insurers has been investigated empirically, and IRIS has been found not to be strongly predictive. For example, one small-scale comparison study using only five of the IRIS ratio variables from the NAIC data has shown that by using statistical methods it is possible to obtain substantial improvements over the IRIS insolvency prediction rates (cf. Barrese, 1990). More recently, the NAIC has implemented a supplementary system based on a number of additional financial ratios …
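To make the screening rule above concrete, here is a minimal sketch of an IRIS-style univariate screen. The ratio names, acceptable ranges, and example values are hypothetical placeholders, not the actual NAIC ratios or thresholds.

```python
# Hypothetical sketch of an IRIS-style univariate screen: an insurer is flagged
# for further regulatory evaluation when four or more of its annual-statement
# ratios fall outside the "acceptable" range fixed for each ratio.
# The ratio names and bounds below are illustrative only; the real IRIS system
# uses eleven ratios (twelve for life insurers) with NAIC-determined ranges.

ACCEPTABLE_RANGES = {
    "premium_to_surplus": (0.0, 3.0),         # hypothetical bounds
    "change_in_net_writings": (-0.33, 0.33),  # hypothetical bounds
    "two_year_operating_ratio": (0.0, 1.0),   # hypothetical bounds
    "investment_yield": (0.045, 0.10),        # hypothetical bounds
}

def flag_for_review(ratios: dict, ranges: dict = ACCEPTABLE_RANGES,
                    threshold: int = 4) -> bool:
    """Return True when `threshold` or more ratios lie outside their ranges."""
    exceptions = sum(
        1 for name, (lo, hi) in ranges.items()
        if name in ratios and not (lo <= ratios[name] <= hi)
    )
    return exceptions >= threshold

# Example: an insurer with only two out-of-range ratios is not flagged.
example = {"premium_to_surplus": 4.1, "change_in_net_writings": 0.5,
           "two_year_operating_ratio": 0.92, "investment_yield": 0.06}
print(flag_for_review(example))  # False (2 exceptions < 4)
```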

179 citations


Journal ArticleDOI
01 Dec 1994 - Top
TL;DR: Rapidly expanding uses of DEA have been accompanied by developments which, as discussed by the authors, have enhanced its power and enlarged its utility for additional applications, including simulation studies comparing DEA with competing forms of statistical regression.
Abstract: Rapidly expanding uses of DEA have been accompanied by developments which have enhanced its power and enlarged its utility for additional applications. Developments covered in the present paper include simulation studies comparing DEA with competing forms of statistical regression. Other studies covered show how these two approaches can be combined in complementary fashion. Another part of this paper deals with Chance Constrained Programming formulations, which incorporate probabilistic elements into DEA. Included also are discussions of statistical characterizations with accompanying tests of statistical significance for DEA efficiency evaluations. The paper concludes with uses of DEA in “discovery processes,” that is, processes that need strengthening (and encouragement) in contemporary social science and management science research. Suggestions are made for additional research on further developments which extend the uses of DEA to provide new approaches in economics (including econometrics), management, and psychology, and an Appendix introduces new or recently developed efficiency measures for use in DEA.
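As a generic illustration of how probabilistic elements can enter DEA (the specific chance-constrained formulations surveyed in the paper may differ in detail), one common device is to require an envelopment constraint to hold only with a prescribed probability when the data are treated as random:

```latex
% Illustrative chance-constrained DEA output constraint (a generic form, not
% necessarily the exact formulation covered in the paper): the envelopment of
% DMU_o's r-th output, with random outputs \tilde{y}_{rj}, is required to hold
% with probability at least 1 - \alpha_r.
\[
P\!\left( \sum_{j=1}^{n} \lambda_j \,\tilde{y}_{rj} \;\ge\; \tilde{y}_{ro} \right)
\;\ge\; 1 - \alpha_r ,
\qquad r = 1,\dots,s, \qquad \lambda_j \ge 0 .
\]
```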

48 citations


BookDOI
01 Jan 1994
TL;DR: Among the contributions in this volume are the use of artificial neural networks for estimation of the decision surface in first price sealed bid auctions and the modeling and computation of generalized goal programming problems.
Abstract: Introduction W.W. Cooper, A.B. Whinston.
Part I: Experimental and Computational Methods for Auction Mechanisms. Sealed Bid Auctions and Economic Market Games G.L. Thompson. The Use of Artificial Neural Networks for Estimation of Decision Surface in First Price Sealed Bid Auctions R.E. Dorsey, J.D. Johnson, M.V. Van Boening. Designing a Real Time Computer Assisted Auction for Natural Gas Networks S. Rassenti, V. Smith, K. McCabe.
Part II: Modeling Computational Economic Systems. Nonconvexities in Stochastic Control Models: an Analysis H.M. Amman, D.A. Kendrick. The Modeling and Computation of Generalized Goal Programming Problems A. Nagurney, S. Thore, Jie Pan. Polyhedral Assurance Regions with Linked Constraints R.G. Thompson, R.M. Thrall. Integrative Asset-Liability Planning Using Large Scale Stochastic Optimization J. Mulvey.
Part III: Economic Modeling and Computational Systems. Economic Decision Theory as a Paradigm for the Construction and Evaluation of Algorithms and Information Systems J.C. Moore, W.B. Richmond, A.B. Whinston. A General Economic Equilibrium Model of Distributed Computing D.O. Stahl, A.B. Whinston. International Trade, Capital Flows and Sectoral Analysis: Formulation and Solution of Intertemporal Equilibrium Models A.S. Manne, T.F. Rutherford. Stochastic Control of Nonlinear Economic Models R. Neck, J. Matulka.

45 citations


Book ChapterDOI
01 Jan 1994
TL;DR: Data Envelopment Analysis (DEA) is a body of concepts and methodologies that have now been incorporated in a collection of models with accompanying interpretive possibilities, as discussed by the authors.
Abstract: Data Envelopment Analysis (DEA) is a body of concepts and methodologies that have now been incorporated in a collection of models with accompanying interpretive possibilities as follows:
1. The CCR ratio model (1978) (i) yields an objective evaluation of overall efficiency and (ii) identifies the sources and estimates the amounts of the thus-identified inefficiencies.
2. The BCC model (1984) distinguishes between technical and scale inefficiencies by (i) estimating pure technical efficiency at the given scale of operation and (ii) identifying whether increasing, decreasing, or constant returns-to-scale possibilities are present for further exploitation.
3. The Multiplicative models (Charnes et al., 1982, 1983) provide (i) a log-linear envelopment or (ii) a piecewise Cobb-Douglas interpretation of the production process (by reduction to the antecedent 1981 additive model of Charnes, Cooper, and Seiford).
4. The Additive model (as better rendered in Charnes et al., 1985) and the extended Additive model (Charnes et al., 1987) (i) relate DEA to the earlier Charnes-Cooper (1959) inefficiency analysis and in the process (ii) relate the efficiency results to the economic concept of Pareto optimality as interpreted in the still earlier work of T. Koopmans (1949) in the volume that published the proceedings of the first conference on linear programming.
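For reference, the CCR ratio model summarized in item 1 can be written as the following fractional program; this is a standard statement of the 1978 formulation, with notation following common DEA usage rather than this particular chapter:

```latex
% CCR ratio model for the evaluated unit DMU_o (Charnes, Cooper, and Rhodes, 1978):
% choose output weights u_r and input weights v_i to maximize the ratio of
% weighted outputs to weighted inputs for DMU_o, with no DMU allowed a ratio
% greater than one.
\[
\max_{u,v}\; h_o \;=\; \frac{\sum_{r=1}^{s} u_r \, y_{ro}}{\sum_{i=1}^{m} v_i \, x_{io}}
\quad \text{subject to} \quad
\frac{\sum_{r=1}^{s} u_r \, y_{rj}}{\sum_{i=1}^{m} v_i \, x_{ij}} \;\le\; 1,
\;\; j = 1,\dots,n, \qquad u_r,\, v_i \;\ge\; \varepsilon > 0 .
\]
```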

41 citations


Book ChapterDOI
01 Jan 1994
TL;DR: This article finds that even in a heavily researched area such as the advertising-sales response curve there is little conclusive evidence as to the shape of these curves, and that all such investigations are limited by virtue of ignoring interactions of marketing mix variables, not all of which are recognized a priori or a posteriori.
Abstract: Measurement and evaluation of sales response, in a multiattribute sense, for a product in the usual marketing environment of competing brands has been and continues to be an exceedingly complex and difficult task. It is made more so by the inability to obtain either comprehensive data or sample data that are free from noise factors, not all of which are recognized a priori or a posteriori. For example, even a casual review of the marketing literature would lead one to conclude that even in a heavily researched area such as the advertising-sales response curve there is little conclusive evidence as to the shape of these curves, and that all such investigations are limited by virtue of ignoring interactions of marketing mix variables. These studies also, of course, treat only one response variable at a time.

20 citations


Book ChapterDOI
01 Jan 1994
TL;DR: The basic DEA models discussed in the previous chapter are associated with the way the returns-to-scale, the geometry of the envelopment surface, and the efficient projections are identified.
Abstract: The basic DEA models discussed in the previous chapter are associated with the way the returns-to-scale, the geometry of the envelopment surface, and the efficient projections are identified. This array of choices provides considerable flexibility, which can be further increased by incorporating refinements or extensions to the basic theory. These valuable additions to the methodology of DEA allow one to fine-tune an analysis to reflect managerial or organizational factors, sharpen efficiency estimates, and/or overcome inconsistencies.

8 citations


Book ChapterDOI
01 Jan 1994
TL;DR: This chapter discusses the process of conducting DEA studies and various uses such as exploratory data analysis, implementation of DEA solutions, and recent model formulations, as well as caveats in applying the method.
Abstract: As is true with the application of any analytical approach to the “art of reckoning” (Eilon, 1984), the use of data envelopment analysis (DEA) requires knowledge about formulation of models, choice of variables, underlying assumptions, data representation, interpretation of results, and knowledge of limitations. In this chapter, we discuss the process of conducting DEA studies and various uses such as exploratory data analysis, implementation of DEA solutions, and recent model formulations, as well as caveats in applying the method. It should be noted that this chapter represents the accumulated experience of many DEA practitioners and researchers in applying DEA. It represents another concrete example of how the practice of DEA not only shaped the evolution of theoretical and model development but also informed the process and understanding of DEA analysis.

6 citations



Book ChapterDOI
01 Jan 1994
TL;DR: At this stage in its development, the maturation of DEA practice and its wider acceptance will be facilitated by standardizing DEA notation and reference to models and providing adequate information regarding the method of computation.
Abstract: It is probably a truism that the lack of simple access to reliable DEA software packages has hampered the diffusion and wider application of DEA analyses. Although, in principle, DEA solutions can be obtained with conventional linear programming software, in reality this task can be time-consuming. In principle, DEA solutions require the calculation of as many linear programs as there are DMUs. When using ordinary linear programming software packages, this task can become daunting even for small problems (e.g., 100 DMUs). DEA calculations with standard LP software packages are also prone to inaccurate classification of the improperly efficient and nearly efficient DMUs because of the need to calibrate, perhaps by trial and error, the appropriate magnitude of the non-Archimedean infinitesimal that introduces lower-bound constraints on all variables (see Charnes, Cooper, and Rhodes, 1979; Lewin and Morey, 1981; see also chapter 4 of this volume and Ali and Seiford (1993) for a comparison of results with different values of the non-Archimedean infinitesimal). Specialized DEA codes eliminate the need to calibrate the non-Archimedean infinitesimal by using a preemptive approach. In addition, specialized DEA codes automate the recursive running of the linear programs, the scaling of data, and the choice of models (orientation and returns to scale). Most of the empirical DEA papers to date (including many of the chapters in this book) do not provide information on how the DEA calculations were made (e.g., standard LP packages or specialized DEA code). At this stage in its development, the maturation of DEA practice and its wider acceptance will, in our judgment, be facilitated by 1) standardizing DEA notation and reference to models (e.g., as developed in chapters 2 and 3) and 2) providing adequate information regarding the method of computation.
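The "one linear program per DMU" point can be illustrated with a minimal sketch built on a general-purpose LP solver. This is the standard input-oriented CCR envelopment form under constant returns to scale, not one of the specialized codes the chapter discusses, and the second (slack-maximizing) stage tied to the non-Archimedean infinitesimal is omitted for brevity; the data at the end are invented for illustration.

```python
# Minimal sketch: solve one input-oriented CCR envelopment LP per DMU with a
# generic LP solver (scipy), as described in the text. Specialized DEA codes
# automate this loop, the scaling of data, and the choice of model; the second
# (slack-maximizing) stage involving the non-Archimedean infinitesimal is omitted.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiencies(X, Y):
    """X: inputs (m x n), Y: outputs (s x n), one column per DMU.
    Returns the CCR efficiency score theta for each DMU."""
    m, n = X.shape
    s, _ = Y.shape
    scores = []
    for o in range(n):                       # one LP per DMU
        c = np.r_[1.0, np.zeros(n)]          # minimize theta; vars = (theta, lambda)
        # inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[:, [o]], X])
        b_in = np.zeros(m)
        # outputs: -sum_j lambda_j y_rj <= -y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y])
        b_out = -Y[:, o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return scores

# Tiny illustrative data set: 2 inputs, 1 output, 4 DMUs (columns).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 4.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(t, 3) for t in ccr_efficiencies(X, Y)])
```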

3 citations