Author

Abraham Charnes

Bio: Abraham Charnes is an academic researcher from the University of Texas at Austin. The author has contributed to research in topics including linear programming and data envelopment analysis. The author has an h-index of 57 and co-authored 222 publications receiving 63,459 citations. Previous affiliations of Abraham Charnes include Carnegie Institution for Science and Northwestern University.


Papers
Journal ArticleDOI
TL;DR: In this paper, the authors provide a model to deal with multi-dimensional social goals on a regionalized as well as national basis and to relate these to each other in the context of a national economic system.
Abstract: The objective of this paper is to provide a model to deal with multi-dimensional social goals on a regionalized as well as national basis and to relate these to each other in the context of a national economic system. Input-Output Interindustry Analysis is therefore regionalized and reformulated in a way that connects it to national economic policy variables in a Keynesian framework. Possible multiple objectives and controls are examined, and their possible further developments are sketched in a context of regionalized social goals that differ from region to region and also from time to time. Various proposals for revenue sharing are exhibited and interpreted in the light of the possibilities this model admits. Special attention is given to President Nixon's plan and that of the Advisory Commission on Intergovernmental Relations (ACIR), in contrast to a multi-goal dynamic and adaptive control-incentive scheme with “saturating” indices of goal attainment. The latter, suggested at the end of this paper, is designed so that it can permit variation between regions while maintaining minimum levels of comparable attainment on goals that are common to more than one region.
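The multi-goal control-incentive scheme is described only qualitatively in the abstract. As a hedged illustration of the goal-programming machinery Charnes and Cooper are closely associated with (hypothetical targets and coefficients, not the paper's regional model), attainment of several goals can be modeled with under- and over-deviation variables and solved as a single LP:

```python
import numpy as np
from scipy.optimize import linprog

# Two hypothetical activity levels x1, x2 and two goals:
#   goal 1: x1 + x2   should hit a target of 10
#   goal 2: 2*x1 + x2 should hit a target of 12
# d*_minus / d*_plus measure under- and over-attainment of each goal.
# Variable order: [x1, x2, d1_minus, d1_plus, d2_minus, d2_plus]
c = np.array([0, 0, 1, 1, 1, 1], dtype=float)   # minimise total deviation
A_eq = np.array([[1, 1, 1, -1, 0, 0],           # x1 + x2 + d1- - d1+ = 10
                 [2, 1, 0, 0, 1, -1]], float)   # 2x1 + x2 + d2- - d2+ = 12
b_eq = np.array([10.0, 12.0])
res = linprog(c, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
```

Here both goals happen to be exactly attainable (e.g. x1 = 2, x2 = 8), so the minimum total deviation is zero; with conflicting goals the optimal deviations quantify the unavoidable shortfalls.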

3 citations

Book ChapterDOI
01 Jan 1994
TL;DR: At this stage in its development, the maturation of DEA practice and its wider acceptance will be facilitated by standardizing DEA notation and reference to models and providing adequate information regarding the method of computation.
Abstract: It is probably a truism that the lack of simple access to reliable DEA software packages has hampered the diffusion and wider application of DEA analyses. Although, in principle, DEA solutions can be obtained with conventional linear programming software, in reality this task can be time-consuming. In principle, DEA solutions require the calculation of as many linear programs as there are DMUs. When using ordinary linear programming software packages, this task can become daunting even for small problems (e.g., 100 DMUs). DEA calculations with standard LP software packages are also prone to inaccurate classification of the improperly efficient and nearly efficient DMUs because of the need to calibrate, perhaps by trial and error, the appropriate magnitude of the non-Archimedean infinitesimal that introduces lower-bound constraints on all variables (see Charnes, Cooper, and Rhodes, 1979; Lewin and Morey, 1981; see also chapter 4 of this volume and Ali and Seiford (1993) for a comparison of results with different values for the non-Archimedean infinitesimal). Specialized DEA codes eliminate the need to calibrate the non-Archimedean infinitesimal by a preemptive approach. In addition, specialized DEA codes automate the recursive running of LP programs, the scaling of data, and the choice of models (orientation and returns to scale). Most of the empirical DEA papers to date (including many of the chapters in this book) do not provide information on how the DEA calculations were made (e.g., standard LP packages or specialized DEA code). At this stage in its development, the maturation of DEA practice and its wider acceptance will, in our judgment, be facilitated by 1) standardizing DEA notation and reference to models (e.g., as developed in chapters 2 and 3) and 2) providing adequate information regarding the method of computation.
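As the abstract notes, a DEA run solves one linear program per DMU. A minimal sketch of the input-oriented CCR envelopment LP with toy data (hypothetical values; it ignores the non-Archimedean infinitesimal discussed above) using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (hypothetical): 5 DMUs, 2 inputs, 1 output
X = np.array([[2, 3], [4, 2], [3, 3], [5, 4], [4, 5]], dtype=float)  # inputs
Y = np.array([[1], [1], [1], [1], [1]], dtype=float)                 # outputs

def ccr_efficiency(o, X, Y):
    """Input-oriented CCR envelopment LP for DMU o:
    min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                     sum_j lam_j y_j >= y_o,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                    # minimise theta; vars = [theta, lam]
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])  # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])   # -sum_j lam_j y_rj <= -y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]

scores = [ccr_efficiency(o, X, Y) for o in range(len(X))]
```

Each score θ* lies in (0, 1], with θ* = 1 for (weakly) efficient DMUs; a production DEA code would add the second-stage slack maximization that the infinitesimal approximates, which is exactly the calibration issue the abstract describes.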

3 citations

Journal ArticleDOI
01 Jun 1984
TL;DR: It is shown that no topology or continuity is needed as in Kortanek-Strojwas and that the characterization extends to weighted (l′)-balls with any ordered scalar field and that a Chebyshev ball theorem is false.
Abstract: The extreme point characterization of the (l′)-ball of a generalized finite sequence space by Kortanek and Strojwas was accomplished only for real scalars and by continuity considerations. We show that, unlike in Kortanek-Strojwas, no topology or continuity is needed, and that the characterization extends to weighted (l′)-balls over any ordered scalar field. We show that a Chebyshev ball theorem is false, since such balls have no extreme points. By generalizing the LIEP theorem, useful projections of the ball are proved to be convex hulls of their extreme points.
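For intuition (standard sequence-space facts consistent with the abstract, not the paper's full ordered-scalar-field generality): in the space of finitely nonzero sequences, the weighted (l′)-ball has exactly the signed, scaled unit vectors as extreme points, while the Chebyshev (sup-norm) ball has none:

```latex
% Weighted l1 ball (finitely nonzero sequences, weights w_i > 0):
\operatorname{ext}\Bigl\{x \;:\; \textstyle\sum_i w_i\,\lvert x_i\rvert \le 1\Bigr\}
  \;=\; \bigl\{\pm\, e_i / w_i\bigr\}.
% Sup-norm (Chebyshev) ball: any x with \|x\|_\infty \le 1 has some
% coordinate x_k = 0 (only finitely many coordinates are nonzero), so
x \;=\; \tfrac12\,(x + e_k) \;+\; \tfrac12\,(x - e_k),
% a proper midpoint of two ball elements; hence the ball has no extreme points.
```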

2 citations

01 Feb 1984
TL;DR: An information-theoretic method for nonparametric density estimation that guarantees a unimodal resulting density and can handle both data-derived and non-data-derived information.
Abstract: This document presents an information-theoretic method for nonparametric density estimation which guarantees that the resulting density is unimodal. The method takes input data in the form of moment or quantile information and consequently can handle both data-derived and non-data-derived information. In the non-data-derived situation it yields a method for obtaining unimodal Bayesian prior distributions.
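The paper's specific unimodality-guaranteeing construction is not reproduced in this abstract. As a hedged sketch of the general information-theoretic idea it describes (hypothetical moment targets), one can maximize entropy on a grid subject to moment constraints by minimizing the convex dual over Lagrange multipliers:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Hypothetical moment inputs: E[x] = 0.3 and E[x^2] = 0.13 (variance 0.04)
grid = np.linspace(0.0, 1.0, 201)
feats = np.vstack([grid, grid ** 2])   # moment functions evaluated on the grid
targets = np.array([0.3, 0.13])

def dual(lam):
    # Convex dual of the discrete max-entropy problem:
    # log partition function minus lam . targets
    return logsumexp(lam @ feats) - lam @ targets

lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
# Max-entropy density p_i proportional to exp(lam . f(x_i))
p = np.exp(lam @ feats - logsumexp(lam @ feats))
```

With a negative multiplier on the second moment the fitted density is log-concave, hence unimodal; the paper's method guarantees unimodality directly rather than relying on this happening.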

2 citations

01 Mar 1973
TL;DR: In this paper, integer linear programming problems are solved by transforming them into equivalent integer interval linear programming problems and applying a branch-and-bound method to the equivalent problem.
Abstract: The paper presents a new algorithm for solving integer linear programming problems. First the problem is transformed into an equivalent integer interval linear programming problem; then a branch-and-bound method is applied to solve the equivalent problem. At each stage an interval linear programming problem of the most trivial type is to be solved. Gains in solution efficiency over other integer programming methods are anticipated. The method is equally valid for mixed integer problems.
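The abstract does not detail the interval-LP transformation, but the branch-and-bound step it applies can be illustrated generically (hypothetical toy problem, with a plain LP relaxation standing in for the paper's trivial interval subproblems):

```python
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, tol=1e-6):
    """Minimise c @ x over integer x with A_ub @ x <= b_ub.
    Generic LP-relaxation branch and bound (not the paper's interval method)."""
    best_val, best_x = math.inf, None
    stack = [list(bounds)]
    while stack:
        bnds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bnds, method="highs")
        if not res.success or res.fun >= best_val - tol:
            continue                      # infeasible, or cannot beat incumbent
        frac = [i for i, v in enumerate(res.x) if abs(v - round(v)) > tol]
        if not frac:                      # all-integer relaxation: new incumbent
            best_val, best_x = res.fun, np.round(res.x)
            continue
        i, v = frac[0], res.x[frac[0]]
        lo, hi = bnds[i]
        left, right = list(bnds), list(bnds)
        left[i] = (lo, math.floor(v))     # branch x_i <= floor(v)
        right[i] = (math.ceil(v), hi)     # branch x_i >= ceil(v)
        stack += [left, right]
    return best_val, best_x

# Toy problem: maximise 5x + 4y s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0 integer
val, x = branch_and_bound(np.array([-5.0, -4.0]),
                          np.array([[6.0, 4.0], [1.0, 2.0]]),
                          np.array([24.0, 6.0]),
                          [(0, None), (0, None)])
```

Each node solves one LP; integer-feasible relaxations update the incumbent, and nodes whose bound cannot beat it are pruned, exactly the mechanism the abstract's method specializes to interval subproblems.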

2 citations


Cited by
Journal ArticleDOI
TL;DR: A nonlinear (nonconvex) programming model provides a new definition of efficiency for use in evaluating activities of not-for-profit entities participating in public programs and methods for objectively determining weights by reference to the observational data for the multiple outputs and multiple inputs that characterize such programs.

25,433 citations

Journal ArticleDOI
TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs.
Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."
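One standard way to realize the technical/scale separation described above is to solve each DMU's envelopment LP twice: once under constant returns (CCR) and once with a convexity constraint added (the BCC variable-returns form, the envelopment dual of the abstract's extra variable); scale efficiency is the ratio of the two scores. A sketch with hypothetical single-input, single-output data:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 DMUs, one input, one output
X = np.array([[2.0], [4.0], [6.0], [8.0]])   # inputs
Y = np.array([[1.0], [3.0], [4.0], [4.0]])   # outputs

def efficiency(o, X, Y, vrs=False):
    """Input-oriented envelopment LP for DMU o; vrs=True adds the BCC
    convexity constraint sum(lambda) = 1."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                           # vars = [theta, lam]
    A_ub = np.vstack([np.hstack([-X[o].reshape(m, 1), X.T]),   # inputs
                      np.hstack([np.zeros((s, 1)), -Y.T])])    # outputs
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    A_eq = b_eq = None
    if vrs:
        A_eq = np.hstack([[0.0], np.ones(n)]).reshape(1, -1)
        b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

ccr = [efficiency(o, X, Y) for o in range(4)]
bcc = [efficiency(o, X, Y, vrs=True) for o in range(4)]
scale = [c / b for c, b in zip(ccr, bcc)]    # scale efficiency = CCR / BCC
```

A DMU with BCC score 1 but CCR score below 1 is technically efficient yet operating at an inefficient scale, the distinction the paper's added variable is designed to diagnose.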

14,941 citations

Book
31 Jul 1985
TL;DR: The book updates the research agenda with chapters on possibility theory, fuzzy logic and approximate reasoning, expert systems, fuzzy control, fuzzy data analysis, decision making and fuzzy set models in operations research.
Abstract: Fuzzy Set Theory - And Its Applications, Third Edition is a textbook for courses in fuzzy set theory. It can also be used as an introduction to the subject. The character of a textbook is balanced with the dynamic nature of the research in the field by including many useful references to develop a deeper understanding among interested readers. The book updates the research agenda (which has witnessed profound and startling advances since its inception some 30 years ago) with chapters on possibility theory, fuzzy logic and approximate reasoning, expert systems, fuzzy control, fuzzy data analysis, decision making and fuzzy set models in operations research. All chapters have been updated. Exercises are included.
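For readers new to the subject, the standard (Zadeh) fuzzy set operations the textbook builds on can be shown in a few lines (hypothetical membership functions over a toy universe):

```python
import numpy as np

# Hypothetical universe (e.g. temperature in degrees C) and membership functions
x = np.linspace(0, 40, 41)
warm = np.clip((x - 10) / 10, 0, 1)   # ramps from 0 to 1 over 10..20
hot  = np.clip((x - 25) / 10, 0, 1)   # ramps from 0 to 1 over 25..35

union        = np.maximum(warm, hot)  # standard fuzzy union (max)
intersection = np.minimum(warm, hot)  # standard fuzzy intersection (min)
complement_w = 1.0 - warm             # standard fuzzy complement
```

Note that, unlike crisp sets, max(warm, 1 - warm) can be strictly below 1 (e.g. at a membership grade of 0.5): the law of the excluded middle fails, which is one of the defining features the textbook develops.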

7,877 citations

Journal ArticleDOI
01 May 1981
TL;DR: This book discusses detecting influential observations and outliers, detecting and assessing collinearity, and applications and remedies.
Abstract: 1. Introduction and Overview. 2. Detecting Influential Observations and Outliers. 3. Detecting and Assessing Collinearity. 4. Applications and Remedies. 5. Research Issues and Directions for Extensions. Bibliography. Author Index. Subject Index.
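Two of the chapter topics, influence and collinearity diagnostics, reduce to simple matrix computations. A sketch on simulated data (conventional hat-matrix leverage and singular-value condition indices; not necessarily the book's exact scaling conventions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])

# Influence: leverage = diagonal of the hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.solve(X.T @ X, X.T)
leverage = np.diag(H)

# Collinearity: condition indices = largest singular value of the
# column-equilibrated design matrix divided by each singular value
Xs = X / np.linalg.norm(X, axis=0)
sv = np.linalg.svd(Xs, compute_uv=False)
cond_indices = sv[0] / sv
```

High-leverage rows flag observations that pull the fit toward themselves, and a large condition index (a common rule of thumb is above 30) signals a near-dependency among the columns, here deliberately planted between x1 and x2.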

4,948 citations

Book
30 Nov 1999
TL;DR: This book covers the basic CCR model, alternative DEA models, returns to scale, models with restricted multipliers, and the treatment of discretionary, non-discretionary and categorical variables.
Abstract: List of Tables. List of Figures. Preface. 1. General Discussion. 2. The Basic CCR Model. 3. The CCR Model and Production Correspondence. 4. Alternative DEA Models. 5. Returns to Scale. 6. Models with Restricted Multipliers. 7. Discretionary, Non-Discretionary and Categorical Variables. 8. Allocation Models. 9. Data Variations. Appendices. Index.

4,395 citations