
Showing papers in "Expert Systems With Applications in 1990"


Journal ArticleDOI
TL;DR: The goal of EVA is to build an integrated set of generic tools to validate any knowledge-based system written in any expert system shell, such as ART, CLIPS, OPS5, KEE, and others.
Abstract: The Expert Systems Validation Associate (EVA) is a validation system which has been under development at the Lockheed Artificial Intelligence Center since 1986. The goal of EVA is to build an integrated set of generic tools to validate any knowledge-based system written in any expert system shell, such as ART, CLIPS, OPS5, KEE, and others. EVA contains tools such as the structure checker, extended structure checker, logic checker, extended logic checker, semantics checker, omission checker, rule refiner, control checker, behavior verifier, test case generator, uncertainty checker, rule satisfiability checker, and model-based verifier. In this paper, we describe these tools, which have been and are being developed.
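As a rough illustration of what one such tool does (the rulebase, facts, and function below are invented for this sketch, not EVA's actual code or API), a structure checker can flag rule premises that no rule concludes and no input supplies:

```python
# Minimal sketch, not EVA's implementation: flag rules whose premises can
# never be satisfied because no other rule concludes them and they are not
# known input facts (a typical structural omission).

rules = {  # hypothetical rulebase: conclusion <- premises
    "engine_fault": ["no_start", "battery_ok"],
    "battery_ok":   ["lights_on"],
    "tow_car":      ["engine_fault", "remote_location"],
}
input_facts = {"no_start", "lights_on"}  # facts supplied by the user

def unreachable_premises(rules, input_facts):
    conclusions = set(rules)
    dangling = {}
    for head, body in rules.items():
        missing = [p for p in body if p not in conclusions and p not in input_facts]
        if missing:
            dangling[head] = missing
    return dangling

print(unreachable_premises(rules, input_facts))
# {'tow_car': ['remote_location']} -> 'remote_location' is neither an input
# fact nor concluded by any rule.
```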

99 citations


Journal ArticleDOI
TL;DR: A set of acceptability principles for a rulebase is defined, which go beyond mathematical correctness concerns to distribution and simplicity conditions that can signal the existence of errors or awkwardness in the rules.
Abstract: This paper defines a set of acceptability principles for a rulebase. The principles go beyond mathematical correctness concerns to distribution and simplicity conditions that can signal the existence of errors or awkwardness in the rules. The principles are Consistency, Completeness, Irredundancy, Connectivity, and Distribution. The intent of these principles is to assist the rulebase designer in constructing a rulebase and validating its behavior. The five principles are implemented by mathematical and computational criteria that specify algorithms for analyzing rulebases. The Consistency criteria address the logical consistency of the rules, and can rightly be considered “correctness” criteria. The Completeness and Irredundancy criteria preclude oversights in specifications and redundancy in the rules, and are more like “reasonability” criteria for the terms in the rules. The Connectivity criteria concern the inference system defined by the rules, and are like completeness and irredundancy criteria for the inference system. Finally, the Distribution criteria are “esthetic” criteria for the simplicity of the rules and the distinctions they cause, as well as the distribution of the rules and the values implied by them. These procedures do not solve the (hard) problem of choosing a representation for the important features of the system being modeled, and turning the characteristics of the features into rules. They only allow a set of rules to be checked for the various criteria, so that many commonly occurring specification errors can be caught quickly. This paper discusses the formation of rulebases from a set of rules, not the formulation of rules from a system under study.
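As an invented miniature of two of the five principles (the rule encoding and checks are my own sketch, not the paper's algorithms), Consistency flags identical conditions with contradictory conclusions, and Irredundancy flags rules subsumed by weaker ones:

```python
# Toy rulebase: each rule is (set_of_conditions, conclusion).
rules = [
    ({"fever", "rash"}, "measles"),
    ({"fever", "rash"}, "not_measles"),       # conflicts with the rule above
    ({"fever", "rash", "cough"}, "measles"),  # subsumed by the first rule
]

def consistency_violations(rules):
    # Same conditions, contradictory conclusions.
    return [(a, b) for i, a in enumerate(rules) for b in rules[i+1:]
            if a[0] == b[0] and a[1] != b[1]]

def redundancy_violations(rules):
    # A rule is redundant if a weaker rule (a proper subset of its
    # conditions) already yields the same conclusion.
    return [(a, b) for a in rules for b in rules
            if a is not b and a[0] < b[0] and a[1] == b[1]]

print(consistency_violations(rules))  # first pair of rules
print(redundancy_violations(rules))   # first and third rules
```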

56 citations


Journal Article
TL;DR: It is found that for beam and plate elements, whose nodal degrees of freedom have non-uniform dimensions, the error caused by the semi-analytic sensitivity analysis method is almost proportional to the square of the number of elements.
Abstract: The integration of FEM, CAD and optimization is becoming an important area of research in the field of computational mechanics. Implementing the integration, one of the main difficulties encountered is to perform a sensitivity analysis for the structural response with respect to any design variable. It has been shown in the literature that the semi-analytic sensitivity analysis method provides an ideal tool to solve this difficulty. Nevertheless, the accuracy of the method remains to be studied. The present paper addresses the accuracy problem. Structures composed of beam elements, 3-node constant plane stress triangular elements, 4-node plane stress isoparametric elements, 8-node plane stress isoparametric elements, 8-node solid elements, and 3-node triangular bending elements are studied. It is found that for beam and plate elements, whose nodal degrees of freedom have non-uniform dimensions, the error caused by the semi-analytic sensitivity analysis method is almost proportional to the square of the number of elements. An alternative forward-backward finite difference scheme solves the problem nicely. For the other elements, the accuracy of the semi-analytic sensitivity analysis is acceptable. The results obtained in the present paper justify the usage of the semi-analytic sensitivity analysis method.
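For context, the textbook relations behind the accuracy issue (a standard formulation, not equations quoted from this paper) contrast the forward difference used inside the semi-analytic method with the forward-backward (central) alternative:

```latex
% Semi-analytic sensitivity: differentiate the equilibrium equations
% K(x)U = F(x) analytically, but replace dK/dx by a finite difference.
\frac{\partial U}{\partial x} \approx
  K^{-1}\!\left(\frac{\partial F}{\partial x} - \frac{\Delta K}{\Delta x}\,U\right),
\qquad
\underbrace{\frac{K(x+\Delta x)-K(x)}{\Delta x}}_{\text{forward, }O(\Delta x)}
\quad \text{vs.} \quad
\underbrace{\frac{K(x+\Delta x)-K(x-\Delta x)}{2\,\Delta x}}_{\text{forward-backward, }O(\Delta x^{2})}
```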

53 citations


Journal ArticleDOI
TL;DR: This paper discusses evaluation strategies from several points of view: classification, validation, verification, and performance analysis, noting that formal analysis is replacing (or enhancing) traditional testing of conventional software.
Abstract: The use of expert systems has increased rapidly during the last few years. There is a growing need for systematic and reliable techniques for evaluating both expert system shells and complete expert systems. In this paper, we discuss evaluation strategies from several points of view: classification, validation, verification, and performance analysis. We note that there are several respects in which expert system evaluation is similar to software evaluation in general and, consequently, that it may be possible to apply established software engineering techniques to expert system evaluation. In particular, formal analysis is replacing (or enhancing) traditional testing of conventional software. We believe that increasing formalization is an important trend and we indicate ways in which it could be carried further.

51 citations


Journal ArticleDOI
TL;DR: Expert systems and other knowledge-based systems (KBSs) avoid some familiar classes of program errors while introducing new ones, so specifying and testing them depends on quickly discovering those new error kinds and the methods that expose and fix them.
Abstract: Computer science has always been concerned with the problem of organizing and representing knowledge to make it effectively computable. Expert systems or knowledge-based systems (KBSs) are not dramatic departures from other computer programs. Rather, they bring together and extend a number of the innovations and concerns that characterize modern computer science. However, like the classes of special-application programs before them, or like programs written in new languages, they avoid some of the errors with which we are familiar and create new ones that we need to discover. Part of our ability to specify and test such programs adequately depends upon our ability to discover as quickly as possible (a) the kinds of errors and problems that their new constructs, goals, and processing qualities introduce and (b) the methods, analyses, and tests that will allow one to identify these errors and to fix them.

45 citations


PatentDOI
TL;DR: In this paper, a parallel processing apparatus turns a processing state discrimination flag off, increases a program count by 1 at a time, reads out one instruction, and processes that instruction in an arithmetic unit.
Abstract: When executing successive processing of conventional software, a parallel processing apparatus turns a processing state discrimination flag off, increases a program count by 1 at a time, reads out one instruction, and processes that instruction in an arithmetic unit. When executing parallel processing for new software, the parallel processing apparatus turns the processing state discrimination flag on, increases the program count by m at a time, reads out m instructions, and exercises parallel processing over the m instructions in m arithmetic units. In order to select either of the above two kinds of processing, a changeover instruction having the function of toggling the processing state discrimination flag is added. Instructions are processed in the arithmetic unit(s) in accordance with the processing state discrimination flag. In this way, successive processing and parallel processing are made compatible and are selectively executed. Further, a variant of the apparatus that emphasizes compatibility with the great part of existing software reads out m instructions without using the processing state flag, decodes the m instructions, checks whether a branch instruction exists in the k-th instruction, then executes the first to the (k+1)-th instructions in k+1 arithmetic units and prevents execution of the (k+2)-th to m-th instructions. By executing the k-th branch instruction, the parallel processing apparatus calculates an address nm+h of its branch destination, performs a calculation to check whether the branch condition is satisfied, then prevents execution of instructions at addresses nm to nm+h-1 and executes instructions at addresses nm+h to (n+1)m. In this way, the parallel processing apparatus executes a plurality of instructions and successively executes branch instructions.
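A toy software simulation of the flag-controlled dispatch (my own sketch; the patent describes hardware, and all names here are invented): with the flag off, one instruction is fetched per step; with it on, m instructions are fetched and issued to m simulated units at once.

```python
def run(program, parallel_flag, m=4):
    pc, trace = 0, []
    while pc < len(program):
        if not parallel_flag:              # successive processing: step by 1
            trace.append([program[pc]])
            pc += 1
        else:                              # parallel processing: step by m
            trace.append(program[pc:pc + m])
            pc += m
    return trace

prog = [f"insn{i}" for i in range(8)]
print(run(prog, parallel_flag=False))  # eight single-issue steps
print(run(prog, parallel_flag=True))   # two steps of four instructions each
```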

32 citations


Journal ArticleDOI
TL;DR: The general conclusion is that algorithms integrating statistical and inductive learning concepts are likely to make the most improvement in performance.
Abstract: Inductive learning is a method for automated knowledge acquisition. It converts a set of training data into a knowledge structure. In the process of knowledge induction, statistical techniques can play a major role in improving performance. In this paper, we investigate the competition and integration between the traditional statistical and the inductive learning methods. First, the competition between these two approaches is examined. Then, a general framework for integrating these two approaches is presented. This framework suggests three possible integrations: (1) statistical methods as preprocessors for inductive learning, (2) inductive learning methods as preprocessors for statistical classification, and (3) the combination of the two methods to develop new algorithms. Finally, empirical evidence concerning these three possible integrations is discussed. The general conclusion is that algorithms integrating statistical and inductive learning concepts are likely to make the most improvement in performance.
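A minimal sketch of integration (1), statistical methods as preprocessors for inductive learning, using present-day library calls as stand-ins (the dataset and parameters are illustrative, not from the paper):

```python
# Univariate F-tests screen the features before a decision-tree inducer
# (an inductive learner) sees them.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)

tree_only = DecisionTreeClassifier(random_state=0)
stat_then_tree = make_pipeline(SelectKBest(f_classif, k=4),
                               DecisionTreeClassifier(random_state=0))

print("tree alone:      ", cross_val_score(tree_only, X, y, cv=5).mean())
print("stats, then tree:", cross_val_score(stat_then_tree, X, y, cv=5).mean())
```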

31 citations


PatentDOI
Hideo Ito
TL;DR: A conversational-type language analysis apparatus includes a sentence input unit for inputting a sentence, a dictionary unit, and a grammar storage unit, and has an analysis unit that selects correct relationships without conversational interaction from among the relationships other than those already selected by the conversational-type analysis unit.
Abstract: A conversational-type language analysis apparatus includes a sentence input unit for inputting a sentence, a dictionary unit, and a grammar storage unit. The apparatus also includes an all-relationship detection unit dividing the sentence into syntactical units, the syntactical units being nodes, and detecting, as candidate relationships, all linguistic relationships on the basis of the grammatical rules in the grammar storage unit and dictionary information in the dictionary unit; and a conversational-type analysis unit selecting correct relationships by conversational interaction with a user from the candidate relationships, the conversational-type analysis unit having a conversational-object selecting unit for selecting object relationships which are objects of the conversational interaction with the user from the candidate relationships, and a conversation unit selecting the correct relationships by conversational interaction with the user. The apparatus also has an analysis unit selecting correct relationships without the conversational interaction from relationships other than the correct relationships selected by the conversational-type analysis unit.

30 citations


Journal ArticleDOI
TL;DR: A novel approach for the dynamic testing of expert system rule bases, based on the idea of first testing systems for disastrous safety and integrity problems before testing for primary functions and other classes of problems; a prioritized series of 10 classes of faults is identified.
Abstract: How to develop knowledge-based and expert systems today is becoming more and more well understood; how to test these systems still poses some challenges. There has been considerable progress in developing techniques for static testing of these systems, checking for problems via formal examination methods; but there has been almost no work on dynamic testing , testing the systems under operating conditions. A novel approach for the dynamic testing of expert system rule bases is presented. This approach, Heuristic Testing , is based on the idea of first testing systems for disastrous safety and integrity problems before testing for primary functions and other classes of problems, and a prioritized series of 10 classes of faults is identified. The Heuristic Testing approach is intended to assure software reliability rather than simply find defects; the reliability is assessed over the 10 fault classes and is called component reliability. General procedures for conceptualizing and generating test cases were developed for all fault classes, including a Generic Testing Method for generating key test-case values. One of the classes, error-metric, illustrates how complexity metrics, now used for predicting conventional software problems, could be developed for expert system rule bases. Two key themes are automation (automatically generating test cases) and fix-as-you-go testing (fixing a problem before continuing to test). The overall approach may be generalizable to static rule base testing, to testing of other expert system components, to testing of other nonconventional systems such as neural network and object-oriented systems, and even to conventional software.
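A schematic sketch of the two themes, prioritized fault classes and fix-as-you-go testing (the classes and checks below are invented for illustration; the paper's 10 classes are richer):

```python
# Test highest-priority fault classes first; stop and report on the first
# failure so it can be fixed before testing continues ("fix-as-you-go").

FAULT_CLASSES = [  # ordered: disastrous problems before primary functions
    ("safety",    lambda rb: "divide_by_zero" not in rb),
    ("integrity", lambda rb: len(set(rb)) == len(rb)),  # no duplicate rules
    ("function",  lambda rb: "goal_rule" in rb),
]

def heuristic_test(rulebase):
    for name, check in FAULT_CLASSES:
        if not check(rulebase):
            return f"FAIL in class '{name}': fix before testing further"
    return "all prioritized classes passed"

print(heuristic_test(["goal_rule", "r1", "r1"]))  # stops at 'integrity'
print(heuristic_test(["goal_rule", "r1", "r2"]))
```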

30 citations


Journal ArticleDOI
TL;DR: A development model, based on Boehm's spiral model, that integrates verification and validation into development stages is presented, which is loose and adaptable, and suited to small system development.
Abstract: Many small to medium size expert systems are developed without written specifications or a development methodology. This often results in verification and validation being performed in an ad hoc manner toward the end of the development process. This paper presents a development model, based on Boehm's spiral model, that integrates verification and validation into development stages. The model is loose and adaptable, and suited to small system development.

29 citations


Journal ArticleDOI
TL;DR: Four basic architectures for integrating an intelligent decision support system (IDSS) are examined, distinguished by their “degree of integration” and by the “focus of the knowledge base” embedded in the IDSS.
Abstract: In the past, corporate strategic planning has been aided by conventional decision support and database management tools. There are a number of stages in the planning process where these conventional tools offer minimal assistance (e.g., automated diagnosis of problems). In an attempt to overcome these deficiencies, efforts have been made to combine these conventional tools with expert systems technology in order to create an intelligent decision support system (IDSS). This article examines four basic architectures for integrating these systems. The architectures are distinguished by their “degree of integration” and by the “focus of the knowledge base” embedded in the IDSS. Each of the architectures is illustrated by reference to existing prototypes and commercial products. The benefits and limitations of each are also considered.

Journal ArticleDOI
Jae Kyu Lee, SeKwon Oh, Jae-eun Shin
TL;DR: An expert system UNIK-FCST (UNIfied Knowledge-ForeCaST) is developed, which learns from historical judgmental adjustments through generalization and analogy; reasons based on similar cases; and composes and decomposes the impacts of simultaneous judgmental events non-monotonically.
Abstract: Time series models have served as a highly useful forecasting method, but are deficient in that they merely extrapolate past patterns in data without taking into account expected future events and other qualitative factors. To overcome this limitation, forecasting experts in practice judgmentally adjust the statistical forecasts. To partially replace the role of the forecasting expert's judgement, we have developed an expert system, UNIK-FCST (UNIfied Knowledge-ForeCaST). UNIK-FCST learns from historical judgmental adjustments through generalization and analogy; reasons based on similar cases; and composes and decomposes the impacts of simultaneous judgmental events non-monotonically. Here, UNIK-FCST is applied to the demand forecasting of oil products, for which five types of judgmental factors exist.
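A toy sketch of the case-based adjustment step (cases, features, and numbers are invented; UNIK-FCST's actual mechanism, with generalization and non-monotonic composition, is richer): retrieve the most similar past judgmental event and apply its historical adjustment to the statistical forecast.

```python
cases = [  # (event features, multiplicative adjustment applied back then)
    ({"season": "winter", "price_cut": False}, 1.15),
    ({"season": "summer", "price_cut": True},  1.30),
    ({"season": "winter", "price_cut": True},  1.40),
]

def similarity(a, b):
    # Count matching feature values between two event descriptions.
    return sum(a[k] == b[k] for k in a)

def adjust(statistical_forecast, event):
    _, factor = max(cases, key=lambda c: similarity(c[0], event))
    return statistical_forecast * factor

print(adjust(1000.0, {"season": "winter", "price_cut": True}))  # 1400.0
```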

Journal ArticleDOI
TL;DR: The performance of a neural network as a classifier is evaluated and it is found that the performance of the neural network is comparable to the best of the other methods under a wide variety of modeling assumptions.
Abstract: The task of classifying observations into known groups is a common problem in decision making. A wealth of statistical approaches, commencing with Fisher's linear discriminant function, and including variations to accommodate a variety of modeling assumptions, have been proposed. In addition, nonparametric approaches based on various mathematical programming models have also been proposed as solutions. All of these proposed solutions have performed well when conditions favorable to the specific model are present. The modeler, therefore, can usually be assured of a good solution to his problem if he chooses a model which fits his situation. In this paper, the performance of a neural network as a classifier is evaluated. It is found that the performance of the neural network is comparable to the best of the other methods under a wide variety of modeling assumptions. The use of neural networks as classifiers thus relieves the modeler of testing assumptions which would otherwise be critical to the performance of the usual classification techniques.
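A small sketch of the experiment pattern, comparing a Fisher-style linear discriminant with a neural-network classifier on data that violates the linearity assumption (data and settings are illustrative, using modern library stand-ins for the paper's methods):

```python
from sklearn.datasets import make_moons
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Two interleaving half-moons: groups that no linear boundary separates well.
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

lda = LinearDiscriminantAnalysis()
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

print("LDA:", cross_val_score(lda, X, y, cv=5).mean())
print("Net:", cross_val_score(net, X, y, cv=5).mean())
```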

PatentDOI
Ho-sun Jeong
TL;DR: A binary adder is provided for adding-processing in a high speed parallel manner two N bit binary digits and includes a number of amplifiers corresponding to the N bit output sum and a carry generation from the result of the adding process.
Abstract: A binary adder is provided for adding-processing in a high speed parallel manner two N bit binary digits. The binary adder is implemented using neural network techniques and includes a number of amplifiers corresponding to the N bit output sum and a carry generation from the result of the adding process; an augend input-synapse group, an addend input-synapse group, a carry input-synapse group, a first bias-synapse group, a second bias-synapse group, an output feedback-synapse group, and inverters. The binary adder is efficient and fast compared to conventional techniques.
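A functional sketch of the idea in software (not the patented circuit): hard-limiting threshold units stand in for the amplifiers, and unit-weight inputs for the synapse groups; three threshold units per bit yield the sum and carry.

```python
def step(x):                 # hard-limiting "amplifier"
    return 1 if x >= 0 else 0

def full_adder(a, b, cin):
    t = a + b + cin              # weighted sum through unit-weight synapses
    u1 = step(t - 1)             # fires if at least one input is 1
    u2 = step(t - 2)             # fires if at least two are 1 -> carry out
    u3 = step(t - 3)             # fires if all three inputs are 1
    s = step(u1 - u2 + u3 - 1)   # odd parity of the inputs -> sum bit
    return s, u2

def add(x_bits, y_bits):         # LSB-first bit lists of equal length
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(add([1, 1, 0, 1], [1, 0, 1, 1]))  # 11 + 13 = 24 -> [0, 0, 0, 1, 1]
```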

Journal ArticleDOI
D. Schutzer
TL;DR: This paper addresses all of the above questions about artificial intelligence and expert system applications and a specific application, the Trader's Assistant, is provided as a case example to illustrate many of the points made.
Abstract: Artificial Intelligence (AI) and its subfield of Expert Systems, with its focus on emulating human intelligence and its potential for displacing human mental activity in the same way that earlier machines have displaced human and animal physical labor, is prominently at the crest of the automation wave. What is artificial intelligence? What is the current state-of-the-art? What are the areas where artificial intelligence can be best applied in the business world? What are some of the better known commercial applications? Finally, how do we justify and implement artificial intelligence/expert system applications? This paper addresses all of the above questions. A specific application, the Trader's Assistant, is provided as a case example to illustrate many of the points made.

Journal ArticleDOI
TL;DR: Several automated methods for diagnosing multiple simultaneous problems are compared, ranging from exhaustive (testing every possible combination and selecting the most likely) to heuristic (testing only a small percentage of the total combinations, yet finding a satisfactory diagnosis).
Abstract: Diagnosis is the process of determining the correct problem from a collection of problems given a set of symptoms that indicate a problem exists. Common experiences with this process include visits to the physician in order to determine our illness (disease) and visits to our local mechanic to determine the cause (fault) of a poorly operating car. In either case, we report the symptoms of the problem to the diagnostician (physician or mechanic), who determines the most likely cause that best explains these symptoms. In terms of the complexity of finding the correct problem, the diagnostician must find a diagnosis from a set of possible diagnoses. That is, if a total of 10 problems are being considered where only one of these is correct, then at most 10 diagnoses will need to be evaluated. However, in the more typical case where several problems (diseases/faults) may occur simultaneously, the complexity of finding a proper diagnosis increases exponentially with the number of problems. For example, using the 10 problems considered above, the situation changes to where any of the 1024 possible combinations of problems may turn out to be the correct diagnosis. In this paper, we compare several automated methods for diagnosing multiple simultaneous problems. These methods range from exhaustive (testing every possible combination and selecting the most likely) to heuristic (testing only a small percentage of the total combinations, yet finding a satisfactory diagnosis). Advantages and disadvantages of each method are detailed along with a comparison of their respective runtimes and reliability.
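A toy illustration of the combinatorics and of the exhaustive-versus-heuristic trade-off (the causes and symptoms are invented; with n problems there are 2^n candidate diagnoses, hence 1024 for n = 10):

```python
from itertools import combinations

causes = {  # problem -> symptoms it explains
    "flat_battery": {"no_start", "dim_lights"},
    "bad_starter":  {"no_start", "clicking"},
    "blown_fuse":   {"dim_lights"},
}
observed = {"no_start", "dim_lights", "clicking"}

def exhaustive(causes, observed):
    # Tests every subset of problems: 2^n candidates in the worst case.
    best, names = None, list(causes)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            covered = set().union(*(causes[c] for c in combo))
            if covered >= observed and (best is None or len(combo) < len(best)):
                best = combo
    return best

def greedy(causes, observed):
    # Heuristic: repeatedly pick the problem explaining the most
    # still-unexplained symptoms; examines far fewer than 2^n candidates.
    remaining, diagnosis = set(observed), []
    while remaining:
        c = max(causes, key=lambda c: len(causes[c] & remaining))
        if not causes[c] & remaining:
            break
        diagnosis.append(c)
        remaining -= causes[c]
    return diagnosis

print(exhaustive(causes, observed))  # ('flat_battery', 'bad_starter')
print(greedy(causes, observed))      # same answer, far fewer tests
```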

Journal ArticleDOI
TL;DR: The approach applies well-developed methods of Dijkstra and others for the verification of procedural programs to establish invariants and postconditions of forward-chaining rule-based expert systems.
Abstract: We are investigating the problem of establishing computational rather than syntactic properties of forward-chaining rule-based expert systems. We model an expert system as a computation on working memory, define its execution semantics, and present proof techniques suitable for those semantics. Specifically, we model execution as a Dijkstra guarded-do construct, and use Dijkstra's Invariance Theorem and weakest precondition predicate transformers to establish invariants (safety properties) and postconditions (liveness properties). Our approach is an application of well-developed methods developed by Dijkstra and others for the verification of procedural programs. This paper introduces the approach, reports some initial results, and discusses future work.
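A minimal sketch of the modeling idea (the rules and invariant are invented for illustration, and runtime assertions here merely stand in for the paper's static proofs): forward chaining as a guarded do-loop whose invariant is checked after every firing, in the spirit of Dijkstra's Invariance Theorem.

```python
def forward_chain(wm, rules, invariant):
    assert invariant(wm)                 # invariant holds initially
    fired = True
    while fired:                         # the guarded-do loop
        fired = False
        for guard, action in rules:
            if guard(wm):                # a guard is true: fire its rule
                action(wm)
                assert invariant(wm)     # ...and the invariant is preserved
                fired = True
                break
    return wm                            # all guards false: loop terminates

rules = [
    (lambda wm: "alarm" in wm and "shutdown" not in wm,
     lambda wm: wm.add("shutdown")),
    (lambda wm: "shutdown" in wm and "vent_open" not in wm,
     lambda wm: wm.add("vent_open")),
]
# Safety property: the vent is never open unless a shutdown was ordered.
invariant = lambda wm: "vent_open" not in wm or "shutdown" in wm

print(forward_chain({"alarm"}, rules, invariant))
```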

Journal ArticleDOI
TL;DR: This paper argues that in considering knowledge-based systems with optimization the authors should begin to employ a set of parallel model representations, any one of which the user can see and modify, which can be called horizontal model representations.
Abstract: Most mathematical programming models and knowledge-based systems in optimization exist in various representations; however, the user is frequently not aware of this. For example, a model which is developed with a knowledge-based system such as the PM system of Krishnan (1988) will have several representations in Prolog and then will be translated into another representation in Structured Modeling before it is solved. Also, a model which is developed in the GAMS language will be translated into an MPS input form internally before the problem is passed to a solver such as MINOS. The results from MINOS are then passed back to GAMS and the user sees the results in the style of the GAMS representation of the model. This could be called a vertical set of model representations, since the user can modify only one representation and the models are passed down directly to the solver. This paper argues that in considering knowledge-based systems with optimization we should begin to employ a set of parallel model representations, any one of which the user can see and modify. These can be called horizontal model representations. For example, a given model might be represented in graphical, knowledge base, modeling language, and mathematical forms. The user would be able to modify any of these versions and have the other representations altered automatically to reflect the changes.
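A minimal sketch of the horizontal arrangement (the class and methods are invented for illustration): one shared model, several views, so an edit through any representation shows up in all the others.

```python
class Model:
    def __init__(self):
        self.vars = ["x", "y"]
        self.constraints = ["x + y <= 10"]

    # Each representation is a view over the same underlying model.
    def as_algebra(self):
        return "max x + y  s.t. " + "; ".join(self.constraints)

    def as_knowledge_base(self):
        return [("constraint", c) for c in self.constraints]

    def edit_from_algebra(self, constraint):
        self.constraints.append(constraint)  # change propagates to all views

m = Model()
m.edit_from_algebra("x <= 4")   # edit via the algebraic view...
print(m.as_algebra())           # ...and both representations reflect it
print(m.as_knowledge_base())
```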

Journal ArticleDOI
Jae Kyu Lee
TL;DR: This special edition on “Integration and Competition of AI with Quantitative Methods for Decision Support” investigates the potential of developing a unified framework for decision support which utilizes AI, quantitative methods, and other unexplored issues necessary for unification.
Abstract: Integrating Artificial Intelligence with quantitative methods such as Operations Research (OR) and Statistics is an area of growing research interest. This special edition on “Integration and Competition of AI with Quantitative Methods for Decision Support” investigates the potential of developing a unified framework for decision support which utilizes AI, quantitative methods, and other unexplored issues necessary for unification. The integration can synergistically benefit AI, OR, and Decision Support Systems areas.


Journal ArticleDOI
TL;DR: A Model Management System that can identify mathematical programming formulations is presented, which attempts to replicate the type of model identification that an expert mathematical programmer would perform when faced with a formulation.
Abstract: We present a Model Management System that we have developed that can identify mathematical programming formulations. The system attempts to replicate the type of model identification that an expert mathematical programmer would perform when faced with a formulation. The system first converts the formulation to a standard form, then extracts the abstract structure from the standard form, and finally, identifies the formulation. In this paper we concentrate on the first phase of the system, converting the formulation to standard form.
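A minimal sketch of the first phase (the conventions and function are my own, not the system's): convert inequality rows of a linear program into equalities by appending one slack or surplus variable per row.

```python
def to_standard_form(A, senses, b):
    """A x (senses) b  ->  A' [x; s] = b, with s >= 0."""
    m = len(A)
    std = []
    for i, (row, sense) in enumerate(zip(A, senses)):
        slack = [0.0] * m
        if sense == "<=":
            slack[i] = 1.0      # + s_i turns <= into =
        elif sense == ">=":
            slack[i] = -1.0     # - s_i (a surplus variable) turns >= into =
        std.append(row + slack)
    return std, b

A      = [[2.0, 1.0], [1.0, 3.0]]
senses = ["<=", ">="]
b      = [10.0, 6.0]
print(to_standard_form(A, senses, b))
# ([[2.0, 1.0, 1.0, 0.0], [1.0, 3.0, 0.0, -1.0]], [10.0, 6.0])
```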

Journal ArticleDOI
TL;DR: In this paper intelligent hypertext is first modeled with a semantic net, which is then extended to a Petri net formalism, and a deductive inferencing ability is added to the Petri net formalism.
Abstract: Hypertext systems present textual information in an intuitive way, while expert systems logically solve problems. Expertext is an approach to combining the precision of expert reasoning processes with the browsing capabilities afforded by hypertext. In this paper intelligent hypertext is first modeled with a semantic net. The semantic net formalism is then extended to a Petri net formalism. Finally, a deductive inferencing ability is added to the Petri net formalism. Examples of how an Electronic Yellow Pages might exploit these methods are presented.
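A small sketch of the Petri-net layer (the net, places, and transitions are invented; think of a toy Electronic Yellow Pages): transitions fire only when all their input places are marked, giving expert-style control over what the reader may browse next.

```python
transitions = {  # link name: (input places, output places)
    "see_plumbers": ({"yellow_pages", "knows_problem"}, {"plumber_list"}),
    "call":         ({"plumber_list"}, {"appointment"}),
}

def fire(marking, name):
    inputs, outputs = transitions[name]
    if inputs <= marking:             # enabled: all input places marked
        return (marking - inputs) | outputs
    return marking                    # not enabled: marking unchanged

m = {"yellow_pages", "knows_problem"}
m = fire(m, "call")            # not yet enabled; nothing happens
m = fire(m, "see_plumbers")    # enabled; tokens move forward
m = fire(m, "call")
print(m)                       # {'appointment'}
```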

Journal ArticleDOI
TL;DR: A model of expert systems development is proposed that includes the use of slightly adapted techniques of structured systems analysis, and this model has been applied to a “real life” project, and the results encourage further use of structured system analysis in the development of Expert systems.
Abstract: Expert systems development is being presented as an art by many proponents of the expert systems technology. Such a perspective was also popular in the early stages of conventional programming. Today, however, conventional programming has become more of an engineering discipline than an art. The purpose of this paper is to advocate the engineering approach to expert systems development and illustrate it. Specifically, a model of expert systems development is proposed that includes the use of slightly adapted techniques of structured systems analysis. This model has been applied to a “real life” project, and the results encourage further use of structured systems analysis in the development of expert systems.

Journal ArticleDOI
TL;DR: The results of a survey of the testing practices of knowledge-based systems developers are presented and a comprehensive approach to evaluation is described.
Abstract: The field of knowledge-based systems has recently recognized the importance of verification, validation, and testing. This paper presents the results of a survey of the testing practices of knowledge-based systems developers. Common testing strategies are reported and analyzed. Factors affecting testing are discussed. A comprehensive approach to evaluation is described. General conclusions and lessons learned are presented.

Journal ArticleDOI
TL;DR: Nearly 40 current projects, which run the gamut from research prototype to finished product, are reviewed, showing that expert systems will play an important role in the future in telecommunications networks.
Abstract: Expert systems have been successfully applied to many maintenance, provisioning, and administrative tasks in telecommunications networks. Given that they can be appropriately integrated with the existing base of software applications, expert systems will play an important role in the future. We review nearly 40 current projects, which run the gamut from research prototype to finished product.

PatentDOI
TL;DR: Darlington phototransistor pairs of this material can achieve a current gain of over 6,000, which satisfies the gain requirement for optical neural network designs.
Abstract: High-gain MOCVD-grown (metal-organic chemical vapor deposition) AlGaAs/GaAs/AlGaAs n-p-n double heterojunction bipolar transistors (DHBTs) (14) and Darlington phototransistor pairs (14, 16) are provided for use in optical neural networks and other optoelectronic integrated circuit applications. The reduced base (22) doping level used herein results in effective blockage of Zn out-diffusion, enabling a current gain of 500, higher than most previously reported values for Zn-diffused-base DHBTs. Darlington phototransistor pairs of this material can achieve a current gain of over 6,000, which satisfies the gain requirement for optical neural network designs, which advantageously may employ novel neurons (10) comprising the Darlington phototransistor pair in series with a light source (12).
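For context, the composite current gain of a Darlington pair follows from the first transistor's emitter current driving the second transistor's base; this is the standard textbook relation, not a formula quoted from the patent:

```latex
% Composite current gain of a Darlington pair (standard relation):
\beta_{\text{pair}} = \beta_1\,\beta_2 + \beta_1 + \beta_2
% e.g., two stages with \beta_1 = \beta_2 = 77 already give
% 77^2 + 154 = 6083 > 6000.
```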

Journal ArticleDOI
TL;DR: The expert system is capable of analyzing input parameters by performing statistical analyses of data bases, generating plots and graphs, implementing a set of rules for the selection of inventory models, and choosing a solution procedure.
Abstract: An expert system for inventory management is presented in this paper. The focus is on the development of a simple, user-friendly tool that can be used effectively by managers to increase the cost-effectiveness of their inventory systems. The expert system is capable of analyzing input parameters by performing statistical analyses of data bases, generating plots and graphs, implementing a set of rules for the selection of inventory models, and choosing a solution procedure. The scope of this paper is limited to the single-item, single-location problems.
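A schematic sketch of the rule layer (the rules and thresholds are invented for illustration), falling back to the classic EOQ formula when demand is steady and deterministic:

```python
from math import sqrt

def choose_model(demand_cv, lead_time_known):
    """Pick an inventory model for a single-item, single-location case."""
    if demand_cv < 0.1 and lead_time_known:
        # EOQ: Q* = sqrt(2 D S / H) for demand D, order cost S, holding cost H
        return "EOQ", lambda D, S, H: sqrt(2 * D * S / H)
    if demand_cv < 0.5:
        return "(s, Q) with safety stock", None   # solver omitted in sketch
    return "periodic review (R, S)", None

model, solve = choose_model(demand_cv=0.05, lead_time_known=True)
print(model)
if solve:
    # D = 1200 units/yr, S = $50 per order, H = $2 per unit per year
    print("order quantity:", solve(1200, 50, 2))  # sqrt(60000) ~= 244.9
```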

Journal ArticleDOI
TL;DR: RADA uses a graphical browser to display a conceptual semantic net (CSN) that graphically represents the user's information needs; its user-modeling component ensures that the documents retrieved are at the correct level of user comprehension and requirements, thus reducing the number of redundant documents retrieved.
Abstract: This paper describes the system RADA (Research and Development Advisor) which is an intelligent information system for use by researchers within a research and development environment. The fundamental concept behind RADA is the incorporation of artificial intelligence techniques and expert system approaches to “overcome” some of the shortcomings of the more traditional information retrieval systems, such as keyword-based retrieval, the need for specific retrieval languages, and the need for human intermediaries. RADA uses a graphical browser to display a conceptual semantic net (CSN) which represents graphically the user's information needs. To tailor the information to a specific user, RADA will have a user-modeling component. The user-modeling functions of RADA will ensure that the documents retrieved will be at the correct level of user comprehension and requirements, thus reducing the number of redundant documents retrieved. RADA is implemented in an object-oriented environment called LOOPS on a Xerox workstation.

Journal ArticleDOI
Sang Bin Lee, Seung Hyun Oh
TL;DR: This paper analyzes the two computerized classification procedures called RPA and ACLS on a comparison basis and applies these two procedures to the bankruptcy model in finance to see the conditions under which one of them works better than the other.
Abstract: The purpose of this paper is to compare the two computerized classification procedures called RPA and ACLS from an analytical point of view. RPA was developed to alleviate the methodological or statistical problems of traditional classification procedures. ACLS was developed to solve the bottleneck problems of the direct knowledge acquisition method in developing expert systems. So far, the two procedures have been compared with discriminant analysis (DA) in the current literature, which reported that they performed better than DA. However, the two procedures have not been compared against each other. Here, we analyze the two procedures on a comparison basis and apply them to the bankruptcy model in finance to see the conditions under which one works better than the other.

Journal ArticleDOI
TL;DR: An attempt has been made to integrate the two approaches to spatial planning by developing a hybrid multicriteria alternative selection module (HYDAS: HYbrid Discrete Alternative Selection) which will work as a postprocessor for both modules.
Abstract: During the development of an expert system for regional planning (the Shanxi Province Decision Support System) by IIASA's Advanced Computer Applications (ACA) group, two spatial planning systems evolved: PDAS, a system for the optimization of the industrial structure of an area, which is based on numerical multicriteria optimization techniques; and REPLACE, a site-selection system, implemented in PROLOG, which is based on a qualitative, constraint-satisfaction method. Although both approaches partially overlap, each has certain advantages over the other. As a natural extension of the Shanxi DSS, within the framework of a project on Hybrid Decision Support Tools sponsored by the U.S. Bureau of Reclamation, an attempt has been made to integrate the two approaches by developing a hybrid multicriteria alternative selection module (HYDAS: HYbrid Discrete Alternative Selection) which will work as a postprocessor for both modules. In HYDAS, artificial intelligence (AI) paradigms and numeric multicriteria optimization techniques are combined to arrive at a hybrid approach to discrete alternative selection. These techniques include: (1) qualitative analysis, (2) various statistical checks and recommendations, (3) robustness and sensitivity analysis, and (4) help for defining acceptable regions for analysis. HYDAS is implemented on high-performance workstations using C, PROLOG, and the X graphics library and window system.