
Showing papers in "Expert Systems in 2003"


Journal ArticleDOI
Yiyu Yao1
TL;DR: The Shannon entropy function is used to quantitatively characterize partitions of a universe and both algebraic and probabilistic rough set approximations are studied, both defined in a decision‐theoretic framework.
Abstract: This paper reviews probabilistic approaches to rough sets in granulation, approximation, and rule induction. The Shannon entropy function is used to quantitatively characterize partitions of a universe. Both algebraic and probabilistic rough set approximations are studied. The probabilistic approximations are defined in a decision-theoretic framework. The problem of rule induction, a major application of rough set theory, is studied in probabilistic and information-theoretic terms. Two types of rules are analyzed, the local, low order rules, and the global, high order rules.
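
The Shannon entropy of a partition, used above to characterize granulation, has a compact computational form. The sketch below is our own illustration (not code from the paper): each block's probability is its size relative to the universe, and the entropy is the usual negative sum of p log2 p.

```python
from math import log2

def partition_entropy(universe_size, partition):
    """Shannon entropy H(pi) = -sum_i p_i * log2(p_i), where p_i = |block_i| / |U|."""
    return -sum((len(block) / universe_size) * log2(len(block) / universe_size)
                for block in partition)

# A universe of 8 objects split into blocks of sizes 4, 2 and 2.
pi = [{0, 1, 2, 3}, {4, 5}, {6, 7}]
h = partition_entropy(8, pi)  # -(0.5*log2(0.5) + 2 * 0.25*log2(0.25)) = 1.5
```

A finer partition generally has higher entropy; the one-block partition has entropy 0.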

327 citations


Journal ArticleDOI
TL;DR: This work intends to fill an obvious gap by introducing a new definition of intuitionistic fuzzy rough sets, as the most natural generalization of Pawlak's original concept of rough sets.
Abstract: Just like rough set theory, fuzzy set theory addresses the topic of dealing with imperfect knowledge. Recent investigations have shown how both theories can be combined into a more flexible, more expressive framework for modelling and processing incomplete information in information systems. At the same time, intuitionistic fuzzy sets have been proposed as an attractive extension of fuzzy sets, enriching the latter with extra features to represent uncertainty (on top of vagueness). Unfortunately, the various tentative definitions of the concept of an ‘intuitionistic fuzzy rough set’ that were raised in their wake are a far cry from the original objectives of rough set theory. We intend to fill an obvious gap by introducing a new definition of intuitionistic fuzzy rough sets, as the most natural generalization of Pawlak's original concept of rough sets.

180 citations


Journal ArticleDOI
TL;DR: This paper deals with the problem of producing a set of certain and possible rules from incomplete fuzzy data sets based on rough sets, and transforms each fuzzy subset of the domain of every attribute in an incomplete fuzzy information system into a fuzzy subset in the universe, from which fuzzy similarity neighbourhoods of objects in the system are derived.
Abstract: Machine learning can extract desired knowledge from training examples and ease the development bottleneck in building expert systems. Most learning approaches derive rules from complete or incomplete data sets. If attribute values are known only as possibility distributions on the domain of the attributes, the system is called an incomplete fuzzy information system. Learning from incomplete fuzzy data sets is usually more difficult than learning from complete or incomplete (non-fuzzy) data sets. In this paper, we deal with the problem of producing a set of certain and possible rules from incomplete fuzzy data sets based on rough sets. The notions of lower and upper generalized fuzzy rough approximations are introduced. By using the fuzzy rough upper approximation operator, we transform each fuzzy subset of the domain of every attribute in an incomplete fuzzy information system into a fuzzy subset of the universe, from which fuzzy similarity neighbourhoods of objects in the system are derived. The fuzzy lower and upper approximations for any subset of the universe are then calculated, and the knowledge hidden in the information system is unravelled and expressed in the form of decision rules.
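
To make the lower/upper approximation step concrete, here is a hedged sketch of fuzzy rough approximations over a fuzzy similarity relation. The min/max operators below (Kleene-Dienes implicator for the lower approximation, min t-norm for the upper) are one common choice, not necessarily the exact operators used in this paper; the relation R and set A are toy data.

```python
def fuzzy_lower(R, A):
    # (R down A)(x) = min over y of max(1 - R(x, y), A(y))
    n = len(A)
    return [min(max(1 - R[x][y], A[y]) for y in range(n)) for x in range(n)]

def fuzzy_upper(R, A):
    # (R up A)(x) = max over y of min(R(x, y), A(y))
    n = len(A)
    return [max(min(R[x][y], A[y]) for y in range(n)) for x in range(n)]

# Toy fuzzy similarity relation over three objects, and a fuzzy set A.
R = [[1.0, 0.8, 0.2],
     [0.8, 1.0, 0.3],
     [0.2, 0.3, 1.0]]
A = [1.0, 0.6, 0.1]
low = fuzzy_lower(R, A)
up = fuzzy_upper(R, A)
```

The lower approximation never exceeds the upper approximation, mirroring the crisp rough set inclusion.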

96 citations


Journal ArticleDOI
TL;DR: The basic concept and properties of knowledge reduction based on inclusion degree and evidence reasoning theory are discussed, and a knowledge discovery approach based on inclusion degree and evidence reasoning theory is proposed.
Abstract: The theory of rough sets is an extension of set theory for studying intelligent systems characterized by insufficient and incomplete information. We discuss the basic concept and properties of knowledge reduction based on inclusion degree and evidence reasoning theory, and propose a knowledge discovery approach based on inclusion degree and evidence reasoning theory.

88 citations


Journal ArticleDOI
TL;DR: In this paper, a case-based reasoning model framework is postulated for a 3PL evaluation and selection system and the theoretical basis of this system and its reasoning process is expanded by discussing the advantages and practical value of this framework.
Abstract: Society's growing demand for third-party logistics (3PL) is further developing the supply chain management model. An effective approach to 3PL service supplier evaluation matters directly to the operational efficiency and benefit of the service-demanding enterprise as well as to its supply chain management. An analysis of traditional academic results and practical methods for 3PL supplier selection reveals the deficiencies of existing approaches. In this paper, a case-based reasoning model framework is postulated for a 3PL evaluation and selection system. The paper expands upon the theoretical basis of this system and its reasoning process by discussing the advantages and practical value of the framework.

75 citations


Journal ArticleDOI
TL;DR: It is argued that the linearly structured hierarchy has significant advantages over tree‐structured hierarchy in the framework of the variable precision rough set model.
Abstract: The paper is concerned with the creation of predictive models from data within the framework of the variable precision rough set model. It is focused on two aspects of the model derivation: computation of uncertain, in general, rules from information contained in probabilistic decision tables and forming hierarchies of decision tables with the objective of reduction or elimination of decision boundaries in the resulting classifiers. A new technique of creation of a linearly structured hierarchy of decision tables is introduced and compared to tree-structured hierarchy. It is argued that the linearly structured hierarchy has significant advantages over tree-structured hierarchy.
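
The variable precision rough set model relaxes the classical lower approximation with a precision threshold. As a hedged sketch (our own, with toy data; the threshold convention follows the common P(X|E) >= beta formulation rather than anything specific to this paper):

```python
def vprs_lower(blocks, X, beta):
    """beta-lower approximation: union of equivalence classes E whose
    conditional probability |E & X| / |E| is at least beta (0.5 < beta <= 1)."""
    result = set()
    for E in blocks:
        if len(E & X) / len(E) >= beta:
            result |= E
    return result

# Three equivalence classes of a condition partition, and a target concept X.
blocks = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9, 10}]
X = {1, 2, 3, 6, 7}
lower_07 = vprs_lower(blocks, X, 0.7)  # admits the 75%-consistent class {1,2,3,4}
lower_10 = vprs_lower(blocks, X, 1.0)  # beta = 1 recovers the classical lower approximation
```

Lowering beta shrinks the decision boundary, which is exactly the effect the hierarchy of decision tables exploits.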

54 citations


Journal ArticleDOI
TL;DR: Type S1 and type S2 inclusion degrees are introduced, the relationship between them is discussed, and inclusion degrees on interval numbers, divisions, vectors and set vectors are proposed.
Abstract: The probability reasoning method, fuzzy reasoning method, evidential reasoning method and other reasoning methods are main techniques employed in intelligent systems for processing uncertain and vague information. The concept of inclusion degree was proposed earlier and it has been proved that the methods mentioned above are examples of inclusion degrees. In this paper, we introduce type S1 and type S2 inclusion degrees, discuss the relationship between them, and further propose inclusion degrees on interval numbers, divisions, vectors and set vectors. This paper addresses an uncertainty analysis method with different inclusion degrees for intelligent systems and other systems such as fuzzy relational databases.
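
For orientation, the basic inclusion degree for crisp sets and its common fuzzy analogue can be sketched as follows. This is our own minimal illustration of the general notion; the paper's type S1/S2 definitions and the extensions to interval numbers and vectors are not reproduced here.

```python
def inclusion_degree(A, B):
    """Degree to which crisp set A is included in B: |A & B| / |A|."""
    return len(A & B) / len(A)

def fuzzy_inclusion_degree(A, B):
    """Common fuzzy analogue: sum of pointwise min memberships over the mass of A."""
    return sum(min(a, b) for a, b in zip(A, B)) / sum(A)

d1 = inclusion_degree({1, 2, 3}, {2, 3, 4})        # 2 of A's 3 elements lie in B
d2 = fuzzy_inclusion_degree([1.0, 0.5], [0.5, 0.5])
```

Both degrees equal 1 exactly when A is (fuzzy-)contained in B, which is what makes them usable as a common umbrella for probabilistic, fuzzy and evidential reasoning.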

54 citations


Journal ArticleDOI
TL;DR: An intelligent decision support system, namely Intelligent Agent Selection Assistant for Insurance, is presented, which will help insurance managers to select quality agents by using data mining in a data warehouse environment.
Abstract: The insurance industry of Hong Kong has been experiencing steady growth in the last decade. One of the current problems in the industry is that, in general, insurance agent turnover is high. The selection of new agents is treated as a regular recruitment exercise. This study focuses on the characteristics of data warehousing and the appropriate data mining techniques that can be used to support agent selection in the insurance industry. We examine the application of three popular data mining methods – discriminant analysis, decision trees and artificial neural networks – incorporated with a data warehouse to the prediction of the length of service, sales premiums and persistence indices of insurance agents. An intelligent decision support system, namely Intelligent Agent Selection Assistant for Insurance, is presented, which will help insurance managers to select quality agents by using data mining in a data warehouse environment.

51 citations


Journal ArticleDOI
TL;DR: Based on the feature space theory in data mining, the transformation between extensions and intensions of concepts is discussed in detail and inner transformation of fuzzy relations, inverse inner transformations, and related properties are introduced.
Abstract: Knowledge representation is one of the important topics in data mining research. In this paper, based on the feature space theory in data mining, the transformation between extensions and intensions of concepts is discussed in detail. First, inner projections of fuzzy relations, as a basic mathematical tool, are defined, and properties of inner projections are discussed. Then inner transformation of fuzzy relations, inverse inner transformations, and related properties are introduced. The concept structure is shown by feature spaces. Lastly, transformations between extensions and intensions are discussed.

50 citations


Journal ArticleDOI
TL;DR: Compared with traditional case representation methods based on database tables or frames, the proposed model is able to represent knowledge in the domain of managerial decision‐making at a much deeper level and provide much more support for case‐based systems employed in business decision-making.
Abstract: A scenario-based representation model for cases in the domain of managerial decision-making is proposed. The scenarios in narrative texts are converted to scenario units of knowledge organization. The elements and structure of the scenario unit are defined. The scenario units can be linked together or coupled with others. Compared with traditional case representation methods based on database tables or frames, the proposed model is able to represent knowledge in the domain of managerial decision-making at a much deeper level and provide much more support for case-based systems employed in business decision-making.

43 citations


Journal ArticleDOI
TL;DR: This paper applies a fairly new methodology known as genetic algorithms to solve a relatively large sized constrained version of the p-median problem using two different data sets and shows that this solution procedure performs quite well compared with the results obtained from existing techniques.
Abstract: Locating p facilities to serve a number of customers is a problem in many areas of business. The problem is to determine p facility locations such that the weighted average distance traveled from all the demand points to their nearest facility sites is minimized. A variant of the p-median problem is one in which a maximum distance constraint is imposed between the demand point and its nearest facility location, also known as the p-median problem with maximum distance constraint. In this paper, we apply a fairly new methodology known as genetic algorithms to solve a relatively large sized constrained version of the p-median problem. We present our computational experience on the use of genetic algorithms for solving the constrained version of the p-median problem using two different data sets. Our comparative experimental experience shows that this solution procedure performs quite well compared with the results obtained from existing techniques.
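
A toy version of such a genetic algorithm is sketched below for a tiny instance; it is our own illustration, not the authors' implementation. The penalty method for the maximum distance constraint, the crossover/mutation scheme and all parameters are assumptions chosen for brevity.

```python
import random

def cost(facilities, demand, dist, dmax):
    """Weighted total distance; demand points whose nearest facility exceeds
    dmax incur a large penalty (a standard way to handle the constraint)."""
    total = 0.0
    for i, w in demand.items():
        d = min(dist[i][j] for j in facilities)
        total += w * (d if d <= dmax else d + 1000.0)
    return total

def ga_p_median(points, demand, dist, p, dmax, pop=30, gens=100, seed=0):
    rng = random.Random(seed)
    population = [tuple(sorted(rng.sample(points, p))) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda s: cost(s, demand, dist, dmax))
        survivors = population[:pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            genes = list(set(a) | set(b))          # crossover: pool parent sites
            child = rng.sample(genes, p)
            if rng.random() < 0.2:                 # mutation: swap in a random site
                out = rng.choice(child)
                child[child.index(out)] = rng.choice(points)
            children.append(tuple(sorted(set(child))) if len(set(child)) == p
                            else tuple(sorted(rng.sample(points, p))))
        population = survivors + children
    return min(population, key=lambda s: cost(s, demand, dist, dmax))

# Five demand points on a line, unit weights, p = 2, loose distance constraint.
points = list(range(5))
dist = {i: {j: float(abs(i - j)) for j in points} for i in points}
demand = {i: 1.0 for i in points}
best = ga_p_median(points, demand, dist, p=2, dmax=10.0)
best_cost = cost(best, demand, dist, 10.0)
```

On this instance the optimal weighted distance is 3.0 (e.g. facilities at 1 and 3); the GA converges to it or very close within a few generations.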

Journal ArticleDOI
Zhongmin Cai1, Xiaohong Guan1, Ping Shao1, Qingke Peng1, Guoji Sun1 
TL;DR: An effective method for anomaly intrusion detection with low overhead and high efficiency based on rough set theory, which is capable of detecting the abnormal operating status of a process and thus reporting a possible intrusion.
Abstract: Intrusion detection is important in the defense-in-depth network security framework. This paper presents an effective method for anomaly intrusion detection with low overhead and high efficiency. The method is based on rough set theory to extract a set of detection rules with a minimal size as the normal behavior model from the system call sequences generated during the normal execution of a process. It is capable of detecting the abnormal operating status of a process and thus reporting a possible intrusion. Compared with other methods, the method requires a smaller size of training data set and less effort to collect training data and is more suitable for real-time detection. Empirical results show that the method is promising in terms of detection accuracy, required training data set and efficiency.
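
The core idea of modelling normal behaviour from system call sequences can be illustrated with a sliding-window sketch. This is a deliberately simplified stand-in: the paper extracts a minimal rule set via rough set theory, whereas the toy below just records the short call windows seen during normal execution and scores a new trace by how many of its windows are unseen.

```python
def normal_model(trace, k=3):
    """Build the set of length-k system-call windows seen in a normal trace."""
    return {tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)}

def anomaly_score(trace, model, k=3):
    """Fraction of a new trace's windows that never occurred in the normal model."""
    windows = [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]
    misses = sum(1 for w in windows if w not in model)
    return misses / len(windows)

normal = ["open", "read", "write", "close", "open", "read", "write", "close"]
model = normal_model(normal)
benign = ["open", "read", "write", "close"]          # matches normal behaviour
attack = ["open", "exec", "socket", "close"]         # deviates from the model
```

A score near 0 indicates normal operation; a high score flags a possible intrusion.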

Journal ArticleDOI
TL;DR: A programmable agent is presented for Internet application to retrieve and extract information from the Web with user's guidance and a data translator to export the extracted information into knowledge‐based frame structures.
Abstract: With the tremendous amount of information that is becoming available on the Web, the ability to develop information agents quickly has become a crucial problem. In this paper, a programmable agent is presented for Internet application to retrieve and extract information from the Web with user's guidance. The agent consists of a retrieval script to identify Web sources, an extraction script based on the document object model for the extraction process and a data translator to export the extracted information into knowledge-based frame structures. A GUI tool called Script Writer supports the generation of extraction script visually.

Journal ArticleDOI
TL;DR: This paper proposes a responsive replenishment system which is able to respond to the fluctuating demands of customers and provide a timely supply of needed items in a cost-effective way.
Abstract: In today's competitive business environment, it is important that customers are able to obtain their preferred items in the shops they visit, particularly for convenience store chains such as 7-Eleven where popular items are expected to be readily available on the shelves of the stores for buyers. To minimize the cost of running such store chains, it is essential that stocks be kept to a minimum and at the same time large varieties of popular items are available for customers. In this respect, the replenishment system needs to be able to cope with the taxing demands of minimal inventory while keeping large varieties of needed items. This paper proposes a replenishment system which is able to respond to the fluctuating demands of customers and provide a timely supply of needed items in a cost-effective way. The proposed system embraces the principle of fuzzy logic, which is able to deal with uncertainties by virtue of its fuzzy rules reasoning mechanism, thereby leveraging the responsiveness of the entire replenishment system for the chain stores. To validate the feasibility of the approach, a case study has been conducted in an emulated environment with promising results.
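
A fuzzy rule reasoning mechanism of the kind described can be sketched with a tiny zero-order Sugeno controller. The membership functions, rule base and order quantities below are invented for illustration; the paper's actual rules are not reproduced.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def replenishment_qty(demand, stock):
    """Hypothetical rule base (zero-order Sugeno, weighted-average defuzzification):
       IF demand high AND stock low  -> order 100
       IF demand low  AND stock high -> order 0
       IF demand high AND stock high -> order 40
       IF demand low  AND stock low  -> order 60"""
    d_low, d_high = tri(demand, -50, 0, 50), tri(demand, 0, 50, 100)
    s_low, s_high = tri(stock, -50, 0, 50), tri(stock, 0, 50, 100)
    rules = [(min(d_high, s_low), 100.0), (min(d_low, s_high), 0.0),
             (min(d_high, s_high), 40.0), (min(d_low, s_low), 60.0)]
    weight = sum(w for w, _ in rules)
    return sum(w * q for w, q in rules) / weight if weight else 0.0
```

High demand with empty shelves triggers a full order; low demand with high stock triggers none, and intermediate inputs blend the rules smoothly.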

Journal ArticleDOI
TL;DR: This paper discusses the modeling and design aspects of an autonomous, adaptive monitoring agent with layered control architecture and shows that the monitoring agent developed for an e‐mail server is effective.
Abstract: A software agent is defined as an autonomous software entity that is able to interact with its environment. Such an agent is able to respond to other agents and/or its environment to some degree, and has some sort of control over its internal state and actions. In belief–desire–intention (BDI) theory, an agent's behavior is described in terms of a processing cycle. In this paper, based on BDI theory, the processing cycle is studied with a software feedback mechanism. A software feedback or loop-back control mechanism can perform functions without direct external intervention. A feedback mechanism can continuously monitor the output of the system under control (the target system), compare the result against preset values (goals of the feedback control) and feed the difference back to adjust the behavior of the target system in a processing cycle. We discuss the modeling and design aspects of an autonomous, adaptive monitoring agent with layered control architecture. The architecture consists of three layers: a scheduling layer, an optimizing layer and a regulating layer. Experimental results show that the monitoring agent developed for an e-mail server is effective.

Journal ArticleDOI
TL;DR: This paper designs a neural network to solve a well‐known combinatorial problem, namely the flexible flow shop problem, and compares the performance of the network with well-known current heuristics with respect to solution quality.
Abstract: Although neural networks have been successfully used in performing pattern recognition, their application for solving optimization problems has been limited. In this paper we design a neural network to solve a well-known combinatorial problem, namely the flexible flow shop problem. A key feature of our neural network design is the integration of problem structure and heuristic information in the network structure and solution. We compare the performance of our neural network with well-known current heuristics with respect to solution quality. The results indicate that our approach outperforms the heuristics.

Journal ArticleDOI
TL;DR: The regularization method is used to convert the image approximation problem into a solvable variational problem, and a Hopfield-type dynamic neural network is developed in order to solve this problem.
Abstract: In remote sensing image processing, image approximation, or obtaining a high-resolution image from a corresponding low-resolution image, is an ill-posed inverse problem. In this paper, the regularization method is used to convert the image approximation problem into a solvable variational problem. In regularization, the constraints on smoothness and discontinuity are considered, and the original ill-posed problem is thereby converted to a well-posed optimization problem. In order to solve the variational problem, a Hopfield-type dynamic neural network is developed. This neural network has two state variables, describing the discrepancy between a pixel and its adjacent pixels and the intensity evolution of a pixel, together with two corresponding kinds of weights. Based on the experiment in this study with a Landsat TM image free of added noise and a noisy image, the proposed approach provides better results than other methods. The comparison shows the feasibility of the proposed approach.

Journal ArticleDOI
TL;DR: The multiple ant clans concept from parallel genetic algorithms to search solution space using different islands to avoid local minima in order to obtain a global minimum for solving the traveling salesman problem is introduced.
Abstract: In this paper, we present an efficient metaheuristic approach for solving the traveling salesman problem. We introduce the multiple-ant-clans concept from parallel genetic algorithms: the solution space is searched on different islands to avoid local minima and obtain a global minimum. Our simulation results indicate that the proposed method (called the ACOMAC algorithm) performs better than a promising approach named the ant colony system. This investigation is also concerned with a real-life logistics system design that optimizes performance subject to a required service level in the vehicle routing problem; to that end, we develop a vehicle routing model by improving the ant colony system with the multiple-ant-clans concept. The simulation results reveal that the proposed method is very effective and potentially useful in solving vehicle routing problems.
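
The multiple-clans idea can be sketched as several ant colonies, each with its own pheromone matrix, reinforced by the globally best tour. This is our own simplified illustration of the general scheme; the ACOMAC algorithm's migration policy and parameters are not reproduced.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def build_tour(n, dist, tau, rng, alpha=1.0, beta=2.0):
    """Construct one ant's tour with the usual pheromone/heuristic probability rule."""
    start = rng.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        i = tour[-1]
        weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                   for j in unvisited]
        total = sum(w for _, w in weights)
        r, acc = rng.random() * total, 0.0
        for j, w in weights:
            acc += w
            if acc >= r:
                tour.append(j); unvisited.discard(j); break
    return tour

def aco_multi_clan(dist, clans=3, ants=10, iters=40, rho=0.5, seed=1):
    n = len(dist)
    rng = random.Random(seed)
    taus = [[[1.0] * n for _ in range(n)] for _ in range(clans)]  # one matrix per clan
    best, best_len = None, float("inf")
    for _ in range(iters):
        for tau in taus:
            for _ in range(ants):
                t = build_tour(n, dist, tau, rng)
                L = tour_length(t, dist)
                if L < best_len:
                    best, best_len = t, L
            # evaporate, then reinforce the global best tour on this clan's matrix
            for i in range(n):
                for j in range(n):
                    tau[i][j] *= (1 - rho)
            for i in range(n):
                a, b = best[i], best[(i + 1) % n]
                tau[a][b] += 1.0 / best_len
                tau[b][a] += 1.0 / best_len
    return best, best_len

# Four cities on a unit square; the optimal tour is the perimeter, length 4.
coords = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.hypot(xa - xb, ya - yb) for (xb, yb) in coords]
        for (xa, ya) in coords]
best, best_len = aco_multi_clan(dist)
```

Separate pheromone matrices keep the clans exploring different regions, which is the island-model intuition borrowed from parallel genetic algorithms.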

Journal ArticleDOI
TL;DR: An expert system is presented for interpretation of the Doppler signals of heart valve diseases based on pattern recognition using a wavelet neural network model developed by the authors, and the test results show that this system is effective in detecting Doppler heart sounds.
Abstract: An expert system is presented for interpretation of the Doppler signals of heart valve diseases based on pattern recognition. We deal in particular with the combination of feature extraction and classification from Doppler signal waveforms measured at the heart valve using Doppler ultrasound. A wavelet neural network model developed by us is used. The model consists of two layers: a wavelet layer and a multilayer perceptron. The wavelet layer, used for adaptive feature extraction in the time-frequency domain, is composed of wavelet decomposition and wavelet entropy. The multilayer perceptron used for classification is a feedforward neural network. The performance of the developed system has been evaluated on 215 samples. The test results show that this system is effective in detecting Doppler heart sounds. The classification rate averaged 91% correct for 123 test subjects.

Journal ArticleDOI
TL;DR: Using the concept of rough sets, a comprehensive application where the information system is reduced so as to get a minimum subset of attributes without loss of quality is presented; a set of nine decision rules is obtained.
Abstract: The great volume of information used/obtained in industry nowadays produces the need for systems that are helpful in decision-making. An area of special interest, because of its influence on productive capacity, is maintenance, where the great quantity of influential variations and available facts (including real time) make the analysis almost impossible. Using the concept of rough sets, we present a comprehensive (but real) application where the information system is reduced so as to get a minimum subset of attributes without loss of quality; a set of nine decision rules is obtained.
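
The reduction step, finding a minimal attribute subset that preserves classification quality, rests on the rough set positive region. The sketch below is our own toy example (the maintenance data and nine rules of the paper are not reproduced): it checks that a single attribute classifies a small decision table as well as all attributes together.

```python
def partition(rows, attrs):
    """Group object indices by their values on the given attributes
    (the indiscernibility classes)."""
    blocks = {}
    for i, row in enumerate(rows):
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(i)
    return list(blocks.values())

def positive_region(rows, cond, decision):
    """Objects whose condition class is decision-consistent."""
    pos = set()
    for block in partition(rows, cond):
        if len({rows[i][decision] for i in block}) == 1:
            pos |= block
    return pos

# Toy decision table: columns 0-2 are condition attributes, column 3 the decision.
table = [
    ("high", "yes", "a", "fail"),
    ("high", "no",  "a", "fail"),
    ("low",  "yes", "b", "pass"),
    ("low",  "no",  "b", "pass"),
]
full = positive_region(table, [0, 1, 2], 3)
reduced = positive_region(table, [0], 3)  # attribute 0 alone preserves the classification
```

Because the positive region is unchanged, attributes 1 and 2 are dispensable here and {0} is a reduct, which is exactly the "minimum subset without loss of quality" the abstract describes.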

Journal ArticleDOI
TL;DR: While the personalized prioritizing of news is a very useful feature on the wired Web, it becomes essential on the wireless Web, the promising next generation of the Web.
Abstract: A formidable synergy can be obtained by putting expert system technology onto the Internet. The modern trend of embedding expert systems into Websites is proving very promising, in particular for the field of marketing via the Web. The last two years have seen a growing interest in providing Websites with suitable embedded expert systems for one-to-one marketing. One-to-one marketing means marketing in a personalized way, i.e. marketing in a way that is adaptive to the personal needs of the user. A basic feature of this marketing framework is the personalized prioritizing of news, i.e. presenting information in an order that is relevant to the specific needs of the current user. While the personalized prioritizing of news is a very useful feature on the wired Web, it becomes essential on the wireless Web, the promising next generation of the Web. The paper presents a general methodology for the personalized prioritizing of news. The methodology integrates decision theory with a deep-knowledge-based user model (i.e. causal knowledge linking user preferences to user goals). The deep-knowledge model of the user is a source of power for the methodology because it allows the system to know (and possibly explain) why the user acts the way he/she acts. Another relevant aspect of the methodology is that the burden of personalization is not placed on the user: the user does not have to declare his/her needs, interests or goals, as they are automatically inferred from his/her profile data. In order to investigate the ideas underlying the proposal, the methodology has been implemented in a prototype and then tested on real cases in the context of a supercomputing portal.


Journal ArticleDOI
TL;DR: A logic‐based approach to rule induction in expert systems is presented which is simple, robust and consistent and applies to the development of rules for the entry decisions of new products.
Abstract: Rule induction is an important part of learning in expert systems. Rules can help managers make more effective decisions and gain insight into the relationships between decision variables. We present a logic-based approach to rule induction in expert systems which is simple, robust and consistent. We also derive bounds on levels of certainty for combining rules. We apply our approach to the development of rules for the entry decisions of new products. We then discuss how the logic-based approach of rule induction can be used to create a decision support system and the methodology to create such a system.

Journal ArticleDOI
TL;DR: A knowledge-based tool which can support the whole process of software development is provided in this paper, as part of research whose final goal is a paradigm for software engineering integrating the engineering, formal and knowledge-based approaches.
Abstract: Many approaches have been proposed to enhance software productivity and reliability. These approaches typically fall into three categories: the engineering approach, the formal approach and the knowledge-based approach. But the optimal gain in software productivity cannot be obtained if one relies on only one of these approaches. This paper describes the work in knowledge-based software engineering conducted by the authors for the past 10 years. The final goal of the research is to develop a paradigm for software engineering which integrates the three approaches mentioned above. A knowledge-based tool which can support the whole process of software development is provided in this paper.

Journal ArticleDOI
TL;DR: This paper designs an experiment and tries to show that the observer can to some degree understand the actor based on its knowledge and some metaphors, i.e. understand what the actor is doing and why.
Abstract: In this paper we design an experiment which can be depicted as a simple scenario, a very limited ‘world’. In this world, there is an actor that can pursue a project and an observer that keeps its eyes on the actor. We try to show in the experiment that the observer can, to some degree, understand the actor based on its knowledge and some metaphors, i.e. understand what the actor is doing and why. As the conclusion of this experiment, we try to show some features of ‘understanding’: (1) ‘understanding’ has to be based on some preliminary knowledge; (2) ‘understanding’ is a process of incremental learning; (3) for symbolic systems, some metaphors are necessary for mapping real entities onto concepts in the mind.

Journal ArticleDOI
TL;DR: A knowledge-based system for effectively evaluating alternative routes at different traveling times in a medium-size city is presented, which can be regarded as a new tool in engineering orientation for route finding in transportation systems.
Abstract: This paper presents the development of a knowledge-based system for effectively evaluating alternative routes at different traveling times in a medium-size city. A decision analysis technique, the analytical hierarchy process, was incorporated via a preference scale into the reasoning mechanism of the system to evaluate up to 54 possible road segments based on nine major transportation factors. The resulting platform provides travelers with a quantitative measure to compare and contrast alternative routes connecting departing and destination points within the city limits. The developed prototype system, which was successfully validated by independent experts in the field, can be regarded as a new tool in engineering orientation for route finding in transportation systems.
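
The analytical hierarchy process step can be illustrated by deriving a priority vector from a pairwise comparison matrix. The sketch below uses the standard column-normalization approximation and an invented 3-factor matrix (the paper's nine transportation factors and preference scale are not reproduced).

```python
def ahp_weights(M):
    """Approximate the AHP priority vector by averaging the normalized columns
    of the pairwise comparison matrix M (a common textbook approximation)."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    return [sum(M[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

# Hypothetical pairwise comparisons for three factors, e.g. travel time,
# distance and congestion (illustrative names, not taken from the paper).
M = [[1.0,     3.0,     5.0],
     [1 / 3.0, 1.0,     3.0],
     [1 / 5.0, 1 / 3.0, 1.0]]
w = ahp_weights(M)
```

The weights sum to 1 and rank the factors consistently with the comparisons; scoring each candidate route as a weighted sum of these priorities yields the quantitative measure the abstract describes.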

Journal ArticleDOI
TL;DR: The main intelligent systems research institutions in China, as well as their main research topics, are introduced; a variety of achievements have been made in the field.
Abstract: In recent years, research on intelligent systems has become more and more popular in academia. Intelligent systems research mainly involves the field of artificial intelligence. There are many research groups engaged in the research of intelligent systems around the world, and a variety of achievements have been made. In this paper, the main research institutions in China as well as their main research topics are introduced.