
Showing papers on "Knowledge representation and reasoning" published in 2003


BookDOI
01 Jan 2003
TL;DR: The Description Logic Handbook, as discussed in this paper, provides a thorough account of the subject, covering all aspects of research in this field, namely theory, implementation, and applications; it can also be used for self-study or as a reference for knowledge representation and artificial intelligence courses.
Abstract: Description logics are embodied in several knowledge-based systems and are used to develop various real-life applications. Now in paperback, The Description Logic Handbook provides a thorough account of the subject, covering all aspects of research in this field, namely: theory, implementation, and applications. Its appeal will be broad, ranging from more theoretically oriented readers, to those with more practically oriented interests who need a sound and modern understanding of knowledge representation systems based on description logics. As well as general revision throughout the book, this new edition presents a new chapter on ontology languages for the semantic web, an area of great importance for the future development of the web. In sum, the book will serve as a unique resource for the subject, and can also be used for self-study or as a reference for knowledge representation and artificial intelligence courses.

5,644 citations


Book
01 Jan 2003
TL;DR: Chitta Baral demonstrates how to write programs that behave intelligently by giving them the ability to express knowledge and reason about it and presents a language, AnsProlog, for both knowledge representation and reasoning, and declarative problem solving.
Abstract: Knowledge management and knowledge-based intelligence are areas of importance in today's economy and society, and their exploitation requires representation via the development of a declarative interface whose input language is based on logic. Chitta Baral demonstrates how to write programs that behave intelligently by giving them the ability to express knowledge and reason about it. He presents a language, AnsProlog, for both knowledge representation and reasoning, and declarative problem solving. Many of the results have never appeared before in book form but are organized here for those wishing to learn more about the subject, either in courses or through self-study.

1,532 citations
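
Since this abstract centers on AnsProlog's answer-set semantics, a tiny illustration may help. The sketch below brute-forces the stable models of a two-rule propositional program; the tuple encoding of rules is an assumption of this example, and real AnsProlog solvers use far more efficient grounding and search:

```python
from itertools import chain, combinations

# A propositional AnsProlog-style rule "head :- pos, not neg" encoded as
# (head, positive_body_atoms, negated_body_atoms).
# Program:  p :- not q.   q :- not p.   (two stable models: {p} and {q})
rules = [
    ("p", set(), {"q"}),
    ("q", set(), {"p"}),
]
atoms = {"p", "q"}

def reduct(rules, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negated atoms meet the
    candidate set; strip negation from the remaining rules."""
    return [(h, pos) for (h, pos, neg) in rules if not (neg & candidate)]

def least_model(positive_rules):
    """Least model of a negation-free program via forward chaining."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def stable_models(rules, atoms):
    candidates = chain.from_iterable(
        combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
    for cand in map(set, candidates):
        if least_model(reduct(rules, cand)) == cand:
            yield cand

print(list(stable_models(rules, atoms)))   # [{'p'}, {'q'}]
```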


Journal ArticleDOI
TL;DR: The Foundational Model of Anatomy is proposed as a reference ontology in biomedical informatics for correlating different views of anatomy, aligning existing and emerging bioinformatics ontologies, and providing a structure-based template for representing biological functions.

1,060 citations


Proceedings ArticleDOI
20 May 2003
TL;DR: It is shown how to interoperate, semantically and inferentially, between the leading Semantic Web approaches to rules and ontologies by analyzing their expressive intersection, and a new intermediate knowledge representation contained within that intersection is defined: Description Logic Programs (DLP), together with the closely related Description Horn Logic (DHL).
Abstract: We show how to interoperate, semantically and inferentially, between the leading Semantic Web approaches to rules (RuleML Logic Programs) and ontologies (OWL/DAML+OIL Description Logic) via analyzing their expressive intersection. To do so, we define a new intermediate knowledge representation (KR) contained within this intersection: Description Logic Programs (DLP), and the closely related Description Horn Logic (DHL) which is an expressive fragment of first-order logic (FOL). DLP provides a significant degree of expressiveness, substantially greater than the RDF-Schema fragment of Description Logic. We show how to perform DLP-fusion: the bidirectional translation of premises and inferences (including typical kinds of queries) from the DLP fragment of DL to LP, and vice versa from the DLP fragment of LP to DL. In particular, this translation enables one to "build rules on top of ontologies": it enables the rule KR to have access to DL ontological definitions for vocabulary primitives (e.g., predicates and individual constants) used by the rules. Conversely, the DLP-fusion technique likewise enables one to "build ontologies on top of rules": it enables ontological definitions to be supplemented by rules, or imported into DL from rules. It also enables available efficient LP inferencing algorithms/implementations to be exploited for reasoning over large-scale DL ontologies.

939 citations
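
The flavor of the DL-to-LP half of DLP-fusion can be conveyed with a toy mapping. The three axiom patterns below (subclass, property domain, property range) are among those the paper places in the expressive intersection; the string-based rendering of rules is only an illustrative assumption, not the paper's concrete syntax:

```python
# DL axiom patterns -> Horn rules, in the style of the DLP intersection.

def subclass_to_rule(sub, sup):
    # C subClassOf D   ==>   D(x) :- C(x).
    return f"{sup}(X) :- {sub}(X)."

def domain_to_rule(prop, cls):
    # P rdfs:domain C  ==>   C(x) :- P(x, y).
    return f"{cls}(X) :- {prop}(X, Y)."

def range_to_rule(prop, cls):
    # P rdfs:range C   ==>   C(y) :- P(x, y).
    return f"{cls}(Y) :- {prop}(X, Y)."

print(subclass_to_rule("Student", "Person"))    # Person(X) :- Student(X).
print(domain_to_rule("enrolledIn", "Student"))  # Student(X) :- enrolledIn(X, Y).
print(range_to_rule("enrolledIn", "Course"))    # Course(Y) :- enrolledIn(X, Y).
```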


Journal ArticleDOI
TL;DR: In this paper, the authors define a new intermediate knowledge representation (KR) contained within the expressive intersection of rules and ontologies: Description Logic Programs (DLP), and the closely related Description Horn Logic (DHL), which is an expressive fragment of first-order logic (FOL).
Abstract: We show how to interoperate, semantically and inferentially, between the leading Semantic Web approaches to rules (RuleML Logic Programs) and ontologies (OWL/DAML+OIL Description Logic) via analyzing their expressive intersection. To do so, we define a new intermediate knowledge representation (KR) contained within this intersection: Description Logic Programs (DLP), and the closely related Description Horn Logic (DHL) which is an expressive fragment of first-order logic (FOL). DLP provides a significant degree of expressiveness, substantially greater than the RDF-Schema fragment of Description Logic. We show how to perform DLP-fusion: the bidirectional translation of premises and inferences (including typical kinds of queries) from the DLP fragment of DL to LP, and vice versa from the DLP fragment of LP to DL. In particular, this translation enables one to "build rules on top of ontologies": it enables the rule KR to have access to DL ontological definitions for vocabulary primitives (e.g., predicates and individual constants) used by the rules. Conversely, the DLP-fusion technique likewise enables one to "build ontologies on top of rules": it enables ontological definitions to be supplemented by rules, or imported into DL from rules. It also enables available efficient LP inferencing algorithms/implementations to be exploited for reasoning over large-scale DL ontologies.

843 citations


Journal ArticleDOI
TL;DR: A methodology for interpreting linguistic structures that encode hypernymic propositions, in which a more specific concept is in a taxonomic relationship with a more general concept, has the potential to support a range of applications, including information retrieval and ontology engineering.

504 citations


Journal ArticleDOI
TL;DR: D1LP provides a concept of proof-of-compliance that is founded on well-understood principles of logic programming and knowledge representation, and provides a logical framework for studying delegation.
Abstract: We address the problem of authorization in large-scale, open, distributed systems. Authorization decisions are needed in electronic commerce, mobile-code execution, remote resource sharing, privacy protection, and many other applications. We adopt the trust-management approach, in which "authorization" is viewed as a "proof-of-compliance" problem: Does a set of credentials prove that a request complies with a policy?We develop a logic-based language, called Delegation Logic (DL), to represent policies, credentials, and requests in distributed authorization. In this paper, we describe D1LP, the monotonic version of DL. D1LP extends the logic-programming (LP) language Datalog with expressive delegation constructs that feature delegation depth and a wide variety of complex principals (including, but not limited to, k-out-of-n thresholds). Our approach to defining and implementing D1LP is based on tractably compiling D1LP programs into ordinary logic programs (OLPs). This compilation approach enables D1LP to be implemented modularly on top of existing technologies for OLP, for example, Prolog.As a trust-management language, D1LP provides a concept of proof-of-compliance that is founded on well-understood principles of logic programming and knowledge representation. D1LP also provides a logical framework for studying delegation.

462 citations
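
D1LP's distinctive construct is delegation with bounded depth. The sketch below reduces that idea to reachability over delegation edges; the data model and the exact depth accounting are simplifying assumptions, since the paper defines the semantics by compiling D1LP into ordinary logic programs:

```python
# Each edge (delegator, delegatee, depth) lets authority pass through
# chains of at most `depth` further hops beyond the delegator.
DELEGATIONS = {
    ("root", "alice", 2),
    ("alice", "bob", 1),
}

def has_authority(target, issuer="root", budget=float("inf"), seen=()):
    """True if authority flows from `issuer` to `target` while respecting
    each edge's declared delegation depth and the inherited budget."""
    if issuer == target:
        return True
    for frm, to, depth in DELEGATIONS:
        if frm == issuer and to not in seen:
            remaining = min(budget, depth) - 1   # this hop consumes one step
            if to == target or (remaining >= 1 and
                                has_authority(target, to, remaining, seen + (to,))):
                return True
    return False

print(has_authority("bob"))     # True: root -> alice (depth 2) -> bob
print(has_authority("carol"))   # False: no delegation chain reaches carol
```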


Journal ArticleDOI
TL;DR: This paper provides a theoretical foundation for the CAIP paradigm, a paradigm for representing and reasoning about plans, and shows how the plans are naturally expressed by networks of constraints, and that the process of planning maps directly to dynamic constraint reasoning.
Abstract: In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. We describe compatibilities, a compact mechanism for describing planning domains. We also demonstrate how this framework incorporates the use of constraint representation and reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

253 citations
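
One concrete way to read "plans as networks of constraints" is a Simple Temporal Network over interval endpoints, where consistency reduces to a shortest-path computation. This is a generic sketch, not EUROPA's actual machinery; the activities, bounds, and event indices are invented:

```python
import math

# Events: 0 = start(A), 1 = end(A), 2 = start(B), 3 = end(B).
N, INF = 4, math.inf
# dist[i][j] is the tightest known bound on time[j] - time[i].
dist = [[0 if i == j else INF for j in range(N)] for i in range(N)]

def constrain(i, j, lo, hi):
    """Require lo <= time[j] - time[i] <= hi."""
    dist[i][j] = min(dist[i][j], hi)
    dist[j][i] = min(dist[j][i], -lo)

constrain(0, 1, 5, 10)    # activity A takes between 5 and 10 time units
constrain(2, 3, 3, 3)     # activity B takes exactly 3
constrain(1, 2, 0, INF)   # mutual exclusion: B starts after A ends

def consistent():
    """Floyd-Warshall: the network is consistent iff no negative cycle."""
    d = [row[:] for row in dist]
    for k in range(N):
        for i in range(N):
            for j in range(N):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return all(d[i][i] >= 0 for i in range(N))

print(consistent())   # True: some schedule satisfies all the constraints
```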


Patent
11 Dec 2003
TL;DR: In this paper, a knowledge-based natural speech dialogue system is described that includes: (i) a knowledge support system, (ii) a flexible dialogue control system, and (iii) a context information system.
Abstract: A knowledge-based natural speech dialogue system includes: (i) a knowledge support system, (ii) a flexible dialogue control system, and (iii) a context information system. Flexibilities of the conversation structure, inherent in the mixed-initiative mode for dealing with complex user requests, are managed because the knowledge structures involved are represented by additional, powerful knowledge representation tools, and because the context information is retained by more specific data structures, which cover larger temporal scopes by following the logic of the conversation rather than a fixed locality of the grammar flow. This system provides a simple yet reliable method to compensate for these factors and enable more powerful conversation engines with mixed-initiative capabilities.

216 citations


Book ChapterDOI
22 Jun 2003
TL;DR: A generic ontology-based user modeling architecture (OntobUM) is presented, applied in the context of a Knowledge Management System (KMS); it relies on a user ontology based on the IMS LIP specifications, uses Semantic Web technologies, and is integrated in an ontology-based KMS called Ontologging.
Abstract: This paper presents a generic ontology-based user modeling architecture (OntobUM), applied in the context of a Knowledge Management System (KMS). Due to their powerful knowledge representation formalism and associated inference mechanisms, ontology-based systems are emerging as a natural choice for the next generation of KMSs operating in organizational, interorganizational as well as community contexts. User models, often addressed as user profiles, have been included in KMSs mainly as simple ways of capturing the user preferences and/or competencies. We extend this view by including other characteristics of the users relevant in the KM context, and we explain the reasons for doing so. The proposed user modeling system relies on a user ontology, using Semantic Web technologies, based on the IMS LIP specifications, and is integrated in an ontology-based KMS called Ontologging. We present a generic framework for implicit and explicit ontology-based user modeling.

200 citations


Journal ArticleDOI
TL;DR: An overview of the theories within the PSL ontology is given, some of the design principles for the ontology are discussed, and the paper finishes with examples of process specifications that are based on the ontology.
Abstract: The PROCESS SPECIFICATION LANGUAGE (PSL) has been designed to facilitate correct and complete exchange of process information among manufacturing systems, such as scheduling, process modeling, process planning, production planning, simulation, project management, work flow, and business-process reengineering. We give an overview of the theories within the PSL ontology, discuss some of the design principles for the ontology, and finish with examples of process specifications that are based on the ontology.

Journal ArticleDOI
TL;DR: The ABC model's ability to mediate and integrate between multimedia metadata vocabularies is evaluated by illustrating how it can provide the foundation to facilitate semantic interoperability between MPEG-7, MPEG-21 and other domain-specific metadata vocabularies.
Abstract: A core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse multimedia sources. A complete and extensible ontology that expresses the basic concepts that are common across a variety of domains and media types and that can provide the basis for specialization into domain-specific concepts and vocabularies, is essential for well-defined mappings between domain-specific knowledge representations (i.e., metadata vocabularies) and the subsequent building of a variety of services such as cross-domain searching, tracking, browsing, data mining and knowledge acquisition. As more and more communities develop metadata application profiles which combine terms from multiple vocabularies (e.g., Dublin Core, MPEG-7, MPEG-21, CIDOC/CRM, FGDC, IMS), a core ontology will provide a common understanding of the basic entities and relationships, which is essential for semantic interoperability and the development of additional services based on deductive inferencing. In this paper, we first propose such a core ontology (the ABC model) which was developed in response to a need to integrate information from multiple genres of multimedia content within digital libraries and archives. Although the MPEG-21 RDD was influenced by the ABC model and is based on a model extremely similar to ABC, we believe that it is important to define a separate and domain-independent top-level extensible ontology for scenarios in which either MPEG-21 is irrelevant or to enable the attachment of ontologies from communities external to MPEG, for example, the museum domain (CIDOC/CRM) or the biomedical domain (ON9.3). We evaluate the ABC model's ability to mediate and integrate between multimedia metadata vocabularies by illustrating how it can provide the foundation to facilitate semantic interoperability between MPEG-7, MPEG-21 and other domain-specific metadata vocabularies. By expressing the semantics of both MPEG-7 and MPEG-21 metadata terms in RDF Schema/DAML+OIL [and eventually the Web Ontology Language (OWL)] and attaching the MPEG-7 and MPEG-21 class and property hierarchies to the appropriate top-level classes and properties of the ABC model, we have defined a single distributed machine-understandable ontology. The resulting ontology provides semantic knowledge which is nonexistent within declarative XML schemas or XML-encoded metadata descriptions. Finally, in order to illustrate how such an ontology will contribute to the interoperability of data and services across the entire multimedia content delivery chain, we describe a number of valuable services which have been developed or could potentially be developed using the resulting merged ontologies.
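
The attachment step described above, hooking a domain class hierarchy under an ABC top-level class via RDF Schema, can be sketched with rdflib. The namespace URIs and class names below are placeholders, not the published ABC or MPEG-7 vocabularies:

```python
from rdflib import Graph, Namespace, RDFS

ABC = Namespace("http://example.org/abc#")      # placeholder URI
MPEG7 = Namespace("http://example.org/mpeg7#")  # placeholder URI

g = Graph()
g.bind("abc", ABC)
g.bind("mpeg7", MPEG7)

# Attach a (hypothetical) MPEG-7 class under a top-level ABC class:
# mpeg7:AudioSegment rdfs:subClassOf abc:Manifestation .
g.add((MPEG7.AudioSegment, RDFS.subClassOf, ABC.Manifestation))

print(g.serialize(format="turtle"))
```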

Proceedings ArticleDOI
09 Jun 2003
TL;DR: Issues of knowledge creation, knowledge conversion and transfer, continuous learning, competence management and team composition, and experience repositories and other tools for knowledge dissemination are examined.
Abstract: This paper presents a comparative analysis of knowledge sharing approaches of agile and Tayloristic (traditional) software development teams. Issues of knowledge creation, knowledge conversion and transfer, continuous learning, competence management and team composition are discussed. Experience repositories and other tools for knowledge dissemination are examined.

Patent
12 Feb 2003
TL;DR: In this article, a method and system for emulating a knowledge representation in a Unified Modeling Language (UML) environment is provided. But this method is limited to a single ontology.
Abstract: According to an embodiment of the present invention, there is provided a method and system for emulating a knowledge representation in a Unified Modeling Language (UML) environment. A Meta-Object Facility metamodel and UML profile are grounded in a foundation ontology. The elements representing the knowledge representation ontology are mapped to elements of UML, based on the grounded Meta-Object Facility metamodel and UML profile, thereby emulating the knowledge representation in a UML environment.

Proceedings ArticleDOI
20 May 2003
TL;DR: In this article, the authors present a rule-based approach to representation of business contracts that enables software agents to create, evaluate, negotiate, and execute contracts with substantial automation and modularity.
Abstract: SweetDeal is a rule-based approach to representation of business contracts that enables software agents to create, evaluate, negotiate, and execute contracts with substantial automation and modularity. It builds upon the situated courteous logic programs knowledge representation in RuleML, the emerging standard for Semantic Web XML rules. Here, we newly extend the SweetDeal approach by also incorporating process knowledge descriptions whose ontologies are represented in DAML+OIL (emerging standard for Semantic Web ontologies) thereby enabling more complex contracts with behavioral provisions, especially for handling exception conditions (e.g., late delivery or non-payment) that might arise during the execution of the contract. This provides a foundation for representing and automating deals about services -- in particular, about Web Services, so as to help search, select, and compose them. Our system is also the first to combine emerging Semantic Web standards for knowledge representation of rules (RuleML) with ontologies (DAML+OIL) for a practical e-business application domain, and further to do so with process knowledge. This also newly fleshes out the evolving concept of Semantic Web Services. A prototype (soon public) is running.
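
The situated courteous logic programs layer gives contract rules a priority mechanism for exceptions such as late delivery. Below is a toy Python reading of that override behavior; the rule structure, priorities, and contract scenario are invented, and the actual SweetDeal representation is RuleML:

```python
# Rules: (label, condition over facts, conclusion, priority; higher wins).
RULES = [
    ("default-pay",   lambda f: f["delivered"],                        "pay_full",    1),
    ("late-delivery", lambda f: f["delivered"] and f["days_late"] > 0, "pay_reduced", 2),
]

def decide(facts):
    """Fire all applicable rules, then let the highest-priority one win."""
    fired = [(prio, concl) for (_, cond, concl, prio) in RULES if cond(facts)]
    return max(fired)[1] if fired else "no_obligation"

print(decide({"delivered": True,  "days_late": 0}))   # pay_full
print(decide({"delivered": True,  "days_late": 3}))   # pay_reduced (exception wins)
print(decide({"delivered": False, "days_late": 0}))   # no_obligation
```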

Journal ArticleDOI
TL;DR: The CAWICOMS WORKBENCH for the development of configuration services, offering personalized user interaction as well as distributed configuration of products and services in a supply chain is described.
Abstract: For the last two decades, configuration systems relying on AI techniques have successfully been applied in industrial environments. These systems support the configuration of complex products and services in shorter time with fewer errors and, therefore, reduce the costs of a mass-customization business model. The European Union-funded project entitled CUSTOMER-ADAPTIVE WEB INTERFACE FOR THE CONFIGURATION OF PRODUCTS AND SERVICES WITH MULTIPLE SUPPLIERS (CAWICOMS) aims at the next generation of web-based configuration applications that cope with two challenges of today's open, networked economy: (1) the support for heterogeneous user groups in an open-market environment and (2) the integration of configurable subproducts provided by specialized suppliers.This article describes the CAWICOMS WORKBENCH for the development of configuration services, offering personalized user interaction as well as distributed configuration of products and services in a supply chain. The developed tools and techniques rely on a harmonized knowledge representation and knowledge-acquisition mechanism, open XML-based protocols, and advanced personalization and distributed reasoning techniques. We exploited the workbench based on the real-world business scenario of distributed configuration of services in the domain of information processing-based virtual private networks.

Book ChapterDOI
TL;DR: The principles underlying the design of the Flora-2 system are discussed and its salient features, including meta-programming, reification, logical database updates, encapsulation, and support for dynamic modules, are described.
Abstract: Flora-2 is a rule-based object-oriented knowledge base system designed for a variety of automated tasks on the Semantic Web, ranging from meta-data management to information integration to intelligent agents. The Flora-2 system integrates F-logic, HiLog, and Transaction Logic into a coherent knowledge representation and inference language. The result is a flexible and natural framework that combines rule-based and object-oriented paradigms. This paper discusses the principles underlying the design of the Flora-2 system and describes its salient features, including meta-programming, reification, logical database updates, encapsulation, and support for dynamic modules.

Journal ArticleDOI
TL;DR: This work addresses the problem of scalable and deployable query systems and presents a simple but general query interface called GetData, and introduces the concept of Semantic Negotiation, a process by which two programs can bootstrap from small shared vocabularies to larger shared vocabularies.


Journal ArticleDOI
TL;DR: This paper investigates an inherent and autonomous comparison criterion, based on specificity as defined in [POO 85, SIM 92], which is context-sensitive, i.e., preference among defeasible rules is determined dynamically during the dialectical analysis.
Abstract: Most formalisms for representing common-sense knowledge allow incomplete and potentially inconsistent information. When strong negation is also allowed, contradictory conclusions can arise, and a criterion for deciding between them is needed. The aim of this paper is to investigate an inherent and autonomous comparison criterion, based on specificity as defined in [POO 85, SIM 92]. In contrast to other approaches, we consider not only defeasible but also strict knowledge. Our criterion is context-sensitive, i.e., preference among defeasible rules is determined dynamically during the dialectical analysis. We show how specificity can be defined in terms of two different approaches: activation sets and derivation trees. This allows us to get a syntactic criterion that can be implemented in a computationally attractive way. The resulting definitions may be applied in general rule-based formalisms. We present theorems linking both characterizations. Finally, we discuss other frameworks for defeasible reasoning in ...
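
In its simplest propositional reading, specificity prefers the rule whose antecedent carries strictly more information; the classic penguin example makes this concrete. The toy comparison below ignores the paper's activation sets, derivation trees, and strict knowledge, and the rule encoding is assumed:

```python
def more_specific(rule1, rule2):
    """rule = (set of antecedent literals, conclusion).
    rule1 is preferred if its antecedent strictly extends rule2's."""
    (ante1, _), (ante2, _) = rule1, rule2
    return ante2 < ante1   # proper-subset test on the antecedents

r_fly   = ({"bird"}, "flies")
r_nofly = ({"bird", "penguin"}, "not flies")
print(more_specific(r_nofly, r_fly))   # True: the penguin rule wins
print(more_specific(r_fly, r_nofly))   # False
```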

Proceedings ArticleDOI
23 Mar 2003
TL;DR: A semantic network is presented as a knowledge representation for encapsulating rooms, users, groups, roles, and other information and its utility is demonstrated as a basis for ongoing work.
Abstract: When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.
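
A semantic network of the kind described can be held as labeled triples with simple traversal; the node and relation names below are invented for illustration:

```python
# Labeled edges over rooms, users, groups, and roles.
EDGES = {
    ("alice", "member-of", "hci-group"),
    ("hci-group", "occupies", "room-201"),
    ("alice", "has-role", "presenter"),
    ("room-201", "contains", "projector"),
}

def neighbors(node, relation):
    """All nodes reachable from `node` via edges labeled `relation`."""
    return {dst for (src, rel, dst) in EDGES if src == node and rel == relation}

# Which rooms is alice associated with through her group memberships?
rooms = {room
         for group in neighbors("alice", "member-of")
         for room in neighbors(group, "occupies")}
print(rooms)   # {'room-201'}
```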

Journal ArticleDOI
TL;DR: A context-based paradigm for intelligent assistant systems for traffic control that supports operators who monitor a subway line and solve problems when they occur is developed.
Abstract: The author has developed a context-based paradigm for intelligent assistant systems from his experience in real-world applications. He concentrates on a system for traffic control (SART, Systeme d'Aide a la Regulation du Trafic). It supports operators who monitor a subway line and solve problems when they occur.

Proceedings ArticleDOI
31 May 2003
TL;DR: Syntactically Enhanced LSA (SELSA) is presented, an approach that generalizes LSA by considering a word along with its syntactic neighborhood, given by the part-of-speech tag of its preceding word, as the unit of knowledge representation; it provides better discrimination of syntactic-semantic knowledge representation than LSA.
Abstract: Latent semantic analysis (LSA) has been used in several intelligent tutoring systems (ITSs) for assessing students' learning by evaluating their answers to questions in the tutoring domain. It is based on word-document co-occurrence statistics in the training corpus and a dimensionality reduction technique. However, it does not consider word-order or syntactic information, which can improve the knowledge representation and therefore lead to better performance of an ITS. We present here an approach called Syntactically Enhanced LSA (SELSA), which generalizes LSA by considering a word along with its syntactic neighborhood, given by the part-of-speech tag of its preceding word, as a unit of knowledge representation. Experimental results on the AutoTutor task of evaluating students' answers to basic computer science questions by SELSA, and its comparison with LSA, are presented in terms of several cognitive measures. SELSA is able to correctly evaluate a few more answers than LSA, but has a lower correlation with human evaluators than LSA does. It also provides better discrimination of syntactic-semantic knowledge representation than LSA.
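
SELSA's unit of representation, a word paired with the part-of-speech tag of its preceding word, can be illustrated with a toy co-occurrence matrix followed by the SVD step shared with LSA. The corpus, tags, and dimensions below are made up for the example:

```python
import numpy as np

# Each token is (tag-of-preceding-word, word); "<s>" marks sentence start.
docs = [
    [("<s>", "the"), ("DT", "cpu"), ("NN", "executes"), ("VBZ", "instructions")],
    [("<s>", "the"), ("DT", "memory"), ("NN", "stores"), ("VBZ", "data")],
]
units = sorted({u for doc in docs for u in doc})
index = {u: i for i, u in enumerate(units)}

# Unit-by-document co-occurrence counts.
X = np.zeros((len(units), len(docs)))
for j, doc in enumerate(docs):
    for u in doc:
        X[index[u], j] += 1

# Rank-k truncated SVD: the dimensionality-reduction step of LSA/SELSA.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(X_k.shape)   # (7, 2): seven (tag, word) units against two documents
```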

Journal ArticleDOI
01 Nov 2003
TL;DR: This article proposes a declarative approach to NLG in which the generator directly explores a search space for utterances described by a linguistic grammar and uses a model of interpretation, which characterizes the potential links between the utterance and the domain and context.
Abstract: The process of microplanning in natural language generation (NLG) encompasses a range of problems in which a generator must bridge underlying domain-specific representations and general linguistic representations. These problems include constructing linguistic referring expressions to identify domain objects, selecting lexical items to express domain concepts, and using complex linguistic constructions to concisely convey related domain facts. In this paper, we argue that such problems are best solved through a uniform, comprehensive, declarative process. In our approach, the generator directly explores a search space for utterances described by a linguistic grammar. At each stage of search, the generator uses a model of interpretation, which characterizes the potential links between the utterance and the domain and context, to assess its progress in conveying domain-specific representations. We further address the challenges for implementation and knowledge representation in this approach. We show how to implement this approach effectively by using the lexicalized tree-adjoining grammar (LTAG) formalism to connect structure to meaning and using modal logic programming to connect meaning to context. We articulate a detailed methodology for designing grammatical and conceptual resources which the generator can use to achieve desired microplanning behavior in a specified domain. In describing our approach to microplanning, we emphasize that we are in fact realizing a deliberative process of goal-directed activity. As we formulate it, interpretation offers a declarative representation of a generator's communicative intent. It associates the concrete linguistic structure planned by the generator with inferences that show how the meaning of that structure communicates needed information about some application domain in the current discourse context. Thus, interpretations are plans that the microplanner constructs and outputs. At the same time, communicative intent representations provide a rich and uniform resource for the process of NLG. Using representations of communicative intent, a generator can augment the syntax, semantics, and pragmatics of an incomplete sentence simultaneously, and can work incrementally toward solutions for the various problems of microplanning.

Journal ArticleDOI
TL;DR: The author's JessTab extension lets you write Jess programs that manage Protege ontologies and knowledge bases, a popular, modular ontology development and knowledge acquisition tool.
Abstract: Integration with external systems, such as problem solvers, is becoming increasingly important for ontology development and knowledge-modeling tools. The author's JessTab extension lets you write Jess programs that manage Protege ontologies and knowledge bases. Protege is a popular, modular ontology development and knowledge acquisition tool.

Journal ArticleDOI
TL;DR: A description logic based definition of a configuration problem is given and its equivalence with existing consistency-based definitions is shown, thus joining the two major streams in knowledge-based configuration (description logics and predicate logic/constraint based configuration).
Abstract: Today's economy exhibits a growing trend toward highly specialized solution providers cooperatively offering configurable products and services to their customers. This paradigm shift requires the extension of current standalone configuration technology with capabilities of knowledge sharing and distributed problem solving. In this context a standardized configuration knowledge representation language with formal semantics is needed in order to support knowledge interchange between different configuration environments. Languages such as Ontology Inference Layer (OIL) and DARPA Agent Markup Language (DAML+OIL) are based on such formal semantics (description logic) and are very popular for knowledge representation in the Semantic Web. In this paper we analyze the applicability of those languages with respect to configuration knowledge representation and discuss additional demands on expressivity. For joint configuration problem solving it is necessary to agree on a common problem definition. Therefore, we give a description logic based definition of a configuration problem and show its equivalence with existing consistency-based definitions, thus joining the two major streams in knowledge-based configuration (description logics and predicate logic/constraint-based configuration).

Proceedings ArticleDOI
27 May 2003
TL;DR: The GetSmart system was created to apply knowledge management techniques in a learning environment; its design is based on an analysis of learning theory and the information search process, and analysis of the collected data is revealing interesting knowledge representation patterns.
Abstract: The National Science Digital Library (NSDL), launched in December 2002, is emerging as a center of innovation in digital libraries as applied to education. As a part of this extensive project, the GetSmart system was created to apply knowledge management techniques in a learning environment. The design of the system is based on an analysis of learning theory and the information search process. Its key notion is the integration of search tools and curriculum support with concept mapping. More than 100 students at the University of Arizona and Virginia Tech used the system in the fall of 2002. A database of more than one thousand student-prepared concept maps has been collected with more than forty thousand relationships expressed in semantic, graphical, node-link representations. Preliminary analysis of the collected data is revealing interesting knowledge representation patterns.

Patent
25 Jun 2003
TL;DR: In this paper, an integrated human and computer interactive data mining method receives an input database, and a learning, modeling, and analysis method uses the database to create an initial knowledge model, which is processed to create a knowledge presentation output for visualization.
Abstract: An integrated human and computer interactive data mining method receives an input database. A learning, modeling, and analysis method uses the database to create an initial knowledge model. A query of the initial knowledge model is performed using a query request. The initial knowledge model is processed to create a knowledge presentation output for visualization. It further comprises a feedback and update request step that updates the initial knowledge model. A multiple-level integrated human and computer interactive data mining method facilitates overview interactive data mining and dynamic learning and knowledge representation by using the initial knowledge model and the database to create and update a presentable knowledge model. It facilitates zoom-and-filter interactive data mining and dynamic learning and knowledge representation by using the presentable knowledge model and the database to create and update the presentable knowledge model. It further facilitates details-on-demand interactive data mining and dynamic learning and knowledge representation by using the presentable knowledge model and the database to create and update the presentable knowledge model. The integrated human and computer interactive data mining method allows rule viewing by a parallel coordinate visualization technique that maps a multiple-dimensional space onto two display dimensions, with data items presented as polygonal lines.
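
The "rule viewing" step maps multi-dimensional items onto two display dimensions as polygonal lines, which is a standard parallel coordinates plot; a minimal matplotlib rendering with invented, pre-normalized data might look like this:

```python
import matplotlib.pyplot as plt

attributes = ["age", "income", "visits", "score"]
items = [
    [0.2, 0.8, 0.5, 0.9],   # each data item normalized to [0, 1]
    [0.7, 0.3, 0.6, 0.2],
    [0.5, 0.5, 0.1, 0.4],
]

xs = range(len(attributes))
for item in items:
    plt.plot(xs, item, marker="o")          # one polygonal line per item
for x in xs:
    plt.axvline(x, color="grey", lw=0.5)    # one vertical axis per attribute
plt.xticks(xs, attributes)
plt.ylabel("normalized value")
plt.title("Parallel coordinates view of data items")
plt.show()
```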

Journal ArticleDOI
TL;DR: It is argued that this partition provides a possible explanation of why, in KRR, context is used to solve different types of problems, or to address the same problems from very different perspectives.

Journal ArticleDOI
01 Mar 2003
TL;DR: The simulation results have shown that the proposed HIS performs better than the individual standalone systems and the comparison results show that the linguistic rules extracted are competitive with or even superior to some well-known methods.
Abstract: In this paper, we propose a novel hybrid intelligent system (HIS) which provides a unified integration of numerical and linguistic knowledge representations. The proposed HIS is a hierarchical integration of an incremental learning fuzzy neural network (ILFN) and a linguistic model, i.e., fuzzy expert system (FES), optimized via the genetic algorithm (GA). The ILFN is a self-organizing network. The linguistic model is constructed based on knowledge embedded in the trained ILFN or provided by the domain expert. The knowledge captured from the low-level ILFN can be mapped to the higher level linguistic model and vice versa. The GA is applied to optimize the linguistic model to maintain high accuracy, comprehensibility, completeness, compactness, and consistency. The resulted HIS is capable of dealing with low-level numerical computation and higher level linguistic computation. After the system is completely constructed, it can incrementally learn new information in both numerical and linguistic forms. To evaluate the system's performance, the well-known benchmark Wisconsin breast cancer data set was studied for an application to medical diagnosis. The simulation results have shown that the proposed HIS performs better than the individual standalone systems. The comparison results show that the linguistic rules extracted are competitive with or even superior to some well-known methods. Our interest is not only on improving the accuracy of the system, but also enhancing the comprehensibility of the resulted knowledge representation.