
Showing papers on "Upper ontology" published in 1999


Book ChapterDOI
04 Jan 1999
TL;DR: Ontobroker, which uses formal ontologies to extract, reason about, and generate metadata in the WWW, is developed; generating RDF descriptions enables the exploitation of the ontological information in RDF-based applications.
Abstract: The World Wide Web (WWW) can be viewed as the largest multimedia database that has ever existed. However, its support for query answering and automated inference is very limited. Metadata and domain specific ontologies were proposed by several authors to solve this problem. We developed Ontobroker which uses formal ontologies to extract, reason, and generate metadata in the WWW. The paper describes the formalisms and tools for formulating queries, defining ontologies, extracting metadata, and generating metadata in the format of the Resource Description Framework (RDF), as recently proposed by the World Wide Web Consortium (W3C). These methods provide a means for semantic based query handling even if the information is spread over several sources. Furthermore, the generation of RDF descriptions enables the exploitation of the ontological information in RDF-based applications.

555 citations
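
A minimal sketch (not the authors' code) of the kind of RDF metadata generation the abstract describes, using the Python rdflib library; the namespace, resource and property names are invented for illustration.

    # Sketch: emitting an RDF description for an annotated web page.
    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    EX = Namespace("http://example.org/ontology#")  # hypothetical ontology namespace

    g = Graph()
    g.bind("ex", EX)

    doc = URIRef("http://example.org/people/richter.html")  # hypothetical page
    g.add((doc, RDF.type, EX.Researcher))         # class from the ontology
    g.add((doc, EX.name, Literal("A. Richter")))  # extracted metadata
    g.add((doc, EX.worksOn, EX.Ontobroker))

    # Serialize in the W3C-proposed RDF/XML format mentioned in the abstract.
    print(g.serialize(format="xml"))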


01 Jan 1999
TL;DR: This paper identifies three main categories of ontology applications: 1) neutral authoring, 2) common access to information, and 3) indexing for search and identifies specific ontology application scenarios.
Abstract: In this paper, we draw attention to common goals and supporting technologies of several relatively distinct communities to facilitate closer cooperation and faster progress. The common thread is the need for sharing the meaning of terms in a given domain, which is a central role of ontologies. The different communities include ontology research groups, software developers and standards organizations. Using a broad definition of ‘ontology’, we show that much of the work being done by those communities may be viewed as practical applications of ontologies. To achieve this, we present a framework for understanding and classifying ontology applications. We identify three main categories of ontology applications: 1) neutral authoring, 2) common access to information, and 3) indexing for search. In each category, we identify specific ontology application scenarios. For each, we indicate their intended purpose, the role of the ontology, the supporting technologies, and who the principal actors are and what they do. We illuminate the similarities and differences between scenarios. (Proceedings of the IJCAI-99 Workshop on Ontologies and Problem-Solving Methods (KRR5), Stockholm, Sweden, August 2, 1999; V.R. Benjamins, B. Chandrasekaran, A. Gomez-Perez, N. Guarino, M. Uschold, eds. http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-18/)

555 citations


Journal ArticleDOI
TL;DR: This work presents the authors' experience in using Methontology and the Ontology Development Environment (ODE) to build a chemical ontology.
Abstract: To meet the challenge of building ontologies, we have developed Methontology, a framework for specifying ontologies at the knowledge level, and the Ontology Development Environment (ODE). Methontology provides guidelines for specifying ontologies at the knowledge level, as a specification of a conceptualization; ODE enables ontology construction, covering the entire life cycle, and automatically implements ontologies. We present our experience in using Methontology and ODE to build the chemical ontology.

523 citations


01 Jan 1999
TL;DR: ONION, a user-friendly toolkit, provides a sound foundation to simplify the work of domain experts, enables integration with public semantic dictionaries, like WordNet, and will derive ODMG-compliant mediators automatically.

337 citations


31 Jul 1999
TL;DR: This paper examines the potential for object-oriented standards to be used for ontology modelling, and in particular presents an ontology representation language based on a subset of the Unified Modeling Language together with its associated Object Constraint Language.
Abstract: Current tools and techniques for ontology development are based on the traditions of AI knowledge representation research. This research has led to popular formalisms such as KIF and KL-ONE style languages. However, these representations are little known outside AI research laboratories. In contrast, commercial interest has resulted in ideas from the object-oriented programming community maturing into industry standards and powerful tools for object-oriented analysis, design and implementation. These standards and tools have a wide and rapidly growing user community. This paper examines the potential for object-oriented standards to be used for ontology modelling, and in particular presents an ontology representation language based on a subset of the Unified Modeling Language together with its associated Object Constraint Language.

288 citations
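
A rough Python analogue (the paper itself works with UML class diagrams and OCL, not Python) of how a class with OCL-style invariants might look; the Employee example and its constraints are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Employee:
        name: str
        age: int
        manages: "list[Employee]" = field(default_factory=list)

        def check_invariants(self) -> None:
            # OCL-style: context Employee inv: self.age >= 18
            assert self.age >= 18, "invariant violated: age >= 18"
            # OCL-style: a manager is older than every employee they manage
            assert all(self.age > e.age for e in self.manages), \
                "invariant violated: manager must be older than reports"

    boss = Employee("Ada", 45, manages=[Employee("Bob", 30)])
    boss.check_invariants()  # both invariants hold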


Proceedings Article
01 Jan 1999
TL;DR: The different meanings of the word “integration” are discussed, the main characteristics of the three different processes are identified, and three words are proposed to distinguish among those meanings: integration, merge and use.
Abstract: The word integration has been used with different meanings in the ontology field. This article aims at clarifying the meaning of the word “integration” and presenting some of the relevant work done in integration. We identify three meanings of ontology “integration”: when building a new ontology reusing (by assembling, extending, specializing or adapting) other ontologies already available; when building an ontology by merging several ontologies into a single one that unifies all of them; when building an application using one or more ontologies. We discuss the different meanings of “integration”, identify the main characteristics of the three different processes and propose three words to distinguish among those meanings: integration, merge and use.

276 citations


Journal ArticleDOI
TL;DR: The present paper describes the mechanisms used for delivering the TaO and discusses the ontology's design and organization, which are crucial for maintaining the coherence of a large collection of concepts and their relationships.
Abstract: Motivation: An ontology of biological terminology provides a model of biological concepts that can be used to form a semantic framework for many data storage, retrieval and analysis tasks. Such a semantic framework could be used to underpin a range of important bioinformatics tasks, such as the querying of heterogeneous bioinformatics sources or the systematic annotation of experimental results. Results: This paper provides an overview of an ontology [the Transparent Access to Multiple Biological Information Sources (TAMBIS) ontology or TaO] that describes a wide range of bioinformatics concepts. The present paper describes the mechanisms used for delivering the ontology and discusses the ontology's design and organization, which are crucial for maintaining the coherence of a large collection of concepts and their relationships. Availability: The TAMBIS system, which uses a subset of the TaO described here, is accessible over the Web via http://img.cs.man.ac.uk/tambis (although in the first instance, we will use a password mechanism to limit the load on our server). The complete model is also available on the Web at the above URL. Contact: tambis@cs.man.ac.uk.

240 citations


01 Jan 1999
TL;DR: SMART, an algorithm that provides a semi-automatic approach to ontology merging and alignment, is developed; it is based on an extremely general knowledge model and can therefore be applied across various platforms.
Abstract: As researchers in the ontology-design field develop the content of a growing number of ontologies, the need for sharing and reusing this body of knowledge becomes increasingly critical. Aligning and merging existing ontologies, which is usually handled manually, often constitutes a large and tedious portion of the sharing process. We have developed SMART, an algorithm that provides a semi-automatic approach to ontology merging and alignment. SMART assists the ontology developer by performing certain tasks automatically and by guiding the developer to other tasks for which his intervention is required. SMART also determines possible inconsistencies in the state of the ontology that may result from the user’s actions, and suggests ways to remedy these inconsistencies. We define the set of basic operations that are performed during merging and alignment of ontologies, and determine the effects that invocation of each of these operations has on the process. SMART is based on an extremely general knowledge model and, therefore, can be applied across various platforms. 1. Merging Versus Alignment: In recent years, researchers have developed many ontologies. These different groups of researchers are now beginning to work with one another, so they must bring together these disparate source ontologies. Two approaches are possible: (1) merging the ontologies to create a single coherent ontology, or (2) aligning the ontologies by establishing links between them and allowing the aligned ontologies to reuse information from one another. As an illustration of the possible processes that establish correspondence between different ontologies, we consider the ontologies that natural languages embody. A researcher trying to find common ground between two such languages may perform one of several tasks. He may create a mapping between the two languages to be used in, say, a machine-translation system. Differences in the ontologies underlying the two languages often do not allow simple one-to-one correspondence, so a mapping must account for these differences. Alternatively, the Esperanto language (an international language that was constructed from words in different European languages) was created through merging: all the languages and their underlying ontologies were combined to create a single language. Aligning languages (ontologies) is a third task. Consider how we learn a new domain language that has an extensive vocabulary, such as the language of medicine. The new ontology (the vocabulary of the medical domain) needs to be linked in our minds to the knowledge that we already have (our existing ontology of the world). The creation of these links is alignment. We consider merging and alignment in our work that we describe in this paper. For simplicity, throughout the discussion, we assume that only two ontologies are being merged or aligned at any given time. Figure 1 illustrates the difference between ontology merging and alignment. In merging, a single ontology that is a merged version of the original ontologies is created. Often, the original ontologies cover similar or overlapping domains. For example, the Unified Medical Language System (Humphreys and Lindberg 1993; UMLS 1999) is a large merged ontology that reconciles differences in terminology from various machine-readable biomedical information sources. Another example is the project that was merging the top-most levels of two general commonsense-knowledge ontologies, SENSUS (Knight and Luk 1994) and Cyc (Lenat 1995), to create a single top-level ontology of world knowledge (Hovy 1997).

235 citations
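
The abstract does not spell out SMART's matching measures, but the flavour of its semi-automatic support can be sketched: propose merge/alignment candidates by lexical similarity of concept names and leave the decision to the developer. The similarity measure and the two toy ontologies below are illustrative only.

    from difflib import SequenceMatcher

    def suggest_candidates(onto_a, onto_b, threshold=0.8):
        """Yield (concept_a, concept_b, score) pairs for the user to review."""
        for a in onto_a:
            for b in onto_b:
                score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
                if score >= threshold:
                    yield a, b, score

    anatomy = ["Heart", "BloodVessel", "Artery"]
    medicine = ["heart", "blood_vessel", "Vein"]
    for a, b, s in suggest_candidates(anatomy, medicine):
        print(f"merge candidate: {a} ~ {b} ({s:.2f})")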


Book ChapterDOI
TL;DR: A framework for ontology-based geographic data set integration, an ontology being a collection of shared concepts, is explored, formalized in the Prolog language, illustrated with a fictitious example, and tested on a practical example.
Abstract: In order to develop a system to propagate updates we investigate the semantic and spatial relationships between independently produced geographic data sets of the same region (data set integration). The goal of this system is to reduce operator intervention in update operations between corresponding (semantically similar) geographic object instances. Crucial for this reduction is certainty about the semantic similarity of different object representations. In this paper we explore a framework for ontology-based geographic data set integration, an ontology being a collection of shared concepts. Components of this formal approach are an ontology for topographic mapping (a domain ontology), an ontology for every geographic data set involved (the application ontologies), and abstraction rules (or capture criteria). Abstraction rules define at the class level the relationships between domain ontology and application ontology. Using these relationships, it is possible to locate semantic similarity at the object instance level with methods from computational geometry (like overlay operations). The components of the framework are formalized in the Prolog language, illustrated with a fictitious example, and tested on a practical example.

111 citations
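
The paper formalizes its abstraction rules in Prolog; the following is a loose Python rendering of the idea, with invented class names: each rule maps an application-ontology class, via a condition on the object instance, to a domain-ontology class.

    # (application class, predicate on the instance, domain class)
    ABSTRACTION_RULES = [
        ("road_axis",  lambda o: o.get("paved", False),  "Road"),
        ("water_area", lambda o: o["area_m2"] > 10_000,  "Lake"),
        ("water_area", lambda o: o["area_m2"] <= 10_000, "Pond"),
    ]

    def classify(app_class, instance):
        """Return the domain-ontology class of an application object, if any."""
        for cls, cond, domain_cls in ABSTRACTION_RULES:
            if cls == app_class and cond(instance):
                return domain_cls
        return None

    print(classify("water_area", {"area_m2": 25_000}))  # -> Lake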


01 Jan 1999
TL;DR: SMART, an algorithm that provides a semi-automatic approach to ontology merging and alignment, is developed; it is based on an extremely general knowledge model and can therefore be applied across various platforms.
Abstract: As researchers in the ontology-design field develop the content of a growing number of ontologies, the need for sharing and reusing this body of knowledge becomes increasingly critical. Aligning and merging existing ontologies, which is usually handled manually, often constitutes a large and tedious portion of the sharing process. We have developed SMART, an algorithm that provides a semi-automatic approach to ontology merging and alignment. SMART assists the ontology developer by performing certain tasks automatically and by guiding the developer to other tasks for which his intervention is required. SMART also determines possible inconsistencies in the state of the ontology that may result from the user’s actions, and suggests ways to remedy these inconsistencies. We define the set of basic operations that are performed during merging and alignment of ontologies, and determine the effects that invocation of each of these operations has on the process. SMART is based on an extremely general knowledge model and, therefore, can be applied across various platforms. 1. Merging Versus Alignment: In recent years, researchers have developed many ontologies. These different groups of researchers are now beginning to work with one another, so they must bring together these disparate source ontologies. Two approaches are possible: (1) merging the ontologies to create a single coherent ontology, or (2) aligning the ontologies by establishing links between them and allowing them to reuse information from one another. As an illustration of the possible processes that establish correspondence between different ontologies, we consider the ontologies that natural languages embody. A researcher trying to find common ground between two such languages may perform one of several tasks. He may create a mapping between the two languages to be used in, say, a machine-translation system. Differences in the ontologies underlying the two languages often do not allow simple one-to-one correspondence, so a mapping must account for these differences. Alternatively, the Esperanto language (an international language that was constructed from words in different European languages) was created through merging: all the languages and their underlying ontologies were combined to create a single language. Aligning languages (ontologies) is a third task. Consider how we learn a new domain language that has an extensive vocabulary, such as the language of medicine. The new ontology (the vocabulary of the medical domain) needs to be linked in our minds to the knowledge that we already have (our existing ontology of the world). The creation of these links is alignment. We consider merging and alignment in this paper. For simplicity, throughout the discussion, we assume that only two ontologies are being merged or aligned at any given time. Figure 1 illustrates the difference between ontology merging and alignment. In merging, a single ontology that is a merged version of the original ontologies is created. Often, the original ontologies cover similar or overlapping domains. For example, the Unified Medical Language System (Humphreys and Lindberg 1993; UMLS 1999) is a large merged ontology that reconciles differences in terminology from various machine-readable biomedical information sources. Another example is the project that was merging the top-most levels of two general commonsense-knowledge ontologies—SENSUS (Knight and Luk 1994) and Cyc (Lenat 1995)—to create a single top-level ontology of world knowledge (Hovy 1997).
In alignment, the two original ontologies persist, with links established between them. Alignment usually is performed when the ontologies cover domains that are complementary to each other. For example, part of the High Performance Knowledge Base (HPKB) program sponsored by the Defense Advanced Research Projects Agency (DARPA) (Cohen et al. 1999) is structured around one central ontology, the Cyc knowledge base (Lenat 1995). Several teams of researchers develop ontologies in the domain of military tactics to cover the types of military units and weapons, tasks the units can perform, constraints on the units and tasks, and so on. These developers then align these more domain-specific ontologies to Cyc by establishing links into Cyc's upper- and middle-level ontologies. The domain-specific ontologies do not become part of the Cyc knowledge base; rather, they are separate ontologies that include Cyc and use its top-level distinctions. (Most knowledge representation systems would require one ontology to be included in the other for the links to be established.)

96 citations


Book ChapterDOI
26 May 1999
TL;DR: The paper discusses how the ontological reengineering process has been applied to the Standard-Units ontology, which is included in a Chemical-Elements ontology; these will in turn be included in Monatomic-Ions and Environmental-Pollutants ontologies.
Abstract: This paper presents the concept of Ontological Reengineering as the process of retrieving and transforming a conceptual model of an existing and implemented ontology into a new, more correct and more complete conceptual model, which is reimplemented. Three activities have been identified in this process: reverse engineering, restructuring and forward engineering. The aim of Reverse Engineering is to output a possible conceptual model on the basis of the code in which the ontology is implemented. The goal of Restructuring is to reorganize this initial conceptual model into a new conceptual model, which is built bearing in mind the use of the restructured ontology by the ontology/application that reuses it. Finally, the objective of Forward Engineering is to output a new implementation of the ontology. The paper also discusses how the ontological reengineering process has been applied to the Standard-Units ontology [18], which is included in a Chemical-Elements [12] ontology. These two ontologies will be included in Monatomic-Ions and Environmental-Pollutants ontologies.

01 Jan 1999
TL;DR: This work presents SHOE, a web-based knowledge representation language that supports multiple versions of ontologies, and discusses the features of SHOE that address ontology versioning, the effects of ontology revision on SHOE web pages, and methods for implementing ontology integration using SHOE’s extension and version mechanisms.
Abstract: We discuss the problems associated with versioning ontologies in distributed environments. This is an important issue because ontologies can be of great use in structuring and querying internet information, but many of the Internet’s characteristics, such as distributed ownership, rapid evolution, and heterogeneity, make ontology management difficult. We present SHOE, a web-based knowledge representation language that supports multiple versions of ontologies. We then discuss the features of SHOE that address ontology versioning, the effects of ontology revision on SHOE web pages, and methods for implementing ontology integration using SHOE’s extension and version mechanisms.

01 Jan 1999
TL;DR: A major conclusion is that emphasis in educational research has shifted from skill acquisition to obtaining insight and understanding, and types of knowledge distinguished in core ontologies can make up categories that provide similar decompositions to task analyses, but apparently in a more ‘natural’ way.
Abstract: Constructing ontologies in educational design is not really new. The specification of educational goals is what is nowadays called an ontology. Although content has always been considered a crucial factor in education, the emphasis in educational research has been on form, as is also pointed out by [Mizoguchi et al., 1997]. Ontological engineering for constructing educational systems may look like putting the same old wine in new barrels, but we should be aware that these new barrels may give a new flavour to this wine. As an example we discuss a core ontology about law, used in the development of educational systems. A core ontology mediates between a top ontology, which reflects our common sense understanding of the world, and an ontology that defines the concepts and structures in a domain. A core ontology tells us what a domain is about. The core ontology discussed is FOLaw [Valente, 1995], a functional ontology of law, as applied in PROSA, a system that trains students to solve problems (cases) in administrative law. A major conclusion is that emphasis has shifted from skill acquisition to obtaining insight and understanding. Another benefit of this ‘ontological view’ is that types of knowledge distinguished in core ontologies can make up categories that provide similar decompositions to task analyses, but apparently in a more ‘natural’ way.

Journal ArticleDOI
TL;DR: This article shows how existing knowledge base verification techniques can be applied to verify the commitment of a knowledge-based system to a given ontology, by incorporating translation into the verification procedure.
Abstract: An ontology defines the terminology of a domain of knowledge: the concepts that constitute the domain, and the relationships between those concepts. In order for two or more knowledge-based systems to interoperate—for example, by exchanging knowledge, or collaborating as agents in a co-operative problem-solving process—they must commit to the definitions in a common ontology. Verifying such commitment is therefore a prerequisite for reliable knowledge-based system interoperability. This article shows how existing knowledge base verification techniques can be applied to verify the commitment of a knowledge-based system to a given ontology. The method takes account of the fact that an ontology will typically be expressed using a different knowledge representation language to the knowledge base, by incorporating translation into the verification procedure. While the representation languages used are specific to a particular project, their features are general and the method has broad applicability.
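
The article's representation languages are project-specific, but the core check can be sketched abstractly: translate each term the knowledge base uses and verify it has a definition in the ontology. Everything below (the names, the toy translation step) is invented for illustration.

    def verify_commitment(kb_terms, ontology_terms, translate):
        """Report KB terms that, after translation, lack an ontology definition."""
        return sorted(t for t in kb_terms if translate(t) not in ontology_terms)

    ontology = {"person", "employs", "organisation"}
    kb = {"Person", "Employs", "Organization"}

    # Toy translation step: normalise case and spelling between the languages.
    normalise = lambda t: t.lower().replace("organization", "organisation")

    print(verify_commitment(kb, ontology, normalise))  # -> [] (commitment holds)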

Proceedings ArticleDOI
19 Oct 1999
TL;DR: This paper considers an ontology to be composed of four elements: classes, relations, functions and instances, and shows that these four elements can be extracted from the code of the concerned system using existing software re-engineering tools.
Abstract: Ontology has been investigated in the context of knowledge sharing among heterogeneous and disparate database and knowledge base systems. Our recent study and experiments suggest that ontologies also have great potential for legacy software understanding and re-engineering. In this paper we consider an ontology to be composed of four elements: classes, relations, functions and instances. We show that these four elements, forming an ontology for a legacy system, can be extracted from the code of the concerned system using existing software re-engineering tools. We then present our vision of how the obtained ontology can be applied to understanding, and eventually better re-engineering, the legacy systems.
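
As a minimal illustration (not the paper's tooling, which targets real legacy systems), two of the four elements, classes and functions, plus is-a relations can be pulled out of source code; here Python source is parsed with the standard ast module.

    import ast
    import textwrap

    # Toy stand-in for legacy source code.
    SRC = textwrap.dedent("""
        class Account: ...
        class SavingsAccount(Account): ...
        def open_account(owner): ...
    """)

    tree = ast.parse(SRC)
    classes, functions, relations = [], [], []
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            classes.append(node.name)
            relations += [(node.name, "is-a", base.id)
                          for base in node.bases if isinstance(base, ast.Name)]
        elif isinstance(node, ast.FunctionDef):
            functions.append(node.name)

    print(classes)    # ['Account', 'SavingsAccount']
    print(relations)  # [('SavingsAccount', 'is-a', 'Account')]
    print(functions)  # ['open_account']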

Journal ArticleDOI
TL;DR: An approach to task-driven ontology design based on information discovery from database schemas is introduced, using techniques for semi-automatically discovering terms and relationships in the information space that denote concepts, their properties and links.
Abstract: In this paper, we introduce an approach to task-driven ontology design which is based on information discovery from database schemas. Techniques for semi-automatically discovering terms and relationships used in the information space, denoting concepts, their properties and links are proposed, which are applied in two stages. At the first stage, the focus is on the discovery of heterogeneity/ambiguity of data representations in different schemas. For this purpose, schema elements are compared according to defined comparison features and similarity coefficients are evaluated. This stage produces a set of candidates for unification into ontology concepts. At the second stage, decisions are made on which candidates to unify into concepts and on how to relate concepts by semantic links. Ontology concepts and links can be accessed according to different perspectives, so that the ontology can serve different purposes, such as, providing a search space for powerful mechanisms for concept location, setting a basis for query formulation and processing, and establishing a reference for recognizing terminological relationships between elements in different schemas.
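
The paper's comparison features and similarity coefficients are richer than this sketch, but one plausible first-stage coefficient is a Jaccard score over the attribute-name sets of schema elements, with a threshold for proposing unification candidates. The schemas below are invented.

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    schema1 = {"Customer": {"name", "address", "phone"}}
    schema2 = {"Client":   {"name", "address", "email"}}

    for e1, attrs1 in schema1.items():
        for e2, attrs2 in schema2.items():
            sim = jaccard(attrs1, attrs2)
            if sim >= 0.5:  # candidates for unification into one concept
                print(f"{e1} ~ {e2}: similarity {sim:.2f}")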

01 Jan 1999
TL;DR: It is argued nevertheless that large public lexicons should be simple, i.e. their semantics become implicit by agreement among "all" users, and ideally completely application independent; the lexicon or thesaurus then becomes the semantic domain for semantics.
Abstract: The availability of computerized lexicons, thesauri and "ontologies" (we discuss this terminology) makes it possible to formalize semantic aspects of information as used in the analysis, design and implementation of information systems (and in fact general software systems) in new and useful ways. We survey a selection of relevant ongoing work, discuss different issues of semantics that arise, and characterize the resulting computerized information systems, called CLASS for Computer-Lexicon Assisted Software Systems. The need for a "global" common ontology (lexicon, thesaurus) is conjectured, and some desirable properties are proposed. We give a few examples of such CLASS systems and indicate avenues of current and future research in this area. In particular, certain problems can be identified with well-known existing lexicons such as CYC and WordNet, as well as with sophisticated representation and inference engines such as KIF or SHOE. We argue nevertheless that large public lexicons should be simple, i.e. their semantics become implicit by agreement among "all" users, and ideally completely application independent. In short, the lexicon or thesaurus then becomes the semantic domain for semantics.

Journal ArticleDOI
01 Sep 1999
TL;DR: This work identifies five types of problem that may be encountered in moving from an informal description of a domain to a formal representation of hierarchical knowledge in an ontology.
Abstract: Early ontological engineering methodologies have necessarily focussed on the management of the whole ontology development process. There is a corresponding need to provide advice to the ontological engineer on the finer details of ontology construction. Here, we specifically address the representation of hierarchical relationships in an ontology. We identify five types of problem that may be encountered in moving from an informal description of a domain to a formal representation of hierarchical knowledge. Each problem type is discussed from the perspective of knowledge sharing and examples from biological ontologies are used to illustrate each type.


01 Jan 1999
TL;DR: This document discusses issues relating to ontology management expected to be encountered during this just-commenced project and the planned approach to dealing with these issues.
Abstract: An effort to prepare an ontology for geospatial information is described. This effort concentrates in the first instance on marine navigational information. This document discusses issues relating to ontology management expected to be encountered during this just-commenced project and our planned approach to dealing with these issues.

Journal Article
TL;DR: This work encodes the first heuristic as a density function and uses probabilistic models for the second and third, and argues that these heuristics and computational models correctly determine the suitability of a Web document for a given ontology.
Abstract: Ontology based data extraction from multi-record Web documents works well, but only if the ontology is suitable for the Web document. How do we know whether the ontology is suitable? To resolve this question, we present an approach based on three heuristics: density, schema, and grouping. We encode the first heuristic as a density function and use probabilistic models for the second and third. We argue that these heuristics and our computational models for these heuristics correctly determine the suitability of a Web document for a given ontology.
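
A guess at the flavour of the density heuristic (the paper's actual density function is more elaborate): the fraction of a document's words that the ontology's lexicon recognises, with a high density suggesting the document suits the ontology. The terms and document are invented.

    def density(document: str, ontology_terms: set) -> float:
        words = document.lower().split()
        if not words:
            return 0.0
        hits = sum(1 for w in words if w.strip(".,;:") in ontology_terms)
        return hits / len(words)

    car_ads_ontology = {"car", "mileage", "price", "year", "model"}
    doc = "1994 model, low mileage, price negotiable."
    print(f"{density(doc, car_ads_ontology):.2f}")  # 0.50: relatively dense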

Book ChapterDOI
01 Jan 1999
TL;DR: A generic ontology to support N-dimensional spatial reasoning applications is described; it is intended to support both quantitative and qualitative approaches and is expressed using set notation.
Abstract: In this paper we describe a generic ontology to support N-dimensional spatial reasoning applications. The ontology is intended to support both quantitative and qualitative approaches and is expressed using set notation. Using the ontology, spatial domains of discourse, spatial objects and their attributes, and the relationships that can link spatial objects can be expressed in terms of sets, and sets of sets. The ontology has been developed through a series of application studies. For each study a directed application ontology was first developed, which was then merged into the generic ontology. Application areas that have been investigated include: Geographic Information Systems (GIS), noise pollution monitoring, environmental impact assessment, shape fitting, timetabling and scheduling, and AI problems such as the N-queens problem.
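
In the spirit of the set notation described above (objects, attributes and relations as sets, and sets of sets), a toy sketch with invented objects: spatial objects as sets of occupied grid cells, and qualitative relations derived from set operations.

    park  = frozenset({(0, 0), (0, 1), (1, 0), (1, 1)})
    pond  = frozenset({(1, 1)})
    house = frozenset({(5, 5)})

    def contains(a, b):   # b lies entirely within a
        return b <= a

    def disjoint(a, b):   # a and b share no cells
        return not (a & b)

    print(contains(park, pond))   # True
    print(disjoint(park, house))  # True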

Proceedings ArticleDOI
31 Aug 1999
TL;DR: A new information retriever on the Web is proposed which automatically classifies collected documents and retrieves multi-lingual information (e.g., Japanese and/or English) by the mechanism of ontology processing.
Abstract: We propose a new information retriever on the Web which automatically classifies collected documents and retrieves multi-lingual information (e.g., Japanese and/or English). This is attained by the mechanism of ontology processing. We set up a multi-lingual ontology as an index dictionary. The ontology manages the relations of keywords and the weights of keywords for domain categories to classify the documents. The system has the following features: users can use any keywords expressed in their native language to search for the Web documents, even if they contain multi-lingual information; and the documents can be classified into specific domain categories by tuning up the keyword weights.
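
An illustrative reconstruction (not the authors' code) of classification by keyword weights per domain category: the ontology relates keywords, across languages, to weighted categories, and a document is assigned the highest-scoring category. Keywords, categories and weights are invented; the Japanese term is romanised.

    ONTOLOGY = {
        # keyword -> {category: weight}
        "ontology":   {"AI": 0.9, "Philosophy": 0.4},
        "jinkochino": {"AI": 0.8},               # Japanese: artificial intelligence
        "network":    {"AI": 0.2, "Telecom": 0.7},
    }

    def classify(doc_keywords):
        scores = {}
        for kw in doc_keywords:
            for cat, weight in ONTOLOGY.get(kw, {}).items():
                scores[cat] = scores.get(cat, 0.0) + weight
        return max(scores, key=scores.get) if scores else None

    print(classify(["ontology", "jinkochino"]))  # -> 'AI'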


Book ChapterDOI
02 Jun 1999
TL;DR: A living set of features that allow us to characterize ontologies from the user point of view, with the same logical organization, is presented, together with a living domain ontology about ontologies (called the Reference Ontology) that gathers, describes and has links to existing ontologies.
Abstract: Knowledge reuse by means of ontologies now faces three important problems: (1) there are no standardized identifying features that characterize ontologies from the user point of view; (2) there are no web sites using the same logical organization, presenting relevant information about ontologies; and (3) the search for appropriate ontologies is hard, time-consuming and usually fruitless. To solve the above problems, we present: (1) a living set of features that allow us to characterize ontologies from the user point of view and have the same logical organization; (2) a living domain ontology about ontologies (called Reference Ontology) that gathers, describes and has links to existing ontologies; and (3) (ONTO)2 Agent, which uses the Reference Ontology as a source of its knowledge and retrieves descriptions of ontologies that satisfy a given set of constraints. (ONTO)2 Agent is available at http://delicias.dia.fi.upm.es/REFERENCE ONTOLOGY/

01 Jan 1999
TL;DR: This paper discusses the implementation of a system that provides retail sales consultants with fast, easy access to reliable product information that closely matches the customer’s desired preferences.
Abstract: The purchase of a very expensive, complex product or service such as an international holiday requires a great deal of information and reliable expert advice. The risks involved in such a purchase are substantial. This paper discusses the implementation of a system that provides retail sales consultants with fast, easy access to reliable product information that closely matches the customer’s desired preferences. An architecture for a World Wide Web based semantic matching system using software agents, XML and formal ontologies is proposed.

Proceedings ArticleDOI
N. Ono, Ryosuke Kainuma, Hiroshi Ohtani, Kiyohito Ishida, M. Kato
01 Jul 1999
TL;DR: Through the description of the analysis and coding of phase diagrams, the nature of the task of ontology construction is illustrated and an object-oriented design for phase diagram databases has been made.
Abstract: Due to the close similarity between the common forms of ontology specification and classes in object-oriented systems, a set of object-oriented classes in a domain may provide the skeleton of the ontology of the domain. An object-oriented design for phase diagram databases has been made with special attention to this. Through the description of the analysis and coding of phase diagrams, the nature of the task of ontology construction is illustrated.

01 Jan 1999
TL;DR: The approach to ontology development that is part of the Disciple-LAS shell and methodology for building knowledge-based agents is presented; it is used to develop an ontology for an agent that critiques military courses of action.
Abstract: This paper presents the approach to ontology development that is part of the Disciple-LAS shell and methodology for the building of knowledge-based agents. A characteristic feature of this approach is that a detailed specification of the ontology to be developed results from a conceptual modeling of the application domain of the knowledge-based agent to be built. This specification guides the process of building the ontology, which consists of importing knowledge from external knowledge servers, and of using the ontology building tools of Disciple. Knowledge import and reuse are facilitated by the fact that the representation of the ontology is based on the OKBC knowledge model. This approach is used to develop an ontology for an agent that critiques military courses of action.

Book ChapterDOI
29 Sep 1999
TL;DR: A medical ontology is proposed that is specific to the ophthalmology domain but can be reused by other medical domains with the same representation requirements; it distinguishes between core and peripheral concepts.
Abstract: Ontologies provide an explicit and shared specification of the domain knowledge in some field. With the objective of facilitating the integration of case-based reasoning, rule-based reasoning and patient databases, we propose a medical ontology that is inherent to the ophthalmology domain but can be reused by other medical domains with the same representation requirements. In order to achieve this degree of reuse, the ontology was designed distinguishing between core and peripheral concepts, domain-specific and method-specific concepts, and task(method)-relevant and task(method)-specific concepts.

Proceedings Article
18 Jul 1999
TL;DR: The Disciple approach for building a knowledge based agent relies on importing ontologies from existing repositories of knowledge, and on teaching the agent how to perform various tasks in a way that resembles how an expert would teach a human apprentice when solving problems in cooperation.
Abstract: We are currently witnessing a trend toward an architectural separation of a knowledge base (KB) into an ontology and a set of rules. The ontology is a description of the concepts and relationships from the application domain; the rules are problem solving procedures expressed with the terms from the ontology. Moreover, terminological standardization taking place in more and more domains has led to the development of domain ontologies. These two developments raise the prospect of reusing existing ontologies when building a new knowledge based system. For instance, the Disciple approach for building a knowledge based agent relies on importing ontologies from existing repositories of knowledge, and on teaching the agent how to perform various tasks, in a way that resembles how an expert would teach a human apprentice when solving problems in cooperation (Tecuci, 1998; Tecuci et al. 1999). In Disciple, the ontology serves as the generalization hierarchy for learning, an example being basically generalized to a rule by replacing its objects with more general objects from the ontology. However, the learning works well only if the ontology contains all the concepts needed to represent the application domain. We make the assumption that an ontology built from previously developed KBs will contain useful concepts, but it is incomplete and will not contain the more subtle distinctions needed for competent and efficient problem solving in a particular domain. These missing concepts will manifest themselves as exceptions to the learned problem solving rules. A negative exception of the rule is a negative example that is covered by the rule, where any specialization (within the current ontology) of the rule that would uncover the exception would also result in uncovering positive examples of the rule. Similarly, a positive exception of the rule is a positive example that is not covered by the rule, where any generalization of the rule that would cover the exception would also result in covering negative examples of the rule. We are enhancing Disciple by developing a mixed-initiative multistrategy approach to KB revision that will result in an extended and domain-adapted ontology, as well as a set of rules with fewer (if any) exceptions. We are developing two classes of KB revision methods, a class of local methods and a class of global methods. The local methods focus on one rule with its exceptions at a time, in conjunction with the current ontology. Some of the local methods use analogical transfer of discriminating features from some objects to other objects in the positive examples of a rule, by considering the similarities between the positive examples and their dissimilarities with the negative exceptions. Other local methods use explanation-based techniques, and similarities between the current rule and other rules, to discover or elicit from the expert discriminating features in the form of explanations of why a negative exception of a rule is not a correct problem solving episode. These local methods work well when the ontology already contains the definitions of the discriminating features, but the descriptions of some of the objects from the ontology are incomplete with respect to those features. The methods perform a local extension of the ontology that leads to a refinement of a rule and a removal of some of its exceptions. Other local methods do not immediately extend the ontology, but suggest characterizations of new concepts that would remove the exceptions.
An example of such a characterization is the following: a concept that covers the maximum number of objects from a set of objects and the minimum number of objects from another set of objects. The global methods analyze the alternative concept characterizations suggested by the local methods and attempt to discover a reduced set of concept characterizations that would remove exceptions from a set of rules. In this process, they use various specialization and generalization operators that combine the concept characterizations into a reduced set. They then interact with the domain expert to identify which of the most useful characterizations correspond to meaningful concepts or features in the application domain. The global methods lead to the extension of the ontology with definitions of new objects and features. A significant part of our research effort is also devoted to the evaluation of the developed approach by measuring the effectiveness of the exception-driven discovery and learning with respect to the number of exceptions removed, the impact of the discovered knowledge on the remaining rules, the knowledge acquisition effort required from the domain expert, and the effect of the discovered knowledge on the agent's performance on a set of tasks. In conclusion, we are developing a suite of methods that continuously extend the ontology and revise the rules in the KB through discovery, learning and an interaction with a domain expert in order to achieve competent and efficient problem solving in a particular domain.