
Showing papers on "Upper ontology" published in 2004


01 Jan 2004
TL;DR: This document provides an introduction to OWL, the Web Ontology Language, by informally describing the features of each of its sublanguages.
Abstract: The OWL Web Ontology Language is designed for use by applications that need to process the content of information instead of just presenting information to humans. OWL facilitates greater machine interpretability of Web content than that supported by XML, RDF, and RDF Schema (RDF-S) by providing additional vocabulary along with a formal semantics. OWL has three increasingly expressive sublanguages: OWL Lite, OWL DL, and OWL Full. This document is written for readers who want a first impression of the capabilities of OWL. It provides an introduction to OWL by informally describing the features of each of the sublanguages of OWL. Some knowledge of RDF Schema is useful for understanding this document, but not essential. After this document, interested readers may turn to the OWL Guide for more detailed descriptions and extensive examples of the features of OWL. The normative formal definition of OWL can be found in the OWL Semantics and Abstract Syntax. Status of this document: This document has been reviewed by W3C Members and other interested parties, and it has been endorsed by the Director as a W3C Recommendation. W3C's role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment. This enhances the functionality and interoperability of the Web. This is one of six parts of the W3C Recommendation for OWL, the Web Ontology Language. It has been developed by the Web Ontology Working Group as part of the W3C Semantic Web Activity (Activity Statement, Group Charter) for publication on 10 February 2004. The design of OWL expressed in earlier versions of these documents has been widely reviewed and satisfies the Working Group's technical requirements. The Working Group has addressed all comments received, making changes as necessary. Changes to this document since the Proposed Recommendation version are detailed in the change log. Comments are welcome at public-webont-comments@w3.org (archive) and general discussion of related technology is welcome at www-rdf-logic@w3.org (archive). A list of implementations is available. The W3C maintains a list of any patent disclosures related to this work. This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.
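As a concrete taste of the vocabulary OWL adds on top of RDF Schema, here is a minimal Python sketch that builds a tiny ontology and serializes it as Turtle. It assumes the rdflib library; the example.org namespace and class names are purely illustrative, not taken from the Overview.

    # Minimal sketch of a tiny OWL ontology built with rdflib (assumed installed).
    # The example.org namespace and the class/property names are hypothetical.
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, RDFS, OWL

    EX = Namespace("http://example.org/ont#")

    g = Graph()
    g.bind("ex", EX)
    g.bind("owl", OWL)

    # Two OWL classes and a subclass relation.
    g.add((EX.Vehicle, RDF.type, OWL.Class))
    g.add((EX.Car, RDF.type, OWL.Class))
    g.add((EX.Car, RDFS.subClassOf, EX.Vehicle))

    # An object property with domain and range, as OWL Lite/DL already permits.
    g.add((EX.Person, RDF.type, OWL.Class))
    g.add((EX.hasOwner, RDF.type, OWL.ObjectProperty))
    g.add((EX.hasOwner, RDFS.domain, EX.Car))
    g.add((EX.hasOwner, RDFS.range, EX.Person))

    print(g.serialize(format="turtle"))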

4,147 citations



Journal ArticleDOI
01 Dec 2004
TL;DR: The goal of the paper is to provide a reader who may not be very familiar with ontology research with an introduction to the major themes in this research and with pointers to different research projects.
Abstract: Semantic integration is an active area of research in several disciplines, such as databases, information integration, and ontologies. This paper provides a brief survey of the approaches to semantic integration developed by researchers in the ontology community. We focus on the approaches that differentiate the ontology research from other related areas. The goal of the paper is to provide a reader who may not be very familiar with ontology research with an introduction to the major themes in this research and with pointers to different research projects. We discuss techniques for finding correspondences between ontologies, declarative ways of representing these correspondences, and the use of these correspondences in various semantic-integration tasks.

1,142 citations


Proceedings ArticleDOI
03 Sep 2004
TL;DR: How SOUPA can be extended and used to support the applications of CoBrA, a broker-centric agent architecture for building smart meeting rooms, and MoGATU, a peer-to-peer data management system for pervasive environments, is discussed.
Abstract: We describe a shared ontology called SOUPA - standard ontology for ubiquitous and pervasive applications. SOUPA is designed to model and support pervasive computing applications. This ontology is expressed using the Web Ontology Language OWL and includes modular component vocabularies to represent intelligent agents with associated beliefs, desires, and intentions, time, space, events, user profiles, actions, and policies for security and privacy. We discuss how SOUPA can be extended and used to support the applications of CoBrA, a broker-centric agent architecture for building smart meeting rooms, and MoGATU, a peer-to-peer data management system for pervasive environments.

660 citations


Journal ArticleDOI
TL;DR: It is argued that dynamic spatial ontology must combine these two distinct types of inventory of the entities and relationships in reality, and characterizations of spatiotemporal reasoning in the light of the interconnections between them are provided.
Abstract: (2004). SNAP and SPAN: Towards Dynamic Spatial Ontology. Spatial Cognition & Computation: Vol. 4, No. 1, pp. 69-104.

629 citations


Journal ArticleDOI
TL;DR: The many similarities between database-schema evolution and ontology evolution will allow us to build on the extensive research in schema evolution, but there are also important differences between database schemas and ontologies.
Abstract: As ontology development becomes a more ubiquitous and collaborative process, ontology versioning and evolution becomes an important area of ontology research. The many similarities between database-schema evolution and ontology evolution will allow us to build on the extensive research in schema evolution. However, there are also important differences between database schemas and ontologies. The differences stem from different usage paradigms, the presence of explicit semantics and different knowledge models. A lot of problems that existed only in theory in database research come to the forefront as practical problems in ontology evolution. These differences have important implications for the development of ontology-evolution frameworks: The traditional distinction between versioning and evolution is not applicable to ontologies. There are several dimensions along which compatibility between versions must be considered. The set of change operations for ontologies is different. We must develop automatic techniques for finding similarities and differences between versions.
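To make the idea of finding differences between ontology versions concrete, here is a small hypothetical Python sketch (not the authors' tool) that compares two versions of a class hierarchy, each given as a mapping from class name to its set of superclasses, and reports added, removed, and reparented classes.

    # Hypothetical structural diff between two ontology versions,
    # each given as {class_name: set_of_superclass_names}.
    def diff_versions(old, new):
        added = set(new) - set(old)
        removed = set(old) - set(new)
        reparented = {c for c in set(old) & set(new) if old[c] != new[c]}
        return {"added": added, "removed": removed, "reparented": reparented}

    v1 = {"Vehicle": set(), "Car": {"Vehicle"}, "Truck": {"Vehicle"}}
    v2 = {"Vehicle": set(), "Car": {"Vehicle"}, "Truck": {"HeavyVehicle"},
          "HeavyVehicle": {"Vehicle"}}

    print(diff_versions(v1, v2))
    # {'added': {'HeavyVehicle'}, 'removed': set(), 'reparented': {'Truck'}}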

566 citations


Book ChapterDOI
01 Jan 2004
TL;DR: This chapter studies ontology matching: the problem of finding the semantic mappings between two given ontologies, which lies at the heart of numerous information processing applications.
Abstract: This chapter studies ontology matching: the problem of finding the semantic mappings between two given ontologies. This problem lies at the heart of numerous information processing applications. Virtually any application that involves multiple ontologies must establish semantic mappings among them, to ensure interoperability. Examples of such applications arise in myriad domains, including e-commerce, knowledge management, e-learning, information extraction, bio-informatics, web services, and tourism (see Part D of this book on ontology applications).
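As a toy illustration of what a semantic mapping looks like in its simplest, label-based form (the chapter itself surveys far richer techniques), the hypothetical sketch below pairs classes from two small made-up ontologies by string similarity.

    # Toy label-based matcher: pairs each class of ontology A with the most
    # similar class label of ontology B. Real matchers also exploit structure,
    # instances, and background knowledge.
    from difflib import SequenceMatcher

    def best_matches(labels_a, labels_b, threshold=0.7):
        mappings = []
        for a in labels_a:
            scored = [(SequenceMatcher(None, a.lower(), b.lower()).ratio(), b)
                      for b in labels_b]
            score, b = max(scored)
            if score >= threshold:
                mappings.append((a, b, round(score, 2)))
        return mappings

    onto_a = ["Book", "Author", "Publisher"]
    onto_b = ["Monograph", "Writer", "Publisher", "Authors"]
    print(best_matches(onto_a, onto_b))
    # e.g. [('Author', 'Authors', 0.92), ('Publisher', 'Publisher', 1.0)]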

531 citations


Book ChapterDOI
TL;DR: An approach to integrate various similarity methods is presented, which determines similarity through rules which have been encoded by ontology experts and are then combined for one overall result.
Abstract: Ontology mapping is important when working with more than one ontology. Typically, similarity considerations are the basis for this. In this paper an approach to integrate various similarity methods is presented. In brief, we determine similarity through rules which have been encoded by ontology experts. These rules are then combined for one overall result. Several small boosting actions are added. All this is thoroughly evaluated with very promising results.
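A hypothetical sketch of the general idea of combining expert-encoded similarity rules into one overall score follows; the rules and weights are illustrative, not the paper's actual rule set.

    # Illustrative combination of several similarity "rules" into one score.
    # Each rule maps a pair of entity descriptions to a value in [0, 1].
    from difflib import SequenceMatcher

    def label_rule(e1, e2):
        return SequenceMatcher(None, e1["label"].lower(), e2["label"].lower()).ratio()

    def superclass_rule(e1, e2):
        a, b = set(e1["superclasses"]), set(e2["superclasses"])
        return len(a & b) / len(a | b) if a | b else 0.0

    def property_rule(e1, e2):
        a, b = set(e1["properties"]), set(e2["properties"])
        return len(a & b) / len(a | b) if a | b else 0.0

    RULES = [(label_rule, 0.5), (superclass_rule, 0.3), (property_rule, 0.2)]

    def combined_similarity(e1, e2):
        return sum(weight * rule(e1, e2) for rule, weight in RULES)

    car = {"label": "Car", "superclasses": ["Vehicle"], "properties": ["hasOwner"]}
    auto = {"label": "Automobile", "superclasses": ["Vehicle"],
            "properties": ["hasOwner", "hasEngine"]}
    print(round(combined_similarity(car, auto), 2))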

473 citations


Proceedings Article
22 Aug 2004
TL;DR: A universal measure for comparing the entities of two ontologies is defined, based on a simple and homogeneous comparison principle; one-to-many relationships and circularity in entity descriptions constitute the key difficulties.
Abstract: Interoperability of heterogeneous systems on the Web will be admittedly achieved through an agreement between the underlying ontologies. However, the richer the ontology description language, the more complex the agreement process, and hence the more sophisticated the required tools. Among current ontology alignment paradigms, similarity-based approaches are both powerful and flexible enough for aligning ontologies expressed in languages like OWL. We define a universal measure for comparing the entities of two ontologies that is based on a simple and homogeneous comparison principle: Similarity depends on the type of entity and involves all the features that make its definition (such as superclasses, properties, instances, etc.). One-to-many relationships and circularity in entity descriptions constitute the key difficulties in this context: These are dealt with through local matching of entity sets and iterative computation of recursively dependent similarities, respectively.
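The circularity mentioned above arises because the similarity of two entities depends on the similarity of their neighbours, which in turn depends on the entities themselves. One common way to handle this, sketched hypothetically below, is to iterate the computation toward a fixed point; this is a deliberately simplified caricature of such measures, not the paper's exact definition.

    # Simplified fixed-point computation of recursively dependent similarities:
    # sim(a, b) = alpha * label_sim(a, b) + (1 - alpha) * sim(parent(a), parent(b))
    from difflib import SequenceMatcher

    def label_sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def iterative_similarity(parents_a, parents_b, alpha=0.6, iterations=10):
        nodes_a, nodes_b = list(parents_a), list(parents_b)
        sim = {(a, b): label_sim(a, b) for a in nodes_a for b in nodes_b}
        for _ in range(iterations):
            new_sim = {}
            for a in nodes_a:
                for b in nodes_b:
                    pa, pb = parents_a[a], parents_b[b]
                    parent_part = sim[(pa, pb)] if pa and pb else 0.0
                    new_sim[(a, b)] = alpha * label_sim(a, b) + (1 - alpha) * parent_part
            sim = new_sim
        return sim

    # Tiny hypothetical hierarchies: child -> parent (None for roots).
    onto_a = {"Thing": None, "Vehicle": "Thing", "Car": "Vehicle"}
    onto_b = {"Entity": None, "Conveyance": "Entity", "Automobile": "Conveyance"}
    print(round(iterative_similarity(onto_a, onto_b)[("Car", "Automobile")], 3))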

439 citations


Proceedings Article
01 May 2004
TL;DR: It is proposed in this paper that one approach to ontology evaluation should be corpus or data driven, because a corpus is the most accessible form of knowledge and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge.
Abstract: The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge. We consider a number of methods for measuring this ‘fit’ and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology.
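One very simple, purely lexical instance of such a 'fit' measure is the fraction of the ontology's concept labels that occur in the domain corpus at all. The sketch below is a hypothetical illustration and is far cruder than the structural and probabilistic measures the paper proposes; the labels and corpus are made up.

    # Crude lexical "fit": what fraction of the ontology's concept labels
    # occur in the domain corpus? Labels and corpus are hypothetical.
    import re

    def lexical_fit(concept_labels, corpus_text):
        tokens = set(re.findall(r"[a-z]+", corpus_text.lower()))
        covered = [c for c in concept_labels if c.lower() in tokens]
        return len(covered) / len(concept_labels), covered

    labels = ["neuron", "synapse", "axon", "mitochondrion"]
    corpus = "The neuron fires when the synapse releases neurotransmitter onto the axon."
    score, covered = lexical_fit(labels, corpus)
    print(score, covered)   # 0.75 ['neuron', 'synapse', 'axon']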

407 citations


Journal ArticleDOI
TL;DR: The most representative methodologies for building ontologies from scratch are presented, and the proposed techniques, guidelines and methods to help in the construction task are described.
Abstract: Ontologies are an important component in many areas, such as knowledge management and organization, electronic commerce and information retrieval and extraction. Several methodologies for ontology building have been proposed. In this article, we provide an overview of ontology building. We start by characterizing the ontology building process and its life cycle. We present the most representative methodologies for building ontologies from scratch, and the proposed techniques, guidelines and methods to help in the construction task. We analyze and compare these methodologies. We describe current research issues in ontology reuse. Finally, we discuss the current trends in ontology building and its future challenges, namely, the new issues for building ontologies for the Semantic Web.

Journal Article
TL;DR: A modular formal ontology of the biomedical domain with two components, one for biological objects, corresponding broadly to anatomy, and one for Biological processes, corresponding broad to physiology, is proposed.
Abstract: We propose a modular formal ontology of the biomedical domain with two components, one for biological objects, corresponding broadly to anatomy, and one for biological processes, corresponding broadly to physiology. The result constitutes what might be described as a joint venture between two perspectives--of so-called three-dimensionalism and four-dimensionalism--which are normally regarded as incompatible. We outline an approach which allows them to be combined together, and provide examples of its application in biomedicine.

Proceedings ArticleDOI
17 May 2004
TL;DR: A search architecture is presented that combines classical search techniques with spread activation techniques applied to a semantic model of a given domain; the proposed hybrid spread activation, combining the symbolic and sub-symbolic approaches, achieved better results than each of the approaches alone.
Abstract: This paper presents a search architecture that combines classical search techniques with spread activation techniques applied to a semantic model of a given domain. Given an ontology, weights are assigned to links based on certain properties of the ontology, so that they measure the strength of the relation. Spread activation techniques are used to find related concepts in the ontology given an initial set of concepts and corresponding initial activation values. These initial values are obtained from the results of classical search applied to the data associated with the concepts in the ontology. Two test cases were implemented, with very positive results. It was also observed that the proposed hybrid spread activation, combining the symbolic and the sub-symbolic approaches, achieved better results when compared to each of the approaches alone.
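A minimal, hypothetical sketch of the spread activation step over a weighted concept graph follows; the graph, weights, and decay factor are illustrative, whereas the paper derives its link weights from properties of the ontology and seeds the activation from classical search results.

    # Minimal spread activation over a weighted concept graph.
    # graph[u] is a list of (neighbour, link_weight) pairs; activation spreads
    # from the initially activated concepts, attenuated by a decay factor.
    def spread_activation(graph, initial, decay=0.5, iterations=2):
        activation = dict(initial)
        for _ in range(iterations):
            new_act = dict(activation)
            for node, value in activation.items():
                for neighbour, weight in graph.get(node, []):
                    new_act[neighbour] = new_act.get(neighbour, 0.0) + decay * weight * value
            activation = new_act
        return activation

    graph = {
        "jaguar_car":    [("automobile", 0.9), ("jaguar_animal", 0.2)],
        "automobile":    [("vehicle", 0.8)],
        "jaguar_animal": [("felidae", 0.9)],
    }
    seeds = {"jaguar_car": 1.0}   # e.g. from a classical keyword-search hit
    for concept, value in sorted(spread_activation(graph, seeds).items(), key=lambda x: -x[1]):
        print(f"{concept:15s} {value:.3f}")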


Book ChapterDOI
TL;DR: A plug-in for the widely used Protege ontology development tool that supports the interactive extraction and/or extension of ontologies from text and provides an environment for the integration of linguistic analysis in ontology engineering through the definition of mapping rules.
Abstract: In this paper we describe a plug-in (OntoLT) for the widely used Protege ontology development tool that supports the interactive extraction and/or extension of ontologies from text. The OntoLT approach provides an environment for the integration of linguistic analysis in ontology engineering through the definition of mapping rules that map linguistic entities in annotated text collections to concept and attribute candidates (i.e. Protege classes and slots). The paper explains this approach in more detail and discusses some initial experiments on deriving a shallow ontology for the neurology domain from a corresponding collection of neurological scientific abstracts.

Proceedings ArticleDOI
05 Jan 2004
TL;DR: This work proposes to combine Bayesian networks (BN), a widely used graphic model for knowledge representation under uncertainty, with OWL, the de facto industry standard ontology language recommended by W3C, to support uncertain ontology representation and ontology reasoning and mapping.
Abstract: To support uncertain ontology representation and ontology reasoning and mapping, we propose to incorporate Bayesian networks (BN), a widely used graphic model for knowledge representation under uncertainty, and OWL, the de facto industry standard ontology language recommended by W3C. First, OWL is augmented to allow additional probabilistic markups, so probabilities can be attached with individual concepts and properties in an OWL ontology. Secondly, a set of translation rules is defined to convert this probabilistically annotated OWL ontology into the directed acyclic graph (DAG) of a BN. Finally, the BN is completed by constructing conditional probability tables (CPT) for each node in the DAG. Our probabilistic extension to OWL is consistent with OWL semantics, and the translated BN is associated with a joint probability distribution over the application domain. General Bayesian network inference procedures (e.g., belief propagation or junction tree) can be used to compute P(C|e): the degree of the overlap or inclusion between a concept C and a concept represented by a description e. We also provide a similarity measure that can be used to find the most similar concept that a given description belongs to.
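To give a concrete (and deliberately tiny, hypothetical) sense of the final inference step, the sketch below hard-codes a two-node Bayesian network with illustrative numbers and computes P(C|e) by enumeration; a real system would run belief propagation or junction-tree inference over the full translated DAG.

    # Tiny hand-built BN: P(C) and P(e | C), both Boolean, with illustrative numbers.
    # P(C | e) is computed by enumeration (Bayes' rule).
    p_c = 0.3                      # prior that an instance belongs to concept C
    p_e_given_c = {True: 0.8,      # P(e = true | C = true)
                   False: 0.1}     # P(e = true | C = false)

    def posterior_c_given_e():
        joint_true = p_c * p_e_given_c[True]            # P(C, e)
        joint_false = (1 - p_c) * p_e_given_c[False]    # P(not C, e)
        return joint_true / (joint_true + joint_false)  # P(C | e)

    print(round(posterior_c_given_e(), 3))   # 0.774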

Proceedings Article
22 Aug 2004
TL;DR: This paper presents the DILIGENT methodology, which is intended to support domain experts in a distributed setting to engineer and evolve ontologies with the help of a fine-grained methodological approach based on Rhetorical Structure Theory.
Abstract: Ontology engineering processes in truly distributed settings like the Semantic Web or global peer-to-peer systems may not be adequately supported by conventional, centralized ontology engineering methodologies. In this paper, we present our work towards the DILIGENT methodology, which is intended to support domain experts in a distributed setting to engineer and evolve ontologies with the help of a fine-grained methodological approach based on Rhetorical Structure Theory, viz. the DILIGENT model of ontology engineering by argumentation.

Journal ArticleDOI
TL;DR: A collaboratively engineered general-purpose knowledge management (KM) ontology that can be used by practitioners, researchers, and educators is described that evolved from a Delphi-like process involving a diverse panel of over 30 KM practitioners and researchers.
Abstract: This article describes a collaboratively engineered general-purpose knowledge management (KM) ontology that can be used by practitioners, researchers, and educators. The ontology is formally characterized in terms of nearly one hundred definitions and axioms that evolved from a Delphi-like process involving a diverse panel of over 30 KM practitioners and researchers. The ontology identifies and relates knowledge manipulation activities that an entity (e.g., an organization) can perform to operate on knowledge resources. It introduces a taxonomy for these resources, which indicates classes of knowledge that may be stored, embedded, and/or represented in an entity. It recognizes factors that influence the conduct of KM both within and across KM episodes. The Delphi panelists judge the ontology favorably overall in terms of its ability to unify KM concepts, its comprehensiveness, and its utility. Moreover, various implications of the ontology for the KM field are examined as indicators of its utility for practitioners, educators, and researchers.

01 Jan 2004
TL;DR: The state of the art is not restricted to any single discipline; for instance, the work on schema matching within the database area is considered a form of ontology alignment.
Abstract: In this document we provide an overall view of the state of the art in ontology alignment. It is organised as a description of the need for ontology alignment, a presentation of the techniques currently in use for ontology alignment and a presentation of existing systems. The state of the art is not restricted to any single discipline and considers, for instance, the work on schema matching within the database area as a form of ontology alignment. Keyword list: schema matching, ontology matching, ontology alignment, ontology mapping, schema mapping. Document Identifier: KWEB/2004/D2.2.3/v1.2. Project: KWEB EU-IST-2004-507482.

Journal ArticleDOI
TL;DR: An automatic ontology building approach that starts from a small ontology kernel and constructs the ontology through text understanding automatically, and extracts lexical and ontological knowledge from Persian (Farsi) texts is proposed.
Abstract: Research on ontology is becoming increasingly widespread in the computer science community. The major problems in building ontologies are the bottleneck of knowledge acquisition and the time-consuming construction of various ontologies for various domains/applications. Moving toward automation of ontology construction is therefore a solution. We propose an automatic ontology building approach. In this approach, the system starts from a small ontology kernel and constructs the ontology through text understanding automatically. The kernel contains the primitive concepts, relations and operators to build an ontology. The features of our proposed model are being domain/application independent, building ontologies upon a small primary kernel, learning words, concepts, taxonomic and non-taxonomic relations and axioms, and applying a symbolic, hybrid ontology learning approach consisting of logical, linguistic-based, template-driven and semantic analysis methods. Hasti is an ongoing project to implement and test the automatic ontology building approach. It extracts lexical and ontological knowledge from Persian (Farsi) texts. In this paper, we first describe some ontology engineering problems which motivated our approach. In the next sections, after a brief description of Hasti, its features and its architecture, we discuss its components in detail. In each part, the learning algorithms are described. Then some experimental results are discussed and, finally, we give an overview of related work, introduce a general framework for comparing ontology learning systems, and compare Hasti with related works according to this framework.

Journal ArticleDOI
TL;DR: This paper discusses ontologies that guide conceptualization of artefacts from the functional point of view that are based on an extended device ontology and its application in the mechanical domain.
Abstract: It has been recognized that design knowledge is scattered around technology and target domains. One of the two major reasons for this is that different frameworks (viewpoints) for conceptualization of design knowledge are used when people try to describe knowledge in different domains. The other is that several key functional concepts are left undefined or even unidentified. In this paper, we first overview the state of the art of ontological engineering, which we believe is able to make a considerable contribution to resolving these difficulties. We then discuss our enterprise aiming at the systematization of functional knowledge used for synthesis. We discuss ontologies that guide conceptualization of artefacts from the functional point of view. The framework for knowledge systematization is based on an extended device ontology and a functional concept ontology built on top of the extended device ontology. This paper particularly discusses the extended device ontology and its application in the mechanical domain.

Journal ArticleDOI
TL;DR: A uniform framework is presented here that lets developers compare different ontologies and map similarities and differences among them, and helps users manage multiple ontologies by leveraging data and algorithms developed for one tool in another.
Abstract: Ontologies have become ubiquitous in information systems. They constitute the semantic Web's backbone, facilitate e-commerce, and serve such diverse application fields as bioinformatics and medicine. As ontology development becomes increasingly widespread and collaborative, developers are creating ontologies using different tools and different languages. These ontologies cover unrelated or overlapping domains at different levels of detail and granularity. A uniform framework, which we present here, helps users manage multiple ontologies by leveraging data and algorithms developed for one tool in another. For example, by using an algorithm we developed for structural evaluation of ontology versions, this framework lets developers compare different ontologies and map similarities and differences among them. Multiple-ontology management includes these tasks: maintain ontology libraries, import and reuse ontologies, translate ontologies from one formalism to another, support ontology versioning, specify transformation rules between different ontologies and versions, merge ontologies, align and map between ontologies, extract an ontology's self-contained parts, support inference across multiple ontologies, and support queries across multiple ontologies.

Journal ArticleDOI
TL;DR: This work has developed a generic component-based ontology for real-world services, a formalization of concepts that represent the consensus in the business science literature on service management and marketing, and developed support tools that facilitate end-user modeling of services.
Abstract: Real-world services - that is, nonsoftware-based services - differ significantly from Web services, usually defined as software functionality accessible and configurable over the Web. Because of the economic, social, and business importance of the service concept in general, we believe it's necessary to rethink what this concept means in an ontological and computational sense. The OBELIX (ontology-based electronic integration of complex products and value chains) project has therefore developed a generic component-based ontology for real-world services. This OBELIX service ontology is first of all a formalization of concepts that represent the consensus in the business science literature on service management and marketing. We express our service ontology in a graphical, network-style representation, and we've developed support tools that facilitate end-user modeling of services. Then, automated knowledge-based configuration methods let business designers and analysts analyze service bundles. We've tested our ontology, methods, and tools on applications in real-world case studies of different industry sectors.

Journal ArticleDOI
TL;DR: This work proposes a common ontology called semantic conflict resolution ontology (SCROL) that addresses the inherent difficulties in the conventional approaches to semantic interoperability of heterogeneous databases, i.e., federated schema and domain ontology approaches.
Abstract: Establishing semantic interoperability among heterogeneous information sources has been a critical issue in the database community for the past two decades. Despite the critical importance, current approaches to semantic interoperability of heterogeneous databases have not been sufficiently effective. We propose a common ontology called semantic conflict resolution ontology (SCROL) that addresses the inherent difficulties in the conventional approaches, i.e., federated schema and domain ontology approaches. SCROL provides a systematic method for automatically detecting and resolving various semantic conflicts in heterogeneous databases. SCROL provides a dynamic mechanism of comparing and manipulating contextual knowledge of each information source, which is useful in achieving semantic interoperability among heterogeneous databases. We show how SCROL is used for detecting and resolving semantic conflicts between semantically equivalent schema and data elements. In addition, we present evaluation results to show that SCROL can be successfully used to automate the process of identifying and resolving semantic conflicts.
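As a concrete, hypothetical illustration of the kind of conflict a common ontology can mediate (this is not SCROL itself), the sketch below resolves a unit-of-measure conflict between two databases that both store product weight, by routing each value through a shared canonical unit.

    # Hypothetical illustration of resolving a unit-of-measure conflict between
    # two sources via a shared canonical unit (kilograms). SCROL itself covers
    # many more conflict types (naming, scaling, aggregation, ...).
    TO_KG = {"kg": 1.0, "lb": 0.453592, "g": 0.001}

    def to_canonical(value, unit):
        return value * TO_KG[unit]

    # Each source annotates its "weight" attribute with its local unit (context).
    source_a = {"attribute": "weight", "unit": "lb", "value": 10.0}
    source_b = {"attribute": "weight", "unit": "kg", "value": 4.0}

    a_kg = to_canonical(source_a["value"], source_a["unit"])
    b_kg = to_canonical(source_b["value"], source_b["unit"])
    print(f"A: {a_kg:.2f} kg, B: {b_kg:.2f} kg, A heavier: {a_kg > b_kg}")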

Journal ArticleDOI
TL;DR: This paper reviews 13 methods and 14 tools for semi-automatically building ontologies from texts and relates them to the techniques each method follows; three groups have been identified: one based on linguistics, one on statistics, and one on machine learning.
Abstract: Ontology learning aims at reducing the time and efforts in the ontology development process. In recent years, several methods and tools have been proposed to speed up this process using different sources of information and different techniques. In this paper, we have reviewed 13 methods and 14 tools for semi-automatically building ontologies from texts and their relationships with the techniques each method follows. The methods have been grouped according to the main techniques followed and three groups have been identified: one based on linguistics, one on statistics, and one on machine learning. Regarding the tools, the criterion for grouping them, which has been the main aim of the tool, is to distinguish what elements of the ontology can be learned with each tool. According to this, we have identified three kinds of tools: tools for learning relations, tools for learning new concepts, and assisting tools for building up taxonomies.
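A classic example of the linguistics-based family of techniques is lexico-syntactic (Hearst) pattern matching. The hypothetical sketch below extracts candidate hyponym relations of the form "X such as Y" from raw text with a regular expression; statistical and machine-learning approaches would instead rely on term distributions or trained models.

    # Toy Hearst-pattern extractor: "<hypernym> such as <hyponym>(, <hyponym>)*".
    # Real ontology-learning tools add POS tagging, chunking, and many more patterns.
    import re

    PATTERN = re.compile(r"(\w+)\s+such as\s+((?:\w+)(?:,\s*\w+)*)", re.IGNORECASE)

    def extract_isa(text):
        pairs = []
        for hypernym, hyponyms in PATTERN.findall(text):
            for hyponym in re.split(r",\s*", hyponyms):
                pairs.append((hyponym, hypernym))
        return pairs

    text = "Neurotransmitters such as dopamine, serotonin and glutamate act at synapses."
    print(extract_isa(text))
    # [('dopamine', 'Neurotransmitters'), ('serotonin', 'Neurotransmitters')]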

Journal ArticleDOI
TL;DR: This work deals with two types of ontology evaluation, content evaluation and ontology technology evaluation, and discusses ontology libraries, ontology tools, and formal evaluation of ontology quality.
Abstract: We deal with two types of ontology evaluation, content evaluation and ontology technology evaluation. Evaluating content is a must for preventing applications from using inconsistent, incorrect, or redundant ontologies. It's unwise to publish an ontology that one or more software applications will use without first evaluating it. A well-evaluated ontology won't guarantee the absence of problems, but it makes its use safer. Similarly, evaluating ontology technology eases its integration with other software environments, ensuring a correct technology transfer from the academic to the industrial world. We also discuss ontology libraries, ontology tools, and formal evaluation of ontology quality.
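Two of the simplest content checks of this kind, sketched hypothetically below for a class hierarchy given as parent links, are detecting circular subclass chains and detecting redundant subclass assertions that are already implied transitively.

    # Two elementary content checks on a subclass hierarchy, given as
    # {class: set_of_direct_superclasses}: circularity and redundant links.
    def ancestors(hierarchy, cls, seen=None):
        seen = seen or set()
        for parent in hierarchy.get(cls, ()):
            if parent not in seen:
                seen.add(parent)
                ancestors(hierarchy, parent, seen)
        return seen

    def check(hierarchy):
        cycles = [c for c in hierarchy if c in ancestors(hierarchy, c)]
        redundant = []
        for c, parents in hierarchy.items():
            for p in parents:
                # p is redundant if it is also reachable through another parent
                others = parents - {p}
                reachable = set().union(*(ancestors(hierarchy, q) for q in others)) if others else set()
                if p in reachable:
                    redundant.append((c, p))
        return cycles, redundant

    h = {"Car": {"Vehicle", "Thing"}, "Vehicle": {"Thing"}, "Thing": set()}
    print(check(h))   # ([], [('Car', 'Thing')]) -- Car subClassOf Thing is implied via Vehicle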

Journal ArticleDOI
TL;DR: The paper focuses on the methodology, design and development of an ontology for UTCMLS, using Protégé 2000 to develop the concepts and relationships that represent the domain and that will permit storage of TCM knowledge.

Proceedings Article
01 Jan 2004
TL;DR: A new algorithm for matching two ontologies based on all the information available about the given ontologies (e.g. their concepts, relations, information about the structure of each hierarchy of concepts, or of relations) is proposed.
Abstract: Ontologies are nowadays used in many domains such as the Semantic Web, information systems... to represent the meaning of data and data sources. In the framework of knowledge management in a heterogeneous organization, the materialization of the organizational memory in a “corporate semantic web” may require integrating the various ontologies of the different groups of this organization. To be able to build a corporate semantic web in a heterogeneous, multi-community organization, it is essential to have methods for comparing, aligning, integrating or mapping different ontologies. This paper proposes a new algorithm for matching two ontologies based on all the information available about the given ontologies (e.g. their concepts, relations, information about the structure of each hierarchy of concepts, or of relations), applying the TF/IDF scheme (a method widely used in the information retrieval community) and integrating WordNet (an electronic lexical database) in the process of ontology matching.
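A minimal sketch of the TF/IDF part of such a matcher follows (hypothetical; the paper additionally exploits hierarchy structure and WordNet): each concept is described by a small bag of words, the descriptions are turned into TF/IDF vectors, and concepts are matched by cosine similarity.

    # Minimal TF/IDF + cosine matching of concept descriptions (bags of words).
    # The two "ontologies" and their textual descriptions are hypothetical.
    import math
    from collections import Counter

    def tfidf_vectors(docs):
        df = Counter(term for doc in docs for term in set(doc))
        n = len(docs)
        return [{t: tf[t] * math.log(n / df[t]) for t in tf}
                for tf in (Counter(doc) for doc in docs)]

    def cosine(u, v):
        dot = sum(u[t] * v[t] for t in u if t in v)
        norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    onto_a = {"Car": "motor vehicle with four wheels", "Boat": "vessel for water travel"}
    onto_b = {"Automobile": "motor vehicle wheels road", "Ship": "large water vessel"}
    names = list(onto_a) + list(onto_b)
    docs = [d.split() for d in list(onto_a.values()) + list(onto_b.values())]
    vecs = dict(zip(names, tfidf_vectors(docs)))

    for a in onto_a:
        best = max(onto_b, key=lambda b: cosine(vecs[a], vecs[b]))
        print(a, "->", best, round(cosine(vecs[a], vecs[best]), 2))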

01 Sep 2004
TL;DR: This paper attempts to examine current candidate standard upper ontologies and assess their applicability for a U.S. Government or U.S. Military domain.
Abstract: Momentum is gaining to develop a Semantic Web to allow people and machines to share the meaning (semantics) of data and ultimately of applications. Key to the vision of a Semantic Web is the ability to capture data and application semantics in ontologies and map these ontologies together via related concepts. One approach for mapping disparate ontologies is to use a standard upper ontology. In determining how Semantic Web technologies might be applied to United States (US) Government domains, the authors consider whether the use of standard upper ontologies makes sense in these environments. This paper attempts to examine current candidate standard upper ontologies and assess their applicability for a US Government or US Military domain. They evaluate the state of the art and applicability of upper ontologies through the lens of potential application in these domains. The evaluation includes consideration of the ontology purpose, ontological content decisions, licensing restrictions, structural differences, and maturity. The report concludes with some recommendations and predictions.

Book ChapterDOI
31 Aug 2004
TL;DR: This approach enables users to reference ontology data directly from SQL using the semantic match operators, thereby opening up possibilities of combining with other operations such as joins as well as making the ontology-driven applications easy to develop and efficient.
Abstract: Ontologies are increasingly being used to build applications that utilize domain-specific knowledge. This paper addresses the problem of supporting ontology-based semantic matching in RDBMS. Specifically, 1) A set of SQL operators, namely ONT_RELATED, ONT_EXPAND, ONT_DISTANCE, and ONT_PATH, are introduced to perform ontology-based semantic matching, 2) A new indexing scheme ONT_INDEXTYPE is introduced to speed up ontology-based semantic matching operations, and 3) System-defined tables are provided for storing ontologies specified in OWL. Our approach enables users to reference ontology data directly from SQL using the semantic match operators, thereby opening up possibilities of combining with other operations such as joins as well as making the ontology-driven applications easy to develop and efficient. In contrast, other approaches use RDBMS only for storage of ontologies and querying of ontology data is typically done via APIs. This paper presents the ontology-related functionality including inferencing, discusses how it is implemented on top of Oracle RDBMS, and illustrates the usage with several database applications.
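The operator names above are Oracle-specific and their exact SQL signatures are not reproduced here. As a language-neutral, hypothetical illustration of the underlying idea (expanding a query term to its ontological descendants before an ordinary relational query), the sketch below performs the expansion in Python and issues a plain SQL IN query against SQLite.

    # Hypothetical illustration of ontology-assisted querying: expand a concept
    # to its subclasses, then run an ordinary SQL query. This mimics the idea of
    # ONT_EXPAND-style operators, not Oracle's actual SQL syntax.
    import sqlite3

    SUBCLASSES = {"Vehicle": ["Car", "Truck"], "Car": ["Sedan"], "Truck": [], "Sedan": []}

    def expand(concept):
        result, stack = {concept}, [concept]
        while stack:
            for child in SUBCLASSES.get(stack.pop(), []):
                if child not in result:
                    result.add(child)
                    stack.append(child)
        return sorted(result)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE product (name TEXT, category TEXT)")
    conn.executemany("INSERT INTO product VALUES (?, ?)",
                     [("Model X", "Sedan"), ("Hauler 9", "Truck"), ("Skiff", "Boat")])

    categories = expand("Vehicle")                 # ['Car', 'Sedan', 'Truck', 'Vehicle']
    placeholders = ",".join("?" for _ in categories)
    rows = conn.execute(f"SELECT name FROM product WHERE category IN ({placeholders})",
                        categories).fetchall()
    print(rows)   # [('Model X',), ('Hauler 9',)]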