
Showing papers on "Upper ontology published in 2003"


Journal ArticleDOI
TL;DR: Ontology mapping is seen as a solution provider in today's landscape of ontology research: it provides a common layer from which several ontologies can be accessed and hence can exchange information in a semantically sound manner.

Abstract: Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in a semantically sound manner. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.
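For readers who want the flavor of that theoretical approach, one common formalization, paraphrased here rather than quoted from the article, treats an ontology as a signature plus axioms and a mapping as an axiom-preserving function between signatures:

```latex
% An ontology is a pair O = (S, A): a signature S of concept,
% relation and instance symbols, and a set A of axioms over S.
% A total ontology mapping from O_1 = (S_1, A_1) to O_2 = (S_2, A_2)
% is a morphism of signatures whose axiom translations are entailed
% by the target ontology:
\[
  f : S_1 \to S_2
  \quad\text{such that}\quad
  A_2 \models f(A_1).
\]
```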

1,384 citations


Journal ArticleDOI
TL;DR: This work presents an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications.
Abstract: Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent entity classes.
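The feature-matching component lends itself to a Tversky-style ratio model. The sketch below is a minimal illustration of that idea, not the paper's implementation; the feature sets and the weight alpha are invented for the example.

```python
# Minimal sketch of a Tversky-style feature-matching similarity:
# shared features relative to shared plus weighted distinct features.

def feature_similarity(a_feats: set, b_feats: set, alpha: float = 0.5) -> float:
    """Ratio of common features to common plus weighted non-common ones."""
    common = len(a_feats & b_feats)
    only_a = len(a_feats - b_feats)
    only_b = len(b_feats - a_feats)
    denom = common + alpha * only_a + (1 - alpha) * only_b
    return common / denom if denom else 0.0

# Entity classes described by parts, functions, and attributes,
# echoing the paper's classification of distinguishing features.
stadium = {"part:field", "part:seats", "function:sport", "attr:open_air"}
arena   = {"part:field", "part:seats", "function:sport", "attr:roofed"}

print(feature_similarity(stadium, arena))  # high, but below 1.0
```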

948 citations


Journal ArticleDOI
TL;DR: COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain.
Abstract: This document describes COBRA-ONT, an ontology for supporting pervasive context-aware systems. COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain. This ontology is developed as a part of the Context Broker Architecture (CoBrA), a broker-centric agent architecture that provides knowledge sharing, context reasoning and privacy protection supports for pervasive context-aware systems. We also describe an inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning.
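As a hedged sketch of the kind of OWL vocabulary such an ontology defines (the namespace and class names below are illustrative assumptions, not the real COBRA-ONT), one might build the place/agent/event core with rdflib:

```python
# Sketch: OWL classes for places, agents and events, plus one instance.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL

EX = Namespace("http://example.org/cobra-like#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Core classes and a specialization for the meeting-room domain.
for cls in (EX.Place, EX.Agent, EX.Event):
    g.add((cls, RDF.type, OWL.Class))
g.add((EX.MeetingRoom, RDFS.subClassOf, EX.Place))
g.add((EX.locatedIn, RDF.type, OWL.ObjectProperty))

# An instance: an agent located in a meeting room.
g.add((EX.room210, RDF.type, EX.MeetingRoom))
g.add((EX.alice, RDF.type, EX.Agent))
g.add((EX.alice, EX.locatedIn, EX.room210))

print(g.serialize(format="turtle"))
```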

855 citations


Journal ArticleDOI
TL;DR: A suite of tools for managing multiple ontologies provides users with a uniform framework for comparing, aligning, and merging ontologies, maintaining versions, translating between different formalisms, and identifying inconsistencies and potential problems.
Abstract: Researchers in the ontology-design field have developed the content for ontologies in many domain areas. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. We developed a suite of tools for managing multiple ontologies. This suite provides users with a uniform framework for comparing, aligning, and merging ontologies, maintaining versions, and translating between different formalisms. Two of the tools in the suite support semi-automatic ontology merging: IPROMPT is an interactive ontology-merging tool that guides the user through the merging process, presenting suggestions for next steps and identifying inconsistencies and potential problems. ANCHORPROMPT uses the graph structure of ontologies to find correlations between concepts and to provide additional information for IPROMPT.

799 citations


Journal ArticleDOI
01 Jul 2003
TL;DR: This paper reviews and compares the main methodologies, tools and languages for building ontologies that have been reported in the literature, as well as the main relationships among them.
Abstract: In this paper we review and compare the main methodologies, tools and languages for building ontologies that have been reported in the literature, as well as the main relationships among them. Ontology technology is nowadays mature enough: many methodologies, tools and languages are already available. The future work in this field should be driven towards the creation of a common integrated workbench for ontology developers to facilitate ontology development, exchange, evaluation, evolution and management, to provide methodological support for these tasks, and translations to and from different ontology languages. This workbench should not be created from scratch, but instead integrating the technology components that are currently available.

794 citations


Journal ArticleDOI
01 Nov 2003
TL;DR: GLUE is described, a system that employs machine learning techniques to find semantic mappings between ontologies and is distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge.
Abstract: On the Semantic Web, data will inevitably come from many different ontologies, and information processing across ontologies is not possible without knowing the semantic mappings between them. Manually finding such mappings is tedious, error-prone, and clearly not possible on the Web scale. Hence the development of tools to assist in the ontology mapping process is crucial to the success of the Semantic Web. We describe GLUE, a system that employs machine learning techniques to find such mappings. Given two ontologies, for each concept in one ontology GLUE finds the most similar concept in the other ontology. We give well-founded probabilistic definitions to several practical similarity measures and show that GLUE can work with all of them. Another key feature of GLUE is that it uses multiple learning strategies, each of which exploits well a different type of information either in the data instances or in the taxonomic structure of the ontologies. To further improve matching accuracy, we extend GLUE to incorporate commonsense knowledge and domain constraints into the matching process. Our approach is thus distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge. We describe a set of experiments on several real-world domains and show that GLUE proposes highly accurate semantic mappings. Finally, we extend GLUE to find complex mappings between ontologies and describe experiments that show the promise of the approach.
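The instance-based similarity idea can be illustrated with a joint-probability (Jaccard-style) measure computed from instance memberships predicted by cross-trained learners. The sketch below is an assumption-laden toy, not GLUE itself; the instances and labels are invented.

```python
# Sketch: estimate P(A and B) / P(A or B) for concept A (ontology 1)
# and concept B (ontology 2) from a shared pool of instances, each
# labeled by classifiers trained on the other ontology.

def jaccard_similarity(in_a: set, in_b: set) -> float:
    """Similarity from predicted instance memberships of two concepts."""
    both = len(in_a & in_b)
    either = len(in_a | in_b)
    return both / either if either else 0.0

# Instances the learners predict to belong to each concept:
in_a = {"doc0", "doc1", "doc2", "doc3"}   # e.g. 'Associate Professor'
in_b = {"doc1", "doc2", "doc3", "doc4"}   # e.g. 'Senior Lecturer'

print(jaccard_similarity(in_a, in_b))  # 0.6
```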

533 citations


Journal ArticleDOI
TL;DR: The OntoLearn system is an infrastructure for automated ontology learning from domain text that uses natural language processing and machine learning techniques, and is part of a more general ontology engineering architecture.
Abstract: Our OntoLearn system is an infrastructure for automated ontology learning from domain text. It is the only system, as far as we know, that uses natural language processing and machine learning techniques, and is part of a more general ontology engineering architecture. We describe the system and an experiment in which we used a machine-learned tourism ontology to automatically translate multiword terms from English to Italian. The method can apply to other domains without manual adaptation.

357 citations


01 Jan 2003
TL;DR: This paper describes the upper-level ontology SUMO (Suggested Upper Merged Ontology), which has been proposed as the initial version of an eventual Standard Upper Ontology (SUO), and describes the popular, free, and structured WordNet lexical database.
Abstract: Ontologies are becoming extremely useful tools for sophisticated software engineering. Designing applications, databases, and knowledge bases with reference to a common ontology can mean shorter development cycles, easier and faster integration with other software and content, and a more scalable product. Although ontologies are a very promising solution to some of the most pressing problems that confront software engineering, they also raise some issues and difficulties of their own. Consider, for example, the questions below:
• How can a formal ontology be used effectively by those who lack extensive training in logic and mathematics?
• How can an ontology be used automatically by applications (e.g. Information Retrieval and Natural Language Processing applications) that process free text?
• How can we know when an ontology is complete?
In this paper we begin by describing the upper-level ontology SUMO (Suggested Upper Merged Ontology), which has been proposed as the initial version of an eventual Standard Upper Ontology (SUO). We then describe the popular, free, and structured WordNet lexical database. After this preliminary discussion, we describe the methodology that we are using to align WordNet with SUMO, and we close by discussing how this alignment will provide answers to the questions posed above. Keywords: natural language, ontology. The SUMO is an ontology that was created at Teknowledge Corporation with extensive input from the SUO mailing list, and it has been proposed as a starter document for the IEEE-sanctioned SUO Working Group [1]. The SUMO was created by merging publicly available ontological content into a single, comprehensive, and cohesive structure [2,3]. As of February 2003, the ontology contains 1000 terms and 4000 assertions. The ontology can be browsed online (http://ontology.teknowledge.com), and source files for all of the versions of the ontology can be freely downloaded (http://ontology.teknowledge.com/cgibin/cvsweb.cgi/SUO/).
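A minimal sketch of what the alignment enables, assuming NLTK's WordNet corpus is installed (nltk.download("wordnet")); the synset-to-SUMO table below is a tiny hypothetical sample, not the real alignment files distributed with SUMO:

```python
# Sketch: look up the SUMO terms aligned with a word's WordNet synsets.
from nltk.corpus import wordnet as wn

SYNSET_TO_SUMO = {            # hypothetical sample entries
    "dog.n.01": "Canine",
    "run.v.01": "Running",
}

def sumo_terms(word: str) -> list[str]:
    """Return SUMO terms for every synset of the word that is mapped."""
    return [SYNSET_TO_SUMO[s.name()]
            for s in wn.synsets(word)
            if s.name() in SYNSET_TO_SUMO]

print(sumo_terms("dog"))  # ['Canine']
```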

333 citations


Journal ArticleDOI
TL;DR: The authors present an integrated enterprise-knowledge management architecture, focusing on how to support multiple ontologies and manage ontology evolution.
Abstract: Several challenges exist related to applying ontologies in real-world environments. The authors present an integrated enterprise-knowledge management architecture, focusing on how to support multiple ontologies and manage ontology evolution.

304 citations


Book ChapterDOI
TL;DR: The Semantic Web is a powerful vision that is getting to grips with the challenge of providing more human-oriented web services; reasoning with and across distributed, partially implicit assumptions (contextual knowledge) is a milestone on that path.
Abstract: The Semantic Web is a powerful vision that is getting to grips with the challenge of providing more human-oriented web services. Hence, reasoning with and across distributed, partially implicit assumptions (contextual knowledge) is a milestone.

299 citations


01 Jan 2003
TL;DR: Racer is described, which can be considered as a core inference engine for the semantic web, which currently supports the web ontology languages DAML+OIL, RDF, and OWL.
Abstract: In this paper we describe Racer, which can be considered as a core inference engine for the semantic web. The Racer inference server offers two APIs that are already used by at least three different network clients, i.e., the ontology editor OilEd, the visualization tool RICE, and the ontology development environment Protege 2. The Racer server supports the standard DIG protocol via HTTP and a TCP based protocol with extensive query facilities. Racer currently supports the web ontology languages DAML+OIL, RDF, and OWL.
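A schematic sketch of how a client might talk to a DIG-style HTTP endpoint such as the one Racer exposes; the URL and the XML payload below are simplified placeholders, not the exact DIG wire format:

```python
# Sketch: post an XML "ask" request to a DIG-style reasoner endpoint.
import urllib.request

DIG_ENDPOINT = "http://localhost:8080/"   # hypothetical Racer server URL

# Schematic query: does Animal subsume Dog? (simplified, not exact DIG)
ask = b"""<?xml version="1.0"?>
<asks xmlns="http://dl.kr.org/dig/lang">
  <subsumes id="q1">
    <catom name="Animal"/><catom name="Dog"/>
  </subsumes>
</asks>"""

req = urllib.request.Request(DIG_ENDPOINT, data=ask,
                             headers={"Content-Type": "text/xml"})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())   # XML response from the reasoner
```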

Proceedings ArticleDOI
08 Jul 2003
TL;DR: Provides an explanation of the meaning of this ontology, shows its expressiveness and extensibility, and demonstrates how the ontology can be adapted to handle domain-specific situations by readily extending the core language.
Abstract: In this paper we present an ontology for situation awareness. One of our goals is to support the claim that this ontology is a reasonable candidate for representing various scenarios of situation awareness. Towards this aim we provide an explanation of the meaning of this ontology, show its expressiveness and demonstrate its extensibility. We also compare the expressiveness of this ontology with alternative approaches we considered during the design of the ontology. We then show how the ontology can be adapted to handle domain-specific situations by readily extending the core language. The extensions include adding subclasses, sub-properties and additional attributes to the core ontology. We conclude with an example of how the ontology can be used to annotate specific instances of a situation.
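The extension mechanism (subclasses, sub-properties, and additional attributes layered on a core ontology) can be sketched in a few RDF statements. All names below are illustrative assumptions, not the paper's ontology:

```python
# Sketch: specialize a core situation-awareness ontology for a domain.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS, OWL, XSD

CORE = Namespace("http://example.org/sa-core#")    # hypothetical core
NAVY = Namespace("http://example.org/sa-naval#")   # hypothetical domain
g = Graph()

# A domain subclass of a core class, and a sub-property of a core relation.
g.add((NAVY.Vessel, RDFS.subClassOf, CORE.PhysicalObject))
g.add((NAVY.escorts, RDFS.subPropertyOf, CORE.relatedTo))

# An additional attribute attached to the domain subclass.
g.add((NAVY.hullNumber, RDF.type, OWL.DatatypeProperty))
g.add((NAVY.hullNumber, RDFS.domain, NAVY.Vessel))
g.add((NAVY.hullNumber, RDFS.range, XSD.string))
```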

Proceedings Article
01 Jan 2003
TL;DR: The results of applying certain organizing principles drawn from philosophical ontology to GO are explored with a view to improving GO's consistency and coherence and thus its future applicability in the automated processing of biological data.
Abstract: The rapidly increasing wealth of genomic data has driven the development of tools to assist in the task of representing and processing information about genes, their products and their functions. One of the most important of these tools is the Gene Ontology (GO), which is being developed in tandem with work on a variety of bioinformatics databases. An examination of the structure of GO, however, reveals a number of problems, which we believe can be resolved by taking account of certain organizing principles drawn from philosophical ontology. We shall explore the results of applying such principles to GO with a view to improving GO's consistency and coherence and thus its future applicability in the automated processing of biological data.

Proceedings ArticleDOI
07 Nov 2003
TL;DR: A Semantic Web portal, called OntoKhoj, is proposed that is designed to simplify the Ontology Engineering process, allowing agents and ontology engineers to retrieve trustworthy, authoritative knowledge and expediting ontology engineering through extensive reuse of ontologies.
Abstract: The goal of the next generation Web is to build virtual communities wherein software agents and people can work in cooperation by sharing knowledge. To achieve this goal, the emerging Semantic Web community has proposed ontologies to express knowledge in a machine-understandable way. The process of building and maintaining ontologies, known as Ontology Engineering, presents unique challenges related to the lack of trustworthy and authoritative knowledge sources and the absence of a centralized repository for locating ontologies to be reused. In this paper, we propose a Semantic Web portal, called OntoKhoj, that is designed to simplify the Ontology Engineering process. The methodology behind OntoKhoj is based on algorithms for searching, aggregating, ranking and classifying ontologies in the Semantic Web. OntoKhoj 1) allows agents and ontology engineers to retrieve trustworthy, authoritative knowledge, and 2) expedites the process of ontology engineering through extensive reuse of ontologies. We have implemented the OntoKhoj portal and validated our system on real ontological data in the Semantic Web.
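The paper's ranking algorithms are not reproduced here, but a cross-reference ranking in the spirit of PageRank can be sketched as follows; the graph, damping factor and iteration count are invented for illustration:

```python
# Toy sketch: rank ontologies by how often other ontologies reference
# them, PageRank-style. Dangling mass is simply dropped in this toy.

def rank(refs: dict[str, list[str]], d: float = 0.85, iters: int = 50):
    """refs maps each ontology URI to the URIs it references;
    every referenced URI is assumed to be a key of refs as well."""
    n = len(refs)
    score = {o: 1 / n for o in refs}
    for _ in range(iters):
        nxt = {o: (1 - d) / n for o in refs}
        for o, targets in refs.items():
            for t in targets:
                nxt[t] += d * score[o] / len(targets)
        score = nxt
    return score

refs = {"travel.owl": ["time.owl"], "wine.owl": ["food.owl"],
        "food.owl": ["wine.owl"], "time.owl": []}
scores = rank(refs)
print(max(scores, key=scores.get))   # most-referenced ontology
```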

Book ChapterDOI
22 Jun 2003
TL;DR: Presents a generic ontology-based user modeling architecture (OntobUM), applied in the context of a Knowledge Management System (KMS), which relies on a user ontology based on the IMS LIP specifications, uses Semantic Web technologies, and is integrated in an ontology-based KMS called Ontologging.
Abstract: This paper presents a generic ontology-based user modeling architecture (OntobUM), applied in the context of a Knowledge Management System (KMS). Due to their powerful knowledge representation formalism and associated inference mechanisms, ontology-based systems are emerging as a natural choice for the next generation of KMSs operating in organizational, inter-organizational as well as community contexts. User models, often addressed as user profiles, have been included in KMSs mainly as simple ways of capturing user preferences and/or competencies. We extend this view by including other characteristics of users relevant in the KM context, and we explain the reasons for doing so. The proposed user modeling system relies on a user ontology, built using Semantic Web technologies and based on the IMS LIP specifications, and it is integrated in an ontology-based KMS called Ontologging. We present a generic framework for implicit and explicit ontology-based user modeling.

Book ChapterDOI
08 Sep 2003
TL;DR: In this paper, the authors present an ontology specifying a model of computer attack using the DARPA Agent Markup Language+Ontology Inference Layer, a descriptive logic language implemented using DAMLJessKB.
Abstract: We state the benefits of transitioning from taxonomies to ontologies and ontology specification languages, which are able to simultaneously serve as recognition, reporting and correlation languages. We have produced an ontology specifying a model of computer attack using the DARPA Agent Markup Language+Ontology Inference Layer (DAML+OIL), a description logic language. The ontology's logic is implemented using DAMLJessKB. We compare and contrast the IETF's IDMEF, an emerging standard that uses XML to define its data model, with a data model constructed using DAML+OIL. In our research we focus on low-level kernel attributes at the process, system and network levels, to serve as the distinguishing taxonomic characteristics of attacks. We illustrate the benefits of utilizing an ontology by presenting use-case scenarios within a distributed intrusion detection system.
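As a purely illustrative sketch (all attribute names and thresholds invented), the ontology's role as a recognition language amounts to classifying events by their low-level attributes:

```python
# Toy sketch: classify an observed event against attack classes that,
# in the paper's setting, would be defined by the ontology.
from dataclasses import dataclass

@dataclass
class Event:
    syscalls_per_sec: int      # process-level attribute (invented)
    half_open_conns: int       # network-level attribute (invented)

def classify(e: Event) -> str:
    # Crude stand-in for the ontology's class definitions.
    if e.half_open_conns > 100:
        return "SynFlood"
    if e.syscalls_per_sec > 10_000:
        return "ResourceExhaustion"
    return "Benign"

print(classify(Event(syscalls_per_sec=200, half_open_conns=450)))
```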

Journal ArticleDOI
01 Nov 2003
TL;DR: This article presents an integrated framework for managing multiple and distributed ontologies on the Semantic Web, based on the representation model for ontologies, trading off between expressivity and tractability.
Abstract: In traditional software systems, significant attention is devoted to keeping modules well separated and coherent with respect to functionality, thus ensuring that changes in the system are localized to a handful of modules. Reuse is seen as the key method in reaching that goal. Ontology-based systems on the Semantic Web are just a special class of software systems, so the same principles apply. In this article, we present an integrated framework for managing multiple and distributed ontologies on the Semantic Web. It is based on the representation model for ontologies, trading off between expressivity and tractability. In our framework, we provide features for reusing existing ontologies and for evolving them while retaining consistency. The approach is implemented within KAON, the Karlsruhe Ontology and Semantic Web tool suite.

Journal Article
TL;DR: It is argued that a core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse sources and the subsequent building of a variety of services such as cross-domain searching, browsing, data mining and knowledge extraction.
Abstract: In this paper, we argue that a core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse sources. A complete and extensible ontology that expresses the basic concepts that are common across a variety of domains and can provide the basis for specialization into domain-specific concepts and vocabularies, is essential for well-defined mappings between domain-specific knowledge representations (i.e., metadata vocabularies) and the subsequent building of a variety of services such as cross-domain searching, browsing, data mining and knowledge extraction. This paper describes the results of a series of three workshops held in 2001 and 2002 which brought together representatives from the cultural heritage and digital library communities with the goal of harmonizing their knowledge perspectives and producing a core ontology. The knowledge perspectives of these two communities were represented by the CIDOC/CRM [31], an ontology for information exchange in the cultural heritage and museum community, and the ABC ontology [33], a model for the exchange and integration of digital library information. This paper describes the mediation process between these two different knowledge biases and the results of this mediation - the harmonization of the ABC and CIDOC/CRM ontologies, which we believe may provide a useful basis for information integration in the wider scope of the involved communities.

01 Jan 2003
TL;DR: Proposes a framework that integrates different representations of ontology change, shows how the representations are related via techniques and heuristics that supplement information in one representation with information from others, and presents an ontology of change operations as the kernel of the framework.
Abstract: Support for ontology evolution becomes extremely important in distributed development and use of ontologies. Information about change can be represented in many different ways. We describe these different representations and propose a framework that integrates them. We show how different representations in the framework are related by describing techniques and heuristics that supplement information in one representation with information from other representations. We also present an ontology of change operations, which is the kernel of our framework.

Ontologies are increasing in popularity, and researchers and developers use them in more and more application areas: as shared vocabularies, to improve information retrieval, or to help data integration. Neither the ontology development itself nor its product, the ontology, is a single-person enterprise. Large standardized ontologies are often developed by several researchers in parallel (e.g. SUO, http://suo.ieee.org/ [9]); a number of ontologies grow in the context of peer-to-peer applications (e.g. Edutella [5]); other ontologies are constructed dynamically [2]. Successful applications of ontologies in such uncontrolled, de-centralized and distributed environments require substantial support for change management and ontology evolution [7]. Given an ontology O and two versions of it, V_old and V_new, complete support for change management includes the following tasks. (Note that V_new is not necessarily a unique replacement for V_old: several new versions based on the old one may exist in parallel; the labels simply refer to two versions where V_new has evolved from V_old.)

Data Transformation: When V_old is changed to V_new, data described by V_old might need to be translated to bring it in line with V_new. For example, if we merge two concepts A and B from V_old into C in V_new, we must combine instances of A and B as well.

Data Access: Even if data is not transformed, data conforming to V_old should remain accessible and correctly interpretable via V_new. That is, all data retrievable with queries in terms of V_old should be retrievable with queries in terms of V_new, and instances of concepts in V_old should be instances of equivalent concepts in V_new. This task is very common on the Semantic Web, where ontologies describe pieces of data on the web.

Ontology Update: When we adapt a remote ontology to specific local needs and the remote ontology changes, we must propagate the changes to the adapted local ontology [8].

Consistent Reasoning: Ontologies, being formal descriptions, are often used as logical theories. When changes occur, we must analyze them to determine whether axioms that were valid in V_old are still valid in V_new. For example, it might be useful to know that a change does not affect the subsumption relationship between two concepts: if A ⊑ B is valid in V_old, it is also valid in V_new. While a change in the logical theory always affects reasoning in general, answers to specific queries may remain unchanged.

Verification and Approval: Sometimes developers need to verify and approve ontology changes, for instance when several people are developing a centralized ontology or when developers want to apply changes selectively. A user interface must simplify such verification and allow developers to accept or reject specific changes, enabling execution of some changes and rolling back of others.

This list of tasks is not exhaustive, and the tools that exist today support these tasks only in isolation. For example, the KAON framework [10] supports evolution strategies, allowing developers to specify how data is updated when changes in an ontology occur; the SHOE versioning system specifies which versions of the ontology the current version is backward compatible with [3]; and many ontology-editing environments (e.g., Protégé [1]) provide logs of changes between versions. While these tools support some of the ontology-evolution tasks, there is no interaction or sharing of information among them, even though many of the tasks require the same elements in the representation of change.
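The data-transformation task can be made concrete with a toy change operation; this is a hedged sketch of the idea, not the paper's ontology of change operations:

```python
# Sketch: a change operation that merges concepts A and B into C and
# carries out the implied data transformation on instance typings.
from dataclasses import dataclass

@dataclass
class MergeConcepts:
    sources: tuple[str, str]   # concepts merged away in V_old
    target: str                # replacement concept in V_new

    def transform(self, instances: dict[str, str]) -> dict[str, str]:
        """Re-type every instance of either source concept to the target."""
        a, b = self.sources
        return {ind: self.target if cls in (a, b) else cls
                for ind, cls in instances.items()}

op = MergeConcepts(sources=("A", "B"), target="C")
print(op.transform({"i1": "A", "i2": "B", "i3": "D"}))
# {'i1': 'C', 'i2': 'C', 'i3': 'D'}
```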

Journal ArticleDOI
TL;DR: The state of the art in Ontology Learning is presented and a framework for classifying and comparing OL systems is introduced and a guideline for researchers to choose the appropriate features to create or use an OL system for their own domain or application is presented.
Abstract: In recent years there have been some efforts to automate the ontology acquisition and construction process. The proposed systems differ from each other in some factors and have many features in common. This paper presents the state of the art in Ontology Learning (OL) and introduces a framework for classifying and comparing OL systems. The dimensions of the framework concern what to learn, from where to learn it and how it may be learnt. They include features of the input, the methods of learning and knowledge acquisition, the elements learned, the resulting ontology and also the evaluation process. To extract this framework, over 50 OL systems or modules thereof that have been described in recent articles are studied here and seven prominent ones, which illustrate the greatest differences, are selected for analysis according to our framework. In this paper after a brief description of the seven selected systems we describe the dimensions of the framework. Then we place the representative ontology learning systems into our framework. Finally, we describe the differences, strengths and weaknesses of various values for our dimensions in order to present a guideline for researchers to choose the appropriate features to create or use an OL system for their own domain or application.
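For illustration, the framework's dimensions can be captured in a simple record type; the field names paraphrase the dimensions and the sample entry is invented:

```python
# Sketch: a profile record for comparing ontology learning systems
# along the framework's dimensions (what/from where/how to learn).
from dataclasses import dataclass

@dataclass
class OLSystemProfile:
    input_type: str               # e.g. free text, semi-structured data
    learning_method: str          # e.g. statistical, rule-based, hybrid
    elements_learned: list[str]   # concepts, relations, axioms, ...
    result_ontology: str          # formality/structure of the output
    evaluation: str               # how the learned ontology is assessed

example = OLSystemProfile(
    input_type="free domain text",
    learning_method="NLP + machine learning (hybrid)",
    elements_learned=["concepts", "taxonomic relations"],
    result_ontology="domain ontology grafted onto an upper ontology",
    evaluation="expert review",
)
```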

Journal ArticleDOI
TL;DR: The ABC model's ability to mediate and integrate between multimedia metadata vocabularies is evaluated by illustrating how it can provide the foundation to facilitate semantic interoperability between MPEG-7, MPEG-21 and other domain-specific metadata vocabularies.
Abstract: A core ontology is one of the key building blocks necessary to enable the scalable assimilation of information from diverse multimedia sources. A complete and extensible ontology that expresses the basic concepts that are common across a variety of domains and media types and that can provide the basis for specialization into domain-specific concepts and vocabularies, is essential for well-defined mappings between domain-specific knowledge representations (i.e., metadata vocabularies) and the subsequent building of a variety of services such as cross-domain searching, tracking, browsing, data mining and knowledge acquisition. As more and more communities develop metadata application profiles which combine terms from multiple vocabularies (e.g., Dublin Core, MPEG-7, MPEG-21, CIDOC/CRM, FGDC, IMS), a core ontology will provide a common understanding of the basic entities and relationships, which is essential for semantic interoperability and the development of additional services based on deductive inferencing. In this paper, we first propose such a core ontology (the ABC model) which was developed in response to a need to integrate information from multiple genres of multimedia content within digital libraries and archives. Although the MPEG-21 RDD was influenced by the ABC model and is based on a model extremely similar to ABC, we believe that it is important to define a separate and domain-independent top-level extensible ontology for scenarios in which either MPEG-21 is irrelevant or to enable the attachment of ontologies from communities external to MPEG, for example, the museum domain (CIDOC/CRM) or the biomedical domain (ON9.3). We evaluate the ABC model's ability to mediate and integrate between multimedia metadata vocabularies by illustrating how it can provide the foundation to facilitate semantic interoperability between MPEG-7, MPEG-21 and other domain-specific metadata vocabularies. By expressing the semantics of both MPEG-7 and MPEG-21 metadata terms in RDF Schema/DAML+OIL [and eventually the Web Ontology Language (OWL)] and attaching the MPEG-7 and MPEG-21 class and property hierarchies to the appropriate top-level classes and properties of the ABC model, we have defined a single distributed machine-understandable ontology. The resulting ontology provides semantic knowledge which is nonexistent within declarative XML schemas or XML-encoded metadata descriptions. Finally, in order to illustrate how such an ontology will contribute to the interoperability of data and services across the entire multimedia content delivery chain, we describe a number of valuable services which have been developed or could potentially be developed using the resulting merged ontologies.
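The attachment step, declaring a domain class as a subclass of an ABC top-level class so both vocabularies live in one machine-understandable graph, can be sketched in a couple of RDF statements (the URIs below are placeholders, not the real schemas):

```python
# Sketch: attach an MPEG-7-style class under an ABC top-level class.
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

ABC   = Namespace("http://example.org/abc#")     # placeholder URI
MPEG7 = Namespace("http://example.org/mpeg7#")   # placeholder URI

g = Graph()
g.add((MPEG7.AudioVisualSegment, RDFS.subClassOf, ABC.Manifestation))

# Any RDFS-aware query engine now finds MPEG-7 segments wherever
# ABC Manifestations are expected.
```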

Book
12 Dec 2003
TL;DR: Presents the theoretical foundations of ontologies, together with methodologies, methods and languages for building them.
Abstract: Ontologies provide a common vocabulary of an area and define, with different levels of formality, the meaning of the terms and the relationships between them. Ontologies may be reused and shared across applications and groups. Concepts in an ontology are usually organized in taxonomies, and relations between concepts, properties of concepts, and axioms are typically used for representing the knowledge contained in ontologies. With the growth of available information, e.g. on the WWW, ontologies are widely applied in knowledge management, the semantic web, natural language generation, enterprise modelling, knowledge-based systems, ontology-based brokers, e-commerce platforms and interoperability between systems. This book looks at questions such as:
* What is an ontology?
* What are the uses of ontologies?
* What types of ontologies exist? What are the most well-known ones?
* How do I select the best ontology for my application?
* What are the principles for building an ontology?
* What methodologies should I use to build my own ontology? Which techniques are appropriate for each step?
* How do software tools support the process of building and using ontologies?
* What language can I use to implement ontologies?
* How can I integrate ontologies in a given language?
The book presents the theoretical foundations of ontological engineering and covers the practical aspects of selecting and applying methodologies, tools and languages for building ontologies. The applications of ontologies are also illustrated with case studies taken from the areas of knowledge management, e-commerce and the semantic web.

Proceedings ArticleDOI
20 May 2003
TL;DR: Presents an infrastructure for managing ontology changes that takes into account dependencies between ontologies, addressing problems that arise as well-known ontologies describing different domains proliferate.
Abstract: The vision of the Semantic Web can only be realized through proliferation of well-known ontologies describing different domains. To enable interoperability in the Semantic Web, it will be necessary to break these ontologies down into smaller, well-focused units that may be reused. Currently, three problems arise in that scenario. Firstly, it is difficult to locate ontologies to be reused, thus leading to many ontologies modeling the same thing. Secondly, current tools do not provide means for reusing existing ontologies while building new ontologies. Finally, ontologies are rarely static, but are being adapted to changing requirements. Hence, an infrastructure for management of ontology changes, taking into account dependencies between ontologies is needed. In this paper we present such an infrastructure addressing the aforementioned problems.
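One core piece of such an infrastructure is dependency tracking: knowing which ontologies are affected when one changes. The toy stand-in below, with invented URIs, sketches that idea only:

```python
# Sketch: a registry that records inter-ontology dependencies and
# reports which registered ontologies are affected by a change.
class OntologyRegistry:
    def __init__(self):
        self.depends_on: dict[str, set[str]] = {}

    def register(self, uri: str, imports: set[str] = frozenset()):
        self.depends_on[uri] = set(imports)

    def affected_by(self, changed: str) -> set[str]:
        """Transitively find every ontology that imports `changed`."""
        hit, frontier = set(), {changed}
        while frontier:
            nxt = {u for u, deps in self.depends_on.items()
                   if deps & frontier and u not in hit}
            hit |= nxt
            frontier = nxt
        return hit

reg = OntologyRegistry()
reg.register("core.owl")
reg.register("travel.owl", {"core.owl"})
reg.register("hotels.owl", {"travel.owl"})
print(reg.affected_by("core.owl"))  # {'travel.owl', 'hotels.owl'}
```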

Proceedings Article
07 Sep 2003
TL;DR: H-match is presented, an algorithm for dynamically matching distributed ontologies by exploiting ontology knowledge descriptions to dynamically perform ontology matching at different levels of depth, with different degrees of flexibility and accuracy.
Abstract: In this paper, we present H-match, an algorithm for dynamically matching distributed ontologies. By exploiting ontology knowledge descriptions, H-match can be used to dynamically perform ontology matching at different levels of depth, with different degrees of flexibility and accuracy. H-match has been developed in the Helios framework, conceived for supporting knowledge sharing and ontology-addressable content retrieval in peer-based systems.
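A hedged sketch of the two-level matching idea, combining a linguistic affinity on concept names with a contextual affinity over neighboring concepts; the measures and weight below are simplifications, not H-match itself:

```python
# Sketch: weighted combination of name similarity and neighborhood
# overlap, so matching depth/accuracy can be tuned via the weight.
from difflib import SequenceMatcher

def linguistic(a: str, b: str) -> float:
    """Crude string-based stand-in for a linguistic affinity measure."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def contextual(neigh_a: set, neigh_b: set) -> float:
    """Overlap of the concepts' neighborhoods (adjacent concept names)."""
    union = neigh_a | neigh_b
    return len(neigh_a & neigh_b) / len(union) if union else 0.0

def affinity(a, b, neigh_a, neigh_b, w: float = 0.6) -> float:
    """w trades name matching against contextual (deeper) matching."""
    return w * linguistic(a, b) + (1 - w) * contextual(neigh_a, neigh_b)

print(affinity("Hotel", "Hostel", {"Room", "Address"}, {"Room", "Bed"}))
```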

Journal ArticleDOI
TL;DR: Four tools for developing and maintaining ontologies are compared: Protégé-2000, Chimaera, DAG-Edit and OilEd; each has its own strengths and weaknesses.
Abstract: Ontologies are being used nowadays in many areas, including bioinformatics. To assist users in developing and maintaining ontologies a number of tools have been developed. In this paper we compare four such tools, Protege-2000, Chimaera, DAG-Edit and OilEd. As test ontologies we have used ontologies from the Gene Ontology Consortium. No system is preferred in all situations, but each system has its own strengths and weaknesses.

01 Jan 2003
TL;DR: Racer, which can be considered a core reasoning agent for the semantic web, is briefly described; it supports a wide range of inference services about OWL ontologies and is used by various clients such as ontology editors and ontology development and visualization tools.

Abstract: Racer, which can be considered a core reasoning agent for the semantic web, is briefly described. Racer currently supports a wide range of inference services about ontologies specified in the Web Ontology Language (OWL). These services are made available to other agents via network-based APIs. Racer is currently used by various clients such as ontology editors, ontology development and visualization tools, and a first web-based prototype for exploration and analysis of OWL ontologies.

Book ChapterDOI
01 Jan 2003
TL;DR: Ontology and the related term "semantics" have recently found increased attention in database discussions, although early discussions of ontology issues important for databases were lost in a sea of papers on technical, mostly performance, issues.
Abstract: Ontology and the related term “semantics” have recently found increased attention in database discussions. Early discussions of ontology issues important for databases [126,78] were lost in a sea of papers on technical, mostly performance, issues, despite the fact that textbooks as early as [134] briefly discussed the relationship between information systems and the real world.

01 Jan 2003
TL;DR: Requirements and design issues of an Ontology Software Environment are discussed in light of the ontology and (meta)data standards that exist in the Semantic Web, and a corresponding implementation, the KAON SERVER, is presented.
Abstract: The growing use of ontologies in applications creates the need for an infrastructure that allows developers to more easily combine different software modules like ontology stores, editors, or inference engines towards comprehensive ontology-based solutions. We call such an infrastructure an Ontology Software Environment. The paper discusses requirements and design issues of such an Ontology Software Environment. In particular, we present this discussion in light of the ontology and (meta)data standards that exist in the Semantic Web and describe our corresponding implementation, the KAON SERVER.

Journal ArticleDOI
TL;DR: The Mediator Environment for Multiple Information Sources (Momis) supports semiautomatic building, annotation, and extension of domain ontologies.
Abstract: The Mediator Environment for Multiple Information Sources (Momis) supports semiautomatic building, annotation, and extension of domain ontologies.

Proceedings ArticleDOI
08 Sep 2003
TL;DR: This tutorial surveys the basic principles behind ontologies as they are being implemented and used by the semantic Web community today, including ontology languages, tools and construction methods, and focuses on a process for ontology construction centered on the concept of application languages.
Abstract: The "semantic Web" community poses a new nonfunctional requirement for Web applications. In order to secure interoperability and allow autonomous agent interaction, software for the Web will be required to provide machine processable ontologies. We understand that the responsibility, not only for making explicit this requirement, but also to implement the ontology, belongs to requirements engineers. As such, we see the ontology of a Web application as a sub-product of the requirements engineering activity. In this tutorial we survey the basic principles behind ontologies as they are being implemented and used by the semantic Web community today. Those include ontology languages, tools and construction methods. We focus on a process for ontology construction centered on the concept of application languages. This concept is rooted on a representation scheme called the language extended lexicon (LEL). We demonstrate our approach with examples in which we implement machine processable ontologies in the DAML+OIL language. We finalize with a discussion of today's research issues in ontology engineering, including ontology evolution, integration and validation.