
Showing papers on "Ontology-based data integration published in 2002"


01 Jan 2002
TL;DR: An ontology defines a common vocabulary for researchers who need to share information in a domain; it includes machine-interpretable definitions of basic concepts in the domain and relations among them.
Abstract: Why develop an ontology? In recent years the development of ontologies—explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)—has been moving from the realm of Artificial Intelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web. The ontologies on the Web range from large taxonomies categorizing Web sites (such as on Yahoo!) to categorizations of products for sale and their features (such as on Amazon.com). The WWW Consortium (W3C) is developing the Resource Description Framework (Brickley and Guha 1999), a language for encoding knowledge on Web pages to make it understandable to electronic agents searching for information. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the W3C, is developing DARPA Agent Markup Language (DAML) by extending RDF with more expressive constructs aimed at facilitating agent interaction on the Web (Hendler and McGuinness 2000). Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Price and Spackman 2000) and the semantic network of the Unified Medical Language System (Humphreys and Lindberg 1993). Broad general-purpose ontologies are emerging as well. For example, the United Nations Development Program and Dun & Bradstreet combined their efforts to develop the UNSPSC ontology, which provides terminology for products and services (www.unspsc.org). An ontology defines a common vocabulary for researchers who need to share information in a domain. It includes machine-interpretable definitions of basic concepts in the domain and relations among them. Why would someone want to develop an ontology? Some of the reasons are:

4,838 citations
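
To make the paper's central definition concrete: a minimal sketch, assuming the Python rdflib library and invented names (the EX namespace, Wine, Winery, hasMaker), of an ontology as machine-interpretable definitions of concepts and the relations among them, encoded in the RDF/RDFS vocabulary the abstract mentions.

```python
# A minimal sketch (not from the paper) of an ontology as machine-interpretable
# concept and relation definitions, encoded in RDF/RDFS with rdflib.
# All names (EX, Wine, Winery, hasMaker) are illustrative, loosely echoing the
# wine examples common in ontology tutorials.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/wine#")
g = Graph()
g.bind("ex", EX)

# Concepts (classes) of the domain
g.add((EX.Wine, RDF.type, RDFS.Class))
g.add((EX.Winery, RDF.type, RDFS.Class))

# A relation between concepts, with machine-interpretable domain and range
g.add((EX.hasMaker, RDF.type, RDF.Property))
g.add((EX.hasMaker, RDFS.domain, EX.Wine))
g.add((EX.hasMaker, RDFS.range, EX.Winery))

print(g.serialize(format="turtle"))
```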


Proceedings ArticleDOI
03 Jun 2002
TL;DR: The tutorial is focused on some of the theoretical issues that are relevant for data integration: modeling a data integration application, processing queries in data integration, dealing with inconsistent data sources, and reasoning on queries.
Abstract: Data integration is the problem of combining data residing at different sources, and providing the user with a unified view of these data. The problem of designing data integration systems is important in current real world applications, and is characterized by a number of issues that are interesting from a theoretical point of view. This document presents an overview of the material to be presented in a tutorial on data integration. The tutorial is focused on some of the theoretical issues that are relevant for data integration. Special attention will be devoted to the following aspects: modeling a data integration application, processing queries in data integration, dealing with inconsistent data sources, and reasoning on queries.

2,716 citations
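
A toy sketch of the modeling idea the tutorial covers, under the common global-as-view (GAV) approach: each relation of the unified view is defined as a query over the sources, and user queries are answered by unfolding. The source schemas and data below are invented for illustration.

```python
# A toy sketch (mine, not Lenzerini's formalism) of global-as-view (GAV)
# data integration: each relation of the unified ("global") schema is defined
# as a view over the sources, and a user query is answered by unfolding it
# into source accesses.

# Two autonomous sources with different local schemas
source_a = [("Dune", 1965, "Herbert")]          # (title, year, author)
source_b = [("Neuromancer", "Gibson", 1984)]    # (title, author, year)

def global_book():
    """GAV mapping: the global relation Book(title, author, year)
    is defined as the union of per-source views."""
    for title, year, author in source_a:
        yield (title, author, year)
    for title, author, year in source_b:
        yield (title, author, year)

# A query over the unified view, e.g. "books published after 1970",
# is processed by unfolding the global relation into its source views.
print([t for t in global_book() if t[2] > 1970])
```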


Book
28 Feb 2002
TL;DR: The authors present an ontology learning framework that extends typical ontology engineering environments by using semiautomatic ontology construction tools and encompasses ontology import, extraction, pruning, refinement and evaluation.
Abstract: The Semantic Web relies heavily on formal ontologies to structure data for comprehensive and transportable machine understanding. Thus, the proliferation of ontologies factors largely in the Semantic Web's success. The authors present an ontology learning framework that extends typical ontology engineering environments by using semiautomatic ontology construction tools. The framework encompasses ontology import, extraction, pruning, refinement and evaluation.

2,061 citations


Book ChapterDOI
01 Oct 2002
TL;DR: A set of ontology similarity measures and a multiple-phase empirical evaluation are presented for measuring the similarity between ontologies, in support of the task of detecting and retrieving relevant ontologies.
Abstract: Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.

847 citations
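
As a hedged illustration of the general idea (not the specific measures defined in the paper), similarity between two ontologies can be computed at the lexical level, e.g. as Jaccard overlap of their concept labels:

```python
# A minimal sketch of one plausible ontology similarity measure: Jaccard
# overlap between the sets of concept labels. This illustrates the general
# idea only; the paper's measures also cover other levels of comparison.
def label_jaccard(onto_a, onto_b):
    a, b = {c.lower() for c in onto_a}, {c.lower() for c in onto_b}
    return len(a & b) / len(a | b) if a | b else 1.0

# Invented concept sets for two small tourism-flavoured ontologies
travel = {"Hotel", "Room", "Booking", "City"}
tourism = {"Hotel", "Apartment", "Booking", "Region", "City"}
print(f"lexical similarity: {label_jaccard(travel, tourism):.2f}")  # 0.50
```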


Book ChapterDOI
09 Jun 2002
TL;DR: This paper focuses on collaborative development of ontologies with OntoEdit which is guided by a comprehensive methodology.
Abstract: Ontologies now play an important role for enabling the semantic web. They provide a source of precisely defined terms e.g. for knowledge-intensive applications. The terms are used for concise communication across people and applications. Typically the development of ontologies involves collaborative efforts of multiple persons. OntoEdit is an ontology editor that integrates numerous aspects of ontology engineering. This paper focuses on collaborative development of ontologies with OntoEdit which is guided by a comprehensive methodology.

422 citations


Book ChapterDOI
01 Oct 2002
TL;DR: The paper identifies a possible six-phase evolution process, introduces the concept of an evolution strategy encapsulating policy for evolution with respect to the user's requirements, and focuses on providing the user with capabilities to control and customize the process.
Abstract: With rising importance of knowledge interchange, many industrial and academic applications have adopted ontologies as their conceptual backbone. However, industrial and academic environments are very dynamic, thus inducing changes to application requirements. To fulfill these changes, often the underlying ontology must be evolved as well. As ontologies grow in size, the complexity of change management increases, thus requiring a well-structured ontology evolution process. In this paper we identify a possible six-phase evolution process and focus on providing the user with capabilities to control and customize it. We introduce the concept of an evolution strategy encapsulating policy for evolution with respect to the user's requirements.

397 citations
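
A small sketch of what an evolution strategy might look like in code; the policies and names below are invented, but they illustrate how the same change request (deleting a concept) is resolved differently depending on a user-chosen policy:

```python
# A sketch (names invented) of the "evolution strategy" idea: the same change
# request, deleting a concept, propagates differently depending on a
# user-chosen policy for handling orphaned subconcepts.
from enum import Enum

class OrphanPolicy(Enum):
    REPARENT = "attach children to the deleted concept's parent"
    CASCADE = "delete the whole subtree"

def delete_concept(parent_of, concept, policy):
    children = [c for c, p in parent_of.items() if p == concept]
    if policy is OrphanPolicy.REPARENT:
        for c in children:
            parent_of[c] = parent_of[concept]
    else:  # CASCADE: recursively remove the subtree
        for c in children:
            delete_concept(parent_of, c, policy)
    del parent_of[concept]

hierarchy = {"Vehicle": None, "Car": "Vehicle", "SportsCar": "Car"}
delete_concept(hierarchy, "Car", OrphanPolicy.REPARENT)
print(hierarchy)  # {'Vehicle': None, 'SportsCar': 'Vehicle'}
```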


Journal ArticleDOI
01 Dec 2002
TL;DR: The DOGMA ontology engineering approach is introduced that separates "atomic" conceptual relations from "predicative" domain rules and a layer of "relatively generic" ontological commitments that hold the domain rules.
Abstract: Ontologies in current computer science parlance are computer-based resources that represent agreed domain semantics. Unlike data models, the fundamental asset of ontologies is their relative independence of particular applications, i.e. an ontology consists of relatively generic knowledge that can be reused by different kinds of applications/tasks. The first part of this paper concerns some aspects that help to understand the differences and similarities between ontologies and data models. In the second part we present an ontology engineering framework that supports and favours the genericity of an ontology. We introduce the DOGMA ontology engineering approach that separates "atomic" conceptual relations from "predicative" domain rules. A DOGMA ontology consists of an ontology base that holds sets of intuitive context-specific conceptual relations and a layer of "relatively generic" ontological commitments that hold the domain rules. This constitutes what we shall call the double articulation of a DOGMA ontology.

395 citations


01 Jan 2002
TL;DR: The paper discusses the development and application of a large formal ontology to the semantic web; this upper ontology is extremely broad in scope and can serve as a semantic foundation for search, interoperation, and communication on the semantic web.
Abstract: In this paper we discuss the development and application of a large formal ontology to the semantic web. The Suggested Upper Merged Ontology (SUMO) (Niles & Pease, 2001) (SUMO, 2002) is a “starter document” in the IEEE Standard Upper Ontology effort. This upper ontology is extremely broad in scope and can serve as a semantic foundation for search, interoperation, and communication on the semantic web.

356 citations


Proceedings ArticleDOI
01 Dec 2002
TL;DR: A detailed investigation of the properties of these information-content-based measures is presented, and various properties of GO are examined, which may have implications for its future design.
Abstract: Many bioinformatics resources hold data in the form of sequences. Often this sequence data is associated with a large amount of annotation. In many cases this data has been hard to model, and has been represented as scientific natural language, which is not readily computationally amenable. The development of the Gene Ontology provides us with a more accessible representation of some of this data. However it is not clear how this data can best be searched, or queried. Recently we have adapted information-content-based measures for use with the Gene Ontology (GO). In this paper we present a detailed investigation of the properties of these measures, and examine various properties of GO, which may have implications for its future design.

248 citations
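
The information-content measures the paper investigates follow, in the style of Resnik's measure, the pattern sketched below: a term's information content is IC(c) = -log p(c), and the similarity of two terms is the IC of their most informative common ancestor. The tiny GO-like hierarchy and the probabilities are invented for illustration.

```python
# A compact sketch of the information-content idea (Resnik-style), with an
# invented GO-like fragment. p(c) is the probability that an annotation falls
# at or below term c; similarity of two terms is the information content of
# their most informative common ancestor.
import math

parents = {"binding": {"molecular_function"},
           "dna_binding": {"binding"},
           "rna_binding": {"binding"},
           "molecular_function": set()}
# Made-up annotation probabilities per term
p = {"molecular_function": 1.0, "binding": 0.4,
     "dna_binding": 0.05, "rna_binding": 0.08}

def ancestors(c):
    out = {c}
    for a in parents[c]:
        out |= ancestors(a)
    return out

def resnik(c1, c2):
    common = ancestors(c1) & ancestors(c2)
    return max(-math.log(p[c]) for c in common)

print(f"{resnik('dna_binding', 'rna_binding'):.2f}")  # IC of 'binding', ~0.92
```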


Journal ArticleDOI
TL;DR: The survey identifies that shallow information extraction and natural language processing techniques are deployed to extract concepts or classes from free-text or semi-structured data, but relation extraction is a very complex and difficult issue to resolve, and it has turned out to be the main impediment to ontology learning and applicability.
Abstract: Ontology is an important emerging discipline that has huge potential to improve information organization, management and understanding. It has a crucial role to play in enabling content-based access, interoperability, communications, and providing qualitatively new levels of services on the next wave of web transformation in the form of the Semantic Web. The issues pertaining to ontology generation, mapping and maintenance are critical key areas that need to be understood and addressed. This survey is presented in two parts. The first part reviews the state-of-the-art techniques and work done on semi-automatic and automatic ontology generation, as well as the problems facing such research. The second complementary survey is dedicated to ontology mapping and ontology ‘evolving’. Through this survey, we have identified that shallow information extraction and natural language processing techniques are deployed to extract concepts or classes from free-text or semi-structured data. However, relation extrac...

247 citations


Patent
26 Mar 2002
TL;DR: This paper describes a distributed ontology system including a central computer comprising a global ontology directory, and a plurality of ontology server computers, each including a repository of class and relation definitions and a server for responding to queries relating to class and relation definitions in the repository.
Abstract: A distributed ontology system including a central computer comprising a global ontology directory, a plurality of ontology server computers, each including a repository of class and relation definitions, and a server for responding to queries relating to class and relation definitions in the repository, and a computer network connecting the central computer with the plurality of ontology server computers. A method is also described and claimed.
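
A rough sketch, with invented names, of the claimed architecture: a central global ontology directory routes definition queries to the ontology server holding the relevant repository.

```python
# A rough sketch (all names invented) of the architecture the patent claims:
# a central global ontology directory that routes definition queries to the
# ontology server holding the relevant repository of class/relation definitions.
class OntologyServer:
    def __init__(self, repository):
        self.repository = repository  # name -> class/relation definition

    def query(self, name):
        return self.repository.get(name)

class GlobalDirectory:
    def __init__(self):
        self.servers = {}  # ontology name -> server

    def register(self, ontology, server):
        self.servers[ontology] = server

    def lookup(self, ontology, name):
        return self.servers[ontology].query(name)

directory = GlobalDirectory()
directory.register("medicine",
                   OntologyServer({"Disease": "class", "treats": "relation"}))
print(directory.lookup("medicine", "treats"))  # relation
```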

Journal ArticleDOI
01 Sep 2002
TL;DR: This research proposes a methodology for creating and managing domain ontologies; an architecture for an ontology management system is presented and implemented in a prototype.
Abstract: Although ontologies have been proposed as an important and natural means of representing real world knowledge for the development of database designs, most ontology creation is not carried out systematically. To be truly useful, a repository of ontologies, organized by application domain is needed, along with procedures for creating and integrating ontologies into database design methodologies. This research proposes a methodology for creating and managing domain ontologies. An architecture for an ontology management system is presented and implemented in a prototype. Empirical validation of the prototype demonstrates the effectiveness of the research.

Journal ArticleDOI
TL;DR: It is apparent from the reviews that current research into semi-automatic or automatic ontology generation, mapping and evolving has so far achieved limited success.
Abstract: This is the second of a two-part paper to review ontology research and development, in particular, ontology mapping and evolving. Ontology is defined as a formal explicit specification of a shared conceptualization. Ontology itself is not a static model, so it must have the potential to capture changes of meanings and relations. As such, mapping and evolving ontologies is part of an essential task of ontology learning and development. Ontology mapping is concerned with reusing existing ontologies, expanding and combining them by some means and enabling a larger pool of information and knowledge in different domains to be integrated to support new communication and use. Ontology evolving, likewise, is concerned with maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired. It is apparent from the reviews that current research into semi-automatic or automatic ontology generation, mapping and evolving has so far ...

Proceedings ArticleDOI
04 Nov 2002
TL;DR: A new mechanism that can generate an ontology automatically is proposed in order to make the approach scalable, together with an automatic concept selection algorithm from WordNet; the modified SOTA is observed to outperform hierarchical agglomerative clustering (HAC).
Abstract: Technology in the field of digital media generates huge amounts of non-textual information, audio, video, and images, along with more familiar textual information. The potential for exchange and retrieval of information is vast and daunting. The key problem in achieving efficient and user-friendly retrieval is the development of a search mechanism to guarantee delivery of minimal irrelevant information (high precision) while ensuring relevant information is not overlooked (high recall). The traditional solution employs keyword-based search. The only documents retrieved are those containing user specified keywords. But many documents convey desired semantic information without containing these keywords. One can overcome this problem by indexing documents according to meanings rather than words, although this will entail a way of converting words to meanings and the creation of an ontology. We have solved the problem of an index structure through the design and implementation of a concept-based model using domain-dependent ontology. Ontology is a collection of concepts and their interrelationships, which provide an abstract view of an application domain. We propose a new mechanism that can generate ontology automatically in order to make our approach scalable. For this we modify the existing self-organizing tree algorithm (SOTA) that constructs a hierarchy from top to bottom. Furthermore, in order to find an appropriate concept for each node in the hierarchy we propose an automatic concept selection algorithm from WordNet, called linguistic ontology. To illustrate the effectiveness of our automatic ontology construction method, we have explored our ontology construction in text documents. The Reuters-21578 text document corpus has been used. We have observed that our modified SOTA outperforms hierarchical agglomerative clustering (HAC).
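
For reference, the baseline the authors compare against, hierarchical agglomerative clustering (HAC), takes only a few lines with SciPy; the document vectors here are random stand-ins for term vectors extracted from a corpus such as Reuters-21578.

```python
# The paper's baseline, hierarchical agglomerative clustering (HAC), sketched
# with SciPy. The vectors are random placeholders, not real corpus features.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
docs = rng.random((8, 20))            # 8 "documents", 20 term features

tree = linkage(docs, method="average", metric="cosine")
print(fcluster(tree, t=3, criterion="maxclust"))  # assign docs to 3 clusters
```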

Journal ArticleDOI
TL;DR: The authors present a software environment, centered around the OntoLearn tool, which can build and assess a domain ontology for intelligent information integration within a virtual user community, and which can easily be adapted to work with other general-purpose ontologies.
Abstract: Developing the Semantic Web, seeking to improve the semantic awareness of computers connected via the Internet, requires a systematic, computer-oriented world representation. Researchers often refer to such a model as an ontology. Despite the work done on them in recent years, ontologies have yet to be widely applied and used. Research has devoted only limited attention to such practical issues as techniques and tools aimed at an ontology's actual construction and content. The authors have built a software environment, centered around the OntoLearn tool, which can build and assess a domain ontology for intelligent information integration within a virtual user community. OntoLearn has already been tested in two European projects, where it functioned as the basis for a semantic interoperability platform used by small- and medium-sized tourism enterprises. Further, developers can easily adapt OntoLearn to work with other general-purpose ontologies.

01 Jan 2002
TL;DR: Criteria are presented for evaluating ontology-development tools and tools for mapping, aligning, or merging ontologies, and the resources we as a community need to develop in order to make performance comparisons within each group of merging and mapping tools useful and effective are discussed.
Abstract: The appearance of a large number of ontology tools may leave a user looking for an appropriate tool overwhelmed and uncertain on which tool to choose. Thus evaluation and comparison of these tools is important to help users determine which tool is best suited for their tasks. However, there is no “one size fits all” comparison framework for ontology tools: different classes of tools require very different comparison frameworks. For example, ontology-development tools can easily be compared to one another since they all serve the same task: define concepts, instances, and relations in a domain. Tools for ontology merging, mapping, and alignment however are so different from one another that direct comparison may not be possible. They differ in the type of input they require (e.g., instance data or no instance data), the type of output they produce (e.g., one merged ontology, pairs of related terms, articulation rules), modes of interaction and so on. This diversity makes comparing the performance of mapping tools to one another largely meaningless. We present criteria that partition the set of such tools in smaller groups allowing users to choose the set of tools that best fits their tasks. We discuss what resources we as a community need to develop in order to make performance comparisons within each group of merging and mapping tools useful and effective. These resources will most likely come as results of evaluation experiments of stand-alone tools. As an example of such an experiment, we discuss our experiences and results in evaluating PROMPT, an interactive ontology-merging tool. Our experiment produced some of the resources that we can use in more general evaluation. However, it has also shown that comparing the performance of different tools can be difficult since human experts do not agree on how ontologies should be merged, and we do not yet have a good enough metric for comparing ontologies.

1 Ontology-Mapping Tools Versus Ontology-Development Tools

Consider two types of ontology tools: (1) tools for developing ontologies and (2) tools for mapping, aligning, or merging ontologies. By ontology-development tools (which we will call development tools in the paper) we mean ontology editors that allow users to define new concepts, relations, and instances. These tools usually have capabilities for importing and extending existing ontologies. Development tools may include graphical browsers, search capabilities, and constraint checking. Protégé-2000 [17], OntoEdit [19], OilEd [2], WebODE [1], and Ontolingua [7] are some examples of development tools. Tools for mapping, aligning, and merging ontologies (which we will call mapping tools) are the tools that help users find similarities and differences between source ontologies. Mapping tools either identify potential correspondences automatically, or provide the environment for the users to find and define these correspondences, or both. Mapping tools are often extensions of development tools. Mapping tool and algorithm examples include PROMPT [16], ONION [13], Chimaera [11], FCA-Merge [18], GLUE [5], and OBSERVER [12]. Even though theories on how to evaluate either type of tools are not well articulated at this point, there are already several frameworks for evaluating ontology-development tools. For example, Duineveld and colleagues [6] in their comparison experiment used different development tools to represent the same domain ontology. Members of the Ontology-environments SIG in the OntoWeb initiative (http://delicias.dia.fi.upm.es/ontoweb/sig-tools/) designed an extensive set of criteria for evaluating ontology-development tools and applied these criteria to compare a number of projects. Some of the aspects that these frameworks compare include:
– interoperability with other tools and the ability to import and export ontologies in different representation languages;
– expressiveness of the knowledge model;
– scalability and extensibility;
– availability and capabilities of inference services;
– usability of the tools.
Let us turn to the second class of ontology tools: tools for mapping, aligning, or merging ontologies. It is tempting to reuse many of the criteria from the evaluation of development tools. For example, expressiveness of the underlying language is important and so is scalability and extensibility. We need to know if a mapping tool can work with ontologies from different languages. However, if we look at the mapping tools more closely, we see that their comparison and evaluation must be very different from the comparison and evaluation of development tools. All the ontology-development tools have very similar inputs and desired outputs: we have a domain, possibly a set of ontologies to reuse, and a set of requirements for the ontology, and we need to use a tool to produce an ontology of that domain satisfying the requirements. Unlike the ontology-development tools, the ontology-mapping tools vary with respect to the precise task that they perform, the inputs on which they operate and the outputs that they produce. First, the tasks for which the mapping tools are designed differ greatly. On the one hand, all the tools are designed to find similarities and differences between source ontologies in one way or another. In fact, researchers have suggested a uniform framework for describing and analyzing this information regardless of what the final task is [3, 10]. On the other hand, from the user's point of view the tools differ greatly in what tasks this analysis of similarities and differences supports. For example, Chimaera and PROMPT allow users to merge source ontologies into a new ontology that includes concepts from both sources. The output of ONION is a set of articulation rules between two ontologies; these rules define what the similarities and differences are. The articulation rules can later be used for querying and other tasks. The task of GLUE, AnchorPROMPT [14] and FCA-Merge is to provide a set of pairs of related concepts with some certainty factor associated with each pair. Second, different mapping tools rely on different inputs. Some tools deal only with class hierarchies of the sources and are agnostic in their merging algorithms about slots or instances (e.g., Chimaera). Other tools use not only classes but also slots and value restrictions in their analysis (e.g., PROMPT). Other tools rely in their algorithms on the existence of instances in each of the source ontologies (e.g., GLUE). Yet another set of tools require not only that instances are present, but also that source ontologies share a set of instances (e.g., FCA-Merge). Some tools work independently and produce suggestions to the user at the end, allowing the user to analyze the suggestions (e.g., GLUE, FCA-Merge). Some tools expect that the source ontologies follow a specific knowledge-representation paradigm (e.g., Description Logic for OBSERVER). Other tools rely heavily on interaction with the user and base their analysis not only on the source ontologies themselves but also on the merging or alignment steps that the user performs (e.g., PROMPT, Chimaera). Third, since the tasks that the mapping tools support differ greatly, the interaction between a user and a tool is very different from one tool to another. Some tools provide a graphical interface which allows users to compare the source ontologies visually, and accept or reject the results of the tool analysis (e.g., PROMPT, Chimaera, ONION); the goal of other tools is to run the algorithms which find correlations between the source ontologies and output the results to the user in a text file or on the terminal, and the users must then use the results outside the tool itself. The goal of this paper is to start a discussion on a framework for evaluating ontology-mapping tools that would account for this great variety in underlying assumptions and requirements. We argue that many of the tools cannot be compared directly with one another because they are so different in the tasks that they support. We identify the criteria for determining the groups of tools that can be compared directly, define what resources we need to develop to make such comparison possible and discuss our experiences in evaluating our merging tool, PROMPT, as well as the results of this evaluation.

2 Requirements for Evaluating Mapping Tools

Before we discuss the evaluation requirements for mapping tools, we must answer the following question which will certainly affect the requirements: what is the goal of such potential evaluation? It is tempting to say “find the best tool.” However, as we have just discussed, given the diversity in the tasks that the tools support, their modes of interaction, and the input data they rely on, it is impossible to compare the tools to one another and to find one or even several measures to identify the “best” tool. Therefore, we suggest that the questions driving such evaluation must be user-oriented. A user may ask either what is the best tool for his task or whether a particular tool is good enough for his task. Depending on what the user's source ontologies are, how much manual work he is willing to put in, and how important the precision of the results is, one or another tool will be more appropriate. Therefore, the first set of evaluation criteria are pragmatic criteria. These criteria include but are not limited to the following:
Input requirements: What elements from the source ontologies does the tool use? Which of these elements does the tool require? This information may include: concept names, class hierarchy, slot definitions, facet values, slot values, instances. Does the tool require that source ontologies use a particular knowledge-representation paradigm?
Level of user interaction: Does the tool perform the comparison in a “batch mode,” presenting the results at the end, or is it an interactive tool where intermediate results are analyzed by the user, and the tool uses the feedback for further analysis?
Type o


Book ChapterDOI
30 Oct 2002
TL;DR: OntoEdit is an ontology editor that has been developed keeping five main objectives in mind: ease of use, methodology-guided development of ontologies, ontology development with help of inferencing, development of ontology axioms, and extensibility through a plug-in structure.
Abstract: Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. The terms are used for concise communication across people and applications. OntoEdit is an ontology editor that has been developed keeping five main objectives in mind: 1. Ease of use. 2. Methodology-guided development of ontologies. 3. Ontology development with help of inferencing. 4. Development of ontology axioms. 5. Extensibility through plug-in structure. This paper is about the first four of these items.

Book ChapterDOI
30 Oct 2002
TL;DR: This paper presents a specifically database-inspired approach (called DOGMA) for engineering formal ontologies, implemented as shared resources used to express agreed formal semantics for a real world domain, and claims it leads to methodological approaches that naturally extend key aspects of database modeling theory and practice.
Abstract: This paper presents a specifically database-inspired approach (called DOGMA) for engineering formal ontologies, implemented as shared resources used to express agreed formal semantics for a real world domain. We address several related key issues, such as knowledge reusability and shareability, scalability of the ontology engineering process and methodology, efficient and effective ontology storage and management, and coexistence of heterogeneous rule systems that surround an ontology mediating between it and application agents. Ontologies should represent a domain's semantics independently from "language", while any process that creates elements of such an ontology must be entirely rooted in some (natural) language, and any use of it will necessarily be through a (in general an agent's computer) language. To achieve the claims stated, we explicitly decompose ontological resources into ontology bases in the form of simple binary facts called lexons and into so-called ontological commitments in the form of description rules and constraints. Ontology bases in a logic sense become "representationless" mathematical objects which constitute the range of a classical interpretation mapping from a first order language, assumed to lexically represent the commitment or binding of an application or task to such an ontology base. Implementations of ontologies become database-like on-line resources in the model-theoretic sense. The resulting architecture allows us to materialize the (crucial) notion of commitment as a separate layer of (software agent) services, mediating between the ontology base and those application instances that commit to the ontology. We claim it also leads to methodological approaches that naturally extend key aspects of database modeling theory and practice. We discuss examples of the prototype DOGMA implementation of the ontology base server and commitment server.
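
A small sketch of the double articulation in code. The lexon shape follows the usual DOGMA presentation (context, term, role, co-role, term); the bibliographic facts and the uniqueness rule in the commitment layer are invented for illustration.

```python
# A sketch of DOGMA's double articulation: an ontology base of plain lexons
# (context-specific binary facts) kept separate from an ontological commitment
# that adds application rules over them. Example facts and rule are invented.
from collections import namedtuple

Lexon = namedtuple("Lexon", "context term1 role corole term2")

ontology_base = [
    Lexon("Bibliography", "Book", "has", "is_of", "ISBN"),
    Lexon("Bibliography", "Book", "written_by", "writes", "Author"),
]

def commitment_isbn_unique(instances):
    """Commitment layer: a domain rule, here 'each Book has exactly one ISBN',
    interpreted against instance data rather than stored in the base."""
    isbns = {}
    for book, isbn in instances:
        isbns.setdefault(book, set()).add(isbn)
    return all(len(s) == 1 for s in isbns.values())

print(commitment_isbn_unique([("Dune", "0-441-17271-7")]))  # True
```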

Book ChapterDOI
01 Oct 2002
TL;DR: This work describes a procedure to automatically extend an ontology such as WordNet with domain-specific knowledge; the procedure is completely unsupervised, so it can be applied to different languages and domains.
Abstract: Ontologies are a tool for Knowledge Representation that is now widely used, but the effort employed to build an ontology is high. We describe here a procedure to automatically extend an ontology such as WordNet with domain-specific knowledge. The main advantage of our approach is that it is completely unsupervised, so it can be applied to different languages and domains. Our experiments, in which several domain-specific concepts from a book have been introduced, with no human supervision, into WordNet, have been successful.
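
A much-simplified stand-in for the core step, not the paper's actual algorithm: choose where in WordNet to attach an unknown domain term by comparing the term's corpus context against candidate synset glosses. Assumes NLTK with its WordNet data installed; the context set is invented.

```python
# A much-simplified stand-in (not the paper's algorithm) for unsupervised
# ontology extension: pick the WordNet node under which to attach an unknown
# domain term by gloss overlap with the words that co-occur with the term.
from nltk.corpus import wordnet as wn

def attach(term_context, candidate_lemma):
    """Score each sense of candidate_lemma by gloss overlap with the
    corpus context of the unknown term; return the best attachment node."""
    return max(wn.synsets(candidate_lemma),
               key=lambda s: len(term_context & set(s.definition().split())))

# Invented context for a domain term, e.g. "chianti" from a wine book
context = {"wine", "red", "grape", "dry"}
print(attach(context, "wine"))  # e.g. Synset('wine.n.01')
```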

Dissertation
07 Nov 2002
TL;DR: This work concerns multi-agent systems for the management of a corporate semantic web based on the ontology O'CoMMA, focusing on two application scenarios: supporting technology monitoring activities and assisting the integration of a new employee into the organisation.
Abstract: This work concerns multi-agent systems for the management of a corporate semantic web based on an ontology. It was carried out in the context of the European project CoMMA, focusing on two application scenarios: supporting technology monitoring activities and assisting the integration of a new employee into the organisation. Three aspects were essentially developed in this work: the design of a multi-agent architecture supporting both scenarios, and the organisational top-down approach followed to identify the societies, the roles and the interactions of agents; the construction of the ontology O'CoMMA and the structuring of a corporate memory exploiting semantic Web technologies; and the design and implementation of the sub-societies of agents dedicated to the management of the annotations and the ontology, and of the protocols underlying these groups of agents, in particular techniques for distributing annotations and queries among the agents.

Journal Article
TL;DR: This paper focuses on capturing and reasoning about semantic aspects of schematic descriptions of heterogeneous information sources for supporting integration and query optimization.
Abstract: How to effectively integrate multiple heterogeneous data sources is a challenging issue for cooperation and interoperability in a CIMS enterprise. This paper focuses on capturing and reasoning about semantic aspects of schematic descriptions of heterogeneous information sources for supporting integration and query optimization.

Book ChapterDOI
30 Oct 2002
TL;DR: This paper draws on the proven theoretical ground of Information Flow and channel theory, and provides a systematic and mechanised methodology for deploying it in a distributed environment to perform ontology mapping among a variety of different ontologies.
Abstract: As ontologies become ever more important for semantically rich information exchange and a crucial element for supporting knowledge sharing in a large distributed environment, like the Web, the demand for sharing them increases accordingly. One way of achieving this ambitious goal is to provide mechanised ways for mapping and merging ontologies. This has been the focus of recent research in knowledge engineering. However, we observe a dearth of mapping methods that are based on a strong theoretical ground, are easy to replicate in different settings, and use semantically rich mechanisms for performing ontology mapping. In this paper, we aim to fill in these gaps with a method we propose for Information-Flow-based ontology mapping. Our method draws on the proven theoretical ground of Information Flow and channel theory, and we provide a systematic and mechanised methodology for deploying it in a distributed environment to perform ontology mapping among a variety of different ontologies. We applied our method in a large-scale experiment of mapping five ontologies modelling Computer Science departments in five UK Universities. We elaborate on a theory for ontology mapping, analyse the mechanised steps of applying it, and assess its ontology mapping results.
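
A toy rendering of the information-flow intuition, not the paper's formal machinery: tokens (shared instances) are classified under the types of each ontology, and two types are aligned when they classify the tokens identically. The department data below is invented.

```python
# A toy rendering of the information-flow intuition: tokens (shared instances)
# are classified under the types of each ontology, and two types are mapped
# when they classify the tokens identically. Real IF-Map builds logic
# infomorphisms; this just compares instance extents. Data is invented.
uk_dept_a = {"lecturer": {"ann", "bob"}, "module": {"logic101"}}
uk_dept_b = {"academic_staff": {"ann", "bob"}, "course": {"logic101"}}

def if_style_map(types_a, types_b):
    return [(ta, tb)
            for ta, ea in types_a.items()
            for tb, eb in types_b.items()
            if ea == eb]          # identical classification of tokens

print(if_style_map(uk_dept_a, uk_dept_b))
# [('lecturer', 'academic_staff'), ('module', 'course')]
```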

01 Jan 2002
TL;DR: The many similarities between database-schema evolution and ontology evolution allow us to build on the extensive research in schema evolution, but there are also important differences between database schemas and ontologies, stemming from different usage paradigms, the presence of explicit semantics and different knowledge models.
Abstract: As ontology development becomes a more ubiquitous and collaborative process, ontology versioning and evolution becomes an important area of ontology research. The many similarities between database-schema evolution and ontology evolution will allow us to build on the extensive research in schema evolution. However, there are also important differences between database schemas and ontologies. The differences stem from different usage paradigms, the presence of explicit semantics and different knowledge models. A lot of problems that existed only in theory in database research come to the forefront as practical problems in ontology evolution. These differences have important implications for the development of ontology-evolution frameworks: The traditional distinction between versioning and evolution is not applicable to ontologies. There are several dimensions along which compatibility between versions must be considered. The set of change operations for ontologies is different. We must develop automatic techniques for finding similarities and differences between versions.
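
In the spirit of the automatic version-comparison techniques the paper calls for, a minimal structural diff between two ontology versions (here simplified to name-to-superclass maps; the wine classes are invented):

```python
# A minimal structural diff between two ontology versions, illustrating the
# "automatic techniques for finding similarities and differences between
# versions" the paper calls for. Classes are simplified to name -> superclass.
def diff_versions(v_old, v_new):
    added = v_new.keys() - v_old.keys()
    removed = v_old.keys() - v_new.keys()
    moved = {c for c in v_old.keys() & v_new.keys() if v_old[c] != v_new[c]}
    return added, removed, moved

v1 = {"Wine": "Drink", "Chianti": "Wine", "Port": "Wine"}
v2 = {"Wine": "Beverage", "Chianti": "RedWine", "RedWine": "Wine"}
print(diff_versions(v1, v2))
# ({'RedWine'}, {'Port'}, {'Wine', 'Chianti'}), up to set ordering
```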

Proceedings ArticleDOI
02 Sep 2002
TL;DR: A study of transformation of a particular ontology from the manufacturing domain into the form suitable for communication with Semantic Web agents is presented, focusing on the problem of scalable interoperability in open heterogeneous multi-agent systems, such as supply chains.
Abstract: Ontologies play an important role in knowledge sharing and exploration, particularly in communication in multi-agent systems. We briefly survey, compare and analyze current usage of ontologies in the area of manufacturing and compare it with ontology modelling for the Semantic Web. We focus on the problem of scalable interoperability in open heterogeneous multi-agent systems, such as supply chains. A study of transformation of a particular ontology from the manufacturing domain into the form suitable for communication with Semantic Web agents is presented. We conclude with a discussion of what we see as the next important steps in the development of ontologies in the manufacturing domain in order to have more automated approaches for ontological knowledge integration.

Book ChapterDOI
01 Jan 2002
TL;DR: This chapter illustrates how a logic of the Description Logics family is used to model a mediated schema of an integration system, to specify the semantics of the data sources, and finally to support the query answering process by means of the associated reasoning methods.
Abstract: Information integration is the problem of combining the data residing at different, heterogeneous sources, and providing the user with a unified view of these data, called the mediated schema. The mediated schema is therefore a reconciled view of the information, which can be queried by the user. It is the task of the system to free the user from the knowledge on where data are, and how data are structured at the sources. In this chapter, we discuss data integration in general, and describe a logic-based approach to data integration. A logic of the Description Logics family is used to model the information managed by the integration system, to formulate queries posed to the system, and to perform several types of automated reasoning supporting both the modeling and the query answering process. We focus, in particular, on a specific Description Logic, called DLR, specifically designed for database applications. In the chapter, we illustrate how DLR is used to model a mediated schema of an integration system, to specify the semantics of the data sources, and finally to support the query answering process by means of the associated reasoning methods.
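
As a flavour of the approach (illustrative axioms of my own, not taken from the chapter): a mediated schema can constrain global concepts in Description Logic, and a source can be characterized as a sound view over them.

```latex
% Illustrative axioms only. The mediated schema constrains the global
% concept Movie; the source relation S is modeled as a sound view, i.e.
% every object in S satisfies the global definition.
\[
\mathsf{Movie} \sqsubseteq \exists\,\mathsf{directedBy}.\mathsf{Director}
\qquad
\mathsf{S} \sqsubseteq \mathsf{Movie} \sqcap \exists\,\mathsf{year}.\mathsf{Integer}
\]
```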

01 Jan 2002
TL;DR: The requirements that ontology editors must meet in order to support ontology evolution are discussed; changes are the force that drives the evolution process.
Abstract: An ontology needs to be modified over a period of time to reflect changes in the real world, changes in the user's requirements, or drawbacks in the initial design, or to incorporate additional functionality or allow for incremental improvement. Although changes are inevitable during the development and deployment of an ontology, most of the current ontology editors unfortunately do not provide enough support for efficient coping with changes. Since changes are the force that drives the evolution process, in this paper we discuss the requirements for ontology editors in order to support ontology evolution.

Journal ArticleDOI
TL;DR: A modification to the UML metamodel is proposed to address some of the most problematic differences between UML and the ontology languages RDF and DAML+OIL; the proposal is backward-compatible with existing UML models while enhancing UML's viability for ontology modeling.
Abstract: There is rapidly growing momentum for web-enabled agents that reason about and dynamically integrate the appropriate knowledge and services at run-time. The dynamic integration of knowledge and services depends on the existence of explicit declarative semantic models (ontologies). We have been building tools for ontology development based on the Unified Modeling Language (UML). This allows the many mature UML tools, models and expertise to be applied to knowledge representation systems, not only for visualizing complex ontologies but also for managing the ontology development process. UML has many features, such as profiles, global modularity and extension mechanisms that are not generally available in most ontology languages. However, ontology languages have some features that UML does not support. Our paper identifies the similarities and differences (with examples) between UML and the ontology languages RDF and DAML+OIL. To reconcile these differences, we propose a modification to the UML metamodel to address some of the most problematic differences. One of these is the ontological concept variously called a property, relation or predicate. This notion corresponds to the UML concepts of association and attribute. In ontology languages properties are first-class modeling elements, but UML associations and attributes are not first-class. Our proposal is backward-compatible with existing UML models while enhancing its viability for ontology modeling. While we have focused on RDF and DAML+OIL in our research and development activities, the same issues apply to many of the knowledge representation languages. This is especially the case for semantic network and concept graph approaches to knowledge representations.
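
A short sketch of the paper's central contrast, assuming the Python rdflib library and invented names: in RDF a property is a first-class resource that can itself be the subject of statements (domain, range, sub-property links), whereas a UML attribute or association cannot.

```python
# A sketch of properties as first-class modeling elements in RDF: the
# property EX.manages is itself the subject of statements, something a UML
# attribute or association cannot be. Namespace and names are illustrative.
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/org#")
g = Graph()

g.add((EX.manages, RDF.type, RDF.Property))
g.add((EX.manages, RDFS.domain, EX.Manager))
g.add((EX.manages, RDFS.range, EX.Employee))
# The property itself is specialized: a statement *about* a property,
# with no first-class counterpart for a UML attribute.
g.add((EX.directlyManages, RDFS.subPropertyOf, EX.manages))

print(g.serialize(format="turtle"))
```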


01 Mar 2002
TL;DR: This report presents the object that is called "an ontology" and a state of the art of engineering techniques for ontologies, then describes a project for which an ontology was developed and used to improve knowledge management.
Abstract: Ontology is a new object of AI that has recently come to maturity and a powerful conceptual tool for Knowledge Modeling. It provides a coherent base to build on, and a shared reference to align with, in the form of a consensual conceptual vocabulary on which one can build descriptions and communication acts. This report presents the object that is called "an ontology" and a state of the art of engineering techniques for ontologies. Then it describes a project for which we developed an ontology and used it to improve knowledge management. Finally it describes the design process and discusses the resulting ontology.