
Showing papers on "Ontology (information science)" published in 2003


Journal ArticleDOI
TL;DR: This paper discusses how the philosophy and features of OWL can be traced back to older formalisms, with modifications driven by several other constraints on OWL.

1,630 citations


Journal ArticleDOI
TL;DR: Ontology mapping is seen as a solution provider in today's landscape of ontology research: it provides a common layer from which several ontologies can be accessed and hence can exchange information in semantically sound manners.
Abstract: Ontology mapping is seen as a solution provider in today's landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.

1,384 citations


01 Jan 2003
TL;DR: This chapter analyses the limitations of RDF Schema, derives requirements for a richer Web Ontology Language, and describes the three-layered architecture of the OWL language.
Abstract: In order to extend the limited expressiveness of RDF Schema, a more expressive Web Ontology Language (OWL) has been defined by the World Wide Web Consortium (W3C). In this chapter we analyse the limitations of RDF Schema and derive requirements for a richer Web Ontology Language. We then describe the three-layered architecture of the OWL language, and we describe all of the language constructs of OWL in some detail. The chapter concludes with two extensive examples of OWL ontologies.
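As a hedged illustration of the kind of expressiveness OWL adds beyond RDF Schema, the fragment below defines a class by an OWL property restriction. All names (ex:Herbivore, ex:eats, and so on) are invented for this sketch, and Turtle syntax is used for brevity rather than the RDF/XML serialization common in 2003:

```turtle
# Hypothetical example: a class constrained by an owl:allValuesFrom restriction,
# something RDF Schema alone cannot express.
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/vocab#> .

ex:Herbivore a owl:Class ;
    rdfs:subClassOf ex:Animal ;
    rdfs:subClassOf [
        a owl:Restriction ;
        owl:onProperty ex:eats ;
        owl:allValuesFrom ex:Plant
    ] .
```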

1,251 citations


Journal ArticleDOI
TL;DR: The Foundational Model of Anatomy is proposed as a reference ontology in biomedical informatics for correlating different views of anatomy, aligning existing and emerging bioinformatics ontologies, and providing a structure-based template for representing biological functions.

1,060 citations


Journal ArticleDOI
TL;DR: This work presents an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications.
Abstract: Semantic similarity measures play an important role in information retrieval and information integration. Traditional approaches to modeling semantic similarity compute the semantic distance between definitions within a single ontology. This single ontology is either a domain-independent ontology or the result of the integration of existing ontologies. We present an approach to computing semantic similarity that relaxes the requirement of a single ontology and accounts for differences in the levels of explicitness and formalization of the different ontology specifications. A similarity function determines similar entity classes by using a matching process over synonym sets, semantic neighborhoods, and distinguishing features that are classified into parts, functions, and attributes. Experimental results with different ontologies indicate that the model gives good results when ontologies have complete and detailed representations of entity classes. While the combination of word matching and semantic neighborhood matching is adequate for detecting equivalent entity classes, feature matching allows us to discriminate among similar, but not necessarily equivalent entity classes.
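The feature-matching component described above can be sketched as a Tversky-style ratio model, in which common features are weighed against non-common ones. This is a minimal illustration, not the paper's full similarity function; the entity classes and features below are invented:

```python
# Tversky-style ratio-model similarity over feature sets.
# alpha = 0.5 makes the measure symmetric.
def tversky_similarity(a: set, b: set, alpha: float = 0.5) -> float:
    if not a and not b:
        return 0.0
    common = len(a & b)
    return common / (common + alpha * len(a - b) + (1 - alpha) * len(b - a))

# Invented feature sets for two entity classes from different ontologies
stadium = {"has_seats", "has_playing_field", "is_building"}
arena   = {"has_seats", "has_roof", "is_building"}

print(round(tversky_similarity(stadium, arena), 2))
```

A non-symmetric alpha makes the measure directional, which can reflect ontologies that describe their entity classes at different levels of detail.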

948 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the use of ontological annotation to measure the similarities in knowledge content or "semantic similarity" between entries in a data resource, and present a simple extension that enables a semantic search of the knowledge held within sequence databases.
Abstract: Motivation: Many bioinformatics data resources not only hold data in the form of sequences, but also as annotation. In the majority of cases, annotation is written as scientific natural language: this is suitable for humans, but not particularly useful for machine processing. Ontologies offer a mechanism by which knowledge can be represented in a form capable of such processing. In this paper we investigate the use of ontological annotation to measure the similarities in knowledge content or ‘semantic similarity’ between entries in a data resource. Such measures allow a bioinformatician to perform a similarity measure over annotation in a manner analogous to those performed over sequences. A measure of semantic similarity for the knowledge component of bioinformatics resources should afford a biologist a new tool in their repertoire of analyses. Results: We present the results from experiments that investigate the validity of using semantic similarity by comparison with sequence similarity. We show a simple extension that enables a semantic search of the knowledge held within sequence databases. Availability: Software available from http://www.russet.
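One common way to realize such a measure, sketched here under the assumption of an information-content approach, scores two terms by the information content, -log2 p(c), of their most informative shared ancestor in the ontology. The toy term hierarchy and probabilities below are invented:

```python
import math

# child -> set of parents (a tiny tree; real ontologies are DAGs)
parents = {
    "metal_binding": {"ion_binding"},
    "ion_binding": {"binding"},
    "dna_binding": {"binding"},
    "binding": set(),
}

# p(c): fraction of annotations at or below each term (made-up values)
p = {"binding": 1.0, "ion_binding": 0.25, "dna_binding": 0.4, "metal_binding": 0.1}

def ancestors(term):
    """All terms reachable upward from `term`, including itself."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in parents[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def resnik(t1, t2):
    """Information content of the most informative common ancestor."""
    common = ancestors(t1) & ancestors(t2)
    return max(-math.log2(p[c]) for c in common)

print(resnik("metal_binding", "dna_binding"))  # shared ancestor is the root: IC 0
print(resnik("metal_binding", "ion_binding"))  # shared ancestor ion_binding: IC 2
```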

903 citations


Journal ArticleDOI
TL;DR: The COBRA-ONT ontology, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain.
Abstract: This document describes COBRA-ONT, an ontology for supporting pervasive context-aware systems. COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain. This ontology is developed as a part of the Context Broker Architecture (CoBrA), a broker-centric agent architecture that provides knowledge sharing, context reasoning and privacy protection supports for pervasive context-aware systems. We also describe an inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning.

855 citations


Proceedings ArticleDOI
20 May 2003
TL;DR: This paper investigates how Semantic and Web Services technologies can be used to support service advertisement and discovery in e-commerce with the design and implementation of a service matchmaking prototype which uses a DAML-S based ontology and a Description Logic reasoner to compare ontology based service descriptions.
Abstract: An important objective of the Semantic Web is to make Electronic Commerce interactions more flexible and automated. To achieve this, standardization of ontologies, message content and message protocols will be necessary. In this paper we investigate how Semantic and Web Services technologies can be used to support service advertisement and discovery in e-commerce. In particular, we describe the design and implementation of a service matchmaking prototype which uses a DAML-S based ontology and a Description Logic reasoner to compare ontology based service descriptions. We also present the results of initial experiments testing the performance of this prototype implementation in a realistic agent based e-commerce scenario.

833 citations


Journal ArticleDOI
TL;DR: A suite of tools for managing multiple ontologies provides users with a uniform framework for comparing, aligning, and merging ontologies, maintaining versions, translating between different formalisms, and identifying inconsistencies and potential problems.
Abstract: Researchers in the ontology-design field have developed the content for ontologies in many domain areas. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. We developed a suite of tools for managing multiple ontologies. This suite provides users with a uniform framework for comparing, aligning, and merging ontologies, maintaining versions, and translating between different formalisms. Two of the tools in the suite support semi-automatic ontology merging: IPROMPT is an interactive ontology-merging tool that guides the user through the merging process, presenting the user with suggestions for next steps and identifying inconsistencies and potential problems. ANCHORPROMPT uses a graph structure of ontologies to find correlations between concepts and to provide additional information for IPROMPT.

799 citations


Journal ArticleDOI
01 Jul 2003
TL;DR: This paper reviews and compares the main methodologies, tools and languages for building ontologies that have been reported in the literature, as well as the main relationships among them.
Abstract: In this paper we review and compare the main methodologies, tools and languages for building ontologies that have been reported in the literature, as well as the main relationships among them. Ontology technology is nowadays mature enough: many methodologies, tools and languages are already available. Future work in this field should be driven towards the creation of a common integrated workbench for ontology developers that facilitates ontology development, exchange, evaluation, evolution and management, provides methodological support for these tasks, and offers translations to and from different ontology languages. This workbench should not be created from scratch; instead, it should integrate the technology components that are currently available.

794 citations


Journal ArticleDOI
TL;DR: This article presents the methodology that has been successfully used over the past seven years to create the International Committee for Documentation of the International Council of Museums (CIDOC) CONCEPTUAL REFERENCE MODEL (CRM), a high-level ontology to enable information integration for cultural heritage data and their correlation with library and archive information.
Abstract: This article presents the methodology that has been successfully used over the past seven years by an interdisciplinary team to create the International Committee for Documentation of the International Council of Museums (CIDOC) CONCEPTUAL REFERENCE MODEL (CRM), a high-level ontology to enable information integration for cultural heritage data and their correlation with library and archive information. The CIDOC CRM is now in the process of becoming an International Organization for Standardization (ISO) standard. This article justifies in detail the methodology and design by functional requirements and gives examples of its contents. The CIDOC CRM analyzes the common conceptualizations behind data and metadata structures to support data transformation, mediation, and merging. It is argued that such ontologies are property-centric, in contrast to terminological systems, and should be built with different methodologies. It is demonstrated that ontological and epistemological arguments are equally important for an effective design, in particular when dealing with knowledge from the past in any domain. It is assumed that the presented methodology and the upper level of the ontology are applicable in a far wider domain.

Journal ArticleDOI
TL;DR: Sider presents an ontology of persistence and time, defending four-dimensionalism, in his book Four-Dimensionalism: An Ontology of Persistence and Time.
Abstract: Book Information Four-Dimensionalism: An Ontology of Persistence and Time. By Theodore Sider. Clarendon Press. Oxford. 2001. Pp. xxiv + 255. £30.

Journal ArticleDOI
01 Nov 2003
TL;DR: GLUE is described, a system that employs machine learning techniques to find semantic mappings between ontologies and is distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge.
Abstract: On the Semantic Web, data will inevitably come from many different ontologies, and information processing across ontologies is not possible without knowing the semantic mappings between them. Manually finding such mappings is tedious, error-prone, and clearly not possible on the Web scale. Hence the development of tools to assist in the ontology mapping process is crucial to the success of the Semantic Web. We describe GLUE, a system that employs machine learning techniques to find such mappings. Given two ontologies, for each concept in one ontology GLUE finds the most similar concept in the other ontology. We give well-founded probabilistic definitions to several practical similarity measures and show that GLUE can work with all of them. Another key feature of GLUE is that it uses multiple learning strategies, each of which exploits well a different type of information either in the data instances or in the taxonomic structure of the ontologies. To further improve matching accuracy, we extend GLUE to incorporate commonsense knowledge and domain constraints into the matching process. Our approach is thus distinguished in that it works with a variety of well-defined similarity notions and that it efficiently incorporates multiple types of knowledge. We describe a set of experiments on several real-world domains and show that GLUE proposes highly accurate semantic mappings. Finally, we extend GLUE to find complex mappings between ontologies and describe experiments that show the promise of the approach.
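One of the well-defined similarity notions GLUE supports is the joint-probability measure P(A and B) / P(A or B), i.e. the Jaccard coefficient. The sketch below estimates it directly from invented instance sets; GLUE itself estimates these probabilities with learned classifiers rather than from shared instances:

```python
# Jaccard coefficient P(A and B) / P(A or B), estimated from instance sets.
def jaccard(instances_a: set, instances_b: set) -> float:
    union = instances_a | instances_b
    return len(instances_a & instances_b) / len(union) if union else 0.0

# Invented concept extensions from two taxonomies
taxonomy1 = {"faculty": {"smith", "jones", "lee", "chen"}}
taxonomy2 = {"professor": {"smith", "jones", "lee"},
             "course": {"cs101", "cs102"}}

# For each concept in one taxonomy, pick the most similar concept in the other
for name, instances in taxonomy1.items():
    match = max(taxonomy2, key=lambda c: jaccard(instances, taxonomy2[c]))
    print(name, "->", match)  # prints: faculty -> professor
```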


Journal ArticleDOI
TL;DR: The Artequakt project is considered, which links a knowledge extraction tool with an ontology to achieve continuous knowledge support and guide information extraction and is further enhanced using a lexicon-based term expansion mechanism that provides extended ontology terminology.
Abstract: To bring the Semantic Web to life and provide advanced knowledge services, we need efficient ways to access and extract knowledge from Web documents. Although Web page annotations could facilitate such knowledge gathering, annotations are rare and will probably never be rich or detailed enough to cover all the knowledge these documents contain. Manual annotation is impractical and unscalable, and automatic annotation tools remain largely undeveloped. Specialized knowledge services therefore require tools that can search and extract specific knowledge directly from unstructured text on the Web, guided by an ontology that details what type of knowledge to harvest. An ontology uses concepts and relations to classify domain knowledge. Other researchers have used ontologies to support knowledge extraction, but few have explored their full potential in this domain. The paper considers the Artequakt project which links a knowledge extraction tool with an ontology to achieve continuous knowledge support and guide information extraction. The extraction tool searches online documents and extracts knowledge that matches the given classification structure. It provides this knowledge in a machine-readable format that will be automatically maintained in a knowledge base (KB). Knowledge extraction is further enhanced using a lexicon-based term expansion mechanism that provides extended ontology terminology.

Journal ArticleDOI
03 Jul 2003
TL;DR: (my)Grid is building high level services for data and application integration such as resource discovery, workflow enactment and distributed query processing, and semantically rich metadata expressed using ontologies necessary to discover, select and compose services into dynamic workflows.
Abstract: Motivation: The my Grid project aims to exploit Grid technology, with an emphasis on the Information Grid, and provide middleware layers that make it appropriate for the needs of bioinformatics. my Grid is building high level services for data and application integration such as resource discovery, workflow enactment and distributed query processing. Additional services are provided to support the scientific method and best practice found at the bench but often neglected at the workstation, notably provenance management, change notification and personalisation. Results: We give an overview of these services and their metadata. In particular, semantically rich metadata expressed using ontologies necessary to discover, select and compose services into dynamic workflows. Availability: Software is available on request from the authors and information from http://www.mygrid.org.uk.

Journal Article
TL;DR: Research is reported on that adapts information navigation based on a user profile structured as a weighted concept hierarchy that shows that these automatically created profiles reflect the user's interests quite well and they are able to produce moderate improvements when applied to search results.
Abstract: As the number of Internet users and the number of accessible Web pages grows, it is becoming increasingly difficult for users to find documents that are relevant to their particular needs. Users must either browse through a large hierarchy of concepts to find the information for which they are looking or submit a query to a publicly available search engine and wade through hundreds of results, most of them irrelevant. The core of the problem is that whether the user is browsing or searching, whether they are an eighth grade student or a Nobel prize winner, the identical information is selected and it is presented the same way. In this paper, we report on research that adapts information navigation based on a user profile structured as a weighted concept hierarchy. A user may create his or her own concept hierarchy and use it for browsing Web sites. Or, the user profile may be created from a reference ontology by 'watching over the user's shoulder' while they browse. We show that these automatically created profiles reflect the user's interests quite well and that they are able to produce moderate improvements when applied to search results.
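A weighted concept hierarchy profile of this kind might be accumulated roughly as follows. This is a hedged sketch: the concept tree, visit log, and decay-based weight propagation are invented for illustration, not the paper's actual scheme:

```python
# concept -> parent concept (invented toy hierarchy; roots have no entry)
parents = {"python": "programming", "programming": "computing", "opera": "music"}

def record_visit(profile: dict, concept: str, weight: float = 1.0, decay: float = 0.5):
    """Bump the visited concept's weight; ancestors gain progressively less."""
    while concept is not None:
        profile[concept] = profile.get(concept, 0.0) + weight
        weight *= decay
        concept = parents.get(concept)  # None once past the root

# "Watching over the user's shoulder": each browsed page contributes its concept
profile = {}
for page_concept in ["python", "python", "opera"]:
    record_visit(profile, page_concept)

# Highest-weighted concepts would dominate re-ranking of search results
print(sorted(profile.items(), key=lambda kv: -kv[1]))
```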

Proceedings ArticleDOI
20 May 2003
TL;DR: Piazza offers a language for mediating between data sources on the Semantic Web, which maps both the domain structure and document structure and enables interoperation of XML data with RDF data that is accompanied by rich OWL ontologies.
Abstract: The Semantic Web envisions a World Wide Web in which data is described with rich semantics and applications can pose complex queries. To this point, researchers have defined new languages for specifying meanings for concepts and developed techniques for reasoning about them, using RDF as the data model. To flourish, the Semantic Web needs to be able to accommodate the huge amounts of existing data and the applications operating on them. To achieve this, we are faced with two problems. First, most of the world's data is available not in RDF but in XML; XML and the applications consuming it rely not only on the domain structure of the data, but also on its document structure. Hence, to provide interoperability between such sources, we must map between both their domain structures and their document structures. Second, data management practitioners often prefer to exchange data through local point-to-point data translations, rather than mapping to common mediated schemas or ontologies. This paper describes the Piazza system, which addresses these challenges. Piazza offers a language for mediating between data sources on the Semantic Web, which maps both the domain structure and document structure. Piazza also enables interoperation of XML data with RDF data that is accompanied by rich OWL ontologies. Mappings in Piazza are provided at a local scale between small sets of nodes, and our query answering algorithm is able to chain sets of mappings together to obtain relevant data from across the Piazza network. We also describe an implemented scenario in Piazza and the lessons we learned from it.
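The chaining of local mappings can be pictured with a deliberately simplified sketch, in which each pairwise mapping is just a term-renaming table and a query term is translated along a path of peers (Piazza's real mappings relate XML and RDF structures, not flat vocabularies; all names here are invented):

```python
# Local, pairwise mappings between adjacent peers, as simple renaming tables
mappings = {
    ("A", "B"): {"author": "creator"},
    ("B", "C"): {"creator": "dc:creator"},
}

def translate(term: str, path: list) -> str:
    """Compose mappings along a path of peers; unmapped terms pass through."""
    for edge in zip(path, path[1:]):
        term = mappings[edge].get(term, term)
    return term

print(translate("author", ["A", "B", "C"]))  # -> dc:creator
```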

Book ChapterDOI
20 Oct 2003
TL;DR: This paper shows how ontologies can be contextualized, thus acquiring certain useful properties that a pure shared approach cannot provide, and develops Context OWL (C-OWL), a language whose syntax and semantics have been obtained by extending the OWL syntax and semantics to allow for the representation of contextual ontologies.
Abstract: Ontologies are shared models of a domain that encode a view which is common to a set of different parties. Contexts are local models that encode a party's subjective view of a domain. In this paper we show how ontologies can be contextualized, thus acquiring certain useful properties that a pure shared approach cannot provide. We say that an ontology is contextualized or, also, that it is a contextual ontology, when its contents are kept local, and therefore not shared with other ontologies, and mapped with the contents of other ontologies via explicit (context) mappings. The result is Context OWL (C-OWL), a language whose syntax and semantics have been obtained by extending the OWL syntax and semantics to allow for the representation of contextual ontologies.

Proceedings ArticleDOI
04 Jun 2003
TL;DR: The KAoS services rely on a DAML description-logic-based ontology of the computational environment, application context, and the policies themselves that enables runtime extensibility and adaptability of the system, as well as the ability to analyze policies relating to entities described at different levels of abstraction.
Abstract: We describe our initial implementation of the KAoS policy and domain services. While primarily oriented to the dynamic and complex requirements of software agent applications, the services are also being adapted to general-purpose grid computing and Web services environments as well. The KAoS services rely on a DAML description-logic-based ontology of the computational environment, application context, and the policies themselves that enables runtime extensibility and adaptability of the system, as well as the ability to analyze policies relating to entities described at different levels of abstraction.

Book ChapterDOI
20 Oct 2003
TL;DR: A simplistic upper-level ontology is introduced which starts with some basic philosophic distinctions and goes down to the most popular entity types, thus providing many of the inter-domain common sense concepts and allowing easy domain-specific extensions.
Abstract: The realization of the Semantic Web depends on the availability of a critical mass of metadata for web content, linked to formal knowledge about the world. This paper presents our vision of a holistic system allowing annotation, indexing, and retrieval of documents with respect to real-world entities. A system (called KIM), partially implementing this concept, is briefly presented and used for evaluation and demonstration. Our understanding is that a system for semantic annotation should be based upon specific knowledge about the world, rather than being indifferent to any ontological commitments and general knowledge. To assure efficiency and reusability of the metadata we introduce a simplistic upper-level ontology which starts with some basic philosophic distinctions and goes down to the most popular entity types (people, companies, cities, etc.), thus providing many of the inter-domain common sense concepts and allowing easy domain-specific extensions. Based on the ontology, an extensive knowledge base of entity descriptions is maintained. A semantically enhanced information extraction system, providing automatic annotation with references to classes in the ontology and instances in the knowledge base, is presented. Based on these annotations, we perform IR-like indexing and retrieval, further extended using the ontology and knowledge about the specific entities.

Journal ArticleDOI
TL;DR: The OntoLearn system is an infrastructure for automated ontology learning from domain text that uses natural language processing and machine learning techniques, and is part of a more general ontology engineering architecture.
Abstract: Our OntoLearn system is an infrastructure for automated ontology learning from domain text. It is the only system, as far as we know, that uses natural language processing and machine learning techniques, and is part of a more general ontology engineering architecture. We describe the system and an experiment in which we used a machine-learned tourism ontology to automatically translate multiword terms from English to Italian. The method can apply to other domains without manual adaptation.

Book ChapterDOI
01 Feb 2003
Abstract: Some of the liveliest debates about methodology in the social sciences center on comparative research. This essay concentrates on comparative politics, a field often defined by reference to the use of a particular “comparative method,” but it also bears on sociology, where there is active controversy about methodological issues. I use the term “methodology” to refer to the means scholars employ to increase confidence that the inferences they make about the social and political world are valid. The most important of these are inferences about causal relationships, where the object of a methodology is to increase confidence in assertions that one variable or event (x) exerts a causal effect on another (y). One of the curious features of contemporary debates is that they pay more attention to methodology than to issues of ontology. “Ontology” refers to the character of the world as it actually is. Accordingly, I use the term to refer to the fundamental assumptions scholars make about the nature of the social and political world and especially about the nature of causal relationships within that world. If a methodology consists of techniques for making observations about causal relations, an ontology consists of premises about the deep causal structures of the world from which analysis begins and without which theories about the social world would not make sense. At a fundamental level, it is how we imagine the social world to be.

Book ChapterDOI
17 Nov 2003
TL;DR: This paper describes a context modelling approach using ontologies as a formal fundament, and introduces the Aspect-Scale-Context (ASC) model, which may be used to enable context-awareness and contextual interoperability during service discovery and execution in a proposed distributed system architecture.
Abstract: This paper describes a context modelling approach using ontologies as a formal fundament. We introduce our Aspect-Scale-Context (ASC) model and show how it is related to some other models. A Context Ontology Language (CoOL) is derived from the model, which may be used to enable context-awareness and contextual interoperability during service discovery and execution in a proposed distributed system architecture. A core component of this architecture is a reasoner which infers conclusions about the context based on an ontology built with CoOL.

Book
01 Jan 2003
TL;DR: Description Logics and related formalisms are being applied in at least five applications in medical informatics - terminology, intelligent user interfaces, decision support and semantic indexing, language technology, and systems integration.
Abstract: Description Logics and related formalisms are being applied in at least five applications in medical informatics - terminology, intelligent user interfaces, decision support and semantic indexing, language technology, and systems integration. Important issues include size, complexity, connectivity, and the wide range of granularity required - medical terminologies require on the order of 250,000 concepts, some involving a dozen or more conjuncts with deep nesting; the nature of anatomy and physiology is that everything connects to everything else; and notions to be represented range from psychology to molecular biology. Technical issues for expressivity have focused on problems of part-whole relations and the need to provide "frame-like" functionality - i.e., the ability to determine efficiently what can sensibly be said about any particular concept and means of handling at least limited cases of defaults with exceptions. There are also significant problems with "semantic normalization" and "clinical pragmatics" because understanding medical notions often depends on implicit knowledge and some notions defy easy logical formulation. The two best known efforts - OpenGALEN and SNOMED-RT - both use idiosyncratic Description Logics with generally limited expressivity but specialized extensions to cope with issues around part-whole and other transitive relations. There is also a conflict between the needs for re-use and the requirement for easy understandability by domain expert authors. OpenGALEN has coped with this conflict by introducing a layered architecture with a high level "Intermediate Representation" which insulates authors from the details of the Description Logic, which is treated as an "assembly language" rather than the primary medium for expressing the ontology.

01 Jan 2003
TL;DR: This paper describes the upperlevel ontology SUMO (Suggested Upper Merged Ontology), which has been proposed as the initial version of an eventual Standard Upper Ontology (SUO), and describes the popular, free, and structured WordNet lexical database.
Abstract: Ontologies are becoming extremely useful tools for sophisticated software engineering. Designing applications, databases, and knowledge bases with reference to a common ontology can mean shorter development cycles, easier and faster integration with other software and content, and a more scalable product. Although ontologies are a very promising solution to some of the most pressing problems that confront software engineering, they also raise some issues and difficulties of their own. Consider, for example, the questions below:
• How can a formal ontology be used effectively by those who lack extensive training in logic and mathematics?
• How can an ontology be used automatically by applications (e.g. Information Retrieval and Natural Language Processing applications) that process free text?
• How can we know when an ontology is complete?
In this paper we will begin by describing the upper-level ontology SUMO (Suggested Upper Merged Ontology), which has been proposed as the initial version of an eventual Standard Upper Ontology (SUO). We will then describe the popular, free, and structured WordNet lexical database. After this preliminary discussion, we will describe the methodology that we are using to align WordNet with the SUMO. We close this paper by discussing how this alignment of WordNet with SUMO will provide answers to the questions posed above.
Keywords: natural language, ontology
1. SUMO: The SUMO (Suggested Upper Merged Ontology) is an ontology that was created at Teknowledge Corporation with extensive input from the SUO mailing list, and it has been proposed as a starter document for the IEEE-sanctioned SUO Working Group [1]. The SUMO was created by merging publicly available ontological content into a single, comprehensive, and cohesive structure [2,3]. As of February 2003, the ontology contains 1000 terms and 4000 assertions. The ontology can be browsed online (http://ontology.teknowledge.com), and source files for all of the versions of the ontology can be freely downloaded (http://ontology.teknowledge.com/cgibin/cvsweb.cgi/SUO/).


Journal ArticleDOI
TL;DR: The authors present an integrated enterprise-knowledge management architecture, focusing on how to support multiple ontologies and manage ontology evolution.
Abstract: Several challenges exist related to applying ontologies in real-world environments. The authors present an integrated enterprise-knowledge management architecture, focusing on how to support multiple ontologies and manage ontology evolution.

Patent
08 Aug 2003
TL;DR: In this article, the authors describe a method for mapping data schemas into an ontology model: a primary data construct identified within the data schema is mapped to a corresponding class of the ontology model, and a secondary data construct within it is mapped to a property of that class.
Abstract: A method for mapping data schemas into an ontology model, including providing an ontology model including classes and properties of classes, providing a data schema, identifying a primary data construct within the data schema, identifying a secondary data construct within the primary data construct, mapping the primary data construct to a corresponding class of the ontology model, and mapping the secondary data construct to a property of the corresponding class of the ontology model. A system and a computer readable storage medium are also described and claimed.
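The claimed mapping can be sketched, under the assumption that the primary data construct is a relational table and the secondary constructs are its columns, as a table-to-class and column-to-property translation. The schema and naming convention below are invented:

```python
# Invented relational schema: table name -> list of column names
schema = {"Employee": ["name", "salary", "dept_id"]}

# Map each table (primary construct) to a class, and each of its
# columns (secondary constructs) to a property of that class.
ontology = {"classes": {}, "properties": {}}
for table, columns in schema.items():
    ontology["classes"][table] = {"source": table}
    for col in columns:
        ontology["properties"][f"{table}.{col}"] = {"domain": table, "source": col}

print(sorted(ontology["properties"]))
```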

Book ChapterDOI
TL;DR: The Semantic Web is a powerful vision that is getting to grips with the challenge of providing more human-oriented web services; reasoning with and across distributed, partially implicit assumptions (contextual knowledge) is therefore a milestone.

Abstract: The Semantic Web is a powerful vision that is getting to grips with the challenge of providing more human-oriented web services. Hence, reasoning with and across distributed, partially implicit assumptions (contextual knowledge) is a milestone.