
Showing papers on "Semantic Web published in 2004"



BookDOI
01 Jan 2004
TL;DR: DODDLE-R, a support environment for user-centered ontology development, consists of two main parts: a pre-processing part, which generates a prototype ontology semi-automatically, and a quality improvement part, which supports its interactive refinement.
Abstract: In order to realize on-the-fly ontology construction for the Semantic Web, this paper proposes DODDLE-R, a support environment for user-centered ontology development. It consists of two main parts: a pre-processing part and a quality improvement part. The pre-processing part generates a prototype ontology semi-automatically, and the quality improvement part supports its interactive refinement. As we believe that careful construction of ontologies from the preliminary phase is more efficient than attempting to generate ontologies fully automatically (which may require too many modifications by hand), the quality improvement part plays a significant role in DODDLE-R. Through interactive support for improving the quality of the prototype ontology, an OWL-Lite level ontology, which consists of taxonomic relationships (class-subclass relationships) and non-taxonomic relationships (defined as properties), is constructed efficiently.

2,006 citations


Book
01 Jan 2004
TL;DR: The third edition of this widely used text has been thoroughly updated, with significant new material that reflects a rapidly developing field.
Abstract: The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its uses. A Semantic Web Primer provides an introduction and guide to this continuously evolving field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for independent study by professionals, it concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. The third edition of this widely used text has been thoroughly updated, with significant new material that reflects a rapidly developing field. Treatment of the different languages (OWL2, rules) expands the coverage of RDF and OWL, defining the data model independently of XML and including coverage of N3/Turtle and RDFa. A chapter is devoted to OWL2, the new W3C standard. This edition also features additional coverage of the query language SPARQL, the rule language RIF and the possibility of interaction between rules and ontology languages and applications. The chapter on Semantic Web applications reflects the rapid developments of the past few years. A new chapter offers ideas for term projects. Additional material, including updates on the technological trends and research directions, can be found at http://www.semanticwebprimer.org.

1,634 citations


Proceedings ArticleDOI
14 Mar 2004
TL;DR: An OWL encoded context ontology (CONON) is proposed for modeling context in pervasive computing environments, and for supporting logic-based context reasoning, and provides extensibility for adding domain-specific ontology in a hierarchical manner.
Abstract: Here we propose an OWL encoded context ontology (CONON) for modeling context in pervasive computing environments, and for supporting logic-based context reasoning. CONON provides an upper context ontology that captures general concepts about basic context, and also provides extensibility for adding domain-specific ontology in a hierarchical manner. Based on this context ontology, we have studied the use of logic reasoning to check the consistency of context information, and to reason over low-level, explicit context to derive high-level, implicit context. By giving a performance study for our prototype, we quantitatively evaluate the feasibility of logic-based context reasoning for non-time-critical applications in pervasive computing environments, where we always have to deal carefully with the limitation of computational resources.

1,236 citations
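The low-to-high context derivation CONON describes can be pictured as forward-chaining over context facts. The sketch below is illustrative only: the fact and rule names are invented, and a real CONON deployment reasons over OWL with a description logic engine rather than plain tuples.

```python
# Illustrative sketch of deriving high-level, implicit context from
# low-level, explicit context facts (all names here are invented).

def derive(facts, rules):
    """Forward-chain: apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Low-level context, as it might come from sensors
low_level = {
    ("alice", "locatedIn", "bedroom"),
    ("bedroom", "lightLevel", "dark"),
    ("alice", "posture", "lying"),
}

# Each rule: (set of premise triples, derived triple)
rules = [
    ({("alice", "locatedIn", "bedroom"),
      ("bedroom", "lightLevel", "dark"),
      ("alice", "posture", "lying")},
     ("alice", "status", "sleeping")),
]

derived = derive(low_level, rules)
```

The fixed-point loop mirrors the paper's point that implicit context ("sleeping") is never sensed directly but follows logically from explicit context.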


Proceedings ArticleDOI
19 May 2004
TL;DR: The new Semantic Web recommendations for RDF, RDFS and OWL have, at their heart, the RDF graph, and Jena2, a second-generation RDF toolkit, is similarly centered on the RDF graph.
Abstract: The new Semantic Web recommendations for RDF, RDFS and OWL have, at their heart, the RDF graph. Jena2, a second-generation RDF toolkit, is similarly centered on the RDF graph. RDFS and OWL reasoning are seen as graph-to-graph transforms, producing graphs of virtual triples. Rich APIs are provided. The Model API includes support for other aspects of the RDF recommendations, such as containers and reification. The Ontology API includes support for RDFS and OWL, including advanced OWL Full support. Jena includes the de facto reference RDF/XML parser, and provides RDF/XML output using the full range of the rich RDF/XML grammar. N3 I/O is supported. RDF graphs can be stored in-memory or in databases. Jena's query language, RDQL, and the Web API are both offered for the next round of standardization.

1,125 citations
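The "RDF graph at the heart" view, and the idea of reasoning as a graph-to-graph transform producing virtual triples, can be sketched in a few lines. This is a pure-Python toy, not Jena's Java API; the `ex:` terms are invented.

```python
# An RDF graph as a set of (subject, predicate, object) triples,
# queried by patterns in which None acts as a wildcard.

def match(graph, s=None, p=None, o=None):
    """Return the triples matching a (s, p, o) pattern."""
    return {(ts, tp, to) for ts, tp, to in graph
            if s in (None, ts) and p in (None, tp) and o in (None, to)}

def rdfs_subclass_closure(graph):
    """Graph-to-graph transform: add the 'virtual' rdf:type triples
    implied by rdfs:subClassOf, echoing Jena2's view of reasoning."""
    inferred = set(graph)
    changed = True
    while changed:
        changed = False
        for s, _, cls in match(inferred, p="rdf:type"):
            for _, _, sup in match(inferred, s=cls, p="rdfs:subClassOf"):
                t = (s, "rdf:type", sup)
                if t not in inferred:
                    inferred.add(t)
                    changed = True
    return inferred

g = {
    ("ex:Fido", "rdf:type", "ex:Dog"),
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
}
g2 = rdfs_subclass_closure(g)
```

After the transform, `g2` contains the asserted triples plus the virtual triple typing `ex:Fido` as an `ex:Animal`, which is exactly the kind of entailment Jena's RDFS reasoner materializes.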


Book ChapterDOI
07 Nov 2004
TL;DR: The architecture of the OWL Plugin is described, the most important features are walked through, and some of the design decisions are discussed.
Abstract: We introduce the OWL Plugin, a Semantic Web extension of the Protege ontology development platform. The OWL Plugin can be used to edit ontologies in the Web Ontology Language (OWL), to access description logic reasoners, and to acquire instances for semantic markup. In many of these features, the OWL Plugin has created and facilitated new practices for building Semantic Web contents, often driven by the needs of and feedback from our users. Furthermore, Protege's flexible open-source platform means that it is easy to integrate custom-tailored components to build real-world applications. This document describes the architecture of the OWL Plugin, walks through its most important features, and discusses some of our design decisions.

1,023 citations


Proceedings ArticleDOI
13 Nov 2004
TL;DR: Swoogle is a crawler-based indexing and retrieval system for the Semantic Web that extracts metadata for each discovered document, and computes relations between documents.
Abstract: Swoogle is a crawler-based indexing and retrieval system for the Semantic Web. It extracts metadata for each discovered document, and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-Gram or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is ontology rank, a measure of the importance of a Semantic Web document.

926 citations
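Swoogle's character N-gram indexing can be illustrated with a toy similarity measure: extract overlapping character trigrams and compare the resulting count vectors. This is a sketch of the general technique, not Swoogle's actual implementation.

```python
# Character n-gram extraction and a cosine-style similarity,
# in the spirit of Swoogle's n-gram-based document indexing.
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Count the overlapping character n-grams of a string."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def similarity(a, b, n=3):
    """Cosine similarity between the n-gram count vectors of two texts."""
    va, vb = char_ngrams(a, n), char_ngrams(b, n)
    dot = sum(va[g] * vb[g] for g in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0
```

Identical strings score 1.0 and strings sharing no trigrams score 0.0; n-grams make the measure robust to tokenization, which is useful when "keywords" may be URIrefs rather than words.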


Book
01 Jan 2004
TL;DR: This book covers the theoretical foundations of ontologies, surveys the most outstanding ontologies, and presents methodologies and methods for building ontologies, languages for building ontologies, and ontology tools.
Abstract: Theoretical Foundations of Ontologies.- The Most Outstanding Ontologies.- Methodologies and Methods for Building Ontologies.- Languages for Building Ontologies.- Ontology Tools.

919 citations


Book ChapterDOI
06 Jul 2004
TL;DR: This paper shows how to use OWL-S in conjunction with Web service standards, and explains and illustrates the value added by the semantics expressed in OWL-S.
Abstract: Service interface description languages such as WSDL, and related standards, are evolving rapidly to provide a foundation for interoperation between Web services. At the same time, Semantic Web service technologies, such as the Ontology Web Language for Services (OWL-S), are developing the means by which services can be given richer semantic specifications. Richer semantics can enable fuller, more flexible automation of service provision and use, and support the construction of more powerful tools and methodologies. Both sets of technologies can benefit from complementary uses and cross-fertilization of ideas. This paper shows how to use OWL-S in conjunction with Web service standards, and explains and illustrates the value added by the semantics expressed in OWL-S.

896 citations


Journal ArticleDOI
TL;DR: Four key issues for Web service composition are described, which offer developers reuse possibilities and users seamless access to a variety of complex services.
Abstract: Web service composition lets developers create applications on top of service-oriented computing's native description, discovery, and communication capabilities. Such applications are rapidly deployable and offer developers reuse possibilities and users seamless access to a variety of complex services. There are many existing approaches to service composition, ranging from abstract methods to those aiming to be industry standards. The authors describe four key issues for Web service composition.

770 citations


Book ChapterDOI
07 Nov 2004
TL;DR: The experience in applying KAoS services to ensure policy compliance for Semantic Web Services workflow composition and enactment is described and how this work has uncovered requirements for increasing the expressivity of policy beyond what can be done with description logic is described.
Abstract: In this paper we describe our experience in applying KAoS services to ensure policy compliance for Semantic Web Services workflow composition and enactment. We are developing these capabilities within the context of two applications: Coalition Search and Rescue (CoSAR-TS) and Semantic Firewall (SFW). We describe how this work has uncovered requirements for increasing the expressivity of policy beyond what can be done with description logic (e.g., role-value-maps), and how we are extending our representation and reasoning mechanisms in a carefully controlled manner to that end. Since KAoS employs OWL for policy representation, it fits naturally with the use of OWL-S workflow descriptions generated by the AIAI I-X planning system in the CoSAR-TS application. The advanced reasoning mechanisms of KAoS are based on the JTP inference engine and enable the analysis of classes and instances of processes from a policy perspective. As the result of analysis, KAoS concludes whether a particular workflow step is allowed by policy and whether the performance of this step would incur additional policy-generated obligations. Issues in the representation of processes within OWL-S are described. Besides what is done during workflow composition, aspects of policy compliance can be checked at runtime when a workflow is enacted. We illustrate these capabilities through two application examples. Finally, we outline plans for future work.

01 Jan 2004
TL;DR: The Web Services Choreography Description Language (WS-CDL) as mentioned in this paper is an XML-based language that describes peer-to-peer collaborations of parties by defining, from a global viewpoint, their common and complementary observable behavior; where ordered message exchanges result in accomplishing a common business goal.
Abstract: The Web Services Choreography Description Language (WS-CDL) is an XML-based language that describes peer-to-peer collaborations of parties by defining, from a global viewpoint, their common and complementary observable behavior; where ordered message exchanges result in accomplishing a common business goal. The Web Services specifications offer a communication bridge between the heterogeneous computational environments used to develop and host applications. The future of E-Business applications requires the ability to perform long-lived, peer-to-peer collaborations between the participating services, within or across the trusted domains of an organization. The Web Services Choreography specification is targeted for composing interoperable, peer-to-peer collaborations between any type of party regardless of the supporting platform or programming model used by the implementation of the hosting environment.

Proceedings ArticleDOI
17 May 2004
TL;DR: MWSAF (METEOR-S Web Service Annotation Framework) is a framework for semi-automatically marking up Web service descriptions with ontologies, with algorithms developed to match and annotate WSDL files with relevant ontologies.
Abstract: The World Wide Web is emerging not only as an infrastructure for data, but also for a broader variety of resources that are increasingly being made available as Web services. Relevant current standards like UDDI, WSDL, and SOAP are in their fledgling years and form the basis of making Web services a workable and broadly adopted technology. However, realizing the fuller scope of the promise of Web services and associated service oriented architecture will require further technological advances in the areas of service interoperation, service discovery, service composition, and process orchestration. Semantics, especially as supported by the use of ontologies, and related Semantic Web technologies, are likely to provide better qualitative and scalable solutions to these requirements. Just as semantic annotation of data in the Semantic Web is the first critical step to better search, integration and analytics over heterogeneous data, semantic annotation of Web services is an equally critical first step to achieving the above promise. Our approach is to work with existing Web services technologies and combine them with ideas from the Semantic Web to create a better framework for Web service discovery and composition. In this paper we present MWSAF (METEOR-S Web Service Annotation Framework), a framework for semi-automatically marking up Web service descriptions with ontologies. We have developed algorithms to match and annotate WSDL files with relevant ontologies. We use domain ontologies to categorize Web services into domains. An empirical study of our approach is presented to help evaluate its performance.

Journal ArticleDOI
TL;DR: An ontology of time is being developed for describing the temporal content of Web pages and the temporal properties of Web services, which covers topological properties of instants and intervals, measures of duration, and the meanings of clock and calendar terms.
Abstract: In connection with the DAML project for bringing about the Semantic Web, an ontology of time is being developed for describing the temporal content of Web pages and the temporal properties of Web services. This ontology covers topological properties of instants and intervals, measures of duration, and the meanings of clock and calendar terms.

Journal ArticleDOI
TL;DR: This paper describes the design and implementation of a service matchmaking prototype that uses a DAML-S based ontology and a description logic reasoner to compare ontology-based service descriptions, thereby representing the semantics of those descriptions.
Abstract: The semantic Web can make e-commerce interactions more flexible and automated by standardizing ontologies, message content, and message protocols. This paper investigates how semantic and Web Services technologies can be used to support service advertisement and discovery in e-commerce. In particular, it describes the design and implementation of a service matchmaking prototype that uses a DAML-S based ontology and a description logic reasoner to compare ontology-based service descriptions. By representing the semantics of service descriptions, the matchmaker enables the behavior of an intelligent agent to approach more closely that of a human user trying to locate suitable Web services. The performance of this prototype implementation was tested in a realistic agent-based e-commerce scenario.

01 Jan 2004
TL;DR: D2R Server is a tool for publishing the content of relational databases on the Semantic Web that allows Web agents to retrieve RDF and XHTML representations of resources and to query non-RDF databases using the SPARQL query language over the SPARQL protocol.
Abstract: D2R Server is a tool for publishing the content of relational databases on the Semantic Web. Database content is mapped to RDF by a declarative mapping which specifies how resources are identified and how property values are generated from database content. Based on this mapping, D2R Server allows Web agents to retrieve RDF and XHTML representations of resources and to query non-RDF databases using the SPARQL query language over the SPARQL protocol. The generated representations are richly interlinked on RDF and XHTML level in order to enable browsers and crawlers to navigate database content.
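The heart of D2R is a declarative mapping from rows to RDF: a template mints a resource URI per row and columns become property values. The sketch below shows that idea only; the table, URI template, and vocabulary terms are invented, and D2R's real mapping language (D2RQ) is far richer.

```python
# Toy relational-to-RDF mapping in the spirit of D2R's declarative
# mappings (table contents and vocabulary names are invented).

def rows_to_rdf(rows, mapping):
    """mapping: {'uri': URI template, 'properties': {column: predicate}}."""
    triples = set()
    for row in rows:
        subject = mapping["uri"].format(**row)  # mint a URI per row
        for column, predicate in mapping["properties"].items():
            triples.add((subject, predicate, row[column]))
    return triples

papers = [
    {"id": 7, "title": "D2R Server", "year": 2004},
]
mapping = {
    "uri": "http://example.org/paper/{id}",
    "properties": {"title": "dc:title", "year": "dc:date"},
}
triples = rows_to_rdf(papers, mapping)
```

Once database content is exposed as triples like these, a SPARQL endpoint can answer graph-pattern queries against the database without the data ever being stored as RDF.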

Proceedings Article
01 May 2004
TL;DR: It is proposed in this paper that one approach to ontology evaluation should be corpus or data driven, because a corpus is the most accessible form of knowledge and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge.
Abstract: The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge. We consider a number of methods for measuring this ‘fit’ and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology.

Book ChapterDOI
07 Nov 2004
TL;DR: A planning technique for the automated composition of web services described in OWL-S process models, which can deal effectively with nondeterminism, partial observability, and complex goals, is proposed and implemented in a planner.
Abstract: Different planning techniques have been applied to the problem of automated composition of web services. However, in realistic cases, this planning problem is far from trivial: the planner needs to deal with the nondeterministic behavior of web services, the partial observability of their internal status, and with complex goals expressing temporal conditions and preference requirements. We propose a planning technique for the automated composition of web services described in OWL-S process models, which can deal effectively with nondeterminism, partial observability, and complex goals. The technique allows for the synthesis of plans that encode compositions of web services with the usual programming constructs, like conditionals and iterations. The generated plans can thus be translated into executable processes, e.g., BPEL4WS programs. We implement our solution in a planner and do some preliminary experimental evaluations that show the potentialities of our approach, and the gain in performance of automating the composition at the semantic level w.r.t. the automated composition at the level of executable processes.

Journal ArticleDOI
TL;DR: This work proposes a method, ONTOMETRIC, which allows the users to measure the suitability of existing ontologies, regarding the requirements of their systems.
Abstract: In the last years, the development of ontology-based applications has increased considerably, mainly related to the semantic web. Users currently looking for ontologies to incorporate into their systems just use their experience and intuition. This makes it difficult for them to justify their choices. Mainly, this is due to the lack of methods that help the user to determine which are the most appropriate ontologies for the new system. To solve this deficiency, the present work proposes a method, ONTOMETRIC, which allows the users to measure the suitability of existing ontologies, regarding the requirements of their systems.

Proceedings ArticleDOI
17 May 2004
TL;DR: PANKOW (Pattern-based Annotation through Knowledge on the Web), a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology, is proposed.
Abstract: The success of the Semantic Web depends on the availability of ontologies as well as on the proliferation of web pages annotated with metadata conforming to these ontologies. Thus, a crucial question is where to acquire these metadata from. In this paper we propose PANKOW (Pattern-based Annotation through Knowledge on the Web), a method which employs an unsupervised, pattern-based approach to categorize instances with regard to an ontology. The approach is evaluated against the manual annotations of two human subjects. It is implemented in OntoMat, an annotation tool for the Semantic Web, and shows very promising results.
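PANKOW's unsupervised idea is to score candidate concepts for an instance by counting hits of lexico-syntactic patterns (in the style of Hearst patterns) on the Web. The sketch below counts matches in a tiny in-memory corpus instead of Web page counts; the patterns, concepts, and corpus are invented for illustration.

```python
# Toy pattern-based categorization in the spirit of PANKOW:
# pick the concept whose patterns co-occur most often with the instance.

def categorize(instance, concepts, corpus):
    """Score each concept by counting pattern matches in the corpus."""
    patterns = ["{c}s such as {i}", "{i} is a {c}", "{i} and other {c}s"]
    scores = {}
    for c in concepts:
        scores[c] = sum(corpus.count(p.format(c=c, i=instance))
                        for p in patterns)
    return max(scores, key=scores.get), scores

corpus = ("capitals such as paris are crowded. paris is a capital. "
          "rivers such as the loire flow north.")
best, scores = categorize("paris", ["capital", "river"], corpus)
```

Here "paris" is categorized as a capital because two patterns match for that concept and none for "river"; PANKOW does the same at Web scale, using search-engine counts as the corpus statistics.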

Journal ArticleDOI
TL;DR: The most representative methodologies for building ontologies from scratch are presented, and the proposed techniques, guidelines and methods to help in the construction task are described.
Abstract: Ontologies are an important component in many areas, such as knowledge management and organization, electronic commerce and information retrieval and extraction. Several methodologies for ontology building have been proposed. In this article, we provide an overview of ontology building. We start by characterizing the ontology building process and its life cycle. We present the most representative methodologies for building ontologies from scratch, and the proposed techniques, guidelines and methods to help in the construction task. We analyze and compare these methodologies. We describe current research issues in ontology reuse. Finally, we discuss the current trends in ontology building and its future challenges, namely, the new issues for building ontologies for the Semantic Web.


Journal ArticleDOI
TL;DR: This work proposes an ontology-based context model that leverages Semantic Web technology and OWL (Web Ontology Language) and proposes a service-oriented context-aware middleware (SOCAM) architecture, including a set of independent services that perform context discovery, acquisition, and interpretation.
Abstract: Applications and services must adapt to changing contexts in dynamic environments. However, building context-aware applications is still complex and time-consuming due to inadequate infrastructure support. We propose a context-aware infrastructure for building and rapidly prototyping such applications in a smart-home environment. This OSGi-based infrastructure manages context-aware services reliably and securely and efficiently supports context acquisition, discovery, and reasoning. A formal, ontology-based context model enables semantic context representation, reasoning, and knowledge sharing. We propose an ontology-based context model that leverages Semantic Web technology and OWL (Web Ontology Language). OWL is an ontology markup language that enables context sharing and context reasoning. Based on our context model, we also propose a service-oriented context-aware middleware (SOCAM) architecture, including a set of independent services that perform context discovery, acquisition, and interpretation.

Proceedings ArticleDOI
17 May 2004
TL;DR: The expressive power of ORL is discussed, showing that the ontology consistency problem is undecidable, and options for providing reasoning support for ORL are considered.
Abstract: Although the OWL Web Ontology Language adds considerable expressive power to the Semantic Web, it does have expressive limitations, particularly with respect to what can be said about properties. We present ORL (OWL Rules Language), a Horn clause rules extension to OWL that overcomes many of these limitations. ORL extends OWL in a syntactically and semantically coherent manner: the basic syntax for ORL rules is an extension of the abstract syntax for OWL DL and OWL Lite; ORL rules are given formal meaning via an extension of the OWL DL model-theoretic semantics; ORL rules are given an XML syntax based on the OWL XML presentation syntax; and a mapping from ORL rules to RDF graphs is given based on the OWL RDF/XML exchange syntax. We discuss the expressive power of ORL, showing that the ontology consistency problem is undecidable, provide several examples of ORL usage, and discuss how reasoning support for ORL might be provided.
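A standard illustration of what Horn rules add over OWL's property constructors is the "uncle" rule: hasParent(x, y) ∧ hasBrother(y, z) → hasUncle(x, z), a composition of two properties that OWL 1 cannot express. The toy evaluator below applies just that one rule over a set of property assertions; the individuals are invented.

```python
# Evaluating a single Horn rule over property assertions, in the
# spirit of ORL: hasParent(x,y) & hasBrother(y,z) -> hasUncle(x,z).

def apply_uncle_rule(facts):
    """facts: set of (property, subject, object) assertions."""
    derived = set()
    for (p1, x, y) in facts:
        if p1 != "hasParent":
            continue
        for (p2, y2, z) in facts:
            if p2 == "hasBrother" and y2 == y:  # join on the shared variable y
                derived.add(("hasUncle", x, z))
    return derived

facts = {
    ("hasParent", "mary", "john"),
    ("hasBrother", "john", "bill"),
}
uncles = apply_uncle_rule(facts)
```

The join on the shared variable `y` is what a rule engine does in general; ORL's contribution is giving such rules OWL-compatible syntax and model-theoretic semantics, at the cost of decidability.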

Proceedings ArticleDOI
17 May 2004
TL;DR: A search architecture that combines classical search techniques with spread activation techniques applied to a semantic model of a given domain and it was observed that the proposed hybrid spread activation, combining the symbolic and the sub-symbolic approaches, achieved better results when compared to each of the approaches alone.
Abstract: This paper presents a search architecture that combines classical search techniques with spread activation techniques applied to a semantic model of a given domain. Given an ontology, weights are assigned to links based on certain properties of the ontology, so that they measure the strength of the relation. Spread activation techniques are used to find related concepts in the ontology given an initial set of concepts and corresponding initial activation values. These initial values are obtained from the results of classical search applied to the data associated with the concepts in the ontology. Two test cases were implemented, with very positive results. It was also observed that the proposed hybrid spread activation, combining the symbolic and the sub-symbolic approaches, achieved better results when compared to each of the approaches alone.
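The spreading-activation half of this hybrid search can be sketched directly: activation starts at seed concepts (here, scores from a classical keyword search) and flows along weighted ontology links, attenuated at each hop. The link weights, decay factor, and concept names below are invented for illustration.

```python
# Minimal spreading activation over a weighted concept graph.

def spread(links, seeds, decay=0.5, hops=2):
    """links: {source: [(target, weight), ...]};
    seeds: {concept: initial activation}."""
    activation = dict(seeds)
    frontier = dict(seeds)
    for _ in range(hops):
        nxt = {}
        for node, act in frontier.items():
            for target, w in links.get(node, []):
                # Pass on activation scaled by link strength and decay.
                nxt[target] = nxt.get(target, 0.0) + act * w * decay
        for node, act in nxt.items():
            activation[node] = activation.get(node, 0.0) + act
        frontier = nxt
    return activation

links = {
    "jazz": [("music", 0.9), ("saxophone", 0.8)],
    "music": [("concert", 0.7)],
}
result = spread(links, {"jazz": 1.0})
```

Concepts one hop from the seed ("music", "saxophone") end up more activated than those two hops away ("concert"), which is how related but unmentioned concepts get surfaced alongside classical search results.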

Journal ArticleDOI
TL;DR: Semantic Space is a pervasive computing infrastructure that exploits semantic Web technologies to support explicit representation, expressive querying, and flexible reasoning of contexts in smart spaces.
Abstract: Semantic Space is a pervasive computing infrastructure that exploits semantic Web technologies to support explicit representation, expressive querying, and flexible reasoning of contexts in smart spaces.

Book ChapterDOI
05 Oct 2004
TL;DR: This paper presents the most common difficulties encountered by newcomers to the language, observed during the course of more than a dozen workshops, tutorials and modules about OWL-DL and its predecessor languages.
Abstract: Understanding the logical meaning of any description logic or similar formalism is difficult for most people, and OWL-DL is no exception. This paper presents the most common difficulties encountered by newcomers to the language, that have been observed during the course of more than a dozen workshops, tutorials and modules about OWL-DL and its predecessor languages. It emphasises understanding the exact meaning of OWL expressions – proving that understanding by paraphrasing them in pedantic but explicit language. It addresses, specifically, the confusion which OWL's open world assumption presents to users accustomed to closed world systems such as databases, logic programming and frame languages. Our experience has had a major influence in formulating the requirements for a new set of user interfaces for OWL, the first of which are now available as prototypes. A summary of the guidelines and paraphrases and examples of the new interface are provided. The example ontologies are available online.
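The open-world confusion the paper addresses can be shown in miniature: a closed-world system (a database, a Prolog program) treats an unstated fact as false, while OWL's open-world reading treats it as merely unknown. This three-valued toy is a teaching device, not a reasoner, and the example facts are invented.

```python
# Closed-world vs. open-world treatment of an unstated fact.

known_true = {("italy", "memberOf", "eu")}

def closed_world(fact):
    """Closed world: anything not known true is false."""
    return fact in known_true

def open_world(fact, known_false=frozenset()):
    """Open world: absent facts are unknown (None), not false."""
    if fact in known_true:
        return True
    if fact in known_false:
        return False
    return None

q = ("norway", "memberOf", "eu")
```

A database user expects the query on `q` to come back false; an OWL reasoner refuses to conclude either way unless the negation is explicitly stated, and that gap is precisely what trips up newcomers from closed-world systems.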

Proceedings ArticleDOI
28 Mar 2004
TL;DR: This work proposes Appleseed, a novel proposal for local group trust computation that borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion.
Abstract: Semantic Web endeavors have mainly focused on issues pertaining to knowledge representation and ontology design. However, besides understanding information metadata stated by subjects, knowing about their credibility becomes equally crucial. Hence, trust and trust metrics, conceived as computational means to evaluate trust relationships between individuals, come into play. Our major contributions to semantic Web trust management are twofold. First, we introduce our classification scheme for trust metrics along various axes and discuss advantages and drawbacks of existing approaches for semantic Web scenarios. Hereby, we devise our advocacy for local group trust metrics, guiding us to the second part which presents Appleseed, our novel proposal for local group trust computation. Compelling in its simplicity, Appleseed borrows many ideas from spreading activation models in psychology and relates their concepts to trust evaluation in an intuitive fashion.

Journal ArticleDOI
TL;DR: This paper describes-several aspects of the Piazza PDMS, including the schema mediation formalism, query answering and optimization algorithms, and the relevance of PDMSs to the semantic Web.
Abstract: Intuitively, data management and data integration tools are well-suited for exchanging information in a semantically meaningful way. Unfortunately, they suffer from two significant problems: They typically require a comprehensive schema design before they can be used to store or share information and they are difficult to extend because schema evolution is heavyweight and may break backward compatibility. As a result, many small-scale data sharing tasks are more easily facilitated by nondatabase-oriented tools that have little support for semantics. The goal of the peer data management system (PDMS) is to address this need: We propose the use of a decentralized, easily extensible data management architecture in which any user can contribute new data, schema information, or even mappings between other peers' schemas. PDMSs represent a natural step beyond data integration systems, replacing their single logical schema with an interlinked collection of semantic mappings between peers' individual schemas. This paper describes several aspects of the Piazza PDMS, including the schema mediation formalism, query answering and optimization algorithms, and the relevance of PDMSs to the semantic Web.

Journal ArticleDOI
Borislav Popov1, Atanas Kiryakov1, Damyan Ognyanoff1, Dimitar Manov1, Angel Kirilov1 
TL;DR: The KIM platform as mentioned in this paper provides a knowledge and information management framework and services for automatic semantic annotation, indexing, and retrieval of documents, based on a simple model of real-world entity concepts and quasi-exhaustive instance knowledge.
Abstract: The KIM platform provides a novel Knowledge and Information Management framework and services for automatic semantic annotation, indexing, and retrieval of documents. It provides a mature and semantically enabled infrastructure for scalable and customizable information extraction (IE). Our understanding is that a system for semantic annotation should be based upon a simple model of real-world entity concepts, complemented with quasi-exhaustive instance knowledge. To ensure efficiency, easy sharing, and reusability of the metadata we introduce an upper-level ontology. Based on the ontology, a large-scale instance base of entity descriptions is maintained. The knowledge resources involved are handled by use of state-of-the-art Semantic Web technology and standards, including RDF(S) repositories, ontology middleware and reasoning. From a technical point of view, the platform allows KIM-based applications to use it for automatic semantic annotation, for content retrieval based on semantic queries, and for semantic repository access. As a framework, KIM also allows various IE modules, semantic repositories and information retrieval engines to be plugged into it. This paper presents the KIM platform, with an emphasis on its architecture, interfaces, front-ends, and other technical issues.