
Showing papers on "Semantic Web" published in 2010


Journal ArticleDOI
TL;DR: This paper provides an introduction to ontology-based information extraction and reviews the details of different OBIE systems developed so far, identifying a common architecture among these systems and classifying them based on different factors, which leads to a better understanding of their operation.
Abstract: Information extraction (IE) aims to retrieve certain types of information from natural language text by processing it automatically. For example, an IE system might retrieve information about geopolitical indicators of countries from a set of web pages while ignoring other types of information. Ontology-based information extraction (OBIE) has recently emerged as a subfield of information extraction. Here, ontologies - which provide formal and explicit specifications of conceptualizations - play a crucial role in the IE process. Because of the use of ontologies, this field is related to knowledge representation and has the potential to assist the development of the Semantic Web. In this paper, we provide an introduction to ontology-based information extraction and review the details of different OBIE systems developed so far. We attempt to identify a common architecture among these systems and classify them based on different factors, which leads to a better understanding of their operation. We also discuss the implementation details of these systems, including the tools they use and the metrics used to measure their performance. In addition, we attempt to identify possible future directions for this field.

409 citations


Proceedings ArticleDOI
04 Feb 2010
TL;DR: It is believed that corroboration can serve in a wide range of applications such as source selection in the semantic Web, data quality assessment or semantic annotation cleaning in social networks; this work sets the basis for a wide range of techniques for solving these more complex problems.
Abstract: We consider a set of views stating possibly conflicting facts. Negative facts in the views may come, e.g., from functional dependencies in the underlying database schema. We want to predict the truth values of the facts. Beyond simple methods such as voting (typically rather accurate), we explore techniques based on "corroboration", i.e., taking into account trust in the views. We introduce three fixpoint algorithms corresponding to different levels of complexity of an underlying probabilistic model. They all estimate both truth values of facts and trust in the views. We present experimental studies on synthetic and real-world data. This analysis illustrates how and in which context these methods improve corroboration results over baseline methods. We believe that corroboration can serve in a wide range of applications such as source selection in the semantic Web, data quality assessment or semantic annotation cleaning in social networks. This work sets the basis for a wide range of techniques for solving these more complex problems.
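
To make the corroboration idea concrete, here is a minimal Python sketch, not the paper's actual fixpoint algorithms: it alternates between estimating each fact's truth value from trust-weighted votes and re-estimating each view's trust from its agreement with those estimates. All view names and facts are illustrative.

```python
def corroborate(views, n_iter=20):
    """views: dict mapping view name -> dict of fact -> asserted truth value."""
    trust = {v: 0.5 for v in views}                      # initial trust per view
    facts = {f for stmts in views.values() for f in stmts}
    truth = {f: 0.5 for f in facts}
    for _ in range(n_iter):
        # estimate each fact's probability of truth from trust-weighted votes
        for f in facts:
            num = sum(trust[v] for v, s in views.items() if s.get(f) is True)
            den = sum(trust[v] for v, s in views.items() if f in s)
            truth[f] = num / den if den else 0.5
        # re-estimate each view's trust as its average agreement with the estimates
        for v, stmts in views.items():
            if stmts:
                agree = sum(truth[f] if val else 1.0 - truth[f]
                            for f, val in stmts.items())
                trust[v] = agree / len(stmts)
    return truth, trust

views = {
    "v1": {"fact_a": True, "fact_b": False},
    "v2": {"fact_a": True, "fact_b": True},
    "v3": {"fact_b": True},
}
truth, trust = corroborate(views)
print(truth, trust)
```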

321 citations


Proceedings Article
03 Nov 2010
TL;DR: SenticNet is a publicly available resource for opinion mining built by exploiting AI and Semantic Web techniques; it uses dimensionality reduction to infer the polarity of common sense concepts and hence provides a public resource for mining opinions from natural language text at a semantic, rather than just syntactic, level.
Abstract: Today millions of web-users express their opinions about many topics through blogs, wikis, fora, chats and social networks. For sectors such as e-commerce and e-tourism, it is very useful to automatically analyze the huge amount of social information available on the Web, but the extremely unstructured nature of these contents makes it a difficult task. SenticNet is a publicly available resource for opinion mining built by exploiting AI and Semantic Web techniques. It uses dimensionality reduction to infer the polarity of common sense concepts and hence provides a public resource for mining opinions from natural language text at a semantic, rather than just syntactic, level.

285 citations


Posted Content
TL;DR: In this article, the syntactic differences that a fuzzy ontology language has to cope with are identified and a concrete methodology to represent fuzzy ontologies using OWL 2 annotation properties is proposed.
Abstract: The need to deal with vague information in Semantic Web languages is rising in importance and, thus, calls for a standard way to represent such information. We may address this issue by either extending current Semantic Web languages to cope with vagueness, or by providing a procedure to represent such information within current standard languages and tools. In this work, we follow the latter approach, by identifying the syntactic differences that a fuzzy ontology language has to cope with, and by proposing a concrete methodology to represent fuzzy ontologies using OWL 2 annotation properties. We also report on the prototypical implementations.
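
The flavor of the annotation-based encoding can be sketched with rdflib: a crisp axiom is reified as an owl:Axiom and tagged with a membership degree via an annotation property. The fuzzy:degree property and all example IRIs below are hypothetical stand-ins, not the vocabulary actually proposed in the paper.

```python
from rdflib import Graph, Namespace, Literal, BNode
from rdflib.namespace import OWL, RDF, XSD

EX = Namespace("http://example.org/onto#")
FUZZY = Namespace("http://example.org/fuzzy#")   # hypothetical vocabulary

g = Graph()
g.bind("ex", EX); g.bind("fuzzy", FUZZY); g.bind("owl", OWL)

# Crisp assertion: tom is a TallPerson
g.add((EX.tom, RDF.type, EX.TallPerson))

# OWL 2 axiom annotation carrying the fuzzy degree of that assertion
axiom = BNode()
g.add((axiom, RDF.type, OWL.Axiom))
g.add((axiom, OWL.annotatedSource, EX.tom))
g.add((axiom, OWL.annotatedProperty, RDF.type))
g.add((axiom, OWL.annotatedTarget, EX.TallPerson))
g.add((axiom, FUZZY.degree, Literal("0.8", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```

A fuzzy-aware reasoner can read the degree from the annotation, while standard OWL 2 tools simply ignore it, which is the point of the approach.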

271 citations


Book ChapterDOI
07 Nov 2010
TL;DR: This paper presents a system for finding schema-level links between LOD datasets in the sense of ontology alignment, based on the idea of bootstrapping information already present on the LOD cloud, and presents a comprehensive evaluation which shows that BLOOMS outperforms state-of-the-art ontology alignment systems on LOD datasets.
Abstract: The Web of Data currently coming into existence through the Linked Open Data (LOD) effort is a major milestone in realizing the Semantic Web vision. However, the development of applications based on LOD faces difficulties due to the fact that the different LOD datasets are rather loosely connected pieces of information. In particular, links between LOD datasets are almost exclusively on the level of instances, and schema-level information is being ignored. In this paper, we therefore present a system for finding schema-level links between LOD datasets in the sense of ontology alignment. Our system, called BLOOMS, is based on the idea of bootstrapping information already present on the LOD cloud. We also present a comprehensive evaluation which shows that BLOOMS outperforms state-of-the-art ontology alignment systems on LOD datasets. At the same time, BLOOMS is also competitive compared with these other systems on the Ontology Evaluation Alignment Initiative Benchmark datasets.

270 citations


Book
01 Jan 2010
TL;DR: This monograph contends that provenance can and should reliably be tracked and exploited on the Web, and investigates the necessary foundations to achieve such a vision, as well as identifying an open approach and a model for provenance.
Abstract: Provenance, i.e., the origin or source of something, is becoming an important concern, since it offers the means to verify data products, to infer their quality, to analyse the processes that led to them, and to decide whether they can be trusted. For instance, provenance enables the reproducibility of scientific results; provenance is necessary to track attribution and credit in curated databases; and, it is essential for reasoners to make trust judgements about the information they use over the Semantic Web. As the Web allows information sharing, discovery, aggregation, filtering and flow in an unprecedented manner, it also becomes very difficult to identify, reliably, the original source that produced an information item on the Web. Since the emerging use of provenance in niche applications is undoubtedly demonstrating the benefits of provenance, this monograph contends that provenance can and should reliably be tracked and exploited on the Web, and investigates the necessary foundations to achieve such a vision. Multiple data sources have been used to compile the largest bibliographical database on provenance so far. This large corpus permits the analysis of emerging trends in the research community. Specifically, the CiteSpace tool identifies clusters of papers that constitute research fronts, from which characteristics are extracted to structure a foundational framework for provenance on the Web. Such an endeavour requires a multi-disciplinary approach, since it requires contributions from many computer science sub-disciplines, but also other non-technical fields given the human challenge that is anticipated. To develop such a vision, it is necessary to provide a definition of provenance that applies to the Web context. A conceptual definition of provenance is expressed in terms of processes, and is shown to generalise various definitions of provenance commonly encountered. Furthermore, by bringing realistic distributed systems assumptions, this definition is refined as a query over assertions made by applications. Given that the majority of work on provenance has been undertaken by the database, workflow and e-science communities, some of their work is reviewed, contrasting approaches, and focusing on important topics believed to be crucial for bringing provenance to the Web, such as abstraction, collections, storage, queries, workflow evolution, semantics and activities involving human interactions. However, provenance approaches developed in the context of databases and workflows essentially deal with closed systems. By that, it is meant that workflow or database management systems are in full control of the data they manage, and track their provenance within their own scope, but not beyond. In the context of the Web, a broader approach is required by which chunks of provenance representation can be brought together to describe the provenance of information flowing across multiple systems. For this purpose, this monograph puts forward the Open Provenance Vision, which is an approach that consists of controlled vocabulary, serialisation formats and interfaces to allow the provenance of individual systems to be expressed, connected in a coherent fashion, and queried seamlessly. In this context, the Open Provenance Model is an emerging community-driven representation of provenance, which has been actively used by some 20 teams to exchange provenance information, in line with the Open Provenance Vision. 
After identifying an open approach and a model for provenance, techniques to expose provenance over the Web are investigated. In particular, Semantic Web technologies are discussed since they have been successfully exploited to express, query and reason over provenance. Symmetrically, Semantic Web technologies such as RDF, underpinning the Linked Data effort, are analysed since they present their own difficulties with respect to provenance. A powerful argument for provenance is that it can help make systems transparent, so that it becomes possible to determine whether a particular use of information is appropriate under a set of rules. Such capability helps make systems and information accountable. To offer accountability, provenance itself must be authentic and rely on security approaches, which are described in the monograph. This is then followed by systems where provenance is the basis of an auditing mechanism to check past processes against rules or regulations. In practice, not all users want to check and audit provenance; instead, they may rely on measures of quality or trust. Hence, emerging provenance-based approaches to compute trust and quality of data are reviewed.

248 citations


Journal ArticleDOI
TL;DR: It is shown how the growing Semantic Web provides necessary support for these technologies; the challenges in bringing the technology to the next level are discussed, and some starting places for the research are proposed.

239 citations


Proceedings ArticleDOI
22 Jun 2010
TL;DR: The concept, architecture and key design decisions of the Smart-M3 interoperability platform, which is based on the ideas of space-based information sharing and on Semantic Web ideas about information representation and ontologies, are described.
Abstract: We describe the concept, architecture and key design decisions of the Smart-M3 interoperability platform. The platform is based on the ideas of space-based information sharing and on Semantic Web ideas about information representation and ontologies. The interoperability platform has been used as the basis for multiple case studies.

232 citations


Proceedings ArticleDOI
26 Apr 2010
TL;DR: This work proposes a formal model of one specific semantic search task: ad-hoc object retrieval and shows that this task provides a solid framework to study some of the semantic search problems currently tackled by commercial Web search engines.
Abstract: Semantic Search refers to a loose set of concepts, challenges and techniques having to do with harnessing the information of the growing Web of Data (WoD) for Web search. Here we propose a formal model of one specific semantic search task: ad-hoc object retrieval. We show that this task provides a solid framework to study some of the semantic search problems currently tackled by commercial Web search engines. We connect this task to the traditional ad-hoc document retrieval and discuss appropriate evaluation metrics. Finally, we carry out a realistic evaluation of this task in the context of a Web search application.
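
As a hint of the evaluation metrics carried over from ad-hoc document retrieval, the sketch below computes NDCG over a ranked list of retrieved objects; the relevance grades are made up for illustration, and the paper should be consulted for the metrics it actually recommends.

```python
import math

def dcg(gains):
    """Discounted cumulative gain of relevance grades listed by rank."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains))

ranked_gains = [2, 0, 1]                        # graded relevance by rank
ideal_gains = sorted(ranked_gains, reverse=True)
print(f"NDCG = {dcg(ranked_gains) / dcg(ideal_gains):.3f}")
```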

228 citations


Journal ArticleDOI
TL;DR: This paper provides a learning algorithm based on refinement operators for the description logic ALCQ including support for concrete roles and shows that the approach is superior to other learning approaches on description logics, and is competitive with established ILP systems.
Abstract: With the advent of the Semantic Web, description logics have become one of the most prominent paradigms for knowledge representation and reasoning. Progress in research and applications, however, is constrained by the lack of well-structured knowledge bases consisting of a sophisticated schema and instance data adhering to this schema. It is paramount that suitable automated methods for their acquisition, maintenance, and evolution will be developed. In this paper, we provide a learning algorithm based on refinement operators for the description logic ALCQ including support for concrete roles. We develop the algorithm from thorough theoretical foundations by identifying possible abstract property combinations which refinement operators for description logics can have. Using these investigations as a basis, we derive a practically useful complete and proper refinement operator. The operator is then cast into a learning algorithm and evaluated using our implementation DL-Learner. The results of the evaluation show that our approach is superior to other learning approaches on description logics, and is competitive with established ILP systems.
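
To give a feel for what a downward refinement operator does, here is a toy version over a tiny hand-rolled concept language; it only sketches the general idea and is not DL-Learner's complete and proper operator for ALCQ.

```python
# Concepts are strings (atomic names) or nested tuples:
# ("and", C1, C2) for conjunction, ("some", role, C) for existential restriction.
SUBCLASSES = {"Thing": ["Person", "Place"], "Person": ["Scientist"]}
ROLES = ["knows"]

def refine(concept):
    """Yield concepts strictly more specific than `concept`."""
    if isinstance(concept, str):
        for sub in SUBCLASSES.get(concept, []):   # walk down the class hierarchy
            yield sub
        for r in ROLES:                           # add an existential conjunct
            yield ("and", concept, ("some", r, "Thing"))
    elif concept[0] == "and":
        _, a, b = concept
        for ra in refine(a):
            yield ("and", ra, b)
        for rb in refine(b):
            yield ("and", a, rb)
    elif concept[0] == "some":
        _, r, filler = concept
        for rf in refine(filler):
            yield ("some", r, rf)

for c in refine("Person"):
    print(c)
```

A learner such as DL-Learner then searches the space these refinements span, scoring each candidate concept against positive and negative examples.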

223 citations


01 Jan 2010
TL;DR: This paper discusses common errors in RDF publishing, their consequences for applications, along with possible publisher-oriented approaches to improve the quality of structured, machine-readable and open data on the Web.
Abstract: Over a decade after RDF was published as a W3C recommendation, publishing open and machine-readable content on the Web has recently received a lot more attention, including from corporate and governmental bodies; notably thanks to the Linked Open Data community, there now exists a rich vein of heterogeneous RDF data published on the Web (the so-called "Web of Data") accessible to all. However, RDF publishers are prone to making errors which compromise the effectiveness of applications leveraging the resulting data. In this paper, we discuss common errors in RDF publishing, their consequences for applications, along with possible publisher-oriented approaches to improve the quality of structured, machine-readable and open data on the Web.
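
One error class the paper discusses, ill-typed literals, can be caught by a simple publisher-side check. A minimal rdflib sketch with made-up sample data:

```python
from rdflib import Graph, Literal
from rdflib.namespace import XSD

data = """
@prefix ex: <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:alice ex:age "twenty"^^xsd:integer .
ex:bob   ex:age "42"^^xsd:integer .
"""

g = Graph().parse(data=data, format="turtle")
for s, p, o in g:
    if isinstance(o, Literal) and o.datatype == XSD.integer:
        try:
            int(o)                       # fails for an ill-typed literal
        except ValueError:
            print(f"Ill-typed literal: {s} {p} {o!r}")
```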

01 Jan 2010
TL;DR: To understand how Twitter is practically used for spreading scientific messages, tweets containing official hashtags were captured.

Abstract: According to a survey we recently conducted, Twitter was ranked in the top three services used by Semantic Web researchers to spread information. In order to understand how Twitter is practically used for spreading scientific messages, we captured tweets containing the official hashtags of

Journal ArticleDOI
TL;DR: Instead of developing new semantically enabled services from scratch, this work proposes to create profiles of existing services that implement a transparent mapping between the OGC and the Semantic Web world, and points out how to combine SDI with linked data.
Abstract: Building on abstract reference models, the Open Geospatial Consortium (OGC) has established standards for storing, discovering, and processing geographical information. These standards act as a basis for the implementation of specific services and Spatial Data Infrastructures (SDI). Research on geo-semantics plays an increasing role to support complex queries and retrieval across heterogeneous information sources, as well as for service orchestration, semantic translation, and on-the-fly integration. So far, this research targets individual solutions or focuses on the Semantic Web, leaving the integration into SDI aside. What is missing is a shared and transparent Semantic Enablement Layer for SDI which also integrates reasoning services known from the Semantic Web. Instead of developing new semantically enabled services from scratch, we propose to create profiles of existing services that implement a transparent mapping between the OGC and the Semantic Web world. Finally, we point out how to combine SDI with linked data.

Journal ArticleDOI
TL;DR: This work presents Sig.ma, both a service and an end-user application for accessing the Web of Data as an integrated information space, in which large-scale Semantic Web indexing, logic reasoning, data aggregation heuristics, ad-hoc ontology consolidation, external services and responsive user interaction all play together to create rich entity descriptions.

Proceedings Article
23 Mar 2010
TL;DR: It is argued that the Linked Open Data (LoD) Cloud, in its current form, is only of limited value for furthering the Semantic Web vision and directions for research to remedy the situation are given.
Abstract: In this position paper, we argue that the Linked Open Data (LoD) Cloud, in its current form, is only of limited value for furthering the Semantic Web vision. Being merely a weakly linked triple collection, it will only be of very limited benefit for the AI or Semantic Web communities. We describe the corresponding problems with the LoD Cloud and give directions for research to remedy the situation.

Proceedings Article
01 Nov 2010
TL;DR: The Entity List Completion (ELC) task was introduced in this paper as a pilot task to foster research in entity retrieval. ELC is motivated by the same user scenario as REF, but with the main difference that entities are represented by their URIs in a Semantic Web crawl (the Billion Triple Collection).
Abstract: The issue of combining (noisy) textual material (the Web) with semi-structured data (like Wikipedia or slightly more structured data sources like IMDB) is, however, an interesting line of research. As many data sources, and in particular those being constructed as so-called Linked Open Data (LOD), are naturally organized around entities, it would be reasonable to examine this problem in the context of entity retrieval. To foster research in this direction, we introduced the new Entity List Completion (ELC) pilot task. ELC is motivated by the same user scenario as REF, but with the main difference that entities are represented by their URIs in a Semantic Web crawl (the Billion Triple Collection). In addition, a small number of example entities (defined by their URIs) are made available as part of the topic definition. Our goal is to turn this pilot task into an "official" task in 2011.

Proceedings Article
11 Jul 2010
TL;DR: This paper presents an alternative IC semantics for OWL that allows applications to work with the CWA and the weak UNA and shows that IC validation can be reduced to query answering under certain conditions.
Abstract: In many data-centric Semantic Web applications, it is desirable to use OWL to encode the Integrity Constraints (IC) that must be satisfied by instance data. However, challenges arise due to the Open World Assumption (OWA) and the lack of a Unique Name Assumption (UNA) in OWL's standard semantics. In particular, conditions that trigger constraint violations in systems using the Closed World Assumption (CWA) will generate new inferences in standard OWL-based reasoning applications. In this paper, we present an alternative IC semantics for OWL that allows applications to work with the CWA and the weak UNA. Ontology modelers can choose which OWL axioms are to be interpreted with our IC semantics. Thus, application developers are able to combine open world reasoning with closed world constraint validation in a flexible way. We also show that IC validation can be reduced to query answering under certain conditions. Finally, we describe our prototype implementation based on the OWL reasoner Pellet.
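
The reduction of IC validation to query answering can be pictured concretely: under a closed-world reading, a constraint such as "every Product must have a price" becomes a query that retrieves violators. The sketch below uses plain SPARQL via rdflib with made-up vocabulary; the paper's actual translation targets its own IC semantics and the Pellet reasoner.

```python
from rdflib import Graph

data = """
@prefix ex: <http://example.org/> .
ex:p1 a ex:Product ; ex:price 10 .
ex:p2 a ex:Product .
"""
g = Graph().parse(data=data, format="turtle")

# Violation query: products lacking a price (closed-world check)
violations = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?x WHERE {
        ?x a ex:Product .
        FILTER NOT EXISTS { ?x ex:price ?p }
    }
""")
for row in violations:
    print("IC violation: product without a price:", row.x)
```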

Proceedings ArticleDOI
30 Dec 2010
TL;DR: An Internet of Things virtualization framework that supports sensor event processing and reasoning for connected objects by providing a semantic overlay of the underlying IoT cloud, using the event-driven service-oriented architecture (e-SOA) paradigm.
Abstract: In this paper, we propose an Internet of Things (IoT) virtualization framework to support sensor event processing and reasoning for connected objects by providing a semantic overlay of the underlying IoT cloud. The framework uses the sensor-as-a-service notion to expose the functional aspects of the IoT cloud's connected objects in the form of web services. The framework uses an adapter-oriented approach to address the issue of connectivity with various types of sensor nodes. We employ semantically enhanced access policies to ensure that only authorized parties can access the IoT framework services, which enhances the overall security of the proposed framework. Furthermore, the use of the event-driven service-oriented architecture (e-SOA) paradigm helps the framework leverage the monitoring process by dynamically sensing and responding to different connected object sensor events. We present our design principles and implementations, and demonstrate the development of an IoT application with reasoning capability using a green school motorcycle (GSMC) case study. Our exploration shows that the amalgamation of e-SOA, Semantic Web technologies and virtualization paves the way to addressing the connectivity, security and monitoring issues of the IoT domain.

Book ChapterDOI
01 Jan 2010
TL;DR: This chapter introduces several approaches that have been developed to aid in evaluating ontologies, and presents highlights of OntoQA, an ontology evaluation and analysis tool that uses a set of metrics measuring different aspects of the ontology schema and knowledge base to give insight into the overall characteristics of the ontology.

Abstract: In the last few years, the Semantic Web gained scientific acceptance as the means of sharing knowledge in different domains, and the cornerstone of the Semantic Web is ontologies. Currently, users trying to incorporate ontologies in their applications have to rely on their experience to find a suitable ontology. Methods for evaluating ontology quality and validity, and for ontology characterization and ranking, have been developed for that purpose. In this chapter, we introduce several approaches that have been developed to aid in evaluating ontologies. In addition, we present highlights of OntoQA, an ontology evaluation and analysis tool that uses a set of metrics measuring different aspects of the ontology schema and knowledge base to give insight into the overall characteristics of the ontology. It is important to keep in mind while reading this chapter that the definition of the "goodness" or "validity" of an ontology might vary between different users or different domains.
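
To illustrate the kind of schema metric OntoQA computes, the sketch below calculates a relationship-richness-style ratio (non-inheritance relationships over all relationships) on a made-up miniature schema; OntoQA's exact metric definitions may differ in detail.

```python
from rdflib import Graph
from rdflib.namespace import RDF, RDFS, OWL

schema = """
@prefix ex: <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
ex:Scientist rdfs:subClassOf ex:Person .
ex:Person    rdfs:subClassOf ex:Agent .
ex:worksAt   a owl:ObjectProperty .
"""
g = Graph().parse(data=schema, format="turtle")

n_subclass = len(list(g.triples((None, RDFS.subClassOf, None))))
n_props = len(list(g.triples((None, RDF.type, OWL.ObjectProperty))))
print(f"relationship richness = {n_props / (n_subclass + n_props):.2f}")  # 0.33
```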

Book
15 Sep 2010
TL;DR: Three SOA experts provide a down-to-earth explanation of REST and demonstrate how you can develop simple and elegant distributed hypermedia systems by applying the Web's guiding principles to common enterprise computing problems.
Abstract: Why don't typical enterprise projects go as smoothly as projects you develop for the Web? Does the REST architectural style really present a viable alternative for building distributed systems and enterprise-class applications? In this insightful book, three SOA experts provide a down-to-earth explanation of REST and demonstrate how you can develop simple and elegant distributed hypermedia systems by applying the Web's guiding principles to common enterprise computing problems. You'll learn techniques for implementing specific Web technologies and patterns to solve the needs of a typical company as it grows from modest beginnings to become a global enterprise.
- Learn basic Web techniques for application integration
- Use HTTP and the Web's infrastructure to build scalable, fault-tolerant enterprise applications
- Discover the Create, Read, Update, Delete (CRUD) pattern for manipulating resources
- Build RESTful services that use hypermedia to model state transitions and describe business protocols
- Learn how to make Web-based solutions secure and interoperable
- Extend integration patterns for event-driven computing with the Atom Syndication Format and implement multi-party interactions in AtomPub
- Understand how the Semantic Web will impact systems design
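
The CRUD pattern described in the book maps a resource's lifecycle onto HTTP verbs. Below is a minimal sketch using Python's requests library against a hypothetical ordering service; the URI and the assumption that the server returns the new resource's address in a Location header are illustrative, not taken from the book.

```python
import requests

base = "http://example.org/orders"                       # hypothetical service

# Create: POST a new order; the server mints a URI for it
r = requests.post(base, json={"drink": "latte"})
order_uri = r.headers["Location"]                        # assumed convention

requests.get(order_uri)                                  # Read
requests.put(order_uri, json={"drink": "double latte"})  # Update
requests.delete(order_uri)                               # Delete
```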

Proceedings ArticleDOI
30 Dec 2010
TL;DR: This paper demonstrates how the proposed framework automates interoperability without any modifications to existing standards, devices, or technologies, while providing to the user an intuitive semantic interface with services that can be executed by combining devices in the network.
Abstract: The Internet of Things (IoT) refers to extending the Internet to devices such as home appliances, consumer electronics, and sensor networks. As multiple heterogeneous devices attempt to create area networks, one of the major challenges is the interoperability and composability of their services. The traditional way to address interoperability is to define standards; however, there are many standards and specifications that are incompatible with each other. In this paper we propose an application layer solution for interoperability. The key idea is to utilize device semantics provided by existing specifications and dynamically wrap them in our middleware into semantic services. Next, with the help of Semantic Web technologies, users can create and then execute complex tasks involving multiple heterogeneous devices. We demonstrate how our framework automates interoperability without any modifications to existing standards, devices, or technologies, while providing to the user an intuitive semantic interface with services that can be executed by combining devices in the network.

Journal ArticleDOI
TL;DR: This paper describes CiTO and illustrates its usefulness both for the annotation of bibliographic reference lists and for the visualization of citation networks, as well as harmonizing it with other ontologies describing bibliographies and the rhetorical structure of scientific discourse.
Abstract: CiTO, the Citation Typing Ontology, is an ontology for describing the nature of reference citations in scientific research articles and other scholarly works, both to other such publications and also to Web information resources, and for publishing these descriptions on the Semantic Web. Citations are described in terms of the factual and rhetorical relationships between citing publication and cited publication, the in-text and global citation frequencies of each cited work, and the nature of the cited work itself, including its publication and peer review status. This paper describes CiTO and illustrates its usefulness both for the annotation of bibliographic reference lists and for the visualization of citation networks. The latest version of CiTO, which this paper describes, is CiTO Version 1.6, published on 19 March 2010. CiTO is written in the Web Ontology Language OWL, uses the namespace http://purl.org/net/cito/, and is available from http://purl.org/net/cito/. This site uses content negotiation to deliver to the user an OWLDoc Web version of the ontology if accessed via a Web browser, or the OWL ontology itself if accessed from an ontology management tool such as Protege 4 (http://protege.stanford.edu/). Collaborative work is currently under way to harmonize CiTO with other ontologies describing bibliographies and the rhetorical structure of scientific discourse.
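
A small rdflib sketch of the typed citations CiTO enables, using the namespace given above; the specific property names (cito:extends, cito:supports) follow the ontology's general style but should be checked against CiTO itself, and the DOIs are placeholders.

```python
from rdflib import Graph, Namespace, URIRef

CITO = Namespace("http://purl.org/net/cito/")
g = Graph()
g.bind("cito", CITO)

citing = URIRef("http://example.org/papers/my-article")
cited1 = URIRef("http://dx.doi.org/10.1000/example-1")
cited2 = URIRef("http://dx.doi.org/10.1000/example-2")

g.add((citing, CITO.extends, cited1))    # the citing work builds on cited1
g.add((citing, CITO.supports, cited2))   # the citing work backs up cited2

print(g.serialize(format="turtle"))
```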

Journal ArticleDOI
TL;DR: The philosophical relevance of this new language is explained, its syntactic and semantic structures are expounded and its possible implications for the growth of collective intelligence in cyberspace are pondered.

Proceedings ArticleDOI
22 Mar 2010
TL;DR: The syntax and semantics of the C-SPARQL language are shown, a query graph model is introduced which is an intermediate representation of queries devoted to optimization, and optimizations in terms of rewriting rules applied to the query graph model are introduced.
Abstract: Continuous SPARQL (C-SPARQL) is proposed as a new language for continuous queries over streams of RDF data. It covers a gap in the Semantic Web abstractions which is needed for many emerging applications, including our focus on Urban Computing. In this domain, sensor-based information on roads must be processed to deduce localized traffic conditions and then produce traffic management strategies. Executing C-SPARQL queries requires the effective integration of SPARQL and streaming technologies, which capitalize over a decade of research and development; such integration poses several nontrivial challenges. In this paper we (a) show the syntax and semantics of the C-SPARQL language together with some examples; (b) introduce a query graph model which is an intermediate representation of queries devoted to optimization; (c) discuss the features of an execution environment that leverages existing technologies; (d) introduce optimizations in terms of rewriting rules applied to the query graph model, so as to efficiently exploit the execution environment; and (e) show evidence of the effectiveness of our optimizations on a prototype of execution environment.
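
For flavor, here is an illustrative continuous query in C-SPARQL's registration-plus-window style, held in a Python string since it would be run by a C-SPARQL engine rather than a plain SPARQL endpoint. The stream IRI and vocabulary are invented, and the exact surface syntax should be checked against the paper.

```python
# Register a continuous query over an RDF stream with a 10-minute
# sliding window that advances every minute (illustrative only).
C_SPARQL_QUERY = """
REGISTER QUERY ObservedCars AS
PREFIX c: <http://example.org/city#>
SELECT DISTINCT ?car ?district
FROM STREAM <http://example.org/traffic.trdf> [RANGE 10m STEP 1m]
WHERE { ?car c:observedIn ?district . }
"""
print(C_SPARQL_QUERY)
```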

Proceedings ArticleDOI
26 Apr 2010
TL;DR: Sig.ma uses a holistic approach in which large scale semantic web indexing, logic reasoning, data aggregation heuristics, ad hoc ontology consolidation, external services and responsive user interaction all play together to create rich entity descriptions.
Abstract: We demonstrate Sig.ma, both a service and an end user application to access the Web of Data as an integrated information space. Sig.ma uses a holistic approach in which large scale semantic web indexing, logic reasoning, data aggregation heuristics, ad hoc ontology consolidation, external services and responsive user interaction all play together to create rich entity descriptions. These consolidated entity descriptions then form the base for embeddable data mashups, machine oriented services as well as data browsing services. Finally, we discuss Sig.ma's peculiar characteristics and report on lessons learned and ideas it inspires.

Book ChapterDOI
07 Nov 2010
TL;DR: This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms and demonstrates that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms.
Abstract: The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: whilst the data is available, the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce - a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model, requiring programmers to shoehorn their problem into the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).
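
The core of the model is easy to convey: edges signal, vertices collect. Below is a minimal synchronous Python sketch for single-source shortest paths; the real framework adds asynchronous scheduling, score-guided computation, and distribution, so this is only the bare idea.

```python
import math

# directed graph: vertex -> list of (neighbor, edge weight)
edges = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0)], "c": []}
state = {v: math.inf for v in edges}
state["a"] = 0.0                              # source vertex

for _ in range(len(edges)):                   # enough supersteps to converge
    # signal phase: each edge sends its source's state plus its weight
    inbox = {v: [] for v in edges}
    for src, outs in edges.items():
        for dst, w in outs:
            inbox[dst].append(state[src] + w)
    # collect phase: each vertex keeps the minimum of old state and signals
    for v, signals in inbox.items():
        state[v] = min([state[v]] + signals)

print(state)    # {'a': 0.0, 'b': 1.0, 'c': 2.0}
```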

Journal ArticleDOI
TL;DR: This paper proposes a suite of ontology metrics, at both the ontology and class levels, to measure the design complexity of ontologies, and points out possible applications of the proposed metrics to ontology quality control.

Book ChapterDOI
30 May 2010
TL;DR: This work presents the TrOWL infrastructure for transforming, reasoning, and querying OWL2 ontologies which uses novel techniques such as Quality Guaranteed Approximations and Forgetting to achieve this goal.
Abstract: The Semantic Web movement has led to the publication of thousands of ontologies online. These ontologies present and mediate information and knowledge on the Semantic Web. Tools exist to reason over these ontologies and to answer queries over them, but there are no large scale infrastructures for storing, reasoning, and querying ontologies on a scale that would be useful for a large enterprise or research institution. We present the TrOWL infrastructure for transforming, reasoning, and querying OWL2 ontologies which uses novel techniques such as Quality Guaranteed Approximations and Forgetting to achieve this goal.

07 Nov 2010
TL;DR: This paper describes a linked-data platform, called Sense2Web, to publish sensor data and link it to existing resources on the Semantic Web; the platform supports flexible and interoperable descriptions and provides associations of different sensor data ontologies to resources described on the Semantic Web and the Web of data.
Abstract: This paper describes a linked-data platform to publish sensor data and link it to existing resources on the Semantic Web. The linked sensor data platform, called Sense2Web, supports flexible and interoperable descriptions and provides associations of different sensor data ontologies to resources described on the Semantic Web and the Web of data. The current advancements in (wireless) sensor networks and the ability to manufacture low-cost and energy-efficient hardware for sensors have led to a growing interest in integrating physical world data into the Web. Wireless sensor networks employ various types of hardware and software components to observe and measure physical phenomena and make the obtained data available through different networking services. Applications and users are typically interested in querying various events and requesting measurement and observation data from the physical world. Using a linked data approach enables data consumers to access and query sensor data and its relations to obtain information and/or integrate data from various sources. Global access to sensor data can enable a wide range of applications in different domains such as geographical information systems, healthcare, smart homes, and business applications and scenarios. In this paper we focus on publishing linked data to describe sensors and link them to other existing resources on the Web.
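
The publishing pattern the paper advocates can be sketched with rdflib: describe a sensor in RDF and link it to an existing Web resource so that consumers can query across the links. All vocabulary IRIs below are invented for illustration; Sense2Web's actual ontologies differ.

```python
from rdflib import Graph, Namespace, Literal, URIRef
from rdflib.namespace import RDF

SENSOR = Namespace("http://example.org/sensor#")   # hypothetical vocabulary
g = Graph()
g.bind("sensor", SENSOR)

s = URIRef("http://example.org/sensors/temp-042")
g.add((s, RDF.type, SENSOR.TemperatureSensor))
g.add((s, SENSOR.unit, Literal("Celsius")))
# the link that ties the sensor description into the Web of data
g.add((s, SENSOR.location, URIRef("http://dbpedia.org/resource/Guildford")))

print(g.serialize(format="turtle"))
```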

Book ChapterDOI
30 May 2010
TL;DR: Main contributions compared to previous and related work are data aggregations on several dimensions, a graph visualization that displays and connects relationships between more than two given objects, and an advanced implementation that is highly configurable and applicable to arbitrary RDF datasets.
Abstract: This paper presents an approach for the interactive discovery of relationships between selected elements via the Semantic Web. It emphasizes the human aspect of relationship discovery by offering sophisticated interaction support. Selected elements are first semi-automatically mapped to unique objects of Semantic Web datasets. These datasets are then crawled for relationships which are presented in detail and overview. Interactive features and visual clues allow for a sophisticated exploration of the found relationships. The general process is described and the RelFinder tool, as a concrete implementation and proof-of-concept, is presented and evaluated in a user study. The application potentials are illustrated by a scenario that uses the RelFinder and DBpedia to assist a business analyst in decision-making. Main contributions compared to previous and related work are data aggregations on several dimensions, a graph visualization that displays and connects relationships between more than two given objects, and an advanced implementation that is highly configurable and applicable to arbitrary RDF datasets.