
Showing papers on "Semantic Web published in 2011"


Book
02 Feb 2011
TL;DR: This Synthesis lecture provides readers with a detailed technical introduction to Linked Data, including coverage of relevant aspects of Web architecture, as the basis for application development, research or further study.
Abstract: The World Wide Web has enabled the creation of a global information space comprising linked documents. As the Web becomes ever more enmeshed with our daily lives, there is a growing desire for direct access to raw data not currently available on the Web or bound up in hypertext documents. Linked Data provides a publishing paradigm in which not only documents, but also data, can be a first class citizen of the Web, thereby enabling the extension of the Web with a global data space based on open standards - the Web of Data. In this Synthesis lecture we provide readers with a detailed technical introduction to Linked Data. We begin by outlining the basic principles of Linked Data, including coverage of relevant aspects of Web architecture. The remainder of the text is based around two main themes - the publication and consumption of Linked Data. Drawing on a practical Linked Data scenario, we provide guidance and best practices on: architectural approaches to publishing Linked Data; choosing URIs and vocabularies to identify and describe resources; deciding what data to return in a description of a resource on the Web; methods and frameworks for automated linking of data sets; and testing and debugging approaches for Linked Data deployments. We give an overview of existing Linked Data applications and then examine the architectures that are used to consume Linked Data from the Web, alongside existing tools and frameworks that enable these. Readers can expect to gain a rich technical understanding of Linked Data fundamentals, as the basis for application development, research or further study.
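The publishing principle at the heart of the lecture — URIs that, when dereferenced over HTTP, return raw data rather than an HTML page — rests on content negotiation. A minimal sketch in Python, using only the standard library (the DBpedia URI is just an illustrative example of a Linked Data resource):

```python
from urllib.request import Request

def linked_data_request(resource_uri: str) -> Request:
    """Build an HTTP request that asks for RDF rather than HTML.

    Content negotiation via the Accept header is how a Linked Data
    client signals that it wants machine-readable data (e.g. Turtle)
    instead of the human-readable document served at the same URI.
    """
    return Request(resource_uri, headers={
        "Accept": "text/turtle, application/rdf+xml;q=0.9",
    })

req = linked_data_request("http://dbpedia.org/resource/Berlin")
print(req.get_header("Accept"))
```

Opening this request would return a Turtle description of the resource from a server that honors content negotiation; the request object itself is built without any network access.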

2,174 citations



Journal ArticleDOI
TL;DR: The practice of crowdsourcing is transforming the Web and giving rise to a new field of inquiry into systems that aim to provide real-time information about events in a democratic manner.

1,165 citations


Book
01 Jan 2011
TL;DR: Semantic Web for the Working Ontologist is the essential, comprehensive resource on semantic modeling, for practitioners in health care, artificial intelligence, finance, engineering, military intelligence, enterprise architecture, and more.
Abstract: Semantic Web models and technologies provide information in machine-readable languages that enable computers to access the Web more intelligently and perform tasks automatically without the direction of users. These technologies are relatively recent and advancing rapidly, creating a set of unique challenges for those developing applications. Semantic Web for the Working Ontologist is the essential, comprehensive resource on semantic modeling, for practitioners in health care, artificial intelligence, finance, engineering, military intelligence, enterprise architecture, and more. Focused on developing useful and reusable models, this market-leading book explains how to build semantic content (ontologies) and how to build applications that access that content. New in this edition: coverage of the latest Semantic Web tools for organizing, querying, and processing information; updates on the latest developments and advances in Semantic Web technologies, including SPARQL, RDF and RDFS, OWL 2.0, and SKOS; detailed information on the ontologies used in today's key web applications, including e-commerce, social networking, data mining, use of government data, and more; and even more illustrative examples and case studies that demonstrate what semantic technologies are and how they work together to solve real-world problems.
Table of Contents: 1. What Is The Semantic Web? 2. Semantic Modeling. 3. RDF - The Basis of the Semantic Web. 4. SPARQL - The Query Language for RDF. 5. Semantic Web Application Architecture. 6. RDF and Inferencing. 7. RDF Schema Language. 8. RDFS-Plus. 9. SKOS - the Simple Knowledge Organization System. 10. Ontologies in the Wild: Linked Open Data and the Open Graph Project. 11. Basic OWL. 12. Counting and Sets in OWL. 13. More Ontologies in the Wild: QUDT, GoodRelations, and OBO Foundry. 14. Good and Bad Modeling Practices. 15. OWL 2.0 Levels and Logic. 16. Conclusions. 17. Frequently Asked Questions.
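The progression in the book's contents — RDF as a triple-based data model, then RDFS inference over it — can be illustrated with a toy example (the resource names are invented; real systems use full URIs and a proper triple store):

```python
# Toy RDF graph: triples are (subject, predicate, object) tuples.
triples = {
    ("ex:Fido", "rdf:type", "ex:Dog"),
    ("ex:Dog", "rdfs:subClassOf", "ex:Animal"),
}

def rdfs_subclass_closure(graph: set) -> set:
    """Apply the RDFS entailment rule:
    x rdf:type C  and  C rdfs:subClassOf D  =>  x rdf:type D.

    Iterates to a fixed point, so chains of subclasses are handled.
    """
    inferred = set(graph)
    changed = True
    while changed:
        changed = False
        new = {
            (s, "rdf:type", d)
            for (s, p, c) in inferred if p == "rdf:type"
            for (c2, p2, d) in inferred
            if p2 == "rdfs:subClassOf" and c2 == c
        }
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

closure = rdfs_subclass_closure(triples)
print(("ex:Fido", "rdf:type", "ex:Animal") in closure)  # True
```

The inferred triple was never asserted; it follows from the schema, which is the kind of modeling-driven behavior the book's RDFS and OWL chapters develop in depth.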

600 citations


Journal ArticleDOI
01 Aug 2011
TL;DR: This paper introduces a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems.
Abstract: The generation of RDF data has accelerated to the point where many data sets need to be partitioned across multiple machines in order to achieve reasonable performance when querying the data. Although tremendous progress has been made in the Semantic Web community for achieving high performance data management on a single node, current solutions that allow the data to be partitioned across multiple machines are highly inefficient. In this paper, we introduce a scalable RDF data management system that is up to three orders of magnitude more efficient than popular multi-node RDF data management systems. In so doing, we introduce techniques for (1) leveraging state-of-the-art single node RDF-store technology, (2) partitioning the data across nodes in a manner that helps accelerate query processing through locality optimizations, and (3) decomposing SPARQL queries into high performance fragments that take advantage of how data is partitioned in a cluster.
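The locality idea behind point (2) can be sketched in simplified form (this is plain hash partitioning, not the paper's actual graph-partitioning algorithm): placing all triples with the same subject on the same node means subject-centric "star" patterns can be answered without cross-node traffic.

```python
import zlib

def partition_by_subject(triples, num_nodes):
    """Assign each triple to a node by hashing its subject.

    All triples sharing a subject land on one node, so star-shaped
    SPARQL patterns over that subject need no cross-node joins.
    zlib.crc32 is used because Python's built-in hash() of strings
    is salted per process and therefore not stable across runs.
    """
    nodes = [[] for _ in range(num_nodes)]
    for s, p, o in triples:
        nodes[zlib.crc32(s.encode()) % num_nodes].append((s, p, o))
    return nodes

data = [("ex:a", "ex:p", "ex:b"),
        ("ex:a", "ex:q", "ex:c"),
        ("ex:b", "ex:p", "ex:d")]
nodes = partition_by_subject(data, 4)
# Both ex:a triples end up on the same node by construction.
```

Real systems refine this with graph-aware placement and replication of boundary data, precisely because hash partitioning alone makes path-shaped queries cross machines.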

488 citations


Proceedings ArticleDOI
16 Jul 2011
TL;DR: This paper presents and evaluates LIMES, a novel time-efficient approach for link discovery in metric spaces that utilizes the mathematical characteristics of metric spaces during the mapping process to filter out a large number of those instance pairs that do not suffice the mapping conditions.
Abstract: The Linked Data paradigm has evolved into a powerful enabler for the transition from the document-oriented Web into the Semantic Web. While the amount of data published as Linked Data grows steadily and has surpassed 25 billion triples, less than 5% of these triples are links between knowledge bases. Link discovery frameworks provide the functionality necessary to discover missing links between knowledge bases. Yet, this task requires a significant amount of time, especially when it is carried out on large data sets. This paper presents and evaluates LIMES, a novel time-efficient approach for link discovery in metric spaces. Our approach utilizes the mathematical characteristics of metric spaces during the mapping process to filter out a large number of those instance pairs that do not satisfy the mapping conditions. We present the mathematical foundation and the core algorithms employed in LIMES. We evaluate our algorithms with synthetic data to elucidate their behavior on small and large data sets with different configurations, and compare the runtime of LIMES with another state-of-the-art link discovery tool.
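The filtering idea can be sketched as follows (a deliberate simplification of LIMES, using a one-dimensional metric and a single exemplar point; the names are invented). By the triangle inequality, d(s, t) >= |d(s, e) - d(t, e)|, so precomputed distances to an exemplar e yield a lower bound that rules out many pairs without ever computing d(s, t):

```python
def link_candidates(sources, targets, exemplar, threshold, dist):
    """Return pairs within `threshold` of each other, skipping pairs
    ruled out by the triangle inequality: d(s,t) >= |d(s,e) - d(t,e)|."""
    d_s = {s: dist(s, exemplar) for s in sources}
    d_t = {t: dist(t, exemplar) for t in targets}
    comparisons = 0
    links = []
    for s in sources:
        for t in targets:
            if abs(d_s[s] - d_t[t]) > threshold:
                continue  # lower bound already exceeds the threshold
            comparisons += 1
            if dist(s, t) <= threshold:
                links.append((s, t))
    return links, comparisons

# 1-D example: the metric is the absolute difference.
src, tgt = [1.0, 5.0, 9.0], [1.2, 8.8, 40.0]
links, comps = link_candidates(src, tgt, exemplar=0.0, threshold=0.5,
                               dist=lambda a, b: abs(a - b))
print(links)  # [(1.0, 1.2), (9.0, 8.8)]
print(comps)  # 2 -- instead of the 9 brute-force comparisons
```

With an expensive metric such as edit distance over labels, skipping 7 of 9 comparisons is where the time savings come from; LIMES generalizes this with multiple exemplars chosen to tighten the bounds.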

399 citations


Journal ArticleDOI
TL;DR: A comprehensive review of the overlapping domains of the Sensor Web, citizen sensing and human-in-the-loop sensing in the era of Mobile and Social Web, and the roles these domains can play in environmental and public health surveillance and crisis/disaster informatics can be found in this article.
Abstract: 'Wikification of GIS by the masses' is a phrase-term first coined by Kamel Boulos in 2005, two years earlier than Goodchild's term 'Volunteered Geographic Information'. Six years later (2005-2011), OpenStreetMap and Google Earth (GE) are now full-fledged, crowdsourced 'Wikipedias of the Earth' par excellence, with millions of users contributing their own layers to GE, attaching photos, videos, notes and even 3-D (three dimensional) models to locations in GE. From using Twitter in participatory sensing and bicycle-mounted sensors in pervasive environmental sensing, to creating a 100,000-sensor geo-mashup using Semantic Web technology, to the 3-D visualisation of indoor and outdoor surveillance data in real-time and the development of next-generation, collaborative natural user interfaces that will power the spatially-enabled public health and emergency situation rooms of the future, where sensor data and citizen reports can be triaged and acted upon in real-time by distributed teams of professionals, this paper offers a comprehensive state-of-the-art review of the overlapping domains of the Sensor Web, citizen sensing and 'human-in-the-loop sensing' in the era of the Mobile and Social Web, and the roles these domains can play in environmental and public health surveillance and crisis/disaster informatics. We provide an in-depth review of the key issues and trends in these areas, the challenges faced when reasoning and making decisions with real-time crowdsourced data (such as issues of information overload, "noise", misinformation, bias and trust), the core technologies and Open Geospatial Consortium (OGC) standards involved (Sensor Web Enablement and Open GeoSMS), as well as a few outstanding project implementation examples from around the world.

395 citations




Proceedings ArticleDOI
28 Mar 2011
TL;DR: This work proposes Event Processing SPARQL (EP-SPARQL) as a new language for complex events and Stream Reasoning and provides syntax and formal semantics of the language and devise an effective execution model for the proposed formalism.
Abstract: Streams of events appear increasingly today in various Web applications such as blogs, feeds, sensor data streams, geospatial information, on-line financial data, etc. Event Processing (EP) is concerned with timely detection of compound events within streams of simple events. State-of-the-art EP provides on-the-fly analysis of event streams, but cannot combine streams with background knowledge and cannot perform reasoning tasks. On the other hand, semantic tools can effectively handle background knowledge and perform reasoning thereon, but cannot deal with rapidly changing data provided by event streams. To bridge the gap, we propose Event Processing SPARQL (EP-SPARQL) as a new language for complex events and Stream Reasoning. We provide syntax and formal semantics of the language and devise an effective execution model for the proposed formalism. The execution model is grounded on logic programming, and features effective event processing and inferencing capabilities over temporal and static knowledge. We provide an open-source prototype implementation and present a set of tests to show the usefulness and effectiveness of our approach.
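The core EP task the abstract describes — detecting a compound event as a temporal pattern over simple events — can be sketched without EP-SPARQL syntax (this is a generic sequence operator with a time window, not the paper's formalism; event names and timestamps are invented):

```python
def detect_seq(stream, first, second, window):
    """Detect occurrences of event type `first` followed by `second`
    within `window` time units. Events are (timestamp, type) pairs,
    assumed to arrive ordered by timestamp."""
    matches = []
    pending = []  # timestamps of `first` events still inside the window
    for ts, etype in stream:
        if etype == first:
            pending.append(ts)
        elif etype == second:
            for t0 in pending:
                if 0 < ts - t0 <= window:
                    matches.append((t0, ts))
            # Expire `first` events that have fallen out of the window.
            pending = [t0 for t0 in pending if ts - t0 <= window]
    return matches

stream = [(1, "login"), (2, "error"), (5, "login"), (9, "error"), (30, "error")]
print(detect_seq(stream, "login", "error", window=5))  # [(1, 2), (5, 9)]
```

What EP-SPARQL adds beyond such an operator is exactly what the abstract claims: the ability to join the matched events with static background knowledge and to reason over both.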

380 citations


Journal ArticleDOI
TL;DR: The vision and architecture of a Semantic Web of Things is described: a service infrastructure that makes the deployment and use of semantic applications involving Internet-connected sensors almost as easy as building, searching, and reading a web page today.
Abstract: The developed world is awash with sensors. However, they are typically locked into unimodal closed systems. To unleash their full potential, access to sensors should be opened such that their data and services can be integrated with data and services available in other information systems, facilitating novel applications and services that are based on the state of the real world. We describe our vision and architecture of a Semantic Web of Things: a service infrastructure that makes the deployment and use of semantic applications involving Internet-connected sensors almost as easy as building, searching, and reading a web page today.

337 citations


Journal ArticleDOI
01 Nov 2011
TL;DR: This work presents PARIS, an approach for the automatic alignment of ontologies, which aligns not only instances, but also relations and classes, and thereby provides a truly holistic solution to the problem of ontology alignment.
Abstract: One of the main challenges that the Semantic Web faces is the integration of a growing number of independently designed ontologies. In this work, we present PARIS, an approach for the automatic alignment of ontologies. PARIS aligns not only instances, but also relations and classes. Alignments at the instance level cross-fertilize with alignments at the schema level. Thereby, our system provides a truly holistic solution to the problem of ontology alignment. The heart of the approach is probabilistic, i.e., we measure degrees of matching based on probability estimates. This allows PARIS to run without any parameter tuning. We demonstrate the efficiency of the algorithm and its precision through extensive experiments. In particular, we obtain a precision of around 90% in experiments with some of the world's largest ontologies.
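The probabilistic intuition — two instances are likely the same entity if they agree on the values of highly functional relations (relations with essentially one value per subject, like a birth date) — can be sketched in a much-simplified form. This illustrates the idea only; it is not the PARIS estimator, and the relations, functionality degrees, and entities below are invented example data:

```python
def match_score(entity1, entity2, functionality):
    """Naive probabilistic match: combine independent evidence from
    relations on which both entities have a value. Agreement on a
    highly functional relation (functionality near 1) is strong
    evidence of equality; this sketch ignores disagreements.

    `entity1`/`entity2` map relation -> value; `functionality`
    maps relation -> degree of functionality in [0, 1].
    """
    p_not_equal = 1.0
    for rel, fun in functionality.items():
        if rel in entity1 and rel in entity2:
            if entity1[rel] == entity2[rel]:
                p_not_equal *= (1.0 - fun)  # agreement: evidence of equality
    return 1.0 - p_not_equal

fun = {"bornOn": 0.95, "livesIn": 0.4}
a = {"bornOn": "1879-03-14", "livesIn": "Princeton"}
b = {"bornOn": "1879-03-14", "livesIn": "Princeton"}
c = {"bornOn": "1955-04-18", "livesIn": "Princeton"}
print(round(match_score(a, b, fun), 3))  # 0.97  -- agree on a functional relation
print(round(match_score(a, c, fun), 3))  # 0.4   -- only a weak relation agrees
```

In PARIS itself, functionality degrees are estimated from the data and instance, relation, and class alignments reinforce one another iteratively, which is why no parameter tuning is needed.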

Journal ArticleDOI
TL;DR: The general requirements of an event model for web data are discussed and examples from two use cases are given: historic events and events in the maritime safety and security domain.




Journal ArticleDOI
TL;DR: This work identifies the syntactic differences that a fuzzy ontology language has to cope with, and proposes a concrete methodology to represent fuzzy ontologies using OWL 2 annotation properties.

BookDOI
TL;DR: The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference, Bonn, Germany, October 23-27, 2011, Proceedings, Part I.
Abstract: The Semantic Web - ISWC 2011 - 10th International Semantic Web Conference, Bonn, Germany, October 23-27, 2011, Proceedings, Part I

Journal ArticleDOI
TL;DR: OWLIM is described, a family of semantic repositories that provide storage, inference and novel data-access features delivered in a scalable, resilient, industrial-strength platform.
Abstract: An explosion in the use of RDF for representing information about resources has driven the requirements for Web-scale server systems that can store and process huge quantities of data, and furthermore provide powerful data access and mining functionalities. This paper describes OWLIM, a family of semantic repositories that provide storage, inference and novel data-access features delivered in a scalable, resilient, industrial-strength platform.

Journal ArticleDOI
TL;DR: A concrete implementation approach is presented for a semantic rule checking environment for building design and construction, and an implemented test case for acoustic performance checking illustrates the improvements of such an environment compared to traditionally deployed approaches in rule checking.

23 Oct 2011
TL;DR: SPLENDID, as discussed by the authors, is a query optimization strategy for federating SPARQL endpoints based on statistical data obtained from voiD descriptions, addressing the fact that successful database federation techniques are not straightforward to adapt to RDF.
Abstract: In order to leverage the full potential of the Semantic Web it is necessary to transparently query distributed RDF data sources in the same way as it has been possible with federated databases for ages. However, there are significant differences between the Web of (linked) Data and the traditional database approaches. Hence, it is not straightforward to adapt successful database techniques for RDF federation. Reasons are the missing cooperation between SPARQL end-points and the need for detailed data statistics for estimating the costs of query execution plans. We have implemented SPLENDID, a query optimization strategy for federating SPARQL endpoints based on statistical data obtained from voiD descriptions.
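The role of voiD descriptions in federation can be sketched as a source-selection step (simplified; the endpoints and statistics below are invented, and real voiD descriptions carry richer metadata than a predicate-to-count map):

```python
# voiD-style statistics per endpoint: which predicates each endpoint
# serves, and how many triples it holds for each (illustrative numbers).
void_stats = {
    "http://endpointA/sparql": {"foaf:name": 10_000, "foaf:knows": 50_000},
    "http://endpointB/sparql": {"dbo:population": 3_000},
    "http://endpointC/sparql": {"foaf:name": 200},
}

def select_sources(triple_patterns):
    """For each triple pattern, pick the endpoints whose voiD
    description lists the pattern's predicate. The triple counts
    could then feed a cost-based join-order optimizer, which is
    the harder problem SPLENDID tackles."""
    plan = {}
    for s, p, o in triple_patterns:
        plan[(s, p, o)] = sorted(
            ep for ep, preds in void_stats.items() if p in preds
        )
    return plan

query = [("?x", "foaf:name", "?n"), ("?x", "dbo:population", "?pop")]
for pattern, endpoints in select_sources(query).items():
    print(pattern, "->", endpoints)
```

Pruning endpoints per pattern before execution is what makes federated SPARQL feasible at all: without it, every pattern would have to be sent to every known endpoint.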

Journal ArticleDOI
TL;DR: Key requirements which the visualisation of Linked Data must fulfil in order to lower the technical barrier and make the Web of Data accessible for all are described and proposals for advancing current Linked data visualisation efforts are presented.
Abstract: The uptake and consumption of Linked Data is currently restricted almost entirely to the Semantic Web community. While the utility of Linked Data to non-tech savvy web users is evident, the lack of technical knowledge and an understanding of the intricacies of the semantic technology stack limit such users in their ability to interpret and make use of the Web of Data. A key solution in overcoming this hurdle is to visualise Linked Data in a coherent and legible manner, allowing non-domain and non-technical audiences to obtain a good understanding of its structure, and therefore implicitly compose queries, identify links between resources and intuitively discover new pieces of information. In this paper we describe key requirements which the visualisation of Linked Data must fulfil in order to lower the technical barrier and make the Web of Data accessible for all. We provide an extensive survey of current efforts in the Semantic Web community with respect to our requirements, and identify the potential for visual support to lead to more effective, intuitive interaction of the end user with Linked Data. We close with the conclusions drawn from our survey and analysis, and present proposals for advancing current Linked Data visualisation efforts. Keywords: Linked Data, information visualisation, visual analytics, user-centred design, users, consumption

Journal ArticleDOI
TL;DR: The major contribution of this work is an innovative, comprehensive semantic search model, which extends the classic IR model, addresses the challenges of the massive and heterogeneous Web environment, and integrates the benefits of both keyword and semantic-based search.

Journal ArticleDOI
TL;DR: The current SWSE system is described, initially detailing the architecture and later elaborating upon the function, design, implementation and performance of each individual component, to give an insight into how current Semantic Web standards can be tailored, in a best-effort manner, for use on Web data.

Book ChapterDOI
29 May 2011
TL;DR: It is shown that the adoption of Semantic Web standards can provide added value for lexicon models by supporting a rich axiomatization of linguistic categories that can be used to constrain the usage of the model and to perform consistency checks.
Abstract: There are a large number of ontologies currently available on the Semantic Web. However, in order to exploit them within natural language processing applications, more linguistic information than can be represented in current Semantic Web standards is required. Further, there are a large number of lexical resources available representing a wealth of linguistic information, but this data exists in various formats and is difficult to link to ontologies and other resources. We present a model we call lemon (Lexicon Model for Ontologies) that supports the sharing of terminological and lexicon resources on the Semantic Web as well as their linking to the existing semantic representations provided by ontologies. We demonstrate that lemon can succinctly represent existing lexical resources and in combination with standard NLP tools we can easily generate new lexica for domain ontologies according to the lemon model. We demonstrate that by combining generated and existing lexica we can collaboratively develop rich lexical descriptions of ontology entities. We also show that the adoption of Semantic Web standards can provide added value for lexicon models by supporting a rich axiomatization of linguistic categories that can be used to constrain the usage of the model and to perform consistency checks.

Journal ArticleDOI
TL;DR: A discovery algorithm of associated resources is first proposed to build original ALN for organizing loose Web resources and an application using C-ALN to organize Web services is presented, which shows that C- ALN is an effective and efficient tool for building semantic link on the resources of Web services.
Abstract: Association Link Network (ALN) aims to establish associated relations among various resources. By extending the hyperlink network of the World Wide Web to an association-rich network, ALN is able to effectively support Web intelligence activities such as Web browsing, Web knowledge discovery, publishing, etc. Since existing methods for building semantic links on Web resources cannot effectively and automatically organize loose Web resources, effective Web intelligence activities remain challenging. In this paper, a discovery algorithm for associated resources is first proposed to build the original ALN for organizing loose Web resources. Second, three schemas for constructing the kernel ALN and the connection-rich ALN (C-ALN) are developed gradually to optimize the organization of Web resources. After that, properties of the different types of ALN are discussed, which show that C-ALN performs well in supporting Web intelligence activities. Moreover, an evaluation method is presented to verify the correctness of C-ALN for semantic links on documents. Finally, an application using C-ALN to organize Web services is presented, which shows that C-ALN is an effective and efficient tool for building semantic links on Web service resources.
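The associated-resource discovery step can be sketched as overlap scoring between resources (a generic simplification for illustration; the paper's actual association measure and thresholds are not reproduced here, and the documents below are invented):

```python
def build_aln(resources, threshold=0.3):
    """Link two resources when their keyword sets overlap enough,
    measured by Jaccard similarity |A & B| / |A | B|.
    `resources` maps resource id -> set of keywords."""
    links = []
    ids = sorted(resources)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            ka, kb = resources[a], resources[b]
            jaccard = len(ka & kb) / len(ka | kb)
            if jaccard >= threshold:
                links.append((a, b, round(jaccard, 2)))
    return links

docs = {
    "d1": {"semantic", "web", "rdf"},
    "d2": {"rdf", "sparql", "web"},
    "d3": {"cooking", "recipes"},
}
print(build_aln(docs))  # [('d1', 'd2', 0.5)]
```

The resulting weighted links form the "original ALN"; the kernel and connection-rich variants the abstract mentions would then prune and enrich this graph.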

Journal ArticleDOI
TL;DR: This paper describes a framework that is built using Hadoop to store and retrieve large numbers of RDF triples by exploiting the cloud computing paradigm and shows that this framework is scalable and efficient and can handle large amounts of R DF data, unlike traditional approaches.
Abstract: The Semantic Web is an emerging area that aims to augment human reasoning. Various technologies are being developed in this arena which have been standardized by the World Wide Web Consortium (W3C). One such standard is the Resource Description Framework (RDF). Semantic web technologies can be utilized to build efficient and scalable systems for Cloud Computing. With the explosion of semantic web technologies, large RDF graphs are commonplace. This poses significant challenges for the storage and retrieval of RDF graphs. Current frameworks do not scale for large RDF graphs and as a result do not address these challenges. In this paper, we describe a framework that we built using Hadoop to store and retrieve large numbers of RDF triples by exploiting the cloud computing paradigm. We describe a scheme to store RDF data in the Hadoop Distributed File System. More than one Hadoop job (the smallest unit of execution in Hadoop) may be needed to answer a query because a single triple pattern in a query cannot simultaneously take part in more than one join in a single Hadoop job. To determine the jobs, we present an algorithm to generate a query plan, whose worst case cost is bounded, based on a greedy approach to answer a SPARQL Protocol and RDF Query Language (SPARQL) query. We use Hadoop's MapReduce framework to answer the queries. Our results show that we can store large RDF graphs in Hadoop clusters built with cheap commodity class hardware. Furthermore, we show that our framework is scalable and efficient and can handle large amounts of RDF data, unlike traditional approaches.
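The join the abstract describes — two triple patterns sharing a variable, answered by mapping triples to their join key and then combining bindings per key — can be sketched in plain Python (a single-machine illustration of the MapReduce idea, not the paper's Hadoop implementation; the FOAF data is invented):

```python
from collections import defaultdict

def mapreduce_join(triples, pattern1, pattern2):
    """Join two triple patterns that share their subject variable,
    MapReduce-style: map each matching triple to (join key, binding),
    then reduce by combining the bindings grouped under each key."""
    def matches(triple, pattern):
        # A pattern term starting with "?" is a variable (wildcard).
        return all(pv.startswith("?") or pv == tv
                   for tv, pv in zip(triple, pattern))

    # Map phase: group each matching triple's object under its subject.
    groups = defaultdict(lambda: ([], []))
    for t in triples:
        if matches(t, pattern1):
            groups[t[0]][0].append(t[2])
        if matches(t, pattern2):
            groups[t[0]][1].append(t[2])

    # Reduce phase: cross product of the bindings that share a key.
    return [(s, o1, o2)
            for s, (l1, l2) in sorted(groups.items())
            for o1 in l1 for o2 in l2]

data = [("ex:alice", "foaf:knows", "ex:bob"),
        ("ex:alice", "foaf:name", "Alice"),
        ("ex:carol", "foaf:name", "Carol")]
print(mapreduce_join(data,
                     ("?x", "foaf:knows", "?y"),
                     ("?x", "foaf:name", "?n")))
# [('ex:alice', 'ex:bob', 'Alice')]
```

Because one map/reduce pass handles one join key, a query whose patterns join on several different variables needs a chain of such jobs, which is exactly why the paper's greedy planner minimizes the number of Hadoop jobs.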

Proceedings ArticleDOI
Sören Auer1
18 Apr 2011
TL;DR: An overview of the Linked Data life-cycle is presented and some promising approaches with regard to extraction, storage and querying, authoring, linking, enrichment, quality analysis, evolution, as well as search and exploration of Linked data are discussed.
Abstract: Over the past 4 years, the semantic web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges in the area of the Semantic Web vision: the exploitation of the Web as a platform for data and information integration. To translate this initial success into a world-scale reality, a number of research challenges need to be addressed. While many standards, methods and technologies developed within the Semantic Web activity are applicable for Linked Data, there are also a number of specific characteristics of Linked Data which have to be considered. In this talk we present an overview of the Linked Data life-cycle and discuss some promising approaches with regard to extraction, storage and querying, authoring, linking, enrichment, quality analysis, evolution, as well as search and exploration of Linked Data.

Journal ArticleDOI
TL;DR: The main conclusion from this study is that reasoners vary significantly with regard to all included characteristics, and therefore a critical assessment and evaluation of requirements is needed before selecting a reasoner for a real-life application.
Abstract: This paper provides a survey to and a comparison of state-of-the-art Semantic Web reasoners that succeed in classifying large ontologies expressed in the tractable OWL 2 EL profile. Reasoners are characterized along several dimensions: The first dimension comprises underlying reasoning characteristics, such as the employed reasoning method and its correctness as well as the expressivity and worst-case computational complexity of its supported language and whether the reasoner supports incremental classification, rules, justifications for inconsistent concepts and ABox reasoning tasks. The second dimension is practical usability: whether the reasoner implements the OWL API and can be used via OWLlink, whether it is available as Protege plugin, on which platforms it runs, whether its source is open or closed and which license it comes with. The last dimension contains performance indicators that can be evaluated empirically, such as classification, concept satisfiability, subsumption checking and consistency checking performance as well as required heap space and practical correctness, which is determined by comparing the computed concept hierarchies with each other. For the very large ontology SNOMED CT, which is released both in stated and inferred form, we test whether the computed concept hierarchies are correct by comparing them to the inferred form of the official distribution. The reasoners are categorized along the defined characteristics and benchmarked against well-known biomedical ontologies. The main conclusion from this study is that reasoners vary significantly with regard to all included characteristics, and therefore a critical assessment and evaluation of requirements is needed before selecting a reasoner for a real-life application.

Book
03 Jan 2011
TL;DR: Software developers in industry and students specializing in Web development or Semantic Web technologies will find in this book the most complete guide to this exciting field available today.
Abstract: The Semantic Web represents a vision for how to make the huge amount of information on the Web automatically processable by machines on a large scale. For this purpose, a whole suite of standards, technologies and related tools have been specified and developed over the last couple of years, and they have now become the foundation for numerous new applications. A Developer's Guide to the Semantic Web helps the reader to learn the core standards, key components, and underlying concepts. It provides in-depth coverage of both the what-is and how-to aspects of the Semantic Web. From Yu's presentation, the reader will obtain not only a solid understanding of the Semantic Web, but also learn how to combine all the pieces to build new applications on the Semantic Web. Software developers in industry and students specializing in Web development or Semantic Web technologies will find in this book the most complete guide to this exciting field available today. Based on the step-by-step presentation of real-world projects, where the technologies and standards are applied, they will acquire the knowledge needed to design and implement state-of-the-art applications.

Journal ArticleDOI
TL;DR: A survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web, and the potential of this technology to go beyond the current state of the art to support end-users in reusing and querying the SW content.
Abstract: With the recent rapid growth of the Semantic Web (SW), the processes of searching and querying content that is both massive in scale and heterogeneous have become increasingly challenging. User-friendly interfaces, which can support end users in querying and exploring this novel and diverse, structured information space, are needed to make the vision of the SW a reality. We present a survey on ontology-based Question Answering (QA), which has emerged in recent years to exploit the opportunities offered by structured semantic information on the Web. First, we provide a comprehensive perspective by analyzing the general background and history of the QA research field, from influential works from the artificial intelligence and database communities developed in the 70s and later decades, through open domain QA stimulated by the QA track in TREC since 1999, to the latest commercial semantic QA solutions, before tackling the current state of the art in open user-friendly interfaces for the SW. Second, we examine the potential of this technology to go beyond the current state of the art to support end-users in reusing and querying the SW content. We conclude our review with an outlook for this novel research area, focusing in particular on the R&D directions that need to be pursued to realize the goal of efficient and competent retrieval and integration of answers from large scale, heterogeneous, and continuously evolving semantic sources.