
Showing papers presented at "Web Reasoning and Rule Systems in 2010"


Book ChapterDOI
22 Sep 2010
TL;DR: It is shown that, if the notion of repair studied in databases is used, inconsistency-tolerant query answering is intractable, even for the simplest form of queries.
Abstract: We address the problem of dealing with inconsistencies in Description Logic (DL) knowledge bases. Our general goal is both to study DL semantical frameworks which are inconsistency-tolerant, and to devise techniques for answering unions of conjunctive queries posed to DL knowledge bases under such inconsistency-tolerant semantics. Our work is inspired by the approaches to consistent query answering in databases, which are based on the idea of living with inconsistencies in the database, but trying to obtain only consistent information during query answering, by relying on the notion of database repair. We show that, if we use the notion of repair studied in databases, inconsistency-tolerant query answering is intractable, even for the simplest form of queries. Therefore, we study different variants of the repair-based semantics, with the goal of reaching a good compromise between expressive power of the semantics and computational complexity of inconsistency-tolerant query answering.

226 citations
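The repair-based semantics the paper starts from can be sketched in a few lines of Python. The facts, the disjointness constraint, and all names below are hypothetical; the brute-force enumeration of all maximal consistent subsets mirrors why this semantics is intractable in general.

```python
from itertools import combinations

# Toy ABox with one hypothetical disjointness constraint:
# nobody may be both a Student and a Professor.
facts = {("Student", "ann"), ("Professor", "ann"), ("Professor", "bob")}

def consistent(kb):
    people = {x for (_, x) in kb}
    return all(not ((("Student", p) in kb) and (("Professor", p) in kb))
               for p in people)

def repairs(kb):
    """All repairs: maximal (w.r.t. set inclusion) consistent subsets of kb."""
    candidates = [set(c) for r in range(len(kb) + 1)
                  for c in combinations(sorted(kb), r)
                  if consistent(set(c))]
    return [s for s in candidates if not any(s < t for t in candidates)]

def certain(fact, kb):
    """Inconsistency-tolerant answer: a fact is entailed iff it holds
    in every repair."""
    return all(fact in rep for rep in repairs(kb))
```

Here `certain(("Professor", "bob"), facts)` holds because bob's assertion survives in every repair, while `certain(("Professor", "ann"), facts)` fails: one repair keeps ann as a Student instead.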


Book ChapterDOI
22 Sep 2010
TL;DR: The complexity of query answering under the Datalog+/- class of decidable languages is investigated, and in addition the novel class of sticky-join sets of TGDs is presented, which generalizes both sticky sets of TGDs and so-called linear TGDs, an extension of inclusion dependencies.
Abstract: In ontology-based data access, an extensional database is enhanced by an ontology that generates new intensional knowledge which has to be considered when answering queries. In this setting, tractable data complexity (i.e., complexity w.r.t. the data only) of query answering is crucial, given the need to deal with large data sets. A well-known class of tractable ontology languages is the DL-lite family; however, in DL-lite it is impossible to express simple and useful integrity constraints that involve joins. To overcome this limitation, the Datalog+/- class of decidable languages uses tuple-generating dependencies (TGDs) as rules, thus allowing for conjunctions of atoms in the rule bodies, with suitable limitations to ensure decidability. In particular, sticky sets of TGDs allow for joins and variable repetition in rule bodies under certain conditions. In this paper we extend the notion of stickiness by introducing weakly-sticky sets of TGDs, which also generalize the well-known weakly-acyclic sets of TGDs. We investigate the complexity of query answering under this language, and in addition we provide novel complexity results on weakly-acyclic sets of TGDs. Moreover, we present the novel class of sticky-join sets of TGDs, which generalizes both sticky sets of TGDs and so-called linear TGDs, an extension of inclusion dependencies.

115 citations
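A single chase step over a TGD with a join in its body can be sketched as follows. The TGD, the predicates, and the constants are hypothetical illustrations; the point is that the two body atoms share a variable, a join that DL-Lite assertions cannot express.

```python
from itertools import count

# Facts are (predicate, args) pairs; _fresh generates labelled nulls
# standing in for existentially quantified head variables.
facts = {("supervises", ("ann", "bob")), ("employee", ("bob",))}
_fresh = count()

def chase_step(facts):
    """One application of the hypothetical TGD
        supervises(x, y), employee(y)  ->  exists z. reports(y, z)
    A new fact with a fresh null is added only when the head is not
    already satisfied for the matched y."""
    new = set()
    for pred, args in facts:
        if pred != "supervises":
            continue
        x, y = args
        if ("employee", (y,)) not in facts:
            continue
        if not any(p == "reports" and a[0] == y for p, a in facts):
            new.add(("reports", (y, f"_z{next(_fresh)}")))
    return facts | new
```

Conditions like stickiness restrict how such joined variables may propagate through chase steps, which is what keeps query answering decidable.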


Book ChapterDOI
22 Sep 2010
TL;DR: This paper presents an expressive logic-based language for specifying and combining complex events, provides both a syntax and a formal declarative semantics for it, and reports performance results showing the competitiveness of the approach.
Abstract: Complex Event Processing (CEP) is concerned with timely detection of complex events within multiple streams of atomic occurrences. It has useful applications in areas including financial services, mobile and sensor devices, click-stream analysis, etc. Numerous approaches in CEP have already been proposed in the literature. Event processing systems with a logic-based representation have attracted considerable attention as (among other reasons) they feature formal semantics and offer reasoning services. However, logic-based approaches are not optimized for run-time event recognition (as they are mainly query-driven systems). In this paper, we present an expressive logic-based language for specifying and combining complex events. For this language we provide both a syntax and a formal declarative semantics. The language enables efficient run-time event recognition and supports deductive reasoning. The execution model of the language is based on a compilation strategy into Prolog. We provide an implementation of the language, and present performance results showing the competitiveness of our approach.

111 citations
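The kind of run-time, event-driven recognition described above can be illustrated with a minimal sketch. The event shapes, the SEQ operator, and the window parameter are hypothetical simplifications of what such a language would compile to.

```python
from collections import namedtuple

Event = namedtuple("Event", "kind key time")

def detect_seq(stream, first, second, window):
    """Detect a complex event SEQ(first, second): a `first` event
    followed by a `second` event with the same key within `window`
    time units. One event-driven pass over the stream, which is
    assumed ordered by timestamp."""
    pending = {}        # key -> timestamp of the latest pending `first`
    matches = []
    for ev in stream:
        if ev.kind == first:
            pending[ev.key] = ev.time
        elif ev.kind == second and ev.key in pending:
            if ev.time - pending.pop(ev.key) <= window:
                matches.append((ev.key, ev.time))
    return matches
```

Unlike a query-driven system, which would re-evaluate the pattern on demand, the partial matches in `pending` are updated incrementally as events arrive.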


Book ChapterDOI
22 Sep 2010
TL;DR: RPL is implemented by transformation to extended nested regular expressions (NREs); this demo is the first implementation of NREs (or similarly expressive RDF path languages) with polynomial data complexity.
Abstract: RPL (pronounced "ripple") is the most expressive path language for navigating in RDF graphs proposed to date that can still be evaluated with polynomial combined complexity. RPL is a lean language well-suited for integration into RDF rule languages. This integration enables a limited form of recursion for traversing RDF paths of unknown length at almost no additional cost over conjunctive triple patterns. We demonstrate the power, ease, and efficiency of RPL with two applications on top of the RPL Web interface. The demonstrator implements RPL by transformation to extended nested regular expressions (NREs). For these extended NREs we have implemented an evaluation algorithm with polynomial data complexity. To the best of our knowledge, this demo is the first implementation of NREs (or similarly expressive RDF path languages) with this complexity.

28 citations
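The limited recursion that a path language like RPL adds on top of conjunctive triple patterns amounts to Kleene-star traversal over edge labels. The graph and labels below are hypothetical; the fixpoint loop is what gives polynomial data complexity.

```python
graph = {  # RDF-like edge-labelled graph as (subject, label, object) triples
    ("ann", "knows", "bob"),
    ("bob", "knows", "cat"),
    ("cat", "worksAt", "acme"),
}

def follow(graph, nodes, label):
    """All nodes reachable from `nodes` via one `label` edge."""
    return {o for (s, l, o) in graph if l == label and s in nodes}

def star(graph, nodes, label):
    """Nodes reachable via `label*` (zero or more `label` edges),
    computed as a reflexive-transitive closure."""
    seen, frontier = set(nodes), set(nodes)
    while frontier:
        frontier = follow(graph, frontier, label) - seen
        seen |= frontier
    return seen
```

Composing the two, `follow(graph, star(graph, {"ann"}, "knows"), "worksAt")` answers the path query `knows*/worksAt` starting from `ann`, traversing a path of unknown length.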


Book ChapterDOI
22 Sep 2010
TL;DR: This paper analyzes the semantics of AIR language by giving the declarative semantics that support the reasoning algorithm, providing complexity of AIR inference, and evaluating the expressiveness of language by encoding Logic Programs of different expressivities in AIR.
Abstract: The Accountability In RDF (AIR) language is an N3-based Semantic Web production rule language that supports nested activation of rules, negation, closed-world reasoning, scoped contextualized reasoning, and explanation of inferred facts. Each AIR rule has a unique identifier (typically an HTTP URI) that supports rule reuse. In this paper we analyze the semantics of the AIR language by: i) giving the declarative semantics that support the reasoning algorithm; ii) providing the complexity of AIR inference; and iii) evaluating the expressiveness of the language by encoding Logic Programs of different expressivities in AIR.

28 citations


Book ChapterDOI
22 Sep 2010
TL;DR: A fine-grained complexity analysis of both graph and rule minimisation, in various settings of the problem of redundancy elimination on RDF graphs in the presence of rules and constraints, is presented.
Abstract: Based on practical observations on rule-based inference on RDF data, we study the problem of redundancy elimination on RDF graphs in the presence of rules (in the form of Datalog rules) and constraints (in the form of so-called tuple-generating dependencies), as well as with respect to queries (ranging from conjunctive queries up to more complex ones, particularly covering features of SPARQL, such as union, negation, or filters). To this end, we investigate the influence of several problem parameters (like restrictions on the size of the rules, the constraints, and/or the queries) on the complexity of detecting redundancy. The main result of this paper is a fine-grained complexity analysis of both graph and rule minimisation in various settings.

26 citations
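The core of graph minimisation in the presence of rules can be sketched greedily: drop a triple whenever the rules can re-derive it from the rest. The single transitivity rule below is a hypothetical stand-in for the Datalog rule sets the paper considers; the paper's contribution is the precise complexity of this problem, not this naive procedure.

```python
def close(triples):
    """Saturate under one rule: transitivity of subClassOf."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for (x, p, y) in list(facts):
            for (y2, q, z) in list(facts):
                if p == q == "subClassOf" and y == y2 \
                        and (x, "subClassOf", z) not in facts:
                    facts.add((x, "subClassOf", z))
                    changed = True
    return facts

def minimise(triples):
    """Greedy redundancy elimination: drop any triple that the rule
    can re-derive from the remaining ones."""
    core = set(triples)
    for t in sorted(triples):
        rest = core - {t}
        if t in close(rest):
            core = rest
    return core
```

On the triples A ⊑ B, B ⊑ C, A ⊑ C, the derived triple A ⊑ C is recognized as redundant and removed.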


Book ChapterDOI
22 Sep 2010
TL;DR: An abduction-based formalism is proposed that uses description logics for the ontology and Horn rules for defining the space of hypotheses for explanations, and uses Markov logic to define the motivation for the agent to generate explanations.
Abstract: We propose an abduction-based formalism that uses description logics for the ontology and Horn rules for defining the space of hypotheses for explanations, and we use Markov logic to define the motivation for the agent to generate explanations on the one hand, and for ranking different explanations on the other. The formalism is applied to media interpretation problems in an agent-oriented scenario.

16 citations


Book ChapterDOI
22 Sep 2010
TL;DR: A polynomial time algorithm is provided that, given an EL KB Σ, a set S of secrets to be protected and a query q, outputs "Yes" whenever Σ ⊧ q and the answer to q, together with the answers to any previous queries answered by the KB, does not allow the querying agent to deduce any of the secrets in S.
Abstract: We consider the problem of answering queries against an EL knowledge base (KB) using secrets, whenever it is possible to do so without compromising secrets. We provide a polynomial time algorithm that, given an EL KB Σ, a set S of secrets to be protected and a query q, outputs "Yes" whenever Σ ⊧ q and the answer to q, together with the answers to any previous queries answered by the KB, does not allow the querying agent to deduce any of the secrets in S. This approach allows more flexible information sharing than is possible with traditional access control mechanisms.

16 citations
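The censoring behaviour can be sketched with a toy forward-chaining entailment check. The KB, the rule, and the secret below are hypothetical; the real algorithm works over EL knowledge bases, not propositional facts.

```python
def forward_close(facts, rules):
    """Naive forward chaining: rules are (frozenset(body), head) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

# Hypothetical KB: seeing an oncologist lets an agent infer the secret.
RULES = [(frozenset({"seesOncologist(joe)"}), "hasCancer(joe)")]
KB = {"seesOncologist(joe)", "worksAt(joe, acme)"}
SECRETS = {"hasCancer(joe)"}

def answer(query, history):
    """Toy censor: say 'Yes' only if the KB entails the query AND the
    answers released so far, together with this one, do not let the
    querying agent derive a secret."""
    if query not in forward_close(KB, RULES):
        return "Unknown"
    if forward_close(history | {query}, RULES) & SECRETS:
        return "Unknown"        # refuse rather than leak
    history.add(query)
    return "Yes"
```

Answering "Unknown" instead of refusing outright is what makes the approach more flexible than access control: the agent cannot distinguish protected facts from genuinely unknown ones.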


Book ChapterDOI
22 Sep 2010
TL;DR: This paper presents methods to find an optimal axiom labeling to enforce query-based access restrictions and reports experiments on real world data showing that a significant number of results are retained using the axiom filtering method.
Abstract: Role-based access control is a standard mechanism in information systems. Based on the role a user has, certain information is kept from the user even if requested. For ontologies representing knowledge, deciding what can be told to a user without revealing secrets is more difficult as the user might be able to infer secret knowledge using logical reasoning. In this paper, we present two approaches to solving this problem: query rewriting vs. axiom filtering, and show that while both approaches prevent the unveiling of secret knowledge, axiom filtering is more complete in the sense that it does not suppress knowledge the user is allowed to see while this happens frequently in query rewriting. Axiom filtering requires that each axiom carries a label representing its access level. We present methods to find an optimal axiom labeling to enforce query-based access restrictions and report experiments on real world data showing that a significant number of results are retained using the axiom filtering method.

15 citations
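The axiom-filtering side of the comparison is simple to sketch. The axioms and labels below are hypothetical; the paper's harder problem is computing an optimal labelling in the first place.

```python
# Each axiom carries an access-level label; a user with clearance c
# sees exactly the axioms labelled at or below c.
LABELLED_AXIOMS = [
    ("Cat SubClassOf Animal",     0),   # public
    ("Patient SubClassOf Person", 0),
    ("joe Type Patient",          2),   # restricted
]

def visible_ontology(labelled_axioms, clearance):
    """Axiom filtering: the sub-ontology a user may reason over.
    Every axiom the user is allowed to see is retained, so no
    permitted consequence is suppressed, unlike query rewriting."""
    return [ax for ax, level in labelled_axioms if level <= clearance]
```

A reasoner then runs over `visible_ontology(...)` only, so the user cannot combine hidden axioms with visible ones to infer secrets.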


Book ChapterDOI
22 Sep 2010
TL;DR: In this PhD thesis, the termination problem of the chase algorithm, a central tool in various database problems such as the constraint implication problem, conjunctive query optimization, rewriting queries using views, data exchange, and data integration, is studied.
Abstract: In my PhD thesis I study the termination problem of the chase algorithm, a central tool in various database problems such as the constraint implication problem, conjunctive query optimization, rewriting queries using views, data exchange, and data integration.

11 citations


Book ChapterDOI
22 Sep 2010
TL;DR: A declarative service specification language and a calculus for service composition are proposed that formalize the problem of service composition in the framework of a constructive description logic.
Abstract: We formalize the problem of service composition in the framework of a constructive description logic. We propose a declarative service specification language and a calculus for service composition.

Book ChapterDOI
22 Sep 2010
TL;DR: The potential of conditional hedge transformations in Web-related applications is illustrated on the example of PρLog: an extension of logic programming with advanced rule-based programming features for hedge transformations, strategies, and regular constraints.
Abstract: We illustrate the potential of conditional hedge transformations in Web-related applications on the example of PρLog: an extension of logic programming with advanced rule-based programming features for hedge transformations, strategies, and regular constraints.

Proceedings Article
01 Jan 2010
TL;DR: In this article, the authors present an Integrity Constraint (IC) semantics for OWL axioms to address the issue of data validation in data integration and analysis tasks, and they also show that IC validation can be reduced to query answering under certain conditions.
Abstract: Data validation is an important part of data integration and analysis tasks. The consequences of having invalid data range from rather harmless application failures to serious errors in the decision-making process. Web Ontology Language (OWL) provides an expressive language that facilitates data integration and analysis tasks. However, the Open World Assumption (OWA) adopted by standard OWL semantics, combined with the absence of the Unique Name Assumption (UNA), makes it difficult to use OWL for data validation. What triggers constraint violations in closed world systems leads to new inferences in standard OWL systems. In this paper, we present an Integrity Constraint (IC) semantics for OWL axioms to address this issue. Ontology modelers can choose which axioms will be interpreted with IC semantics and combine open world reasoning with closed world constraint validation in a flexible way. We also show that IC validation can be reduced to query answering under certain conditions.
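The reduction of IC validation to query answering can be illustrated with one closed-world check. The constraint, predicates, and individuals below are hypothetical.

```python
triples = {
    ("p1", "rdf:type", "Product"),
    ("p1", "hasPrice", "9.99"),
    ("p2", "rdf:type", "Product"),   # no price: an IC violation
}

def ic_violations(triples):
    """Closed-world check of the hypothetical constraint 'every
    Product has a hasPrice value': the IC is validated by querying
    for counter-examples. Standard OWL semantics would instead just
    infer that some unknown price exists, triggering no violation."""
    products = {s for (s, p, o) in triples
                if p == "rdf:type" and o == "Product"}
    priced = {s for (s, p, o) in triples if p == "hasPrice"}
    return products - priced
```

An empty result means the constraint holds; a non-empty result names the violating individuals, here `p2`.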

Proceedings Article
22 Sep 2010
TL;DR: It is shown that the modified logic is classically sound and that its embedding into classical SROIQ is consequence preserving, and that inserting special axioms into a SRO IQ4 knowledge base allows additional nontrivial conclusions to be drawn, without affecting paraconsistency.
Abstract: The four-valued paraconsistent logic SROIQ4, originally presented by Ma and Hitzler, is extended to incorporate additional elements of SROIQ. It is shown that the modified logic is classically sound and that its embedding into classical SROIQ is consequence preserving. Furthermore, inserting special axioms into a SROIQ4 knowledge base allows additional nontrivial conclusions to be drawn, without affecting paraconsistency. It is also shown that the interaction of nominals and cardinality restrictions prevents some SROIQ4 knowledge bases from having models. For such knowledge bases, the logic remains explosive.

Book ChapterDOI
22 Sep 2010
TL;DR: This tutorial gives an overview of new features in SPARQL 1.1, which the W3C is currently working on, as well as on the interplay with its "neighbour standards", OWL2 and RIF.
Abstract: In this tutorial we will give an overview of new features in SPARQL 1.1, which the W3C is currently working on, as well as of the interplay with its "neighbour standards", OWL2 and RIF. We will also give a rough overview of existing implementations to play around with.

Book ChapterDOI
22 Sep 2010
TL;DR: This work offers a characterisation for DL fragments that can be expressed, in a concrete sense, in datalog, and determines the largest such fragment for the DL ALC, and provides an outlook on the extension of the authors' methods to more expressive DLs.
Abstract: Translations to (first-order) datalog have been used in a number of inferencing techniques for description logics (DLs), yet the relationship between the semantic expressivities of function-free Horn logic and DL is understood only poorly. Although Description Logic Programs (DLP) have been described as DLs in the "expressive intersection" of DL and datalog, it is unclear what an intersection of two syntactically incomparable logics is, even if both have a first-order logic semantics. In this work, we offer a characterisation for DL fragments that can be expressed, in a concrete sense, in datalog. We then determine the largest such fragment for the DL ALC, and provide an outlook on the extension of our methods to more expressive DLs.

Book ChapterDOI
22 Sep 2010
TL;DR: An extensive experimentation reported in this paper proves the effectiveness of the method at the task of ranking the answers to queries expressed as class descriptions, when applied to real ontologies describing simple and complex domains.
Abstract: We describe a method for learning functions that can predict the ranking of resources in knowledge bases expressed in Description Logics. The method relies on a kernelized version of the PERCEPTRON RANKING algorithm which is suitable for batch as well as online problem settings. The usage of specific kernel functions that encode the similarity between individuals in the context of knowledge bases allows the application of the method to ontologies in the standard representations for the Semantic Web. An extensive experimentation reported in this paper proves the effectiveness of the method at the task of ranking the answers to queries expressed as class descriptions, when applied to real ontologies describing simple and complex domains.

Book ChapterDOI
22 Sep 2010
TL;DR: An extension of the DLVHEX system is presented to support RIF-Core, a dialect of W3C's Rule Interchange Format (RIF), as well as combinations of Rif-Core and OWL2RL ontologies.
Abstract: We present an extension of the DLVHEX system to support RIF-Core, a dialect of W3C's Rule Interchange Format (RIF), as well as combinations of RIF-Core and OWL2RL ontologies. DLVHEX is a plugin system on top of DLV, a disjunctive Datalog engine; it supports higher-order and external atoms as well as input rewriting capabilities, which are provided as plugins and enable DLVHEX to bidirectionally exchange data with external knowledge bases and to consume input in different Semantic Web languages. In fact, there already exist plugins for languages such as RDF and SPARQL. Our new plugin facilitates consumption and processing of RIF rule sets, as well as OWL2RL reasoning by a 2-step reduction to DLVHEX via embedding in RIF-Core. The current version implements the translation from OWL2RL to RIF by a static rule set [12] and supports the RIF built-ins mandatory for this reduction through external atoms in DLVHEX. For the future we plan to switch to a dynamic approach for RIF embedding of OWL2RL [2] and to extend the RIF reasoning capabilities to more features of RIF-BLD. We provide a description of our current system and its development status as well as an illustrative example, and conclude with future plans to complete the Semantic Web library of plugins for DLVHEX.

Book ChapterDOI
22 Sep 2010
TL;DR: ASPDA is introduced, a unifying framework for defeasibility of disjunctive logic programs under Answer Set Programming (ASP); since the well-founded and the answer set semantics underlie almost all existing approaches to defeasible reasoning in Logic Programming, it captures most of those approaches.
Abstract: Defeasible reasoning has been studied extensively in the last two decades and many different and dissimilar approaches are currently on the table. This multitude of ideas has made the field hard to navigate and the different techniques hard to compare. Our earlier work on Logic Programming with Defaults and Argumentation Theories (LPDA) introduced a degree of unification into the approaches that rely on the well-founded semantics. The present work takes this idea further and introduces ASPDA, a unifying framework for defeasibility of disjunctive logic programs under Answer Set Programming (ASP). Since the well-founded and the answer set semantics underlie almost all existing approaches to defeasible reasoning in Logic Programming, LPDA and ASPDA together capture most of those approaches. In addition to ASPDA, we obtained a number of interesting and non-trivial results. First, we show that ASPDA is reducible to ordinary ASP programs, albeit at the cost of exponential blowup in the number of rules. Second, we study reducibility of ASPDA to the non-disjunctive case and show that head-cycle-free ASPDA programs reduce to the non-disjunctive case, similarly to head-cycle-free ASP programs, but through a more complex transformation. The blowup in the program size is linear in this case.
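For ordinary head-cycle-free ASP programs, the reduction to the non-disjunctive case is the classical "shift" transformation, sketched below; the ASPDA reduction the paper proves is more complex but likewise linear. The rule representation here is a hypothetical encoding.

```python
def shift(head, body):
    """The classical 'shift' of a disjunctive rule
        a1 | ... | an :- body
    into n normal rules, each of the form
        ai :- body, not a1, ..., not a(i-1), not a(i+1), ..., not an.
    For head-cycle-free programs this preserves answer sets."""
    return [(a, body + [("not", b) for b in head if b != a]) for a in head]
```

For example, `a | b :- c` shifts to the two normal rules `a :- c, not b` and `b :- c, not a`, and the program size grows only linearly.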

Book ChapterDOI
22 Sep 2010
TL;DR: This work introduces a Representational State Transfer-based approach that enables online rule editing, addressing the sparse application of rule languages and the lack of software interoperability.
Abstract: The sparse application of the Semantic Web Rule Language is partly caused by a lack of intuitive rule editors. This applies both from a human user's as well as from a software interoperability perspective, as creating and modifying rules is currently hard in distributed Web applications. We introduce a Representational State Transfer-based approach that enables online rule editing to overcome these problems.

Book ChapterDOI
22 Sep 2010
TL;DR: A new visualization framework called "model outlines", where more emphasis is placed on the semantics of concept descriptions than on their syntax is proposed, with results that indicate the potential benefits of the visual language for understanding concept descriptions.
Abstract: The development and use of ontologies may require users with no training in formal logic to handle complex concept descriptions. To aid such users, we propose a new visualization framework called "model outlines", where more emphasis is placed on the semantics of concept descriptions than on their syntax. We have conducted a usability study comparing model outlines and Manchester OWL, with results that indicate the potential benefits of our visual language for understanding concept descriptions.

Book ChapterDOI
22 Sep 2010
TL;DR: The Modular Web framework is exploited to specify the modular semantics for Extended Resource Description Framework ontologies.
Abstract: The Extended Resource Description Framework has been proposed to equip RDF graphs with weak and strong negation, as well as derivation rules, increasing the expressiveness of ordinary RDF graphs. In parallel, the Modular Web framework enables collaborative and controlled reasoning in the Semantic Web. In this paper we exploit the use of the Modular Web framework to specify the modular semantics for Extended Resource Description Framework ontologies.

Book ChapterDOI
22 Sep 2010
TL;DR: This paper shows that it is possible to efficiently recognize KWQL queries that can be evaluated using only information retrieval, or information retrieval and structure matching; this allows KWilt to evaluate basic queries at almost the speed of the underlying search engine, yet still provide all the power of full first-order queries where needed.
Abstract: Semantic wikis and other modern knowledge management systems deviate from traditional knowledge bases in that information ranges from unstructured (wiki pages) over semi-formal (tags) to formal (RDF or OWL) and is produced by users with varying levels of expertise. KWQL is a query language for semantic wikis that scales with a user's level of expertise by combining ideas from keyword query languages with aspects of formal query languages such as SPARQL. In this paper, we discuss KWQL's implementation KWilt: It uses, for each data format and query type, technology tailored to that setting and combines, in a patchwork fashion, information retrieval, structure matching and constraint evaluation tools with only lightweight "glue". We show that it is possible to efficiently recognize KWQL queries that can be evaluated using only information retrieval, or information retrieval and structure matching. This allows KWilt to evaluate basic queries at almost the speed of the underlying search engine, yet also provide all the power of full first-order queries, where needed. Moreover, adding new data formats or abilities is easier than in a monolithic system.

Book ChapterDOI
22 Sep 2010
TL;DR: It is shown that a sample collection of annotation rules is effective on a relevant corpus that is assembled by collecting e-mails that have escaped detection by the industry-standard SpamAssassin filter.
Abstract: A new system for spam e-mail annotation by end-users is presented. It is based on the recursive application of hand-written annotation rules by means of an inferential engine based on Logic Programming. Annotation rules allow the user to express nuanced considerations that depend on deobfuscation, word (non-)occurrence and structure of the message in a straightforward, human-readable syntax. We show that a sample collection of annotation rules is effective on a relevant corpus that we have assembled by collecting e-mails that have escaped detection by the industry-standard SpamAssassin filter. The system presented here is intended as a personal tool enforcing personalized annotation rules that would not be suitable for the general e-mail traffic.
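The interplay of deobfuscation and word-occurrence rules can be sketched as follows. The substitution table and the rules are hypothetical examples, not the paper's actual rule set, and the paper's engine is Logic Programming-based rather than Python.

```python
# Hypothetical character-substitution table for deobfuscation.
SUBS = {"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"}

def deobfuscate(text):
    """Undo simple character-level obfuscation such as 'v1agr@'."""
    return "".join(SUBS.get(c, c) for c in text.lower())

# Hand-written annotation rules: label plus firing condition.
RULES = [
    ("spam", lambda t: "viagra" in t),
    ("spam", lambda t: "free" in t and "offer" in t),
    ("ham",  lambda t: "meeting" in t),
]

def annotate(message):
    """Apply the rules in order; the first rule that fires wins."""
    text = deobfuscate(message)
    for label, condition in RULES:
        if condition(text):
            return label
    return "unknown"
```

Because the rules run on the deobfuscated text, a message like "Buy V1AGR@ now" is still caught by the plain word-occurrence rule.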

Book ChapterDOI
22 Sep 2010
TL;DR: This paper presents experiences with the implementation of two approaches to integrating distinct reasoning styles: a basic loosely-coupled system and a more advanced embedded solution.
Abstract: Integrating distinct reasoning styles such as the ones exploited by description logics, rule-based systems and fuzzy logic is still an open challenge because of the differences among them. Three complementary approaches suggest possible models of integration: loose integration, tight integration and embedded integration. Loose integration couples existing tools into a hybrid system, handling their mutual interactions and keeping their knowledge aligned. Tight integration, instead, is based on a unique theory and framework supporting both reasoning styles. Embedded integration is a mixed approach aiming at the simplicity of the former and the efficiency of the latter. In this paper we present our experiences with the implementation of a basic loosely-coupled system and a more advanced embedded solution.
