
Showing papers presented at "Web Reasoning and Rule Systems in 2008"


Book ChapterDOI
31 Oct 2008
TL;DR: This work devises a correct and complete algorithm which shows that consistency of an IDDL system is decidable whenever consistency of the local logics is decidable.
Abstract: In the context of the Semantic Web or semantic peer-to-peer systems, many ontologies may exist and be developed independently. Ontology alignments help integrate, mediate, or reason with a system of networked ontologies. Though different formalisms have already been defined to reason with such systems, they do not consider ontology alignments as first class objects designed by third party ontology matching systems. Correspondences between ontologies are often asserted from an external point of view encompassing both ontologies. We study consistency checking in a network of aligned ontologies represented in Integrated Distributed Description Logics (IDDL). This formalism treats local knowledge (ontologies) and global knowledge (inter-ontology semantic relations, i.e. alignments) separately by distinguishing local interpretations from the global interpretation, so that local systems do not need to directly connect to each other. We consequently devise a correct and complete algorithm which, although far from tractable, has interesting properties: it is independent of the local logics expressing ontologies because it encapsulates local reasoners. This shows that consistency of an IDDL system is decidable whenever consistency of the local logics is decidable. Moreover, the expressiveness of local logics does not need to be known as long as local reasoners can handle at least $\mathcal{ALC}$.
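
A sketch of the semantics behind this result, assuming the usual IDDL presentation in which each ontology i has a local interpretation $I_i$ and an equalizing function $\epsilon_i$ mapping its local domain into a shared global domain (the concept names here are invented): a cross-ontology correspondence such as $\langle 1{:}\textit{Person}, 2{:}\textit{Human}, \sqsubseteq \rangle$ is satisfied iff

$\epsilon_1(\textit{Person}^{I_1}) \subseteq \epsilon_2(\textit{Human}^{I_2})$

so that consistency of the whole network can be checked by combining the verdicts of encapsulated local reasoners with reasoning over the alignment layer.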

48 citations


Book ChapterDOI
31 Oct 2008
TL;DR: This paper formally defines a foundation for approximate reasoning research, clarifies by means of notions from statistics how different approximate algorithms can be compared, and formally grounds the most fundamental notions in the field.
Abstract: Approximate reasoning for the Semantic Web is based on the idea of sacrificing soundness or completeness for a significant speed-up of reasoning. This is to be done in such a way that the number of introduced mistakes is at least outweighed by the obtained speed-up. When pursuing such approximate reasoning approaches, however, it is important to be critical not only about appropriate application domains, but also about the quality of the resulting approximate reasoning procedures. With different approximate reasoning algorithms discussed and developed in the literature, it needs to be clarified how these approaches can be compared, i.e. what it means for one approximate reasoning approach to be better than another. In this paper, we will formally define such a foundation for approximate reasoning research. We will clarify --- by means of notions from statistics --- how different approximate algorithms can be compared, and ground the most fundamental notions in the field formally. We will also exemplify what a corresponding statistical comparison of algorithms would look like.
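
The statistical comparison the authors call for can be pictured with a small sketch (not from the paper; approx_entails and exact_entails are hypothetical stand-ins for an approximate and a reference reasoner):

    def compare(queries, approx_entails, exact_entails):
        tp = fp = fn = 0
        for q in queries:
            approx, exact = approx_entails(q), exact_entails(q)
            if approx and exact:
                tp += 1   # correctly derived entailment
            elif approx and not exact:
                fp += 1   # unsound answer
            elif exact and not approx:
                fn += 1   # incomplete answer
        soundness = tp / (tp + fp) if tp + fp else 1.0     # precision-like rate
        completeness = tp / (tp + fn) if tp + fn else 1.0  # recall-like rate
        return soundness, completeness

Two approximate reasoners can then be compared by estimating such rates, together with their runtimes, over a sample of queries.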

46 citations


Book ChapterDOI
31 Oct 2008
TL;DR: This paper presents a comprehensive overview of the Screech approach to approximate reasoning with OWL ontologies, which is based on the KAON2 algorithms, facilitating a compilation of OWL DL TBoxes into Datalog, which is tractable in terms of data complexity.
Abstract: With the increasing interest in expressive ontologies for the Semantic Web, it is critical to develop scalable and efficient ontology reasoning techniques that can properly cope with very high data volumes. For certain application domains, approximate reasoning solutions, which trade soundness or completeness for increased reasoning speed, will help to deal with the high computational complexities which state-of-the-art ontology reasoning tools have to face. In this paper, we present a comprehensive overview of the Screech approach to approximate reasoning with OWL ontologies, which is based on the KAON2 algorithms, facilitating a compilation of OWL DL TBoxes into Datalog, which is tractable in terms of data complexity. We present three different instantiations of the Screech approach, and report on experiments which show that the gain in efficiency outweighs the number of introduced mistakes in the reasoning process.
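
The flavor of the approximation can be seen on a single invented axiom (this follows the published Screech idea of treating disjunction in rule heads conjunctively; the axiom itself is not from the paper): the TBox axiom $\textit{Student} \sqsubseteq \textit{Undergrad} \sqcup \textit{Grad}$ compiles to the disjunctive rule

$\textit{Undergrad}(x) \vee \textit{Grad}(x) \leftarrow \textit{Student}(x)$

which the approximation replaces by the two Horn rules $\textit{Undergrad}(x) \leftarrow \textit{Student}(x)$ and $\textit{Grad}(x) \leftarrow \textit{Student}(x)$, yielding tractable, complete, but possibly unsound instance retrieval.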

40 citations


Book ChapterDOI
31 Oct 2008
TL;DR: The main principles behind RIF are discussed, the RIF extensibility framework is introduced, and the Basic Logic Dialect--the only fully developed RIF dialect so far--is outlined.
Abstract: The Rule Interchange Format (RIF) is an activity within the World Wide Web Consortium aimed at developing a Web standard for exchanging rules. The need for rule-based information processing on the Semantic Web has been felt ever since RDF was introduced in the late 1990s. As ontology development picked up pace this decade and as the limitations of OWL became apparent, rules were firmly put back on the agenda. RIF is therefore a major opportunity for the introduction of rule-based technologies into the mainstream of knowledge representation and information processing on the Web. Despite its humble name, RIF is not just a format and is not primarily about syntax. It is an extensible framework for rule-based languages, called RIF dialects, which includes precise and formal specification of the syntax, semantics, and XML serialization. In this paper we will discuss the main principles behind RIF, introduce the RIF extensibility framework, and outline the Basic Logic Dialect--the only fully developed RIF dialect so far.
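
To give a flavor of what RIF-BLD exchanges, the kind of simple rule the dialect serializes can be written as a plain logic formula (the buy/sell predicates echo the running example of the RIF-BLD drafts; the formula is only illustrative, not RIF's presentation or XML syntax):

$\forall b, i, s:\ \textit{buy}(b, i, s) \leftarrow \textit{sell}(s, i, b)$

RIF-BLD fixes a presentation syntax, a model-theoretic semantics, and an XML serialization for such rules so that they can be moved between rule engines.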

36 citations


Book ChapterDOI
31 Oct 2008
TL;DR: This work proposes a user-specific minimization technique based on Datalog rules, enabling a user to specify the structures in an RDF graph that are not relevant for an application and can therefore be deleted, while retaining, by means of the rules, the possibility to reconstruct the deleted data.
Abstract: The Resource Description Framework (RDF) is a cornerstone of the Semantic Web. Due to its few and elementary language constructs, RDF data can become large and contain redundant information. So far, techniques for eliminating redundancy rely on the generic notion of lean graphs. We propose a user-specific minimization technique based on Datalog rules, enabling a user to specify the structures in an RDF graph that are not relevant for an application and can therefore be deleted, while retaining, by means of the rules, the possibility to reconstruct the deleted data. We set this scenario on top of constraints to ensure data consistency, i.e. if an RDF graph satisfies some constraints before minimization, these constraints must also be satisfied afterwards. The problem is decidable, but intractable already for a restricted case. In addition, we give a fragment of the minimization problem which can be solved in polynomial time.
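
A minimal sketch of the user-specific idea, assuming rdflib and an invented rule stating that ex:hasParent triples can be reconstructed from ex:hasFather triples and may therefore be deleted:

    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/")
    g = Graph()
    g.add((EX.alice, EX.hasFather, EX.bob))
    g.add((EX.alice, EX.hasParent, EX.bob))   # redundant under the rule below

    # User rule (hypothetical): hasParent(X, Y) <- hasFather(X, Y).
    # Minimization: delete every hasParent triple the rule can rebuild.
    for s, _, o in list(g.triples((None, EX.hasParent, None))):
        if (s, EX.hasFather, o) in g:
            g.remove((s, EX.hasParent, o))

    # Reconstruction: re-apply the rule to recover the deleted triples.
    for s, _, o in g.triples((None, EX.hasFather, None)):
        g.add((s, EX.hasParent, o))

The constraints mentioned in the abstract would additionally be checked before and after such a minimization step.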

28 citations


Book ChapterDOI
31 Oct 2008
TL;DR: A fuzzy extension of SWRL (vague-SWRL), based on vague sets which employ membership degree intervals to represent fuzzy information, is proposed, and the notion of second degree weight to represent weights of the membership degrees in vague-SWRL is presented.
Abstract: Rules in the Web have become a mainstream topic since inference rules are marked up for e-commerce and are identified as a design issue of the semantic web [7]. SWRL [4] is incapable of representing imprecision and uncertainty, and a single membership degree in fuzzy sets [6] is insufficient to represent imprecise knowledge. Based on vague sets [2], which employ membership degree intervals to represent fuzzy information, we propose a fuzzy extension of SWRL (vague-SWRL). Moreover, weights in f-SWRL [5] cannot represent the importance of membership degrees. In order to modify the membership degrees and to balance and supplement the weights of vague classes and properties (i.e., first degree weights), we present the notion of second degree weight to represent weights of the membership degrees in vague-SWRL. In addition, we extend RuleML to express vague-SWRL.
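
Purely as an illustration of the proposed annotations (the notation and numbers are invented, not taken from the paper): a vague-SWRL-style rule could attach a membership degree interval to an atom together with a second degree weight on that interval, e.g.

$\textit{GoodHotel}(x) \leftarrow \textit{Comfortable}(x)\,[0.6, 0.8] * 0.9$

where $[0.6, 0.8]$ is the vague membership degree interval and $0.9$ is a second degree weight expressing how much that degree should count.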

26 citations


Book ChapterDOI
31 Oct 2008
TL;DR: This paper lays bare the assumptions underlying different approaches for revision in DLs and proposes some criteria to compare them; it also gives the definition of a revision operator in DLs and points out some open problems.
Abstract: Revision of a Description Logic-based ontology to incorporate newly received information consistently is an important problem for the lifecycle of ontologies. Many approaches in the theory of belief revision have been applied to deal with this problem and most of them focus on the postulates or the logical properties of a revision operator in Description Logics (DLs). However, there is no coherent view on how to characterize a revision operator in DLs. In this paper, we lay bare the assumptions underlying different approaches for revision in DLs and propose some criteria to compare them. Based on the analysis, we give our definition of a revision operator in DLs and point out some open problems.

26 citations


Book ChapterDOI
31 Oct 2008
TL;DR: This paper provides a set of decidability and complexity results for reasoning in systems combining ontologies specified in DLs and rules specified in nonrecursive Datalog (and its extensions with inequality and negation): such results identify, from the viewpoint of the expressive abilities of the two formalisms, minimal combinations of Description Logics and Datalog in which reasoning is undecidable.
Abstract: Reasoning in systems integrating Description Logics (DL) ontologies and Datalog rules is a very hard task, and previous studies have shown undecidability of reasoning in systems integrating (even very simple) DL ontologies with recursive Datalog. However, the results obtained so far constitute a very partial picture of the computational properties of systems combining DL ontologies and Datalog rules. The aim of this paper is to help complete this picture, extending the computational analysis of reasoning in systems integrating ontologies and Datalog rules. More precisely, we first provide a set of decidability and complexity results for reasoning in systems combining ontologies specified in DLs and rules specified in nonrecursive Datalog (and its extensions with inequality and negation): such results identify, from the viewpoint of the expressive abilities of the two formalisms, minimal combinations of Description Logics and Datalog in which reasoning is undecidable. Then, we present new results on the decidability and complexity of the so-called restricted (or safe) integration of DL ontologies and Datalog rules. Our results show that: (1) the unrestricted interaction between DLs and Datalog is computationally very hard even in the absence of recursion in rules; (2) surprisingly, the various "safeness" restrictions, which have been defined to regain decidability of reasoning in the interaction between DLs and recursive Datalog, appear as necessary restrictions even when rules are not recursive.
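
For intuition about the "safeness" restrictions mentioned above (an illustrative example, not from the paper): DL-safety typically requires every variable of a rule to occur in a body atom that is not a DL predicate. With $\textit{Person}$ a DL concept and $\textit{O}$ an ordinary Datalog predicate listing known individuals, the first rule below is unsafe while the second is DL-safe:

$\textit{Q}(x) \leftarrow \textit{Person}(x)$

$\textit{Q}(x) \leftarrow \textit{Person}(x) \wedge \textit{O}(x)$

The paper's results indicate that restrictions of this kind remain necessary even when the rules are not recursive.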

18 citations


Proceedings Article
01 Jan 2008
TL;DR: This paper considers the characteristics of $\mathcal{ALC}$ with quasi-classical semantics and develops a sound and complete tableau algorithm for paraconsistent reasoning in $\mathcal{ALC}$.
Abstract: Description logics are a family of knowledge representation formalisms which descended from semantic networks. During the past decade, the important reasoning problems such as satisfiability and subsumption have been handled by tableau-like algorithms. Description logics are practical monotonic logics which, though imparting strong and conclusive reasoning mechanisms, lack the flexibility of non-monotonic reasoning mechanisms. In recent years, the study of inconsistency handling in description logics has become more and more important. Some technologies are being applied to handle inconsistency in description logics. Quasi-classical logic, which allows the derivation of nontrivial classical inferences from inconsistent information, supports many important proof rules such as modus tollens, modus ponens, and disjunctive syllogism. In this paper, we consider the characteristics of $\mathcal{ALC}$ with quasi-classical semantics and develop a sound and complete tableau algorithm for paraconsistent reasoning in $\mathcal{ALC}$.

14 citations


Proceedings Article
01 Jan 2008

14 citations


Book ChapterDOI
31 Oct 2008
TL;DR: A DLP system which carries out as much of the reasoning task as possible in mass memory without degrading performance, allowing it to deal with data-intensive applications, and which incorporates an optimization strategy, based on an unfolding technique, for efficient query answering.
Abstract: Disjunctive logic programming under answer set semantics (DLP, ASP) is a powerful rule-based formalism for knowledge representation and reasoning. The language of DLP is very expressive and also allows modeling advanced knowledge-based tasks arising in modern application areas such as information integration and knowledge management. The recent development of efficient systems supporting disjunctive logic programming has encouraged the use of DLP in real-world applications. However, despite the high expressiveness of their languages, the success of DLP systems is still limited when the applications of interest become data intensive (current DLP systems work only in main memory) or need the execution of some inherently procedural sub-tasks. The main goal of this paper is precisely to improve the efficiency and usability of DLP systems in these contexts, for a full exploitation of DLP in real-world applications. We develop a DLP system which (i) carries out as much of the reasoning task as possible in mass memory without degrading performance, allowing it to deal with data-intensive applications; (ii) extends the expressiveness of the DLP language with external function calls, while improving efficiency (at least for procedural sub-tasks) and knowledge-modeling power; (iii) incorporates an optimization strategy, based on an unfolding technique, for efficient query answering; (iv) supports primitives allowing data from different databases to be integrated in a simple way. We test the system on a real data-integration application, comparing its performance against the main DLP systems. Test results are very encouraging: the proposed system can handle significantly larger amounts of data than competitor systems, and it is also faster in response time.
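
For readers unfamiliar with DLP, a toy disjunctive rule of the kind such systems evaluate (the predicates are invented): every employee is assigned to one of two teams, a choice the solver makes by guessing and then checking against further constraints:

$\textit{team}(x, a) \vee \textit{team}(x, b) \leftarrow \textit{employee}(x)$

The system described above evaluates such programs partly in mass memory and may call external functions from rule bodies.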

Book ChapterDOI
31 Oct 2008
TL;DR: It is argued that the availability of domain knowledge should not be disregarded during the data mining process, and it is shown how semantic redundancy reduction techniques can be integrated into an approach to mining association rules from hybrid knowledge bases represented in OWL with rules.
Abstract: In this paper we discuss how to reduce redundancy in the process and in the results of mining Semantic Web data. In particular, we argue that the availability of domain knowledge should not be disregarded during the data mining process. As a case study, we show how to integrate semantic redundancy reduction techniques into our approach to mining association rules from hybrid knowledge bases represented in OWL with rules.
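
A concrete instance of the redundancy meant here (invented example): if the ontology already entails $\textit{Cat} \sqsubseteq \textit{Animal}$, then the mined association rule $\{\textit{Cat}\} \Rightarrow \{\textit{Animal}\}$ holds in every dataset consistent with the ontology and can be pruned from the output as semantically redundant.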

Book ChapterDOI
31 Oct 2008
TL;DR: This paper presents role-based ontologies as an extension of standard ontologies and defines their semantics through a reduction to standard Description Logics, such that existing reasoners can be used.
Abstract: Ontologies are already used in the life sciences and the Semantic Web, but are expected to be deployed in many other areas in the near future--for example, in software development. As the use of ontologies becomes commonplace, they will be constructed more frequently and also become more complex. To cope with this issue, modularization paradigms and reuse techniques must be defined for ontologies and supported by ontology languages. In this paper, we propose the use of roles from conceptual modeling for this purpose, and show how they can be used to define ontological reuse units and enable modularization. We present role-based ontologies as an extension of standard ontologies and define their semantics through a reduction to standard Description Logics, such that existing reasoners can be used.

Book ChapterDOI
31 Oct 2008
TL;DR: The semantics of RDFLog is closed (every answer is an RDF graph), but lifts RDF's restrictions on literal and blank node occurrences for intermediary data and shows equivalence between languages with full quantifier alternation and languages with only *** *** rules.
Abstract: We introduce the recursive, rule-based RDF query language RDFLog. RDFLog extends previous RDF query languages by arbitrary quantifier alternation: blank nodes may occur in the scope of all, some, or none of the universal variables of a rule. In addition RDFLog is aware of important RDF features such as the distinction between blank nodes, literals and URIs or the RDFS vocabulary. The semantics of RDFLog is closed (every answer is an RDF graph), but lifts RDF's restrictions on literal and blank node occurrences for intermediary data. We show how to define a sound and complete operational semantics that can be implemented using existing logic programming techniques. Using RDFLog we classify previous approaches to RDF querying along their support for blank node construction and show equivalence between languages with full quantifier alternation and languages with only *** *** rules.
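
An invented example of the quantifier alternation in question, written in a schematic triple notation rather than RDFLog's concrete syntax: "for every person there exists a blank node recording an address", with the existential blank-node variable in the scope of the universal one:

$\forall x\, \exists b:\ (x, \textit{ex:address}, b) \leftarrow (x, \textit{rdf:type}, \textit{foaf:Person})$

Placing the existential quantifier outside the universal one instead would force a single blank node to be shared by all persons.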

Book ChapterDOI
31 Oct 2008
TL;DR: The contribution of the present paper is the extension of the MARS meta model of component languages to a meta model of services and an informational infrastructure that is required for a most general framework for specifying and executing active rules over heterogeneous languages.
Abstract: Building on previous papers, we present a framework for the markup, interchange, execution, and interoperability of Active Rules, namely, Event-Condition-Action (ECA) rules over semantically different sublanguages for expressing events, conditions, and actions. The contribution of the present paper is the extension of the MARS meta model of component languages to a meta model of services, together with the informational infrastructure that is required for a most general framework for specifying and executing active rules over heterogeneous languages. The approach is implemented in the MARS prototype.

Book ChapterDOI
31 Oct 2008
TL;DR: This paper describes a reasoner for OWL ontologies and SWRL policies, used on cognitive radios to control dynamic spectrum access, that allows for behavior similar to that of a logic programming system while constraint simplification rules as well as operations can easily be defined and processed.
Abstract: We describe a reasoner for OWL ontologies and SWRL policies used on cognitive radios to control dynamic spectrum access. In addition to rules and ontologies, the reasoner needs to handle user-defined operations (e.g., temporal and geospatial). Furthermore, the reasoner must perform sophisticated constraint simplification because any unresolved constraints can be used by a cognitive radio to plan and reason about its spectrum usage. No existing reasoner supported all these features. However, the term rewriting engine Maude, augmented with narrowing, provides a promising reasoning mechanism. This allows for a behavior similar to that of a logic programming system, while constraint simplification rules as well as operations can easily be defined and processed. Our system and general approach will be useful for other problems that need sophisticated constraint processing in addition to rule-based reasoning, or where new operations need to be added. The implementation is efficient enough to run on resource-constrained embedded systems such as software-defined radios.

Book ChapterDOI
31 Oct 2008
TL;DR: Possibilistic description logics, first proposed by Hollunder in [1], are an extension of description logics with possibilistic semantics, providing a powerful logical framework for dealing with uncertainty and handling inconsistency.
Abstract: Uncertainty reasoning and inconsistency handling are two important problems that often occur in applications of the Semantic Web, in areas such as medicine and biology [2]. Possibilistic description logics, first proposed by Hollunder in [1], are an extension of description logics with possibilistic semantics. It is well known that possibilistic logic is a powerful logical framework for dealing with uncertainty and handling inconsistency.
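
For illustration (an invented axiom, using the standard possibilistic notation of axioms weighted by necessity degrees): a possibilistic TBox may contain the pair

$(\textit{Bird} \sqsubseteq \textit{Flies},\ 0.8)$

stating that the axiom is certain to degree at least 0.8; when the knowledge base becomes inconsistent, conclusions are drawn only from axioms whose degree lies strictly above the computed inconsistency degree, which is how possibilistic DLs tolerate inconsistency.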

Book ChapterDOI
31 Oct 2008
TL;DR: In this article, a sound and complete tableau algorithm is presented for paraconsistent reasoning in description logics with quasi-classical semantics, which allows the derivation of nontrivial classical inferences from inconsistent information and supports many important proof rules such as modus tollens, modus ponens, and disjunctive syllogism.
Abstract: Description logics are a family of knowledge representation formalisms which descended from semantic networks. During the past decade, the important reasoning problems such as satisfiability and subsumption have been handled by tableau-like algorithms. Description logics are practical monotonic logics which, though imparting strong and conclusive reasoning mechanisms, lack the flexibility of non-monotonic reasoning mechanisms. In recent years, the study of inconsistency handling in description logics has become more and more important. Some technologies are being applied to handle inconsistency in description logics. Quasi-classical logic, which allows the derivation of nontrivial classical inferences from inconsistent information, supports many important proof rules such as modus tollens, modus ponens, and disjunctive syllogism. In this paper, we consider the characteristics of $\mathcal{ALC}$ with quasi-classical semantics and develop a sound and complete tableau algorithm for paraconsistent reasoning in $\mathcal{ALC}$.
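
A small invented example of the intended behavior: from the classically inconsistent ABox $\{C(a),\ \neg C(a),\ (C \sqcup D)(a)\}$, classical $\mathcal{ALC}$ entails any assertion whatsoever, e.g. $E(a)$; under quasi-classical semantics this trivialization is blocked, yet disjunctive syllogism still yields $D(a)$ from $\neg C(a)$ and $(C \sqcup D)(a)$.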

Book ChapterDOI
31 Oct 2008
TL;DR: This paper generalizes both hex programs and fuzzy dl-programs to fuzzy hex programs: an LP-based paradigm supporting both fuzziness and reasoning with external sources; basic syntax and semantics are defined and the framework is analyzed, e.g., by investigating its complexity.
Abstract: The need to reason with knowledge expressed in both the Logic Programming (LP) and Description Logics (DLs) paradigms on the Semantic Web has led to several integrating formalisms; e.g., Description Logic programs (dl-programs) allow a logic program to retrieve results from and feed results to a DL knowledge base. Two functional extensions of dl-programs are hex programs and fuzzy dl-programs. The former abstract away from DLs, allowing for general external queries; the latter deal with the uncertain, vague, and inconsistent nature of knowledge on the Web by means of fuzzy logic mechanisms. In this paper, we generalize both hex programs and fuzzy dl-programs to fuzzy hex programs: an LP-based paradigm supporting both fuzziness and reasoning with external sources. We define basic syntax and semantics and analyze the framework semantically, e.g., by investigating its complexity. Additionally, we provide a translation from fuzzy hex programs to hex programs, enabling an implementation via the dlvhex reasoner. Finally, we illustrate the use of fuzzy hex programs for ranking services by using them to model non-functional properties of services and user preferences.

Book ChapterDOI
31 Oct 2008
TL;DR: This work proposes a method for compiling a $\mathcal{SHIQ}$ ontology to a propositional program so that the problem can be solved with a polynomial number of calls to a SAT solver, and proves that this time complexity is worst-case optimal in data complexity.
Abstract: Logical inconsistency may often occur throughout the development stage of a DL-based ontology. We apply lexicographic inference to reason over inconsistent DL-based ontologies without repairing them first. We address the problem of checking consequences in a $\mathcal{SHIQ}$ ontology that are classically inferred from every consistent (or coherent) subontology having the highest lexicographic precedence. We propose a method for compiling a $\mathcal{SHIQ}$ ontology to a propositional program so that the problem can be solved with a polynomial number of calls to a SAT solver. We prove that this time complexity is worst-case optimal in data complexity. In order to make the method more scalable, we also present partition-based techniques to optimize the calling of SAT solvers.
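
A rough sketch of the intended reduction, assuming the ontology has already been compiled to propositional clauses and that sat(clauses) is a hypothetical wrapper around an arbitrary SAT solver returning True iff the clause set is satisfiable. The paper's lexicographic inference quantifies over all preferred subontologies; the greedy selection below only conveys the "polynomially many SAT calls" flavor, not the paper's actual algorithm:

    def entails_from_preferred(hard, strata, neg_query, sat):
        kept = list(hard)                    # certain axioms are always kept
        for stratum in strata:               # strata ordered by decreasing priority
            for clause in stratum:
                if sat(kept + [clause]):     # one SAT call per candidate clause
                    kept.append(clause)
        # Entailment check: the query follows iff adding its negation is UNSAT.
        return not sat(kept + neg_query)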

Book ChapterDOI
31 Oct 2008
TL;DR: It is shown how the planning capabilities of the fluent calculus can be used to automatically generate an abstract composition model for semantic web service composition, and an OWL-S ontology is used for describing the capabilities of web services.
Abstract: This paper presents a novel approach for semantic web service composition based on the formalism of the fluent calculus. We show how the planning capabilities of the fluent calculus can be used to automatically generate an abstract composition model. For describing the capabilities of web services we have used an OWL-S ontology. Based on the OWL-S ontology semantics, we encode the web service description in the fluent calculus formalism and provide a planning strategy for service composition. To test our composition method, we have implemented an experimental framework that automatically composes and executes web services. Our approach is similar to McIlraith's [2] in terms of the computational model, both approaches having their roots in the theory of the situation calculus, but the two solutions are complementary. In the situation calculus, the successor state axioms describe how a particular fluent may be changed by an action, whereas the state update axioms of the fluent calculus describe which fluents are changed by an action. The two approaches also differ in the way conditions are evaluated in the programming languages (GOLOG and FLUX) that implement the two formalisms. For condition evaluation, GOLOG applies the principle of regression, while FLUX [3] uses the principle of progression. Using regression is efficient for short action sequences, but the computational effort increases with the number of performed actions, whereas with progression the computational effort remains the same, independently of the number of performed actions.
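
The contrast drawn above can be made concrete with the general shape of a fluent calculus state update axiom (schematic form; $\vartheta^{-}$ and $\vartheta^{+}$ stand for the fluents removed and added by action $a$ in state $z$):

$\textit{Poss}(a, z) \rightarrow \textit{State}(\textit{Do}(a, z)) = (\textit{State}(z) - \vartheta^{-}) + \vartheta^{+}$

whereas a situation calculus successor state axiom instead fixes, for each fluent, the conditions under which it holds after an action.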

Book ChapterDOI
31 Oct 2008
TL;DR: This paper presents an approach from the field of Multi-Context Systems that handles requirements of reasoning in Ambient Computing environments by modeling contexts as local rule theories in a P2P system and mappings through which the agents exchange context information as defeasible rules.
Abstract: Reasoning in Ambient Computing environments requires formal models that represent ambient agents as autonomous logic-based entities, and support sharing and distributed reasoning with the available ambiguous context information. This paper presents an approach from the field of Multi-Context Systems that handles these requirements by modeling contexts as local rule theories in a P2P system and the mappings through which the agents exchange context information as defeasible rules, and by performing a form of distributed defeasible reasoning.
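
Schematically (an invented rule; the $C_i$ prefixes name the contexts contributing each literal): a defeasible mapping rule imported by context $C_1$ might read

$r_1:\ C_2{:}\textit{personPresent}(x),\ C_3{:}\textit{lectureInProgress} \Rightarrow C_1{:}\textit{switchOffPhone}(x)$

so that conflicts between information arriving from different peers are resolved by the defeasible machinery instead of making the local theory inconsistent.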

Book ChapterDOI
31 Oct 2008
TL;DR: This article establishes decidability of simulation subsumption for advanced query patterns featuring descendant constructs, regular expressions, negative subterms (or subterm exclusions), and multiple variable occurrences and shows that subsumption between two query terms can be decided in $\mathcal{O}(n!^n)$ where n is the sum of the sizes of both query terms.
Abstract: Simulation unification is a special kind of unification adapted to retrieving semi-structured data on the Web. This article introduces simulation subsumption, or containment, that is, query subsumption under simulation unification. Simulation subsumption is crucial in general for query optimization, in particular for optimizing pattern-based search engines, and for the termination of recursive rule-based web languages such as the XML and RDF query language Xcerpt. This paper first motivates and formalizes simulation subsumption. Then, it establishes decidability of simulation subsumption for advanced query patterns featuring descendant constructs, regular expressions, negative subterms (or subterm exclusions), and multiple variable occurrences. Finally, we show that subsumption between two query terms can be decided in $\mathcal{O}(n!^n)$ where n is the sum of the sizes of both query terms.
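
For intuition, consider two invented query terms in Xcerpt-style notation, where double braces admit additional, unmatched children:

$a\{\{\,b\,\}\} \quad \text{subsumes} \quad a\{\{\,b,\ c\,\}\}$

because every data term matched by the more specific pattern on the right (it must have both a b-child and a c-child) is also matched by the more general pattern on the left, which only demands a b-child.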

Proceedings Article
16 Dec 2008
TL;DR: This report presents an overview of the current official documents on QoS standards, and analyzes documents related to the QoS standards in a very general context, without going into the details of a particular service or implementation.
Abstract: This report presents an overview of the current official documents on QoS standards. The aim is to clarify the concept of quality of service and its relationship with performance indices. Quality is an essential feature in the characterization of products and services. Defining and measuring quality is very difficult when we decide to evaluate the quality of the services of computer systems. Therefore, it is necessary to clarify and identify the key aspects of Quality of Service (QoS) and the activities related to QoS management. In this work, we analyze documents related to the QoS standards in a very general context, without going into the details of a particular service or implementation.

Book ChapterDOI
31 Oct 2008
TL;DR: A Triple -oriented hybrid language that integrates the mentioned language elements of both aspects following the expressiveness of locally stratified datalog and introduces fixpoint semantics as well as pragmatic extensions for defining transformations between fact bases.
Abstract: In recent years, many researchers in the area of reasoning have focussed on the adoption of rule languages for the Semantic Web that led to remarkable approaches offering various functionality. On one hand, this included language elements of the rule part itself like contexts, higher-orderness, and non-monotonic negation. On the other hand, the proper integration with ontology languages like RDF and OWL had to consider language-specific properties like disjunctivity as well as the demand for using existing external components. The paper proposes a Triple -oriented hybrid language that integrates the mentioned language elements of both aspects following the expressiveness of locally stratified datalog. It introduces fixpoint semantics as well as pragmatic extensions for defining transformations between fact bases. A partial implementation is based on stratified, semi-naive evaluation, and static filtering.