
Showing papers presented at "Web Reasoning and Rule Systems" in 2007


Book ChapterDOI
07 Jun 2007
TL;DR: The rule-based query language XChangeEQ is described, designed to completely cover and integrate the four complementary querying dimensions: event data, event composition, temporal relationships, and event accumulation.
Abstract: Reactive Web systems, Web services, and Web-based publish/subscribe systems communicate events as XML messages, and in many cases require composite event detection: it is not sufficient to react to single event messages, but events have to be considered in relation to other events that are received over time. Emphasizing language design and formal semantics, we describe the rule-based query language XChangeEQ for detecting composite events. XChangeEQ is designed to completely cover and integrate the four complementary querying dimensions: event data, event composition, temporal relationships, and event accumulation. Semantics are provided as model and fixpoint theories; while this is an established approach for rule languages, it has not been applied to event queries before.

79 citations
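
The four querying dimensions named in the abstract can be made concrete with a toy matcher. The following is a minimal Python sketch, not XChangeEQ syntax; the order/payment event types, their fields, and the one-hour window are invented for the example.

```python
# Minimal sketch of composite event detection over a stream of event
# messages: an order followed by a matching payment within a time window.
from datetime import datetime, timedelta

events = [
    {"type": "order",   "id": 42, "time": datetime(2007, 6, 7, 10, 0)},
    {"type": "payment", "id": 42, "time": datetime(2007, 6, 7, 10, 20)},
    {"type": "order",   "id": 43, "time": datetime(2007, 6, 7, 11, 0)},
]

def composite_matches(stream, window=timedelta(hours=1)):
    """Covers the four dimensions: event data (matching ids), event
    composition (two event types), temporal relationship (payment after
    order, within `window`), and event accumulation (open orders)."""
    pending = {}                                  # accumulated open orders
    for ev in sorted(stream, key=lambda e: e["time"]):
        if ev["type"] == "order":
            pending[ev["id"]] = ev
        elif ev["type"] == "payment":
            order = pending.pop(ev["id"], None)
            if order and ev["time"] - order["time"] <= window:
                yield order, ev

for order, payment in composite_matches(events):
    print(f"composite event: order {order['id']} paid after "
          f"{payment['time'] - order['time']}")
```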


Book ChapterDOI
07 Jun 2007
TL;DR: This paper shows how Quantified Equilibrium Logic can function as a unified framework which embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics and shows that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way.
Abstract: In the ongoing discussion about combining rules and ontologies on the Semantic Web, a recurring issue is how to combine first-order classical logic with nonmonotonic rule languages. Whereas several modular approaches to defining a combined semantics for such hybrid knowledge bases focus mainly on decidability issues, we tackle the matter from a more general point of view. In this paper we show how Quantified Equilibrium Logic (QEL) can function as a unified framework which embraces classical logic as well as disjunctive logic programs under the (open) answer set semantics. In the proposed variant of QEL we relax the unique names assumption, which was present in earlier versions of QEL. Moreover, we show that this framework elegantly captures the existing modular approaches for hybrid knowledge bases in a unified way.

51 citations


Book ChapterDOI
07 Jun 2007
TL;DR: This paper generalizes normal description logic programs (dl-programs) under the answer set semantics by fuzzy vagueness and imprecision, and defines a canonical semantics of positive and stratified fuzzy dl- programs in terms of a unique least model and iterative least models, respectively.
Abstract: We present a novel approach to fuzzy dl-programs under the answer set semantics, which is a tight integration of fuzzy disjunctive programs under the answer set semantics with fuzzy description logics. From a different perspective, it is a generalization of tightly integrated disjunctive dl-programs by fuzzy vagueness in both the description logic and the logic program component. We show that the new formalism faithfully extends both fuzzy disjunctive programs and fuzzy description logics, and that under suitable assumptions, reasoning in the new formalism is decidable. Furthermore, we present a polynomial reduction of certain fuzzy dl-programs to tightly integrated disjunctive dl-programs. We also identify a special case of fuzzy dl-programs for which both deciding consistency and query processing have polynomial data complexity.

49 citations
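
The "unique least model" for the positive case can be pictured as the fixpoint of a fuzzy consequence operator. Below is a minimal sketch under assumed choices: the Gödel t-norm (min) for conjunction, invented rules and degrees, and no description logic component.

```python
# Least-model computation for a positive fuzzy program by iterating the
# fuzzy consequence operator to a fixpoint. Rules and degrees are toy
# values; the paper's formalism also covers disjunction and DL atoms.

# rule: (head, [body atoms], rule degree in [0, 1]); empty body = fact
rules = [
    ("flu(p)",   ["fever(p)", "cough(p)"], 0.8),
    ("fever(p)", [],                       0.9),
    ("cough(p)", [],                       0.7),
]

def least_model(rules):
    I = {}                       # interpretation: atom -> truth degree
    changed = True
    while changed:
        changed = False
        for head, body, deg in rules:
            # body degree under the Gödel t-norm: min of atom degrees
            body_deg = min([I.get(a, 0.0) for a in body], default=1.0)
            new = min(deg, body_deg)
            if new > I.get(head, 0.0):
                I[head] = new
                changed = True
    return I

print(least_model(rules))
# {'fever(p)': 0.9, 'cough(p)': 0.7, 'flu(p)': 0.7}
```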


Book ChapterDOI
Tobias Kuhn
07 Jun 2007
TL;DR: AceRules is a prototype of a rule system with a multi-semantics architecture that demonstrates the formal representation of rules using the controlled natural language ACE and shows that a rule language can be executable and easily understandable at the same time.
Abstract: Expressing rules in controlled natural language can bring us closer to the vision of the Semantic Web, since rules can be written in the notation of the application domain and are understandable by anybody. AceRules is a prototype of a rule system with a multi-semantics architecture. It demonstrates the formal representation of rules using the controlled natural language ACE. We show that a rule language can be executable and easily understandable at the same time. AceRules is available via a web service and two web interfaces.

38 citations
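
A deliberately tiny illustration of the controlled-natural-language idea: translating one fixed sentence shape into a rule. This is not the ACE grammar or the AceRules pipeline; the single regex pattern and the Prolog-style rendering are assumptions of the sketch.

```python
# Toy translation of one ACE-like sentence pattern into a rule. Real ACE
# is a full controlled language, and AceRules supports several semantics;
# this handles only "Every X is a/an Y." for illustration.
import re

PATTERN = re.compile(r"^Every (\w+) is an? (\w+)\.$")

def sentence_to_rule(sentence):
    m = PATTERN.match(sentence)
    if not m:
        raise ValueError("outside the toy fragment: " + sentence)
    x, y = m.groups()
    return f"{y}(V) :- {x}(V)."          # Prolog-style rendering

print(sentence_to_rule("Every customer is a person."))
# person(V) :- customer(V).
```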


Book ChapterDOI
07 Jun 2007
TL;DR: A declarative semantics of hybrid programs and a formal operational semantics are defined; the latter provides a basis for hybrid implementations combining Prolog with constraint solvers and can be seen as an extension of SLS-resolution.
Abstract: The problem of integration of rules and ontologies is addressed in a general framework based on the well-founded semantics of normal logic programs and inspired by the ideas of Constraint Logic Programming (CLP). Hybrid rules are defined as normal clauses extended with constraints in the bodies. The constraints are formulae in a language of a first order theory defined by a set T of axioms. Instances of the framework are obtained by specifying a language of constraints and providing T. A hybrid program is a pair (P, T) where P is a finite set of hybrid rules. Thus integration of (non-disjunctive) Datalog with ontologies formalized in a Description Logic is covered as a special case. The paper defines a declarative semantics of hybrid programs and a formal operational semantics. The latter can be seen as an extension of SLS-resolution and provides a basis for hybrid implementations combining Prolog with constraint solvers. In the restricted case of positive rules, hybrid programs are formulae of FOL. In that case the declarative semantics reduces to the standard notion of logical consequence. The operational semantics is sound and it is complete for a restricted class of hybrid programs.

37 citations
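
The CLP-inspired split can be sketched as follows: the rule engine evaluates ordinary body atoms against its facts and delegates constraint atoms to an external solver for the theory T. In this toy Python version the "solver" is a hand-written predicate standing in for, say, a description logic reasoner; all predicate names and data are invented.

```python
# Sketch of a hybrid rule (P, T): a normal clause whose body carries a
# constraint delegated to an external solver for the theory T.

facts = {("parent", "ann", "bob"), ("parent", "bob", "cia")}

def theory_solver(constraint, bindings):
    """Stand-in for the T-solver (e.g. a DL reasoner) that checks the
    constraint formula under the given variable bindings."""
    if constraint == "adult":
        return bindings["X"] in {"ann", "bob"}    # pretend ontology query
    raise NotImplementedError(constraint)

# hybrid rule: guardian(X, Y) :- parent(X, Y), adult(X).
def guardians():
    for pred, x, y in facts:
        if pred == "parent" and theory_solver("adult", {"X": x}):
            yield x, y

print(sorted(guardians()))   # [('ann', 'bob'), ('bob', 'cia')]
```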


Book ChapterDOI
07 Jun 2007
TL;DR: This paper provides an inconsistency-tolerant semantics for DLs, and studies the computational complexity of consistent query answering over ontologies specified in DL-Lite, a family of DLs specifically tailored to deal with large amounts of data.
Abstract: Description Logics (DLs) have been widely used in recent years as formal languages for specifying ontologies over the web. Due to the dynamic nature of this setting, it may frequently happen that data retrieved from the web contradict the intensional knowledge provided by the ontology through which they are collected, which may therefore become inconsistent. In this paper, we analyze the problem of consistent query answering over DL ontologies, i.e., the problem of providing meaningful answers to queries posed over inconsistent ontologies. We provide an inconsistency-tolerant semantics for DLs, and study the computational complexity of consistent query answering over ontologies specified in DL-Lite, a family of DLs specifically tailored to deal with large amounts of data. We show that this problem is coNP-complete w.r.t. data complexity, i.e., the complexity measured w.r.t. the size of the data only. Towards the identification of tractable cases of consistent query answering over DL-Lite ontologies, we then study the problem of consistent instance checking, i.e., the instance checking problem under our inconsistency-tolerant semantics. We provide an algorithm for it that runs in time polynomial in the size of the data, thus showing that the problem is in PTIME w.r.t. data complexity.

35 citations
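
The inconsistency-tolerant semantics can be illustrated by brute force: a tuple is a consistent answer iff it is an answer in every repair, i.e., in every maximal subset of the ABox that satisfies the TBox. The sketch below enumerates repairs exponentially, which is precisely what the paper's DL-Lite algorithms avoid; the toy disjointness constraint and individuals are invented.

```python
# Brute-force consistent query answering over a toy ABox with one TBox
# constraint: Student and Professor are disjoint classes.
from itertools import combinations

abox = {("Student", "pat"), ("Professor", "pat"), ("Student", "kim")}

def consistent(subset):
    return not any(("Student", i) in subset and ("Professor", i) in subset
                   for _, i in subset)

def repairs(abox):
    """All maximal consistent subsets of the ABox."""
    subsets = [set(s) for r in range(len(abox), -1, -1)
               for s in combinations(abox, r) if consistent(set(s))]
    return [s for s in subsets if not any(s < t for t in subsets)]

# consistent answers to Student(x): individuals that are Students in
# *every* repair
answers = set.intersection(*[{i for c, i in r if c == "Student"}
                             for r in repairs(abox)])
print(answers)   # {'kim'} -- pat is a Student in some repairs only
```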


Book ChapterDOI
07 Jun 2007
TL;DR: This paper presents a system that reasons with an ontology of emotions implemented with semantic web technologies and provides a bridge between an unrestricted input and a restricted set of concepts for which particular rules are provided.
Abstract: The adequate representation of emotions in affective computing is an important problem and the starting point of studies related to emotions. There are different approaches for representing emotions, and selecting one of these existing methods depends on the purpose of the application. Another problem related to emotions is the number of different emotional concepts, which makes it very difficult to find the most specific emotion to be expressed in each situation. This paper presents a system that reasons with an ontology of emotions implemented with semantic web technologies. Each emotional concept is defined in terms of a range of values along the three-dimensional space of emotional dimensions. The capabilities for automated classification and for establishing taxonomical relations between concepts are used to provide a bridge between an unrestricted input and a restricted set of concepts for which particular rules are provided. The rules applied at the end of the process provide configuration parameters for a system for emotional voice synthesis.

31 citations
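
The dimensional definitions suggest a simple picture: each emotional concept is a box in the three-dimensional valence-arousal-dominance space, and an unrestricted input point is mapped to the most specific concept containing it. The concepts and ranges below are invented; the paper performs this classification with an OWL ontology and a reasoner.

```python
# Classify a (valence, arousal, dominance) point against concept boxes;
# "most specific" is approximated here by smallest box volume.
import math

CONCEPTS = {
    "emotion": ((-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0)),
    "joy":     ((0.3, 1.0), (0.2, 1.0), (0.0, 1.0)),
    "elation": ((0.6, 1.0), (0.6, 1.0), (0.3, 1.0)),
}

def volume(box):
    return math.prod(hi - lo for lo, hi in box)

def classify(point):
    matches = [c for c, box in CONCEPTS.items()
               if all(lo <= x <= hi for x, (lo, hi) in zip(point, box))]
    return sorted(matches, key=lambda c: volume(CONCEPTS[c]))

print(classify((0.7, 0.8, 0.5)))   # ['elation', 'joy', 'emotion']
```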


Book ChapterDOI
07 Jun 2007
TL;DR: A new framework for the representation of and reasoning over geo-ontologies is presented, using the web ontology language (OWL) and its associated reasoning tools; spatial reasoning and integrity rules are handled by a spatial rule engine extension to those tools.
Abstract: Geo-ontologies have a key role to play in the development of the geospatial-semantic web, with regard to facilitating the search for geographical information and resources. They normally hold large amounts of geographic information and undergo a continuous process of revision and update. Hence, means of ensuring their integrity are crucial and needed to allow them to serve their purpose. This paper proposes the use of qualitative spatial reasoning as a tool to support the development of a geo-ontology management system. A new framework for the representation of and reasoning over geo-ontologies is presented using the web ontology language (OWL) and its associated reasoning tools. Spatial reasoning and integrity rules are represented using a spatial rule engine extension to the reasoning tools associated with OWL. The components of the framework are described and the implementation of the spatial reasoning engine is presented. This work is a step towards the realisation of a complete geo-ontology management system for the semantic web.

26 citations
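
To give a flavor of a spatial integrity rule, the sketch below flags a region asserted to be inside two regions that are declared disjoint. The relation names, the rule, and the geography are invented; the paper implements such rules in a spatial rule engine over OWL and its reasoners.

```python
# Toy qualitative spatial integrity check: a region inside two disjoint
# regions is an inconsistency in the geo-ontology.
inside   = {("hyde_park", "london"), ("hyde_park", "paris")}
disjoint = {frozenset({"london", "paris"})}

def integrity_violations(inside, disjoint):
    seen = set()
    for a, b in inside:
        for a2, c in inside:
            if a == a2 and b != c and frozenset({b, c}) in disjoint:
                key = (a, frozenset({b, c}))
                if key not in seen:
                    seen.add(key)
                    yield a, b, c

for a, b, c in integrity_violations(inside, disjoint):
    print(f"inconsistent: {a} is inside both {b} and {c}, "
          f"which are declared disjoint")
```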


Book ChapterDOI
07 Jun 2007
TL;DR: This paper presents an approach to generate the semantics of service compositions from the semantics of the contained services, assuming a formal Workflow net model of the composition, and shows how this works in practice.
Abstract: Supporting service discovery by semantic service specifications is currently an important research area. While the approaches for the annotation of individual services are well researched, determining the semantics of compositions of services remains an open research issue. In this paper, we present an approach to generate the semantics of service compositions from the semantics of the contained services. To do this we assume a formal Workflow net model of the service composition. With an example use case we show how this works in practice.

25 citations


Book ChapterDOI
07 Jun 2007
TL;DR: Fuzzy CARIN provides a sound and complete algorithm for representing and reasoning about fuzzy ALCNR extended with nonrecursive Horn rules, and offers the ability to answer unions of conjunctive queries, a novelty not previously addressed by fuzzy DL systems.
Abstract: This paper describes fuzzy CARIN, a knowledge representation language combining fuzzy description logics with Horn rules. Fuzzy CARIN integrates the management of fuzzy logic into the nonrecursive CARIN system. It provides a sound and complete algorithm for representing and reasoning about fuzzy ALCNR extended with nonrecursive Horn rules. Such an extension is most useful in realistic applications dealing with uncertainty and imprecision, such as multimedia processing and medical applications. Additionally, it offers the ability to answer unions of conjunctive queries, a novelty not previously addressed by fuzzy DL systems.

23 citations


Book ChapterDOI
07 Jun 2007
TL;DR: A hybrid method combining symbolic and numerical techniques for annotating brain Magnetic Resonance images, using an OWL DL ontology enriched by SWRL rules to assist automatic labelling methods.
Abstract: This paper describes a hybrid method combining symbolic and numerical techniques for annotating brain Magnetic Resonance images. Existing automatic labelling methods are mostly statistical in nature and do not work very well in certain situations, such as the presence of lesions. The goal is to assist them with a knowledge-based method. The system uses a statistical method to generate a sufficient set of initial facts for fruitful reasoning. The reasoning is then supported by an OWL DL ontology enriched by SWRL rules. The experiments described were carried out using the KAON2 reasoner for inferring the annotations.

Book ChapterDOI
07 Jun 2007
TL;DR: This paper presents the design of a system for proof explanation on the Semantic Web, based on defeasible reasoning; the basis of this work is the DR-DEVICE system, which is extended to handle proofs.
Abstract: Trust is a vital feature for the Semantic Web: If users (humans and agents) are to use and integrate system answers, they must trust them. Thus, systems should be able to explain their actions, sources, and beliefs, and this issue is the topic of the proof layer in the design of the Semantic Web. This paper presents the design of a system for proof explanation on the Semantic Web, based on defeasible reasoning. The basis of this work is the DR-DEVICE system that is extended to handle proofs. A critical aspect is the representation of proofs in an XML language, which is achieved by a RuleML language extension.

Book ChapterDOI
07 Jun 2007
TL;DR: The goal of this work is to take explicitly into account any possible contextual dependency of a collection of RDF models, without losing sight of performance and scalability issues.
Abstract: In this paper we present a context-based architecture and implementation for supporting the construction and management of contextualized RDF knowledge bases. The goal of this work is to take explicitly into account any possible contextual dependency of a collection of RDF models, without losing sight of performance and scalability issues. We illustrate the motivations, as well as the theoretical background, implementation details and test results of our latest work.

Book ChapterDOI
07 Jun 2007
TL;DR: Different formalisms for modular ontologies are compared in their ability to support networked, dynamic and distributed ontologies, as well as in their reasoning capabilities over these ontologies, showing the strengths and limitations of existing formalisms against the needs of modular ontologies in the given setting.
Abstract: Modern semantic technology is one of the necessary supports for the infrastructure of next-generation information systems. In particular, large international organizations, which usually have branches around the globe, need to manage web-based, complex, dynamically changing and geographically distributed information. Formalisms for modular ontologies offer the mechanism needed to handle ontology-based distributed information systems in the aforementioned scenario. In this paper, we investigate state-of-the-art technologies in the area of modular ontologies and the corresponding logical formalisms. We compare different formalisms for modular ontologies in their ability to support networked, dynamic and distributed ontologies, as well as in their reasoning capabilities over these ontologies. The comparison shows the strengths and limitations of existing formalisms against the needs of modular ontologies in the given setting, and possible future extensions to overcome those limitations.

Book ChapterDOI
07 Jun 2007
TL;DR: An extension to the Semantic Web Rule Language and a methodology to enable advanced mathematical support in SWRL rules are presented, allowing the inclusion of integration, differentiation and other operations not built into SWRL.
Abstract: This paper presents an extension to the Semantic Web Rule Language and a methodology to enable advanced mathematical support in SWRL rules. This solution separates mathematical and problem semantics, allowing the inclusion of integration, differentiation and other operations not built into SWRL. Using this approach, it is possible to create rules to cope with complex scenarios that include mathematical relationships and formulas exceeding SWRL's capabilities.
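
The separation of mathematical from problem semantics can be pictured as a registry of external evaluators that the rule engine consults when it encounters an atom outside SWRL's built-ins. The builtin name math:integrate, the registry, and the trapezoidal integrator are assumptions of this sketch, not the paper's actual extension.

```python
# Hypothetical dispatcher for extended mathematical builtins in rules.
def integrate(f, a, b, n=1000):
    """Trapezoidal-rule numeric integration of f over [a, b]."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

BUILTINS = {"math:integrate": integrate}      # hypothetical registry

def eval_builtin(name, *args):
    """A rule engine would call this for an atom using an extended builtin."""
    return BUILTINS[name](*args)

# e.g. a body atom binding ?area to the integral of x^2 over [0, 1]
area = eval_builtin("math:integrate", lambda x: x * x, 0.0, 1.0)
print(round(area, 4))   # ~0.3333
```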

Book ChapterDOI
07 Jun 2007
TL;DR: The paper presents an architecture and implementation techniques for hybrid integration of normal clauses under well-founded semantics with ontologies specified in Description Logics and uses XSB Prolog both for rule reasoning and for controlling communication with the ontology reasoner RacerPro.
Abstract: The paper presents an architecture and implementation techniques for the hybrid integration of normal clauses under the well-founded semantics with ontologies specified in Description Logics. The described prototype uses XSB Prolog both for rule reasoning and for controlling communication with the ontology reasoner RacerPro. The query answering techniques for hybrid rules implemented in this prototype are sound with respect to the declarative semantics, which extends the well-founded semantics of normal programs and is faithful with respect to FOL.

Book ChapterDOI
07 Jun 2007
TL;DR: In this article, a formalism for schema mapping specification taking into account key constraints and value dependencies is presented, which extends results from [1, 2] and [3, 4].
Abstract: Schema mappings play a central role in both data integration and data exchange, and are understood as high-level specifications describing the relationships between data schemas. Based on these specifications, data structured under a source schema can be transformed into data structured under a target schema. During the transformation, some structural constraints, both context-free (the structure) and contextual (e.g., keys and value dependencies), should be taken into account. In this work, we present a formalism for schema mapping specification taking into account key constraints and value dependencies. The formalism extends results from [1,2] and our previous work [3,4]. We illustrate the approach by an example.

Book ChapterDOI
07 Jun 2007
TL;DR: This work generalizes the characteristics of a service, which need to be considered for successful execution of the service, as constraints, and presents a predicate logic model to specify the corresponding constraints.
Abstract: The most promising feature of the Web services platform is its ability to form new (composite) services by combining the capabilities of already existing (component) services. The existing services may themselves be composite leading to a hierarchical composition. In this work, we focus on the discovery aspect. We generalize the characteristics of a service, which need to be considered for successful execution of the service, as constraints. We present a predicate logic model to specify the corresponding constraints. Further, composite services are also published in a registry and available for discovery (hierarchical composition). Towards this end, we show how the constraints of a composite service can be derived from the constraints of its component services in a consistent manner. Finally, we present an incremental matchmaking algorithm which allows bounded inconsistency.

Book ChapterDOI
07 Jun 2007
TL;DR: A way to automatically generate a part of the business rules by combining concepts coming from Model Driven Architecture and Semantic Web using the Ontology Definition Metamodel is proposed.
Abstract: Business rules are statements that express (certain parts of) a business policy, defining terms and defining or constraining the operation of an enterprise, in a declarative manner. The business rule approach is increasingly used because, in such systems, business experts can maintain the complex behavior of their application in a "zero development" environment. A growing number of business rule management systems (BRMS) and rule engines exist, creating new needs in the business rules community. Currently the main requirement in this domain is a standard language for representing business rules, facilitating their integration and sharing. Work to address this gap is in progress at, e.g., the OMG and the W3C. The aim of this paper is to propose a way to automatically generate a part of the business rules by combining concepts coming from Model Driven Architecture and the Semantic Web, using the Ontology Definition Metamodel.

Book ChapterDOI
07 Jun 2007
TL;DR: A rule acquisition framework is developed that uses a rule ontology, which can be acquired from the rule base of a similar site and then used for rule acquisition in other sites of the same domain.
Abstract: Rule-based systems and agents are important applications of Semantic Web constructs such as RDF, OWL, and SWRL. While there are plenty of utilities that support ontology generation and utilization, rule acquisition remains a bottleneck obstructing the wide adoption of rule-based systems. To automatically acquire rules from unstructured texts, we develop a rule acquisition framework that uses a rule ontology. The ontology can be acquired from the rule base of a similar site, and is then used for rule acquisition in other sites of the same domain. The procedure of ontology-based rule acquisition consists of rule component identification and rule composition. The former uses stemming and semantic similarity to extract variables and values from the Web page, and the latter uses the best-first search method to compose the variables and values into rules.

Book ChapterDOI
07 Jun 2007
TL;DR: In this paper, the authors present tools that allow direct access to relational data from OWL applications using extensions to OWL's rule language SWRL; a variety of optimization techniques ensure that this process is efficient and scales to large datasets.
Abstract: For the foreseeable future, most data will continue to be stored in relational databases. To work with these data in ontology-based applications, tools and techniques that bridge the two models are required. Mapping all relational data to ontology instances is often not practical so dynamic data access approaches are typically employed, though these approaches can still suffer from scalability problems. The use of rules with these systems presents an opportunity to employ optimization techniques that can significantly reduce the amount of data transferred from databases. To illustrate this premise, we have developed tools that allow direct access to relational data from OWL applications. We express these data requirements by using extensions to OWL's rule language SWRL. A variety of optimization techniques ensure that this process is efficient and scales to large data sets.
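
The optimization premise (push joins and filters into the database rather than pulling rows into the rule engine) can be sketched as compiling a conjunction of rule atoms into one SQL query. The atom encoding and schema below are invented; the paper expresses such data requirements through its SWRL extensions.

```python
# Compile a conjunction of rule atoms into a single SQL query so the
# database performs joins and filters, minimizing transferred data.

# atom: (table, {column: variable "?x" or quoted constant})
atoms = [
    ("employee", {"id": "?e", "dept_id": "?d"}),
    ("dept",     {"id": "?d", "city": "'Boston'"}),
]

def atoms_to_sql(atoms, select=("?e",)):
    conds, var_sites = [], {}
    for i, (_table, colmap) in enumerate(atoms):
        for col, term in colmap.items():
            site = f"t{i}.{col}"
            if term.startswith("?"):
                if term in var_sites:
                    conds.append(f"{site} = {var_sites[term]}")   # join
                else:
                    var_sites[term] = site
            else:
                conds.append(f"{site} = {term}")                  # filter
    froms = ", ".join(f"{t} t{i}" for i, (t, _) in enumerate(atoms))
    cols = ", ".join(var_sites[v] for v in select)
    where = " AND ".join(conds) if conds else "1=1"
    return f"SELECT {cols} FROM {froms} WHERE {where}"

print(atoms_to_sql(atoms))
# SELECT t0.id FROM employee t0, dept t1
#   WHERE t1.id = t0.dept_id AND t1.city = 'Boston'
```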

Book ChapterDOI
07 Jun 2007
TL;DR: This paper presents the rewriting-based, Web verification service WebVerdi-M, which is able to recognize forbidden/incorrect patterns and incomplete/missing Web pages, and develops some scalable experiments which demonstrate the usefulness of the approach.
Abstract: In this paper, we present the rewriting-based, Web verification service WebVerdi-M, which is able to recognize forbidden/incorrect patterns and incomplete/missing Web pages. WebVerdi-M relies on a powerful Web verification engine that is written in Maude, which automatically derives the error symptoms. Thanks to the AC pattern matching supported by Maude and its metalevel facilities, WebVerdi-M enjoys much better performance and usability than a previous implementation of the verification framework. By using the XML Benchmarking tool xmlgen, we develop some scalable experiments which demonstrate the usefulness of our approach.

Book ChapterDOI
07 Jun 2007
TL;DR: The nonmonotonic first-order autoepistemic logic, which generalizes both Description Logics and Logic Programming, is taken and extended with frames and concrete domains to capture all features of WSML; it is demonstrated that each WSML variant semantically corresponds to its target formalism.
Abstract: WSML presents a framework encompassing different language variants, rooted in Description Logics and (F-)Logic Programming. So far, the precise relationships between these variants have not been investigated. We take the nonmonotonic first-order autoepistemic logic, which generalizes both Description Logics and Logic Programming, and extend it with frames and concrete domains, to capture all features of WSML; we call this novel formalism FF-AEL. We consider two forms of language layering for WSML, namely loose and strict layering, where the latter enforces additional restrictions on the use of certain language constructs in the rule-based language variants, in order to give additional guarantees about the layering. Finally, we demonstrate that each WSML variant semantically corresponds to its target formalism, i.e. WSML-DL corresponds to SHIQ(D), WSML-Rule to the Stable Model Semantics for Logic Programs (the Well-Founded Semantics can be seen as an approximation), and WSML-Core to DHL(D) (without nominals), a Horn subset of SHIQ(D).

Book ChapterDOI
07 Jun 2007
TL;DR: Efficiency evaluations concerning two different approaches to using logic programming for OWL reasoning are reported, and it is shown how the two approaches can be combined.
Abstract: We report on efficiency evaluations concerning two different approaches to using logic programming for OWL [1] reasoning and show how the two approaches can be combined.

Book ChapterDOI
07 Jun 2007
TL;DR: In this paper, situation theory is used to model the context of agents' actions in a heterogeneous P2P system for semantic data integration; this formal basis is also suitable for coping with information partiality, open-world reasoning and non-monotonic reasoning.
Abstract: We use situation theory to model the context of agents' actions in a heterogeneous P2P system for semantic data integration. This formal basis is also suitable for coping with information partiality, open-world reasoning and non-monotonic reasoning. The operational semantics of asking and answering queries by the agents is presented as a set of context-dependent rules. Situations are represented by facts and rules, and Prolog-like reasoning mechanisms are used in the system. A specification of sample actions is presented.

Book ChapterDOI
07 Jun 2007
TL;DR: A procedural, query answering-oriented semantics for weighted fuzzy logic programs, whose computation combines resolution with tabling methodologies and is carried out by constructing and evaluating an appropriate resolution graph.
Abstract: We describe a procedural, query answering-oriented semantics for weighted fuzzy logic programs. The computation of the semantics combines resolution with tabling methodologies and is done by constructing and evaluating an appropriate resolution graph.
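
Combining resolution with tabling can be pictured as memoized, query-driven evaluation: each subgoal's degree is computed once and reused. The weighting scheme (rule weight times the minimum of body degrees) and the acyclic example program are assumptions of this sketch; the paper constructs and evaluates a full resolution graph and handles the general case.

```python
# Query-driven evaluation of weighted fuzzy rules with a memo table.
# Acyclic example only; cyclic programs need genuine tabling machinery.
from functools import lru_cache

RULES = {
    "edge_ab": [(1.0, [])],
    "edge_bc": [(0.7, [])],
    "path_ab": [(0.9, ["edge_ab"])],
    "path_ac": [(0.8, ["path_ab", "edge_bc"])],
}

@lru_cache(maxsize=None)                      # the "table"
def degree(atom):
    best = 0.0
    for weight, body in RULES.get(atom, []):
        body_deg = min([degree(b) for b in body], default=1.0)
        best = max(best, weight * body_deg)
    return best

print(round(degree("path_ac"), 2))   # 0.8 * min(0.9, 0.7) = 0.56
```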

Book ChapterDOI
07 Jun 2007
TL;DR: A description logic (DL) that makes a unique names assumption is integrated with general rules that have the form of Datalog ℙrograms permitting default negation in the body.
Abstract: A unifying logic is built on top of ontologies and rules for the revised Semantic Web Architecture. This paper proposes \(\mathcal{ALC}^{u}_{\mathbb{P}}\), which integrates a description logic (DL) that makes a unique names assumption with general rules that have the form of Datalog ℙrograms permitting default negation in the body. An \(\mathcal{ALC}^{u}_{\mathbb{P}}\) knowledge base (KB) consists of a TBox \(\mathcal{T}\) of subsumptions, an ABox \(\mathcal{A}\) of assertions, and a novel PBox ℙ of general rules that share predicates with DL concepts and DL roles. To model open answer set semantics, extended Herbrand structures are used for interpreting DL concepts and DL roles, while open answer sets hold for general rules. To retain decidability, a well-known weak safeness condition is employed. We develop DL tableaux-based algorithms for decision procedures of the KB satisfiability and the query entailment problems.

Book ChapterDOI
07 Jun 2007
TL;DR: This work analyzes what information has to be contained in a domain ontology and shows that large parts of the behavior can be expressed preferably by rules, and how the tasks can be integrated and handled by a service infrastructure in the Semantic Web.
Abstract: We investigate the use of domain ontologies that also include actions and events of that domain. Such ontologies do not only cover the static aspects of an ontology, but also activities and behavior in the given domain. We analyze what information has to be contained in such an ontology and show that large parts of the behavior can be expressed preferably by rules. We show how the tasks can be integrated and handled by a service infrastructure in the Semantic Web.

Book ChapterDOI
07 Jun 2007
TL;DR: A Racer-based consistency-checking method, an ontology evolution method, and a performance evaluation are given, indicating the high performance of the proposed two-stage clustering approach for semi-automatically building ontologies from a Chinese-document corpus.
Abstract: Building a domain ontology is time-consuming and tedious, since it is usually done manually by domain experts and knowledge engineers. This paper proposes a two-stage clustering approach for semi-automatically building ontologies from a Chinese-document corpus, based on a self-organizing map (SOM) neural network and agglomerative hierarchical clustering, and for automatically checking ontology consistency. Chinese lexical analysis and the XML Path Language (XPath) are used in the process of extracting resources from Web documents. In our experiment, this two-stage clustering approach is used to build an automobile ontology. Experimental results and a comparison with a more conventional ontology-generation method are presented and discussed, indicating the high performance of our approach. A Racer-based consistency-checking method is presented in this paper. An ontology evolution method and a performance evaluation are also given.
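
A minimal sketch of the two-stage pipeline, under stated assumptions: stage one trains a tiny one-dimensional SOM (winner-take-all updates only, whereas a real SOM also updates grid neighbors) to compress term vectors into prototypes, and stage two runs agglomerative hierarchical clustering over those prototypes. The random toy vectors stand in for features extracted from the Chinese-document corpus.

```python
# Two-stage clustering sketch: SOM compression, then agglomerative
# clustering of the SOM prototypes into candidate concept groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.random((40, 5))                       # toy term vectors

# Stage 1: 1-D SOM with 6 units, winner-take-all updates only.
units = rng.random((6, 5))
for t in range(200):
    lr = 0.5 * (1 - t / 200)                  # decaying learning rate
    x = X[rng.integers(len(X))]
    best = np.argmin(((units - x) ** 2).sum(axis=1))
    units[best] += lr * (x - units[best])

# Stage 2: agglomerative hierarchical clustering over the prototypes.
Z = linkage(units, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
print("SOM unit -> cluster:", dict(enumerate(labels)))
```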