Book Chapter

A Datalog+ RuleML 1.01 Architecture for Rule-Based Data Access in Ecosystem Research

18 Aug 2014, Vol. 8620, pp. 112-126
TL;DR: An RBDA architecture layered on Datalog+ RuleML is proposed and applied to the ∆Forest case study on the susceptibility of forests to climate change. The case study has shown the usefulness of the approach to Ecosystem Research for global schema design and demonstrated how automated reasoning can become key to knowledge modeling and consolidation for complex statistical data analysis.
Abstract: Rule-Based Data Access (RBDA) enables automated reasoning over a knowledge base (KB) as a generalized global schema for the data in local (e.g., relational or graph) databases reachable through mappings. RBDA can semantically validate, enrich, and integrate heterogeneous data sources. This paper proposes an RBDA architecture layered on Datalog+ RuleML, and uses it for the ∆Forest case study on the susceptibility of forests to climate change. Deliberation RuleML 1.01 was mostly motivated by Datalog customization requirements for RBDA. It includes Datalog+ RuleML 1.01 as a standard XML serialization of Datalog+, a superlanguage of the decidable Datalog±. Datalog+ RuleML is customized into the three Datalog extensions Datalog[∃], Datalog[=], and Datalog[⊥] through MYNG, the RuleML Modular sYNtax confiGurator generating (Relax NG and XSD) schemas from language-feature selections. The ∆Forest case study on climate change employs data derived from three main forest monitoring networks in Switzerland. The KB includes background knowledge about the study sites and design, e.g., abundant tree species groups, pure tree stands, and statistical independence among forest plots. The KB is used to rewrite queries about, e.g., the eligible plots for studying a particular species group. The mapping rules unfold our newly designed global schema to the three given local schemas, e.g., for the grade of forest management. The RBDA/∆Forest case study has shown the usefulness of our approach to Ecosystem Research for global schema design and demonstrated how automated reasoning can become key to knowledge modeling and consolidation for complex statistical data analysis.

Summary (2 min read)

1 Introduction

  • Ontology-Based Data Access (OBDA) has emerged as a major application area of Semantic Technologies for validating, enriching, and integrating heterogeneous databases (e.g., [1]).
  • KBDA also validates and enriches local-schema data with global-schema knowledge represented as ontologies or rulebases.
  • In particular, the WSL project addressed in this work is about the susceptibility of forests to climate change [9].

2.1 Kinds of Rules in KBDA

  • (2) OBDA ontologies support query rewriting through global-schema-level reasoning.
  • Such ontologies often extend RDFS to the description logic DL-Lite [11] (as in OWL 2 QL [12]), including subsumption axioms that correspond to (head-)existential rules.
  • The authors will refer to the store containing ontologies or rulebases for rewriting as the KB.
  • (3) KBDA data integration is centered on GAV mappings, which are safe Datalog rules for query unfolding of each global head predicate into a conjunction of local body predicates.

2.2 A Unified Architecture for KBDA

  • Mediator and warehouse architectures for KBDA have often been considered in isolation.
  • The Q″ᵢ are finally grounded to (SQL/SPARQL/...) queries for the original databases DBᵢ, whose answers – ultimately for Q – are returned.
  • In (b), the warehouse strategy, databases DB₁, DB₂, ..., DBₙ undergo database Folding through the Folding/Unfolding transformation, resulting in an integrated database DB.
  • In the KB-directed normal form the KB store performs all deductions except atom-level local/global renamings, reserved to the mapping store.
  • The RuleML language is based on a set of monotonic schema modules, each module providing the grammar of a syntactic feature that can be mixed-in to the language [19].

2.4 Relations and Graphs in PSOA RuleML

  • The two modeling paradigms of relational and graph languages can be used simultaneously in the global and local schemas of KBDA architectures.
  • Some KBDA engines, e.g. Nyaya [7], use Datalog± for global relational querying.
  • Graph languages are used, e.g., in frame logic and Semantic Web applications.
  • Hence, a language like PSOA RuleML [21], capable of knowledge and data modeling in both paradigms, can support the specification of these transformations.
  • Here, the positional argument that acts as the simple key in the relation becomes the OID of a frame, and the other positional arguments become slot values whose slot names correspond to relational column headings.
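As an illustrative sketch of this transformation in PSOA RuleML presentation syntax (the relation, slot, and constant names are hypothetical, not taken from the paper): a relational fact whose first, key argument becomes the OID of a frame whose slot names correspond to the column headings.

    % Relational fact (positional): key, species, management grade
    plot(p17 beech high)

    % Corresponding frame: the key p17 becomes the OID, the remaining
    % positional arguments become slot fillers
    p17#Plot(species->beech mgmtGrade->high)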

3 ∆Forest Case Study

  • The WSL project [9] aims for an assessment of the susceptibility of forest ecosystems to the changing environmental conditions expected to accompany climate change, such as changes in temperature or precipitation.
  • The following working hypotheses are tested: (i) The relative mortality is a useful indicator for the susceptibility of forest stands to changing climatic conditions.
  • Analysis is conducted for 285 pure and mixed forest stands in Switzerland, covering several climatic regions and the five tree species groups of interest: beech (Fagus sylvatica), oak (Quercus petraea and Quercus robur), spruce (Picea abies), pine (Pinus sylvestris), and fir (Abies alba).
  • To make a significant statement about how two or more variables are related, the sample size (i.e., the number of plots) must exceed a certain lower bound.
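A hedged sketch of such an eligibility query as a KB rule in Prolog-style Datalog notation (all predicate names are hypothetical stand-ins for the ∆Forest KB): a plot is eligible for studying a species group if it grows an abundant species of that group, is a pure stand, and is statistically independent of the other plots.

    % Eligible plots for studying a given species group G
    eligiblePlot(P, G) :- plot(P), growsSpecies(P, S), inSpeciesGroup(S, G),
                          abundantOn(S, P), pureStand(P), independentPlot(P).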

4 Conclusions

  • OBDA is complemented by Rule-Based Data Access (RBDA) and generalized to Knowledge-Based Data Access (KBDA).
  • RBDA is founded on three kinds of rules: Query rules (including integrity rules), KB rules (for query rewriting and DB materialization), as well as mapping rules (for query unfolding and DB folding).
  • A unified KBDA architecture is presented with mediator, warehouse, and bidirectional data-access strategies.
  • In particular, relevant KBDA efficiency techniques [28] could be adapted to ∆Forest.


A Datalog+ RuleML 1.01 Architecture for Rule-Based Data Access in Ecosystem Research

Harold Boley¹, Rolf Grütter², Gen Zou¹, Tara Athan³, Sophia Etzold²

¹ Faculty of Computer Science, University of New Brunswick, Fredericton, Canada
  {harold.boley, gen.zou}[AT]unb.ca
² Swiss Federal Research Institute WSL, Birmensdorf, Switzerland
  {rolf.gruetter, sophia.etzold}[AT]wsl.ch
³ Athan Services (athant.com), West Lafayette, Indiana, USA
  taraathan[AT]gmail.com
Abstract. Rule-Based Data Access (RBDA) enables automated reasoning over a knowledge base (KB) as a generalized global schema for the data in local (e.g., relational or graph) databases reachable through mappings. RBDA can semantically validate, enrich, and integrate heterogeneous data sources. This paper proposes an RBDA architecture layered on Datalog+ RuleML, and uses it for the ∆Forest case study on the susceptibility of forests to climate change. Deliberation RuleML 1.01 was mostly motivated by Datalog customization requirements for RBDA. It includes Datalog+ RuleML 1.01 as a standard XML serialization of Datalog+, a superlanguage of the decidable Datalog±. Datalog+ RuleML is customized into the three Datalog extensions Datalog[∃], Datalog[=], and Datalog[⊥] through MYNG, the RuleML Modular sYNtax confiGurator generating (Relax NG and XSD) schemas from language-feature selections. The ∆Forest case study on climate change employs data derived from three main forest monitoring networks in Switzerland. The KB includes background knowledge about the study sites and design, e.g., abundant tree species groups, pure tree stands, and statistical independence among forest plots. The KB is used to rewrite queries about, e.g., the eligible plots for studying a particular species group. The mapping rules unfold our newly designed global schema to the three given local schemas, e.g., for the grade of forest management. The RBDA/∆Forest case study has shown the usefulness of our approach to Ecosystem Research for global schema design and demonstrated how automated reasoning can become key to knowledge modeling and consolidation for complex statistical data analysis.
1 Introduction

Ontology-Based Data Access (OBDA) has emerged as a major application area of Semantic Technologies for validating, enriching, and integrating heterogeneous databases (e.g., [1]). Complementary systems for Rule-Based Data Access (RBDA) have been developed as well (e.g., [2]). For ontology-rule synergy, OBDA and RBDA have been generalized to Knowledge-Based Data Access (KBDA).⁴ While the earlier logic-database combinations, e.g. procedural Prolog-SQL interfaces, interleaved knowledge-based reasoning with data access, KBDA keeps these layers separate, using declarative mappings to bridge between the two. This way, the (higher-level) ontology and rule technologies can be advanced independently from, yet be combined with, the (lower-level) optimizations progressing for DB engines. KBDA can thus provide the urgently needed knowledge level for the growing number of data sources (e.g., about climate change) of big volume, variety, and velocity in a cost-effective manner.
KBDA builds on earlier work in knowledge-based information/data/schema integration (e.g., [3–5]). It integrates data complying with local (heterogeneous) schemas into data complying with a global (homogeneous) schema, usually employing Global-As-View (GAV) mappings. It also validates and enriches local-schema data with global-schema knowledge represented as ontologies or rulebases.

Some KBDA approaches use a mediator architecture for query rewriting [2, 6, 7], corresponding to top-down processing and backward reasoning, while others use a warehouse architecture for database materialization [8], corresponding to bottom-up processing and forward reasoning. Given that both have their advantages, we will propose a unified mediator/warehouse architecture.

KBDA KBs usually encompass rule knowledge to enrich the factual data, mapped (again via rules) from the local (heterogeneous) schemas of one or more databases to a global (homogeneous) schema. Given these and other roles of rules, we will focus on RBDA in the following.
RuleML provides a family of rule (including fact) languages of customizable expressivity, a family-uniform XML format, and a suite of tools for rule processing, including the MYNG tool for generating serialization schemas in RNC and XSD. Deliberation RuleML 1.01 introduces a standard XML serialization of Datalog+, a superlanguage of the decidable Datalog±, which is being increasingly used for RBDA. Section 2 will present a unified architecture for KBDA, examine KBs and Mappings in Datalog+ RuleML, and discuss relational-graph transformations for the global schema.
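To make the serialization concrete, here is a minimal, untested sketch of a head-existential (Datalog[∃]) rule, stating that every plot grows some species, in Deliberation RuleML 1.01 XML. The predicate names are hypothetical, and the element nesting reflects our reading of the specification; any such instance should be validated against a MYNG-generated (Relax NG or XSD) schema.

    <RuleML xmlns="http://ruleml.org/spec">
      <Assert>
        <Forall>
          <declare><Var>p</Var></declare>
          <Implies>
            <if>
              <Atom><Rel>Plot</Rel><Var>p</Var></Atom>
            </if>
            <then>
              <Exists>
                <declare><Var>s</Var></declare>
                <Atom><Rel>grows</Rel><Var>p</Var><Var>s</Var></Atom>
              </Exists>
            </then>
          </Implies>
        </Forall>
      </Assert>
    </RuleML>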
WSL creates knowledge and publishes data about Swiss forests, giving an integrated federal perspective on heterogeneous databases of various (e.g., geographically and thematically) specialized sources. In particular, the WSL project addressed in this work is about the susceptibility of forests to climate change [9]. Section 3 will show how this RuleML-WSL collaboration, termed ∆Forest, is bringing the RBDA technologies of Section 2 to bear on WSL knowledge and databases.

⁴ An overview is at http://www.cs.unb.ca/~boley/talks/RulesOBDA.pdf.

2 RBDA Technology
We will now examine RBDA technology, starting with ‘the rules of OBDA’ from a mediator perspective, continuing with a unification of mediator and warehouse architectures for KBDA, and then expanding on Datalog+ RuleML and PSOA RuleML for our focus area of RBDA.
2.1 Kinds of Rules in KBDA
Motivated by rule-ontology synergies, we will discuss key mediator concepts of KBDA and their foundation in three kinds of (Datalog+) rules, to be exemplified through the ∆Forest case study in Section 3.
(1) A conjunctive query is a special Datalog rule whose conjunctive body can be rewritten as in (2) and unfolded as in (3), and whose n-ary head predicate instantiates the distinguished answer variables of the body predicates. OBDA ontologies beyond RDF Schema (RDFS) expressivity usually permit negative constraints for data validation, which are represented as Boolean conjunctive queries corresponding to RBDA integrity rules, e.g. in the extension Datalog[⊥] of Datalog+ [10].
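As an illustration in Prolog-style Datalog notation (all predicate names hypothetical): a conjunctive query collecting plots that grow beech, and a negative constraint, i.e. an integrity rule, whose violation derives falsity.

    % Conjunctive query: unary answer predicate q over the distinguished variable P
    q(P) :- plot(P), growsSpecies(P, beech).

    % Negative constraint (Datalog[⊥]): no plot is both a pure and a mixed stand
    ⊥ :- pureStand(P), mixedStand(P).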
(2) OBDA ontologies support query rewriting through global-schema-level reasoning. They usually include the expressivity of RDFS, whose class and property subsumptions can be seen as single-premise Datalog rules with, respectively, unary and binary predicates, and whose remaining axioms are also definable by rules. Such ontologies often extend RDFS to the description logic DL-Lite [11] (as in OWL 2 QL [12]), including subsumption axioms that correspond to (head-)existential rules. RBDA rulebases are also being used for rewriting, e.g. via Description Logic Programs [13] (as in OWL 2 RL [12], definable in RIF-Core [14]), Datalog± [10], and Disjunctive Datalog [15]. We will refer to the store containing ontologies or rulebases for rewriting as the KB.
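For instance (again with hypothetical predicates): an RDFS-style class subsumption, a property subsumption, and a DL-Lite-style axiom compiling to a head-existential rule.

    % Class subsumption (unary predicates): every pure stand is a forest stand
    forestStand(X) :- pureStand(X).

    % Property subsumption (binary predicates)
    growsSpecies(X, S) :- dominantSpecies(X, S).

    % Head-existential rule (Datalog[∃]), in arrow notation since the head
    % variable G is existentially quantified rather than safe:
    %   forestStand(X) -> exists G: managedAt(X, G)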
(3) KBDA data integration is centered on GAV mappings, which are safe Datalog rules for query unfolding of each global head predicate into a conjunction of local body predicates. These (heterogeneous) conjunctive queries can be further mapped to the database languages of the sources (e.g., to SQL or SPARQL). The store containing mappings for unfolding is always a rulebase.
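A GAV mapping in this style might look as follows, where the global predicate dominantSpecies and the local predicates lfi_tree_plot and wsl_species_code are hypothetical names standing in for the actual schemas:

    % GAV mapping: the global head predicate is unfolded into a conjunction
    % of predicates from the local schemas
    dominantSpecies(P, S) :- lfi_tree_plot(P, Code), wsl_species_code(Code, S).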
2.2 A Unified Architecture for KBDA
Mediator and warehouse architectures for KBDA have often been considered in
isolation. An architectural unification is achievable by using parts of the KB
disjointly for mediator-style Query Rewriting [16–18] and warehouse-style DB
Materialization [8], and using the Mappings reversely for mediator-style Query
Unfolding and warehouse-style DB Folding. The unified architecture can thus
be employed for a mediator, warehouse, and bidirectional strategy of KBDA (cf.
Fig. 1), allowing for ‘pluggable’ domain refinements (cf. Fig. 2).
The architecture shows queries (as decorated Qs) and databases (as decorated DBs) explicitly while indicating answers (via solid triangular or diamond-shaped arrow heads) implicitly. Each query Q″ᵢ targeting the local source DBᵢ abstracts from the relational/graph/... database level, but becomes grounded to, e.g., SQL/SPARQL⁵/... at the DBᵢ interface (indicated by a diamond head).

Fig. 1. From (a) mediator, (b) warehouse, and (c) bidirectional to (d) unified architecture.
In (a), the mediator strategy, an incoming query Q undergoes Query Rewriting to Q′ using (part or all of) the KB store. This Q′ then undergoes Query Unfolding through the Folding/Unfolding transformation using the Mappings store, with results Q″₁, Q″₂, ..., Q″ₙ. The Q″ᵢ are finally grounded to (SQL/SPARQL/...) queries for the original databases DBᵢ, whose answers – ultimately for Q – are returned.
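As a sketch of this pipeline, combining the hypothetical rules from Section 2.1: a query Q is first rewritten with a KB rule and then unfolded with a GAV mapping.

    % Q: which plots grow beech?
    q(P) :- growsSpecies(P, beech).

    % Query Rewriting with the KB rule growsSpecies(X,S) :- dominantSpecies(X,S)
    % adds the rewriting Q′:
    q(P) :- dominantSpecies(P, beech).

    % Query Unfolding with the GAV mapping then yields Q″₁ over a local schema:
    q(P) :- lfi_tree_plot(P, Code), wsl_species_code(Code, beech).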
In (b), the warehouse strategy, databases DB₁, DB₂, ..., DBₙ undergo database Folding through the Folding/Unfolding transformation, resulting in an integrated database DB. This DB then undergoes Database Materialization using (part or all of) the KB store, with result DB′. The original query Q is then sent to this DB′, whose answers are returned.
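Continuing the sketch: Folding applies the same (hypothetical) mapping rules in the forward direction to local facts, and Materialization then applies the KB rules forward to the folded facts.

    % Folding: local facts lfi_tree_plot(p17, 421) and wsl_species_code(421, beech)
    % are folded into the global fact
    dominantSpecies(p17, beech).

    % Materialization: the KB rule for growsSpecies then adds
    growsSpecies(p17, beech).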
In (c), the bidirectional strategy, databases DB₁, DB₂, ..., DBₙ are transformed (in two steps) to a database DB′ as in the warehouse strategy except that only part of the KB store is used. Independently, an incoming query Q undergoes Query Rewriting to Q′ using a disjoint part of the KB store. This Q′ is then sent to that DB′, whose answers – ultimately for Q – are returned.⁶
The unified strategy (d) encompasses (a)-(c). This meets the needs of our ∆Forest case study, where, e.g., R scripts materializing parts of the source data correspond to the warehouse strategy while the continuing extensions to the sources and the possible addition of new sources call for the mediator strategy, as focused on in Section 3.

All strategies use the KB and the mapping store to perform (compositions of) transformations. The boundary between these stores, hence their transformations, is adjustable, both between the mediator-style transformations of Query Rewriting followed by Query Unfolding and between the warehouse-style transformations of DB Folding followed by DB Materialization. Intermediate forms can range between two normal forms. In the KB-directed normal form the KB store performs all deductions except atom-level local/global renamings, reserved to the mapping store. In the mapping-directed normal form the mapping store performs all deductions having local premises, leaving only purely global deductions to the KB store.
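The adjustable boundary can be illustrated as follows, assuming a hypothetical local predicate lfi_dominant that directly records dominant species: in the KB-directed normal form the mapping store only renames local atoms to global ones and the KB store deduces, while in the mapping-directed normal form the composed, locally premised deduction moves into the mapping store.

    % KB-directed normal form
    dominantSpecies(P, S) :- lfi_dominant(P, S).      % mapping store: renaming only
    growsSpecies(P, S)    :- dominantSpecies(P, S).   % KB store: deduction

    % Mapping-directed normal form
    growsSpecies(P, S)    :- lfi_dominant(P, S).      % mapping store: composed rule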
2.3 KB and Mappings in Datalog+ RuleML

The RuleML language is based on a set of monotonic schema modules, each module providing the grammar of a syntactic feature that can be mixed-in to the language [19]. A language defined by a set of modules is always a superlanguage of a language defined by a subset of those modules, and the resulting structure

⁵ Unlike the relational SQL for local data, the graph-oriented SPARQL plays an ambiguous role as a query language for local-schema data and global-schema knowledge.
⁶ The two directions of the bidirectional strategy thus enable parallel processing with DB′ acting as the synchronization point.

Citations
Book Chapter
06 Jul 2016
TL;DR: The RuleML knowledge-interoperation hub provides for syntactic/semantic representation and internal/external transformation of formal knowledge, including the configuration of textbook and enriched Relax NG syntax and the association of syntax with semantics.
Abstract: The RuleML knowledge-interoperation hub provides for syntactic/semantic representation and internal/external transformation of formal knowledge. The representation system permits the configuration of textbook and enriched Relax NG syntax as well as the association of syntax with semantics. The transformation tool suite includes serialized formatters (normalizers and compactifiers), polarized parsers and generators (the RuleML↔POSL tool and the RuleML→PSOA/PS generator and PSOA/PS→AST parser), as well as importers and exporters (the importer from Dexlog to Naf Datalog RuleML and the exporter from FOL RuleML languages to TPTP). An N3-PSOA-Flora knowledge-interoperation use case is introduced for illustration.

7 citations


Cites background from "A Datalog+ RuleML 1.01 Architecture..."

  • ...Similarly, ontological subsumption axioms and rule-based mappings can be combined for uniform Rule-Based Data Access [3]....


Proceedings Article
01 Jan 2014
TL;DR: A Datalog rulebase, GeospatialRules, is presented, built on top of the Region Connection Calculus, which allows the use of RuleML-compatible implementations for geospatial reasoning.
Abstract: Representing and reasoning with qualitative geospatial relationships among regions is an important task in many geospatial applications. In this paper, we present a Datalog rulebase, GeospatialRules, which can be used for this task. The rulebase is built on top of the Region Connection Calculus (RCC). It includes rules, facts, and queries. The rules in GeospatialRules consist of a set of rules that are equivalent to the Datalog fragment of the RCC axioms and additional rules expressing parts of the RCC knowledge that are not captured by the Datalog fragment. The XML version of the rulebase complies with the Deliberation RuleML 1.01 standard, so that it allows the use of RuleML-compatible implementations for geospatial reasoning.

2 citations

Proceedings Article
01 Jan 2014
TL;DR: This paper presents a complete rulebase, based on the UServ Product Derby Case Study, ready to be deployed into rule systems allowing RuleML serialization, and raises two issues that the next version of Deliberation RuleML may address.
Abstract: After four decades of development, it is now well accepted that rules are one of the most suitable decisioning mechanisms for change-intensive applications. Moreover, as web applications are expanding everywhere, rule languages make it possible to publish, transmit, deploy, and execute rules on the Web. This paper describes some principles and practices of developing rulebases from computationally independent business models using Deliberation RuleML version 1.01. We present a complete rulebase, based on the UServ Product Derby Case Study, ready to be deployed into rule systems allowing RuleML serialization. We also raise two issues that the next version of Deliberation RuleML may address: i) reference to external domain vocabularies used by rules and ii) distinction between binary object properties, i.e. predicates/relations (whose second argument is a RuleML Ind element), and datatype properties, i.e. predicates/relations (whose second argument is a RuleML Data element), as in OWL.

1 citation

References
Journal Article
TL;DR: It is shown that, for the DLs of the DL-Lite family, the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is LogSpace in the size of the ABox, which is the first result of polynomial-time data complexity for query answering over DL knowledge bases.
Abstract: We propose a new family of description logics (DLs), called DL-Lite, specifically tailored to capture basic ontology languages, while keeping low complexity of reasoning. Reasoning here means not only computing subsumption between concepts and checking satisfiability of the whole knowledge base, but also answering complex queries (in particular, unions of conjunctive queries) over the instance level (ABox) of the DL knowledge base. We show that, for the DLs of the DL-Lite family, the usual DL reasoning tasks are polynomial in the size of the TBox, and query answering is LogSpace in the size of the ABox (i.e., in data complexity). To the best of our knowledge, this is the first result of polynomial-time data complexity for query answering over DL knowledge bases. Notably our logics allow for a separation between TBox and ABox reasoning during query evaluation: the part of the process requiring TBox reasoning is independent of the ABox, and the part of the process requiring access to the ABox can be carried out by an SQL engine, thus taking advantage of the query optimization strategies provided by current database management systems. Since even slight extensions to the logics of the DL-Lite family make query answering at least NLogSpace in data complexity, thus ruling out the possibility of using off-the-shelf relational technology for query processing, we can conclude that the logics of the DL-Lite family are the maximal DLs supporting efficient query answering over large amounts of instances.

1,482 citations


"A Datalog+ RuleML 1.01 Architecture..." refers background in this paper

  • ...Such ontologies often extend RDFS to the description logic DL-Lite [11] (as in OWL 2 QL [12]), including subsumption axioms that correspond to (head-)existential rules....


Proceedings Article
20 May 2003
TL;DR: It is shown how to interoperate, semantically and inferentially, between the leading Semantic Web approaches to rules and ontologies and define a new intermediate knowledge representation contained within this intersection: Description Logic Programs (DLP), and the closely related Description Horn Logic (DHL).
Abstract: We show how to interoperate, semantically and inferentially, between the leading Semantic Web approaches to rules (RuleML Logic Programs) and ontologies (OWL/DAML+OIL Description Logic) via analyzing their expressive intersection. To do so, we define a new intermediate knowledge representation (KR) contained within this intersection: Description Logic Programs (DLP), and the closely related Description Horn Logic (DHL) which is an expressive fragment of first-order logic (FOL). DLP provides a significant degree of expressiveness, substantially greater than the RDF-Schema fragment of Description Logic. We show how to perform DLP-fusion: the bidirectional translation of premises and inferences (including typical kinds of queries) from the DLP fragment of DL to LP, and vice versa from the DLP fragment of LP to DL. In particular, this translation enables one to "build rules on top of ontologies": it enables the rule KR to have access to DL ontological definitions for vocabulary primitives (e.g., predicates and individual constants) used by the rules. Conversely, the DLP-fusion technique likewise enables one to "build ontologies on top of rules": it enables ontological definitions to be supplemented by rules, or imported into DL from rules. It also enables available efficient LP inferencing algorithms/implementations to be exploited for reasoning over large-scale DL ontologies.

939 citations


"A Datalog+ RuleML 1.01 Architecture..." refers methods in this paper

  • ...RBDA rulebases are also being used for rewriting, e.g. via Description Logic Programs [13] (as in OWL 2 RL [12], definable in RIFCore [14]), Datalog± [10], and Disjunctive Datalog [15]....


01 Jan 2009
TL;DR: The OWL 2 Web Ontology Language is an ontology language for the Semantic Web with formally defined meaning; it provides classes, properties, individuals, and data values, which are stored as Semantic Web documents.
Abstract: The OWL 2 Web Ontology Language, informally OWL 2, is an ontology language for the Semantic Web with formally defined meaning. OWL 2 ontologies provide classes, properties, individuals, and data values, and are stored as Semantic Web documents. OWL 2 ontologies can be used along with information written in RDF, and OWL 2 ontologies themselves are primarily exchanged as RDF documents. The OWL 2 Document Overview describes the overall state of OWL 2, and should be read before other OWL 2 documents. This document provides a specification of several profiles of OWL 2 which can be more simply and/or efficiently implemented. In logic, profiles are often called fragments. Most profiles are defined by placing restrictions on the structure of OWL 2 ontologies. These restrictions have been specified by modifying the productions of the functional-style syntax.

869 citations


"A Datalog+ RuleML 1.01 Architecture..." refers background or methods in this paper

  • ...via Description Logic Programs [13] (as in OWL 2 RL [12], definable in RIFCore [14]), Datalog± [10], and Disjunctive Datalog [15]....


  • ...Such ontologies often extend RDFS to the description logic DL-Lite [11] (as in OWL 2 QL [12]), including subsumption axioms that correspond to (head-)existential rules....



Book
03 Jan 1992
TL;DR: Presents a collection of advanced transaction models, including a transaction manager development facility for non-standard database systems, concepts and applications of multilevel and open nested transactions, and the use of polytransactions to manage interdependent data.
Abstract (chapter list): 1. Transaction Management in Database Systems; 2. Introduction to Advanced Transaction Models; 3. A Cooperative Transaction Model for Design Databases; 4. A Flexible Framework for Transaction Management in Engineering Environments; 5. A Transaction Model for Active Distributed Object Systems; 6. A Transaction Model for an Open Publication Environment; 7. The ConTract Model; 8. Dynamic Restructuring of Transactions; 9. Multidatabase Transaction and Query Processing in Logic; 10. ACTA: The Saga Continues; 11. A Transaction Manager Development Facility for Non-Standard Database Systems; 12. The S-Transaction Model; 13. Concepts and Applications of Multilevel Transactions and Open Nested Transactions; 14. Using Polytransactions to Manage Interdependent Data

713 citations

Frequently Asked Questions (2)
Q1. What have the authors contributed in "A Datalog+ RuleML 1.01 Architecture for Rule-Based Data Access in Ecosystem Research"?

This paper proposes an RBDA architecture layered on Datalog+ RuleML, and uses it for the ∆Forest case study on the susceptibility of forests to climate change. The KB includes background knowledge about the study sites and design, e.g., abundant tree species groups, pure tree stands, and statistical independence among forest plots. The RBDA/∆Forest case study has shown the usefulness of their approach to Ecosystem Research for global schema design and demonstrated how automated reasoning can become key to knowledge modeling and consolidation for complex statistical data analysis.

Q2. What future works have been suggested in "A Datalog+ RuleML 1.01 Architecture for Rule-Based Data Access in Ecosystem Research"?

In the context of the open RBDA/∆Forest collaboration between RuleML and WSL, various avenues for future work are being explored, described as part of the RBDA wiki page.