
Showing papers in "Description Logics in 2008"


Journal Article
TL;DR: It is proved that reasoning in SRIQ and SROIQ is exponentially harder than in SHIQ and SHOIQ, showing that the exponential blowup caused by complex role inclusion axioms in the known tableau-based procedures for these DLs is essentially unavoidable.
Abstract: We identify the complexity of (finite model) reasoning in the DL SROIQ to be N2ExpTime-complete. We also prove that (finite model) reasoning in the DL SR (a fragment of SROIQ without nominals, number restrictions, and inverse roles) is 2ExpTime-hard.

1 From SHIQ to SROIQ

In this paper we study the complexity of reasoning in the DL SROIQ, the logic chosen as a candidate for OWL 2. SROIQ was introduced in [1] as an extension of SRIQ, which was itself introduced in [2] as an extension of RIQ [3]. These papers present tableau-based procedures for the respective DLs and prove their soundness, completeness and termination. In contrast to the sub-languages of SHOIQ, whose computational complexities are by now well understood [4], almost nothing was known until now about the complexity of SROIQ, SRIQ and RIQ beyond the hardness results inherited from their sub-languages: SROIQ is NExpTime-hard as an extension of SHOIQ, and SRIQ and RIQ are ExpTime-hard as extensions of SHIQ. The difficulty is caused by complex role inclusion axioms R1 ∘ ··· ∘ Rn ⊑ R, which cause an exponential blowup in the tableau procedure. In this paper we demonstrate that this blowup is essentially unavoidable by proving that reasoning in SRIQ and SROIQ is exponentially harder than in SHIQ and SHOIQ.

We assume that the reader is familiar with the DL SHOIQ [5]. A SHOIQ signature is a tuple Σ = (CΣ, RΣ, IΣ) consisting of the sets of atomic concepts CΣ, atomic roles RΣ and individuals IΣ. A SHOIQ interpretation is a pair I = (∆I, ·I) where ∆I is a non-empty set called the domain of I, and ·I is the interpretation function, which assigns to every A ∈ CΣ a subset AI ⊆ ∆I, to every r ∈ RΣ a relation rI ⊆ ∆I × ∆I, and to every a ∈ IΣ an element aI ∈ ∆I. The interpretation I is finite iff ∆I is finite. A role is either some r ∈ RΣ or an inverse role r−. For each r ∈ RΣ, we set Inv(r) = r− and Inv(r−) = r.
A SHOIQ RBox is a finite set R of role inclusion axioms (RIAs) R1 ⊑ R, transitivity axioms Tra(R), and functionality axioms Fun(R), where R1 and R are roles. Let ⊑*R be the reflexive transitive closure of the relation ⊑R on roles defined by R1 ⊑R R iff R1 ⊑ R ∈ R or Inv(R1) ⊑ Inv(R) ∈ R. A role S is called simple (w.r.t. R) if there is no role R such that R ⊑*R S and either Tra(R) ∈ R or Tra(Inv(R)) ∈ R. Given an RBox R, the set of SHOIQ concepts is the smallest set containing ⊤, ⊥, A, {a}, ¬C, C ⊓ D, C ⊔ D, ∃R.C, ∀R.C, ≥n S.C, and ≤n S.C, where A is an atomic concept, a an individual, C and D concepts, R a role, S a simple role w.r.t. R, and n a non-negative integer. A SHOIQ TBox is a finite set T of generalized concept inclusion axioms (GCIs) C ⊑ D where C and D are concepts. We write C ≡ D as an abbreviation for C ⊑ D and D ⊑ C. A SHOIQ ABox is a finite set consisting of concept assertions C(a) and role assertions R(a, b) where a and b are individuals from IΣ. A SHOIQ ontology is a triple O = (R, T, A), where R is a SHOIQ RBox, T a SHOIQ TBox, and A a SHOIQ ABox. The interpretation I is extended to complex roles, complex concepts, axioms, and assertions in the usual way [5]. I is a model of a SHOIQ ontology O if every axiom and assertion in O is satisfied in I. A concept C is (finitely) satisfiable w.r.t. O if CI ≠ ∅ for some (finite) model I of O. It is well known [6, 4] that the problem of concept satisfiability for SHOIQ is NExpTime-complete. (Footnote: unless 2ExpTime = NExpTime, in which case only SROIQ is harder than SHOIQ. Footnote: OWL 2, a.k.a. OWL 1.1: http://www.webont.org/owl/1.1.) SROIQ [1] extends SHOIQ in several ways. (1) It provides the universal role U, which is interpreted as the total relation: UI = ∆I × ∆I. (2) It allows negative role assertions ¬R(a, b). (3) It introduces a concept constructor ∃S.Self, interpreted as {x ∈ ∆I | ⟨x, x⟩ ∈ SI}, where S is a simple role.
(4) It allows new role axioms Sym(R), Ref(R), Asy(S), Irr(S), and Disj(S1, S2), where S, S1, S2 are simple roles; these restrict RI to be symmetric or reflexive, SI to be asymmetric or irreflexive, or S1I and S2I to be disjoint. (5) Finally, it allows complex role inclusion axioms of the form R1 ∘ ··· ∘ Rn ⊑ R, which require that R1I ∘ ··· ∘ RnI ⊆ RI, where ∘ is the usual composition of binary relations. The notion of simple roles is adjusted to ensure that no simple role can be implied by a role composition. SRIQ [2] is the fragment of SROIQ without nominals.

The constructors (1)–(4) do not introduce major difficulties in SROIQ: the existing tableau procedure for SHOIQ [5] can be adapted relatively easily to support them. Dealing with complex role inclusion axioms in DLs turned out to be more difficult. First, with the exception of the DL EL [7], the unrestricted use of complex RIAs easily leads to undecidability of modal and description logics [8, 3]. Therefore, special syntactic restrictions have been introduced in SROIQ to regain decidability. A regular order on roles is an irreflexive transitive binary relation ≺ on roles such that R1 ≺ R2 iff Inv(R1) ≺ R2. A RIA R1 ∘ ··· ∘ Rn ⊑ R is ≺-regular if it does not contain the universal role U and either: (i) n = 2 and R1 = R2 = R, or (ii) n = 1 and R1 = Inv(R), or (iii) Ri ≺ R for 1 ≤ i ≤ n, or (iv) R1 = R and Ri ≺ R for 1 < i ≤ n, or (v) Rn = R and Ri ≺ R for 1 ≤ i < n.

Example 1. Consider the complex RIA (1) below. This RIA is not ≺-regular regardless of the choice of ordering ≺. Indeed, (1) does not satisfy (i)–(ii) since n = 3, and does not satisfy (iii)–(v) since R2 = v cannot satisfy v ≺ v (≺ is irreflexive).

r ∘ v ∘ r ⊑ v (1)
vi ∘ vi ⊑ vi+1, 0 ≤ i < n (2)

As an example of ≺-regular complex RIAs, consider axioms (2) over the atomic roles v0, ..., vn. It is easy to see that these axioms satisfy condition (iii) of ≺-regularity for every ordering ≺ such that vi ≺ vj for all 0 ≤ i < j ≤ n.
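Conditions (i)–(v) are purely syntactic and can be checked mechanically. The following Python fragment is our own illustrative sketch (the string encoding of roles, with "r-" for Inv(r), and the `prec` predicate are assumptions of the sketch, not the paper's notation):

```python
def inv(r):
    # Represent Inv(r) as "r-"; Inv(Inv(r)) = r.
    return r[:-1] if r.endswith("-") else r + "-"

def is_regular_ria(lhs, r, prec):
    """Check ≺-regularity of lhs[0] ∘ ... ∘ lhs[n-1] ⊑ r.
    `prec(a, b)` decides a ≺ b; we assume it already satisfies
    R1 ≺ R2 iff Inv(R1) ≺ R2, as required of a regular order."""
    if "U" in lhs or r == "U":
        return False                         # universal role may not occur
    n = len(lhs)
    if n == 2 and lhs == [r, r]:             # (i)  R ∘ R ⊑ R
        return True
    if n == 1 and lhs == [inv(r)]:           # (ii) Inv(R) ⊑ R
        return True
    if all(prec(ri, r) for ri in lhs):       # (iii) all Ri ≺ R
        return True
    if lhs[0] == r and all(prec(ri, r) for ri in lhs[1:]):    # (iv)
        return True
    if lhs[-1] == r and all(prec(ri, r) for ri in lhs[:-1]):  # (v)
        return True
    return False

# Axiom (2): vi ∘ vi ⊑ v(i+1) is ≺-regular when vi ≺ vj for i < j ...
idx = lambda name: int(name[1:])
prec = lambda a, b: a[0] == b[0] == "v" and idx(a) < idx(b)
print(is_regular_ria(["v0", "v0"], "v1", prec))   # True, by (iii)
# ... while axiom (1): r ∘ v ∘ r ⊑ v fails every condition (v ⊀ v).
print(is_regular_ria(["r", "v", "r"], "v", prec))  # False
```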
Although Example 1 does not demonstrate the use of conditions (i), (ii), (iv) and (v) for ≺-regularity of RIAs, as will be shown shortly, axioms that satisfy just condition (iii) already make reasoning in SROIQ hard. The syntactic restrictions on the set of RIAs of an RBox R ensure that R is regular in the following sense. Given a role R, let LR(R) be the language consisting of the words over roles defined by: LR(R) := {R1 R2 ... Rn | R |= (R1 ∘ ··· ∘ Rn ⊑ R)}. It has been shown in [3] that if the RIAs of R are ≺-regular for some ordering ≺, then for every role R the language LR(R) is regular. The tableau procedure for SROIQ presented in [1] uses the non-deterministic finite automata (NFA) corresponding to LR(R) to ensure that only finitely many states are produced by the tableau expansion rules. Unfortunately, the NFA for LR(R) can be exponentially large in the size of R, which results in an exponential blowup in the number of states produced in the worst case by the procedure for SROIQ compared to the procedure for SHOIQ. It was conjectured in [1] that without further restrictions on RIAs such a blowup is unavoidable. In Example 2 we demonstrate that minimal automata for regular RBoxes can indeed be exponentially large.

Example 2 (Example 1 continued). Let R be the RBox consisting of the single axiom (1). It is easy to see that LR(v) = {r^i v r^i | i ≥ 0}, where r^i denotes the word consisting of i letters r. The language LR(v) is non-regular, which can be shown, e.g., using the pumping lemma for regular languages (see, e.g., [9]). On the other hand, the RBox R consisting of the axioms (2) gives regular languages: by induction on i it is easy to show that each LR(vi) consists of finitely many words and is hence regular. Moreover, by induction on i it is easy to show that v0^j ∈ LR(vi) iff j = 2^i. Let BR(vi) be an NFA for LR(vi) and q0, ..., q_{2^i} a run in BR(vi) accepting v0^{2^i}.
Then all states in this run are distinct, since otherwise there would be a cycle, which would mean that BR(vi) accepts infinitely many words. Hence BR(vi) has at least 2^i + 1 states.

2 The Lower Complexity Bounds

In this section we prove that reasoning in SROIF (a fragment of SROIQ that includes functional roles instead of number restrictions) is N2ExpTime-hard. The proof is by reduction from the doubly exponential domino tiling problem. We also demonstrate that reasoning in SR (a fragment of SRIQ that uses neither counting nor inverse roles) is 2ExpTime-hard, by reduction from the word problem for an exponential-space alternating Turing machine. The main idea of our reductions is to enforce double-exponentially long chains using SR axioms. Single-exponentially long chains can be enforced using a well-known "integer counting" technique [6]. A counter cI(x) is an integer between 0 and 2^n − 1 that is assigned to an element x of the interpretation I using n atomic concepts B1, ..., Bn as follows: the i-th bit of cI(x) is equal to 1 if and only if x ∈ BiI. It is easy to see that axioms (3)–(7) induce an exponentially long r-chain by initializing the counter and incrementing it over the role r.

Z ≡ ¬B1 ⊓ ··· ⊓ ¬Bn (3)
E ≡ B1 ⊓ ··· ⊓ Bn (4)
¬E ≡ ∃r.⊤ (5)
⊤ ≡ (B1 ⊓ ∀r.¬B1) ⊔ (¬B1 ⊓ ∀r.B1) (6)
Bi−1 ⊓ ∀r.¬Bi−1 ≡ (Bi ⊓ ∀r.¬Bi) ⊔ (¬Bi ⊓ ∀r.Bi), 1 < i ≤ n (7)

Axiom (3) is responsible for initializing the counter to zero using the atomic concept Z. Axiom (4) can be used to detect whether the counter has reached the final value 2^n − 1, by checking whether E holds. Thus, using axiom (5), we can express that an element has an r-successor if and only if its counter has not reached the final value. Axioms (6) and (7) express how the counter is incremented over r: axiom (6) expresses that the lowest bit of the counter is always flipped, and axioms (7) express that any other bit of the counter is flipped if and only if the lower bit changes from 1 to 0. Lemma 1.
Let O be an ontology containing axioms (3)–(7). Then for every model I = (∆I, ·I) of O and every x ∈ ZI there exist xi ∈ ∆I with 0 ≤ i < 2^n such that x = x0, ⟨xi−1, xi⟩ ∈ rI for every i with 1 ≤ i < 2^n, and cI(xi) = i. Now we use similar ideas to enforce double-exponentially long chains in the model.
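The behaviour of the counting axioms (3)–(7) can be simulated directly. The sketch below is our own encoding of the bits B1, ..., Bn as a boolean list (least significant bit first); it applies the flip rules of axioms (6) and (7) and reproduces an r-chain of length 2^n:

```python
# Simulation of the "integer counting" axioms (3)-(7):
# the lowest bit always flips along r (axiom (6)); bit i flips
# exactly when bit i-1 changes from 1 to 0 (axioms (7)).

def successor(bits):
    """bits[i] is the i-th (least significant first) counter bit."""
    nxt = list(bits)
    nxt[0] = not nxt[0]                      # axiom (6): lowest bit flips
    for i in range(1, len(bits)):
        if bits[i - 1] and not nxt[i - 1]:   # axioms (7): carry propagation
            nxt[i] = not nxt[i]
    return nxt

def value(bits):
    return sum(1 << i for i, b in enumerate(bits) if b)

n = 4
bits = [False] * n                           # axiom (3): Z starts the counter at 0
chain = [value(bits)]
while not all(bits):                         # axiom (5): r-successors until E = 2^n - 1
    bits = successor(bits)
    chain.append(value(bits))
print(chain == list(range(2 ** n)))          # True: an r-chain of length 2^n
```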

103 citations


Journal Article
TL;DR: It is argued that concept products provide practically relevant expressivity at little cost, making them a good candidate for future extensions of the DL-based ontology language OWL.
Abstract: We investigate the concept product as an expressive feature for description logics (DLs). While this construct allows us to express an arguably very common and natural type of statement, it can be simulated only by the very expressive DL SROIQ, for which no tight worst-case complexity is known. However, we show that concept products can also be added to the DLs SHOIQ and SHOI, and to the tractable DL EL++, without increasing the worst-case complexities in any of those cases. We therefore argue that concept products provide practically relevant expressivity at little cost, making them a good candidate for future extensions of the DL-based ontology language OWL.

54 citations


Journal Article
TL;DR: The more DLs are being used in applications such as the Semantic Web, biology, and the clinical sciences, the more certain expressive weaknesses are commented upon, and various combinations of DLs with nonmonotonic formalisms have been investigated so far.
Abstract: The more DLs are being used in applications such as the Semantic Web [2], biology, and the clinical sciences, the more certain expressive weaknesses are commented upon. A recurring set of these comments is due to the fact that only few DLs and even fewer DL reasoners support forms of defeasible reasoning. For example, Rector describes in [12, 16] how useful statements such as "the heart of a human is normally located on the left hand side of the body" could be for the clinical sciences, and OWL design patterns have been developed to work around the lack of such statements. Various combinations of DLs with nonmonotonic formalisms have been investigated so far. DL-MKNF, the combination of DLs with minimal knowledge and negation as failure (MKNF) [9], is introduced in [4]. DL-MKNF extends DLs with two modal operators and is considered to be a unified framework for nonmonotonic extensions of DLs since various nonmonotonic logics can be embedded into MKNF [9]; these include default logic [13] and autoepistemic logic [10]. The combination of DLs with default logic was introduced in [1], implemented in Pellet [7], and its translation into DL-MKNF was explained in [4]. The combination of DLs with circumscription [3] provides a powerful and flexible alternative for nonmonotonic reasoning in DLs since its entailment relation is parametrized with a set of concepts to be circumscribed. Hence we can pick different modes of defeasibility without changing our knowledge base. Decidability and complexity are known for various DLs with circumscription [3], but no calculus or implementation is known. The integration of DLs with logic programming (LP) using MKNF [11] is closely related to DL-MKNF. They differ in expressive power since LP rules can make use of arbitrarily connected variables, yet these variables are all quantified in the same way. In contrast, DL-MKNF allows modal operators to appear in existential and universal restrictions.
An exact comparison of this relationship is part of our future work. A tableau algorithm for the combination of the basic DL ALC [15] with MKNF (ALCKNF ) has been described in [4]. As mentioned in [4], ALCKNF can capture certain kinds of defaults and integrity constraints (ICs). For example, our example default regarding the location of the heart in humans can be formalised

40 citations


Journal Article
TL;DR: In this paper, the authors lay bare the assumptions underlying different approaches for revision in DLs and propose some criteria to compare them and give their definition of a revision operator in DL and point out some open problems.
Abstract: Revision of a Description Logic-based ontology to incorporate newly received information consistently is an important problem for the lifecycle of ontologies. Many approaches in the theory of belief revision have been applied to deal with this problem and most of them focus on the postulates or the logical properties of a revision operator in Description Logics (DLs). However, there is no coherent view on how to characterize a revision operator in DLs. In this paper, we lay bare the assumptions underlying different approaches for revision in DLs and propose some criteria to compare them. Based on the analysis, we give our definition of a revision operator in DLs and point out some open problems.

38 citations


Journal Article
TL;DR: This paper further studies how to extend the four-valued semantics to more expressive description logics, such as SHIQ, and to more tractable description logics including EL++, DL-Lite, and Horn-DLs.
Abstract: Four-valued description logic has been proposed to reason with description logic based inconsistent knowledge bases, mainly ALC. This approach has the distinct advantage that it can be implemented by invoking classical reasoners, keeping the same complexity as under classical semantics. In this paper, we further study how to extend the four-valued semantics to more expressive description logics, such as SHIQ, and to more tractable description logics including EL++, DL-Lite, and Horn-DLs. For the expressive description logics, most of our effort in defining the four-valued semantics goes into keeping the reduction from four-valued to classical semantics, as in the case of ALC; for the tractable description logics, we mainly focus on how to maintain their tractability when adopting four-valued semantics.
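The four truth values behind such semantics can be illustrated with Belnap-style pairs. The encoding below, (supports-truth, supports-falsity), is our own sketch of the standard four-valued connectives, not the paper's exact definitions:

```python
# Four truth values: true, false, Both (contradictory), Neither (unknown),
# encoded as pairs (supports-truth, supports-falsity).
T, F, B, N = (1, 0), (0, 1), (1, 1), (0, 0)

def neg(v):
    t, f = v
    return (f, t)                     # negation swaps the two supports

def conj(v, w):
    # truth-support needs both conjuncts; falsity-support needs either
    return (v[0] & w[0], v[1] | w[1])

def disj(v, w):
    return (v[0] | w[0], v[1] & w[1])

print(conj(B, N) == F)   # True: in the truth order, B meet N is f
print(neg(B) == B)       # True: negation fixes the value Both
print(disj(B, N) == T)   # True: B join N is t
```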

37 citations


Journal Article
TL;DR: In this article, a constructive version of ALC is introduced, called cALC, for which a sound and complete Hilbert axiomatisation and a Gentzen tableau calculus are given, establishing the finite model property and decidability.
Abstract: This work explores some aspects of a new and natural semantical dimension that can be accommodated within the syntax of description logics, which opens up when passing from the classical truth-value interpretation to a constructive interpretation. We argue that such a strengthened interpretation is essential to represent applications with partial information adequately and to achieve consistency under abstraction as well as robustness under refinement. We introduce a constructive version of ALC, called cALC, for which we give a sound and complete Hilbert axiomatisation and a Gentzen tableau calculus showing finite model property and decidability.

28 citations


Journal Article
TL;DR: This paper describes a technique for rewriting certain rules into DL axioms, proves that the rewriting preserves the semantics of the rule, and reports on an implementation with practical results.
Abstract: Description Logics are a family of very expressive logics, but some forms of knowledge are much more intuitive to formulate otherwise, say, as rules. Rules in DL can be dealt with in two ways: (i) use rules as they are, knowing that this leads to undecidability; or (ii) make the rules DL-safe, which restricts their semantic impact and, e.g., loses the nice "car owners are engine owners" inference. Here we offer a third possibility: we rewrite the rule, if it satisfies certain restrictions, into a set of axioms which preserves the nice inferences. In this paper, we describe the rewriting technique and prove that it really does preserve the semantics of the rule. We have implemented the rewriting algorithm and have practical results.

26 citations


Journal Article
TL;DR: In this paper, the authors propose an approach for extending a tableau-based satisfiability algorithm with an arithmetic component, yielding a hybrid concept satisfiability algorithm for the Description Logic (DL) ALCQ, which extends ALC with qualified number restrictions.
Abstract: We propose an approach for extending a tableau-based satisfiability algorithm by an arithmetic component. The result is a hybrid concept satisfiability algorithm for the Description Logic (DL) ALCQ, which extends ALC with qualified number restrictions. The hybrid approach ensures a more informed calculus which, on the one hand, adequately handles the interaction between numerical and logical restrictions of descriptions, and on the other hand, provides a very promising framework for average-case optimizations.

25 citations


Journal Article
TL;DR: This paper shows that, in SHIQ without inverse roles (and without transitive roles in the query), conjunctive query answering is only ExpTime-complete and thus not harder than satisfiability.
Abstract: We have shown recently that, in extensions of ALC that involve inverse roles, conjunctive query answering is harder than satisfiability: it is 2-ExpTime-complete in general and NExpTime-hard if queries are connected and contain at least one answer variable [9]. In this paper, we show that, in SHIQ without inverse roles (and without transitive roles in the query), conjunctive query answering is only ExpTime-complete and thus not harder than satisfiability. We also show that the NExpTime-lower bound from [9] is tight.

23 citations


Journal Article
TL;DR: This paper proposes a distributed, complete and terminating algorithm that decides satisfiability of terminologies in ALC, and shows that the resolution procedure proposed by Tammet can be distributed amongst multiple resolution solvers by assigning unique sets of literals to individual solvers.
Abstract: The use of Description Logic as the basis for Semantic Web languages has led to new requirements with respect to scalable and nonstandard reasoning. In this paper, we address the problem of scalable reasoning by proposing a distributed, complete and terminating algorithm that decides satisfiability of terminologies in ALC. The algorithm is based on recent results on applying resolution to description logics. We show that the resolution procedure proposed by Tammet can be distributed amongst multiple resolution solvers by assigning unique sets of literals to individual solvers. This result provides the basis for a highly scalable reasoning infrastructure for Description Logics.

22 citations


Journal Article
TL;DR: The invention relates to a self locking container comprising an integral tray and cover that is assembled without adhesive by tab and slot combinations provided in the walls of the container.
Abstract: The invention relates to a self locking container comprising an integral tray and cover. Both the tray and cover are assembled without adhesive by tab and slot combinations provided in the walls thereof. In addition, the tab and slot combination for securing and locking the cover portion also serves as an automatic locking device for locking the cover to the tray when the container is closed.

Journal Article
TL;DR: The module extraction problem: extract from T1 a minimal self-contained terminology T0 such that T1 and T0 imply the same dependencies between Σ-terms.
Abstract:
– The module extraction problem: given a terminology T1 and a signature Σ, extract from T1 a minimal self-contained terminology T0 such that T1 and T0 imply the same dependencies between Σ-terms.
– The logical diff problem: given a signature Σ and two versions T0 and T1 of a terminology, check whether T0 and T1 are logically different in the sense that they do not imply the same dependencies between Σ-terms.
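As a rough illustration of the first problem, here is a naive syntactic approximation in Python. It is our own sketch: the real notion of module above is semantic (defined via implied Σ-dependencies) and demands minimality, which this signature-reachability fixpoint does not guarantee; it only over-approximates a module:

```python
# Naive syntactic module extraction: keep every axiom that shares a
# term with the growing signature, add its terms, repeat to a fixpoint.
# Axioms are modeled as pairs (lhs_terms, rhs_terms) of frozensets.

def extract_module(axioms, sigma):
    sig = set(sigma)
    module = set()
    changed = True
    while changed:
        changed = False
        for ax in axioms:
            terms = ax[0] | ax[1]
            if ax not in module and (terms & sig):
                module.add(ax)
                sig |= terms
                changed = True
    return module

# Toy terminology: A ⊑ B, B ⊑ C, D ⊑ E, with signature Σ = {A}.
ax1 = (frozenset({"A"}), frozenset({"B"}))
ax2 = (frozenset({"B"}), frozenset({"C"}))
ax3 = (frozenset({"D"}), frozenset({"E"}))
mod = extract_module([ax1, ax2, ax3], {"A"})
print(len(mod), ax3 in mod)  # 2 False: the D/E axiom is irrelevant to {A}
```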

Journal Article
TL;DR: This paper presents a classification of existing algorithms and describes a new method for the possibilistic case that yields an inconsistency degree and not only a binary answer to the consistency question.
Abstract: In this paper we consider the extensions of description logics that were proposed to represent uncertain or vague knowledge, focusing on the fuzzy and possibilistic formalisms. We compare these two approaches and comment on their differences concentrating on the consistency issue of knowledge bases represented in these extended frameworks. We present a classification of existing algorithms and describe a new method for the possibilistic case that yields an inconsistency degree and not only a binary answer to the consistency question. The proposed algorithm is based on a direct extension of the tableau algorithm to the possibilistic case, for which we introduce appropriate clash and completion rule definitions.
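The possibilistic idea of an inconsistency degree can be sketched independently of the tableau details: it is the greatest weight α whose α-cut (the axioms of weight at least α) is classically inconsistent. The Python fragment below is our own illustration, with a toy propositional oracle standing in for a DL reasoner:

```python
# Inconsistency degree of a possibilistic knowledge base:
# the greatest alpha such that {phi : weight(phi) >= alpha} is
# classically inconsistent (0.0 if the whole KB is consistent).

def inconsistency_degree(weighted_axioms, consistent):
    # Cuts shrink as alpha grows, so scanning weights in descending
    # order and stopping at the first inconsistent cut is correct.
    for alpha in sorted({w for _, w in weighted_axioms}, reverse=True):
        cut = [ax for ax, w in weighted_axioms if w >= alpha]
        if not consistent(cut):
            return alpha
    return 0.0

# Toy stand-in for a classical reasoner: inconsistent iff p and not-p co-occur.
def consistent(axioms):
    return not ({"p", "¬p"} <= set(axioms))

kb = [("p", 0.9), ("¬p", 0.6), ("q", 0.3)]
print(inconsistency_degree(kb, consistent))  # 0.6: the 0.6-cut {p, ¬p} clashes
```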

Journal Article
TL;DR: This paper presents an approach to encode some state-of-the-art absorption techniques into a state space planner, aiming to achieve a better solution to absorb more general axioms into an unfoldable TBox.
Abstract: Absorptions are generally employed in Description Logic (DL) reasoners in a uniform way, regardless of the structure of an input knowledge base. In this paper we present an approach to encode some state-of-the-art absorption techniques into a state space planner, aiming to achieve a better solution. The planner applies appropriate operators to general axioms and produces a minimized-cost solution that automatically organizes these absorptions into a sequence that facilitates DL reasoning. Compared to predetermined or fixed applications of established absorptions, such a solution is more flexible and more likely to absorb general axioms into an unfoldable TBox.

Journal Article
TL;DR: This work presents an implementation of a previously proposed system combining probabilistic knowledge with description logic, together with some modeling observations.
Abstract: Representing probabilistic knowledge in combination with a description logic has been a research topic for quite some time. In [1] one such combination is introduced. We present our implementation of the proposed system and some modeling observations we made.

Journal Article
TL;DR: A parallel approach for TBox classification is proposed in response to emerging TBoxes from the semantic web community consisting of up to hundreds of thousands of named concepts and the increasing availability of multi-processor and multi- or many-core computers.
Abstract: One of the most frequently used inference services of description logic reasoners is the classification of TBoxes, with a subsumption hierarchy of all named concepts as the result. In response to (i) emerging TBoxes from the semantic web community consisting of up to hundreds of thousands of named concepts and (ii) the increasing availability of multi-processor and multi- or many-core computers, we propose a parallel approach for TBox classification. First experiments on parallelizing well-known algorithms for TBox classification were conducted to study the trade-off between incompleteness and speed improvement.
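The coarse-grained part of such a parallel approach can be sketched with a pool of workers farming out independent subsumption tests. The toy "TBox" and the `subsumes` test below are our own stand-ins, not the authors' algorithms:

```python
# Parallelising pairwise subsumption tests for TBox classification.
from concurrent.futures import ThreadPoolExecutor
from itertools import permutations

# Toy "TBox": each concept is a feature set; C ⊑ D iff D's features ⊆ C's.
tbox = {
    "Dog": {"animal", "pet", "canine"},
    "Animal": {"animal"},
    "Pet": {"pet", "animal"},
}

def subsumes(pair):
    c, d = pair
    return (c, d, tbox[d] <= tbox[c])  # does C ⊑ D hold?

pairs = list(permutations(tbox, 2))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(subsumes, pairs))   # tests run concurrently

hierarchy = sorted(f"{c} ⊑ {d}" for c, d, holds in results if holds)
print(hierarchy)  # ['Dog ⊑ Animal', 'Dog ⊑ Pet', 'Pet ⊑ Animal']
```

In CPython, a process pool (or a reasoner releasing the GIL) would be needed for real speed-ups; a thread pool keeps the sketch short.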

Journal Article
TL;DR: The paper summarizes the experiences with optimization techniques for well-known tableau-based description logic reasoning systems, and analyzes the performance of very simple techniques to cope with Tboxes whose bulk axioms just use a less expressive language such as ELH, whereas some small parts of the Tbox use a language as expressive as SHIQ.
Abstract: The paper summarizes our experiences with optimization techniques for well-known tableau-based description logic reasoning systems, and analyzes the performance of very simple techniques to cope with Tboxes whose bulk axioms just use a less expressive language such as ELH, whereas some small parts of the Tbox use a language as expressive as SHIQ. The techniques analyzed in this paper have been tested with RacerPro, but they can be embedded seamlessly into other tableau-based reasoners such as FaCT++ or Pellet.

Journal Article
TL;DR: The revision method is a reformulation of the kernel revision operator in belief revision in terms of MIPS (minimal incoherence-preserving sub-terminologies), and it is shown that it satisfies some desirable logical properties.
Abstract: In this paper, we propose a new method for revising terminologies in description logic-based ontologies. Our revision method is a reformulation of the kernel revision operator in belief revision. We first define our revision operator for terminologies in terms of MIPS (minimal incoherence-preserving sub-terminologies), and we show that it satisfies some desirable logical properties. Second, two concrete algorithms are developed to implement the revision operator.

Journal Article
TL;DR: This work defines a framework for ontology extraction that integrates and enhances database reverse engineering techniques, giving a faithful and higher level specification of the knowledge present in the given database.
Abstract: The benefits of using an ontology over relational data sources to mediate access to these data are widely accepted and well understood. Such ontologies provide a conceptual view of the application domain, and can therefore be conveniently employed for navigational (and reasoning) purposes when accessing the data [1]. To date, however, the task of wrapping relational data sources by means of an ontology is mainly done manually and is thus a time-consuming and expensive process. In this work we concentrate on techniques towards automatic support for ontology design in the scenario where the resulting ontology is used to access the data residing at the sources. Specifically, within this research area we identify two key tasks, which we discuss next. Due to the wide use of relational databases in information management, their structure contains considerable information about the domain of interest (e.g., in the scenario of enterprise integration). Therefore, when an ontology about the same domain is being designed, it is desirable to reuse this existing information and extract a core ontology automatically from the database schema, rather than constructing it from scratch. We tackle this issue by defining a framework for ontology extraction that integrates and enhances database reverse engineering techniques (see Section 2), giving us a faithful and higher-level specification of the knowledge present in the given database. In order to fully leverage the obtained ontology for accessing the data, it is necessary to preserve the mapping between the data sources and the ontology. Our approach is to define and associate a view over the source data with each element of the ontology, which means that queries formulated over the extracted ontology can be evaluated simply by expanding the corresponding views.
However, as soon as the extracted ontology is modified, simple expansion is no longer enough and the newly added constraints and terms must be taken into account. Using an appropriate ontology language, this can be done by means of query rewriting techniques (see [2]). In most cases the extracted ontologies are rather "flat", and constitute a bare bootstrap ontology rather than a rich vocabulary enabling enhanced data access. For this reason, the task of enriching the extracted ontology is crucial in order to build a truly effective ontology-based information access system. The task of modifying a given ontology involves at least the introduction of

Journal Article
TL;DR: This work formally characterizes the semantics of these shareability notions by resorting to the temporal conceptual model ERVT and its formalization in the description logic DLRUS.
Abstract: A recurring problem in conceptual modelling and ontology development is the representation of part-whole relations, with a requirement to be able to distinguish between essential and mandatory parts. To solve this problem, we formally characterize the semantics of these shareability notions by resorting to the temporal conceptual model ERVT and its formalization in the description logic DLRUS.

Journal Article
TL;DR: A novel knot-based method yields a CQ answering algorithm that works in exponential time for ALCH and for large classes of CQs in SH, is worst-case optimal under both combined and data complexity, and reconfirms Lutz's finding that inverse roles cause an exponential jump in complexity, the problem being 2ExpTime-complete for ALCI.
Abstract: Answering conjunctive queries (CQs) has been recognized as a key task for the usage of Description Logics (DLs) in a number of applications. The problem has been studied by many authors, who developed a number of different techniques for it. We present a novel method for CQ answering based on knots, which are schematic trees of depth ≤ 1. It yields an algorithm for CQ answering that works in exponential time for ALCH and for large classes of CQs in SH. This improves over previous algorithms, which require double exponential time, and is worst-case optimal, as already satisfiability testing in ALC is ExpTime-complete. Our result reconfirms Lutz's finding that inverse roles cause an exponential jump in complexity, the problem being 2ExpTime-complete for ALCI. Under data complexity the algorithm runs in coNP, and is hence also worst-case optimal.

Journal Article
TL;DR: In professional environments, users have a good knowledge about their domain of interest as well as the documents they consult regularly and they need an Information Retrieval System (IRS) that allows them to find a precise answer to their information needs.
Abstract: In professional environments, users have a good knowledge about their domain of interest as well as the documents they consult regularly. In order to carry out their professional tasks, they need an Information Retrieval System (IRS) that allows them to find a precise answer to their information needs. Generally speaking, they know about document content that may satisfy their information needs. Thus, during the retrieval task, they try to complete the information that they have and that is insufficient. Their information needs are in this case formulated through precise queries. The qualifier "precise" denotes a query that contains: i) a very specialised terminology and ii) a complex structure. Through a precise query, a user can describe his information need using explicit semantic relationships between the descriptors of his query. He can also use boolean operators or quantification (at least, all, etc.) in order to specify the number of elements that the desired document should contain. In order to illustrate some characteristics of precise queries, we present here some query examples.

Journal Article
TL;DR: A proof-theoretic approach is introduced that yields a polynomial-time decision procedure for subsumption in EL w.r.t. hybrid TBoxes and preliminary experimental results regarding the performance of the reasoner Hyb that implements this decision procedure are presented.
Abstract: Hybrid EL-TBoxes combine general concept inclusions (GCIs), which are interpreted with descriptive semantics, with cyclic concept definitions, which are interpreted with greatest fixpoint (gfp) semantics. We introduce a proof-theoretic approach that yields a polynomial-time decision procedure for subsumption in EL w.r.t. hybrid TBoxes, and present preliminary experimental results regarding the performance of the reasoner Hyb that implements this decision procedure.
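The abstract does not spell the procedure out, but polynomial-time EL subsumption is standardly obtained by saturating completion rules over a normalized TBox. The sketch below is our own illustration of that saturation idea (axiom encoding and names are ours), not the paper's hybrid-TBox algorithm:

```python
def el_subsumers(names, axioms):
    """Saturate subsumption sets S(A) for named concepts under
    normalized EL axioms.  Axiom forms (all names are strings):
      ('sub',  A, B)        : A           [= B
      ('conj', A1, A2, B)   : A1 AND A2   [= B
      ('ex_r', A, r, B)     : A           [= EXISTS r.B
      ('ex_l', r, A, B)     : EXISTS r.A  [= B
    Returns S with B in S[A] iff A [= B is derivable."""
    S = {A: {A, 'TOP'} for A in names}   # every concept subsumes itself
    R = {}                               # role name -> derived (A, B) pairs
    changed = True
    while changed:                       # fixpoint over the completion rules
        changed = False
        for ax in axioms:
            kind = ax[0]
            if kind == 'sub':
                _, B, C = ax
                for A in names:
                    if B in S[A] and C not in S[A]:
                        S[A].add(C); changed = True
            elif kind == 'conj':
                _, B1, B2, C = ax
                for A in names:
                    if B1 in S[A] and B2 in S[A] and C not in S[A]:
                        S[A].add(C); changed = True
            elif kind == 'ex_r':
                _, B, r, C = ax
                pairs = R.setdefault(r, set())
                for A in names:
                    if B in S[A] and (A, C) not in pairs:
                        pairs.add((A, C)); changed = True
            elif kind == 'ex_l':
                _, r, B, C = ax
                for (A, D) in list(R.get(r, ())):
                    if B in S[D] and C not in S[A]:
                        S[A].add(C); changed = True
    return S
```

With the axioms A ⊑ ∃r.B, B ⊑ C, ∃r.C ⊑ D, saturation derives A ⊑ D; the loop runs polynomially many times because each round adds at least one entry to the polynomial-sized S or R.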

Journal Article
TL;DR: The presented algorithm decides knowledge base consistency in deterministic double exponential time for SHOQ⊓, but is in ExpTime if no role conjunctions occur in the input knowledge base.
Abstract: We introduce an automata-based method for deciding the consistency of SHOQ⊓ knowledge bases. The presented algorithm decides knowledge base consistency in deterministic double exponential time for SHOQ⊓, but runs in ExpTime if no role conjunctions occur in the input knowledge base. This shows that SHOQ is indeed ExpTime-complete, which was, to the best of our knowledge, always conjectured but never proved.


Journal Article
TL;DR: The authors hope this semantically-oriented visualization strategy will allow users to obtain deeper insights about the meaning of concept expressions, thereby preventing errors of design or of interpretation.
Abstract: Many visualization frameworks for ontologies in general and for concept expressions in particular are too faithful to the syntax of the languages in which those objects are represented (e.g., RDF, OWL, DL). Model outlines depart from this tradition in that they consist of diagrams characterizing the class of models of a given concept expression. We hope this semantically-oriented visualization strategy will allow users to obtain deeper insights about the meaning of such expressions, thereby preventing errors of design or of interpretation.

Journal Article
TL;DR: It is proved that subsumption between ALC concepts in prime implicate normal form can be carried out in polynomial time using a simple structural subsumption algorithm reminiscent of those used for less expressive description logics.
Abstract: In this paper, we present a normal form for concept expressions in the description logic ALC which is based on a recently introduced notion of prime implicate for the modal logic K. We show that concepts in prime implicate normal form enjoy a number of desirable properties which make prime implicate normal form interesting from the viewpoint of knowledge compilation. In particular, we prove that subsumption between ALC concepts in prime implicate normal form can be carried out in polynomial time using a simple structural subsumption algorithm reminiscent of those used for less expressive description logics. Of course, in order to take advantage of these properties, we need a way to transform concepts into equivalent concepts in prime implicate normal form. We provide a sound and complete algorithm for putting concepts into prime implicate normal form, and we investigate the space complexity of this transformation, showing that there is at most a doubly-exponential blowup in concept length. At the end of the paper, we compare prime implicate normal form to two other normal forms for ALC, discussing the relative merits of the different approaches.
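Structural subsumption algorithms compare normal forms syntactically instead of running a tableau search. To convey the flavor (for plain EL concept descriptions, not the paper's prime implicate normal form; the tuple encoding is ours), subsumption reduces to a recursive homomorphism check between description trees:

```python
def subsumed(c, d):
    """Structural check: does concept c entail concept d (c [= d)?
    A concept is a pair (set_of_atoms, {role: [subconcepts]}).
    For plain EL concept descriptions this homomorphism test is
    sound and complete; it is only an illustration of the
    structural-subsumption style, not the prime-implicate algorithm."""
    c_atoms, c_ex = c
    d_atoms, d_ex = d
    if not d_atoms <= c_atoms:           # every atom required by d must be in c
        return False
    for role, d_subs in d_ex.items():
        for ds in d_subs:
            # each EXISTS role.ds in d needs a matching successor in c
            if not any(subsumed(cs, ds) for cs in c_ex.get(role, [])):
                return False
    return True
```

For instance, Person ⊓ ∃hasChild.(Person ⊓ Happy) is subsumed by Person ⊓ ∃hasChild.Person, but not vice versa. On suitable normal forms such checks run in polynomial time, which is the property the paper establishes for prime implicate normal form.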

Journal Article
TL;DR: The complexity of executability and projection in EL and in its extension with atomic negation is investigated, and it is shown that, in general, tractability does not transfer from instance checking to executability and projection.
Abstract: Classical action formalisms form a dichotomy regarding their expressive power and computational properties: they are either based on first-order logic (FOL) and undecidable like the Situation Calculus [13], or decidable but only propositional like STRIPS [8, 7]. In [3, 11], it was proposed to integrate description logics (DLs) into action formalisms in order to increase the expressive power beyond propositional logic while retaining decidability of reasoning. In particular, ABox assertions are used for describing the initial state of the world and the pre- and post-conditions of actions, and acyclic TBoxes are used to describe background knowledge. A similar approach based on the 2-variable fragment of FOL is described in [9]. The results in [3] show that, even if expressive DLs such as ALCQIO are used in the action formalism, standard reasoning problems such as executability and projection remain decidable. The proof is by a reduction of these problems in a DL L to instance checking in the extension LO of L with nominals, and it works for all standard extensions of the propositionally closed DL ALC. A recent trend in description logic is to consider lightweight DLs that are not propositionally closed and for which standard reasoning problems such as subsumption and instance checking are tractable. In particular, the EL-family of DLs has been developed in [1, 6, 2, 4], and it has proved useful for modelling life science ontologies such as SNOMED [16] and the National Cancer Institute’s NCI thesaurus [15]. Many such ontologies are acyclic TBoxes and can thus be used in a DL-based action formalism. This paves the way to new applications such as the following: one can use ABoxes to describe patient data in the medical domain, actions to represent medical treatments, and in both cases use concepts defined in an underlying medical ontology. 
Executability and projection can then determine, e.g., whether a certain treatment is effective or has undesired side-effects. In this paper, we investigate the complexity of executability and projection in EL and in its extension with atomic negation. In both cases, we allow for negated assertions in the post-conditions of actions. Our results show that, in general, tractability does not transfer from instance checking to executability and projection. Even in EL without TBoxes, the latter problems are
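Projection asks whether an assertion is guaranteed to hold after executing a sequence of actions. In the degenerate closed-world, TBox-free case it reduces to applying add/delete sets of assertions, as in the toy sketch below (our illustration only; the DL setting quantifies over all models of an incomplete ABox, which is exactly what makes the problem hard):

```python
def project(initial, actions, goal):
    """Toy closed-world projection: does every assertion in `goal`
    hold in the state reached by applying `actions` to `initial`?
    Each action is a pair (add_set, delete_set) of assertion strings,
    mirroring negated/positive post-conditions.  Sketch only -- the
    DL formalism must check the goal in all models, not one state."""
    state = set(initial)
    for add, delete in actions:
        state = (state - set(delete)) | set(add)   # apply post-conditions
    return set(goal) <= state
```

For example, treating a patient with `({'Medicated(john)'}, {'HighBP(john)'})` makes `Medicated(john)` projectable while retracting `HighBP(john)`.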

Journal Article
TL;DR: This work reports the experiences gained by implementing and understanding the given partitioning algorithm and proposes an extension of the algorithm which allows for assertional updates without the need to repartition the whole knowledge base.
Abstract: The development of scalable reasoning systems is one of the crucial factors determining the success of Semantic Web systems. Recently, in [GH06], an approach was proposed that tackles the problem by splitting the assertional part of ontologies into several partitions. In this work we report the experience gained by implementing and understanding the given partitioning algorithm, and fix some issues that came our way. Furthermore, we propose an extension of the algorithm which allows for assertional updates without the need to repartition the whole knowledge base. Both contributions can hopefully increase the potential success of partitioning real world ontologies.
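One way to see why updates need not trigger a global repartitioning: if partitions are (roughly) connected components of the ABox graph, they can be maintained incrementally with a union-find over individuals, merging components as role assertions arrive. The sketch below is our own illustration of that idea, not the [GH06] algorithm:

```python
class ABoxPartitioner:
    """Union-find over individuals: a role assertion r(a, b) merges
    the partitions of a and b, so each partition is a connected
    component of the ABox graph.  Adding an assertion touches only
    the two affected roots, so no global repartitioning is needed.
    Illustrative sketch, not the partitioning of [GH06]."""

    def __init__(self):
        self.parent = {}                      # individual -> parent link

    def find(self, x):
        self.parent.setdefault(x, x)          # register unseen individuals
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def add_role_assertion(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra              # merge the two partitions

    def same_partition(self, a, b):
        return self.find(a) == self.find(b)
```

Concept assertions C(a) simply stay with the partition of a, and a reasoner can then be run per partition.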

Journal Article
TL;DR: An algorithm is described for computing the minimal subsets of an unfoldable ALC-terminology that preserve the unsatisfiability of a concept, along with an earlier approach that instead computes a Boolean formula, called pinpointing formula, whose minimal satisfying valuations correspond to the minimal sub-ABoxes that are inconsistent.
Abstract: Recent years have seen a boom in the creation and development of ontologies. Unfortunately, the maintenance of such ontologies is an error-prone process. On one side, it is in general unrealistic to expect a developer to be simultaneously a domain- and an ontology-expert. This leads to problems when a part of the domain is not correctly understood, or when, although correctly understood, it is translated wrongly into the ontology language. On the other side, most of the larger ontologies are developed by a group of individuals. The difference in their points of view can produce unexpected consequences. Whenever an error is identified, one would like to be able to detect the portion of the ontology responsible for it; additionally, it would also be desirable to modify the ontology as little as possible to remove the error. If, for instance, an ontology is expressed by a TBox of an expressive Description Logic (DL), an unwanted consequence could be the unsatisfiability of a certain concept term C. Given that C is indeed unsatisfiable, we can search for a minimal sub-TBox that still leads to unsatisfiability of the concept (explaining the consequence), or for a maximal sub-TBox where C is satisfiable (removing the consequence). Finding these sets by hand in large ontologies is not a viable option. Schlobach and Cornet [14] describe an algorithm for computing the minimal subsets of an unfoldable ALC-terminology that keep the unsatisfiability of a concept. The algorithm extends the known tableau-based satisfiability algorithm for ALC [15], using labels to keep track of the axioms responsible for the generation of an assertion during the execution of the algorithm. A similar approach was actually presented previously in [2], for checking consistency of ALC-ABoxes. 
The main difference between the algorithms in [14] and [2] is that the latter does not directly compute the minimal subsets that have the consequence, but rather a Boolean formula, called pinpointing formula, whose minimal satisfying valuations correspond to the minimal sub-ABoxes that are inconsistent. The ideas sketched by these algorithms have been applied to other tableau-based decision algorithms for more expressive DLs (see, e.g., [13, 12, 11]), and generalized in [3], where so-called general tableaux are extended into pinpointing algorithms that compute a formula as in [2]. This general approach was then successfully applied for explaining subsumption relations in EL [4]. The main drawback of the general approach in [3] is that it assumes that the original tableau algorithm stops after a finite number of steps without the
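Besides the glass-box tableau labelling discussed above, one minimal axiom set for a consequence can also be computed black-box, by pruning axioms while a reasoner oracle still reports the consequence. The following sketch is our own illustration of that deletion-based method (the oracle stands in for, e.g., an unsatisfiability check by a DL reasoner):

```python
def minimal_subset(axioms, has_consequence):
    """Shrink `axioms` to one minimal subset that still entails the
    unwanted consequence.  `has_consequence` must be a monotone
    oracle over axiom sets (e.g. 'concept C is unsatisfiable
    w.r.t. these axioms').  Black-box deletion method; glass-box
    pinpointing instead tracks axiom labels inside the tableau."""
    assert has_consequence(axioms), "consequence must hold initially"
    current = list(axioms)
    for ax in list(current):              # try dropping each axiom once
        trial = [a for a in current if a != ax]
        if has_consequence(trial):
            current = trial               # ax was not needed; discard it
    return current                        # minimal by monotonicity
```

Each axiom triggers one oracle call, so the cost is linear in the number of axioms times the cost of reasoning; finding all minimal subsets additionally requires a hitting-set-style exploration.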