scispace - formally typeset
Author

Renata Wassermann

Bio: Renata Wassermann is an academic researcher at the University of São Paulo. She has contributed to research on belief revision and ontology (information science), has an h-index of 15, and has co-authored 69 publications receiving 703 citations. Her previous affiliations include the Universidad Nacional del Sur and the University of Amsterdam.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors propose a framework to reason about non-ideal agents that generalizes the AGM paradigm and define a set of basic operations that change the status of beliefs and show how these operations can be used to model agents with different capacities.
Abstract: The AGM paradigm for belief revision provides a very elegant and powerful framework for reasoning about idealized agents. The paradigm assumes that the modeled agent is a perfect reasoner with infinite memory. In this paper we propose a framework to reason about non-ideal agents that generalizes the AGM paradigm. We first introduce a structure to represent an agent's belief states that distinguishes different statuses of beliefs according to whether or not they are explicitly represented, whether they are currently active, and whether they are fully accepted or provisional. Then we define a set of basic operations that change the status of beliefs and show how these operations can be used to model agents with different capacities. We also show how different operations of belief change described in the literature can be seen as special cases of our theory.

79 citations
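The belief-status structure described in the abstract above can be sketched as a small data model. This is an illustrative toy, not the paper's formalism: the class and operation names (`observe`, `accept`, `retrieve`, `forget`) are hypothetical stand-ins for the paper's basic status-changing operations.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    sentence: str
    explicit: bool = True    # explicitly represented, not merely derivable
    active: bool = False     # currently available to the agent's reasoning
    accepted: bool = False   # fully accepted vs. merely provisional

class BeliefState:
    """Toy belief-state structure; operation names are illustrative,
    not taken from the paper."""
    def __init__(self):
        self.beliefs = {}

    def observe(self, s):
        # New input enters as an active but provisional belief.
        self.beliefs[s] = Belief(s, active=True, accepted=False)

    def accept(self, s):
        # Promote a provisional belief to a fully accepted one.
        self.beliefs[s].accepted = True

    def retrieve(self, s):
        # Bring a stored belief into the active set (limited memory).
        self.beliefs[s].active = True

    def forget(self, s):
        # Deactivate without deleting: the belief stays explicit.
        self.beliefs[s].active = False

agent = BeliefState()
agent.observe("it is raining")
agent.accept("it is raining")
agent.forget("it is raining")
b = agent.beliefs["it is raining"]
print(b.explicit, b.active, b.accepted)  # True False True
```

Modeling agents with different capacities then amounts to restricting which operations are available, or bounding how many beliefs may be active at once.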

Journal ArticleDOI
TL;DR: A belief revision approach is proposed for finding and repairing inconsistencies in ontologies represented in a description logic (DL); since the usual belief revision operators cannot be applied directly to DLs, new operators are introduced that work with more general logics, including those underlying OWL-DL and OWL Lite.
Abstract: Belief Revision deals with the problem of adding new information to a knowledge base in a consistent way. Ontology Debugging, on the other hand, aims to find the axioms in a terminological knowledge base which caused the base to become inconsistent. In this article, we propose a belief revision approach in order to find and repair inconsistencies in ontologies represented in some description logic (DL). As the usual belief revision operators cannot be directly applied to DLs, we propose new operators that can be used with more general logics and show that, in particular, they can be applied to the logics underlying OWL-DL and OWL Lite.

62 citations

Proceedings Article
09 May 2010
TL;DR: This paper considers approaches to belief contraction in Horn knowledge bases, and develops two broad approaches for Horn contraction, corresponding to the two major approaches in belief change, based on Horn belief sets and Horn belief bases.
Abstract: Standard approaches to belief change assume that the underlying logic contains classical propositional logic. Recently there has been interest in investigating approaches to belief change, specifically contraction, in which the underlying logic is not as expressive as full propositional logic. In this paper we consider approaches to belief contraction in Horn knowledge bases. We develop two broad approaches for Horn contraction, corresponding to the two major approaches in belief change, based on Horn belief sets and Horn belief bases. We argue that previous approaches, which have taken Horn remainder sets as a starting point, have undesirable properties, and moreover that not all desirable Horn contraction functions are captured by these approaches. This is shown in part by examining model-theoretic considerations involving Horn contraction. For Horn belief set contraction, we develop an account based in terms of weak remainder sets. Maxichoice and partial meet Horn contraction are specified, along with a consideration of package contraction. Following this we consider Horn belief base contraction, in which the underlying knowledge base is not necessarily closed under the Horn consequence relation. Again, approaches to maxichoice and partial meet belief set contraction are developed. In all cases, constructions of the specific operators and sets of postulates are provided, and representation results are obtained. As well, we show that problems arising with earlier work are resolved by these approaches.

40 citations
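The remainder-set machinery that this paper takes as a starting point can be illustrated in the classical (non-Horn) propositional setting. A minimal brute-force sketch, assuming formulas are written as Python boolean expressions over named atoms; the function names are mine, not the paper's:

```python
from itertools import product, combinations

def entails(base, goal, atoms):
    """Truth-table entailment: every valuation satisfying all of
    `base` must also satisfy `goal`."""
    for vals in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(eval(f, {}, env) for f in base) and not eval(goal, {}, env):
            return False
    return True

def remainders(base, goal, atoms):
    """Maximal subsets of `base` that do not entail `goal`."""
    base = list(base)
    candidates = [set(c) for r in range(len(base), -1, -1)
                  for c in combinations(base, r)
                  if not entails(c, goal, atoms)]
    # Keep only the subset-maximal candidates.
    return [c for c in candidates
            if not any(c < d for d in candidates)]

def full_meet_contraction(base, goal, atoms):
    """Partial meet contraction with the 'select everything' function:
    intersect all remainders (base unchanged if goal is a tautology)."""
    rems = remainders(base, goal, atoms)
    return set.intersection(*rems) if rems else set(base)

atoms = ["p", "q"]
B = {"p", "q"}
# Contracting by "p and q": the remainders are {'p'} and {'q'},
# and intersecting them gives the empty base.
print(sorted(map(sorted, remainders(B, "p and q", atoms))))   # [['p'], ['q']]
print(full_meet_contraction(B, "p and q", atoms))             # set()
```

The exponential enumeration is only for illustration; the paper's point is that starting from Horn remainder sets in this way misses some desirable Horn contraction functions, motivating weak remainder sets instead.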

Journal ArticleDOI
TL;DR: A more appropriate notion of basic contraction, called infra contraction, is defined for the Horn case, influenced by the convexity property that holds for full propositional logic; the construction method for Horn contraction of belief sets based on infra remainder sets is shown to correspond exactly to Hansson's classical kernel contraction for belief sets, when restricted to Horn logic.
Abstract: Standard belief change assumes an underlying logic containing full classical propositional logic. However, there are good reasons for considering belief change in less expressive logics as well. In this paper we build on recent investigations by Delgrande on contraction for Horn logic. We show that the standard basic form of contraction, partial meet, is too strong in the Horn case. This result stands in contrast to Delgrande's conjecture that orderly maxichoice is the appropriate form of contraction for Horn logic. We then define a more appropriate notion of basic contraction for the Horn case, influenced by the convexity property holding for full propositional logic and which we refer to as infra contraction. The main contribution of this work is a result which shows that the construction method for Horn contraction for belief sets based on our infra remainder sets corresponds exactly to Hansson's classical kernel contraction for belief sets, when restricted to Horn logic. This result is obtained via a detour through contraction for belief bases. We prove that kernel contraction for belief bases produces precisely the same results as the belief base version of infra contraction. The use of belief bases to obtain this result provides evidence for the conjecture that Horn belief change is best viewed as a 'hybrid' version of belief set change and belief base change. One of the consequences of the link with base contraction is the provision of a representation result for Horn contraction for belief sets in which a version of the Core-retainment postulate features.

34 citations
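Hansson's kernel contraction, which the abstract above connects to infra contraction, can likewise be sketched by brute force. This toy version uses a maximally cautious incision (cutting every formula that occurs in any kernel); a real incision function may remove fewer formulas, one per kernel. Formula encoding and function names are illustrative assumptions:

```python
from itertools import product, combinations

def entails(base, goal, atoms):
    """Truth-table entailment: every valuation satisfying all of
    `base` must also satisfy `goal`."""
    for vals in product([True, False], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(eval(f, {}, env) for f in base) and not eval(goal, {}, env):
            return False
    return True

def kernels(base, goal, atoms):
    """Minimal subsets of `base` that entail `goal` (the kernel set)."""
    base = list(base)
    hits = [set(c) for r in range(len(base) + 1)
            for c in combinations(base, r)
            if entails(c, goal, atoms)]
    # Keep only the subset-minimal entailing subsets.
    return [k for k in hits if not any(h < k for h in hits)]

def kernel_contraction(base, goal, atoms):
    """Kernel contraction with a maximally cautious incision:
    discard every formula occurring in some kernel."""
    ks = kernels(base, goal, atoms)
    cut = set().union(*ks) if ks else set()
    return set(base) - cut

atoms = ["p", "q"]
B = {"p", "q", "p or q"}
# The only minimal subset entailing "q" is {'q'}, so only 'q' is cut.
print(sorted(kernel_contraction(B, "q", atoms)))  # ['p', 'p or q']
```

The result no longer entails the retracted formula, which is the success behavior the kernel construction is designed to guarantee.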


Cited by
Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood. Subsequently, very little is known especially in mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: When I started out as a newly hatched PhD student, one of the first articles I read and understood was Ray Reiter’s classic article on default logic, and I became fascinated by both default logic and, more generally, non-monotonic logics.
Abstract: When I started out as a newly hatched PhD student, back in the day, one of the first articles I read and understood (or at least thought that I understood) was Ray Reiter's classic article on default logic (Reiter, 1980). This was some years after the famous 'non-monotonic logic' issue of Artificial Intelligence in which that article appeared, but default logic was still one of the leading approaches, a tribute to the simplicity and power of the theory. As a result of reading the article, I became fascinated by both default logic and, more generally, non-monotonic logics. However, despite my fascination, these approaches never seemed terribly useful for the kinds of problem that I was supposed to be studying—problems like those in medical decision making—and so I eventually lost interest. In fact non-monotonic logics seemed to me, and to many people at the time I think, not to be terribly useful for anything. They were interesting, and clearly relevant to the long-term goals of Artificial Intelligence as a discipline, but not of any immediate practical importance. This verdict, delivered at the end of the 1980s, continued, I think, to be true for the next few years while researchers working in non-monotonic logics studied problems that to outsiders seemed to be ever more obscure. However, by the end of the 1990s, it was becoming clear, even to folk as short-sighted as I, that non-monotonic logics were getting to the point at which they could be used to solve practical problems. Knowledge in action shows quite how far these techniques have come. The reason that non-monotonic logics were invented was, of course, in order to use logic to reason about the world. Our knowledge of the world is typically incomplete, and so, in order to reason about it, one has to make assumptions about things one does not know. This, in turn, requires mechanisms for both making assumptions and then retracting them if and when they turn out not to be true.
Non-monotonic logics are intended to handle this kind of assumption making and retracting, providing a mechanism that has the clean semantics of logic, but which has a non-monotonic set of conclusions. Much of the early work on non-monotonic logics was concerned with theoretical reasoning, that is reasoning about the beliefs of an agent—what the agent believes to be true. Theoretical reasoning is the domain of all those famous examples like 'Typically birds fly. Tweety is a bird, so does Tweety fly?', and the fact that so much of non-monotonic reasoning seemed to focus on theoretical reasoning was why I lost interest in it. I became much more concerned with practical reasoning—that is reasoning about what an agent should do—and non-monotonic reasoning seemed to me to have nothing interesting to say about practical reasoning. Of course I was wrong. When one tries to formulate any kind of description of the world as the basis for planning, one immediately runs into applications of non-monotonic logics, for example in keeping track of the state of a changing world. It is this use of non-monotonic logic that is at the heart of Knowledge in action. Building on McCarthy's situation calculus, Knowledge in action constructs a theory of action that encompasses a very large part of what an agent requires to reason about the world. As Reiter says in the final chapter,

899 citations
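The 'Tweety' pattern of default conclusions being retracted when new information arrives can be shown in a few lines. This is a deliberately naive sketch, not Reiter's default logic itself; the knowledge-base layout and names are invented for illustration:

```python
def flies(kb, x):
    """Default rule: birds normally fly, unless x is known to be abnormal."""
    return x in kb["bird"] and x not in kb["abnormal"]

kb = {"bird": {"tweety"}, "abnormal": set()}
print(flies(kb, "tweety"))    # True: 'tweety flies' is assumed by default

kb["abnormal"].add("tweety")  # new information: tweety is a penguin
print(flies(kb, "tweety"))    # False: the default conclusion is retracted
```

The conclusion set shrinks as the knowledge base grows, which is exactly the non-monotonic behavior classical logic cannot exhibit.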

Journal Article
TL;DR: This work presents the first experiences in using PROB on several case studies, highlighting that PROB enables users to uncover errors that are not easily discovered by existing tools.
Abstract: We present PROB, an animation and model checking tool for the B method. PROB's animation facilities allow users to gain confidence in their specifications, and unlike the animator provided by the B-Toolkit, the user does not have to guess the right values for the operation arguments or choice variables. PROB contains a model checker and a constraint-based checker, both of which can be used to detect various errors in B specifications. We present our first experiences in using PROB on several case studies, highlighting that PROB enables users to uncover errors that are not easily discovered by existing tools.

541 citations

Proceedings Article
22 Jul 2007
TL;DR: The problem of errors in mappings is addressed by proposing a completely automatic debugging method that uses logical reasoning to discover and repair logical inconsistencies caused by erroneous mappings.
Abstract: Automatically discovering semantic relations between ontologies is an important task with respect to overcoming semantic heterogeneity on the semantic web. Existing ontology matching systems, however, often produce erroneous mappings. In this paper, we address the problem of errors in mappings by proposing a completely automatic debugging method for ontology mappings. The method uses logical reasoning to discover and repair logical inconsistencies caused by erroneous mappings. We describe the debugging method and report experiments on mappings submitted to the ontology alignment evaluation challenge that show that the proposed method actually improves mappings created by different matching systems without any human intervention.

144 citations
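The kind of logical debugging described here hinges on locating a minimal set of axioms responsible for an inconsistency. Below is a hedged sketch of the generic deletion-based shrinking technique (not the paper's actual method), with propositional satisfiability standing in for description-logic reasoning; all names are illustrative:

```python
from itertools import product

def consistent(formulas, atoms):
    """Satisfiability by truth table: some valuation makes all formulas true."""
    return any(all(eval(f, {}, dict(zip(atoms, vs))) for f in formulas)
               for vs in product([True, False], repeat=len(atoms)))

def minimal_conflict(axioms, atoms):
    """Deletion-based shrinking to one minimal inconsistent subset:
    permanently drop any axiom whose removal keeps the set inconsistent."""
    core = list(axioms)
    for ax in list(core):
        trial = [a for a in core if a != ax]
        if not consistent(trial, atoms):
            core = trial
    return core

# A toy 'mapping' conflict: the first two formulas clash, 'q' is innocent.
print(minimal_conflict(["p", "not p", "q"], ["p", "q"]))  # ['p', 'not p']
```

Repair then amounts to removing (or weakening) one axiom from each such conflict, mirroring the discover-and-repair loop the abstract describes.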

Journal ArticleDOI
TL;DR: An overall process model, synthesized from an overview of the existing models in the literature, is provided; the survey then examines the major approaches to each step of the process and concludes with future challenges for techniques addressing each particular stage of ontology evolution.
Abstract: Ontology evolution aims at maintaining an ontology up to date with respect to changes in the domain that it models or novel requirements of information systems that it enables. The recent industrial adoption of Semantic Web techniques, which rely on ontologies, has led to the increased importance of the ontology evolution research. Typical approaches to ontology evolution are designed as multiple-stage processes combining techniques from a variety of fields (e.g., natural language processing and reasoning). However, the few existing surveys on this topic lack an in-depth analysis of the various stages of the ontology evolution process. This survey extends the literature by adopting a process-centric view of ontology evolution. Accordingly, we first provide an overall process model synthesized from an overview of the existing models in the literature. Then we survey the major approaches to each of the steps in this process and conclude on future challenges for techniques aiming to solve that particular stage.

138 citations