
Showing papers on "Commonsense reasoning published in 1995"


Journal ArticleDOI
Ron Sun1
TL;DR: It is demonstrated that combining rules and similarities yields more robust reasoning models, and that many seemingly disparate patterns of commonsense reasoning are different manifestations of the same underlying process, which the integrated architecture captures and can largely reproduce.

235 citations


Journal ArticleDOI
TL;DR: A survey of the development of Qualitative Reasoning from the 80s is given, focusing on the present state-of-the-art of the mathematical formalisms and modelling techniques, and presents the principal domains of application through the applied research done in France.
Abstract: After preliminary work in economics and control theory, qualitative reasoning emerged in AI at the end of the 70s and beginning of the 80s, in the form of Naive Physics and Commonsense Reasoning. This line of work was progressively abandoned in favour of more formalised approaches to tackling modelling problems in engineering tasks. Qualitative Reasoning became a proper subfield of AI in 1984, the year when several seminal papers developed the foundations and the main concepts that remain topical today. Since then Qualitative Reasoning has considerably broadened the scope of problems addressed, investigating new tasks and new systems, such as natural systems. This paper gives a survey of the development of Qualitative Reasoning from the 80s, focusing on the present state-of-the-art of the mathematical formalisms and modelling techniques, and presents the principal domains of application through the applied research done in France.

61 citations


Journal ArticleDOI
TL;DR: This paper argues that a solution to the qualification problem should be based on a (meta) conjecture that the theory used to reason about the world contains all the necessary information, and shows that this theory adequacy conjecture can be made before the application of any of the formalisms proposed in the past, e.g., circumscription.
Abstract: One of the main problems in commonsense reasoning is the qualification problem, i.e., the fact that the number of qualifications for most general commonsense statements is virtually infinite. In this paper we argue that a solution to this problem should be based on a (meta) conjecture that the theory used to reason about the world contains all the necessary information. We also show that this theory adequacy conjecture can be made before the application of any of the formalisms proposed in the past, e.g., circumscription. Finally, we present a formalization of the proposed solution using contexts and circumscription and use it to solve McCarthy's Glasgow-London-Moscow example.
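To make the adequacy conjecture concrete, here is a toy closed-world sketch in Python: the theory is assumed to already mention every relevant qualification, so any abnormality it does not contain is treated as false, in the spirit of circumscription's minimisation. The car-starting example and all names below are illustrative, not taken from the paper.

    known_facts = {"turn_key"}
    known_abnormalities = set()      # e.g. {"potato_in_tailpipe"} would qualify (defeat) the rule

    def car_starts(facts, abnormalities):
        # The rule "turning the key starts the car" is defeated by any abnormality
        # listed in the theory; under the adequacy conjecture, unlisted ones are ignored.
        return "turn_key" in facts and not abnormalities

    print(car_starts(known_facts, known_abnormalities))       # True: no qualification is known
    print(car_starts(known_facts, {"potato_in_tailpipe"}))    # False: an explicit qualification blocks it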

47 citations


Proceedings Article
30 May 1995
TL;DR: The authors observe that the same individual agent may agree on opposite facts depending on the group she participates in, which makes a workable definition of "group logic" quite hard.
Abstract: …abstraction by which the real world may be observed and the simplicity of the resulting representation, both in domain modeling and in functional specification of the system to be built [Ke78] [So84]. An immediate consequence is that system designers and builders must be able to place perfect trust in the (now implicit) agreements on correct implementations of those primitives. Note that this is exactly what we do when we trust, e.g., the computer hardware to add and multiply accurately, or the DBMS to store facts correctly. In [ISO82/90] one may find an early description of a layered modeling architecture (the "onion model", albeit without implementation!) that could result from this, where each higher layer uses the primitives and constructions of its underlying layers. Any analytic treatment so far, however, leads to tough problems in commonsense reasoning; see for example [GO94]. We have repeatedly introduced the external observers as the necessary intelligent agents of the agreement (e.g., in the form of axioms) without which no sensible starting point exists for a real-world semantics definition. (Some mathematicians of the Platonic persuasion might disagree with this.) The problem in practice is that these agreements, while usually well-behaved locally, i.e., within one system or system component, need not be static nor even consistent at a global system level. This latter aspect becomes especially apparent when we need to connect heterogeneous autonomous systems for the purpose of interoperation. A well-known, seemingly trivial example of this problem is captured by what is sometimes called Schoenmaker's Conundrum [Scho86], in which (simplified here for brevity) one witness tells a judge only "p" but keeps to himself that "not q", while a second witness independently tells the judge only "if p then q" but also keeps to herself that "not q". Each witness is consistent, but the judge is able to derive "q", supposedly thereby hanging a hapless victim. Finally, relatively few research results are available on the methodological and formal problems that arise when groups of fallible and individually motivated agents/observers need to reach agreements on which important design decisions will be taken. Each group of observers may in fact define its own version of the real world, which, as we have seen, is not necessarily consistent with others'. It is even entirely possible that the same individual agent agrees on opposite facts depending on the group she participates in, making a workable definition of "group logic" quite hard. Some work in distributed AI, e.g., [Mo90], may however prove relevant here.
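The conundrum is easy to check mechanically. The following Python sketch, an illustrative encoding rather than anything from the paper, verifies by brute force that each witness's full belief set is consistent while the judge's pooled information entails "q".

    from itertools import product

    p           = lambda v: v["p"]
    q           = lambda v: v["q"]
    not_q       = lambda v: not v["q"]
    p_implies_q = lambda v: (not v["p"]) or v["q"]

    def assignments():
        return ({"p": a, "q": b} for a, b in product([True, False], repeat=2))

    def consistent(formulas):
        # A set of formulas is consistent if some truth assignment satisfies them all.
        return any(all(f(v) for f in formulas) for v in assignments())

    def entails(formulas, goal):
        # Entailment: every assignment satisfying the formulas also satisfies the goal.
        return all(goal(v) for v in assignments() if all(f(v) for f in formulas))

    witness1 = [p, not_q]              # tells the judge only "p"
    witness2 = [p_implies_q, not_q]    # tells the judge only "if p then q"
    judge    = [p, p_implies_q]        # what the judge actually hears

    print(consistent(witness1), consistent(witness2))   # True True: each witness is consistent
    print(entails(judge, q))                            # True: the pooled testimony yields "q"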

31 citations


Proceedings Article
Dan Roth1
20 Aug 1995
TL;DR: This work continues previous work in the Learning to Reason framework and supports the thesis that, in order to develop a computational account of commonsense reasoning, one should study the phenomena of learning and reasoning together.
Abstract: We suggest a new approach for the study of the non-monotonicity of human commonsense reasoning. The two main premises that underlie this work are that commonsense reasoning is an inductive phenomenon and that missing information in the interaction of the agent with the environment may be as informative for future interactions as observed information. This intuition is formalized and the problem of reasoning from incomplete information is presented as a problem of learning attribute functions over a generalized domain. We consider examples that illustrate various aspects of the nonmonotonic reasoning phenomena which have been used over the years as benchmarks for various formalisms and translate them into Learning to Reason problems. We demonstrate that these have concise representations over the generalized domain and prove that these representations can be learned efficiently. The framework developed suggests an operational approach to studying reasoning that is nevertheless rigorous and amenable to analysis. We show that this approach efficiently supports reasoning with incomplete information and at the same time matches our expectations of plausible patterns of reasoning in cases where other theories do not. This work continues previous work in the Learning to Reason framework and supports the thesis that in order to develop a computational account of commonsense reasoning one should study the phenomena of learning and reasoning together.

28 citations


ReportDOI
01 Sep 1995
TL;DR: This dissertation describes the formal foundations and implementation of a commonsense, mixed-initiative plan reasoning system, taking as its data human communicative and plan reasoning abilities and developing formalisms that characterize these abilities and systems that approximate them.
Abstract: This dissertation describes the formal foundations and implementation of a commonsense, mixed-initiative plan reasoning system. By "plan reasoning" I mean the complete range of cognitive tasks that people perform with plans including, for example, plan construction (planning), plan recognition, plan evaluation and comparison, and plan repair (replanning), among other things. "Mixed-initiative" means that several participants can each make contributions to the plan under development through some form of communication. "Commonsense" means that the system represents plans and their constituents at a level that is "natural" to us in the sense that they can be described and discussed in language. In addition, the reasoning that the system performs includes those conclusions that we would take to be sanctioned by common sense, including especially those conclusions that are defeasible given additional knowledge or time spent reasoning. The main theses of this dissertation are the following: (1) Any representation of plans sufficient for commonsense plan reasoning must be based on an expressive and natural representation of such underlying phenomena as time, properties, events, and actions. (2) For mixed-initiative planning, plans should be viewed as arguments that a certain course of action under certain conditions will achieve certain goals. These theses are defended by presenting, first, a representation of events and actions based on interval temporal logic and, second, a representation of plans as arguments in a formal system of defeasible reasoning that explicitly constructs arguments. These two aspects of commonsense plan reasoning are combined and implemented in the TRAINS domain plan reasoner, which is also described in detail. The emphasis in this dissertation is on breadth, taking as its data human communicative and plan reasoning abilities and developing formalisms that characterize these abilities and systems that approximate them. I therefore draw on literature from a broad range of disciplines in the development of these ideas, including: philosophy of language, linguistics and AI work on knowledge representation for the representation of events and actions, philosophical logic and AI work on nonmonotonic reasoning for representing defeasible knowledge and reasoning about it, and, of course, AI work on planning and planning recognition itself.
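As a flavour of the interval-based temporal vocabulary such a representation builds on, the sketch below computes a few of Allen's interval relations between two time intervals; it is only an illustration and does not reproduce the dissertation's logic or the TRAINS plan reasoner.

    def allen_relation(a, b):
        # a and b are (start, end) pairs with start < end; only a few of Allen's
        # thirteen relations are distinguished here, the rest collapse into "other".
        (s1, e1), (s2, e2) = a, b
        if e1 < s2:
            return "before"
        if e1 == s2:
            return "meets"
        if s1 == s2 and e1 == e2:
            return "equal"
        if s1 >= s2 and e1 <= e2:
            return "during (or starts/finishes)"
        if s1 < s2 < e1 < e2:
            return "overlaps"
        return "other"

    print(allen_relation((1, 3), (3, 6)))   # meets
    print(allen_relation((2, 4), (1, 6)))   # during (or starts/finishes)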

25 citations


Journal ArticleDOI
01 Aug 1995
TL;DR: One of the main ideas providing consistency is the interpretation of qualitative input events as elements of the partition of the Cartesian product of the input, initial-state and time sets.
Abstract: This paper deals with the issue of consistent symbolic (qualitative) representation of continuous dynamic systems. Consistency means here that the results of reasoning with the qualitative representation hold in the underlying (quantitative) dynamic system. In the formalization proposed in this paper, the quantitative structure is represented using the notion of a general dynamic system (GDS). The qualitative counterpart (QDS) is represented by a finite-state automaton structure. The two representational substructures are related through functions, called qualitative abstractions of dynamic systems. Qualitative abstractions associate inputs, states and outputs of the QDS with partitions of appropriate GDS spaces. The paper shows how to establish such consistent partitions, given a partitioning of the system's output. To represent borders of these partitions, the notion of critical hypersurfaces is introduced. One of the main ideas providing consistency is the interpretation of qualitative input events as elements of the partition of the Cartesian product of the input, initial-state and time sets. An example of a consistent qualitative/quantitative representation of a simple dynamic system, and of reasoning using such a representation, is provided.
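A minimal illustration of the abstraction step (not the paper's GDS/QDS formalism): the continuous state space of an undamped oscillator is partitioned by the signs of position and velocity, and the induced sequence of sign cells behaves like a small finite-state automaton.

    import math

    def sign(v, eps=1e-9):
        return "+" if v > eps else "-" if v < -eps else "0"

    def qualitative_state(x, dx):
        # Partition R^2 into nine cells by the signs of position and velocity.
        return (sign(x), sign(dx))

    # Quantitative system: x'' = -x, integrated with small explicit Euler steps.
    x, dx, dt = 1.0, 0.0, 0.001
    trace = []
    for _ in range(int(2 * math.pi / dt)):
        qs = qualitative_state(x, dx)
        if not trace or trace[-1] != qs:
            trace.append(qs)                 # record only changes of qualitative state
        x, dx = x + dx * dt, dx - x * dt

    print(trace)   # a short cyclic sequence of sign cells tracing the oscillator's phase portrait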

23 citations


Book ChapterDOI
Ron Sun1
01 Jan 1995
TL;DR: In this chapter, a connectionist architecture for structuring knowledge in vague and continuous domains is proposed, and it consists of an inference network with nodes representing concepts and links representing rules connecting concepts and a microfeature-based replica of the first level.
Abstract: In this chapter, a connectionist architecture for structuring knowledge in vague and continuous domains is proposed. The architecture is hybrid in terms of representation, and it consists of two levels: one is an inference network with nodes representing concepts and links representing rules connecting concepts, and the other is a microfeature-based replica of the first level. Based on the interaction between the concept nodes and microfeature nodes in the architecture, inferences are facilitated and knowledge not explicitly encoded in a system can be deduced via a mixture of similarity matching and rule application. The architecture is able to take account of many important desiderata of plausible commonsense reasoning, and produces sensible conclusions.
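A rough sketch of the two-level idea in Python: rules connect concept nodes, while a microfeature level lets a concept with no explicit rule trigger one by similarity. The concepts, features and threshold below are invented for illustration and do not reproduce the chapter's architecture.

    microfeatures = {
        "chair": {"has_legs", "has_seat", "artifact"},
        "stool": {"has_legs", "has_seat", "artifact", "no_back"},
    }
    rules = {"chair": "can_sit_on"}          # concept-level rule: chair -> can_sit_on

    def similarity(a, b):
        return len(a & b) / len(a | b)       # Jaccard overlap of microfeature sets

    def infer(concept, threshold=0.6):
        conclusions = set()
        for premise, conclusion in rules.items():
            if premise == concept:                                        # exact rule application
                conclusions.add(conclusion)
            elif similarity(microfeatures[premise], microfeatures[concept]) >= threshold:
                conclusions.add(conclusion)                               # similarity-based application
        return conclusions

    print(infer("stool"))   # {'can_sit_on'}: no explicit rule for stool, inferred via similarity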

11 citations


Book
01 May 1995
TL;DR: A number of nonmonotonic properties that enable EIR to subsume existing formalisms, such as default logic and inferential distance ordering, have been included within this reasoning technique.
Abstract: Within artificial intelligence, the need to create sophisticated, intelligent behaviour based on common-sense reasoning has long been recognized. Research has demonstrated that formalisms for dealing with common-sense reasoning require nonmonotonic capabilities where, typically, inferences based on incomplete knowledge need to be revised in light of later information which fills in some of the gaps. This text examines a reasoning technique based on multiple inheritance structures with exceptions (nonmonotonic inheritance structures). Without an adequate nonmonotonic inheritance reasoning technique, such as the exceptional inheritance reasoning (EIR) proposed in this book, inheritance networks will produce inconsistencies. A number of nonmonotonic properties that enable EIR to subsume existing formalisms, such as default logic and inferential distance ordering, have been included within this reasoning technique. This inheritance formalism has been applied to the two important domains of causal reasoning and analogical reasoning, to demonstrate the conceptual power and expressiveness of the formalism.
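A minimal sketch of defeasible inheritance with exceptions, using the classic bird/penguin example rather than the book's EIR formalism: the most specific class on the path from the individual wins.

    isa = {"tweety": "penguin", "penguin": "bird"}
    defaults = {"bird": {"flies": True}, "penguin": {"flies": False}}

    def inherit(individual, attribute):
        node = isa.get(individual)
        while node is not None:
            if attribute in defaults.get(node, {}):
                return defaults[node][attribute]   # nearest (most specific) class wins
            node = isa.get(node)
        return None

    print(inherit("tweety", "flies"))   # False: the penguin exception defeats the bird default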

10 citations


01 May 1995
TL;DR: A new approach to reasoning about actions is introduced, its adequacy in formalizing a large class of action domains is shown, and it is demonstrated that symbolic methods can often be employed for automating reasoning about actions.
Abstract: Reasoning about actions is a central area of research in artificial intelligence, related to the study of common sense and nonmonotonic reasoning, knowledge representation, planning and theorem proving. The methodology of research in this area has not been quite satisfactory; typically, a new proposal for reasoning about actions is illustrated by way of a few examples that have been known to be challenging to formalize, and then claims are made that the method works in general. This is not very satisfactory since minor modifications of the examples might prove (indeed, have proven, in many cases) to be difficult or impossible for the new proposal to handle correctly. Such an example-oriented methodology also makes it difficult to compare various formalisms and thus to synthesize new ones. In this dissertation, we present a systematic study of reasoning about actions. The shortcomings of the example-oriented methodology are avoided as follows. First, declarative languages for describing actions are introduced and their semantics defined in such a way as to capture the underlying commonsense intuitions. Different methods of reasoning about actions are presented as translations from these languages and then the adequacy of the different formalizations is established by proving the soundness and completeness of the translations. The new declarative languages we propose are capable of representing rich action domains, such as those where actions can have indirect effects. In addition, we introduce a new approach to reasoning about actions and show its adequacy in formalizing a large class of action domains. We show that, in conjunction with this approach, symbolic methods can often be employed for automating reasoning about actions. Finally, we point out some limitations of proposed methods of reasoning about actions and suggest ways to overcome these limitations.
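For flavour, here is a much-simplified, STRIPS-like rendering of the idea of a declarative action description with a transition-function semantics; the shooting domain and encoding are illustrative and far weaker than the languages studied in the dissertation (no indirect effects, for instance).

    effects = {
        # action: list of (preconditions, fluent made true, fluent made false)
        "load":  [({"unloaded"}, "loaded", "unloaded")],
        "shoot": [({"loaded"}, "dead", "alive")],
    }

    def transition(state, action):
        new_state = set(state)
        for pre, add, delete in effects.get(action, []):
            if pre <= state:                 # effect applies only if its preconditions hold
                new_state.add(add)
                new_state.discard(delete)
        return frozenset(new_state)

    s0 = frozenset({"alive", "unloaded"})
    s1 = transition(s0, "load")
    s2 = transition(s1, "shoot")
    print(sorted(s2))   # ['dead', 'loaded']: fluents not affected by an action persist (inertia)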

9 citations


Journal ArticleDOI
TL;DR: A program called hypersolver, which can solve systems of equations defined in terms of sets in the universe of Hyperset Theory, is presented; it may be a useful tool for commonsense reasoning.

Journal ArticleDOI
01 Feb 1995
TL;DR: It is not surprising that the paradox should arise in commonsense reasoning, and if it occurs frequently but undramatically in real life, every uncertain reasoning system will have to deal with the problem in some way.
Abstract: “Simpson's paradox,” first described nearly a century ago, is an anomaly that sometimes arises from pooling data. Dramatic instances of the paradox have occurred in real life in the domains of epidemiology and admissions policies. Many writers have recently described hypothetical examples of the paradox arising in other areas of life and it seems possible that the paradox may occur frequently in mundane domains but with less serious implications. Thus, it is not surprising that the paradox should arise in commonsense reasoning, that subarea of artificial intelligence that seeks to axiomatize reasoning in such mundane domains. It arises as the problem “approximate proof by cases” and the question of whether to accept it may well depend on whether we wish to construct performance or competence models of reasoning. This article gives a brief history of the paradox and discusses its occurrence in our own discipline. It argues that if the paradox occurs frequently but undramatically in real life, every uncertain reasoning system will have to deal with the problem in some way.
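A worked numerical illustration of the pooling anomaly (the figures follow the well-known kidney-stone treatment example): the treatment beats the control in each subgroup, yet loses once the subgroups are pooled.

    groups = {
        # group: (treatment successes, treatment trials, control successes, control trials)
        "small stones": (81, 87, 234, 270),
        "large stones": (192, 263, 55, 80),
    }

    for name, (ts, tn, cs, cn) in groups.items():
        print(name, "treatment", round(ts / tn, 3), "control", round(cs / cn, 3))
    # small stones: treatment 0.931 vs control 0.867  -> treatment looks better
    # large stones: treatment 0.730 vs control 0.688  -> treatment looks better

    ts, tn, cs, cn = (sum(v[i] for v in groups.values()) for i in range(4))
    print("pooled", "treatment", round(ts / tn, 3), "control", round(cs / cn, 3))
    # pooled: treatment 0.780 vs control 0.826  -> the comparison reverses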

Journal ArticleDOI
Ron Sun1
TL;DR: This work directly maps a (causal) rule‐encoding scheme into a connectionist model; thus, it serves to link rule‐based reasoning to connectionist models, notably with direct one‐to‐one correspondence between the basic structures of the two formalisms.
Abstract: This article examines the issue of causality in commonsense reasoning and proposes a connectionist approach for modeling commonsense causal reasoning. Based on an analysis of the advantages and limitations of existing accounts, especially Shoham's logic, a generalized rule-based model FEL is proposed that can take into account the inexactness and the cumulative evidentiality of commonsense reasoning; this model corresponds naturally to a connectionist architecture. Detailed analyses are performed to show how the model handles commonsense causal reasoning. This work shows that a logic-based account of causality can be viewed as an (over)idealization of a more realistic model, which is simpler in form but deals with causality better. This work directly maps a (causal) rule-encoding scheme into a connectionist model; thus, it serves to link rule-based reasoning to connectionist models, notably with direct one-to-one correspondence between the basic structures of the two formalisms. © 1995 John Wiley & Sons, Inc.
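The cumulative-evidence flavour can be caricatured in a few lines: each rule contributes weighted evidence for its conclusion, and contributions add up and are clamped to [0, 1]. This is only a hedged approximation of what FEL formalises; the weights and the sprinkler example are invented.

    rules = [
        # (weights over condition nodes, conclusion node)
        ({"sprinkler_on": 0.6, "rained": 0.8}, "grass_wet"),
    ]

    def activate(facts):
        conclusions = {}
        for weights, conclusion in rules:
            evidence = sum(w * facts.get(node, 0.0) for node, w in weights.items())
            conclusions[conclusion] = max(0.0, min(1.0, evidence))   # clamp to [0, 1]
        return conclusions

    print(activate({"rained": 1.0}))                        # {'grass_wet': 0.8}
    print(activate({"rained": 0.5, "sprinkler_on": 0.5}))   # {'grass_wet': 0.7}: evidence accumulates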

Book ChapterDOI
01 Jan 1995
TL;DR: Possibility theory is developed on top of fuzzy set theory, by interpreting the membership function of a fuzzy set as a description of the available knowledge about the normal course of things, which leads to possibilistic logic.
Abstract: It is widely acknowledged that classical logic stems from old attempts at developing a formal model of human reasoning. These attempts mainly come from philosophers. The first half of the century has witnessed significant progress in classical logic as a tool for founding mathematical reasoning. In contrast, the last 20 years, and the emergence of Artificial Intelligence, have pointed out the deficiencies of classical logic as a tool for modeling commonsense reasoning. When inferring from incomplete, uncertain or contradictory information, man does not follow the strict rules of classical logic. Simultaneously, a significant revival of non-additive probabilities has been observed, with the emergence of several approaches to uncertainty modelling such as belief functions, fuzzy measures, upper and lower probabilities. Fuzzy set theory (Zadeh, 1965) belongs to this trend, although it was originally construed as a tool for modeling lexical imprecision in natural language, often referred to as "vagueness". Possibility theory (Zadeh, 1978; Dubois and Prade, 1988) has been developed on top of fuzzy set theory, by interpreting the membership function of a fuzzy set as a description of the available knowledge about the normal course of things. As opposed to most other theories, possibility theory is an ordinal approach to uncertainty. Putting together possibility theory and logic leads to possibilistic logic (Dubois, Lang and Prade, 1994).
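A small numerical sketch of the possibility and necessity measures induced by a membership function, computed over a discrete domain; the "age" distribution and the queried event are invented for illustration.

    pi = {20: 0.2, 25: 0.7, 30: 1.0, 35: 0.6, 40: 0.1}   # possibility distribution pi(x)

    def possibility(event):                 # Pi(A) = max over x in A of pi(x)
        return max((pi[x] for x in event), default=0.0)

    def necessity(event):                   # N(A) = 1 - Pi(complement of A)
        complement = [x for x in pi if x not in event]
        return 1.0 - possibility(complement)

    young = {20, 25, 30}
    print(possibility(young), necessity(young))   # 1.0 0.4: fully plausible, but only weakly certain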

Journal ArticleDOI
TL;DR: A case study describing a multi-dimensional model of the effects of particle size upon the fracture toughness of epoxy resin is presented; the paper is self-contained, so no prior knowledge of qualitative modelling is required.

Journal ArticleDOI
TL;DR: The active logic approach to resolving such effects of a context clash is sketched: the agent's misidentification is initially reflected in her beliefs, and belief revision is later used to resolve it.
Abstract: Context plays a crucial role in natural-language dialogues. But context can change over the course of a dialogue, and thus communicating agents are tasked with identifying and keeping track of context shifts in order to understand conversations. But context is not wholly objective; each participant in a dialogue has her own view of context. At times the parties involved in a dialogue will unknowingly presume different (aspects of) contexts from one another, perhaps because one person has not kept up with the other's most recent context shift. The resulting context clash may lead to confusion or miscommunication. Agents must be prepared to sort out these confusions when they become evident. Active logics can be successfully utilized by an agent facing confusions such as those that arise when a context clash leads her to misidentify an object. In this paper the active logic approach to resolving such effects of a context clash is sketched. The agent's misidentification is initially reflected in her beliefs, and belief revision is later used to resolve this effect of the clash. As theoretical tools active logics have proven useful for solving a varied array of commonsense reasoning problems, but with implementation comes space and time complexity concerns. These are also discussed along with a proposed partial remedy based in part on context or focus-of-attention.
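A toy, step-indexed sketch in the spirit of active logic (the encoding is mine, not the paper's): inference proceeds in discrete steps, and when a direct contradiction is noticed both contradictands are retracted instead of letting the contradiction poison further inference.

    beliefs = {("cup", "on_table"), ("cup", "not_on_table"), ("light", "on")}
    rules = [({("light", "on")}, ("room", "lit"))]

    def step(beliefs):
        new = set(beliefs)
        # notice and quarantine direct contradictions
        for obj, prop in list(new):
            if prop.startswith("not_") and (obj, prop[4:]) in new:
                new.discard((obj, prop))
                new.discard((obj, prop[4:]))
        # one step of rule application on what remains
        for premises, conclusion in rules:
            if premises <= new:
                new.add(conclusion)
        return new

    print(sorted(step(beliefs)))   # the cup contradiction is retracted; ('room', 'lit') is added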

Proceedings ArticleDOI
Yan Wang1, Jian Chen1
22 Oct 1995
TL;DR: A framework for decision making that integrates QR with quantitative models, incorporating both information at different levels of abstraction and nonnumerical factors, helps to solve decision-making problems more effectively and efficiently.
Abstract: Conventional decision analyses based on quantitative models suffer from a limited ability to handle qualitative knowledge, especially under complex circumstances. Integrating quantitative models with qualitative knowledge, which we argue in this paper includes information at different levels of abstraction as well as nonnumerical factors, helps to solve decision-making problems more effectively and efficiently. We apply qualitative reasoning (QR) to handle information qualitatively. A framework for decision making that integrates QR with quantitative models is presented, and the interaction between qualitative simulation models at the upper level and quantitative models at the lower level is discussed.

Proceedings Article
20 Aug 1995
TL;DR: This paper discusses fuzzy implications in the sense of common sense reasoning, and presents a new fuzzy preferential implication that is nonmonotonic, paraconsistent and without the general implicational paradoxes.
Abstract: It is well known that knowledge-based systems would be more robust and smarter if they can deal with the inconsistent, incomplete or imprecise knowledge, which has been referred to as common sense knowledge. In this paper, we discuss fuzzy implications in the sense of common sense reasoning. Firstly, we analyse the rationality of some existing fuzzy implications based on the discussion of implicational paradoxes. Secondly, we present a new fuzzy preferential implication that is nonmonotonic, paraconsistent and without the general implicational paradoxes. Finally, we propose sound and complete decision tableaux of such implications, which can be used as the inference engines of adaptive expert systems or frameworks for the fuzzy Prolog.

Journal ArticleDOI
TL;DR: A new generation of very powerful Reduced Instruction Set Computers has emerged which, while not exactly matching Turing's spartan hardware design, is conceptually much nearer to it than the vast majority of the computer architectures designed over the last three decades.
Abstract: The first is Turing's insistence that the computer have a hardware system as simple as possible, his philosophy being that the main functionality of the ACE computer would be achieved by programming rather than by complex electronic circuitry. The trend in computer architectures since the publication of this report has been towards more and more complex hardware, with the inevitable result that computers have become increasingly baroque and inefficient. This has, in turn, prompted a new generation of very powerful Reduced Instruction Set Computers which, while not exactly matching Turing's spartan hardware design, are conceptually much nearer to it than the vast majority of the computer architectures that have been designed over the last three decades.

Proceedings ArticleDOI
27 Nov 1995
TL;DR: This work describes a way by which knowledge bases (KB), in the form of semantic networks, can be represented by a particular "weightless" neural network model, the GNU (generalising neural unit), and presents a new strategy for 'spreading' in such networks, that is, how generalising capabilities are instilled in the network.
Abstract: This work describes a way by which knowledge bases (KBs), in the form of semantic networks, can be represented by a particular "weightless" neural network model, the GNU (generalising neural unit). It also presents a new strategy for 'spreading' in such networks, that is, for how generalising capabilities are instilled in the network. The case described here concerns primarily how neural networks can acquire semantic power, and it has implications for knowledge bases that involve some degree of "common-sense reasoning". The idea of partitioning the external fields in order to give them structural semantic power has proved appropriate for discussing inheritance properties and overlapping concepts in semantic networks. A discussion of full versus sparse connectivity is justified in order to build low-cost applications as far as computational resources are concerned. The relationship between sparse connectivity and the ability to perform the required semantic tasks is discussed. The results show that the retrieval performance, as indicated by three different measurements, is improved with the use of a novel spreading strategy.

Journal ArticleDOI
TL;DR: A novel approach to integrating formal models of commonsense reasoning with traditional knowledge representation formalisms, such as frames and scripts, is suggested, and a "typicality logic" is described, with the claim that default logic formalisms proposed in the literature are "specializations" of the author's typicality logic.
Abstract: This book deals with a well-known problem in artificial intelligence, namely commonsense reasoning. The author tackles this problem from the standpoint of classical AI, building on theories of nonmonotonic reasoning developed by McCarthy, McDermott, Poole, Reiter, and Allen. However, the proposals given in this book differ from traditional approaches in two respects: (1) a novel approach to integrating formal models of commonsense reasoning with traditional knowledge representation formalisms, such as frames and scripts, is suggested, and (2) a "typicality logic" is described, and the claim is made that default logic formalisms that have been proposed in the literature are "specializations" of the author's typicality logic.

Proceedings ArticleDOI
21 May 1995
TL;DR: A 3-D spatial reasoning algebra is developed, using qualitative versions of Gibbs-Heaviside vector calculus with homogeneous coordinates, and screw calculus motions that map one object frame onto another.
Abstract: Qualitative spatial reasoning has been used for representing landmarks, path and grasp planning, shape recognition, etc., but has generally not been extended beyond 2-D. In this paper, we develop a 3-D spatial reasoning algebra, using qualitative versions of (a) Gibbs-Heaviside vector calculus with homogeneous coordinates, and (b) screw calculus motions that map one object frame onto another. By comparing the results of composition in one, two and three dimensions, we find that the degree of uncertainty is much higher in 3-D. We demonstrate the utility of a qualitative formulation in a simple robot spatial task with imprecise instructions.
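The growth of uncertainty under qualitative composition is easy to see with a toy sign algebra (the paper's homogeneous-coordinate and screw-calculus machinery is not reproduced here): adding opposite signs yields an ambiguous result, and in 3-D the ambiguity compounds componentwise.

    def qadd(a, b):
        if a == "0": return b
        if b == "0": return a
        return a if a == b else "?"        # opposite signs: the sum's sign is unknown

    def qvec_add(u, v):
        # componentwise qualitative addition of two 3-D sign vectors
        return tuple(qadd(a, b) for a, b in zip(u, v))

    print(qvec_add(("+", "0", "-"), ("-", "+", "-")))   # ('?', '+', '-')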


Book ChapterDOI
01 Jan 1995
TL;DR: This review helps to identify those existing qualitative techniques that are ready to be used in CIM, and introduces readers to some techniques of qualitative simulation through examples.
Abstract: Qualitative reasoning has been suggested and studied in artificial intelligence as a tool for knowledge representation, mental simulation and the simulation of the behavior of physical systems. This paper is an attempt to formalize various aspects of human thought covering common-sense reasoning about physical reality. We try to point to those CIM tasks that are of a qualitative nature. This review helps us to identify those existing qualitative techniques that are ready to be used in CIM. The paper introduces readers to some techniques of qualitative simulation through examples.

Dan Roth1
02 Jan 1995
TL;DR: A new framework for the study of reasoning, in which a learning component has a principal role, is developed and shown to support efficient reasoning with incomplete information, and to avoid many of the representational problems which existing default reasoning formalisms face.
Abstract: Any theory aimed at understanding commonsense reasoning, the process that humans use to cope with the mundane but complex aspects of the world in evaluating everyday situations, should account for its flexibility, its adaptability, and the speed with which it is performed. In this thesis we analyze current theories of reasoning and argue that they do not satisfy those requirements. We then proceed to develop a new framework for the study of reasoning, in which a learning component has a principal role. We show that our framework efficiently supports a lot "more reasoning" than traditional approaches and at the same time matches our expectations of plausible patterns of reasoning in cases where other theories do not. In the first part of this thesis we present a computational study of the knowledge-based system approach, the generally accepted framework for reasoning in intelligent systems. We present a comprehensive study of several methods used in approximate reasoning as well as some reasoning techniques that use approximations in an effort to avoid computational difficulties. We show that these are even harder computationally than exact reasoning tasks. What is more surprising is that, as we show, even the approximate versions of these approximate reasoning tasks are intractable, and these severe hardness results on approximate reasoning hold even for very restricted knowledge representations. Motivated by these computational considerations we argue that a central question to consider, if we want to develop computational models for commonsense reasoning, is how the intelligent system acquires its knowledge and how this process of interaction with its environment influences the performance of the reasoning system. The Learning to Reason framework developed and studied in the rest of the thesis exhibits the role of inductive learning in achieving efficient reasoning, and the importance of studying reasoning and learning phenomena together. The framework is defined in a way that is intended to overcome the main computational difficulties in the traditional treatment of reasoning, and indeed, we exhibit several positive results that do not hold in the traditional setting. We develop Learning to Reason algorithms for classes of theories for which no efficient reasoning algorithm exists when represented as a traditional (formula-based) knowledge base. We also exhibit Learning to Reason algorithms for a class of theories that is not known to be learnable in the traditional sense. Many of our results rely on the theory of model-based representations that we develop in this thesis. In this representation, the knowledge base is represented as a set of models (satisfying assignments) rather than a logical formula. We show that in many cases reasoning with a model-based representation is more efficient than reasoning with a formula-based representation and, more significantly, that it suggests a new view of reasoning, and in particular, of logical reasoning. In the final part of this thesis, we address another fundamental criticism of the knowledge-based system approach. We suggest a new approach for the study of the non-monotonicity of human commonsense reasoning, within the Learning to Reason framework. The theory developed is shown to support efficient reasoning with incomplete information, and to avoid many of the representational problems which existing default reasoning formalisms face. 
We show how the various reasoning tasks we discuss in this thesis relate to each other and conclude that they are all supported together naturally.
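A much-simplified illustration of the model-based idea: the knowledge base is kept as a set of satisfying assignments rather than a formula, and a query is answered by checking it against every stored model. The actual framework depends on maintaining a carefully chosen (e.g. characteristic) set of models, which this sketch glosses over; the bird domain is invented.

    models = [
        {"bird": True,  "penguin": False, "flies": True},
        {"bird": True,  "penguin": True,  "flies": False},
        {"bird": False, "penguin": False, "flies": False},
    ]

    def entails(query):
        # The KB entails the query if it holds in every stored model.
        return all(query(m) for m in models)

    print(entails(lambda m: m["penguin"] <= m["bird"]))   # True: penguins are birds in all models
    print(entails(lambda m: m["bird"] <= m["flies"]))     # False: some stored bird does not fly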

Proceedings ArticleDOI
20 Feb 1995
TL;DR: An extension of the HEART system is presented that allows the manipulation of qualitative data and the treatment of facts such as "the robot is at the door at time 10".
Abstract: We present an extension of the HEART system developed by Joubel and Raiman (1990). This extension makes it possible to perform qualitative reasoning. HEART includes an assumption-based truth maintenance system (ATMS) and a temporal constraint propagator (TCP). Our extension allows the manipulation of qualitative data. The extended system is linked to a qualitative simulator based on QSIM. In our qualitative simulator, qualitative parameters and tendencies (time derivatives of parameters) take their values in a quantity space based on intervals. Furthermore, time is represented explicitly, thus permitting the treatment of facts such as "the robot is at the door at time 10". We present a method of performing assumption-based truth maintenance in a qualitative temporal logic. This is based on the results of previous work that considered assumption-based truth maintenance systems from a logical perspective.
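A small sketch of the quantity-space idea mentioned above: a qualitative value is either a landmark or the open interval between adjacent landmarks, so a statement like "the robot is at the door at time 10" pins a parameter to a landmark at an explicit time. The landmark names and values below are illustrative only.

    landmarks = [("zero", 0.0), ("door", 10.0), ("wall", 20.0)]

    def qualitative_value(x, eps=1e-6):
        for i, (name, value) in enumerate(landmarks):
            if abs(x - value) < eps:
                return name                                  # exactly at a landmark
            if x < value:
                prev = landmarks[i - 1][0] if i > 0 else "-inf"
                return f"({prev}, {name})"                   # strictly inside an interval
        return f"({landmarks[-1][0]}, +inf)"

    print(qualitative_value(10.0))   # door
    print(qualitative_value(14.2))   # (door, wall)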