
Showing papers by "Hector J. Levesque published in 1992"


Proceedings Article
12 Jul 1992
TL;DR: A greedy local search procedure called GSAT is introduced for solving propositional satisfiability problems and its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks.
Abstract: We introduce a greedy local search procedure called GSAT for solving propositional satisfiability problems. Our experiments show that this procedure can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure or resolution. We also show that GSAT can solve structured satisfiability problems quickly. In particular, we solve encodings of graph coloring problems, N-queens, and Boolean induction. General application strategies and limitations of the approach are also discussed. GSAT is best viewed as a model-finding procedure. Its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks.

1,410 citations
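
The abstract describes GSAT only at a high level; below is a minimal, illustrative sketch of the greedy flip loop it refers to. The parameter names (max_flips, max_tries) and clause encoding are assumptions for illustration, not taken from the paper.

```python
import random

def gsat(clauses, n_vars, max_flips=1000, max_tries=10):
    """Greedy local search for SAT, in the spirit of GSAT: repeatedly
    flip the variable that maximizes the number of satisfied clauses.
    Clauses are lists of nonzero ints; positive i means variable i,
    negative i means its negation."""
    def num_satisfied(assign):
        return sum(any((lit > 0) == assign[abs(lit)] for lit in c)
                   for c in clauses)

    for _ in range(max_tries):
        # Start each try from a fresh random truth assignment.
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if num_satisfied(assign) == len(clauses):
                return assign  # model found
            # Greedy step: flip the variable giving the largest gain.
            best_v, best_score = None, -1
            for v in range(1, n_vars + 1):
                assign[v] = not assign[v]
                score = num_satisfied(assign)
                assign[v] = not assign[v]
                if score > best_score:
                    best_v, best_score = v, score
            assign[best_v] = not assign[best_v]
    return None  # no model found within the flip/try budget
```

The outer loop of random restarts corresponds to the procedure's "tries"; as the abstract notes, the procedure is a model finder, so a None result does not prove unsatisfiability.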


Proceedings Article
12 Jul 1992
TL;DR: It is shown that by using the right distribution of instances, and appropriate parameter values, it is possible to generate random formulas that are hard, that is, for which satisfiability testing is quite difficult.
Abstract: We report results from large-scale experiments in satisfiability testing. As has been observed by others, testing the satisfiability of random formulas often appears surprisingly easy. Here we show that by using the right distribution of instances, and appropriate parameter values, it is possible to generate random formulas that are hard, that is, for which satisfiability testing is quite difficult. Our results provide a benchmark for the evaluation of satisfiability-testing procedures.

1,004 citations
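
The fixed-clause-length model the paper studies is easy to reproduce. The sketch below samples random k-SAT instances in that style; the specific numbers are illustrative, with the comment recording the widely reported hard region near 4.3 clauses per variable for k = 3.

```python
import random

def random_ksat(n_vars, n_clauses, k=3):
    """Sample a random k-SAT formula in the fixed-clause-length model:
    each clause picks k distinct variables uniformly at random and
    negates each one independently with probability 1/2."""
    clauses = []
    for _ in range(n_clauses):
        chosen = random.sample(range(1, n_vars + 1), k)
        clauses.append([v if random.random() < 0.5 else -v
                        for v in chosen])
    return clauses

# Hard region for random 3-SAT: clause/variable ratio near 4.3,
# where roughly half the generated formulas are satisfiable.
hard_instance = random_ksat(n_vars=100, n_clauses=430)
```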


01 Jan 1992
TL;DR: This paper presents a new solution to the Yale shooting problem based on the idea that since the abnormality predicate takes a situational argument, it is important for the meanings of the situations to be held constant across the various models being compared.
Abstract: Most of the solutions proposed to the Yale shooting problem have either introduced new nonmonotonic reasoning methods (generally involving temporal priorities) or completely reformulated the domain axioms to represent causality explicitly. This paper presents a new solution based on the idea that since the abnormality predicate takes a situational argument, it is important for the meanings of the situations to be held constant across the various models being compared. This is accomplished by a simple change in circumscription policy: when Ab is circumscribed, Result (rather than Holds) is allowed to vary. In addition, we need an axiom ensuring that every consistent situation is included in the domain of discourse. Ordinary circumscription will then produce the intuitively correct answer. Beyond its conceptual simplicity, the solution proposed here has additional advantages over the previous approaches. Unlike the approach that uses temporal priorities, it can support reasoning backward in time as well as forward. And unlike the causal approach, it can handle ramifications in a natural manner.

22 citations
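
For readers without the scenario in mind, a common situation-calculus rendering of the Yale shooting axioms is sketched below (this is the textbook formulation; the paper's own axiomatization may differ in detail). Circumscribing Ab with Holds allowed to vary yields the well-known anomalous model in which the gun becomes unloaded while waiting; the abstract's fix is to let Result vary instead.

```latex
\begin{align*}
  & \mathit{Holds}(\mathit{alive}, S_0) \\
  & \forall s.\; \mathit{Holds}(\mathit{loaded}, \mathit{Result}(\mathit{load}, s)) \\
  & \forall s.\; \mathit{Holds}(\mathit{loaded}, s) \rightarrow
      \neg\,\mathit{Holds}(\mathit{alive}, \mathit{Result}(\mathit{shoot}, s)) \\
  & \forall f, a, s.\; \mathit{Holds}(f, s) \wedge \neg\,\mathit{Ab}(f, a, s)
      \rightarrow \mathit{Holds}(f, \mathit{Result}(a, s))
\end{align*}
```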


01 Jan 1992
TL;DR: In this article, an agent's control policy is assessed along three dimensions: deliberation cost, execution cost, and goal value, and the agent must choose which goal to attend to as well as which action to take.
Abstract: An autonomous agent's control problem is often formulated as the attempt to minimize the expected cost of accomplishing a goal. This paper presents a three-dimensional view of the control problem that is substantially more realistic. The agent's control policy is assessed along three dimensions: deliberation cost, execution cost, and goal value. The agent must choose which goal to attend to as well as which action to take. Our control policy seeks to maximize satisfaction by trading execution cost and goal value while keeping deliberation cost low. The agent's control decisions are guided by the MU heuristic: choose the alternative whose marginal expected utility is maximal. Thus, when necessary, the agent will prefer easily-achieved goals to attractive but difficult-to-attain alternatives. The MU heuristic is embedded in an architecture with record-keeping and learning capabilities. The architecture offers its control module expected utility and expected cost estimates that are gradually refined as the agent accumulates experience. A programmer is not required to supply that knowledge, and the estimates are provided without recourse to distributional assumptions.

12 citations
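
The abstract leaves "marginal expected utility" informal; the sketch below adopts one plausible reading, ranking alternatives by estimated value per unit of estimated execution cost, which reproduces the stated preference for easily achieved goals over attractive but costly ones. All names and numbers are illustrative assumptions, not the paper's definitions.

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    """A candidate (goal, action) pair with learned estimates."""
    name: str
    expected_value: float  # estimated goal value if achieved
    expected_cost: float   # estimated execution cost

def choose_by_mu(alternatives):
    """Pick the alternative with maximal marginal expected utility,
    read here as expected value per unit of expected cost."""
    return max(alternatives,
               key=lambda a: a.expected_value / a.expected_cost)

# A modest but cheap goal beats a valuable but expensive one.
best = choose_by_mu([
    Alternative("easy goal", expected_value=5.0, expected_cost=1.0),
    Alternative("hard goal", expected_value=20.0, expected_cost=10.0),
])
assert best.name == "easy goal"
```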


01 Jan 1992
TL;DR: Three-valued extensions of major nonmonotonic formalisms are introduced and it is proved that the recently proposed well-founded semantics of logic programs is equivalent, for arbitrary logic programs, to three-valued forms of McCarthy's circumscription, Reiter's closed world assumption, Moore's autoepistemic logic and Reiter's default theory.
Abstract: We introduce three-valued extensions of major nonmonotonic formalisms and we prove that the recently proposed well-founded semantics of logic programs is equivalent, for arbitrary logic programs, to three-valued forms of McCarthy's circumscription, Reiter's closed world assumption, Moore's autoepistemic logic and Reiter's default theory. This result not only provides a further justification of the well-founded semantics as a natural extension of the perfect model semantics from the class of stratified programs to the class of all logic programs, but it also establishes the class of all logic programs as a large class of theories for which natural forms of all four nonmonotonic formalisms coincide. It also paves the way for using efficient computation methods, developed for logic programming, as inference mechanisms for nonmonotonic reasoning.

9 citations
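
A small example may help fix intuitions about the three-valued semantics involved (the example is illustrative, not taken from the paper). Under the well-founded semantics, the program

```latex
P \;=\; \{\; p \leftarrow \mathit{not}\ q, \qquad q \leftarrow \mathit{not}\ p \;\}
```

assigns both p and q the truth value undefined, whereas the single-rule program { p ← not q } makes p true and q false. The paper's equivalence result says that the corresponding three-valued forms of circumscription, the closed world assumption, autoepistemic logic, and default logic deliver these same verdicts on every logic program.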


01 Jan 1992
TL;DR: A greedy local search procedure called GSAT, which can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure or resolution, is introduced.
Abstract: We introduce a greedy local search procedure called GSAT for solving propositional satisfiability problems. Our experiments show that this procedure can be used to solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure or resolution. We also show that GSAT can solve structured satisfiability problems quickly. In particular, we solve encodings of graph coloring problems, N-queens, and Boolean induction. General application strategies and limitations of the approach are also discussed. GSAT is best viewed as a model-finding procedure. Its good performance suggests that it may be advantageous to reformulate reasoning tasks that have traditionally been viewed as theorem-proving problems as model-finding tasks.

5 citations


01 Jan 1992
TL;DR: In this article, the authors contrast two views of knowledge in default reasoning systems: the traditional view that one knows the logical consequences of one's knowledge base, and a contingent view of knowledge, namely what is contingently known about the world under consideration.
Abstract: How should what one knows about an individual affect default conclusions about that individual? This paper contrasts two views of “knowledge” in default reasoning systems. The first is the traditional view that one knows the logical consequences of one's knowledge base. It is shown how, under this interpretation, having to know an exception is too strong for default reasoning. It is argued that we need to distinguish “background” and “contingent” knowledge in order to be able to handle specificity, and that this is a natural distinction. The second view of knowledge is what is contingently known about the world under consideration. Using this view of knowledge, a notion of conditioning that seems like a minimal property of a default is defined. Finally, a qualitative version of the lottery paradox is given: if we want to be able to say that individuals that are typical in every respect do not exist, we should not expect to conclude the conjunction of our default conclusions. This paper expands on work in the proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning [381].

2 citations


01 Jan 1992
TL;DR: A general framework for integrating logical deduction and sortal deduction to form a deductive system for sorted logic is presented, along with results that provide the theoretical underpinnings of the framework.
Abstract: Researchers in artificial intelligence have recently been taking great interest in hybrid representations, among them sorted logics, which link a traditional logical representation to a taxonomic (or sort) representation such as those prevalent in semantic networks. This paper introduces a general framework, the substitutional framework, for integrating logical deduction and sortal deduction to form a deductive system for sorted logic. This paper also presents results that provide the theoretical underpinnings of the framework. A distinguishing characteristic of a deductive system that is structured according to the substitutional framework is that the sort subsystem is invoked only when the logic subsystem performs unification, and thus sort information is used only in determining what substitutions to make for variables. Unlike every other known approach to sorted deduction, the substitutional framework provides for a systematic transformation of unsorted deductive systems into sorted ones.

1 citation
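
To make the substitutional idea concrete, here is a minimal sketch in which the sort subsystem is consulted only at unification time, when a variable is about to be bound, as the abstract describes. The taxonomy, term representation, and all names below are assumptions for illustration, not the paper's formalism.

```python
# Sort taxonomy as child -> parent edges (illustrative).
TAXONOMY = {"dog": "animal", "cat": "animal", "animal": "thing"}

def is_subsort(s, t):
    """True if sort s lies at or below sort t in the taxonomy."""
    while s is not None:
        if s == t:
            return True
        s = TAXONOMY.get(s)
    return False

def unify(x, y, subst):
    """Unify two terms; a term is either a constant ('c', name, sort)
    or a sorted variable ('v', name, sort). The sort subsystem is
    invoked only here, when a variable binding is attempted."""
    x, y = subst.get(x, x), subst.get(y, y)
    if x == y:
        return subst
    for var, other in ((x, y), (y, x)):
        if var[0] == "v":
            # Sortal deduction: the binding must respect the variable's sort.
            if is_subsort(other[2], var[2]):
                return {**subst, var: other}
            return None  # sort clash: no admissible substitution
    return None  # two distinct constants never unify

# A variable of sort 'animal' unifies with a constant of sort 'dog' ...
print(unify(("v", "X", "animal"), ("c", "fido", "dog"), {}))
# ... but not with a constant of the more general sort 'thing'.
print(unify(("v", "X", "animal"), ("c", "rock", "thing"), {}))
```

Because the sort check is confined to variable binding, an unsorted unification routine can be turned into a sorted one by adding the single is_subsort test, which is the systematic transformation the abstract alludes to.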