
Showing papers on "Conjunctive normal form published in 1999"


Journal ArticleDOI
TL;DR: A survey of the latest techniques in planning algorithms, with an emphasis on propositional methods such as GRAPHPLAN and compilers that convert planning problems into propositional conjunctive normal form formulas for solution using systematic or stochastic SAT methods.
Abstract: The past five years have seen dramatic advances in planning algorithms, with an emphasis on propositional methods such as GRAPHPLAN and compilers that convert planning problems into propositional conjunctive normal form formulas for solution using systematic or stochastic SAT methods. Related work, in the context of spacecraft control, advances our understanding of interleaved planning and execution. In this survey, I explain the latest techniques and suggest areas for future research.
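To make the compilation idea concrete, here is a minimal sketch (not from the survey) of encoding a single planning step as propositional clauses and checking them by brute force. The fluents, the action, and the clause set are hypothetical toy choices; real SATPLAN-style encodings add time-indexed frame and mutual-exclusion axioms and hand the CNF to a systematic or stochastic SAT solver.

```python
from itertools import product

# Hypothetical toy encoding of one planning step as CNF clauses over
# variables 1 = have_key@t0, 2 = door_open@t1, 3 = do(open_door)@t0.
# Clauses are lists of signed integers, DIMACS style.
clauses = [
    [-3, 1],   # the action requires its precondition: open_door -> have_key
    [-3, 2],   # the action causes its effect: open_door -> door_open
    [3, -2],   # explanatory frame axiom: door_open only if open_door ran
    [1],       # initial state: have_key holds
    [2],       # goal: door_open holds at t1
]

def satisfiable(clauses, n_vars):
    """Brute-force model search; fine for toy sizes. Real planners hand
    such formulas to dedicated SAT solvers instead."""
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

print(satisfiable(clauses, 3))  # (True, True, True): the action is executed
```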

485 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that for the class of irredundant monotone functions, it is NP-hard to test either of the conditions D′=D or C′=C.

136 citations


Proceedings Article
31 Jul 1999
TL;DR: A method for compiling propositional theories into a new tractable form, referred to as decomposable negation normal form (DNNF); once a propositional theory is compiled into DNNF, a number of reasoning tasks, such as satisfiability and forgetting, can be performed in linear time.
Abstract: We propose a method for compiling propositional theories into a new tractable form that we refer to as decomposable negation normal form (DNNF). We show a number of results about our compilation approach. First, we show that every propositional theory can be compiled into DNNF and present an algorithm to this effect. Second, we show that if a clausal form has bounded treewidth, then its DNNF compilation has linear size and can be computed in linear time - treewidth is a graph-theoretic parameter which measures the connectivity of the clausal form. Third, we show that once a propositional theory is compiled into DNNF, a number of reasoning tasks, such as satisfiability and forgetting, can be performed in linear time. Finally, we propose two techniques for approximating the DNNF compilation of a theory when the size of such compilation is too large to be practical. One of the techniques generates a sound but incomplete compilation, while the other generates a complete but unsound compilation. Together, these approximations bound the exact compilation from below and above in terms of their ability to answer queries.
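A minimal sketch of why satisfiability is linear-time on DNNF, assuming the input really is decomposable (the children of every AND mention disjoint variables); the node classes below are hypothetical, not the paper's data structures.

```python
# Hypothetical DNNF node classes (not the paper's data structures).
class Lit:                       # leaf literal, e.g. x or ~x
    def __init__(self, var, pos): self.var, self.pos = var, pos

class And:                       # decomposable AND: children share no variables
    def __init__(self, *kids): self.kids = kids

class Or:                        # plain OR
    def __init__(self, *kids): self.kids = kids

def sat(node):
    """One bottom-up pass. Decomposability makes the AND case sound: its
    children constrain disjoint variables, so each can be satisfied
    independently and the answers combined."""
    if isinstance(node, Lit):
        return True              # a lone literal is always satisfiable
    if isinstance(node, And):
        return all(sat(k) for k in node.kids)
    return any(sat(k) for k in node.kids)

# (x AND y) OR (~x AND z): each AND's children mention disjoint variables.
f = Or(And(Lit('x', True), Lit('y', True)),
       And(Lit('x', False), Lit('z', True)))
print(sat(f))                    # True
```

On a DAG with shared subcircuits, memoizing sat() per node keeps the whole check linear in the size of the compilation.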

84 citations


Book ChapterDOI
12 Jul 1999
TL;DR: Basic notions and results of a Contextual Attribute Logic are surveyed and a common theory of cumulated clauses is presented for algorithmically computing bases of those logics.
Abstract: Contextual Attribute Logic is part of Contextual Concept Logic. It may be considered as a contextual version of the Boolean Logic of Signs and Classes. In this paper we survey basic notions and results of a Contextual Attribute Logic. Main themes are the clause logic and the implication logic of formal contexts. For algorithmically computing bases of those logics, a common theory of cumulated clauses is presented.

76 citations



Journal ArticleDOI
TL;DR: It is shown that the problem is coNP-complete when the expression is required to be in conjunctive normal form with three literals per clause (3CNF), and a dichotomy theorem analogous to the classical one by Schaefer is proved, stating that, unless P=NP, the problem can be solved in polynomial time if and only if the clauses allowed are all Horn, all anti-Horn, all 2CNF, or all equivalent to equations modulo two.
Abstract: We study the complexity of telling whether a set of bit-vectors represents the set of all satisfying truth assignments of a Boolean expression of a certain type. We show that the problem is coNP-complete when the expression is required to be in conjunctive normal form with three literals per clause (3CNF). We also prove a dichotomy theorem analogous to the classical one by Schaefer, stating that, unless P=NP, the problem can be solved in polynomial time if and only if the clauses allowed are all Horn, or all anti-Horn, or all 2CNF, or all equivalent to equations modulo two.
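As a specification of the verification problem (not an efficient procedure; the paper shows the general case is coNP-complete), a brute-force reference check might look as follows. The clause and bit-vector encodings are illustrative choices.

```python
from itertools import product

# Brute-force specification: does a given set of bit-vectors equal the
# model set of a CNF? Exponential in n_vars, so usable only as a reference.

def models(clauses, n_vars):
    """All satisfying assignments of a CNF over variables 1..n_vars."""
    return {bits for bits in product((False, True), repeat=n_vars)
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses)}

def represents(bit_vectors, clauses, n_vars):
    return set(bit_vectors) == models(clauses, n_vars)

# (x1 or x2) and (~x1 or x2): the models are exactly those with x2 = True.
cnf = [[1, 2], [-1, 2]]
print(represents({(False, True), (True, True)}, cnf, 2))  # True
```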

68 citations


Dissertation
20 Nov 1999
TL;DR: The algorithms presented here are among the first parallel algorithms for the random generation of combinatorial structures; the thesis also derives an algorithm that solves one of the studied matching problems optimally in linear time when the input graph is a tree, together with a number of non-approximability results.
Abstract: Probabilistic techniques are becoming more and more important in Computer Science. Some of them are useful for the analysis of algorithms. The aim of this thesis is to describe and develop applications of these techniques. We first look at the problem of generating a graph uniformly at random from the set of all unlabelled graphs with n vertices, by means of efficient parallel algorithms. Our model of parallel computation is the well-known parallel random access machine (PRAM). The algorithms presented here are among the first parallel algorithms for random generation of combinatorial structures. We present two different parallel algorithms for the uniform generation of unlabelled graphs. The algorithms run in O(log^2 n) time with high probability on an EREW PRAM using O(n^2) processors. Combinatorial and algorithmic notions of approximation are another important thread in this thesis. We look at possible ways of approximating the parameters that describe the phase transitional behaviour (similar in some sense to the transition in Physics between solid and liquid state) of two important computational problems: that of deciding whether a graph is colourable using only three colours so that no two adjacent vertices receive the same colour, and that of deciding whether a propositional Boolean formula in conjunctive normal form with clauses containing at most three literals is satisfiable. A specific notion of maximal solution and, for the second problem, the use of a probabilistic model called the (young) coupon collector allow us to improve the best known results for these problems. Finally we look at two graph theoretic matching problems. We first study the computational complexity of these problems and the algorithmic approximability of the optimal solutions on particular classes of graphs. We also derive an algorithm that solves one of them optimally in linear time when the input graph is a tree, as well as a number of non-approximability results. Then, making some assumptions about the input distribution, we study the expected structure of these matchings and derive improved approximation results for several models of random graphs.

42 citations


Book ChapterDOI
14 Sep 1999
TL;DR: This paper presents two variants of the DP procedure which overcome the space blow-up caused by converting non-CNF formulas to CNF, and shows that limiting the splitting step to a subset of the variables can lead to significant speed-ups.
Abstract: Traditionally, the satisfiability problem for propositional logics deals with formulas in Conjunctive Normal Form (CNF). A typical way to deal with non-CNF formulas requires (i) converting them into CNF, and (ii) applying solvers usually based on the Davis-Putnam (DP) procedure. A well-known problem of this solution is that the CNF conversion may introduce many new variables, thus greatly widening the space of assignments in which the DP procedure has to search in order to find solutions. In this paper we present two variants of the DP procedure which overcome the problem outlined above. The idea underlying these variants is that splitting should occur only for the variables in the original formula. The CNF conversion methods employed ensure their correctness and completeness. As a consequence, we get two decision procedures for non-CNF formulas (i) which can exploit all the present and future sophisticated technology of current DP implementations, and (ii) whose space of assignments to be searched is limited in size by the number of variables in the original input formula. In [11], it is shown that limiting the splitting step to a subset of the set of variables (the truth values of the others being consequentially determined) can lead to significant speed-ups.
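A minimal sketch of the idea, assuming the non-original variables are Tseitin-style definitions that unit propagation resolves once the original variables are fixed; this is an illustrative reconstruction, not the paper's two procedures.

```python
# Illustrative DPLL variant that splits only on a designated set of
# "original" variables; the remaining variables are assumed to be
# Tseitin-style definitions decided by unit propagation.

def propagate(clauses, assign):
    """Simplify under `assign`, applying unit resolution to a fixed point.
    Returns (clauses, assign) or None on a conflict."""
    assign = dict(assign)
    while True:
        simplified, unit = [], None
        for c in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in c):
                continue                          # clause already satisfied
            c = [l for l in c if abs(l) not in assign]
            if not c:
                return None                       # empty clause: conflict
            if len(c) == 1:
                unit = c[0]
            simplified.append(c)
        clauses = simplified
        if unit is None:
            return clauses, assign
        assign[abs(unit)] = unit > 0              # commit the forced literal

def dpll_split_on(clauses, branch_vars, assign=None):
    r = propagate(clauses, assign or {})
    if r is None:
        return None
    clauses, assign = r
    if not clauses:
        return assign                             # every clause satisfied
    v = next((v for v in branch_vars if v not in assign), None)
    if v is None:
        return None   # under the Tseitin assumption this branch is dead
    for val in (True, False):
        m = dpll_split_on(clauses, branch_vars, {**assign, v: val})
        if m is not None:
            return m
    return None

print(dpll_split_on([[1, 2], [-1, 2]], branch_vars=[1, 2]))  # {1: True, 2: True}
```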

40 citations


Book ChapterDOI
Laurent Juban1
30 Aug 1999
TL;DR: This paper presents a dichotomy theorem for the unique satisfiability problem, partitioning the instances of the problem between the polynomial-time solvable and coNP-hard cases, and notices that the additional knowledge of a model makes this problem coNP-complete.

Abstract: The unique satisfiability problem, which asks whether there exists a unique solution to a given propositional formula, has been extensively studied in recent years. This paper presents a dichotomy theorem for the unique satisfiability problem, partitioning the instances of the problem between the polynomial-time solvable and coNP-hard cases. We notice that the additional knowledge of a model makes this problem coNP-complete. We compare the polynomial cases of unique satisfiability to the polynomial cases of the usual satisfiability problem and show that they are incomparable. This difference between the polynomial cases is partially due to the necessity to apply parsimonious reductions among the unique satisfiability problems to preserve the number of solutions. In particular, we notice that the unique not-all-equal satisfiability problem, where we ask whether there is a unique model such that each clause has at least one true literal and one false literal, is solvable in polynomial time.
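For small instances, unique satisfiability can be checked directly against its definition; the exponential sketch below is only a reference specification, not one of the paper's polynomial cases.

```python
from itertools import product

# Exponential reference check for unique satisfiability: count models,
# stopping as soon as a second one is found.

def unique_sat(clauses, n_vars):
    count = 0
    for bits in product((False, True), repeat=n_vars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            count += 1
            if count > 1:
                return False           # at least two models: not unique
    return count == 1

# x1 and (x1 -> x2): the only model is (True, True).
print(unique_sat([[1], [-1, 2]], 2))   # True
```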

36 citations


Journal ArticleDOI
TL;DR: The structure of functional dependencies that hold in a Horn theory is studied, showing that every such functional dependency is in fact a single positive term Boolean function, and it is proved that for any Horn theory the set of its minimal functional dependencies is quasi-acyclic.

30 citations


Book ChapterDOI
14 Sep 1999
TL;DR: This paper defines a mapping between a SAT instance and a Boolean network (BN), proves that BN fixed points correspond to the SAT solutions, and provides a general framework for local search algorithms.
Abstract: In this paper we present a new approach to solving the satisfiability problem (SAT), based on boolean networks (BN). We define a mapping between a SAT instance and a BN, and we solve the SAT problem by simulating the BN dynamics. We prove that BN fixed points correspond to the SAT solutions. The mapping presented allows the development of a new class of algorithms to solve SAT. Moreover, this new approach suggests new ways to combine symbolic and connectionist computation and provides a general framework for local search algorithms.
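The paper's exact BN construction is not reproduced here; the following sketch captures the spirit of the fixed-point view with a WalkSAT-like dynamics, where an assignment is a fixed point exactly when no clause is unsatisfied, i.e. exactly when it is a solution.

```python
import random

# WalkSAT-like dynamics sketch (not the paper's exact BN mapping).
# States are assignments; the update flips one variable of an unsatisfied
# clause; a state is a fixed point iff every clause is satisfied.

def step(clauses, state, rng):
    unsat = [c for c in clauses
             if not any(state[abs(l)] == (l > 0) for l in c)]
    if not unsat:
        return state, True                  # fixed point: a SAT solution
    v = abs(rng.choice(rng.choice(unsat)))  # a variable of a bad clause
    state = dict(state)
    state[v] = not state[v]                 # flip it
    return state, False

def simulate(clauses, n_vars, max_steps=10_000, seed=0):
    rng = random.Random(seed)
    state = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_steps):
        state, fixed = step(clauses, state, rng)
        if fixed:
            return state                    # provably satisfies every clause
    return None                             # incomplete: give up

# (x1 or x2) with x1 <-> x2 forces both variables to True.
print(simulate([[1, 2], [-1, 2], [-2, 1]], 2))  # {1: True, 2: True}
```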

Book ChapterDOI
11 Jul 1999
TL;DR: The decision version of the Maximum Satisfiability (MaxSat) problem, which has several applications, is studied, and an algorithm running in time O(|F|·1.3995^k) is presented, along with the fastest known algorithms in the number of clauses and in the length of the formula.
Abstract: Given a boolean formula F in conjunctive normal form and an integer k, is there a truth assignment satisfying at least k clauses? This is the decision version of the Maximum Satisfiability (MaxSat) problem we study in this paper. We improve upper bounds on the worst-case running time for MaxSat. First, Cai and Chen showed that MaxSat can be solved in time |F|·2^O(k) when the clause size is bounded by a constant. Imposing no restrictions on clause size, Mahajan and Raman and, independently, Dantsin et al. improved this to O(|F|·φ^k), where φ ≈ 1.6181 is the golden ratio. We present an algorithm running in time O(|F|·1.3995^k). The result extends to finding an optimal assignment and has several applications, in particular for parameterized complexity and approximation algorithms. Moreover, if F has K clauses, we can find an optimal assignment in O(|F|·1.3972^K) steps and in O(1.1279^|F|) steps, respectively. These are the fastest algorithms in the number of clauses and in the length of the formula, respectively.
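The refined case analysis behind the 1.3995^k bound is beyond a short sketch; the following plain two-way branching only shows the decision question being solved, with an exponential worst case.

```python
# Plain two-way branching for MaxSat (exponential; the paper's refined
# case analysis is what achieves the O(|F|·1.3995^k) bound).

def max_sat(clauses, assign=None):
    """Largest number of clauses satisfiable by extending `assign`."""
    assign = assign or {}
    satisfied, undecided = 0, []
    for c in clauses:
        if any(assign.get(abs(l)) == (l > 0) for l in c):
            satisfied += 1                   # already satisfied
        elif any(abs(l) not in assign for l in c):
            undecided.append(c)              # still has a free variable
        # clauses falsified under `assign` contribute nothing
    if not undecided:
        return satisfied
    v = next(abs(l) for l in undecided[0] if abs(l) not in assign)
    return satisfied + max(
        max_sat(undecided, {**assign, v: True}),
        max_sat(undecided, {**assign, v: False}))

clauses = [[1], [-1], [1, 2]]     # at most 2 of these 3 can hold together
print(max_sat(clauses) >= 2)      # the k = 2 decision question: True
```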

Journal ArticleDOI
TL;DR: For almost every r-CNF formula, when it is satisfiable, the proportion of variables which must be assigned a value by such procedures, in order to find a solution, is at least (α_{m_r(c)}(1 - e^{-rc})) - ε.

Journal Article
TL;DR: The main elements of the algorithm, including the branch/merge rule inspired by an algorithm proposed by Stålmarck, are discussed and its remarkable effectiveness is illustrated with some examples and computational results.

Abstract: HeerHugo is a propositional formula checker that determines whether a given formula is satisfiable or not. Its main ingredient is the branch/merge rule, which is inspired by an algorithm proposed by Stålmarck that is protected by a software patent. The algorithm can be interpreted as a breadth-first search algorithm. HeerHugo differs substantially from Stålmarck's algorithm, as it operates on formulas in conjunctive normal form and is enhanced with many logical rules, including unit resolution, 2-satisfiability tests and additional systematic reasoning techniques. In this paper, the main elements of the algorithm are discussed, and its remarkable effectiveness is illustrated with some examples and computational results.
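A minimal sketch of the branch/merge (dilemma) idea: propagate under both values of a variable and keep only the conclusions common to the two branches. The unit-resolution helper and the clause encoding are illustrative, not HeerHugo's implementation.

```python
# Illustrative branch/merge (dilemma) step, not HeerHugo's implementation.

def units(clauses, assign):
    """Unit-resolution closure of `assign`; None signals a conflict."""
    assign = dict(assign)
    changed = True
    while changed:
        changed = False
        for c in clauses:
            if any(assign.get(abs(l)) == (l > 0) for l in c):
                continue                         # clause satisfied
            free = [l for l in c if abs(l) not in assign]
            if not free:
                return None                      # clause falsified
            if len(free) == 1:
                assign[abs(free[0])] = free[0] > 0
                changed = True
    return assign

def branch_merge(clauses, assign, v):
    """Try v both ways, propagate each branch, keep common conclusions."""
    pos = units(clauses, {**assign, v: True})
    neg = units(clauses, {**assign, v: False})
    if pos is None:
        return neg                               # v is forced to False
    if neg is None:
        return pos                               # v is forced to True
    return {w: b for w, b in pos.items() if neg.get(w) == b}

# (v -> x) and (~v -> x): both branches force x, so the merge learns x.
print(branch_merge([[-1, 2], [1, 2]], {}, 1))    # {2: True}
```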

Book ChapterDOI
TL;DR: Using the “compilability framework,” it is shown that preconditions in conjunctive normal form add to the expressive power of propositional STRIPS, which confirms a conjecture by Bäckström.
Abstract: While there seems to be a general consensus about the expressive power of a number of language features in planning formalisms, one can find many different statements about the expressive power of disjunctive preconditions. Using the “compilability framework,” we show that preconditions in conjunctive normal form add to the expressive power of propositional STRIPS, which confirms a conjecture by Bäckström. Further, we show that preconditions in conjunctive normal form do not add any expressive power once we have conditional effects.

Journal Article
TL;DR: The Modoc analysis yields a worst-case upper bound that is not as strong as the best known upper bound for model-searching satisfiability methods on general propositional CNF, but it is the first time a nontrivial upper bound on non-Horn formulas has been shown for any resolution-based refutation procedure.

Journal ArticleDOI
TL;DR: In this paper, the authors present a worst-case analysis of Modoc as a function of the number of propositional variables in the formula; this is the first time a nontrivial upper bound on non-Horn formulas has been shown for any resolution-based refutation procedure.

Journal ArticleDOI
TL;DR: An efficient recursive algorithm is presented to compute the set of prime implicants of a propositional formula in conjunctive normal form (CNF), and it is shown that the number of subsumption operations is reduced in the proposed algorithm.
Abstract: In this paper, an efficient recursive algorithm is presented to compute the set of prime implicants of a propositional formula in conjunctive normal form (CNF). The propositional formula is represented as a (0,1)-matrix, and sets of 1’s across its columns are termed paths. The algorithm finds the prime implicants as the prime paths in the matrix using the divide-and-conquer technique. The algorithm is based on the principle that a prime implicant of a formula is the concatenation of prime implicants of two of its subformulae. The sets of prime paths containing a specific literal, and those devoid of a literal, are characterized. Based on this characterization, the formula is recursively divided into subformulae to employ the divide-and-conquer paradigm. The prime paths of the subformulae are then concatenated to obtain the prime paths of the formula. In this process, the number of subsumption operations is reduced. It is also shown that the earlier algorithm based on prime paths has some avoidable computations that the proposed algorithm avoids. Besides being more efficient, the proposed algorithm has the additional advantage of being suitable for the incremental method, without recomputing prime paths for the updated formula. The subsumption operation is one of the crucial operations for any such algorithm, and it is shown that the number of subsumption operations is reduced in the proposed algorithm. Experimental results are presented to substantiate that the proposed algorithm is more efficient than the existing algorithms.
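For contrast with the prime-path algorithm, a naive baseline computes prime implicants by enumerating candidate terms in order of size and testing implication by exhaustion; this is only a reference specification for small inputs.

```python
from itertools import combinations, product

# Naive prime-implicant computation by exhaustive implication tests;
# a reference specification only, in contrast to the prime-path algorithm.

def implies(term, clauses, n_vars):
    """Does every assignment extending `term` satisfy the CNF?"""
    fixed = {abs(l): l > 0 for l in term}
    free = [v for v in range(1, n_vars + 1) if v not in fixed]
    for bits in product((False, True), repeat=len(free)):
        a = {**fixed, **dict(zip(free, bits))}
        if not all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses):
            return False
    return True

def prime_implicants(clauses, n_vars):
    lits = list(range(1, n_vars + 1)) + [-v for v in range(1, n_vars + 1)]
    primes = []
    for size in range(1, n_vars + 1):
        for term in combinations(lits, size):
            if len({abs(l) for l in term}) < size:
                continue                 # skip terms mentioning a var twice
            if any(set(p) <= set(term) for p in primes):
                continue                 # a smaller implicant subsumes it
            if implies(term, clauses, n_vars):
                primes.append(term)      # minimal by construction: prime
    return primes

# (x1 or x2): the prime implicants are x1 and x2.
print(prime_implicants([[1, 2]], 2))     # [(1,), (2,)]
```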

Journal ArticleDOI
TL;DR: This paper proposes new algorithms for the generation of a GLB and gives a precise characterization of the computational complexity of the problem of generating such lower bounds, thus addressing in a formal way the question “how many queries are needed to amortize the overhead of compilation?”
Abstract: Propositional greatest lower bounds (GLBs) are logically defined approximations of a knowledge base. They were defined in the context of Knowledge Compilation, a technique developed for addressing the high computational cost of logical inference. A GLB allows for polynomial-time complete on-line reasoning, although soundness is not guaranteed. In this paper we propose new algorithms for the generation of a GLB. Furthermore, we give a precise characterization of the computational complexity of the problem of generating such lower bounds, thus addressing in a formal way the question “how many queries are needed to amortize the overhead of compilation?”

Journal ArticleDOI
01 Sep 1999
TL;DR: In this paper, a general method for approximating the number of solutions of a boolean formula in conjunctive normal form F is proposed, based on cutting a seriation established on an incidence data table associated with F. This method considerably reduces the computational complexity.
Abstract: We propose here a general method for approximating the number of solutions of a boolean formula in conjunctive normal form F. By applying the divide-and-conquer principle, this method considerably reduces the computational complexity. It is based on cutting a seriation established on an incidence data table associated with F. Moreover, the independence probability concept is finely exploited. Theoretical justification and intensive experimentation validate the proposed method.
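As a point of comparison (not the seriation-based method itself), the naive Monte Carlo estimator multiplies the empirical satisfaction rate of uniform random assignments by 2^n; it degrades badly when solutions are rare.

```python
import random

# Naive Monte Carlo model-count estimate: 2^n times the empirical
# satisfaction rate of uniform random assignments. Unlike the seriation
# method above, it is unusable when satisfying assignments are rare.

def estimate_models(clauses, n_vars, samples=100_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        bits = [rng.random() < 0.5 for _ in range(n_vars)]
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            hits += 1
    return hits / samples * 2 ** n_vars

# (x1 or x2) has exactly 3 models over 2 variables.
print(round(estimate_models([[1, 2]], 2)))   # close to 3
```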

Journal ArticleDOI
TL;DR: It is shown that – under the assumption NP≠coNP – it is impossible to transform a formula into a logically equivalent formula by adding polynomially many clauses such that it can be decided in polynomial time whether a clause is a consequence of the enlarged formula.

Proceedings Article
18 Jul 1999
TL;DR: A new approach for solving first-order predicate logic problems stated in conjunctive normal form is presented, combining resolution with the Constraint Satisfaction Problem (CSP) paradigm to prove inconsistency or to find a model of a problem.

Abstract: The purpose of this paper is to present a new approach for solving first-order predicate logic problems stated in conjunctive normal form. We propose to combine resolution with the Constraint Satisfaction Problem (CSP) paradigm to prove the inconsistency of a problem or to find a model of it. The resulting method benefits from both resolution and constraint satisfaction techniques and seems very efficient when confronted with some problems from the CADE-13 competition.

Book ChapterDOI
01 Jan 1999
TL;DR: The decision problem (or the “Entscheidungsproblem”) of first-order logic can be traced back to the early years of the 20th century; around 1920 Hilbert formulated the problem of finding an algorithm which decides the validity of formulas in first-order predicate logic.
Abstract: The decision problem (or the “Entscheidungsproblem”) of first-order logic can be traced back to the early years of the 20th century. Around 1920, Hilbert formulated the problem of finding an algorithm which decides the validity of formulas in first-order predicate logic (see, e.g., [11]). He called this decision problem the “fundamental problem of mathematical logic”. Indeed, in some informal sense, the problem is even older than modern symbolic logic. G. W. Leibniz already formulated the vision of a calculus ratiocinator [16], which would allow arbitrary problems to be settled by purely mechanical computation, once they had been translated into an adequate formalism.

Journal ArticleDOI
TL;DR: A Linear Programming formulation is considered for the satisfiability problem and the use of Recurrent Neural Networks is described for choosing the best pivot positions and greatly improving the algorithm performance.
Abstract: First, a Linear Programming formulation is considered for the satisfiability problem, in particular for the satisfaction of a Conjunctive Normal Form in the Propositional Calculus, together with the Simplex algorithm for solving the optimization problem. The use of Recurrent Neural Networks is then described for choosing the best pivot positions, greatly improving the algorithm's performance. Results of testing on hard cases are reported and show that the technique can be useful, even though it requires a large amount of storage for the constraint array and the Neural Network input data.
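A sketch of the LP side only (no neural pivot selection), using scipy.optimize.linprog as an assumed stand-in for the paper's own Simplex implementation; each clause contributes one linear constraint, and feasibility of the relaxation is necessary but not sufficient for satisfiability.

```python
import numpy as np
from scipy.optimize import linprog   # assumed available; not the paper's code

# LP relaxation of CNF satisfiability: each clause demands
#   sum(positive vars) + sum(1 - negated vars) >= 1,  with 0 <= x <= 1,
# rewritten below for linprog's  A_ub @ x <= b_ub  convention.

def cnf_lp_feasible(clauses, n_vars):
    A_ub, b_ub = [], []
    for cl in clauses:
        row, negs = np.zeros(n_vars), 0
        for l in cl:
            if l > 0:
                row[l - 1] -= 1.0        # -x_i on the <= side
            else:
                row[-l - 1] += 1.0       # +x_j on the <= side
                negs += 1
        A_ub.append(row)
        b_ub.append(negs - 1)            # from  sum >= 1  after negation
    res = linprog(c=np.zeros(n_vars), A_ub=np.array(A_ub),
                  b_ub=np.array(b_ub), bounds=[(0, 1)] * n_vars)
    return res.success                   # is the relaxation feasible?

# Necessary, not sufficient: fractional solutions can mask unsatisfiability.
print(cnf_lp_feasible([[1, 2], [-1, 2]], 2))   # True, e.g. x = (0.5, 1.0)
```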

Book ChapterDOI
06 Sep 1999
TL;DR: It is shown that every satisfiable and finite set of guarded Horn clauses S can be transformed into a finite set of primitive guarded Horn clauses S′ such that the least Herbrand models of S and S′ coincide on predicate symbols that occur in S.
Abstract: The guarded fragment of first-order logic, defined in [1], has attracted much attention recently due to the fact that it is decidable and several interesting modal logics can be translated into it. Guarded clauses, defined by de Nivelle in [7], are a generalization of guarded formulas in clausal form. In [7], it is shown that the class of guarded clause sets is decidable by saturation under ordered resolution. In this work, we deal with guarded clauses that are Horn clauses. We introduce the notion of a primitive guarded Horn clause: a guarded Horn clause is primitive iff it is either ground and its body is empty, or it contains exactly one body literal which is flat and linear, and its head literal contains a non-ground functional term. Then, we show that every satisfiable and finite set of guarded Horn clauses S can be transformed into a finite set of primitive guarded Horn clauses S′ such that the least Herbrand models of S and S′ coincide on the predicate symbols that occur in S. This transformation is done in the following way: first, de Nivelle's saturation procedure is applied to the given set S, and certain clauses are extracted from the resulting set. Then, a resolution-based technique that introduces new predicate symbols is used in order to obtain the set S′. Our motivation for the presented method is automated model building.

Book ChapterDOI
01 Jan 1999
TL;DR: The mathematical simulation of cause-and-effect relations (CE relations) in the system climate-ocean-sediments considered in this chapter is based on the ideas of the English logician J. S. Mill, who made the first known attempt to apply tools of logic to the detection of CE relations between phenomena described by propositions.
Abstract: The mathematical simulation of cause-and-effect relations (CE relations) in the system climate-ocean-sediments considered in this chapter is based on the ideas of the English logician J. S. Mill [1], proposed in the 19th century. He made the first known attempt to apply tools of logic to the detection of CE relations between phenomena described by propositions. However, he considered only the simplest cases, with only one cause and one effect. Unlike Mill’s investigations, the cause-and-effect simulation (CE simulation) suggested here uses the concepts of plurality and interaction, which have arisen from the study of complicated geological objects. Additionally, tools of formal logic, modern mathematical logic [2, 6] and computer processing of data [3, 4] are employed for the simulation and detection of CE relations.