scispace - formally typeset

Showing papers on "Conjunctive normal form" published in 1998


Journal ArticleDOI
TL;DR: Preliminary computational experience with a technique that transforms any binary programming problem with integral coefficients into a propositional satisfiability problem in linear time shows that a pure logical solver can be a valuable tool for solving binary programming problems.

132 citations


Proceedings ArticleDOI
08 Nov 1998
TL;DR: It is shown that, for each k, the running time of ResolveSat on a k-CNF formula is significantly better than 2^n, even in the worst case, and that the idea of succinctly encoding satisfying solutions can be applied to obtain lower bounds on circuit size.
Abstract: We propose and analyze a simple new algorithm for finding satisfying assignments of Boolean formulae in conjunctive normal form. The algorithm, ResolveSat, is a randomized variant of the DLL procedure of M. Davis et al. (1962), also known as the Davis-Putnam procedure. Rather than applying the DLL procedure to the input formula F directly, however, ResolveSat enlarges F by adding clauses derived by limited resolution before performing DLL. The basic idea behind our analysis is the same as in R. Paturi (1997): a critical clause for a variable at a satisfying assignment gives rise to a unit clause in the DLL procedure with sufficiently high probability, thus increasing the probability of finding a satisfying assignment. In the current paper, we analyze the effect of multiple critical clauses (obtained through resolution) in producing unit clauses. We show that, for each k, the running time of ResolveSat on a k-CNF formula is significantly better than 2^n, even in the worst case. In particular, we show that the algorithm finds a satisfying assignment of a general 3-CNF in time O(2^(0.446n)) with high probability, where the best previous algorithm has running time O(2^(0.582n)). We obtain a better upper bound of O(2^((2 ln 2 - 1)n + o(n))) = O(2^(0.387n)) for 3-CNF formulae that have at most one satisfying assignment (unique k-SAT). For each k, the bounds for general k-CNF are the best known for the worst-case complexity of finding a satisfying solution for k-SAT. Furthermore, the idea of succinctly encoding satisfying solutions can be applied to obtain lower bounds on circuit size: we exhibit a function f such that any depth-3 AND-OR circuit with bottom fan-in bounded by k requires Ω(2^(c_k·n/k)) gates, with c_k > 1. This is the first such lower bound with c_k > 1.
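For readers unfamiliar with the DLL backbone that ResolveSat extends, a minimal Python sketch may help. It shows plain DLL with unit propagation only and omits the resolution-based preprocessing and randomization that the paper adds; the DIMACS-style integer encoding of literals is an assumption of this illustration, not the authors' implementation.

```python
def simplify(clauses, lit):
    """Assign literal `lit` true: drop satisfied clauses, delete -lit elsewhere."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied, drop it
        out.append(c - {-lit})            # falsified literal removed
    return out

def dpll(clauses, assignment=frozenset()):
    """DLL search over clauses given as frozensets of integer literals
    (v means x_v, -v its negation). Returns a satisfying set of literals
    or None if the formula is unsatisfiable."""
    # Unit propagation: a one-literal clause forces that literal.
    while True:
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment |= {unit}
        clauses = simplify(clauses, unit)
    if frozenset() in clauses:            # empty clause: conflict
        return None
    if not clauses:                       # every clause satisfied
        return assignment
    lit = next(iter(clauses[0]))          # branch on a literal of the first clause
    for choice in (lit, -lit):
        res = dpll(simplify(clauses, choice), assignment | {choice})
        if res is not None:
            return res
    return None
```

The returned assignment may be partial: every clause is guaranteed to contain at least one literal set true, which is all that satisfiability requires.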

132 citations


Journal ArticleDOI
TL;DR: It is shown that no formula with at most n clauses is minimal unsatisfiable, and that for n+1 clauses the minimal unsatisfiability problem is solvable in quadratic time.
Abstract: We consider the minimal unsatisfiability problem for propositional formulas over n variables with n+k clauses for fixed k. We show that no formula with at most n clauses is minimal unsatisfiable. For n+1 clauses the minimal unsatisfiability problem is solvable in quadratic time. Further, we present a characterization of minimal unsatisfiable formulas with n+1 clauses in terms of a certain form of matrices.
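To make the statement concrete, here is a small brute-force checker (an illustrative sketch, not the paper's quadratic-time algorithm) together with a minimal unsatisfiable formula over n = 2 variables with n + 1 = 3 clauses:

```python
from itertools import product

def satisfiable(clauses, variables):
    """Exhaustive SAT test; clauses are sets of integer literals."""
    for bits in product([False, True], repeat=len(variables)):
        val = dict(zip(variables, bits))
        if all(any(val[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def minimal_unsatisfiable(clauses):
    """Unsatisfiable, yet satisfiable after removing any single clause."""
    variables = sorted({abs(l) for c in clauses for l in c})
    if satisfiable(clauses, variables):
        return False
    return all(satisfiable(clauses[:i] + clauses[i + 1:], variables)
               for i in range(len(clauses)))

# (x1) and (x2) and (not x1 or not x2): unsatisfiable, but every
# 2-clause subset is satisfiable.
F = [{1}, {2}, {-1, -2}]
```

Removing any one of the three clauses leaves a satisfiable formula, matching the theorem's boundary case of n + 1 clauses.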

108 citations


Journal ArticleDOI
01 May 1998
TL;DR: This work identifies two classes of Boolean functions that have been used in dependency analyses, positive and definite functions, systematically investigates their efficient implementation, and shows that both classes are closed under existential quantification.
Abstract: Many static analyses for declarative programming/database languages use Boolean functions to express dependencies among variables or argument positions. Examples include groundness analysis, arguably the most important analysis for logic programs, finiteness analysis and functional dependency analysis for databases. We identify two classes of Boolean functions that have been used: positive and definite functions, and we systematically investigate these classes and their efficient implementation for dependency analyses. On the theoretical side, we provide syntactic characterizations and study the expressiveness and algebraic properties of the classes. In particular, we show that both are closed under existential quantification. On the practical side, we investigate various representations for the classes based on reduced ordered binary decision diagrams (ROBDDs), disjunctive normal form, conjunctive normal form, Blake canonical form, dual Blake canonical form, and a form specific to definite functions. We compare the resulting implementations of groundness analyzers based on the representations for precision and efficiency.
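The closure under existential quantification mentioned above can be illustrated with a naive model-set representation. This is purely illustrative; the paper's implementations use ROBDDs and the listed normal forms instead, and the encoding below is an assumption of this sketch.

```python
from itertools import product

def models(vars, f):
    """A Boolean function as the set of its models, each model being the
    frozenset of variables assigned True."""
    return {frozenset(v for v, b in zip(vars, bits) if b)
            for bits in product([False, True], repeat=len(vars))
            if f(dict(zip(vars, bits)))}

def exists(var, ms):
    """Existential quantification: project every model onto the remaining
    variables."""
    return {m - {var} for m in ms}

V = ('x', 'y', 'z')
# The groundness dependency x <-> (y & z) is a definite function: its model
# set contains the all-true model and is closed under intersection.
dep = models(V, lambda a: a['x'] == (a['y'] and a['z']))
```

Quantifying x away yields the constant-true function over y and z, again a definite function, as the closure result predicts.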

105 citations


Book ChapterDOI
04 Nov 1998
TL;DR: The proof system underlying Stålmarck's proof procedure for classical propositional logic is presented, and the various design decisions that have resulted in a system that copes well with the large formulas encountered in industrial-scale verification are motivated.
Abstract: We explain Stålmarck's proof procedure for classical propositional logic. The method is implemented in a commercial tool that has been used successfully in real industrial verification projects. Here, we present the proof system underlying the method, and motivate the various design decisions that have resulted in a system that copes well with the large formulas encountered in industrial-scale verification. We also discuss possible applications in Computer Aided Design of electronic circuits.

74 citations


Book ChapterDOI
01 Jan 1998
TL;DR: In the Maximum Satisfiability (MAX-SAT) problem one is given a Boolean formula in conjunctive normal form, i.e., as a conjunction of clauses, each clause being a disjunction; the task is to find an assignment of truth values to the variables that satisfies the maximum number of clauses.
Abstract: In the Maximum Satisfiability (MAX-SAT) problem one is given a Boolean formula in conjunctive normal form, i.e., as a conjunction of clauses, each clause being a disjunction. The task is to find an assignment of truth values to the variables that satisfies the maximum number of clauses.
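The problem statement translates directly into a brute-force reference implementation (exponential in the number of variables, for illustration only; the signed-integer encoding of literals is an assumption of this sketch):

```python
from itertools import product

def max_sat(clauses):
    """Return the maximum number of simultaneously satisfiable clauses.
    A clause is a set of integer literals: v means x_v, -v its negation."""
    variables = sorted({abs(l) for c in clauses for l in c})
    best = 0
    for bits in product([False, True], repeat=len(variables)):
        val = dict(zip(variables, bits))
        satisfied = sum(any(val[abs(l)] == (l > 0) for l in c) for c in clauses)
        best = max(best, satisfied)
    return best

# (x1) & (!x1) & (x1 | x2): the first two clauses conflict, so at most
# two of the three clauses can be satisfied at once.
```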

57 citations


01 Jan 1998
TL;DR: A preliminary design of a compiler that overcomes the memory-exhaustion problem of naive CNF translation of relational specifications by exploiting typical features of the relational formulae that arise in practice.
Abstract: A new method for analyzing relational specifications is described. A property to be checked is cast as a relational formula, which, if the property holds, has no finite models. The relational formula is translated into a boolean formula that has a model for every model of the relational formula within some finite scope. Errors in specifications can usually be demonstrated with small counterexamples, so a small scope often suffices. The boolean formula is solved by an off-the-shelf satisfier. The satisfier requires that the boolean formula be in conjunctive normal form (CNF). A naive translation to CNF fails (by exhausting memory) for realistic specifications. This paper presents a preliminary design of a compiler that overcomes this problem, by exploiting typical features of the relational formulae that arise in practice. Initial experiments suggest that this method scales more readily than existing approaches and will be able to find more errors, in larger specifications.

9 citations


Book ChapterDOI
16 Sep 1998
TL;DR: It is argued that efficient automated reasoning techniques which utilize definite formula representation of knowledge (such as SLD-resolution) can be developed for classical and a variety of non-classical logics.
Abstract: In this paper we propose a non-clausal representational formalism (of definite formulas) that retains the syntactic flavor and algorithmic advantages of Horn clauses. The notion of a definite formula is generic in the sense that it is available to any logical calculus. We argue that efficient automated reasoning techniques which utilize definite formula representation of knowledge (such as SLD-resolution) can be developed for classical and a variety of non-classical logics.

5 citations


Book ChapterDOI
05 Jul 1998
TL;DR: In this paper, the authors focus on two powerful techniques to obtain compact clause normal forms: Renaming of formulae and refined Skolemization methods, and illustrate their effect on various examples.
Abstract: In this paper we focus on two powerful techniques to obtain compact clause normal forms: renaming of formulae and refined Skolemization methods. We illustrate their effect on various examples. An exhaustive experiment over all first-order TPTP problems shows that our clause normal form transformation yields fewer clauses and fewer literals than the methods known and used so far. This often allows for exponentially shorter proofs and, in some cases, it makes it even possible for a theorem prover to find a proof where it was unable to do so with more standard clause normal form transformations.
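The effect of renaming can be seen on the classic example (a1 & b1) | ... | (an & bn): direct distribution produces 2^n clauses, while introducing one fresh literal per disjunct keeps the CNF linear. Below is a propositional sketch; the fresh-literal bookkeeping is an assumption of this illustration, not the paper's refined first-order method.

```python
from itertools import product

def cnf_by_distribution(pairs):
    """Direct CNF of (a1 & b1) | ... | (an & bn): 2^n clauses of length n."""
    return [frozenset(choice) for choice in product(*pairs)]

def cnf_by_renaming(pairs, fresh):
    """Rename each conjunct ai & bi by a fresh literal di; one implication
    direction (di -> ai & bi) suffices here, giving 2n + 1 small clauses."""
    clauses = [frozenset(fresh)]                    # d1 | ... | dn
    for d, (a, b) in zip(fresh, pairs):
        clauses += [frozenset({-d, a}), frozenset({-d, b})]
    return clauses

pairs = [(1, 2), (3, 4), (5, 6)]    # three conjuncts over literals 1..6
fresh = [7, 8, 9]                   # fresh renaming literals
```

For n = 3 the distributed form already has 8 clauses against 7 for the renamed form, and the gap grows exponentially with n.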

2 citations


Book ChapterDOI
01 Jan 1998
TL;DR: This chapter surveys methods for introducing and preserving structure in automated deduction; the methods are extension methods in the sense that either the syntax is extended by predicate or function symbols, or formulae are introduced that do not fulfill the (strict) subformula property.
Abstract: Calculi of first-order logic were developed for many different purposes, e.g., (i) for reconstructing mathematical proofs in a formal framework or (ii) for automated proof search. The old and “traditional” calculi, like Hilbert-type and Gentzen-type calculi, belong to the first category. Either they serve as a framework for a theory of provability, where not actual proofs but only their existence is of relevance, or they are used as instruments for proof transformations (e.g., cut-elimination in the sequent calculus). Specific rules like cut and modus ponens serve as proof building tools, which allow lemmata to be combined into more complex proofs. The substitution rule (or elimination rule for quantifiers) is mostly formulated as a unary rule which can be applied independently of others. As the discipline of automated theorem proving evolved in the early sixties, the cut-rule and the unrestricted substitution rule were clearly defective features in a discipline of proof search. The first attempts to use Herbrand’s theorem directly and to reduce a first-order formula to a propositional one failed because of tremendous complexity. The paper of J. A. Robinson on the resolution principle (Robinson, 1965) then brought the decisive breakthrough: resolution was the first first-order calculus with a binary and minimal substitution principle, the well-known unification principle. Moreover, it works on quantifier-free conjunctive normal forms (clausal forms) which are logic-free (clauses can be represented as sequents of atoms). Resolution uses an atomic cut-rule in combination with the unification principle which allows for most general substitutions only. Thus, the key feature of resolution is minimality. Due to this minimality there are only finitely many deductions within a fixed depth, a property we call local finiteness. The price paid for this minimality is a loss of structure and an increase of proof complexity.
The high proof complexity of computational calculi (like resolution, tableau calculi and connection calculi) forms a serious barrier to proof search for more complex theorems. The question arises whether it is possible to combine the strong structural potential of the traditional logic calculi with the economical search features of computational calculi. To this end, it is necessary to give up the strict minimality of the computational calculus without, at the same time, allowing the creation of arbitrary structures, signatures or lemmata. It is the purpose of this chapter to give a survey of different methods for introducing and preserving structure in automated deduction. The inference methods we present are extension methods in the sense that either the syntax is extended by predicate or function symbols, or formulae are introduced which do not fulfill the (strict) subformula property. But all the rules we present here are computational in the sense that they are locally finite.

2 citations


Book ChapterDOI
05 Jul 1998
TL;DR: The validity problem for equational formulae in the empty theory, with equality interpreted as syntactic equality over the Herbrand universe (the finite tree algebra), has been proven to be decidable.
Abstract: Equational formulae are first-order formulae containing only "=" as a predicate symbol. A substitution σ is a solution of an equational formula Ψ iff Ψσ is valid in the finite tree algebra (i.e. when = is interpreted as the syntactic equality on the Herbrand universe). Equational formulae have many applications in the domains of Automated Deduction, Artificial Intelligence and Computer Science (program verification, negation in logic programming [5,13,3], inductive proofs [6], model building [4,1,19], etc.). They have been studied by many authors for several years. In particular, the validity problem for equational formulae in the empty theory has been proven to be decidable [15,12,14,9].

Journal ArticleDOI
TL;DR: A category of new backtracking tactics for backtrack methods for the SAT problem is proposed; these tactics improve on the standard depth-first backtracking tactic.