
GRASP: a search algorithm for propositional satisfiability

01 May 1999-IEEE Transactions on Computers (IEEE Computer Society)-Vol. 48, Iss: 5, pp 506-521
TL;DR: Experimental results obtained from a large number of benchmarks indicate that application of the proposed conflict analysis techniques to SAT algorithms can be extremely effective for a large number of representative classes of SAT instances.
Abstract: This paper introduces GRASP (Generic seaRch Algorithm for the Satisfiability Problem), a new search algorithm for Propositional Satisfiability (SAT). GRASP incorporates several search-pruning techniques that proved to be quite powerful on a wide variety of SAT problems. Some of these techniques are specific to SAT, whereas others are similar in spirit to approaches in other fields of Artificial Intelligence. GRASP is premised on the inevitability of conflicts during the search and its most distinguishing feature is the augmentation of basic backtracking search with a powerful conflict analysis procedure. Analyzing conflicts to determine their causes enables GRASP to backtrack nonchronologically to earlier levels in the search tree, potentially pruning large portions of the search space. In addition, by "recording" the causes of conflicts, GRASP can recognize and preempt the occurrence of similar conflicts later on in the search. Finally, straightforward bookkeeping of the causality chains leading up to conflicts allows GRASP to identify assignments that are necessary for a solution to be found. Experimental results obtained from a large number of benchmarks indicate that application of the proposed conflict analysis techniques to SAT algorithms can be extremely effective for a large number of representative classes of SAT instances.

Summary (5 min read)

1 Introduction

  • The Boolean satisfiability problem (SAT) appears in many contexts in the field of computer-aided design of integrated circuits, including automatic test pattern generation (ATPG), timing analysis, delay fault testing, and logic verification, to name just a few.
  • Over the years, many algorithmic solutions have been proposed for SAT, the most well-known being the different variations of the Davis-Putnam procedure [7].
  • Nevertheless, nonchronological backtracking techniques have been extensively studied and applied to different areas of Artificial Intelligence, particularly Truth Maintenance Systems (TMS) [9], [35], Constraint Satisfaction Problems (CSP) [8], [14], [15], [31], and Logic Programming [4], in some cases with very promising experimental results [8], [15].
  • By noting that conflicts arise when certain clauses are missing from the problem specification, GRASP views conflict occurrence as an opportunity to augment the problem description with such conflict-induced clauses.

2.1 Basic Definitions and Notation

  • The authors will refer to a CNF formula as a clause database and use "formula," "CNF formula," and "clause database" interchangeably.
  • A backtracking search algorithm for SAT is implemented by a search process that implicitly traverses the space of 2n possible binary assignments to the problem variables.
  • This last case can only happen when A is a partial assignment.
  • An assignment partitions the clauses of φ into three sets: satisfied clauses (evaluating to 1); unsatisfied clauses (evaluating to 0); and unresolved clauses (evaluating to X).
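The clause partition above, together with the unit-clause notion defined in the paper, can be illustrated with a minimal sketch (not GRASP code; clauses are lists of signed integers where 3 stands for x3 and -3 for ¬x3, and the assignment maps a variable index to 0/1):

```python
# Sketch: classify one clause under a partial assignment.
def clause_status(clause, assignment):
    """Return 'satisfied', 'unsatisfied', 'unit', or 'unresolved'."""
    free = []
    for lit in clause:
        var, want = abs(lit), 1 if lit > 0 else 0
        if var not in assignment:
            free.append(lit)                 # free literal
        elif assignment[var] == want:
            return "satisfied"               # some literal evaluates to 1
    if not free:
        return "unsatisfied"                 # every literal evaluates to 0
    return "unit" if len(free) == 1 else "unresolved"

# (x1 OR NOT x2) under the partial assignment {x2 = 1}: a unit clause.
print(clause_status([1, -2], {2: 1}))        # -> 'unit'
```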

2.2 Formula Satisfiability

  • Formula satisfiability is concerned with determining if a given formula φ is satisfiable and with identifying a satisfying assignment for it.
  • The search process iterates through the steps of: 1. Extending the current assignment by making a decision assignment to an unassigned variable.
  • When relevant to the context, the assignment notation introduced earlier may be extended to indicate the decision level at which the assignment occurred.
  • The implications that can be derived from a given partial assignment depend on the set of available clauses.

2.3 Function Satisfiability

  • Given an initial formula φ, a search system can attempt to augment it with additional implicates to increase the deductive power during the search process.
  • The approach considers the occurrence of a conflict, which is unavoidable for an unsatisfiable instance unless the formula is complete, as an opportunity to "learn from the mistake that led to the conflict" and introduces additional implicates to the clause database only when it stumbles.
  • Conflict diagnosis produces three distinct pieces of information that can help speed up the search: 1. New implicates that did not exist in the clause database and that can be identified with the occurrence of the conflict.
  • If that assignment was the most recent (i.e., at the current decision level), the opposite assignment (if it has not been tried) is immediately implied as a necessary consequence of the conflict; the authors refer to this as a failure-driven assertion (FDA).
  • If the conflict resulted from an earlier decision assignment (at a lower decision level), the search can backtrack to the corresponding level in the decision tree since the subtree rooted at that level corresponds to assignments that will yield the same conflict.

2.4 Structure of the Search Process

  • The basic mechanism for deriving implications from a given clause database is Boolean constraint propagation (BCP) [11], [39].
  • Such assignments are referred to as logical implications (implications, for short) and correspond to the application of the unit clause rule proposed by Davis and Putnam [7].
  • BCP refers to the iterated application of this rule to a clause database until the set of unit clauses becomes empty or one or more clauses become unsatisfied.
  • Note that the antecedent assignment of an electively assigned variable is empty.
  • The directed edges from the vertices in A^ω(x) to vertex x = ν(x) are all labeled with ω.

2.5 Search Algorithm Template

  • The general structure of the GRASP search algorithm is shown in Fig. 2. The recursive Search function consists of four major operations: 1. Decide, which chooses a decision assignment at each stage of the search process.
  • For the results given in Section 4, the following greedy heuristic is used: choose the variable and value assignment that directly satisfies the largest number of clauses (a sketch in code appears after this list).
  • For most of these heuristics, preference is given to assignments that simplify the clauses the most and can lead to more implications due to BCP.
  • The algorithm repeatedly applies the unit clause rule [7] while unit clauses exist.
  • If, on the other hand, a conflict arises due to this assignment, the Diagnose function is called to analyze this conflict and to determine an appropriate decision level for backtracking the search.
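The greedy decision heuristic mentioned above can be sketched roughly as follows (illustrative names and encodings only, not the authors' implementation; clauses are lists of signed integers and the assignment maps a variable index to 0/1):

```python
# Sketch: choose the unassigned (variable, value) pair that directly
# satisfies the largest number of clauses not yet satisfied.
def satisfied(clause, assignment):
    return any(assignment.get(abs(l)) == (1 if l > 0 else 0) for l in clause)

def decide(clauses, assignment, num_vars):
    open_clauses = [c for c in clauses if not satisfied(c, assignment)]
    best, best_count = None, -1
    for var in range(1, num_vars + 1):
        if var in assignment:
            continue
        for value in (1, 0):
            lit = var if value == 1 else -var
            count = sum(1 for c in open_clauses if lit in c)
            if count > best_count:
                best, best_count = (var, value), count
    return best            # None only when every variable is assigned
```

The point of this simple rule, as the bullet above notes, is to satisfy clauses directly rather than to maximize BCP implications.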

3 Conflict Analysis Procedures

  • When a conflict arises during BCP, the structure of the implication sequence converging on a conflict vertex K is analyzed to determine those variable assignments that are directly responsible for the conflict.
  • Negation of this implicant, therefore, yields an implicate of the Boolean function f (whose satisfiability the authors seek) that does not exist in the clause database φ.
  • Thus, along with assignments from previous levels, the decision assignment at the current decision level is a sufficient condition for the conflict.
  • Determination of the conflicting assignment A^C(κ) can now be expressed in terms of a recursive causes_of(.) function; see (3). Conditions similar to these implicates are referred to as "nogoods" in TMS [9], [35] and in some algorithms for CSP [31].
  • Unlike the precise computations of the conflicting assignment A^C(κ) in (3) and the conflict-induced clause ω_C in (4), the procedures in these related works were only informally described.

3.1 Standard Conflict Diagnosis Engine

  • The identification of a conflict-induced clause ω_C enables the derivation of further implications that help prune the search.
  • Immediate implications of ω_C include asserting the current decision variable to its opposite value and determining a backtracking level for the search process.
  • Such immediate implications do not require that ω_C be added to the clause database.
  • In particular, adding ω_C to the clause database ensures that the search engine will not regenerate the conflicting assignment that led to the current conflict.

3.1.1 Failure-Driven Assertions

  • If ω_C involves the current decision variable, erasing the implication sequence at the current decision level makes ω_C a unit clause and causes the immediate implication of the decision variable to its opposite value.
  • The authors refer to such assignments as failure-driven assertions (FDAs) to emphasize that they are implications of conflicts and not decision assignments.
  • The authors note further that their derivation is automatically handled by their BCP-based deduction engine and does not require special processing.
  • This is in contrast with most search-based SAT algorithms that treat a second branch at the current decision level as another decision assignment.

3.1.2 Conflict-Directed Backtracking

  • If all the literals in ω_C correspond to variables that were assigned at decision levels that are lower than the current decision level, the authors can immediately conclude that the search process needs to backtrack.
  • This situation is illustrated in Fig. 4a for their working example.
  • When the backtracking level β is less than d - 1, however, the search process may backtrack nonchronologically by jumping back over several levels in the decision tree.
  • The procedure starts with an analysis of what caused the conflict and the creation of a new conflict-induced clause.
  • If backtracking is necessary (indicated by β ≠ d), a new conflict vertex κ is added to the implication graph and its antecedent assignments are recorded.
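A hedged sketch of the case split described in these bullets, deciding between a failure-driven assertion and (possibly nonchronological) backtracking from a conflict-induced clause; the clause encoding and the level_of map are assumptions for illustration:

```python
# Sketch: decide what to do with a conflict-induced clause.
def diagnose_clause(conflict_clause, level_of, current_level):
    """conflict_clause: signed ints; level_of: var -> decision level."""
    levels = [level_of[abs(lit)] for lit in conflict_clause]
    if current_level in levels:
        # The clause mentions the current decision level: erasing that level
        # leaves it unit, so it asserts the remaining literal (an FDA).
        return ("assert_at", current_level)
    # Otherwise backtrack to the deepest level mentioned in the clause;
    # if that level is below current_level - 1, the jump is nonchronological.
    return ("backtrack_to", max(levels, default=0))
```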

3.2 Variations on the Standard Diagnosis Engine

  • This section describes two improvements to the standard diagnosis engine described above.
  • The first is concerned with ways of controlling the growth of the clause database.
  • The second provides techniques that utilize the structure of the implication sequences to reduce the size of identified implicates.
  • Both of these improvements represent novel contributions to search-based SAT algorithms.

3.2.1 Space-Bounded Diagnosis Engines

  • Standard conflict diagnosis, described in the previous section, suffers from two drawbacks which, for large instances of SAT, can lead to large run times.
  • One solution to the second drawback is a simple modification to the conflict diagnosis engine that guarantees the worst case growth of the clause database to be polynomial in the number of variables.
  • Conflict-induced clauses of size greater than k are marked red and kept around only while they are satisfied, unsatisfied, or are unit clauses.
  • The worst case growth becomes polynomial in the number of variables as a function of the fixed integer k.
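A rough sketch of the space-bounded policy described above, under the stated k-bounded rule (helper names and encodings are assumptions):

```python
# Sketch: k-bounded clause recording. Clauses with more than k literals are
# kept only while they are satisfied, unsatisfied, or unit; once they become
# unresolved with two or more free literals they are deleted.
def prune_learned_clauses(learned, assignment, k):
    kept = []
    for clause in learned:
        if len(clause) <= k:
            kept.append(clause)              # small clauses are kept for good
            continue
        free = [l for l in clause if abs(l) not in assignment]
        is_satisfied = any(
            assignment.get(abs(l)) == (1 if l > 0 else 0) for l in clause
        )
        if is_satisfied or len(free) <= 1:   # satisfied, unsatisfied, or unit
            kept.append(clause)
        # otherwise the large clause is dropped
    return kept
```

Bounding retention this way caps the clause database at a polynomial in the number of variables for a fixed k, as the bullet above states.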

3.2.2 Unique Implication Points

  • Further enhancements to the conflict diagnosis engine involve generating stronger implicates (containing fewer literals) by more careful analysis of the structure of the implication graph I.
  • Hence, the clause (¬x4 ∨ x10 ∨ x11) is an implicate of the function that did not exist in the clause database.
  • Both of these implicates are stronger than the single conflict-induced clause identified earlier in (5) and can potentially provide additional implications in the presence of partial assignments.
  • This procedure for constructing strong implicates can be generalized for an arbitrary number of UIPs.
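The UIP idea can be illustrated with the following hedged sketch. It follows the "first UIP" formulation popularized by later solvers (expanding antecedents backward until a single assignment of the current decision level remains), which is not necessarily the exact dominator-based procedure described here, but on the example of Fig. 1 it produces the same strengthened clause (¬x4 ∨ x10 ∨ x11). The data-structure names are assumptions:

```python
# Sketch: derive a UIP-strengthened clause from a conflicting clause.
#   antecedent[var] -> literals of the clause that implied var ([] = decision)
#   level_of[var]   -> decision level, value_of[var] -> assigned value (0/1)
#   trail           -> variables assigned at the current level, oldest first
def first_uip_clause(conflicting_clause, antecedent, level_of, value_of,
                     current_level, trail):
    seen, learned = set(), set()
    counter = 0                              # open current-level assignments
    for lit in conflicting_clause:
        var = abs(lit)
        seen.add(var)
        if level_of[var] == current_level:
            counter += 1
        else:
            learned.add(lit)
    for var in reversed(trail):              # walk the current level backward
        if var not in seen:
            continue
        counter -= 1
        if counter == 0:                     # var is the UIP: stop here
            learned.add(-var if value_of[var] == 1 else var)
            return sorted(learned, key=abs)
        for lit in antecedent[var]:          # otherwise expand its antecedent
            v = abs(lit)
            if v == var or v in seen:
                continue
            seen.add(v)
            if level_of[v] == current_level:
                counter += 1
            else:
                learned.add(lit)
    return sorted(learned, key=abs)
```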

4 Experimental Results

  • The authors present experimental results for GRASP.
  • The CPU times for all programs were scaled to the equivalent CPU times on a SUN SPARC 5/85 machine.
  • These benchmarks represent one practical application of SAT algorithms to the field of Electronic Design Automation, thus being of key significance for experimentally evaluating SAT algorithms.
  • A benchmark suite is partitioned into classes of related benchmarks, e.g., for the DIMACS benchmarks, class AIM-100 includes all benchmarks with name aim-100-*.

4.1 DIMACS Benchmark Results

  • The CPU times of running GRASP and the other algorithms on the DIMACS benchmarks are shown in Table 1.
  • It can also be concluded that, for benchmarks where GRASP performs better, the other programs either take a very long time to find a solution or are unable to find a solution in less than 10,000 seconds.
  • Still, GSAT is the most efficient tool for the class of benchmarks G. Finally, none of the evaluated algorithms was able to find a solution to any problem instance of the benchmark classes F and PAR32.
  • Second, the jumps in the decision tree can save a large amount of search work.
  • Nevertheless, POSIT can be more efficient for specific benchmarks, as the examples of the last two rows indicate.

4.2 UCSC Benchmark Results

  • The results obtained for the UCSC benchmarks are shown in Table 4 and in Table 5.
  • The BF and SSA benchmark classes denote, respectively, CNF formulas for bridging and stuck-at faults.
  • GRASP performs significantly better than any other program on these benchmarks.
  • The UCSC benchmarks are characterized by extremely sparse CNF formulas for which the BCP-based conflict analysis procedure of GRASP works particularly well.
  • In addition, it should be noted that a direct comparison of the results of each algorithm with the results of DPL illustrates how effective search-pruning techniques can be for these classes of instances of SAT.

4.2.1 Database Growth Versus CPU Time

  • It is interesting to evaluate how the growth of the clause database affects the amount of search and the CPU time.
  • The CPU time and the number of backtracks for the SSA and BF benchmarks are shown in Fig.
  • As the maximum size of added clauses grows, the number of backtracks decreases and the CPU time decreases accordingly.
  • These results also suggest that it may be possible to experimentally identify optimal growth rates for different classes of problem instances.

5 Conclusions and Research Directions

  • This paper introduces a procedure for conflict analysis in satisfiability algorithms and describes a configurable algorithmic framework for solving SAT.
  • Experimental results indicate that conflict analysis and its by-products, nonchronological backtracking and identification of equivalent conflicting conditions, can contribute decisively to efficiently solving a large number of classes of SAT instances.
  • Besides the algorithmic organization of GRASP, special attention must be paid to the implementation details.
  • Hence, BCP only identifies those assignments that are necessary for a partial assignment to be extended to a complete assignment representing a solution of a given instance of SAT.
  • Let the conflicting assignment A^C(κ) be computed according to (3).


GRASP: A Search Algorithm
for Propositional Satisfiability
João P. Marques-Silva, Member, IEEE, and Karem A. Sakallah, Fellow, IEEE
Abstract: This paper introduces GRASP (Generic seaRch Algorithm for the Satisfiability Problem), a new search algorithm for
Propositional Satisfiability (SAT). GRASP incorporates several search-pruning techniques that proved to be quite powerful on a wide
variety of SAT problems. Some of these techniques are specific to SAT, whereas others are similar in spirit to approaches in other
fields of Artificial Intelligence. GRASP is premised on the inevitability of conflicts during the search and its most distinguishing feature is
the augmentation of basic backtracking search with a powerful conflict analysis procedure. Analyzing conflicts to determine their
causes enables GRASP to backtrack nonchronologically to earlier levels in the search tree, potentially pruning large portions of the
search space. In addition, by "recording" the causes of conflicts, GRASP can recognize and preempt the occurrence of similar conflicts
later on in the search. Finally, straightforward bookkeeping of the causality chains leading up to conflicts allows GRASP to identify
assignments that are necessary for a solution to be found. Experimental results obtained from a large number of benchmarks indicate
that application of the proposed conflict analysis techniques to SAT algorithms can be extremely effective for a large number of
representative classes of SAT instances.
Index Terms: Satisfiability, search algorithms, conflict diagnosis, conflict-directed nonchronological backtracking, conflict-based
equivalence, failure-driven assertions, unique implication points.
1 Introduction
The Boolean satisfiability problem (SAT) appears in
many contexts in the field of computer-aided design
of integrated circuits, including automatic test pattern
generation (ATPG), timing analysis, delay fault testing,
and logic verification, to name just a few. Though well-
researched and widely investigated, it remains the focus of
continuing interest because efficient techniques for its
solution can have great impact. SAT belongs to the class
of NP-complete problems whose algorithmic solutions are
currently believed to have exponential worst case complex-
ity [13]. Over the years, many algorithmic solutions have
been proposed for SAT, the most well-known being the
different variations of the Davis-Putnam procedure [7]. The
best known version of this procedure is based on a
backtracking search algorithm that, at each node in the
search tree, elects an assignment and prunes subsequent
search by iteratively applying the unit clause and the pure
literal rules [39]. Iterated application of the unit clause rule is
commonly referred to as Boolean Constraint Propagation
(BCP) [39] or as derivation of implications in the electronic
CAD literature [1].
Most of the recently proposed improvements to the basic
Davis-Putnam procedure [3], [6], [11], [12], [22], [30], [36],
[39] can be distinguished based on their decision making
heuristics or their use of preprocessing or relaxation
techniques. Common to all these approaches, however, is
the chronological nature of backtracking. Only in [28] is a
nonchronological backtracking procedure outlined for sol-
ving problems in Logic Truth Maintenance Systems
(LTMS), but it is only sketched and no experimental results
are presented. Nevertheless, nonchronological backtracking
techniques have been extensively studied and applied to
different areas of Artificial Intelligence, particularly Truth
Maintenance Systems (TMS) [9], [35], Constraint Satisfac-
tion Problems (CSP) [8], [14], [15], [31], and Logic Program-
ming [4], in some cases with very promising experimental
results [8], [15]. In recent years, extensive research work has
been dedicated to the development of local search algo-
rithms for SAT [33]. These algorithms are, in general,
incomplete, i.e., they may not find a solution and cannot
prove unsatisfiability. Nevertheless, local search algorithms
have been shown to be extremely effective on specific
classes of satisfiable instances of SAT.
Interest in the direct application of SAT algorithms to
electronic design automation (EDA) problems has been on
the rise recently [5], [22], [29], [36]. In addition, improve-
ments to the traditional structural (path sensitization)
algorithms for some EDA problems, such as ATPG, include
search-pruning techniques that are also applicable to SAT
algorithms in general [16], [21], [25].
This paper introduces a new procedure for conflict
analysis in satisfiability algorithms and describes its use in a
configurable algorithmic framework for solving SAT pro-
blems. Titled GRASP^1 (Generic seaRch Algorithm for the
Satisfiability Problem), this framework is premised on the
inevitability of conflicts during search. By noting that
conflicts arise when certain clauses are missing from the
problem specification, GRASP views conflict occurrence as
an opportunity to augment the problem description with
such conflict-induced clauses. The addition of these clauses
. J.P. Marques-Silva is with Cadence European Laboratories, IST/INESC, R.
Alves Redol, 9, 1000 Lisboa, Portugal.
. K.A. Sakallah is with the Department of Electrical Engineering and
Computer Science, University of Michigan, Ann Arbor, MI 48109-2122.
E-mail: karem@eecs.umich.edu.
Manuscript received 22 Apr. 1997.
For information on obtaining reprints of this article, please send e-mail to:
tc@computer.org, and reference IEEECS Log Number 104973.

helps to prune the search for a solution in three comple-
mentary ways. First, annotation of the literals in a conflict-
induced clause by the decision level at which their values
were assigned enables GRASP to backtrack nonchronologi-
cally to earlier levels in the search tree. Second, by
"recording" these clauses, GRASP can recognize and
preempt the occurrence of similar conflicts later on in the
search. And third, straightforward bookkeeping of the
causality chains leading up to conflicts allows GRASP to
identify assignments that are necessary for a solution to be
found. Experimental results obtained from a large number
of benchmarks [18] provide ample evidence that application
of the proposed conflict analysis techniques to SAT
algorithms can be extremely effective for a large number
of representative classes of SAT instances.
The remainder of this paper is organized in four sections.
In Section 2, we introduce the basics of backtracking search,
particularly our implementation of BCP, and describe the
overall architecture of GRASP. This is followed, in Section 3,
by a detailed discussion of the procedures for conflict
analysis and how they are implemented. (In the Appendix,
we prove that the GRASP SAT algorithm is both correct and
complete.) Extensive experimental results on a wide range
of benchmarks are presented and analyzed in Section 4. In
particular, GRASP is shown to outperform several state-of-
the-art SAT algorithms [2], [6], [10], [11], [30], [33], [36], [19]
on most, but not all, benchmarks. Furthermore, the
experimental results strongly suggest that, for several
practical classes of SAT instances, local search algorithms
may be inadequate. This is particularly significant when-
ever the SAT instances are likely to be unsatisfiable, as is
typical in Automated Theorem Proving and in several
Electronic Design Automation tasks. The paper concludes
in Section 5 with some suggestions for further research.
2 Backtrack Search for CNF Satisfiability
2.1 Basic Definitions and Notation
A conjunctive normal form (CNF) formula φ on n binary variables
x1, ..., xn is the conjunction (AND) of m clauses ω1, ..., ωm, each of
which is the disjunction (OR) of one or more literals, where a literal
is the occurrence of a variable or its complement. A formula φ denotes
a unique n-variable Boolean function f(x1, ..., xn) and each of its
clauses corresponds to an implicate of f [17, p. 288]. Clearly, a
function f can be represented by many equivalent CNF formulas. A
formula is complete if it consists of the entire set of prime
implicates [17, p. 288] for the corresponding function. In general, a
complete formula will have an exponential number of clauses. We will
refer to a CNF formula as a clause database and use "formula," "CNF
formula," and "clause database" interchangeably. The satisfiability
problem (SAT) is concerned with finding an assignment to the arguments
of f(x1, ..., xn) that makes the function equal to 1 or proving that
the function is equal to the constant 0.

A backtracking search algorithm for SAT is implemented by a search
process that implicitly traverses the space of 2^n possible binary
assignments to the problem variables. During the search, a variable
whose binary value has already been determined is considered to be
assigned; otherwise, it is unassigned, with an implicit value denoted
X. A truth assignment for a formula φ is a set of assigned variables
and their corresponding binary values. It will be convenient to
represent such assignments as sets of variable/value pairs; for
example, A = {(x1, 0), (x7, 1), (x13, 0)}. Alternatively, assignments
can be denoted as A = {x1 = 0, x7 = 1, x13 = 0}. Sometimes it is
convenient to indicate that a variable x is assigned without
specifying its actual value. In such cases, we will use the notation
ν(x) to denote the binary value assigned to x. An assignment A is
complete if |A| = n; otherwise, it is partial. Evaluating a formula φ
for a given truth assignment A yields three possible outcomes:
φ|A = 1, and we say that φ is satisfied and refer to A as a satisfying
assignment; φ|A = 0, in which case φ is unsatisfied and A is referred
to as an unsatisfying assignment; and φ|A = X, indicating that the
value of φ cannot be resolved by the assignment. This last case can
only happen when A is a partial assignment. An assignment partitions
the clauses of φ into three sets: satisfied clauses (evaluating to 1);
unsatisfied clauses (evaluating to 0); and unresolved clauses
(evaluating to X). The unassigned literals of a clause are referred to
as its free literals. A clause is said to be unit if it is unresolved
and the number of its free literals is one.
2.2 Formula Satisfiability
Formula satisfiability is concerned with determining if a
given formula φ is satisfiable and with identifying a
satisfying assignment for it. Starting from an empty truth
assignment, a backtrack search algorithm traverses the
space of truth assignments implicitly and organizes the
search for a satisfying assignment by maintaining a decision
tree. Each node in the decision tree specifies an elective
assignment to an unassigned variable; such assignments are
referred to as decision assignments. A decision level is
associated with each decision assignment to denote its
depth in the decision tree; the first decision assignment at
the root of the tree is at decision level 1. The search process
iterates through the steps of:
1. Extending the current assignment by making a
decision assignment to an unassigned variable. This
decision process is the basic mechanism for exploring
new regions of the search space. The search
terminates successfully if all clauses become satis-
fied; it terminates unsuccessfully if some clauses
remain unsatisfied and all possible assignments
have been exhausted.
2. Extending the current assignment by following the
logical consequences of the assignments made thus
far. The additional assignments derived by this
deduction process are referred to as implication assign-
ments or, more simply, implications. The deduction
process may also lead to the identification of one or
more unsatisfied clauses implying that the current
assignment is not a satisfying assignment. Such an
1. The GRASP software is available for downloading from http://
andante.eecs.umich.edu/grasp-1-0.tar.gz or http://algos.inesc.pt/grasp/
grasp.tar.gz.

occurrence is referred to as a conflict and the
associated unsatisfying assignment is called a con-
flicting assignment.
3. Undoing the current assignment, if it is conflicting,
so that another assignment can be tried. This
backtracking process is the basic mechanism for
retreating from regions of the search space that do
not correspond to satisfying assignments.
The decision level at which a given variable x is either
electively assigned or forcibly implied will be denoted by
δ(x). When relevant to the context, the assignment notation
introduced earlier may be extended to indicate the decision
level at which the assignment occurred. Thus, x = v@d
would be read as "x becomes equal to v at decision level d."
The average complexity of the above search process
depends on how decisions, deductions, and backtracking
are made. It also depends on the formula itself. The
implications that can be derived from a given partial
assignment depend on the set of available clauses. In
general, a formula consisting of more clauses will enable
more implications to be derived and will reduce the number
of backtracks due to conflicts. The limiting case is the
complete formula that contains all prime implicates. For
such a formula, no conflicts can arise since all logical
implications for a partial assignment can be derived.^2 This,
however, may not lead to shorter execution times since the
size of such a formula may be exponential.
2.3 Function Satisfiability
Given an initial formula φ, a search system can attempt to
augment it with additional implicates to increase the
deductive power during the search process. We propose a
search mechanism that identifies additional implicates by
diagnosing the causes of conflicts. Our approach considers
the occurrence of a conflict, which is unavoidable for an
unsatisfiable instance unless the formula is complete, as an
opportunity to "learn from the mistake that led to the
conflict" and introduces additional implicates to the clause
database only when it stumbles. Conflict diagnosis produces
three distinct pieces of information that can help speed up
the search:
1. New implicates that did not exist in the clause
database and that can be identified with the
occurrence of the conflict. These clauses may be
added to the clause database to avert future
occurrence of the same conflict and represent a form
of conflict-based equivalence (CBE).
2. An indication of whether the conflict was ultimately
due to the most recent decision assignment or to an
earlier decision assignment.
a. If that assignment was the most recent (i.e., at
the current decision level), the opposite assign-
ment (if it has not been tried) is immediately
implied as a necessary consequence of the
conflict; we refer to this as a failure-driven
assertion (FDA).
b. If the conflict resulted from an earlier decision
assignment (at a lower decision level), the search
can backtrack to the corresponding level in the
decision tree since the subtree rooted at that
level corresponds to assignments that will yield
the same conflict. The ability to identify a
backtracking level that is much earlier than the
current decision level is a form of nonchronolo-
gical backtracking that we refer to as conflict-
directed backtracking (CDB),^3 and has the poten-
tial of significantly reducing the amount of
search.
These conflict diagnosis techniques are discussed further
in Section 3.
2.4 Structure of the Search Process
The basic mechanism for deriving implications from a given
clause database is Boolean constraint propagation (BCP)
[11], [39]. Consider a formula φ containing the clause ω = (x ∨ ¬y)
and assume y = 1. For any satisfying assignment to φ, ω requires that
x be equal to 1, and we say that y = 1 implies x = 1 due to ω. In
general, given a unit clause (l1 ∨ ... ∨ lk) of φ with free literal
lj, consistency requires lj = 1 since this represents the only
possibility for the clause to be satisfied. If lj = x, then the
assignment x = 1 is required; if lj = ¬x, then x = 0 is required. Such
assignments are referred to as logical implications (implications, for
short) and correspond to the application of the unit clause rule
proposed by Davis and Putnam [7]. BCP refers to the iterated
application of this rule to a clause database until the set of unit
clauses becomes empty or one or more clauses become unsatisfied.

Let the assignment of a variable x be implied due to a clause
ω = (l1 ∨ ... ∨ lk). The antecedent assignment of x, denoted as
A^ω(x), is defined as the set of assignments to variables other than x
with literals in ω. Intuitively, A^ω(x) designates those variable
assignments that are directly responsible for implying the assignment
of x due to ω. For example, the antecedent assignments of x, y, and z
due to the clause ω = (x ∨ y ∨ ¬z) are, respectively,
A^ω(x) = {y = 0, z = 1}, A^ω(y) = {x = 0, z = 1}, and
A^ω(z) = {x = 0, y = 0}. Note that the antecedent assignment of an
electively assigned variable is empty.

The sequence of implications generated by BCP is captured by a
directed implication graph I defined as follows (see Fig. 1):

1. Each vertex in I corresponds to a variable assignment x = ν(x).
2. The predecessors of vertex x = ν(x) in I are the antecedent
   assignments A^ω(x) corresponding to the unit clause ω that led to
   the implication of x. The directed edges from the vertices in
   A^ω(x) to vertex x = ν(x) are all labeled with ω. Vertices that
   have no predecessors correspond to decision assignments.
3. Special conflict vertices are added to I to indicate the occurrence
   of conflicts. The predecessors of a conflict vertex κ correspond to
   variable assignments that force a clause ω to become unsatisfied
   and are viewed as the antecedent assignment A^ω(κ). The directed
   edges from the vertices in A^ω(κ) to κ are all labeled with ω.

2. This assertion is proven in Theorem 3 in the Appendix.
3. The designation CDB is used instead of dependency-directed
backtracking [35] because the backtracking procedure is tightly
associated with BCP.
The decision level of an implied variable x is related to
those of its antecedent variables according to:
xmaxfyjy; y 2 A
!
xg: 1
2.5 Search Algorithm Template
The general structure of the GRASP search algorithm is
shown in Fig. 2. We assume that an initial clause database φ
and an initial assignment A, at decision level 0, are given.
This initial assignment, which may be empty, may be
viewed as an additional problem constraint and causes the
search to be restricted to a subcube of the n-dimensional
Boolean space. As the search proceeds, both φ and A are
modified. The recursive Search() function consists of four
major operations:
1. Decide(), which chooses a decision assignment at
each stage of the search process. Decision proce-
dures are commonly based on heuristic knowledge.
For the results given in Section 4, the following
greedy heuristic is used:
At each node in the decision tree evaluate the number
of clauses directly satisfied by each assignment to each
variable. Choose the variable and the assignment that
directly satisfies the largest number of clauses.
Other decision making procedures have been im-
plemented in the GRASP algorithmic framework,
particularly those described in [11], [26]. For most of
these heuristics, preference is given to assignments
that simplify the clauses the most and can lead to
more implications due to BCP. This is in explicit
contrast with our heuristic which always attempts to
satisfy the largest number of clauses. We chose to
employ this heuristic in our experimental evaluation
because of its simplicity and to highlight the
effectiveness of conflict analysis.
2. Deduce(), which implements BCP and (implicitly)
maintains the resulting implication graph. The
pseudocode for this procedure is shown in Fig. 3.
The algorithm repeatedly applies the unit clause rule
[7] while unit clauses exist. It returns with a
SUCCESS indication unless one or more clauses
become unsatisfied. In that case, a conflict vertex is
added to the implication graph and a CONFLICT
indication is returned.
3. Diagnose(), which identifies the causes of conflicts
and can augment the clause database with addi-
tional implicates. Realization of different conflict
diagnosis procedures is the subject of Section 3.
4. Erase(), which deletes the assignments at the
current decision level.
The Search() function starts by calling Decide() to
choose a variable assignment at decision level d. It then
determines the consequences of this elective assignment by
calling Deduce(). If this assignment does not cause any
clauses to become unsatisfied, Search() is called recur-
sively at decision level d + 1. If, on the other hand, a conflict
arises due to this assignment, the Diagnose() function is
called to analyze this conflict and to determine an
appropriate decision level for backtracking the search.
When Search() encounters a conflict, it returns with a
CONFLICT indication and causes the elective assignment
made on entry to the function to be erased. We refer to
Decide(), Deduce(), and Diagnose() as the Decision,
Deduction, and Diagnosis engines, respectively. Different
realizations of these engines lead to different SAT algo-
rithms. For example, the Davis-Putnam procedure can be
emulated with the above algorithm by defining a decision
engine, requiring the deduction engine to implement BCP
and the pure literal rule, and organizing the diagnosis
engine to implement chronological backtracking.
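As a rough Python rendering of the template just described (not a verbatim transcription of Fig. 2), the control flow might look as follows; decide/deduce/diagnose/erase stand in for the Decision, Deduction, and Diagnosis engines and Erase(), and their signatures here are assumptions. diagnose() is assumed to record the conflict-induced clause and return the backtracking level β, stored in state["beta"]:

```python
SUCCESS, CONFLICT = "SUCCESS", "CONFLICT"

def search(d, state):
    if decide(d, state) == SUCCESS:        # nothing left to decide: satisfied
        return SUCCESS
    while True:
        if deduce(d, state) == CONFLICT:   # BCP hit an unsatisfied clause
            beta = diagnose(d, state)      # records the conflict-induced clause
            if beta < d:                   # conflict-directed backjump
                erase(d, state)
                state["beta"] = beta
                return CONFLICT
            erase(d, state)                # beta == d: after erasing level d,
            continue                       # the new clause asserts the other value
        if search(d + 1, state) == SUCCESS:
            return SUCCESS
        if state["beta"] < d:              # the backjump skips this level too
            erase(d, state)
            return CONFLICT
        # state["beta"] == d: the recorded clause is unsatisfied again at this
        # level, so loop back to deduce() and let diagnosis handle it here.
```

At the top level, a CONFLICT return with no decision level left to backtrack to would indicate an unsatisfiable instance.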
3 Conflict Analysis Procedures
When a conflict arises during BCP, the structure of the
implication sequence converging on a conflict vertex K is
analyzed to determine those (unsatisfying) variable assign-
ments that are directly responsible for the conflict. The
conjunction of these conflicting assignments is an implicant
that represents a sufficient condition for the conflict to arise.
Negation of this implicant, therefore, yields an implicate of
the Boolean function f (whose satisfiability we seek) that
does not exist in the clause database φ. This new implicate,
referred to as a conflict-induced clause,^4 provides the primary
mechanism for implementing failure-driven assertions,
nonchronological conflict-directed backtracking, and con-
flict-based equivalence (see Section 2.3).
We denote the conflicting assignment associated with a conflict vertex
κ by A^C(κ) and the associated conflict-induced clause by ω_C(κ). The
conflicting assignment is determined by a backward traversal of the
implication graph starting at κ. Besides the decision assignment at
the current decision level, only those assignments that occurred at
previous decision levels are included in A^C(κ). This is justified by
the fact that the decision assignment at the current decision level is
directly responsible for all implied assignments at that level. Thus,
along with assignments from previous levels, the decision assignment
at the current decision level is a sufficient condition for the
conflict. To facilitate the computation of A^C(κ), we partition the
antecedent assignments of κ, as well as those for variables assigned
at the current decision level, into two sets. Let x denote either κ or
a variable that is assigned at the current decision level. The
partition of A(x) is then given by:^5

    Λ(x) = { (y, ν(y)) ∈ A(x) | δ(y) < δ(x) }
    Σ(x) = { (y, ν(y)) ∈ A(x) | δ(y) = δ(x) }                       (2)

For example, referring to the implication graph of Fig. 1,
Λ(x6) = {x11 = 0@3} and Σ(x6) = {x4 = 1@6}. Determination of the
conflicting assignment A^C(κ) can now be expressed as

    A^C(κ) = causes_of(κ),

where causes_of(.) is defined by

    causes_of(x) = { (x, ν(x)) }                                  if A(x) = ∅,
                   Λ(x) ∪ [ ⋃_{(y, ν(y)) ∈ Σ(x)} causes_of(y) ]   otherwise.   (3)

4. Conditions similar to these implicates are referred to as "nogoods"
in TMS [9], [35] and in some algorithms for CSP [31]. Nevertheless,
the basic mechanism for creating conflict-induced clauses differs.
The conflict-induced clause corresponding to A^C(κ) is now determined
according to:

    ω_C(κ) = ∨_{(x, ν(x)) ∈ A^C(κ)} x^ν(x),                         (4)

where, for a binary variable x, x^0 = x and x^1 = ¬x.
Application of (2)-(4) to the conflict depicted in Fig. 1 yields the
following conflicting assignment and conflict-induced clause at
decision level 6:

    A^C(κ) = {x1 = 1@6, x9 = 0@1, x10 = 0@3, x11 = 0@3}
    ω_C(κ) = (¬x1 ∨ x9 ∨ x10 ∨ x11).                                (5)
We note that our method for deriving implicates by
analyzing the causes of conflicts has its foundations in
[26]. It is also similar in spirit to the approaches of Freeman
[11, chapter 8] and McAllester [28]. However, unlike the
precise computations of the conflicting assignment A^C(κ) in (3) and
the conflict-induced clause ω_C(κ) in (4), the procedures in these
related works were only informally described.
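The computation in (2)-(4) can be rendered as a short sketch (data-structure names are illustrative assumptions and memoization is omitted). Applied to the implication graph of Fig. 1, it reproduces the conflicting assignment and the clause shown in (5):

```python
# Sketch: backward traversal computing A^C(kappa) and omega_C(kappa).
#   antecedent[var] -> clause (signed ints) that implied var ([] = decision)
#   level_of[var]   -> decision level, value_of[var] -> assigned value (0/1)
def causes_of(x, antecedent, level_of, value_of, current_level):
    """Set of (var, value) pairs responsible for the assignment of x."""
    if not antecedent[x]:                        # decision assignment
        return {(x, value_of[x])}
    result = set()
    for lit in antecedent[x]:
        y = abs(lit)
        if y == x:
            continue
        if level_of[y] < current_level:          # Lambda(x): keep directly
            result.add((y, value_of[y]))
        else:                                    # Sigma(x): recurse
            result |= causes_of(y, antecedent, level_of, value_of,
                                current_level)
    return result

def conflict_induced_clause(conflicting_clause, antecedent, level_of,
                            value_of, current_level):
    a_c = set()
    for lit in conflicting_clause:               # antecedents of kappa
        y = abs(lit)
        if level_of[y] < current_level:
            a_c.add((y, value_of[y]))
        else:
            a_c |= causes_of(y, antecedent, level_of, value_of, current_level)
    # Negation of the conflicting assignment, as in (4): x^0 = x, x^1 = NOT x.
    clause = sorted((-v if val == 1 else v for v, val in a_c), key=abs)
    return a_c, clause
```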
3.1 Standard Conflict Diagnosis Engine
The identification of a conflict-induced clause ω_C(κ) enables the
derivation of further implications that help prune the search.
Immediate implications of ω_C(κ) include asserting the current
decision variable to its opposite value and determining a backtracking
level for the search process. Such immediate implications do not
require that ω_C(κ) be added to the clause database. Augmenting the
clause database with ω_C(κ), however, has the potential of identifying
future implications that are not derivable without ω_C(κ). In
particular, adding ω_C(κ) to the clause database ensures that the
search engine will not regenerate the conflicting assignment that led
to the current conflict.
3.1.1 Failure-Driven Assertions
If ω_C(κ) involves the current decision variable, erasing the
implication sequence at the current decision level makes ω_C(κ) a unit
clause and causes the immediate implication of the decision variable
to its opposite value. We refer to such assignments as failure-driven
assertions (FDAs) to emphasize that they are implications of conflicts
and not decision assignments. We note further that their derivation is
automatically handled by our BCP-based deduction engine and does not
require special processing. This is in contrast with most search-based
SAT algorithms that treat a second branch at the current decision
level as another decision assignment. Using our running example (see
Fig. 1) as an illustration, we note that, after erasing the
conflicting implication sequence at level 6, the conflict-induced
clause ω_C(κ) in (5) becomes a unit clause with ¬x1 as its free
literal. This immediately implies the assignment x1 = 0, and x1 is
said to be asserted.
3.1.2 Conflict-Directed Backtracking
If all the literals in ω_C(κ) correspond to variables that were
assigned at decision levels that are lower than the current
decision level, we can immediately conclude that the search
process needs to backtrack. This situation can only take
place when the conflict in question is produced as a direct
consequence of diagnosing a previous conflict and is
Fig. 1. Example of clause database and partial implication graph.
5. To reduce clutter, we omit the superscripts denoting the clauses that
lead to these antecedent assignments and assume them to be understood
from context.

Citations
Proceedings ArticleDOI
22 Jun 2001
TL;DR: The development of a new complete solver, Chaff, is described which achieves significant performance gains through careful engineering of all aspects of the search, especially a particularly efficient implementation of Boolean constraint propagation (BCP) and a novel low overhead decision strategy.
Abstract: Boolean satisfiability is probably the most studied of the combinatorial optimization/search problems. Significant effort has been devoted to trying to provide practical solutions to this problem for problem instances encountered in a range of applications in electronic design automation (EDA), as well as in artificial intelligence (AI). This study has culminated in the development of several SAT packages, both proprietary and in the public domain (e.g. GRASP, SATO) which find significant use in both research and industry. Most existing complete solvers are variants of the Davis-Putnam (DP) search algorithm. In this paper we describe the development of a new complete solver, Chaff which achieves significant performance gains through careful engineering of all aspects of the search-especially a particularly efficient implementation of Boolean constraint propagation (BCP) and a novel low overhead decision strategy. Chaff has been able to obtain one to two orders of magnitude performance improvement on difficult SAT benchmarks in comparison with other solvers (DP or otherwise), including GRASP and SATO.

2,886 citations


Cites methods from "GRASP: a search algorithm for propo..."

  • ...GRASP [8], POSIT [5], SATO [13], rel_sat [2], WalkSAT [9]) have been developed, most employing some combination of two main strategies: the Davis-Putnam (DP) backtrack search and heuristic local search....

    [...]

Book
01 Jan 2003
TL;DR: Rina Dechter synthesizes three decades of researchers work on constraint processing in AI, databases and programming languages, operations research, management science, and applied mathematics to provide the first comprehensive examination of the theory that underlies constraint processing algorithms.
Abstract: Constraint satisfaction is a simple but powerful tool. Constraints identify the impossible and reduce the realm of possibilities to effectively focus on the possible, allowing for a natural declarative formulation of what must be satisfied, without expressing how. The field of constraint reasoning has matured over the last three decades with contributions from a diverse community of researchers in artificial intelligence, databases and programming languages, operations research, management science, and applied mathematics. Today, constraint problems are used to model cognitive tasks in vision, language comprehension, default reasoning, diagnosis, scheduling, temporal and spatial reasoning. In Constraint Processing, Rina Dechter, synthesizes these contributions, along with her own significant work, to provide the first comprehensive examination of the theory that underlies constraint processing algorithms. Throughout, she focuses on fundamental tools and principles, emphasizing the representation and analysis of algorithms. ·Examines the basic practical aspects of each topic and then tackles more advanced issues, including current research challenges ·Builds the reader's understanding with definitions, examples, theory, algorithms and complexity analysis ·Synthesizes three decades of researchers work on constraint processing in AI, databases and programming languages, operations research, management science, and applied mathematics Table of Contents Preface; Introduction; Constraint Networks; Consistency-Enforcing Algorithms: Constraint Propagation; Directional Consistency; General Search Strategies; General Search Strategies: Look-Back; Local Search Algorithms; Advanced Consistency Methods; Tree-Decomposition Methods; Hybrid of Search and Inference: Time-Space Trade-offs; Tractable Constraint Languages; Temporal Constraint Networks; Constraint Optimization; Probabilistic Networks; Constraint Logic Programming; Bibliography

1,739 citations

Book ChapterDOI
24 Jul 2017
TL;DR: In this paper, the authors presented a scalable and efficient technique for verifying properties of deep neural networks (or providing counter-examples) based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function.
Abstract: Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counter-examples). The technique is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation airborne collision avoidance system for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.

1,332 citations

Journal ArticleDOI
TL;DR: An overview of the main design concepts of SCIP and how it can be used to solve constraint integer programs is given and experimental results show that the approach outperforms current state-of-the-art techniques for proving the validity of properties on circuits containing arithmetic.
Abstract: Constraint integer programming (CIP) is a novel paradigm which integrates constraint programming (CP), mixed integer programming (MIP), and satisfiability (SAT) modeling and solving techniques. In this paper we discuss the software framework and solver SCIP (Solving Constraint Integer Programs), which is free for academic and non-commercial use and can be downloaded in source code. This paper gives an overview of the main design concepts of SCIP and how it can be used to solve constraint integer programs. To illustrate the performance and flexibility of SCIP, we apply it to two different problem classes. First, we consider mixed integer programming and show by computational experiments that SCIP is almost competitive to specialized commercial MIP solvers, even though SCIP supports the more general constraint integer programming paradigm. We develop new ingredients that improve current MIP solving technology. As a second application, we employ SCIP to solve chip design verification problems as they arise in the logic design of integrated circuits. This application goes far beyond traditional MIP solving, as it includes several highly non-linear constraints, which can be handled nicely within the constraint integer programming framework. We show anecdotally how the different solving techniques from MIP, CP, and SAT work together inside SCIP to deal with such constraint classes. Finally, experimental results show that our approach outperforms current state-of-the-art techniques for proving the validity of properties on circuits containing arithmetic.

1,163 citations


Cites background from "GRASP: a search algorithm for propo..."

  • ...One of the key ingredients in modern SAT solvers is conflict analysis [78]: infeasible subproblems that are encountered during branch-and-bound are analyzed in order to learn deduced clauses that can later be used to prune other nodes of the search tree....

    [...]

  • ...In addition, these conflict clauses enable the solver to perform so-called nonchronological backtracking [78]....

    [...]

Book ChapterDOI
TL;DR: This article surveys a technique called Bounded Model Checking (BMC), which uses a propositional SAT solver rather than BDD manipulation techniques, and is widely perceived as a complementary technique to BDD-based model checking.
Abstract: Symbolic model checking with Binary Decision Diagrams (BDDs) has been successfully used in the last decade for formally verifying finite state systems such as sequential circuits and protocols. Since its introduction in the beginning of the 90's, it has been integrated in the quality assurance process of several major hardware companies. The main bottleneck of this method is that BDDs may grow exponentially, and hence the amount of available memory re- stricts the size of circuits that can be verified efficiently. In this article we survey a technique called Bounded Model Checking (BMC), which uses a propositional SAT solver rather than BDD manipulation techniques. Since its introduction in 1999, BMC has been well received by the industry. It can find many logical er- rors in complex systems that can not be handled by competing techniques, and is therefore widely perceived as a complementary technique to BDD-based model checking. This observation is supported by several independent comparisons that have been published in the last few years.

904 citations

References
Book
01 Jan 1979
TL;DR: The second edition of a quarterly column as discussed by the authors provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness,” W. H. Freeman & Co., San Francisco, 1979.
Abstract: This is the second edition of a quarterly column the purpose of which is to provide a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and myself in our book ‘‘Computers and Intractability: A Guide to the Theory of NP-Completeness,’’ W. H. Freeman & Co., San Francisco, 1979 (hereinafter referred to as ‘‘[G&J]’’; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed. Readers having results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time-solvability, etc.), or open problems they would like publicized, should send them to David S. Johnson, Room 2C355, Bell Laboratories, Murray Hill, NJ 07974, including details, or at least sketches, of any new proofs (full papers are preferred). In the case of unpublished results, please state explicitly that you would like the results mentioned in the column. Comments and corrections are also welcome. For more details on the nature of the column and the form of desired submissions, see the December 1981 issue of this journal.

40,020 citations

Book
01 Jan 1990
TL;DR: The new edition of Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems offers comprehensive and state-of-the-art treatment of both testing and testable design.
Abstract: For many years, Breuer-Friedman's Diagnosis and Reliable Design of Digital Systems was the most widely used textbook in digital system testing and testable design. Now, Computer Science Press makes available a new and greatly expanded edition. Incorporating a significant amount of new material related to recently developed technologies, the new edition offers comprehensive and state-of-the-art treatment of both testing and testable design.

2,758 citations


"GRASP: a search algorithm for propo..." refers methods in this paper

  • ...Iterated application of the unit clause rule is commonly referred to as Boolean Constraint Propagation (BCP) [39] or as derivation of implications in the electronic CAD literature [1]....

    [...]

Journal ArticleDOI
TL;DR: In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation.
Abstract: The hope that mathematical methods employed in the investigation of formal logic would lead to purely computational methods for obtaining mathematical theorems goes back to Leibniz and has been revived by Peano around the turn of the century and by Hilbert's school in the 1920's. Hilbert, noting that all of classical mathematics could be formalized within quantification theory, declared that the problem of finding an algorithm for determining whether or not a given formula of quantification theory is valid was the central problem of mathematical logic. And indeed, at one time it seemed as if investigations of this “decision” problem were on the verge of success. However, it was shown by Church and by Turing that such an algorithm can not exist. This result led to considerable pessimism regarding the possibility of using modern digital computers in deciding significant mathematical questions. However, recently there has been a revival of interest in the whole question. Specifically, it has been realized that while no decision procedure exists for quantification theory there are many proof procedures available—that is, uniform procedures which will ultimately locate a proof for any formula of quantification theory which is valid but which will usually involve seeking “forever” in the case of a formula which is not valid—and that some of these proof procedures could well turn out to be feasible for use with modern computing machinery.Hao Wang [9] and P. C. Gilmore [3] have each produced working programs which employ proof procedures in quantification theory. Gilmore's program employs a form of a basic theorem of mathematical logic due to Herbrand, and Wang's makes use of a formulation of quantification theory related to those studied by Gentzen. However, both programs encounter decisive difficulties with any but the simplest formulas of quantification theory, in connection with methods of doing propositional calculus. Wang's program, because of its use of Gentzen-like methods, involves exponentiation on the total number of truth-functional connectives, whereas Gilmore's program, using normal forms, involves exponentiation on the number of clauses present. Both methods are superior in many cases to truth table methods which involve exponentiation on the total number of variables present, and represent important initial contributions, but both run into difficulty with some fairly simple examples.In the present paper, a uniform proof procedure for quantification theory is given which is feasible for use with some rather complicated formulas and which does not ordinarily lead to exponentiation. The superiority of the present procedure over those previously available is indicated in part by the fact that a formula on which Gilmore's routine for the IBM 704 causes the machine to computer for 21 minutes without obtaining a result was worked successfully by hand computation using the present method in 30 minutes. Cf. §6, below.It should be mentioned that, before it can be hoped to employ proof procedures for quantification theory in obtaining proofs of theorems belonging to “genuine” mathematics, finite axiomatizations, which are “short,” must be obtained for various branches of mathematics. This last question will not be pursued further here; cf., however, Davis and Putnam [2], where one solution to this problem is given for ele

2,743 citations


"GRASP: a search algorithm for propo..." refers background in this paper

  • ...GRASP is premised on the inevitability of conflicts during the search and its most distinguishing feature is the augmentation of basic backtracking search with a powerful conflict analysis procedure....

    [...]

Journal ArticleDOI
J. de Kleer
TL;DR: A new view of problem solving motivated by a new kind of truth maintenance system based on manipulating assumption sets is presented, which is possible to work effectively and efficiently with inconsistent information, context switching is free, and most backtracking is avoided.

1,874 citations


"GRASP: a search algorithm for propo..." refers background in this paper

  • ...Conditions similar to these implicates are referred to as "nogoods" in TMS [9], [35] and in some algorithms for CSP [31]....

    [...]

  • ...Only in [28] is a nonchronological backtracking procedure outlined for solving problems in Logic Truth Maintenance Systems (LTMS), but it is only sketched and no experimental results are presented....

    [...]

  • ...Conditions similar to these implicates are referred to as "nogoods" in TMS [9], [35] and in some algorithms for CSP [31]....

    [...]

  • ...Nevertheless, nonchronological backtracking techniques have been extensively studied and applied to different areas of Artificial Intelligence, particularly Truth Maintenance Systems (TMS) [9], [35], Constraint Satisfaction Problems (CSP) [8], [14], [15], [31], and Logic Programming [4], in some cases with very promising experimental results [8], [15]....

    [...]

Frequently Asked Questions (2)
Q1. What have the authors contributed in "Grasp: a search algorithm for propositional satisfiability" ?

This paper introduces GRASP (Generic seaRch Algorithm for the Satisfiability Problem), a new search algorithm for Propositional Satisfiability (SAT). Analyzing conflicts to determine their causes enables GRASP to backtrack nonchronologically to earlier levels in the search tree, potentially pruning large portions of the search space.

Future research work will emphasize heuristic control of the rate of growth of the clause database. Finally, the authors propose to undertake a comprehensive experimental characterization of the instances of SAT for which conflict analysis provides significant performance gains. (A more thorough discussion and proof of the correctness and completeness of the algorithm and its variations can be found in [26].) There are only two situations under which a conflict κ can be identified.