
Showing papers on "Conjunctive normal form published in 2011"


Book
16 May 2011
TL;DR: Includes the chapter "JBool: a software tool" by Claude Benzaken and Nadia Brauner, and characterizations of special classes by functional equations by Lisa Hellerstein.
Abstract: Written by prominent experts in the field, this monograph provides the first comprehensive, unified presentation of the structural, algorithmic and applied aspects of the theory of Boolean functions. The book focuses on algebraic representations of Boolean functions, especially disjunctive and conjunctive normal form representations. This framework looks at the fundamental elements of the theory (Boolean equations and satisfiability problems, prime implicants and associated short representations, dualization), an in-depth study of special classes of Boolean functions (quadratic, Horn, shellable, regular, threshold, read-once functions and their characterization by functional equations) and two fruitful generalizations of the concept of Boolean functions (partially defined functions and pseudo-Boolean functions). Several topics are presented here in book form for the first time. Because of its depth and breadth and its emphasis on algorithms and applications, this monograph will have special appeal for researchers and graduate students in discrete mathematics, operations research, computer science, engineering and economics.
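The disjunctive and conjunctive normal forms at the heart of the book can be made concrete in a few lines. The sketch below is illustrative only (not from the monograph); the encoding of clauses and terms as lists of signed integers is an assumed convention. It evaluates one Boolean function given in both normal forms and checks that the two representations agree.

```python
from itertools import product

def eval_cnf(clauses, assignment):
    """Evaluate a CNF formula: each clause is a list of signed ints,
    where literal i means variable i and -i means its negation."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def eval_dnf(terms, assignment):
    """Evaluate a DNF formula: a disjunction of conjunctive terms."""
    return any(
        all(assignment[abs(lit)] == (lit > 0) for lit in term)
        for term in terms
    )

# x1 XOR x2 written in both normal forms
cnf = [[1, 2], [-1, -2]]
dnf = [[1, -2], [-1, 2]]

agree = all(
    eval_cnf(cnf, {1: a, 2: b}) == eval_dnf(dnf, {1: a, 2: b})
    for a, b in product([False, True], repeat=2)
)
print(agree)  # True: both represent the same Boolean function
```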

403 citations


Book ChapterDOI
19 Jun 2011
TL;DR: This paper proposes an algorithm for solving 2QBF satisfiability by counterexample guided abstraction refinement (CEGAR) and presents a comparison of a prototype implementing the presented algorithm to state of the art QBF solvers, showing that a larger set of instances is solved.
Abstract: Quantified Boolean Formulas (QBFs) enable standard representation of PSPACE problems. In particular, formulas with two quantifier levels (2QBFs) enable representing problems in the second level of the polynomial hierarchy (Π2P, Σ2P). This paper proposes an algorithm for solving 2QBF satisfiability by counterexample guided abstraction refinement (CEGAR). This represents an alternative approach to 2QBF satisfiability and, by extension, to solving decision problems in the second level of the polynomial hierarchy. In addition, the paper presents a comparison of a prototype implementing the presented algorithm to state of the art QBF solvers, showing that a larger set of instances is solved.
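The CEGAR loop the abstract outlines can be sketched as follows. This is an illustrative toy, not the paper's prototype: the brute-force enumerations stand in for the SAT-solver calls a real 2QBF solver would make, and the representation of the matrix as a Python predicate is an assumption.

```python
from itertools import product

def solve_2qbf(phi, xvars, yvars):
    """Decide  exists X forall Y. phi(X, Y)  by counterexample guided
    abstraction refinement.  phi is a predicate over a variable->bool
    dict; brute-force loops stand in for SAT-solver calls."""
    counterexamples = []                   # universal assignments seen so far
    while True:
        # Abstraction: find a candidate X consistent with all
        # counterexamples collected so far.
        candidate = None
        for bits in product([False, True], repeat=len(xvars)):
            x = dict(zip(xvars, bits))
            if all(phi({**x, **y}) for y in counterexamples):
                candidate = x
                break
        if candidate is None:
            return False                   # no X survives the counterexamples
        # Verification: look for a Y that falsifies the candidate.
        for bits in product([False, True], repeat=len(yvars)):
            y = dict(zip(yvars, bits))
            if not phi({**candidate, **y}):
                counterexamples.append(y)  # refine and try again
                break
        else:
            return True                    # candidate withstands every Y

# exists a. forall b. (a or b) and (a or not b)  -- true, witness a = True
print(solve_2qbf(lambda v: (v['a'] or v['b']) and (v['a'] or not v['b']),
                 ['a'], ['b']))  # True
```

Each counterexample is new (the candidate satisfied all earlier ones), so the loop terminates after at most 2^|Y| refinements.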

78 citations


Book ChapterDOI
19 Jun 2011
TL;DR: The first algorithm is optimal in its class, meaning that it requires the smallest number of calls to a SAT solver, and the resulting algorithms achieve significant performance gains with respect to state of the art MUS extraction algorithms.
Abstract: Minimally Unsatisfiable Subformulas (MUS) find a wide range of practical applications, including product configuration, knowledge-based validation, and hardware and software design and verification. MUSes also find application in recent Maximum Satisfiability algorithms and in CNF formula redundancy removal. Besides direct applications in Propositional Logic, algorithms for MUS extraction have been applied to more expressive logics. This paper proposes two algorithms for MUS extraction. The first algorithm is optimal in its class, meaning that it requires the smallest number of calls to a SAT solver. The second algorithm extends earlier work, but implements a number of new techniques. The resulting algorithms achieve significant performance gains with respect to state of the art MUS extraction algorithms.
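A useful baseline here is the classical deletion-based MUS extractor, which makes one SAT call per clause. The sketch below is neither of the paper's algorithms, just the textbook baseline; the `satisfiable` routine is a brute-force stand-in for a SAT solver, and the clause encoding is an assumed convention.

```python
from itertools import product

def satisfiable(clauses, nvars):
    """Brute-force stand-in for a SAT-solver call.  Clauses are lists
    of signed ints: literal i is variable i, -i its negation."""
    for bits in product([False, True], repeat=nvars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def mus_deletion(clauses, nvars):
    """Deletion-based MUS extraction: drop each clause in turn and keep
    the drop whenever the remainder is still unsatisfiable.  Exactly one
    solver call per clause."""
    mus = list(clauses)
    for c in list(mus):
        rest = [d for d in mus if d is not c]
        if not satisfiable(rest, nvars):
            mus = rest        # c was not needed for unsatisfiability
    return mus

# (x1) and (not x1) are already contradictory; (x1 or x2) is irrelevant
print(mus_deletion([[1], [-1], [1, 2]], 2))  # [[1], [-1]]
```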

72 citations


Book ChapterDOI
26 Oct 2011
TL;DR: This work presents disjunction category (DC) labels, a new label format for enforcing information flow in the presence of mutually distrusting parties, and introduces and proves soundness of decentralized privileges that are used in declassifying data, in addition to providing a notion of privilege-hierarchy.
Abstract: We present disjunction category (DC) labels, a new label format for enforcing information flow in the presence of mutually distrusting parties. DC labels can be ordered to form a lattice, based on propositional logic implication and conjunctive normal form. We introduce and prove soundness of decentralized privileges that are used in declassifying data, in addition to providing a notion of privilege-hierarchy. Our model is simpler than previous decentralized information flow control (DIFC) systems and does not rely on a centralized principal hierarchy. Additionally, DC labels can be used to enforce information flow both statically and dynamically. To demonstrate their use, we describe two Haskell implementations, a library used to perform dynamic label checks, compatible with existing DIFC systems, and a prototype library that enforces information flow statically, by leveraging the Haskell type checker.

58 citations


Proceedings Article
01 Jan 2011
TL;DR: In this paper, a host of space and length-space trade-offs for resolution proofs for formulas in conjunctive normal form (CNF) have been established, and most of them are superpolynomial or even exponential and essentially tight.
Abstract: For current state-of-the-art satisfiability algorithms based on the DPLL procedure and clause learning, the two main bottlenecks are the amounts of time and memory used. In the field of proof complexity, these resources correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF). There has been a long line of research investigating these proof complexity measures, but while strong results have been established for length, our understanding of space and how it relates to length has remained quite poor. In particular, the question whether resolution proofs can be optimized for length and space simultaneously, or whether there are trade-offs between these two measures, has remained essentially open apart from a few results in restricted settings. In this paper, we remedy this situation by proving a host of length-space trade-off results for resolution in a completely general setting. Our collection of trade-offs covers almost the whole range of values for the space complexity of formulas, and most of the trade-offs are superpolynomial or even exponential and essentially tight. Using similar techniques, we show that these trade-offs in fact extend (albeit with worse parameters) to the exponentially stronger k-DNF resolution proof systems, which operate with formulas in disjunctive normal form with terms of bounded arity k. We also answer the open question whether the k-DNF resolution systems form a strict hierarchy with respect to space in the affirmative.
Our key technical contribution is the following, somewhat surprising, theorem: Any CNF formula F can be transformed by simple variable substitution into a new formula F′ such that if F has the right properties, F′ can be proven in essentially the same length as F, whereas on the other hand the minimal number of lines one needs to keep in memory simultaneously in any proof of F′ is lower-bounded by the minimal number of variables needed simultaneously in any proof of F. Applying this theorem to so-called pebbling formulas defined in terms of pebble games on directed acyclic graphs, we obtain our results.

55 citations


Book ChapterDOI
14 Jul 2011
TL;DR: Skolem-function derivation can be decoupled from Skolemization-based solvers and computed from standard search-based ones as well as from its clause-resolution proof of unsatisfiability under formula negation.
Abstract: Quantified Boolean formulae (QBF) allow compact encoding of many decision problems. Their importance motivated the development of fast QBF solvers. Certifying the results of a QBF solver not only ensures correctness, but also enables certain synthesis and verification tasks particularly when the certificate is given as a set of Skolem functions. To date the certificate of a true formula can be in the form of either a (cube) resolution proof or a Skolem-function model whereas that of a false formula is in the form of a (clause) resolution proof. The resolution proof and Skolem-function model are somewhat unrelated. This paper strengthens their connection by showing that, given a true QBF, its Skolem-function model is derivable from its cube-resolution proof of satisfiability as well as from its clause-resolution proof of unsatisfiability under formula negation. Consequently Skolem-function derivation can be decoupled from Skolemization-based solvers and computed from standard search-based ones. Fundamentally different from prior methods, our derivation in essence constructs Skolem functions following the variable quantification order. It permits constructing a subset of Skolem functions of interests rather than the whole, and is particularly desirable in many applications. Experimental results show the robust scalability and strong benefits of the new method.

43 citations


Journal ArticleDOI
TL;DR: A method for inducing logical rules from empirical data, Reverse Analysis, is presented; it aims to determine which logical rules are entrenched in a neural network when the connection values of a neural network resulting from Hebbian learning on the data are given.
Abstract: Neural networks are becoming very popular with data mining practitioners because comparisons on real data sets have demonstrated that their predictive power matches that of statistical techniques. Building on this idea, we present a method for inducing logical rules from empirical data: Reverse Analysis. Given the connection values of a neural network resulting from Hebbian learning on the data, the method determines which logical rules are entrenched in the network. This method is tested with some real-life data sets. In real-life data sets, logical rules are assumed to be in conjunctive normal form (CNF) since Horn clauses are inadequate.

38 citations


Journal ArticleDOI
TL;DR: The combinatorial properties of non-boolean conjunctive normal forms (clause-sets), allowing arbitrary (but finite) sets of values for variables, while literals express that some variable shall not get some (given) value, are studied.
Abstract: Concluding this mini-series of 2 articles on the foundations of generalised clause-sets, we study the combinatorial properties of non-boolean conjunctive normal forms (clause-sets), allowing arbitrary (but finite) sets of values for variables, while literals express that some variable shall not get some (given) value. First we study the properties of the direct translation (or “encoding”) of generalised clause-sets into boolean clause-sets. Many combinatorial properties are preserved, and as a result we can lift fixed-parameter tractability of satisfiability in the maximal deficiency from the boolean case to the general case. Then we turn to irredundant clause-sets, which generalise minimally unsatisfiable clause-sets, and we prove basic properties. The simplest irredundant clause-sets are hitting clause-sets, and we provide characterisations and generalisations. Unsatisfiable irredundant clause-sets are the minimally unsatisfiable clause-sets, and we provide basic tools. These tools allow us to characterise the minimally unsatisfiable clause-sets of minimal deficiency. Finally we provide a new translation of generalised boolean clause-sets into boolean clause-sets, the nested translation, which preserves the conflict structure. As an application, we can generalise results for boolean clause-sets regarding the hermitian rank/defect, especially the characterisation of unsatisfiable hitting clause-sets where between every two clauses we have exactly one conflict. We conclude with a list of open problems, and a discussion of the “generic translation scheme”.

27 citations


Journal ArticleDOI
TL;DR: It is shown that every Boolean function represented by a k-CNF (or a k-DNF) has average sensitivity at most k, which is tight since the parity function on k variables has average sensitivity k.
Abstract: The average sensitivity of a Boolean function is the expectation, given a uniformly random input, of the number of input bits which when flipped change the output of the function. Answering a question by O'Donnell, we show that every Boolean function represented by a k-CNF (or a k-DNF) has average sensitivity at most k. This bound is tight since the parity function on k variables has average sensitivity k.
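The quantity in question is easy to compute exactly for small n, which makes the tightness claim checkable directly. A minimal sketch (the encoding of inputs as 0/1 tuples is an assumed convention):

```python
from itertools import product

def average_sensitivity(f, n):
    """Average sensitivity: the expected number of coordinates whose
    flip changes f, for a uniformly random n-bit input."""
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            flipped = list(x)
            flipped[i] ^= 1
            if f(x) != f(tuple(flipped)):
                total += 1
    return total / 2 ** n

k = 4
parity = lambda x: sum(x) % 2   # representable by a k-CNF (width-k clauses)
print(average_sensitivity(parity, k))  # 4.0, matching the tight bound k
```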

25 citations


Book ChapterDOI
Jens Otten1
04 Jul 2011
TL;DR: A non-clausal connection calculus for classical first-order logic is presented that does not require the translation of input formulae into any clausal form and the definition of clauses is generalized, which may now also contain (sub-) matrices.
Abstract: A non-clausal connection calculus for classical first-order logic is presented that does not require the translation of input formulae into any clausal form. The definition of clauses is generalized, which may now also contain (sub-) matrices. Copying of appropriate (sub-)clauses in a dynamic way, i.e. during the actual proof search, is realized by a generalized extension rule. Thus, the calculus combines the advantage of a non-clausal proof search in tableau calculi with the more efficient goal-oriented proof search of clausal connection calculi. Soundness, completeness, and (relative) complexity results are presented as well as some optimization techniques.

24 citations


Journal ArticleDOI
TL;DR: This first part of a mini-series of two articles, a solid foundation for (generalised) clause-sets is built up, including the notion of autarky systems, the interplay between autarkies and resolution, and basic notions of (DP-)reductions.
Abstract: We consider the problem of generalising boolean formulas in conjunctive normal form by allowing non-boolean variables, with the goal of maintaining combinatorial properties. Requiring that a literal involves only a single variable, the most general form of literals are the wellknown “signed literals”, corresponding to unary constraints in CSP. However we argue that only the restricted form of “negative monosigned literals” and the resulting generalised clause-sets, corresponding to “sets of no-goods” in the AI literature, maintain the essential properties of boolean conjunctive normal forms. In this first part of a mini-series of two articles, we build up a solid foundation for (generalised) clause-sets, including the notion of autarky systems, the interplay between autarkies and resolution, and basic notions of (DP-)reductions. As a basic combinatorial parameter of generalised clause-sets we introduce the (generalised) notion of deficiency, which in the boolean case is the difference between the number of clauses and the number of variables. Autarky theory plays a fundamental role here, and we concentrate especially on matching autarkies (based on matching theory). A natural task is to determine the structure of (matching) lean clause-sets, which do not admit non-trivial (matching) autarkies. A central result is the computation of the lean kernel (the largest lean subset) of a (generalised) clause-set in polynomial time for bounded maximal deficiency.

Book ChapterDOI
12 Sep 2011
TL;DR: A number of novel QBF formulations of the MUS-membership problem are developed and their practicality is evaluated using modern off-the-shelf solvers.
Abstract: This paper tackles the problem of deciding whether a given clause belongs to some minimally unsatisfiable subset (MUS) of a formula, where the formula is in conjunctive normal form (CNF) and unsatisfiable. Deciding MUS-membership helps the understanding of why a formula is unsatisfiable. If a clause does not belong to any MUS, then removing it will certainly not contribute to restoring the formula's consistency. Unsatisfiable formulas and consistency restoration in particular have a number of practical applications in areas such as software verification or product configuration. The MUS-membership problem is known to be in the second level of polynomial hierarchy, more precisely it is Σ2P -complete. Hence, quantified Boolean formulas (QBFs) represent a possible avenue for tackling the problem. This paper develops a number of novel QBF formulations of the MUS-membership problem and evaluates their practicality using modern off-the-shelf solvers.

Book ChapterDOI
16 May 2011
TL;DR: The presented tool cmMUS solves the problem by translating it to propositional circumscription, a well-known problem from the area of nonmonotonic reasoning, and consistently outperforms other approaches to the problem.
Abstract: This article presents cmMUS--a tool for deciding whether a clause belongs to some minimal unsatisfiable subset (MUS) of a given formula. While MUS-membership has a number of practical applications, related with understanding the causes of unsatisfiability, it is computationally challenging--it is Σ2P-complete. The presented tool cmMUS solves the problem by translating it to propositional circumscription, a well-known problem from the area of nonmonotonic reasoning. The tool consistently outperforms other approaches to the problem, which is demonstrated on a variety of benchmarks.

Proceedings ArticleDOI
01 Nov 2011
TL;DR: The experiments demonstrate that the MAP inference for PEL-CNF successfully detects and localizes volleyball events in the face of different types of synthetic noise introduced in the ground-truth video annotations.
Abstract: This is a theoretical paper that proves that probabilistic event logic (PEL) is MAP-equivalent to its conjunctive normal form (PEL-CNF). This allows us to address the NP-hard MAP inference for PEL in a principled manner. We first map the confidence-weighted formulas from a PEL knowledge base to PEL-CNF, and then conduct MAP inference for PEL-CNF using stochastic local search. Our MAP inference leverages the spanning-interval data structure for compactly representing and manipulating entire sets of time intervals without enumerating them. For experimental evaluation, we use the specific domain of volleyball videos. Our experiments demonstrate that the MAP inference for PEL-CNF successfully detects and localizes volleyball events in the face of different types of synthetic noise introduced in the ground-truth video annotations.
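The stochastic local search used for the MAP inference can be illustrated generically. The sketch below is a plain hill climber over weighted CNF clauses, not the paper's PEL-CNF inference with spanning intervals; the clause encoding, weights, and parameters are all assumptions for illustration.

```python
import random

def local_search(clauses, weights, nvars, flips=2000, seed=1):
    """Stochastic local search for weighted MAX-SAT: flip one variable
    at a time and keep any flip that does not lower the satisfied weight."""
    rng = random.Random(seed)
    a = [rng.random() < 0.5 for _ in range(nvars)]

    def score(assignment):
        return sum(w for c, w in zip(clauses, weights)
                   if any(assignment[abs(l) - 1] == (l > 0) for l in c))

    best = score(a)
    for _ in range(flips):
        v = rng.randrange(nvars)
        a[v] = not a[v]
        s = score(a)
        if s >= best:
            best = s          # uphill or sideways: keep the flip
        else:
            a[v] = not a[v]   # downhill: undo it

    return best, a

# (x1), (not x1), (x2) with weights 1, 2, 3: best to falsify the weight-1 clause
best, assignment = local_search([[1], [-1], [2]], [1, 2, 3], 2)
print(best, assignment)  # 5 [False, True]
```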

Journal ArticleDOI
TL;DR: It is shown that the addition "σ," the multiplication "π" and two kinds of special weighting operations in BPUNN and BPSNN can implement the logical operators "V," "Λ," and "¬" on Boolean algebra.
Abstract: In order to more efficiently realize Boolean func tions by using neural networks, we propose a binary product-unit neural network (BPUNN) and a binary pi-sigma neural network (BPSNN). The network weights can be determined by one-step training. It is shown that the addition "σ," the multiplication "π" and two kinds of special weighting operations in BPUNN and BPSNN can implement the logical operators "V," "Λ," and "¬" on Boolean algebra 〈Z2, V, Λ, ¬, 0,1〉 (Z2 = {0,1}), respec tively. The proposed two neural networks enjoy the following advantages over the existing networks: 1) for a complete truth table of N variables with both truth and false assignments, the corresponding Boolean function can be realized by accordingly choosing a BPUNN or a BPSNN such that at most 2N-1 hidden nodes are needed, while O(2N), precisely 2N or at most 2 , hidden nodes are needed by existing networks; 2) a new network BPUPS based on a collaboration of BPUNN and BPSNN can be defined to deal with incomplete truth tables, while the existing networks can only deal with complete truth tables; and 3) the values of the weights are all simply -1 or 1, while the weights of all the existing networks are real numbers. Supporting numerical experiments are provided as well. Finally, we present the risk bounds of BPUNN, BPSNN, and BPUPS, and then analyze their probably approximately correct learnability.

Proceedings ArticleDOI
16 Jul 2011
TL;DR: A fully polynomial randomized approximation scheme (FPRAS) that computes the price of any security in disjunctive normal form (DNF) within an ε multiplicative error factor in time polynomial in 1/ε and the size of the input, with high probability and under reasonable assumptions.
Abstract: Computing the marketmaker price of a security in a combinatorial prediction market is #P-hard. We devise a fully polynomial randomized approximation scheme (FPRAS) that computes the price of any security in disjunctive normal form (DNF) within an ε multiplicative error factor in time polynomial in 1/ε and the size of the input, with high probability and under reasonable assumptions. Our algorithm is a Monte-Carlo technique based on importance sampling. The algorithm can also approximately price securities represented in conjunctive normal form (CNF) with additive error bounds. To illustrate the applicability of our algorithm, we show that many securities in Yahoo!'s popular combinatorial prediction market game called Predictalot can be represented by DNF formulas of polynomial size.
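The flavor of such Monte-Carlo estimators can be seen in the classical Karp-Luby scheme for counting DNF solutions, the textbook ancestor of this kind of importance sampling. The paper's pricing algorithm is related but not identical; everything below is an illustrative sketch with an assumed term encoding.

```python
import random

def karp_luby(terms, nvars, samples=20000, seed=0):
    """Karp-Luby importance sampling: unbiased estimate of the number
    of assignments satisfying a DNF formula (terms of signed ints)."""
    rng = random.Random(seed)
    # each term is satisfied by 2^(nvars - |term|) assignments
    weights = [2 ** (nvars - len(t)) for t in terms]
    total = sum(weights)
    hits = 0
    for _ in range(samples):
        # pick a term with probability proportional to its weight,
        # then a uniform assignment satisfying that term
        i = rng.choices(range(len(terms)), weights=weights)[0]
        a = {abs(l): (l > 0) for l in terms[i]}
        for v in range(1, nvars + 1):
            if v not in a:
                a[v] = rng.random() < 0.5
        # count the sample only when i is the FIRST satisfied term,
        # so each satisfying assignment is counted exactly once
        first = next(j for j, t in enumerate(terms)
                     if all(a[abs(l)] == (l > 0) for l in t))
        hits += (first == i)
    return total * hits / samples

# x1 or x2 over two variables: 3 of the 4 assignments satisfy it
print(karp_luby([[1], [2]], 2))  # close to 3.0
```

The estimator's relative error shrinks with the sample count, which is what makes an FPRAS with a 1/ε-polynomial running time possible.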

Book ChapterDOI
28 Jun 2011
TL;DR: A more efficient version of a previously introduced prime implicate algorithm is presented, reducing the use of subsumption; the algorithm is shown to be easily tuned to produce restricted sets of prime implicates.
Abstract: A more efficient version of the prime implicate algorithm introduced in earlier work, which reduces the use of subsumption, is presented. The algorithm is shown to be easily tuned to produce restricted sets of prime implicates. Experiments that illustrate these improvements are described.
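For intuition, prime implicates can be computed naively by saturating a clause set under resolution and then discarding subsumed clauses; the subsumption work that this naive approach does heavily is exactly what the tuned algorithm tries to reduce. A brute-force sketch (not the paper's algorithm; the clause encoding is assumed):

```python
from itertools import combinations

def prime_implicates(clauses):
    """All prime implicates of a CNF formula: saturate under resolution,
    then keep only the subsumption-minimal clauses."""
    cs = {frozenset(c) for c in clauses}
    while True:
        resolvents = set()
        for a, b in combinations(cs, 2):
            for lit in a:
                if -lit in b:
                    r = (a - {lit}) | (b - {-lit})
                    if not any(-l in r for l in r):  # drop tautologies
                        resolvents.add(frozenset(r))
        if resolvents <= cs:
            break
        cs |= resolvents
    # subsumption: a clause with a proper subset present is not prime
    return {c for c in cs if not any(d < c for d in cs)}

# (x1 or x2) and (not x2 or x3) additionally entails (x1 or x3)
pis = prime_implicates([[1, 2], [-2, 3]])
print(sorted(sorted(c) for c in pis))  # [[-2, 3], [1, 2], [1, 3]]
```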

Journal ArticleDOI
TL;DR: It is shown that general permanent polynomials cannot be expressed by CNF polynomials of bounded tree-width. The result also holds when the clique-width of the incidence graph is bounded, but in that case it relies on the widely believed complexity-theoretic assumption #P ⊈ FP/poly.

Book
09 Feb 2011
TL;DR: This thesis considers SAT for a subclass of CNF, the so called Mixed Horn formula class (MHF), and proposes that MHF has a central relevance in CNF and proves the NP-completeness of XSAT for CNF formulas which are l-regular meaning that every variable occurs exactly l times, where l>=3 is a fixed integer.
Abstract: The Boolean conjunctive normal form (CNF) satisfiability problem, called SAT for short, gets as input a CNF formula and has to decide whether this formula admits a satisfying truth assignment. As is well known, the remarkable result by S. Cook in 1971 established SAT as the first and genuine complete problem for the complexity class NP. In this thesis we consider SAT for a subclass of CNF, the so-called Mixed Horn formula class (MHF). A formula F in MHF consists of a 2-CNF part P and a Horn part H. We propose that MHF has a central relevance in CNF because many prominent NP-complete problems, e.g. Feedback Vertex Set, Vertex Cover, Dominating Set and Hitting Set, can easily be encoded as MHF. Furthermore, we show that SAT remains NP-complete for some interesting subclasses of MHF. We also provide algorithms for some of these subclasses solving SAT in a better running time than O(2^0.5284n) which is the best bound for MHF so far. One of these subclasses consists of formulas, where the Horn part is negative monotone and the variable graph corresponding to the positive 2-CNF part P consists of disjoint triangles only. Regarding the other subclass consisting of certain k-uniform linear mixed Horn formulas, we provide an algorithm solving SAT in time O(k^(n/k)), for k>=4. Additionally, we consider mixed Horn formulas F in MHF for which the following holds: H is negative monotone, c =3. We also prove the NP-completeness of XSAT for CNF formulas which are l-regular, meaning that every variable occurs exactly l times, where l>=3 is a fixed integer. On that basis, we can provide the NP-completeness of XSAT for the subclass of linear and l-regular formulas. This result is transferable to the monotone case. Moreover, we provide an algorithm solving XSAT for the subclass of monotone, linear and l-regular formulas faster than the so far best algorithm from J. M. Byskov et al. for CNF-XSAT with a running time of O(2^0.2325n).
Using some connections to finite projective planes, we can also show that XSAT remains NP-complete for linear and l-regular formulas that in addition are l-uniform whenever l=q+1, where q is a prime power. Thus XSAT most likely is NP-complete for the other values of l>= 3, too. Apart from that, we are interested in exact linear formulas: Here each pair of distinct clauses has exactly one variable in common. We show that NAESAT is polynomial-time decidable restricted to exact linear formulas. Reinterpreting this result enables us to give a partial answer to a long-standing open question mentioned by T. Eiter: Classify the computational complexity of the symmetrical intersecting unsatisfiability problem (SIM-UNSAT). Then we show the NP-completeness of XSAT for monotone and exact linear formulas, which we can also establish for the subclass of formulas whose clauses have length at least k, k>=3. This is somehow surprising since both SAT and not-all-equal SAT are polynomial-time solvable for exact linear formulas. However, for k=3,4,5,6 we can show that XSAT is polynomial-time solvable for the k-uniform, monotone and exact linear formula class.

Book
16 Sep 2011
TL;DR: This thesis describes a prototype SODS for single-file relational queries and gives an integrated analysis of its major design problems: estimation of the number of records satisfying a condition; query optimization; storing information about a set of queries; and optimal selection of secondary indices.
Abstract: A Self-Organizing Database System (SODS) monitors queries asked, finds a good (or optimal) database structure for those queries, and suggests or does the reorganization. In this thesis we describe a prototype SODS for single-file relational queries and give an integrated analysis of its major design problems: (1) estimation of the number of records satisfying a condition (i.e., condition selectivity); (2) query optimization; (3) storing information about a set of queries; (4) optimal selection of secondary indices. We present new results for each of those problems. Some of this research was implemented in FASTSCAN, a commercial query system. We present a new method for accurate estimation of the number of records satisfying a condition field rel constant, where rel is one of "=", " ", "≤", "≥". We also examine estimates for more complicated conditions. We present elementary operations (such as UNION, INTERSECT) on pointer and record streams. We show how to use the query parse tree to construct a query evaluation method (EM) from those operations. Then we give an algorithm for selecting the optimal EM, based on converting the query to conjunctive normal form. We examine ways to compress information about a set of queries by combining information for similar queries. We derive a compression scheme which allows a correct and fast computation of the cost of the average query under any index set. We combine all previous results in analyzing the NP-hard problem of optimal index selection. We present two algorithms for it. The first one always finds the optimal answer and runs fast on real-size problems despite its exponential worst-case complexity. The second one (a Greedy method) runs much faster, yet finds the optimal answer very frequently. We analyze the Maximum Cover problem (also NP-hard), a simplification of the optimal index selection.
We prove that the Greedy method is an epsilon-approximate algorithm: its answer is always > 63% of the optimal answer.
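The CNF conversion step used for selecting the evaluation method can be sketched as a recursive distribution of OR over AND on a condition tree. This is an illustrative toy, not the thesis's implementation: the tuple encoding of the tree is an assumption, and real systems must guard against the exponential blow-up that naive distribution can cause.

```python
def to_cnf(node):
    """Convert a condition tree to CNF (a list of clauses, each a list
    of atomic conditions), distributing OR over AND.  A node is either
    an atomic condition (a string) or a tuple ('and'|'or', left, right)."""
    if isinstance(node, str):
        return [[node]]
    op, left, right = node
    lc, rc = to_cnf(left), to_cnf(right)
    if op == 'and':
        return lc + rc                                # concatenate clause lists
    return [cl + cr for cl in lc for cr in rc]        # distribute OR over AND

query = ('or', ('and', 'age>30', 'dept=sales'), 'salary>50000')
print(to_cnf(query))
# [['age>30', 'salary>50000'], ['dept=sales', 'salary>50000']]
```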

Posted ContentDOI
TL;DR: In this article, the authors present a mini-series of two articles on the foundations of satisfiability of conjunctive normal forms with non-boolean variables, to appear in Fundamenta Informaticae, 2011.
Abstract: This is the report-version of a mini-series of two articles on the foundations of satisfiability of conjunctive normal forms with non-boolean variables, to appear in Fundamenta Informaticae, 2011. These two parts are here bundled in one report, each part yielding a chapter. Generalised conjunctive normal forms are considered, allowing literals of the form "variable not-equal value". The first part sets the foundations for the theory of autarkies, with emphasis on matching autarkies. The main results concern various polynomial-time results depending on the deficiency. The second part considers translations to boolean clause-sets and irredundancy as well as minimal unsatisfiability. Main results concern the classification of minimally unsatisfiable clause-sets and the relations to the hermitian rank of graphs. Both parts also contain discussions of many open problems.

Book ChapterDOI
05 Dec 2011
TL;DR: The computation of the disjunctive invariant is performed by a form of quantifier elimination expressed using SMT-solving, then the node is abstracted, each disjunct representing an abstract state.
Abstract: We wish to abstract nodes in a reactive programming language, such as Lustre, into nodes with a simpler control structure, with a bound on the number of control states. In order to do so, we compute disjunctive invariants in predicate abstraction, with a bounded number of disjuncts, then we abstract the node, each disjunct representing an abstract state. The computation of the disjunctive invariant is performed by a form of quantifier elimination expressed using SMT-solving. The same method can also be used to obtain disjunctive loop invariants.

Book ChapterDOI
05 May 2011
TL;DR: Two approaches are presented on how to extend the broadly applied Unit Propagation technique where a variable assignment is implied iff a clause has all but one of its literals assigned to false.
Abstract: The tremendous improvement in SAT solving has made SAT solvers a core engine for many real-world applications. Though still a branch-and-bound approach, purposive engineering of the original algorithm has enhanced state-of-the-art solvers to tackle huge and difficult SAT instances. The bulk of solving time is spent on iteratively propagating variable assignments that are implied by decisions. In this paper we present two approaches on how to extend the broadly applied Unit Propagation technique, where a variable assignment is implied iff a clause has all but one of its literals assigned to false. We propose efficient ways to utilize more reasoning in the main component of current SAT solvers so as to be less dependent on felicitous branching decisions.
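Unit Propagation as described, where a clause with all but one literal assigned false forces the remaining literal, can be sketched in a few lines. This is the naive version without the watched-literal scheme real solvers use, and the clause encoding is an assumed convention.

```python
def unit_propagate(clauses):
    """Unit Propagation: while some clause has all but one of its
    literals assigned false, the remaining literal is forced true.
    Returns (None, assignment) on a conflict (a falsified clause)."""
    assignment = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                          # clause already satisfied
            unassigned = [l for l in clause if abs(l) not in assignment]
            if not unassigned:
                return None, assignment           # conflict: clause falsified
            if len(unassigned) == 1:
                lit = unassigned[0]               # unit clause: force it
                assignment[abs(lit)] = lit > 0
                changed = True
    return clauses, assignment

# The unit clause (not x1) forces x2 and then x3 through the implications
_, a = unit_propagate([[-1], [1, 2], [-2, 3]])
print(a)  # {1: False, 2: True, 3: True}
```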

Book ChapterDOI
26 Mar 2011
TL;DR: This paper identifies a notion of Craig interpolant for the SSAT framework and develops an algorithm for computing such interpolants based on SSAT resolution and addresses the use of interpolation in SSAT-based BMC, turning the falsification procedure into a verification approach for probabilistic safety properties.
Abstract: The stochastic Boolean satisfiability (SSAT) problem was introduced by Papadimitriou in 1985 by adding a probabilistic model of uncertainty to propositional satisfiability through randomized quantification. SSAT has many applications, among them bounded model checking (BMC) of symbolically represented Markov decision processes. This paper identifies a notion of Craig interpolant for the SSAT framework and develops an algorithm for computing such interpolants based on SSAT resolution. As a potential application, we address the use of interpolation in SSAT-based BMC, turning the falsification procedure into a verification approach for probabilistic safety properties.

01 Jan 2011
TL;DR: In this article, the trade-off between space and length of resolution proofs for formulas in conjunctive normal form (CNF) has been investigated, and trade-ofis have been shown to be superpolynomial and essentially tight.
Abstract: For current state-of-the-art satisflability algorithms based on the DPLL procedure and clause learn- ing, the two main bottlenecks are the amounts of time and memory used. In the fleld of proof complexity, these resources correspond to the length and space of resolution proofs for formulas in conjunctive normal form (CNF). There has been a long line of research investigating these proof complexity measures, but while strong results have been established for length, our understanding of space and how it relates to length has remained quite poor. In particular, the question whether resolution proofs can be optimized for length and space simulta- neously, or whether there are trade-ofis between these two measures, has remained essentially open apart from a few results in restricted settings. In this paper, we remedy this situation by proving a host of length-space trade-ofi results for resolution in a completely general setting. Our collection of trade-ofis cover almost the whole range of values for the space complexity of formulas, and most of the trade-ofis are superpolynomial or even exponential and essentially tight. Using similar techniques, we show that these trade-ofis in fact extend (albeit with worse parameters) to the exponentially stronger k-DNF resolution proof systems, which operate with formulas in disjunctive normal form with terms of bounded arity k. We also answer the open question whether the k-DNF resolution systems form a strict hierarchy with respect to space in the a-rmative. 
Our key technical contribution is the following, somewhat surprising, theorem: any CNF formula F can be transformed by simple variable substitution into a new formula F′ such that if F has the right properties, F′ can be proven in essentially the same length as F, whereas on the other hand the minimal number of lines one needs to keep in memory simultaneously in any proof of F′ is lower-bounded by the minimal number of variables needed simultaneously in any proof of F. Applying this theorem to so-called pebbling formulas defined in terms of pebble games on directed acyclic graphs, we obtain our results.
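One way to picture a "simple variable substitution" of the kind used in such results is XOR substitution: each variable is replaced by the exclusive-or of two fresh variables, and the substituted clause is re-expanded into CNF. The helper below is our own brute-force illustration of that expansion, not the paper's construction.

```python
from itertools import product

def xor_substitute(clause):
    """Expand one CNF clause under XOR substitution: variable x becomes
    x1 xor x2 over fresh variables (x, 1) and (x, 2). The substituted clause
    is converted back to CNF by brute force, emitting one blocking clause per
    falsifying assignment (fine for short clauses). Output literals are
    (sign, variable) pairs."""
    vars_ = sorted({abs(l) for l in clause})
    new_vars = [(v, i) for v in vars_ for i in (1, 2)]
    out = []
    for bits in product([False, True], repeat=len(new_vars)):
        val = dict(zip(new_vars, bits))
        def lit_value(l):
            x = val[(abs(l), 1)] ^ val[(abs(l), 2)]
            return x if l > 0 else not x
        if not any(lit_value(l) for l in clause):
            # this assignment falsifies the substituted clause: forbid it
            out.append([(-1 if val[nv] else 1, nv) for nv in new_vars])
    return out
```

A unit clause over one variable yields two blocking clauses (the two assignments with x1 = x2), and a two-literal clause yields four, so the substituted formula grows but stays CNF.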

Book ChapterDOI
23 Aug 2011
TL;DR: It is proved that P systems with active membranes operating under minimal parallelism are able to solve NP-complete and PP-complete problems in linear time and exponential space when using different types of rules.
Abstract: We prove that P systems with active membranes operating under minimal parallelism are able to solve NP-complete and PP-complete problems in linear time and exponential space when using different types of rules. We also prove that these systems can simulate register machines.

Book ChapterDOI
19 Jun 2011
TL;DR: This paper adapts the Upper Confidence bounds applied to Trees (UCT) algorithm which has been successfully used in many game playing programs including MoGo, one of the strongest computer Go players.
Abstract: In this paper we perform a preliminary investigation into the application of sampling-based search algorithms to satisfiability testing of propositional formulas in Conjunctive Normal Form (CNF). In particular, we adapt the Upper Confidence bounds applied to Trees (UCT) algorithm [5], which has been successfully used in many game-playing programs including MoGo, one of the strongest computer Go players [3].
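As a rough illustration of how UCT can drive SAT search, the sketch below treats partial assignments as tree nodes, selects branches by the UCB1 rule, and scores random rollouts by the fraction of satisfied clauses. This is a toy of our own devising under those assumptions, not the adaptation evaluated in the paper.

```python
import math
import random

def uct_sat(clauses, n_vars, iters=3000, c=1.4, seed=1):
    """Toy UCT search for a satisfying assignment of a CNF formula
    (clauses are lists of signed ints over variables 1..n_vars).
    Returns a satisfying assignment (list of bools) or None."""
    rng = random.Random(seed)
    stats = {(): [0, 0.0]}  # partial assignment -> [visits, total reward]

    def frac_sat(assign):
        return sum(any(assign[abs(l) - 1] == (l > 0) for l in cl)
                   for cl in clauses) / len(clauses)

    for _ in range(iters):
        node, path = (), [()]
        while len(node) < n_vars:
            kids = [node + (b,) for b in (False, True)]
            fresh = [k for k in kids if k not in stats]
            if fresh:  # expansion: add one unvisited child
                node = rng.choice(fresh)
                stats[node] = [0, 0.0]
                path.append(node)
                break
            # selection: UCB1 over the two children
            logn = math.log(stats[node][0])
            node = max(kids, key=lambda k: stats[k][1] / stats[k][0]
                       + c * math.sqrt(logn / stats[k][0]))
            path.append(node)
        # rollout: assign the remaining variables uniformly at random
        assign = list(node) + [rng.random() < 0.5 for _ in range(n_vars - len(node))]
        r = frac_sat(assign)
        if r == 1.0:
            return assign  # found a model
        for p in path:  # backpropagation
            stats[p][0] += 1
            stats[p][1] += r
    return None
```

The reward signal (fraction of satisfied clauses) is one plausible choice among many; any bounded heuristic score would slot into the same skeleton.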

Book ChapterDOI
19 Sep 2011
TL;DR: A computing model based on the technique of DNA strand displacement which performs a chain of logical resolutions with logical formulae in conjunctive normal form and allows to run logic programs composed of Horn clauses by cascading resolution steps and, therefore, possibly function as an autonomous programmable nano-device.
Abstract: We present a computing model based on the technique of DNA strand displacement which performs a chain of logical resolutions with logical formulae in conjunctive normal form. The model is enzyme-free and autonomous. Each clause of a formula is encoded in a separate DNA molecule: propositions are encoded by assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded by combining different propositions in the same strand. The model allows logic programs composed of Horn clauses to be run by cascading resolution steps, and could therefore function as an autonomous programmable nano-device. The technique can also be used to solve SAT. The resulting SAT algorithm has linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula.
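The chain of logical resolutions that the DNA model performs chemically can be mirrored in software by the classical propositional resolution rule. The sketch below (our own helper functions, not the molecular encoding) saturates a clause set under resolution and reports whether the empty clause, i.e. a refutation, is derivable.

```python
def resolve(c1, c2):
    """Resolve two CNF clauses (frozensets of signed ints) on their unique
    complementary literal pair; returns the resolvent, or None when there is
    no pivot or the resolvent would be tautological (several pivots)."""
    pivots = [l for l in c1 if -l in c2]
    if len(pivots) != 1:
        return None
    p = pivots[0]
    return frozenset((c1 - {p}) | (c2 - {-p}))

def derive_empty(clauses):
    """Saturate the clause set under resolution; True iff the empty clause
    (i.e. unsatisfiability of the formula) is derivable."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = {r for a in clauses for b in clauses
               if (r := resolve(a, b)) is not None} - clauses
        if frozenset() in new | clauses:
            return True
        if not new:
            return False
        clauses |= new
```

For Horn clauses this saturation specializes to the cascaded resolution steps the abstract describes, since each step consumes a positive unit against a negative literal.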

DissertationDOI
01 Jan 2011
TL;DR: It is proved that certain FCA exhibit chaos in CNF, in contrast to the periodic behaviours of DNF FCA, and five different forms of fuzzy logic are explored.
Abstract: Cellular automata (CA) are discrete dynamical systems composed of a lattice of finite-state cells. At each time step, each cell updates its state as a function of the previous state of itself and its neighbours. Fuzzy cellular automata (FCA) are a real-valued extension of Boolean cellular automata which “fuzzifies” Boolean logic in the transition function using real values between zero and one (inclusive). To date, FCA have only been studied in disjunctive normal form (DNF). In this thesis, we study FCA in conjunctive normal form (CNF). We classify FCA in CNF both analytically and empirically and compare these classes to their DNF counterparts. We prove that certain FCA exhibit chaos in CNF, in contrast to the periodic behaviours of DNF FCA. We also briefly explore five different forms of fuzzy logic, and suggest further study. In support of this research, we introduce novel methods of simulating and visualizing FCA.
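The "fuzzification" of CNF described above can be illustrated with one common choice of fuzzy connectives: product t-norm for AND, probabilistic sum for OR, and complement for NOT. The thesis explores several such forms; the evaluator below is our own sketch with this one choice, not its exact definitions.

```python
from functools import reduce

def fuzzy_not(a):
    return 1.0 - a

def fuzzy_or(a, b):
    return a + b - a * b  # probabilistic sum (t-conorm)

def fuzzy_and(a, b):
    return a * b  # product t-norm

def eval_fuzzy_cnf(clauses, values):
    """Evaluate a CNF formula under product/probabilistic-sum fuzzy logic.
    `clauses` are lists of signed ints; `values` maps each variable to a
    degree in [0, 1]. On crisp inputs (0.0/1.0) this agrees with ordinary
    Boolean CNF evaluation."""
    def lit(l):
        v = values[abs(l)]
        return v if l > 0 else fuzzy_not(v)
    result = 1.0
    for cl in clauses:
        result = fuzzy_and(result, reduce(fuzzy_or, (lit(l) for l in cl)))
    return result
```

An FCA transition function in CNF would apply such an evaluator to each cell's real-valued neighbourhood at every time step.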

Journal ArticleDOI
TL;DR: A dynamic theorem proving algorithm that computes all the consistency-based diagnostic sets directly, without computing all the conflict sets and the hitting sets of their collection as the classical methods do.
Abstract: Research highlights: (1) A theorem on the number of components in a minimal diagnosis is proposed, which effectively reduces the computation of non-minimal diagnoses. (2) The dynamic theorem proving algorithm can dynamically determine the satisfiability of similar clause sets. (3) Dynamic theorem proving can decide whether all the nodes on the same level are diagnoses with a single dynamic calculation, while the SAT-MBD method has to call the algorithm once for every node. (4) The algorithm computes all the minimal diagnoses directly, without computing all the conflict sets and their hitting sets. In this paper, a dynamic theorem proving (DTP) algorithm is proposed for dynamically judging whether a component set is a consistency-based diagnosis in model-based diagnosis. First, the model of the system to be diagnosed and all the observations are described with conjunctive normal forms (CNF), and the problem of diagnosis is translated into the satisfiability of the related clauses in the CNF files. Next, all the minimal consistency-based diagnostic sets are derived by calling DTP dynamically in combination with the CSSE-tree. Because of the theorem about the number of components in a minimal diagnosis, the majority of non-minimal diagnoses are never produced. Moreover, this approach computes all the consistency-based diagnostic sets directly, without computing all the conflict sets and then the hitting sets of their collection as the classical methods do. Finally, the approach's soundness, completeness and complexity are analyzed and proved; the results show that the program is easy to implement and that the diagnosis efficiency is greatly improved.
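The translation of diagnosis into CNF satisfiability can be sketched as follows: a candidate set of components is a consistency-based diagnosis iff the system description, the observations, and the corresponding ab-literals are jointly satisfiable. The names `satisfiable` and `is_diagnosis` and the tiny inverter example are our own illustrations of this standard reduction, not the DTP algorithm itself.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force CNF satisfiability check (fine for tiny models).
    Clauses are lists of signed ints over variables 1..n_vars."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses)
        for bits in product([False, True], repeat=n_vars)
    )

def is_diagnosis(sd, obs, ab_vars, candidate, n_vars):
    """Consistency-based check: `candidate` (the set of ab-variables assumed
    faulty) is a diagnosis iff the system description `sd`, the observations
    `obs`, and the ab-literals are jointly satisfiable."""
    units = [[v] if v in candidate else [-v] for v in ab_vars]
    return satisfiable(sd + obs + units, n_vars)
```

For a single inverter with ab-variable 1, input 2, and output 3, the healthy model says ¬ab → (out = ¬in), i.e. clauses (ab ∨ ¬in ∨ ¬out) and (ab ∨ in ∨ out); observing in = 1 and out = 1 is inconsistent with the empty candidate but consistent once the inverter is assumed abnormal.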