
Showing papers on "Disjunctive normal form published in 2005"


Book ChapterDOI
14 Mar 2005
TL;DR: A new decomposition inference rule is presented, which can be combined with any resolution-based calculus compatible with the standard notion of redundancy, and is expected to be suitable for practical usage.
Abstract: Resolution-based calculi are among the most widely used calculi for theorem proving in first-order logic. Numerous refinements of resolution are nowadays available, such as e.g. basic superposition, a calculus highly optimized for theorem proving with equality. However, even such an advanced calculus does not restrict inferences enough to obtain decision procedures for complex logics, such as \(\mathcal{SHIQ}\). In this paper, we present a new decomposition inference rule, which can be combined with any resolution-based calculus compatible with the standard notion of redundancy. We combine decomposition with basic superposition to obtain three new decision procedures: (i) for the description logic \(\mathcal{SHIQ}\), (ii) for the description logic \(\mathcal{ALCHIQ}b\), and (iii) for answering conjunctive queries over \(\mathcal{SHIQ}\) knowledge bases. The first two procedures are worst-case optimal and, based on the vast experience in building efficient theorem provers, we expect them to be suitable for practical usage.

67 citations


Journal ArticleDOI
TL;DR: It is proved that it is computationally hard to simulate Winnow's behavior for learning DNF over an expanded feature space of exponentially many conjunctions, and thus that such kernel functions for Winnow are not efficiently computable.
Abstract: The paper studies machine learning problems where each example is described using a set of Boolean features and where hypotheses are represented by linear threshold elements. One method of increasing the expressiveness of learned hypotheses in this context is to expand the feature set to include conjunctions of basic features. This can be done explicitly or where possible by using a kernel function. Focusing on the well known Perceptron and Winnow algorithms, the paper demonstrates a tradeoff between the computational efficiency with which the algorithm can be run over the expanded feature space and the generalization ability of the corresponding learning algorithm. We first describe several kernel functions which capture either limited forms of conjunctions or all conjunctions. We show that these kernels can be used to efficiently run the Perceptron algorithm over a feature space of exponentially many conjunctions; however we also show that using such kernels, the Perceptron algorithm can provably make an exponential number of mistakes even when learning simple functions. We then consider the question of whether kernel functions can analogously be used to run the multiplicative-update Winnow algorithm over an expanded feature space of exponentially many conjunctions. Known upper bounds imply that the Winnow algorithm can learn Disjunctive Normal Form (DNF) formulae with a polynomial mistake bound in this setting. However, we prove that it is computationally hard to simulate Winnow's behavior for learning DNF over such a feature set. This implies that the kernel functions which correspond to running Winnow for this problem are not efficiently computable, and that there is no general construction that can run Winnow with kernels.
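The all-conjunctions kernel the paper builds on has a well-known closed form: the number of conjunctions of literals satisfied by two Boolean vectors is 2 raised to the number of positions on which they agree. The following is a minimal sketch of the dual (kernelized) Perceptron run with this kernel; the toy dataset and names are illustrative, not taken from the paper:

```python
def conj_kernel(x, y):
    """Count the conjunctions of literals satisfied by both x and y:
    each agreeing position can contribute its (unique) satisfied
    literal or be left out, so the count is 2 ** (#agreements)."""
    return 2 ** sum(a == b for a, b in zip(x, y))

def kernel_perceptron(examples, labels, epochs=10):
    """Dual Perceptron: alpha[i] counts mistakes made on example i;
    the implicit weight vector lives in the conjunction feature space."""
    alpha = [0] * len(examples)
    for _ in range(epochs):
        mistakes = 0
        for i, (x, y) in enumerate(zip(examples, labels)):
            score = sum(alpha[j] * labels[j] * conj_kernel(examples[j], x)
                        for j in range(len(examples)))
            if (1 if score > 0 else -1) != y:
                alpha[i] += 1
                mistakes += 1
        if mistakes == 0:          # no mistakes in a full pass: stop
            break
    return alpha

# toy target: the single conjunction x1 AND x2
X = [(1, 1), (1, 0), (0, 1), (0, 0)]
Y = [1, -1, -1, -1]
alpha = kernel_perceptron(X, Y)
```

Each update costs polynomial time even though the feature space contains exponentially many conjunctions; the paper's negative result is that the analogous trick provably cannot be replayed for Winnow.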

49 citations


Book ChapterDOI
04 Apr 2005
TL;DR: In this paper, a hybrid SAT solver that can apply conflict analysis and implications to both CNF formulae and general circuits is used to find all satisfying assignments of a generic Boolean circuit, reducing the overhead of resuming the search after each blocking clause.
Abstract: Finding all satisfying assignments of a propositional formula has many applications to the synthesis and verification of hardware and software. An approach to this problem that has recently emerged augments a clause-recording propositional satisfiability solver with the ability to add “blocking clauses.” One generates a blocking clause from a satisfying assignment by taking its complement. The resulting clause prevents the solver from visiting the same solution again. Every time a blocking clause is added the search is resumed until the instance becomes unsatisfiable. Various optimization techniques are applied to get smaller blocking clauses, since enumerating each satisfying assignment would be very inefficient. In this paper, we present an improved algorithm for finding all satisfying assignments for a generic Boolean circuit. Our work is based on a hybrid SAT solver that can apply conflict analysis and implications to both CNF formulae and general circuits. Thanks to this capability, reduction of the blocking clauses can be efficiently performed without altering the solver's state (e.g., its decision stack). This reduces the overhead incurred in resuming the search. Our algorithm performs conflict analysis on the blocking clause to derive a proper conflict clause for the modified formula. Besides yielding a valid, nontrivial backtracking level, the derived conflict clause is usually more effective at pruning the search space, since it may encompass both satisfiable and unsatisfiable points. Another advantage is that the derived conflict clause provides more flexibility in guiding the score-based heuristics that select the decision variables. The efficiency of our new algorithm is demonstrated by our preliminary results on SAT-based unbounded model checking of VIS benchmark models.
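The blocking-clause loop described above can be sketched as follows. A toy brute-force routine stands in for the clause-recording SAT solver (clauses use DIMACS-style signed integer literals); the paper's contribution lies in making this loop efficient, which the sketch does not attempt:

```python
from itertools import product

def solve(cnf, n):
    """Toy stand-in for a SAT solver: return one satisfying assignment
    of the CNF over n variables (literal +v / -v means variable v is
    true / false), or None if the CNF is unsatisfiable."""
    for bits in product((False, True), repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in cnf):
            return bits
    return None

def all_solutions(cnf, n):
    """Enumerate every model by repeatedly adding a blocking clause:
    the complement of each satisfying assignment found."""
    cnf = [list(c) for c in cnf]
    models = []
    while (m := solve(cnf, n)) is not None:
        models.append(m)
        # blocking clause: negate every literal of the model found
        cnf.append([-(i + 1) if v else (i + 1) for i, v in enumerate(m)])
    return models

models = all_solutions([[1, 2]], 2)   # x1 OR x2 has three models
```

Once the blocking clauses rule out every model, the instance becomes unsatisfiable and the loop terminates, exactly as in the approach the abstract describes.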

45 citations


Journal ArticleDOI
Baruch Schieber1, Daniel Geist1, Ayal Zaks1
TL;DR: This work considers the disjunctive normal form representation of Boolean functions, and shows how to compute a minimum "disjoint" representation; i.e., a representation in which the domains of the disjuncts are mutually disjoint.

37 citations


Book ChapterDOI
TL;DR: This work considers PAC learning under the uniform distribution and shows that if the kernel uses conjunctions of length ω̃(√n) then the maximum margin hypothesis will fail on the uniform distribution as well, illustrating that margin-based algorithms may overfit when learning simple target functions with natural kernels.
Abstract: Recent work has introduced Boolean kernels with which one can learn linear threshold functions over a feature space containing all conjunctions of length up to k (for any 1 ≤ k ≤ n) over the original n Boolean features in the input space. This motivates the question of whether maximum margin algorithms such as Support Vector Machines can learn Disjunctive Normal Form expressions in the Probably Approximately Correct (PAC) learning model by using this kernel. We study this question, as well as a variant in which structural risk minimization (SRM) is performed where the class hierarchy is taken over the length of conjunctions. We show that maximum margin algorithms using the Boolean kernels do not PAC learn t(n)-term DNF for any t(n) = ω(1), even when used with such a SRM scheme. We also consider PAC learning under the uniform distribution and show that if the kernel uses conjunctions of length ω̃(√n) then the maximum margin hypothesis will fail on the uniform distribution as well. Our results concretely illustrate that margin based algorithms may overfit when learning simple target functions with natural kernels.

20 citations


Journal ArticleDOI
01 Jun 2005
TL;DR: It is shown that for any ε > 0, log^(3+ε) n-term DNF cannot be polynomial-query learned with membership and strongly proper equivalence queries, and that log n-term DNF formulas can be polynomial-query learned with membership and proper equivalence queries.
Abstract: We show the following: (a) For any ε > 0, log^(3+ε) n-term DNF cannot be polynomial-query learned with membership and strongly proper equivalence queries. (b) For sufficiently large t, t-term DNF formulas cannot be polynomial-query learned with membership and equivalence queries that use t^(1+ε)-term DNF formulas as hypotheses, for some ε < 1. (c) Read-thrice DNF formulas are not polynomial-query learnable with membership and proper equivalence queries. (d) log n-term DNF formulas can be polynomial-query learned with membership and proper equivalence queries. (This complements a result of Bshouty, Goldman, Hancock, and Matar that log n-term DNF can be so learned in polynomial time.) Versions of (a)-(c) were known previously, but the previous versions applied to polynomial-time learning and used complexity-theoretic assumptions. In contrast, (a)-(c) apply to polynomial-query learning, imply the results for polynomial-time learning, and do not use any complexity-theoretic assumptions.

14 citations


Book ChapterDOI
25 May 2005
TL;DR: In this paper, both types of duality are explored, first, by investigating the structure of existing forms, and secondly, by developing new forms for target languages.
Abstract: Several classes of propositional formulas have been used as target languages for knowledge compilation. Some are based primarily on c-paths (essentially, the clauses in disjunctive normal form); others are based primarily on d-paths. Such duality is not surprising in light of the duality fundamental to classical logic. There is also duality among target languages in terms of how they treat links (complementary pairs of literals): Some are link-free; others are pairwise-linked (essentially, each pair of clauses is linked). In this paper, both types of duality are explored, first, by investigating the structure of existing forms, and secondly, by developing new forms for target languages.

11 citations


Journal Article
TL;DR: In this paper, an approach to test case generation for the RAISE method is proposed that combines the testing techniques of algebraic and model-based specifications; the test cases are built by replacing the variables, on both sides of the axioms, with sequences of function calls.
Abstract: The classical work on test case generation and formal methods focuses either on algebraic or model-based specifications. In this paper we propose an approach to derive test cases in the RAISE method, whose specification language RSL combines the model-based and algebraic styles. Our approach integrates the testing techniques of algebraic specifications and model-based specifications. In this testing strategy, first, every function definition is partitioned by Disjunctive Normal Form (DNF) rewriting and then test arguments are generated. Next, sequences of function calls are formed. Finally, the test cases are built by replacing the variables, on both sides of the axioms, with the sequences of function calls. These kinds of test cases not only provide the data for testing, but also serve as test oracles. Based on this combined approach, a test case generation tool has been developed.
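The strategy can be illustrated with a deliberately tiny, hypothetical example (not from the paper): partition a function definition into DNF disjuncts, generate one test argument per disjunct, and use an axiom as the test oracle.

```python
# Hypothetical specification: its definition splits into two branch
# conditions, i.e. a two-disjunct DNF partition of the input space.
def spec_abs(x):
    return x if x >= 0 else -x

disjuncts = [lambda x: x >= 0, lambda x: x < 0]   # the DNF partition

# generate one test argument per disjunct from a candidate pool
pool = range(-5, 6)
args = [next(v for v in pool if d(v)) for d in disjuncts]

# an axiom serves as the test oracle: spec_abs is idempotent
for x in args:
    assert spec_abs(spec_abs(x)) == spec_abs(x)
```

In the RSL setting the axioms come from the specification itself, so, as the abstract notes, the generated cases supply both the test data and the oracle.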

7 citations


Journal Article
TL;DR: In this article, a method for the extraction of rules in a general fuzzy disjunctive normal form is described in detail and illustrated on real-world applications, and an algorithm demonstrating a principal possibility to extract fuzzy logic rules from multilayer perceptrons with continuous activation functions, i.e., from the kind of neural networks most universally used in applications.
Abstract: The extraction of logical rules from data has been, for nearly fifteen years, a key application of artificial neural networks in data mining. Although Boolean rules have been extracted in the majority of cases, methods for the extraction of fuzzy logic rules have also been studied increasingly often. In the paper, those methods are discussed within a five-dimensional classification scheme for neural-network-based rule extraction, and it is pointed out that all of them share the feature of being based on some specialized neural network, constructed directly for the rule extraction task. As an important representative, a method for the extraction of rules in a general fuzzy disjunctive normal form is described in detail and illustrated on real-world applications. Finally, the paper proposes an algorithm demonstrating a principal possibility to extract fuzzy logic rules from multilayer perceptrons with continuous activation functions, i.e., from the kind of neural networks most universally used in applications. However, complexity analysis of the individual steps of that algorithm reveals that it involves computations with doubly-exponential complexity, due to which it cannot, without simplifications, serve as a practically applicable alternative to methods based on specialized neural networks.

7 citations


Journal ArticleDOI
TL;DR: A general procedure for transforming an arbitrary CNF or DNF to an orthogonal one is proposed and is tested on randomly generated Boolean formulae.
Abstract: The orthogonal conjunctive normal form of a Boolean function is a conjunctive normal form in which any two clauses contain at least a pair of complementary literals. Orthogonal disjunctive normal form is defined similarly. Orthogonalization is the process of transforming the normal form of a Boolean function to orthogonal normal form. The problem is of great relevance in several applications, for example, in reliability theory. Moreover, such a problem is strongly connected with the well-known propositional satisfiability problem. Therefore, important complexity issues are involved. A general procedure for transforming an arbitrary CNF or DNF to an orthogonal one is proposed. Such a procedure is tested on randomly generated Boolean formulae.
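One standard way to perform such a transformation on a DNF — not necessarily the paper's exact procedure — is to conjoin each term with the negations of all earlier terms and expand distributively, so that every pair of resulting terms contains a complementary pair of literals. A sketch, with terms encoded as {variable: polarity} dicts:

```python
def conflicts(t1, t2):
    """True if the two terms contain a complementary pair of literals."""
    return any(v in t2 and t2[v] != val for v, val in t1.items())

def orthogonalize(dnf):
    """Rewrite a DNF (a list of {var: polarity} terms) into an
    equivalent DNF whose terms are pairwise conflicting (orthogonal)."""
    out = []
    for i, term in enumerate(dnf):
        pieces = [dict(term)]
        for prev in dnf[:i]:
            refined = []
            for p in pieces:
                if conflicts(p, prev):
                    refined.append(p)
                    continue
                # expand p AND NOT(prev) chain-style, so the new
                # pieces are themselves pairwise conflicting
                fixed = {}
                for v, val in prev.items():
                    if v in p:      # already agrees (no conflict above)
                        continue
                    q = dict(p)
                    q.update(fixed)
                    q[v] = not val
                    refined.append(q)
                    fixed[v] = val
                # if every literal of prev appears in p, then p
                # implies prev and contributes nothing: drop it
            pieces = refined
        out.extend(pieces)
    return out
```

For example, a OR b becomes a OR (NOT a AND b): the two output terms disagree on a, so the disjuncts cover disjoint parts of the input space, which is what makes orthogonal forms useful in reliability computations.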

5 citations


Journal ArticleDOI
TL;DR: Attention is drawn to the Łukasiewicz triplet, as it is the only continuous De Morgan triplet for which the difference between the two fuzzified normal forms is independent of the underlying Boolean function.


Journal Article
TL;DR: Based on the Skowron discernibility matrix, a judgement theorem with respect to minimal disjunctive normal form is obtained, from which an algorithm for computing all attribute reductions is presented; the algorithm is much more efficient than existing algorithms.

Abstract: Attribute reduction is a fundamental problem, and computing all attribute reductions is NP-complete. Based on a divide-and-conquer approach, a judgement theorem with respect to minimal disjunctive normal form is obtained from the Skowron discernibility matrix, from which an algorithm for computing all attribute reductions is derived. Theoretical analysis and experimental results show that the algorithm is much more efficient than existing algorithms.
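The discernibility-matrix route to all reducts can be illustrated naively: collect, for each pair of objects with different decisions, the set of attributes that distinguish them (a clause), then multiply the resulting CNF into a DNF with absorption; the minimal terms are the reducts. This brute-force multiplication is exactly the blow-up the paper's algorithm is designed to mitigate, so the sketch below is only an illustration of the underlying judgement, with hypothetical data:

```python
from itertools import combinations

def all_reducts(objects, decisions):
    """objects: equal-length attribute tuples; decisions: class labels.
    Returns every attribute reduct as a sorted list of attribute indices."""
    n_attrs = len(objects[0])
    # Skowron discernibility matrix: one clause per pair of objects
    # with different decisions, listing the attributes separating them
    clauses = []
    for i, j in combinations(range(len(objects)), 2):
        if decisions[i] != decisions[j]:
            clause = frozenset(a for a in range(n_attrs)
                               if objects[i][a] != objects[j][a])
            if clause:
                clauses.append(clause)
    # multiply the CNF out into a DNF, applying absorption so that
    # only minimal terms (the prime implicants, i.e. reducts) survive
    dnf = [frozenset()]
    for clause in clauses:
        expanded = {term | {a} for term in dnf for a in clause}
        dnf = [t for t in expanded if not any(o < t for o in expanded)]
    return sorted(sorted(t) for t in dnf)
```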

Journal ArticleDOI
TL;DR: A state space representation for sequencing and routing flexibility in manufacturing systems that is capable of enumerating all possible manufacturing operation routes that can be applied to a certain part is described.

Journal Article
TL;DR: In this article, a set representation for propositional logic is put forward, a series of important conclusions are derived, and methods for computing the principal normal form of a propositional formula and for performing logical reasoning via the intersection, union, and difference operations on sets are illustrated by examples.

Abstract: A set representation for propositional logic is put forward, a series of important conclusions are derived, and methods for computing the principal normal form of a propositional formula and for performing logical reasoning based on the intersection, union, and difference operations on sets are illustrated by examples.
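The idea can be made concrete by identifying a formula with its set of satisfying assignments (the minterms of its principal disjunctive normal form), after which the connectives become set operations. A sketch under that reading, with illustrative variable names:

```python
from itertools import product

VARS = ('p', 'q', 'r')

def minterms(f):
    """The set representation of a formula: the set of truth-table
    rows (assignments to VARS) on which it evaluates to true."""
    return {bits for bits in product((False, True), repeat=len(VARS))
            if f(*bits)}

ALL = minterms(lambda p, q, r: True)     # the full truth table

A = minterms(lambda p, q, r: p and q)
B = minterms(lambda p, q, r: p or r)

conj = A & B       # intersection corresponds to conjunction
disj = A | B       # union corresponds to disjunction
neg_a = ALL - A    # difference from ALL corresponds to negation
entails = A <= B   # subset test: "p and q" entails "p or r"
```

Each member of such a set names one minterm, so listing the set is exactly writing out the principal disjunctive normal form of the formula.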

Patent
24 May 2005
TL;DR: In this article, the setup connection tree model is established based on patterns of agent actions for expressing the logical relationship between available resources in disjunctive normal form (d.n.f.).
Abstract: A method and apparatus for automatically locating sources of semantic error in a multi-agent system based on setup connection tree information, and informing the appropriate agents so that they can avoid using the faulty resources in the future. The setup connection tree model is established based on patterns of agent actions for expressing the logical relationship between available resources in disjunctive normal form (d.n.f.). A table is used to record different sets of resources for use in the resource selection process. Thus, faulty resources can be located by means of induction. A global database is also maintained for updating information on semantic errors in the system.

Book ChapterDOI
06 Jun 2005
TL;DR: A class of generalized DNF formulae called wDNF or weighted disjunctive normal form is introduced, and a molecular algorithm that learns a wDNF formula from training examples is presented, suggesting the possibility of building error-resilient molecular computers that are able to learn from data, potentially from wet DNA data.
Abstract: We introduce a class of generalized DNF formulae called wDNF or weighted disjunctive normal form, and present a molecular algorithm that learns a wDNF formula from training examples. Realized in DNA molecules, the wDNF machines have a natural probabilistic semantics, allowing for their application beyond the pure Boolean logical structure of the standard DNF to real-life problems with uncertainty. The potential of the molecular wDNF machines is evaluated on real-life genomics data in simulation. Our empirical results suggest the possibility of building error-resilient molecular computers that are able to learn from data, potentially from wet DNA data.

Journal ArticleDOI
TL;DR: An original technique is proposed for checking the satisfiability of formulae represented in the logical language L in the form of a set of conjuncts.
Abstract: An original technique is proposed for checking the satisfiability of formulae represented in the logical language L in the form of a set of conjuncts. Satisfiability checking is performed by means of analysis and transformation of some relations defined over the set of conjuncts.

Book ChapterDOI
22 Jul 2005
TL;DR: The uclid verifier as mentioned in this paper models a hardware or software system as an abstract state machine, where the state variables can be Boolean or integer values, or functions mapping integers to integers or Booleans.
Abstract: The uclid verifier models a hardware or software system as an abstract state machine, where the state variables can be Boolean or integer values, or functions mapping integers to integers or Booleans. The core of the verifier consists of a decision procedure that checks the validity of formulas over the combined theories of uninterpreted functions with equality and linear integer arithmetic. It operates by transforming a formula into an equisatisfiable Boolean formula and then invoking a SAT solver. This approach has worked well for the class of logic and the types of formulas encountered in verification.