
Showing papers on "Disjunctive normal form published in 2015"


Posted Content
TL;DR: Experiments show that the two-level rules can yield noticeably better performance than one-level rules due to their dramatically larger modeling capacity, and that the two algorithms based on the Hamming distance formulation are generally superior to the other two-level rule learning methods in the authors' comparison.
Abstract: As a contribution to interpretable machine learning research, we develop a novel optimization framework for learning accurate and sparse two-level Boolean rules. We consider rules in both conjunctive normal form (AND-of-ORs) and disjunctive normal form (OR-of-ANDs). A principled objective function is proposed to trade classification accuracy and interpretability, where we use Hamming loss to characterize accuracy and sparsity to characterize interpretability. We propose efficient procedures to optimize these objectives based on linear programming (LP) relaxation, block coordinate descent, and alternating minimization. Experiments show that our new algorithms provide very good tradeoffs between accuracy and interpretability.
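The OR-of-ANDs form and the Hamming-loss half of the objective can be made concrete with a small sketch. The clause structure, feature names and toy data below are illustrative, not taken from the paper:

```python
def dnf_predict(x, clauses):
    """x: dict of binary features; clauses: list of sets of feature names.
    Predict 1 if any clause has all of its features equal to 1 (OR of ANDs)."""
    return int(any(all(x[f] == 1 for f in c) for c in clauses))

def hamming_loss(X, y, clauses):
    """Fraction of examples the rule misclassifies."""
    return sum(dnf_predict(x, clauses) != yi for x, yi in zip(X, y)) / len(y)

# toy data; the rule is (a AND b) OR c
clauses = [{"a", "b"}, {"c"}]
X = [{"a": 1, "b": 1, "c": 0}, {"a": 0, "b": 1, "c": 1}, {"a": 0, "b": 0, "c": 0}]
y = [1, 1, 0]
print(hamming_loss(X, y, clauses))  # 0.0 on this toy data
```

Sparsity in the paper's objective corresponds to keeping both the number of clauses and the number of features per clause small, which is what makes such rules readable.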

48 citations


Proceedings Article
25 Jul 2015
TL;DR: Two novel approaches are described for the compilation of non-clausal formulae with either prime implicants or implicates, based on propositional Satisfiability (SAT) solving.
Abstract: Formula compilation by generation of prime implicates or implicants finds a wide range of applications in AI. Recent work on formula compilation by prime implicate/implicant generation often assumes a Conjunctive/Disjunctive Normal Form (CNF/DNF) representation. However, in many settings propositional formulae are naturally expressed in non-clausal form. Despite a large body of work on compilation of non-clausal formulae, in practice existing approaches can only be applied to fairly small formulae, containing at most a few hundred variables. This paper describes two novel approaches for the compilation of non-clausal formulae with either prime implicants or implicates that are based on propositional Satisfiability (SAT) solving. These novel algorithms also find application when computing all prime implicates of a CNF formula. The proposed approach is shown to allow the compilation of non-clausal formulae of size significantly larger than existing approaches can handle.
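For intuition about the objects being compiled, here is a toy brute-force prime-implicant enumerator for a tiny formula. The formula `f` and the variable names are illustrative; the paper's SAT-based algorithms scale far beyond anything this exhaustive sketch can handle:

```python
from itertools import combinations, product

def implies(term, f, vars_):
    """term: partial assignment (dict var -> bool). True iff every total
    assignment extending term satisfies f, i.e. term is an implicant of f."""
    free = [v for v in vars_ if v not in term]
    return all(f({**term, **dict(zip(free, bits))})
               for bits in product([False, True], repeat=len(free)))

def prime_implicants(f, vars_):
    """Enumerate all implicants, then keep those with no smaller sub-implicant."""
    terms = [dict(zip(s, bits))
             for r in range(len(vars_) + 1)
             for s in combinations(vars_, r)
             for bits in product([False, True], repeat=r)]
    implicants = [t for t in terms if implies(t, f, vars_)]
    def proper_subterm(s, t):
        return s != t and all(t.get(k) == v for k, v in s.items())
    return [t for t in implicants
            if not any(proper_subterm(s, t) for s in implicants)]

f = lambda a: (a["x"] and a["y"]) or a["z"]   # (x AND y) OR z
primes = prime_implicants(f, ["x", "y", "z"])
print(primes)  # [{'z': True}, {'x': True, 'y': True}]
```

The two surviving terms, z and x AND y, are exactly the prime implicants of (x AND y) OR z; every other implicant contains one of them as a sub-term.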

32 citations


Journal ArticleDOI
TL;DR: A novel supervised learning/classification method is presented that incorporates the classification error into a single global objective function, and improved performance of DNDTs and DNRFs over conventional decision trees and random forests is demonstrated.

23 citations


Proceedings ArticleDOI
16 Apr 2015
TL;DR: A novel implicit parametric shape model is proposed for segmentation and analysis of medical images; the model is differentiable, hence gradient-based optimization algorithms are used to find the model parameters.
Abstract: A novel implicit parametric shape model is proposed for segmentation and analysis of medical images. Functions representing the shape of an object can be approximated as a union of N polytopes. Each polytope is obtained by the intersection of M half-spaces. The shape function can be approximated as a disjunction of conjunctions, using the disjunctive normal form. The shape model is initialized using seed points defined by the user. We define a cost function based on the Chan-Vese energy functional. The model is differentiable; hence, gradient-based optimization algorithms are used to find the model parameters.
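The union-of-polytopes construction can be sketched as a smooth disjunctive-normal shape function. The sigmoid smoothing and all weights below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

# Disjunctive-normal shape function: the union (OR) of N polytopes, each the
# intersection (AND) of M half-spaces, smoothed with sigmoids so the model
# stays differentiable end to end.

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def shape_function(x, W, b):
    """W: (N, M, d) half-space normals, b: (N, M) offsets, x: (d,) point.
    Conjunction = product of sigmoids; disjunction via De Morgan."""
    half_spaces = sigmoid(np.einsum("nmd,d->nm", W, x) + b)  # (N, M)
    polytopes = half_spaces.prod(axis=1)                     # AND within a polytope
    return 1.0 - np.prod(1.0 - polytopes)                    # OR across polytopes

# one "polytope": the 1-D box -1 < x0 < 1 via two steep half-space sigmoids
W = np.array([[[50.0], [-50.0]]])   # N=1, M=2, d=1
b = np.array([[50.0, 50.0]])        # encodes x0 > -1 and x0 < 1
print(shape_function(np.array([0.0]), W, b))  # ~1 (inside the box)
print(shape_function(np.array([2.0]), W, b))  # ~0 (outside the box)
```

Because the whole expression is a composition of smooth functions, gradients with respect to W and b are available for the kind of gradient-based fitting the abstract describes.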

22 citations


Book ChapterDOI
17 Nov 2015
TL;DR: Aalta_v2.0 implements a new explicit reasoning framework for linear temporal logic (LTL), built on top of propositional satisfiability (SAT) solving.
Abstract: We present here a new explicit reasoning framework for linear temporal logic (LTL), which is built on top of propositional satisfiability (SAT) solving. As a proof-of-concept of this framework, we describe a new LTL satisfiability algorithm. We implemented the algorithm in a tool, Aalta_v2.0, which is built on top of the Minisat SAT solver. We tested the effectiveness of this approach by demonstrating that Aalta_v2.0 significantly outperforms all existing LTL satisfiability solvers.

19 citations


Posted Content
TL;DR: This work builds OA models as a diagnostic screening tool for obstructive sleep apnea that achieves high accuracy with a substantial gain in interpretability over other methods, and proves theoretical bounds on the properties of patterns in an OA model.
Abstract: Or's of And's (OA) models are comprised of a small number of disjunctions of conjunctions, also called disjunctive normal form. An example of an OA model is as follows: If ($x_1 = $ `blue' AND $x_2=$ `middle') OR ($x_1 = $ `yellow'), then predict $Y=1$, else predict $Y=0$. Or's of And's models have the advantage of being interpretable to human experts, since they are a set of conditions that concisely capture the characteristics of a specific subset of data. We present two optimization-based machine learning frameworks for constructing OA models, Optimized OA (OOA) and its faster version, Optimized OA with Approximations (OOAx). We prove theoretical bounds on the properties of patterns in an OA model. We build OA models as a diagnostic screening tool for obstructive sleep apnea that achieves high accuracy with a substantial gain in interpretability over other methods.
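The abstract's own example model translates directly into code; the feature names come from the abstract, while the test inputs are illustrative:

```python
def oa_predict(x1, x2):
    """The abstract's rule: if (x1 = 'blue' AND x2 = 'middle') OR (x1 = 'yellow'),
    predict Y = 1, else predict Y = 0."""
    return int((x1 == "blue" and x2 == "middle") or x1 == "yellow")

print(oa_predict("blue", "middle"))  # 1
print(oa_predict("yellow", "low"))   # 1  (the second disjunct fires alone)
print(oa_predict("blue", "low"))     # 0
```

Each disjunct is a human-readable condition, which is exactly the interpretability advantage the abstract claims for OA models.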

17 citations


Book ChapterDOI
24 Sep 2015
TL;DR: Experimental results, obtained on well-known representative problem instances, demonstrate that a SAT-based approach for formula simplification is a viable alternative to existing implementations of the Quine-McCluskey procedure.
Abstract: The problem of propositional formula minimization can be traced to the mid of the last century, to the seminal work of Quine and McCluskey, with a large body of work ensuing from this seminal work. Given a set of implicants (or implicates) of a formula, the goal for minimization is to find a smallest set of prime implicants (or implicates) equivalent to the original formula. This paper considers the more general problem of computing a smallest prime representation of a non-clausal propositional formula, which we refer to as formula simplification. Moreover, the paper proposes a novel, entirely SAT-based, approach for the formula simplification problem. The original problem addressed by the Quine-McCluskey procedure can thus be viewed as a special case of the problem addressed in this paper. Experimental results, obtained on well-known representative problem instances, demonstrate that a SAT-based approach for formula simplification is a viable alternative to existing implementations of the Quine-McCluskey procedure.

11 citations


Journal ArticleDOI
TL;DR: It is found that one family of fault-based testing strategies, namely MUMCUT, normally delivers the best performance among all 18 strategies and is considered effective and efficient on general Boolean expressions.
Abstract: A great number of fault-based testing strategies have been proposed to generate test cases for detecting certain types of faults in Boolean specifications. However, most previous studies of these strategies focused on Boolean expressions in disjunctive normal form (DNF), or even the irredundant DNF (IDNF); little work has been conducted to comprehensively investigate their performance on general Boolean specifications. In this study, we conducted a series of experiments to evaluate and compare 18 fault-based testing strategies using over 4000 randomly generated fault-seeded Boolean expressions. In the experiments, a testing strategy is regarded as effective and efficient if it can detect most of the seeded faults using a small number of test cases. Our experimental results show that if a testing strategy is highly effective and efficient when testing Boolean expressions in the IDNF, it also shows high effectiveness and efficiency on general Boolean expressions. One family of fault-based testing strategies, namely MUMCUT, normally delivers the best performance among all 18 strategies. Our study provides an in-depth understanding of and insight into fault-based testing for general Boolean expressions.

7 citations


19 Jan 2015
TL;DR: This paper describes the set coalescing operation of isl, which looks for opportunities to combine several disjuncts into a single disjunct without affecting the elements in the set.
Abstract: In polyhedral compilation, various core concepts such as the set of statement instances, the access relations, the dependences and the schedule are represented or approximated using sets and binary relations of sequences of integers bounded by (quasi-)affine constraints. When these sets and relations are represented in disjunctive normal form, it is important to keep the number of disjuncts small, both for efficiency and to improve the computation of transitive closure overapproximations and AST generation. This paper describes the set coalescing operation of isl, which looks for opportunities to combine several disjuncts into a single disjunct without affecting the elements in the set. The main purpose of the paper is to explain the various heuristics and to prove their correctness.
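The idea of coalescing disjuncts can be illustrated with a one-dimensional toy analogue. isl works on polyhedra bounded by affine constraints; plain integer intervals are an illustrative simplification, not isl's algorithm:

```python
# Toy analogue of disjunct coalescing: merge overlapping or adjacent 1-D
# integer intervals (the "disjuncts") into fewer intervals covering exactly
# the same set of integers.

def coalesce(disjuncts):
    out = []
    for lo, hi in sorted(disjuncts):
        if out and lo <= out[-1][1] + 1:   # overlapping or adjacent
            out[-1] = (out[-1][0], max(out[-1][1], hi))
        else:
            out.append((lo, hi))
    return out

print(coalesce([(0, 3), (4, 7), (12, 15), (6, 9)]))  # [(0, 9), (12, 15)]
```

As in isl, the invariant is that the merged representation describes exactly the same elements as the original disjuncts, only with fewer pieces.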

7 citations


Proceedings ArticleDOI
16 Sep 2015
TL;DR: It is suggested that Boolean logic is easier to assimilate in Disjunctive Normal Form than in other forms and that particular difficulties arise when it is necessary to backtrack to form a mental model.
Abstract: Description Logics are commonly used for the development of ontologies. Yet they are well-known to present difficulties of comprehension, e.g. when confronted with the justification for a particular entailment during the debugging process. This paper describes a study into the problems experienced in understanding and reasoning with Description Logics. In particular the study looked at: functionality in object properties; negation, disjunction and conjunction in Propositional Logic; negation and quantification; and the combination of two quantifiers. The difficulties experienced are related to theories of reasoning developed by cognitive psychologists, specifically the mental model and relational complexity theories. The study confirmed that problems are experienced with functional object properties and investigated the extent to which these difficulties can be explained by relational complexity theory. Mental model theory was used to explain performance with negation and quantifiers. This suggests that Boolean logic is easier to assimilate in Disjunctive Normal Form than in other forms and that particular difficulties arise when it is necessary to backtrack to form a mental model. On the other hand in certain cases syntactic clues seemed to contribute to reasoning strategies.

6 citations


Posted Content
TL;DR: The new temporal fault tree analysis (TFTA) described in this work is based on a new temporal logic which adds a concept of time to the Boolean logic and algebra, and allows for modelling of event sequences at all levels within a fault tree without transformations into state-space.
Abstract: Background: Fault tree analysis (FTA) is a well-established method for qualitative as well as probabilistic reliability and safety analysis. As a Boolean model, it does not support modelling of dynamic effects like sequence dependencies between fault events. This work describes a method that allows consideration of sequence dependencies without transformations into state-space. Concept: The new temporal fault tree analysis (TFTA) described in this work extends the Boolean FTA. The TFTA is based on a new temporal logic which adds a concept of time to the Boolean logic and algebra. This allows modelling of temporal relationships between events using two new temporal operators (PAND and SAND). With a set of temporal logic rules, a given temporal term may be simplified to its temporal disjunctive normal form (TDNF), which is similar to the Boolean DNF but includes event sequences. In TDNF the top event's temporal system function may be reduced to a list of minimal cutset sequences (MCSS). These allow qualitative analyses similar to Boolean cutset analysis in normal FTA. Furthermore, the TFTA may also be used for probabilistic analyses without using state-space models. Results: One significant aspect of the new TFTA described in this work is the possibility to take sequence dependencies into account for qualitative and probabilistic analyses without state-space transformations. Among other things, this allows for modelling of event sequences at all levels within a fault tree, a real qualitative analysis similar to the FTA's cutset analysis, and quantification of sequence dependencies within the same model.

Posted Content
TL;DR: A new scheme called JOS is introduced for perfect security in the context of secure multiparty computation in the semi-honest model, which naturally requires at least three parties, to solve the problem of outsourcing computation on confidential data.
Abstract: A client wishes to outsource computation on confidential data to a network of servers. He does not trust a server on its own, but believes that servers do not collude. To solve this problem we introduce a new scheme called \emph{JOS} for perfect security in the context of secure multiparty computation in the semi-honest model that naturally requires at least three parties. It differs from classical work such as Yao, GMW, BGW or GRR through an explicit distinction of keys and encrypted values rather than having (equal) shares of a secret. Furthermore, JOS makes use of the distributive and associative nature of its encryption schemes and, at times, "double" encrypts values. Any Boolean circuit $C$ in disjunctive normal form with $w$ variables per clause can be evaluated in O($\log k$) rounds using messages of size O($2^{w-k}$) for an arbitrary parameter $k \in [2,w]$ and O($|C|\cdot 2^{w-k}$) bit operations. We allow for collusion of up to $n-2$ parties. On the theoretical side JOS improves a large body of work in one or several metrics. On the practical side, our local computation requirements improve on the run-time of GRR using Shamir's secret sharing up to several orders of magnitude.

Posted Content
16 Feb 2015
TL;DR: This paper isolates a type normal form, ENF, generalizing the usual disjunctive normal form to handle exponentials, and shows that the eta-long beta-normal form of terms at ENF type is canonical when the eta axiom for sums is expressed via evaluation contexts.
Abstract: In the presence of sum types, the eta-long beta-normal form of terms of lambda calculus is not canonical. Natural deduction systems for intuitionistic logic (with disjunction) suffer the same defect, thanks to the Curry-Howard correspondence. This canonicity problem has been open in Proof Theory since the 1960s, while it has been addressed in Computer Science, since the 1990s, by a number of authors using decision procedures: instead of deriving a notion of syntactic canonical normal form, one gives a procedure based on program analysis to decide when any two terms of the lambda calculus with sum types are essentially the same one. In this paper, we show the canonicity problem is difficult because it is too specialized: rather than picking a canonical representative out of a class of beta-eta-equal terms of a given type, one should do so for the enlarged class of terms that are of a type isomorphic to the given one. We isolate a type normal form, ENF, generalizing the usual disjunctive normal form to handle exponentials, and we show that the eta-long beta-normal form of terms at ENF type is canonical, when the eta axiom for sums is expressed via evaluation contexts. By coercing terms from a given type to its isomorphic ENF type, our technique gives unique canonical representatives for examples that had previously been handled using program analysis.

Journal ArticleDOI
TL;DR: The paper presents the possibility of using the formalism of structural reliability notation, implemented with directed graphs and reliability functions recorded in Perfect Disjunctive Normal Form notation.
Abstract: The paper presents the possibility of using the formalism of structural reliability notation with an implementation of directed graphs and reliability functions recorded in accordance with the Perfect Disjunctive Normal Form (PDNF) notation. The author presents the mathematical basis used for identifying the internal structures of mechatronic machines and reducing them to series-connected blocks. The presented method is a hybrid combination of a binary analysis of evaluated reliability functions (stored as matrices that relate to defined graphs) and blocks with binary inputs and outputs.

Posted Content
TL;DR: In this article, the authors generalize disjunctive normal forms to any class of Boolean algebras with operators; such forms have previously been used to show non-atomicity of some free Boolean algebras with operators.
Abstract: Disjunctive normal forms can provide elegant and constructive proofs of many standard results, such as completeness and decidability. They were also used to show non-atomicity of some free algebras of specific Boolean algebras with operators. Here, we generalize the normal forms to any class of Boolean algebras with operators.

Patent
04 Nov 2015
TL;DR: In this article, an attribute-based encryption method with a principal disjunctive normal form access strategy on lattices is proposed; the method is implemented in the following steps.
Abstract: The invention discloses an attribute-based encryption method with a principal disjunctive normal form access strategy on lattices, implemented in the following steps. First, the system is established: security parameters lambda and q and an attribute domain U are input, and a system common parameter pp and a master private key msk are generated. Second, a secret key is generated: the common parameter pp, the master private key msk and an attribute list L are input, and a private key eL for the attribute list L is generated. Then message encryption is performed: the common parameter pp, a Boolean access strategy policy and a message to be encrypted M ∈ {0, 1} are input, and a ciphertext C of the message M under the access strategy policy is output. Finally, message decryption is performed: the common parameter pp, the private key eL of the attribute list L and the ciphertext C are input, and the message M ∈ {0, 1} is output. This solves the problems of slow speed and low efficiency in prior attribute-based encryption methods.

Journal ArticleDOI
TL;DR: CNF-to-DNF conversion, an NP-hard problem, is a broad area of research for AI, circuit design, FPGAs, PLAs, etc., and its best performance can only be evaluated by processing many variables on high-end systems.
Abstract: CNF-to-DNF conversion, an NP-hard problem, is a broad area of research for AI, circuit design, FPGAs (Miltersen et al. in On converting CNF to DNF, 2003), PLAs, etc. (Beame in A switching lemma primer, 1994; Kottler and Kaufmann in SArTagnan—a parallel portfolio SAT solver with lockless physical clause sharing, 2011). Optimization and its statistics have become essential for analyzing the behavior of normal form conversion. Many applications, such as genome analysis, grid computing, bioinformatics, imaging systems and rough sets, require algorithms that process a large number of variables. The problem statement is the design and implementation of a conversion from an optimal conjunctive normal form to an optimal (prime implicant) disjunctive normal form, which converts an NP-hard problem to an NP-complete one. Thus CNF-to-DNF conversion can best be evaluated by processing many variables on high-end systems. The best-known representations of a Boolean function f are disjunctions of terms (DNFs) and conjunctions of clauses (CNFs) (Beame 1994; Kottler and Kaufmann 2011; Wegener in The complexity of Boolean functions, 1987). It is convenient to define the DNF size of f as the minimal number of terms in a DNF representing f, and the CNF size as the minimal number of clauses in a CNF representing f (Kottler and Kaufmann 2011).
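The blow-up the abstract refers to is visible in a naive CNF-to-DNF conversion by distributing OR over AND. The integer literal encoding below (negative = negated variable) is an illustrative convention, not from the paper:

```python
from itertools import product

def cnf_to_dnf(cnf):
    """cnf: list of clauses, each a list of int literals (negative = negated).
    Distribute OR over AND, drop contradictory terms, drop subsumed terms.
    Exponential in the number of clauses in general."""
    terms = set()
    for choice in product(*cnf):               # pick one literal per clause
        term = frozenset(choice)
        if any(-lit in term for lit in term):  # contains x AND NOT x: drop
            continue
        terms.add(term)
    # drop terms that are proper supersets of another term (subsumption)
    return [t for t in terms if not any(s < t for s in terms)]

# (x1 OR x2) AND (NOT x1 OR x3)
dnf = cnf_to_dnf([[1, 2], [-1, 3]])
print(sorted(sorted(t) for t in dnf))  # [[-1, 2], [1, 3], [2, 3]]
```

Note that subsumption pruning alone leaves the redundant consensus term [2, 3]; producing only prime implicants, as the abstract's problem statement demands, requires further minimization in the style of Quine-McCluskey.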

Journal ArticleDOI
TL;DR: In this paper, the theory of disjunctive normal forms is generalized to binary functions of multivalued arguments, and an efficient method for constructing disjunctive normal forms for binary functions with a small number of zeros is proposed.
Abstract: The theory of disjunctive normal forms is generalized to binary functions of multivalued arguments. Fundamental concepts and properties of these generalizations are considered. An efficient method for constructing disjunctive normal forms for binary functions of multivalued arguments with a small number of zeros is proposed. Disjunctive normal forms of an analogue of the Yablonsky function are studied in detail.

Proceedings ArticleDOI
27 Jun 2015
TL;DR: This work presents a case example using practical policies in order to show the output using the two concepts based on Apache axis2 rampart, Apache neethi and IBM security policies, and provides two algorithms for calculating the least upper bound or the greatest lower bound of the ordered sets to enable compatibility.
Abstract: In order to enable a secure Business-to-Business (B2B) interaction between web services, it is essential to negotiate a common security policy by computing the policy intersection according to the Web Service (WS)-Policy framework. For this purpose, both policies are transformed into Disjunctive Normal Form (DNF). Then the intersection of the two sets of monomials (alternatives) from the two DNFs is computed. If the intersection yields exactly one compatible monomial, we are done: we have found a unique security policy supported by both parties. However, two other cases are possible: there may be more than one compatible monomial, or there may be no intersection, meaning no compatible alternatives are found. In both cases, additional processing steps are required in order to communicate: if there is more than one alternative, we would like to find the optimum security policy among them; if there is no intersection, we would like to find a minimal extension of the security policies that enforces an intersection. The WS-Policy framework gives no information on how the policy intersection can be calculated when alternatives are semi-compatible or fully incompatible, nor, in the case of multiple compatible alternatives, on which alternative to choose. Current research focuses on how to measure compatibility; however, achieving policy agreement in terms of policy intersection remains out of reach. In order to address this problem we introduce two separate solutions for the two cases. For the case of more than one compatible alternative (multiple intersection), we present a Multiple Criteria Decision Making (MCDM) model using the Fuzzy Analytical Hierarchy Process (AHP) for the WS-Security Policy assertions in order to calculate the optimum security policy alternative. For the case of no intersection, we provide two algorithms for calculating the least upper bound (lub) or the greatest lower bound (glb) of the ordered sets to enable compatibility. We present a case example using practical policies in order to show the output using the two concepts, based on Apache Axis2 Rampart, Apache Neethi and IBM security policies. Outputs are found to be similar using both concepts.
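The monomial-intersection step can be sketched as a plain set intersection over alternatives. The assertion names and the equality-based notion of compatibility below are illustrative simplifications; real WS-Policy intersection compares assertion types and nested policies:

```python
def policy_intersection(p1, p2):
    """Each policy is a set of alternatives (the monomials of its DNF);
    an alternative is a frozenset of assertion names. 'Compatible' here
    simply means identical assertion sets."""
    return p1 & p2

p1 = {frozenset({"EncryptBody", "SignBody"}), frozenset({"SignBody"})}
p2 = {frozenset({"SignBody"}), frozenset({"EncryptHeader"})}
print(policy_intersection(p1, p2))  # {frozenset({'SignBody'})}
```

The two problem cases from the abstract map directly onto this result: several alternatives in the intersection would call for the fuzzy-AHP ranking, and an empty intersection would call for the lub/glb extension algorithms.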


Journal ArticleDOI
TL;DR: The fault detection criteria for the clause disjunction fault (CDF) and the associative shift fault (ASF) are given, and the fault hierarchy is extended by adding these fault classes.
Abstract: A fault hierarchy specifies the inter-relationships amongst various fault classes in terms of their fault detection capability. Kuhn developed a fault hierarchy for Boolean expressions in disjunctive normal form, which was complemented by Tsuchiya and Kikuno. Lau and Yu extended the fault hierarchy by adding more fault classes. In this paper, we give the fault detection criteria for the clause disjunction fault (CDF) and the associative shift fault (ASF) and further extend the fault hierarchy by adding these fault classes to it.

Journal ArticleDOI
TL;DR: An asymptotically tight bound is obtained for the minimum possible number of literals contained in the disjunctive normal forms of the complete function.
Abstract: It was previously established that almost every Boolean function of n variables with k zeros, where k is at most log2(n) − log2(log2(n)) + 1, can be associated with a Boolean function of 2^(k−1) − 1 variables with k zeros (the complete function) such that the complexity of implementing the original function in the class of disjunctive normal forms is determined only by the complexity of implementing the complete function. An asymptotically tight bound is obtained for the minimum possible number of literals contained in the disjunctive normal forms of the complete function.

Journal ArticleDOI
TL;DR: Traditional Constructive Solid Geometry trees are extended to support the projection operator to automatically generate a set of equations and inequalities that express either the geometric solid or the conditions to be tested for computing various topological properties, such as homotopy equivalence.
Abstract: We extend traditional Constructive Solid Geometry (CSG) trees to support the projection operator. Existing algorithms in the literature prove various topological properties of CSG sets. Our extension readily allows these algorithms to work on a greater variety of sets, in particular parametric sets, which are extensively used in CAD/CAM systems. Constructive Solid Geometry allows for algebraic representation which makes it easy for certification tools to apply. A geometric primitive may be defined in terms of a characteristic function, which can be seen as the zero-set of a corresponding system along with inequality constraints. To handle projections, we exploit the Disjunctive Normal Form, since projection distributes over union. To handle intersections, we transform them into disjoint unions. Each point in the projected space is mapped to a contributing primitive in the original space. This way we are able to perform gradient computations on the boundary of the projected set through equivalent gradient computations in the original space. By traversing the final expression tree, we are able to automatically generate a set of equations and inequalities that express either the geometric solid or the conditions to be tested for computing various topological properties, such as homotopy equivalence. We conclude by presenting our prototype implementation and several examples. Highlights: extension of classical CSG with the projection operator; support for gradient computations; topological property computation; application of formal methods like interval analysis and proof assistants to CSG models.

Journal ArticleDOI
TL;DR: In this article, two procedures are given to compute the output distribution φS of certain stack filters S (so-called erosion-dilation cascades); one rests on the disjunctive normal form of S and also yields the rank selection probabilities.
Abstract: Two procedures to compute the output distribution φS of certain stack filters S (so-called erosion-dilation cascades) are given. One rests on the disjunctive normal form of S and also yields the rank selection probabilities. The other is based on inclusion-exclusion and, e.g., yields φS for some important LULU-operators S. Properties of φS can be used to characterize smoothing properties of S. In the same way as our polynomials φS are computed, one could compute the reliability polynomial of a connected graph, or more generally the reliability polynomial w.r.t. any positive Boolean function.

Proceedings ArticleDOI
25 Jun 2015
TL;DR: A novel approach is proposed to find the solutions with a logic formula; for an original service composition obtained from the parallel layered planning graph, it constructs composite services with a backward method according to the source services of parameters, starting from the user request outputs.
Abstract: Targeting the redundancy filtering problem of automatic Web service composition, this paper proposes a novel approach to find the solutions with a logic formula. For an original service composition obtained from the parallel layered planning graph, it constructs composite services with a backward method according to the source services of parameters, starting from the user request outputs. The combination process of the source services is treated as a process of converting a conjunctive normal form to a disjunctive normal form, which can filter all the redundant services and rapidly reduce the combination scale. Experiments with a large service repository illustrate that the approach is correct and can improve the efficiency of service composition.

01 Jan 2015
TL;DR: In this article, a new l ogical operation method called "orthogonal OR" is presented, which is used to calculate not only the difference but also the complement of a function as well as the EXOR and EXNOR of two minterms respectively two ternary respectively two-ternary two-ternary-vector.
Abstract: In this paper a new l ogical operation method called “ presented. It is used to calculate the difference, but also the complement of a function as well as the EXOR and EXNOR of two minterms respectively two ternary respectively two ternary-vector lo gical operation method called “orthogonal OR advantages of both methods are their results, which are already available form that has an essential advantage for continuing calculations. Since it applies, an orthogonal disjunctive normal form is equal to orthogonal antivalence normal form, subsequent Boolean differential calculus will be simplified.