
Showing papers on "Disjunctive normal form published in 2000"


Proceedings Article
Sholom M. Weiss, Nitin Indurkhya
29 Jun 2000
TL;DR: Experimental results on large benchmark datasets demonstrate that predictive performance can rival the best reported results in the literature.
Abstract: A lightweight rule induction method is described that generates compact Disjunctive Normal Form (DNF) rules. Each class may have an equal number of unweighted rules. A new example is classified by applying all rules and assigning the example to the class with the most satisfied rules. The induction method attempts to minimize the training error with no pruning. An overall design is specified by setting limits on the size and number of rules. During training, cases are adaptively weighted using a simple cumulative error method. The induction method is nearly linear in time relative to an increase in the number of induced rules or the number of cases. Experimental results on large benchmark datasets demonstrate that predictive performance can rival the best reported results in the literature.
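The voting scheme described above is simple enough to sketch in code. Below is a minimal Python illustration of classification by counting satisfied DNF rules per class; the rule encoding and the example rules are hypothetical, not taken from the paper.

    # Classify by DNF-rule voting: each class owns a set of unweighted rules;
    # a rule (conjunction of literals) is satisfied when all its literals hold,
    # and the example goes to the class with the most satisfied rules.

    def satisfies(example, rule):
        # rule: list of (feature_index, expected_value) literals
        return all(example[i] == v for i, v in rule)

    def classify(example, rules_by_class):
        votes = {cls: sum(satisfies(example, r) for r in rules)
                 for cls, rules in rules_by_class.items()}
        return max(votes, key=votes.get)   # ties broken arbitrarily

    rules_by_class = {
        "pos": [[(0, 1), (2, 0)], [(1, 1)]],    # (x0 AND NOT x2) OR x1
        "neg": [[(0, 0)], [(1, 0), (2, 1)]],    # (NOT x0) OR (NOT x1 AND x2)
    }
    print(classify([1, 0, 0], rules_by_class))  # -> "pos"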

91 citations


Journal ArticleDOI
TL;DR: A continuous version of cellular automata (fuzzy CA), obtained by “fuzzification” of the disjunctive normal form that describes the corresponding Boolean rule, is considered; remarkably, the display of such an automaton may show the well-known complex Boolean behavior instead of the (correct) convergence to a fixed point.

64 citations


Patent
03 Feb 2000
TL;DR: In this article, a method and apparatus are presented for automatically locating sources of semantic error in a multi-agent system based on setup connection tree information, and for informing the appropriate agents so that they can avoid using the faulty resources in the future.
Abstract: A method and apparatus for automatically locating sources of semantic error in a multi-agent system based on setup connection tree information, and informing the appropriate agents so that they can avoid using the faulty resources in the future. The setup connection tree model is established based on patterns of agent actions for expressing the logical relationship between available resources in the disjunctive normal form (d.n.f.). A table is used to record different sets of resources for use in the resource selection process. Thus, faulty resources can be located by means of induction. A global database is also maintained for updating information on semantic errors in the system.

63 citations


Journal Article
TL;DR: The general principles underlying the discrete methods for data analysis in recognition problems are described and an approach is suggested for constructing recognition procedures that make use of logical functions.
Abstract: The general principles underlying the discrete methods for data analysis in recognition problems are described. An approach is suggested for constructing recognition procedures that make use of logical functions. Basic models are described, and issues concerning the complexity of their implementation via the construction of irredundant coverings of Boolean and integer matrices are discussed.

29 citations


Journal ArticleDOI
TL;DR: Based on the uniform distribution PAC learning model, the learnability of the class of monotone disjunctive normal form formulas with at most O(log n) terms, denoted O(log n)-term MDNF, is investigated.
Abstract: Based on the uniform distribution PAC learning model, the learnability of the class of monotone disjunctive normal form formulas with at most O(log n) terms, denoted O(log n)-term MDNF, is investigated. Using the technique of restriction, an algorithm that learns O(log n)-term MDNF from examples in polynomial time is given.

29 citations


Journal ArticleDOI
Kewen Wang
TL;DR: The framework presented in this paper not only provides a new way of performing argumentation (abduction) in disjunctive deductive databases, but is also a simple, intuitive and unifying semantic framework for disjunctive logic programming.
Abstract: In this paper, we propose an argumentation-based semantic framework, called DAS, for disjunctive logic programming. The basic idea is to translate a disjunctive logic program into an argumentation-theoretic framework. One unique feature of our proposed framework is to consider the disjunctions of negative literals as possible assumptions so as to represent incomplete information. In our framework, three semantics, preferred disjunctive hypothesis (PDH), complete disjunctive hypothesis (CDH) and well-founded disjunctive hypothesis (WFDH), are defined by three kinds of acceptable hypotheses to represent credulous, moderate and skeptical reasoning in artificial intelligence (AI), respectively. Furthermore, our semantic framework can be extended to a wider class than that of disjunctive programs (called bi-disjunctive logic programs). In addition to being a first serious attempt at establishing an argumentation-theoretic framework for disjunctive logic programming, DAS integrates and naturally extends many key semantics, such as the minimal models, the extended generalized closed world assumption (EGCWA), the well-founded model, and the disjunctive stable models. In particular, novel and interesting argumentation-theoretic characterizations of the EGCWA and the disjunctive stable semantics are shown. Thus the framework presented in this paper not only provides a new way of performing argumentation (abduction) in disjunctive deductive databases, but is also a simple, intuitive and unifying semantic framework for disjunctive logic programming.

23 citations


Journal ArticleDOI
TL;DR: The concepts of conjunctive normal form, implicates and prime implicates, as well as the resolution method, are examined in the case of pseudo-Boolean functions.

19 citations


Journal ArticleDOI
TL;DR: It is established that every positive Boolean function can be represented by a shellable DNF, a polynomial procedure to compute the dual of a shellable DNF is proposed, and it is proved that testing the so-called lexico-exchange (LE) property (a strengthening of shellability) is NP-complete.
Abstract: Orthogonal forms of positive Boolean functions play an important role in reliability theory, since the probability that they take value 1 can be easily computed. However, few classes of disjunctive normal forms are known for which orthogonalization can be efficiently performed. An interesting class with this property is the class of shellable disjunctive normal forms (DNFs). In this paper, we present some new results about shellability. We establish that every positive Boolean function can be represented by a shellable DNF, we propose a polynomial procedure to compute the dual of a shellable DNF, and we prove that testing the so-called lexico-exchange (LE) property (a strengthening of shellability) is NP-complete.
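The opening claim, that the probability of a positive Boolean function taking value 1 is easy to compute from an orthogonal form, is easy to illustrate: when the terms of a DNF are pairwise disjoint, the events they describe are mutually exclusive, so the probability is just the sum of the term probabilities. A minimal Python sketch assuming independent variables (the DNF below is a hypothetical example, not from the paper):

    # Probability that an orthogonal (pairwise-disjoint-terms) DNF equals 1,
    # assuming independent Boolean variables. A term maps variable -> polarity.

    def term_prob(term, p):
        prob = 1.0
        for v, positive in term.items():
            prob *= p[v] if positive else 1.0 - p[v]
        return prob

    # f = (x1 AND x2) OR (NOT x1 AND x3): the terms disagree on x1, so they
    # are disjoint and the term probabilities simply add up.
    dnf = [{1: True, 2: True}, {1: False, 3: True}]
    p = {1: 0.9, 2: 0.8, 3: 0.5}
    print(sum(term_prob(t, p) for t in dnf))   # 0.9*0.8 + 0.1*0.5 = 0.77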

14 citations


Book ChapterDOI
01 Jan 2000
TL;DR: A survey of all nontrivial properties of perfect binary codes obtained via the switching approach is presented, and some open questions are discussed.
Abstract: Let C be a code (or a design or a graph) with some parameters. Let A be a subset of C. If the set C′ = (C \ A) ∪ B is a code (a design or a graph) with the same parameters as C we say that C′ is obtained from C by a switching. Special switchings for perfect binary codes are considered. A survey of all nontrivial properties of perfect codes given by the switching approach is presented. Some open questions are discussed.

11 citations


Journal ArticleDOI
TL;DR: A general procedure leading to optimized operator and quantifier rules for the sequent calculus, for natural deduction, and for clause formation is outlined; the main tools are variants of two-valued and many-valued propositional resolution, as well as a novel rule called combination.
Abstract: We investigate the problem of finding optimal axiomatizations for operators and distribution quantifiers in finitely valued first-order logics. We show that the problem can be viewed as the minimization of certain propositional formulas. We outline a general procedure leading to optimized operator and quantifier rules for the sequent calculus, for natural deduction, and for clause formation. The main tools are variants of two-valued and many-valued propositional resolution, as well as a novel rule called combination. In the case of operators and quantifiers based on semilattices, rules with a minimal branching degree can be obtained by instantiating a schema, which can also be used for optimal tableaux with sets-as-signs.

10 citations


Book ChapterDOI
13 Jun 2000
TL;DR: A DNA-based massively parallel exhaustive search is applied to solving the computational learning problems of DNF (disjunctive normal form) Boolean formulae, and it is shown that the class of k-term DNF formulae (for any constant k) and the class of general DNF formulae are efficiently learnable on a DNA computer.
Abstract: We apply a DNA-based massively parallel exhaustive search to solving the computational learning problems of DNF (disjunctive normal form) Boolean formulae. Learning DNF formulae from examples is one of the most important open problems in computational learning theory, and the problem of learning 3-term DNF formulae is known to be intractable if RP ≠ NP. We propose new methods to encode any k-term DNF formula as a DNA strand, to evaluate the encoded DNF formula for a truth-value assignment by using hybridization and PCR, and to find a DNF formula consistent with the given examples. By employing these methods, we show that the class of k-term DNF formulae (for any constant k) and the class of general DNF formulae are efficiently learnable on a DNA computer.

Journal ArticleDOI
26 Jul 2000
TL;DR: The method searches for a decomposable partition of the set of all attributes, using the error sizes of almost-fit decomposable extensions as a guiding measure, and then finds structural relations among the attributes in the obtained partition.
Abstract: In such areas as knowledge discovery, data mining and logical analysis of data, methodologies to find relations among attributes are considered important. In this paper, given a data set (T, F) of a phenomenon, where T ⊆ {0,1}^n denotes a set of positive examples and F ⊆ {0,1}^n denotes a set of negative examples, we propose a method to identify decomposable structures among the attributes of the data. Such information will reveal the hierarchical structure of the phenomenon under consideration. We first study the computational complexity of the problem of finding decomposable Boolean extensions. Since the problem turns out to be intractable (i.e., NP-complete), we propose a heuristic algorithm in the second half of the paper. Our method searches for a decomposable partition of the set of all attributes, using the error sizes of almost-fit decomposable extensions as a guiding measure, and then finds structural relations among the attributes in the obtained partition. The results of numerical experiments on synthetically generated data sets are also reported.

01 Jan 2000
TL;DR: The algorithms of Abraham and of Heidtmann are generalized to formulas that need not be monotone (the generalization of the second is called the algorithm of Bertschy-Monney), and a general language is specified for representing subsets of a product of finite sets.
Abstract: A well known problem in reliability theory is the following: given a formula φ in disjunctive normal form, we want to compute its probability p(φ). Usually, this consists in finding a representation of φ using disjoint formulas, i.e. we construct a set of formulas {f1, . . . , fn} such that the computation of the probability p(fi) of each fi is a simple task and the probability p(φ) is just the sum of the probabilities p(fi). Well-known methods for computing representations using disjoint formulas are the inclusion-exclusion method, the algorithm of Abraham [1], the more efficient algorithm of Heidtmann [8], and further methods based on them. In reliability theory, φ is usually a monotone Boolean function. In our context of probabilistic model-based reasoning and the Dempster-Shafer theory of evidence, the formula φ is often not monotone. This situation appears, for example, in model-based diagnostics when we need to compute the conditional probability of a diagnosis given the observations made on the system [2, 9]. Therefore, the algorithms of Abraham and of Heidtmann have been generalized to this situation; the generalization of the second one is called the algorithm of Bertschy-Monney [5]. Furthermore, propositional (i.e. binary) variables are usually not very handy for our purpose. Therefore, more general polytomic variables with several possible values instead of only two are considered and used in the form of (finite) set constraints (SC). This means we specify a general language for representing subsets of a product of finite sets. Again, the problem is to compute the probability of a DNF consisting of SCs. We present the generalization of the algorithm of Abraham to the present situation. Of course, there is no problem in mixing binary and polytomic variables in the generalized algorithm.
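The baseline inclusion-exclusion method mentioned in this abstract is straightforward to state in code for the propositional case. A minimal Python sketch assuming independent binary variables with known probabilities (the formula and numbers are hypothetical; the Abraham, Heidtmann and Bertschy-Monney algorithms are more efficient disjoint-products methods and are not shown):

    from itertools import combinations

    # p(t1 OR ... OR tn) by inclusion-exclusion over subsets of terms.
    # A term maps variable -> polarity; a contradictory conjunction has probability 0.
    # Note: this enumerates 2^n - 1 subsets, hence the interest in better algorithms.

    def conj_prob(terms, p):
        merged = {}
        for t in terms:
            for v, pol in t.items():
                if merged.setdefault(v, pol) != pol:
                    return 0.0                 # contains both x AND NOT x
        prob = 1.0
        for v, pol in merged.items():
            prob *= p[v] if pol else 1.0 - p[v]
        return prob

    def dnf_prob(dnf, p):
        total = 0.0
        for k in range(1, len(dnf) + 1):
            for subset in combinations(dnf, k):
                total += (-1) ** (k + 1) * conj_prob(subset, p)
        return total

    dnf = [{1: True, 2: True}, {2: False, 3: True}]   # x1 x2 OR (NOT x2) x3
    p = {1: 0.9, 2: 0.8, 3: 0.5}
    print(dnf_prob(dnf, p))                           # 0.72 + 0.10 - 0.0 = 0.82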

Book ChapterDOI
22 Mar 2000
TL;DR: This work provides a formal framework that explains why in many cases it is impossible to have a polynomial-time optimization of such a combination method, and why often combined decision problems are NP-hard, regardless of the complexity of the component problems.
Abstract: Most combination methods for decision procedures known from the area of automated deduction assign to a given “combined” input problem an exponential number of output pairs where the two components represent “pure” decision problems that can be decided with suitable component decision procedures. A mixed input decision problem evaluates to true iff there exists an output pair where both components evaluate to true. We provide a formal framework that explains why in many cases it is impossible to have a polynomial-time optimization of such a combination method, and why often combined decision problems are NP-hard, regardless of the complexity of the component problems. As a first application we consider Oppen’s combination method for algorithms that decide satisfiability of quantifier-free formulae in disjunctive normal form w.r.t. a first-order theory. A collection of first-order theories is given where combined problems are always NP-hard and Oppen’s method does not have a polynomial-time optimization. As a second example, similar results are given for the problem of combining algorithms that decide whether a unification problem has a solution w.r.t. to a given equational theory.

Proceedings ArticleDOI
18 Sep 2000
TL;DR: A new approach to a simple disjunctive decomposition of a Boolean function is presented, based on using symmetric relations among a function's variables to recognize intrinsic characteristics of the function.
Abstract: A new approach to a simple disjunctive decomposition of a Boolean function is presented. It is based on using symmetric relations among a function's variables to recognize intrinsic characteristics of the function. The conditions for the existence of a simple disjunctive decomposition are formulated and a hierarchical simple disjunctive decomposition is generated in a bottom-up manner without exhaustive search. Results on benchmark functions are very encouraging.
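The classical existence condition behind such decompositions can be checked directly, which may help make the abstract concrete: f(x) = g(h(x_B), x_F) exists for a bound set B and free set F exactly when the residual functions obtained by fixing the bound variables take at most two distinct patterns (Ashenhurst's decomposition-chart condition). A brute-force Python sketch of that textbook test follows; the example function is hypothetical, and this is not the paper's symmetry-based method, which avoids exhaustive search.

    from itertools import product

    # Ashenhurst test for a simple disjunctive decomposition f(x) = g(h(x_B), x_F):
    # it exists iff, over all assignments to the bound set B, the residual
    # functions of the free variables take at most two distinct patterns.

    def has_simple_decomposition(f, n, bound):
        free = [i for i in range(n) if i not in bound]
        patterns = set()
        for bv in product([0, 1], repeat=len(bound)):
            col = []
            for fv in product([0, 1], repeat=len(free)):
                x = [0] * n
                for i, v in zip(bound, bv):
                    x[i] = v
                for i, v in zip(free, fv):
                    x[i] = v
                col.append(f(tuple(x)))
            patterns.add(tuple(col))
        return len(patterns) <= 2

    # f(x0,x1,x2) = (x0 XOR x1) AND x2 decomposes with h(x0,x1) = x0 XOR x1.
    f = lambda x: (x[0] ^ x[1]) & x[2]
    print(has_simple_decomposition(f, 3, [0, 1]))   # True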

Book ChapterDOI
01 Jan 2000
TL;DR: An efficient algorithm is described which carries out the generalised dual transformation from possibilistic disjunctive normal form (DNF) representing data into conjunctive normal form (CNF) representing knowledge, and thus generates all the most interesting prime disjunctions.
Abstract: We describe the problem of mining set valued rules in large relational tables containing categorical attributes taking a finite number of values. Such rules allow an interval of possible values to be selected for each attribute in the condition, instead of the single value used in association rules, while the conclusion contains a projection of the data restricted by the condition onto a target attribute. An example of such a rule might be “if HOUSEHOLDSIZE = {Two OR Three} AND OCCUPATION = {Professional OR Clerical} THEN PAYMENT_METHOD = {CashCheck (Max=249, Sum=4952) OR DebitCard (Max=175, Sum=3021)} WHERE Confidence=85%, Support=10%.” We use an original conceptual and formal framework for representing a multidimensional distribution induced from data by a number of so-called prime disjunctions upper bounding its surface. Each prime disjunction represents a wide multidimensional interval of impossible combinations of attribute values. This formalism generalises the conventional Boolean approach in two directions: (i) finite-valued attributes (instead of only 0 and 1), and (ii) continuous-valued semantics (instead of only true and false). In addition, we describe an efficient algorithm which carries out the generalised dual transformation from possibilistic disjunctive normal form (DNF) representing data into conjunctive normal form (CNF) representing knowledge, and thus generates all the most interesting prime disjunctions. Once obtained, they can be used to build different forms of rules or for other purposes (prediction, clustering, etc.).

Journal ArticleDOI
TL;DR: This work defines a temporal negative normal form for the future fragment of linear time propositional temporal logic, named FNext, and presents a new structure and a set of operators over it that allow the information about unitary implicants and implicates contained in the subformulae to be managed efficiently.
Abstract: Most theorem provers for Classical Logic transform the input formula into a particular normal form. This transformation is done before the execution of the algorithm (Resolution and Dissolution) or is integrated into the deductive algorithm (Tableaux). The situation is no different for Non-Classical Logics and, particularly, for Temporal Logics ([Fis91], [Fis97], [MP95], [Wol85]). However, unlike classical logic, temporal logic does not provide an extension of the notion of negative normal form. In this work, we define a temporal negative normal form for the future fragment of linear time propositional temporal logic, named FNext. The definition saves as much information as possible about implicants and implicates of the input formula and its subformulas. This property is the novelty of this normal form; for example, in [Fis97] the normal form is guided by the separation property, and in [Ven86] the normal form provided prepares the input formula to be treated by a resolution method.

Journal ArticleDOI
TL;DR: It is shown, by run-time measurements with the theorem prover KoMeT, that definitional translations can compete extremely well with the usual translation; for some problems, proofs can be obtained in reasonable time only if definitional translations are used.
Abstract: In this paper, we compare different normal form translations from a practical point of view. The usual translation of a closed first-order formula to a disjunctive normal form has severe drawbacks, namely the disruption of the formula's structure and an exponential worst-case complexity. In contrast, definitional translations avoid these drawbacks by introducing some additional new predicates, yielding only a moderate increase in the length of the normal form. In implementations, the standard translation is preferred, possibly because the theorem prover has to cope with some additional redundancy introduced by the new predicates. We show, by providing run-time measurements with our theorem prover KoMeT, that definitional translations can compete extremely well with the usual translation. Moreover, for some problems, proofs can be obtained in reasonable time only if definitional translations are used.
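The core idea of a definitional translation is compact enough to sketch: give every compound subformula a fresh name and add a formula defining that name, so the output grows linearly with the input rather than exponentially. A minimal propositional Python sketch of this renaming scheme (a generic Tseitin-style illustration; KoMeT's actual first-order translation is more involved):

    from itertools import count

    # Definitional (Tseitin-style) translation, propositional sketch.
    # Formulas are nested tuples ("and", a, b), ("or", a, b), ("not", a), or str atoms.
    # Each compound subformula gets a fresh name d_i plus a definition d_i <-> subformula.

    def definitional(formula):
        fresh = count(1)
        defs = []

        def name(f):
            if isinstance(f, str):                  # atom: keep as-is
                return f
            op, *args = f
            renamed = tuple(name(a) for a in args)
            d = f"d{next(fresh)}"
            defs.append((d, (op, *renamed)))        # definition: d <-> op(renamed)
            return d

        return name(formula), defs

    root, defs = definitional(("or", ("and", "p", "q"), ("not", ("and", "q", "r"))))
    print(root)   # 'd4' -- assert this name together with the definitions
    print(defs)   # [('d1', ('and', 'p', 'q')), ('d2', ('and', 'q', 'r')),
                  #  ('d3', ('not', 'd2')), ('d4', ('or', 'd1', 'd3'))]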

Journal ArticleDOI
Raymond L. Major
01 Feb 2000
TL;DR: This work introduces a practical algorithm that forms a finite number of features using a decision tree in a polynomial amount of time, and shows empirically that this procedure forms many features that subsequently appear in a tree and that the new features aid in producing simpler trees when concepts are being learned from certain problem domains.
Abstract: Using decision trees as a concept description language, we examine the time complexity for learning Boolean functions with polynomial-sized disjunctive normal form expressions when feature construction is performed on an initial decision tree containing only primitive attributes. A shortcoming of several feature-construction algorithms found in the literature is that it is difficult to develop time complexity results for them. We illustrate a way to determine a limit on the number of features to use for building more concise trees within a standard amount of time. We introduce a practical algorithm that forms a finite number of features using a decision tree in a polynomial amount of time. We show empirically that our procedure forms many features that subsequently appear in a tree and the new features aid in producing simpler trees when concepts are being learned from certain problem domains. Expert systems developers can use a method such as this to create a knowledge base of information that contains specific knowledge in the form of If-Then rules.

Book ChapterDOI
01 Jan 2000
TL;DR: One of Wolfgang Bibel's most influential contributions to automated theorem proving was to develop and realize the idea of taking a cut-free affirmative proof calculus and applying its rules in a backward direction, with redundancies and irrelevant information removed in order to allow efficient automation.
Abstract: One of Wolfgang Bibel's most influential contributions to automated theorem proving was to develop and to realize — in the form of the connection method (Bibel, 1987) — the idea of taking a cut-free affirmative proof calculus and applying its rules in a backward direction, with redundancies and irrelevant information removed in order to allow efficient automation. In the case of an input sentence in disjunctive normal form, this implies that a complementary compound instance of the input sentence is generated, i.e., a finite set of ground instances of its clauses through which all paths are complementary.

Proceedings Article
01 Jun 2000
TL;DR: In this article, a theoretical analysis of interval valued fuzzy sets (IVFS) applied to possibility measures is presented, and conditioning in the setting of IVPM is introduced via either a canonical extension of well-established rules or, more interestingly, the solution of the underlying Cox axiomatic equation.
Abstract: This paper deals with the theoretical analysis of the notion of interval valued fuzzy sets (IVFS) applied to possibility measures. This makes it possible to define interval valued possibility measures (IVPM) and interval valued necessity measures (IVNM), as well as interval valued possibility distributions (IVPD). In particular, two kinds of IVPM are provided: the first assumes a conjunctive normal form and a disjunctive normal form pertaining to a logical assertion, while the second uses a logical AND and a logical OR to construct the underlying interval. The properties of both representations are investigated, and some basic combination operations involving conjunction and disjunction are examined. Conditioning in the setting of IVPM is introduced by considering either a canonical extension of well-established rules or, more interestingly, by solving the underlying Cox axiomatic equation. Finally, some further extensions using a general class of t-norm operators are discussed.


Journal ArticleDOI
TL;DR: Inequalities are proved that connect the complexity L∨(f) of the d.n.f. of a threshold function f with its Chow parameters; they show that for almost all threshold functions, for sufficiently large n, log₂ L∨(f) > n − 2√(2n log₂ n)·(1 + δ(n)), where δ(n) is an arbitrary function such that δ(n) → 0 and nδ(n) → ∞ as n → ∞.

Abstract: We consider the problem of estimating the complexity of the disjunctive normal form (d.n.f.) of threshold functions in n variables, where the complexity is the minimal number of prime implicants in a d.n.f. representation of the function. It is known that the complexity of the d.n.f. of almost all threshold functions is no less than n/log₂ n. We prove inequalities which connect the complexity L∨(f) of the d.n.f. of a threshold function f with its Chow parameters. Using these inequalities we show that for almost all threshold functions, for sufficiently large n, log₂ L∨(f) > n − 2√(2n log₂ n)·(1 + δ(n)), where δ(n) is an arbitrary function such that δ(n) → 0 and nδ(n) → ∞ as n → ∞. A threshold function (see [1, 2, 6]) is a Boolean function

12 Jul 2000
TL;DR: A statement is in disjunctive normal form if it is a disjunction (sequence of ORs) consisting of one or more disjuncts, each of which is a conjunction (AND) of oneor more literals (i.e., statement letters and negations of statement letters).
Abstract: A statement is in disjunctive normal form if it is a disjunction (sequence of ORs) consisting of one or more disjuncts, each of which is a conjunction (AND) of one or more literals (i.e., statement letters and negations of statement letters; Mendelson 1997, p. 30). Disjunctive normal form is not unique. The Wolfram Language command LogicalExpand[expr] gives disjunctive normal form (with some contractions, i.e., LogicalExpand attempts to shorten output with heuristic simplification). Examples...
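Outside the Wolfram Language, the same conversion is available in other systems. For example, a minimal Python sketch using SymPy's to_dnf (SymPy, not the Wolfram Language, so the exact output form may differ):

    from sympy import symbols
    from sympy.logic.boolalg import to_dnf

    # Convert a formula to disjunctive normal form: an OR of ANDs of literals.
    A, B, C = symbols("A B C")
    expr = (A | B) & (A | ~C)

    print(to_dnf(expr))                   # e.g. A | (A & B) | (A & ~C) | (B & ~C)
    print(to_dnf(expr, simplify=True))    # e.g. A | (B & ~C), analogous to
                                          # LogicalExpand's heuristic shortening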