
Showing papers on "Disjunctive normal form published in 1998"


Journal ArticleDOI
01 May 1998
TL;DR: This work identifies two classes of Boolean functions used in dependency analyses, positive and definite functions, systematically investigates their efficient implementation, and shows that both classes are closed under existential quantification.
Abstract: Many static analyses for declarative programming/database languages use Boolean functions to express dependencies among variables or argument positions. Examples include groundness analysis, arguably the most important analysis for logic programs, finiteness analysis and functional dependency analysis for databases. We identify two classes of Boolean functions that have been used: positive and definite functions, and we systematically investigate these classes and their efficient implementation for dependency analyses. On the theoretical side, we provide syntactic characterizations and study the expressiveness and algebraic properties of the classes. In particular, we show that both are closed under existential quantification. On the practical side, we investigate various representations for the classes based on reduced ordered binary decision diagrams (ROBDDs), disjunctive normal form, conjunctive normal form, Blake canonical form, dual Blake canonical form, and a form specific to definite functions. We compare the resulting implementations of groundness analyzers based on the representations for precision and efficiency.

105 citations
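The two classes named in the abstract have simple semantic characterizations: a positive function is true under the all-true assignment, and a definite function is positive with its set of models closed under intersection. A small Python sketch of these checks (an illustration of the standard definitions, not the paper's ROBDD-based implementation; the example functions are hypothetical):

```python
from itertools import product

def models(f, n):
    """All satisfying assignments of an n-variable Boolean function."""
    return [v for v in product([False, True], repeat=n) if f(v)]

def is_positive(f, n):
    """Pos: the top assignment (all variables true) is a model."""
    return f(tuple([True] * n))

def is_definite(f, n):
    """Def: positive, and the set of models is closed under
    pointwise conjunction (intersection of models)."""
    if not is_positive(f, n):
        return False
    ms = models(f, n)
    for a in ms:
        for b in ms:
            meet = tuple(x and y for x, y in zip(a, b))
            if not f(meet):
                return False
    return True

# x <-> y, a typical groundness dependency: positive and definite
iff = lambda v: v[0] == v[1]
# x or y: positive, but its models are not closed under intersection
disj = lambda v: v[0] or v[1]
```

Here `iff` passes both checks, while `disj` is positive but not definite, since the intersection of its models (True, False) and (False, True) is the non-model (False, False).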


01 Jul 1998
TL;DR: In this paper, a continuous version of cellular automata (fuzzy CA) obtained by fuzzification of the disjunctive normal form which describes the corresponding Boolean rule is considered.
Abstract: In this paper we consider a continuous version of cellular automata (fuzzy CA) obtained by “fuzzification” of the disjunctive normal form which describes the corresponding Boolean rule. We concentrate on fuzzy rule 90, whose Boolean version deserves some attention for the complex patterns it generates. We show that the behavior of fuzzy rule 90 is very simple, in that the system always converges to a fixed point. In the case of finite support configurations, we also show aperiodicity of every temporal sequence, extending and complementing Jen’s result on aperiodicity of Boolean rule 90. We finally show and analyze the remarkable fact that, depending on the level of state-discreteness used to visualize the dynamics of fuzzy rule 90, the display might show (after a transient) the well-known complex Boolean behavior instead of the (correct) convergence to a fixed point. The results of the analysis lead not only to a caveat on the dangers of visualization, but also to an unexpected explanation of the dynamics of Boolean rule 90.

62 citations


Journal ArticleDOI
TL;DR: A recursive inductive learning scheme is presented that acquires hand pose models in the form of disjunctive normal form expressions involving multivalued features and outperforms all the other inductive algorithms compared.
Abstract: Presents a recursive inductive learning scheme that is able to acquire hand pose models in the form of disjunctive normal form expressions involving multivalued features. Based on an extended variable-valued logic, our rule-based induction system is able to abstract compact rule sets from any set of feature vectors describing a set of classifications. The rule bases which satisfy the completeness and consistency conditions are induced and refined through five heuristic strategies. A recursive induction learning scheme in the RIEVL algorithm is designed to escape local minima in the solution space. A performance comparison of RIEVL with other inductive algorithms, ID3, NewID, C4.5, CN2, and HCV, is given in the paper. In the experiments with hand gestures, the system produced the disjunctive normal form descriptions of each pose and identified the different hand poses based on the classification rules obtained by the RIEVL algorithm. RIEVL classified 94.4 percent of the gesture images in our testing set correctly, outperforming all other inductive algorithms.

42 citations


Book ChapterDOI
01 Jan 1998
TL;DR: This chapter introduces several important issues for constructive induction and describes a new multi-strategy constructive induction algorithm (GALA2.0) which is independent of the learning algorithm and is designed as a preprocessing step applied before standard machine learning algorithms.
Abstract: Inductive algorithms rely strongly on their representational biases. Representational inadequacy can be mitigated by constructive induction. This chapter introduces several important issues for constructive induction and describes a new multi-strategy constructive induction algorithm (GALA2.0) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present the results which demonstrate the effectiveness of GALA2.0 on real domains for two different learners: C4.5 and backpropagation.

19 citations


Journal ArticleDOI
TL;DR: The theory of (partially defined) discrete functions is introduced as an important theoretical tool for the analysis of multi-attribute data sets, and a polynomial-time algorithm for the identification of so-called stable discrete functions is presented.
Abstract: Many data-analysis algorithms in machine learning, data mining and a variety of other disciplines essentially operate on discrete multi-attribute data sets. By means of discretisation or binarization, numerical data sets can also be successfully analysed. Therefore, in this paper we introduce the theory of (partially defined) discrete functions as an important theoretical tool for the analysis of multi-attribute data sets. In particular we study monotone (partially defined) discrete functions. Compared with the theory of Boolean functions, relatively little is known about (partially defined) monotone discrete functions. It appears that decision lists are useful for the representation of monotone discrete functions. Since dualization is an important tool in the theory of (monotone) Boolean functions, we study the interpretation and properties of the dual of a (monotone) binary or discrete function. We also introduce the dual of a pseudo-Boolean function. The results are used to investigate extensions of partially defined monotone discrete functions and the identification of monotone discrete functions. In particular, we present a polynomial time algorithm for the identification of so-called stable discrete functions.

18 citations
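Dualization, central to the abstract above, is defined for Boolean functions by f^d(x) = NOT f(NOT x); for monotone functions this corresponds to swapping conjunctions and disjunctions in a formula. A minimal check of that correspondence (the example formulas are chosen only for illustration):

```python
from itertools import product

def dual(f, n):
    """Dual of an n-variable Boolean function: f^d(x) = NOT f(NOT x)."""
    return lambda v: not f(tuple(not x for x in v))

# monotone example: f = x or (y and z); its dual should be
# the AND/OR-swapped formula x and (y or z)
f = lambda v: v[0] or (v[1] and v[2])
g = lambda v: v[0] and (v[1] or v[2])

fd = dual(f, 3)
assert all(fd(v) == g(v) for v in product([False, True], repeat=3))
```

Applying `dual` twice recovers the original function, reflecting that dualization is an involution.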


Journal ArticleDOI
TL;DR: A compromise alternative, which is proved to satisfy Pareto optimality, can be obtained through the aggregation of judgements based on the hierarchical aggregation structure.

17 citations


Book ChapterDOI
14 Dec 1998
TL;DR: It is proved that approximating the minimally consistent DNF formula, and a generalization of graph colorability, is very hard; the proof technique is such that the stronger the complexity hypothesis used, the larger the inapproximability ratio obtained.
Abstract: In this paper, we study the possibility of Occam's razors for a widely studied class of Boolean Formulae: Disjunctive Normal Forms (DNF). An Occam's razor is an algorithm which compresses the knowledge of observations (examples) in small formulae. We prove that approximating the minimally consistent DNF formula, and a generalization of graph colorability, is very hard. Our proof technique is such that the stronger the complexity hypothesis used, the larger the inapproximability ratio obtained. Our ratio is among the first to integrate the three parameters of Occam's razors: the number of examples, the number of description attributes and the size of the target formula labelling the examples. Theoretically speaking, our result rules out the existence of efficient deterministic Occam's razor algorithms for DNF. Practically speaking, it puts a large worst-case lower bound on the formulae's sizes found by learning systems proceeding by rule searching.

12 citations


Book ChapterDOI
01 Jan 1998
TL;DR: A branch-and-bound algorithm is proposed that traces disjunctive (conjunctive) combinations of binary predictor variables to predict a binary criterion variable, allowing logical classification rules to be found that decide whether or not a given object belongs to a given category based on the object's attribute pattern.
Abstract: This paper proposes a branch-and-bound algorithm to trace disjunctive (conjunctive) combinations of binary predictor variables to predict a binary criterion variable. The algorithm allows for finding logical classification rules that can be used to derive whether or not a given object belongs to a given category based on the attribute pattern of the object. An objective function is minimized which takes into account both accuracy in prediction and cost of the predictors. A simulation study is presented in which the performance of the algorithm is evaluated.

10 citations
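To make the objective in the abstract concrete: the search is over disjunctions of predictors, minimising prediction errors plus the cost of the chosen predictors. A brute-force sketch over small predictor subsets, standing in for the paper's branch-and-bound search (the data, costs, and the exact form of the objective are hypothetical):

```python
from itertools import combinations

def best_disjunction(X, y, costs, max_size=2):
    """Exhaustively search (in place of the paper's branch and bound)
    for the disjunction of predictors minimising
    misclassifications + total predictor cost."""
    n_pred = len(X[0])
    best = (float("inf"), ())
    for k in range(1, max_size + 1):
        for subset in combinations(range(n_pred), k):
            pred = [int(any(row[j] for j in subset)) for row in X]
            errors = sum(p != t for p, t in zip(pred, y))
            score = errors + sum(costs[j] for j in subset)
            best = min(best, (score, subset))
    return best

# toy data: the criterion is x0 OR x2
X = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
y = [0, 1, 0, 1, 1, 1]
costs = [0.1, 0.1, 0.1]
print(best_disjunction(X, y, costs))
```

On this toy data the disjunction of predictors 0 and 2 classifies perfectly, so it wins with a score equal to its cost alone.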


Book ChapterDOI
TL;DR: A logic is defined which handles both anytime deduction and anytime compilation; it is semantically founded on the notion of resource, which captures both the accuracy and the cost of approximation.
Abstract: One of the main characteristics of reasoning in knowledge based systems is its high computational complexity. Anytime deduction and anytime compilation are two attractive approaches that have been proposed for addressing such a difficulty. The first one offers a compromise between the time complexity needed to compute approximate answers and the quality of these answers. The second one proposes a trade-off between the space complexity of the compiled theory and the number of possible answers it can efficiently process. The purpose of our study is to define a logic which handles these two approaches by incorporating several major features. First, the logic is semantically founded on the notion of resource which captures both the accuracy and the cost of approximation. Second, a stepwise procedure is included for improving approximate answers. Third, both sound approximations and complete ones are covered. Fourth and finally, the reasoning task may be done off-line and compiled theories can be used for answering many queries.

5 citations


Proceedings Article
03 Aug 1998
TL;DR: It is reported that a variant of the procedure specified by Dung, which extends the Eshghi-Kowalski procedure, computes the regular extension semantics.
Abstract: Can the elegant abductive proof procedure by Eshghi and Kowalski be extended to answer queries for disjunctive logic programs? If yes, what is the semantics that such an extended procedure computes? Several years ago, in an unpublished manuscript, Dung specified a proof procedure that embeds a form of linear resolution into the Eshghi-Kowalski procedure [3]. More recently, You et al. defined the regular extension semantics for disjunctive programs [19], which was strongly motivated by the observation that some forms of extended Eshghi-Kowalski procedure could be used to answer queries under this semantics. This paper reports the finding that a variant of the procedure specified by Dung computes the regular extension semantics.

5 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a verification method via invariants for communication protocols modeled as 2-ECFSMs, in which a human verifier describes an invariant of a given protocol in disjunctive normal form and a verification system shows safety or liveness based on the invariant.
Abstract: Previously, we proposed a verification method via invariants for communication protocols modeled as 2-ECFSMs. In the proposed method, a human verifier describes an invariant of a given protocol in a disjunctive normal form, and a verification system shows safety or liveness based on the invariant. The tedious work of describing invariant formulae is the most significant shortcoming of the proposed method. This paper deals with a semi-automated derivation of invariant formulae for communication protocols modeled as 2-ECFSMs. In the method, the logical formula which holds on a subset of reachable states is automatically generated. Such a subset consists of states which are reachable by synchronous communication from the initial states and those which are reachable by sequences of sending transitions from synchronously reachable states. To obtain an invariant, a human verifier supplements several disjuncts for the other part of the reachability set. We conducted an experiment on deriving an invariant formula of a sample protocol extracted from the OSI session protocol. As a result, 636 conjunctive formulae were automatically derived, and the conjunction of those formulae was shown to be an invariant of the sample protocol, i.e. the sample protocol was shown to be safe automatically.

Proceedings ArticleDOI
21 Apr 1998
TL;DR: A learning algorithm is proposed that requires a polynomial number of examples in the size of an unknown formula under probably approximately correct (PAC) learning with a subset query, although it does not run in polynomial time.
Abstract: We show a positive result for learnability of arbitrary disjunctive normal form (DNF) formulae. We propose a learning algorithm that requires a polynomial number of examples in the size of an unknown formula under probably approximately correct (PAC) learning with a subset query, although it does not run in polynomial time. Our algorithm is based on Valiant's (1984) approach to monotone-DNF.
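Valiant's monotone-DNF approach, which the abstract builds on, shrinks each uncovered positive example to a prime implicant and collects the resulting terms. A sketch of that idea using membership queries on a small instance (the paper's own algorithm uses subset queries and handles arbitrary DNF, which this sketch does not; the target formula is hypothetical):

```python
from itertools import product

def learn_monotone_dnf(oracle, n, examples):
    """Learn a monotone DNF over n variables: shrink each positive
    example to a prime implicant using membership queries, in the
    spirit of Valiant's monotone-DNF learner."""
    def covered(x, terms):
        return any(all(x[i] for i in t) for t in terms)

    terms = []
    for x in examples:
        if oracle(x) and not covered(x, terms):
            x = list(x)
            for i in range(n):          # greedily drop variables
                if x[i]:
                    x[i] = False
                    if not oracle(tuple(x)):
                        x[i] = True     # this variable is essential
            terms.append(frozenset(i for i in range(n) if x[i]))
    return terms

# hypothetical target: (x0 AND x1) OR x3
target = lambda v: (v[0] and v[1]) or v[3]
n = 4
all_points = list(product([False, True], repeat=n))
terms = learn_monotone_dnf(target, n, all_points)
hyp = lambda v: any(all(v[i] for i in t) for t in terms)
assert all(hyp(v) == target(v) for v in all_points)
```

By monotonicity, every minimized positive point yields a sound implicant, and processing all positive examples guarantees completeness, so the hypothesis agrees with the target on every enumerated point.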

Book ChapterDOI
25 Feb 1998
TL;DR: This work considers the class C_DHR of disguised double Horn functions, i.e., the functions which, together with their complement, are disguised Horn, and investigates the syntactical properties of this class and its relationship to other classes of Boolean functions.
Abstract: As a natural restriction of disguised Horn functions (i.e., Boolean functions which become Horn after a renaming (change of polarity) of some of the variables), we consider the class C_DHR of disguised double Horn functions, i.e., the functions which, together with their complement, are disguised Horn. We investigate the syntactical properties of this class and its relationship to other classes of Boolean functions. Moreover, we address the extension problem of partially defined Boolean functions (pdBfs) in C_DHR, where a pdBf is a function defined on a subset (rather than the full set) of Boolean vectors. We show that the class C_DHR coincides with the class C_1-DL of 1-decision lists, and with the intersections of several well-known classes of Boolean functions. Furthermore, polynomial time algorithms are provided for the recognition of a function in C_DHR from Horn formulas and other classes of formulas, while the problem is intractable in general. We also present an algorithm for the extension problem which, properly implemented, runs in linear time.
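A 1-decision list, the class shown here to coincide with C_DHR, is an ordered sequence of single-literal tests, each with its own output, followed by a default. A minimal evaluator (the representation details and example list are illustrative only):

```python
def eval_decision_list(dlist, default, v):
    """Evaluate a 1-decision list: each rule tests a single literal,
    given as (variable index, expected polarity), and returns its
    output if it fires; otherwise evaluation falls through."""
    for (var, polarity, output) in dlist:
        if v[var] == polarity:
            return output
    return default

# hypothetical list: "if x0 then true, else if not x2 then false, else true"
dlist = [(0, True, True), (2, False, False)]
assert eval_decision_list(dlist, True, (True, False, False)) is True
assert eval_decision_list(dlist, True, (False, True, False)) is False
assert eval_decision_list(dlist, True, (False, True, True)) is True
```

The rule order matters: the first literal that matches the input decides the output, which is what makes the class strictly ordered rather than a plain DNF.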

Journal ArticleDOI
TL;DR: In this paper, a disjunctive logic program is naturally transformed into an argument framework, the credulous argumentation is characterized as the maximal members of all acceptable hypotheses, and it is shown that the formalism of credulous argumentation can be implemented through the disjunctive stable models.
Abstract: The relationship between the disjunctive stable semantics and argumentation is rarely explored. In particular, the problem of how to perform argumentation with disjunctive logic programs under the disjunctive stable semantics is still open. This paper attempts to address this problem, and a satisfactory solution is provided, in which a disjunctive logic program is naturally transformed into an argument framework and the credulous argumentation is characterized as the maximal members of all acceptable hypotheses. In this semantic framework, some interesting results are obtained. In particular, it is shown that the formalism of credulous argumentation can be implemented through the disjunctive stable models. As a result, the work provides not only a new way of performing argumentation (abduction) in disjunctive deductive databases, but also a natural and complete extension of the disjunctive stable semantics.

Book ChapterDOI
01 Jan 1998
TL;DR: The performance of five commonly employed evolutionary algorithms is examined on a collection of 100 separate rule induction tasks over five freely available datasets; the results indicate that single-member based methods fare at least as well as population based techniques when rules are restricted to fairly low complexity.
Abstract: Induction of useful rules from databases has been studied by several researchers. There remains a need for systematic comparison of alternative methods, especially considering the available variety of rule representation strategies, genetic operators, evolutionary algorithm designs, and so forth. Here, the performance of five commonly employed evolutionary algorithms is examined on a collection of 100 separate rule induction tasks on five freely available datasets. All tasks require the generation of rules in disjunctive normal form with either a fixed or free consequent maximising an accuracy/applicability tradeoff measure; tasks differ in terms of the dataset used, the identity of a fixed consequent (or no fixed consequent), and the maximum number of disjuncts allowed in the antecedent. Results generally indicate that single-member based methods (hill climbing, simulated annealing, tabu search) fare at least as well as population based techniques when rules are restricted to fairly low complexity, but this situation is reversed as rules are allowed to be more complex. These results are of import to data mining application developers and researchers wishing to find the appropriate search strategy for rule induction with respect to their particular needs.
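As an illustration of the single-member search strategies compared in this chapter, here is a minimal hill climber over disjunctive antecedents, using accuracy × applicability as a stand-in tradeoff measure (the chapter's exact objective, datasets, and move operators differ; the toy data is hypothetical):

```python
import random

def score(antecedent, X, y):
    """Accuracy x applicability: one possible tradeoff measure."""
    fired = [i for i, row in enumerate(X)
             if any(row[j] == val for (j, val) in antecedent)]
    if not fired:
        return 0.0
    acc = sum(y[i] for i in fired) / len(fired)
    return acc * (len(fired) / len(X))

def hill_climb(X, y, n_attrs, values=(0, 1), steps=200, seed=1):
    """Single-member search: repeatedly toggle one (attribute, value)
    condition in the disjunctive antecedent, keeping any move that
    does not decrease the score."""
    rng = random.Random(seed)
    current, best = set(), 0.0
    for _ in range(steps):
        move = (rng.randrange(n_attrs), rng.choice(values))
        cand = current ^ {move}          # toggle one disjunct
        s = score(cand, X, y)
        if s >= best and cand:
            current, best = cand, s
    return current, best

# hypothetical toy task: y = 1 exactly when attribute 0 equals 1
X = [(1, 0), (1, 1), (0, 0), (0, 1)]
y = [1, 1, 0, 0]
ant, best = hill_climb(X, y, n_attrs=2)
```

Population-based variants would maintain several antecedents at once; the chapter's finding is that, for low-complexity rules, this simple single-member loop is already competitive.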