
Showing papers on "Boolean function published in 1998"


Proceedings Article
01 Jan 1998
TL;DR: This study investigates the possibility of completely inferring a complex regulatory network architecture from input/output patterns of its variables using binary models of genetic networks, and finds the problem to be tractable within the conditions tested so far.
Abstract: Given the imminent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
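The mutual-information criterion at the heart of REVEAL can be sketched in a few lines (a hypothetical toy re-implementation, not the authors' C program; the network size, wiring, and all names below are illustrative assumptions): an input subset determines an element's next state exactly when its mutual information with the output equals the output entropy.

```python
# Toy REVEAL-style inference: recover each element's regulator set from a
# complete state-transition table of a random synchronous Boolean network.
from itertools import combinations, product
from collections import Counter
import math, random

def entropy(seq):
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

def mutual_info(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

random.seed(1)
n, k = 5, 2
# Random wiring and random Boolean rules (truth tables over k inputs).
wiring = [tuple(sorted(random.sample(range(n), k))) for _ in range(n)]
rules = [[random.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

def step(state):
    return tuple(rules[i][int("".join(str(state[j]) for j in wiring[i]), 2)]
                 for i in range(n))

# Complete state transition table (all 2^n input states).
pairs = [(s, step(s)) for s in product((0, 1), repeat=n)]

inferred = []
for i in range(n):  # infer the regulators of element i
    outs = [t[i] for _, t in pairs]
    h_out = entropy(outs)
    found = None
    for subset in combinations(range(n), k):
        ins = [tuple(s[j] for j in subset) for s, _ in pairs]
        if abs(mutual_info(ins, outs) - h_out) < 1e-9:
            found = subset  # these inputs fully determine the output
            break
    inferred.append(found)

print(inferred)
```

With a complete table the test is exact, as the abstract states; the paper's more interesting regime, incomplete tables, would only change how `pairs` is sampled.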

1,031 citations


Posted Content
TL;DR: In this article, it was shown that the exponential quantum speed-up obtained for partial functions (i.e., problems involving a promise on the input) by Deutsch and Jozsa and by Simon cannot be obtained for any total function: if a quantum algorithm computes a total Boolean function f with bounded error using T black-box queries, then there is a classical deterministic algorithm that computes f exactly with O(T^6) queries.
Abstract: We examine the number T of queries that a quantum network requires to compute several Boolean functions on {0,1}^N in the black-box model. We show that, in the black-box model, the exponential quantum speed-up obtained for partial functions (i.e. problems involving a promise on the input) by Deutsch and Jozsa and by Simon cannot be obtained for any total function: if a quantum algorithm computes some total Boolean function f with bounded error using T black-box queries, then there is a classical deterministic algorithm that computes f exactly with O(T^6) queries. We also give asymptotically tight characterizations of T for all symmetric f in the exact, zero-error, and bounded-error settings. Finally, we give new precise bounds for AND, OR, and PARITY. Our results are a quantum extension of the so-called polynomial method, which has been successfully applied in classical complexity theory, and also a quantum extension of results by Nisan about a polynomial relationship between randomized and deterministic decision tree complexity.

522 citations


Proceedings ArticleDOI
08 Nov 1998
TL;DR: This work examines the number T of queries that a quantum network requires to compute several Boolean functions on {0,1}^N in the black-box model and gives asymptotically tight characterizations of T for all symmetric f in the exact, zero-error, and bounded-error settings.
Abstract: We examine the number T of queries that a quantum network requires to compute several Boolean functions on {0,1}^N in the black-box model. We show that, in the black-box model, the exponential quantum speed-up obtained for partial functions (i.e. problems involving a promise on the input) by Deutsch and Jozsa and by Simon cannot be obtained for any total function: if a quantum algorithm computes some total Boolean function f with bounded error using T black-box queries then there is a classical deterministic algorithm that computes f exactly with O(T^6) queries. We also give asymptotically tight characterizations of T for all symmetric f in the exact, zero-error, and bounded-error settings. Finally, we give new precise bounds for AND, OR, and PARITY. Our results are a quantum extension of the so-called polynomial method, which has been successfully applied in classical complexity theory, and also a quantum extension of results by Nisan about a polynomial relationship between randomized and deterministic decision tree complexity.

356 citations


Proceedings Article
08 Nov 1998
TL;DR: The analysis of the algorithm relates two natural combinatorial quantities that can be measured with respect to a Boolean function; one being global to the function and the other being local to it.
Abstract: We present a (randomized) test for monotonicity of Boolean functions. Namely, given the ability to query an unknown function f: {0, 1}^n → {0, 1} at arguments of its choice, the test always accepts a monotone f, and rejects f with high probability if it is ε-far from being monotone (i.e., every monotone function differs from f on more than an ε fraction of the domain). The complexity of the test is poly(n/ε). The analysis of our algorithm relates two natural combinatorial quantities that can be measured with respect to a Boolean function; one being global to the function and the other being local to it. We also consider the problem of testing monotonicity based only on random examples labeled by the function. We show an Ω(√(2^n/ε)) lower bound on the number of required examples, and provide a matching upper bound (via an algorithm).
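The query-based tester described above can be sketched as an edge test (an illustrative toy version, not necessarily the paper's exact procedure; the function names are assumptions): sample a random edge of the hypercube and reject if the function decreases along it.

```python
# Edge test for monotonicity: a monotone f is always accepted; a function
# that is far from monotone is rejected with high probability.
import random

def monotonicity_test(f, n, trials):
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1          # the two endpoints of edge i
        if f(tuple(lo)) > f(tuple(hi)):
            return False             # witnessed a violated edge
    return True

majority = lambda x: int(sum(x) >= 2)   # monotone on {0,1}^3
parity   = lambda x: sum(x) % 2         # far from monotone

random.seed(0)
print(monotonicity_test(majority, 3, 200))  # True: monotone f never rejected
print(monotonicity_test(parity, 3, 200))    # False with high probability
```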

213 citations


Proceedings Article
01 Jul 1998
TL;DR: Evaluate, an algorithm for evaluating Quantified Boolean Formulae (QBFs), is proposed; QBFs extend propositional logic in a way such that many advanced forms of propositional reasoning can be easily formulated as evaluation of a QBF.
Abstract: The high computational complexity of advanced reasoning tasks such as belief revision and planning calls for efficient and reliable algorithms for reasoning problems harder than NP. In this paper we propose Evaluate, an algorithm for evaluating Quantified Boolean Formulae, a language that extends propositional logic in a way such that many advanced forms of propositional reasoning, e.g., reasoning about knowledge, can be easily formulated as evaluation of a QBF. Algorithms for evaluation of QBFs are suitable for the experimental analysis on a wide range of complexity classes, a property not easily found in other formalisms. Evaluate is based on a generalization of the Davis-Putnam procedure for SAT, and is guaranteed to work in polynomial space. Before presenting Evaluate, we discuss all the abstract properties of QBFs that we singled out to make the algorithm more efficient. We also briefly mention the main results of the experimental analysis, which is reported elsewhere.
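A minimal sketch of quantified Boolean formula evaluation (a hypothetical toy evaluator, not the paper's Evaluate procedure or its Davis-Putnam generalization): expand the leading quantifier and recurse, which uses only polynomial space in the number of variables.

```python
# Recursive QBF evaluation: 'E' = exists, 'A' = forall.
def eval_qbf(prefix, matrix, assignment=None):
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, matrix, {**assignment, v: b}) for b in (0, 1))
    return any(branches) if q == 'E' else all(branches)

# Example matrix: x XOR y.
phi = lambda a: (a['x'] ^ a['y']) == 1
# forall x exists y . (x XOR y) is true (pick y = 1 - x) ...
print(eval_qbf([('A', 'x'), ('E', 'y')], phi))  # True
# ... but exists y forall x . (x XOR y) is false.
print(eval_qbf([('E', 'y'), ('A', 'x')], phi))  # False
```

The quantifier order matters, which is exactly what lifts QBF above SAT in complexity.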

175 citations


Book ChapterDOI
01 Jan 1998
TL;DR: The ability of XCS to evolve optimal populations for boolean multiplexer problems is demonstrated using condensation, a technique in which evolutionary search is suspended by setting the crossover and mutation rates to zero, and a more complex but more robust and efficient technique for obtaining optimal populations called subset extraction is presented and compared to condensation.
Abstract: Wilson’s recent XCS classifier system forms complete mappings of the payoff environment in the reinforcement learning tradition thanks to its accuracy based fitness. According to Wilson’s Generalization Hypothesis, XCS has a tendency towards generalization. With the XCS Optimality Hypothesis, I suggest that XCS systems can evolve optimal populations (representations); populations which accurately map all input/action pairs to payoff predictions using the smallest possible set of non-overlapping classifiers. The ability of XCS to evolve optimal populations for boolean multiplexer problems is demonstrated using condensation, a technique in which evolutionary search is suspended by setting the crossover and mutation rates to zero. Condensation is automatically triggered by self-monitoring of performance statistics, and the entire learning process is terminated by autotermination. Combined, these techniques allow a classifier system to evolve optimal representations of boolean functions without any form of supervision. A more complex but more robust and efficient technique for obtaining optimal populations called subset extraction is also presented and compared to condensation.

148 citations


Book ChapterDOI
31 May 1998
TL;DR: A corpus of particular Boolean functions, the idempotents, is studied in order to construct functions which achieve the best possible tradeoffs between the fundamental cryptographic properties: balancedness, correlation-immunity, a high degree and a high nonlinearity.
Abstract: We study a corpus of particular Boolean functions: the idempotents. They enable us to construct functions which achieve the best possible tradeoffs between the fundamental cryptographic properties: balancedness, correlation-immunity, a high degree and a high nonlinearity (that is, a high distance from the affine functions). They all represent extremely secure cryptographic primitives to be implemented in stream ciphers.

137 citations


Book ChapterDOI
31 May 1998
TL;DR: The definitions for some cryptographic properties are generalised, providing a measure suitable for use as a fitness function in a genetic algorithm seeking balanced Boolean functions that satisfy both correlation immunity and the strict avalanche criterion.
Abstract: Advances in the design of Boolean functions using heuristic techniques are reported. A genetic algorithm capable of generating highly nonlinear balanced Boolean functions is presented. Hill climbing techniques are adapted to locate balanced, highly nonlinear Boolean functions that also almost satisfy correlation immunity. The definitions for some cryptographic properties are generalised, providing a measure suitable for use as a fitness function in a genetic algorithm seeking balanced Boolean functions that satisfy both correlation immunity and the strict avalanche criterion. Results are presented demonstrating the effectiveness of the methods.
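A fitness measure central to such heuristic searches is nonlinearity, computable from the Walsh-Hadamard spectrum as NL(f) = 2^(n-1) - max|W_f|/2. The sketch below (illustrative; not the paper's code) computes it for a truth table:

```python
# Nonlinearity of an n-variable Boolean function via the fast
# Walsh-Hadamard transform of its +/-1-valued truth table.
def walsh_spectrum(tt):
    w = [1 - 2 * b for b in tt]        # 0/1 -> +1/-1
    h = 1
    while h < len(w):                  # in-place fast WHT
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                w[j], w[j + h] = w[j] + w[j + h], w[j] - w[j + h]
        h *= 2
    return w

def nonlinearity(tt):
    return len(tt) // 2 - max(abs(v) for v in walsh_spectrum(tt)) // 2

# 3-variable majority: balanced, with nonlinearity 2.
maj = [0, 0, 0, 1, 0, 1, 1, 1]
print(nonlinearity(maj))          # 2
print(nonlinearity([0] * 8))      # 0: constant functions are affine
```

A genetic algorithm of the kind reported would simply maximize `nonlinearity` (possibly combined with balancedness and correlation-immunity penalties) over candidate truth tables.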

129 citations


Journal ArticleDOI
01 May 1998
TL;DR: This work identifies two classes of Boolean functions that have been used: positive and definite functions, and systematically investigates their implementation for dependency analyses and shows that both are closed under existential quantification.
Abstract: Many static analyses for declarative programming/database languages use Boolean functions to express dependencies among variables or argument positions. Examples include groundness analysis, arguably the most important analysis for logic programs, finiteness analysis and functional dependency analysis for databases. We identify two classes of Boolean functions that have been used: positive and definite functions, and we systematically investigate these classes and their efficient implementation for dependency analyses. On the theoretical side, we provide syntactic characterizations and study the expressiveness and algebraic properties of the classes. In particular, we show that both are closed under existential quantification. On the practical side, we investigate various representations for the classes based on reduced ordered binary decision diagrams (ROBDDs), disjunctive normal form, conjunctive normal form, Blake canonical form, dual Blake canonical form, and a form specific to definite functions. We compare the resulting implementations of groundness analyzers based on the representations for precision and efficiency.

105 citations


Journal ArticleDOI
TL;DR: A novel formulation of both routing and routability estimation that relies on a rendering of the routing constraints as a single large Boolean equation to represent all possible routes for all nets simultaneously.
Abstract: Guaranteeing or even estimating the routability of a portion of a placed field programmable gate array (FPGA) remains difficult or impossible in most practical applications. In this paper, we develop a novel formulation of both routing and routability estimation that relies on a rendering of the routing constraints as a single large Boolean equation. Any satisfying assignment to this equation specifies a complete detailed routing. By representing the equation as a binary decision diagram (BDD), we represent all possible routes for all nets simultaneously. Routability estimation is transformed to Boolean satisfiability, which is trivial for BDD's. We use the technique in the context of a perfect routability estimator for a global router. Experimental results from a standard FPGA benchmark suite suggest the technique is feasible for realistic circuits, but refinements are needed for very large designs.

100 citations


Proceedings Article
01 Jan 1998
TL;DR: A general strategy to generate Boolean genetic networks that incorporate all relevant biochemical and physiological parameters and cover all of their regulatory interactions in a deterministic manner is described.
Abstract: In this paper we show how Boolean genetic networks could be used to address complex problems in cancer biology. First, we describe a general strategy to generate Boolean genetic networks that incorporate all relevant biochemical and physiological parameters and cover all of their regulatory interactions in a deterministic manner. Second, we introduce 'realistic Boolean genetic networks' that produce time series measurements very similar to those detected in actual biological systems. Third, we outline a series of essential questions related to cancer biology and cancer therapy that could be addressed by the use of 'realistic Boolean genetic network' modeling.

Proceedings ArticleDOI
15 Apr 1998
TL;DR: This paper describes and evaluates methods for implementing formula-specific Boolean satisfiability (SAT) solver circuits in configurable hardware and demonstrates promising performance speedups on an important and complex problem with extensive applications in the CAD and AI communities.
Abstract: This paper describes and evaluates methods for implementing formula-specific Boolean satisfiability (SAT) solver circuits in configurable hardware. Starting from a general template design, our approach automatically generates VHDL for a circuit that is specific to the particular Boolean formula being solved. Such an approach tightly customizes the circuit to a particular problem instance. Thus, it represents an ideal use for dynamically-reconfigurable hardware, since it would be impractical to fabricate an ASIC for each Boolean formula being solved. Our approach also takes advantage of direct gate mappings and large degrees of fine-grained parallelism in the algorithm's Boolean logic evaluations. We compile our designs to two hardware targets: an IKOS logic emulation system, and Digital SRC's Pamette configurable computing board. Performance evaluations on the DIMACS SAT benchmark suite indicate that our approach offers speedups from 17X to more than a thousand times. Overall, this SAT solver demonstrates promising performance speedups on an important and complex problem with extensive applications in the CAD and AI communities.

Book ChapterDOI
01 Jan 1998
TL;DR: The main objective of this chapter is to discuss different approaches to searching for optimal approximation spaces; different constructions of approximation spaces are described, and the problems of attribute and object selection are discussed.
Abstract: The main objective of this chapter is to discuss different approaches to searching for optimal approximation spaces. Basic notions concerning the rough set concept based on generalized approximation spaces are presented. Different constructions of approximation spaces are described. The problems of attribute and object selection are discussed.

Journal ArticleDOI
TL;DR: This paper addresses a fundamental problem related to the induction of Boolean logic by establishing a Boolean function (or an extension) f so that f is true (resp., false) in every given true (resp., false) vector.
Abstract: In this paper, we address a fundamental problem related to the induction of Boolean logic: Given a set of data, represented as a set of binary “true n-vectors” (or “positive examples”) and a set of “false n-vectors” (or “negative examples”), we establish a Boolean function (or an extension) f, so that f is true (resp., false) in every given true (resp., false) vector. We shall further require that such an extension belongs to a certain specified class of functions, e.g., class of positive functions, class of Horn functions, and so on. The class of functions represents our a priori knowledge or hypothesis about the extension f, which may be obtained from experience or from the analysis of mechanisms that may or may not cause the phenomena under consideration. The real-world data may contain errors, e.g., measurement and classification errors might come in when obtaining data, or there may be some other influential factors not represented as variables in the vectors. In such situations, we have to give up the goal of establishing an extension that is perfectly consistent with the given data, and we are satisfied with an extension f having the minimum number of misclassifications. Both problems, i.e., the problem of finding an extension within a specified class of Boolean functions and the problem of finding a minimum error extension in that class, will be extensively studied in this paper. For certain classes we shall provide polynomial algorithms, and for other cases we prove their NP-hardness.
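For the class of positive (monotone) functions, the extension problem has a simple polynomial-time solution, which the sketch below illustrates (an illustrative construction, not the paper's algorithms): a positive extension exists iff no true vector is componentwise below a false vector, and then f(x) = 1 iff x dominates some true vector.

```python
# Positive-extension test for partially defined Boolean functions.
def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def positive_extension(true_vecs, false_vecs):
    if any(leq(t, v) for t in true_vecs for v in false_vecs):
        return None                      # no positive extension exists
    return lambda x: int(any(leq(t, x) for t in true_vecs))

T = [(1, 1, 0), (0, 1, 1)]   # positive examples
F = [(1, 0, 0), (0, 0, 1)]   # negative examples
f = positive_extension(T, F)
print([f(t) for t in T], [f(v) for v in F])  # [1, 1] [0, 0]
print(f((1, 1, 1)))                          # 1: dominates (1,1,0)
```

The minimum-error variant studied in the paper is harder; this sketch only covers the consistent case.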

Journal ArticleDOI
TL;DR: A function with small size unbounded weight threshold-AND circuits for which all threshold-XOR circuits have exponentially many nodes is presented, which answers the basic question of separating subsets of the hypercube by hypersurfaces induced by sparse real polynomials.
Abstract: We investigate the computational power of threshold-AND circuits versus threshold-XOR circuits. In contrast to the observation that small weight threshold-AND circuits can be simulated by small weight threshold-XOR circuits, we present a function with small size unbounded weight threshold-AND circuits for which all threshold-XOR circuits have exponentially many nodes. This answers the basic question of separating subsets of the hypercube by hypersurfaces induced by sparse real polynomials. We prove our main result by a new lower bound argument for threshold circuits. Finally, we show that unbounded weight threshold gates cannot simulate alternation: there are AC^{0,3}-functions which need exponential size threshold-AND circuits.

Proceedings ArticleDOI
01 May 1998
TL;DR: A new algorithm for approximation is presented and its performance in comparison with existing techniques is analyzed, and a new decomposition algorithm is introduced that produces balanced partitions.
Abstract: Efficient techniques for the manipulation of Binary Decision Diagrams (BDDs) are key to the success of formal verification tools. Recent advances in reachability analysis and model checking algorithms have emphasized the need for efficient algorithms for the approximation and decomposition of BDDs. In this paper we present a new algorithm for approximation and analyze its performance in comparison with existing techniques. We also introduce a new decomposition algorithm that produces balanced partitions. The effectiveness of our contributions is demonstrated by improved results in reachability analysis for some hard problem instances.

Journal ArticleDOI
TL;DR: It follows that the co-occurrence graph of the dual of a positive Boolean function can always be generated in time polynomial in the size of the function.
Abstract: Given a positive Boolean function f and a subset δ of its variables, we give a combinatorial condition characterizing the existence of a prime implicant D̂ of the Boolean dual f^d of f having the property that every variable in δ appears in D̂. We show that the recognition of this property is an NP-complete problem, suggesting an inherent computational difficulty of Boolean dualization, independently of the size of the dual function. Finally, it is shown that if the cardinality of δ is bounded by a constant, then the above recognition problem is polynomial. In particular, it follows that the co-occurrence graph of the dual of a positive Boolean function can always be generated in time polynomial in the size of the function.

Journal Article
TL;DR: The first non-trivial time-space tradeoff lower bound for functions f: {0,1}^n → {0,1} on general branching programs was obtained in this paper.
Abstract: We obtain the first non-trivial time-space tradeoff lower bound for functions f: {0,1}^n → {0,1} on general branching programs by exhibiting a Boolean function f that requires exponential size to be computed by any branching program of length (1+ε)n, for some constant ε > 0. We also give the first separation result between the syntactic and semantic read-k models for k > 1 by showing that polynomial-size semantic read-twice branching programs can compute functions that require exponential size on any syntactic read-k branching program. We also show a time-space tradeoff result on the more general R-way branching program model: for any k, we give a function that requires exponential size to be computed by length kn q-way branching programs, for some q = q(k).

Proceedings ArticleDOI
08 Nov 1998
TL;DR: A simple algorithm is described that achieves error at most 1/2 - Ω(1/√n), improving on the previous best bound of 1/2 - Ω((log^2 n)/n), and it is proved that no algorithm, given a polynomial number of samples, can guarantee error 1/2 - ω((log n)/√n).
Abstract: We consider the problem of learning monotone Boolean functions over {0, 1}^n under the uniform distribution. Specifically, given a polynomial number of uniform random samples for an unknown monotone Boolean function f, and given polynomial computation time, we would like to approximate f as well as possible. We describe a simple algorithm that we prove achieves error at most 1/2 - Ω(1/√n), improving on the previous best bound of 1/2 - Ω((log^2 n)/n). We also prove that no algorithm, given a polynomial number of samples, can guarantee error 1/2 - ω((log n)/√n), improving on the previous best hardness bound of O(1/√n). These lower bounds hold even if the learning algorithm is allowed membership queries. Thus this paper settles to an O(log n) factor the question of the best achievable error for learning the class of monotone Boolean functions with respect to the uniform distribution.

Journal ArticleDOI
TL;DR: Upper bounds on rates of approximation of real-valued functions of d Boolean variables by one-hidden-layer perceptron networks are given and sets of functions where these norms grow either polynomially or exponentially with d are described.

Proceedings ArticleDOI
01 May 1998
TL;DR: This work shows how a canonical representative for each NPN equivalence class can be computed efficiently and how it can be used for matching a boolean function against a set of library functions.
Abstract: Boolean matching tackles the problem whether a subcircuit of a boolean network can be substituted by a cell from a cell library. In previous approaches [7, 10, 8] each pair of a subcircuit and a cell is tested for NPN equivalence. This becomes very expensive if the cell library is large. In our approach the time complexity for matching a subcircuit against a library L is almost independent of the size of L. CPU time also remains small for matching a subcircuit against the huge set of functions obtained by bridging and fixing cell inputs; but the use of these functions in technology mapping is very profitable. Our method is based on a canonical representative for each NPN equivalence class. We show how this representative can be computed efficiently and how it can be used for matching a boolean function against a set of library functions.
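The canonical-representative idea can be illustrated by brute force for tiny n (a toy sketch, unlike the efficient computation the paper describes; all names are assumptions): the NPN representative of f is the smallest truth table reachable by permuting inputs, negating inputs, and negating the output, so two functions match iff their representatives coincide.

```python
# Brute-force NPN canonical representative for small truth tables.
from itertools import permutations, product

def npn_canonical(tt, n):
    best = None
    for perm in permutations(range(n)):
        for neg in product((0, 1), repeat=n):
            new = []
            for idx in range(2 ** n):
                bits = [(idx >> (n - 1 - i)) & 1 for i in range(n)]
                # apply input permutation and negation
                src = [bits[perm[i]] ^ neg[i] for i in range(n)]
                j = int("".join(map(str, src)), 2)
                new.append(tt[j])
            for out in (new, [1 - b for b in new]):  # output negation
                key = tuple(out)
                best = key if best is None else min(best, key)
    return best

# AND and NOR are NPN-equivalent (negate both inputs); AND and XOR are not.
AND = [0, 0, 0, 1]; NOR = [1, 0, 0, 0]; XOR = [0, 1, 1, 0]
print(npn_canonical(AND, 2) == npn_canonical(NOR, 2))  # True
print(npn_canonical(AND, 2) == npn_canonical(XOR, 2))  # False
```

This makes library lookup a single hash-table probe on the representative, which is why matching cost becomes almost independent of library size.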

Proceedings ArticleDOI
23 May 1998
TL;DR: This work gives a characterization of span program size by a combinatorial-algebraic measure and identifies a property of bipartite graphs that is sufficient for constructing Boolean functions with large monotone span program complexity.
Abstract: We give a characterization of span program size by a combinatorial-algebraic measure. The measure we consider is a generalization of a measure on covers which has been used to prove lower bounds on formula size and has also been studied with respect to communication complexity. In the monotone case our new methods yield n^{Ω(log n)} lower bounds for the monotone span program complexity of explicit Boolean functions in n variables over arbitrary fields, improving the previous lower bounds on monotone span program size. Our characterization of span program size implies that any matrix with superpolynomial separation between its rank and cover number can be used to obtain superpolynomial lower bounds on monotone span program size. We also identify a property of bipartite graphs that is sufficient for constructing Boolean functions with large monotone span program complexity.

Journal ArticleDOI
TL;DR: The authors' function-decomposition method can discover and construct a hierarchy of new features that one can add to the original dataset or transform into a hierarchy of less complex datasets.
Abstract: The function decomposition described can identify subsets of existing features and discover nongiven functions that map these subsets to a new feature; it can also organize the existing and new features into a hierarchy. The authors demonstrate their Hierarchy Induction Tool (HINT) system on a housing loan-allocation application. Methods for switching circuit design often implicitly deal with feature transformation. Such methods construct a circuit to implement a given or partially given tabulated Boolean function. The authors' function-decomposition method can discover and construct a hierarchy of new features that one can add to the original dataset or transform into a hierarchy of less complex datasets. The method allows the decomposition to deal with nominal-feature (that is, not necessarily binary) functions.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: A complete implementation of SPFDs using BDDs is provided and applied to the optimization of Boolean networks; results on benchmark circuits are very favorable.
Abstract: S. Yamashita et al. (1996) introduced a new category for expressing the flexibility that a node can have in a multi-level network. Originally presented in the context of FPGA synthesis, the paper has wider implications which were discussed by R.K. Brayton (1997). SPFDs are essentially a set of incompletely specified functions. The increased flexibility that they offer is obtained by allowing both a node to change as well as its immediate fanins. The challenge with SPFDs is: (1) to compute them in an efficient way, and (2) to use their increased flexibility in a controlled way to optimize a circuit. We provide a complete implementation of SPFDs using BDDs and apply it to the optimization of Boolean networks. Two scenarios are presented, one which trades literals for wires and the other rewires the network by replacing one fanin at a node by a new fanin. Results on benchmark circuits are very favorable.

Journal Article
TL;DR: It is proved that almost every Boolean function (almost every balanced Boolean function) satisfies all above mentioned criteria on levels very close to optimal and therefore can be considered to be cryptographically strong.
Abstract: Boolean functions used in cryptographic applications have to satisfy various cryptographic criteria. Although the choice of the criteria depends on the cryptosystem in which they are used, there are some properties (balancedness, nonlinearity, high algebraic degree, correlation immunity, propagation criteria) which a cryptographically strong Boolean function ought to have. We study the above mentioned properties in the set of all Boolean functions (all balanced Boolean functions) and prove that almost every Boolean function (almost every balanced Boolean function) satisfies all above mentioned criteria on levels very close to optimal and therefore can be considered to be cryptographically strong.

Journal ArticleDOI
TL;DR: This paper presents methods for the construction of small OKFDD's based on dynamic variable ordering and decomposition-type choice, and uses an efficient reordering-based method for changing the decomposition type.
Abstract: Ordered Kronecker functional decision diagrams (OKFDD's) are a data structure for efficient representation and manipulation of Boolean functions. OKFDD's are a generalization of ordered binary decision diagrams (OBDD's) and ordered functional decision diagrams and thus combine the advantages of both. In this paper, basic properties of OKFDD's and their efficient representation and manipulation are given. Starting with elementary manipulation algorithms, we present methods for the construction of small OKFDD's. Our approach is based on dynamic variable ordering and decomposition-type choice. For changing the decomposition type, we use an efficient reordering-based method. We briefly discuss the implementation of PUMA, an OKFDD package, which was used in all our experiments. These experiments demonstrate the quality of our methods in comparison to sifting and interleaving for OBDD's.

Proceedings ArticleDOI
04 Jan 1998
TL;DR: This paper presents a new technique, where the focus is on improving the equivalence check itself, thereby making it more robust in the absence of circuit similarity, based on tight integration of a Boolean Satisfiability Checker with BDDs.
Abstract: There has been much interest in techniques which combine the advantages of function-based methods, such as BDDs, with structure-based methods, such as ATPG, for verifying the equivalence of combinational circuits. However, most existing efforts have focused on exploiting circuit similarity through use of learning and/or ATPG-based methods rather than on making the integration between BDDs and ATPG techniques efficient. This paper presents a new technique, where the focus is on improving the equivalence check itself, thereby making it more robust in the absence of circuit similarity. It is based on tight integration of a Boolean Satisfiability Checker with BDDs, whereby BDDs are effectively used to reduce both the problem size and the number of backtracks for the satisfiability problem. This methodology does not preclude exploitation of circuit similarity, when it exists, since the improved check can be easily incorporated as the inner loop of the well-known iterative framework involving search and replacement of internally equivalent nodes. We demonstrate the significance of our contributions with practical results on the ISCAS benchmark circuits.

Proceedings ArticleDOI
23 Feb 1998
TL;DR: This work presents novel encoding schemes for Petri nets by using algebraic techniques to analyze the topology of the net, which allows one to drastically decrease the number of variables for state encoding and reduce memory and CPU requirements significantly.
Abstract: Petri nets are a graph-based formalism appropriate to model concurrent systems such as asynchronous circuits or network protocols. Symbolic techniques based on Binary Decision Diagrams (BDDs) have emerged as one of the strategies to overcome the state explosion problem in the analysis of systems modeled by Petri nets. The existing techniques for state encoding use a variable-per-place strategy that leads to encoding schemes with very low density. This drawback has been partially mitigated by using Zero-Suppressed BDDs, that provide a typical reduction of BDD sizes by a factor of two. This work presents novel encoding schemes for Petri nets. By using algebraic techniques to analyze the topology of the net, sets of places "structurally related" can be derived and encoded by only using a logarithmic number of Boolean variables. Such an approach allows one to drastically decrease the number of variables for state encoding and reduce memory and CPU requirements significantly.

Book ChapterDOI
25 Feb 1998
TL;DR: In this paper, it was shown that there is no polynomial time approximation scheme for the variable ordering problem unless P = NP, and a small lower bound on the performance ratio of a polynomial time approximation algorithm under the assumption P ≠ NP.
Abstract: The size of Ordered Binary Decision Diagrams (OBDDs) is determined by the chosen variable ordering. A poor choice may cause an OBDD to be too large to fit into the available memory. The decision variant of the variable ordering problem is known to be NP-complete. We strengthen this result by showing that there is no polynomial time approximation scheme for the variable ordering problem unless P = NP. We also prove a small lower bound on the performance ratio of a polynomial time approximation algorithm under the assumption P ≠ NP.
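Why the ordering matters so much can be seen on a small example (an illustration of the folklore fact, not taken from the paper): the number of distinct subfunctions at each level bounds OBDD width, and for f = x1y1 + x2y2 + x3y3 an interleaved order stays small while the separated order blows up.

```python
# Count distinct subfunctions per level (an upper bound on OBDD width)
# under two variable orderings.
from itertools import product

def obdd_width(f, nvars, order):
    widths = []
    for level in range(nvars + 1):
        subfuncs = set()
        for fixed in product((0, 1), repeat=level):
            assign = dict(zip(order[:level], fixed))
            rest = order[level:]
            table = tuple(f({**assign, **dict(zip(rest, tail))})
                          for tail in product((0, 1), repeat=len(rest)))
            subfuncs.add(table)
        widths.append(len(subfuncs))
    return max(widths)

f = lambda a: (a['x1'] & a['y1']) | (a['x2'] & a['y2']) | (a['x3'] & a['y3'])
good = ['x1', 'y1', 'x2', 'y2', 'x3', 'y3']   # interleaved
bad  = ['x1', 'x2', 'x3', 'y1', 'y2', 'y3']   # separated
print(obdd_width(f, 6, good), obdd_width(f, 6, bad))  # 3 8
```

With m blocks instead of 3, the separated order needs width 2^m, which is the classical worst case motivating ordering heuristics.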

Proceedings ArticleDOI
23 Feb 1998
TL;DR: This work presents a method for functional decomposition with a novel concept for the exploitation of don't cares thereby combining two essential goals: the minimization of the number of decomposition functions in the current decomposition step and the extraction of common subfunctions for multi-output Boolean functions.
Abstract: Functional decomposition is an important technique in logic synthesis, especially for the design of lookup table based FPGA architectures. We present a method for functional decomposition with a novel concept for the exploitation of don't cares, thereby combining two essential goals: the minimization of the number of decomposition functions in the current decomposition step and the extraction of common subfunctions for multi-output Boolean functions. The exploitation of symmetries of Boolean functions plays an important role in our algorithm as a means to minimize the number of decomposition functions not only for the current decomposition step but also for the (recursive) decomposition algorithm as a whole. Experimental results prove the effectiveness of our approach.
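The core quantity behind counting decomposition functions can be sketched as follows (an illustrative subroutine, not the paper's algorithm; function and variable names are assumptions): the column multiplicity μ of the decomposition chart for a bound set B determines the number of decomposition functions, r = ⌈log2 μ⌉, in f = g(h1..hr(B), F).

```python
# Column multiplicity of a decomposition chart for f with a given
# bound set (rows indexed by free-set assignments form the columns).
from itertools import product
import math

def column_multiplicity(f, bound, free):
    cols = set()
    for b in product((0, 1), repeat=len(bound)):
        col = tuple(f(dict(zip(bound, b), **dict(zip(free, fr))))
                    for fr in product((0, 1), repeat=len(free)))
        cols.add(col)
    return len(cols)

# f = (a XOR b) AND c: bound set {a, b} yields only 2 distinct columns,
# so a single decomposition function h = a XOR b suffices.
f = lambda v: (v['a'] ^ v['b']) & v['c']
mu = column_multiplicity(f, ['a', 'b'], ['c'])
print(mu, math.ceil(math.log2(mu)))  # 2 1
```

Minimizing μ over bound sets (and over don't-care fillings, as the paper does) directly minimizes the number of LUTs needed for the decomposed block.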