Majority-Inverter Graph: A New Paradigm for
Logic Optimization
Luca Amarù, Student Member, IEEE, Pierre-Emmanuel Gaillardon, Member, IEEE,
Giovanni De Micheli, Fellow, IEEE
(The authors are with the Integrated Systems Laboratory, Swiss Federal Institute of Technology, Lausanne (EPFL), 1015 Lausanne, Switzerland; e-mail: name.surname@epfl.ch.)
Abstract—In this paper, we propose a paradigm shift in
representing and optimizing logic by using only majority (MAJ)
and inversion (INV) functions as basic operations. We represent
logic functions by Majority-Inverter Graph (MIG): a directed
acyclic graph consisting of three-input majority nodes and regu-
lar/complemented edges. We optimize MIGs via a new Boolean
algebra, based exclusively on majority and inversion operations,
that we formally axiomatize in this work. As a complement
to MIG algebraic optimization, we develop powerful Boolean
methods exploiting global properties of MIGs, such as bit-error
masking. MIG algebraic and Boolean methods together attain
very high optimization quality. Considering the set of IWLS’05
benchmarks, our MIG optimizer (MIGhty) enables a 7% depth
reduction in LUT-6 circuits mapped by ABC while also reducing
size and power activity, with respect to similar AIG optimization.
Focusing on arithmetic intensive benchmarks instead, MIGhty
enables a 16% depth reduction in LUT-6 circuits mapped by
ABC, again with respect to similar AIG optimization. Employed
as front-end to a delay-critical 22-nm ASIC flow (logic synthesis
+ physical design) MIGhty reduces the average delay/area/power
by 13%/4%/3%, respectively, over 31 academic and industrial
benchmarks. We also demonstrate delay/area/power improve-
ments by 10%/10%/5% for a commercial FPGA flow.
Index Terms—Design methods and tools, Optimization, Majority Logic, Boolean Algebra, DAG, Logic Synthesis.
I. INTRODUCTION
Nowadays, Electronic Design Automation (EDA) tools
are challenged by design goals at the frontier of what is
achievable in advanced technologies. In this scenario, extend-
ing the optimization capabilities of logic synthesis tools is of
paramount importance.
In this paper, we propose a paradigm shift in representing
and optimizing logic, by using only majority (MAJ) and
inversion (INV) as basic operations. We use the terms in-
version and complementation interchangeably. We focus on
majority functions because they lie at the core of Boolean
function classification [1]. Thanks to that, majority inher-
its the expressive power from many other function classes.
Together with inversion, majority can express all Boolean
functions. Based on these primitives, we present in this work
the Majority-Inverter Graph (MIG), a logic representation
structure consisting of three-input majority nodes and regu-
lar/complemented edges. MIGs include any AND/OR/Inverter
Graphs (AOIGs), which in turn contain the popular AIGs [2]. To
provide native manipulation of MIGs, we introduce a novel
Boolean algebra, based exclusively on majority and inversion
operations [3]. We define a set of five transformations forming
a sound and complete axiomatic system. Using a sequence
of these primitive axioms, it is possible to manipulate ef-
ficiently a MIG and reach all points in the representation
space. We apply MIG algebra axioms locally, to design fast
and efficient MIG algebraic optimization methods. We also
exploit global properties of MIGs to design slower but very
effective MIG Boolean optimization methods [4]. Specifically,
we take advantage of the error masking property of majority
operators. By selectively inserting logic errors in a MIG,
successively masked by majority nodes, we enable strong
simplifications in logic networks. MIG algebraic and Boolean
methods together attain very high optimization quality. For
example, when targeting depth reduction, our MIG optimizer,
MIGhty, transforms a ripple-carry structure into a carry-lookahead-like
one. Considering the set of IWLS’05 benchmarks,
MIGhty enables a 7% depth reduction in LUT-6 circuits
mapped by ABC [2] while also reducing size and power
activity, with respect to similar AIG optimization. Focusing on
arithmetic intensive benchmarks, MIGhty enables a 16% depth
reduction in LUT-6 circuits, again with respect to similar AIG
optimization. Employed as front-end to a delay-critical 22-
nm ASIC flow MIGhty reduces the average delay/area/power
by 13%/4%/3%, respectively, over academic and industrial
benchmarks, as compared to a leading commercial ASIC flow.
We demonstrate improvements in delay/area/power metrics by
10%/10%/5% for a commercial 28-nm FPGA flow.
The remainder of this paper is organized as follows. Section
II gives background on logic representation and optimization.
Section III presents MIGs with their properties and associ-
ated Boolean algebra. Section IV proposes MIG algebraic
optimization methods and Section V describes MIG Boolean
optimization methods. Section VI shows experimental results
for MIG-based optimization and compares them to the state-
of-the-art academic tools. Section VI also shows results for
MIG-based optimization employed as front-end to commercial
ASIC and FPGA design flows. Section VII concludes the paper.
II. BACKGROUND AND MOTIVATION
This section presents first a background on logic represen-
tation and optimization for logic synthesis. Then, it introduces
the necessary notations and definitions for this work.
A. Logic Representation
The (efficient) way logic functions are represented in EDA
tools is key to designing efficient hardware. On the one hand,
logic representations aim at having the fewest number of
primitive elements (literals, sum-of-product terms, nodes in
a logic network, etc.) in order to (i) have a small memory
footprint and (ii) be covered by as few library elements as
possible. On the other hand, logic representation forms must be
simple enough to manipulate. This may require having a larger
number of primitive elements but with simpler manipulation
laws. The choice of a computer data structure is thus a trade-off
between compactness and ease of manipulation.

In the early days of EDA, the standard representation form
for logic was the Sum Of Product (SOP) form, i.e., a dis-
junction (OR) of conjunctions (AND) made of literals [5]. This
standard was driven by PLA technology whose functionality
is naturally modeled by a SOP [6]. Other two-level forms,
such as product-of-sums or EX-SOP, have been studied at that
time [17]. Two-level logic is compact for small-sized functions
but, beyond that size, it becomes too large to be efficiently
mapped into silicon. Yet, two-level logic has been supported
by efficient heuristic and exact optimization algorithms. With
the advent of VLSI, the standard representation for logic
moved from SOP to Directed Acyclic Graphs (DAGs) [7]. In
a DAG-based logic representation, nodes correspond to logic
functions (gates) and directed edges (wires) connect the nodes.
Nodes’ functions can be internally represented by SOPs lever-
aging the proven efficiency of two-level optimization. From
a global perspective, general optimization procedures run on
the entire DAG. While being potentially very compact, DAGs
without bounds on the nodes’ functionality are not easy to
optimize. This is because such a representation demands
that optimization techniques deal with all possible types and
sizes of functions, which is impractical. On top of that, the
cumulative memory footprint for each functionally unbounded
node is potentially very large. Restricting the permissible node
function types alleviates this issue. In the extreme case, one
can focus on just one type of function per node and add
complemented/regular attributes to the edges. Even though, in
principle, this restriction increases the representation size, in
practice it unlocks better (smaller) representations because it
supports more effective logic optimization that simplifies the DAG.
A notable example of a DAG where all nodes realize the
same function is the Binary Decision Diagram (BDD) [11]. In
BDDs, nodes act as 2:1 multiplexers. With an additional restriction
on the ordering of input variables, BDDs are canonical
and provide very efficient manipulation procedures. For this
reason, BDDs have found application in various areas of EDA, such
as verification, testing, optimization, and automated reasoning
[5]. However, the price for such manipulation
efficiency is the BDD size, which is often too large for direct
mapping into silicon. Even though BDDs are not usually
mapped directly into silicon, they support logic manipulation
tasks in various optimization algorithms [9].
Another DAG where all nodes realize the same function is
the And-Inverter Graph (AIG) [2], [10] where nodes act as
two-input ANDs. AIGs can be optimized through traditional
Boolean algebra axioms and derived theorems. Iterated over
the whole AIG, local transformations produce very effective
results and scale well with the size of the circuits. This means
that, overall, AIGs can be made remarkably small through
logic optimization. For this reason, AIG is one of the current
representation standards for logic synthesis.
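To make the flavor of such transformations concrete, the following short Python sketch (our own illustration, not code from the paper or from any cited tool) shows the effect of the associativity axiom that AIG balancing relies on: a depth-3 chain of two-input ANDs is rebalanced into a depth-2 tree, and exhaustive simulation confirms that the function is unchanged.

    # Illustrative sketch: associativity-based rebalancing of two-input ANDs.
    from itertools import product

    def AND(x, y):              # one AIG node
        return x & y

    def chained(a, b, c, d):    # depth 3: a.(b.(c.d))
        return AND(a, AND(b, AND(c, d)))

    def balanced(a, b, c, d):   # depth 2: (a.b).(c.d)
        return AND(AND(a, b), AND(c, d))

    # Exhaustive check that rebalancing preserves the function.
    assert all(chained(*v) == balanced(*v) for v in product((0, 1), repeat=4))
    print("rebalancing preserves functionality while reducing depth from 3 to 2")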
B. Logic Optimization
Logic optimization consists of manipulating a logic rep-
resentation structure in order to minimize some target
metric. Usual optimization targets are size (number of
nodes/elements), depth (maximum number of levels), inter-
connections (number of edges/nets), etc.
Logic optimization methods are closely coupled to the
data structures they run on. In two-level logic representation
(SOP), optimization aims at reducing the number of terms.
ESPRESSO is the main optimization tool for SOP [6]. Its
algorithms operate on SOP cubes and manipulate the ON-,
OFF- and DC-covers iteratively. In its default settings,
ESPRESSO uses fast heuristics and does not guarantee reaching
the global optimum. However, an exact optimization of
two-level logic is available (under the name ESPRESSO-exact)
and often runs in a reasonable time. The exact two-level
optimization is based on the Quine–McCluskey algorithm [18].
Moving to DAG logic representation (also called multi-level
logic), optimization aims at reducing graph size and depth or
other accepted complexity metrics. There, DAG-based logic
optimization methods are divided into two groups: algebraic
methods, which are fast, and Boolean methods, which are
slower but may achieve better results [21]. Traditional al-
gebraic methods assume that DAG nodes are represented in
SOP form and treat them as polynomials [7], [19]. Algebraic
operations are selectively iterated over all DAG nodes, until
no improvement is possible. Basic algebraic operations are
extraction, decomposition, factoring, balancing and substitu-
tion [20], [21]. Their efficient runtime is enabled by theories
of weak-division and kernel extraction. In contrast, Boolean
methods do not treat the functions as polynomials but handle
their true Boolean nature using Boolean identities as well
as (global) don’t cares (circuit flexibilities) to get a better
solution [5], [21], [24]–[26]. Boolean division and substi-
tution techniques trade off runtime for better minimization
quality. Functional decomposition is another Boolean method
which aims at representing the original function by means of
simpler component functions. The first attempts at functional
decomposition [27]–[29] make use of decomposition charts to
find the best component functions. Since the decomposition
charts grow exponentially with the number of variables, these
techniques are only applicable to small functions. A different,
and more scalable, approach to functional decomposition is
based on the BDD data structure. A particular class of BDD
nodes, called dominator nodes, highlights advantageous func-
tional decomposition points [9]. BDD decomposition can be
applied recursively and is capable of exploiting optimization
opportunities not visible by algebraic counterparts [9], [22],
[23]. Recently, disjoint support decomposition has also been
considered to optimize small functions locally and to speed up
logic manipulation [30], [31]. It is worth mentioning that the
main difficulty in developing Boolean algorithms is due to the
unrestricted space of choices, which makes it harder to take
good decisions during functional decomposition.
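As a concrete reference point for the algebraic methods mentioned above, the short Python sketch below (our own illustration; the function name weak_divide is hypothetical and does not come from any cited tool) performs cube-level weak division of an SOP by a single-cube divisor, the operation that underlies factoring and kernel extraction.

    # Illustrative sketch: weak division of an SOP by a single-cube divisor.
    # Cubes are frozensets of literals, e.g. frozenset({"a", "b'"}).
    def weak_divide(sop, divisor):
        quotient, remainder = [], []
        for cube in sop:
            if divisor <= cube:                  # all divisor literals appear in the cube
                quotient.append(cube - divisor)
            else:
                remainder.append(cube)
        return quotient, remainder

    # f = ab + ac + d, divided by the cube {a}:
    f = [frozenset({"a", "b"}), frozenset({"a", "c"}), frozenset({"d"})]
    q, r = weak_divide(f, frozenset({"a"}))
    print(q, r)   # quotient {b}, {c} and remainder {d}, i.e. f = a(b + c) + d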
Advanced DAG optimization methodologies, and associated
tools, use both algebraic and Boolean methods. When DAG
nodes are restricted to just one function type, the optimization
procedure can be made much more effective. This is because
logic transformations are designed specifically to target the
functionality of the chosen node. In AIGs, for example, logic
transformations such as balancing, refactoring, and general
rewriting are very effective. Balancing is based
on the associativity axiom from traditional Boolean algebra
[12], [13]. Refactoring operates on an AIG subgraph which is
first collapsed into SOP and then factored out [19]. General
rewriting conceptually includes balancing and refactoring. Its
purpose is to replace AIG subgraphs with equivalent pre-
computed AIG implementations that improve the number
of nodes and levels [12]. By applying local, but powerful,
transformations many times during AIG optimization it is

possible to obtain very good result quality. The restriction
to AIGs makes it easier to assess the intermediate quality
and to develop the algorithms, but in general is more prone
to local minima. Nevertheless, Boolean methods can still
complement AIG optimization to attain higher quality of
results [2], [24].
In this work, we present a new representation form, based
on majority and inversion, with its native Boolean algebra. We
show algebraic and Boolean optimization techniques for this
data structure unlocking new points in the design space.
Note that early attempts at majority logic were already
reported in the 1960s [14]–[16] but, due to their inherent
complexity, failed to gain momentum in automated
synthesis. We address, in this paper, the unique opportunity
offered by majority logic in a contemporary synthesis flow.
C. Notations and Definitions
We provide hereafter notations and definitions on Boolean
algebra and logic networks.
1) Boolean Algebra: In the binary Boolean domain, the
symbol B indicates the set of binary values {0, 1}, the symbols
∧ and ∨ represent the conjunction (AND) and disjunction (OR)
operators, the symbol ' represents the complementation (INV)
operator, and 0/1 are the false/true logic values. Alternative
symbols for ∧ and ∨ are · and +, respectively. The standard
Boolean algebra (originally axiomatized by Huntington [32]) is
a non-empty set (B, ∧, ∨, ', 0, 1) subject to the identity,
commutativity, distributivity, associativity and complement
axioms over ∧, ∨ and ' [1]. For the sake of completeness, we
report these basic axioms, referred to as Δ, in Eq. 1. Such axioms will be used
later on in this work for proving theorems.
This axiomatization for Boolean algebra is sound and
complete [33]. Informally, it means that logic arguments, or
formulas, proved by the axioms in Δ, are valid (soundness) and all
true logic arguments are provable (completeness). We refer the
reader to [33] for a more formal discussion on mathematical
logic. In practical EDA applications, only sound and complete
axiomatizations are of interest.
Other Boolean algebras exist, with different operators and
axiomatizations, such as Robbins algebra, Frege's algebra, and
Nicod's algebra [33]. Boolean algebras are the basis to
operate on logic networks.
Identity: Δ.I
  x ∨ 0 = x
  x ∧ 1 = x
Commutativity: Δ.C
  x ∧ y = y ∧ x
  x ∨ y = y ∨ x
Distributivity: Δ.D
  x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
  x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)
Associativity: Δ.A
  x ∧ (y ∧ z) = (x ∧ y) ∧ z
  x ∨ (y ∨ z) = (x ∨ y) ∨ z
Complement: Δ.Co
  x ∨ x' = 1
  x ∧ x' = 0
(1)
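Because B is finite, the axioms of Eq. 1 can be checked mechanically; the following Python sketch (our own illustration, not part of the paper) enumerates all assignments and asserts each Δ rule.

    # Illustrative sketch: exhaustive check of the Eq. 1 axioms over B = {0, 1}.
    from itertools import product

    AND = lambda x, y: x & y
    OR  = lambda x, y: x | y
    NOT = lambda x: 1 - x

    for x, y, z in product((0, 1), repeat=3):
        assert OR(x, 0) == x and AND(x, 1) == x                    # identity
        assert AND(x, y) == AND(y, x) and OR(x, y) == OR(y, x)     # commutativity
        assert AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))        # distributivity
        assert OR(x, AND(y, z)) == AND(OR(x, y), OR(x, z))
        assert AND(x, AND(y, z)) == AND(AND(x, y), z)              # associativity
        assert OR(x, OR(y, z)) == OR(OR(x, y), z)
        assert OR(x, NOT(x)) == 1 and AND(x, NOT(x)) == 0          # complement
    print("all Eq. 1 axioms hold over {0, 1}")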
2) Logic Network: A logic network is a Directed Acyclic
Graph (DAG) with nodes corresponding to logic functions and
directed edges representing interconnection between the nodes.
The direction of the edges follows the natural computation
from inputs to outputs. The terms logic network, Boolean net-
work, and logic circuit are used interchangeably in this paper.
A logic network is said to be irredundant if no node can be removed
without altering the Boolean function it represents. A logic
network is said to be homogeneous if each node represents the same
logic function and has a fixed indegree, i.e., the number of
incoming edges or fan-in. In a homogeneous logic network,
edges can have a regular or complemented attribute. The depth
of a node is the length of the longest path from any primary
input variable to the node. The depth of a logic network is the
largest depth among all the nodes. The size of a logic network
is the number of its nodes.
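To keep these definitions concrete, here is a minimal Python sketch (our own illustration; the Node class and helper names are hypothetical) of a homogeneous logic network with indegree 3 and regular/complemented edges, together with the size and depth measures just defined.

    # Illustrative sketch: a homogeneous network and its size/depth measures.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        fanin: list = field(default_factory=list)   # (node or input name, complemented?) pairs

    def depth(n, cache=None):
        cache = {} if cache is None else cache
        if isinstance(n, str):                       # primary input
            return 0
        if id(n) not in cache:
            cache[id(n)] = 1 + max(depth(c, cache) for c, _ in n.fanin)
        return cache[id(n)]

    def size(n, seen=None):
        seen = set() if seen is None else seen
        if isinstance(n, str) or id(n) in seen:      # inputs and shared nodes count once
            return 0
        seen.add(id(n))
        return 1 + sum(size(c, seen) for c, _ in n.fanin)

    n1 = Node([("a", False), ("b", True), ("c", False)])   # edge from b is complemented
    n2 = Node([(n1, True), ("c", False), ("a", False)])
    print(size(n2), depth(n2))   # 2 nodes, depth 2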
3) Self-Dual Function: A logic function f(x, y, ..., z) is said
to be self-dual if f = f'(x', y', ..., z') [1]. By complementation,
an equivalent self-dual formulation is f' = f(x', y', ..., z').
4) Majority Function: The n-input (n odd) majority
function M returns the logic value assumed by more than
half of the inputs [1]. For example, the three-input majority
function M(x, y, z) is represented in terms of ∧, ∨ by
(x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z). Also (x ∨ y) ∧ (x ∨ z) ∧ (y ∨ z) is a
valid representation for M(x, y, z). The majority function is
self-dual [1].
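The definitions above are easy to validate exhaustively; the Python sketch below (our own illustration, not part of the paper) checks that the three-input majority matches both two-level forms and is self-dual.

    # Illustrative sketch: three-input majority, its two-level forms, and self-duality.
    from itertools import product

    def M(x, y, z):
        return 1 if x + y + z >= 2 else 0

    NOT = lambda v: 1 - v

    for x, y, z in product((0, 1), repeat=3):
        sop = (x & y) | (x & z) | (y & z)            # (x.y)+(x.z)+(y.z)
        pos = (x | y) & (x | z) & (y | z)            # (x+y).(x+z).(y+z)
        assert M(x, y, z) == sop == pos
        assert M(x, y, z) == NOT(M(NOT(x), NOT(y), NOT(z)))   # self-duality
    print("majority equals both two-level forms and is self-dual")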
III. MAJORITY-INVERTER GRAPHS
In this section, we present MIGs and their representation
properties. Then, we show a new Boolean algebra natively
fitting the MIG data structure. Finally, we discuss the error
masking capabilities of MIGs from an optimization standpoint.
A. MIG Logic Representation
Definition 3.1: An MIG is a homogeneous logic network
with an indegree equal to 3 and each node representing the
majority function. In a MIG, edges are marked by a regular
or complemented attribute.
To determine some basic representation properties of MIGs,
we compare them to the well-known AND/OR/Inverter Graphs
(AOIGs) (which include AIGs). In terms of representation
expressiveness, the elementary bricks in MIGs are majority
operators, while in AOIGs they are conjunctions (AND) and
disjunctions (OR). It is worth noticing that a majority operator
M(x, y, z) behaves as the conjunction operator AND(x, y)
when z = 0 and as the disjunction operator OR(x, y) when
z = 1. Therefore, majority is actually a generalization of
both conjunction and disjunction. Recall that M(x, y, z) =
xy + xz + yz. This property leads to the following theorem.
Theorem 3.1: MIGs ⊃ AOIGs.
Proof: In both AOIGs and MIGs, inverters are represented
by complemented edge markers. An AOIG node is always a
special case of a MIG node, with the third input biased to logic
0 or 1 to realize an AND or OR, respectively. On the other
hand, a MIG node is never a special case of an AOIG node,
because the functionality of the three-input majority cannot be
realized by a single AND or OR.
As a consequence of the previous theorem, MIGs are at
least as good as AOIGs but potentially much better, in terms of
representation compactness. Indeed, in the worst case, one can
replace node-wise AND/ORs by majorities with the third input
biased to a constant (0/1). However, an even more compact MIG

representation can be obtained by fully exploiting its node
functionality rather than fixing one input to a logic constant.
Fig. 1 depicts a MIG representation example for f =
x3 · (x2 + (x1' + x0)'). The starting point is a traditional AOIG.
Such AOIG has 3 nodes and 3 levels of depth, which is the best
representation possible using just AND/ORs. The first MIG
is obtained by a one-to-one replacement of AOIG nodes by
MIG nodes. As shown by Fig. 1, a better MIG representation
is possible by taking advantage of the majority function. This
transformation will be detailed in the rest of this paper. In this
way, one level of depth is saved with the same node count.
[Fig. 1 shows three panels: the AOIG for f, its one-to-one MIG replacement (AOIG → MIG), and an optimized MIG (MIG → MIGopt).]
Fig. 1: MIG representation for f = x3 · (x2 + (x1' + x0)').
Complementation is represented by bubbles on the edges.
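A quick Python check (our own illustration, not code from the paper) confirms the one-to-one replacement of Fig. 1: each AND becomes a majority with one input tied to 0, each OR a majority with one input tied to 1, and the function is preserved.

    # Illustrative sketch: node-wise AOIG -> MIG replacement for Fig. 1.
    from itertools import product

    def M(x, y, z):
        return 1 if x + y + z >= 2 else 0

    NOT = lambda v: 1 - v

    for x0, x1, x2, x3 in product((0, 1), repeat=4):
        f_aoig = x3 & (x2 | NOT(NOT(x1) | x0))                    # f = x3.(x2 + (x1' + x0)')
        f_mig  = M(x3, M(x2, NOT(M(NOT(x1), x0, 1)), 1), 0)       # AND -> M(.,.,0), OR -> M(.,.,1)
        assert f_aoig == f_mig
    print("one-to-one AOIG -> MIG replacement preserves f")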
MIGs inherit from AOIGs some important properties, like
universality and AIG inclusion. This is formalized by the
following.
Corollary 3.2: MIGs ⊃ AIGs.
Proof: MIGs ⊃ AOIGs ⊃ AIGs ⇒ MIGs ⊃ AIGs.
Corollary 3.3: MIG is a universal representation form.
Proof: MIGs ⊃ AOIGs ⊃ AIGs, which are universal
representation forms [10].
So far, we have shown that MIGs extend the representation
capabilities of AOIGs. However, we need a proper set of
manipulation tools to handle MIGs and automatically reach
compact representations. For this purpose, we introduce here-
after a new Boolean algebra, based on MIG primitives.
B. MIG Boolean Algebra
We present a novel Boolean algebra, defined over the set
(B, M, ', 0, 1), where M is the majority operator of three
variables and ' is the complementation operator. The following
five primitive transformation rules, referred to as Ω, form an
axiomatic system for (B, M, ', 0, 1). All variables belong to B.

Commutativity: Ω.C
  M(x, y, z) = M(y, x, z) = M(z, y, x)
Majority: Ω.M
  if (x = y): M(x, x, z) = M(y, y, z) = x = y
  if (x = y'): M(x, x', z) = z
Associativity: Ω.A
  M(x, u, M(y, u, z)) = M(z, u, M(y, u, x))
Distributivity: Ω.D
  M(x, y, M(u, v, z)) = M(M(x, y, u), M(x, y, v), z)
Inverter Propagation: Ω.I
  M'(x, y, z) = M(x', y', z')
(2)
Axiom Ω.C defines a commutativity property. Axiom Ω.M
declares a 2-over-3 decision threshold. Axiom Ω.A is an
associative law extended to ternary operators. Axiom Ω.D
establishes a distributive relation over majority operators.
Axiom Ω.I expresses the interaction between the M and
complementation operators. It is worth noticing that Ω.I does not
require an operation type change, unlike De Morgan's laws, as is
well known from self-duality [1].
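Since the domain is finite, soundness of the Ω axioms can also be checked mechanically, as the proof of Theorem 3.6 below suggests for Ω.D and Ω.A; the following Python sketch (our own illustration, not part of the paper) asserts all five axioms exhaustively.

    # Illustrative sketch: exhaustive check of the Eq. 2 axioms over B = {0, 1}.
    from itertools import product

    def M(x, y, z):
        return 1 if x + y + z >= 2 else 0

    NOT = lambda v: 1 - v

    for x, y, z, u, v in product((0, 1), repeat=5):
        assert M(x, y, z) == M(y, x, z) == M(z, y, x)                   # Omega.C
        assert M(x, x, z) == x and M(x, NOT(x), z) == z                 # Omega.M
        assert M(x, u, M(y, u, z)) == M(z, u, M(y, u, x))               # Omega.A
        assert M(x, y, M(u, v, z)) == M(M(x, y, u), M(x, y, v), z)      # Omega.D
        assert NOT(M(x, y, z)) == M(NOT(x), NOT(y), NOT(z))             # Omega.I
    print("all Eq. 2 axioms hold over {0, 1}")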
We prove that (B, M, ', 0, 1) axiomatized by Ω is an actual
Boolean algebra by showing that it induces a complemented
distributive lattice [34].
Theorem 3.4: The set (B, M, ', 0, 1) subject to the axioms in
Ω is a Boolean algebra.
Proof: The system Ω embeds the median algebra axioms [35].
In such a scheme, M(0, x, 1) = x follows from Ω.M. In [36],
it is proved that a median algebra with elements 0 and 1
satisfying M(0, x, 1) = x is a distributive lattice. Moreover, in
our scenario, complementation is well defined and propagates
through the operator M (Ω.I). Combined with the previous
property on distributivity, this makes our system a complemented
distributive lattice. Every complemented distributive
lattice is a Boolean algebra [34].
Note that there are other possible axiomatic systems for
(B, M, ', 0, 1). For example, one can show that in the presence
of Ω.C, Ω.A and Ω.M, the rule in Ω.D is redundant [37]. In
this work, we consider Ω.D as part of the axiomatic system
for the sake of simplicity.
1) Derived Theorems: Several other complex rules, formally
called theorems, in (B, M, ', 0, 1) are derivable from Ω.
Among the ones we encountered, three rules derived from Ω
are of particular interest to logic optimization. We refer
to them as Ψ; they are described hereafter. In the following,
the symbol z_{x/y} represents a replacement operation, i.e., it
replaces x with y in all its appearances in z.

Relevance: Ψ.R
  M(x, y, z) = M(x, y, z_{x/y'})
Complementary Associativity: Ψ.C
  M(x, u, M(y, u', z)) = M(x, u, M(y, x, z))
Substitution: Ψ.S
  M(x, y, z) = M(v, M(v', M_{v/u}(x, y, z), u), M(v', M_{v/u'}(x, y, z), u'))
(3)
The first rule, relevance (Ψ.R), replaces reconvergent variables
with their neighbors. For example, consider the function
f = M(x, y, M(w, z', M(x, y, z))). Variables x and
y are reconvergent because they appear in both the bottom
and the top majority operators. In this case, relevance
(Ψ.R) replaces x with y' in the bottom majority, giving f =
M(x, y, M(w, z', M(y', y, z))). This representation can be
further reduced to f = M(x, y, w) by using Ω.M.
The second rule, complementary associativity (Ψ.C), deals
with variables appearing in both polarities. Its rule of
replacement is M(x, u, M(y, u', z)) = M(x, u, M(y, x, z)), as
depicted by Eq. 3.
The third rule, substitution (Ψ.S), extends variable replacement
to the non-reconvergent case. We refer the reader to Fig.
2 for an example of substitution (Ψ.S) applied to a 3-input
parity function.
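The relevance example just given can be validated exhaustively; the Python sketch below (our own illustration, not code from the paper) checks that the original expression, its Ψ.R rewrite, and the Ω.M-reduced form M(x, y, w) coincide on all inputs.

    # Illustrative sketch: checking the relevance (Psi.R) example.
    from itertools import product

    def M(a, b, c):
        return 1 if a + b + c >= 2 else 0

    NOT = lambda v: 1 - v

    for x, y, z, w in product((0, 1), repeat=4):
        f       = M(x, y, M(w, NOT(z), M(x, y, z)))
        f_psi_r = M(x, y, M(w, NOT(z), M(NOT(y), y, z)))   # x replaced by y' in the bottom majority
        f_red   = M(x, y, w)                               # after applying Omega.M twice
        assert f == f_psi_r == f_red
    print("relevance example verified")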
Hereafter, we show how the Ψ rules can be derived from Ω.
Theorem 3.5: The Ψ rules are derivable from Ω.
Proof: Relevance (Ψ.R): Let S be the set of all possible
input patterns for M(x, y, z). Let S_{x=y} (S_{x=y'}) be the subset
of S such that the condition x = y (x = y') is true. Note that
S_{x=y} ∩ S_{x=y'} = ∅ and S_{x=y} ∪ S_{x=y'} = S. According to Ω.M,
variable z in M(x, y, z) is only relevant for S_{x=y'}. Thus, it is
possible to replace x with y', i.e., (x/y'), in all its appearances
in z, preserving the original functionality.
Complementary Associativity (Ψ.C):
M(x, u, M(u', y, z)) = M(M(x, u, u'), M(x, u, y), z) (Ω.D)
M(M(x, u, u'), M(x, u, y), z) = M(x, z, M(x, u, y)) (Ω.M)
M(x, z, M(x, u, y)) = M(x, u, M(y, x, z)) (Ω.A)
Substitution (Ψ.S): We set M(x, y, z) = k for brevity.
k = M(v, v', k) (Ω.M)
  = M(M(u, u', v), v', k) (Ω.M)
  = M(M(v', k, u), M(v', k, u'), v) (Ω.D)
Then, M(v', k, u) = M(v', k_{v/u}, u) (Ψ.R)
and M(v', k, u') = M(v', k_{v/u'}, u') (Ψ.R)
Recalling that k = M(x, y, z), we finally obtain: M(x, y, z) =
M(v, M(v', M_{v/u}(x, y, z), u), M(v', M_{v/u'}(x, y, z), u'))
2) Soundness and Completeness: The set (B, M, ', 0, 1),
together with the Ω axioms and derivable theorems, forms our
majority logic system. In a computer implementation of our majority
logic system, the natural data structure for (B, M, ', 0, 1) is
a MIG, and the associated manipulation tools are the Ω and Ψ
transformations. In order to be useful in practical applications,
such as EDA, our majority logic system needs to satisfy
fundamental mathematical properties such as soundness and
completeness [33]. Soundness means that every argument
provable by the axioms in the system is valid. This guarantees
the preservation of correctness. Completeness means that every
valid argument has a proof in the system. This guarantees
universal logic reachability. We show that our majority Boolean
algebra is sound and complete.
Theorem 3.6: The Boolean algebra (B, M, ', 0, 1) axiomatized
by Ω is sound and complete.
Proof: We first consider soundness. Here, we need to
prove that all axioms in Ω are valid, i.e., preserve the true
behavior (correctness) of a system. Rules Ω.C and Ω.M are
valid because they express basic properties (commutativity and
the majority decision rule) of the majority operator. Rule Ω.I is
valid because it derives from the self-duality of the majority
operator. For rules Ω.D and Ω.A, a simple way to prove
their validity is to build the corresponding truth tables and
check that they are actually the same. It is an easy exercise to
verify that this is true. We now consider completeness. Here, we
need to prove that every valid argument, i.e., (B, M, ', 0, 1)-formula,
has a proof in the system Ω. By contradiction,
suppose that a true (B, M, ', 0, 1)-formula, say α, cannot be
proven true using Ω rules. Such a (B, M, ', 0, 1)-formula α
can always be reduced by Ψ.S rules into a (B, ∧, ∨, ', 0, 1)-formula.
This is because Ψ.S can behave as Shannon's expansion
by setting v = 1 and u to a logic variable. Using Δ
(Eq. 1), all (B, ∧, ∨, ', 0, 1)-formulas can be proven, including
α. However, every (B, ∧, ∨, ', 0, 1)-formula is also contained
by (B, M, ', 0, 1), where ∧ and ∨ are emulated by majority
operators. Moreover, rules in Ω with one input fixed to 0 or 1
behave as Δ rules (Eq. 1). This means that Ω is also capable
of proving the reduced (B, M, ', 0, 1)-formula α, contradicting
our assumption. Thus, our system is sound and complete.
As a corollary of soundness, all rules in Ψ are valid.
Corollary 3.7: The Ψ rules are valid in (B, M, ', 0, 1).
Proof: The Ψ rules are derivable from Ω, as shown in Theorem
3.5. The Ω rules are sound in (B, M, ', 0, 1), as shown
in Theorem 3.6. Rules derivable from sound axioms are valid
in the original domain.
As a corollary of completeness, any element of a pair
of equivalent (B, M, ', 0, 1)-formulas, or MIGs, can be transformed
into the other by a sequence of Ω transformations.
From now on, we use MIGs to refer to functions in the
(B, M, ', 0, 1) domain. Still, the same arguments are valid for
(B, M, ', 0, 1)-formulas.
Corollary 3.8: It is possible to transform any MIG α into
any other logically equivalent MIG β by a sequence of
transformations in Ω.
Proof: MIGs are defined over the (B, M, ', 0, 1) domain.
Following from Theorem 3.6, all valid arguments over
(B, M, ', 0, 1) can be proved by a sequence of Ω rules. A valid
argument is then M(1, M(α, β', 0), M(α', β, 0)) = 0, which
reads: α is never different from β according to the initial
hypothesis. It follows that the sequence of Ω rules proving
such an argument also logically transforms α into β.
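For intuition, the validity of this argument can be checked point-wise: for any input assignment, if a and b denote the values taken by α and β, then M(1, M(a, b', 0), M(a', b, 0)) computes a XOR b and is therefore 0 exactly when the two MIGs agree. The Python sketch below (our own illustration) makes this explicit.

    # Illustrative sketch: the equivalence argument of Corollary 3.8 computes XOR.
    def M(x, y, z):
        return 1 if x + y + z >= 2 else 0

    NOT = lambda v: 1 - v

    for a in (0, 1):
        for b in (0, 1):
            miter = M(1, M(a, NOT(b), 0), M(NOT(a), b, 0))
            assert miter == (a ^ b)   # 0 iff the two values agree
    print("the argument evaluates to 0 exactly when alpha equals beta")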
3) Reachability: To measure the efficiency of a logic system,
and thus of its Boolean algebra, one can study (i) the ability to
perform a desired task and (ii) the number of basic operations
required to perform such a task. In the context of this work, the
task we care about is logic optimization. For the graph size and
graph depth metrics, MIGs can be smaller than AOIGs because
of Theorem 3.1. However, the complexity of the Ω sequences
required to reach those desirable MIGs is not obvious. In this
regard, we give an insight into the efficiency of the majority logic
system by comparing the number of Ω rules needed to obtain
an optimized MIG with the number of Δ rules needed to
obtain an evenly optimized AIG. This type of efficiency metric
is often referred to as reachability, i.e., the ability to reach a
desired representation form with the smallest number of steps
possible.
Theorem 3.9: For a given optimization goal and an initial
AOIG, the number of Ω rules needed to reach this goal with a
MIG is smaller than, or at most equal to, the number of Δ rules
needed to reach the same goal with an AOIG.
Proof: Consider the shortest sequence of Δ rules meeting
the optimization goal with an AOIG. On the MIG side, assume
we start with the initial AOIG, replacing AND/OR nodes
node-wise with pre-configured majority nodes. Note that Ω rules
with one input fixed to 0/1 behave as Δ rules. So, it is possible
to emulate in MIGs, with Ω, the same shortest sequence of Δ rules
used in AOIGs. This is just an upper bound on the shortest
sequence of Ω rules. Exploiting the full Ω expressiveness and
MIG compactness, this sequence can be further shortened.
For a deeper theoretical study on majority logic expressiveness,
we refer the reader to [38]. In this work, we use the
mathematical theory presented so far to define a consistent
logic optimization framework. Then, we give experimental
evidence of the benefits predicted by the theory. Results in
Section VI indeed show a depth reduction, over state-of-the-art
techniques, of up to 48× thanks to our majority logic system.
More details on the experiments are given in Section VI.
Operating on MIGs via the new Boolean algebra is one natural
approach to logic optimization. Interestingly enough,
other approaches are also possible. In the following, we show
how MIGs can be optimized by exploiting other properties of the
majority operator, such as bit-error masking.
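As a preview of that property, the short Python sketch below (our own illustration, not the paper's optimization algorithm) shows the masking effect: a majority node ignores an error on one input whenever its other two inputs agree.

    # Illustrative sketch: a majority node masks a single erroneous input.
    from itertools import product

    def M(x, y, z):
        return 1 if x + y + z >= 2 else 0

    for x, z in product((0, 1), repeat=2):
        # flipping z has no effect when the other two inputs agree on x
        assert M(x, x, z) == M(x, x, 1 - z) == x
    print("a bit-error on one input is masked when the other two inputs agree")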
