Journal ArticleDOI

Minimization of Boolean functions

01 Nov 1956-Bell System Technical Journal (Alcatel-Lucent)-Vol. 35, Iss: 6, pp 1417-1444
TL;DR: A systematic procedure is presented for writing a Boolean function as a minimum sum of products and specific attention is given to terms which can be included in the function solely for the designer's convenience.
Abstract: A systematic procedure is presented for writing a Boolean function as a minimum sum of products. This procedure is a simplification and extension of the method presented by W. V. Quine. Specific attention is given to terms which can be included in the function solely for the designer's convenience.
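The first phase of the procedure, repeatedly combining terms that differ in exactly one variable until only prime implicants remain, can be sketched as follows (a minimal illustration in modern notation, not the paper's exact tabular method; the string encoding over '0', '1', '-' and the function names are assumptions):

```python
from itertools import combinations

def combine(a, b):
    """Combine two implicants (strings over '0', '1', '-') that differ in
    exactly one non-dash position; return None if they cannot combine."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1 and a[diff[0]] != '-' and b[diff[0]] != '-':
        return a[:diff[0]] + '-' + a[diff[0] + 1:]
    return None

def prime_implicants(minterms, nbits):
    """First phase of a Quine-McCluskey-style procedure: merge implicants
    differing in one bit; the terms that never merge are prime implicants."""
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= terms - used
        terms = merged
    return primes
```

For the minterms 0, 2, 4, 6 of a three-variable function the only prime implicant is '--0', corresponding to the (0, 2, 4, 6) character discussed in the paper's tables.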
Citations
Journal ArticleDOI
01 Jan 1989
TL;DR: Experiments in which distance is applied to pairs of concepts and to sets of concepts in a hierarchical knowledge base show the power of hierarchical relations in representing information about the conceptual distance between concepts.
Abstract: Motivated by the properties of spreading activation and conceptual distance, the authors propose a metric, called distance, on the power set of nodes in a semantic net. Distance is the average minimum path length over all pairwise combinations of nodes between two subsets of nodes. Distance can be successfully used to assess the conceptual distance between sets of concepts when used on a semantic net of hierarchical relations. When other kinds of relationships, like 'cause', are used, distance must be amended but then can again be effective. The judgements of distance significantly correlate with the distance judgements that people make and help to determine whether one semantic net is better or worse than another. The authors focus on the mathematical characteristics of distance that present novel cases and interpretations. Experiments in which distance is applied to pairs of concepts and to sets of concepts in a hierarchical knowledge base show the power of hierarchical relations in representing information about the conceptual distance between concepts.

1,962 citations
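The distance metric described above, the average minimum path length over all pairs drawn from two node sets, can be sketched over an adjacency-dict semantic net (a minimal illustration; the graph representation and function names are assumptions, and the paper's amendments for non-hierarchical relations like 'cause' are not modeled):

```python
from collections import deque
from itertools import product

def shortest_path_len(graph, src, dst):
    """BFS shortest path length in an undirected graph (adjacency dict)."""
    if src == dst:
        return 0
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, d = frontier.popleft()
        for nbr in graph[node]:
            if nbr == dst:
                return d + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, d + 1))
    return float('inf')  # no path between src and dst

def distance(graph, set_a, set_b):
    """Average minimum path length over all pairwise combinations
    of nodes between the two subsets."""
    pairs = list(product(set_a, set_b))
    return sum(shortest_path_len(graph, a, b) for a, b in pairs) / len(pairs)
```

On a tiny hierarchy where 'animal' links to 'dog' and 'cat', the distance between {'dog'} and {'cat'} is 2.0, reflecting their common parent.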

Journal ArticleDOI
TL;DR: The essential features of the branch-and-bound approach to constrained optimization are described, and several specific applications are reviewed, including integer linear programming (Land-Doig and Balas methods), nonlinear programming (minimization of nonconvex objective functions), and the quadratic assignment problem (Gilmore and Lawler methods).
Abstract: The essential features of the branch-and-bound approach to constrained optimization are described, and several specific applications are reviewed. These include integer linear programming (Land-Doig and Balas methods), nonlinear programming (minimization of nonconvex objective functions), the traveling-salesman problem (Eastman and Little et al. methods), and the quadratic assignment problem (Gilmore and Lawler methods). Computational considerations, including trade-offs between length of computation and storage requirements, are discussed and a comparison with dynamic programming is made. Various applications outside the domain of mathematical programming are also mentioned.

1,915 citations
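The essential branch-and-bound ingredients, branching on a decision, keeping the best feasible solution found so far, and pruning subtrees whose optimistic bound cannot beat it, can be illustrated on the 0/1 knapsack problem (a generic sketch under assumed names, not one of the reviewed methods):

```python
def knapsack_bb(values, weights, capacity):
    """Branch-and-bound for 0/1 knapsack: branch on taking/skipping each
    item, prune with a fractional (LP-relaxation) upper bound."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(i, value, room):
        # Optimistic bound: fill the remaining room greedily by value
        # density, allowing a fraction of the last item.
        for j in order[i:]:
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    def branch(i, value, room):
        nonlocal best
        if value > best:
            best = value                      # record best feasible solution
        if i == len(order) or bound(i, value, room) <= best:
            return                            # prune: bound cannot beat best
        j = order[i]
        if weights[j] <= room:
            branch(i + 1, value + values[j], room - weights[j])  # take j
        branch(i + 1, value, room)                               # skip j

    branch(0, 0, capacity)
    return best
```

The trade-off the review discusses shows up here directly: a tighter bound costs more computation per node but prunes more of the tree.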

Book
01 Jan 2010
TL;DR: Theories are made easier to understand with 200 illustrative examples, and students can test their understanding with over 350 end-of-chapter review questions.
Abstract: Understand the structure, behavior, and limitations of logic machines with this thoroughly updated third edition. Many new topics are included, such as CMOS gates, logic synthesis, logic design for emerging nanotechnologies, digital system testing, and asynchronous circuit design, to bring students up-to-speed with modern developments. The intuitive examples and minimal formalism of the previous edition are retained, giving students a text that is logical and easy to follow, yet rigorous. Kohavi and Jha begin with the basics, and then cover combinational logic design and testing, before moving on to more advanced topics in finite-state machine design and testing. Theory is made easier to understand with 200 illustrative examples, and students can test their understanding with over 350 end-of-chapter review questions.

1,315 citations

Journal ArticleDOI
TL;DR: The theoretical background is developed here for employing data dependence to convert FORTRAN programs to parallel form and transformations that use dependence to uncover additional parallelism are discussed.
Abstract: The recent success of vector computers such as the Cray-1 and array processors such as those manufactured by Floating Point Systems has increased interest in making vector operations available to the FORTRAN programmer. The FORTRAN standards committee is currently considering a successor to FORTRAN 77, usually called FORTRAN 8x, that will permit the programmer to explicitly specify vector and array operations. Although FORTRAN 8x will make it convenient to specify explicit vector operations in new programs, it does little for existing code. In order to benefit from the power of vector hardware, existing programs will need to be rewritten in some language (presumably FORTRAN 8x) that permits the explicit specification of vector operations. One way to avoid a massive manual recoding effort is to provide a translator that discovers the parallelism implicit in a FORTRAN program and automatically rewrites that program in FORTRAN 8x. Such a translation from FORTRAN to FORTRAN 8x is not straightforward because FORTRAN DO loops are not always semantically equivalent to the corresponding FORTRAN 8x parallel operation. The semantic difference between these two constructs is precisely captured by the concept of dependence. A translation from FORTRAN to FORTRAN 8x preserves the semantics of the original program if it preserves the dependences in that program. The theoretical background is developed here for employing data dependence to convert FORTRAN programs to parallel form. Dependence is defined and characterized in terms of the conditions that give rise to it; accurate tests to determine dependence are presented; and transformations that use dependence to uncover additional parallelism are discussed.

780 citations
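The notion of dependence that separates a DO loop from its parallel array equivalent can be illustrated with a brute-force check on a single-array loop (a toy model with assumed names, not the paper's accurate dependence tests; integer offsets stand in for subscript expressions like a(i-1)):

```python
def loop_carried_dependence(write_offset, read_offset, n):
    """For a loop with body  a[i + write_offset] = f(a[i + read_offset]),
    i = 0..n-1, check whether some iteration reads a location that a
    *different* iteration writes -- a loop-carried dependence, which makes
    the sequential loop inequivalent to a single parallel array operation."""
    writes = {i + write_offset for i in range(n)}
    for i in range(n):
        read = i + read_offset
        if read in writes and read - write_offset != i:
            return True
    return False

# a[i] = f(a[i-1]): a recurrence, carries a dependence across iterations
# a[i] = f(a[i]):   iterations are independent, safely vectorizable
```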

Journal ArticleDOI
TL;DR: Cache memories are a general solution to improving the performance of a memory system: by placing smaller, faster memories in front of larger, slower, and cheaper memories, performance may approach that of a perfect memory system at a reasonable cost.
Abstract: A computer’s memory system is the repository for all the information the computer’s central processing unit (CPU, or processor) uses and produces. A perfect memory system is one that can supply immediately any datum that the CPU requests. This ideal memory is not practically implementable, however, as the three factors of memory capacity, speed, and cost are directly in opposition. By placing smaller faster memories in front of larger, slower, and cheaper memories, the performance of the memory system may approach that of a perfect memory system—at a reasonable cost. The memory hierarchies of modern general-purpose computers generally contain registers at the top, followed by one or more levels of cache memory, main memory (all three are semiconductor memory), and virtual memory (on a magnetic or optical disk). Figure 1 shows a memory hierarchy typical of today’s (1995) commodity systems. Performance of a memory system is measured in terms of latency and bandwidth. The latency of a memory request is how long it takes the memory system to produce the result of the request. The bandwidth of a memory system is the rate at which the memory system can accept requests and produce results. The memory hierarchy improves average latency by quickly returning results that are found in the higher levels of the hierarchy. The memory hierarchy usually reduces bandwidth requirements by intercepting a fraction of the memory requests at higher levels of the hierarchy. Some machines, such as high-performance vector machines, may have fewer levels in the hierarchy—increasing cost for better predictability and performance. Some of these machines contain no caches at all, relying on large arrays of main memory banks to supply very high bandwidth. Pipelined accesses of operands reduce the performance impact of long latencies in these machines. Cache memories are a general solution to improving the performance of a memory system. 
Although caches are smaller than typical main memory sizes, they ideally contain the most frequently accessed portions of main memory. By keeping the most heavily used data near the CPU, caches can service a large fraction of the requests without needing to access main memory (the fraction serviced is called the hit rate). Caches require locality of reference to work well transparently—they assume that accessed memory words will be accessed again quickly (temporal locality) and that memory words adjacent to an accessed word will be accessed soon after the access in question (spatial locality). When the CPU issues a request for a datum not in the cache (a cache miss), the cache loads that datum and some number of adjacent data (a cache block) into itself from main memory. To reduce cache misses, some caches are associative—a cache may place a given block in one of several places, collectively called a set. This set is content-addressable; a block may be accessed based on an address tag, one of which is coupled with each block. When a new block is brought into a set and the set is full, the cache’s replacement policy dictates which of the old blocks should be removed from the cache to make room for the new. Most caches use an approximation of least recently used (LRU) replacement.

702 citations
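The behavior described above, blocks, hit rate, and LRU replacement, can be simulated with a tiny fully associative cache (a sketch for illustration; real caches use sets and only approximate LRU, as the text notes, and the class and parameter names are assumptions):

```python
from collections import OrderedDict

class LRUCache:
    """Toy fully associative cache with exact LRU replacement,
    counting hits and misses to illustrate the hit rate."""
    def __init__(self, capacity_blocks, block_size):
        self.capacity = capacity_blocks
        self.block_size = block_size
        self.blocks = OrderedDict()           # block tag -> resident
        self.hits = self.misses = 0

    def access(self, address):
        tag = address // self.block_size      # adjacent words share a block
        if tag in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(tag)      # mark most recently used
        else:
            self.misses += 1
            if len(self.blocks) == self.capacity:
                self.blocks.popitem(last=False)   # evict least recently used
            self.blocks[tag] = True
```

Sequential accesses show spatial locality at work: with 4-word blocks, addresses 0..7 cost only two misses, since each miss brings in the neighboring words as well.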

References
Journal ArticleDOI
TL;DR: It will be shown that several of the well-known theorems on impedance networks have roughly analogous theorems in relay circuits, including the delta-wye and star-mesh transformations, and the duality theorem.
Abstract: In the control and protective circuits of complex electrical systems it is frequently necessary to make intricate interconnections of relay contacts and switches. Examples of these circuits occur in automatic telephone exchanges, industrial motor-control equipment, and in almost any circuits designed to perform complex operations automatically. In this article a mathematical analysis of certain of the properties of such networks will be made. Particular attention will be given to the problem of network synthesis. Given certain characteristics, it is required to find a circuit incorporating these characteristics. The solution of this type of problem is not unique, and methods of finding those particular circuits requiring the least number of relay contacts and switch blades will be studied. Methods will also be described for finding any number of circuits equivalent to a given circuit in all operating characteristics. It will be shown that several of the well-known theorems on impedance networks have roughly analogous theorems in relay circuits. Notable among these are the delta-wye (δ-Y) and star-mesh transformations, and the duality theorem.

922 citations

Journal ArticleDOI
TL;DR: The problem of simplifying truth functions, that is, finding a simplest formula equivalent to a given truth function, is examined.
Abstract: (1952). The Problem of Simplifying Truth Functions. The American Mathematical Monthly: Vol. 59, No. 8, pp. 521-531.

885 citations

Journal ArticleDOI
TL;DR: A basic part of the general synthesis problem is the design of a two-terminal network with given operating characteristics, and this work shall consider some aspects of this problem.
Abstract: THE theory of switching circuits may be divided into two major divisions, analysis and synthesis. The problem of analysis, determining the manner of operation of a given switching circuit, is comparatively simple. The inverse problem of finding a circuit satisfying certain given operating conditions, and in particular the best circuit is, in general, more difficult and more important from the practical standpoint. A basic part of the general synthesis problem is the design of a two-terminal network with given operating characteristics, and we shall consider some aspects of this problem.

774 citations


"Minimization of Boolean functions" refers to this paper for background

  • ...For example, the character (0, 2, 4, 6) can be formed either by combining (0, 2) and (4, 6) or by combining (0,4) and (2,6) as given in Table III....

    [...]

  • ...In Table II, when the (0, 2, 4, 6) character is formed by combining the (0, 2) and (4, 6) characters, check marks must be placed next to the (0, 4) and (2, 6) characters as well as the (0, 2) and (4, 6) characters....

    [...]

  • ...For example, in Table IIb the label of the (4, 6) (0 0 1 - 0) character can be obtained by adding 4 (= 2²) to the numbers of the label of the (0, 2) (0 0 0 - 0) character....

    [...]

Journal ArticleDOI
TL;DR: In this paper, a way to simplify truth functions is proposed.
Abstract: (1955). A Way to Simplify Truth Functions. The American Mathematical Monthly: Vol. 62, No. 9, pp. 627-631.

626 citations

Journal ArticleDOI
Maurice Karnaugh
01 Nov 1953
TL;DR: The problem in this area which has been attacked most energetically is that of the synthesis of efficient combinational, that is, nonsequential, logic circuits.
Abstract: THE SEARCH for simple abstract techniques to be applied to the design of switching systems is still, despite some recent advances, in its early stages. The problem in this area which has been attacked most energetically is that of the synthesis of efficient combinational, that is, nonsequential, logic circuits.

610 citations


"Minimization of Boolean functions" refers to this paper for background or methods

  • ...The first step of the procedure is to select from the prime implicant table a set of rows such that (1) in each column of the table there is a cross from at least one of the selected rows and (2) none of the selected rows can be discarded without destroying property (1)....

    [...]

  • ...For example, in Table XV(a) a (1)(0 0 0 0 1) was used in forming the (0, 1)(0 0 0 0 -) character and a (3)(0 0 0 1 1) was used in forming the (3, 7)(0 0 - 1 1) character....

    [...]

  • ...These can be combined to form a new character (1, 3)(0 0 0 - 1)....

    [...]

  • ...A new character is formed (1, 3) which has a dash in the x2-position....

    [...]

  • ...f = x1'x2'x3' + x3x4' + x2x4 + x1x4 (e) (i) (0, 1) + (2, 6, 10, 14) + (5, 7, 13, 15) + (9, 11, 13, 15) (ii) (0, 2) + (1, 5, 9, 13) + (6, 7, 14, 15) + (10, 11, 14, 15) (d) 0 1 2 5 6 9 10 7 11 13 14 15...

    [...]
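The row-selection step quoted above, choosing rows of the prime implicant table so that every column has a cross from at least one selected row and no selected row is redundant, amounts to a minimum set cover. A brute-force sketch (the chart encoding as a dict of row labels to covered minterms is an assumption):

```python
from itertools import combinations

def minimal_cover(chart):
    """Given a prime-implicant chart {row_label: set of minterms covered},
    return a smallest set of rows whose crosses cover every column.
    Brute force: try all row subsets in order of increasing size."""
    columns = set().union(*chart.values())
    for k in range(1, len(chart) + 1):
        for rows in combinations(chart, k):
            if set().union(*(chart[r] for r in rows)) >= columns:
                return set(rows)
```

Because subsets are tried in order of increasing size, the first cover found is minimal, automatically satisfying condition (2) that no selected row can be discarded.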