Journal ArticleDOI

MIS: A Multiple-Level Logic Optimization System

TL;DR: An overview of the MIS system and a description of the algorithms used are provided, including some examples illustrating an input language used for specifying logic and don't-cares.
Abstract: MIS is both an interactive and a batch-oriented multilevel logic synthesis and minimization system. MIS starts from the combinational logic extracted, typically, from a high-level description of a macrocell. It produces a multilevel set of optimized logic equations preserving the input-output behavior. The system includes both fast and slower (but more optimal) versions of algorithms for minimizing the area, and global timing optimization algorithms to meet system-level timing constraints. This paper provides an overview of the system and a description of the algorithms used. Included are some examples illustrating an input language used for specifying logic and don't-cares. Parts on an industrial chip have been re-synthesized using MIS with favorable results as compared to equivalent manual designs.
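
As an illustration of what "producing a multilevel set of optimized logic equations" involves, here is a minimal Python sketch that extracts a cube shared by several product terms and introduces it as an intermediate node. The data representation and function names are invented for this example; MIS itself works on a Boolean-network data structure and uses algebraic division and kernel extraction rather than this brute-force search.

    from itertools import combinations

    def literal_count(sop):
        """Literals in a sum-of-products; each cube is a frozenset of literals."""
        return sum(len(cube) for cube in sop)

    def extract_common_cube(sop):
        """Pick a multi-literal cube shared by several terms that saves the most literals."""
        literals = sorted(set().union(*sop))
        best, best_savings = None, 0
        for size in (2, 3):
            for cand in combinations(literals, size):
                cand = frozenset(cand)
                hits = sum(1 for cube in sop if cand <= cube)
                # hits*|cube| literals before vs. one 't' per term plus the new node
                savings = hits * len(cand) - hits - len(cand)
                if hits >= 2 and savings > best_savings:
                    best, best_savings = cand, savings
        return best

    def substitute(sop, cube, new_var):
        """Replace `cube` by the intermediate variable in every term containing it."""
        return [frozenset((c - cube) | {new_var}) if cube <= c else c for c in sop]

    # f = a·b·c·d + a·b·c·e + a·b·c·g   (12 literals as a two-level expression)
    f = [frozenset('abcd'), frozenset('abce'), frozenset('abcg')]
    t = extract_common_cube(f)               # -> frozenset({'a', 'b', 'c'})
    f2 = substitute(f, t, 't')               # f = t·d + t·e + t·g, with t = a·b·c
    print(sorted(map(sorted, f2)), literal_count(f2) + len(t))   # 12 -> 9 literals

Counting literals is, roughly, how area is assessed at the technology-independent level, so a restructuring step of this kind is accepted when it lowers the count.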
Citations
Journal Article
TL;DR: This paper provides an overview of SIS and contains descriptions of the input specification, STG (state transition graph) manipulation, new logic optimization and verification algorithms, ASTG (asynchronous signal transition graph) manipulation, and synthesis for PGA’s (programmable gate arrays).
Abstract: SIS is an interactive tool for synthesis and optimization of sequential circuits. Given a state transition table, a signal transition graph, or a logic-level description of a sequential circuit, it produces an optimized net-list in the target technology while preserving the sequential input-output behavior. Many different programs and algorithms have been integrated into SIS, allowing the user to choose among a variety of techniques at each stage of the process. It is built on top of MISII [5] and includes all (combinational) optimization techniques therein as well as many enhancements. SIS serves as both a framework within which various algorithms can be tested and compared, and as a tool for automatic synthesis and optimization of sequential circuits. This paper provides an overview of SIS. The first part contains descriptions of the input specification, STG (state transition graph) manipulation, new logic optimization and verification algorithms, ASTG (asynchronous signal transition graph) manipulation, and synthesis for PGA’s (programmable gate arrays). The second part contains a tutorial example illustrating the design process using SIS.
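
As a concrete example of one simple kind of state-transition-graph manipulation, the sketch below drops states that can never be reached from the reset state of a small state transition table. The dictionary encoding and function name are illustrative only and do not correspond to SIS's actual commands or file formats.

    def reachable_states(transitions, reset):
        """transitions: dict (state, input) -> (next_state, output)."""
        seen, frontier = {reset}, [reset]
        while frontier:
            s = frontier.pop()
            for (state, _inp), (nxt, _out) in transitions.items():
                if state == s and nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    stg = {
        ('S0', '0'): ('S0', '0'), ('S0', '1'): ('S1', '0'),
        ('S1', '0'): ('S0', '0'), ('S1', '1'): ('S2', '1'),
        ('S2', '0'): ('S0', '1'), ('S2', '1'): ('S2', '1'),
        ('S3', '1'): ('S0', '0'),          # dead state: nothing reaches S3
    }
    live = reachable_states(stg, 'S0')
    pruned = {k: v for k, v in stg.items() if k[0] in live}
    print(sorted(live), len(stg) - len(pruned), 'transition removed')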

1,854 citations

Journal ArticleDOI
TL;DR: A theoretical breakthrough is presented which shows that the LUT-based FPGA technology mapping problem for depth minimization can be solved optimally in polynomial time.
Abstract: The field programmable gate-array (FPGA) has become an important technology in VLSI ASIC designs. In the past few years, a number of heuristic algorithms have been proposed for technology mapping in lookup-table (LUT) based FPGA designs, but none of them guarantees optimal solutions for general Boolean networks and little is known about how far their solutions are away from the optimal ones. This paper presents a theoretical breakthrough which shows that the LUT-based FPGA technology mapping problem for depth minimization can be solved optimally in polynomial time. A key step in our algorithm is to compute a minimum height K-feasible cut in a network, which is solved optimally in polynomial time based on network flow computation. Our algorithm also effectively minimizes the number of LUT's by maximizing the volume of each cut and by several post-processing operations. Based on these results, we have implemented an LUT-based FPGA mapping package called FlowMap. We have tested FlowMap on a large set of benchmark examples and compared it with other LUT-based FPGA mapping algorithms for delay optimization, including Chortle-d, MIS-pga-delay, and DAG-Map. FlowMap reduces the LUT network depth by up to 7% and reduces the number of LUT's by up to 50% compared to the three previous methods.
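
The sketch below illustrates depth-oriented labeling for K-LUT mapping using straightforward cut enumeration over a small DAG. It is only in the spirit of the labels these mappers compute: FlowMap's contribution is obtaining the minimum-height K-feasible cut by network flow in polynomial time, which this brute-force enumeration does not do. The network encoding and function names are invented for the example, and the input is assumed to be K-bounded (every node has at most K fanins) and listed in topological order.

    from itertools import product

    def enumerate_cuts(fanins, K):
        """All K-feasible cuts of every node; `fanins` maps node -> fanin list,
        with primary inputs mapping to [] and nodes listed in topological order."""
        cuts = {}
        for v, fis in fanins.items():
            if not fis:                            # primary input
                cuts[v] = [frozenset([v])]
                continue
            merged = set()
            for combo in product(*(cuts[u] for u in fis)):
                c = frozenset().union(*combo)
                if len(c) <= K:
                    merged.add(c)
            merged.add(frozenset([v]))             # trivial cut, reused by fanouts
            cuts[v] = sorted(merged, key=len)
        return cuts

    def depth_labels(fanins, K):
        """Minimum LUT depth per node over the enumerated K-feasible cuts."""
        cuts, label = enumerate_cuts(fanins, K), {}
        for v, fis in fanins.items():
            if not fis:
                label[v] = 0
            else:
                label[v] = min(1 + max(label[u] for u in c)
                               for c in cuts[v] if c != frozenset([v]))
        return label

    # x = AND(a,b); y = AND(b,c); z = AND(x,y); out = AND(z,d)
    net = {'a': [], 'b': [], 'c': [], 'd': [],
           'x': ['a', 'b'], 'y': ['b', 'c'],
           'z': ['x', 'y'], 'out': ['z', 'd']}
    print(depth_labels(net, K=3))   # z is covered by the 3-cut {a,b,c}: label 1; out: 2

On this four-gate network the enumeration finds that node z fits entirely into one 3-input LUT fed by {a, b, c}, so the output's LUT depth is 2 rather than the naive 3.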

719 citations


Cites methods from "MIS: A Multiple-Level Logic Optimiz..."

  • ...We used input/output routines and general utility functions provided by MIS [2] in our implementation....


  • ...This class includes MIS-pga-delay by Murgai et al. which combines the technology mapping with layout synthesis [21], Chortle-d by Francis et al. which minimizes the depth increase at each bin packing step [12], and DAG-Map by Cong et al. [7, 3] based on Lawler’s labeling algorithm....


  • ...We have tested FlowMap on a set of benchmark examples and compared it with other LUT-based FPGA mapping algorithms for delay optimization, including Chortle-d, MIS-pga-delay, and DAG-Map....


  • ...However, overall MIS-pga-delay still used 9.8% more 5-LUTs and had 7.1% larger depth....


  • ...These initial networks were obtained by a sequence of technology independent area and depth optimization steps using MIS....


Book ChapterDOI
15 Jul 2010
TL;DR: This paper introduces ABC, motivates its development, and illustrates its use in formal verification of binary logic circuits appearing in synchronous hardware designs.
Abstract: ABC is a public-domain system for logic synthesis and formal verification of binary logic circuits appearing in synchronous hardware designs. ABC combines scalable logic transformations based on And-Inverter Graphs (AIGs) with a variety of innovative algorithms. A focus on the synergy of sequential synthesis and sequential verification leads to improvements in both domains. This paper introduces ABC, motivates its development, and illustrates its use in formal verification.
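
Since the abstract centers on And-Inverter Graphs, here is a minimal AIG with structural hashing in Python. The literal encoding (2*node plus a complement bit) follows common AIG practice, but the class and method names are invented for this sketch and are not ABC's API.

    class AIG:
        def __init__(self):
            self.num_nodes = 1            # node 0 is constant-0
            self.strash = {}              # (lit0, lit1) -> AND-node literal

        def new_input(self):
            lit = 2 * self.num_nodes
            self.num_nodes += 1
            return lit

        @staticmethod
        def neg(lit):
            return lit ^ 1                # toggle the complement bit

        def AND(self, a, b):
            if a > b:
                a, b = b, a               # canonical operand order
            if a == 0:                    # constant 0
                return 0
            if a == 1:                    # constant 1: AND(1, b) = b
                return b
            if a == b:
                return a
            if a == self.neg(b):
                return 0
            key = (a, b)
            if key not in self.strash:    # structural hashing: reuse equal nodes
                self.strash[key] = 2 * self.num_nodes
                self.num_nodes += 1
            return self.strash[key]

        def OR(self, a, b):               # De Morgan: a + b = !( !a & !b )
            return self.neg(self.AND(self.neg(a), self.neg(b)))

    g = AIG()
    a, b, c = g.new_input(), g.new_input(), g.new_input()
    f1 = g.OR(g.AND(a, b), c)
    f2 = g.OR(c, g.AND(b, a))             # same function, hashed to the same node
    print(f1 == f2, g.num_nodes)          # True 6: constant + 3 inputs + only 2 AND nodes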

666 citations


Cites background from "MIS: A Multiple-Level Logic Optimiz..."

  • ...Both SIS [35] and its predecessor MIS [8], pioneered multi-level combinational logic synthesis and became trend-setting prototypes for a large number of synthesis tools developed by industry....


Proceedings ArticleDOI
11 Oct 1992
TL;DR: SIS serves as both a framework within which various algorithms can be tested and compared and as a tool for automatic synthesis and optimization of sequential circuits.
Abstract: A description is given of SIS, an interactive tool for synthesis and optimization of sequential circuits. Given a state transition table or a logic-level description of a sequential circuit, SIS produces an optimized net-list in the target technology while preserving the sequential input-output behavior. Many different programs and algorithms have been integrated into SIS, allowing the user to choose among a variety of techniques at each stage of the process. It is built on top of MISII and includes all (combinational) optimization techniques therein as well as many enhancements. SIS serves as both a framework within which various algorithms can be tested and compared and as a tool for automatic synthesis and optimization of sequential circuits.

551 citations

Proceedings ArticleDOI
30 Nov 1994
TL;DR: A novel way to incorporate hardware-programmable resources into a processor microarchitecture to improve the performance of general-purpose applications through a coupling of compile-time analysis routines and hardware synthesis tools is explored.
Abstract: This paper explores a novel way to incorporate hardware-programmable resources into a processor microarchitecture to improve the performance of general-purpose applications. Through a coupling of compile-time analysis routines and hardware synthesis tools, we automatically configure a given set of the hardware-programmable functional units (PFUs) and thus augment the base instruction set architecture so that it better meets the instruction set needs of each application. We refer to this new class of general-purpose computers as PRogrammable Instruction Set Computers (PRISC). Although similar in concept, the PRISC approach differs from dynamically programmable microcode because in PRISC we define entirely new primitive datapath operations. In this paper, we concentrate on the microarchitectural design of the simplest form of PRISC: a RISC microprocessor with a single PFU that only evaluates combinational functions. We briefly discuss the operating system and the programming language compilation techniques that are needed to successfully build PRISC, and we present performance results from a proof-of-concept study. With the inclusion of a single 32-bit-wide PFU whose hardware cost is less than that of a 1 kilobyte SRAM, our study shows a 22% improvement in processor performance on the SPECint92 benchmarks.
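
The following toy Python model conveys the PRISC idea at the level of this abstract: a multi-instruction combinational expression is "synthesized" into a programmable functional unit, here crudely modeled as a precomputed table over narrow operands, and then evaluated as a single operation. The 4-bit width, names, and table-based model are assumptions made to keep the example small; the actual PFU is a 32-bit hardware structure configured by the compile-time analysis and hardware synthesis flow the abstract describes.

    WIDTH = 4                                  # keep the table small: 2^(2*WIDTH) rows
    MASK = (1 << WIDTH) - 1

    def expensive_combinational(a, b):
        """The expression the compiler decides to collapse into one PFU op."""
        return ((a & b) ^ ((a | b) >> 1)) & MASK

    def configure_pfu(func):
        """'Hardware synthesis' stand-in: tabulate func over all operand values."""
        return [[func(a, b) for b in range(1 << WIDTH)] for a in range(1 << WIDTH)]

    PFU_TABLE = configure_pfu(expensive_combinational)

    def pfu_execute(a, b):
        """One 'instruction': a single table lookup instead of several ALU ops."""
        return PFU_TABLE[a & MASK][b & MASK]

    assert all(pfu_execute(a, b) == expensive_combinational(a, b)
               for a in range(16) for b in range(16))
    print(pfu_execute(0b1010, 0b0110))         # matches the multi-instruction version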

475 citations

References
Journal ArticleDOI
TL;DR: The hypothesis is that by reducing the instruction set one can design a suitable VLSI architecture that uses scarce resources more effectively than a CISC; the approach is also expected to reduce design time, design errors, and the execution time of individual instructions.
Abstract: A general trend in computers today is to increase the complexity of architectures commensurate with the increasing potential of implementation technologies, as exemplified by the complex successors of simpler machines. Compare, for example, the DEC VAX-11 to the PDP-11, the IBM System/38 to the System/3, and the Intel iAPX-432 to the 8086. The complexity of this class of computers, which we call CISCs for complex instruction set computers, has some negative consequences: increased design time, increased design errors, and inconsistent implementations. Investigations of VLSI architectures indicate that the delay-power penalty of data transfers across chip boundaries and the still-limited resources (devices) available on a single chip are major design limitations. Even a million-transistor chip is insufficient if a whole computer has to be built from it. This raises the question of whether the extra hardware needed to implement a CISC is the best use of "scarce" resources. The above findings led to the Reduced Instruction Set Computer Project. The purpose of the RISC Project is to explore alternatives to the general trend toward architectural complexity. The hypothesis is that by reducing the instruction set one can design a suitable VLSI architecture that uses scarce resources more effectively than a CISC. We also expect this approach to reduce design time, design errors, and the execution time of individual instructions. Our initial version of such a computer is called RISC I. To meet our goals of simplicity and effective single-chip implementation, we somewhat artificially placed the following design constraints on the architecture: (1) Execute one instruction per cycle. RISC I instructions should be about as fast and no more complicated than microinstructions in current machines such as the PDP-11 or VAX. (2) Make all instructions the same size. This again simplifies implementation. We intentionally postponed attempts to reduce program size. (3) Access memory only with load and store instructions; the rest operate between registers. This restriction simplifies the design. The lack of complex addressing modes also makes it easier to restart instructions. (4) Support high-level languages. The degree of support is explained below. Our intent is to optimize the performance of RISC I for use with high-level languages. RISC I supports 32-bit addresses; 8-, 16-, and 32-bit data; and several 32-bit registers. We intend to examine support for operating systems and floating-point calculations in the future. It would appear that these constraints, based on our desire for simplicity and regularity, …
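
The load/store constraint in point (3) is illustrated by the small interpreter below, in which arithmetic instructions read and write only registers and memory is touched solely through explicit loads and stores. The mnemonics, encoding, and register count are made up for the illustration; they are not the RISC I instruction set.

    def run(program, memory, num_regs=8):
        regs = [0] * num_regs
        for op, *args in program:                  # one instruction per "cycle"
            if op == 'LD':                         # LD rd, addr   (memory -> register)
                rd, addr = args
                regs[rd] = memory[addr]
            elif op == 'ST':                       # ST rs, addr   (register -> memory)
                rs, addr = args
                memory[addr] = regs[rs]
            elif op == 'ADD':                      # ADD rd, ra, rb (registers only)
                rd, ra, rb = args
                regs[rd] = regs[ra] + regs[rb]
            elif op == 'SUB':
                rd, ra, rb = args
                regs[rd] = regs[ra] - regs[rb]
            else:
                raise ValueError(f'unknown opcode {op}')
        return regs, memory

    # mem[2] = mem[0] + mem[1], expressed with explicit loads and stores
    prog = [('LD', 0, 0), ('LD', 1, 1), ('ADD', 2, 0, 1), ('ST', 2, 2)]
    print(run(prog, [5, 7, 0]))                    # memory becomes [5, 7, 12]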

336 citations

Journal ArticleDOI
TL;DR: An experimental system for synthesizing synchronous combinational logic is described that allows a designer to start with a naive implementation produced automatically from a functional specification, evaluate it with respect to many design factors, and incrementally improve it by applying local transformations until it is acceptable for manufacture.
Abstract: A logic designer today faces a growing number of design requirements and technology restrictions, brought about by increases in circuit density and processor complexity. At the same time, the cost of engineering changes has made the correctness of chip implementations more important, and minimization of circuit count less so. These factors underscore the need for increased automation of logic design. This paper describes an experimental system for synthesizing synchronous combinational logic. It allows a designer to start with a naive implementation produced automatically from a functional specification, evaluate it with respect to these many factors and incrementally improve this implementation by applying local transformations until it is acceptable for manufacture. The use of simple local transformations in this system ensures correct implementations, isolates technology-specific data, and will allow the total process to be applied to larger, VLSI designs. The system has been used to synthesize masterslice chip implementations from functional specifications, and to remap implemented masterslice chips from one technology to another while preserving their functional behavior.
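
A minimal sketch of the "local transformation" style the abstract describes: small rewrite rules, each of which preserves functionality, applied bottom-up over an expression tree. The tuple encoding and the particular rules (double-negation removal and constant propagation through AND) are chosen for illustration and are not the paper's actual transformation catalog.

    def simplify(expr):
        """expr is a nested tuple, e.g. ('AND', ('NOT', ('NOT', 'a')), ('CONST', 1))."""
        if isinstance(expr, str) or expr[0] == 'CONST':
            return expr
        op, *args = expr
        args = [simplify(a) for a in args]          # rewrite subterms first
        if op == 'NOT' and isinstance(args[0], tuple) and args[0][0] == 'NOT':
            return args[0][1]                       # NOT(NOT(x)) -> x
        if op == 'AND':
            if ('CONST', 0) in args:
                return ('CONST', 0)                 # x AND 0 -> 0
            args = [a for a in args if a != ('CONST', 1)]   # drop AND-1 inputs
            if len(args) == 1:
                return args[0]
            if not args:
                return ('CONST', 1)
        return (op, *args)

    e = ('AND', ('NOT', ('NOT', 'a')), ('CONST', 1))
    print(simplify(e))                              # -> 'a'

Because each rule replaces a small piece of the network with an equivalent one, any sequence of applications keeps the implementation correct, which is the property the paper relies on.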

192 citations

Journal ArticleDOI
TL;DR: For your desk will come packaged as a VLSI workstation called SPUR, once the team at UC Berkeley finds a partner to transfer their upcoming prototype to industry.
Abstract: for your desk will come packaged as a VLSI workstation called SPUR, once the team at UC Berkeley finds a partner to transfer their upcoming prototype to industry.

154 citations

Journal ArticleDOI
John A. Darringer, Daniel Brand, John V. Gerbi, William H. Joyner, Louise H. Trevillyan
TL;DR: The evolution of the Logic Synthesis System from an experimental tool to a production system for the synthesis of masterslice chip implementations is described; the primary reasons for this success include the use of local transformations to simplify logic representations at several levels of abstraction.
Abstract: For some time we have been exploring methods of transforming functional specifications into hardware implementations that are suitable for production. The complexity of this task and the potential value have continued to grow with the increasing complexity of processor design and the mounting pressure to shorten machine design times. This paper describes the evolution of the Logic Synthesis System from an experimental tool to a production system for the synthesis of masterslice chip implementations. The system was used by one project in IBM Poughkeepsie to produce 90 percent of its more than one hundred chip parts. The primary reasons for this success are the use of local transformations to simplify logic representations at several levels of abstraction, and a highly cooperative effort between logic designers and synthesis system designers to understand the logic design process practiced in Poughkeepsie and to incorporate this knowledge into the synthesis system.

149 citations