Author

Timothy Kam

Bio: Timothy Kam is an academic researcher from Intel. The author has contributed to research in topics including finite-state machines and formal verification. The author has an h-index of 20 and has co-authored 37 publications receiving 1,388 citations. Previous affiliations of Timothy Kam include the University of Southern California and the University of California, Berkeley.

Papers
Proceedings ArticleDOI
01 Jun 1999
TL;DR: A coverage metric is proposed to estimate the "completeness" of a set of properties verified by model checking, and a symbolic algorithm is presented to compute this metric for a subset of the CTL property specification language.
Abstract: Although model checking is an exhaustive formal verification method, a bug can still escape detection if the erroneous behavior does not violate any verified property. We propose a coverage metric to estimate the "completeness" of a set of properties verified by model checking. A symbolic algorithm is presented to compute this metric for a subset of the CTL property specification language. It has the same order of computational complexity as a model checking algorithm. Our coverage estimator has been applied in the course of some real-world model checking projects. We uncovered several coverage holes including one that eventually led to the discovery of a bug that escaped the initial model checking effort.
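
The coverage idea is easiest to see on a tiny explicit-state example. The Python sketch below is only an illustration of the principle, not the paper's symbolic BDD-based algorithm: a state counts as covered when flipping an observed signal in that state falsifies a verified property, and states that no property notices are candidate coverage holes. The model, signals, and property are invented.

import copy

# Tiny Kripke structure: state -> (signal valuation, successor states).
MODEL = {
    "s0": ({"req": 1, "ack": 0}, ["s1"]),
    "s1": ({"req": 1, "ack": 1}, ["s2"]),
    "s2": ({"req": 0, "ack": 0}, ["s0", "s3"]),
    "s3": ({"req": 0, "ack": 0}, ["s0"]),
}
INIT = "s0"

def holds_ag_implies(model, init, antecedent, consequent):
    """Check the invariant AG(antecedent -> consequent) over reachable states."""
    seen, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        labels, succs = model[s]
        if labels[antecedent] and not labels[consequent]:
            return False
        stack.extend(succs)
    return True

def covered_states(model, init, signal, check):
    """States where flipping `signal` makes the checked property fail."""
    covered = set()
    for s in model:
        mutated = copy.deepcopy(model)
        mutated[s][0][signal] ^= 1          # perturb the observed signal in state s
        if not check(mutated, init):
            covered.add(s)
    return covered

if __name__ == "__main__":
    prop = lambda m, i: holds_ag_implies(m, i, "ack", "req")   # AG(ack -> req)
    assert prop(MODEL, INIT)                # the property passes on the original design
    cov = covered_states(MODEL, INIT, "req", prop)
    print(f"covered: {sorted(cov)} ({len(cov)}/{len(MODEL)} states)")
    # The uncovered states (s0, s2, s3) are potential coverage holes.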

150 citations

Journal ArticleDOI
TL;DR: This paper presents a new mapper aimed at mitigating structural bias. It is based on a simplified cut-based Boolean matching algorithm and, using the speed afforded by this simplification, explores two ideas to reduce structural bias.
Abstract: Technology mapping, based on directed acyclic graph covering, suffers from the problem of structural bias: The structure of the mapped netlist depends strongly on the subject graph. In this paper, the authors present a new mapper aimed at mitigating structural bias. It is based on a simplified cut-based Boolean-matching algorithm, and, using the speed afforded by this simplification, they explore two ideas to reduce structural bias. The first, called lossless synthesis, leverages recent advances in structure-based combinational-equivalence checking to combine the different networks seen during technology-independent synthesis into a single network with choices in a scalable manner. They show how cut-based mapping extends naturally to handle such networks with choices. The second idea is to combine several library gates into a single gate (called a supergate) in order to make the matching process less local. They show how supergates help address the structural-bias problem and how they fit naturally into the cut-based Boolean-matching scheme. An implementation based on these ideas significantly outperforms state-of-the-art mappers in terms of delay, area, and run-time on academic and industrial benchmarks.
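
Cut-based matching begins with k-feasible cut enumeration over the subject graph. The Python sketch below shows only that enumeration step on an invented AND-graph; a real mapper of the kind described would additionally prune dominated cuts, compute cut functions for Boolean matching, and handle choice nodes and supergates.

K = 4   # cut size limit

# And-inverter graph (inversions omitted): node -> pair of fanins;
# primary inputs simply have no entry.  The graph is invented.
AIG = {
    "n1": ("a", "b"),
    "n2": ("b", "c"),
    "n3": ("n1", "n2"),
    "n4": ("n3", "d"),
}

def enumerate_cuts(aig, k=K):
    """For every node, return its k-feasible cuts as frozensets of leaves."""
    cuts = {}
    def node_cuts(n):
        if n in cuts:
            return cuts[n]
        if n not in aig:                      # primary input
            cuts[n] = {frozenset([n])}
            return cuts[n]
        f0, f1 = aig[n]
        result = {frozenset([n])}             # the trivial cut
        for c0 in node_cuts(f0):
            for c1 in node_cuts(f1):
                merged = c0 | c1
                if len(merged) <= k:          # keep only k-feasible merges
                    result.add(merged)
        cuts[n] = result
        return result
    for n in aig:
        node_cuts(n)
    return cuts

if __name__ == "__main__":
    for node, cs in enumerate_cuts(AIG).items():
        print(node, sorted(sorted(c) for c in cs))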

101 citations

Book
17 Feb 2013
TL;DR: Key themes running through the book are the exploration of behaviors contained in a non-deterministic FSM, and the representation of combinatorial problems arising in FSM synthesis by means of Binary Decision Diagrams (BDDs).
Abstract: Synthesis of Finite State Machines: Functional Optimization is one of two monographs devoted to the synthesis of Finite State Machines (FSMs). This volume addresses functional optimization, whereas the second addresses logic optimization. By functional optimization here we mean the body of techniques that compute all permissible sequential functions for a given topology of interconnected FSMs and select a 'best' sequential function out of the permissible ones. The result is a symbolic description of the FSM representing the chosen sequential function. By logic optimization here we mean the steps that convert a symbolic description of an FSM into a hardware implementation, with the goal of optimizing objectives such as area, testability, performance, and so on. Synthesis of Finite State Machines: Functional Optimization is divided into three parts. The first part presents some preliminary definitions, theories, and techniques related to the exploration of behaviors of FSMs. The second part presents an implicit algorithm for exact state minimization of incompletely specified finite state machines (ISFSMs), and an exhaustive presentation of explicit and implicit algorithms for the binate covering problem. The third part addresses the computation of permissible behaviors at a node of a network of FSMs and the related minimization problems of non-deterministic finite state machines (NDFSMs). Key themes running through the book are the exploration of behaviors contained in a non-deterministic FSM (NDFSM) and the representation of combinatorial problems arising in FSM synthesis by means of Binary Decision Diagrams (BDDs). Synthesis of Finite State Machines: Functional Optimization will be of interest to researchers and designers in logic synthesis, CAD, and design automation.
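
As a concrete, explicit-state illustration of the kind of computation the book performs implicitly with BDDs, the Python sketch below computes the compatible state pairs of a small invented incompletely specified FSM: two states are compatible if their specified outputs never disagree and, under every input, their specified next states are again compatible.

from itertools import combinations

# Incompletely specified FSM: (state, input) -> (next_state, output),
# with None meaning "unspecified".  The machine is invented.
TRANS = {
    ("A", 0): ("B", 0), ("A", 1): ("C", None),
    ("B", 0): ("B", 0), ("B", 1): ("A", 1),
    ("C", 0): (None, 0), ("C", 1): ("A", 1),
    ("D", 0): ("D", 1), ("D", 1): ("A", 1),
}
STATES = ["A", "B", "C", "D"]
INPUTS = [0, 1]

def compatible_pairs(states, inputs, trans):
    """Greatest fixed point of compatibility on unordered state pairs."""
    pairs = {frozenset(p) for p in combinations(states, 2)}

    def output_conflict(s, t):
        for i in inputs:
            o1, o2 = trans[(s, i)][1], trans[(t, i)][1]
            if o1 is not None and o2 is not None and o1 != o2:
                return True
        return False

    pairs = {p for p in pairs if not output_conflict(*tuple(p))}
    changed = True
    while changed:              # drop pairs whose specified next states are incompatible
        changed = False
        for p in list(pairs):
            s, t = tuple(p)
            for i in inputs:
                ns, nt = trans[(s, i)][0], trans[(t, i)][0]
                if ns is None or nt is None or ns == nt:
                    continue
                if frozenset((ns, nt)) not in pairs:
                    pairs.discard(p)
                    changed = True
                    break
    return pairs

if __name__ == "__main__":
    for p in sorted(sorted(p) for p in compatible_pairs(STATES, INPUTS, TRANS)):
        print("compatible:", p)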

92 citations

Book
17 Apr 1997
TL;DR: The authors introduce techniques for converting a symbolic description of an FSM into a hardware implementation and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly.
Abstract: Synthesis of Finite State Machines: Logic Optimization is the second in a set of two monographs devoted to the synthesis of Finite State Machines (FSMs). The first volume, Synthesis of Finite State Machines: Functional Optimization, addresses functional optimization, whereas this one addresses logic optimization. The result of functional optimization is a symbolic description of an FSM which represents a sequential function chosen from a collection of permissible candidates. Logic optimization is the body of techniques for converting a symbolic description of an FSM into a hardware implementation. The mapping of a given symbolic representation into a two-valued logic implementation is called state encoding (or state assignment), and it heavily impacts the area, speed, testability, and power consumption of the realized circuit. The first part of the book introduces the relevant background, presents results previously scattered in the literature on the computational complexity of encoding problems, and surveys in depth old and new approaches to encoding in logic synthesis. The second part of the book presents two main results about symbolic minimization: a new procedure to find minimal two-level symbolic covers under face, dominance, and disjunctive constraints, and a unified framework to check encodability of encoding constraints and find codes of minimum length that satisfy them. The third part of the book introduces generalized prime implicants (GPIs), which are the counterpart, in symbolic minimization of two-level logic, of prime implicants in two-valued two-level minimization. GPIs enable the design of an exact procedure for two-level symbolic minimization, based on a covering step which is complicated by the need to guarantee encodability of the final cover. A new efficient algorithm to verify encodability of a selected cover is presented. If a cover is not encodable, it is shown how to augment it minimally until an encodable superset of GPIs is determined. To handle encodability the authors have extended the framework for satisfying encoding constraints presented in the second part. The covering problems generated in the minimization of GPIs tend to be very large. Recently, large covering problems have been attacked successfully by representing the covering table with binary decision diagrams (BDDs). In the fourth part of the book the authors introduce such techniques and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly. Synthesis of Finite State Machines: Logic Optimization will be of interest to researchers and professional engineers who work in the area of computer-aided design of integrated circuits.
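
To make encoding constraints concrete, the Python sketch below checks a candidate state encoding against face constraints only: each listed group of states must span a cube that contains no other state's code. The states, codes, and constraints are invented, and the book's procedures additionally handle dominance and disjunctive constraints and work symbolically.

def supercube(codes):
    """Smallest cube containing all codes: per bit, keep 0/1 if all agree, else '-'."""
    cube = list(codes[0])
    for code in codes[1:]:
        cube = [a if a == b else "-" for a, b in zip(cube, code)]
    return "".join(cube)

def in_cube(code, cube):
    return all(c == "-" or c == b for b, c in zip(code, cube))

def satisfies_face_constraints(encoding, constraints):
    """encoding: state -> binary code; constraints: groups of states that must
    each lie on a face (cube) of the encoding space with no intruding state."""
    for group in constraints:
        cube = supercube([encoding[s] for s in group])
        intruders = [s for s in encoding
                     if s not in group and in_cube(encoding[s], cube)]
        if intruders:
            return False, group, intruders
    return True, None, None

if __name__ == "__main__":
    encoding = {"A": "00", "B": "01", "C": "11", "D": "10"}   # invented 2-bit codes
    ok, group, bad = satisfies_face_constraints(encoding, [["A", "B"], ["B", "C"]])
    print("encodable:", ok)    # True: cubes 0- and -1 isolate the two groups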

84 citations


Cited by
Journal ArticleDOI
TL;DR: An automatic iterative abstraction-refinement methodology that extends symbolic model checking to large hardware designs; new symbolic techniques analyze spurious counterexamples and refine the abstract model correspondingly.
Abstract: The state explosion problem remains a major hurdle in applying symbolic model checking to large hardware designs. State space abstraction, having been essential for verifying designs of industrial complexity, is typically a manual process, requiring considerable creativity and insight. In this article, we present an automatic iterative abstraction-refinement methodology that extends symbolic model checking. In our method, the initial abstract model is generated by an automatic analysis of the control structures in the program to be verified. Abstract models may admit erroneous (or "spurious") counterexamples. We devise new symbolic techniques that analyze such counterexamples and refine the abstract model correspondingly. We describe aSMV, a prototype implementation of our methodology in NuSMV. Practical experiments, including a large Fujitsu IP core design with about 500 latches and 10,000 lines of SMV code, confirm the effectiveness of our approach.
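
The methodology can be summarized as a small control loop. The Python sketch below is only that skeleton; the procedures it is parameterized with stand in for the article's symbolic analyses, and the toy instantiation at the bottom is invented purely so the loop can be run.

def cegar(design, prop, initial_abstraction, model_check, is_spurious, refine):
    """Counterexample-guided abstraction refinement, as a generic loop."""
    abstraction = initial_abstraction(design, prop)
    while True:
        holds, counterexample = model_check(abstraction, prop)
        if holds:
            return "property holds"
        if not is_spurious(counterexample, design):
            return ("property fails", counterexample)
        # The counterexample exists only in the abstract model: refine the
        # abstraction so this spurious behaviour disappears, then retry.
        abstraction = refine(abstraction, counterexample, design)

if __name__ == "__main__":
    # Toy run: the "abstraction" is a precision level that refinement bumps up;
    # the checker stops reporting (spurious) counterexamples at level 3.
    print(cegar(
        design=None, prop=None,
        initial_abstraction=lambda d, p: 0,
        model_check=lambda a, p: (a >= 3, None if a >= 3 else "spurious cex"),
        is_spurious=lambda cex, d: True,
        refine=lambda a, cex, d: a + 1,
    ))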

1,040 citations

Book
31 Dec 1992
TL;DR: This book presents a synthesis of recent works concerning reactive system design, built around a synchronous model based on Robin Milner's pioneering work on synchronous process algebras, which considers that a program reacts instantaneously to events, i.e., that the machine execution time is negligible with respect to the response delays of its environment.
Abstract: This book presents a synthesis of recent works concerning reactive system design. The term 'reactive system' has been introduced in order to avoid ambiguities often involved with the term 'real-time system', which, while being better known and suggestive, has been assigned so many different meanings that it is almost inevitably misunderstood. Industrial process control systems, transportation control and supervision systems, signal processing systems, etc., are examples of the systems we have in mind. Four programming languages are presented, which share the same underlying synchronous model. Based on Robin Milner's pioneering work on synchronous process algebras, this model considers that a program reacts instantaneously to events, or that the machine execution time is negligible with respect to the response delays of its environment. Using this abstract point of view, the time behavior of a system can be formalized in a very simple and elegant way. The languages presented are ESTEREL, a textual imperative language; ARGOS, a graphical language inspired by STATECHARTS; and LUSTRE and SIGNAL, two declarative languages. After a tutorial description of the languages, illustrated by various examples, a set of related tools is presented: compilers to sequential and distributed code, silicon compilers, and verification tools.
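
The "instantaneous reaction" abstraction can be mimicked in ordinary code: at each logical instant the program reads its inputs and produces its outputs within the same instant, carrying state over from the previous one. The Python sketch below is an invented LUSTRE-like counter node (roughly count = if reset then 0 else (0 -> pre count) + (if tick then 1 else 0)); it is not a program in any of the four languages described.

def counter_node():
    """An invented LUSTRE-like node: increments on tick, clears on reset."""
    count = 0                          # state carried between instants ("pre count")
    def react(tick: bool, reset: bool) -> int:
        nonlocal count
        count = 0 if reset else count + (1 if tick else 0)
        return count                   # output produced in the same logical instant
    return react

if __name__ == "__main__":
    react = counter_node()
    inputs = [(True, False), (True, False), (False, False), (True, True), (True, False)]
    print([react(t, r) for t, r in inputs])    # [1, 2, 2, 0, 1]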

945 citations

Book ChapterDOI
15 Jul 2010
TL;DR: This paper introduces ABC, motivates its development, and illustrates its use in the formal verification of binary logic circuits appearing in synchronous hardware designs.
Abstract: ABC is a public-domain system for logic synthesis and formal verification of binary logic circuits appearing in synchronous hardware designs. ABC combines scalable logic transformations based on And-Inverter Graphs (AIGs) with a variety of innovative algorithms. A focus on the synergy of sequential synthesis and sequential verification leads to improvements in both domains. This paper introduces ABC, motivates its development, and illustrates its use in formal verification.
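
At the core of ABC is the And-Inverter Graph with structural hashing. The Python sketch below is an invented, much-simplified version of that data structure, using the literal encoding 2*node + complement bit that AIGER and ABC also use; it is not ABC's actual API.

class Aig:
    def __init__(self):
        self.nodes = []                     # node id -> fanin pair, or None for PIs/const
        self.table = {}                     # structural hashing: (lit0, lit1) -> node id
        self.const0 = self._new(None)       # node 0 is the constant-0 node

    def _new(self, fanins):
        self.nodes.append(fanins)
        return len(self.nodes) - 1

    def lit(self, node, neg=False):
        return 2 * node + int(neg)          # literal = 2*node + complement bit

    def add_input(self):
        return self.lit(self._new(None))

    def and_gate(self, a, b):
        """Return a literal for AND(a, b), reusing existing structure when possible."""
        if a > b:
            a, b = b, a                     # canonical fanin order
        if a == self.lit(self.const0):      # 0 & b = 0
            return a
        if a == self.lit(self.const0, True):    # 1 & b = b
            return b
        if a == b:                          # b & b = b
            return b
        if a == (b ^ 1):                    # b & !b = 0
            return self.lit(self.const0)
        key = (a, b)
        if key not in self.table:           # structural hashing: reuse identical ANDs
            self.table[key] = self._new(key)
        return self.lit(self.table[key])

if __name__ == "__main__":
    aig = Aig()
    x, y = aig.add_input(), aig.add_input()
    f = aig.and_gate(x, y)
    g = aig.and_gate(y, x)                  # structurally identical AND
    print(f == g, len(aig.nodes))           # True 4 -- the AND node is not duplicated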

666 citations