Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. The author has contributed to research on topics including logic synthesis and finite-state machines. The author has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. Previous affiliations of Alberto Sangiovanni-Vincentelli include the National University of Singapore and Lawrence Berkeley National Laboratory.


Papers
Proceedings ArticleDOI
01 Jun 1999
TL;DR: A new VLSI layout methodology that addresses the main problems faced in deep sub-micron (DSM) integrated circuit design, showing how the uniform parasitics of the proposed fabric give rise to a reliable and predictable design.
Abstract: We propose a new VLSI layout methodology which addresses the main problems faced in deep sub-micron (DSM) integrated circuit design. Our layout "fabric" scheme eliminates the conventional notion of power and ground routing on the integrated circuit die. Instead, power and ground are essentially "pre-routed" all over the die. By a clever arrangement of power/ground and signal pins, we almost completely eliminate the capacitive effects between signal wires. Additionally, we get a power and ground distribution network with a very low resistance at any point on the die. Another advantage of our scheme is that the arrangement of conductors ensures that on-chip inductances are uniformly negligible. Finally, characterization of the circuit delays, capacitances and resistances becomes extremely simple in our scheme, and needs to be done only once for a design. We show how the uniform parasitics of our fabric give rise to a reliable and predictable design. We have implemented our scheme using public-domain layout software. Preliminary results show that it holds much promise as the layout methodology of choice for DSM integrated circuit design.

89 citations
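A minimal sketch of the property the abstract above is built around: with uniform per-unit-length parasitics, delay characterization reduces to a single closed-form formula computed once per design. The R and C values below are illustrative assumptions, not figures from the paper.

```python
# Toy Elmore-delay model for a wire in a uniform-parasitic fabric.
# The per-unit R and C constants are assumed for illustration only.

R_PER_UM = 0.05   # ohms per micron of wire (assumed)
C_PER_UM = 0.20   # femtofarads per micron of wire (assumed)

def elmore_delay_fs(length_um: float) -> float:
    """Elmore delay of a uniform RC line: 0.5 * R_total * C_total (in fs)."""
    return 0.5 * (R_PER_UM * length_um) * (C_PER_UM * length_um)

# Characterized once, the same formula predicts delay for every route:
for length in (100, 500, 1000):
    print(length, "um ->", elmore_delay_fs(length), "fs")
```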

Proceedings ArticleDOI
24 Jun 1990
TL;DR: An algorithm that optimally distributes a signal to its required destinations by partitioning the fanout signals into subsets and recursively solving each sub-problem; each stage generates a fanout tree that improves on the one from the previous stage.
Abstract: We present an algorithm to optimally distribute a signal to its required destinations. The choice of the buffers and the topology of the distribution tree depends on the availability of different strength gates and on the load and the required times at the destinations. The general problem is to construct a fanout tree for a signal so that the required time constraint at the source node is met and the fanout tree has a minimum area. Since the area-constrained fanout problem is NP-complete and area is not a major consideration in present high-density designs, we restrict our attention to the simpler problem of designing fast fanout circuits without any area constraint. The proposed algorithm builds the fanout tree by partitioning the fanout signals into subsets and then recursively solving each sub-problem. At each stage the algorithm generates a fanout tree that is an improvement over the previous stage. This feature allows the user to specify the improvement desired by the fanout correction process. The performance of the algorithm, when run on randomly generated distributions of required times and on real design examples, is very promising.

88 citations
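The recursive construction in the abstract above can be sketched compactly: partition the sinks, place a buffer in front of each group, and recurse until the driver sees few enough loads. The buffer model, grouping rule, and all constants below are assumptions of this sketch, not the paper's actual algorithm.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sink:
    load: float        # input capacitance of the destination
    required: float    # latest time the signal may arrive

BUF_DELAY = 1.0        # intrinsic buffer delay (assumed)
BUF_DRIVE = 0.5        # buffer delay per unit of driven load (assumed)
BUF_LOAD = 0.2         # input load a buffer presents to its driver (assumed)
FANOUT_LIMIT = 4       # maximum loads any gate may drive (assumed)

def build_tree(sinks: List[Sink]) -> Sink:
    """Collapse sinks into the single equivalent sink the source driver sees."""
    if len(sinks) <= FANOUT_LIMIT:
        return Sink(load=sum(s.load for s in sinks),
                    required=min(s.required for s in sinks))
    # Group sinks with similar required times so critical ones share a path.
    sinks = sorted(sinks, key=lambda s: s.required)
    k = -(-len(sinks) // FANOUT_LIMIT)              # ceil: sinks per group
    groups = [sinks[i:i + k] for i in range(0, len(sinks), k)]
    children = []
    for g in groups:
        sub = build_tree(g)
        # A buffer drives each group: lighter load upstream, but the group's
        # required time tightens by the buffer's delay.
        children.append(Sink(load=BUF_LOAD,
                             required=sub.required - BUF_DELAY
                                      - BUF_DRIVE * sub.load))
    return build_tree(children)

root = build_tree([Sink(load=1.0, required=10.0) for _ in range(16)])
```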

Book ChapterDOI
28 Jun 1993
TL;DR: An iterative approach to formal verification by language containment is proposed, which starts with some initial abstraction and then iteratively refines it, guided by the failure report from the verification tool.
Abstract: We propose an iterative approach to formal verification by language containment. We start with some initial abstraction and then iteratively refine it, guided by the failure report from the verification tool. We show that the procedure will terminate, propose a series of heuristics aimed at reducing the size of the BDDs used in the computation, and formulate several open problems whose solution could improve the efficiency of the procedure. Finally, we present and discuss some initial experimental results.

86 citations
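The refinement loop the abstract describes, reduced to a schematic. The helper functions (`initial_abstraction`, `check_containment`, `is_real`, `refine`) stand in for the paper's machinery and are assumptions of this sketch.

```python
def verify(system, property_automaton,
           initial_abstraction, check_containment, is_real, refine):
    """Iteratively check an abstract model and refine it on spurious failures."""
    abstraction = initial_abstraction(system)
    while True:
        ok, failure = check_containment(abstraction, property_automaton)
        if ok:
            return True                    # language containment holds
        if is_real(system, failure):
            return failure                 # genuine counterexample: report it
        # The failure is an artifact of over-abstraction; use its report
        # to guide the next, more precise abstraction.
        abstraction = refine(abstraction, failure)
```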

13 Apr 2015
TL;DR: The paper explains how to use sensors as the eyes, ears, hands, and feet of the cloud.

85 citations

Book
17 Apr 1997
TL;DR: The authors introduce techniques for converting a symbolic description of an FSM into a hardware implementation and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly.
Abstract: Synthesis of Finite State Machines: Logic Optimization is the second in a set of two monographs devoted to the synthesis of Finite State Machines (FSMs). The first volume, Synthesis of Finite State Machines: Functional Optimization, addresses functional optimization, whereas this one addresses logic optimization. The result of functional optimization is a symbolic description of an FSM which represents a sequential function chosen from a collection of permissible candidates. Logic optimization is the body of techniques for converting a symbolic description of an FSM into a hardware implementation. The mapping of a given symbolic representation into a two-valued logic implementation is called state encoding (or state assignment), and it heavily impacts the area, speed, testability, and power consumption of the realized circuit. The first part of the book introduces the relevant background, presents results previously scattered in the literature on the computational complexity of encoding problems, and surveys in depth old and new approaches to encoding in logic synthesis. The second part of the book presents two main results about symbolic minimization: a new procedure to find minimal two-level symbolic covers, under face, dominance and disjunctive constraints, and a unified framework to check the encodability of encoding constraints and find codes of minimum length that satisfy them. The third part of the book introduces generalized prime implicants (GPIs), which are the counterpart, in symbolic minimization of two-level logic, to prime implicants in two-valued two-level minimization. GPIs enable the design of an exact procedure for two-level symbolic minimization, based on a covering step which is complicated by the need to guarantee encodability of the final cover. A new efficient algorithm to verify encodability of a selected cover is presented. If a cover is not encodable, it is shown how to augment it minimally until an encodable superset of GPIs is determined. To handle encodability, the authors have extended the framework for satisfying encoding constraints presented in the second part. The covering problems generated in the minimization of GPIs tend to be very large. Recently, large covering problems have been attacked successfully by representing the covering table with binary decision diagrams (BDDs). In the fourth part of the book the authors introduce such techniques and extend them to the case of the implicit minimization of GPIs, where the encodability and augmentation steps are also performed implicitly. Synthesis of Finite State Machines: Logic Optimization will be of interest to researchers and professional engineers who work in the area of computer-aided design of integrated circuits.

84 citations
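A minimal illustration of the state-encoding step the book is about: symbolic states are mapped to binary codes, after which the next-state function becomes ordinary two-valued logic. The tiny FSM and the particular code assignment below are assumptions for illustration; the book's procedures choose codes so that the resulting logic optimizes well.

```python
# A three-state FSM given symbolically, then encoded into two state bits.

# next_state[(current_state, input_bit)] -> next state (symbolic description)
next_state = {
    ("IDLE", 0): "IDLE", ("IDLE", 1): "RUN",
    ("RUN",  0): "DONE", ("RUN",  1): "RUN",
    ("DONE", 0): "IDLE", ("DONE", 1): "IDLE",
}

# One possible state encoding (state assignment); a different choice would
# yield different next-state logic, and hence different area/speed/power.
encoding = {"IDLE": (0, 0), "RUN": (0, 1), "DONE": (1, 0)}

# Encoded transition table: (s1, s0, x) -> (n1, n0), ready for logic synthesis.
for (s, x), t in sorted(next_state.items()):
    print(*encoding[s], x, "->", *encoding[t])
```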


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a graph transformer network (GTN) is proposed that allows multi-module document recognition systems to be trained globally with gradient-based methods, synthesizing complex decision surfaces that can classify high-dimensional patterns such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTNs), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
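The core operation behind the convolutional networks the abstract reports as the best-performing recognizers is a learned 2D convolution; a bare-bones version is sketched below. The kernel and input shapes are illustrative assumptions; real systems learn many kernels by back-propagation.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation: slide the kernel, take dot products."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])     # a hand-picked horizontal-step detector
img = np.random.rand(28, 28)       # an MNIST-sized input (assumed)
fmap = conv2d(img, edge)           # (28, 27) feature map
```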

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.

24,053 citations
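The scheme the abstract describes is differential evolution: mutate by scaled vector differences, cross over with the parent, and keep the trial only if it improves. A compact sketch follows; the parameter values are common defaults, assumed here rather than taken from the paper.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200,
                           rng=np.random.default_rng(0)):
    """Minimize f over box bounds with DE/rand/1/bin (a standard variant)."""
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: scaled difference of two members added to a third.
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with the current member.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it is better.
            trial_cost = f(trial)
            if trial_cost < cost[i]:
                pop[i], cost[i] = trial, trial_cost
    return pop[cost.argmin()], cost.min()

best, val = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                   bounds=[(-5, 5)] * 3)
```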

Journal ArticleDOI
01 Apr 1988 - Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, composed mostly of high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.

9,021 citations
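A miniature version of the data structure the paper introduces: ordered decision graphs with shared subgraphs, and an apply operation memoized per pair of nodes so its work is proportional to the graph sizes. This is a teaching sketch, not the paper's full algorithm; the variable order is taken to be the numeric order of variable indices.

```python
from functools import lru_cache
import operator

# A node is either a bool leaf or a tuple (var, low, high). A unique table
# makes structurally equal subgraphs the same object, giving sharing for free.
_unique = {}

def node(v, low, high):
    if low is high:                       # redundant test: both branches equal
        return low
    return _unique.setdefault((v, low, high), (v, low, high))

def bdd_var(v):
    return node(v, False, True)

@lru_cache(maxsize=None)
def apply_op(op, f, g):
    """Combine two BDDs with a Boolean operator via Shannon expansion."""
    if isinstance(f, bool) and isinstance(g, bool):
        return op(f, g)
    v = min(x[0] for x in (f, g) if not isinstance(x, bool))
    f0, f1 = (f[1], f[2]) if not isinstance(f, bool) and f[0] == v else (f, f)
    g0, g1 = (g[1], g[2]) if not isinstance(g, bool) and g[0] == v else (g, g)
    return node(v, apply_op(op, f0, g0), apply_op(op, f1, g1))

a, b, c = bdd_var(1), bdd_var(2), bdd_var(3)
f = apply_op(operator.and_, a, b)         # BDD for (a AND b)
g = apply_op(operator.or_, f, c)          # BDD for (a AND b) OR c
```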

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.

4,905 citations
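The automated check at the heart of the book can be sketched in a few lines for the simplest property class, invariants: explore every reachable state of a model and test the property in each. The model (a modulo-8 counter) and property below are assumptions chosen for illustration; real model checkers rely on the symbolic and abstraction techniques the book covers to cope with huge state spaces.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Breadth-first reachability: return a violating state, or None if safe."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s                      # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None                           # invariant holds on all reachable states

# Example: a modulo-8 counter can never reach the value 9.
violation = check_invariant(0, lambda s: [(s + 1) % 8], lambda s: s != 9)
assert violation is None
```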