Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics including logic synthesis and finite-state machines. The author has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. Previous affiliations of Alberto Sangiovanni-Vincentelli include the National University of Singapore and Lawrence Berkeley National Laboratory.


Papers
Proceedings ArticleDOI
11 Nov 1990
TL;DR: In this article, the authors propose a method based on transition relations that only requires the ability to compute the binary decision diagram for f_i and outperforms Coudert's (1990) algorithm for most examples.
Abstract: The authors propose a novel method based on transition relations that only requires the ability to compute the BDD (binary decision diagram) for f_i and outperforms O. Coudert's (1990) algorithm for most examples. The method offers a simple notational framework to express the basic operations used in BDD-based state enumeration algorithms in a unified way, and a set of techniques that can speed up range computation dramatically, including a variable ordering heuristic and a method based on transition relations.

371 citations
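
As a rough illustration of the transition-relation-based state enumeration the paper describes, the sketch below computes the reachable states of a toy machine as a least fixpoint of repeated image computations. Real implementations represent the state sets and the relation T(x, x') as BDDs; plain Python sets are substituted here purely to show the fixpoint structure, and all names are illustrative.

```python
# Illustrative sketch of transition-relation-based state enumeration.
# Real tools represent the state sets and the relation T as BDDs; plain
# Python sets stand in for them here.

def reachable_states(initial, transition_relation):
    """Compute all states reachable from `initial`.

    transition_relation: set of (state, next_state) pairs, standing in
    for the BDD of the relation T(x, x').
    """
    reached = set(initial)
    frontier = set(initial)
    while frontier:
        # Image computation: states reachable in one step from the frontier.
        image = {t for (s, t) in transition_relation if s in frontier}
        frontier = image - reached        # keep only genuinely new states
        reached |= frontier
    return reached

# Toy 2-bit counter that saturates at 3.
T = {(0, 1), (1, 2), (2, 3), (3, 3)}
print(sorted(reachable_states({0}, T)))   # [0, 1, 2, 3]
```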

Journal ArticleDOI
TL;DR: The DELIGHT.SPICE tool, a union of the DELIGHT interactive optimization-based computer-aided-design system and the SPICE circuit analysis program, is presented, yielding substantial improvement in circuit performance.
Abstract: DELIGHT.SPICE is the union of the DELIGHT interactive optimization-based computer-aided-design system and the SPICE circuit analysis program. With the DELIGHT.SPICE tool, circuit designers can take advantage of recent powerful optimization algorithms and a methodology that emphasizes designer intuition and man-machine interaction. Designer and computer are complementary in adjusting parameters of electronic circuits automatically to improve their performance criteria and to study complex tradeoffs between multiple competing objectives, while simultaneously satisfying multiple-constraint specifications. The optimization runs much more efficiently than previously because the SPICE program used has been enhanced to perform DC, AC, and transient sensitivity computation. Industrial analog and digital circuits have been redesigned using this tool, yielding substantial improvement in circuit performance.

367 citations
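
The entry above describes a methodology rather than a single algorithm to copy. The hedged sketch below substitutes SciPy's general-purpose L-BFGS-B optimizer (DELIGHT used feasible-directions optimization driven by SPICE-computed sensitivities) and a made-up first-order RC delay model, purely to illustrate tuning circuit parameters against a performance spec.

```python
# Toy stand-in for optimization-based circuit design: tune R and C of an RC
# stage to hit a target delay while penalizing area. The delay model, area
# model, and target value are invented for illustration.
import numpy as np
from scipy.optimize import minimize

TARGET_DELAY = 1e-9   # 1 ns, hypothetical spec

def cost(params):
    r, c = params
    delay = 0.69 * r * c              # first-order RC delay model
    area = c * 1e12                   # pretend area grows with capacitance
    # Penalize missing the delay spec; trade off against area.
    return ((delay - TARGET_DELAY) / TARGET_DELAY) ** 2 + 1e-3 * area

result = minimize(cost, x0=[1e3, 1e-12],
                  bounds=[(100, 1e5), (1e-14, 1e-10)], method="L-BFGS-B")
r_opt, c_opt = result.x
print(f"R = {r_opt:.1f} ohm, C = {c_opt:.2e} F, "
      f"delay = {0.69 * r_opt * c_opt:.2e} s")
```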

Journal ArticleDOI
01 Jul 1993
TL;DR: A survey of field-programmable gate array (FPGA) architectures and the programming technologies used to customize them is presented and a classification of logic blocks based on their granularity is proposed, and several logic blocks used in commercially available FPGAs are described.
Abstract: A survey of field-programmable gate array (FPGA) architectures and the programming technologies used to customize them is presented. Programming technologies are compared on the basis of their volatility, size, parasitic capacitance, resistance, and process technology complexity. FPGA architectures are divided into two constituents: logic block architectures and routing architectures. A classification of logic blocks based on their granularity is proposed, and several logic blocks used in commercially available FPGAs are described. A brief review of recent results on the effect of logic block granularity on logic density and performance of an FPGA is then presented. Several commercial routing architectures are described in the context of a general routing architecture model. Finally, recent results on the tradeoff between the flexibility of an FPGA routing architecture, its routability, and its density are reviewed.

362 citations

Journal ArticleDOI
TL;DR: In this article, a theoretical analysis of simulated annealing based on its precise model, a time-inhomogeneous Markov chain, is presented and a bound on the departure of the probability distribution of the state at finite time from the optimum is obtained.
Abstract: Simulated annealing is a randomized algorithm which has been proposed for finding globally optimum least-cost configurations in large NP-complete problems with cost functions that may have many local minima. A theoretical analysis of simulated annealing based on its precise model, a time-inhomogeneous Markov chain, is presented. An annealing schedule is given for which the Markov chain is strongly ergodic and the algorithm converges to a global optimum. The finite-time behavior of simulated annealing is also analyzed, and a bound is obtained on the departure of the probability distribution of the state at finite time from the optimum. This bound gives an estimate of the rate of convergence and insight into the conditions on the annealing schedule that give optimum performance.

341 citations
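
A minimal simulated annealing sketch to accompany the entry above. Note that the paper's convergence guarantees concern slow, logarithmic-type annealing schedules; the geometric cooling used here is the common practical shortcut and carries no such guarantee. The objective and neighborhood below are toy choices.

```python
# Minimal simulated annealing sketch with geometric cooling.
import math, random

def anneal(cost, neighbor, x0, t0=1.0, alpha=0.995, steps=20_000):
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = cost(y)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= alpha   # geometric cooling schedule
    return best, fbest

# Toy multimodal 1-D objective with many local minima.
f = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x: x + random.gauss(0, 0.5)
print(anneal(f, step, x0=5.0))
```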

Journal ArticleDOI
TL;DR: The proposed algorithm for optimal state assignment is based on an innovative strategy: logic minimization of the combinational component of the finite state machine is applied before state encoding, and has been coded in a computer program, KISS, and tested on several examples of finite state machines.
Abstract: Computer-Aided synthesis of sequential functions of VLSI systems, such as microprocessor control units, must include design optimization procedures to yield area-effective circuits. We model sequential functions as deterministic synchronous Finite State Machines (FSM's), and we consider a regular and structured implementation by means of Programmable Logic Arrays (PLA's) and feedback registers. State assignment, i.e., binary encoding of the internal states of the finite state machine, affects substantially the silicon area taken by such an implementation. Several state assignment techniques have been proposed in the past. However, to the best of our knowledge, no Computer-Aided Design tool is in use today for an efficient encoding of control logic. We propose an algorithm for optimal state assignment. Optimal state assignment is based on an innovative strategy: logic minimization of the combinational component of the finite state machine is applied before state encoding. Logic minimization is performed on a symbolic (code independent) description of the finite state machine. The minimal symbolic representation defines the constraints of a new encoding problem, whose solutions are the state assignments that allow the implementation of the PLA with at most as many product-terms as the cardinality of the minimal symbolic representation. In this class, an optimal encoding is one of minimal length satisfying these constraints. A heuristic algorithm constructs a solution to the constrained encoding problem. The algorithm has been coded in a computer program, KISS, and tested on several examples of finite state machines. Experimental results have shown that the method is an effective tool for designing finite state machines.

340 citations
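
As a sketch of the constrained encoding problem described above: KISS-style symbolic minimization yields groups of states whose binary codes must lie on a common face (subcube) of the hypercube that contains no other state's code. The checker below tests a candidate assignment against such face constraints; the FSM, the constraint groups, and the encoding are invented toy data, not an example from the paper.

```python
# Check a candidate state assignment against KISS-style face-embedding
# constraints: each group of states must span a subcube of the code space
# that excludes the codes of all states outside the group.

def spanned_cube(codes, width):
    """Smallest cube containing all codes: per bit '0', '1', or '-'."""
    return tuple(
        str((codes[0] >> b) & 1)
        if all(((c >> b) & 1) == ((codes[0] >> b) & 1) for c in codes)
        else '-'
        for b in reversed(range(width)))

def in_cube(code, cube, width):
    return all(lit == '-' or int(lit) == ((code >> (width - 1 - i)) & 1)
               for i, lit in enumerate(cube))

def satisfies_constraints(encoding, groups, width):
    for group in groups:
        cube = spanned_cube([encoding[s] for s in group], width)
        outside = set(encoding) - set(group)
        if any(in_cube(encoding[s], cube, width) for s in outside):
            return False
    return True

enc = {'s0': 0b00, 's1': 0b01, 's2': 0b11, 's3': 0b10}
groups = [['s0', 's1'],   # must share a face excluding s2 and s3
          ['s2', 's3']]
print(satisfies_constraints(enc, groups, 2))   # True: faces 0- and 1-
```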


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; gradient-based learning is shown to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
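
A LeNet-style convolutional network sketch in PyTorch, for readers who want to see the shape of the architecture the abstract describes. The original LeNet-5 differs in details (subsampling layers, a sparse connection table, RBF output units), so treat this as an approximation rather than the paper's network.

```python
# LeNet-style convolutional network sketch for 28x28 grayscale digits.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

digits = torch.randn(8, 1, 28, 28)        # a batch of fake 28x28 images
print(SmallConvNet()(digits).shape)       # torch.Size([8, 10])
```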

Journal ArticleDOI
Rainer Storn1, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.

24,053 citations
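
A minimal sketch of the DE/rand/1/bin scheme Storn and Price introduced; the F, CR, and population-size values below are commonly cited defaults rather than values taken from the paper.

```python
# Minimal DE/rand/1/bin differential evolution sketch.
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: combine three distinct random vectors (rand/1).
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one mutant coordinate.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection.
            ft = f(trial)
            if ft <= fitness[i]:
                pop[i], fitness[i] = trial, ft
    best = fitness.argmin()
    return pop[best], fitness[best]

sphere = lambda x: float(np.sum(x * x))
print(differential_evolution(sphere, bounds=[(-5, 5)] * 3))
```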

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.

9,021 citations
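
A compact sketch of the data structure the abstract describes: a reduced ordered BDD with a memoised apply operation whose cost is proportional to the product of the argument graph sizes. The node layout and names are illustrative, not Bryant's implementation.

```python
# Compact reduced ordered BDD (ROBDD) sketch with a memoised `apply`.
class ROBDD:
    def __init__(self, num_vars):
        self.num_vars = num_vars
        # Terminals 0 and 1 sit "below" every variable level.
        self.var = {0: num_vars, 1: num_vars}
        self.low, self.high = {}, {}
        self.unique = {}            # (var, low, high) -> node id
        self.next_id = 2

    def mk(self, v, lo, hi):
        if lo == hi:                # reduction rule: skip redundant tests
            return lo
        key = (v, lo, hi)
        if key not in self.unique:  # reduction rule: share equal subgraphs
            node = self.next_id
            self.next_id += 1
            self.var[node], self.low[node], self.high[node] = v, lo, hi
            self.unique[key] = node
        return self.unique[key]

    def var_node(self, i):          # BDD for the single variable x_i
        return self.mk(i, 0, 1)

    def apply(self, op, u, v):
        """Combine two BDDs with a binary Boolean op via Shannon expansion."""
        memo = {}
        def go(u, v):
            if (u, v) not in memo:
                if u in (0, 1) and v in (0, 1):
                    memo[(u, v)] = op(u, v)
                else:
                    m = min(self.var[u], self.var[v])
                    u0, u1 = ((self.low[u], self.high[u])
                              if self.var[u] == m else (u, u))
                    v0, v1 = ((self.low[v], self.high[v])
                              if self.var[v] == m else (v, v))
                    memo[(u, v)] = self.mk(m, go(u0, v0), go(u1, v1))
            return memo[(u, v)]
        return go(u, v)

b = ROBDD(2)
f = b.apply(lambda p, q: p & q, b.var_node(0), b.var_node(1))   # x0 AND x1
print(f, b.low[f], b.high[f])   # an internal node testing x0
```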

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.

4,905 citations
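
To make the book's core idea concrete, the toy checker below exhaustively explores a finite transition system, tests an invariant in every reachable state, and returns a counterexample path on failure. The two-process "mutual exclusion" model is invented and deliberately broken; real model checkers use far more sophisticated symbolic and abstraction techniques, as the book describes.

```python
# Explicit-state invariant checking with counterexample extraction.
from collections import deque

def check_invariant(initial, successors, invariant):
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        s = queue.popleft()
        if not invariant(s):
            # Reconstruct the counterexample by walking parent links back.
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None   # invariant holds in every reachable state

# Two processes, each idle (0), trying (1), or critical (2) -- no lock,
# so mutual exclusion should fail.
def succ(state):
    for i in (0, 1):
        s = list(state)
        s[i] = (s[i] + 1) % 3
        yield tuple(s)

cex = check_invariant([(0, 0)], succ, invariant=lambda s: s != (2, 2))
print(cex)   # a shortest path from (0, 0) into the bad state (2, 2)
```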