Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics including logic synthesis and finite-state machines. The author has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. Previous affiliations of Alberto Sangiovanni-Vincentelli include the National University of Singapore and Lawrence Berkeley National Laboratory.


Papers
Proceedings ArticleDOI
27 Sep 2004
TL;DR: An extension of a mathematical framework proposed by the authors to deal with the composition of heterogeneous reactive systems is presented, providing complete formal support for correct-by-construction distributed deployment of a synchronous design specification over an LTTA medium.
Abstract: We present an extension of a mathematical framework proposed by the authors to deal with the composition of heterogeneous reactive systems. Our extended framework encompasses diverse models of computation and communication such as synchronous, asynchronous, causality-based partial orders, and earliest execution times. We introduce an algebra of tag structures and morphisms between tag sets to define heterogeneous parallel composition formally, and we use a result on pullbacks from category theory to properly handle the case of systems derived by composing many heterogeneous components. The extended framework allows us to establish theorems from which design techniques for correct-by-construction deployment of abstract specifications can be derived. We illustrate this by providing complete formal support for correct-by-construction distributed deployment of a synchronous design specification over an LTTA (loosely time-triggered architecture) medium.
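As a pointer to the tagged-event idea, the sketch below relates two behaviors by mapping their tags into a common tag set and requiring agreement on shared variables. It is a toy illustration only: the paper's algebra of tag structures and morphisms is far more general, and every name here is invented for the example.

```python
# Minimal sketch of tag-based composition (illustrative only; not the
# paper's formalization). A behavior is a set of events (tag, var, value).

def map_tags(behavior, morphism):
    """Apply a tag morphism, relocating every event into a common tag set."""
    return {(morphism(tag), var, val) for (tag, var, val) in behavior}

def compose(b1, b2, shared):
    """Parallel composition: the behaviors are compatible only if they
    carry identical (tag, value) pairs for every shared variable."""
    def proj(b):
        return {(t, v, x) for (t, v, x) in b if v in shared}
    return b1 | b2 if proj(b1) == proj(b2) else None  # None = incompatible

# A synchronous component (tags = reaction indices) meets an asynchronous
# one after both are mapped into a common tag set -- here plain integers,
# a toy stand-in for the paper's tag structures.
sync_b  = {(1, "x", 10), (2, "x", 20)}
async_b = {(1, "x", 10), (2, "x", 20), (2, "y", 5)}
print(compose(map_tags(sync_b, lambda t: t),
              map_tags(async_b, lambda t: t), shared={"x"}))
```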

33 citations

Book ChapterDOI
23 Mar 2000
TL;DR: It is shown that the discrete value function converges to the viscosity solution of the Hamilton-Jacobi-Bellman equation as a discretization parameter tends to zero.
Abstract: We consider the synthesis of optimal controls for continuous feedback systems by recasting the problem to a hybrid optimal control problem: to synthesize optimal enabling conditions for switching between locations in which the control is constant. An algorithmic solution is obtained by translating the hybrid automaton to a finite automaton using a bisimulation and formulating a dynamic programming problem with extra conditions to ensure non-Zenoness of trajectories. We show that the discrete value function converges to the viscosity solution of the Hamilton-Jacobi-Bellman equation as a discretization parameter tends to zero.
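For orientation, the convergence claim can be read against the standard discretization of the Hamilton-Jacobi-Bellman equation. The display below is the generic baseline statement for a continuous control problem, not the paper's hybrid formulation with switching conditions:

```latex
% Generic infinite-horizon form with running cost \ell and dynamics f;
% the paper's hybrid setting adds locations and switching conditions.
\[
  0 \;=\; \min_{u \in U}\Bigl\{\, \ell(x,u) + \nabla V(x)\cdot f(x,u) \,\Bigr\}
  \qquad \text{(Hamilton--Jacobi--Bellman)}
\]
\[
  V_h(x) \;=\; \min_{u \in U}\Bigl\{\, h\,\ell(x,u) + V_h\bigl(x + h\,f(x,u)\bigr) \,\Bigr\},
  \qquad V_h \xrightarrow[h \to 0]{} V \ \text{(viscosity solution)}
\]
```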

33 citations

Journal ArticleDOI
TL;DR: This paper presents three abstraction layers for WSNs, the tools that “bridge” these layers, and a case study that shows how the methodology covers all the aspects of the design process, from conceptual description to implementation.
Abstract: The platform-based design (PBD) methodology we present has its foundations in a clear definition of the different abstraction layers of a wireless sensor network (WSN). At the application layer, we introduce the sensor network service platform that allows the user to describe the application independently from the network architecture. This functional description is then mapped into an instance of the sensor network implementation platform. These two layers of abstraction are the basis for the design methodology. An essential feature of the methodology is the capability of defining the design as a refinement process between contiguous layers that is based on synthesis, given the existence of a formal definition of the layers of abstraction. We present a framework, called Rialto, which translates application specifications into network constraints. Furthermore, we present a synthesis engine, called Genesis, that, starting from these constraints and an abstraction of the hardware platform performance, generates a topology and a communication protocol that satisfy the constraints and optimize energy consumption. We present a case study that shows how the methodology covers all the aspects of the design process, from conceptual description to implementation.
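The Genesis engine itself is not reproduced here. As a toy stand-in for constraint-driven topology synthesis, the sketch below connects sensor nodes to a sink with a Prim-style tree that minimizes an assumed distance-squared radio-energy cost; the node names and the energy model are hypothetical.

```python
# Toy stand-in for energy-driven topology synthesis (NOT the Genesis
# engine): grow a spanning tree toward the sink, always adding the
# cheapest link, under an assumed energy ~ distance^2 radio model.
import math

def synthesize_tree(nodes, sink):
    """nodes: {name: (x, y)}; returns parent links forming a low-energy tree."""
    energy = lambda a, b: math.dist(nodes[a], nodes[b]) ** 2  # assumed model
    in_tree, parent = {sink}, {}
    while len(in_tree) < len(nodes):
        u, v = min(((i, o) for i in in_tree for o in nodes
                    if o not in in_tree), key=lambda e: energy(*e))
        parent[v] = u          # cheapest link into the tree so far
        in_tree.add(v)
    return parent

# Multihop beats a direct long link: b relays through a.
print(synthesize_tree({"sink": (0, 0), "a": (1, 0), "b": (2, 0)}, "sink"))
```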

33 citations

Proceedings ArticleDOI
04 Jan 1997
TL;DR: This paper surveys some state-of-the-art techniques used to perform automatic verification of combinational circuits and classifies the current approaches into two categories: functional and structural.
Abstract: With the increase in the complexity of present-day systems, proving the correctness of a design has become a major concern. Simulation-based methodologies are generally inadequate to validate the correctness of a design with reasonable confidence. More and more designers are moving towards formal methods to guarantee the correctness of their designs. In this paper we survey some state-of-the-art techniques used to perform automatic verification of combinational circuits. We classify the current approaches for combinational verification into two categories: functional and structural. The functional methods consist of representing a circuit as a canonical decision diagram. Two circuits are equivalent if and only if their decision diagrams are equal. The structural methods consist of identifying related nodes in the circuit and using them to simplify the problem of verification. We briefly describe some of the methods in both categories and discuss their merits and drawbacks.
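On a toy scale, the functional approach reduces to comparing canonical forms. The sketch below uses the full truth table as the canonical form; decision diagrams make the same equivalence test tractable for circuits with many inputs. The example circuits are invented.

```python
# Functional equivalence on a toy scale: the full truth table is a
# canonical form, so two circuits are equivalent iff their tables match.
from itertools import product

def truth_table(circuit, n_inputs):
    return tuple(circuit(*bits) for bits in product((0, 1), repeat=n_inputs))

c1 = lambda a, b, c: (a & b) | (a & c)      # distributed form
c2 = lambda a, b, c: a & (b | c)            # factored form
print(truth_table(c1, 3) == truth_table(c2, 3))  # True: equivalent
```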

33 citations

Proceedings ArticleDOI
01 Oct 1987
TL;DR: This paper describes the algorithms and programming techniques needed to develop SUM (Simulation Using Massively parallel computers), a relaxation-based circuit simulator on the Connection Machine, a massively parallel processor with up to 65536 processors.
Abstract: Accurate circuit simulation is a very important step in the design of high-performance integrated circuits. The ever-increasing size of integrated circuits requires an inordinate amount of computer time to be spent in circuit simulation. Parallel processors have been considered to speed up the simulation process. Massively parallel computers have recently become available and present an interesting new paradigm for expensive CAD applications. This paper describes the algorithms and programming techniques needed to develop SUM (Simulation Using Massively parallel computers), a relaxation-based circuit simulator on the Connection Machine, a massively parallel processor with up to 65536 processors. SUM can simulate circuits at almost constant CPU time per iteration, regardless of circuit size, and can therefore handle very large circuits. Circuit simulators running on the largest supercomputers can run circuits of comparable size; however, SUM is easily scalable as the number of processors in the Connection Machine increases, with almost no increase in CPU time.
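SUM's actual waveform-relaxation algorithms are not reproduced here. The sketch below shows the relaxation principle on a linear nodal system: every node's update depends only on the previous iterate, so all updates can run simultaneously, which is the property that lets a machine like the CM assign one node per processor. The conductance matrix and tolerance are illustrative.

```python
# Minimal Gauss-Jacobi relaxation for a nodal system G v = i (a linear
# stand-in for the nonlinear device equations a circuit simulator solves).
import numpy as np

def jacobi(G, i, tol=1e-10, max_iter=10_000):
    d = np.diag(G)
    R = G - np.diag(d)              # off-diagonal couplings
    v = np.zeros_like(i, dtype=float)
    for _ in range(max_iter):
        v_new = (i - R @ v) / d     # every node updates independently
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

G = np.array([[3.0, -1.0], [-1.0, 2.0]])      # toy conductance matrix
print(jacobi(G, np.array([1.0, 0.0])))        # node voltages [0.4, 0.2]
```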

33 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, graph transformer networks (GTNs) are proposed for training multimodule handwritten character recognition systems globally with gradient-based methods, which can synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
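As a pointer to the core operation, the sketch below computes the 2D cross-correlation that convolutional layers apply: the same small kernel slides over the image, which is what makes the learned features robust to translation. This is a minimal illustration, not the paper's LeNet architecture or GTN machinery.

```python
# Minimal 2D "convolution" (cross-correlation, as CNN layers compute it):
# one shared kernel is slid over the whole image.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

edge = np.array([[1.0, -1.0]])                # horizontal edge detector
img = np.array([[0.0, 0.0, 1.0, 1.0]] * 2)    # step edge
print(conv2d(img, edge))                      # responds at the transition
```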

42,067 citations

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented; it requires few control variables, is robust and easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
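The method described is differential evolution. A minimal sketch of the common DE/rand/1/bin variant follows, with the differential weight F and crossover rate CR as the few control variables the abstract mentions; population size, generation count, and the test function are illustrative choices.

```python
# Minimal DE/rand/1/bin sketch of differential evolution.
import numpy as np

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, gens=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutation: combine three distinct other members
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, forcing at least one mutant gene
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, X[i])
            if (ft := f(trial)) <= fit[i]:   # greedy one-to-one selection
                X[i], fit[i] = trial, ft
    return X[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
print(differential_evolution(sphere, [(-5, 5)] * 3))  # near (0, 0, 0)
```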

24,053 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
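In the spirit of the paper, the sketch below hash-conses reduced, ordered BDD nodes so that equal functions yield the identical node, and builds results with a memoized apply whose cost is bounded by the product of the argument graph sizes. It is a bare-bones sketch that omits refinements such as complement edges.

```python
# Compact ROBDD sketch: hash-consed nodes give canonicity, and the
# memoized apply combines two diagrams under a Boolean operator.
from functools import lru_cache

unique = {}                      # (var, low, high) -> node id (hash-consing)
nodes = {0: None, 1: None}       # 0 and 1 are the terminal nodes

def mk(var, low, high):
    if low == high:              # reduction rule: drop a redundant test
        return low
    key = (var, low, high)
    if key not in unique:
        unique[key] = len(nodes)
        nodes[unique[key]] = key
    return unique[key]

@lru_cache(maxsize=None)
def apply_op(op, u, v):
    if u in (0, 1) and v in (0, 1):
        return int(op(u, v))
    uv, ul, uh = nodes[u] if u > 1 else (float("inf"), u, u)
    vv, vl, vh = nodes[v] if v > 1 else (float("inf"), v, v)
    var = min(uv, vv)            # expand on the earliest variable
    lo = apply_op(op, ul if uv == var else u, vl if vv == var else v)
    hi = apply_op(op, uh if uv == var else u, vh if vv == var else v)
    return mk(var, lo, hi)

x1, x2 = mk(1, 0, 1), mk(2, 0, 1)   # single-variable diagrams
AND = lambda a, b: a & b
f = apply_op(AND, x1, x2)
g = apply_op(AND, x2, x1)
print(f == g)                        # True: equal functions, same node
```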

9,021 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
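At its core, checking an invariant (the simplest kind of safety property) is a reachability search over the model's states. The toy sketch below returns a counterexample trace when the invariant fails; the transition system and property are invented for the example, and real model checkers layer temporal logics, symbolic representations, and abstraction on top of this idea.

```python
# Toy explicit-state invariant check: breadth-first search of the
# reachable states; a violation yields a counterexample trace.
from collections import deque

def check_invariant(init, step, invariant):
    """step: state -> iterable of successor states."""
    seen, queue = {init}, deque([(init, [init])])
    while queue:
        s, trace = queue.popleft()
        if not invariant(s):
            return trace                      # counterexample trace
        for t in step(s):
            if t not in seen:
                seen.add(t)
                queue.append((t, trace + [t]))
    return None                               # invariant holds everywhere

# Counter stepped by +1 or +2 (mod 4); check the invariant counter != 3.
step = lambda s: [(s + 1) % 4, (s + 2) % 4]
print(check_invariant(0, step, lambda s: s != 3))  # e.g. [0, 1, 3]
```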

4,905 citations