Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. The author has contributed to research topics including logic synthesis and finite-state machines. The author has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. Previous affiliations of Alberto Sangiovanni-Vincentelli include the National University of Singapore and Lawrence Berkeley National Laboratory.


Papers
Journal ArticleDOI
TL;DR: In this paper, an extractor based on the MAGIC layout system is proposed to extract superconducting circuits from layout, where the inductances of the superconducting lines are calculated by a set of analytical models.
Abstract: The authors present an extractor, INDEX, designed to extract superconducting circuits from layout. The inductances of the superconducting lines are calculated by a set of analytical models. These self- and mutual-inductance models are generated from a series of numerical simulations and a linear-programming curve fit. INDEX is based on the MAGIC layout system. INDEX has been tested on a number of cases with good results. A two-junction superconducting quantum interference device (SQUID) was used as one test case.
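The curve-fitting step can be illustrated with a small sketch. The paper fits its inductance models by linear-programming curve fitting against numerical simulations; since the actual models and data are not given here, the sketch below substitutes an ordinary least-squares fit of a hypothetical model L(w) = a + b·ln(1/w) to synthetic "field solver" samples.

```python
import math

# Hypothetical stand-in for a numerical field solver: inductance per unit
# length of a superconducting line vs. line width w (arbitrary units).
def field_solver(w):
    return 0.8 + 0.4 * math.log(1.0 / w)

widths = [0.5, 1.0, 2.0, 4.0, 8.0]
samples = [(w, field_solver(w)) for w in widths]

# Fit L(w) = a + b*ln(1/w) by least squares (normal equations, 2 unknowns).
n = len(samples)
xs = [math.log(1.0 / w) for w, _ in samples]
ys = [L for _, L in samples]
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n
```

The fitted (a, b) can then be evaluated in constant time during extraction instead of re-running the numerical simulation per wire.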

15 citations

Proceedings ArticleDOI
08 Jul 2009
TL;DR: This work describes the use of statistical analysis to compute the probability distribution of Controller Area Network (CAN) message response times when only partial information is available about the functionality and electrical architecture of a vehicle.
Abstract: Automotive electronics architectures need to be evaluated and selected based on the estimated performance of the functions deployed on them before the details of these functions are known. End-to-end delays of controls must be estimated using incomplete and aggregate information on the computation and communication load for ECUs and buses. We describe the use of statistical analysis to compute the probability distribution of Controller Area Network (CAN) message response times when only partial information is available about the functionality and electrical architecture of a vehicle. We provide results on simulation as well as trace data to show that our statistical inference can be used for predicting the distribution of the response time of a CAN message, once its priority has been assigned, from limited information such as the bus utilization of higher priority messages.
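The paper's statistical inference itself is not reproduced here, but the flavor of the problem can be sketched with a crude Monte Carlo model: sample the non-preemptive blocking from a lower-priority frame plus a Poisson number of higher-priority interferers, then read percentiles off the empirical response-time distribution. All parameters (frame times, utilization) are illustrative, not from the paper.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's multiplication-of-uniforms Poisson sampler
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_response_time(c_m, u_hp, c_hp, c_block, rng):
    """One Monte Carlo sample of a CAN frame's response time (ms).
    c_m: own transmission time; u_hp: bus utilization of higher-priority
    frames; c_hp: their average transmission time; c_block: longest
    lower-priority frame (non-preemptive blocking)."""
    b = rng.uniform(0.0, c_block)            # frame already on the bus
    lam = (u_hp / c_hp) * (b + c_m)          # expected interfering frames
    n = poisson(lam, rng)                    # higher-priority arrivals
    return b + n * c_hp + c_m

rng = random.Random(1)
samples = sorted(sample_response_time(0.25, 0.3, 0.2, 0.5, rng)
                 for _ in range(5000))
p95 = samples[int(0.95 * len(samples))]      # empirical 95th percentile
```

This mirrors the paper's setting in one respect: only aggregate quantities (utilization and an average frame time of higher-priority traffic) are needed, not the full message set.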

15 citations

01 Jan 2004
TL;DR: This thesis argues that correct-by-construction methods combining the benefits of synchronous specification with the efficiency of asynchronous implementation are the key to designing moderately distributed complex systems composed of tightly interacting concurrent processes.
Abstract: Currently available computer-aided design (CAD) tools struggle to handle the increasingly dominant impact of interconnect delay and fall short of providing support for IP reuse. With each process generation, the number of available transistors grows faster than the ability to meaningfully design them (the design productivity gap), and designers are forced to iterate many times between circuit specification and layout implementation (the timing-closure problem). Ironically, it is the introduction of nanometer technologies that threatens the outstanding pace of technological progress that has shaped the semiconductor industry. The key to addressing these challenges is the development of methodologies based on formal methods to enable modularity, flexibility, and reusability in system design. The subject of this dissertation, Latency-Insensitive Design, is a step in this direction. My thesis is that correct-by-construction methods combining the benefits of synchronous specification with the efficiency of asynchronous implementation are the key to designing moderately distributed complex systems composed of tightly interacting concurrent processes. The major contributions are the theory of latency-insensitive protocols and the companion latency-insensitive design methodology. Latency-insensitive systems are synchronous distributed systems composed of functional modules that exchange data on communication channels according to an appropriate protocol. The protocol works on the assumption that the modules are stallable (a weak condition to ask them to obey) and guarantees that systems made of functionally correct modules behave correctly independently of channel latencies. The theory of latency-insensitive protocols is the foundation of a correct-by-construction methodology for integrated circuit design that handles latency's increasing impact in nanometer technologies and facilitates the assembly of IP cores for building complex SoCs, thereby reducing the number of costly iterations during the design process. Thanks to the generality of its principles, latency-insensitive design can possibly be applied to other research areas such as the distributed deployment of embedded software. (Abstract shortened by UMI.)
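A minimal sketch of the stalling discipline, assuming a toy shell/channel encoding rather than the dissertation's formalism: each module fires only when all its inputs carry valid tokens and emits a void token otherwise, so the stream of valid outputs is independent of the channel latencies.

```python
from collections import deque

VOID = None   # the void/stalling token

class Channel:
    """Point-to-point channel with a fixed latency, pre-filled with voids."""
    def __init__(self, latency):
        self.q = deque([VOID] * latency)
    def push(self, tok):
        self.q.append(tok)
    def pop(self):
        return self.q.popleft()

class Shell:
    """Wraps a stallable core: fires only when every input token is valid,
    otherwise stalls and emits a void token."""
    def __init__(self, core):
        self.core = core
    def step(self, inputs):
        if any(tok is VOID for tok in inputs):
            return VOID
        return self.core(*inputs)

# Pipeline: source -> (+1) -> (*2) through channels of latency 2 and 3.
ch1, ch2 = Channel(2), Channel(3)
add1, dbl = Shell(lambda x: x + 1), Shell(lambda x: 2 * x)
source = iter(range(10))
outputs = []
for cycle in range(20):
    ch1.push(next(source, VOID))
    ch2.push(add1.step([ch1.pop()]))
    out = dbl.step([ch2.pop()])
    if out is not VOID:
        outputs.append(out)
# The valid outputs recover the zero-latency composition 2*(i+1).
```

Changing the channel latencies only shifts when results appear; the sequence of valid values is unchanged, which is the latency-insensitivity guarantee in miniature.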

15 citations

Proceedings ArticleDOI
01 May 1995
TL;DR: In this paper, performance sensitivities are used to derive a set of bounds on critical parasitics and to generate weights for a cost function which drives an area router for routing of very high-frequency circuits.
Abstract: Techniques are proposed for the routing of very high-frequency circuits. In this approach, performance sensitivities are used to derive a set of bounds on critical parasitics and to generate weights for a cost function which drives an area router. In addition to these bounds, design often requires that the length of interconnect lines be equal to predefined values. The routing scheme enforces both types of constraints in two phases. During the first phase all parasitic constraints are enforced on all nets. Equality constraints are enforced during the second phase by expanding each net simultaneously while ensuring that no additional violations of parasitic constraints are introduced in the layout. During both phases accurate and efficient parasitic estimations are guaranteed by compact analytical models, based on 2D and 3D field analysis. Finally, a global check on all distributed parasitics is performed. If the original constraints are not satisfied, the weights are updated based on the severity of the violation and routing is applied iteratively. Several layouts synthesized using this technique have been fabricated and successfully tested, confirming the effectiveness of the approach.
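The sensitivity-to-bounds/weights step can be sketched under simple assumptions: every critical net gets an equal share of the allowed performance degradation, so its parasitic bound is that share divided by its sensitivity, and router weights are normalized sensitivities. All net names and numbers are hypothetical, and the allocation scheme is an illustration, not the paper's.

```python
# Hypothetical sensitivities (from circuit simulation) of a performance
# metric, e.g. bandwidth in MHz, to the parasitic capacitance (fF) of
# each critical net.
sens = {"net_in": 4.0, "net_fb": 1.5, "net_out": 0.5}
perf_margin = 20.0    # total bandwidth degradation we can tolerate (MHz)

# Equal margin share per net: the capacitance bound is the share divided
# by the net's sensitivity, so sensitive nets get tighter bounds.
share = perf_margin / len(sens)
bounds = {net: share / s for net, s in sens.items()}

# Router cost-function weights: normalize sensitivities so the most
# performance-critical net has weight 1.0.
s_max = max(sens.values())
weights = {net: s / s_max for net, s in sens.items()}
```

By construction, if every net meets its bound the total first-order degradation stays within the margin, which is the role the bounds play in the router's first phase.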

15 citations

Proceedings ArticleDOI
16 Feb 2004
TL;DR: A new abstraction level - the platform - is introduced to separate circuit design from design space exploration and an analog platform encapsulates analog components concurrently modeling their behavior and their achievable performances.
Abstract: This paper describes a novel approach to system level analog design. A new abstraction level - the platform - is introduced to separate circuit design from design space exploration. An analog platform encapsulates analog components, concurrently modeling their behavior and their achievable performances. Performance models are obtained through statistical sampling of circuit configurations. The design configuration space is specified with analog constraint graphs so that the sampling space is significantly reduced. System level exploration can be achieved through optimization on behavioral models constrained by performance models. Finally, an example is provided showing the effectiveness of the approach on a WCDMA amplifier.
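A toy version of the flow, with a made-up analytic "simulator" standing in for real circuit simulation: sample configurations, record the achieved performances as the performance model, then explore at the system level only over achievable points. The amplifier model, parameter ranges, and power budget are all invented for illustration.

```python
import random

def simulate_amplifier(w, ibias):
    """Hypothetical stand-in for a circuit simulator."""
    gain = 20.0 * w * ibias / (1.0 + w * ibias)   # saturating gain (dB-like)
    power = 1.2 * ibias * (1.0 + 0.1 * w)         # grows with bias and width
    return gain, power

rng = random.Random(0)

# 1) Statistical sampling of the circuit configuration space.
samples = []
for _ in range(2000):
    w = rng.uniform(0.5, 10.0)     # device width (normalized)
    ib = rng.uniform(0.1, 2.0)     # bias current (mA)
    samples.append(((w, ib), simulate_amplifier(w, ib)))

# 2) "Performance model": the set of achievable (gain, power) points.
# 3) System-level exploration: best gain within a power budget, searched
#    over achievable performances only, never re-running the simulator.
budget = 1.0
feasible = [(cfg, perf) for cfg, perf in samples if perf[1] <= budget]
best_cfg, best_perf = max(feasible, key=lambda s: s[1][0])
```

The point of the abstraction is visible even here: once the samples exist, exploration touches only performance points, and the winning configuration `best_cfg` is recovered afterwards to re-enter circuit design.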

15 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning methods are reviewed and compared on handwritten character recognition; convolutional neural networks are shown to outperform other techniques, and graph transformer networks (GTNs) are proposed for globally training multi-module recognition systems.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
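The core operation of a convolutional layer, a 2D cross-correlation slid over the image, can be sketched in a few lines (no learning, just the sliding-window sum):

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a nested-list image and kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w - kw + 1)]
            for i in range(h - kh + 1)]

# A 1x2 horizontal-difference kernel responds to the vertical edge:
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
print(conv2d(image, [[-1, 1]]))   # [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

In a CNN the kernel entries are learned by back-propagation and the same kernel is applied at every position, which is what makes the layer robust to translations of a 2D shape.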

42,067 citations

Journal ArticleDOI
Rainer Storn1, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
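The method described is differential evolution. A minimal DE/rand/1/bin sketch, using the standard control parameters NP, F, and CR (not the authors' reference code), looks like:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=300, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random members,
    binomial crossover, greedy one-to-one selection."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(d)     # guarantee one mutated coordinate
            trial = []
            for j in range(d):
                if rng.random() < CR or j == jrand:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))
                else:
                    trial.append(pop[i][j])
            fc = f(trial)
            if fc <= cost[i]:            # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]

# 3-D sphere function: global minimum 0 at the origin.
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               [(-5.0, 5.0)] * 3)
```

Note how little the caller must supply: only the objective, box bounds, and three scalar control parameters, which is the "few control variables" property the abstract emphasizes.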

24,053 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
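A minimal sketch of the data structure, in the usual reduced ordered BDD presentation: a unique table makes isomorphic subgraphs share one node (giving canonicity), and a memoized `apply` performs Boolean operations in time bounded by the product of the operand graph sizes.

```python
class BDD:
    """Minimal reduced ordered BDD with a unique table and memoized apply."""
    def __init__(self):
        self.unique = {}    # (var, lo, hi) -> node id
        self.node = {}      # node id -> (var, lo, hi); ids 0/1 are terminals
        self.F, self.T = 0, 1

    def mk(self, var, lo, hi):
        if lo == hi:
            return lo       # eliminate a redundant test
        key = (var, lo, hi)
        if key not in self.unique:
            nid = len(self.node) + 2
            self.unique[key] = nid
            self.node[nid] = key
        return self.unique[key]   # share isomorphic subgraphs -> canonicity

    def var(self, i):
        return self.mk(i, self.F, self.T)

    def apply(self, op, u, v, memo=None):
        memo = {} if memo is None else memo
        if u < 2 and v < 2:
            return int(op(u, v))
        if (u, v) in memo:
            return memo[(u, v)]
        vu = self.node[u][0] if u > 1 else float("inf")
        vv = self.node[v][0] if v > 1 else float("inf")
        top = min(vu, vv)   # expand on the first variable in the order
        u0, u1 = self.node[u][1:] if vu == top else (u, u)
        v0, v1 = self.node[v][1:] if vv == top else (v, v)
        r = self.mk(top, self.apply(op, u0, v0, memo),
                         self.apply(op, u1, v1, memo))
        memo[(u, v)] = r
        return r

bdd = BDD()
a, b = bdd.var(0), bdd.var(1)
AND, OR = (lambda x, y: x & y), (lambda x, y: x | y)
# Canonicity: syntactically different builds of one function coincide.
f = bdd.apply(AND, a, b)
g = bdd.apply(AND, b, a)
h = bdd.apply(OR, a, f)     # absorption: a | (a AND b) = a
```

Because equal functions are the same node id, equivalence checking (the verification use in the paper) reduces to a pointer comparison.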

9,021 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
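The simplest instance of the technique, explicit-state checking of an invariant by breadth-first reachability, can be sketched as follows (a toy illustration, not one of the book's algorithms):

```python
from collections import deque

def check_invariant(inits, successors, invariant):
    """Breadth-first explicit-state search: return (True, None) if every
    reachable state satisfies the invariant, else (False, a shortest
    counterexample path from an initial state)."""
    frontier = deque((s, (s,)) for s in inits)
    seen = set(inits)
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return False, list(path)
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return True, None

# Toy model: a counter that wraps modulo 10.
step = lambda s: [(s + 1) % 10]
ok, _ = check_invariant([0], step, lambda s: s < 10)       # invariant holds
bad, trace = check_invariant([0], step, lambda s: s != 7)  # violated at 7
```

The counterexample path returned on failure is the model checker's debugging payoff the abstract mentions: a concrete execution leading from an initial state to the violation.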

4,905 citations