Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics: Logic synthesis & Finite-state machine. The author has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. Previous affiliations of Alberto Sangiovanni-Vincentelli include the National University of Singapore & Lawrence Berkeley National Laboratory.


Papers
Proceedings ArticleDOI
10 Nov 1996
TL;DR: A methodology for hierarchical statistical circuit characterization that does not rely on circuit-level Monte Carlo simulation is presented; it permits the statistical characterization of large analog and mixed-signal systems.
Abstract: A methodology for hierarchical statistical circuit characterization which does not rely upon circuit-level Monte Carlo simulation is presented. The methodology uses principal component analysis, response surface methodology, and statistics to directly calculate the statistical distributions of higher-level parameters from the distributions of lower-level parameters. We have used the methodology to characterize a folded cascode operational amplifier and a phase-locked loop. This methodology permits the statistical characterization of large analog and mixed-signal systems, many of which are extremely time-consuming or impossible to characterize using existing methods.
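The following sketch illustrates the general flavour of such a hierarchical flow: decorrelate lower-level parameters with PCA, fit a cheap response surface from a handful of expensive evaluations, and then obtain higher-level statistics from the surface instead of circuit-level Monte Carlo. It is a minimal illustration, not the paper's implementation; the gain() function, the parameter values, and the quadratic surface are all invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Invented lower-level parameters (say, threshold voltage and a gain factor)
# characterized by a mean vector and covariance matrix.
mean = np.array([0.5, 1.2e-4])
cov = np.array([[1.0e-4, 5.0e-10],
                [5.0e-10, 1.0e-10]])

# Placeholder for an expensive circuit-level performance evaluation.
def gain(p):
    vth, beta = p
    return 40.0 - 25.0 * vth + 1.0e5 * beta

# PCA: decorrelate the lower-level parameters.
eigvals, eigvecs = np.linalg.eigh(cov)

# Small design of experiments in principal-component space; each sample
# costs one expensive evaluation.
z = rng.normal(size=(50, 2))
x = mean + (z * np.sqrt(eigvals)) @ eigvecs.T
y = np.array([gain(p) for p in x])

# Fit a quadratic response surface over the principal components.
feats = np.column_stack([np.ones(len(z)), z, z**2, z[:, 0] * z[:, 1]])
coef, *_ = np.linalg.lstsq(feats, y, rcond=None)

# Statistics of the higher-level parameter now come from the cheap surface.
zs = rng.normal(size=(100000, 2))
fs = np.column_stack([np.ones(len(zs)), zs, zs**2, zs[:, 0] * zs[:, 1]])
ys = fs @ coef
print("gain mean/std:", ys.mean(), ys.std())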

74 citations

Proceedings ArticleDOI
06 Nov 1994
TL;DR: For constraint-driven synthesis, it is shown that a fundamental subproblem of crosstalk channel routing, coupling-constrained graph levelization (CCL), is NP-complete, and a novel heuristic algorithm is developed.
Abstract: Interconnect performance does not scale well into deep submicron dimensions, and the rising number of analog effects erodes the digital abstraction necessary for high levels of integration. In particular, crosstalk is an analog phenomenon of increasing relevance. To cope with the increasingly analog nature of high-performance digital system design, we propose using a constraint-driven methodology. In this paper we describe new constraint generation ideas incorporating digital sensitivity. In constraint-driven synthesis, we show that a fundamental subproblem of crosstalk channel routing, coupling-constrained graph levelization (CCL), is NP-complete, and develop a novel heuristic algorithm. To demonstrate the viability of our methodology, we introduce a gridless crosstalk-avoiding channel router as an example of a robust and truly constraint-driven synthesis tool.
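The coupling-constrained levelization problem can be made concrete with a small sketch. The paper's own heuristic is not reproduced here; below is a generic greedy levelizer over a hypothetical conflict graph, where an edge means two nets would couple too strongly to share a level.

def levelize(nets, conflicts):
    """Assign each net to the lowest level with no conflicting neighbor.

    nets      -- iterable of net names
    conflicts -- set of frozensets {a, b}: a and b must not share a level
    """
    levels = []  # each level is a set of mutually compatible nets
    for net in nets:
        for level in levels:
            if all(frozenset((net, other)) not in conflicts for other in level):
                level.add(net)
                break
        else:
            levels.append({net})
    return levels

# Hypothetical coupling constraints between five nets.
conflicts = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e")]}
print(levelize(["a", "b", "c", "d", "e"], conflicts))
# -> [{'a', 'd'}, {'b', 'e'}, {'c'}]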

74 citations

Proceedings ArticleDOI
01 Mar 1999
TL;DR: A hybrid approach to embedded software performance estimation is developed that incorporates aspects of both behavioral simulation and cycle-accurate instruction-set simulation, providing a flexible and fast simulation platform that also accounts for compilation issues and processor features.
Abstract: High-level cost and performance estimation, coupled with a fast hardware/software co-simulation framework, is a key enabler of a fast embedded system design cycle. Unfortunately, deriving such estimates without a detailed implementation available is very difficult. In this paper we focus on embedded software performance estimation. Current approaches use either behavioral simulation with (often manual) timing annotations, or a clock-cycle-accurate model of instruction execution (e.g., an instruction set simulator). The former provides greater flexibility (no need to perform a detailed design) and high simulation speed, but cannot easily consider effects such as compiler optimization and processor architecture. The latter provides high accuracy, but requires a more detailed implementation model and is in general much slower. We hence developed a hybrid approach that incorporates aspects of both: it provides a flexible and fast simulation platform while also considering compilation issues and processor features. The key idea is to use the GNU C compiler (GCC) to generate "assembler-level" C code. This code can be annotated with timing information and used as a very precise, yet fast, software simulation model. We report experimental results that show the effectiveness of our approach, and we propose some future improvements.
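A toy rendition of the annotation idea, with invented cycle counts: each basic block of the generated code carries a timing annotation, so running the annotated model computes the result and accumulates an execution-time estimate at the same time. The real flow annotates GCC-generated "assembler-level" C; this Python stand-in only mimics the mechanism.

cycles = 0

def tick(n):
    # Timing annotation: charge n cycles to the running estimate.
    global cycles
    cycles += n

def dot(a, b):
    tick(4)                  # hypothetical prologue cost
    s = 0
    for ai, bi in zip(a, b):
        tick(3)              # hypothetical per-iteration cost
        s += ai * bi
    tick(2)                  # hypothetical epilogue cost
    return s

print(dot([1, 2, 3], [4, 5, 6]), "in", cycles, "cycles")   # 32 in 15 cycles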

72 citations

Journal ArticleDOI
TL;DR: The subalgorithm, an extension of Polak's method of feasible directions to nondifferentiable problems, is shown to converge under suitable assumptions, and the optimality function used in the subalgorithm is proven to satisfy a condition which guarantees that the overall algorithm converges.
Abstract: The optimal design centering, tolerancing, and tuning problem is transcribed into a mathematical programming problem of the form $P_g: \min\{f(x) \mid \max_{\omega\in\Omega}\min_{\tau\in T} \zeta^{j}(x,\omega,\tau) \leq 0,\ j \in J\}$, where $x, \omega, \tau \in R^{n}$, $f: R^{n} \rightarrow R^{1}$ and $\zeta^{j}: R^{n} \times R^{n} \times R^{n} \rightarrow R^{1}$ are continuously differentiable, $\Omega$ and $T$ are compact subsets of $R^{n}$, and $J=\{1, \cdots, p\}$. A simplified form of $P_g$, $P: \min\{f(x) \mid \Psi(x) \triangleq \max_{\omega\in\Omega}\min_{\tau\in T} \zeta(x,\omega,\tau) \leq 0\}$, is discussed. It is shown that $\Psi(\cdot)$ is locally Lipschitz continuous but not continuously differentiable. Optimality conditions for $P$ based on the concept of generalized gradients are derived. An algorithm is presented, consisting of a master outer-approximations algorithm proposed by Gonzaga and Polak and of a new subalgorithm for nondifferentiable problems of the form $P_{i}: \min\{f(x) \mid \max_{\omega\in\Omega_i}\min_{\tau\in T} \zeta(x,\omega,\tau) \leq 0\}$, where $\Omega_i$ is a discrete set. The subalgorithm, an extension of Polak's method of feasible directions to nondifferentiable problems, is shown to converge under suitable assumptions. Moreover, the optimality function used in the subalgorithm is proven to satisfy a condition which guarantees that the overall algorithm converges.
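The max-min structure of $\Psi$ is easy to evaluate numerically once $\Omega$ and $T$ are discretized (as the outer-approximations algorithm does with the sets $\Omega_i$). A minimal sketch, with an invented one-dimensional zeta standing in for a real performance constraint:

import numpy as np

omega_grid = np.linspace(-0.1, 0.1, 21)   # discretized Omega (variations)
tau_grid = np.linspace(-0.05, 0.05, 11)   # discretized T (tunings)

def zeta(x, omega, tau):
    # Hypothetical constraint: violated when the perturbed, tuned design drifts.
    return (x + omega + tau) ** 2 - 1.0

def psi(x):
    # Worst case over manufacturing variation, best case over tuning.
    return max(min(zeta(x, w, t) for t in tau_grid) for w in omega_grid)

# x is feasible for problem P iff psi(x) <= 0.
print(psi(0.5), psi(1.2))   # negative (feasible), positive (infeasible)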

72 citations

DOI
01 Mar 1998
TL;DR: This paper presents an approach to integrate a clock-cycle-accurate instruction set simulator (ISS) with a fast event-based system simulator, and presents a cached refinement scheme to improve the performance at the expense of accuracy.
Abstract: Timing analysis for checking satisfaction of constraints is a crucial problem in real-time system design. In some current approaches, the delay of software modules is precalculated by a software performance estimation method, which is not accurate enough for hard real-time systems and complicated designs. In this paper we present an approach that integrates a clock-cycle-accurate instruction set simulator (ISS) with a fast event-based system simulator. By using the ISS, the delay of events can be measured instead of estimated. An interprocess communication architecture and a simple protocol are designed to meet the requirements of robustness and flexibility. A cached refinement scheme is presented to improve performance at the expense of accuracy; the scheme is especially effective for applications in which the delay of basic blocks is approximately data-independent. We also discuss implementation issues, using the Ptolemy simulation environment and the ST20 simulator as an example.
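The cached refinement scheme can be sketched as memoization of per-block ISS measurements: the first occurrence of a basic block pays for a slow cycle-accurate run, and later occurrences reuse the cached delay, which is exact precisely when the block's delay is data-independent. The ISS below is a stub with made-up cycle counts, not the ST20 simulator.

from functools import lru_cache

@lru_cache(maxsize=None)
def iss_measure(block_id):
    """Stand-in for a clock-cycle-accurate ISS run of one basic block."""
    fake_iss_results = {"init": 120, "loop_body": 35, "flush": 60}
    return fake_iss_results[block_id]

def simulate(trace):
    """Fast event-based pass: sum cached per-block delays along a trace."""
    return sum(iss_measure(b) for b in trace)

print(simulate(["init"] + ["loop_body"] * 100 + ["flush"]))  # 120+3500+60 = 3680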

71 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, graph transformer networks (GTNs) are proposed, which allow multimodule recognition systems to be trained globally with gradient-based methods; combined with convolutional networks, they can synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described; it uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
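None of the paper's systems are reproduced here, but the underlying principle of gradient-based learning is small enough to show directly: follow the negative gradient of a loss until the parameters carve out a decision surface. A minimal logistic-regression sketch on synthetic data (all values invented):

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic separable labels

w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of cross-entropy loss
    b -= lr * np.mean(p - y)

print("train accuracy:", np.mean((p > 0.5) == (y == 1)))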

42,067 citations

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
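A compact sketch in the spirit of the method (differential evolution in its common DE/rand/1/bin form): mutate with scaled difference vectors, cross over, and greedily keep the better of parent and trial. The control constants and the sphere test function are standard illustrative choices, not the paper's exact setup.

import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))       # initial population
    fx = np.apply_along_axis(f, 1, x)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)  # difference-vector mutation
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True        # ensure one mutant gene
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft <= fx[i]:                            # greedy selection
                x[i], fx[i] = trial, ft
    return x[fx.argmin()], fx.min()

sphere = lambda v: float(np.sum(v * v))
print(differential_evolution(sphere, [(-5, 5)] * 3))   # converges near the origin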

24,053 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, consisting mostly of high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
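The core manipulation algorithm the abstract refers to is apply, which combines two ordered BDDs operator-by-operator with memoization, so its cost is bounded by the number of distinct node pairs it visits. A bare-bones sketch of that recursion (hash-consing and many practical details of Bryant's package are omitted):

import operator
from functools import lru_cache

# Terminals are the booleans True/False; an internal node is a tuple
# (var, low, high), with variables tested in increasing order on every path.
def var_of(u):
    return u[0] if isinstance(u, tuple) else float("inf")

def mk(var, low, high):
    # Reduction rule: a node whose two branches agree is redundant.
    return low if low == high else (var, low, high)

@lru_cache(maxsize=None)
def apply_op(op, u, v):
    # Combine two BDDs with a boolean operator; memoization bounds the work.
    if isinstance(u, bool) and isinstance(v, bool):
        return op(u, v)
    var = min(var_of(u), var_of(v))
    u0, u1 = (u[1], u[2]) if var_of(u) == var else (u, u)
    v0, v1 = (v[1], v[2]) if var_of(v) == var else (v, v)
    return mk(var, apply_op(op, u0, v0), apply_op(op, u1, v1))

x1 = (1, False, True)                        # BDD for the variable x1
x2 = (2, False, True)                        # BDD for the variable x2
print(apply_op(operator.and_, x1, x2))       # (1, False, (2, False, True))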

9,021 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
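As a taste of the algorithms the book covers, one of the simplest model-checking primitives is computing the states satisfying the CTL formula EF p (p is reachable) as a backward least fixed point over the transition relation. A minimal explicit-state sketch on an invented Kripke structure:

def ef(states, trans, sat_p):
    """Least fixed point: sat(EF p) = p OR EX (EF p)."""
    sat = set(sat_p)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in sat and any(t in sat for t in trans.get(s, ())):
                sat.add(s)
                changed = True
    return sat

states = {"s0", "s1", "s2", "s3"}
trans = {"s0": ["s1"], "s1": ["s2"], "s2": ["s2"], "s3": ["s3"]}
print(ef(states, trans, {"s2"}))   # {'s0', 's1', 's2'}: s3 can never reach p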

4,905 citations