Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics including logic synthesis and finite-state machines. The author has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. Previous affiliations of Alberto Sangiovanni-Vincentelli include the National University of Singapore and Lawrence Berkeley National Laboratory.


Papers
Proceedings ArticleDOI
06 Jun 1994
TL;DR: This panel will assess the outlook for verification of complex systems, focusing on enabling technologies that show promise in this area now and in the future; each panelist will provide their own perspective on system-level verification challenges.
Abstract: Time-to-market continues to be The Challenge faced by developers of complex electronic systems. The bottleneck - which was traditionally in the design phase of the project - has now moved downstream to the system-level verification stage. The growing adoption of top-down design methodologies based on HDL and synthesis has made the generation of large multi-million gate designs easier than before. The efficient verification of those newly created gates in the final system is now the key to solving The Challenge. Technologies such as rapid system prototyping, ASIC emulation and formal verification offer the potential to completely verify the full system. Hardware and software co-design and HDL test benches offer the potential to feed real world inputs to the verification process. This panel will assess the outlook for verification for complex systems. We will focus on enabling technologies which show promise in this area, both now and for the future. The panel will present a mixture of tutorial material, leading edge academic work, current technology and the user's perspective. In addition to current and future technologies, each speaker will specifically address how design methodology is impacted by their choice of verification methodology. The discussion of the panel will focus on: -What's required from the EDA industry to make their customers successful in the future? -System level verification can be quite expensive. Is the promised Return-On-Investment really there? -Which technologies are usable without turning design methodologies upside down? -Just what is product and what is research in this area? The target audience includes chip and system designers, design managers and executive management who are pondering the benefits and costs of implementing system-level verification. The panelists represent a mix of academia, companies offering verification solutions and users of system-level verification products. The panel will begin with a tutorial on Formal Verification. All other panelists will be limited to a short position statement, allowing ample time for discussion. Each panelist will provide their own perspective of system-level verification challenges.

3 citations

Posted Content
TL;DR: This paper shows that ensemble learning methods can give improved performance on incipient anomalies and identifies common pitfalls in these models through extensive experiments on two real-world datasets.
Abstract: Incipient anomalies present milder symptoms compared to severe ones, and are more difficult to detect and diagnose due to their close resemblance to normal operating conditions. The lack of incipient anomaly examples in the training data can pose severe risks to anomaly detection methods that are built upon Machine Learning (ML) techniques, because these anomalies can be easily mistaken as normal operating conditions. To address this challenge, we propose to utilize the uncertainty information available from ensemble learning to identify potential misclassified incipient anomalies. We show in this paper that ensemble learning methods can give improved performance on incipient anomalies and identify common pitfalls in these models through extensive experiments on two real-world datasets. Then, we discuss how to design more effective ensemble models for detecting incipient anomalies.
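The uncertainty-screening idea can be illustrated with a small sketch. The code below is not the authors' implementation or datasets; it assumes a generic scikit-learn random forest on synthetic data and simply flags test samples whose ensemble predictive entropy is unusually high as candidate incipient anomalies.

```python
# A minimal sketch (not the paper's method): flag test samples whose ensemble
# predictive uncertainty (entropy of the averaged class probabilities) is high,
# treating them as potential incipient anomalies for further review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Average the per-tree class probabilities, then compute predictive entropy.
probs = np.mean([t.predict_proba(X_te) for t in forest.estimators_], axis=0)
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

threshold = np.quantile(entropy, 0.95)   # illustrative cut-off
suspect = entropy > threshold            # candidates for manual inspection
print(f"{suspect.sum()} samples flagged as potentially misclassified incipient anomalies")
```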

3 citations

Posted Content
TL;DR: This work identifies common pitfalls in ensemble models through extensive experiments with several popular ensemble models on two real-world datasets, and then discusses how to design more effective ensemble models for detecting and diagnosing Intermediate-Severity (IS) faults.
Abstract: Intermediate-Severity (IS) faults present milder symptoms compared to severe faults, and are more difficult to detect and diagnose due to their close resemblance to normal operating conditions. The lack of IS fault examples in the training data can pose severe risks to Fault Detection and Diagnosis (FDD) methods that are built upon Machine Learning (ML) techniques, because these faults can be easily mistaken as normal operating conditions. Ensemble models are widely applied in ML and are considered promising methods for detecting out-of-distribution (OOD) data. We identify common pitfalls in these models through extensive experiments with several popular ensemble models on two real-world datasets. Then, we discuss how to design more effective ensemble models for detecting and diagnosing IS faults.
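As a rough illustration of ensemble disagreement as an out-of-distribution signal (not the models or datasets studied in the paper), the sketch below trains a small heterogeneous ensemble on synthetic data and flags inputs on which the members disagree.

```python
# A minimal sketch: use disagreement among a small, heterogeneous ensemble as a
# crude indicator of out-of-distribution inputs such as IS faults.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1500, n_features=10, random_state=1)
members = [LogisticRegression(max_iter=1000),
           DecisionTreeClassifier(max_depth=5),
           KNeighborsClassifier(n_neighbors=7)]
for m in members:
    m.fit(X, y)

# Toy "shifted" data standing in for unseen operating conditions.
X_new = X + np.random.default_rng(1).normal(scale=0.5, size=X.shape)

votes = np.stack([m.predict(X_new) for m in members])   # (n_members, n_samples)
majority = np.round(votes.mean(axis=0))                  # binary majority vote
disagreement = (votes != majority).mean(axis=0)          # fraction of dissenters

# Samples with high disagreement are candidates for IS faults / OOD inputs.
flagged = np.where(disagreement >= 1 / 3)[0]
print(f"{flagged.size} samples have at least one dissenting ensemble member")
```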

3 citations

Journal ArticleDOI
TL;DR: This roundtable examines the issues facing ESL design and attempts to provide a definitive picture of where ESL design is today and where it might be in the next five to ten years.
Abstract: This is the first of two roundtables on electronic system-level design in this issue of IEEE Design & Test. ESL design and tools have been present in the design landscape for many years. Significant ESL innovations are now part of most advanced design methodologies, spanning the domains of modeling, simulation, and synthesis. Techniques such as transaction-level modeling, automatic interconnection generation, behavioral synthesis, automatic instruction-set customization, retargetable compilers, and many others are currently used in the design of multimillion-gate chips. Yet, ESL design still seems to struggle to live up to the promise of providing increased productivity and design quality. This roundtable examines these issues and attempts to provide a definitive picture of where ESL design is today and where it might be in the next five to 10 years. The participants in this roundtable include well-known experts in ESL design from the user side, universities, and tool providers. IEEE Design & Test thanks the roundtable participants: moderator Reinaldo Bergamaschi (CadComponents), Luca Benini (University of Bologna), Krisztian Flautner (ARM UK), Wido Kruijtzer (NXP Semiconductors), Alberto Sangiovanni-Vincentelli (University of California, Berkeley), and Kazutoshi Wakabayashi (NEC Japan). D&T gratefully acknowledges the help of Roundtables Editor Bill Joyner (Semiconductor Research Corp.), who organized the event.

3 citations

Proceedings ArticleDOI
30 May 2001
TL;DR: This paper presents an application of scheduling theory to reactive real-time transactions (task groups) that implement a formal model of this kind, used in the context of the POLIS toolset: a network of extended finite state machines communicating asynchronously.
Abstract: The development of control-dominated embedded systems can be largely automated by making use of formal models of computation. In some of these models functional objects are not independently activated, triggered by time or external events, as in conventional real-time scheduling models, but each communication between any two functional objects carries an activation signal from the sender to the receiver. This paper presents an application of scheduling theory to reactive real-time transactions (task groups) implementing a formal model of this kind, used in the context of the POLIS toolset: a network of extended finite state machines communicating asynchronously. Task instances are activated in response to internal and/or external events and the objective of the scheduling problem is to avoid the loss of events exchanged by the tasks and to minimize the number of task instances activated in response to external events. The paper presents a schedulability analysis, two priority assignment algorithms, and an experimental part with a dashboard controller example.
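The flavour of a schedulability test can be conveyed with classical fixed-priority response-time analysis. The sketch below is a textbook analysis for periodic tasks with illustrative numbers, not the paper's analysis for POLIS event-triggered transactions, which additionally accounts for event loss.

```python
# A minimal sketch of classical fixed-priority response-time analysis for
# periodic tasks (rate-monotonic priorities); task parameters are illustrative.
from math import ceil

# Each task: (worst-case execution time C, period T); shorter period = higher priority.
tasks = [(1, 4), (2, 6), (3, 12)]
tasks.sort(key=lambda t: t[1])

def response_time(i, tasks):
    """Iterate R = C_i + sum over higher-priority tasks j of ceil(R / T_j) * C_j."""
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        interference = sum(ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R            # fixed point reached: worst-case response time
        if R_next > T_i:
            return None         # deadline (= period) exceeded: unschedulable
        R = R_next

for i, (C, T) in enumerate(tasks):
    R = response_time(i, tasks)
    status = f"R={R} <= T={T}" if R is not None else f"misses its deadline (T={T})"
    print(f"task {i}: C={C}, {status}")
```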

3 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; gradient-based learning is used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
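A LeNet-style convolutional network of the kind this paper popularized can be sketched in a few lines. The PyTorch model below assumes 28x28 grayscale digit images and uses illustrative layer sizes rather than the exact LeNet-5 architecture from the paper.

```python
# A minimal LeNet-style CNN sketch (PyTorch); layer sizes are illustrative.
import torch
import torch.nn as nn

class SmallLeNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),   # 1x28x28 -> 6x28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 6x14x14
            nn.Conv2d(6, 16, kernel_size=5),             # -> 16x10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 16x5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

logits = SmallLeNet()(torch.randn(8, 1, 28, 28))   # batch of 8 dummy images
print(logits.shape)                                # torch.Size([8, 10])
```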

42,067 citations

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
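This is Storn and Price's differential evolution. The NumPy sketch below implements the common DE/rand/1/bin variant with illustrative control parameters (F, CR, population size) and a toy sphere objective; it is a sketch of the general scheme, not the paper's exact experimental setup.

```python
# A compact differential evolution (DE/rand/1/bin) sketch; parameters are illustrative.
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: difference of two random members added to a third, all distinct from i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover, forcing at least one coordinate from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial vector if it is no worse.
            trial_cost = f(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost
    best = np.argmin(cost)
    return pop[best], cost[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = differential_evolution(sphere, bounds=[(-5, 5)] * 5)
print(x_best, f_best)
```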

24,053 citations

Journal ArticleDOI
01 Apr 1988 - Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood; subsequently, very little is known, especially about mud-dominated calciclastic submarine fan systems. Presented in this study are a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones. These

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
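This is the classic paper on reduced, ordered binary decision diagrams (BDDs). The sketch below is a minimal hash-consed BDD with an apply operator, intended only to illustrate the data structure and its graph-size-proportional manipulation, not to be an efficient implementation.

```python
# A minimal reduced, ordered BDD sketch: nodes are hash-consed (var, low, high)
# triples under a fixed variable order; apply() combines two BDDs with a Boolean
# operator by recursive descent with memoisation.
class BDD:
    def __init__(self, num_vars):
        self.num_vars = num_vars
        # Terminals 0 and 1 carry a sentinel variable index beyond all real variables.
        self.node = {0: (num_vars, None, None), 1: (num_vars, None, None)}
        self.unique = {}                      # (var, low, high) -> node id

    def mk(self, var, low, high):
        if low == high:                       # redundant test: skip the node
            return low
        key = (var, low, high)
        if key not in self.unique:            # hash-consing keeps the graph reduced
            self.unique[key] = len(self.node)
            self.node[self.unique[key]] = key
        return self.unique[key]

    def var(self, i):                         # BDD for the single variable x_i
        return self.mk(i, 0, 1)

    def apply(self, op, u, v, memo=None):
        memo = {} if memo is None else memo
        if (u, v) in memo:
            return memo[(u, v)]
        if u in (0, 1) and v in (0, 1):
            r = int(op(bool(u), bool(v)))
        else:
            var = min(self.node[u][0], self.node[v][0])
            u0, u1 = (self.node[u][1], self.node[u][2]) if self.node[u][0] == var else (u, u)
            v0, v1 = (self.node[v][1], self.node[v][2]) if self.node[v][0] == var else (v, v)
            r = self.mk(var, self.apply(op, u0, v0, memo), self.apply(op, u1, v1, memo))
        memo[(u, v)] = r
        return r

# Build f = (x0 AND x1) OR x2 and check it is not the constant-0 BDD.
bdd = BDD(3)
x0x1 = bdd.apply(lambda a, b: a and b, bdd.var(0), bdd.var(1))
f = bdd.apply(lambda a, b: a or b, x0x1, bdd.var(2))
print(f != 0)   # True: f is satisfiable
```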

9,021 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
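One of the simplest properties the book treats is an invariant. The sketch below illustrates explicit-state invariant checking by breadth-first reachability with counterexample reconstruction on a toy transition system; it only conveys the flavour of the automated checks the book covers, not its LTL/CTL algorithms.

```python
# A tiny explicit-state sketch of invariant checking: breadth-first search over
# the reachable states, returning a counterexample path if the invariant fails.
from collections import deque

def check_invariant(initial, successors, invariant):
    """Return None if the invariant holds on all reachable states, else a counterexample path."""
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        s = queue.popleft()
        if not invariant(s):
            path = []                          # walk parents back to an initial state
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

# Toy system: a counter modulo 8; the invariant "counter != 5" is violated.
cex = check_invariant(initial=[0],
                      successors=lambda s: [(s + 1) % 8],
                      invariant=lambda s: s != 5)
print(cex)   # [0, 1, 2, 3, 4, 5]
```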

4,905 citations