Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher at the University of California, Berkeley. His research spans topics including logic synthesis and finite-state machines. He has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. His previous affiliations include the National University of Singapore and Lawrence Berkeley National Laboratory.


Papers
Journal ArticleDOI
TL;DR: Algorithms are presented for Boolean decomposition, which can be used to decompose a programmable logic array into a set of smaller interconnected PLAs such that the overall area of the resulting logic network is minimized.
Abstract: Algorithms are presented for Boolean decomposition, which can be used to decompose a programmable logic array (PLA) into a set of smaller interconnected PLAs such that the overall area of the resulting logic network, deemed to be the sum of the areas of the constituent PLAs, is minimized. These algorithms can also be used to identify good Boolean factors which can be used as strong divisors during logic optimization to reduce the literal counts/area of general multilevel logic networks. Excellent results have been obtained.

41 citations
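The factoring idea behind the decomposition can be illustrated with a toy sketch: extracting a common cube from a sum-of-products cover to reduce its literal count. This is plain algebraic division, much weaker than the paper's Boolean decomposition, and the cover `f` below is an invented example:

```python
from functools import reduce

def common_cube(cubes):
    """Largest cube (set of literals) shared by every cube in the cover."""
    return frozenset(reduce(set.intersection, (set(c) for c in cubes)))

def factor_out(cubes, divisor):
    """Rewrite cover as divisor * quotient by stripping the divisor's
    literals from each cube."""
    return [frozenset(c - divisor) for c in cubes]

def literal_count(cubes):
    return sum(len(c) for c in cubes)

# f = abc + abd + abe, with each cube a set of literals
f = [frozenset("abc"), frozenset("abd"), frozenset("abe")]
d = common_cube(f)                  # the shared cube ab
q = factor_out(f, d)                # quotient c + d + e
before = literal_count(f)           # 9 literals in the flat cover
after = len(d) + literal_count(q)   # 5 literals as ab * (c + d + e)
```

Factoring shrinks the cover from 9 literals to 5; the paper's Boolean (rather than algebraic) divisors can do strictly better because they exploit don't-cares.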

Proceedings ArticleDOI
11 Apr 2016
TL;DR: In this article, the authors address the problem of diagnosing and repairing specifications for hybrid systems, formalized in signal temporal logic (STL), using model predictive control (MPC).
Abstract: We address the problem of diagnosing and repairing specifications for hybrid systems, formalized in signal temporal logic (STL). Our focus is on automatic synthesis of controllers from specifications using model predictive control. We build on recent approaches that reduce the controller synthesis problem to solving one or more mixed integer linear programs (MILPs), where infeasibility of an MILP usually indicates unrealizability of the controller synthesis problem. Given an infeasible STL synthesis problem, we present algorithms that provide feedback on the reasons for unrealizability, and suggestions for making it realizable. Our algorithms are sound and complete relative to the synthesis algorithm, i.e., they provide a diagnosis that makes the synthesis problem infeasible, and always terminate with a non-trivial specification that is feasible using the chosen synthesis method, when such a solution exists. We demonstrate the effectiveness of our approach on controller synthesis for various cyber-physical systems, including an autonomous driving application and an aircraft electric power system.

41 citations
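The diagnosis-and-repair loop can be sketched on a drastically simplified stand-in: requirements reduced to interval bounds on a single scalar value, where infeasibility is an empty intersection and repair minimally widens the conflicting pair. The real method operates on MILP encodings of STL; everything below is an illustrative assumption:

```python
def diagnose_and_repair(bounds):
    """bounds: list of (lo, hi) requirements on one scalar signal value.
    Toy stand-in for MILP-infeasibility diagnosis: the spec is infeasible
    iff the interval intersection is empty; repair splits the gap between
    the two conflicting constraints."""
    lo = max(b[0] for b in bounds)
    hi = min(b[1] for b in bounds)
    if lo <= hi:
        return True, None, bounds
    # diagnose: the constraints realizing the max lower / min upper bound
    i = max(range(len(bounds)), key=lambda k: bounds[k][0])
    j = min(range(len(bounds)), key=lambda k: bounds[k][1])
    slack = (lo - hi) / 2.0
    repaired = list(bounds)
    repaired[i] = (bounds[i][0] - slack, bounds[i][1])
    repaired[j] = (bounds[j][0], bounds[j][1] + slack)
    return False, (i, j), repaired

feasible, conflict, fixed = diagnose_and_repair([(0.0, 1.0), (2.0, 3.0)])
```

Here the two requirements conflict, the pair is reported as the diagnosis, and the repaired bounds are feasible by construction, echoing the soundness/relative-completeness claim in the abstract.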

Proceedings ArticleDOI
11 Nov 1990
TL;DR: A general methodology for the design of the interconnections of analog circuits to meet high-level constraints on performance is described, and sensitivities of performance to parasitics are computed, and a set of bounding constraints for Parasitics is determined.
Abstract: A general methodology for the design of the interconnections of analog circuits to meet high-level constraints on performance is described. In this approach, sensitivities of performance to parasitics are computed, and a set of bounding constraints for parasitics is determined. Sensitivities are then used to generate the weights for a cost function-driven analog area router. After the routing is completed, the actual values of critical parasitics are used to check if the user-defined constraints on circuit performance are met. If the requirements have not been satisfied, the bounding constraints generated on the parasitics are used to increase the weights associated with the parasitics which violated the constraints, and the circuit is rerouted. Results validating the effectiveness of this approach for layout-design automation of analog circuits are reported.

41 citations
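A toy sketch of the reweighting loop: bounds on parasitics are derived from performance sensitivities and a degradation budget, and nets whose extracted parasitics violate their bounds get their routing-cost weights increased before rerouting. The budget-splitting rule and all the numbers are illustrative assumptions, not the paper's exact formulation:

```python
def parasitic_bounds(sensitivities, budget):
    """Split a total performance-degradation budget equally across nets:
    net i may contribute budget/n, so its parasitic bound is
    (budget/n)/|s_i|.  An assumed allocation rule for illustration."""
    n = len(sensitivities)
    return [(budget / n) / abs(s) for s in sensitivities]

def update_weights(weights, parasitics, bounds, factor=2.0):
    """After routing, bump the cost weight of every net whose extracted
    parasitic exceeds its bound; the router is then rerun with these."""
    return [w * factor if p > b else w
            for w, p, b in zip(weights, parasitics, bounds)]

s = [0.5, 2.0, 0.1]                        # dPerf/dC for three critical nets
bounds = parasitic_bounds(s, budget=0.6)   # tighter bounds on sensitive nets
w = update_weights([1.0, 1.0, 1.0], parasitics=[0.3, 0.25, 1.0], bounds=bounds)
```

The most sensitive net gets the tightest bound; in this invented run only the second net violates its bound, so only its weight is doubled for the reroute.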

Journal ArticleDOI
TL;DR: A new platform-based methodology can revolutionize the way a car is designed and help to provide entertainment and communication, and to ensure safety.
Abstract: Electronic components are now essential to control a car's movements and chemical, mechanical, and electrical processes; to provide entertainment and communication; and to ensure safety. A new platform-based methodology can revolutionize the way a car is designed.

41 citations

Journal ArticleDOI
TL;DR: An adaptive optimal duty-cycle algorithm running on top of the IEEE 802.15.4 medium access control to minimize power consumption while meeting the reliability and delay requirements and a simple analytical model provides insights into the performance metrics, including the reliability, average delay, and average power consumption of theduty-cycle protocol.
Abstract: Most applications of wireless sensor networks require reliable and timely data communication with maximum possible network lifetime under low traffic regime. These requirements are very critical especially for the stability of wireless sensor and actuator networks. Designing a protocol that satisfies these requirements in a network consisting of sensor nodes with traffic pattern and location varying over time and space is a challenging task. We propose an adaptive optimal duty-cycle algorithm running on top of the IEEE 802.15.4 medium access control to minimize power consumption while meeting the reliability and delay requirements. Such a problem is complicated because simple and accurate models of the effects of the duty cycle on reliability, delay, and power consumption are not available. Moreover, the scarce computational resources of the devices and the lack of prior information about the topology make it impossible to compute the optimal parameters of the protocols. Based on an experimental implementation, we propose simple experimental models to expose the dependency of reliability, delay, and power consumption on the duty cycle at the node and validate them through extensive experiments. The coefficients of the experimental-based models can be easily computed on existing IEEE 802.15.4 hardware platforms by introducing a learning phase without any explicit information about data traffic, network topology, and medium access control parameters. The experimental-based model is then used to derive a distributed adaptive algorithm for minimizing the power consumption while meeting the reliability and delay requirements in the packet transmission. The algorithm is easily implementable on top of the IEEE 802.15.4 medium access control without any modifications of the protocol. An experimental implementation of the distributed adaptive algorithm on a test bed with off-the-shelf wireless sensor devices is presented.
The experimental performance of the algorithms is compared to the existing solutions from the literature. The experimental results show that the experimental-based model is accurate and that the proposed adaptive algorithm attains the optimal value of the duty cycle, maximizing the lifetime of the network while meeting the reliability and delay constraints under both stationary and transient conditions. Specifically, even if the number of devices and their traffic configuration change sharply, the proposed adaptive algorithm allows the network to operate close to its optimal value. Furthermore, for Poisson arrivals, the duty-cycle protocol is modeled as a finite capacity queuing system in a star network. This simple analytical model provides insights into the performance metrics, including the reliability, average delay, and average power consumption of the duty-cycle protocol.

41 citations
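The optimization at the heart of the algorithm can be sketched with assumed closed-form models (the paper instead fits its models experimentally on the nodes): pick the smallest duty cycle, and hence the lowest power if power grows with the duty cycle, that still meets the reliability and delay targets.

```python
import math

def optimal_duty_cycle(k, c_delay, r_min, d_max, grid=1000):
    """Grid-search the smallest feasible duty cycle.  The model forms are
    illustrative assumptions, not the paper's fitted models:
        reliability(d) = 1 - exp(-k d),   delay(d) = c_delay / d."""
    for i in range(1, grid + 1):
        d = i / grid
        reliability = 1.0 - math.exp(-k * d)
        delay = c_delay / d
        if reliability >= r_min and delay <= d_max:
            return d          # first hit is smallest, scanning upward
    return None               # targets unreachable even at 100% duty cycle

d = optimal_duty_cycle(k=10.0, c_delay=0.05, r_min=0.9, d_max=0.5)
```

With these invented coefficients the reliability target is the binding constraint, so the optimum sits just above the point where 1 - exp(-10d) crosses 0.9.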


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.

42,067 citations
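The core operation of the convolutional networks discussed above is a small sliding-window sum. A minimal "valid" 2-D cross-correlation in pure Python, with an invented edge-detection kernel as the example:

```python
def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation, the basic operation of a
    convolutional layer (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(ow)]
            for i in range(oh)]

# A vertical-edge detector applied to a 4x4 step image
img = [[0, 0, 1, 1]] * 4
k = [[-1, 1], [-1, 1]]     # responds where intensity jumps left-to-right
out = conv2d(img, k)       # nonzero only at the 0->1 edge column
```

Translation invariance falls out directly: the same kernel weights are reused at every window position, which is what lets the networks in the paper handle 2D shape variability with few parameters.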

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.

24,053 citations
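The heuristic summarized above is differential evolution. A minimal DE/rand/1/bin sketch, with illustrative default control parameters and a sphere function as the stand-in objective:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin: for each target vector build a mutant
    a + F*(b - c) from three distinct other members, apply binomial
    crossover, and keep the trial if it scores no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([k for k in range(np_) if k != i], 3)
            jr = rng.randrange(dim)          # force at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jr) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            ft = f(trial)
            if ft <= cost[i]:                # greedy one-to-one selection
                pop[i], cost[i] = trial, ft
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
x, fx = differential_evolution(sphere, bounds=[(-5.0, 5.0)] * 3)
```

The three control variables the abstract mentions are exactly the population size, the differential weight F, and the crossover rate CR; the per-member loop is also trivially parallelizable, matching the parallel-computation claim.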

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. This study presents a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density turbidite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.

9,021 citations
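A compact sketch of the data structure: hash-consed decision nodes under a fixed variable order, combined with a memoized apply operation based on Shannon expansion. This mirrors the paper's approach in miniature; the tuple node representation and alphabetical variable order are simplifying assumptions:

```python
from functools import lru_cache

ZERO, ONE = "0", "1"              # terminal nodes
_unique = {}                      # hash-consing table for internal nodes

def node(var_, low, high):
    """Internal node (var, low, high); redundant tests are eliminated and
    identical subgraphs shared, giving the 'reduced' in ROBDD."""
    if low is high:
        return low
    return _unique.setdefault((var_, low, high), (var_, low, high))

def var(v):
    return node(v, ZERO, ONE)

@lru_cache(maxsize=None)
def apply_op(op, f, g):
    """Combine two BDDs with a Boolean operator by Shannon expansion on
    the smaller top variable; memoization bounds the work by the number
    of distinct node pairs, as in the paper's apply algorithm."""
    if f in (ZERO, ONE) and g in (ZERO, ONE):
        return ONE if op(f == ONE, g == ONE) else ZERO
    fv = f[0] if f not in (ZERO, ONE) else None
    gv = g[0] if g not in (ZERO, ONE) else None
    v = min(x for x in (fv, gv) if x is not None)   # fixed variable order
    f0, f1 = (f[1], f[2]) if fv == v else (f, f)
    g0, g1 = (g[1], g[2]) if gv == v else (g, g)
    return node(v, apply_op(op, f0, g0), apply_op(op, f1, g1))

AND = lambda p, q: p and q
OR = lambda p, q: p or q

a, b = var("a"), var("b")
f = apply_op(OR, apply_op(AND, a, b), apply_op(AND, a, b))  # (ab) | (ab)
g = apply_op(AND, a, b)
# canonicity: equivalent functions reduce to the identical shared node
```

Canonicity is what makes the structure useful for verification: checking equivalence of two functions is a pointer comparison, and simplification happens automatically through the unique table.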

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.

4,905 citations
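The "systematically (and automatically) checks" step can be sketched, for the simplest property class, as explicit-state reachability: breadth-first search over the transition system, reporting a shortest path to any state violating a safety invariant. The mod-8 counter below is an invented example with a deliberately failing specification:

```python
from collections import deque

def check_invariant(init, next_states, invariant):
    """Explicit-state model checking of a safety property: BFS over the
    reachable states, returning a counterexample path to the first state
    violating the invariant, or None if it holds everywhere."""
    frontier = deque([(s, (s,)) for s in init])
    seen = set(init)
    while frontier:
        state, path = frontier.popleft()
        if not invariant(state):
            return path                  # BFS gives a shortest counterexample
        for nxt in next_states(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + (nxt,)))
    return None

# A mod-8 counter checked against the (violated) invariant "never reach 6"
trace = check_invariant(init=[0],
                        next_states=lambda s: [(s + 1) % 8],
                        invariant=lambda s: s != 6)
```

The returned trace is exactly the debugging artifact the abstract highlights: a concrete execution demonstrating why the property fails. LTL and CTL checking, the book's main subject, generalize this search to automata- and fixpoint-based algorithms.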