Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. The author has contributed to research in topics: Logic synthesis & Finite-state machine. The author has an h-index of 99 and has co-authored 934 publications receiving 45,201 citations. Previous affiliations of Alberto Sangiovanni-Vincentelli include National University of Singapore & Lawrence Berkeley National Laboratory.


Papers
Posted Content
TL;DR: In this article, a machine learning approach to the solution of chance-constrained optimizations in the context of voltage regulation problems in power system operation is presented; the novelty of the approach resides in approximating the feasible region of uncertainty with an ellipsoid.
Abstract: We present a machine learning approach to the solution of chance-constrained optimizations in the context of voltage regulation problems in power system operation. The novelty of our approach resides in approximating the feasible region of uncertainty with an ellipsoid. We formulate this problem using a learning model similar to Support Vector Machines (SVM) and propose a sampling algorithm that efficiently trains the model. We demonstrate our approach on a voltage regulation problem using standard IEEE distribution test feeders.
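To make the ellipsoidal approximation concrete, here is a minimal Python sketch, not the paper's method: given uncertainty samples observed to be feasible, it fits an ellipsoid {x : (x - c)^T A (x - c) <= 1} around them and exposes a membership test. The mean/covariance fit and the coverage parameter are simplifying assumptions of ours; the paper instead trains an SVM-like model with a dedicated sampling algorithm.

```python
import numpy as np

def fit_ellipsoid(samples, coverage=0.95):
    """Fit an ellipsoid {x : (x - c)^T A (x - c) <= 1} around the feasible
    samples, scaled so `coverage` of them fall inside. A crude stand-in for
    the paper's SVM-like training, used here only for illustration."""
    c = samples.mean(axis=0)
    A = np.linalg.inv(np.cov(samples, rowvar=False))
    # Squared Mahalanobis distance of each sample to the center.
    d = np.einsum('ij,jk,ik->i', samples - c, A, samples - c)
    # Rescale A so the desired fraction of samples satisfies d <= 1.
    return c, A / np.quantile(d, coverage)

def inside(x, c, A):
    """Membership test: is x inside the fitted ellipsoid?"""
    diff = x - c
    return diff @ A @ diff <= 1.0

# Toy usage: 2-D uncertainty samples deemed feasible for the constraint.
rng = np.random.default_rng(0)
feasible = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.3], [0.0, 0.5]])
c, A = fit_ellipsoid(feasible)
print(inside(np.zeros(2), c, A))   # the center should be inside -> True
```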

1 citation

Proceedings ArticleDOI
08 Dec 2008
TL;DR: Two solutions of the maximization problem with the simplified outage probability constraint are proposed: one solves the problem using mixed integer-real programming; the other relaxes the constraint that rates be integers, yielding a standard convex programming optimization that can be solved much faster.
Abstract: The problem of maximizing the sum of the transmit rates while limiting the outage probability below an appropriate threshold is investigated for networks where the nodes have limited processing capabilities. We focus on CDMA wireless networks whose rates are characterized under mixed Rayleigh-lognormal fading. The outage probability is given implicitly by a complex function, so solving the optimization problem requires substantial computing. In this paper, we propose a novel explicit approximation of this function that allows the problem to be solved in an affordable manner. We propose two solutions of the maximization problem with the simplified outage probability constraint: one solves the problem using mixed integer-real programming; the other relaxes the constraint that rates be integers, yielding a standard convex programming optimization that can be solved much faster. Numerical results show that our approaches perform well for average values of the outage requirements.
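As an illustration of the relaxed variant only, the sketch below maximizes a sum of rates under a single explicit constraint, with integrality dropped and the solution floored afterwards (cf. the mixed integer-real variant). The surrogate constraint, a budget on the convex load term sum_i (2^r_i - 1), and all parameter values are hypothetical stand-ins; the paper's explicit approximation of the outage probability under mixed Rayleigh-lognormal fading is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

N = 4              # number of links (toy value)
BUDGET = 20.0      # hypothetical stand-in for the outage-derived budget

def neg_sum_rate(r):
    return -np.sum(r)                      # maximize sum rate

def outage_surrogate(r):
    # Inequality g(r) >= 0  <=>  sum_i (2^r_i - 1) <= BUDGET (convex in r);
    # this replaces the paper's explicit outage-probability constraint.
    return BUDGET - np.sum(2.0 ** r - 1.0)

res = minimize(neg_sum_rate, x0=np.ones(N), method='SLSQP',
               bounds=[(0.0, 10.0)] * N,
               constraints=[{'type': 'ineq', 'fun': outage_surrogate}])

relaxed = res.x               # real-valued rates from the convex relaxation
rounded = np.floor(relaxed)   # a feasible integer-rate solution
print(relaxed, rounded)
```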

1 citation

Proceedings ArticleDOI
10 Jan 1999
TL;DR: A scheme to simplify a multi-valued network using redundancy removal techniques, which shows a 10-20% reduction in the size of the multi-valued description.
Abstract: We introduce a scheme to simplify a multi-valued network using redundancy removal techniques. Recent methods for binary redundancy removal avoid the use of state traversal; additionally, one method finds multiple compatible redundancies simultaneously. We extend these powerful advances in the field of binary redundancy removal to perform redundancy removal for multi-valued networks. First we perform a one-hot encoding of all the multi-valued variables of the design: multi-valued variables are written out as binary variables using this one-hot encoding. At the end of this step, we have a binary network which is equivalent to the multi-valued network modulo the encoding. Next, binary redundancy removal is invoked on the resulting network. If a binary signal s_i is determined to be stuck-at-0 redundant, this means that the multi-valued signal s can never take on the value i. Further, if the binary signal s_i is determined to be stuck-at-1 redundant, this means that the multi-valued signal s is constant at the value i. All redundant binary signals are recorded in a file. The original multi-valued network is modified based on the binary redundancies thus computed. Initial experiments using this technique show a 10-20% reduction in the size of the multi-valued description.
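The encode-and-interpret bookkeeping around the binary redundancy-removal engine can be sketched as follows. The engine itself and the network data structures are assumed to exist elsewhere, and the signal naming scheme is our own invention for illustration.

```python
def one_hot_encode(var, num_values):
    """Name the one-hot binary signals s_0 .. s_{k-1} for a k-valued variable."""
    return [f"{var}_{i}" for i in range(num_values)]

def interpret_redundancies(var, num_values, redundancies):
    """Map binary stuck-at redundancies back to multi-valued facts.
    `redundancies` maps a binary signal name to 'sa0' or 'sa1', as an
    external binary redundancy-removal pass might report them."""
    facts = []
    for i in range(num_values):
        fault = redundancies.get(f"{var}_{i}")
        if fault == "sa0":     # s_i stuck-at-0 redundant:
            facts.append(f"{var} never takes value {i}")
        elif fault == "sa1":   # s_i stuck-at-1 redundant:
            facts.append(f"{var} is constant at value {i}")
    return facts

# Toy usage for a 3-valued signal s: suppose the binary pass reported
# s_2 as stuck-at-0 redundant.
print(one_hot_encode("s", 3))                       # ['s_0', 's_1', 's_2']
print(interpret_redundancies("s", 3, {"s_2": "sa0"}))
```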

1 citation

Proceedings ArticleDOI
TL;DR: In this paper, the authors propose SPEC, a specification-centric simulation metric, which overcomes these limitations by computing the distance using only the trajectories that violate the specification for the system, and show that modifying a reach-avoid specification with SPEC makes it possible to synthesize a safe controller for a larger set of environments than SSM does.
Abstract: We consider the problem of extracting safe environments and controllers for reach-avoid objectives for systems with known state and control spaces, but unknown dynamics. In a given environment, a common approach is to synthesize a controller from an abstraction or a model of the system (potentially learned from data). However, in many situations, the relationship between the dynamics of the model and the actual system is not known, and hence it is difficult to provide safety guarantees for the system. In such cases, the Standard Simulation Metric (SSM), defined as the worst-case norm distance between the model and the system output trajectories, can be used to modify a reach-avoid specification for the system into a more stringent specification for the abstraction. Nevertheless, the obtained distance, and hence the modified specification, can be quite conservative. This limits the set of environments for which a safe controller can be obtained. We propose SPEC, a specification-centric simulation metric, which overcomes these limitations by computing the distance using only the trajectories that violate the specification for the system. We show that modifying a reach-avoid specification with SPEC allows us to synthesize a safe controller for a larger set of environments compared to SSM. We also propose a probabilistic method to compute SPEC for a general class of systems. Case studies using simulators for quadrotors and autonomous cars illustrate the advantages of the proposed metric for determining safe environment sets and controllers.
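The contrast between the two metrics is easy to state in code. In the sketch below, SSM takes the worst-case sup-norm distance over all model/system trajectory pairs, while the SPEC-style metric restricts that maximum to pairs whose system trajectory violates the specification. The trajectory data and the reach-avoid `violates` predicate are toy stand-ins of ours, and the paper's probabilistic method for computing SPEC is not reproduced.

```python
import numpy as np

def ssm(model_trajs, system_trajs):
    """Standard Simulation Metric: worst-case distance over ALL trajectory
    pairs (model vs. system under the same inputs)."""
    return max(np.max(np.linalg.norm(m - s, axis=1))
               for m, s in zip(model_trajs, system_trajs))

def spec_metric(model_trajs, system_trajs, violates):
    """SPEC-style metric: the same distance, but taken only over pairs
    whose SYSTEM trajectory violates the specification."""
    dists = [np.max(np.linalg.norm(m - s, axis=1))
             for m, s in zip(model_trajs, system_trajs)
             if violates(s)]
    return max(dists) if dists else 0.0

# Toy reach-avoid violation: entering a square obstacle (hypothetical).
def violates(traj, lo=np.array([4.0, 4.0]), hi=np.array([6.0, 6.0])):
    return bool(np.any(np.all((traj >= lo) & (traj <= hi), axis=1)))

rng = np.random.default_rng(1)
model = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(20)]
system = [m + rng.normal(scale=0.2, size=m.shape) for m in model]
print(ssm(model, system), spec_metric(model, system, violates))
# SPEC <= SSM by construction, so the tightened spec is less conservative.
```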

1 citation

Proceedings ArticleDOI
06 Jun 1994
TL;DR: An algorithm based on a purely geometric approach that generates feasible configurations very efficiently is presented, making full conformational analysis possible even for fairly large cyclic structures.
Abstract: Conformational analysis is the problem of finding all minimal-energy three-dimensional configurations of molecules. Cyclic structures are of particular interest. An algorithm based on a purely geometric approach that generates feasible configurations very efficiently is presented, making full conformational analysis possible even for fairly large cyclic structures.
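For intuition only, the toy rejection sampler below illustrates the ring-closure constraint that makes cyclic structures hard: a chain of unit-length bonds is grown with random directions, and a conformation is kept only if the free end returns near the start. The paper's purely geometric algorithm is far more efficient and is not reproduced; the unit bond lengths, the closure tolerance, and the omission of bond-angle constraints are simplifications of our own.

```python
import numpy as np

def sample_ring_conformations(n_bonds=6, n_trials=50_000, tol=0.15, seed=0):
    """Brute-force rejection sampling of closed rings; an illustration of
    the closure constraint, not the paper's geometric algorithm."""
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_trials):
        # Random unit bond vectors (no bond-angle constraints in this toy).
        v = rng.normal(size=(n_bonds, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        atoms = np.vstack([np.zeros(3), np.cumsum(v, axis=0)])
        if np.linalg.norm(atoms[-1] - atoms[0]) < tol:  # the ring closes
            kept.append(atoms)
    return kept

rings = sample_ring_conformations()
print(f"{len(rings)} closed 6-membered rings found")
```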

1 citation


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, gradient-based learning with convolutional neural networks is reviewed for handwritten character recognition, and graph transformer networks (GTNs) are proposed so that multi-module recognition systems can be trained globally with gradient-based methods.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
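A minimal LeNet-style convolutional network for 28x28 grayscale digits, written here in modern PyTorch as an illustration. The layer sizes follow the familiar LeNet-5 outline, but the paper's actual architecture differs in details (subsampling layers, partial connection tables), and the GTN machinery is not shown.

```python
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    """LeNet-style CNN sketch: two conv/pool feature stages followed by
    three fully connected layers, as in the classic digit recognizers."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.Tanh(),
            nn.AvgPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LeNetStyle()
logits = model(torch.randn(8, 1, 28, 28))  # a batch of 8 dummy digits
print(logits.shape)                        # torch.Size([8, 10])
```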

42,067 citations

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented, which requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
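The method is differential evolution, and its core loop is short enough to sketch in full: mutate with a scaled difference of randomly chosen population vectors, cross over, and keep the trial point only if it is no worse. This is a bare-bones DE/rand/1/bin sketch with common default parameters, not the paper's reference code; for real use, scipy.optimize.differential_evolution is a production implementation.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Bare-bones DE/rand/1/bin: only the population size, differential
    weight F, and crossover rate CR need tuning."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    cost = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Three distinct population members other than the target.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with one guaranteed mutant coordinate.
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True
            trial = np.where(mask, mutant, pop[i])
            if (tc := f(trial)) <= cost[i]:     # greedy selection
                pop[i], cost[i] = trial, tc
    best = np.argmin(cost)
    return pop[best], cost[best]

# Usage: minimize the 2-D Rosenbrock function (optimum at [1, 1]).
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(differential_evolution(rosen, [(-5, 5), (-5, 5)]))
```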

24,053 citations

Journal ArticleDOI
01 Apr 1988-Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known about mud-dominated calciclastic submarine fan systems in particular. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated, bioclast-rich facies; 2) low-density densite mudstones, characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
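The essentials of the approach, a fixed variable order, hash-consed nodes so reduced graphs are canonical, and an apply operator based on Shannon expansion, fit in a short sketch. This is an illustration in Python, not the paper's package: complement edges, garbage collection, and variable reordering are all omitted.

```python
import operator

class BDD:
    """Minimal reduced, ordered BDD with hash-consing and a memoized
    apply; terminals are node ids 0 and 1."""
    def __init__(self, num_vars):
        self.num_vars = num_vars
        self.ZERO, self.ONE = 0, 1
        # node id -> (var, low, high); terminals carry var = num_vars
        self.nodes = {0: (num_vars, None, None), 1: (num_vars, None, None)}
        self.unique = {}   # (var, low, high) -> node id (hash-consing)
        self.memo = {}     # (op, u, v) -> result id

    def mk(self, var, low, high):
        if low == high:                  # eliminate redundant tests
            return low
        key = (var, low, high)
        if key not in self.unique:       # share isomorphic subgraphs
            node_id = len(self.nodes)
            self.unique[key] = node_id
            self.nodes[node_id] = key
        return self.unique[key]

    def var(self, i):
        return self.mk(i, self.ZERO, self.ONE)

    def apply(self, op, u, v):
        """Combine two BDDs with a binary boolean op via Shannon expansion;
        memoization keeps the cost roughly proportional to the product of
        the argument graph sizes."""
        if u in (0, 1) and v in (0, 1):
            return int(op(bool(u), bool(v)))
        if (op, u, v) in self.memo:
            return self.memo[(op, u, v)]
        ui, vi = self.nodes[u][0], self.nodes[v][0]
        i = min(ui, vi)                  # expand on the top variable
        u0, u1 = self.nodes[u][1:] if ui == i else (u, u)
        v0, v1 = self.nodes[v][1:] if vi == i else (v, v)
        r = self.mk(i, self.apply(op, u0, v0), self.apply(op, u1, v1))
        self.memo[(op, u, v)] = r
        return r

# Usage: De Morgan check -- x0 AND x1 equals NOT(NOT x0 OR NOT x1).
b = BDD(2)
x0, x1 = b.var(0), b.var(1)
neg = lambda u: b.apply(operator.xor, u, b.ONE)
f = b.apply(operator.and_, x0, x1)
g = neg(b.apply(operator.or_, neg(x0), neg(x1)))
print(f == g)  # canonicity makes equivalence a pointer comparison -> True
```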

9,021 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
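As a taste of the algorithms the book covers, the sketch below checks an invariant (a simple safety property) by explicit-state breadth-first reachability and returns a counterexample path on failure. The transition system here is a toy of our own, and symbolic techniques like those in the book's abstraction chapters are not shown.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """BFS over the reachable states; returns a counterexample path to the
    first state violating the invariant, or None if the invariant holds."""
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        s = queue.popleft()
        if not invariant(s):
            # Reconstruct the counterexample by walking parents back.
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:          # parent doubles as the visited set
                parent[t] = s
                queue.append(t)
    return None  # invariant holds in every reachable state

# Usage: a counter mod 8 that must stay below 6 (violated at 6).
succ = lambda s: [(s + 1) % 8]
print(check_invariant([0], succ, lambda s: s < 6))  # [0, 1, 2, 3, 4, 5, 6]
```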

4,905 citations