Author

Alberto Sangiovanni-Vincentelli

Bio: Alberto Sangiovanni-Vincentelli is an academic researcher from the University of California, Berkeley. He has contributed to research on topics including logic synthesis and finite-state machines, has an h-index of 99, and has co-authored 934 publications receiving 45,201 citations. His previous affiliations include the National University of Singapore and Lawrence Berkeley National Laboratory.


Papers
Proceedings ArticleDOI
09 Mar 2015
TL;DR: This work addresses the problem of synthesizing safety-critical cyber-physical system architectures that minimize a cost function while guaranteeing a desired reliability, and proposes two algorithms to reduce the problem complexity: Integer-Linear Programming Modulo Reliability (ILP-MR) and Integer-Linear Programming with Approximate Reliability (ILP-AR).
Abstract: We address the problem of synthesizing safety-critical cyber-physical system architectures to minimize a cost function while guaranteeing the desired reliability. We cast the problem as an integer linear program on a reconfigurable graph which models the architecture. Since generating symbolic probability constraints by exhaustive enumeration of failure cases on all possible graph configurations takes exponential time, we propose two algorithms to decrease the problem complexity, i.e., Integer-Linear Programming Modulo Reliability (ILP-MR) and Integer-Linear Programming with Approximate Reliability (ILP-AR). We compare the two approaches and demonstrate their effectiveness on the design of aircraft electric power system architectures.
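The trade-off at the heart of this formulation, component cost against a system-level reliability target met through redundancy, can be illustrated with a toy calculation. This is a minimal sketch with hypothetical numbers; the paper's ILP-MR and ILP-AR algorithms operate on a reconfigurable architecture graph, not a single redundancy level.

    # Toy cost-vs-reliability trade-off: how many parallel units of a
    # component are needed before the joint failure probability meets a
    # reliability target? All numbers are hypothetical.

    def min_cost_redundancy(unit_cost, fail_prob, target_reliability, max_units=10):
        """Smallest number of parallel units meeting the target, and its cost."""
        for k in range(1, max_units + 1):
            if 1.0 - fail_prob ** k >= target_reliability:
                return k, k * unit_cost
        raise ValueError("target unreachable with max_units units")

    # Generators costing 5 each, each failing with probability 1e-3,
    # against a required system reliability of 1 - 1e-9:
    k, cost = min_cost_redundancy(unit_cost=5, fail_prob=1e-3,
                                  target_reliability=1 - 1e-9)
    print(k, cost)   # -> 3 15

The real problem couples many such choices through the architecture graph, which is why exhaustive enumeration blows up and decompositions like ILP-MR and ILP-AR are needed.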

21 citations

Proceedings ArticleDOI
01 May 1998
TL;DR: This work analyzes the results obtained with the proposed approach and compares them with the existing design, underlining the advantages offered by a systematic approach to embedded system design in terms of performance and design time.
Abstract: A number of techniques and software tools for embedded system design have been recently proposed. However, the current practice in the designer community is heavily based on manual techniques and on past experience rather than on a rigorous approach to design. To advance the state of the art it is important to address a number of relevant design problems and solve them to demonstrate the power of the new approaches. We chose an industrial example in automotive electronics to validate our design methodology: an existing commercially available Engine Control Unit. We discuss in detail the specification, the implementation philosophy, and the architectural trade-off analysis. We analyze the results obtained with our approach and compare them with the existing design, underlining the advantages offered by a systematic approach to embedded system design in terms of performance and design time.

20 citations

Proceedings ArticleDOI
08 Dec 2008
TL;DR: This work formulates and solves an optimization problem whose objective function is the total energy consumption in the transmit, receive, listen, and sleep states, subject to delay and reliability constraints on packet delivery, with the sleep and wake times of the receivers as the decision variables.
Abstract: We present a novel approach for minimizing the energy consumption of medium access control (MAC) protocols developed for duty-cycled wireless sensor networks (WSN) for the unslotted IEEE 802.15.4 standard while guaranteeing delay and reliability constraints. The main challenge in this optimization is the random access associated with the existing IEEE 802.15.4 hardware and MAC specification that prevents controlling the exact transmission time of the packets. Data traffic, network topology, MAC, and the key parameters of duty cycles (sleep and wake time) determine the amount of random access, which in turn determines delay, reliability and energy consumption. We formulate and solve an optimization problem where the objective function is the total energy consumption in transmit, receive, listen and sleep states, subject to constraints of delay and reliability of the packet delivery, and the decision variables are the sleep and wake time of the receivers. The optimal solution can be easily implemented on existing IEEE 802.15.4 hardware platforms by storing light look-up tables in the receiver nodes. Numerical results show that the protocol significantly outperforms existing solutions.
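A miniature of this optimization, with stand-in power and delay models rather than the paper's IEEE 802.15.4 analysis, shows how the receivers' sleep/wake decision variables trade energy against a delay bound:

    # Hypothetical miniature: choose the receiver's sleep time to minimize
    # average power under a worst-case delay bound. The models below are
    # stand-ins, not the paper's IEEE 802.15.4 duty-cycle analysis.

    P_LISTEN, P_SLEEP = 20e-3, 30e-6   # watts, typical radio orders of magnitude
    WAKE = 0.01                        # fixed 10 ms listen window, seconds
    DELAY_BOUND = 0.5                  # seconds

    best = None
    for sleep in [s / 100 for s in range(1, 100)]:   # candidate sleep times, s
        period = sleep + WAKE
        avg_power = (P_LISTEN * WAKE + P_SLEEP * sleep) / period
        worst_delay = period           # a sender waits at most one full period
        if worst_delay <= DELAY_BOUND and (best is None or avg_power < best[0]):
            best = (avg_power, sleep)

    print(best)   # the longest admissible sleep wins: lowest duty cycle

Solving a table of such problems offline for different traffic and topology parameters is what makes a light look-up-table implementation on stock hardware plausible.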

20 citations

Proceedings ArticleDOI
09 May 1993
TL;DR: A novel approach to the layout compaction of analog integrated circuits which observes all of the performance and technology constraints necessary to guarantee proper analog circuit functionality is described.
Abstract: The authors describe a novel approach to the layout compaction of analog integrated circuits which observes all of the performance and technology constraints necessary to guarantee proper analog circuit functionality. The approach consists of two stages: a fast constraint graph critical path algorithm followed by a general linear programming algorithm. Circuit performance is guaranteed by mapping high-level performance constraints to low-level bounds on parasitics and then to minimum spacing constraints between adjacent nets. The algorithm has been implemented and found to display remarkable completeness and efficiency.
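The first stage can be sketched as a longest-path computation over the spacing-constraint graph; the element names and distances below are made up, and this illustrates the standard technique rather than the authors' implementation:

    # Constraint-graph compaction sketch: each minimum-spacing constraint
    # x[j] >= x[i] + d is an edge of a DAG, and the compacted coordinate
    # of each element is its longest path from the left chip edge.

    from collections import defaultdict

    constraints = [            # (i, j, d): x[j] >= x[i] + d
        ("left", "netA", 0),
        ("netA", "netB", 3),   # e.g. widened spacing from a parasitic bound
        ("netA", "netC", 2),
        ("netB", "right", 2),
        ("netC", "right", 4),
    ]

    succ = defaultdict(list)
    for i, j, d in constraints:
        succ[i].append((j, d))

    x = defaultdict(int)
    for node in ["left", "netA", "netB", "netC"]:   # topological order
        for j, d in succ[node]:
            x[j] = max(x[j], x[node] + d)

    print(x["right"])   # 6: critical path left -> netA -> netC -> right

Constraints the graph stage cannot express, such as the paper's more general performance constraints, are then handed to the linear-programming stage.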

20 citations

Proceedings ArticleDOI
10 Dec 2002
TL;DR: This work develops a formulation based on the generalized linear complementarity problem, enabling the application of efficient numerical solutions to the stabilization of a linear system whose commands are issued to different groups of actuators through a shared communication resource.
Abstract: We address the problem of stabilization to the trivial equilibrium of a linear system for which commands are issued to different groups of actuators through a shared communication resource. The problem is tackled by using a model predictive control scheme which, at every step, decides the allocation of the bus and the control command values. We develop a formulation based on the generalized linear complementarity problem, which enables the application of efficient numerical solutions. Finally, we give some preliminary results on the parametric dependence of the problem's solution on the system's state.
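A toy version of the receding-horizon decision can enumerate the (bus allocation, command) pairs over a one-step horizon; the plant matrices below are made up, and the paper solves the general problem through its generalized linear complementarity formulation rather than enumeration:

    # Toy shared-bus MPC step: pick which actuator group gets the bus and
    # what command it receives, by brute force over a one-step horizon.
    # The plant is hypothetical.

    import numpy as np

    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = [np.array([[0.0], [0.1]]),    # input matrix of actuator group 0
         np.array([[0.1], [0.0]])]    # input matrix of actuator group 1

    def step(x):
        """Best (cost, group, command, next state) for one horizon step."""
        candidates = []
        for g in (0, 1):                          # bus allocation
            for u in np.linspace(-1.0, 1.0, 21):  # quantized command values
                x_next = A @ x + B[g] * u
                cost = (x_next.T @ x_next).item() # drive the state to the origin
                candidates.append((cost, g, u, x_next))
        return min(candidates, key=lambda c: c[0])

    x = np.array([[1.0], [0.5]])
    for _ in range(3):
        cost, g, u, x = step(x)
        print(f"bus -> group {g}, u = {u:+.2f}, |x| = {np.linalg.norm(x):.3f}")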

20 citations


Cited by
Journal ArticleDOI
01 Jan 1998
TL;DR: In this article, a new learning paradigm called graph transformer networks (GTN) is proposed for handwritten character recognition, allowing multi-module recognition systems to be trained globally with gradient-based methods to classify high-dimensional patterns such as handwritten characters.
Abstract: Multilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique. Given an appropriate network architecture, gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Convolutional neural networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. A new learning paradigm, called graph transformer networks (GTN), allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure. Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training, and the flexibility of graph transformer networks. A graph transformer network for reading a bank cheque is also described. It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. It is deployed commercially and reads several million cheques per day.
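For concreteness, a LeNet-style convolutional network of the kind this paper made famous can be written in a few lines of modern PyTorch. This is a paraphrase of the 1998 LeNet-5 architecture, not the original implementation, which predates today's libraries:

    # A minimal LeNet-style CNN for 28x28 grayscale digit images.
    import torch
    import torch.nn as nn

    class LeNetLike(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
                nn.Tanh(),
                nn.AvgPool2d(2),                            # -> 14x14
                nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
                nn.Tanh(),
                nn.AvgPool2d(2),                            # -> 5x5
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
                nn.Linear(120, 84), nn.Tanh(),
                nn.Linear(84, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x))

    logits = LeNetLike()(torch.randn(1, 1, 28, 28))  # one fake digit image
    print(logits.shape)                              # torch.Size([1, 10])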

42,067 citations

Journal ArticleDOI
Rainer Storn, Kenneth Price
TL;DR: In this article, a new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented; it requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
Abstract: A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.
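The method described is differential evolution. A minimal sketch of the classic DE/rand/1/bin variant, with its few control variables (population size NP, differential weight F, crossover rate CR) and a made-up test function:

    # Differential evolution, DE/rand/1/bin: mutate with a scaled difference
    # of two population members, cross over with the target, keep the better.
    import random

    def differential_evolution(f, bounds, NP=20, F=0.8, CR=0.9, generations=200):
        dim = len(bounds)
        pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(NP)]
        for _ in range(generations):
            for i in range(NP):
                a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                j_rand = random.randrange(dim)          # force >= 1 mutated gene
                trial = [a[k] + F * (b[k] - c[k])
                         if (random.random() < CR or k == j_rand)
                         else pop[i][k]
                         for k in range(dim)]
                trial = [min(max(t, lo), hi)            # clamp to the box
                         for t, (lo, hi) in zip(trial, bounds)]
                if f(trial) <= f(pop[i]):               # greedy selection
                    pop[i] = trial
        return min(pop, key=f)

    # Minimize a shifted sphere; the optimum is at (1, -2):
    best = differential_evolution(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                                  bounds=[(-5, 5), (-5, 5)])
    print(best)   # close to [1, -2]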

24,053 citations

Journal ArticleDOI
01 Apr 1988 - Nature
TL;DR: In this paper, a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of Bowland Basin (Northwest England) is presented.
Abstract: Deposits of clastic carbonate-dominated (calciclastic) sedimentary slope systems in the rock record have been identified mostly as linearly-consistent carbonate apron deposits, even though most ancient clastic carbonate slope deposits fit the submarine fan systems better. Calciclastic submarine fans are consequently rarely described and are poorly understood, and very little is known especially about mud-dominated calciclastic submarine fan systems. Presented in this study is a sedimentological core and petrographic characterisation of samples from eleven boreholes from the Lower Carboniferous of the Bowland Basin (Northwest England) that reveals a >250 m thick calciturbidite complex deposited in a calciclastic submarine fan setting. Seven facies are recognised from core and thin-section characterisation and are grouped into three carbonate turbidite sequences. They include: 1) calciturbidites, comprising mostly high- to low-density, wavy-laminated bioclast-rich facies; 2) low-density densite mudstones, which are characterised by planar-laminated and unlaminated mud-dominated facies; and 3) calcidebrites, which are muddy or hyper-concentrated debris-flow deposits occurring as poorly-sorted, chaotic, mud-supported floatstones.

9,929 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a data structure for representing Boolean functions and an associated set of manipulation algorithms, which have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large.
Abstract: In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
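The data structure is the reduced ordered binary decision diagram. A minimal sketch, using Python tuples as nodes and memoization in place of a full unique table, shows the reduction rule and the Apply operation by synchronized Shannon expansion:

    # Terminals are the booleans; an internal node is (var, low, high),
    # with variables tested in increasing order along every path.
    from functools import lru_cache
    import operator

    def mk(var, lo, hi):
        """Reduction rule: skip the test when both branches coincide."""
        return lo if lo == hi else (var, lo, hi)

    def var(v):
        return mk(v, False, True)

    @lru_cache(maxsize=None)
    def apply_op(op, f, g):
        """Combine two BDDs by expanding both on the top-most variable."""
        if isinstance(f, bool) and isinstance(g, bool):
            return op(f, g)
        v = min(x[0] for x in (f, g) if not isinstance(x, bool))
        f0, f1 = (f[1], f[2]) if not isinstance(f, bool) and f[0] == v else (f, f)
        g0, g1 = (g[1], g[2]) if not isinstance(g, bool) and g[0] == v else (g, g)
        return mk(v, apply_op(op, f0, g0), apply_op(op, f1, g1))

    # Two derivations of (x1 AND x2) OR x3 yield the identical graph,
    # which is what makes equivalence checking a cheap comparison:
    f = apply_op(operator.or_, apply_op(operator.and_, var(1), var(2)), var(3))
    g = apply_op(operator.or_, var(3), apply_op(operator.and_, var(2), var(1)))
    print(f == g)   # True: reduced ordered BDDs are canonical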

9,021 citations

Book
25 Apr 2008
TL;DR: Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field.
Abstract: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.
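The core safety-checking loop the book formalizes can be sketched as explicit-state reachability: breadth-first exploration of a transition system that either proves an invariant over all reachable states or returns a counterexample path. The counter model below is made up for illustration:

    # Explicit-state invariant checking by BFS over the state graph.
    from collections import deque

    def check_invariant(initial, transitions, invariant):
        """Return None if the invariant holds, else a counterexample path."""
        seen, queue = {initial}, deque([(initial, [initial])])
        while queue:
            state, path = queue.popleft()
            if not invariant(state):
                return path                      # shortest violating run
            for nxt in transitions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None                              # invariant holds everywhere

    # Model: a counter mod 4; claimed invariant: counter != 3 (false on purpose).
    cex = check_invariant(0, lambda s: [(s + 1) % 4], lambda s: s != 3)
    print(cex)   # [0, 1, 2, 3] -- the counterexample a model checker reports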

4,905 citations