
Showing papers by "Alberto Sangiovanni-Vincentelli published in 2004"


Proceedings ArticleDOI
07 Jun 2004
TL;DR: The main challenges are to distill the essence of the method, to formalize it and to provide a framework to support its use in areas that go beyond the original domain of application.
Abstract: Platforms have become an important concept in the design of electronic systems. We present here the motivations behind the interest shown and the challenges that we have to face to make the Platform-based Design method a standard. As a generic term, platforms have meant different things to different people. The main challenges are to distill the essence of the method, to formalize it and to provide a framework to support its use in areas that go beyond the original domain of application.

177 citations


Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper describes two optimizations that decrease the overhead of desynchronization; applying temporal analysis to a formal execution model of the desynchronized design uncovers significant amounts of timing slack.
Abstract: The desynchronization approach combines a traditional synchronous specification style with a robust asynchronous implementation model. The main contribution of this paper is the description of two optimizations that decrease the overhead of desynchronization. First, we investigate the use of clustering to vary the granularity of desynchronization. Second, by applying temporal analysis on a formal execution model of the desynchronized design, we uncover significant amounts of timing slack. These methods are successfully applied to industrial RTL designs.

80 citations
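The temporal analysis mentioned in the entry above amounts to finding how much slack each stage has against the clock period. A minimal sketch of that idea, assuming a DAG of handshake stages with illustrative names, delays, and clock period (none taken from the paper):

```python
# Hypothetical sketch: timing slack in a DAG of handshake stages.
# Arrival times come from a longest-path pass; slack at a node is the
# clock period minus the longest path through it. All names and
# numbers are illustrative, not from the paper.

def longest_arrival(graph, delays):
    """graph: node -> list of successors; delays: node -> stage delay.
    Returns node -> latest arrival time (longest path into the node,
    including the node's own delay)."""
    order, seen = [], set()

    def visit(n):            # DFS postorder; reversed gives topological order
        if n in seen:
            return
        seen.add(n)
        for s in graph.get(n, []):
            visit(s)
        order.append(n)

    for n in graph:
        visit(n)
    order.reverse()

    arrival = {}
    for n in order:
        preds = [p for p in graph if n in graph.get(p, [])]
        arrival[n] = delays[n] + max((arrival[p] for p in preds), default=0.0)
    return arrival

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
delays = {"a": 1.0, "b": 2.0, "c": 0.5, "d": 1.0}
arr = longest_arrival(graph, delays)
period = 5.0
slack = {n: period - t for n, t in arr.items()}
```

Here node "d" has arrival time 4.0 and therefore 1.0 units of slack against the 5.0 clock period, which is the kind of margin a desynchronized implementation can exploit.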


Proceedings ArticleDOI
16 Feb 2004
TL;DR: This work proposes a synthesis-based design methodology that relieves the designers from the burden of specifying detailed mechanisms for addressing platform faults, while involving them in the definition of the overall fault-tolerance strategy.
Abstract: Designing cost-sensitive real-time control systems for safety-critical applications requires a careful analysis of the cost/coverage trade-offs of fault-tolerant solutions. This further complicates the difficult task of deploying the embedded software that implements the control algorithms on the execution platform that is often distributed around the plant (as it is typical, for instance, in automotive applications). We propose a synthesis-based design methodology that relieves the designers from the burden of specifying detailed mechanisms for addressing platform faults, while involving them in the definition of the overall fault-tolerance strategy. Thus, they can focus on addressing plant faults within their control algorithms, selecting the best components for the execution platform, and defining an accurate fault model. Our approach is centered on a new model of computation, fault tolerant data flows (FTDF), that enables the integration of formal validation techniques.

76 citations
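A common way to address platform faults in a data flow, and one plausible reading of what the synthesized mechanisms in the entry above must provide, is replication of an actor plus majority voting. A toy sketch under that assumption; the replica count, fault injection, and function names are illustrative, not the FTDF model itself:

```python
# Hypothetical sketch of masking a platform fault in a data flow:
# an actor is replicated three times and a voter masks one faulty
# replica. Fault injection here is purely illustrative.

def replicate_and_vote(actor, inputs, faults=()):
    """Run three replicas of `actor` on the same inputs; `faults`
    lists replica indices whose output is corrupted. A majority vote
    masks any single faulty replica."""
    outputs = []
    for i in range(3):
        out = actor(*inputs)
        if i in faults:          # injected fault: corrupt this replica
            out = out + 1
        outputs.append(out)
    return max(set(outputs), key=outputs.count)   # majority vote

assert replicate_and_vote(lambda x, y: x + y, (2, 3)) == 5
assert replicate_and_vote(lambda x, y: x + y, (2, 3), faults=(1,)) == 5
```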


01 Jan 2004
TL;DR: The technical cores of the chapter are two case-studies on heterogeneous fault tolerance and discrepancy minimization-based fault detection and correction and a brief survey of the future directions for fault tolerance research in wireless sensor networks.
Abstract: In this Chapter, we address fault tolerance in wireless sensor networks. In order to make the presentation self-contained, we start by providing a short summary of sensor networks and classical fault tolerance techniques. After that, we discuss the three phases of fault tolerance (fault models, fault detection and identification, and resiliency mechanisms) at four levels of abstraction (hardware, system software, middleware, and applications) and four scopes (components of an individual node, individual node, network, and the distributed system). The technical cores of the chapter are two case studies on heterogeneous fault tolerance and discrepancy minimization-based fault detection and correction. We conclude the chapter with a brief survey of future directions for fault tolerance research in wireless sensor networks.

66 citations


01 Jan 2004
TL;DR: This Dissertation proposes one such semantic foundation in the form of an algebraic framework called Agent Algebra, a formal framework that can be used to uniformly present and reason about the characteristics and the properties of the different models of computation used in a design, and about their relationships.
Abstract: The ability to incorporate increasingly sophisticated functionality makes the design of electronic embedded systems complex. Many factors, besides the traditional considerations of cost and performance, contribute to making the design and the implementation of embedded systems a challenging task. The inevitable interactions of an embedded system with the physical world require that its parts be described by multiple formalisms of heterogeneous nature. Because these formalisms evolved in isolation, system integration becomes particularly problematic. In addition, the computation, often distributed across the infrastructure, is frequently controlled by intricate communication mechanisms. This, and other safety concerns, demand a higher degree of confidence in the correctness of the design that imposes a limit on design productivity. The key to addressing the complexity problem and to achieving substantial productivity gains is a rigorous design methodology that is based on the effective use of decomposition and multiple levels of abstraction. Decomposition relies on models that describe the effect of hierarchically composing different concurrent parts of the system. An abstraction is the relationship between two representations of the same system that expose different levels of detail. To maximize their benefit, these techniques require a semantic foundation that provides the ability to formally describe and relate a wide range of concurrency models. This Dissertation proposes one such semantic foundation in the form of an algebraic framework called Agent Algebra. Agent Algebra is a formal framework that can be used to uniformly present and reason about the characteristics and the properties of the different models of computation used in a design, and about their relationships. 
This is accomplished by defining an algebra that consists of a set of denotations, called agents, for the elements of a model, and of the main operations that the model provides to compose and to manipulate agents. Different models of computation are constructed as distinct instances of the algebra. However, the framework takes advantage of the common algebraic structure to derive results that apply to all models in the framework, and to relate different models using structure-preserving maps. (Abstract shortened by UMI.)

35 citations


Proceedings ArticleDOI
27 Sep 2004
TL;DR: An extension of a mathematical framework proposed by the authors to deal with the composition of heterogeneous reactive systems is presented, providing complete formal support for correct-by-construction distributed deployment of a synchronous design specification over an LTTA medium.
Abstract: We present an extension of a mathematical framework proposed by the authors to deal with the composition of heterogeneous reactive systems. Our extended framework encompasses diverse models of computation and communication such as synchronous, asynchronous, causality-based partial orders, and earliest execution times. We introduce an algebra of tag structures and morphisms between tag sets to define heterogeneous parallel composition formally, and we use a result on pullbacks from category theory to handle properly the case of systems derived by composing many heterogeneous components. The extended framework allows us to establish theorems from which design techniques for correct-by-construction deployment of abstract specifications can be derived. We illustrate this by providing complete formal support for correct-by-construction distributed deployment of a synchronous design specification over an LTTA medium.

33 citations


Book ChapterDOI
01 Jan 2004
TL;DR: This chapter summarizes existing location discovery approaches and presents in great detail a new approach for location discovery in wireless ad-hoc sensor networks that resolves some limitations of the current approaches.
Abstract: Location discovery is a fundamental task in wireless ad-hoc networks. Location discovery provides a basis for a variety of location-aware applications. The goal of location discovery is to establish the position of each node as accurately as possible, given partial information about the locations of a subset of nodes and measured distances between some pairs of nodes. Numerous approaches and systems for location discovery have been proposed recently. The goal of this Chapter is twofold: first, to summarize and systematize the already available location discovery approaches; second, to present in great detail, including all key technical details, a new approach for location discovery in wireless ad-hoc sensor networks that resolves some limitations of the current approaches.

32 citations
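The geometric core of the problem stated in the entry above (position a node from known anchor positions and measured distances) can be illustrated with basic trilateration. A minimal sketch, not the chapter's approach: subtracting the first circle equation from the others linearizes the system, which is then solved in closed form. Anchor positions and the unknown node are illustrative:

```python
# Hypothetical sketch: locate a node in 2-D from three anchors with
# known positions and measured distances (basic trilateration).
# Subtracting the first circle equation from the others yields a
# linear 2x2 system A [x, y]^T = b.
import math

def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21          # Cramer's rule
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
true_pos = (1.0, 1.0)
dists = [math.dist(true_pos, a) for a in anchors]
x, y = trilaterate(anchors, dists)       # recovers (1.0, 1.0)
```

Real location discovery must cope with noisy distances and partial connectivity, which is what makes the statistical approaches surveyed in the chapter necessary; this sketch only shows the noise-free geometric step.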


Journal ArticleDOI
TL;DR: Design for manufacturability denotes all techniques designers use to estimate and control yield and robustness during the design phase, prior to manufacturing.
Abstract: Design optimization during synthesis targets area and/or performance, while optimization for yield occurs at the layout level. We raise the abstraction level of yield optimization by introducing an approach to yield-driven logic synthesis. Design for manufacturability denotes all techniques designers use to estimate and control yield and robustness during the design phase, prior to manufacturing.

26 citations


Proceedings ArticleDOI
16 Feb 2004
TL;DR: It is shown that the synthesis for manufacturability can achieve even larger cost reduction when yield-optimized cells are added to the library, thus enabling a wider area-yield tradeoff exploration.
Abstract: As we move towards nanometer technology, manufacturing problems become overwhelmingly difficult to solve. Presently, optimization for manufacturability is performed at a post-synthesis stage and has been shown capable of reducing manufacturing cost up to 10%. As in other cases, raising the abstraction layer where optimization is applied is expected to yield substantial gains. This paper focuses on a new approach to design for manufacturability: logic synthesis for manufacturability. This methodology consists of replacing the traditional area-driven technology mapping with a new manufacturability-driven one. We leverage existing logic synthesis tools to test our method. The results obtained by using an STMicroelectronics 0.13 µm library confirm that this approach is a promising solution for designing circuits with lower manufacturing cost, while retaining performance. Finally, we show that our synthesis for manufacturability can achieve even larger cost reduction when yield-optimized cells are added to the library, thus enabling a wider area-yield tradeoff exploration.

25 citations


Book ChapterDOI
25 Mar 2004
TL;DR: A structured control synthesis procedure is applied in which constraints for state and input variables are backward propagated from the controlled output (the crankshaft speed) across successive subsystems.
Abstract: The problem of maintaining the crankshaft speed of an automotive engine within a given set interval (idle speed control), is formalized as a constrained control problem using a hybrid model of the engine. The control problem is difficult because the system has delays and a large number of constraints. The approach for the synthesis of a controller for this system is based on the theory developed for affine systems on polytopes. A structured control synthesis procedure is applied in which constraints for state and input variables are backward propagated from the controlled output (the crankshaft speed) across successive subsystems.

20 citations


Proceedings ArticleDOI
27 Sep 2004
TL;DR: This paper presents a few techniques that eliminate almost entirely the overhead while maintaining the positive aspects of the separation of concerns and experimental results on a complex design back this assertion.
Abstract: Separating the description of important aspects of a design, such as behavior and architecture, or computation and communication, may yield significant advantages in design time as well as in re-usability of the design. However, fully exploiting the re-usability opportunities offered by this approach requires keeping the various aspects of the design separate while verifying the design at a given level of abstraction. In particular, simulation of the design may incur significant overhead versus a traditional approach where the design is represented and analyzed monolithically. In this paper, we present a few techniques that eliminate almost entirely the overhead while maintaining the positive aspects of the separation of concerns. Experimental results on a complex design back this assertion.

01 Jan 2004
TL;DR: The thesis is that correct-by-construction methods combining the benefits of synchronous specification with the efficiency of asynchronous implementation are the key to designing moderately distributed complex systems composed of tightly interacting concurrent processes.
Abstract: Currently available computer-aided design (CAD) tools struggle to handle the increasingly dominant impact of interconnect delay and fall short of providing support for IP reuse. With each process generation, the number of available transistors grows faster than the ability to meaningfully design them (the design productivity gap), and designers are forced to iterate many times between circuit specification and layout implementation (the timing-closure problem). Ironically, it is the introduction of nanometer technologies that threatens the outstanding pace of technological progress that has shaped the semiconductor industry. The key to addressing these challenges is the development of methodologies based on formal methods to enable modularity, flexibility, and reusability in system design. The subject of this dissertation, Latency-Insensitive Design, is a step in this direction. My thesis is that correct-by-construction methods combining the benefits of synchronous specification with the efficiency of asynchronous implementation are the key to designing moderately distributed complex systems composed of tightly interacting concurrent processes. Major contributions are the theory of latency-insensitive protocols and the companion latency-insensitive design methodology. Latency-insensitive systems are synchronous distributed systems composed of functional modules that exchange data on communication channels according to an appropriate protocol. The protocol works on the assumption that the modules are stallable (a weak condition to ask them to obey) and guarantees that systems made of functionally correct modules behave correctly independently of channel latencies. The theory of latency-insensitive protocols is the foundation of a correct-by-construction methodology for integrated circuit design that handles latency's increasing impact in nanometer technologies and facilitates the assembly of IP cores for building complex SoCs, thereby reducing the number of costly iterations during the design process. Thanks to the generality of its principles, latency-insensitive design can possibly be applied to other research areas, such as distributed deployment of embedded software. (Abstract shortened by UMI.)
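The core guarantee described in the entry above, that functional behavior is independent of channel latencies, can be illustrated with a toy simulation. This is only a sketch of the idea, not the protocol itself: a channel adds latency by emitting void tokens (here None), a stallable module simply skips them, and the informative stream it receives is unchanged:

```python
# Hypothetical sketch of the latency-insensitive idea: a channel with
# latency L prepends L void tokens (None); a stallable module stalls
# on voids and fires only on informative tokens, so the received
# stream is the same for any latency. Token values are illustrative.

def run(tokens, channel_latency):
    channel = [None] * channel_latency + list(tokens)
    received = []
    for t in channel:
        if t is None:
            continue            # module stalls on a void token
        received.append(t)      # informative token: module fires
    return received

stream = [1, 2, 3, 4, 5]
# The functional behavior is independent of the channel latency.
assert run(stream, 0) == run(stream, 3) == stream
```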

Proceedings ArticleDOI
16 Feb 2004
TL;DR: A new abstraction level - the platform - is introduced to separate circuit design from design space exploration and an analog platform encapsulates analog components concurrently modeling their behavior and their achievable performances.
Abstract: This paper describes a novel approach to system level analog design. A new abstraction level - the platform - is introduced to separate circuit design from design space exploration. An analog platform encapsulates analog components, concurrently modeling their behavior and their achievable performances. Performance models are obtained through statistical sampling of circuit configurations. The design configuration space is specified with analog constraint graphs so that the sampling space is significantly reduced. System level exploration can be achieved through optimization on behavioral models constrained by performance models. Finally, an example is provided showing the effectiveness of the approach on a WCDMA amplifier.
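The sampling-then-optimization flow in the entry above can be sketched in miniature. Everything here is an illustrative stand-in: the two-parameter "circuit", its gain/power evaluation, and the power budget are invented for the sketch, and the real approach uses constraint graphs and proper performance models rather than raw samples:

```python
# Hypothetical sketch: statistically sample circuit configurations to
# characterize achievable performances, then do system-level selection
# under a constraint. The evaluation function is a toy stand-in for
# circuit simulation.
import random

random.seed(0)

def evaluate(cfg):
    """Stand-in for circuit simulation: returns (gain, power)."""
    w, bias = cfg
    return (10 * w * bias, w + bias)

# 1. Sample the configuration space.
samples = [(random.uniform(0.1, 1.0), random.uniform(0.1, 1.0))
           for _ in range(200)]
perf = [evaluate(cfg) for cfg in samples]

# 2. System-level exploration: maximize gain subject to a power budget.
budget = 1.0
feasible = [(g, p, cfg) for (g, p), cfg in zip(perf, samples) if p <= budget]
best_gain, best_power, best_cfg = max(feasible)
assert best_power <= budget
```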

Proceedings ArticleDOI
16 Feb 2004
TL;DR: This case study focuses on a particular aspect of this methodology that eases considerably the verification process, successive refinement, and compares it with the work of a parallel team of designers who developed the IC using standard design approaches.
Abstract: Productivity data for IC designs indicates an exponential increase in design time and cost with the number of elements that are to be included in a device. Present applications require the development of complex systems to support novel functionality. To cope with these difficulties, we need to change radically the present design methodology to allow for extensive re-use, early verification in the design cycle, pervasive use of software, and architecture-level optimization. Platform-based design, as defined in A. Sangiovanni-Vincentelli (2002), has these characteristics. We present the application of this methodology to a complex industrial application provided by Cypress Semiconductor. In this case study, we focus on a particular aspect of this methodology that eases considerably the verification process: successive refinement. We compare this approach against a parallel team of designers who developed the IC using standard design approaches.

Proceedings ArticleDOI
01 Jan 2004
TL;DR: A system of statistical techniques that calculates the likelihood that an error of a particular value is part of a measurement, using compact curves to build the probability density function of the error.
Abstract: Error modeling is a procedure for quantitatively characterizing the likelihood that a particular value of error is associated with a particular measured value. Error modeling directly affects the accuracy and effectiveness of many tasks in sensor-based systems, including calibration, sensor fusion, and power management. We developed a system of statistical techniques that calculates the likelihood that an error of a particular value is part of a measurement. The error modeling approach has three steps: (i) data set partitioning; (ii) constructing the error density model; and (iii) learn-and-test and resubstitution-based procedures for validating the models. The data set partitioning identifies a specified percentage of measurements that have the highest negative discrepancy between sensor and standard measurements. The partitioning step employs data fitting models to identify compact curves that represent the partitioned subsets. The error density modeling uses the compact curves to build the probability density function (PDF) of the error. For validation purposes, we use a resubstitution-based paradigm.
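The three-step flow in the entry above can be sketched on synthetic data. This is a simplified stand-in, not the paper's method: the partitioning keeps raw indices rather than fitted compact curves, the PDF is a plain histogram, and the data, noise level, and thresholds are all invented for illustration:

```python
# Hypothetical sketch of the three-step error-modeling flow:
# (i) partition the data by discrepancy, (ii) build an empirical
# error PDF, (iii) sanity-check the model on the same data
# (a resubstitution-style check). All data are synthetic.
import random

random.seed(1)
standard = [i / 10 for i in range(100)]                  # reference values
sensor = [s + random.gauss(0.0, 0.2) for s in standard]  # noisy readings
errors = [m - s for m, s in zip(sensor, standard)]

# (i) Partition: flag the 10% of readings with the largest negative
# discrepancy between sensor and standard measurements.
k = len(errors) // 10
worst = sorted(range(len(errors)), key=lambda i: errors[i])[:k]

# (ii) Error density model: histogram-based empirical PDF.
lo, hi, bins = -1.0, 1.0, 20
counts = [0] * bins
for e in errors:
    b = min(bins - 1, max(0, int((e - lo) / (hi - lo) * bins)))
    counts[b] += 1
pdf = [c / len(errors) for c in counts]

# (iii) Resubstitution-style check: the empirical PDF sums to 1.
assert abs(sum(pdf) - 1.0) < 1e-9
```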

Proceedings ArticleDOI
01 Jan 2004
TL;DR: A revised reachability computation that avoids the approximations caused by the union operation in the discretized flow tube estimation, and may therefore classify as unreachable states that the previous algorithm reports as reachable because of the looser over-approximations introduced by the union operation.
Abstract: A new approach is presented for computing approximations of the reached sets of linear hybrid automata. First, we present some new theoretical results on termination of a class of reachability algorithms, which includes Botchkarev's, based on ellipsoidal calculus. The main contribution of the paper is a revised reachability computation that avoids the approximations caused by the union operation in the discretized flow tube estimation. Therefore, the new algorithm may classify as unreachable states that are reachable according to the previous algorithm because of the looser over-approximations introduced by the union operation. We implemented the new reachability algorithm and tested it successfully on a real-life case, a hybrid model of a controlled car engine.
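Why avoiding the union operation tightens the result can be shown with a deliberately simplified example. The paper works with ellipsoidal calculus; here 1-D intervals stand in for the per-step reach sets, and the interval hull stands in for the union's over-approximation, so the sets and numbers are illustrative only:

```python
# Hypothetical sketch: taking one convex over-approximation (here the
# interval hull) of all per-step reach sets can cover points that no
# individual step reaches; keeping the per-step sets avoids that.

def interval_hull(sets):
    return (min(lo for lo, _ in sets), max(hi for _, hi in sets))

# Per-step reach sets of some discretized flow tube (illustrative).
steps = [(0.0, 1.0), (2.0, 3.0)]

def reachable_per_step(x):
    return any(lo <= x <= hi for lo, hi in steps)

def reachable_hull(x):
    lo, hi = interval_hull(steps)
    return lo <= x <= hi

# x = 1.5 is not reached at any step, but the hull claims it is:
# the hull-based check is the looser over-approximation.
assert not reachable_per_step(1.5)
assert reachable_hull(1.5)
```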

Journal ArticleDOI
TL;DR: In this article, the authors illustrate the application of an integrated control-implementation design methodology to the development of the top few layers of abstraction in the design flow of an engine control system for motorcycles.

Journal ArticleDOI
TL;DR: This paper describes the application of SPFD-based wire removal techniques for circuit implementations utilizing networks of PLAs as well as standard-cells and demonstrates that the most effective approach is to perform wire removal both before and after clustering.
Abstract: Wire removal is a technique by which the total number of wires between individual circuit nodes is reduced, either by removing wires or replacing them with other new wires. The wire removal techniques we describe in this paper are based on both binary and multivalued sets of pairs of functions to be distinguished (SPFDs). Recently, it was shown that a design style based on a multilevel network of approximately equal-sized programmable logic arrays (PLAs) results in a dense, fast, and crosstalk-resistant layout. This paper describes the application of SPFD-based wire removal techniques for circuit implementations utilizing networks of PLAs as well as standard-cells. In our first set of wire removal experiments (which utilize binary SPFD-based wire removal), we demonstrate that the benefit of SPFD-based wire removal is insignificant when the circuit is mapped using standard cells. We demonstrate that this technique is very effective in the context of a network of PLAs. In the next set of wire removal experiments, we focus only on circuits implemented using a network of PLAs. Three separate wire removal experiments are performed. Wire removal is invoked before clustering the original netlist into a network of PLAs, or after clustering, or both before and after clustering. For wire removal before clustering, binary SPFD-based wire removal is used. For wire removal after clustering, multivalued SPFD-based wire removal is used since the multioutput PLAs can be viewed as multivalued single output nodes. We demonstrate that these techniques are effective. The most effective approach is to perform wire removal both before and after clustering. Using these techniques, we obtain a reduction in placed and routed circuit area of about 11%. This reduction is significantly higher (about 20%) for the larger circuits we used in our experiments.

Journal ArticleDOI
TL;DR: The overall objective of the SEA Initiative is to develop a framework and a seamless process-flow for the design of complex embedded real-time systems in automotive applications to address several key issues faced by the automotive industry.

Proceedings ArticleDOI
27 Sep 2004
TL;DR: This paper constructs a framework, called Agent Algebra, where the different models reside and share a common algebraic structure, and shows that, unlike abstract interpretations, conservative approximations preserve refinement verification results from an abstract to a concrete model while avoiding false positives.
Abstract: Embedded systems are electronic devices that function in the context of a real environment, by sensing and reacting to a set of stimuli. Because of their close interaction with the environment, and to simplify their design, different parts of an embedded system are best described using different notations and different techniques. In this case, we say that the system is heterogeneous. We informally refer to the notation and the rules that are used to specify and verify the elements of a heterogeneous system and their collective behavior as a model of computation. In this paper, we focus in particular on abstraction and refinement relationships in the form of conservative approximations. We do so by constructing a framework, called Agent Algebra, in which the different models reside and share a common algebraic structure. We compare our techniques to the well-established notion of abstract interpretation. We show that, unlike abstract interpretations, conservative approximations preserve refinement verification results from an abstract to a concrete model while avoiding false positives. In addition, we use the inverse of a conservative approximation to identify components that can be used indifferently in several models, thus enabling reuse across domains of computation.

Journal Article
TL;DR: In this paper, the authors propose a mathematical framework offering diverse models of computation and a formal foundation for correct-by-construction deployment of synchronous designs over distributed architectures (such as GALS or LTTA).
Abstract: Recently we proposed a mathematical framework offering diverse models of computation and a formal foundation for correct-by-construction deployment of synchronous designs over distributed architectures (such as GALS or LTTA). In this paper, we extend our framework to model explicitly causality relations and scheduling constraints. We show how the formal results on the preservation of semantics hold also for these cases, and we discuss the overall contribution in the context of previous work on desynchronization.