
Showing papers presented at "Formal Methods in 2010"


Book ChapterDOI
Egon Börger
01 Jan 2010
TL;DR: In this article, the main ingredients of Abstract State Machines (ASM) for high-level system design and analysis are discussed and a survey of their application highlights in industrial software-based system engineering is presented.
Abstract: We explain the main ingredients of the Abstract State Machines (ASM) method for high-level system design and analysis and survey some of its application highlights in industrial software-based system engineering. We illustrate the method by defining models for three simple control systems (sluice gate, traffic light, package router) and by characterising Event-B machines as a specific class of ASMs. We point to directions for future research and applications of the method in areas other than software engineering.

880 citations
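
To make the ASM style concrete, here is a minimal Python sketch of a control loop in the spirit of the paper's sluice-gate example. The phase names, timing constants, and rules are illustrative assumptions, not the paper's exact model.

    # ASM-style control: the machine repeatedly fires the applicable rule on
    # its abstract state. Phases and timing constants are invented here.
    OPEN_PERIOD, CLOSED_PERIOD = 10, 50

    def step(state):
        """One ASM step: apply the rule whose guard holds in this state."""
        phase, t = state["phase"], state["time"]
        if phase == "fullyClosed" and t >= CLOSED_PERIOD:
            return {"phase": "opening", "time": 0}
        if phase == "opening":                  # motor raises the gate
            return {"phase": "fullyOpen", "time": 0}
        if phase == "fullyOpen" and t >= OPEN_PERIOD:
            return {"phase": "closing", "time": 0}
        if phase == "closing":                  # motor lowers the gate
            return {"phase": "fullyClosed", "time": 0}
        return {"phase": phase, "time": t + 1}  # otherwise time advances

    state = {"phase": "fullyClosed", "time": 0}
    for _ in range(200):                        # run the abstract machine
        state = step(state)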


Book ChapterDOI
29 Nov 2010
TL;DR: A subject reduction property is proved which shows that well-typedness is preserved during execution; in particular, "method not understood" errors do not occur at runtime for well-typed ABS models.
Abstract: This paper presents ABS, an abstract behavioral specification language for designing executable models of distributed object-oriented systems. The language combines advanced concurrency and synchronization mechanisms for concurrent object groups with a functional language for modeling data. ABS uses asynchronous method calls, interfaces for encapsulation, and cooperative scheduling of method activations inside concurrent objects. This feature combination results in a concurrent object-oriented model which is inherently compositional. We discuss central design issues for ABS and formalize the type system and semantics of Core ABS, a calculus with the main features of ABS. For Core ABS, we prove a subject reduction property which shows that well-typedness is preserved during execution; in particular, "method not understood" errors do not occur at runtime for well-typed ABS models. Finally, we briefly discuss the tool support developed for ABS.

349 citations
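
The combination of asynchronous method calls, futures, and cooperative scheduling can be approximated in ordinary code. The following Python asyncio sketch is an analogy only, not ABS: the method call returns a future, and the callee runs without preemption between suspension points, mimicking cooperative scheduling inside a concurrent object.

    import asyncio

    class Account:
        def __init__(self):
            self.balance = 0

        async def deposit(self, amount):
            self.balance += amount     # runs unpreempted until the next await
            return self.balance

    async def main():
        acc = Account()
        fut = asyncio.ensure_future(acc.deposit(10))  # async call -> future
        # the caller may do other work here instead of blocking
        result = await fut             # like f.get in ABS (may suspend)
        print(result)

    asyncio.run(main())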


Journal ArticleDOI
01 Oct 2010
TL;DR: It is shown how output error probabilities change with an increasing number of simultaneous faults; the results obtained show that the output error probability resulting from multiple-event transients or multiple-bit upsets can vary across different outputs and different circuits by several orders of magnitude.
Abstract: Transient faults in logic circuits are becoming an important reliability concern for future technology nodes. Radiation-induced faults have received significant attention in recent years, while multiple transients originating from a single radiation hit are predicted to occur more often. Furthermore, some effects, like reconvergent fanout-induced glitches, are more pronounced in the case of multiple faults. Therefore, to guide the design process and the choice of circuit optimization techniques, it is important to model multiple faults and their propagation through logic circuits, while evaluating the changes in error rates resulting from multiple simultaneous faults. In this paper, we show how output error probabilities change with an increasing number of simultaneous faults and we also analyze the impact of multiple errors in state flip-flops during the cycles following the cycle when fault(s) occurred. The results obtained using the proposed framework show that the output error probability resulting from multiple-event transients or multiple-bit upsets can vary across different outputs and different circuits by several orders of magnitude. The results also show that the impact of different masking factors varies across circuits, and this information can be valuable for customizing protection techniques.

117 citations
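
As a toy illustration of the kind of question the framework answers, the Monte Carlo sketch below injects k simultaneous output flips into an invented three-gate netlist and estimates the output error probability as k grows. The circuit and parameters are made up, and the paper itself uses an analytical propagation framework rather than simulation.

    import random

    def circuit(a, b, c, flips=frozenset()):
        g1 = (a and b) ^ ('g1' in flips)     # AND gate, possibly faulted
        g2 = (b or c) ^ ('g2' in flips)      # OR gate, possibly faulted
        return (g1 ^ g2) ^ ('out' in flips)  # XOR gate driving the output

    def error_prob(n_faults, trials=100_000):
        gates = ['g1', 'g2', 'out']
        errs = 0
        for _ in range(trials):
            ins = [random.random() < 0.5 for _ in range(3)]
            flips = frozenset(random.sample(gates, n_faults))
            errs += circuit(*ins) != circuit(*ins, flips)
        return errs / trials

    for k in range(1, 4):                    # error rate vs. fault multiplicity
        print(k, error_prob(k))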


Journal ArticleDOI
01 Sep 2010
TL;DR: A novel abstraction-refinement framework for Markov decision processes (MDPs), which are widely used for modelling and verifying systems that exhibit both probabilistic and nondeterministic behaviour, is presented.
Abstract: In the field of model checking, abstraction refinement has proved to be an extremely successful methodology for combating the state-space explosion problem. However, little practical progress has been made in the setting of probabilistic verification. In this paper we present a novel abstraction-refinement framework for Markov decision processes (MDPs), which are widely used for modelling and verifying systems that exhibit both probabilistic and nondeterministic behaviour. Our framework comprises an abstraction approach based on stochastic two-player games, two refinement methods and an efficient algorithm for an abstraction-refinement loop. The key idea behind the abstraction approach is to maintain a separation between nondeterminism present in the original MDP and nondeterminism introduced during the abstraction process, each type being represented by a different player in the game. Crucially, this allows lower and upper bounds to be computed for the values of reachability properties of the MDP. These give a quantitative measure of the quality of the abstraction and form the basis of the corresponding refinement methods. We describe a prototype implementation of our framework and present experimental results demonstrating automatic generation of compact, yet precise, abstractions for a large selection of real-world case studies.

113 citations
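
The key idea can be sketched on a toy abstraction: each abstract state offers player-1 choices (nondeterminism introduced by abstraction), each of which offers player-2 choices (the MDP's own nondeterminism) over probability distributions. For maximal reachability, resolving player 1 minimally versus maximally yields lower and upper bounds on the MDP's value. The tiny game below is invented for illustration.

    # state -> player-1 choices -> player-2 choices -> distribution
    game = {
        'a0': [
            [[(0.2, 'goal'), (0.8, 'sink')]],     # p1 choice A: one p2 option
            [[(0.6, 'goal'), (0.4, 'sink')],      # p1 choice B: two p2 options
             [(0.5, 'goal'), (0.5, 'sink')]],
        ],
        'goal': [[[(1.0, 'goal')]]],              # absorbing target
        'sink': [[[(1.0, 'sink')]]],              # absorbing non-target
    }
    target = {'goal'}

    def expval(dist, v):
        return sum(p * v[t] for p, t in dist)

    lb = {s: float(s in target) for s in game}    # value iteration for both
    ub = dict(lb)                                 # bounds at once
    for _ in range(100):
        for s in game:
            if s in target:
                continue
            lb[s] = min(max(expval(d, lb) for d in p1) for p1 in game[s])
            ub[s] = max(max(expval(d, ub) for d in p1) for p1 in game[s])

    print(lb['a0'], ub['a0'])   # 0.2, 0.6: brackets the MDP's true value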


Proceedings ArticleDOI
26 Jul 2010
TL;DR: A new language, Feldspar, is presented, enabling high-level and platform-independent description of digital signal processing (DSP) algorithms, based on a low-level, functional core language which has a relatively small semantic gap to machine-oriented languages like C.
Abstract: A new language, Feldspar, is presented, enabling high-level and platform-independent description of digital signal processing (DSP) algorithms. Feldspar is a pure functional language embedded in Haskell. It offers a high-level dataflow style of programming, as well as a more mathematical style based on vector indices. The key to generating efficient code from such descriptions is a high-level optimization technique called vector fusion. Feldspar is based on a low-level, functional core language which has a relatively small semantic gap to machine-oriented languages like C. The core language serves as the interface to the back-end code generator, which produces C. For very small examples, the generated code performs comparably to hand-written C code when run on a DSP target. While initial results are promising, to achieve good performance on larger examples, issues related to memory access patterns and array copying will have to be addressed.

96 citations
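
The vector-fusion idea can be mimicked in any language with first-class functions: represent a vector as a length plus an index function, so that map and zipWith merely compose functions and storage is allocated only when the result is finally written. A hypothetical Python sketch (Feldspar itself is embedded in Haskell and compiles to C):

    class Vec:
        def __init__(self, length, index):
            self.length, self.index = length, index   # index: i -> element

        def map(self, f):
            return Vec(self.length, lambda i: f(self.index(i)))

        def zip_with(self, f, other):
            n = min(self.length, other.length)
            return Vec(n, lambda i: f(self.index(i), other.index(i)))

        def to_list(self):          # the only point where storage appears
            return [self.index(i) for i in range(self.length)]

    xs = Vec(8, lambda i: i)        # [0..7], never materialized
    ys = xs.map(lambda x: x * x).zip_with(lambda a, b: a + b, xs)
    print(ys.to_list())             # fused: one loop, no temporaries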


Journal ArticleDOI
01 Oct 2010
TL;DR: This paper proposes indirect temperature sensing to accurately estimate the temperature at arbitrary locations on the die based on the noisy temperature readings from a limited number of sensors which are located further away from the locations of interest.
Abstract: Dynamic thermal management techniques require accurate runtime temperature information in order to operate effectively and efficiently. In this paper, we propose two novel solutions for accurate sensing of on-chip temperature. Our first technique is used at design time for sensor allocation and placement to minimize the number of sensors while maintaining the desired accuracy. The experimental results show that this technique can improve the efficiency and accuracy of sensor allocation and placement compared to previous work and can reduce the number of required thermal sensors by about 16% on average. Secondly, we propose indirect temperature sensing to accurately estimate the temperature at arbitrary locations on the die based on the noisy temperature readings from a limited number of sensors which are located further away from the locations of interest. Our runtime technique for temperature estimation reduces the standard deviation and maximum value of temperature estimation errors by an order of magnitude.

93 citations
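
One simple way to realize indirect sensing is a linear estimator calibrated offline, for example against thermal-simulation data, and then applied to noisy runtime readings. The sketch below uses synthetic data and ordinary least squares; the authors' actual estimator and noise model may differ.

    import numpy as np

    rng = np.random.default_rng(0)
    n_sensors, n_train = 4, 500
    true_w = np.array([0.4, 0.3, 0.2, 0.1])   # assumed spatial correlation
    locals_ = 40 + 30 * rng.random((n_train, n_sensors))  # true temps (C)
    hotspot = locals_ @ true_w + 5.0          # temperature at the hotspot
    readings = locals_ + rng.normal(0, 1.5, locals_.shape)  # noisy sensors

    X = np.hstack([readings, np.ones((n_train, 1))])  # affine model
    w, *_ = np.linalg.lstsq(X, hotspot, rcond=None)   # offline calibration

    new_reading = np.array([55.0, 60.0, 48.0, 52.0])  # runtime estimation
    print(np.append(new_reading, 1.0) @ w)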


Proceedings ArticleDOI
26 Jul 2010
TL;DR: In this paper, a BMC-based approach is proposed to verify SystemC TLM properties, such as the effect of a transaction and that the transaction is only started after a certain event.
Abstract: Electronic System Level (ESL) design manages the enormous complexity of today's systems by using abstract models. In this context, Transaction Level Modeling (TLM) is state-of-the-art for describing complex communication without all the details. As an ESL language, SystemC has become the de facto standard. Since SystemC TLM models are used for early software development and as a reference for hardware implementation, their correct functional behavior is crucial. Admittedly, the best possible verification quality can be achieved with formal approaches. However, formal verification of TLM models is a hard task. Existing methods basically consider local properties or have extremely high run-time. In contrast, the approach proposed in this paper can verify "true" TLM properties, i.e. major TLM behavior such as the effect of a transaction, or that a transaction is only started after a certain event. Our approach works as follows: after a fully automatic SystemC-to-C transformation, the TLM property is mapped to monitoring logic using C assertions and finite state machines. To detect a violation of the property, the approach uses a BMC-based formulation over the outermost loop of the SystemC scheduler. In addition, we improve this verification method significantly by employing induction on the C model, forming a complete and efficient approach. As shown by experiments, state-of-the-art proof techniques allow proving important non-trivial behavior of SystemC TLM designs.

79 citations


Book ChapterDOI
08 Nov 2010
TL;DR: This work considers statistical (sampling-based) solution methods for verifying probabilistic properties with unbounded until, and shows how the choice of termination probability--when applied to Markov chains--is tied to the subdominant eigenvalue of the transition probability matrix, which relates it to iterative numerical solution techniques for the same problem.
Abstract: We consider statistical (sampling-based) solution methods for verifying probabilistic properties with unbounded until. Statistical solution methods for probabilistic verification use sample execution trajectories for a system to verify properties with some level of confidence. The main challenge with properties that are expressed using unbounded until is to ensure termination in the face of potentially infinite sample execution trajectories. We describe two alternative solution methods, each one with its own merits. The first method relies on reachability analysis, and is suitable primarily for large Markov chains where reachability analysis can be performed efficiently using symbolic data structures, but for which numerical probability computations are expensive. The second method employs a termination probability and weighted sampling. This method does not rely on any specific structure of the model, but error control is more challenging. We show how the choice of termination probability--when applied to Markov chains--is tied to the subdominant eigenvalue of the transition probability matrix, which relates it to iterative numerical solution techniques for the same problem.

59 citations
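
The second method can be illustrated on a three-state chain: cut each trajectory short with probability p_term per step, and reweight surviving trajectories by 1/(1 - p_term) per step so the estimate of the unbounded-until probability stays unbiased. The chain and constants below are invented for illustration.

    import random

    chain = {'s': [(0.4, 'goal'), (0.1, 'fail'), (0.5, 's')]}  # absorbing ends

    def sample(p_term=0.01):
        state, weight = 's', 1.0
        while state not in ('goal', 'fail'):
            if random.random() < p_term:   # forced termination
                return 0.0
            weight /= (1.0 - p_term)       # compensate for the cut mass
            r, acc = random.random(), 0.0
            for p, nxt in chain[state]:
                acc += p
                if r < acc:
                    state = nxt
                    break
        return weight if state == 'goal' else 0.0

    n = 100_000
    print(sum(sample() for _ in range(n)) / n)   # ~ 0.4 / 0.5 = 0.8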


Journal ArticleDOI
01 Jun 2010
TL;DR: This paper proposes a logic adapted to the specification of properties of mixed-signal circuits in the temporal domain as well as in the frequency domain, together with a simulation-based approach that consists of evaluating the property on a representative subset of behaviors and answering the question of whether the circuit satisfies the property with a probability greater than or equal to some threshold.
Abstract: In this paper, we consider verifying properties of mixed-signal circuits, i.e., circuits for which there is an interaction between analog (continuous) and digital (discrete) values. We use a simulation-based approach that consists of evaluating the property on a representative subset of behaviors and answering the question of whether the circuit satisfies the property with a probability greater than or equal to some threshold. We propose a logic adapted to the specification of properties of mixed-signal circuits in the temporal domain as well as in the frequency domain. We also demonstrate the applicability of the method on different models of ΔΣ modulators for which previous formal verification attempts were too conservative and required excessive computation time.

54 citations


Book ChapterDOI
29 Nov 2010
TL;DR: This work defines the necessary proof obligations to ensure valid compositions and decompositions and shows that shared event composition preserves refinement proofs, that is, in order to maintain refinement of compositions, it is sufficient to prove refinement between corresponding sub-components.
Abstract: The construction of specifications is often a combination of smaller sub-components. Composition and decomposition are techniques supporting reuse and allowing formal combination of sub-components through refinement steps. Sub-components can result from a design or architectural goal and a refinement framework should allow them to be further developed, possibly in parallel. We propose the definition of composition and decomposition in the Event-B formalism following a shared event approach where sub-components interact via synchronised shared events and shared states are not allowed. We define the necessary proof obligations to ensure valid compositions and decompositions. We also show that shared event composition preserves refinement proofs, that is, in order to maintain refinement of compositions, it is sufficient to prove refinement between corresponding sub-components. A case study applying these two techniques is illustrated using Rodin, the Event-B toolset.

49 citations


Proceedings ArticleDOI
26 Jul 2010
TL;DR: The goal of the formalization is to provide a concise and mathematically rigorous reference augmenting the prose of the official language standard, and ultimately to aid developers of Verilog-based tools; e.g., simulators, test generators, and verification tools.
Abstract: This paper describes a formal executable semantics for the Verilog hardware description language. The goal of our formalization is to provide a concise and mathematically rigorous reference augmenting the prose of the official language standard, and ultimately to aid developers of Verilog-based tools; e.g., simulators, test generators, and verification tools. Our semantics applies equally well to both synthesizable and behavioral designs and is given in a familiar, operational style within a logic, providing important additional benefits above and beyond static formalization. In particular, it is executable and searchable, so that one can ask questions about how a (possibly nondeterministic) Verilog program can legally behave under the formalization. The formalization should not be seen as the final word on Verilog, but rather as a starting point and basis for community discussions on the Verilog semantics.

Book ChapterDOI
29 Nov 2010
TL;DR: This article describes the variability modelling features of the ABS Modelling framework, which consists of four languages, namely, μTVL for describing feature models at a high level of abstraction, the Delta Modelling Language DML for describing variability of the 'code' base in terms of delta modules.
Abstract: The HATS project aims at developing a model-centric methodology for the design, implementation and verification of highly configurable systems, such as software product lines, centred around the Abstract Behavioural Specification (ABS) modelling language. This article describes the variability modelling features of the ABS Modelling framework. It consists of four languages, namely, μTVL for describing feature models at a high level of abstraction, the Delta Modelling Language DML for describing variability of the 'code' base in terms of delta modules, the Product Line Configuration Language CL for linking feature models and delta modules together, and the Product Selection Language PSL for describing a specific product to extract from a product line. Both formal semantics and examples of each language are presented.

Journal ArticleDOI
01 Feb 2010
TL;DR: This paper studies the related model-checking problem (pushdown module checking) with respect to properties expressed by CTL and CTL* formulas and shows that pushdown module checking against CTL (resp., CTL*) is 2Exptime-complete (resp., 3Exptime-complete).
Abstract: Model checking is a useful method to verify automatically the correctness of a system with respect to a desired behavior, by checking whether a mathematical model of the system satisfies a formal specification of this behavior. Many systems of interest are open, in the sense that their behavior depends on the interaction with their environment. The model checking problem for finite-state open systems (called module checking) has been intensively studied in the literature. In this paper, we focus on open pushdown systems and we study the related model-checking problem (pushdown module checking, for short) with respect to properties expressed by CTL and CTL* formulas. We show that pushdown module checking against CTL (resp., CTL*) is 2Exptime-complete (resp., 3Exptime-complete). Moreover, we prove that for a fixed CTL or CTL* formula, the problem is Exptime-complete.

Book ChapterDOI
08 Nov 2010
TL;DR: A framework based on the Calculus of Inductive Constructions and its associated tool, the Coq proof assistant, is presented to allow certification of model transformations in the context of Model-Driven Engineering.
Abstract: We present a framework based on the Calculus of Inductive Constructions (CIC) and its associated tool, the Coq proof assistant, to allow certification of model transformations in the context of Model-Driven Engineering (MDE). The approach is based on a semi-automatic translation process from metamodels, models and transformations of the MDE technical space into types, propositions and functions of the CIC technical space. We describe this translation and illustrate its use in a standard case study.

Book ChapterDOI
29 Nov 2010
TL;DR: The main concepts of ASLan++ are introduced using a small but very instructive running example, abstracted from a company intranet scenario, that features non-linear and inter-dependent workflows, communication security at different abstraction levels including an explicit credentials-based authentication mechanism, dynamic access control policies, and the related security goals.
Abstract: This paper introduces ASLan++, the AVANTSSAR Specification Language. ASLan++ has been designed for formally specifying dynamically composed security-sensitive web services and service-oriented architectures, their associated security policies, as well as their security properties, at both communication and application level. We introduce the main concepts of ASLan++ using a small but very instructive running example, abstracted from a company intranet scenario, that features non-linear and inter-dependent workflows, communication security at different abstraction levels including an explicit credentials-based authentication mechanism, dynamic access control policies, and the related security goals. This demonstrates the flexibility and expressiveness of the language, and that the resulting models are logically adequate, while on the other hand they are clear to read and feasible to construct for system designers who are not experts in formal methods.

Proceedings ArticleDOI
26 Jul 2010
TL;DR: This work describes a temporal monitoring framework for the SystemC specification language defined by Tabakov et al. at FMCAD'08, and argues that the additional expressive power and flexibility of the framework do not incur a serious performance hit.
Abstract: Monitoring temporal SystemC properties is crucial for the validation of functional and transaction-level models, yet the current SystemC standard provides no support for temporal specifications. In this work we describe a temporal monitoring framework for the SystemC specification language defined by Tabakov et al. at FMCAD'08. Our framework uses a very minimal modification of the SystemC kernel, exposing event notifications and simulation phases. The user code is instrumented to allow observation of the relevant parts of the model state. As proof of concept, we use the framework to specify and check properties of two SystemC models. We show that monitoring SystemC properties using this framework has reasonable overhead (0.01% – 1%) and has decreasing marginal cost. Finally, we demonstrate that monitoring at different levels of abstraction requires very small changes to the specification and the generated monitors. Based on our empirical results we argue that the additional expressive power and flexibility of the framework do not incur a serious performance hit.

Journal ArticleDOI
01 Oct 2010
TL;DR: This paper presents a NoC router with an explicit SDRAM-aware flow control based on priority-based arbitration, which significantly improves memory latency and utilization compared to the conventional NoC design with no SDRAM-aware router.
Abstract: Networks-on-chip (NoCs) may interface with lots of synchronous dynamic random access memories (SDRAM) to provide enough memory bandwidth and guaranteed quality-of-service for future systems-on-chip (SoCs). SDRAM is commonly controlled by a memory subsystem that schedules memory requests to improve memory efficiency and latency. However, a memory subsystem is still a performance bottleneck in the entire NoC. Therefore, memory-aware NoC optimization has attracted considerable attention. This paper presents a NoC router with an explicit SDRAM-aware flow control. Based on priority-based arbitration, our SDRAM-aware flow controller schedules memory requests to prevent bank conflict, data contention, and short turn-around bank interleaving. Moreover, our multi-stage scheduling scheme further improves memory performance and saves NoC hardware costs. Experimental results show that our cost-efficient SDRAM-aware NoC design significantly improves memory latency and utilization compared to the conventional NoC design with no SDRAM-aware router.

Book ChapterDOI
21 Jun 2010
TL;DR: This chapter presents standard performance metrics, discusses proposed security metrics that are suitable for quantification, formulates metrics, such as cost and an abstract combined performance and security measure, that explicitly express the tradeoff, and shows that system parameters can be found that optimise those metrics.
Abstract: A tradeoff is a situation that involves losing one quality or aspect of something in return for gaining another quality or aspect. Speaking about the tradeoff between performance and security indicates that both performance and security can be measured, and that to increase one, we have to pay in terms of the other. While established metrics for the performance of systems exist, this is not quite the case for security. In this chapter we present standard performance metrics and discuss proposed security metrics that are suitable for quantification. The dilemma of inferior metrics can be solved by considering indirect metrics such as the computation cost of security mechanisms. Security mechanisms such as encryption or security protocols come at a cost in terms of computing resources. Quantification of performance has long been done by means of stochastic models. With growing interest in the quantification of security, stochastic modelling has been applied to security issues as well. This chapter reviews existing approaches in the combined analysis and evaluation of performance and security. We find that most existing approaches take either security or performance as given and investigate the respective other. For instance, [34] investigates the performance of a server running a security protocol, while [21] quantifies security without considering the cost of increased security. For special applications, mobile ad-hoc networks in [5] and the email system in [32], we will see that models exist which can be used to explore the performance-security tradeoff. To illustrate general aspects of the security-performance tradeoff we set up a simple Generalised Stochastic Petri Net (GSPN) model that allows us to study both performance and security, and especially the tradeoff between them. We formulate metrics, such as cost and an abstract combined performance and security measure, that explicitly express the tradeoff, and we show that system parameters can be found that optimise those metrics. These parameters are optimal for neither performance nor security alone, but for the combination of both.
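
The flavour of such a combined metric can be shown with a deliberately simple model; all formulas below are assumptions made for illustration, not taken from the chapter. A security parameter x (say, a rekeying rate) raises delay but lowers compromise risk, and a weighted cost over both has an interior optimum.

    def delay(x):            # performance penalty grows with security effort
        return 1.0 + 2.0 * x

    def risk(x):             # probability of compromise falls with effort
        return 1.0 / (1.0 + 5.0 * x)

    def cost(x, w_perf=1.0, w_sec=10.0):
        """Combined metric: weighted sum exposing the tradeoff."""
        return w_perf * delay(x) + w_sec * risk(x)

    best = min((x / 100 for x in range(1, 500)), key=cost)
    print(best, cost(best))  # optimum (~0.8) is best for neither metric alone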

Proceedings ArticleDOI
26 Jul 2010
TL;DR: It is demonstrated that ARPRET not only achieves completely predictable execution of PRET-C programs, but also improves the throughput when compared to the pure software execution of PRET-C, the proposed language for predictable and lightweight multi-threading in C.
Abstract: We propose a new language called Precision Timed C (PRET-C), for predictable and lightweight multi-threading in C. PRET-C supports synchronous concurrency, preemption, and a high-level construct for logical time. In contrast to existing synchronous languages, PRET-C offers C-based shared memory communication between concurrent threads that is guaranteed to be thread safe. Thanks to the proposed synchronous semantics and a Worst Case Reaction Time (WCRT) analyzer (not presented here), the mapping of logical time to physical time can be achieved much more easily than with plain C. Associated with the PRET-C programming language, we present a dedicated target architecture, called ARPRET, which combines a hardware accelerator with an existing softcore processor. This allows us to improve the throughput while preserving predictability. With extensive benchmarking, we then demonstrate that ARPRET not only achieves completely predictable execution of PRET-C programs, but also improves the throughput when compared to the pure software execution of PRET-C. The PRET-C software approach is also significantly more efficient in comparison to two other lightweight concurrent C variants (namely SC and Protothreads), as well as the well-known Esterel synchronous programming language.

Journal ArticleDOI
01 Oct 2010
TL;DR: A flexible test architecture named the test access control system for 3-D integrated circuits (TACS-3D) is proposed, which reveals up to 54% test time improvement under the same TSV usage.
Abstract: 3-D integration provides another way to put more devices in a smaller footprint. However, it also introduces new challenges in testing. A flexible test architecture named the test access control system for 3-D integrated circuits (TACS-3D) is proposed for 3-D integrated circuit (IC) testing. Integration of heterogeneous design-for-testability methods for logic, memory, and through-silicon via (TSV) testing further reduces the usage of test pins and TSVs. To highly reuse pre-bond test circuits in post-bond test, an innovative linking mechanism shares TSVs and test pins of the 3-D IC. No matter how many layers there are in the 3-D IC, a large portion of TSVs and test pins is reserved for data application. Therefore, smaller post-bond test time is expected. A test chip composed of a network security processor platform is taken as an example. Test overhead increases of less than 0.4% in area and time are observed between the 2-D and 3-D cases. Compared with straightforward direct access, TACS-3D reveals up to 54% test time improvement under the same TSV usage.

Journal ArticleDOI
01 Feb 2010
TL;DR: Efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems, and the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures are described.
Abstract: This paper describes efficient procedures for model checking Markov reward models, that allow us to evaluate, among others, the performability of computer-communication systems. We present the logic CSRL (Continuous Stochastic Reward Logic) to specify performability measures. It provides flexibility in measure specification and paves the way for the numerical evaluation of a wide variety of performability measures. The formal measure specification in CSRL also often helps in reducing the size of the Markov reward models that need to be numerically analysed. The paper presents background on Markov-reward models, as well as on the logic CSRL (syntax and semantics), before presenting an important duality result between reward and time. We discuss CSRL model-checking algorithms, and present five numerical algorithms and their computational complexity for verifying time- and reward-bounded until-properties, one of the key operators in CSRL. The versatility of our approach is illustrated through a performability case study.
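
For orientation, the key CSRL operator has the following shape; this is a sketch of its standard semantics in LaTeX notation, following the usual CSRL presentations rather than quoting this paper.

    % Probabilistic time- and reward-bounded until: t bounds time, r bounds reward.
    %   \mathcal{P}_{\bowtie p}\bigl(\Phi\, \mathcal{U}^{\leq t}_{\leq r}\, \Psi\bigr)
    % A path \sigma satisfies the until formula iff it reaches a \Psi-state at
    % some time x \leq t with accumulated reward at most r, passing only
    % through \Phi-states on the way:
    \sigma \models \Phi\, \mathcal{U}^{\leq t}_{\leq r}\, \Psi
      \iff
      \exists\, x \leq t :\;
        \sigma@x \models \Psi \;\wedge\;
        \bigl(\forall\, x' < x :\; \sigma@x' \models \Phi\bigr) \;\wedge\;
        y(\sigma, x) \leq r

Here σ@x denotes the state occupied at time x and y(σ, x) the reward accumulated up to time x; the duality result mentioned in the abstract swaps the roles of the time bound t and the reward bound r.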

Proceedings ArticleDOI
26 Jul 2010
TL;DR: Bounded model checking (BMC) based on Satisfiability Modulo Theories (SMT) solvers is implemented, working on a mixed integer-real model that is generated for programs with floating-point types.
Abstract: Software model checking has recently been successful in discovering bugs in production software. Most tools have targeted heap-related programming mistakes and control-heavy programs. However, real-time and embedded controllers implemented in software are susceptible to computational numeric instabilities. We target verification of numerical programs that use floating-point types, to detect loss of numerical precision incurred in such programs. Techniques based on abstract interpretation have been used in the past for such analysis. We use bounded model checking (BMC) based on Satisfiability Modulo Theories (SMT) solvers, which work on a mixed integer-real model that we generate for programs with floating points. We have implemented these techniques in our software verification platform. We report experimental results on benchmark examples to study the effectiveness of model checking on such problems, and the effect of various model simplifications on the performance of model checking.
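
To convey the flavour of such an encoding, here is a hedged sketch using the open-source Z3 solver (the paper uses its own verification platform, not Z3): each floating-point operation contributes a bounded relative error term over the reals, and the solver searches for inputs where the computed and ideal values diverge beyond a threshold. The error bound and threshold are invented for the example.

    from z3 import Reals, Solver, Or

    x, e1, ideal, computed = Reals('x e1 ideal computed')
    eps = 1e-7                  # assumed relative error of one float operation

    s = Solver()
    s.add(x >= 0, x <= 1000)
    s.add(ideal == x * x)                    # exact real-valued semantics
    s.add(e1 >= -eps, e1 <= eps)             # bounded rounding error
    s.add(computed == x * x * (1 + e1))      # one rounding step modeled
    s.add(Or(computed - ideal > 0.001,       # precision-loss query
             ideal - computed > 0.001))

    print(s.check())            # sat => a precision-loss witness exists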

Book ChapterDOI
29 Nov 2010
TL;DR: This paper generalizes a previously developed compositional technique and tool set for the automatic verification of control-flow based temporal safety properties to product lines defined by SHVMs, and proves soundness of the generalization.
Abstract: Software product line engineering allows large software systems to be developed and adapted for varying customer needs. The products of a software product line can be described by means of a hierarchical variability model specifying the commonalities and variabilities between the artifacts of the individual products. The number of products generated by a hierarchical model is exponential in its size, which poses a serious challenge to software product line analysis and verification. For an analysis technique to scale, the effort has to be linear in the size of the model rather than linear in the number of products it generates. Hence, efficient product line verification is only possible if compositional verification techniques are applied that allow the analysis of products to be relativized on the properties of their variation points. In this paper, we propose simple hierarchical variability models (SHVM) with explicit variation points as a novel way to describe a set of products consisting of sets of methods. SHVMs provide a trade-off between expressiveness and a clean and simple model suitable for compositional verification. We generalize a previously developed compositional technique and tool set for the automatic verification of control-flow based temporal safety properties to product lines defined by SHVMs, and prove soundness of the generalization. The desired property relativization is achieved by introducing variation point specifications. We evaluate the proposed technique on a number of test cases.

Book ChapterDOI
21 Jun 2010
TL;DR: In this article, the authors review the mathematical model underlying measurement-based quantum computation (MBQC) and the first quantum cryptographic protocol designed using the unique features of MBQC.
Abstract: Measurement-based quantum computation (MBQC) is a novel approach to quantum computation where the notion of measurement is the main driving force of computation. This is in contrast with the more traditional circuit model which is based on unitary operation. We review here the mathematical model underlying MBQC and the first quantum cryptographic protocol designed using the unique features of MBQC.

Journal ArticleDOI
01 Jun 2010
TL;DR: The focus of this paper is on the translation of the official (informal and descriptive) specification of two non-trivial DDR2 properties into STL/PSL assertions, studying both the benefits and the current limits of such an approach.
Abstract: The formal specification component of verification can be exported to simulation through the idea of property checkers. The essence of this approach is the automatic construction of an observer from the specification, in the form of a program that can be interfaced with a simulator and alert the user if the property is violated by a simulation trace. Although not complete, this lighter approach to formal verification has been effectively used in software and digital hardware to detect errors. Recently, the idea of property checkers has been extended to analog and mixed-signal systems. In this paper, we apply the property-based checking methodology to an industrial and realistic example of a DDR2 memory interface. The properties describing the DDR2 analog behavior are expressed in the formal specification language STL/PSL in the form of assertions. The simulation traces generated from an actual DDR2 interface design are checked with respect to the STL/PSL assertions using the AMT tool. The focus of this paper is on the translation of the official (informal and descriptive) specification of two non-trivial DDR2 properties into STL/PSL assertions. We study both the benefits and the current limits of such an approach.

Journal ArticleDOI
01 Jun 2010
TL;DR: Two extensions for an analog equivalence checking method are proposed, enabling the checking of strongly nonlinear circuits with floating nodes such as digital library cells; the introduction of reachability analysis significantly restricts the investigated state space to the relevant parts, avoiding false negatives.
Abstract: In this contribution, two extensions for an analog equivalence checking method are proposed, enabling the checking of strongly nonlinear circuits with floating nodes such as digital library cells. To this end, a structural recognition and mapping of eigenvalues, representing the dynamics, to circuit elements via circuit variables is presented. Additionally, the introduction of reachability analysis significantly restricts the investigated state space to the relevant parts, avoiding false negatives. The newly introduced methods are compared to existing ones by application to industrial examples.

Journal ArticleDOI
01 Feb 2010
TL;DR: A family of compositional approaches, all based on assume-guarantee reasoning, is developed to reduce the verification complexity; it is shown that for the three hierarchical protocols with certain realistic features that were developed for multiple chip-multiprocessors, more than a 20-fold improvement in terms of the number of states visited can be achieved.
Abstract: Multicore architectures are considered inevitable, given that sequential processing hardware has hit various limits. Unfortunately, the memory system of multicore processors is a huge bottleneck. To combat this problem, one needs to design aggressively optimized cache coherence protocols. This introduces the design correctness problem for advanced cache coherence protocols, which will be hierarchically organized for scalable designs. Experience shows that monolithic formal verification will not scale to hierarchical designs. Hence, one needs to handle the complexity of several coherence protocols running concurrently, i.e. hierarchical protocols, using compositional techniques. To solve the problem, we develop a family of compositional approaches, all based on assume-guarantee reasoning, to reduce the verification complexity. We show that for the three hierarchical protocols with certain realistic features that we developed for multiple chip-multiprocessors, more than a 20-fold improvement in terms of the number of states visited can be achieved. Also, to avoid false alarms wasting designer time, we have developed an error trace justification method to eliminate false alarms using heuristics that also capitalize on our assume-guarantee approaches. Our techniques need no special tool support. They can be carried out using the widely used Murphi model checker along with support tools for abstraction and error trace justification that we have built.

Proceedings ArticleDOI
26 Jul 2010
TL;DR: This work demonstrates how a concolic execution tool can be modified to automatically analyze controller implementations and produce test cases achieving a coverage goal, and verify robustness of an implementation under input uncertainties.
Abstract: Software controllers for physical processes are at the core of many safety-critical systems such as avionics, automotive engine control, and process control. Despite their importance, the design and implementation of software controllers remains an art form; dependability is generally poor, and the cost of verifying systems is prohibitive. We illustrate the potential of applying program analysis tools on problems in controller design and implementation by focusing on concolic execution, a technique for systematic testing for software. In particular, we demonstrate how a concolic execution tool can be modified to automatically analyze controller implementations and (a) produce test cases achieving a coverage goal, (b) synthesize ranges for controller variables that can be used to allocate bits in a fixed-point implementation, and (c) verify robustness of an implementation under input uncertainties. We have implemented these algorithms on top of the Splat test generation tool and have carried out preliminary experiments on control software that demonstrates feasibility of the techniques.
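
Task (b), range synthesis for fixed-point bit allocation, can be approximated with plain interval arithmetic; a concolic tool would instead derive ranges from path constraints. The controller structure and constants below are assumptions for illustration.

    import math

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)

        def __mul__(self, o):
            ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
            return Interval(min(ps), max(ps))

    def int_bits(iv):
        """Bits for the integer part (plus sign) of a fixed-point variable."""
        m = max(abs(iv.lo), abs(iv.hi))
        return 1 + max(1, math.ceil(math.log2(m + 1)))

    err = Interval(-10.0, 10.0)                 # assumed sensor error range
    integ = Interval(-100.0, 100.0)             # assumed clamped integrator
    kp, ki = Interval(2.0, 2.0), Interval(0.5, 0.5)
    u = kp * err + ki * integ                   # PI control action
    print(u.lo, u.hi, int_bits(u))              # range -> bit allocation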

Journal ArticleDOI
01 Dec 2010
TL;DR: Two approaches to tool-supported automatic verification of dense real-time systems against scenario-based requirements, where a system is modeled as a network of timed automata (TAs) or as a set of driving live sequence charts (LSCs), and a requirement is specified as a separate monitored LSC chart are proposed.
Abstract: This article proposes two approaches to tool-supported automatic verification of dense real-time systems against scenario-based requirements, where a system is modeled as a network of timed automata (TAs) or as a set of driving live sequence charts (LSCs), and a requirement is specified as a separate monitored LSC chart. We make timed extensions to a kernel subset of the LSC language and define a trace-based semantics. By translating a monitored LSC chart to a behavior-equivalent observer TA and then non-intrusively composing this observer with the original TA-modeled real-time system, the problems of scenario-based verification reduce to computation tree logic (CTL) real-time model checking problems. When the real-time system is modeled as a set of driving LSC charts, we translate these driving charts and the monitored chart into a behavior-equivalent network of TAs by using a "one-TA-per-instance line" approach, and then reduce the problems of scenario-based verification also to CTL real-time model checking problems. We show how we exploit the expressivity of the TA formalism and the CTL query language of the real-time model checker Uppaal to accomplish these tasks. The proposed two approaches are implemented in the Uppaal tool and built as a tool chain, respectively. We carry out a number of experiments with both verification approaches, and the results indicate that these methods are viable, computationally feasible, and the tools are effective.

Journal ArticleDOI
01 Oct 2010
TL;DR: Two new types of linear feedback shift registers are presented, the Single-State-Skip and the Variable- state-Skip LFSRs, which get the well-known high compression efficiency of TSE with substantially reduced test sequences, thus bridging the gap between test data compression and TSE methods.
Abstract: Even though test set embedding (TSE) methods offer very high compression efficiency, their excessively long test application times prohibit their use for testing systems-on-chip (SoC). To alleviate this problem, we present two new types of linear feedback shift registers (LFSRs), the Single-State-Skip and the Variable-State-Skip LFSRs. Both are normal LFSRs with the addition of the State-Skip circuit, which is used instead of the characteristic-polynomial feedback structure for performing successive jumps of constant and variable length in their state sequence. By using Single-State-Skip LFSRs for testing single or multiple identical cores and Variable-State-Skip LFSRs for testing multiple non-identical cores, we get the well-known high compression efficiency of TSE with substantially reduced test sequences, thus bridging the gap between test data compression and TSE methods.
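
Although the State-Skip circuit is a hardware structure, the underlying algebra is easy to demonstrate in software: one LFSR step is multiplication of the state vector by a fixed matrix over GF(2), so a jump of k states is multiplication by the precomputed k-th power of that matrix. The 4-bit taps below are an illustrative example, not the paper's design.

    def step_matrix(taps, n):
        """Companion-style matrix M with state' = M @ state over GF(2)."""
        M = [[0] * n for _ in range(n)]
        for j in taps:               # feedback taps drive the first bit
            M[0][j] = 1
        for i in range(1, n):        # the remaining bits shift down
            M[i][i - 1] = 1
        return M

    def mat_mul(A, B):
        n = len(A)
        return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2
                 for j in range(n)] for i in range(n)]

    def mat_pow(M, k):               # square-and-multiply over GF(2)
        n = len(M)
        R = [[int(i == j) for j in range(n)] for i in range(n)]
        while k:
            if k & 1:
                R = mat_mul(R, M)
            M = mat_mul(M, M)
            k >>= 1
        return R

    M = step_matrix([0, 3], 4)       # taps of an example 4-bit LFSR
    jump = mat_pow(M, 5)             # skip 5 states in one multiplication
    state = [1, 0, 0, 1]
    print([sum(jump[i][j] & state[j] for j in range(4)) % 2 for i in range(4)])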