Showing papers presented at "Formal Methods in 2011"


Book ChapterDOI
13 Jun 2011
TL;DR: Describes methods to analyse Markov decision processes, which model both stochastic and nondeterministic behaviour, together with a wide range of their properties, including specifications in the temporal logics PCTL and LTL, probabilistic safety properties, and cost- or reward-based measures.
Abstract: This tutorial provides an introduction to probabilistic model checking, a technique for automatically verifying quantitative properties of probabilistic systems. We focus on Markov decision processes (MDPs), which model both stochastic and nondeterministic behaviour. We describe methods to analyse a wide range of their properties, including specifications in the temporal logics PCTL and LTL, probabilistic safety properties and cost- or reward-based measures. We also discuss multi-objective probabilistic model checking, used to analyse trade-offs between several different quantitative properties. Applications of the techniques in this tutorial include performance and dependability analysis of networked systems, communication protocols and randomised distributed algorithms. Since such systems often comprise several components operating in parallel, we also cover techniques for compositional modelling and verification of multi-component probabilistic systems. Finally, we describe three large case studies which illustrate practical applications of the various methods discussed in the tutorial.

333 citations
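
As a rough illustration of the kind of computation involved, the sketch below runs value iteration to obtain maximum reachability probabilities in an MDP, one of the core analyses in probabilistic model checking. The MDP, its state names, and the convergence threshold are invented for the example, not taken from the tutorial.

```python
# Minimal value iteration for Pmax(reach target) in an MDP.
# mdp[state] = list of actions; an action is a list of (prob, successor).
mdp = {
    "s0": [[(1.0, "s1")], [(0.6, "goal"), (0.4, "fail")]],
    "s1": [[(0.9, "goal"), (0.1, "fail")]],
    "goal": [[(1.0, "goal")]],
    "fail": [[(1.0, "fail")]],
}

def max_reach_prob(mdp, target, eps=1e-8):
    v = {s: (1.0 if s in target else 0.0) for s in mdp}
    while True:
        delta = 0.0
        for s in mdp:
            if s in target:
                continue
            # maximise over nondeterministic actions, average over probabilities
            best = max(sum(p * v[t] for p, t in act) for act in mdp[s])
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < eps:
            return v

print(max_reach_prob(mdp, {"goal"}))  # s0 and s1 both converge to 0.9
```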


Book ChapterDOI
20 Jun 2011
TL;DR: This work provides a general notion of product program that supports a direct reduction of relational verification to standard verification, and illustrates the benefits of the method with selected examples, including non-interference, standard loop optimizations, and a state-of-the-art optimization for incremental computation.
Abstract: Relational program logics are formalisms for specifying and verifying properties about two programs or two runs of the same program. These properties range from correctness of compiler optimizations or equivalence between two implementations of an abstract data type, to properties like non-interference or determinism. Yet the current technology for relational verification remains underdeveloped. We provide a general notion of product program that supports a direct reduction of relational verification to standard verification. We illustrate the benefits of our method with selected examples, including non-interference, standard loop optimizations, and a state-of-the-art optimization for incremental computation. All examples have been verified using the Why tool.

218 citations
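
To give a flavour of the reduction, the sketch below uses self-composition, the simplest instance of a product program, to check non-interference of a toy function: two copies run side by side on inputs that agree on the public part, and the public outputs must agree. The program and input ranges are invented; exhaustive testing on a small box stands in for the deductive verification the paper performs with the Why tool.

```python
# Self-composition: non-interference holds iff any two runs that agree on
# the low (public) input also agree on the low output.
def program(low, high):
    # toy program; 'high' is the secret input
    x = low * 2
    if high > 0:
        x = x + 0  # secret-dependent branch with no publicly visible effect
    return x       # public output

def product_check(low, high1, high2):
    # the "product program": both runs inlined side by side
    assert program(low, high1) == program(low, high2), "interference!"

# exhaustive check on a small input box
for low in range(-5, 6):
    for h1 in range(-5, 6):
        for h2 in range(-5, 6):
            product_check(low, h1, h2)
print("non-interference holds on the tested box")
```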


Book ChapterDOI
20 Jun 2011
TL;DR: A formal model of a distributed car control system in which every car is controlled by adaptive cruise control is developed and it is verified that the control model satisfies its main safety objective and guarantees collision freedom for arbitrarily many cars driving on a street, even if new cars enter the lane from on-ramps or multi-lane streets.
Abstract: Car safety measures can be most effective when the cars on a street coordinate their control actions using distributed cooperative control. While each car optimizes its navigation planning locally to ensure the driver reaches his destination, all cars coordinate their actions in a distributed way in order to minimize the risk of safety hazards and collisions. These systems control the physical aspects of car movement using cyber technologies like local and remote sensor data and distributed V2V and V2I communication. They are thus cyber-physical systems. In this paper, we consider a distributed car control system that is inspired by the ambitions of the California PATH project, the CICAS system, SAFESPOT and PReVENT initiatives. We develop a formal model of a distributed car control system in which every car is controlled by adaptive cruise control. One of the major technical difficulties is that faithful models of distributed car control have both distributed systems and hybrid systems dynamics. They form distributed hybrid systems, which makes them very challenging for verification. In a formal proof system, we verify that the control model satisfies its main safety objective and guarantees collision freedom for arbitrarily many cars driving on a street, even if new cars enter the lane from on-ramps or multi-lane streets. The system we present is in many ways one of the most complicated cyber-physical systems that has ever been fully verified formally.

185 citations


Book ChapterDOI
13 Jun 2011
TL;DR: This chapter gives an introduction to active learning of Mealy machines, an automata model particularly suited for modeling the behavior of realistic reactive systems.
Abstract: In this chapter we give an introduction to active learning of Mealy machines, an automata model particularly suited for modeling the behavior of realistic reactive systems. Active learning is characterized by its alternation of an exploration phase and a testing phase. During exploration phases so-called membership queries are used to construct hypothesis models of a system under learning. In testing phases so-called equivalence queries are used to compare respective hypothesis models to the actual system. These two phases are iterated until a valid model of the target system is produced.

150 citations
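
A minimal sketch of the two kinds of queries, assuming an invented two-state Mealy machine as the system under learning; the equivalence query is approximated by random testing, as is common in practice.

```python
import random

# Black-box system under learning: a 2-state Mealy machine over inputs {a, b}.
# TRANS[(state, input)] = (next_state, output)
TRANS = {
    (0, "a"): (1, "x"), (0, "b"): (0, "y"),
    (1, "a"): (0, "y"), (1, "b"): (1, "x"),
}

def membership_query(word):
    """Exploration phase: feed an input word, observe the output word."""
    state, out = 0, []
    for inp in word:
        state, o = TRANS[(state, inp)]
        out.append(o)
    return "".join(out)

def equivalence_query(hypothesis, tests=1000, max_len=8):
    """Testing phase, approximated by random sampling: return a word on
    which hypothesis and system disagree, or None if none is found."""
    for _ in range(tests):
        word = [random.choice("ab") for _ in range(random.randint(1, max_len))]
        if hypothesis(word) != membership_query(word):
            return word
    return None

# A deliberately wrong first hypothesis: every input outputs "y".
print("counterexample:", equivalence_query(lambda w: "y" * len(w)))
```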


Journal ArticleDOI
01 Apr 2011
TL;DR: A new pseudopolynomial algorithm is presented for solving two-player games played on a weighted graph with mean-payoff objective and with energy constraints, improving the best known worst-case complexity for pseudopolynomial mean-payoff algorithms.
Abstract: In this paper, we study algorithmic problems for quantitative models that are motivated by the applications in modeling embedded systems. We consider two-player games played on a weighted graph with mean-payoff objective and with energy constraints. We present a new pseudopolynomial algorithm for solving such games, improving the best known worst-case complexity for pseudopolynomial mean-payoff algorithms. Our algorithm can also be combined with the procedure by Andersson and Vorobyov to obtain a randomized algorithm with currently the best expected time complexity. The proposed solution relies on a simple fixpoint iteration to solve the log-space equivalent problem of deciding the winner of energy games. Our results imply also that energy games and mean-payoff games can be reduced to safety games in pseudopolynomial time.

148 citations
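
The fixpoint iteration mentioned in the abstract can be sketched as repeatedly lifting a credit function over the game graph until it stabilises; the game, the weights, and the cap on credits below are invented for illustration, and this shows only the general shape of such an algorithm, not the paper's exact procedure.

```python
# Fixpoint iteration for an energy game: compute, per vertex, the least
# initial credit with which player 0 can keep the accumulated energy >= 0
# forever (INF means player 1 wins from there).
INF = float("inf")

def min_credit(vertices, owner, edges, top):
    # edges[v] = list of (weight, successor); owner[v] in {0, 1}
    lift = lambda f, w, u: f[u] - w if f[u] - w > 0 else 0
    f = {v: 0 for v in vertices}
    changed = True
    while changed:
        changed = False
        for v in vertices:
            vals = [lift(f, w, u) for w, u in edges[v]]
            new = min(vals) if owner[v] == 0 else max(vals)
            if new > top:       # above the pseudopolynomial cap: hopeless
                new = INF
            if new > f[v]:
                f[v] = new
                changed = True
    return f

vertices = ["p", "q"]
owner = {"p": 0, "q": 1}
edges = {"p": [(-2, "q"), (1, "p")], "q": [(3, "p"), (-1, "p")]}
top = sum(abs(w) for es in edges.values() for w, _ in es)
print(min_credit(vertices, owner, edges, top))  # {'p': 0, 'q': 1}
```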


Journal ArticleDOI
01 Jun 2011
TL;DR: This work proposes a generic notion of enforcement monitors based on a memory device and finite sets of control states and enforcement operations and specifies their enforcement abilities w.r.t. the general Safety-Progress classification of properties.
Abstract: Runtime enforcement is a powerful technique to ensure that a program will respect a given set of properties. We extend previous work on this topic in several directions. Firstly, we propose a generic notion of enforcement monitors based on a memory device and finite sets of control states and enforcement operations. Moreover, we specify their enforcement abilities w.r.t. the general Safety-Progress classification of properties. Furthermore, we propose a systematic technique to produce a monitor from the automaton recognizing a given safety, guarantee, obligation or response property. Finally, we show that this notion of enforcement monitors is more amenable to implementation and encompasses previous runtime enforcement mechanisms.

121 citations
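
A minimal sketch of the store/dump behaviour of such a monitor for one invented response property, "every req is eventually followed by ack": events are memorised while the property is pending and released once it holds again, so only correct prefixes are ever output.

```python
# Enforcement monitor with a memory device for the response property
# "every 'req' is eventually followed by 'ack'".
def enforce(trace):
    memory, output, pending = [], [], False
    for event in trace:
        if event == "req":
            pending = True
            memory.append(event)          # store: delay the output
        elif event == "ack" and pending:
            output += memory + [event]    # dump: property satisfied again
            memory, pending = [], False
        elif pending:
            memory.append(event)          # keep storing while pending
        else:
            output.append(event)          # dump immediately
    return output                         # longest correct prefix released

print(enforce(["a", "req", "b", "ack", "c", "req", "d"]))
# -> ['a', 'req', 'b', 'ack', 'c']; the unanswered 'req', 'd' stay stored
```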


Book ChapterDOI
20 Jun 2011
TL;DR: It is argued that for certain forms of trace analysis the best weapon is a high level programming language augmented with constructs for temporal reasoning, and SCALA's combination of object oriented and functional programming features makes it an ideal host language for such an API.
Abstract: In this paper we describe TRACECONTRACT, an API for trace analysis, implemented in the SCALA programming language. We argue that for certain forms of trace analysis the best weapon is a high level programming language augmented with constructs for temporal reasoning. A trace is a sequence of events, which may for example be generated by a running program, instrumented appropriately to generate events. The API supports writing properties in a notation that combines an advanced form of data parameterized state machines with temporal logic. The implementation utilizes SCALA's support for defining internal Domain Specific Languages (DSLs). Furthermore SCALA's combination of object oriented and functional programming features, including partial functions and pattern matching, makes it an ideal host language for such an API.

113 citations
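
The real API is a Scala DSL; the sketch below is a rough Python analogue (kept in Python for consistency with the other sketches here) of one data-parameterised state-machine property: every open(f) must eventually be followed by close(f). All names and events are invented.

```python
# Rough analogue of a data-parameterised trace property: each open file
# name becomes part of the monitor state until it is closed again.
def file_monitor(trace):
    open_files = set()   # the data-parameterised part of the state machine
    for i, (op, f) in enumerate(trace):
        if op == "open":
            assert f not in open_files, f"double open of {f} at event {i}"
            open_files.add(f)
        elif op == "close":
            assert f in open_files, f"close of unopened {f} at event {i}"
            open_files.remove(f)
    assert not open_files, f"never closed: {open_files}"  # 'eventually'

file_monitor([("open", "a.txt"), ("open", "b.txt"),
              ("close", "a.txt"), ("close", "b.txt")])    # passes silently
```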


Proceedings ArticleDOI
01 Jul 2011
TL;DR: This work presents an approach to synthesis from LTL based on specification mining, argues that it is a natural way to discover the designer's intent, and demonstrates its effectiveness on examples from the domains of digital circuits and robotic controllers.
Abstract: Automatic synthesis of a reactive system from its formal specification is appealing but often difficult due to the tedium of writing auxiliary specifications, especially on the environment. In several instances, specifications are found unrealizable as a result of insufficient environmental assumptions. We present an approach to this problem for synthesis from LTL based on specification mining. For a satisfiable but unrealizable specification, a counter-strategy can be computed from the synthesis game as a witness to unrealizability. Our algorithm mines environment assumptions from this counter-strategy as well as user scenarios if they are provided. We argue that our approach is a natural way to discover the designer's intent. We demonstrate the effectiveness of our approach on examples from the domains of digital circuits and robotic controllers.

110 citations


Journal ArticleDOI
01 Dec 2011
TL;DR: This paper presents new monolithic and compositional algorithms based on a reduction of the LTL realizability problem to a game whose winning condition is defined by a universal automaton on infinite words with a k-co-Büchi acceptance condition.
Abstract: In this paper, we present new monolithic and compositional algorithms to solve the LTL realizability problem. Those new algorithms are based on a reduction of the LTL realizability problem to a game whose winning condition is defined by a universal automaton on infinite words with a k-co-Büchi acceptance condition. This acceptance condition asks that runs visit at most k accepting states, so it implicitly defines a safety game. To obtain efficient algorithms from this construction, we need several additional ingredients. First, we study the structure of the underlying automata constructions, and we show that there exists a partial order that structures the state space of the underlying safety game. This partial order can be used to define an efficient antichain algorithm. Second, we show that the algorithm can be implemented in an incremental way by considering increasing values of k in the acceptance condition. Finally, we show that for large LTL formulas that are written as conjunctions of smaller formulas, we can solve the problem compositionally by first computing winning strategies for each conjunct that appears in the large formula. We report on the behavior of those algorithms on several benchmarks. We show that the compositional algorithms are able to handle LTL formulas that are several pages long.

109 citations
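
Once realizability is reduced to a safety game, the winning region is a greatest fixpoint: states are discarded until the system player can always stay inside what remains. The sketch below solves a tiny invented explicit-state game this way; the paper's antichain representation and the incremental treatment of k are omitted.

```python
# Greatest-fixpoint solver for an explicit-state safety game.
def winning_region(states, system_owned, moves, safe):
    win = {s for s in states if s in safe}
    while True:
        keep = set()
        for s in win:
            succs = moves[s]
            ok = (any(t in win for t in succs) if s in system_owned
                  else all(t in win for t in succs))   # environment moves
            if ok:
                keep.add(s)
        if keep == win:
            return win
        win = keep

states = {"s0", "s1", "bad"}
moves = {"s0": ["s0", "s1"], "s1": ["s0", "bad"], "bad": ["bad"]}
print(winning_region(states, system_owned={"s0", "s1"},
                     moves=moves, safe={"s0", "s1"}))  # {'s0', 's1'}
```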


Journal ArticleDOI
01 Feb 2011
TL;DR: This paper presents how the UPPAAL tool was maintained during 15 years, describes its current architecture with its challenges, and gives the future directions of the tool.
Abstract: UPPAAL is a tool suitable for model checking real-time systems described as networks of timed automata communicating by channel synchronizations and extended with integer variables. Its first version was released in 1995 and its development is still very active. It now features an advanced modeling language, a user-friendly graphical interface, and a performant model checker engine. In addition, several flavors of the tool have matured in recent years. In this paper, we present how we managed to maintain the tool during 15 years, its current architecture with its challenges, and we give the future directions of the tool.

107 citations


Book ChapterDOI
20 Jun 2011
TL;DR: The authors, the organizers and participants, report the experiences from the 1st Verified Software Competition, held in August 2010 in Edinburgh at the VSTTE 2010 conference.
Abstract: We, the organizers and participants, report our experiences from the 1st Verified Software Competition, held in August 2010 in Edinburgh at the VSTTE 2010 conference.

Book ChapterDOI
26 Oct 2011
TL;DR: This work presents an approach to prove safety (collision freedom) of multi-lane motorway traffic with lane-change manoeuvres by introducing a new spatial interval logic based on the view of each car that provides a local safety proof for unboundedly many cars.
Abstract: We present an approach to prove safety (collision freedom) of multi-lane motorway traffic with lane-change manoeuvres. This is ultimately a hybrid verification problem due to the continuous dynamics of the cars. We abstract from the dynamics by introducing a new spatial interval logic based on the view of each car. To guarantee safety, we present two variants of a lane-change controller, one with perfect knowledge of the safety envelopes of neighbouring cars and one which takes only the size of the neighbouring cars into account. Based on these controllers we provide a local safety proof for unboundedly many cars by showing that at any moment the reserved space of each car is disjoint from the reserved space of any other car.
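
At its core, the safety invariant (the reserved spaces of distinct cars are disjoint) reduces to an interval-disjointness check per lane; a minimal sketch with invented car data follows.

```python
# Check that the reserved space [rear, front] of each car is disjoint from
# every other car's reserved space on the same lane.
def reserves_disjoint(cars):
    by_lane = {}
    for name, lane, rear, front in cars:
        by_lane.setdefault(lane, []).append((rear, front, name))
    for lane, intervals in by_lane.items():
        intervals.sort()
        for (_, f1, a), (r2, _, b) in zip(intervals, intervals[1:]):
            if r2 <= f1:                  # claims intersect: unsafe
                return False, (a, b, lane)
    return True, None

cars = [("A", 0, 0.0, 30.0), ("B", 0, 45.0, 80.0), ("C", 1, 20.0, 60.0)]
print(reserves_disjoint(cars))            # (True, None): a safe snapshot
```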

Book ChapterDOI
13 Jun 2011
TL;DR: This chapter introduces the approach that is investigated within the Connect project and that deals with the dynamic synthesis of emergent connectors that mediate the interaction protocols executed by the networked systems.
Abstract: This chapter deals with interoperability among pervasive networked systems, in particular accounting for the heterogeneity of protocols from the application down to the middleware layer, which is mandatory for today's and even more for tomorrow's open and highly heterogeneous networks. The chapter then surveys existing approaches to middleware interoperability, further providing a formal specification so as to allow for rigorous characterization and assessment. In general, existing approaches fail to address interoperability required by today's ubiquitous and heterogeneous networking environments where interaction protocols run by networked systems need to be mediated at both application and middleware layers. To meet such a goal, this chapter introduces the approach that is investigated within the Connect project and that deals with the dynamic synthesis of emergent connectors that mediate the interaction protocols executed by the networked systems.

Book ChapterDOI
13 Jun 2011
TL;DR: It is shown that model-based testing and learning are strongly related, and that learning can be fully expressed in the concepts of model-based testing.
Abstract: Model-based testing is one of the promising technologies to increase the efficiency and effectiveness of software testing. In model-based testing, a model specifies the required behaviour of a system, and test cases are algorithmically generated from this model. Obtaining a valid model, however, is often difficult if the system is complex, contains legacy or third-party components, or if documentation is incomplete. Test-based modelling, also called automata learning, turns model-based testing around: it aims at automatically generating a model from test observations. This paper first gives an overview of formal, model-based testing in general, and of model-based testing for labelled transition system models in particular. Then the practice of model-based testing, the difficulty of obtaining models, and the role of learning are discussed. It is shown that model-based testing and learning are strongly related, and that learning can be fully expressed in the concepts of model-based testing. In particular, test coverage in model-based testing and precision of learned models turn out to be two sides of the same coin.

Book ChapterDOI
20 Jun 2011
TL;DR: This paper explores the idea of using a SAT Modulo Theories (SMT) solver for proving properties of relational specifications by axiomatizing all relational operators in a first-order SMT logic, and taking advantage of the background theories supported by SMT solvers.
Abstract: This paper explores the idea of using a SAT Modulo Theories (SMT) solver for proving properties of relational specifications. The goal is to automatically establish or refute consistency of a set of constraints expressed in a first-order relational logic, namely Alloy, without limiting the analysis to a bounded scope. Existing analysis of relational constraints - as performed by the Alloy Analyzer - is based on SAT solving and thus requires finitizing the set of values that each relation can take. Our technique complements this approach by axiomatizing all relational operators in a first-order SMT logic, and taking advantage of the background theories supported by SMT solvers. Consequently, it can potentially prove that a formula is a tautology - a capability completely missing from the Alloy Analyzer - and generate a counterexample when the proof fails. We also report on our experiments of applying this technique to various systems specified in Alloy.
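
To give a flavour of the idea with an off-the-shelf SMT solver (z3 and its Python API, used here purely for illustration; it is not the paper's tool), one can encode sets as uninterpreted predicates and prove a relational fact for all scopes by showing its negation unsatisfiable.

```python
# Prove transitivity of Alloy's subset ("A in B and B in C implies A in C")
# without any bound on the universe.
from z3 import (BoolSort, ForAll, Function, Implies, Int, IntSort, Not,
                Solver, unsat)

A = Function("A", IntSort(), BoolSort())  # unary relations = sets of atoms
B = Function("B", IntSort(), BoolSort())
C = Function("C", IntSort(), BoolSort())
x = Int("x")

subset_AB = ForAll(x, Implies(A(x), B(x)))
subset_BC = ForAll(x, Implies(B(x), C(x)))
subset_AC = ForAll(x, Implies(A(x), C(x)))

s = Solver()
s.add(subset_AB, subset_BC, Not(subset_AC))  # premises + negated goal
print("valid for every scope" if s.check() == unsat else "counterexample")
```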

Book ChapterDOI
26 Oct 2011
TL;DR: The proof rules are based on several proof obligations that can be implemented in a tool such as the Rodin platform and are illustrated by applying them to prove liveness properties of realistic examples.
Abstract: Event-B is a formal method which is widely used in modelling safety critical systems. So far, the main properties of interest in Event-B are safety related. Even though some liveness properties, e.g., termination, are already within the scope of Event-B, more general liveness properties, e.g., progress or persistence, are currently unsupported. We present in this paper proof rules to reason about important classes of liveness properties. We illustrate our proof rules by applying them to prove liveness properties of realistic examples. Our proof rules are based on several proof obligations that can be implemented in a tool such as the Rodin platform.

Book ChapterDOI
20 Jun 2011
TL;DR: This work formalizes in the Coq proof assistant an idealized model of a hypervisor, and formally establish that the hypervisor ensures strong isolation properties between the different operating systems, and guarantees that requests from guest operating systems are eventually attended.
Abstract: Hypervisors allow multiple guest operating systems to run on shared hardware, and offer a compelling means of improving the security and the flexibility of software systems. We formalize in the Coq proof assistant an idealized model of a hypervisor, and formally establish that the hypervisor ensures strong isolation properties between the different operating systems, and guarantees that requests from guest operating systems are eventually attended.

Book ChapterDOI
20 Jun 2011
TL;DR: This paper presents a proof of linearisability of the lazy implementation of a set due to Heller et al, and develops a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points.
Abstract: Linearisability is the key correctness criterion for concurrent implementations of data structures shared by multiple processes. In this paper we present a proof of linearisability of the lazy implementation of a set due to Heller et al. The lazy set presents one of the most challenging issues in verifying linearisability: a linearisation point of an operation set by a process other than the one executing it. For this we develop a proof strategy based on refinement which uses thread local simulation conditions and the technique of potential linearisation points. The former allows us to prove linearisability for arbitrary numbers of processes by looking at only two processes at a time; the latter dispenses with reasoning about the past. All proofs have been mechanically carried out using the interactive prover KIV.
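
The wait-free contains operation, the source of the difficult linearisation point, can be sketched as follows; add and remove, with their locking and validation, are omitted, and the node layout follows the usual presentation of the lazy list rather than the paper's formal model.

```python
import threading

# Node of the lazy list-based set: a 'marked' bit signals logical deletion.
class Node:
    def __init__(self, key, nxt=None):
        self.key, self.next = key, nxt
        self.marked = False               # logically deleted?
        self.lock = threading.Lock()      # used by add/remove, not contains

def contains(head, key):
    # Wait-free: traverse without locks and decide from the marked bit.
    # A concurrent remove may mark the node mid-traversal, which is why the
    # linearisation point can be set by another thread.
    curr = head
    while curr.key < key:
        curr = curr.next
    return curr.key == key and not curr.marked

# tiny list with -inf / +inf sentinels
tail = Node(float("inf"))
head = Node(float("-inf"), Node(3, Node(7, tail)))
print(contains(head, 7), contains(head, 5))   # True False
```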

Book ChapterDOI
13 Jun 2011
TL;DR: The Abstract Behavioral Specification (ABS) language facilitates precise modeling of the behavior of highly configurable, distributed systems.
Abstract: The Abstract Behavioral Specification (ABS) language facilitates precise modeling of the behavior of highly configurable, distributed systems. Its basis is Core ABS which is a strongly typed, abstract, object-based, concurrent, fully executable modeling language. Spatial variability of ABS models is represented by feature models, delta modules containing modifications of ABS models, product line configurations linking delta modules with product features and product selections specifying actual product instances. Temporal variability is captured by dynamic delta modules that can be applied to perform runtime updates. The feasibility of ABS is demonstrated by modeling an industrial-scale web merchandising system.

Proceedings ArticleDOI
01 Jul 2011
TL;DR: This paper describes the formal verification framework the authors have built on top of publicly-available tools, which gives them the flexibility to work on myriad different problems that occur in microprocessor design.
Abstract: In recent years, leading microprocessor companies have made huge investments to improve the reliability of their products. Besides expanding their validation and CAD tools teams, they have incorporated formal verification methods into their design flows. Formal verification (FV) engineers require extensive training, and FV tools from CAD vendors are expensive. At first glance, it may seem that FV teams are not affordable by smaller companies. We have not found this to be true. This paper describes the formal verification framework we have built on top of publicly-available tools. This framework gives us the flexibility to work on myriad different problems that occur in microprocessor design.

Book ChapterDOI
20 Jun 2011
TL;DR: This paper introduces a time-triggered approach to runtime verification, where the monitor frequently takes samples from the system to analyze the system's health and proposes formal semantics of sampling-based monitoring and discusses how to optimize the sampling period using minimum auxiliary memory.
Abstract: The literature of runtime verification mostly focuses on event-triggered solutions, where a monitor is invoked by every change in the state of the system and evaluates properties of the system. This constant invocation introduces two major drawbacks to the system under scrutiny at run time: (1) significant overhead and (2) unpredictability. To circumvent the latter drawback, in this paper, we introduce a time-triggered approach, where the monitor frequently takes samples from the system to analyze the system's health. We propose formal semantics of sampling-based monitoring and discuss how to optimize the sampling period using minimum auxiliary memory. We show that such optimization is NP-complete and consequently introduce a mapping to Integer Linear Programming. Experiments on benchmark applications show that our approach introduces bounded overhead and effectively reduces involvement of the monitor at run time using negligible auxiliary memory.
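
A minimal sketch of the time-triggered idea: the monitor samples shared state every `period` seconds rather than being invoked on every change. The monitored variable, threshold, and timings are invented; choosing the period so that no relevant change can fall between samples is precisely the optimisation problem studied in the paper.

```python
import threading
import time

state = {"temperature": 20.0}

def monitored_system():
    for t in [20.0, 35.0, 90.0, 40.0]:    # invented state changes
        state["temperature"] = t
        time.sleep(0.05)

def monitor(period, stop):
    # Time-triggered: wake up every `period` seconds and check the property.
    while not stop.is_set():
        if state["temperature"] > 80.0:
            print("violation sampled:", state["temperature"])
        stop.wait(period)                 # sleep until the next sample

stop = threading.Event()
threading.Thread(target=monitor, args=(0.02, stop), daemon=True).start()
monitored_system()
stop.set()                                # shut the monitor down
```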

Book ChapterDOI
20 Jun 2011
TL;DR: Introduces a novel encoding of symbolic transition-based Büchi automata and a novel, "sloppy," transition encoding, both of which result in improved scalability, and describes and extensively tests a new multi-encoding approach utilizing these techniques to create 30 encoding variations.
Abstract: Formal behavioral specifications written early in the system-design process and communicated across all design phases have been shown to increase the efficiency, consistency, and quality of the system under development. To prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. Our focus here is on specifications expressed in linear temporal logic (LTL). We introduce a novel encoding of symbolic transition-based Büchi automata and a novel, "sloppy," transition encoding, both of which result in improved scalability. We also define novel BDD variable orders based on tree decomposition of formula parse trees. We describe and extensively test a new multi-encoding approach utilizing these novel encoding techniques to create 30 encoding variations. We show that our novel encodings translate to significant, sometimes exponential, improvement over the current standard encoding for symbolic LTL satisfiability checking.

Book ChapterDOI
26 Oct 2011
TL;DR: A unifying approach for the static analysis of string values based on abstract interpretation is proposed, and several abstract domains that track different types of information are presented, so that the analysis can address specific properties.
Abstract: In this paper we propose a unifying approach for the static analysis of string values based on abstract interpretation, and we present several abstract domains that track different types of information. In this way, the analysis can be tuned at different levels of precision and efficiency, and it can address specific properties.
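
One of the simplest such domains abstracts a string by its known prefix, with the least upper bound given by the longest common prefix. A minimal invented example (not necessarily one of the paper's domains):

```python
import os.path

def join(p, q):
    """Least upper bound in the prefix domain: longest common prefix."""
    return os.path.commonprefix([p, q])

# abstract analysis of:  x = "cmd_run" if flag else "cmd_rm"
print(repr(join("cmd_run", "cmd_rm")))   # 'cmd_r' -- all we know about x
print(repr(join("cmd_r", "log_r")))      # ''      -- top: nothing known
```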

Book ChapterDOI
20 Jun 2011
TL;DR: The notion of remorsefree dominance between strategies is introduced, where one strategy is preferred over another if it outperforms the other strategy in comparable situations, even if neither strategy is guaranteed to achieve all objectives.
Abstract: Will the cost for observing additional real-world phenomena in a world model be recovered by the resulting increase in the quality of the implementations based on the model? We address the quest for optimal models in light of industrial practices in systems engineering, where the development of control strategies is based on combined models of a system and its environment. We introduce the notion of remorsefree dominance between strategies, where one strategy is preferred over another if it outperforms the other strategy in comparable situations, even if neither strategy is guaranteed to achieve all objectives. We call a world model optimal if it is sufficiently precise to allow for a remorsefree dominating strategy that is guaranteed to remain dominant even if the world model is refined. We present algorithms for the automatic verification and synthesis of dominant strategies, based on tree automata constructions from reactive synthesis.

Book ChapterDOI
03 Oct 2011
TL;DR: This paper provides an overview of several research areas of ASCENS: the SOTA approach to ensemble engineering and the underlying formal model called GEM, formal notions of adaptation and awareness, the SCEL language, quantitative analysis of ensembles, and finally software-engineering methods for ensemble engineering.
Abstract: Today’s developers often face the demanding task of developing software for ensembles: systems with massive numbers of nodes, operating in open and non-deterministic environments with complex interactions, and the need to dynamically adapt to new requirements, technologies or environmental conditions without redeployment and without interruption of the system’s functionality. Conventional development approaches and languages do not provide adequate support for the problems posed by this challenge. The goal of the ASCENS project is to develop a coherent, integrated set of methods and tools to build software for ensembles. To this end we research foundational issues that arise during the development of these kinds of systems, and we build mathematical models that address them. Based on these theories we design a family of languages for engineering ensembles, formal methods that can handle the size, complexity and adaptivity required by ensembles, and software-development methods that provide guidance for developers. In this paper we provide an overview of several research areas of ASCENS: the SOTA approach to ensemble engineering and the underlying formal model called GEM, formal notions of adaptation and awareness, the SCEL language, quantitative analysis of ensembles, and finally software-engineering methods for ensembles.

Book ChapterDOI
26 Oct 2011
TL;DR: This paper presents an approach to compositional contract-based verification of Simulink models that uses Synchronous Data Flow graphs as a formalism to obtain sequential program statements that can be analysed using traditional refinement-based verification techniques.
Abstract: This paper presents an approach to compositional contract-based verification of Simulink models. The verification approach uses Synchronous Data Flow (SDF) graphs as a formalism to obtain sequential program statements that can then be analysed using traditional refinement-based verification techniques. Automatic generation of the proof obligations needed for verification of correctness with respect to contracts, as well as automatic proofs are also discussed.

Journal ArticleDOI
01 Oct 2011
TL;DR: The techniques introduced here doubled the performance of the BMC solver on both SAT and UNSAT problems, and can be integrated into other SAT based BMC tools to achieve similar speedups.
Abstract: Traditional incremental SAT solvers have achieved great success in the domain of Bounded Model Checking (BMC). Recently, modern solvers have introduced advanced preprocessing procedures that have allowed them to obtain high levels of performance. Unfortunately, many preprocessing techniques such as variable and (blocked) clause elimination cannot be directly used in an incremental manner. This work focuses on extending these techniques and Craig interpolation so that they can be used effectively together in incremental SAT solving (in the context of BMC). The techniques introduced here doubled the performance of our BMC solver on both SAT and UNSAT problems. For UNSAT problems, preprocessing had the added advantage that Craig interpolation was able to find the fixed point sooner, reducing the number of incremental SAT iterations. Furthermore, our ideas seem to perform better as the benchmarks become larger, and/or deeper, which is exactly when they are needed. Lastly, our methods can be integrated into other SAT based BMC tools to achieve similar speedups.

Book ChapterDOI
13 Jun 2011
TL;DR: This chapter examines the issue of interoperability in considerable detail, looking initially at the problem space, and in particular the key barriers to interoperability, and then moving on to the solution space, focusing on research in the middleware and semantic interoperability communities.
Abstract: Distributed systems are becoming more complex in terms of both the level of heterogeneity encountered and their high level of dynamism. Taken together, this makes it very difficult to achieve the crucial property of interoperability, that is, enabling two arbitrary systems to work together relying only on their declared service specification. This chapter examines this issue of interoperability in considerable detail, looking initially at the problem space, and in particular the key barriers to interoperability, and then moving on to the solution space, focusing on research in the middleware and semantic interoperability communities. We argue that existing approaches are simply unable to meet the demands of the complex distributed systems of today and that the lack of integration between the work on middleware and semantic interoperability is a clear impediment to progress in this area. We outline a roadmap towards meeting the challenges of interoperability including the need for integration across these two communities, resulting in middleware solutions that are intrinsically based on semantic meaning. We also advocate a dynamic approach to interoperability based on the concept of emergent middleware.

Proceedings ArticleDOI
01 Jul 2011
TL;DR: The UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE) defines a mathematically expressive model of time, the Clock Constraint Specification Language (CCSL), to specify timed annotations on UML diagrams and thus provides them with formally defined timed interpretations.
Abstract: The UML Profile for Modeling and Analysis of Real-Time and Embedded systems (MARTE) defines a mathematically expressive model of time, the Clock Constraint Specification Language (CCSL), to specify timed annotations on UML diagrams and thus provides them with formally defined timed interpretations. Thanks to its expressive capability, the CCSL allows for the specification of static and dynamic properties, of deterministic and non-deterministic behaviors, or of systems with multiple clock domains. Code generation from such multi-clocked specifications (for the purpose of synthesizing a simulator, for instance) is known to be a difficult issue. We address it by using the approach of controller synthesis. In our framework, a timed CCSL specification is regarded as a property whose satisfaction should be enforced for any UML diagram carrying it as annotation. To do so, CCSL statements are first translated into dynamical polynomial systems. Such systems can be manipulated using the model-checker Sigali to synthesize an executable property (a controller) which enforces the satisfaction of the specified timing constraints on the UML diagram with which it is executed.

Proceedings ArticleDOI
20 Sep 2011
TL;DR: It is argued that formal analysis of the range of offered devices can provide a systematic means of comparison; barriers to the use of such techniques are also explored, demonstrating how layers of specification may be used to make it possible to reuse common specifications.
Abstract: This paper is concerned with the scalable and systematic analysis of interactive systems. The motivating problem is the procurement of medical devices. In such situations several different manufacturers offer solutions that support a particular clinical activity. Apart from cost, which is a dominating factor, the variations between devices are relatively subtle and the consequences of particular design features are not clear from manufacturers' manuals, demonstrations or trial uses. Despite their subtlety these differences can be important to the safety and usability of the device. The paper argues that formal analysis of the range of offered devices can provide a systematic means of comparison. The paper also explores barriers to the use of such techniques, demonstrating how layers of specification may be used to make it possible to reuse common specification. Infusion pumps provide a motivating example. A specific model is described and analysed and comparison between competitive devices is discussed.