
Showing papers presented at "Formal Methods in 2013"


Journal ArticleDOI
22 Oct 2013
TL;DR: This work develops an innovative framework based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement.
Abstract: An important research area of Spectrum-Based Fault Localization (SBFL) is the effectiveness of risk evaluation formulas. Most previous studies have adopted an empirical approach, which can hardly be considered as sufficiently comprehensive because of the huge number of combinations of various factors in SBFL. Though some studies aimed at overcoming the limitations of the empirical approach, none of them has provided a completely satisfactory solution. Therefore, we provide a theoretical investigation on the effectiveness of risk evaluation formulas. We define two types of relations between formulas, namely, equivalent and better. To identify the relations between formulas, we develop an innovative framework for the theoretical investigation. Our framework is based on the concept that the determinant for the effectiveness of a formula is the number of statements with risk values higher than the risk value of the faulty statement. We group all program statements into three disjoint sets with risk values higher than, equal to, and lower than the risk value of the faulty statement, respectively. For different formulas, the sizes of their sets are compared using the notion of subset. We use this framework to identify the maximal formulas which should be the only formulas to be used in SBFL.
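
To make the framework concrete, the following Python sketch (not from the paper) builds the three-set partition described above, using the well-known Ochiai formula as an example risk evaluation formula; the spectrum counts and statement names are hypothetical.

```python
# A minimal sketch of the three-set partition, assuming the Ochiai formula
# as the risk evaluation formula; spectra and statement names are made up.
import math

def ochiai(ef, ep, nf, np_):
    """Risk value from spectrum counts: ef/ep = failed/passed tests that
    execute the statement, nf/np_ = failed/passed tests that do not."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

def partition(risks, faulty):
    """Split statements into the sets with risk higher than, equal to,
    and lower than the risk of the faulty statement."""
    rf = risks[faulty]
    higher = {s for s, r in risks.items() if r > rf}
    equal = {s for s, r in risks.items() if r == rf}
    lower = {s for s, r in risks.items() if r < rf}
    return higher, equal, lower

# statement -> (ef, ep, nf, np)
spectra = {"s1": (3, 1, 0, 4), "s2": (2, 3, 1, 2), "s3": (1, 4, 2, 1)}
risks = {s: ochiai(*counts) for s, counts in spectra.items()}
print(partition(risks, faulty="s1"))  # the faulty statement is assumed known
```

Comparing these sets across two formulas via the subset relation is what yields the paper's "equivalent" and "better" relations.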

294 citations


Journal ArticleDOI
16 Feb 2013
TL;DR: The applicability and efficiency of the methods are demonstrated by deploying them to analyse and detect potential weaknesses in a variety of large case studies, including algorithms for energy management in Microgrids and collective decision making for autonomous systems.
Abstract: We present automatic verification techniques for the modelling and analysis of probabilistic systems that incorporate competitive behaviour. These systems are modelled as turn-based stochastic multi-player games, in which the players can either collaborate or compete in order to achieve a particular goal. We define a temporal logic called rPATL for expressing quantitative properties of stochastic multi-player games. This logic allows us to reason about the collective ability of a set of players to achieve a goal relating to the probability of an event’s occurrence or the expected amount of cost/reward accumulated. We give an algorithm for verifying properties expressed in this logic and implement the techniques in a probabilistic model checker, as an extension of the PRISM tool. We demonstrate the applicability and efficiency of our methods by deploying them to analyse and detect potential weaknesses in a variety of large case studies, including algorithms for energy management in Microgrids and collective decision making for autonomous systems.
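
As a rough illustration of the computation behind such rPATL queries, here is a hedged Python sketch of value iteration for a maximum reachability probability on a toy turn-based stochastic game; the game structure and all names are invented, and the production algorithms live in the PRISM-games extension mentioned above.

```python
# Sketch: value iteration for a query like <<C>> Pmax=? [F goal] on a
# turn-based stochastic game. States are owned by the coalition (max),
# the opponent (min), or chance (probabilistic average).
def value_iteration(states, owner, succ, goal, eps=1e-8):
    v = {s: 1.0 if s in goal else 0.0 for s in states}
    while True:
        delta, new = 0.0, {}
        for s in states:
            if s in goal:
                new[s] = 1.0
            elif owner[s] == "max":   # coalition picks the best successor
                new[s] = max(v[t] for t in succ[s])
            elif owner[s] == "min":   # opponent picks the worst successor
                new[s] = min(v[t] for t in succ[s])
            else:                     # chance: succ[s] is {state: prob}
                new[s] = sum(p * v[t] for t, p in succ[s].items())
            delta = max(delta, abs(new[s] - v[s]))
        v = new
        if delta < eps:
            return v

# Hypothetical game: s0 (coalition), s1 (opponent), c (chance).
states = ["s0", "s1", "c", "win", "lose"]
owner = {"s0": "max", "s1": "min", "c": "chance", "win": "max", "lose": "max"}
succ = {"s0": ["s1", "c"], "s1": ["lose", "c"],
        "c": {"win": 0.7, "lose": 0.3}, "win": ["win"], "lose": ["lose"]}
print(value_iteration(states, owner, succ, goal={"win"}))
```

An expected cost/reward operator would be handled analogously, with rewards accumulated in the Bellman updates.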

153 citations


Journal ArticleDOI
01 Oct 2013
TL;DR: HModest is presented, an extension to the Modest modelling language—which is originally designed for stochastic timed systems without complex continuous aspects—that adds differential equations and inclusions as an expressive way to describe the continuous system evolution.
Abstract: The theory of hybrid systems is well-established as a model for real-world systems consisting of continuous behaviour and discrete control. In practice, the behaviour of such systems is also subject to uncertainties, such as measurement errors, or is controlled by randomised algorithms. These aspects can be modelled and analysed using stochastic hybrid systems. In this paper, we present HModest, an extension to the Modest modelling language—which is originally designed for stochastic timed systems without complex continuous aspects—that adds differential equations and inclusions as an expressive way to describe the continuous system evolution. Modest is a high-level language inspired by classical process algebras, thus compositional modelling is an integral feature. We define the syntax and semantics of HModest and show that it is a conservative extension of Modest that retains the compositional modelling approach. To allow the analysis of HModest models, we report on the implementation of a connection to recently developed tools for the safety verification of stochastic hybrid systems, and illustrate the language and the tool support with a set of small, but instructive case studies.
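
For intuition, a trajectory of a stochastic hybrid model can be simulated by interleaving numeric integration of the current mode's differential equation with probabilistic discrete switches. The sketch below is an invented thermostat example in Python, not HModest syntax or semantics.

```python
# Sketch: one trajectory of a tiny stochastic hybrid model. The dynamics,
# thresholds, and switch probability are all hypothetical.
import random

def simulate(horizon=20.0, dt=0.01):
    t, temp, heating = 0.0, 18.0, True
    while t < horizon:
        rate = 2.0 if heating else -1.5          # mode-dependent ODE
        temp += rate * dt                        # explicit Euler step
        if heating and temp >= 22.0:
            if random.random() < 0.95:           # probabilistic switch-off
                heating = False
        elif not heating and temp <= 20.0:
            heating = True
        t += dt
    return round(temp, 2)

print(simulate())
```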

132 citations


Journal ArticleDOI
30 Jul 2013
TL;DR: A novel container-based heap-tracking technique, based on the observation that many memory leaks in Java programs occur due to containers that keep references to unused data entries, that tracks only containers and directly identifies the source of the leak.
Abstract: A memory leak in a Java program occurs when object references that are no longer needed are unnecessarily maintained. Such leaks are difficult to detect because static analysis typically cannot precisely identify these redundant references, and existing dynamic leak detection tools track and report fine-grained information about individual objects, producing results that are usually hard to interpret and lack precision. In this article we introduce a novel container-based heap-tracking technique, based on the fact that many memory leaks in Java programs occur due to incorrect uses of containers, leading to containers that keep references to unused data entries. The novelty of the described work is twofold: (1) instead of tracking arbitrary objects and finding leaks by analyzing references to unused objects, the technique tracks only containers and directly identifies the source of the leak, and (2) the technique computes a confidence value for each container based on a combination of its memory consumption and its elements' staleness (time since last retrieval), while previous approaches do not consider such combined metrics. Our experimental results show that the reports generated by the proposed technique can be very precise: for two bugs reported by Sun, a known bug in SPECjbb 2000, and an example bug from IBM developerWorks, the top containers in the reports include the containers that leak memory.
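
The confidence idea can be sketched in a few lines: track, per container, its size and how stale its entries are. The class below is an illustrative Python analogue (the paper targets Java heaps, and its exact combination function differs).

```python
# Sketch: a container wrapper combining size with entry staleness into a
# leak-confidence score; the scoring function is an illustrative assumption.
import time

class TrackedDict(dict):
    """Dict that records retrieval times to support a confidence score."""
    def __init__(self):
        super().__init__()
        self.last_get = {}                      # key -> time of last retrieval

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        self.last_get.setdefault(key, time.time())

    def __getitem__(self, key):
        self.last_get[key] = time.time()
        return super().__getitem__(key)

    def confidence(self):
        """Grows with container size and with the staleness of its entries."""
        if not self:
            return 0.0
        now = time.time()
        avg_staleness = sum(now - t for t in self.last_get.values()) / len(self)
        return len(self) * avg_staleness        # size proxy x staleness
```

Containers with a persistently growing confidence value would then be reported as likely leak sources.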

129 citations


Journal ArticleDOI
01 Oct 2013
TL;DR: This paper gives an introduction to PTAs and describes techniques for analysing a wide range of quantitative properties, such as "the maximum probability of the airbag failing to deploy within 0.02 seconds", "the maximum expected time for the protocol to terminate" or "the minimum expected energy consumption required to complete all tasks".
Abstract: Probabilistic timed automata (PTAs) are a formalism for modelling systems whose behaviour incorporates both probabilistic and real-time characteristics. Applications include wireless communication protocols, automotive network protocols and randomised security protocols. This paper gives an introduction to PTAs and describes techniques for analysing a wide range of quantitative properties, such as "the maximum probability of the airbag failing to deploy within 0.02 seconds", "the maximum expected time for the protocol to terminate" or "the minimum expected energy consumption required to complete all tasks". We present a temporal logic for specifying such properties and then give a survey of available model-checking techniques for formulae specified in this logic. We then describe two case studies in which PTAs are used for modelling and analysis: a probabilistic non-repudiation protocol and a task-graph scheduling problem.

119 citations


Journal ArticleDOI
30 Jul 2013
TL;DR: The results indicate that FLP and TCP can statistically significantly reduce fault localization costs for 73% and 76% of cases, respectively, and that FLINT significantly outperforms similarity-based localization techniques in 52% of the cases considered in the study.
Abstract: Test case prioritization techniques seek to maximize early fault detection. Fault localization seeks to use test cases already executed to help find the fault location. There is a natural interplay between the two techniques; once a fault is detected, we often switch focus to fault fixing, for which localization may be a first step. In this article we introduce the Fault Localization Prioritization (FLP) problem, which combines prioritization and localization. We evaluate three techniques: a novel FLP technique based on information theory, FLINT (Fault Localization using INformation Theory), that we introduce in this article, a standard Test Case Prioritization (TCP) technique, and a “test similarity technique” used in previous work. Our evaluation uses five different releases of four software systems. The results indicate that FLP and TCP can statistically significantly reduce fault localization costs for 73% and 76% of cases, respectively, and that FLINT significantly outperforms similarity-based localization techniques in 52% of the cases considered in the study.
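
The information-theoretic core of a FLINT-style technique can be sketched as follows: normalize suspiciousness scores into a probability distribution over statements, measure its Shannon entropy as localization uncertainty, and prefer tests expected to reduce that entropy. The oracle `scores_if_run` below is a hypothetical stand-in for FLINT's actual estimate.

```python
# Sketch: entropy of the fault-locality distribution drives prioritization.
import math

def entropy(suspiciousness):
    """Shannon entropy of the normalized suspiciousness distribution."""
    total = sum(suspiciousness.values())
    probs = [s / total for s in suspiciousness.values() if s > 0]
    return -sum(p * math.log2(p) for p in probs)

def pick_next(candidate_tests, scores_if_run):
    """Prioritize the test whose (estimated) resulting distribution has
    the lowest entropy; `scores_if_run` is a hypothetical oracle."""
    return min(candidate_tests, key=lambda t: entropy(scores_if_run[t]))

after = {"t1": {"s1": 0.8, "s2": 0.1, "s3": 0.1},     # nearly localized
         "t2": {"s1": 0.34, "s2": 0.33, "s3": 0.33}}  # still uncertain
print(pick_next(["t1", "t2"], after))  # -> t1, the more informative test
```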

103 citations


Journal ArticleDOI
01 Oct 2013
TL;DR: It is proved that Bayesian SMC can make the probability of giving a wrong answer arbitrarily small, which is essential for scaling up to large Stateflow/Simulink models.
Abstract: We address the problem of model checking stochastic systems, i.e., checking whether a stochastic system satisfies a certain temporal property with a probability greater (or smaller) than a fixed threshold. In particular, we present a Statistical Model Checking (SMC) approach based on Bayesian statistics. We show that our approach is feasible for a certain class of hybrid systems with stochastic transitions, a generalization of Simulink/Stateflow models. Standard approaches to stochastic discrete systems require numerical solutions for large optimization problems and quickly become infeasible with larger state spaces. Generalizations of these techniques to hybrid systems with stochastic effects are even more challenging. The SMC approach was pioneered by Younes and Simmons in the discrete and non-Bayesian case. It solves the verification problem by combining randomized sampling of system traces (which is very efficient for Simulink/Stateflow) with hypothesis testing (i.e., testing against a probability threshold) or estimation (i.e., computing with high probability a value close to the true probability). We believe SMC is essential for scaling up to large Stateflow/Simulink models. While the answer to the verification problem is not guaranteed to be correct, we prove that Bayesian SMC can make the probability of giving a wrong answer arbitrarily small. The advantage is that answers can usually be obtained much faster than with standard, exhaustive model checking techniques. We apply our Bayesian SMC approach to a representative example of stochastic discrete-time hybrid system models in Stateflow/Simulink: a fuel control system featuring hybrid behavior and fault tolerance. We show that our technique enables faster verification than state-of-the-art statistical techniques. We emphasize that Bayesian SMC is by no means restricted to Stateflow/Simulink models. It is in principle applicable to a variety of stochastic models from other domains, e.g., systems biology.
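
A minimal sketch of the Bayesian SMC loop, assuming a Beta(1,1) prior over the unknown satisfaction probability and a simple posterior-threshold stopping rule (the paper's actual test is formulated via Bayes factors); `sample_trace` stands in for simulating a Stateflow/Simulink model and checking the property on the sampled trace.

```python
# Sketch: Bayesian statistical model checking of P(property) > theta.
import math, random

def posterior_prob_gt(theta, successes, failures, grid=2000):
    """P(p > theta | data) under a Beta(1,1) prior, by midpoint
    integration of the Beta(successes+1, failures+1) posterior."""
    a, b = successes + 1, failures + 1
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    step = 1.0 / grid
    acc = 0.0
    for i in range(grid):
        p = (i + 0.5) * step
        if p > theta:
            acc += math.exp(log_norm + (a - 1) * math.log(p)
                            + (b - 1) * math.log(1.0 - p)) * step
    return acc

def bayesian_smc(sample_trace, theta=0.8, confidence=0.99, max_n=5000):
    """Sample traces until the posterior settles on one side of theta."""
    succ = fail = 0
    while succ + fail < max_n:
        if sample_trace():
            succ += 1
        else:
            fail += 1
        post = posterior_prob_gt(theta, succ, fail)
        if post > confidence:
            return True, succ + fail     # accept: P(property) > theta
        if post < 1.0 - confidence:
            return False, succ + fail    # reject
    return None, max_n                   # undecided within the budget

# Hypothetical system whose traces satisfy the property with prob. 0.9.
print(bayesian_smc(lambda: random.random() < 0.9))
```

The error-bound theorem cited above is what lets the stopping threshold be tied to the probability of returning a wrong answer.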

100 citations


Journal ArticleDOI
22 Oct 2013
TL;DR: MOEA can be used to better understand the relationship among performance measures and has been shown to be very effective in creating SEE models.
Abstract: Ensembles of learning machines are promising for software effort estimation (SEE), but need to be tailored for this task to have their potential exploited. A key issue when creating ensembles is to produce diverse and accurate base models. If different performance measures behave differently for SEE, they could be used as a natural source of diversity for creating SEE ensembles. We propose to view SEE model creation as a multiobjective learning problem. A multiobjective evolutionary algorithm (MOEA) is used to better understand the tradeoff among different performance measures by creating SEE models through the simultaneous optimisation of these measures. We show that the performance measures behave very differently, sometimes even presenting opposite trends. They are then used as a source of diversity for creating SEE ensembles. A good tradeoff among different measures can be obtained by using an ensemble of MOEA solutions. This ensemble performs similarly to or better than a model that does not consider these measures explicitly. Moreover, MOEA is flexible, allowing a particular measure to be emphasised if desired. In conclusion, MOEA can be used to better understand the relationship among performance measures and has been shown to be very effective in creating SEE models.
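
The multiobjective ingredient can be illustrated by Pareto-dominance selection over error measures to be minimized; a full MOEA additionally evolves the models, which the sketch below (with hypothetical models and measures) omits.

```python
# Sketch: extracting the Pareto-nondominated SEE models under several
# error measures (lower is better); models and values are invented.

def dominates(a, b):
    """a dominates b: no worse in every measure, strictly better in one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(models):
    """models: name -> tuple of error measures, e.g. (MAE, MMRE)."""
    return {m for m, errs in models.items()
            if not any(dominates(other, errs)
                       for n, other in models.items() if n != m)}

models = {"m1": (0.20, 0.9), "m2": (0.25, 0.5), "m3": (0.30, 1.0)}
print(pareto_front(models))  # -> {'m1', 'm2'}; m3 is dominated by m1
```

Averaging the predictions of the models on the front gives an ensemble in the spirit of the paper's ensemble of MOEA solutions.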

88 citations


Journal ArticleDOI
22 Oct 2013
TL;DR: The results show that users find more relevant functions with higher precision with Portfolio than with Google Code Search and Koders, and by using PageRank, Portfolio is able to rank returned relevant functions more efficiently.
Abstract: Different studies show that programmers are more interested in finding definitions of functions and their uses than variables, statements, or ordinary code fragments. Therefore, developers require support in finding relevant functions and determining how these functions are used. Unfortunately, existing code search engines do not provide enough of this support to developers, thus reducing the effectiveness of code reuse. We provide this support to programmers in a code search system called Portfolio that retrieves and visualizes relevant functions and their usages. We have built Portfolio using a combination of models that address surfing behavior of programmers and sharing related concepts among functions. We conducted two experiments: first, an experiment with 49 C/C++ programmers to compare Portfolio to Google Code Search and Koders using a standard methodology for evaluating information-retrieval-based engines; and second, an experiment with 19 Java programmers to compare Portfolio to Koders. The results show with strong statistical significance that users find more relevant functions with higher precision with Portfolio than with Google Code Search and Koders. We also show that by using PageRank, Portfolio is able to rank returned relevant functions more efficiently.
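
To illustrate the ranking ingredient, here is plain PageRank over a small hypothetical call graph; Portfolio combines a navigation model of this kind with spreading activation among related functions, which the sketch omits.

```python
# Sketch: PageRank over a call graph to rank functions; the damping
# factor and the graph are illustrative.

def pagerank(graph, d=0.85, iters=50):
    """graph: function -> list of called functions (outgoing edges)."""
    n = len(graph)
    rank = {f: 1.0 / n for f in graph}
    for _ in range(iters):
        new = {f: (1 - d) / n for f in graph}
        for f, callees in graph.items():
            if callees:
                share = d * rank[f] / len(callees)
                for g in callees:
                    new[g] += share
            else:                         # dangling node: spread uniformly
                for g in graph:
                    new[g] += d * rank[f] / n
        rank = new
    return rank

calls = {"parse": ["lex"], "lex": [], "main": ["parse", "lex"]}
print(sorted(pagerank(calls).items(), key=lambda kv: -kv[1]))
```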

83 citations


Book ChapterDOI
01 Jan 2013
TL;DR: This paper first presents design principles, syntax and operational semantics of SCEL, then shows how a dialect can be defined by appropriately instantiating the features of the language left open to deal with different application domains and uses this dialect to model a simple, yet illustrative, example application.
Abstract: SCEL is a new language specifically designed to model autonomic components and their interaction. It brings together various programming abstractions that make it possible to directly represent knowledge, behaviors, and aggregations according to specific policies. It also naturally supports programming self-awareness, context-awareness, and adaptation. In this paper, we first present the design principles, syntax, and operational semantics of SCEL. Then, we show how a dialect can be defined by appropriately instantiating the features of the language we left open to deal with different application domains, and we use this dialect to model a simple, yet illustrative, example application. Finally, we demonstrate that adaptation can be naturally expressed in SCEL.

79 citations


Book ChapterDOI
01 Jan 2013
TL;DR: This paper outlines a new feature of the Rodin tool, the theory component, that allows users to extend the mathematical language supported by the tool; using theories, users may define new data types and polymorphic operators in a systematic and practical way.
Abstract: The Rodin tool for Event-B supports formal modelling and proof using a mathematical language that is based on predicate logic and set theory. Although Rodin has in-built support for a rich set of operators and proof rules, for some application areas there may be a need to extend the set of operators and proof rules supported by the tool. This paper outlines a new feature of the Rodin tool, the theory component, that allows users to extend the mathematical language supported by the tool. Using theories, Rodin users may define new data types and polymorphic operators in a systematic and practical way. Theories also allow users to extend the proof capabilities of Rodin by defining new proof rules that get incorporated into the proof mechanisms. Soundness of new definitions and rules is provided through validity proof obligations.

Journal ArticleDOI
01 Feb 2013
TL;DR: This paper describes the framework and shows how it can be applied for flexible proof production and checking for two different SMT solvers, clsat and cvc3, and reports empirical results showing good performance relative to solver execution time.
Abstract: Producing and checking proofs from SMT solvers is currently the most feasible method for achieving high confidence in the correctness of solver results. The diversity of solvers and relative complexity of SMT over, say, SAT means that flexibility, as well as performance, is a critical characteristic of a proof-checking solution for SMT. This paper describes such a solution, based on a Logical Framework with Side Conditions (LFSC). We describe the framework and show how it can be applied for flexible proof production and checking for two different SMT solvers, clsat and cvc3. We also report empirical results showing good performance relative to solver execution time.

Journal ArticleDOI
21 Mar 2013
TL;DR: This work proposes several novel algorithms to generate ranking functions for relations over machine integers: a complete method based on a reduction to Presburger arithmetic, and a template-matching approach for predefined classes of ranking functions based on reduction to SAT- and QBF-solving.
Abstract: Ranking function synthesis is a key component of modern termination provers for imperative programs. While it is well-known how to generate linear ranking functions for relations over (mathematical) integers or rationals, efficient synthesis of ranking functions for machine-level integers (bit-vectors) is an open problem. This is particularly relevant for the verification of low-level code. We propose several novel algorithms to generate ranking functions for relations over machine integers: a complete method based on a reduction to Presburger arithmetic, and a template-matching approach for predefined classes of ranking functions based on reduction to SAT- and QBF-solving. The utility of our algorithms is demonstrated on examples drawn from Windows device drivers.
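
The reduction idea — find template coefficients such that every transition strictly decreases the ranking function — can be written as a single quantified bit-vector query. The sketch below uses the z3 Python bindings (`pip install z3-solver`) on an invented one-variable loop; the paper's own algorithms reduce to Presburger arithmetic or to SAT/QBF instead.

```python
# Sketch: synthesize c for a linear template r(x) = c*x over bit-vectors,
# assuming the toy transition "while x > 0 (unsigned): x := x - 1".
from z3 import And, BitVec, ForAll, Implies, Solver, UGT, sat

W = 8                                     # bit-width of the machine integers
x, xp = BitVec("x", W), BitVec("xp", W)   # pre- and post-state
c = BitVec("c", W)                        # template coefficient to find

trans = And(UGT(x, 0), xp == x - 1)       # hypothetical loop transition

s = Solver()
# exists c . forall x, xp : trans(x, xp) -> c*x > c*xp  (unsigned compare)
s.add(ForAll([x, xp], Implies(trans, UGT(c * x, c * xp))))
if s.check() == sat:
    print("ranking coefficient c =", s.model()[c])
```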

Journal ArticleDOI
30 Jul 2013
TL;DR: In the framework of bounded satisfiability checking, it is shown how a descriptive model can be refined into an operational one, and how the correctness of such a refinement can be verified for the bounded case, setting the stage for a stepwise system development method based on a bounded model refinement.
Abstract: We introduce bounded satisfiability checking, a verification technique that extends bounded model checking by also allowing the analysis of a descriptive model, consisting of temporal logic formulae, instead of the more customary operational model, consisting of a state transition system. We define techniques for encoding temporal logic formulae into Boolean logic that support the use of a bi-infinite time domain and of metric time operators. In the framework of bounded satisfiability checking, we show how a descriptive model can be refined into an operational one, and how the correctness of such a refinement can be verified for the bounded case, setting the stage for a stepwise system development method based on bounded model refinement. Finally, we show how the adoption of a modular approach can make the bounded refinement process more manageable and efficient. All introduced concepts are extensively applied to a set of case studies and thoroughly exercised through Zot, our SAT solver-based verification toolset.
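
The bounded flavour of the approach can be illustrated by unrolling a transition relation k steps and checking a temporal property over all bounded paths. The brute-force enumeration below (on an invented two-bit counter) stands in for the Boolean encoding that a tool like Zot hands to a SAT solver, and it ignores bi-infinite time and metric operators.

```python
# Sketch: bounded check of "eventually prop" by explicit path unrolling;
# a stand-in for the propositional encoding used in practice.

def bounded_check_eventually(init, trans, prop, k):
    """Check that every path of length k from init satisfies F prop."""
    def paths(state, depth):
        if depth == 0:
            yield (state,)
            return
        for nxt in trans(state):
            for rest in paths(nxt, depth - 1):
                yield (state,) + rest
    return all(any(prop(s) for s in path)
               for s0 in init for path in paths(s0, k))

# Hypothetical two-bit counter that must reach value 3 within 3 steps.
step = lambda s: [(s + 1) % 4]
print(bounded_check_eventually(init=[0], trans=step,
                               prop=lambda s: s == 3, k=3))
```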

Journal ArticleDOI
30 Jul 2013
TL;DR: This article empirically investigates customizing the DRD by documenting only the information items that will probably be required for executing an activity; such a reduction in DRD information should mitigate some of the inhibitors that currently prevent practitioners from documenting design decision rationale.
Abstract: A complete and detailed (full) Design Rationale Documentation (DRD) could support many software development activities, such as an impact analysis or a major redesign. However, this is typically too onerous for systematic industrial use, as it is not cost effective to write, maintain, or read. The key idea investigated in this article is that DRD should be developed only to the extent required to support activities particularly difficult to execute or in need of significant improvement in a particular context. The aim of this article is to empirically investigate the customization of the DRD by documenting only the information items that will probably be required for executing an activity. This customization strategy relies on the hypothesis that the value of a specific DRD information item depends on its category (e.g., assumptions, related requirements, etc.) and on the activity it is meant to support. We investigate this hypothesis through two controlled experiments involving a total of 75 master students as experimental subjects. Results show that the value of a DRD information item significantly depends on its category and, within a given category, on the activity it supports. Furthermore, on average among activities, documenting only the information items that have been required at least half of the time (i.e., the information that will probably be required in the future) leads to a customized DRD containing about half the information items of a full documentation. We expect that such a significant reduction in DRD information should mitigate the effects of some inhibitors that currently prevent practitioners from documenting design decision rationale.

Journal ArticleDOI
30 Jul 2013
TL;DR: The article introduces the FiniteSat algorithm for efficient detection of finite satisfiability in class diagrams with class hierarchy and generalization-set constraints, and analyzes its limitations in terms of complex hierarchy structures.
Abstract: Models lie at the heart of the emerging model-driven engineering approach. In order to guarantee precise, consistent, and correct models, there is a need for efficient powerful methods for verifying model correctness. Class diagram is the central language within UML. Its correctness problems involve issues of contradiction, namely the consistency problem, and issues of finite instantiation, namely the finite satisfiability problem. This article analyzes the problem of finite satisfiability of class diagrams with class hierarchy constraints and generalization-set constraints. The article introduces the FiniteSat algorithm for efficient detection of finite satisfiability in such class diagrams, and analyzes its limitations in terms of complex hierarchy structures. FiniteSat is strengthened in two directions. First, an algorithm for identification of the cause for a finite satisfiability problem is introduced. Second, a method for propagation of generalization-set constraints in a class diagram is introduced. The propagation method serves as a preprocessing step that improves FiniteSat performance, and helps developers in clarifying intended constraints. These algorithms are implemented in the FiniteSatUSE tool [BGU Modeling Group 2011b], as part of our ongoing effort for constructing a model-level integrated development environment [BGU Modeling Group 2010a].

Journal ArticleDOI
01 Oct 2013
TL;DR: The paper gives a summary of the existing results about algorithmic analysis of probabilistic pushdown automata and their subclasses.
Abstract: The paper gives a summary of the existing results about algorithmic analysis of probabilistic pushdown automata and their subclasses.

Proceedings Article
21 Nov 2013
TL;DR: This paper presents FERAL, a framework for simulator coupling that enables the integration of simulators with heterogeneous simulation models, and describes its approach for the horizontal and vertical integration of simulation models.
Abstract: Simulation technologies are imperative for embedded systems development. They enable the evaluation of decisions already early in development processes. Simulators are focused on a subset of effects that affect the operation of embedded systems. Accurate prediction of embedded system behavior on system level, however, requires the consideration of multiple effects, e.g. communication behavior, system environments, and functional behavior of all relevant system components. This requires the coupling of specialized simulators to create holistic simulation scenarios. In this paper, we present FERAL, our framework for simulator coupling, which enables the integration of simulators with heterogeneous simulation models. We describe the overall coupling approach of FERAL, its simulation model, and its approach for the horizontal and vertical integration of simulation models. We show the applicability of FERAL by a realistic example that demonstrates the potential of simulator coupling for early fault detection.
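
In miniature, coupling means a coordinator that advances component simulators in lockstep and exchanges their outputs at synchronization points. Everything in the sketch below (a plant and a bang-bang controller) is invented; FERAL's directors and its horizontal/vertical integration are far richer.

```python
# Sketch: two toy "simulators" coupled by a lockstep coordinator.

class Plant:
    """'Continuous' component: a water level driven by an inflow."""
    def __init__(self):
        self.level = 0.0
    def step(self, dt, inflow):
        self.level += (inflow - 0.1 * self.level) * dt
        return self.level

class Controller:
    """Discrete component: bang-bang control of the level."""
    def step(self, level):
        return 1.0 if level < 5.0 else 0.0

def coupled_run(horizon=10.0, dt=0.1):
    plant, ctrl, inflow, t = Plant(), Controller(), 0.0, 0.0
    while t < horizon:
        level = plant.step(dt, inflow)   # advance simulator A
        inflow = ctrl.step(level)        # feed A's output into simulator B
        t += dt
    return round(plant.level, 2)

print(coupled_run())
```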

Book ChapterDOI
17 Jun 2013
TL;DR: A selection of approaches used for modeling biological systems and formalizing their interesting properties in temporal logics and a brief account of high performance model checking techniques are given.
Abstract: Model checking together with other formal methods and techniques is being adapted for applications to biological systems. We present a selection of approaches used for modeling biological systems and formalizing their interesting properties in temporal logics. We also give a brief account of high performance model checking techniques and add a few case studies that demonstrate the use of model checking in computational systems biology. The primary aim is to give a reference for further reading.

Journal ArticleDOI
22 Oct 2013
TL;DR: The string analysis algorithm is implemented, and used to augment an industrial security analysis for Web applications by automatically detecting and verifying sanitizers—methods that eliminate malicious patterns from untrusted strings, making these strings safe to use in security-sensitive operations.
Abstract: We propose a novel technique for statically verifying the strings generated by a program. The verification is conducted by encoding the program in Monadic Second-order Logic (M2L). We use M2L to describe constraints among program variables and to abstract built-in string operations. Once we encode a program in M2L, a theorem prover for M2L, such as MONA, can automatically check if a string generated by the program satisfies a given specification, and if not, exhibit a counterexample. With this approach, we can naturally encode relationships among strings, accounting also for cases in which a program manipulates strings using indices. In addition, our string analysis is path sensitive in that it accounts for the effects of string and Boolean comparisons, as well as regular-expression matches. We have implemented our string analysis algorithm, and used it to augment an industrial security analysis for Web applications by automatically detecting and verifying sanitizers—methods that eliminate malicious patterns from untrusted strings, making these strings safe to use in security-sensitive operations. On the 8 benchmarks we analyzed, our string analyzer discovered 128 previously unknown sanitizers, compared to 71 sanitizers detected by a previously presented string analysis.

Book ChapterDOI
01 Jan 2013
TL;DR: The ParaPhrase project is a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011, that aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns.
Abstract: This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism.

Journal ArticleDOI
01 Feb 2013
TL;DR: In this paper, the authors present a new approach based on a set of effective word-level simplifications that are traditionally employed in automated theorem proving, heuristic quantifier instantiation methods used in SMT solvers, and model finding techniques based on skeletons/templates.
Abstract: In recent years, bit-precise reasoning has gained importance in hardware and software verification. Of renewed interest is the use of symbolic reasoning for synthesising loop invariants, ranking functions, or whole program fragments and hardware circuits. Solvers for the quantifier-free fragment of bit-vector logic exist and often rely on SAT solvers for efficiency. However, many techniques require quantifiers in bit-vector formulas to avoid an exponential blow-up during construction. Solvers for quantified formulas usually flatten the input to obtain a quantified Boolean formula, losing much of the word-level information in the formula. We present a new approach based on a set of effective word-level simplifications that are traditionally employed in automated theorem proving, heuristic quantifier instantiation methods used in SMT solvers, and model finding techniques based on skeletons/templates. Experimental results on two different types of benchmarks indicate that our method outperforms the traditional flattening approach by multiple orders of magnitude of runtime.

Book ChapterDOI
01 Jan 2013
TL;DR: This work adapts the Multi-lane Spatial Logic MLSL, introduced for proving the safety (collision freedom) of traffic manoeuvres on multi-lane motorways, to the setting of country roads with two-way traffic to show that it can separate the purely spatial reasoning from the underlying car dynamics in the safety proof.
Abstract: We adapt the Multi-lane Spatial Logic MLSL, introduced in [1] for proving the safety (collision freedom) of traffic manoeuvres on multi-lane motorways, where all cars drive in one direction, to the setting of country roads with two-way traffic. To this end, we need suitably refined sensor functions and length measurement in MLSL. Our main contribution is to show that also here we can separate the purely spatial reasoning from the underlying car dynamics in the safety proof.

Journal ArticleDOI
01 Feb 2013
TL;DR: This paper proposes a novel approach, that exploits the structure of the scenario to partition and drive the search, both for bounded model checking and k-induction, and fully leverages the advanced features of modern SMT solvers, such as incrementality, unsatisfiable core extraction, and interpolation.
Abstract: Hybrid automata are a widely used framework to model complex critical systems, where continuous physical dynamics are combined with discrete transitions. The expressive power of Satisfiability Modulo Theories (SMT) solvers can be used to symbolically model networks of hybrid automata, using formulas in the theory of reals, and SAT-based verification algorithms, such as bounded model checking and k-induction, can be naturally lifted to the SMT case. In this paper, we tackle the important problem of scenario-based verification, i.e., checking if a network of hybrid automata accepts some desired interactions among the components, expressed as Message Sequence Charts (MSCs). We propose a novel approach that exploits the structure of the scenario to partition and drive the search, both for bounded model checking and k-induction. We also show how to obtain information explaining the reasons for infeasibility in the case of invalid scenarios. The expressive power of the SMT framework allows us to exploit a local time semantics, where the timescales of the automata in the network are synchronized upon shared events. The approach fully leverages the advanced features of modern SMT solvers, such as incrementality, unsatisfiable core extraction, and interpolation. An experimental evaluation demonstrates the effectiveness of the approach in proving both feasibility and unfeasibility, and the adequacy of the automatically generated explanations.

Book ChapterDOI
01 Jan 2013
TL;DR: This work calculates the set of vectors of upper bounds that allow an infinite run to exist, and shows how to construct this set by employing results from previous works, including an algorithm given by Valk and Jantzen for finding the set of minimal elements of an upward closed set.
Abstract: Multiweighted energy games are two-player multiweighted games that concern the existence of infinite runs subject to a vector of lower and upper bounds on the accumulated weights along the run. We assume an unknown upper bound and calculate the set of vectors of upper bounds that allow an infinite run to exist. For both a strict and a weak upper bound we show how to construct this set by employing results from previous works, including an algorithm given by Valk and Jantzen for finding the set of minimal elements of an upward closed set. Additionally, we consider energy games where the weight of some transitions is unknown, and show how to find the set of suitable weights using the same algorithm.
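
The basic Valk-Jantzen-style operation — keeping only the minimal elements that generate an upward closed set — is easy to sketch; the bound vectors below are hypothetical.

```python
# Sketch: minimal elements of an upward closed set of bound vectors,
# compared componentwise.

def leq(u, v):
    return all(a <= b for a, b in zip(u, v))

def minimal_elements(vectors):
    """Keep only vectors not dominated from below by another vector."""
    return [v for v in vectors
            if not any(leq(u, v) and u != v for u in vectors if u is not v)]

# Hypothetical upper-bound vectors that each admit an infinite run:
bounds = [(3, 5), (2, 7), (4, 4), (3, 6), (2, 8)]
print(minimal_elements(bounds))  # -> [(3, 5), (2, 7), (4, 4)]
```

Any upper bound that dominates one of the minimal vectors then also admits an infinite run, which is why the minimal set fully describes the answer.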

Journal ArticleDOI
30 Jul 2013
TL;DR: This article provides and proves a complete set of algebraic laws so that equivalence between pattern expressions can be proven, and defines an always-terminating normalization of pattern expressions to a canonical form that is unique modulo equivalence in first-order logic.
Abstract: In a pattern-oriented software design process, design decisions are made by selecting and instantiating appropriate patterns, and composing them together. In our previous work, we enabled these decisions to be formalized by defining a set of operators on patterns with which instantiations and compositions can be represented. In this article, we investigate the algebraic properties of these operators. We provide and prove a complete set of algebraic laws so that equivalence between pattern expressions can be proven. Furthermore, we define an always-terminating normalization of pattern expressions to a canonical form which is unique modulo equivalence in first-order logic. By a case study, the pattern-oriented design of an extensible request-handling framework, we demonstrate two practical applications of the algebraic framework. First, we can prove the correctness of a finished design with respect to the design decisions made and the formal specification of the patterns. Second, we can even derive the design from these components.

Journal ArticleDOI
01 Oct 2013
TL;DR: This work surveys the complexity of deciding the winner in two-player zero-sum stochastic games on graphs with ω-regular winning conditions specified as parity objectives, which have applications in the design and control of reactive systems, as well as in classes of interest obtained as special cases.
Abstract: We consider two-player zero-sum stochastic games on graphs with ω-regular winning conditions specified as parity objectives. These games have applications in the design and control of reactive systems. We survey the complexity results for the problem of deciding the winner in such games, and in classes of interest obtained as special cases, based on the information and the power of randomization available to the players, on the class of objectives and on the winning mode. On the basis of information, these games can be classified as follows: (a) partial-observation (both players have partial view of the game); (b) one-sided partial-observation (one player has partial-observation and the other player has complete-observation); and (c) complete-observation (both players have complete view of the game). The one-sided partial-observation games have two important subclasses: the one-player games, known as partial-observation Markov decision processes (POMDPs), and the blind one-player games, known as probabilistic automata. On the basis of randomization, (a) the players may not be allowed to use randomization (pure strategies), or (b) they may choose a probability distribution over actions but the actual random choice is external and not visible to the player (actions invisible), or (c) they may use full randomization. Finally, various classes of games are obtained by restricting the parity objective to a reachability, safety, Büchi, or coBüchi condition. We also consider several winning modes, such as sure-winning (i.e., all outcomes of a strategy have to satisfy the winning condition), almost-sure winning (i.e., winning with probability 1), limit-sure winning (i.e., winning with probability arbitrarily close to 1), and value-threshold winning (i.e., winning with probability at least λ, where λ is a given rational).

Journal ArticleDOI
22 Oct 2013
TL;DR: This article develops a partitioning of program paths based on the program output, which produces a semantic signature of a program—describing all the different symbolic expressions that the output can assume along different program paths.
Abstract: Efficient program path exploration is important for many software engineering activities such as testing, debugging, and verification. However, enumerating all paths of a program is prohibitively expensive. In this article, we develop a partitioning of program paths based on the program output. Two program paths are placed in the same partition if they derive the output similarly, that is, the symbolic expression connecting the output with the inputs is the same in both paths. Our grouping of paths is gradually created by a smart path exploration. Our experiments show the benefits of the proposed path exploration in test-suite construction. Our path partitioning produces a semantic signature of a program—describing all the different symbolic expressions that the output can assume along different program paths. To reason about changes between program versions, we can therefore analyze their semantic signatures. In particular, we demonstrate the applications of our path partitioning in testing and debugging of software regressions.
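
A toy version of the partitioning: enumerate paths, compute the symbolic expression of the output along each, and group paths with identical expressions. The two-branch program below is invented, and the paper's smart exploration avoids the full enumeration used here.

```python
# Sketch: grouping paths by the symbolic output expression.
from itertools import product

def symbolic_paths():
    """Toy program:
         if c1: log = "hi"  else: log = "lo"   # does not affect the output
         if c2: out = (a + 1) * 2  else: out = a + 1
       Paths differing only in c1 yield the same output expression."""
    partitions = {}
    for c1, c2 in product([True, False], repeat=2):
        out = "(a + 1) * 2" if c2 else "a + 1"
        partitions.setdefault(out, []).append((c1, c2))
    return partitions

for expr, paths in symbolic_paths().items():
    print(expr, "<-", paths)
# Four paths collapse into two partitions, one per output expression;
# the set of expressions is the program's semantic signature.
```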

Journal ArticleDOI
01 Jun 2013
TL;DR: This work presents the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with Büchi objectives, and improves the algorithm for symbolic SCC computation; the previously known algorithm takes linear symbolic steps, and the new algorithm improves the constants associated with the linear number of steps.
Abstract: We consider Markov decision processes (MDPs) with Büchi (liveness) objectives, and the problem of computing the set of almost-sure winning states from where the objective can be ensured with probability 1. Our contributions are as follows. First, we present the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with Büchi objectives; our algorithm takes $O(n \cdot \sqrt{m})$ symbolic steps as compared to the previously known algorithm that takes $O(n^2)$ symbolic steps, where n is the number of states and m is the number of edges of the MDP. In practice MDPs have constant out-degree, and then our symbolic algorithm takes $O(n \cdot \sqrt{n})$ symbolic steps, as compared to the previously known $O(n^2)$ symbolic steps algorithm. Second, we present a new algorithm, namely the win-lose algorithm, with the following two properties: (a) the algorithm iteratively computes subsets of the almost-sure winning set and its complement, as compared to all previous algorithms that discover the almost-sure winning set only upon termination; and (b) it requires $O(n \cdot \sqrt{K})$ symbolic steps, where K is the maximal number of edges of strongly connected components (SCCs) of the MDP. The win-lose algorithm requires symbolic computation of SCCs. Third, we improve the algorithm for symbolic SCC computation; the previously known algorithm takes linear symbolic steps, and our new algorithm improves the constants associated with the linear number of steps. In the worst case the previously known algorithm takes $5 \cdot n$ symbolic steps, whereas our new algorithm takes $4 \cdot n$ symbolic steps.
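
For orientation, here is the classical explicit-state algorithm that the paper improves upon: repeatedly remove states with no path to the Büchi set, together with their random attractor. Only the supports of the probabilistic transitions matter for almost-sure questions; the MDP below is invented, and the paper's contribution is a symbolic, subquadratic counterpart of this loop.

```python
# Sketch: classical almost-sure Buchi computation on an explicit MDP.
# actions[s] is a list of actions, each given as its successor support set.

def almost_sure_buchi(states, actions, B):
    S, B = set(states), set(B)
    acts = {s: [set(a) for a in actions[s]] for s in S}
    while True:
        # positive-probability backward reachability of B in the MDP graph
        R, changed = B & S, True
        while changed:
            changed = False
            for s in S - R:
                if any(a & R for a in acts[s]):
                    R.add(s)
                    changed = True
        if R == S:
            return S
        # random attractor of the losing states: remove a state once every
        # remaining action risks entering the removed region
        L, changed = S - R, True
        while changed:
            changed = False
            for s in S - L:
                if not acts[s] or all(a & L for a in acts[s]):
                    L.add(s)
                    changed = True
        S -= L
        B &= S
        acts = {s: [a for a in acts[s] if a <= S] for s in S}

mdp = {                               # hypothetical MDP
    "s0": [{"b", "dead"}, {"s0"}],    # risky attempt vs. safe self-loop
    "b": [{"b"}, {"s0"}],
    "dead": [{"dead"}],
}
print(almost_sure_buchi(mdp, mdp, B={"b"}))  # -> {'b'}; s0 cannot win a.s.
```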

Proceedings ArticleDOI
01 Jan 2013
TL;DR: The tool presents itself as a classic image-editing environment with functionalities such as area selection and hyperlink creation, thus reducing the barriers that prevent non-experts in formal methods from using PVS.
Abstract: We present PVSio-web, which extends the simulation component of the PVS proof system with functionalities for rapid prototyping of device user interfaces. The tool presents itself as a classic image-editing environment with functionalities such as area selection and hyperlink creation, thus reducing the barriers that prevent non-experts in formal methods from using PVS. Designers load a picture of the layout of the device user interface under development, specify interactive areas over the layout, and link them to a PVS specification. They can then explore the behaviour of the formal user interface specification through point-and-click interactions. The architecture of the tool is general, and can be used as the basis for extending other verification tools. A demonstration of the capabilities of PVSio-web is presented through an example based on a commercial medical device user interface. Our ultimate aim is to promote and facilitate the use of formal verification tools when developing device user interfaces.