
Showing papers by "Joost-Pieter Katoen published in 2015"


Book ChapterDOI
18 Jul 2015
TL;DR: PROPhESY, a tool for analyzing parametric Markov chains (MCs), can compute a rational function (i.e., a fraction of two polynomials in the model parameters) for reachability and expected reward objectives and supports the novel feature of conditional probabilities.
Abstract: We present PROPhESY, a tool for analyzing parametric Markov chains (MCs). It can compute a rational function (i.e., a fraction of two polynomials in the model parameters) for reachability and expected reward objectives. Our tool outperforms state-of-the-art tools and supports the novel feature of conditional probabilities. PROPhESY supports incremental automatic parameter synthesis (using SMT techniques) to determine “safe” and “unsafe” regions of the parameter space. All values in these regions give rise to instantiated MCs satisfying or violating the (conditional) probability or expected reward objective. PROPhESY features a web front-end supporting visualization and user-guided parameter synthesis. Experimental results show that PROPhESY scales to MCs with millions of states and several parameters.
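The abstract's central object, a rational function over the model parameters for a reachability objective, can be made concrete on a toy parametric MC; the two-parameter chain below is our own example, not one of PROPhESY's benchmarks:

```python
# Toy parametric MC (our own example, not a PROPhESY benchmark):
#   s0 --p--> target    s0 --(1-p)--> s1
#   s1 --q--> s0        s1 --(1-q)--> sink
# The reachability probability x0 = Pr(s0 |= eventually target) satisfies
#   x0 = p + (1-p)*q*x0,
# which solves to the rational function x0(p, q) = p / (1 - q*(1-p)).

def reach_rational(p, q):
    """Closed-form rational function in the parameters p, q."""
    return p / (1 - q * (1 - p))

def reach_by_iteration(p, q, iters=2000):
    """Value iteration on the instantiated MC, as an independent check."""
    x0 = 0.0
    for _ in range(iters):
        x0 = p + (1 - p) * q * x0
    return x0

assert abs(reach_rational(0.3, 0.5) - reach_by_iteration(0.3, 0.5)) < 1e-12
```

Instantiating (p, q) anywhere in a "safe" region then yields a concrete MC whose reachability value is read off from the rational function, rather than recomputed per instance.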

128 citations


Book ChapterDOI
27 Apr 2015
TL;DR: This paper proposes a scalable approach to repair a model by modifying it with respect to certain side conditions such that the property is satisfied, which avoids expensive computations and is therefore applicable to large models.
Abstract: For discrete-time probabilistic models there are efficient methods to check whether they satisfy certain properties. If a property is refuted, available techniques can be used to explain the failure in form of a counterexample. However, there are no scalable approaches to repair a model, i.e., to modify it with respect to certain side conditions such that the property is satisfied. In this paper we propose such a method, which avoids expensive computations and is therefore applicable to large models. A prototype implementation is used to demonstrate the applicability and scalability of our technique.

55 citations


Book ChapterDOI
24 Aug 2015
TL;DR: In this paper, the authors considered the computational hardness of computing expected outcomes and deciding (universal) (positive) almost-sure termination of probabilistic programs, and showed that computing lower and upper bounds of expected outcomes is also computationally hard.
Abstract: This paper considers the computational hardness of computing expected outcomes and deciding (universal) (positive) almost–sure termination of probabilistic programs. It is shown that computing lower and upper bounds of expected outcomes is \(\varSigma _1^0\)– and \(\varSigma _2^0\)–complete, respectively. Deciding (universal) almost–sure termination as well as deciding whether the expected outcome of a program equals a given rational value is shown to be \(\varPi ^0_2\)–complete. Finally, it is shown that deciding (universal) positive almost–sure termination is \(\varSigma _2^0\)–complete (\(\varPi _3^0\)–complete).

50 citations


Posted Content
TL;DR: In this paper, controller synthesis for stochastic and partially unknown environments is abstracted as a Markov decision process in which the expected performance is measured using a cost function that is unknown prior to run-time exploration of the state space.
Abstract: We consider controller synthesis for stochastic and partially unknown environments in which safety is essential. Specifically, we abstract the problem as a Markov decision process in which the expected performance is measured using a cost function that is unknown prior to run-time exploration of the state space. Standard learning approaches synthesize cost-optimal strategies without guaranteeing safety properties. To remedy this, we first compute safe, permissive strategies. Then, exploration is constrained to these strategies and thereby meets the imposed safety requirements. Exploiting an iterative learning procedure, the resulting policy is safety-constrained and optimal. We show correctness and completeness of the method and discuss the use of several heuristics to increase its scalability. Finally, we demonstrate the applicability by means of a prototype implementation.
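The safety-constrained exploration described above can be sketched in a few lines; the MDP, the unit cost, and the precomputed permissive safe-action map below are our illustration, not the paper's construction:

```python
import random

# Exploration only ever picks actions from a precomputed "safe" set, so
# every explored run respects the safety requirement; within that set,
# Q-learning minimises the a-priori-unknown expected cost.

STATES = range(4)                      # state 3 is the goal
ACTIONS = ("left", "right")
SAFE = {s: ({"right"} if s == 0 else set(ACTIONS)) for s in STATES}

def step(s, a):
    """Cost-1 transition; the dynamics are only revealed at run time."""
    s2 = min(s + 1, 3) if a == "right" else max(s - 1, 0)
    return s2, 1.0

def learn(episodes=2000, alpha=0.5, eps=0.2, gamma=0.95):
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != 3:
            allowed = sorted(SAFE[s])                  # safety constraint
            a = (random.choice(allowed) if random.random() < eps
                 else min(allowed, key=lambda b: Q[(s, b)]))
            s2, cost = step(s, a)
            target = cost + (0.0 if s2 == 3 else
                             gamma * min(Q[(s2, b)] for b in SAFE[s2]))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```

Because the safe map excludes "left" in the initial state, no exploration step is ever unsafe, and the learned policy is optimal among the permitted strategies.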

46 citations


Posted Content
TL;DR: This paper considers the computational hardness of computing expected outcomes and deciding (universal) (positive) almost–sure termination of probabilistic programs and it is shown that computing lower and upper bounds of expected outcomes is \(\varSigma _1^0\)– and \(\varSigma _2^0\)–complete, respectively.
Abstract: This paper considers the computational hardness of computing expected outcomes and deciding (universal) (positive) almost-sure termination of probabilistic programs. It is shown that computing lower and upper bounds of expected outcomes is $\Sigma_1^0$- and $\Sigma_2^0$-complete, respectively. Deciding (universal) almost-sure termination as well as deciding whether the expected outcome of a program equals a given rational value is shown to be $\Pi^0_2$-complete. Finally, it is shown that deciding (universal) positive almost-sure termination is $\Sigma_2^0$-complete ($\Pi_3^0$-complete).

42 citations


Journal ArticleDOI
TL;DR: In this paper, the semantic intricacies of conditioning in probabilistic programs are investigated; an operational semantics in terms of Markov models is presented, and quantitative pre-conditions are shown to coincide with conditional expected rewards.

31 citations


Book ChapterDOI
01 Jan 2015
TL;DR: An operational interpretation as well as a weakest pre-condition semantics are provided for an elementary probabilistic guarded command language and important features such as sampling, conditioning, loop divergence, and non-determinism are treated.
Abstract: We present two views of probabilistic programs and their relationship. An operational interpretation as well as a weakest pre-condition semantics are provided for an elementary probabilistic guarded command language. Our study treats important features such as sampling, conditioning, loop divergence, and non-determinism.

23 citations


Book ChapterDOI
04 Nov 2015
TL;DR: The key idea is to interpret DFTs as directed graphs and exploit graph rewriting to simplify them; a collection of rewrite rules is presented, their correctness addressed, and a simple heuristic given to determine the order of rewriting.
Abstract: Fault trees are a popular industrial technique for reliability modelling and analysis. Their extension with common reliability patterns, such as spare management, functional dependencies, and sequencing -- known as dynamic fault trees (DFTs) -- has an adverse effect on scalability, prohibiting the analysis of complex, industrial cases by, e.g., probabilistic model checkers. This paper presents a novel, fully automated reduction technique for DFTs. The key idea is to interpret DFTs as directed graphs and exploit graph rewriting to simplify them. We present a collection of rewrite rules, address their correctness, and give a simple heuristic to determine the order of rewriting. Experiments on a large set of benchmarks show substantial DFT simplifications, yielding state space reductions and timing gains of up to two orders of magnitude.

21 citations


Posted Content
TL;DR: A quantitative weakest pre-condition semantics is provided and it is shown that an inductive semantics for conditioning in non-deterministic probabilistic programs cannot exist.
Abstract: We investigate the semantic intricacies of conditioning, a main feature in probabilistic programming. We provide a weakest (liberal) pre-condition (w(l)p) semantics for the elementary probabilistic programming language pGCL extended with conditioning. We prove that quantitative weakest (liberal) pre-conditions coincide with conditional (liberal) expected rewards in Markov chains and show that semantically conditioning is a truly conservative extension. We present two program transformations which entirely eliminate conditioning from any program and prove their correctness using the w(l)p-semantics. Finally, we show how the w(l)p-semantics can be used to determine conditional probabilities in a parametric anonymity protocol and show that an inductive w(l)p-semantics for conditioning in non-deterministic probabilistic programs cannot exist.
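The coincidence stated in the abstract can be summarised compactly; the notation below is ours, in the usual pGCL style (with $[G]$ the indicator function of guard $G$), and is a sketch rather than the paper's exact definitions:

```latex
\[
  \mathrm{wp}[\mathtt{observe}\ G](f) = [G]\cdot f,
  \qquad
  \mathrm{wlp}[\mathtt{observe}\ G](f) = [G]\cdot f
\]
\[
  \mathrm{cwp}[P](f) \;=\; \frac{\mathrm{wp}[P](f)}{\mathrm{wlp}[P](1)}
  \qquad \text{(defined when } \mathrm{wlp}[P](1) > 0\text{)}
\]
```

Here $\mathrm{wlp}[P](1)$ is the probability that $P$ does not violate an observation, i.e. the normalising constant of the conditional expected reward in the underlying Markov chain.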

19 citations


Book ChapterDOI
26 May 2015
TL;DR: It is shown that, as far as model checking (and reachability) is concerned, open intervals do not cause any problem, and with minor modifications existing algorithms can be used for model checking interval Markov chains against PCTL formulas.
Abstract: We consider the model checking problem for interval Markov chains with open intervals. Interval Markov chains are generalizations of discrete-time Markov chains where the transition probabilities are intervals instead of constant values. We focus on the case where the intervals are open. At first sight, open intervals present technical challenges, as an optimal (min, max) value for reachability may not exist. We show that, as far as model checking (and reachability) is concerned, open intervals do not cause any problem, and that with minor modifications existing algorithms can be used for model checking interval Markov chains against PCTL formulas.
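The reachability computation behind such model checking can be illustrated by the standard greedy step for maximising over an interval transition distribution; the instance below is our own toy example, not one from the paper:

```python
# For one state of an interval MC, find the transition distribution within
# the intervals that maximises the expected successor value: start every
# successor at its lower bound, then push the remaining mass up to the
# upper bounds in decreasing order of successor value.
# Feasibility (sum of lowers <= 1 <= sum of uppers) is assumed.

def maximise_step(intervals, values):
    """intervals: {succ: (lo, hi)}; values: {succ: current reachability value}."""
    dist = {s: lo for s, (lo, _) in intervals.items()}
    slack = 1.0 - sum(dist.values())        # probability mass still to place
    for s in sorted(intervals, key=lambda s: -values[s]):
        add = min(intervals[s][1] - dist[s], slack)
        dist[s] += add
        slack -= add
    return sum(dist[s] * values[s] for s in dist)

# One state with three successors: a target (value 1), an intermediate
# state (value 0.2), and a sink (value 0).
iv  = {"t": (0.1, 0.6), "o": (0.2, 0.5), "s": (0.1, 0.8)}
val = {"t": 1.0, "o": 0.2, "s": 0.0}
```

Iterating this maximisation per state is exactly the "minor modification" of value iteration that the abstract alludes to; whether the intervals are open or closed changes only which bounds are attained, not the computed value.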

15 citations


Journal ArticleDOI
TL;DR: This paper describes how a smallest possible subset of the commands of a PRISM guarded-command model can be identified which together make the system erroneous, and how the selected commands can be further simplified to obtain a well-understandable counterexample.
Abstract: Providing compact and understandable counterexamples for violated system properties is an essential task in model checking. Existing works on counterexamples for probabilistic systems so far computed either a large set of system runs or a subset of the system's states, both of which are of limited use in manual debugging. Many probabilistic systems are described in a guarded command language like the one used by the popular model checker PRISM. In this paper we describe how a smallest possible subset of the commands can be identified which together make the system erroneous. We additionally show how the selected commands can be further simplified to obtain a well-understandable counterexample.

Book ChapterDOI
02 Sep 2015
TL;DR: This work presents a novel method to synthesize parameter instances (if such exist) of PHA satisfying a multi-objective bounded horizon specification over expected rewards and provides statistical guarantees on the synthesized parameter instances.
Abstract: Technical systems interacting with the real world can be elegantly modelled using probabilistic hybrid automata (PHA). Parametric probabilistic hybrid automata are dynamical systems featuring hybrid discrete-continuous dynamics and parametric probabilistic branching, thereby generalizing PHA by capturing a family of PHA within a single model. Such system models have a broad range of applications, from control systems over network protocols to biological components. We present a novel method to synthesize parameter instances (if such exist) of PHA satisfying a multi-objective bounded horizon specification over expected rewards. Our approach combines three techniques: statistical model checking of model instantiations, a symbolic version of importance sampling to handle the parametric dependence, and SAT-modulo-theory solving for finding feasible parameter instances in a multi-objective setting. The method provides statistical guarantees on the synthesized parameter instances. To illustrate the practical feasibility of the approach, we present experiments showing the potential benefit of the scheme compared to a naive parameter exploration approach.

Proceedings ArticleDOI
22 Jun 2015
TL;DR: A simulator (slimsim) is introduced for a subset of AADL extended with formalized behavioral semantics for nominal and error models; it allows probabilistic analysis using the Monte Carlo method on linear-hybrid, stochastic models.
Abstract: We introduce a simulator (slimsim) for a subset of AADL extended with formalized behavioral semantics for nominal and error models. The simulator allows one to perform probabilistic analysis, using the Monte Carlo method, on linear-hybrid, stochastic models which describe a combination of nominal and error behaviors of hard- and software components. The tool supports the use of different strategies, which control the behavior of the simulator when dealing with various forms of non-determinism. The simulator is tested using benchmarks of the COMPASS toolset, as well as a case study by Airbus Defense and Space.

Journal ArticleDOI
TL;DR: It is argued that graph grammars naturally model dynamic data structures such as lists, trees and combinations thereof and can be exploited to obtain finite abstractions of pointer-manipulating programs, thus enabling model checking.

Journal ArticleDOI
01 Oct 2015
TL;DR: The theoretical foundations of this approach and its correctness are the main focus of this paper, and a prototypical tool entitled Juggrnaut is presented that realizes the approach and shows encouraging experimental verification results.
Abstract: This paper presents a novel abstraction framework for heap data structures. It employs graph grammars, more precisely context-free hyperedge replacement grammars. We will show that this is a very natural formalism for modelling dynamic data structures in an intuitive way. Our approach aims at extending finite-state verification techniques to handle pointer-manipulating programs operating on complex dynamic data structures that are potentially unbounded in their size. The theoretical foundations of our approach and its correctness are the main focus of this paper. In addition, we present a prototypical tool entitled Juggrnaut that realizes our approach and show encouraging experimental verification results for three case studies: a doubly-linked list reversal, the flattening of binary trees, and the Deutsch---Schorr---Waite tree traversal algorithm.

Book ChapterDOI
24 Jun 2015
TL;DR: This paper investigates how counterexamples for properties concerning expected costs (or, equivalently, expected rewards) of events can be computed and proposes heuristic approaches based on path search and best-first search, which are applicable to very large systems when deriving a minimum subsystem becomes infeasible due to the system size.
Abstract: The computation of counterexamples for probabilistic systems has gained a lot of attention during the last few years. All of the proposed methods focus on the situation when the probabilities of certain events are too high. In this paper we investigate how counterexamples for properties concerning expected costs (or, equivalently, expected rewards) of events can be computed. We propose methods to extract a minimal subsystem which already leads to costs beyond the allowed bound. Besides these exact methods, we present heuristic approaches based on path search and on best-first search, which are applicable to very large systems when deriving a minimum subsystem becomes infeasible due to the system size. Experiments show that we can compute counterexamples for systems with millions of states.

Journal ArticleDOI
TL;DR: The modelling and analysis of a microgrid with wind, microturbines, and the main grid as generation resources is reported; the microgrid is modelled as a parallel composition of various stochastic hybrid automata and analysed using the statistical model checker Uppaal-SMC.
Abstract: This paper reports on the modelling and analysis of a microgrid with wind, microturbines, and the main grid as generation resources. The microgrid is modelled as a parallel composition of various stochastic hybrid automata. Extensive simulation runs of the behaviour of the main individual microgrid components give insight into the complex dynamics of the system and provide useful information to determine adequate parameter settings. The analysis of the microgrid focuses on determining the probability of linear temporal logic properties expressed in the logic LTL, using the statistical model checker Uppaal-SMC.

01 Jan 2015
TL;DR: This thesis presents reduction techniques for nondeterministic and probabilistic models based on new equivalence relations and on layering; quotient systems are defined, investigated, and proved to preserve interesting linear-time properties, and a framework of layering is developed for modal transition systems and probabilistic versions thereof.
Abstract: Model checking is an automated verification method guaranteeing that a mathematical model of a system satisfies a formally described property. It can be used to assess both qualitative and quantitative properties of complex software and hardware systems. Model checking suffers from the well-known state space explosion problem where the number of states grows exponentially in the number of program variables, channels and parallel components. Reduction techniques can be used to shrink the state space of system models by hiding redundant information and removing irrelevant details. The reduced state space can then be used for analysis provided it preserves a rich class of properties of interest. This thesis presents reduction techniques for a wide range of nondeterministic and probabilistic models. Our reduction techniques are based on the notions of equivalence relations and layering. Equivalence relations reduce the state space of system models, by aggregating equivalent states into a single state. The reduced state space obtained under an equivalence relation, is called a quotient system. An example equivalence relation that is widely used to reduce the state space of nondeterministic and probabilistic models is bisimulation. On the other hand, layering involves carrying out structural transformations for the systems that are modeled as a network of system models, e.g., distributed systems. As a result of these structural transformations, the new state space obtained is smaller than the original non-layered one. The first part of this thesis focuses on developing new equivalence relations for nondeterministic and Markovian models. For each of these relations, we define a quotient system, investigate its relationship with bisimulation and prove that it preserves interesting linear-time properties. 
In the second part of this thesis we focus on layering based state space reduction for more expressive specification formalisms that support a stepwise refinement methodology. We develop a framework of layering for modal transition systems and probabilistic versions thereof. This involves a layered composition

Book ChapterDOI
12 Oct 2015
TL;DR: Probabilistic programs are sequential programs, written in languages like C, Java, Scala, or ML, with two added constructs: (1) the ability to draw values at random from probability distributions, and (2) the ability to condition values of variables in a program through observations.
Abstract: Probabilistic programs [6] are sequential programs, written in languages like C, Java, Scala, or ML, with two added constructs: (1) the ability to draw values at random from probability distributions, and (2) the ability to condition values of variables in a program through observations. For a comprehensive treatment, see [3]. They have a wide range of applications. Probabilistic programming is at the heart of machine learning for describing distribution functions; Bayesian inference is pivotal in their analysis. Probabilistic programs are central in security for describing cryptographic constructions (such as randomised encryption) and security experiments. In addition, probabilistic programs are an active research topic in quantitative information flow. Quantum programs are inherently probabilistic due to the random outcomes of quantum measurements. Finally, probabilistic programs can be used for approximate computing, e.g., by specifying reliability requirements for programs that allocate data in unreliable memory and use unreliable operations in hardware (so as to save energy dissipation) [1]. Other applications include [4] scientific modeling, information retrieval, bio-informatics, epidemiology, vision, seismic analysis, semantic web, business intelligence, human cognition, and more. Microsoft has started an initiative to improve the usability of probabilistic programming, which has resulted in languages such as R2 [13] and Tabular [5].
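The two added constructs can be sketched in a few lines; this is our illustration, not the semantics of any of the languages cited — a random draw plays the role of sampling, and an observation is implemented by rejecting runs that violate it:

```python
import random

class ObservationViolated(Exception):
    pass

def model():
    x = random.randint(0, 1)          # sample: fair coin
    y = random.randint(0, 1)          # sample: fair coin
    if not (x + y >= 1):              # observe(x + y >= 1)
        raise ObservationViolated
    return x

def posterior_mean(runs=20000):
    """Estimate E[x | x + y >= 1] by rejection sampling."""
    accepted = []
    while len(accepted) < runs:
        try:
            accepted.append(model())
        except ObservationViolated:
            pass
    return sum(accepted) / len(accepted)

# Conditioning on x + y >= 1 rules out the run (0, 0), so the posterior
# probability of x = 1 is 2/3 rather than the prior 1/2.
```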

01 Jan 2015
TL;DR: This paper presents abstraction and refinement of probabilistic automata using modal stochastic games.
Abstract: Abstraction and Refinement of Probabilistic Automata using Modal Stochastic Games

DOI
01 Jan 2015
TL;DR: This report documents the program and the outcomes of Dagstuhl Seminar 15181 "Challenges and Trends in Probabilistic Programming", which brought researchers from various research communities together to exploit synergies and realize cross-fertilisation.
Abstract: This report documents the program and the outcomes of Dagstuhl Seminar 15181 "Challenges and Trends in Probabilistic Programming". Probabilistic programming is at the heart of machine learning for describing distribution functions; Bayesian inference is pivotal in their analysis. Probabilistic programs are used in security for describing both cryptographic constructions (such as randomised encryption) and security experiments. In addition, probabilistic models are an active research topic in quantitative information flow. Quantum programs are inherently probabilistic due to the random outcomes of quantum measurements. Finally, there is a rapidly growing interest in program analysis of probabilistic programs, whether it be using model checking, theorem proving, static analysis, or similar. Dagstuhl Seminar 15181 brought researchers from these various research communities together so as to exploit synergies and realize cross-fertilisation.

Posted Content
TL;DR: It is shown that the satisfiability problem for probabilistic CTL (PCTL, for short) is undecidable, and an exponential-time algorithm is presented for the satisfiability of a bounded, negation-closed fragment of PCTL, whose satisfiability problem is shown to be EXPTIME-hard.
Abstract: This paper shows that the satisfiability problem for probabilistic CTL (PCTL, for short) is undecidable. By a reduction from $1\frac{1}{2}$-player games with PCTL winning objectives, we establish that the PCTL satisfiability problem is ${\Sigma}_1^1$-hard. We present an exponential-time algorithm for the satisfiability of a bounded, negation-closed fragment of PCTL, and show that the satisfiability problem for this fragment is EXPTIME-hard.

01 Jan 2015
TL;DR: This paper describes how a smallest possible subset of the commands can be identified which together make the system erroneous and shows how the selected commands can be further simplified to obtain a well-understandable counterexample.
Abstract: Providing compact and understandable counterexamples for violated system properties is an essential task in model checking. Existing works on counterexamples for probabilistic systems so far computed either a large set of system runs or a subset of the system's states, both of which are of limited use in manual debugging. Many probabilistic systems are described in a guarded command language like the one used by the popular model checker PRISM. In this paper we describe how a smallest possible subset of the commands can be identified which together make the system erroneous. We additionally show how the selected commands can be further simplified to obtain a well-understandable counterexample.



01 Jan 2015
TL;DR: The goal is to take this reliability analysis of approximate computations a step further by employing a more expressive language of probabilistic programs: the so-called probabilistic guarded command language (pGCL), an extension of Dijkstra's guarded command language with probabilistic choices.
Abstract: Approximate computing is a computational paradigm in which the requirement of having fully deterministic and accurate arithmetical operations is dropped [4]. For diverse practical problems (e.g. video decoding or image compression), by relaxing this requirement one can obtain considerably more efficient algorithms, be it in terms of execution time, memory usage, energy consumption, etc. When moving from accurate to approximate computations, a natural question arises: How accurate is the result of the approximate computation compared to the result of the accurate one? Any attempt to seriously answer this question raises the need for an appropriate formal model of approximate computations. Probabilistic programs extend conventional programs with probabilistic choices or random assignments and indeed provide a well-studied and suitable model for this purpose. Their syntax and semantics [5, 6, 3, 2] are flexible and expressive enough to model many aspects of approximate computing quite naturally. A recent approach for dealing with quantitative reliability analysis of approximate computations brought up the concept of unreliable operators [1]. In this approach, instead of performing e.g. the accurate assignment x = x + y, one performs the approximate assignment x = x +. y, the latter being specified through a reliability value 0 ≤ e ≤ 1. Then with probability 1−e the approximate assignment yields the expected effect, while with probability e it produces an arbitrary, and thus erroneous, effect. This model, however, is unable to quantify how much the erroneous effect departs from the accurate one. Our goal is to take this reliability analysis a step further by employing a more expressive language of probabilistic programs: the so-called probabilistic guarded command language (pGCL) [6], an extension of Dijkstra's guarded command language with probabilistic choices.
By employing this language, besides representing the probability with which an erroneous computation occurs, we can provide more stochastic information, such as the most likely effect or the expected (i.e. average) effect of the erroneous computations. For instance, we can model an approximate assignment as a pGCL program.
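Continuing the abstract's closing remark, such an approximate assignment might be sketched as a probabilistic choice; the Gaussian error model and the value of e below are assumptions of ours, mirroring the pGCL construct {P} [p] {Q}:

```python
import random

# Hypothetical model of the approximate assignment x = x +. y:
#
#   { x := x + y } [1 - e] { x := x + y + err },   err ~ Normal(0, 1)
#
# Unlike a bare reliability value, the explicit error distribution lets us
# compute e.g. the expected effect: E[x'] = x + y + e * E[err] = x + y here,
# since the assumed error has mean zero.

def approx_add(x, y, e=0.1):
    if random.random() < 1 - e:
        return x + y                        # accurate effect, prob. 1 - e
    return x + y + random.gauss(0.0, 1.0)   # erroneous effect, prob. e
```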