Showing papers by "Jean-François Raskin published in 2013"


Journal ArticleDOI
TL;DR: This paper proposes an efficient automata-based approach to linear time logic (LTL) model checking of variability-intensive systems, and provides an in-depth treatment of the FTS model checking algorithm.
Abstract: The premise of variability-intensive systems, specifically in software product line engineering, is the ability to produce a large family of different systems efficiently. Many such systems are critical. Thorough quality assurance techniques are thus required. Unfortunately, most quality assurance techniques were not designed with variability in mind. They work for single systems and are too costly to apply to the whole system family. In this paper, we propose an efficient automata-based approach to linear time logic (LTL) model checking of variability-intensive systems. We build on earlier work in which we proposed featured transition systems (FTSs), a compact mathematical model for representing the behaviors of a variability-intensive system. The FTS model checking algorithms verify all products of a family at once and pinpoint those that are faulty. This paper complements our earlier work, covering important theoretical aspects such as expressiveness and parallel composition, as well as more practical aspects such as vacuity detection and our logic, feature LTL. Furthermore, we provide an in-depth treatment of the FTS model checking algorithm. Finally, we present SNIP, a new model checker for variability-intensive systems. The benchmarks conducted with SNIP confirm the speedups reported previously.
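As a rough, hypothetical illustration of the FTS idea described in this abstract (not the paper's actual definitions), the Python sketch below guards each transition with a feature expression and projects the family onto one concrete product, i.e. one set of selected features, to obtain an ordinary transition system; the tiny vending-machine family and all names are invented.

    from typing import Callable, FrozenSet, List, Tuple

    Product = FrozenSet[str]                  # a product = a set of selected features
    Guard = Callable[[Product], bool]         # a feature expression over products
    Transition = Tuple[str, str, str, Guard]  # (source, action, target, feature guard)

    class FTS:
        """Toy featured transition system: transitions carry feature guards."""
        def __init__(self, transitions: List[Transition]):
            self.transitions = transitions

        def project(self, product: Product) -> List[Tuple[str, str, str]]:
            """Ordinary transition system of one product: keep the transitions
            whose feature guard the product satisfies."""
            return [(s, a, t) for (s, a, t, g) in self.transitions if g(product)]

    # Invented example: a vending machine with an optional 'FreeDrinks' feature.
    fts = FTS([
        ("idle", "pay",  "paid", lambda p: "FreeDrinks" not in p),
        ("idle", "take", "idle", lambda p: "FreeDrinks" in p),
        ("paid", "take", "idle", lambda p: True),
    ])

    print(fts.project(frozenset({"FreeDrinks"})))  # behaviours of the free variant
    print(fts.project(frozenset()))                # behaviours of the paying variant

Checking all products of a family at once, as the FTS model checking algorithms do, avoids enumerating such projections one by one.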

239 citations


Posted Content
TL;DR: In this article, the authors extend the quantitative synthesis framework by going beyond the worst-case by constructing strategies that guarantee some quantitative requirement in the worst case while providing a higher expected value against a particular stochastic model of the environment given as input.
Abstract: We extend the quantitative synthesis framework by going beyond the worst case. On the one hand, classical analysis of two-player games involves an adversary (modeling the environment of the system) which is purely antagonistic and asks for strict guarantees. On the other hand, stochastic models like Markov decision processes represent situations where the system is faced with a purely randomized environment: the aim is then to optimize the expected payoff, with no guarantee on individual outcomes. We introduce the beyond worst-case synthesis problem, which is to construct strategies that guarantee some quantitative requirement in the worst case while providing a higher expected value against a particular stochastic model of the environment given as input. This problem is relevant for producing system controllers that provide good expected performance in everyday situations while ensuring a strict (though weaker) performance threshold even in the event of very bad (but unlikely) circumstances. We study the beyond worst-case synthesis problem for two important quantitative settings: the mean-payoff and the shortest path. In both cases, we show how to decide the existence of finite-memory strategies satisfying the problem and how to synthesize one if one exists. We establish algorithms and study complexity bounds and memory requirements.
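The following toy Python fragment (invented numbers, not from the paper) illustrates the tension the beyond worst-case problem resolves: a purely safe choice has a good guarantee but a modest expectation, a risky one has a high expectation but a poor worst case, and the synthesis problem asks for one strategy that meets thresholds on both measures at once.

    def worst_case(dist):
        """Worst outcome of a payoff distribution given as (payoff, probability) pairs."""
        return min(payoff for payoff, _ in dist)

    def expectation(dist):
        """Expected payoff of the distribution."""
        return sum(prob * payoff for payoff, prob in dist)

    safe  = [(5, 1.0)]              # always pays 5: strong guarantee, modest mean
    risky = [(0, 0.1), (10, 0.9)]   # usually pays 10, but can pay 0

    for name, dist in [("safe", safe), ("risky", risky)]:
        print(name, "worst-case:", worst_case(dist), "expected:", expectation(dist))

    # A beyond worst-case requirement such as "worst case > 1 and expectation > 6"
    # is met by neither pure option above; the paper's algorithms decide whether a
    # finite-memory strategy achieving both thresholds exists and synthesize one.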

53 citations


Book ChapterDOI
16 Mar 2013
TL;DR: In this paper, the authors extend the qualitative LTL synthesis problem to a quantitative setting, where the value of an infinite word is the mean-payoff of the weights of its letters, and the synthesis problem then amounts to automatically constructing (if possible) a reactive system whose executions all satisfy a given LTL formula and have mean-payoff values greater than or equal to some given threshold.
Abstract: The classical LTL synthesis problem is purely qualitative: the given LTL specification is realized or not by a reactive system. LTL is not expressive enough to formalize the correctness of reactive systems with respect to some quantitative aspects. This paper extends the qualitative LTL synthesis setting to a quantitative setting. The alphabet of actions is extended with a weight function ranging over the integers. The value of an infinite word is the mean-payoff of the weights of its letters. The synthesis problem then amounts to automatically constructing (if possible) a reactive system whose executions all satisfy a given LTL formula and have mean-payoff values greater than or equal to some given threshold. The latter problem is called LTL_MP synthesis, and the LTL_MP realizability problem asks to check whether such a system exists. By reduction to two-player mean-payoff parity games, we first show that LTL_MP realizability is not more difficult than LTL realizability: it is 2ExpTime-complete. While infinite-memory strategies are required to realize LTL_MP specifications in general, we show that ε-optimality can be obtained with finite-memory strategies, for any ε > 0. To obtain efficient algorithms in practice, we define a Safraless procedure to decide whether there exists a finite-memory strategy that realizes a given specification for some given threshold. This procedure is based on a reduction to two-player energy safety games, which are in turn reduced to safety games. Finally, we show that those safety games can be solved efficiently by exploiting the structure of their state spaces and by using antichains as a symbolic data structure. All our results extend to multi-dimensional weights. We have implemented an antichain-based procedure and we report on some promising experimental results.
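For reference, the mean-payoff value of an infinite word w = a_1 a_2 a_3 ... used in this setting is the standard limit average of its letter weights (stated here with lim inf; the paper may fix a particular variant):

    MP(w) = \liminf_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} w(a_i)

so, writing ν for the given threshold, the LTL_MP requirement asks that every execution u of the synthesized system satisfy the LTL formula and have MP(u) ≥ ν.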

43 citations


Book ChapterDOI
01 Jan 2013
TL;DR: The set of vectors of upper bounds that allow an infinite run to exist is calculated, and it is shown how to construct this set by employing results from previous works, including an algorithm given by Valk and Jantzen for finding the set of minimal elements of an upward-closed set.
Abstract: Multiweighted energy games are two-player multiweighted games that concern the existence of infinite runs subject to a vector of lower and upper bounds on the accumulated weights along the run. We assume an unknown upper bound and calculate the set of vectors of upper bounds that allow an infinite run to exist. For both a strict and a weak upper bound we show how to construct this set by employing results from previous works, including an algorithm given by Valk and Jantzen for finding the set of minimal elements of an upward closed set. Additionally, we consider energy games where the weight of some transitions is unknown, and show how to find the set of suitable weights using the same algorithm.
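The key data structure behind the Valk and Jantzen step mentioned above is the representation of an upward-closed set of bound vectors by its finitely many minimal elements. The minimal Python sketch below (invented vectors, not the paper's construction) shows that representation and the membership test it supports.

    def leq(u, v):
        """Componentwise order on vectors of equal length."""
        return all(a <= b for a, b in zip(u, v))

    def in_upward_closure(v, minimal_elements):
        """v lies in the upward-closed set iff it dominates some minimal element."""
        return any(leq(m, v) for m in minimal_elements)

    def minimize(vectors):
        """Keep only the minimal elements; they generate the same upward closure."""
        return [v for v in vectors
                if not any(leq(u, v) and u != v for u in vectors)]

    # Invented example: upper-bound vectors (3, 5) and (4, 2) suffice, hence so
    # does anything componentwise larger.
    minimal = minimize([(3, 5), (4, 2), (4, 6)])
    print(minimal)                              # [(3, 5), (4, 2)]
    print(in_upward_closure((4, 5), minimal))   # True
    print(in_upward_closure((2, 9), minimal))   # False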

35 citations


Posted Content
TL;DR: In this article, the exact complexity of natural decision problems on the set of strategies that survive iterated elimination of dominated strategies is studied, and the authors obtain automata which recognize all the possible outcomes of such strategies.
Abstract: Iterated admissibility is a well-known and important concept in classical game theory, e.g. to determine rational behaviors in multi-player matrix games. As recently shown by Berwanger, this concept can be soundly extended to infinite games played on graphs with omega-regular objectives. In this paper, we study the algorithmic properties of this concept for such games. We settle the exact complexity of natural decision problems on the set of strategies that survive iterated elimination of dominated strategies. As a byproduct of our construction, we obtain automata which recognize all the possible outcomes of such strategies.
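As background for the concept the abstract refers to, the short Python sketch below performs iterated elimination of strictly dominated strategies in a finite two-player matrix game; the payoff matrices are a made-up prisoner's-dilemma-like example, and the infinite graph games studied in the paper of course require the more involved machinery described above.

    def eliminate(R, C):
        """Iterated elimination of strictly dominated strategies.
        R[i][j] and C[i][j] are the row and column player's payoffs when the
        row player picks i and the column player picks j; returns the sets of
        surviving row and column strategies."""
        rows, cols = set(range(len(R))), set(range(len(R[0])))
        changed = True
        while changed:
            changed = False
            for i in set(rows):
                # Row i is strictly dominated if some other surviving row does
                # strictly better against every surviving column.
                if any(all(R[k][j] > R[i][j] for j in cols)
                       for k in rows if k != i):
                    rows.remove(i)
                    changed = True
            for j in set(cols):
                if any(all(C[i][l] > C[i][j] for i in rows)
                       for l in cols if l != j):
                    cols.remove(j)
                    changed = True
        return rows, cols

    # Made-up payoffs: strategy 1 ('defect') strictly dominates for both players.
    R = [[3, 0],
         [5, 1]]
    C = [[3, 5],
         [0, 1]]
    print(eliminate(R, C))   # ({1}, {1}) -- only mutual defection survives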

25 citations


Book ChapterDOI
15 Oct 2013
TL;DR: This paper revisits the decidability results presented in [5], shows that the problem is NExpTime-complete, and shows that one can effectively compute fixed points characterising the sets of states that are reachable within T time units from a given state.
Abstract: We study the time-bounded reachability problem for monotonic hybrid automata (MHA), i.e., rectangular hybrid automata for which the rate of each variable is either always non-negative or always non-positive. In this paper, we revisit the decidability results presented in [5] and show that the problem is NExpTime-complete. We also show that we can effectively compute fixed points that characterise the sets of states that are reachable (resp. co-reachable) within T time units from a given state.

19 citations


Journal ArticleDOI
TL;DR: This paper shows how to exploit the structure of some automata-based construction to efficiently solve the LTL synthesis problem, and shows that the CPre operator can be replaced by a weaker operator CPre_crit where the adversary is restricted to play a subset of critical signals.
Abstract: In this paper, we show how to exploit the structure of some automata-based construction to efficiently solve the LTL synthesis problem. We focus on a construction proposed by Schewe and Finkbeiner that reduces the synthesis problem to a safety game, which can then be solved by computing the solution of the classical fixpoint equation νX.Safe ∩ CPre(X), where CPre(X) are the controllable predecessors of X. We have shown in previous works that the sets computed during the fixpoint algorithm can be equipped with a partial order that allows one to represent them very compactly, by the antichain of their maximal elements. However, the computation of CPre(X) cannot be done in polynomial time when X is represented by an antichain (unless P = NP). This motivates the use of SAT solvers to compute CPre(X). Also, we show that the CPre operator can be replaced by a weaker operator CPre_crit where the adversary is restricted to play a subset of critical signals. We show that the fixpoints of the two operators coincide, and so, instead of applying CPre iteratively, we can apply CPre_crit iteratively. In practice, this leads to important improvements on previous LTL synthesis methods. The reduction to SAT problems, the weakening of the CPre operator into CPre_crit, and their performance evaluations are new.
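To make the antichain idea concrete, here is a schematic Python sketch (our illustration, not the tool's code) of the greatest-fixpoint iteration νX.Safe ∩ CPre(X) over downward-closed sets of vectors represented by the antichain of their maximal elements; the cpre argument is left abstract, since it depends on the Schewe-Finkbeiner construction, and is assumed to return an antichain.

    def leq(u, v):
        """Componentwise order on vectors of equal length."""
        return all(a <= b for a, b in zip(u, v))

    def maximal(vectors):
        """Antichain of maximal elements; it generates the same downward-closed set."""
        vs = set(vectors)
        return {v for v in vs if not any(leq(v, u) and u != v for u in vs)}

    def intersect(A, B):
        """Intersection of two downward-closed sets given by antichains A and B:
        generated by the componentwise minima of pairs of generators."""
        return maximal({tuple(min(a, b) for a, b in zip(x, y)) for x in A for y in B})

    def greatest_fixpoint(safe, cpre):
        """Iterate X := Safe /\ CPre(X) from X = Safe until stabilisation.
        Terminates when the underlying domain is finite, as in the safety-game
        reduction discussed above."""
        S = maximal(safe)
        X = S
        while True:
            nxt = intersect(S, cpre(X))
            if nxt == X:
                return X
            X = nxt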

18 citations


Book ChapterDOI
15 Oct 2013
TL;DR: It is shown that, in contrast to multi-dimensional mean-payoff games that are known to be coNP-complete, multi-dimensional total-payoff games are undecidable, and conservative approximations of these objectives are introduced.
Abstract: We consider two-player games played on weighted directed graphs with mean-payoff and total-payoff objectives, two classical quantitative objectives. While for single-dimensional games the complexity and memory bounds for both objectives coincide, we show that in contrast to multi-dimensional mean-payoff games that are known to be coNP-complete, multi-dimensional total-payoff games are undecidable. We introduce conservative approximations of these objectives, where the payoff is considered over a local finite window sliding along a play, instead of the whole play. For single dimension, we show that (i) if the window size is polynomial, deciding the winner takes polynomial time, and (ii) the existence of a bounded window can be decided in NP ∩ coNP, and is at least as hard as solving mean-payoff games. For multiple dimensions, we show that (i) the problem with fixed window size is EXPTIME-complete, and (ii) there is no primitive-recursive algorithm to decide the existence of a bounded window.
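The 'window' view can be illustrated with a few lines of Python (our toy example, for threshold 0 and a finite prefix of a play): from each position one asks whether some window of bounded length already has a non-negative total weight, instead of looking at the long-run average of the whole play.

    def good_window_from(weights, i, l_max):
        """Is there a window of length at most l_max starting at position i whose
        total (equivalently, average) weight is >= 0?"""
        total = 0
        for l in range(1, l_max + 1):
            if i + l > len(weights):
                return False
            total += weights[i + l - 1]
            if total >= 0:
                return True
        return False

    play_prefix = [-1, 2, -1, -1, 3, 0]
    print([good_window_from(play_prefix, i, l_max=3) for i in range(len(play_prefix))])

The actual objectives quantify such conditions over all positions of an infinite play, which is what makes the decision problems above non-trivial.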

14 citations


Proceedings ArticleDOI
08 Jul 2013
TL;DR: In this article, Queue-Dispatch Asynchronous Systems (QDAS) are proposed as a mathematical model that faithfully formalizes the synchronization mechanisms and the behavior of the scheduler in those systems.
Abstract: To make the development of efficient multi-core applications easier, libraries such as Grand Central Dispatch (GCD) have been proposed. When using such a library, the programmer writes so-called blocks, which are chunks of code, and dispatches them, using synchronous or asynchronous calls, to several types of waiting queues. A scheduler is then responsible for dispatching those blocks on the available cores. Blocks can synchronize via a global memory. In this paper, we propose Queue-Dispatch Asynchronous Systems as a mathematical model that faithfully formalizes the synchronization mechanisms and the behavior of the scheduler in those systems. We study in detail their relationships to classical formalisms such as pushdown systems, Petri nets, FIFO systems, and counter systems. Our main technical contributions are precise worst-case complexity results for the Parikh coverability problem and the termination question for several subclasses of our model. We give an outlook on extending our model towards verifying input-parametrized fork-join behaviour with the help of abstractions, and conclude with a hands-on approach for verifying GCD programs in practice.
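The following toy Python simulation (purely illustrative; it is neither GCD's API nor the paper's formal QDAS model) shows the vocabulary used above: blocks are dispatched asynchronously to a waiting queue, synchronize via a global memory, and are later run by a scheduler.

    from collections import deque

    class ToyQueue:
        """A waiting queue to which blocks (thunks) are dispatched."""
        def __init__(self, name):
            self.name = name
            self.blocks = deque()

        def dispatch_async(self, block):
            """Enqueue a block and return immediately."""
            self.blocks.append(block)

    shared = {"counter": 0}          # blocks synchronize via a global memory

    def scheduler_step(queues):
        """Run the oldest block of the first non-empty queue to completion
        (a serial, deterministic toy scheduler)."""
        for q in queues:
            if q.blocks:
                q.blocks.popleft()()
                return True
        return False

    work = ToyQueue("work")
    work.dispatch_async(lambda: shared.__setitem__("counter", shared["counter"] + 1))
    work.dispatch_async(lambda: print("counter =", shared["counter"]))

    while scheduler_step([work]):
        pass                          # prints: counter = 1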

12 citations


Book ChapterDOI
24 Jun 2013
TL;DR: ω-Petri nets (ωPN) are introduced, an extension of plain Petri nets with ω-labeled input and output arcs that is well-suited to analyse parametric concurrent systems with dynamic thread creation, and complexity bounds are given for the reachability, (place) boundedness and coverability problems on ωPN.
Abstract: We introduce ω-Petri nets (ωPN), an extension of plain Petri nets with ω-labeled input and output arcs that is well-suited to analyse parametric concurrent systems with dynamic thread creation. Most techniques (such as the Karp and Miller tree or the Rackoff technique) that have been proposed in the setting of plain Petri nets do not apply directly to ωPN, because ωPN define transition systems that have infinite branching. This motivates a thorough analysis of the computational aspects of ωPN. We show that an ωPN can be turned into a plain Petri net that allows one to recover the reachability set of the ωPN, but that does not preserve termination. This yields complexity bounds for the reachability, (place) boundedness and coverability problems on ωPN. We provide a practical algorithm to compute a coverability set of the ωPN and to decide termination by adapting the classical Karp and Miller tree construction. We also adapt the Rackoff technique to ωPN to obtain the exact complexity of the termination problem. Finally, we consider the extension of ωPN with reset and transfer arcs, and show how this extension impacts the decidability and complexity of the aforementioned problems.
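As a small aid to the Karp and Miller discussion above, the Python fragment below sketches ω-markings (markings in which some places hold arbitrarily many tokens), the covering order in which ω dominates every finite count, and the acceleration step that introduces ω; the representation and names are ours, and the paper's actual adaptation to ωPN handles much more (ω-labeled arcs and infinite branching).

    OMEGA = float("inf")   # stands for 'arbitrarily many tokens' in a place

    def covers(m1, m2):
        """m1 covers m2: componentwise at least as many tokens (OMEGA beats all)."""
        return all(a >= b for a, b in zip(m1, m2))

    def accelerate(old, new):
        """Karp-Miller-style acceleration: places that strictly grew along a
        repeatable firing sequence are set to OMEGA."""
        assert covers(new, old)
        return tuple(OMEGA if n > o else n for o, n in zip(old, new))

    m0 = (1, 0, 2)
    m1 = (1, 3, 2)
    print(covers(m1, m0))       # True
    print(accelerate(m0, m1))   # (1, inf, 2)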

7 citations


Posted Content
TL;DR: This paper investigates the algorithmic properties of several subclasses of mean-payoff games where the players have asymmetric information about the state of the game, including a generalization of perfect information games where positional strategies are sufficient.
Abstract: Mean-payoff games are important quantitative models for open reactive systems. They have been widely studied as games of perfect information. In this paper we investigate the algorithmic properties of several subclasses of mean-payoff games where the players have asymmetric information about the state of the game. These games are in general undecidable and not determined according to the classical definition. We show that such games are determined under a more general notion of winning strategy. We also consider mean-payoff games where the winner can be determined by the winner of a finite cycle-forming game. This yields several decidable classes of mean-payoff games of asymmetric information that require only finite-memory strategies, including a generalization of perfect information games where positional strategies are sufficient. We give an exponential time algorithm for determining the winner of the latter.

Posted Content
TL;DR: A doomsday equilibrium, as introduced in this paper, is a strategy profile such that all players satisfy their own objective, and if any coalition of players deviates and violates even one player's objective, then the objective of every player is violated.
Abstract: Two-player games on graphs provide the theoretical framework for many important problems such as reactive synthesis. While the traditional study of two-player zero-sum games has been extended to multi-player games with several notions of equilibria, these notions are decidable only for perfect-information games, whereas several applications require imperfect-information games. In this paper we propose a new notion of equilibria, called doomsday equilibria, which is a strategy profile such that all players satisfy their own objective, and if any coalition of players deviates and violates even one player's objective, then the objective of every player is violated. We present algorithms and complexity results for deciding the existence of doomsday equilibria for various classes of omega-regular objectives, both for imperfect-information games and for perfect-information games. We provide optimal complexity bounds for imperfect-information games, and in most cases for perfect-information games.

Posted Content
TL;DR: In this paper, the authors considered two-player games with mean-payoff and total-payoff objectives, and showed that, in contrast to multi-dimensional mean-payoff games, multi-dimensional total-payoff games are undecidable.
Abstract: We consider two-player games played on weighted directed graphs with mean-payoff and total-payoff objectives, two classical quantitative objectives. While for single-dimensional games the complexity and memory bounds for both objectives coincide, we show that in contrast to multi-dimensional mean-payoff games that are known to be coNP-complete, multi-dimensional total-payoff games are undecidable. We introduce conservative approximations of these objectives, where the payoff is considered over a local finite window sliding along a play, instead of the whole play. For single dimension, we show that (i) if the window size is polynomial, deciding the winner takes polynomial time, and (ii) the existence of a bounded window can be decided in NP ∩ coNP, and is at least as hard as solving mean-payoff games. For multiple dimensions, we show that (i) the problem with fixed window size is EXPTIME-complete, and (ii) there is no primitive-recursive algorithm to decide the existence of a bounded window.

Posted Content
TL;DR: This paper investigates the algorithmic properties of several subclasses of mean-payoff games where the players have asymmetric information about the state of the game, including a generalization of perfect information games where positional strategies are sufficient.
Abstract: Mean-payoff games are important quantitative models for open reactive systems. They have been widely studied as games of full observation. In this paper we investigate the algorithmic properties of several sub-classes of mean-payoff games where the players have asymmetric information about the state of the game. These games are in general undecidable and not determined according to the classical definition. We show that such games are determined under a more general notion of winning strategy. We also consider mean-payoff games where the winner can be determined by the winner of a finite cycle-forming game. This yields several decidable classes of mean-payoff games of asymmetric information that require only finite-memory strategies, including a generalization of full-observation games where positional strategies are sufficient. We give an exponential time algorithm for determining the winner of the latter.

Posted Content
TL;DR: It is shown that an ωPN can be turned into a plain Petri net that allows one to recover the reachability set of the ωPN but does not preserve termination, which yields complexity bounds for the reachability, (place) boundedness and coverability problems on ωPN.
Abstract: We introduce ω-Petri nets (ωPN), an extension of plain Petri nets with ω-labeled input and output arcs that is well-suited to analyse parametric concurrent systems with dynamic thread creation. Most techniques (such as the Karp and Miller tree or the Rackoff technique) that have been proposed in the setting of plain Petri nets do not apply directly to ωPN, because ωPN define transition systems that have infinite branching. This motivates a thorough analysis of the computational aspects of ωPN. We show that an ωPN can be turned into a plain Petri net that allows one to recover the reachability set of the ωPN, but that does not preserve termination. This yields complexity bounds for the reachability, (place) boundedness and coverability problems on ωPN. We provide a practical algorithm to compute a coverability set of the ωPN and to decide termination by adapting the classical Karp and Miller tree construction. We also adapt the Rackoff technique to ωPN to obtain the exact complexity of the termination problem. Finally, we consider the extension of ωPN with reset and transfer arcs, and show how this extension impacts the decidability and complexity of the aforementioned problems.