Showing papers by "Jean-François Raskin published in 2017"


Journal ArticleDOI
TL;DR: This paper introduces a novel rule for the synthesis of reactive systems made of n components, each with its own objective, and shows that, contrary to previous proposals, it defines sets of solutions which are rectangular.
Abstract: In this paper, we introduce a novel rule for the synthesis of reactive systems, applicable to systems made of n components, each of which has its own objective. This rule is based on the notion of admissible strategies. We compare this rule with previous rules defined in the literature, and show that, contrary to the previous proposals, it defines sets of solutions which are rectangular. This property leads to solutions which are robust and resilient, and it allows one to synthesize strategies separately for each agent. We provide algorithms with optimal complexity and also an abstraction framework compatible with the new rule.
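
Dominance between strategies is the engine behind admissibility. As a toy illustration (a sketch, not the paper's algorithm), the Python snippet below enumerates the memoryless strategies of player 0 in a small hypothetical turn-based reachability game, records which adversary strategies each of them beats, and keeps only the non-dominated, i.e. admissible, ones; the graph, owners and target are invented for the example.

```python
# Sketch: admissibility as non-domination in a toy reachability game.
from itertools import product

succ = {0: [1, 2], 1: [0, 3], 2: [3, 2], 3: [3]}   # hypothetical game graph
owner = {0: 0, 1: 1, 2: 1, 3: 0}                   # which player moves where
target, init = {3}, 0                              # player 0 wants to reach 3

def strategies(player):
    states = [s for s in succ if owner[s] == player]
    for choice in product(*(succ[s] for s in states)):
        yield dict(zip(states, choice))

def wins(s0, s1):
    state, seen = init, set()
    while state not in seen:          # a repeated state means a cycle
        if state in target:
            return True
        seen.add(state)
        state = (s0 if owner[state] == 0 else s1)[state]
    return state in target

def beats(s0):
    return frozenset(i for i, s1 in enumerate(strategies(1)) if wins(s0, s1))

strats = list(strategies(0))
sets = [beats(s) for s in strats]
# a strategy is dominated if another wins against a strict superset
admissible = [s for s, b in zip(strats, sets)
              if not any(b < b2 for b2 in sets)]
print(admissible)
```

Here both strategies of player 0 turn out to be admissible: the sets of adversaries they beat are incomparable, so neither dominates the other.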

41 citations


Book ChapterDOI
24 Apr 2017
TL;DR: In this article, a single exponential translation from limit-deterministic Büchi automata (LDBA) to deterministic parity automata is presented, which can be concatenated with a recent efficient translation from LTL to LDBA to yield a double exponential, "Safraless" LTL-to-DPA construction.
Abstract: Controller synthesis for general linear temporal logic (LTL) objectives is a challenging task. The standard approach involves translating the LTL objective into a deterministic parity automaton (DPA) by means of the Safra-Piterman construction. One of the challenges is the size of the DPA, which often grows very fast in practice, and can reach double exponential size in the length of the LTL formula. In this paper we describe a single exponential translation from limit-deterministic Büchi automata (LDBA) to DPA, and show that it can be concatenated with a recent efficient translation from LTL to LDBA to yield a double exponential, “Safraless” LTL-to-DPA construction. We also report on an implementation, a comparison with the SPOT library, and performance on several sets of formulas, including instances from the 2016 SyntComp competition.
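
As a hands-on complement, Spot (the library the paper compares against) ships Python bindings that can produce deterministic parity automata from LTL. The snippet below is a sketch under the assumption that the installed Spot version accepts the 'parity' and 'deterministic' options of translate(); the request/grant formula is an arbitrary example, and this goes through Spot's own pipeline, not the paper's LDBA-based construction.

```python
import spot  # assumes Spot with its Python bindings is installed

# Hypothetical request/grant property; 'parity' and 'deterministic'
# are translate() options in recent Spot versions (an assumption here).
aut = spot.translate('G(req -> F grant)', 'parity', 'deterministic')
print(aut.num_states())       # size of the resulting automaton
print(aut.get_acceptance())   # its parity acceptance condition
```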

39 citations


Journal ArticleDOI
TL;DR: This work introduces the reactive synthesis competition (SYNTCOMP), a long-term effort intended to stimulate and guide advances in the design and application of synthesis procedures for reactive systems, and presents and analyzes the results of the competition.
Abstract: We introduce the reactive synthesis competition (SYNTCOMP), a long-term effort intended to stimulate and guide advances in the design and application of synthesis procedures for reactive systems. The first iteration of SYNTCOMP is based on the controller synthesis problem for finite-state systems and safety specifications. We provide an overview of this problem and existing approaches to solve it, and report on the design and results of the first SYNTCOMP. This includes the definition of the benchmark format, the collection of benchmarks, the rules of the competition, and the five synthesis tools that participated. We present and analyze the results of the competition and draw conclusions on the state of the art. Finally, we give an outlook on future directions of SYNTCOMP.
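
The safety track rests on the classical fixpoint characterization of safety games: the controller wins exactly from the states that survive repeatedly pruning those from which the environment can force an escape from the safe set. A minimal self-contained sketch on a hypothetical explicit game graph follows (actual SYNTCOMP benchmarks are AIGER circuits, not explicit graphs).

```python
# Sketch: greatest-fixpoint solution of a toy safety game.
succ = {0: [0, 1], 1: [2, 0], 2: [2], 3: [1]}   # hypothetical transitions
owner = {0: 0, 1: 1, 2: 1, 3: 0}                # 0 = controller, 1 = environment
safe = {0, 1, 3}                                # states to stay in forever

def solve_safety(succ, owner, safe):
    win = set(safe)
    while True:
        # controller states need SOME successor that stays winning;
        # environment states need ALL successors to stay winning
        new = {s for s in win
               if (any(t in win for t in succ[s]) if owner[s] == 0
                   else all(t in win for t in succ[s]))}
        if new == win:
            return win
        win = new

print(solve_safety(succ, owner, safe))  # {0}: the controller can loop at 0
```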

34 citations


Journal ArticleDOI
TL;DR: The beyond worst-case synthesis problem, which is to construct strategies that guarantee some quantitative requirement in the worst case while providing a higher expected value against a particular stochastic model of the environment given as input, is introduced.
Abstract: Classical analysis of two-player quantitative games involves an adversary (modeling the environment of the system) which is purely antagonistic and asks for strict guarantees while Markov decision processes model systems facing a purely randomized environment: the aim is then to optimize the expected payoff, with no guarantee on individual outcomes. We introduce the beyond worst-case synthesis problem, which is to construct strategies that guarantee some quantitative requirement in the worst-case while providing a higher expected value against a particular stochastic model of the environment given as input. We study the beyond worst-case synthesis problem for two important quantitative settings: the mean-payoff and the shortest path. In both cases, we show how to decide the existence of finite-memory strategies satisfying the problem and how to synthesize one if one exists. We establish algorithms and we study complexity bounds and memory requirements.
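
To make the two evaluations concrete, the sketch below contrasts them on a hypothetical Markov chain induced by a fixed strategy in the shortest-path setting: the worst-case value treats every probabilistic branch adversarially, while the expected value weighs branches by their probabilities. All numbers are invented for the example.

```python
# Sketch: worst-case vs expected cost of a fixed strategy.
# chain[s] = list of (probability, cost, successor); 'goal' is absorbing.
chain = {'s': [(0.9, 1, 'goal'), (0.1, 10, 't')],
         't': [(1.0, 1, 'goal')]}

def worst_case(s, seen=frozenset()):
    if s == 'goal':
        return 0
    if s in seen:                     # a reachable cycle => unbounded cost
        return float('inf')
    return max(c + worst_case(n, seen | {s}) for _, c, n in chain[s])

def expected(s, iters=1000):
    v = {q: 0.0 for q in chain}
    for _ in range(iters):            # plain value iteration
        v = {q: sum(p * (c + (0 if n == 'goal' else v[n]))
                    for p, c, n in chain[q]) for q in chain}
    return v[s]

print(worst_case('s'), expected('s'))  # 11 vs 0.9*1 + 0.1*11 = 2.0
```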

31 citations


Journal ArticleDOI
28 Nov 2017
TL;DR: In this paper, the authors present the participants of SYNTCOMP 2017, with a focus on changes with respect to the previous years and on the two completely new tools that have entered the competition.
Abstract: We report on the fourth reactive synthesis competition (SYNTCOMP 2017). We introduce two new benchmark classes that have been added to the SYNTCOMP library, and briefly describe the benchmark selection, evaluation scheme and the experimental setup of SYNTCOMP 2017. We present the participants of SYNTCOMP 2017, with a focus on changes with respect to the previous years and on the two completely new tools that have entered the competition. Finally, we present and analyze the results of our experimental evaluation, including a ranking of tools with respect to quantity and quality of solutions.

28 citations


Journal ArticleDOI
01 Jun 2017
TL;DR: This work shows how to compute a single strategy to enforce that for all dimensions i, the probability of outcomes satisfying $f_i(\rho) \ge v_i$ is at least $\alpha_i$.
Abstract: Markov decision processes (MDPs) with multi-dimensional weights are useful to analyze systems with multiple objectives that may be conflicting and require the analysis of trade-offs. We study the complexity of percentile queries in such MDPs and give algorithms to synthesize strategies that enforce such constraints. Given a multi-dimensional weighted MDP and a quantitative payoff function $f$, thresholds $v_i$ (one per dimension), and probability thresholds $\alpha_i$, we show how to compute a single strategy to enforce that for all dimensions $i$, the probability of outcomes $\rho$ satisfying $f_i(\rho) \ge v_i$ is at least $\alpha_i$. We consider classical quantitative payoffs from the literature (sup, inf, lim sup, lim inf, mean-payoff, truncated sum, discounted sum). Our work extends to the quantitative case the multi-objective model checking problem studied by Etessami et al. (Log Methods Comput Sci 4(4), 2008) in unweighted MDPs.
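
The flavour of a percentile query can be illustrated by checking one for a fixed strategy. The Monte-Carlo sketch below (the paper computes strategies exactly; this only estimates probabilities) simulates a hypothetical 2-dimensional weighted Markov chain and tests, per dimension, whether the truncated sum meets its value threshold with the required probability.

```python
# Sketch: estimating whether a percentile query holds for a fixed strategy.
import random

# Hypothetical chain: state -> [(probability, 2-dim weight, successor)]
chain = {'s': [(0.8, (2, 1), 'goal'), (0.2, (5, 0), 'goal')]}
v = (2, 1)          # value thresholds, one per dimension
alpha = (0.7, 0.7)  # probability thresholds, one per dimension

def sample(state='s'):
    total = [0, 0]
    while state != 'goal':
        r, acc = random.random(), 0.0
        for p, w, n in chain[state]:
            acc += p
            if r <= acc:
                total[0] += w[0]; total[1] += w[1]
                state = n
                break
    return total

runs = [sample() for _ in range(10_000)]
for i in range(2):
    prob = sum(r[i] >= v[i] for r in runs) / len(runs)
    print(f'dim {i}: P[payoff >= {v[i]}] ~ {prob:.2f} (need {alpha[i]})')
```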

25 citations


Journal ArticleDOI
TL;DR: In this article, the authors give algorithms to compute strategies of Eve that ensure minimal regret in two-player zero-sum games of infinite duration and their quantitative versions, which are used in verification to model the interaction between a controller (Eve) and its environment (Adam).
Abstract: Two-player zero-sum games of infinite duration and their quantitative versions are used in verification to model the interaction between a controller (Eve) and its environment (Adam). The question usually addressed is that of the existence (and computability) of a strategy for Eve that can maximize her payoff against any strategy of Adam. In this work, we are interested in strategies of Eve that minimize her regret, i.e. strategies that minimize the difference between her actual payoff and the payoff she could have achieved if she had known the strategy of Adam in advance. We give algorithms to compute the strategies of Eve that ensure minimal regret against an adversary whose choice of strategy is (1) unrestricted, (2) limited to positional strategies, or (3) limited to word strategies, and show that the two last cases have natural modelling applications. These results apply to quantitative games defined with the classical payoff functions Inf, Sup, LimInf, LimSup, and mean-payoff. We also show that our notion of regret minimization in which Adam is limited to word strategies generalizes the notion of good for games introduced by Henzinger and Piterman, and is related to the notion of determinization by pruning due to Aminof, Kupferman and Lampert.
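
The regret of a strategy is easiest to see on a one-shot game in matrix form (the paper treats infinite-duration graph games, so this is only an illustration with invented payoffs): the regret of a row is the worst gap, over Adam's columns, between what Eve got and what she could have got had she known the column in advance.

```python
# Sketch: regret minimization on a toy matrix game.
# payoff[i][j] = Eve's payoff when Eve plays row i and Adam plays column j.
payoff = [[4, 0],
          [3, 3]]

def regret(i):
    # worst-case gap to the best response against each Adam column
    return max(max(row[j] for row in payoff) - payoff[i][j]
               for j in range(len(payoff[0])))

best = min(range(len(payoff)), key=regret)
print([regret(i) for i in range(len(payoff))], '-> play row', best)
```

Row 0 is optimal against column 0 but regrets 3 against column 1; the safer row 1 guarantees regret at most 1, so a regret-minimizing Eve plays it.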

18 citations


Book ChapterDOI
22 Apr 2017
TL;DR: This work focuses on the recently introduced notion of weak subgame perfect equilibrium (weak SPE), a variant of the classical notion of SPE, where players who deviate can only use strategies deviating from their initial strategy in a finite number of histories.
Abstract: We study multi-player turn-based games played on a directed graph, where the number of players and vertices can be infinite. An outcome is assigned to every play of the game. Each player has a preference relation on the set of outcomes which allows him to compare plays. We focus on the recently introduced notion of weak subgame perfect equilibrium (weak SPE), a variant of the classical notion of SPE, where players who deviate can only use strategies deviating from their initial strategy in a finite number of histories. We give general conditions on the structure of the game graph and the preference relations of the players that guarantee the existence of a weak SPE, which moreover is finite-memory.

16 citations


Proceedings ArticleDOI
20 Jun 2017
TL;DR: Decidability of the determinization problem for weighted automata over the semiring (ℤ ∪ {−∞}, max, +) is a long-standing open question; this article proposes two ways of approaching it by constraining the search space of deterministic WA: k-delay and r-regret.
Abstract: Decidability of the determinization problem for weighted automata over the semiring (ℤ ∪ {−∞}, max, +), WA for short, is a long-standing open question. We propose two ways of approaching it by constraining the search space of deterministic WA: k-delay and r-regret. A WA N is k-delay determinizable if there exists a deterministic automaton D that defines the same function as N and for all words α in the language of N, the accepting run of D on α is always at most k-away from a maximal accepting run of N on α. That is, along all prefixes of the same length, the absolute difference between the running sums of weights of the two runs is at most k. A WA N is r-regret determinizable if for all words α in its language, its non-determinism can be resolved on the fly to construct a run of N such that the absolute difference between its value and the value assigned to α by N is at most r. We show that a WA is determinizable if and only if it is k-delay determinizable for some k. Hence deciding the existence of some k is as difficult as the general determinization problem. When k and r are given as input, the k-delay and r-regret determinization problems are shown to be EXPTIME-complete. We also show that determining whether a WA is r-regret determinizable for some r is in EXPTIME.
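
The k-delay condition is checkable prefix by prefix: keep, for the nondeterministic automaton, the best running sum per state, and compare its maximum with the running sum of the candidate deterministic run. The sketch below does this for hypothetical toy automata over a one-letter alphabet (all states accepting, which sidesteps acceptance issues).

```python
# Sketch: testing the k-delay condition along the prefixes of a word.
N = {('p', 'a'): [(2, 'p'), (0, 'q')],   # nondeterministic: (weight, succ)
     ('q', 'a'): [(0, 'q')]}
D = {('d', 'a'): (1, 'd')}               # a deterministic candidate

def k_delay_ok(word, k, n0='p', d0='d'):
    best = {n0: 0}                       # best running sum per N-state
    d_state, d_sum = d0, 0
    for a in word:
        new = {}
        for s, v in best.items():
            for w, t in N[(s, a)]:
                new[t] = max(new.get(t, float('-inf')), v + w)
        best = new
        w, d_state = D[(d_state, a)]
        d_sum += w
        if abs(max(best.values()) - d_sum) > k:
            return False
    return True

print(k_delay_ok('a', 1))      # True: after one letter the gap is 1
print(k_delay_ok('aaaa', 1))   # False: the gap grows by 1 per letter
```

Here the nondeterministic automaton can gain 2 per letter while the deterministic candidate gains only 1, so the delay eventually exceeds any fixed k.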

16 citations


DOI
01 Jul 2017
TL;DR: This work extends the framework of [BFRR14] and follow-up papers, which focused on quantitative objectives, by addressing the case of parity objectives, a natural way to represent functional requirements of systems.
Abstract: The beyond worst-case synthesis problem was introduced recently by Bruyère et al. [BFRR14]: it aims at building system controllers that provide strict worst-case performance guarantees against an antagonistic environment while ensuring higher expected performance against a stochastic model of the environment. Our work extends the framework of [BFRR14] and follow-up papers, which focused on quantitative objectives, by addressing the case of ω-regular conditions encoded as parity objectives, a natural way to represent functional requirements of systems. We build strategies that satisfy a main parity objective on all plays, while ensuring a secondary one with sufficient probability. This setting raises new challenges in comparison to quantitative objectives, as one cannot easily mix different strategies without endangering the functional properties of the system. We establish that, for all variants of this problem, deciding the existence of a strategy lies in NP ∩ coNP, the same complexity class as classical parity games. Hence, our framework provides additional modeling power while staying in the same complexity class. [BFRR14] Véronique Bruyère, Emmanuel Filiot, Mickael Randour, and Jean-François Raskin. Meet your expectations with guarantees: Beyond worst-case synthesis in quantitative games. In Ernst W. Mayr and Natacha Portier, editors, 31st International Symposium on Theoretical Aspects of Computer Science, STACS 2014, March 5-8, 2014, Lyon, France, volume 25 of LIPIcs, pages 199-213. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2014.
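
Parity games themselves, the baseline whose NP ∩ coNP complexity the paper matches, can be solved by Zielonka's classical recursive algorithm. The following compact sketch uses a hypothetical game, with the convention that player 0 wins a play iff the maximal priority seen infinitely often is even; it is the textbook baseline, not the paper's algorithm.

```python
# Sketch: Zielonka's algorithm on a toy parity game.
def attractor(states, succ, owner, player, target):
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for s in states - attr:
            if (owner[s] == player and any(t in attr for t in succ[s])) or \
               (owner[s] != player and all(t in attr for t in succ[s])):
                attr.add(s)
                changed = True
    return attr

def zielonka(states, succ, owner, prio):
    if not states:
        return set(), set()
    p = max(prio[s] for s in states)      # highest priority in the game
    player = p % 2                        # ... is good for this player
    a = attractor(states, succ, owner, player,
                  {s for s in states if prio[s] == p})
    rest = states - a
    sub = {s: [t for t in succ[s] if t in rest] for s in rest}
    w0, w1 = zielonka(rest, sub, owner, prio)
    opp = w1 if player == 0 else w0
    if not opp:                           # opponent wins nowhere below
        return (states, set()) if player == 0 else (set(), states)
    b = attractor(states, succ, owner, 1 - player, opp)
    rest = states - b
    sub = {s: [t for t in succ[s] if t in rest] for s in rest}
    w0, w1 = zielonka(rest, sub, owner, prio)
    return (w0, w1 | b) if player == 0 else (w0 | b, w1)

succ = {0: [1, 2], 1: [0], 2: [2], 3: [0, 3]}
owner = {0: 0, 1: 1, 2: 1, 3: 1}
prio = {0: 1, 1: 1, 2: 0, 3: 3}
print(zielonka(set(succ), succ, owner, prio))  # ({0, 1, 2}, {3})
```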

Proceedings Article
01 Jan 2017
TL;DR: This work goes beyond both the “expectation” and “threshold” approaches and considers a “guaranteed payoff optimization (GPO)” problem for POMDPs, where the objective is to find a policy σ such that each possible outcome yields a discounted-sum payoff of at least t.
Abstract: A standard objective in partially-observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is problematic especially in safety-critical applications. Recently, there has been a surge of interest in POMDPs where the goal is to maximize the probability to ensure that the payoff is at least a given threshold, but these approaches do not consider any optimization beyond satisfying this threshold constraint. In this work we go beyond both the “expectation” and “threshold” approaches and consider a “guaranteed payoff optimization (GPO)” problem for POMDPs, where we are given a threshold t and the objective is to find a policy σ such that a) each possible outcome of σ yields a discounted-sum payoff of at least t, and b) the expected discounted-sum payoff of σ is optimal (or near-optimal) among all policies satisfying a). We present a practical approach to tackle the GPO problem and evaluate it on standard POMDP benchmarks.
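
The two requirements of GPO can be checked for a fixed, hypothetical policy by enumerating outcomes up to a finite horizon (legitimate for discounted sums, whose tails are bounded). The sketch below computes the worst outcome, to compare against the threshold t, and the expectation, the quantity to optimize among threshold-respecting policies; it only evaluates one policy, it does not solve GPO.

```python
# Sketch: evaluating one policy against the two GPO criteria.
GAMMA, T = 0.5, 1.5

# Markov chain induced by the policy: state -> [(prob, reward, next)]
chain = {'s': [(0.5, 1, 's'), (0.5, 2, 's')]}

def outcomes(state, depth, value=0.0, disc=1.0, prob=1.0):
    if depth == 0:
        yield value, prob                 # truncated outcome
        return
    for p, r, n in chain[state]:
        yield from outcomes(n, depth - 1, value + disc * r,
                            disc * GAMMA, prob * p)

outs = list(outcomes('s', depth=10))
worst = min(v for v, _ in outs)
expect = sum(v * p for v, p in outs)
print(f'worst outcome ~ {worst:.3f} (need >= {T}), expectation ~ {expect:.3f}')
```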

Journal ArticleDOI
TL;DR: A new notion of equilibria is proposed, called doomsday equilibria: a strategy profile where all players satisfy their own objective, and if any coalition of players deviates and violates even one player's objective, then the objective of every player is violated.
Abstract: Two-player games on graphs provide the theoretical framework for many important problems such as reactive synthesis. While the traditional study of two-player zero-sum games has been extended to multi-player games with several notions of equilibria, these are decidable only for perfect-information games, whereas several applications require imperfect information. In this paper we propose a new notion of equilibria, called doomsday equilibria, which is a strategy profile where all players satisfy their own objective, and if any coalition of players deviates and violates even one player's objective, then the objective of every player is violated. We present algorithms and complexity results for deciding the existence of doomsday equilibria for various classes of ω-regular objectives, both for imperfect-information games and for perfect-information games. We provide optimal complexity bounds for imperfect-information games, and in most cases for perfect-information games.



Journal ArticleDOI
TL;DR: The aim of this paper is to show that the new data structure of pseudo-antichains (an extension of antichains) provides another interesting alternative, especially for the class of monotonic MDPs, and to design efficient pseudo-antichain-based symblicit algorithms for two quantitative settings: the expected mean-payoff and the stochastic shortest path.
Abstract: When treating Markov decision processes (MDPs) with large state spaces, using explicit representations quickly becomes unfeasible. Lately, Wimmer et al. have proposed a so-called symblicit algorithm for the synthesis of optimal strategies in MDPs, in the quantitative setting of expected mean-payoff. This algorithm, based on the strategy iteration algorithm of Howard and Veinott, efficiently combines symbolic and explicit data structures, and uses binary decision diagrams as symbolic representation. The aim of this paper is to show that the new data structure of pseudo-antichains (an extension of antichains) provides another interesting alternative, especially for the class of monotonic MDPs. We design efficient pseudo-antichain-based symblicit algorithms (with open source implementations) for two quantitative settings: the expected mean-payoff and the stochastic shortest path. For two practical applications coming from automated planning and LTL synthesis, we report promising experimental results w.r.t. both the run time and the memory consumption. We also show that a variant of pseudo-antichains allows one to handle the infinite state spaces underlying the qualitative verification of probabilistic lossy channel systems.
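
The antichain idea that pseudo-antichains extend is simple to state in code: represent a ⊆-downward-closed set by its maximal elements only, discarding anything dominated on insertion. A minimal sketch with invented sets:

```python
# Sketch: maintaining an antichain of ⊆-maximal sets.
def antichain_insert(chain, new):
    if any(new <= s for s in chain):      # dominated: nothing to do
        return chain
    return [s for s in chain if not (s <= new)] + [new]

chain = []
for s in [{1}, {1, 2}, {3}, {2}, {1, 2, 3}]:
    chain = antichain_insert(chain, frozenset(s))
print(chain)   # only the maximal set {1, 2, 3} survives
```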

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the algorithmic properties of several subclasses of mean-payoff games where the players have asymmetric information about the state of the game, and give an exponential-time algorithm for determining the winner.
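
In the one-player special case, the mean-payoff value is just the maximum cycle mean of the weighted graph, computable by the max variant of Karp's algorithm. The sketch below uses an invented toy graph; it is only the degenerate case of the games studied in the paper, which additionally have two players and asymmetric information.

```python
# Sketch: Karp's algorithm (max variant) for the maximum cycle mean.
edges = [(0, 1, 3), (1, 0, 1), (1, 2, 0), (2, 2, 1)]   # (from, to, weight)
n = 3

def max_cycle_mean(n, edges):
    NEG = float('-inf')
    # d[k][v] = max weight of a k-edge path ending in v (from anywhere)
    d = [[NEG] * n for _ in range(n + 1)]
    d[0] = [0.0] * n
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] > NEG:
                d[k][v] = max(d[k][v], d[k - 1][u] + w)
    best = NEG
    for v in range(n):
        if d[n][v] > NEG:
            # Karp: take min over k of (d[n][v] - d[k][v]) / (n - k)
            best = max(best, min((d[n][v] - d[k][v]) / (n - k)
                                 for k in range(n) if d[k][v] > NEG))
    return best

print(max_cycle_mean(n, edges))   # 2.0: cycle 0->1->0 has mean (3+1)/2
```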

Proceedings ArticleDOI
10 Jul 2017
TL;DR: It is proved that admissible strategies always exist in concurrent games, they are characterised precisely, and it is shown how to perform assume-admissible synthesis.
Abstract: In this paper, we study the notion of admissibility for randomised strategies in concurrent games. Intuitively, an admissible strategy is one where the player plays 'as well as possible', because there is no other strategy that dominates it, i.e., that wins (almost surely) against a superset of adversarial strategies. We prove that admissible strategies always exist in concurrent games, and we characterise them precisely. Then, when the objectives of the players are ω-regular, we show how to perform assume-admissible synthesis, i.e., how to compute admissible strategies that win (almost surely) under the hypothesis that the other players play admissible strategies only.

Posted Content
TL;DR: This work extends the framework of [BFRR14] and follow-up papers, which focused on quantitative objectives, by addressing the case of parity objectives, a natural way to represent functional requirements of systems.
Abstract: The beyond worst-case synthesis problem was introduced recently by Bruyère et al. [BFRR14]: it aims at building system controllers that provide strict worst-case performance guarantees against an antagonistic environment while ensuring higher expected performance against a stochastic model of the environment. Our work extends the framework of [BFRR14] and follow-up papers, which focused on quantitative objectives, by addressing the case of ω-regular conditions encoded as parity objectives, a natural way to represent functional requirements of systems. We build strategies that satisfy a main parity objective on all plays, while ensuring a secondary one with sufficient probability. This setting raises new challenges in comparison to quantitative objectives, as one cannot easily mix different strategies without endangering the functional properties of the system. We establish that, for all variants of this problem, deciding the existence of a strategy lies in NP ∩ coNP, the same complexity class as classical parity games. Hence, our framework provides additional modeling power while staying in the same complexity class. [BFRR14] Véronique Bruyère, Emmanuel Filiot, Mickael Randour, and Jean-François Raskin. Meet your expectations with guarantees: Beyond worst-case synthesis in quantitative games. In Ernst W. Mayr and Natacha Portier, editors, 31st International Symposium on Theoretical Aspects of Computer Science, STACS 2014, March 5-8, 2014, Lyon, France, volume 25 of LIPIcs, pages 199-213. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2014.

Book ChapterDOI
25 Jul 2017
TL;DR: It is shown that admissible strategies may not exist in timed games with a continuous semantics of time, even for safety objectives, and symbolic algorithms are provided to solve the model-checking problem under admissibility and the assume-admissible synthesis problem for real-time non-zero-sum n-player games with safety objectives.
Abstract: In this paper, we study the notion of admissibility in timed games. First, we show that admissible strategies may not exist in timed games with a continuous semantics of time, even for safety objectives. Second, we show that the discrete time semantics of timed games is better behaved w.r.t. admissibility: the existence of admissible strategies is guaranteed in that semantics. Third, we provide symbolic algorithms to solve the model-checking problem under admissibility and the assume-admissible synthesis problem for real-time non-zero-sum n-player games for safety objectives.

Posted Content
19 Jul 2017
TL;DR: This work investigates in detail turn-based lexicographic games with all the classically studied ω-objectives, and studies the threshold problem which asks whether player 1 can ensure a payoff greater than or equal to a given Boolean threshold vector w.r.t. the lexicographic order.
Abstract: In recent years, two-player zero-sum games with multidimensional objectives have received a lot of interest as a model for intricate systems that are required to satisfy several properties. In this framework, player 1 wins if he can ensure that all objectives are satisfied against any behavior of player 2. It is however often natural to provide more significance to one objective over another, a situation that can be modeled by lexicographically ordering the objectives. Inspired by recent work on concurrent games with ω-regular objectives by Bouyer et al., we investigate in detail turn-based lexicographic games with all the classically studied ω-objectives. Given a tuple of objectives, the payoff associated with a given play of the game is a Boolean vector, the components of which indicate which objectives are satisfied. We study the threshold problem which asks whether player 1 can ensure a payoff greater than or equal to a given Boolean threshold vector w.r.t. the lexicographic order. We provide precise results that refine and complete the ones by Bouyer et al. for turn-based games, including exact complexity classes, deterministic algorithms for computing the values and the optimal strategies, and memory requirements for those strategies. Whereas the threshold problem is PSPACE-complete for several ω-regular objectives, we show that it is fixed parameter tractable in those cases.
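
The threshold condition itself is elementary to state: payoffs are Boolean vectors, and Python tuples already compare lexicographically, as this hypothetical example shows (the hard part, deciding whether player 1 can force such a payoff, is what the paper solves).

```python
# Sketch: the lexicographic order on Boolean payoff vectors.
# Component i is 1 iff objective i is satisfied by the play.
payoff = (1, 0, 1)      # satisfies objectives 1 and 3 but not 2
threshold = (1, 0, 0)

print(payoff >= threshold)      # True: (1,0,1) >=_lex (1,0,0)
print((0, 1, 1) >= threshold)   # False: losing the most significant
                                # objective cannot be compensated
```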

Proceedings ArticleDOI
04 Sep 2017
TL;DR: It is shown that in stark contrast with the perfect information variant, admissible strategies are only guaranteed to exist when players have objectives that are closed sets.
Abstract: In this invited paper, we study the concept of admissible strategies for two player win/lose infinite sequential games with imperfect information. We show that in stark contrast with the perfect information variant, admissible strategies are only guaranteed to exist when players have objectives that are closed sets. As a consequence, we also study decision problems related to the existence of admissible strategies for regular games as well as finite duration games.

DOI
19 Jul 2017
TL;DR: In this article, the authors study the threshold problem for two-player zero-sum games with monotonically ordered and ω-regular objectives, and provide polynomial-time algorithms for Büchi, coBüchi and explicit Muller objectives for a large subclass of monotonic preorders.
Abstract: In recent years, two-player zero-sum games with multiple objectives have received a lot of interest as a model for the synthesis of complex reactive systems. In this framework, Player 1 wins if he can ensure that all objectives are satisfied against any behavior of Player 2. When it is not possible to satisfy all the objectives at once, an alternative is to use some preorder on the objectives according to which subset of objectives Player 1 wants to satisfy. For example, it is often natural to provide more significance to one objective over another, a situation that can be modelled with lexicographically ordered objectives for instance. Inspired by recent work on concurrent games with multiple ω-regular objectives by Bouyer et al., we investigate in detail turn-based games with monotonically ordered and ω-regular objectives. We study the threshold problem which asks whether Player 1 can ensure a payoff greater than or equal to a given threshold w.r.t. a given monotonic preorder. As the number of objectives is usually much smaller than the size of the game graph, we provide a parametric complexity analysis and we show that our threshold problem is in FPT for all monotonic preorders and all classical types of ω-regular objectives. We also provide polynomial-time algorithms for Büchi, coBüchi and explicit Muller objectives for a large subclass of monotonic preorders that includes, among others, the lexicographic preorder. In the particular case of the lexicographic preorder, we also study the complexity of computing the values and the memory requirements of optimal strategies.

Posted Content
TL;DR: It is proved that admissible strategies always exist in concurrent games and they are characterised precisely; when the objectives of the players are ω-regular, it is shown how to perform assume-admissible synthesis.
Abstract: In this paper, we study the notion of admissibility for randomised strategies in concurrent games. Intuitively, an admissible strategy is one where the player plays 'as well as possible', because there is no other strategy that dominates it, i.e., that wins (almost surely) against a superset of adversarial strategies. We prove that admissible strategies always exist in concurrent games, and we characterise them precisely. Then, when the objectives of the players are ω-regular, we show how to perform assume-admissible synthesis, i.e., how to compute admissible strategies that win (almost surely) under the hypothesis that the other players play admissible strategies only.

Posted Content
TL;DR: It is shown that a WA is determinizable if and only if it is k-delay determinizable for some k, which means deciding the existence of some k is as difficult as the general determinization problem.
Abstract: Decidability of the determinization problem for weighted automata over the semiring (ℤ ∪ {−∞}, max, +), WA for short, is a long-standing open question. We propose two ways of approaching it by constraining the search space of deterministic WA: k-delay and r-regret. A WA N is k-delay determinizable if there exists a deterministic automaton D that defines the same function as N and for all words α in the language of N, the accepting run of D on α is always at most k-away from a maximal accepting run of N on α. That is, along all prefixes of the same length, the absolute difference between the running sums of weights of the two runs is at most k. A WA N is r-regret determinizable if for all words α in its language, its non-determinism can be resolved on the fly to construct a run of N such that the absolute difference between its value and the value assigned to α by N is at most r. We show that a WA is determinizable if and only if it is k-delay determinizable for some k. Hence deciding the existence of some k is as difficult as the general determinization problem. When k and r are given as input, the k-delay and r-regret determinization problems are shown to be EXPTIME-complete. We also show that determining whether a WA is r-regret determinizable for some r is in EXPTIME.

Posted Content
TL;DR: In this article, the expressive power and the algorithmic properties of weighted expressions are investigated, and important decision problems such as emptiness, universality and comparison are shown to be PSPACE-complete for these expressions.
Abstract: In this paper, we investigate the expressive power and the algorithmic properties of weighted expressions, which define functions from finite words to integers. First, we consider a slight extension of an expression formalism, introduced by Chatterjee et al. in the context of infinite words, by which to combine values given by unambiguous (max,+)-automata, using Presburger arithmetic. We show that important decision problems such as emptiness, universality and comparison are PSPACE-complete for these expressions. We then investigate the extension of these expressions with Kleene star. This allows one to iterate an expression over smaller fragments of the input word, and to combine the results by taking their iterated sum. The decision problems turn out to be undecidable, but we introduce the decidable and still expressive class of synchronised expressions.

Book ChapterDOI
11 Sep 2017
TL;DR: In this paper, the expressive power and the algorithmic properties of weighted expressions are investigated, and important decision problems such as emptiness, universality and comparison are shown to be PSPACE-complete for these expressions.
Abstract: In this paper, we investigate the expressive power and the algorithmic properties of weighted expressions, which define functions from finite words to integers. First, we consider a slight extension of an expression formalism, introduced by Chatterjee et al. in the context of infinite words, by which to combine values given by unambiguous (max,+)-automata, using Presburger arithmetic. We show that important decision problems such as emptiness, universality and comparison are PSPACE-complete for these expressions. We then investigate the extension of these expressions with Kleene star. This allows one to iterate an expression over smaller fragments of the input word, and to combine the results by taking their iterated sum. The decision problems turn out to be undecidable, but we introduce the decidable and still expressive class of synchronised expressions.
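
The basic evaluation step underlying such expressions, running a (max,+)-automaton on a word, is a matrix-vector product over the (max,+) semiring. The sketch below uses an invented two-state automaton and ignores the unambiguity restriction and the Presburger combination layer that the paper adds on top.

```python
# Sketch: evaluating a toy (max,+)-automaton by semiring matrix products.
NEG = float('-inf')
M = {'a': [[1, 0], [NEG, 2]],      # M[x][p][q] = weight of p --x--> q
     'b': [[NEG, 3], [1, NEG]]}
init, final = [0, NEG], [NEG, 0]   # 0 = "allowed", -inf = "forbidden"

def maxplus_apply(vec, mat):
    return [max(vec[p] + mat[p][q] for p in range(len(vec)))
            for q in range(len(mat[0]))]

def value(word):
    vec = init
    for x in word:
        vec = maxplus_apply(vec, M[x])
    return max(v + f for v, f in zip(vec, final))

print(value('ab'))   # 4: the best run is 0 -a/1-> 0 -b/3-> 1
```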