
Showing papers by "Jean-François Raskin published in 2015"


Journal ArticleDOI
TL;DR: The first solution of multi-mean-payoff games with infinite-memory strategies is presented, and it is shown that mean-payoff-sup objectives can be decided in NP ∩ coNP.
Abstract: In mean-payoff games, the objective of the protagonist is to ensure that the limit average of an infinite sequence of numeric weights is nonnegative. In energy games, the objective is to ensure that the running sum of weights is always nonnegative. Multi-mean-payoff and multi-energy games replace individual weights by tuples, and the limit average (resp., running sum) of each coordinate must be (resp., remain) nonnegative. We prove finite-memory determinacy of multi-energy games and show inter-reducibility of multi-mean-payoff and multi-energy games for finite-memory strategies. We improve the computational complexity for solving both classes with finite-memory strategies: we prove coNP-completeness, improving on the previously known EXPSPACE bound. For memoryless strategies, we show that deciding the existence of a winning strategy for the protagonist is NP-complete. We present the first solution of multi-mean-payoff games with infinite-memory strategies: we show that mean-payoff-sup objectives can be decided in NP ∩ coNP, whereas mean-payoff-inf objectives are coNP-complete.
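To make the two objectives concrete, here is a minimal sketch (not taken from the paper) that evaluates them on an ultimately periodic play of the form prefix·cycle^ω; the function names and example weights are illustrative only. In the multi-dimensional setting of the paper, the same checks apply coordinate-wise to weight tuples.

```python
def mean_payoff(prefix, cycle):
    """Limit average of an ultimately periodic weight sequence: for such
    plays the limit exists and equals the cycle average (prefix is irrelevant)."""
    return sum(cycle) / len(cycle)

def energy_feasible(prefix, cycle, initial_credit=0):
    """Energy objective: the running sum (plus initial credit) must stay
    nonnegative forever.  For a lasso-shaped play this holds iff one
    unrolling of the cycle stays nonnegative and the cycle sum is >= 0."""
    level = initial_credit
    for w in prefix + cycle:
        level += w
        if level < 0:
            return False
    return sum(cycle) >= 0

print(mean_payoff([-1, 2], [1, -1]))         # 0.0: mean-payoff objective met
print(energy_feasible([-1, 2], [1, -1], 1))  # True with initial credit 1
print(energy_feasible([-1, 2], [1, -1], 0))  # False: the first step dips below 0
```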

105 citations


Book ChapterDOI
18 Jul 2015
TL;DR: This paper shows how to compute a single strategy to enforce that for all dimensions i, the probability of outcomes ρ satisfying f_i(ρ) ≥ v_i is at least α_i, and considers classical quantitative payoffs from the literature.
Abstract: Markov decision processes (MDPs) with multi-dimensional weights are useful to analyze systems with multiple objectives that may be conflicting and require the analysis of trade-offs. In this paper, we study the complexity of percentile queries in such MDPs and give algorithms to synthesize strategies that enforce such constraints. Given a multi-dimensional weighted MDP and a quantitative payoff function f, thresholds v_i (one per dimension), and probability thresholds α_i, we show how to compute a single strategy to enforce that for all dimensions i, the probability of outcomes ρ satisfying f_i(ρ) ≥ v_i is at least α_i. We consider classical quantitative payoffs from the literature (sup, inf, lim sup, lim inf, mean-payoff, truncated sum, discounted sum). Our work extends to the quantitative case the multi-objective model checking problem studied by Etessami et al. [16] in unweighted MDPs.
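As a hedged illustration of what a single-dimension percentile query asks (a Monte-Carlo sketch, not the paper's synthesis algorithms), the snippet below estimates a constraint for the truncated-sum payoff under a fixed strategy. The chain, states and thresholds are hypothetical; the inequality is flipped to ≤ because truncated sum is read as a cost here.

```python
import random

# MDP with a memoryless strategy already applied: state -> [(prob, succ, weight)]
chain = {"s": [(0.7, "t", 2), (0.3, "s", 5)]}   # "t" is the absorbing target

def sample_truncated_sum(start="s", target="t", horizon=1000):
    state, total = start, 0
    for _ in range(horizon):
        if state == target:
            return total
        r, acc = random.random(), 0.0
        for p, nxt, w in chain[state]:
            acc += p
            if r <= acc:
                total, state = total + w, nxt
                break
    return float("inf")              # target not reached within the horizon

def percentile_holds(v, alpha, runs=10_000):
    hits = sum(sample_truncated_sum() <= v for _ in range(runs))
    return hits / runs >= alpha

print(percentile_holds(v=7, alpha=0.6))  # reach "t" with cost <= 7 in >= 60% of runs?
```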

55 citations


Journal Article
TL;DR: In this article, the authors revisited the stochastic shortest path problem and showed how recent results allow one to improve over the classical solutions, and presented algorithms to synthesize strategies with multiple guarantees on the distribution of the length of paths reaching a given target, rather than simply minimizing its expected value.
Abstract: In this invited contribution, we revisit the stochastic shortest path problem, and show how recent results allow one to improve over the classical solutions: we present algorithms to synthesize strategies with multiple guarantees on the distribution of the length of paths reaching a given target, rather than simply minimizing its expected value. The concepts and algorithms that we propose here are applications of more general results that have been obtained recently for Markov decision processes and that are described in a series of recent papers.
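For contrast with the multi-guarantee strategies discussed above, the classical solution they improve on is plain value iteration for the expected cost to the target; the sketch below computes that baseline on a hypothetical MDP, for illustration only.

```python
# actions from each state: name -> [(prob, successor, cost)]; "t" is the target
mdp = {"s": {"fast": [(0.9, "t", 1), (0.1, "s", 1)],
             "slow": [(1.0, "t", 3)]}}

def ssp_value_iteration(mdp, target="t", iters=200):
    value = {s: 0.0 for s in mdp}
    value[target] = 0.0                      # reaching the target costs nothing more
    for _ in range(iters):
        for s, actions in mdp.items():
            value[s] = min(sum(p * (c + value[nxt]) for p, nxt, c in outcome)
                           for outcome in actions.values())
    return value

print(ssp_value_iteration(mdp)["s"])   # ~1.11: minimal expected cost from "s"
```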

47 citations


Journal ArticleDOI
TL;DR: It is shown that in contrast to multi-dimensional mean-payoff games that are known to be coNP-complete, multi-dimensional total-payoff games are undecidable, and conservative approximations of these objectives are introduced.
Abstract: We consider two-player games played on weighted directed graphs with mean-payoff and total-payoff objectives, two classical quantitative objectives. While for single-dimensional games the complexity and memory bounds for both objectives coincide, we show that in contrast to multi-dimensional mean-payoff games that are known to be coNP-complete, multi-dimensional total-payoff games are undecidable. We introduce conservative approximations of these objectives, where the payoff is considered over a local finite window sliding along a play, instead of the whole play. For single dimension, we show that (i) if the window size is polynomial, deciding the winner takes polynomial time, and (ii) the existence of a bounded window can be decided in NP ∩ coNP, and is at least as hard as solving mean-payoff games. For multiple dimensions, we show that (i) the problem with fixed window size is EXPTIME-complete, and (ii) there is no primitive-recursive algorithm to decide the existence of a bounded window.
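A minimal sketch of the window idea (on finite plays only, and illustrative rather than the paper's algorithm): at every position, some window of length at most lmax opened there must close with a nonnegative total weight.

```python
def good_window_everywhere(weights, lmax):
    for i in range(len(weights)):
        total, closed = 0, False
        for j in range(i, min(i + lmax, len(weights))):
            total += weights[j]
            if total >= 0:
                closed = True    # the window opened at position i closes at j
                break
        if not closed:
            return False         # no window of length <= lmax closes at i
    return True

print(good_window_everywhere([-1, 2, -1, 2], lmax=2))  # True
print(good_window_everywhere([-2, 1, -2, 1], lmax=2))  # False: -2 is never recovered
```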

43 citations


Posted Content
TL;DR: In this article, a novel rule for synthesis of reactive systems, applicable to systems made of n components which each have their own objectives, is proposed, based on the notion of admissible strategies.
Abstract: In this paper, we introduce a novel rule for synthesis of reactive systems, applicable to systems made of n components which each have their own objectives. It is based on the notion of admissible strategies. We compare our novel rule with previous rules defined in the literature, and we show that contrary to the previous proposals, our rule defines sets of solutions which are rectangular. This property leads to solutions which are robust and resilient. We provide algorithms with optimal complexity and also an abstraction framework.

30 citations


Proceedings ArticleDOI
06 Jul 2015
TL;DR: The multidimensional BWC problem is shown coNP-complete for both finite-memory and infinite-memory strategies, and the special case with a unidimensional worst-case objective is in NP ∩ coNP; this solves the infinite-memory threshold problem left open by Bruyère et al., and the complexity cannot be improved without improving the currently known complexity of classical mean-payoff games. The multidimensional BAS threshold problem is solvable in P.
Abstract: The beyond worst-case threshold problem (BWC), recently introduced by Bruyère et al., asks, given a quantitative game graph, for the synthesis of a strategy that i) enforces some minimal level of performance against any adversary, and ii) achieves a good expectation against a stochastic model of the adversary. They solved the BWC problem for finite-memory strategies and unidimensional mean-payoff objectives and they showed membership of the problem in NP ∩ coNP. They also noted that infinite-memory strategies are more powerful than finite-memory ones, but the respective threshold problem was left open. We extend these results in several directions. First, we consider multidimensional mean-payoff objectives. Second, we study both finite-memory and infinite-memory strategies. We show that the multidimensional BWC problem is coNP-complete in both cases. Third, in the special case when the worst-case objective is unidimensional (but the expectation objective is still multidimensional) we show that the complexity decreases to NP ∩ coNP. This solves the infinite-memory threshold problem left open by Bruyère et al., and this complexity cannot be improved without improving the currently known complexity of classical mean-payoff games. Finally, we introduce a natural relaxation of the BWC problem, the beyond almost-sure threshold problem (BAS), which asks for the synthesis of a strategy that ensures some minimal level of performance with probability one and a good expectation against the stochastic model of the adversary. We show that the multidimensional BAS threshold problem is solvable in P.
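The tension between the two BWC requirements shows up already in a one-shot toy example (hypothetical payoffs, not from the paper): Eve picks a row, Adam picks a column, and `dist` is the stochastic model of Adam.

```python
payoff = {"safe":  {"l": 1, "r": 1},
          "risky": {"l": 0, "r": 4}}
dist = {"l": 0.5, "r": 0.5}                 # stochastic model of Adam

def bwc_ok(row, worst_threshold, exp_threshold):
    worst = min(payoff[row].values())       # guaranteed against any adversary
    expect = sum(dist[c] * payoff[row][c] for c in dist)
    return worst > worst_threshold and expect >= exp_threshold

for row in payoff:
    print(row, bwc_ok(row, worst_threshold=0, exp_threshold=1.5))
# Neither pure choice meets both constraints ("safe" has too low an expectation,
# "risky" has no worst-case guarantee): BWC strategies must mix behaviours over time.
```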

25 citations


Book ChapterDOI
18 Jul 2015
TL;DR: This paper studies the set of thresholds that the protagonist can force in a zero-sum two-player multidimensional mean-payoff game and shows that the Pareto curve can be effectively constructed; under natural assumptions (a fixed number of dimensions and polynomially bounded weights), this construction can be done in deterministic polynomial time.
Abstract: In this paper, we study the set of thresholds that the protagonist can force in a zero-sum two-player multidimensional mean-payoff game. The set of maximal elements of such a set is called the Pareto curve, a classical tool to analyze trade-offs. As thresholds are vectors of real numbers in multiple dimensions, there usually exist infinitely many such maximal elements. Our main results are as follows. First, we study the geometry of this set and show that it is definable as a finite union of convex sets given by linear inequalities. Second, we provide a Σ₂P algorithm to decide if this set intersects a convex set defined by linear inequalities, and we prove the optimality of our algorithm by providing a matching complexity lower bound for the problem. Furthermore, we show that, under natural assumptions, i.e. a fixed number of dimensions and polynomially bounded weights in the game, the problem can be solved in deterministic polynomial time. Finally, we show that the Pareto curve can be effectively constructed, and under the former natural assumptions, this construction can be done in deterministic polynomial time.
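The Pareto curve itself is an infinite union of convex pieces, but the notion of maximality is easy to illustrate on a finite sample of achievable threshold vectors; the helper below is a hedged illustration, not the paper's construction.

```python
def pareto_front(points):
    def dominates(p, q):                     # p >= q componentwise and p != q
        return all(a >= b for a, b in zip(p, q)) and p != q
    return [p for p in points if not any(dominates(q, p) for q in points)]

samples = [(2, 0), (1, 1), (0, 2), (1, 0), (0, 0)]
print(pareto_front(samples))                 # [(2, 0), (1, 1), (0, 2)]
```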

24 citations


Proceedings ArticleDOI
01 Sep 2015
TL;DR: This paper introduces a novel rule for synthesis of reactive systems, applicable to systems made of n components which each have their own objectives, based on the notion of admissible strategies, and shows that contrary to the previous proposals, this rule defines sets of solutions which are rectangular.
Abstract: In this paper, we introduce a novel rule for synthesis of reactive systems, applicable to systems made of n components which each have their own objectives. It is based on the notion of admissible strategies. We compare our novel rule with previous rules defined in the literature, and we show that contrary to the previous proposals, our rule defines sets of solutions which are rectangular. This property leads to solutions which are robust and resilient. We provide algorithms with optimal complexity and also an abstraction framework.

18 citations


Journal ArticleDOI
TL;DR: The first iteration of the reactive synthesis competition (SYNTCOMP) as discussed by the authors is based on the controller synthesis problem for finite-state systems and safety specifications; the authors provide an overview of this problem and of existing approaches to solve it.
Abstract: We introduce the reactive synthesis competition (SYNTCOMP), a long-term effort intended to stimulate and guide advances in the design and application of synthesis procedures for reactive systems. The first iteration of SYNTCOMP is based on the controller synthesis problem for finite-state systems and safety specifications. We provide an overview of this problem and existing approaches to solve it, and report on the design and results of the first SYNTCOMP. This includes the definition of the benchmark format, the collection of benchmarks, the rules of the competition, and the five synthesis tools that participated. We present and analyze the results of the competition and draw conclusions on the state of the art. Finally, we give an outlook on future directions of SYNTCOMP.

16 citations


Proceedings ArticleDOI
01 Sep 2015
TL;DR: In this paper, the existence of weak subgame perfect equilibria is shown to be decidable for n-player turn-based games played on a finite directed graph, where players who deviate cannot use the full class of strategies but only a subclass with a finite number of deviation steps.
Abstract: We study n-player turn-based games played on a finite directed graph. For each play, the players have to pay a cost that they want to minimize. Instead of the well-known notion of Nash equilibrium (NE), we focus on the notion of subgame perfect equilibrium (SPE), a refinement of NE well-suited in the framework of games played on graphs. We also study natural variants of SPE, named weak (resp. very weak) SPE, where players who deviate cannot use the full class of strategies but only a subclass with a finite number of (resp. a unique) deviation step(s). Our results are threefold. Firstly, we characterize in the form of a Folk theorem the set of all plays that are the outcome of a weak SPE. Secondly, for the class of quantitative reachability games, we prove the existence of a finite-memory SPE and provide an algorithm for computing it (only existence was known with no information regarding the memory). Moreover, we show that the existence of a constrained SPE, i.e. an SPE such that each player pays a cost less than a given constant, can be decided. The proofs rely on our Folk theorem for weak SPEs (which coincide with SPEs in the case of quantitative reachability games) and on the decidability of MSO logic on infinite words. Finally with similar techniques, we provide a second general class of games for which the existence of a (constrained) weak SPE is decidable.
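For readers less familiar with SPEs: on finite game trees the concept coincides with classical backward induction, sketched below with a hypothetical tree (the paper's games are infinite-duration, which is precisely why the Folk-theorem machinery is needed instead). Each player minimizes their own cost.

```python
# A node is ("leaf", (cost_p0, cost_p1)) or (player, [children]).
tree = (0, [("leaf", (2, 1)),
            (1, [("leaf", (1, 3)), ("leaf", (3, 0))])])

def spe_costs(node):
    if node[0] == "leaf":
        return node[1]
    player, children = node
    outcomes = [spe_costs(child) for child in children]
    return min(outcomes, key=lambda costs: costs[player])  # player minimizes own cost

print(spe_costs(tree))  # (2, 1): player 1 would pick (3, 0), so player 0 avoids that subtree
```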

16 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate functional weighted automata for four different measures: the sum, the mean, the discounted sum of weights along edges and the ratio between rewards and costs.
Abstract: A weighted automaton is functional if any two accepting runs on the same finite word have the same value. In this paper, we investigate functional weighted automata for four different measures: the sum, the mean, the discounted sum of weights along edges and the ratio between rewards and costs. On the positive side, we show that functionality is decidable for the four measures. Furthermore, the existential and universal threshold problems, the language inclusion problem and the equivalence problem are all decidable when the weighted automata are functional. On the negative side, we also study the quantitative extension of the realizability problem and show that it is undecidable for sum, mean and ratio. We finally show how to decide whether the language associated with a given functional automaton can be defined with a deterministic one, for sum, mean and discounted sum. The results on functionality and determinizability are expressed for the more general class of functional group automata. This allows one to formulate within the same framework new results related to discounted sum automata and known results on sum and mean automata. Ratio automata do not fit within this general scheme and different techniques are required to decide functionality.
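A hedged sketch of the four measures on a single run (edge weights for the first three, (reward, cost) pairs for ratio); functionality then asks whether all accepting runs on the same word agree on this value. Names and data are illustrative only.

```python
def run_sum(ws):              return sum(ws)
def run_mean(ws):             return sum(ws) / len(ws)
def run_discounted(ws, lam):  return sum(w * lam**i for i, w in enumerate(ws))
def run_ratio(pairs):         # total reward divided by total cost
    rewards, costs = zip(*pairs)
    return sum(rewards) / sum(costs)

weights = [2, -1, 3]
print(run_sum(weights), run_mean(weights), run_discounted(weights, 0.5))  # 4 1.33... 2.25
print(run_ratio([(2, 1), (4, 3)]))                                        # 1.5
```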

Posted Content
TL;DR: This work focuses on the notion of subgame perfect equilibrium (SPE), a refinement of NE well-suited in the framework of games played on graphs, and proves the existence of a finite-memory SPE and provides an algorithm for computing it.
Abstract: We study n-player turn-based games played on a finite directed graph. For each play, the players have to pay a cost that they want to minimize. Instead of the well-known notion of Nash equilibrium (NE), we focus on the notion of subgame perfect equilibrium (SPE), a refinement of NE well-suited in the framework of games played on graphs. We also study natural variants of SPE, named weak (resp. very weak) SPE, where players who deviate cannot use the full class of strategies but only a subclass with a finite number of (resp. a unique) deviation step(s). Our results are threefold. Firstly, we characterize in the form of a Folk theorem the set of all plays that are the outcome of a weak SPE. Secondly, for the class of quantitative reachability games, we prove the existence of a finite-memory SPE and provide an algorithm for computing it (only existence was known with no information regarding the memory). Moreover, we show that the existence of a constrained SPE, i.e. an SPE such that each player pays a cost less than a given constant, can be decided. The proofs rely on our Folk theorem for weak SPEs (which coincide with SPEs in the case of quantitative reachability games) and on the decidability of MSO logic on infinite words. Finally with similar techniques, we provide a second general class of games for which the existence of a (constrained) weak SPE is decidable.

Book ChapterDOI
12 Jan 2015
TL;DR: This invited contribution revisits the stochastic shortest path problem, and shows how recent results allow one to improve over the classical solutions: it presents algorithms to synthesize strategies with multiple guarantees on the distribution of the length of paths reaching a given target, rather than simply minimizing its expected value.
Abstract: In this invited contribution, we revisit the stochastic shortest path problem, and show how recent results allow one to improve over the classical solutions: we present algorithms to synthesize strategies with multiple guarantees on the distribution of the length of paths reaching a given target, rather than simply minimizing its expected value. The concepts and algorithms that we propose here are applications of more general results that have been obtained recently for Markov decision processes and that are described in a series of recent papers.

Journal ArticleDOI
18 Jul 2015
TL;DR: In this paper, the synthesis of circuits for succinct safety specifications given in the AIG format is studied and symbolic compositional algorithms are proposed to solve the synthesis problem compositionally starting from the sub-specifications.
Abstract: We study the synthesis of circuits for succinct safety specifications given in the AIG format. We show how AIG safety specifications can be decomposed automatically into sub-specifications. Then we propose symbolic compositional algorithms to solve the synthesis problem compositionally, starting from the sub-specifications. We have evaluated the compositional algorithms on a set of benchmarks including those proposed for the first synthesis competition organised in 2014 by the Synthesis Workshop affiliated with the CAV conference. We show that a large number of benchmarks can be decomposed automatically and solved more efficiently with the compositional algorithms that we propose in this paper.

Journal ArticleDOI
30 Apr 2015
TL;DR: This article proposes Queue-Dispatch Asynchronous Systems as a mathematical model that faithfully formalizes the synchronization mechanisms and behavior of the scheduler in those systems and studies in detail their relationships to classical formalisms such as pushdown systems, Petri nets, Fifo systems, and counter systems.
Abstract: Recently, new libraries, such as Grand Central Dispatch (GCD), have been proposed to directly harness the power of multicore platforms and to make the development of concurrent software more accessible to software engineers. When using such a library, the programmer writes so-called blocks, which are chunks of code, and dispatches them using synchronous or asynchronous calls to several types of waiting queues. A scheduler is then responsible for dispatching those blocks among the available cores. Blocks can synchronize via a global memory. In this article, we propose Queue-Dispatch Asynchronous Systems as a mathematical model that faithfully formalizes the synchronization mechanisms and behavior of the scheduler in those systems. We study in detail their relationships to classical formalisms such as pushdown systems, Petri nets, FIFO systems, and counter systems. Our main technical contributions are precise worst-case complexity results for the Parikh coverability problem and the termination problem for several subclasses of our model. We also consider an extension of QDAS with a fork-join mechanism. Adding fork-join to any of the subclasses that we have identified leads to undecidability of the coverability problem. This motivates the study of over-approximations. Finally, we consider handmade abstractions as a practical way of verifying programs that cannot be faithfully modeled by decidable subclasses of QDAS.
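To fix intuitions about the modelled vocabulary, here is a toy serial-queue simulator in the GCD style (a hypothetical, purely sequential stand-in; the point of the QDAS model is precisely to reason about the real concurrent semantics):

```python
from collections import deque

memory = {"counter": 0}          # blocks synchronize via a global memory
serial_queue = deque()

def dispatch_async(queue, block):
    queue.append(block)          # returns immediately; the block runs later

def run_scheduler(queue):
    while queue:                 # a serial queue executes its blocks in FIFO order
        block = queue.popleft()
        block(memory)

for _ in range(3):
    dispatch_async(serial_queue,
                   lambda mem: mem.update(counter=mem["counter"] + 1))
run_scheduler(serial_queue)
print(memory["counter"])         # 3: the serial queue serializes all three blocks
```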

Journal ArticleDOI
TL;DR: ω-Petri nets (ωPN) are introduced: an extension of plain Petri nets with ω-labeled input and output arcs that is well-suited to analyse parametric concurrent systems with dynamic thread creation.
Abstract: We introduce ω-Petri nets (ωPN), an extension of plain Petri nets with ω-labeled input and output arcs, that is well-suited to analyse parametric concurrent systems with dynamic thread creation. Most techniques (such as the Karp and Miller tree or the Rackoff technique) that have been proposed in the setting of plain Petri nets do not apply directly to ωPN because ωPN define transition systems that have infinite branching. This motivates a thorough analysis of the computational aspects of ωPN. We show that an ωPN can be turned into a plain Petri net that allows us to recover the reachability set of the ωPN, but that does not preserve termination (an ωPN terminates iff it admits no infinitely long execution). This yields complexity bounds for the reachability, boundedness, place boundedness and coverability problems on ωPN. We provide a practical algorithm to compute a coverability set of the ωPN and to decide termination by adapting the classical Karp and Miller tree construction. We also adapt the Rackoff technique to ωPN, to obtain the exact complexity of the termination problem. Finally, we consider the extension of ωPN with reset and transfer arcs, and show how this extension impacts the decidability and complexity of the aforementioned problems.
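The key step of Karp-and-Miller-style constructions, which the paper adapts to ωPN, is the acceleration that introduces ω on strictly growing places. A minimal sketch of that step on plain markings (in ωPN, ω additionally appears on arcs):

```python
OMEGA = float("inf")   # stands for the omega ("arbitrarily many tokens") value

def accelerate(ancestor, marking):
    """If `marking` covers `ancestor` on every place, the path between them can
    be repeated, so every strictly larger place is pumped to omega."""
    assert all(m >= a for m, a in zip(marking, ancestor))
    return tuple(OMEGA if m > a else m for m, a in zip(marking, ancestor))

print(accelerate((1, 0, 2), (1, 3, 2)))   # (1, inf, 2): the middle place is unbounded
```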

Journal ArticleDOI
TL;DR: It turns out that for computing such a bisimulation up to congruence, often only a small fraction of the subset spaces need to be explored, and the formalization of the authors' idea is extremely elegant and leads to an algorithm that is efficient in practice.
Abstract: Checking the language equivalence of two nondeterministic finite automata (NFA) by exhibiting a bisimulation between their determinized versions does not lead to an efficient algorithm per se, as it can be shown that the entire state spaces of the two subset constructions (or at least their reachable parts) may need to be explored to establish bisimilarity if the two input automata are equivalent. The contribution of Bonchi and Pous is to show that bisimilarity—that is, the existence of a bisimulation relation between states—can be proved without constructing the deterministic automata explicitly. They show that, instead, it suffices to compute a bisimulation up to congruence. It turns out that for computing such a bisimulation up to congruence, often only a small fraction of the subset spaces need to be explored. As you will see, the formalization of their idea is extremely elegant and leads to an algorithm that is efficient in practice. The authors evaluate their algorithm on benchmarks. They explore how it relates to another class of promising recent approaches, called antichain methods, which also avoid explicit subset constructions for solving related problems, such as language inclusion for NFA over finite and infinite words and the solution of games played on graphs with imperfect information. In addition, they show how their construction can be improved by exploiting polynomial time computable simulation relations on NFA, an idea that was suggested by the antichain approach. As this work vividly demonstrates, even classical, well-studied problems like NFA equivalence can still offer surprising research opportunities, and new ideas may lead to elegant algorithmic improvements of practical importance.
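For concreteness, here is the naive baseline that bisimulation up to congruence improves on: exploring pairs of determinized states (subsets) on the fly and failing on the first acceptance mismatch. The toy NFA is hypothetical.

```python
def post(nfa, subset, letter):
    return frozenset(q2 for q in subset for q2 in nfa["delta"].get((q, letter), ()))

def accepts(nfa, subset):
    return bool(subset & nfa["final"])

def equivalent(nfa, init1, init2):
    seen, todo = set(), [(frozenset(init1), frozenset(init2))]
    while todo:
        s1, s2 = todo.pop()
        if (s1, s2) in seen:
            continue
        if accepts(nfa, s1) != accepts(nfa, s2):
            return False                     # a distinguishing word exists
        seen.add((s1, s2))
        for a in nfa["alphabet"]:
            todo.append((post(nfa, s1, a), post(nfa, s2, a)))
    return True                              # `seen` is a bisimulation on subsets

nfa = {"alphabet": "a",
       "delta": {("x", "a"): {"y"}, ("y", "a"): {"y"}, ("u", "a"): {"u", "v"}},
       "final": {"y", "v"}}
print(equivalent(nfa, {"x"}, {"u"}))         # True: both recognize one or more a's
```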

Posted Content
TL;DR: In this paper, the authors study the window mean-payoff objectives that were introduced recently as an alternative to the classical mean-payoff objectives, and show that some of the window mean-payoff objectives are decidable in games with partial-observation.
Abstract: Mean-payoff games (MPGs) are infinite duration two-player zero-sum games played on weighted graphs. Under the hypothesis of perfect information, they admit memoryless optimal strategies for both players and can be solved in NP ∩ coNP. MPGs are suitable quantitative models for open reactive systems. However, in this context the assumption of perfect information is not always realistic. For the partial-observation case, the problem that asks if the first player has an observation-based winning strategy that enforces a given threshold on the mean-payoff, is undecidable. In this paper, we study the window mean-payoff objectives that were introduced recently as an alternative to the classical mean-payoff objectives. We show that, in sharp contrast to the classical mean-payoff objectives, some of the window mean-payoff objectives are decidable in games with partial-observation.

Proceedings ArticleDOI
01 Aug 2015
TL;DR: This work gives algorithms to compute strategies of Eve that ensure minimal regret against an adversary whose choice of strategy is (i) unrestricted, (ii) limited to positional strategies, or (iii) limited to word strategies, and shows that the last two cases have natural modelling applications.
Abstract: Two-player zero-sum games of infinite duration and their quantitative versions are used in verification to model the interaction between a controller (Eve) and its environment (Adam). The question usually addressed is that of the existence (and computability) of a strategy for Eve that can maximize her payoff against any strategy of Adam. In this work, we are interested in strategies of Eve that minimize her regret, i.e. strategies that minimize the difference between her actual payoff and the payoff she could have achieved if she had known the strategy of Adam in advance. We give algorithms to compute the strategies of Eve that ensure minimal regret against an adversary whose choice of strategy is (i) unrestricted, (ii) limited to positional strategies, or (iii) limited to word strategies, and show that the last two cases have natural modelling applications. We also show that our notion of regret minimization in which Adam is limited to word strategies generalizes the notion of good for games introduced by Henzinger and Piterman, and is related to the notion of determinization by pruning due to Aminof, Kupferman and Lampert.
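The regret of a fixed choice is easiest to see in a one-shot matrix game (the paper's setting is infinite-duration graph games; the payoffs below are hypothetical): it is the worst case, over Adam's moves, of what Eve loses compared to best-responding to that move.

```python
payoff = {("e1", "a1"): 3, ("e1", "a2"): 0,
          ("e2", "a1"): 1, ("e2", "a2"): 2}
eve_moves, adam_moves = {"e1", "e2"}, {"a1", "a2"}

def regret_of(eve_choice):
    return max(max(payoff[(e, a)] for e in eve_moves)   # best response to a
               - payoff[(eve_choice, a)]
               for a in adam_moves)

for e in sorted(eve_moves):
    print(e, regret_of(e))   # both choices have regret 2, so the minimal regret is 2
```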

Posted Content
TL;DR: In this paper, the authors studied the problem of minimizing regret in discounted-sum games played on weighted game graphs and gave algorithms for the general problem of computing the minimal regret of the controller (Eve) as well as several variants depending on which strategies the environment (Adam) is permitted to use.
Abstract: In this paper, we study the problem of minimizing regret in discounted-sum games played on weighted game graphs. We give algorithms for the general problem of computing the minimal regret of the controller (Eve) as well as several variants depending on which strategies the environment (Adam) is permitted to use. We also consider the problem of synthesizing regret-free strategies for Eve in each of these scenarios.

Posted Content
TL;DR: In this article, it is shown that the multidimensional BWC problem is coNP-complete for both finite-memory and infinite-memory strategies, and that the multidimensional beyond almost-sure (BAS) threshold problem is solvable in P.
Abstract: The beyond worst-case threshold problem (BWC), recently introduced by Bruyère et al., asks, given a quantitative game graph, for the synthesis of a strategy that i) enforces some minimal level of performance against any adversary, and ii) achieves a good expectation against a stochastic model of the adversary. They solved the BWC problem for finite-memory strategies and unidimensional mean-payoff objectives and they showed membership of the problem in NP ∩ coNP. They also noted that infinite-memory strategies are more powerful than finite-memory ones, but the respective threshold problem was left open. We extend these results in several directions. First, we consider multidimensional mean-payoff objectives. Second, we study both finite-memory and infinite-memory strategies. We show that the multidimensional BWC problem is coNP-complete in both cases. Third, in the special case when the worst-case objective is unidimensional (but the expectation objective is still multidimensional) we show that the complexity decreases to NP ∩ coNP. This solves the infinite-memory threshold problem left open by Bruyère et al., and this complexity cannot be improved without improving the currently known complexity of classical mean-payoff games. Finally, we introduce a natural relaxation of the BWC problem, the beyond almost-sure threshold problem (BAS), which asks for the synthesis of a strategy that ensures some minimal level of performance with probability one and a good expectation against the stochastic model of the adversary. We show that the multidimensional BAS threshold problem is solvable in P.

Posted Content
TL;DR: In this article, the authors summarize new solution concepts useful for the synthesis of reactive systems in the context of non-zero sum games played on graphs, as part of the contributions obtained in the inVEST project funded by the European Research Council.
Abstract: In this invited contribution, we summarize new solution concepts useful for the synthesis of reactive systems that we have introduced in several recent publications. These solution concepts are developed in the context of non-zero sum games played on graphs. They are part of the contributions obtained in the inVEST project funded by the European Research Council.

Posted Content
TL;DR: The notion of regret minimization in which Adam is limited to word strategies generalizes the notion of good for games introduced by Henzinger and Piterman, and is related to the notion of determinization by pruning due to Aminof, Kupferman and Lampert.
Abstract: Two-player zero-sum games of infinite duration and their quantitative versions are used in verification to model the interaction between a controller (Eve) and its environment (Adam). The question usually addressed is that of the existence (and computability) of a strategy for Eve that can maximize her payoff against any strategy of Adam. In this work, we are interested in strategies of Eve that minimize her regret, i.e. strategies that minimize the difference between her actual payoff and the payoff she could have achieved if she had known the strategy of Adam in advance. We give algorithms to compute the strategies of Eve that ensure minimal regret against an adversary whose choice of strategy is (i) unrestricted, (ii) limited to positional strategies, or (iii) limited to word strategies. We also establish relations between the latter version and other problems studied in the literature.

Book ChapterDOI
12 Oct 2015
TL;DR: In this paper, the authors study the window mean-payoff objectives introduced recently as an alternative to the classical mean-payoff objectives, and show that some of the window mean-payoff objectives are decidable in games with partial-observation.
Abstract: Mean-payoff games (MPGs) are infinite duration two-player zero-sum games played on weighted graphs. Under the hypothesis of perfect information, they admit memoryless optimal strategies for both players and can be solved in NP ∩ coNP. MPGs are suitable quantitative models for open reactive systems. However, in this context the assumption of perfect information is not always realistic. For the partial-observation case, the problem that asks if the first player has an observation-based winning strategy that enforces a given threshold on the mean-payoff, is undecidable. In this paper, we study the window mean-payoff objectives introduced recently as an alternative to the classical mean-payoff objectives. We show that, in sharp contrast to the classical mean-payoff objectives, some of the window mean-payoff objectives are decidable in games with partial-observation.

Posted Content
TL;DR: This paper shows that the problem becomes EXPTIME-complete for DNF/CNF Boolean combinations of heterogeneous measures taken among {WMP, Inf, Sup, LimInf, LimSup} and that exponential memory strategies are sufficient for both players to win.
Abstract: In this paper, we study two-player zero-sum turn-based games played on a finite multidimensional weighted graph. In recent papers all dimensions use the same measure, whereas here we allow different measures to be combined. Such heterogeneous multidimensional quantitative games provide a general and natural model for the study of reactive system synthesis. We focus on classical measures like the Inf, Sup, LimInf, and LimSup of the weights seen along the play, as well as on the window mean-payoff (WMP) measure. This new measure is a natural strengthening of the mean-payoff measure. We allow objectives defined as Boolean combinations of heterogeneous constraints. While multidimensional games with Boolean combinations of mean-payoff constraints are undecidable, we show that the problem becomes EXPTIME-complete for DNF/CNF Boolean combinations of heterogeneous measures taken among {WMP, Inf, Sup, LimInf, LimSup} and that exponential memory strategies are sufficient for both players to win. We provide a detailed study of the complexity and the memory requirements when the Boolean combination of the measures is replaced by an intersection. EXPTIME-completeness and exponential memory strategies still hold for the intersection of measures in {WMP, Inf, Sup, LimInf, LimSup}, and we get PSPACE-completeness when the WMP measure is no longer considered. To avoid EXPTIME- or PSPACE-hardness, we impose at most one occurrence of the WMP measure and fix the number of Sup measures, and we propose several refinements (on the number of occurrences of the other measures) for which we get polynomial algorithms and lower memory requirements. For all the considered classes of games, we also study parameterized complexity.
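As a small companion to the list of measures, here is a hedged helper evaluating Inf, Sup, LimInf and LimSup on an ultimately periodic play prefix·cycle^ω (for such plays the two limit measures depend on the cycle only); the window mean-payoff measure needs the sliding-window machinery sketched earlier in this listing.

```python
def inf_measure(prefix, cycle):    return min(prefix + cycle)
def sup_measure(prefix, cycle):    return max(prefix + cycle)
def liminf_measure(prefix, cycle): return min(cycle)   # prefix weights occur only finitely often
def limsup_measure(prefix, cycle): return max(cycle)

prefix, cycle = [5, -2], [1, 3]
print(inf_measure(prefix, cycle), sup_measure(prefix, cycle),
      liminf_measure(prefix, cycle), limsup_measure(prefix, cycle))   # -2 5 1 3
```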