
Showing papers in "Information & Computation in 2018"


Journal ArticleDOI
TL;DR: It is shown that, when the dynamic system stores only a bounded number of objects in each state, a finite abstraction can be constructed that is faithful for μL, yielding decidability of verification; this notably implies that first-order LTL cannot be captured by μL.
Abstract: We consider μL, μLa, and μLp, three variants of the first-order μ-calculus studied in verification of data-aware processes, that differ in the form of quantification on objects across states. Each of these three logics has a distinct notion of bisimulation. We show that the three notions collapse for generic dynamic systems, which include all state-based systems specified using a logical formalism, e.g., the situation calculus. Hence, for such systems, μL, μLa, and μLp have the same expressive power. We also show that, when the dynamic system stores only a bounded number of objects in each state (e.g., for bounded situation calculus action theories), a finite abstraction can be constructed that is faithful for μL (the most general variant), yielding decidability of verification. This contrasts with the undecidability for first-order LTL, and notably implies that first-order LTL cannot be captured by μL.

36 citations


Journal ArticleDOI
TL;DR: The model checking problem for the logic is defined and shown to be PSpace-complete, and a labelling algorithm for solving the verification problem is proposed that is amenable to symbolic implementation.
Abstract: We introduce Strategy Logic with Knowledge, a novel formalism to reason about knowledge and strategic ability in memoryless multi-agent systems with incomplete information. We exemplify its expressive power; we define the model checking problem for the logic and show that it is PSpace-complete. We propose a labelling algorithm for solving the verification problem that we show is amenable to symbolic implementation. We introduce an extension of the open-source model checker MCMAS implementing the proposed algorithm. We report the benchmarks obtained on a number of scenarios from the literature, including the dining cryptographers protocol.

27 citations


Journal ArticleDOI
TL;DR: This paper introduces Graded Strategy Logic, an extension of SL with graded quantifiers over tuples of strategy variables, and proves that its model-checking problem is decidable; checking for the existence of a unique Nash equilibrium can be solved in 2ExpTime, which is no harder than merely checking for the existence of such an equilibrium.
Abstract: Strategy Logic (SL) is a logical formalism for strategic reasoning in multi-agent systems. Its main feature is that it has variables for strategies that are associated to specific agents using a binding operator. In this paper we introduce Graded Strategy Logic (Graded SL), an extension of SL by graded quantifiers over tuples of strategy variables, i.e., “there exist at least g different tuples (x1, …, xn) of strategies”, where g is a cardinal from the set N ∪ {ℵ0, ℵ1, 2^ℵ0}. We prove that the model-checking problem of Graded SL is decidable. We then turn to the complexity of fragments of Graded SL. When the g's are restricted to finite cardinals, written GradedN SL, the complexity of model-checking is no harder than for SL, i.e., it is non-elementary in the quantifier-block rank. We illustrate our formalism by showing how to count the number of different strategy profiles that are Nash equilibria (NE). By analysing the structure of the specific formulas involved, we conclude that the important problem of checking for the existence of a unique NE can be solved in 2ExpTime, which is no harder than merely checking for the existence of such an equilibrium.

26 citations


Journal ArticleDOI
TL;DR: A compositional framework is proposed, together with assume-guarantee rules, which enables winning strategies synthesised for individual components to be composed to a winning strategy for the composed game.
Abstract: Design of autonomous systems is facilitated by automatic synthesis of controllers from formal models and specifications. We focus on stochastic games, which can model interaction with an adverse environment, as well as probabilistic behaviour arising from uncertainties. Our contribution is twofold. First, we study long-run specifications expressed as quantitative multi-dimensional mean-payoff and ratio objectives. We then develop an algorithm to synthesise ε-optimal strategies for conjunctions of almost sure satisfaction for mean payoffs and ratio rewards (in general games) and Boolean combinations of expected mean-payoffs (in controllable multi-chain games). Second, we propose a compositional framework, together with assume-guarantee rules, which enables winning strategies synthesised for individual components to be composed to a winning strategy for the composed game. The framework applies to a broad class of properties, which also include expected total rewards, and has been implemented in the software tool PRISM-games.

25 citations


Journal ArticleDOI
TL;DR: This work first proves computational relationships among classic models of mobile robots on graphs, investigates the gathering problem and disproves conjectures previously posed in the literature, and compares classic models against luminous models.
Abstract: In this paper we study the computational power of mobile robots without global coordination. A comprehensive evaluation of the computational power of robots moving within the Euclidean plane has been proposed by Das et al. in 2016. In their work, the authors study the relations between classic synchronization models, namely fully-synchronous, semi-synchronous, and asynchronous, and variations of them where robots are endowed with a visible light, i.e., they are luminous. Here we are interested in similar settings but for robots moving on graphs. In particular, we first prove computational relationships among classic models on graphs. In this respect, we investigate the gathering problem and disprove conjectures previously posed in the literature. Second, we compare classic models against luminous models. Third, we highlight the differences among different luminous models. Finally, we compare our results with those holding in the Euclidean plane.

25 citations


Journal ArticleDOI
TL;DR: Reactive Modules Games with perfect information have been extensively studied, and the complexity of game-theoretic decision problems relating to such games (such as the existence of Nash equilibria) has been comprehensively classified; in this article, the authors study Reactive Modules Games in which agents have only partial visibility of their environment.
Abstract: Reactive Modules is a high-level modelling language for concurrent, distributed, and multi-agent systems, which is used in a number of practical model checking tools. Reactive Modules Games are a game-theoretic extension of Reactive Modules, in which system components are assumed to act strategically in an attempt to satisfy a temporal logic formula representing their individual goal. Reactive Modules Games with perfect information have been extensively studied, and the complexity of game-theoretic decision problems relating to such games (such as the existence of Nash equilibria) has been comprehensively classified. In this article, we study Reactive Modules Games in which agents have only partial visibility of their environment.

20 citations


Journal ArticleDOI
TL;DR: It is shown by examples that qMDPs can be used in the analysis of quantum algorithms and protocols, and an algorithm is developed for finding an optimal scheduler that attains the supremum reachability probability.
Abstract: We introduce the notion of quantum Markov decision process (qMDP) as a semantic model of nondeterministic and concurrent quantum programs. It is shown by examples that qMDPs can be used in the analysis of quantum algorithms and protocols. We study various reachability problems of qMDPs, both in the finite-horizon and in the infinite-horizon setting. The (un)decidability and complexity of these problems are settled, and the relationship between one of them and the joint spectral radius problem, a long-standing open problem in matrix analysis and control theory, is clarified. Some of these results show a certain separation between the MDP and qMDP models. We also develop an algorithm for finding an optimal scheduler for a large class of qMDPs. Finally, the results on reachability problems are applied in the analysis of the safety problem for qMDPs.

18 citations


Journal ArticleDOI
TL;DR: This paper presents the first linear-time and linear-space algorithm to compare two sequences by considering all their minimal absent words, where a word is an absent word of some sequence if it does not occur in the sequence.
Abstract: Sequence comparison is a prerequisite to virtually all comparative genomic analyses. It is often realised by sequence alignment techniques, which are computationally expensive. This has led to increased research into alignment-free techniques, which are based on measures referring to the composition of sequences in terms of their constituent patterns. These measures, such as q-gram distance, are usually computed in time linear with respect to the length of the sequences. In this paper, we focus on the complementary idea: how two sequences can be efficiently compared based on information that does not occur in the sequences. A word is an absent word of some sequence if it does not occur in the sequence. An absent word is minimal if all its proper factors occur in the sequence. Here we present the first linear-time and linear-space algorithm to compare two sequences by considering all their minimal absent words. In the process, we present results of combinatorial interest, and also extend the proposed techniques to compare circular sequences. We also present an algorithm that, given a word x of length n, computes the largest integer for which all factors of x of that length occur in some minimal absent word of x, in O(n) time and space. Finally, we show that the known asymptotic upper bound on the number of minimal absent words of a word is tight.
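As an illustration of the objects involved (not the paper's linear-time algorithm), a minimal absent word can be characterised as a word a·u·b that does not occur in the sequence while both a·u and u·b do. A quadratic brute-force sketch in Python, with hypothetical helper names:

```python
def factors(w):
    # all substrings of w, including the empty word
    return {w[i:j] for i in range(len(w) + 1) for j in range(i, len(w) + 1)}

def minimal_absent_words(w, alphabet=None):
    # x = a·u·b is a minimal absent word of w iff x is not a factor of w
    # but both of its maximal proper factors a·u and u·b are factors
    if alphabet is None:
        alphabet = sorted(set(w))
    F = factors(w)
    maws = set()
    for au in F:
        if not au:
            continue
        u = au[1:]
        for b in alphabet:
            if au + b not in F and u + b in F:
                maws.add(au + b)
    return sorted(maws)

print(minimal_absent_words("aab"))  # → ['aaa', 'ba', 'bb']
```

The efficient algorithm of the paper achieves the same output in O(n) time and space via suffix structures; the sketch above is only a specification-level oracle.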

18 citations


Journal ArticleDOI
Huacheng Yu1
TL;DR: A new combinatorial algorithm for triangle finding and Boolean matrix multiplication that runs in Ô(n^3 / log^4 n) time, where the Ô notation suppresses poly(log log) factors.
Abstract: We present a new combinatorial algorithm for triangle finding and Boolean matrix multiplication that runs in Ô(n^3 / log^4 n) time, where the Ô notation suppresses poly(log log) factors. This improves the previous best combinatorial algorithm by Chan that runs in Ô(n^3 / log^3 n) time. Our algorithm generalizes the divide-and-conquer strategy of Chan's algorithm. Moreover, we propose a general framework for detecting triangles in graphs and computing Boolean matrix multiplication. Roughly speaking, if we can find the “easy parts” of a given instance efficiently, we can solve the whole problem faster than n^3.
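For contrast with the log-shaving bounds above, the standard combinatorial baseline packs adjacency rows into machine words and tests, for each edge, whether its endpoints have a common neighbour. A sketch using Python integers as bitsets (a simple baseline, not the paper's algorithm; `has_triangle` is a hypothetical name):

```python
def has_triangle(n, edges):
    # adjacency rows as bitsets (arbitrary-precision ints); a triangle
    # exists iff some edge (u, v) has a common neighbour, i.e. the
    # bitwise AND of the two neighbourhood rows is non-zero
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    return any(adj[u] & adj[v] for u, v in edges)

print(has_triangle(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # → True
print(has_triangle(4, [(0, 1), (1, 2), (2, 3)]))          # → False
```

On a machine with w-bit words this row intersection costs O(n/w) per edge, giving the classical O(n^3 / w) bound that the log-shaving techniques improve on.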

17 citations


Journal ArticleDOI
TL;DR: This paper shows how to build in linear time an O(n)-space data structure that can answer in constant time whether any two vertices are 2-vertex-connected, and that can produce a “witness” when two query vertices v and w are not 2-vertex-connected.
Abstract: Given a directed graph, two vertices v and w are 2-vertex-connected if there are two internally vertex-disjoint paths from v to w and two internally vertex-disjoint paths from w to v. In this paper, we show how to compute this relation in O(m+n) time, where n is the number of vertices and m is the number of edges of the graph. As a side result, we show how to build in linear time an O(n)-space data structure, which can answer in constant time queries on whether any two vertices are 2-vertex-connected. Additionally, when two query vertices v and w are not 2-vertex-connected, our data structure can produce in constant time a “witness” of this property, by exhibiting a vertex or an edge that is contained in all paths from v to w or in all paths from w to v. We are also able to compute in linear time a sparse certificate for 2-vertex connectivity, i.e., a subgraph of the input graph that has O(n) edges and maintains the same 2-vertex connectivity properties as the input graph.
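The defining condition can be checked directly, if much more slowly than the paper's linear-time algorithm, via Menger's theorem: split every vertex into an in/out pair of capacity 1, and count internally vertex-disjoint paths as units of max flow. A brute-force oracle sketch (function names are hypothetical):

```python
from collections import deque

def max_vertex_disjoint_paths(n, edges, s, t):
    # Menger via unit-capacity max-flow with vertex splitting:
    # each vertex x becomes x_in -> x_out with capacity 1 (s and t get
    # capacity n), so paths counted by the flow are internally disjoint
    IN, OUT = 0, 1
    cap = {}
    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0) + c
        cap.setdefault((v, u), 0)
    for x in range(n):
        add((x, IN), (x, OUT), n if x in (s, t) else 1)
    for u, v in edges:
        add((u, OUT), (v, IN), 1)
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
    flow = 0
    while True:
        # BFS for an augmenting path (Edmonds–Karp), augmenting by 1
        parent = {(s, OUT): None}
        q = deque([(s, OUT)])
        while q and (t, IN) not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if (t, IN) not in parent:
            return flow
        v = (t, IN)
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

def two_vertex_connected(n, edges, v, w):
    # two internally vertex-disjoint paths in each direction
    return (max_vertex_disjoint_paths(n, edges, v, w) >= 2 and
            max_vertex_disjoint_paths(n, edges, w, v) >= 2)

# bidirected triangle: 0 and 2 are 2-vertex-connected
print(two_vertex_connected(3, [(0, 1), (1, 0), (1, 2), (2, 1), (2, 0), (0, 2)], 0, 2))  # → True
```

This costs a max-flow computation per query, which is exactly the overhead the paper's O(m+n) preprocessing and O(1)-time queries remove.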

15 citations


Journal ArticleDOI
TL;DR: This work studies the model checking (MC) problem for Halpern and Shoham's modal logic of time intervals (HS), interpreted on Kripke structures under the homogeneity assumption, and shows that MC for some HS fragments is in P^NP.
Abstract: Some temporal properties of reactive systems, such as actions with duration and temporal aggregations, which are inherently interval-based, cannot be properly expressed by the standard, point-based temporal logics LTL, CTL and CTL⁎, as they give a state-by-state account of system evolution. Conversely, interval temporal logics, which feature intervals instead of points as their primitive entities, naturally express them. We study the model checking (MC) problem for Halpern and Shoham's modal logic of time intervals (HS), interpreted on Kripke structures, under the homogeneity assumption. HS is the best-known interval-based temporal logic; it has one modality for each of the 13 ordering relations between pairs of intervals (Allen's relations), apart from equality. We focus on MC for some HS fragments featuring modalities for (a subset of) Allen's relations meet, met-by, started-by, and finished-by, showing that it is in P^NP. Additionally, we provide some complexity lower bounds for the problem.

Journal ArticleDOI
TL;DR: By taking advantage of the structure of the system, the compositional synthesis algorithm can significantly outperform the centralized alternative, in both time and memory, and can solve problems where the centralized algorithm is infeasible.
Abstract: We consider the controller synthesis problem for multi-agent systems that consist of a set of controlled and uncontrolled agents. Controlled agents may need to cooperate with each other and react to actions of uncontrolled agents in order to fulfill their objectives. Moreover, agents may be imperfect, i.e., only partially observe their environment. We propose a framework for controller synthesis based on compositional reactive synthesis. We implement the algorithms symbolically and apply them to a robot motion planning case study where multiple robots are placed on a grid-world with static obstacles and other dynamic, uncontrolled and potentially adversarial robots. We consider different objectives such as collision avoidance, keeping a formation and bounded reachability. We show that by taking advantage of the structure of the system, the compositional synthesis algorithm can significantly outperform the centralized alternative, in both time and memory, and can solve problems where the centralized algorithm is infeasible.

Journal ArticleDOI
TL;DR: This paper shows how the runtime complexity of imperative programs can be analysed fully automatically by a transformation to term rewrite systems, the complexity of which can then be automatically verified by existing complexity tools.
Abstract: In this paper we show how the runtime complexity of imperative programs can be analysed fully automatically by a transformation to term rewrite systems, the complexity of which can then be automatically verified by existing complexity tools. We restrict to well-formed Jinja bytecode programs that only make use of non-recursive methods. The analysis can handle programs with cyclic data only if the termination behaviour is independent thereof. We exploit a term-based abstraction of programs within the abstract interpretation framework. The proposed transformation encompasses two stages. For the first stage we perform a combined control and data flow analysis by evaluating program states symbolically, which is shown to yield a finite representation of all execution paths of the given program through a graph, dubbed computation graph. In the second stage we encode the (finite) computation graph as a term rewrite system. This is done while carefully analysing complexity preservation and reflection of the employed transformations such that the complexity of the obtained term rewrite system reflects on the complexity of the initial program. Finally, we show how the approach can be automated and provide ample experimental evidence of the advantages of the proposed analysis.

Journal ArticleDOI
TL;DR: Exact equivalence turns out to be a theoretical tool to prove the product-form of models by showing that they are exactly equivalent to models which are known to be quasi-reversible.
Abstract: In this paper we consider two relations over stochastic automata, named lumpable bisimulation and exact equivalence, that induce a strong and an exact lumping, respectively, on the underlying Markov chains. We show that an exact equivalence over the states of a non-synchronising automaton is indeed a lumpable bisimulation for the corresponding reversed automaton and then it induces a strong lumping on the time-reversed Markov chain underlying the model. This property allows us to prove that the class of quasi-reversible models is closed under exact equivalence. Quasi-reversibility is a pivotal property to study product-form models. Hence, exact equivalence turns out to be a theoretical tool to prove the product-form of models by showing that they are exactly equivalent to models which are known to be quasi-reversible. Algorithms for computing both lumpable bisimulation and exact equivalence are introduced. Case studies as well as performance tests are also presented.
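The strong (ordinary) lumping induced by such a bisimulation is easy to state operationally: a partition of the states is a strong lumping iff, within each block, every state sends the same total transition probability into every block. A small checker for discrete-time chains, assuming a row-stochastic matrix P as nested lists (`is_strong_lumping` is a hypothetical name, not from the paper's algorithms):

```python
def is_strong_lumping(P, partition):
    # partition: list of blocks, each a list of state indices;
    # strong lumping: rows inside a block agree on the total
    # probability they send into each block
    block_of = {s: b for b, states in enumerate(partition) for s in states}
    for states in partition:
        reference = None
        for s in states:
            row = [0.0] * len(partition)
            for t, p in enumerate(P[s]):
                row[block_of[t]] += p
            if reference is None:
                reference = row
            elif any(abs(a - r) > 1e-9 for a, r in zip(reference, row)):
                return False
    return True

P = [[0.0, 0.5, 0.5],
     [0.3, 0.35, 0.35],
     [0.3, 0.35, 0.35]]
print(is_strong_lumping(P, [[0], [1, 2]]))  # → True
print(is_strong_lumping(P, [[0, 1], [2]]))  # → False
```

Exact lumping is the dual condition on incoming rather than outgoing aggregated probabilities, which is why the paper's connection runs through the time-reversed chain.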

Journal ArticleDOI
TL;DR: This framework allows many known #P-hardness results for counting problems to be converted into results of the following type: if the given problem admits an algorithm with running time 2^o(n) on graphs with n vertices and O(n) edges, then #ETH fails.
Abstract: We devise a framework for proving tight lower bounds under the counting exponential-time hypothesis #ETH introduced by Dell et al. Our framework allows us to convert many known #P-hardness results for counting problems into results of the following type: if the given problem admits an algorithm with running time 2^o(n) on graphs with n vertices and O(n) edges, then #ETH fails. As exemplary applications of this framework, we obtain such tight lower bounds for the evaluation of the zero-one permanent, the matching polynomial, and the Tutte polynomial on all non-easy points except for two lines.

Journal ArticleDOI
TL;DR: Distributed vertex coloring is employed to design improved centralized local algorithms for: maximal independent set, maximal matching, and an approximation scheme for maximum (weighted) matching over bounded degree graphs.
Abstract: We consider two models of computation: centralized local algorithms and local distributed algorithms. Algorithms in one model are adapted to the other model to obtain improved algorithms. Distributed vertex coloring is employed to design improved centralized local algorithms for: maximal independent set, maximal matching, and an approximation scheme for maximum (weighted) matching over bounded degree graphs. The improvement is threefold: the algorithms are deterministic, stateless, and the number of probes grows polynomially in log* n, where n is the number of vertices of the input graph. The recursive centralized local improvement technique by Nguyen and Onak (FOCS 2008) is employed to obtain a distributed approximation scheme for maximum (weighted) matching.
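For orientation, the object a centralized local algorithm exposes probe-by-probe is the outcome of a fixed sequential greedy process; a local query "is v in the maximal independent set?" is answered by recursively probing earlier neighbours instead of running the whole loop. A global (non-local) sketch of that greedy baseline, with a hypothetical function name:

```python
def greedy_mis(adj, order=None):
    # sequential greedy maximal independent set: scan vertices in a
    # fixed order; take a vertex iff no earlier neighbour was taken
    n = len(adj)
    if order is None:
        order = range(n)
    in_mis = [False] * n
    blocked = [False] * n
    for v in order:
        if not blocked[v]:
            in_mis[v] = True
            for u in adj[v]:
                blocked[u] = True   # neighbours of a taken vertex are excluded
    return [v for v in range(n) if in_mis[v]]

# path 0 - 1 - 2: vertex 0 is taken, blocks 1, then 2 is taken
print(greedy_mis([[1], [0, 2], [1]]))  # → [0, 2]
```

The output is independent (taking v blocks all its neighbours) and maximal (any untaken vertex was blocked by a taken neighbour); the paper's contribution is answering single-vertex queries about such a process with only poly(log* n) probes.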

Journal ArticleDOI
TL;DR: In this paper, the authors study the proof-theoretic strength of CP+∀red for quantified Boolean formulas (QBF), obtained by augmenting propositional Cutting Planes with a universal reduction rule.
Abstract: We study the cutting planes system CP+∀red for quantified Boolean formulas (QBF), obtained by augmenting propositional Cutting Planes with a universal reduction rule, and analyse the proof-theoretic strength of this new calculus. While in the propositional case Cutting Planes is of intermediate strength between resolution and Frege, our findings here show that the situation in QBF is slightly more complex: while CP+∀red is again weaker than QBF Frege and stronger than the CDCL-based QBF resolution systems Q-Res and QU-Res, it turns out to be incomparable to even the weakest expansion-based QBF resolution system ∀Exp+Res. A similar picture holds for a semantic version semCP+∀red. Technically, our results establish the effectiveness of two lower bound techniques for CP+∀red: via strategy extraction and via monotone feasible interpolation.

Journal ArticleDOI
TL;DR: This paper uses abstraction of Discrete Time Markov Chains in order to speed up the process of model repair for temporal logic reachability properties, and presents a framework based on abstraction and refinement, which reduces the state space of the probabilistic system to repair at the price of obtaining an approximate solution.
Abstract: Given a Discrete Time Markov Chain M and a probabilistic temporal logic formula φ, where M violates φ, the problem of Model Repair is to obtain a new model M′ such that M′ satisfies φ. Additionally, the changes made to M in order to obtain M′ should be minimal among all such M′. The state explosion problem makes the repair of large probabilistic systems almost infeasible. In this paper, we use abstraction of Discrete Time Markov Chains in order to speed up the process of model repair for temporal logic reachability properties. We present a framework based on abstraction and refinement, which reduces the state space of the probabilistic system to repair, at the price of obtaining an approximate solution. A metric space is defined over the set of DTMCs in order to measure the differences between the initial and the repaired models. For the repair, we introduce an algorithm and discuss its important properties, such as soundness and complexity. As a proof of concept, we provide experimental results for probabilistic systems with diverse structures of state spaces, including the well-known Craps game, the IPv4 Zeroconf protocol, a message authentication protocol and the gambler's ruin model.

Journal ArticleDOI
TL;DR: This paper develops an effect-algebraic approach to the study of non-locality and contextuality, and links it to earlier sheaf-theoretic approaches by defining a fully faithful embedding of the category of effect algebras into a presheaf category over the natural numbers.
Abstract: Non-locality and contextuality are among the most counterintuitive aspects of quantum theory. They are difficult to study using classical logic and probability theory. In this paper we take an effect-algebraic approach to the study of non-locality and contextuality. We show how different slices over the category of set-valued functors on the natural numbers induce different settings in which non-locality and contextuality can be studied. This includes the Bell, Hardy and Kochen–Specker-type paradoxes. We link this to earlier sheaf-theoretic approaches by defining a fully faithful embedding of the category of effect algebras in this presheaf category over the natural numbers.

Journal ArticleDOI
TL;DR: For every setting of the parameters of the model, it is proved that computing the partition function is either solvable in polynomial time or #P-hard.
Abstract: We prove a complexity dichotomy theorem for the six-vertex model. For every setting of the parameters of the model, we prove that computing the partition function is either solvable in polynomial time or #P-hard. The dichotomy criterion is explicit.

Journal ArticleDOI
TL;DR: This work describes an algorithm that sends L + O(√(L(T+1)) log L + T) bits in expectation and succeeds with high probability in L, without any a priori knowledge of T.
Abstract: Alice and Bob want to run a protocol over a noisy channel, where some bits are flipped adversarially. Several results show how to make an L-bit noise-free communication protocol robust over such a channel. In a recent breakthrough, Haeupler described an algorithm sending a number of bits that is conjecturally near optimal for this model. However, his algorithm critically requires prior knowledge of the number of bits that will be flipped by the adversary. We describe an algorithm requiring no such knowledge, under the additional assumption that the channel connecting Alice and Bob is private. If an adversary flips T bits, our algorithm sends L + O(√(L(T+1)) log L + T) bits in expectation and succeeds with high probability in L. It does so without any a priori knowledge of T. Assuming a lower bound conjectured by Haeupler, our result is optimal up to logarithmic factors.

Journal ArticleDOI
TL;DR: A dynamic version of the primal-dual method for optimization problems is developed and applied to the dynamic set-cover and dynamic b-matching problems, e.g., maintaining an O(1)-approximately optimal b-matching in O(log^3 n) amortized update time, where n is the number of nodes in the graph.
Abstract: We develop a dynamic version of the primal-dual method for optimization problems, and apply it to obtain the following results. (1) For the dynamic set-cover problem, we maintain an O(f^2)-approximately optimal solution in O(f · log(m+n)) amortized update time, where f is the maximum “frequency” of an element, n is the number of sets, and m is the maximum number of elements in the universe at any point in time. (2) For the dynamic b-matching problem, we maintain an O(1)-approximately optimal solution in O(log^3 n) amortized update time, where n is the number of nodes in the graph.
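The static primal-dual scheme that the dynamic method builds on is worth recalling: raise the dual variable of each uncovered element until some set containing it becomes tight, then buy every tight set; any element charges only sets it belongs to, giving an f-approximation. A sketch of the classical offline method (not the paper's dynamic version; names are hypothetical):

```python
def primal_dual_set_cover(universe, sets, cost):
    # classical offline primal-dual f-approximation for set cover:
    # raise the dual y_e of an uncovered element e until a set
    # containing it goes tight (slack 0); buy all tight sets
    slack = list(cost)                # remaining slack c(S) - sum of duals
    chosen, covered = [], set()
    for e in universe:
        if e in covered:
            continue
        containing = [i for i, S in enumerate(sets) if e in S]
        raise_by = min(slack[i] for i in containing)
        for i in containing:
            slack[i] -= raise_by
            if slack[i] == 0 and i not in chosen:
                chosen.append(i)      # tight set is bought, covering e
                covered |= sets[i]
    return chosen

sets = [{1, 2}, {2, 3}, {3, 4}]
print(primal_dual_set_cover([1, 2, 3, 4], sets, [1.0, 1.0, 1.0]))  # → [0, 1, 2]
```

The dynamic version of the paper maintains (approximate versions of) these duals under insertions and deletions instead of recomputing them from scratch.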

Journal ArticleDOI
TL;DR: In this article, the authors present a uniform framework for reversible π-calculi that is parametric with respect to a data structure that stores information about the extrusion of a name.
Abstract: This paper presents a study of causality in a reversible, concurrent setting. There exist various notions of causality in π-calculus, which differ in the treatment of parallel extrusions of the same name. Hence, by using a parametric way of bookkeeping the order and the dependencies among extruders it is possible to map different causal semantics into the same framework. Starting from this simple observation, we present a uniform framework for reversible π-calculi that is parametric with respect to a data structure that stores information about the extrusion of a name. Different data structures yield different approaches to the parallel extrusion problem. We map three well-known causal semantics into our framework. We prove causal-consistency for the three instances of our framework. Furthermore, we prove a causal correspondence between the appropriate instances of the framework and the Boreale-Sangiorgi semantics and an operational correspondence with the reversible π-calculus causal semantics.

Journal ArticleDOI
TL;DR: What becomes of three classical themes: the Universal Machine, Church's Thesis and the Turing Test are discussed, providing ingredients for a fundamental theory of computation that shifts from what is computed to how it is computed and by whom, moving from output to social behavior.
Abstract: Computation today is interactive agency in social networks. In this discussion paper, we look at this trend through the lens of logic, identifying two main lines. One is ‘epistemization’, making computational tasks refer explicitly to knowledge or beliefs of the agents performing them. The other line is using games as a model for computation, leading to ‘gamification’ of classical tasks, and computing by agents that may have preferences. This provides ingredients for a fundamental theory of computation that shifts from what is computed to how it is computed and by whom, moving from output to social behavior. The true impact of this shift is not in learning how to replace humans, but in creating new societies where humans and machines interact. While we do not offer a Turing-style account of this richer world, we discuss what becomes of three classical themes: the Universal Machine, Church's Thesis and the Turing Test.

Journal ArticleDOI
TL;DR: The main result is that, if the exponential-time hypothesis (ETH) is true, then solving (1/8 − O(δ))-NE O(δ)-SW for an n × n bimatrix game requires n^Ω̃(log n) time.
Abstract: We study the problem of finding approximate Nash equilibria that satisfy certain conditions, such as providing good social welfare. In particular, we study the problem ϵ-NE δ-SW: find an ϵ-approximate Nash equilibrium (ϵ-NE) that is within δ of the best social welfare achievable by an ϵ-NE. Our main result is that, if the exponential-time hypothesis (ETH) is true, then solving (1/8 − O(δ))-NE O(δ)-SW for an n × n bimatrix game requires n^Ω̃(log n) time. Building on this result, we show similar conditional running time lower bounds for a number of other decision problems for ϵ-NE, where, for example, the payoffs or supports of players are constrained. We show quasi-polynomial lower bounds for these problems assuming ETH, where these lower bounds apply to ϵ-Nash equilibria for all ϵ < 1/8. The hardness of these other decision problems has so far only been studied in the context of exact equilibria.

Journal ArticleDOI
TL;DR: It is shown that under some general conditions the finite memory determinacy of a class of two-player win/lose games played on finite graphs implies the existence of a Nash equilibrium built from finite memory strategies for the corresponding class of multi-player multi-outcome games.
Abstract: We show that under some general conditions the finite memory determinacy of a class of two-player win/lose games played on finite graphs implies the existence of a Nash equilibrium built from finite memory strategies for the corresponding class of multi-player multi-outcome games. This generalizes a previous result by Brihaye, De Pril and Schewe. We provide a number of examples that separate the various criteria we explore. Our proofs are generally constructive, that is, they provide upper bounds for the memory required, as well as algorithms to compute the relevant Nash equilibria.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of verifying formal properties of the underlying mathematical representation of these models, which is a Continuous Time Markov Chain, often with a huge state space.
Abstract: Many complex systems can be described by population models, in which a pool of agents interacts and produces complex collective behaviours. We consider the problem of verifying formal properties of the underlying mathematical representation of these models, which is a Continuous Time Markov Chain, often with a huge state space. To circumvent the state space explosion, we rely on stochastic approximation techniques, which replace the large model by a simpler one, guaranteed to be probabilistically consistent. We show how to efficiently and accurately verify properties of random individual agents, specified by Continuous Stochastic Logic extended with Timed Automata (CSL-TA), and how to lift these specifications to the collective level, approximating the number of agents satisfying them using second or higher order stochastic approximation techniques.

Journal ArticleDOI
TL;DR: In this article, the authors develop tools to handle infinitely branching WSTS by exploiting the crucial property that in the (ideal) completion of a well-quasi-ordered set, downward-closed sets are finite unions of ideals.
Abstract: Most decidability results concerning well-structured transition systems apply to the finitely branching variant. Yet some models (inserting automata, ω-Petri nets, …) are naturally infinitely branching. Here we develop tools to handle infinitely branching WSTS by exploiting the crucial property that in the (ideal) completion of a well-quasi-ordered set, downward-closed sets are finite unions of ideals. Then, using these tools, we derive decidability results and we delineate the undecidability frontier in the case of the termination, the maintainability and the coverability problems. Coverability and boundedness under new effectiveness conditions are shown decidable.

Journal ArticleDOI
TL;DR: It is shown that the landscape of decidable properties changes drastically when origin information is added, and equivalence of nondeterministic top-down and MSO transducers with origin becomes decidable.
Abstract: A tree transducer with origin translates an input tree into a pair of output tree and origin information. The origin information maps each node in the output tree to the unique node in the input tree that created it. In this way, the implementation of the transducer becomes part of its semantics. We show that the landscape of decidable properties changes drastically when origin information is added. For instance, equivalence of nondeterministic top-down and MSO transducers with origin becomes decidable. Both problems are undecidable without origin. The equivalence of deterministic top-down tree-to-string transducers is decidable with origin, while without origin it has (until very recently) been a long standing open problem. With origin, we can decide if a deterministic macro tree transducer can be realized by a deterministic top-down tree transducer; without origin this is an open problem.

Journal ArticleDOI
TL;DR: A new instantiation of priority promotion, called delayed promotion, is proposed that aims to avoid the possible exponential behaviors exhibited by the original method in the worst case, and often outperforms both state-of-the-art solvers and the original priority promotion approach.
Abstract: Parity games are two-player infinite-duration games on graphs that play a crucial role in various fields of theoretical computer science. Finding efficient algorithms to solve these games in practice is widely acknowledged as a core problem in formal verification, as it leads to efficient solutions of the model-checking and satisfiability problems of expressive temporal logics, e.g., the modal μ-calculus. Their solution can be reduced to the problem of identifying sets of positions of the game, called dominions, in each of which a player can force a win by remaining in the set forever. Recently, a novel technique to compute dominions, called priority promotion, has been proposed, which is based on the notions of quasi dominion, a relaxed form of dominion, and dominion space. The underlying framework is general enough to accommodate different instantiations of the solution procedure, whose correctness is ensured by the nature of the space itself. In this paper we propose a new such instantiation, called delayed promotion, that tries to reduce the possible exponential behaviors exhibited by the original method in the worst case. The resulting procedure often outperforms both the state-of-the-art solvers and the original priority promotion approach.