
Showing papers in "Information & Computation in 2010"


Journal ArticleDOI
TL;DR: An overview of several upper bound heuristics that have been proposed and tested for the problem of determining the treewidth of a graph and finding tree decompositions, showing that in many cases the heuristics give tree decompositions whose width is close to the exact treewidth of the input graphs.
Abstract: For more and more applications, it is important to be able to compute the treewidth of a given graph and to find tree decompositions of small width reasonably fast. This paper gives an overview of several upper bound heuristics that have been proposed and tested for the problem of determining the treewidth of a graph and finding tree decompositions. Each of the heuristics produces tree decompositions whose width may be larger than the optimal width. However, experiments show that in many cases, the heuristics give tree decompositions whose width is close to the exact treewidth of the input graphs.
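As an illustration of how such heuristics work, here is a minimal sketch of the min-degree elimination heuristic, one classic way to obtain a treewidth upper bound. The function name and graph encoding are assumptions of this sketch; the heuristics surveyed in the paper may differ in detail.

```python
# Min-degree heuristic: repeatedly eliminate a minimum-degree vertex,
# turning its neighborhood into a clique; the largest neighborhood seen
# during elimination is an upper bound on the treewidth.

def treewidth_upper_bound(adj):
    """adj: dict mapping vertex -> set of neighbors (undirected, no loops)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # pick a min-degree vertex
        nbrs = adj[v]
        width = max(width, len(nbrs))
        for a in nbrs:                           # make the neighborhood a clique
            for b in nbrs:
                if a != b:
                    adj[a].add(b)
        for a in nbrs:                           # eliminate v from the graph
            adj[a].discard(v)
        del adj[v]
    return width

# A cycle on 4 vertices has treewidth 2; the heuristic matches it here.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
print(treewidth_upper_bound(c4))  # 2
```

On graphs where greedy elimination orders are far from optimal, the returned width can exceed the true treewidth, which is exactly the gap the experiments in the paper measure.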

206 citations


Journal ArticleDOI
TL;DR: This work introduces strategy logic, a logic that treats strategies in two-player games as explicit first-order objects, and shows that strategy logic is decidable, by constructing tree automata that recognize sets of strategies.
Abstract: We introduce strategy logic, a logic that treats strategies in two-player games as explicit first-order objects. The explicit treatment of strategies allows us to specify properties of nonzero-sum games in a simple and natural way. We show that the one-alternation fragment of strategy logic is strong enough to express the existence of Nash equilibria and secure equilibria, and subsumes other logics that were introduced to reason about games, such as ATL, ATL^*, and game logic. We show that strategy logic is decidable, by constructing tree automata that recognize sets of strategies. While for the general logic, our decision procedure is nonelementary, for the simple fragment that is used above we show that the complexity is polynomial in the size of the game graph and optimal in the size of the formula (ranging from polynomial to 2EXPTIME depending on the form of the formula).

192 citations


Journal ArticleDOI
TL;DR: A unified approach to evaluate the relative expressive power of process calculi is presented and a small set of criteria that an encoding should satisfy to be considered a valid means for language comparison are identified.
Abstract: We present a unified approach to evaluate the relative expressive power of process calculi. In particular, we identify a small set of criteria (that have already been somehow presented in the literature) that an encoding should satisfy to be considered a valid means for language comparison. We argue that the combination of such criteria is a valid proposal by noting that: (i) several well-known encodings appeared in the literature satisfy them; (ii) this notion is not trivial, because some known encodings do not satisfy all the criteria we have proposed; (iii) several well-known separation results can be formulated in terms of our criteria; and (iv) some widely believed (but never formally proved) separation results can be proved by using the criteria we propose. Moreover, the criteria defined induce general proof-techniques for separation results that can be easily instantiated to cover known case-studies.

183 citations


Journal ArticleDOI
TL;DR: The 20-year-old conjecture that TPTL is strictly more expressive than MTL is proved, and it is shown that TPTL formulae using only the modality F can be translated into MTL.
Abstract: TPTL and MTL are two classical timed extensions of LTL. In this paper, we prove the 20-year-old conjecture that TPTL is strictly more expressive than MTL. We also show that, surprisingly, the TPTL formula proposed by Alur and Henzinger as a witness for this conjecture can be expressed in MTL. More generally, we show that TPTL formulae using only the modality F can be translated into MTL.

70 citations


Journal ArticleDOI
TL;DR: It is shown that the two NP-complete problems of Dodgson Score and Young Score have differing computational complexities when the winner is close to being a Condorcet winner and it is proved that the corresponding problem for Young elections is W[2]-complete.
Abstract: We show that the two NP-complete problems of Dodgson Score and Young Score have differing computational complexities when the winner is close to being a Condorcet winner. On the one hand, we present an efficient fixed-parameter algorithm for determining a Condorcet winner in Dodgson elections by a minimum number of switches in the votes. On the other hand, we prove that the corresponding problem for Young elections, where one has to delete votes instead of performing switches, is W[2]-complete. In addition, we study Dodgson elections that allow ties between the candidates and give fixed-parameter tractability as well as W[2]-completeness results depending on the cost model for switching ties.
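To make the notion concrete, here is a small helper that checks for a Condorcet winner, i.e. a candidate who beats every other candidate in a strict pairwise majority. The vote encoding is an assumption of this sketch, which illustrates the concept rather than the paper's fixed-parameter algorithms.

```python
# A Condorcet winner beats every other candidate in pairwise majority
# comparisons; Dodgson and Young scores measure how far an election is
# from having one.

def condorcet_winner(votes, candidates):
    """votes: list of rankings (best candidate first); returns winner or None."""
    def beats(a, b):
        # a beats b if a strict majority of voters rank a above b
        a_wins = sum(1 for v in votes if v.index(a) < v.index(b))
        return a_wins > len(votes) / 2
    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None

votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(condorcet_winner(votes, ["a", "b", "c"]))  # a beats b 2-1 and c 3-0
```

In a Condorcet cycle (a beats b, b beats c, c beats a) the function returns None, and it is precisely such profiles that make the score problems above hard.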

65 citations


Journal ArticleDOI
TL;DR: A new lower bound is given on the covering radius of the first order Reed-Muller code RM(1,n), where n ∈ {9,11,13}.
Abstract: We give a new lower bound on the covering radius of the first order Reed-Muller code RM(1,n), where n ∈ {9,11,13}. Equivalently, we present the n-variable Boolean functions for n ∈ {9,11,13} with the maximum nonlinearity found to date. In 2006, 9-variable Boolean functions having nonlinearity 241, which is strictly greater than the bent concatenation bound of 240, were discovered in the class of Rotation Symmetric Boolean Functions (RSBFs) by Kavut, Maitra and Yucel. To improve this nonlinearity result, we have firstly defined some subsets of the n-variable Boolean functions as the generalized classes of "k-RSBFs and k-DSBFs (k-Dihedral Symmetric Boolean Functions)", where k is a positive integer dividing n. Secondly, utilizing a steepest-descent-like iterative heuristic search algorithm, we have found 9-variable Boolean functions with nonlinearity 242 within the classes of both 3-RSBFs and 3-DSBFs. Thirdly, motivated by the fact that RSBFs are invariant under a special permutation of the input vector, we have classified all possible permutations up to the linear equivalence of Boolean functions that are invariant under those permutations.

57 citations


Journal ArticleDOI
TL;DR: It is proved that the complexity of the orbits of Martin-Lof random points in dynamical systems over metric spaces equals the Kolmogorov-Sinai entropy of the system, and that the supremum of the complexity over all orbits equals the topological entropy.
Abstract: We consider the dynamical behavior of Martin-Lof random points in dynamical systems over metric spaces with a computable dynamics and a computable invariant measure. We use computable partitions to define a sort of effective symbolic model for the dynamics. Through this construction, we prove that such points have typical statistical behavior (the behavior which is typical in the Birkhoff ergodic theorem) and are recurrent. We introduce and compare some notions of complexity for orbits in dynamical systems and prove: (i) that the complexity of the orbits of random points equals the Kolmogorov-Sinai entropy of the system, (ii) that the supremum of the complexity of orbits equals the topological entropy.

57 citations


Journal Article
TL;DR: In this article, a control synthesis problem for a generator with a global specification and with a combination of a coordinator and local controllers is formulated and solved, and conditions under which the result coincides with the supremal controllable sublanguage are stated.
Abstract: Supervisory control of distributed DES with a global specification and local supervisors is a difficult problem. For global specifications, the equivalent conditions for local control synthesis to equal global control synthesis may not be met. This paper formulates and solves a control synthesis problem for a generator with a global specification and with a combination of a coordinator and local controllers. Conditional controllability is proven to be an equivalent condition for the existence of such a coordinated controller. A procedure to compute the least restrictive solution within our coordination control architecture is provided and conditions under which the result coincides with the supremal controllable sublanguage are stated.

50 citations


Journal ArticleDOI
TL;DR: It is proved that the translation from processes to nets is a bisimulation between these two transition systems, which shows that differential interaction nets are sufficiently expressive for representing concurrency and mobility, as formalized by the pi-calculus.
Abstract: We propose and study a translation of a pi-calculus without sums nor recursion into an untyped version of differential interaction nets. We define a transition system of labeled processes and a transition system of labeled differential interaction nets. We prove that our translation from processes to nets is a bisimulation between these two transition systems. This shows that differential interaction nets are sufficiently expressive for representing concurrency and mobility, as formalized by the pi-calculus. Our study will concern essentially a replication-free fragment of the pi-calculus, but we shall also give indications on how to deal with a restricted form of replication.

47 citations


Journal ArticleDOI
TL;DR: It is shown that the maximal length k of an increasing subsequence of a permutation of the set of integers {1, 2, ..., n} can be computed in time O(n log log k) in the RAM model, improving the previous 30-year bound of O(n log k).
Abstract: We consider the complexity of computing a longest increasing subsequence (LIS) parameterised by the length of the output. Namely, we show that the maximal length k of an increasing subsequence of a permutation of the set of integers {1, 2, ..., n} can be computed in time O(n log log k) in the RAM model, improving the previous 30-year bound of O(n log k). The algorithm also improves on the previous O(n log log n) bound. The optimality of the new bound is an open question. Reducing the computation of a longest common subsequence (LCS) between two strings to an LIS computation leads to a simple O(r log log k)-time algorithm for two sequences having r pairs of matching symbols and an LCS of length k.
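For context, the classical O(n log k) patience-sorting computation that the new bound improves on can be sketched as follows; the paper's speed-up comes from replacing the binary search with a faster RAM-model predecessor structure, which is not shown here.

```python
# Patience-sorting LIS in O(n log k): tails[i] holds the smallest possible
# tail value of an increasing subsequence of length i+1, so tails is always
# sorted and binary search applies.
from bisect import bisect_left

def lis_length(seq):
    tails = []
    for x in seq:
        i = bisect_left(tails, x)  # leftmost position with tails[i] >= x
        if i == len(tails):
            tails.append(x)        # x extends the longest subsequence so far
        else:
            tails[i] = x           # x improves the tail of a length-(i+1) one
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))  # longest is e.g. 1,4,5,9 -> 4
```

Only the length is maintained here; recovering an actual subsequence requires storing predecessor links alongside each element.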

44 citations


Journal ArticleDOI
TL;DR: This work presents a symbolic abstraction-refinement approach to the solution of two-player games with reachability or safety goals, and discusses the property required of the abstraction scheme in order to achieve convergence and termination of the technique.
Abstract: Games that model realistic systems can have very large state-spaces, making their direct solution difficult. We present a symbolic abstraction-refinement approach to the solution of two-player games with reachability or safety goals. Given a reachability or safety property, an initial set of states, and a game representation, our approach starts by constructing a simple abstraction of the game, guided by the predicates present in the property and in the initial set. The abstraction is then refined, until it is possible to either prove, or disprove, the property over the initial states. Specifically, we evaluate the property on the abstract game in three-valued fashion, computing an over-approximation (the may states), and an under-approximation (the must states), of the states that satisfy the property. If this computation fails to yield a certain yes/no answer to the validity of the property on the initial states, our algorithm refines the abstraction by splitting uncertain abstract states (states that are may-states, but not must-states). The approach lends itself to an efficient symbolic implementation. We discuss the property required of the abstraction scheme in order to achieve convergence and termination of our technique.

Journal ArticleDOI
TL;DR: In this article, the relations among security notions for key encapsulation and data encapsulation for hybrid public key encryption (PKE) schemes were studied. And the authors showed that combinations of these security notions lead to a secure hybrid PKE scheme (by proving a composition theorem).
Abstract: In hybrid public key encryption (PKE), first a key encapsulation mechanism (KEM) is used to fix a random session key that is then fed into a highly efficient data encapsulation mechanism (DEM) to encrypt the actual message. A well-known composition theorem states that if both the KEM and the DEM have a high enough level of security (i.e., security against chosen-ciphertext attacks), then so does the hybrid PKE scheme. It is not known if these strong security requirements on the KEM and DEM are also necessary, nor if such general composition theorems exist for weaker levels of security. Using six different security notions for KEMs, ten for DEMs, and six for PKE schemes, we completely characterize in this work which combinations lead to a secure hybrid PKE scheme (by proving a composition theorem) and which do not (by providing counterexamples). Furthermore, as an independent result, we revisit and extend prior work on the relations among security notions for KEMs and DEMs.
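The KEM/DEM structure itself is easy to sketch. The toy composition below only illustrates the data flow (the KEM fixes a random session key, the DEM encrypts under it); the hash-based XOR stream is a deliberately insecure stand-in, and real schemes need the CCA-secure components the composition theorem speaks about.

```python
# Toy KEM/DEM composition -- structure only, NOT a secure scheme.
import secrets, hashlib

def kem_encapsulate():
    # a real KEM would also output an encapsulation of the session key
    # under the recipient's public key; we return only the key here
    return secrets.token_bytes(32)

def dem_encrypt(key, message):
    # derive a keystream by hash chaining -- an insecure toy DEM
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(message):
        stream += hashlib.sha256(stream).digest()
    return bytes(m ^ s for m, s in zip(message, stream))

def dem_decrypt(key, ciphertext):
    return dem_encrypt(key, ciphertext)  # the XOR stream is its own inverse

key = kem_encapsulate()
ct = dem_encrypt(key, b"hybrid PKE")
print(dem_decrypt(key, ct) == b"hybrid PKE")  # True
```

The paper's question is precisely which security levels for `kem_encapsulate` and the DEM pair are necessary and sufficient for the combined scheme to be secure.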

Journal ArticleDOI
TL;DR: This paper shows that the "texts" or "positive presentations" of concepts in inductive inference can be viewed as special cases of the "admissible representations" of computable analysis.
Abstract: Based on the observation that the category of concept spaces with the positive information topology is equivalent to the category of countably based T₀ topological spaces, we investigate further connections between the learning in the limit model of inductive inference and topology. In particular, we show that the "texts" or "positive presentations" of concepts in inductive inference can be viewed as special cases of the "admissible representations" of computable analysis. We also show that several structural properties of concept spaces have well known topological equivalents. In addition to topological methods, we use algebraic closure operators to analyze the structure of concept spaces, and we show the connection between these two approaches. The goal of this paper is not only to introduce new perspectives to learning theorists, but also to present the field of inductive inference in a way more accessible to domain theorists and topologists.

Journal ArticleDOI
TL;DR: Two formalisms for representing regular languages are considered: constant height pushdown automata and straight line programs for regular expressions, and it is constructively proved that their sizes are polynomially related.
Abstract: We consider two formalisms for representing regular languages: constant height pushdown automata and straight line programs for regular expressions. We constructively prove that their sizes are polynomially related. Comparing them with the sizes of finite state automata and regular expressions, we obtain optimal exponential and double exponential gaps, i.e., a more concise representation of regular languages.

Journal ArticleDOI
TL;DR: A language of accessible functors to specify history-dependent automata in a modular way is introduced, leading to a clean formulation and a generalisation of previous results, and to the proof of existence of a final coalgebra in a wide range of cases.
Abstract: The semantics of name-passing calculi is often defined employing coalgebraic models over presheaf categories. This elegant theory lacks finiteness properties, hence it is not apt to implementation. Coalgebras over named sets, called history-dependent automata, are better suited for the purpose due to locality of names. A theory of behavioural functors for named sets is still lacking: the semantics of each language has been given in an ad-hoc way, and algorithms were implemented only for the π-calculus. Existence of the final coalgebra for the π-calculus was never proved. We introduce a language of accessible functors to specify history-dependent automata in a modular way, leading to a clean formulation and a generalisation of previous results, and to the proof of existence of a final coalgebra in a wide range of cases.

Journal ArticleDOI
TL;DR: This work presents a complete picture of the computational complexity of checking strong and weak semantic preorders/equivalences between pushdown processes and finite-state processes and study fixed-parameter tractability in two important input parameters.
Abstract: Simulation preorder/equivalence and bisimulation equivalence are the most commonly used equivalences in concurrency theory. Their standard definitions are often called strong simulation/bisimulation, while weak simulation/bisimulation abstracts from internal τ-actions. We study the computational complexity of checking these strong and weak semantic preorders/equivalences between pushdown processes and finite-state processes. We present a complete picture of the computational complexity of these problems and also study fixed-parameter tractability in two important input parameters: x, the size of the finite control of the pushdown process, and y, the size of the finite-state process. All simulation problems are generally EXPTIME-complete and only become polynomial if both parameters x and y are fixed. Weak bisimulation equivalence is PSPACE-complete, but becomes polynomial if and only if parameter x is fixed. Strong bisimulation equivalence is PSPACE-complete, but becomes polynomial if either parameter x or y is fixed.

Journal ArticleDOI
TL;DR: The algorithm proposed by Goldreich and Ron [9] (ECCC-2000) for testing the expansion of a graph distinguishes with high probability between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²).
Abstract: We study the problem of testing the expansion of graphs with bounded degree d in sublinear time. A graph is said to be an α-expander if every vertex set U ⊆ V of size at most (1/2)|V| has a neighborhood of size at least α|U|. We show that the algorithm proposed by Goldreich and Ron [9] (ECCC-2000) for testing the expansion of a graph distinguishes with high probability between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²). This improves a recent result of Czumaj and Sohler [3] (FOCS-07) who showed that this algorithm can distinguish between α-expanders of degree bound d and graphs which are ε-far from having expansion at least Ω(α²/log n). It also improves a recent result of Kale and Seshadhri [12] (ECCC-2007) who showed that this algorithm can distinguish between α-expanders and graphs which are ε-far from having expansion at least Ω(α²) with twice the maximum degree. Our methods combine the techniques of [3], [9] and [12].
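The tester under discussion counts endpoint collisions of short lazy random walks: slow mixing (poor expansion) shows up as excess collisions. The sketch below illustrates that statistic only; the walk length, sample count, and graphs are illustrative assumptions, not the calibrated parameters from the paper.

```python
# Collision statistic behind random-walk expansion testing: run many short
# lazy walks from one vertex and count how often two walks end at the same
# place. Poor expanders concentrate endpoints and so produce more collisions.
import random
from collections import Counter

def endpoint_collisions(adj, start, d, walks=200, length=16, seed=0):
    """Count pairwise collisions among endpoints of lazy random walks."""
    rng = random.Random(seed)
    ends = []
    for _ in range(walks):
        v = start
        for _ in range(length):
            nbrs = adj[v]
            i = rng.randrange(2 * d)             # lazy step on a degree-<=d graph:
            v = nbrs[i] if i < len(nbrs) else v  # stay put with the leftover mass
        ends.append(v)
    return sum(c * (c - 1) // 2 for c in Counter(ends).values())

n, d = 16, 3
# a chorded cycle (mixes quickly) vs two disconnected 8-cycles (no expansion):
good = {v: [(v + 1) % n, (v - 1) % n, (v + n // 2) % n] for v in range(n)}
bad = {v: [(v // 8) * 8 + ((v % 8) + 1) % 8,
           (v // 8) * 8 + ((v % 8) - 1) % 8] for v in range(n)}
# walks in `bad` never leave the 8-vertex start component, so its endpoints
# collide far more often than on the fast-mixing graph
print(endpoint_collisions(good, 0, d), endpoint_collisions(bad, 0, d))
```

A real tester compares such a count against a threshold derived from the degree bound and ε; the results above tighten how small an expansion gap that comparison can still detect.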

Journal ArticleDOI
TL;DR: The inclusion problem for pattern languages is studied; it is demonstrated that there is no effective procedure deciding inclusion for the class of all pattern languages over all alphabets, and the prevalent conjecture on the inclusion of so-called similar E-pattern languages is disproved.
Abstract: We study the inclusion problem for pattern languages, which is known to be undecidable due to Jiang et al. [T. Jiang, A. Salomaa, K. Salomaa, S. Yu, Decision problems for patterns, Journal of Computer and System Sciences 50 (1995) 53-63]. More precisely, Jiang et al. demonstrate that there is no effective procedure deciding the inclusion for the class of all pattern languages over all alphabets. Most applications of pattern languages, however, consider classes over fixed alphabets, and therefore it is practically more relevant to ask for the existence of alphabet-specific decision procedures. Our first main result states that, for all but very particular cases, this version of the inclusion problem is also undecidable. The second main part of our paper disproves the prevalent conjecture on the inclusion of so-called similar E-pattern languages, and it explains the devastating consequences of this result for the intensive previous research on the most prominent open decision problem for pattern languages, namely the equivalence problem for general E-pattern languages.

Journal ArticleDOI
TL;DR: This paper builds on the antichain approach to develop an algorithm for constructing the winning strategies in parity games of imperfect information and reports on an experimental implementation, which is the first implementation of a procedure for solving imperfect-information parity games on graphs.
Abstract: We consider two-player parity games with imperfect information in which strategies rely on observations that provide imperfect information about the history of a play. To solve such games, i.e., to determine the winning regions of players and corresponding winning strategies, one can use the subset construction to build an equivalent perfect-information game. Recently, an algorithm that avoids the inefficient subset construction has been proposed. The algorithm performs a fixed-point computation in a lattice of antichains, thus maintaining a succinct representation of state sets. However, this representation does not allow winning strategies to be recovered. In this paper, we build on the antichain approach to develop an algorithm for constructing the winning strategies in parity games of imperfect information. One major obstacle in adapting the classical procedure is that the complementation of attractor sets would break the invariant of downward-closedness on which the antichain representation relies. We overcome this difficulty by decomposing problem instances recursively into games with a combination of reachability, safety, and simpler parity conditions. We also report on an experimental implementation of our algorithm; to our knowledge, this is the first implementation of a procedure for solving imperfect-information parity games on graphs.

Journal ArticleDOI
TL;DR: This article proves that ready simulation is fully abstract with respect to failure inclusion, when adding the conjunction operator to the standard setting of labelled transition systems with (CSP-style) parallel composition, and proves the semantic formalism robust when adding disjunction, external choice and hiding operators.
Abstract: This article provides new insight into the connection between the trace-based lower part of van Glabbeek's linear-time, branching-time spectrum and its simulation-based upper part. We establish that ready simulation is fully abstract with respect to failure inclusion, when adding the conjunction operator that was proposed by the authors in [TCS 373 (1-2) 19-40] to the standard setting of labelled transition systems with (CSP-style) parallel composition. More precisely, we actually prove a stronger result by considering a coarser relation than failure inclusion, namely a preorder that relates processes with respect to inconsistencies that may arise under conjunctive composition. Ready simulation is also shown to satisfy standard logic properties. In addition, our semantic formalism proves itself robust when adding disjunction, external choice and hiding operators, and is thus suited for studying mixed operational and logic languages. Finally, the utility of our formalism is demonstrated by means of a small example that deals with specifying and reasoning about mode logics within aircraft control systems.

Journal ArticleDOI
TL;DR: This work investigates semantic coherence conditions between the axiomatisation of a particular logic and its coalgebraic semantics that guarantee that the cut-rule is admissible in the ensuing sequent calculus and isolates a purely syntactic property of the set of modal rules that guarantees cut elimination.
Abstract: We give two generic proofs for cut elimination in propositional modal logics, interpreted over coalgebras. We first investigate semantic coherence conditions between the axiomatisation of a particular logic and its coalgebraic semantics that guarantee that the cut-rule is admissible in the ensuing sequent calculus. We then independently isolate a purely syntactic property of the set of modal rules that guarantees cut elimination. Apart from the fact that cut elimination holds, our main result is that the syntactic and semantic assumptions are equivalent in case the logic is amenable to coalgebraic semantics. As applications we present a new proof of the (already known) interpolation property for coalition logic and newly establish the interpolation property for the conditional logics LCK and LCKID.

Journal ArticleDOI
TL;DR: An approach based on allowing convex combinations of computations, similar to Segala and Lynch's use of randomized schedulers, is developed, and it is proved that bisimulation is sound and complete for this variant of pCTL*.
Abstract: We investigate weak bisimulation of probabilistic systems in the presence of nondeterminism, i.e. labelled concurrent Markov chains (LCMC) with silent transitions. We develop an approach based on allowing convex combinations of computations, similar to Segala and Lynch's use of randomized schedulers. The definition of weak bisimulation destroys the additivity property of the probability distributions, yielding instead capacities. The mathematics behind capacities naturally captures the intuition that when we deal with nondeterminism we must work with bounds on the possible probabilities rather than with their exact values. Our analysis leads to three new developments:
• We identify a characterization of "image finiteness" for countable-state systems and present a new definition of weak bisimulation for these LCMCs. We prove that our definition coincides with that of Philippou, Lee and Sokolsky for finite state systems.
• We show that bisimilar states have matching computations. The notion of matching involves convex combinations of transitions.
• We study a minor variant of the probabilistic logic pCTL* (the variation arises from an extra path formula to address action labels) and show that bisimulation is sound and complete for this variant of pCTL*.
This is an extended complete version of a paper that was presented at CONCUR 2002.

Journal ArticleDOI
TL;DR: The implementation and practical use of the developed techniques yield a novel and powerful framework which improves the current state-of-the-art of methods for proving termination of CSR.
Abstract: Termination is one of the most interesting problems when dealing with context-sensitive rewrite systems. Although a good number of techniques for proving termination of context-sensitive rewriting (CSR) have been proposed so far, the adaptation to CSR of the dependency pair approach, one of the most powerful techniques for proving termination of rewriting, took some time and was possible only after introducing some new notions like collapsing dependency pairs, which are specific for CSR. In this paper, we develop the notion of context-sensitive dependency pair (CSDP) and show how to use CSDPs in proofs of termination of CSR. The implementation and practical use of the developed techniques yield a novel and powerful framework which improves the current state-of-the-art of methods for automatically proving termination of CSR.

Journal ArticleDOI
TL;DR: A new efficient simulation algorithm is presented that is obtained as a modification of Henzinger et al.'s algorithm and whose correctness is based on some techniques used in applications of abstract interpretation to model checking.
Abstract: A number of algorithms for computing the simulation preorder and equivalence are available. Let Σ denote the state space, → the transition relation and P_sim the partition of Σ induced by simulation equivalence. The algorithms by Henzinger, Henzinger, Kopke and by Bloom and Paige run in O(|Σ||→|) time and, as far as time complexity is concerned, they are the best available algorithms. However, these algorithms have the drawback of a space complexity that is more than quadratic in the size of the state space Σ. The algorithm by Gentilini, Piazza, Policriti (subsequently corrected by van Glabbeek and Ploeger) appears to provide the best compromise between time and space complexity. Gentilini et al.'s algorithm runs in O(|P_sim|²|→|) time while the space complexity is in O(|P_sim|² + |Σ| log |P_sim|). We present here a new efficient simulation algorithm that is obtained as a modification of Henzinger et al.'s algorithm and whose correctness is based on some techniques used in applications of abstract interpretation to model checking. Our algorithm runs in O(|P_sim||→|) time and O(|P_sim||Σ| log |Σ|) space. Thus, this algorithm improves the best known time bound while retaining an acceptable space complexity that is in general less than quadratic in the size of the state space |Σ|. An experimental evaluation showed good comparative results with respect to Henzinger, Henzinger and Kopke's algorithm.
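For contrast with the refined algorithms discussed above, the simulation preorder can be computed by a naive fixpoint that starts from the full relation and removes pairs until stable. This sketch is polynomial but far from the time bounds quoted above, and all names in it are illustrative.

```python
# Naive fixpoint for the simulation preorder on a labelled transition system:
# (s, t) survives iff every labelled move of s can be matched by a move of t
# into a pair that also survives.

def simulation_preorder(states, trans):
    """trans: dict (state, label) -> set of successor states."""
    labels = {l for (_, l) in trans}
    sim = {(s, t) for s in states for t in states}  # start from everything
    changed = True
    while changed:
        changed = False
        for (s, t) in list(sim):
            for l in labels:
                succs_s = trans.get((s, l), set())
                succs_t = trans.get((t, l), set())
                # drop (s, t) if some move of s has no surviving match in t
                if any(all((s2, t2) not in sim for t2 in succs_t)
                       for s2 in succs_s):
                    sim.discard((s, t))
                    changed = True
                    break
    return sim

# p can do 'a' then stop; q can do 'a' then 'b': q simulates p, not vice versa
trans = {("p", "a"): {"p1"}, ("q", "a"): {"q1"}, ("q1", "b"): {"q2"}}
sim = simulation_preorder({"p", "p1", "q", "q1", "q2"}, trans)
print(("p", "q") in sim, ("q", "p") in sim)  # True False
```

Partition-based algorithms like those in the paper avoid materialising the full relation, which is exactly where the quadratic space cost comes from in this naive version.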

Journal ArticleDOI
TL;DR: A new static analysis is defined that computes an abstract transition system and supports the validation of systems where several copies of an ambient may appear; new weaker and more efficient analyses are also designed by means of simple widening operators.
Abstract: This paper concerns the application of formal methods to biological systems, modeled specifically in BioAmbients, a variant of the Mobile Ambients calculus. Following the semantic-based approach of abstract interpretation, we define a new static analysis that computes an abstract transition system. Our analysis has two main advantages with respect to the analyses appearing in the literature: (i) it is able to address temporal properties which are more general than invariant properties; (ii) it supports, by means of a particular labeling discipline, the validation of systems where several copies of an ambient may appear. We also design new weaker and more efficient analyses by means of simple widening operators.

Journal ArticleDOI
TL;DR: In this paper, the authors propose means to predict termination in a higher-order imperative and concurrent language a la ML using the realizability technique, which is a technique for proving termination in typed formalisms.
Abstract: We propose means to predict termination in a higher-order imperative and concurrent language a la ML. We follow and adapt the classical method for proving termination in typed formalisms, namely the realizability technique. There is a specific difficulty with higher-order state, which is that one cannot define a realizability interpretation simply by induction on types, because applying a function may have side-effects at types not smaller than the type of the function. Moreover, such higher-order side-effects may give rise to computations that diverge without resorting to explicit recursion. We overcome these difficulties by introducing a type and effect system for our language that enforces a stratification of the memory. The stratification prevents the circularities in the memory that may cause divergence, and allows us to define a realizability interpretation of the types and effects, which we then use to establish that typable sequential programs in our system are guaranteed to terminate, unless they use explicit recursion in a divergent way. We actually prove a more general fairness property, that is, any typable thread yields the scheduler after some finite computation. Our realizability interpretation also copes with dynamic thread creation.

Journal ArticleDOI
TL;DR: The smallest class of languages containing the singletons and closed under Boolean operations, product and shuffle is studied, including the smallest class containing the languages composed of a single word of length 2 which is closed under Boolean operations and shuffle by a letter.
Abstract: There is an increasing interest in the shuffle product on formal languages, mainly because it is a standard tool for modeling process algebras. It still remains a mysterious operation on regular languages. Antonio Restivo proposed as a challenge to characterize the smallest class of languages containing the singletons and closed under Boolean operations, product and shuffle. This problem is still widely open, but we present some partial results on it. We also study some other smaller classes, including the smallest class containing the languages composed of a single word of length 2 which is closed under Boolean operations and shuffle by a letter (resp. shuffle by a letter and by the star of a letter). The proof techniques have both an algebraic and a combinatorial flavor.
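To fix intuition, the shuffle product of two words is the set of all interleavings that preserve the letter order of each word; it can be computed by a small recursion. This sketch is generic and not specific to the classes studied in the paper.

```python
# Shuffle product of two words: every interleaving keeps the relative order
# of the letters coming from each operand.

def shuffle(u, v):
    if not u:
        return {v}
    if not v:
        return {u}
    # either the first letter of u or the first letter of v comes first
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

# |shuffle("ab", "cd")| = C(4, 2) = 6 interleavings
print(sorted(shuffle("ab", "cd")))
```

Lifting this operation pointwise to languages is what makes questions about closure under Boolean operations, product and shuffle so delicate for regular languages.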

Journal ArticleDOI
TL;DR: This paper considers the uniform satisfiability problem for arbitrary MSO-definable local temporal logics and proves multi-exponential lower and upper bounds that depend on the number of alternations of set quantifiers present in the chosen MSO-modalities.
Abstract: We continue our study of the complexity of MSO-definable local temporal logics over concurrent systems that can be described by Mazurkiewicz traces. In previous papers, we showed that the satisfiability problem for any such logic is in PSPACE provided the dependence alphabet is fixed (Gastin and Kuske (2003) [10]), and remains in PSPACE for all classical local temporal logics even if the dependence alphabet is part of the input (Gastin and Kuske (2007) [8]). In this paper, we consider the uniform satisfiability problem for arbitrary MSO-definable local temporal logics. For this problem, we prove multi-exponential lower and upper bounds that depend on the number of alternations of set quantifiers present in the chosen MSO-modalities.

Journal ArticleDOI
TL;DR: This work proposes a process-algebraic framework in which the control on the scheduler can be specified in syntactic terms, and shows how to apply it to restrict the power of the scheduler in security applications.
Abstract: When dealing with process calculi and automata which express both nondeterministic and probabilistic behavior, it is customary to introduce the notion of scheduler to resolve the nondeterminism. It has been observed that for certain applications, notably those in security, the scheduler needs to be restricted so as not to reveal the outcome of the protocol's random choices, as otherwise the adversary model would be too strong even for "obviously correct" protocols. We propose a process-algebraic framework in which the control on the scheduler can be specified in syntactic terms, and we show how to apply it to solve the problem mentioned above. We also consider the definition of (probabilistic) may and must preorders, and we show that they are precongruences with respect to the restricted schedulers. Furthermore, we show that all the operators of the language, except replication, distribute over probabilistic summation, which is a useful property for verification.
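Why an unrestricted scheduler is too strong can be seen in a toy simulation: a protocol flips a secret coin and then two parallel processes each emit one visible action. A "full-information" scheduler that sees the coin can encode it in the interleaving, while a scheduler restricted to syntactic information cannot. The sketch below is a hypothetical illustration of this observation, not the paper's calculus; all names are invented:

```python
def run(secret, scheduler):
    """Two parallel processes each emit one visible action ('p' and 'q');
    the scheduler chooses the interleaving.  Only the trace is observable."""
    pending = ["p", "q"]
    trace = ""
    while pending:
        choice = scheduler(secret, list(pending))
        pending.remove(choice)
        trace += choice
    return trace

def leaky(secret, pending):
    """Full-information scheduler: picks the order from the secret coin,
    so the observable trace reveals the coin entirely."""
    preferred = "p" if secret == 0 else "q"
    return preferred if preferred in pending else pending[0]

def restricted(secret, pending):
    """Restricted scheduler: cannot inspect the secret, only the
    syntactic state, so the trace is independent of the coin."""
    return pending[0]

print(run(0, leaky), run(1, leaky))          # traces differ: coin leaks
print(run(0, restricted), run(1, restricted))  # identical: nothing leaks
```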

Journal ArticleDOI
TL;DR: Deterministic pushdown automata (pda) are shown to be weaker than Las Vegas pda, which in turn are weaker than one-sided-error pda; bounded-error (two-sided-error) pda and nondeterministic pda are incomparable; and error probabilities can in general not be decreased arbitrarily.
Abstract: We study the most important probabilistic computation modes for pushdown automata. First we show that deterministic pushdown automata (pda) are weaker than Las Vegas pda, which in turn are weaker than one-sided-error pda. Next one-sided-error pda are shown to be weaker than (nondeterministic) pda. Finally, bounded-error (two-sided-error) pda and nondeterministic pda are incomparable. To show the limited power of bounded-error two-sided-error pda we apply communication arguments; in particular we introduce a non-standard model of communication which we analyze with the help of the discrepancy method. The power of randomization for pda is considerable, since we construct languages which are not deterministic context-free (resp. not context-free) but are recognizable with even arbitrarily small error by one-sided-error (resp. bounded-error) pda. On the other hand we show that, in contrast to many other fundamental models of computing, error probabilities can in general not be decreased arbitrarily: we construct languages which are recognizable by one-sided-error pda with error probability 1/2, but not by one-sided-error pushdown automata with error probability p < 1/2. A similar result, with error probability 1/3, holds for bounded-error (two-sided-error) pda.
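The computation mode in question can be illustrated on the non-context-free language {a^n b^n c^n}: flip one fair coin and check either |a| = |b| or |b| = |c|, each of which a deterministic pda can do with a single counter on its stack. Members are always accepted; non-members are wrongly accepted with probability at most 1/2, i.e. one-sided error 1/2. The Python simulation below is a sketch of this computation mode under these assumptions, not the separating construction from the paper:

```python
from itertools import groupby

def accept_given_coin(w, coin):
    """One run of a randomized check for {a^n b^n c^n} (n >= 0).
    coin = 0: verify the block shape a*b*c* and |a| == |b|.
    coin = 1: verify the block shape a*b*c* and |b| == |c|.
    Each branch needs only one counter, hence one pda stack."""
    blocks = [(ch, sum(1 for _ in grp)) for ch, grp in groupby(w)]
    if w and [ch for ch, _ in blocks] != ["a", "b", "c"]:
        return False  # the finite control rejects a malformed shape
    counts = dict(blocks) if w else {"a": 0, "b": 0, "c": 0}
    if coin == 0:
        return counts["a"] == counts["b"]
    return counts["b"] == counts["c"]

def acceptance_probability(w):
    """Average acceptance over the single fair coin flip."""
    return sum(accept_given_coin(w, c) for c in (0, 1)) / 2

print(acceptance_probability("aabbcc"))  # member: always accepted -> 1.0
print(acceptance_probability("aabbc"))   # only |a| == |b| holds -> 0.5
print(acceptance_probability("abbc"))    # neither check holds -> 0.0
```

If the input is in the language both checks succeed (acceptance probability 1); otherwise at least one check fails, so the automaton wrongly accepts with probability at most 1/2, matching the error bound 1/2 that the abstract shows cannot be improved in general.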