
Showing papers in "Information & Computation in 2019"


Journal ArticleDOI
TL;DR: The introduced equivalence is used to study the expressiveness of AbC in terms of encoding aspects of broadcast channel-based interactions and to establish formal relationships between system descriptions at different levels of abstraction.
Abstract: We propose a process calculus, named AbC, to study the behavioural theory of interactions in collective-adaptive systems by relying on attribute-based communication. An AbC system consists of a set of parallel components each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interaction among components is dynamically established by taking into account “connections” as determined by predicates over their attributes. The structural operational semantics of AbC is based on Labelled Transition Systems that are also used to define bisimilarity between components. Labelled bisimilarity is in full agreement with a barbed congruence, defined by relying on simple basic observables and context closure. The introduced equivalence is used to study the expressiveness of AbC in terms of encoding aspects of broadcast channel-based interactions and to establish formal relationships between system descriptions at different levels of abstraction.
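The attribute-based multicast at the heart of AbC can be sketched in a few lines. This is an illustrative toy, not code from the paper, and all names here (Component, abc_send) are hypothetical: a sender emits a value together with a predicate, and exactly those components whose attributes satisfy the predicate receive it.

```python
# Toy illustration of attribute-based communication in the style of AbC.
# All names here (Component, abc_send) are hypothetical, not from the paper.

class Component:
    def __init__(self, attributes):
        self.attributes = attributes   # e.g. {"role": "rescuer", "battery": 80}
        self.inbox = []

def abc_send(value, predicate, components):
    """Implicit multicast: the sender names no receivers; every component
    whose attributes satisfy the predicate receives the value."""
    for c in components:
        if predicate(c.attributes):
            c.inbox.append(value)

comps = [Component({"role": "rescuer", "battery": 80}),
         Component({"role": "victim",  "battery": 10}),
         Component({"role": "rescuer", "battery": 15})]

# Send a task to all rescuers with enough battery.
abc_send("goto-(3,4)", lambda a: a["role"] == "rescuer" and a["battery"] > 50,
         comps)
```

Note that "connections" are decided entirely by the receivers' attributes at send time, which is what makes the interaction implicit and dynamic.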

26 citations


Journal ArticleDOI
TL;DR: A comprehensive list of decision problems about the dynamical behaviour of reaction systems (such as cycles and fixed/periodic points, attractors, and reachability) is provided along with the corresponding computational complexity, which ranges from tractable problems to PSPACE-complete problems.
Abstract: Reaction systems are discrete dynamical systems inspired by bio-chemical processes, whose dynamical behaviour is expressed by set-theoretic operations on finite sets. Reaction systems thus provide a description of bio-chemical phenomena that complements the more traditional approaches, for instance those based on differential equations. A comprehensive list of decision problems about the dynamical behaviour of reaction systems (such as cycles and fixed/periodic points, attractors, and reachability) is provided along with the corresponding computational complexity, which ranges from tractable problems to PSPACE-complete problems.
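The set-theoretic dynamics is simple enough to sketch directly. The following is a minimal illustration of the standard definition, not code from the paper: a reaction (R, I, P) is enabled in state T when R ⊆ T and I ∩ T = ∅, and the successor state is the union of the products of all enabled reactions.

```python
# Minimal reaction-system step, following the standard set-theoretic
# semantics (an illustration, not code from the paper): a reaction
# (R, I, P) is enabled in state T when R is a subset of T and I is
# disjoint from T; the next state is the union of the products of all
# enabled reactions.

def step(reactions, state):
    result = set()
    for reactants, inhibitors, products in reactions:
        if reactants <= state and not (inhibitors & state):
            result |= products
    return result

# Two toy reactions over the entities {a, b, c}.
reactions = [({"a"}, {"c"}, {"b"}),   # a produces b, unless c is present
             ({"b"}, set(), {"c"})]   # b produces c

s1 = step(reactions, {"a"})   # {"b"}
s2 = step(reactions, s1)      # {"c"}
s3 = step(reactions, s2)      # set(): the empty state is a fixed point
```

Iterating `step` produces exactly the kind of trajectories (cycles, fixed points, reachability) whose decision problems the paper classifies.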

18 citations


Journal ArticleDOI
TL;DR: A strong AC^0 version of the planted clique conjecture is proved: AC^0-circuits asymptotically almost surely cannot distinguish between a random graph and the same graph with a randomly planted clique of any size ≤ n^ξ (where 0 ≤ ξ < 1).
Abstract: We demonstrate some lower bounds for parameterized problems via parameterized classes corresponding to the classical AC^0. Among others, we derive such a lower bound for all fpt-approximations of the parameterized clique problem and for a parameterized halting problem, which recently turned out to link problems of computational complexity, descriptive complexity, and proof theory. To show the first-mentioned lower bound we prove a strong AC^0 version of the planted clique conjecture: AC^0-circuits asymptotically almost surely cannot distinguish between a random graph and the same graph with a randomly planted clique of any size ≤ n^ξ (where 0 ≤ ξ < 1).

13 citations


Journal ArticleDOI
TL;DR: It is shown, using the decomposition method, how congruence formats can be relaxed for weak semantics that are stability-respecting, and it is proved that a congruence format for a stability-respecting weak semantics is also a congruence format for its divergence-preserving counterpart.
Abstract: In two earlier papers we derived congruence formats with regard to transition system specifications for weak semantics on the basis of a decomposition method for modal formulas. The idea is that a congruence format for a semantics must ensure that the formulas in the modal characterisation of this semantics are always decomposed into formulas that are again in this modal characterisation. The stability and divergence requirements that are imposed on many of the known weak semantics have so far been outside the realm of this method. Stability refers to the absence of a τ-transition. We show, using the decomposition method, how congruence formats can be relaxed for weak semantics that are stability-respecting. This relaxation for instance brings the priority operator within the range of the stability-respecting branching bisimulation format. Divergence, which refers to the presence of an infinite sequence of τ-transitions, escapes the inductive decomposition method. We circumvent this problem by proving that a congruence format for a stability-respecting weak semantics is also a congruence format for its divergence-preserving counterpart.

11 citations


Journal ArticleDOI
TL;DR: In this article, Dachman-Soled et al. proposed a new notion called locally decodable and updatable non-malleable codes, which, informally, provides the security guarantees of a non-malleable code while also allowing for efficient random access.
Abstract: In a recent result, Dachman-Soled et al. (TCC 2015) proposed a new notion called locally decodable and updatable non-malleable codes, which informally, provides the security guarantees of a non-malleable code while also allowing for efficient random access. They also considered locally decodable and updatable non-malleable codes that are leakage-resilient, allowing for adversaries who continually leak information in addition to tampering. Unfortunately, the locality of their construction in the continual setting was Ω(log n), meaning that if the original message size was n blocks, then Ω(log n) blocks of the codeword had to be accessed upon each decode and update instruction.

11 citations


Journal ArticleDOI
TL;DR: The logical characterization of bisimulation may fail when there are uncountably many labels, but with a stronger assumption on the transition functions (continuity instead of just measurability), the logical characterization result for arbitrarily many labels is regained.
Abstract: Logical characterizations of probabilistic bisimulation and simulation for Labelled Markov Processes were given by Desharnais et al. These results hold for systems defined on analytic state spaces and assume countably many labels in the case of bisimulation and finitely many labels in the case of simulation. We revisit these results by giving simpler and more streamlined proofs. In particular, our proof for simulation has the same structure as the one for bisimulation, relying on a new result of a topological nature. We also propose a new notion of event simulation. Our proofs assume countably many labels, and we show that the logical characterization of bisimulation may fail when there are uncountably many labels. However, with a stronger assumption on the transition functions (continuity instead of just measurability), we regain the logical characterization result for arbitrarily many labels. These results arose from a game-theoretic understanding of probabilistic simulation and bisimulation.

11 citations


Journal ArticleDOI
TL;DR: Any Las Vegas algorithm relying on collision detection can be converted into a Monte Carlo algorithm without collision detection at the cost of a logarithmic slowdown, which is proved to be optimal.
Abstract: We consider networks of entities which interact using beeps. In the basic model by Cornejo and Kuhn (2010), entities either beep or listen in each round. Those who beep cannot detect simultaneous beeps. Those who listen distinguish only between silence and non-silence. We call this model BL (beep or listen). Stronger models enable collision detection when beeping (B_cdL), listening (BL_cd), or both (B_cdL_cd). We identify a set of generic design patterns in beeping algorithms: multi-slot phases; exclusive beeps; adaptive probability; internal or peripheral collision detection (and their emulation). Using them, we formulate concisely a number of algorithms for basic tasks like colouring, degree computation, and MIS. We analyse their complexities, improving known bounds of the MIS algorithm by Jeavons et al. (2016). Finally, inspired by Afek et al. (2013), we show that all Las Vegas algorithms using collision detection are convertible into Monte Carlo algorithms with emulated detection, with a logarithmic slowdown.
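A single round of the weakest model, BL, can be simulated in a few lines. This is an illustrative toy with hypothetical names, not code from the paper: beepers get no feedback, and listeners only distinguish silence from non-silence.

```python
import random

# One round of the weakest beeping model, BL (illustrative toy, with
# hypothetical names): each node beeps with its own probability; beepers
# learn nothing, listeners only hear silence vs. non-silence.

def bl_round(beep_prob, rng):
    actions = ["beep" if rng.random() < p else "listen" for p in beep_prob]
    noise = "beep" in actions
    # feedback per node: beepers get None, listeners hear whether anyone beeped
    return [None if a == "beep" else noise for a in actions]

rng = random.Random(0)
feedback = bl_round([1.0, 0.0, 0.0], rng)   # node 0 beeps for sure
# Listeners hear the beep but cannot tell how many nodes produced it,
# which is exactly the limitation the collision-detection variants lift.
```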

11 citations


Journal ArticleDOI
TL;DR: A linear time algorithm for locating all maximal square 2D palindromes in a given 2D text is presented, along with a tradeoff in terms of output size: if the output size is small, the second algorithm is preferable, while the first would be more efficient if the output size is Θ(n^3).
Abstract: This paper extends the problem of palindrome searching into a higher dimension, addressing two definitions of 2D palindromes. The first definition implies a square, while the second definition (also known as a centrosymmetric factor) can be any rectangular shape. We present a linear time algorithm for locating all maximal square 2D palindromes in a given 2D text. For the second definition of palindromes (rect2DP), we present two different algorithms. Given a text of size n × n, the first algorithm has time O(n^3), which is linear in the worst-case output size. The second algorithm has time O(n^2 log n + occ log n), where occ is the number of maximal rect2DPs in the output. This provides a tradeoff in terms of output size; if the output size is small, the second algorithm is preferable, while the first would be more efficient if the output size is Θ(n^3).
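The centrosymmetric definition has a simple operational reading: a rectangular block is a rect2DP exactly when it equals its own 180-degree rotation. A naive reference check, for illustration only (the paper's algorithms are far more efficient):

```python
# Naive reference check for the second ("centrosymmetric") definition of a
# 2D palindrome: a rectangular block equals its own 180-degree rotation.
# For illustration only; the paper's algorithms are far more efficient.

def is_centrosymmetric(block):
    rotated = [row[::-1] for row in reversed(block)]   # 180-degree rotation
    return block == rotated

print(is_centrosymmetric(["ab",
                          "ba"]))   # True
print(is_centrosymmetric(["ab",
                          "cd"]))   # False
```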

10 citations


Journal ArticleDOI
TL;DR: The notion of pattern is extended and the notion of a topological minor of a binary CSP instance is introduced to obtain a compact mechanism for expressing novel tractable subproblems of the CSP, including new generalisations of the class of acyclic instances.
Abstract: The binary Constraint Satisfaction Problem (CSP) is to decide whether there exists an assignment to a set of variables which satisfies specified constraints between pairs of variables. A binary CSP instance can be presented as a labelled graph encoding both the forms of the constraints and where they are imposed. We consider subproblems defined by restricting the allowed form of this graph. One type of restriction is to forbid certain specified substructures (patterns). This captures some tractable classes of the CSP, but does not capture classes defined by language restrictions, or the well-known structural property of acyclicity. We extend the notion of pattern and introduce the notion of a topological minor of a binary CSP instance. By forbidding a finite set of patterns from occurring as topological minors we obtain a compact mechanism for expressing novel tractable subproblems of the CSP, including new generalisations of the class of acyclic instances.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors give faster and simpler fully polynomial-time approximation schemes (FPTASes) for the #P-complete problem of counting 0/1 Knapsack solutions, and for its random generation counterpart.
Abstract: We give faster and simpler fully polynomial-time approximation schemes (FPTASes) for the #P-complete problem of counting 0/1 Knapsack solutions, and for its random generation counterpart. Our method is based on dynamic programming and discretization of large numbers through floating-point arithmetic. We improve both deterministic counting FPTASes in (Gopalan et al., FOCS 2011), (Stefankovic et al., SIAM J. Comput. 2012) and the randomized counting and random generation algorithms in (Dyer, STOC 2003). We also improve the complexity of the problem of counting 0/1 Knapsack solutions in an arc-weighted DAG.
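The exact counting problem that the FPTAS approximates admits a simple pseudo-polynomial dynamic program. The sketch below is illustrative only: the paper's contribution is to replace these exact, exponentially large counts with discretized floating-point approximations.

```python
# Exact pseudo-polynomial DP for counting 0/1 Knapsack solutions, i.e.
# subsets of the weights with total at most the capacity. Illustrative
# only; the paper's FPTAS discretizes these counts via floating-point
# arithmetic instead of computing them exactly.

def count_knapsack(weights, capacity):
    dp = [0] * (capacity + 1)   # dp[c] = number of subsets of weight exactly c
    dp[0] = 1                   # the empty subset
    for w in weights:
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)              # subsets with total weight <= capacity

print(count_knapsack([1, 2, 3], 3))   # 5: {}, {1}, {2}, {3}, {1,2}
```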

8 citations


Journal ArticleDOI
TL;DR: This paper considers all fragments of Halpern and Shoham's interval temporal logic HS with a decidable satisfiability problem over the rationals, and provides a complete classification of them in terms of their expressiveness and computational complexity by solving the last few open problems.
Abstract: Interval temporal logics provide a natural framework for temporal reasoning about interval structures over linearly ordered domains, where intervals are taken as first-class citizens. Their expressive power and computational behavior mainly depend on two parameters: the set of modalities they feature and the linear orders over which they are interpreted. In this paper, we consider all fragments of Halpern and Shoham's interval temporal logic HS with a decidable satisfiability problem over the rationals, and we provide a complete classification of them in terms of their expressiveness and computational complexity by solving the last few open problems.

Journal ArticleDOI
TL;DR: The problem of weighted pattern matching is studied, in which a string pattern P, a weight threshold 1/z, and a weighted text X arriving on-line are given, leading to a new algorithm that processes each arriving position of X in O(z + σ) time using O(m + z) extra space.
Abstract: A weighted sequence is a sequence of probability distributions over an alphabet of size σ. Weighted sequences arise naturally in many applications. We study the problem of weighted pattern matching in which we are given a string pattern P of length m, a weight threshold 1/z, and a weighted text X arriving on-line. We say that P occurs in X at position i if the product of probabilities of the letters of P at positions i − m + 1, …, i in X is at least 1/z. We first discuss how to apply a known general scheme that transforms off-line pattern matching algorithms to on-line algorithms, obtaining an on-line algorithm that requires O((σ + log z) log m) or O(σ log^2 m) time per arriving position, with the space requirement, however, being O(m · min(σ, z)). Our main result is a new algorithm that processes each arriving position of X in O(z + σ) time using O(m + z) extra space.
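The occurrence condition can be checked directly by a naive O(nm) scan. The sketch below is for illustration only and ignores the on-line efficiency concerns that the paper addresses.

```python
# Naive O(nm) check of the occurrence condition in weighted pattern
# matching (illustration only; the paper's algorithm is on-line and far
# more efficient). The weighted text gives, at each position, a
# probability for each letter; P occurs at position i when the product of
# probabilities over the window ending at i is at least 1/z.

def occurrences(pattern, text, z):
    m, hits = len(pattern), []
    for i in range(m - 1, len(text)):
        prob = 1.0
        for j, ch in enumerate(pattern):
            prob *= text[i - m + 1 + j].get(ch, 0.0)
        if prob >= 1.0 / z:
            hits.append(i)
    return hits

text = [{"a": 0.9, "b": 0.1}, {"a": 0.5, "b": 0.5}, {"a": 1.0}]
hits = occurrences("aa", text, z=4)   # windows 0.9*0.5 and 0.5*1.0, both >= 1/4
```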

Journal ArticleDOI
TL;DR: This paper establishes similar characterizations for conjunctive grammars, that is, for grammars extended with a conjunction operator, as well as for Boolean grammars, which are further equipped with a negation operator, and shows that no such characterization is possible for several subclasses of linear grammars.
Abstract: A famous theorem by Greibach ("The hardest context-free language", SIAM J. Comp., 1973) states that there exists a context-free language L_0 such that every context-free language over any alphabet is reducible to L_0 by a homomorphic reduction; in other words, it is representable as the inverse homomorphic image h^{-1}(L_0) for a suitable homomorphism h. This paper establishes similar characterizations for conjunctive grammars, that is, for grammars extended with a conjunction operator, as well as for Boolean grammars, which are further equipped with a negation operator. At the same time, it is shown that no such characterization is possible for several subclasses of linear grammars.
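Membership in an inverse homomorphic image reduces to a single membership test: w ∈ h^{-1}(L_0) iff h(w) ∈ L_0. A toy sketch, where the language is a simple stand-in chosen for illustration (balanced brackets), not Greibach's hardest language:

```python
# Membership in an inverse homomorphic image: w is in h^{-1}(L0) iff
# h(w) is in L0. The language below is a toy stand-in for illustration
# (balanced brackets), not Greibach's hardest context-free language.

def h(word, images):
    return "".join(images[c] for c in word)

def in_inverse_image(word, images, membership):
    return membership(h(word, images))

def balanced(s):   # toy L0: well-nested bracket strings
    depth = 0
    for c in s:
        depth += 1 if c == "(" else -1
        if depth < 0:
            return False
    return depth == 0

images = {"x": "(", "y": ")", "z": "()"}
member = in_inverse_image("xzy", images, balanced)   # h("xzy") = "(())"
```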

Journal ArticleDOI
Sabine Broda, Markus Holzer, Eva Maia, Nelma Moreira, Rogério Reis
TL;DR: It turns out that the prefix automaton A_Pre is central to reverse expressions, because the determinisation of the double reversal of A_Pre can be represented as a quotient of any of the deterministic automata considered in this investigation.
Abstract: We contribute new relations to the taxonomy of different conversions from regular expressions to equivalent finite automata. In particular, we are interested in transformations that construct automata such as the follow automaton, the partial derivative automaton, the prefix automaton, the recently introduced automata based on pointed expressions, and, last but not least, the position, or Glushkov, automaton (A_POS), together with their double-reversed construction counterparts. We deepen the understanding of these constructions and show that with the artefacts used to construct the Glushkov automaton one is able to capture most of them. As a byproduct we define a dual version of the position automaton which plays a similar role as A_POS but for the reverse expression. Moreover, it turns out that the prefix automaton A_Pre is central to reverse expressions, because the determinisation of the double reversal of A_Pre (first reverse the expression, construct the automaton A_Pre, and then reverse the automaton) can be represented as a quotient of any of the deterministic automata considered in this investigation. This shows that although the conversion of regular expressions and of reversed regular expressions to finite automata seems quite similar, there are significant differences.

Journal ArticleDOI
TL;DR: The result is obtained that, for any nondeterministic pushdown automaton augmented with reversal-bounded counters, where the pushdown can "flip" its contents a bounded number of times, the store language can be accepted by a machine with only reversal-bounded counters.
Abstract: The store language of a machine of some arbitrary type is the set of all store configurations (state plus store contents but not input) that can appear in an accepting computation. New algorithms and characterizations of store languages are obtained, such as the result that any nondeterministic pushdown automaton augmented with reversal-bounded counters, where the pushdown can “flip” its contents a bounded number of times, can be accepted by a machine with only reversal-bounded counters. Connections are made between store languages and several model checking and reachability problems, such as accepting the set of all predecessor and successor configurations of a machine from a given (regular) set of configurations. For a variety of different machine models often containing multiple parallel data stores, these sets can be accepted with a machine model that is simpler than the original model itself. Store languages are key to showing these properties.

Journal ArticleDOI
TL;DR: This article first refines the specification of diagnosability by identifying three criteria: detecting faulty runs or providing information for all runs, considering finite or infinite runs, and requiring or not a uniform detection delay; it then gives a complete picture of the relations between the different diagnosability specifications for probabilistic systems and establishes characterisations for most of them in the finite-state case.
Abstract: Diagnosis of partially observable stochastic systems prone to faults was introduced in the late nineties. Diagnosability, i.e. the existence of a diagnoser, may be specified in different ways: exact diagnosability requires that almost surely a fault is detected and that no fault is erroneously claimed; approximate diagnosability tolerates a small error probability when claiming a fault; last, accurate approximate diagnosability guarantees that the error probability can be chosen arbitrarily small. In this article, we first refine the specification of diagnosability by identifying three criteria: (1) detecting faulty runs or providing information for all runs, (2) considering finite or infinite runs, and (3) requiring or not a uniform detection delay. We then give a complete picture of relations between the different diagnosability specifications for probabilistic systems and establish characterisations for most of them in the finite-state case. Based on these characterisations, we develop decision procedures, study their complexity and prove their optimality. We also design synthesis algorithms to construct diagnosers and we analyse their memory requirements. Finally we establish undecidability of the diagnosability problems for which we provided no characterisation.

Journal ArticleDOI
TL;DR: It is shown that relay networks can be used to model dynamic networks, in a way akin to Kripke's possible worlds, and a max-flow min-cut theorem is proved for the Rényi entropy with order less than one; it is also shown that linear network coding fails for relay networks.
Abstract: The paper presents four distinct new ideas and results for communication networks: 1) We show that relay networks (i.e. communication networks where different nodes use the same coding functions) can be used to model dynamic networks, in a way akin to Kripke's possible worlds. Changes in the network are modelled by considering a multiverse where different possible situations arise as worlds existing in parallel. 2) We introduce the term model, which is a simple, graph-free symbolic approach to communication networks. This model yields an algorithm to calculate the capacity of a given communication network. 3) We state and prove variants of a theorem concerning the dispersion of information in single-receiver communications. The dispersion theorem resembles the max-flow min-cut theorem for commodity networks. The proof uses a very weak kind of network coding, called routing with dynamic headers. 4) We show that the solvability of an abstract multi-user communication problem is equivalent to the solvability of a single-target communication in a suitable relay network. In the paper, we develop a number of technical ramifications of these ideas and results. We prove a max-flow min-cut theorem for the Rényi entropy with order less than one, given that the sources are equiprobably distributed; conversely, we show that the max-flow min-cut theorem fails for order greater than one. We also show that linear network coding fails for relay networks, although routing with dynamic headers is asymptotically sufficient to reach capacity.

Journal ArticleDOI
TL;DR: A general technique is devised that provides lower bounds for all tree-like QBF systems of the form P + ∀red, where P is a propositional system, and a full characterisation of hardness for tree-like QBF Frege is obtained.
Abstract: We examine the tree-like versions of QBF Frege and extended Frege systems. While in the propositional setting, tree-like and dag-like Frege are equivalent, we show that this is not the case for QBF Frege, where tree-like systems are exponentially weaker. This applies to the version of QBF Frege where the universal reduction rule substitutes universal variables by 0/1 constants. To show lower bounds for tree-like QBF Frege we devise a general technique that provides lower bounds for all tree-like QBF systems of the form P + ∀red, where P is a propositional system. The lower bound is based on the semantic measure of strategy size corresponding to the size of countermodels for false QBFs. We also obtain a full characterisation of hardness for tree-like QBF Frege. Lower bounds for this system either arise from a lower bound to propositional Frege, from a circuit lower bound, or from a lower bound to strategy size.

Journal ArticleDOI
TL;DR: A logical characterization of (bi)simulation metrics is obtained via a probabilistic variant of Hennessy-Milner logic enriched with variables, whose semantics is defined following the equational μ-calculus approach, based on the novel notions of mimicking formulae and syntactical distance on formulae.
Abstract: In this paper we propose a logical characterization of (bi)simulation metrics obtained by a probabilistic variant of Hennessy-Milner logic enriched with variables, whose semantics is defined following the equational μ-calculus approach. Our characterization is based on the novel notions of mimicking formulae and syntactical distance on formulae. The former are the quantitative analogue of characteristic formulae. The latter is a 1-bounded pseudometric on formulae measuring their syntactical disparities. The characterization is obtained by showing that the (bi)simulation distance between processes corresponds to the syntactical distance between their mimicking formulae. We also discuss the expressive power of mimicking formulae with respect to probabilistic (bi)simulations. We show that two processes are bisimilar if and only if their mimicking formulae are syntactically equivalent. Moreover, we obtain that the mimicking formulae of processes coincide with their characteristic formulae for ready simulation and that negation-free mimicking formulae coincide with the characteristic formulae for simulation.

Journal ArticleDOI
TL;DR: For the class of weighted languages recognizable by weighted automata with storage, this work proves closure properties, a Chomsky–Schützenberger theorem, and a Büchi–Elgot–Trakhtenbrot theorem.
Abstract: We consider finite-state automata that are equipped with a storage. Moreover, the transitions are weighted by elements of a unital valuation monoid. A weighted automaton with storage recognizes a weighted language, which is a mapping from input strings to elements of the carrier set of the unital valuation monoid. For the class of weighted languages recognizable by such automata we prove closure properties, a Chomsky–Schützenberger theorem, and a Büchi–Elgot–Trakhtenbrot theorem. In case of idempotent, locally finite, and sequential unital valuation monoids, the recognized weighted languages are step functions.

Journal ArticleDOI
TL;DR: This work studies sets of directed acyclic graphs, called regular DAG languages, which are accepted by a recently introduced type of DAG automata motivated by current developments in natural language processing.
Abstract: We study sets of directed acyclic graphs, called regular DAG languages, which are accepted by a recently introduced type of DAG automata motivated by current developments in natural language processing ...

Journal ArticleDOI
TL;DR: This paper gives t-revealing codes in the binary Hamming space F^n, which have applications to the list decoding problem of Levenshtein's channel model and to the information retrieval problem of the Yaakobi-Bruck model of associative memories.
Abstract: In this paper, we introduce t-revealing codes in the binary Hamming space F^n. Let C ⊆ F^n be a code and denote by I_t(C; x) the set of elements of C which are within (Hamming) distance t from a word x ∈ F^n. A code C is t-revealing if majority voting on the coordinates of the words in I_t(C; x) gives unambiguously x. These codes have applications, for instance, to the list decoding problem of Levenshtein's channel model, where the decoder provides a list based on several different outputs of the channel with the same input, and to the information retrieval problem of the Yaakobi-Bruck model of associative memories. We give t-revealing codes which improve some of the key parameters for these applications compared to earlier code constructions.
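The majority-voting decoder is easy to sketch. As a toy sanity check (not a construction from the paper), take C = F^n, the whole space, with t = 1: the ball around x contains x and the n words flipping one bit each, so each coordinate's majority recovers x for n ≥ 2.

```python
from itertools import product

# Coordinate-wise majority voting behind t-revealing codes. Toy sanity
# check (not a construction from the paper): with C = F^n and t = 1, the
# ball I_1(C; x) contains x plus the n one-bit flips of x, so each
# coordinate agrees with x in n out of n + 1 words and majority recovers x.

def ball(x, t, code):
    return [c for c in code if sum(a != b for a, b in zip(c, x)) <= t]

def majority_vote(words):
    n = len(words[0])
    return tuple(int(2 * sum(w[j] for w in words) > len(words))
                 for j in range(n))

code = list(product([0, 1], repeat=3))   # C = F^3
x = (1, 0, 1)
recovered = majority_vote(ball(x, 1, code))   # == x
```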

Journal ArticleDOI
TL;DR: For any n ≥ 3 and q ≥ 3, it is shown that the equality function on n variables over a domain of size q cannot be realized by matchgates under holographic transformations.
Abstract: For any n ≥ 3 and q ≥ 3, we prove that the Equality function (=_n) on n variables over a domain of size q cannot be realized by matchgates under holographic transformations. This is a consequence of our theorem on the structure of blockwise symmetric matchgate signatures. This has the implication that the standard holographic algorithms based on matchgates, a methodology known to be universal for #CSP over the Boolean domain, cannot produce P-time algorithms for planar #CSP over any higher domain q ≥ 3.

Journal ArticleDOI
TL;DR: The length and weight of polynomials sign-representing Boolean functions of the form ⊕^k f, the XOR of k copies of f on disjoint sets of variables, are considered, and it is shown that for an infinite family of functions f, a naive construction does not yield a shortest polynomial sign-representing ⊕^k f.
Abstract: A multilinear polynomial p is said to sign-represent a Boolean function f : {−1, 1}^n → {−1, 1} if f(x) = sgn(p(x)) for all x ∈ {−1, 1}^n. In this paper, we consider the length and weight of polynomials sign-representing Boolean functions of the form ⊕^k f, the XOR of k copies of f on disjoint sets of variables. Firstly, we show that for an infinite family of functions f, a naive construction does not yield a shortest polynomial sign-representing ⊕^k f. More precisely, we give a construction of polynomials sign-representing ⊕^k AND_n whose length is strictly smaller than the k-th power of the minimum length of a polynomial sign-representing AND_n, for every k ≥ 2 and n ≥ 2 except for k = n = 2. Previously, such polynomials were known only for n = 2 (Sezener and Oztop, 2015). A similar result for the weight is also provided. Secondly, we introduce a parameter v_f^* of a Boolean function f and show that the k-th root of the minimum weight of a polynomial sign-representing ⊕^k f converges to a limit between v_f^* and (v_f^*)^2 as k goes to infinity.
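The sign-representation condition can be verified by brute force for small n. As a concrete instance (a standard textbook example, not the paper's construction), p(x) = x_1 + … + x_n − (n − 1) sign-represents AND_n, the function that is 1 iff all inputs are 1.

```python
from itertools import product

# Brute-force check of the sign-representation condition on {-1, 1}^n.
# Concrete instance (a standard example, not the paper's construction):
# p(x) = x_1 + ... + x_n - (n - 1) sign-represents AND_n, since p(x) = 1
# when all inputs are 1 and p(x) <= -1 otherwise.

def sign_represents(p, f, n):
    return all(f(x) * p(x) > 0 for x in product([-1, 1], repeat=n))

def and_n(x):
    return 1 if all(v == 1 for v in x) else -1

n = 4
ok = sign_represents(lambda x: sum(x) - (n - 1), and_n, n)   # True
```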

Journal ArticleDOI
TL;DR: An extension of the assertion language with inductive definitions is introduced, and the expressiveness theorem is shown, which states that the weakest precondition of every program and every assertion can be expressed by some assertion.
Abstract: Reynolds' separation logic system for pointer program verification is investigated. This paper proves its completeness theorem, which states that every true asserted program is provable in the logical system. In order to prove completeness, this paper shows the expressiveness theorem, which states that the weakest precondition of every program and every assertion can be expressed by some assertion. This paper also introduces an extension of the assertion language with inductive definitions and proves the soundness theorem, the expressiveness theorem, and the completeness theorem.

Journal ArticleDOI
TL;DR: This leads to a barrier for the hardness result: the non-uniqueness of the infinite Gibbs measure is not realizable by any finite gadgets.
Abstract: We study the problem of approximately counting matchings in hypergraphs of bounded maximum degree and maximum size of hyperedges. With an activity parameter λ, each matching M is assigned a weight λ^|M|. The counting problem is formulated as computing a partition function that gives the sum of the weights of all matchings in a hypergraph. This problem unifies two extensively studied statistical physics models in approximate counting: the hardcore model (graph independent sets) and the monomer–dimer model (graph matchings). For this problem, the critical activity λ_c = d^d / (k (d − 1)^{d+1}) is the threshold for the uniqueness of Gibbs measures on the infinite (d + 1)-uniform (k + 1)-regular hypertree. Consider hypergraphs of maximum degree at most k + 1 and maximum size of hyperedges at most d + 1. We show that when λ < λ_c, there is an FPTAS for computing the partition function; and when λ = λ_c, there is a PTAS for computing the log-partition function. These algorithms are based on the decay of correlation (strong spatial mixing) property of Gibbs distributions. When λ > 2 λ_c, there is no PRAS for the partition function or the log-partition function unless NP = RP. Towards obtaining a sharp transition of the computational complexity of approximate counting, we study the local convergence from a sequence of finite hypergraphs to the infinite lattice with specified symmetry. We show a surprising connection between the local convergence and the reversibility of a natural random walk. This leads us to a barrier for the hardness result: the non-uniqueness of the infinite Gibbs measure is not realizable by any finite gadgets.
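On tiny instances the partition function can be cross-checked by brute force, summing λ^|M| over all sets of pairwise disjoint hyperedges; the sketch below is exponential-time and for illustration of the definition only.

```python
# Brute-force partition function for hypergraph matchings: the sum of
# lambda^|M| over all sets M of pairwise disjoint hyperedges. Exponential
# time; for cross-checking the definition on tiny instances only.

def partition_function(hyperedges, lam):
    edges = [frozenset(e) for e in hyperedges]

    def extend(i, used):
        if i == len(edges):
            return 1.0
        total = extend(i + 1, used)              # skip edge i
        if not (edges[i] & used):                # take edge i if disjoint
            total += lam * extend(i + 1, used | edges[i])
        return total

    return extend(0, frozenset())

# A tiny 3-uniform hypergraph: two overlapping edges plus one disjoint edge.
# Matchings: {}, three singletons, two pairs -> Z = 1 + 3*2 + 2*4 = 15.
Z = partition_function([{1, 2, 3}, {3, 4, 5}, {6, 7, 8}], lam=2.0)
```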

Journal ArticleDOI
TL;DR: This paper presents the first fully polynomial stabilizing algorithm constructing a BFS tree under a distributed daemon; as far as the authors know, it is also the first fully polynomial stabilizing algorithm for spanning tree construction.
Abstract: The construction of a spanning tree is a fundamental task in distributed systems which allows to resolve other tasks (i.e., routing, mutual exclusion, network reset). In this paper, we are interested in the problem of constructing a Breadth First Search (BFS) tree. Stabilization is a versatile technique which ensures that the system recovers a correct behavior from an arbitrary global state resulting from transient faults. A fully polynomial algorithm has a round complexity in O(d^a) and a step complexity in O(n^b), where d and n are the diameter and the number of nodes of the network and a and b are constants. We present the first fully polynomial stabilizing algorithm constructing a BFS tree under a distributed daemon. Moreover, as far as we know, it is also the first fully polynomial stabilizing algorithm for spanning tree construction. Its round complexity is in Θ(d^2) and its step complexity is in O(n^6).

Journal ArticleDOI
TL;DR: The problem of deciding whether a probabilistic pushdown automaton (pPDA) is simulated by a finite Markov decision process (MDP) is EXPTIME-complete; moreover, the decision problem is in PTIME when both the number of states of the pPDA and the number of states of the MDP are fixed.
Abstract: Checking whether a pushdown automaton is simulated – in the sense of a simulation pre-order – by a finite-state automaton is EXPTIME-complete. This paper shows that the same computational complexity is obtained in a probabilistic setting. That is, the problem of deciding whether a probabilistic pushdown automaton (pPDA) is simulated by a finite Markov decision process (MDP) is EXPTIME-complete. The considered pPDA contain both probabilistic and non-deterministic branching. The EXPTIME-membership is shown by combining a partition-refinement algorithm with a tableaux system that is inspired by Stirling's technique for bisimilarity checking of ordinary pushdown automata. The hardness is obtained by exploiting the EXPTIME-hardness for the non-probabilistic case. Moreover, our decision problem is in PTIME when both the number of states of the pPDA and the number of states in the MDP are fixed.

Journal ArticleDOI
TL;DR: The first document retrieval structures based on Lempel–Ziv compression, precisely LZ78, are presented; they use 7–10 bpc, dominate a large part of the space/time tradeoffs, and enable more efficient partial or approximate answers.
Abstract: Document retrieval structures index a collection of string documents, to retrieve those that are relevant to query strings p: document listing retrieves all documents where p appears; top-k retrieval retrieves the k most relevant of those. Classical structures use too much space in practice. Most current research uses compressed suffix arrays, but fast indices still use 17–21 bpc (bits per character), whereas small ones take milliseconds per returned answer. We present the first document retrieval structures based on Lempel–Ziv compression, precisely LZ78. Our structures use 7–10 bpc and dominate a large part of the space/time tradeoffs. They also enable more efficient partial or approximate answers: our document listing outputs the first 75%–80% of the answers at a rate of one per microsecond; for top-k retrieval we return a result of 90% quality at the same rate and using just 4–6 bpc. This outperforms current indices by a wide margin.
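The compression scheme underlying these structures is LZ78 parsing, which factors the text into phrases, each one a previously seen phrase extended by a single character; the number of phrases governs the size of LZ78-based indexes. A minimal sketch of the parsing (not the paper's index):

```python
def lz78_parse(text):
    """LZ78 parsing: factor `text` into phrases, where each phrase is a
    previously seen phrase extended by one character.  The phrase
    dictionary is kept as a nested-dict trie.  Illustrative sketch of
    the compression scheme only, not the retrieval structure itself.
    """
    trie = {}
    phrases = []
    node, start = trie, 0
    for i, c in enumerate(text):
        if c in node:                    # extend the current phrase
            node = node[c]
        else:                            # new phrase: old phrase + c
            node[c] = {}
            phrases.append(text[start:i + 1])
            node, start = trie, i + 1
    if start < len(text):                # trailing phrase already in the trie
        phrases.append(text[start:])
    return phrases
```

For example, "abab" parses into the phrases "a", "b", "ab": the third phrase reuses the first phrase "a" and extends it with "b".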

Journal ArticleDOI
TL;DR: A new version of CST is proved which combines both features of being non-erasing and of using a grammar-independent alphabet, and the degree of the polynomial dependence of |Ω| on |Σ| may be reduced to just 2 in the case of linear grammars in Double Greibach Normal Form.
Abstract: The famous theorem by Chomsky and Schützenberger (CST) says that every context-free language L over an alphabet Σ is representable as h(D ∩ R), where D is a Dyck language over a set Ω of brackets, R is a local language and h is an alphabetic homomorphism that erases unboundedly many symbols. Berstel found that the number of erasures can be linearly limited if the grammar is in Greibach normal form; Berstel and Boasson (and later, independently, Okhotin) proved a non-erasing variant of CST for grammars in Double Greibach Normal Form. In all these CST statements, however, the size of the Dyck alphabet Ω depends on the grammar size for L. In the Stanley variant of the CST, |Ω| only depends on |Σ| and not on the grammar, but the homomorphism erases many more symbols than in the other versions of CST; also, the regular language R is strictly locally testable but not local. We prove a new version of CST which combines both features of being non-erasing and of using a grammar-independent alphabet. In our construction, |Ω| is polynomial in |Σ|, namely O(|Σ|^46), and the regular language R is strictly locally testable. Using a recent generalization of Medvedev's homomorphic characterization of regular languages, we prove that the degree of the polynomial dependence of |Ω| on |Σ| may be reduced to just 2 in the case of linear grammars in Double Greibach Normal Form.
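The shape of a CST representation L = h(D ∩ R) can be seen on a toy instance (chosen for illustration, not the construction from the paper): for L = { aⁿbⁿ : n ≥ 1 }, take one bracket pair, let D be the Dyck language over it, let R be the regular language `(⁺)⁺`, and let h be the non-erasing letter-to-letter homomorphism '(' → 'a', ')' → 'b'.

```python
import re
from itertools import product

def is_dyck(w):
    """Well-nested strings over a single bracket pair '(' and ')'."""
    depth = 0
    for c in w:
        depth += 1 if c == '(' else -1
        if depth < 0:
            return False
    return depth == 0

def h(w):
    """Non-erasing letter-to-letter homomorphism: '(' -> 'a', ')' -> 'b'."""
    return w.replace('(', 'a').replace(')', 'b')

def cst_language(max_len):
    """Enumerate h(D ∩ R) over bracket strings up to length max_len,
    with R = (+)+ given as a regular expression.  Toy instance of the
    CST representation, not the paper's construction."""
    R = re.compile(r'\(+\)+$')
    out = set()
    for n in range(1, max_len + 1):
        for w in map(''.join, product('()', repeat=n)):
            if R.match(w) and is_dyck(w):
                out.add(h(w))
    return out
```

Enumerating up to length 6 yields exactly the words ab, aabb, aaabbb, i.e., h(D ∩ R) agrees with { aⁿbⁿ } on those lengths.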