
Showing papers in "Information & Computation in 2020"


Journal ArticleDOI
TL;DR: An efficient solution to the SAT problem is provided by means of a family of recognizer cell-like P systems with evolutional symport/antiport rules and membrane creation which make use of communication rules involving a restricted number of objects.
Abstract: Cell-like P systems with symport/antiport rules are computing models inspired by the conservation law, in the sense that they compute by changing the places of objects with respect to the membranes, rather than by changing the objects themselves. In this work, a variant of these membrane systems, called cell-like P systems with evolutional symport/antiport rules, in which objects can evolve during the execution of such rules, is introduced. In addition, inspired by the autopoiesis process (the ability of a system to maintain itself), membrane creation rules are considered as an efficient mechanism to provide an exponential workspace in terms of membranes. The presumed efficiency of these computing models (the ability to solve computationally hard problems in polynomial time in a uniform way) is explored. Specifically, an efficient solution to the SAT problem is provided by means of a family of recognizer cell-like P systems with evolutional symport/antiport rules and membrane creation which make use of communication rules involving a restricted number of objects.

50 citations


Journal ArticleDOI
TL;DR: It is shown that existing implementations for computing rational functions for reachability probabilities or expected costs in parametric Markov chains can be improved by using fraction-free Gaussian elimination, a long-known technique for linear equation systems with parametric coefficients.
Abstract: Parametric Markov chains have been introduced as a model for families of stochastic systems that rely on the same graph structure, but differ in the concrete transition probabilities. The latter are specified by polynomial constraints over a finite set of parameters. Important tasks in the analysis of parametric Markov chains are (1) computing closed-form solutions for reachability probabilities and other quantitative measures and (2) finding symbolic representations of the set of parameter valuations for which a given temporal logical formula holds as well as (3) the decision variant of (2) that asks whether there exists a parameter valuation where a temporal logical formula holds. Our contribution to (1) is to show that existing implementations for computing rational functions for reachability probabilities or expected costs in parametric Markov chains can be improved by using fraction-free Gaussian elimination, a long-known technique for linear equation systems with parametric coefficients. Our contribution to (2) and (3) is a complexity-theoretic discussion of the model-checking problem for parametric Markov chains and probabilistic computation tree logic (PCTL) formulas. We present an exponential-time algorithm for (2) and a PSPACE upper bound for (3). Moreover, we identify fragments of PCTL and subclasses of parametric Markov chains where (1) and (3) are solvable in polynomial time and establish NP-hardness for other PCTL fragments.
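Fraction-free (Bareiss) Gaussian elimination, the long-known technique the paper applies, replaces each rational division of ordinary elimination with a division that is provably exact, so intermediate values never leave the base ring. A minimal sketch over the integers (the parametric setting works the same way with polynomial entries; the naive swap-based pivoting, which ignores the determinant's sign change, is an assumption of this sketch, not taken from the paper):

```python
def bareiss(M):
    """Fraction-free Gaussian elimination (one-step Bareiss).
    Returns an upper-triangular integer matrix; if no row swaps occur,
    the last pivot equals det(M).  The key recurrence divides by the
    previous pivot, and that division is always exact over the integers."""
    A = [row[:] for row in M]
    n = len(A)
    prev = 1
    for k in range(n - 1):
        if A[k][k] == 0:                      # naive pivot search
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]   # note: flips the determinant's sign
                    break
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # exact integer division: no fractions ever appear
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A
```

The same recurrence runs unchanged when the entries are polynomials over the parameters, which is what lets the paper avoid costly rational-function arithmetic while eliminating states.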

26 citations


Journal ArticleDOI
TL;DR: This work develops techniques for effectively encoding SAT (and, with some limitations, MaxSAT) into Ising problems compatible with sparse QA architectures, and provides the theoretical foundations for this mapping.
Abstract: Quantum annealers (QAs) are specialized quantum computers that minimize objective functions over discrete variables by physically exploiting quantum effects. Current QA platforms allow for the optimization of quadratic objectives defined over binary variables (qubits), also known as Ising problems. In the last decade, QA systems as implemented by D-Wave have scaled with Moore-like growth. Current architectures provide 2048 sparsely-connected qubits, and continued exponential growth is anticipated, together with increased connectivity. We explore the feasibility of such architectures for solving SAT and MaxSAT problems as QA systems scale. We develop techniques for effectively encoding SAT (and, with some limitations, MaxSAT) into Ising problems compatible with sparse QA architectures. We provide the theoretical foundations for this mapping, and present encoding techniques that combine offline Satisfiability and Optimization Modulo Theories with on-the-fly placement and routing. Preliminary empirical tests on a current generation 2048-qubit D-Wave system support the feasibility of the approach for certain SAT and MaxSAT problems.
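The core idea of casting SAT as a minimization problem can be shown without any annealing hardware: each clause contributes a penalty term that is 0 when the clause is satisfied and 1 otherwise, so satisfying assignments are exactly the zero-energy ground states. For clauses of at most two literals the penalty is already quadratic, hence a valid QUBO term (longer clauses need ancilla variables). A toy sketch with brute-force minimization standing in for the annealer; the paper's actual contribution, embedding such objectives into the sparse qubit graph, is not attempted here:

```python
from itertools import product

def penalty(clause, bits):
    # clause: list of (var_index, is_positive) literals.
    # The product is 1 exactly when every literal is falsified,
    # i.e. when the clause as a whole is falsified; otherwise it is 0.
    p = 1
    for var, pos in clause:
        p *= (1 - bits[var]) if pos else bits[var]
    return p

def ground_states(clauses, n):
    # Brute-force minimizer of the summed clause penalties: a toy
    # stand-in for the annealer minimizing the embedded Ising objective.
    energy = {bits: sum(penalty(c, bits) for c in clauses)
              for bits in product((0, 1), repeat=n)}
    m = min(energy.values())
    return m, [b for b, e in energy.items() if e == m]

# (x0 or x1) and (not x0 or x1): satisfiable, so the minimum energy is 0
clauses = [[(0, True), (1, True)], [(0, False), (1, True)]]
m, states = ground_states(clauses, 2)
```

Here `m == 0` certifies satisfiability, and the ground states are exactly the assignments with `x1 = 1`.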

22 citations


Journal ArticleDOI
TL;DR: An asymptotic version of the well-known Nivat's conjecture is obtained: it is proved that any two-dimensional, non-periodic configuration can satisfy the low pattern complexity assumption with respect to only finitely many distinct rectangular shapes D.
Abstract: We study multidimensional configurations (infinite words) and subshifts of low pattern complexity using tools of algebraic geometry. We express the configuration as a multivariate formal power series over the integers and investigate the setup when there is a non-trivial annihilating polynomial: a non-zero polynomial whose formal product with the power series is zero. Such an annihilator exists, for example, if the number of distinct patterns of some finite shape D in the configuration is at most the size |D| of the shape. This is our low pattern complexity assumption. We prove that the configuration must be a sum of periodic configurations over the integers, possibly with unbounded values. As a specific application of the method we obtain an asymptotic version of the well-known Nivat's conjecture: we prove that any two-dimensional, non-periodic configuration can satisfy the low pattern complexity assumption with respect to only finitely many distinct rectangular shapes D.

22 citations


Journal ArticleDOI
TL;DR: An algorithm is proposed that maintains the set of MAWs of a fixed-length window sliding over y online and applies this algorithm to the approximate pattern-matching problem under the Length Weighted Index distance, resulting in an online O(σ|y|)-time algorithm for finding approximate occurrences of a word x in y.
Abstract: An absent word of a word y is a word that does not occur in y. It is called minimal if all of its proper factors occur in y. Minimal absent words (MAWs) provide useful information about y and thus have several applications. In this paper, we propose an algorithm that maintains the set of MAWs of a fixed-length window sliding over y online. Our algorithm represents MAWs through nodes of the suffix tree. Specifically, the suffix tree of the sliding window is maintained using a modified version of Senft's algorithm (Senft, 2005), which itself generalizes Ukkonen's online algorithm (Ukkonen, 1995). We then apply this algorithm to the approximate pattern-matching problem under the Length Weighted Index distance (Chairungsee and Crochemore, 2012). This results in an online O(σ|y|)-time algorithm for finding approximate occurrences of a word x in y, |x| ≤ |y|, where σ is the alphabet size.
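For intuition, minimal absent words can be computed on tiny inputs directly from the definition; the paper's contribution is maintaining them online with a suffix tree, which this exponential brute-force sketch does not attempt:

```python
from itertools import product

def maws(y, alphabet, max_len=None):
    """Brute-force minimal absent words of y: words absent from y all of
    whose proper factors occur in y.  Only usable for tiny inputs."""
    max_len = max_len or len(y) + 1
    factors = {y[i:j] for i in range(len(y)) for j in range(i + 1, len(y) + 1)}
    out = set()
    for L in range(1, max_len + 1):
        for w in map(''.join, product(alphabet, repeat=L)):
            if w not in factors and all(
                    w[i:j] in factors
                    for i in range(len(w)) for j in range(i + 1, len(w) + 1)
                    if (i, j) != (0, len(w))):     # proper factors only
                out.add(w)
    return out
```

For example, `maws("abaab", "ab")` yields `{"bb", "aaa", "bab", "aaba"}`: each is absent from `abaab`, yet every proper factor of each occurs in it.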

22 citations


Journal ArticleDOI
TL;DR: It is shown that secure sketches defined using pseudoentropy instead of information-theoretic security are still subject to upper bounds from coding theory, and that this negative result can be avoided by constructing and analyzing a computational fuzzy extractor directly.
Abstract: Fuzzy extractors derive strong keys from noisy sources. Their security is usually defined information-theoretically, with gaps between known negative results, existential constructions, and polynomial-time constructions. We ask whether using computational security can close these gaps. We show the following: • Negative result: Noise tolerance in fuzzy extractors is usually achieved using an information reconciliation component called a secure sketch. We show that secure sketches defined using pseudoentropy (Håstad et al., SIAM J. Comput. 1999) instead of information-theoretic security are still subject to upper bounds from coding theory. • Positive result: We show that our negative result can be avoided by constructing and analyzing a computational fuzzy extractor directly. We modify the code-offset construction (Juels and Wattenberg, CCS 1999) to use random linear codes. Security is based on the Learning with Errors problem and holds when the noisy source is uniform or symbol-fixing (that is, each dimension is either uniform or fixed).
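The mechanics of the code-offset construction are easy to see with a toy error-correcting code. The sketch below substitutes a length-7 repetition code for the random linear codes of the paper's LWE-based construction, so it illustrates only the functionality (exact recovery of the noisy reading under small noise), not the security claim:

```python
import secrets

def rep_encode(bit, n=7):
    return [bit] * n                      # 2-codeword repetition code

def rep_decode(word):
    return int(sum(word) > len(word) // 2)  # majority vote: corrects <= 3 flips

def sketch(w):
    # Code-offset (Juels-Wattenberg): publish w XOR c for a random codeword c.
    c = rep_encode(secrets.randbelow(2), len(w))
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    # Shift by the sketch, decode to the nearest codeword, shift back:
    # w_noisy XOR s = c XOR e, so decoding removes the noise e.
    delta = [wi ^ si for wi, si in zip(w_noisy, s)]
    c = rep_encode(rep_decode(delta), len(s))
    return [si ^ ci for si, ci in zip(s, c)]  # original w, if noise is small
```

With up to three flipped bits in the fresh reading, `recover` returns the original `w` exactly; the key would then be extracted from `w`.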

20 citations


Journal ArticleDOI
TL;DR: Kolmogorov complexity methods are applied to a problem in fractal geometry, giving an improved lower bound on the (classical) Hausdorff dimension of generalized sets of Furstenberg type.
Abstract: We use Kolmogorov complexity methods to give a lower bound on the effective Hausdorff dimension of the point (x, ax + b), given real numbers a, b, and x. We apply our main theorem to a problem in fractal geometry, giving an improved lower bound on the (classical) Hausdorff dimension of generalized sets of Furstenberg type.

18 citations


Journal ArticleDOI
TL;DR: A weighted index with the same complexities as in the most efficient previously known index by Barton et al. (CPM 2016) is obtained, but the construction is significantly simpler.
Abstract: In a weighted sequence, for every position of the sequence and every letter of the alphabet, a probability of occurrence of this letter at this position is specified. Weighted sequences are commonly used to represent imprecise or uncertain data, for example in molecular biology, where they are known under the name of Position Weight Matrices. Given a probability threshold 1/z, we say that a string P of length m occurs in a weighted sequence X at position i if the product of probabilities of the letters of P at positions i, …, i + m − 1 in X is at least 1/z. In this article, we consider an indexing variant of the problem, in which we are to pre-process a weighted sequence to answer multiple pattern matching queries. We present an O(nz)-time construction of an O(nz)-sized index for a weighted sequence of length n that answers pattern matching queries in the optimal O(m + Occ) time, where Occ is the number of occurrences reported. The cornerstone of our data structure is a novel construction of a family of ⌊z⌋ strings that carries the information about all the strings that occur in the weighted sequence with a sufficient probability. We thus improve the most efficient previously known index by Amir et al. (Theor. Comput. Sci., 2008), with size and construction time O(nz² log z), preserving optimal query time. On the way we develop a new, more straightforward index for the so-called property matching problem. We provide an open-source implementation of our data structure and present experimental results using both synthetic and real data. Our construction also allows us to obtain a significant improvement over the complexities of the approximate variant of the weighted index presented by Biswas et al. at EDBT 2016, as well as an improvement in the space complexity of their general index. We also present applications of our index.
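The occurrence condition itself is a one-line product check. A direct scan implements the definition and is exactly what the O(nz)-size index replaces with O(m + Occ)-time queries; representing X as a list of letter-to-probability dicts is an assumption of this sketch:

```python
def occurrences(X, P, z):
    """All positions where string P occurs in weighted sequence X with
    probability >= 1/z.  X is a list of dicts: letter -> probability."""
    out = []
    for i in range(len(X) - len(P) + 1):
        p = 1.0
        for j, a in enumerate(P):
            p *= X[i + j].get(a, 0.0)   # probability of letter a at i + j
        if p >= 1.0 / z:
            out.append(i)
    return out
```

For instance, with `X = [{'a': 1.0}, {'a': 0.5, 'b': 0.5}, {'b': 1.0}]`, the pattern `"ab"` occurs at positions 0 and 1 for `z = 4`.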

18 citations


Journal ArticleDOI
TL;DR: The Deutsch-Jozsa algorithm can compute any symmetric partial Boolean function f with exact quantum 1-query complexity, and it is faster than any possible deterministic classical algorithm for the corresponding promise problem.
Abstract: The Deutsch-Jozsa algorithm is essentially faster than any possible deterministic classical algorithm for solving a promise problem that is in fact a symmetric partial Boolean function, named the Deutsch-Jozsa problem. The Deutsch-Jozsa problem can be equivalently described as a partial function DJ_n^0 : {0,1}^n → {0,1} defined by: DJ_n^0(x) = 1 for |x| = n/2, DJ_n^0(x) = 0 for |x| = 0 or n, and undefined for the remaining cases, where n is even and |x| is the Hamming weight of x. The Deutsch-Jozsa algorithm needs only one query to compute DJ_n^0, but a classical deterministic algorithm requires n/2 + 1 queries to compute it in the worst case. We present all symmetric partial Boolean functions with degree 1 and 2; we prove the exact quantum query complexity of all symmetric partial Boolean functions with degree 1 and 2; and we prove that the Deutsch-Jozsa algorithm can compute any symmetric partial Boolean function f with exact quantum 1-query complexity.
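The classical side of the query gap is easy to make concrete: under the promise, querying any fixed n/2 + 1 bits already determines DJ_n^0, while n/2 all-equal answers would still be consistent with both outputs. A small sketch of that decision procedure (not of the quantum algorithm):

```python
def dj(x):
    """DJ_n^0: defined when |x| in {0, n/2, n}; equals 1 iff |x| = n/2."""
    n, w = len(x), sum(x)
    assert w in (0, n // 2, n), "input outside the promise"
    return int(w == n // 2)

def classical_decide(x):
    """Decide DJ_n^0 with n/2 + 1 classical queries, the optimal
    deterministic count (the quantum algorithm needs a single query)."""
    n = len(x)
    probe = x[: n // 2 + 1]          # the n/2 + 1 queried bits
    if len(set(probe)) > 1:          # both values seen, so |x| must be n/2
        return 1
    # All probed bits equal: the remaining n/2 - 1 positions cannot move
    # the Hamming weight to n/2, so under the promise |x| is 0 or n.
    return 0
```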

16 citations


Journal ArticleDOI
TL;DR: In this article, a model-theoretic characterization of the learning type InfEx_≅, consisting of the structures whose isomorphism types can be learned in the limit, is given.
Abstract: We combine computable structure theory and algorithmic learning theory to study learning of families of algebraic structures. Our main result is a model-theoretic characterization of the learning type InfEx_≅, consisting of the structures whose isomorphism types can be learned in the limit. We show that a family of structures is InfEx_≅-learnable if and only if the structures can be distinguished in terms of their Σ_2^inf-theories. We apply this characterization to familiar cases and show the following: there is an infinite learnable family of distributive lattices; no pair of Boolean algebras is learnable; and no infinite family of linear orders is learnable.

11 citations


Journal ArticleDOI
TL;DR: This paper theoretically enhances and algorithmically tunes a presynthesis strategy, already known for choice-free nets, for fork-attribution nets, which improves the quality of the implementation and allows it to give a meaningful response.
Abstract: A Petri net is called fork-attribution if it is choice-free (a unique output transition for each place) and join-free (a unique input place for each transition). Synthesis tries, for a given (finite) labelled transition system (LTS), to find an (unlabelled) Petri net with an isomorphic reachability graph. Synthesis often requires a large set of inequality systems to be solved, making it quite costly. In presynthesis we exploit common necessary properties of some class of Petri nets. If any of the properties fails to hold, we can directly dismiss the LTS: it cannot be the reachability graph of a Petri net from our class. This also allows an implementation to give a meaningful response instead of just telling the user that some inequality system is unsolvable. If all properties hold, we may gain additional information that can simplify the inequality systems to be solved. In this paper, we enhance theoretically, and tune algorithmically, a presynthesis strategy already known for choice-free nets, extending it to fork-attribution nets.

Journal ArticleDOI
TL;DR: This work considers several variants of the ecc problem, using classical quality measures (like the number of cliques) and new ones, and describes efficient heuristic algorithms, the fastest one taking O(m·d_G) time for a graph with m edges and degeneracy d_G (also known as the k-core number).
Abstract: The edge clique cover (ecc) problem deals with discovering a set of (possibly overlapping) cliques in a given graph that covers each of the graph's edges. This problem finds applications ranging from social networks to compiler optimization and stringology. We consider several variants of the ecc problem, using classical quality measures (like the number of cliques) and new ones. We describe efficient heuristic algorithms, the fastest one taking O(m·d_G) time for a graph with m edges and degeneracy d_G (also known as the k-core number). For large real-world networks with millions of nodes, like social networks, an algorithm should have (almost) linear running time to be practical. Our algorithm for finding eccs of large networks has linear-time performance in practice because d_G is small, as our experiments show on real-world networks with thousands to several million nodes.
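A simple greedy heuristic in the spirit described above: repeatedly take an uncovered edge, grow it greedily into a larger clique, and mark that clique's edges covered. This is an illustrative sketch, not the paper's O(m·d_G) algorithm:

```python
def greedy_ecc(edges):
    """Greedy edge clique cover: every edge of the input graph ends up
    inside at least one returned clique."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    uncovered = {frozenset(e) for e in edges}
    cliques = []
    while uncovered:
        u, v = tuple(next(iter(uncovered)))
        clique = {u, v}
        # extend by common neighbors that are adjacent to the whole clique
        for w in adj[u] & adj[v]:
            if all(w in adj[x] for x in clique):
                clique.add(w)
        cliques.append(clique)
        for a in clique:                     # mark the clique's edges covered
            for b in clique:
                if a != b:
                    uncovered.discard(frozenset((a, b)))
    return cliques
```

On a triangle with a pendant edge, the heuristic returns two cliques: the triangle and the pendant edge.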

Journal ArticleDOI
TL;DR: It is proved that if the given endofunctor preserves monomorphisms then the LFF always exists and is a subcoalgebra of the final coalgebra (unlike the rational fixpoint previously studied by Adámek, Milius, and Velebil).
Abstract: This paper contributes to a generic theory of behaviour of “finite-state” systems. Systems are coalgebras with a finitely generated carrier for an endofunctor on a locally finitely presentable category. Their behaviour gives rise to the locally finite fixpoint (LFF), a new fixpoint of the endofunctor. The LFF exists provided that the endofunctor is finitary and preserves monomorphisms, is a subcoalgebra of the final coalgebra, i.e. it is fully abstract w.r.t. behavioural equivalence, and it is characterized by two universal properties: as the final locally finitely generated coalgebra, and as the initial fg-iterative algebra. Instances of the LFF are: regular languages, rational streams, rational formal power-series, regular trees etc. Moreover, we obtain e.g. (realtime deterministic resp. non-deterministic) context-free languages, constructively S-algebraic formal power-series (in general, the behaviour of finite coalgebras under the coalgebraic language semantics arising from the generalized powerset construction by Silva, Bonchi, Bonsangue, and Rutten), and the monad of Courcelle's algebraic trees.

Journal ArticleDOI
TL;DR: A toolbox of algorithms and techniques for weighted automata is developed, on top of which the complexity bounds of the decidable problems are established; alternative, direct proofs of the undecidability results are also given.
Abstract: Weighted automata map input words to values, and have numerous applications in computer science. A result by Krob from the 90s implies that the universality problem is decidable for weighted automata over the tropical semiring with weights in N ∪ { ∞ } and is undecidable when the weights are in Z ∪ { ∞ } . We continue the study of the borders of decidability in weighted automata over the tropical semiring. We give a complete picture of the decidability and complexity of various decision problems for them, including non-emptiness, universality, equality, and containment. For the undecidability results, we provide direct proofs, which stay in the terrain of state machines. This enables us to tighten the results and apply them to a very simple class of automata. In addition, we provide a toolbox of algorithms and techniques for weighted automata, on top of which we establish the complexity bounds.
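A weighted automaton over the tropical semiring assigns each word the minimum, over all accepting runs, of the sum of transition weights: ordinary (+, ×) matrix arithmetic replaced by (min, +). A small evaluator; the dict-based transition encoding is an assumption of this sketch:

```python
import math

def tropical_value(word, init, trans, final):
    """Value of `word` in a weighted automaton over the tropical (min,+)
    semiring.  init/final: state -> weight; trans[(q, a)]: list of
    (q', weight) pairs.  Returns inf when no accepting run exists."""
    cur = dict(init)                              # state -> best weight so far
    for a in word:
        nxt = {}
        for q, w in cur.items():
            for q2, w2 in trans.get((q, a), []):
                nxt[q2] = min(nxt.get(q2, math.inf), w + w2)
        cur = nxt
    return min((w + final[q] for q, w in cur.items() if q in final),
               default=math.inf)
```

Nondeterminism matters: with two runs of costs 4 and 2 on the same word, the automaton's value is 2, and questions such as universality compare these values against a threshold for all words.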

Journal ArticleDOI
TL;DR: A number of heuristics and programming techniques to speed up the solution of random linear systems by orders of magnitude are described, making the overall construction competitive with the standard and widely used MWHC technique, which is based on hypergraph peeling.
Abstract: Recent advances in the analysis of random linear systems on finite fields have paved the way for the construction of constant-time data structures representing static functions and minimal perfect hash functions using less space than existing techniques. The main obstacle to any practical application of these results is the time required to solve such linear systems: although they can be made very small, the computation is still too slow to be feasible. In this paper, we describe in detail a number of heuristics and programming techniques to speed up the solution of these systems by orders of magnitude, making the overall construction competitive with the standard and widely used MWHC technique, which is based on hypergraph peeling. In particular, we introduce broadword programming techniques for fast equation manipulation and a lazy Gaussian elimination algorithm. We also describe a number of technical improvements to the data structure which further reduce space usage and improve lookup speed. Our implementation of these techniques yields a minimal perfect hash function data structure occupying 2.24 bits per element, compared to 2.68 for MWHC-based ones, and a static function data structure which reduces the multiplicative overhead from 1.23 to 1.024. For functions whose output has low entropy, we are able to implement feasibly, for the first time, the Hreinsson–Krøyer–Pagh approach, which makes it possible, for example, to store a function with an output of 10^6 values distributed following a power law of exponent 2 in just 2.76 bits per key instead of 20.
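The broadword idea is that a GF(2) equation fits in machine words, so adding one equation to another is a single XOR. A compact solver using Python integers as bit vectors (a hypothetical `solve_gf2`, illustrating only the word-level equation manipulation; the paper's lazy Gaussian elimination additionally delays pivot choices, which is omitted here):

```python
def solve_gf2(rows, b):
    """Solve A x = b over GF(2).  rows[i] packs the coefficients of
    equation i (variable j <-> bit j); b[i] is the right-hand side.
    Returns a packed solution int, or None if the system is inconsistent."""
    n = len(rows)
    aug = [(rows[i] << 1) | b[i] for i in range(n)]   # b becomes bit 0
    nvars = max(r.bit_length() for r in rows)
    piv = []
    for col in range(nvars, 0, -1):                   # bit position in aug
        p = next((i for i in range(len(piv), n) if aug[i] >> col & 1), None)
        if p is None:
            continue                                  # free column
        aug[len(piv)], aug[p] = aug[p], aug[len(piv)]
        for i in range(n):
            if i != len(piv) and aug[i] >> col & 1:
                aug[i] ^= aug[len(piv)]               # whole-row XOR: "broadword"
        piv.append(col)
    if any(r == 1 for r in aug):                      # a 0 = 1 row remains
        return None
    x = 0
    for i, col in enumerate(piv):
        if aug[i] & 1:
            x |= 1 << (col - 1)                       # free variables default to 0
    return x
```

For the system x0 + x1 = 1, x1 = 1, the call `solve_gf2([0b11, 0b10], [1, 1])` returns `0b10`, i.e. x0 = 0, x1 = 1.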

Journal ArticleDOI
TL;DR: Comparative analysis of the protocol with other pairing-free ID-2-PAKA schemes suggests that the proposed scheme offers a fine trade-off between efficiency and security; the scheme is proven secure in the modified extended Canetti-Krawczyk model.
Abstract: A Two-Party Authenticated Key Agreement (2-PAKA) protocol facilitates two communicating entities to equally contribute to the establishment of a shared session key. IDentity-based 2-PAKA (ID-2-PAKA) protocols are widely researched, since they eliminate the need for explicit public-key verification using digital certificates. Over the years, ID-2-PAKA protocols with perfect forward secrecy and Key Generation Center forward secrecy were devised to circumvent the inherent key escrow in identity-based cryptosystems. Nevertheless, cryptanalysis of recent ID-2-PAKA schemes reveals that many of the protocols are insecure. We reconstruct the possible attacks against these schemes and propose a secure escrowless pairing-free ID-2-PAKA protocol. The proposed scheme is proven secure in the modified extended Canetti-Krawczyk model, which captures all the desirable security attributes of ID-2-PAKA protocols, including resilience to public-key replacement attacks. Comparative analysis of the protocol with other pairing-free ID-2-PAKA schemes suggests that the proposed scheme offers a fine trade-off between efficiency and security.

Journal ArticleDOI
TL;DR: It is proved that MC for (full) HS extended with regular expressions is decidable by an automaton-theoretic argument, and an asymptotically optimal bound is provided for the complexity of the two syntactically maximal fragments.
Abstract: In this paper, we investigate the model checking (MC) problem for Halpern and Shoham's modal logic of time intervals (HS) and its fragments, where the labeling of intervals is defined by regular expressions. The MC problem for HS has recently emerged as a viable alternative to the traditional (point-based) temporal logic MC. Most expressiveness and complexity results have been obtained by imposing suitable restrictions on interval labeling, namely, by either defining it in terms of interval endpoints, or by constraining a proposition letter to hold over an interval if and only if it holds over each component state (homogeneity assumption). In both cases, the expressiveness of HS gets noticeably limited, in particular when fragments of HS are considered. A possible way to increase the expressiveness of interval temporal logic MC was proposed by Lomuscio and Michaliszyn, who suggested using regular expressions to define interval labeling, i.e., the properties that hold true over intervals/computation stretches, based on their component points/system states. In this paper, we provide a systematic account of decidability and complexity issues for model checking HS and its fragments extended with regular expressions. We first prove that MC for (full) HS extended with regular expressions is decidable by an automaton-theoretic argument. Though the exact complexity of full HS MC remains an open issue, the complexity of all relevant proper fragments of HS is determined here. In particular, we provide an asymptotically optimal bound on the complexity of the two syntactically maximal fragments AĀBB̄Ē and AĀEB̄Ē, by showing that their MC problem is AEXP_pol-complete (AEXP_pol is the class of problems decided by exponential-time bounded alternating Turing machines making a polynomially bounded number of alternations).
Moreover, we show that a better result holds for AĀBB̄, AĀEĒ, and all their sub-fragments, whose MC problem turns out to be PSPACE-complete.

Journal ArticleDOI
TL;DR: A new reduction from vertex cover to the minimum fill-in problem is given, which might be of independent interest: all previous reductions for similar problems start from some kind of graph layout problem, and hence have limited use in understanding their fine-grained complexity.
Abstract: Given an n × n sparse symmetric matrix with m nonzero entries, performing Gaussian elimination may turn some zeroes into nonzero values, so-called fill-ins. The minimum fill-in problem asks whether it is possible to perform the elimination with at most k fill-ins. We exclude the existence of polynomial-time approximation schemes for this problem, assuming P ≠ NP, and the existence of 2^{O(n^{1−δ})}-time approximation schemes for any positive δ, assuming the Exponential Time Hypothesis. We also give a 2^{O(k^{1/2−δ})}·n^{O(1)} parameterized lower bound. All these results come as corollaries of a new reduction from vertex cover to the minimum fill-in problem, which might be of independent interest: all previous reductions for similar problems start from some kind of graph layout problem, and hence have limited use in understanding their fine-grained complexity.
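The fill-in of an elimination order has a purely graph-theoretic description: eliminating a vertex turns its remaining neighborhood into a clique, and every edge added this way is a fill-in. A direct simulation of that process (a sketch of the problem being bounded, not of the paper's reduction):

```python
def fill_in(n, edges, order):
    """Number of fill-in edges created by eliminating the vertices of the
    graph (0..n-1, undirected `edges`) in the given `order`."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    fills = 0
    for v in order:
        remaining.discard(v)
        nbrs = [u for u in adj[v] if u in remaining]
        for i in range(len(nbrs)):          # make the neighborhood a clique
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fills += 1              # a zero entry became nonzero
    return fills
```

A path eliminated end-to-end produces no fill-in, whereas a 4-cycle forces one fill-in edge under any order; the decision problem asks whether some order achieves at most k fill-ins.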

Journal ArticleDOI
TL;DR: In this paper, an axiomatization of the arrow update model logic (AAUML) is presented, which is decidable and equally expressive as the base multi-agent modal logic.
Abstract: In this contribution we present arbitrary arrow update model logic (AAUML). This is a dynamic epistemic logic, or update logic. In update logics, static/basic modalities are interpreted on a given relational model, whereas dynamic/update modalities induce transformations (updates) of relational models. In AAUML the update modalities formalize the execution of arrow update models, and there is also a modality for quantification over arrow update models. Arrow update models are an alternative to the well-known action models. We provide an axiomatization of AAUML. The axiomatization is a rewrite system that allows arrow update modalities to be eliminated from any given formula while preserving truth. Thus, AAUML is decidable and equally expressive as the base multi-agent modal logic. Our main result is to establish arrow update synthesis: if there is an arrow update model after which φ holds, we can construct (synthesize) that model from φ. We also point out some notable differences in update expressivity between arrow update logics, action model logics, and refinement modal logic.

Journal ArticleDOI
TL;DR: A new streaming algorithm for the k-Mismatch problem, one of the most basic problems in pattern matching, is presented, and a series of streaming algorithms for pattern matching on weighted strings, which are a commonly used representation of uncertain sequences in molecular biology, is developed.
Abstract: We present a new streaming algorithm for the k-Mismatch problem, one of the most basic problems in pattern matching. Given a pattern and a text, the task is to find all substrings of the text that are at Hamming distance at most k from the pattern. Our algorithm is enhanced with an important new feature called Error Correcting, and its complexities for k = 1 and for a general k are comparable to those of the solutions for the k-Mismatch problem by Porat and Porat (FOCS 2009) and Clifford et al. (SODA 2016). In parallel to our research, a yet more efficient algorithm for the k-Mismatch problem with the Error Correcting feature was developed by Clifford et al. (SODA 2019). Using the new feature and recent work on streaming Multiple Pattern Matching, we develop a series of streaming algorithms for pattern matching on weighted strings, which are a commonly used representation of uncertain sequences in molecular biology. We also show that these algorithms are space-optimal up to polylog factors. A preliminary version of this work was published at the DCC 2017 conference [24].
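The offline problem being streamed is Hamming-distance pattern matching; a quadratic reference implementation states it precisely (the paper's point is doing this in one pass with sublinear memory, which this scan does not attempt):

```python
def k_mismatch(text, pattern, k):
    """All positions i with Hamming(text[i:i+m], pattern) <= k."""
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if sum(a != b for a, b in zip(text[i:i + m], pattern)) <= k]
```

For example, `k_mismatch("abcabd", "abd", 1)` returns `[0, 3]`: position 0 matches with one mismatch and position 3 matches exactly.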

Journal ArticleDOI
TL;DR: Rewriting procedures are provided to reduce the satisfiability problem to the discrete-time case (to leverage mature state-of-the-art verification techniques) and to remove the extra functional symbols; the techniques are implemented in the nuXmv model checker, enabling SMT-based analysis of LTL-EF and MTL_{0,∞}.
Abstract: In this paper, we propose to extend First-Order Linear-time Temporal Logic with Past by adding two operators, "at next" and "at last", which take as input a term and a formula and return the value of the term at the next state in the future, or the last state in the past, in which the formula holds. The new logic, named LTL-EF, can be interpreted with different models of time (including discrete, dense, and super-dense time) and with different first-order theories (à la Satisfiability Modulo Theories (SMT)). We show that "at next" and "at last" can encode (first-order) MTL_{0,∞} with counting. We provide rewriting procedures to reduce the satisfiability problem to the discrete-time case (to leverage the mature state-of-the-art verification techniques for that setting) and to remove the extra functional symbols. We implemented these techniques in the nuXmv model checker, enabling the analysis of LTL-EF and MTL_{0,∞} based on SMT-based model checking. We show the feasibility of the approach by experimenting with several non-trivial valid and satisfiable formulas.

Journal ArticleDOI
TL;DR: It is proved that the rooted versions of these equivalences are congruences for the operators of CFM, then some algebraic properties are shown, and the process algebra CFM is expressive enough to represent all and only the finite-state machines, up to net isomorphism.
Abstract: Finite-state machines, a simple class of finite Petri nets, were equipped in [16] with an efficiently decidable, truly-concurrent, bisimulation-based, behavioral equivalence, called team equivalence, which conservatively extends classic bisimulation equivalence on labeled transition systems and which is checked in a distributed manner. This paper addresses the problem of defining variants of this equivalence which are insensitive to silent moves. We define (rooted) weak team equivalence and (rooted) branching team equivalence as natural transposition to finite-state machines of Milner's weak bisimilarity [25] and van Glabbeek and Weijland's branching bisimilarity [12] on labeled transition systems. The process algebra CFM [15] is expressive enough to represent all and only the finite-state machines, up to net isomorphism. Here we first prove that the rooted versions of these equivalences are congruences for the operators of CFM, then we show some algebraic properties, and, finally, we provide finite, sound and complete, axiomatizations for them.

Journal ArticleDOI
TL;DR: This work presents a very general framework that allows delay to be removed: finite-state strategies exist for all winning conditions for which the resulting delay-free game admits a finite-state strategy.
Abstract: What is a finite-state strategy in a delay game? We answer this surprisingly non-trivial question by presenting a very general framework that allows delay to be removed: finite-state strategies exist for all winning conditions for which the resulting delay-free game admits a finite-state strategy. The framework is applicable to games whose winning condition is recognized by an automaton with an acceptance condition that satisfies a certain aggregation property. Our framework also yields upper bounds on the complexity of determining the winner of such delay games and upper bounds on the necessary lookahead to win the game. In particular, we cover all previous results of that kind as special cases of our uniform approach.

Journal ArticleDOI
TL;DR: The efficiency of this approach is demonstrated by enumerating and describing four special cases: totalistic, outer-totalistic, binary, and three-state cellular automata; for reversible three-state cellular automata, it is shown that only trivial ones exist.
Abstract: We present a simple approach for finding all number-conserving two-dimensional cellular automata with the von Neumann neighborhood. We demonstrate the efficiency of this approach by enumerating and describing four special cases: totalistic, outer-totalistic, binary, and three-state cellular automata. The last of these results has not been published before. We then proceed to find all reversible two-dimensional number-conserving cellular automata with three states and the von Neumann neighborhood, and show that only trivial ones exist.
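For a concrete reading of "number-conserving", the brute-force sketch below checks, for a toy binary rule with the von Neumann neighborhood, that the sum of all states is preserved on every small toroidal configuration. The two sample rules are ours, chosen for illustration; they are not among the automata enumerated in the paper, and an exhaustive check on a 3×3 torus is only a sanity test, not the paper's algebraic method:

```python
from itertools import product

def step(grid, rule):
    """One synchronous update with the von Neumann neighborhood on an n-by-n torus."""
    n = len(grid)
    return [[rule(grid[i][j],
                  grid[(i - 1) % n][j],    # north
                  grid[(i + 1) % n][j],    # south
                  grid[i][(j - 1) % n],    # west
                  grid[i][(j + 1) % n])    # east
             for j in range(n)] for i in range(n)]

def is_number_conserving(rule, states, n):
    """Check that the total state sum is preserved on every n-by-n toroidal configuration."""
    for cells in product(states, repeat=n * n):
        grid = [list(cells[k * n:(k + 1) * n]) for k in range(n)]
        if sum(map(sum, step(grid, rule))) != sum(cells):
            return False
    return True

# Two toy binary rules: copying the western neighbor just shifts the whole
# configuration (trivially number-conserving), while the OR rule creates new 1s.
shift_west = lambda c, no, so, we, ea: we
or_rule    = lambda c, no, so, we, ea: c | no | so | we | ea

print(is_number_conserving(shift_west, (0, 1), 3))  # True
print(is_number_conserving(or_rule, (0, 1), 3))     # False
```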

Journal ArticleDOI
TL;DR: This paper provides a robustly exponential worst case for the McNaughton-Zielonka divide et impera algorithm, showing that no possible intertwining of the above-mentioned techniques can help mitigate the exponential nature of the divide et impera approaches.
Abstract: The McNaughton-Zielonka divide et impera algorithm is the simplest and most flexible approach available in the literature for determining the winner in a parity game. Despite its theoretical exponential worst-case complexity and its negative reputation as a poorly effective algorithm in practice, it has been shown to rank among the best techniques for solving such games. It also proved to be resistant to lower-bound attacks, even more so than the strategy-improvement approaches, until Friedmann finally provided a family of games on which the algorithm requires exponential time. An easy analysis of this family shows that a simple memoization technique can help the algorithm solve the family in polynomial time. The same result can also be achieved by exploiting an approach based on the dominion-decomposition techniques proposed in the literature. These observations raise the question of whether a suitable combination of dynamic programming and game-decomposition techniques can improve on the exponential worst case of the original algorithm. In this paper we answer this question negatively, by providing a robustly exponential worst case, showing that no possible intertwining of the above-mentioned techniques can help mitigate the exponential nature of the divide et impera approaches. The resulting worst case is even more robust than that, since it also serves as a lower bound for progress-measure-based algorithms, such as Small Progress Measures and its quasi-polynomial variant recently proposed by Jurdzinski and Lazic.
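For reference, the divide et impera recursion under discussion is short enough to state in full. The following Python sketch implements the classical McNaughton-Zielonka recursion with attractor computations, deliberately omitting the memoization and dominion-decomposition refinements mentioned above; the toy game at the end is our own example, not one from Friedmann's family:

```python
def attractor(p, target, V, owner, succ):
    """Nodes in subgame V from which player p can force a visit to target."""
    attr = set(target) & V
    changed = True
    while changed:
        changed = False
        for v in V - attr:
            succs = [w for w in succ[v] if w in V]
            if succs and ((owner[v] == p and any(w in attr for w in succs)) or
                          (owner[v] != p and all(w in attr for w in succs))):
                attr.add(v)
                changed = True
    return attr

def zielonka(V, owner, succ, priority):
    """McNaughton-Zielonka recursion: winning regions (W0, W1) within V."""
    if not V:
        return set(), set()
    d = max(priority[v] for v in V)
    p = d % 2                                   # the player favored by priority d
    A = attractor(p, {v for v in V if priority[v] == d}, V, owner, succ)
    W = zielonka(V - A, owner, succ, priority)
    if not W[1 - p]:                            # opponent wins nothing outside A
        win = [set(), set()]
        win[p] = set(V)
        return win[0], win[1]
    B = attractor(1 - p, W[1 - p], V, owner, succ)
    W2 = zielonka(V - B, owner, succ, priority)
    win = [set(), set()]
    win[1 - p] = W2[1 - p] | B
    win[p] = W2[p]
    return win[0], win[1]

# Toy game: a <-> b is a cycle with even top priority (player 0 wins it);
# c is an odd-priority self-loop won by player 1.
owner    = {'a': 0, 'b': 1, 'c': 1}
priority = {'a': 2, 'b': 1, 'c': 1}
succ     = {'a': ['b'], 'b': ['a'], 'c': ['c']}
W0, W1 = zielonka(set(owner), owner, succ, priority)
print(W0, W1)  # W0 == {'a', 'b'}, W1 == {'c'}
```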

Journal ArticleDOI
TL;DR: An alternative Double Description representation for the domain of NNC (not necessarily closed) polyhedra is presented, together with the corresponding Chernikova-like conversion procedure, and it is shown how the canonicity of the new representation allows for the specification of proper, semantic widening operators.
Abstract: We present an alternative Double Description representation for the domain of NNC (not necessarily closed) polyhedra, together with the corresponding Chernikova-like conversion procedure. The representation uses no slack variables at all and resolves a few technical issues caused by the encoding of an NNC polyhedron as a closed polyhedron in a higher-dimensional space. We then reconstruct the abstract domain of NNC polyhedra, providing all the operators needed to interface it with commonly available static analysis tools: while doing this, we highlight the efficiency gains enabled by the new representation and show how its canonicity allows for the specification of proper, semantic widening operators. A thorough experimental evaluation shows that our new abstract domain achieves significant efficiency improvements with respect to classical implementations for NNC polyhedra.

Journal ArticleDOI
TL;DR: In this article, a weighted propositional configuration logic over commutative semirings is proposed to serve as a specification language for software architectures with quantitative features, which is used to describe well-known architectures equipped with quantitative characteristics using formulas in this logic.
Abstract: We introduce and investigate a weighted propositional configuration logic over commutative semirings. Our logic is intended to serve as a specification language for software architectures with quantitative features. We prove an efficient construction of full normal forms and decidability of equivalence of formulas in this logic. We illustrate the motivation of this work by describing well-known architectures equipped with quantitative characteristics using formulas in our logic.
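As a rough illustration of the semiring-weighted idea (not the paper's actual configuration-logic syntax or its architecture-specific operators), the sketch below evaluates weighted propositional formulas, built from constants, variables, sums, and products, over an arbitrary commutative semiring. The formula encoding and the min-plus "cost" reading are our own assumptions:

```python
# A commutative semiring packaged as (zero, one, plus, times).
BOOLEAN  = (False, True, lambda a, b: a or b, lambda a, b: a and b)
TROPICAL = (float('inf'), 0.0, min, lambda a, b: a + b)   # min-plus: costs

def evaluate(formula, env, semiring):
    """Evaluate a weighted propositional formula under a truth assignment.
    Formulas are nested tuples: ('const', k), ('var', name),
    ('plus', f1, ..., fn), ('times', f1, ..., fn)."""
    zero, one, plus, times = semiring
    op, *args = formula
    if op == 'const':
        return args[0]
    if op == 'var':
        return one if env[args[0]] else zero
    acc = zero if op == 'plus' else one
    combine = plus if op == 'plus' else times
    for sub in args:
        acc = combine(acc, evaluate(sub, env, semiring))
    return acc

# A made-up "cost of an architecture" formula: pay 2 if component p is
# present, or 5 if component q is present; 'plus' picks the cheaper option.
f = ('plus',
     ('times', ('const', 2.0), ('var', 'p')),
     ('times', ('const', 5.0), ('var', 'q')))
print(evaluate(f, {'p': True, 'q': True}, TROPICAL))   # 2.0 (cheapest option)
print(evaluate(f, {'p': False, 'q': True}, TROPICAL))  # 5.0
```

Swapping in `BOOLEAN` recovers ordinary propositional evaluation, which is the sense in which the weighted logic generalizes the unweighted one.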

Journal ArticleDOI
TL;DR: It is established that, over any nontrivial configuration space, there always exist CA that are not vN-regular, and it is shown that rules like 128 and 254 are vN-regular (and actually generalised inverses of each other), while others, like the well-known rules 90 and 110, are not vN-regular.
Abstract: Let G be a group and A a set. A cellular automaton (CA) τ over A^G is von Neumann regular (vN-regular) if there exists a CA σ over A^G such that τστ = τ; in such case, σ is called a weak generalised inverse of τ. In this paper, we investigate vN-regularity of various kinds of CA. First, we establish that, over any nontrivial configuration space, there always exist CA that are not vN-regular. Then, we obtain a partial classification of elementary vN-regular CA over {0,1}^Z; in particular, we show that rules like 128 and 254 are vN-regular (and actually generalised inverses of each other), while others, like the well-known rules 90 and 110, are not vN-regular. Next, when A and G are both finite, we obtain a full characterisation of vN-regular CA over A^G. Finally, we study vN-regular linear CA when A = V is a vector space over a field F; we show that every vN-regular linear CA is invertible when V = F and G is torsion-free elementary amenable (e.g. when G = Z^d, d ∈ N), and that every linear CA is vN-regular when V is finite-dimensional and G is locally finite with char(F) ∤ o(g) for all g ∈ G.
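The claim that rules 128 and 254 are weak generalised inverses of each other (τστ = τ and στσ = σ, where τ is rule 128, the local AND, and σ is rule 254, the local OR) is easy to sanity-check by brute force on finite cyclic configurations. The sketch below does exactly that; note this is only a finite check over rings of length 8, not a proof over the full configuration space {0,1}^Z:

```python
from itertools import product

def eca_step(rule, config):
    """One synchronous step of the elementary CA with the given Wolfram
    number on a cyclic binary configuration."""
    n = len(config)
    return tuple(
        (rule >> (4 * config[(i - 1) % n] + 2 * config[i] + config[(i + 1) % n])) & 1
        for i in range(n))

def compose(*rules):
    """Apply the given rules in sequence, left to right."""
    def run(config):
        for r in rules:
            config = eca_step(r, config)
        return config
    return run

# tau = rule 128 (output 1 only on neighborhood 111: local AND),
# sigma = rule 254 (output 1 on every neighborhood except 000: local OR).
tau_ok = all(compose(128, 254, 128)(c) == eca_step(128, c)
             for c in product((0, 1), repeat=8))
sigma_ok = all(compose(254, 128, 254)(c) == eca_step(254, c)
               for c in product((0, 1), repeat=8))
print(tau_ok and sigma_ok)  # True on all 256 cyclic configurations of length 8
```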

Journal ArticleDOI
Julian Shun
TL;DR: The rank and select structures on binary and multiary sequences, which are stored at wavelet tree nodes, can be constructed in parallel with improved work bounds, matching those of the best existing sequential algorithms for constructing rank and select structures.
Abstract: Existing parallel algorithms for wavelet tree construction have a work complexity of O(n log σ). This paper presents parallel algorithms for the problem with improved work complexity. Our first algorithm is based on parallel integer sorting and has either O(n log log n ⌈log σ / log n log log n⌉) work and polylogarithmic depth, or O(n ⌈log σ / log n⌉) work and sub-linear depth. We also describe another algorithm that has O(n ⌈log σ / log n⌉) work and O(σ + log n) depth. We then show how to use similar ideas to construct variants of wavelet trees (arbitrary-shaped binary trees and multiary trees) as well as wavelet matrices in parallel with lower work complexity than prior algorithms. Finally, we show that the rank and select structures on binary and multiary sequences, which are stored at wavelet tree nodes, can be constructed in parallel with improved work bounds, matching those of the best existing sequential algorithms for constructing rank and select structures.
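For readers unfamiliar with the data structure, the sketch below is a plain sequential wavelet tree supporting rank queries. It is the straightforward O(n log σ)-work baseline that the paper's parallel algorithms improve upon, not an implementation of those algorithms, and it stores explicit prefix-count arrays in place of the compressed bitvectors a real implementation would use:

```python
class WaveletTree:
    """Sequential wavelet tree over an integer alphabet, with rank queries."""
    def __init__(self, seq, lo=None, hi=None):
        if lo is None:
            lo, hi = min(seq), max(seq)
        self.lo, self.hi = lo, hi
        self.left = self.right = None
        if lo == hi or not seq:
            return                       # leaf: single symbol, or empty range
        mid = (lo + hi) // 2
        # prefix[i] = how many of the first i symbols are routed to the right child
        self.prefix = [0]
        for c in seq:
            self.prefix.append(self.prefix[-1] + (c > mid))
        self.left = WaveletTree([c for c in seq if c <= mid], lo, mid)
        self.right = WaveletTree([c for c in seq if c > mid], mid + 1, hi)

    def rank(self, c, i):
        """Number of occurrences of symbol c among the first i symbols."""
        if self.left is None:
            return i if c == self.lo else 0
        mid = (self.lo + self.hi) // 2
        ones = self.prefix[i]            # symbols routed right so far
        if c <= mid:
            return self.left.rank(c, i - ones)
        return self.right.rank(c, ones)

wt = WaveletTree([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5])
print(wt.rank(5, 11))  # 3: the symbol 5 occurs three times in the sequence
print(wt.rank(1, 4))   # 2: two 1s among the first four symbols
```

Each rank query walks one root-to-leaf path, so it costs O(log σ) per query, while construction touches every symbol on every level, which is exactly the O(n log σ) work the abstract refers to.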

Journal ArticleDOI
TL;DR: It is shown that the following two sets of pairs of one-dimensional one-sided cellular automata are recursively inseparable: pairs where the first cellular automaton has strictly higher entropy than the second one, and pairs that are strongly conjugate and both have zero topological entropies.
Abstract: Cellular automata are topological dynamical systems. We consider the problem of deciding whether two cellular automata are conjugate or not. We also consider deciding strong conjugacy, that is, conjugacy by a map that commutes with the shift maps. We show that the following two sets of pairs of one-dimensional one-sided cellular automata are recursively inseparable: (i) pairs where the first cellular automaton has strictly higher entropy than the second one, and (ii) pairs that are strongly conjugate and both have zero topological entropies. This implies that the following decision problems are undecidable: Given two one-dimensional one-sided cellular automata F and G: Are F and G conjugate? Is F a factor of G? Is F a subsystem of G? All of these are undecidable in both strong and weak variants (whether the homomorphism is required to commute with the shift or not, respectively). We also prove the same results for reversible two-dimensional cellular automata.