
Showing papers in "Information & Computation in 2015"


Journal ArticleDOI
TL;DR: The first solution of multi-mean-payoff games with infinite-memory strategies is presented, and it is shown that mean-payoff-sup objectives can be decided in NP ∩ coNP, whereas mean-payoff-inf objectives are coNP-complete.
Abstract: In mean-payoff games, the objective of the protagonist is to ensure that the limit average of an infinite sequence of numeric weights is nonnegative. In energy games, the objective is to ensure that the running sum of weights is always nonnegative. Multi-mean-payoff and multi-energy games replace individual weights by tuples, and the limit average (resp., running sum) of each coordinate must be (resp., remain) nonnegative. We prove finite-memory determinacy of multi-energy games and show inter-reducibility of multi-mean-payoff and multi-energy games for finite-memory strategies. We improve the computational complexity for solving both classes with finite-memory strategies: we prove coNP-completeness improving the previous known EXPSPACE bound. For memoryless strategies, we show that deciding the existence of a winning strategy for the protagonist is NP-complete. We present the first solution of multi-mean-payoff games with infinite-memory strategies: we show that mean-payoff-sup objectives can be decided in NP ∩ coNP, whereas mean-payoff-inf objectives are coNP-complete.
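A minimal sketch (not from the paper) of the two winning conditions on a finite play prefix, assuming integer weights; in the multi-dimensional variants the weights are tuples and each coordinate is checked separately:

```python
# Illustrative sketch (not from the paper): the two objectives evaluated on a
# finite prefix of a play, given as a list of integer weights.

def energy_ok(weights, initial_credit=0):
    """Energy condition: the running sum must never drop below zero."""
    total = initial_credit
    for w in weights:
        total += w
        if total < 0:
            return False
    return True

def mean_payoff(weights):
    """Average weight of the prefix; the mean-payoff objective asks that the
    limit average of the infinite play be nonnegative."""
    return sum(weights) / len(weights)

play = [2, -1, -1, 3, -2, -1]
print(energy_ok(play))     # True: the running sum stays >= 0
print(mean_payoff(play))   # 0.0: nonnegative average on this prefix
```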

105 citations


Journal ArticleDOI
TL;DR: This work provides the first strategy which performs exploration of a graph with n vertices at a distance of at most D from r in time O(D), using a team of agents of polynomial size k = Dn^{1+ε} < n^{2+ε}.
Abstract: We study the following scenario of online graph exploration. A team of k agents is initially located at a distinguished vertex r of an undirected graph. We ask how many time steps are required to complete exploration, i.e., to make sure that every vertex has been visited by some agent. As our main result, we provide the first strategy which performs exploration of a graph with n vertices at a distance of at most D from r in time O(D), using a team of agents of polynomial size k = Dn^{1+ε} < n^{2+ε}, for any ε > 0. Our strategy works in the local communication model, in which agents can only exchange information when located at a vertex, without knowledge of global parameters such as n or D. We also obtain almost-tight bounds on the asymptotic relation between exploration time and team size, for large k, in both the local and the global communication model.

68 citations


Journal ArticleDOI
TL;DR: This work introduces the concept of a Las Vegas computable multi-valued function, which is a function that can be computed on a probabilistic Turing machine that receives a random binary sequence as auxiliary input, and proves an Independent Choice Theorem that implies that Las Vegas computable functions are closed under composition.
Abstract: We study the computational power of randomized computations on infinite objects, such as real numbers. In particular, we introduce the concept of a Las Vegas computable multi-valued function, which is a function that can be computed on a probabilistic Turing machine that receives a random binary sequence as auxiliary input. The machine can take advantage of this random sequence, but it always has to produce a correct result or to stop the computation after finite time if the random advice is not successful. With positive probability the random advice has to be successful. We characterize the class of Las Vegas computable functions in the Weihrauch lattice with the help of probabilistic choice principles and Weak Weak König's Lemma. Among other things we prove an Independent Choice Theorem that implies that Las Vegas computable functions are closed under composition. In a case study we show that Nash equilibria are Las Vegas computable, while zeros of continuous functions with sign changes cannot be computed on Las Vegas machines. However, we show that the latter problem admits randomized algorithms with weaker failure recognition mechanisms. The last mentioned results can be interpreted such that the Intermediate Value Theorem is reducible to the jump of Weak Weak König's Lemma, but not to Weak Weak König's Lemma itself. These examples also demonstrate that Las Vegas computable functions form a proper superclass of the class of computable functions and a proper subclass of the class of non-deterministically computable functions. We also study the impact of specific lower bounds on the success probabilities, which leads to a strict hierarchy of classes. In particular, the classical technique of probability amplification fails for computations on infinite objects. We also investigate the dependency on the underlying probability space. Besides Cantor space, we study the natural numbers, the Euclidean space and Baire space.

66 citations


Journal ArticleDOI
TL;DR: It is shown that reachability in bounded one-counter automata is PSPACE-complete; by the log-space equivalence of Haase, Ouaknine, and Worrell, this also settles the complexity of reachability in two-clock timed automata.
Abstract: Recently, Haase, Ouaknine, and Worrell have shown that reachability in two-clock timed automata is log-space equivalent to reachability in bounded one-counter automata. We show that reachability in bounded one-counter automata is PSPACE-complete.
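For illustration only, a naive explicit search over configurations (state, counter value) of a bounded one-counter automaton; it enumerates exponentially many configurations in the bit-size of the bound, so it does not witness the PSPACE upper bound and is not the paper's construction. The automaton used below is hypothetical.

```python
# Illustration only (not the paper's construction): explicit breadth-first search
# over configurations (state, counter) of a bounded one-counter automaton; the
# counter must stay in [0, bound]. This enumerates up to |Q| * (bound + 1)
# configurations, i.e. exponentially many in the bit-size of the bound.
from collections import deque

def reachable(transitions, bound, start, target):
    """transitions: dict mapping a state to a list of (counter_delta, next_state)."""
    seen = {start}
    queue = deque([start])
    while queue:
        state, counter = queue.popleft()
        if (state, counter) == target:
            return True
        for delta, nxt in transitions.get(state, []):
            c = counter + delta
            if 0 <= c <= bound and (nxt, c) not in seen:
                seen.add((nxt, c))
                queue.append((nxt, c))
    return False

# Hypothetical two-state automaton: in state 'p' either add 3 and stay, or
# subtract 2 and move to 'q'; in state 'q' subtract 1 and return to 'p'.
trans = {'p': [(+3, 'p'), (-2, 'q')], 'q': [(-1, 'p')]}
print(reachable(trans, bound=10, start=('p', 0), target=('q', 4)))   # True
```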

61 citations


Journal ArticleDOI
TL;DR: This work introduces a new compression scheme for labeled trees based on top trees that is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast navigational queries directly on the compressed representation.
Abstract: We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast navigational queries directly on the compressed representation. We show that the new compression scheme achieves close to optimal worst-case compression, can compress exponentially better than DAG compression, is never much worse than DAG compression, and supports navigational queries in logarithmic time.
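For contrast, a sketch of the classical DAG compression baseline mentioned above, which shares only identical rooted subtrees (top-tree compression additionally captures repeated internal tree patterns and supports navigation on the compressed form):

```python
# Sketch of the classical baseline (DAG compression): identical rooted subtrees
# are stored once. Trees are (label, children) pairs.

def to_dag(tree, table):
    """Return a canonical id for `tree`, sharing identical rooted subtrees."""
    label, children = tree
    key = (label, tuple(to_dag(c, table) for c in children))
    if key not in table:
        table[key] = len(table)      # allocate a fresh DAG node id
    return table[key]

# The two identical subtrees b(c, c) are stored only once.
t = ("a", [("b", [("c", []), ("c", [])]),
           ("b", [("c", []), ("c", [])])])
table = {}
to_dag(t, table)
print(len(table))   # 3 distinct DAG nodes: c, b(c, c), and the root
```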

55 citations


Journal ArticleDOI
TL;DR: This paper gives new distributed algorithms to find (Δ/k)-colorings in graphs of girth 4 (triangle-free graphs), girth 5, and trees, and as a byproduct shows that the chromatic number of triangle-free graphs is at most (4 + o(1))Δ/ln Δ.
Abstract: Vertex coloring is a central concept in graph theory and an important symmetry-breaking primitive in distributed computing. Whereas degree-Δ graphs may require palettes of Δ + 1 colors in the worst case, it is well known that the chromatic number of many natural graph classes can be much smaller. In this paper we give new distributed algorithms to find (Δ/k)-coloring in graphs of girth 4 (triangle-free graphs), girth 5, and trees. The parameter k can be at most (1/4 − o(1)) ln Δ in triangle-free graphs and at most (1 − o(1)) ln Δ in girth-5 graphs and trees, where o(1) is a function of Δ. Specifically, for Δ sufficiently large we can find such a coloring in O(k + log* n) time. Moreover, for any Δ we can compute such colorings in roughly logarithmic time for triangle-free and girth-5 graphs, and in O(log Δ + log_Δ log n) time on trees. As a byproduct, our algorithm shows that the chromatic number of triangle-free graphs is at most (4 + o(1))Δ/ln Δ, which improves on Jamall's recent bound of (67 + o(1))Δ/ln Δ. Finally, we show that (Δ + 1)-coloring for triangle-free graphs can be obtained in sublogarithmic time for any Δ.

53 citations


Journal ArticleDOI
TL;DR: This work investigates the computational complexity of the game-theoretic decision problems associated with iterated Boolean games, as well as semantic conditions characterising classes of LTL properties that are preserved by equilibrium points (pure-strategy Nash equilibria) whenever they exist.
Abstract: Iterated games are well-known in the game theory literature. We study iterated Boolean games. These are games in which players repeatedly choose truth values for Boolean variables they have control over. Our model of iterated Boolean games assumes that players have goals given by formulae of Linear Temporal Logic (LTL), a formalism for expressing properties of state sequences. In order to represent the strategies of players in such games, we use a finite state machine model. After introducing and formally defining iterated Boolean games, we investigate the computational complexity of their associated game-theoretic decision problems, as well as semantic conditions characterising classes of LTL properties that are preserved by equilibrium points (pure-strategy Nash equilibria) whenever they exist.

53 citations


Journal ArticleDOI
TL;DR: The extension of the alternating-time temporal logic (ATL) with strategy contexts is studied: contrary to the original semantics, in this semantics the strategy quantifiers do not reset the previously selected strategies.
Abstract: We study the extension of the alternating-time temporal logic (ATL) with strategy contexts: contrary to the original semantics, in this semantics the strategy quantifiers do not reset the previously selected strategies. We show that our extension ATL_sc is very expressive, but that its decision problems are quite hard: model checking is k-EXPTIME-complete when the formula has k nested strategy quantifiers; satisfiability is undecidable, but we prove that it is decidable when restricting to turn-based games. Our algorithms are obtained through a very convenient translation to QCTL (the computation-tree logic CTL extended with atomic quantification), which we show also applies to Strategy Logic, as well as when strategy quantification ranges over memoryless strategies.

52 citations


Journal ArticleDOI
TL;DR: This paper proposes a session typing system for the higher-order π-calculus with asynchronous communication subtyping, which allows partial commutativity of actions in higher-order processes, and introduces an asynchronous subtyping system which uniformly deals with type-manifested asynchrony and linear functions.
Abstract: This paper proposes a session typing system for the higher-order π-calculus (the HOπ-calculus) with asynchronous communication subtyping, which allows partial commutativity of actions in higher-order processes. The system enables two complementary kinds of optimisation, mobile code and asynchronous permutation of session actions, within processes that utilise structured, typed communications. Our first contribution is a session typing system for the HOπ-calculus using techniques from the linear λ-calculus. Integration of arbitrary higher-order code mobility and sessions leads to technical difficulties in type soundness, because linear usage of session channels and completion of sessions are required. Our second contribution is to introduce an asynchronous subtyping system which uniformly deals with type-manifested asynchrony and linear functions. The most technical challenge for subtyping is to prove the transitivity of the subtyping relation. We also demonstrate the expressiveness of our typing system with an e-commerce example, where optimised processes can interact respecting the expected sessions.

48 citations


Journal ArticleDOI
TL;DR: An upper bound for the size of a depth three circuit computing f_n is obtained by improving Koiran's bound, and it is shown that the known 2^{Ω(√d)} lower bound for homogeneous depth four circuits also holds if the bottom fan-in is at least √d.
Abstract: Koiran showed that if an n-variate polynomial f_n of degree d (with d = n^{O(1)}) is computed by a circuit of size s, then it is also computed by a homogeneous circuit of depth four and of size 2^{O(√d log(n) log(s))}. Using this result, Gupta, Kamath, Kayal and Saptharishi found an upper bound for the size of a depth three circuit computing f_n. We improve here Koiran's bound. Indeed, we transform an arithmetic circuit into a depth four circuit of size 2^{O(√(d log(ds) log(n)))}. Then, mimicking the proof in [2], it also implies a 2^{O(√(d log(ds) log(n)))} upper bound for depth three circuits. This new bound is almost optimal since a 2^{Ω(√d)} lower bound is known for the size of homogeneous depth four circuits such that gates at the bottom have fan-in at most √d. Finally, we show that this last lower bound also holds if the fan-in is at least √d.

44 citations


Journal ArticleDOI
TL;DR: It is shown that, in contrast to multi-dimensional mean-payoff games, which are known to be coNP-complete, multi-dimensional total-payoff games are undecidable, and conservative approximations of these objectives are introduced.
Abstract: We consider two-player games played on weighted directed graphs with mean-payoff and total-payoff objectives, two classical quantitative objectives. While for single-dimensional games the complexity and memory bounds for both objectives coincide, we show that in contrast to multi-dimensional mean-payoff games that are known to be coNP-complete, multi-dimensional total-payoff games are undecidable. We introduce conservative approximations of these objectives, where the payoff is considered over a local finite window sliding along a play, instead of the whole play. For single dimension, we show that (i) if the window size is polynomial, deciding the winner takes polynomial time, and (ii) the existence of a bounded window can be decided in NP ∩ coNP, and is at least as hard as solving mean-payoff games. For multiple dimensions, we show that (i) the problem with fixed window size is EXPTIME-complete, and (ii) there is no primitive-recursive algorithm to decide the existence of a bounded window.
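A small sketch (definitions only, not the paper's algorithm) of the fixed-window condition on a finite path: from every position, some window of length at most lmax must have nonnegative total (equivalently, mean) payoff:

```python
# Sketch of the fixed-window condition (definitions only, not the paper's algorithm).

def window_ok_at(weights, pos, lmax):
    """Is there a window starting at pos, of length <= lmax, with total payoff >= 0?"""
    total = 0
    for length in range(1, lmax + 1):
        if pos + length > len(weights):
            break
        total += weights[pos + length - 1]
        if total >= 0:
            return True
    return False

def fixed_window_ok(weights, lmax):
    return all(window_ok_at(weights, i, lmax) for i in range(len(weights)))

path = [-1, 2, -1, 3, -2, 2]
print(fixed_window_ok(path, lmax=2))   # True: every position is closed within 2 steps
print(fixed_window_ok(path, lmax=1))   # False: the window [-1] at position 0 is negative
```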

Journal ArticleDOI
TL;DR: This work presents the first combinatorial polynomial time algorithm for computing the equilibrium of the Arrow-Debreu market model with linear utilities, and develops new methods to carefully deal with the flows and surpluses during price adjustments.
Abstract: We present the first combinatorial polynomial time algorithm for computing the equilibrium of the Arrow-Debreu market model with linear utilities. Our algorithm views the allocation of money as flows and iteratively improves the balanced flow as in [11] for Fisher's model. We develop new methods to carefully deal with the flows and surpluses during price adjustments. Our algorithm performs O(n^6 log(nU)) maximum flow computations, where n is the number of agents and U is the maximum integer utility. The flows have to be presented as numbers of bitlength O(n log(nU)) to guarantee an exact solution. Previously, [22,29] have given polynomial time algorithms for this problem, which are based on solving convex programs using the ellipsoid algorithm and the interior-point method, respectively.

Journal ArticleDOI
TL;DR: Some general properties of Shannon information measures are investigated over sets of probability distributions with restricted marginals and the notion of minimum entropy coupling is introduced and its relevance is demonstrated in information-theoretic, computational, and statistical contexts.
Abstract: In this paper, some general properties of Shannon information measures are investigated over sets of probability distributions with restricted marginals. Certain optimization problems associated with these functionals are shown to be NP-hard, and their special cases are found to be essentially information-theoretic restatements of well-known computational problems, such as Subset Sum and 3-Partition. The notion of minimum entropy coupling is introduced and its relevance is demonstrated in information-theoretic, computational, and statistical contexts. Finally, a family of pseudometrics (on the space of discrete probability distributions) defined by these couplings is studied, in particular their relation to the total variation distance, and a new characterization of the conditional entropy is given.
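A tiny numerical illustration (not the paper's construction) of what a minimum entropy coupling asks for: among all joint distributions with prescribed marginals p and q, find one of least Shannon entropy. Here the diagonal coupling beats the independent one and meets the generic lower bound max(H(p), H(q)):

```python
# Small numerical illustration (not the paper's construction).
from math import log2

def entropy(joint):
    return -sum(x * log2(x) for row in joint for x in row if x > 0)

p = [0.5, 0.5]                                        # marginal of the row variable
q = [0.5, 0.5]                                        # marginal of the column variable

independent = [[pi * qj for qj in q] for pi in p]     # product coupling
diagonal = [[0.5, 0.0], [0.0, 0.5]]                   # another coupling of p and q

print(entropy(independent))   # 2.0 bits
print(entropy(diagonal))      # 1.0 bit = max(H(p), H(q)), so it is a minimum entropy coupling
```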

Journal ArticleDOI
TL;DR: This work presents the first local-decoding algorithm for expander codes, and shows that if the inner code has a smooth reconstruction algorithm in the noiseless setting, then the corresponding expander code has an efficient local-correction algorithm in the noisy setting.
Abstract: In this work, we present the first local-decoding algorithm for expander codes. This yields a new family of constant-rate codes that can recover from a constant fraction of errors in the codeword symbols, and where any symbol of the codeword can be recovered with high probability by reading N^ε symbols from the corrupted codeword, where N is the block-length of the code. Expander codes, introduced by Sipser and Spielman, are formed from an expander graph G = (V, E) of degree d, and an inner code of block-length d over an alphabet Σ. Each edge of the expander graph is associated with a symbol in Σ. A string in Σ^E will be a codeword if for each vertex in V, the symbols on the adjacent edges form a codeword in the inner code. We show that if the inner code has a smooth reconstruction algorithm in the noiseless setting, then the corresponding expander code has an efficient local-correction algorithm in the noisy setting. Instantiating our construction with inner codes based on finite geometries, we obtain novel locally decodable codes with rate approaching one. This provides an alternative to the multiplicity codes of Kopparty, Saraf and Yekhanin (STOC '11) and the lifted codes of Guo, Kopparty and Sudan (ITCS '13).
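A sketch of the codeword condition just described (not the decoding algorithm): the string is indexed by the edges of a d-regular graph and must look like an inner-code codeword at every vertex. The graph below is K4, a stand-in that is not a good expander, and the inner code is the length-3 even-parity code:

```python
# Sketch of the codeword condition only (not the decoding algorithm). The graph
# is K4 (degree 3, merely a stand-in, not a good expander); the inner code is the
# length-3 even-parity code over the binary alphabet.
from itertools import combinations

vertices = range(4)
edges = list(combinations(vertices, 2))   # the 6 edges of K4

def inner_ok(bits):
    return sum(bits) % 2 == 0             # even-parity inner code

def is_codeword(word):                    # word: dict mapping each edge to a bit
    return all(inner_ok([word[e] for e in edges if v in e]) for v in vertices)

allzero = {e: 0 for e in edges}
print(is_codeword(allzero))               # True
flipped = {**allzero, (0, 1): 1}
print(is_codeword(flipped))               # False: vertices 0 and 1 now see odd parity
```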

Journal ArticleDOI
TL;DR: A model-checking algorithm for ATLK_irF is introduced by extending the algorithm for a full-observability variant of the logic, and its complexity is investigated.
Abstract: Alternating-time Temporal Logic is a logic to reason about strategies that agents can adopt to achieve a specified collective goal. A number of extensions for this logic exist; some of them combine strategies and partial observability, some others include fairness constraints, but to the best of our knowledge no work provides a unified framework for strategies, partial observability and fairness constraints. Integration of these three concepts is important when reasoning about the capabilities of agents without full knowledge of a system, for instance when the agents can assume that the environment behaves in a fair way. We present ATLK_irF, a logic combining strategies under partial observability in a system with fairness constraints on states. We introduce a model-checking algorithm for ATLK_irF by extending the algorithm for a full-observability variant of the logic and we investigate its complexity. We validate our proposal with an experimental evaluation.

Journal ArticleDOI
TL;DR: The main result is that such quantum Arthur-Merlin proof systems (QAM(2QCFA)) with polynomial expected running time are more powerful than models in which the verifiers are two-way probabilistic finite automata (AM(2PFA)) with polynomial expected running time, and that the NP-complete language L_knapsack can be recognized by a QAM(2QCFA) working only on quantum pure states using unitary operators.
Abstract: Interactive proof systems (IP) are very powerful: the languages they can accept form exactly PSPACE. They also represent one of the fundamental concepts of theoretical computing and a model of computation by interactions. One of the key players in IP is the verifier. In the original model of IP, whose power is that of PSPACE, the only restriction on verifiers is that they work in randomized polynomial time. Because of the key importance of IP, it is of great interest to find out how powerful IP remains when verifiers are more restricted. So far this has been explored for verifiers that are two-way probabilistic finite automata (Dwork and Stockmeyer, 1990), and one-way as well as two-way quantum finite automata (Nishimura and Yamakami, 2009). IP in which verifiers use public randomization is called an Arthur-Merlin proof system (AM). AM with verifiers modeled by Turing machines augmented with a fixed-size quantum register (qAM) was studied by Yakaryilmaz (2012). He proved, for example, that an NP-complete language L_knapsack, representing the 0-1 knapsack problem, can be recognized by a qAM whose verifier is a two-way finite automaton working on quantum mixed states using superoperators. In this paper we explore the power of AM for the case that verifiers are two-way finite automata with quantum and classical states (2QCFA), introduced by Ambainis and Watrous in 2002, and the communications are classical. It is of interest to consider AM with such "semi-quantum" verifiers because they use only limited quantum resources. Our main result is that such quantum Arthur-Merlin proof systems (QAM(2QCFA)) with polynomial expected running time are more powerful than the models in which the verifiers are two-way probabilistic finite automata (AM(2PFA)) with polynomial expected running time. Moreover, we prove that there is a language which can be recognized by a QAM(2QCFA) with exponential expected running time but cannot be recognized by any AM(2PFA), and that the NP-complete language L_knapsack can also be recognized by a QAM(2QCFA) working only on quantum pure states using unitary operators.

Journal ArticleDOI
TL;DR: For a certain class of distributions, it is proved that the linear programming relaxation of k-medoids clustering - a variant of k-means clustering where means are replaced by exemplars from within the dataset - distinguishes points drawn from nonoverlapping balls with high probability once the number of points drawn and the separation distance between any two balls are sufficiently large.
Abstract: For a certain class of distributions, we prove that the linear programming relaxation of k-medoids clustering - a variant of k-means clustering where means are replaced by exemplars from within the dataset - distinguishes points drawn from nonoverlapping balls with high probability once the number of points drawn and the separation distance between any two balls are sufficiently large. Our results hold in the nontrivial regime where the separation distance is small enough that points drawn from different balls may be closer to each other than points drawn from the same ball; in this case, clustering by thresholding pairwise distances between points can fail. We also exhibit numerical evidence of high-probability recovery in a substantially more permissive regime.
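A minimal sketch of a standard LP relaxation of k-medoids (exemplar variables y_j, assignment variables z_ij); the paper's exact formulation may differ in details. On two well-separated clusters the LP optimum is integral, which is the kind of recovery the result above establishes:

```python
# A sketch under the assumptions stated above: standard k-medoids LP relaxation
# solved with scipy on six points forming two well-separated clusters on the line.
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

points = np.array([[0.0], [0.2], [0.4], [5.0], [5.2], [5.4]])
n, k = len(points), 2
d = cdist(points, points)                    # pairwise distances d_ij

# Variable vector x = [y_0 .. y_{n-1}, z_00, z_01, ..., z_{n-1,n-1}]:
# y_j = "point j is an exemplar", z_ij = "point i is assigned to exemplar j".
c = np.concatenate([np.zeros(n), d.ravel()])

A_eq, b_eq = [], []
row = np.zeros(n + n * n); row[:n] = 1       # sum_j y_j = k
A_eq.append(row); b_eq.append(k)
for i in range(n):                           # each point fully assigned: sum_j z_ij = 1
    row = np.zeros(n + n * n); row[n + i * n: n + (i + 1) * n] = 1
    A_eq.append(row); b_eq.append(1)

A_ub, b_ub = [], []
for i in range(n):                           # assign only to exemplars: z_ij <= y_j
    for j in range(n):
        row = np.zeros(n + n * n)
        row[n + i * n + j] = 1; row[j] = -1
        A_ub.append(row); b_ub.append(0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(np.round(res.x[:n], 2))                # exemplar variables: one per ball here
```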

Journal ArticleDOI
TL;DR: This paper approximates the behaviour of a single agent with a time-inhomogeneous CTMC, which depends on the environment and on the other agents only through the solution of the fluid differential equation, model checks this process, and proves the asymptotic correctness of this approach in terms of satisfiability of CSL formulae.
Abstract: In this paper we investigate a potential use of fluid approximation techniques in the context of stochastic model checking of CSL formulae. We focus on properties describing the behaviour of a single agent in a (large) population of agents, exploiting a limit result known also as fast simulation. In particular, we will approximate the behaviour of a single agent with a time-inhomogeneous CTMC, which depends on the environment and on the other agents only through the solution of the fluid differential equation, and model check this process. We will prove the asymptotic correctness of our approach in terms of satisfiability of CSL formulae. We will also present a procedure to model check time-inhomogeneous CTMCs against CSL formulae.
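A toy sketch of the fast-simulation idea (an SIS example with hypothetical rates, not the paper's model-checking procedure): the fluid ODE gives the fraction i(t) of infected agents, and one tagged agent is approximated by a two-state time-inhomogeneous CTMC whose infection rate β·i(t) is read off the fluid trajectory:

```python
# Toy sketch under the assumptions stated above (hypothetical rates beta, mu).
from scipy.integrate import solve_ivp

beta, mu = 2.0, 1.0

def rhs(t, x):
    i, p = x                              # i: fluid fraction infected, p: P(tagged agent infected)
    di = beta * i * (1 - i) - mu * i      # fluid (mean-field) ODE for the population
    dp = beta * i * (1 - p) - mu * p      # forward equation of the tagged agent's 2-state CTMC
    return [di, dp]

sol = solve_ivp(rhs, (0.0, 10.0), [0.1, 0.0])
i_T, p_T = sol.y[:, -1]
print(round(i_T, 3), round(p_T, 3))       # both approach the fixed point 1 - mu/beta = 0.5
```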

Journal ArticleDOI
TL;DR: In this paper, the authors studied the power of Arthur-Merlin probabilistic proof systems in the data stream model and gave a canonical AM streaming algorithm for a class of data stream problems.
Abstract: We study the power of Arthur-Merlin probabilistic proof systems in the data stream model. We show a canonical AM streaming algorithm for a class of data stream problems. The algorithm offers a tradeoff between the length of the proof and the space complexity that is needed to verify it. As an application, we give an AM streaming algorithm for the Distinct Elements problem. Given a data stream of length m over an alphabet of size n, the algorithm uses Õ(s) space and a proof of size Õ(w), for every s, w such that s·w ≥ n (where Õ hides a polylog(m,n) factor). We also prove a lower bound, showing that every MA streaming algorithm for the Distinct Elements problem that uses s bits of space and a proof of size w satisfies s·w = Ω(n). Furthermore, the lower bound also holds for approximating the number of distinct elements within a multiplicative factor of 1 ± 1/√n. As a part of the proof of the lower bound for the Distinct Elements problem, we show a new lower bound of Ω(√n) on the MA communication complexity of the Gap Hamming Distance problem, and prove its tightness.

Journal ArticleDOI
TL;DR: The algorithmic results shed light on the performance quality of a popular heuristic due to Liu and Terzi [ACM SIGMOD 2008]; in particular, it is shown that the heuristic provides optimal solutions if "many" edges need to be added.
Abstract: Motivated by a strongly growing interest in graph anonymization, we study the NP-hard Degree Anonymity problem asking whether a graph can be made k-anonymous by adding at most a given number of edges. Herein, a graph is k-anonymous if for every vertex in the graph there are at least k − 1 other vertices of the same degree. Our algorithmic results shed light on the performance quality of a popular heuristic due to Liu and Terzi [ACM SIGMOD 2008]; in particular, we show that the heuristic provides optimal solutions if "many" edges need to be added. Based on this, we develop a polynomial-time data reduction yielding a polynomial-size problem kernel for Degree Anonymity parameterized by the maximum vertex degree. In terms of parameterized complexity analysis, this result is in a sense tight since we also show that the problem is already NP-hard for H-index three, implying NP-hardness for smaller parameters such as average degree and degeneracy.
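A small sketch of the k-anonymity notion used above, checked on a degree sequence; the computational difficulty lies not in this check but in adding few edges to make it hold:

```python
# Sketch of the k-anonymity notion: every occurring degree is shared by at least
# k vertices (equivalently, each vertex has at least k - 1 degree twins).
from collections import Counter

def is_k_anonymous(degrees, k):
    return all(count >= k for count in Counter(degrees).values())

# Degree sequence of an 8-cycle with two disjoint chords (hypothetical example).
degrees = [3, 2, 3, 2, 3, 2, 3, 2]
print(is_k_anonymous(degrees, 4))   # True: degrees 2 and 3 each occur 4 times
print(is_k_anonymous(degrees, 5))   # False
```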

Journal ArticleDOI
TL;DR: The theory of profiles is extended to prove that every run dag contains a profile tree with at most a finite number of infinite branches, and it is shown that this property provides a theoretical grounding for a new determinization construction where macrostates are doubly preordered sets of states.
Abstract: The determinization of Büchi automata is a celebrated problem, with applications in synthesis, probabilistic verification, and multi-agent systems. Since the 1960s, there has been a steady progress of constructions: by McNaughton, Safra, Piterman, Schewe, and others. Despite the proliferation of solutions, they are all essentially ad-hoc constructions, with little theory behind them other than proofs of correctness. Since Safra, all optimal constructions employ trees as states of the deterministic automaton, and transitions between states are defined operationally over these trees. The operational nature of these constructions complicates understanding, implementing, and reasoning about them, and should be contrasted with complementation, where a solid theory in terms of automata run dags underlies modern constructions. In 2010, we described a profile-based approach to Büchi complementation, where a profile is simply the history of visits to accepting states. We developed a structural theory of profiles and used it to describe a complementation construction that is deterministic in the limit. Here we extend the theory of profiles to prove that every run dag contains a profile tree with at most a finite number of infinite branches. We then show that this property provides a theoretical grounding for a new determinization construction where macrostates are doubly preordered sets of states. In contrast to extant determinization constructions, transitions in the new construction are described declaratively rather than operationally.

Journal ArticleDOI
TL;DR: This work resolves the ε-approximate degree of the two-level AND-OR tree for any constant ε > 0 and gives an explicit dual polynomial that witnesses a tight lower bound for the approximate degree of any symmetric Boolean function.
Abstract: The ε-approximate degree of a Boolean function f : {−1,1}^n → {−1,1} is the minimum degree of a real polynomial that approximates f to within error ε in the ℓ∞ norm. We prove several lower bounds on this important complexity measure by explicitly constructing solutions to the dual of an appropriate linear program. Our first result resolves the ε-approximate degree of the two-level AND-OR tree for any constant ε > 0. We show that this quantity is Θ(√n), closing a line of incrementally larger lower bounds. The same lower bound was recently obtained independently by Sherstov (Theory Comput. 2013) using related techniques. Our second result gives an explicit dual polynomial that witnesses a tight lower bound for the approximate degree of any symmetric Boolean function, addressing a question of Spalek (2008). Our final contribution is to reprove several Markov-type inequalities from approximation theory by constructing explicit dual solutions to natural linear programs. These inequalities underlie the proofs of many of the best-known approximate degree lower bounds, and have important uses throughout theoretical computer science.

Journal ArticleDOI
TL;DR: It is demonstrated that the satisfiability problem for a fragment of HRELTL allows for a satisfiability-preserving reduction to RELTL(RA), a logic over discrete traces with atoms in non-linear Real Arithmetic for which automated reasoning procedures are being developed.
Abstract: Hybrid traces are useful to describe behaviors of dynamic systems where continuous and discrete evolutions are combined. The ability to represent sets of traces by means of formulas in temporal logic has recently found important applications in various fields, such as requirements analysis, compositional verification, and contract-based design. In this paper we present HRELTL, a temporal logic to characterize hybrid traces. The logic is highly expressive: it allows the description of continuous behaviors, by expressing mathematical constraints over derivatives, and discrete behaviors, by constraining values of variables across instantaneous transitions. HRELTL combines the power of temporal operators and regular expressions, and enjoys important properties such as sampling invariance. We demonstrate that the satisfiability problem for a fragment of HRELTL allows for a satisfiability-preserving reduction to RELTL(RA), a logic over discrete traces with atoms in non-linear Real Arithmetic for which automated reasoning procedures are being developed.

Journal ArticleDOI
TL;DR: This work considers numerous parameters of this problem and answers the question whether or not the problem is still NP-complete if these parameters are bounded by constants.
Abstract: A pattern α, i.e., a string that contains variables and terminals, matches a terminal word w if w can be obtained by uniformly substituting the variables of α by terminal words. Deciding whether a given terminal word matches a given pattern is NP-complete and this holds for several natural variants of the problem that result from whether or not variables can be erased, whether or not the patterns are required to be terminal-free or whether or not the mapping of variables to terminal words must be injective. We consider numerous parameters of this problem (i.e., number of variables, length of w, length of the words substituted for variables, number of occurrences per variable, cardinality of the terminal alphabet) and for all possible combinations of the parameters (and variants described above), we answer the question whether or not the problem is still NP-complete if these parameters are bounded by constants.
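A brute-force sketch of the matching problem itself (nonerasing case, non-injective substitution, variables written as uppercase strings); its exponential backtracking is consistent with the NP-completeness discussed above:

```python
# Brute-force matcher for the nonerasing case (uppercase strings are variables,
# anything else is a terminal); exponential backtracking in the worst case.

def matches(pattern, word, subst=None):
    subst = dict(subst or {})
    if not pattern:
        return word == ""
    head, rest = pattern[0], pattern[1:]
    if not head.isupper():                            # terminal symbol
        return word.startswith(head) and matches(rest, word[len(head):], subst)
    if head in subst:                                 # variable already bound
        image = subst[head]
        return word.startswith(image) and matches(rest, word[len(image):], subst)
    for cut in range(1, len(word) + 1):               # try every nonempty binding
        if matches(rest, word[cut:], {**subst, head: word[:cut]}):
            return True
    return False

print(matches(["X", "a", "X"], "bab"))    # True:  X -> "b"
print(matches(["X", "X"], "abab"))        # True:  X -> "ab"
print(matches(["X", "X"], "aab"))         # False: no uniform substitution works
```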

Journal ArticleDOI
TL;DR: RSLR is an implicit higher-order characterization of the class PP of those problems which can be decided in probabilistic polynomial time with error probability smaller than 1/2; it is an extension of Hofmann's SLR with a probabilistic primitive, and enjoys basic properties such as subject reduction and confluence.
Abstract: We present RSLR, an implicit higher-order characterization of the class PP of those problems which can be decided in probabilistic polynomial time with error probability smaller than 1/2. Analogously, a (less implicit) characterization of the class BPP can be obtained. RSLR is an extension of Hofmann's SLR with a probabilistic primitive, which enjoys basic properties such as subject reduction and confluence. Polynomial time soundness of RSLR is obtained by syntactical means, as opposed to the standard literature on SLR-derived systems, which use semantics in an essential way.

Journal ArticleDOI
TL;DR: In this paper, the complexity of computing Boolean functions by polynomial threshold functions (PTFs) on general Boolean domains was studied, where the degree and weight of PTFs were investigated.
Abstract: We initiate a comprehensive study of the complexity of computing Boolean functions by polynomial threshold functions (PTFs) on general Boolean domains (as opposed to domains such as {0,1}^n or {−1,1}^n that enforce multilinearity). A typical example of such a general Boolean domain, for the purpose of our results, is {1,2}^n. We are mainly interested in the length (the number of monomials) of PTFs, with their degree and weight being of secondary interest. First we motivate the study of PTFs over the {1,2}^n domain by showing their close relation to depth two threshold circuits. In particular we show that PTFs of polynomial length and polynomial degree compute exactly the functions computed by polynomial size THR ∘ MAJ circuits. We note that known lower bounds for THR ∘ MAJ circuits extend to the likely strictly stronger model of PTFs (with no degree restriction). We also show that a "max-plus" version of PTFs is related to AC^0 ∘ THR circuits. We exploit this connection to gain a better understanding of threshold circuits. In particular, we show that (super-logarithmic) lower bounds for 3-player randomized communication protocols with unbounded error would yield (super-polynomial) size lower bounds for THR ∘ THR circuits. Finally, having thus motivated the model, we initiate structural studies of PTFs. These include relationships between weight and degree of PTFs, and a degree lower bound for PTFs of constant length.
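A tiny sketch of the object being studied: a PTF over the domain {1,2}^n given as a list of monomials, whose "length" is simply the number of monomials:

```python
# Tiny sketch: a PTF over {1,2}^n as a list of (coefficient, exponent-vector) monomials.
from math import prod

def ptf_sign(monomials, x):
    value = sum(c * prod(xi ** e for xi, e in zip(x, exps)) for c, exps in monomials)
    return 1 if value >= 0 else -1

# Length-2 PTF over {1,2}^2: sign(x1*x2 - 3) outputs +1 exactly when x1 = x2 = 2.
f = [(1, (1, 1)), (-3, (0, 0))]
for x in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    print(x, ptf_sign(f, x))
```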

Journal ArticleDOI
TL;DR: It is proved that adding upwards closed first-order dependency atoms to first-order logic with team semantics does not increase its expressive power (with respect to sentences), and that the same remains true if the authors also add constancy atoms.
Abstract: We prove that adding upwards closed first-order dependency atoms to first-order logic with team semantics does not increase its expressive power (with respect to sentences), and that the same remains true if we also add constancy atoms. As a consequence, the negations of functional dependence, conditional independence, inclusion and exclusion atoms can all be added to first-order logic without increasing its expressive power. Furthermore, we define a class of bounded upwards closed dependencies and we prove that unbounded dependencies cannot be defined in terms of bounded ones.

Journal ArticleDOI
TL;DR: This work establishes that the existence of a uniform strategy is decidable for rational relations and provides a nonelementary synthesis procedure and exhibits an essentially optimal subclass of rational relations for which the problem becomes 2-Exptime-complete.
Abstract: A general concept of uniform strategies has recently been proposed as a relevant notion in game theory for computer science, which subsumes various notions from the literature. It relies on properties involving sets of plays in two-player turn-based arenas equipped with arbitrary binary relations between plays; these properties are expressed in a language based on CTL* with a quantifier over related plays. There are two semantics for our quantifier, a strict one and a full one, that we study separately. Regarding the strict semantics, the existence of a uniform strategy is undecidable for rational binary relations, but introducing jumping tree automata and restricting attention to recognizable relations allows us to establish a 2-Exptime-complete complexity - and still capture a class of two-player imperfect-information games with epistemic temporal objectives. Regarding the full semantics, relying on information set automata we establish that the existence of a uniform strategy is decidable for rational relations and we provide a nonelementary synthesis procedure. We also exhibit an essentially optimal subclass of rational relations for which the problem becomes 2-Exptime-complete. Considering rich classes of relations makes the theory of uniform strategies powerful: it directly entails various results in logics of knowledge and time, some of them already known, and others new.

Journal ArticleDOI
TL;DR: It was shown in this paper that membership in rational subsets of wreath products H ≀ V with H a finite group and V a virtually free group is decidable, and that there exists a fixed finitely generated submonoid in the wreath product Z ≀ Z with an undecidable membership problem.
Abstract: It is shown that membership in rational subsets of wreath products H ≀ V with H a finite group and V a virtually free group is decidable. On the other hand, it is shown that there exists a fixed finitely generated submonoid in the wreath product Z ≀ Z with an undecidable membership problem.

Journal ArticleDOI
TL;DR: In this paper, the transition structure of a deterministic automaton with state set X and inputs from an alphabet A can be viewed both as an algebra and as a coalgebra, and the main result is that the restrictions of free and cofree to preformations of languages and to quotients A*/C of A* with respect to a congruence relation C, form a dual equivalence.
Abstract: The transition structure α : X → X^A of a deterministic automaton with state set X and with inputs from an alphabet A can be viewed both as an algebra and as a coalgebra. We use this algebra-coalgebra duality as a common perspective for the study of equations and coequations. For every automaton (X, α), we define two new automata: free(X, α) and cofree(X, α) representing, respectively, the greatest set of equations and the smallest set of coequations satisfied by (X, α). Both constructions are shown to be functorial. Our main result is that the restrictions of free and cofree to, respectively, preformations of languages and to quotients A*/C of A* with respect to a congruence relation C, form a dual equivalence. As a consequence, we present a variant of Eilenberg's celebrated variety theorem for varieties of monoids (in the sense of Birkhoff) and varieties of languages.