
Showing papers in "Information & Computation in 2013"


Journal ArticleDOI
Shi Li1
TL;DR: It is shown that if γ is randomly selected, the approximation ratio can be improved to 1.488 and the gap with the 1.463 approximability lower bound is cut by almost 1/3.
Abstract: We present a 1.488-approximation algorithm for the metric uncapacitated facility location (UFL) problem. Previously, the best algorithm was due to Byrka (2007). Byrka proposed an algorithm parametrized by γ and used it with γ ≈ 1.6774. By either running his algorithm or the algorithm proposed by Jain, Mahdian and Saberi (STOC'02), Byrka obtained an algorithm that gives expected approximation ratio 1.5. We show that if γ is randomly selected, the approximation ratio can be improved to 1.488. Our algorithm cuts the gap with the 1.463 approximability lower bound by almost 1/3.
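For context, the standard objective of metric UFL (a textbook formulation, not taken from this paper): choose a nonempty set S of facilities to open and connect every client to its nearest open facility,

cost(S) = Σ_{i∈S} f_i + Σ_{j∈C} min_{i∈S} c(i,j),        OPT = min_{∅≠S⊆F} cost(S),

where F is the set of facilities, C the set of clients, f_i the opening cost of facility i, and c a metric connection cost. The 1.488 and 1.463 figures bound the worst-case ratio between the cost of the returned solution and OPT.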

293 citations


Journal ArticleDOI
TL;DR: The Mmt language is designed and implemented as the simplest possible language that combines a module system, a foundationally uncommitted formal semantics, and web-scalable implementations to integrate existing representation languages for formal mathematical knowledge in a simple, scalable formalism.
Abstract: Symbolic and logic computation systems ranging from computer algebra systems to theorem provers are finding their way into science, technology, mathematics and engineering. But such systems rely on explicitly or implicitly represented mathematical knowledge that needs to be managed to use such systems effectively. While mathematical knowledge management (MKM) "in the small" is well-studied, scaling up to large, highly interconnected corpora remains difficult. We hold that in order to realize MKM "in the large", we need representation languages and software architectures that are designed systematically with large-scale processing in mind. Therefore, we have designed and implemented the Mmt language - a module system for mathematical theories. Mmt is designed as the simplest possible language that combines a module system, a foundationally uncommitted formal semantics, and web-scalable implementations. Due to a careful choice of representational primitives, Mmt allows us to integrate existing representation languages for formal mathematical knowledge in a simple, scalable formalism. In particular, Mmt abstracts from the underlying mathematical and logical foundations so that it can serve as a standardized representation format for a formal digital library. Moreover, Mmt systematically separates logic-dependent and logic-independent concerns so that it can serve as an interface layer between computation systems and MKM systems.

103 citations


Journal ArticleDOI
TL;DR: A novel notion of weak bisimulation is defined for Markov automata and it is proved that this provides both a sound and complete proof methodology for a natural extensional behavioural equivalence between such systems.
Abstract: Markov automata describe systems in terms of events which may be nondeterministic, may occur probabilistically, or may be subject to time delays. We define a novel notion of weak bisimulation for such systems and prove that this provides both a sound and complete proof methodology for a natural extensional behavioural equivalence between such systems, a generalisation of reduction barbed congruence, the well-known touchstone equivalence for a large variety of process description languages.

69 citations


Journal ArticleDOI
TL;DR: This paper considers several composition operators that allow smaller systems to be combined into a larger system, and explores the extent to which the secrecy consumption of a combined system is constrained by the secrecy consumption of its constituents.
Abstract: Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, it is useful to model secrecy quantitatively, thinking of it as a “resource” that may be gradually “consumed” by a system. In this paper, we explore this intuition through several dynamic and static models of secrecy consumption, ultimately focusing on (average) vulnerability and min-entropy leakage as especially useful models of secrecy consumption. We also consider several composition operators that allow smaller systems to be combined into a larger system, and explore the extent to which the secrecy consumption of a combined system is constrained by the secrecy consumption of its constituents.
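For reference, the measures named above are standard in the quantitative information flow literature (definitions reproduced here only as background, with X the secret and Y the observable output):

V(X) = max_x P[X = x]                                  (prior vulnerability)
V(X | Y) = Σ_y P[Y = y] · max_x P[X = x | Y = y]       (posterior vulnerability)
H_∞(X) = −log₂ V(X),   H_∞(X | Y) = −log₂ V(X | Y)     (min-entropy)
min-entropy leakage = H_∞(X) − H_∞(X | Y) = log₂( V(X | Y) / V(X) ).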

68 citations


Journal ArticleDOI
TL;DR: This paper studies the kernelization complexity of graph coloring problems with respect to certain structural parameterizations of the input instances, and shows that the existence of polynomial kernels for q-Coloring parameterized by the vertex-deletion distance to a graph class F is strongly related to the existence of a function f(q) that bounds the number of vertices needed to preserve the no-answer to an instance of q-List Coloring on F.
Abstract: This paper studies the kernelization complexity of graph coloring problems with respect to certain structural parameterizations of the input instances. We are interested in how well polynomial-time data reduction can provably shrink instances of coloring problems, in terms of the chosen parameter. It is well known that deciding 3-colorability is already NP-complete, hence parameterizing by the requested number of colors is not fruitful. Instead, we pick up on a research thread initiated by Cai (DAM, 2003) who studied coloring problems parameterized by the modification distance of the input graph to a graph class on which coloring is polynomial-time solvable; for example parameterizing by the number k of vertex-deletions needed to make the graph chordal. We obtain various upper and lower bounds for kernels of such parameterizations of q-Coloring, complementing Cai's study of the time complexity with respect to these parameters. Our results show that the existence of polynomial kernels for q-Coloring parameterized by the vertex-deletion distance to a graph class F is strongly related to the existence of a function f(q) which bounds the number of vertices which are needed to preserve the no-answer to an instance of q-List Coloring on F.

62 citations


Journal ArticleDOI
TL;DR: The maximum cardinality popular matching problem in G = (A ∪ B, E) can be solved in O(mn₀) time, where m = |E| and n₀ = min(|A|, |B|).
Abstract: We consider the problem of computing a maximum cardinality popular matching in a bipartite graph G = (A ∪ B, E) where each vertex u ∈ A ∪ B ranks its neighbors in a strict order of preference. Such a graph is called an instance of the stable marriage problem with strict preferences and incomplete lists. A matching M* is popular if for every matching M in G, the number of vertices that prefer M to M* is at most the number of vertices that prefer M* to M. Every stable matching of G is popular; however, a stable matching is a minimum cardinality popular matching. The complexity of computing a maximum cardinality popular matching was unknown. In this paper we show a simple characterization of popular matchings in G = (A ∪ B, E). We also show a sufficient condition for a popular matching to be a maximum cardinality popular matching. We construct a matching that satisfies our characterization and sufficient condition in O(mn₀) time, where m = |E| and n₀ = min(|A|, |B|). Thus the maximum cardinality popular matching problem in G = (A ∪ B, E) can be solved in O(mn₀) time.
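A minimal Python sketch of the popularity comparison itself (illustrative only; the data layout and names are assumptions, and this is not the paper's algorithm): each vertex votes for whichever of two matchings gives it the more preferred partner, with being unmatched ranked below every neighbor.

def compare(M1, M2, pref_rank):
    """Return (#vertices preferring M1, #vertices preferring M2).

    M1, M2: dicts mapping each matched vertex to its partner.
    pref_rank: dict v -> dict mapping each neighbor of v to its rank
               (smaller = more preferred); being unmatched is worst.
    """
    votes1 = votes2 = 0
    for v in pref_rank:
        r1 = pref_rank[v].get(M1.get(v), float("inf"))  # rank of v's partner under M1
        r2 = pref_rank[v].get(M2.get(v), float("inf"))  # rank of v's partner under M2
        if r1 < r2:
            votes1 += 1
        elif r2 < r1:
            votes2 += 1
    return votes1, votes2

M1 is popular exactly when, for every matching M2, the second count never exceeds the first.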

61 citations


Journal ArticleDOI
TL;DR: The original motivation and the ultimate goal are to provide a convenient high-level programming language for a theory of computational resources, such as one-way functions and trapdoor functions, by adopting the methods for hiding low-level implementation details that emerged from practice.
Abstract: We present a new model of computation, described in terms of monoidal categories. It conforms to the Church–Turing Thesis, and captures the same computable functions as the standard models. It provides a succinct categorical interface to most of them, free of their diverse implementation details, using the ideas and structures that in the meantime emerged from research in semantics of computation and programming. The salient feature of the language of monoidal categories is that it is supported by a sound and complete graphical formalism, string diagrams, which provide a concrete and intuitive interface for abstract reasoning about computation. The original motivation and the ultimate goal of this effort are to provide a convenient high-level programming language for a theory of computational resources, such as one-way functions and trapdoor functions, by adopting the methods for hiding the low-level implementation details that emerged from practice.

55 citations


Journal ArticleDOI
TL;DR: A new and adequate formulation of T[C], the system that extends a type theory T with coercive subtyping based on a set C of basic subtyping judgements, is given, and it is shown that coercive subtyping is a conservative extension and, in a more general sense, a definitional extension.
Abstract: Coercive subtyping is a useful and powerful framework of subtyping for type theories. The key idea of coercive subtyping is subtyping as abbreviation. In this paper, we give a new and adequate formulation of T[C], the system that extends a type theory T with coercive subtyping based on a set C of basic subtyping judgements, and show that coercive subtyping is a conservative extension and, in a more general sense, a definitional extension. We introduce an intermediate system, the star-calculus T[C]*, in which the positions that require coercion insertions are marked, and show that T[C]* is a conservative extension of T and that T[C]* is equivalent to T[C]. This makes clear what we mean by coercive subtyping being a conservative extension, on the one hand, and amends a technical problem that has led to a gap in the earlier conservativity proof, on the other. We also compare coercive subtyping with the 'ordinary' notion of subtyping - subsumptive subtyping - and show that the former is adequate for type theories with canonical objects while the latter is not. An improved implementation of coercive subtyping is done in the proof assistant Plastic.

51 citations


Journal ArticleDOI
TL;DR: An O(√n log n)-approximation algorithm for the problem of finding the sparsest spanner of a given directed graph G on n vertices is presented, and the approximation ratio almost matches Dinitz and Krauthgamer's lower bound for the integrality gap of a natural linear programming relaxation.
Abstract: We present an O(√n log n)-approximation algorithm for the problem of finding the sparsest spanner of a given directed graph G on n vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph G = (V, E) with nonnegative edge lengths d: E → R≥0 and a stretch k ≥ 1, a subgraph H = (V, E_H) is a k-spanner of G if for every edge (s, t) ∈ E, the graph H contains a path from s to t of length at most k·d(s, t). The previous best approximation ratio was Õ(n^(2/3)), due to Dinitz and Krauthgamer (STOC '11). We also improve the approximation ratio for the important special case of directed 3-spanners with unit edge lengths from Õ(√n) to O(n^(1/3) log n). The best previously known algorithms for this problem are due to Berman, Raskhodnikova and Ruan (FSTTCS '10) and Dinitz and Krauthgamer. The approximation ratio of our algorithm almost matches Dinitz and Krauthgamer's lower bound for the integrality gap of a natural linear programming relaxation. Our algorithm directly implies an O(n^(1/3) log n)-approximation for the 3-spanner problem on undirected graphs with unit lengths. An easy O(√n)-approximation algorithm for this problem has been the best known for decades. Finally, we consider the Directed Steiner Forest problem: given a directed graph with edge costs and a collection of ordered vertex pairs, find a minimum-cost subgraph that contains a path between every prescribed pair. We obtain an approximation ratio of O(n^(2/3+ε)) for any constant ε > 0, which improves the O(n^ε · min(n^(4/5), m^(2/3))) ratio due to Feldman, Kortsarz and Nutov (JCSS'12).
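To make the spanner condition concrete, the following Python sketch (assumed naming; a plain Dijkstra-based check, unrelated to the paper's approximation algorithm) tests whether a candidate subgraph H is a k-spanner of G:

import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source in a weighted digraph given as
    adj: dict u -> list of (v, length)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def is_k_spanner(G_edges, H_adj, k):
    """G_edges: list of (s, t, length) over G; H_adj: adjacency of the candidate H.
    H is a k-spanner of G if dist_H(s, t) <= k * length for every edge (s, t) of G."""
    for s, t, length in G_edges:
        dist = dijkstra(H_adj, s)          # could be cached per source s
        if dist.get(t, float("inf")) > k * length:
            return False
    return True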

51 citations


Journal ArticleDOI
TL;DR: This paper designs blackbox and efficient linear maps φ that reduce the number of variables from n to r but maintain trdeg{φ(f_i)}_i = r, assuming sparse f_i and small r, and applies these fundamental maps to solve two cases of blackbox identity testing.
Abstract: Algebraic independence is a fundamental notion in commutative algebra that generalizes independence of linear polynomials. Polynomials {f_1, ..., f_m} ⊆ K[x_1, ..., x_n] (over a field K) are called algebraically independent if there is no non-zero polynomial F such that F(f_1, ..., f_m) = 0. The transcendence degree, trdeg{f_1, ..., f_m}, is the maximal number r of algebraically independent polynomials in the set. In this paper we design blackbox and efficient linear maps φ that reduce the number of variables from n to r but maintain trdeg{φ(f_i)}_i = r, assuming sparse f_i and small r. We apply these fundamental maps to solve two cases of blackbox identity testing (assuming a large or zero characteristic): 1. Given a polynomial-degree circuit C and sparse polynomials f_1, ..., f_m of transcendence degree r, we can test the blackbox D := C(f_1, ..., f_m) for zeroness in poly(size(D))^r time. 2. Define a ΣΠΣΠ_δ(k, s, n) circuit to be of the form Σ_{i=1}^{k} Π_{j=1}^{s} f_{i,j}, where the f_{i,j} are sparse n-variate polynomials of degree at most δ. For this class of depth-4 circuits we define a notion of rank. Assuming there is a rank bound R for minimal simple ΣΠΣΠ_δ(k, s, n) identities, we give a poly(δsnR)^(Rkδ²) time blackbox identity test for ΣΠΣΠ_δ(k, s, n) circuits. This partially generalizes the state of the art of depth-3 to depth-4 circuits. The notion of transcendence degree works best with large or zero characteristic, but we also give versions of our results for arbitrary fields.
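A tiny worked example of the central notion (standard, not taken from the paper): over K[x, y], the polynomials f_1 = x, f_2 = y, f_3 = xy are algebraically dependent, since F(z_1, z_2, z_3) = z_1·z_2 − z_3 is a non-zero polynomial with F(f_1, f_2, f_3) = x·y − xy = 0; any two of them are algebraically independent, so trdeg{x, y, xy} = 2.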

51 citations


Journal ArticleDOI
TL;DR: This paper proposes a new assume-guarantee framework based on multi-objective probabilistic model checking which supports compositional verification for a range of quantitative properties, including probabilistic ω-regular specifications and expected total cost or reward measures.
Abstract: Compositional approaches to verification offer a powerful means to address the challenge of scalability. In this paper, we develop techniques for compositional verification of probabilistic systems based on the assume-guarantee paradigm. We target systems that exhibit both nondeterministic and stochastic behaviour, modelled as probabilistic automata, and augment these models with costs or rewards to reason about, for example, energy usage or performance metrics. Despite significant theoretical advances in compositional reasoning for probabilistic automata, there has been a distinct lack of practical progress regarding automated verification. We propose a new assume-guarantee framework based on multi-objective probabilistic model checking which supports compositional verification for a range of quantitative properties, including probabilistic ω-regular specifications and expected total cost or reward measures. We present a wide selection of assume-guarantee proof rules, including asymmetric, circular and asynchronous variants, and also show how to obtain numerical results in a compositional fashion. Given appropriate assumptions to be used in the proof rules, our compositional verification methods are, in contrast to previously proposed approaches, efficient and fully automated. Experimental results demonstrate their practical applicability on several large case studies, including instances where conventional probabilistic verification is infeasible.

Journal ArticleDOI
TL;DR: Several techniques to adapt and optimize linear-programming based approaches to the maximum matching problem in the semi-streaming model are presented and the effectiveness of adapting such tools in this model is demonstrated.
Abstract: In this paper we study linear-programming based approaches to the maximum matching problem in the semi-streaming model. In this model edges are presented sequentially, possibly in an adversarial order, and we are only allowed to use a small space. The allowed space is near linear in the number of vertices (and sublinear in the number of edges) of the input graph. The semi-streaming model is relevant in the context of processing of very large graphs. In recent years, there have been several new and exciting results in the semi-streaming model. However broad techniques such as linear programming have not been adapted to this model. In this paper we present several techniques to adapt and optimize linear-programming based approaches in the semi-streaming model. We use the maximum matching problem as a foil to demonstrate the effectiveness of adapting such tools in this model. As a consequence we improve almost all previous results on the semi-streaming maximum matching problem. We also prove new results on interesting variants.
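For contrast with the linear-programming techniques developed in the paper, the classical baseline in this model is the one-pass greedy algorithm, which keeps only O(n) edges and returns a maximal matching, hence a 1/2-approximation (a Python sketch under assumed naming, not the paper's method):

def greedy_streaming_matching(edge_stream):
    """One pass over edges (u, v); keep an edge iff both endpoints are still free.
    Space is linear in the number of vertices, as the semi-streaming model allows."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching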

Journal ArticleDOI
TL;DR: An algorithm is given to compute an absolutely normal number so that the first n digits in its binary expansion are obtained in time polynomial in n; in fact, just above quadratic.
Abstract: We give an algorithm to compute an absolutely normal number so that the first n digits in its binary expansion are obtained in time polynomial in n; in fact, just above quadratic. The algorithm uses combinatorial tools to control divergence from normality. Speed of computation is achieved at the sacrifice of speed of convergence to normality.
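For reference, the standard definition behind the result (background, not specific to this paper): a real number x is normal to base b if every finite word w over the digits {0, ..., b−1} occurs in the base-b expansion of x with limiting frequency b^(−|w|), i.e.

lim_{N→∞} (number of occurrences of w among the first N digits) / N = b^(−|w|),

and x is absolutely normal if it is normal to every integer base b ≥ 2.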

Journal ArticleDOI
TL;DR: In this article, the authors study a family of graph clustering problems where each cluster has to satisfy a certain local requirement: at most q edges leave each cluster C and μ(C) ≤ p, where μ is a function on the subsets of vertices of the graph G.
Abstract: We study a family of graph clustering problems where each cluster has to satisfy a certain local requirement. Formally, let μ be a function on the subsets of vertices of a graph G. In the (μ,p,q)-Partition problem, the task is to find a partition of the vertices into clusters where each cluster C satisfies the requirements that (1) at most q edges leave C and (2) μ(C) ≤ p.

Journal ArticleDOI
TL;DR: A progress report on how researchers in the rewriting logic semantics project are narrowing the gap between theory and practice in areas such as: modular semantic definitions of languages; scalability to real languages; support for real time; semantics of software and hardware modeling languages; and semantics-based analysis tools such as static analyzers, model checkers, and program provers.
Abstract: Rewriting logic is an executable logical framework well suited for the semantic definition of languages. Any such framework has to be judged by its effectiveness to bridge the existing gap between language definitions on the one hand, and language implementations and language analysis tools on the other. We give a progress report on how researchers in the rewriting logic semantics project are narrowing the gap between theory and practice in areas such as: modular semantic definitions of languages; scalability to real languages; support for real time; semantics of software and hardware modeling languages; and semantics-based analysis tools such as static analyzers, model checkers, and program provers.

Journal ArticleDOI
TL;DR: This work demonstrates relatively efficient and general solutions where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest.
Abstract: Consider a weak client that wishes to delegate a computation to an untrusted server, and then verify the correctness of the result. When the client uses only a single untrusted server, current techniques suffer from disadvantages such as computational inefficiency for the client or the server, limited functionality, or high round complexity. We demonstrate relatively efficient and general solutions where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We call such protocols Refereed Delegation of Computation (RDoC) and show: 1. A computationally secure protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. In our description of this protocol, we model the computation as running on a Turing Machine, but the protocol can be adapted to other computation models. We present an adaptation for the x86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live cloud servers. We show that the protocol is practical, can work with real-world cloud servers, and is efficient for both the servers and the client. 2. A 1-round statistically secure protocol for any log-space uniform NC circuit. In contrast, in the single server setting all known one-round delegation protocols are computationally sound. The protocol extends the arithmetization techniques of Goldwasser, Kalai and Rothblum (STOC 08) and Feige and Kilian (STOC 97).
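The logarithmic round count in the first protocol comes from a binary-search-style referee strategy over the servers' claimed computations. The Python sketch below is a heavily simplified illustration of that idea under stated assumptions (two servers, full configuration traces rather than hash commitments, a referee that can re-execute a single step); it is not the actual RDoC protocol:

def referee(trace_a, trace_b, step):
    """trace_a, trace_b: the two servers' claimed sequences of machine configurations
    (same length, same initial configuration, different final answers).
    step(cfg): the referee's own one-step transition function.
    Returns 'A' or 'B', the server whose trace is consistent at the first disagreement."""
    lo, hi = 0, len(trace_a) - 1            # invariant: traces agree at lo, disagree at hi
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if trace_a[mid] == trace_b[mid]:
            lo = mid
        else:
            hi = mid
    # Exactly one server's configuration at `hi` follows from the common one at `lo`.
    correct = step(trace_a[lo])             # trace_a[lo] == trace_b[lo]
    return 'A' if correct == trace_a[hi] else 'B'

In the paper's setting the traces are authenticated with a collision-resistant hash family, which is how the logarithmically many rounds arise without shipping whole configurations.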

Journal ArticleDOI
TL;DR: It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models except when nondeterminism and probability/stochasticity coexist; then new equivalences pop up.
Abstract: Labeled transition systems are typically used as behavioral models of concurrent processes. Their labeled transitions define a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution with any pair consisting of a source state and a transition label. The state reachability distribution is a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS from Uniform Labeled Transition System, can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. They can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models except when nondeterminism and probability/stochasticity coexist; then new equivalences pop up.

Journal ArticleDOI
TL;DR: It is proved that weighted bisimilarity is a congruence on systems defined by weighted GSOS specifications, and the flexibility of the framework is illustrated by instantiating it to handle some special cases, most notably that of stochastic transition systems.
Abstract: We introduce weighted GSOS, a general syntactic framework to specify well-behaved transition systems where transitions are equipped with weights coming from a commutative monoid. We prove that weighted bisimilarity is a congruence on systems defined by weighted GSOS specifications. We illustrate the flexibility of the framework by instantiating it to handle some special cases, most notably that of stochastic transition systems. Through examples we provide weighted-GSOS definitions for common stochastic operators in the literature.

Journal ArticleDOI
TL;DR: This work studies convergence of natural better-response dynamics that converge to locally stable matchings - matchings that allow no incentive to deviate with respect to their imposed information structure in the social network.
Abstract: We study stable marriage and roommates problems under locality constraints. Each player is a vertex in a social network and strives to be matched to other players. The value of a match is specified by an edge weight. Players explore possible matches only based on their current neighborhood. We study convergence of natural better-response dynamics that converge to locally stable matchings - matchings that allow no incentive to deviate with respect to their imposed information structure in the social network. If we have global information and control to steer the convergence process, then quick convergence is possible and for every starting state we can construct in polynomial time a sequence of polynomially many better-response moves to a locally stable matching. In contrast, for a large class of oblivious dynamics including random and concurrent better-response the convergence time turns out to be exponential. In such distributed settings, a small amount of random memory can ensure polynomial convergence time, even for many-to-many matchings and more general notions of neighborhood. Here the type of memory is crucial as for several variants of cache memory we provide exponential lower bounds on convergence times.

Journal ArticleDOI
TL;DR: This work surveys and generalizes work carried out in models with known bounds on the number of processes, and proves several new results, including improved bounds for election when participation is required and a new adaptive starvation-free mutual exclusion algorithm for unbounded concurrency.
Abstract: We explore four classic problems in concurrent computing (election, mutual exclusion, consensus, and naming) when the number of processes which may participate is unbounded. Partial information about the number of processes actually participating and the concurrency level is shown to affect the computability and complexity of solving these problems when using only atomic registers. We survey and generalize work carried out in models with known bounds on the number of processes, and prove several new results. These include improved bounds for election when participation is required and a new adaptive starvation-free mutual exclusion algorithm for unbounded concurrency. We also survey results in models with shared objects stronger than atomic registers, such as test&set bits, semaphores or read-modify-write registers, and update them for the unbounded case.

Journal ArticleDOI
TL;DR: It is proved that a related problem, the so-called Bipartite Chain Deletion problem, admits a kernel with O(k^2) vertices, completing a previous result of Guo (ISAAC'07).
Abstract: Given a graph G = (V, E) and a positive integer k, the Proper Interval Completion problem asks whether there exists a set F of at most k pairs of (V×V)∖E such that the graph H = (V, E ∪ F) is a proper interval graph. The Proper Interval Completion problem finds applications in molecular biology and genomic research. This problem is known to be FPT (Kaplan, Tarjan and Shamir, FOCS'94), but no polynomial kernel was known to exist. We settle this question by proving that Proper Interval Completion admits a kernel with O(k^3) vertices. Moreover, we prove that a related problem, the so-called Bipartite Chain Deletion problem, admits a kernel with O(k^2) vertices, completing a previous result of Guo (ISAAC'07).

Journal ArticleDOI
TL;DR: The computational complexity of the isomorphism problem for regular trees, regular linear orders, and regular words is analyzed and techniques can be used to show that one can check in polynomial time whether a given regular linear order has a non-trivial automorphism.
Abstract: The computational complexity of the isomorphism problem for regular trees, regular linear orders, and regular words is analyzed. A tree is regular if it is isomorphic to the prefix order on a regular language. In case regular languages are represented by NFAs (DFAs), the isomorphism problem for regular trees turns out to be EXPTIME-complete (resp. P-complete). In case the input automata are acyclic NFAs (acyclic DFAs), the corresponding trees are (succinctly represented) finite trees, and the isomorphism problem turns out to be PSPACE-complete (resp. P-complete). A linear order is regular if it is isomorphic to the lexicographic order on a regular language. A polynomial time algorithm for the isomorphism problem for regular linear orders (and even regular words, which generalize the latter) given by DFAs is presented. This solves an open problem by Esik and Bloom. Similar techniques can be used to show that one can check in polynomial time whether a given regular linear order has a non-trivial automorphism. This improves a recent decidability result of Kuske.

Journal ArticleDOI
TL;DR: This paper shows that pushdown module checking, which is by itself harder than pushdown model checking, becomes undecidable when the environment has imperfect information, and proves that with imperfect information about the control states, but a visible pushdown store, the problem is decidable and its complexity is 2Exptime-complete for CTL and the propositional @m-calculus.
Abstract: The model checking problem for finite-state open systems (module checking) has been extensively studied in the literature, both in the context of environments with perfect and imperfect information about the system. Recently, the perfect information case has been extended to infinite-state systems (pushdown module checking). In this paper, we extend pushdown module checking to the imperfect information setting; i.e., to the case where the environment has only a partial view of the system's control states and pushdown store content. We study the complexity of this problem with respect to the branching-time temporal logics CTL, CTL* and the propositional μ-calculus. We show that pushdown module checking, which is by itself harder than pushdown model checking, becomes undecidable when the environment has imperfect information. We also show that undecidability relies on hiding information about the pushdown store. Indeed, we prove that with imperfect information about the control states, but a visible pushdown store, the problem is decidable and its complexity is 2Exptime-complete for CTL and the propositional μ-calculus, and 3Exptime-complete for CTL*.

Journal ArticleDOI
TL;DR: The problem of establishing a relationship between two interpretations of base type terms of a λ_c-calculus extended with algebraic operations is considered, and it is shown that the given relationship holds if it satisfies a set of natural conditions.
Abstract: We consider the problem of establishing a relationship between two interpretations of base type terms of a λ_c-calculus extended with algebraic operations. We show that the given relationship holds if it satisfies a set of natural conditions. We apply this result to 1) comparing two monadic semantics related by a strong monad morphism, and 2) comparing two monadic semantics of fresh name creation: Stark's new name creation monad and the global counter monad. We also consider the same problem, relating semantics of computational effects, in the presence of recursive functions. We address this by extending the previous monad morphism comparison result to the recursive case.

Journal ArticleDOI
TL;DR: Given an array A of size n, this work considers the problem of answering range majority queries - given a query range [i..j] where 1 ≤ i ≤ j ≤ n, report the majority element of A[i..j] if one exists - and presents a solution to this problem.
Abstract: Given an array A of size n, we consider the problem of answering range majority queries: given a query range [i..j] where 1 ≤ i ≤ j ≤ n, return the majority element of the subarray A[i..j] if it exists, i.e., an element that occurs more than (j − i + 1)/2 times in A[i..j].
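As a point of reference only (the paper targets fast query time after preprocessing; this Python sketch, under assumed naming, is just the naive per-query baseline), a Boyer–Moore vote scans A[i..j] in O(j - i + 1) time:

def range_majority_naive(A, i, j):
    """Return the element occurring more than (j - i + 1)/2 times in A[i..j]
    (1-based, inclusive), or None if no majority exists."""
    candidate, count = None, 0
    for x in A[i - 1:j]:                    # Boyer-Moore majority vote
        if count == 0:
            candidate, count = x, 1
        elif x == candidate:
            count += 1
        else:
            count -= 1
    if candidate is not None and A[i - 1:j].count(candidate) * 2 > (j - i + 1):
        return candidate
    return None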

Journal ArticleDOI
TL;DR: It is concluded that the parallel composition of (communicating) RTMs can be simulated by a single RTM, and a correspondence between executability and finite definability in a simple process calculus is established.
Abstract: We propose reactive Turing machines (RTMs), extending classical Turing machines with a process-theoretical notion of interaction, and use it to define a notion of executable transition system. We show that every computable transition system with a bounded branching degree is simulated modulo divergence-preserving branching bisimilarity by an RTM, and that every effective transition system is simulated modulo the variant of branching bisimilarity that does not require divergence preservation. We conclude from these results that the parallel composition of (communicating) RTMs can be simulated by a single RTM. We prove that there exist universal RTMs modulo branching bisimilarity, but these essentially employ divergence to be able to simulate an RTM of arbitrary branching degree. We also prove that modulo divergence-preserving branching bisimilarity there are RTMs that are universal up to their own branching degree. We establish a correspondence between executability and finite definability in a simple process calculus. Finally, we establish that RTMs are at least as expressive as persistent Turing machines.

Journal ArticleDOI
TL;DR: This work shows how to compute ε-optimal strategies in infinite games with finite branching and bounded rates where the bound as well as the successors of a given state are effectively computable.
Abstract: We study continuous-time stochastic games with time-bounded reachability objectives and time-abstract strategies. We show that each vertex in such a game has a value (i.e., an equilibrium probability), and we classify the conditions under which optimal strategies exist. Further, we show how to compute ε-optimal strategies in finite games and provide detailed complexity estimations. Moreover, we show how to compute ε-optimal strategies in infinite games with finite branching and bounded rates where the bound as well as the successors of a given state are effectively computable. Finally, we show how to compute optimal strategies in finite uniform games.

Journal ArticleDOI
TL;DR: This paper has developed a complete abstraction theory for PAs, and also proposes the first specification theory for them, which supports both satisfaction and refinement operators, together with classical stepwise design operators.
Abstract: Probabilistic Automata (PAs) are a widely-recognized mathematical framework for the specification and analysis of systems with non-deterministic and stochastic behaviors. This paper proposes Abstract Probabilistic Automata (APAs), a novel abstraction model for PAs. In APAs, uncertainty of the non-deterministic choices is modeled by may/must modalities on transitions, while uncertainty of the stochastic behavior is expressed by (underspecified) stochastic constraints. We have developed a complete abstraction theory for PAs, and also propose the first specification theory for them. Our theory supports both satisfaction and refinement operators, together with classical stepwise design operators. In addition, we study the link between specification theories and abstraction in avoiding the state-space explosion problem.

Journal ArticleDOI
TL;DR: This work identifies many of those operations arising in applications and generalizes them into a wide set of desirable queries for a binary relation representation that not only are space-efficient but also efficiently support a large subset of the desired queries.
Abstract: Binary relations are an important abstraction arising in many data representation problems. The data structures proposed so far to represent them support just a few basic operations required to fit one particular application. We identify many of those operations arising in applications and generalize them into a wide set of desirable queries for a binary relation representation. We also identify reductions among those operations. We then introduce several novel binary relation representations, some simple and some quite sophisticated, that not only are space-efficient but also efficiently support a large subset of the desired queries.

Journal ArticleDOI
TL;DR: Two different methods to achieve subexponential time parameterized algorithms for problems on sparse directed graphs are developed based on non-trivial combinations of obstruction theorems for undirected graphs, kernelization, problem-specific combinatorial structures, and a layering technique similar to the one employed by Baker to obtain PTAS for planar graphs.
Abstract: In this paper we make the first step beyond bidimensionality by obtaining subexponential time algorithms for problems on directed graphs. We develop two different methods to achieve subexponential time parameterized algorithms for problems on sparse directed graphs. We exemplify our approaches with two well studied problems. For the first problem, k-Leaf Out-Branching, which is to find an oriented spanning tree with at least k leaves, we obtain an algorithm solving the problem in time 2^(O(k log k))·n + n^(O(1)) on directed graphs whose underlying undirected graph excludes some fixed graph H as a minor. For the special case when the input directed graph is planar, the running time can be improved to 2^(O(k))·n + n^(O(1)). The second example is a generalization of the Directed Hamiltonian Path problem, namely k-Internal Out-Branching, which is to find an oriented spanning tree with at least k internal vertices. We obtain an algorithm solving the problem in time 2^(O(k log k)) + n^(O(1)) on directed graphs whose underlying undirected graph excludes some fixed apex graph H as a minor. Finally, we observe that on these classes of graphs, the k-Directed Path problem is solvable in time O((1+ε)^k · n^(f(ε))), for any ε > 0, where f is some function of ε. Our methods are based on non-trivial combinations of obstruction theorems for undirected graphs, kernelization, problem-specific combinatorial structures, and a layering technique similar to the one employed by Baker to obtain PTAS for planar graphs.