
Showing papers in "Information & Computation" in 2012


Journal ArticleDOI
TL;DR: A reconstruction of session types in a linear pi calculus where types are qualified as linear or unrestricted, which leads to a surprisingly simple theory that extends typability when compared to traditional systems for session types.
Abstract: We present a reconstruction of session types in a linear pi calculus where types are qualified as linear or unrestricted. Linearly qualified communication channels are guaranteed to occur in exactly one thread, possibly multiple times; unrestricted (or shared) channels may appear in an unbounded number of threads. In our language each channel is characterized by two distinct variables, one used for reading, the other for writing; scope restriction binds together two variables, thus establishing the correspondence between the two ends of the same channel. This mechanism allows a precise control of resources via a conventional linear type system. Furthermore, the uniform treatment of linear and shared channels leads to a surprisingly simple theory which, in addition, extends typability when compared to traditional systems for session types. We build the language gradually, starting from simple input/output, then adding recursive types, replication and finally choice. We also present an algorithmic type checking system.

133 citations


Journal ArticleDOI
TL;DR: It is established that SL without the magic wand is decidable, and it is shown that second-order logic is as expressive as SL; as a by-product the authors get undecidability of SL.
Abstract: We investigate decidability, complexity and expressive power issues for (first-order) separation logic with one record field (herein called SL) and its fragments. SL can specify properties about the memory heap of programs with singly-linked lists. Separation logic with two record fields is known to be undecidable by reduction of finite satisfiability for classical predicate logic with one binary relation. Surprisingly, we show that second-order logic is as expressive as SL and as a by-product we get undecidability of SL. This is refined by showing that SL without the separating conjunction is as expressive as SL, whence undecidable too. As a consequence, in SL the separating implication (also known as the magic wand) can simulate the separating conjunction. By contrast, we establish that SL without the magic wand is decidable, and we prove a non-elementary complexity by reduction from satisfiability for the first-order theory over finite words. This result is extended with a bounded use of the magic wand that appears in Hoare-style rules. As a generalization, it is shown that kSL, the separation logic over heaps with k>=1 record fields, is equivalent to kSO, the second-order logic over heaps with k record fields.
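
For readers parsing the abstract, the two connectives in question have the following standard semantics (textbook material, not notation introduced by the paper), where s is a store and h a heap:

```latex
% Separating conjunction: the heap splits into two disjoint parts.
(s,h) \models \varphi_1 * \varphi_2
  \iff \exists h_1,h_2.\; h = h_1 \uplus h_2 \;\wedge\;
       (s,h_1) \models \varphi_1 \;\wedge\; (s,h_2) \models \varphi_2

% Magic wand: adding any disjoint heap satisfying phi_1 yields phi_2.
(s,h) \models \varphi_1 \mathrel{-\!\!*} \varphi_2
  \iff \forall h'.\;
       \big(\mathrm{dom}(h') \cap \mathrm{dom}(h) = \emptyset
            \wedge (s,h') \models \varphi_1\big)
       \Rightarrow (s,\, h \uplus h') \models \varphi_2
```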

81 citations


Journal ArticleDOI
TL;DR: A dictionary whereby one can translate database concepts into category-theoretic concepts and vice versa is provided, and how to connect a database and a functional programming language by introducing a functorial connection between the schema and the category of types for that language is shown.
Abstract: In this paper we present a simple database definition language: that of categories and functors. A database schema is a small category and an instance is a set-valued functor on it. We show that morphisms of schemas induce three "data migration functors", which translate instances from one schema to the other in canonical ways. These functors parameterize projections, unions, and joins over all tables simultaneously and can be used in place of conjunctive and disjunctive queries. We also show how to connect a database and a functional programming language by introducing a functorial connection between the schema and the category of types for that language. We begin the paper with a multitude of examples to motivate the definitions, and near the end we provide a dictionary whereby one can translate database concepts into category-theoretic concepts and vice versa.
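
The core definitions are compact enough to sketch concretely. Below is a minimal, hypothetical encoding (own notation, not the paper's) of a schema as a presented category, an instance as a set-valued functor, and the data migration functor Delta_F(J) = J ∘ F given by precomposition:

```python
# Schema D: Emp --worksIn--> Dept, and an instance J on it: a set per
# object, a function per arrow (i.e. a set-valued functor on D).
D = {"objects": ["Emp", "Dept"], "arrows": {"worksIn": ("Emp", "Dept")}}
J = {"Emp": {"alice", "bob"},
     "Dept": {"cs", "math"},
     "worksIn": {"alice": "cs", "bob": "math"}}

# Schema C: a single object with no arrows, and a morphism F: C -> D.
C = {"objects": ["Person"], "arrows": {}}
F = {"objects": {"Person": "Emp"}, "arrows": {}}

def delta(F, J, C):
    """Delta_F(J) = J o F: pull the instance J on D back to schema C."""
    out = {obj: J[F["objects"][obj]] for obj in C["objects"]}
    out.update({a: J[F["arrows"][a]] for a in C["arrows"]})
    return out

print(delta(F, J, C))   # {'Person': {'alice', 'bob'}}
```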

79 citations


Journal ArticleDOI
TL;DR: It is shown that suitable weighted MSO logics and these new weighted automata are expressively equivalent, both for finite and infinite words, leading to decidability results for the weighted logic formulas considered.
Abstract: Weighted automata model quantitative aspects of systems like memory or power consumption. Recently, Chatterjee, Doyen, and Henzinger introduced a new kind of weighted automata which compute objectives like the average cost or the long-time peak power consumption. In these automata, operations like average, limit superior, limit inferior, limit average, or discounting are used to assign values to finite or infinite words. In general, these weighted automata are not semiring weighted anymore. Here, we establish a connection between such new kinds of weighted automata and weighted logics. We show that suitable weighted MSO logics and these new weighted automata are expressively equivalent, both for finite and infinite words. The constructions employed are effective, leading to decidability results for the weighted logic formulas considered.
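
As a concrete illustration of the value functions mentioned above (own toy example, not from the paper), here is a deterministic weighted automaton evaluated on finite words with the average measure; note that such a behaviour is not given by a semiring product:

```python
# A toy deterministic weighted automaton: transitions carry costs, and a
# finite word is assigned the *average* of the weights along its run.
delta = {  # (state, letter) -> (next state, weight)
    ("idle", "work"): ("busy", 5),
    ("busy", "work"): ("busy", 3),
    ("busy", "rest"): ("idle", 1),
    ("idle", "rest"): ("idle", 0),
}

def average_value(word, start="idle"):
    state, total = start, 0
    for letter in word:
        state, weight = delta[(state, letter)]
        total += weight
    return total / len(word)

print(average_value(["work", "work", "rest"]))   # (5 + 3 + 1) / 3 = 3.0
```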

66 citations


Journal ArticleDOI
TL;DR: In this paper, coalgebras are used to model weighted automata in two different ways: coalgebras on Set (sets and functions) characterise weighted bisimilarity, while coalgebras on Vect (vector spaces and linear maps) characterise weighted language equivalence.
Abstract: Weighted automata are a generalisation of non-deterministic automata where each transition, in addition to an input letter, has also a quantity expressing the weight (e.g. cost or probability) of its execution. As for non-deterministic automata, their behaviours can be expressed in terms of either (weighted) bisimilarity or (weighted) language equivalence. Coalgebras provide a categorical framework for the uniform study of state-based systems and their behaviours. In this work, we show that coalgebras can suitably model weighted automata in two different ways: coalgebras on Set (the category of sets and functions) characterise weighted bisimilarity, while coalgebras on Vect (the category of vector spaces and linear maps) characterise weighted language equivalence. Relying on the second characterisation, we show three different procedures for computing weighted language equivalence. The first one consists in a generalisation of the usual partition refinement algorithm for ordinary automata. The second one is the backward version of the first one. The third procedure relies on a syntactic representation of rational weighted languages.
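
The Vect-based view admits a short linear-algebra decision procedure for weighted language equivalence. The sketch below is a reconstruction in the spirit of Tzeng's classical forward-basis method, working over the rationals; it is not necessarily the paper's exact procedure:

```python
from fractions import Fraction as F

def mat_vec(row, M):                      # row vector times matrix
    return [sum(row[i] * M[i][j] for i in range(len(M)))
            for j in range(len(M[0]))]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reduce_against(basis, v):
    """Eliminate v against the echelon basis; None if linearly dependent."""
    v = list(v)
    for pivot, b in sorted(basis):
        if v[pivot] != 0:
            coef = v[pivot] / b[pivot]
            v = [x - coef * y for x, y in zip(v, b)]
    for i, x in enumerate(v):
        if x != 0:
            return i, v                   # new pivot position, reduced vector
    return None

def equivalent(a1, mats1, eta1, a2, mats2, eta2, alphabet):
    """Do the two automata assign the same weight to every word?"""
    n1 = len(a1)
    eta = eta1 + [-x for x in eta2]       # weights agree iff dot(., eta) == 0
    basis, queue = [], [a1 + a2]          # explore the span of alpha * M_w
    while queue:
        red = reduce_against(basis, queue.pop())
        if red is None:
            continue                      # already in the span: nothing new
        if dot(red[1], eta) != 0:
            return False                  # some word gets different weights
        basis.append(red)
        for a in alphabet:
            queue.append(mat_vec(red[1][:n1], mats1[a]) +
                         mat_vec(red[1][n1:], mats2[a]))
    return True

# Both automata assign weight 2^|w| to every word over {"a"}:
print(equivalent([F(1)], {"a": [[F(2)]]}, [F(1)],
                 [F(1), F(0)], {"a": [[F(1), F(1)], [F(1), F(1)]]},
                 [F(1), F(1)], ["a"]))    # True
```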

66 citations


Journal ArticleDOI
TL;DR: This article presents a formalization of floating-point arithmetic that makes it possible to efficiently compute inside the proofs of the Coq system using a certified library that provides the basic arithmetic operators and a few elementary functions.
Abstract: The process of proving some mathematical theorems can be greatly reduced by relying on numerically-intensive computations with a certified arithmetic. This article presents a formalization of floating-point arithmetic that makes it possible to efficiently compute inside the proofs of the Coq system. This certified library is a multi-radix and multi-precision implementation free from underflow and overflow. It provides the basic arithmetic operators and a few elementary functions.
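
A toy model of the number format described above may help: a value is a pair (m, e) denoting m * beta^e with an unbounded exponent, so underflow and overflow cannot occur, and rounding to nearest keeps at most a fixed number of base-beta digits. This is an illustrative sketch, not the Coq library's code:

```python
BETA, PREC = 2, 24          # radix and precision are parameters: multi-radix

def ndigits(m):
    d = 0
    while m:
        m //= BETA
        d += 1
    return d

def rnd(m, e):
    """Round m * BETA**e to PREC digits, to nearest, ties to even."""
    sign, m = (-1 if m < 0 else 1), abs(m)
    excess = ndigits(m) - PREC
    if excess > 0:
        q, r = divmod(m, BETA ** excess)
        if 2 * r > BETA ** excess or (2 * r == BETA ** excess and q % 2):
            q += 1
        m, e = q, e + excess
        if ndigits(m) > PREC:            # rounding overflowed to a new digit
            m //= BETA                   # exact: m equals BETA**PREC here
            e += 1
    return sign * m, e

def fadd(x, y):
    (mx, ex), (my, ey) = x, y
    e = min(ex, ey)                      # align on the smaller exponent
    return rnd(mx * BETA ** (ex - e) + my * BETA ** (ey - e), e)

def fmul(x, y):
    return rnd(x[0] * y[0], x[1] + y[1])

# 1 + 2^-30 rounds back to 1 at precision 24, as expected:
print(fadd((1, 0), (1, -30)))            # (8388608, -23), i.e. 2^23 * 2^-23
```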

59 citations


Journal ArticleDOI
TL;DR: It is proved that the connected search game is monotone for trees, i.e. restricting search strategies to only those where the clean territories increase monotonically does not require more searchers.
Abstract: In the graph searching game the opponents are a set of searchers and a fugitive in a graph. The searchers try to capture the fugitive by applying some sequence of moves that include placement, removal, or sliding of a searcher along an edge. The fugitive tries to avoid capture by moving along unguarded paths. The search number of a graph is the minimum number of searchers required to guarantee the capture of the fugitive. In this paper, we initiate the study of this game under the natural restriction of connectivity where we demand that in each step of the search the locations of the graph that are clean (i.e. non-accessible to the fugitive) remain connected. We give evidence that many of the standard mathematical tools used so far in classic graph searching fail under the connectivity requirement. We also settle the question on "the price of connectivity", that is, how many more searchers are required for searching a graph when the connectivity demand is imposed. We make estimations of the price of connectivity on general graphs and we provide tight bounds for the case of trees. In particular, for an n-vertex graph the ratio between the connected searching number and the non-connected one is O(log n), while for trees this ratio is always at most 2. We also conjecture that this constant-ratio upper bound for trees also holds for all graphs. Our combinatorial results imply a complete characterization of connected graph searching on trees. It is based on a forbidden-graph characterization of the connected search number. We prove that the connected search game is monotone for trees, i.e. restricting search strategies to only those where the clean territories increase monotonically does not require more searchers. A consequence of our results is that the connected search number can be computed in polynomial time on trees; moreover, we show how to make this algorithm distributed. Finally, we reveal connections of this parameter to other invariants on trees such as the Horton-Strahler number.

48 citations


Journal ArticleDOI
TL;DR: A new data structure called the bidirectional wavelet index is presented that supports bidirectional search with much less space, making it possible to search for candidates of RNA secondary structural patterns in large genomes, for example the complete human genome.
Abstract: Searching for genes encoding microRNAs (miRNAs) is an important task in genome analysis. Because the secondary structure of miRNA (but not the sequence) is highly conserved, the genes encoding it can be determined by finding regions in a genomic DNA sequence that match the structure. It is known that algorithms using a bidirectional search on the DNA sequence for this task outperform algorithms based on unidirectional search. The data structures supporting a bidirectional search (affix trees and affix arrays), however, are rather complex and suffer from their large space consumption. Here, we present a new data structure called bidirectional wavelet index that supports bidirectional search with much less space. With this data structure, it is possible to search for candidates of RNA secondary structural patterns in large genomes, for example the complete human genome. Another important application of this data structure is short read alignment. As a second contribution, we show how bidirectional matching statistics can be computed in linear time.
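
For intuition, bidirectional search means a current match can be extended by a character on either side. The naive baseline below (own sketch; the wavelet index replaces this linear scan with compact rank queries) makes the interface concrete:

```python
# occs: set of (start, end) half-open intervals where T matches the pattern.
def extend(T, occs, side, c):
    out = set()
    for s, e in occs:
        if side == "left" and s > 0 and T[s - 1] == c:
            out.add((s - 1, e))          # grow the match one step leftwards
        if side == "right" and e < len(T) and T[e] == c:
            out.add((s, e + 1))          # grow the match one step rightwards
    return out

T = "acgacgt"
occs = {(i, i) for i in range(len(T) + 1)}   # the empty match, everywhere
occs = extend(T, occs, "right", "c")         # matches of "c"
occs = extend(T, occs, "left", "a")          # matches of "ac"
occs = extend(T, occs, "right", "g")         # matches of "acg"
print(sorted(occs))                          # [(0, 3), (3, 6)]
```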

48 citations


Journal ArticleDOI
TL;DR: The dynamical behavior of non-uniform cellular automata is compared with that of classical cellular automata, and a strong form of equicontinuity property specially suited for non-uniform cellular automata is studied.
Abstract: The dynamical behavior of non-uniform cellular automata is compared with the one of classical cellular automata. Several differences and similarities are pointed out by a series of examples. Decidability of basic properties like surjectivity and injectivity is also established. The final part studies a strong form of equicontinuity property specially suited for non-uniform cellular automata.
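
A non-uniform cellular automaton differs from the classical model only in that the local rule may vary from cell to cell; a minimal sketch (own example):

```python
def step(config, rules):
    n = len(config)
    return [rules[i](config[(i - 1) % n], config[i], config[(i + 1) % n])
            for i in range(n)]

xor_rule = lambda l, c, r: l ^ r                  # elementary rule 90
majority = lambda l, c, r: int(l + c + r >= 2)    # majority vote

# Non-uniform assignment: half the cells use XOR, the other half majority.
rules = [xor_rule] * 4 + [majority] * 4
config = [0, 1, 0, 0, 1, 1, 0, 1]
for _ in range(3):
    config = step(config, rules)
    print(config)
```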

40 citations


Journal ArticleDOI
TL;DR: The exact number of states in DFAs needed to represent unary languages recognized by n-state UFAs is determined in terms of a new number-theoretic function g̃.
Abstract: Nondeterministic finite automata (NFA) with at most one accepting computation on every input string are known as unambiguous finite automata (UFA). This paper considers UFAs over a one-letter alphabet, and determines the exact number of states in DFAs needed to represent unary languages recognized by n-state UFAs in terms of a new number-theoretic function g̃. The growth rate of g̃(n), and therefore of the UFA-DFA tradeoff, is estimated as e^(Θ((n ln^2 n)^(1/3))). The conversion of an n-state unary NFA to a UFA requires UFAs with g(n)+O(n^2) = e^((1+o(1)) sqrt(n ln n)) states, where g(n) is the greatest order of a permutation of n elements, known as Landau's function. In addition, it is shown that representing the complement of n-state unary UFAs requires UFAs with at least n^(2-o(1)) states in the worst case, while the Kleene star requires (n-1)^2+1 states in the worst case.
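
Landau's function g(n), central to the bounds above, is the maximum of lcm(a_1, ..., a_k) over all integer partitions n = a_1 + ... + a_k. A brute-force computation for small n (own sketch):

```python
from math import lcm

def landau(n):
    best = [1]
    def rec(remaining, max_part, current_lcm):
        # leftover summands can be taken as 1s, which do not change the lcm
        best[0] = max(best[0], current_lcm)
        for part in range(2, min(remaining, max_part) + 1):
            rec(remaining - part, part, lcm(current_lcm, part))
    rec(n, n, 1)
    return best[0]

print([landau(n) for n in range(1, 11)])
# [1, 2, 3, 4, 6, 6, 12, 15, 20, 30]
```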

40 citations


Journal ArticleDOI
TL;DR: In this article, the effective version of Birkhoff's ergodic theorem for Martin-Löf random points and effectively open sets is proved by reduction to a special case, a generalization of Kučera's theorem: a trajectory of a computable ergodic mapping that starts from a random point cannot remain inside an effectively open set of measure less than 1.
Abstract: We prove the effective version of Birkhoff's ergodic theorem for Martin-Löf random points and effectively open sets, improving the results previously obtained in this direction (in particular those of Vyugin, Nandakumar and Hoyrup, Rojas). The proof consists of two steps. First, we prove a generalization of Kučera's theorem, which is a particular case of the effective ergodic theorem: a trajectory of a computable ergodic mapping that starts from a random point cannot remain inside an effectively open set of measure less than 1. Second, we show that the full statement of the effective ergodic theorem can be reduced to this special case. Both steps use the statement of the classical ergodic theorem but not its usual classical proof. Therefore, we get a new simple proof of the effective ergodic theorem (with weaker assumptions than before). This result was recently obtained independently by Franklin, Greenberg, Miller and Ng.

Journal ArticleDOI
TL;DR: The minimization problem of probabilistic and quantum automata is reduced to finding a solution of a system of algebraic polynomial (in)equations and the state minimization is shown to be decidable and in EXPSPACE.
Abstract: Several types of automata, such as probabilistic and quantum automata, require to work with real and complex numbers. For such automata the acceptance of an input is quantified with a probability. There are plenty of results in the literature addressing the complexity of checking the equivalence of these automata, that is, checking whether two automata accept all inputs with the same probability. On the other hand, the critical problem of finding the minimal automata equivalent to a given one has been left open [C. Moore, J.P. Crutchfield, Quantum automata and quantum grammars, Theoret. Comput. Sci. 237 (2000) 275-306, see p. 304, Problem 5]. In this work, we reduce the minimization problem of probabilistic and quantum automata to finding a solution of a system of algebraic polynomial (in)equations. An EXPSPACE upper bound on the complexity of the minimization problem is derived by applying Renegar's algorithm. More specifically, we show that the state minimization of probabilistic automata, measure-once quantum automata, measure-many quantum automata, measure-once generalized quantum automata, and measure-many generalized quantum automata is decidable and in EXPSPACE. Finally, we also solve an open problem concerning minimal covering of stochastic sequential machines [A. Paz, Introduction to Probabilistic Automata, Academic Press, New York, 1971, p. 43].

Journal ArticleDOI
TL;DR: This paper thoroughly justifies both algorithms, proves their correctness, compares their worst-case complexity and experimentally evaluates their efficiency, and presents an open-source implementation of them that will make it very easy to include termination-analysis capabilities in automatic program verifiers.
Abstract: The classical technique for proving termination of a generic sequential computer program involves the synthesis of a ranking function for each loop of the program. Linear ranking functions are particularly interesting because many terminating loops admit one, and algorithms exist to synthesize such functions automatically. In this paper we present two such algorithms: one based on work dated 1991 by Sohn and Van Gelder; the other, due to Podelski and Rybalchenko, dated 2004. Remarkably, while the two algorithms will synthesize a linear ranking function under exactly the same set of conditions, the former is mostly unknown to the community of termination analysis and its general applicability has never been put forward before the present paper. We thoroughly justify both algorithms, we prove their correctness, we compare their worst-case complexity and experimentally evaluate their efficiency, and we present an open-source implementation of them that will make it very easy to include termination-analysis capabilities in automatic program verifiers.
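
To make the objects concrete: for a loop whose transition relation is a polyhedron, validity of a *given* linear ranking function reduces to two linear programs (boundedness and strict decrease). The sketch below (own construction, assuming SciPy is available; the paper's algorithms additionally synthesize the function) checks rho(x, x') = x for the loop `while x >= 1: x := x - 1`:

```python
# Transition polyhedron over z = (x, x'):
#   -x <= -1 (guard),  x' - x <= -1  and  x - x' <= 1  (the update).
from scipy.optimize import linprog

A_ub = [[-1, 0],
        [-1, 1],
        [1, -1]]
b_ub = [-1, -1, 1]
bounds = [(None, None)] * 2              # variables are unconstrained reals

# Candidate ranking function rho(x) = x.
bounded = linprog(c=[1, 0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
decreasing = linprog(c=[1, -1], A_ub=A_ub, b_ub=b_ub, bounds=bounds)

# rho is a valid linear ranking function iff it is bounded below on the
# transition polyhedron and decreases by at least 1 on every step.
print(bounded.fun >= -1e-9 and decreasing.fun >= 1 - 1e-9)   # True
```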

Journal ArticleDOI
S. Chevillard
TL;DR: This paper studies three different algorithms for evaluating erf and erfc, and the determination of the order of truncation, the analysis of roundoff errors and the way of choosing the working precision are presented.
Abstract: The error function erf is a special function. It is widely used in statistical computations, for instance, where it is also known as the standard normal cumulative probability. The complementary error function is defined as erfc(x) = 1 - erf(x). In this paper, the computation of erf(x) and erfc(x) in arbitrary precision is detailed: our algorithms take as input a target precision t' and deliver approximate values of erf(x) or erfc(x) with a relative error guaranteed to be bounded by 2^(-t'). We study three different algorithms for evaluating erf and erfc. These algorithms are completely detailed. In particular, the determination of the order of truncation, the analysis of roundoff errors and the way of choosing the working precision are presented. The scheme used for implementing erf and erfc and the proofs are expressed in a general setting, so they can directly be reused for the implementation of other functions. We have implemented the three algorithms and studied experimentally which algorithm is best to use as a function of the point x and the target precision t'.
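
A drastically simplified version of the ingredients listed above (truncation order chosen from the term size, guard digits added to the working precision) can be written with mpmath, summing the Maclaurin series of erf. This is an illustration for small-to-moderate x, not the paper's algorithms:

```python
# erf(x) = (2/sqrt(pi)) * sum_{n>=0} (-1)^n x^(2n+1) / (n! (2n+1))
from mpmath import mp, mpf

def erf_series(x, target_digits):
    mp.dps = target_digits + 10          # working precision with guard digits
    x = mpf(x)
    term, total, n = x, mpf(0), 0        # term is the n-th series term
    while abs(term) > mpf(10) ** (-(target_digits + 5)):
        total += term                    # truncation order chosen on the fly
        term *= -x * x * (2 * n + 1) / ((n + 1) * (2 * n + 3))
        n += 1
    return 2 / mp.sqrt(mp.pi) * total

print(erf_series("1.0", 50))
print(mp.erf(1))                         # mpmath's builtin, for comparison
```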

Journal ArticleDOI
TL;DR: By improving the algorithm for solving hierarchical parity games, this work is able to solve the model-checking problem for the μ-calculus in Pspace and time complexity that is only polynomial in the depth of the hierarchy.
Abstract: We present a unified game-based approach for branching-time model checking of hierarchical systems. Such systems are exponentially more succinct than standard state-transition graphs, as repeated sub-systems are described only once. Early work on model checking of hierarchical systems shows that one can do better than a naive algorithm that "flattens" the system and removes the hierarchy. Given a hierarchical system S and a branching-time specification φ for it, we reduce the model-checking problem (does S satisfy φ?) to the problem of solving a hierarchical game obtained by taking the product of S with an alternating tree automaton A_φ for φ. Our approach leads to clean, uniform, and improved model-checking algorithms for a variety of branching-time temporal logics. In particular, by improving the algorithm for solving hierarchical parity games, we are able to solve the model-checking problem for the μ-calculus in Pspace and time complexity that is only polynomial in the depth of the hierarchy. Our approach also leads to an abstraction-refinement paradigm for hierarchical systems. The abstraction maintains the hierarchy, and is obtained by merging both states and sub-systems into abstract states.

Journal ArticleDOI
TL;DR: A novel technique, suitable for bit-parallelism, for representing both the nondeterministic automaton and the nondeterministic suffix automaton of a given string in a more compact way, based on a particular factorization of strings.
Abstract: We present a novel technique, suitable for bit-parallelism, for representing both the nondeterministic automaton and the nondeterministic suffix automaton of a given string in a more compact way. Our approach is based on a particular factorization of strings which, on average, allows one to pack into a machine word of w bits the state configurations of automata for strings of length greater than w. We adapted the Shift-And and BNDM algorithms using our encoding and compared them with the original algorithms. Experimental results show that the new variants are generally faster for long patterns.
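
For reference, the baseline Shift-And algorithm that the paper's factorization-based encoding compacts looks as follows (textbook version, one bit per pattern position):

```python
def shift_and(text, pattern):
    m = len(pattern)
    mask = {}                        # per-character bit masks
    for i, c in enumerate(pattern):
        mask[c] = mask.get(c, 0) | (1 << i)
    D, occurrences = 0, []           # D: active states of the NFA, as bits
    for j, c in enumerate(text):
        D = ((D << 1) | 1) & mask.get(c, 0)
        if D & (1 << (m - 1)):       # accepting bit set: full match ends at j
            occurrences.append(j - m + 1)
    return occurrences

print(shift_and("annbanana", "ana"))  # [4, 6]
```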

Journal ArticleDOI
TL;DR: It is shown that for other two-dimensional systems the reachability question remains unanswered, by proving that it is as hard as the reachability problem for piecewise affine maps on the real line, which is a well known open problem.
Abstract: Even though many attempts have been made to define the boundary between decidable and undecidable hybrid systems, the affair is far from being resolved. More and more low dimensional systems are being shown to be undecidable with respect to reachability, and many open problems in between are being discovered. In this paper, we present various two-dimensional hybrid systems for which the reachability problem is undecidable. We show their undecidability by simulating Minsky machines. Their proximity to the decidability frontier is understood by inspecting the most parsimonious constraints necessary to make reachability over these automata decidable. We also show that for other two-dimensional systems, the reachability question remains unanswered, by proving that it is as hard as the reachability problem for piecewise affine maps on the real line, which is a well known open problem.
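
For intuition about the open problem mentioned last: reachability for a piecewise affine map on the real line can be semi-decided by exact iteration, but nothing certifies unreachability. A sketch with a toy two-piece map (own example):

```python
from fractions import Fraction as F

def pam(x):
    # A toy two-piece affine map on [0, 1): the dyadic doubling map.
    return 2 * x if x < F(1, 2) else 2 * x - 1

def reaches(x0, target, max_steps=1000):
    x = x0
    for step in range(max_steps):
        if x == target:
            return step
        x = pam(x)
    return None                      # unknown: not reached within the budget

print(reaches(F(3, 7), F(5, 7)))     # 2, since 3/7 -> 6/7 -> 5/7
```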

Journal ArticleDOI
TL;DR: The decomposition uses the structural operational semantics that underlies the process algebra to derive congruence formats for two weak and rooted weak semantics: branching and η-bisimilarity.
Abstract: We present a method for decomposing modal formulas for processes with the internal action τ. To decide whether a process algebra term satisfies a modal formula, one can check whether its subterms satisfy formulas that are obtained by decomposing the original formula. The decomposition uses the structural operational semantics that underlies the process algebra. We use this decomposition method to derive congruence formats for two weak and rooted weak semantics: branching and η-bisimilarity.

Journal ArticleDOI
TL;DR: It is demonstrated that no sets with super-exponential growth rate can be represented and the results have direct implications on the power of unary conjunctive grammars with one nonterminal symbol.
Abstract: Equations of the form X = φ(X) are considered, where the unknown X is a set of natural numbers. The expression φ(X) may contain the operations of set addition, defined as S+T = {m+n | m ∈ S, n ∈ T}, union, intersection, as well as ultimately periodic constants. An equation with a non-periodic solution of exponential growth rate is constructed. At the same time it is demonstrated that no sets with super-exponential growth rate can be represented. It is also shown that restricted classes of these equations cannot represent sets with super-linearly growing complements nor sets that are additive bases of order 2. The results have direct implications on the power of unary conjunctive grammars with one nonterminal symbol.
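
The operations involved are easy to make concrete. The sketch below computes the least solution of an example equation X = φ(X) by fixed-point iteration restricted to an initial segment; the restriction is exact here because set addition only produces larger numbers (own illustration):

```python
N = 40                               # initial segment {0, ..., N}

def add(S, T):
    return {m + n for m in S for n in T if m + n <= N}

def phi(X):
    # Example equation: X = (X + {2}) U {1}; its least solution is the set
    # of odd numbers, an ultimately periodic set.
    return add(X, {2}) | {1}

X = set()
while True:                          # Kleene iteration to the least fixpoint
    Y = phi(X)
    if Y == X:
        break
    X = Y
print(sorted(X))                     # [1, 3, 5, ..., 39]
```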

Journal ArticleDOI
TL;DR: In this paper, a log-space construction of a skew-symmetric, polynomially-bounded edge weight function for directed planar graphs, such that the weight of any simple cycle in the graph is non-zero with respect to this weight function, was given.
Abstract: We show a simple application of Green's theorem from multivariable calculus to the isolation problem in planar graphs. In particular, we give a log-space construction of a skew-symmetric, polynomially-bounded edge weight function for directed planar graphs, such that the weight of any simple cycle in the graph is non-zero with respect to this weight function. As a direct consequence of the above weight function, we are able to isolate a directed path between two fixed vertices, in a directed planar graph. We also show that given a bipartite planar graph, we can obtain an edge weight function (using the above function) in log-space, which isolates a perfect matching in the given graph. Earlier this was known to be true only for grid graphs - which is a proper subclass of planar graphs. We also look at the problem of obtaining a straight line embedding of a planar graph in log-space. Although we do not quite achieve this goal, we give a piecewise straight line embedding of the given planar graph in log-space.

Journal ArticleDOI
TL;DR: FJig is presented, a simple calculus where basic building blocks are classes in the style of Featherweight Java, declaring fields, methods and one constructor, and provides two different semantics of an FJig program: flattening and direct semantics.
Abstract: We present FJig, a simple calculus where basic building blocks are classes in the style of Featherweight Java, declaring fields, methods and one constructor. However, inheritance has been generalized to the much more flexible notion originally proposed in Bracha's Jigsaw framework. That is, classes play also the role of modules, that can be composed by a rich set of operators, all of which can be expressed by a minimal core. Fields and methods can be declared of four different kinds (abstract, virtual, frozen, local) determining how they are affected by the operators. We keep the nominal approach of Java-like languages, that is, types are class names. However, a class is not necessarily a structural subtype of any class used in its defining expression. While this allows a more flexible reuse, it may prevent the (generalized) inheritance relation from being a subtyping relation. So, the required subtyping relations among classes are declared by the programmer and checked by the type system. The calculus allows the encoding of a large variety of different mechanisms for software composition in class-based languages, including standard inheritance, mixin classes, traits and hiding. Hence, FJig can be used as a unifying framework for analyzing existing mechanisms and proposing new extensions. We provide two different semantics of an FJig program: flattening and direct semantics. The difference is analogous to that between two intuitive models to understand inheritance: the former where inherited methods are copied into heir classes, and the latter where member lookup is performed by ascending the inheritance chain. Here we address equivalence of these two views for a more sophisticated composition mechanism.

Journal ArticleDOI
TL;DR: This work considers the approximate counting problem for Boolean CSP with bounded-degree instances, for constraint languages containing the two unary constant relations {0} and {1} and obtains a complete classification of the complexity of this problem.
Abstract: The degree of a CSP instance is the maximum number of times that any variable appears in the scopes of constraints. We consider the approximate counting problem for Boolean CSP with bounded-degree instances, for constraint languages containing the two unary constant relations {0} and {1}. When the maximum allowed degree is large enough (at least 6) we obtain a complete classification of the complexity of this problem. It is exactly solvable in polynomial time if every relation in the constraint language is affine. It is equivalent to the problem of approximately counting independent sets in bipartite graphs if every relation can be expressed as conjunctions of {0}, {1} and binary implication. Otherwise, there is no FPRAS unless NP=RP. For lower degree bounds, additional cases arise, where the complexity is related to the complexity of approximately counting independent sets in hypergraphs.

Journal ArticleDOI
TL;DR: This work proposes a new solution for approximate overlaps based on backward backtracking (Lam, et al., 2008) and suffix filters (Kärkkäinen and Na, 2008), and uses nH_k + o(n log σ) + r log r bits of space, where H_k is the k-th order entropy and σ the alphabet size.
Abstract: Finding approximate overlaps is the first phase of many sequence assembly methods. Given a set of strings of total length n and an error-rate ε, the goal is to find, for all pairs of strings, their suffix/prefix matches (overlaps) that are within edit distance ⌈εℓ⌉, where ℓ is the length of the overlap. We propose a new solution for this problem based on backward backtracking (Lam, et al., 2008) and suffix filters (Kärkkäinen and Na, 2008). Our technique uses nH_k + o(n log σ) + r log r bits of space, where H_k is the k-th order entropy and σ the alphabet size. In practice, it is more scalable in terms of space than q-gram filters (Rasmussen, et al., 2006), and comparable in terms of time. Our method is also easy to parallelize and scales up to millions of DNA reads.
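
A brute-force reference implementation of the problem statement (own sketch; the paper's contribution is solving it at scale within compressed space) can serve to fix the definitions:

```python
from math import ceil

def edit_distance(a, b):
    dp = list(range(len(b) + 1))         # one-row Levenshtein DP
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (a[i - 1] != b[j - 1]))
    return dp[-1]

def best_overlap(s, t, eps):
    """Longest ell with ed(suffix of s, prefix of t) <= ceil(eps * ell)."""
    for ell in range(min(len(s), len(t)), 0, -1):
        if edit_distance(s[-ell:], t[:ell]) <= ceil(eps * ell):
            return ell
    return 0

print(best_overlap("ACGTTGCA", "TGACAGGT", eps=0.25))
```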

Journal ArticleDOI
TL;DR: This work obtains a dichotomy theorem of approximate counting for complex-weighted Boolean CSPs, provided that all complex-valued unary constraints are freely available to use and introduces a novel notion of T-constructibility that naturally induces approximation-preserving reducibility.
Abstract: Constraint satisfaction problems (or CSPs) have been extensively studied in, for instance, artificial intelligence, database theory, graph theory, and statistical physics. From a practical viewpoint, it is beneficial to approximately solve those CSPs. When one tries to approximate the total number of truth assignments that satisfy all Boolean-valued constraints for (unweighted) Boolean CSPs, there is a known trichotomy theorem by which all such counting problems are neatly classified into exactly three categories under polynomial-time (randomized) approximation-preserving reductions. In contrast, we obtain a dichotomy theorem of approximate counting for complex-weighted Boolean CSPs, provided that all complex-valued unary constraints are freely available to use. It is the expressive power of free unary constraints that enables us to prove such a stronger, complete classification theorem. This discovery makes a step forward in the quest for the approximation-complexity classification of all counting CSPs. To deal with complex weights, we employ proof techniques of factorization and arity reduction along the line of solving Holant problems. Moreover, we introduce a novel notion of T-constructibility that naturally induces approximation-preserving reducibility. Our result also gives an approximation analogue of the dichotomy theorem on the complexity of exact counting for complex-weighted Boolean CSPs.

Journal ArticleDOI
TL;DR: Solving a combinatorial problem of hard computational complexity, the complete class of 20-trinucleotide circular codes, which contains 12,964,440 elements, is determined; a surprising relation with the symmetric group Σ_4 appears but remains unexplained so far.
Abstract: Trinucleotide comma-free codes and trinucleotide circular codes are two important classes of codes in code theory and theoretical biology. A trinucleotide circular code containing exactly 20 elements is called here a 20-trinucleotide circular code. In this paper, solving a combinatorial problem of hard computational complexity, we extend and improve our results of C.J. Michel, G. Pirillo, and M.A. Pirillo (2008) [14] concerning the small class of 528 self-complementary 20-trinucleotide circular codes, to the complete class of the 20-trinucleotide circular codes which contains 12,964,440 elements. A surprising relation with the symmetric group Σ_4 appears but it remains unexplained so far.
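
Circularity of a trinucleotide code can be tested mechanically. The sketch below uses the graph criterion established in later work by Fimmel, Michel and Strüngmann (an assumption imported from outside this paper): build a graph with edges b1 -> b2b3 and b1b2 -> b3 for each trinucleotide b1b2b3; the code is circular iff this graph is acyclic.

```python
def is_circular(code):
    edges = {}
    for w in code:
        edges.setdefault(w[0], set()).add(w[1:])   # b1  -> b2b3
        edges.setdefault(w[:2], set()).add(w[2])   # b1b2 -> b3
    state = {}                         # DFS cycle detection: 1=open, 2=done
    def has_cycle(v):
        state[v] = 1
        for u in edges.get(v, ()):
            if state.get(u) == 1 or (state.get(u) is None and has_cycle(u)):
                return True
        state[v] = 2
        return False
    return not any(state.get(v) is None and has_cycle(v) for v in list(edges))

print(is_circular({"AAC", "CAG"}))     # True: the graph is acyclic
print(is_circular({"AAC", "ACA"}))     # False: A -> AC -> A is a cycle
```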

Journal ArticleDOI
TL;DR: It is shown that, for k constant, k-tree isomorphism can be decided in logarithmic space by giving an O(k log n) space canonical labeling algorithm that computes a unique tree decomposition, uses colors to fully encode the structure of the original graph in the decomposition tree and invokes Lindell's tree canonization algorithm.
Abstract: We show that, for k constant, k-tree isomorphism can be decided in logarithmic space by giving an O(k log n) space canonical labeling algorithm. The algorithm computes a unique tree decomposition, uses colors to fully encode the structure of the original graph in the decomposition tree and invokes Lindell's tree canonization algorithm. As a consequence, the isomorphism, the automorphism, as well as the canonization problem for k-trees are all complete for deterministic logspace. Completeness for logspace holds even for simple structural properties of k-trees. We also show that a variant of our canonical labeling algorithm runs in time O((k+1)!n), where n is the number of vertices, yielding the fastest known FPT algorithm for k-tree isomorphism.
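
The tree canonization idea at the heart of the algorithm is the classical one: a rooted tree's canonical string is the sorted concatenation of its subtrees' canonical strings (the AHU scheme). Lindell's contribution, which the paper invokes, is carrying this out in logarithmic space; the sketch below is only the textbook recursive version:

```python
def canon(tree, root, parent=None):
    """Canonical string of a rooted tree given as an adjacency dict."""
    children = [c for c in tree.get(root, []) if c != parent]
    return "(" + "".join(sorted(canon(tree, c, root) for c in children)) + ")"

t1 = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
t2 = {"a": ["b", "c"], "b": ["a"], "c": ["a", "d"], "d": ["c"]}
print(canon(t1, 1) == canon(t2, "a"))   # True: the rooted trees are isomorphic
```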

Journal ArticleDOI
TL;DR: This work investigates the complexity of the satisfiability problem of temporal logics with a finite set of modalities definable in the existential fragment of monadic second-order logic and shows that the problem is in pspace over the class of all linear orders.
Abstract: We investigate the complexity of the satisfiability problem of temporal logics with a finite set of modalities definable in the existential fragment of monadic second-order logic. We show that the problem is in pspace over the class of all linear orders. The same techniques show that the problem is in pspace over many interesting classes of linear orders.

Journal ArticleDOI
TL;DR: The poa is investigated for a well-known structure of processors, where all machines are of the same speed except for one possibly faster machine, as a function of both the speed ratio between the fastest machine and the slow machines, and the number of slow machines.
Abstract: Recent interest in Nash equilibria led to a study of the price of anarchy (poa) and the strong price of anarchy (spoa) for scheduling problems. The two measures express the worst case ratio between the cost of an equilibrium (a pure Nash equilibrium, and a strong equilibrium, respectively) to the cost of a social optimum. The atomic players are the jobs, and the delay of a job is the completion time of the machine running it, also called the load of this machine. The social goal is to minimize the maximum delay of any job, while the selfish goal of each job is to minimize its own delay, that is, the delay of the machine running it. We consider scheduling on uniformly related machines. While previous studies either consider identical speed machines or an arbitrary number of speeds, focusing on the number of machines as a parameter, we consider the situation in which the number of different speeds is small. We reveal a linear dependence between the number of speeds and the poa. For a set of machines of at most p speeds, the poa turns out to be exactly p+1. The growth of the poa for large numbers of related machines is therefore a direct result of the large number of potential speeds. We further consider a well-known structure of processors, where all machines are of the same speed except for one possibly faster machine. We investigate the poa as a function of both the speed ratio between the fastest machine and the slow machines, and the number of slow machines.
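
The definitions above are easy to explore by brute force on tiny instances (own sketch, not from the paper): enumerate all assignments, keep the pure Nash equilibria, and compare the worst equilibrium makespan with the optimum:

```python
from itertools import product

def loads(assign, jobs, speeds):
    load = [0.0] * len(speeds)
    for j, m in enumerate(assign):
        load[m] += jobs[j] / speeds[m]
    return load

def is_nash(assign, jobs, speeds):
    load = loads(assign, jobs, speeds)
    for j, m in enumerate(assign):
        for m2 in range(len(speeds)):
            if m2 != m and load[m2] + jobs[j] / speeds[m2] < load[m]:
                return False        # job j would strictly reduce its delay
    return True

def price_of_anarchy(jobs, speeds):
    machines = range(len(speeds))
    costs = {a: max(loads(a, jobs, speeds))
             for a in product(machines, repeat=len(jobs))}
    opt = min(costs.values())
    worst_ne = max(c for a, c in costs.items() if is_nash(a, jobs, speeds))
    return worst_ne / opt

# Two speeds (2 and 1), matching the "p speeds" discussion above:
print(price_of_anarchy(jobs=[2.0, 2.0, 1.0, 1.0], speeds=[2.0, 1.0, 1.0]))
```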

Journal ArticleDOI
TL;DR: This paper provides new algorithms which deal with complex floating point numbers and shows that the computed results are as accurate as if computed in twice the working precision.
Abstract: Several different techniques and softwares intend to improve the accuracy of results computed in a fixed finite precision. Here we focus on methods to improve the accuracy of summation, dot product and polynomial evaluation. Such algorithms exist for real floating point numbers. In this paper, we provide new algorithms which deal with complex floating point numbers. We show that the computed results are as accurate as if computed in twice the working precision. The algorithms are simple since they only require addition, subtraction and multiplication of floating point numbers in the same working precision as the given data.
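
The standard building blocks for such algorithms are error-free transformations: TwoSum and Dekker/Veltkamp's TwoProd return a result together with its exact rounding error, using only add, subtract and multiply in the working precision. The sketch below (own illustration in the spirit of the paper, not its exact algorithms) applies them to the cancellation-prone real part of a complex product:

```python
def two_sum(a, b):
    s = a + b
    bp = s - a
    return s, (a - (s - bp)) + (b - bp)       # s + err == a + b, exactly

def split(a, factor=2 ** 27 + 1):             # Veltkamp splitting for doubles
    c = factor * a
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):                           # Dekker: p + err == a * b
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    return p, al * bl - (((p - ah * bh) - al * bh) - ah * bl)

def real_part_compensated(a, b, c, d):
    """Real part of (a+bi)(c+di) = a*c - b*d, as if in doubled precision."""
    p1, e1 = two_prod(a, c)
    p2, e2 = two_prod(b, d)
    s, e3 = two_sum(p1, -p2)
    return s + (e1 - e2 + e3)

a = c = 1.0 + 2.0 ** -27
b, d = 1.0, 1.0 + 2.0 ** -26
print(a * c - b * d)                          # naive: 0.0, digits cancelled
print(real_part_compensated(a, b, c, d))      # 2**-54, the exact value here
```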

Journal ArticleDOI
TL;DR: The Graph Isomorphism problem restricted to graphs of bounded treewidth or bounded tree distance width is known to be solvable in polynomial time; restricted space algorithms are given for these problems, improving the known TC^1 upper bound for bounded treewidth graphs to LogCFL.
Abstract: The Graph Isomorphism problem restricted to graphs of bounded treewidth or bounded tree distance width is known to be solvable in polynomial time. We give restricted space algorithms for these problems, proving the following results:

* Isomorphism for bounded tree distance width graphs is in L and thus complete for the class. We also show that for this kind of graphs a canon can be computed within logspace.
* For bounded treewidth graphs, when both input graphs are given together with a tree decomposition, the problem of whether there is an isomorphism which respects the decompositions (i.e. when only those isomorphisms are considered that map bags in one decomposition blockwise onto bags in the other decomposition) is in L.
* For bounded treewidth graphs, when one of the input graphs is given with a tree decomposition, the isomorphism problem is in LogCFL.
* As a corollary, the isomorphism problem for bounded treewidth graphs is in LogCFL. This improves the known TC^1 upper bound for the problem given by Grohe and Verbitsky.