
Showing papers presented at "Conference on Computability in Europe in 2013"


Book ChapterDOI
01 Jul 2013
TL;DR: This work addresses the basic problem of determining the geometry of the roots of a complex analytic function f, formalized as the root clustering problem, and provides a complete (δ,ε)-exact algorithm based on soft zero tests.
Abstract: A challenge to current theories of computing in the continua is the proper treatment of the zero test. Such tests are critical for extracting geometric information. Zero tests are expensive and may be uncomputable. So we seek geometric algorithms based on a weak form of such tests, called soft zero tests. Typically, algorithms with such tests can only determine the geometry for “nice” (e.g., non-degenerate, non-singular, smooth, Morse, etc.) inputs. Algorithms that avoid such niceness assumptions are said to be complete. Can we design complete algorithms with soft zero tests? We address the basic problem of determining the geometry of the roots of a complex analytic function f. We assume effective box functions for f and its higher derivatives are provided. The problem is formalized as the root clustering problem, and we provide a complete (δ,ε)-exact algorithm based on soft zero tests.

34 citations


Journal ArticleDOI
01 Jun 2013
TL;DR: This work reviews the state of the art in mobile AR entertainment systems, presents a novel, fully autonomous map initialization method based on accelerometer data that is simpler to implement, more robust, faster, and less computationally expensive than alternative move-matching techniques, and describes a method of creating low-fidelity prototypes for mobile AR entertainment systems.
Abstract: Despite considerable progress in mobile augmented reality (AR) over recent years, there are few commercial entertainment systems utilizing this exciting technology. To help understand why, we shall review the state of the art in mobile AR solutions, in particular sensor-based, marker-based, and markerless solutions through a design lens of existing and future entertainment services. The majority of mobile AR that users are currently likely to encounter principally utilize sensor-based or marker-based solutions. In sensor-based systems, the poor accuracy of the sensor measurements results in relatively crude augmentation, whereas in marker-based systems, the requirement to physically augment our environment with fiducial markers limits the opportunity for wide-scale deployment. While the alternative online markerless systems overcome these limitations, they are sensitive to environmental conditions (i.e. light conditions), are computationally more expensive, and present greater complexity of implementation, particularly in terms of their system-initialization requirements. To simplify the operation of online markerless systems, a novel, fully autonomous map initialization method based on accelerometer data is also presented; when compared with alternative move-matching techniques, it is simpler to implement, more robust, faster, and less computationally expensive. Finally, we highlight that while there are many technical challenges remaining to make mobile AR development easier, we also acknowledge that because of the nature of AR, it is often difficult to assess the experience that mobile AR will provide to users without resorting to complex system implementations. We address this by presenting a method of creating low-fidelity prototypes for mobile AR entertainment systems.

30 citations


Book ChapterDOI
01 Jul 2013
TL;DR: It is shown that every compact computable metric space can be uniquely described, up to isometry, by a computable Π3 formula, and that orbits of elements are uniformly given by computable Π2 formulas.
Abstract: We adjust methods of computable model theory to effective analysis. We use index sets and infinitary logic to obtain classification-type results for compact computable metric spaces. We show that every compact computable metric space can be uniquely described, up to isometry, by a computable Π3 formula, and that orbits of elements are uniformly given by computable Π2 formulas. We show that deciding if two compact computable metric spaces are isometric is a \(\Pi^0_2\)-complete problem within the class of compact computable spaces, which in itself is \(\Pi^0_3\). On the other hand, if there is an isometry, then ∅ ′′ can compute one. In fact, there is a set low relative to ∅ ′ which can compute an isometry. We show that the result cannot be improved to ∅ ′. We also give further results for special classes of compact spaces, and for other related classes of Polish spaces.

28 citations


Book ChapterDOI
01 Jul 2013
TL;DR: This paper both surveys the theoretical research issues that, taking their cue from Data Compression, have been developed in the context of Combinatorics on Words, and focuses on those combinatorial results useful for exploring the applicative potential of the Burrows-Wheeler Transform.
Abstract: The Burrows-Wheeler Transform (BWT) is a tool of fundamental importance in Data Compression and, recently, has found many applications well beyond its original purpose. The main goal of this paper is to highlight the mathematical and combinatorial properties on which the outstanding versatility of the BWT is based, i.e., its reversibility and the clustering effect on the output. Such properties have aroused curiosity and fervent interest in the scientific world both for theoretical aspects and for practical effects. In particular, in this paper we are interested both in surveying the theoretical research issues which, taking their cue from Data Compression, have been developed in the context of Combinatorics on Words, and in focusing on those combinatorial results useful for exploring the applicative potential of the Burrows-Wheeler Transform.
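The two properties highlighted in this abstract — reversibility and the clustering effect — can be seen in a few lines of code. The following is a minimal illustrative sketch (ours, not taken from the paper): a naive BWT built by sorting all rotations of the input, and its inverse recovered by repeated column sorting; the sentinel character `$` is an assumption used to mark the end of the word.

```python
def bwt(s, sentinel="$"):
    """Naive BWT: sort all rotations of s + sentinel, keep the last column."""
    s += sentinel  # unique end marker, assumed not to occur in s
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last, sentinel="$"):
    """Invert the BWT (reversibility) by repeatedly prepending and sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith(sentinel))
    return row[:-1]

print(bwt("banana"))               # annb$aa -- equal letters cluster together
print(inverse_bwt(bwt("banana")))  # banana  -- the transform is reversible
```

Practical implementations replace the quadratic rotation table with suffix-array construction; the sketch only illustrates why the output tends to cluster equal characters and why no information is lost.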

19 citations


Book ChapterDOI
01 Jul 2013
TL;DR: This work approaches the problem of deciding whether there exists an anti-/morphism for which a word is a pseudo-repetition, and tries to discover whether a word has a hidden repetitive structure.
Abstract: Pseudo-repetitions are a natural generalization of the classical notion of repetitions in sequences: they are the repeated concatenation of a word and its encoding under a certain morphism or antimorphism. We approach the problem of deciding whether there exists an anti-/morphism for which a word is a pseudo-repetition. In other words, we try to discover whether a word has a hidden repetitive structure. We show that some variants of this problem are efficiently solvable, while some others are NP-complete.
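For a fixed, known morphism the verification half of this question is simple to state in code. The sketch below is illustrative only (the names and the brute-force factorization are ours, not the paper's algorithms): it checks whether a word w factors entirely into blocks u and f(u), for a candidate root u and a morphism f given letter by letter.

```python
def apply_morphism(f, u):
    """Apply a letter-to-word morphism f (given as a dict) to the word u."""
    return "".join(f[ch] for ch in u)

def is_pseudo_repetition(w, u, f, min_blocks=2):
    """Check whether w is a concatenation of >= min_blocks blocks,
    each block being either u or f(u)."""
    assert u, "the root must be nonempty"
    fu = apply_morphism(f, u)

    def factor(rest, blocks):
        if not rest:
            return blocks >= min_blocks
        return any(rest.startswith(b) and factor(rest[len(b):], blocks + 1)
                   for b in {u, fu})

    return factor(w, 0)

# Under the letter-swapping morphism a <-> b, "abbaab" = u f(u) u with u = "ab".
f = {"a": "b", "b": "a"}
print(is_pseudo_repetition("abbaab", "ab", f))  # True
print(is_pseudo_repetition("aba", "ab", f))     # False
```

The hard part studied in the paper is the existence question — deciding whether *some* anti-/morphism and root make w a pseudo-repetition — for which this naive check over a given pair is only a subroutine.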

17 citations


Book ChapterDOI
01 Jul 2013
TL;DR: In this paper, the problem of finding a minimum cardinality target set that eventually activates the whole graph G, known to be hard to approximate to a factor better than \(O(2^{\log^{1-\epsilon }|V|})\), is solved exactly in polynomial time for graphs of bounded clique-width and in linear time for trees.
Abstract: We study variants of the Target Set Selection problem, first proposed by Kempe et al. In our scenario one is given a graph G = (V,E), integer values t(v) for each vertex v, and the objective is to determine a small set of vertices (target set) that activates a given number (or a given subset) of vertices of G within a prescribed number of rounds. The activation process in G proceeds as follows: initially, at round 0, all vertices in the target set are activated; subsequently at each round r ≥ 1 every vertex of G becomes activated if at least t(v) of its neighbors are active by round r − 1. It is known that the problem of finding a minimum cardinality Target Set that eventually activates the whole graph G is hard to approximate to a factor better than \(O(2^{\log^{1-\epsilon }|V|})\). In this paper we give exact polynomial time algorithms to find minimum cardinality Target Sets in graphs of bounded clique-width, and exact linear time algorithms for trees.
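The activation process described in the abstract is easy to simulate directly. The following sketch (our illustration, not one of the paper's algorithms) runs the round-by-round cascade for a given target set and thresholds t(v):

```python
def activate(adj, t, target_set):
    """Simulate threshold activation: a vertex v becomes active once at
    least t[v] of its neighbours were active in an earlier round.

    adj: dict mapping each vertex to the set of its neighbours.
    t:   dict mapping each vertex to its threshold t(v).
    Returns the set of vertices that are eventually activated.
    """
    active = set(target_set)  # round 0
    while True:
        newly = {v for v in adj
                 if v not in active and len(adj[v] & active) >= t[v]}
        if not newly:
            return active
        active |= newly

# Path a - b - c - d with all thresholds 1: activating {a} cascades through
# the whole graph, so {a} is a target set that eventually activates all of G.
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
t = {v: 1 for v in adj}
print(sorted(activate(adj, t, {"a"})))  # ['a', 'b', 'c', 'd']
```

Simulation is the easy direction; the hardness results above concern choosing a minimum-cardinality target set, not running the cascade.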

14 citations


Book ChapterDOI
01 Jul 2013
TL;DR: This paper gives a survey of nice properties of strong sufficient statistics and shows that there are strings for which the complexity of every strong sufficient statistic is much larger than the complexity of the minimal sufficient statistic.
Abstract: The notion of a strong sufficient statistic was introduced in [8]. In this paper, we give a survey of nice properties of strong sufficient statistics and show that there are strings for which the complexity of every strong sufficient statistic is much larger than the complexity of the minimal sufficient statistic.

13 citations


Book ChapterDOI
01 Jul 2013
TL;DR: I discuss part of the solution for the ML-covering problem, which passes through analytic notions such as martingale convergence and Lebesgue density, an understanding of the class of cost functions which characterizes K-triviality, and identifying the correct notion of randomness which corresponds to computing K-trivial sets.
Abstract: I discuss part of the solution for the ML-covering problem [1]. This passes through analytic notions such as martingale convergence and Lebesgue density; an understanding of the class of cost functions which characterizes K-triviality; and identifying the correct notion of randomness which corresponds to computing K-trivial sets, together with the construction of a smart K-trivial set. This is joint work with Bienvenu, Kucera, Nies, and Turetsky.

10 citations


Book ChapterDOI
01 Jul 2013
TL;DR: It is demonstrated that the dimension of convex sets can be characterized by the cardinality of finite sets encodable into them: choice from an n + 1 point set is reducible to choice from a convex set of dimension n, but not to choice from a convex set of dimension n − 1.
Abstract: We investigate choice principles in the Weihrauch lattice for finite sets on the one hand, and convex sets on the other hand. Increasing cardinality and increasing dimension both correspond to increasing Weihrauch degrees. Moreover, we demonstrate that the dimension of convex sets can be characterized by the cardinality of finite sets encodable into them. Precisely, choice from an n + 1 point set is reducible to choice from a convex set of dimension n, but not reducible to choice from a convex set of dimension n − 1.

10 citations


Book ChapterDOI
Ivan N. Soskov1
01 Jul 2013
TL;DR: In this note, a negative solution to the ω-jump inversion problem for degree spectra of structures is provided.
Abstract: In this note we provide a negative solution to the ω-jump inversion problem for degree spectra of structures.

10 citations


Book ChapterDOI
01 Jul 2013
TL;DR: A review of a few recent results which look at nonlinear dynamical systems from a computational perspective, especially information concerning their long-term evolution.
Abstract: Nonlinear dynamical systems abound as models of natural phenomena. They are often characterized by highly unpredictable behaviour which is hard to analyze as it occurs, for example, in chaotic systems. A basic problem is to understand what kind of information we can realistically expect to extract from those systems, especially information concerning their long-term evolution. Here we review a few recent results which look at this problem from a computational perspective.

Book ChapterDOI
01 Jul 2013
TL;DR: A new graph structure is introduced that gives a unified view of classical approaches to computing the genomic distance and of their extensions; an overview of their results is presented and some related open problems are pointed out.
Abstract: The genomic distance typically describes the minimum number of large-scale mutations that transform one genome into another. Classical approaches to compute the genomic distance are usually limited to genomes with the same content and take into consideration only rearrangements that change the organization of the genome (i.e., positions and orientation of pieces of DNA, and number of chromosomes). In order to handle genomes with distinct contents, also insertions and deletions of pieces of DNA—named indels—must be allowed. Some extensions of the classical approaches lead to models that allow rearrangements and indels. In this work we introduce a new graph structure that gives a unified view of these approaches, present an overview of their results and point out some open problems related to them.

Book ChapterDOI
01 Jul 2013
TL;DR: This work advertises constant-size advice and explores its theoretical impact on the complexity of classification problems – a natural generalization of promise problems – and on real functions and operators.
Abstract: Promises are a standard way to formalize partial algorithms; and advice quantifies nonuniformity. For decision problems, the latter is captured in common complexity classes such as \(\mathcal{P}/\operatorname{poly}\), that is, with advice growing in size with that of the input. We advertise constant-size advice and explore its theoretical impact on the complexity of classification problems – a natural generalization of promise problems – and on real functions and operators. Specifically we exhibit problems that, without any advice, are decidable/computable but of high complexity while, with (each increase in the permitted size of) advice, (gradually) drop down to polynomial-time.

Book ChapterDOI
01 Jul 2013
TL;DR: A bioinformatic investigation of genomic repeats which occur in multiple genes is presented, with the unexpected result that most of them occur inside genes.
Abstract: Motivated by a general interest to understand sequence based signaling in the cell, and in particular how (even distal) genomic strings communicate with each other in the transcriptional process, we present a bioinformatic investigation of genomic repeats which occur in multiple genes. Unconventional graph based methods to abstractly represent genomes, gene networks, and genomic languages are provided. In particular, the distribution of long repeats along genomic sequences from three specific organisms (the genome of N. equitans, the genome of E. coli, and chromosome IV of S. cerevisiae) is computed, and efficiently visualized along the entire sequences, with the unexpected result that most of them occur inside genes.

Book ChapterDOI
01 Jul 2013
TL;DR: Space-time diagrams of signal machines on finite configurations are composed of interconnected line segments in the Euclidean plane; with four directions, fractal generation, accumulation, and arbitrary Turing computation are known to be possible.
Abstract: Space-time diagrams of signal machines on finite configurations are composed of interconnected line segments in the Euclidean plane. As the system runs, a network emerges. If segments extend only in one or two directions, the dynamics is finite and simplistic. With four directions, it is known that fractal generation, accumulation and any Turing computation are possible.

Book ChapterDOI
01 Jul 2013
TL;DR: It is demonstrated that the immanant of any family of Young diagrams with bounded width and at least \(n^{\epsilon}\) boxes to the right of the first column is \(\textsc{VNP}\)-complete.
Abstract: The fermionant \({\rm Ferm}^k_n(\bar x) = \sum_{\pi \in S_n} (-k)^{c(\pi)}\prod_{i=1}^n x_{i,\pi(i)}\) can be seen as a generalization of both the permanent (for k = − 1) and the determinant (for k = 1). We demonstrate that it is \(\textsc{VNP}\)-complete for any rational k ≠ 1. Furthermore it is #P-complete for the same values of k. The immanant is also a generalization of the permanent (for a Young diagram with a single line) and of the determinant (when the Young diagram is a column). We demonstrate that the immanant of any family of Young diagrams with bounded width and at least \(n^{\epsilon}\) boxes to the right of the first column is \(\textsc{VNP}\)-complete.
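For small matrices the fermionant can be checked directly from its definition. The brute-force sketch below (our own, exponential time, for sanity checks only) sums \((-k)^{c(\pi)}\prod_i x_{i,\pi(i)}\) over all permutations; note that with this sign convention k = −1 gives the permanent exactly, while k = 1 gives the determinant up to the factor \((-1)^n\).

```python
from itertools import permutations

def cycle_count(pi):
    """Number of cycles c(pi) of a permutation given as a tuple pi[i] = pi(i)."""
    seen, cycles = set(), 0
    for start in range(len(pi)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = pi[j]
    return cycles

def fermionant(k, x):
    """Ferm^k_n(x) = sum over pi in S_n of (-k)^c(pi) * prod_i x[i][pi(i)]."""
    n = len(x)
    total = 0
    for pi in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= x[i][pi[i]]
        total += (-k) ** cycle_count(pi) * term
    return total

x = [[1, 2], [3, 4]]
print(fermionant(-1, x))  # 10 = permanent of x
print(fermionant(1, x))   # -2 = (-1)^2 * det(x)
```

The hardness results above say that no such evaluation can be efficient in general for k ≠ 1; this code is only a reference implementation of the defining sum.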

Book ChapterDOI
01 Jul 2013
TL;DR: It is established that for classes comprising infinite sets, conservative partial learnability is in fact equivalent to explanatory learnability relative to the halting problem.
Abstract: Conservative partial learning is a variant of partial learning whereby the learner, on a text for a target language L, outputs one index e with L = W_e infinitely often and every further hypothesis d is output only finitely often and satisfies \(L \not\subseteq W_d\). The present paper studies the learning strength of this notion, comparing it with other learnability criteria such as confident partial learning, explanatory learning, as well as behaviourally correct learning. It is further established that for classes comprising infinite sets, conservative partial learnability is in fact equivalent to explanatory learnability relative to the halting problem.

Journal ArticleDOI
01 Sep 2013
TL;DR: Stomp as discussed by the authors is a tangible user interface (TUI) designed to provide new participatory experiences for people with intellectual disability; it does not rely on the acquisition of specific competencies before interaction and engagement can occur.
Abstract: For people with intellectual disabilities, there are significant barriers to inclusion in socially cooperative endeavors. This paper investigates the effectiveness of Stomp, a tangible user interface (TUI) designed to provide new participatory experiences for people with intellectual disability. Results from an observational study reveal the extent to which the Stomp system supports social and physical interaction. The tangible, spatial, and embodied qualities of Stomp result in an experience that does not rely on the acquisition of specific competencies before interaction and engagement can occur.

Book ChapterDOI
01 Jul 2013
TL;DR: It is shown that the Duplication-Loss Alignment (DLA) problem is APX-hard even if the number of occurrences of a gene inside a genome is bounded by 2, and the Feasible Relabeling Alignment problem, involved in a general methodology for solving DLA, is equivalent to Minimum Feedback Vertex Set on Directed Graph, hence implying that the problem is APX-hard.
Abstract: We investigate the complexity of two combinatorial problems related to pairwise genome alignment under the duplication-loss model of evolution. Under this model, the unaligned parts of the genomes (gaps) are interpreted as duplications and losses. The first, and most general, combinatorial problem that we consider is the Duplication-Loss Alignment problem, which is to find an alignment of minimum duplication-loss cost. Defining the cost as the number of segmental duplications and individual losses, the problem has been recently shown NP-hard. Here, we improve this result by showing that the Duplication-Loss Alignment (DLA) problem is APX-hard even if the number of occurrences of a gene inside a genome is bounded by 2. We then consider a more constrained version, the Feasible Relabeling Alignment problem, involved in a general methodology for solving DLA, that aims to infer a feasible (in terms of evolutionary history) most parsimonious relabeling of an initial best candidate labeled alignment which is potentially cyclic. We show that it is equivalent to Minimum Feedback Vertex Set on Directed Graph, hence implying that the problem is APX-hard, fixed-parameter tractable, and approximable within factor O(log|χ| log log|χ|), where χ is the aligned genome considered by Feasible Relabeling Alignment.

Book ChapterDOI
01 Jul 2013
TL;DR: In this paper, four restricted cases of the generalised communicating P systems are considered and better results are produced, with respect to the number of cells involved, than those provided so far in the literature.
Abstract: In this paper we consider four restricted cases of the generalised communicating P systems and study their computational power. In all these cases better results are produced, with respect to the number of cells involved, than those provided so far in the literature. Only one of these results is fully presented, whereas the others are briefly and informally described. Connections between the variants considered and recently introduced kernel P systems are investigated.

Book ChapterDOI
01 Jul 2013
TL;DR: The Reaction Algebra is proposed, a calculus resembling reaction systems extended with a restriction operator, which increases the expressiveness of the calculus by allowing the modeling of hidden entities, such as those contained in membranes.
Abstract: Reaction systems are an abstract model of interactions among biochemical reactions, developed around two opposite mechanisms: facilitation and inhibition. The evolution of a Reaction System is driven by the external objects which are sent into the system by the environment at each step. In this paper, we propose the Reaction Algebra, a calculus resembling reaction systems extended with a restriction operator. Restriction increases the expressiveness of the calculus by allowing the modeling of hidden entities, such as those contained in membranes.

Book ChapterDOI
01 Jul 2013
TL;DR: A “safer” real to replace π is provided so that the associated function retains its trivial computability while the correctness of any particular program for it is unprovable.
Abstract: On page 9 of Rogers’ computability book he presents two functions each based on eventual, currently unknown patterns in the decimal expansion of π. One of them is easily (classically) seen to be computable, but the proof is highly non-constructive and, conceptually interestingly, there is no known example algorithm for it. For the other, it is unknown as to whether it is computable. In the future, though, these unknown patterns in the decimal expansion of π may be sufficiently resolved, so that, for the one, we shall know a particular algorithm for it, and/or, for the other whether it’s computable. The present paper provides a “safer” real to replace π so that the associated one function retains its trivial computability but has unprovability of the correctness of any particular program for it. Re the other function, a real r to replace π is given with each bit of this r uniformly linear time computable in the length of its position and so that the Rogers’ other function associated with r is provably uncomputable.

Book ChapterDOI
01 Jul 2013
TL;DR: The paper surveys the stored program concept’s use and development and attempts to separate it into three distinct aspects, each with its own history and each amenable to more precise definition.
Abstract: Historians agree that the stored program concept was formulated in 1945 and that its adoption was the most important single step in the development of modern computing. But the “concept” has never been properly defined, and its complex history has left it overloaded with different meanings. The paper surveys its use and development and attempts to separate it into three distinct aspects, each with its own history and each amenable to more precise definition.

Journal ArticleDOI
01 Jun 2013
TL;DR: A conceptual framework for game architectures—Game Worlds Graph (GWG)—is presented in this paper for three purposes: classifying existing architectures, communicating and sense-making of game architectures, and exploring and discovering future game architectures.
Abstract: Software architectures of video games have evolved radically over the past decade. Researchers and practitioners have proposed architectural patterns such as stand-alone configuration, peer-to-peer, client server and hybrid, and some of them have successfully been used in commercial game titles. A conceptual framework for game architectures—Game Worlds Graph (GWG)—is presented in this paper for three purposes: 1) classifying existing architectures, 2) communicating and sense-making of game architectures, and 3) exploring and discovering future game architectures. The framework is based on the Game World and the World Connector concepts, which reveal some essential characteristics of game architectures—thus can be more descriptive and informative than existing taxonomies.

Book ChapterDOI
Ulf Hashagen1
01 Jul 2013
TL;DR: In his path-breaking book Revolution in Science, Cohen diagnoses that a general revolutionary change in the sciences followed from the invention of the computer.
Abstract: It has often been claimed that the computer has not only revolutionized everyday life but has also affected the sciences in a fundamental manner. Even in national systems of innovation which had initially reacted with a fair amount of reserve to the computer as a new scientific instrument (such as Germany and France; cf., e.g., [33,18]), it is today a commonplace to speak about the “computer revolution” in the sciences [27]. In his path-breaking book Revolution in Science, Cohen diagnoses that a general revolutionary change in the sciences had followed from the invention of the computer. While he asserts that the scientific revolution in astronomy in the 17th century was not based on the newly invented telescope but on the intellect of Galileo Galilei, he maintains in contrast that the “case is different for the computer, which [...] has affected the thinking of scientists and the formulation of theories in a fundamental way, as in the case of the new computer models for world meteorology” [10, pp. 9-10 & 20-22].

Book ChapterDOI
01 Jul 2013
TL;DR: A general, executable formalism for describing game rules and desired strategy properties is derived and the outcomes for several variants of the familiar game of tic-tac-toe are presented.
Abstract: We revisit the problem of constructing strategies for simple position games. We derive a general, executable formalism for describing game rules and desired strategy properties. We present the outcomes for several variants of the familiar game of tic-tac-toe.
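In the same spirit, though far simpler than the authors' formalism, game rules can be written as plain functions and a generic game-tree search then derives strategy values. A minimal sketch for standard tic-tac-toe (all names are ours, not from the paper):

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Value of the position for `player` to move: 1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if "." not in board:
        return 0  # board full with no line: draw
    other = "O" if player == "X" else "X"
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == ".")

print(value("." * 9, "X"))  # 0: tic-tac-toe is a draw under optimal play
```

Varying `LINES`, the board size, or the win condition changes the game without touching the solver, which is the kind of separation between rules and strategy search the paper develops in a general, executable setting.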

Book ChapterDOI
01 Jul 2013
TL;DR: This work aims to give a topological characterization of the non-deterministic time complexity class and directly constructs an \(\mathcal{S}\)-machine for an NP-complete problem.
Abstract: In this paper we study the topology of asymptotic cones of groups constructed from \(\mathcal{S}\)-machines running in polynomial time. In particular we directly construct an \(\mathcal{S}\)-machine for an NP-complete problem. Using a part of the machinery shaped by Sapir, Birget and Rips we construct its associated group and we show that every asymptotic cone of this group is not simply connected. The proof is rather geometric and uses an argument similar to the one developed by Sapir and Olshanskii. This work aims to give a topological characterization of the non-deterministic time complexity class.

Book ChapterDOI
01 Jul 2013
TL;DR: This paper proves limitations on the class of \(\mathfrak{L}\)-automatic structures for a fixed \(\mathfrak{L}\) of finite condensation rank 1 + α.
Abstract: Bruyere and Carton lifted the notion of finite automata reading infinite words to finite automata reading words with shape an arbitrary linear order \(\mathfrak{L}\). Automata on finite words can be used to represent infinite structures, the so-called word-automatic structures. Analogously, for a linear order \(\mathfrak{L}\) there is the class of \(\mathfrak{L}\)-automatic structures. In this paper we prove the following limitations on the class of \(\mathfrak{L}\)-automatic structures for a fixed \(\mathfrak{L}\) of finite condensation rank 1 + α.

Book ChapterDOI
01 Jul 2013
TL;DR: In this article, the authors studied the problem of solving discounted, two player, turn based, stochastic games (2TBSGs) and showed that the same reduction also works for general 2TBSG.
Abstract: We study the problem of solving discounted, two player, turn based, stochastic games (2TBSGs). Jurdzinski and Savani showed that in the case of deterministic games the problem can be reduced to solving P-matrix linear complementarity problems (LCPs). We show that the same reduction also works for general 2TBSGs. This implies that a number of interior point methods can be used to solve 2TBSGs. We consider two such algorithms: the unified interior point method of Kojima, Megiddo, Noma, and Yoshise, and the interior point potential reduction algorithm of Kojima, Megiddo, and Ye. The algorithms run in time \(O((1+\kappa)n^{3.5}L)\) and \(O(\frac{-\delta}{\theta}n^4\log \epsilon^{-1})\), respectively, when applied to an LCP defined by an n × n matrix M that can be described with L bits, and where the potential reduction algorithm returns an ε-optimal solution. The parameters κ, δ, and θ depend on the matrix M. We show that for 2TBSGs with n states and discount factor γ we get \(\kappa = \Theta(\frac{n}{(1-\gamma)^2})\), \(-\delta = \Theta(\frac{\sqrt{n}}{1-\gamma})\), and \(1/\theta = \Theta(\frac{n}{(1-\gamma)^2})\) in the worst case. The lower bounds for κ, − δ, and 1/θ are all obtained using the same family of deterministic games.

Book ChapterDOI
01 Jul 2013
TL;DR: This paper presents a meta-data structure for summaries based on random sampling, which can be built over large, distributed data, and provide guaranteed performance for a variety of data summarization tasks.
Abstract: Prompted by the need to compute holistic properties of increasingly large data sets, the notion of the “summary” data structure has emerged in recent years as an important concept. Summary structures can be built over large, distributed data, and provide guaranteed performance for a variety of data summarization tasks. Various types of summaries are known: summaries based on random sampling; summaries formed as linear sketches of the input data; and other summaries designed for a specific problem at hand.
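As a concrete instance of the first kind of summary mentioned above (random sampling), the classic reservoir sampling algorithm maintains a uniform sample of fixed size k over a stream of unknown length. A short sketch, ours rather than the text's:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Keep a uniform random sample of k items from a single pass over stream."""
    reservoir = []
    for n, item in enumerate(stream, start=1):
        if n <= k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = rng.randrange(n)         # keep this item with probability k/n
            if j < k:
                reservoir[j] = item      # evict a uniformly chosen resident
    return reservoir

sample = reservoir_sample(range(1_000_000), 10)
print(len(sample))  # 10
```

After n items, every stream element is in the reservoir with probability exactly k/n, and the summary uses O(k) memory regardless of stream length — the guaranteed-performance property that makes such structures useful over large, distributed data.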