
Showing papers on "Computability published in 2011"


Book
01 Dec 2011
TL;DR: This book provides a unique self-contained text for advanced students and researchers in mathematical logic and computer science and develops the theoretical underpinnings of the first author's proof assistant MINLOG.
Abstract: Driven by the question, 'What is the computational content of a (formal) proof?', this book studies fundamental interactions between proof theory and computability. It provides a unique self-contained text for advanced students and researchers in mathematical logic and computer science. Part I covers basic proof theory, computability and Gödel's theorems. Part II studies and classifies provable recursion in classical systems, from fragments of Peano arithmetic up to Π¹₁-CA₀. Ordinal analysis and the (Schwichtenberg-Wainer) subrecursive hierarchies play a central role and are used in proving the 'modified finite Ramsey' and 'extended Kruskal' independence results for PA and Π¹₁-CA₀. Part III develops the theoretical underpinnings of the first author's proof assistant MINLOG. Three chapters cover higher-type computability via information systems, a constructive theory TCF of computable functionals, realizability, Dialectica interpretation, computationally significant quantifiers and connectives, and polytime complexity in a two-sorted, higher-type arithmetic with linear logic.

76 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore the impact of geometry on computability and complexity in Winfree's model of nanoscale self-assembly in the two-dimensional tile assembly model, i.e., in the discrete Euclidean plane ℤ×ℤ.
Abstract: This paper explores the impact of geometry on computability and complexity in Winfree’s model of nanoscale self-assembly. We work in the two-dimensional tile assembly model, i.e., in the discrete Euclidean plane ℤ×ℤ. Our first main theorem says that there is a roughly quadratic function f such that a set A⊆ℤ+ is computably enumerable if and only if the set X A ={(f(n),0)∣n∈A}—a simple representation of A as a set of points on the x-axis—self-assembles in Winfree’s sense. In contrast, our second main theorem says that there are decidable sets D⊆ℤ×ℤ that do not self-assemble in Winfree’s sense. Our first main theorem is established by an explicit translation of an arbitrary Turing machine M to a modular tile assembly system $\mathcal{T}_{M}$, together with a proof that $\mathcal{T}_{M}$ carries out concurrent simulations of M on all positive integer inputs.

68 citations


Proceedings ArticleDOI
Kenneth L. McMillan1
30 Oct 2011
TL;DR: The efficiency of Z3 makes it possible to handle problems that are beyond the reach of existing interpolating provers, as it is demonstrated using benchmarks derived from bounded verification of sequential and concurrent programs.
Abstract: Interpolating provers have a number of applications in formal verification, including abstraction refinement and invariant generation. It has proved difficult, however, to construct efficient interpolating provers for rich theories. We consider the problem of deriving interpolants from proofs generated by the highly efficient SMT solver Z3 in the quantified theory of arrays, uninterpreted function symbols and linear integer arithmetic (AUFLIA), a theory that is commonly used in program verification. We do not directly interpolate the proofs from Z3. Rather, we divide them into small lemmas that can be handled by a secondary interpolating prover for a restricted theory. We show experimentally that the overhead of this secondary prover is negligible. Moreover, the efficiency of Z3 makes it possible to handle problems that are beyond the reach of existing interpolating provers, as we demonstrate using benchmarks derived from bounded verification of sequential and concurrent programs.

68 citations


DOI
01 Jan 2011
TL;DR: This thesis investigates the integration of automata and formal language theory with concurrency theory, exposing the differences and similarities between them, and presents the reactive Turing machine, a classical Turing machine augmented with capabilities for interaction.
Abstract: The theory of automata and formal language was devised in the 1930s to provide models for and to reason about computation. Here we mean by computation a procedure that transforms input into output, which was the sole mode of operation of computers at the time. Nowadays, computers are systems that interact with us and also each other; they are non-deterministic, reactive systems. Concurrency theory, split off from classical automata theory a few decades ago, provides a model of computation similar to the model given by the theory of automata and formal language, but focuses on concurrent, reactive and interactive systems. This thesis investigates the integration of the two theories, exposing the differences and similarities between them. Where automata and formal language theory focuses on computations and languages, concurrency theory focuses on behaviour. To achieve integration, we look for process-theoretic analogies of classic results from automata theory. The most prominent difference is that we use an interpretation of automata as labelled transition systems modulo (divergence-preserving) branching bisimilarity instead of treating automata as language acceptors. We also consider similarities such as grammars as recursive specifications and finite automata as labelled finite transition systems. We investigate whether the classical results still hold and, if not, what extra conditions are sufficient to make them hold. We especially look into three levels of Chomsky's hierarchy: we study the notions of finite-state systems, pushdown systems, and computable systems. Additionally we investigate the notion of parallel pushdown systems. For each class we define the central notion of automaton and its behaviour by associating a transition system with it. Then we introduce a suitable specification language and investigate the correspondence with the respective automaton (via its associated transition system). 
Because we not only want to study interaction with the environment, but also the interaction within the automaton, we make it explicit by means of communicating parallel components: one component representing the finite control of the automaton and one component representing the memory. First, we study finite-state systems by reinvestigating the relation between finite-state automata, left- and right-linear grammars, and regular expressions, but now up to (divergence-preserving) branching bisimilarity. For pushdown systems we augment the finite-state systems with stack memory to obtain the pushdown automata and consider different termination styles: termination on empty stack, on final state, and on final state and empty stack. Unlike for language equivalence, up to (divergence-preserving) branching bisimilarity the associated transition systems for the different termination styles fall into different classes. We obtain (under some restrictions) the correspondence between context-free grammars and pushdown automata for termination on final state and empty stack. We show how for contrasimulation, a weaker equivalence than branching bisimilarity, we can obtain the correspondence result without some of the restrictions. Finally, we make the interaction within a pushdown automaton explicit, but in a different way depending on the termination style. By analogy with pushdown systems we investigate the parallel pushdown systems, obtained by augmenting finite-state systems with bag memory, and consider analogous termination styles. We investigate the correspondence between context-free grammars that use parallel composition instead of sequential composition and parallel pushdown automata. While the correspondence itself is rather tight, it unfortunately only covers a small subset of the parallel pushdown automata, i.e. the single-state parallel pushdown automata.
When making the interaction within parallel pushdown automata explicit, we obtain a rather uniform result for all termination styles. Finally, we study computable systems and the relation with effective and computable transition systems and Turing machines. For this we present the reactive Turing machine, a classical Turing machine augmented with capabilities for interaction. Again, we make the interaction in the reactive Turing machine between its finite control and the tape memory explicit.

59 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a deterministic distributed function computation model for a network of identical and anonymous nodes, where each node has bounded computation and storage capabilities that do not grow with the network size.
Abstract: We propose a model for deterministic distributed function computation by a network of identical and anonymous nodes. In this model, each node has bounded computation and storage capabilities that do not grow with the network size. Furthermore, each node only knows its neighbors, not the entire graph. Our goal is to characterize the class of functions that can be computed within this model. In our main result, we provide a necessary condition for computability which we show to be nearly sufficient, in the sense that every function that violates this condition can at least be approximated. The problem of computing (suitably rounded) averages in a distributed manner plays a central role in our development; we provide an algorithm that solves it in time that grows quadratically with the size of the network.
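The averaging primitive at the heart of this development can be illustrated with a small sketch. This is a generic synchronous consensus iteration on an assumed fixed graph with an assumed step size `eps`, not the paper's algorithm (which additionally enforces bounded per-node memory and rounding):

```python
# Hypothetical sketch, not the paper's algorithm: each node repeatedly
# moves toward the mean of its neighbours; on a connected graph the
# values converge to the global average.

def average_consensus(values, neighbours, steps=200, eps=0.2):
    """Run `steps` synchronous rounds of x_i += eps * sum_j (x_j - x_i)."""
    x = list(values)
    for _ in range(steps):
        # the comprehension reads the old x, so all updates are simultaneous
        x = [xi + eps * sum(x[j] - xi for j in neighbours[i])
             for i, xi in enumerate(x)]
    return x

# a 4-node path graph: 0 - 1 - 2 - 3
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
final = average_consensus([0.0, 4.0, 8.0, 12.0], neighbours)
# every node ends up near the true average 6.0
```

The step size must be small enough for stability (here eps is below one over the maximum degree); the symmetric updates preserve the sum of the values, which is why the common limit is the average.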

55 citations


Journal ArticleDOI
TL;DR: In this paper, an axiomatic setup for algorithmic homological algebra of Abelian categories is presented, where all existential quantifiers entering the definition of an Abelian category need to be turned into constructive ones.
Abstract: In this paper we develop an axiomatic setup for algorithmic homological algebra of Abelian categories. This is done by exhibiting all existential quantifiers entering the definition of an Abelian category, which for the sake of computability need to be turned into constructive ones. We do this explicitly for the often-studied example Abelian category of finitely presented modules over a so-called computable ring R, i.e. a ring with an explicit algorithm to solve one-sided (in)homogeneous linear systems over R. For a finitely generated maximal ideal 𝔪 in a commutative ring R, we show how solving (in)homogeneous linear systems over R𝔪 can be reduced to solving associated systems over R. Hence, the computability of R implies that of R𝔪. As a corollary, we obtain the computability of the category of finitely presented R𝔪-modules as an Abelian category, without the need of a Mora-like algorithm. The reduction also yields, as a byproduct, a complexity estimation for the ideal membership problem over local polynomial rings. Finally, in the case of localized polynomial rings, we demonstrate the computational advantage of our homologically motivated alternative approach in comparison to an existing implementation of Mora's algorithm.
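The "computable ring" requirement, an explicit algorithm for one-sided (in)homogeneous linear systems, can be made concrete in the simplest case. The sketch below solves A x = b exactly over the rationals by Gaussian elimination; it is an illustration of the requirement, not the paper's general setting of arbitrary computable rings:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b over Q by Gaussian elimination; return one exact
    solution as a list of Fractions, or None if the system is inconsistent."""
    m = [[Fraction(v) for v in row] + [Fraction(c)]
         for row, c in zip(A, b)]
    rows, cols = len(m), len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue                       # no pivot in this column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]] # normalize pivot row
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [v - f * w for v, w in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    # a row reading 0 = nonzero means there is no solution
    if any(all(v == 0 for v in row[:cols]) and row[cols] != 0 for row in m):
        return None
    x = [Fraction(0)] * cols               # free variables set to 0
    for i, c in enumerate(pivots):
        x[c] = m[i][cols]
    return x
```

For example, `solve_exact([[2, 1], [1, 3]], [5, 10])` returns the exact solution `[1, 3]`, and an inconsistent system returns `None`; exactness (no floating point) is what makes the decision procedure an algorithm in the paper's sense.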

46 citations


Book ChapterDOI
26 Mar 2011
TL;DR: This work provides a novel semantics-based approach to such a theory of reversible computing, using reversible Turing machines (RTMs) as the underlying computation model, and proposes r-Turing completeness as the 'gold standard' for computability in reversible computation models.
Abstract: Reversible computing is the study of computation models that exhibit both forward and backward determinism. Understanding the fundamental properties of such models is not only relevant for reversible programming, but has also been found important in other fields, e.g., bidirectional model transformation, program transformations such as inversion, and general static prediction of program properties. Historically, work on reversible computing has focussed on reversible simulations of irreversible computations. Here, we take the viewpoint that the property of reversibility itself should be the starting point of a computational theory of reversible computing. We provide a novel semantics-based approach to such a theory, using reversible Turing machines (RTMs) as the underlying computation model. We show that the RTMs can compute exactly all injective, computable functions. We find that the RTMs are not strictly classically universal, but that they support another notion of universality; we call this RTM-universality. Thus, even though the RTMs are sub-universal in the classical sense, they are powerful enough as to include a self-interpreter. Lifting this to other computation models, we propose r-Turing completeness as the 'gold standard' for computability in reversible computation models.
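The two determinism conditions can be checked mechanically. The sketch below assumes one common quintuple-style formalization of transition rules (an assumption for illustration; the paper works with a specific RTM definition): forward determinism means no two rules fire on the same (state, symbol), and backward determinism means no two rules can lead into the same (target state, written symbol, move), so every configuration also has at most one predecessor.

```python
# rules are pairs ((state, read_symbol), (new_state, written_symbol, move))

def forward_deterministic(rules):
    """No two rules share the same (state, read_symbol) trigger."""
    seen = set()
    for (q, s), _ in rules:
        if (q, s) in seen:
            return False
        seen.add((q, s))
    return True

def backward_deterministic(rules):
    """No two rules produce the same (target state, written symbol, move)."""
    seen = set()
    for _, (q2, w, d) in rules:
        if (q2, w, d) in seen:
            return False
        seen.add((q2, w, d))
    return True

rules = [(("a", "0"), ("b", "1", +1)),
         (("a", "1"), ("b", "0", +1)),   # same target state, different write
         (("b", "0"), ("a", "0", -1))]
# this table is both forward and backward deterministic
```

Adding a rule whose outcome duplicates an existing (target, write, move) triple breaks backward determinism while leaving forward determinism intact, which is exactly the asymmetry separating ordinary TMs from RTMs.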

45 citations


Proceedings ArticleDOI
18 Dec 2011
TL;DR: The extended finite-state machine (EFSM) induction method that uses a SAT solver is described; it has been tested on randomly generated scenario sets of size from 250 to 2000 and on the alarm clock controlling EFSM induction problem, where it has greatly outperformed a genetic algorithm.
Abstract: In the paper we describe the extended finite-state machine (EFSM) induction method that uses a SAT solver. Input data for the induction algorithm is a set of test scenarios. The algorithm consists of several steps: scenarios tree construction, compatibility graph construction, Boolean formula construction, SAT-solver invocation and finite-state machine construction from a satisfying assignment. These extended finite-state machines can be used in automata-based programming, where programs are designed as automated controlled objects. Each automated controlled object contains a finite-state machine and a controlled object. The method described has been tested on randomly generated scenario sets of size from 250 to 2000 and on the alarm clock controlling EFSM induction problem, where it has greatly outperformed a genetic algorithm.
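The last two pipeline steps, SAT-solver invocation and reading the machine off a satisfying assignment, can be illustrated with a toy stand-in for the solver (the method itself relies on a real SAT solver; the clause format below is the usual DIMACS convention, an assumption for illustration):

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Tiny stand-in for the SAT-solver step: clauses are lists of
    nonzero ints (DIMACS style: k means 'variable k is true', -k means
    'variable k is false'). Returns a satisfying assignment as a dict
    mapping variable -> bool, or None if the formula is unsatisfiable."""
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]])
# model is a satisfying assignment; in the EFSM method its variables
# would encode state merges, from which the machine is reconstructed
```

Brute force is exponential, which is precisely why the method delegates this step to an efficient SAT solver; the encoding and decoding around it are unchanged.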

42 citations


Book
27 Sep 2011
TL;DR: This book develops major themes in computability theory, such as Rice's theorem and the recursion theorem, and provides a systematic account of Blum's complexity theory as well as an introduction to the theory of computable real numbers and functions.
Abstract: Aimed at mathematicians and computer scientists who will only be exposed to one course in this area, this is a brief but rigorous introduction to the abstract theory of computation, sometimes also referred to as recursion theory. It develops major themes in computability theory, such as Rice's theorem and the recursion theorem, and provides a systematic account of Blum's complexity theory as well as an introduction to the theory of computable real numbers and functions. The book is intended as a university text, but it may also be used for self-study. Appropriate exercises and solutions are included.

32 citations


Journal ArticleDOI
TL;DR: By defining such diverse terms as randomness, knowledge, intelligence and computability in terms of a common denominator the paper is able to bring together contributions from Shannon, Levin, Kolmogorov, Solomonoff, Chaitin, Yao and many others under a common umbrella of the efficiency theory.
Abstract: The paper serves as the first contribution towards the development of the theory of efficiency: a unifying framework for the currently disjoint theories of information, complexity, communication and computation. Realizing the defining nature of the brute force approach in the fundamental concepts in all of the above mentioned fields, the paper suggests using efficiency or improvement over the brute force algorithm as a common unifying factor necessary for the creation of a unified theory of information manipulation. By defining such diverse terms as randomness, knowledge, intelligence and computability in terms of a common denominator we are able to bring together contributions from Shannon, Levin, Kolmogorov, Solomonoff, Chaitin, Yao and many others under a common umbrella of the efficiency theory.

32 citations


Book ChapterDOI
22 Aug 2011
TL;DR: The mechanisation uses two models: the recursive functions and the λ-calculus, and shows that they have equivalent computational power.
Abstract: This paper presents a mechanisation of some basic computability theory. The mechanisation uses two models: the recursive functions and the λ-calculus, and shows that they have equivalent computational power. Results proved include the Recursion Theorem, an instance of the s-m-n theorem, the existence of a universal machine, Rice's Theorem, and closure facts about the recursive and recursively enumerable sets. The mechanisation was performed in the HOL4 system and is available online.

Proceedings Article
01 Dec 2011
TL;DR: This paper surveys modern parallel SAT-solvers and points out weaknesses that have to be overcome to exploit the full power of modern multi-core processors.
Abstract: This paper surveys modern parallel SAT-solvers. It focusses on recent successful techniques and points out weaknesses that have to be overcome to exploit the full power of modern multi-core processors.

Posted Content
TL;DR: A concept of uniform continuity based on the Henkin quantifier is proposed and proved necessary for relative computability of compact real relations; iterating this condition yields a strict hierarchy of notions, each necessary for relative computability, with the ω-th level also sufficient.
Abstract: A type-2 computable real function is necessarily continuous; and this remains true for relative, i.e. oracle-based computations. Conversely, by the Weierstrass Approximation Theorem, every continuous f:[0,1]->R is computable relative to some oracle. In their search for a similar topological characterization of relatively computable multivalued functions f:[0,1]=>R (aka relations), Brattka and Hertling (1994) have considered two notions: weak continuity (which is weaker than relative computability) and strong continuity (which is stronger than relative computability). Observing that uniform continuity plays a crucial role in the Weierstrass Theorem, we propose and compare several notions of uniform continuity for relations. Here, due to the additional quantification over values y in f(x), new ways of (linearly) ordering quantifiers arise, yet none of them turns out to be satisfactory. We are thus led to a notion of uniform continuity based on the Henkin quantifier, and prove it necessary for relative computability. In fact, iterating this condition yields a strict hierarchy of notions, each necessary, and the omega-th level also sufficient, for relative computability.

Proceedings ArticleDOI
30 Oct 2011
TL;DR: An interpolation procedure for the theory of fixed-size bit-vectors is presented, which makes it possible to apply effective interpolation-based techniques for software verification without giving up the ability to handle precisely the word-level operations of typical programming languages.
Abstract: We present an interpolation procedure for the theory of fixed-size bit-vectors, which makes it possible to apply effective interpolation-based techniques for software verification without giving up the ability to handle precisely the word-level operations of typical programming languages. Our algorithm is based on advanced SMT techniques, and, although general, is optimized to exploit the structure of typical interpolation problems arising in software verification. We have implemented a prototype version of it within the MathSAT SMT solver, and we have integrated it into a software verification framework based on standard predicate abstraction. Our experimental results show that our new technique allows our prototype to significantly outperform other systems on programs requiring bit-precise modeling of word-level operations.

Proceedings ArticleDOI
01 Jan 2011
TL;DR: It is shown that emptiness for such automata is decidable, both over finite and infinite words, under reasonable computability assumptions on the linear order.
Abstract: In this paper we work over linearly ordered data domains equipped with finitely many unary predicates and constants. We consider nondeterministic automata processing words and storing finitely many variables ranging over the domain. During a transition, these automata can compare the data values of the current configuration with those of the previous configuration using the linear order, the unary predicates and the constants. We show that emptiness for such automata is decidable, both over finite and infinite words, under reasonable computability assumptions on the linear order. Finally, we show how our automata model can be used for verifying properties of workflow specifications in the presence of an underlying database.

Posted Content
TL;DR: Since HRSs include λ-abstraction but STRSs do not, this paper restructures the static dependency pair method to allow λ-abstraction, and shows that the method also works well on HRSs without new restrictions.
Abstract: Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.

Journal ArticleDOI
TL;DR: In this paper, a new type of generalized Turing machine (GTM) is introduced as a tool for the study of computability in analysis, and it is shown that the functions that are computable via given representations are closed under GTM programming.
Abstract: We introduce a new type of generalized Turing machines (GTMs), which are intended as a tool for the mathematician who studies computability in Analysis. In a single tape cell a GTM can store a symbol, a real number, a continuous real function or a probability measure, for example. The model is based on TTE, the representation approach for computable analysis. As a main result we prove that the functions that are computable via given representations are closed under GTM programming. This generalizes the well-known fact that these functions are closed under composition. The theorem allows us to speak about objects themselves instead of names in algorithms and proofs. By using GTMs for specifying algorithms, many proofs become more rigorous and also simpler and more transparent, since the GTM model is very simple and allows well-known techniques from Turing machine theory to be applied. We also show how finite or infinite sequences as names can be replaced by sets (generalized representations) on which computability is already defined via representations. This allows further simplification of proofs. All of this is done for multi-functions, which are essential in Computable Analysis, and multi-representations, which often allow more elegant formulations. As a byproduct we show that the computable functions on finite and infinite sequences of symbols are closed under programming with GTMs. We conclude with examples of application.
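The representation-based (TTE) viewpoint that GTMs build on can be sketched in miniature. Here a real is given by a standard Cauchy "name", a function that, queried at precision n, returns a rational within 2^-n of the real; machines operate on names, never on the reals themselves. This is the textbook TTE idea, not the paper's generalized tape model:

```python
from fractions import Fraction

def add_names(x_name, y_name):
    """A name of x + y: query both arguments one bit more precisely,
    so the two errors of 2^-(n+1) sum to at most 2^-n."""
    return lambda n: x_name(n + 1) + y_name(n + 1)

def sqrt2_name(n):
    """A rational within 2^-n of sqrt(2), found by exact bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        # invariant: lo^2 < 2 <= hi^2, so sqrt(2) stays in [lo, hi]
        lo, hi = (mid, hi) if mid * mid < 2 else (lo, mid)
    return lo

two_sqrt2 = add_names(sqrt2_name, sqrt2_name)   # a name of 2*sqrt(2)
approx = two_sqrt2(10)                          # rational within 2^-10
```

Closure under programming, the paper's main result, generalizes exactly this pattern: combining machines that act on names yields a machine acting on names, with the error bookkeeping handled once and for all.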

Book ChapterDOI
23 May 2011
TL;DR: This work considers the problem of testing whether a function f is computable by a read-once, width-2 ordered binary decision diagram (OBDD), and shows that for any constant w ≥ 4, Ω(n) queries are required, resolving a conjecture of Goldreich.
Abstract: We consider the problem of testing whether a function f : {0, 1}n → {0, 1} is computable by a read-once, width-2 ordered binary decision diagram (OBDD), also known as a branching program. This problem has two variants: one where the variables must occur in a fixed, known order, and one where the variables are allowed to occur in an arbitrary order. We show that for both variants, any nonadaptive testing algorithm must make Ω(n) queries, and thus any adaptive testing algorithm must make Ω(log n) queries. We also consider the more general problem of testing computability by width-w OBDDs where the variables occur in a fixed order. We show that for any constant w ≥ 4, Ω(n) queries are required, resolving a conjecture of Goldreich [15]. We prove all of our lower bounds using a new technique of Blais, Brody, and Matulef [6], giving simple reductions from known hard problems in communication complexity to the testing problems at hand. Our result for width-2 OBDDs provides the first example of the power of this technique for proving strong nonadaptive bounds.
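The object under test can be made concrete. The layered "levels" encoding below is our assumption, not the paper's notation: a read-once width-2 OBDD keeps one of two states per level, reads each variable once in a fixed order, and accepts according to the final state.

```python
def eval_width2_obdd(levels, bits):
    """levels[i][state][bit] -> next state in {0, 1}; accept iff the
    final state is 1. Reads bit i exactly once, in order (read-once)."""
    state = 0
    for level, bit in zip(levels, bits):
        state = level[state][bit]
    return state == 1

# XOR of two variables as a width-2 OBDD: the state is the running parity.
xor_levels = [
    [(0, 1), (1, 0)],   # on x1: flip the state iff the bit is 1
    [(0, 1), (1, 0)],   # on x2: same
]
assert eval_width2_obdd(xor_levels, [0, 1]) is True
```

Parity functions like this one are exactly what width-2 OBDDs compute on top of projections, which is why testing this class connects to testing linearity.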

Journal ArticleDOI
TL;DR: The main technical result of this paper is the construction of a sound and complete axiomatization for the propositional fragment of computability logic whose vocabulary includes all four kinds of conjunction and disjunction: parallel, toggling, sequential and choice, together with negation.

Journal ArticleDOI
TL;DR: This paper proves Lambalgen's theorem for correlated probability without the uniform computability assumption under which it was shown in [H. Takahashi, Inform. Comput. 2008].
Abstract: We study algorithmic randomness and monotone complexity on products of the set of infinite binary sequences. We explore the following problems: monotone complexity on product space, Lambalgen's theorem for correlated probability, classification of random sets by likelihood ratio tests, decomposition of complexity and independence, and Bayesian statistics for individual random sequences. Formerly, Lambalgen's theorem for correlated probability was shown under a uniform computability assumption in [H. Takahashi, Inform. Comput. 2008]. In this paper we show the theorem without the assumption.

Proceedings ArticleDOI
30 Oct 2011
TL;DR: An algorithm is provided for solving pseudo-Boolean problems through an incremental translation to SAT that works with any incremental SAT solver as a backend; it is also argued that SAT-based algorithms for pseudo-Boolean problems should be a part of any portfolio solver.
Abstract: We revisit pseudo-Boolean Solving via compilation to SAT. We provide an algorithm for solving pseudo-Boolean problems through an incremental translation to SAT that works with any incremental SAT solver as a backend. Experimental evaluation shows that our incremental algorithm solves industrial problems that previous SAT-based approaches do not. We also show that SAT-based algorithms for solving pseudo-Boolean problems should be a part of any portfolio solver.
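The compilation idea can be shown on the smallest pseudo-Boolean constraint. The pairwise encoding below is the textbook illustration of "translation to SAT", not the paper's incremental algorithm, which handles arbitrary coefficient sums and reuses solver state across calls:

```python
from itertools import combinations

def at_most_one(variables):
    """CNF encoding of the pseudo-Boolean constraint x1 + ... + xn <= 1:
    one binary clause per pair, forbidding both variables being true.
    Clauses use DIMACS-style signed integers (-k means 'xk is false')."""
    return [[-a, -b] for a, b in combinations(variables, 2)]

clauses = at_most_one([1, 2, 3])
# [[-1, -2], [-1, -3], [-2, -3]]: any assignment setting two of
# x1, x2, x3 true falsifies one of the clauses
```

Practical translations replace the quadratic pairwise scheme with compact counter- or BDD-based encodings of general constraints sum(a_i * x_i) <= k, but the correctness statement is the same: the CNF is satisfiable exactly when the pseudo-Boolean constraint is.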

Proceedings ArticleDOI
01 Jun 2011
TL;DR: This work surveys results relating the computability and randomness aspects of sets of natural numbers, showing that properties originally defined in very different ways coincide.
Abstract: We survey results relating the computability and randomness aspects of sets of natural numbers. Each aspect corresponds to several mathematical properties. Properties originally defined in very different ways are shown to coincide. For instance, lowness for ML-randomness is equivalent to K-triviality. We include some interactions of randomness with computable analysis. Mathematics Subject Classification (2010). 03D15, 03D32.

Journal ArticleDOI
TL;DR: This article extends the expressive power of computability logic (CoL) in a qualitatively new way, generalizing formulas (to which the earlier languages of CoL were limited) to cirquents, allowing for subgame/subtask sharing between different parts of the overall game/task.
Abstract: Computability logic (CoL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a recently introduced semantical platform and ambitious program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Its expressions represent interactive computational tasks seen as games played by a machine against the environment, and "truth" is understood as existence of an algorithmic winning strategy. With logical operators standing for operations on games, the formalism of CoL is open-ended, and has already undergone a series of extensions. This article extends the expressive power of CoL in a qualitatively new way, generalizing formulas (to which the earlier languages of CoL were limited) to circuit-style structures termed cirquents. The latter, unlike formulas, are able to account for subgame/subtask sharing between different parts of the overall game/task. Among the many advantages offered by this ability is that it allows us to capture, refine and generalize the well-known independence-friendly logic which, after the present leap forward, naturally becomes a conservative fragment of CoL, just as classical logic had been known to be a conservative fragment of the formula-based version of CoL. Technically, this paper is self-contained, and can be read without any prior familiarity with CoL.

Book ChapterDOI
14 Jun 2011
TL;DR: Some decidable and undecidable problems closely related to the PB-realizability problem are presented, thus demonstrating its 'borderline' status with respect to computability.
Abstract: The chamber hitting problem (CHP) for linear maps consists in checking whether an orbit of a linear map specified by a rational matrix hits a given rational polyhedral set. The CHP generalizes some well-known open computability problems about linear recurrent sequences (e.g., the Skolem problem, the nonnegativity problem). It was recently shown that the CHP is Turing equivalent to checking whether an intersection of a regular language and the special language of permutations of binary words (the permutation filter) is nonempty (the PB-realizability problem). In this paper we present some decidable and undecidable problems closely related to the PB-realizability problem, thus demonstrating its 'borderline' status with respect to computability.

Journal ArticleDOI
TL;DR: A series of approximations converging to the p-radius with a priori computable accuracy is described; for nonnegative matrices, this gives efficient approximation schemes for the p-radius computation.
Abstract: The $p$-radius characterizes the average rate of growth of norms of matrices in a multiplicative semigroup. This quantity has found several applications in recent years. We raise the question of its computability. We prove that the complexity of its approximation increases exponentially with $p$. We then describe a series of approximations that converge to the $p$-radius with a priori computable accuracy. For nonnegative matrices, this gives efficient approximation schemes for the $p$-radius computation.
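The quantity being approximated can be written down directly. The sketch below is a naive finite-k truncation of the defining limit, the average of ||product||^p over all length-k products, raised to the power 1/(pk); it is exponential in k, unlike the paper's efficient schemes for nonnegative matrices, and the use of the 1-norm is our choice (any norm gives the same limit):

```python
from itertools import product

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

def norm1(A):
    """Matrix 1-norm: maximum absolute column sum."""
    return max(sum(abs(v) for v in col) for col in zip(*A))

def p_radius_estimate(mats, p, k):
    """Finite-k estimate of the p-radius of the family `mats`:
    ( mean over all length-k products of ||product||^p )^(1/(p*k))."""
    total, count = 0.0, 0
    for word in product(mats, repeat=k):
        P = word[0]
        for M in word[1:]:
            P = mat_mul(P, M)
        total += norm1(P) ** p
        count += 1
    return (total / count) ** (1.0 / (p * k))
```

For a single 1x1 matrix [[2]] the estimate is exactly 2 for every p and k, a quick sanity check; the interesting (and, per the paper, computationally hard in p) cases are genuine families of matrices.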

Journal ArticleDOI
TL;DR: It is shown that with respect to lower semantics, the finite-time reachable sets are lower-semicomputable, and with respect to upper semantics, the finite-time reachable sets are upper-semicomputable.
Abstract: In this paper we consider the semantics for the evolution of hybrid systems, and the computability of the evolution with respect to these semantics. We show that with respect to lower semantics, the finite-time reachable sets are lower-semicomputable, and with respect to upper semantics, the finite-time reachable sets are upper-semicomputable. We use the framework of type-two Turing computability theory and computable analysis, which deal with obtaining approximation results with guaranteed error bounds from approximate data. We show that, in general, we cannot find a semantics for which the evolution is both lower- and upper-semicomputable, unless the system is free from tangential and corner contact with the guard sets. We highlight the main points of the theory with simple examples illustrating the subtleties involved.
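The phrase "approximation results with guaranteed error bounds from approximate data" can be illustrated with the simplest validated-computation device. The minimal interval type below is an assumption-level sketch of the idea, not the paper's computable-analysis framework: every operation returns an interval guaranteed to contain the true value.

```python
class Interval:
    """A closed interval [lo, hi] used as a validated enclosure."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # the product of two intervals is bounded by the corner products
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def contains(self, v):
        return self.lo <= v <= self.hi

# enclose f(x) = x*x + x over the uncertain input x in [0.9, 1.1]
x = Interval(0.9, 1.1)
enclosure = x * x + x
# the enclosure certainly contains the true value f(1.0) = 2.0
```

Upper- and lower-semicomputability of reachable sets refine this picture: one semantics lets the enclosures shrink onto the set from outside, the other lets inner approximations grow from inside, and the paper shows one cannot in general have both at once.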

Proceedings ArticleDOI
Timon Hertli1
22 Oct 2011
TL;DR: The PPSZ algorithm by Paturi, Pudlak, Saks, and Zane [1998] is the fastest known algorithm for Unique k-SAT, where the input formula does not have more than one satisfying assignment.
Abstract: The PPSZ algorithm by Paturi, Pudlák, Saks, and Zane [1998] is the fastest known algorithm for Unique k-SAT, where the input formula does not have more than one satisfying assignment. For k>=5 the same bounds hold for general k-SAT. We show that this is also the case for k=3,4, using a slightly modified PPSZ algorithm. We do the analysis by defining a cost for satisfiable CNF formulas, which we prove to decrease in each PPSZ step by a certain amount. This improves our previous best bounds with Moser and Scheder [2011] for 3-SAT to O(1.308^n) and for 4-SAT to O(1.469^n).

Journal ArticleDOI
TL;DR: A version of the magic sets technique for DFRP programs is designed, which ensures query equivalence under both brave and cautious reasoning, and it is shown that, if the input program is D FRP, then its magic-sets rewriting is guaranteed to be finitely ground.
Abstract: The support for function symbols in logic programming under answer set semantics allows us to overcome some modeling limitations of traditional Answer Set Programming (ASP) systems, such as the inability to handle infinite domains. On the other hand, admitting function symbols in ASP makes inference undecidable in the general case. Recently, the research community has been focusing on finding proper subclasses of programs with functions for which decidability of inference is guaranteed. The two major proposals, so far, are finitary programs and finitely-ground programs. These two proposals are somewhat complementary: indeed, the former is conceived to allow decidable querying (by means of a top-down evaluation strategy), while the latter supports the computability of answer sets (by means of a bottom-up evaluation strategy). One of the main advantages of finitely-ground programs is that they can be “directly” evaluated by current ASP systems, which are based on a bottom-up computational model. However, there are also some interesting programs which are suitable for top-down query evaluation but do not fall in the class of finitely-ground programs. In this paper, we focus on disjunctive finitely recursive positive (DFRP) programs. We design a version of the magic sets technique for DFRP programs, which ensures query equivalence under both brave and cautious reasoning. We show that, if the input program is DFRP, then its magic-sets rewriting is guaranteed to be finitely ground. Reasoning on DFRP programs thus turns out to be decidable, and we also provide an effective method for performing this reasoning directly with the ASP system DLV.

Posted Content
TL;DR: The theory of physical computability and, accordingly, physical complexity theory are formally proposed, and a framework that can evaluate almost all forms of computation using various physical mechanisms is discussed.
Abstract: Inspired by the work of Feynman and Deutsch, we formally propose the theory of physical computability and, accordingly, the theory of physical complexity. To achieve this, we discuss a framework that can evaluate almost all forms of computation using various physical mechanisms. Here, we focus on using it to review the theory of quantum computation. As a preliminary study of more general problems, examples of other physical mechanisms are also given in this paper.

Journal ArticleDOI
TL;DR: This paper shows that (a) the problem of determining the number of attractors in a given compact set is in general undecidable, even for analytic systems and (b) the attractors are semi-computable for stable systems.
Abstract: In this paper we explore the problem of computing attractors and their respective basins of attraction for continuous-time planar dynamical systems. We consider C^1 systems and show that stability is in general necessary (but may not be sufficient) to attain computability. In particular, we show that (a) the problem of determining the number of attractors in a given compact set is in general undecidable, even for analytic systems and (b) the attractors are semi-computable for stable systems. We also show that the basins of attraction are semi-computable if and only if the system is stable.