
Showing papers on "Computability published in 2004"


Book
01 Jan 2004

233 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best fitting model in the class irrespective of whether the "true" model is in the model class considered or not.
Abstract: In 1974, Kolmogorov proposed a nonprobabilistic approach to statistics and model selection. Let data be finite binary strings and models be finite sets of binary strings. Consider model classes consisting of models of given maximal (Kolmogorov) complexity. The "structure function" of the given data expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. We show that the structure function determines all stochastic properties of the data: for every constrained model class it determines the individual best fitting model in the class, irrespective of whether the "true" model is in the model class considered or not. In this setting, this happens with certainty, rather than with high probability as in the classical case. We precisely quantify the goodness-of-fit of an individual model with respect to individual data. We show that, within the obvious constraints, every graph is realized by the structure function of some data. We determine the (un)computability properties of the various functions contemplated and of the "algorithmic minimal sufficient statistic."
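
For reference, the Kolmogorov structure function discussed above is commonly written as follows (a standard formulation from the algorithmic-statistics literature, not quoted from the paper): for a data string x and a complexity bound \alpha,

    h_x(\alpha) \;=\; \min_{S} \{\, \log |S| \;:\; x \in S,\; K(S) \le \alpha \,\},

where S ranges over finite sets of strings and K(S) denotes the Kolmogorov complexity of S; a set S witnessing the minimum at level \alpha is the best-fitting model within that complexity budget.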

137 citations



Journal ArticleDOI
TL;DR: The presuppositions and context of the TM model are reviewed, and it is shown that the model is unsuited to natural computation; an expanded definition of computation that includes alternative (especially analog) models as well as the TM is therefore considered.

89 citations


Book
19 Apr 2004

69 citations


Journal ArticleDOI
TL;DR: It is shown that if a new notion of computability is introduced for the GPAC, based on ideas from computable analysis, then one can compute transcendentally transcendental functions such as the Gamma function or Riemann's Zeta function.
Abstract: This paper revisits one of the first models of analog computation, the General Purpose Analog Computer (GPAC). In particular, we restrict our attention to the improved model presented in [11] and we show that it can be further refined. With this we prove the following: (i) the previous model can be simplified; (ii) it admits extensions having close connections with the class of smooth continuous time dynamical systems. As a consequence, we conclude that some of these extensions achieve Turing universality. Finally, it is shown that if we introduce a new notion of computability for the GPAC, based on ideas from computable analysis, then one can compute transcendentally transcendental functions such as the Gamma function or Riemann's Zeta function. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
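
For context, a standard fact (not taken from the paper) explains why the Gamma function is the benchmark here: functions generable by the classical GPAC are differentially algebraic, whereas by Hölder's theorem

    \Gamma(x) \;=\; \int_0^{\infty} t^{x-1} e^{-t}\, dt

satisfies no algebraic differential equation, i.e. it is "transcendentally transcendental". Capturing \Gamma (or Riemann's \zeta) therefore requires exactly the kind of extended, computable-analysis-based notion of GPAC computability introduced in the paper.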

68 citations


Proceedings ArticleDOI
07 Nov 2004
TL;DR: An extensive suite of experiments with large sequential circuits confirms the robustness and efficiency of the proposed logic debugging methodology and suggests that Boolean satisfiability provides an effective platform for sequential logic debugging.
Abstract: Logic debugging of today's complex sequential circuits is an important problem. In this paper, a logic debugging methodology for multiple errors in sequential circuits with no state equivalence is developed. The proposed approach reduces the problem of debugging to an instance of Boolean satisfiability. This formulation takes advantage of modern Boolean satisfiability solvers that handle large circuits in a computationally efficient manner. An extensive suite of experiments with large sequential circuits confirms the robustness and efficiency of the proposed approach. The results further suggest that Boolean satisfiability provides an effective platform for sequential logic debugging.
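
The reduction idea can be illustrated with a self-contained toy sketch in Python (my own simplified illustration, not the authors' encoding or tool; the circuit, gate names and select variable are invented): a suspect gate is guarded by a select variable, and asking whether some choice of select and correction values reproduces the specified behaviour on every test vector is a satisfiability question.

    from itertools import product

    # Specification: z = (a AND b) OR c
    def spec(a, b, c):
        return (a and b) or c

    # Buggy implementation: gate g1 was wired as OR instead of AND.
    # s1 is a select variable: when s1 = 1, the suspect gate's output is
    # replaced by a free correction value f1 chosen by the "solver".
    def impl(a, b, c, s1, f1):
        g1 = f1 if s1 else (a or b)
        return g1 or c

    def diagnose(vectors):
        # Try each value of the select variable; s1 = 1 means "g1 is an error site".
        for s1 in (0, 1):
            if all(any(impl(a, b, c, s1, f1) == spec(a, b, c) for f1 in (0, 1))
                   for a, b, c in vectors):
                return s1
        return None

    vectors = list(product((0, 1), repeat=3))
    print("g1 must be corrected:", diagnose(vectors) == 1)   # expected: True

In the actual methodology the same question is posed as a CNF formula, with the sequential circuit unrolled over time frames and select variables placed at every suspect location, and handed to a SAT solver.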

62 citations


Journal ArticleDOI
TL;DR: In this paper, the WhileCC* model is introduced for approximable many-valued computation on topological algebras, and the equivalence between abstract and concrete computation on metric partial algebras is established.
Abstract: In the theory of computation on topological algebras there is a considerable gap between so-called abstract and concrete models of computation. In concrete models, unlike abstract models, the computations depend on the representation of the algebra. First, we show that with abstract models, one needs algebras with partial operations, and computable functions that are both continuous and many-valued. This many-valuedness is needed even to compute single-valued functions, and so abstract models must be nondeterministic even to compute deterministic problems. As an abstract model, we choose the "while"-array programming language, extended with a nondeterministic "countable choice" assignment, called the WhileCC* model. Using this, we introduce the concept of approximable many-valued computation on metric algebras. For our concrete model, we choose metric algebras with effective representations. We prove: (1) for any metric algebra A with an effective representation α, WhileCC* approximability implies computability in α, and (2) also the converse, under certain reasonable conditions on A. From (1) and (2) we derive an equivalence theorem between abstract and concrete computation on metric partial algebras. We give examples of algebras where this equivalence holds.

59 citations


Journal ArticleDOI
TL;DR: This article investigates several additional properties of representations which guarantee that such representations induce a reasonable Type-2 Complexity Theory on the represented spaces.
Abstract: The basic concept used in Type-2 Theory of Effectivity (TTE) to define computability on topological spaces (X, τ) or limit spaces is that of a representation, i.e. a surjective function from the Baire space onto X. Representations having the topological property of admissibility are known to provide a reasonable computability theory. In this article, we investigate several additional properties of representations which guarantee that such representations induce a reasonable Type-2 Complexity Theory on the represented spaces. For each of these properties, we give a nice characterization of the class of spaces that are equipped with a representation having the respective property. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
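
As a concrete example of such a representation (a textbook TTE example, not specific to this paper): the Cauchy representation \rho_C of the reals lets p in the Baire space name x \in \mathbb{R} whenever p encodes a rapidly converging sequence of rationals,

    \rho_C(p) = x \iff |\nu_{\mathbb{Q}}(p(n)) - x| \le 2^{-n} \text{ for all } n,

where \nu_{\mathbb{Q}} is a standard numbering of the rationals. This representation is admissible and is the usual starting point for both computability and complexity theory on \mathbb{R}.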

54 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe procedures that dramatically improve the computability of such models and bring them into the realm of real-time application, based on selecting and optimizing an arterial priority network or a route priority network.

48 citations


01 Jan 2004
TL;DR: Guaranteed Optimization is a proof technique for constructing compilers that find optimal programs within a decidable approximation of program equivalence; it gives us compilers whose kernels possess intuitive closure properties akin to, but stronger than, those of languages with explicit staging, and can meet the ‘Turing-complete kernel’ requirement to be stage- and safety-universal.
Abstract: Universal programming languages are an old dream. There is the computability sense of Turing-universal; Landin and others have advocated syntactically universal languages, a path leading to extensible syntax, e.g., macros. A stronger kind of universality would reduce the need for domain-specific languages—they could be replaced by ‘active libraries’ providing domain-specific optimizations and safety requirements. Experience suggests that much domain-specific optimization can be realized by staging, i.e., doing computations at compile time to produce an efficient run-time. Rudimentary computability arguments show that languages with a ‘Turing-complete kernel’ can be both stage-universal and safety-universal. But making this approach practical requires compilers that find optimal programs, and this is a hard problem. Guaranteed Optimization is a proof technique for constructing compilers that find optimal programs within a decidable approximation of program equivalence. This gives us compilers whose kernels possess intuitive closure properties akin to, but stronger than, languages with explicit staging, and can meet the ‘Turing-complete kernel’ requirement to be stage- and safety-universal. To show this technique is practical we demonstrate a prototype compiler that finds optimal programs in the presence of heap operations; the proof of this is tedious but automated. The proof ensures that any code ‘lying in the kernel’ is evaluated and erased at compile-time. This opens several interesting directions for active libraries. One is staging: we can synthesize fast implementation code at compile-time by putting code-generators in the kernel. To achieve domain-specific safety checking we propose ‘proof embeddings’ in which proofs are intermingled with code and the optimizer does double-duty as a theorem prover. Proofs lying in the kernel are checked and erased at compile-time, yielding code that is both fast and safe.
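
A minimal staging illustration in Python (my own generic example; the thesis is not about Python and the names below are invented): specializing a power function for a fixed exponent performs all exponent-dependent work at "compile time", so that only straight-line multiplications remain at run time, which is the effect the abstract attributes to code lying in the kernel.

    # Generate a specialized pow_n(x) = x**n with the exponent loop erased.
    def specialize_pow(n):
        body = " * ".join(["x"] * n) if n > 0 else "1"
        src = f"def pow_{n}(x):\n    return {body}\n"
        namespace = {}
        exec(src, namespace)          # "compile-time" evaluation of the generator
        return namespace[f"pow_{n}"]

    pow_5 = specialize_pow(5)
    print(pow_5(2))                   # 32; no exponent handling remains at run time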

Proceedings ArticleDOI
16 Jun 2004
TL;DR: This paper proposes an efficient algorithm for logic synthesis based on the incremental Boolean satisfiability (SAT) approach and shows that this technique leads not only to huge memory savings when compared with the methods based on reachability graphs, but also to significant speedups in many cases, without affecting the quality of the solution.
Abstract: The behaviour of asynchronous circuits is often described by signal transition graphs (STGs), which are Petri nets whose transitions are interpreted as rising and falling edges of signals. One of the crucial problems in the synthesis of such circuits is deriving equations for logic gates implementing each output signal of the circuit. This is usually done using reachability graphs. In this paper, we avoid constructing the reachability graph of an STG, which can lead to state space explosion, and instead use only the information about causality and structural conflicts between the events involved in a finite and complete prefix of its unfolding. We propose an efficient algorithm for logic synthesis based on the incremental Boolean satisfiability (SAT) approach. Experimental results show that this technique leads not only to huge memory savings when compared with the methods based on reachability graphs, but also to significant speedups in many cases, without affecting the quality of the solution.

Journal ArticleDOI
TL;DR: These results yield algorithms which solve systems of linear equations A · x = b, determine the spectral resolution of a symmetric matrix B, and compute a linear subspace's dimension from its Euclidean distance function, provided the rank of A and the number of distinct eigenvalues of B are known.

Journal ArticleDOI
TL;DR: A new theoretical computational model, called the conformon-P system, based on simple and basic concepts inspired by a theoretical model of the living cell and by membrane computing, is presented, and the computational power of this model and of some natural variants is studied.


Journal ArticleDOI
TL;DR: This paper gives an exact recursion-theoretic characterization of the computational power of this kind of fuzzy Turing machine and shows that the fuzzy languages accepted by these machines with a computable t-norm correspond exactly to the union Σ^0_1 ∪ Π^0_1 of recursively enumerable languages and their complements.

Journal ArticleDOI
TL;DR: A textbook introduction to the theory of computation, covering finite automata, context-free languages, Turing machines, computability theory, computational complexity, and NP-completeness.
Abstract: Preface. Leverages. Finite Automata. Context-Free Languages. Turing Machines. Computability Theory. Computational Complexity. NP-Completeness. References. Index.

Journal Article
TL;DR: This work proposes an alternative but semantically equivalent perspective based on a generalization of results on the database-theoretic problem of answering queries using views; this leads to a fast query rewriting algorithm, which has been implemented and experimentally evaluated.
Abstract: We address the problem of answering queries using expressive symmetric inter-schema constraints which allow mappings to be established between several heterogeneous information systems. This problem is of high relevance to data integration, as symmetric constraints are essential for dealing with true concept mismatch and are generalizations of the kinds of mappings supported by both the local-as-view and global-as-view approaches previously studied in the literature. Moreover, the flexibility gained by using such constraints for data integration is essential for virtual enterprise and e-commerce applications. We first discuss resolution-based methods for computing maximally contained rewritings and characterize computability aspects. Then we propose an alternative but semantically equivalent perspective based on a generalization of results relating to the database-theoretic problem of answering queries using views. This leads to a fast query rewriting algorithm based on AI techniques, which has been implemented and experimentally evaluated.

Journal ArticleDOI
TL;DR: It turns out that only a few of these ‘basic’ computability notions are suitable in the sense of rendering all those operations uniformly computable.
Abstract: For regular sets in Euclidean space, previous work has identified twelve ‘basic’ computability notions, to (pairs of) which many notions previously considered in the literature were shown to be equivalent. With respect to those basic notions, we now investigate the computability of natural operations on regular sets: union, intersection, complement, convex hull, and image and pre-image under suitable classes of functions. It turns out that only a few of these notions are suitable in the sense of rendering all those operations uniformly computable. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

Journal ArticleDOI
TL;DR: Despite the recursive non-computability of Hilbert's tenth problem, this work outlines and argues for a quantum algorithm that is based on the Quantum Adiabatic Theorem and explains how this algorithm can solve Hilbert's tenth problem.

Posted Content
TL;DR: In Computable Economics an eclectic approach is adopted where the main criterion is numerical content for economic entities, and both the computable and the constructive traditions are freely and indiscriminately invoked in the formalization of economic entities.
Abstract: Computability theory came into being as a result of Hilbert's attempts to meet Brouwer's challenges, from an intuitionistic and constructive standpoint, to formalism as a foundation for mathematical practice. Viewed this way, constructive mathematics should be one vision of computability theory. However, there are fundamental differences between computability theory and constructive mathematics: the Church-Turing thesis is a disciplining criterion in the former and not in the latter; and classical logic - particularly, the law of the excluded middle - is not accepted in the latter but freely invoked in the former, especially in proving universal negative propositions. In Computable Economics an eclectic approach is adopted where the main criterion is numerical content for economic entities. In this sense both the computable and the constructive traditions are freely and indiscriminately invoked and utilised in the formalization of economic entities. Some of the mathematical methods and concepts of computable economics are surveyed in a pedagogical mode. The context is that of a digital economy embedded in an information society.

Journal ArticleDOI
TL;DR: The conceptual foundations of Alan Turing's analysis of computability are explored, and it is argued that Turing's account represents a last vestige of a famous but unsuccessful program in pure mathematics, viz., Hilbert's formalist program.

Journal ArticleDOI
TL;DR: In this article, a soundness and completeness proof for an axiomatization of one of the most basic fragments of computability logic is given, where the logical vocabulary contains operators for the so-called parallel and choice operations, and its atoms represent elementary problems.
Abstract: In the same sense as classical logic is a formal theory of truth, the recently initiated approach called computability logic is a formal theory of computability. It understands (interactive) computational problems as games played by a machine against the environment, their computability as existence of a machine that always wins the game, logical operators as operations on computational problems, and validity of a logical formula as being a scheme of "always computable" problems. The present contribution gives a detailed exposition of a soundness and completeness proof for an axiomatization of one of the most basic fragments of computability logic. The logical vocabulary of this fragment contains operators for the so-called parallel and choice operations, and its atoms represent elementary problems, i.e. predicates in the standard sense. This article is self-contained as it explains all relevant concepts. While not technically necessary, however, familiarity with the foundational paper "Introduction to computability logic" [Annals of Pure and Applied Logic 123 (2003), pp. 1-99] would greatly help the reader in understanding the philosophy, underlying motivations, potential and utility of computability logic, the context that determines the value of the present results. Online introduction to the subject is available at this http URL and this http URL.
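
A standard example often used to convey the parallel/choice distinction (common in expositions of computability logic, not quoted from this paper): for an arbitrary predicate p, the parallel disjunction

    p \vee \neg p

is valid, because a machine wins it with the copy-cat strategy between the two copies of p, whereas the choice disjunction

    p \sqcup \neg p

is not valid, since winning it would require actually deciding p.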

Journal ArticleDOI
TL;DR: This paper introduces three notions of weak computability, in a way similar to Ershov's hierarchy of Δ^0_2 sets of natural numbers, based on the binary expansion, Dedekind cut and Cauchy sequence representations, respectively; this leads to a series of classes of reals with different levels of computability.
Abstract: The computability of reals was introduced by Alan Turing [20] by means of decimal representations. But the equivalent notion can also be introduced accordingly if the binary expansion, Dedekind cut or Cauchy sequence representations are considered instead. In other words, the computability of reals is independent of their representations. However, as shown by Specker [19] and Ko [9], the primitive recursiveness and polynomial time computability of reals do depend on the representation. In this paper, we explore how the weak computability of reals depends on the representation. To this end, we introduce three notions of weak computability, in a way similar to Ershov's hierarchy of Δ^0_2 sets of natural numbers, based on the binary expansion, Dedekind cut and Cauchy sequence, respectively. This leads to a series of classes of reals with different levels of computability. We investigate systematically on which levels these notions are equivalent. We also compare them with other known classes of reals, such as c.e. and d-c.e. reals. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
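
For reference, the comparison classes mentioned at the end have the following standard definitions (not quoted from the paper): a real x is computably enumerable (c.e., also called left-computable) if

    x = \lim_{n \to \infty} q_n \quad \text{for some computable nondecreasing sequence of rationals } (q_n),

and x is d-c.e. (a difference of c.e. reals) if x = y - z for some c.e. reals y and z. Every computable real is c.e. and every c.e. real is d-c.e., while neither converse holds.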

Journal ArticleDOI
01 Mar 2004
TL;DR: Language Emulator introduces error-detecting and internationalization functionalities into automata tools in order to help undergraduate students to understand the concepts of Automata Theory.
Abstract: Language Emulator, written in Java, is a toolkit to help undergraduate students to understand the concepts of Automata Theory. The software allows the manipulation of regular expressions, regular grammars, deterministic finite automata, nondeterministic finite automata with and without lambda transitions, and Moore and Mealy machines. Language Emulator introduces error-detecting and internationalization functionalities into automata tools. It has been accepted by 95% of students in a recent survey, indicating that it is a helpful toolkit in learning Automata Theory.
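
As a generic illustration of the simplest kind of object such a toolkit manipulates (not code from Language Emulator, which is written in Java; this is a plain Python sketch with made-up names): a deterministic finite automaton and its acceptance check.

    # A DFA accepting exactly the binary strings with an even number of 1s.
    dfa = {
        "start": "even",
        "accepting": {"even"},
        "delta": {("even", "0"): "even", ("even", "1"): "odd",
                  ("odd", "0"): "odd",  ("odd", "1"): "even"},
    }

    def accepts(dfa, word):
        state = dfa["start"]
        for symbol in word:
            state = dfa["delta"][(state, symbol)]   # total transition function
        return state in dfa["accepting"]

    print(accepts(dfa, "1101"))   # False: three 1s
    print(accepts(dfa, "1001"))   # True: two 1s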

Proceedings ArticleDOI
02 Sep 2004
TL;DR: KYPD is a customized solver for KYP-SDPs that utilizes the inherent structure of the optimization problem thus improving efficiency significantly.
Abstract: Semidefinite programs derived from the Kalman-Yakubovich-Popov lemma are quite common in control and signal processing applications. The programs are often of high dimension, making them hard or impossible to solve with general-purpose solvers. KYPD is a customized solver for KYP-SDPs that utilizes the inherent structure of the optimization problem, thus improving efficiency significantly.
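
For context, a KYP-SDP has roughly the following shape (a standard statement of the Kalman-Yakubovich-Popov LMI, not copied from the paper): given system matrices A, B and a symmetric matrix M, find P = P^T, possibly minimizing a linear cost, such that

    \begin{bmatrix} A^{\top}P + PA & PB \\ B^{\top}P & 0 \end{bmatrix} + M \preceq 0.

The matrix variable P alone contributes on the order of n(n+1)/2 scalar unknowns, which is why generic interior-point solvers struggle on large instances and why structure-exploiting solvers such as KYPD pay off.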

Proceedings ArticleDOI
01 Oct 2004
TL;DR: In this article, the Infinite Merchant Problem (IMP) is considered as a decision problem equivalent to the Halting Problem, based on results obtained in the Coins, ACP.
Abstract: Hypercomputation or super-Turing computation is a ``computation'' that transcends the limit imposed by Turing's model of computability. The field still faces some basic questions, technical (can we mathematically and/or physically build a hypercomputer?), cognitive (can hypercomputers realize the AI dream?), philosophical (is thinking more than computing?). The aim of this paper is to address the question: can we mathematically build a hypercomputer? We will discuss the solutions of the Infinite Merchant Problem, a decision problem equivalent to the Halting Problem, based on results obtained in \cite{Coins,acp}. The accent will be on the new computational technique and results rather than formal proofs.

Posted Content
TL;DR: This work considers the theoretical model of a quantum computer on a noncommutative space background, which is a computational model for quantum gravity, and shows that a theorem that classically was considered true but non-computable becomes, at the Planck scale, computable but non-decidable.
Abstract: We consider the issue of computability at the most fundamental level of physical reality: the Planck scale. To this aim, we consider the theoretical model of a quantum computer on a noncommutative space background, which is a computational model for quantum gravity. In this domain, all computable functions are the laws of physics in their most primordial form, and non-computable mathematics finds no room in the physical world. Moreover, we show that a theorem that classically was considered true but non-computable becomes, at the Planck scale, computable but non-decidable. This fact is due to the change of logic for observers in a quantum-computing universe: from standard quantum logic and classical logic to paraconsistent logic.

Journal ArticleDOI
TL;DR: The study shows that, by combining the infeasible interior point methods and specific decomposition techniques, it is possible to greatly improve the computability of multistage stochastic linear programs.
Abstract: Multistage stochastic linear programming (MSLP) is a powerful tool for making decisions under uncertainty. A deterministic equivalent problem of MSLP is a large-scale linear program with nonanticipativity constraints. Recently developed infeasible interior point methods are used to solve the resulting linear program. Technical problems arising from this approach include rank reduction and computation of search directions. The sparsity of the nonanticipativity constraints and the special structure of the problem are exploited by the interior point method. Preliminary numerical results are reported. The study shows that, by combining the infeasible interior point methods and specific decomposition techniques, it is possible to greatly improve the computability of multistage stochastic linear programs.
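
For concreteness, the nonanticipativity constraints take roughly the following form in the split-variable (per-scenario) formulation of an MSLP (a standard formulation, not reproduced from the paper): with one copy x_t^s of the stage-t decision vector for every scenario s of probability p_s,

    \min \sum_{s} p_s \sum_{t} (c_t^{s})^{\top} x_t^{s}
    \quad \text{s.t.} \quad
    B_t^{s} x_{t-1}^{s} + A_t^{s} x_t^{s} = b_t^{s}, \qquad
    x_t^{s} = x_t^{s'} \text{ whenever } s \text{ and } s' \text{ share the same history up to stage } t.

The linking equalities x_t^s = x_t^{s'} are the nonanticipativity constraints; their sparse, block-structured pattern is exactly what the interior-point method is set up to exploit.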

Proceedings ArticleDOI
08 Nov 2004
TL;DR: The Semantic Randomization Theorem is proved, which states that the complexity of an arbitrary self-referential functional is unbounded in the limit of k-limited fine-grained parallel processors.
Abstract: In this paper, we first apply traditional computability theory to prove that the randomization problem, as defined herein, is recursively unsolvable. We then move on to extend traditional computability theory for the case of k-limited fine-grained parallel processors (i.e., temporal relativity). Using this modification, we are able to prove the Semantic Randomization Theorem (SRT). This theorem states that the complexity of an arbitrary self-referential functional (i.e., implying representation and knowledge) is unbounded in the limit. Furthermore, it then follows from the unsolvability of the randomization problem that effective knowledge acquisition in the large must be domain-specific and evolutionary. It is suggested that a generalized operant mechanics will be the fixed-point randomization of a domain-general self-referential randomization. In practice, this provides for the definition of knowledge-based systems that can formally apply analogy in the reasoning process as a consequence of semantic randomization.