
Showing papers on "Computability published in 2008"


Journal ArticleDOI
TL;DR: This paper introduces the notions of conservative extension, safety and module for a very general class of logic-based ontology languages, and provides the notion of a safety class, which characterizes any sufficient condition for safety, and identifies a family of safety classes, called locality, which enjoys a collection of desirable properties.
Abstract: In this paper, we propose a set of tasks that are relevant for the modular reuse of ontologies. In order to formalize these tasks as reasoning problems, we introduce the notions of conservative extension, safety and module for a very general class of logic-based ontology languages. We investigate the general properties of and relationships between these notions and study the relationships between the relevant reasoning problems we have previously identified. To study the computability of these problems, we consider, in particular, Description Logics (DLs), which provide the formal underpinning of the W3C Web Ontology Language (OWL), and show that all the problems we consider are undecidable, that is, algorithmically unsolvable, for the description logic underlying OWL DL. In order to achieve a practical solution, we identify conditions sufficient for an ontology to reuse a set of symbols "safely", that is, without changing their meaning. We provide the notion of a safety class, which characterizes any sufficient condition for safety, and identify a family of safety classes, called locality, which enjoys a collection of desirable properties. We use the notion of a safety class to extract modules from ontologies, and we provide various modularization algorithms that are appropriate to the properties of the particular safety class in use. Finally, we show the practical benefits of our safety checking and module extraction algorithms.
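To make the extraction step concrete, here is a minimal sketch of the locality-based fixpoint loop, restricted to atomic subsumption axioms, for which ⊥-locality with respect to a signature Σ reduces to the left-hand side lying outside Σ. The names (`Axiom`, `is_bot_local`, `extract_module`) and the toy ontology are illustrative, not the authors' implementation.

```python
# Hedged sketch of locality-based module extraction: repeatedly pull in
# every axiom that is not local w.r.t. the growing signature.
from collections import namedtuple

Axiom = namedtuple("Axiom", ["lhs", "rhs"])  # atomic subsumption: lhs ⊑ rhs

def is_bot_local(axiom, sigma):
    # For an atomic subsumption A ⊑ B, replacing A (when outside Σ) by ⊥
    # makes the axiom trivially true, so it is ⊥-local iff A is not in Σ.
    return axiom.lhs not in sigma

def extract_module(ontology, seed_signature):
    module, sigma = set(), set(seed_signature)
    changed = True
    while changed:
        changed = False
        for ax in ontology:
            if ax not in module and not is_bot_local(ax, sigma):
                module.add(ax)
                sigma |= {ax.lhs, ax.rhs}  # the module's symbols join Σ
                changed = True
    return module

onto = {Axiom("A", "B"), Axiom("B", "C"), Axiom("D", "E")}
print(extract_module(onto, {"A"}))  # pulls in A ⊑ B, then B ⊑ C; D ⊑ E stays out
```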

416 citations


Book
08 Sep 2008
TL;DR: Many topics often absent from other textbooks, such as repetitions in words, state complexity, the interchange lemma, 2DPDAs, and the incompressibility method, are covered here.
Abstract: Intended for graduate students and advanced undergraduates in computer science, A Second Course in Formal Languages and Automata Theory treats topics in the theory of computation not usually covered in a first course. After a review of basic concepts, the book covers combinatorics on words, regular languages, context-free languages, parsing and recognition, Turing machines, and other language classes. Many topics often absent from other textbooks, such as repetitions in words, state complexity, the interchange lemma, 2DPDAs, and the incompressibility method, are covered here. The author places particular emphasis on the resources needed to represent certain languages. The book also includes a diverse collection of more than 200 exercises, suggestions for term projects, and research problems that remain open.

218 citations


Book ChapterDOI
15 Dec 2008
TL;DR: This work formally defines the new class of finitely-ground programs, allowing for a powerful (possibly recursive) use of function terms in the full ASP language with disjunction and negation, and proves that membership in the class is semi-decidable.
Abstract: Disjunctive Logic Programming (DLP) under the answer set semantics, often referred to as Answer Set Programming (ASP), is a powerful formalism for knowledge representation and reasoning (KRR). Recent years have witnessed an increasing effort to embed functions in the context of ASP. Nevertheless, at present no ASP system allows for a reasonably unrestricted use of function terms. Functions are either required not to be recursive or subject to severe syntactic limitations, if allowed at all in ASP systems. In this work we formally define the new class of finitely-ground programs, allowing for a powerful (possibly recursive) use of function terms in the full ASP language with disjunction and negation. We demonstrate that finitely-ground programs have nice computational properties: (i) both brave and cautious reasoning are decidable, and (ii) answer sets of finitely-ground programs are computable. Moreover, the language is highly expressive, as any computable function can be encoded by a finitely-ground program. Due to the high expressiveness, membership in the class of finitely-ground programs is clearly not decidable (we prove that it is semi-decidable). We also single out a subset of finitely-ground programs, called finite-domain programs, which are effectively recognizable, while keeping computability of both reasoning and answer set computation. We implement all results in DLP, further extending the language in order to support list and set terms, along with a rich library of built-in functions for their manipulation. The resulting ASP system is very powerful: any computable function can be encoded in a rich and fully declarative KRR language, ensuring termination on every finitely-ground program. In addition, termination is guaranteed "a priori" if the user asks for the finite-domain check.
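The intuition behind "finitely ground" can be conveyed by a toy bottom-up evaluation in which a rule builds new terms with a function symbol but a guard lets the grounding saturate. This is an illustrative sketch, not the system described in the paper; the term encoding and the depth guard are ad hoc.

```python
# Terms over a unary function symbol s: "0", ("s","0"), ("s",("s","0")), ...
def s(t):
    return ("s", t)

def depth(t):  # number of applications of s
    d = 0
    while isinstance(t, tuple):
        d, t = d + 1, t[1]
    return d

# Program: num(0).   num(s(X)) :- num(X), depth(X) < 3.
# Naive least-fixpoint (immediate consequence) computation:
facts = {("num", "0")}
while True:
    derived = {("num", s(t)) for (p, t) in facts if p == "num" and depth(t) < 3}
    if derived <= facts:
        break          # saturation: this program is finitely ground
    facts |= derived

print(len(facts))      # 4 facts: num(0) .. num(s(s(s(0))))
```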

136 citations


Journal ArticleDOI
TL;DR: In this article, an implicit/explicit integration scheme for non-linear constitutive models is presented to provide additional computability to those solid mechanics problems where robustness is an important issue, i.e., material failure models equipped with strain softening, soft materials, contact-friction models, etc.

124 citations


Journal ArticleDOI
TL;DR: It is shown that augmenting those postulates about algorithmic computation with an additional requirement regarding basic operations gives a natural axiomatization of computability and a proof of Church's Thesis, as Gödel and others suggested may be possible.
Abstract: Church's Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turing-computable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally equivalent to an abstract state machine. This theorem presupposes three natural postulates about algorithmic computation. Here, we show that augmenting those postulates with an additional requirement regarding basic operations gives a natural axiomatization of computability and a proof of Church's Thesis, as Gödel and others suggested may be possible. In a similar way, but with a different set of basic operations, one can prove Turing's Thesis, characterizing the effective string functions, and—in particular—the effectively-computable functions on string representations of numbers.

104 citations


01 Jan 2008
TL;DR: These lecture notes present some basic notions and results on Automata Theory, Formal Languages Theory, Computability Theory, and Parsing Theory as well as on parsing techniques for context-free languages.
Abstract: These lecture notes present some basic notions and results on Automata Theory, Formal Languages Theory, Computability Theory, and Parsing Theory. I prepared these notes for a course on Automata, Languages, and Translators which I am teaching at the University of Roma Tor Vergata. More material on these topics and on parsing techniques for context-free languages can be found in standard textbooks such as [1, 8, 9]. The reader is encouraged to look at those books. A theorem denoted by the triple k.m.n is in Chapter k and Section m, and within that section it is identified by the number n. An analogous numbering system is used for algorithms, corollaries, definitions, examples, exercises, figures, and remarks. We use ‘iff’ to mean ‘if and only if’. Many thanks to my colleagues of the Department of Informatics, Systems, and Production of the University of Roma Tor Vergata. I am also grateful to my students and co-workers and, in particular, to Lorenzo Clemente, Corrado Di Pietro, Fulvio Forni, Fabio Lecca, Maurizio Proietti, and Valerio Senni for their help and encouragement. Finally, I am grateful to Francesca Di Benedetto, Alessandro Colombo, Donato Corvaglia, Gioacchino Onorati, and Leonardo Rinaldi of the Aracne Publishing Company for their kind cooperation.

86 citations


Book
01 Jan 2008
TL;DR: The author elaborates on and clarifies many of Turing's statements, making the original difficult-to-read document accessible to present-day programmers, computer science majors, math geeks, and others.
Abstract: Programming legend Charles Petzold unlocks the secrets of the extraordinary and prescient 1936 paper by Alan M. Turing. Mathematician Alan Turing invented an imaginary computer known as the Turing Machine; in an age before computers, he explored the concept of what it meant to be computable, creating the field of computability theory in the process, a foundation of present-day computer programming. The book expands Turing's original 36-page paper with additional background chapters and extensive annotations; the author elaborates on and clarifies many of Turing's statements, making the original difficult-to-read document accessible to present-day programmers, computer science majors, math geeks, and others. Interwoven into the narrative are the highlights of Turing's own life: his years at Cambridge and Princeton, his secret work in cryptanalysis during World War II, his involvement in seminal computer projects, his speculations about artificial intelligence, his arrest and prosecution for the crime of "gross indecency," and his early death by apparent suicide at the age of 41.
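A few lines of code make the imagined machine concrete. The following sketch (illustrative, not from the book) simulates a single-tape Turing machine from its transition table and runs a small binary-increment machine.

```python
def run_tm(delta, tape, state="q0", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.
    delta maps (state, symbol) -> (new_state, written_symbol, move)."""
    cells, head = dict(enumerate(tape)), 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(head, blank)
        state, cells[head], move = delta[(state, sym)]
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# Binary increment: run to the right end, then propagate the carry left.
delta = {
    ("q0", "0"): ("q0", "0", "R"),
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("q1", "_", "L"),    # found the right end; start the carry
    ("q1", "1"): ("q1", "0", "L"),    # 1 + carry = 0, keep carrying
    ("q1", "0"): ("halt", "1", "L"),  # 0 + carry = 1, done
    ("q1", "_"): ("halt", "1", "L"),  # carry past the left end
}
print(run_tm(delta, "1011"))  # -> "1100" (11 + 1 = 12)
```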

78 citations


Book
01 Jan 2008
TL;DR: This book provides a thorough description of hypercomputation, covering all attempts at devising conceptual hypermachines and all new promising computational paradigms that may eventually lead to the construction of a hypermachine.
Abstract: This book provides a thorough description of hypercomputation. It covers all attempts at devising conceptual hypermachines and all new promising computational paradigms that may eventually lead to the construction of a hypermachine. Readers will gain a deeper understanding of what computability is, and why the Church-Turing thesis poses an arbitrary limit to what can actually be computed. Hypercomputing is a relatively novel idea. However, the book's most important features are its description of the various attempts at hypercomputation, from trial-and-error machines to the exploration of the human mind, if we treat it as a computing device.

74 citations


Journal IssueDOI
TL;DR: This work provides a simple, clear description and classification of emergence in terms of self-organization, which provides a framework for understanding how concepts such as thermodynamic equilibrium, nonlinearity, and computability are related to emergence.
Abstract: Emergence is a difficult concept to describe clearly. It has been characterized in the literature in a number of ways, none of which is easy to understand or describes clearly how other concepts in complex systems science are related to emergence. We provide a simple, clear description and classification of emergence in terms of self-organization. This provides a framework for understanding how concepts such as thermodynamic equilibrium, nonlinearity, and computability are related to emergence. © 2008 Wiley Periodicals, Inc. Complexity, 2008.

73 citations


Journal ArticleDOI
TL;DR: The main technical result of the present paper is a sound and complete axiomatization of the propositional fragment of computability logic whose vocabulary includes all three sorts of conjunction and disjunction: parallel, choice, and sequential.
Abstract: Computability logic (CL) is a semantical platform and research program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which it has more traditionally been. Formulas in CL stand for (interactive) computational problems, understood as games between a machine and its environment; logical operators represent operations on such entities; and "truth" is understood as existence of an effective solution, i.e., of an algorithmic winning strategy. The formalism of CL is open-ended, and may undergo a series of extensions as the study of the subject advances. The main groups of operators on which CL has focused so far are the parallel, choice, branching, and blind operators, with the logical behaviors of the first three groups resembling those of the multiplicatives, additives and exponentials of linear logic, respectively. The present paper introduces a new important group of operators, called sequential. The latter come in the form of sequential conjunction and disjunction, sequential quantifiers, and sequential recurrences ("exponentials"). As the name may suggest, the algorithmic intuitions associated with this group are those of sequential computations, as opposed to the intuitions of parallel computations associated with the parallel group of operations. Specifically, while playing a parallel combination of games means playing all components of the combination simultaneously, playing a sequential combination means playing the components in a sequential fashion, one after another. The main technical result of the present paper is a sound and complete axiomatization of the propositional fragment of computability logic whose vocabulary, together with negation, includes all three sorts of conjunction and disjunction: parallel, choice, and sequential. An extension of this result to the first-order level is also outlined.

71 citations


Journal ArticleDOI
TL;DR: As computability implies value definiteness, the authors argue that certain sequences of quantum outcomes cannot be computable.
Abstract: As computability implies value definiteness, certain sequences of quantum outcomes cannot be computable.

Journal ArticleDOI
TL;DR: It is argued that, after decades of being respected but not taken seriously, research on multiprocessor algorithms and data structures is going mainstream along with parallel and concurrent computation.
Abstract: Changes in technology can have far-reaching effects on theory. For example, while Turing's work on computability predated the first electronic computers, complexity theory flowered only after computers became a reality. After all, an algorithm's complexity may not matter much in a mathematics journal, but matters quite a bit in a FORTRAN program. We argue that something similar is going on with parallel and concurrent computation: after decades of being respected but not taken seriously, research on multiprocessor algorithms and data structures is going mainstream.

Book ChapterDOI
01 Jan 2008
TL;DR: An analysis of computability is presented that leads to precise concepts but dispenses with theses; it yields axioms for discrete dynamical systems (representing human and machine computations) and allows the reduction of models of these axioms to Turing machines.
Abstract: Church's and Turing's theses dogmatically assert that an informal notion of computability is captured by a particular mathematical concept. I present an analysis of computability that leads to precise concepts, but dispenses with theses. To investigate computability is to analyze processes that can in principle be carried out by calculators. Drawing on this lesson we owe to Turing and recasting work of Gandy, I formulate finiteness and locality conditions for two types of calculators, human computing agents and mechanical computing devices; the distinctive feature of the latter is that they can operate in parallel. The analysis leads to axioms for discrete dynamical systems (representing human and machine computations) and allows the reduction of models of these axioms to Turing machines. Cellular automata and a variety of artificial neural nets can be shown to satisfy the axioms for machine computations.

Book ChapterDOI
15 Jun 2008
TL;DR: The impact of geometry on computability and complexity is explored in Winfree's model of nanoscale self-assembly, using the two-dimensional tile assembly model in the discrete Euclidean plane.
Abstract: This paper explores the impact of geometry on computability and complexity in Winfree's model of nanoscale self-assembly. We work in the two-dimensional tile assembly model, i.e., in the discrete Euclidean plane $\mathbb{Z} \times \mathbb{Z}$. Our first main theorem says that there is a roughly quadratic function $f$ such that a set $A \subseteq \mathbb{Z}^{+}$ is computably enumerable if and only if the set $X_A = \{ (f(n), 0) \mid n \in A \}$ (a simple representation of $A$ as a set of points on the x-axis) self-assembles in Winfree's sense. In contrast, our second main theorem says that there are decidable sets $D \subseteq \mathbb{Z} \times \mathbb{Z}$ that do not self-assemble in Winfree's sense. Our first main theorem is established by an explicit translation of an arbitrary Turing machine $M$ to a modular tile assembly system $\mathcal{T}_M$, together with a proof that $\mathcal{T}_M$ carries out concurrent simulations of $M$ on all positive integer inputs.

Book
01 Jun 2008
TL;DR: In this article, the authors present results of cutting-edge research in the cellular-automata framework of digital physics and modelling of spatially extended nonlinear systems; massively parallel computing, language acceptance, and computability; reversibility of computation, graph-theoretic analysis and logic; chaos and undecidability; evolution, learning and cryptography.
Abstract: Cellular automata are regular uniform networks of locally-connected finite-state machines. They are discrete systems with non-trivial behaviour. Cellular automata are ubiquitous: they are mathematical models of computation and computer models of natural systems. The book presents results of cutting-edge research in the cellular-automata framework of digital physics and modelling of spatially extended nonlinear systems; massively parallel computing, language acceptance, and computability; reversibility of computation, graph-theoretic analysis and logic; chaos and undecidability; evolution, learning and cryptography. The book is unique because it brings together the unequalled expertise of interdisciplinary studies at the edge of mathematics, computer science, engineering, physics and biology.
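For readers who have not seen one, an elementary cellular automaton fits in a few lines; Rule 110, used here, is moreover known to be Turing-universal, which is what ties such systems to computability. A minimal sketch, not taken from the book:

```python
def step(cells, rule=110):
    """One synchronous update: each cell's next state is the bit of
    `rule` indexed by its (left, self, right) neighbourhood, wrapping."""
    n = len(cells)
    return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31 + [1]          # a single live cell on a ring of 32
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```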

Book ChapterDOI
16 Sep 2008
TL;DR: The Computability Path Ordering (CPO), presented in this paper, is an improved definition of the higher-order recursive path ordering, a termination proof method for higher-order calculi.
Abstract: In this paper, we first briefly survey automated termination proof methods for higher-order calculi. We then concentrate on the higher-order recursive path ordering, for which we provide an improved definition, the Computability Path Ordering. This new definition indeed appears to capture the essence of computability arguments à la Tait and Girard, thereby explaining the name of the improved ordering.
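CPO itself is higher-order, but its first-order ancestor, the lexicographic (recursive) path ordering, already shows the shape of such termination orderings. Below is a hedged sketch for ground terms; the function `lpo_gt` and the precedence encoding are illustrative, and the actual CPO additionally handles binders, types, and computability-based argument comparisons.

```python
def lpo_gt(s, t, prec):
    """Ground lexicographic path ordering: is s > t under the precedence
    prec (a dict mapping function symbols to ranks)?"""
    f, ss = s[0], s[1:]
    # (a) some immediate subterm of s already equals or dominates t
    if any(si == t or lpo_gt(si, t, prec) for si in ss):
        return True
    g, ts = t[0], t[1:]
    if prec[f] > prec[g]:
        # (b) head of s is bigger: s must dominate every subterm of t
        return all(lpo_gt(s, tj, prec) for tj in ts)
    if f == g:
        # (c) equal heads: compare arguments lexicographically, and s
        # must still dominate every subterm of t
        for si, ti in zip(ss, ts):
            if si == ti:
                continue
            return lpo_gt(si, ti, prec) and all(lpo_gt(s, tj, prec) for tj in ts)
    return False

# Compare ground instances of ack(s(x), y) vs ack(x, s(y)):
prec = {"ack": 2, "s": 1, "0": 0}
big = ("ack", ("s", ("0",)), ("0",))
small = ("ack", ("0",), ("s", ("0",)))
print(lpo_gt(big, small, prec))  # True: the recursive call decreases
```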

Book ChapterDOI
18 Aug 2008
TL;DR: This paper reports on the experience of validating and using a real number calculator in PVS, and argues that for all practical purposes one need not worry about fussy computability-theoretic details such as the impossibility of determining the sign of a computable real in finite time.
Abstract: When handling proofs of properties in the real world we often need to assert that one numeric quantity is greater than another. When these numeric quantities are real-valued, it is often tempting to get out the calculator to calculate the values of the expressions and then enter the results directly into the theorem prover as "facts" or axioms, since formally proving the desired properties can often be very tiresome. Obviously, such a procedure poses a few risks. An alternative approach, presented in this paper, is to prove the correctness of an arbitrarily accurate calculator for the reals. If this calculator is expressed in terms of the underlying integer arithmetic operations of the theorem-prover's implementation language, then there is a reasonable expectation that a practical evaluator of real-valued expressions may have been constructed. Obviously, there are some constraints imposed by computability theory. It is well known, for example, that it is not possible to determine the sign of a computable real in finite time. We show that for all practical purposes, we need not worry about such fussy details. After all, mathematicians have --- throughout the centuries --- been prepared to make such calculations without being overly punctilious about the computability of the operations they were performing! We report on the experience of validating and using a real number calculator in PVS.
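The idea behind such a calculator can be sketched quickly: represent a real x by a program that, for each n, returns an integer a with |x - a/2^n| <= 2^-n, and implement arithmetic by requesting a little extra precision from the arguments. This is a generic construction, not the PVS development reported in the paper; the class and method names are invented.

```python
from fractions import Fraction

class CReal:
    """x is represented by approx(n) = an integer a with |x - a/2**n| <= 2**-n."""
    def __init__(self, approx):
        self.approx = approx

    @staticmethod
    def of(q):                      # embed an exact rational
        q = Fraction(q)
        return CReal(lambda n: round(q * 2**n))

    def __add__(self, other):
        # Ask both arguments for 2 extra bits so the two input errors
        # plus the final rounding stay within one unit in the last place.
        return CReal(lambda n: round(Fraction(self.approx(n + 2)
                                              + other.approx(n + 2), 4)))

    def definitely_gt(self, other, n):
        # Semi-decision only: True certifies x > y; False is inconclusive.
        # No finite n can certify x == y (the sign problem from the paper).
        return self.approx(n) - other.approx(n) > 2

third = CReal.of(Fraction(1, 3))
one = third + third + third
print(one.approx(20))                      # 1048576 = 2**20, i.e., 1.0 to 20 bits
print(one.definitely_gt(CReal.of(0), 5))   # True
```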

Proceedings ArticleDOI
07 Jan 2008
TL;DR: It is proved, under very weak complexity assumptions, that any recursive Complexity Clique is trivial and any r.e. Complexity Clique is an extensional set; as a consequence, properties such as having polynomial complexity are not decidable, and not even semi-decidable.
Abstract: The proofs of major results of Computability Theory, like Rice, Rice-Shapiro or Kleene's fixed point theorem, hide more information than is usually expressed in their respective statements. We make this information explicit, allowing us to state stronger, complexity-theoretic versions of all these theorems. In particular, we replace the notion of an extensional set of indices of programs by a set of indices of programs having not only the same extensional behavior but also similar complexity (a Complexity Clique). We prove, under very weak complexity assumptions, that any recursive Complexity Clique is trivial, and any r.e. Complexity Clique is an extensional set (and thus satisfies the Rice-Shapiro conditions). This allows us, for instance, to use Rice's argument to prove that the property of having polynomial complexity is not decidable, and to use Rice-Shapiro to conclude that it is not even semi-decidable. We conclude the paper with a discussion of "complexity-theoretic" versions of Kleene's Fixed Point Theorem.
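The core of Rice's argument, which the paper refines with complexity bounds, is a two-line program transformation. In the hedged sketch below, `has_P` stands for a hypothetical decider of a nontrivial extensional property P (with the everywhere-undefined function outside P) and `w` for a hypothetical witness program whose function is in P.

```python
def make_gadget(m, w):
    """Return a program that computes the empty function if m() diverges,
    and exactly w's function if m() halts; so the gadget's function has
    property P iff m() halts."""
    def gadget(x):
        m()            # runs forever when m diverges
        return w(x)    # otherwise: behave exactly like the witness w
    return gadget

# If a decider `has_P` for P existed, then
#     lambda m: has_P(make_gadget(m, w))
# would decide the halting problem, a contradiction. The paper's
# Complexity Cliques refine the same construction to properties that
# constrain running time as well as extensional behavior.
```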

Book ChapterDOI
22 Sep 2008
TL;DR: This work gives the first characterization of distributed tasks that can be computed with weak local termination, presents a new characterization of tasks computable with local termination, and, for both, also characterizes tasks computable by polynomial algorithms.
Abstract: We investigate the computability of distributed tasks in reliable anonymous networks with arbitrary knowledge. More precisely, we consider tasks computable with local termination, i.e., where a node knows when to stop participating in a distributed algorithm, even though the algorithm is not necessarily terminated elsewhere. We also study weak local termination, that is, when a node knows its final value but continues to execute the distributed algorithm, usually in order to provide information to other nodes. We give the first characterization of distributed tasks that can be computed with weak local termination, and we present a new characterization of tasks computable with local termination. For both kinds of termination, we also characterize tasks computable by polynomial algorithms.

Journal ArticleDOI
TL;DR: In this paper, the authors address whether neural networks perform computations in the sense of computability theory and computer science, and defend the following theses: (1) many neural networks compute, that is, they perform computations.

Journal ArticleDOI
TL;DR: A new deductive system, CL12, is introduced and its soundness and completeness with respect to the semantics of CL are proved; it is shown that this system presents a reasonable, computationally meaningful, constructive alternative to classical logic as a basis for applied theories.
Abstract: Computability logic (CL) (see this http URL) is a recently launched program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth that logic has more traditionally been. Formulas in it represent computational problems, "truth" means existence of an algorithmic solution, and proofs encode such solutions. Within the line of research devoted to finding axiomatizations for ever more expressive fragments of CL, the present paper introduces a new deductive system CL12 and proves its soundness and completeness with respect to the semantics of CL. Conservatively extending classical predicate calculus and offering considerable additional expressive and deductive power, CL12 presents a reasonable, computationally meaningful, constructive alternative to classical logic as a basis for applied theories. To obtain a model example of such theories, this paper rebuilds the traditional, classical-logic-based Peano arithmetic into a computability-logic-based counterpart. Among the purposes of the present contribution is to provide a starting point for what, as the author wishes to hope, might become a new line of research with a potential of interesting findings -- an exploration of the presumably quite unusual metatheory of CL-based arithmetic and other CL-based applied systems.

Journal ArticleDOI
TL;DR: A new representation for closed sets, called the component cover representation and based on topological index theory, is given, under which the robust zero set of a function is computable.

Proceedings Article
01 Jan 2008
TL;DR: I address whether neural networks perform computations in the sense of computability theory and computer science, and defend the following theses.
Abstract: I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
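Thesis (2) is easy to make concrete: threshold ("McCulloch-Pitts") units suffice to build the Boolean gates from which digital computers are assembled. A minimal sketch (illustrative, not from the paper):

```python
def neuron(weights, bias):
    """A McCulloch-Pitts threshold unit: fires (1) iff the weighted
    sum of its 0/1 inputs meets the bias threshold."""
    return lambda *xs: int(sum(w * x for w, x in zip(weights, xs)) >= bias)

AND = neuron([1, 1], 2)
OR  = neuron([1, 1], 1)
NOT = neuron([-1], 0)

def XOR(a, b):          # a two-layer network of threshold units
    return AND(OR(a, b), NOT(AND(a, b)))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```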

Journal ArticleDOI
TL;DR: It is shown that if the solution is unique, then it is computable, even if the right-hand side is not locally Lipschitz, and it is proved that the maximal interval of existence for the solution must be an effectively enumerable open set.
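The uniqueness hypothesis is essential, as the classical textbook example below shows: a continuous but non-Lipschitz right-hand side can admit many solutions, in which case there is no single "the solution" to compute.

```latex
% A continuous, non-Lipschitz right-hand side with non-unique solutions:
\[
  \dot{x} = 3\,x^{2/3}, \qquad x(0) = 0,
\]
% is solved, for t >= 0, both by
\[
  x(t) \equiv 0 \qquad \text{and} \qquad x(t) = t^{3},
\]
% so the theorem's conclusion (computability of the solution) is
% available only under the uniqueness hypothesis.
```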

Proceedings ArticleDOI
15 Sep 2008
TL;DR: This work compares the performance of BMC and BSC over a set of case studies, using the Zot tool to translate automata and temporal logic formulae into boolean logic, and proposes a method to check whether an operational model is a correct implementation of a temporal logic model, and assess its effectiveness.
Abstract: In bounded model checking (BMC) a system is modeled with a finite automaton and the various desired properties with temporal logic formulae. Property verification is achieved by translation into boolean logic and the application of SAT solvers. Bounded satisfiability checking (BSC) adopts a similar approach, but both the system and the properties are modeled with temporal logic formulae, without an underlying operational model. Hence, BSC supports a higher-level, descriptive approach to system specification and analysis. We compare the performance of BMC and BSC over a set of case studies, using the Zot tool to translate automata and temporal logic formulae into boolean logic. We also propose a method to check whether an operational model is a correct implementation (refinement) of a temporal logic model, and assess its effectiveness on the same set of case studies. Our experimental results show the feasibility of BSC and refinement checking, with modest performance loss w.r.t. BMC.
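The BMC recipe (unroll the transition relation k steps, assert a property violation, hand the formula to a solver) fits in a dozen lines. The sketch below uses the z3-solver Python package rather than the paper's Zot tool, and the 2-bit counter and its "bad state" are invented for illustration.

```python
# pip install z3-solver
from z3 import Bool, Solver, And, Or, Not, Xor, sat

K = 4                                  # unrolling bound
b0 = [Bool(f"b0_{i}") for i in range(K + 1)]
b1 = [Bool(f"b1_{i}") for i in range(K + 1)]

s = Solver()
s.add(Not(b0[0]), Not(b1[0]))          # initial state 00
for i in range(K):                     # transition: increment modulo 4
    s.add(b0[i + 1] == Not(b0[i]),
          b1[i + 1] == Xor(b1[i], b0[i]))
s.add(Or(*(And(b0[i], b1[i]) for i in range(K + 1))))  # reach state 11?

if s.check() == sat:                   # sat = counterexample within K steps
    print(s.model())
```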

Proceedings ArticleDOI
28 May 2008
TL;DR: The techniques underlying SMT are overviewed; it is shown how to represent dynamic systems in fragments of first order logic, and the application of SMT solvers to their verification is discussed.
Abstract: Many systems can be naturally represented in some decidable fragments of first order logic. The expressive power provided by a background theory makes it possible to describe important aspects such as real time, continuous dynamics, and data flow over integer variables. The corresponding verification problems can be tackled by means of Satisfiability Modulo Theory (SMT) solvers. SMT solvers are based on the tight integration of propositional SAT solvers with dedicated procedures that reason about the theory component. In this paper, we overview the techniques underlying SMT, show how to represent dynamic systems in fragments of first order logic, and discuss the application of SMT solvers to their verification.
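In the same vein, data flow over integer variables becomes a satisfiability query modulo linear integer arithmetic. A sketch, again assuming the z3-solver package, with an invented system and invariant: an inductiveness check that asks the solver for a violating step and expects unsat.

```python
from z3 import Ints, Solver, Not, unsat

x, y, x1 = Ints("x y x1")              # pre-state, input, post-state
s = Solver()
s.add(x >= 0, y >= 0, x1 == x + y)     # one step from an invariant state
s.add(Not(x1 >= 0))                    # ...that breaks the invariant x >= 0?
print(s.check() == unsat)              # True: the invariant is inductive
```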

Book ChapterDOI
01 Jan 2008
TL;DR: The theory of computable numberings is one of the main parts of the theory of numberings; the general notion of a computable numbering was proposed in 1954 by A.N. Kolmogorov and V.A. Uspensky.
Abstract: The theory of computable numberings is one of the main parts of the theory of numberings. The papers of H. Rogers [36] and R. Friedberg [21] are the starting points in the systematic investigation of computable numberings. The general notion of a computable numbering was proposed in 1954 by A.N. Kolmogorov and V.A. Uspensky (see [40, p. 398]), and the monograph of Uspensky [41] was the first textbook that contained several basic results of the theory of computable numberings. The theory was developed further by many authors; the most important contributions to it and its applications were made by A.I. Mal'tsev, Yu.L. Ershov, A. Lachlan, S.S. Goncharov, S.A. Badaev, A.B. Khutoretskii, V.L. Selivanov, M. Kummer, M.B. Pour-El, I.A. Lavrov, S.D. Denisov, and many others.

Journal ArticleDOI
Arnon Avron1
TL;DR: A unified framework is developed for dealing with constructibility and absoluteness in set theory, decidability of relations in effective structures (like the natural numbers), and domain independence of queries in database theory.

Book ChapterDOI
01 Mar 2008
TL;DR: This paper addresses the problem of providing a foundation for the EGC mode of computation with a reworking of van der Waerden's idea of explicit rings and fields; it develops the approximability of real functions within standard Turing machine computability and shows its connection to the analytic approach.
Abstract: The Exact Geometric Computation (EGC) mode of computation has been developed over the last decade in response to the widespread problem of numerical non-robustness in geometric algorithms. Its technology has been encoded in libraries such as LEDA, CGAL and Core Library. The key feature of EGC is the necessity to decide zero in its computation. This paper addresses the problem of providing a foundation for the EGC mode of computation. This requires a theory of real computation that properly addresses the Zero Problem. The two current approaches to real computation are represented by the analytic school and the algebraic school. We propose a variant of the analytic approach based on real approximation. To capture the issues of representation, we begin with a reworking of van der Waerden's idea of explicit rings and fields. We introduce explicit sets and explicit algebraic structures. Explicit rings serve as the foundation for real approximation: our starting point here is not $\mathbb{R}$, but $\mathbb{F} \subseteq \mathbb{R}$, an explicit ordered ring extension of $\mathbb{Z}$ that is dense in $\mathbb{R}$. We develop the approximability of real functions within standard Turing machine computability, and show its connection to the analytic approach. Current discussions of real computation fail to address issues at the intersection of continuous and discrete computation. An appropriate computational model for this purpose is obtained by extending Schönhage's pointer machines to support both algebraic and numerical computation. Finally, we propose a synthesis wherein both the algebraic and the analytic models coexist to play complementary roles. Many fundamental questions can now be posed in this setting, including transfer theorems connecting algebraic computability with approximability.
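The "decide zero" step at the heart of EGC can be sketched as precision escalation against a separation bound: evaluate the expression in ever-tighter intervals and stop once the interval either excludes zero or becomes narrower than any nonzero value the expression class permits. The bound below is made up for the demo; real EGC libraries derive such bounds from root bounds on algebraic expressions.

```python
from fractions import Fraction

def sign(eval_interval, sep):
    """Escalate precision until the interval [lo, hi] around the value
    excludes 0, or is narrower than the separation bound sep (then the
    value, being closer to 0 than any nonzero one could be, is 0)."""
    n = 1
    while True:
        lo, hi = eval_interval(n)
        if lo > 0:
            return 1
        if hi < 0:
            return -1
        if hi - lo < sep:
            return 0
        n += 1

def ev(n):  # 1/3 + 1/3 - 2/3, each term rounded to n fractional bits
    scale = 2 ** n
    rnd = lambda q: Fraction(round(q * scale), scale)
    v = rnd(Fraction(1, 3)) + rnd(Fraction(1, 3)) + rnd(Fraction(-2, 3))
    err = Fraction(3, 2 * scale)        # three roundings, each <= 1/2**(n+1)
    return v - err, v + err

print(sign(ev, Fraction(1, 10**6)))     # 0: the expression is exactly zero
```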

Journal Article
TL;DR: In this article, the authors define and compare several probabilistically weakened notions of computability for mappings from represented spaces (that are equipped with a measure or outer measure) into effective metric spaces.
Abstract: We define and compare several probabilistically weakened notions of computability for mappings from represented spaces (that are equipped with a measure or outer measure) into effective metric spaces. We thereby generalize definitions by Ko [Ko, K.-I., "Complexity Theory of Real Functions," Birkhäuser, Boston, 1991] and Parker [Parker, M.W., Undecidability in R^n: Riddled basins, the KAM tori, and the stability of the solar system, Philosophy of Science 70 (2003), pp. 359-382; Parker, M.W., Three concepts of decidability for general subsets of uncountable spaces, Theoretical Computer Science 351 (2006), pp. 2-13], and furthermore introduce the new notion of computability in the mean. Some results employ a notion of computable measure that originates in definitions by Weihrauch [Weihrauch, K., Computability on the probability measures on the Borel sets of the unit interval, Theoretical Computer Science 219 (1999), pp. 421-437] and Schröder [Schröder, M., Admissible representations of probability measures, Electronic Notes in Theoretical Computer Science 167 (2007), pp. 61-78]. In the spirit of the well-known Representation Theorem, we establish dependencies between the weakened computability notions and classical properties of mappings. We finally present some positive results on the computability of vector-valued integration on metric spaces, and discuss certain measurability issues arising in connection with our definitions.