Open Access · Posted Content

Uniform test of algorithmic randomness over a general space

TLDR
In this article, a general framework for the analysis of algorithmic randomness over arbitrary (possibly non-compact) spaces and measures is introduced, extending the classical theory, which is restricted to the set of finite or infinite sequences under the uniform distribution or a computable distribution.
Abstract
The algorithmic theory of randomness is well developed when the underlying space is the set of finite or infinite sequences and the underlying probability distribution is the uniform distribution or a computable distribution. These restrictions seem artificial. Some progress has been made to extend the theory to arbitrary Bernoulli distributions (by Martin-Löf) and to arbitrary distributions (by Levin). We recall the main ideas and problems of Levin's theory, and report further progress in the same framework.
- We allow non-compact spaces (like the space of continuous functions, underlying Brownian motion).
- The uniform test (deficiency of randomness) d_P(x) (depending both on the outcome x and the measure P) should be defined in a general and natural way.
- We see which of the old results survive: existence of universal tests, conservation of randomness, expression of tests in terms of description complexity, existence of a universal measure, and expression of mutual information as "deficiency of independence".
- The negative of the new randomness test is shown to be a generalization of complexity in continuous spaces; we show that the addition theorem survives.
The paper's main contribution is introducing an appropriate framework for studying these questions and related ones (like statistics for a general family of distributions).
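In the classical sequence setting, the "deficiency of randomness" mentioned above admits a well-known characterization in terms of description complexity. A sketch in standard notation (the notation here, including the prefix complexity K, is assumed rather than fixed by this abstract):

```latex
% Levin–Gács characterization of the deficiency of randomness of an
% infinite binary sequence x under a computable measure P, up to an
% additive constant (notation assumed, not taken from the abstract):
d_P(x) \;\stackrel{+}{=}\; \sup_n \bigl( -\log P(x_{1:n}) - K(x_{1:n} \mid P) \bigr)
% Here x_{1:n} is the length-n prefix of x and K is prefix complexity;
% x is P-random if and only if d_P(x) < \infty.
```

The paper's generalization replaces the sequence space with an arbitrary (possibly non-compact) constructive topological space while keeping this kind of complexity expression meaningful.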


Citations
Journal ArticleDOI

Hedging Predictions in Machine Learning

TL;DR: This article describes a new technique for 'hedging' the predictions output by many learning algorithms, including support vector machines, kernel ridge regression, kernel nearest neighbours, and many other state-of-the-art methods.
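The 'hedging' technique referred to here is conformal prediction. A minimal sketch of the split-conformal variant, assuming a toy 1-nearest-neighbour regressor and helper names invented for illustration (none of them come from the article itself):

```python
import math

def split_conformal_interval(train, calib, x_new, alpha=0.1):
    """Split conformal prediction: wrap a point predictor in a
    distribution-free prediction interval with coverage >= 1 - alpha.
    train/calib are lists of (x, y) pairs; the 1-nearest-neighbour
    predictor below is an illustrative assumption, not the article's
    method."""
    def predict(x):
        # 1-nearest-neighbour regression on the training split
        _, y = min(train, key=lambda p: abs(p[0] - x))
        return y

    # Nonconformity scores (absolute residuals) on the calibration split
    scores = sorted(abs(y - predict(x)) for x, y in calib)
    n = len(scores)
    # Conservative (1 - alpha) empirical quantile over n + 1 points
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    q = scores[k]
    y_hat = predict(x_new)
    return (y_hat - q, y_hat + q)
```

The interval is valid under exchangeability alone, which is what makes the hedging distribution-free: no model of the noise is assumed, only that calibration and test points are exchangeable.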
Posted Content

Computability of probability measures and Martin-Löf randomness over metric spaces

TL;DR: This paper shows that any computable metric space with a computable probability measure is isomorphic to the Cantor space in a computable and measure-theoretic sense, and admits a universal uniform randomness test.
Book ChapterDOI

Computability and analysis: the legacy of Alan Turing

TL;DR: The theory of probability, which was born in an exchange of letters between Blaise Pascal and Pierre de Fermat in 1654 and developed further by Christiaan Huygens and Jakob Bernoulli, provided methods for calculating odds related to games of chance.
Journal ArticleDOI

Probabilistic computability and choice

TL;DR: This work introduces the concept of a Las Vegas computable multi-valued function, which is a function that can be computed on a probabilistic Turing machine that receives a random binary sequence as auxiliary input and proves an Independent Choice Theorem that implies that Las Vegas Computable functions are closed under composition.
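As a toy illustration of the Las Vegas property described here (the output, when produced, is always correct; only the running time depends on the random auxiliary input) — not the paper's construction:

```python
import random

def las_vegas_find_one(bits, rng=None):
    """Probe random positions until a 1-bit is found.  The returned
    index is always correct (the Las Vegas guarantee); only the number
    of probes is random.  Toy example for illustration only."""
    if 1 not in bits:
        raise ValueError("no 1-bit present; the search would never halt")
    rng = rng or random.Random(0)  # fixed seed keeps the demo reproducible
    while True:
        i = rng.randrange(len(bits))
        if bits[i] == 1:
            return i
```

The contrast with a Monte Carlo procedure is that failure here shows up only as longer running time, never as a wrong answer — the notion the paper formalizes for multi-valued functions on probabilistic Turing machines.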
Proceedings ArticleDOI

Noncomputable Conditional Distributions

TL;DR: It is shown that in general one cannot compute conditional probabilities, so a pair of computable random variables are constructed in the unit interval whose conditional distribution P[Y|X] encodes the halting problem.
References
Book

Convergence of Probability Measures

TL;DR: Develops the theory of weak convergence of probability measures on metric spaces, one of the most common modes of convergence for probability measures and the standard framework for convergence in distribution.
Book

An Introduction to Kolmogorov Complexity and Its Applications

TL;DR: The book presents a thorough treatment of the subject with a wide range of illustrative applications, such as the randomness of finite objects or infinite sequences, Martin-Löf tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing.
Journal ArticleDOI

A Formal Theory of Inductive Inference. Part II

TL;DR: Four ostensibly different theoretical models of induction are presented, in which the problem dealt with is the extrapolation of a very long sequence of symbols—presumably containing all of the information to be used in the induction.
Book

Computable Analysis: An Introduction

TL;DR: This book provides a solid foundation for studying various aspects of computability and complexity in analysis, and is written in a style suitable for graduate-level and senior students in computer science and mathematics.