Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published within this topic, receiving 44,972 citations.


Papers
Journal ArticleDOI
TL;DR: This paper characterizes the well-known computational complexity classes of the polynomial time hierarchy as classes of provably recursive functions of some second-order theories with weak comprehension axiom schemas but without any induction schemas, and finds a natural relationship between these theories and the theories of bounded arithmetic S_2.
Abstract: In this paper we characterize the well-known computational complexity classes of the polynomial time hierarchy as classes of provably recursive functions (with graphs of suitable bounded complexity) of some second order theories with weak comprehension axiom schemas but without any induction schemas (Theorem 6). We also find a natural relationship between our theories and the theories of bounded arithmetic (Lemmas 4 and 5). Our proofs use a technique which enables us to “speed up” induction without increasing the bounded complexity of the induction formulas. This technique is also used to obtain an interpretability result for the theories of bounded arithmetic (Theorem 4).
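For background: the relationship to bounded arithmetic mentioned in the abstract builds on the classical correspondence between the theories S^i_2 and the polynomial time hierarchy (Buss's witnessing theorem). A minimal LaTeX statement of that standard result, given here only as context and not as the paper's Theorem 6:

    % Buss's witnessing theorem (standard background, for i >= 1):
    % the Sigma^b_i-definable functions of S^i_2 are exactly the functions
    % computable in polynomial time with an oracle from level i-1 of the
    % polynomial hierarchy.
    \[
      f \text{ is } \Sigma^b_i\text{-definable in } S^i_2
      \quad\Longleftrightarrow\quad
      f \in \mathrm{FP}^{\Sigma^p_{i-1}}.
    \]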

5 citations

Book ChapterDOI
08 Oct 2005
TL;DR: The future loss when predicting any (computably) stochastic sequence online is bounded by a new variant of algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x.
Abstract: We bound the future loss when predicting any (computably) stochastic sequence online. Solomonoff finitely bounded the total deviation of his universal predictor M from the true distribution μ by the algorithmic complexity of μ. Here we assume we are at a time t > 1 and have already observed x = x_1 ... x_t. We bound the future prediction performance on x_{t+1} x_{t+2} ... by a new variant of algorithmic complexity of μ given x, plus the complexity of the randomness deficiency of x. The new complexity is monotone in its condition in the sense that this complexity can only decrease if the condition is prolonged. We also briefly discuss potential generalizations to Bayesian model classes and to classification problems.
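For reference, the total (rather than future) bound of Solomonoff referred to in the abstract can be stated, for a binary alphabet and a computable measure μ, roughly as follows (constants as in standard treatments such as Hutter's; this is background, not the paper's new bound):

    % Solomonoff's total prediction bound (standard form, binary alphabet;
    % K(mu) is the prefix complexity of the computable measure mu):
    \[
      \sum_{t=1}^{\infty} \mathbf{E}_{\mu}\!\left[
        \bigl( M(0 \mid x_{<t}) - \mu(0 \mid x_{<t}) \bigr)^{2}
      \right]
      \;\le\; \tfrac{\ln 2}{2}\, K(\mu).
    \]

The paper's contribution is the analogous bound on the loss incurred after a prefix x = x_1 ... x_t has already been observed, in terms of a new monotone conditional complexity of μ given x plus the complexity of x's randomness deficiency.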

5 citations

Book ChapterDOI
Chung-Chih Li
01 Jan 2004
TL;DR: The notion of “small sets” is altered from “finiteness” to topological “compactness” for type-2 complexity theory, and it is shown that explicit type-2 complexity classes can be defined in terms of resource bounds and are recursively representable.
Abstract: We propose an alternative notion of asymptotic behavior for the study of type-2 computational complexity. Since the classical asymptotic notion (for all but finitely many) is not acceptable in the type-2 context, we alter the notion of “small sets” from “finiteness” to topological “compactness” for type-2 complexity theory. A natural reference for type-2 computations is the standard Baire topology. However, we point out some serious drawbacks of this topology and introduce an alternative topology for describing compact sets. Following our notion, explicit type-2 complexity classes can be defined in terms of resource bounds. We show that such complexity classes are recursively representable; namely, every complexity class has a programming system. We also prove type-2 analogs of Rabin’s Theorem, the Recursive Relatedness Theorem, and the Gap Theorem to provide evidence that our notion of type-2 asymptotics is workable. We speculate that our investigation will give rise to a possible approach to examining the complexity structure at type 2 along the lines of classical complexity theory.
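To make the definitional shift concrete, here is a schematic paraphrase (ours, not the paper's exact definitions). Classically, an asymptotic statement is allowed a finite set of exceptions:

    \[
      f \in O(g) \iff \exists c > 0 :\
      \{\, n \in \mathbb{N} : f(n) > c\, g(n) \,\} \text{ is finite.}
    \]

At type 2, where inputs are themselves functions, the proposal is to allow instead an exception set that is compact in a suitable topology on the input space; the paper argues that the standard Baire topology is inadequate for this purpose and introduces an alternative.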

5 citations

Journal ArticleDOI
TL;DR: This paper presents a genetic algorithm for compressing a graph by finding the most compact description of its structure, and studies how the compressed size of a problem instance correlates with the runtime of an exact algorithm for two hard combinatorial problems.
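The TL;DR does not specify a representation, so the following is a minimal, hypothetical Python sketch of the general idea only: a genetic algorithm searching over node relabelings to minimize the zlib-compressed size of a graph's serialized adjacency lists. The encoding, fitness function, and genetic operators here are illustrative assumptions, not the paper's method.

    import random
    import zlib

    def random_graph(n, p, seed=0):
        """Erdos-Renyi style random graph, returned as (n, edge list)."""
        rng = random.Random(seed)
        edges = [(u, v) for u in range(n) for v in range(u + 1, n) if rng.random() < p]
        return n, edges

    def compressed_size(n, edges, order):
        """Fitness: zlib-compressed size of the adjacency lists under a node relabeling."""
        pos = {node: i for i, node in enumerate(order)}
        adj = [[] for _ in range(n)]
        for u, v in edges:
            a, b = pos[u], pos[v]
            adj[min(a, b)].append(max(a, b))
        text = "\n".join(",".join(map(str, sorted(row))) for row in adj)
        return len(zlib.compress(text.encode()))

    def mutate(order, rng):
        """Swap two randomly chosen positions of the relabeling."""
        child = order[:]
        i, j = rng.randrange(len(child)), rng.randrange(len(child))
        child[i], child[j] = child[j], child[i]
        return child

    def crossover(a, b, rng):
        """Order crossover: keep a prefix of parent a, fill the rest in parent b's order."""
        cut = rng.randrange(1, len(a))
        head = a[:cut]
        head_set = set(head)
        return head + [x for x in b if x not in head_set]

    def ga_compress(n, edges, pop_size=30, generations=200, seed=1):
        """Evolve node relabelings that minimize the compressed description size."""
        rng = random.Random(seed)
        fitness = lambda order: compressed_size(n, edges, order)
        pop = [rng.sample(range(n), n) for _ in range(pop_size)]
        best = min(pop, key=fitness)
        for _ in range(generations):
            pop.sort(key=fitness)
            if fitness(pop[0]) < fitness(best):
                best = pop[0]
            survivors = pop[: pop_size // 2]
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                children.append(mutate(crossover(a, b, rng), rng))
            pop = survivors + children
        return best, fitness(best)

    if __name__ == "__main__":
        n, edges = random_graph(40, 0.1)
        order, size = ga_compress(n, edges)
        print("compressed size under best relabeling:", size, "bytes")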

5 citations

Book ChapterDOI
02 Mar 1995
TL;DR: This work studies a variation on classical key-agreement and consensus problems in which the key space S is the range of a random variable that can be sampled, and shows that agreement is possible with zero communication if every fully polynomial-time randomized approximation scheme (fpras) has a certain symmetry-breaking property.
Abstract: We study a variation on classical key-agreement and consensus problems in which the key space S is the range of a random variable that can be sampled. We give tight upper and lower bounds of ⌈log₂ k⌉ bits on the communication complexity of agreement on some key in S, using a form of Sperner's Lemma, and give bounds on other problems. In the case where keys are generated by a probabilistic polynomial-time Turing machine, we show that agreement is possible with zero communication if every fully polynomial-time randomized approximation scheme (fpras) has a certain symmetry-breaking property.
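As a toy illustration of the flavor of the upper bound (assuming, purely for this example, that k is the size of the key space S and that both parties know S and an ordering of it in advance), one party can simply sample a key and send its index, costing ⌈log₂ k⌉ bits. The Python sketch below shows only this trivial one-way protocol; it is not the paper's protocol, its Sperner-based lower-bound argument, or the zero-communication result.

    import math
    import random

    def agree_on_key(key_space, seed=0):
        """Toy one-way protocol: Alice samples a key and sends its index to Bob.
        Assumes both parties know the key space and an agreed ordering of it."""
        rng = random.Random(seed)
        keys = sorted(key_space)          # shared, agreed-upon ordering of S
        alice_key = rng.choice(keys)      # Alice's sample
        index = keys.index(alice_key)     # the message Alice sends
        bits_sent = math.ceil(math.log2(len(keys))) if len(keys) > 1 else 0
        bob_key = keys[index]             # Bob decodes the same key
        assert bob_key == alice_key
        return bob_key, bits_sent

    if __name__ == "__main__":
        key, bits = agree_on_key({"k1", "k2", "k3", "k4", "k5"})
        print(f"agreed on {key} using {bits} bits of communication")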

5 citations


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32