Topic

Average-case complexity

About: Average-case complexity is a research topic. Over the lifetime of the topic, 1,749 publications have been published, receiving 44,972 citations.


Papers
Posted Content
TL;DR: This work considers the problem of counting k-cliques in s-uniform Erdős-Rényi hypergraphs G(n, c, s) with edge density c and proves that its fine-grained average-case complexity can be based on its worst-case complexity.
Abstract: We consider the problem of counting $k$-cliques in $s$-uniform Erdős-Rényi hypergraphs $G(n,c,s)$ with edge density $c$, and show that its fine-grained average-case complexity can be based on its worst-case complexity. We prove the following:
1. Dense Erdős-Rényi graphs and hypergraphs: Counting $k$-cliques on $G(n,c,s)$ with $k$ and $c$ constant matches its worst-case time complexity up to a $\mathrm{polylog}(n)$ factor. Assuming randomized ETH, it takes $n^{\Omega(k)}$ time to count $k$-cliques in $G(n,c,s)$ if $k$ and $c$ are constant.
2. Sparse Erdős-Rényi graphs and hypergraphs: When $c = \Theta(n^{-\alpha})$, we give several algorithms exploiting the sparsity of $G(n, c, s)$ that are faster than the best known worst-case algorithms. Complementing this, based on a fine-grained worst-case assumption, our results imply a different average-case phase diagram for each fixed $\alpha$ depicting a tradeoff between a runtime lower bound and $k$. Surprisingly, in the hypergraph case ($s \ge 3$), these lower bounds are tight against our algorithms exactly when $c$ is above the Erdős-Rényi $k$-clique percolation threshold.
This is the first worst-case-to-average-case hardness reduction for a problem on Erdős-Rényi hypergraphs that we are aware of. We also give a variant of our result for computing the parity of the $k$-clique count that tolerates higher error probability.
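To make the object of study concrete, here is a minimal brute-force sketch (ours, not the paper's) for the graph case $s = 2$: sample $G(n, c)$ and count $k$-cliques by exhaustive enumeration over all $\binom{n}{k}$ vertex subsets, the trivial $n^{O(k)}$ upper bound that the paper's reductions show is essentially optimal on random instances for constant $k$ and $c$.

```python
# A minimal sketch (not from the paper): counting k-cliques in an
# Erdős-Rényi graph G(n, c), the s = 2 case, by brute force.
import itertools
import random

def sample_gnc(n, c, seed=0):
    """Sample the edge set of an Erdős-Rényi graph G(n, c)."""
    rng = random.Random(seed)
    return {frozenset(e) for e in itertools.combinations(range(n), 2)
            if rng.random() < c}

def count_k_cliques(n, edges, k):
    """Count k-cliques by testing all C(n, k) vertex subsets: n^{O(k)} time."""
    return sum(
        all(frozenset(pair) in edges
            for pair in itertools.combinations(subset, 2))
        for subset in itertools.combinations(range(n), k)
    )

edges = sample_gnc(n=30, c=0.5)
print(count_k_cliques(30, edges, k=4))
```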

2 citations

Book Chapter
01 Jan 2011
TL;DR: This survey presents the basic aspects of Levin's theory of average-case complexity as well as some of its main results, including Livne's result that all natural NP-complete problems have average-case complete versions, a result that seems to cast doubt on the association of P-computable distributions with natural distributions.
Abstract: More than two decades have elapsed since Levin set forth a theory of average-case complexity. In this survey we present the basic aspects of this theory as well as some of the main results regarding it. The current presentation deviates from our old "Notes on Levin's Theory of Average-Case Complexity" (ECCC, TR97-058, 1997) in several aspects. In particular:
- We currently view average-case complexity as referring to the performance on "average" (or rather typical) instances, and not as the average performance on random instances. (Thus, it may be more justified to refer to this theory by the name typical-case complexity, but we retain the name average-case for historical reasons.)
- We include a treatment of search problems, and a presentation of the reduction of "NP with sampleable distributions" to "NP with P-computable distributions" (due to Impagliazzo and Levin, 31st FOCS, 1990).
- We include Livne's result (ECCC, TR06-122, 2006) by which all natural NPC-problems have average-case complete versions. This result seems to shed doubt on the association of P-computable distributions with natural distributions.
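The central object of Levin's theory is the distributional problem: a decision problem paired with a distribution on instances, where the distribution is required to be P-computable or, more liberally, P-samplable. A minimal sketch of that pairing, with illustrative names and a random 3-CNF sampler that are ours rather than the survey's:

```python
# A sketch of Levin's central object, the distributional problem: a decision
# problem paired with a polynomial-time sampler of instances. The class and
# the sampler below are illustrative, not from the survey.
import random
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DistributionalProblem:
    decide: Callable[[Any], bool]                # the decision problem
    sample: Callable[[int, random.Random], Any]  # P-samplable input distribution

def sample_random_3cnf(n, rng):
    """Sample a random 3-CNF over n variables with 4n clauses (DIMACS-style literals)."""
    return [tuple(rng.choice([-1, 1]) * rng.randint(1, n) for _ in range(3))
            for _ in range(4 * n)]

def brute_force_sat(cnf):
    """Exponential-time decider; the theory asks when a decider that is fast
    on typical sampled instances exists instead."""
    n = max(abs(lit) for clause in cnf for lit in clause)
    return any(all(any((lit > 0) == bool((m >> (abs(lit) - 1)) & 1)
                       for lit in clause)
                   for clause in cnf)
               for m in range(1 << n))

dist_3sat = DistributionalProblem(decide=brute_force_sat, sample=sample_random_3cnf)
print(dist_3sat.decide(dist_3sat.sample(8, random.Random(0))))
```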

2 citations

Book Chapter
19 Jun 2011
TL;DR: It is shown that, assuming the exponential time hypothesis (ETH), the certificate complexity of k-SAT increases infinitely often as k grows, and that if CircuitSAT has subexponential-time verifiable certificates of length cn for some constant c < 1, then an unlikely collapse happens.
Abstract: It is common to classify satisfiability problems by their time complexity. We consider another complexity measure, namely the length of certificates (witnesses). Our results show that there is a similarity between these two types of complexity if we deal with certificates verifiable in subexponential time. In particular, the well-known result by Impagliazzo and Paturi [IP01] on the dependence of the time complexity of k-SAT on k has its counterpart for the certificate complexity: we show that, assuming the exponential time hypothesis (ETH), the certificate complexity of k-SAT increases infinitely often as k grows. Another example of time-complexity results that can be translated into the certificate-complexity setting is the result of [CIP06] on the relationship between the complexity of k-SAT and the complexity of SAT restricted to formulas of constant clause density. We also consider the certificate complexity of CircuitSAT and observe that if CircuitSAT has subexponential-time verifiable certificates of length cn, where c < 1 is a constant and n is the number of inputs, then an unlikely collapse happens (in particular, ETH fails).
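For orientation, the standard certificate for k-SAT is the satisfying assignment itself: n bits, checkable in time linear in the formula size. The sketch below (ours, not the paper's) shows that trivial verifier; the paper's question is how far below n the certificate length can be pushed once the verifier is allowed subexponential rather than polynomial time.

```python
# The trivial length-n certificate for k-SAT: the satisfying assignment,
# verified in linear time. Illustrative sketch, not from the paper.
def verify_sat_certificate(cnf, assignment):
    """Check a satisfying-assignment certificate against a CNF formula.

    Literals use DIMACS conventions: the integer v means variable v is true,
    -v means it is false; assignment[v - 1] is the claimed value of variable v.
    """
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in cnf
    )

# (x1 or not x2) and (x2 or x3), certified by x1 = x2 = x3 = True
print(verify_sat_certificate([(1, -2), (2, 3)], [True, True, True]))  # True
```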

2 citations

01 Jan 2006
TL;DR: A main abutment can be moved along the one part to increase the spacing between the ends and thereby can straighten the succession of vertebrae between the connectors.
Abstract: A device for straightening a spinal column having a succession of vertebrae extending along a nonstraight line lying generally in a plane has an elongated bar lying generally in the plane of the line and having a pair of relatively longitudinally displaceable bar parts in turn having respective bar ends. A pair of connectors of the bar ends are secured to respective vertebrae of the succession. Respective pivots between the connectors and the respective ends define therebetween respective generally parallel axes transverse to the plane. A main abutment movable along and flexible on one of the parts and longitudinally engageable with the other of the parts serves for limiting relative longitudinal displacement of the parts toward each other. Thus the main abutment can be moved along the one part to increase the spacing between the ends and thereby can straighten the succession of vertebrae between the connectors.

2 citations

01 Jan 2007
TL;DR: This paper proposes a concrete definition of Kolmogorov complexity that is (arguably) as simple as possible, by defining a machine model based on the elegantly minimal Combinatory Logic and exhibiting a universal machine.
Abstract: Intuitively, the amount of information in a string is the size of the shortest program that outputs the string. The first billion digits of π, for example, contain very little information, since they can be calculated by a C program of a few lines only. Although information content seems to be highly dependent on the choice of programming language, the notion is actually invariant up to an additive constant. The theory of program size complexity, which has become known as Kolmogorov complexity after one of its founding fathers, has found fruitful application in many fields such as combinatorics, algorithm analysis, machine learning, machine models, and logic. In this paper we propose a concrete definition of Kolmogorov complexity that is (arguably) as simple as possible, by defining a machine model based on the elegantly minimal Combinatory Logic, and exhibiting a universal machine.
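For readers unfamiliar with the model, Combinatory Logic needs only two primitives, K and S, with the rewrite rules K x y → x and S x y z → x z (y z). The reducer below is a small illustrative sketch, not the paper's universal machine or its program encoding.

```python
# A tiny combinatory-logic reducer (illustrative, not the paper's machine).
# Terms are either the atoms 'K' and 'S' (or free variables) or nested
# application pairs (f, a).

def step(term):
    """One leftmost (head) reduction step, or None if no head redex exists."""
    # Unwind the application spine to find the head and its argument list.
    spine = []
    while isinstance(term, tuple):
        spine.append(term[1])
        term = term[0]
    args = spine[::-1]
    if term == 'K' and len(args) >= 2:      # K x y -> x
        reduced, rest = args[0], args[2:]
    elif term == 'S' and len(args) >= 3:    # S x y z -> x z (y z)
        x, y, z = args[0], args[1], args[2]
        reduced, rest = ((x, z), (y, z)), args[3:]
    else:
        return None
    for a in rest:                          # reapply any leftover arguments
        reduced = (reduced, a)
    return reduced

def normalize(term, max_steps=1000):
    """Reduce the head until no redex remains (bounded: CL terms may diverge)."""
    for _ in range(max_steps):
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError("no normal form within step budget")

# S K K behaves as the identity: S K K v -> K v (K v) -> v
I = (('S', 'K'), 'K')
print(normalize((I, 'v')))  # 'v'
```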

2 citations


Network Information
Related Topics (5)
- Time complexity: 36K papers, 879.5K citations (89% related)
- Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
- Data structure: 28.1K papers, 608.6K citations (83% related)
- Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
- Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years:

Year | Papers
2022 | 2
2021 | 6
2020 | 10
2019 | 9
2018 | 10
2017 | 32