scispace - formally typeset

Average-case complexity

About: Average-case complexity is a research topic. Over the lifetime, 1749 publications have been published within this topic receiving 44972 citations.


Papers
Journal ArticleDOI
TL;DR: Several results relating time-bounded C, CD, and CND complexity are shown, with applications to a variety of questions in computational complexity theory, including how to approximate the size of a set using CD complexity without the random string needed in Sipser's earlier proof of a similar result.
Abstract: We take a fresh look at CD complexity, where CD^t(x) is the size of the smallest program that distinguishes x from all other strings in time t(|x|). We also look at CND complexity, a new nondeterministic variant of CD complexity, and at time-bounded Kolmogorov complexity, denoted C complexity. We show several results relating time-bounded C, CD, and CND complexity and their applications to a variety of questions in computational complexity theory, including the following:
- Approximating the size of a set using CD complexity without the random string needed in Sipser's earlier proof of a similar result; we also give a new, simpler proof of Sipser's result.
- Improving these bounds for almost all strings, using extractors.
- A proof of the Valiant-Vazirani lemma directly from Sipser's earlier CD lemma.
- A relativized lower bound for CND complexity.
- Exact characterizations of equivalences between C, CD, and CND complexity.
- Showing that the satisfying assignments of a satisfiable Boolean formula can be enumerated in time polynomial in the size of the output if and only if a unique assignment can be found quickly. This answers an open question of Papadimitriou.
- A new Kolmogorov-complexity-based proof that BPP ⊆ Σ₂^p.
- New Kolmogorov-complexity-based constructions of relativized worlds in which: there exists an infinite set in P with no sparse infinite NP subset; EXP = NEXP but there exists a NEXP machine whose accepting paths cannot be found in exponential time; and satisfying assignments cannot be found with nonadaptive queries to SAT.

64 citations

Proceedings ArticleDOI
04 May 1998
TL;DR: A rigorous complexity analysis of the (1+1) evolutionary algorithm for linear functions with Boolean inputs is given; the expected run time of this algorithm is Θ(n ln n) for linear functions with n variables.
Abstract: Evolutionary algorithms (EAs) are heuristic randomized algorithms which, in many impressive experiments, have been shown to behave quite well on optimization problems of various kinds. In this paper, a rigorous complexity analysis of the (1+1) evolutionary algorithm for linear functions with Boolean inputs is given. The analysis is carried out for different mutation rates. The main contribution of the paper is not the result that the expected run time of the (1+1) evolutionary algorithm is Θ(n ln n) for linear functions with n variables, but the presentation of methods showing how this result can be proven rigorously.
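The algorithm analyzed in this paper is short enough to state in code. A minimal Python sketch of the (1+1) EA with standard bit mutation (flip each bit independently with probability 1/n, accept the offspring if it is at least as good); function and parameter names here are illustrative, not taken from the paper:

```python
import random

def one_plus_one_ea(weights, max_iters=100_000, seed=0):
    """(1+1) EA maximizing the linear function f(x) = sum_i w_i * x_i
    over bit strings x in {0,1}^n, with positive weights w_i."""
    rng = random.Random(seed)
    n = len(weights)
    optimum = sum(weights)  # for positive weights, the all-ones string

    def f(x):
        return sum(w for w, bit in zip(weights, x) if bit)

    x = [rng.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        # Mutate: flip each bit independently with probability 1/n.
        y = [bit ^ (rng.random() < 1 / n) for bit in x]
        fy = f(y)
        if fy >= fx:  # accept if the offspring is at least as good
            x, fx = y, fy
        if fx == optimum:
            return x, fx, t  # iterations until the optimum was hit
    return x, fx, max_iters
```

For positive weights the unique optimum is the all-ones string, and the paper's result says the expected number of iterations until it is reached is Θ(n ln n); running the sketch on small n and averaging the returned iteration count over many seeds illustrates that growth rate.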

64 citations

Journal ArticleDOI
TL;DR: It is shown that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case).
Abstract: We prove that if NP ⊄ BPP, i.e., if SAT is worst-case hard, then for every probabilistic polynomial-time algorithm trying to decide SAT, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption for one-way functions and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma, which may be of independent interest: given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulae (of increasing lengths) such that the algorithm errs on at least one of them.

63 citations

Proceedings ArticleDOI
08 Jun 2011
TL;DR: Lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness are shown, along with a general method to 'transfer' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method.
Abstract: We show several results related to interactive proof modes of communication complexity. First we show lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness. We describe a general method to prove lower bounds for QMA-communication complexity, and show how one can 'transfer' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method. Combining a result by Vereshchagin and the pattern matrix method, we find a partial function with AM-communication complexity O(log n), PP-communication complexity Ω(n^{1/3}), and QMA-communication complexity Ω(n^{1/6}). Hence in the world of communication complexity, noninteractive quantum proof systems are not able to efficiently simulate co-nondeterminism or interaction. These results imply that the related questions in Turing machine complexity theory cannot be resolved by 'algebrizing' techniques. Finally, we show that in MA-protocols there is an exponential gap between one-way protocols and two-way protocols for a partial function (this refers to the interaction between Alice and Bob). This is in contrast to nondeterministic, AM-, and QMA-protocols, where one-way communication is essentially optimal.

63 citations

Book ChapterDOI
01 Jan 2013
TL;DR: This chapter surveys the main concepts and results of computational complexity theory, starting with basic concepts such as time and space complexity, the complexity classes P and NP, and Boolean circuits, and then covering two related ways to make computations more efficient.
Abstract: This chapter provides a survey of the main concepts and results of the computational complexity theory. We start with basic concepts such as time and space complexities, complexity classes P and NP, and Boolean circuits. We discuss how the use of randomness and interaction enables one to compute more efficiently. In this context we also mention some basic concepts from theoretical cryptography. We then discuss two related ways to make computations more efficient: the use of processors working in parallel and the use of quantum circuits. Among other things, we explain Shor’s quantum algorithm for factoring integers. The topic of the last section is important for the foundations of mathematics, although it is less related to computational complexity; it is about algorithmic complexity of finite strings of bits.

62 citations


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32