
Showing papers on "Average-case complexity published in 2004"


Journal ArticleDOI
TL;DR: This paper addresses the issue of proving strong direct product assertions, that is, ones in which s' ≈ ks and is in particular larger than s, for decision trees and communication protocols.
Abstract: A fundamental question of complexity theory is the direct product question. A famous example is Yao's XOR-lemma, in which one assumes that some function f is hard on average for small circuits (meaning that every circuit of some fixed size s which attempts to compute f is wrong on a non-negligible fraction of the inputs) and concludes that every circuit of size s' only has a small advantage over guessing randomly when computing f^{⊕k}(x_1,...,x_k) = f(x_1) ⊕ ... ⊕ f(x_k) on independently chosen x_1,...,x_k. All known proofs of this lemma have the property that s' < s. In words, the circuit which attempts to compute f^{⊕k} is smaller than the circuit which attempts to compute f on a single input! This paper addresses the issue of proving strong direct product assertions, that is, ones in which s' ≈ ks and is in particular larger than s. We study the question of proving strong direct product assertions for decision trees and communication protocols.
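To make the k-fold XOR construction concrete, here is a minimal sketch; the function f and its inputs are toy placeholders, not anything from the paper.

```python
from functools import reduce
from operator import xor

def f_xor_k(f, xs):
    """k-fold XOR of f: f_xor_k(x1,...,xk) = f(x1) ^ ... ^ f(xk)."""
    return reduce(xor, (f(x) for x in xs))

# Toy example: f is the parity of a bit-string (a stand-in for a hard Boolean function).
f = lambda x: sum(x) % 2
print(f_xor_k(f, [[1, 0, 1], [0, 0, 1], [1, 1, 1]]))  # XOR of three independent evaluations
```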

93 citations


Journal Article
TL;DR: Some lower bounds on the complexity of explicitly given graphs are proved, which yields some new lower bounds for boolean functions, as well as new proofs of some known lower bounds in the graph-theoretic framework.
Abstract: By the complexity of a graph we mean the minimum number of union and intersection operations needed to obtain the whole set of its edges starting from stars. This measure of graphs is related to the circuit complexity of boolean functions. We prove some lower bounds on the complexity of explicitly given graphs. This yields some new lower bounds for boolean functions, as well as new proofs of some known lower bounds in the graph-theoretic framework. We also formulate several combinatorial problems whose solution would have intriguing consequences in computational complexity.

61 citations


Book ChapterDOI
17 Aug 2004
TL;DR: In this paper, a depth-first search algorithm for generating all maximal cliques of an undirected graph, in which pruning methods are employed as in Bron and Kerbosch's algorithm, is presented.
Abstract: We present a depth-first search algorithm for generating all maximal cliques of an undirected graph, in which pruning methods are employed as in Bron and Kerbosch’s algorithm. All maximal cliques generated are output in a tree-like form. Then we prove that its worst-case time complexity is O(3^{n/3}) for an n-vertex graph. This is optimal as a function of n, since there exist up to 3^{n/3} cliques in an n-vertex graph.
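For illustration, a minimal sketch of Bron-Kerbosch-style maximal clique enumeration with pivoting, in the spirit of the pruning the abstract describes; it is not the paper's exact procedure and does not reproduce its tree-like output format.

```python
def maximal_cliques(adj):
    """Enumerate all maximal cliques of an undirected graph.

    adj: dict mapping each vertex to the set of its neighbours.
    Bron-Kerbosch with a pivot chosen to minimise branching."""
    def expand(R, P, X):
        if not P and not X:
            yield set(R)
            return
        # Pivot: a vertex of P|X covering the most candidates, to prune branches.
        pivot = max(P | X, key=lambda u: len(P & adj[u]))
        for v in list(P - adj[pivot]):
            yield from expand(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    return expand(set(), set(adj), set())

# Example: a triangle plus a pendant vertex.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(list(maximal_cliques(g)))  # e.g. [{0, 1, 2}, {2, 3}]
```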

58 citations


MonographDOI
24 Aug 2004
TL;DR: Topics include complexity theory from Gödel to Feynman, average-case complexity, reductions and the PCP theorem, quantum computation, circuit and communication lower bounds, proof complexity, pseudorandomness, and probabilistic proof systems.
Abstract: Week One: Complexity theory: From Gödel to Feynman (history and basic concepts; resources, reductions and P vs. NP; probabilistic and quantum computation; complexity classes; space complexity and circuit complexity; oracles and the polynomial time hierarchy; circuit lower bounds; "natural" proofs of lower bounds). Average case complexity. Exploring complexity through reductions (introduction; the PCP theorem and hardness of computing approximate solutions; which problems have strongly exponential complexity?; Toda's theorem: $PH \subseteq P^{\#P}$). Quantum computation (introduction; bipartite quantum systems; quantum circuits and Shor's factoring algorithm). Lower bounds: circuit and communication complexity (communication complexity; lower bounds for probabilistic communication complexity; communication complexity and circuit depth; lower bound for directed $st$-connectivity; lower bound for $FORK$). Proof complexity (an introduction to proof complexity; lower bounds in proof complexity; automatizability and interpolation; the restriction method; other research and open problems). Randomness in computation: Pseudorandomness (computational indistinguishability; pseudorandom generators; pseudorandom functions). Pseudorandomness, Part II (deterministic simulation of randomized algorithms; the Nisan-Wigderson generator; analysis of the Nisan-Wigderson generator; randomness extractors). Probabilistic proof systems, Part I (interactive proofs; zero-knowledge proofs; suggestions for further reading). Probabilistically checkable proofs (introduction to PCPs; NP-hardness of PCS; a couple of digressions; proof composition and the PCP theorem).

53 citations


Journal Article
TL;DR: In this paper, the authors investigate the question of whether one can characterize complexity classes in terms of efficient reducibility to the set of Kolmogorov-random strings R_C.
Abstract: We investigate the question of whether one can characterize complexity classes (such as PSPACE or NEXP) in terms of efficient reducibility to the set of Kolmogorov-random strings R_C. This question arises because PSPACE ⊆ P^{R_C} and NEXP ⊆ NP^{R_C}, and no larger complexity classes are known to be reducible to R_C in this way. We show that this question cannot be posed without explicitly dealing with issues raised by the choice of universal machine in the definition of Kolmogorov complexity. What follows is a list of some of our main results. • Although Kummer showed that, for every universal machine U, there is a time bound t such that the halting problem is disjunctive truth-table reducible to R_{C_U} in time t, there is no such time bound t that suffices for every universal machine U. We also show that, for some machines U, the disjunctive reduction can be computed in as little as doubly-exponential time. • Although for every universal machine U there are very complex sets that are ≤^p_dtt-reducible to R_{C_U}, it is nonetheless true that P = REC ∩ ⋂_U { A : A ≤^p_dtt R_{C_U} }. (A similar statement holds for parity-truth-table reductions.) • Any decidable set that is polynomial-time monotone-truth-table reducible to R_C is in P/poly. • Any decidable set that is polynomial-time truth-table reducible to R_C via a reduction that asks at most f(n) queries on inputs of size n lies in P/(f(n) 2^{f(n)^3} log f(n)).

42 citations


Journal ArticleDOI
TL;DR: The k-error linear complexity of periodic binary sequences is defined to be the smallest linear complexity that can be obtained by changing k or fewer bits of the sequence per period.
Abstract: The k-error linear complexity of periodic binary sequences is defined to be the smallest linear complexity that can be obtained by changing k or fewer bits of the sequence per period. For the period length p^n, where p is an odd prime and 2 is a primitive root modulo p^2, we show a relationship between the linear complexity and the minimum value k for which the k-error linear complexity is strictly less than the linear complexity. Moreover, we describe an algorithm to determine the k-error linear complexity of a given p^n-periodic binary sequence.
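As a concrete illustration of the definition, the sketch below computes the linear complexity of a periodic binary sequence with the Berlekamp-Massey algorithm and the k-error linear complexity by brute force over bit flips; it is not the algorithm proposed in the paper, which handles p^n-periodic sequences far more efficiently.

```python
from itertools import combinations

def berlekamp_massey(bits):
    """Length of the shortest LFSR (over GF(2)) generating the finite sequence `bits`."""
    n = len(bits)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[j + i - m] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def periodic_linear_complexity(period):
    """Linear complexity of the periodic sequence with the given period, obtained by
    running Berlekamp-Massey on two periods (enough, since the linear complexity of an
    N-periodic sequence is at most N)."""
    return berlekamp_massey(list(period) * 2)

def k_error_linear_complexity(period, k):
    """Smallest linear complexity reachable by changing at most k bits per period.
    Brute force over all flip patterns; only feasible for short periods, this just
    illustrates the definition."""
    best = periodic_linear_complexity(period)
    for e in range(1, k + 1):
        for positions in combinations(range(len(period)), e):
            flipped = list(period)
            for p in positions:
                flipped[p] ^= 1
            best = min(best, periodic_linear_complexity(flipped))
    return best

s = [1, 1, 0, 1, 0, 0, 1, 0, 1]  # one period of a 9-periodic sequence (9 = 3^2, p = 3)
print(periodic_linear_complexity(s), k_error_linear_complexity(s, 1))
```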

42 citations


Journal Article
TL;DR: QR factorization with sort and Dijkstra’s algorithm are proposed for decreasing the computational complexity of the sphere decoder that is used for ML detection of signals on the multi-antenna fading channel.
Abstract: We propose the use of QR factorization with sort and Dijkstra’s algorithm for decreasing the computational complexity of the sphere decoder that is used for ML detection of signals on the multi-antenna fading channel. QR factorization with sort decreases the complexity of the searching part of the decoder with a small increase in the complexity of the preprocessing part. Dijkstra’s algorithm decreases the complexity of the searching part of the decoder at the cost of an increase in the storage complexity. Computer simulations demonstrate that the proposed methods reduce the complexity of the decoder significantly.
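A minimal numpy sketch of the sorted-QR idea (greedily permuting columns by smallest residual norm during the factorization) is given below; the exact ordering rule used in the paper and the Dijkstra-based search are not reproduced here.

```python
import numpy as np

def sorted_qr(H):
    """Sorted QR (SQRD)-style factorization: at each step, swap the remaining column
    with the smallest residual norm into the current position, then do a modified
    Gram-Schmidt step. Returns Q, R and the column permutation perm with
    H[:, perm] = Q @ R."""
    n = H.shape[1]
    Q = H.astype(float).copy()
    R = np.zeros((n, n))
    perm = np.arange(n)
    for i in range(n):
        # Pick the remaining column with minimum norm (weakest layer first).
        k = i + int(np.argmin(np.sum(Q[:, i:] ** 2, axis=0)))
        Q[:, [i, k]] = Q[:, [k, i]]
        R[:, [i, k]] = R[:, [k, i]]
        perm[[i, k]] = perm[[k, i]]
        # Modified Gram-Schmidt step.
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        R[i, i + 1:] = Q[:, i] @ Q[:, i + 1:]
        Q[:, i + 1:] -= np.outer(Q[:, i], R[i, i + 1:])
    return Q, R, perm

H = np.random.randn(4, 4)
Q, R, perm = sorted_qr(H)
print(np.allclose(H[:, perm], Q @ R))  # True: a valid QR factorization of the permuted columns
```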

34 citations


Proceedings ArticleDOI
13 Jun 2004
TL;DR: It is shown that hardness amplification in NP can reach hardness 1/2 - 1/n^{ω(1)} by using both derandomization and nondeterminism in the construction.
Abstract: We revisit the problem of hardness amplification in NP, as recently studied by O'Donnell (STOC '02). We prove that if NP has a balanced function f such that any circuit of size s(n) fails to compute f on a 1/poly(n) fraction of inputs, then NP has a function f′ such that any circuit of size s′(n) = s(√n)^{Ω(1)} fails to compute f′ on a 1/2 - 1/s′(n) fraction of inputs. In particular: 1. If s(n) = n^{ω(1)}, we amplify to hardness 1/2 - 1/n^{ω(1)}. 2. If s(n) = 2^{n^{Ω(1)}}, we amplify to hardness 1/2 - 1/2^{n^{Ω(1)}}. 3. If s(n) = 2^{Ω(n)}, we amplify to hardness 1/2 - 1/2^{Ω(√n)}. These improve the results of O'Donnell, which only amplified to 1/2 - 1/√n. O'Donnell also proved that no construction of a certain general form could amplify beyond 1/2 - 1/n. We bypass this barrier by using both derandomization and nondeterminism in the construction of f′. We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that f is balanced are necessary for "black-box" hardness amplification procedures (such as ours).

30 citations


01 Jan 2004
TL;DR: In this article, the authors discuss open questions around worst case time and space bounds for NP-hard problems and present exponential time solutions for these problems with a relatively good worst case behavior.
Abstract: We discuss open questions around worst case time and space bounds for NP-hard problems. We are interested in exponential time solutions for these problems with a relatively good worst case behavior.
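As a small example of the kind of exponential-time search whose worst-case behavior such work analyzes, here is a textbook-style branching algorithm for maximum independent set; refined case analyses of such branchings are what yield bounds noticeably below 2^n, and the specific code below is only illustrative.

```python
def max_independent_set(adj):
    """Size of a maximum independent set, by branching on a highest-degree vertex:
    either the chosen vertex is excluded, or it is included and all its neighbours
    are excluded. adj: dict mapping each vertex to the set of its neighbours."""
    if not adj:
        return 0
    v = max(adj, key=lambda u: len(adj[u]))
    if not adj[v]:  # isolated vertex: always safe to take it
        rest = {u: nbrs for u, nbrs in adj.items() if u != v}
        return 1 + max_independent_set(rest)

    def remove(vertices):
        return {u: nbrs - vertices for u, nbrs in adj.items() if u not in vertices}

    exclude_v = max_independent_set(remove({v}))
    include_v = 1 + max_independent_set(remove({v} | adj[v]))
    return max(exclude_v, include_v)

# 5-cycle: a maximum independent set has size 2.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(max_independent_set(c5))  # 2
```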

28 citations


Journal ArticleDOI
TL;DR: Simulations show that the proposed string search algorithm performs better for long texts, long patterns, and large alphabet sets, and that its worst-case time complexity depends only linearly on the length of the text.

28 citations


Journal Article
TL;DR: The hypothesis is that a concept’s level of difficulty is determined by that of the multi-agent communication protocol; the model is compared with Feldman’s, in which logical complexity (i.e., the maximal Boolean compression of the disjunctive normal form) is taken to be the best possible measure of conceptual complexity.
Abstract: Conceptual complexity is assessed by a multi-agent system which is tested experimentally. In this model, where each agent represents a working memory unit, concept learning is an inter-agent communication process that promotes the elaboration of common knowledge from distributed knowledge. Our hypothesis is that a concept’s level of difficulty is determined by that of the multi-agent communication protocol. Three versions of the model, which differ according to how they compute entropy, are tested and compared to Feldman’s model (Nature, 2000), where logical complexity (i.e., the maximal Boolean compression of the disjunctive normal form) is the best possible measure of conceptual complexity. All three models proved superior to Feldman’s: the serial version is ahead by 5.5 points of variance in explaining adult inter-concept performance. Computational complexity theories (Johnson, 1990; Lassaigne & Rougemont, 1996) provide a measure of complexity in terms of the computation load associated with a program’s execution time. In this approach, called the structural approach, problems are grouped into classes on the basis of the machine time and space required by the algorithms used to solve them. A program is a function or a combination of functions. In view of developing psychological models, it can be likened to a concept, especially when y’s domain [y = f(x)] is confined to the values 0 and 1. A neighboring perspective (Delahaye, 1994) aimed at describing the complexity of objects (and not at solving problems) is useful for distinguishing between the “orderless, irregular, random, chaotic, random” complexity (this quantity is called algorithmic complexity, algorithmic randomness, algorithmic information content or Chaitin-Kolmogorov complexity; Chaitin,


Proceedings ArticleDOI
31 Oct 2004
TL;DR: This paper derives a new heuristic for complexity estimation using minimum description length principles and develops a new complexity estimator and compression algorithm based on grammar inference using this heuristic, which is used to provide meaningful models of unknown data sets.
Abstract: In this paper we build on the principle of "conservation of complexity", analyzed in Evans, S et al. (2001), to measure protocol redundancy and pattern content as a metric for information assurance. We first analyze complexity estimators as a tool for detecting FTP exploits. Results showing the utility of complexity-based information assurance to detect exploits over the file transfer protocol are presented and analyzed. We show that complexity metrics are able to distinguish between FTP exploits and normal sessions within some margin of error. We then derive a new heuristic for complexity estimation using minimum description length principles and develop a new complexity estimator and compression algorithm based on grammar inference using this heuristic. This estimator is used to provide meaningful models of unknown data sets. Finally we demonstrate the capability of our complexity-based approach to classify protocol behavior based on similarity distance metrics from known behaviors.
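The paper builds its own grammar-inference/MDL estimator; purely as an illustration of the general idea, the sketch below uses zlib compressibility as a stand-in complexity estimate and a simple distance threshold to flag sessions (the traffic, profile, and threshold here are made up).

```python
import os
import zlib

def complexity_ratio(data: bytes) -> float:
    """Crude complexity estimate: compressed size over original size. Highly
    patterned (low-complexity) traffic compresses well; high-entropy payloads do not."""
    return len(zlib.compress(data, 9)) / max(len(data), 1)

def classify(session: bytes, normal_profile: float, threshold: float = 0.2) -> str:
    """Flag a session whose complexity estimate deviates from the normal profile."""
    deviation = abs(complexity_ratio(session) - normal_profile)
    return "suspect" if deviation > threshold else "normal"

# Toy "normal" FTP-like command traffic (illustrative only).
base = b"USER anonymous\r\nPASS guest\r\nCWD /pub\r\nLIST\r\nRETR data.txt\r\nQUIT\r\n"
profile = complexity_ratio(base * 8)

print(classify(base * 6 + b"RETR other.txt\r\nQUIT\r\n", profile))  # close to the profile -> "normal"
print(classify(os.urandom(512), profile))                           # high-entropy payload -> "suspect"
```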


Proceedings ArticleDOI
20 Jun 2004
TL;DR: The proposed SD algorithm, called the SD-KB algorithm, can provide pseudo-MLD solutions, which have significant performance gains over the baseline method, especially when the signal-to-interference ratio (SIR) is low.
Abstract: The sphere decoding (SD) algorithm has been widely recognized as an important algorithm to solve the maximum likelihood detection (MLD) problem, given that symbols can only be selected from a set with a finite alphabet. The complexity of the sphere decoding algorithm is much lower than the directly implemented MLD method, which needs to search through all possible candidates before making a decision. However, in high-dimensional and low signal-to-noise ratio (SNR) cases, the complexity of sphere decoding is still prohibitively high for practical applications. In this paper, a simplified SD algorithm, which combines the K-best algorithm and SD algorithm, is proposed. With carefully selected parameters, the new SD algorithm, called the SD-KB algorithm, can achieve very low complexity with acceptable performance degradation compared with the traditional SD algorithm. The low complexity of the new SD-KB algorithm makes it applicable to the simultaneously operating piconets (SOP) problem of the multi-band orthogonal frequency division multiplexing (MB-OFDM) scheme for the high-speed wireless personal area network (WPAN). We show in particular that the proposed algorithm provides over 4 dB gain in bit error rate (BER) performance over the baseline MB-OFDM scheme when several piconets interfere with each other. The SD-KB algorithm can provide pseudo-MLD solutions, which have significant performance gain over the baseline method, especially when the signal-to-interference ratio (SIR) is low. The cost of performance improvement is higher complexity. However, the new SD algorithm has predictable computation complexity even in the worst scenario.
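A minimal sketch of breadth-first K-best detection over a QR-decomposed real-valued system is shown below; the constellation, parameters, and pruning rule are illustrative and do not reproduce the SD-KB specifics.

```python
import numpy as np

def kbest_detect(H, y, alphabet=(-1.0, 1.0), K=4):
    """Breadth-first K-best detection for y = H x + noise, with x_i in `alphabet`.

    Works on the QR-decomposed system: after z = Q^T y, layers are detected from the
    last row of R upward, keeping only the K partial symbol vectors with the smallest
    accumulated Euclidean metric at each layer."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    survivors = [(0.0, [])]  # (metric, symbols for layers i..n-1)
    for i in range(n - 1, -1, -1):
        candidates = []
        for metric, part in survivors:
            for s in alphabet:
                x_tail = np.array([s] + part)
                resid = z[i] - R[i, i:] @ x_tail
                candidates.append((metric + resid ** 2, [s] + part))
        survivors = sorted(candidates, key=lambda c: c[0])[:K]
    return np.array(survivors[0][1])

# Toy 4x4 real-valued channel with BPSK-like symbols.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
x = rng.choice([-1.0, 1.0], size=4)
y = H @ x + 0.01 * rng.standard_normal(4)
print(kbest_detect(H, y, K=4), x)  # the estimate should match x at this noise level
```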

Journal ArticleDOI
TL;DR: A lower bound is established on the complexity of a continuous function when a fixed number of sequentially selected function evaluations is used, by analyzing the average case for the Brownian bridge.


01 Jan 2004
TL;DR: For any fixed κ>0, the combinatorial complexity of the union of n κ-round, not necessarily convex objects in ℝ^3 (resp., ℝ^4) of constant description complexity is shown to be O(n^{2+ε}) (resp., O(n^{3+ε})) for any ε>0, where the constant of proportionality depends on ε, κ, and the algebraic complexity of the objects.
Abstract: A compact body c in ℝ^d is κ-round if for every point p ∈ ∂c there exists a closed ball that contains p, is contained in c, and has radius κ·diam(c). We show that, for any fixed κ>0, the combinatorial complexity of the union of n κ-round, not necessarily convex objects in ℝ^3 (resp., in ℝ^4) of constant description complexity is O(n^{2+ε}) (resp., O(n^{3+ε})) for any ε>0, where the constant of proportionality depends on ε, κ, and the algebraic complexity of the objects. The bound is almost tight.

Journal ArticleDOI
TL;DR: A complexity measure for words, called repetition complexity, is introduced, which quantifies the amount of repetition in a word; de Bruijn words, well known for their high subword complexity, are shown to have almost the highest repetition complexity.
Abstract: With ideas from data compression and combinatorics on words, we introduce a complexity measure for words, called repetition complexity, which quantifies the amount of repetition in a word. The repetition complexity of w, R(w), is defined as the smallest amount of space needed to store w when reduced by repeatedly applying the following procedure: n consecutive occurrences uu…u of the same subword u of w are stored as (u,n). The repetition complexity has interesting relations with well-known complexity measures, such as subword complexity, SUB, and Lempel-Ziv complexity, LZ. We always have R(w) ≥ LZ(w), and it can even happen that the former is linear while the latter is only logarithmic; e.g., this happens for prefixes of certain infinite words obtained by iterated morphisms. An infinite word α being ultimately periodic is equivalent to: (i) , (ii) , and (iii) . De Bruijn words, well known for their high subword complexity, are shown to have almost the highest repetition complexity; the precise complexity remains open. R(w) can be computed in time and it is open, and probably very difficult, to find fast algorithms.
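To illustrate the reduction that defines the measure, here is a greedy, one-level version; it only upper-bounds R(w), since the true measure minimizes over all (possibly nested) reduction orders, and the character-counting convention used here is our own.

```python
def greedy_repetition_size(w: str) -> int:
    """One-level, left-to-right greedy version of the reduction behind repetition
    complexity: at each position, store n >= 2 consecutive copies of a block u as
    (u, n), counted as |u| + |str(n)| characters, otherwise store one character.
    This only upper-bounds R(w)."""
    size, i = 0, 0
    while i < len(w):
        best_gain, best_len, best_count = 0, 1, 1
        for period in range(1, (len(w) - i) // 2 + 1):
            u = w[i:i + period]
            count = 1
            while w[i + count * period:i + (count + 1) * period] == u:
                count += 1
            gain = period * count - (period + len(str(count)))
            if count >= 2 and gain > best_gain:
                best_gain, best_len, best_count = gain, period, count
        if best_gain > 0:
            size += best_len + len(str(best_count))
            i += best_len * best_count
        else:
            size += 1
            i += 1
    return size

print(greedy_repetition_size("abababab"))  # stored as (ab,4): 2 + 1 = 3
print(greedy_repetition_size("abcdefgh"))  # no repetition to exploit: 8
```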

01 Jan 2004
TL;DR: This work describes the properties of various notions of time-bounded Kolmogorov complexity and other connections between Kolmogorov complexity and computational complexity.
Abstract: We describe the properties of various notions of time-bounded Kolmogorov complexity and other connections between Kolmogorov complexity and computational complexity.

Proceedings ArticleDOI
29 Nov 2004
TL;DR: A high-performance algorithm, the dual iterative all hops k-shortest paths (DIAHKP) algorithm, is proposed; it achieves a 100% success ratio in finding the delay constrained least cost (DCLC) path with very low average computational complexity.
Abstract: We introduce an iterative all hops k-shortest paths (IAHKP) algorithm that is capable of iteratively computing all hops k-shortest paths (AHKP) from a source to a destination. Based on IAHKP, a high-performance algorithm, the dual iterative all hops k-shortest paths (DIAHKP) algorithm, is proposed. It can achieve a 100% success ratio in finding the delay constrained least cost (DCLC) path with very low average computational complexity. The underlying idea is that, since DIAHKP is a k-shortest-paths-based solution to DCLC and its computational complexity therefore increases with k, we can minimize the computational complexity by adaptively minimizing k while still achieving a 100% success ratio in finding the optimal feasible path. Through extensive analysis and simulations, we show that DIAHKP is highly effective and flexible. Even with a very small upper bound on k (k=1,2), DIAHKP still achieves very satisfactory performance. With only an average computational complexity of twice that of the standard Bellman-Ford algorithm, it achieves a 100% success ratio in finding the optimal feasible path in the typical 32-node network.
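DIAHKP itself is built on all-hops k-shortest paths; as a compact illustration of the underlying delay-constrained least-cost problem it targets, here is a label-setting sketch that keeps Pareto-optimal (cost, delay) labels per node. It is not the paper's algorithm, and the toy network is made up.

```python
import heapq

def dclc_path(graph, src, dst, delay_bound):
    """Delay-constrained least-cost (DCLC) routing by exploring Pareto-optimal
    (cost, delay) labels with a priority queue ordered by cost.

    graph: {u: [(v, cost, delay), ...]}.  Returns (cost, delay, path) of the cheapest
    src->dst path whose total delay is <= delay_bound, or None if none exists."""
    labels = {v: [] for v in graph}   # non-dominated (cost, delay) pairs settled at v
    heap = [(0, 0, src, [src])]
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, delay, path
        if any(c <= cost and d <= delay for c, d in labels[u]):
            continue                  # dominated: a better label was already settled
        labels[u].append((cost, delay))
        for v, c, d in graph[u]:
            if delay + d <= delay_bound:
                heapq.heappush(heap, (cost + c, delay + d, v, path + [v]))
    return None

# Toy network: the cheap route is too slow, so DCLC picks the costlier, faster detour.
g = {
    "s": [("a", 1, 5), ("b", 4, 1)],
    "a": [("t", 1, 5)],
    "b": [("t", 3, 1)],
    "t": [],
}
print(dclc_path(g, "s", "t", delay_bound=4))  # (7, 2, ['s', 'b', 't'])
```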

Journal Article
TL;DR: It is proved that if L is an NP-complete set and S ⊉ L is a p-selective sparse set, then L - S is ≤^p_m-hard for NP; it is also shown that no NP-complete set is quasipolynomial-close to P, and that disjoint Turing-complete sets for NP are not closed under union.
Abstract: We study several properties of sets that are complete for NP. We prove that if L is an NP-complete set and S ⊉ L is a p-selective sparse set, then L - S is ≤^p_m-hard for NP. We demonstrate the existence of a sparse set S ∈ DTIME(2^{2n}) such that for every L ∈ NP - P, L - S is not ≤^p_m-hard for NP. Moreover, we prove for every L ∈ NP - P that there exists a sparse S ∈ EXP such that L - S is not ≤^p_m-hard for NP. Hence, removing sparse information in P from a complete set leaves the set complete, while removing sparse information in EXP from a complete set may destroy its completeness. Previously, these properties were known only for exponential time complexity classes. We use hypotheses about pseudorandom generators and secure one-way permutations to derive consequences for long-standing open questions about whether NP-complete sets are immune. For example, assuming that pseudorandom generators and secure one-way permutations exist, it follows easily that NP-complete sets are not p-immune. Assuming only that secure one-way permutations exist, we prove that no NP-complete set is DTIME(2^{n^ε})-immune. Also, using these hypotheses we show that no NP-complete set is quasipolynomial-close to P. We introduce a strong but reasonable hypothesis and infer from it that disjoint Turing-complete sets for NP are not closed under union. Our hypothesis asserts the existence of a UP-machine M that accepts 0* such that for some 0 < ε < 1, no 2^{n^ε} time-bounded machine can correctly compute infinitely many accepting computations of M. We show that if UP ∩ coUP contains DTIME(2^{n^ε})-bi-immune sets, then this hypothesis is true.

01 Jan 2004
TL;DR: This thesis studies different notions of resource-bounded Kolmogorov complexity and studies the relation between (a) the ability of sets in certain complexity classes to avoid simple strings and (b) the inclusion relation between different complexity classes.
Abstract: Kolmogorov complexity is a measure that describes the compressibility of a string. Strings with low complexity contain a lot of redundancy, while strings with high Kolmogorov complexity seem to lack any kind of pattern. For instance, a string such as 5555 5555 5555 5555 5555 has low complexity, while a sequence such as 1732 7356 2748 7621 6552 would have high complexity. This thesis studies different notions of resource-bounded Kolmogorov complexity. In particular it studies Levin's Kt complexity and measures Kμ that are defined in a similar manner. Levin defined his measure as Kt(x) = min{|d| + log t : U(d) = x in t steps}, where U is a universal Turing machine. It is shown that, contrary to common intuition, the measures Kμ behave differently from the resource-unbounded Kolmogorov complexity, even for generous resource bounds. In particular it is argued that a property called Symmetry of Information does not hold for some of these measures Kμ. One of the main results of this thesis addresses the question of the complexity of computing the measure Kt(x) for a given string x. It can be computed in exponential time, but no meaningful lower bound is known. However, it is shown that it is complete for exponential time under efficient, non-uniform reductions (i.e., reductions computable in P/poly) as well as nondeterministic polynomial time reductions (i.e., reductions computable in NP). Further completeness results of other complexity measures for different complexity classes are obtained as well. These results are of interest as the problem of computing the Kolmogorov complexity of a string is not a typical complete problem. Most problems that are complete for a complexity class have a clear combinatorial structure representative of the complexity class they reside in. However, the problems studied here seem to lack that property. This thesis also studies the relation between (a) the ability of sets in certain complexity classes to avoid simple strings and (b) the inclusion relation between different complexity classes. For instance, it is shown that every set in P contains simple strings if and only if NEXP ⊆ P/poly.

Journal ArticleDOI
TL;DR: It is shown that if P ≠ NP, then H (and similar SR-functionals) is inherently infeasible to compute, and that when added to PCF, it yields a language that computes exactly SR.

Book ChapterDOI
01 Jan 2004
TL;DR: It is shown that, for the case that N = p^n, where p is an odd prime and q is a primitive root modulo p^2, there is a relationship between the linear complexity and the minimum value of k for which the k-error linear complexity is strictly less than the linear complexity.
Abstract: The k-error linear complexity of an N-periodic sequence with terms in the finite field \(\mathbb{F}_q\) is defined to be the smallest linear complexity that can be obtained by changing k or fewer terms of the sequence per period. For the case that N = p^n, where p is an odd prime and q is a primitive root modulo p^2, we show a relationship between the linear complexity and the minimum value of k for which the k-error linear complexity is strictly less than the linear complexity.

Book ChapterDOI
23 Mar 2004
TL;DR: The complexity index captures the "richness of the language" used in a sequence; it is used to characterize the sequence statistically and has a long history of applications in several fields, such as data compression, computational biology, data mining, and computational linguistics.
Abstract: This paper discusses the measure of complexity of a sequence called the complexity index. The complexity index captures the "richness of the language" used in a sequence. The measure is simple but quite intuitive. Sequences with low complexity index contain a large number of repeated substrings and they eventually become periodic (e.g., tandem repeats in a DNA sequence). The complexity index is used to characterize the sequence statistically and has a long history of applications in several fields, such as data compression, computational biology, data mining, computational linguistics, among others.
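Assuming the complexity index is the number of distinct (non-empty) substrings of the sequence, as the "richness of the language" description suggests, a direct sketch is given below; suffix-tree or suffix-array methods compute the same quantity in linear time for long sequences.

```python
def complexity_index(seq: str) -> int:
    """Number of distinct non-empty substrings of `seq`: the richness of the
    'language' used by the sequence. Quadratic-space direct computation."""
    return len({seq[i:j] for i in range(len(seq)) for j in range(i + 1, len(seq) + 1)})

# A periodic (low-complexity) word versus a more varied word of the same length.
print(complexity_index("atatatat"), complexity_index("atcggtca"))  # 15 vs. 31
```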

Book ChapterDOI
Chung-Chih Li1
01 Jan 2004
TL;DR: The notion of “small sets” is altered from “finiteness” to topological “compactness” for type-2 complexity theory, and it is shown that explicit type-2 complexity classes can be defined in terms of resource bounds and are recursively representable.
Abstract: We propose an alternative notion of asymptotic behaviors for the study of type-2 computational complexity. Since the classical asymptotic notion (for all but finitely many) is not acceptable in the type-2 context, we alter the notion of “small sets” from “finiteness” to topological “compactness” for type-2 complexity theory. A natural reference for type-2 computations is the standard Baire topology. However, we point out some serious drawbacks of this and introduce an alternative topology for describing compact sets. Following our notion, explicit type-2 complexity classes can be defined in terms of resource bounds. We show that such complexity classes are recursively representable; namely, every complexity class has a programming system. We also prove type-2 analogs of Rabin’s Theorem, the Recursive Relatedness Theorem, and the Gap Theorem to provide evidence that our notion of type-2 asymptotics is workable. We speculate that our investigation will give rise to a possible approach to examining the complexity structure at type 2 along the lines of classical complexity theory.

Proceedings ArticleDOI
17 May 2004
TL;DR: A reduced complexity implementation of a soft Chase algorithm for algebraic soft-decision decoding of Reed-Solomon (RS) codes, based on the recently proposed algorithm of Koetter and Vardy, is presented.
Abstract: A reduced complexity implementation of a soft Chase algorithm for algebraic soft-decision decoding of Reed-Solomon (RS) codes, based on the recently proposed algorithm of Koetter and Vardy, is presented. The reduction in complexity is obtained at the algorithm level by integrating the re-encoding and Chase algorithms and at the architecture level by considering a backup mode which sharply reduces the average computational complexity of the hybrid decoder.

Book ChapterDOI
01 Jan 2004
TL;DR: This paper is an introduction to the entire volume: the notions of reduction functions and their derived complexity classes are introduced abstractly and connected to the areas covered by this volume.
Abstract: This paper is an introduction to the entire volume: the notions of reduction functions and their derived complexity classes are introduced abstractly and connected to the areas covered by this volume.

Posted Content
TL;DR: A precise estimate of the computational complexity of Shor's factoring algorithm, under the condition that the large integer to be factored is a product of two prime numbers, is derived from results in number theory.
Abstract: A precise estimate of the computational complexity of Shor's factoring algorithm, under the condition that the large integer we want to factorize is the product of two prime numbers, is derived from results in number theory. Compared with Shor's original estimate, ours shows that under this condition the solution can be obtained with less computational complexity.