
Showing papers on "Average-case complexity published in 2006"


Proceedings Article
16 Jul 2006
TL;DR: The proposed algorithm is a core tree-growing algorithm that can be combined with other scaling-up techniques to achieve further speedup; it is as fast as naive Bayes but outperforms naive Bayes in accuracy according to experiments.
Abstract: There is growing interest in scaling up the widely-used decision-tree learning algorithms to very large data sets. Although numerous diverse techniques have been proposed, a fast tree-growing algorithm without substantial decrease in accuracy and substantial increase in space complexity is essential. In this paper, we present a novel, fast decision-tree learning algorithm that is based on a conditional independence assumption. The new algorithm has a time complexity of O(m · n), where m is the size of the training data and n is the number of attributes. This is a significant asymptotic improvement over the time complexity O(m · n^2) of the standard decision-tree learning algorithm C4.5, with an additional space increase of only O(n). Experiments show that our algorithm performs competitively with C4.5 in accuracy on a large number of UCI benchmark data sets, and performs even better and significantly faster than C4.5 on a large number of text classification data sets. The time complexity of our algorithm is as low as naive Bayes'. Indeed, it is as fast as naive Bayes but outperforms naive Bayes in accuracy according to our experiments. Our algorithm is a core tree-growing algorithm that can be combined with other scaling-up techniques to achieve further speedup.
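As context for the O(m · n) bound above, here is a minimal one-pass sketch (an illustration of ours, not the authors' algorithm or exact split criterion): class-conditional counts for all n discrete attributes are gathered in a single O(m · n) pass over the m training examples, after which a naive-Bayes-style score per attribute costs only time proportional to the number of distinct attribute values. The function names and the scoring rule are assumptions for illustration.

    from collections import defaultdict
    from math import log

    def one_pass_counts(examples, n_attrs):
        """Single O(m * n) pass: counts[a][(v, c)] counts examples with attribute a = v and class c."""
        counts = [defaultdict(int) for _ in range(n_attrs)]
        class_counts = defaultdict(int)
        for x, c in examples:              # x is a tuple of n_attrs discrete values
            class_counts[c] += 1
            for a in range(n_attrs):
                counts[a][(x[a], c)] += 1
        return counts, class_counts

    def nb_style_score(attr_counts, class_counts):
        """Illustrative conditional-independence-based split score for one attribute:
        log-likelihood of the attribute values under per-class multinomial estimates."""
        return sum(k * log(k / class_counts[c]) for (v, c), k in attr_counts.items())

    # Toy usage: pick the attribute with the highest score.
    examples = [((0, 1), "yes"), ((1, 1), "no"), ((0, 0), "yes")]
    counts, class_counts = one_pass_counts(examples, n_attrs=2)
    best_attr = max(range(2), key=lambda a: nb_style_score(counts[a], class_counts))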

184 citations


Journal ArticleDOI
TL;DR: In this article, the authors survey the average-case complexity of problems in NP and present completeness results due to Impagliazzo and Levin, and discuss various notions of good-on-average algorithms.
Abstract: We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
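For concreteness, one standard way to formalize "easy-on-average" (Levin's notion of average polynomial time, stated here as background; the survey discusses this and several alternatives) is: a distributional problem $(L, \mathcal{D})$ is in AvgP if some algorithm $A$ deciding $L$ satisfies, for some constant $\varepsilon > 0$, $\mathbb{E}_{x \sim \mathcal{D}_n}\big[ t_A(x)^{\varepsilon} \big] = O(n)$, where $t_A(x)$ is the running time of $A$ on input $x$ and $\mathcal{D}_n$ is the restriction of $\mathcal{D}$ to inputs of length $n$.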

105 citations


Book ChapterDOI
20 Mar 2006
TL;DR: In this paper, an exact algorithm for motif extraction based on suffix trees is presented; an average-case complexity analysis shows a gain over the best known exact algorithm, and experiments show the new algorithm to be more than two times faster.
Abstract: We present in this paper an exact algorithm for motif extraction. Efficiency is achieved by means of an improvement in the algorithm and data structures that applies to the whole class of motif inference algorithms based on suffix trees. An average case complexity analysis shows a gain over the best known exact algorithm for motif extraction. A full implementation was developed and made available online. Experimental results show that the proposed algorithm is more than two times faster than the best known exact algorithm for motif extraction.

102 citations


Proceedings ArticleDOI
21 May 2006
TL;DR: The possibility of basing one-way functions on NP-Hardness is considered, and possible reductions from a worst-case decision problem to the task of average-case inverting a polynomial-time computable function f are studied.
Abstract: We consider the possibility of basing one-way functions on NP-hardness; that is, we study possible reductions from a worst-case decision problem to the task of average-case inverting a polynomial-time computable function f. Our main findings are the following two negative results: (1) If given y one can efficiently compute |f^{-1}(y)|, then the existence of a (randomized) reduction of NP to the task of inverting f implies that coNP ⊆ AM; thus, such reductions cannot exist unless coNP ⊆ AM. (2) For any function f, the existence of a (randomized) non-adaptive reduction of NP to the task of average-case inverting f implies that coNP ⊆ AM. Our work builds upon and improves on the previous works of Feigenbaum and Fortnow (SIAM Journal on Computing, 1993) and Bogdanov and Trevisan (44th FOCS, 2003), while capitalizing on the additional "computational structure" of the search problem associated with the task of inverting polynomial-time computable functions. We believe that our results illustrate the gain of directly studying the context of one-way functions rather than inferring results for it from the general study of worst-case to average-case reductions.
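As background for what "basing one-way functions on NP-hardness" would mean, recall the standard definition (not specific to this paper): a polynomial-time computable $f$ is one-way if for every probabilistic polynomial-time algorithm $A$ and every polynomial $p$, $\Pr_{x \leftarrow \{0,1\}^n}\big[ A(f(x), 1^n) \in f^{-1}(f(x)) \big] < 1/p(n)$ for all sufficiently large $n$. The reductions studied here would derive this average-case inversion hardness from a worst-case NP-hard problem.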

91 citations


Journal ArticleDOI
TL;DR: It is shown that if an NP-complete problem has a nonadaptive self-corrector with respect to any samplable distribution, then coNP is contained in NP/poly and the polynomial hierarchy collapses to the third level.
Abstract: We show that if an NP-complete problem has a nonadaptive self-corrector with respect to any samplable distribution, then coNP is contained in NP/poly and the polynomial hierarchy collapses to the third level. Feigenbaum and Fortnow [SIAM J. Comput., 22 (1993), pp. 994-1005] show the same conclusion under the stronger assumption that an NP-complete problem has a nonadaptive random self-reduction. A self-corrector for a language L with respect to a distribution $\cal D$ is a worst-case to average-case reduction that transforms any given algorithm that correctly decides $L$ on most inputs (with respect to $\cal D$) into an algorithm of comparable efficiency that decides L correctly on every input. A random self-reduction is a special case of a self-corrector, where the reduction, given an input $x$, is restricted to only making oracle queries that are distributed according to $\cal D$. The result of Feigenbaum and Fortnow depends essentially on the property that the distribution of each query in a random self-reduction is independent of the input of the reduction. Our result implies that the average-case hardness of a problem in NP or the security of a one-way function cannot be based on the worst-case complexity of an NP-complete problem via nonadaptive reductions (unless the polynomial hierarchy collapses).

86 citations


01 Jan 2006
TL;DR: Previous studies have shown that some voting protocols are hard to manipulate, but predictably used NP-hardness as the complexity measure; the authors demonstrate that such a worst-case analysis may be an insufficient guarantee of resistance to manipulation.
Abstract: Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Previous studies have shown that some voting protocols are hard to manipulate, but predictably used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with new concepts; we consider elections distributed with respect to junta distributions, which concentrate on hard instances, and introduce a notion of heuristic polynomial time. We use our techniques to prove that a family of important voting protocols is susceptible to manipulation by coalitions, when the number of candidates is constant.

85 citations


Posted Content
TL;DR: The many open questions and the few things that are known about the average-case complexity of computational problems are reviewed, and a theory of completeness for distributional problems under reductions that preserveaverage-case tractability is initiated.
Abstract: We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.

59 citations


Journal Article
TL;DR: The 2D Sperner problem is shown to be PPAD-complete, giving the first proof of this fact and settling a question that had remained open since the class PPAD was introduced.
Abstract: We study a computational complexity version of the 2D Sperner problem, which states that any three coloring of vertices of a triangulated triangle, satisfying some boundary conditions, will have a trichromatic triangle. In introducing a complexity class PPAD, Papadimitriou [C.H. Papadimitriou, On graph-theoretic lemmata and complexity classes, in: Proceedings of the 31st Annual Symposium on Foundations of Computer Science, 1990, pp. 794-801] proved that its 3D analogue is PPAD-complete about fifteen years ago. The complexity of 2D-SPERNER itself has remained open since then. We settle this open problem with a PPAD-completeness proof. The result also allows us to derive the computational complexity characterization of a discrete version of the 2D Brouwer fixed point problem, improving a recent result of Daskalakis, Goldberg and Papadimitriou [C. Daskalakis, P.W. Goldberg, C.H. Papadimitriou, The complexity of computing a Nash equilibrium, in: Proceedings of the 38th Annual ACM Symposium on Theory of Computing (STOC), 2006]. Those hardness results for the simplest version of those problems provide very useful tools for the study of other important problems in the PPAD class.

52 citations


Journal ArticleDOI
TL;DR: It is shown that the classic union-find algorithm and variants can be written in CHR, and an analysis of the time complexity of the resulting programs shows that they match the almost-linear complexity of the best known imperative implementations.
Abstract: Constraint Handling Rules (CHR) is a committed-choice rule-based language that was originally intended for writing constraint solvers. In this paper we show that it is also possible to write the classic union-find algorithm and variants in CHR. The programs compromise neither declarativeness nor efficiency. We study the time complexity of our programs: they match the almost-linear complexity of the best known imperative implementations. This fact is illustrated with experimental results.
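For reference, the imperative baseline that the CHR programs are compared against is union-find with union by rank and path compression, whose amortized cost per operation is O(α(n)) (inverse Ackermann), hence the almost-linear total. This is the textbook imperative version, shown only as a sketch; it is not the CHR program from the paper.

    class UnionFind:
        """Textbook union-find with union by rank and path compression.
        A sequence of m operations on n elements runs in O(m * alpha(n)) time."""
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            # Path compression: point every node on the path directly at the root.
            root = x
            while self.parent[root] != root:
                root = self.parent[root]
            while self.parent[x] != root:
                self.parent[x], x = root, self.parent[x]
            return root

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            # Union by rank: attach the shallower tree under the deeper one.
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1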

51 citations


Book ChapterDOI
02 Apr 2006
TL;DR: A fast new RNA folding algorithm is utilized for genome-wide discovery of accessible cis-regulatory motifs in data sets of ribosomal densities and decay rates of S. cerevisiae genes and to the mining of exposed binding sites of tissue-specific microRNAs in A. Thaliana.
Abstract: mRNA molecules are folded in the cells and therefore many of their substrings may actually be inaccessible to protein and microRNA binding. The need to apply an accessibility criterion to the task of genome-wide mRNA motif discovery raises the challenge of overcoming the core O(n^3) factor imposed by the time complexity of the currently best known algorithms for RNA secondary structure prediction [24, 25, 43]. We speed up the dynamic programming algorithms that are standard for RNA folding prediction. Our new approach significantly reduces the computations without sacrificing the optimality of the results, yielding an expected time complexity of O(n^2 ψ(n)), where ψ(n) is shown to be constant on average under standard polymer folding models. Benchmark analysis confirms that in practice the runtime ratio between the previous approach and the new algorithm indeed grows linearly with increasing sequence size. The fast new RNA folding algorithm is utilized for genome-wide discovery of accessible cis-regulatory motifs in data sets of ribosomal densities and decay rates of S. cerevisiae genes and to the mining of exposed binding sites of tissue-specific microRNAs in A. thaliana. Further details, including additional figures and proofs to all lemmas, can be found at: http://www.cs.tau.ac.il/~michaluz/QuadraticRNAFold.pdf
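The cubic factor mentioned above comes from the branch-point term of the standard folding recursions. In the simplest base-pair-maximization (Nussinov-style) form, shown only as background and not the exact energy model of the paper: $N(i,j) = \max\{\, N(i+1,j),\ N(i,j-1),\ N(i+1,j-1) + \delta(i,j),\ \max_{i<k<j} N(i,k) + N(k+1,j) \,\}$, where $\delta(i,j) = 1$ if positions $i$ and $j$ can pair and $0$ otherwise; the inner maximization over the branch point $k$ is what makes naive evaluation of all $O(n^2)$ cells cost $O(n^3)$ overall.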

45 citations


Proceedings ArticleDOI
09 Jul 2006
TL;DR: In this article, it was shown that for the set of all d × n binary matrices with entries 0 or 1 and d < n, MKCS exactly recovers the input for an overwhelming fraction of the matrices, provided the Kolmogorov complexity of the input is O(d).
Abstract: Consider a d × n matrix A, with d < n. The problem of solving for x in y = Ax is underdetermined, and has infinitely many solutions (if there are any). Given y, the minimum Kolmogorov complexity solution (MKCS) of the input x is defined to be an input z (out of many) with minimum Kolmogorov complexity that satisfies y = Az. One expects that if the actual input is simple enough, then MKCS will recover the input exactly. This paper presents a preliminary study of the existence and value of the complexity level up to which such a complexity-based recovery is possible. It is shown that for the set of all d × n binary matrices (with entries 0 or 1 and d < n), MKCS exactly recovers the input for an overwhelming fraction of the matrices provided the Kolmogorov complexity of the input is O(d). A weak converse that is loose by a log n factor is also established for this case. Finally, we investigate the difficulty of finding a matrix that has the property of recovering inputs with complexity of O(d) using MKCS.
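In symbols, the recovery rule defined above is $\mathrm{MKCS}(y) \in \arg\min\{\, K(z) : Az = y \,\}$: among all solutions of the underdetermined system, pick one of minimum Kolmogorov complexity $K(\cdot)$ (ties broken arbitrarily).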

Journal ArticleDOI
TL;DR: The maximum-likelihood sequence detection problem for channels with memory is investigated, and the sphere-constrained search strategy of SD is combined with the dynamic programming principles of the VA.
Abstract: The maximum-likelihood (ML) sequence detection problem for channels with memory is investigated. The Viterbi algorithm (VA) provides an exact solution. Its computational complexity is linear in the length of the transmitted sequence, but exponential in the channel memory length. On the other hand, the sphere decoding (SD) algorithm also solves the ML detection problem exactly, and has expected complexity which is a low-degree polynomial (often cubic) in the length of the transmitted sequence over a wide range of signal-to-noise ratios. We combine the sphere-constrained search strategy of SD with the dynamic programming principles of the VA. The resulting algorithm has the worst-case complexity determined by the VA, but often significantly lower expected complexity.
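As a baseline for the combination described above, here is a minimal Viterbi sketch for exact ML sequence detection over an intersymbol-interference channel with additive white Gaussian noise; the BPSK alphabet and channel taps are illustrative assumptions, and this is the plain VA, not the authors' sphere-constrained variant.

    import itertools

    def viterbi_ml_detect(y, h, alphabet=(-1.0, 1.0)):
        """Exact ML detection for y[k] = sum_j h[j] * x[k-j] + white Gaussian noise,
        by minimizing squared error over a trellis whose states are the last
        len(h)-1 symbols: time linear in len(y), exponential in the channel memory."""
        m = len(h) - 1                              # channel memory length
        states = list(itertools.product(alphabet, repeat=m))
        INF = float("inf")
        cost = {s: 0.0 for s in states}             # any initial state allowed
        back = []
        for yk in y:
            new_cost = {s: INF for s in states}
            choice = {}
            for s in states:                        # s = (x[k-1], ..., x[k-m])
                for x in alphabet:
                    pred = h[0] * x + sum(h[j + 1] * s[j] for j in range(m))
                    ns = ((x,) + s[:-1]) if m > 0 else ()
                    c = cost[s] + (yk - pred) ** 2
                    if c < new_cost[ns]:
                        new_cost[ns] = c
                        choice[ns] = (s, x)
            back.append(choice)
            cost = new_cost
        # Trace back from the cheapest final state.
        s = min(cost, key=cost.get)
        xs = []
        for choice in reversed(back):
            s, x = choice[s]
            xs.append(x)
        return xs[::-1]

For example, viterbi_ml_detect([0.9, -1.2, 1.1], h=[1.0, 0.5]) returns a length-3 ±1 sequence; the per-step work is |alphabet|^len(h) transitions, independent of the sequence length.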

Journal ArticleDOI
TL;DR: In this paper, the authors prove lower bounds on the complexity of explicitly given graphs and prove new lower bounds for boolean functions, as well as new proofs of some known lower bounds in the graph-theoretic framework.
Abstract: By the complexity of a graph we mean the minimum number of union and intersection operations needed to obtain the whole set of its edges starting from stars. This measure of graphs is related to the circuit complexity of boolean functions. We prove some lower bounds on the complexity of explicitly given graphs. This yields some new lower bounds for boolean functions, as well as new proofs of some known lower bounds in the graph-theoretic framework. We also formulate several combinatorial problems whose solution would have intriguing consequences in computational complexity.

Book ChapterDOI
TL;DR: This paper defines the linear complexity of a graph to be the linear complexity of any one of its associated adjacency matrices, and computes or gives upper bounds for the linear complexity of several classes of graphs.
Abstract: The linear complexity of a matrix is a measure of the number of additions, subtractions, and scalar multiplications required to multiply that matrix and an arbitrary vector. In this paper, we define the linear complexity of a graph to be the linear complexity of any one of its associated adjacency matrices. We then compute or give upper bounds for the linear complexity of several classes of graphs.
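As a small worked example (our illustration; not necessarily one of the graph classes treated in the paper): for the complete graph $K_n$ the adjacency matrix is $A = J - I$, so $Ax = (\mathbf{1}^{\mathsf{T}} x)\mathbf{1} - x$ for any vector $x$; this needs $n-1$ additions for the sum and $n$ subtractions, so the linear complexity of $K_n$ is at most $2n - 1$, far below the $\Theta(n^2)$ cost of a generic matrix-vector product.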

Proceedings ArticleDOI
08 May 2006
TL;DR: It is demonstrated that NP-hard manipulations may be tractable in the average-case, and a family of important voting protocols is susceptible to manipulation by coalitions, when the number of candidates is constant.
Abstract: Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Previous studies have shown that some voting protocols are hard to manipulate, but predictably used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with new concepts; we consider elections distributed with respect to junta distributions, which concentrate on hard instances, and introduce a notion of heuristic polynomial time. We use our techniques to prove that a family of important voting protocols is susceptible to manipulation by coalitions, when the number of candidates is constant.

Book ChapterDOI
24 Sep 2006
TL;DR: Some interesting properties of the linear complexity and the 1-error linear complexity of 2^n-periodic binary sequences are obtained, and a new approach is obtained to derive the counting function for the 1-error linear complexity of 2^n-periodic binary sequences.
Abstract: The linear complexity of sequences is one of the important security measures for stream cipher systems. Recently, using fast algorithms for computing the linear complexity and the k-error linear complexity of 2^n-periodic binary sequences, Meidl determined the counting function and expected value for the 1-error linear complexity of 2^n-periodic binary sequences. In this paper, we study the linear complexity and the 1-error linear complexity of 2^n-periodic binary sequences. Some interesting properties of the linear complexity and the 1-error linear complexity of 2^n-periodic binary sequences are obtained. Using these properties, we characterize the 2^n-periodic binary sequences with fixed 1-error linear complexity. Along the way, we obtain a new approach to derive the counting function for the 1-error linear complexity of 2^n-periodic binary sequences. Finally, we give new fast algorithms for computing the 1-error linear complexity and locating the error positions for 2^n-periodic binary sequences.
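The fast algorithms referred to here build on the classical Games-Chan algorithm, which computes the linear complexity of a 2^n-periodic binary sequence in time linear in the period length; a minimal sketch follows (the paper's 1-error counting and error-locating procedures go beyond this).

    def games_chan(period):
        """Linear complexity of a binary sequence whose period (one full cycle,
        length 2**n, given as a list of 0/1 values) is `period`. Runs in O(2**n) time."""
        a = list(period)
        assert len(a) & (len(a) - 1) == 0, "period length must be a power of two"
        c = 0
        while len(a) > 1:
            half = len(a) // 2
            left, right = a[:half], a[half:]
            b = [l ^ r for l, r in zip(left, right)]
            if any(b):
                # Halves differ: the complexity picks up `half`; recurse on the sum.
                c += half
                a = b
            else:
                # Halves agree: the complexity is determined entirely by one half.
                a = left
        return c + a[0]

    # Example: games_chan([1, 0, 0, 0]) == 4 (a single 1 per period has full linear complexity).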

Journal Article
TL;DR: In this article, the 1-error linear complexity of 2^n-periodic binary sequences was studied and a fast algorithm for computing the 1-error linear complexity and locating the error positions was given.
Abstract: The linear complexity of sequences is one of the important security measures for stream cipher systems. Recently, using fast algorithms for computing the linear complexity and the k-error linear complexity of 2^n-periodic binary sequences, Meidl determined the counting function and expected value for the 1-error linear complexity of 2^n-periodic binary sequences. In this paper, we study the linear complexity and the 1-error linear complexity of 2^n-periodic binary sequences. Some interesting properties of the linear complexity and the 1-error linear complexity of 2^n-periodic binary sequences are obtained. Using these properties, we characterize the 2^n-periodic binary sequences with fixed 1-error linear complexity. Along the way, we obtain a new approach to derive the counting function for the 1-error linear complexity of 2^n-periodic binary sequences. Finally, we give new fast algorithms for computing the 1-error linear complexity and locating the error positions for 2^n-periodic binary sequences.

Book
10 Dec 2006
TL;DR: Following the presentations of Impagliazzo, of Goldreich, and of Bogdanov and the author, this book reviews average-case complexity, including whether the existence of hard-on-average problems is equivalent to the existence of worst-case-hard problems in complexity classes like PSPACE and EXP, a question that remains open for NP.
Abstract: We review the many open questions and the few things that are known about the average-case complexity of computational problems. We shall follow the presentations of Impagliazzo, of Goldreich, and of Bogdanov and the author, and focus on the following subjects. (i) Average-case tractability. What does it mean for a problem to have an "efficient on average" algorithm with respect to a distribution of instances? There is more than one "correct" answer to this question, and a number of subtleties arise, which are interesting to discuss. (ii) Worst case versus average-case. Is the existence of hard-on-average problems in a complexity class equivalent to the existence of worst-case-hard problems? This is the case for complexity classes like PSPACE and EXP, but it is open for NP, with partial evidence pointing to a negative answer. (To be sure, we believe that hard-on-average, and also worst-case hard problems, exist in NP, and if so their existence is "equivalent" in the way two true statements are logically equivalent. There is, however, partial evidence that such an equivalence cannot be established via reductions. It is also known that such an equivalence cannot be established via any relativizing technique.) (iii) Amplification of average-case hardness. A weak sense in which a problem may be hard-on-average is that every efficient algorithm fails on a noticeable (at least inverse polynomial) fraction of inputs; a strong sense is that no algorithm can do much better than guess the answer at random. In many settings, the existence of problems of weak average-case complexity implies the existence of problems, in the same complexity class, of strong average-case complexity. It remains open to prove such equivalence in the setting of uniform algorithms for problems in NP. (Some partial results are known even in this setting.) (iv) Reductions and completeness. Levin initiated a theory of completeness for distributional problems under reductions that preserve average-case tractability. Even establishing the existence of an NP-complete problem in this theory is a non-trivial (and interesting) result.

Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper presents a constructive synthesis algorithm for any n-qubit reversible function with N distinct input patterns differing from their corresponding outputs, and shows that such a circuit can be synthesized by at most 2n·N '(n - 1)'-CNOT gates and 4n^2·N NOT gates.
Abstract: This paper presents a constructive synthesis algorithm for any n-qubit reversible function. Given any n-qubit reversible function, there are N distinct input patterns different from their corresponding outputs, where N ≤ 2^n, and the other (2^n - N) input patterns will be the same as their outputs. We show that this circuit can be synthesized by at most 2n·N '(n - 1)'-CNOT gates and 4n^2·N NOT gates. The time complexity of our algorithm has asymptotic upper bound O(n·4^n). The space complexity of our synthesis algorithm is also O(n·2^n). The computational complexity of our synthesis algorithm is exponentially lower than the complexity of a breadth-first search based synthesis algorithm.

Journal ArticleDOI
TL;DR: This work defines the complexity of a quantum state by means of the classical description complexity of an (abstract) experimental procedure that allows us to prepare the state with a given fidelity, and argues that this definition satisfies the intuitive idea of complexity as a measure of how difficult it is to prepare a state.
Abstract: We give a definition for the Kolmogorov complexity of a pure quantum state. In classical information theory, the algorithmic complexity of a string is a measure of the information needed by a universal machine to reproduce the string itself. We define the complexity of a quantum state by means of the classical description complexity of an (abstract) experimental procedure that allows us to prepare the state with a given fidelity. We argue that our definition satisfies the intuitive idea of complexity as a measure of "how difficult" it is to prepare a state. We apply this definition to give an upper bound on the algorithmic complexity of a number of known states. Furthermore, we establish a connection between the entanglement of a quantum state and its algorithmic complexity.

26 May 2006
TL;DR: The role of data complexity in the context of binary classification problems is investigated, and it is illustrated that a data set is best approximated by its principal subsets which are Pareto optimal with respect to the complexity and the set size.
Abstract: We investigate the role of data complexity in the context of binary classification problems. The universal data complexity is defined for a data set as the Kolmogorov complexity of the mapping enforced by the data set. It is closely related to several existing principles used in machine learning such as Occam's razor, the minimum description length, and the Bayesian approach. The data complexity can also be defined based on a learning model, which is more realistic for applications. We demonstrate the application of the data complexity in two learning problems, data decomposition and data pruning. In data decomposition, we illustrate that a data set is best approximated by its principal subsets which are Pareto optimal with respect to the complexity and the set size. In data pruning, we show that outliers usually have high complexity contributions, and propose methods for estimating the complexity contribution. Since in practice we have to approximate the ideal data complexity measures, we also discuss the impact of such approximations.

Proceedings ArticleDOI
11 Sep 2006
TL;DR: This paper investigates the applicability of KC as an estimator of problem difficulty for optimization in the black box scenario and concludes that high KC implies hardness however, while easy fitness functions have low KC the reverse is not necessarily true.
Abstract: The Kolmogorov complexity (KC) of a string is defined as the length of the shortest program that can print that string and halts. This measure of complexity is often used in optimization to indicate expected function difficulty. While it is often used, there are known counterexamples. This paper investigates the applicability of KC as an estimator of problem difficulty for optimization in the black box scenario. In particular we address the known counterexamples (e.g., pseudorandom functions, the NIAH) and explore the connection of KC to the NFLTs. We conclude that high KC implies hardness however, while easy fitness functions have low KC the reverse is not necessarily true.

Journal Article
TL;DR: This work shows that, assuming specific hardness of the balanced bipartite independent set problem in constant degree graphs or hardness of refuting random 3CNF formulas, the envy-free pricing problem cannot be approximated in polynomial time within O(log^ε |C|) for some ε > 0.
Abstract: We consider the envy-free pricing problem, in which we want to compute revenue-maximizing prices for a set of products P assuming that each consumer from a set of consumer samples C will buy the product maximizing her personal utility, i.e., the difference between her respective budget and the product's price. We show that assuming specific hardness of the balanced bipartite independent set problem in constant degree graphs or hardness of refuting random 3CNF formulas, the envy-free pricing problem cannot be approximated in polynomial time within O(log^ε |C|) for some ε > 0. This is the first result giving evidence that envy-free pricing might be hard to approximate within essentially better ratios than the logarithmic ratio obtained so far. Additionally, it gives another example of how average-case complexity is connected to the worst-case approximation complexity of notorious optimization problems.

Proceedings ArticleDOI
21 Oct 2006
TL;DR: In this paper, a nearly quadratic lower bound on the complexity of randomized volume algorithms for convex bodies in \mathbb{R}^n is obtained (the current best algorithm has complexity roughly n^4, conjectured to be n^3).
Abstract: How much can randomness help computation? Motivated by this general question and by volume computation, one of the few instances where randomness provably helps, we analyze a notion of dispersion and connect it to asymptotic convex geometry. We obtain a nearly quadratic lower bound on the complexity of randomized volume algorithms for convex bodies in \mathbb{R}^n (the current best algorithm has complexity roughly n^4, conjectured to be n^3). Our main tools, dispersion of random determinants and dispersion of the length of a random point from a convex body, are of independent interest and applicable more generally; in particular, the latter is closely related to the variance hypothesis from convex geometry. This geometric dispersion also leads to lower bounds for matrix problems and property testing.

Journal Article
TL;DR: In this paper, the authors survey the average-case complexity of problems in NP and present completeness results due to Impagliazzo and Levin, and discuss various notions of good-on-average algorithms.
Abstract: We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.

Journal Article
TL;DR: In this article, a tight lower bound of Ω(√n) for the randomized one-way communication complexity of the Boolean Hidden Matching Problem is given; a similar result was independently obtained by Gavinsky, Kempe, and de Wolf.
Abstract: We give a tight lower bound of Ω(√n) for the randomized one-way communication complexity of the Boolean Hidden Matching Problem [BJK04]. Since there is a quantum one-way communication complexity protocol of O(log n) qubits for this problem, we obtain an exponential separation of quantum and classical one-way communication complexity for partial functions. A similar result was independently obtained by Gavinsky, Kempe, de Wolf [GKdW06]. Our lower bound is obtained by Fourier analysis, using the Fourier coefficients inequality of Kahn, Kalai and Linial [KKL88].

Journal ArticleDOI
07 Feb 2006
TL;DR: The idea is that information is an extension of the concept 'algorithmic complexity' from a class of desirable and concrete processes to a class more general that can only in pragmatic terms be regarded as existing in the conception.
Abstract: We study complexity and information and introduce the idea that while complexity is relative to a given class of processes, information is process independent: Information is complexity relative to the class of all conceivable processes. In essence, the idea is that information is an extension of the concept 'algorithmic complexity' from a class of desirable and concrete processes, such as those represented by binary decision trees, to a class more general that can only in pragmatic terms be regarded as existing in the conception. It is then precisely the fact that information is defined relative to such a large class of processes that it becomes an effective tool for analyzing phenomena in a wide range of disciplines. We test these ideas on the complexity of classical states. A domain is used to specify the class of processes, and both qualitative and quantitative notions of complexity for classical states emerge. The resulting theory is used to give new proofs of fundamental results from classical information theory, to give a new characterization of entropy in quantum mechanics, to establish a rigorous connection between entanglement transformation and computation, and to derive lower bounds on algorithmic complexity. All of this is a consequence of the setting which gives rise to the fixed point theorem: The least fixed point of the copying operator above complexity is information.

Proceedings ArticleDOI
13 Mar 2006
TL;DR: This is a brief survey into the applications of list decoding in complexity theory, specifically in relating the worst-case and average-case complexity of computational problems, and in construction of pseudorandom generators.
Abstract: This is a brief survey into the applications of list decoding in complexity theory, specifically in relating the worst-case and average-case complexity of computational problems, and in construction of pseudorandom generators. Since we do not have space for full proofs, the aim is to give a flavor of the utility of list decoding in these settings together with pointers to where further details can be found.

Proceedings ArticleDOI
09 Jul 2006
TL;DR: A power-efficient communication model for wireless sensor networks where silence is used to convey information is considered and the average-case and worst-case complexities of symmetric functions under this model are studied and protocols that achieve them are described.
Abstract: We consider a power-efficient communication model for wireless sensor networks where silence is used to convey information. We study the average-case and worst-case complexities of symmetric functions under this model and describe protocols that achieve them. For binary-input functions, we determine the average complexity. For ternary-input functions, we consider a special type of protocols and provide close lower and upper bounds for their worst-case complexity. We also describe the protocol that achieves the average complexity.

Proceedings ArticleDOI
15 Dec 2006
TL;DR: This paper introduces a new enumeration technique for (multi)parametric linear programs (pLPs) based on the reverse-search paradigm and proves that the proposed algorithm has a computational complexity that is linear in the size of the output and a constant space complexity.
Abstract: This paper introduces a new enumeration technique for (multi) parametric linear programs (pLPs) based on the reverse-search paradigm. We prove that the proposed algorithm has a computational complexity that is linear in the size of the output (number of so-called critical regions) and a constant space complexity. This is an improvement over the quadratic and linear computational and space complexities of current approaches. Current implementations of the proposed approach become faster than existing methods for large problems. Extensions of this method are proposed that make the computational requirements lower than those of existing approaches in all cases, while allowing for efficient parallelisation and bounded memory usage.