
Showing papers on "Average-case complexity published in 2007"


Journal ArticleDOI
TL;DR: It is proved that for any unbounded function m = ω(1) with arbitrarily slow growth rate, solving the generalized compact knapsack problems on the average is at least as hard as the worst-case instance of various approximation problems over cyclic lattices.
Abstract: We investigate the average-case complexity of a generalization of the compact knapsack problem to arbitrary rings: given m (random) ring elements a_1, ..., a_m ∈ R and a (random) target value b ∈ R, find coefficients x_1, ..., x_m ∈ S (where S is an appropriately chosen subset of R) such that Σ a_i · x_i = b. We consider compact versions of the generalized knapsack where the set S is large and the number of weights m is small. Most variants of this problem considered in the past (e.g., when $$R={\mathbb{Z}}$$ is the ring of the integers) can be easily solved in polynomial time even in the worst case. We propose a new choice of the ring R and subset S that yields generalized compact knapsacks that are seemingly very hard to solve on the average, even for very small values of m. Namely, we prove that for any unbounded function m = ω(1) with arbitrarily slow growth rate, solving our generalized compact knapsack problems on the average is at least as hard as the worst-case instance of various approximation problems over cyclic lattices. Specific worst-case lattice problems considered in this paper are the shortest independent vector problem SIVP and the guaranteed distance decoding problem GDD (a variant of the closest vector problem, CVP) for approximation factors n^{1+ε} almost linear in the dimension of the lattice. Our results yield very efficient and provably secure one-way functions (based on worst-case complexity assumptions) with key size and time complexity almost linear in the security parameter n. Previous constructions with similar security guarantees required quadratic key size and computation time. Our results can also be formulated as a connection between the worst-case and average-case complexity of various lattice problems over cyclic and quasi-cyclic lattices.
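As an illustration of the function family described in the abstract, the sketch below evaluates a toy generalized compact knapsack over the ring R = Z_p[X]/(X^n − 1), the cyclic-lattice instantiation suggested by the paper. The concrete parameters (n, p, m and the small-coefficient set S) are illustrative assumptions, not the paper's recommended choices.

```python
import random

# Toy parameters (assumptions for illustration only): ring dimension, modulus,
# number of weights, and coefficient bound defining the "small" set S.
n, p, m, d = 8, 257, 4, 1

def ring_mul(a, b):
    """Multiply two elements of Z_p[X]/(X^n - 1): cyclic convolution of coefficient lists mod p."""
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % p
    return c

def ring_add(a, b):
    return [(x + y) % p for x, y in zip(a, b)]

def knapsack(weights, xs):
    """f_{a_1,...,a_m}(x_1,...,x_m) = sum_i a_i * x_i, computed in R."""
    acc = [0] * n
    for a_i, x_i in zip(weights, xs):
        acc = ring_add(acc, ring_mul(a_i, x_i))
    return acc

# Random key (the weights a_i) and a random small input from S = {polynomials with coefficients in {0,...,d}}.
weights = [[random.randrange(p) for _ in range(n)] for _ in range(m)]
xs = [[random.randrange(d + 1) for _ in range(n)] for _ in range(m)]
b = knapsack(weights, xs)   # recovering some valid xs from (weights, b) is the average-case problem
```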

253 citations


Journal ArticleDOI
TL;DR: It is shown that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant, and it is demonstrated that NP-hard manipulations may be tractable in the average-case.
Abstract: Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Recently, computational complexity has been suggested as a means of precluding strategic behavior. Previous studies have shown that some voting protocols are hard to manipulate, but used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average-case. For this purpose, we augment the existing theory of average-case complexity with some new concepts. In particular, we consider elections distributed with respect to junta distributions, which concentrate on hard instances. We use our techniques to prove that scoring protocols are susceptible to manipulation by coalitions, when the number of candidates is constant.

183 citations


Proceedings ArticleDOI
07 Sep 2007
TL;DR: The tool, the Trend Profiler (trend-prof), is described, for constructing models of empirical computational complexity that predict how many times each basic block in a program runs as a linear or a power-law function of user-specified features of the program's workloads.
Abstract: The standard language for describing the asymptotic behavior of algorithms is theoretical computational complexity. We propose a method for describing the asymptotic behavior of programs in practice by measuring their empirical computational complexity. Our method involves running a program on workloads spanning several orders of magnitude in size, measuring their performance, and fitting these observations to a model that predicts performance as a function of workload size. Comparing these models to the programmer's expectations or to theoretical asymptotic bounds can reveal performance bugs or confirm that a program's performance scales as expected. Grouping and ranking program locations based on these models focuses attention on scalability-critical code. We describe our tool, the Trend Profiler (trend-prof), for constructing models of empirical computational complexity that predict how many times each basic block in a program runs as a linear (y = a + bx) or a power-law (y = a·x^b) function of user-specified features of the program's workloads. We ran trend-prof on several large programs and report cases where a program scaled as expected, beat its worst-case theoretical complexity bound, or had a performance bug.
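A small sketch of the curve-fitting step this abstract describes: fit a linear model y = a + bx and a power-law model y = a·x^b (via least squares on the log-log data) to per-block execution counts measured at several workload sizes. The sample numbers are made up for illustration; this is not the trend-prof tool itself.

```python
import numpy as np

# Made-up measurements: workload feature x and how many times one basic block ran.
x = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
y = np.array([2.1e3, 6.9e4, 2.2e6, 7.0e7, 2.3e9])

# Linear model y = a + b*x: ordinary least squares on (x, y).
b_lin, a_lin = np.polyfit(x, y, 1)

# Power-law model y = a*x^b: least squares on (log x, log y), since log y = log a + b*log x.
b_pow, log_a = np.polyfit(np.log(x), np.log(y), 1)

print(f"linear fit:    y = {a_lin:.3g} + {b_lin:.3g} * x")
print(f"power-law fit: y = {np.exp(log_a):.3g} * x^{b_pow:.2f}")   # exponent ~1.5 for this data
```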

147 citations


Journal ArticleDOI
TL;DR: Impagliazzo and Wigderson (1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP), but their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness.
Abstract: Impagliazzo and Wigderson (1998) gave the first construction of pseudorandom generators from a uniform complexity assumption on EXP (namely EXP ≠ BPP). Unlike results in the nonuniform setting, their result does not provide a continuous trade-off between worst-case hardness and pseudorandomness, nor does it explicitly establish an average-case hardness result. In this paper:

102 citations


Journal ArticleDOI
TL;DR: It is shown that there is a fixed distribution on instances of NP-complete languages, that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case).
Abstract: We prove that if NP $${\not\subseteq}$$ BPP, i.e., if SAT is worst-case hard, then for every probabilistic polynomial-time algorithm trying to decide SAT, there exists some polynomially samplable distribution that is hard for it. That is, the algorithm often errs on inputs from this distribution. This is the first worst-case to average-case reduction for NP of any kind. We stress, however, that this does not mean that there exists one fixed samplable distribution that is hard for all probabilistic polynomial-time algorithms, which is a prerequisite assumption needed for one-way functions and cryptography (even if not a sufficient assumption). Nevertheless, we do show that there is a fixed distribution on instances of NP-complete languages, that is samplable in quasi-polynomial time and is hard for all probabilistic polynomial-time algorithms (unless NP is easy in the worst case). Our results are based on the following lemma that may be of independent interest: Given the description of an efficient (probabilistic) algorithm that fails to solve SAT in the worst case, we can efficiently generate at most three Boolean formulae (of increasing lengths) such that the algorithm errs on at least one of them.

63 citations


Proceedings ArticleDOI
11 Jun 2007
TL;DR: It follows from the results that this bound on the saving in communication is tight almost always, and this approach gives access to several powerful tools from this area such as normed spaces duality and Grothendieck's inequality.
Abstract: We introduce a new method to derive lower bounds on randomized and quantum communication complexity. Our method is based on factorization norms, a notion from Banach space theory. This approach gives us access to several powerful tools from this area such as normed spaces duality and Grothendieck's inequality. This extends the arsenal of methods for deriving lower bounds in communication complexity. As we show, our method subsumes most of the previously known general approaches to lower bounds on communication complexity. Moreover, we extend all (but one) of these lower bounds to the realm of quantum communication complexity with entanglement. Our results also shed some light on the question of how much communication can be saved by using entanglement. It is known that entanglement can save one of every two qubits, and examples for which this is tight are also known. It follows from our results that this bound on the saving in communication is tight almost always.

57 citations


Book
05 Oct 2007
TL;DR: This article surveys the known lower bounds for the time and space complexity of satisfiability and closely related problems on deterministic, randomized, and quantum models with random access and discusses the state-of-the-art results.
Abstract: Ever since the fundamental work of Cook from 1971, satisfiability has been recognized as a central problem in computational complexity. It is widely believed to be intractable, and yet till recently even a linear-time, logarithmic-space algorithm for satisfiability was not ruled out. In 1997 Fortnow, building on earlier work by Kannan, ruled out such an algorithm. Since then there has been a significant amount of progress giving non-trivial lower bounds on the computational complexity of satisfiability. In this article, we survey the known lower bounds for the time and space complexity of satisfiability and closely related problems on deterministic, randomized, and quantum models with random access. We discuss the state-of-the-art results and present the underlying arguments in a unified framework.

50 citations


Journal ArticleDOI
TL;DR: A new recursive algorithm is presented, which produces the minimal nonlinear feedback shift register of a given binary sequence, and it is shown that the eigenvalue profile of a sequence uniquely determines its nonlinear complexity profile, thus establishing a connection between Lempel-Ziv complexity and nonlinear complexity.
Abstract: The nonlinear complexity of binary sequences and its connections with Lempel-Ziv complexity are studied in this paper. A new recursive algorithm is presented, which produces the minimal nonlinear feedback shift register of a given binary sequence. Moreover, it is shown that the eigenvalue profile of a sequence uniquely determines its nonlinear complexity profile, thus establishing a connection between Lempel-Ziv complexity and nonlinear complexity. Furthermore, a lower bound for the Lempel-Ziv compression ratio of a given sequence is proved that depends on its nonlinear complexity.
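For reference, the Lempel-Ziv complexity mentioned above is the number of phrases in the exhaustive (LZ76) parsing of the sequence; a minimal sketch of that count is below. The paper's recursive algorithm for the minimal nonlinear feedback shift register is not reproduced here.

```python
def lz76_complexity(s: str) -> int:
    """Number of phrases in the Lempel-Ziv (1976) exhaustive history of the string s.

    A phrase keeps growing while the current word already occurs in the prefix that
    precedes its last symbol; when a genuinely new word appears, the phrase is closed.
    """
    i, count, n = 0, 0, len(s)
    while i < n:
        l = 1
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        count += 1
        i += l
    return count

assert lz76_complexity("0001") == 2          # parsed as 0 | 001
assert lz76_complexity("aaaaaaaa") == 2      # parsed as a | aaaaaaa
```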

41 citations


Proceedings ArticleDOI
07 Jul 2007
TL;DR: The results show that while the number of objective function evaluations needed to find a solution is often increased by using m
Abstract: A limited memory version of the covariance matrix adaptation evolution strategy (CMA-ES) is presented. This algorithm, L-CMA-ES, improves the space and time complexity of the CMA-ES algorithm. The L-CMA-ES uses the m eigenvectors and eigenvalues spanning the m-dimensional dominant subspace of the n-dimensional covariance matrix, C, describing the mutation distribution. The algorithm avoids explicit computation and storage of C, resulting in space and time savings. The L-CMA-ES algorithm has a space complexity of O(nm) and a time complexity of O(nm^2). The algorithm is evaluated on a number of standard test functions. The results show that while the number of objective function evaluations needed to find a solution is often increased by using m
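A hedged sketch of the memory-saving idea described above: sample mutations from a low-rank approximation of the covariance, C ≈ V·diag(λ)·Vᵀ + σ_r²·(I − V·Vᵀ), storing only the m dominant eigenpairs (O(nm) memory, O(nm) work per sample). The residual variance σ_r and the way (V, λ) would be updated are assumptions here, not the paper's exact L-CMA-ES update rules.

```python
import numpy as np

def sample_low_rank_mutation(mean, V, lam, sigma_r, rng):
    """Draw x ~ N(mean, V diag(lam) V^T + sigma_r^2 (I - V V^T)) without forming the n x n matrix."""
    n, m = V.shape
    z_sub = rng.standard_normal(m)                       # coordinates in the dominant m-dim subspace
    z_iso = rng.standard_normal(n)                       # isotropic part for the orthogonal complement
    step_sub = V @ (np.sqrt(lam) * z_sub)                # stretched along the stored eigenvectors
    step_iso = sigma_r * (z_iso - V @ (V.T @ z_iso))     # project the isotropic noise off the subspace
    return mean + step_sub + step_iso

rng = np.random.default_rng(0)
n, m = 100, 5
V, _ = np.linalg.qr(rng.standard_normal((n, m)))         # orthonormal basis of a toy dominant subspace
lam = np.array([10.0, 5.0, 2.0, 1.0, 0.5])               # its eigenvalues
x = sample_low_rank_mutation(np.zeros(n), V, lam, sigma_r=0.1, rng=rng)
```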

39 citations


Journal ArticleDOI
TL;DR: Estimation through nondeterministic state complexity is studied and it is shown that the method is very promising for a large class of combined operations.
Abstract: We consider the state complexity of several combined operations. Those results show that the state complexity of a combined operation is in general very different from the composition of the state complexities of the participating individual operations. We also consider general estimation methods for the state complexity of combined operations. In particular, estimation through nondeterministic state complexity is studied. It is shown that the method is very promising for a large class of combined operations.

34 citations


Journal ArticleDOI
Kyungchun Lee, Joohwan Chun
TL;DR: This paper presents an efficient iterative search strategy, which is based on the shortest path algorithm for a graph, and proposes to use scaling, lattice-reduction, and regularization techniques to reduce the complexity of this algorithm.
Abstract: This paper presents a new maximum likelihood (ML) symbol detection algorithm for multiple-input multiple-output (MIMO) systems. To achieve the ML performance with low complexity, we search the integer points corresponding to symbol vectors in increasing order of their distance from the unconstrained least-squares solution. For each integer point, we test whether it is the ML solution, and continue the search until one of the searched points is determined to be the ML solution. We present an efficient iterative search strategy, which is based on the shortest path algorithm for a graph. The simulation results show that the proposed algorithm has lower complexity than sphere decoding for channel matrices having low condition numbers. For further complexity reduction, we propose to use scaling, lattice-reduction, and regularization techniques. By applying these techniques, the computational complexity of the proposed algorithm is reduced significantly when the channel matrix has a high condition number.
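For context, the quantity this search computes is the ML estimate argmin_x ‖y − Hx‖² over a finite symbol constellation. The brute-force reference below makes that criterion concrete; it does not reproduce the paper's iterative shortest-path search, scaling, lattice reduction, or regularization, and it uses a made-up real-valued 4-PAM example.

```python
import itertools
import numpy as np

def ml_detect_bruteforce(H, y, constellation):
    """Exhaustive ML detection: argmin over all symbol vectors x of ||y - H x||^2."""
    n_tx = H.shape[1]
    best, best_cost = None, np.inf
    for cand in itertools.product(constellation, repeat=n_tx):
        x = np.array(cand, dtype=float)
        cost = float(np.linalg.norm(y - H @ x) ** 2)
        if cost < best_cost:
            best, best_cost = x, cost
    return best

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 4))                          # toy real-valued channel matrix
x_true = rng.choice([-3, -1, 1, 3], size=4)              # 4-PAM symbols per antenna
y = H @ x_true + 0.1 * rng.standard_normal(4)            # received vector with noise
x_hat = ml_detect_bruteforce(H, y, [-3, -1, 1, 3])       # cost grows as 4^n_tx, hence smarter searches
```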

Proceedings ArticleDOI
29 Jul 2007
TL;DR: A new randomized algorithm is presented for computing the characteristic polynomial of an n x n matrix over a field, improving by a factor of log n on the worst-case complexity of Keller-Gehrig's algorithm.
Abstract: A new randomized algorithm is presented for computing the characteristic polynomial of an n x n matrix over a field. Over a sufficiently large field the asymptotic expected complexity of the algorithm is O(n^θ) field operations, improving by a factor of log n on the worst case complexity of Keller-Gehrig's algorithm [11].

Proceedings ArticleDOI
21 Oct 2007
TL;DR: This paper shows that the Black-White Pebbling Game is PSPACE-complete, and uses similar ideas in a more complicated reduction to prove the PSPACE-completeness of Resolution space.
Abstract: The complexity of the Black-White Pebbling Game has remained open for 30 years. It was devised to capture the power of non-deterministic space bounded computation. Since then it has been applied to problems in diverse areas of computer science including VLSI design and more recently propositional proof complexity. In this paper we show that the Black-White Pebbling Game is PSPACE-complete. We then use similar ideas in a more complicated reduction to prove the PSPACE-completeness of Resolution space. The reduction also yields a surprising exponential time/space speedup for Resolution in which an increase of 3 units of space results in an exponential decrease in proof-size.

Journal ArticleDOI
TL;DR: The statistical stability properties of p^m-periodic binary sequences are studied in terms of their linear complexity and k-error linear complexity, where p is an odd prime number and 2 is a primitive root modulo p^2.
Abstract: In this correspondence, we study the statistical stability properties of p^m-periodic binary sequences in terms of their linear complexity and k-error linear complexity, where p is an odd prime number and 2 is a primitive root modulo p^2. We show that their linear complexity and k-error linear complexity take a value only from some specific ranges. We then present the minimum value k for which the k-error linear complexity is strictly less than the linear complexity in a new viewpoint different from the approach by Meidl. We also derive the distribution of p^m-periodic binary sequences with specific k-error linear complexity. Finally, we get an explicit formula for the expectation value of the k-error linear complexity and give its lower and upper bounds, when k ≤ ⌊p/2⌋.

Journal ArticleDOI
TL;DR: It is argued that it suffices for an algorithmic time complexity measure to be system invariant rather than system independent (where system independence would mean predicting performance "from the desk").

Journal ArticleDOI
TL;DR: This paper presents two optimization algorithms that solve the optimization problem of jointly selecting the best set of reference frames and their associated transport QoS levels in a multipath streaming setting globally optimally and locally optimally with lower complexity.
Abstract: Recent video coding standards such as H.264 offer the flexibility to select reference frames during motion estimation for predicted frames. In this paper, we study the optimization problem of jointly selecting the best set of reference frames and their associated transport QoS levels in a multipath streaming setting. The application of traditional Lagrangian techniques to this optimization problem suffers from either bounded worst case error but high complexity or low complexity but undetermined worst case error. Instead, we present two optimization algorithms that solve the problem globally optimally with high complexity and locally optimally with lower complexity. We then present rounding methods to further reduce the computation complexity of the second dynamic programming-based algorithm at the expense of degrading solution quality. Results show that our low-complexity dynamic programming algorithm achieves results comparable to the optimal but high-complexity algorithm, and that a gradual tradeoff between complexity and optimization quality can be achieved by our rounding techniques.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the class of functions of a chosen complexity is a differential-algebraic set and a differential polynomial defining the functions of first class is constructed.
Abstract: The definition of analytic complexity of an analytic function of two variables is given. It is proved that the class of functions of a chosen complexity is a differential-algebraic set. A differential polynomial defining the functions of first class is constructed. An algorithm for obtaining relations defining an arbitrary class is described. Examples of functions are given whose order of complexity is equal to zero, one, two, and infinity. It is shown that the formal order of complexity of the Cardano and Ferrari formulas is significantly higher than their analytic complexity. The complexity classes turn out to be invariant with respect to a certain infinite-dimensional transformation pseudogroup. In this connection, we describe the orbits of the action of this pseudogroup in the jets of orders one, two, and three. The notion of complexity order is extended to plane (or “planar”) 3-webs. It is discovered that webs of complexity order one are the hexagonal webs. Some problems are posed.
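The abstract does not spell out the definition of analytic complexity; the recursive formalization below is the commonly used one and is included here as an assumption, for orientation only.

```latex
% Complexity classes of analytic functions of two variables (assumed standard definition):
\begin{align*}
  \mathit{Cl}_0 &= \{\, f : f \text{ is analytic and depends on at most one of the variables} \,\},\\
  \mathit{Cl}_{n+1} &= \{\, f = c\bigl(a(x,y) + b(x,y)\bigr) : a, b \in \mathit{Cl}_n,\; c \text{ analytic in one variable} \,\}.
\end{align*}
% The analytic complexity (order of complexity) of f is the least n with f \in Cl_n,
% or \infty if no such n exists. For example, xy = \exp(\ln x + \ln y) has complexity one.
```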

Journal ArticleDOI
TL;DR: This paper shows a corresponding upper bound for deterministic information complexity and improves known lower bounds for the public coin Las Vegas communication complexity by a constant factor.

Journal ArticleDOI
TL;DR: For 2^n-periodic binary sequences with linear complexity 2^n − 1 and k = 2, 3, the number of sequences with given k-error linear complexity and the expected k-error linear complexity are provided.
Abstract: Linear complexity and k-error linear complexity of a stream cipher are two important measures of the randomness of keystreams. For 2^n-periodic binary sequences with linear complexity 2^n − 1 and k = 2, 3, the number of sequences with given k-error linear complexity and the expected k-error linear complexity are provided. Moreover, the proportion of the sequences whose k-error linear complexity is bigger than the expected value is analyzed.
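As background, the linear complexity of a 2^n-periodic binary sequence (the quantity whose k-error variant is studied above) can be computed in linear time with the standard Games-Chan algorithm; a minimal sketch follows. The paper's counting and expectation results are not reproduced.

```python
def games_chan(period):
    """Linear complexity of the 2^n-periodic binary sequence with the given period (list of 0/1, length 2^n)."""
    s, c = list(period), 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        diff = [l ^ r for l, r in zip(left, right)]
        if any(diff):
            c += half      # halves differ: complexity gains `half`, recurse on their XOR
            s = diff
        else:
            s = left       # halves agree: the complexity is that of the half-length sequence
    return c + s[0]

assert games_chan([1, 0, 0, 0]) == 4      # a single 1 per period of length 4 gives full complexity
assert games_chan([1, 1, 1, 1]) == 1      # the all-ones sequence has linear complexity 1
```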

Book ChapterDOI
09 Jul 2007
TL;DR: It follows from the existential result that any function that is complete for the class of functions with polylogarithmic nondeterministic k-party communication complexity does not have polylogarithmic deterministic complexity.
Abstract: We solve some fundamental problems in the number-on-forehead (NOF) k-party communication model. We show that there exists a function which has at most logarithmic communication complexity for randomized protocols with a one-sided error probability of 1/3 but which has linear communication complexity for deterministic protocols. The result is true for k = n^{O(1)} players, where n is the number of bits on each player's forehead. This separates the analogues of RP and P in the NOF communication model. We also show that there exists a function which has constant randomized complexity for public coin protocols but at least logarithmic complexity for private coin protocols. No larger gap between private and public coin protocols is possible. Our lower bounds are existential and we do not know of any explicit function which allows such separations. However, for the 3-player case we exhibit an explicit function which has Ω(log log n) randomized complexity for private coins but only constant complexity for public coins. It follows from our existential result that any function that is complete for the class of functions with polylogarithmic nondeterministic k-party communication complexity does not have polylogarithmic deterministic complexity. We show that the set intersection function, which is complete in the number-in-hand model, is not complete in the NOF model under cylindrical reductions.

Posted Content
TL;DR: This article is a short introduction to generic case complexity, which is a recently developed way of measuring the difficulty of a computational problem while ignoring atypical behavior on a small set of inputs.
Abstract: This article is a short introduction to generic case complexity, which is a recently developed way of measuring the difficulty of a computational problem while ignoring atypical behavior on a small set of inputs. Generic case complexity applies to both recursively solvable and recursively unsolvable problems.

Book ChapterDOI
22 Feb 2007
TL;DR: In this article, it was shown that if a language has a neutral letter and bounded communication complexity in the k-party game for some fixed k then the language is in fact regular.
Abstract: We study languages with bounded communication complexity in the multiparty "input on the forehead model" with worst-case partition. In the two-party case, languages with bounded complexity are exactly those recognized by programs over commutative monoids [19]. This can be used to show that these languages all lie in shallow ACC^0. In contrast, we use coding techniques to show that there are languages of arbitrarily large circuit complexity which can be recognized in constant communication by k players for k ≥ 3. However, we show that if a language has a neutral letter and bounded communication complexity in the k-party game for some fixed k then the language is in fact regular. We give an algebraic characterization of regular languages with this property. We also prove that a symmetric language has bounded k-party complexity for some fixed k iff it has bounded two-party complexity.

Book ChapterDOI
26 Sep 2007
TL;DR: This paper tackles the issue of program decomposition with respect to quasi-interpretation analysis, using the notion of modularity, and studies the modularity of quasi-interpretations through the notions of constructor-sharing and hierarchical unions of programs.
Abstract: Quasi-interpretation analysis belongs to the field of implicit computational complexity (ICC) and has proved useful for the resource analysis of first-order functional programs, whether terminating or not. In this paper, we tackle the issue of program decomposition with respect to quasi-interpretation analysis. For that purpose, we use the notion of modularity. Firstly, modularity decreases the complexity of the quasi-interpretation search algorithms. Secondly, modularity increases the intensionality of the quasi-interpretation method, that is, the number of captured programs. Finally, we take advantage of modularity conditions to extend quasi-interpretations smoothly to higher-order programs. We study the modularity of quasi-interpretations through the notions of constructor-sharing and hierarchical unions of programs. We show that, in both cases, the existence of quasi-interpretations is no longer a modular property. However, we can still certify the complexity of programs by showing, under some restrictions, that the size of the values computed by a program remains polynomially bounded by the size of the inputs.

Journal ArticleDOI
TL;DR: The combined model checking complexity as well as the data complexity of FLC are EXPTIME-complete, which is already the case for its alternation-free fragment.
Abstract: This paper analyses the complexity of model checking fixpoint logic with Chop (FLC) – an extension of the modal μ-calculus with a sequential composition operator. It uses two known game-based characterisations to derive the following results: the combined model checking complexity as well as the data complexity of FLC are EXPTIME-complete. This is already the case for its alternation-free fragment. The expression complexity of FLC is trivially P-hard and bounded from above by the complexity of solving a parity game, i.e. it lies in UP ∩ co-UP. For any fragment of fixed alternation depth, in particular alternation-free formulas, it is P-complete.

Proceedings Article
03 Dec 2007
TL;DR: It is proved that nearest neighbor clustering is statistically consistent, its worst case complexity is polynomial by construction, and it can be implemented with small average case complexity using branch and bound.
Abstract: Clustering is often formulated as a discrete optimization problem. The objective is to find, among all partitions of the data set, the best one according to some quality measure. However, in the statistical setting where we assume that the finite data set has been sampled from some underlying space, the goal is not to find the best partition of the given sample, but to approximate the true partition of the underlying space. We argue that the discrete optimization approach usually does not achieve this goal. As an alternative, we suggest the paradigm of "nearest neighbor clustering". Instead of selecting the best out of all partitions of the sample, it only considers partitions in some restricted function class. Using tools from statistical learning theory we prove that nearest neighbor clustering is statistically consistent. Moreover, its worst case complexity is polynomial by construction, and it can be implemented with small average case complexity using branch and bound.
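A toy sketch of the "nearest neighbor clustering" idea described above: rather than searching over all partitions of the sample, fix a small set of seed points and consider only partitions obtained by labeling the seeds and giving every point the label of its nearest seed. The restricted search below is done by brute force over seed labelings with a k-means-style objective; the paper's branch-and-bound implementation and its consistency analysis are not reproduced, and the data are made up.

```python
import itertools
import numpy as np

def nearest_neighbor_clustering(X, seeds, K):
    """Best partition (under within-cluster scatter) among those induced by labeling the seeds."""
    # Each sample point is tied, once and for all, to its nearest seed.
    dists = np.linalg.norm(X[:, None, :] - seeds[None, :, :], axis=2)
    nearest_seed = dists.argmin(axis=1)
    best_labels, best_cost = None, np.inf
    for labeling in itertools.product(range(K), repeat=len(seeds)):   # K^m candidate partitions
        labels = np.array(labeling)[nearest_seed]
        cost = 0.0
        for k in range(K):
            pts = X[labels == k]
            if len(pts):
                cost += ((pts - pts.mean(axis=0)) ** 2).sum()          # within-cluster scatter
        if cost < best_cost:
            best_labels, best_cost = labels, cost
    return best_labels

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])   # two made-up blobs
seeds = X[rng.choice(len(X), size=6, replace=False)]                     # m = 6 << n seed points
labels = nearest_neighbor_clustering(X, seeds, K=2)
```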

Book ChapterDOI
18 Dec 2007
TL;DR: It is shown that the Berlekamp-Massey Algorithm, which computes the linear complexity of a sequence, can be adapted to approximate the k-error linear complexity profile for a general sequence over a finite field.
Abstract: Some cryptographic applications use pseudorandom sequences and require that the sequences are secure in the sense that they cannot be recovered by knowing only a small number of consecutive terms. Such sequences should therefore have a large linear complexity and also a large k-error linear complexity. Efficient algorithms for computing the k-error linear complexity of a sequence only exist for sequences of period equal to a power of the characteristic of the field. It is therefore useful to find a general and efficient algorithm to compute a good approximation of the k-error linear complexity. We show that the Berlekamp-Massey Algorithm, which computes the linear complexity of a sequence, can be adapted to approximate the k-error linear complexity profile for a general sequence over a finite field. While the complexity of this algorithm is still exponential, it is considerably more efficient than the exhaustive search.
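For reference, a minimal Berlekamp-Massey implementation over GF(2) is sketched below; it computes only the plain linear complexity that the chapter's approximation algorithm builds on, not the k-error adaptation itself.

```python
def berlekamp_massey_gf2(bits):
    """Linear complexity L of the binary sequence `bits` (list of 0/1), over GF(2)."""
    n = len(bits)
    c = [1] + [0] * n      # current connection polynomial C(x)
    b = [1] + [0] * n      # previous connection polynomial B(x)
    L, m = 0, -1
    for i in range(n):
        # discrepancy d = s_i + c_1 s_{i-1} + ... + c_L s_{i-L}  (mod 2)
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:                                  # adjust C(x) <- C(x) + x^(i-m) B(x)
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# Two periods of a maximal-length LFSR sequence (recurrence s_i = s_{i-2} XOR s_{i-3}, period 7)
# have linear complexity 3, as expected.
assert berlekamp_massey_gf2([1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1]) == 3
```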

Journal ArticleDOI
TL;DR: The complexity of multicriteria scheduling problems in the light of the previous complexity results is reviewed and the aim is often to enumerate the set of the so-called Pareto optima.
Abstract: In this paper we tackle an important point of combinatorial optimisation: that of complexity theory when dealing with the counting or enumeration of optimal solutions. Complexity theory was initially designed for decision problems and has evolved over the years, for instance, to tackle particular features of optimisation problems. It has also evolved, more or less recently, towards the complexity of counting and enumeration problems, and several complexity classes, which we review in this paper, have emerged in the literature. This kind of problem makes sense, notably, in the case of multicriteria optimisation where the aim is often to enumerate the set of the so-called Pareto optima. In the second part of this paper we review the complexity of multicriteria scheduling problems in the light of the previous complexity results.

Journal ArticleDOI
TL;DR: One of Knuth’s well-known results on average-case complexity in replacement (i.e. selection) sort is rejected, thereby challenging the robustness of average-case complexity measures where the response variable is sensitive to ties.

Journal ArticleDOI
TL;DR: In computational theory, time is defined in terms of steps, and steps are defined by the computational process, which allows time to be measured in bits, which in turn allows the definition of various computable complexity measures.
Abstract: In computational theory, time is defined in terms of steps, and steps are defined by the computational process. Because steps can be described, a computation can be recorded as a binary string. This allows time to be measured in bits, which in turn allows the definition of various computable complexity measures that account for the minimal amount of computation required to create an object from primitive beginnings. Three such measures are introduced in this article. They are “transcript depth,” which is closely related to logical and computational depth; “Kd complexity,” which is similar to Levin's Kt complexity; and “minimal history.” The latter two measures are comprehensive in the sense that they characterize all information required for the creation of an object and also all computable internal relationships and redundancies that are present in the object.

Proceedings ArticleDOI
20 Jun 2007
TL;DR: The Linear Separability Test is equivalent to a test that determines if a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets).
Abstract: A geometric and nonparametric procedure for testing if two finite sets of points are linearly separable is proposed. The Linear Separability Test is equivalent to a test that determines if a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks if a strictly positive point exists in a subspace by projecting a strictly positive vector with equal co-ordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points and d is the dimension of the space containing the points). The worst case time complexity of the algorithm is O(nr^3) and the space complexity of the algorithm is O(nd). A small review of some of the prominent algorithms and their time complexities is included. The worst case computational complexity of our algorithm is lower than the worst case computational complexity of Simplex, Perceptron, Support Vector Machine and Convex Hull Algorithms, if d
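As a cross-check on the same question (not the projection-based test of the paper), linear separability of two finite point sets can also be decided by the feasibility of a small linear program: the sets are strictly separable iff some (w, b) satisfies w·x + b ≥ 1 on one set and ≤ −1 on the other. A sketch using scipy follows; the sample points are made up.

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, Y):
    """True iff some hyperplane w.x + b = 0 strictly separates point sets X and Y."""
    d = X.shape[1]
    # Feasibility LP: find (w, b) with  w.x + b >= 1 for x in X  and  w.y + b <= -1 for y in Y.
    A_ub = np.vstack([np.hstack([-X, -np.ones((len(X), 1))]),
                      np.hstack([ Y,  np.ones((len(Y), 1))])])
    b_ub = -np.ones(len(X) + len(Y))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.status == 0          # status 0: feasible (separable), 2: infeasible

X = np.array([[2.0, 2.0], [3.0, 1.0]])
Y = np.array([[-1.0, -1.0], [0.0, -2.0]])
print(linearly_separable(X, Y))     # True for these made-up points
```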