
Showing papers on "Average-case complexity published in 1995"


Journal ArticleDOI
TL;DR: It is argued that for many problems in this setting, parameterized computational complexity rather than NP-completeness is the appropriate tool for studying apparent intractability and a new result is described for the Longest Common Subsequence problem.
Abstract: Many computational problems in biology involve parameters for which a small range of values covers important applications. We argue that for many problems in this setting, parameterized computational complexity, rather than NP-completeness, is the appropriate tool for studying apparent intractability. At issue in the theory of parameterized complexity is whether a problem can be solved in time O(n^alpha) for each fixed parameter value, where alpha is a constant independent of the parameter. In addition to surveying this complexity framework, we describe a new result for the Longest Common Subsequence problem. In particular, we show that the problem is hard for W[t] for all t when parameterized by the number of strings and the size of the alphabet. Lower bounds on the complexity of this basic combinatorial problem imply lower bounds on more general sequence alignment and consensus discovery problems. We also describe a number of open problems pertaining to the parameterized complexity of problems in computational biology where small parameter values are important.

120 citations
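
As a concrete illustration of the parameterized viewpoint (illustration only, not code from the paper), the textbook dynamic program below computes the Longest Common Subsequence of k strings in roughly O(n^k) time: polynomial for every fixed value of the parameter, but with the parameter in the exponent, which is the behaviour the W[t]-hardness result above suggests cannot be removed.

```python
from functools import lru_cache

def lcs_length(strings):
    """Length of a longest common subsequence of k strings.

    Plain dynamic programming over all tuples of prefix lengths: about
    (n+1)^k states for k strings of length at most n, so the running time
    is polynomial for every fixed k, but the exponent grows with k.
    """
    k = len(strings)

    @lru_cache(maxsize=None)
    def rec(idx):
        # idx[i] = length of the prefix of strings[i] still being matched
        if any(j == 0 for j in idx):
            return 0
        last = [strings[i][idx[i] - 1] for i in range(k)]
        if all(c == last[0] for c in last):
            return 1 + rec(tuple(j - 1 for j in idx))
        # Otherwise drop the final character of one of the prefixes.
        return max(
            rec(tuple(j - (1 if i == pos else 0) for i, j in enumerate(idx)))
            for pos in range(k)
        )

    return rec(tuple(len(s) for s in strings))

if __name__ == "__main__":
    print(lcs_length(["chimpanzee", "champagne", "champion"]))
```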


Journal ArticleDOI
TL;DR: This paper gives an example exhibiting the largest gap known and proves two related theorems about the relationship between the communication complexity of a boolean function and the rank of the associated matrix.
Abstract: This paper concerns the open problem of Lovász and Saks regarding the relationship between the communication complexity of a boolean function and the rank of the associated matrix. We first give an example exhibiting the largest gap known. We then prove two related theorems.

118 citations


Journal ArticleDOI
TL;DR: An approach to separating NC^1 from P is outlined, and it is shown that the approach provides a new proof of the separation of monotone NC^1 from monotone P.
Abstract: Is it easier to solve two communication problems together than separately? This question is related to the complexity of the composition of boolean functions. Based on this relationship, an approach to separating NC^1 from P is outlined. Furthermore, it is shown that the approach provides a new proof of the separation of monotone NC^1 from monotone P.

108 citations
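
One way to make the composition approach concrete (an informal restatement in standard notation, not taken verbatim from the paper): for boolean functions $f:\{0,1\}^m \to \{0,1\}$ and $g:\{0,1\}^n \to \{0,1\}$, the composition acts on $m$ blocks of $n$ bits,

$$(f \circ g)(x_1,\ldots,x_m) \;=\; f\bigl(g(x_1),\ldots,g(x_m)\bigr), \qquad x_i \in \{0,1\}^n.$$

Writing $\mathrm{D}(h)$ for the minimum depth of a circuit computing $h$ (by the Karchmer-Wigderson correspondence, the communication complexity of the associated two-player relation), the approach rests on the conjecture that solving the two communication problems together is essentially no easier than solving them separately, i.e. $\mathrm{D}(f \circ g) \approx \mathrm{D}(f) + \mathrm{D}(g)$. Iterating the composition would then yield a polynomial-time computable function of super-logarithmic depth, separating NC^1 from P; the new proof of the monotone separation follows this route in the monotone setting.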


Proceedings ArticleDOI
29 May 1995
TL;DR: This work proves several separations which show that in a generic relativized world, the search classes are distinct and there is a standard search problem in each of them that is not computationally equivalent to any decision problem.
Abstract: Papadimitriou introduced several classes of NP search problems based on combinatorial principles which guarantee the existence of solutions to the problems. Many interesting search problems not known to be solvable in polynomial time are contained in these classes, and a number of them are complete problems. We consider the question of the relative complexity of these search problem classes. We prove several separations which show that in a generic relativized world, the search classes are distinct and there is a standard search problem in each of them that is not computationally equivalent to any decision problem. (Naturally, absolute separations would imply that P ≠ NP.) Our separation proofs have interesting combinatorial content and go to the heart of the combinatorial principles on which the classes are based. We derive one result via new lower bounds on the degrees of polynomials asserted to exist by Hilbert's Nullstellensatz over finite fields.

77 citations


Journal ArticleDOI
Ran Raz1
TL;DR: Fourier analysis is used to get general lower bounds for the probabilistic communication complexity of large classes of functions using an inequality by Kahn, Kalai, and Linial derived from two lemmas of Beckner.
Abstract: We use Fourier analysis to get general lower bounds for the probabilistic communication complexity of large classes of functions. We give some examples showing how to use our method in some known cases and for some new functions. Our main tool is an inequality by Kahn, Kalai, and Linial, derived from two lemmas of Beckner.

67 citations


Journal ArticleDOI
TL;DR: The main result regarding the first attempt is negative: one cannot use this method for proving superpolynomial lower bounds for formula size. The main result regarding the second attempt is a "direct-sum" theorem for two-round communication complexity.
Abstract: It is possible to view communication complexity as the minimum solution of an integer programming problem. This integer programming problem is relaxed to a linear programming problem and from it information regarding the original communication complexity question is deduced. A particularly appealing avenue this opens is the possibility of proving lower bounds on the communication complexity (which is a minimization problem) by exhibiting upper bounds on the maximization problem defined by the dual of the linear program. This approach works very neatly in the case of nondeterministic communication complexity. In this case a special case of Lovász's fractional cover measure is obtained. Through it the amortized nondeterministic communication complexity is completely characterized. The power of the approach is also illustrated by proving lower and upper bounds on the nondeterministic communication complexity of various functions. In the case of deterministic complexity the situation is more complicated. Two attempts are discussed and some results using each of them are obtained. The main result regarding the first attempt is negative: one cannot use this method for proving superpolynomial lower bounds for formula size. The main result regarding the second attempt is a "direct-sum" theorem for two-round communication complexity.

65 citations
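
To make the linear-programming view concrete, here is one standard way to write the relaxation in the nondeterministic case (standard notation; the paper's exact formulation may differ). Covering the 1-inputs of $f$ by 1-monochromatic rectangles is an integer program, and its relaxation is Lovász's fractional cover number

$$\rho^*(f) \;=\; \min\Bigl\{\sum_{R} w_R \;:\; \sum_{R \ni (x,y)} w_R \ge 1 \ \text{ for all } (x,y)\in f^{-1}(1),\ \ w_R \ge 0\Bigr\},$$

where $R$ ranges over the 1-monochromatic rectangles of $f$. By LP duality, any feasible solution of the dual packing program with value $v$ (a weighting of the 1-inputs under which every 1-rectangle carries total weight at most 1) shows $\rho^*(f) \ge v$, and $\log_2 \rho^*(f)$ lower-bounds the nondeterministic communication complexity up to small additive terms; it is this quantity that exactly characterizes the amortized nondeterministic complexity.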


Journal ArticleDOI
TL;DR: The existence of a non-constant gap between the communication complexity of a function and the logarithm of the rank of its input matrix is shown, and an Ω(n log log n) lower bound for the graph connectivity problem in the non-deterministic case is proved.
Abstract: We show the existence of a non-constant gap between the communication complexity of a function and the logarithm of the rank of its input matrix. We consider the following problem: each of two players gets a perfect matching between two n-element sets of vertices. Their goal is to decide whether or not the union of the two matchings forms a Hamiltonian cycle. Our result also supplies a superpolynomial gap between the chromatic number of a graph and the rank of its adjacency matrix. Another conclusion from the second result is an Ω(n log log n) lower bound for the graph connectivity problem in the non-deterministic case. We make use of the theory of group representations for the first result. The second result is proved by an information-theoretic argument.

57 citations


Journal ArticleDOI
TL;DR: This work presents the first algebraic problem complete for the average case under a natural probability distribution: given a unimodular matrix $X$ of integers, a set $S$ of linear transformations of such unimodular matrices, and a natural number $n$, decide whether there is a product of $\leq n$ members of $S$ that takes $X$ to the identity matrix.
Abstract: In the theory of worst case complexity, NP completeness is used to establish that, for all practical purposes, the given NP problem is not decidable in polynomial time. In the theory of average case complexity, average case completeness is supposed to play the role of NP completeness. However, the average case reduction theory is still at an early stage, and only a few average case complete problems are known. We present the first algebraic problem complete for the average case under a natural probability distribution. The problem is this: Given a unimodular matrix $X$ of integers, a set $S$ of linear transformations of such unimodular matrices and a natural number $n$, decide if there is a product of $\leq n$ (not necessarily different) members of $S$ that takes $X$ to the identity matrix.

45 citations
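
A brute-force decision procedure makes the problem statement concrete (illustration only: it is exponential in $n$, and for simplicity we assume that each member of $S$ acts by left multiplication by a fixed integer matrix, which is one natural reading of "linear transformations" here; the paper's point is the average-case hardness of the decision problem, not any particular procedure).

```python
def mat_mul(a, b):
    """Multiply two square integer matrices given as tuples of tuples."""
    size = len(a)
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(size)) for j in range(size))
        for i in range(size)
    )

def reaches_identity(x, transforms, n):
    """Is there a product of at most n members of `transforms` taking x to the identity?

    Breadth-first enumeration of all products of length 1..n applied to x.
    """
    size = len(x)
    identity = tuple(tuple(1 if i == j else 0 for j in range(size)) for i in range(size))
    if x == identity:
        return True
    frontier = {x}
    for _ in range(n):
        frontier = {mat_mul(t, m) for t in transforms for m in frontier}
        if identity in frontier:
            return True
    return False

if __name__ == "__main__":
    # Toy instance: X is unimodular, S holds the elementary shears and their inverses.
    X = ((1, 3), (0, 1))
    S = [((1, 1), (0, 1)), ((1, -1), (0, 1)), ((1, 0), (1, 1)), ((1, 0), (-1, 1))]
    print(reaches_identity(X, S, 3))  # True: applying ((1,-1),(0,1)) three times works
```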


Journal ArticleDOI
TL;DR: Four notions of polynomial-time computable sets in R$^2$ are introduced and their relationships are studied.
Abstract: The computational complexity of bounded sets of the two-dimensional plane is studied in the discrete computational model. We introduce four notions of polynomial-time computable sets in R$^2$ and study their relationship. The computational complexity of the winding number problem, membership problem, distance problem, and area problem is characterized by the relations between discrete complexity classes of the NP theory.

42 citations


Journal ArticleDOI
TL;DR: The distributed approximating functional-path integral is formulated as an iterated sequence of $d$-dimensional integrals, based on deterministic "low-discrepancy sequences," as opposed to products of one-dimensional quadratures or basis functions.
Abstract: The distributed approximating functional-path integral is formulated as an iterated sequence of $d$-dimensional integrals, where $d$ is the intrinsic number of degrees of freedom for the system under consideration. This is made practical for larger values of $d$ by evaluating these integrals using average-case complexity integration techniques, based on deterministic "low-discrepancy sequences," as opposed to products of one-dimensional quadratures or basis functions. The integration converges as $(\log P)^{d-1}/P$, where $P$ is the number of sample points used, and the dimensionality of the integral does not increase with the number of time slices required.

29 citations
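
The "average-case complexity integration" step can be pictured with a few lines of quasi-Monte Carlo code (a generic Halton-sequence sketch with function names of our own choosing; it is not the paper's distributed-approximating-functional machinery).

```python
import math

def halton(index, base):
    """index-th element (1-based) of the van der Corput sequence in the given base."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def qmc_integrate(f, dim, num_points):
    """Quasi-Monte Carlo estimate of the integral of f over [0,1]^dim using a
    Halton low-discrepancy point set.  For well-behaved integrands the error
    decays roughly like (log P)^(dim-1)/P in the number of points P, the rate
    quoted in the abstract, versus P^(-1/2) for plain Monte Carlo sampling."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # bases for dim <= 10
    total = 0.0
    for i in range(1, num_points + 1):
        point = [halton(i, primes[d]) for d in range(dim)]
        total += f(point)
    return total / num_points

if __name__ == "__main__":
    # 6-dimensional test integrand whose exact integral over the unit cube is 1.
    g = lambda x: math.prod((math.pi / 2) * math.sin(math.pi * t) for t in x)
    for P in (1_000, 10_000, 100_000):
        print(P, qmc_integrate(g, 6, P))
```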


Journal ArticleDOI
TL;DR: This work proves that the average-case complexity of computing an ε-approximation is of order log log(1/ε), and that a hybrid secant-bisection method with a suitable adaptive stopping rule is almost optimal.
Abstract: We present an average-case complexity analysis for the zerofinding problem for functions from $C^r([0,1])$, $r \geq 2$, which change sign at the endpoints. This class of functions is equipped with a conditional r-fold Wiener measure. We prove that the average-case complexity of computing an ε-approximation is of order log log(1/ε), and that a hybrid secant-bisection method with a suitable adaptive stopping rule is almost optimal. This method uses only function evaluations. We stress that the adaptive stopping rule is crucial. If one uses a nonadaptive stopping rule, then the cost has to be of order log(1/ε). Hence, the adaptive stopping rule is exponentially more powerful than arbitrary nonadaptive stopping rules. Our algorithm is a slightly simplified version of the hybrid methods proposed by Dekker in 1969 and by Bus and Dekker in 1975. These algorithms are still considered the best algorithms for zerofinding by Kahaner, Moler, and Nash in their book on numerical methods.
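
For intuition, here is a minimal hybrid secant-bisection routine in the spirit of the Dekker-style methods mentioned above. It is a sketch only: it uses a naive interval-width stopping rule, whereas the point of the paper is that a suitable adaptive stopping rule (not reproduced here) lowers the average-case cost from order log(1/ε) to order log log(1/ε).

```python
def hybrid_secant_bisection(f, a, b, tol=1e-12, max_iter=200):
    """Approximate a zero of f on [a, b], where f(a) and f(b) have opposite signs.

    Try a secant step through the bracket endpoints; fall back to bisection
    whenever that step leaves the bracket or fails to shrink it by at least half.
    """
    fa, fb = f(a), f(b)
    if fa == 0.0:
        return a
    if fb == 0.0:
        return b
    if fa * fb > 0.0:
        raise ValueError("f(a) and f(b) must have opposite signs")

    for _ in range(max_iter):
        if b - a <= tol:
            break
        width = b - a
        # Secant step through the current bracket endpoints.
        x = 0.5 * (a + b) if fb == fa else b - fb * (b - a) / (fb - fa)
        if not (a < x < b):
            x = 0.5 * (a + b)
        fx = f(x)
        if fx == 0.0:
            return x
        if fa * fx < 0.0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        # Guarantee progress: if the bracket barely shrank, bisect once as well.
        if b - a > 0.5 * width:
            m = 0.5 * (a + b)
            fm = f(m)
            if fm == 0.0:
                return m
            if fa * fm < 0.0:
                b, fb = m, fm
            else:
                a, fa = m, fm
    return 0.5 * (a + b)

if __name__ == "__main__":
    import math
    r = hybrid_secant_bisection(math.cos, 1.0, 2.0)
    print(r, math.cos(r))  # r should be close to pi/2
```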

Proceedings ArticleDOI
22 Oct 1995
TL;DR: The authors establish that the answer to the question of whether there is a general algorithm that is polynomial in all input parameters except k is probably "no"; they give an overview of the theory of parameterized computational complexity and derive their main result.
Abstract: A series of previous PSPACE- and NP-hardness results suggests that no general algorithm for robot motion planning can be polynomial in all of its input parameters, i.e., at least one parameter x must be exponential relative to a constant, e.g., 2^x, or to another parameter of the problem, e.g., y^x. However, these results have not answered the more relevant question posed by some FP space-based algorithms, namely, whether there is a general algorithm that is polynomial in all input parameters except k, in which k may yet be exponential relative to a constant or itself. In this paper, using the theory of parameterized computational complexity developed by Downey and Fellows (1992), the authors establish that the answer to this question is probably "no". The authors give an overview of this theory and derive their main result. Finally, they briefly discuss the implications for robotics of both these results and the parameterized complexity framework.

Proceedings ArticleDOI
01 Jan 1995
TL;DR: This merged algorithm, its error analysis, and software simulation results are presented; with this structure, the shifter size is reduced to 1/2 (1 + 9/(n+1)).
Abstract: The COordinate Rotation DIgital Computer (CORDIC) algorithm is an iterative procedure to evaluate various elementary functions. It usually consists of one scaling multiplication and n+1 elementary shift-add iterations in an n-bit processor. These iterations can be paired off to form double iterations that lower the hardware complexity while the computational complexity stays the same. With this structure, the shifter size is reduced to 1/2 (1 + 9/(n+1)). In this paper, we present this merged algorithm, its error analysis, and software simulation results.
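
For readers unfamiliar with CORDIC, the textbook single-iteration form below shows the shift-add structure the abstract is counting (a floating-point sketch for illustration; the paper's contribution is the merged double-iteration variant and its error analysis, neither of which is reproduced here).

```python
import math

def cordic_sin_cos(angle, n=32):
    """Approximate (sin(angle), cos(angle)) with the circular CORDIC recurrence:
    n shift-add iterations followed by one scaling multiplication.
    `angle` must lie in CORDIC's convergence range (about |angle| <= 1.74 rad)."""
    atans = [math.atan(2.0 ** -i) for i in range(n)]
    scale = 1.0
    for i in range(n):
        scale /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, angle
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return y * scale, x * scale                # (sin, cos) after the scaling step

if __name__ == "__main__":
    s, c = cordic_sin_cos(0.6)
    print(s, math.sin(0.6))
    print(c, math.cos(0.6))
```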

Proceedings ArticleDOI
19 Jun 1995
TL;DR: This work investigates the relationship between the complexity of NP decision problems and that of NP optimization problems under polynomial-time computable distributions, and it is shown that the difference between P^NP_tt-samplable and P^NP-samplable distributions is crucial.
Abstract: For the worst-case complexity measure, if P=NP, then P=OptP, i.e., all NP optimization problems are polynomial-time solvable. On the other hand, it is not clear whether a similar relation holds when considering average-case complexity. We investigate the relationship between the complexity of NP decision problems and that of NP optimization problems under polynomial-time computable distributions, and study what makes them (seemingly) different. It is shown that the difference between P^NP_tt-samplable and P^NP-samplable distributions is crucial.

Proceedings ArticleDOI
05 Nov 1995
TL;DR: The author presents a new path consistency algorithm, PC-5, which has O(n^3 a^2) space complexity while retaining the worst-case time complexity of PC-4, and which exhibits a much better average-case time complexity.
Abstract: One of the main factors limiting the use of path consistency algorithms in real-life applications is their high space complexity. C. Han and C. Lee (1988) presented a path consistency algorithm, PC-4, with O(n^3 a^3) space complexity, which makes it practicable only for small problems. The author presents a new path consistency algorithm, PC-5, which has O(n^3 a^2) space complexity while retaining the worst-case time complexity of PC-4. Moreover, the new algorithm exhibits a much better average-case time complexity. The new algorithm is based on the idea (due to C. Bessiere (1994)) that, at any time, only a minimal amount of support has to be found and recorded for a labeling to establish its viability; one has to look for new support only if the current support is eliminated. The author also shows that PC-5 can be improved further to yield an algorithm, PC5++, with even better average-case performance and the same space complexity.
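
The single-support idea can be sketched generically (a stripped-down illustration of the bookkeeping, not PC-5 itself): store one current support per labeling together with the position where the search for it stopped, and resume from that position when the support is deleted. This is sound because deletions are monotone, so a candidate that failed once can never become a support again, and each candidate is therefore inspected at most once overall.

```python
class SingleSupport:
    """One labeling's support bookkeeping: O(1) state instead of a full support list."""

    def __init__(self, candidates, is_support):
        self.candidates = candidates   # potential supports, scanned in a fixed order
        self.is_support = is_support   # predicate that can only flip from True to False
        self.pos = -1                  # index of the current support (-1 = none found yet)
        self.alive = True

    def find_next_support(self):
        """Resume the scan after the last position; return a support or None."""
        i = self.pos + 1
        while i < len(self.candidates):
            if self.is_support(self.candidates[i]):
                self.pos = i
                return self.candidates[i]
            i += 1
        self.alive = False
        return None

if __name__ == "__main__":
    domain = {1, 2, 3, 4}
    s = SingleSupport([1, 2, 3, 4], lambda v: v in domain)
    print(s.find_next_support())   # 1
    domain.discard(1)              # the current support disappears ...
    print(s.find_next_support())   # ... and the scan resumes at 2, not from scratch
```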

Proceedings ArticleDOI
19 Jun 1995
TL;DR: Some of the more recent advances on selected topics in structural complexity theory are highlighted, including polynomial-size circuit complexity, membership comparability, approximability, selectivity, and cheatability.
Abstract: Over a decade ago, U. Schöning introduced the concept of lowness into structural complexity theory. Since then a large body of results has been obtained classifying various complexity classes according to their lowness properties. In this paper we highlight some of the more recent advances on selected topics in the area. Among the lowness properties we consider are polynomial-size circuit complexity, membership comparability, approximability, selectivity, and cheatability. Furthermore, we review some of the recent results concerning lowness for counting classes.

Journal ArticleDOI
TL;DR: The goal is to prove statements of the kind: "Given two complexity classes C and D, C = D if and only if for every sparse set S, C^S = D^S."

Book ChapterDOI
01 Jan 1995
TL;DR: The complexity theory of counting contrasts intriguingly with that of existence or optimization.
Abstract: The complexity theory of counting contrasts intriguingly with that of existence or optimization.

Journal ArticleDOI
Rainer Schuler1
TL;DR: In this paper, it is shown that P is a proper subset of P_P-comp, and it can be shown that P_P-comp is properly contained in E. As a further property, it is shown that P_P-comp is different from NP.

Journal ArticleDOI
TL;DR: This is a complete exposition of a tight version of a fundamental theorem of computational complexity due to Levin: the inherent space complexity of any partial function is very accurately specifiable in a $\Pi_1$ way, and every such specification does characterize the complexity of some partial function, even one that assumes only the values 0 and 1.

Journal Article
TL;DR: This paper analyzes the complexity of finding an alphabet indexing, shows that the problem is NP-complete, gives a local search algorithm for the problem, and shows a PLS-completeness result.

Proceedings ArticleDOI
17 Mar 1995
TL;DR: This paper presents a new perspective on complexity metrics and their uses during the testing phase of software development, applicable to any procedural language.
Abstract: This paper deals with a new perspective on complexity metrics and their uses during the testing phase of software development. In keeping with traditional viewpoints, the complexity of a code fragment is assumed to be correlated with its error-proneness or its maintenance difficulty, so that the higher the complexity, the more likely the existence of errors. While the theories advocated here are applicable to any procedural language, the prototype version currently under development is based on Ada.

Proceedings ArticleDOI
01 Dec 1995
TL;DR: This paper presents a new algorithm of worst-case time (and space) complexity O(n log n), where n is the total number of realizations for the basic blocks, regardless of whether the slicing is balanced or not, and proves that Ω(n log n) is a lower bound on the time complexity of any area minimization algorithm.
Abstract: The traditional algorithm of L. Stockmeyer (1983) for area minimization of slicing floorplans has time (and space) complexity O(n^2) in the worst case, or O(n log n) for balanced slicing. For more than a decade, it has been considered the best possible. In this paper, we present a new algorithm of worst-case time (and space) complexity O(n log n), where n is the total number of realizations for the basic blocks, regardless of whether the slicing is balanced or not. We also prove that Ω(n log n) is a lower bound on the time complexity of any area minimization algorithm. Therefore, the new algorithm not only finds the optimal realization, but also has an optimal running time.
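
To fix intuition about what the algorithm manipulates, here is the naive merge of two children's realization lists across a vertical cut (the names and the deliberately quadratic merge are ours for illustration; the paper's O(n log n) data structures are not reproduced here).

```python
def combine_vertical(left, right):
    """Combine two realization lists across a vertical cut.

    Each list holds non-dominated (width, height) pairs.  For a vertical cut,
    widths add and heights take the maximum; dominated shapes are then pruned.
    """
    candidates = [(wl + wr, max(hl, hr)) for wl, hl in left for wr, hr in right]
    candidates.sort()                     # by width, then by height
    pruned, best_height = [], float("inf")
    for w, h in candidates:
        if h < best_height:               # keep only non-dominated pairs
            pruned.append((w, h))
            best_height = h
    return pruned

if __name__ == "__main__":
    a = [(1, 4), (2, 2), (4, 1)]          # realizations of block A
    b = [(1, 3), (3, 1)]                  # realizations of block B
    print(combine_vertical(a, b))         # [(2, 4), (3, 3), (5, 2), (7, 1)]
```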

Journal ArticleDOI
TL;DR: In this paper, a new approach to quantum dynamics is presented which addresses the fundamental difficulty of exponential growth of computational complexity with the dimensionality of the system. A general recursive polynomial treatment of the time-independent full Green operator, the average-case complexity approach to multi-dimensional integration, and the continuous distributed approximating functional representation of the Hamiltonian are the three ingredients of the approach.

Journal ArticleDOI
TL;DR: The 3-Satisfiability problem is analyzed and the existence of fast decision procedures for this problem over the reals is examined based on certain conditions on the discrete setting.
Abstract: Relations between discrete and continuous complexity models are considered. The present paper is devoted to combining both models. In particular we analyze the 3-Satisfiability problem. The existence of fast decision procedures for this problem over the reals is examined based on certain conditions on the discrete setting. Moreover we study the behaviour of exponential time computations over the reals depending on the real complexity of 3-Satisfiability. This will be done using tools from complexity theory over the integers.

Journal Article
TL;DR: This work presents a hierarchy theorem for average-case complexity, for arbitrary time-bounds, that is as tight as the well-known Hartmanis-Stearns [HS65] hierarchy theorem, and demonstrates that the definition is natural and as justified for arbitrary time-bounds as Levin's definition is for polynomial time-bounds.
Abstract: We extend Levin's theory of average polynomial time to arbitrary time-bounds in accordance with the following general principles: (1) It essentially agrees with Levin's notion when applied to polynomial time-bounds. (2) If a language L belongs to DTIME(T(n)), for some time-bound T(n), then every distributional problem (L, μ) is T on the average. (3) If L does not belong to DTIME(T(n)) almost everywhere, then no distributional problem (L, μ) is T on the average. We present a hierarchy theorem for average-case complexity, for arbitrary time-bounds, that is as tight as the well-known Hartmanis-Stearns [HS65] hierarchy theorem for deterministic complexity. As a consequence, for every time-bound T(n), there are distributional problems (L, μ) that can be solved using only a slight increase in time but that cannot be solved on the average in time T(n). We demonstrate that our definition is natural and is as justified for arbitrary time-bounds as is Levin's definition for polynomial time-bounds. We critique an earlier proposal of a definition of average-case complexity for arbitrary time-bounds [BDCGL92] by demonstrating that it does not satisfy our general principles. Nevertheless, we obtain a fine hierarchy, for the earlier definition, for distributional problems whose running time is bounded by a polynomial. Our proofs use techniques of convexity, Hölder's inequality, and properties of Hardy's class of logarithmico-exponential functions.
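
For reference, Levin's polynomial-on-average condition, which principles (1)-(3) above generalize (standard notation, not necessarily the paper's): an algorithm with running time $t$ is polynomial on average with respect to a distribution with probability function $\mu'$ if there is an $\varepsilon > 0$ such that

$$\sum_{x} \mu'(x)\,\frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty .$$

Roughly speaking, the extension to a general time-bound $T$ replaces the $\varepsilon$-th power applied to $t(x)$ (the inverse of a fixed polynomial) by the inverse function $T^{-1}(t(x))$, so that the polynomial case is recovered as principle (1) requires; the precise formulation is the subject of the paper.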

Journal ArticleDOI
TL;DR: 'Natural' distributions for the satisfiability problem (SAT) of propositional logic are investigated, using concepts introduced by [25, 19, 1], and evidence is provided that (at least polynomial-time, no-error) randomized reductions are appropriate in average-case complexity.
Abstract: We investigate in this paper 'natural' distributions for the satisfiability problem (SAT) of propositional logic, using concepts introduced by [25, 19, 1] to study the average-case complexity of NP-complete problems. Gurevich showed that a problem with a flat distribution is not DistNP-complete (for deterministic reductions), unless DEXPTime = NEXPTime. We express the known results concerning fixed-size and fixed-density distributions for CNF in the framework of average-case complexity and show that all these distributions are flat. We introduce the family of symmetric distributions, which generalizes those mentioned before, and show that bounded symmetric distributions on ordered tuples of clauses (CNFTuples) and on k-CNF (sets of k-literal clauses) are flat. This eliminates all these distributions as candidates for 'provably hard' (i.e. DistNP-complete) distributions for SAT, if one considers only deterministic reductions. Given the (presumed) naturalness and generality of these distributions, this result supports evidence that (at least polynomial-time, no-error [38, 19]) randomized reductions are appropriate in average-case complexity. We also observe that there are nonflat distributions for which SAT is polynomial on the average, but that this is due to the particular choice of the size functions. Finally, Chvátal and Szemerédi ([8]) have shown that for certain fixed-size distributions (which are also flat) resolution is exponential for almost all instances. We use this to show that every resolution algorithm will need at least exp(n^ε) time on the average (for any 0 < ε < 1). In other words, resolution-based algorithms will not establish that SAT, with these distributions, is in AverP.
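
For readers unfamiliar with the term, the flatness condition used throughout is, in its usual form (the paper may state it with minor variations): a distribution with probability function $\mu'$ is flat if there is an $\varepsilon > 0$ such that

$$\mu'(x) \;\le\; 2^{-|x|^{\varepsilon}} \qquad \text{for all } x,$$

i.e., no single instance carries more than an exponentially small amount of probability mass. Uniform-style distributions over the instances of a given size are typically flat, which is why the completeness obstacle above applies to them.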

01 Jan 1995
TL;DR: In this paper, a theoretical framework for analyzing average-case time and storage complexity of ray tracing acceleration techniques is introduced by means of homogeneous spatial Poisson point processes, and as a demonstrative example of its application, the expected query time of the widely known technique based on a regular spatial grid is analyzed.
Abstract: A theoretical framework for analyzing average-case time and storage complexity of ray tracing acceleration techniques is introduced by means of homogeneous spatial Poisson point processes. Then, as a demonstrative example of its application, the expected query time of the widely known technique based on a regular spatial grid is analyzed. Finally, an interpretation of the results is presented within the context of probability theory.

Journal ArticleDOI
TL;DR: This paper characterizes the well-known computational complexity classes of the polynomial time hierarchy as classes of provably recursive functions of some second order theories with weak comprehension axiom schemas but without any induction schemas, and finds a natural relationship between these theories and the theories of bounded arithmetic S_2.
Abstract: In this paper we characterize the well-known computational complexity classes of the polynomial time hierarchy as classes of provably recursive functions (with graphs of suitable bounded complexity) of some second order theories with weak comprehension axiom schemas but without any induction schemas (Theorem 6). We also find a natural relationship between our theories and the theories of bounded arithmetic (Lemmas 4 and 5). Our proofs use a technique which enables us to “speed up” induction without increasing the bounded complexity of the induction formulas. This technique is also used to obtain an interpretability result for the theories of bounded arithmetic (Theorem 4).

Book ChapterDOI
02 Mar 1995
TL;DR: This work studies a variation on classical key-agreement and consensus problems in which the key space S is the range of a random variable that can be sampled, and shows agreement possible with zero communication if every fully polynomial-time randomized approximation scheme (fpras) has a certain symmetry-breaking property.
Abstract: We study a variation on classical key-agreement and consensus problems in which the key space S is the range of a random variable that can be sampled. We give tight upper and lower bounds of [log_2 k] bits on the communication complexity of agreement on some key in S, using a form of Sperner's Lemma, and give bounds on other problems. In the case where keys are generated by a probabilistic polynomial-time Turing machine, we show agreement possible with zero communication if every fully polynomial-time randomized approximation scheme (fpras) has a certain symmetry-breaking property.