
Showing papers on "Average-case complexity" published in 1986


Proceedings ArticleDOI
27 Oct 1986
TL;DR: The main objective is to exploit the analogy between Turing machine (TM) and communication complexity (CC) classes to provide a more amicable environment for the study of questions analogous to the most notorious problems in TM complexity.
Abstract: We take a complexity-theoretic view of A. C. Yao's theory of communication complexity. A rich structure of natural complexity classes is introduced. Besides providing a more structured approach to the complexity of a variety of concrete problems of interest to VLSI, the main objective is to exploit the analogy between Turing machine (TM) and communication complexity (CC) classes. The latter provide a more amicable environment for the study of questions analogous to the most notorious problems in TM complexity. Implicitly, CC classes corresponding to P, NP, coNP, BPP and PP have previously been considered. Surprisingly, Pcc = NPcc ∩ coNPcc is known [AUY]. We develop the definitions of PSPACEcc and of the polynomial-time hierarchy in CC. Notions of reducibility are introduced and a natural complete member in each class is found. BPPcc ⊆ Σ2cc ∩ Π2cc [Si2] remains valid. We settle the question of whether NPcc ⊆ BPPcc by proving an Ω(√n) lower bound for the bounded-error complexity of the coNPcc-complete problem "disjointness". Similar lower bounds follow for essentially any nontrivial monotone graph property. Another consequence is that the deterministically exponentially hard "equality" relation is not NPcc-hard with respect to oracle-protocol reductions. We prove that the distributional complexity of the disjointness problem is O(√n log n) under any product measure on {0, 1}n × {0, 1}n. This points to the difficulty of improving the Ω(√n) lower bound for the BPPcc complexity of "disjointness". The variety of counting and probabilistic classes appears to be greater than in the Turing machine versions. Many of the simplest graph problems (undirected reachability, planarity, bipartiteness, 2-CNF-satisfiability) turn out to be PSPACEcc-hard. The main open problem remains the separation of the hierarchy, more specifically the conjecture that Σ2cc ≠ Π2cc. Another major problem is to show that PSPACEcc and the probabilistic class UPPcc are not comparable.
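The abstract contrasts "disjointness", which stays hard even for randomized protocols, with "equality", which is deterministically exponentially hard (roughly n bits must be exchanged on n-bit inputs) yet easy with shared randomness. The sketch below is the textbook public-coin protocol for equality, given purely to illustrate that gap; it is not taken from the paper.

```python
import random

def inner_product_bit(x, r):
    """Parity of the bitwise AND of x and r, i.e. <x, r> mod 2."""
    return sum(xi & ri for xi, ri in zip(x, r)) % 2

def randomized_equality(x, y, rounds=20):
    """Public-coin protocol for EQUALITY on n-bit inputs x (Alice) and y (Bob).

    Each round the players draw a shared random string r; Alice sends the
    single bit <x, r> mod 2 and Bob compares it with <y, r> mod 2.  If x != y,
    a round exposes the difference with probability exactly 1/2, so after k
    rounds the error probability is 2**-k while only k bits have been sent."""
    n = len(x)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]   # the shared public coins
        if inner_product_bit(x, r) != inner_product_bit(y, r):
            return False            # provably unequal
    return True                     # equal, except with probability <= 2**-rounds
```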

387 citations



Book ChapterDOI
Eric Allender
11 Jun 1986
TL;DR: The complexity of sparse sets in P is shown to be central to certain questions about circuit complexity classes and about one-way functions.
Abstract: P-printable sets, defined in [HY-84], arise naturally in the study of P-uniform circuit complexity, generalized Kolmogorov complexity, and data compression, as well as in many other areas. We present new characterizations of the P-printable sets and present necessary and sufficient conditions for the existence of sparse sets in P which are not P-printable. The complexity of sparse sets in P is shown to be central to certain questions about circuit complexity classes and about one-way functions. Among the main results are:
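As an illustration of the underlying definition (an example of ours, not one from the paper): a set S is P-printable if some polynomial-time machine, given 1^n, outputs every member of S of length at most n; such a set is necessarily sparse. The all-zero strings form a trivially P-printable sparse set.

```python
def print_members(n):
    """List all members of S = { 0^k : k >= 0 } of length at most n.

    The list has only n + 1 entries and is produced in time polynomial in n,
    which is exactly what P-printability of S requires."""
    return ["0" * k for k in range(n + 1)]

print_members(4)   # ['', '0', '00', '000', '0000']
```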

91 citations


Proceedings Article
01 Jun 1986

66 citations


Journal ArticleDOI
TL;DR: In this paper, quantities are discussed which can serve as measures of the complexity of patterns arising in dynamical systems; some of the most interesting patterns have zero randomness but infinite complexity in the present sense.
Abstract: In an increasing number of simple dynamical systems, patterns arise which are judged as “complex” in some naive sense. In this talk, quantities are discussed which can serve as measures of this complexity. They are measure-theoretic constructs. In contrast to the Kolmogorov complexity, they are small both for completely ordered and for completely random patterns. Some of the most interesting patterns have indeed zero randomness but infinite complexity in the present sense.
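The abstract does not spell the quantities out, but one measure in this measure-theoretic spirit is built from the convergence of block entropies of a symbol sequence (often called effective measure complexity or excess entropy). The sketch below is only a rough empirical estimator of that kind of quantity, under our own choice of formula; it is not code from the paper.

```python
import math
from collections import Counter

def block_entropy(seq, L):
    """Empirical Shannon entropy (in bits) of the length-L blocks of seq."""
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    total = len(blocks)
    return -sum(c / total * math.log2(c / total) for c in Counter(blocks).values())

def complexity_estimate(seq, max_L=6):
    """Sum of (h_L - h) over block lengths L, with h_L = H(L) - H(L-1) and h a
    crude entropy-rate estimate.  Ideally this is zero both for a completely
    ordered sequence and for a completely random (i.i.d.) one, unlike
    Kolmogorov complexity, which is maximal in the random case."""
    H = [0.0] + [block_entropy(seq, L) for L in range(1, max_L + 1)]
    h_L = [H[L] - H[L - 1] for L in range(1, max_L + 1)]
    h = h_L[-1]                      # entropy rate estimated from the longest blocks
    return sum(hl - h for hl in h_L)
```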

43 citations


Book ChapterDOI
15 Jul 1986
TL;DR: A definition of computability and complexity of real functions and real numbers is given which is open to methods of recursive function theory as well as to methods of numerical analysis.
Abstract: In this paper a definition of computability and complexity of real functions and real numbers is given which is open to methods of recursive function theory as well as to methods of numerical analysis. As an example of application we study the computational complexity of roots and thereby establish a subpolynomial hierarchy of real closed fields.
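The paper's root-finding results are not reproduced here; the sketch below only illustrates the standard setting in which such complexity is measured: an algorithm must deliver the answer to within 2^-n, and its cost is studied as a function of n. Bisection, in exact dyadic arithmetic, gives the obvious baseline of one function evaluation per bit of precision.

```python
from fractions import Fraction

def root_to_precision(f, a, b, n):
    """Approximate a root of f on [a, b] to within 2**-n by bisection.

    Assumes f(a) <= 0 <= f(b).  Fractions keep the arithmetic exact, so the
    only error is the interval width; the number of evaluations of f grows
    linearly in the requested precision n."""
    a, b = Fraction(a), Fraction(b)
    target = Fraction(1, 2 ** n)
    while b - a > target:
        m = (a + b) / 2
        if f(m) <= 0:
            a = m
        else:
            b = m
    return a

# Example: sqrt(2) as the root of x^2 - 2 on [1, 2], to 30 binary digits.
approx = root_to_precision(lambda x: x * x - 2, 1, 2, 30)
```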

31 citations


Journal ArticleDOI
TL;DR: A hierarchy is established with respect to this complexity measure for both deterministic and nondeterministic models, and a hierarchy for k-way communication complexity is established as well.

16 citations


Journal ArticleDOI
Ker-I Ko
TL;DR: A theory of approximation to measurable sets and measurable functions is developed, based on the concepts of recursion theory and discrete complexity theory; the resulting computational complexity may be viewed as a formulation of the average-case complexity of real functions, in contrast to the more restrictive worst-case complexity.

13 citations


Journal ArticleDOI
TL;DR: In this paper the theory of slice functions is extended, and for each Boolean function a monotone representation is presented whose monotone complexity is at most a factor of n larger than the function's circuit complexity.
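The paper's extension of slice-function theory is not reproduced here; the sketch below only illustrates the standard notions presumably involved: the k-th slice of a Boolean function, monotone threshold functions, and the identity that on the k-slice a negated variable equals a threshold of the remaining variables, which is why negations add essentially no power for slice functions.

```python
from itertools import product

def threshold(bits, k):
    """Monotone threshold function T_k: 1 iff at least k of the inputs are 1."""
    return int(sum(bits) >= k)

def k_slice(f, k):
    """The k-th slice of a Boolean function f: agrees with f on inputs with
    exactly k ones, is constantly 0 below the slice and constantly 1 above it."""
    def g(x):
        w = sum(x)
        return 0 if w < k else 1 if w > k else f(x)
    return g

def negation_via_threshold(n, k):
    """Check: on every input with exactly k ones, NOT x_i equals T_k applied to
    the other n-1 variables, so a monotone circuit can simulate negations there."""
    for x in product((0, 1), repeat=n):
        if sum(x) != k:
            continue
        for i in range(n):
            if (1 - x[i]) != threshold(x[:i] + x[i + 1:], k):
                return False
    return True

assert negation_via_threshold(5, 2)
```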

11 citations


Book ChapterDOI
01 Jan 1986
TL;DR: A number of different Simplex-type algorithms have been investigated to establish their status with respect to computational complexity, either in the worst case or on average.
Abstract: The well-known fact that the Simplex algorithm in its simplest form requires exponential time in the worst case was established in 1972 by Klee and Minty. Since then, a number of different Simplex-type algorithms have been investigated to establish their status with respect to computational complexity, either in the worst case or on average.
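Klee and Minty's example is usually presented as a slightly deformed unit cube; the formulation and constant below are one common variant and not necessarily the one used in this chapter. The point is only that the feasible region has 2^n vertices, all of which the Simplex method with Dantzig's pivot rule can be made to visit while maximizing x_n.

```python
def klee_minty_vertices(n, eps=0.25):
    """Vertices of a Klee-Minty 'perturbed cube' in R^n, defined by
    0 <= x_1 <= 1 and eps*x_{i-1} <= x_i <= 1 - eps*x_{i-1} for i = 2..n,
    with objective 'maximize x_n'.  Choosing the lower or the upper bound for
    each coordinate in turn yields a distinct vertex, so there are 2**n."""
    vertices = [[]]
    for i in range(n):
        extended = []
        for v in vertices:
            lo = 0.0 if i == 0 else eps * v[-1]
            hi = 1.0 if i == 0 else 1.0 - eps * v[-1]
            extended.append(v + [lo])
            extended.append(v + [hi])
        vertices = extended
    return vertices

assert len(klee_minty_vertices(3)) == 2 ** 3   # exponentially many vertices
```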

1 citation


Book ChapterDOI
17 Sep 1986
TL;DR: It is proved that, for a fixed number p of processing elements (PEs), the time complexity of a parallel partitioned algorithm is minimal if either all p PEs or only one PE is used for executing each operation on data blocks.
Abstract: A general concept for the description of partitioned algorithms is presented. It is based on partitioning the occurring data into data blocks of equal size. For a class of partitioned algorithms, including matrix multiplication, LU decomposition of a matrix, and the solution of a linear system of equations, it is proved that, using a fixed number p of processing elements (PEs), the time complexity of a parallel partitioned algorithm is minimal if either all p PEs or only one PE is used for executing each operation on data blocks.
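As a concrete illustration of "operations on data blocks" (our example in the spirit of the abstract, not the chapter's formal model), blocked matrix multiplication partitions its matrices into equal-size blocks and performs the whole computation as block-level multiply-adds; a parallel partitioned version would assign these block operations to the p PEs.

```python
import numpy as np

def blocked_matmul(A, B, block=2):
    """Product of two n-by-n matrices computed block-wise: both operands are
    partitioned into square data blocks, and every step is a multiply-add of
    whole blocks (one 'operation on data blocks')."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                C[i:i+block, j:j+block] += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
assert np.allclose(blocked_matmul(A, B), A @ B)
```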

Book ChapterDOI
17 Sep 1986
TL;DR: A new parallel direct algorithm for solving general linear systems of equations, requiring fewer computations than the classical Jordan algorithm, is derived, together with two related algorithms for first-order linear recurrence problems and for tridiagonal systems.
Abstract: A new parallel direct algorithm for solving general linear systems of equations ist proposed in this paper For sparse systems our algorithm requires less computations than the classical Jordan algorithm Particularly we have also derived two related algorithms for linear recurrence problems of order 1 and tridiagonal systems Each of the two algorithms has the same computational complexity as that of the corresponding recursive doubling algorithm or Even/Odd elimination algorithm, but requires half of the processors required by the corresponding algorithm