
Showing papers on "Communication complexity" published in 1983


Proceedings ArticleDOI
07 Nov 1983
TL;DR: It is proved that, to compute the majority function of n Boolean variables, the size of any depth-3 monotone circuit must be greater than 2^(n^ε), and the size of any width-2 branching program must grow super-polynomially.
Abstract: The purpose of this paper is to resolve several open problems in the current literature on Boolean circuits, communication complexity, and hashing functions. These lower bound results share the common feature that their proofs utilize probabilistic arguments in an essential way. Specifically, we prove that, to compute the majority function of n Boolean variables, the size of any depth-3 monotone circuit must be greater than 2^(n^ε), and the size of any width-2 branching program must have super-polynomial growth. We also show that, for the problem of deciding whether i ≤ j for two n-bit integers i and j, the probabilistic ε-error one-way communication complexity is of order Θ(n), while the two-way ε-error complexity is O((log n)^2). We also prove that, to compute i·j mod p for an n-bit prime p, the probabilistic ε-error two-way communication complexity is of order Θ(n). Finally, we prove a conjecture of Ullman that uniform hashing is asymptotically optimal in its expected retrieval cost among open-address hashing schemes.

254 citations
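
The O((log n)^2) two-way bound for deciding i ≤ j quoted in the abstract above is the kind of bound achieved by binary-searching for the first bit position where the two inputs differ, using randomized fingerprint equality tests on prefixes. The Python sketch below simulates a protocol in that standard style and counts the bits exchanged; it is an illustrative reconstruction under stated assumptions (the fingerprint modulus range is a choice of this sketch), not the construction from the paper.

```python
import random

def is_prime(m):
    """Trial division; fine for the small moduli used here."""
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def random_prime(lo, hi):
    """Uniformly random prime in [lo, hi)."""
    while True:
        m = random.randrange(lo, hi)
        if is_prime(m):
            return m

def fingerprint(bits, p):
    """Interpret a bit list (MSB first) as an integer, reduced mod p."""
    h = 0
    for b in bits:
        h = (2 * h + b) % p
    return h

def leq_protocol(x_bits, y_bits):
    """Decide x <= y for two n-bit inputs held by different parties.

    Binary search for the longest common prefix; each probe is a
    fingerprint equality test costing O(log n) bits, and the search
    takes O(log n) probes, so O((log n)^2) bits in total.  Returns
    the answer and the approximate number of bits exchanged.
    """
    n = len(x_bits)
    bits_sent = 0
    lo, hi = 0, n                        # prefixes of length lo agree (whp)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        # Alice sends the modulus and her prefix fingerprint; Bob answers
        # with one bit.  A collision fools a probe with probability O(1/n).
        p = random_prime(max(n * n, 8), max(2 * n * n, 16))
        bits_sent += 2 * p.bit_length() + 1
        if fingerprint(x_bits[:mid], p) == fingerprint(y_bits[:mid], p):
            lo = mid
        else:
            hi = mid - 1
    if lo == n:                          # inputs are equal, so x <= y
        return True, bits_sent
    bits_sent += 2                       # exchange the first differing bit
    return x_bits[lo] < y_bits[lo], bits_sent

x = [1, 0, 1, 1, 0, 1, 0, 0]
y = [1, 0, 1, 1, 1, 0, 0, 1]
print(leq_protocol(x, y))                # (True, ...): 10110100 <= 10111001
```

Note the asymmetry that makes the error one-sided per probe: unequal fingerprints certify that the prefixes differ, while equal fingerprints may be a collision, which is why each probe only errs with probability O(1/n).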


01 Jan 1983
TL;DR: This thesis proves strict containment with exponential gaps between deterministic, nondeterministic, random, and bounded- and unbounded-error probabilistic communication complexity classes, and derives bounds on the average information transfer under a uniform distribution of the input data.
Abstract: Information transfer is a basic measure of complexity in distributed computation. In VLSI, communication constraints alone dictate bounds on the performance of chips. In this thesis, we study the complexity of distributed computation under several models, using information transfer as a complexity measure. Besides deterministic protocols, we consider nondeterministic and several probabilistic protocols. We derive optimal characterizations for nondeterministic and probabilistic protocols. For the polynomial time complexity classes, the fundamental problem of proving strict containment among deterministic, nondeterministic and probabilistic computations is wide open. We settle the analogous question for communication complexity classes: we prove strict containment with exponential gaps between deterministic, nondeterministic, random, and bounded- and unbounded-error probabilistic communication complexity classes. We explore connections between 1-way and 2-way communication and strengthen the known gap for deterministic protocols. Using this and our lower bound characterizations, we show exponential gaps between 1-way and 2-way probabilistic communication complexity classes. We also derive bounds on the average information transfer using relationships from classical information theory. Using the von Neumann minimax theorem, these bounds are translated to bounds on Las Vegas computations. We develop two general lower bound techniques to estimate the communication requirements for the distributed computation of a set of Boolean functions. A partial list of problems for which we have shown maximal information transfer regardless of the partitioning of the input data includes integer multiplication, integer division, matrix squaring, matrix inversion, matrix multiplication, the Discrete Fourier Transform, solving a linear system of equations, and computing square roots. The novelty in our approach is that the techniques are simple and can be easily applied to obtain optimal bounds for many problems. Moreover, using one of our lower bound techniques and Shannon's first theorem, we show bounds on the average information transfer under a uniform distribution of the input data. Using these results, we derive bounds on area-time tradeoffs and the chip area required to solve these problems under a variety of VLSI models. Finally, we translate our bounds on information transfer to area-time tradeoffs for probabilistic VLSI chips.

10 citations
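
The thesis's own lower-bound techniques are only stated abstractly in this listing, but a classical example of the genre, the rank lower bound D(f) ≥ log2 rank(M_f) on deterministic communication complexity (usually credited to Mehlhorn and Schmidt from the same period), is easy to experiment with. The sketch below builds the communication matrix of the inner-product function for a small n and computes its exact rank over the rationals; it illustrates the general style of argument, and is not a technique taken from the thesis itself.

```python
from fractions import Fraction
from itertools import product
from math import ceil, log2

def comm_matrix(f, n):
    """2^n x 2^n matrix M[x][y] = f(x, y), with x held by Alice, y by Bob."""
    inputs = list(product((0, 1), repeat=n))
    return [[Fraction(f(x, y)) for y in inputs] for x in inputs]

def rank(M):
    """Exact rank via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                t = M[i][c] / M[r][c]
                M[i] = [a - t * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def inner_product(x, y):
    """IP(x, y) = <x, y> mod 2, a canonical hard function for this bound."""
    return sum(a * b for a, b in zip(x, y)) % 2

n = 4
rk = rank(comm_matrix(inner_product, n))
# Deterministic cc of IP is at least ceil(log2 rank): prints 15 and 4 here,
# i.e. the bound is essentially maximal already at n = 4.
print(rk, ceil(log2(rk)))
```

Because the bound depends only on the matrix of the function, it applies to every partition of the input bits between the two parties, which is exactly the "regardless of the partitioning" flavor the thesis needs for its VLSI applications.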


01 Jan 1983
TL;DR: This paper defines a generalized, two-parameter Kolmogorov complexity of finite strings that not only classifies strings as random or not random, but also measures the amount of randomness detectable in a given time, establishing a direct link between computational complexity and the Kolmogorov complexity of strings.
Abstract: In this paper we define a generalized, two-parameter Kolmogorov complexity of finite strings which measures how much and how fast a string can be compressed, and we show that this string complexity measure is an efficient tool for the study of computational complexity. We investigate the properties of this measure and reformulate some classic computational complexity problems and results in terms of these concepts. The advantage of this approach is that it not only classifies strings as random or not random, but measures the amount of randomness detectable in a given time. This permits the study of how computations change the amount of randomness of finite strings, and thus establishes a direct link between computational complexity and generalized Kolmogorov complexity of strings. This approach gives a new viewpoint on computational complexity theory, yields natural formulations of new problems, and yields new results about the structure of feasible computations.

1 citation
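
"Randomness detectable in a given time" resists direct implementation, since Kolmogorov complexity is uncomputable, but a real compressor gives a crude, computable analogue: the compressed size of a string at a fixed effort level stands in for its complexity under a resource bound. The sketch below is an analogy only, with zlib's effort levels standing in for the time parameter (an assumption of this illustration, not anything from the paper); it contrasts a structured string with a pseudorandom one.

```python
import os
import zlib

def detectable_randomness(s: bytes, level: int) -> float:
    """Compressed size over original size at a fixed compressor effort level.

    A crude, computable stand-in for 'randomness detectable in a given
    time'; true (time-bounded) Kolmogorov complexity is uncomputable.
    """
    return len(zlib.compress(s, level)) / len(s)

structured = b"0123456789" * 100   # highly regular: most 'randomness' removable
random_ish = os.urandom(1000)      # incompressible with overwhelming probability

for level in (1, 9):               # low vs. high effort ~ less vs. more time
    print(level,
          round(detectable_randomness(structured, level), 3),
          round(detectable_randomness(random_ish, level), 3))
```

The structured string compresses to a small fraction of its length at either effort level, while the pseudorandom one stays near ratio 1, mirroring the paper's point that the randomness a bounded process can detect in a string is itself a meaningful complexity measure.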