
Showing papers on "Communication complexity published in 1988"


Journal ArticleDOI
TL;DR: A new model for weak random physical sources is presented that strictly generalizes previous models and provides a fruitful viewpoint on previously studied problems, such as extracting almost-perfect bits from sources of weak randomness.
Abstract: A new model for weak random physical sources is presented. The new model strictly generalizes previous models (e.g., the Santha and Vazirani model [27]). The sources considered output strings according to probability distributions in which no single string is too probable. The new model provides a fruitful viewpoint on problems studied previously, such as:
• Extracting almost-perfect bits from sources of weak randomness. The question of possibility as well as the question of efficiency of such extraction schemes are addressed.
• Probabilistic communication complexity. It is shown that most functions have linear communication complexity in a very strong probabilistic sense.
• Robustness of BPP with respect to sources of weak randomness (generalizing a result of Vazirani and Vazirani [32], [33]).
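For readers who want the condition "no single string is too probable" in symbols, here is a minimal formalization in modern min-entropy notation (the notation and parameterization are ours, not necessarily the paper's):

```latex
% A source X on n-bit strings is "weak" with parameter k if no outcome is
% too probable, i.e. X has min-entropy at least k (notation ours).
\[
  \Pr[X = x] \;\le\; 2^{-k} \quad \text{for every } x \in \{0,1\}^{n},
  \qquad\text{equivalently}\qquad
  H_{\infty}(X) \;\ge\; k .
\]
```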

537 citations


Proceedings ArticleDOI
24 Oct 1988
TL;DR: A general framework for the study of a broad class of communication problems is developed, building on a recent analysis of the communication complexity of graph connectivity; the approach makes use of combinatorial lattice theory.
Abstract: A general framework for the study of a broad class of communication problems is developed. It is based on a recent analysis of the communication complexity of graph connectivity. The approach makes use of combinatorial lattice theory.

135 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: The bounds imply improved lower bounds for the VLSI complexity of these decision problems and sharp bounds for a generalized decision tree model which is related to the notion of evasiveness.
Abstract: We prove Θ(n log n) bounds for the deterministic 2-way communication complexity of the graph properties CONNECTIVITY, s-t-CONNECTIVITY and BIPARTITENESS (for arbitrary partitions of the variables into two sets of equal size). The proofs are based on combinatorial results of Dowling-Wilson and Lovász-Saks about partition matrices using the Möbius function, and the Regularity Lemma of Szemerédi. The bounds imply improved lower bounds for the VLSI complexity of these decision problems and sharp bounds for a generalized decision tree model which is related to the notion of evasiveness.
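The interesting direction of the Θ(n log n) bound is the lower bound proved in the paper; for CONNECTIVITY, a matching O(n log n) upper bound follows from a simple folklore protocol in which one party sends a spanning forest of the edges it holds. A sketch of that upper bound (the function names and bit-accounting below are ours, and only CONNECTIVITY is handled):

```python
import math

def spanning_forest(n, edges):
    """Return a spanning forest (at most n-1 edges) of the given edge set,
    using union-find with path halving."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    forest = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            forest.append((u, v))
    return forest

def connectivity_protocol(n, edges_alice, edges_bob):
    """Alice sends a spanning forest of her edges: at most
    (n-1) * 2 * ceil(log2 n) bits.  Bob merges it with his own edges and
    decides whether the union graph is connected."""
    message = spanning_forest(n, edges_alice)            # what Alice transmits
    bits_sent = len(message) * 2 * math.ceil(math.log2(n))
    combined = spanning_forest(n, message + list(edges_bob))
    return len(combined) == n - 1, bits_sent

# Example: Alice holds {(0,1),(1,2)}, Bob holds {(2,3)}; the union connects 4 nodes.
print(connectivity_protocol(4, [(0, 1), (1, 2)], [(2, 3)]))   # (True, 8)
```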

74 citations


Journal ArticleDOI
01 Nov 1988
TL;DR: In this paper, the authors present algorithms for transposing a matrix embedded in a Boolean n-cube by a binary encoding, a binary-reflected Gray code encoding of rows and columns, or combinations of these two encodings.
Abstract: In a multiprocessor with distributed storage the data structures have a significant impact on the communication complexity. In this paper we present a few algorithms for performing matrix transposition on a Boolean n-cube. One algorithm performs the transpose in a time proportional to the lower bound both with respect to communication start-ups and to element transfer times. We present algorithms for transposing a matrix embedded in the cube by a binary encoding, a binary-reflected Gray code encoding of rows and columns, or combinations of these two encodings. The transposition of a matrix when several matrix elements are assigned to each node by consecutive or cyclic partitioning is also considered, and algorithms matching the lower bounds are given. Experimental data are provided for the Intel iPSC and the Connection Machine.
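As a pointer to what the two encodings look like, here is a small sketch of the binary-reflected Gray code mapping of row/column indices to hypercube node addresses (the function names are ours; the transpose algorithms themselves are machine-specific and are not reproduced here):

```python
def gray(i):
    """Binary-reflected Gray code of i: consecutive indices map to
    hypercube addresses that differ in exactly one bit."""
    return i ^ (i >> 1)

def gray_inverse(g):
    """Invert the binary-reflected Gray code."""
    i = 0
    while g:
        i ^= g
        g >>= 1
    return i

def node_address(row, col, k, use_gray=True):
    """Place element (row, col) of a 2^k x 2^k matrix on a Boolean 2k-cube:
    the node address concatenates the encoded row and column.  Transposition
    then amounts to exchanging the two halves of the address, i.e. to
    communication across a fixed set of cube dimensions."""
    enc = gray if use_gray else (lambda x: x)
    return (enc(row) << k) | enc(col)
```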

73 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: Several end-to-end communication protocols for dynamically changing networks, in which the sender and the receiver are not forever separated, are presented; their space complexity at each node is independent of both the input length and the network size.
Abstract: This paper addresses the problem of end-to-end communication over a dynamically changing network in which the sender and the receiver are not forever separated. We present several end-to-end communication protocols whose space complexity at each node is independent of both the input length and the network size. Although the time complexity of these protocols is bounded, their communication complexity is either unbounded, or exponential if an acyclic orientation of the network is given. To bound the communication complexity of the protocols in the absence of an acyclic orientation, we assume either knowledge of the total number of nodes in the network, or that nodes have unique ids. These bounded-communication-complexity protocols thus require O(log n) space per incident link at each node.

67 citations


Journal ArticleDOI
TL;DR: A Petri net graph model of Ada rendezvous is used to introduce a rendezvous graph, an abstraction that can be useful in viewing and computing effective communication complexity.
Abstract: Using Ada as a representative distributed programming language, the author discusses some ideas on complexity metrics that focus on Ada tasking and rendezvous. Concurrently active rendezvous are claimed to be an important aspect of communication complexity. A Petri net graph model of Ada rendezvous is used to introduce a rendezvous graph, an abstraction that can be useful in viewing and computing effective communication complexity.

41 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: The communication complexity of discrete functions under different modes of computation is compared, unifying and extending several known models, and an exponential gap between deterministic k-round and probabilistic (k - 1)-round communication with fixed error probability is obtained.
Abstract: We compare the communication complexity of discrete functions under different modes of computation, unifying and extending several known models. Protocols can be deterministic, nondeterministic or probabilistic, and in the last case the error probability may vary. On the other hand, communication can be 1-way, 2-way or, as an intermediate case, restricted to a fixed number k > 1 of rounds. The following main results are obtained. A square gap between deterministic and nondeterministic communication complexity is shown for a specific function, which is the maximal possible. This improves the results of [MS 82] and [AUY 83]. For probabilistic 1- and 2-way protocols we prove linear lower bounds for functions that satisfy certain independence conditions, extending the results of [Y 79] and [Y 83]. Further, with more technical effort, an exponential gap between deterministic k-round and probabilistic (k - 1)-round communication with fixed error probability is obtained. This generalizes the main result of [DGS 84]. In contrast, for arbitrary error probabilities less than 1/2 there is no difference between the complexity of 1- and 2-way protocols, extending results of [PS 84]. Finally, we consider communication with fixed message length and uniform probability distributions, and give simulations of arbitrary protocols by such uniform ones with little overhead.
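For context on why a square gap between deterministic and nondeterministic complexity is the largest possible, the standard relation from [AUY 83] can be stated as follows (our notation):

```latex
% D(f): deterministic complexity; N^0(f), N^1(f): nondeterministic
% complexity of verifying f(x,y)=0 and f(x,y)=1 respectively.
\[
  D(f) \;=\; O\bigl(N^{0}(f)\cdot N^{1}(f)\bigr)
       \;\le\; O\bigl(N(f)^{2}\bigr),
  \qquad N(f) = \max\{N^{0}(f),\,N^{1}(f)\}.
\]
```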

38 citations


01 Jan 1988
TL;DR: A general framework for the study of a broad class of communication problems which has several interesting special cases including the graph connectivity problem is developed, based on combinatorial lattice theory.
Abstract: In a recent paper, Hajnal, Maass and Turán analyzed the communication complexity of graph connectivity. Building on this work, we develop a general framework for the study of a broad class of communication problems which has several interesting special cases including the graph connectivity problem. The approach is based on combinatorial lattice theory.

27 citations


Journal ArticleDOI
TL;DR: A straight-line-topology local area network to which a number of nodes are connected either in series or in parallel is considered, and algorithms optimal for all networks and files are presented.
Abstract: A straight-line-topology local area network (LAN) to which a number of nodes are connected either in series or in parallel is considered. A file F is arbitrarily partitioned among these sites. The problem studied is that of rearranging the records of the file such that the keys of records at lower-ranking sites are all smaller than those at higher-ranking sites. Lower bounds on the worst-case communication complexity are given for both the series and parallel arrangements, and algorithms optimal for all networks and files are presented.

26 citations


Book
01 Nov 1988

25 citations


Proceedings ArticleDOI
24 Oct 1988
TL;DR: A framework is introduced that provides a unified way for proving correctness as well as analyzing performance of a class of communication protocols called (asynchronous) reset protocols, which are algorithmic transformers, converting protocols working in a static asynchronous network into protocols working in a dynamic asynchronous network.
Abstract: A framework is introduced that provides a unified way for proving correctness as well as analyzing performance of a class of communication protocols called (asynchronous) reset protocols. They are algorithmic transformers, converting protocols working in a static asynchronous network into protocols working in a dynamic asynchronous network. The design of reset protocols is a classical problem in communication networking, renowned for its complexity. A paradigm is developed that gives fresh insight into this complicated problem. This additional insight leads to the development of reset protocols with complexities bounded by the communication complexity of the original protocol.

Proceedings Article
01 Jan 1988
TL;DR: In this paper, the authors introduce a new framework for proving correctness and analyzing performance of (asynchronous) reset protocols, called algorithmic transformers, which can convert protocols working in a static asynchronous network into protocols that work in a dynamic asynchronous network.
Abstract: This paper introduces a new framework which provides a unified way for proving correctness as well as analyzing performance of a certain, quite important, class of communication protocols, called (asynchronous) Reset Protocols. These are algorithmic transformers, converting protocols working in a static asynchronous network into protocols working in a dynamic asynchronous network. The design of Reset protocols is a classical problem in communication networking, and is renowned for its complexity. This paper develops a new paradigm, which gives new insight into this complicated problem. This additional insight enables the development of new Reset protocols whose complexities are bounded by the communication complexity of the original protocol.

Journal ArticleDOI
TL;DR: It is shown that O(d(log Δ + log d)) and O(log Δ + log d) communication activities suffice on the average for point-to-point and shout-echo networks, respectively, improving the existing bounds.

Proceedings ArticleDOI
14 Jun 1988
TL;DR: The nonuniformity of communication protocols is used to show that the Boolean communication hierarchy does not collapse, and some proper inclusions are shown.
Abstract: The complexity of communication between two processors is studied in terms of complexity classes. Previously published results showing analogies between Turing machine classes and the corresponding communication complexity classes are extended, and some proper inclusions are shown. The nonuniformity of communication protocols is used to show that the Boolean communication hierarchy does not collapse. For completeness, an overview of communication complexity classes is added, with proofs of some properties already observed by other authors.
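For orientation, the communication-complexity analogues of Turing machine classes are usually defined in the style of Babai, Frankl and Simon; a minimal version in our notation (the paper's exact definitions may differ in details such as the treatment of uniformity):

```latex
% For f : {0,1}^n x {0,1}^n -> {0,1}, with D(f) and N(f) the deterministic
% and nondeterministic communication complexities (definitions ours):
\[
  f \in \mathrm{P}^{cc}  \iff D(f) \le (\log n)^{O(1)},
  \qquad
  f \in \mathrm{NP}^{cc} \iff N(f) \le (\log n)^{O(1)}.
\]
```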

Journal ArticleDOI
01 Apr 1988
TL;DR: The hierarchy of S-communication complexity is established, a relation between determinism and nondeterminism similar to that for communication complexity is proved, and new Ω(n²) lower bounds for language recognition, in terms of the AT² measure of VLSI circuits, are obtained.
Abstract: In this paper a formal definition of S-communication complexity based on the idea of Aho, Ullman and Yannakakis [On notions of information transfer in VLSI circuits, Proc. 14th Ann. ACM STOC (1983) 133–139] is given, and its properties are compared with the original communication complexity. The basic advantages of S-communication complexity presented here are the following two: (1) S-communication complexity provides the strongest lower bound Ω(n²) on AT² of VLSI circuits in most cases in which the communication complexity grants only constant lower bounds on AT²; (2) proving lower bounds for S-communication complexity is technically not as hard as obtaining lower bounds for communication complexity. Further, the hierarchy of S-communication complexity is established, and a relation between determinism and nondeterminism similar to that for communication complexity is proved. Using the S-communication complexity, new Ω(n²) lower bounds for language recognition on AT² of VLSI circuits are obtained. The hardness of algorithmically determining the S-communication complexity of a given Boolean formula, and other properties of S-communication complexity, are studied.
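The route from communication bounds to AT² bounds used in such results is the standard VLSI argument (our paraphrase, not the paper's exact formulation): if every nearly balanced partition of the input bits forces c(f) bits of communication across the cut, then any chip of area A computing f in time T satisfies

```latex
\[
  A\,T^{2} \;=\; \Omega\!\bigl(c(f)^{2}\bigr),
\]
```

so a linear communication bound c(f) = Ω(n) yields AT² = Ω(n²).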


DOI
01 Jan 1988
TL;DR: A unified collection of formal models of distributed computation on asynchronous rings is developed which captures the essential characteristics of a spectrum of distributed algorithms--those that are error free, and those that err with small probability (Monte Carlo and nondeterministic/probabilistic).
Abstract: The communication complexity of fundamental problems in distributed computing on an asynchronous ring is examined from both the algorithmic and lower bound perspective. A detailed study is made of the effect on complexity of a number of assumptions about the algorithms. Randomization is shown to influence both the computability and complexity of several problems. Communication complexity is also shown to exhibit varying degrees of sensitivity to additional parameters including admissibility of error, kinds of error, knowledge of ring size, termination requirements, and the existence of identifiers. A unified collection of formal models of distributed computation on asynchronous rings is developed which captures the essential characteristics of a spectrum of distributed algorithms: those that are error free (deterministic, Las Vegas, and nondeterministic), and those that err with small probability (Monte Carlo and nondeterministic/probabilistic). The nondeterministic and nondeterministic/probabilistic models are introduced as natural generalizations of the Las Vegas and Monte Carlo models respectively, and prove useful in deriving lower bounds. The unification helps to clarify the essential differences between the progressively more general notions of a distributed algorithm. In addition, the models reveal the sensitivity of various problems to the parameters listed above. Complexity bounds derived using these models typically vary depending on the type of algorithm being investigated. The lower bounds are complemented by algorithms with matching complexity, while frequently the lower bounds hold on even more powerful models than those required by the algorithms. Among the algorithms and lower bounds presented are two specific results which stand out because of their relative significance. (1) If g is any nonconstant cyclic function of n variables, then any nondeterministic algorithm for computing g on an anonymous ring of size n has complexity Ω(n√(log n)) bits of communication; and there is a nonconstant cyclic Boolean function f such that f can be computed by a Las Vegas algorithm in O(n√(log n)) expected bits of communication on a ring of size n. (2) The expected complexity of computing AND (and a number of other natural functions) on a ring of fixed size n in the Monte Carlo model is Θ(n·min{log n, log log(1/ε)}) messages and bits, where ε is the allowable probability of error.

Book ChapterDOI
14 Nov 1988
TL;DR: It is shown that only the regular languages belong to the same levels of both hierarchies, which implies that VLSI circuits need Θ(n) area and Θ(n²) area·(time)² complexity to recognize deterministic context-free languages.
Abstract: The Chomsky hierarchy is compared with the hierarchy of communication complexity for VLSI. It is shown that only the regular languages belong to the same levels of both hierarchies. There are languages that are hard in the Chomsky hierarchy but belong to the lowest level of the communication complexity hierarchy. On the other hand, there is a deterministic linear language that requires the highest (linear) communication complexity. This is the main result, because it implies that VLSI circuits need Θ(n) area and Θ(n²) area·(time)² complexity to recognize deterministic context-free languages, which solves an open problem of Hromkovic [7].

Proceedings Article
11 Jul 1988
TL;DR: It is shown that the tradeoff between communication delay and computation time given by Papadimitriou and Ullman for a diamond dag can be achieved for essentially two values of the computation time.
Abstract: We propose a model for the concurrent-read exclusive-write PRAM that captures its communication and computational requirements. For this model, we present several results, including the following. Two n × n matrices can be multiplied in O(n³/p) computation time and O(n²/p^(2/3)) communication delay using p processors (for p …). Given a binary tree τ with n leaves and height h, let D_opt(τ) denote the minimum communication delay needed to compute τ. It is shown that Ω(log n) ≤ D_opt(τ) ≤ O(√n) and Ω(√h) ≤ D_opt(τ) ≤ O(h), all bounds being the best possible. We also present a simple polynomial algorithm that generates a schedule for computing τ with at most 2·D_opt(τ) delay. It is shown that the tradeoff between communication delay and computation time given by Papadimitriou and Ullman for a diamond dag can be achieved for essentially two values of the computation time. We also present DAGs that exhibit proper tradeoffs for a substantial range of time.
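The communication term O(n²/p^(2/3)) in the matrix multiplication result matches the usual 3D-blocking intuition; a back-of-the-envelope version of that count (our sketch, not necessarily the paper's proof):

```latex
% Split the n x n x n index cube of C = A * B into p subcubes of side
% n / p^{1/3}; each processor performs n^3 / p multiplications and must
% receive the operand entries lying on the faces of its subcube:
\[
  3\left(\frac{n}{p^{1/3}}\right)^{2}
  \;=\; O\!\left(\frac{n^{2}}{p^{2/3}}\right)
  \quad \text{values communicated per processor.}
\]
```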

01 Jun 1988
TL;DR: These analyses show that the parallel algorithms for performing orthogonal decomposition of dense and sparse square or rectangular matrices have lower synchronization cost or lower communication cost than other known schemes.
Abstract: In this thesis we propose a number of new parallel algorithms for performing orthogonal decomposition of dense and sparse square or rectangular matrices. Our target machines are shared-memory multiprocessors and local-memory hypercube multiprocessors. For dense matrices, we propose an algorithm for shared-memory multiprocessors, and several algorithms for the hypercube multiprocessors. For sparse matrices, the algorithm we propose is specific to hypercubes. The paradigms we use in developing these parallel algorithms include divide-and-conquer, changing the order of computation, asynchronous computation and redundant computation. The algorithms designed for the hypercubes take further advantage of various topological properties of the network. We provide arithmetic and communication complexity analyses or implementations for each algorithm to indicate their expected performance. In particular, our analyses show that the parallel algorithms we propose for QR decomposition of dense (square or rectangular) matrices have lower synchronization cost or lower communication cost than other known schemes. These results are supported by numerical experiments.

Proceedings ArticleDOI
01 Jan 1988
TL;DR: This paper explores how the one-bit translation of unbounded-message algorithms can be sped up by pipelining, considering two problems: routing between two processors in arbitrary and in special networks (ring, grid, hypercube), and coloring a synchronous ring with three colors.
Abstract: Many algorithms in distributed systems assume that the size of a single message depends on the number of processors. In this paper, we assume in contrast that messages consist of a single bit. Our main goal is to explore how the one-bit translation of unbounded message algorithms can be sped up by pipelining. We consider two problems. The first is routing between two processors in an arbitrary network and in some special networks (ring, grid, hypercube). The second problem is coloring a synchronous ring with three colors. The routing problem is a very basic subroutine in many distributed algorithms; the three coloring problem demonstrates that pipelining is not always useful.
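The basic gain that pipelining buys for the routing subroutine can be seen with a textbook calculation (ours, not a bound claimed by the paper): to move an L-bit message along a path of d unit-capacity links, forwarding one bit per link per step, one needs

```latex
\[
  \underbrace{d \cdot L}_{\text{store-and-forward, whole message at a time}}
  \quad\text{versus}\quad
  \underbrace{d + L - 1}_{\text{bit-pipelined}}
  \quad\text{steps.}
\]
```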

Proceedings ArticleDOI
07 Dec 1988
TL;DR: The author has shown that a recently introduced method for asynchronous simulation with rollback contains the Bellman-Ford algorithm as a special case, and he has deduced that the rollback method also has exponential communication complexity.
Abstract: Summary form only given. The author has studied an asynchronous version of the Bellman-Ford algorithm for computing the shortest distances from all nodes in a network to a fixed destination. It is known that this algorithm has (in the worst case) exponential (in the size of the underlying graph) communication complexity. The author has obtained results indicating that its expected (in a probabilistic sense) communication complexity is actually polynomial, under some reasonable probabilistic assumptions. He has shown that a recently introduced method for asynchronous simulation with rollback contains the Bellman-Ford algorithm as a special case, and he has deduced that the rollback method also has exponential communication complexity. The author has also investigated whether (under certain probabilistic assumptions and/or modifications of the simulation algorithm) the communication complexity becomes polynomial.
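For reference, here is a minimal sketch of the distributed Bellman-Ford computation whose message count is at issue; it is ours, uses one benign FIFO schedule, and does not model the adversarial asynchronous schedules that drive the count exponential:

```python
def bellman_ford_messages(graph, dest):
    """graph: {node: {neighbour: edge_length}}; returns the final distance
    estimates to dest and the number of update messages sent."""
    INF = float("inf")
    dist = {v: INF for v in graph}
    dist[dest] = 0
    messages = 0
    pending = [(dest, 0)]            # FIFO order stands in for one benign schedule
    while pending:
        u, d_u = pending.pop(0)
        if d_u > dist[u]:
            continue                 # the estimate carried by this event is stale
        for v, w in graph[u].items():
            messages += 1            # u reports its current estimate to v
            if d_u + w < dist[v]:
                dist[v] = d_u + w
                pending.append((v, dist[v]))
    return dist, messages

# Example: a path 0 - 1 - 2 with unit edge lengths, destination 0.
print(bellman_ford_messages({0: {1: 1}, 1: {0: 1, 2: 1}, 2: {1: 1}}, 0))
```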