
Showing papers on "Communication complexity published in 1999"


Book ChapterDOI
02 May 1999
TL;DR: A single-database computationally private information retrieval scheme with polylogarithmic communication complexity based on a new, but reasonable intractability assumption, which is essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization.
Abstract: We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the φ-Hiding Assumption (φHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization.
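The decision problem underlying the φ-Hiding Assumption is concrete enough to sketch. A minimal illustration (toy modulus with a known factorization; the assumption is precisely that deciding this is infeasible *without* the factorization):

```python
from math import prod

def totient_from_factorization(factors):
    # factors: dict prime -> exponent; phi(m) = prod p^(e-1) * (p - 1)
    return prod(p ** (e - 1) * (p - 1) for p, e in factors.items())

def p_hides_in(m_factors, p):
    # The phi-hiding decision: does the small prime p divide phi(m)?
    # Trivial given the factorization of m; assumed hard without it.
    return totient_from_factorization(m_factors) % p == 0

# Toy example: m = 5 * 7 = 35, phi(35) = 4 * 6 = 24 = 2^3 * 3
print(p_hides_in({5: 1, 7: 1}, 3))   # True: 3 divides 24
print(p_hides_in({5: 1, 7: 1}, 5))   # False: 5 does not divide 24
```

In the actual scheme m is a large composite chosen by the user, so only the user holds this trapdoor.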

699 citations


Book
01 Dec 1999
TL;DR: The space requirement of algorithms that make only one (or a small number of) pass(es) over the input data is studied under a model of data streams that is introduced here.
Abstract: In this paper we study the space requirement of algorithms that make only one (or a small number of) pass(es) over the input data. We study such algorithms under a model of data streams that we introduce here. We give a number of upper and lower bounds for problems stemming from query processing, invoking in the process tools from the area of communication complexity.
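A classic example of the kind of one-pass, small-memory algorithm this model captures (illustrative only, not taken from the paper) is the Misra-Gries frequent-items sketch:

```python
def misra_gries(stream, k):
    # One pass, O(k) memory: returns a candidate set containing every
    # item that occurs more than len(stream)/k times (it may also
    # contain false positives, which a second pass could filter out).
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # Decrement all counters; drop those that reach zero.
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return set(counters)

stream = ["a", "b", "a", "c", "a", "b", "a"]
print(misra_gries(stream, 2))  # "a" occurs more than 7/2 times
```

Lower bounds for problems like this are exactly where the paper's reduction to communication complexity comes in: a small-space one-pass algorithm yields a low-communication one-way protocol.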

403 citations


Proceedings ArticleDOI
01 May 1999
TL;DR: It is shown that for certain communication complexity problems quantum communication protocols are exponentially faster than classical ones, giving an exponential gap between quantum communication complexity and classical probabilistic communication complexity.
Abstract: Communication complexity has become a central complexity model. In that model, we count the amount of communication bits needed between two parties in order to solve certain computational problems. We show that for certain communication complexity problems quantum communication protocols are exponentially faster than classical ones. More explicitly, we give an example of a communication complexity relation (a promise problem) P such that: 1) The quantum communication complexity of P is O(log m). 2) The classical probabilistic communication complexity of P is Ω(m^(1/4)/log m) (where m is the length of the inputs). This gives an exponential gap between quantum communication complexity and classical probabilistic communication complexity. Only a quadratic gap was previously known. Our problem P is of a geometrical nature, and is a finite-precision variation of the following problem: Player I gets as input a unit vector x ∈ R^n and two orthogonal subspaces M0, M1 ⊆ R^n. Player II gets as input an orthogonal matrix T : R^n → R^n. Their goal is to answer 0 if T(x) ∈ M0 and 1 if T(x) ∈ M1 (and any answer in any other case). We give an almost tight analysis for the quantum communication complexity and for the classical probabilistic communication complexity of this problem.

302 citations


Journal ArticleDOI
TL;DR: The results include a connection to the VC-dimension, a study of the problem of computing the inner product of two real valued vectors, and a relation between “simultaneous” protocols and one-round protocols.
Abstract: We present several results regarding randomized one-round communication complexity. Our results include a connection to the VC-dimension, a study of the problem of computing the inner product of two real valued vectors, and a relation between “simultaneous” protocols and one-round protocols.

234 citations


Posted Content
TL;DR: In this paper, the trade-offs between the number of queries of quantum search algorithms, their error probability, the size of the search space, and the number of solutions in this space were analyzed.
Abstract: We present a number of results related to quantum algorithms with small error probability and quantum algorithms that are zero-error. First, we give a tight analysis of the trade-offs between the number of queries of quantum search algorithms, their error probability, the size of the search space, and the number of solutions in this space. Using this, we deduce new lower and upper bounds for quantum versions of amplification problems. Next, we establish nearly optimal quantum-classical separations for the query complexity of monotone functions in the zero-error model (where our quantum zero-error model is defined so as to be robust when the quantum gates are noisy). Also, we present a communication complexity problem related to a total function for which there is a quantum-classical communication complexity gap in the zero-error model. Finally, we prove separations for monotone graph properties in the zero-error and other error models which imply that the evasiveness conjecture for such properties does not hold for quantum computers.

144 citations


02 Aug 1999
TL;DR: In this paper, the authors consider the problem of exchanging documents between two users with the main objective of minimizing the total number of bits exchanged, with the number of rounds and the complexity of internal computations as secondary objectives, and show how to estimate the distance between x and y using a single message of logarithmic size.
Abstract: We have two users, A and B, who hold documents x and y respectively. Neither of the users has any information about the other's document. They exchange messages so that B computes x; it may be required that A compute y as well. Our goal is to design communication protocols with the main objective of minimizing the total number of bits they exchange; other objectives are minimizing the number of rounds and the complexity of internal computations. An important notion which determines the efficiency of the protocols is how one measures the distance between x and y. We consider several metrics for measuring this distance, namely the Hamming metric, the Levenshtein metric (edit distance), and a new LZ metric, which is introduced in this paper. We show how to estimate the distance between x and y using a single message of logarithmic size. For each metric, we present the first communication-efficient protocols, which often match the corresponding lower bounds. A consequence of these protocols is error-correcting codes for these error models that correct up to d errors in n characters using O(d log n) bits. Our most interesting methods use a new histogram transformation that we introduce to convert edit distance to L1 distance.
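The connection between character histograms and edit distance can be illustrated with a much weaker observation than the paper's histogram transformation: a single insertion or deletion changes the histogram L1 distance by 1, and a substitution by at most 2, so half the L1 distance lower-bounds the edit distance. A sketch of just that observation:

```python
from collections import Counter

def l1_histogram_distance(x, y):
    # L1 distance between the character-frequency histograms of x and y.
    hx, hy = Counter(x), Counter(y)
    return sum(abs(hx[c] - hy[c]) for c in set(hx) | set(hy))

def edit_distance_lower_bound(x, y):
    # Each edit operation moves the histogram L1 distance by at most 2,
    # so ed(x, y) >= ceil(L1(hist_x, hist_y) / 2).
    return (l1_histogram_distance(x, y) + 1) // 2

print(edit_distance_lower_bound("kitten", "sitting"))  # 3 (and ed is 3)
```

The paper's transformation is stronger (it preserves positional information, not just counts); this only shows why an L1-style summary can carry distance information in a logarithmic-size message.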

133 citations


Journal ArticleDOI
TL;DR: This work constructs a function G such that, for the one-round communication model and three parties, G can be computed with n+1 bits of communication when the parties share prior entanglement, versus (3/2)n+1 bits without; a generalization F has communication complexity exactly k with entanglement and roughly k*log2(k) without.
Abstract: Quantum entanglement cannot be used to achieve direct communication between remote parties, but it can reduce the communication needed for some problems. Let each of k parties hold some partial input data to some fixed k-variable function f. The communication complexity of f is the minimum number of classical bits required to be broadcast for every party to know the value of f on their inputs. We construct a function G such that for the one-round communication model and three parties, G can be computed with n+1 bits of communication when the parties share prior entanglement. We then show that without entangled particles, the one-round communication complexity of G is (3/2)n+1. Next we generalize this function to a function F. We show that if the parties share prior quantum entanglement, then the communication complexity of F is exactly k. We also show that, if no entangled particles are provided, then the communication complexity of F is roughly k log2(k). These two results prove communication complexity separations better than a constant number of bits.

119 citations


Journal Article
TL;DR: In this paper, the communication complexity of the binary inner product function in a variation of the two-party scenario where the parties have an a priori supply of particles in an entangled quantum state was considered.
Abstract: We consider the communication complexity of the binary inner product function in a variation of the two-party scenario where the parties have an a priori supply of particles in an entangled quantum state. We prove linear lower bounds for both exact protocols, as well as for protocols that determine the answer with bounded-error probability. Our proofs employ a novel kind of quantum reduction from a quantum information theory problem to the problem of computing the inner product. The communication required for the former problem can then be bounded by an application of Holevo's theorem. We also give a specific example of a probabilistic scenario where entanglement reduces the communication complexity of the inner product function by one bit.

68 citations


Proceedings ArticleDOI
12 Apr 1999
TL;DR: This paper presents a model for point-to-point communication in HNOW systems, shows how it can be used for characterizing the performance of different collective communication operations, and verifies the accuracy of the proposed model using an experimental HNOW testbed.
Abstract: Networks of workstations (NOW) have become an attractive alternative platform for high performance computing. Due to the commodity nature of workstations and interconnects and due to the multiplicity of vendors and platforms, the NOW environments are being gradually redefined as heterogeneous networks of workstations (HNOW). Having an accurate model for the communication in HNOW systems is crucial for design and evaluation of efficient communication layers for such systems. In this paper we present a model for point-to-point communication in HNOW systems and show how it can be used for characterizing the performance of different collective communication operations. In particular, we show how the performance of broadcast, scatter, and gather operations can be modeled and analyzed. We also verify the accuracy of our proposed model by using an experimental HNOW testbed. Furthermore, it is shown how this model can be used for comparing the performance of different collective communication algorithms. We also show how the effect of heterogeneity on the performance of collective communication operations can be predicted.
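The paper's model is not reproduced here; the following simplified linear-cost (alpha-beta) sketch, with hypothetical per-node send/receive overheads, only illustrates the kind of per-node heterogeneity such a model must capture:

```python
def p2p_time(sender, receiver, msg_bytes, alpha_send, alpha_recv, beta):
    # Simplified linear-cost model: per-node send/receive overheads
    # (alpha_*, seconds) plus a shared per-byte wire cost (beta).
    return alpha_send[sender] + beta * msg_bytes + alpha_recv[receiver]

def sequential_broadcast_time(root, nodes, msg_bytes, a_s, a_r, beta):
    # Root sends to every other node one after another; the broadcast
    # finishes when the last receiver has absorbed its message.
    t, finish = 0.0, 0.0
    for node in nodes:
        if node == root:
            continue
        t += a_s[root] + beta * msg_bytes       # root busy sending
        finish = max(finish, t + a_r[node])     # receiver-side overhead
    return finish

# Hypothetical overheads: node 2 is a much slower workstation.
a_s = {0: 1.0, 1: 2.0, 2: 5.0}
a_r = {0: 1.0, 1: 2.0, 2: 5.0}
print(sequential_broadcast_time(0, [0, 1, 2], 2, a_s, a_r, 0.5))  # 9.0
```

Even this toy version shows the paper's point: with heterogeneous overheads, the ordering of sends and the placement of slow nodes change collective-operation cost, which a homogeneous model cannot predict.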

57 citations


Journal ArticleDOI
TL;DR: In this article, the problem of power and bit allocation in OFDM systems is analysed, and a solution algorithm with substantially lower computational complexity than existing algorithms is proposed.
Abstract: The problem of power and bit allocation in OFDM systems is analysed. A solution algorithm with substantially lower computational complexity than existing algorithms is proposed.
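The paper's lower-complexity algorithm is not given in this abstract; for context, here is a sketch of the classic greedy (Hughes-Hartogs-style) bit-loading baseline that such algorithms improve on. The normalized incremental-power cost 2^b/g is an assumption of this toy model (unit SNR gap, unit noise):

```python
import heapq

def greedy_bit_allocation(gains, total_bits):
    # Greedy bit loading: the power to carry b bits on a subcarrier with
    # channel gain g is (2^b - 1)/g (normalized), so the incremental
    # power for bit b+1 is 2^b / g. Always load the cheapest next bit.
    bits = [0] * len(gains)
    heap = [(1.0 / g, i) for i, g in enumerate(gains)]  # cost of 1st bit
    heapq.heapify(heap)
    power = 0.0
    for _ in range(total_bits):
        cost, i = heapq.heappop(heap)
        power += cost
        bits[i] += 1
        heapq.heappush(heap, (2 ** bits[i] / gains[i], i))
    return bits, power

bits, power = greedy_bit_allocation([4.0, 1.0], 3)
print(bits)  # → [3, 0]: all bits go to the stronger subcarrier
```

The greedy loop costs O(B log N) for B bits over N subcarriers, which is exactly the kind of per-bit cost the paper's algorithm avoids.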

48 citations


Book ChapterDOI
15 Aug 1999
TL;DR: In this article, an optimally resilient distributed multiplication protocol that enjoys the property of noninteractivity is presented, which relies on a standard cryptographic assumption and works over a complete, synchronous, untappable network with a broadcast channel.
Abstract: An optimally resilient distributed multiplication protocol that enjoys the property of non-interactivity is presented. The protocol relies on a standard cryptographic assumption and works over a complete, synchronous, untappable network with a broadcast channel. As long as no disruption occurs, each player uses those channels only once to send messages; thus no interaction is needed among players. The cost is an increase in local computation and communication complexity that is determined by the factor of the threshold.

Journal ArticleDOI
TL;DR: An efficient distributed algorithm is presented to detect generalized deadlocks in replicated databases that use quorum-consensus algorithms to perform majority voting; it is shown to perform significantly better in both time and message complexity than the best known existing algorithms.
Abstract: Replicated databases that use quorum-consensus algorithms to perform majority voting are prone to deadlocks. Due to the P-out-of-Q nature of quorum requests, deadlocks that arise are generalized deadlocks and are hard to detect. We present an efficient distributed algorithm to detect generalized deadlocks in replicated databases. The algorithm performs reduction of a distributed wait-for-graph (WFG) to determine the existence of a deadlock. If sufficient information to decide the reducibility of a node is not available at that node, the algorithm attempts reduction later in a lazy manner. We prove the correctness of the algorithm. The algorithm has a message complexity of 2e messages and a worst-case time complexity of 2d+2 hops, where e is the number of edges and d is the diameter of the WFG. The algorithm is shown to perform significantly better in both time and message complexity than the best known existing algorithms. We conjecture that this is an optimal algorithm, in time and message complexity, to detect generalized deadlocks if no transaction has complete knowledge of the topology of the WFG or the system and the deadlock detection is to be carried out in a distributed manner.
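The reduction idea can be sketched abstractly. The toy fixpoint below is a hypothetical centralized version (the paper's algorithm is distributed and lazy): a blocked node with a P-out-of-Q request is reducible once at least p of the q nodes it waits on are reducible, and any node still irreducible at the fixpoint is deadlocked:

```python
def find_deadlocked(waits):
    # waits: node -> (p, [requested nodes]) for blocked nodes only.
    # Nodes absent from `waits` are active, hence trivially reducible.
    reducible = set()
    changed = True
    while changed:
        changed = False
        for node, (p, requested) in waits.items():
            if node in reducible:
                continue
            granted = sum(1 for r in requested
                          if r not in waits or r in reducible)
            if granted >= p:            # quorum of replies can arrive
                reducible.add(node)
                changed = True
    return set(waits) - reducible       # irreducible => deadlocked

# A needs 2 of {B, C}; B needs 1 of {A}; C is active.
# A can get at most C's reply, so A and B are deadlocked.
print(sorted(find_deadlocked({"A": (2, ["B", "C"]), "B": (1, ["A"])})))
```

The paper's contribution is doing this reduction distributedly in 2e messages and 2d+2 hops without any node knowing the whole WFG; the sketch only shows what "reduction of the WFG" means.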

Journal ArticleDOI
TL;DR: In Search of Clusters began as an attempt to collect a set of terms and definitions that would define and describe various aspects of computer clusters, and Gregory Pfister accomplishes this goal and more.
Abstract: Reviewed by Michaele E. Duncan, University of Southern Mississippi Technical Editor: Marcin Paprzycki Dept. of Computer Science and Statistics Univ. of Southern Mississippi Southern Station 1506 Hattiesburg, MS 39406-1506 m.paprzycki@usm.edu In Search of Clusters began as an attempt to collect a set of terms and definitions that would define and describe various aspects of computer clusters; Gregory Pfister accomplishes this goal and more. His obvious sense of humor and generous use of metaphors make a potentially boring subject incredibly interesting—even fun. In Search of Clusters contains valuable information for the novice and expert alike. The reader need only have a limited understanding of computers and the associated terminology to benefit from this book. However, while easy to understand, there is a wealth of technical “hard stuff” to keep the computer gurus interested. It is a must read for anyone considering assembling a cluster. Pfister divides the book into four basic parts. First, he covers the concepts of clusters, explaining how and why they can solve complex problems. He includes many detailed examples of clusters that help the reader grasp the concepts, as well as definitions and comparisons of clusters and associated terminology. The reader learns to distinguish between parallel and distributed systems. Secondly, Pfister discusses the hardware possibilities involved in clustering machines. He begins with the four basic categories of hardware structures and how to establish communication. Following this, discussions on cluster alteration techniques, symmetric multiprocessors, NUMA, UMA, and NORMA show the multiple hardware avenues available, along with the advantages and disadvantages of each. Thirdly, Pfister addresses the software aspects of clusters. 
Chapters on workloads, basic programming models and issues, commercial programming models, and single-system image help explain the often overlooked complexities of the software side of clustering. Finally, Pfister ties everything together in a section on systems. His effective style of presentation, along with over 100 figures and 20 tables, makes In Search of Clusters a very interesting and informative book. While it is not suitable as a textbook, it would be extremely beneficial as required reading in any computer-related course on networking or parallel processing at the undergraduate or graduate level.

Journal ArticleDOI
TL;DR: The quantum analogue of classical communication complexity, the quantum communication complexity model, was defined and studied, and some of the main results in the area are presented.
Abstract: Classical communication complexity has been intensively studied since its conception two decades ago. Recently, its quantum analogue, the quantum communication complexity model, was defined and studied. In this paper we present some of the main results in the area.

Journal ArticleDOI
TL;DR: Fault-tolerant communication algorithms for k-ary n-cubes are introduced, including one-to-all broadcasting, all-to-all broadcasting, one-to-all personalized communication, and all-to-all personalized communication.
Abstract: Fault-tolerant communication algorithms for k-ary n-cubes are introduced. These include: one-to-all broadcasting, all-to-all broadcasting, one-to-all personalized communication, and all-to-all personalized communication. Each of these algorithms can tolerate up to (2n-2) node failures provided that k>(2n-2) and k>3. Extensions of these algorithms that tolerate up to 2n-1 node failures are also described. The communication complexities of the proposed algorithms are derived when wormhole or store-and-forward packet routing is used.

Posted Content
TL;DR: In this article, the log-rank lower bound is shown to extend to the model with unlimited prior entanglement, and the polynomial equivalence of quantum and classical communication complexity is proved for various classes of functions.
Abstract: The quantum version of communication complexity allows the two communicating parties to exchange qubits and/or to make use of prior entanglement (shared EPR-pairs). Some lower bound techniques are available for qubit communication complexity, but except for the inner product function, no bounds are known for the model with unlimited prior entanglement. We show that the log-rank lower bound extends to the strongest model (qubit communication + unlimited prior entanglement). By relating the rank of the communication matrix to properties of polynomials, we are able to derive some strong bounds for exact protocols. In particular, we prove both the "log-rank conjecture" and the polynomial equivalence of quantum and classical communication complexity for various classes of functions. We also derive some weaker bounds for bounded-error quantum protocols.
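The log-rank bound itself is easy to check on small examples: a D-bit deterministic protocol partitions the communication matrix into at most 2^D monochromatic rectangles, so rank(M_f) ≤ 2^D over any field. An illustrative sketch (not from the paper) for EQUALITY, whose communication matrix is the identity, using rank over GF(2):

```python
from math import log2

def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is an integer bitmask.
    pivots = []
    for row in rows:
        for p in pivots:
            row = min(row, row ^ p)  # clear p's leading bit if set
        if row:
            pivots.append(row)
    return len(pivots)

# Communication matrix of EQUALITY on n = 3 bits: the 8x8 identity.
n = 3
eq_rows = [1 << x for x in range(2 ** n)]
r = gf2_rank(eq_rows)
print(r, log2(r))  # rank 8, hence D(EQ) >= log2(8) = 3 bits
```

The paper's point is that this same rank argument survives even when the parties exchange qubits and share unlimited EPR pairs, which is what makes it a lower-bound tool for the strongest quantum model.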

Book ChapterDOI
28 Jun 1999
TL;DR: In this paper, the authors give a basic overview of computational complexity, query complexity, and communication complexity, with quantum information incorporated into each of these scenarios. The aim is to provide simple but clear definitions, and to highlight the interplay between the three scenarios and currently known quantum algorithms.
Abstract: We give a basic overview of computational complexity, query complexity, and communication complexity, with quantum information incorporated into each of these scenarios. The aim is to provide simple but clear definitions, and to highlight the interplay between the three scenarios and currently known quantum algorithms. Complexity theory is concerned with the inherent cost required to solve information processing problems, where the cost is measured in terms of various well-defined resources. In this context, a problem can usually be thought of as a function whose input is a problem instance and whose corresponding output is the solution to it. Sometimes the solution is not unique, in which case the problem can be thought of as a relation, rather than a function. Resources are usually measured in terms of: some designated elementary operations, memory usage, or communication. We consider three specific complexity scenarios, which illustrate different advantages of working with quantum information: 1. computational complexity, 2. query complexity, and 3. communication complexity.

Book ChapterDOI
11 Jul 1999
TL;DR: The Rivest-Vuillemin proof of the famous AKR conjecture is strengthened to show that no non-trivial monotone graph property can be expressed as a polynomial of sub-quadratic degree and near-optimal space-time tradeoffs are obtained.
Abstract: We initiate a study of space-time tradeoffs in the cell-probe model under restricted preprocessing power. Classically, space-time tradeoffs have been studied in this model under the assumption that the preprocessing is unrestricted. In this setting, a large gap exists between the best known upper and lower bounds. Augmenting the model with a function family F that characterizes the preprocessing power, makes for a more realistic computational model and allows to obtain much tighter space-time tradeoffs for various natural settings of F. The extreme settings of our model reduce to the classical cell probe and generalized decision tree complexities. We use graph properties for the purpose of illustrating various aspects of our model across this broad spectrum. In doing so, we develop new lower bound techniques and strengthen some existing results. In particular, we obtain near-optimal space-time tradeoffs for various natural choices of F; strengthen the Rivest-Vuillemin proof of the famous AKR conjecture to show that no non-trivial monotone graph property can be expressed as a polynomial of sub-quadratic degree; and obtain new results on the generalized decision tree complexity w.r.t. various families F.

Proceedings Article
15 Aug 1999
TL;DR: An optimally resilient distributed multiplication protocol that enjoys the property of non-interactivity is presented and works over a complete, synchronous, untappable network with a broadcast channel.
Abstract: An optimally resilient distributed multiplication protocol that enjoys the property of non-interactivity is presented. The protocol relies on a standard cryptographic assumption and works over a complete, synchronous, untappable network with a broadcast channel. As long as no disruption occurs, each player uses those channels only once to send messages; thus no interaction is needed among players. The cost is an increase in local computation and communication complexity that is determined by the factor of the threshold. As an application of the proposed protocol we present a robust threshold version of the Cramer-Shoup cryptosystem, which is the first noninteractive solution with optimal resilience.

Journal ArticleDOI
TL;DR: This paper disproves Tiwari's conjecture by exhibiting an infinite family of functions whose communication complexity on the linear array is essentially smaller than the conjectured bound, and leads to progress on another major problem in this area.
Abstract: A linear array network consists of k+1 processors P0, P1, ..., Pk, with links only between Pi and Pi+1 (0 ≤ i < k).

Journal Article
TL;DR: A robust threshold version of the Cramer-Shoup cryptosystem is presented, which is the first non-interactive solution with optimal resiliency, together with an optimally resilient distributed multiplication protocol that enjoys the property of non-interactivity.
Abstract: An optimally resilient distributed multiplication protocol that enjoys the property of non-interactivity is presented. The protocol relies on a standard cryptographic assumption and works over a complete, synchronous, untappable network with a broadcast channel. As long as no disruption occurs, each player uses those channels only once to send messages; thus no interaction is needed among players. The cost is an increase in local computation and communication complexity that is determined by the factor of the threshold. As an application of the proposed protocol we present a robust threshold version of the Cramer-Shoup cryptosystem, which is the first non-interactive solution with optimal resiliency.

Book ChapterDOI
27 Sep 1999
TL;DR: This work presents a protocol for this task which has communication complexity that is linear in the "actual" size of the biggest connected component, and defines the virtual component, which is shown to be the closest one can get to the notion of the " actual" component in asynchronous networks.
Abstract: Many crucial network tasks such as database maintenance can be efficiently carried out given a tree that spans the network. By maintaining such a spanning tree, rather than constructing it "from-scratch" due to every topology change, one can improve the efficiency of the tree construction, as well as the efficiency of the protocols that use the tree. We present a protocol for this task which has communication complexity that is linear in the "actual" size of the biggest connected component. The time complexity of our protocol has only a polylogarithmic overhead in the "actual" size of the biggest connected component. The communication complexity of the previous solution, which was considered communication optimal, was linear in the network size, that is, unbounded as a function of the "actual" size of the biggest connected component. The overhead in the time measure of the previous solution was polynomial in the network size. In an asynchronous network it may not be clear what is the meaning of the "actual" size of the connected component at a given time. To capture this notion we define the virtual component and show that in asynchronous networks, in a sense, the notion of the virtual component is the closest one can get to the notion of the "actual" component.

Proceedings Article
11 Jul 1999
TL;DR: The incompressibility method is an elementary yet powerful proof technique based on Kolmogorov complexity, and it is shown to be particularly suited to obtaining average-case computational complexity lower bounds.
Abstract: The incompressibility method is an elementary yet powerful proof technique based on Kolmogorov complexity [13]. We show that it is particularly suited to obtain average-case computational complexity lower bounds. Such lower bounds have been difficult to obtain in the past by other methods. In this paper we present four new results and also give four new proofs of known results to demonstrate the power and elegance of the new method.

Proceedings ArticleDOI
01 May 1999
TL;DR: A protocol is presented for this task which has communication complexity that is linear in the “actual” size of the biggest connected component, and whose time complexity has only a polylogarithmic overhead in the “actual” size of that component.
Abstract: Many crucial network tasks such as database maintenance can be efficiently carried out given a tree that spans the network. By maintaining such a spanning tree, rather than constructing it “from-scratch” due to every topology change, one can improve the efficiency of the tree construction, as well as the efficiency of the protocols that use the tree. We present a protocol for this task which has communication complexity that is linear in the “actual” size of the biggest connected component. The time complexity of our protocol has only a polylogarithmic overhead in the “actual” size of the biggest connected component. The communication complexity of the previous solution, which was considered communication optimal, was linear in the network size, that is, unbounded as a function of the “actual” size of the biggest connected component. The overhead in the time measure of the previous solution was polynomial in the network size.

Book ChapterDOI
Peter Sanders
16 Dec 1999
TL;DR: In this article, it was shown that the parallel execution time of the receiver-initiated load balancing algorithm with random polling is at most (1+Ɛ)Tseq/P + O(Tatomic + h(1/Ɛ + Trout + Tsplit)) with high probability, where Trout, Tsplit and Tatomic bound the time for sending a message, splitting a subproblem, and finishing a small unsplittable subproblem, respectively.
Abstract: Many applications in parallel processing have to traverse large, implicitly defined trees with irregular shape. The receiver initiated load balancing algorithm random polling has long been known to be very efficient for these problems in practice. For any Ɛ > 0, we prove that its parallel execution time is at most (1+Ɛ)Tseq/P+O(Tatomic+h(1/Ɛ+Trout+ Tsplit)) with high probability, where Trout, Tsplit and Tatomic bound the time for sending a message, splitting a subproblem and finishing a small unsplittable subproblem respectively. The maximum splitting depth h is related to the depth of the computation tree. Previous work did not prove efficiency close to one and used less accurate models. In particular, our machine model allows asynchronous communication with nonconstant message delays and does not assume that communication takes place in rounds. This model is compatible with the LogP model.

Proceedings ArticleDOI
01 May 1999
TL;DR: A new architecture is proposed which supports modular group composition by providing a distinction between intra-group and inter-group communication, and multiple group communication protocols and end-to-end delivery semantics can be used in a single system.
Abstract: This paper examines the problem of building scalable, fault-tolerant distributed systems from collections of communicating process groups, while maintaining well-defined end-to-end delivery semantics. We propose a new architecture which supports modular group composition by providing a distinction between intra-group and inter-group communication. With this architecture, multiple group communication protocols and end-to-end delivery semantics can be used in a single system. These features reduce the complexity of ordering messages in a group composition, resulting in enhanced scalability. Finally we present simulation results comparing the performance of a group composition using our architecture to that of a single process group.

Book ChapterDOI
26 Jul 1999
TL;DR: It is proved that matrices with small Boolean rank have small matrix rigidity over any field, and functions with nondeterministic communication complexity l can be approximated by functions with parity communication complexity O(l).
Abstract: We consider combinatorial properties of Boolean matrices and their application to two-party communication complexity. Let A be a binary n×n matrix and let K be a field. Rectangles are sets of entries defined by collections of rows and columns. We denote by rankB(A) (rankK(A), resp.) the least size of a family of rectangles whose union (sum, resp.) equals A. We prove the following: - With probability approaching 1, for a random Boolean matrix A the following holds: rankB(A) ≥ n(1-o(1)). - For finite K and fixed Ɛ > 0 the following holds: If A is a Boolean matrix with rankB(A) ≤ t, then there is some matrix A′ such that A − A′ has at most Ɛ·n² non-zero entries and rankK(A′) ≤ t^O(1). As applications we mention some improvements of earlier results: (1) With probability approaching 1 a random n-variable Boolean function has nondeterministic communication complexity n, (2) functions with nondeterministic communication complexity l can be approximated by functions with parity communication complexity O(l). The latter complements a result saying that nondeterministic and parity communication protocols cannot efficiently simulate each other. Another consequence is: (3) matrices with small Boolean rank have small matrix rigidity over any field.

Journal ArticleDOI
TL;DR: An analogous polylogarithmic upper bound is proved in the stronger multiparty communication model of Chandra et al., depending on the L1 norm of the function rather than on the rank of its communication matrix.
Abstract: The two-party communication complexity of a Boolean function f is known to be at least log rank(M_f), i.e., the logarithm of the rank of the communication matrix of f [19]. Lovasz and Saks [17] asked whether the communication complexity of f can be bounded from above by (log rank(M_f))^c, for some constant c. The question was answered affirmatively for a special class of functions f in [17], and Nisan and Wigderson proved nice results related to this problem [20], but, for arbitrary f, it remained a difficult open problem. We prove here an analogous polylogarithmic upper bound in the stronger multiparty communication model of Chandra et al. [6], which, instead of the rank of the communication matrix, depends on the L1 norm of the function f, for arbitrary Boolean functions f.

Proceedings Article
26 May 1999
TL;DR: The state of the art in the domain is surveyed, some fruitful directions of research are offered, and a rough approximation of a “measure of density” of networks is proposed.
Abstract: The present paper surveys recent and promising results about graph-theoretic and group-theoretic modelling in distributed computing. The specific behaviour of various classes of networks (Cayley and Borel Cayley networks, de Bruijn and Kautz networks, etc.) is studied in terms of usual efficiency requirements, such as computability, symmetry, uniformity and algebraic structure, ease of routing, fault-robustness, flexibility, etc. We also address various problems arising from the application of the notion of sense of direction on several significant network topologies. A rough approximation of a “measure of density” of networks is proposed. It leads to a conjecture about the real impact of sense of direction with respect to Leader Election. The notion of the dynamic and static symmetry of networks is also considered from the viewpoint of checking and measuring the effects of orientation on the communication complexity of “consensus protocols”. On the whole, this paper attempts to survey the state of the art in the domain and to offer some fruitful directions of research.

Journal ArticleDOI
TL;DR: In this article, the authors show that if the knowledge of the relevant probabilities is initially decentralized, then expected-profit maximizing decisions can require unbounded communication, as measured by the minimal message space dimension.