
Communication complexity

About: Communication complexity is a research topic. Over its lifetime, 3870 publications have been published within this topic, receiving 105832 citations.


Papers
Journal ArticleDOI
TL;DR: Pseudo-telepathy is a surprising application of quantum information processing to communication complexity, and a survey of recent and not-so-recent work on the subject is presented.
Abstract: Quantum information processing is at the crossroads of physics, mathematics and computer science. It is concerned with what we can and cannot do with quantum information that goes beyond the abilities of classical information processing devices. Communication complexity is an area of classical computer science that aims at quantifying the amount of communication necessary to solve distributed computational problems. Quantum communication complexity uses quantum mechanics to reduce the amount of communication that would be classically required. Pseudo-telepathy is a surprising application of quantum information processing to communication complexity. Thanks to entanglement, perhaps the most nonclassical manifestation of quantum mechanics, two or more quantum players can accomplish a distributed task with no need for communication whatsoever, which would be an impossible feat for classical players. After a detailed overview of the principle and purpose of pseudo-telepathy, we present a survey of recent and not-so-recent work on the subject. In particular, we describe and analyse all the pseudo-telepathy games currently known to the authors.
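For concreteness, the canonical example of such a game is the three-player GHZ (Mermin) game: the players receive bits x, y, z with the promise x ⊕ y ⊕ z = 0 and must each output a bit so that the XOR of the outputs equals x ∨ y ∨ z. The sketch below is a small self-contained check, not anything from the paper itself: it exhaustively verifies that no deterministic classical strategy wins on all four promised inputs, which is exactly the gap that shared entanglement closes.

```python
# Exhaustive check that no classical strategy wins the GHZ pseudo-telepathy
# game on every promised input -- an illustration of the kind of game the
# survey covers (the win condition below is the standard Mermin-GHZ
# formulation, not taken from the abstract itself).
from itertools import product

# Promised inputs: bits (x, y, z) with x ^ y ^ z == 0.
INPUTS = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def wins(a, b, c, x, y, z):
    # Players win iff the XOR of their outputs equals the OR of the inputs.
    return (a ^ b ^ c) == (x | y | z)

# A deterministic local strategy maps a player's input bit to an output bit;
# there are exactly 4 such functions per player, hence 64 joint strategies.
STRATEGIES = list(product([0, 1], repeat=2))  # (output on 0, output on 1)

best = 0
for sa, sb, sc in product(STRATEGIES, repeat=3):
    score = sum(wins(sa[x], sb[y], sc[z], x, y, z) for x, y, z in INPUTS)
    best = max(best, score)

print(f"best classical strategy wins {best}/4 promised inputs")  # prints 3/4
```

With a shared GHZ state, by contrast, quantum players win on every promised input with probability 1 and without exchanging a single bit, which is the pseudo-telepathy phenomenon the survey describes.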

31 citations

Book ChapterDOI
Arpita Patra
13 Dec 2011
TL;DR: The first-ever error-free asynchronous broadcast and Byzantine Agreement protocols with optimal communication complexity and fault tolerance are presented, together with reduction-based protocols for the synchronous setting.
Abstract: In this paper we present the first-ever error-free, asynchronous broadcast (called A-cast) and Byzantine Agreement (called ABA) protocols with optimal communication complexity and fault tolerance. Our protocols are multi-valued, meaning that they deal with $\ell$-bit inputs, and achieve communication complexity of ${\mathcal O}(n\ell)$ bits for large enough $\ell$ for a set of $n \geq 3t+1$ parties in which at most $t$ can be Byzantine corrupted. Previously, Patra and Rangan (Latincrypt'10, ICITS'11) reported multi-valued, communication-optimal A-cast and ABA protocols that are only probabilistically correct. Following all the previous works on multi-valued protocols, we too follow a reduction-based approach, meaning that our protocols are designed given existing A-cast and ABA protocols for small messages (possibly for a single bit). Our reductions invoke at most as many instances of the single-bit protocols as the reductions of Patra and Rangan. Furthermore, our reductions run in constant expected time, in contrast to the ${\mathcal O}(n)$ of Patra and Rangan (ICITS'11). Our reductions are also much simpler and more elegant than theirs. By adapting our techniques from the asynchronous setting, we present new error-free, communication-optimal reduction-based protocols for broadcast (BC) and Byzantine Agreement (BA) in the synchronous setting that are constant-round and call for only ${\mathcal O}(n^2)$ instances of the single-bit protocols. Prior to this, communication optimality had been achieved by Fitzi and Hirt (PODC'06), who proposed probabilistically correct multi-valued BC and BA protocols with constant rounds and ${\mathcal O}(n(n+\kappa))$ invocations of the single-bit protocols ($\kappa$ is the error parameter). Recently, Liang and Vaidya (PODC'11) achieved the same without error probability. However, their reduction calls for a round complexity and a number of instances that are functions of the message size, ${\mathcal O}(\sqrt{\ell} + n^2)$ and ${\mathcal O}(n^2\sqrt{\ell} + n^4)$ respectively, where $\ell = \Omega(n^6)$.
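To see why ${\mathcal O}(n\ell)$ is the benchmark here, compare it with the naive reduction that runs one single-bit broadcast per message bit; with an error-free single-bit broadcast costing on the order of $n^2$ bits, the naive approach pays ${\mathcal O}(n^2\ell)$. The snippet below is just this back-of-the-envelope comparison, under an illustrative cost model rather than the paper's exact protocols.

```python
# Rough communication-cost comparison, assuming a single-bit broadcast
# among n parties costs on the order of n**2 bits (a common figure for
# error-free protocols; the constant is illustrative, not the paper's).

def naive_cost(n, ell):
    # Broadcast each of the ell message bits with its own single-bit instance.
    return ell * n**2

def optimal_cost(n, ell):
    # Communication-optimal multi-valued broadcast: O(n * ell) bits
    # for sufficiently large ell (constants suppressed).
    return n * ell

n, ell = 100, 10**6
print(f"naive:   ~{naive_cost(n, ell):.2e} bits")   # ~1e10 bits
print(f"optimal: ~{optimal_cost(n, ell):.2e} bits") # ~1e8 bits
```

For long messages the gap is a full factor of n, which is what "communication optimal" buys in this setting.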

31 citations

Journal ArticleDOI
TL;DR: A detailed computational complexity investigation and simulation results indicate that the algorithm based on SDSC has significant performance and complexity advantages and is very robust against channel estimation errors compared with existing suboptimal detection and equalization algorithms proposed earlier in the literature.
Abstract: This paper is concerned with the challenging and timely problem of data detection for coded orthogonal frequency-division multiplexing (OFDM) systems in the presence of frequency-selective and very rapidly time-varying channels. New low-complexity maximum a posteriori probability (MAP) data detection algorithms are proposed based on sequential detection with optimal ordering (SDOO) and sequential detection with successive cancellation (SDSC). The received signal vector is optimally decomposed into reduced-dimensional subobservations by exploiting the banded structure of the frequency-domain channel matrix, whose bandwidth is a parameter to be adjusted according to the speed of the mobile terminal. The data symbols are then detected by the proposed algorithms in a computationally efficient way by means of the Markov chain Monte Carlo (MCMC) technique with Gibbs sampling. The impact of imperfect channel state information (CSI) on the bit error rate (BER) performance of these algorithms is investigated analytically and by computer simulations. A detailed computational complexity investigation and simulation results indicate that, in particular, the algorithm based on SDSC has significant performance and complexity advantages and is very robust against channel estimation errors compared with existing suboptimal detection and equalization algorithms proposed earlier in the literature.
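As a rough illustration of the MCMC-with-Gibbs-sampling ingredient, the sketch below detects BPSK symbols from y = Hx + noise with a banded H by resampling one symbol at a time from its conditional posterior. The system model, noise level, and flat symbol prior are assumptions made for the example; the paper's algorithms additionally exploit the banded structure and the ordering/cancellation steps for efficiency.

```python
# Minimal Gibbs-sampling detector for y = H x + noise with BPSK symbols,
# sketching the MCMC idea the paper builds on.  The banded channel matrix,
# noise level, and flat prior are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, band, sigma = 16, 2, 0.5

# Random banded "frequency-domain channel" matrix (bandwidth = band).
H = rng.normal(size=(N, N))
H *= np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= band

x_true = rng.choice([-1.0, 1.0], size=N)        # transmitted BPSK symbols
y = H @ x_true + sigma * rng.normal(size=N)     # received vector

x = rng.choice([-1.0, 1.0], size=N)             # random initial state
for sweep in range(200):
    for i in range(N):
        # Residual error for x_i = +1 and x_i = -1, all other symbols fixed.
        errs = []
        for s in (+1.0, -1.0):
            x[i] = s
            errs.append(np.sum((y - H @ x) ** 2))
        # Conditional posterior of x_i under a flat prior (Gaussian noise).
        logits = -np.array(errs) / (2 * sigma**2)
        p_plus = 1.0 / (1.0 + np.exp(logits[1] - logits[0]))
        x[i] = +1.0 if rng.random() < p_plus else -1.0

print("symbol errors:", int(np.sum(x != x_true)))
```

A practical detector would avoid the full matrix-vector product per coordinate by updating the residual incrementally within the band, which is where the banded decomposition described in the abstract pays off.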

31 citations

Proceedings ArticleDOI
Ilan Newman
21 Jun 2004
TL;DR: This work presents the first linear-complexity protocols for several classes of Boolean functions, including the OR function, functions that have O(1)-minterm (maxterm) size, functions that have linear-size AC^0 formulae, and some other functions.
Abstract: We consider a fault-tolerant broadcast network of n processors, each holding one bit of information. The goal is to compute a given Boolean function on the n bits. In each step, a processor may broadcast one bit of information. Each listening processor receives the bit that was broadcast with error probability bounded by a fixed constant ε. The errors in different steps, as well as for different receiving processors in the same step, are mutually independent. The protocols considered in this model are oblivious protocols: at each step, the processors that broadcast are fixed in advance, independent of the input and the outcome of previous steps. The primary complexity measure in this model is the total number of broadcasts performed by the protocol. We present here the first linear-complexity protocols for several classes of Boolean functions, including the OR function, functions that have O(1)-minterm (maxterm) size, functions that have linear-size AC^0 formulae, and some other functions. This answers an open question of Yao (1997) concerning this fault-tolerance model of El Gamal (1984) and Gallager (1988).
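The noise model itself is easy to simulate: every broadcast bit is flipped independently, per listener, with probability ε. The toy protocol below, a naive baseline rather than the paper's construction, has each processor repeat its bit O(log n) times so that a majority vote decodes it reliably; this costs O(n log n) broadcasts in total, which is precisely the overhead the paper's linear-complexity protocols avoid.

```python
# Simulation of the epsilon-noise broadcast model from the abstract: each
# listener receives every broadcast bit flipped independently with
# probability EPS.  The repetition protocol here is an illustrative
# O(n log n) baseline, not the paper's O(n) protocol.
import math
import random

random.seed(1)
EPS = 0.1  # per-listener flip probability

def noisy_or(bits, reps):
    """One designated processor's OR estimate after each of the n
    processors broadcasts its bit `reps` times."""
    decoded = []
    for b in bits:
        # The listener hears `reps` independent noisy copies of b.
        heard = [b ^ (random.random() < EPS) for _ in range(reps)]
        decoded.append(sum(heard) * 2 > reps)   # majority vote
    return any(decoded), len(bits) * reps       # (estimate, total broadcasts)

n = 256
bits = [0] * n
bits[7] = 1                                      # the true OR is 1
reps = 2 * math.ceil(math.log2(n))               # O(log n) repetitions per bit
est, cost = noisy_or(bits, reps)
print(f"estimated OR = {est}, broadcasts used = {cost}")  # O(n log n) cost
```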

31 citations

Proceedings ArticleDOI
13 Jul 2020
TL;DR: A novel framework for the winner selection problem in voting, in which a voting rule is seen as a combination of an elicitation rule and an aggregation rule, is studied; for k-selection rules achieving distortion d, the best communication complexity is shown to be $\tilde{\Theta}(m/(kd))$ when the rule uses deterministic elicitation and $\tilde{\Theta}(m/(kd^3))$ when the rule uses randomized elicitation.
Abstract: In recent work, Mandal et al. [2019] study a novel framework for the winner selection problem in voting, in which a voting rule is seen as a combination of an elicitation rule and an aggregation rule. The elicitation rule asks voters to respond to a query based on their preferences over a set of alternatives, and the aggregation rule aggregates voter responses to return a winning alternative. They study the tradeoff between the communication complexity of a voting rule, which measures the number of bits of information each voter must send in response to its query, and its distortion, which measures the quality of the winning alternative in terms of utilitarian social welfare. They prove upper and lower bounds on the communication complexity required to achieve a desired level of distortion, but their bounds are not tight. Importantly, they also leave open the question whether the best randomized rule can significantly outperform the best deterministic rule. We settle this question in the affirmative. For a winner selection rule to achieve distortion d with m alternatives, we show that the communication complexity required is $\tilde{\Theta}(m/d)$ when using deterministic elicitation, and $\tilde{\Theta}(m/d^3)$ when using randomized elicitation; both bounds are tight up to logarithmic factors. Our upper bound leverages recent advances in streaming algorithms. To establish our lower bound, we derive a new lower bound on a multi-party communication complexity problem. We then study the k-selection problem in voting, where the goal is to select a set of k alternatives. For a k-selection rule that achieves distortion d with m alternatives, we show that the best communication complexity is $\tilde{\Theta}(m/(kd))$ when the rule uses deterministic elicitation and $\tilde{\Theta}(m/(kd^3))$ when the rule uses randomized elicitation. Our optimal bounds yield the non-trivial implication that the k-selection problem becomes strictly easier as k increases.
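To make the distortion measure concrete, the snippet below computes it for a simple baseline rule in which each voter reports only a favourite alternative, i.e. about log2(m) bits of communication: distortion is the ratio of the best achievable utilitarian welfare to the welfare of the rule's winner. The unit-sum utility normalization is the standard assumption in this line of work; the rule itself is a stand-in for illustration, not one of the paper's optimal constructions.

```python
# Numerical illustration of the distortion measure:
# distortion = (best utilitarian welfare) / (welfare of the rule's winner).
# The elicitation rule here (report a single favourite) is an assumed
# baseline, not one of the paper's near-optimal rules.
import numpy as np

rng = np.random.default_rng(0)
n_voters, m = 1000, 8

# Random utilities, normalized so each voter's utilities sum to 1
# (the standard unit-sum assumption in this literature).
U = rng.random((n_voters, m))
U /= U.sum(axis=1, keepdims=True)

favorites = U.argmax(axis=1)                     # each voter's ~log2(m)-bit report
plurality_winner = np.bincount(favorites, minlength=m).argmax()

welfare = U.sum(axis=0)                          # utilitarian welfare per alternative
distortion = welfare.max() / welfare[plurality_winner]
print(f"communication per voter: ~{int(np.ceil(np.log2(m)))} bits, "
      f"distortion on this profile: {distortion:.3f}")
```

The paper's bounds quantify exactly how much further each voter's report must be compressed (or randomized) before the worst-case value of this ratio is forced to grow.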

31 citations


Network Information
Related Topics (5)
Upper and lower bounds: 56.9K papers, 1.1M citations, 84% related
Encryption: 98.3K papers, 1.4M citations, 82% related
Network packet: 159.7K papers, 2.2M citations, 81% related
Server: 79.5K papers, 1.4M citations, 81% related
Wireless network: 122.5K papers, 2.1M citations, 80% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    19
2022    56
2021    161
2020    165
2019    149
2018    141