
Showing papers on "Communication complexity published in 2022"


Proceedings ArticleDOI
11 Jul 2022
TL;DR: This work presents a simple randomized distributed MIS algorithm that, with high probability, has O(1) node-averaged awake complexity and O(log n) worst-case round complexity.
Abstract: Chatterjee, Gmyr, and Pandurangan [PODC 2020] recently introduced the notion of awake complexity for distributed algorithms, which measures the number of rounds in which a node is awake. In the other rounds, the node is sleeping and performs no computation or communication. Measuring the number of awake rounds can be of significance in many settings of distributed computing, e.g., in sensor networks where energy consumption is of concern. In that paper, Chatterjee et al. provide an elegant randomized algorithm for the Maximal Independent Set (MIS) problem that achieves an O(1) node-averaged awake complexity. That is, the average awake time among the nodes is O(1) rounds. However, to achieve that, the algorithm sacrifices the more standard round complexity measure, from the well-known O(log n) bound for MIS due to Luby [STOC'85] to O(log^{3.41} n) rounds. Our first contribution is to present a simple randomized distributed MIS algorithm that, with high probability, has O(1) node-averaged awake complexity and O(log n) worst-case round complexity. Our second and more technical contribution is to show algorithms with the same O(1) node-averaged awake complexity and O(log n) worst-case round complexity for (1+ε)-approximation of maximum matching and (2+ε)-approximation of minimum vertex cover, where ε denotes an arbitrarily small positive constant.
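As context for the O(log n) round-complexity baseline mentioned above, Luby's classical randomized MIS algorithm can be sketched as a small centralized simulation. This is a hedged illustration of the baseline technique only, not the paper's awake-complexity algorithm; the function name and graph representation are illustrative:

```python
import random

def luby_mis(adj, seed=0):
    """Simulate Luby's randomized MIS algorithm on an undirected graph
    given as an adjacency dict {node: set(neighbors)}."""
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        # Each active node picks a random priority.
        mark = {v: rng.random() for v in active}
        # A node joins the MIS if its priority beats all active neighbors.
        winners = {v for v in active
                   if all(mark[v] < mark[u] for u in adj[v] if u in active)}
        mis |= winners
        # Winners and their neighbors are removed (they "go to sleep").
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & active
        active -= removed
    return mis

# 4-cycle: 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
mis = luby_mis(adj)
# mis is independent (no two members adjacent) and maximal
```

With high probability the number of while-loop iterations is O(log n), which is the round bound the paper preserves while also achieving O(1) node-averaged awake complexity.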

9 citations


Proceedings ArticleDOI
24 Mar 2022
TL;DR: This work designs an algorithm against an adaptive adversary that reduces the communication gap by a nearly linear factor, to O(√n · polylog n) bits per process, while keeping almost-optimal (up to a factor of O(log^3 n)) time complexity O(√n · log^{5/2} n).
Abstract: Consensus is one of the most thoroughly studied problems in distributed computing, yet there are still complexity gaps that have not been bridged for decades. In particular, in the classical message-passing setting with processes' crashes, since the seminal works of Bar-Joseph and Ben-Or [PODC 1998] and Aspnes and Waarts [SICOMP 1996, JACM 1998] in the previous century, there is still a fundamental unresolved question about the communication complexity of fast randomized Consensus against a (strong) adaptive adversary crashing processes arbitrarily online. The best known upper bound on the number of communication bits is Θ(n^{3/2}/√log n) per process, while the best lower bound is Ω(1). This is in contrast to randomized Consensus against a (weak) oblivious adversary, for which time-almost-optimal algorithms guarantee amortized O(1) communication bits per process. We design an algorithm against an adaptive adversary that reduces the communication gap by a nearly linear factor, to O(√n · polylog n) bits per process, while keeping almost-optimal (up to a factor of O(log^3 n)) time complexity O(√n · log^{5/2} n). More surprisingly, we show this complexity can indeed be lowered further, but at the expense of increasing time complexity; i.e., there is a trade-off between communication complexity and time complexity. More specifically, our main Consensus algorithm allows reducing communication complexity per process to any value from polylog n to O(√n · polylog n), as long as Time × Communication = O(n · polylog n). Similarly, reducing time complexity requires more random bits per process, i.e., Time × Randomness = O(n · polylog n). Our parameterized consensus solutions are based on a few newly developed paradigms and algorithms for crash-resilient computing, interesting on their own. The first one, called Fuzzy Counting, provides each process with a number that is between the numbers of alive processes at the end and at the beginning of the counting.
Our deterministic Fuzzy Counting algorithm works in O(log^3 n) rounds and uses only O(polylog n) amortized communication bits per process, unlike previous solutions to counting that required Ω(n) bits. This improvement is possible due to a new fault-tolerant Gossip solution that runs in O(log^3 n) rounds using only O(|R| · polylog n) communication bits per process, where |R| is the length of the rumor's binary representation. It exploits a distributed fault-tolerant divide-and-conquer idea, in which processes run a Bipartite Gossip algorithm for a considered partition of the processes. To avoid passing many long messages, processes use a family of small-degree compact expanders for local signaling to their overlay neighbors if they are in a compact (large and well-connected) party, and switch to a denser overlay graph whenever local signaling in the current one fails.

7 citations


Journal ArticleDOI
TL;DR: In this paper , the dimension-free relations between basic communication and query complexity measures and various matrix norms are studied, where the goal is to obtain inequalities that bound a parameter solely as a function of another parameter, in contrast to perhaps the more common framework in communication complexity where poly-logarithmic dependencies on the number of input bits are tolerated.
Abstract: The purpose of this article is to initiate a systematic study of dimension-free relations between basic communication and query complexity measures and various matrix norms. In other words, our goal is to obtain inequalities that bound a parameter solely as a function of another parameter. This is in contrast to perhaps the more common framework in communication complexity where poly-logarithmic dependencies on the number of input bits are tolerated. Dimension-free bounds are also closely related to structural results, where one seeks to describe the structure of Boolean matrices and functions that have low complexity. We prove such theorems for several communication and query complexity measures as well as various matrix and operator norms. In several other cases we show that such bounds do not exist. We propose several conjectures, and establish that, in addition to applications in complexity theory, these problems are central to characterization of the idempotents of the algebra of Schur multipliers, and could lead to new extensions of Cohen’s celebrated idempotent theorem regarding the Fourier algebra.

7 citations


Book ChapterDOI
TL;DR: It is shown that it is possible to perform n independent copies of 1-out-of-2 oblivious transfer in two messages, where the communication complexity of the receiver and sender (each) is n(1 + o(1)) for sufficiently large n.

6 citations


Book ChapterDOI
TL;DR: In this article, the authors construct a secure MPC protocol in the dishonest majority setting with sub-linear communication complexity per gate in the number of parties; for a constant fraction of corrupted parties (e.g., even if 99 percent of the parties are corrupt), it achieves a communication complexity of O(1) field elements per multiplication gate across all parties.
Abstract: In the last few years, the efficiency of secure multi-party computation (MPC) in the dishonest majority setting has increased by several orders of magnitude, starting with the SPDZ protocol family, which offers a speedy information-theoretic online phase in the preprocessing model. However, state-of-the-art n-party MPC protocols in the dishonest majority setting incur online communication complexity per multiplication gate that is linear in the number of parties, i.e., O(n) per gate across all parties. In this work, we construct the first MPC protocols in the preprocessing model for dishonest majority with sub-linear communication complexity per gate in the number of parties n. To achieve our results, we extend the use of packed secret sharing to the dishonest majority setting. For a constant fraction of corrupted parties (e.g., even if 99 percent of the parties are corrupt), we can achieve a communication complexity of O(1) field elements per multiplication gate across all parties. At the crux of our techniques lies a new technique called sharing transformation. The sharing transformation technique allows us to transform shares under one type of linear secret sharing scheme into another, and even perform arbitrary linear maps on the secrets of (packed) secret sharing schemes with optimal communication complexity. This technique can be of independent interest since transferring shares from one type of scheme into another (e.g., for degree reduction) is ubiquitous in MPC. Furthermore, we introduce what we call sparsely packed Shamir sharing, which allows us to address the issue of network routing efficiently, and packed Beaver triples, which are an extension of the widely used technique of Beaver triples for packed secret sharing (for dishonest majority).
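The packed secret sharing idea the abstract builds on can be illustrated with a toy sketch: a single Shamir-style polynomial carries several secrets at reserved evaluation points, and adding secrets reduces to local addition of shares. This is a minimal sketch of the textbook packing technique over a small prime field, not the paper's sharing-transformation protocol; all names and parameters are illustrative:

```python
import random

P = 2**31 - 1  # a prime modulus (illustrative choice)

def lagrange_eval(points, x):
    """Evaluate, at x, the unique polynomial through the (xi, yi) points (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def pack_share(secrets, degree, n, rng):
    """Packed Shamir sharing: one degree-`degree` polynomial carries all the
    secrets, stored at points -1, -2, ... (mod P); parties 1..n each hold a
    single evaluation as their share."""
    k = len(secrets)
    anchors = [((-(i + 1)) % P, s) for i, s in enumerate(secrets)]
    # pad with random evaluations to pin down the degree
    anchors += [(n + 1 + i, rng.randrange(P)) for i in range(degree + 1 - k)]
    return [lagrange_eval(anchors, x) for x in range(1, n + 1)]

rng = random.Random(1)
a = pack_share([3, 5], degree=3, n=6, rng=rng)     # shares of (3, 5)
b = pack_share([10, 20], degree=3, n=6, rng=rng)   # shares of (10, 20)
c = [(x + y) % P for x, y in zip(a, b)]            # local addition of shares

# Any degree+1 = 4 shares reconstruct both packed sums at once.
pts = list(zip(range(1, 5), c[:4]))
assert lagrange_eval(pts, P - 1) == 13  # 3 + 10
assert lagrange_eval(pts, P - 2) == 25  # 5 + 20
```

One polynomial thus amortizes the cost of a share over k secrets, which is the source of the O(1)-elements-per-gate amortization described above.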

6 citations



Journal ArticleDOI
TL;DR: A new secure computation protocol with perfect, optimal resilience and malicious security is constructed that requires (verifiably) sharing only O(n) values per multiplication, with an overall round complexity proportional only to the multiplicative depth of the circuit.

5 citations


Proceedings ArticleDOI
01 Oct 2022
TL;DR: In this article, the complexity of the maximum-cardinality bipartite matching problem is settled up to polylogarithmic factors in five models of computation: the two-party communication, AND query, OR query, XOR query, and quantum edge query models.
Abstract: We settle the complexities of the maximum-cardinality bipartite matching problem (BMM) up to polylogarithmic factors in five models of computation: the two-party communication, AND query, OR query, XOR query, and quantum edge query models. Our results answer open problems that have been raised repeatedly since at least three decades ago [Hajnal, Maass, and Turán STOC'88; Ivanyos, Klauck, Lee, Santha, and de Wolf FSTTCS'12; Dobzinski, Nisan, and Oren STOC'14; Nisan SODA'21] and tighten the lower bounds shown by Beniamini and Nisan [STOC'21] and Zhang [ICALP'04]. We also settle the communication complexity of generalizations of BMM, such as maximum-cost bipartite b-matching and transshipment, and the query complexity of unique bipartite perfect matching (answering an open question by Beniamini [2022]). Our algorithms and lower bounds follow from simple applications of known techniques, such as cutting-plane methods and set disjointness.

4 citations


Book ChapterDOI
TL;DR: In this paper, the authors improve the expected communication complexity of broadcast in constant expected time from O(n^2 L + n^6 log n) to O(nL + n^4 log n) bits, using a new packed verifiable secret sharing.
Abstract: Broadcast is an essential primitive for secure computation. We focus in this paper on optimal resilience (i.e., when the number of corrupted parties t is less than a third of the computing parties n), and with no setup or cryptographic assumptions. While broadcast with worst-case t rounds is impossible, it has been shown [Feldman and Micali STOC'88, Katz and Koo CRYPTO'06] how to construct protocols with an expected constant number of rounds in the private channel model. However, those constructions have large communication complexity, specifically O(n^2 L + n^6 log n) expected bits transmitted for broadcasting a message of length L. This leads to a significant communication blowup in secure computation protocols in this setting. In this paper, we substantially improve the communication complexity of broadcast in constant expected time. Specifically, the expected communication complexity of our protocol is O(nL + n^4 log n). For messages of length L = Ω(n^3 log n), our broadcast has no asymptotic overhead (up to expectation), as each party has to send or receive O(n^3 log n) bits. We also consider parallel broadcast, where n parties wish to broadcast L-bit messages in parallel. Our protocol has no asymptotic overhead for L = Ω(n^2 log n), which is a common communication pattern in perfectly secure MPC protocols. For instance, it is common that all parties share their inputs simultaneously in the same round, and verifiable secret sharing protocols require the dealer to broadcast a total of O(n^2 log n) bits. Of independent interest, our broadcast is achieved via packed verifiable secret sharing, a new notion that we introduce. We show a protocol that verifies O(n) secrets simultaneously with the same cost as verifying just a single secret. This improves the state of the art by a factor of n. Keywords: MPC, Byzantine agreement, Broadcast.

4 citations


Proceedings ArticleDOI
09 Jun 2022
TL;DR: In this paper, the authors show that the sample complexity of the distributed quantum inner product estimation task is Θ(max{1/ε², √d/ε}), where ε ∈ (0,1) is the additive error of the estimate, across all measurement and communication settings.
Abstract: As small quantum computers are becoming available on different physical platforms, a benchmarking task known as cross-platform verification has been proposed that aims to estimate the fidelity of states prepared on two quantum computers. This task is fundamentally distributed, as no quantum communication can be performed between the two physical platforms due to hardware constraints, which prohibits a joint SWAP test. In this paper we settle the sample complexity of this task across all measurement and communication settings. The essence of the task, which we call distributed quantum inner product estimation, involves two players Alice and Bob who have $k$ copies of unknown states $\rho,\sigma$ (acting on $\mathbb{C}^{d}$) respectively. Their goal is to estimate $\mathrm{Tr}(\rho\sigma)$ up to additive error $\varepsilon\in(0,1)$, using local quantum operations and classical communication. In the weakest setting, where only non-adaptive single-copy measurements and simultaneous message passing are allowed, we show that $k=O(\max\{1/\varepsilon^2,\sqrt{d}/\varepsilon\})$ copies suffice. This achieves a saving compared to full tomography, which takes $\Omega(d^3)$ copies with single-copy measurements. Surprisingly, we also show that the sample complexity must be at least $\Omega(\max\{1/\varepsilon^2,\sqrt{d}/\varepsilon\})$, even in the strongest setting where adaptive multi-copy measurements and arbitrary rounds of communication are allowed. This shows that the success achieved by shadow tomography, for sample-efficiently learning the properties of a single system, cannot be generalized to the distributed setting. Furthermore, the fact that the sample complexity remains the same with single- and multi-copy measurements contrasts with single-system quantum property testing, which often demonstrates exponential separations in sample complexity between single- and multi-copy measurements.

4 citations


Journal ArticleDOI
TL;DR: Lower-bound results in complexity theory that have been obtained via newfound interconnections between propositional proof complexity, boolean circuit complexity, and query/communication complexity are surveyed.
Abstract: We survey lower-bound results in complexity theory that have been obtained via newfound interconnections between propositional proof complexity, boolean circuit complexity, and query/communication complexity. We advocate for the theory of total search problems (TFNP) as a unifying language for these connections and discuss how this perspective suggests a whole programme for further research.

Proceedings ArticleDOI
TL;DR: An honest-majority secure multiparty computation protocol in the preprocessing model with information-theoretic security that achieves the best known online communication complexity is presented; its total end-to-end communication cost is linear in n, i.e., 10n + 44 elements, which is larger than the 4n complexity of the state-of-the-art protocols.
Abstract: We present a novel approach to honest majority secure multiparty computation in the preprocessing model with information theoretic security that achieves the best online communication complexity. The online phase of our protocol requires 12 elements in total per multiplication gate with circuit-dependent preprocessing, or 20 elements in total with circuit-independent preprocessing. Prior works achieved online communication complexity linear in n, the number of parties, with the best prior existing solution involving 1.5n elements per multiplication gate. Only one recent work on packing [28] achieves constant online communication complexity, but the constants are large (108 elements for passive security, and twice that for active security). That said, our protocol offers a very efficient information theoretic online phase for any number of parties. The total end-to-end communication cost including the preprocessing phase is linear in n, i.e., 10n + 44, which is larger than the 4n complexity of the state-of-the-art protocols. The gap is not significant when the online phase must be optimized as a priority and a reasonably large number of parties is involved. Unlike previous works based on packed secret sharing to reduce communication complexity, we further reduce the communication by avoiding the use of complex and expensive network routing or permutation tools. Furthermore, we also allow for a maximal honest majority adversary, while most previous works require the set of honest parties to be strictly larger than a majority. Our protocol is simple and offers concrete efficiency. To illustrate this we present a full-fledged implementation together with experimental results that show improvements in online phase runtimes of up to 5x in certain settings (e.g., 45 parties, LAN network, circuit of depth 10 with 1M gates).

Journal ArticleDOI
TL;DR: In this paper , the authors proposed an efficient outsourced private set intersection cardinality (EO-PSI-CA) protocol in the multi-party setting, where two or more parties outsource their private input sets to the cloud.
Abstract: Private set intersection cardinality (PSI-CA) is a useful cryptographic primitive for many data analysis techniques, e.g., in genomic computations and data mining. In the last few years, several classical multi-party PSI-CA protocols have been designed in which the parties jointly compute the PSI-CA and, at the end of the protocol, none of them learns more than their private input sets and the output. The computation complexity of these multi-party protocols is quadratic in the size of the input sets and linear in the number of parties involved in the protocol. In addition, the communication complexity scales quadratically as the number of parties increases. With the advent of cloud computing, it is now desirable to leverage the computation and storage capabilities of the cloud by outsourcing private input sets and the PSI-CA computation. For the first time, in this paper, we design an efficient outsourced private set intersection cardinality protocol, named EO-PSI-CA, in the multi-party setting. This protocol computes the PSI-CA by employing the Bloom filter (BF) technique and the exponential ElGamal cryptosystem over encrypted Bloom filters. In our protocol, two or more parties outsource their private input sets to the cloud, and finally one of the parties requests the EO-PSI-CA value. Due to the use of Bloom filters, the sizes of the parties' sets are independent of each other, and the computational and communication complexity of each party is independent of the total number of parties. We formally prove the security of our protocol in the semi-honest adversarial model, and we claim that our scheme achieves intersection-size hiding. Notably, our EO-PSI-CA is the first of its kind with linear complexity supporting outsourcing in a multi-party setting.
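The Bloom-filter part of the construction can be sketched in the clear (the actual protocol additionally encrypts the filters under exponential ElGamal): each party inserts its set into a Bloom filter, the filters are combined by bitwise AND, and the intersection cardinality is estimated from the resulting bit density. A hedged toy sketch with illustrative parameters and names:

```python
import hashlib
import math

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, [0] * m

    def _positions(self, item):
        # derive k index positions from salted SHA-256 hashes
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h, "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def cardinality(self):
        # standard estimate of the number of inserted items from bit density
        x = sum(self.bits)
        if x == self.m:
            return float("inf")
        return -self.m / self.k * math.log(1 - x / self.m)

def psi_ca_estimate(set_a, set_b, m=8192, k=3):
    fa, fb = BloomFilter(m, k), BloomFilter(m, k)
    for s in set_a:
        fa.add(s)
    for s in set_b:
        fb.add(s)
    inter = BloomFilter(m, k)
    inter.bits = [x & y for x, y in zip(fa.bits, fb.bits)]  # bitwise AND
    return inter.cardinality()

a = {f"id{i}" for i in range(100)}        # 100 elements
b = {f"id{i}" for i in range(70, 170)}    # overlap of 30
est = psi_ca_estimate(a, b)               # roughly 30, with a small upward bias
```

The AND-of-filters estimate slightly overcounts because unrelated bits can coincide; the protocol in the paper works over encrypted filters so neither party sees the other's bits.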

Book ChapterDOI
TL;DR: In this paper , a sublinear-communication protocol for secure evaluation of general layered circuits, given any 2-round rate-1 batch oblivious transfer (OT) protocol with a particular decomposability property, is presented.
Abstract: Secure computation enables mutually distrusting parties to jointly compute a function on their secret inputs, while revealing nothing beyond the function output. A long-running challenge is understanding the required communication complexity of such protocols—in particular, when communication can be sublinear in the circuit representation size of the desired function. For certain functions, such as Private Information Retrieval (PIR), this question extends to even sublinearity in the input size. We develop new techniques expanding the set of computational assumptions for sublinear communication in both settings. Circuit size: We present sublinear-communication protocols for secure evaluation of general layered circuits, given any 2-round rate-1 batch oblivious transfer (OT) protocol with a particular "decomposability" property. In particular, this condition can be shown to hold for the recent batch OT protocols of (Brakerski et al., Eurocrypt 2022), in turn yielding a new feasibility result for sublinear secure computation: from Quadratic Residuosity (QR) together with polynomial-noise-rate Learning Parity with Noise (LPN). Our approach constitutes a departure from existing paths toward sublinear secure computation, all based on fully homomorphic encryption or homomorphic secret sharing. Input size: We construct single-server PIR based on the Computational Diffie-Hellman (CDH) assumption, with polylogarithmic communication in the database input size n. Previous constructions from CDH required communication Ω(n). In hindsight, our construction comprises a relatively simple combination of existing tools from the literature. Keywords: Foundations, Private information retrieval, Secure multiparty computation.

Proceedings ArticleDOI
01 Jul 2022
TL;DR: In this paper, the min-max complexity of distributed stochastic convex optimization in the intermittent communication setting is resolved up to a logarithmic factor, via a novel lower bound with a matching upper bound.
Abstract: We resolve the min-max complexity of distributed stochastic convex optimization (up to a log factor) in the intermittent communication setting, where M machines work in parallel over the course of R rounds of communication to optimize the objective, and during each round of communication, each machine may sequentially compute K stochastic gradient estimates. We present a novel lower bound with a matching upper bound that establishes an optimal algorithm.
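The intermittent communication setting described above can be illustrated with a toy local-SGD simulation: M machines each take K stochastic gradient steps per round and average their iterates at each of R communication rounds. This is a generic illustration of the setting on a one-dimensional quadratic objective, not the paper's optimal algorithm; all names and parameters are illustrative:

```python
import random

def local_sgd(M=4, R=20, K=5, lr=0.1, seed=0):
    """Toy intermittent-communication simulation: M machines each take K
    stochastic gradient steps per round on f(w) = 0.5*(w - 1)^2 with
    noisy gradients, then average their iterates (one communication
    round), for R rounds."""
    rng = random.Random(seed)
    w = 0.0  # shared iterate after each communication round
    for _ in range(R):
        local_iterates = []
        for _ in range(M):
            wi = w
            for _ in range(K):
                grad = (wi - 1.0) + rng.gauss(0, 0.5)  # stochastic gradient
                wi -= lr * grad
            local_iterates.append(wi)
        w = sum(local_iterates) / M  # communication: average the iterates
    return w

w = local_sgd()
# converges near the optimum w* = 1
```

The R × K structure (K sequential gradients between each of R averaging steps) is exactly the communication pattern whose min-max complexity the paper resolves.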

Proceedings ArticleDOI
17 May 2022
TL;DR: This work considers the time and energy complexities of randomized leader election in a multiple-access channel, where the number of devices n ≥ 2 is unknown.
Abstract: We consider the time (number of communication rounds) and energy (number of non-idle communication rounds per device) complexities of randomized leader election in a multiple-access channel, where the number of devices n ≥ 2 is unknown. It is well-known that for polynomial-time randomized leader election algorithms with success probability 1 - 1/poly(n), the optimal energy complexity is Θ(log log* n) if receivers can detect collisions, and it is Θ(log* n) otherwise.
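A classic style of randomized leader election on a multiple-access channel can be sketched as a toy simulation: contenders transmit with geometrically decreasing probability until some round has exactly one transmitter, which the devices can recognize with collision detection. This illustrates the setting only and is not the energy-optimal algorithm discussed above; the function name and sweep strategy are illustrative:

```python
import random

def leader_election(n, seed=0):
    """Toy simulation of randomized leader election on a multiple-access
    channel with collision detection, for an unknown number of devices n:
    in step i of a sweep, every contender transmits with probability 2**-i.
    A step with exactly one transmitter elects that device; a silent step
    means the density was overshot, so the sweep restarts."""
    rng = random.Random(seed)
    rounds = 0
    while True:
        i = 0
        while True:
            i += 1
            rounds += 1
            transmitters = [d for d in range(n) if rng.random() < 2.0 ** -i]
            if len(transmitters) == 1:   # success: unique transmitter heard
                return transmitters[0], rounds
            if not transmitters:         # silence: restart the sweep
                break
            # otherwise: collision detected, keep halving the probability

leader, rounds = leader_election(1000)
# leader is one of the 1000 devices; a sweep takes roughly log n rounds
```

Note that in this naive sketch every device listens in every round, so its energy cost equals its time cost; the point of the results above is that the non-idle rounds per device can be driven down to Θ(log log* n) or Θ(log* n).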

Book ChapterDOI
TL;DR: In this article , Cohen et al. studied communication complexity in computational settings where bad inputs may exist, but they should be hard to find for any computationally bounded adversary, and showed that efficient protocols for equality imply secret key-agreement protocols in a constructive manner.
Abstract: We study communication complexity in computational settings where bad inputs may exist, but they should be hard to find for any computationally bounded adversary. We define a model where there is a source of public randomness but the inputs are chosen by a computationally bounded adversarial participant after seeing the public randomness. We show that breaking the known communication lower bounds of the private coins model in this setting is closely connected to known cryptographic assumptions; we consider the simultaneous messages model and the interactive communication model, and show this for any non-trivial predicate (one with no redundant rows, such as equality). The other model we study is that of a stateful "free talk", where participants can communicate freely before the inputs are chosen and may maintain a state, and the communication complexity is measured only afterwards. We show that efficient protocols for equality in this model imply secret key-agreement protocols in a constructive manner. On the other hand, secret key-agreement protocols imply optimal (in terms of error) protocols for equality.

Journal ArticleDOI
TL;DR: In this paper, a multi-key fully homomorphic encryption (MKFHE) based on the standard assumption (LWE) was proposed, where each party first independently shares their own data between two servers and each server only needs a one-round communication with another to construct a ciphertext of the same plaintext under a sum of associated parties' keys.
Abstract: López-Alt et al. (STOC '12) put forward a primitive called multi-key fully homomorphic encryption (MKFHE), in which each involved party encrypts their own data using keys that are independently and randomly chosen, whereby arbitrary computations can be performed on these encrypted data by a final collector. Subsequently, several superior schemes based on the standard assumption (LWE) were proposed. Most of these schemes were constructed by expanding a fresh GSW-ciphertext or BGV-ciphertext under a single key to a new same-type ciphertext of the same message under a combination of the associated parties' keys. Therefore, the new ciphertext's size grew more or less linearly with an increase in the number of parties. In this paper, we propose a novel and simple MKFHE scheme based on LWE that does not increase the ciphertext size, in the model of two non-colluding servers. In other words, each party first independently shares their own data between the two servers, and each server needs only one round of communication with the other to construct a ciphertext of the same plaintext under a sum of the associated parties' keys. Our new ciphertext under multiple keys has the same size as the original one, with only one round of communication between the two servers. The communication complexity is O(km log q) bits, where k is the number of input ciphertexts involved, m is the size of a GSW-ciphertext, and q is a modulus. Finally, we prove that our scheme is CPA-secure against semi-honest adversaries.
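The two non-colluding server model used above can be illustrated, in a much simpler setting, with plain additive secret sharing: each party splits its input between the two servers, and the servers aggregate locally without either one seeing any input. This is a hedged sketch of the sharing pattern only (the paper's scheme layers MKFHE on top of this model); modulus and names are illustrative:

```python
import random

q = 2**61 - 1  # modulus for additive sharing (illustrative choice)
rng = random.Random(42)

def share(x):
    """Split x into two additive shares mod q, one per server."""
    r = rng.randrange(q)
    return r, (x - r) % q

# Each party independently shares its private input between the two servers.
inputs = [17, 25, 99]
server1, server2 = [], []
for x in inputs:
    s1, s2 = share(x)
    server1.append(s1)
    server2.append(s2)

# Each server aggregates locally; neither server alone learns any input.
agg1 = sum(server1) % q
agg2 = sum(server2) % q
assert (agg1 + agg2) % q == sum(inputs)  # recombining yields the sum, 141
```

Each individual share is uniformly random, so security rests entirely on the two servers not colluding, which is the same trust assumption the paper's construction relies on.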

Journal ArticleDOI
TL;DR: For combinatorial auctions with polynomial communication, the first separation in the approximation guarantee achievable by truthful and nontruthful algorithms is established in this paper.
Abstract: We provide the first separation in the approximation guarantee achievable by truthful and nontruthful algorithms for combinatorial auctions with polynomial communication. Specifically, we prove that any truthful mechanism guaranteeing a $(\frac{3}{4}-\frac{1}{240}+\varepsilon)$-approximation for two buyers with XOS valuations over $m$ items requires $\exp(\Omega(\varepsilon^2 \cdot m))$ communication, whereas a nontruthful algorithm by Dobzinski and Schapira [Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), SIAM, 2006, pp. 1064--1073] and Feige [SIAM J. Comput., 39 (2009), pp. 122--142] is already known to achieve a $\frac{3}{4}$-approximation in $\mathrm{poly}(m)$ communication. We obtain our separation by proving that any simultaneous protocol (not necessarily truthful) which guarantees a $(\frac{3}{4}-\frac{1}{240}+\varepsilon)$-approximation requires communication $\exp(\Omega(\varepsilon^2 \cdot m))$. The taxation complexity framework of Dobzinski [Proceedings of the 57th Annual Symposium on Foundations of Computer Science (FOCS), IEEE, 2016, pp. 209--218] extends this lower bound to all truthful mechanisms (including interactive truthful mechanisms).

Proceedings ArticleDOI
01 Feb 2022
TL;DR: In this paper , a direct product theorem for the entanglement-assisted interactive quantum communication complexity in terms of the quantum partition bound for product distributions was given for two-input functions or relations whose outputs are non-boolean as well.
Abstract: We give a direct product theorem for the entanglement-assisted interactive quantum communication complexity in terms of the quantum partition bound for product distributions. The quantum partition or efficiency bound is a lower bound on communication complexity, a non-distributional version of which was introduced by Laplante, Lerays and Roland (2012). For a two-input boolean function, the best result for interactive quantum communication complexity known previously was due to Sherstov (2018), who showed a direct product theorem in terms of the generalized discrepancy. While there is no direct relationship between the maximum distributional quantum partition bound for product distributions, and the generalized discrepancy method, unlike Sherstov's result, our result works for two-input functions or relations whose outputs are non-boolean as well. As an application of our result, we show that it is possible to do device-independent quantum key distribution (DIQKD) without the assumption that devices do not leak any information after inputs are provided to them. We analyze the DIQKD protocol given by Jain, Miller and Shi (2020), and show that when the protocol is carried out with devices that are compatible with several copies of the Magic Square game, it is possible to extract a linear (in the number of copies of the game) amount of key from it, even in the presence of a linear amount of leakage. Our security proof is parallel, i.e., the honest parties can enter all their inputs into their devices at once, and works for a leakage model that is arbitrarily interactive, i.e., the devices of the honest parties Alice and Bob can exchange information with each other and with the eavesdropper Eve in any number of rounds, as long as the total number of bits or qubits communicated is bounded.

Book ChapterDOI
27 Oct 2022
TL;DR: In this paper , the authors considered the problem of proving lower bounds on the communication complexity of quantum k-party protocols in the setting of oblivious communication, where the communication pattern and the amount of communication exchanged between each pair of players at each round is fixed independently of the input before the execution of the protocol.
Abstract: The main conceptual contribution of this paper is investigating quantum multiparty communication complexity in the setting where communication is oblivious. This requirement, which to our knowledge is satisfied by all quantum multiparty protocols in the literature, means that the communication pattern, and in particular the amount of communication exchanged between each pair of players at each round, is fixed independently of the input before the execution of the protocol. We show, for a wide class of functions, how to prove strong lower bounds on their oblivious quantum k-party communication complexity using lower bounds on their two-party communication complexity. We apply this technique to prove tight lower bounds for all symmetric functions with the AND gadget, and in particular obtain an optimal Ω(k√n) lower bound on the oblivious quantum k-party communication complexity of the n-bit Set-Disjointness function. We also show the tightness of these lower bounds by giving (nearly) matching upper bounds.

Proceedings ArticleDOI
01 May 2022
TL;DR: In this article, the authors present a protocol whose message complexity is two when there are sufficiently many users, and show that the error introduced by the protocol is small, using rigorous analysis as well as experiments on real-world data.
Abstract: There has been much recent work in the shuffle model of differential privacy, particularly for approximate d-bin histograms. While these protocols achieve low error, the number of messages sent by each user—the message complexity—has so far scaled with d or the privacy parameters. The message complexity is an informative predictor of a shuffle protocol’s resource consumption. We present a protocol whose message complexity is two when there are sufficiently many users. The protocol essentially pairs each row in the dataset with a fake row and performs a simple randomization on all rows. We show that the error introduced by the protocol is small, using rigorous analysis as well as experiments on real-world data. We also prove that corrupt users have a relatively low impact on our protocol’s estimates.
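A rough sketch of the pairing idea described above; the randomizer, the parameter choices, and the debiasing step are our illustrative assumptions, not the paper's exact protocol. Each user sends exactly two messages (a randomized real row and a randomized uniformly fake row), and the analyzer debiases the aggregate counts.

```python
import random

def local_randomizer(value, d, keep_prob, rng):
    # Report the true bin with probability keep_prob, else a uniform bin.
    return value if rng.random() < keep_prob else rng.randrange(d)

def user_reports(value, d, keep_prob, rng):
    # Message complexity two: one randomized real row plus one randomized
    # fake (uniformly random) row.
    return [local_randomizer(value, d, keep_prob, rng),
            local_randomizer(rng.randrange(d), d, keep_prob, rng)]

def estimate_histogram(reports, d, n, keep_prob):
    counts = [0] * d
    for r in reports:
        counts[r] += 1
    # Each user sent one randomized real row and one (effectively uniform)
    # fake row, so E[counts[j]] = keep_prob * h_j + (2 - keep_prob) * n / d.
    return [(counts[j] - (2 - keep_prob) * n / d) / keep_prob
            for j in range(d)]

rng = random.Random(0)
n, d, q = 20_000, 4, 0.75
data = [0] * 16_000 + [1, 2, 3] * 1_333 + [1]   # skewed toward bin 0
reports = [r for v in data for r in user_reports(v, d, q, rng)]
est = estimate_histogram(reports, d, n, q)
```

Note that the debiased estimates sum to exactly n, since the total report count is always 2n regardless of the coin flips.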

Book ChapterDOI
TL;DR: In this paper, a tight bound on the oblivious-transfer (OT) complexity of p-noisy coin-tossing was shown for semi-honest 2-party protocols, in which Alice and Bob obtain fair coins that are of opposite values with probability p. The bound is obtained via a general connection between the OT complexity of randomized functions and the complexity of secure zero communication reductions.
Abstract: In p-noisy coin-tossing, Alice and Bob obtain fair coins which are of opposite values with probability p. Its Oblivious-Transfer (OT) complexity refers to the least number of OTs required by a semi-honest perfectly secure 2-party protocol for this task. We show a tight bound of \(\varTheta(\log 1/p)\) for the OT complexity of p-noisy coin-tossing. This is the first instance of a lower bound for OT complexity that is independent of the input/output length of the function. We obtain our result by providing a general connection between the OT complexity of randomized functions and the complexity of Secure Zero Communication Reductions (SZCR), as recently defined by Narayanan et al. (TCC 2020), and then showing a lower bound for the complexity of an SZCR from noisy coin-tossing to (a predicate corresponding to) OT.
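The p-noisy coin-tossing functionality itself is easy to sample. The sketch below is ours and purely illustrative: Alice's coin is fair, and Bob's copy is flipped with probability p.

```python
import random

def noisy_coin_toss(p, rng):
    """One round of p-noisy coin-tossing: Alice gets a fair coin a, and
    Bob's coin b is made opposite to a with probability p."""
    a = rng.randrange(2)
    b = a ^ (1 if rng.random() < p else 0)
    return a, b

rng = random.Random(0)
samples = [noisy_coin_toss(0.25, rng) for _ in range(100_000)]
disagree = sum(a != b for a, b in samples) / len(samples)
print(disagree)  # empirically close to p = 0.25
```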

Posted ContentDOI
06 Dec 2022
TL;DR: In this paper, the authors consider the LOCAL model of distributed computing, where in a single round of communication each node can send a message of arbitrary size to each of its neighbors.
Abstract: We consider the LOCAL model of distributed computing, where in a single round of communication each node can send to each of its neighbors a message of an arbitrary size. It is known that, classically, the round complexity of 3-coloring an $n$-node ring is $\Theta(\log^*\!n)$. In the case where communication is quantum, only trivial bounds were known: at least some communication must take place. We study distributed algorithms for coloring the ring that perform only a single round of one-way communication. Classically, such limited communication is already known to reduce the number of required colors from $\Theta(n)$, when there is no communication, to $\Theta(\log n)$. In this work, we show that the probability that any quantum single-round one-way distributed algorithm outputs a proper $3$-coloring is exponentially small in $n$.
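The classical $\Theta(\log n)$ single-round bound mentioned above is achieved by the classic Cole-Vishkin bit trick. Here is a minimal sketch of one such round on a directed ring, under the assumption that each node learns only its predecessor's color; it is generic background, not this paper's quantum result.

```python
import random

def reduce_color(own, pred):
    """Single-round, one-way color reduction (the Cole-Vishkin bit trick):
    given a node's own color and its ring predecessor's color (two distinct
    non-negative ints), output 2*i + b, where i is the lowest bit position
    at which they differ and b is the node's own bit there. Adjacent
    outputs are guaranteed to differ: if two neighbors pick the same i,
    their bits at position i differ by the choice of i."""
    diff = own ^ pred
    i = (diff & -diff).bit_length() - 1  # index of the lowest set bit
    return 2 * i + ((own >> i) & 1)

n = 64
old = list(range(n))            # Theta(n) initial colors (unique ids)
random.Random(1).shuffle(old)   # arbitrary placement around the ring
new = [reduce_color(old[v], old[(v - 1) % n]) for v in range(n)]
print(max(old) + 1, "->", max(new) + 1)  # colors drop to O(log n)
```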

Proceedings ArticleDOI
01 Nov 2022
TL;DR: In this article, a generic construction of t-private PIR schemes using circulant permutation matrices (CPMs) was proposed, which perfectly protects the user's privacy against any collusion of up to t servers.
Abstract: Private information retrieval (PIR) schemes allow a user to retrieve entries of a database without revealing the index of the desired item. The focus of this paper lies on constructions of PIR schemes with optimal computational complexity for the servers, which play a crucial part in fast retrieval. This paper first proposes a generic construction of t-private PIR schemes using circulant permutation matrices (CPMs), which can protect the user’s perfect privacy from any collusion of up to t servers. Then this paper takes Byzantine and unresponsive servers into account in the t-private PIR schemes using CPMs and proposes a generic construction of t-private robust PIR schemes using CPMs. The proposed constructions of PIR schemes enjoy the advantages of optimal computational complexity for the servers, competitive user computational complexity, acceptable communication complexity, low memory space for storing all possible queries for the user, and low encoding complexity upon encoding the database.
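The appeal of circulant permutation matrices for server-side computation is that multiplying by one is just a cyclic shift, with no arithmetic. The sketch below illustrates only this generic property, not the paper's actual query construction.

```python
def cpm(n, k):
    """n x n circulant permutation matrix that cyclically shifts by k:
    row i has its single 1 in column (i + k) mod n."""
    return [[1 if (j - i) % n == k else 0 for j in range(n)]
            for i in range(n)]

def apply_cpm(vec, k):
    """Multiplying a vector by cpm(n, k) is just a cyclic shift -- no
    multiplications at all, which is why CPM-structured queries are cheap
    for the servers to process."""
    n = len(vec)
    return [vec[(i + k) % n] for i in range(n)]

print(apply_cpm([10, 20, 30, 40, 50], 2))  # cyclic shift by 2
```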

Posted ContentDOI
07 Nov 2022
TL;DR: In this paper, the authors introduce certificate game complexity, a measure of complexity based on the probability of winning a game where two players are given inputs with different function values and are asked to output $i$ such that $x_i \neq y_i$ (zero-communication setting).
Abstract: We introduce and study Certificate Game complexity, a measure of complexity based on the probability of winning a game where two players are given inputs with different function values and are asked to output $i$ such that $x_i \neq y_i$ (zero-communication setting). We give upper and lower bounds for private coin, public coin, shared entanglement and non-signaling strategies, and give some separations. We show that complexity in the public coin model is upper bounded by randomized query and certificate complexity. On the other hand, it is lower bounded by fractional and randomized certificate complexity, making it a good candidate to prove strong lower bounds on randomized query complexity. Complexity in the private coin model is bounded from below by zero-error randomized query complexity. The quantum measure highlights an interesting and surprising difference between classical and quantum query models. Whereas the public coin certificate game complexity is bounded from above by randomized query complexity, the quantum certificate game complexity can be quadratically larger than quantum query complexity. We use non-signaling, a notion from quantum information, to give a lower bound of $n$ on the quantum certificate game complexity of the $OR$ function, whose quantum query complexity is $\Theta(\sqrt{n})$, then go on to show that this ``non-signaling bottleneck'' applies to all functions with high sensitivity, block sensitivity or fractional block sensitivity. We consider the single-bit version of certificate games (inputs of the two players have Hamming distance $1$). We prove that the single-bit version of certificate game complexity with shared randomness is equal to sensitivity up to constant factors, giving a new characterization of sensitivity. The single-bit version with private randomness is equal to $\lambda^2$, where $\lambda$ is the spectral sensitivity.
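The game itself is easy to simulate. The toy Monte Carlo below is ours, not the paper's analysis: it plays the zero-communication game on the OR function with $x = 0^n$ and $y$ containing a single 1, and shows the gap between public-coin and private-coin strategies that guess a uniformly random index.

```python
import random

def play(n, shared, trials, rng):
    """Certificate game on OR: x = all zeros, y has a single 1 at a random
    position j; the players win iff both output j without communicating.
    With a public coin they can output the same shared random index
    (win prob 1/n); with private coins their guesses are independent
    (win prob 1/n^2)."""
    wins = 0
    for _ in range(trials):
        j = rng.randrange(n)  # the unique index where x and y differ
        if shared:
            guess = rng.randrange(n)         # one shared random index
            wins += (guess == j)
        else:
            a, b = rng.randrange(n), rng.randrange(n)  # independent guesses
            wins += (a == j and b == j)
    return wins / trials

rng = random.Random(0)
n = 8
public = play(n, True, 200_000, rng)
private = play(n, False, 200_000, rng)
print(public, private)  # roughly 1/8 vs 1/64
```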

Proceedings ArticleDOI
01 Jun 2022
TL;DR: In this paper, the authors make an in-depth investigation of the performance-complexity trade-off of low-complexity MIMO signal detection based on message passing algorithms and propose a new simplified BP detection scheme that combines the advantages of QR precoding and interference cancellation.
Abstract: In this study, we intend to make an in-depth investigation of the performance-complexity trade-off of low complexity Multiple-Input Multiple-Output (MIMO) signal detection based on message passing algorithms. Several detection algorithms such as Belief Propagation (BP) and Expectation Propagation (EP) have been proposed to approximate symbol Maximum A Posteriori (MAP) for high dimensional signaling. We propose a thorough examination of those algorithms and some of their low-complexity versions, through a complexity/performance trade-off analysis to identify modes of operation depending on the number of antennas and constellation order. Finally, we propose a new simplified BP detection scheme, which combines the advantages of QR precoding and Interference Cancellation (IC).

Posted ContentDOI
02 Apr 2022
TL;DR: In this article, the authors propose a 2-dimensional sharding strategy that inherently supports cross-shard transactions, alleviating the need for complicated communication protocols between shards while keeping the computation and storage benefits of sharding.
Abstract: Although blockchain, the supporting technology of various cryptocurrencies, has offered a potentially effective framework for numerous decentralized trust management systems, its performance is still sub-optimal in real-world networks. With limited bandwidth, the communication complexity for nodes to process a block scales with the growing network size and hence becomes the limiting factor of blockchain's performance. In this paper, we suggest a re-design of existing blockchain systems, which addresses the issue of the communication burden. First, by employing techniques from Coded Computation, our scheme guarantees correct verification of transactions while reducing the bit complexity dramatically such that it grows logarithmically with the number of nodes. Second, with the adoption of techniques from Information Dispersal and State Machine Replication, the system is resilient to Byzantine faults and achieves linear message complexity. Third, we propose a novel 2-dimensional sharding strategy, which inherently supports cross-shard transactions, alleviating the need for complicated communication protocols between shards, while keeping the computation and storage benefits of sharding.
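A back-of-envelope comparison shows why information dispersal cuts communication relative to replication. The sketch below assumes a standard Rabin-style (n, k) erasure code with k = n - 2f for tolerance to f Byzantine nodes; this is generic background, not the paper's exact scheme or parameters.

```python
from fractions import Fraction

def replication_bits(block_bits, n):
    # Naive broadcast: every one of the n nodes receives the whole block.
    return n * block_bits

def dispersal_bits(block_bits, n, f):
    # Rabin-style information dispersal with an (n, k) erasure code; taking
    # k = n - 2f keeps the block recoverable despite f Byzantine nodes,
    # and each node receives only one coded fragment of |B| / k bits.
    k = n - 2 * f
    return n * Fraction(block_bits, k)

n, f, block = 100, 33, 8_000_000  # a 1 MB block over 100 nodes
print(replication_bits(block, n), float(dispersal_bits(block, n, f)))
```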

Posted ContentDOI
04 Jan 2022
TL;DR: In this article, the message complexity of state machine replication protocols with Byzantine failures in the partial synchrony model is studied, and it is shown that Dolev and Reischuk's quadratic lower bound is tight.
Abstract: We consider the message complexity of State Machine Replication protocols dealing with Byzantine failures in the partial synchrony model. A result of Dolev and Reischuk gives a quadratic lower bound for the message complexity, but it was unknown whether this lower bound is tight, with the most efficient known protocols giving worst-case message complexity $O(n^3)$. We describe a protocol which meets Dolev and Reischuk's quadratic lower bound, while also satisfying other desirable properties. To specify these properties, suppose that we have $n$ replicas, $f$ of which display Byzantine faults (with $n\geq 3f+1$). Suppose that $\Delta$ is an upper bound on message delay, i.e. if a message is sent at time $t$, then it is received by time $\max\{t, \mathrm{GST}\}+\Delta$. We describe a deterministic protocol that simultaneously achieves $O(n^2)$ worst-case message complexity, optimistic responsiveness, $O(f\Delta)$ time to first confirmation after $GST$ and $O(n)$ mean message complexity.
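The assumption $n \geq 3f+1$ buys the standard quorum-intersection property that such protocols rely on: any two quorums of size $n - f$ share at least $f + 1$ replicas, hence at least one correct one. The quick arithmetic check below is generic BFT background, not specific to this protocol.

```python
def quorum_intersection(n, f):
    """With n >= 3f+1 replicas and quorums of size n - f, any two quorums
    intersect in at least 2(n - f) - n = n - 2f >= f + 1 replicas, so the
    intersection always contains at least one correct replica."""
    quorum = n - f
    min_overlap = 2 * quorum - n  # inclusion-exclusion lower bound
    return quorum, min_overlap

for f in range(5):
    n = 3 * f + 1
    quorum, overlap = quorum_intersection(n, f)
    assert overlap >= f + 1  # at least one correct replica in any overlap
    print(f"n={n}, f={f}: quorum={quorum}, overlap>={overlap}")
```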