
Showing papers on "Communication complexity published in 2016"


Book ChapterDOI
08 May 2016
TL;DR: In this article, a zero-knowledge argument for arithmetic circuit satisfiability with a communication complexity that grows logarithmically in the size of the circuit was proposed, where the soundness of the argument relies solely on the well-established discrete logarithm assumption in prime order groups.
Abstract: We provide a zero-knowledge argument for arithmetic circuit satisfiability with a communication complexity that grows logarithmically in the size of the circuit. The round complexity is also logarithmic and for an arithmetic circuit with fan-in 2 gates the computation of the prover and verifier is linear in the size of the circuit. The soundness of our argument relies solely on the well-established discrete logarithm assumption in prime order groups. At the heart of our new argument system is an efficient zero-knowledge argument of knowledge of openings of two Pedersen multicommitments satisfying an inner product relation, which is of independent interest. The inner product argument requires logarithmic communication, logarithmic interaction and linear computation for both the prover and the verifier. We also develop a scheme to commit to a polynomial and later reveal the evaluation at an arbitrary point, in a verifiable manner. This is used to build an optimized version of the constant round square root complexity argument of Groth CRYPTO 2009, which reduces both communication and round complexity.
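The Pedersen multicommitments at the heart of this argument are additively homomorphic, which is what lets the prover fold an inner product claim in half at each round. Below is a minimal sketch of the commitment and its homomorphic property, with toy parameters of our own choosing; a real deployment uses a cryptographically large group and a second generator whose discrete log relative to the first is unknown.

```python
# Toy Pedersen commitment in the prime-order subgroup of squares mod P.
# Parameters are tiny and FOR ILLUSTRATION ONLY.

P = 1019          # safe prime: P = 2*Q + 1
Q = 509           # prime order of the subgroup of squares mod P
G = pow(2, 2, P)  # generator of the order-Q subgroup (a nonzero square)
H = pow(3, 2, P)  # second generator; dlog_G(H) must be unknown in practice

def commit(m, r):
    """Commit to message m with randomness r (both in Z_Q)."""
    return (pow(G, m, P) * pow(H, r, P)) % P

def open_check(c, m, r):
    """Verify that commitment c opens to (m, r)."""
    return c == commit(m, r)

# Additive homomorphism: multiplying commitments commits to the sum of
# the messages (and the sum of the randomness). Inner-product arguments
# build on exactly this property.
c1 = commit(7, 13)
c2 = commit(5, 21)
both = (c1 * c2) % P
```

Here `both` opens to the message 7 + 5 with randomness 13 + 21, without either original opening being revealed.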

220 citations


Journal ArticleDOI
TL;DR: This work presents a tripartite communication task for which such a superposition of the direction of communication allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when the authors allow for protocols with bounded error probability.
Abstract: In communication complexity, a number of distant parties have the task of calculating a distributed function of their inputs, while minimizing the amount of communication between them. It is known that with quantum resources, such as entanglement and quantum channels, one can obtain significant reductions in the communication complexity of some tasks. In this work, we study the role of the quantum superposition of the direction of communication as a resource for communication complexity. We present a tripartite communication task for which such a superposition allows for an exponential saving in communication, compared to one-way quantum (or classical) communication; the advantage also holds when we allow for protocols with bounded error probability.

186 citations


Posted Content
TL;DR: An accelerated variant of the DANE algorithm, called AIDE, is proposed that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle.
Abstract: In this paper, we present two new communication-efficient methods for distributed minimization of an average of functions. The first algorithm is an inexact variant of the DANE algorithm that allows any local algorithm to return an approximate solution to a local subproblem. We show that such a strategy does not affect the theoretical guarantees of DANE significantly. In fact, our approach can be viewed as a robustification strategy, since the method is substantially better behaved than DANE on data partitions arising in practice. It is well known that the DANE algorithm does not match the communication complexity lower bounds. To bridge this gap, we propose an accelerated variant of the first method, called AIDE, that not only matches the communication lower bounds but can also be implemented using a purely first-order oracle. Our empirical results show that AIDE is superior to other communication-efficient algorithms in settings that naturally arise in machine learning applications.
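The DANE-style update can be sketched in one dimension, where each local subproblem has a closed form. Everything below (the quadratic toy objectives, step parameters eta = 1 and mu = 0, and the exact local solves) is our simplification for illustration; the paper's point is precisely that approximate local solves suffice, with AIDE adding acceleration on top.

```python
# Schematic DANE iteration on a 1-D toy: machine i holds the local
# quadratic f_i(w) = a_i/2 * w^2 - b_i * w; the global objective is the
# average of the f_i.  One round of communication per step computes the
# global gradient; each machine then solves its local subproblem.

a = [1.0, 1.5, 2.0]   # local curvatures
b = [1.0, 2.0, 3.0]   # local linear terms
m = len(a)
A = sum(a) / m        # curvature of the global objective
B = sum(b) / m        # linear term of the global objective
w_star = B / A        # global minimizer

def dane_step(w):
    # Local subproblem argmin_w f_i(w) - (f_i'(w_t) - f'(w_t)) * w
    # has the closed form w_t - f'(w_t) / a_i for quadratics; average.
    g = A * w - B                         # global gradient (one round)
    return sum(w - g / ai for ai in a) / m

w = 0.0
for _ in range(30):
    w = dane_step(w)
```

Each machine effectively preconditions the global gradient by its own local curvature, which is why the iteration converges much faster than plain gradient descent when the local data distributions are similar.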

140 citations


Proceedings ArticleDOI
19 Jun 2016
TL;DR: In this article, the authors studied the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions and provided a lower bound for the distributed sparse linear regression problem: to achieve the statistical minimax error, the total communication is at least Ω(min{n,d}m), where n is the number of observations that each machine receives and d is the ambient dimension.
Abstract: We study the tradeoff between the statistical error and communication cost of distributed statistical estimation problems in high dimensions. In the distributed sparse Gaussian mean estimation problem, each of the m machines receives n data points from a d-dimensional Gaussian distribution with unknown mean θ which is promised to be k-sparse. The machines communicate by message passing and aim to estimate the mean θ. We provide a tight (up to logarithmic factors) tradeoff between the estimation error and the number of bits communicated between the machines. This directly leads to a lower bound for the distributed sparse linear regression problem: to achieve the statistical minimax error, the total communication is at least Ω(min{n,d}m), where n is the number of observations that each machine receives and d is the ambient dimension. These lower bounds improve upon Shamir (NIPS'14) and Steinhardt-Duchi (COLT'15) by allowing a multi-round iterative communication model. We also give the first optimal simultaneous protocol in the dense case for mean estimation. As our main technique, we prove a distributed data processing inequality, as a generalization of usual data processing inequalities, which might be of independent interest and useful for other problems.

123 citations


Proceedings Article
01 Jan 2016
TL;DR: This paper provides competitive convergence guarantees for without-replacement sampling under several scenarios, focusing on the natural regime of few passes over the data, yielding a nearly-optimal algorithm for regularized least squares under broad parameter regimes.
Abstract: Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled *with* replacement. In contrast, sampling *without* replacement is far less understood, yet in practice it is very common, often easier to implement, and usually performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling under several scenarios, focusing on the natural regime of few passes over the data. Moreover, we describe a useful application of these results in the context of distributed optimization with randomly-partitioned data, yielding a nearly-optimal algorithm for regularized least squares (in terms of both communication complexity and runtime complexity) under broad parameter regimes. Our proof techniques combine ideas from stochastic optimization, adversarial online learning and transductive learning theory, and can potentially be applied to other stochastic optimization and learning problems.
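The two sampling schemes being contrasted differ only in how indices are drawn in each pass over the data. A toy least-squares sketch (the data, step size, and epoch count are our choices, purely illustrative):

```python
import random

# Contrast between with- and without-replacement SGD on a 1-D
# least-squares toy: f(w) = average of (w*x_i - y_i)^2 / 2.

random.seed(0)
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x

def loss(w):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * len(xs))

def sgd(without_replacement, epochs=50, lr=0.02):
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        if without_replacement:
            order = list(range(n))
            random.shuffle(order)   # each point seen exactly once per pass
        else:
            order = [random.randrange(n) for _ in range(n)]  # i.i.d. draws
        for i in order:
            w -= lr * (w * xs[i] - ys[i]) * xs[i]   # gradient of one term
    return w

w_wo = sgd(without_replacement=True)
w_wr = sgd(without_replacement=False)
```

Both variants converge to a neighborhood of the least-squares solution here; the paper's contribution is showing that the shuffled (without-replacement) variant, despite its non-independent samples, admits competitive convergence guarantees.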

104 citations


Journal ArticleDOI
TL;DR: A variation of the algebraic method based on 2^k evaluations of the circuit over a suitable algebra can break the trivial upper bounds for the disjoint summation problem and is applied to problems in exact counting.
Abstract: The fastest known randomized algorithms for several parameterized problems use reductions to the k-MLD problem: detection of multilinear monomials of degree k in polynomials presented as circuits. The fastest known algorithm for k-MLD is based on 2^k evaluations of the circuit over a suitable algebra. We use communication complexity to show that it is essentially optimal within this evaluation framework. On the positive side, we give additional applications of the method: finding a copy of a given tree on k nodes, a minimum set of nodes that dominate at least t nodes, and an m-dimensional k-matching. In each case, we achieve a faster algorithm than what was known before. We also apply the algebraic method to problems in exact counting. Among other results, we show that a variation of it can break the trivial upper bounds for the disjoint summation problem.
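As a minimal illustration of the evaluation paradigm, though not the group-algebra construction used for k-MLD itself, Schwartz-Zippel identity testing likewise treats an arithmetic circuit as a black box queried only through evaluations:

```python
import random

# Schwartz-Zippel randomized identity testing: a nonzero polynomial of
# degree d over a field of size q evaluates to zero at a uniformly
# random point with probability at most d/q.

PRIME = (1 << 61) - 1   # a large Mersenne prime; work in the field Z_PRIME

def probably_zero(poly, nvars, trials=20):
    """poly: black-box function taking a list of field elements."""
    random.seed(42)   # fixed seed for reproducibility of this sketch
    for _ in range(trials):
        point = [random.randrange(PRIME) for _ in range(nvars)]
        if poly(point) % PRIME != 0:
            return False        # witness found: definitely nonzero
    return True                 # identically zero with high probability

def zero_poly(v):
    # (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
    return (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2

def nonzero_poly(v):
    return v[0] * v[1] + 1
```

The lower bound in the paper says, roughly, that within any framework of this shape, where the algorithm may only evaluate the circuit over an algebra, about 2^k evaluations are unavoidable for degree-k multilinear detection.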

94 citations


Posted Content
TL;DR: In this paper, the authors proposed a new protocol for secure three-party computation of any functionality, with an honest majority and a malicious adversary, which is distinguished by extremely low communication complexity and very simple computation.
Abstract: In this paper, we describe a new protocol for secure three-party computation of any functionality, with an honest majority and a malicious adversary. Our protocol has both an information-theoretic and computational variant, and is distinguished by extremely low communication complexity and very simple computation. We start from the recent semi-honest protocol of Araki et al. (ACM CCS 2016) in which the parties communicate only a single bit per AND gate, and modify it to be secure in the presence of malicious adversaries. Our protocol follows the paradigm of first constructing Beaver multiplication triples and then using them to verify that circuit gates are correctly computed. As in previous work (e.g., the so-called TinyOT and SPDZ protocols), we rely on the cut-and-choose paradigm to verify that triples are correctly constructed. We are able to utilize the fact that at most one of three parties is corrupted in order to construct an extremely simple and efficient method of constructing such triples. We also present an improved combinatorial analysis for this cut-and-choose which can be used to achieve improvements in other protocols using this approach.
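The Beaver-triple paradigm mentioned above can be sketched with a trusted dealer standing in for the triple-generation phase. This toy (field choice and party count are ours) is semi-honest only; the protocol's contribution is making the approach maliciously secure via cut-and-choose verification of the triples.

```python
import random

# Beaver-triple multiplication over Z_P with 3-party additive sharing.

P = 2 ** 31 - 1     # field modulus (a Mersenne prime)
random.seed(1)

def share(x):
    """Split x into 3 additive shares mod P."""
    s0, s1 = random.randrange(P), random.randrange(P)
    return [s0, s1, (x - s0 - s1) % P]

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh):
    # A trusted dealer hands out shares of a random triple (a, b, a*b);
    # the real protocol generates and verifies such triples itself.
    a, b = random.randrange(P), random.randrange(P)
    a_sh, b_sh, c_sh = share(a), share(b), share(a * b % P)
    # Parties open d = x - a and e = y - b; this is the only interaction.
    d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    # Locally: x*y = d*e + d*b + e*a + c, so each party combines its
    # shares, and the public constant d*e is added to one share only.
    z_sh = [(d * bi + e * ai + ci) % P
            for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

x_sh, y_sh = share(123456), share(654321)
z_sh = beaver_multiply(x_sh, y_sh)
```

The opened values d and e are uniformly random (masked by a and b), so they leak nothing about x and y; this is why a precomputed triple reduces secure multiplication to two cheap openings.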

80 citations


Journal ArticleDOI
TL;DR: A proof-of-principle experimental demonstration of a quantum fingerprinting protocol that for the first time surpasses the ultimate classical limit to transmitted information.
Abstract: Quantum communication has historically been at the forefront of advancements, from fundamental tests of quantum physics to utilizing the quantum-mechanical properties of physical systems for practical applications. In the field of communication complexity, quantum communication allows the advantage of an exponential reduction in the transmitted information over classical communication to accomplish distributed computational tasks. However, to date, demonstrating this advantage in a practical setting continues to be a central challenge. Here, we report a proof-of-principle experimental demonstration of a quantum fingerprinting protocol that for the first time surpasses the ultimate classical limit to transmitted information. Ultralow noise superconducting single-photon detectors and a stable fiber-based Sagnac interferometer are used to implement a quantum fingerprinting system that is capable of transmitting less information than the classical proven lower bound over 20 km standard telecom fiber for input sizes of up to 2 Gbits. The results pave the way for experimentally exploring the advanced features of quantum communication and open a new window of opportunity for research in communication complexity and testing the foundations of physics.

71 citations


Journal ArticleDOI
TL;DR: It is proved that any large advantage over the best known classical strategy makes use of Bell nonlocal correlations, providing the missing link to the fundamental equivalence between Bell nonlocality and quantum advantage.
Abstract: We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs.

62 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: The first true size-space trade-offs for the cutting planes proof system are obtained, where the upper bounds hold for size and total space for derivations with constant-size coefficients, and the lower bounds apply to length and formula space even for derivations with exponentially large coefficients.
Abstract: We obtain the first true size-space trade-offs for the cutting planes proof system, where the upper bounds hold for size and total space for derivations with constant-size coefficients, and the lower bounds apply to length and formula space (i.e., number of inequalities in memory) even for derivations with exponentially large coefficients. These are also the first trade-offs to hold uniformly for resolution, polynomial calculus and cutting planes, thus capturing the main methods of reasoning used in current state-of-the-art SAT solvers. We prove our results by a reduction to communication lower bounds in a round-efficient version of the real communication model of [Krajíček '98], drawing on and extending techniques in [Raz and McKenzie '99] and [Göös et al. '15]. The communication lower bounds are in turn established by a reduction to trade-offs between cost and number of rounds in the game of [Dymond and Tompa '85] played on directed acyclic graphs. As a by-product of the techniques developed to show these proof complexity trade-off results, we also obtain an exponential separation between monotone-AC^{i-1} and monotone-AC^i, improving exponentially over the superpolynomial separation in [Raz and McKenzie '99]. That is, we give an explicit Boolean function that can be computed by monotone Boolean circuits of depth log^i n and polynomial size, but for which circuits of depth O(log^{i-1} n) require exponential size.

59 citations


Proceedings ArticleDOI
19 Jun 2016
TL;DR: In this article, it was shown that the quantum query complexity of the same function is O(n^{1/4}), while the deterministic query complexity is Ω(n/log(n)) and the bounded-error randomized query complexity is R(g) = O(√n).
Abstract: In 1986, Saks and Wigderson conjectured that the largest separation between deterministic and zero-error randomized query complexity for a total boolean function is given by the function f on n = 2^k bits defined by a complete binary tree of NAND gates of depth k, which achieves R0(f) = O(D(f)^{0.7537...}). We show this is false by giving an example of a total boolean function f on n bits whose deterministic query complexity is Ω(n/log(n)) while its zero-error randomized query complexity is O(√n). We further show that the quantum query complexity of the same function is O(n^{1/4}), giving the first example of a total function with a super-quadratic gap between its quantum and deterministic query complexities. We also construct a total boolean function g on n variables that has zero-error randomized query complexity Ω(n/log(n)) and bounded-error randomized query complexity R(g) = O(√n). This is the first super-linear separation between these two complexity measures. The exact quantum query complexity of the same function is QE(g) = O(√n). These functions show that the relations D(f) = O(R1(f)^2) and R0(f) = O(R(f)^2) are optimal, up to poly-logarithmic factors. Further variations of these functions give additional separations between other query complexity measures: a cubic separation between Q and R0, a 3/2-power separation between QE and R, and a 4th power separation between approximate degree and bounded-error randomized query complexity. All of these examples are variants of a function recently introduced by Göös, Pitassi, and Watson which they used to separate the unambiguous 1-certificate complexity from deterministic query complexity and to resolve the famous Clique versus Independent Set problem in communication complexity.

Proceedings ArticleDOI
19 Jun 2016
TL;DR: The space complexity of single-pass streaming algorithms for approximating the classic set cover problem is resolved, and it is shown that Θ(mn/α^2) space is both sufficient and necessary for estimating the size of a minimum set cover to within a factor of α.
Abstract: We resolve the space complexity of single-pass streaming algorithms for approximating the classic set cover problem. For finding an α-approximate set cover (for α = o(√n)) via a single-pass streaming algorithm, we show that Θ(mn/α) space is both sufficient and necessary (up to an O(log n) factor); here m denotes the number of sets and n denotes the size of the universe. This provides a strong negative answer to the open question posed by Indyk (2015) regarding the possibility of having a single-pass algorithm with a small approximation factor that uses sub-linear space. We further study the problem of estimating the size of a minimum set cover (as opposed to finding the actual sets), and establish that an additional factor of α saving in the space is achievable in this case and that this is the best possible. In other words, we show that Θ(mn/α^2) space is both sufficient and necessary (up to logarithmic factors) for estimating the size of a minimum set cover to within a factor of α. Our algorithm in fact works for the more general problem of estimating the optimal value of a covering integer program. On the other hand, our lower bound holds even for set cover instances where the sets are presented in a random order.
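For contrast with the streaming setting, the classic offline greedy algorithm, which achieves a (ln n + 1)-approximation with no space constraint, takes only a few lines:

```python
# Offline greedy set cover: repeatedly take the set covering the most
# uncovered elements.  The streaming algorithms above must approximate
# this behavior in a single pass with bounded memory.

def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

universe = range(1, 11)
sets = [set(range(1, 7)), {6, 7, 8}, {8, 9, 10}, {1, 10}]
cover = greedy_set_cover(universe, sets)
```

The greedy algorithm inherently needs access to all sets at every step; the Θ(mn/α) bound quantifies how much of that access a single-pass algorithm must effectively retain.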

Journal ArticleDOI
TL;DR: This paper develops a factor graph model based on the network topology, proposes a novel distributed BP algorithm, and theoretically proves the existence of a fixed point of the BP algorithm.
Abstract: In heterogeneous networks (HetNets), the load between macrocell base stations (MBSs) and small-cell base stations (SBSs) is imbalanced due to their different transmission powers and locations. This load imbalance significantly impacts system performance and affects the experience of mobile users (MUs) with different priorities. In this paper, we aim to distributively optimize the user association in HetNets with various user priorities to solve the load balancing problem. Since the user association is a binary matching problem, which is NP-hard, we propose a distributed belief propagation (BP) algorithm to approach the optimal solution. We first develop a factor graph model, using the network topology, to represent this user association problem. With this factor graph, we propose a novel distributed BP algorithm by adopting proportional fairness as the objective. Next, we theoretically prove the existence of the fixed point in our BP algorithm. To be more practical, we develop an approximation method to significantly reduce the computational and communication complexity of the BP algorithm. Furthermore, we analyze some properties of the factor graph relevant to the performance of the BP algorithm using stochastic geometry. Simulation results show that 1) the proposed BP algorithm closely approaches the optimal system performance and achieves a much better performance compared with other association schemes and that 2) the analytical results on the average degree distribution and sparsity of the factor graph match with those obtained from the Monte Carlo simulations.

Journal ArticleDOI
TL;DR: An unconditionally secure quantum Private Set Intersection Cardinality protocol requiring only O(1) communication cost, and a novel anonymous authentication scheme that not only achieves the two basic security goals of secure authentication and anonymity but can also dynamically update the authorized clients.

Proceedings Article
09 Jul 2016
TL;DR: This paper introduces a general technique for obtaining lower bounds on Decomposable Negation Normal Form (DNNFs), one of the most widely studied and succinct representation languages, by relating the size of DNNFs to multi-partition communication complexity.
Abstract: Choosing a language for knowledge representation and reasoning involves a trade-off between two competing desiderata: succinctness (the encoding should be small) and tractability (the language should support efficient reasoning algorithms). The area of knowledge compilation is devoted to the systematic study of representation languages along these two dimensions--in particular, it aims to determine the relative succinctness of languages. Showing that one language is more succinct than another typically involves proving a nontrivial lower bound on the encoding size of a carefully chosen function, and the corresponding arguments increase in difficulty with the succinctness of the target language. In this paper, we introduce a general technique for obtaining lower bounds on Decomposable Negation Normal Form (DNNFs), one of the most widely studied and succinct representation languages, by relating the size of DNNFs to multi-partition communication complexity. This allows us to directly translate lower bounds from the communication complexity literature into lower bounds on the size of DNNF representations. We use this approach to prove exponential separations of DNNFs from deterministic DNNFs and of CNF formulas from DNNFs.

Proceedings ArticleDOI
19 Jun 2016
TL;DR: An explicit example of a search problem with external information complexity ≤ O(k), with respect to any input distribution, and distributional communication complexity ≥ 2^k, with respect to some input distribution, is obtained.
Abstract: We show an exponential gap between communication complexity and external information complexity, by analyzing a communication task suggested as a candidate by Braverman. Previously, only a separation of communication complexity and internal information complexity was known. More precisely, we obtain an explicit example of a search problem with external information complexity ≤ O(k), with respect to any input distribution, and distributional communication complexity ≥ 2^k, with respect to some input distribution. In particular, this shows that a communication protocol cannot always be compressed to its external information. By a result of Braverman, our gap is the largest possible. Moreover, since the upper bound of O(k) on the external information complexity of the problem is obtained with respect to any input distribution, our result implies an exponential gap between communication complexity and information complexity (both internal and external) in the non-distributional setting of Braverman. In this setting, no gap was previously known, even for internal information complexity.

Journal ArticleDOI
TL;DR: A quantum channel is constructed whose entanglement assisted zero-error one-shot capacity can only be unlocked using a non-maximally entangled state, and homomorphisms allow definition of a chromatic number for non-commutative graphs.
Abstract: Alice and Bob receive a bipartite state (possibly entangled) from some finite collection or from some subspace. Alice sends a message to Bob through a noisy quantum channel such that Bob may determine the initial state, with zero chance of error. This framework encompasses, for example, teleportation, dense coding, entanglement assisted quantum channel capacity, and one-way communication complexity of function evaluation. With classical sources and channels, this problem can be analyzed using graph homomorphisms. We show this quantum version can be analyzed using homomorphisms on non-commutative graphs (an operator space generalization of graphs). Previously the Lovasz $\vartheta $ number has been generalized to non-commutative graphs; we show this to be a homomorphism monotone, thus providing bounds on quantum source-channel coding. We generalize the Schrijver and Szegedy numbers, and show these to be monotones as well. As an application, we construct a quantum channel whose entanglement assisted zero-error one-shot capacity can only be unlocked using a non-maximally entangled state. These homomorphisms allow definition of a chromatic number for non-commutative graphs. Many open questions are presented regarding the possibility of a more fully developed theory.

Journal ArticleDOI
TL;DR: In this article, the authors reported on the experimental realisation of three-party quantum communication protocols using single three-level quantum system (qutrit) communication: secret-sharing, detectable Byzantine agreement and communication complexity reduction for a three-valued function.
Abstract: Quantum information science breaks limitations of conventional information transfer, cryptography and computation by using quantum superpositions or entanglement as resources for information processing. Here we report on the experimental realisation of three-party quantum communication protocols using single three-level quantum system (qutrit) communication: secret-sharing, detectable Byzantine agreement and communication complexity reduction for a three-valued function. We have implemented these three schemes using the same optical fibre interferometric setup. Our realisation is easily scalable without compromising on detection efficiency or generating extremely complex many-particle entangled states.

Quantum-mechanically secure communication between three different parties has been achieved by researchers in Sweden. Mohamed Bourennane and colleagues at Stockholm University achieved this by sharing a three-state quantum object across an optical fiber network. Quantum cryptography takes advantage of the unusual properties of quantum particles to achieve a degree of security beyond that possible in any classical communication system. Often this relies on linking two particles through a quantum mechanical connection known as entanglement; however, entanglement can be difficult to create. The scheme used by Bourennane's team uses just a single quantum object known as a qutrit that can exist in one of three states. The team demonstrated three different protocols known as secret sharing, detectable Byzantine agreement and communication complexity reduction. The method shows potential for scaling up to large-scale applications.

Posted Content
TL;DR: For constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as nΩ(1)-rounds of the Sherali-Adams linear programming hierarchy and lower bounds are obtained by exploiting and extending the recent progress in communication complexity for "lifting" query lower bounds to communication problems.
Abstract: We show that for constraint satisfaction problems (CSPs), sub-exponential size linear programming relaxations are as powerful as $n^{\Omega(1)}$-rounds of the Sherali-Adams linear programming hierarchy. As a corollary, we obtain sub-exponential size lower bounds for linear programming relaxations that beat random guessing for many CSPs such as MAX-CUT and MAX-3SAT. This is a nearly-exponential improvement over previous results; previously, it was only known that linear programs of size $n^{o(\log n)}$ cannot beat random guessing for any CSP (Chan-Lee-Raghavendra-Steurer 2013). Our bounds are obtained by exploiting and extending the recent progress in communication complexity for "lifting" query lower bounds to communication problems. The main ingredient in our results is a new structural result on "high-entropy rectangles" that may be of independent interest in communication complexity.

Journal ArticleDOI
TL;DR: A cheat-sensitive quantum scheme for Private Set Intersection whose communication complexity is independent of the size of the server's set, making it very suitable for big data services in the Cloud or large-scale client-server networks.
Abstract: Private Set Intersection allows a client to privately compute set intersection with the collaboration of the server, which is one of the most fundamental and key problems within the multiparty collaborative computation of protecting the privacy of the parties. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in the Cloud or large-scale client-server networks.

Posted Content
TL;DR: This paper provides competitive convergence guarantees for without-replacement sampling, under various scenarios, for three types of algorithms: Any algorithm with online regret guarantees, stochastic gradient descent, and SVRG.
Abstract: Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled \emph{with} replacement. In practice, however, sampling \emph{without} replacement is very common, easier to implement in many cases, and often performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling, under various scenarios, for three types of algorithms: Any algorithm with online regret guarantees, stochastic gradient descent, and SVRG. A useful application of our SVRG analysis is a nearly-optimal algorithm for regularized least squares in a distributed setting, in terms of both communication complexity and runtime complexity, when the data is randomly partitioned and the condition number can be as large as the data size per machine (up to logarithmic factors). Our proof techniques combine ideas from stochastic optimization, adversarial online learning, and transductive learning theory, and can potentially be applied to other stochastic optimization and learning problems.

Journal ArticleDOI
TL;DR: It is proved that the answer is yes, at least for protocols that use a bounded number of rounds, and if a Reverse Newman’s Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result.
Abstract: Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a small increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that only uses public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct-sum theorems through the compression of interactive communication in the bounded-round setting. To obtain this application, we prove a new one-shot variant of the Slepian-Wolf coding theorem, interesting in its own right. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result.

Posted Content
TL;DR: In this paper, the communication complexity of computing a conjunctive query on a large database in a parallel setting with p servers was studied, and the authors showed that for a single round, they can obtain an optimal worst-case algorithm for several classes of queries.
Abstract: In this paper, we study the communication complexity for the problem of computing a conjunctive query on a large database in a parallel setting with $p$ servers. In contrast to previous work, where upper and lower bounds on the communication were specified for particular structures of data (either data without skew, or data with specific types of skew), in this work we focus on worst-case analysis of the communication cost. The goal is to find worst-case optimal parallel algorithms, similar to the work of [18] for sequential algorithms. We first show that for a single round we can obtain an optimal worst-case algorithm. The optimal load for a conjunctive query $q$ when all relations have size equal to $M$ is $O(M/p^{1/\psi^*})$, where $\psi^*$ is a new query-related quantity called the edge quasi-packing number, which is different from both the edge packing number and edge cover number of the query hypergraph. For multiple rounds, we present algorithms that are optimal for several classes of queries. Finally, we show a surprising connection to the external memory model, which allows us to translate parallel algorithms to external memory algorithms. This technique allows us to recover (within a polylogarithmic factor) several recent results on the I/O complexity for computing join queries, and also obtain optimal algorithms for other classes of queries.
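For the simplest conjunctive query, a two-relation join, one-round hash partitioning already illustrates the model: route every tuple to the server owning its join key, then compute locally with no further communication. This sketch (the data and the choice of p = 4 servers are ours) omits the skew and multiway-join issues that the paper's edge quasi-packing analysis addresses.

```python
# One-round parallel evaluation of q(a,b,c) = R(a,b), S(b,c):
# every tuple is routed to server hash(b) mod p, after which the
# join is computed locally on each server.

p = 4  # number of servers (arbitrary illustrative choice)

def route(value):
    return hash(value) % p   # deterministic for ints in CPython

R = [(1, 10), (2, 10), (3, 20), (4, 30)]
S = [(10, 'x'), (20, 'y'), (20, 'z'), (40, 'w')]

# Communication round: send each tuple to the server owning its b value.
servers = [{'R': [], 'S': []} for _ in range(p)]
for a, b in R:
    servers[route(b)]['R'].append((a, b))
for b, c in S:
    servers[route(b)]['S'].append((b, c))

# Local phase: every server joins only the tuples it received.
result = sorted(
    (a, b, c)
    for srv in servers
    for a, b in srv['R']
    for b2, c in srv['S']
    if b == b2
)
```

Because matching tuples always share a b value, they land on the same server, so the union of the local joins equals the global join; the load analysis in the paper asks how large the biggest server's share must be in the worst case.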

Book ChapterDOI
14 Aug 2016
TL;DR: In this article, it was shown that in the honest majority setting, and in the dishonest majority setting with preprocessing, any gate-by-gate protocol must communicate Ω(n) bits for every multiplication gate, where n is the number of players.
Abstract: Many information-theoretic secure protocols are known for general secure multi-party computation, in the honest majority setting, and in the dishonest majority setting with preprocessing. All known protocols that are efficient in the circuit size of the evaluated function follow the same "gate-by-gate" design pattern: we work through an arithmetic (or Boolean) circuit on secret-shared inputs, such that after we process a gate, the output of the gate is represented as a random secret sharing among the players. This approach usually allows non-interactive processing of addition gates but requires communication for every multiplication gate. Thus, while information-theoretic secure protocols are very efficient in terms of computational work, they seem to require more communication and more rounds than computationally secure protocols. Whether this is inherent is an open and probably very hard problem. However, in this work we show that it is indeed inherent for protocols that follow the "gate-by-gate" design pattern. We present the following results: In the honest majority setting, as well as for dishonest majority with preprocessing, any gate-by-gate protocol must communicate $\Omega(n)$ bits for every multiplication gate, where n is the number of players. In the honest majority setting, we show that one cannot obtain a bound that also grows with the field size. Moreover, for a constant number of players, amortizing over several multiplication gates does not allow us to save on the computational work, and, in a restricted setting, we show that this also holds for communication. All our lower bounds are met up to a constant factor by known protocols that follow the typical gate-by-gate paradigm. Our results imply that a fundamentally new approach must be found in order to improve the communication complexity of known protocols, such as BGW, GMW, SPDZ etc.
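The gate-by-gate pattern is concrete in BGW-style protocols over a prime field: Shamir shares are added locally, but a multiplication yields a degree-2t sharing that must be re-shared, which is exactly where each multiplication gate costs communication. A hedged pure-Python sketch (toy field, no adversary model, requires n >= 2t+1; helper names are invented here):

```python
import random

P = 2**31 - 1  # toy prime field

def share(secret, n, t, rng):
    # Shamir sharing: f is a random degree-t polynomial with f(0) = secret;
    # party i receives f(i).
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def lagrange_at_zero(ids):
    # Coefficients lam[j] with f(0) = sum_j lam[j] * f(ids[j]).
    out = []
    for j in ids:
        num = den = 1
        for m in ids:
            if m != j:
                num = num * (-m) % P
                den = den * (j - m) % P
        out.append(num * pow(den, P - 2, P) % P)
    return out

def reconstruct(shares, ids):
    lam = lagrange_at_zero(ids)
    return sum(l * s for l, s in zip(lam, shares)) % P

def add_gate(xs, ys):
    # Addition is non-interactive: every party adds its shares locally.
    return [(a + b) % P for a, b in zip(xs, ys)]

def mul_gate(xs, ys, n, t, rng):
    # Local products form a degree-2t sharing, so every party re-shares its
    # product and the parties recombine with Lagrange coefficients: n
    # subshares sent per party per multiplication gate, which is the
    # communication the paper's lower bound shows is necessary.
    prods = [(a * b) % P for a, b in zip(xs, ys)]
    sub = [share(p, n, t, rng) for p in prods]
    lam = lagrange_at_zero(list(range(1, n + 1)))
    return [sum(lam[j] * sub[j][i] for j in range(n)) % P
            for i in range(n)]
```

This is the classical degree-reduction step; it is meant only to make the "addition is free, multiplication talks" asymmetry in the abstract tangible.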

Journal ArticleDOI
TL;DR: This work reports on the experimental realization of three-party quantum communication protocols that communicate single three-level quantum systems (qutrits): secret sharing, detectable Byzantine agreement, and communication complexity reduction for a three-valued function.
Abstract: Quantum information science breaks limitations of conventional information transfer, cryptography and computation by using quantum superpositions or entanglement as resources for information processing. Here, we report on the experimental realization of three-party quantum communication protocols that communicate single three-level quantum systems (qutrits): secret sharing, detectable Byzantine agreement, and communication complexity reduction for a three-valued function. We have implemented these three schemes using the same optical fiber interferometric setup. Our realization is easily scalable without sacrificing detection efficiency or generating extremely complex many-particle entangled states.

Proceedings ArticleDOI
01 Oct 2016
TL;DR: The technical approach is to adapt the Raz-McKenzie simulation theorem (FOCS 1999) into geometric settings, thereby "smoothly lifting" the deterministic query lower bound for finding an approximate fixed point (Hirsch, Papadimitriou and Vavasis, Complexity 1989) from the oracle model to the two-party model.
Abstract: We study the two-party communication complexity of finding an approximate Brouwer fixed point of a composition of two Lipschitz functions g ○ f: [0,1]^n → [0,1]^n, where Alice holds f and Bob holds g. We prove an exponential (in n) lower bound on the deterministic communication complexity of this problem. Our technical approach is to adapt the Raz-McKenzie simulation theorem (FOCS 1999) into geometric settings, thereby "smoothly lifting" the deterministic query lower bound for finding an approximate fixed point (Hirsch, Papadimitriou and Vavasis, Complexity 1989) from the oracle model to the two-party model. Our results also suggest an approach to the well-known open problem of proving strong lower bounds on the communication complexity of computing approximate Nash equilibria. Specifically, we show that a slightly "smoother" version of our fixed-point computation lower bound (by an absolute constant factor) would imply that:
● The deterministic two-party communication complexity of finding an ε = Ω(1/log² N)-approximate Nash equilibrium in an N × N bimatrix game (where each player knows only his own payoff matrix) is at least N^γ for some constant γ > 0. (In contrast, the nondeterministic communication complexity of this problem is only O(log⁶ N).)
● The deterministic (Number-In-Hand) multiparty communication complexity of finding an ε = Ω(1)-Nash equilibrium in a k-player constant-action game is at least 2^Ω(k/log k) (while the nondeterministic communication complexity is only O(k)).
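In one dimension, the query model that the lifted lower bound starts from is easy to sketch: a grid of pitch ε/(λ+1) must contain an ε-approximate fixed point of a λ-Lipschitz self-map of [0,1], by the intermediate value theorem applied to f(x) - x. A minimal sketch (the function name is illustrative; the paper's hard instances live in high dimension, where this kind of scan becomes exponentially costly):

```python
import math

def approx_fixed_point(f, eps, lip=1.0):
    # f maps [0,1] to [0,1] and is lip-Lipschitz, so g(x) = f(x) - x
    # satisfies g(0) >= 0 and g(1) <= 0 and crosses zero somewhere; any
    # grid point within eps/(lip + 1) of a crossing is an eps-approximate
    # fixed point, so a scan of O((lip + 1)/eps) queries always succeeds.
    n = math.ceil((lip + 1) / eps)
    for i in range(n + 1):
        x = i / n
        if abs(f(x) - x) <= eps:
            return x
    return None  # unreachable for a genuine Lipschitz self-map
```

In n dimensions the analogous grid has exponentially many points, which is the shape of the query lower bound the paper lifts into the two-party model.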

Journal ArticleDOI
TL;DR: In this paper, the authors studied a class of atomic rank functions defined on a convex cone which generalize several notions of "positive" ranks such as nonnegative rank or cp-rank (for completely positive matrices).
Abstract: The nonnegative rank of a matrix A is the smallest integer r such that A can be written as the sum of r rank-one nonnegative matrices. The nonnegative rank has received a lot of attention recently due to its application in optimization, probability and communication complexity. In this paper we study a class of atomic rank functions defined on a convex cone which generalize several notions of "positive" ranks such as nonnegative rank or cp-rank (for completely positive matrices). The main contribution of the paper is a new method to obtain lower bounds for such ranks. Additionally the bounds we propose can be computed by semidefinite programming using sum-of-squares relaxations. The idea of the lower bound relies on an atomic norm approach where the atoms are self-scaled according to the vector (or matrix, in the case of nonnegative rank) of interest. This results in a lower bound that is invariant under scaling and that enjoys other interesting structural properties. For the case of the nonnegative rank we show that our bound has an appealing connection with existing combinatorial bounds and other norm-based bounds. For example we show that our lower bound is a non-combinatorial version of the fractional rectangle cover number, while the sum-of-squares relaxation is closely related to the Lovász $\bar{\vartheta}$ number of the rectangle graph of the matrix. We also prove that the lower bound is always greater than or equal to the hyperplane separation bound (and other similar "norm-based" bounds). We also discuss the case of the tensor nonnegative rank as well as the cp-rank, and compare our bound with existing results.
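On the upper-bound side, an explicit nonnegative factorization A ≈ WH with inner dimension r witnesses that the nonnegative rank is at most r (up to the approximation error). A hedged pure-Python sketch using Lee-Seung multiplicative updates, a standard heuristic and not a method from this paper:

```python
import random

def nmf(A, r, iters=2000, seed=0):
    # Multiplicative-update NMF: find nonnegative W (m x r) and H (r x n)
    # with A approximately equal to W @ H.  The updates keep all entries
    # nonnegative and monotonically decrease the squared error.  This is
    # only a heuristic witness for "nonnegative rank <= r", not an exact
    # rank algorithm (computing the nonnegative rank is hard in general).
    m, n = len(A), len(A[0])
    rng = random.Random(seed)
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    def transpose(X):
        return [list(row) for row in zip(*X)]

    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, A), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + 1e-12)
              for j in range(n)] for i in range(r)]
        Ht = transpose(H)
        num, den = matmul(A, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + 1e-12)
              for j in range(r)] for i in range(m)]
    return W, H
```

The paper's contribution is the converse direction: certified lower bounds on r, which no factorization heuristic can provide.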

Journal ArticleDOI
TL;DR: A low-complexity, near-maximum likelihood (ML) error performance achieving detection scheme is proposed for QSM to reduce the overall computational complexity of the ML detector.
Abstract: Quadrature spatial modulation (QSM) is a recently proposed multiple-input multiple-output transmission scheme which improves the spectral efficiency of classical spatial modulation (SM) by increasing the number of information bits transmitted by antenna indices. In QSM, a complex data symbol is decomposed into its real and imaginary components; then, these two components are independently transmitted using the SM principle. A low-complexity, near-maximum likelihood (ML) error performance achieving detection scheme is proposed for QSM to reduce the overall computational complexity of the ML detector. First, the proposed detector determines the set of most probable active transmit antennas and the corresponding possible transmission patterns. Then, ML-based detection is used to determine the transmitted complex data vector by performing a search over these transmission patterns and M-ary constellation symbols. It has been shown via computer simulations that the proposed detection algorithm exhibits near-ML bit error rate performance with considerably lower decoding complexity.

Posted Content
TL;DR: In this article, a poly(N) lower bound on the (randomized) communication complexity of ε-Nash equilibrium in two-player N×N games was shown. For n-player binary-action games, an exp(n) lower bound was shown for the communication complexity of (ε, ε)-weak approximate Nash equilibrium.
Abstract: For a constant $\epsilon$, we prove a poly(N) lower bound on the (randomized) communication complexity of $\epsilon$-Nash equilibrium in two-player NxN games. For n-player binary-action games we prove an exp(n) lower bound for the (randomized) communication complexity of $(\epsilon,\epsilon)$-weak approximate Nash equilibrium, which is a profile of mixed actions such that at least $(1-\epsilon)$-fraction of the players are $\epsilon$-best replying.
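The solution concept in the second bound is easy to make concrete: a profile is an (ε, ε)-weak approximate Nash equilibrium when at least a (1-ε) fraction of players are ε-best replying. A small Python sketch computing that fraction for a binary-action game given as explicit payoff tables (the representation and names are mine, not the paper's; explicit tables are only feasible for tiny n):

```python
import itertools

def eps_best_reply_fraction(utils, probs, eps):
    # utils[i][profile] = payoff of player i at a pure binary profile;
    # probs[i] = probability that player i plays action 1.
    # Returns the fraction of players whose mixed action is an eps-best
    # reply to the others' mixed actions.
    n = len(probs)

    def others_prob(prof, skip):
        p = 1.0
        for j, a in enumerate(prof):
            if j != skip:
                p *= probs[j] if a == 1 else 1 - probs[j]
        return p

    good = 0
    for i in range(n):
        pay = [0.0, 0.0]  # expected payoff of pure action 0 / 1 for player i
        for prof in itertools.product((0, 1), repeat=n):
            pay[prof[i]] += utils[i][prof] * others_prob(prof, i)
        current = probs[i] * pay[1] + (1 - probs[i]) * pay[0]
        if max(pay) - current <= eps:
            good += 1
    return good / n
```

A profile passes the weak-equilibrium test at parameter ε exactly when this fraction is at least 1 - ε.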

Journal Article
TL;DR: In this article, the authors give the first super-quadratic separation between quantum and randomized communication complexity for a total function, exhibiting a power-2.5 gap via the cheat sheet framework of Aaronson, Ben-David, and Kothari.
Abstract: While exponential separations are known between quantum and randomized communication complexity for partial functions (Raz, STOC 1999), the best known separation between these measures for a total function is quadratic, witnessed by the disjointness function. We give the first super-quadratic separation between quantum and randomized communication complexity for a total function, giving an example exhibiting a power 2.5 gap. We further present a 1.5 power separation between exact quantum and randomized communication complexity, improving on the previous ~1.15 separation by Ambainis (STOC 2013). Finally, we present a nearly optimal quadratic separation between randomized communication complexity and the logarithm of the partition number, improving upon the previous best power 1.5 separation due to Goos, Jayram, Pitassi, and Watson. Our results are the communication analogues of separations in query complexity proved using the recent cheat sheet framework of Aaronson, Ben-David, and Kothari (STOC 2016). Our main technical results are randomized communication and information complexity lower bounds for a family of functions, called lookup functions, that generalize and port the cheat sheet framework to communication complexity.