
Showing papers on "Communication complexity published in 2009"


Book ChapterDOI
20 Feb 2009
TL;DR: The main insight of this work is a simple connection between PoR schemes and the notion of hardness amplification, which enables building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
Abstract: Proofs of Retrievability (PoR), introduced by Juels and Kaliski [JK07], allow the client to store a file F on an untrusted server, and later run an efficient audit protocol in which the server proves that it (still) possesses the client's data. Constructions of PoR schemes attempt to minimize the client and server storage, the communication complexity of an audit, and even the number of file-blocks accessed by the server during the audit. In this work, we identify several different variants of the problem (such as bounded-use vs. unbounded-use, knowledge-soundness vs. information-soundness), and give nearly optimal PoR schemes for each of these variants. Our constructions either improve (and generalize) the prior PoR constructions, or give the first known PoR schemes with the required properties. In particular, we: formally prove the security of an (optimized) variant of the bounded-use scheme of Juels and Kaliski [JK07], without making any simplifying assumptions on the behavior of the adversary; build the first unbounded-use PoR scheme where the communication complexity is linear in the security parameter and which does not rely on Random Oracles, resolving an open question of Shacham and Waters [SW08]; and build the first bounded-use scheme with information-theoretic security. The main insight of our work comes from a simple connection between PoR schemes and the notion of hardness amplification, extensively studied in complexity theory. In particular, our improvements come from first abstracting a purely information-theoretic notion of PoR codes, and then building nearly optimal PoR codes using state-of-the-art tools from coding and complexity theory.
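The audit idea behind such schemes can be illustrated with a toy spot-checking protocol (this is an illustrative sketch, not the authors' construction; all class and helper names are hypothetical, and tags are kept client-side for simplicity, whereas real PoR schemes outsource them):

```python
import hashlib, hmac, os, random

def block_tag(key, index, block):
    # MAC over (index, block) so the server cannot answer with a swapped block
    return hmac.new(key, index.to_bytes(8, "big") + block, hashlib.sha256).digest()

class Client:
    def __init__(self, blocks):
        self.key = os.urandom(32)
        self.n = len(blocks)
        # toy simplification: tags kept client-side
        self.tags = [block_tag(self.key, i, b) for i, b in enumerate(blocks)]

    def challenge(self, sample=4):
        # audit touches only a few random blocks
        return random.sample(range(self.n), sample)

    def verify(self, indices, blocks):
        return all(block_tag(self.key, i, b) == self.tags[i]
                   for i, b in zip(indices, blocks))

class Server:
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def respond(self, indices):
        return [self.blocks[i] for i in indices]

file_blocks = [os.urandom(64) for _ in range(16)]
client, server = Client(file_blocks), Server(file_blocks)
ch = client.challenge()
ok = client.verify(ch, server.respond(ch))   # honest server passes the audit
```

A server that has corrupted or discarded a block fails any audit whose challenge hits that block, which is the intuition the hardness-amplification machinery then strengthens.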

381 citations


Book ChapterDOI
02 Dec 2009
TL;DR: A privacy-preserving face recognition scheme that substantially improves over previous work in terms of communication and computation efficiency and has a substantially smaller online communication complexity.
Abstract: Automatic recognition of human faces is becoming increasingly popular in civilian and law enforcement applications that require reliable recognition of humans. However, the rapid improvement and widespread deployment of this technology raises strong concerns regarding the violation of individuals' privacy. A typical application scenario for privacy-preserving face recognition concerns a client who privately searches for a specific face image in the face image database of a server. In this paper we present a privacy-preserving face recognition scheme that substantially improves over previous work in terms of communication and computation efficiency: the most recent proposal of Erkin et al. (PETS'09) requires O(log M) rounds and computationally expensive operations on homomorphically encrypted data to recognize a face in a database of M faces. Our improved scheme requires only O(1) rounds and has a substantially smaller online communication complexity (by a factor of 15 for each database entry) and less computation complexity. Our solution is based on known cryptographic building blocks combining homomorphic encryption with garbled circuits. Our implementation results show the practicality of our scheme also for large databases (e.g., for M = 1000 we need less than 13 seconds and less than 4 MByte on-line communication on two 2.4GHz PCs connected via Gigabit Ethernet).

335 citations


Book ChapterDOI
12 Mar 2009
TL;DR: An asynchronous protocol for general multiparty computation is proposed that is secure against an adaptive and active adversary corrupting less than n/3 players, together with a framework that allows automatic parallelization of primitive operations such as secure multiplications, without having to resort to complicated multithreading.
Abstract: We propose an asynchronous protocol for general multiparty computation. The protocol has perfect security and communication complexity $\mathcal{O}(n^2|C|k)$, where n is the number of parties, |C| is the size of the arithmetic circuit being computed, and k is the size of elements in the underlying field. The protocol guarantees termination if the adversary allows a preprocessing phase to terminate, in which no information is released. The communication complexity of this protocol is the same as that of a passively secure solution up to a constant factor. It is secure against an adaptive and active adversary corrupting less than n/3 players. We also present a software framework for implementation of asynchronous protocols called VIFF (Virtual Ideal Functionality Framework), which allows automatic parallelization of primitive operations such as secure multiplications, without having to resort to complicated multithreading. Benchmarking of a VIFF implementation of our protocol confirms that it is applicable to practical non-trivial secure computations.

278 citations


Proceedings ArticleDOI
12 May 2009
TL;DR: A distributed data-allocation scheme is presented that enables robots to simultaneously process and update their local data and a computationally efficient distributed marginalization of past robot poses is introduced for limiting the size of the optimization problem.
Abstract: This paper presents a distributed Maximum A Posteriori (MAP) estimator for multi-robot Cooperative Localization (CL). As opposed to centralized MAP-based CL, the proposed algorithm reduces the memory and processing requirements by distributing data and computations amongst the robots. Specifically, a distributed data-allocation scheme is presented that enables robots to simultaneously process and update their local data. Additionally, a distributed Conjugate Gradient algorithm is employed that reduces the cost of computing the MAP estimates, while utilizing all available resources in the team and increasing robustness to single-point failures. Finally, a computationally efficient distributed marginalization of past robot poses is introduced for limiting the size of the optimization problem. The communication and computational complexity of the proposed algorithm is described in detail, while extensive simulation studies are presented for validating the performance of the distributed MAP estimator and comparing its accuracy to that of existing approaches.
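The distributed Conjugate Gradient component builds on the standard CG iteration for symmetric positive-definite systems; a minimal centralized sketch of that baseline (not the paper's distributed variant) on a small system:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive-definite A via conjugate gradients."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with initial x = 0
    p = r[:]                      # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # next direction is A-conjugate to the previous ones
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# small SPD system with exact solution x = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = conjugate_gradient(A, b)
```

In the distributed setting each robot would hold a block of A and contribute its partial matrix-vector product, which is what makes the method attractive for spreading computation across the team.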

241 citations


Journal ArticleDOI
TL;DR: Algebraic relativization, or algebrization, as discussed by the authors, is a new barrier to progress in complexity theory: when relativizing a complexity class inclusion, one should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring.
Abstract: Any proof of P ≠ NP will have to overcome two barriers: relativization and natural proofs. Yet over the last decade, we have seen circuit lower bounds (e.g., that PP does not have linear-size circuits) that overcome both barriers simultaneously. So the question arises of whether there is a third barrier to progress on the central questions in complexity theory. In this article, we present such a barrier, which we call algebraic relativization or algebrization. The idea is that, when we relativize some complexity class inclusion, we should give the simulating machine access not only to an oracle A, but also to a low-degree extension of A over a finite field or ring. We systematically go through basic results and open problems in complexity theory to delineate the power of the new algebrization barrier. First, we show that all known nonrelativizing results based on arithmetization---both inclusions such as IP = PSPACE and MIP = NEXP, and separations such as MAEXP ⊄ P/poly---do indeed algebrize. Second, we show that almost all of the major open problems---including P versus NP, P versus RP, and NEXP versus P/poly---will require non-algebrizing techniques. In some cases, algebrization seems to explain exactly why progress stopped where it did: for example, why we have superlinear circuit lower bounds for PromiseMA but not for NP. Our second set of results follows from lower bounds in a new model of algebraic query complexity, which we introduce in this article and which is interesting in its own right. Some of our lower bounds use direct combinatorial and algebraic arguments, while others stem from a surprising connection between our model and communication complexity. Using this connection, we are also able to give an MA-protocol for the Inner Product function with O(√n log n) communication (essentially matching a lower bound of Klauck), as well as a communication complexity conjecture whose truth would imply NL ≠ NP.

220 citations


Book
Satyanarayana V. Lokam1
24 Jul 2009
TL;DR: This work surveys several techniques for proving lower bounds in Boolean, algebraic, and communication complexity based on linear-algebraic approaches; the common theme is to study robustness measures of matrix rank that capture the complexity in a given model.
Abstract: We survey several techniques for proving lower bounds in Boolean, algebraic, and communication complexity based on certain linear algebraic approaches. The common theme among these approaches is to study robustness measures of matrix rank that capture the complexity in a given model. Suitably strong lower bounds on such robustness functions of explicit matrices lead to important consequences in the corresponding circuit or communication models. Many of the linear algebraic problems arising from these approaches are independently interesting mathematical challenges.

126 citations


Journal ArticleDOI
TL;DR: A general construction of a zero-knowledge proof for an NP relation $R(x,w)$ is presented, which makes only a black-box use of any secure protocol for a related multiparty functionality $f$.
Abstract: A zero-knowledge proof allows a prover to convince a verifier of an assertion without revealing any further information beyond the fact that the assertion is true. Secure multiparty computation allows $n$ mutually suspicious players to jointly compute a function of their local inputs without revealing to any $t$ corrupted players additional information beyond the output of the function. We present a new general connection between these two fundamental notions. Specifically, we present a general construction of a zero-knowledge proof for an NP relation $R(x,w)$, which makes only a black-box use of any secure protocol for a related multiparty functionality $f$. The latter protocol is required only to be secure against a small number of “honest but curious” players. We also present a variant of the basic construction that can leverage security against a large number of malicious players to obtain better efficiency. As an application, one can translate previous results on the efficiency of secure multiparty computation to the domain of zero-knowledge, improving over previous constructions of efficient zero-knowledge proofs. In particular, if verifying $R$ on a witness of length $m$ can be done by a circuit $C$ of size $s$, and assuming that one-way functions exist, we get the following types of zero-knowledge proof protocols: (1) Approaching the witness length. If $C$ has constant depth over $\wedge,\vee,\oplus,\neg$ gates of unbounded fan-in, we get a zero-knowledge proof protocol with communication complexity $m\cdot{poly}(k)\cdot{polylog}(s)$, where $k$ is a security parameter. (2) “Constant-rate” zero-knowledge. For an arbitrary circuit $C$ of size $s$ and a bounded fan-in, we get a zero-knowledge protocol with communication complexity $O(s)+{poly}(k,\log s)$. Thus, for large circuits, the ratio between the communication complexity and the circuit size approaches a constant. This improves over the $O(ks)$ complexity of the best previous protocols.

121 citations


Book
22 Sep 2009
TL;DR: Lower Bounds in Communication Complexity focuses on showing lower bounds on the communication complexity of explicit functions, and treats different variants of communication complexity, including randomized, quantum, and multiparty models.
Abstract: In the 30 years since its inception, communication complexity has become a vital area of theoretical computer science. The applicability of communication complexity to other areas, including circuit and formula complexity, VLSI design, proof complexity, and streaming algorithms, has meant that it has attracted a lot of interest. Lower Bounds in Communication Complexity focuses on showing lower bounds on the communication complexity of explicit functions. It treats different variants of communication complexity, including randomized, quantum, and multiparty models. Many tools have been developed for this purpose from a diverse set of fields including linear algebra, Fourier analysis, and information theory. As is often the case in complexity theory, demonstrating a lower bound is usually the more difficult task. Lower Bounds in Communication Complexity describes a three-step approach for the development and application of these techniques. This approach can be applied in much the same way for different models, be they randomized, quantum, or multiparty. Lower Bounds in Communication Complexity is an ideal primer for anyone with an interest in this current and popular topic.

118 citations


Journal ArticleDOI
TL;DR: The design of both a transmitter and a receiver for noncoherent communication over a frequency-flat, richly scattered multiple-input multiple-output (MIMO) channel is considered and greedy, direct and rotation-based techniques for designing constellations are proposed.
Abstract: This paper considers the design of both a transmitter and a receiver for noncoherent communication over a frequency-flat, richly scattered multiple-input multiple-output (MIMO) channel. The design is guided by the fact that at high signal-to-noise ratios (SNRs), the ergodic capacity of the channel can be achieved by input signals that are isotropically distributed on the (compact) Grassmann manifold. The first part of the paper considers the design of Grassmannian constellations that mimic the isotropic distribution. A subspace perturbation analysis is used to determine an appropriate metric for the distance between Grassmannian constellation points, and using this metric, greedy, direct and rotation-based techniques for designing constellations are proposed. These techniques offer different tradeoffs between the minimum distance of the constellation and the design complexity. In addition, the rotation-based technique results in constellations that have lower storage requirements and admit a natural “quasi-set-partitioning” binary labeling.

117 citations


Book ChapterDOI
19 Aug 2009
TL;DR: A sub-linear size zero-knowledge argument is offered for a committed matrix being equal to the Hadamard product of two other committed matrices, and many other sub-linear size zero-knowledge arguments are suggested for statements involving linear algebra.
Abstract: We suggest practical sub-linear size zero-knowledge arguments for statements involving linear algebra. Given commitments to matrices over a finite field, we give a sub-linear size zero-knowledge argument that one committed matrix is the product of two other committed matrices. We also offer a sub-linear size zero-knowledge argument for a committed matrix being equal to the Hadamard product of two other committed matrices. Armed with these tools we can give many other sub-linear size zero-knowledge arguments, for instance for a committed matrix being upper or lower triangular, a committed matrix being the inverse of another committed matrix, or a committed matrix being a permutation of another committed matrix. A special case of what can be proved using our techniques is the satisfiability of an arithmetic circuit with N gates. Our arithmetic circuit zero-knowledge argument has a communication complexity of $O(\sqrt{N})$ group elements. We give both a constant round variant and an O(log N) round variant of our zero-knowledge argument; the latter has a computation complexity of O(N/log N) exponentiations for the prover and O(N) multiplications for the verifier making it efficient for the prover and very efficient for the verifier. In the case of a binary circuit consisting of NAND-gates we give a zero-knowledge argument of circuit satisfiability with a communication complexity of $O(\sqrt{N})$ group elements and a computation complexity of O(N) multiplications for both the prover and the verifier.

115 citations


Book ChapterDOI
06 Jul 2009
TL;DR: Among other results, it is shown that a combination of dynamic programming and a variation of the algebraic method can break the trivial upper bounds for exact parameterized counting in fairly general settings.
Abstract: The algebraic framework introduced in [Koutis, Proc. of the 35th ICALP 2008] reduces several combinatorial problems in parameterized complexity to the problem of detecting multilinear degree-k monomials in polynomials presented as circuits. The best known (randomized) algorithm for this problem requires only O*(2^k) time and oracle access to an arithmetic circuit, i.e. the ability to evaluate the circuit on elements from a suitable group algebra. This algorithm has been used to obtain the best known algorithms for several parameterized problems. In this paper we use communication complexity to show that the O*(2^k) algorithm is essentially optimal within this evaluation oracle framework. On the positive side, we give new applications of the method: finding a copy of a given tree on k nodes, a spanning tree with at least k leaves, a minimum set of nodes that dominate at least t nodes, and an m-dimensional k-matching. In each case we achieve a faster algorithm than what was known. We also apply the algebraic method to problems in exact counting. Among other results, we show that a combination of dynamic programming and a variation of the algebraic method can break the trivial upper bounds for exact parameterized counting in fairly general settings.

Book ChapterDOI
06 Jul 2009
TL;DR: These are the first nontrivial algorithms for distributed monitoring of non-monotone functions when f is either H, the empirical Shannon entropy of a stream, or any of a related class of entropy functions (Tsallis entropies).
Abstract: The notion of distributed functional monitoring was recently introduced by Cormode, Muthukrishnan and Yi to initiate a formal study of the communication cost of certain fundamental problems arising in distributed systems, especially sensor networks. In this model, each of k sites reads a stream of tokens and is in communication with a central coordinator, who wishes to continuously monitor some function f of σ, the union of the k streams. The goal is to minimize the number of bits communicated by a protocol that correctly monitors f(σ), to within some small error. As in previous work, we focus on a threshold version of the problem, where the coordinator's task is simply to maintain a single output bit, which is 0 whenever f(σ) ≤ τ(1 − ε) and 1 whenever f(σ) ≥ τ. Following Cormode et al., we term this the (k, f, τ, ε) functional monitoring problem. In previous work, some upper and lower bounds were obtained for this problem, with f being a frequency moment function, e.g., F_0, F_1, F_2. Importantly, these functions are monotone. Here, we further advance the study of such problems, proving three new classes of results. First, we provide nontrivial monitoring protocols when f is either H, the empirical Shannon entropy of a stream, or any of a related class of entropy functions (Tsallis entropies). These are the first nontrivial algorithms for distributed monitoring of non-monotone functions. Second, we study the effect of non-monotonicity of f on our ability to give nontrivial monitoring protocols, by considering f = F_p with deletions allowed, as well as f = H. Third, we prove new lower bounds on this problem when f = F_p, for several values of p.
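For the monotone count function F_1, the threshold version admits a simple deterministic protocol (a standard folk scheme, shown here only to make the model concrete; class names are illustrative, and this is not one of the paper's entropy protocols): each site notifies the coordinator whenever its local count crosses a multiple of Δ ≈ ετ/k, so the coordinator's estimate trails the true total by at most k(Δ−1) < ετ.

```python
import math

class Coordinator:
    def __init__(self, k, tau, eps):
        self.delta = max(1, math.floor(eps * tau / k))
        self.reported = [0] * k       # last multiple of delta heard from each site
        self.tau, self.eps = tau, eps
        self.messages = 0

    def update(self, site, rounded_count):
        self.reported[site] = rounded_count
        self.messages += 1

    @property
    def output(self):
        # estimate undershoots the truth by < eps*tau, so this bit is correct
        # whenever the true count is <= tau*(1-eps) or >= tau
        return 1 if sum(self.reported) > (1 - self.eps) * self.tau else 0

class Site:
    def __init__(self, sid, coord):
        self.sid, self.coord, self.count = sid, coord, 0

    def observe(self):
        self.count += 1
        d = self.coord.delta
        if self.count % d == 0:       # crossed a multiple of delta: notify
            self.coord.update(self.sid, self.count)

k, tau, eps = 4, 100, 0.2
coord = Coordinator(k, tau, eps)
sites = [Site(i, coord) for i in range(k)]
for t in range(100):                  # feed tau tokens round-robin
    sites[t % k].observe()
```

Here 100 tokens trigger only 20 messages; the difficulty the paper addresses is that for non-monotone f (entropy, F_p with deletions) the function can move in both directions, so no such one-sided slack argument applies.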

Journal ArticleDOI
TL;DR: A systematic methodology, based on the concepts of information structures and information states, to search for an optimal real-time communication strategy is presented; it trades off complexity in communication length with complexity in alphabet sizes, as the communication length is usually orders of magnitude bigger than the alphabet sizes.
Abstract: Optimal design of sequential real-time communication of a Markov source over a noisy channel is investigated. In such a system, the delay between the source output and its reconstruction at the receiver should equal a fixed prespecified amount. An optimal communication strategy must minimize the total expected symbol-by-symbol distortion between the source output and its reconstruction. Design techniques or performance bounds for such real-time communication systems are unknown. In this paper a systematic methodology, based on the concepts of information structures and information states, to search for an optimal real-time communication strategy is presented. This methodology trades off complexity in communication length (linear in contrast to doubly exponential) with complexity in alphabet sizes (doubly exponential in contrast to exponential). As the communication length is usually orders of magnitude bigger than the alphabet sizes, the proposed methodology simplifies the search for an optimal communication strategy. In spite of this simplification, the resultant optimality equations cannot be solved efficiently using existing algorithmic techniques. The main idea is to formulate a zero-delay communication problem as a dynamic team with nonclassical information structure. Then, an appropriate choice of information states converts the dynamic team problem into a centralized stochastic control problem in function space. Thereafter, Markov decision theory is used to derive nested optimality equations for choosing an optimal design. For infinite horizon problems, these optimality equations give rise to a fixed point functional equation. Communication systems with fixed finite delay constraint, a higher-order Markov source, and channels with memory are treated in the same manner after an appropriate expansion of the state space. Thus, this paper presents a comprehensive methodology to study different variations of real-time communication.

Journal IssueDOI
TL;DR: The results shed some light on the question of how much communication can be saved by using entanglement, and it follows from them that the known bound on this saving is tight almost always.
Abstract: We introduce a new method to derive lower bounds on randomized and quantum communication complexity. Our method is based on factorization norms, a notion from Banach space theory. This approach gives us access to several powerful tools from this area such as duality of normed spaces and Grothendieck's inequality. This extends the arsenal of methods for deriving lower bounds in communication complexity. As we show, our method subsumes most of the previously known general approaches to lower bounds on communication complexity. Moreover, we extend all (but one) of these lower bounds to the realm of quantum communication complexity with entanglement. Our results also shed some light on the question of how much communication can be saved by using entanglement. It is known that entanglement can save one of every two qubits, and examples for which this is tight are also known. It follows from our results that this bound on the saving in communication is tight almost always. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2009

Journal ArticleDOI
TL;DR: A new way of characterizing the complexity of online problems is proposed: the advice complexity, defined as the minimal number of bits communicated by the algorithm to the oracle in order to solve the problem optimally, measures the amount of problem-relevant information contained in the input.
Abstract: We propose a new way of characterizing the complexity of online problems. Instead of measuring the degradation of the output quality caused by the ignorance of the future we choose to quantify the amount of additional global information needed for an online algorithm to solve the problem optimally. In our model, the algorithm cooperates with an oracle that can see the whole input. We define the advice complexity of the problem to be the minimal number of bits (normalized per input request, and minimized over all algorithm-oracle pairs) communicated by the algorithm to the oracle in order to solve the problem optimally. Hence, the advice complexity measures the amount of problem-relevant information contained in the input. We introduce two modes of communication between the algorithm and the oracle based on whether the oracle offers advice spontaneously (helper) or on request (answerer). We analyze the Paging and DiffServ problems in terms of advice complexity and deliver upper and lower bounds in both communication modes; in the case of the DiffServ problem in helper mode the bounds are tight.
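As a concrete instance of what the oracle's advice could encode for Paging, a helper with access to the whole input can tell the algorithm which page an optimal offline strategy (Belady's furthest-in-future rule) would evict on each fault; a small illustrative sketch (not the paper's construction, and function names are hypothetical):

```python
def belady_evict(cache, future):
    """Oracle advice: evict the cached page whose next use is furthest away."""
    def next_use(page):
        try:
            return future.index(page)
        except ValueError:
            return float("inf")   # never used again: perfect eviction candidate
    return max(cache, key=next_use)

def paging_with_advice(requests, size):
    cache, faults = [], 0
    for i, page in enumerate(requests):
        if page in cache:
            continue              # hit: no advice needed
        faults += 1
        if len(cache) >= size:
            cache.remove(belady_evict(cache, requests[i + 1:]))
        cache.append(page)
    return faults

reqs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
faults = paging_with_advice(reqs, 3)
```

Encoding each such eviction choice costs about log2(size) bits per fault; the paper's question is how few advice bits per request suffice for optimality.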

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This work modifies the robust soliton distribution of LT codes at the broadcaster, based on the number of input symbols already decoded at the receivers, and shows that significant savings can be achieved even with a low number of feedback messages transmitted at a uniform rate.
Abstract: The erasure resilience of rateless codes, such as Luby-Transform (LT) codes, makes them particularly suitable to a wide variety of loss-prone wireless and sensor network applications, ranging from digital video broadcast to software updates. Yet, traditional rateless codes usually make no use of a feedback communication channel, a feature available in many wireless settings. As such, we generalize LT codes to situations where receiver(s) provide feedback to the broadcaster. Our approach, referred to as Shifted LT (SLT) code, modifies the robust soliton distribution of LT codes at the broadcaster, based on the number of input symbols already decoded at the receivers. While implementing this modification entails little change to the LT encoder and decoder, we show both analytically and through real experiments, that it achieves significant savings in communication complexity, memory usage, and overall energy consumption. Furthermore, we show that significant savings can be achieved even with a low number of feedback messages (on the order of the square root of the total number of input symbols) transmitted at a uniform rate. The practical benefits of Shifted LT codes are demonstrated through the implementation of a real over-the-air programming application for sensor networks, based on the Deluge protocol.
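The distribution that SLT shifts is the standard robust soliton degree distribution of LT codes; a sketch of that baseline (the parameters c and delta are the usual LT tuning knobs; how SLT shifts it based on decoder feedback is not reproduced here):

```python
import math

def robust_soliton(K, c=0.1, delta=0.5):
    """Robust soliton distribution over encoding-symbol degrees 1..K."""
    # ideal soliton component: rho(1) = 1/K, rho(i) = 1/(i(i-1))
    rho = [0.0] * (K + 1)
    rho[1] = 1.0 / K
    for i in range(2, K + 1):
        rho[i] = 1.0 / (i * (i - 1))
    # robust component: extra mass at small degrees plus a spike near K/S
    S = c * math.log(K / delta) * math.sqrt(K)
    pivot = int(round(K / S))
    tau = [0.0] * (K + 1)
    for i in range(1, pivot):
        tau[i] = S / (i * K)
    if 1 <= pivot <= K:
        tau[pivot] = S * math.log(S / delta) / K
    # normalize the sum of both components into a probability distribution
    Z = sum(rho) + sum(tau)
    return [(rho[i] + tau[i]) / Z for i in range(K + 1)]

mu = robust_soliton(1000)   # mu[i] = probability of picking degree i
```

Intuitively, once receivers report that many input symbols are already decoded, low-degree encoding symbols become redundant, which is why reshaping this distribution saves transmissions.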

Journal ArticleDOI
TL;DR: A new scheme for conducting private keyword search on streaming data is presented which requires O(m) server-to-client communication to return the content of the matching documents, where m is an upper bound on the size of the documents.
Abstract: A system for private stream searching, introduced by Ostrovsky and Skeith, allows a client to provide an untrusted server with an encrypted search query. The server uses the query on a stream of documents and returns the matching documents to the client while learning nothing about the nature of the query. We present a new scheme for conducting private keyword search on streaming data which requires O(m) server to client communication complexity to return the content of the matching documents, where m is an upper bound on the size of the documents. The required storage on the server conducting the search is also O(m). The previous best scheme for private stream searching was shown to have O(m log m) communication and storage complexity. Our solution employs a novel construction in which the user reconstructs the matching files by solving a system of linear equations. This allows the matching documents to be stored in a compact buffer rather than relying on redundancies to avoid collisions in the storage buffer as in previous work. This technique requires a small amount of metadata to be returned in addition to the documents; for this the original scheme of Ostrovsky and Skeith may be employed with O(m log m) communication and storage complexity. We also present an alternative method for returning the necessary metadata based on a unique encrypted Bloom filter construction. This method requires O(m log(t/m)) communication and storage complexity, where t is the number of documents in the stream. In this article we describe our scheme, prove it secure, analyze its asymptotic performance, and describe a number of extensions. We also provide an experimental analysis of its scalability in practice. Specifically, we consider its performance in the demanding scenario of providing a privacy preserving version of the Google News Alerts service.

Journal ArticleDOI
TL;DR: It is proved that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy, which establishes a strong tie between seemingly very different notions from two distinct areas.
Abstract: This paper has two main focal points. We first consider an important class of machine learning algorithms: large margin classifiers, such as Support Vector Machines. The notion of margin complexity quantifies the extent to which a given class of functions can be learned by large margin classifiers. We prove that up to a small multiplicative constant, margin complexity is equal to the inverse of discrepancy. This establishes a strong tie between seemingly very different notions from two distinct areas. In the same way that matrix rigidity is related to rank, we introduce the notion of rigidity of margin complexity. We prove that sign matrices with small margin complexity rigidity are very rare. This leads to the question of proving lower bounds on the rigidity of margin complexity. Quite surprisingly, this question turns out to be closely related to basic open problems in communication complexity, e.g., whether PSPACE can be separated from the polynomial hierarchy in communication complexity. Communication is a key ingredient in many types of learning. This explains the relations between the field of learning theory and that of communication complexity [6, 10, 16, 26]. The results of this paper constitute another link in this rich web of relations. These new results have already been applied toward the solution of several open problems in communication complexity [18, 20, 29].

Book ChapterDOI
T. S. Jayram1
21 Aug 2009
TL;DR: A new proof of an Ω(1/t) lower bound on the information complexity of And in the number-in-hand model of communication is given.
Abstract: The And problem on t bits is a promise decision problem where either at most one bit of the input is set to 1 (No instance) or all t bits are set to 1 (Yes instance). In this note, I will give a new proof of an Ω(1/t) lower bound on the information complexity of And in the number-in-hand model of communication. This was recently established by Gronemeier, STACS 2009. The proof exploits the information geometry of communication protocols via Hellinger distance in a novel manner and avoids the analytic approach inherent in previous work. As previously known, this bound implies an Ω(n/t) lower bound on the communication complexity of multiparty disjointness and consequently an Ω(n^(1−2/k)) space lower bound on estimating the k-th frequency moment F_k.
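The Hellinger distance central to the proof is, for discrete distributions p and q, h(p, q) = sqrt(½ Σ_i (√p_i − √q_i)²); a direct implementation of the definition:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

same = hellinger([0.5, 0.5], [0.5, 0.5])      # identical distributions
disjoint = hellinger([1.0, 0.0], [0.0, 1.0])  # disjoint supports
```

It ranges from 0 (identical distributions) to 1 (disjoint supports), and unlike statistical distance it interacts cleanly with the "cut-and-paste" structure of transcript distributions, which is what the information-geometric argument exploits.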

Posted Content
TL;DR: This article introduces the partition bounds for randomized communication complexity and query complexity, and shows that the partition bound is stronger than both the rectangle/corruption bound and the \gamma_2/generalized discrepancy bounds.
Abstract: We describe new lower bounds for randomized communication complexity and query complexity which we call the partition bounds. They are expressed as the optimum value of linear programs. For communication complexity we show that the partition bound is stronger than both the rectangle/corruption bound and the \gamma_2/generalized discrepancy bounds. In the model of query complexity we show that the partition bound is stronger than the approximate polynomial degree and classical adversary bounds. We also exhibit an example where the partition bound is quadratically larger than polynomial degree and classical adversary bounds.

Posted Content
TL;DR: A structural conjecture about the Fourier spectra of boolean functions is made which would imply that the quantum and classical exact communication complexities of all XOR functions are asymptotically equivalent.
Abstract: An XOR function is a function of the form g(x, y) = f(x xor y), for some boolean function f on n bits. We study the quantum and classical communication complexity of XOR functions. In the case of exact protocols, we completely characterise one-way communication complexity for all f. We also show that, when f is monotone, g's quantum and classical complexities are quadratically related, and that when f is a linear threshold function, g's quantum complexity is O(sqrt(n)). More generally, we make a structural conjecture about the Fourier spectra of boolean functions which, if true, would imply that the quantum and classical exact communication complexities of all XOR functions are asymptotically equivalent. We give two randomised classical protocols for general XOR functions which are efficient for certain functions, and a third protocol for linear threshold functions with high margin. These protocols operate in the symmetric message passing model with shared randomness.
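Concretely, an XOR function composes an arbitrary boolean f with bitwise XOR. A minimal sketch, with n-bit strings encoded as integers (names are illustrative):

```python
def make_xor_function(f):
    # Lift a boolean function f on n-bit strings to the two-party
    # XOR function g(x, y) = f(x xor y).
    return lambda x, y: f(x ^ y)

# Example: taking f = parity, g(x, y) is the parity of the Hamming
# weight of x xor y, i.e. of the number of positions where x and y differ.
parity = lambda z: bin(z).count("1") % 2
g = make_xor_function(parity)
# g(0b1010, 0b0110) -> parity(0b1100) = 0
```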

Proceedings ArticleDOI
15 Dec 2009
TL;DR: An enhanced version of OCSA (One Column Striping with non-increasing Area first mapping) for rectangular mapping, eOCSA, is proposed; like OCSA it is simple and fast to implement, but it additionally considers the allocation of extra resources to ensure QoS.
Abstract: Mobile WiMAX systems based on the IEEE 802.16e standard require all downlink allocations to be mapped to a rectangular region in the two dimensional subcarrier-time map. Many published resource allocation schemes ignore this requirement. It is possible that the allocations when mapped to rectangular regions may exceed the capacity of the downlink frame, and the QoS of some flows may be violated. The rectangle mapping problem is a variation of the bin or strip packing problem, which is known to be NP-complete. In a previous paper, an algorithm called OCSA (One Column Striping with non-increasing Area first mapping) for rectangular mapping was introduced. In this paper, we propose an enhanced version of the algorithm. Similar to OCSA, the enhanced algorithm is also simple and fast to implement; however, eOCSA considers the allocation of an additional resource to ensure the QoS. eOCSA also avoids an enumeration process and so lowers the complexity to O(n^2).
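As a toy illustration of column-based rectangle mapping in the spirit of OCSA (a simplified sketch, not the published algorithm; all names are hypothetical): allocations are taken in non-increasing order and each is shaped into a rectangle spanning as few columns of the H-row frame as possible.

```python
import math

def map_allocations(demands, W, H):
    # demands: slots requested per flow; the frame is W columns x H rows.
    # Returns {flow_index: (x, y, w, h)}; raises if the frame overflows.
    # Unlike real OCSA/eOCSA, no attempt is made to stack small rectangles
    # into leftover vertical space, so this sketch over-allocates columns.
    rects, x = {}, 0
    for i, d in sorted(enumerate(demands), key=lambda p: -p[1]):
        w = math.ceil(d / H)   # fewest columns that can hold d slots
        h = math.ceil(d / w)   # rows actually needed in those columns
        if x + w > W:
            raise ValueError("frame capacity exceeded")
        rects[i] = (x, 0, w, h)
        x += w
    return rects
```

For a 5x10 frame and demands [10, 30, 4], the largest demand gets a 3x10 rectangle at the left edge and the rest follow in single columns; the rectangularity constraint from the abstract is what forces 30 slots into 3 full columns rather than an arbitrary shape.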

Proceedings ArticleDOI
17 Nov 2009
TL;DR: The investigation aims to characterize optimal sequences of routes over which a secondary flow is maintained, according to a novel metric that considers the maintenance cost of a route as channels and/or links must be switched due to the primary user activity.
Abstract: Cognitive Radio Networks (CRNs) are composed of frequency-agile radio devices that allow licensed (primary) and unlicensed (secondary) users to coexist, where secondary users opportunistically access channels without interfering with the operation of primary ones. From the perspective of secondary users, spectrum availability is a time varying network resource over which multi-hop end-to-end connections must be maintained. In this work, a theoretical outlook on the problem of routing secondary user flows in a CRN is provided. The investigation aims to characterize optimal sequences of routes over which a secondary flow is maintained. The optimality is defined according to a novel metric that considers the maintenance cost of a route as channels and/or links must be switched due to the primary user activity. Different from the traditional notion of route stability, the proposed approach considers subsequent path selections, as well. The problem is formulated as an integer programming optimization model and shown to be of polynomial time complexity in case of full knowledge of primary user activity. Properties of the problem are also formally introduced and leveraged to design a heuristic algorithm to solve the minimum maintenance cost routing problem when information on primary user activity is not complete. Numerical results are presented to assess the optimality gap of the heuristic routing algorithm.

Book ChapterDOI
06 Jul 2009
TL;DR: In this paper, the authors consider the problem of annotating a data stream as it is read and show upper bounds that achieve a non-trivial tradeoff between the amount of annotation used and the space required to verify it.
Abstract: The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms can be further reduced by enlisting a more powerful "helper" who can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a non-trivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend -- including recent studies of multi-pass streaming, read/write streams and randomly ordered streams -- of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right.
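As a toy example of the annotation idea for selection (a simplified sketch, not a protocol from the paper): an untrusted helper announces a claimed k-th smallest element before the stream arrives, and a constant-space verifier checks the claim by counting.

```python
def verify_selection(stream, k, claimed):
    # O(1)-space check that `claimed` is a valid k-th smallest element
    # of the stream (1-indexed): k must lie in the claimed value's
    # rank interval (below, below + equal].
    below = equal = 0
    for item in stream:
        if item < claimed:
            below += 1
        elif item == claimed:
            equal += 1
    return below < k <= below + equal
```

In this toy check a cheating helper is always caught, because the two counters pin down the claimed element's rank exactly; the schemes in the paper trade larger annotations for verifying richer queries such as frequency moments.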

Journal ArticleDOI
TL;DR: It is shown that both the error-bounded randomized complexity and the quantum communication complexity with entanglement are Θ(r0 + r1), where r0 and r1 are the minimum integers such that r0, r1 ≤ n/2 and S(k) = S(k + 2) for all k ∈ [r0, n - r1).
Abstract: We call F : {0, 1}^n × {0, 1}^n → {0, 1} a symmetric XOR function if, for a function S : {0, 1, ..., n} → {0, 1}, F(x, y) = S(|x⊕y|) for any x, y ∈ {0, 1}^n, where |x⊕y| is the Hamming weight of the bit-wise XOR of x and y. We show that for any such function, (a) the deterministic communication complexity is always Θ(n) except for four simple functions that have a constant complexity, and (b) up to a polylog factor, both the error-bounded randomized complexity and the quantum communication complexity with entanglement are Θ(r0 + r1), where r0 and r1 are the minimum integers such that r0, r1 ≤ n/2 and S(k) = S(k + 2) for all k ∈ [r0, n - r1).
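A symmetric XOR function is easy to state in code. A minimal sketch (illustrative; S is tabulated as a list of length n+1 over Hamming weights):

```python
def sym_xor(S, x, y):
    # F(x, y) = S(|x xor y|), with x, y given as n-bit integers and
    # S the predicate on Hamming weights 0..n, tabulated as a list.
    return S[bin(x ^ y).count("1")]

# Example for n = 4: S = [1, 0, 0, 0, 0] accepts only weight 0,
# so F becomes the equality function (one of the "simple" cases).
EQ = [1, 0, 0, 0, 0]
# sym_xor(EQ, 0b1010, 0b1010) -> 1
# sym_xor(EQ, 0b1010, 0b1000) -> 0
```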

Journal ArticleDOI
TL;DR: This article proposes protocols that improve the previously known results by an O(N) factor in the computation and communication complexities of fundamental set operations including set intersection, cardinality of set intersections, element reduction, overthreshold set-union, and subset relation.
Abstract: Many applications require performing set operations without publishing individual datasets. In this article, we address this problem for five fundamental set operations including set intersection, cardinality of set intersection, element reduction, overthreshold set-union, and subset relation. Our protocols are obtained in the universally composable security framework, under the assumptions of a probabilistic polynomial-time bounded adversary, which actively controls a fixed set of t parties, and of an authenticated broadcast channel. Our constructions utilize building blocks of nonmalleable NonInteractive Zero-Knowledge (NIZK) arguments, which are based on a (t + 1, N)-threshold version (N is the number of parties in the protocol) of the Boneh-Goh-Nissim (BGN) cryptosystem whose underlying group supports bilinear maps, under the assumption that the public key and shares of the secret key have been generated by a trusted dealer. The previous studies were all based on the stand-alone model with the same assumptions on the adversary, broadcast channel, and key generation. For the first four operations, we propose protocols that improve the previously known results by an O(N) factor in the computation and communication complexities. For the subset relation, our protocol is the first one secure against the active adversary. Our NIZK constructions are of independent interest in that, although the previous work also mentioned them as building blocks, it did not illustrate how to construct them. We construct these NIZK with an additional nonmalleable property, the same complexity as claimed in the previous work, and also an improvement on the communication complexity.

Proceedings ArticleDOI
25 Oct 2009
TL;DR: In this paper, an n^Omega(1)/4^k lower bound was shown on the randomized k-party communication complexity of depth 4 AC^0 functions in the number-on-forehead (NOF) model for up to Theta(log n) players. This lower bound implies the first superpolynomial lower bounds for the simulation of AC^0 by MAJ-SYMM-AND circuits.
Abstract: We prove an n^Omega(1)/4^k lower bound on the randomized k-party communication complexity of depth 4 AC^0 functions in the number-on-forehead (NOF) model for up to Theta(log n) players. These are the first non-trivial lower bounds for general NOF multiparty communication complexity for any AC^0 function for omega(log log n) players. For non-constant k the bounds are larger than all previous lower bounds for any AC^0 function even for simultaneous communication complexity. Our lower bounds imply the first superpolynomial lower bounds for the simulation of AC^0 by MAJ-SYMM-AND circuits, showing that the well-known quasipolynomial simulations of AC^0 by such circuits are qualitatively optimal, even for formulas of small constant depth. We also exhibit a depth 5 formula in NP^cc - BPP^cc for up to Theta(log n) players and derive an Omega(2^{sqrt{log n}/sqrt{k}}) lower bound on the randomized k-party NOF communication complexity of set disjointness for up to Theta(log^{1/3} n) players, which is significantly larger than the O(log log n) players allowed in the best previous lower bounds for multiparty set disjointness. We prove other strong results for depth 3 and 4 AC^0 functions.

Journal ArticleDOI
TL;DR: It is proved that the weak topology discovery problem is solvable only if the connectivity of the network exceeds the number of faults in the system and that the strong version of the problem issolvable onlyif the network connectivity is more than twice theNumber of faults.
Abstract: We pose and study the problem of Byzantine-robust topology discovery in an arbitrary asynchronous network. The problem is an abstraction of fault-tolerant routing. We formally state the weak and strong versions of the problem. The weak version requires that either each node discovers the topology of the network or at least one node detects the presence of a faulty node. The strong version requires that each node discovers the topology regardless of faults. We focus on non-cryptographic solutions to these problems. We explore their bounds. We prove that the weak topology discovery problem is solvable only if the connectivity of the network exceeds the number of faults in the system. Similarly, we show that the strong version of the problem is solvable only if the network connectivity is more than twice the number of faults. We present solutions to both versions of the problem. The presented algorithms match the established graph connectivity bounds. The algorithms do not require the individual nodes to know either the diameter or the size of the network. The message complexity of both programs is low polynomial with respect to the network size. We describe how our solutions can be extended to add the property of termination, handle topology changes, and perform neighborhood discovery.

Journal ArticleDOI
TL;DR: In this article, it was shown that the eventual leader election oracle Omega can also be implemented in a system with at least one process with f outgoing moving eventually timely links, assuming either unicast or broadcast steps.
Abstract: Aguilera et al. and Malkhi et al. presented two system models, which are weaker than all previously proposed models where the eventual leader election oracle Omega can be implemented, and thus, consensus can also be solved. The former model assumes unicast steps and at least one correct process with f outgoing eventually timely links, whereas the latter assumes broadcast steps and at least one correct process with f bidirectional but moving eventually timely links. Consequently, those models are incomparable. In this paper, we show that Omega can also be implemented in a system with at least one process with f outgoing moving eventually timely links, assuming either unicast or broadcast steps. It seems to be the weakest system model known so far that allows consensus to be solved via Omega-based algorithms. We also provide matching lower bounds for the communication complexity of Omega in this model, which are based on an interesting "stabilization property" of infinite runs. Those results reveal a fairly high price to be paid for this further relaxation of synchrony properties.

Journal ArticleDOI
TL;DR: This work proposes a sub-optimal two-step solution which decouples beamforming from subcarrier and power allocation in the MIMO-OFDMA system and reveals comparable performance with the hugely complex optimal solution.
Abstract: We study the downlink multiuser Multiple Input Multiple Output-Orthogonal Frequency Division Multiple Access (MIMO-OFDMA) system for margin adaptive resource allocation where the base station (BS) has to satisfy individual quality of service (QoS) constraints of the users subject to transmit power minimization. Low complexity solutions involve beamforming techniques for multiuser inter-stream interference cancellation. However, when beamforming is introduced in the margin adaptive objective, it becomes a joint beamforming and resource allocation problem. We propose a sub-optimal two-step solution which decouples beamforming from subcarrier and power allocation. First, a reduced number of user groups is formed, and then the problem is formulated as a convex optimization problem. Finally, an efficient algorithm is developed which allocates the best user group to each subcarrier. Simulation results reveal performance comparable to that of the far more complex optimal solution.