
Showing papers on "Communication complexity published in 2008"


Book ChapterDOI
17 Aug 2008
TL;DR: A simple and efficient compiler is presented for transforming secure multi-party computation protocols that enjoy security only with an honest majority into MPC protocols that guarantee security with no honest majority, in the oblivious-transfer (OT) hybrid model.
Abstract: We present a simple and efficient compiler for transforming secure multi-party computation (MPC) protocols that enjoy security only with an honest majority into MPC protocols that guarantee security with no honest majority, in the oblivious-transfer (OT) hybrid model. Our technique works by combining a secure protocol in the honest majority setting with a protocol achieving only security against semi-honest parties in the setting of no honest majority. Applying our compiler to variants of protocols from the literature, we get several applications for secure two-party computation and for MPC with no honest majority. These include: Constant-rate two-party computation in the OT-hybrid model. We obtain a statistically UC-secure two-party protocol in the OT-hybrid model that can evaluate a general circuit C of size s and depth d with a total communication complexity of O(s) + poly(k, d, log s) and O(d) rounds. The above result generalizes to a constant number of parties. Extending OTs in the malicious model. We obtain a computationally efficient protocol for generating many string OTs from few string OTs with only a constant amortized communication overhead compared to the total length of the string OTs. Black-box constructions for constant-round MPC with no honest majority. We obtain general computationally UC-secure MPC protocols in the OT-hybrid model that use only a constant number of rounds, and only make black-box access to a pseudorandom generator. This gives the first constant-round protocols for three or more parties that only make a black-box use of cryptographic primitives (and avoid expensive zero-knowledge proofs).

635 citations


Journal ArticleDOI
TL;DR: This work proposes a diffusion recursive least-squares algorithm in which nodes need to communicate only with their closest neighbors and no transmission or inversion of matrices is required, thereby saving communication and complexity.
Abstract: We study the problem of distributed estimation over adaptive networks, where a collection of nodes are required to estimate, in a collaborative manner, some parameter of interest from their measurements. The centralized solution to the problem uses a fusion center, thus requiring a large amount of energy for communication. Incremental strategies that obtain the global solution have been proposed, but they require the definition of a cycle through the network. We propose a diffusion recursive least-squares algorithm in which nodes need to communicate only with their closest neighbors. The algorithm has no topology constraints and requires no transmission or inversion of matrices, thereby saving communication and complexity. We show that the algorithm is stable, analyze its performance by comparing it to the centralized global solution, and show how to select the combination weights optimally.
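The adapt-then-combine structure described above lends itself to a compact sketch. Below is a minimal illustration of one diffusion RLS step at a single node, assuming an adapt-then-combine form with given combination weights; the function name, signature, and forgetting factor are illustrative, not the paper's exact recursion.

```python
import numpy as np

def diffusion_rls_step(w, P, u, d, neighbor_estimates, combine_weights, lam=0.99):
    """One adapt-then-combine diffusion RLS step at a single node (sketch).

    w, P               : node's current estimate and inverse-correlation matrix
    u, d               : local regressor and measurement, with d ~ u @ w_true
    neighbor_estimates : list of estimate vectors received from neighbors
    combine_weights    : convex weights [self, neighbor_1, ...] summing to 1
    """
    # Adapt: exponentially weighted RLS update on the local data only.
    Pu = P @ u
    g = Pu / (lam + u @ Pu)            # gain vector
    psi = w + g * (d - u @ w)          # local intermediate estimate
    P = (P - np.outer(g, Pu)) / lam    # rank-1 update: no matrix inversion
    # Combine: average own estimate with neighbors' estimates.
    w_new = combine_weights[0] * psi + sum(
        c * wk for c, wk in zip(combine_weights[1:], neighbor_estimates))
    return w_new, P
```

Note how the rank-1 update avoids any explicit matrix inversion, and only estimate vectors cross the links, which is the source of the communication and complexity savings claimed above.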

592 citations


Journal ArticleDOI
01 Feb 2008
TL;DR: VLSI implementation results are provided which demonstrate that single tree-search, sorted QR-decomposition, channel matrix regularization, log-likelihood ratio clipping, and imposing runtime constraints are the key ingredients for realizing soft-output MIMO detectors with near max-log performance at a chip area that is only 58% higher than that of the best-known hard-output sphere decoder VLSI implementation.
Abstract: Multiple-input multiple-output (MIMO) detection algorithms providing soft information for a subsequent channel decoder pose significant implementation challenges due to their high computational complexity. In this paper, we show how sphere decoding can be used as an efficient tool to implement soft-output MIMO detection with flexible trade-offs between computational complexity and (error rate) performance. In particular, we provide VLSI implementation results which demonstrate that single tree-search, sorted QR-decomposition, channel matrix regularization, log-likelihood ratio clipping, and imposing runtime constraints are the key ingredients for realizing soft-output MIMO detectors with near max-log performance at a chip area that is only 58% higher than that of the best-known hard-output sphere decoder VLSI implementation.
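To make the starting point concrete, here is a compact hard-output depth-first sphere decoder for real-valued BPSK. This is a textbook baseline, not the paper's soft-output single-tree-search detector; the soft-output version additionally tracks counter-hypothesis metrics to form LLRs and clips them.

```python
import numpy as np

def sphere_decode(H, y):
    """Hard-output depth-first sphere decoder for y = H s + n, s in {-1,+1}^n.

    Uses the QR decomposition so partial metrics accumulate one level at a
    time; branches whose metric already exceeds the best full solution are
    pruned (the shrinking "sphere").
    """
    Q, R = np.linalg.qr(H)
    z = Q.T @ y
    n = H.shape[1]
    best = {"metric": np.inf, "s": None}

    def search(level, partial, dist):
        if dist >= best["metric"]:      # prune: outside the current sphere
            return
        if level < 0:                   # reached a leaf: full candidate
            best["metric"], best["s"] = dist, partial.copy()
            return
        for sym in (1.0, -1.0):         # enumerate BPSK symbols at this level
            partial[level] = sym
            r = z[level] - R[level, level:] @ partial[level:]
            search(level - 1, partial, dist + r * r)

    search(n - 1, np.zeros(n), 0.0)
    return best["s"]
```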

404 citations


Book ChapterDOI
19 Mar 2008
TL;DR: This protocol provides perfect security against an active, adaptive adversary corrupting t < n/3 players, which is optimal, and improves the efficiency of perfectly secure MPC protocols by a factor of Ω(n²).
Abstract: Secure multi-party computation (MPC) allows a set of n players to securely compute an agreed function, even when up to t players are under the control of an adversary. Known perfectly secure MPC protocols require communication of at least Ω(n³) field elements per multiplication, whereas cryptographic or unconditional security is possible with communication linear in the number of players. We present a perfectly secure MPC protocol communicating O(n) field elements per multiplication. Our protocol provides perfect security against an active, adaptive adversary corrupting t < n/3 players, which is optimal. Thus our protocol improves the security of the most efficient information-theoretically secure protocol at no extra cost, and improves the efficiency of perfectly secure MPC protocols by a factor of Ω(n²). To achieve this, we introduce a novel technique: constructing detectable protocols with the help of so-called hyper-invertible matrices, which we believe to be of independent interest. Hyper-invertible matrices allow (among other things) efficient correctness checks of many instances in parallel, which was previously possible only when a non-zero error probability was allowed.
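A standard way to realize hyper-invertible matrices, consistent with the interpolation-based construction in the literature, is via Lagrange evaluation: row i maps inputs, viewed as values of a degree-(n−1) polynomial at points α_1,...,α_n, to the polynomial's value at β_i. The sketch below builds such a matrix over a prime field; the modulus and evaluation points are illustrative.

```python
P = 2**31 - 1  # a prime modulus; any field with >= 2n distinct points works

def hyper_invertible(n, p=P):
    """Return an n x n hyper-invertible matrix over GF(p) (sketch).

    Row i evaluates, at beta_i, the unique degree-(n-1) polynomial through
    (alpha_j, x_j). Every square submatrix is invertible because any n of
    the 2n input/output values determine the polynomial.
    """
    alphas = list(range(1, n + 1))
    betas = list(range(n + 1, 2 * n + 1))
    M = [[0] * n for _ in range(n)]
    for i, b in enumerate(betas):
        for j, a in enumerate(alphas):
            num, den = 1, 1                      # Lagrange coefficient l_j(b)
            for k, ak in enumerate(alphas):
                if k != j:
                    num = num * (b - ak) % p
                    den = den * (a - ak) % p
            M[i][j] = num * pow(den, p - 2, p) % p
    return M
```

Applied to a batch of sharings, such a matrix lets parties check many instances at once: since any n of the 2n inputs and outputs determine the rest, errors introduced by corrupted parties necessarily show up in coordinates checked by honest ones.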

255 citations


Journal ArticleDOI
TL;DR: This work advocates a cross-layer approach to joint multiuser transmit beamforming and admission control, aiming to maximize the number of users that can be served at their desired QoS.
Abstract: Multiuser downlink beamforming under quality of service (QoS) constraints has attracted considerable interest in recent years, because it is particularly appealing from a network operator's perspective (e.g., UMTS, 802.16e). When there are many co-channel users and/or the service constraints are stringent, the problem becomes infeasible and some form of admission control is necessary. We advocate a cross-layer approach to joint multiuser transmit beamforming and admission control, aiming to maximize the number of users that can be served at their desired QoS. It is shown that the core problem is NP-hard, yet amenable to convex approximation tools. Two computationally efficient convex approximation algorithms are proposed: one is based on semidefinite relaxation of an equivalent problem reformulation; the other takes a penalized second-order cone approach. Their performance is assessed in a range of experiments, using both simulated and measured channel data. In all experiments considered, the proposed algorithms work remarkably well in terms of the attained performance-complexity trade-off, consistently exhibiting close to optimal performance at an affordable computational complexity.
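For context, the semidefinite-relaxation step mentioned above typically proceeds as follows (a standard formulation, shown here for per-user SINR constraints; the paper's exact reformulation may differ). With channel $\mathbf{h}_k$, beamformer $\mathbf{w}_k$, SINR target $\gamma_k$, and noise power $\sigma_k^2$, the constraint

\[
\frac{|\mathbf{h}_k^H\mathbf{w}_k|^2}{\sum_{j\ne k}|\mathbf{h}_k^H\mathbf{w}_j|^2+\sigma_k^2}\ \ge\ \gamma_k
\]

becomes linear after lifting $\mathbf{W}_k=\mathbf{w}_k\mathbf{w}_k^H$:

\[
\mathrm{Tr}(\mathbf{h}_k\mathbf{h}_k^H\mathbf{W}_k)\ -\ \gamma_k\sum_{j\ne k}\mathrm{Tr}(\mathbf{h}_k\mathbf{h}_k^H\mathbf{W}_j)\ \ge\ \gamma_k\sigma_k^2,\qquad \mathbf{W}_k\succeq 0,
\]

and the relaxation consists of dropping the nonconvex constraints $\mathrm{rank}(\mathbf{W}_k)=1$, yielding a semidefinite program.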

226 citations


Journal ArticleDOI
20 Jan 2008
TL;DR: In this paper, the authors study the distributed functional monitoring problem, in which k players each track their inputs and communicate with a central coordinator, and the goal is to minimize the number of bits communicated between the players and the coordinator.
Abstract: We study what we call functional monitoring problems. We have k players, each tracking their inputs, say player i tracking a multiset Ai(t) up until time t, and communicating with a central coordinator. The coordinator's task is to monitor a given function f computed over the union of the inputs ∪iAi(t), continuously at all times t. The goal is to minimize the number of bits communicated between the players and the coordinator. A simple example is when f is the sum, and the coordinator is required to alert when the sum of a distributed set of values exceeds a given threshold τ. Of interest is the approximate version where the coordinator outputs 1 if f ≥ τ and 0 if f ≤ (1 − ε)τ. This defines the (k, f, τ, ε) distributed functional monitoring problem. Functional monitoring problems are fundamental in distributed systems, in particular sensor networks, where we must minimize communication; they also connect to problems in communication complexity, communication theory, and signal processing. Yet few formal bounds are known for functional monitoring. We give upper and lower bounds for the (k, f, τ, ε) problem for some of the basic f's. In particular, we study the frequency moments (F0, F1, F2). For F0 and F1, we obtain continuous monitoring algorithms with costs almost the same as those of their one-shot computation algorithms. However, for F2 the monitoring problem seems much harder. We give a carefully constructed multi-round algorithm that uses "sketch summaries" at multiple levels of detail and solves the (k, F2, τ, ε) problem with communication O(k²/ε + (√k/ε)³). Since frequency moment estimation is central to other problems, our results have immediate applications to histograms, wavelet computations, and others. Our algorithmic techniques are likely to be useful for other functional monitoring problems as well.
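The sum (F1) case conveys why continuous monitoring can be cheap. Below is a toy sketch of the folklore strategy of reporting in fixed increments, which uses roughly k/ε messages in total; the class names and trigger rule are illustrative, not the paper's algorithms.

```python
class Coordinator:
    """Coordinator for monotone threshold-sum (F1) monitoring (sketch)."""
    def __init__(self, tau, eps, k):
        self.tau, self.eps = tau, eps
        self.slack = max(1, int(eps * tau / k))   # per-site reporting unit
        self.reported, self.alerted = 0, False

    def receive(self):                # one message: "my count grew by slack"
        self.reported += self.slack
        # Each site withholds < slack, so true_sum < reported + eps*tau;
        # hence if true_sum >= tau, reported has crossed (1 - eps) * tau.
        if self.reported >= (1 - self.eps) * self.tau:
            self.alerted = True

class Site:
    """One of the k players, tracking a nondecreasing local count."""
    def __init__(self, coordinator):
        self.coord, self.count, self.sent = coordinator, 0, 0

    def update(self, inc=1):
        self.count += inc
        while self.count - self.sent >= self.coord.slack:
            self.sent += self.coord.slack
            self.coord.receive()
```

Total communication is about τ/slack = k/ε messages regardless of stream length, echoing the theme that F1 monitoring costs roughly the same as one-shot computation.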

180 citations


Proceedings Article
27 Jun 2008
TL;DR: A more efficient algorithm is given and some related issues are addressed, such as the number of orders that may be compatible with a given profile, or the communication complexity of preference aggregation under the single-peakedness assumption.
Abstract: A common way of dealing with the paradoxes of preference aggregation consists in restricting the domain of admissible preferences. The most well-known such restriction is single-peakedness. In this paper we focus on the problem of determining whether a given profile is single-peaked with respect to some axis, and on the computation of such an axis. This problem has already been considered in [2]; we give here a more efficient algorithm and address some related issues, such as the number of orders that may be compatible with a given profile, or the communication complexity of preference aggregation under the single-peakedness assumption.
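Checking single-peakedness against a given axis is the easy direction and makes the definitions concrete (finding the axis, as the paper does, is the harder problem). A ranking is single-peaked on an axis iff, walking from its least-preferred to most-preferred alternative, the removed alternative always sits at one end of the remaining interval of the axis. A small sketch, with hypothetical helper names:

```python
def is_single_peaked(ranking, axis):
    """ranking: candidates from most to least preferred; axis: left-to-right order."""
    pos = {c: i for i, c in enumerate(axis)}
    lo, hi = 0, len(axis) - 1
    for c in reversed(ranking):          # least preferred first
        if pos[c] == lo:
            lo += 1                      # worst remaining is leftmost
        elif pos[c] == hi:
            hi -= 1                      # worst remaining is rightmost
        else:
            return False                 # a "valley" inside the interval
    return True

def profile_single_peaked(profile, axis):
    return all(is_single_peaked(r, axis) for r in profile)
```

For example, with axis (a, b, c, d), the ranking b > c > a > d is single-peaked (peak at b), while b > d > a > c is not.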

137 citations


Proceedings ArticleDOI
17 Jun 2008
TL;DR: The protocol for the single-step k-NN search is provably secure and has linear computation and communication complexity, and the protocols and correctness proofs can be extended to suit other privacy-preserving data mining tasks, such as classification and outlier detection.
Abstract: We give efficient protocols for secure and private k-nearest neighbor (k-NN) search, when the data is distributed between two parties who want to cooperatively compute the answers without revealing to each other their private data. Our protocol for the single-step k-NN search is provably secure and has linear computation and communication complexity. Previous work on this problem had quadratic complexity and also leaked information about the parties' inputs. We adapt our techniques to also solve the general multi-step k-NN search, and describe a specific embodiment of it for the case of sequence data. The protocols and correctness proofs can be extended to suit other privacy-preserving data mining tasks, such as classification and outlier detection.

118 citations


Journal Article
TL;DR: A lower bound of Ω(n^(1/(k+1)) / 2^(2(k−1)·2^(k−1))) on the k-party randomized communication complexity of the Disjointness function in the 'Number on the Forehead' model of multiparty communication; a similar bound was independently obtained by Lee and Shraibman.
Abstract: We obtain a lower bound of Ω(n^(1/(k+1)) / 2^(2(k−1)·2^(k−1))) on the k-party randomized communication complexity of the Disjointness function in the 'Number on the Forehead' model of multiparty communication. In particular, this yields a bound of n^Ω(1) when k is a constant. The previous best lower bound for three players, until recently, was Ω(log n). Our bound separates the communication complexity classes NP_k and BPP_k for k = o(log log n). Furthermore, by the results of Beame, Pitassi and Segerlind [4], our bound implies proof size lower bounds for tree-like, degree k−1 threshold systems and superpolynomial size lower bounds for Lovász-Schrijver proofs. Sherstov [16] recently developed a novel technique to obtain lower bounds on two-party communication using the approximate polynomial degree of Boolean functions. We obtain our results by extending his technique to the multi-party setting using ideas from Chattopadhyay [8]. A similar bound for Disjointness has been recently and independently obtained by Lee and Shraibman.

98 citations


Proceedings ArticleDOI
25 Oct 2008
TL;DR: In this paper, a version of the Bonami-Beckner hypercontractive inequality for matrix-valued functions on the Boolean cube is presented, based on a powerful inequality by Ball, Carlen, and Lieb; among its applications is an arguably first "non-quantum" proof of an exponential length lower bound for 2-query locally decodable codes.
Abstract: The Bonami-Beckner hypercontractive inequality is a powerful tool in Fourier analysis of real-valued functions on the Boolean cube. In this paper we present a version of this inequality for matrix-valued functions on the Boolean cube. Its proof is based on a powerful inequality by Ball, Carlen, and Lieb. We also present a number of applications. First, we analyze maps that encode n classical bits into m qubits, in such a way that each set of k bits can be recovered with some probability by an appropriate measurement on the quantum encoding; we show that if m < 0.7 n, then the success probability is exponentially small in k. This result may be viewed as a direct product version of Nayak's quantum random access code bound. It in turn implies strong direct product theorems for the one-way quantum communication complexity of Disjointness and other problems. Second, we prove that error-correcting codes that are locally decodable with 2 queries require length exponential in the length of the encoded string. This gives what is arguably the first "non-quantum" proof of a result originally derived by Kerenidis and de Wolf using quantum information theory.
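For reference, the scalar inequality that the paper extends to matrix-valued functions can be stated as follows (the standard Bonami-Beckner form): for $f:\{0,1\}^n\to\mathbb{R}$ with Fourier expansion $f=\sum_S \hat f(S)\chi_S$ and $1\le p\le q$,

\[
\|T_\rho f\|_q \;\le\; \|f\|_p \qquad\text{whenever}\quad 0\le\rho\le\sqrt{\frac{p-1}{q-1}},\qquad T_\rho f \;=\; \sum_{S\subseteq[n]}\rho^{|S|}\,\hat f(S)\,\chi_S .
\]

Roughly speaking, the matrix-valued version replaces absolute values by matrix (Schatten) norms, with the constants supplied by the Ball-Carlen-Lieb inequality.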

97 citations


Book ChapterDOI
07 Jul 2008
TL;DR: This paper shows that if an additional interactive verification phase is allowed, then for some NP languages one can construct PCPs that are significantly shorter than the known PCPs (without the additional interactive phase) for these languages.
Abstract: A central line of research in the area of PCPs is devoted to constructing short PCPs. In this paper, we show that if we allow an additional interactive verification phase, with very low communication complexity, then for some NP languages, one can construct PCPs that are significantly shorter than the known PCPs (without the additional interactive phase) for these languages. We give many cryptographic applications and motivations for our results and for the study of the new model in general. More specifically, we study a new model of proofs: interactive-PCP. Roughly speaking, an interactive-PCP (say, for the membership x ∈ L) is a proof-string that can be verified by reading only one of its bits, with the help of an interactive proof with very small communication complexity. We show that for membership in some NP languages L, there are interactive-PCPs that are significantly shorter than the known (non-interactive) PCPs for these languages. Our main result is that for any constant depth Boolean formula Φ(z1,...,zk) of size n (over the gates ∧, ∨, ⊕, ¬), a prover, Alice, can publish a proof-string for the satisfiability of Φ, where the size of the proof-string is poly(k). Later on, any user who wishes to verify the published proof-string needs to interact with Alice via a short interactive protocol of communication complexity poly(log n), while accessing the proof-string at a single location. Note that the size of the published proof-string is poly(k), rather than poly(n), i.e., the size is polynomial in the size of the witness, rather than polynomial in the size of the instance. This compares to the known (non-interactive) PCPs that are of size polynomial in the size of the instance. By reductions, this result extends to many other central NP languages (e.g., SAT, k-clique, Vertex-Cover, etc.). More generally, we show that the satisfiability of $\bigwedge_{i=1}^n[\Phi_i(z_1,\ldots,z_k) =0]$, where each Φi(z1,...,zk) is an arithmetic formula of size n (say, over $\mathbb{GF}[2]$) that computes a polynomial of degree d, can be proved by a published proof-string of size poly(k, d). Later on, any user who wishes to verify the published proof-string needs to interact with the prover via an interactive protocol of communication complexity poly(d, log n), while accessing the proof-string at a single location. We give many applications and motivations for our results and for the study of the notion of interactive PCP in general. In particular, we have the following applications: Succinct zero-knowledge proofs: We show that any interactive PCP, with certain properties, can be converted into a zero-knowledge interactive proof. We use this to construct zero-knowledge proofs of communication complexity polynomial in the size of the witness, rather than polynomial in the size of the instance, for many NP languages. Succinct probabilistically checkable arguments: In a subsequent paper, we study the new notion of probabilistically checkable argument, and show that any interactive PCP, with certain properties, translates into a probabilistically checkable argument [18]. We use this to construct probabilistically checkable arguments of size polynomial in the size of the witness, rather than polynomial in the size of the instance, for many NP languages.
Commit-Reveal schemes: We show that Alice can commit to a string w of k bits, by a message of size poly(k), and later on, for any predicate Φ of size n, whose satisfiability can be proved by an efficient enough interactive PCP with certain properties, Alice can prove the statement Φ(w) = 1, by a zero-knowledge interactive proof with communication complexity poly(log n). (Surprisingly, the communication complexity may be significantly smaller than k and n.)

Proceedings ArticleDOI
17 May 2008
TL;DR: An entirely different proof of Razborov's result is given, using the original, one-dimensional discrepancy method, which refutes the commonly held intuition that the original discrepancy method fails for functions such as DISJOINTNESS and establishes a large new class of total Boolean functions whose quantum communication complexity is at best polynomially smaller than their classical complexity.
Abstract: In a breakthrough result, Razborov (2003) gave optimal lower bounds on the communication complexity of every function f of the form f(x,y) = D(|x AND y|) for some D: {0,1,...,n} → {0,1}, in the bounded-error quantum model with and without prior entanglement. This was proved by the multidimensional discrepancy method. We give an entirely different proof of Razborov's result, using the original, one-dimensional discrepancy method. This refutes the commonly held intuition (Razborov 2003) that the original discrepancy method fails for functions such as DISJOINTNESS. More importantly, our communication lower bounds hold for a much broader class of functions for which no methods were available. Namely, fix an arbitrary function f: {0,1}^{n/4} → {0,1} and let A be the Boolean matrix whose columns are each an application of f to some subset of the variables x_1, x_2, ..., x_n. We prove that the communication complexity of A in the bounded-error quantum model with and without prior entanglement is Ω(d), where d is the approximate degree of f. From this result, Razborov's lower bounds follow easily. Our result also establishes a large new class of total Boolean functions whose quantum communication complexity (regardless of prior entanglement) is at best polynomially smaller than their classical complexity. Our proof method is a novel combination of two ingredients. The first is a certain equivalence of approximation and orthogonality in Euclidean n-space, which follows by linear-programming duality. The second is a new construction of suitably structured matrices with low spectral norm, the pattern matrices, which we realize using matrix analysis and the Fourier transform over (Z_2)^n. The method of this paper has recently inspired important progress in multiparty communication complexity.
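The linear-programming duality alluded to above can be stated concretely (this is the standard dual characterization of approximate degree, written here for ±1-valued f): $\mathrm{deg}_\varepsilon(f)\ge d$ if and only if there is a dual witness $\psi$ with

\[
\sum_{x}|\psi(x)| = 1,\qquad \sum_{x}\psi(x)\,\chi_S(x)=0\ \ \text{for all } |S|<d,\qquad \sum_{x}\psi(x)\,f(x) > \varepsilon .
\]

The pattern-matrix construction then transfers such a dual witness into a hard communication matrix with low spectral norm.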

Proceedings ArticleDOI
17 May 2008
TL;DR: Lower bounds are proved for the case where the order of the items in the stream is chosen not adversarially but rather uniformly (or near-uniformly) from the set of all permutations, which gives stronger evidence for the inherent hardness of streaming problems.
Abstract: We study the communication complexity of evaluating functions when the input data is randomly allocated (according to some known distribution) amongst two or more players, possibly with information overlap. This naturally extends previously studied variable partition models such as the best-case and worst-case partition models [32,29]. We aim to understand whether the hardness of a communication problem holds for almost every allocation of the input, as opposed to holding for perhaps just a few atypical partitions. A key application is to the heavily studied data stream model. There is a strong connection between our communication lower bounds and lower bounds in the data stream model that are "robust" to the ordering of the data. That is, we prove lower bounds for when the order of the items in the stream is chosen not adversarially but rather uniformly (or near-uniformly) from the set of all permutations. This random-order data stream model has attracted recent interest, since lower bounds here give stronger evidence for the inherent hardness of streaming problems. Our results include the first random-partition communication lower bounds for problems including multi-party set disjointness and gap-Hamming-distance. Both are tight. We also extend and improve previous results [19,7] for a form of pointer jumping that is relevant to the problem of selection (in particular, median finding). Collectively, these results yield lower bounds for a variety of problems in the random-order data stream model, including estimating the number of distinct elements, approximating frequency moments, and quantile estimation.

Journal ArticleDOI
TL;DR: This note gives a simple proof of a linear lower bound for the randomized one-way communication complexity of the Hamming distance problem using a simple reduction from the indexing problem and avoids the VC-dimension arguments used in the previous paper.
Abstract: Consider the following version of the Hamming distance problem for ±1 vectors of length n: the promise is that the distance is either at least n/2 + √n or at most n/2 − √n, and the goal is to find out which of these two cases occurs. Woodruff (Proc. ACM-SIAM Symposium on Discrete Algorithms, 2004) gave a linear lower bound for the randomized one-way communication complexity of this problem. In this note we give a simple proof of this result. Our proof uses a simple reduction from the indexing problem and avoids the VC-dimension arguments used in the previous paper. As shown by Woodruff (loc. cit.), this implies an Ω(1/ε²)-space lower bound for approximating frequency moments within a factor 1+ε in the data stream model.

Journal ArticleDOI
TL;DR: A new distributed self-diagnosis protocol, called Dynamic-DSDP, is developed for MANETs that identifies both hard and soft faults in a finite amount of time and is constructed on top of a reliable multi-hop architecture.

Proceedings ArticleDOI
18 Nov 2008
TL;DR: A new localization algorithm which can be effectively used in three-dimensional (3D) wireless sensor networks, needs no additional hardware support, and is implemented in a distributed way.
Abstract: In this paper, we propose a new localization algorithm which can be effectively used in three-dimensional (3D) wireless sensor networks. This scheme needs no additional hardware support and can be implemented in a distributed way. The proposed method can improve the location accuracy with relatively low communication traffic and computing complexity. Simulation results show that the performance of the proposed algorithm is superior to that of the conventional centroid algorithm.
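For orientation, the conventional centroid baseline that the paper compares against is a one-liner: a node estimates its position as the (possibly weighted) average of the anchor positions it can hear. A minimal sketch, with illustrative names:

```python
import numpy as np

def centroid_estimate(anchor_positions, weights=None):
    """Estimate a node's 3D position as the weighted centroid of anchors in range.

    anchor_positions: (m, 3) array of known anchor coordinates
    weights: optional per-anchor weights (e.g., from RSSI); uniform if None
    """
    anchors = np.asarray(anchor_positions, dtype=float)
    w = np.ones(len(anchors)) if weights is None else np.asarray(weights, dtype=float)
    return (anchors * w[:, None]).sum(axis=0) / w.sum()
```

Improving on this baseline in 3D, at low communication and computation cost, is precisely what the proposed algorithm targets.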

Journal ArticleDOI
TL;DR: This work gives an exponential separation between one-way quantum and classical communication protocols for a partial Boolean function (a variant of the Boolean hidden matching problem of Bar-Yossef et al.) and gives a number of applications of this separation.
Abstract: We give an exponential separation between one-way quantum and classical communication protocols for a partial Boolean function (a variant of the Boolean hidden matching problem of Bar-Yossef et al.). Previously, such an exponential separation was known only for a relational problem. The communication problem corresponds to a strong extractor that fails against a small amount of quantum information about its random source. Our proof uses the Fourier coefficients inequality of Kahn, Kalai, and Linial. We also give a number of applications of this separation. In particular, we show that there are privacy amplification schemes that are secure against classical adversaries but not against quantum adversaries; and we give the first example of a key-expansion scheme in the model of bounded-storage cryptography that is secure against classical memory-bounded adversaries but not against quantum ones.

Journal Article
TL;DR: A survey of a new and growing body of work in communication complexity that centers around the dual objects, i.e., polynomials that certify the difficulty of approximating or sign-representing a given function.
Abstract: Representations of Boolean functions by real polynomials play an important role in complexity theory. Typically, one is interested in the least degree of a polynomial p(x1,..., xn) that approximates or sign-represents a given Boolean function f(x1,..., xn). This article surveys a new and growing body of work in communication complexity that centers around the dual objects, i.e., polynomials that certify the difficulty of approximating or sign-representing a given function. We provide a unified guide to the following results, complete with all the key proofs.

Journal ArticleDOI
13 Jun 2008
TL;DR: A low complexity scheduling algorithm which aims to maximize the capacity upper bound is proposed; simulation results show that it achieves total throughput comparable to that of the optimal algorithm with much lower complexity.
Abstract: In multiuser MIMO systems, the base station schedules transmissions to a group of users simultaneously. Since the data transmitted to each user are different, in order to avoid inter-user interference, a transmit preprocessing technique which decomposes the multiuser MIMO downlink channel into multiple parallel independent single-user MIMO channels can be used. When the number of users is larger than the maximum that the system can support simultaneously, the base station selects a subset of users who have the best instantaneous channel quality to maximize the system throughput. Since exhaustive search for the optimal user set is computationally prohibitive, a low complexity scheduling algorithm which aims to maximize the capacity upper bound is proposed. Simulation results show that the proposed scheduling algorithm achieves total throughput comparable to that of the optimal algorithm with much lower complexity.
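Greedy user selection is the usual template for such low-complexity schedulers: starting from an empty set, repeatedly add the user that most increases the capacity bound, and stop when no user helps or the spatial limit is reached. The sketch below uses log det(I + SNR·HHᵀ) as the bound; the exact criterion maximized in the paper may differ.

```python
import numpy as np

def greedy_schedule(channels, max_users, snr=10.0):
    """Greedily pick users to maximize a sum-capacity upper bound (sketch).

    channels: list of per-user channel matrices H_u (rows: user's rx antennas)
    """
    chosen, best = [], 0.0
    while len(chosen) < max_users:
        pick = None
        for u, Hu in enumerate(channels):
            if u in chosen:
                continue
            H = np.vstack([channels[v] for v in chosen] + [Hu])
            _, logdet = np.linalg.slogdet(np.eye(H.shape[0]) + snr * (H @ H.T))
            if logdet > best:            # does adding user u improve the bound?
                best, pick = logdet, u
        if pick is None:                 # no user improves the bound: stop
            break
        chosen.append(pick)
    return chosen
```

Each round costs one determinant per remaining user, versus the exponential number of subsets an exhaustive search would have to score.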

Journal ArticleDOI
TL;DR: A new lower bound for the predecessor problem that matches the bounds of Beame and Fich, obtained via the round elimination approach in the style of Miltersen, Nisan, Safra, and Wigderson.

Proceedings ArticleDOI
09 Jun 2008
TL;DR: This work introduces a monitoring method, based on a geometric interpretation of the problem, which makes it possible to define local constraints at the nodes; it extends the concept of safe zones for the monitoring problem and shows that previous work on geometric monitoring is a special case of the proposed extension.
Abstract: A fundamental problem in distributed computation is the distributed evaluation of functions. The goal is to determine the value of a function over a set of distributed inputs, in a communication-efficient manner. Specifically, we assume that each node holds a time-varying input vector, and we are interested in determining, at any given time, whether the value of an arbitrary function on the average of these vectors crosses a predetermined threshold. In this paper, we introduce a new method for monitoring distributed data, which we term shape-sensitive geometric monitoring. It is based on a geometric interpretation of the problem, which makes it possible to define local constraints on the data received at the nodes. It is guaranteed that as long as none of these constraints has been violated, the value of the function does not cross the threshold. We generalize previous work on geometric monitoring, and solve two problems which seriously hampered its performance: as opposed to the constraints used so far, which depend only on the current values of the local input vectors, here we incorporate their temporal behavior into the constraints. Also, the new constraints are tailored to the geometric properties of the specific function which is being monitored, while the previous constraints were generic. Experimental results on real world data reveal that using the new geometric constraints reduces communication by up to three orders of magnitude in comparison to existing approaches, and considerably narrows the gap between existing results and a newly defined lower bound on the communication complexity.
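For intuition, the classic (shape-oblivious) local test that this paper generalizes can be sketched as follows: each node checks that the ball whose diameter is the segment between the last synchronized global estimate and its own drift vector lies entirely on one side of the threshold surface. The Monte-Carlo check below is purely illustrative; practical implementations bound f over the ball analytically.

```python
import numpy as np

def local_constraint_ok(estimate, local_drift, f, tau, trials=2000, seed=0):
    """Node-local test: is the ball spanned by (estimate, local_drift)
    monochromatic with respect to the predicate f(x) < tau? (sketch)"""
    rng = np.random.default_rng(seed)
    center = (estimate + local_drift) / 2.0
    radius = np.linalg.norm(estimate - local_drift) / 2.0
    side = f(estimate) < tau
    dim = len(center)
    for _ in range(trials):
        d = rng.normal(size=dim)
        r = radius * rng.uniform() ** (1.0 / dim)   # uniform point in the ball
        point = center + r * d / np.linalg.norm(d)
        if (f(point) < tau) != side:
            return False    # local constraint violated: node must communicate
    return True
```

Because the average vector always lies in the convex hull of the nodes' drift vectors, and the balls jointly cover that hull, no communication is needed while every node's test passes; the paper's contribution is to replace these generic balls with constraints shaped by the monitored function and by the data's temporal behavior.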

Journal ArticleDOI
TL;DR: This paper proposes a simple policy for tree topologies under the primary interference model that requires each link to exchange only 1 bit of information with its adjacent links and approximates the maximum throughput region using a computation time that depends only on the maximum degree of nodes and the approximation factor.
Abstract: Several policies have recently been proposed for attaining the maximum throughput region, or a guaranteed fraction thereof, through dynamic link scheduling. Among these policies, the ones that attain the maximum throughput region require a computation time which is linear in the network size, and the ones that require constant or logarithmic computation time attain only certain fractions of the maximum throughput region. In contrast, in this paper we propose policies that can attain any desirable fraction of the maximum throughput region using a computation time that is largely independent of the network size. First, using a combination of graph partitioning techniques and Lyapunov arguments, we propose a simple policy for tree topologies under the primary interference model that requires each link to exchange only 1 bit of information with its adjacent links and approximates the maximum throughput region using a computation time that depends only on the maximum degree of nodes and the approximation factor. Then we develop a framework for attaining arbitrarily close approximations of the maximum throughput region in arbitrary networks, and use this framework to obtain any desired tradeoff between throughput guarantees and computation times for a large class of networks and interference models. Specifically, given any ε > 0, the maximum throughput region can be approximated in these networks within a factor of 1 − ε using a computation time that depends only on the maximum node degree and ε.

Proceedings ArticleDOI
13 Apr 2008
TL;DR: In this paper, the problem of placing both operators and intermediate data objects inside the network for a set of queries so as to minimize the total cost of storage, computation, and data transmission is considered.
Abstract: Recent advances in computer technology and wireless communications have enabled the emergence of stream-based sensor networks. In such sensor networks, real-time data are generated by a large number of distributed sources. Queries are made that may require sophisticated processing and filtering of the data. A query is represented by a query graph. In order to reduce the data transmission and to better utilize resources, it is desirable to place operators of the query graph inside the network, and thus to perform in-network processing. Moreover, given that various queries occur with different frequencies and that only a subset of sensor data may actually be queried, caching intermediate data objects inside the network can help improve query efficiency. In this paper, we consider the problem of placing both operators and intermediate data objects inside the network for a set of queries so as to minimize the total cost of storage, computation, and data transmission. We propose distributed algorithms that achieve optimal solutions for tree-structured query graph topologies and general network topologies. The algorithms converge in L_max(H_Q + 1) iterations, where L_max is of the order of the diameter of the sensor network and H_Q is the depth of the query graph, defined as the maximum number of operations needed for raw data to become final data. For a regular grid network and a complete binary tree query graph, the complexity is O(√N log₂ M), where N is the number of nodes in the sensor network and M is the number of data objects in a query graph. The most attractive features of these algorithms are that they require only information exchanges between neighbors, can be executed asynchronously, are adaptive to cost change and topology change, and are resilient to node or link failures.

Journal ArticleDOI
TL;DR: The communication rate is introduced as the ratio of the secret size to the total number of communication bits transmitted from the participants to the combiner in the secret reconstruction phase; it is also shown that the number of channels can be reduced from n to O(log n), where n is the number of participants in a secret sharing scheme.
Abstract: A secret sharing scheme typically requires secure communications in each of two distribution phases: (1) a dealer distributes shares to participants (share distribution phase); and later (2) the participants in some authorised subset send their share information to a combiner (secret reconstruction phase). While problems on storage required for participants, for example, the size of shares, have been well studied, problems regarding the communication complexity of the two distribution phases seem to have been mostly neglected in the literature so far. In this correspondence, we deal with several communication related problems in the secret reconstruction phase. Firstly, we show that there is a tradeoff between the communication costs and the number of participants involved in the secret reconstruction. We introduce the communication rate as the ratio of the secret size to the total number of communication bits transmitted from the participants to the combiner in the secret reconstruction phase. We derive a lower bound on the communication rate and give constructions that meet the bound. Secondly, we show that the point-to-point secure communication channels for participants to send share information to the combiner can be replaced with partial broadcast channels. We formulate partial broadcast channels as set systems and show that they are equivalent to the well-known combinatorial objects of cover-free family. Surprisingly, we find that the number of partial broadcast channels can be significantly reduced from the number of point-to-point secure channels. Precisely, in its optimal form, the number of channels can be reduced from n to O(log n), where n is the number of participants in a secret sharing scheme. We also study the communication rates of partial broadcast channels for the secret reconstruction.

Journal ArticleDOI
TL;DR: In this article, the authors give the first exponential separation between quantum and bounded-error randomized one-way communication complexity, via the Hidden Matching Problem HM$_n$: its quantum one-way communication complexity is $O(\log n)$, while any bounded-error randomized one-way protocol requires $\Omega(\sqrt{n})$ bits.
Abstract: We give the first exponential separation between quantum and bounded-error randomized one-way communication complexity. Specifically, we define the Hidden Matching Problem HM$_n$: Alice gets as input a string ${\bf x}\in\{0, 1\}^n$, and Bob gets a perfect matching $M$ on the $n$ coordinates. Bob's goal is to output a tuple $\langle i,j,b \rangle$ such that the edge $(i,j)$ belongs to the matching $M$ and $b=x_i\oplus x_j$. We prove that the quantum one-way communication complexity of HM$_n$ is $O(\log n)$, yet any randomized one-way protocol with bounded error must use $\Omega({\sqrt{n}})$ bits of communication. No asymptotic gap for one-way communication was previously known. Our bounds also hold in the model of Simultaneous Messages (SM), and hence we provide the first exponential separation between quantum SM and randomized SM with public coins. For a Boolean decision version of HM$_n$, we show that the quantum one-way communication complexity remains $O(\log n)$ and that the 0-error randomized one-way communication complexity is $\Omega(n)$. We prove that any randomized linear one-way protocol with bounded error for this problem requires $\Omega(\sqrt[3]{n \log n})$ bits of communication.
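The $O(\log n)$ quantum upper bound has a one-line protocol, standard for this problem: Alice sends the $(\log n)$-qubit state

\[
|\psi\rangle \;=\; \frac{1}{\sqrt n}\sum_{i=1}^{n}(-1)^{x_i}\,|i\rangle,
\]

and Bob measures in the basis $\bigl\{\tfrac{1}{\sqrt2}(|i\rangle\pm|j\rangle) : (i,j)\in M\bigr\}$. The outcome $\tfrac{1}{\sqrt2}(|i\rangle+|j\rangle)$ has nonzero amplitude only when $x_i\oplus x_j=0$, and $\tfrac{1}{\sqrt2}(|i\rangle-|j\rangle)$ only when $x_i\oplus x_j=1$, so Bob learns the parity of one matched pair $(i,j)$ with certainty.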

Proceedings ArticleDOI
Peter Larsson
15 Apr 2008
TL;DR: Overall, it is found that the throughput is significantly higher than that of multicast selective repeat ARQ, and that the optimal throughput for an erasure channel is attained.
Abstract: In this paper, multiuser ARQ is extended to multicasting. The core idea is that the sender, based on feedback from users regarding successfully received transmissions, adapts code weights for data packet linear combinations that are then sent. Each user exploits its previously received information in decoding the linearly combined packets. Specifically, a throughput-optimal, low-overhead, online multicast coding and scheduling algorithm enabling low encoding/decoding complexity is devised based on a per-user rank increase criterion. For throughput optimality, a minimum field size criterion is derived. Relative to previous work, which adaptively identifies sets of users suited to receive linearly combined packets and uses GF(2) and XOR coding, the proposed method instead adaptively selects weights from a sufficiently large finite field for optimality. Throughput is analyzed and simulated, and encoding/decoding complexity, signaling overhead, and latency are studied through realistic simulations. Overall, it is found that the throughput is significantly higher than that of multicast selective repeat ARQ, and that the optimal throughput for an erasure channel is attained.
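The per-user rank increase criterion is easy to state in code: a coded packet is useful to a user iff its coefficient vector is linearly independent of what that user already holds. A toy sketch over the small field GF(257) follows; the paper works over a sufficiently large field chosen for throughput optimality, and the field size and helper names here are illustrative.

```python
import numpy as np

P = 257  # small prime field, for illustration only

def rank_gf(M, p=P):
    """Rank of an integer matrix over GF(p) by Gaussian elimination."""
    M = np.array(M, dtype=np.int64) % p
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = M[r] * pow(int(M[r, c]), p - 2, p) % p   # normalize pivot row
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % p       # eliminate column c
        r += 1
    return r

def rank_increases(coeffs, received_rows):
    """Would this coded packet be innovative for a user holding received_rows?"""
    if not received_rows:
        return any(int(c) % P for c in coeffs)
    A = np.vstack(list(received_rows) + [coeffs])
    return rank_gf(A) > rank_gf(np.vstack(received_rows))
```

The sender chooses weights so that every scheduled user's rank increases; once a user's rank reaches the number of original packets, it decodes them all by solving the corresponding linear system.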

Proceedings ArticleDOI
17 May 2008
TL;DR: The subdistribution bound is introduced, a relaxation of the well-studied rectangle or corruption bound in communication complexity, and it is shown that for the communication complexity of Boolean functions with constant error, the subdistribution bound is the same as the latter measure, up to a constant factor.
Abstract: A basic question in complexity theory is whether the computational resources required for solving k independent instances of the same problem scale as k times the resources required for one instance. We investigate this question in various models of classical communication complexity. We introduce a new measure, the subdistribution bound, which is a relaxation of the well-studied rectangle or corruption bound in communication complexity. We nonetheless show that for the communication complexity of Boolean functions with constant error, the subdistribution bound is the same as the latter measure, up to a constant factor. We prove that the one-way version of this bound tightly captures the one-way public-coin randomized communication complexity of any relation, and the two-way version bounds the two-way public-coin randomized communication complexity from below. More importantly, we show that the bound satisfies the strong direct product property under product distributions for both one- and two-way protocols, and the weak direct product property under arbitrary distributions for two-way protocols. These results subsume and strengthen, in a unified manner, several recent results on the direct product question. The simplicity and broad applicability of our technique is perhaps an indication of its potential to solve yet more challenging questions regarding the direct product problem.

Journal ArticleDOI
TL;DR: A new multicast key distribution scheme whose computation complexity is significantly reduced: instead of using conventional encryption algorithms, it employs MDS codes, a class of error control codes, to distribute multicast keys dynamically.
Abstract: Efficient key distribution is an important problem for secure group communications. The communication and storage complexity of the multicast key distribution problem has been studied extensively. In this paper, we propose a new multicast key distribution scheme whose computation complexity is significantly reduced. Instead of using conventional encryption algorithms, the scheme employs MDS codes, a class of error control codes, to distribute multicast keys dynamically. This scheme drastically reduces the computation load of each group member compared to existing schemes employing traditional encryption algorithms. Such a scheme is desirable for many wireless applications where portable devices or sensors need to reduce their computation as much as possible due to battery power limitations. Easily combined with any key-tree-based schemes, this scheme provides much lower computation complexity while maintaining low and balanced communication complexity and storage complexity for secure dynamic multicast key distribution.
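The MDS idea can be illustrated via the Reed-Solomon/Shamir correspondence: encode a key as one symbol of an MDS codeword and hand out other symbols as shares, so that any k symbols determine the rest by polynomial interpolation, with no encryption involved. A minimal sketch follows; the paper's actual scheme layers this idea into key trees, and the field and parameters here are illustrative.

```python
import random

P = 2**31 - 1  # prime modulus, illustrative

def make_shares(key, k, n, seed=0):
    """Split `key` into n codeword symbols; any k of them recover it."""
    rng = random.Random(seed)
    coeffs = [key] + [rng.randrange(P) for _ in range(k - 1)]  # poly, a0 = key
    def evaluate(x):                       # Horner evaluation mod P
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(i, evaluate(i)) for i in range(1, n + 1)]

def recover(shares):
    """Lagrange-interpolate the polynomial at x = 0 from k shares."""
    key = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        key = (key + yi * num * pow(den, P - 2, P)) % P
    return key
```

Encoding and decoding cost a handful of field multiplications per member, which is the kind of computation saving the scheme trades against conventional decryption.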

Posted Content
TL;DR: In this article, the authors propose a theoretical framework for the design of PORs, propose a new variant on the Juels-Kaliski protocol, and describe a prototype implementation.
Abstract: A proof of retrievability (POR) is a compact proof by a file system (prover) to a client (verifier) that a target file F is intact, in the sense that the client can fully recover it. As PORs incur lower communication complexity than transmission of F itself, they are an attractive building block for high-assurance remote storage systems. In this paper, we propose a theoretical framework for the design of PORs. Our framework improves the previously proposed POR constructions of Juels-Kaliski and Shacham-Waters, and also sheds light on the conceptual limitations of previous theoretical models for PORs. It supports a fully Byzantine adversarial model, carrying only the restriction, fundamental to all PORs, that the adversary's error rate ε be bounded when the client seeks to extract F. Our techniques support efficient protocols across the full possible range of ε, up to ε non-negligibly close to 1. We propose a new variant on the Juels-Kaliski protocol and describe a prototype implementation. We demonstrate practical encoding even for files F whose size exceeds that of client main memory.
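To fix intuition, here is a toy spot-checking audit in the spirit of the sentinel-based Juels-Kaliski approach (a simplified illustration, not the framework or protocol variant proposed in the paper): the verifier keeps MACs of a few random blocks and later challenges the prover to return them.

```python
import hmac, hashlib, random

def por_setup(blocks, key, num_checks, seed=0):
    """Verifier-side setup: remember MAC tags for a random sample of blocks.

    blocks: list of byte-strings; key: bytes. The retained state is tiny
    compared to the file itself.
    """
    rng = random.Random(seed)
    sample = rng.sample(range(len(blocks)), num_checks)
    return {i: hmac.new(key, blocks[i], hashlib.sha256).digest() for i in sample}

def por_audit(fetch_block, tags, key):
    """One challenge-response round: communication is one index + one block."""
    i = random.choice(list(tags))
    response = fetch_block(i)          # prover returns the requested block
    actual = hmac.new(key, response, hashlib.sha256).digest()
    return hmac.compare_digest(actual, tags[i])
```

An erasure code applied to F before storage turns such per-block checks into a full retrievability guarantee: a prover that passes audits with noticeable probability must hold enough blocks for the client to extract F.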

Journal ArticleDOI
TL;DR: Extensive simulations with random network topology demonstrate that, by exploiting the unique characteristics of UWB communications and allowing concurrent transmissions appropriately, the proposed exclusive-region based scheduling algorithms can significantly increase the network throughput.
Abstract: With the capability of supporting very high data rate services in a short range, ultra-wideband (UWB) technology is appealing to multimedia applications in future wireless personal area networks (WPANs) and broadband home networks. However, the WPAN medium access control (MAC) protocol in IEEE 802.15.3 standard was originally designed for narrowband communication networks, without considering any specific features of UWB. In this paper, we explore the unique characteristics of UWB communications from which a sufficient condition for scheduling concurrent transmissions in UWB networks is derived: concurrent transmissions can improve the network throughput if all senders are outside the exclusive regions of other flows. We also study the optimal exclusive region size for a UWB network where devices are densely and uniformly located. Since the optimal scheduling problem for peer-to-peer concurrent transmissions in a WPAN is NP-hard, the induced computation load for solving the problem may not be affordable to the network coordinator, commonly a normal UWB device with limited computational power. We propose two simple heuristic scheduling algorithms with polynomial time complexity. Extensive simulations with random network topology demonstrate that, by exploiting the unique characteristics of UWB communications and allowing concurrent transmissions appropriately, the proposed exclusive-region based scheduling algorithms can significantly increase the network throughput.