
Showing papers on "Communication complexity published in 2012"


Book ChapterDOI
19 Aug 2012
TL;DR: A general multiparty computation protocol secure against an active adversary corrupting up to $$n-1$$ of the n players is proposed, which may be used to compute securely arithmetic circuits over any finite field $$\mathbb {F}_{p^k}$$.
Abstract: We propose a general multiparty computation protocol secure against an active adversary corrupting up to $$n-1$$ of the n players. The protocol may be used to compute securely arithmetic circuits over any finite field $$\mathbb {F}_{p^k}$$. Our protocol consists of a preprocessing phase that is independent both of the function to be computed and of the inputs, and a much more efficient online phase where the actual computation takes place. The online phase is unconditionally secure and has total computational and communication complexity linear in n, the number of players, where earlier work was quadratic in n. Moreover, the work done by each player is only a small constant factor larger than what one would need to compute the circuit in the clear. We show this is optimal for computation in large fields. In practice, for 3 players, a secure 64-bit multiplication can be done in 0.05 ms. Our preprocessing is based on a somewhat homomorphic cryptosystem. We extend a scheme by Brakerski et al., so that we can perform distributed decryption and handle many values in parallel in one ciphertext. The computational complexity of our preprocessing phase is dominated by the public-key operations; we need $$O(n^2/s)$$ operations per secure multiplication, where s is a parameter that increases with the security parameter of the cryptosystem. Earlier work in this model needed $$\varOmega(n^2)$$ operations. In practice, the preprocessing prepares a secure 64-bit multiplication for 3 players in about 13 ms.
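
The linear-complexity online phase relies on multiplication triples prepared during preprocessing, in the style of Beaver. The following minimal Python sketch is our illustration of that mechanism under simplifying assumptions (additive sharing over a toy prime field, in-process "players"), not the paper's implementation:

```python
# Minimal sketch of an online secure multiplication from a preprocessed
# Beaver triple (a, b, c = a*b), assuming additive secret sharing over a
# prime field. Field size and player count are illustrative.
import random

P = 2**61 - 1  # toy prime field; the paper works over general F_{p^k}

def share(x, n):
    """Additively share x among n players."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    return parts + [(x - sum(parts)) % P]

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, triple):
    """Online phase: multiply shared x and y using a preprocessed triple."""
    a_sh, b_sh, c_sh = triple
    # Players open e = x - a and d = y - b; this is the only interaction,
    # which is why the online cost stays linear in the number of players.
    e = reconstruct([(x - a) % P for x, a in zip(x_sh, a_sh)])
    d = reconstruct([(y - b) % P for y, b in zip(y_sh, b_sh)])
    # Locally: z = c + e*b + d*a + e*d  (the public e*d is added by player 0).
    z_sh = [(c + e * b + d * a) % P for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + e * d) % P
    return z_sh

# Toy run with 3 players: preprocessing supplies the triple, online uses it.
n, x, y = 3, 25, 40
a, b = random.randrange(P), random.randrange(P)
triple = (share(a, n), share(b, n), share(a * b % P, n))
assert reconstruct(beaver_multiply(share(x, n), share(y, n), triple)) == x * y % P
```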

1,232 citations


Journal ArticleDOI
TL;DR: The verification problem in distributed networks is studied, stated as follows: let H be a subgraph of a network G where each vertex of G knows which edges incident on it are in H.
Abstract: We study the verification problem in distributed networks, stated as follows. Let $H$ be a subgraph of a network $G$ where each vertex of $G$ knows which edges incident on it are in $H$. We would l...

251 citations


Journal ArticleDOI
TL;DR: This paper divides the sensors into a number of nondisjoint feasible subsets such that only one subset of sensors is turned on in each period of time while guaranteeing that the necessary detection and false alarm thresholds are satisfied, and formulates this problem of energy-efficient cooperative spectrum sensing in sensor-aided CR networks as a scheduling problem, which is proved to be NP-complete.
Abstract: A promising technology that tackles the conflict between spectrum scarcity and underutilization is cognitive radio (CR), of which spectrum sensing is one of the most important functionalities. The use of dedicated sensors is an emerging service for spectrum sensing, where multiple sensors perform cooperative spectrum sensing. However, due to the energy constraint of battery-powered sensors, energy efficiency arises as a critical issue in sensor-aided CR networks. An optimal scheduling of each sensor's active time can effectively extend the network lifetime. In this paper, we divide the sensors into a number of nondisjoint feasible subsets such that only one subset of sensors is turned on in each period of time, while guaranteeing that the necessary detection and false alarm thresholds are satisfied. Each subset is activated successively, and nonactivated sensors are put in a low-energy sleep mode to extend the network lifetime. We formulate this problem of energy-efficient cooperative spectrum sensing in sensor-aided CR networks as a scheduling problem, which is proved to be NP-complete. We employ Greedy Degradation to degrade it into a linear integer programming problem and propose three approaches, namely, Implicit Enumeration (IE), General Greedy (GG), and λ-Greedy (λG), to solve the subproblem. Among them, IE can achieve an optimal solution with the highest computational complexity, whereas GG can provide a solution with the lowest complexity but much poorer performance. To achieve a better tradeoff between network lifetime and computational complexity, λG is proposed to approach the performance of IE with complexity comparable to that of GG. Simulation results are presented to verify the performance of our approaches, as well as to study the effect of adjustable parameters on the performance.

201 citations


Journal ArticleDOI
TL;DR: A hybrid Bayesian filter that operates by partitioning the state space into smaller subspaces and thereby reducing the complexity involved with high-dimensional state space is proposed.
Abstract: We propose a cognitive radar network (CRN) system for the joint estimation of the target state comprising the positions and velocities of multiple targets, and the channel state comprising the propagation conditions of an urban transmission channel. We develop a measurement model for the received signal by considering a finite-dimensional representation of the time-varying system function which characterizes the urban transmission channel. We employ sequential Bayesian filtering at the receiver to estimate the target and the channel state. We propose a hybrid Bayesian filter that operates by partitioning the state space into smaller subspaces and thereby reducing the complexity involved with high-dimensional state space. The feedback loop that embodies the radar environment and the receiver enables the transmitter to employ approximate greedy programming to find a suitable subset of antennas to be employed in each tracking interval, as well as the power transmitted by these antennas. We compute the posterior Cramer-Rao bound (PCRB) on the estimates of the target state and the channel state and use it as an optimization criterion for the antenna selection and power allocation algorithms. We use several numerical examples to demonstrate the performance of the proposed system.

183 citations


Journal ArticleDOI
TL;DR: In this paper, the authors considered the interference alignment problem without channel extension and proved that the problem of maximizing the total achieved degrees of freedom for a given MIMO interference channel is NP-hard.
Abstract: Consider a multiple input-multiple output (MIMO) interference channel where each transmitter and receiver are equipped with multiple antennas. An effective approach to practically achieving high system throughput is to deploy linear transceivers (or beamformers) that can optimally exploit the spatial characteristics of the channel. The recent work of Cadambe and Jafar (IEEE Trans. Inf. Theory, vol. 54, no. 8) suggests that optimal beamformers should maximize the total degrees of freedom and achieve interference alignment in the high signal-to-noise ratio (SNR) regime. In this paper we first consider the interference alignment problem without channel extension and prove that the problem of maximizing the total achieved degrees of freedom for a given MIMO interference channel is NP-hard. Furthermore, we show that even checking the achievability of a given tuple of degrees of freedom for all receivers is NP-hard when each receiver is equipped with at least three antennas. Interestingly, the same problem becomes polynomial time solvable when each transmit/receive node is equipped with no more than two antennas. We also propose a distributed algorithm for transmit covariance matrix design that does not require the DoF tuple preassignment, under the assumption that each receiver uses a linear minimum mean square error (MMSE) beamformer. The simulation results show that the proposed algorithm outperforms the existing interference alignment algorithms in terms of system throughput.

174 citations


Proceedings ArticleDOI
19 May 2012
TL;DR: It is shown that IC(f) is equal to the amortized (randomized) communication complexity of f, and this connection implies that a non-trivial exchange of information is required when solving problems that have non-trivial communication complexity.
Abstract: The primary goal of this paper is to define and study the interactive information complexity of functions. Let f(x,y) be a function, and suppose Alice is given x and Bob is given y. Informally, the interactive information complexity IC(f) of f is the least amount of information Alice and Bob need to reveal to each other to compute f. Previously, information complexity has been defined with respect to a prior distribution on the input pairs (x,y). Our first goal is to give a definition that is independent of the prior distribution. We show that several possible definitions are essentially equivalent. We establish some basic properties of the interactive information complexity IC(f). In particular, we show that IC(f) is equal to the amortized (randomized) communication complexity of f. We also show a direct sum theorem for IC(f) and give the first general connection between information complexity and (non-amortized) communication complexity. This connection implies that a non-trivial exchange of information is required when solving problems that have non-trivial communication complexity. We explore the information complexity of two specific problems - Equality and Disjointness. We show that only a constant amount of information needs to be exchanged when solving Equality with no errors, while solving Disjointness with a constant error probability requires the parties to reveal a linear amount of information to each other.
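
For reference, the quantities discussed above can be formalized as follows (a standard presentation; the exact definition and order of quantifiers may differ in detail from the paper's, which shows several variants to be essentially equivalent):

```latex
% Internal information cost of a protocol \pi with transcript \Pi
% on inputs (X, Y) \sim \mu:
\mathrm{IC}_{\mu}(\pi) = I(\Pi ; X \mid Y) + I(\Pi ; Y \mid X)
% Prior-free interactive information complexity of f:
\mathrm{IC}(f) = \inf_{\pi \text{ computing } f} \ \sup_{\mu} \ \mathrm{IC}_{\mu}(\pi)
% The amortization result stated above: IC(f) equals the limiting
% per-copy randomized communication complexity of n copies of f:
\mathrm{IC}(f) = \lim_{n \to \infty} \frac{R\left(f^{\,n}\right)}{n}
```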

165 citations


Proceedings ArticleDOI
17 Jan 2012
TL;DR: This work presents the first deterministic one-pass streaming (1 - 1/e)-approximation algorithm using O(n) space for this setting and introduces an ε-matching cover of a bipartite graph G, which is a sparse subgraph of the original graph that preserves the size of the maximum matching between every subset of vertices to within an additive εn error.
Abstract: Consider the following communication problem. Alice holds a graph G_A = (P, Q, E_A) and Bob holds a graph G_B = (P, Q, E_B), where |P| = |Q| = n. Alice is allowed to send Bob a message m that depends only on the graph G_A. Bob must then output a matching M ⊆ E_A ∪ E_B. What is the minimum size of the message m that Alice sends to Bob that allows Bob to recover a matching of size at least (1 - ε) times the maximum matching in G_A ∪ G_B? The minimum message length is the one-round communication complexity of approximating bipartite matching. It is easy to see that the one-round communication complexity also gives a lower bound on the space needed by a one-pass streaming algorithm to compute a (1 - ε)-approximate bipartite matching. The focus of this work is to understand the one-round communication complexity and one-pass streaming complexity of maximum bipartite matching. In particular, how well can one approximate these problems with linear communication and space? Prior to our work, only a 1/2-approximation was known for both these problems. In order to study these questions, we introduce the concept of an ε-matching cover of a bipartite graph G, which is a sparse subgraph of the original graph that preserves the size of the maximum matching between every subset of vertices to within an additive εn error. We give a polynomial time construction of a 1/2-matching cover of size O(n) with some crucial additional properties, thereby showing that Alice and Bob can achieve a 2/3-approximation with a message of size O(n). While we do not provide bounds on the size of ε-matching covers for smaller ε, we show that for any δ > 0, a (2/3 + δ)-approximation requires a communication complexity of n^{1+Ω(1/log log n)}. We also consider the natural restriction of the problem in which G_A and G_B are only allowed to share vertices on one side of the bipartition, which is motivated by applications to one-pass streaming with vertex arrivals. We show that a 3/4-approximation can be achieved with a linear size message in this case, and this result is best possible in that super-linear space is needed to achieve any better approximation. Finally, we build on our techniques for the restricted version above to design a one-pass streaming algorithm for the case when vertices on one side are known in advance, and the vertices on the other side arrive in a streaming manner together with all their incident edges. This is precisely the setting of the celebrated (1 - 1/e)-competitive randomized algorithm of Karp-Vazirani-Vazirani (KVV) for the online bipartite matching problem [12]. We present here the first deterministic one-pass streaming (1 - 1/e)-approximation algorithm using O(n) space for this setting.
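
Restating the ε-matching cover condition from the abstract in symbols (our notation):

```latex
% H \subseteq G is an \varepsilon-matching cover if for all S \subseteq P,
% T \subseteq Q:
\mathrm{match}_H(S, T) \ \ge\ \mathrm{match}_G(S, T) - \varepsilon n
% where match_X(S, T) denotes the maximum matching size between S and T
% using only edges of X.
```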

161 citations


Proceedings ArticleDOI
01 Apr 2012
TL;DR: Simulation results illustrate that the proposed iterative resource allocation algorithm converges in a small number of iterations, and unveil the trade-off between energy efficiency and network capacity.
Abstract: In this paper, resource allocation for energy efficient communication in multi-cell orthogonal frequency division multiple access (OFDMA) downlink networks with cooperative base stations (BSs) is studied. The considered problem is formulated as a non-convex optimization problem which takes into account the circuit power consumption, the limited backhaul capacity, and the minimum required data rate for joint BS zero-forcing beamforming (ZFBF) transmission. By exploiting the properties of fractional programming, the considered non-convex optimization problem in fractional form is transformed into an equivalent optimization problem in subtractive form, which enables the derivation of an efficient iterative resource allocation algorithm. For each iteration, the optimal power allocation solution is derived with a low complexity suboptimal subcarrier allocation policy for maximization of the energy efficiency of data transmission (bit/Joule delivered to the users). Simulation results illustrate that the proposed iterative resource allocation algorithm converges in a small number of iterations, and unveil the trade-off between energy efficiency and network capacity.

146 citations


16 Apr 2012
TL;DR: In this article, the authors consider the problem of PAC-learning from distributed data and analyze fundamental communication complexity questions involved, providing general upper and lower bounds on the amount of communication needed to learn well, showing that in addition to VC-dimension and covering number, quantities such as the teaching-dimension and mistake-bound of a class play an important role.
Abstract: We consider the problem of PAC-learning from distributed data and analyze fundamental communication complexity questions involved. We provide general upper and lower bounds on the amount of communication needed to learn well, showing that in addition to VC-dimension and covering number, quantities such as the teaching-dimension and mistake-bound of a class play an important role. We also present tight results for a number of common concept classes including conjunctions, parity functions, and decision lists. For linear separators, we show that for non-concentrated distributions, we can use a version of the Perceptron algorithm to learn with much less communication than the number of updates given by the usual margin bound. We also show how boosting can be performed in a generic manner in the distributed setting to achieve communication with only logarithmic dependence on 1/ε for any concept class, and demonstrate how recent work on agnostic learning from class-conditional queries can be used to achieve low communication in agnostic settings as well. We additionally present an analysis of privacy, considering both differential privacy and a notion of distributional privacy that is especially appealing in this context.

133 citations


Book ChapterDOI
19 Aug 2012
TL;DR: In this paper, an n-player unconditionally-secure MPC protocol against a dishonest minority of malicious players is proposed, which has an amortized communication complexity of $$O(n \log n + \kappa /n^{const})$$ bits per multiplication gate.
Abstract: In the setting of unconditionally-secure MPC, where dishonest players are unbounded and no cryptographic assumptions are used, it has been known since the 1980's that an honest majority of players is both necessary and sufficient to achieve privacy and correctness, assuming secure point-to-point and broadcast channels. The main open question that was left is to establish the exact communication complexity. We settle the above question by showing an unconditionally-secure MPC protocol, secure against a dishonest minority of malicious players, that matches the communication complexity of the best known MPC protocol in the honest-but-curious setting. More specifically, we present a new n-player MPC protocol that is secure against a computationally-unbounded malicious adversary that can adaptively corrupt $$t < n/2$$ of the players. For polynomially-large binary circuits that are not too unshaped, our protocol has an amortized communication complexity of $$O(n \log n + \kappa /n^{const})$$ bits per multiplication (i.e., AND) gate, where $$\kappa $$ denotes the security parameter and $${const}\in \mathbb {Z}$$ is an arbitrary non-negative constant. This improves on the previously most efficient protocol with the same security guarantee, which offers an amortized communication complexity of $$O(n^2 \kappa)$$ bits per multiplication gate. For any $$\kappa $$ polynomial in n, the amortized communication complexity of our protocol matches the $$O(n \log n)$$ bit communication complexity of the best known MPC protocol with passive security. We introduce several novel techniques that are of independent interest and we believe will have wider applicability. One is a novel idea of computing authentication tags by means of a mini MPC, which allows us to avoid expensive double-sharings; the other is a batch-wise multiplication verification that allows us to speed up Beaver's "multiplication triples".

121 citations


Journal ArticleDOI
TL;DR: A distributed recursive least-squares algorithm is developed for cooperative estimation using ad hoc wireless sensor networks, and computer simulations demonstrate that the theoretical findings are accurate also in the pragmatic settings whereby sensors acquire temporally-correlated data.
Abstract: The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements, when it comes to online estimation of stationary signals as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost, using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted, by studying a stochastically-driven `averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying independence setting assumptions facilitate deriving accurate closed-form expressions for the MSE steady-state values. The problems of mean- and MSE-sense stability of D-RLS are also investigated, and easily-checkable sufficient conditions are derived under which a steady-state is attained. Without resorting to diminishing step-sizes which compromise the tracking ability of D-RLS, stability ensures that per sensor estimates hover inside a ball of finite radius centered at the true parameter vector, with high-probability, even when inter-sensor communication links are noisy. Interestingly, computer simulations demonstrate that the theoretical findings are accurate also in the pragmatic settings whereby sensors acquire temporally-correlated data.
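
For background, the centralized exponentially-weighted RLS recursion that D-RLS decomposes across sensors looks as follows; this is a textbook sketch of the baseline, not the paper's distributed per-sensor iterations (the forgetting factor lam and noise level are illustrative):

```python
# Centralized exponentially-weighted RLS baseline (lam = 1 gives plain RLS).
import numpy as np

def rls_update(w, Pmat, x, d, lam=0.98):
    """One RLS step for a scalar observation d ~ w^T x."""
    Px = Pmat @ x
    k = Px / (lam + x @ Px)                 # gain vector
    e = d - w @ x                           # a priori estimation error
    w = w + k * e                           # parameter update
    Pmat = (Pmat - np.outer(k, Px)) / lam   # inverse-correlation update
    return w, Pmat

# Toy run: track a fixed parameter vector from noisy linear measurements.
rng = np.random.default_rng(0)
p = 4
true_w = rng.normal(size=p)
w, Pmat = np.zeros(p), 1e3 * np.eye(p)
for _ in range(500):
    x = rng.normal(size=p)
    d = true_w @ x + 0.01 * rng.normal()
    w, Pmat = rls_update(w, Pmat, x, d)
```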

Journal ArticleDOI
TL;DR: This work proposes a low-complexity, high-performance precoding design: it derives a lower bound that demands little computational effort and approximates the mutual information, up to a constant shift, for various settings.
Abstract: This paper investigates linear precoding scheme that maximizes mutual information for multiple-input multiple-output (MIMO) channels with finite-alphabet inputs. In contrast with recent studies, optimizing mutual information directly with extensive computational burden, this work proposes a low-complexity and high-performance design. It derives a lower bound that demands low computational effort and approximates, with a constant shift, the mutual information for various settings. Based on this bound, the precoding problem is solved efficiently. Numerical examples show the efficacy of this method for constant and fading MIMO channels. Compared to its conventional counterparts, the proposed method reduces the computational complexity without performance loss.

Posted Content
TL;DR: In this paper, the authors consider PAC-learning from distributed data and analyze fundamental communication complexity questions involved, and provide general upper and lower bounds on the amount of communication needed to learn well, showing that in addition to VC-dimension and covering number, quantities such as the teaching-dimension and mistake-bound of a class play an important role.
Abstract: We consider the problem of PAC-learning from distributed data and analyze fundamental communication complexity questions involved. We provide general upper and lower bounds on the amount of communication needed to learn well, showing that in addition to VC-dimension and covering number, quantities such as the teaching-dimension and mistake-bound of a class play an important role. We also present tight results for a number of common concept classes including conjunctions, parity functions, and decision lists. For linear separators, we show that for non-concentrated distributions, we can use a version of the Perceptron algorithm to learn with much less communication than the number of updates given by the usual margin bound. We also show how boosting can be performed in a generic manner in the distributed setting to achieve communication with only logarithmic dependence on 1/epsilon for any concept class, and demonstrate how recent work on agnostic learning from class-conditional queries can be used to achieve low communication in agnostic settings as well. We additionally present an analysis of privacy, considering both differential privacy and a notion of distributional privacy that is especially appealing in this context.

Proceedings ArticleDOI
20 Oct 2012
TL;DR: This work shows how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity (CC).
Abstract: In this work, we study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e., drawn from some distribution, or adversarial, i.e., arbitrary subject only to a global bound on the number of errors. We show how to efficiently simulate any interactive protocol in the presence of constant-rate adversarial noise, while incurring only a constant blow-up in the communication complexity (CC). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least $1-2^{-\Omega(CC)}$.

Proceedings ArticleDOI
19 May 2012
TL;DR: It is shown that solving ℓ instances of set disjointness requires ℓ·Ω(n/4^k)^{1/4} bits of communication, even to achieve correctness probability exponentially close to 1/2, which gives the first direct-product result for multiparty set disjointness.
Abstract: We study the set disjointness problem in the number-on-the-forehead model of multiparty communication. (i) We prove that k-party set disjointness has communication complexity Ω(n/4^k)^{1/4} in the randomized and nondeterministic models and Ω(n/4^k)^{1/8} in the Merlin-Arthur model. These lower bounds are close to tight. Previous lower bounds (2007-2008) for k ≥ 3 parties were weaker than Ω(n/2^{k^3})^{1/(k+1)} in all three models. (ii) We prove that solving ℓ instances of set disjointness requires ℓ·Ω(n/4^k)^{1/4} bits of communication, even to achieve correctness probability exponentially close to 1/2. This gives the first direct-product result for multiparty set disjointness, solving an open problem due to Beame, Pitassi, Segerlind, and Wigderson (2005). (iii) We construct a read-once {∧,∨}-circuit of depth 3 with exponentially small discrepancy for up to k ≈ (1/2)log n parties. This result is optimal with respect to depth and solves an open problem due to Beame and Huynh-Ngoc (FOCS '09), who gave a depth-6 construction. Applications to circuit complexity are given.

Journal ArticleDOI
TL;DR: An optimal Ω(n) lower bound on the randomized communication complexity of the gap-Hamming-distance problem is proved, confirming the conjecture, dating to the problem's formal presentation by Indyk and Woodruff, that the naive n-bit protocol is asymptotically optimal, and yielding essentially optimal multipass space lower bounds in the data stream model.
Abstract: We prove an optimal $\Omega(n)$ lower bound on the randomized communication complexity of the much-studied gap-hamming-distance problem. As a consequence, we obtain essentially optimal multipass space lower bounds in the data stream model for a number of fundamental problems, including the estimation of frequency moments. The gap-hamming-distance problem is a communication problem, wherein Alice and Bob receive $n$-bit strings $x$ and $y$, respectively. They are promised that the Hamming distance between $x$ and $y$ is either at least $n/2+\sqrt{n}$ or at most $n/2-\sqrt{n}$, and their goal is to decide which of these is the case. Since the formal presentation of the problem by Indyk and Woodruff [Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003, pp. 283--289], it had been conjectured that the naive protocol, which uses $n$ bits of communication, is asymptotically optimal. The conjecture was shown to be true in several special cases, e.g., when the communication is de...

Proceedings ArticleDOI
19 May 2012
TL;DR: A family of modified pebbling formulas {F_n} yielding time-space trade-offs for polynomial calculus and cutting planes is exhibited, together with a new two-player communication complexity lower bound for composed search problems in terms of block sensitivity, a contribution that is believed to be of independent interest.
Abstract: An active line of research in proof complexity over the last decade has been the study of proof space and trade-offs between size and space. Such questions were originally motivated by practical SAT solving, but have also led to the development of new theoretical concepts in proof complexity of intrinsic interest and to results establishing nontrivial relations between space and other proof complexity measures. By now, the resolution proof system is fairly well understood in this regard, as witnessed by a sequence of papers leading up to [Ben-Sasson and Nordstrom 2008, 2011] and [Beame, Beck, and Impagliazzo 2012]. However, for other relevant proof systems in the context of SAT solving, such as polynomial calculus (PC) and cutting planes (CP), very little has been known. Inspired by [BN08, BN11], we consider CNF encodings of so-called pebble games played on graphs and the approach of making such pebbling formulas harder by simple syntactic modifications. We use this paradigm of hardness amplification to make progress on the relatively longstanding open question of proving time-space trade-offs for PC and CP. Namely, we exhibit a family of modified pebbling formulas {F_n} such that: - The formulas F_n have size O(n) and width O(1). - They have proofs in length O(n) in resolution, which generalize to both PC and CP. - Any refutation in CP or PCR (a generalization of PC) in length L and space s must satisfy s · log L ≳ n^{1/4}. A crucial technical ingredient in these results is a new two-player communication complexity lower bound for composed search problems in terms of block sensitivity, a contribution that we believe to be of independent interest.

Proceedings ArticleDOI
21 May 2012
TL;DR: It is shown that randomization can lead to significant improvements for a few fundamental problems in distributed tracking, and techniques are extended to two related distributed tracking problems: frequency-tracking and rank-tracking, and obtain similar improvements over previous deterministic algorithms.
Abstract: We show that randomization can lead to significant improvements for a few fundamental problems in distributed tracking. Our basis is the count-tracking problem, where there are k players, each holding a counter n_i that gets incremented over time, and the goal is to track an ε-approximation of their sum n = ∑_i n_i continuously at all times, using minimum communication. While the deterministic communication complexity of the problem is Θ(k/ε · log N), where N is the final value of n when the tracking finishes, we show that with randomization, the communication cost can be reduced to Θ(√k/ε · log N). Our algorithm is simple and uses only O(1) space at each player, while the lower bound holds even assuming each player has infinite computing power. Then, we extend our techniques to two related distributed tracking problems: frequency-tracking and rank-tracking, and obtain similar improvements over previous deterministic algorithms. Both problems are of central importance in large data monitoring and analysis, and have been extensively studied in the literature.
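
The deterministic Θ(k/ε · log N) baseline mentioned above is simple enough to sketch: each player re-sends its counter to the coordinator whenever it has grown by a (1 + ε) factor, so the coordinator's copy is always within a (1 + ε) factor of the true n_i. The Python below is our illustration of this baseline, not the paper's randomized algorithm (which replaces the k in the bound by √k):

```python
# Deterministic count-tracking baseline: O((k/eps) * log N) total messages.
class Coordinator:
    def __init__(self, k):
        self.copies = [0] * k
        self.messages = 0

    def update(self, pid, value):
        self.copies[pid] = value
        self.messages += 1

    def estimate(self):
        # Each copy satisfies n_i/(1+eps) <= copy <= n_i, so the sum is an
        # eps-approximation of n = sum_i n_i.
        return sum(self.copies)

class Player:
    def __init__(self, eps, coord, pid):
        self.eps, self.coord, self.pid = eps, coord, pid
        self.n = 0          # true local counter
        self.last_sent = 0  # value the coordinator currently holds

    def increment(self):
        self.n += 1
        if self.n >= (1 + self.eps) * self.last_sent:
            self.coord.update(self.pid, self.n)  # one message
            self.last_sent = self.n
```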

Journal ArticleDOI
TL;DR: It is shown via simulation that for two-way networks with both single relay and multiple relays, proper user power control and relay distributed beamforming can significantly improve the network performance, especially when the power constraints of the two end-users in the networks are unbalanced.
Abstract: This paper deals with optimal joint user power control and relay distributed beamforming for two-way relay networks, where two end-users exchange information through multiple relays, each of which is assumed to have its own power constraint. The problem includes the design of the distributed beamformer at the relays and the power control scheme for the two end-users to optimize the network performance. Considering the overall two-way network performance, we maximize the lower signal-to-noise ratio (SNR) of the two communication links. For single-relay networks, this maximization problem is solved analytically. For multi-relay networks, we propose an iterative numerical algorithm to find the optimal solution. While the complexity of the optimal algorithm is too high for large networks, two sub-optimal algorithms with low complexity are also proposed, which are numerically shown to perform close to the optimal technique. It is also shown via simulation that for two-way networks with both single relay and multiple relays, proper user power control and relay distributed beamforming can significantly improve the network performance, especially when the power constraints of the two end-users in the networks are unbalanced. Our approach also substantially improves the power efficiency of the network.

Journal ArticleDOI
TL;DR: This work considers the problem of information aggregation in sensor networks, where one is interested in computing a function of the sensor measurements, and presents a cut-set lower bound and an achievable scheme based on aggregation along trees.
Abstract: We consider the problem of information aggregation in sensor networks, where one is interested in computing a function of the sensor measurements. We allow for block processing and study in-network function computation in directed graphs and undirected graphs. We study how the structure of the function affects the encoding strategies and the effect of interactive information exchange. Depending on the application, there could be a designated collector node, or every node might want to compute the function. We begin by considering a directed graph G = (V, E) on the sensor nodes, where the goal is to determine the optimal encoders on each edge which achieve function computation at the collector node. Our goal is to characterize the rate region in R^{|E|}, i.e., the set of points for which there exist feasible encoders with given rates which achieve zero-error computation for asymptotically large block length. We determine the solution for directed trees, specifying the optimal encoder and decoder for each edge. For general directed acyclic graphs, we provide an outer bound on the rate region by finding the disambiguation requirements for each cut, and describe examples where this outer bound is tight. Next, we address the scenario where nodes are connected in an undirected tree network, and every node wishes to compute a given symmetric Boolean function of the sensor data. Undirected edges permit interactive computation, and we therefore study the effect of interaction on the aggregation and communication strategies. We focus on sum-threshold functions and determine the minimum worst case total number of bits to be exchanged on each edge. The optimal strategy involves recursive in-network aggregation which is reminiscent of message passing. In the case of general graphs, we present a cut-set lower bound and an achievable scheme based on aggregation along trees. For complete graphs, we prove that the complexity of this scheme is no more than twice that of the optimal scheme.
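
As a concrete instance of the class analyzed in the undirected setting, a sum-threshold function with threshold θ is the standard Boolean function (our notation):

```latex
\Pi_{\theta}(x_1, \dots, x_n) =
\begin{cases}
1, & \text{if } \sum_{i=1}^{n} x_i \ge \theta, \\
0, & \text{otherwise,}
\end{cases}
\qquad x_i \in \{0, 1\}.
```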

Proceedings ArticleDOI
25 Jun 2012
TL;DR: This paper revisits the communication complexity of large-scale 3D fast Fourier transforms (FFTs), asks what impact trends in current architectures will have on FFT performance at exascale, and develops analytical models of memory hierarchy traffic and network communication to make such predictions.
Abstract: This paper revisits the communication complexity of large-scale 3D fast Fourier transforms (FFTs) and asks what impact trends in current architectures will have on FFT performance at exascale. We analyze both memory hierarchy traffic and network communication to derive suitable analytical models, which we calibrate against current software implementations; we then evaluate models to make predictions about potential scaling outcomes at exascale, based on extrapolating current technology trends. Of particular interest is the performance impact of choosing high-density processors, typified today by graphics co-processors (GPUs), as the base processor for an exascale system. Among various observations, a key prediction is that although inter-node all-to-all communication is expected to be the bottleneck of distributed FFTs, intra-node communication---expressed precisely in terms of the relative balance among compute capacity, memory bandwidth, and network bandwidth---will play a critical role.
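
To give a flavor of such models, the sketch below is our deliberately simplified, uncalibrated stand-in (the machine parameters and the single-pass memory assumption are hypothetical): per-node time is bounded by the slowest of compute, memory traffic, and the all-to-all network exchange:

```python
# Simplified roofline-style model of a distributed 3D FFT on P nodes.
import math

def fft3d_time(N, P, flops=1e13, mem_bw=1e12, net_bw=1e10, bytes_per_elem=16):
    n_total = N ** 3
    work = 5 * n_total * math.log2(n_total) / P  # FFT flops per node (5 n log n)
    traffic = n_total * bytes_per_elem / P       # bytes per node per exchange
    t_compute = work / flops
    t_memory = traffic / mem_bw                  # one pass; real codes make several
    t_network = traffic / net_bw                 # all-to-all transpose traffic
    return max(t_compute, t_memory, t_network)

# E.g., a 4096^3 transform on 10^4 GPU-dense nodes is network-bound here:
print(fft3d_time(N=4096, P=10_000))
```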

Journal ArticleDOI
TL;DR: This work shows that the formulated optimization problem has a special structure which can be exploited to implement a fast barrier method to obtain the optimal solution with a reasonable complexity, and proposes an effective measurement criterion to normalize OFDM subchannels' achievable rates.
Abstract: In this paper we study the resource allocation in multiuser orthogonal frequency division multiplexing (OFDM)-based cognitive radio (CR) networks, where secondary users (SUs) have flexible traffic demands, including heterogeneous real-time (RT) and non-real-time (NRT) services. We try to maximize the sum capacity of the NRT users and maintain the minimal rate requirements of the RT users simultaneously. Additionally, the interference introduced to primary users, which is generated by the access of the SUs, should be kept below a predefined threshold, which makes the optimization task more complex. The contribution of this work is twofold. First, we show that the formulated optimization problem has a special structure which can be exploited to implement a fast barrier method to obtain the optimal solution with a reasonable complexity. Second, we propose an effective measurement criterion to normalize OFDM subchannels' achievable rates, based on which we develop a simple but efficient heuristic algorithm for subchannel assignment and power distribution. Simulation results show that our proposed resource allocation schemes work quite well for the concerned wireless scenarios. The fast barrier method converges very fast and can always work out the optimal solution, while the heuristic algorithm produces solutions close to the optimal with much lower complexity.

Journal ArticleDOI
TL;DR: This work introduces a monitoring method, based on a geometric interpretation of the problem, which makes it possible to define local constraints at the nodes, and extends the concept of safe zones for the monitoring problem, showing that previous work on geometric monitoring is a special case of the proposed extension.
Abstract: An important problem in distributed, dynamic databases is to continuously monitor the value of a function defined on the nodes, and check that it satisfies some threshold constraint. We introduce a monitoring method, based on a geometric interpretation of the problem, which makes it possible to define local constraints at the nodes. It is guaranteed that as long as none of these constraints is violated, the value of the function has not crossed the threshold. We generalize previous work on geometric monitoring, and solve two problems which seriously hampered its performance: as opposed to the constraints used so far, which depend only on the current values of the local data, here we incorporate their temporal behavior. Also, the new constraints are tailored to the geometric properties of the specific monitored function. In addition, we extend the concept of safe zones for the monitoring problem, and show that previous work on geometric monitoring is a special case of the proposed extension. Experimental results on real data reveal that the new approach reduces communication by up to three orders of magnitude in comparison to existing approaches, and considerably narrows the gap between achievable results and a newly defined lower bound on communication complexity.
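
The local-constraint idea can be illustrated with a deliberately generic safe zone, a ball around the last synchronized reference vector; the paper's constraints are tailored to the monitored function and incorporate temporal behavior, which this sketch does not attempt:

```python
# Generic safe-zone check: a node stays silent while its local statistics
# vector remains inside a ball in which the monitored function provably
# cannot cross the threshold (the radius choice is illustrative).
import numpy as np

class Node:
    def __init__(self, reference, radius):
        self.reference = np.asarray(reference, dtype=float)  # last synced estimate
        self.radius = radius                                 # safe-zone size

    def observe(self, new_value):
        """Return True if the node must communicate (constraint violated)."""
        drift = np.linalg.norm(np.asarray(new_value, dtype=float) - self.reference)
        return drift > self.radius
```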

Journal ArticleDOI
TL;DR: It is shown that when computing symmetric functions of binary sources, the sink will inevitably learn certain additional information that is not demanded in computing the function, which leads to new improved bounds for the minimum sum rate.
Abstract: A problem of interactive function computation in a collocated network is studied in a distributed block source coding framework. With the goal of computing samples of a desired function of sources at the sink, the source nodes exchange messages through a sequence of error-free broadcasts. For any function of independent sources, a computable characterization of the set of all feasible message coding rates-the rate region-is derived in terms of single-letter information measures. In the limit as the number of messages tends to infinity, the infinite-message minimum sum rate, viewed as a functional of the joint source probability mass function, is characterized as the least element of a partially ordered family of functionals having certain convex-geometric properties. This characterization leads to a family of lower bounds for the infinite-message minimum sum rate and a simple criterion to test the optimality of any achievable infinite-message sum rate. An iterative algorithm for evaluating the infinite-message minimum sum-rate functional is proposed and is demonstrated through an example of computing the minimum function of three Bernoulli sources. Based on the characterizations of the rate regions, it is shown that when computing symmetric functions of binary sources, the sink will inevitably learn certain additional information that is not demanded in computing the function. This conceptual understanding leads to new improved bounds for the minimum sum rate. The new bounds are shown to be orderwise better than those based on cut-sets as the network scales. The scaling law of the minimum sum rate is explored for different classes of symmetric functions and source parameters.

Proceedings ArticleDOI
TL;DR: In this article, a relaxed version of the partition bound of Jain and Klauck is defined, and it is shown that it lower bounds the information complexity of any function.
Abstract: We show that almost all known lower bound methods for communication complexity are also lower bounds for the information complexity. In particular, we define a relaxed version of the partition bound of Jain and Klauck and prove that it lower bounds the information complexity of any function. Our relaxed partition bound subsumes all norm based methods (e.g. the factorization norm method) and rectangle-based methods (e.g. the rectangle/corruption bound, the smooth rectangle bound, and the discrepancy bound), except the partition bound. Our result uses a new connection between rectangles and zero-communication protocols where the players can either output a value or abort. We prove the following compression lemma: given a protocol for a function f with information complexity I, one can construct a zero-communication protocol that has non-abort probability at least 2^{-O(I)} and that computes f correctly with high probability conditioned on not aborting. Then, we show how such a zero-communication protocol relates to the relaxed partition bound. We use our main theorem to resolve three of the open questions raised by Braverman. First, we show that the information complexity of the Vector in Subspace Problem is Ω(n^{1/3}), which, in turn, implies that there exists an exponential separation between quantum communication complexity and classical information complexity. Moreover, we provide an Ω(n) lower bound on the information complexity of the Gap Hamming Distance Problem.

Book ChapterDOI
05 Sep 2012
TL;DR: The techniques reduce pattern matching and the generalized Hamming distance problem to a novel linear algebra formulation that allows for generic solutions based on any additively homomorphic encryption, and are believed to be of independent interest.
Abstract: In this paper we consider the problem of secure pattern matching that allows single character wildcards and substring matching in the malicious (stand-alone) setting. Our protocol, called 5PM, is executed between two parties: Server, holding a text of length n, and Client, holding a pattern of length m to be matched against the text, where our notion of matching is more general and includes non-binary alphabets, non-binary Hamming distance and non-binary substring matching. 5PM is the first protocol with communication complexity sub-linear in circuit size to compute non-binary substring matching in the malicious model (general MPC has communication complexity which is at least linear in the circuit size). 5PM is also the first sublinear protocol to compute non-binary Hamming distance in the malicious model. Additionally, in the honest-but-curious (semi-honest) model, 5PM is asymptotically more efficient than the best known scheme when amortized for applications that require single character wildcards or substring pattern matching. 5PM in the malicious model requires O((m+n)k^2) bandwidth and O(m+n) encryptions, where m is the pattern length and n is the text length. Further, 5PM can hide pattern size with no asymptotic additional costs in either computation or bandwidth. Finally, 5PM requires only 2 rounds of communication in the honest-but-curious model and 8 rounds in the malicious model. Our techniques reduce pattern matching and the generalized Hamming distance problem to a novel linear algebra formulation that allows for generic solutions based on any additively homomorphic encryption. We believe our efficient algebraic techniques are of independent interest.
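
To illustrate the generic additively homomorphic approach in the last paragraph (our toy example, not the 5PM protocol): for binary strings, HD(p, t) = Σ_i [p_i(1 - 2t_i) + t_i], so a server holding plaintext t can evaluate the distance on encrypted pattern bits using only homomorphic additions and plaintext-scalar multiplications. The AHE class below is a transparent mock with no security; a real instantiation would substitute an actual additively homomorphic scheme such as Paillier:

```python
class AHE:
    """Mock additively homomorphic 'ciphertext' (expository only, no security)."""
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return AHE(self.value + other.value)      # E(a) + E(b) -> E(a + b)
    def __rmul__(self, scalar):
        return AHE(scalar * self.value)           # c * E(a)    -> E(c * a)

def encrypted_hamming(enc_pattern, text_bits):
    # Server side: encrypted pattern bits, plaintext text bits.
    acc = AHE(0)
    for ep, t in zip(enc_pattern, text_bits):
        acc = acc + (1 - 2 * t) * ep + AHE(t)     # p*(1-2t) + t per position
    return acc                                    # 'encryption' of HD(p, t)

p, t = [1, 0, 1, 1], [1, 1, 0, 1]
assert encrypted_hamming([AHE(b) for b in p], t).value == 2
```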

Journal Article
TL;DR: In this paper, it was shown that no two sets of large enough Gaussian measure (at least e^{-δn} for some constant δ > 0) can have correlation substantially lower than would two random sets of the same size.
Abstract: Given two sets A, B ⊆ R^n, a measure of their correlation is given by the expected squared inner product between random x ∈ A and y ∈ B. We prove an inequality showing that no two sets of large enough Gaussian measure (at least e^{-δn} for some constant δ > 0) can have correlation substantially lower than would two random sets of the same size. Our proof is based on a concentration inequality for the overlap of a random Gaussian vector on a large set. As an application, we show how our result can be combined with the partition bound of Jain and Klauck to give a simpler proof of a recent linear lower bound on the randomized communication complexity of the Gap-Hamming-Distance problem due to Chakrabarti and Regev.

Journal ArticleDOI
TL;DR: This paper develops two low-complexity channel assignment algorithms that can efficiently utilize the spectrum holes on a set of channels and designs a distributed medium access control protocol for access contention resolution and integrates it into the overlapping channel assignment algorithm.
Abstract: In this paper, we consider the channel assignment problem for cognitive radio networks with hardware-constrained secondary users (SUs). In particular, we assume that SUs exploit spectrum holes on a set of channels where each SU can use at most one available channel for communication. We present the optimal brute-force search algorithm to solve the corresponding nonlinear integer optimization problem and analyze its complexity. Because the optimal solution has exponential complexity with the numbers of channels and SUs, we develop two low-complexity channel assignment algorithms that can efficiently utilize the spectrum holes. In the first algorithm, SUs are assigned distinct sets of channels. We show that this algorithm achieves the maximum throughput limit if the number of channels is sufficiently large. In addition, we propose an overlapping channel assignment algorithm that can improve the throughput performance compared with its nonoverlapping channel assignment counterpart. Moreover, we design a distributed medium access control (MAC) protocol for access contention resolution and integrate it into the overlapping channel assignment algorithm. We then analyze the saturation throughput and the complexity of the proposed channel assignment algorithms. We also present several potential extensions, including the development of greedy channel assignment algorithms under the max-min fairness criterion and throughput analysis, considering sensing errors. Finally, numerical results are presented to validate the developed theoretical results and illustrate the performance gains due to the proposed channel assignment algorithms.

Journal ArticleDOI
TL;DR: A new technique based on the factorial moment expansion of functionals of point processes is introduced to analyze functions of interference, in particular the outage probability; increasing the number of terms in the series provides a better approximation at the cost of increased computational complexity.
Abstract: The spatial correlations in transmitter node locations introduced by common multiple access protocols make the analysis of interference, outage, and other related metrics in a wireless network extremely difficult. Most works therefore assume that nodes are distributed either as a Poisson point process (PPP) or a grid, and utilize the independence properties of the PPP (or the regular structure of the grid) to analyze interference, outage and other metrics. But, the independence of node locations makes the PPP a dubious model for nontrivial MACs which intentionally introduce correlations, e.g., spatial separation, while the grid is too idealized to model real networks. In this paper, we introduce a new technique based on the factorial moment expansion of functionals of point processes to analyze functions of interference, in particular outage probability. We provide a Taylor-series type expansion of functions of interference, wherein increasing the number of terms in the series provides a better approximation at the cost of increased complexity of computation. Various examples illustrate how this new approach can be used to find outage probability in both Poisson and non-Poisson wireless networks.

Proceedings ArticleDOI
Le Yi Wang, Ali Syed, George Yin, Abhilash Pandya, Hongwei Zhang
01 Dec 2012
TL;DR: The main advantages are demonstrated, including using local control to achieve a global deployment so that communication complexity is reduced; scalability to accommodate dynamic changes of the member vehicles and communication networks; robustness against road conditions and communication uncertainties.
Abstract: This paper introduces a new method for enhancing highway safety and efficiency by coordinated control of vehicle platoons. One of our aims is to understand the influence of communication network topologies and uncertainties on control performance. Vehicle deployment is formulated as a weighted and constrained consensus control problem. Algorithms are introduced and their convergence properties are established. The main advantages of the methods are demonstrated, including using local control to achieve a global deployment so that communication complexity is reduced; scalability to accommodate dynamic changes of the member vehicles and communication networks; and robustness against road conditions and communication uncertainties.
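
The consensus formulation builds on weighted iterations of the following generic form; this is our illustration of a standard weighted-consensus step with hypothetical weights, not the paper's constrained deployment algorithm:

```python
# Each vehicle updates its state from neighbors only (local control), so the
# communication pattern follows the network topology; W is a row-stochastic
# weight matrix respecting that topology (values illustrative).
import numpy as np

def consensus_step(x, W):
    """x: one state entry per vehicle; W: row-stochastic neighbor weights."""
    return W @ x

W = np.array([[0.6, 0.4, 0.0],    # line topology: vehicle 0 hears only 1
              [0.2, 0.6, 0.2],    # vehicle 1 hears 0 and 2
              [0.0, 0.4, 0.6]])   # vehicle 2 hears only 1
x = np.array([0.0, 10.0, 4.0])
for _ in range(100):
    x = consensus_step(x, W)      # states converge to a weighted agreement
```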