
Showing papers on "Communication complexity published in 2011"


Journal ArticleDOI
TL;DR: The proposed systems prove that carrying out complex tasks like ECG classification in the encrypted domain efficiently is indeed possible in the semihonest model, paving the way to interesting future applications wherein privacy of signal owners is protected by applying high security standards.
Abstract: Privacy protection is a crucial problem in many biomedical signal processing applications. For this reason, particular attention has been given to the use of secure multiparty computation techniques for processing biomedical signals, whereby nontrusted parties are able to manipulate the signals although they are encrypted. This paper focuses on the development of a privacy preserving automatic diagnosis system whereby a remote server classifies a biomedical signal provided by the client without getting any information about the signal itself and the final result of the classification. Specifically, we present and compare two methods for the secure classification of electrocardiogram (ECG) signals: the former based on linear branching programs (a particular kind of decision tree) and the latter relying on neural networks. The paper deals with all the requirements and difficulties related to working with data that must stay encrypted during all the computation steps, including the necessity of working with fixed point arithmetic with no truncation while guaranteeing the same performance as a floating point implementation in the plain domain. A highly efficient version of the underlying cryptographic primitives is used, ensuring good efficiency of the two proposed methods, from both a communication and a computational complexity perspective. The proposed systems prove that carrying out complex tasks like ECG classification in the encrypted domain efficiently is indeed possible in the semihonest model, paving the way to interesting future applications wherein privacy of signal owners is protected by applying high security standards.

194 citations


Posted Content
TL;DR: It is shown how to efficiently simulate the sending of a single message M to a receiver who has partial information about the message, so that the expected number of bits communicated in the simulation is close to the amount of additional information that the message reveals to the receiver.
Abstract: We show how to efficiently simulate the sending of a message M to a receiver who has partial information about the message, so that the expected number of bits communicated in the simulation is close to the amount of additional information that the message reveals to the receiver. This is a generalization and strengthening of the Slepian-Wolf theorem, which shows how to carry out such a simulation with low amortized communication in the case that M is a deterministic function of the sender's input X. A caveat is that our simulation is interactive. As a consequence, we prove that the internal information cost (namely the information revealed to the parties) involved in computing any relation or function using a two-party interactive protocol is exactly equal to the amortized communication complexity of computing independent copies of the same relation or function. We also show that the only way to prove a strong direct sum theorem for randomized communication complexity is by solving a particular variant of the pointer jumping problem that we define. Our work implies that a strong direct sum theorem for communication complexity holds if and only if efficient compression of communication protocols is possible.
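
In standard notation (ours, not necessarily the authors'), writing IC(f) for the internal information cost of f and R(f^n) for the randomized communication complexity of n independent copies, the amortization result can be summarized as

\[ \mathrm{IC}(f) \;=\; \lim_{n\to\infty} \frac{R(f^{\,n})}{n}, \]

which is why compressing every protocol down to its internal information cost is equivalent to a strong direct sum theorem.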

180 citations


Journal ArticleDOI
TL;DR: The paper suggests a novel combine-project-adapt protocol for cooperation among the nodes of the network; such a protocol fits naturally with the philosophy that underlies the projection-based rationale.
Abstract: In this paper, the problem of adaptive distributed learning in diffusion networks is considered. The algorithms are developed within the convex set theoretic framework. More specifically, they are based on computationally simple geometric projections onto closed convex sets. The paper suggests a novel combine-project-adapt protocol for cooperation among the nodes of the network; such a protocol fits naturally with the philosophy that underlies the projection-based rationale. Moreover, the possibility that some of the nodes may fail is also considered and it is addressed by employing robust statistics loss functions. Such loss functions can easily be accommodated in the adopted algorithmic framework; all that is required from a loss function is convexity. Under some mild assumptions, the proposed algorithms enjoy monotonicity, asymptotic optimality, asymptotic consensus, strong convergence and linear complexity with respect to the number of unknown parameters. Finally, experiments in the context of the system-identification task verify the validity of the proposed algorithmic schemes, which are compared to other recent algorithms that have been developed for adaptive distributed learning.
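
As a rough illustration of the projection-based rationale, here is a minimal sketch of a single combine-project-adapt step at one node, assuming a hyperslab constraint set {w : |d - x.w| <= eps} and uniform combination weights (both assumptions are ours, not the paper's exact scheme):

import numpy as np

def project_onto_hyperslab(w, x, d, eps):
    # Projection of w onto the closed convex set {v : |d - x.v| <= eps}.
    e = d - x @ w
    if e > eps:
        return w + (e - eps) * x / (x @ x)
    if e < -eps:
        return w + (e + eps) * x / (x @ x)
    return w  # w already satisfies the constraint

def combine_project_adapt(w_self, neighbor_estimates, x, d, eps):
    # Combine: average the node's estimate with its neighbors' estimates
    # (uniform weights here, purely for illustration).
    w = np.mean(np.vstack([w_self] + list(neighbor_estimates)), axis=0)
    # Project/adapt: move the combined estimate onto the convex set
    # defined by the newest local measurement (x, d).
    return project_onto_hyperslab(w, x, d, eps)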

178 citations


Journal ArticleDOI
08 Jun 2011
TL;DR: In this article, the authors develop a technique for proving lower bounds in property testing by showing a strong connection between testing and communication complexity; the technique is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds.
Abstract: We develop a new technique for proving lower bounds in property testing, by showing a strong connection between testing and communication complexity. We give a simple scheme for reducing communication problems to testing problems, thus allowing us to use known lower bounds in communication complexity to prove lower bounds in testing. This scheme is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds. For the problem of testing whether a boolean function is k-linear (a parity function on k variables), we achieve a lower bound of Omega(k) queries, even for adaptive algorithms with two-sided error, thus confirming a conjecture of Goldreich (2010). The same argument behind this lower bound also implies a new proof of known lower bounds for testing related classes such as k-juntas. For some classes, such as the class of monotone functions and the class of s-sparse GF(2) polynomials, we significantly strengthen the best known bounds.
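
Roughly, the reduction scheme works as follows for the k-linearity case (our restatement, with χ_S(x) = ⊕_{i∈S} x_i): given sets A and B held by Alice and Bob, the combined function

\[ h(x) \;=\; \chi_A(x)\oplus\chi_B(x) \;=\; \chi_{A\,\triangle\,B}(x) \]

is |A △ B|-linear, and any query x of a tester for k-linearity can be answered with two bits of communication (using shared randomness to agree on the query points; Alice sends χ_A(x) and Bob sends χ_B(x)). A q-query tester therefore yields a 2q-bit protocol for a disjointness-style problem on A and B, so communication lower bounds translate into query lower bounds.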

150 citations


Proceedings ArticleDOI
22 Oct 2011
TL;DR: It is shown how to efficiently simulate the sending of a message to a receiver who has partial information about the message, so that the expected number of bits communicated in the simulation is close to the amount of additional information that the message reveals to the receiver.
Abstract: We show how to efficiently simulate the sending of a message to a receiver who has partial information about the message, so that the expected number of bits communicated in the simulation is close to the amount of additional information that the message reveals to the receiver. This is a generalization and strengthening of the Slepian-Wolf theorem, which shows how to carry out such a simulation with low amortized communication in the case that the message is a deterministic function of the sender's input. A caveat is that our simulation is interactive. As a consequence, we prove that the internal information cost (namely the information revealed to the parties) involved in computing any relation or function using a two-party interactive protocol is exactly equal to the amortized communication complexity of computing independent copies of the same relation or function. We also show that the only way to prove a strong direct sum theorem for randomized communication complexity is by solving a particular variant of the pointer jumping problem that we define. Our work implies that a strong direct sum theorem for communication complexity holds if and only if efficient compression of communication protocols is possible.

127 citations


Journal ArticleDOI
TL;DR: This paper demonstrates that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexities through simple yet effective simplifications/approximations, and shows that these message passing algorithms can be used in an iterative manner with local neighborhood search algorithms to improve the reliability/performance of M-QAM symbol detection.
Abstract: In this paper, we deal with low-complexity near-optimal detection/equalization in large-dimension multiple-input multiple-output inter-symbol interference (MIMO-ISI) channels using message passing on graphical models. A key contribution in the paper is the demonstration that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexities through simple yet effective simplifications/approximations, although the graphical models that represent MIMO-ISI channels are fully/densely connected (loopy graphs). These include 1) use of a Markov random field (MRF)-based graphical model with pairwise interaction, in conjunction with message damping, and 2) use of a factor graph (FG)-based graphical model with Gaussian approximation of interference (GAI). The per-symbol complexities are O(K^2 n_t^2) and O(K n_t) for the MRF and the FG-with-GAI approaches, respectively, where K and n_t denote the number of channel uses per frame and the number of transmit antennas, respectively. These low complexities are quite attractive for large dimensions, i.e., for large K n_t. From a performance perspective, these algorithms are even more interesting in large dimensions, since they achieve performance increasingly closer to that of optimum detection for increasing K n_t. Also, we show that these message passing algorithms can be used in an iterative manner with local neighborhood search algorithms to improve the reliability/performance of M-QAM symbol detection.
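
A minimal sketch of the message-damping idea mentioned above (generic, not the paper's exact MRF update rule; the damping factor alpha is an assumed parameter):

import numpy as np

def damped_message(m_old, m_new, alpha=0.3):
    # Convex combination of the previous and freshly computed message.
    # alpha = 0 recovers plain message passing; larger alpha slows the
    # updates, which typically helps convergence on loopy graphs.
    return alpha * np.asarray(m_old) + (1.0 - alpha) * np.asarray(m_new)

In an iterative detector this simply replaces each raw message update, e.g. m = damped_message(m, compute_message(...)), where compute_message is a placeholder for the detector's own update rule.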

112 citations


Journal ArticleDOI
TL;DR: It is shown that the proposed suboptimal solution for secrecy rate maximization in a multicarrier decode-and-forward relay system is asymptotically optimal in the limit as the number of subcarriers goes to infinity.
Abstract: We study power allocation for secrecy rate maximization in a multicarrier decode-and-forward relay system, where an eavesdropper exists. We consider three transmission modes: in no communication, the source and the relay do not transmit at all; in direct communication, the source broadcasts signals during the first time slot and the relay does not forward any signal during the second time slot; in relay communication, the source broadcasts signals during the first time slot and the relay forwards the reencoded signals to the destination during the second time slot. Determining the transmission strategy adaptively on each subcarrier, the optimal source power and the optimal relay power over all subcarriers are derived to maximize the sum secrecy rate under a total system power constraint. In addition, a suboptimal power allocation scheme is proposed to substantially reduce the computational complexity. It is shown that the proposed suboptimal solution is asymptotically optimal in the limit as the number of subcarriers goes to infinity. Extensive numerical results are presented for various scenarios. In particular, the performance of the suboptimal scheme is very close to that of the optimal scheme even if the number of subcarriers is moderately small.
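
In standard wiretap notation (ours, not necessarily the paper's exact expressions), the secrecy rate of direct transmission on a subcarrier is the clipped rate difference between the destination and the eavesdropper, and the adaptive strategy picks the best of the three modes:

\[ R_s^{\mathrm{direct}} = \bigl[\log_2(1+\gamma_D) - \log_2(1+\gamma_E)\bigr]^+, \qquad R_s = \max\bigl\{0,\; R_s^{\mathrm{direct}},\; R_s^{\mathrm{relay}}\bigr\}, \]

where γ_D and γ_E are the destination and eavesdropper SNRs and [z]^+ = max(z, 0).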

110 citations


Proceedings ArticleDOI
06 Jun 2011
TL;DR: The question whether the same exponential separation can be achieved with a quantum protocol that uses only one round of communication is settled in the affirmative.
Abstract: In STOC 1999, Raz presented a (partial) function for which there is a quantum protocol communicating only O(log n) qubits, but for which any classical (randomized, bounded-error) protocol requires poly(n) bits of communication. That quantum protocol requires two rounds of communication. Ever since Raz's paper it was open whether the same exponential separation can be achieved with a quantum protocol that uses only one round of communication. Here we settle this question in the affirmative.

109 citations


Journal Article
TL;DR: A new technique for proving lower bounds in property testing is developed by showing a strong connection between testing and communication complexity; for some classes, it significantly strengthens the best known bounds.
Abstract: We develop a new technique for proving lower bounds in property testing, by showing a strong connection between testing and communication complexity. We give a simple scheme for reducing communication problems to testing problems, thus allowing us to use known lower bounds in communication complexity to prove lower bounds in testing. This scheme is general and implies a number of new testing bounds, as well as simpler proofs of several known bounds. For the problem of testing whether a boolean function is k-linear (a parity function on k variables), we achieve a lower bound of Omega(k) queries, even for adaptive algorithms with two-sided error, thus confirming a conjecture of Goldreich (2010). The same argument behind this lower bound also implies a new proof of known lower bounds for testing related classes such as k-juntas. For some classes, such as the class of monotone functions and the class of s-sparse GF(2) polynomials, we significantly strengthen the best known bounds.

107 citations


Journal ArticleDOI
TL;DR: The proposed PTS with a simple detector achieves almost the same bit error rate (BER) performance as the C-PTS with perfect side information, under both additive white Gaussian noise (AWGN) and Rayleigh fading channels.
Abstract: Partial transmit sequence (PTS) is one of the most well-known peak-to-average power ratio (PAPR) reduction techniques proposed for orthogonal frequency-division multiplexing (OFDM) systems. The main drawbacks of the conventional PTS (C-PTS) are high computational complexity and transmission of several side information bits. A new PTS with a simple detector is proposed in this paper to deal with these drawbacks of the C-PTS. The candidates can be generated through cyclically shifting each sub-block sequence in the time domain and combining them in a recursive manner. At the receiver, by utilizing the natural diversity of phase constellation for different candidates, the detector can successfully recover the original signal without side information. Numerical simulation shows that the proposed scheme performs very well in terms of PAPR. The probability of detection failure demonstrates that the detector can work with high reliability without any side information. The proposed scheme achieves almost the same bit error rate (BER) performance as the C-PTS with perfect side information, under both additive white Gaussian noise (AWGN) and Rayleigh fading channels.
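
A toy sketch of the candidate-generation idea (the interleaved sub-block partitioning, the shift set, and the brute-force search are our illustrative choices; the paper combines shifted sub-blocks recursively):

from itertools import product
import numpy as np

def papr_db(x):
    # Peak-to-average power ratio of a time-domain block, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def cyclic_shift_pts(X, n_subblocks=4, shifts=(0, 8, 16, 24)):
    # Toy PTS: partitioning and shift set are illustrative choices.
    # X: frequency-domain OFDM symbols (length N).
    N = len(X)
    # Partition subcarriers into disjoint (interleaved) sub-blocks and
    # transform each sub-block to the time domain separately.
    parts = []
    for v in range(n_subblocks):
        Xv = np.zeros(N, dtype=complex)
        Xv[v::n_subblocks] = X[v::n_subblocks]
        parts.append(np.fft.ifft(Xv))
    # Each candidate is a sum of cyclically shifted sub-block signals;
    # keep the candidate with the smallest PAPR.
    best, best_papr = None, np.inf
    for combo in product(shifts, repeat=n_subblocks):
        x = sum(np.roll(part, s) for part, s in zip(parts, combo))
        pr = papr_db(x)
        if pr < best_papr:
            best, best_papr = x, pr
    return best, best_papr

For example, cyclic_shift_pts(np.random.choice([-1, 1], 64) + 0j) searches 4^4 = 256 candidates for a 64-subcarrier BPSK block.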

104 citations


Journal ArticleDOI
Lukasz Olejnik
TL;DR: The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents.
Abstract: We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions and by the Deutsch-Jozsa oracle.

Journal ArticleDOI
TL;DR: This work considers detection over linear channels impaired by additive white Gaussian noise and proposes novel detection algorithms derived by applying the sum-product algorithm to a suitably designed factor graph that can approach or even outperform the performance provided by much more complex algorithms.
Abstract: We consider detection over linear channels impaired by additive white Gaussian noise. For this general model, which describes a large variety of scenarios, novel detection algorithms are derived by applying the sum-product algorithm to a suitably designed factor graph. Being soft-input soft-output (SISO) in nature, the proposed detectors can be adopted in turbo processing without additional modifications. Among various applications, we focus on channels with known intersymbol interference, on frequency-division-multiplexed systems where adjacent signals are allowed to overlap in frequency to increase the spectral efficiency, and on code division multiple access systems. When compared with the existing interference-cancellation algorithms, the proposed schemes are very appealing in terms of the tradeoff between performance and computational complexity. In particular, they can approach or even outperform the performance provided by much more complex algorithms.

Journal ArticleDOI
TL;DR: This paper presents an error bound function and incorporates it into the two channel estimation methods, so that the error bound is automatically adjusted as the channel estimates are updated; the proposed algorithms show good performance in terms of convergence speed, steady-state mean square error, and bit error rate.
Abstract: In this paper, we consider a general cooperative wireless sensor network (WSN) with multiple hops and the problem of channel estimation. Two matrix-based set-membership (SM) algorithms are developed for the estimation of complex matrix channel parameters. The main goal is to significantly reduce the computational complexity, compared with existing channel estimators, and extend the lifetime of the WSN by reducing its power consumption. The first proposed algorithm is the SM normalized least mean squares (SM-NLMS) algorithm. The second is the SM recursive least squares (RLS) algorithm called BEACON. Then, we present an error bound function and incorporate it into the two channel estimation methods, which can then automatically adjust the error bound as the channel estimates are updated. Steady-state analysis in the output mean-square error (MSE) is presented, and closed-form formulas for the excess MSE and the probability of update in each recursion are provided. Computer simulations show good performance of our proposed algorithms in terms of convergence speed, steady-state mean square error, and bit error rate (BER) and demonstrate reduced complexity and robustness against time-varying environments and different signal-to-noise ratio (SNR) values.
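
A minimal sketch of the set-membership idea in its standard scalar-output SM-NLMS form (not the paper's matrix-based multi-hop variant; gamma is the error bound):

import numpy as np

def sm_nlms_update(w, x, d, gamma, eps=1e-12):
    # Standard SM-NLMS step, shown for a vector parameter w.
    # A priori error for the new data pair (x, d); w^H x is the estimate.
    e = d - np.vdot(w, x)
    if abs(e) <= gamma:
        return w                    # estimate already consistent: no update
    mu = 1.0 - gamma / abs(e)       # data-dependent step size in (0, 1)
    return w + mu * x * np.conj(e) / (np.vdot(x, x).real + eps)

Because updates are skipped whenever |e| <= gamma, the average computational cost per sample drops, which is the mechanism behind the power savings discussed above.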

Journal ArticleDOI
TL;DR: An optimal power allocation algorithm for the orthogonal frequency division multiplexing (OFDM)-based cognitive radio (CR) systems with different statistical interference constraints imposed by different primary users (PUs) is developed and the performance has been investigated.
Abstract: In this letter, we develop an optimal power allocation algorithm for orthogonal frequency division multiplexing (OFDM)-based cognitive radio (CR) systems with different statistical interference constraints imposed by different primary users (PUs). Given the fact that the interference constraints are met in a statistical manner, the CR transmitter does not require instantaneous channel quality feedback from the PU receivers. A suboptimal algorithm with reduced complexity has been proposed and its performance has been investigated. Presented numerical results show that with our proposed optimal power allocation algorithm the CR user can achieve significantly higher transmission capacity for given statistical interference constraints and a given power budget compared to the classical power allocation algorithms, namely the uniform and water-filling power allocation algorithms. The suboptimal algorithm outperforms both the water-filling algorithm and the uniform power loading algorithm, and it gives the option of using a low-complexity power allocation scheme where complexity is an issue, at the cost of a certain amount of transmission rate degradation.
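
For reference, the classical water-filling baseline mentioned above can be sketched as follows (a minimal version over parallel subcarriers, without the interference constraints that the letter adds):

import numpy as np

def water_filling(gains, total_power):
    # gains: per-subcarrier channel gain-to-noise ratios.
    # Solves p_i = max(0, mu - 1/gains_i) with sum(p_i) = total_power
    # by bisection on the water level mu.
    inv = 1.0 / np.asarray(gains, dtype=float)
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

For example, water_filling([2.0, 1.0, 0.25], 1.0) pours most of the power budget into the strongest subcarrier.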

Proceedings ArticleDOI
06 Jun 2011
TL;DR: An optimal Ω(n) lower bound on the randomized communication complexity of the much-studied Gap-Hamming-Distance problem is proved, and essentially optimal multi-pass space lower bounds in the data stream model are obtained for a number of fundamental problems, including the estimation of frequency moments.
Abstract: We prove an optimal Ω(n) lower bound on the randomized communication complexity of the much-studied Gap-Hamming-Distance problem. As a consequence, we obtain essentially optimal multi-pass space lower bounds in the data stream model for a number of fundamental problems, including the estimation of frequency moments. The Gap-Hamming-Distance problem is a communication problem, wherein Alice and Bob receive n-bit strings x and y, respectively. They are promised that the Hamming distance between x and y is either at least n/2+√n or at most n/2-√n, and their goal is to decide which of these is the case. Since the formal presentation of the problem by Indyk and Woodruff (FOCS, 2003), it had been conjectured that the naive protocol, which uses n bits of communication, is asymptotically optimal. The conjecture was shown to be true in several special cases, e.g., when the communication is deterministic, or when the number of rounds of communication is limited. The proof of our aforementioned result, which settles this conjecture fully, is based on a new geometric statement regarding correlations in Gaussian space, related to a result of C. Borell (1985). To prove this geometric statement, we show that random projections of not-too-small sets in Gaussian space are close to a mixture of translated normal variables.
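
The naive protocol referenced above, which the lower bound shows is essentially optimal, is trivial to state (a sketch: Alice ships her entire string and Bob decides):

def ghd_naive_protocol(x, y):
    # Alice sends all n bits of x to Bob: n bits of communication.
    # Promise: Hamming distance >= n/2 + sqrt(n) or <= n/2 - sqrt(n).
    n = len(x)
    dist = sum(xi != yi for xi, yi in zip(x, y))  # computed by Bob
    return "far" if dist >= n / 2 else "close"

The theorem says that no randomized protocol can do asymptotically better than these n bits.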

Posted Content
TL;DR: The main known lower bounds on the minimum sizes of extended formulations for a fixed polytope P (Yannakakis 1991) are closely related to the concept of nondeterministic communication complexity.
Abstract: An extended formulation of a polytope P is a polytope Q which can be projected onto P. Extended formulations of small size (i.e., with a small number of facets) are of interest, as they allow one to model the corresponding optimization problems as linear programs of small size. The main known lower bounds on the minimum sizes of extended formulations for a fixed polytope P (Yannakakis 1991) are closely related to the concept of nondeterministic communication complexity. We study the relative power and limitations of the bounds on several examples.

Proceedings ArticleDOI
06 Jun 2011
TL;DR: In this paper, the authors study the verification problem in distributed networks, and give almost tight lower bounds on the running time of distributed verification algorithms for many fundamental problems such as connectivity, spanning connected subgraph, and s-t cut verification.
Abstract: We study the verification problem in distributed networks, stated as follows. Let H be a subgraph of a network G where each vertex of G knows which edges incident on it are in H. We would like to verify whether H has some properties, e.g., if it is a tree or if it is connected (at the end of the process, every node knows whether H has the specified property or not). We would like to perform this verification in a decentralized fashion via a distributed algorithm. The time complexity of verification is measured as the number of rounds of distributed communication. In this paper we initiate a systematic study of distributed verification, and give almost tight lower bounds on the running time of distributed verification algorithms for many fundamental problems such as connectivity, spanning connected subgraph, and s-t cut verification. We then show applications of these results in deriving strong unconditional time lower bounds on the hardness of distributed approximation for many classical optimization problems including minimum spanning tree, shortest paths, and minimum cut. Many of these results are the first non-trivial lower bounds for both exact and approximate distributed computation and they resolve previous open questions. Moreover, our unconditional lower bound of approximating minimum spanning tree (MST) subsumes and improves upon the previous hardness of approximation bound of Elkin [STOC 2004] as well as the lower bound for (exact) MST computation of Peleg and Rubinovich [FOCS 1999]. Our result implies that there can be no distributed approximation algorithm for MST that is significantly faster than the current exact algorithm, for any approximation factor. Our lower bound proofs show an interesting connection between communication complexity and distributed computing which turns out to be useful in establishing the time complexity of exact and approximate distributed computation of many problems.

Journal ArticleDOI
TL;DR: This paper presents new design formulations that aim at optimizing the performance of an orthogonal frequency-division multiple-access (OFDMA) ad hoc cognitive radio network through joint subcarrier assignment and power allocation; network collaboration is made possible through the implementation of virtual timers at individual secondary users and through the exchange of pertinent information over a common reserved channel.
Abstract: This paper presents new design formulations that aim at optimizing the performance of an orthogonal frequency-division multiple-access (OFDMA) ad hoc cognitive radio network through joint subcarrier assignment and power allocation. Aside from an important constraint on the tolerable interference induced to primary networks, to efficiently implement spectrum-sharing control within the unlicensed network, the optimization problems considered here strictly enforce upper and lower bounds on the total amount of temporarily available bandwidth that is granted to individual secondary users. These new requirements are of particular relevance in cognitive radio settings, where the spectral activities of primary users are highly dynamic, leaving little opportunity for secondary access. A dual decomposition framework is then developed for two criteria (throughput maximization and power minimization), which gives rise to the realization of distributed solutions. Because the proposed distributed protocols require very limited cooperation among the participating network elements, they are particularly applicable to ad hoc cognitive networks, where centralized processing and control are certainly inaccessible. In this paper, network collaboration is made possible through the implementation of virtual timers at individual secondary users and through the exchange of pertinent information over a common reserved channel. It is shown that not only is the computational complexity of the devised algorithms affordable, but also that their performance in practical scenarios attains the actual global optimum. The potential of the proposed approaches is thoroughly verified by asymptotic complexity analysis and numerical results.

Journal ArticleDOI
TL;DR: A centralized approximate solution to power control in interference-limited cellular, ad-hoc, and cognitive underlay networks is developed, together with a distributed implementation that alternates between distributed approximation and distributed deflation - reaching consensus on a user to drop, when needed.
Abstract: Power control is important in interference-limited cellular, ad-hoc, and cognitive underlay networks, when the objective is to ensure a certain quality of service to each connection. Power control has been extensively studied in this context, including distributed algorithms that are particularly appealing in ad-hoc and cognitive settings. A long-standing issue is that the power control problem may be infeasible, thus requiring appropriate admission control. The power and admission control parts of the problem are tightly coupled, but the joint optimization problem is NP-hard. We begin with a convenient reformulation which enables a disciplined convex approximation approach. This leads to a centralized approximate solution that is numerically shown to outperform the prior art, and even yield close to optimal results in certain cases - at affordable complexity. The issue of imperfect channel state information is also considered. A distributed implementation is then developed, which alternates between distributed approximation and distributed deflation - reaching consensus on a user to drop, when needed. Both phases require only local communication and computation, yielding a relatively lightweight distributed algorithm with the same performance as its centralized counterpart.

Journal ArticleDOI
TL;DR: In this paper, clock synchronization for wireless sensor networks in the presence of unknown exponential delay is investigated under the two-way message exchange mechanism, and a low-complexity maximum likelihood estimator is proposed.
Abstract: In this paper, clock synchronization for wireless sensor networks in the presence of unknown exponential delay is investigated under the two-way message exchange mechanism. The maximum-likelihood estimator for joint estimation of clock offset, clock skew, and fixed delay is first cast as a linear programming problem. Based on novel geometric analyses of the feasible domain, a low-complexity maximum likelihood estimator is then proposed. The complexities of the proposed estimators and of existing algorithms are compared analytically and numerically. Simulation results further demonstrate that our proposed algorithms have advantages in terms of both performance and computational complexity.
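
The two-way exchange model underlying such estimators is commonly written as follows (a standard simplified form with offset only; the paper's model additionally includes clock skew):

\[ T_{2,i} = T_{1,i} + d + \theta + X_i, \qquad T_{4,i} = T_{3,i} + d - \theta + Y_i, \]

where T_{1,i} and T_{4,i} are timestamps recorded at the reference node, T_{2,i} and T_{3,i} at the node being synchronized, θ is the clock offset, d is the fixed portion of the delay, and X_i, Y_i are the unknown exponential random delays.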

Journal ArticleDOI
TL;DR: The notion of block sensitivity is polynomially related to a number of other complexity measures of a Boolean function, including the decision tree complexity, the polynomial degree, and the certificate complexity.
Abstract: The sensitivity of a Boolean function f of n Boolean variables is the maximum over all inputs x of the number of positions i such that flipping the i-th bit of x changes the value of f(x). Permitting disjoint blocks of bits to be flipped leads to the notion of block sensitivity, known to be polynomially related to a number of other complexity measures of f, including the decision-tree complexity, the polynomial degree, and the certificate complexity. A long-standing open question is whether sensitivity also belongs to this equivalence class. A positive answer to this question is known as the Sensitivity Conjecture. We present a selection of known as well as new variants of the Sensitivity Conjecture and point out some weaker versions that are also open. Among other things, we relate the problem to Communication Complexity via recent results by Sherstov (QIC 2010). We also indicate new connections to Fourier analysis.
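
In symbols (our notation, with x^S denoting x with the bits indexed by S flipped), the two measures are

\[ s(f) = \max_x \bigl|\{\, i : f(x) \neq f(x^{\{i\}}) \,\}\bigr|, \qquad bs(f) = \max_x \max\bigl\{ k : \exists\ \text{disjoint}\ B_1,\dots,B_k\ \text{with}\ f(x) \neq f(x^{B_j})\ \text{for all}\ j \bigr\}, \]

and the conjecture asks whether bs(f) is bounded by a polynomial in s(f).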

Journal ArticleDOI
TL;DR: This work proposes to exploit network path diversity via a novel randomized network coding (RNC) approach that provides unequal error protection (UEP) to the packets conveying the video content through a distributed receiver-driven streaming solution.
Abstract: We address the problem of prioritized video streaming over lossy overlay networks. We propose to exploit network path diversity via a novel randomized network coding (RNC) approach that provides unequal error protection (UEP) to the packets conveying the video content. We design a distributed receiver-driven streaming solution, where a client requests packets from the different priority classes from its neighbors in the overlay. Based on the received requests, a node in turn forwards combinations of the selected packets to the requesting peers. Choosing a network coding strategy at every node can be cast as an optimization problem that determines the rate allocation between the different packet classes such that the average distortion at the requesting peer is minimized. As the optimization problem has log-concavity properties, it can be solved with low complexity by an iterative algorithm. Our simulation results demonstrate that the proposed scheme respects the relative priorities of the different packet classes and achieves a graceful quality adaptation to network resource constraints. Therefore, our scheme substantially outperforms reference schemes such as baseline network coding techniques as well as solutions that employ rateless codes with built-in UEP properties. The performance evaluation provides additional evidence of the substantial robustness of the proposed scheme in a variety of transmission scenarios.

Journal ArticleDOI
TL;DR: These results refer to scalar linear, vector linear, and nonlinear encoding functions and are the first results that address the computational complexity of achieving the network coding capacity in both the vector linear and general network coding scenarios.
Abstract: This work addresses the computational complexity of achieving the capacity of a general network coding instance. It has been shown [Lehman and Lehman, SODA 2005] that determining the "scalar linear" capacity of a general network coding instance is NP-hard. In this paper we address the notion of approximation in the context of both linear and nonlinear network coding. Loosely speaking, we show that given an instance of the general network coding problem of capacity C, constructing a code of rate αC for any universal (i.e., independent of the size of the instance) constant α ≤ 1 is "hard". Specifically, finding such network codes would solve a long standing open problem in the field of graph coloring. Our results refer to scalar linear, vector linear, and nonlinear encoding functions and are the first results that address the computational complexity of achieving the network coding capacity in both the vector linear and general network coding scenarios. In addition, we consider the problem of determining the (scalar) linear capacity of a planar network coding instance (i.e., an instance in which the underlying graph is planar). We show that even for planar networks this problem remains NP-hard.

Book ChapterDOI
04 Dec 2011
TL;DR: In this paper, it was shown that homomorphism of commitments is not a necessity for computational verifiable secret sharing in the synchronous or in the asynchronous communication model, and the first two-round VSS scheme for n≥2t+1 was presented.
Abstract: Verifiable secret sharing (VSS) is an important primitive in distributed cryptography that allows a dealer to share a secret among n parties in the presence of an adversary controlling at most t of them. In the computational setting, the feasibility of VSS schemes based on commitments was established over two decades ago. Interestingly, all known computational VSS schemes rely on the homomorphic nature of these commitments or achieve weaker guarantees. As homomorphism is not inherent to commitments or to the computational setting in general, a closer look at its utility to VSS is called for. In this work, we demonstrate that homomorphism of commitments is not a necessity for computational VSS in the synchronous or in the asynchronous communication model. We present new VSS schemes based only on the definitional properties of commitments that are almost as good as the existing VSS schemes based on homomorphic commitments. Importantly, they have significantly lower communication complexities than their (statistical or perfect) unconditional counterparts. Further, in the synchronous communication model, we observe that a crucial interactive complexity measure of round complexity has never been formally studied for computational VSS. Interestingly, for the optimal resiliency conditions, the least possible round complexity in the known computational VSS schemes is identical to that in the (statistical or perfect) unconditional setting: three rounds. Considering the strength of the computational setting, this equivalence is certainly surprising. In this work, we show that three rounds are actually not mandatory for computational VSS. We present the first two-round VSS scheme for n≥2t+1 and lower-bound the result tightly by proving the impossibility of one-round computational VSS for t≥2 or n≤3t. We also include a new two-round VSS scheme using homomorphic commitments that has the same communication complexity as the well-known three-round Feldman and Pedersen VSS schemes.
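
For context on the homomorphic commitments that the abstract contrasts against, here is a toy sketch of Feldman-style share verification (tiny, insecure parameters chosen purely for illustration):

# Toy Feldman-style VSS share verification; real schemes use large primes.
p, q, g = 23, 11, 4          # g = 4 has order q = 11 in Z_23*

def commit(coeffs):
    # Dealer publishes g^{a_i} mod p for the polynomial f(x) = sum a_i x^i.
    return [pow(g, a, p) for a in coeffs]

def share(coeffs, j):
    # The share of party j is f(j) mod q.
    return sum(a * pow(j, i, q) for i, a in enumerate(coeffs)) % q

def verify(commitments, j, s):
    # Homomorphic check: g^s == prod_i C_i^{j^i} (mod p).
    rhs = 1
    for i, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(j, i, q), p) % p
    return pow(g, s, p) == rhs

coeffs = [7, 3]              # secret s = 7, threshold t = 1
C = commit(coeffs)
assert all(verify(C, j, share(coeffs, j)) for j in range(1, 5))

The homomorphism (a product of commitments commits to the sum of the committed values) is exactly the property that the paper shows is not actually necessary for computational VSS.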

Proceedings ArticleDOI
23 Jan 2011
TL;DR: A new technique for proving streaming lower bounds (and one-way communication lower bounds), by reductions from a problem called the Boolean Hidden Hypermatching problem (BHH), which is a generalization of the well-known Boolean Hidden Matching problem.
Abstract: In this paper we introduce a new technique for proving streaming lower bounds (and one-way communication lower bounds), by reductions from a problem called the Boolean Hidden Hypermatching problem (BHH). BHH is a generalization of the well-known Boolean Hidden Matching problem, which was used by Gavinsky et al. to prove an exponential separation between quantum communication complexity and one-way randomized communication complexity. We are the first to introduce BHH, and to prove a lower bound for it. The hardness of the BHH problem is inherently one-way: it is easy to solve BHH using logarithmic two-way communication, but it requires √n communication if Alice is only allowed to send messages to Bob, and not vice-versa. This one-wayness allows us to prove lower bounds, via reductions, for streaming problems and related communication problems whose hardness is also inherently one-way. By designing reductions from BHH, we prove lower bounds for the streaming complexity of approximating the sorting by reversal distance, of approximately counting the number of cycles in a 2-regular graph, and of other problems. For example, here is one lower bound that we prove, for a cycle-counting problem: Alice gets a perfect matching EA on a set of n nodes, and Bob gets a perfect matching EB on the same set of nodes. The union EA U EB is a collection of cycles, and the goal is to approximate the number of cycles in this collection. We prove that if Alice is allowed to send o(√n) bits to Bob (and Bob is not allowed to send anything to Alice), then the number of cycles cannot be approximated to within a factor of 1.999, even using a randomized protocol. We prove that it is not even possible to distinguish the case where all cycles are of length 4, from the case where all cycles are of length 8. This lower bound is "natively" one-way: with 4 rounds of communication, it is easy to distinguish these two cases.
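
The cycle-counting problem in the example is simple to state in code (the hard part is approximating the answer with one-way communication, not computing it with both inputs in hand):

def count_cycles(match_a, match_b):
    # match_a, match_b: dicts mapping each node to its partner in Alice's
    # and Bob's perfect matchings. Every node has degree 2 in the union,
    # so the union decomposes into even-length cycles.
    seen, cycles = set(), 0
    for start in match_a:
        if start in seen:
            continue
        cycles += 1
        node, use_a = start, True
        while node not in seen:          # walk the cycle, alternating matchings
            seen.add(node)
            node = match_a[node] if use_a else match_b[node]
            use_a = not use_a
    return cycles

# The union of these two matchings on nodes 0..3 is a single 4-cycle:
assert count_cycles({0: 1, 1: 0, 2: 3, 3: 2}, {1: 2, 2: 1, 3: 0, 0: 3}) == 1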

Book ChapterDOI
28 Mar 2011
TL;DR: A new cryptographic notion is introduced, the one-time computable pseudorandom function (PRF): a PRF that can be evaluated on at most one input, even by an adversary who controls the device storing the key K. A related primitive, uncomputable hash functions, is shown to improve the communication complexity of proofs-of-erasure schemes.
Abstract: This paper studies the design of cryptographic schemes that are secure even if implemented on untrusted machines that fall under adversarial control. For example, this includes machines that are infected by a software virus. We introduce a new cryptographic notion that we call a one-time computable pseudorandom function (PRF), which is a PRF F_K(·) that can be evaluated on at most one input, even by an adversary who controls the device storing the key K, as long as: (1) the adversary cannot "leak" the key K out of the device completely (this is similar to the assumptions made in the Bounded-Retrieval Model), and (2) the local read/write memory of the machine is restricted, and not too much larger than the size of K. In particular, the only way to evaluate F_K(x) on such a device is to overwrite part of the key K during the computation, thus preventing all future evaluations of F_K(·) at any other point x′ ≠ x. We show that this primitive can be used to construct schemes for password protected storage that are secure against dictionary attacks, even by a virus that infects the machine. Our constructions rely on the random-oracle model, and lower bounds for graph pebbling problems. We show that our techniques can also be used to construct another primitive, called uncomputable hash functions, which are hash functions that have a short description but require a large amount of space to compute on any input. We show that this tool can be used to improve the communication complexity of proofs-of-erasure schemes, introduced recently by Perito and Tsudik (ESORICS 2010).

Proceedings ArticleDOI
08 Jun 2011
TL;DR: Lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness are shown, together with a general method for 'transferring' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method.
Abstract: We show several results related to interactive proof modes of communication complexity. First we show lower bounds for the QMA-communication complexity of the functions Inner Product and Disjointness. We describe a general method to prove lower bounds for QMA-communication complexity, and show how one can 'transfer' hardness under an analogous measure in the query complexity model to the communication model using Sherstov's pattern matrix method. Combining a result by Vereshchagin and the pattern matrix method we find a partial function with AM-communication complexity O(log n), PP-communication complexity Ω(n^{1/3}), and QMA-communication complexity Ω(n^{1/6}). Hence in the world of communication complexity noninteractive quantum proof systems are not able to efficiently simulate co-nondeterminism or interaction. These results imply that the related questions in Turing machine complexity theory cannot be resolved by 'algebrizing' techniques. Finally we show that in MA-protocols there is an exponential gap between one-way protocols and two-way protocols for a partial function (this refers to the interaction between Alice and Bob). This is in contrast to nondeterministic, AM-, and QMA-protocols, where one-way communication is essentially optimal.

Proceedings ArticleDOI
06 Jun 2011
TL;DR: A tight unconditional lower bound on the time complexity of distributed random walk computation is shown; this is the first lower bound in which the diameter plays the role of a multiplicative factor.
Abstract: We consider the problem of performing a random walk in a distributed network. Given bandwidth constraints, the goal of the problem is to minimize the number of rounds required to obtain a random walk sample. Das Sarma et al. [PODC'10] show that a random walk of length l on a network of diameter D can be performed in O(√(lD) + D) time. A major question left open is whether there exists a faster algorithm, especially whether the multiplication of √l and √D is necessary. In this paper, we show a tight unconditional lower bound on the time complexity of distributed random walk computation. Specifically, we show that for any n, D, and D ≤ l ≤ (n/(D^3 log n))^{1/4}, performing a random walk of length Θ(l) on an n-node network of diameter D requires Ω(√(lD) + D) time. This bound is unconditional, i.e., it holds for any (possibly randomized) algorithm. To the best of our knowledge, this is the first lower bound in which the diameter plays the role of a multiplicative factor. Our bound shows that the algorithm of Das Sarma et al. is time optimal. Our proof technique introduces a new connection between bounded-round communication complexity and distributed algorithm lower bounds with D as a trade-off parameter, strengthening the previous study by Das Sarma et al. [STOC'11]. In particular, we make use of the bounded-round communication complexity of the pointer chasing problem. Our technique can be of independent interest and may be useful in showing non-trivial lower bounds on the complexity of other fundamental distributed computing problems.

Journal ArticleDOI
TL;DR: In this paper, the energy and area efficiency metrics are proposed for design space exploration to quantify the algorithmic and the implementation complexity of a receiver, and an exploration approach is presented, which permits an appropriate benchmarking of implementation efficiency, communications performance, and flexibility trade-offs.
Abstract: Future wireless communication systems require efficient and flexible baseband receivers. Meaningful efficiency metrics are key for design space exploration to quantify the algorithmic and the implementation complexity of a receiver. Most of the currently established efficiency metrics are based on counting operations, thus neglecting important issues like data and storage complexity. In this paper we introduce suitable energy and area efficiency metrics which resolve the aforementioned disadvantages. These are decoded information bits per unit of energy and throughput per unit of area. The efficiency metrics are assessed on various implementations of turbo decoders, LDPC decoders, and convolutional decoders. An exploration approach is presented which permits appropriate benchmarking of implementation efficiency, communications performance, and flexibility trade-offs. Two case studies demonstrate this approach and show that design space exploration should result in various efficiency evaluations rather than a single snapshot metric, as is often done in state-of-the-art approaches.

Book ChapterDOI
01 Jul 2011
TL;DR: In this article, the authors studied the complexity of non-signaling distributions, i.e., those where Alice's marginal distribution does not depend on Bob's input, and vice versa.
Abstract: We study a model of communication complexity that encompasses many well-studied problems, including classical and quantum communication complexity, the complexity of simulating distributions arising from bipartite measurements of shared quantum states, and XOR games. In this model, Alice gets an input x, Bob gets an input y, and their goal is to each produce an output a, b distributed according to some pre-specified joint distribution p(a, b|x, y). Our results apply to any non-signaling distribution, that is, those where Alice's marginal distribution does not depend on Bob's input, and vice versa. By taking a geometric view of the non-signaling distributions, we introduce a simple new technique based on affine combinations of lower-complexity distributions, and we give the first general technique to apply to all these settings, with elementary proofs and very intuitive interpretations. Specifically, we introduce two complexity measures, one which gives lower bounds on classical communication, and one for quantum communication. These measures can be expressed as convex optimization problems. We show that the dual formulations have a striking interpretation, since they coincide with maximum violations of Bell and Tsirelson inequalities. The dual expressions are closely related to the winning probability of XOR games. Despite their apparent simplicity, these lower bounds subsume many known communication complexity lower bound methods, most notably the recent lower bounds of Linial and Shraibman for the special case of Boolean functions. We show that as in the case of Boolean functions, the gap between the quantum and classical lower bounds is at most linear in the size of the support of the distribution, and does not depend on the size of the inputs. This translates into a bound on the gap between maximal Bell and Tsirelson inequality violations, which was previously known only for the case of distributions with Boolean outcomes and uniform marginals. It also allows us to show that for some distributions, information theoretic methods are necessary to prove strong lower bounds. Finally, we give an exponential upper bound on quantum and classical communication complexity in the simultaneous messages model, for any non-signaling distribution. One consequence of this is a simple proof that any quantum distribution can be approximated with a constant number of bits of communication.
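
The non-signaling condition described above can be written explicitly: the marginal of each party's output must not depend on the other party's input,

\[ \sum_{b} p(a,b \mid x,y) = \sum_{b} p(a,b \mid x,y') \quad \text{for all } a, x, y, y', \qquad \sum_{a} p(a,b \mid x,y) = \sum_{a} p(a,b \mid x',y) \quad \text{for all } b, y, x, x'. \]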