scispace - formally typeset

Showing papers on "Communication complexity published in 2007"


Proceedings ArticleDOI
01 Dec 2007
TL;DR: A continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network, giving rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates.
Abstract: In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task.
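A minimal sketch of the consensus-on-estimates idea that these distributed Kalman filters build on, reduced to scalar consensus averaging. The topology, step size, and values below are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: consensus averaging, the building block behind
# consensus-based distributed Kalman filtering. Each node repeatedly
# moves its estimate toward its neighbors' estimates; all estimates
# converge to the network-wide average. This shows the consensus step
# only, not the full DKF algorithm from the paper.

def consensus_average(estimates, neighbors, step=0.2, iters=200):
    """Discrete-time consensus: x_i <- x_i + step * sum_j (x_j - x_i)."""
    x = list(estimates)
    for _ in range(iters):
        # Synchronous update: every node uses its neighbors' previous values.
        x = [xi + step * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# Four sensors on a ring, each holding a noisy local estimate of one target.
local = [9.0, 11.0, 10.5, 9.5]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
fused = consensus_average(local, ring)
# Every node's estimate converges to the network average, 10.0.
```

For a connected graph and a small enough step size, the non-average modes decay geometrically, which is why a few hundred iterations suffice here.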

1,514 citations


Proceedings ArticleDOI
11 Jun 2007
TL;DR: A general construction of a zero-knowledge proof for an NP relation R(x,w) which only makes black-box use of a secure protocol for a related multi-party functionality f, improving over the O(ks) complexity of the best previous protocols.
Abstract: We present a general construction of a zero-knowledge proof for an NP relation R(x,w) which only makes black-box use of a secure protocol for a related multi-party functionality f. The latter protocol is only required to be secure against a small number of "honest but curious" players. As an application, we can translate previous results on the efficiency of secure multiparty computation to the domain of zero-knowledge, improving over previous constructions of efficient zero-knowledge proofs. In particular, if verifying R on a witness of length m can be done by a circuit C of size s, and assuming one-way functions exist, we get the following types of zero-knowledge proof protocols. Approaching the witness length: If C has constant depth over ∧, ∨, ⊕, ¬ gates of unbounded fan-in, we get a zero-knowledge protocol with communication complexity m·poly(k)·polylog(s), where k is a security parameter. Such a protocol can be implemented in either the standard interactive model or, following a trusted setup, in a non-interactive model. "Constant-rate" zero-knowledge: For an arbitrary circuit C of size s and bounded fan-in, we get a zero-knowledge protocol with communication complexity O(s) + poly(k). Thus, for large circuits, the ratio between the communication complexity and the circuit size approaches a constant. This improves over the O(ks) complexity of the best previous protocols.

351 citations


Book ChapterDOI
19 Aug 2007
TL;DR: These are the first unconditionally secure protocols where the part of the communication complexity that depends on the circuit size is linear in n and the protocol has so called everlasting security.
Abstract: We present a multiparty computation protocol that is unconditionally secure against adaptive and active adversaries, with communication complexity O(Cn)k + O(Dn^2)k + poly(nκ), where C is the number of gates in the circuit, n is the number of parties, k is the bit-length of the elements of the field over which the computation is carried out, D is the multiplicative depth of the circuit, and κ is the security parameter. The corruption threshold is t < n/3. For passive security the corruption threshold is t < n/2 and the communication complexity is O(nC)k. These are the first unconditionally secure protocols where the part of the communication complexity that depends on the circuit size is linear in n. We also present a protocol with threshold t < n/2 and complexity O(Cn)k + poly(nκ) based on a complexity assumption which, however, only has to hold during the execution of the protocol - that is, the protocol has so-called everlasting security.

267 citations


Book ChapterDOI
19 Aug 2007
TL;DR: A public-key encryption scheme for Alice that allows PIR searching over encrypted documents, giving a theoretical solution to the problem posed by Boneh, Di Crescenzo, Ostrovsky and Persiano on public-key encryption with keyword search.
Abstract: Consider the following problem: Alice wishes to maintain her email using a storage-provider Bob (such as a Yahoo! or hotmail email account). This storage-provider should provide for Alice the ability to collect, retrieve, search and delete emails but, at the same time, should learn neither the content of messages sent from the senders to Alice (with Bob as an intermediary), nor the search criteria used by Alice. A trivial solution is that messages will be sent to Bob in encrypted form and Alice, whenever she wants to search for some message, will ask Bob to send her a copy of the entire database of encrypted emails. This however is highly inefficient. We will be interested in solutions that are communication-efficient and, at the same time, respect the privacy of Alice. In this paper, we show how to create a public-key encryption scheme for Alice that allows PIR searching over encrypted documents. Our solution is the first to reveal no partial information regarding the user's search (including the access pattern) in the public-key setting and with nontrivially small communication complexity. This provides a theoretical solution to a problem posed by Boneh, DiCrescenzo, Ostrovsky and Persiano on "Public-key Encryption with Keyword Search." The main technique of our solution also allows for Single-Database PIR writing with sublinear communication complexity, which we consider of independent interest.

241 citations


Proceedings ArticleDOI
28 Oct 2007
TL;DR: A new error-resilient privacy-preserving string searching protocol that allows any finite state machine to be executed in an oblivious manner, requiring communication complexity linear in both the number of states and the length of the input string.
Abstract: Human deoxyribonucleic acid (DNA) sequences offer a wealth of information that reveals, among other things, predisposition to various diseases and paternity relations. The breadth and personalized nature of this information highlights the need for privacy-preserving protocols. In this paper, we present a new error-resilient privacy-preserving string searching protocol that is suitable for running private DNA queries. This protocol checks if a short template (e.g., a string that describes a mutation leading to a disease), known to one party, is present inside a DNA sequence owned by another party, accounting for possible errors and without disclosing to each party the other party's input. Each query is formulated as a regular expression over a finite alphabet and implemented as an automaton. As the main technical contribution, we provide a protocol that allows any finite state machine to be executed in an oblivious manner, with communication complexity linear in both the number of states and the length of the input string.
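A plain, non-private version of the functionality being computed may make the setting concrete: check whether a short template occurs in a DNA sequence with at most a given number of mismatches. The paper evaluates an equivalent automaton obliviously; this sketch has no privacy and its sequences are made up.

```python
# Hedged sketch: the matching predicate only, with no privacy.
# The protocol in the paper computes this via an oblivious automaton;
# here we just slide the template over the sequence and count mismatches.

def template_matches(seq, template, max_err=1):
    """True if `template` occurs in `seq` with at most `max_err` mismatches."""
    m = len(template)
    for i in range(len(seq) - m + 1):
        errs = sum(a != b for a, b in zip(seq[i:i + m], template))
        if errs <= max_err:
            return True
    return False

dna = "ACGTAGATTACA"                      # toy sequence, one party's input
assert template_matches(dna, "GAT", max_err=0)   # exact occurrence
assert template_matches(dna, "GCT", max_err=1)   # one mismatch against "GAT"
assert not template_matches(dna, "CCCC", max_err=1)
```

In the protocol itself, neither party learns the other's input; only the final boolean (or the automaton's output) is revealed.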

239 citations


Posted Content
TL;DR: This paper shows how to create a public-key encryption scheme for Alice that allows PIR searching over encrypted documents and is the first to reveal no partial information regarding the user's search (including the access pattern) in the public- key setting and with nontrivially small communication complexity.
Abstract: Consider the following problem: Alice wishes to maintain her email using a storage-provider Bob (such as a Yahoo! or hotmail e-mail account). This storage-provider should provide for Alice the ability to collect, retrieve, search and delete emails but, at the same time, should learn neither the content of messages sent from the senders to Alice (with Bob as an intermediary), nor the search criteria used by Alice. A trivial solution is that messages will be sent to Bob in encrypted form and Alice, whenever she wants to search for some message, will ask Bob to send her a copy of the entire database of encrypted emails. This however is highly inefficient. We will be interested in solutions that are communication-efficient and, at the same time, respect the privacy of Alice. In this paper, we show how to create a public-key encryption scheme for Alice that allows PIR searching over encrypted documents. Our solution provides a theoretical solution to an open problem posed by Boneh, Di Crescenzo, Ostrovsky and Persiano on "Public-key Encryption with Keyword Search", providing the first scheme that does not reveal any partial information regarding the user's search (including the access pattern) in the public-key setting and with non-trivially small communication complexity. The main technique of our solution also allows for Single-Database PIR writing with sublinear communication complexity, which we consider of independent interest.

211 citations


Journal ArticleDOI
TL;DR: A class of algorithms are developed, parameterized by V, that come within a logarithmic factor of achieving this fundamental delay tradeoff in a multiuser wireless downlink with randomly varying channels.
Abstract: We consider the fundamental delay tradeoffs for minimizing energy expenditure in a multiuser wireless downlink with randomly varying channels. First, we extend the Berry-Gallager bound to a multiuser context, demonstrating that any algorithm that yields average power within O(1/V) of the minimum power required for network stability must also have an average queueing delay of at least Ω(√V). We then develop a class of algorithms, parameterized by V, that come within a logarithmic factor of achieving this fundamental tradeoff. The algorithms overcome an exponential state-space explosion, and can be implemented in real time without a priori knowledge of traffic rates or channel statistics. Further, we discover a "super-fast" scheduling mode that beats the Berry-Gallager bound in the exceptional case when power functions are piecewise linear.

188 citations


Proceedings ArticleDOI
11 Jun 2007
TL;DR: In this article, an exponential separation between one-way quantum and classical communication protocols was shown for two partial Boolean functions, both variants of the Hidden Matching Problem of Bar-Yossef et al. The proofs use the Fourier coefficients inequality of Kahn, Kalai, and Linial.
Abstract: We give an exponential separation between one-way quantum and classical communication protocols for two partial Boolean functions, both of which are variants of the Boolean Hidden Matching Problem of Bar-Yossef et al. Earlier such an exponential separation was known only for a relational version of the Hidden Matching Problem. Our proofs use the Fourier coefficients inequality of Kahn, Kalai, and Linial. We give a number of applications of this separation. In particular, in the bounded-storage model of cryptography we exhibit a scheme that is secure against adversaries with a certain amount of classical storage, but insecure against adversaries with a similar (or even much smaller) amount of quantum storage; in the setting of privacy amplification, we show that there are strong extractors that yield a classically secure key, but are insecure against a quantum adversary.

184 citations


Journal ArticleDOI
TL;DR: This paper proposes a formal model for a network of robotic agents that move and communicate and defines notions of robotic network, control and communication law, coordination task, and time and communication complexity.
Abstract: This paper proposes a formal model for a network of robotic agents that move and communicate. Building on concepts from distributed computation, robotics, and control theory, we define notions of robotic network, control and communication law, coordination task, and time and communication complexity. We illustrate our model and compute the proposed complexity measures in the example of a network of locally connected agents on a circle that agree upon a direction of motion and pursue their immediate neighbors.

160 citations


Proceedings ArticleDOI
24 Sep 2007
TL;DR: In this article, the authors focus on the fundamental problem of finding the optimal encoding for the broadcasted packets that minimizes the overall number of transmissions and show that this problem is NP-complete over GF(2) and establish several fundamental properties of the optimal solution.
Abstract: The advent of network coding presents promising opportunities in many areas of communication and networking. It has been recently shown that network coding technique can significantly increase the overall throughput of wireless networks by taking advantage of their broadcast nature. In wireless networks, each transmitted packet is broadcasted within a certain area and can be overheard by the neighboring nodes. When a node needs to transmit packets, it employs the opportunistic coding approach that uses the knowledge of what the node's neighbors have heard in order to reduce the number of transmissions. With this approach, each transmitted packet is a linear combination of the original packets over a certain finite field. In this paper, we focus on the fundamental problem of finding the optimal encoding for the broadcasted packets that minimizes the overall number of transmissions. We show that this problem is NP-complete over GF(2) and establish several fundamental properties of the optimal solution. We also propose a simple heuristic solution for the problem based on graph coloring and present some empirical results for random settings.
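A toy illustration of the opportunistic coding step described above: a relay XORs packets together over GF(2) so that one broadcast serves several neighbors at once. The packet names and "overheard" sets are invented for the example; finding the best grouping in general is the NP-complete problem the paper studies.

```python
# Hedged sketch: decodability of a single XOR-coded broadcast.
# A neighbor that wants packet w can decode XOR(combo) exactly when it
# already overheard every other packet in the combination, since XORing
# the known packets back out leaves w.

def decodable(combo, wants, has):
    """Can a receiver wanting `wants` with overheard set `has` decode XOR(combo)?"""
    return wants in combo and all(p in has or p == wants for p in combo)

# Relay holds p1..p3; each neighbor wants one packet and overheard the others.
neighbors = [("p1", {"p2", "p3"}), ("p2", {"p1", "p3"}), ("p3", {"p1", "p2"})]
combo = {"p1", "p2", "p3"}            # broadcast p1 XOR p2 XOR p3
assert all(decodable(combo, w, h) for w, h in neighbors)
# One coded transmission replaces three plain ones.
```

The optimization problem is to choose such combinations so that the total number of broadcasts is minimized, which the paper shows is NP-complete already over GF(2).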

159 citations


Journal ArticleDOI
01 Nov 2007
TL;DR: This paper considers a scenario where multiple data sources are willing to run data mining algorithms over the union of their data as long as each data source is guaranteed that its information that does not pertain to another data source will not be revealed.
Abstract: Data mining over multiple data sources has emerged as an important practical problem with applications in different areas such as data streams, data-warehouses, and bioinformatics. Although the data sources are willing to run data mining algorithms in these cases, they do not want to reveal any extra information about their data to other sources due to legal or competition concerns. One possible solution to this problem is to use cryptographic methods. However, the computation and communication complexity of such solutions render them impractical when a large number of data sources are involved. In this paper, we consider a scenario where multiple data sources are willing to run data mining algorithms over the union of their data as long as each data source is guaranteed that its information that does not pertain to another data source will not be revealed. We focus on the classification problem in particular and present an efficient algorithm for building a decision tree over an arbitrary number of distributed sources in a privacy preserving manner using the ID3 algorithm.
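The quantity at the heart of each ID3 step is the information gain of a candidate attribute. The paper computes this jointly over distributed sources without revealing per-source data; the sketch below is the plain, centralized computation for reference, with made-up toy data.

```python
# Hedged sketch: entropy and information gain, the split criterion ID3
# uses at every node of the decision tree. Centralized and non-private;
# the paper's contribution is computing this across sources privately.
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def info_gain(rows, labels, attr):
    total = entropy(labels)
    split = {}
    for row, lab in zip(rows, labels):
        split.setdefault(row[attr], []).append(lab)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in split.values())
    return total - remainder

rows = [{"wind": "weak"}, {"wind": "strong"},
        {"wind": "weak"}, {"wind": "strong"}]
labels = ["yes", "no", "yes", "no"]
# "wind" perfectly separates the labels, so its gain is the full entropy, 1 bit.
assert abs(info_gain(rows, labels, "wind") - 1.0) < 1e-9
```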

Proceedings ArticleDOI
13 Jun 2007
TL;DR: A direct-sum theorem in communication complexity is derived by employing a rejection sampling procedure that relates the relative entropy between two distributions to the communication complexity of generating one distribution from the other.
Abstract: We examine the communication required for generating random variables remotely. One party Alice is given a distribution D, and she has to send a message to Bob, who is then required to generate a value with distribution exactly D. Alice and Bob are allowed to share random bits generated without the knowledge of D. There are two settings based on how the distribution D provided to Alice is chosen. If D is itself chosen randomly from some set (the set and distribution are known in advance) and we wish to minimize the expected communication in order for Alice to generate a value y, with distribution D, then we characterize the communication required in terms of the mutual information between the input to Alice and the output Bob is required to generate. If D is chosen from a set of distributions D, and we wish to devise a protocol so that the expected communication (the randomness comes from the shared random string and Alice's coin tosses) is small for each D ∈ D, then we characterize the communication required in this case in terms of the channel capacity associated with the set D. Our proofs are based on an improved rejection sampling procedure that relates the relative entropy between two distributions to the communication complexity of generating one distribution from the other. As an application of these results, we derive a direct sum theorem in communication complexity that substantially improves the previous such result shown by Jain et al. (2003).
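A minimal discrete rejection sampler shows the mechanism the proof refines: Bob produces samples distributed as D using shared samples from a proposal Q, accepting each with probability D(x) / (M·Q(x)). The expected number of proposals is the envelope constant M, which in refined form is what ties communication to the relative entropy between D and Q. The distributions below are illustrative; this is not the paper's improved procedure.

```python
# Hedged sketch: classical rejection sampling for finite distributions.
# Accepting a Q-sample x with probability D(x) / (M * Q(x)) yields an
# accepted sample distributed exactly as D.
import random

def rejection_sample(D, Q, rng):
    M = max(D[x] / Q[x] for x in D)              # envelope constant
    while True:
        x = rng.choices(list(Q), weights=list(Q.values()))[0]
        if rng.random() < D[x] / (M * Q[x]):
            return x

rng = random.Random(0)                           # fixed seed for repeatability
D = {"a": 0.8, "b": 0.2}
Q = {"a": 0.5, "b": 0.5}
draws = [rejection_sample(D, Q, rng) for _ in range(5000)]
freq_a = draws.count("a") / len(draws)           # should be close to 0.8
```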

Proceedings ArticleDOI
21 Oct 2007
TL;DR: There is an obvious connection between IRSS schemes and the fact that there exist functions with an exponential gap in their communication complexity for k and k-1 rounds, and the scheme implies such a separation which is in several aspects stronger than the previously known ones.
Abstract: We introduce a new primitive called intrusion-resilient secret sharing (IRSS), whose security proof exploits the fact that there exist functions which can be efficiently computed interactively using low communication complexity in k, but not in k-1 rounds. IRSS is a means of sharing a secret message amongst a set of players which comes with a very strong security guarantee. The shares in an IRSS are made artificially large so that it is hard to retrieve them completely, and the reconstruction procedure is interactive, requiring the players to exchange k short messages. The adversaries considered can attack the scheme in rounds, where in each round the adversary chooses some player to corrupt and some function, and retrieves the output of that function applied to the share of the corrupted player. This model captures, for example, computers connected to a network which can occasionally be infected by malicious software like viruses, which can compute any function on the infected machine, but cannot send out a huge amount of data. Using methods from the bounded-retrieval model, we construct an IRSS scheme which is secure against any computationally unbounded adversary as long as the total amount of information retrieved by the adversary is somewhat less than the length of the shares, and the adversary makes at most k-1 corruption rounds (as described above, where k rounds are necessary for reconstruction). We extend our basic scheme in several ways in order to allow the shares sent by the dealer to be short (the players then blow them up locally) and to handle even stronger adversaries who can learn some of the shares completely. As mentioned, there is an obvious connection between IRSS schemes and the fact that there exist functions with an exponential gap in their communication complexity for k and k-1 rounds. Our scheme implies such a separation which is in several aspects stronger than the previously known ones.

Proceedings ArticleDOI
13 Jun 2007
TL;DR: This work combines a query complexity separation due to Beigel with a technique of Razborov that translates the acceptance probability of quantum protocols to polynomials, and studies how small the bias of minimal-degree polynomials that sign-represent Boolean functions needs to be.
Abstract: We present two results for computational models that allow error probabilities close to 1/2. First, most computational complexity classes have an analogous class in communication complexity. The class PP in fact has two: a version with weakly restricted bias called PPcc, and a version with unrestricted bias called UPPcc. Ever since their introduction by Babai, Frankl, and Simon in 1986, it has been open whether these classes are the same. We show that PPcc ⊊ UPPcc. Our proof combines a query complexity separation due to Beigel with a technique of Razborov that translates the acceptance probability of quantum protocols to polynomials. Second, we study how small the bias of minimal-degree polynomials that sign-represent Boolean functions needs to be. We show that the worst-case bias is at worst double-exponentially small in the sign-degree (which was very recently shown to be optimal by Podolskii), while the average-case bias can be made single-exponentially small in the sign-degree (which we show to be close to optimal).

Journal ArticleDOI
TL;DR: In this paper, the Fourier transform was used to prove lower bounds on the bounded error quantum communication complexity of functions, for which a polynomial quantum speedup is possible.
Abstract: We prove lower bounds on the bounded error quantum communication complexity. Our methods are based on the Fourier transform of the considered functions. First we generalize a method for proving classical communication complexity lower bounds developed by Raz [Comput. Complexity, 5 (1995), pp. 205-221] to the quantum case. Applying this method, we give an exponential separation between bounded error quantum communication complexity and nondeterministic quantum communication complexity. We develop several other lower bound methods based on the Fourier transform, notably showing that $\sqrt{\bar{s}(f)/\log n}$, for the average sensitivity $\bar{s}(f)$ of a function $f$, yields a lower bound on the bounded error quantum communication complexity of $f((x \wedge y)\oplus z)$, where $x$ is a Boolean word held by Alice and $y,z$ are Boolean words held by Bob. We then prove the first large lower bounds on the bounded error quantum communication complexity of functions, for which a polynomial quantum speedup is possible. For all the functions we investigate, the only previously applied general lower bound method based on discrepancy yields bounds that are $O(\log n)$.
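The average sensitivity $\bar{s}(f)$ that appears in the lower bound above is the expected number of coordinates whose flip changes $f(x)$ for uniformly random $x$. For small $n$ it can be computed by brute force, as in this illustrative sketch (the example functions are mine, not the paper's):

```python
# Hedged sketch: brute-force average sensitivity of a Boolean function
# f: {0,1}^n -> {0,1}, with inputs encoded as n-bit integers.

def avg_sensitivity(f, n):
    total = 0
    for x in range(1 << n):
        # Count coordinates i whose flip changes the function value.
        total += sum(f(x) != f(x ^ (1 << i)) for i in range(n))
    return total / (1 << n)

def parity(x):
    return bin(x).count("1") % 2

def or_fn(x):
    return int(x != 0)

# Parity is sensitive to every bit at every input; OR only near all-zeros.
assert avg_sensitivity(parity, 4) == 4.0
assert avg_sensitivity(or_fn, 4) == 0.5
```

Functions with high average sensitivity, like parity, are exactly the ones for which this Fourier-based method gives strong quantum communication lower bounds.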

Journal ArticleDOI
TL;DR: A new (efficiently computable) deterministic schedule that uses 2D + Δ log n + O(log^3 n) time units to complete the gossiping task in any radio network with size n, diameter D and max-degree Δ is proposed.
Abstract: This paper concerns the communication primitives of broadcasting (one-to-all communication) and gossiping (all-to-all communication) in known topology radio networks, i.e., where for each primitive the schedule of transmissions is precomputed in advance based on full knowledge about the size and the topology of the network. The first part of the paper examines the two communication primitives in arbitrary graphs. In particular, for the broadcast task we deliver two new results: a deterministic efficient algorithm for computing a radio schedule of length D + O(log^3 n), and a randomized algorithm for computing a radio schedule of length D + O(log^2 n). These results improve on the best currently known D + O(log^4 n) time schedule due to Elkin and Kortsarz (Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms, pp. 222–231, 2005). Later we propose a new (efficiently computable) deterministic schedule that uses 2D + Δ log n + O(log^3 n) time units to complete the gossiping task in any radio network with size n, diameter D and max-degree Δ. Our new schedule improves and simplifies the currently best known gossiping schedule, requiring time $$O(D+\sqrt[{i+2}]{D}\Delta\log^{i+1} n)$$, for any network with diameter D = Ω(log^{i+4} n), where i is an arbitrary integer constant i ≥ 0, see Gąsieniec et al. (Proceedings of the 11th International Colloquium on Structural Information and Communication Complexity, vol. 3104, pp. 173–184, 2004). The second part of the paper focuses on radio communication in planar graphs, devising a new broadcasting schedule using fewer than 3D time slots. This result improves, for small values of D, on the currently best known D + O(log^3 n) time schedule proposed by Elkin and Kortsarz (Proceedings of the 16th ACM-SIAM Symposium on Discrete Algorithms, pp. 222–231, 2005).
Our new algorithm should also be seen as a separation result between planar and general graphs with small diameter, due to the polylogarithmic inapproximability result for general graphs by Elkin and Kortsarz (Proceedings of the 7th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, vol. 3122, pp. 105–116, 2004; J. Algorithms 52(1), 8–25, 2004).

Proceedings ArticleDOI
10 Oct 2007
TL;DR: Experimental evaluation presented in the paper shows that the new strategy has a low overhead and that it is able to support a large number of faults while maintaining high reliability.
Abstract: There is an inherent trade-off between epidemic and deterministic tree-based broadcast primitives. Tree-based approaches have a small message complexity in steady-state but are very fragile in the presence of faults. Gossip, or epidemic, protocols have a higher message complexity but also offer much higher resilience. This paper proposes an integrated broadcast scheme that combines both approaches. We use a low-cost scheme to build and maintain broadcast trees embedded on a gossip-based overlay. The protocol sends the message payload preferably via tree branches but uses the remaining links of the gossip overlay for fast recovery and expedite tree healing. Experimental evaluation presented in the paper shows that our new strategy has a low overhead and that it is able to support a large number of faults while maintaining high reliability.
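The message-complexity side of this trade-off can be put in rough numbers. The accounting below is a deliberately simplified, optimistic model of push gossip (every informed node pushes to `fanout` fresh peers per round), not the paper's protocol, and the parameters are illustrative.

```python
# Hedged sketch: message counts for a spanning-tree broadcast vs. push
# gossip. A tree delivers with n - 1 messages but loses whole subtrees
# when an interior node fails; gossip pays more messages for resilience.

def tree_messages(n):
    return n - 1                      # one message per non-root node

def gossip_messages(n, fanout, rounds):
    informed = 1
    total = 0
    for _ in range(rounds):
        total += informed * fanout    # every informed node pushes
        informed = min(n, informed * (1 + fanout))  # optimistic growth
    return total

n = 1024
assert tree_messages(n) == 1023
assert gossip_messages(n, fanout=4, rounds=5) > tree_messages(n)
```

The combined scheme in the paper aims at roughly tree-like cost in steady state while keeping the gossip links available for repair.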

Proceedings ArticleDOI
11 Jun 2007
TL;DR: It is proved that AC0 cannot be efficiently simulated by MAJ ∘ MAJ circuits, matching Allender's classic result that AC0 can be simulated by MAJ ∘ MAJ ∘ MAJ circuits of quasipolynomial size, and a novel technique for communication lower bounds is developed, the Degree/Discrepancy Theorem, which translates lower bounds on the threshold degree of a Boolean function into upper bounds on the discrepancy of a related function.
Abstract: We prove that AC0 cannot be efficiently simulated by MAJ ∘ MAJ circuits. Namely, we construct an AC0 circuit of depth 3 that requires MAJ ∘ MAJ circuits of size 2^{Ω(n^{1/5})}. This matches Allender's classic result that AC0 can be simulated by MAJ ∘ MAJ ∘ MAJ circuits of quasipolynomial size. Our proof is based on communication complexity. To obtain the above result, we develop a novel technique for communication lower bounds, the Degree/Discrepancy Theorem. This technique is a separate contribution of our paper. It translates lower bounds on the threshold degree of a Boolean function into upper bounds on the discrepancy of a related function. Upper bounds on the discrepancy, in turn, immediately imply communication lower bounds as well as lower bounds against threshold circuits. As part of our proof, we use the Degree/Discrepancy Theorem to obtain an explicit AC0 circuit of depth 3 that has discrepancy 2^{-Ω(n^{1/5})} under an explicit distribution. This yields the first known AC0 function with exponentially small discrepancy. Finally, we apply our work to learning theory, showing that polynomial-size DNF and CNF formulas have margin complexity 2^{Ω(n^{1/5})}.

Journal ArticleDOI
TL;DR: In the model of a common random string it is proved that O(k) communication bits are sufficient, regardless of n, and in the model of private random coins O(k + log log n) bits suffice.
Abstract: We study the communication complexity of the disjointness function, in which each of two players holds a k-subset of a universe of size n and the goal is to determine whether the sets are disjoint. In the model of a common random string we prove that O(k) communication bits are sufficient, regardless of n. In the model of private random coins O(k + log log n) bits suffice. Both results are asymptotically tight.
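For contrast with the O(k) bound above, the naive protocol has Alice ship her entire set, costing about k·⌈log2 n⌉ bits. The sketch below implements that baseline and its bit count; the element values are arbitrary examples.

```python
# Hedged sketch: the trivial k-subset disjointness protocol and its
# communication cost. Alice encodes each of her k elements with
# ceil(log2 n) bits; Bob then answers locally. The paper's
# shared-randomness protocol needs only O(k) bits, independent of n.
from math import ceil, log2

def naive_disjointness(A, B, n):
    bits_sent = len(A) * ceil(log2(n))   # Alice's whole message
    disjoint = not (set(A) & set(B))     # Bob decides from the message
    return disjoint, bits_sent

n, k = 1_000_000, 10
A = list(range(0, k))
B = list(range(k, 2 * k))
disjoint, bits = naive_disjointness(A, B, n)
assert disjoint and bits == 10 * 20      # 20 bits per element when n = 10^6
```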

Book ChapterDOI
21 Feb 2007
TL;DR: An efficient communication-optimal two-round PSMT protocol for messages of length polynomial in n that is almost optimally resilient in that it requires a number of channels n ≥ (2 + ε)t, for any arbitrarily small constant ε > 0.
Abstract: Perfectly secure message transmission (PSMT), a problem formulated by Dolev, Dwork, Waarts and Yung, involves a sender S and a recipient R who are connected by n synchronous channels of which up to t may be corrupted by an active adversary. The goal is to transmit, with perfect security, a message from S to R. PSMT is achievable if and only if n > 2t. For the case n > 2t, the lower bound on the number of communication rounds between S and R required for PSMT is 2, and the only known efficient (i.e., polynomial in n) two-round protocol involves a communication complexity of O(n^3 l) bits, where l is the length of the message. A recent solution by Agarwal, Cramer and de Haan is provably communication-optimal by achieving an asymptotic communication complexity of O(nl) bits; however, it requires the messages to be exponentially large, i.e., l = ω(2^n). In this paper we present an efficient communication-optimal two-round PSMT protocol for messages of length polynomial in n that is almost optimally resilient in that it requires a number of channels n ≥ (2 + ε)t, for any arbitrarily small constant ε > 0. In this case, the optimal communication complexity is O(l) bits.

Journal ArticleDOI
TL;DR: An in-depth analysis of the media distortion characteristics allows us to define a low complexity algorithm for an optimal flow rate allocation in multipath network scenarios, and shows that a greedy allocation of rate along paths with increasing error probability leads to an optimal solution.
Abstract: We address the problem of joint path selection and source rate allocation in order to optimize the media specific quality of service in streaming of stored video sequences on multipath networks. An optimization problem is proposed in order to minimize the end-to-end distortion, which depends on video sequence dependent parameters, and network properties. An in-depth analysis of the media distortion characteristics allows us to define a low complexity algorithm for an optimal flow rate allocation in multipath network scenarios. In particular, we show that a greedy allocation of rate along paths with increasing error probability leads to an optimal solution. We argue that a network path shall not be chosen for transmission, unless all other available paths with lower error probability have been chosen. Moreover, the chosen paths should be used at their maximum available end-to-end bandwidth. Simulation results show that the optimal flow rate allocation carefully adapts the total streaming rate and the number of chosen paths, to the end-to-end transmission error probability. In many scenarios, the optimal rate allocation provides more than 20% improvement in received video quality, compared to heuristic-based algorithms. This motivates its use in multipath networks, where it optimizes media specific quality of service, and simultaneously saves network resources at the price of a very low computational complexity.
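The allocation rule the paper proves optimal can be sketched directly: sort paths by increasing error probability and fill each chosen path to its full available bandwidth before touching the next, stopping once the target streaming rate is met. This is a toy version; the actual criterion in the paper also weighs the video distortion model, and the numbers below are invented.

```python
# Hedged sketch: greedy flow rate allocation along paths ordered by
# increasing error probability. Each selected path is used at its full
# available bandwidth; only the last selected path may be partially used.

def greedy_allocate(paths, target_rate):
    """paths: list of (bandwidth, error_prob); returns rate per path."""
    alloc = [0.0] * len(paths)
    order = sorted(range(len(paths)), key=lambda i: paths[i][1])
    remaining = target_rate
    for i in order:
        if remaining <= 0:
            break
        alloc[i] = min(paths[i][0], remaining)   # fill best path first
        remaining -= alloc[i]
    return alloc

paths = [(300.0, 0.05), (500.0, 0.01), (400.0, 0.20)]   # (kbps, loss prob)
# The 1%-loss path is filled completely before the 5%-loss path is used;
# the 20%-loss path stays idle because the target is already met.
assert greedy_allocate(paths, 700.0) == [200.0, 500.0, 0.0]
```

This matches the paper's qualitative claim: a path is never chosen unless all paths with lower error probability are already used at full bandwidth.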

Posted Content
TL;DR: There is an obvious connection between IRSS schemes and the fact that there exist functions with an exponential gap in their communication complexity for k and k-1 rounds, and the scheme implies such a separation which is in several aspects stronger than the previously known ones.
Abstract: We introduce a new primitive called Intrusion-Resilient Secret Sharing (IRSS), whose security proof exploits the fact that there exist functions which can be efficiently computed interactively using low communication complexity in k, but not in k − 1 rounds. IRSS is a means of sharing a secret message amongst a set of players which comes with a very strong security guarantee. The shares in an IRSS are made artificially large so that it is hard to retrieve them completely, and the reconstruction procedure is interactive, requiring the players to exchange k short messages. The adversaries considered can attack the scheme in rounds, where in each round the adversary chooses some player to corrupt and some function, and retrieves the output of that function applied to the share of the corrupted player. This model captures, for example, computers connected to a network which can occasionally be infected by malicious software like viruses, which can compute any function on the infected machine, but cannot send out a huge amount of data. Using methods from the Bounded-Retrieval Model, we construct an IRSS scheme which is secure against any computationally unbounded adversary as long as the total amount of information retrieved by the adversary is somewhat less than the length of the shares, and the adversary makes at most k−1 corruption rounds (as described above, where k rounds are necessary for reconstruction). We extend our basic scheme in several ways in order to allow the shares sent by the dealer to be short (the players then blow them up locally) and to handle even stronger adversaries who can learn some of the shares completely. As mentioned, there is an obvious connection between IRSS schemes and the fact that there exist functions with an exponential gap in their communication complexity for k and k − 1 rounds. Our scheme implies such a separation which is in several aspects stronger than the previously known ones.

Proceedings ArticleDOI
21 Oct 2007
TL;DR: It is proved that depth three circuits consisting of a MAJORITY gate at the output, gates computing arbitrary symmetric function at the second layer and arbitrary gates of bounded fan-in at the base layer cannot simulate the circuit class AC0 in sub-exponential size.
Abstract: We develop a new technique for proving lower bounds on the randomized communication complexity of boolean functions in the multiparty 'number on the forehead' model. Our method is based on the notion of voting polynomial degree of functions and extends the degree-discrepancy lemma in the recent work of Sherstov (2007). Using this, we prove that depth-three circuits consisting of a MAJORITY gate at the output, gates computing arbitrary symmetric functions at the second layer, and arbitrary gates of bounded fan-in at the base layer, i.e. circuits of type MAJ ∘ SYMM ∘ ANY_{O(1)}, cannot simulate the circuit class AC0 in sub-exponential size. Further, even if the fan-in of the bottom ANY gates is increased to o(log log n), such circuits cannot simulate AC0 in quasi-polynomial size. This is in contrast to the classical result of Yao and Beigel-Tarui that shows that such circuits, having only MAJORITY gates, can simulate the class ACC0 in quasi-polynomial size when the bottom fan-in is increased to poly-logarithmic size. In the second part, we simplify the arguments in the breakthrough work of Bourgain (2005) for obtaining exponentially small upper bounds on the correlation between the boolean function MODq and functions represented by polynomials of small degree over Zm, when m, q ≥ 2 are co-prime integers. Our calculation also shows similarity with techniques used to estimate the discrepancy of functions in the multiparty communication setting. This results in a slight improvement of the estimates of Bourgain et al. (2005). It is known that such estimates imply that circuits of type MAJ ∘ MOD_m ∘ AND_{ε log n} cannot compute the MODq function in sub-exponential size. It remains a major open question to determine if such circuits can simulate ACC0 in polynomial size when the bottom fan-in is increased to poly-logarithmic size.

Proceedings ArticleDOI
01 May 2007
TL;DR: Two different algorithms are presented whose performance is arbitrarily close to that of maximal schedules, yet which have low complexity because they do not necessarily attempt to find maximal schedules.
Abstract: We consider the problem of distributed scheduling in wireless networks. We present two different algorithms whose performance is arbitrarily close to that of maximal schedules, but which have low complexity because they do not necessarily attempt to find maximal schedules. The first algorithm requires each link to collect local queue-length information in its neighborhood, and its complexity is independent of the size and topology of the network. The second algorithm is presented for the node-exclusive interference model; it does not require nodes to collect queue-length information even in their local neighborhoods, and its complexity depends only on the maximum node degree in the network.
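For intuition, a maximal schedule under the node-exclusive interference model is simply a maximal matching: no two scheduled links share a node, and no further backlogged link can be added. The sketch below is an illustrative centralized greedy construction (the longest-queue-first order is a heuristic assumption, not the paper's distributed algorithm):

```python
def greedy_maximal_schedule(links, queues):
    """Greedily build a maximal schedule under the node-exclusive
    (primary) interference model: no two scheduled links may share
    a node.  Links with longer queues are considered first."""
    schedule = []
    busy = set()  # nodes already used by a scheduled link
    # Longest-queue-first is a heuristic; any order yields a maximal schedule.
    for link in sorted(links, key=lambda l: -queues[l]):
        u, v = link
        if u not in busy and v not in busy and queues[link] > 0:
            schedule.append(link)
            busy.update((u, v))
    return schedule

# Toy 4-node ring: links are node pairs, values are queue lengths.
queues = {(1, 2): 5, (2, 3): 3, (3, 4): 4, (1, 4): 1}
print(greedy_maximal_schedule(list(queues), queues))  # [(1, 2), (3, 4)]
```

The returned schedule is maximal: every remaining backlogged link, (2, 3) and (1, 4), conflicts with a scheduled one.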

Journal ArticleDOI
Jung-Hoon Lee1, Sungeun Lee1, Keukjoon Bang1, Sungkeun Cha1, Daesik Hong1 
TL;DR: A new carrier frequency offset estimator based on the estimation of signal parameters via rotational invariance technique (ESPRIT) is proposed for interleaved orthogonal frequency division multiple access uplink systems.
Abstract: In this paper, a new carrier frequency offset (CFO) estimator based on the estimation of signal parameters via rotational invariance technique (ESPRIT) is proposed for interleaved orthogonal frequency division multiple access uplink systems. This new estimator performs better in the relatively low signal-to-noise ratio region than a CFO estimator recently proposed by Cao and Yao, and has a much lower computational complexity as well. Simulation results show several performance examples for the proposed CFO estimator which demonstrate its advantages over Cao and Yao's version.
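To illustrate the rotational-invariance idea behind ESPRIT (a generic single-tone sketch, not the paper's OFDMA CFO estimator), note that shifting a complex exponential by one sample multiplies it by e^{j*phi}; the principal singular vector of a Hankel data matrix inherits this structure, so the phase shift can be recovered by least squares:

```python
import numpy as np

def esprit_single_tone(x, L=8):
    """Least-squares ESPRIT estimate of the normalized frequency of a
    single noiseless complex exponential x[n] = exp(j*2*pi*f*n),
    with f in (-0.5, 0.5)."""
    N = len(x)
    # Hankel data matrix whose columns are length-L signal windows.
    X = np.array([x[i:i + L] for i in range(N - L + 1)]).T
    # Principal left singular vector ~ [1, e^{j*phi}, ..., e^{j*(L-1)*phi}].
    u = np.linalg.svd(X)[0][:, 0]
    # Rotational invariance: u[1:] = e^{j*phi} * u[:-1]; solve by least squares.
    phi = np.vdot(u[:-1], u[1:]) / np.vdot(u[:-1], u[:-1])
    return np.angle(phi) / (2 * np.pi)

n = np.arange(64)
f_true = 0.123
x = np.exp(2j * np.pi * f_true * n)
print(round(esprit_single_tone(x), 6))  # recovers f_true in the noiseless case
```

In the interleaved OFDMA setting, the CFO of each user appears as such a phase-rotation parameter, which is what makes a subspace estimator of this type applicable.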

Patent
10 Oct 2007
TL;DR: In this article, the authors present a method for synthesizing application-specific NoC architectures by defining a communication path, that is, a sequence of switches to be traversed to connect the aforementioned pair of communicating elements, calculating metrics as affected by the need to render said path into physical connectivity.
Abstract: To tackle the increasing communication complexity of multi-core systems, scalable Networks on Chips (NoCs) are needed to interconnect the processor, memory and hardware cores of the systems. For the use of NoCs to be feasible in today's industrial designs, a custom-tailored, application-specific architecture that satisfies the objectives and constraints of the targeted application domain is required. In this work we present a method for synthesizing such application-specific NoC architectures. This best topology is achieved by a method to design NoC-based communication systems for connecting on-chip components in a multicore system, said system comprising several elements such as processors, hardware blocks, and memories communicating through the communication system, said communication system comprising at least switches, said method comprising the steps of:
- obtaining predefined communication characteristics modelling the applications running on the multicore system,
- establishing the number and configuration of switches to connect the elements,
- establishing physical connectivity between the elements and the switches,
- for each of at least two pairs of communicating elements:
(a) defining a communication path, that is, a sequence of switches to be traversed to connect the aforementioned pair of communicating elements,
(b) calculating metrics as affected by the need to render said path into physical connectivity, said metrics being selected among one or a combination of: power consumption of the involved switches, area of the involved switches, number of inputs and outputs of the involved switches, total length of wires used, maximum possible speed of operation of the system, and number of switches to be traversed, taking into account any previously defined physical connectivity,
(c) iterating steps (a) and (b) for a plurality of possible paths,
(d) choosing the path having the optimal metrics,
(e) establishing any missing physical connectivity between the switches so that the selected optimal path occurs across physically connected switches.
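The path-selection steps (a)-(d) amount to a minimum-cost path search over the switch graph, where the edge weight stands in for whichever metric (power, area, wire length, hop count) is being optimized. A minimal sketch, assuming a single scalar cost per hop rather than the patent's combined metrics:

```python
import heapq

def min_cost_path(edges, src, dst):
    """Dijkstra over a switch graph; edge weights stand in for a
    per-hop metric such as switch power or wire length."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float('inf'), []

# Hypothetical switches s0..s3 with per-hop costs.
edges = [('s0', 's1', 2), ('s1', 's3', 2), ('s0', 's2', 1), ('s2', 's3', 4)]
print(min_cost_path(edges, 's0', 's3'))  # (4, ['s0', 's1', 's3'])
```

Step (e) then corresponds to instantiating any edges of the winning path that are not yet physically wired, after which those edges become free (or cheaper) for subsequent pairs.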

Book ChapterDOI
16 Apr 2007
TL;DR: A HVZK argument based on homomorphic integer commitments is suggested, which improves on round complexity, communication complexity, and computational complexity in comparison with the state of the art.
Abstract: A shuffle is a permutation and rerandomization of a set of ciphertexts. Among other things, it can be used to construct mix-nets that are used in anonymization protocols and voting schemes. While shuffling is easy, it is hard for an outsider to verify that a shuffle has been performed correctly. We suggest two efficient honest verifier zero-knowledge (HVZK) arguments for correctness of a shuffle. Our goal is to minimize round complexity while keeping communication and computational complexity low. The two schemes we suggest are both 3-move HVZK arguments for correctness of a shuffle. We first suggest a HVZK argument based on homomorphic integer commitments, improving on round complexity, communication complexity, and computational complexity in comparison with the state of the art. The second HVZK argument is based on homomorphic commitments over finite fields. Here we improve on the computational complexity and communication complexity when shuffling large ciphertexts.
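The shuffle operation itself (as opposed to the HVZK argument proving its correctness) is easy to sketch. Below is a toy ElGamal instance over a tiny group, purely for illustration; real mix-nets use large prime-order groups:

```python
import random

# Toy ElGamal over Z_23^* with generator g = 5 (illustration only).
p, g = 23, 5
x = 6                      # secret key
h = pow(g, x, p)           # public key

def enc(m, r):    return (pow(g, r, p), m * pow(h, r, p) % p)
def rerand(c, s): return (c[0] * pow(g, s, p) % p, c[1] * pow(h, s, p) % p)
def dec(c):       return c[1] * pow(c[0], p - 1 - x, p) % p

def shuffle(cts, rng):
    """A shuffle = rerandomization of every ciphertext + random permutation."""
    out = [rerand(c, rng.randrange(1, p - 1)) for c in cts]
    rng.shuffle(out)
    return out

rng = random.Random(0)
msgs = [3, 7, 12]
cts = [enc(m, rng.randrange(1, p - 1)) for m in msgs]
shuffled = shuffle(cts, rng)
print(sorted(dec(c) for c in shuffled))  # [3, 7, 12]
```

The output ciphertexts decrypt to the same multiset of plaintexts, yet look unrelated to the inputs; this is exactly the statement the paper's 3-move arguments let the shuffler prove without revealing the permutation or the rerandomization factors.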

Proceedings ArticleDOI
07 Jan 2007
TL;DR: In this paper, it was shown that concatenated codes can correct errors up to the Shannon capacity even when the errors are only slightly random, namely t-wise independently distributed for t roughly ω(log n).
Abstract: Communicating over a noisy channel is typically much easier when errors are drawn from a fixed, known distribution than when they are chosen adversarially. This paper looks at how one can use schemes designed for random errors in an adversarial context, at the cost of few additional random bits and without relying on unproven computational assumptions. The basic approach is to permute the positions of a bit string using a permutation drawn from a t-wise independent family, where t = o(n). This leads to several new results:
• We show that concatenated codes can correct errors up to the Shannon capacity even when the errors are only slightly random --- it is sufficient that they be t-wise independently distributed, for t roughly ω(log n).
• We construct computationally efficient information reconciliation protocols correcting pn adversarial binary Hamming errors with optimal communication complexity and entropy loss n(h(p) + o(1)) bits, where n is the length of the strings and h(·) is the binary entropy function. Information reconciliation protocols allow cooperating parties to correct errors in a shared string. They are important tools in two applications: first, for dealing with noisy secrets in cryptography; second, for synchronizing remote copies of large files. Entropy loss measures how much information is leaked to an eavesdropper listening in on the protocol.
• We improve the randomness complexity (key length) of efficiently decodable capacity-approaching private codes from Θ(n log n) to n + o(n). We also present a simplified proof of an existential result on private codes due to Langberg (FOCS '04).
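The permute-then-decode idea can be demonstrated with a toy code. The sketch below substitutes a simple repetition code for the concatenated codes of the paper, and a fully random shared permutation for the t-wise independent family; it only illustrates why scrambling positions defangs a fixed adversarial error pattern:

```python
import random

def encode(bits, r=5):       # r-fold repetition code (stand-in for a real code)
    return [b for b in bits for _ in range(r)]

def decode(code, r=5):       # majority vote per block; tolerates < r/2 flips
    return [int(sum(code[i:i + r]) > r // 2) for i in range(0, len(code), r)]

def permute(seq, pi):        # send position j to slot holding pi[j]
    return [seq[i] for i in pi]

def unpermute(seq, pi):
    out = [0] * len(seq)
    for j, i in enumerate(pi):
        out[i] = seq[j]
    return out

rng = random.Random(1)
msg = [rng.randrange(2) for _ in range(20)]
pi = list(range(100)); rng.shuffle(pi)      # shared random permutation

sent = permute(encode(msg), pi)
sent[0] ^= 1; sent[1] ^= 1                  # adversary flips fixed positions
received = decode(unpermute(sent, pi))
print(received == msg)  # True
```

With only 2 flips against 5-fold repetition, decoding always succeeds here; the paper's point is the quantitative version: a t-wise independent permutation with t roughly ω(log n) already spreads pn adversarial errors well enough for capacity-achieving codes designed for random errors.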

Proceedings ArticleDOI
12 Aug 2007
TL;DR: This paper improves the communication complexity of Mazieres and Shasha's fork-linearizable storage access protocol with n clients from Ω(n2) to O(n).
Abstract: When data is stored on a faulty server that is accessed concurrently by multiple clients, the server may present inconsistent data to different clients. For example, the server might complete a write operation of one client, but respond with stale data to another client. Mazieres and Shasha (PODC 2002) introduced the notion of fork-consistency, also called fork-linearizability, which ensures that the operations seen by every client are linearizable and guarantees that if the server causes the views of two clients to differ in a single operation, they may never again see each other's updates after that without the server being exposed as faulty. In this paper, we improve the communication complexity of their fork-linearizable storage access protocol with n clients from Ω(n2) to O(n). We also prove that in every such protocol, a reader must wait for a concurrent writer. This explains a seeming limitation of their and of our improved protocol. Furthermore, we give novel characterizations of fork-linearizability and prove that it is neither stronger nor weaker than sequential consistency.
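One way to build intuition for the fork guarantee (a hypothetical version-vector sketch, not the protocol of the paper) is that under fork-linearizability the views of any two correct clients must remain comparable; once the server splits them, their version vectors become incomparable and out-of-band comparison exposes the fork:

```python
def dominates(u, v):
    """Version vector u reflects at least every operation counted in v."""
    return all(u.get(k, 0) >= n for k, n in v.items())

def forked(u, v):
    """Views of two correct clients must be comparable under
    fork-linearizability; incomparable vectors expose a faulty server."""
    return not dominates(u, v) and not dominates(v, u)

# Both clients saw the same history: comparable, no fork detected.
print(forked({'A': 2, 'B': 1}, {'A': 2, 'B': 1}))   # False
# Server hid A's 2nd op from B and B's op from A: views diverged.
print(forked({'A': 2, 'B': 0}, {'A': 1, 'B': 1}))   # True
```

The "join forever" property in the abstract corresponds to the fact that once two vectors are incomparable, no honest continuation can make one dominate the other again without revealing the hidden operations.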

Proceedings ArticleDOI
11 Jun 2007
TL;DR: A new method to derive lower bounds on randomized and quantum communication complexity based on factorization norms is introduced; it gives access to several powerful tools from Banach space theory, such as normed-space duality and Grothendieck's inequality, and it implies that the known bound on the communication saved by entanglement is tight almost always.
Abstract: We introduce a new method to derive lower bounds on randomized and quantum communication complexity. Our method is based on factorization norms, a notion from Banach space theory. This approach gives us access to several powerful tools from this area, such as normed-space duality and Grothendieck's inequality. This extends the arsenal of methods for deriving lower bounds in communication complexity. As we show, our method subsumes most of the previously known general approaches to lower bounds on communication complexity. Moreover, we extend all (but one) of these lower bounds to the realm of quantum communication complexity with entanglement. Our results also shed some light on the question of how much communication can be saved by using entanglement. It is known that entanglement can save one of every two qubits, and examples for which this is tight are also known. It follows from our results that this bound on the saving in communication is tight almost always.
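For context, the factorization norm at the heart of this line of work can be sketched as follows (the standard γ2 definition and an Ω-form of the resulting bound; constants and the exact dependence on the error parameter are omitted and should be checked against the paper):

```latex
% gamma_2 norm of a matrix M, over all factorizations M = XY,
% where x_i are the rows of X and y_j the columns of Y:
\gamma_2(M) \;=\; \min_{M = XY}\,
  \Big(\max_i \lVert x_i \rVert_2\Big)\Big(\max_j \lVert y_j \rVert_2\Big).

% Approximate version for sign matrices, with approximation factor \alpha:
\gamma_2^{\alpha}(M) \;=\;
  \min\bigl\{\gamma_2(M') : 1 \le M_{ij} M'_{ij} \le \alpha
  \ \ \forall i,j\bigr\}.

% The randomized communication complexity of f is then bounded below by
R_{\varepsilon}(f) \;=\; \Omega\!\bigl(\log \gamma_2^{\alpha}(M_f)\bigr),
\qquad \alpha = \alpha(\varepsilon),
```

where \(M_f\) is the sign matrix of \(f\). Duality for these norms is what connects the bound to Grothendieck's inequality and to the discrepancy method it subsumes.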