Topic

Communication complexity

About: Communication complexity is a research topic. Over its lifetime, 3,870 publications have been published on this topic, receiving 105,832 citations.


Papers
Proceedings ArticleDOI
23 May 1998
TL;DR: The multiparty communication model of Chandra, Furst, and Lipton (1983) is generalized to functions with b-bit output, and new families of explicit boolean functions for which Ω(n/c^k) bits of communication are required to find the “missing bit” are constructed.
Abstract: We generalize the multiparty communication model of Chandra, Furst, and Lipton (1983) to functions with b-bit output (b = 1 in the CFL model). We allow the players to receive up to b − 1 bits of information from an all-powerful benevolent Helper who can see all the input. Extending results of Babai, Nisan, and Szegedy (1992) to this model, we construct families of explicit functions for which Ω(n/c^k) bits of communication are required to find the “missing bit,” where n is the length of each player’s input and k is the number of players. As a consequence we settle the problem of separating the one-way vs. multiround communication complexities (in the CFL sense) for k ≤ (1 − ε) log n players, extending a result of Nisan and Wigderson (1991) who demonstrated this separation for k = 3 players. As a byproduct we obtain Ω(n/c^k) lower bounds for the multiparty complexity (in the CFL sense) of new families of explicit boolean functions (not derivable from BNS).

44 citations

Proceedings ArticleDOI
04 Jun 2007
TL;DR: A new variable step-size LMS (VSS-LMS) algorithm is proposed by constructing a nonlinear function relating the step-size factor μ to the error signal e(n), which effectively improves the convergence rate and steady-state error performance.
Abstract: In this paper, a relation among the theoretical optimum step size, the error signal, and the input signal is introduced, and based on this relation a new variable step-size least mean square (VSS-LMS) algorithm for adaptive filtering is proposed. The proposed algorithm has lower computational complexity and converges faster to a lower steady-state error than existing algorithms, and it eliminates many of the shortcomings that result from changing the step size during adaptation. Computer simulation results are included to support the theoretical analysis of the proposed algorithm.
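The abstract does not spell out the nonlinear function that maps the error signal to the step size, so the following is only a minimal Python sketch of a generic variable step-size LMS filter, shown to make the update loop concrete. The exponential nonlinearity, the constants mu_max, alpha, and beta, and the filter length are illustrative assumptions, not the exact algorithm from the paper.

```python
import numpy as np

def vss_lms(x, d, num_taps=8, mu_max=0.05, alpha=4.0, beta=0.97):
    """Generic variable step-size LMS sketch (illustrative, not the paper's exact rule).

    x : 1-D input signal; d : desired signal of the same length.
    The step size mu(n) is driven by a smoothed error power through an assumed
    exponential nonlinearity: small error -> small step (low steady-state
    misadjustment), large error -> large step (fast convergence).
    """
    x, d = np.asarray(x, float), np.asarray(d, float)
    w = np.zeros(num_taps)              # adaptive filter weights
    e = np.zeros(len(x))                # error signal e(n)
    p = 0.0                             # low-pass filtered error power
    for n in range(num_taps, len(x)):
        u = x[n - num_taps:n][::-1]     # most recent input vector
        y = w @ u                       # filter output
        e[n] = d[n] - y                 # instantaneous error
        p = beta * p + (1 - beta) * e[n] ** 2      # smooth e(n)^2
        mu = mu_max * (1.0 - np.exp(-alpha * p))   # assumed nonlinearity
        w += mu * e[n] * u              # standard LMS weight update
    return w, e
```

In a system-identification test one would generate d by passing x through an unknown filter plus noise and check that e(n) settles faster, and to a lower floor, than with a fixed step size.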

44 citations

Journal ArticleDOI
TL;DR: A new multicast key distribution scheme whose computation complexity is significantly reduced: instead of using conventional encryption algorithms, it employs MDS codes, a class of error control codes, to distribute multicast keys dynamically.
Abstract: Efficient key distribution is an important problem for secure group communications. The communication and storage complexity of the multicast key distribution problem has been studied extensively. In this paper, we propose a new multicast key distribution scheme whose computation complexity is significantly reduced. Instead of using conventional encryption algorithms, the scheme employs MDS codes, a class of error control codes, to distribute multicast keys dynamically. This scheme drastically reduces the computation load of each group member compared to existing schemes employing traditional encryption algorithms. Such a scheme is desirable for many wireless applications where portable devices or sensors need to reduce their computation as much as possible due to battery power limitations. Easily combined with any key-tree-based scheme, this scheme provides much lower computation complexity while maintaining low and balanced communication complexity and storage complexity for secure dynamic multicast key distribution.
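The abstract describes the role of MDS codes only at a high level, so here is a minimal, self-contained Python sketch of the underlying MDS building block (an [n, k] Reed-Solomon/Shamir-style construction over a prime field), not the key distribution protocol from the paper. The prime modulus, the parameters k and n, and the convention of embedding the key as the polynomial's constant term are illustrative assumptions.

```python
import random

P = 2**31 - 1  # prime modulus for GF(P); an illustrative choice, not from the paper

def mds_encode(key, k, n):
    """Embed the key as the constant term of a random degree-(k-1) polynomial
    and evaluate it at points 1..n, giving an [n, k] MDS (Reed-Solomon) codeword."""
    coeffs = [key % P] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def mds_recover(symbols):
    """Recover the key (the polynomial's value at 0) from any k codeword symbols
    by Lagrange interpolation over GF(P): plain field arithmetic, no decryption."""
    key = 0
    for i, (xi, yi) in enumerate(symbols):
        num, den = 1, 1
        for j, (xj, _) in enumerate(symbols):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        key = (key + yi * num * pow(den, -1, P)) % P
    return key

# Any k = 3 of the n = 5 codeword symbols determine the key.
symbols = mds_encode(key=123456789, k=3, n=5)
assert mds_recover(symbols[1:4]) == 123456789
```

The MDS property is exactly what the abstract leans on: any k of the n symbols determine the codeword, so a group member can reconstruct a new key with cheap erasure-decoding arithmetic instead of running a conventional decryption algorithm.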

44 citations

Proceedings ArticleDOI
11 Jun 2007
TL;DR: This work shows a near-linear lower bound on the internal memory used by any randomized algorithm with 2-sided error that is allowed to have o(log N/log log N) passes over the streams.
Abstract: Motivated by the capabilities of modern storage architectures, we consider the following generalization of the data stream model where the algorithm has sequential access to multiple streams. Unlike the data stream model, where the stream is read only, in this new model (introduced in [8,9]) the algorithms can also write onto streams. There is no limit on the size of the streams, but the number of passes made on the streams is restricted. On the other hand, the amount of internal memory used by the algorithm is scarce, similar to the data stream model. We resolve the main open problem in [7] of proving lower bounds in this model for algorithms that are allowed to have 2-sided error. Previously, such lower bounds were shown only for deterministic and 1-sided error randomized algorithms [9,7]. We consider the classical set disjointness problem, which has proved invaluable for deriving lower bounds for many other problems involving data streams and other randomized models of computation. For this problem, we show a near-linear lower bound on the size of the internal memory used by a randomized algorithm with 2-sided error that is allowed to have o(log N/log log N) passes over the streams. This bound is almost optimal, since there is a simple algorithm that can solve this problem using logarithmic memory if the number of passes over the streams is logarithmic. Applications include near-linear lower bounds on the internal memory for well-known problems in the literature: (1) approximately counting the number of distinct elements in the input (F0); (2) approximating the frequency of the mode of an input sequence (F∞*); (3) computing the join of two relations; and (4) deciding if some node of an XML document matches an XQuery (or XPath) query. Our techniques involve a novel direct-sum type of argument that yields lower bounds for many other problems. Our results asymptotically improve previously known bounds for these problems, even in deterministic and 1-sided error models of computation.
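To make the memory bottleneck concrete, here is a small Python sketch that is an illustration only, not an algorithm from the paper: the obvious way to decide set disjointness with one pass over each of the two streams is to buffer one set entirely, which already costs memory linear in the input size. The result summarized above says that randomized 2-sided error algorithms cannot do substantially better even with o(log N/log log N) passes over read/write streams.

```python
def disjoint_naive(stream_a, stream_b):
    """Naive set-disjointness test with one pass over each stream.

    Buffers all of stream_a, so internal memory grows linearly with the
    input size N -- the quantity the lower bound shows is near-optimal
    unless many more passes are allowed.
    """
    seen = set()
    for a in stream_a:      # pass over the first stream: store A entirely
        seen.add(a)
    for b in stream_b:      # pass over the second stream: probe against A
        if b in seen:
            return False    # common element found: the sets intersect
    return True
```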

44 citations

Book ChapterDOI
02 May 2004
TL;DR: This work revisits the following open problem in information-theoretic cryptography: can computationally unbounded players compute an arbitrary function of their inputs with polynomial communication complexity and a linear threshold of unconditional privacy?
Abstract: We revisit the following open problem in information-theoretic cryptography: Does the communication complexity of unconditionally secure computation depend on the computational complexity of the function being computed? For instance, can computationally unbounded players compute an arbitrary function of their inputs with polynomial communication complexity and a linear threshold of unconditional privacy? Can this be done using a constant number of communication rounds?

43 citations


Network Information
Related Topics (5)
Upper and lower bounds: 56.9K papers, 1.1M citations, 84% related
Encryption: 98.3K papers, 1.4M citations, 82% related
Network packet: 159.7K papers, 2.2M citations, 81% related
Server: 79.5K papers, 1.4M citations, 81% related
Wireless network: 122.5K papers, 2.1M citations, 80% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    19
2022    56
2021    161
2020    165
2019    149
2018    141