
Communication complexity

About: Communication complexity is the study of how much communication is required between two or more parties to compute a function whose inputs are distributed among them. Over its lifetime, 3,870 publications have been published on this topic, receiving 105,832 citations.


Papers
Proceedings ArticleDOI
19 Jun 2017
TL;DR: The algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett's proof that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n).
Abstract: For every constant ε > 0, we give an exp(Õ(√n))-time algorithm for the 1 vs. 1 − ε Best Separable State (BSS) problem of distinguishing, given an n² × n² matrix ℳ corresponding to a quantum measurement, between the case that there is a separable (i.e., non-entangled) state ρ that ℳ accepts with probability 1, and the case that every separable state is accepted with probability at most 1 − ε. Equivalently, our algorithm takes the description of a subspace 𝒲 ⊆ 𝔽^(n²) (where 𝔽 can be either the real or complex field) and distinguishes between the case that 𝒲 contains a rank-one matrix and the case that every rank-one matrix is at least ε-far (in ℓ₂ distance) from 𝒲. To the best of our knowledge, this is the first improvement over the brute-force exp(n)-time algorithm for this problem. Our algorithm is based on the sum-of-squares hierarchy and its analysis is inspired by Lovett's proof (STOC '14, JACM '16) that the communication complexity of every rank-n Boolean matrix is bounded by Õ(√n).

24 citations
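
The subspace formulation above has a simple numerical core: the ℓ₂ (Frobenius) distance from a rank-one matrix xyᵀ to a subspace 𝒲 is the norm of its component orthogonal to 𝒲. The numpy sketch below is illustrative only (not the paper's sum-of-squares algorithm; all names are hypothetical) and computes that distance for a subspace given by an orthonormal basis of flattened matrices.

# Illustrative sketch (hypothetical, not the paper's algorithm):
# l2 distance from a rank-one matrix to a subspace W of n x n matrices.
import numpy as np

def dist_to_subspace(M, basis):
    # basis: (k, n*n) array whose orthonormal rows span W (flattened matrices)
    v = M.reshape(-1)
    proj = basis.T @ (basis @ v)   # orthogonal projection of M onto W
    return np.linalg.norm(v - proj)

n = 4
rng = np.random.default_rng(0)
B, _ = np.linalg.qr(rng.standard_normal((n * n, 3)))  # random 3-dim subspace
B = B.T                                               # rows now orthonormal
x, y = rng.standard_normal(n), rng.standard_normal(n)
print(dist_to_subspace(np.outer(x, y), B))            # distance of xy^T to W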

Book ChapterDOI
02 Nov 1992
TL;DR: A transformer takes a distributed algorithm whose message complexity is O(f·m) and produces a new distributed algorithm for the same problem with O(f·n log n + m log n) message complexity, where n and m are the total number of nodes and links in the network.
Abstract: This paper introduces a transformer for improving the communication complexity of several classes of distributed algorithms. The transformer takes a distributed algorithm whose message complexity is O(f·m) and produces a new distributed algorithm for the same problem with O(f·n log n + m log n) message complexity, where n and m are the total number of nodes and links in the network, and f is an arbitrary function of n and m.

24 citations
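
A worked example (not from the paper) makes the trade-off concrete: in a dense network with m = Θ(n²) links, an algorithm with f = n sends O(f·m) = O(n³) messages, while its transformed version sends O(f·n log n + m log n) = O(n² log n). In general the transformation pays off roughly when the original term f·m dominates both new terms, i.e., when m ≫ n log n and f ≫ log n.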

Proceedings ArticleDOI
16 May 2010
TL;DR: Matched filter (MF) based low-complexity max-log-MAP bit metrics are proposed for BICM single-input single-output (SISO) and low-dimensional BICM MIMO systems using Gray-encoded quadrature amplitude modulation (QAM) alphabets.
Abstract: Bit-interleaved coded modulation (BICM), because of its improved diversity over fast fading channels, is an attractive transmission scheme for future wireless systems. For coded BICM systems, receivers need to employ max-log-MAP demodulators (demappers) that calculate soft-decision metrics, i.e., log-likelihood ratios (LLRs), for the decoder. The complexity of calculating these LLRs is exponential in the number of bits per symbol, and for systems exploiting the spatial dimension (MIMO) it increases further, exponentially in the number of transmit antennas. In this paper we propose matched filter (MF) based low-complexity max-log-MAP bit metrics for BICM single-input single-output (SISO) and low-dimensional BICM MIMO systems using Gray-encoded M-ary quadrature amplitude modulation (QAM) alphabets. For SISO systems, the maximum-likelihood (ML) detector must compute and compare minimum distances between the received symbol and M constellation points on the complex plane to calculate each LLR. In this paper we show that these LLRs can be computed precisely from the MF output and therefore require no minimum-distance calculations. For low-dimensional BICM MIMO systems, we further propose an MF-based bit metric that removes one complex dimension of the system, thereby reducing complexity. Both metrics substantially reduce the number of calculations needed for each LLR without compromising performance, and since the MF is an integral part of all receiver structures, they are easy to implement in hardware. Simulation results over Rayleigh fading channels verify that the simplified metrics perform like the original ones but with a significant reduction in complexity.

24 citations
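
For reference, the exhaustive max-log-MAP metric that the paper's MF-based metric avoids can be written down directly. The Python sketch below is a minimal illustration under the standard max-log approximation (not the paper's simplified metric; all names are hypothetical): it computes the four LLRs of one Gray-mapped 16-QAM symbol by searching all M = 16 constellation points, which is exactly the per-LLR minimum-distance cost the paper eliminates.

# Illustrative sketch (hypothetical names): brute-force max-log-MAP LLRs
# for one Gray-mapped 16-QAM symbol over a flat-fading channel, using the
# convention LLR_i = ln P(b_i=1|y)/P(b_i=0|y) ~ (d0_min - d1_min)/noise_var.
import itertools
import numpy as np

GRAY2 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}  # Gray-coded 4-PAM
CONST = [(b, (GRAY2[b[0], b[1]] + 1j * GRAY2[b[2], b[3]]) / np.sqrt(10))
         for b in itertools.product((0, 1), repeat=4)]  # 16 points, unit energy

def maxlog_llrs(y, h, noise_var):
    d = {b: abs(y - h * s) ** 2 for b, s in CONST}  # 16 squared distances
    return [(min(v for b, v in d.items() if b[i] == 0)
             - min(v for b, v in d.items() if b[i] == 1)) / noise_var
            for i in range(4)]

print(maxlog_llrs(y=0.9 - 0.3j, h=1.0 + 0.0j, noise_var=0.1))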

Journal ArticleDOI
Matthew Andrews, Lisa Zhang
TL;DR: This paper shows that MIN-SUMFIBER-FIXEDROUTE cannot be approximated within any constant factor unless NP-hard problems have efficient algorithms, which implies that the problem of wavelength assignment is inherently hard by itself.
Abstract: We study the complexity of a set of design problems for optical networks. Under wavelength division multiplexing (WDM) technology, demands sharing a common fiber are transported on distinct wavelengths. Multiple fibers may be deployed on a physical link. Our basic goal is to design networks of minimum cost, minimum congestion and maximum throughput. This translates to three variants in the design objectives: 1) MIN-SUMFIBER: minimizing the total cost of fibers deployed to carry all demands; 2) MIN-MAXFIBER: minimizing the maximum number of fibers per link to carry all demands; and 3) MAX-THROUGHPUT: maximizing the carried demands using a given set of fibers. We also have two variants in the design constraints: 1) CHOOSEROUTE: Here we need to specify both a routing path and a wavelength for each demand; 2) FIXEDROUTE: Here we are given demand routes and we need to specify wavelengths only. The FIXEDROUTE variant allows us to study wavelength assignment in isolation. Combining these variants, we have six design problems. Previously we have shown that general instances of the problems MIN-SUMFIBER-CHOOSEROUTE and MIN-MAXFIBER-FIXEDROUTE have no constant-approximation algorithms. In this paper, we prove that a similar statement holds for all four other problems. Our main result shows that MIN-SUMFIBER-FIXEDROUTE cannot be approximated within any constant factor unless NP-hard problems have efficient algorithms. This, together with the previous hardness result of MIN-MAXFIBER-FIXEDROUTE, shows that the problem of wavelength assignment is inherently hard by itself. We also study the complexity of problems that arise when multiple demands can be time-multiplexed onto a single wavelength (as in time-domain wavelength interleaved networking (TWIN) networks) and when wavelength converters can be placed along the path of a demand.

24 citations
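
To see why the FIXEDROUTE variants isolate wavelength assignment: once routes are fixed, two demands conflict exactly when they share a link, and assigning wavelengths amounts to coloring this conflict structure. The sketch below is a naive first-fit heuristic for illustration only (not an algorithm from the paper; all names are hypothetical): each demand gets the lowest wavelength unused on all of its links.

# Illustrative first-fit sketch (hypothetical, not from the paper):
# FIXEDROUTE wavelength assignment with one fiber per link.
def assign_wavelengths(routes):
    used = {}          # link id -> set of wavelengths already in use
    assignment = []
    for route in routes:
        w = 0
        while any(w in used.get(link, set()) for link in route):
            w += 1     # first-fit: try the next wavelength
        assignment.append(w)
        for link in route:
            used.setdefault(link, set()).add(w)
    return assignment

# Three demands all crossing link "b" need three distinct wavelengths.
print(assign_wavelengths([{"a", "b"}, {"b", "c"}, {"b"}]))  # [0, 1, 2]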

01 Jan 2000
TL;DR: An algorithm is presented that uses GCSs and implements a coordinator-based strategy and the notion of view-graphs that represent the partially-ordered view evolution history witnessed by the processors are introduced.
Abstract: This work considers the problem of performing a set of N tasks on a set of P cooperating message-passing processors (P ≤ N). The processors use a group communication service (GCS) to coordinate their activity in the setting where dynamic changes in the underlying network topology cause the processor groups to change over time. GCSs have been recognized as effective building blocks for fault-tolerant applications in such settings. Our results explore the efficiency of fault-tolerant cooperative computation using GCSs. The original investigation of this area by Dolev et al. (Dynamic load balancing with group communication, in: Proc. of the 6th International Colloquium on Structural Information and Communication Complexity, 1999) focused on competitive lower bounds, non-redundant task allocation schemes and work-efficient algorithms in the presence of fragmentation regroupings. In this work we investigate work-efficient and message-efficient algorithms for fragmentation and merge regroupings. We present an algorithm that uses GCSs and implements a coordinator-based strategy. For the analysis of our algorithm we introduce the notion of view-graphs that represent the partially-ordered view evolution history witnessed by the processors. For fragmentations and merges, the work of the algorithm (defined as the worst case total number of task executions counting multiplicities) is not more than min{N·f + N, N·P}, and the message complexity is no worse than 4(N·f + N + P·m), where f and m denote the number of new groups created by fragmentations and merges, respectively. Note that the constants are very small and that, interestingly, while the work efficiency depends on the number of groups f created as the result of fragmentations, work does not depend on the number of groups m created as the result of merges.

24 citations
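
To illustrate the bounds with sample numbers (not from the paper): with N = 100 tasks, P = 10 processors, f = 3 new groups from fragmentations and m = 2 from merges, work is at most min{100·3 + 100, 100·10} = 400 task executions, and the message complexity is at most 4(100·3 + 100 + 10·2) = 1680 messages. As the abstract notes, m enters only the message bound, not the work bound.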


Network Information
Related Topics (5)
Upper and lower bounds: 56.9K papers, 1.1M citations, 84% related
Encryption: 98.3K papers, 1.4M citations, 82% related
Network packet: 159.7K papers, 2.2M citations, 81% related
Server: 79.5K papers, 1.4M citations, 81% related
Wireless network: 122.5K papers, 2.1M citations, 80% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    19
2022    56
2021    161
2020    165
2019    149
2018    141