Topic

Average-case complexity

About: Average-case complexity is a research topic. Over the lifetime, 1749 publications have been published within this topic receiving 44972 citations.


Papers
Proceedings Article
01 Jan 1999
TL;DR: This paper solves the widely publicised open problem of deciding whether or not it is possible to partition the vertices of a graph into four non-empty sets A, B, C, and D, and shows that RET-C4 is NP-complete, but for any graph H, other than C4, with at most four vertices, RET-H is polynomial-time solvable.
Abstract: In this paper, we solve a widely publicised open problem posed by Peter Winkler in 1988. The problem is to decide whether or not it is possible to partition the vertices of a graph into four non-empty sets A, B, C, and D, such that there is no edge between the sets A and C, and none between the sets B and D, and there is at least one edge between any other pair of sets. Winkler asked whether this problem is NP-complete. He was motivated by a general problem that we explain after introducing the following definitions. In the following, let G and H be graphs. A homomorphism f : G -> H, of G to H, is a mapping f of the vertices of G to the vertices of H, such that if g and g' are adjacent vertices of G then f(g) and f(g') are either adjacent vertices of H or the same vertex of H. Note that we have deviated slightly from the usual definition of a homomorphism by allowing two adjacent vertices of G to map to the same vertex of H. A compaction c : G -> H, of G to H, is a homomorphism of G to H, such that for every vertex x of H, there exists a vertex v of G with c(v) = x, and for every edge hh' of H, there exists an edge gg' of G with c(g) = h and c(g') = h'. Notice that the first part of the definition of a compaction (the requirement for every vertex x of H) follows from the second part unless H has isolated vertices. If there exists a compaction of G to H then G is said to compact to H. Now suppose that H is an induced subgraph of G. A retraction r : G -> H, of G to H, is a homomorphism of G to H, such that r(h) = h, for every vertex h of H. If there exists a retraction of G to H then G is said to retract to H, and H is said to be a retract of G. We shall denote a k-cycle by Ck. The problem of deciding the existence of a compaction to a fixed graph H, called the compaction problem for H, and denoted COMP-H, is the following: Instance: A graph G. Question: Does G compact to H? Note that Winkler's problem is the problem COMP-C4.
When both G and H are input graphs (i.e., H is not fixed), the problem of deciding whether or not G compacts to H has been studied by Almira Karabeg and Dino Karabeg. The problem of deciding the existence of a retraction to a fixed graph H, called the retraction problem for H, and denoted RET-H, asks whether or not an input graph G, containing H as an induced subgraph, retracts to H. Retraction problems have been of continuing interest in graph theory and have a considerable literature. It is not difficult to show that for every fixed graph H, if RET-H is solvable in polynomial time then COMP-H is also solvable in polynomial time. Is the converse true? This was the general problem that motivated Winkler. It turns out that RET-C4 is NP-complete, but for any graph H, other than C4, with at most four vertices, RET-H is polynomial-time solvable. In other words, the unique smallest graph H for which RET-H is NP-complete is C4. This observation was made by Tomas Feder and Peter Winkler, and led Winkler to ask specifically the following question in 1988, which has been a popular open problem: Is COMP-C4 NP-complete? We show that COMP-C4 is NP-complete. To show this, we give a transformation from RET-C4 to COMP-C4, using the technique explained below. Let a graph G containing H as an induced subgraph be an instance of RET-H. When we give a transformation from RET-H to COMP-H, we do the following. We construct, in time polynomial in the size of G, a graph G' (containing G as an induced subgraph), such that the following statements (i), (ii), and (iii) are equivalent: (i) G retracts to H. (ii) G' retracts to H. (iii) G' compacts to H. Thus if RET-H is NP-complete, this shows that COMP-H is also NP-complete. It is this technique that we have used throughout, when giving a transformation from RET-H to COMP-H, for any graph H.
We prove the equivalence of the above statements by showing that (i) is equivalent to (ii), and (ii) is equivalent to (iii). Feder extended the proof of NP-completeness of RET-C4 to apply to any cycle Ck, k >= 4 (the same result was proved independently by Gary MacGillivray). Correspondingly, we show that COMP-Ck is also NP-complete, for all k >= 4. We show this by giving a
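The abstract's definitions of homomorphism, compaction, and edge/vertex surjectivity can be checked mechanically for small graphs. The sketch below (vertex labels and helper names are illustrative, not from the paper) tests a candidate map against the paper's definitions and then brute-forces COMP-H on toy instances; the exponential search over all |V(H)|^|V(G)| maps is exactly what the NP-completeness result suggests cannot be avoided in general for H = C4.

```python
from itertools import product

def is_homomorphism(f, edges_G, adj_H):
    # The paper's (slightly non-standard) homomorphism: adjacent vertices
    # of G map to adjacent vertices of H, or to the same vertex of H.
    return all(f[u] == f[v] or f[v] in adj_H[f[u]] for u, v in edges_G)

def is_compaction(f, V_G, edges_G, V_H, edges_H, adj_H):
    if not is_homomorphism(f, edges_G, adj_H):
        return False
    if {f[v] for v in V_G} != set(V_H):          # every vertex of H is hit
        return False
    hit = {frozenset((f[u], f[v])) for u, v in edges_G if f[u] != f[v]}
    return all(frozenset(e) in hit for e in edges_H)  # every edge of H is hit

def compacts(V_G, edges_G, V_H, edges_H):
    # Brute force over all |V_H|^|V_G| vertex maps; fine for toy graphs.
    adj_H = {x: set() for x in V_H}
    for a, b in edges_H:
        adj_H[a].add(b); adj_H[b].add(a)
    for vals in product(V_H, repeat=len(V_G)):
        f = dict(zip(V_G, vals))
        if is_compaction(f, V_G, edges_G, V_H, edges_H, adj_H):
            return f
    return None

def cycle(k, label):
    V = [f"{label}{i}" for i in range(k)]
    return V, [(V[i], V[(i + 1) % k]) for i in range(k)]

V6, E6 = cycle(6, "g")
V4, E4 = cycle(4, "h")
print(compacts(V6, E6, V4, E4) is not None)  # True: C6 compacts to C4
V3, E3 = cycle(3, "t")
print(compacts(V3, E3, V4, E4) is None)      # True: 3 vertices cannot cover 4
```

Note how collapsing adjacent vertices of G to one vertex of H is allowed, so a 6-cycle can wrap onto a 4-cycle and still cover all four edges.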

15 citations

Proceedings ArticleDOI
TL;DR: This paper gives the first super-quadratic separation between quantum and randomized communication complexity for a total function, exhibiting a power-2.5 gap, via the cheat sheet framework of Aaronson, Ben-David, and Kothari.
Abstract: While exponential separations are known between quantum and randomized communication complexity for partial functions (Raz, STOC 1999), the best known separation between these measures for a total function is quadratic, witnessed by the disjointness function. We give the first super-quadratic separation between quantum and randomized communication complexity for a total function, giving an example exhibiting a power 2.5 gap. We further present a 1.5 power separation between exact quantum and randomized communication complexity, improving on the previous ~1.15 separation by Ambainis (STOC 2013). Finally, we present a nearly optimal quadratic separation between randomized communication complexity and the logarithm of the partition number, improving upon the previous best power 1.5 separation due to Goos, Jayram, Pitassi, and Watson. Our results are the communication analogues of separations in query complexity proved using the recent cheat sheet framework of Aaronson, Ben-David, and Kothari (STOC 2016). Our main technical results are randomized communication and information complexity lower bounds for a family of functions, called lookup functions, that generalize and port the cheat sheet framework to communication complexity.

15 citations

Proceedings ArticleDOI
22 Oct 1990
TL;DR: The authors prove general lower bounds on the length of the random input of parties computing a function f, depending on the number of bits communicated and the deterministic communication complexity of f.
Abstract: A quantitative investigation of the power of randomness in the context of communication complexity is initiated. The authors prove general lower bounds on the length of the random input of parties computing a function f, depending on the number of bits communicated and the deterministic communication complexity of f. Four standard models for communication complexity are considered: the random input of the parties may be shared or local, and the communication may be one-way or two-way. The bounds are shown to be tight for all the models, for all values of the deterministic communication complexity, and for all possible quantities of bits exchanged. It is shown that it is possible to reduce the number of random bits required by any protocol, without increasing the number of bits exchanged (up to a limit depending on the advantage achieved by the protocol).

15 citations

Journal ArticleDOI
TL;DR: A general dual complexity space is constructed by using (complexity) partial functions endowed with a subinvariant bicomplete extended quasi-metric as complexity distance to modelling certain processes that arise, in a natural way, in symbolic computation.
Abstract: Dual complexity spaces were introduced by Romaguera and Schellekens in order to obtain a robust mathematical model for the complexity analysis of algorithms and programs. This model is based on the notions of a cone and of a quasi-metric space. Later on, the structure of the dual complexity spaces was modified with the purpose of giving quantitative measures of the improvements in the complexity of programs. This new complexity structure was presented as an ordered cone endowed with an invariant extended quasi-metric. Here we construct a general dual complexity space by using (complexity) partial functions. This new complexity structure is a pointed ordered cone endowed with a subinvariant bicomplete extended quasi-metric as complexity distance. We apply this approach to modelling certain processes that arise, in a natural way, in symbolic computation.
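As a rough illustration of the kind of asymmetric complexity distance involved, the original Romaguera-Schellekens dual complexity distance (a simpler special case than the extended quasi-metric on partial functions constructed in this paper) can be sketched as follows; the summation formula and all names below are assumptions for illustration, not taken from the paper:

```python
def dual_complexity_dist(f, g, terms=64):
    """Truncated dual-complexity distance, assumed here to be
        d(f, g) = sum_{n >= 0} 2**(-n) * max(g(n) - f(n), 0).
    It is a quasi-metric, not a metric: d(f, g) == 0 whenever
    g <= f pointwise, so the asymmetry records whether g is an
    improvement on f rather than how far apart they are."""
    return sum(2.0 ** -n * max(g(n) - f(n), 0.0) for n in range(terms))

linear = lambda n: float(n)
quadratic = lambda n: float(n * n)

# Replacing a quadratic-cost program by a linear-cost one is an
# improvement: distance 0 in one direction, positive in the other.
print(dual_complexity_dist(quadratic, linear))      # 0.0
print(dual_complexity_dist(linear, quadratic) > 0)  # True
```

The paper's contribution is to extend this kind of distance to partial functions, yielding the subinvariant bicomplete extended quasi-metric described above.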

15 citations

Journal ArticleDOI
TL;DR: A weakening of Blum's axioms for abstract computational complexity is introduced in order to better account for measures that can be finite even when the computations diverge.

15 citations


Network Information
Related Topics (5)
Time complexity
36K papers, 879.5K citations
89% related
Approximation algorithm
23.9K papers, 654.3K citations
87% related
Data structure
28.1K papers, 608.6K citations
83% related
Upper and lower bounds
56.9K papers, 1.1M citations
83% related
Computational complexity theory
30.8K papers, 711.2K citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32