Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published within this topic, receiving 44,972 citations.


Papers
Journal ArticleDOI
TL;DR: The distributed approximating functional-path integral is formulated as an iterated sequence of $d$-dimensional integrals, based on deterministic "low discrepancy sequences," as opposed to products of one-dimensional quadratures or basis functions.
Abstract: The distributed approximating functional-path integral is formulated as an iterated sequence of $d$-dimensional integrals, where $d$ is the intrinsic number of degrees of freedom for the system under consideration. This is made practical for larger values of $d$ by evaluating these integrals using average-case complexity integration techniques, based on deterministic "low discrepancy sequences," as opposed to products of one-dimensional quadratures or basis functions. The integration converges as $(\log P)^{d-1}/P$, where $P$ is the number of sample points used, and the dimensionality of the integral does not increase with the number of time slices required.
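
The convergence rate quoted above is the signature of quasi-Monte Carlo integration over a low-discrepancy point set. As a minimal sketch of that idea only (not the paper's distributed approximating functional machinery), the snippet below estimates a toy $d$-dimensional integral with a hand-rolled Halton sequence; the integrand and point counts are illustrative choices, not from the paper.

```python
import math

# First few primes, one base per dimension of the Halton sequence.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def van_der_corput(n, base):
    """n-th term of the van der Corput sequence in the given base."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton_point(n, d):
    """n-th point of the d-dimensional Halton low-discrepancy sequence in [0, 1)^d."""
    return [van_der_corput(n, PRIMES[i]) for i in range(d)]

def qmc_integrate(f, d, num_points):
    """Quasi-Monte Carlo estimate of the integral of f over the unit cube [0, 1]^d."""
    return sum(f(halton_point(n, d)) for n in range(1, num_points + 1)) / num_points

# Toy integrand: the integral of x_1 * ... * x_d over [0, 1]^d equals 2**(-d).
d = 4
exact = 0.5 ** d
for P in (100, 1_000, 10_000):
    estimate = qmc_integrate(math.prod, d, P)
    print(P, estimate, abs(estimate - exact))  # error decays roughly like (log P)^(d-1) / P
```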

29 citations

Journal ArticleDOI
TL;DR: It is shown that the average running time is almost linear in the input size, which explains why the Hensel lifting technique is fast in practice for most polynomials.
Abstract: This paper presents an average time analysis of a Hensel lifting based factorisation algorithm for bivariate polynomials over finite fields. It is shown that the average running time is almost linear in the input size. This explains why the Hensel lifting technique is fast in practice for most polynomials.
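
For readers unfamiliar with Hensel lifting, here is a minimal sketch of the lifting step itself, shown for a single root of a univariate polynomial modulo prime powers; the paper's algorithm applies the same idea to polynomial factors of a bivariate polynomial, which is not reproduced here.

```python
def hensel_lift_root(f, df, r, p, k):
    """Lift a simple root r of f(x) = 0 (mod p) to a root modulo p**k.

    Classic quadratic Hensel lifting via a Newton-style update; the modulus
    squares at each step until it reaches p**k.
    """
    modulus = p
    target = p ** k
    while modulus < target:
        modulus = min(modulus * modulus, target)
        # Newton update: r <- r - f(r) * f'(r)^(-1)  (mod modulus)
        inverse = pow(df(r) % modulus, -1, modulus)
        r = (r - f(r) * inverse) % modulus
    return r

# Example: lift the root 3 of x^2 + 1 = 0 (mod 5) to a root modulo 5**4.
f = lambda x: x * x + 1
df = lambda x: 2 * x
r = hensel_lift_root(f, df, 3, 5, 4)
assert f(r) % 5 ** 4 == 0  # r == 443 and 443**2 + 1 == 314 * 5**4
```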

29 citations

Proceedings ArticleDOI
11 Jun 2005
TL;DR: It is proved that corruption, one of the most powerful measures used to analyze 2-party randomized communication complexity, satisfies a strong direct sum property under rectangular distributions.
Abstract: We prove that corruption, one of the most powerful measures used to analyze 2-party randomized communication complexity, satisfies a strong direct sum property under rectangular distributions. This direct sum bound holds even when the error is allowed to be exponentially close to 1. We use this to analyze the complexity of the widely-studied set disjointness problem in the usual "number-on-the-forehead" (NOF) model of multiparty communication complexity.
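
As context only, the snippet below states the k-party set disjointness function itself; the communication protocol, the NOF model, and the corruption bound analysed in the paper are not captured in a few lines of code, and the example sets are made up.

```python
def nof_set_disjointness(sets):
    """k-party set disjointness: True iff the k input sets share no common element.

    In the number-on-the-forehead model, party i sees every input set except
    its own; the question studied is how many bits the parties must exchange
    to evaluate this function.
    """
    return len(set.intersection(*map(set, sets))) == 0

# Hypothetical example with k = 3 parties over a small universe.
print(nof_set_disjointness([{1, 2, 3}, {2, 4, 6}, {2, 5}]))  # False: element 2 is in every set
print(nof_set_disjointness([{1, 2, 3}, {2, 4, 6}, {5, 6}]))  # True: the common intersection is empty
```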

29 citations

Proceedings ArticleDOI
08 May 2006
TL;DR: It is demonstrated that NP-hard manipulations may be tractable in the average case, and that a family of important voting protocols is susceptible to manipulation by coalitions when the number of candidates is constant.
Abstract: Encouraging voters to truthfully reveal their preferences in an election has long been an important issue. Previous studies have shown that some voting protocols are hard to manipulate, but predictably used NP-hardness as the complexity measure. Such a worst-case analysis may be an insufficient guarantee of resistance to manipulation. Indeed, we demonstrate that NP-hard manipulations may be tractable in the average case. For this purpose, we augment the existing theory of average-case complexity with new concepts; we consider elections distributed with respect to junta distributions, which concentrate on hard instances, and introduce a notion of heuristic polynomial time. We use our techniques to prove that a family of important voting protocols is susceptible to manipulation by coalitions when the number of candidates is constant.
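
To make coalition manipulation concrete, here is a hedged sketch of a simple greedy manipulation heuristic for the Borda scoring rule; this is an illustration under assumed rules and a made-up profile, not the paper's junta-distribution analysis or its exact heuristic.

```python
def borda_scores(profile, num_candidates):
    """Total Borda score per candidate; each ballot lists candidates from most to least preferred."""
    scores = [0] * num_candidates
    for ballot in profile:
        for pos, cand in enumerate(ballot):
            scores[cand] += num_candidates - 1 - pos
    return scores

def greedy_coalition_manipulation(honest_profile, num_candidates, preferred, coalition_size):
    """Greedy heuristic: each manipulator ranks the preferred candidate first and
    places the currently strongest rivals in the lowest-scoring positions."""
    profile = list(honest_profile)
    for _ in range(coalition_size):
        scores = borda_scores(profile, num_candidates)
        rivals = sorted((c for c in range(num_candidates) if c != preferred),
                        key=lambda c: scores[c], reverse=True)
        # Strongest rival gets the last (0-point) slot, weakest rival the best remaining slot.
        profile.append([preferred] + rivals[::-1])
    final = borda_scores(profile, num_candidates)
    winner = max(range(num_candidates), key=lambda c: final[c])
    return final, winner == preferred

# Tiny example: 3 candidates, 2 honest voters, a coalition of 3 pushing candidate 2.
honest = [[0, 1, 2], [1, 0, 2]]
scores, success = greedy_coalition_manipulation(honest, 3, preferred=2, coalition_size=3)
print(scores, success)  # candidate 2 ends up with the highest Borda score, so success is True
```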

29 citations

Book ChapterDOI
07 Jul 2008
TL;DR: In this paper, the authors show that the fixed-point theory is flawed and that the algorithm is incorrect; they correct the algorithm without affecting its space and time complexity, although they do not see how the fixed-point operator itself can be repaired.
Abstract: Although there are many efficient algorithms for calculating the simulation preorder on finite Kripke structures, only two have been proposed of which the space complexity is of the same order as the size of the output of the algorithm. Of these, the one with the best time complexity exploits the representation of the simulation problem as a generalised coarsest partition problem. It is based on a fixed-point operator for obtaining a generalised coarsest partition as the limit of a sequence of partition pairs. We show that this fixed-point theory is flawed, and that the algorithm is incorrect. Although we do not see how the fixed-point operator can be repaired, we correct the algorithm without affecting its space and time complexity.
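
For context, the sketch below computes the simulation preorder on a small Kripke structure by naive fixed-point refinement; this is the textbook quadratic-space loop, not the space-efficient partition-pair algorithm whose correctness the paper examines, and the example structure is invented.

```python
def simulation_preorder(states, labels, succ):
    """Naive fixed-point computation of the simulation preorder: (s, t) stays in R
    iff labels match and every successor of s is related to some successor of t."""
    R = {(s, t) for s in states for t in states if labels[s] == labels[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(R):
            # Drop (s, t) if some move of s cannot be matched by any move of t.
            if any(all((s2, t2) not in R for t2 in succ[t]) for s2 in succ[s]):
                R.discard((s, t))
                changed = True
    return R

# Toy structure: a -> b, a -> c, d -> b; states a and d carry label p, b carries q, c carries r.
states = ["a", "b", "c", "d"]
labels = {"a": "p", "b": "q", "c": "r", "d": "p"}
succ = {"a": ["b", "c"], "b": [], "c": [], "d": ["b"]}
R = simulation_preorder(states, labels, succ)
print(("d", "a") in R, ("a", "d") in R)  # True False: a simulates d, but d cannot match a's move to c
```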

29 citations


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32