Topic

Average-case complexity

About: Average-case complexity is a research topic. Over the lifetime of the topic, 1,749 publications have been published, receiving 44,972 citations.


Papers
Book Chapter (DOI)
27 Aug 2001
TL;DR: This work surveys some known implications between hypotheses of type P≠NP, describes attempts to establish further connections, and discusses a new result involving a concept of approximative complexity.
Abstract: Several models of NP-completeness in an algebraic framework of computation have been proposed in the past, each of them hinging on a fundamental hypothesis of type P≠NP. We first survey some known implications between such hypotheses and then describe attempts to establish further connections. This leads us to the problem of relating the complexity of computational and decisional tasks, and it naturally raises the question of how the complexity of a polynomial is connected to the complexities of its factors. After reviewing what is known in this respect, we discuss a new result involving a concept of approximative complexity.

15 citations

Journal Article (DOI)
TL;DR: This work provides lower and upper bounds for the contention-free step and register complexity of solving the mutual exclusion problem as a function of the number of processes and the size of the largest register that can be accessed in one atomic step.
Abstract: Worst-case time complexity is a measure of the maximum time needed to solve a problem over all runs. Contention-free time complexity indicates the maximum time needed when a process executes by itself, without competition from other processes. Since contention is rare in well-designed systems, it is important to design algorithms which perform well in the absence of contention. We study the contention-free time complexity of shared memory algorithms using two measures: step complexity, which counts the number of accesses to shared registers; and register complexity, which measures the number of different registers accessed. Depending on the system architecture, one of the two measures more accurately reflects the elapsed time. We provide lower and upper bounds for the contention-free step and register complexity of solving the mutual exclusion problem as a function of the number of processes and the size of the largest register that can be accessed in one atomic step. We also present bounds on the worst-case and contention-free step and register complexities of solving the naming problem. These bounds illustrate that the proposed complexity measures are useful in differentiating among the computational powers of different primitives.

15 citations
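
The two measures studied in the paper above are easy to make concrete. The following is a minimal sketch, not the paper's algorithm: it instruments the contention-free fast path of Lamport's classic fast mutual exclusion algorithm (a standard textbook example) and counts shared-register accesses (step complexity) and distinct registers touched (register complexity) for a process that runs alone. Class names and the register layout are illustrative.

class Counter:
    def __init__(self):
        self.steps = 0           # shared-register accesses (step complexity)
        self.registers = set()   # distinct registers touched (register complexity)

class Shared:
    """Shared memory with instrumented read/write operations."""
    def __init__(self, n, counter):
        self.mem = {"x": 0, "y": 0, **{f"b{i}": False for i in range(1, n + 1)}}
        self.counter = counter

    def read(self, name):
        self.counter.steps += 1
        self.counter.registers.add(name)
        return self.mem[name]

    def write(self, name, value):
        self.counter.steps += 1
        self.counter.registers.add(name)
        self.mem[name] = value

def solo_entry_and_exit(shared, i):
    """Fast path taken by process i when it runs without contention."""
    shared.write(f"b{i}", True)
    shared.write("x", i)
    if shared.read("y") != 0:
        raise RuntimeError("contention: slow path not modelled in this sketch")
    shared.write("y", i)
    if shared.read("x") != i:
        raise RuntimeError("contention: slow path not modelled in this sketch")
    # --- critical section ---
    shared.write("y", 0)
    shared.write(f"b{i}", False)

counter = Counter()
solo_entry_and_exit(Shared(n=4, counter=counter), i=1)
print("contention-free step complexity:", counter.steps)               # 7
print("contention-free register complexity:", len(counter.registers))  # 3

On this contention-free path the two measures differ (7 register accesses, but only 3 distinct registers), which is exactly the distinction the paper's bounds are phrased in.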

Journal Article (DOI)
TL;DR: A unified derivation of the bounds of the linear complexity is given for a sequence obtained from a periodic sequence over GF(q) by either substituting, inserting, or deleting k symbols within one period.
Abstract: A unified derivation of the bounds of the linear complexity is given for a sequence obtained from a periodic sequence over GF(q) by either substituting, inserting, or deleting k symbols within one period. The lower bounds are useful in case of n

15 citations
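
For readers unfamiliar with the quantity being bounded, the sketch below computes linear complexity over GF(2) with the standard Berlekamp-Massey algorithm and then substitutes a single symbol within one period (the k = 1 case of the abstract above). The example sequence is arbitrary, and the snippet is only an illustration, not the paper's derivation.

def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): return the linear complexity of bit sequence s."""
    n = len(s)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]          # discrepancy between the LFSR prediction and s[i]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]      # update the feedback polynomial
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

period = [0, 0, 1, 0, 1, 1, 1]             # one period of an m-sequence (from x^3 + x + 1)
print(linear_complexity_gf2(period * 2))   # -> 3 (two periods suffice, since LC never exceeds the period)

mutated = [1] + period[1:]                 # substitute k = 1 symbol within the period
print(linear_complexity_gf2(mutated * 2))  # -> 7: one substitution pushes LC up to the full period

The jump from 3 to 7 caused by a single substituted symbol is the kind of behaviour the paper's lower and upper bounds quantify.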

Proceedings Article
04 Aug 2001
TL;DR: This work introduces an optimal 3B-consistency algorithm whose time complexity of O(md²n) improves the known bound by a factor of n, and it proves that improved bounds on time complexity can effectively be reached for higher values of k.
Abstract: kB-consistencies form the class of strong consistencies used in interval constraint programming. We survey, prove, and give theoretical motivations for some technical improvements to a naive kB-consistency algorithm. Our contribution is twofold: on the one hand, we introduce an optimal 3B-consistency algorithm whose time complexity of O(md²n) improves the known bound by a factor of n (m is the number of constraints, n is the number of variables, and d is the maximal size of the intervals of the box). On the other hand, we prove that improved bounds on time complexity can effectively be reached for higher values of k. These results are obtained with very affordable overheads in terms of space complexity.

15 citations
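
For context on the kB hierarchy mentioned above, the sketch below shows the 2B-consistency (hull consistency) narrowing that 3B-consistency algorithms use as a subroutine: each constraint narrows the variable intervals by projection, and the operators are applied until a fixed point is reached. The constraint, variable names, and bounds are invented for illustration; this is not the optimal algorithm from the paper.

def intersect(a, b):
    """Intersect two closed intervals, raising if the result is empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty interval: the box is inconsistent")
    return (lo, hi)

def narrow_sum(box, x, y, z):
    """Projections of the constraint x + y = z onto each of its variables."""
    bx, by, bz = box[x], box[y], box[z]
    box[z] = intersect(bz, (bx[0] + by[0], bx[1] + by[1]))
    box[x] = intersect(bx, (box[z][0] - by[1], box[z][1] - by[0]))
    box[y] = intersect(by, (box[z][0] - box[x][1], box[z][1] - box[x][0]))

def propagate(box, narrowers):
    """Apply all narrowing operators until no interval changes (a 2B fixed point)."""
    changed = True
    while changed:
        before = dict(box)
        for narrow in narrowers:
            narrow(box)
        changed = box != before
    return box

box = {"x": (0.0, 8.0), "y": (0.0, 5.0), "z": (10.0, 10.0)}
propagate(box, [lambda b: narrow_sum(b, "x", "y", "z")])
print(box)   # x narrowed to (5, 8), y to (2, 5), z unchanged

3B-consistency goes one level further: a bound of a variable's interval is kept only if fixing the variable to a slice at that bound yields a sub-box that this 2B propagation cannot refute.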

Book Chapter (DOI)
19 Apr 2010
TL;DR: This work presents a lattice algorithm designed for classical applications of lattice reduction in which the lattice bases have a generalized knapsack-type structure and the target vectors are boundably short; when the bit-length of the target vectors is unrelated to that of the input, the algorithm is only linear in the bit-length of the input entries, an improvement over the quadratic-complexity floating-point LLL algorithms.
Abstract: We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are boundably short. For such applications, the complexity of the algorithm improves traditional lattice reduction by replacing some dependence on the bit-length of the input vectors by some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.

14 citations
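
As a minimal, hedged illustration of the setting rather than of the paper's algorithm, the sketch below runs Lagrange-Gauss reduction in dimension 2: the input basis has larger entries than the short target vector it hides, which is the "short output, long input" regime the complexity claim above exploits. The example basis is made up.

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D integer lattice basis (u, v), both nonzero."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) < dot(v, v):
        u, v = v, u                       # keep the invariant |u| >= |v|
    while True:
        q = round(dot(u, v) / dot(v, v))  # nearest integer (float division is fine at this size)
        u = (u[0] - q * v[0], u[1] - q * v[1])
        if dot(u, u) >= dot(v, v):
            return v, u                   # v is now a shortest nonzero vector of the lattice
        u, v = v, u

print(gauss_reduce((9, 22), (5, 13)))     # -> ((2, 1), (1, -3)): short vectors hidden in a bigger basis

Real applications such as the polynomial-factoring one mentioned above work in much higher dimension and need LLL-style reduction with careful precision management; this toy only shows why the size of the target vectors, and not only of the input, can drive the cost.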


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32