Topic

Average-case complexity

About: Average-case complexity is a research topic. Over the lifetime, 1749 publications have been published within this topic receiving 44972 citations.


Papers
Journal ArticleDOI
TL;DR: A large family of k-context-free sequences is introduced, for which a polynomial upper bound on their complexity functions is given.

5 citations

Journal ArticleDOI
TL;DR: A new recursive algorithm for deriving the layout of parallel multipliers is presented and a network for performing multiplications of two's complement numbers is proposed, showing how the structure can be pipelined with period complexity O(1) and used for single and double precision multiplication.
Abstract: A new recursive algorithm for deriving the layout of parallel multipliers is presented. Based on this algorithm, a network for performing multiplications of two's complement numbers is proposed. The network can be implemented in a synchronous or an asynchronous way. If the factors to be multiplied have N bits, the area complexity of the network is O(N^2) for practical values of N, as in the case of cellular multipliers. Due to the design approach based on a recursive algorithm, a time complexity of O(log N) is achieved. It is shown how the structure can be pipelined with period complexity O(1) and used for single and double precision multiplication.
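The divide-and-conquer idea behind such recursive multiplier layouts can be sketched in software: split each N-bit factor into halves, form four half-size subproducts recursively, and recombine them with shifts, so the recursion depth is O(log N). A minimal sketch for unsigned integers (an illustration of the recursive decomposition only; the paper's actual network is a two's-complement hardware layout with pipelining):

```python
def rec_mul(x: int, y: int, n: int) -> int:
    """Multiply two n-bit non-negative integers by recursive operand
    splitting -- a software analogue of the divide-and-conquer
    decomposition used in recursive parallel-multiplier layouts.
    Illustrative sketch only, not the paper's circuit."""
    if n <= 1:
        return x * y  # base cell: a single 1-bit x 1-bit multiplier
    h = n // 2        # low-half width
    m = n - h         # high-half width (m >= h)
    x_hi, x_lo = x >> h, x & ((1 << h) - 1)
    y_hi, y_lo = y >> h, y & ((1 << h) - 1)
    # Four half-size subproducts, computable in parallel in hardware;
    # the recursion depth is O(log n), matching the O(log N) time bound.
    return ((rec_mul(x_hi, y_hi, m) << (2 * h))
            + ((rec_mul(x_hi, y_lo, m) + rec_mul(x_lo, y_hi, m)) << h)
            + rec_mul(x_lo, y_lo, m))
```

In a layout, the four subproducts become four half-size multiplier blocks whose outputs are merged by shifted additions, which is where the O(N^2) area recurrence comes from.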

5 citations

Proceedings ArticleDOI
06 Apr 2003
TL;DR: This work proposes to use the M and T algorithms in order to reduce the computational complexity of the Viterbi algorithm and shows that these algorithms enable a reduction of the number of particles by up to 20%, practically without loss of performance.
Abstract: For a given computational complexity, the Viterbi algorithm applied on the discrete representation of the state space provided by a standard particle filtering, outperforms the particle filtering. However, the computational complexity of the Viterbi algorithm is still high. We propose to use the M and T algorithms in order to reduce the computational complexity of the Viterbi algorithm and we show that these algorithms enable a reduction of the number of particles by up to 20%, practically without loss of performance with respect to the Viterbi algorithm.
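The M-algorithm cut that enables this reduction is easy to sketch: after each trellis step, only the M highest-scoring partial paths survive. A minimal sketch over a generic HMM trellis (the interface below is an assumption for illustration, not the authors' particle-filter implementation; with M equal to the number of states it degenerates to exact Viterbi):

```python
import math

def viterbi_m(obs, states, log_trans, log_emit, log_init, M):
    """Viterbi decoding with M-algorithm pruning: at each step only
    the M highest-scoring partial paths are kept.  Sketch of the
    pruning idea only; interface is a hypothetical HMM, not the
    paper's particle-filter state space."""
    # beam: list of (log_score, path) for the surviving partial paths
    beam = [(log_init[s] + log_emit[s][obs[0]], [s]) for s in states]
    beam = sorted(beam, reverse=True)[:M]          # M-algorithm cut
    for o in obs[1:]:
        cand = {}
        for score, path in beam:
            for s in states:
                ns = score + log_trans[path[-1]][s] + log_emit[s][o]
                if s not in cand or ns > cand[s][0]:
                    cand[s] = (ns, path + [s])     # best path ending in s
        beam = sorted(cand.values(), reverse=True)[:M]  # prune again
    return max(beam)[1]  # best surviving state sequence
```

The T-algorithm variant replaces the fixed survivor count M with a score threshold below the current best path; both trade a small risk of discarding the true best path for a large reduction in work per step.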

5 citations

01 Jan 2008
TL;DR: An algorithm with Θ(n^(log_2 3)) as its running time is presented, along with a proof of the theorem: the largest solutions of f(m) = 3k, 3k±1 are, respectively, m = 3^k, 3^k ± 3^(k−1).

Abstract: The integer complexity of a positive integer n, denoted f(n), is defined as the least number of 1's required to represent n, using only 1's, the addition and multiplication operators, and parentheses. The running time of the algorithm currently used to compute f(n) is Θ(n^2). In this paper we present an algorithm with Θ(n^(log_2 3)) as its running time. We also present a proof of the theorem: the largest solutions of f(m) = 3k, 3k±1 are, respectively, m = 3^k, 3^k ± 3^(k−1).
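The baseline computation of f(n) follows directly from the definition: every n > 1 decomposes as a sum or a product of smaller terms, and f(n) is the cheapest such split. A minimal dynamic-programming sketch of that baseline (the simple Θ(n²)-per-table approach referred to above, not the paper's faster Θ(n^(log_2 3)) algorithm):

```python
def integer_complexity(N: int) -> list:
    """Return the table f[0..N] where f[n] is the least number of 1's
    needed to write n using only 1, +, * and parentheses.  Simple
    baseline dynamic program, not the paper's faster algorithm."""
    f = [0, 1] + [float('inf')] * (N - 1)  # f[1] = 1: "1" itself
    for n in range(2, N + 1):
        # best additive split: n = a + (n - a)
        for a in range(1, n // 2 + 1):
            f[n] = min(f[n], f[a] + f[n - a])
        # best multiplicative split: n = d * (n // d)
        d = 2
        while d * d <= n:
            if n % d == 0:
                f[n] = min(f[n], f[d] + f[n // d])
            d += 1
    return f
```

For example, f(6) = 5 via (1+1)*(1+1+1), and f(3^k) = 3k, consistent with the theorem's claim that m = 3^k is the largest solution of f(m) = 3k.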

5 citations

01 Jan 2007
TL;DR: This thesis explores the applications of pseudorandomness within complexity theory, with a focus on pseudorandomness that can be constructed unconditionally, that is, without relying on any unproven complexity assumptions.
Abstract: Pseudorandomness—that is, information that "appears random" even though it is generated using very little true randomness—is a fundamental notion in cryptography and complexity theory. This thesis explores the applications of pseudorandomness within complexity theory, with a focus on pseudorandomness that can be constructed unconditionally, that is without relying on any unproven complexity assumptions. Such pseudorandomness only "fools" restricted classes of algorithms, and yet it can be applied to prove complexity results that concern very general models of computation. For instance, we show the following: (1) Randomness-Efficient Error Reduction for Parallel Algorithms. Typically, to gain confidence in a randomized algorithm, one repeats the algorithm several times (with independent randomness) and takes the majority vote of the executions. While very effective, this is wasteful in terms of the number of random bits that are used. Randomness-efficient error reduction techniques are known for polynomial-time algorithms, but do not readily apply to parallel algorithms since the techniques seem inherently sequential. We achieve randomness-efficient error reduction for highly-parallel algorithms. Specifically, we can reduce the error of a parallel algorithm to any δ > 0 while paying only O(log(1/δ)) additional random bits, thereby matching the results for polynomial-time. (2) Hardness Amplification within NP. A fundamental question in average-case complexity is whether P ≠ NP implies the existence of functions in NP that are hard on average (over randomly-chosen inputs). While the answer to this question seems far beyond the reach of current techniques, we show that powerful hardness amplification is indeed feasible within NP. 
In particular, we show that if NP has a mildly hard-on-average function f (i.e., any small circuit for computing f fails on at least a constant fraction of inputs), then NP has a function f' that is extremely hard on average (i.e., any small circuit for computing f' only succeeds with exponentially-small advantage over random guessing). Previous results only obtained functions f' that could not be computed with polynomial advantage over random guessing. Our stronger results are obtained by using derandomization and nondeterminism in constructing f'. A common theme in our results is the computational efficiency of pseudorandom generators. Indeed, our results both rely upon, and enable us to construct pseudorandom generators that can be computed very efficiently (in terms of parallel complexity).
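The "repeat and take the majority vote" error reduction described in (1) is easy to quantify: if a single run errs with probability p < 1/2, the majority of k independent runs errs only when at least half of the runs do, a probability that shrinks exponentially in k. A minimal sketch computing that exact failure probability (the standard, randomness-hungry technique that the thesis improves on, not the thesis's randomness-efficient construction):

```python
from math import comb

def majority_error(p: float, k: int) -> float:
    """Exact failure probability of the majority vote of k independent
    runs of a randomized algorithm that errs with probability p.
    Illustrates standard error reduction; for odd k the vote fails
    iff at least (k+1)//2 runs err (binomial tail)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range((k + 1) // 2, k + 1))
```

To push the error below δ one needs k = O(log(1/δ)) repetitions, but naive repetition multiplies the random-bit cost by k; the point of the thesis result is achieving the same error with only O(log(1/δ)) *additional* random bits, even for parallel algorithms.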

5 citations


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years:
2022: 2
2021: 6
2020: 10
2019: 9
2018: 10
2017: 32