Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published within this topic, receiving 44,972 citations.


Papers
Book Chapter
01 Jan 1997
TL;DR: A new lower bound on the computational complexity of infinite word generation is found: real-time computation, a binary working alphabet, and o(n/(log n)²) space are insufficient to generate a concrete infinite word over a two-letter alphabet.
Abstract: Most of the previous work on the complexity of infinite words has measured complexity descriptionally, i.e., an infinite word w had a "small" complexity if it was generated by a morphism or another simple mechanism, and w was considered "complex" if more complex devices (generalized sequential machines, gsm's) were needed to generate it. In [5] the study of the computational complexity of infinite word generation, and of its relation to the descriptional characterizations mentioned above, was started. The complexity classes GSPACE(f) = {infinite words generated in space f(n)} are defined there, and some fundamental mechanisms for infinite word generation are related to them. It is also proved there that there is no hierarchy between GSPACE(O(1)) and GSPACE(log₂ n). Here, GSPACE(f) ⊂ GSPACE(g) is proved for g(n) ≥ f(n) ≥ log₂ n with f(n) = o(g(n)). The main result of this paper is a new lower bound on the computational complexity of infinite word generation: real-time computation, a binary working alphabet, and o(n/(log n)²) space are insufficient to generate a concrete infinite word over a two-letter alphabet.

1 citation
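
As a concrete illustration of space-bounded infinite word generation (an example of the general setting, not this paper's construction), the Thue–Morse word over {0, 1} can be emitted symbol by symbol from a plain counter and its bit parity, so the first n symbols need only O(log n) bits of working space. A minimal Python sketch:

# Illustrative sketch: generate the Thue-Morse word t(0) t(1) t(2) ...
# where t(i) is the parity of the number of 1-bits in i.
# Emitting the first n symbols stores only the counter i,
# i.e. O(log n) bits of working space.

def thue_morse(n):
    """Yield the first n symbols of the Thue-Morse word."""
    for i in range(n):
        yield bin(i).count("1") % 2

if __name__ == "__main__":
    print("".join(str(b) for b in thue_morse(32)))
    # -> 01101001100101101001011001101001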

Proceedings Article
01 Jan 2006
TL;DR: The formal organization of the Dagstuhl seminar "Complexity of Boolean Functions", held in March 2006, is described; the topics discussed there are introduced, and some of the major achievements are mentioned.
Abstract: We briefly describe the state of the art concerning the complexity of discrete functions. Computational models and analytical techniques are summarized. After describing the formal organization of the Dagstuhl seminar "Complexity of Boolean Functions" held in March 2006, we introduce the different topics that have been discussed there and mention some of the major achievements. The summary closes with an outlook on the development of discrete computational complexity in the future.

1 citation

Proceedings Article
25 Nov 2015
TL;DR: A general decomposition approach is given to decompose a binary sequence with period 2^n into some disjoint cubes, and a counting formula for m-cubes with the same linear complexity is derived, which is equivalent to the counting formula for k-error vectors.
Abstract: The linear complexity and k-error linear complexity of a sequence have been used as important measures of keystream strength; hence designing a sequence with high linear complexity and k-error linear complexity is a popular research topic in cryptography. In order to study the k-error linear complexity of binary sequences with period 2^n, a new tool called cube theory is developed. In this paper, we first give a general decomposition approach to decompose a binary sequence with period 2^n into some disjoint cubes. Second, a counting formula for m-cubes with the same linear complexity is derived, which is equivalent to the counting formula for k-error vectors. The counting formula for 2^n-periodic binary sequences which can be decomposed into more than one cube is also investigated, extending an important result by Etzion et al.

1 citation
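
For background (this is the classical notion the paper builds on, not its cube-theory decomposition): the linear complexity of a binary sequence with period 2^n can be computed by the well-known Games–Chan algorithm. A minimal sketch:

# Minimal sketch of the Games-Chan algorithm for the linear
# complexity of a binary sequence with period 2^n (classical
# background; not the cube decomposition from the paper above).

def games_chan(s):
    """Linear complexity of one period s (len(s) must be a power of 2)."""
    s = list(s)
    c = 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        if left == right:
            s = left                       # period halves, complexity unchanged
        else:
            c += half                      # complexity gains len(s)/2
            s = [a ^ b for a, b in zip(left, right)]
    return c + s[0]                        # last bit contributes 0 or 1

# Examples: the all-ones sequence of period 8 has linear complexity 1;
# a single 1 in a period of 8 gives the maximum complexity 8.
assert games_chan([1] * 8) == 1
assert games_chan([1, 0, 0, 0, 0, 0, 0, 0]) == 8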

Proceedings Article
01 Jan 1991
TL;DR: A method of global searching is discussed which takes some of the advantageous principles of Bayesian methods, such as memory of past evaluations, yet also uses principles of genetic algorithms, such as parallel structure and reduced complexity.
Abstract: A method of global searching is discussed which takes some of the advantageous principles of Bayesian methods, such as memory of past evaluations, yet also uses principles of genetic algorithms, such as parallel structure and reduced complexity. Results for this method are reported in terms of the number of evaluations needed to converge to the global solution for a standard test function. The algorithm is shown to converge probabilistically as the number of evaluations approaches infinity, and is shown to have a computational complexity of O(i), where i is the number of iterations.

1 citation
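
The abstract describes the idea only at a high level; the sketch below is a hypothetical toy hybrid (all names and parameters are our own, not the paper's algorithm) combining GA-style population sampling with a Bayesian-style memory of past evaluations, so that no candidate is evaluated near an already-remembered point:

import random

# Hypothetical toy hybrid of GA-style parallel sampling with a
# memory of past evaluations (illustrative only; not the
# algorithm from the paper above).

def hybrid_search(f, bounds, pop_size=20, iters=100, min_dist=1e-3):
    lo, hi = bounds
    memory = []                              # all (x, f(x)) evaluated so far
    best_x, best_y = None, float("inf")
    for _ in range(iters):
        # GA-like step: a whole population of candidates per iteration.
        for _ in range(pop_size):
            x = random.uniform(lo, hi)
            # Memory step: skip points too close to past evaluations.
            if any(abs(x - mx) < min_dist for mx, _ in memory):
                continue
            y = f(x)
            memory.append((x, y))
            if y < best_y:
                best_x, best_y = x, y
    return best_x, best_y

# Usage on a simple 1-D test function with its minimum at x = 0.
x, y = hybrid_search(lambda x: x * x, bounds=(-5.0, 5.0))
print(f"best x = {x:.4f}, f(x) = {y:.6f}")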

Proceedings Article
D.N. Kwon
02 Jul 2007
TL;DR: It is shown that the proposed optimization scheme can reduce the coding computational complexity while the degradation of visual quality is almost negligible.
Abstract: H.264 is well known for achieving the best compression performance, at the cost of high computational complexity. To make it more viable in computationally constrained environments, a performance optimization scheme based on the trade-off between computational complexity and distortion is proposed in this paper. Computationally intensive modules are empirically identified and analyzed in terms of computational complexity and distortion in the H.264 video coding framework. Operating modes are extracted in the complexity-distortion space through an exhaustive search. It is shown that the proposed optimization scheme can reduce the coding computational complexity while the degradation of visual quality is almost negligible.

1 citation
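
To make the trade-off concrete (the operating points and numbers below are hypothetical, not the paper's measurements): once each coding configuration has an estimated complexity and distortion, selecting an operating mode reduces to picking the lowest-distortion point that fits a complexity budget.

# Hypothetical complexity-distortion mode selection (illustrative
# only; the operating points below are made up, not the paper's data).
# Each mode: (name, relative complexity, distortion as MSE).
MODES = [
    ("full_search",   1.00, 10.0),
    ("fast_me",       0.60, 10.4),
    ("reduced_modes", 0.35, 11.1),
    ("skip_biased",   0.20, 13.0),
]

def pick_mode(budget):
    """Lowest-distortion mode whose complexity fits the budget."""
    feasible = [m for m in MODES if m[1] <= budget]
    if not feasible:
        return min(MODES, key=lambda m: m[1])   # fall back to cheapest mode
    return min(feasible, key=lambda m: m[2])

print(pick_mode(0.5))   # -> ('reduced_modes', 0.35, 11.1)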


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations, 89% related
Approximation algorithm: 23.9K papers, 654.3K citations, 87% related
Data structure: 28.1K papers, 608.6K citations, 83% related
Upper and lower bounds: 56.9K papers, 1.1M citations, 83% related
Computational complexity theory: 30.8K papers, 711.2K citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2022  2
2021  6
2020  10
2019  9
2018  10
2017  32