Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published on this topic, receiving 44,972 citations.


Papers
Journal ArticleDOI
TL;DR: This article investigates the similarities and differences between discrete Kolmogorov complexity and the real Kolmogorov complexity introduced by Montaña and Pardo (1998); the BSS machine (aka real-RAM) has been established as a major model of computation over the reals.

7 citations
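For readers unfamiliar with the notion in the TL;DR above, the standard discrete definition is worth recording. This is the textbook formulation, not the real-valued variant studied in the article:

```latex
% Discrete Kolmogorov complexity of a string x relative to a
% fixed universal machine U: the length of a shortest program
% that makes U output x.
\[
  K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}
\]
% The article's real-valued analogue replaces binary programs and
% strings by computations of a BSS machine over the real numbers.
```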

Book ChapterDOI
31 Mar 2009
TL;DR: An extension of the generalized complexity spaces of Romaguera and Schellekens is presented, and the new complexity approach is shown to be suitable for providing quantitative computational models in Theoretical Computer Science.
Abstract: In 1995 M. Schellekens introduced the theory of complexity spaces as part of the development of a mathematical (topological) foundation for the complexity analysis of programs and algorithms [Electronic Notes in Theoret. Comput. Sci. 1 (1995), 211-232]. This theory is based on the structure of quasi-metric spaces, which allows one to measure the relative progress made in lowering the complexity when one program is replaced by another. In his paper, Schellekens showed the applicability of the theory of complexity spaces to the analysis of Divide and Conquer algorithms. Later on, S. Romaguera and Schellekens introduced the so-called dual (quasi-metric) complexity space in order to obtain a more robust mathematical structure for the complexity analysis of programs and algorithms [Topology Appl. 98 (1999), 311-322]. They studied some properties of the original complexity space that are interesting from a computational point of view via the analysis of the dual one, and they also gave an application of the dual approach to the complexity analysis of Divide and Conquer algorithms. More recently, Romaguera and Schellekens introduced and studied a general complexity framework which unifies the original complexity space and the dual one under the same formalism [Quaestiones Mathematicae 23 (2000), 359-374]. Motivated by this work, we present an extension of the generalized complexity spaces of Romaguera and Schellekens and show, by means of the so-called domain of words, that the new complexity approach is suitable for providing quantitative computational models in Theoretical Computer Science. In particular, our new complexity framework is shown to be an appropriate tool to model the meaning of while-loops in the formal analysis of high-level programming languages.

7 citations
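To make the structure discussed in the abstract concrete, here is Schellekens' original complexity space in its usual formulation (a sketch of the 1995 definition the chapter builds on; the chapter's generalized spaces extend it):

```latex
% Schellekens' complexity space: the set of complexity functions
%   C = \{ f : \omega \to (0, \infty] \;:\;
%          \sum_{n=0}^{\infty} 2^{-n} \tfrac{1}{f(n)} < \infty \}
% equipped with the quasi-metric
\[
  d_C(f, g) \;=\; \sum_{n=0}^{\infty} 2^{-n}
    \max\!\Big( \frac{1}{g(n)} - \frac{1}{f(n)},\; 0 \Big).
\]
% d_C(f, g) = 0 exactly when f(n) <= g(n) for all n, i.e. when f
% is at least as efficient as g; this asymmetry is what lets the
% space measure relative progress when one program replaces another.
```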

Book ChapterDOI
06 Apr 1992
TL;DR: A distributed algorithm for computing all maximal cliques in an arbitrary network is presented; it is time efficient for every class of networks with a polynomial number of maximal cliques, and it makes use of the algebraic properties of bipartite cliques, which form a lattice structure.
Abstract: A distributed algorithm for computing all maximal cliques in an arbitrary network is presented; it is time efficient for every class of networks with a polynomial number of maximal cliques. The algorithm makes use of the algebraic properties of bipartite cliques, which form a lattice structure. Assuming that it takes unit time to transmit a message of length O(log n) bits, the algorithm has a time complexity of O(Mn log n), where M is the number of maximal cliques and n is the number of processors in the network. Under the same assumption on message length, the communication complexity is O(M²n² log n).

7 citations
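The abstract does not reproduce the algorithm itself. As a sequential point of reference for what is being computed, the classic Bron-Kerbosch recursion below enumerates the same M maximal cliques of a graph (a sketch only: the paper's algorithm is distributed and lattice-based, and this is not it):

```python
# Sequential Bron-Kerbosch enumeration of all maximal cliques,
# shown only as a reference for the quantity M in the bounds above.
def bron_kerbosch(R, P, X, adj, out):
    """R: current clique, P: candidate vertices, X: excluded vertices,
    adj: dict mapping each vertex to its set of neighbours."""
    if not P and not X:
        out.append(set(R))  # no vertex can extend R: it is maximal
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)  # v has been fully explored in this branch
        X.add(v)     # forbid rediscovering cliques through v

# A triangle {0,1,2} with a pendant edge 2-3 has two maximal cliques.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(cliques)  # [{0, 1, 2}, {2, 3}]
```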

Journal ArticleDOI
TL;DR: The shuffling buffer technique is presented; it introduces sufficient randomness to guarantee an improvement on the worst-case complexity while knowing only k data in advance.
Abstract: The complexity of randomized incremental algorithms is analyzed under the assumption of a random insertion order. To guarantee this hypothesis, the n data have to be known in advance so that they can be shuffled, which contradicts the on-line nature of the algorithm. We present the shuffling buffer technique, which introduces sufficient randomness to guarantee an improvement on the worst-case complexity while knowing only k data in advance. Typically, an algorithm with O(n²) worst-case complexity and O(n) or O(n log n) randomized complexity has an intermediate complexity, depending on k, with the shuffling buffer. We illustrate this with binary search trees, the number of Delaunay triangles, and the number of trapezoids in a trapezoidal map created during an incremental construction.

7 citations
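A minimal sketch of the buffering idea described in the abstract, with names and interface of our own choosing: hold up to k items in a buffer and forward a uniformly random buffered item to the incremental algorithm each time a new item arrives.

```python
import random

def shuffling_buffer(stream, k, rng=random):
    """Yield the items of `stream` in a partially shuffled order,
    looking at most k items ahead (the buffer size)."""
    buffer = []
    for item in stream:
        buffer.append(item)
        if len(buffer) == k:
            i = rng.randrange(k)  # pick a uniform buffered item
            buffer[i], buffer[-1] = buffer[-1], buffer[i]
            yield buffer.pop()
    rng.shuffle(buffer)  # flush the last < k items in random order
    yield from buffer

# k = 1 keeps the original on-line order; k = n yields a uniformly
# random permutation, recovering the classical randomized analysis.
for p in shuffling_buffer(range(10), 4):
    pass  # feed p to the incremental construction here
```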

Proceedings ArticleDOI
19 May 2013
TL;DR: A novel class of set-theoretic, sparsity-promoting adaptive algorithms of linear computational complexity is presented; sparsity is induced via generalized thresholding operators, which correspond to nonconvex penalties such as those used in a number of sparse LMS-based schemes.
Abstract: This paper deals with a novel class of set-theoretic adaptive sparsity-promoting algorithms of linear computational complexity. Sparsity is induced via generalized thresholding operators, which correspond to nonconvex penalties such as those used in a number of sparse LMS-based schemes. The results demonstrate the significant performance gain of our approach at comparable computational cost.

6 citations
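For orientation, the sketch below shows one common instance of a thresholding operator attached to a plain LMS update: soft thresholding, which corresponds to an l1 penalty. It is a generic illustration with hypothetical names, not the set-theoretic algorithm or the nonconvex operators of the paper; note that each step costs O(dim w), matching the linear-complexity claim in spirit.

```python
import numpy as np

def soft_threshold(w, t):
    """Shrink coefficients toward zero and null those below t in
    magnitude; a classic sparsity-promoting thresholding operator."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def sparse_lms_step(w, x, d, mu=0.01, t=1e-3):
    """One LMS iteration followed by thresholding.
    w: current weights, x: input vector, d: desired response."""
    e = d - x @ w                # a-priori estimation error
    w = w + mu * e * x           # standard LMS gradient step, O(dim w)
    return soft_threshold(w, t)  # push small coefficients to exactly 0
```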


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations (89% related)
Approximation algorithm: 23.9K papers, 654.3K citations (87% related)
Data structure: 28.1K papers, 608.6K citations (83% related)
Upper and lower bounds: 56.9K papers, 1.1M citations (83% related)
Computational complexity theory: 30.8K papers, 711.2K citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years:
2022: 2
2021: 6
2020: 10
2019: 9
2018: 10
2017: 32