Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published within this topic, receiving 44,972 citations.


Papers
Proceedings ArticleDOI
05 Dec 2005
TL;DR: Two QRD-based tree search algorithms, likely candidates for implementation due to their relatively low computational complexity, are compared; while both achieve near-ML performance, the QRD-stack algorithm is shown to be more efficient, with much lower computational complexity than QRD-M.
Abstract: Due to their achievable capacity, multiple-input/multiple-output (MIMO) systems have gained popularity in recent years. While several data detection algorithms are available for MIMO systems, simple algorithms usually perform unsatisfactorily, while more complex ones are infeasible for hardware implementation. In this paper, we compare two QRD-based tree search algorithms that are likely candidates for implementation purposes due to their relatively low computational complexity. The QRD-M algorithm proposed in (J. Yue et al., 2003) yields near-ML performance while requiring only a fraction of the computational complexity of an ML receiver; the QRD-stack algorithm displays similar performance. The performance of both algorithms is compared, and while both achieve near-ML performance, the QRD-stack algorithm is shown to be more efficient, with much lower computational complexity than QRD-M.

62 citations
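The QRD-M search described in the abstract above lends itself to a short illustration: after a QR decomposition of the channel matrix, the detector walks the symbol tree one antenna layer at a time and keeps only the M branches with the smallest accumulated metric. The sketch below is a generic version of this breadth-first M-algorithm, not the authors' implementation; the QPSK constellation, 4x4 channel, and noise level in the usage lines are arbitrary assumptions.

```python
# Minimal sketch of a generic QRD-M (breadth-first, M-best) MIMO detector.
# Illustrative only; not the exact algorithm evaluated in the paper.
import numpy as np

def qrd_m_detect(H, y, constellation, M=4):
    """Detect x from y = H x + noise, keeping at most M branches per tree level."""
    nt = H.shape[1]
    Q, R = np.linalg.qr(H)              # H = Q R with R upper triangular
    z = Q.conj().T @ y                  # rotate the received vector
    branches = [((), 0.0)]              # (symbols for layers level..nt-1, metric)
    for level in range(nt - 1, -1, -1): # detect from the last layer upward
        expanded = []
        for symbols, metric in branches:
            for s in constellation:
                cand = (s,) + symbols
                # interference from the layers already decided on this branch
                interf = sum(R[level, level + k] * cand[k]
                             for k in range(1, len(cand)))
                inc = abs(z[level] - R[level, level] * s - interf) ** 2
                expanded.append((cand, metric + inc))
        # survivor selection: keep the M smallest accumulated metrics
        branches = sorted(expanded, key=lambda b: b[1])[:M]
    return np.array(branches[0][0])

# Usage with assumed parameters: 4x4 Rayleigh channel, QPSK symbols, light noise.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (np.random.randn(4, 4) + 1j * np.random.randn(4, 4)) / np.sqrt(2)
x = np.random.choice(qpsk, 4)
y = H @ x + 0.05 * (np.random.randn(4) + 1j * np.random.randn(4))
print(qrd_m_detect(H, y, qpsk, M=4), x)
```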

Journal ArticleDOI
TL;DR: It is shown that the worst-case computational time complexity of the presented algorithm is O(Kr(m + n log n)), which is also the best known complexity for this problem.

62 citations

Journal ArticleDOI
TL;DR: An extension of the framework for discussing the computational complexity of problems involving uncountably many objects (such as real numbers, sets and functions) that can be represented only through approximation is proposed, using a certain class of string functions as names for these objects.
Abstract: We propose an extension of the framework for discussing the computational complexity of problems involving uncountably many objects, such as real numbers, sets and functions, that can be represented only through approximation. The key idea is to use (a certain class of) string functions as names representing these objects. These are more expressive than infinite sequences, which served as names in prior work that formulated complexity in more restricted settings. An advantage of using string functions is that we can define their "size" in a way inspired by higher-type complexity theory. This enables us to talk about computation on string functions whose time or space is bounded polynomially in the input size, giving rise to more general analogues of the classes P, NP, and PSPACE. We also define NP- and PSPACE-completeness under suitable many-one reductions. Because our framework separates machine computation and semantics, it can be applied to problems on sets of interest in analysis once we specify a suitable representation (encoding). As prototype applications, we consider the complexity of functions (operators) on real numbers, real sets, and real functions. For example, the task of numerical algorithms for solving a certain class of differential equations is naturally viewed as an operator taking real functions to real functions. As there was no complexity theory for operators, previous results only stated how complex the solution can be. We now reformulate them and show that the operator itself is polynomial-space complete.

62 citations
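To make the idea of computing on "names" concrete, here is a toy sketch in my own formulation, not the paper's string-function machinery: a real number is represented by a function that, given a precision n, returns a rational within 2^-n of it, and an operator maps such names to names by querying its arguments at whatever precision it needs.

```python
# Toy model of names for real numbers: a name is a function n -> Fraction
# within 2**-n of the number.  Operators map names to names.  This is only
# an illustration of the representation idea, not the paper's framework.
from fractions import Fraction

def rational_name(q):
    """Name of a rational number: exact at every precision."""
    q = Fraction(q)
    return lambda n: q

def add(x_name, y_name):
    """Addition on names: to be within 2**-n of x + y, query each within 2**-(n+1)."""
    return lambda n: x_name(n + 1) + y_name(n + 1)

def sqrt_name(x_name):
    """Square root (of a non-negative real) as an operator on names."""
    def approx(n):
        # Query x so accurately that the error it induces in sqrt(x) is < 2**-(n+1).
        x = max(x_name(2 * n + 2), Fraction(0))
        lo, hi = Fraction(0), x + 1                  # sqrt(x) lies in [lo, hi]
        while hi - lo > Fraction(1, 2 ** (n + 1)):   # bisect down to 2**-(n+1)
            mid = (lo + hi) / 2
            if mid * mid <= x:
                lo = mid
            else:
                hi = mid
        return lo                                    # within 2**-n of the true root
    return approx

root2 = sqrt_name(rational_name(2))
print(float(root2(20)))                              # ~1.4142135, within 2**-20
print(float(add(root2, rational_name(1))(20)))       # ~2.4142135
```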

Journal ArticleDOI
30 Sep 2011-Chaos
TL;DR: A geometric approach to complexity is developed, based on the principle that complexity requires interactions at different scales of description; it presents a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families.
Abstract: We develop a geometric approach to complexity based on the principle that complexity requires interactions at different scales of description. Complex systems are more than the sum of their parts of any size and not just more than the sum of their elements. Using information geometry, we therefore analyze the decomposition of a system in terms of an interaction hierarchy. In mathematical terms, we present a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families. Within our framework, previously proposed complexity measures find their natural place and gain a new interpretation.

62 citations
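The lowest rung of such an interaction hierarchy is the familiar multi-information (total correlation): the KL divergence of a joint distribution from the nearest fully independent model, i.e. the product of its marginals. The sketch below computes it for a small binary random field; it illustrates only this first level, not the paper's full family of measures.

```python
# Multi-information (total correlation) of a binary random field: the
# divergence from the independence model, i.e. the first level of an
# interaction hierarchy.  Illustration only, not the paper's full measures.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))        # Shannon entropy in bits

def multi_information(joint):
    """joint: array of shape (2,) * n holding the joint pmf of n binary variables."""
    joint = np.asarray(joint, dtype=float)
    n = joint.ndim
    marginal_sum = sum(
        entropy(joint.sum(axis=tuple(j for j in range(n) if j != i)))
        for i in range(n)
    )
    return marginal_sum - entropy(joint)  # = D_KL(joint || product of marginals)

# Three perfectly correlated fair bits: each marginal has 1 bit of entropy,
# the joint has 1 bit, so the multi-information is 3 - 1 = 2 bits.
joint = np.zeros((2, 2, 2))
joint[0, 0, 0] = joint[1, 1, 1] = 0.5
print(multi_information(joint))           # 2.0
```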

Journal ArticleDOI
TL;DR: A deterministic protocol is provided that has linear space complexity, linear time complexity for a read operation, and constant time complexity for a write, together with a probabilistic variant that has an overwhelmingly small, controllable probability of error.
Abstract: We address the problem of reading several variables (components) X_1, ..., X_c, all in one atomic operation, by only one process, called the reader, while each of these variables is being written by a set of writers. All operations (i.e., both reads and writes) are assumed to be totally asynchronous and wait-free. For this problem, only algorithms that require at best quadratic time and space complexity can be derived from the existing literature. (The time complexity of a construction is the number of suboperations of a high-level operation, and its space complexity is the number of atomic shared variables it needs.) In this paper, we provide a deterministic protocol that has linear (in the number of processes) space complexity, linear time complexity for a read operation, and constant time complexity for a write. Our solution does not make use of time-stamps. Rather, it is the memory location where a write writes that differentiates it from the other writes. Also, by introducing randomness in the location from which the reader gets the value it returns, we obtain a conceptually very simple probabilistic algorithm. This algorithm has an overwhelmingly small, controllable probability of error. Its space complexity, and also the time complexity of a read operation, are sublinear. The time complexity of a write is constant. On the other hand, under the Archimedean time assumption, we get a protocol whose time and space complexity do not depend on the number of writers, but are linear in the number of components only. (The time complexity of a write operation is still constant.)

61 citations
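For contrast with the protocols summarized above, the textbook way to read several components consistently is the "double collect" scan sketched below: keep collecting until two successive collects agree. It uses per-component version counters and can retry forever under contention, so it is lock-free rather than wait-free; avoiding exactly these drawbacks (no time-stamps, guaranteed termination, lower complexity) is the point of the paper's constructions. The class and function names here are mine.

```python
# Classical double-collect scan, shown only to make the composite-register
# problem concrete.  It is NOT the paper's protocol: it uses version counters
# and is lock-free (a scan may retry indefinitely), not wait-free.
class Component:
    """One component; (value, version) is replaced by a single reference
    assignment, standing in for one atomic shared register."""
    def __init__(self, value=0):
        self.cell = (value, 0)

    def write(self, value):              # one writer per component assumed here
        _, version = self.cell
        self.cell = (value, version + 1)

def double_collect_scan(components):
    """Collect all cells twice; if nothing changed in between, the values
    were simultaneously present and form a consistent snapshot."""
    while True:
        first = [c.cell for c in components]
        second = [c.cell for c in components]
        if first == second:
            return [value for value, _ in first]

regs = [Component() for _ in range(3)]
regs[1].write(42)
print(double_collect_scan(regs))         # [0, 42, 0]
```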


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations, 89% related
Approximation algorithm: 23.9K papers, 654.3K citations, 87% related
Data structure: 28.1K papers, 608.6K citations, 83% related
Upper and lower bounds: 56.9K papers, 1.1M citations, 83% related
Computational complexity theory: 30.8K papers, 711.2K citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years:
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32