
Average-case complexity

About: Average-case complexity is a research topic. Over the lifetime, 1749 publications have been published within this topic receiving 44972 citations.


Papers
Journal ArticleDOI
TL;DR: This paper eliminates simplifying assumptions of earlier studies of A* tree-searching and replaces “Error” with a concept called “discrepancy”, a measure of the relative attractiveness of a node for expansion when that node is compared with competing nodes on the solution path.
Abstract: Previous studies of A* tree-searching have modeled heuristics as random variables, expressing the average number of nodes expanded asymptotically in terms of the distance to the goal. The conclusion reached is that A* complexity is an exponential function of heuristic error: polynomial error implies exponential complexity, and logarithmic accuracy is required for polynomial complexity.

2 citations
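The asymptotic claim above can be illustrated with a toy calculation (illustrative only, not the paper's model): if the number of nodes A* expands grows roughly like b ** error(d) for branching factor b and heuristic error error(d) at distance d to the goal, then linear (polynomial) error gives exponential growth while logarithmic error gives polynomial growth.

```python
import math

def expected_expansions(error, d, b=2.0):
    """Toy growth model (illustrative, not the paper's exact formula):
    nodes expanded grow like b ** error(d), reflecting the claim that
    A* complexity is exponential in heuristic error."""
    return b ** error(d)

for d in (10, 100, 1000):
    linear = expected_expansions(lambda n: n, d)              # polynomial error
    logar = expected_expansions(lambda n: math.log2(n), d)    # logarithmic error
    print(f"d={d}: linear error -> {linear:.3g} nodes, "
          f"log error -> {logar:.3g} nodes")
```

With b = 2 and error(d) = log2(d), the expansion count collapses to exactly d, i.e. polynomial in the distance; with error(d) = d it is 2**d.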

Journal ArticleDOI
TL;DR: This paper shows that parameter values in parameterized sets in uniformly fixed-parameter decidability classes interact with both instance complexity and Kolmogorov complexity, and that parameters convey a sense of randomness via a parameter-based upper bound on instance complexity.
Abstract: We extend the reach of fixed-parameter analysis by introducing classes of parameterized sets defined based on decidability instead of complexity. Known results in computability theory can be expressed in the language of fixed-parameter analysis, making use of the landscape of these new classes. On the one hand this unifies results that would not otherwise show their kinship, while on the other it allows for further exchange of insights between complexity theory and computability theory. In the landscape of our fixed-parameter decidability classes, we recover part of the classification of real numbers according to their computability. From this, using the structural properties of the landscape, we get a new proof of the existence of P-selective bi-immune sets. Furthermore, we show that parameter values in parameterized sets in our uniformly fixed-parameter decidability classes interact with both instance complexity and Kolmogorov complexity. By deriving a parameter based upper bound on instance complexity, we demonstrate how parameters convey a sense of randomness. Motivated by the instance complexity conjecture, we show that the upper bound on the instance complexity is infinitely often also an upper bound on the Kolmogorov complexity.

2 citations

Journal ArticleDOI
TL;DR: The design and performance evaluation of a reduced complexity algorithm for timing synchronization for wideband-code division multiple access (W-CDMA) systems and experimental results show that the proposed approach is able to deliver performance similar to traditional approaches.
Abstract: This paper presents the design and performance evaluation of a reduced complexity algorithm for timing synchronization. The complexity reduction is obtained via the introduction of approximate computing, which lightens the computational load of the algorithm with a minimal loss in precision. Timing synchronization for wideband-code division multiple access (W-CDMA) systems is utilized as the case study and experimental results show that the proposed approach is able to deliver performance similar to traditional approaches. At the same time, the proposed algorithm is able to cut the computational complexity of the traditional algorithm by a 20% factor. Furthermore, the estimation of power consumption on a reference architecture, showed that a 20% complexity reduction, corresponds to a total power saving of 45%.

2 citations
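The approximate-computing idea in the paper can be sketched with a hypothetical 1-bit correlator for code acquisition (a minimal illustration, not the paper's W-CDMA algorithm): quantizing the received samples to their sign lightens each correlation step, yet the acquisition peak typically lands at the same timing offset as the full-precision search.

```python
import random

random.seed(7)
code = [random.choice((-1, 1)) for _ in range(64)]   # hypothetical spreading code
offset = 37                                          # true timing offset
rx = [random.gauss(0, 0.4) for _ in range(256)]      # noise floor
for i, c in enumerate(code):
    rx[offset + i] += c                              # embed the code

def correlate(samples, full_precision=True):
    """Slide the code over the samples and return the offset with the
    highest correlation. The approximate variant keeps only the sign of
    each sample (1-bit quantization), cutting multiplies to sign flips."""
    best, best_k = float("-inf"), -1
    for k in range(len(samples) - len(code)):
        window = samples[k:k + len(code)]
        if not full_precision:
            window = [1 if s >= 0 else -1 for s in window]
        acc = sum(c * s for c, s in zip(code, window))
        if acc > best:
            best, best_k = acc, k
    return best_k

print(correlate(rx, True), correlate(rx, False))
```

At this noise level both searches recover the embedded offset; the approximate path trades floating-point multiply-accumulates for sign comparisons, which is the kind of precision-for-power trade the paper quantifies.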

01 Jan 2009
TL;DR: A quadrant localized lattice decode algorithm for the golden codes is proposed, which exploits the directional attributes of the received points to localize the search in a single quadrant of the multi-dimensional space and hence reduces the computational complexity of the existing lattice decoding algorithms significantly.
Abstract: A quadrant localized lattice decoding algorithm for the golden codes is proposed in this paper that requires very low computational complexity compared to the sphere decoder. The proposed algorithm exploits the directional attributes of the received points to localize the search in the single quadrant of the multi-dimensional space in which the received point lies, and hence significantly reduces the computational complexity of existing lattice decoding algorithms. We specifically apply this algorithm to the conventional sphere decoder and analyze the performance of the proposed decoder in comparison with the latter. The adopted approach, however, is sufficiently general to be applied to any popular decoding algorithm to reduce its computational complexity. Despite prior efforts to reduce the complexity of the conventional sphere decoder, decoder complexity remains the bottleneck for the implementation of MIMO systems. Note that a full-rate STC, i.e., the golden code, is used in the encoder of the present system to exploit its property of being an optimal code in the Rayleigh fading environment.

2 citations
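The quadrant-localization idea can be sketched on a toy integer lattice (a stand-in example, not the paper's golden-code decoder): restricting the candidate search to points whose coordinate signs match the received point cuts the search space by roughly a factor of 2 per dimension, while a received point well inside a quadrant still decodes to the same nearest lattice point.

```python
from itertools import product

def nearest_lattice_point(y, radius=4, quadrant_only=False):
    """Brute-force nearest integer-lattice point to y within a box of
    the given radius (a toy stand-in for lattice decoding). With
    quadrant_only=True, only candidates sharing the sign pattern of y
    are searched, mimicking the quadrant-localization idea."""
    signs = [1 if c >= 0 else -1 for c in y]
    best, best_p, tested = float("inf"), None, 0
    for p in product(range(-radius, radius + 1), repeat=len(y)):
        if quadrant_only and any(s * c < 0 for s, c in zip(signs, p)):
            continue  # candidate lies outside the received point's quadrant
        tested += 1
        dist = sum((a - b) ** 2 for a, b in zip(y, p))
        if dist < best:
            best, best_p = dist, p
    return best_p, tested

full_pt, full_tested = nearest_lattice_point((2.3, 1.7))
quad_pt, quad_tested = nearest_lattice_point((2.3, 1.7), quadrant_only=True)
print(full_pt, full_tested, quad_pt, quad_tested)
```

In two dimensions with radius 4, the full search tests 81 candidates and the quadrant search 25, with identical output; near the quadrant boundaries the restriction can of course exclude the true nearest point, which is the accuracy/complexity trade the paper analyzes.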

Proceedings ArticleDOI
09 Jul 2006
TL;DR: This work establishes a relationship between the minimum distance of a linear code C and the branching program complexity of computing the syndrome function for C and/or its dual code C⊥ and proves the conjecture of Bazzi and Mitter that a sequence of codes whose encoder function is computable by a branching program with time-space complexity ST = o(n²) cannot be asymptotically good.
Abstract: The branching program is a fundamental model of (nonuniform) computation, which conveniently captures both time and space restrictions. Recently, an interesting connection between the minimum distance of a code and the branching program complexity of its encoder was established by Bazzi and Mitter. Here, we establish a relationship between the minimum distance of a linear code C and the branching program complexity of computing the syndrome function for C and/or its dual code C⊥. Specifically, let C be an (n, k, d) linear code over F_q, and suppose that there is a branching program B that computes the syndrome vector with respect to the dual code C⊥ in time T and space S. We prove that the minimum distance of C is then bounded by d ≤ 2T(S + log₂ T)/(k log₂ q) + 1 (1). We also consider the average-case complexity in the branching program model: we show that if B computes the syndrome with respect to C⊥ in expected time T and expected space S, then d ≤ 12T(S + log₂ T + 6)/(k log₂ q) + 1 (2). Since there are trivial branching programs that compute the syndrome vector with time-space complexity ST = O(n² log q), the bound in (2) is asymptotically tight. Furthermore, with the help of the bounds in (1) and (2), we prove the conjecture of Bazzi and Mitter that a sequence of codes whose encoder function is computable by a branching program with time-space complexity ST = o(n²) cannot be asymptotically good, for the special case of self-dual codes. Our proof of these results is based on the probabilistic method developed by Borodin–Cook and Abrahamson.

2 citations
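The syndrome function at the center of these bounds is easy to state concretely. Here is a minimal sketch over F₂, using the standard (7, 4) Hamming code as a stand-in example (not a code from the paper): the syndrome s = H·yᵀ is zero exactly when y is a codeword, and for a single-bit error it reads off the error position.

```python
# Parity-check matrix H of the (7, 4) Hamming code over F2: column i
# is the binary representation of i (for i = 1..7), so a single-bit
# error at position i yields the syndrome "i" written in binary.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(y):
    """Compute s = H * y^T over F2; the all-zero syndrome means y is in C."""
    return tuple(sum(h * b for h, b in zip(row, y)) % 2 for row in H)

codeword = [0, 0, 1, 0, 1, 1, 0]   # columns 3, 5, 6 of H sum to zero mod 2
print(syndrome(codeword))           # (0, 0, 0): a valid codeword
noisy = list(codeword)
noisy[4] ^= 1                       # flip position 5 (1-indexed)
print(syndrome(noisy))              # (1, 0, 1): binary 5, the flipped position
```

A naive row-by-row evaluation like this takes time O(nk) per syndrome; the paper's point is that any branching program computing this map for a code of large minimum distance must pay a corresponding time-space cost.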


Network Information
Related Topics (5)
Time complexity
36K papers, 879.5K citations
89% related
Approximation algorithm
23.9K papers, 654.3K citations
87% related
Data structure
28.1K papers, 608.6K citations
83% related
Upper and lower bounds
56.9K papers, 1.1M citations
83% related
Computational complexity theory
30.8K papers, 711.2K citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32