scispace - formally typeset
Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published on this topic, receiving 44,972 citations.


Papers
Proceedings ArticleDOI
19 Jun 2016
TL;DR: Obtains an explicit example of a search problem with external information complexity ≤ O(k), with respect to any input distribution, and distributional communication complexity ≥ 2^k, with respect to some input distribution.
Abstract: We show an exponential gap between communication complexity and external information complexity, by analyzing a communication task suggested as a candidate by Braverman. Previously, only a separation of communication complexity and internal information complexity was known. More precisely, we obtain an explicit example of a search problem with external information complexity ≤ O(k), with respect to any input distribution, and distributional communication complexity ≥ 2^k, with respect to some input distribution. In particular, this shows that a communication protocol cannot always be compressed to its external information. By a result of Braverman, our gap is the largest possible. Moreover, since the upper bound of O(k) on the external information complexity of the problem is obtained with respect to any input distribution, our result implies an exponential gap between communication complexity and information complexity (both internal and external) in the non-distributional setting of Braverman. In this setting, no gap was previously known, even for internal information complexity.

38 citations

Book
01 May 2001
TL;DR: A more recent snapshot of resource-bounded measure is given, focusing not so much on what has been achieved to date as on what the authors hope will be achieved in the near future.

38 citations

Proceedings ArticleDOI
24 Oct 1999
TL;DR: A geometrically motivated classifier is presented and applied, with both training and testing stages, to 3 real datasets; its results, compared to those from 33 other classifiers, have the least error.
Abstract: Automation has arrived to parallel coordinates. A geometrically motivated classifier is presented and applied, with both training and testing stages, to 3 real datasets; compared to those from 33 other classifiers, our results have the least error. The algorithm is based on parallel coordinates and has very low computational complexity in the number of variables and the size of the dataset, in contrast to the very high or unknown (often unstated) complexity of other classifiers. The low complexity enables rule derivation in near real-time, making the classification adaptive to changing conditions. It provides comprehensible and explicit rules, in contrast to neural networks, which are "black boxes". It performs dimensionality selection, finding the minimal set of original variables (not transformed new variables, as in Principal Component Analysis) required to state the rule, and it orders these variables so as to optimize the clarity of separation between the designated set and its complement, which solves the pesky "ordering problem" in parallel coordinates. The algorithm is display independent, so it can be applied to datasets very large in size and number of variables. Though it is instructive to present the results visually, the input size is no longer display-limited as in visual data mining.

38 citations

Journal ArticleDOI
TL;DR: A lattice algorithm designed for classical applications of lattice reduction, for lattice bases with a generalized knapsack-type structure where the target vectors have bounded depth; its running time is linear in the bit-length of the input entries, an improvement over the quadratic-complexity floating-point LLL algorithms.
Abstract: We present a lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors have bounded depth. For such applications, the complexity of the algorithm improves traditional lattice reduction by replacing some dependence on the bit-length of the input vectors by some dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then our algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity floating-point LLL algorithms. To illustrate the usefulness of this algorithm we show that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
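For orientation, the "traditional lattice reduction" the abstract improves on is LLL-style basis reduction. Below is a minimal textbook LLL sketch (δ = 3/4) in exact rational arithmetic — an illustration of the baseline technique only, not the paper's algorithm, and much slower than the floating-point variants it is compared against. The function name `lll` and the example basis are my own choices.

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL reduction with exact rational arithmetic.

    Recomputes the Gram-Schmidt orthogonalization after every basis
    change -- simple and correct, but far from the optimized
    floating-point LLL variants discussed in the abstract.
    """
    B = [[Fraction(x) for x in row] for row in basis]
    n = len(B)

    def gram_schmidt():
        Bs, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = B[i][:]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    Bs, mu = gram_schmidt()
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                B[k] = [bk - q * bj for bk, bj in zip(B[k], B[j])]
                Bs, mu = gram_schmidt()
        # Lovasz condition: keep going, or swap and backtrack.
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k], B[k - 1] = B[k - 1], B[k]
            Bs, mu = gram_schmidt()
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]

if __name__ == "__main__":
    # Classic small example: the reduced basis contains a vector of norm 1.
    print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```

The exact `Fraction` arithmetic sidesteps the precision issues that floating-point LLL implementations must manage, at the cost of the quadratic (or worse) bit-complexity that the paper's algorithm is designed to avoid.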

37 citations

01 Jun 1987
TL;DR: This article is dedicated to the study of the complexity of computing and deciding selected problems in algebra.

37 citations


Network Information
Related Topics (5)
Time complexity
36K papers, 879.5K citations
89% related
Approximation algorithm
23.9K papers, 654.3K citations
87% related
Data structure
28.1K papers, 608.6K citations
83% related
Upper and lower bounds
56.9K papers, 1.1M citations
83% related
Computational complexity theory
30.8K papers, 711.2K citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year | Papers
2022 | 2
2021 | 6
2020 | 10
2019 | 9
2018 | 10
2017 | 32