Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published on this topic, receiving 44,972 citations.


Papers
Posted Content
TL;DR: It is shown that any parametric elimination procedure with this property (algebraic robustness) must necessarily have exponential sequential time complexity, even if highly efficient data structures are used.
Abstract: This paper is devoted to the complexity analysis of a particular property, called algebraic robustness, owned by all known symbolic methods of parametric polynomial equation solving (geometric elimination). It is shown that any parametric elimination procedure which owns this property must necessarily have an exponential sequential time complexity, even if highly performant data structures (such as the straight-line program encoding of polynomials) are used. The paper finishes with the motivated introduction of a new non-uniform complexity measure for zero-dimensional polynomial equation systems, called elimination complexity.

23 citations

Proceedings ArticleDOI
30 Oct 1989
TL;DR: In this paper, it is shown that one can learn under all simple distributions if one can learn under one fixed simple distribution, called the universal distribution; a distribution is called simple if it is dominated by a semicomputable distribution.
Abstract: It is pointed out that in L.G. Valiant's learning model (Commun. ACM, vol. 27, p. 1134-42, 1984) many concepts turn out to be too hard to learn, whereas in practice, almost nothing we care to learn appears to be not learnable. To model the intuitive notion of learning more closely, it is assumed that learning happens under an arbitrary simple distribution, rather than under an arbitrary distribution as assumed by Valiant. A distribution is called simple if it is dominated by a semicomputable distribution. A general theory of learning under simple distributions is developed. In particular, it is shown that one can learn under all simple distributions if one can learn under one fixed simple distribution, called the universal distribution. Interesting learning algorithms and several quite general new learnable classes are presented. It is shown that for essentially all algorithms, if the inputs are distributed according to the universal distribution, then the average-case complexity is of the same order of magnitude as the worst-case complexity.

23 citations
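As a rough sketch of that last claim (this is the standard formulation of the result under the usual definitions, not text taken from this page): write m for the universal distribution and T(x) for an algorithm's running time on input x; then the m-average running time over inputs of length n is of the same order of magnitude as the worst-case running time.

```latex
% m is the universal (semicomputable) distribution; by the coding theorem,
% m(x) = 2^{-K(x) + O(1)}, where K is prefix Kolmogorov complexity.
\[
  \frac{\sum_{|x| = n} \mathbf{m}(x)\, T(x)}{\sum_{|x| = n} \mathbf{m}(x)}
  \;=\; \Theta\!\Bigl(\max_{|x| = n} T(x)\Bigr).
\]
```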

Journal ArticleDOI
TL;DR: It turns out that, asymptotically and in the average case, the complexity gap between the several constructions is significantly larger than in the worst case.

22 citations

Journal Article
TL;DR: The hypothesis is that a concept's level of difficulty is determined by that of the multi-agent communication protocol; the model is compared with Feldman's, in which logical complexity (i.e., the maximal Boolean compression of the disjunctive normal form) is taken to be the best possible measure of conceptual complexity.
Abstract: Conceptual complexity is assessed by a multi-agent system which is tested experimentally. In this model, where each agent represents a working memory unit, concept learning is an inter-agent communication process that promotes the elaboration of common knowledge from distributed knowledge. Our hypothesis is that a concept's level of difficulty is determined by that of the multi-agent communication protocol. Three versions of the model, which differ according to how they compute entropy, are tested and compared to Feldman's model (Nature, 2000), where logical complexity (i.e., the maximal Boolean compression of the disjunctive normal form) is the best possible measure of conceptual complexity. All three models proved superior to Feldman's: the serial version is ahead by 5.5 points of variance in explaining adult inter-concept performance. Computational complexity theories (Johnson, 1990; Lassaigne & Rougemont, 1996) provide a measure of complexity in terms of the computation load associated with a program's execution time. In this approach, called the structural approach, problems are grouped into classes on the basis of the machine time and space required by the algorithms used to solve them. A program is a function or a combination of functions. In view of developing psychological models, it can be likened to a concept, especially when the range of y = f(x) is confined to the values 0 and 1. A neighboring perspective (Delahaye, 1994), aimed at describing the complexity of objects (and not at solving problems), is useful for distinguishing the "orderless, irregular, chaotic, random" kind of complexity (this quantity is called algorithmic complexity, algorithmic randomness, algorithmic information content or Chaitin-Kolmogorov complexity; Chaitin,

22 citations
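To make Feldman's notion of logical complexity concrete, here is a minimal, hypothetical Python sketch (the function names and example concepts are illustrative and not taken from the paper): it uses sympy's SOPform to minimize the disjunctive normal form of a concept given by its positive examples, then counts the literals that remain.

```python
# Hypothetical illustration of a Feldman-style "logical complexity" score:
# the number of literals in a minimized DNF of a Boolean concept.
from sympy import symbols, Symbol
from sympy.logic import SOPform
from sympy.logic.boolalg import Not, And, Or

def count_literals(expr):
    """Count literal occurrences (x or ~x) in a Boolean expression."""
    if isinstance(expr, (Symbol, Not)):
        return 1
    if isinstance(expr, (And, Or)):
        return sum(count_literals(arg) for arg in expr.args)
    return 0  # BooleanTrue / BooleanFalse contain no literals

def logical_complexity(num_vars, positive_examples):
    """Literal count of a sympy-minimized DNF of the concept whose
    positive examples are given as 0/1 tuples over num_vars variables."""
    variables = symbols(f"x0:{num_vars}")          # x0, x1, ..., x_{n-1}
    dnf = SOPform(variables, [list(e) for e in positive_examples])
    return count_literals(dnf)

# An XOR-like concept needs a larger minimal DNF than a single-feature one,
# matching the intuition that it is the harder concept to learn.
xor_concept    = [(0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]  # x0 XOR x1
single_feature = [(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]  # x0
print(logical_complexity(3, xor_concept))      # 4 literals: (x0 & ~x1) | (~x0 & x1)
print(logical_complexity(3, single_feature))   # 1 literal: x0
```

This captures only one common reading of "maximal Boolean compression" (literal count of a minimal sum-of-products form); Feldman's exact measure may weight formula size differently.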


Network Information
Related Topics (5)
Time complexity: 36K papers, 879.5K citations, 89% related
Approximation algorithm: 23.9K papers, 654.3K citations, 87% related
Data structure: 28.1K papers, 608.6K citations, 83% related
Upper and lower bounds: 56.9K papers, 1.1M citations, 83% related
Computational complexity theory: 30.8K papers, 711.2K citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32