scispace - formally typeset
Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published within this topic, receiving 44,972 citations.


Papers
Journal ArticleDOI
TL;DR: This paper focuses on the case of norms over C[0,1], introduces the notion of the dependence of a norm on a point, relates it to the query complexity of the norm, and shows that the dependence at almost every point is of the order of the query complexity of the norm.

19 citations

Proceedings Article
09 Jul 2016
TL;DR: This paper provides theoretical justification for exact values (or in some cases bounds) of some of the most central information complexity parameters, namely the VC dimension, the (recursive) teaching dimension, the self-directed learning complexity, and the optimal mistake bound, for classes of acyclic CP-nets.
Abstract: Learning of user preferences has become a core issue in AI research. For example, recent studies investigate learning of Conditional Preference Networks (CP-nets) from partial information. To assess the optimality of learning algorithms as well as to better understand the combinatorial structure of CP-net classes, it is helpful to calculate certain learning-theoretic information complexity parameters. This paper provides theoretical justification for exact values (or in some cases bounds) of some of the most central information complexity parameters, namely the VC dimension, the (recursive) teaching dimension, the self-directed learning complexity, and the optimal mistake bound, for classes of acyclic CP-nets. We further provide an algorithm that learns tree-structured CP-nets from membership queries. Using our results on complexity parameters, we can assess the optimality of our algorithm as well as that of another query learning algorithm for acyclic CP-nets presented in the literature.
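For small finite concept classes, the VC dimension discussed above can be computed by brute force from its definition. A minimal sketch (the function name and the example classes are illustrative, not from the paper):

```python
from itertools import combinations

def vc_dimension(domain, concepts):
    """Brute-force VC dimension of a finite concept class.

    A sample S is shattered if every subset of S can be written as
    S & c for some concept c; the VC dimension is the size of the
    largest shattered sample.
    """
    concepts = [set(c) for c in concepts]

    def shattered(sample):
        realized = {frozenset(set(sample) & c) for c in concepts}
        wanted = {frozenset(t) for r in range(len(sample) + 1)
                  for t in combinations(sample, r)}
        return wanted <= realized

    # Try the largest sample sizes first and return the first that works.
    for d in range(len(domain), 0, -1):
        if any(shattered(s) for s in combinations(domain, d)):
            return d
    return 0
```

For example, the class of integer intervals over {1, 2, 3, 4} has VC dimension 2, while one-sided thresholds have VC dimension 1. The exponential enumeration is only feasible for toy classes; for structured classes such as acyclic CP-nets, exact values require the kind of combinatorial analysis the paper carries out.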

19 citations

Journal ArticleDOI
TL;DR: This paper exploits the notion of “unfinished site”, introduced by Katajainen and Koppinen (1998) in the analysis of a grid-based two-dimensional Delaunay triangulation algorithm, generalizes it and its properties to any dimension k ⩾ 2, and uses it to analyze a 2-d-tree-based triangulation algorithm that adapts efficiently to irregular distributions.
Abstract: This paper exploits the notion of “unfinished site”, introduced by Katajainen and Koppinen (1998) in the analysis of a two-dimensional Delaunay triangulation algorithm, based on a regular grid. We generalize the notion and its properties to any dimension k ⩾ 2: in the case of uniform distributions, the expected number of unfinished sites in a k-rectangle is O(N^(1−1/k)). This implies, under some specific assumptions, the linearity of a class of divide-and-conquer schemes based on balanced k-d trees. This general result is then applied to the analysis of a new algorithm for constructing Delaunay triangulations in the plane. According to Su and Drysdale (1995, 1997), the best known algorithms for this problem run in linear expected time, thanks in particular to the use of bucketing techniques to partition the domain. In our algorithm, the partitioning is based on a 2-d tree instead, the construction of which takes Θ(N log N) time, and we show that the rest of the algorithm runs in linear expected time. This “preprocessing” allows the algorithm to adapt efficiently to irregular distributions, as the domain is partitioned using point coordinates, as opposed to a fixed, regular basis (buckets or grid). We checked that even for the largest data sets that could fit in internal memory (over 10 million points), constructing the 2-d tree takes noticeably less CPU time than triangulating the data. With this in mind, our algorithm is only slightly slower than the reputedly best algorithms on uniform distributions, and is even the most efficient for data sets of up to several millions of points distributed in clusters.
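The balanced 2-d-tree partitioning at the heart of this preprocessing can be sketched as follows. This is an illustrative sketch, not the authors' code; for brevity it re-sorts at every level, which costs O(N log² N), whereas a linear-time median selection achieves the Θ(N log N) bound quoted above:

```python
def build_2d_tree(points, depth=0):
    """Balanced 2-d tree over a list of (x, y) points.

    Splits at the median coordinate, alternating between the x- and
    y-axes at successive depths, so both halves stay balanced.
    NOTE: re-sorting at each level costs O(N log^2 N); replacing the
    sort with linear-time median selection gives Theta(N log N).
    """
    if not points:
        return None
    axis = depth % 2                      # 0: split on x, 1: split on y
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {
        "point": pts[mid],
        "axis": axis,
        "left": build_2d_tree(pts[:mid], depth + 1),
        "right": build_2d_tree(pts[mid + 1:], depth + 1),
    }
```

Because the splits follow the point coordinates themselves, clustered inputs produce small, dense cells, which is what lets the triangulation step adapt to irregular distributions, in contrast to a fixed grid of buckets.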

19 citations

Journal ArticleDOI
TL;DR: In this article, the average-case complexity of multivariate integration and L_2 function approximation was studied for the class F = C([0,1]^d) of continuous functions of d variables.

Abstract: We study the average case complexity of multivariate integration and L_2 function approximation for the class F = C([0,1]^d) of continuous functions of d variables. The class F is endowed with the isotropic Wiener measure (Brownian motion in Lévy's sense). Furthermore, for both problems, only function values are used as data.
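To make the "only function values are used as data" (standard information) setting concrete, here is a minimal one-dimensional sketch of an integration algorithm that sees nothing but n samples of f. It is purely illustrative: the paper's results concern average-case error over the isotropic Wiener measure in d dimensions, which this example does not reproduce:

```python
def integrate_from_values(f, n):
    """Approximate the integral of f over [0, 1] using only n function
    values (standard information), via the composite midpoint rule."""
    h = 1.0 / n
    return h * sum(f((i + 0.5) * h) for i in range(n))
```

In this setting the complexity question becomes: how many such function evaluations are needed so that the error, averaged over f drawn from the Wiener measure, falls below a given tolerance.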

19 citations

Book ChapterDOI
Sibylle Mund
08 Apr 1991
TL;DR: This paper considers the Ziv-Lempel complexity for periodic sequences as well as for pseudorandom number sequences and examines its cryptographic significance and compares it with other complexity measures such as the linear complexity.
Abstract: The Ziv-Lempel complexity is a well-known complexity measure. In our paper we consider the Ziv-Lempel complexity for periodic sequences as well as for pseudorandom number sequences. Further on, we will look at its cryptographic significance and compare it with other complexity measures such as the linear complexity.
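One common formalization of the Ziv-Lempel complexity is the LZ76 phrase count: the number of phrases in the exhaustive left-to-right parsing of a sequence. A hedged sketch (this may not be the exact variant the paper uses):

```python
def lz76_complexity(s):
    """Ziv-Lempel (LZ76) complexity: the number of phrases in the
    exhaustive left-to-right parsing of s.  Each phrase is grown for
    as long as it already occurs earlier in the sequence (overlap
    with itself is allowed), then closed."""
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # grow the phrase while s[i:i+l] can be copied from the past
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c
```

Periodic sequences, one of the paper's subjects, get very small values under this measure: lz76_complexity("ab" * 20) is 3 (the parsing a | b | abab...ab), whereas typical pseudorandom sequences of length n have complexity on the order of n / log n.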

19 citations


Network Information
Related Topics (5)
- Time complexity: 36K papers, 879.5K citations, 89% related
- Approximation algorithm: 23.9K papers, 654.3K citations, 87% related
- Data structure: 28.1K papers, 608.6K citations, 83% related
- Upper and lower bounds: 56.9K papers, 1.1M citations, 83% related
- Computational complexity theory: 30.8K papers, 711.2K citations, 83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32