Topic

Average-case complexity

About: Average-case complexity is a research topic. Over its lifetime, 1,749 publications have been published within this topic, receiving 44,972 citations.


Papers
Proceedings Article
01 Jan 2002
TL;DR: The values of the control voltage effecting the periodic phase shift in the phase shifter are measured at those points in time at which the phase difference between the two components equals zero.
Abstract: Two radiation components differing from each other with regard to their state of polarization or their wavelength are directed onto closely adjacent points of the object to be measured. The reflected components are recombined and fed to a polarization- or wavelength-dependent phase shifter that periodically shifts the phase positions of the two components by λ/2 against each other. A phase shift of one of the components caused by the object to be measured is fully compensated for by periodically shifting the phase position of the other component at particular points in time, which can be determined, for example, by means of a connected analyzer and a photodetector. The values of the control voltage effecting the periodic phase shift in the phase shifter are measured at those points in time at which the phase difference between the two components equals zero. These values are proportional to the difference in height between the points of incidence of the two radiation components on the object surface or to the slope of the object surface.
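As a toy illustration of the read-out step, the sketch below maps control-voltage samples taken at the zero-crossing instants to height differences via a linear calibration constant. The function name, the calibration constant, and the example values are hypothetical; the abstract only states that the measured voltages are proportional to the height difference (or surface slope).

```python
# Minimal sketch with hypothetical names and values: the abstract states that
# the control voltage measured at the phase-difference zero crossings is
# proportional to the height difference between the two illuminated points
# (or to the local surface slope). The calibration constant is assumed.
K_CALIBRATION = 2.5e-8  # metres per volt (assumed, instrument-specific)

def heights_from_voltages(zero_crossing_voltages, k=K_CALIBRATION):
    """Map each control-voltage sample V (taken when the phase difference is
    zero) to a height difference delta_h = k * V."""
    return [k * v for v in zero_crossing_voltages]

# Example: three voltage samples triggered by the analyzer/photodetector.
print(heights_from_voltages([0.12, 0.09, -0.04]))
```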

21 citations

Proceedings ArticleDOI
01 Jan 1995
TL;DR: The merged CORDIC algorithm, its error analysis, and software simulation results are presented; with this structure, the shifter size is reduced to 1/2 · (1 + 9/(n+1)) of the original.
Abstract: The COordinate Rotation DIgital Computer (CORDIC) algorithm is an iterative procedure to evaluate various elementary functions. It usually consists of one scaling multiplication and n+1 elementary shift-add iterations in an n-bit processor. These iterations can be paired off to form double iterations, lowering the hardware complexity while the computational complexity stays the same. With this structure, the shifter size is reduced to 1/2 · (1 + 9/(n+1)) of the original. In this paper, we present this merged algorithm, its error analysis, and software simulation results.
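For context, here is a minimal sketch of the standard rotation-mode CORDIC iteration (one shift-add step per iteration, followed by the scaling multiplication) that the abstract builds on. It is not the merged double-iteration variant described in the paper, and it uses floating point rather than the fixed-point n-bit datapath the error analysis assumes.

```python
import math

def cordic_sin_cos(theta, n_iters=32):
    """Return (sin(theta), cos(theta)) for theta roughly in [-pi/2, pi/2],
    using the classic shift-add CORDIC iteration (illustrative sketch only)."""
    # Elementary rotation angles atan(2^-i) and the aggregate scale factor K.
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
    K = 1.0
    for i in range(n_iters):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0                          # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i    # shift-add step
        z -= d * a                                             # residual angle
    return y * K, x * K                                        # scaling multiplication

print(cordic_sin_cos(math.pi / 6))  # expect approximately (0.5, 0.8660)
```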

21 citations

Proceedings Article
28 Jun 2000
TL;DR: A formal learning model for this task that uses a hypothesis class as its “anti-overfitting” mechanism is introduced, and it is shown that, for some constants depending on the hypothesis class, these problems are NP-hard to approximate to within those constant factors.
Abstract: We investigate the computational complexity of the task of detecting dense regions of an unknown distribution from unlabeled samples of this distribution. We introduce a formal learning model for this task that uses a hypothesis class as its “anti-overfitting” mechanism. The learning task in our model can be reduced to a combinatorial optimization problem. We show that for some constants, depending on the hypothesis class, these problems are NP-hard to approximate to within these constant factors. We go on to introduce a new criterion for the success of approximate optimization of these geometric problems. The new criterion requires that the algorithm competes with hypotheses only on the points that are separated by some margin from their boundaries. Quite surprisingly, we discover that for each of the two hypothesis classes that we investigate, there is a “critical value” of the margin parameter. For any value below the critical value the problems are NP-hard to approximate, while, once this value is exceeded, the problems become poly-time solvable.
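To make the underlying combinatorial optimization concrete, the brute-force sketch below scores one plausible hypothesis class (balls of a fixed radius, with candidate centres restricted to the sample points) and discounts points lying within the margin of a ball's boundary. The class, the restriction of centres, and all names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def densest_ball(points, radius, margin=0.0):
    """Brute-force densest-region sketch: among balls of the given radius
    centred at sample points, return the centre whose margin-shrunken ball
    contains the most samples (points within `margin` of the boundary are
    ignored, mimicking a margin-based success criterion)."""
    best_center, best_count = None, -1
    for c in points:
        dists = np.linalg.norm(points - c, axis=1)
        count = int(np.sum(dists <= radius - margin))  # margin-shrunken ball
        if count > best_count:
            best_center, best_count = c, count
    return best_center, best_count

rng = np.random.default_rng(0)
# A dense Gaussian cluster embedded in uniform background noise.
samples = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.uniform(-3, 3, (50, 2))])
print(densest_ball(samples, radius=1.0, margin=0.1))
```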

21 citations

Journal ArticleDOI
TL;DR: The result separates the derivational complexity of the word problem of a finitely presented group from its intrinsic complexity; it is shown that, in a given group, the lowest degree of complexity that can be realised by a pseudo-natural algorithm is essentially the derivational complexity of that group.
Abstract: A pseudo-natural algorithm for the word problem of a finitely presented group is an algorithm which not only tells us whether or not a word w equals 1 in the group but also gives a derivation of 1 from w when w equals 1. In [13], [14] Madlener and Otto show that, if we measure complexity of a primitive recursive algorithm by its level in the Grzegorczyk hierarchy, there are groups in which a pseudo-natural algorithm is arbitrarily more complicated than an algorithm which simply solves the word problem. In a given group the lowest degree of complexity that can be realised by a pseudo-natural algorithm is essentially the derivational complexity of that group. Thus the result separates the derivational complexity of the word problem of a finitely presented group from its intrinsic complexity. The proof given in [13] involves the construction of a finitely presented group G from a Turing machine T such that the intrinsic complexity of the word problem for G reflects the complexity of the halting problem of T, while the derivational complexity of the word problem for G reflects the runtime complexity of T. The proof of one of the crucial lemmas in [13] is only sketched, and part of the purpose of this paper is to give the full details of this proof. We will also obtain a variant of their proof, using modular machines rather than Turing machines. As for several other results, this simplifies the proofs considerably. MSC: 03D40, 20F10.

21 citations

Book ChapterDOI
01 Oct 2003
TL;DR: This paper presents an efficient deterministic gossip algorithm for p synchronous, crash-prone, message-passing processors that substantially improves the work complexity of previous solutions using simple point-to-point messaging, while “meeting or beating” the corresponding message complexity bounds.
Abstract: This paper presents an efficient deterministic gossip algorithm for p synchronous, crash-prone, message-passing processors. The algorithm has time complexity T = O(log^2 p) and message complexity M = O(p^(1+ε)), for any ε > 0. This substantially improves the message complexity of the previous best algorithm, which has M = O(p^1.77), while maintaining the same time complexity. The strength of the new algorithm is demonstrated by constructing a deterministic algorithm for performing n tasks in this distributed setting. Previous solutions used coordinator or checkpointing approaches, immediately incurring a work penalty of Ω(n + f·p) for f crashes, or relied on strong communication primitives, such as reliable broadcast, or had work too close to the trivial Θ(p·n) bound of oblivious algorithms. The new algorithm uses p crash-prone processors to perform n similar and idempotent tasks so long as one processor remains active. The work of the algorithm is W = O(n + p·min{f + 1, log^3 p}) and its message complexity is M = O(f·p^ε + p·min{f + 1, log p}), for any ε > 0. This substantially improves the work complexity of previous solutions using simple point-to-point messaging, while “meeting or beating” the corresponding message complexity bounds. The new algorithms use communication graphs and permutations with certain combinatorial properties that are shown to exist. The algorithms are correct for any permutations and, in particular, the same expected bounds can be achieved using random permutations.
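For intuition only, the sketch below simulates a simple randomized push-gossip protocol among p crash-prone processors, illustrating the setting and the logarithmic number of rounds; it is not the paper's deterministic algorithm, which relies on communication graphs and permutations with specific combinatorial properties.

```python
import random

def randomized_push_gossip(p, crashed=frozenset(), seed=0):
    """Illustrative randomized push gossip (NOT the paper's deterministic
    algorithm): in each synchronous round, every live processor that already
    knows the rumor forwards it to one uniformly random processor; crashed
    processors neither send nor receive. Returns the number of rounds until
    all live processors are informed."""
    rng = random.Random(seed)
    live = [i for i in range(p) if i not in crashed]
    informed = {live[0]}              # the rumor starts at one live processor
    rounds = 0
    while len(informed) < len(live):
        rounds += 1
        targets = {rng.randrange(p) for _ in informed}   # one message per informed node
        informed |= {t for t in targets if t not in crashed}
    return rounds

# Example: 1024 processors, every tenth one crashed before the protocol starts.
print(randomized_push_gossip(p=1024, crashed=frozenset(range(0, 1024, 10))))
```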

21 citations


Network Information
Related Topics (5)
Time complexity
36K papers, 879.5K citations
89% related
Approximation algorithm
23.9K papers, 654.3K citations
87% related
Data structure
28.1K papers, 608.6K citations
83% related
Upper and lower bounds
56.9K papers, 1.1M citations
83% related
Computational complexity theory
30.8K papers, 711.2K citations
83% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    2
2021    6
2020    10
2019    9
2018    10
2017    32