Proceedings ArticleDOI

Probabilistic computations: Toward a unified measure of complexity

Andrew Chi-Chih Yao
- pp 222-227
TLDR
Two approaches to the study of the expected running time of algorithms lead naturally to two different definitions of the intrinsic complexity of a problem: the distributional complexity and the randomized complexity.
Abstract
1. Introduction. The study of the expected running time of algorithms is an interesting subject from both a theoretical and a practical point of view. Basically there exist two approaches to this study. In the first approach (we shall call it the distributional approach), some "natural" distribution is assumed for the input of a problem, and one looks for fast algorithms under this assumption (see Knuth [8]). For example, in sorting n numbers, it is usually assumed that all n! initial orderings of the numbers are equally likely. A common criticism of this approach is that distributions vary a great deal in real-life situations; furthermore, very often the true distribution of the input is simply not known.

An alternative approach, which attempts to overcome this shortcoming by allowing stochastic moves in the computation, has recently been proposed. This is the randomized approach made popular by Rabin [10] (also see Gill [3], Solovay and Strassen [13]), although the concept was familiar to statisticians (for example, see Luce and Raiffa [9]). Note that by allowing stochastic moves in an algorithm, the input is effectively being randomized. We shall refer to such an algorithm as a randomized algorithm.

These two approaches lead naturally to two different definitions of the intrinsic complexity of a problem, which we term the distributional complexity and the randomized complexity, respectively. (Precise definitions and examples will be given in Sections 2 and 3.) To solidify the ideas, we look at familiar combinatorial problems that can be modeled by decision trees. In particular, we consider (a) the testing of an arbitrary graph property from an adjacency matrix (Section 2), and (b) partial order problems on n elements (Section 3). We will show that for these two classes of problems, the two complexity measures always agree by virtue of a famous theorem, the Minimax Theorem of von Neumann [14]. The connection between the two approaches lends itself to applications.
With two different views on the complexity of a problem (in a sense complementary to each other), it is frequently easier to derive upper and lower bounds. For example, using the adjacency matrix representation for a graph, it can be shown that no randomized algorithm can determine the existence of a perfect matching in fewer than O(n²) probes. Such lower bounds for the randomized approach were lacking previously. As another example of application, we can prove that for the partial order problems in (b), assuming uniform …
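The agreement between the two measures via the Minimax Theorem, now commonly known as Yao's minimax principle, can be sketched as follows (the notation below is ours, not the paper's):

```latex
% Let A range over deterministic decision-tree algorithms for a problem P,
% x over inputs, and C(A, x) denote the cost (number of probes) of A on x.
%
% Distributional complexity: the hardest input distribution \mu against
% the best deterministic algorithm.
\underline{F}(P) = \max_{\mu} \; \min_{A} \; \mathbb{E}_{x \sim \mu}\, C(A, x)
%
% Randomized complexity: the best randomized algorithm (a distribution R
% over deterministic algorithms) against the worst single input.
\overline{F}(P) = \min_{R} \; \max_{x} \; \mathbb{E}_{A \sim R}\, C(A, x)
%
% Von Neumann's Minimax Theorem, applied to the finite two-person
% zero-sum game with payoff C(A, x), yields
% \underline{F}(P) = \overline{F}(P).
```

In particular, a lower bound on the expected cost of every deterministic algorithm under any one chosen distribution is automatically a lower bound on every randomized algorithm's worst-case expected cost.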


Citations
Book

Randomized Algorithms

TL;DR: This book introduces the basic concepts in the design and analysis of randomized algorithms and presents basic tools such as probability theory and probabilistic analysis that are frequently used in algorithmic applications.
Journal ArticleDOI

Optimal aggregation algorithms for middleware

TL;DR: An elegant and remarkably simple algorithm ("the threshold algorithm", or TA) is analyzed that is optimal in a much stronger sense than FA (Fagin's algorithm), and is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability worst-case sense, but over every database.
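The threshold algorithm described in this TL;DR can be sketched as follows; the function name, the choice of sum as the monotone aggregation function, and the input representation (each list a Python list of (object, score) pairs sorted by descending score, with random access simulated by dictionaries) are our assumptions for illustration:

```python
# A minimal sketch of the threshold algorithm (TA): interleave sorted
# access on every list with random access per newly seen object, and stop
# once k objects aggregate to at least the frontier threshold.
import heapq

def threshold_algorithm(lists, k):
    tables = [dict(lst) for lst in lists]   # random access: object -> score
    seen = set()
    top = []                                # min-heap of (aggregate, object)
    for depth in range(max(len(lst) for lst in lists)):
        # One round of sorted access: read position `depth` in every list.
        for lst in lists:
            if depth >= len(lst):
                continue
            obj, _ = lst[depth]
            if obj in seen:
                continue
            seen.add(obj)
            # Random access: fetch obj's score in every list (0 if absent).
            agg = sum(t.get(obj, 0) for t in tables)
            heapq.heappush(top, (agg, obj))
            if len(top) > k:
                heapq.heappop(top)
        # Threshold: aggregate of the scores at the sorted-access frontier;
        # no unseen object can have a larger aggregate.
        tau = sum(lst[min(depth, len(lst) - 1)][1] for lst in lists)
        if len(top) == k and top[0][0] >= tau:
            break                           # k objects at or above threshold
    return sorted(top, reverse=True)
```

With sum as the aggregation function the stopping rule is exactly the one that makes TA instance-optimal: halt once the current top-k scores all meet the threshold computed from the last scores seen under sorted access.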
Proceedings ArticleDOI

Algorithms, games, and the internet

TL;DR: If the Internet is the next great subject for Theoretical Computer Science to model and illuminate mathematically, then Game Theory, and Mathematical Economics more generally, are likely to prove useful tools.
Proceedings ArticleDOI

An optimal algorithm for on-line bipartite matching

TL;DR: This work applies the general approach to data structures, bin packing, and graph coloring, and shows that for on-line bipartite matching a simple randomized on-line algorithm achieves the best possible performance.
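The randomized on-line strategy analyzed in that paper (often called RANKING) can be sketched as follows; the function signature and input representation (off-line vertices as a list, arrivals as (vertex, neighbour-list) pairs) are our assumptions:

```python
# A minimal sketch of RANKING for on-line bipartite matching: fix one
# random ranking of the off-line vertices up front, then match each
# arriving on-line vertex to its highest-ranked still-free neighbour.
# This achieves the optimal 1 - 1/e competitive ratio in expectation.
import random

def ranking_matching(offline, arrivals, rng=random):
    # One random permutation chosen before any vertex arrives.
    rank = {v: i for i, v in enumerate(rng.sample(offline, len(offline)))}
    matched = {}                      # off-line vertex -> on-line vertex
    for u, neighbours in arrivals:
        free = [v for v in neighbours if v not in matched]
        if free:
            # Smallest rank number = highest-ranked free neighbour.
            matched[min(free, key=rank.get)] = u
    return matched
```

Note that the only randomness is the single permutation drawn at the start; each arrival is then handled greedily and deterministically.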

Randomized Algorithms

TL;DR: For many applications, a randomized algorithm is either the simplest or the fastest algorithm available, and sometimes both. The book introduces the basic concepts in the design and analysis of randomized algorithms and provides a comprehensive and representative selection of algorithms from the main application areas.
References
Journal ArticleDOI

Universal classes of hash functions

TL;DR: An input-independent, average linear-time algorithm for storage and retrieval of keys, obtained by making a random choice of hash function from a suitable class of hash functions.
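The idea summarized in this TL;DR can be illustrated with the classical affine family h(x) = ((a*x + b) mod p) mod m; the specific family, parameter defaults, and function name below are our illustration rather than the paper's exact presentation:

```python
# A minimal sketch of drawing a hash function at random from a universal
# class: pick random coefficients (a, b) modulo a fixed prime p, so that
# for any two distinct keys x != y, Pr[h(x) == h(y)] is about 1/m over
# the choice of (a, b).
import random

def make_universal_hash(m, p=(1 << 31) - 1, rng=random):
    # p must be a prime larger than every key; 2**31 - 1 is a Mersenne prime.
    a = rng.randrange(1, p)   # a != 0
    b = rng.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m
```

Because the collision guarantee holds over the random choice of (a, b) rather than over the keys, the expected linear-time bound is input-independent, which is exactly the property the TL;DR highlights.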
Journal ArticleDOI

Zur Theorie der Gesellschaftsspiele (On the Theory of Games of Strategy)

Book

Formal Languages and Their Relation to Automata

TL;DR: The theory of formal languages is presented as a coherent theory and its relationship to automata theory is made explicit, covering the Turing machine and certain advanced topics in language theory.
Journal ArticleDOI

A Fast Monte-Carlo Test for Primality

TL;DR: To test whether an odd number n is prime, draw a number a uniformly at random from {1, …, n − 1}; if a and n are relatively prime, compute the residue ε = a^((n−1)/2) mod n and compare it with the Jacobi symbol (a/n), declaring n composite on a mismatch.
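The test can be sketched as follows; the helper names and the round count are our choices, and the code follows the standard form of the Solovay–Strassen test rather than the paper's exact presentation:

```python
# A minimal sketch of the Solovay-Strassen primality test: a prime n
# satisfies Euler's criterion a^((n-1)/2) = (a/n) (mod n) for every a,
# while each random round catches a composite with probability >= 1/2.
import random

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0, via binary quadratic reciprocity.
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # (2/n) = -1 iff n = 3 or 5 (mod 8)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # reciprocity: flip iff a = n = 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20, rng=random):
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = rng.randrange(2, n)    # random witness candidate
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False           # definitely composite
    return True                    # probably prime
```

The `x % n` on the comparison maps a Jacobi symbol of −1 to n − 1, matching the residue returned by the three-argument `pow`.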