
Showing papers on "Average-case complexity published in 1992"


Journal ArticleDOI
TL;DR: The scope of the theory of average case complexity is widened to other basic questions in computational complexity, and definitions and basic theorems regarding other complexity classes such as average log-space are provided.

179 citations


01 Jan 1992
TL;DR: This thesis describes the polynomial time computable functions without making any direct reference to polynomials, time, or even computation, and introduces the idea of attributing impredicativity to certain specific integers, relative to a computation which is being performed or a proof which is being carried out.
Abstract: The purpose of this thesis is to give a "foundational" characterization of some common complexity classes. Such a characterization is distinguished by the fact that no explicit resource bounds are used. For example, we characterize the polynomial time computable functions without making any direct reference to polynomials, time, or even computation. Complexity classes characterized in this way include polynomial time, the functional polytime hierarchy, the logspace decidable problems, and NC. After developing these "resource free" definitions, we apply them to redeveloping the feasible logical system of Cook and Urquhart, and show how this first-order system relates to the second-order system of Leivant. The connection is an interesting one since the systems were defined independently and have what appear to be very different rules for the principle of induction. Furthermore it is interesting to see, albeit in a very specific context, how to retract a second order statement ("induction holds up to x") into first order type information. Based on this analysis we give a general discussion of "functional impredicativity", and introduce the idea of attributing impredicativity to certain specific integers, relative to a computation which is being performed or a proof which is being carried out. Acknowledgements: I feel a deep gratitude towards Stephen Cook for working with me, for his unfailing help and exemplary professionalism. He has given unselfishly and in great measure. Alasdair Urquhart's unique perspective helped me in many ways. Thanks for the conversation. Sam Buss certainly deserves applause for the time and effort he spent reading and commenting on the thesis. Thanks for taking an interest in it. Professor Charles Rackoff provided many insightful comments and corrections. Professors Faith Fich, Al Borodin, Hector Levesque, and Bill Weiss have provided indispensable help. Thanks to Professor Dyer for chairing the defense. I am thankful for the support from the Department of Computer Science and staff, the Connaught Foundation, the University of Toronto, and the Ontario Graduate Scholarship program; without them this work would have been impossible. Toniann Pitassi has been a great friend and colleague, sharing some of the best times. There are many others; I don't think I could make a complete list. Thanks to each of you. Gara Pruesse deserves special mention for pointing out a problem, and for then being willing to help fix it. Bruce Kapron, as well as Jim Otto, Stephen Bloch, and others, have my thanks for being there in the community and for welcoming this research. Anne is discovered again here. Saving the best for last... Thanks mom & dad!! It is hard to express my deep feeling of love for each of you and gratitude for your tremendous sacrifice and enduring gifts.

91 citations


Journal ArticleDOI
TL;DR: The average-case complexity of any algorithm whatsoever under the universal distribution is of the same order of magnitude as its worst-case complexity, both for time and for space.
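A hedged sketch of the shape of this statement, in the usual Kolmogorov-complexity notation (the symbols m, K, and t_A are assumptions of this sketch, not taken from the paper): if t_A(x) is the running time of an algorithm A on input x and m(x) is proportional to 2^{-K(x)} (the universal distribution), then for each input length n,

$$
\frac{\sum_{|x|=n} \mathbf{m}(x)\, t_A(x)}{\sum_{|x|=n} \mathbf{m}(x)} \;=\; \Theta\!\Big(\max_{|x|=n} t_A(x)\Big),
$$

and the analogous relation holds for space.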

72 citations


Proceedings ArticleDOI
22 Jun 1992
TL;DR: The authors obtain a special case of Lovász's fractional cover measure and use it to completely characterize the amortized nondeterministic communication complexity; for deterministic complexity they discuss two approaches and obtain some results using each.
Abstract: It is possible to view communication complexity as the solution of an integer programming problem. The authors relax this integer programming problem to a linear programming problem, and try to deduce from it information regarding the original communication complexity question. This approach works well for nondeterministic communication complexity. In this case the authors get a special case of Lovász's fractional cover measure and use it to completely characterize the amortized nondeterministic communication complexity. In the case of deterministic complexity the situation is more complicated. The authors discuss two attempts, and obtain some results using each of them.
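A hedged sketch of the kind of characterization involved, with all notation assumed rather than quoted from the paper: if ρ*(f) denotes the optimal value of the LP relaxation of covering the 1-entries of the communication matrix of f by 1-monochromatic rectangles, and N(f^k) denotes the nondeterministic communication complexity of solving k independent instances of f simultaneously, then up to lower-order terms the amortized complexity behaves like

$$
\lim_{k \to \infty} \frac{N(f^{k})}{k} \;\approx\; \log_2 \rho^{*}(f).
$$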

66 citations


Journal ArticleDOI
TL;DR: It is shown that relative complexity gives feedback on the same complexity domains that many other metrics do, and developers can save time by choosing one metric to do the work of many.
Abstract: A relative complexity technique that combines the features of many complexity metrics to predict performance and reliability of a computer program is presented. Relative complexity aggregates many similar metrics into a linear compound metric that describes a program. Since relative complexity is a static measure, it is expanded by measuring relative complexity over time to find a program's functional complexity. It is shown that relative complexity gives feedback on the same complexity domains that many other metrics do. Thus, developers can save time by choosing one metric to do the work of many.
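As a rough illustration of what a linear compound metric looks like, here is a minimal Python sketch that standardizes several raw complexity metrics and combines them with weights; the metric names and weights are hypothetical placeholders, not the paper's actual values (which would come from the authors' statistical analysis).

```python
from statistics import mean, stdev

# Hypothetical raw metric values for a few program modules.
modules = {
    "parser":  {"loc": 420, "cyclomatic": 35, "halstead_volume": 9100},
    "lexer":   {"loc": 180, "cyclomatic": 12, "halstead_volume": 3300},
    "codegen": {"loc": 650, "cyclomatic": 58, "halstead_volume": 15400},
}

# Hypothetical weights for the linear compound; a real study would derive
# these from a statistical analysis over many metrics and programs.
weights = {"loc": 0.3, "cyclomatic": 0.4, "halstead_volume": 0.3}

def standardize(values):
    """z-score each metric so that differently scaled metrics are comparable."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

names = list(modules)
z = {k: dict(zip(names, standardize([modules[n][k] for n in names])))
     for k in weights}

# Relative complexity of each module: weighted sum of its standardized metrics.
relative_complexity = {n: sum(weights[k] * z[k][n] for k in weights) for n in names}
print(relative_complexity)
```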

49 citations


Book ChapterDOI
Eric Allender1
01 May 1992
TL;DR: This paper presents one method of using time-bounded Kolmogorov complexity as a measure of the complexity of sets, and outlines a number of applications of this approach to different questions in complexity theory.
Abstract: This paper presents one method of using time-bounded Kolmogorov complexity as a measure of the complexity of sets, and outlines a number of applications of this approach to different questions in complexity theory. Connections will be drawn among the following topics: NE predicates, ranking functions, pseudorandom generators, and hierarchy theorems in circuit complexity.

36 citations


Proceedings ArticleDOI
22 Jun 1992
TL;DR: A theory of relational complexity is developed that bridges the gap between standard complexity and fixpoint logic and yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE and EXPTIME.
Abstract: To overcome the inherent mismatch between complexity and logic, i.e., that while computational devices work on encodings of problems, logic is applied directly to the underlying mathematical structures, the authors develop a theory of relational complexity that bridges the gap between standard complexity and fixpoint logic. It is shown that questions about containments among standard complexity classes can be translated to questions about containments among relational complexity classes, and that the expressive power of fixpoint logic can be precisely characterized in terms of relational complexity classes. This tight three-way relationship among fixpoint logics, relational complexity and standard complexity yields in a uniform way logical analogs to all containments among the complexity classes P, NP, PSPACE and EXPTIME.

35 citations


Journal ArticleDOI
TL;DR: Theoretical results are applied to obtain optimal or almost optimal sample points, optimal algorithms, and average case complexity functions for linear multivariate problems equipped with the folded Wiener sheet measure.

32 citations


Proceedings ArticleDOI
08 Nov 1992
TL;DR: An incremental way to compute the changes in the distribution functions, based on gradual time-frame reduction, is presented; it reduces the time complexity of the algorithm to quadratic in the number of operations, without any loss in effectiveness or generality.
Abstract: Force-directed scheduling is a technique which schedules operations under time constraints in order to achieve schedules with a minimum number of resources. The worst case time complexity of the algorithm is cubic in the number of operations. This is due to the computation of the changes in the distribution functions needed for the force calculations. An incremental way to compute the changes in the distribution functions, based on gradual time-frame reduction, is presented. This reduces the time complexity of the algorithm to quadratic in the number of operations, without any loss in effectiveness or generality of the algorithm. Implementations show a substantial CPU-time reduction of force-directed scheduling, which is illustrated by means of some industrially relevant examples.
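A minimal sketch of the incremental idea, under the usual force-directed-scheduling convention that an operation contributes 1/(frame width) to the distribution graph at every control step in its time frame; the function names and data layout below are assumptions of this sketch, not the paper's implementation.

```python
def frame_contribution(frame):
    """An operation with time frame [lo, hi] contributes 1/width to the
    distribution graph at every control step in its frame."""
    lo, hi = frame
    width = hi - lo + 1
    return {step: 1.0 / width for step in range(lo, hi + 1)}

def update_distribution(dg, old_frame, new_frame):
    """Incrementally update the distribution graph dg (dict: step -> value)
    when an operation's time frame shrinks during gradual time-frame
    reduction, instead of recomputing dg from scratch."""
    for step, p in frame_contribution(old_frame).items():
        dg[step] = dg.get(step, 0.0) - p
    for step, p in frame_contribution(new_frame).items():
        dg[step] = dg.get(step, 0.0) + p
    return dg

# Example: add an operation with frame [2, 5], then shrink the frame to [3, 5].
dg = {}
for step, p in frame_contribution((2, 5)).items():   # initial contribution
    dg[step] = dg.get(step, 0.0) + p
update_distribution(dg, (2, 5), (3, 5))               # incremental update
print(dg)
```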

28 citations



Journal ArticleDOI
TL;DR: Preliminary results are available, and they indicate that, on the average, optimization is not as hard as in the worst case setting; in particular, there are instances where global optimization is intractable in the worst case, whereas it is tractable on the average.
Abstract: We discuss the average case complexity of global optimization problems. By the average complexity, we roughly mean the amount of work needed to solve the problem with the expected error not exceeding a preassigned error demand. The expectation is taken with respect to a probability measure on a class F of objective functions. Since the distribution of the maximum, max_x f(x), is known only for a few nontrivial probability measures, the average case complexity of optimization is still unknown. Although only preliminary results are available, they indicate that on the average, optimization is not as hard as in the worst case setting. In particular, there are instances where global optimization is intractable in the worst case, whereas it is tractable on the average. We stress that the power of the average case approach is proven by exhibiting upper bounds on the average complexity, since the actual complexity is not known even for relatively simple instances of global optimization problems. Thus, we do not know how much easier global optimization becomes when the average case approach is utilized.
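Spelled out in the notation usual for information-based complexity (the symbols below are assumptions of this sketch, not the paper's): for an algorithm A that returns an approximation A(f) to the maximum of f, the average error and the average complexity are roughly

$$
e^{\mathrm{avg}}(A) \;=\; \int_{F} \Big(\max_{x} f(x) - A(f)\Big)\, \mu(\mathrm{d}f),
\qquad
\mathrm{comp}^{\mathrm{avg}}(\varepsilon) \;=\; \inf\{\, \mathrm{cost}(A) : e^{\mathrm{avg}}(A) \le \varepsilon \,\},
$$

i.e., the minimal cost over algorithms whose expected error does not exceed the error demand ε.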

Book ChapterDOI
13 Feb 1992
TL;DR: It will be shown that, for communication complexity, MOD_p-P and MOD_q-P are incomparable via inclusion for all pairs of distinct primes p, q, and it is proved that the same is true for PP and MOD_p-P for any prime number p.
Abstract: We develop new lower bound arguments on communication complexity and establish a number of separation results for Counting Communication Classes. In particular, it will be shown that, for communication complexity, MOD_p-P and MOD_q-P are incomparable via inclusion for all pairs of distinct primes p, q. Further we prove that the same is true for PP and MOD_p-P for any prime number p. Our results are due to a mathematical characterization of modular and probabilistic communication complexity by the minimum rank of matrices belonging to certain equivalence classes. We use arguments from algebra and analytic geometry.

Journal ArticleDOI
TL;DR: It is proved that any LMP which satisfies (A.1) of Part I is tractable and its exponent is at most 2.1; it is also shown that optimal or nearly optimal sample points can be derived from hyperbolic cross points, and nearly optimal algorithms are exhibited.

Book ChapterDOI
18 Dec 1992
TL;DR: The notion of Average-P is generalized into Aver〈C, F〉, a set of randomized decision problems (L, μ) such that the density function μ is in F and L is computed by a type-C machine in time t (or space t) on μ-average.
Abstract: Levin introduced an average-case complexity measure among randomized decision problems. We generalize his notion of Average-P into Aver〈C, F〉, a set of randomized decision problems (L, μ) such that the density function μ is in F and L is computed by a type-C machine in time t (or space t) on μ-average. Mainly studied are two sorts of reductions between randomized problems, average-case many-one and Turing reductions, and structural properties of average-case complexity classes. We give average-case analogs of concepts of classical complexity theory, e.g., the polynomial time hierarchy and self-reducibility.
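For concreteness, the standard Levin-style criterion for "polynomial time on μ-average" is usually stated as follows (this is the textbook formulation, given as an assumed reference point rather than a quotation of the paper's definition): a time bound t is polynomial on μ-average if there exists ε > 0 such that

$$
\sum_{x \in \Sigma^{*}} \frac{t(x)^{\varepsilon}}{|x|}\,\mu'(x) \;<\; \infty,
$$

where μ′(x) denotes the density (probability) function associated with the distribution μ.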

01 Aug 1992
TL;DR: In this article, the problem of finding an alphabet indexing is shown to be NP-complete, a local search algorithm for this problem is given, and a result on PLS-completeness is established.
Abstract: For two finite disjoint sets P and Q of strings over an alphabet $\Sigma$, an alphabet indexing for P, Q by an indexing alphabet $\Gamma$ with $|\Gamma| < |\Sigma|$ is a mapping $\phi : \Sigma \to \Gamma$ satisfying $\tilde{\phi}(P) \cap \tilde{\phi}(Q) = \emptyset$, where $\tilde{\phi} : \Sigma^* \to \Gamma^*$ is the homomorphism derived from $\phi$. We defined this notion through experiments of knowledge acquisition from amino acid sequences of proteins by learning algorithms. This paper analyzes the complexity of finding an alphabet indexing. We first show that the problem is NP-complete. Then we give a local search algorithm for this problem and show a result on PLS-completeness.
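The definition translates directly into a small check plus a brute-force search; the following Python sketch is illustrative only (the alphabets and string sets are made-up toy data, and the exhaustive search is exponential, consistent with the NP-completeness result; it is not the paper's local search algorithm).

```python
from itertools import product

def image(mapping, strings):
    """Apply the letter-to-letter mapping homomorphically to each string."""
    return {"".join(mapping[c] for c in s) for s in strings}

def is_alphabet_indexing(mapping, P, Q):
    """mapping is an alphabet indexing for (P, Q) iff the image sets are disjoint."""
    return image(mapping, P).isdisjoint(image(mapping, Q))

def find_alphabet_indexing(P, Q, sigma, gamma):
    """Exhaustively try all |gamma|**|sigma| mappings sigma -> gamma."""
    sigma = sorted(sigma)
    for choice in product(sorted(gamma), repeat=len(sigma)):
        mapping = dict(zip(sigma, choice))
        if is_alphabet_indexing(mapping, P, Q):
            return mapping
    return None

# Toy example: strings over {a, b, c, d}, indexing alphabet {0, 1}.
P = {"abca", "acca"}
Q = {"bddb", "dbbd"}
print(find_alphabet_indexing(P, Q, sigma="abcd", gamma="01"))
```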

Proceedings ArticleDOI
22 Jun 1992
TL;DR: It is proved that if A is random (or pseudorandom), then most instances of A are hard instances (or, respectively, have nontrivial instance complexity), and these results are used to show that if one-way functions that are secure against polynomial-size circuits exist, then an NP-hard problem A must have a nonsparse core of which all instances have nontrivial instance complexity.
Abstract: The relationship between the notion of pseudorandomness and the notion of hard instances is investigated. It is proved that if A is random (or pseudorandom), then most instances of A are hard instances (or, respectively, have nontrivial instance complexity). These results are used to show that if one-way functions that are secure against polynomial-size circuits exist, then an NP-hard problem A must have a nonsparse core of which all instances have nontrivial instance complexity.

Proceedings ArticleDOI
22 Jun 1992
TL;DR: A reconstruction of the foundations of complexity theory relative to random oracles is begun and a technique called average dependence is introduced and used to investigate what is the best lower bound on the size of nondeterministic circuits that accept coNP^R sets.
Abstract: A reconstruction of the foundations of complexity theory relative to random oracles is begun. The goals are to identify the simple, core mathematical principles behind randomness; to use these principles to push hard on the current boundaries of randomness; and to eventually apply these principles in unrelativized complexity. The focus in this work is on quantifying the degree of separation between NP^R and coNP^R relative to a random oracle R. A technique called average dependence is introduced and used to investigate what is the best lower bound on the size of nondeterministic circuits that accept coNP^R sets and how close a coNP^R set can come to 'approximating' an arbitrary NP^R set. The results show that the average dependence technique is a powerful method for addressing certain random oracle questions but that there is still much room for improvement. Some open questions are briefly discussed.

Journal ArticleDOI
TL;DR: Using this method, an alternative proof for a known Ω(n log n) lower bound on the depth of noisy Boolean decision trees that compute random functions is derived.

Book ChapterDOI
06 Apr 1992
TL;DR: A distributed algorithm for computing all maximal cliques in an arbitrary network is presented that is time efficient for every class of networks with a polynomial number of maximal cliques; it makes use of the algebraic properties of bipartite cliques, which form a lattice structure.
Abstract: A time efficient distributed algorithm for computing all maximal cliques in an arbitrary network is presented that is time efficient for every class of networks with a polynomial number of maximal cliques. The algorithm makes use of the algebraic properties of bipartite cliques which form a lattice structure. Assuming that it takes unit time to transmit the message of length \(\mathcal{O}(\log n)\) bits, the algorithm has a time complexity of \(\mathcal{O}(M n \log n)\) where M is the number of maximal cliques, and n is the number of processors in the network. The communication complexity is \(\mathcal{O}(M^2 n^2 \log n)\) assuming message length is \(\mathcal{O}(\log n)\) bits.


Posted Content
TL;DR: This paper will introduce some basic concepts of mathematical complexity theory, show that the problem of Optimal Aggregation is of high computational complexity, and outline a possible way to obtain results good enough for practical use despite this high computational complexity.
Abstract: A combinatorial problem is said to be of high computational complexity if it can be shown that every efficient algorithm needs a high amount of resources as measured in computing time or storage capacity. This paper will (1) introduce some basic concepts of mathematical complexity theory; (2) show that the problem of Optimal Aggregation is of high computational complexity; and (3) outline a possible way to obtain results good enough for practical use despite this high computational complexity.

Proceedings ArticleDOI
30 Aug 1992
TL;DR: A systematic approach for the evaluation of quadtree complexity, based on a flexible linkage paradigm, is proposed, and it is realized that a quadtree may undergo a complexity reduction through node condensation.
Abstract: Complexity of hierarchical representation of images is defined as the total number of nodes in the representation tree. An a priori knowledge of this quantity is of considerable interest in problems involving tree search, storage and transmission of imagery. This paper proposes a systematic approach for the evaluation of quadtree complexity that is based on a flexible linkage paradigm. It is further realized that a quadtree may undergo a complexity reduction through node condensation. This event is fully modeled and absorbed in the expected complexity expression through a multidimensional weighting function. Inspection of the weighting surface provides a clearer view of the interaction of quadtree complexity and the random image model.
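As a concrete reference point for "complexity = total number of nodes" and for node condensation, here is a small Python sketch that counts the nodes of a condensed quadtree of a square binary image; this only illustrates the counting convention, not the paper's probabilistic expected-complexity model.

```python
def quadtree_node_count(img):
    """Count the nodes of the condensed quadtree of a 2**k x 2**k binary image
    (a uniform quadrant condenses into a single leaf node)."""
    def build(r, c, size):
        values = {img[i][j] for i in range(r, r + size) for j in range(c, c + size)}
        if size == 1 or len(values) == 1:
            return 1                      # leaf (possibly a condensed quadrant)
        half = size // 2
        return 1 + sum(build(r + dr, c + dc, half)
                       for dr in (0, half) for dc in (0, half))
    return build(0, 0, len(img))

# 4x4 image: the uniform lower half condenses into single leaves.
img = [
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(quadtree_node_count(img))   # -> 9
```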

Proceedings ArticleDOI
22 Jun 1992
TL;DR: The difference between NP and other complexity classes is examined and the question of whether an NP-hard set can be approximated sufficiently by the sets in other complexityclasses is studied.
Abstract: The difference between NP and other complexity classes is examined. The question of whether an NP-hard set can be approximated sufficiently by the sets in other complexity classes is studied.

Journal ArticleDOI
TL;DR: An optimal lower bound on the average time required by any algorithm that merges two sorted lists on Valiant’s parallel computation tree model is proven.
Abstract: An optimal lower bound on the average time required by any algorithm that merges two sorted lists on Valiant’s parallel computation tree model is proven.

Dissertation
Mark Stamp1
01 Jan 1992
TL;DR: It is shown that the k-complexity gives more information on the cryptographic strength of a sequence than other previously suggested methods.
Abstract: Certain cryptographic applications require pseudo-random sequences which are "unpredictable," in the sense that recovering the sequence from a short captured segment is computationally infeasible. Such sequences are said to be cryptographically strong. Due to the Berlekamp-Massey algorithm, a cryptographically strong sequence must have a high linear complexity, where the linear complexity of a sequence s is the minimum number of stages in a linear feedback shift register capable of generating s. However, trivial examples exist which show that a high linear complexity does not insure that a sequence is cryptographically strong. In this thesis a generalized linear complexity--the k-complexity--is proposed and analyzed. The k-complexity of s is defined to be the smallest linear complexity that can be obtained by altering any k or fewer elements of s. The k-complexity can be interpreted as a "strong" measure of the complexity of a sequence, or as a worst-case measure of the linear complexity when k or fewer errors occur. It is shown that the k-complexity gives more information on the cryptographic strength of a sequence than other previously suggested methods. An efficient algorithm for finding the k-complexity in the special case where s is a periodic binary sequence with period length $2^n$ is given. This algorithm generalizes a linear complexity algorithm of Games and Chan. The computational complexity of the general case is also considered. The k-complexities of a particular class of binary sequences--the de Bruijn sequences--are analyzed and several computational results are given. In addition, a new class of binary sequences which appear to have good k-complexity properties is presented.
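For reference, the Games-Chan algorithm that the thesis generalizes computes the ordinary linear complexity of a binary sequence whose period length is a power of two; a minimal Python sketch follows (this is the base algorithm only, not the thesis's k-complexity generalization).

```python
def games_chan(period):
    """Linear complexity of a binary sequence with period length 2**n,
    given one full period as a list of 0/1 values (Games-Chan algorithm)."""
    s = list(period)
    assert len(s) > 0 and len(s) & (len(s) - 1) == 0, "period length must be 2**n"
    complexity = 0
    while len(s) > 1:
        half = len(s) // 2
        left, right = s[:half], s[half:]
        b = [l ^ r for l, r in zip(left, right)]
        if any(b):
            # halves differ: complexity exceeds half the current length
            complexity += half
            s = b
        else:
            # the sequence already has period <= half: recurse on the left half
            s = left
    return complexity + (1 if s[0] else 0)

# Examples: the all-ones sequence of period 8 has linear complexity 1;
# the period-8 sequence built from repeating 1000 has linear complexity 4.
print(games_chan([1] * 8))           # -> 1
print(games_chan([1, 0, 0, 0] * 2))  # -> 4
```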


Journal ArticleDOI
TL;DR: This paper eliminates simplifying assumptions of earlier studies of A* tree-searching and replaces “Error” with a concept called “discrepancy”, a measure of the relative attractiveness of a node for expansion when that node is compared with competing nodes on the solution path.
Abstract: Previous studies of A* tree-searching have modeled heuristics as random variables. The average number of nodes expanded is expressed asymptotically in terms of distance to goal. The conclusion reached is that A* complexity is an exponential function of heuristic error: Polynomial error implies exponential complexity and logarithmic accuracy is required for polynomial complexity.

Journal ArticleDOI
TL;DR: The lower bound for the time complexity of the integer problem on uniqueness of elements is investigated, and lower bounds for several combinatorial geometric problems to which this problem can be reduced are also investigated.
Abstract: We consider the integer problem on uniqueness of elements, which is formulated as follows: check whether at least one pair of coinciding numbers exists among n given integers. It is proved that a lower bound for time complexity of this problem in the model of linear decision tree is Ω(n log n). This problem can be reduced to several geometric problems, and therefore we obtain lower bounds for their time complexities. While investigating lower bounds for time complexity, the method where a problem with a known lower bound is reduced to the problem under consideration is widely used. The problem on uniqueness of elements, side by side with a number of other problems, in particular the sorting problem, constitutes a basis for such investigations. Lower bounds for time complexities of some combinatorial geometric problems (e.g. the problems on intersections and proximity [1]) are obtained with its help. In a number of cases, to this end we need the problem on uniqueness of elements in a discrete formulation. The investigation of the lower bound for time complexity of the last problem is non-trivial, in contrast to sorting discrete data. At the same time, a number of authors (including the author of the present paper) used the estimate Ω(n log n) as the lower bound for its time complexity without proof [2, 3, 4]. Note that $a_n = \Omega(b_n)$ if there exists a constant $c > 0$ such that $a_n > c b_n$. In the present paper, the lower bound for time complexity of the integer problem on uniqueness of elements is investigated, and lower bounds for several combinatorial geometric problems to which this problem can be reduced are also investigated. The problem on uniqueness of elements is formulated as follows: check whether at least one pair of coinciding elements exists in a set of n given numbers. If the input data are real, then the problem is a special case of the membership problem: for any $x \in R^n$ check whether the condition $x \in \bigcup_{j \in J} M_j$ is satisfied, where $R^n$ is the n-dimensional Euclidean space, $\{M_j\}_{j \in J}$ is a set of open mutually separable domains in $R^n$, and J is a finite set of indices. To prove this, it is sufficient to put $J = S_n$ ($S_n$ is the set of permutations on a set of n elements) and $M_\pi = \{(x_1, x_2, \ldots, x_n) \in R^n \mid x_{\pi(1)} < x_{\pi(2)} < \ldots < x_{\pi(n)}\}$, $\pi \in S_n$. It is known that Ω(log |J|) is the lower bound for time complexity of the membership problem, if we choose the linear decision tree [5] or the algebraic decision tree [6, 7] as computational models. Therefore Ω(n log n) is the lower bound for time complexity of the problem on uniqueness of elements for real numbers. The question of a lower bound in the case of integers remains unsolved, because not every algorithm which solves the problem for integers can solve it for real numbers. We give an example of … [*UDC 519.954. Originally published in Diskretnaya Matematika (1990) 3, No. 3, 31-34 (in Russian). Translated by A. V. Kolchin.]

Proceedings ArticleDOI
24 Jun 1992
TL;DR: Some results on multivariate integration and function approximation are discussed, and it is shown by how much the complexity is reduced when the average case setting is utilized.
Abstract: The worst case setting seems to be the most important setting for studying complexity of approximately solved problems. Unfortunately, some problems of practical importance are intractable or even unsolvable in the worst case setting and to cope with the inherent difficulty of such problems one has to switch to other settings. A natural alternative is provided by the average case setting under which worst case intractable and/or unsolvable problems become tractable. In this paper we will discuss some results on multivariate integration and function approximation and show by how much the complexity is reduced when the average case is utilized.

Proceedings ArticleDOI
24 Jun 1992
TL;DR: In this article, the authors study the complexity of zero finding for univariate functions changing sign at the endpoints of an interval, survey some results of different authors, and show that the average case complexity is at most of the order ln ln(1/ε).
Abstract: We study the zero finding problem for univariate functions changing sign at the endpoints of an interval, and we survey some results of different authors. The complexity of zero finding, i.e., the minimal cost to determine a zero with a given accuracy ε, is studied in the worst and in the average case setting. For classes of smooth functions the results in both settings differ significantly. While ln(1/ε) is the order of the worst case complexity, the average case complexity is at most of the order ln ln(1/ε).
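The ln(1/ε) worst case order is already attained by plain bisection, sketched below in Python; the much smaller ln ln(1/ε) average case order is achieved by more sophisticated hybrid methods that are not reproduced here.

```python
import math

def bisect_zero(f, a, b, eps):
    """Bisection for a continuous f with f(a) and f(b) of opposite sign.
    Returns an interval of length <= eps containing a zero; the number of
    f-evaluations is about log2((b - a) / eps), i.e. of order ln(1/eps)."""
    fa = f(a)
    while b - a > eps:
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0:
            return m, m
        if (fa < 0.0) == (fm < 0.0):
            a, fa = m, fm
        else:
            b = m
    return a, b

# Example: locate the zero of cos(x) in [1, 2] to within 1e-6.
print(bisect_zero(math.cos, 1.0, 2.0, 1e-6))
```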