
Showing papers on "Average-case complexity published in 1989"


Proceedings ArticleDOI
01 Nov 1989
TL;DR: A system that automatically derives time bounds as a function of input size using abstract interpretation is described; the semantics-based setting makes it possible to prove the correctness of the time bound function.
Abstract: One way to analyse programs is to derive expressions for their computational behaviour. A time bound function (or worst-case complexity) gives an upper bound for the computation time as a function of the size of the input. We describe a system that derives such time bounds automatically using abstract interpretation. The semantics-based setting makes it possible to prove the correctness of the time bound function. The system can analyse programs in a first-order subset of Lisp, and we show how it can also be used to analyse programs in other languages.
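As a hedged illustration of the kind of time bound function such an analysis produces (this is not the paper's system; the Lisp-style definition and the bound below are invented for the example), a step counter checks a derived bound for a first-order list-length function:

```python
# Minimal sketch, not the paper's system: the style of time bound an analysis derives
# for a first-order Lisp-like definition, checked here by explicit step counting.

def length(xs, cost):
    # corresponds to: (define (length xs) (if (null? xs) 0 (+ 1 (length (cdr xs)))))
    cost[0] += 1                      # one unit per call, as in a simple step-counting semantics
    if not xs:
        return 0
    return 1 + length(xs[1:], cost)

def length_bound(n):
    # derived worst-case bound: T(n) = n + 1 calls for an input list of length n
    return n + 1

for n in range(6):
    cost = [0]
    length(list(range(n)), cost)
    assert cost[0] <= length_bound(n)  # the derived bound dominates the observed cost
```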

210 citations


Proceedings ArticleDOI
01 Feb 1989
TL;DR: Results include the equivalence of search and decision problems in the context of average-case complexity, an initial analysis of the structure of distributional-NP under reductions which preserve average polynomial time, and definitions and basic theorems regarding other complexity classes such as average log-space.
Abstract: This paper takes the next step in developing the theory of average case complexity initiated by Leonid A. Levin. Previous works [Levin 84, Gurevich 87, Venkatesan and Levin 88] have focused on the existence of complete problems. We widen the scope to other basic questions in computational complexity. Our results include: the equivalence of search and decision problems in the context of average case complexity; an initial analysis of the structure of distributional-NP under reductions which preserve average polynomial time; a proof that if all of distributional-NP is in average polynomial time then non-deterministic exponential time equals deterministic exponential time (i.e., a collapse in the worst-case hierarchy); and definitions and basic theorems regarding other complexity classes such as average log-space.
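For readers unfamiliar with the framework, one standard formulation of the key definitions is sketched below. This is a paraphrase of Levin's notion, not a quotation from the paper:

```latex
% One standard formulation (paraphrase, not quoted from the paper).
% A time bound t is polynomial on \mu-average if
\exists\, \varepsilon > 0 :\qquad \sum_{x} \mu(x)\,\frac{t(x)^{\varepsilon}}{|x|} \;<\; \infty .
% A distributional problem (D,\mu) is in Average-P if some algorithm for D runs in time
% polynomial on \mu-average; it is in distributional-NP (DistNP) when D \in NP and \mu is
% polynomial-time computable. The paper's reductions act on such pairs (D,\mu).
```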

167 citations


Book ChapterDOI
21 Aug 1989
TL;DR: A characterization of nondeterministic polynomial time as the set of properties expressible in existential second-order logic, a characterization of polynomial time as first-order logic plus a least fixed point operator, and a result derived using this approach showing that for all s(n) greater than or equal to log n, nondeterministic space s(n) is closed under complementation.
Abstract: Computational complexity began with the natural physical notions of time and space. Given a property S, an important issue is the complexity of checking whether or not an input satisfies S. For a long time, complexity referred to the time or space used in the computation. A mathematician might ask, "What is the complexity of expressing the property S?" It should not be surprising that these two questions, that of checking and that of expressing, are related. It is startling how closely tied they are when the second question refers to expressing the property in first-order logic. Many complexity classes originally defined in terms of time or space resources have precise definitions as classes in first-order logic. In 1974 Fagin gave a characterization of nondeterministic polynomial time as the set of properties expressible in second-order existential logic. We will begin with this result and then survey some more recent work relating first-order expressibility to computational complexity. Some of the results arising from this approach include characterizing polynomial time as the set of properties expressible in first-order logic plus a least fixed point operator, and showing that the set of first-order inductive definitions for finite structures is closed under complementation. We will end with an unexpected result that was derived using this approach: for all s(n) greater than or equal to log n, nondeterministic space s(n) is closed under complementation.
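As a hedged illustration (not taken from the chapter) of what "first-order logic plus a least fixed point operator" buys, s-t reachability in a digraph, a property not expressible in plain first-order logic, has a short FO(LFP) definition:

```latex
% Illustrative example, not from the chapter: reachability via a least fixed point.
\mathrm{REACH}(s,t) \;\equiv\;
  \Bigl[\mathrm{LFP}_{R,x,y}\;\bigl(x = y \;\lor\; \exists z\, (E(x,z) \wedge R(z,y))\bigr)\Bigr](s,t)
% On ordered finite structures, FO(LFP) captures exactly polynomial time (Immerman, Vardi),
% while Fagin's 1974 theorem characterizes NP as existential second-order logic.
```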

142 citations


Proceedings ArticleDOI
01 Jul 1989
TL;DR: An efficient, randomized hidden surface removal algorithm with the best time complexity so far; its expected-time guarantee provably holds for any input, regardless of the way in which faces are located in the scene.
Abstract: We give an efficient, randomized hidden surface removal algorithm with the best time complexity so far. A distinguishing feature of this algorithm is that the expected time it spends on junctions at "obstruction level" l, with respect to the viewer, is inversely proportional to l. This provably holds for any input, regardless of the way in which faces are located in the scene, because the expectation is with respect to the randomization in the algorithm and does not depend on the input. In practice, this means that the time complexity is roughly proportional to the size of the actually visible output times the logarithm of the average depth complexity of the scene (this logarithm is generally very small).

72 citations


Proceedings Article
21 Aug 1989
TL;DR: Two types of time equation are introduced: sufficient-time equations and necessary-time equations, which together provide bounds on the exact time-complexity of expressions in a lazy higher-order language.
Abstract: This paper is concerned with the time-analysis of functional programs. Techniques which enable us to reason formally about a program’s execution costs have had relatively little attention in the study of functional programming. We concentrate here on the construction of equations which compute the time-complexity of expressions in a lazy higher-order language.
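To see why laziness makes cost analysis delicate, here is a small illustration (Python generators standing in for lazy lists; this is not the paper's calculus of sufficient- and necessary-time equations): the cost of taking the head of a mapped list depends on how much of the result is demanded.

```python
# Illustration only, not the paper's time equations: under lazy evaluation the cost of an
# expression depends on how much of its result is demanded.

calls = 0

def f(x):
    global calls
    calls += 1          # count how many element evaluations are actually forced
    return x * x

xs = range(10_000)

# eager: list() forces every element of the mapped list
calls = 0
eager_head = list(map(f, xs))[0]
print("eager evaluations:", calls)    # 10000

# lazy: taking only the head forces a single element (map() is lazy in Python 3)
calls = 0
lazy_head = next(iter(map(f, xs)))
print("lazy evaluations:", calls)     # 1
```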

60 citations


Proceedings ArticleDOI
30 Oct 1989
TL;DR: An Ω((log n)²) bound on the probabilistic communication complexity of monotonic st-connectivity is proved, and it is deduced that every nonmonotonic NC¹ circuit for st-connectivity requires a constant fraction of negated input variables.
Abstract: The authors demonstrate an exponential gap between deterministic and probabilistic complexity and between the probabilistic complexity of monotonic and nonmonotonic relations. They then prove, as their main result, an Ω((log n)²) bound on the probabilistic communication complexity of monotonic st-connectivity. From this they deduce that every nonmonotonic NC¹ circuit for st-connectivity requires a constant fraction of negated input variables.

53 citations


Proceedings ArticleDOI
19 Jun 1989
TL;DR: The present authors widen the scope to other basic questions in computational complexity to include the equivalence of search and decision problems in the context of average case complexity and an initial analysis of the structure of distributional-NP under reductions which preserve average polynomial-time.
Abstract: Summary form only given, as follows. The authors take the next step in developing the theory of average case complexity initiated by L.A. Levin. Previous work has focused on the existence of complete problems. The present authors widen the scope to other basic questions in computational complexity. Their results include: (1) the equivalence of search and decision problems in the context of average case complexity; (2) an initial analysis of the structure of distributional-NP under reductions which preserve average polynomial-time; (3) a proof that if all distributional-NP is in average polynomial-time then nondeterministic exponential-time equals deterministic exponential time (i.e. a collapse in the worst-case hierarchy); and (4) definitions and basic theorems regarding other complexity classes such as average log space.

37 citations


Proceedings ArticleDOI
C. G. Plaxton
30 Oct 1989
TL;DR: An Ω((n/p) log log p + log p) lower bound is obtained for selection on any network that satisfies a particular low expansion property, such as the tree, multidimensional mesh, hypercube, butterfly, and shuffle-exchange.
Abstract: The sequential complexity of determining the kth largest out of a given set of n keys is known to be linear. Thus, given a p-processor parallel machine, it is asked whether or not an O(n/p) selection algorithm can be devised for that machine. An Ω((n/p) log log p + log p) lower bound is obtained for selection on any network that satisfies a particular low expansion property. The class of networks satisfying this property includes all of the common network families, such as the tree, multidimensional mesh, hypercube, butterfly, and shuffle-exchange. When n/p is sufficiently large (e.g. greater than log² p on the butterfly, hypercube, and shuffle-exchange), this result is matched by the upper bound given previously by the author (Proc. 1st Ann. ACM Symp. on Parallel Algorithms and Architectures, p. 64-73, 1989).

34 citations


Proceedings ArticleDOI
19 Jun 1989
TL;DR: The complexity of multiplying together n elements of a group G is studied and it is observed that as G ranges over a sequence of well-studied groups, the iterated multiplication problem is complete for corresponding well- studied complexity classes.
Abstract: The complexity of multiplying together n elements of a group G is studied. It is observed that as G ranges over a sequence of well-studied groups, the iterated multiplication problem is complete for corresponding well-studied complexity classes. Furthermore, the notion of completeness in question is extremely low-level and algebraic. The issue of uniformity is investigated.
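Stated concretely, the iterated multiplication problem is just a fold of the group operation over n elements. The sketch below is an illustration, not the paper's constructions; it uses the symmetric group S5, the classic case in which iterated multiplication is complete for NC¹ via Barrington's theorem, while other groups correspond to other classes.

```python
# The underlying computational problem, stated directly: given group elements g1,...,gn,
# compute the product g1 * g2 * ... * gn. Here the group is S5, with permutations as tuples.

from functools import reduce
import random

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def iterated_product(perms):
    identity = tuple(range(5))
    return reduce(compose, perms, identity)

random.seed(0)
perms = [tuple(random.sample(range(5), 5)) for _ in range(8)]
print(iterated_product(perms))
```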

31 citations


Journal Article
TL;DR: A textbook covering the elementary theory of computation, the abstract complexity of computation, elementary predicate logic with logical analyses of the concepts of truth and proof, and the complexity of logical decision problems.
Abstract: Elementary Theory of Computation. The Mathematical Concept of Algorithm. Church's Thesis. Universal Programs and the Recursion Theorem. Complexity of Algorithmic Unsolvability. Recursively Unsolvable Problems. The Arithmetical Hierarchy and Degrees of Unsolvability. Abstract Complexity of Computation. Recursiveness and Complexity. Complexity Classes of Recursive Functions. Complexity Classes of Primitive Recursive Functions. Polynomially- and Exponentially-Bounded Complexity Classes. Finite Automata. Context-Free Languages. Elementary Predicate Logic. Logical Analysis of the Truth Concept. Syntax and Semantics. Completeness Theorem. Consequences of the Completeness Theorem. Logical Analysis of the Concept of Proof. Gentzen's Calculus LK. Cut Elimination for LK. Consequences of the Cut Elimination Theorem. Complexity of Logical Decision Problems. Undecidability and Reduction Classes. Incompleteness of Arithmetic. Recursive Lower Complexity Bounds. Bibliography. Index.

27 citations


Proceedings ArticleDOI
30 Oct 1989
TL;DR: In this paper, it is shown that one can learn under all simple distributions, where a distribution is called simple if it is dominated by a semicomputable distribution, provided one can learn under one fixed distribution, called the universal distribution.
Abstract: It is pointed out that in L.G. Valiant's learning model (Commun. ACM, vol. 27, p. 1134-42, 1984) many concepts turn out to be too hard to learn, whereas in practice, almost nothing we care to learn appears to be unlearnable. To model the intuitive notion of learning more closely, it is assumed that learning happens under an arbitrary simple distribution, rather than under an arbitrary distribution as assumed by Valiant. A distribution is called simple if it is dominated by a semicomputable distribution. A general theory of learning under simple distributions is developed. In particular, it is shown that one can learn under all simple distributions if one can learn under one fixed simple distribution, called the universal distribution. Interesting learning algorithms and several quite general new learnable classes are presented. It is shown that for essentially all algorithms, if the inputs are distributed according to the universal distribution, then the average-case complexity is of the same order of magnitude as the worst-case complexity.
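For reference, one common way the notions above are formalized, paraphrased under the usual Kolmogorov-complexity conventions rather than quoted from the paper:

```latex
% A distribution \mu is simple if it is multiplicatively dominated by some enumerable
% (lower semicomputable) distribution \nu:
\exists c > 0\ \forall x:\qquad c \cdot \nu(x) \;\geq\; \mu(x).
% The universal enumerable distribution \mathbf{m} satisfies
\mathbf{m}(x) \;=\; 2^{-K(x) + O(1)},
% where K is prefix Kolmogorov complexity, and \mathbf{m} dominates every enumerable
% distribution. Hence learnability (and good average-case behaviour) under \mathbf{m}
% transfers to all simple distributions.
```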

Journal ArticleDOI
TL;DR: A simple proof of the following result of Alon and Azar: Every parallel comparison tree with p processors that sorts n elements requires average-case time Ω(log(n)/log(1+p/n)) .

01 Jan 1989
TL;DR: A method of automatic complexity analysis is presented that can deal with divide-and-conquer algorithms with an "intelligent" divide function, which are based not on structural induction but on Noetherian induction.
Abstract: Current tools performing automatic complexity analysis can deal with function definitions based on structural induction. Divide-and-conquer algorithms with an "intelligent" divide function (like quicksort) are not based on structural induction, but on Noetherian induction. This paper presents a method of automatic complexity analysis that deals with such kinds of functions.
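A hedged sketch of the kind of recurrence such an analysis must handle for an "intelligent" divide: quicksort's expected comparison count, assuming a uniformly random pivot rank, is defined by a size-decreasing (Noetherian) rather than structural recursion, and can be tabulated directly.

```python
# Sketch only (generic quicksort assumption, not the paper's method):
#   C(0) = C(1) = 0
#   C(n) = (n - 1) + (1/n) * sum_{k=1..n} (C(k-1) + C(n-k))
# Tabulate the recurrence and compare with the leading-order asymptotics ~ 2 n ln n.

import math

N = 2000
C = [0.0] * (N + 1)          # C[n] = expected comparisons to quicksort n distinct keys
prefix = C[0] + C[1]         # running sum of C[0..n-1] keeps the loop linear
for n in range(2, N + 1):
    C[n] = (n - 1) + 2.0 * prefix / n
    prefix += C[n]

for n in (10, 100, 1000, 2000):
    print(n, round(C[n], 1), round(2 * n * math.log(n), 1))
```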

Book ChapterDOI
19 Dec 1989
TL;DR: This work studied the average case complexity of the Rete algorithm on collections of patterns and objects with a random tree structure.
Abstract: The Rete algorithm [Forg 82] is a very efficient method for comparing a large collection of patterns with a large collection of objects. It is widely used in rule-based expert systems. We studied ([AF 88] or [Alb 88]) the average case complexity of the Rete algorithm on collections of patterns and objects with a random tree structure. Objects and patterns are often made up of a head-symbol and a list of variable or constant arguments (OPS5 [Forg 82]).

Proceedings ArticleDOI
01 Mar 1989
TL;DR: The communication complexity of singularity testing, where the problem is to determine whether a given square matrix M is singular, is investigated, and it is shown that, for n × n matrices of k-bit integers, the communication complexity is Θ(kn²).
Abstract: The communication complexity of a function f measures the communication resources required for computing f. In the design of VLSI systems, where savings in chip area and computation time are desired, this complexity dictates an area × time² lower bound. We investigate the communication complexity of singularity testing, where the problem is to determine whether a given square matrix M is singular. We show that, for n × n matrices of k-bit integers, the communication complexity of Singularity Testing is Θ(kn²). Our results imply tight bounds for a wide variety of other problems in numerical linear algebra. Among those problems are determining the rank and computing the determinant, as well as the computation of several matrix decompositions. Another important corollary concerns the solvability of systems of linear equations. This problem is to decide whether a linear system Ax = b has a solution. When A is an n × n matrix of k-bit integers and b a vector of n k-bit integers, its communication complexity is Θ(kn²).

Proceedings Article
01 Aug 1989
TL;DR: It is not the case that the Tomita algorithm is always more efficient than Earley’s algorithm; rather there are grammars for which it is exponentially slower, and two main results are presented.
Abstract: The Tomita parsing algorithm adapts Knuth's (1967) well-known parsing algorithm for LR(k) grammars to non-LR grammars, including ambiguous grammars. Knuth's algorithm is provably efficient: it requires at most O(n|G|) units of time, where |G| is the size of (i.e. the number of symbols in) G and n is the length of the string to be parsed. This is often significantly better than the O(n³|G|²) worst-case time required by standard parsing algorithms such as the Earley algorithm. Since the Tomita algorithm is closely related to Knuth's algorithm, one might expect that it too is provably more efficient than the Earley algorithm, especially as actual computational implementations of Tomita's algorithm outperform implementations of the Earley algorithm (Tomita 1986, 1987). This paper shows that this is not the case. Two main results are presented. First, for any m there is a grammar L_m such that Tomita's algorithm performs Ω(n^m) operations to parse a string of length n. Second, there is a sequence of grammars G_m such that Tomita's algorithm performs Ω(n^(c·|G_m|)) operations to parse a string of length n. Thus it is not the case that the Tomita algorithm is always more efficient than Earley's algorithm; rather there are grammars for which it is exponentially slower. This result is foreshadowed in Tomita (1986, p. 72), where the author remarks that Tomita's algorithm can require time proportional to more than the cube of the input length. The result showing that the Tomita parser can require time proportional to an exponential function of the grammar size is new, as far as I can tell.

Book ChapterDOI
03 Apr 1989
TL;DR: To treat possibly non-terminating reduction, the limit of such a reduction is formalized using Scott's order-theoretic approach, and an interpretation of the function symbols of a TRS as a continuous algebra, namely, continuous functions on a cpo, is given.
Abstract: The present paper studies the semantics of linear and non-overlapping TRSs. To treat possibly non-terminating reduction, the limit of such a reduction is formalized using Scott's order-theoretic approach. An interpretation of the function symbols of a TRS as a continuous algebra, namely, continuous functions on a cpo, is given, and universality properties of this interpretation are discussed. Also a measure for computational complexity of possibly non-terminating reduction is proposed. The space of complexity forms a cpo and function symbols can be interpreted as monotone functions on it.

Ker-I Ko
01 Jan 1989
TL;DR: In this paper, the computational complexity of the integrals and derivatives of convex functions defined on the interval [0,1] is studied.
Abstract: In this paper, we study the computational complexity of the integrals and the derivatives of convex functions defined on the interval [0,1].

BookDOI
01 Jan 1989
TL;DR: A survey volume with chapters on computational complexity theory, the isomorphism conjecture and sparse sets, restricted relativizations of complexity classes, descriptive and computational complexity, complexity issues in cryptography, and interactive proof systems.
Abstract: Overview of computational complexity theory, by J. Hartmanis. The isomorphism conjecture and sparse sets, by S. R. Mahaney. Restricted relativizations of complexity classes, by R. V. Book. Descriptive and computational complexity, by N. Immerman. Complexity issues in cryptography, by A. L. Selman. Interactive proof systems, by S. Goldwasser.


Journal ArticleDOI
TL;DR: A polynomial-time algorithm for linear programming that augments the objective with a logarithmic penalty function and then solves a sequence of quadratic approximations of this program, maintaining primal and dual feasibility at all iterations.
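The construction described in this summary, in one standard form; this is a paraphrase of the generic log-barrier scheme, not necessarily the paper's exact formulation:

```latex
% The linear program  min c^T x  s.t.  Ax = b, x >= 0  is replaced by the barrier family
\min_{x > 0,\; Ax = b} \;\; c^{\mathsf T}x \;-\; \mu \sum_{i=1}^{n} \ln x_i ,
% and each barrier subproblem is approximately solved via a quadratic (Newton-type)
% approximation while \mu is driven toward 0, maintaining primal and dual feasibility.
```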

Book ChapterDOI
28 Aug 1989
TL;DR: The p-projection completeness of a number of extremely restricted modifications of the GRAPH-ACCESSIBILITY-PROBLEMS for switching graphs is proved.
Abstract: Using certain branching-program-based characterizations of the nonuniform complexity classes, we prove the p-projection completeness of a number of extremely restricted modifications of the GRAPH-ACCESSIBILITY-PROBLEMS for switching graphs.

Book ChapterDOI
11 Jul 1989
TL;DR: A new combinatorial technique is introduced to obtain relativized separations of certain complexity classes related to the idea of counting, like PP, C=P, and ⊕P, thus solving an open problem proposed by Angluin in [An,80].
Abstract: We introduce a new combinatorial technique to obtain relativized separations of certain complexity classes related to the idea of counting, like PP, C=P (exact counting), and ⊕P (parity). To demonstrate its usefulness we present three relativizations separating NP from C=P, NP from ⊕P, and ⊕P from PP. Other separations follow from these results, and as a consequence we obtain an oracle separating PP from PSPACE, thus solving an open problem proposed by Angluin in [An,80]. From the relativized separations we obtain absolute separations for counting complexity classes with log-time bounded computation time.

Journal ArticleDOI
TL;DR: A lower bound on the deterministic complexity is derived which generalizes the bounds known for 2-processor systems and is a canonical extension of the results known for the special case k=2.

Proceedings ArticleDOI
30 Oct 1989
TL;DR: An attempt is made to give a more accurate classification of the computational complexity of roots of real functions, and the complexity of their roots is characterized in terms of relations between discrete complexity classes, such as LOGSPACE, P, UP, and NP.
Abstract: An attempt is made to give a more accurate classification of the computational complexity of roots of real functions. Attention is focused on the simplest types of functions, namely, one-to-one and k-to-one functions, and the complexity of their roots is characterized in terms of relations between discrete complexity classes, such as LOGSPACE, P, UP, and NP.

Journal ArticleDOI
TL;DR: An asynchronous network is considered and a distributed algorithm which constructs the breadth-first search tree with the specified processor as the root is proposed, which is better than other known algorithms in terms of the message complexity.
Abstract: When the information data needed to solve a problem are distributed over processors on a network, an algorithm which solves the problem by exchanging this information is called a distributed algorithm. A large number of distributed algorithms have been proposed for various problems, but the proof of validity is shown only for a few of them. This paper considers an asynchronous network and proposes a distributed algorithm which constructs the breadth-first search tree with a specified processor as the root. The validity of the algorithm is shown. In general, the efficiency of a distributed algorithm is evaluated by the total number of messages exchanged during execution (message complexity) and the execution time (ideal-time complexity), assuming the communication delay is one unit of time. In the algorithm proposed in this paper, the message complexity and the ideal-time complexity are both O(n·√e), where n is the number of processors and e is the number of links in the network. In particular, when e = Ω((n/log n)²), the proposed algorithm is better than other known algorithms in terms of message complexity.
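For orientation, the object being built is the ordinary BFS tree. The sketch below is a centralized construction, an illustration only and not the paper's distributed algorithm, which must achieve the same result on an asynchronous network within O(n·√e) messages:

```python
# Centralized reference construction of a breadth-first search tree rooted at a given node.

from collections import deque

def bfs_tree(adj, root):
    # adj: dict node -> iterable of neighbour nodes; returns parent pointers of the BFS tree
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(bfs_tree(adj, 1))   # {1: None, 2: 1, 3: 1, 4: 2}
```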

Book ChapterDOI
02 Oct 1989
TL;DR: The average running time of backtracking for solving the set-partitioning problem is studied under two probability models, the constant set size model and the constant occurrence model.
Abstract: The average running time of backtracking for solving the set-partitioning problem under two probability models, the constant set size model and the constant occurrence model, will be studied. Results will be shown that separate classes of instances with exponential average running time from classes with polynomial average running time.
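A minimal backtracking solver for set partitioning (exact cover), with a node counter of the kind whose average the paper analyses; this is an illustration under generic assumptions, not the paper's procedure or its probability models:

```python
# Backtracking for set partitioning: choose sets that cover every ground element exactly once.

def partition(universe, sets):
    nodes = [0]                                 # number of search-tree nodes visited

    def backtrack(uncovered, chosen):
        nodes[0] += 1
        if not uncovered:
            return chosen
        x = min(uncovered)                      # branch on one still-uncovered element
        for i, s in enumerate(sets):
            if x in s and s <= uncovered:       # s must fit inside what is still uncovered
                result = backtrack(uncovered - s, chosen + [i])
                if result is not None:
                    return result
        return None

    return backtrack(frozenset(universe), []), nodes[0]

sets = [{1, 2}, {3, 4}, {2, 3}, {1, 4}, {1, 2, 3}]
print(partition({1, 2, 3, 4}, sets))            # ([0, 1], 3): sets {1,2} and {3,4}, 3 nodes
```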

Book ChapterDOI
TL;DR: It is deduced that the minimum number of processors needed to compute the Givens factorization in the optimal time Topt is equal to Popt = n/(2+√2).
Abstract: We study the complexity of the parallel Givens factorization of a square matrix of size n on shared-memory multicomputers with p processors. We show how to construct an optimal algorithm using a greedy technique. We deduce that the time complexity is equal to $$T_{opt}(p) = \frac{n^2}{2p} + p + o(n) \quad \text{for } 1 \leq p \leq \frac{n}{2+\sqrt{2}},$$ and that the minimum number of processors needed to compute the Givens factorization in the optimal time $T_{opt}$ is $P_{opt} = \frac{n}{2+\sqrt{2}}$.
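For reference, a plain sequential Givens QR factorization in NumPy; this is an illustration only, since the paper's contribution is the optimal greedy scheduling of these rotations across p processors, yielding the T_opt(p) above:

```python
# Sequential Givens factorization A = Q R, zeroing subdiagonal entries column by column.

import numpy as np

def givens_qr(A):
    R = A.astype(float)
    n = R.shape[0]
    Q = np.eye(n)
    for j in range(n - 1):                      # column to clear
        for i in range(n - 1, j, -1):           # annihilate R[i, j] using rows i-1 and i
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T   # keep the invariant A = Q R
    return Q, R

A = np.random.rand(5, 5)
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))   # True True
```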

Proceedings ArticleDOI
19 Jun 1989
TL;DR: It is proved that a complexity type C contains sets A, B which are incomparable with respect to polynomial-time reductions if and only if C is not contained in P.
Abstract: The fine structure of time complexity classes for random access machines is analyzed. It is proved that a complexity type C contains sets A, B which are incomparable with respect to polynomial-time reductions if and only if C is not contained in P, and that there is a complexity type C that contains a minimal pair with respect to polynomial-time reductions. The fine structure of P with respect to linear-time reductions is analyzed. It is also shown that every complexity type C contains a sparse set.