
Showing papers on "Average-case complexity" published in 1978


Journal ArticleDOI
TL;DR: The strongest known diagonalization results for both deterministic and nondeterministic time complexity classes are reviewed and organized for comparison with the results of the new padding technique.
Abstract: A recursive padding technique is used to obtain conditions sufficient for separation of nondeterministic multitape Turing machine time complexity classes. If T2 is a running time and T1(n + 1) grows more slowly than T2(n), then there is a language which can be accepted nondeterministically within time bound T2 but which cannot be accepted nondeterministically within time bound T1. If even T1(n + f(n)) grows more slowly than T2(n), where f is the very slowly growing "rounded inverse" of some real-time countable function, then there is such a language over a single-letter alphabet. The strongest known diagonalization results for both deterministic and nondeterministic time complexity classes are reviewed and organized for comparison with the results of the new padding technique.
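
Restated in modern notation (a transcription of the abstract's two claims, reading "grows more slowly than" as little-o; the class notation NTIME(T) for nondeterministic time T is an assumption of this gloss, not the abstract's own):

\[
T_1(n+1) = o(T_2(n)) \;\Longrightarrow\; \mathrm{NTIME}(T_2) \setminus \mathrm{NTIME}(T_1) \neq \varnothing,
\]
\[
T_1(n+f(n)) = o(T_2(n)) \;\Longrightarrow\; \mathrm{NTIME}(T_2) \setminus \mathrm{NTIME}(T_1) \text{ contains a language over a single-letter alphabet.}
\]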

189 citations


Book ChapterDOI
01 Jan 1978
TL;DR: In this paper a general integer programming problem is shown to be NP-complete; the proof given for this result uses only elementary linear algebra.
Abstract: Recently much effort has been devoted to determining the computational complexity for a variety of integer programming problems. In this paper a general integer programming problem is shown to be NP-complete; the proof given for this result uses only elementary linear algebra. Complexity results are also summarized for several particularizations of this general problem, including knapsack problems, problems which relax integrality or non-negativity restrictions and integral optimization problems with a fixed number of variables.
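
As a concrete illustration (a hypothetical sketch, not the paper's construction), consider 0-1 feasibility: is there an x in {0,1}^n with sum(a_i * x_i) = b? This is a special case of the general integer program (and of the knapsack problems mentioned above), and the two ingredients of an NP-completeness proof, polynomial-time certificate checking and exponential worst-case search, are visible in a few lines:

from itertools import product

# Hypothetical sketch: subset-sum feasibility as a 0-1 integer program.
# An instance (a, b) asks for x in {0,1}^n with sum(a[i] * x[i]) == b.

def verify_certificate(a, b, x):
    """Polynomial-time check of a candidate 0-1 solution: the 'in NP' half."""
    return (len(x) == len(a)
            and all(xi in (0, 1) for xi in x)
            and sum(ai * xi for ai, xi in zip(a, x)) == b)

def solve_by_enumeration(a, b):
    """Brute-force search over all 2^n vectors; no known algorithm avoids
    exponential worst-case behaviour unless P = NP."""
    for x in product((0, 1), repeat=len(a)):
        if verify_certificate(a, b, list(x)):
            return list(x)
    return None

print(solve_by_enumeration([3, 5, 7, 11], 18))  # -> [0, 0, 1, 1], since 7 + 11 = 18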

140 citations


Journal ArticleDOI
TL;DR: In this article, the authors give timing comparisons for three sorting algorithms written for the CDC STAR computer and show that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
Abstract: This paper gives timing comparisons for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)² as compared to a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
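
To see why Batcher's method exploits vector hardware so well, here is a hypothetical sketch of his odd-even merge sort in its standard textbook form (not the CDC STAR implementation): the compare-exchange pattern is fixed in advance, independent of the data, so each stage maps onto whole-vector operations, at the price of O(N (log N)²) comparisons overall.

# Hypothetical sketch of Batcher's odd-even merge sort. The sequence of
# compare-exchange pairs depends only on n, never on the values, so each
# stride-`step` pass can be issued as one vector min/max operation.

def compare_exchange(a, i, j):
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def odd_even_merge(a, lo, n, step):
    """Merge the two sorted halves of a[lo:lo+n], touching every `step`-th element."""
    double = step * 2
    if double < n:
        odd_even_merge(a, lo, n, double)           # even subsequence
        odd_even_merge(a, lo + step, n, double)    # odd subsequence
        for i in range(lo + step, lo + n - step, double):
            compare_exchange(a, i, i + step)
    else:
        compare_exchange(a, lo, lo + step)

def odd_even_merge_sort(a, lo=0, n=None):
    """Sort a[lo:lo+n]; n must be a power of two."""
    if n is None:
        n = len(a)
    if n > 1:
        half = n // 2
        odd_even_merge_sort(a, lo, half)
        odd_even_merge_sort(a, lo + half, half)
        odd_even_merge(a, lo, n, 1)

data = [5, 7, 1, 8, 2, 6, 4, 3]
odd_even_merge_sort(data)
print(data)  # -> [1, 2, 3, 4, 5, 6, 7, 8]

Quicksort's comparison pattern, by contrast, is data-dependent, which limits how much of it can be vectorized and explains the bias the timings reveal.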

30 citations


Journal ArticleDOI
TL;DR: In 1973 Blum, Floyd et al. presented a linear algorithm to select the i-th smallest element of a set.
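
The algorithm referred to is the Blum-Floyd-Pratt-Rivest-Tarjan median-of-medians method. A hypothetical Python sketch (illustrative, not the authors' code) shows the key idea: using the median of group-of-five medians as the pivot guarantees that each recursive call discards a constant fraction of the input, giving O(n) comparisons in the worst case.

def select(a, i):
    """Return the i-th smallest element of a (1-indexed)."""
    if len(a) <= 5:
        return sorted(a)[i - 1]
    # Median of each group of five, then recursively the median of those.
    medians = [sorted(a[j:j + 5])[len(a[j:j + 5]) // 2]
               for j in range(0, len(a), 5)]
    pivot = select(medians, (len(medians) + 1) // 2)
    lo = [x for x in a if x < pivot]
    hi = [x for x in a if x > pivot]
    n_eq = len(a) - len(lo) - len(hi)   # elements equal to the pivot
    if i <= len(lo):
        return select(lo, i)
    if i <= len(lo) + n_eq:
        return pivot
    return select(hi, i - len(lo) - n_eq)

print(select([9, 1, 8, 2, 7, 3, 6, 4, 5], 3))  # -> 3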

24 citations


Proceedings ArticleDOI
01 May 1978
TL;DR: A new lower bound on the computational complexity of the theory of real addition and several related theories is established: any decision procedure for these theories requires either space 2^(εn) or nondeterministic time 2^(εn²) for some constant ε > 0 and infinitely many n.
Abstract: A new lower bound on the computational complexity of the theory of real addition and several related theories is established: any decision procedure for these theories requires either space 2^(εn) or nondeterministic time 2^(εn²) for some constant ε > 0 and infinitely many n. The proof is based on the families of languages TISP(T(n),S(n)) which can be recognized simultaneously in time T(n) and space S(n) and the conditions under which they form a hierarchy.
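
In symbols, under one natural reading of the abstract's quantifiers (the resource names space_P and ntime_P are hypothetical labels for this gloss):

\[
\exists\, \varepsilon > 0 \;\; \forall \text{ decision procedures } P \;\; \exists^{\infty} n: \quad \mathrm{space}_P(n) \ge 2^{\varepsilon n} \;\text{ or }\; \mathrm{ntime}_P(n) \ge 2^{\varepsilon n^{2}}.
\]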

21 citations



Proceedings ArticleDOI
16 Oct 1978
TL;DR: In this article, it was shown that there exists an absolute constant c′ > 0 such that Vk(n) − n ≥ c′k log log n as n → ∞, proving a conjecture by Matula.
Abstract: Let Vk(n) be the minimum average number of pairwise comparisons needed to find the k-th largest of n numbers (k ≥ 2), assuming that all n! orderings are equally likely. D. W. Matula proved that, for some absolute constant c, Vk(n) − n ≤ ck log log n as n → ∞. In the present paper, we show that there exists an absolute constant c′ > 0 such that Vk(n) − n ≥ c′k log log n as n → ∞, proving a conjecture by Matula.
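
Together with Matula's upper bound, the new lower bound pins down the average-case cost of selection to within constant factors:

\[
c'\,k \log\log n \;\le\; V_k(n) - n \;\le\; c\,k \log\log n \quad (n \to \infty),
\qquad \text{i.e.} \quad V_k(n) = n + \Theta(k \log\log n) \text{ for fixed } k.
\]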

8 citations


Proceedings ArticleDOI
01 May 1978
TL;DR: The present paper deals more specifically with the problems involved in stating complexity bounds in a usable closed form for arbitrary operations on arbitrary data types.
Abstract: This paper represents a continuation of work in [LB1] and [LB2] directed toward the development of a unified, relative model for complexity theory. The earlier papers established a simple, natural and fairly general model and demonstrated its attractiveness by using it to state and prove a variety of technical results. The present paper uses the same model but deals more specifically with the problems involved in stating complexity bounds in a usable closed form for arbitrary operations on arbitrary data types. Work currently in progress is directed toward a similar unified treatment of the complexity of data structures.

7 citations


Journal ArticleDOI
TL;DR: It is shown that simple upper bounds on the number of tests required by a combinational network N can be derived from π(N), which is the total number of input-output paths in an acyclic network N.
Abstract: The problem of measuring the structural complexity of logic networks is examined. A complexity measure π(N) is proposed which is the total number of input-output paths in an acyclic network N. π(N) is easily computed by representing network structure in matrix form. It is shown that simple upper bounds on the number of tests required by a combinational network N can be derived from π(N). These bounds are fairly tight when N contains little or no fan-out. The path complexity of combinational functions is defined and briefly discussed.
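
For concreteness, here is a hypothetical sketch of how such a path count can be computed from the adjacency matrix of an acyclic network (illustrative; the paper's exact matrix formulation may differ):

import numpy as np

# In an acyclic network with 0-1 adjacency matrix A, entry (i, j) of A^k
# counts the length-k paths from node i to node j, so summing the powers
# of A and reading off the input rows and output columns gives the total
# number of input-output paths.

def path_complexity(A, inputs, outputs):
    """Total number of input-to-output paths in the DAG described by A."""
    n = A.shape[0]
    total = np.zeros_like(A)
    power = np.eye(n, dtype=A.dtype)
    for _ in range(n):              # every path in a DAG has length < n
        power = power @ A
        total += power
    return int(total[np.ix_(inputs, outputs)].sum())

# Tiny example: two inputs fan in to one gate, which feeds one output.
A = np.array([[0, 0, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(path_complexity(A, inputs=[0, 1], outputs=[3]))  # -> 2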

4 citations



Journal ArticleDOI
TL;DR: The McCreight-Meyer algorithm is a priority-queue construction from abstract recursion theory, designed for the proof of the so-called Naming or Honesty Theorem; its behaviour as a "closure operator" is examined, and various known and new results are obtained.