
Showing papers on "Time complexity published in 1976"


Journal ArticleDOI
TL;DR: Several properties of the graph-theoretic complexity are proved which show, for example, that complexity is independent of physical size and complexity depends only on the decision structure of a program.
Abstract: This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph-theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The control graphs of several actual Fortran programs are then presented to illustrate the correlation between intuitive complexity and the graph-theoretic complexity. Several properties of the graph-theoretic complexity are then proved which show, for example, that complexity is independent of physical size (adding or subtracting functional statements leaves complexity unchanged) and complexity depends only on the decision structure of a program.
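
For reference, the measure in question is the cyclomatic number of the control graph: with E edges, N nodes, and P connected components, V(G) = E − N + 2P, which for a connected graph with only binary decisions equals the number of decisions plus one. A minimal sketch of computing it (the graph encoding below is illustrative, not the paper's):

    # Cyclomatic complexity V(G) = E - N + 2P of a control-flow graph given as an
    # adjacency list {node: [successors]}. The encoding is ours, for illustration.
    def cyclomatic_complexity(cfg, components=1):
        nodes = set(cfg) | {v for succs in cfg.values() for v in succs}
        edges = sum(len(succs) for succs in cfg.values())
        return edges - len(nodes) + 2 * components

    # An if-then-else followed by a while loop: 6 nodes, 7 edges, so V(G) = 3,
    # matching the two decision points plus one.
    cfg = {'entry': ['then', 'else'], 'then': ['join'], 'else': ['join'],
           'join': ['loop'], 'loop': ['loop', 'exit'], 'exit': []}
    print(cyclomatic_complexity(cfg))   # -> 3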

5,097 citations


Journal ArticleDOI
TL;DR: For P-complete problems such as traveling salesperson, cycle covers, 0-1 integer programming, multicommodity network flows, quadratic assignment, etc., it is shown that the approximation problem is also P-complete.
Abstract: For P-complete problems such as traveling salesperson, cycle covers, 0-1 integer programming, multicommodity network flows, quadratic assignment, etc., it is shown that the approximation problem is also P-complete. In contrast with these results, a linear time approximation algorithm for the clustering problem is presented.

1,718 citations


Journal ArticleDOI
TL;DR: A very primitive version of Gotlieb’s timetable problem is shown to be NP-complete, and therefore all the common timetable problems are NP-complete.
Abstract: A very primitive version of Gotlieb’s timetable problem is shown to be NP-complete, and therefore all the common timetable problems are NP-complete. A polynomial time algorithm, in case all teachers are binary, is shown. The theorem that a meeting function always exists if all teachers and classes have no time constraints is proved. The multicommodity integral flow problem is shown to be NP-complete even if the number of commodities is two. This is true both in the directed and undirected cases.

1,080 citations


Journal ArticleDOI
TL;DR: Three general techniques are presented for obtaining approximate solutions to optimization problems solvable by dynamic programming, and applying them yields polynomial time algorithms that generate “good” approximate solutions.
Abstract: The following job sequencing problems are studied: (i) single processor job sequencing with deadlines, (ii) job sequencing on m-identical processors to minimize finish time and related problems, (iii) job sequencing on 2-identical processors to minimize weighted mean flow time. Dynamic programming type algorithms are presented to obtain optimal solutions to these problems, and three general techniques are presented to obtain approximate solutions for optimization problems solvable in this way. The techniques are applied to the problems above to obtain polynomial time algorithms that generate “good” approximate solutions.
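
As a concrete illustration of the dynamic programming flavour of these algorithms, here is a minimal sketch (not the paper's exact formulation; names are ours) of minimizing finish time on 2 identical processors by enumerating reachable loads, with an optional rounding step in the spirit of the approximation techniques:

    def min_finish_time_2_processors(jobs, round_to=None):
        # jobs: positive processing times. If round_to is set, nearby reachable
        # loads are merged, shrinking the state space at the cost of optimality.
        total = sum(jobs)
        reachable = {0}                           # loads achievable on processor 1
        for t in jobs:
            reachable |= {load + t for load in reachable}
            if round_to:                          # approximation: one load per bucket
                reachable = {round(load / round_to) * round_to for load in reachable}
        return min(max(load, total - load) for load in reachable)

    jobs = [7, 3, 4, 8, 2]
    print(min_finish_time_2_processors(jobs))               # exact optimum: 12
    print(min_finish_time_2_processors(jobs, round_to=2))   # approximate answer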

561 citations


Proceedings ArticleDOI
25 Oct 1976
TL;DR: An O(N log N) algorithm is given to determine whether any two of N line segments intersect; it is used to detect whether two simple plane polygons intersect, and an O(N log N) algorithm for intersecting N half-planes shows that the Simplex method is not optimal.
Abstract: We develop optimal algorithms for forming the intersection of geometric objects in the plane and apply them to such diverse problems as linear programming, hidden-line elimination, and wire layout. Given N line segments in the plane, finding all intersecting pairs requires O(N²) time. We give an O(N log N) algorithm to determine whether any two intersect and use it to detect whether two simple plane polygons intersect. We employ an O(N log N) algorithm for finding the common intersection of N half-planes to show that the Simplex method is not optimal. The emphasis throughout is on obtaining upper and lower bounds and relating these results to other problems in computational geometry.
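
For context, the O(N²) baseline that the segment-intersection result improves on is the standard constant-time pairwise test applied to every pair; a minimal sketch (the paper's O(N log N) sweep-line detection is more involved and is not reproduced here):

    def orient(p, q, r):
        # Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right turn, 0 collinear.
        return (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])

    def on_segment(p, q, r):
        # Assuming p, q, r are collinear: is r within the bounding box of segment pq?
        return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
                min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

    def segments_intersect(s1, s2):
        (p1, p2), (p3, p4) = s1, s2
        d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
        d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
        if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
           ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
            return True                           # proper crossing
        return ((d1 == 0 and on_segment(p3, p4, p1)) or
                (d2 == 0 and on_segment(p3, p4, p2)) or
                (d3 == 0 and on_segment(p1, p2, p3)) or
                (d4 == 0 and on_segment(p1, p2, p4)))

    def any_pair_intersects(segments):            # the naive O(N^2) check
        return any(segments_intersect(segments[i], segments[j])
                   for i in range(len(segments)) for j in range(i + 1, len(segments)))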

473 citations


Proceedings ArticleDOI
13 Oct 1976
TL;DR: In this article, a graph-theoretic complexity measure for managing and controlling program complexity is presented. It is shown that the complexity is independent of physical size and depends only on the decision structure of a program.
Abstract: This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains how the graph theory concepts apply and gives an intuitive explanation of the graph concepts in programming terms. The control graphs of several actual FORTRAN programs are then presented to illustrate the correlation between intuitive complexity and the graph theoretic complexity. Several properties of the graph-theoretic complexity are then proved which show, for example, that complexity is independent of physical size (adding or subtracting functional statements leaves complexity unchanged) and complexity depends only on the decision structure of a program. The issue of using non-structured control flow is also discussed. A characterization of non-structured control graphs is given and a method of measuring the “structuredness” of a program is developed. The relationship between structure and reducibility is illustrated with several examples. The last section of the paper deals with a testing methodology used in conjunction with the complexity measure; a testing strategy is defined that dictates that a program can either admit of a certain minimal testing level or the program can be structurally reduced.

282 citations


Journal ArticleDOI
TL;DR: A variety of familiar problems are shown complete for P, including context-free emptiness, infiniteness and membership, establishing inconsistency of propositional formulas by unit resolution, deciding whether a player in a two-person game has a winning strategy, and determining whether an element is generated from a set by a binary operation.

268 citations


Journal ArticleDOI
TL;DR: It is shown, by very simple arguments, that determining the chromatic number of an arbitrary graph to within a given factor r < 2 is also NP-complete, and there is very little prospect of finding an efficient, i.e. polynomial-bounded, algorithm for the general problem, although some special cases can be solved efficiently.

251 citations


Proceedings ArticleDOI
03 May 1976
TL;DR: A divide-and-conquer technique in multidimensional space is investigated which decomposes a geometric problem on N points in k dimensions into two problems on N/2 points in k dimensions plus a single problem in k−1 dimensions, yielding an algorithm for finding the two closest of N points in O(N log N) time in any dimension.
Abstract: We investigate a divide-and-conquer technique in multidimensional space which decomposes a geometric problem on N points in k dimensions into two problems on N/2 points in k dimensions plus a single problem on N points in k−1 dimensions. Special structure of the subproblems is exploited to obtain an algorithm for finding the two closest of N points in O(N log N) time in any dimension. Related results are discussed, along with some conjectures and unsolved geometric problems.
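
A minimal sketch of the planar (k = 2) case of this divide-and-conquer recursion, assuming distinct points given as tuples (the paper's k-dimensional construction is not reproduced here):

    from math import dist, inf

    def closest_pair(points):
        # Smallest pairwise distance among distinct 2-D points, in O(N log N) time.
        px = sorted(points)                          # sorted by x once
        py = sorted(points, key=lambda p: p[1])      # sorted by y once
        return _rec(px, py)

    def _rec(px, py):
        n = len(px)
        if n <= 3:                                   # brute force small cases
            return min((dist(px[i], px[j]) for i in range(n) for j in range(i + 1, n)),
                       default=inf)
        mid, xm = n // 2, px[n // 2][0]
        left = set(px[:mid])
        d = min(_rec(px[:mid], [p for p in py if p in left]),
                _rec(px[mid:], [p for p in py if p not in left]))
        strip = [p for p in py if abs(p[0] - xm) < d]   # points near the dividing line
        for i, p in enumerate(strip):                   # each checks O(1) y-neighbours
            for q in strip[i + 1:i + 8]:
                d = min(d, dist(p, q))
        return d

    print(closest_pair([(0, 0), (3, 4), (1, 1), (5, 2), (2, 2)]))   # -> 1.414...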

170 citations


01 Jan 1976
TL;DR: This paper shows the problem of generating optimal code for expressions containing common subexpressions is computationally difficult, even for simple expressions and simple machines.
Abstract: This paper shows the problem of generating optimal code for expressions containing common subexpressions is computationally difficult, even for simple expressions and simple machines. Some heuristics for code generation are given and their worst-case behavior is analyzed. For one-register machines, an optimal code generation algorithm is given whose time complexity is linear in the size of an expression and exponential only in the amount of sharing.
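
As a simpler relative of such labeling-based code generation, here is a sketch of the classic Sethi–Ullman numbering for expression trees without sharing, assuming every leaf must be loaded into a register; it is not the paper's algorithm for expressions with common subexpressions:

    def registers_needed(node):
        # node is a leaf value, or a tuple (op, left, right); returns the minimum
        # number of registers to evaluate it with no intermediate stores.
        if not isinstance(node, tuple):
            return 1                                  # load the leaf
        _, left, right = node
        l, r = registers_needed(left), registers_needed(right)
        return max(l, r) if l != r else l + 1         # evaluate the costlier side first

    expr = ('+', ('*', 'a', 'b'), ('-', 'c', ('+', 'd', 'e')))
    print(registers_needed(expr))                     # -> 3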

158 citations


Proceedings ArticleDOI
03 May 1976
TL;DR: Two algorithms for sorting n² elements on an n×n mesh-connected processor array that require O(n) routing and comparison steps are presented and are shown to be optimal in time within small constant factors.
Abstract: Two algorithms for sorting n² elements on an n×n mesh-connected processor array that require O(n) routing and comparison steps are presented. The best previous algorithms take time O(n log n). Our algorithms are shown to be optimal in time within small constant factors.
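
For readers unfamiliar with the model, here is a sketch of odd–even transposition sort on a linear array, the one-dimensional analogue of mesh sorting in which each step is a parallel compare–exchange between neighbours; it sorts n items in n steps but is not the paper's two-dimensional O(n) algorithm:

    def odd_even_transposition_sort(a):
        a, n = list(a), len(a)
        for phase in range(n):                        # n phases suffice on a linear array
            for i in range(phase % 2, n - 1, 2):      # these exchanges happen in parallel
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a

    print(odd_even_transposition_sort([5, 1, 4, 2, 3]))   # -> [1, 2, 3, 4, 5]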

Proceedings ArticleDOI
03 May 1976
TL;DR: A search algorithm, called the point-location algorithm, is presented which operates on a suitably preprocessed data structure and yields interesting and efficient solutions to other geometric problems, such as spatial convex inclusion and inclusion in an arbitrary polygon.
Abstract: Given a subdivision of the plane induced by a planar graph with n vertices, in this paper we consider the problem of identifying which region of the subdivision contains a given test point. We present a search algorithm, called point-location algorithm, which operates on a suitably preprocessed data structure. The search runs in time at most O((log n)²), while the preprocessing task runs in time at most O(n log n) and requires O(n) storage. The methods are quite general, since an arbitrary subdivision can be transformed in time at most O(n log n) into one to which the preprocessing procedure is applicable. This solution of the point-location problem yields interesting and efficient solutions of other geometric problems, such as spatial convex inclusion and inclusion in an arbitrary polygon.
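
For comparison, the unpreprocessed version of the inclusion query that this structure accelerates is the linear-time crossing-number test for a simple polygon; a minimal sketch (boundary cases ignored, names ours):

    def point_in_polygon(pt, polygon):
        # Crossing-number test: count edges crossing a horizontal ray from pt to the right.
        x, y = pt
        inside, n = False, len(polygon)
        for i in range(n):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):                      # edge spans the ray's height
                if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                    inside = not inside
        return inside

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    print(point_in_polygon((2, 2), square))   # True
    print(point_in_polygon((5, 1), square))   # False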

Journal ArticleDOI
TL;DR: This paper presents further evidence in support of the conjecture that SP cannot be recognized using storage (log n)^k for any k, and proves the result for a suitably restricted device.

Proceedings ArticleDOI
10 Aug 1976
TL;DR: A Hybrid Mixed Basis FFT multiplication algorithm is obtained which has a cross-over point at degree 25 and is generally faster than a basic FFT algorithm, while retaining the desirable O(N log N) timing function of an FFT approach.
Abstract: The “fast” polynomial multiplication algorithms for dense univariate polynomials are those which are asymptotically faster than the classical O(N²) method. These “fast” algorithms suffer from a common defect: the size of problem at which they start to be better than the classical method is quite large; so large, in fact, that it is impractical to use them in an algebraic manipulation system. A number of techniques are discussed here for improving these fast algorithms. The combination of the best of these improvements results in a Hybrid Mixed Basis FFT multiplication algorithm which has a cross-over point at degree 25 and is generally faster than a basic FFT algorithm, while retaining the desirable O(N log N) timing function of an FFT approach. The application of these methods to multivariate polynomials is also discussed. The use of the Kronecker trick to speed up a fast algorithm is advocated; this results in a method which has a cross-over point at degree 5 for bivariate polynomials. Both theoretical and empirical computing times are presented for all algorithms discussed.
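
A minimal sketch of the two ingredients named above, in a form that is ours rather than the paper's: an O(N log N) polynomial product via a floating-point FFT (exact only for modest integer coefficients), and the Kronecker packing that reduces a bivariate product to a univariate one:

    import numpy as np

    def poly_mult_fft(a, b):
        # a, b: coefficient lists, lowest degree first; returns their product.
        n = len(a) + len(b) - 1
        size = 1 << max(n - 1, 0).bit_length()        # next power of two >= n
        prod = np.fft.irfft(np.fft.rfft(a, size) * np.fft.rfft(b, size), size)[:n]
        return [int(round(c)) for c in prod]          # rounding is safe only for small coefficients

    def kronecker_pack(coeffs_2d, B):
        # coeffs_2d[j][i] is the coefficient of x^i * y^j; substitute y = x^B.
        # B must exceed the x-degree of any product formed later, to avoid collisions.
        out = [0] * (B * len(coeffs_2d))
        for j, row in enumerate(coeffs_2d):
            for i, c in enumerate(row):
                out[j * B + i] += c
        return out

    print(poly_mult_fft([1, 2, 3], [4, 5]))   # (1+2x+3x^2)(4+5x) -> [4, 13, 22, 15]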

Proceedings ArticleDOI
25 Oct 1976
TL;DR: It is shown, for example, that sets complete for deterministic time classes contain infinite polynomial time recognizable subsets, and that sets complete for NEXP_TIME contain infinite DEXP_TIME subsets, showing that such complete sets do not require the use of non-determinism almost everywhere.
Abstract: In this paper we investigate the structure of sets which are complete for various classes. We show, for example, that sets complete for deterministic time classes contain infinite polynomial time recognizable subsets, thus showing that they are not complex almost everywhere. We show by a related technique that any set complete for NEXP_TIME contains an infinite subset in DEXP_TIME, thereby showing that these sets do not require the use of non-determinism almost everywhere. Furthermore, we show that complete sets for deterministic time classes have effective I.O. speed-up to a polynomial; this strengthens a result of Stockmeyer.

Journal ArticleDOI
TL;DR: For classes of languages accepted in polynomial time by multicounter machines, various trade-offs in computing power obtain among the number of counters and the amount of time required, in all cases: deterministic and nondeterministic, on-line and off-line.

Journal ArticleDOI
TL;DR: It is shown that Valiant's partial procedure for equivalence can be made constructive and the time complexity of the algorithm is bounded by a double exponential function of the size of the input.

Book ChapterDOI
06 Sep 1976
TL;DR: The result presented here is the first lower bound better than n log n given for an NP-complete problem in a model that is actually used in practice; it is derived by combining results on linear search tree complexity with results from threshold logic.
Abstract: Previously the best known lower bound on this problem was n log n [1]. The result presented here is the first lower bound of better than n log n given for an NP-complete problem for a model that is actually used in practice. Previous non-linear lower bounds have been for computations involving only monotone circuits [8] or fanout limited to one. Our theorem is derived by combining results on linear search tree complexity [4] with results from threshold logic [11]. In Section 2, we begin by presenting the results on linear search trees and threshold logic. Section 3 is devoted to using these results to obtain our main theorem.

Journal ArticleDOI
Ronald V. Book1
TL;DR: Translational lemmas are stated in a general framework and then applied to specific complexity classes, and necessary and sufficient conditions are given concerning the sets accepted by Turing acceptors which operate in linear or polynomial time.

Proceedings ArticleDOI
10 Aug 1976
TL;DR: Time complexities of operations on “sets” and “ordered n-tuples” based on a hash-table search technique are presented as “Hashing Lemmas” and are applied to formula manipulation.
Abstract: In this paper, time complexities of operations on “sets” and “ordered n-tuples” based on a hash-table search technique are presented as “Hashing Lemmas” and are applied to formula manipulation. Unique normal forms for multivariate symbolic formulas, resulting in O(1) time complexity for identity checks, are presented. The logarithmic factor log₂ N, characteristic of sorting algorithms, is shown to disappear entirely from the time complexities of polynomial manipulations. The actual implementation of the hashing technique is outlined and actual timing data are presented in the appendix.
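
The O(1) identity check rests on keeping exactly one copy of each normal form in a hash table, so that equal formulas are the same object; a minimal sketch of this interning (“hash-consing”) idea, with names that are ours rather than the paper's:

    _table = {}

    def intern_poly(coeffs):
        # coeffs: tuple of coefficients in a fixed canonical order (the normal form).
        key = ('poly', coeffs)
        if key not in _table:
            _table[key] = key            # any immutable stand-in for the formula object
        return _table[key]

    p = intern_poly((1, 0, 3))           # 1 + 3x^2
    q = intern_poly((1, 0, 3))           # built independently elsewhere
    print(p is q)                        # True: identity check without scanning coefficients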


Journal ArticleDOI
TL;DR: This paper considers the reduction in algorithmic complexity that can be achieved by permitting approximate answers to computational problems, and shows that partial sorting of N items, insisting on matching any nonzero fraction of the terms with their correct successors, requires O(N log N) comparisons.
Abstract: This paper considers the reduction in algorithmic complexity that can be achieved by permitting approximate answers to computational problems. It is shown that Shannon's rate-distortion function could, under quite general conditions, provide lower bounds on the mean complexity of inexact computations. As practical examples of this approach, we show that partial sorting of N items, insisting on matching any nonzero fraction of the terms with their correct successors, requires O(N log N) comparisons. On the other hand, partial sorting in linear time is feasible (and necessary) if one permits any finite fraction of pairs to remain out of order. It is also shown that any error tolerance below 50 percent can neither reduce the state complexity of binary N-sequences from the zero-error value of O(N) nor reduce the combinational complexity of N-variable Boolean functions from the zero-error level of O(2^N/N).
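
For concreteness, the rate-distortion function invoked above, for a binary equiprobable source under Hamming distortion (a standard formula, not taken from the paper), is positive for every distortion level below 50 percent, which is consistent with the observation that no error tolerance under that threshold removes the order of the complexity bounds:

    R(D) = \begin{cases} 1 - H_b(D), & 0 \le D \le 1/2, \\ 0, & D > 1/2, \end{cases}
    \qquad H_b(D) = -D \log_2 D - (1 - D) \log_2 (1 - D).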

01 Jul 1976
TL;DR: If a certain restricted problem of first-order error analysis in linear programming (specified below), for errors in only the criterion function, has a polynomial-time algorithm, then so does the tautology problem.
Abstract: Two results are proven: (1) If a certain restricted problem of first-order error analysis in linear programming (specified below), for errors in only the criterion function, has a polynomial-time algorithm, then so does the tautology problem; (2) If the tautology problem is decidable in polynomial time, then linear programs can be solved optimally in polynomial time.


Proceedings ArticleDOI
03 May 1976
TL;DR: In this paper, the existence of isomorphisms between two polynomial-time programming systems is proved for one-to-one and onto-translate functions in a restricted class of functions.
Abstract: We study restricted classes of programming systems (Godel numberings), where a programming system is in a given class if every programming system can be translated into it by functions in a given restricted class. For pairs of systems in various “natural” classes we give results on the existence of isomorphisms (one-to-one and onto translations) between them from the appropriate class of functions. Our results with the most computational significance concern polynomial time programming systems. We show that if P=NP then every two polynomial time programming systems are isomorphic via a polynomial time computable function. IfP