
Showing papers on "Time complexity published in 1978"


Journal ArticleDOI
TL;DR: The general decoding problem for linear codes and the general problem of finding the weights of a linear code are shown to be NP-complete, strongly suggesting, but not rigorously implying, that no polynomial time algorithm exists for either problem.
Abstract: The fact that the general decoding problem for linear codes and the general problem of finding the weights of a linear code are both NP-complete is shown. This strongly suggests, but does not rigorously imply, that no algorithm for either of these problems which runs in polynomial time exists.

1,541 citations
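The NP-completeness result above concerns general linear codes, for which maximum-likelihood decoding is not known to admit any shortcut. A brute-force nearest-codeword decoder makes the implied exponential enumeration concrete (a sketch only; the function name and matrix encoding are illustrative, not from the paper):

```python
from itertools import product

def brute_force_decode(G, received):
    """Nearest-codeword decoding of a binary linear code by exhaustive
    search over all 2^k messages; G is a k x n generator matrix over GF(2)."""
    k, n = len(G), len(G[0])
    best, best_dist = None, n + 1
    for msg in product([0, 1], repeat=k):
        # encode: codeword = msg * G (mod 2)
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        dist = sum(a != b for a, b in zip(cw, received))
        if dist < best_dist:
            best, best_dist = cw, dist
    return best, best_dist
```

For the [3, 1] repetition code, a received word with one flipped bit decodes back to the all-ones codeword at Hamming distance 1.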


Journal ArticleDOI
TL;DR: The effect of precedence constraints, which must be respected in every feasible schedule, is illustrated by extending some typical NP-completeness results for scheduling problems and simplifying their correctness proofs.
Abstract: Precedence constraints between jobs that have to be respected in every feasible schedule generally increase the computational complexity of a scheduling problem. Occasionally, their introduction may turn a problem that is solvable within polynomial time into an NP-complete one, for which a good algorithm is highly unlikely to exist. We illustrate the use of these concepts by extending some typical NP-completeness results and simplifying their correctness proofs for scheduling problems involving precedence constraints.

589 citations


Book ChapterDOI
TL;DR: In this paper, it was shown that the problem is NP-complete if there are arbitrary precedence constraints, but can be solved in O( n log n ) time if precedence constraints are series parallel.
Abstract: Suppose n jobs are to be sequenced for processing by a single machine, with the object of minimizing total weighted completion time. It is shown that the problem is NP-complete if there are arbitrary precedence constraints. However, if precedence constraints are “series parallel”, the problem can be solved in O( n log n ) time. This result generalizes previous results for the more special case of rooted trees. It is also shown how a decomposition procedure suggested by Sidney can be implemented in polynomial-bounded time. Equivalence of the sequencing problem with the optimal linear ordering problem for directed graphs is discussed.

403 citations
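For intuition, the precedence-free base case that the series-parallel result generalizes is solved by the classical ratio rule: sequencing jobs in nondecreasing order of processing-time-to-weight ratio minimizes total weighted completion time. A minimal sketch, assuming positive weights (names and job encoding are illustrative):

```python
def smith_rule(jobs):
    """jobs: list of (processing_time, weight) with positive weights.
    Sort by processing-time/weight ratio (the classical ratio rule) and
    return the resulting total weighted completion time."""
    order = sorted(jobs, key=lambda j: j[0] / j[1])  # ascending p/w
    t = total = 0
    for p, w in order:
        t += p              # completion time of this job
        total += w * t
    return total
```

With jobs (p=3, w=1) and (p=1, w=3), the ratio rule schedules the short heavy job first for a total of 7, versus 15 in the opposite order.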


Proceedings ArticleDOI
16 Oct 1978
TL;DR: It is shown that a decision tree of height O(dn log n) can be constructed to process n operations in d dimensions, suggesting that the standard decision tree model will not provide a useful method for investigating the complexity of orthogonal range queries.
Abstract: Given a set of points in a d-dimensional space, an orthogonal range query is a request for the number of points in a specified d-dimensional box. We present a data structure and algorithm which enable one to insert and delete points and to perform orthogonal range queries. The worst-case time complexity for n operations is O(n log^d n); the space used is O(n log^(d-1) n). (O-notation here is with respect to n; the constant is allowed to depend on d.) Next we briefly discuss decision tree bounds on the complexity of orthogonal range queries. We show that a decision tree of height O(dn log n) (where the implied constant does not depend on d or n) can be constructed to process n operations in d dimensions. This suggests that the standard decision tree model will not provide a useful method for investigating the complexity of such problems.

232 citations
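As a baseline for the data structure described above, the naive approach answers each orthogonal range query by scanning every point, at O(n·d) cost per query — exactly what the polylogarithmic structures are designed to beat. A hypothetical sketch:

```python
def range_count(points, box):
    """Count points inside an axis-aligned box; box is a list of
    (lo, hi) bounds, one pair per dimension. O(n*d) per query --
    the brute-force baseline for orthogonal range counting."""
    return sum(all(lo <= p[i] <= hi for i, (lo, hi) in enumerate(box))
               for p in points)
```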


01 Jan 1978
TL;DR: In this article, it was shown that there is no algorithm for generating all the K maximal independent sets of such an independence system in time polynomial in |E| and K, unless NP = P. It is possible to apply ideas of Paull and Unger and of Tsukiyama et al. to obtain polynomial-time algorithms for a number of special cases, e.g. the efficient generation of all maximal feasible solutions to a knapsack problem.
Abstract: Suppose that an independence system (E, F) is characterized by a subroutine which indicates in unit time whether or not a given subset of E is independent. It is shown that there is no algorithm for generating all the K maximal independent sets of such an independence system in time polynomial in |E| and K, unless NP = P. However, it is possible to apply ideas of Paull and Unger and of Tsukiyama et al. to obtain polynomial-time algorithms for a number of special cases, e.g. the efficient generation of all maximal feasible solutions to a knapsack problem. The algorithmic techniques bear an interesting relationship to those of Read for the enumeration of graphs and other combinatorial configurations.

229 citations
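The knapsack special case mentioned above can be made concrete: a maximal feasible solution is a subset of items that fits the capacity and cannot be extended by any remaining item. A brute-force enumerator over all 2^n subsets — practical only for tiny n, unlike the polynomial-time methods the paper obtains for such special cases:

```python
from itertools import combinations

def maximal_feasible_sets(weights, capacity):
    """All maximal feasible subsets: total weight fits within capacity
    and no remaining item can be added without overflowing. Brute force
    over every subset -- exponential, for illustration only."""
    items = range(len(weights))
    result = []
    for r in range(len(weights) + 1):
        for s in combinations(items, r):
            total = sum(weights[i] for i in s)
            if total <= capacity and all(total + weights[i] > capacity
                                         for i in items if i not in s):
                result.append(set(s))
    return result
```

With weights [2, 3, 4] and capacity 5, the maximal feasible solutions are {0, 1} (weight 5) and {2} (weight 4, which no other item can join).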


Journal ArticleDOI
TL;DR: The strongest known diagonalization results for both deterministic and nondeterministic time complexity classes are reviewed and organized for comparison with the results of the new padding technique.
Abstract: A recursive padding technique is used to obtain conditions sufficient for separation of nondeterministic multitape Turing machine time complexity classes. If T2 is a running time and T1(n + 1) grows more slowly than T2(n), then there is a language which can be accepted nondeterministically within time bound T2 but which cannot be accepted nondeterministically within time bound T1. If even T1(n + f(n)) grows more slowly than T2(n), where f is the very slowly growing “rounded reverse” of some real-time countable function, then there is such a language over a single-letter alphabet. The strongest known diagonalization results for both deterministic and nondeterministic time complexity classes are reviewed and organized for comparison with the results of the new padding technique.

189 citations


Journal ArticleDOI
TL;DR: A discretized version of the original problem, restated as a feasibility question, is NP-complete when both n and d are arbitrary, but there exists a polynomial time algorithm which solves the problem in time O(n^(d-1) log n) on a random access machine with unit-cost arithmetic operations.

189 citations


Journal ArticleDOI
TL;DR: A new interprocedural data flow analysis algorithm is presented and analyzed which associates with each procedure in a program information about which variables may be modified, which may be used, and which are possibly preserved by a call on the procedure, and all of its subcalls.
Abstract: A new interprocedural data flow analysis algorithm is presented and analyzed. The algorithm associates with each procedure in a program information about which variables may be modified, which may be used, and which are possibly preserved by a call on the procedure, and all of its subcalls. The algorithm is sufficiently powerful to be used on recursive programs and to deal with the sharing of variables which arises through reference parameters. The algorithm is unique in that it can compute all of this information in a single pass, not requiring a prepass to compute calling relationships or sharing patterns. The algorithm is asymptotically optimal in time complexity. It has been implemented and is practical even on programs which are quite large.

160 citations


Proceedings ArticleDOI
31 May 1978
TL;DR: The optimization problem is shown to be NP-complete, but it is shown that a polynomial time algorithm can be given to optimize tableaux that correspond to an important subclass of expressions.
Abstract: Many useful database queries can be formulated in terms of expressions whose operands are relations and whose operators are the relational operations select, project, and join. This paper investigates the computational complexity of optimizing relational expressions of this form under a variety of cost measures. A matrix, called a tableau, is proposed as a convenient representative for the value of an expression. Functional dependencies can be used to imply additional equivalences among tableaux. The optimization problem is shown to be NP-complete, but we can give a polynomial time algorithm to optimize tableaux that correspond to an important subclass of expressions.

151 citations


Journal ArticleDOI
TL;DR: This paper considers the problem of sequencing classes of tasks with deadlines in which there is a set-up time or a changeover cost associated with switching from tasks in one class to another, and delineates the borderline between polynomially solvable and NP-complete versions of the problem.
Abstract: In this paper we consider the problem of sequencing classes of tasks with deadlines in which there is a set-up time or a changeover cost associated with switching from tasks in one class to another. We consider the case of a single machine and our results delineate the borderline between polynomially solvable and NP-complete versions of the problem. This is accomplished by giving polynomial time reductions, pseudo-polynomial time algorithms and polynomial time algorithms for various restricted cases of these problems.

145 citations


Proceedings ArticleDOI
01 May 1978
TL;DR: Strong evidence for the truth of the “parallel computation thesis” is given by introducing the notion of “conglomerates” — a very large class of parallel machines, including all those which could feasibly be built.
Abstract: A number of different models of synchronous, unbounded parallel computers have appeared in recent literature. Without exception, running time on these models has been shown to be polynomially related to the classical space complexity measure. The general applicability of this relationship is called “the parallel computation thesis” and strong evidence of its truth is given in this paper by introducing the notion of “conglomerates” - a very large class of parallel machines, including all those which could feasibly be built. Basic parallel machine models are also investigated, in an attempt to pin down the notion of parallel time to within a constant factor. To this end, a universal conglomerate structure is developed which can simulate any other basic model within linear time. This approach also leads to fair estimates of instruction execution times for various parallel models.

Journal ArticleDOI
TL;DR: A class of algorithms is presented for very rapid on-line detection of occurrences of a fixed set of pattern arrays as embedded subarrays in an input array by reducing the array problem to a string matching problem in a natural way and it is shown that efficient string matching algorithms may be applied to arrays.
Abstract: A class of algorithms is presented for very rapid on-line detection of occurrences of a fixed set of pattern arrays as embedded subarrays in an input array. By reducing the array problem to a string matching problem in a natural way, it is shown that efficient string matching algorithms may be applied to arrays. This is illustrated by use of the string-matching algorithm of Knuth, Morris and Pratt [7]. Depending on the data structure used for the preprocessed pattern graph, this algorithm may be made to run “real-time” or merely in linear time. Extensions can be made to nonrectangular arrays, multiple arrays of dissimilar sizes, and arrays of more than two dimensions. Possible applications are foreseen to problems such as detection of edges in digital pictures and detection of local conditions in board games.
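A simplified version of the row-reduction idea can be sketched as follows: pattern rows are assigned integer ids, each text position is labeled with the id of the pattern row starting there (found naively here, rather than with an efficient multi-pattern matcher as in the paper), and the Knuth-Morris-Pratt algorithm is then run down each column over the row-id sequence. All names are illustrative:

```python
def find_pattern_2d(text, pattern):
    """Occurrences (top-left row, col) of a rectangular pattern in a
    rectangular text, both given as lists of equal-length strings."""
    pr, pc = len(pattern), len(pattern[0])
    tr, tc = len(text), len(text[0])
    row_id = {row: i for i, row in enumerate(dict.fromkeys(pattern))}
    seq = [row_id[row] for row in pattern]      # vertical row-id pattern
    # label[r][c] = id of the pattern row starting at (r, c), or -1
    label = [[row_id.get(text[r][c:c + pc], -1) for c in range(tc - pc + 1)]
             for r in range(tr)]
    # KMP failure function for the row-id sequence
    fail = [0] * pr
    k = 0
    for i in range(1, pr):
        while k and seq[i] != seq[k]:
            k = fail[k - 1]
        if seq[i] == seq[k]:
            k += 1
        fail[i] = k
    hits = []
    for c in range(tc - pc + 1):                # KMP down each column
        k = 0
        for r in range(tr):
            while k and label[r][c] != seq[k]:
                k = fail[k - 1]
            if label[r][c] == seq[k]:
                k += 1
            if k == pr:
                hits.append((r - pr + 1, c))
                k = fail[k - 1]
    return hits
```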

Book ChapterDOI
TL;DR: In this article, it was shown that the problem of determining whether a tree S on n_s vertices is isomorphic to any subtree of a tree T on n_t ≥ n_s vertices can be solved in O(n_t^(3/2) n_s) steps.
Abstract: The problem of determining if the (unrooted) tree S on n_s vertices is isomorphic to any subtree of the tree T on n_t ≥ n_s vertices is shown to be solvable in O(n_t^(3/2) n_s) steps. The method involves the solution of an (n_t − 1) by 2(n_t − 1) array of maximum bipartite matching problems, where some of these subproblems are solved in groups. Recognition of isomorphic subproblems yields a compacted data structure, reducing practical storage requirements with no increase in the order of time complexity.

Journal ArticleDOI
TL;DR: Algorithms are presented that construct the shortest connecting network, or minimal spanning tree, of N points embedded in k-dimensional coordinate space and an algorithm is also presented that constructs a spanning tree that is very nearly minimal with computation proportional to N log N for all k.
Abstract: Algorithms are presented that construct the shortest connecting network, or minimal spanning tree (MST), of N points embedded in k-dimensional coordinate space. These algorithms take advantage of the geometry of such spaces to substantially reduce the computation from that required to construct MST's of more general graphs. An algorithm is also presented that constructs a spanning tree that is very nearly minimal with computation proportional to N log N for all k.
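A non-geometric baseline helps frame the result: Prim's algorithm builds the MST of N points in O(N^2) time by treating the point set as a complete graph, which is the cost the geometric algorithms substantially reduce. A sketch assuming Euclidean distance (names are illustrative):

```python
def mst_prim(points):
    """Total edge length of the minimum spanning tree of points in
    k-dimensional space under Euclidean distance, via Prim's algorithm
    on the implicit complete graph in O(N^2) time."""
    n = len(points)
    dist2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    in_tree = [False] * n
    best = [float('inf')] * n   # squared distance to the growing tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u] ** 0.5
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist2(points[u], points[v]))
    return total
```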

Journal ArticleDOI
TL;DR: It is shown that for the two-variable quadratics of the form αx² + βy − γ = 0; α, β, γ ∈ ω, the problem is NP-complete, which implies NP-completeness of certain questions about the solutions.

Journal ArticleDOI
TL;DR: The decision tree complexity of computing the measure of the union of n (possibly overlapping) intervals is shown to be Ω(n log n), even if comparisons between linear functions of the interval endpoints are allowed.
Abstract: The decision tree complexity of computing the measure of the union of n (possibly overlapping) intervals is shown to be Ω(n log n), even if comparisons between linear functions of the interval endpoints are allowed. The existence of an Ω(n log n) lower bound to determine whether any two of n real numbers are within ε of each other is also demonstrated. These problems provide an excellent opportunity for discussing the effects of the computational model on the ease of analysis and on the results produced.
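The matching O(n log n) upper bound is the standard sort-and-sweep: sort intervals by left endpoint and accumulate only the newly covered length. A minimal sketch (function name is illustrative):

```python
def union_measure(intervals):
    """Total length covered by a union of (possibly overlapping)
    intervals, via sort-and-sweep in O(n log n) time."""
    total = 0.0
    end = float('-inf')         # right edge of the region swept so far
    for lo, hi in sorted(intervals):
        if hi > end:
            total += hi - max(lo, end)   # count only the new portion
            end = hi
    return total
```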

Book ChapterDOI
17 Jul 1978
TL;DR: Non-containment for free single variable program schemes is shown to be NP-complete and a polynomial time algorithm for deciding equivalence of two free schemes, provided one of them has the predicates appearing in the same order in all executions, is given.
Abstract: Non-containment for free single variable program schemes is shown to be NP-complete. A polynomial time algorithm for deciding equivalence of two free schemes, provided one of them has the predicates appearing in the same order in all executions, is given. However, the ordering of a free scheme is shown to lead to an exponential increase in size.

Book ChapterDOI
17 Jul 1978
TL;DR: The main theorem of this paper establishes that if CLIQUE has some f-sparse translation into another set, which is calculable by a deterministic Turing machine in time bounded by f, then all the sets belonging to NP are calculable in time bounded by a function polynomially related to f.
Abstract: Let Σ be an arbitrary alphabet, and let Σ^(≤n) denote ε ∪ Σ ∪ ... ∪ Σ^n. We say that a function t is f-sparse iff the cardinality of t(Σ^(≤n)) is at most f(n) for every natural n. The main theorem of this paper establishes that if CLIQUE has some f-sparse translation into another set, which is calculable by a deterministic Turing machine in time bounded by f, then all the sets belonging to NP are calculable in time bounded by a function polynomially related to f. The proof is constructive and shows the way of constructing a proper algorithm. The simplest and most significant corollary says that if there is an NP-complete language over a single letter alphabet, then P = NP.

Journal ArticleDOI
TL;DR: A more efficient algorithm than the existing one is presented for a single-machine scheduling problem where penalties occur for jobs that either commence before their target start date or are completed after their due date.
Abstract: We consider a single-machine scheduling problem where penalties occur for jobs that either commence before their target start date or are completed after their due date. The objective is to minimize the maximum penalty subject to certain assumptions on the target start times, due dates, and penalty functions. A more efficient algorithm than the existing one is presented for this problem. The number of computations in the proposed algorithm is of the order of n log n, while in the existing algorithm it is of the order of n².

Proceedings ArticleDOI
01 Jan 1978
TL;DR: The objective of this program analysis is the construction of a mapping (a cover) from program text expressions to symbolic expressions for their value holding over all executions of the program.
Abstract: A global flow model is assumed; as usual, the flow of control is represented by a digraph called the control flow graph. The objective of our program analysis is the construction of a mapping (a cover) from program text expressions to symbolic expressions for their value holding over all executions of the program. The particular cover constructed by our methods is in general weaker than the covers obtainable by the methods of [Ki, FKU, R1], but our method has the advantage of being very efficient, requiring O(e + aα(a)) extended bit vector operations (a logical operation or a shift to the first nonzero bit) on all control flow graphs (whether reducible or not), where a is the number of edges of the control flow graph, e is the length of the text of the program, and α is Tarjan's function (an extremely slowly growing function).

Proceedings ArticleDOI
01 May 1978
TL;DR: This work investigates the problem of finding a homeomorphic image of a “pattern” graph H in a larger input graph G and develops a linear time algorithm to determine if there exists a simple cycle containing three given nodes in G.
Abstract: We investigate the problem of finding a homeomorphic image of a “pattern” graph H in a larger input graph G. We view this problem as finding specified sets of edge disjoint or node disjoint paths in G. Our main result is a linear time algorithm to determine if there exists a simple cycle containing three given nodes in G; here H is a triangle. No polynomial time algorithm for this problem was previously known. We also discuss a variety of reductions between related versions of this problem and a number of open problems.

Proceedings ArticleDOI
16 Oct 1978
TL;DR: The pagoda, a new data structure for representing priority queues, is presented; it handles an arbitrary sequence of n primitive operations chosen from MIN, INSERT, UNION, EXTRACT and EXTRACTMIN in time o(n log n).
Abstract: We present a new data-structure for representing priority queues, the pagoda. A detailed analysis shows that the pagoda provides a very efficient implementation of priority queues, where our measure of efficiency is the average run time of the various algorithms. It handles an arbitrary sequence of n primitive operations chosen from MIN, INSERT, UNION, EXTRACT and EXTRACTMIN in time o(n log n). The constant factors affecting these asymptotic run times are small enough to make the pagoda competitive with any other priority queue, including structures which cannot handle UNION or EXTRACT. The given algorithms process an arbitrary sequence of n operations MIN, INSERT and EXTRACT in linear average time O(n), and a sequence of n INSERT in linear worst case time O(n).

Journal ArticleDOI
TL;DR: In this article, the authors give timing comparisons for three sorting algorithms written for the CDC STAR computer and show that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
Abstract: This paper gives timing comparisons for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)² as compared to a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
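Batcher's method is a sorting network: its compare-exchange pattern is fixed in advance, independent of the data, which is what makes it vectorize well despite the extra log factor. A compact sequential rendering of the odd-even mergesort network (a standard formulation, not the paper's STAR code):

```python
def odd_even_merge_sort(a):
    """In-place Batcher odd-even mergesort: O(n log^2 n) data-independent
    compare-exchange steps, well suited to vector or parallel hardware."""
    n = len(a)
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    # only compare within the same 2p-sized merge block
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        if a[i + j] > a[i + j + k]:
                            a[i + j], a[i + j + k] = a[i + j + k], a[i + j]
            k //= 2
        p *= 2
    return a
```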

01 Nov 1978
TL;DR: A simple algorithm is described which determines the satisfiability over the reals of a conjunction of linear inequalities, none of which contains more than two variables, which is particularly suited to applications in mechanical program verification.
Abstract: A simple algorithm is described which determines the satisfiability over the reals of a conjunction of linear inequalities, none of which contains more than two variables. In the worst case the algorithm requires time O(mn^(⌈log₂ n⌉ + 3)), where n is the number of variables and m the number of inequalities. Several considerations suggest that the algorithm may be useful in practice: it is simple to implement, it is fast for some important special cases, and if the inequalities are satisfiable it provides valuable information about their solution set. The algorithm is particularly suited to applications in mechanical program verification.
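A well-known special case of two-variable inequalities — difference constraints of the form x_i − x_j ≤ c — can be decided by shortest-path relaxation: the system is satisfiable over the reals iff the constraint graph has no negative cycle. A Bellman-Ford sketch of that narrower case (not the paper's algorithm; names are illustrative):

```python
def difference_constraints_satisfiable(n, constraints):
    """Decide satisfiability over the reals of constraints x_i - x_j <= c,
    given as (i, j, c) triples over variables 0..n-1. Each constraint is
    an edge j -> i of weight c; starting all distances at 0 emulates a
    zero-weight source connected to every variable."""
    dist = [0.0] * n
    for _ in range(n):
        changed = False
        for i, j, c in constraints:
            if dist[j] + c < dist[i]:
                dist[i] = dist[j] + c
                changed = True
        if not changed:
            return True          # distances settled: no negative cycle
    # still changing after n rounds: any further improvement means
    # a negative cycle, i.e. the system is unsatisfiable
    return all(dist[j] + c >= dist[i] for i, j, c in constraints)
```

For example, {x0 − x1 ≤ 1, x1 − x0 ≤ −1} is satisfiable (take x1 = x0 + 1), while {x0 − x1 ≤ −1, x1 − x0 ≤ −1} is not.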

Journal ArticleDOI
TL;DR: It is shown that if P = NP then every two polynomial time programming systems are isomorphic via a polynomial time computable function, which points the way to the possible existence of “natural” but intractable computational problems concerning programming systems.
Abstract: Restricted classes of programming systems (Godel numberings) are studied, where a programming system is in a given class if every programming system can be translated into it by functions in a given restricted class. For pairs of systems in various “natural” classes, results are given on the existence of isomorphisms (one-to-one and onto translations) between them from the corresponding classes of functions. The results with the most computational significance concern polynomial time programming systems. It is shown that if P = NP then every two polynomial time programming systems are isomorphic via a polynomial time computable function. If P ≠ NP this result points the way to the possible existence of “natural” but intractable computational problems concerning programming systems. Results are also given concerning the relationship between the complexity of certain important and commonly used properties of programming systems (such as effec...

Journal ArticleDOI
TL;DR: An algorithm which augments an LR parser with the capability of reanalyzing a limited part of a modified program is illustrated, including a modification to the basic algorithm which enables the reanalysis to be performed in linear time.
Abstract: The concept of incremental parsing is briefly introduced. An algorithm which augments an LR parser with the capability of reanalyzing a limited part of a modified program is illustrated. The algorithm operates on a sequence of configurations representing the parse of the old input and finds the smallest part of the sequence which must be recomputed to obtain the parse of the new input. The implementation is discussed: a suitable data structure and a version of the algorithm which operates upon it are introduced; finally, the problem of realizing efficient incremental parsers is addressed, showing a modification to the basic algorithm which enables the reanalysis to be performed in linear time.

Journal ArticleDOI
TL;DR: The objective is to find a transformed digital picture of a given picture such that the sum of absolute errors between the gray level histogram of the transformed picture and that of a reference picture is minimized.
Abstract: This paper investigates the problem of optimal histogram matching using monotone gray level transformation, which always assigns all picture points of a given gray level i to another gray level T(i) such that if i ≥ j, then T(i) ≥ T(j). The objective is to find a transformed digital picture of a given picture such that the sum of absolute errors between the gray level histogram of the transformed picture and that of a reference picture is minimized. This is equivalent to placing k1 linearly ordered objects of different sizes one by one into k2 linearly ordered boxes of assorted sizes, such that the accumulated error of space underpacked or overpacked in the boxes is minimized; the placement function is monotonic, which ensures a polynomial time solution to this problem. A tree search algorithm for optimal histogram matching is presented which has time complexity O(k1 × k2). If the monotone property is dropped, then the problem becomes NP-complete, even if it is restricted to k2 = 2.
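The monotone matching problem can also be solved by a straightforward dynamic program over (source level, target level) prefixes; the version below runs in O(k1² × k2) time, so it is cruder than the paper's O(k1 × k2) tree search, but it makes the recurrence explicit. Names are illustrative:

```python
def monotone_histogram_match(h1, h2):
    """Minimum sum of absolute histogram errors over all monotone maps T
    from the k1 source gray levels into the k2 target levels. Each target
    level receives a (possibly empty) consecutive run of source levels."""
    k1, k2 = len(h1), len(h2)
    prefix = [0] * (k1 + 1)
    for i, v in enumerate(h1):
        prefix[i + 1] = prefix[i] + v
    INF = float('inf')
    # dp[i][j]: best error assigning first i source levels to first j targets
    dp = [[INF] * (k2 + 1) for _ in range(k1 + 1)]
    dp[0][0] = 0
    for j in range(1, k2 + 1):
        for i in range(k1 + 1):
            # source levels m..i-1 (possibly none) go to target level j-1
            dp[i][j] = min(dp[m][j - 1] +
                           abs(prefix[i] - prefix[m] - h2[j - 1])
                           for m in range(i + 1))
    return dp[k1][k2]
```

For h1 = [4, 6] and h2 = [5, 5], the identity map gives error |4−5| + |6−5| = 2, which the DP confirms is optimal.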

Proceedings ArticleDOI
03 Apr 1978
TL;DR: It is shown that a mesh-connected n×n multiprocessor system can compute the inverse of an n×n matrix in time linear in n, and can solve systems of n linear equations in time linear in n with n × (n+1) processors.
Abstract: It is shown that a mesh-connected n×n multiprocessor system can compute the inverse of an n×n matrix in time linear in n. The algorithm is based on a theorem known to Sylvester in 1851. It computes the cofactor matrix in n steps, each of which involves 4 unit-distance message routings and 4 arithmetic operations for every processor. The code and memory requirement for each processor is the same and is independent of n. It is also shown that the same algorithm solves systems of n linear equations in time linear in n with n × (n+1) processors.


Proceedings ArticleDOI
01 Jan 1978
TL;DR: A linear time algorithm for detecting common subexpressions is derived and algorithms which process equalities, inequalities and deductions on-line are discussed.
Abstract: The classical common subexpression problem in program optimization is the detection of identical subexpressions. Suppose we have some extra information and are given pairs of expressions ei1=ei2 which must have the same value, and expressions fj1≠fj2 which must have different values. We ask if, as a result, h1=h2 or h1≠h2. This has been called the uniform word problem for finitely presented algebras, and has application in theorem-proving and code optimization. We show that such questions can be answered in O(n log n) time, where n is the number of nodes in a graph representation of all relevant expressions. A linear time algorithm for detecting common subexpressions is derived. Algorithms which process equalities, inequalities and deductions on-line are discussed.
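The linear-time detection of identical subexpressions can be sketched by hash-consing: each node is interned on its operator and child ids, so structurally identical subtrees collapse to a single DAG node. A minimal illustration of that step only, not the paper's algorithm for the full uniform word problem (names and the tuple encoding are illustrative):

```python
def build_dag(expressions):
    """Detect common subexpressions by interning each node on
    (operator, child ids): identical subtrees receive the same id, in
    time linear in the number of nodes. Expressions are nested tuples,
    e.g. ('+', ('x',), ('x',)). Returns root ids and total DAG size."""
    table = {}          # (op, child ids) -> node id

    def intern(expr):
        key = (expr[0],) + tuple(intern(c) for c in expr[1:])
        if key not in table:
            table[key] = len(table)
        return table[key]

    roots = [intern(e) for e in expressions]
    return roots, len(table)
```

In `('+', ('*', ('a',), ('b',)), ('*', ('a',), ('b',)))` the two `('*', a, b)` subtrees intern to the same id, so the DAG has only 4 nodes for 7 tree nodes.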