
Showing papers on "Time complexity published in 1977"


Journal ArticleDOI
TL;DR: An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings, showing that the set of concatenations of even palindromes, i.e., the language $\{\alpha \alpha ^R\}^*$, can be recognized in linear time.
Abstract: An algorithm is presented which finds all occurrences of one given string within another, in running time proportional to the sum of the lengths of the strings. The constant of proportionality is low enough to make this algorithm of practical use, and the procedure can also be extended to deal with some more general pattern-matching problems. A theoretical application of the algorithm shows that the set of concatenations of even palindromes, i.e., the language $\{\alpha \alpha ^R\}^*$, can be recognized in linear time. Other algorithms which run even faster on the average are also considered.
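This is the linear-time matching now known as the Knuth–Morris–Pratt algorithm; a minimal sketch of the failure-function approach the abstract describes (function and variable names are ours, not the paper's):

```python
def find_all(pattern, text):
    """Return the start indices of all occurrences of pattern in text.

    Runs in O(len(pattern) + len(text)): the failure function lets the
    scan resume after a mismatch without re-reading matched characters.
    """
    m = len(pattern)
    # fail[i] = length of the longest proper border of pattern[:i+1]
    fail = [0] * m
    k = 0
    for i in range(1, m):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == m:                 # full match ending at position i
            hits.append(i - m + 1)
            k = fail[k - 1]        # continue, allowing overlaps
    return hits
```

For example, `find_all("aba", "ababa")` reports the overlapping occurrences at 0 and 2.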

3,156 citations


Journal ArticleDOI
TL;DR: A simplified scheduling problem involving identical processors and restricted task sets is shown to be P-complete; however, the LPT algorithm applied to this problem yields schedules which are near optimal for large n.
Abstract: The finishing time properties of several heuristic algorithms for scheduling n independent tasks on m nonidentical processors are studied. In particular, for m = 2 an n log n time-bounded algorithm is given which generates a schedule having a finishing time of at most (√5 + 1)/2 of the optimal finishing time. A simplified scheduling problem involving identical processors and restricted task sets is shown to be P-complete. However, the LPT algorithm applied to this problem yields schedules which are near optimal for large n.
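The LPT (longest processing time first) rule mentioned in the abstract can be sketched for the identical-processor case it is applied to; a heap-based illustration of the rule, not the paper's m = 2 algorithm:

```python
import heapq

def lpt_schedule(times, m):
    """Longest-Processing-Time-first heuristic: assign each task,
    longest first, to the currently least-loaded of m identical
    processors.  Returns the makespan of the resulting schedule."""
    loads = [0.0] * m            # min-heap of processor loads
    heapq.heapify(loads)
    for t in sorted(times, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + t)
    return max(loads)
```

On `[2, 2, 3, 3, 4]` with 2 processors LPT yields makespan 8 against the optimum 7, illustrating why its schedules are near optimal rather than exact.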

815 citations


Journal ArticleDOI
TL;DR: An algorithm is presented for finding the longest common subsequence of two sequences of length n; it has a running time of O((r + n) log n), where r is the total number of ordered pairs of positions at which the two sequences match.
Abstract: Previously published algorithms for finding the longest common subsequence of two sequences of length n have had a best-case running time of O(n²). An algorithm for this problem is presented which has a running time of O((r + n) log n), where r is the total number of ordered pairs of positions at which the two sequences match. Thus in the worst case the algorithm has a running time of O(n² log n). However, for those applications where most positions of one sequence match relatively few positions in the other sequence, a running time of O(n log n) can be expected.
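A sketch of the match-pair technique the abstract describes (the Hunt–Szymanski method), here computing only the LCS length; the threshold-array formulation below is a standard presentation of the idea, not the paper's exact pseudocode:

```python
from bisect import bisect_left
from collections import defaultdict

def lcs_length(a, b):
    """O((r + n) log n) LCS length: process only the r matching
    position pairs, maintaining thresh[k] = smallest index j in b
    that can end a common subsequence of length k + 1."""
    positions = defaultdict(list)
    for j, ch in enumerate(b):
        positions[ch].append(j)
    thresh = []
    for ch in a:
        # visit b-positions of ch in decreasing order so that several
        # matches on the same row cannot chain with each other
        for j in reversed(positions.get(ch, [])):
            k = bisect_left(thresh, j)
            if k == len(thresh):
                thresh.append(j)
            else:
                thresh[k] = j
    return len(thresh)
```

When the sequences share few symbols, r is small and the loop body runs rarely, giving the O(n log n) behavior the abstract mentions.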

742 citations


Journal ArticleDOI
TL;DR: Two algorithms are presented for sorting n² elements on an n × n mesh-connected processor array that require O(n) routing and comparison steps and are shown to be optimal in time within small constant factors.
Abstract: Two algorithms are presented for sorting n² elements on an n × n mesh-connected processor array that require O(n) routing and comparison steps. The best previous algorithm takes time O(n log n). The algorithms of this paper are shown to be optimal in time within small constant factors. Extensions to higher-dimensional arrays are also given.

489 citations


Book ChapterDOI
05 Sep 1977
TL;DR: One approach to understanding complexity issues for certain easily computable natural functions is surveyed, and the notion of rigidity offers for the first time a reduction of relevant computational questions to noncomputational properties.
Abstract: We have surveyed one approach to understanding complexity issues for certain easily computable natural functions. Shifting graphs have been seen to account accurately and in a unified way for the superlinear complexity of several problems for various restricted models of computation. To attack "unrestricted" models (in the present context combinational circuits or straight-line arithmetic programs), a first attempt, through superconcentrators, fails to provide any lower bounds although it does give counter-examples to alternative approaches. The notion of rigidity, however, does offer for the first time a reduction of relevant computational questions to noncomputational properties. The "reduction" consists of the conjunction of Corollary 6.3 and Theorem 6.4, which show that "for most sets of linear forms over the reals the stated algebraic and combinatorial reasons account for the fact that they cannot be computed in linear time and depth O(log n) simultaneously." We have outlined some problem areas which our preliminary results raise, and feel that further progress on most of these is humanly feasible. We would be interested in alternative approaches also.

406 citations


Journal ArticleDOI
TL;DR: The problem of deciding whether a Petri net is persistent is reducible to reachability, partially answering a question of Keller, and it is shown that the controllability problem requires exponential space, even for 1-bounded nets.

256 citations


Posted Content
01 Jan 1977
TL;DR: In this article, the authors give an informal introduction to the theory of NP-completeness and derive some fundamental results, in the hope of stimulating further use of this valuable analytical tool.
Abstract: Recent developments in the theory of computational complexity as applied to combinatorial problems have revealed the existence of a large class of so-called NP-complete problems, either all or none of which are solvable in polynomial time. Since many infamous combinatorial problems have been proved to be NP-complete, the latter alternative seems far more likely. In that sense, NP-completeness of a problem justifies the use of enumerative optimization methods and of approximation algorithms. In this paper we give an informal introduction to the theory of NP-completeness and derive some fundamental results, in the hope of stimulating further use of this valuable analytical tool.

253 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of generating optimal code for expressions containing common subexpressions is shown to be computationally difficult, even for simple expressions and simple machines, and some heuristics for code generation are given and their worst-case behavior is analyzed.
Abstract: This paper shows that the problem of generating optimal code for expressions containing common subexpressions is computationally difficult, even for simple expressions and simple machines. Some heuristics for code generation are given and their worst-case behavior is analyzed. For one-register machines, an optimal code generation algorithm is given whose time complexity is linear in the size of an expression and exponential only in the amount of sharing.
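For expressions without sharing, the related classic Sethi–Ullman labeling computes in linear time the minimum number of registers needed to evaluate an expression tree without stores; a sketch of that labeling (a well-known baseline, not this paper's algorithm):

```python
def registers_needed(node):
    """Sethi-Ullman label of an expression tree: the minimum number
    of registers needed to evaluate it with no stores to memory.
    A node is either a leaf (a string) or a (left, right) pair."""
    if isinstance(node, str):
        return 1                       # a leaf is loaded into one register
    l = registers_needed(node[0])
    r = registers_needed(node[1])
    # Evaluate the costlier subtree first and hold its result in one
    # register while the other subtree is evaluated.
    return max(l, r) if l != r else l + 1
```

For `(a + b) * (c + d)` the label is 3, since both operand subtrees need 2 registers and one must be held while the other is computed.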

142 citations


Journal ArticleDOI
TL;DR: This is a tutorial on general techniques for combinatorial approximation, which generate fully polynomial time approximation schemes for a large number of NP-complete problems.
Abstract: This is a tutorial on general techniques for combinatorial approximation. In addition to covering known techniques, a new one is presented. These techniques generate fully polynomial time approximation schemes for a large number of NP-complete problems. Some of the problems they apply to are: 0-1 knapsack, integer knapsack, job sequencing with deadlines, minimizing weighted mean flow times, and optimal SPT schedules. We also present experimental results for the job sequencing with deadlines problem.
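For the 0-1 knapsack case, the profit-scaling construction behind such fully polynomial time approximation schemes can be sketched as follows; this is the standard textbook formulation (scaling constant K = eps·vmax/n), not necessarily the paper's own presentation:

```python
def knapsack_fptas(values, weights, capacity, eps):
    """FPTAS for 0-1 knapsack: scale profits down by K = eps*vmax/n,
    then solve the scaled instance exactly with a profit-indexed DP.
    Returns a certified value >= (1 - eps) * OPT."""
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v / K) for v in values]
    total = sum(scaled)
    INF = float("inf")
    # dp[p] = minimum weight needed to reach scaled profit exactly p
    dp = [0.0] + [INF] * total
    for v, w in zip(scaled, weights):
        for p in range(total, v - 1, -1):
            dp[p] = min(dp[p], dp[p - v] + w)
    best = max(p for p in range(total + 1) if dp[p] <= capacity)
    # K * best is a lower bound on the true value of the chosen set
    return best * K
```

The DP table has size O(n²/eps), which is what makes the scheme fully polynomial in both n and 1/eps.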

120 citations


Journal ArticleDOI
TL;DR: The proof of validity is based on finite variational methods and is therefore quite different and somewhat simpler than the proof for the Hu-Tucker algorithm, and yields some additional information about the structure of minimum cost binary trees.
Abstract: A new algorithm for constructing minimum cost binary trees in $O(n \log n)$ time is presented. The algorithm is similar to the well-known Hu-Tucker algorithm. Our proof of validity is based on finite variational methods and is therefore quite different and somewhat simpler than the proof for the Hu-Tucker algorithm. Our proof also yields some additional information about the structure of minimum cost binary trees. This permits a linear time implementation of our algorithm in a special case.
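As a slow but easily checked baseline for the same objective (minimum weighted external path length over alphabetic binary trees), an O(n³) interval DP; the paper's algorithm achieves the same optimum in O(n log n):

```python
def min_cost_alphabetic_tree(w):
    """O(n^3) interval DP for the minimum weighted external path
    length of a binary tree whose leaves carry weights w[0..n-1]
    in left-to-right order.  A brute-force baseline only."""
    n = len(w)
    prefix = [0]
    for x in w:
        prefix.append(prefix[-1] + x)
    cost = [[0] * n for _ in range(n)]   # cost[i][j]: leaves i..j
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = min(cost[i][k] + cost[k + 1][j] for k in range(i, j))
            # merging interval i..j adds one level above all its leaves
            cost[i][j] = best + prefix[j + 1] - prefix[i]
    return cost[0][n - 1]
```

For weights [1, 2, 3] the optimum is 9, achieved by pairing the two light leaves first, mirroring the merge order an optimal algorithm would choose.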

98 citations


Journal ArticleDOI
TL;DR: The results of this paper constitute a proper generalization of all previously solved special cases and derive an alternate efficient approach to the case in which the precedence graph is a rooted tree, as well as an algorithm for attacking the general problem having an arbitrary acyclic precedence graph.
Abstract: We consider the problem of sequencing N jobs on a single machine when each job has a known processing time and a known deferral rate and a general precedence relationship exists among the jobs. The problem is to find the minimum cost sequence which is consistent with the precedence relationship. This problem has been solved in certain special cases obtained by restricting the form of the precedence graph. The results of this paper constitute a proper generalization of all previously solved special cases. These results are used to derive an alternate efficient approach to the case in which the precedence graph is a rooted tree, as well as an algorithm for attacking the general problem having an arbitrary acyclic precedence graph. The general algorithm has an $O(n^3)$ time bound and though it only gives partial solutions in some instances, the class of problems that it completely solves is a proper superset of the corresponding classes for all previous polynomial time algorithms.
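In the special case with no precedence constraints at all, the minimum cost sequence is given by Smith's classical ratio rule, sequencing jobs by nondecreasing processing-time/weight ratio; a sketch of that base case (not the paper's general O(n³) algorithm):

```python
def smith_sequence(jobs):
    """jobs: list of (processing_time, weight) pairs.  With no
    precedence constraints, ordering by p/w minimizes the total
    weighted completion time (Smith's rule)."""
    return sorted(jobs, key=lambda jw: jw[0] / jw[1])

def weighted_completion(seq):
    """Total weighted completion time of a given sequence."""
    t = total = 0
    for p, w in seq:
        t += p
        total += w * t
    return total
```

For three jobs an exhaustive check over all 6 permutations confirms the ratio order is optimal.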

Journal ArticleDOI
TL;DR: It is shown that determining minimal size tries is an NP-complete problem for several variants of tries and that, for tries in which leaf chains are deleted, determining the trie for which average access time is minimal is also an NP-complete problem.
Abstract: Trie structures are a convenient way of indexing files in which a key consists of a number of attributes. Records correspond to leaves in the trie. Retrieval proceeds by following a path from the root to a leaf, the choice of edges being determined by attribute values. The size of a trie for a file depends on the order in which attributes are tested. It is shown that determining minimal size tries is an NP-complete problem for several variants of tries and that, for tries in which leaf chains are deleted, determining the trie for which average access time is minimal is also an NP-complete problem. These results hold even for files in which attribute values are chosen from a binary or ternary alphabet. KEY WORDS AND PHRASES: information retrieval, trie indexes, trie size, average search time, complexity. CR CATEGORIES: 3.74, 4.33, 5.25

Journal ArticleDOI
TL;DR: This survey includes an introduction to the concepts of problem complexity, analysis of algorithms to find bounds on complexity, average-case behavior, and approximation algorithms.
Abstract: This survey includes an introduction to the concepts of problem complexity, analysis of algorithms to find bounds on complexity, average-case behavior, and approximation algorithms. The major techniques used in analysis of algorithms are reviewed and examples of the use of these methods are presented. A brief explanation of the problem classes P and NP, as well as the class of NP-complete problems, is also presented.

01 Aug 1977
TL;DR: Algorithms of time complexity O(log² n) are developed to solve each of the following problems for graphs with n vertices: finding minimum spanning trees, biconnected components, dominators, bridges, cycles, cycle bases, and shortest cycles.
Abstract: The existence of parallel computers has motivated the development of parallel problem-solving techniques for many problems. Techniques are studied for solving graph problems on an unbounded parallel model of computation. It is shown that solutions to graph problems can be organized to reveal a large amount of parallelism, which can be exploited to substantially reduce the computation time. Precisely, for an appropriate measure of time complexity, algorithms of time complexity O(log² n) are developed to solve each of the following problems for graphs with n vertices: finding minimum spanning trees, biconnected components, dominators, bridges, cycles, cycle bases, and shortest cycles. The number of processors needed to execute each algorithm is bounded above by a polynomial function of n. It is shown that 2 log n + c is a lower bound on the time required to solve each of these graph problems. Thus, the algorithms obtained have time complexities which are optimal to within a factor of log n.
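Parallel bounds of this O(log² n) form typically rest on primitives like pointer doubling; a sequential simulation of one such primitive, list ranking, assuming a synchronous PRAM-style model (illustrative only, not one of the paper's algorithms):

```python
import math

def list_rank(next_):
    """Simulate synchronous pointer jumping for list ranking.
    next_[i] is node i's successor; the tail points to itself.
    After ceil(log2 n) rounds, rank[i] = distance from i to the tail."""
    n = len(next_)
    rank = [0 if next_[i] == i else 1 for i in range(n)]
    for _ in range(max(1, math.ceil(math.log2(n)))):
        # Each round, every node adds its successor's rank and then
        # jumps its pointer past that successor -- all "in parallel".
        rank = [rank[i] + rank[next_[i]] for i in range(n)]
        next_ = [next_[next_[i]] for i in range(n)]
    return rank
```

Each round halves the remaining pointer distance for every node simultaneously, which is the source of the logarithmic round count.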

Journal ArticleDOI
TL;DR: Several procedures based on (not necessarily regular) resolution for checking whether a formula in CF3 is contradictory are considered, and the exponential lower bounds do not follow directly from Tseitin's lower bound for regular resolution since these procedures also allow nonregular resolution trees.
Abstract: Several procedures based on (not necessarily regular) resolution for checking whether a formula in CF3 is contradictory are considered. The procedures use various methods of bounding the size of the clauses which are generated. The following results are obtained:1. All of the proposed procedures which are forced to run in polynomial time do not always work—i.e., they do not identify all contradictory formulas.2. Those which always work must run in exponential time. The exponential lower bounds for these procedures do not follow directly from Tseitin’s lower bound for regular resolution since these procedures also allow nonregular resolution trees.

01 Mar 1977
TL;DR: In this article, the uniform word problem for finite groups presented by their multiplication tables is considered, and upper bounds of O(k²) for arbitrary groups, O(n log² n) for arbitrary semigroups, and O(n log n) for abelian groups are shown, where n is the length of the presentation.
Abstract: The uniform word problem for finite groups presented by their multiplication tables is considered. Upper bounds of O(k²) for arbitrary groups, O(n log² n) for arbitrary semigroups, and O(n log n) for abelian groups are shown, where n is the length of the presentation. (Author)

Journal ArticleDOI
Neil D. Jones1
TL;DR: In this article, the authors present an alternate, simpler simulation algorithm which involves consideration only of the configurations actually reached by the automaton, which can be expected to run faster and use less storage (depending on the data structures used).

Journal ArticleDOI
TL;DR: In this article, a new and efficient procedure for testing a pair of digraphs for isomorphism is developed, based on conducting a depth-first search on one of the digraph and then matching of edges using backtracking with very effective pruning.
Abstract: A new and efficient procedure for testing a pair of digraphs for isomorphism is developed. It is based on conducting a depth-first search on one of the digraphs followed by a systematic matching of edges using backtracking with very effective pruning. It is proved that for digraphs (of n vertices) the expected time complexity of this procedure is O(n log n). This theoretical result is verified empirically on more than 300 large random digraphs. This procedure is shown to be more efficient than any of the existing general isomorphism procedures.

Proceedings ArticleDOI
01 Jan 1977
TL;DR: It is shown that the general common subexpression elimination problem does not fit into the semilattice-theoretic model for global program optimization [Ki, KU, GW], and that the standard iterative algorithm for such problems does not work in this case.
Abstract: We propose a new optimization technique applicable to set-oriented languages, which we shall call general common subexpression elimination. It involves the computation of the IAVAIL(n) function for all nodes n in a flow graph, where IAVAIL(n) is the set of expressions such that along every path leading to n, there will be found a computation of the expression followed by only incidental assignments (i.e. A = A ∪ {x} or A = A - {x}) to its operands, and such that the number of such assignments is bounded, independent of the path taken. We shall try to justify our definitions and demonstrate the usefulness of this technique by several examples. We shall show that this optimization problem does not fit into the semilattice-theoretic model for global program optimization [Ki, KU, GW], and that the standard iterative algorithm for such problems does not work in this case. We then give several theorems which allow the problem to be solved for reducible flow graphs. The formulae given in the theorems are in such a form that an efficient algorithm can be found by adapting an algorithm given in [U]. The resulting algorithm takes O(e log e) steps of an extended type, where bit vector operations are regarded as one step, and e is the number of edges of the flow graph. It takes O(n log n) extended steps for a program flow graph of n nodes.
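IAVAIL generalizes the classic available-expressions problem, which the standard iterative algorithm does solve; a sketch of that baseline forward "must" analysis on an explicit control-flow graph (the paper's point is that IAVAIL itself needs more than this):

```python
def available_expressions(nodes, preds, gen, kill, entry):
    """Classic iterative dataflow: an expression is available at n if
    every path to n computes it after its last kill.  gen[n]/kill[n]
    are sets of expression names; iterate to a fixpoint."""
    universe = set().union(*gen.values()) | set().union(*kill.values())
    avail_in = {n: set() if n == entry else set(universe) for n in nodes}
    avail_out = {n: gen[n] | (avail_in[n] - kill[n]) for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry:
                continue
            new_in = set(universe)
            for p in preds[n]:          # "must" analysis: intersect
                new_in &= avail_out[p]
            new_out = gen[n] | (new_in - kill[n])
            if new_in != avail_in[n] or new_out != avail_out[n]:
                avail_in[n], avail_out[n] = new_in, new_out
                changed = True
    return avail_in
```

On a diamond-shaped graph where one branch kills an expression computed at the entry, the analysis correctly reports it unavailable at the join.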

Journal ArticleDOI
TL;DR: It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G); consequently, every deterministic ETOL language is recognizable in polynomial time.
Abstract: It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian parallel languages are recognizable within the same bounds.

Journal ArticleDOI
TL;DR: A new algorithm is presented which copies cyclic list structures using bounded workspace and linear time and uses a technique for traversing the structure twice, using the same spanning tree in each case.
Abstract: A new algorithm is presented which copies cyclic list structures using bounded workspace and linear time. Unlike a previous similar algorithm, this one makes no assumptions about the storage allocation system in use and uses only operations likely to be available in a high-level language. The distinctive feature of this algorithm is a technique for traversing the structure twice, using the same spanning tree in each case, first from left to right and then from right to left.

Proceedings ArticleDOI
30 Sep 1977
TL;DR: It is shown that the computation of the determinant requires an exponential number of multiplications if the commutativity of indeterminates is not allowed; allowing commutativity thus reduces a computation of exponential complexity to a computation of polynomial complexity.
Abstract: In this paper we show that the computation of the determinant requires an exponential number of multiplications if the commutativity of indeterminates is not allowed. The determinant can be computed in polynomial time with the commutation of indeterminates. Hence the use of commutativity can reduce a computation of exponential complexity to a computation of polynomial complexity.
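The polynomial-time commutative computation is ordinary Gaussian elimination; a sketch using exact rational arithmetic, in contrast to the n!-term cofactor expansion that the noncommutative setting forces:

```python
from fractions import Fraction

def determinant(matrix):
    """Determinant by Gaussian elimination with exact rationals:
    O(n^3) field operations, versus n! terms in cofactor expansion."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    det = Fraction(1)
    for col in range(n):
        # find a nonzero pivot in this column
        pivot = next((r for r in range(col, n) if a[r][col]), None)
        if pivot is None:
            return Fraction(0)          # singular matrix
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det                  # a row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):     # eliminate below the pivot
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det
```

Division by the pivot is exactly the step unavailable when the entries are noncommuting indeterminates, which is why commutativity matters here.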

Journal ArticleDOI
TL;DR: It is shown by simulation that every language accepted within time $n^d$ by a nondeterministic one-dimensional Turing machine is accepted in linear time by a nondeterministic d-dimensional iterative array; it follows that the class of languages accepted in linear time by nondeterministic multidimensional iterative arrays is precisely Karp's class NP.
Abstract: It is shown by simulation that every language accepted within time $n^d$ by a nondeterministic one-dimensional Turing machine is accepted in linear time by a nondeterministic d-dimensional iterative array. Conversely, every language accepted in linear time by such an iterative array is accepted within time $n^{d+1}$ by a nondeterministic one-dimensional Turing machine. It follows that the class of languages accepted in linear time by nondeterministic multidimensional iterative arrays is precisely Karp's class NP, that nondeterministic $(d + 2)$-dimensional iterative arrays are more powerful than nondeterministic d-dimensional iterative arrays, and that nondeterministic two-dimensional iterative arrays are more powerful than the entire class of nondeterministic multidimensional Turing machines. Related deterministic results are surveyed and summarized for comparison.

Book ChapterDOI
18 Jul 1977
TL;DR: For each nondeterministic successor RAM, the derivation language is contained in TIME(n · log n) and its complement in NTIME; corresponding results hold for the time complexity classes defined by these machines and by nondeterministic k-dimensional Turing machines.
Abstract: 3. For each nondeterministic successor RAM (we define this machine in such a way that it can guess in one step the content of an arbitrary register) the derivation language is contained in TIME(n · log n) and its complement in NTIME(…). Especially for the time complexity classes NRAM(..) defined by this machine and for the time complexity classes Nk DIM(..) defined by nondeterministic k-dimensional Turing machines the following holds:

Journal ArticleDOI
TL;DR: In this article, it was shown that the results obtained by using the Newton-Raphson algorithm for vector linear time series models are identical to those derived by Nicholls (1976).
Abstract: SUMMARY Akaike (1973) showed, in the case of scalar autoregressive-moving average models, that the estimates obtained by an application of the Newton-Raphson algorithm to the approximate likelihood function are the same as those obtained by Hannan (1969). By making use of the properties of tensor products, this paper extends these ideas to show that, in the case of vector linear time series models, the estimates obtained by the application of the Newton-Raphson procedure are identical to those derived by Nicholls (1976). Consequently the estimates obtained from the Newton-Raphson algorithm, in the case of vector models, are consistent, asymptotically normal and efficient. A considerable amount of attention has, in the last few years, been directed towards the estimation of mixed time series models, e.g. Hannan (1969, 1970), Hannan & Nicholls (1972), Box & Jenkins (1970), Wilson (1973), Akaike (1973), Nicholls (1976), and in unpublished work by A. R. Pagan and W. Gersch & J. Yonemoto. Akaike's work is particularly appealing since it relates, in the case of autoregressive-moving average models, the estimation procedure using a nonlinear algorithm which was considered by Pagan, with the frequency domain procedure considered by Hannan (1969). Indeed Akaike has shown that, in the case of a scalar autoregressive-moving average model, Hannan's procedure is equivalent to a three-stage realization of one step of the Newton-Raphson procedure for the numerical maximization of the likelihood function. While he indicates that this comparison can be extended to multidimensional models, Akaike does not consider this more general case. In this paper it will be shown that the Hannan procedure and an application of the Newton-Raphson procedure do lead to the same estimates in the case of vector models.
Consequently the estimates obtained from using the Newton-Raphson algorithm are asymptotically efficient in the sense that they are asymptotically normally distributed with covariance matrix equal to the Cramer-Rao lower bound. We shall verify this for models of the form considered by Nicholls (1976), namely vector autoregressive-moving average models with exogenous variables. The comparison of the two methods of estimation of models of the form (1) by Akaike (1973) and Nicholls (1976) is, as we shall show, greatly simplified by use of the vec notation which transforms matrices to vectors by means of a stacking operation. Using this notation it is possible to obtain expressions for the Hessian and gradient which can be expressed concisely in terms of tensor products. It is worth noting in passing that had this notation been used by Akaike his expression for the gradient and Hessian matrices, in the scalar case, could have
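The Newton-Raphson step at the heart of the comparison is x_{k+1} = x_k − H⁻¹g applied to the log-likelihood's gradient g and Hessian H; a one-dimensional sketch on a toy Gaussian likelihood (illustrative only, not the vector ARMA computation the paper treats):

```python
def newton_raphson_max(grad, hess, x0, steps=20, tol=1e-12):
    """Maximize a smooth function by Newton-Raphson on its gradient:
    x_{k+1} = x_k - grad(x_k) / hess(x_k)."""
    x = x0
    for _ in range(steps):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy example: log-likelihood of N(mu, 1) for data xs is quadratic in
# mu, so the MLE (the sample mean) is reached in a single Newton step.
xs = [1.0, 2.0, 6.0]
grad = lambda mu: sum(x - mu for x in xs)   # d/dmu of log-likelihood
hess = lambda mu: -len(xs)                  # second derivative
```

Because the log-likelihood here is exactly quadratic, one iteration from any starting value lands on the maximizer, mirroring the "one step of Newton-Raphson" equivalence discussed in the abstract.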


Journal ArticleDOI
TL;DR: By restricting the number of origins to two, an algorithm based on an extension of a recent selection algorithm is developed which solves this special case of the transportation problem in O(n) time in the worst case.
Abstract: This paper considers a special case of the standard transportation problem obtained by restricting the number of origins to two. Necessary and sufficient conditions are established which lead to a direct construction of the optimal solution. Using an extension of a recent selection algorithm, an algorithm is developed to solve this special case of the transportation problem in $O(n)$ time in the worst case. The algorithm applies to both capacitated as well as uncapacitated problems.

Journal ArticleDOI
TL;DR: In this article, a class of nonsingular topology-symmetric perfect elimination sparse matrices with an average number of nonzero upper triangular elements equal to … is considered, and the expected time complexity for the factorization is shown to be of order no greater than … both for the essential and the overhead operations.
Abstract: A class of nonsingular topology-symmetric perfect elimination sparse matrices with an average number of nonzero upper triangular elements equal to … is considered. The expected time complexity for the factorization is shown to be of order no greater than … both for the essential and the overhead operations. The time complexity for the repeat solution is shown to be of order … and for inversion of order … . The implications of these results for sparse matrices in general are discussed.

Book ChapterDOI
05 Sep 1977
TL;DR: It is deduced that if any NP-complete problem could be decided in deterministic polynomial time, then all NP-complete problems would be solvable within polynomial time and the two classes P and NP would coincide.
Abstract: Since the early work of Cook (1971) and Karp (1972) the research work on the properties of NP-complete problems has been intensive and widespread. The class of NP-complete problems contains all those problems which are in NP, that is, which can be decided by a nondeterministic Turing machine in polynomial time, and to which all other problems in the class NP can be reduced in polynomial time. The characterization of the complexity of NP-complete problems leads to one of the most important (maybe "the" most important) open questions in theoretical computer science: does there exist any Turing machine which decides any NP-complete problem in deterministic polynomial time? In that case, from the properties of the class NP, we would deduce that all NP-complete problems would be solvable within polynomial time and the two classes P and NP would coincide.

Book ChapterDOI
TL;DR: A system of n processes sharing m reusable resources is deadlock-free if and only if the allocation policy which automatically grants any request that can be granted with currently free resource units never leads to a deadlock.