
Showing papers on "Time complexity published in 1983"


Journal ArticleDOI
TL;DR: In this paper, a decision method is presented for finding a continuous motion connecting two given positions and orientations of the whole collection of bodies. It is not shown, however, that this problem can be solved in polynomial time.

909 citations


Journal ArticleDOI
TL;DR: A linear-time algorithm is given for the classical problem of finding the smallest circle enclosing n given points in the plane, which disproves a conjecture by Shamos and Hoey that this problem requires Ω(n log n) time.
Abstract: Linear-time algorithms for linear programming in $R^2 $ and $R^3 $ are presented. The methods used are applicable for other graphic and geometric problems as well as quadratic programming. For exam...
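The linear-time flavor of this result can be illustrated with a later, randomized relative: Welzl-style incremental computation of the smallest enclosing circle runs in expected linear time. A minimal sketch (this is not Megiddo's deterministic method, and all function names here are ours):

```python
import random

def _circle_two(a, b):
    # circle with segment ab as diameter
    cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
    r = ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5 / 2
    return (cx, cy, r)

def _circle_three(a, b, c):
    # circumcircle of three non-collinear points
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # collinear
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    r = ((ux - ax) ** 2 + (uy - ay) ** 2) ** 0.5
    return (ux, uy, r)

def _inside(c, p, eps=1e-9):
    return c is not None and (p[0]-c[0])**2 + (p[1]-c[1])**2 <= (c[2] + eps)**2

def smallest_enclosing_circle(points):
    pts = points[:]
    random.shuffle(pts)  # random order gives expected-linear running time
    c = None
    for i, p in enumerate(pts):
        if _inside(c, p):
            continue
        c = (p[0], p[1], 0.0)           # p must lie on the boundary
        for j, q in enumerate(pts[:i]):
            if _inside(c, q):
                continue
            c = _circle_two(p, q)       # p and q on the boundary
            for k in pts[:j]:
                if not _inside(c, k):
                    c = _circle_three(p, q, k)
    return c  # (center_x, center_y, radius)
```

Shuffling is what makes the expected work linear: a processed point lies on the boundary of the current circle with probability O(1/i), so the nested repair loops are rarely entered.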

848 citations


Proceedings ArticleDOI
01 Dec 1983
TL;DR: An algebraic approach to the problem of assigning canonical forms to graphs is announced; canonical forms and the associated canonical labelings are computed in polynomial time for graphs of bounded valence.
Abstract: We announce an algebraic approach to the problem of assigning canonical forms to graphs. We compute canonical forms and the associated canonical labelings (or renumberings) in polynomial time for graphs of bounded valence, in moderately exponential, exp(n^(1/2 + o(1))), time for general graphs, in subexponential, n^(log n), time for tournaments and for 2-(n,k,λ) block designs with k,λ bounded, and in n^(log log n) time for λ-planes (symmetric designs) with λ bounded. We prove some related problems NP-hard and indicate some open problems.
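For the far more restricted class of rooted trees, canonical forms are classical and computable in linear time via the AHU encoding; a tiny sketch of that idea (not the algebraic machinery of this paper):

```python
def ahu_code(children, root):
    """Canonical parenthesis string: two rooted trees get the same
    code if and only if they are isomorphic as rooted trees."""
    kids = sorted(ahu_code(children, c) for c in children.get(root, []))
    return "(" + "".join(kids) + ")"
```

Sorting the children's codes erases the (irrelevant) child ordering, which is exactly what a canonical form must do.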

472 citations


Proceedings ArticleDOI
01 Dec 1983
TL;DR: A linear-time algorithm is given for the special case of the disjoint set union problem in which the structure of the unions (defined by a “union tree”) is known in advance; used as a subroutine, it yields similar improvements in the efficiency of algorithms for a number of other problems.
Abstract: This paper presents a linear-time algorithm for the special case of the disjoint set union problem in which the structure of the unions (defined by a “union tree”) is known in advance. The algorithm executes an intermixed sequence of m union and find operations on n elements in O(m+n) time and O(n) space. This is a slight but theoretically significant improvement over the fastest known algorithm for the general problem, which runs in O(mα(m+n, n)+n) time and O(n) space, where α is a functional inverse of Ackermann's function. Used as a subroutine, the algorithm gives similar improvements in the efficiency of algorithms for solving a number of other problems, including two-processor scheduling, the off-line min problem, matching on convex graphs, finding nearest common ancestors off-line, testing a flow graph for reducibility, and finding two disjoint directed spanning trees. The algorithm obtains its efficiency by combining a fast algorithm for the general problem with table look-up on small sets, and requires a random access machine for its implementation. The algorithm extends to the case in which single-node additions to the union tree are allowed. The extended algorithm is useful in finding maximum cardinality matchings on nonbipartite graphs.
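For contrast, a sketch of the general-problem baseline that this special case improves on: union by rank with path halving, which achieves the O(mα(m+n, n) + n) bound quoted above (a standard textbook version, not the table look-up algorithm of the paper):

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # path halving: every other node on the find path is
        # re-pointed at its grandparent
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:   # attach shorter tree under taller
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
```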

398 citations


Journal ArticleDOI
Chazelle
TL;DR: This paper presents an implementation of the bottom-left heuristic for two-dimensional bin-packing which requires linear space and quadratic time; it is believed that even for relatively small values of N, this is the most efficient implementation of the heuristic to date.
Abstract: We study implementations of the bottom-left heuristic for two-dimensional bin-packing. To pack N rectangles into an infinite vertical strip of fixed width, the strategy considered here places each rectangle in turn as low as possible in the strip in a left-justified position. For reasons of simplicity and good performance, the bottom-left heuristic has long been a favorite in practical applications; however, the best implementations found so far require a number of steps O(N^3). In this paper, we present an implementation of the bottom-left heuristic which requires linear space and quadratic time. The algorithm is fairly practical, and we believe that even for relatively small values of N, it gives the most efficient implementation of the heuristic, to date. It proceeds by first determining all the possible locations where the next rectangle can fit, then selecting the lowest of them. It is optimal among all the algorithms based on this exhaustive strategy, and its generality makes it adaptable to different packing heuristics.

304 citations


Journal ArticleDOI
TL;DR: An algorithm is presented to detect, off-line on a RAM, all of the distinct repetitions in a given text string over a finite alphabet. It is based on a new data structure, the leaf-tree, which is particularly suited to exploit simple properties of the suffix tree associated with the string to be analyzed.

226 citations


Journal ArticleDOI
TL;DR: A heuristic algorithm is described which runs in polynomial time and produces a near minimal solution to the problem of computing a minimal Steiner tree for a general weighted graph.
Abstract: The computation of a minimal Steiner tree for a general weighted graph is known to be NP‐hard. Except for very simple cases, it is thus computationally impracticable to use an algorithm which produces an exact solution. This paper describes a heuristic algorithm which runs in polynomial time and produces a near minimal solution. Experimental results show that the algorithm performs satisfactorily in the rectilinear case. The paper provides an interesting case study of NP‐hard problems and of the important technique of heuristic evaluation.
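The abstract does not spell out the heuristic, so as an illustration here is a sketch of the classical distance-network (shortest-path) heuristic for graph Steiner trees; it also runs in polynomial time, and its cost is at most twice the optimum. The graph encoding (an adjacency dict of (neighbor, weight) lists) is our assumption:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances and predecessors from src."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_heuristic(adj, terminals):
    # 1. metric closure: shortest paths between all terminals
    info = {t: dijkstra(adj, t) for t in terminals}
    # 2. Prim's MST on the closure
    ts = list(terminals)
    in_tree = {ts[0]}
    closure_edges = []
    while len(in_tree) < len(ts):
        d, u, v = min((info[u][0][v], u, v)
                      for u in in_tree for v in ts if v not in in_tree)
        closure_edges.append((u, v))
        in_tree.add(v)
    # 3. expand closure edges back into real shortest paths
    edges = set()
    for u, v in closure_edges:
        prev, x = info[u][1], v
        while x != u:
            edges.add(frozenset((prev[x], x)))
            x = prev[x]
    wmap = {frozenset((u, v)): w for u in adj for v, w in adj[u]}
    return sum(wmap[e] for e in edges)  # weight of the heuristic tree
```

A further MST-and-prune pass over the collected edges can tighten the result; it is omitted here for brevity.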

205 citations


Journal ArticleDOI
David S. Johnson, K. A. Niemi
TL;DR: It is shown how dynamic programming techniques can be used to construct pseudopolynomial time optimization algorithms and fully polynomial time approximation schemes for the partially ordered knapsack problem and how this approach can be adapted to the case of in-trees and to a related tree partitioning problem arising in integrated circuit design.
Abstract: Let G be an acyclic directed graph with weights and values assigned to its vertices. In the partially ordered knapsack problem we wish to find a maximum-valued subset of vertices whose total weight does not exceed a given knapsack capacity, and which contains every predecessor of a vertex if it contains the vertex itself. We consider the special case where G is an out-tree. Even though this special case is still NP-complete, we observe how dynamic programming techniques can be used to construct pseudopolynomial time optimization algorithms and fully polynomial time approximation schemes for it. In particular, we show that a nonstandard approach we call “left-right” dynamic programming is better suited for this problem than the standard “bottom-up” approach, and we show how this “left-right” approach can also be adapted to the case of in-trees and to a related tree partitioning problem arising in integrated circuit design. We conclude by presenting complexity results which indicate that similar success cannot be expected with either problem when the restriction to trees is lifted.
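A minimal sketch of the “left-right” idea for the out-tree case, under assumed names (children, weight, value): walk the vertices in preorder and, at each vertex, either take it, which makes its subtree eligible, or jump past its entire subtree:

```python
from functools import lru_cache

def po_knapsack(children, weight, value, root, capacity):
    # preorder listing of the out-tree
    order = []
    def dfs(u):
        order.append(u)
        for c in children.get(u, []):
            dfs(c)
    dfs(root)
    n = len(order)

    # skip[i]: first preorder index past the subtree rooted at order[i]
    def subtree_size(u):
        return 1 + sum(subtree_size(c) for c in children.get(u, []))
    skip = [i + subtree_size(u) for i, u in enumerate(order)]

    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == n:
            return 0
        u = order[i]
        res = best(skip[i], cap)              # reject u: jump past its subtree
        if weight[u] <= cap:                  # accept u: children become eligible
            res = max(res, value[u] + best(i + 1, cap - weight[u]))
        return res

    return best(0, capacity)
```

The invariant is that whenever index i is reached, every ancestor of order[i] has already been taken, so the predecessor-closure constraint holds automatically.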

183 citations


Book ChapterDOI
01 Jan 1983
TL;DR: In this paper, polynomial time inference from positive data is proposed for the class of extended regular pattern languages, which are the sets of all strings obtained by substituting any (possibly empty) constant string, rather than a non-empty one, for each variable symbol.
Abstract: A pattern is a string of constant symbols and variable symbols. The language of a pattern p is the set of all strings obtained by substituting any non-empty constant string for each variable symbol in p. A regular pattern has at most one occurrence of each variable symbol. In this paper, we consider polynomial time inference from positive data for the class of extended regular pattern languages which are sets of all strings obtained by substituting any (possibly empty) constant string, instead of non-empty string. Our inference machine uses MINL calculation which finds a minimal language containing a given finite set of strings. The relation between MINL calculation for the class of extended regular pattern languages and the longest common subsequence problem is also discussed.
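The longest common subsequence problem mentioned at the end has a standard quadratic dynamic program; a sketch:

```python
def lcs(a, b):
    """One longest common subsequence of strings a and b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # walk the table backwards to recover one optimal subsequence
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```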

182 citations


Proceedings ArticleDOI
07 Nov 1983
TL;DR: A new formulation of the notion of duality that allows the unified treatment of a number of geometric problems is used to solve two long-standing problems of computational geometry and to obtain a quadratic algorithm for computing the minimum-area triangle with vertices chosen among n points in the plane.
Abstract: This paper uses a new formulation of the notion of duality that allows the unified treatment of a number of geometric problems. In particular, we are able to apply our approach to solve two long-standing problems of computational geometry: one is to obtain a quadratic algorithm for computing the minimum-area triangle with vertices chosen among n points in the plane; the other is to produce an optimal algorithm for the half-plane range query problem. This problem is to preprocess n points in the plane, so that given a test half-plane, one can efficiently determine all points lying in the half-plane. We describe an optimal O(k + log n) time algorithm for answering such queries, where k is the number of points to be reported. The algorithm requires O(n) space and O(n log n) preprocessing time. Both of these results represent significant improvements over the best methods previously known. In addition, we give a number of new combinatorial results related to the computation of line arrangements.
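For scale, the obvious algorithm for the first problem checks all triples of points in O(n^3) time; the duality-based method improves this to quadratic. The cubic baseline:

```python
from itertools import combinations

def min_area_triangle(points):
    """Smallest triangle area over all triples of points (brute force)."""
    best = None
    for a, b, c in combinations(points, 3):
        # twice the signed area via the cross product
        area = abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2
        if best is None or area < best:
            best = area
    return best
```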

169 citations


Journal ArticleDOI
TL;DR: In this article, the authors design and analyze an algorithm which realizes both asymptotic bounds simultaneously and makes possible a completely general implementation as a Fortran subroutine or even as a six-head finite automaton.

Proceedings ArticleDOI
07 Nov 1983
TL;DR: It appears that unless there is another radical breakthrough in ISO, independent of the previous one, the simple groups classification is an indispensable tool for further developments.
Abstract: We address the graph isomorphism problem and related fundamental complexity problems of computational group theory. The main results are these: A1. A polynomial time algorithm to test simplicity and find composition factors of a given permutation group (COMP). A2. A polynomial time algorithm to find elements of given prime order p in a permutation group of order divisible by p. A3. A polynomial time reduction of the problem of finding Sylow subgroups of permutation groups (SYLFIND) to finding the intersection of two cosets of permutation groups (INT). As a consequence, one can find Sylow subgroups of solvable groups and of groups with bounded nonabelian composition factors in polynomial time. A4. A polynomial time algorithm to solve SYLFIND for finite simple groups. A5. An n^(cd/log d) algorithm for isomorphism (ISO) of graphs of valency less than d and a consequent improved moderately exponential general graph isomorphism test in exp(c√(n log n)) steps. A6. A moderately exponential n^(c√n) algorithm for INT. Combined with A3, we obtain an n^(c√n) algorithm for SYLFIND as well. All these problems have strong links to each other. ISO easily reduces to INT. A subcase of SYLFIND was solved in polynomial time and applied to bounded valence ISO in [Lul]. Now, SYLFIND is reduced to INT. Interesting special cases of SYLFIND belong to NP ∩ coNP and are not known to have subexponential solutions. All the results stated depend on the classification of finite simple groups. We note that no previous ISO test had n^(o(d)) worst case behavior for graphs of valency less than d. It appears that unless there is another radical breakthrough in ISO, independent of the previous one, the simple groups classification is an indispensable tool for further developments.

Proceedings ArticleDOI
17 Aug 1983
TL;DR: This paper indicates a method of describing real-time processes and their asynchronous communication by means of message exchanges based upon an extension of linear time temporal logic to a special temporal logic in which real- time and asynchronous message passing properties can be expressed.
Abstract: This paper indicates a method of describing real-time processes and their asynchronous communication by means of message exchanges. This description method is based upon an extension of linear time temporal logic to a special temporal logic in which real-time and asynchronous message passing properties can be expressed. We give a model of this logic, define new operators and show amongst others how they can be applied to specify real-time asynchronous message passing and an abstract real-time transmission medium.

Journal ArticleDOI
TL;DR: The problem of finding repeats in molecular sequences is approached as a sorting problem and leads to a method which is linear in space complexity and N log N in expected time complexity, and which can be used to handle large sequences with relative ease.
Abstract: The problem of finding repeats in molecular sequences is approached as a sorting problem. It leads to a method which is linear in space complexity and N log N in expected time complexity. The implementation is straightforward and can therefore be used to handle large sequences with relative ease. Of particular interest is that several sequences can be treated as a single sequence. This leads to an efficient method for finding dyads and for finding common features of many sequences, such as favorable alignments.
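The sort-based idea can be sketched as follows: sort all suffixes, then read repeats off the common prefixes of lexicographically adjacent suffixes. (Naively sorting Python slices, as here, costs more than the paper's N log N expected time; it is only meant to show the principle.)

```python
def repeats(s, min_len=2):
    """Substrings of length >= min_len that occur at least twice in s."""
    sufs = sorted(range(len(s)), key=lambda i: s[i:])  # suffix order
    found = set()
    for a, b in zip(sufs, sufs[1:]):
        # longest common prefix of two adjacent sorted suffixes
        l = 0
        while a + l < len(s) and b + l < len(s) and s[a + l] == s[b + l]:
            l += 1
        if l >= min_len:
            found.add(s[a:a + l])
    return found
```

Any repeated substring is a prefix of two suffixes, and after sorting, those suffixes end up adjacent, which is why scanning neighbor pairs suffices.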

Journal ArticleDOI
TL;DR: Polynomial time algorithms are presented for finding the permutation distribution of any statistic that is a linear combination of some function of either the original observations or the ranks, including the original Fisher two-sample location statistic.
Abstract: Polynomial time algorithms are presented for finding the permutation distribution of any statistic that is a linear combination of some function of either the original observations or the ranks. This class of statistics includes the original Fisher two-sample location statistic and such common nonparametric statistics as the Wilcoxon, Ansari-Bradley, Savage, and many others. The algorithms are presented for the two-sample problem and it is shown how to extend them to the multisample problem—for example, to find the distribution of the Kruskal-Wallis and other extensions of the Wilcoxon—and to the single-sample situation. Stratification, ties, and censored observations are also easily handled by the algorithms. The algorithms require polynomial time as opposed to complete enumeration algorithms, which require exponential time. This savings is effected by first calculating and then inverting the characteristic function of the statistic.
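For a concrete member of this class, the Wilcoxon rank-sum statistic, the permutation distribution can also be tabulated with a polynomial-time dynamic program over a generating function (a simpler alternative to the characteristic-function inversion used in the paper):

```python
from math import comb

def rank_sum_distribution(m, n):
    """Counts of each possible rank sum when m of the ranks 1..m+n
    are assigned to the first sample (no ties)."""
    N = m + n
    max_s = sum(range(N - m + 1, N + 1))   # largest possible rank sum
    # f[k][s]: number of ways to pick k ranks (so far) summing to s
    f = [[0] * (max_s + 1) for _ in range(m + 1)]
    f[0][0] = 1
    for r in range(1, N + 1):              # add rank r (used at most once,
        for k in range(min(r, m), 0, -1):  # hence the reversed loops)
            for s in range(max_s, r - 1, -1):
                f[k][s] += f[k - 1][s - r]
    total = f[m]
    assert sum(total) == comb(N, m)        # sanity check: all subsets counted
    return total                           # total[s] / C(N, m) = P(rank sum == s)
```

The three nested loops give O(m · N · max_s) time, polynomial in the sample sizes, versus the exponential cost of enumerating all C(N, m) permutations.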

Journal ArticleDOI
TL;DR: In this paper, an efficient via minimization algorithm for certain types of two-layer printed circuit boards is developed which can be executed in polynomial time and yields solutions for routings with junctions of degrees varying from 2 to 8 and guarantees the minimum number of vias for routing with three or fewer line segments connected to each junction.
Abstract: Based on graph theory, an efficient via minimization algorithm for certain types of two-layer printed circuit boards is developed which can be executed in polynomial time. The algorithm yields solutions for routings with junctions of degrees varying from 2 to 8 and guarantees the minimum number of vias for routings with three or fewer line segments connected to each junction. Examples are given to illustrate various aspects of the algorithm. In addition, preassignment of line segments on a particular layer of the board due to certain prescribed board (or component) constraints is discussed.

Journal ArticleDOI
TL;DR: A polynomial time algorithm is given for deciding whether or not a transducer is k-valued; the result holds when “valued” is replaced by “ambiguous”.
Abstract: We look at some decision problems concerning nondeterministic finite transducers. The problems concern finite-valuedness, finite ambiguity, equivalence, etc. For a fixed k, we give a polynomial time algorithm for deciding whether or not a transducer is k-valued. The result holds when “valued” is replaced by “ambiguous”. In fact, the following problems are decidable: 1) Given a transducer, is it k-ambiguous for some k? 2) Given two finitely ambiguous transducers, are they equivalent? For unambiguous transducers, equivalence is decidable in polynomial time.

Journal ArticleDOI
TL;DR: There is no obvious “principle of optimality” that can be applied, since globally narrow, aesthetic placements of trees may require wider than necessary subtrees, and the problem is NP-hard.
Abstract: We investigate the complexity of producing aesthetically pleasing drawings of binary trees, drawings that are as narrow as possible. The notion of what is aesthetically pleasing is embodied in several constraints on the placement of nodes, relative to other nodes. Among the results we give are: (1) There is no obvious "principle of optimality" that can be applied, since globally narrow, aesthetic placements of trees may require wider than necessary subtrees. (2) A previously suggested heuristic can produce drawings on n-node trees that are Ω(n) times as wide as necessary. (3) The problem can be reduced in polynomial time to linear programming; hence, if the coordinates assigned to the nodes are continuous variables, then the problem can be solved in polynomial time. (4) If the placement is restricted to the integral lattice then the problem is NP-hard, as is its approximation to within a factor of about 4 per cent.

Journal ArticleDOI
TL;DR: It is shown that a polynomial time algorithm for a wider class of precedence constraints is unlikely, and the problem is proved to be NP-complete for precedence constraints that are the disjoint union of an in-forest and an out-forest (the “opposing forests” of the title).
Abstract: A basic problem of deterministic scheduling theory is that of scheduling n unit-length tasks on m identical processors subject to precedence constraints so as to meet a given overall deadline T. T. C. Hu's classic “level algorithm” can be used to solve this problem in linear time if the precedence constraints have the form of an in-forest or an out-forest. We show that a polynomial time algorithm for a wider class of precedence constraints is unlikely, by proving the problem to be NP-complete for precedence constraints that are the disjoint union of an in-forest and an out-forest (the “opposing forests” of our title). However, for any fixed value of m we show that this problem can be solved in polynomial time for such precedence constraints. For the special case of $m = 3$ we provide a linear time algorithm.

Proceedings ArticleDOI
07 Nov 1983
TL;DR: This method gives a polynomial time attack on knapsack public key cryptosystems that can be expected to break them if they transmit information at rates below d_c(n), as n → ∞.
Abstract: The subset sum problem is to decide whether or not the 0-1 integer programming problem Σ_{i=1}^{n} a_i x_i = M, all x_i = 0 or 1, has a solution, where the a_i and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public key cryptosystems of knapsack type. We propose an algorithm which when given an instance of the subset sum problem searches for a solution. This algorithm always halts in polynomial time, but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovasz to attempt to find v. We analyze the performance of the proposed algorithm. Let the density d of a subset sum problem be defined by d = n/log_2(max_i a_i). Then for "almost all" problems of density d < .645 the vector v we are searching for is the shortest nonzero vector in the lattice. We prove that for "almost all" problems of density d < 1/n the lattice basis reduction algorithm locates v. Extensive computational tests of the algorithm suggest that it works for densities d < d_c(n), where d_c(n) is a cutoff value that is substantially larger than 1/n. This method gives a polynomial time attack on knapsack public key cryptosystems that can be expected to break them if they transmit information at rates below d_c(n), as n → ∞.
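Only the reduction step is easy to sketch: build the (n+1)-row basis in which any 0-1 solution x corresponds to the short lattice vector (x, 0). The basis reduction itself (LLL) would come from an external library and is not shown:

```python
import math

def lo_basis(a, M):
    """Lattice basis (as rows) for the subset sum instance (a, M):
    n identity rows carrying the weights a_i in an extra coordinate,
    plus one row carrying -M in that coordinate."""
    n = len(a)
    B = []
    for i in range(n):
        row = [0] * (n + 1)
        row[i] = 1
        row[n] = a[i]
        B.append(row)
    B.append([0] * n + [-M])
    return B

def density(a):
    """Density d = n / log2(max_i a_i) from the abstract."""
    return len(a) / math.log2(max(a))
```

If x is a 0-1 solution, summing the rows selected by x together with the last row zeroes the final coordinate, leaving a vector of norm at most √n, which is what lattice reduction then hunts for.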

Journal ArticleDOI
TL;DR: Two methods are presented for generating uniform random strings in an unambiguous context-free language using a precomputed table of size $O(n^{r + 1} )$, where r is the number of nonterminals in the grammar used to specify the language.
Abstract: Let S be the set of all strings of length n generated by a given context-free grammar. A uniform random generator is one which produces strings from S with equal probability. In generating these strings, care must be taken in choosing the disjuncts that form the right-hand side of a grammar rule so that the produced string will have the specified length. Uniform random generators have applications in studying the complexity of parsers, in estimating the average efficiency of theorem provers for the propositional calculus, in establishing a measure of ambiguity of a grammar, etc. Two methods are presented for generating uniform random strings in an unambiguous context-free language. The first method will generate a random string of length n in linear time, but must use a precomputed table of size $O(n^{r + 1} )$, where r is the number of nonterminals in the grammar used to specify the language. The second method precomputes part of the table and calculates the other entries as they are called for. It requi...
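A sketch of the first (table-based) method, assuming the grammar is in Chomsky normal form and unambiguous as the paper requires: precompute, for each nonterminal, how many strings of each length it derives, then choose productions and split points with probability proportional to those counts:

```python
import random

def counts(rules, n):
    """c[A][l] = number of length-l strings derivable from nonterminal A.
    rules: A -> list of productions, each ('a',) terminal or (B, C)."""
    c = {A: [0] * (n + 1) for A in rules}
    for A, prods in rules.items():
        for p in prods:
            if len(p) == 1:
                c[A][1] += 1
    for l in range(2, n + 1):              # both halves of A -> B C are shorter
        for A, prods in rules.items():
            for p in prods:
                if len(p) == 2:
                    B, C = p
                    c[A][l] += sum(c[B][k] * c[C][l - k] for k in range(1, l))
    return c

def sample(rules, c, A, l):
    """Uniform random length-l string derived from A (grammar unambiguous)."""
    opts = []
    for p in rules[A]:
        if len(p) == 1 and l == 1:
            opts.append((1, p, None))
        elif len(p) == 2:
            B, C = p
            for k in range(1, l):          # split: B derives k, C derives l-k
                w = c[B][k] * c[C][l - k]
                if w:
                    opts.append((w, p, k))
    r = random.randrange(sum(w for w, _, _ in opts))
    for w, p, k in opts:
        if r < w:
            if k is None:
                return p[0]
            return sample(rules, c, p[0], k) + sample(rules, c, p[1], l - k)
        r -= w
```

Weighting each choice by the number of completions it admits is exactly what makes every length-l string equally likely; with an ambiguous grammar the same string would be counted once per derivation and the distribution would be skewed.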

Journal ArticleDOI
TL;DR: In this paper, a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file is presented and an algorithm that obtains a near optimal solution to the index selection problem in polynomial time is developed.
Abstract: A problem of considerable interest in the design of a database is the selection of indexes. In this paper, we present a probabilistic model of transactions (queries, updates, insertions, and deletions) to a file. An evaluation function, which is based on the cost saving (in terms of the number of page accesses) attributable to the use of an index set, is then developed. The maximization of this function would yield an optimal set of indexes. Unfortunately, algorithms known to solve this maximization problem require an order of time exponential in the total number of attributes in the file. Consequently, we develop the theoretical basis which leads to an algorithm that obtains a near optimal solution to the index selection problem in polynomial time. The theoretical result consists of showing that the index selection problem can be solved by solving a properly chosen instance of the knapsack problem. A theoretical bound for the amount by which the solution obtained by this algorithm deviates from the true optimum is provided. This result is then interpreted in the light of evidence gathered through experiments.

01 Feb 1983
TL;DR: It is concluded that the g.c. inequalities can be strengthened most of the time, to an extent that increases with problem density.
Abstract: A nonlinear 0-1 program can be restated as a multilinear 0-1 program, which in turn is known to be equivalent to a linear 0-1 program with generalized covering (g.c.) inequalities. In a companion paper [6] we have defined a family of linear inequalities that contains more compact (smaller cardinality) linearizations of a multilinear 0-1 program than the one based on the g.c. inequalities. In this paper we analyze the dominance relations between inequalities of the above family. In particular, we give a criterion that can be checked in linear time, for deciding whether a g.c. inequality can be strengthened by extending the cover from which it was derived. We then describe a class of algorithms based on these results and discuss our computational experience. We conclude that the g.c. inequalities can be strengthened most of the time, to an extent that increases with problem density. In particular, the algorithm using the strengthening procedure outperforms the one using only g.c. inequalities whenever the number of nonlinear terms per constraint exceeds about 12-15, and the difference in their performance grows with the number of such terms.

Proceedings ArticleDOI
27 Jun 1983
TL;DR: A general and practical river routing algorithm that will always generate a solution if a solution exists and an analysis to determine the minimum space required for a strait-type river routing problem is included.
Abstract: A general and practical river routing algorithm is described. It is assumed that there is one layer for routing and terminals are on the boundaries of an arbitrarily shaped rectilinear routing region. All nets are two-terminal nets with pre-assigned (may be different) widths and no crossover between nets is allowed. The minimum separation between the edges of two adjacent wires is input as the design rule. This algorithm assumes no grid on the plane and will always generate a solution if a solution exists. The number of corners is reduced by flipping of corners. An analysis to determine the minimum space required for a strait-type river routing problem is included. Let B be the number of boundary segments and T be the total number of terminals. The time complexity is O(T(B+T)^2) and the storage required is O((B+T)^2). This algorithm is implemented as part of the design station under development at the University of California, Berkeley.

Journal ArticleDOI
TL;DR: This paper addresses questions of efficiency in relational databases by presenting polynomial time algorithms for minimizing and testing equivalence of what they call “fan-out free” queries, a more general and more powerful subclass of the conjunctive queries than those previously studied.
Abstract: This paper addresses questions of efficiency in relational databases. We present polynomial time algorithms for minimizing and testing equivalence of what we call “fan-out free” queries. The fan-out free queries form a more general and more powerful subclass of the conjunctive queries than those previously studied. In particular, they can be used to express questions about transitive properties of databases, questions that are impossible to express if one operates under the assumption, implicit in previous work, that each variable has an assigned “type,” and hence can only refer to one fixed attribute of a relation. Our algorithms are graph-theoretic in nature, and the equivalence algorithm can be viewed as solving a special case of the graph isomorphism problem (by reducing it to a series of labelled forest isomorphism questions).

Journal ArticleDOI
TL;DR: An algorithm is given, which solves the open-shop problem in polynomial time, whenever the sum of execution times for one processor is large enough with respect to the maximal execution time.
Abstract: The open-shop problem is known to be NP-complete. However we give an algorithm, which solves the problem in polynomial time, whenever the sum of execution times for one processor is large enough with respect to the maximal execution time. According to the schedule given by our algorithm one of the processors works without idle time. Construction of this schedule is based on a suitable generalization of several “integer-making” techniques.

Book ChapterDOI
18 Jul 1983
TL;DR: It is shown that the deterministic computation time for sets in NP can depend on their density if and only if there is a collapse or partial collapse of the corresponding higher nondeterministic and deterministic time bounded complexity classes.
Abstract: In this paper we study the computational complexity of sets of different densities in NP. We show that the deterministic computation time for sets in NP can depend on their density if and only if there is a collapse or partial collapse of the corresponding higher nondeterministic and deterministic time bounded complexity classes. We show also that for NP sets of different densities there exist complete sets of the corresponding density under polynomial time Turing reductions. Finally, we show that these results can be interpreted as results about the complexity of theorem proving and proof presentation in axiomatized mathematical systems. This interpretation relates fundamental questions about the complexity of our intellectual tools to basic structural problems about P, NP, CoNP, and PSPACE, discussed in this paper.

Journal ArticleDOI
TL;DR: In this article, a new and explicit characterisation of the concept of persistency of excitation for time invariant systems in the presence of possibly unbounded signals is presented. And the implication of this result in the adaptive control of a class of linear time varying systems is also investigated.

Journal ArticleDOI
TL;DR: A class of linear time heuristic algorithms is presented for the problem of finding a matching of the points such that the cost is minimum.
Abstract: We consider the following problem: Given n points in a unit square in the Euclidean plane, find a matching of the points such that the cost (i.e., the sum of the lengths of the edges between matched points) is minimum. In particular, we present a class of linear time heuristic algorithms for this problem and analyze their worst case performance. The worst case performance of an algorithm is defined as the greatest possible cost, as a function of n, of the matching produced by the algorithm on a set of n points. Each of the algorithms studied here divides the unit square into a few smaller regions, and then is applied recursively to the points in each of these regions.
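The abstract leaves the individual heuristics unspecified; one hypothetical member of such a family divides the unit square into about √n horizontal strips, orders the points boustrophedon, and matches consecutive pairs. A sketch (names and details are ours):

```python
import math

def serpentine_matching(points):
    """Greedy matching of points in the unit square: sort into ~sqrt(n)
    horizontal strips, alternate left-to-right / right-to-left within
    strips, and match consecutive points. Assumes len(points) is even."""
    n = len(points)
    k = max(1, math.isqrt(n))                 # number of strips
    def key(p):
        row = min(k - 1, int(p[1] * k))       # which strip p falls in
        x = p[0] if row % 2 == 0 else -p[0]   # boustrophedon ordering
        return (row, x)
    pts = sorted(points, key=key)
    pairs = list(zip(pts[::2], pts[1::2]))
    cost = sum(math.dist(a, b) for a, b in pairs)
    return pairs, cost
```

Points matched this way are near each other along a single serpentine tour of the square, which is the intuition behind bounding the total cost as a function of n.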

Proceedings ArticleDOI
07 Nov 1983
TL;DR: This paper shows the problem of partitioning a polygonal region into a minimum number of trapezoids with two horizontal sides to be NP-complete, and presents an O(n log n) natural approximation algorithm which uses only horizontal chords to partition a polygonal region P into trapezoids, where n is the number of vertices of P.
Abstract: We consider the problem of partitioning a polygonal region into a minimum number of trapezoids with two horizontal sides. Triangles with a horizontal side are considered to be trapezoids with two horizontal sides one of which is degenerate. In this paper we show that this problem is equivalent to the problem of finding a maximum independent set of a straight-lines-in-the-plane graph. Thus it is shown to be NP-complete. Next we present an O(n log n) natural approximation algorithm which uses only horizontal chords to partition a polygonal region P into trapezoids, where n is the number of vertices of P. We show that the absolute performance ratio of the algorithm is three. We can also design another approximation algorithm with the ratio (1 + 2/c) if we have a (1 - 1/c) approximation algorithm for the maximum independent set problem on straight-lines-in-the-plane graphs, where c is some constant. Finally, we give an O(n^3) exact algorithm for polygonal regions without windows.