
Showing papers on "Time complexity published in 1980"


Proceedings ArticleDOI
28 Apr 1980
TL;DR: This work aims to understand when nonuniform upper bounds can be used to obtain uniform upper bounds; it defines a basic notion of nonuniform complexity and relates it to more common notions.
Abstract: It is well known that every set in P has small circuits [13]. Adleman [1] has recently proved the stronger result that every set accepted in polynomial time by a randomized Turing machine has small circuits. Both these results are typical of the known relationships between uniform and nonuniform complexity bounds. They obtain a nonuniform upper bound as a consequence of a uniform upper bound. The central theme here is an attempt to explore the converse direction. That is, we wish to understand when nonuniform upper bounds can be used to obtain uniform upper bounds. In this section we will define our basic notion of nonuniform complexity. Then we will show how to relate it to more common notions.

625 citations


Journal ArticleDOI
TL;DR: In this paper, a class of production planning problems is considered in which known demands have to be satisfied over a finite horizon at minimum total costs, and several algorithms proposed for their solution are described and analyzed.
Abstract: A class of production planning problems is considered in which known demands have to be satisfied over a finite horizon at minimum total costs. For each period, production and storage cost functions are specified. The production costs may include set-up costs and the production levels may be subject to capacity limits. The computational complexity of the problems in this class is investigated. Several algorithms proposed for their solution are described and analyzed. It is also shown that some special cases are NP-hard and hence unlikely to be solvable in polynomial time.

618 citations


Journal ArticleDOI
TL;DR: A simple proof is given that the congruence closure algorithm provides a decision procedure for the quantifier-free theory of equality and the problem of determining the satisfiability of a conjunction of literals becomes NP-complete if the axiomatization of the theory of list structure is changed slightly.
Abstract: The notion of the congruence closure of a relation on a graph is defined and several algorithms for computing it are surveyed. A simple proof is given that the congruence closure algorithm provides a decision procedure for the quantifier-free theory of equality. A decision procedure is then given for the quantifier-free theory of LISP list structure based on the congruence closure algorithm. Both decision procedures determine the satisfiability of a conjunction of literals of length n in average time O(n log n) using the fastest known congruence closure algorithm. It is also shown that if the axiomatization of the theory of list structure is changed slightly, the problem of determining the satisfiability of a conjunction of literals becomes NP-complete. The decision procedures have been implemented in the authors' simplifier for the Stanford Pascal Verifier.

560 citations
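The union-find machinery at the core of congruence closure can be illustrated on the function-free fragment: a conjunction of equalities and disequalities between uninterpreted constants is satisfiable exactly when no disequality connects two constants merged by the equalities. A minimal sketch (names such as `check_sat` are illustrative, not from the paper; the full algorithm additionally propagates equalities through function applications):

```python
# Sketch: satisfiability of a conjunction of equalities and disequalities
# over uninterpreted constants, using union-find. This is only the
# function-free core of congruence closure.

def check_sat(equalities, disequalities):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in equalities:
        union(a, b)
    # Unsatisfiable exactly when some asserted disequality has both
    # sides merged into the same equivalence class.
    return all(find(a) != find(b) for a, b in disequalities)
```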


Journal ArticleDOI
TL;DR: It is proved that entries in the Pade table can be computed by the Extended Euclidean Algorithm, and an algorithm EMGCD (Extended Middle Greatest Common Divisor) is described which is faster than the algorithm HGCD of Aho, Hopcroft and Ullman, although both require time O(n log^2 n).

419 citations


Proceedings ArticleDOI
13 Oct 1980
TL;DR: It is demonstrated that the normal closure of a subgroup can be computed in polynomial time, and that this procedure can be used to test a group for solvability.
Abstract: A permutation group on n letters may always be represented by a small set of generators, even though its size may be exponential in n. We show that it is practical to use such a representation since many problems such as membership testing, equality testing, and inclusion testing are decidable in polynomial time. In addition, we demonstrate that the normal closure of a subgroup can be computed in polynomial time, and that this procedure can be used to test a group for solvability. We also describe an approach to computing the intersection of two groups. The procedures and techniques have wide applicability and have recently been used to improve many graph isomorphism algorithms.

298 citations


Journal ArticleDOI
TL;DR: It is shown that it is possible to apply ideas of Paull and Unger and of Tsukiyama et al. to obtain polynomial-time algorithms for a number of special cases, e.g. the efficient generation of all maximal feasible solutions to a knapsack problem.
Abstract: Suppose that an independence system $(E,\mathcal {I})$ is characterized by a subroutine which indicates in unit time whether or not a given subset of E is independent. It is shown that there is no algorithm for generating all the K maximal independent sets of such an independence system in time polynomial in $|E|$ and K, unless $\mathcal {P} = \mathcal {NP}$. However, it is possible to apply ideas of Paull and Unger and of Tsukiyama et al. to obtain polynomial-time algorithms for a number of special cases, e.g. the efficient generation of all maximal feasible solutions to a knapsack problem. The algorithmic techniques bear an interesting relationship with those of Read for the enumeration of graphs and other combinatorial configurations.

258 citations
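As a concrete illustration of the objects being generated, the following brute-force sketch lists all maximal feasible solutions of a small knapsack instance. It is exponential in the number of items, unlike the polynomial-time special-case algorithms discussed in the paper; all names are illustrative:

```python
from itertools import combinations

def maximal_feasible_sets(weights, capacity):
    # A subset is feasible if its total weight fits the capacity, and
    # maximal if no further item can be added without exceeding it.
    n = len(weights)
    items = range(n)
    feasible = [set(s) for r in range(n + 1)
                for s in combinations(items, r)
                if sum(weights[i] for i in s) <= capacity]

    def is_maximal(s):
        return all(i in s or
                   sum(weights[j] for j in s) + weights[i] > capacity
                   for i in items)

    return [s for s in feasible if is_maximal(s)]
```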


Journal ArticleDOI
TL;DR: In this article, the authors introduce a characterization of skeletal pixels in terms of how many arcs of the boundary pass through a pixel, and a new algorithm is proposed which proceeds by peeling off successive contours of the set to be thinned while identifying pixels where disjoint parts of the boundary have been mapped.

238 citations


Proceedings ArticleDOI
13 Oct 1980
TL;DR: Testing isomorphism of graphs of valence ≤ t is polynomial-time reducible to the color automorphism problem for groups with small simple sections, and some results on primitive permutation groups are used to show that the algorithm runs inPolynomial time.
Abstract: Suppose we are given a set of generators for a group G of permutations of a colored set A. The color automorphism problem for G involves finding generators for the subgroup of G which stabilizes the color classes. Testing isomorphism of graphs of valence ≤ t is polynomial-time reducible to the color automorphism problem for groups with small simple sections. The algorithm for the latter problem involves several divide-and-conquer tricks. The problem is solved sequentially on the G-orbits. An orbit is broken into a minimal set of blocks permuted by G. The hypothesis on G guarantees the existence of a 'large' subgroup P which acts as a p-group on the blocks. A similar process is repeated for each coset of P in G. Some results on primitive permutation groups are used to show that the algorithm runs in polynomial time.

205 citations


Journal ArticleDOI
TL;DR: This note presents an efficient algorithm for finding the largest (or smallest) of a set of uniquely numbered processors arranged in a circle, in which no central controller exists and the number of processors is not known a priori.
Abstract: This note presents an efficient algorithm, requiring O(n log n) message passes, for finding the largest (or smallest) of a set of n uniquely numbered processors arranged in a circle, in which no central controller exists and the number of processors is not known a priori.

196 citations
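The idea behind such O(n log n) ring-election algorithms is a doubling scheme: in each phase a candidate survives only if its id dominates a neighborhood of twice the previous radius, so the candidate set shrinks geometrically. A centralized sketch of that candidate-halving idea (this simulates the outcome, not the message-passing protocol itself; names are illustrative):

```python
def elect_leader(ids):
    # Phase k: a candidate survives iff its id is the largest within
    # distance 2^k on both sides of it on the ring.
    n = len(ids)
    candidates = set(range(n))
    d = 1
    while len(candidates) > 1:
        survivors = set()
        for i in candidates:
            window = [ids[(i + j) % n] for j in range(-d, d + 1)]
            if ids[i] == max(window):
                survivors.add(i)
        candidates = survivors
        d *= 2  # double the radius each phase: O(log n) phases
    return ids[candidates.pop()]
```

Since surviving candidates must be more than `d` apart, at most n/(d+1) survive each phase, which is what bounds the total message count at O(n log n) in the distributed version.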


Proceedings ArticleDOI
13 Oct 1980
TL;DR: The main result is that if there is a sparse NP-complete set under many-one reductions, then P = NP; it is also shown that if there is a sparse NP-complete set under Turing reductions, then the polynomial time hierarchy collapses to Δ2P.
Abstract: A set S ⊂ {0,1}* is sparse if there is a polynomial p such that the number of strings in S of size at most n is at most p(n). All known NP-complete sets, such as SAT, are not sparse. The main result of this paper is that if there is a sparse NP-complete set under many-one reductions, then P = NP. We also show that if there is a sparse NP-complete set under Turing reductions, then the polynomial time hierarchy collapses to Δ2P.

185 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the problem of testing the bandwidth of a graph is not NP-complete (unless P = NP) for any fixed k, answering an open question of Garey, Graham, Johnson, and Knuth.
Abstract: In this paper we investigate the problem of testing the bandwidth of a graph: Given a graph, G, can the vertices of G be mapped to distinct positive integers so that no edge of G has its endpoints mapped to integers which differ by more than some fixed constant, k? We exhibit an algorithm to solve this problem in $O ( f ( k )N^{k + 1} )$ time, where N is the number of vertices of G and $f ( k )$ depends only on k. This result implies that the “Bandwidth $\overset{?}{\leqq} k$” problem is not NP-complete (unless P = NP) for any fixed k, answering an open question of Garey, Graham, Johnson, and Knuth. We also show how the algorithm can be modified to solve some other problems closely related to the “Bandwidth $\overset{?}{\leqq} k$” problem.
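The decision problem itself is easy to state in code. The sketch below checks "Bandwidth ≤ k?" by exhaustive search over all N! layouts, which is only practical for tiny graphs; the paper's algorithm instead achieves O(f(k)N^(k+1)) by dynamic programming over partial layouts. Names are illustrative:

```python
from itertools import permutations

def bandwidth_at_most(n, edges, k):
    # Try every assignment of vertices 0..n-1 to positions 0..n-1 and
    # accept if some layout stretches no edge by more than k.
    for layout in permutations(range(n)):
        pos = {v: i for i, v in enumerate(layout)}
        if all(abs(pos[u] - pos[v]) <= k for u, v in edges):
            return True
    return False
```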

Proceedings ArticleDOI
Ashok K. Chandra, David Harel
13 Oct 1980
TL;DR: This paper is an attempt at laying the foundations for the classification of queries on relational data bases according to their structure and their computational complexity; a Σ-Π hierarchy of height ω2, called the fixpoint query hierarchy, is defined and its properties investigated.
Abstract: This paper is an attempt at laying the foundations for the classification of queries on relational data bases according to their structure and their computational complexity. Using the operations of composition and fixpoints, a Σ-Π hierarchy of height ω2, called the fixpoint query hierarchy, is defined, and its properties investigated. The hierarchy includes most of the queries considered in the literature including those of Codd and Aho and Ullman. The hierarchy to level ω characterizes the first-order queries, and the levels up to ω are shown to be strict. Sets of queries larger than the fixpoint query hierarchy are obtained by considering the queries computable in polynomial time, queries computable in polynomial space, etc. It is shown that classes of queries defined from such complexity classes behave (with respect to containment) in a manner very similar to the corresponding complexity classes. Also, the set of second-order queries turns out to be the same as the set of queries defined from the polynomial-time hierarchy. Finally, these classes of queries are used to characterize a set of queries defined from language considerations: those expressible in a programming language with only typed (or ranked) relation variables. At the end of the paper is a list of symbols used therein.

Journal ArticleDOI
TL;DR: This work gives a general technique for determining the complexity of decidable combinations of theories, and shows that the satisfiability problem for the quantifier-free theory of integers, arrays, list structure and uninterpreted function symbols under +, ≤, store, select, cons, car and cdr is NP-complete.

Proceedings ArticleDOI
28 Apr 1980
TL;DR: The isomorphism problem for graphs has been in recent years the object of much research, and it is not known whether there exists a polynomial-time algorithm for it.
Abstract: The isomorphism problem for graphs has been in recent years the object of much research (see e.g. [Col 78] or [Re-Cor 77]). Its complexity is still unknown. It is not known whether the problem is NP-complete, although it is in NP, of course. It is not known whether there exists a polynomial-time algorithm for it. Recently, Babai [Ba 79] has discussed probabilistic algorithms. For additional information see also [Mi 77]. The problem also has some practical applications. Of the known algorithms let us only quote the work of Weinberg [We 66] and of Hopcroft and Tarjan [Ho-Ta 72]. Weinberg's algorithm runs in quadratic time (in α0, the number of vertices of the graphs). Hopcroft and Tarjan's runs in time O(α0 log α0) and uses their powerful technique of depth-first search. Both these algorithms apply only to planar (Weinberg's only to 3-connected planar) graphs. They rely on a well-known rigidity theorem of Whitney [Whitney 32].

Journal ArticleDOI
TL;DR: It is shown in this paper that there exists a natural isometry between the $L_1$ and $L_\infty$ metrics, so that a polynomial time algorithm for the OPP in one metric yields a polynomial time algorithm for the same problem in the other metric.
Abstract: In this paper we study the problem of scheduling the read/write head movement to handle a batch of $n$ I/O requests in a 2-dimensional secondary storage device in minimum time. Two models of storage systems are assumed in which the access time of a record (being proportional to the “distance” between the position of the record and that of the read/write head) is measured in terms of $L_1 $ and $L_\infty $ metrics, respectively. The scheduling problem, referred to as the Open Path Problem (OPP), is equivalent to finding a shortest Hamiltonian path with a specified end point in a complete graph with n vertices. We first show in this paper that there exists a natural isometry between the $L_1 $ and $L_\infty $ metrics. Consequently, the existence of a polynomial time algorithm for the OPP in one metric implies the existence of a polynomial time algorithm for the same problem in the other metric. Based on a result by Garey, Graham and Johnson, it is easy to show that the OPP in $L_1 $ (hence in $L_\infty $) me...
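The natural isometry in question is presumably the classical 45-degree change of coordinates T(x, y) = (x + y, x - y): the L1 distance between two points equals the L-infinity distance between their images, since max(|dx+dy|, |dx-dy|) = |dx| + |dy|. A short sketch verifying this identity (function names are illustrative):

```python
def rotate(p):
    # T(x, y) = (x + y, x - y): maps the L1 metric to the L_inf metric.
    x, y = p
    return (x + y, x - y)

def l1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def linf(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```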

Journal ArticleDOI
TL;DR: Two new algorithms are proposed for enumerating all the cutsets or all the s-t cutsets separating two specified vertices s and t in an undirected graph, and the performance of an older algorithm is examined, especially when a given graph is "dense."
Abstract: This paper deals with the problem of enumerating all the cutsets or all the s-t cutsets separating two specified vertices s and t in an undirected graph. A variety of approaches have been proposed for this problem, among which one based on the partition of a set of vertices into two sets is the most efficient. It is first shown that an algorithm of this type has time complexity O((n + m)(n log2 μ)μ), and two new algorithms with time complexity O((n + m)(μ + 1)) are then proposed. One of these new algorithms has space complexity O(n^2), and the other has space complexity O(n + m), where n and m are the numbers of vertices and edges, respectively, and μ is the number of s-t cutsets in a given graph. The results of some computational experiments are also described. An investigation is made of the extent to which the new algorithms are better, and of how good the performance of the old algorithm is, especially when a given graph is "dense," i.e., 2m/(n(n - 1)) ≥ 0.4.
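The vertex-partition viewpoint the paper builds on can be illustrated by brute force: for a connected graph, the minimal s-t cutsets are exactly the edge sets crossing a bipartition (S, V−S) with s in S, t outside, and both sides inducing connected subgraphs. The sketch below enumerates all 2^n bipartitions, so it is for illustration only, unlike the paper's algorithms whose time is polynomial per cutset:

```python
from itertools import combinations

def st_cutsets(n, edges, s, t):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def connected(nodes):
        nodes = set(nodes)
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend((adj[u] & nodes) - seen)
        return seen == nodes

    cuts = []
    others = [v for v in range(n) if v not in (s, t)]
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            side = {s, *extra}
            rest = set(range(n)) - side          # always contains t
            if connected(side) and connected(rest):
                cuts.append({(u, v) for u, v in edges
                             if (u in side) != (v in side)})
    return cuts
```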

Journal ArticleDOI
TL;DR: An implementation of the Reverse Cuthill-McKee (RCM) algorithm whose run-time complexity is proved to be linear in the number of nonzeros in the matrix is provided.
Abstract: The Reverse Cuthill-McKee (RCM) algorithm is a method for reordering a sparse matrix so that it has a small envelope. Given a starting node, we provide an implementation of the algorithm whose run-time complexity is proved to be linear in the number of nonzeros in the matrix. Numerical experiments are provided which compare the performance of the new implementation to a good conventional implementation.
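The RCM ordering itself is simple to state: breadth-first search from the starting node, visiting each node's unnumbered neighbors in order of increasing degree, then reversing the resulting order. A sketch (this per-node `sorted` call costs an extra log factor; the linear-time implementation the paper proves correct would bucket neighbors by degree instead):

```python
from collections import deque

def rcm_order(adj, start):
    # adj: dict mapping each vertex to an iterable of its neighbours.
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        # Visit unnumbered neighbours in order of increasing degree.
        for v in sorted(adj[u], key=lambda w: len(adj[w])):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order[::-1]  # "Reverse" Cuthill-McKee
```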

Journal ArticleDOI
TL;DR: A constructive algorithm is presented for solving systems of linear inequalities (LI) with at most two variables per inequality, with time complexity O(mn^3 I) on a random access machine.
Abstract: We present a constructive algorithm for solving systems of linear inequalities (LI) with at most two variables per inequality. The time complexity of the algorithm is $O(mn^3 I)$ on a random access machine, where m is the number of inequalities, n the number of variables, and I the size of the binary encoding of the input. The LI problem is of importance in complexity theory because it is polynomial time (Turing) equivalent to linear programming. The subclass of LI treated in this paper is of practical interest in mechanical verification systems.
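A well-known subclass of LI with two variables per inequality is the system of difference constraints x_j − x_i ≤ c, which is feasible exactly when its constraint graph has no negative cycle. The sketch below decides this subclass with Bellman-Ford; it is offered as an assumption-laden illustration of the problem family, not the paper's algorithm, which handles general coefficients on both variables:

```python
def feasible_difference_constraints(n, constraints):
    # Each constraint (i, j, c) encodes x_j - x_i <= c, i.e. an edge
    # i -> j of weight c. Feasible iff there is no negative cycle.
    dist = [0.0] * n  # simulates a virtual source with 0-weight edges
    for _ in range(n + 1):
        changed = False
        for i, j, c in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                changed = True
        if not changed:
            return True, dist   # dist satisfies every constraint
    return False, None          # still relaxing: negative cycle
```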

Journal ArticleDOI
TL;DR: The problem of finding a homeomorphic image of a "pattern" graph H in a larger input graph G is studied in this article, where the main result is a linear time algorithm to determine if there exists a simple cycle containing three given nodes in G (here H is a triangle).

Proceedings ArticleDOI
13 Oct 1980
TL;DR: The ellipsoid method for linear programming is applied to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities.
Abstract: We show that there can be no computationally tractable description by linear inequalities of the polyhedron associated with any NP-complete combinatorial optimization problem unless NP = co-NP -- a very unlikely event. We also apply the ellipsoid method for linear programming to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities.

01 Nov 1980
TL;DR: The major result presented in this dissertation is a polynomial time algorithm for a restricted case of the routing problem, which minimizes the area of a rectangle circumscribing the component and the wire paths.
Abstract: In this thesis, the problem of designing the layout of integrated circuits is examined. The layout of an integrated circuit specifies the positions on the chip of the functional components and of the wires interconnecting them. We use a general model under which components are represented by rectangles, and wires are represented by lines. This model can be applied to circuit components defined at any level of complexity, from a transistor to a programmable logic array (PLA). We focus on the standard decomposition of the layout problem into a placement problem and a routing problem. We examine problems encountered in layout design from the point of view of complexity theory. The general layout problem under our model is shown to be NP-complete. In addition, two problems encountered in a restricted version of the routing problem --channel routing-- are shown to be NP-complete. The analysis of heuristic algorithms for NP-complete problems is discussed, and the analysis of one common algorithm is presented. The major result presented in this dissertation is a polynomial time algorithm for a restricted case of the routing problem. Given one rectangular component with terminals on its boundary, and pairs of terminals to be connected, the algorithm will find a two-layer channel routing which minimizes the area of a rectangle circumscribing the component and the wire paths. Each terminal can appear in only one pair of terminals to be connected, and the rectangle used to determine the area must have its boundaries parallel to those of the component. If any of the conditions of the problem are removed, the algorithm is no longer guaranteed to find the optimal solution.

Journal ArticleDOI
Gostelow, Thomas
TL;DR: It is shown that a dataflow machine can automatically unfold the nested loops of n × n matrix multiply to reduce its time complexity from O(n^3) to O(n), so long as sufficient processors and communication capacity are available.
Abstract: Our goal is to devise a computer comprising large numbers of cooperating processors (LSI). In doing so we reject the sequential and memory cell semantics of the von Neumann model, and instead adopt the asynchronous and functional semantics of dataflow. We briefly describe the high-level dataflow programming language Id, as well as an initial design for a dataflow machine and the results of detailed deterministic simulation experiments on a part of that machine. For example, we show that a dataflow machine can automatically unfold the nested loops of n × n matrix multiply to reduce its time complexity from O(n^3) to O(n), so long as sufficient processors and communication capacity are available. Similarly, quicksort executes in average O(n) time, demanding O(n) processors. Also discussed is the use of processor and communication time complexity analysis and "flow analysis" as aids in understanding the behavior of the machine.

Journal ArticleDOI
TL;DR: L. G. Khachiyan's polynomial time algorithm for determining whether a system of linear inequalities is satisfiable is presented, together with a proof of its validity; the algorithm can be used to solve linear programs in polynomial time.

Proceedings ArticleDOI
01 Jul 1980
TL;DR: This paper presents a new hidden surface algorithm that overlays a grid on the screen whose fineness depends on the number and size of the faces, and is as accurate as the arithmetic precision of the computer.
Abstract: This paper presents a new hidden surface algorithm. Its output is the set of the visible pieces of edges and faces, and is as accurate as the arithmetic precision of the computer. Thus calculating the hidden surfaces for a higher resolution device takes no more time. If the faces are independently and identically distributed, then the execution time is linear in the number of faces. In particular, the execution time does not increase with the depth complexity. This algorithm overlays a grid on the screen whose fineness depends on the number and size of the faces. Edges and faces are sorted into grid cells. Only objects in the same cell can intersect or hide each other. Also, if a face completely covers a cell then nothing behind it in the cell is relevant. Three programs have tested this algorithm. The first verified the variable grid concept on 50,000 intersecting edges. The second verified the linear time, fast speed, and irrelevance of depth complexity for hidden lines on 10,000 spheres. This also tested depth complexities up to 30, and showed that perspective scenes with the farther objects smaller are even faster to calculate. The third verified this for hidden surfaces on 3000 squares.

Journal ArticleDOI
TL;DR: A notion of LP-completeness is introduced, a set of problems is shown to be (polynomially) equivalent to linear programming, and a transformation is given to produce NP-complete versions of LP-complete problems.

Book ChapterDOI
Tatsuo Ohtsuki
24 Oct 1980
TL;DR: A new polynomial time algorithm is presented for finding two vertex-disjoint paths between two specified pairs of vertices on an undirected graph and its application to the automatic wire routing is discussed.
Abstract: A new polynomial time algorithm is presented for finding two vertex-disjoint paths between two specified pairs of vertices on an undirected graph. An application of the two disjoint path algorithm to the automatic wire routing is also discussed.

Proceedings ArticleDOI
13 Oct 1980
TL;DR: An algorithm for a special case of wire routing given a rectangular circuit component on a planar surface with terminals around its boundary, the algorithm finds an optimal set of paths in the plane connecting specified pairs of terminals.
Abstract: In this paper we present an algorithm for a special case of wire routing. Given a rectangular circuit component on a planar surface with terminals around its boundary, the algorithm finds an optimal set of paths in the plane connecting specified pairs of terminals. The paths are restricted to lie on the outside of the component and must consist of line segments orthogonal to the sides of the component. Paths may intersect at a point but may not overlap. The criterion for optimality is the area of a rectangle with sides orthogonal to those of the component which circumscribes the component and paths. The algorithm has running time O(t^3), where t is the number of terminals on the component.

Journal ArticleDOI
TL;DR: In this article, it was shown that the problem of finding a minimum $k$-basis, the $n$-center problem, and the $p$-median problem are NP-complete even in the case of such communication networks as planar graphs with maximum degree 3.
Abstract: It is shown that the problem of finding a minimum $k$-basis, the $n$-center problem, and the $p$-median problem are $NP$-complete even in the case of such communication networks as planar graphs with maximum degree 3. Moreover, a near optimal $m$-center problem is also $NP$-complete.

Journal ArticleDOI
TL;DR: In this article, the authors present an O(n log n) heuristic algorithm for the Rectilinear Steiner Minimal Tree (RSMT) problem, which is based on a decomposition approach which first partitions the vertex...
Abstract: This paper presents an O(n log n) heuristic algorithm for the Rectilinear Steiner Minimal Tree (RSMT) problem. The algorithm is based on a decomposition approach which first partitions the vertex ...
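A standard baseline for RSMT heuristics is the rectilinear (L1) minimum spanning tree, whose length is known to be at most 1.5 times the Steiner minimum. The O(n^2) Prim sketch below computes that baseline; it is an illustration of the problem, not the paper's O(n log n) decomposition heuristic:

```python
def l1_mst_length(points):
    # Prim's algorithm on the complete graph with rectilinear distances.
    n = len(points)
    d = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])
    in_tree = [False] * n
    best = [float('inf')] * n   # cheapest connection to the tree so far
    best[0] = 0
    total = 0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], d(points[u], points[v]))
    return total
```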

01 Nov 1980
TL;DR: This paper uses graph contraction arguments instead of bicolor interchange and improves both the sequential processing and batch processing methods to obtain five-coloring algorithms that operate in O(n) time.
Abstract: A "sequential processing" algorithm using bicolor interchange that five-colors an n vertex planar graph in $O(n^2)$ time was given by Matula, Marble, and Isaacson [1972]. Lipton and Miller used a "batch processing" algorithm with bicolor interchange for the same problem and achieved an improved O(n log n) time bound [1978]. In this paper we use graph contraction arguments instead of bicolor interchange and improve both the sequential processing and batch processing methods to obtain five-coloring algorithms that operate in O(n) time.