
Showing papers on "Time complexity published in 1982"


Proceedings ArticleDOI
01 Jan 1982
TL;DR: An iterative mincut heuristic for partitioning networks is presented whose worst case computation time, per pass, grows linearly with the size of the network.
Abstract: An iterative mincut heuristic for partitioning networks is presented whose worst case computation time, per pass, grows linearly with the size of the network. In practice, only a very small number of passes are typically needed, leading to a fast approximation algorithm for mincut partitioning. To deal with cells of various sizes, the algorithm progresses by moving one cell at a time between the blocks of the partition while maintaining a desired balance based on the size of the blocks rather than the number of cells per block. Efficient data structures are used to avoid unnecessary searching for the best cell to move and to minimize unnecessary updating of cells affected by each move.
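The move-selection loop described above can be sketched as follows (a simplified, quadratic-time illustration with invented names; the paper's bucket data structures are what make each pass linear in the network size):

```python
def cut_size(nets, side):
    # number of nets with cells in both blocks of the partition
    return sum(1 for net in nets if len({side[c] for c in net}) == 2)

def gain(nets, side, cell):
    # reduction in cut size if `cell` moves to the other block
    before = cut_size(nets, side)
    side[cell] ^= 1
    after = cut_size(nets, side)
    side[cell] ^= 1
    return before - after

def one_pass(nets, side, max_imbalance=2):
    # move each cell at most once, always taking the best feasible gain
    cells = sorted({c for net in nets for c in net})
    locked = set()
    for _ in cells:
        sizes = [sum(1 for s in side.values() if s == b) for b in (0, 1)]
        feasible = [c for c in cells if c not in locked and
                    abs((sizes[side[c] ^ 1] + 1) - (sizes[side[c]] - 1)) <= max_imbalance]
        if not feasible:
            break
        best = max(feasible, key=lambda c: gain(nets, side, c))
        if gain(nets, side, best) <= 0:
            break
        side[best] ^= 1
        locked.add(best)
    return side
```

In the real algorithm the balance criterion uses cell sizes rather than cell counts, and gains are kept incrementally in buckets instead of being recomputed per candidate as here.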

2,463 citations


Proceedings ArticleDOI
05 May 1982
TL;DR: The pattern shown is that the expression complexity of the investigated languages is one exponential higher than their data complexity, and for both types of complexity the authors show completeness in some complexity class.
Abstract: Two complexity measures for query languages are proposed. Data complexity is the complexity of evaluating a query in the language as a function of the size of the database, and expression complexity is the complexity of evaluating a query in the language as a function of the size of the expression defining the query. We study the data and expression complexity of logical languages - relational calculus and its extensions by transitive closure, fixpoint and second order existential quantification - and algebraic languages - relational algebra and its extensions by bounded and unbounded looping. The pattern that emerges is that the expression complexity of the investigated languages is one exponential higher than their data complexity, and for both types of complexity we show completeness in some complexity class.

1,523 citations


Journal ArticleDOI
TL;DR: The sorted array of black nodes is referred to as the “linear quadtree” and it is shown that it introduces a saving of at least 66 percent of the computer storage required by regular quadtrees.
Abstract: A quadtree may be represented without pointers by encoding each black node with a quaternary integer whose digits reflect successive quadrant subdivisions. We refer to the sorted array of black nodes as the “linear quadtree” and show that it introduces a saving of at least 66 percent of the computer storage required by regular quadtrees. Some algorithms using linear quadtrees are presented, namely, (i) encoding a pixel from a 2^n × 2^n array (or screen) into its quaternary code; (ii) finding adjacent nodes; (iii) determining the color of a node; (iv) superposing two images. It is shown that algorithms (i)-(iii) can be executed in logarithmic time, while superposition can be carried out in linear time with respect to the total number of black nodes. The paper also shows that the dynamic capability of a quadtree can be effectively simulated.
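Algorithm (i), encoding a pixel into its quaternary code, amounts to interleaving the bits of the pixel's coordinates, most significant first. A minimal sketch, assuming the convention that the low bit of each digit comes from x and the high bit from y (the paper's exact digit labeling may differ):

```python
def quaternary_code(x, y, n):
    # interleave the n bits of (x, y), most significant first; each base-4
    # digit names one quadrant subdivision, so the loop runs n times,
    # i.e. logarithmic in the 2^n side length of the image
    digits = []
    for i in range(n - 1, -1, -1):
        digits.append(2 * ((y >> i) & 1) + ((x >> i) & 1))
    return digits
```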

717 citations


Proceedings ArticleDOI
03 Nov 1982
TL;DR: A linear-time algorithm is given for the classical problem of finding the smallest circle enclosing n given points in the plane, which disproves a conjecture by Shamos and Hoey that this problem requires Ω(n log n) time.
Abstract: Linear-time algorithms for linear programming in R^2 and R^3 are presented. The methods used are applicable to some other problems. For example, a linear-time algorithm is given for the classical problem of finding the smallest circle enclosing n given points in the plane. This disproves a conjecture by Shamos and Hoey that this problem requires Ω(n log n) time. An immediate consequence of the main result is that the problem of linear separability is solvable in linear time. This corrects an error in Shamos and Hoey's paper, namely, that their O(n log n) algorithm for this problem in the plane was optimal. Also, a linear-time algorithm is given for the problem of finding the weighted center of a tree, and algorithms for other common location-theoretic problems are indicated. The results also apply to the problem of convex quadratic programming in three dimensions. The results have already been extended to higher dimensions, and we know that linear programming can be solved in linear time when the dimension is fixed. This will be reported elsewhere; a preliminary report is available from the author.
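For intuition, the smallest enclosing circle is always determined by two or three of the input points, so a brute-force check over all candidate circles gives a correct, though O(n^4) and nowhere near linear-time, reference implementation; Megiddo's prune-and-search method is quite different:

```python
from itertools import combinations

def circle_two(p, q):
    # circle with segment pq as diameter
    cx, cy = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    r = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 / 2
    return (cx, cy, r)

def circle_three(p, q, r):
    # circumcircle of three points; None if they are collinear
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, ((ax - ux) ** 2 + (ay - uy) ** 2) ** 0.5)

def covers(c, pts, eps=1e-9):
    cx, cy, r = c
    return all((x - cx) ** 2 + (y - cy) ** 2 <= (r + eps) ** 2 for x, y in pts)

def smallest_enclosing_circle(pts):
    # try every 2- and 3-point candidate, keep the smallest that covers all
    cands = [circle_two(p, q) for p, q in combinations(pts, 2)]
    cands += [c for t in combinations(pts, 3)
              if (c := circle_three(*t)) is not None]
    return min((c for c in cands if covers(c, pts)), key=lambda c: c[2])
```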

575 citations


Journal ArticleDOI
Ashok K. Chandra, David Harel
TL;DR: In this article, a fixpoint query hierarchy is proposed to classify queries on relational data bases according to their structure and their computational complexity using the operations of composition and fixpoints, and a Σ-π hierarchy of height ω2 is defined, and its properties investigated.

559 citations


Journal ArticleDOI
TL;DR: It is proved that Ω(n log k) is a lower bound on the time required for any algorithm based on comparing array elements, so that the second algorithm is optimal.

540 citations


Journal ArticleDOI
TL;DR: In this article, the computational complexity of the capacitated lot size problem with a particular cost structure was studied, and several classes of problems solvable by polynomial time algorithms were identified, and efficient solution procedures were given.
Abstract: In this paper we study the computational complexity of the capacitated lot size problem with a particular cost structure that is likely to be used in practical settings. For the single item case new properties are introduced, classes of problems solvable by polynomial time algorithms are identified, and efficient solution procedures are given. We show that special classes are NP-hard, and that the problem with two items and independent setups is NP-hard under conditions similar to those where the single item problem is easy. Topics for further research are discussed in the last section.
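For the uncapacitated single-item case, the classical Wagner-Whitin dynamic program runs in polynomial time; the following is a minimal sketch of that textbook recursion (with constant setup and holding costs for brevity), not one of the procedures developed in this paper:

```python
def wagner_whitin(demand, setup_cost, hold_cost):
    # f[t] = minimum cost of satisfying demand for periods 0..t-1
    T = len(demand)
    INF = float("inf")
    f = [0.0] + [INF] * T
    for t in range(1, T + 1):
        for s in range(t):
            # last production run is in period s and covers periods s..t-1
            holding = sum(hold_cost * (j - s) * demand[j] for j in range(s, t))
            f[t] = min(f[t], f[s] + setup_cost + holding)
    return f[T]
```

The capacitated variant studied in the paper is NP-hard in general; this O(T^3) recursion only applies when capacities are absent.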

535 citations


Journal ArticleDOI
TL;DR: In this article, the problem is shown to be NP-hard in the strong sense; a tree search based on Schrage's schedule is used, and it is shown that the difference between the optimum and the Schrage schedule is less than d_1.

518 citations


Proceedings ArticleDOI
03 Nov 1982
TL;DR: It is proved that the LP relaxation of bin packing, which was solved efficiently in practice by Gilmore and Gomory, has membership in P, despite the fact that it has an astronomically large number of variables.
Abstract: We present several polynomial-time approximation algorithms for the one-dimensional bin-packing problem, using a subroutine to solve a certain linear programming relaxation of the problem. Our main results are as follows. There is a polynomial-time algorithm A such that A(I) ≤ OPT(I) + O(log^2 OPT(I)). There is a polynomial-time algorithm A such that, if m(I) denotes the number of distinct sizes of pieces occurring in instance I, then A(I) ≤ OPT(I) + O(log^2 m(I)). There is an approximation scheme which accepts as input an instance I and a positive real number ε, and produces as output a packing using at most (1 + ε) OPT(I) + O(ε^-2) bins; its execution time is O(ε^-c n log n), where c is a constant. These are the best asymptotic performance bounds that have been achieved to date for polynomial-time bin-packing. Each of our algorithms makes at most O(log n) calls on the LP relaxation subroutine and takes at most O(n log n) time for other operations. The LP relaxation of bin packing was solved efficiently in practice by Gilmore and Gomory; we prove its membership in P, despite the fact that it has an astronomically large number of variables.
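As a contrast to the LP-based algorithms above, the classical first-fit-decreasing heuristic is easy to state, though its additive guarantee is far weaker than the OPT(I) + O(log^2 OPT(I)) bound of the paper:

```python
def first_fit_decreasing(sizes, capacity=1.0):
    # sort pieces by decreasing size, place each in the first bin it fits
    bins = []
    for s in sorted(sizes, reverse=True):
        for b in bins:
            if sum(b) + s <= capacity + 1e-12:
                b.append(s)
                break
        else:
            bins.append([s])   # no existing bin fits: open a new one
    return bins
```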

509 citations


Proceedings ArticleDOI
03 Nov 1982
TL;DR: A more operative definition of Randomness should be pursued in the light of modern Complexity Theory.
Abstract: We give a set of conditions that allow one to generate 50–50 unpredictable bits. Based on those conditions, we present a general algorithmic scheme for constructing polynomial-time deterministic algorithms that stretch a short secret random input into a long sequence of unpredictable pseudo-random bits. We give an implementation of our scheme and exhibit a pseudo-random bit generator for which any efficient strategy for predicting the next output bit with better than 50–50 chance is easily transformable to an “equally efficient” algorithm for solving the discrete logarithm problem. In particular: if the discrete logarithm problem cannot be solved in probabilistic polynomial time, no probabilistic polynomial-time algorithm can guess the next output bit better than by flipping a coin: if “head” guess “0”; if “tail” guess “1”.
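The scheme can be sketched with toy parameters (the small prime p and base g below are purely illustrative; any real instance needs a large prime with g a generator of the multiplicative group, and the output predicate here is a stand-in for the paper's hard-core predicate):

```python
def blum_micali(seed, nbits, p=1019, g=2):
    # stretch a short secret seed into nbits pseudo-random bits by
    # iterating modular exponentiation; predicting the next bit would
    # require progress on the discrete logarithm problem (for real,
    # cryptographically sized parameters)
    x, out = seed, []
    for _ in range(nbits):
        x = pow(g, x, p)                          # next internal state
        out.append(1 if x < (p - 1) // 2 else 0)  # one output bit per step
    return out
```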

232 citations


Proceedings ArticleDOI
05 May 1982
TL;DR: Two polynomial time algorithms are described which test isomorphism of undirected graphs whose eigenvalues have bounded multiplicity.
Abstract: We investigate the connection between the spectrum of a graph, i.e. the eigenvalues of the adjacency matrix, and the complexity of testing isomorphism. In particular we describe two polynomial time algorithms which test isomorphism of undirected graphs whose eigenvalues have bounded multiplicity. If X and Y are graphs of eigenvalue multiplicity m, then the isomorphism of X and Y can be tested by an O(n^{4m+c}) deterministic and by an O(n^{2m+c}) Las Vegas algorithm, where n is the number of vertices of X and Y.

Journal ArticleDOI
Danny Dolev, Maria Klawe, Michael Rodeh
TL;DR: In this paper, a simple unidirectional algorithm was proposed to determine the maximum number in a distributive manner, in which the number of messages passed is bounded by 1.356 n log n + O(n).

Journal ArticleDOI
TL;DR: It is argued that conglomerates include all parallel machines which could feasibly be built with fixed connections, and a universal structure is developed which can simulate any other basic interconnection pattern within linear time.
Abstract: A number of different models of synchronous, unbounded parallel computers have appeared in the literature. Without exception, running time on these models has been shown to be polynomially related to the classical space complexity measure. The general applicability of this relationship is called the parallel computation thesis, and evidence of its truth is given in this paper by introducing a class of parallel machines called conglomerates. It is argued that conglomerates include all parallel machines which could feasibly be built with fixed connections. Basic interconnection patterns are also investigated in an attempt to pin down the notion of parallel time to within a constant factor. To this end, a universal structure is developed which can simulate any other basic interconnection pattern within linear time. This approach leads to fair estimates of instruction execution times for various parallel models.

Journal ArticleDOI
TL;DR: It is shown that an attribute system can be translated (in a certain way) into a recursive program scheme if and only if it is strongly noncircular, which is decidable in polynomial time.

Journal ArticleDOI
TL;DR: This correspondence analyzes the computational complexity of fault detection problems for combinational circuits and proposes an approach to design for testability; it is shown that these problems remain NP-complete for k-level (k ≥ 3) monotone/unate circuits but are solvable in polynomial time for 2-level monotone/unate circuits.
Abstract: In this correspondence we analyze the computational complexity of fault detection problems for combinational circuits and propose an approach to design for testability. Although major fault detection problems have been known to be in general NP-complete, they were proven for rather complex circuits. In this correspondence we show that these are still NP-complete even for monotone circuits, and thus for unate circuits. We show that for k-level (k ≥ 3) monotone/unate circuits these problems are still NP-complete, but that these are solvable in polynomial time for 2-level monotone/unate circuits. A class of circuits for which these fault detection problems are solvable in polynomial time is presented. Ripple-carry adders, decoder circuits, linear circuits, etc., belong to this class. A design approach is also presented in which an arbitrary given circuit is changed to such an easily testable circuit by inserting a few additional test-points.

Journal ArticleDOI
TL;DR: The temporal propositional logic of linear time is generalized to an uncertain world, in which random events may occur, and three different axiomatic systems are proposed and shown complete for general models, finite models, and models with bounded transition probabilities, respectively.
Abstract: The temporal propositional logic of linear time is generalized to an uncertain world, in which random events may occur. The formulas do not mention probabilities explicitly, i.e., the only probability appearing explicitly in formulas is probability one. This logic is claimed to be useful for stating and proving properties of probabilistic programs. It is convenient for proving those properties that do not depend on the specific distribution of probabilities used in the program's random draws. The formulas describe properties of execution sequences. The models are stochastic systems, with state transition probabilities. Three different axiomatic systems are proposed and shown complete for general models, finite models, and models with bounded transition probabilities, respectively. All three systems are decidable, by the results of Rabin ( Trans. Amer. Math. Soc. 141 (1969), 1–35).

Journal ArticleDOI
TL;DR: It is shown that the problem is log space complete for deterministic polynomial time, so the maximum flow problem probably has no algorithm which needs only O(log^k n) storage space for any constant k.

Journal ArticleDOI
TL;DR: In this paper, a new implicit enumeration algorithm for the solution of the 0-1 knapsack problem, denoted by FPK 79, is proposed, and the implementation of the associated FORTRAN IV subroutine is then described.
Abstract: A new implicit enumeration algorithm for the solution of the 0–1 knapsack problem — denoted by FPK 79 — is proposed. The implementation of the associated FORTRAN IV subroutine is then described. Computational results prove the efficiency of this algorithm (practically linear time complexity including the initial arrangement of the data) whose performance is generally better than that of algorithm 37 and thus superior to that of the best known algorithms.
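For comparison, the textbook O(n · capacity) dynamic program for the 0–1 knapsack problem (not the implicit enumeration scheme FPK 79 described above) is:

```python
def knapsack_01(values, weights, capacity):
    # best[c] = maximum value achievable with total weight at most c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

This pseudo-polynomial bound depends on the capacity, which is why branch-and-bound methods such as the paper's can be much faster in practice on instances with large coefficients.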

Journal ArticleDOI
TL;DR: The ellipsoid method for linear programming is applied to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities.
Abstract: We show that there can be no computationally tractable description by linear inequalities of the polyhedron associated with any NP-complete combinatorial optimization problem unless NP = co-NP, a very unlikely event. We also apply the ellipsoid method for linear programming to show that a combinatorial optimization problem is solvable in polynomial time if and only if it admits a small generator of violated inequalities.

Journal ArticleDOI
TL;DR: A method is described to insert points in a quad-tree while keeping the tree balanced, achieving an average time complexity of O(log^2 N) per insertion, where N is the number of updates performed on the quad-tree.
Abstract: Quad-trees and k-d trees have been noted for their lack of dynamic properties as data structures for multi-dimensional point sets. We describe a method to insert points in a quad-tree while keeping the tree balanced that achieves an average time complexity of O(log^2 N) per insertion, where N is the number of updates performed on the quad-tree. We define a structure similar to a quad-tree, called a pseudo quad-tree, and show how it can be used to handle both insertions and deletions in O(log^2 N) average time. We also discuss how quad-trees and pseudo quad-trees can be extended for use in configurations of points in which more than one point may have the same value in some coordinate, without altering the earlier time bounds for insertions, deletions and queries. Similar algorithms are given for k-d trees and the same average time bounds for insertion and deletion are achieved.
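A plain, unbalanced point quad-tree insertion looks as follows; the paper's contribution is precisely the rebalancing machinery that this sketch omits (the quadrant labeling is an invented convention):

```python
class QuadNode:
    def __init__(self, point):
        self.point = point
        self.child = {}   # quadrant label -> QuadNode

def insert(root, point):
    # descend by comparing against each node's point in both coordinates;
    # without rebalancing, a bad insertion order degrades this to O(N)
    if root is None:
        return QuadNode(point)
    node = root
    while True:
        q = (point[0] >= node.point[0], point[1] >= node.point[1])
        if q not in node.child:
            node.child[q] = QuadNode(point)
            return root
        node = node.child[q]
```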

Journal ArticleDOI
TL;DR: Let R ⊆ NP be the collection of languages L such that, for some polynomial time computable predicate P(x,y) and constant k, L = {x | P(x,y) holds for at least half of the values y with |y| = |x|^k}; let R^A, U^A, P^A, NP^A, co-NP^A be the relativizations of these classes with respect to an oracle A.
Abstract: Let R ⊆ NP be the collection of languages L such that, for some polynomial time computable predicate P(x,y) and constant k, L = {x | P(x,y) holds for at least half of the values y with |y| = |x|^k}.

Journal ArticleDOI
Zvi Galil
TL;DR: An algorithm that constructs for a given set of (functional and multivalued) dependencies Σ and a set of attributes X, the dependency basis of X, and tests whether a dependency σ is implied by Σ in time O(min(k,log p) |Σ|, whenever all the dependencies in ΣU{σ} are functional dependencies.
Abstract: We describe an algorithm that constructs, for a given set of (functional and multivalued) dependencies Σ and a set of attributes X, the dependency basis of X. The algorithm runs in time O(min(k, log p)|Σ|), where p is the number of sets in the dependency basis of X and k is the number of dependencies in Σ. A variant of the algorithm tests whether a dependency σ is implied by Σ in time O(min(k, log p)|Σ|), where p is the number of sets in the dependency basis of the left-hand side of σ that intersect the right-hand side of σ. Whenever all the dependencies in Σ ∪ {σ} are functional dependencies, these algorithms run in linear time.
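For the special case of functional dependencies only, implication testing reduces to the classical attribute-closure computation, a simpler relative of the dependency-basis algorithm described above (this naive version is quadratic, not the O(min(k, log p)|Σ|) bound of the paper):

```python
def attribute_closure(attrs, fds):
    # fds: list of (lhs, rhs) pairs of attribute sets; repeatedly apply
    # every dependency whose left-hand side is already in the closure
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= closure and not set(rhs) <= closure:
                closure |= set(rhs)
                changed = True
    return closure

def implies(fds, lhs, rhs):
    # Σ implies lhs -> rhs iff rhs is contained in the closure of lhs
    return set(rhs) <= attribute_closure(lhs, fds)
```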

Journal ArticleDOI
Siegel
TL;DR: The "time/space/inter-processor-transfer" complexities of the two algorithm approaches are analyzed in order to quantify the differences resulting from the two -strategies.
Abstract: Image correlation is representative of a wide variety of window-based image processing tasks. The way in which multimicroprocessor systems (e.g., PASM) can use SIMD parallelism to perform image correlation is examined. Two fundamental algorithm strategies are explored. In one approach, all of the data that will be needed in a processor are transferred to the processor and operated on there. In the other, each processor performs all possible operations on its local data, generating partial results which are then transferred to the processor in which they are needed. The "time/space/inter-processor-transfer" complexities of the two algorithm approaches are analyzed in order to quantify the differences resulting from the two strategies. For both approaches, the asymptotic time complexity of the N-processor SIMD algorithms is (1/N)th that of the corresponding serial algorithms.

Journal ArticleDOI
TL;DR: A polynomial time algorithm is given that finds a min-cut linear arrangement of trees whose cost is within a factor of 2 of optimal.
Abstract: The min-cut linear arrangement problem is one of several one-dimensional layout problems for undirected graphs that may be of relevance to VLSI design. This paper gives a polynomial time algorithm that finds a min-cut linear arrangement of trees whose cost is within a factor of 2 of optimal. For complete m-ary trees a linear time algorithm is given that finds an optimum min-cut linear arrangement.

Proceedings ArticleDOI
01 Jul 1982
TL;DR: This paper shows that the “wire frame” problem is equivalent to finding the embedding of a graph on a closed orientable surface, which satisfies all the topological properties of physical volumes.
Abstract: The design of complex geometric models has been and will continue to be one of the limiting factors in computer graphics. A careful enumeration of the properties of topologically correct models, so that they may be automatically enforced, can greatly speed this process. An example of the problems inherent in these methods is the “wire frame” problem, the automatic generation of a volume model from an edge-vertex graph. The solution to this problem has many useful applications in geometric modelling and scene recognition. This paper shows that the “wire frame” problem is equivalent to finding the embedding of a graph on a closed orientable surface. Such an embedding satisfies all the topological properties of physical volumes. Unfortunately graph embeddings are not necessarily unique. But when we restrict the embedding surface so that it is equivalent to a sphere, and require that the input graph be three-connected, the resulting object is unique. Given these restrictions there exists a linear time algorithm to automatically convert the “wire frame” to the winged edge representation, a very powerful data structure. Applications of this algorithm are discussed and several examples shown.

Journal ArticleDOI
TL;DR: This paper describes an algorithm to construct, for each expression in a given program text, a symbolic expression whose value is equal to the value of the text expression for all executions of the program.
Abstract: This paper describes an algorithm to construct, for each expression in a given program text, a symbolic expression whose value is equal to the value of the text expression for all executions of the program. We call such a mapping from text expressions to symbolic expressions a cover. Covers are useful in such program optimization techniques as constant propagation and code motion. The particular cover constructed by our methods is in general weaker than the covers obtainable by the methods of [Ki], [FKU], [RL], [R2] but our method has the advantage of being very efficient. It requires O(mα(m,n) + l) operations if extended bit vector operations have unit cost, where n is the number of vertices in the control flow graph of the program, m is the number of edges, l is the length of the program text, and α is related to a functional inverse of Ackermann's function [T2]. Our method does not require that the program be well-structured nor that the flow graph be reducible.

Journal ArticleDOI
TL;DR: It is shown that deciding whether two distant agents can arrive at compatible decisions without any communication can be done in polynomial time if there are two possible decisions for each agent, but is NP-complete if one agent has three or more alternatives.
Abstract: The complexity of two problems of distributed computation and decision-making is studied. It is shown that deciding whether two distant agents can arrive at compatible decisions without any communication can be done in polynomial time if there are two possible decisions for each agent, but is NP-complete if one agent has three or more alternatives. It is also shown that minimizing the amount of communication necessary for the distributed computation of a function, when two distant computers receive each a part of the input, is NP-complete. This proves a conjecture due to A. Yao.

Journal ArticleDOI
Clyde L. Monma1
TL;DR: Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints, which improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints.
Abstract: Linear-time algorithms are presented for several problems of scheduling n equal-length tasks on m identical parallel processors subject to precedence constraints. This improves upon previous time bounds for the maximum lateness problem with treelike precedence constraints, the number-of-late-tasks problem without precedence constraints, and the one machine maximum lateness problem with general precedence constraints.

Journal ArticleDOI
TL;DR: In this paper, a polynomial algorithm is proposed to detect whether two communicating finite state machines can reach a deadlock in O(m^3 n^3) time.
Abstract: Let M and N be two communicating finite state machines which exchange one type of message. We develop a polynomial algorithm to detect whether or not M and N can reach a deadlock. The time complexity of the algorithm is O(m^3 n^3) and its space complexity is O(mn), where m and n are the numbers of states in M and N, respectively. The algorithm can also be used to verify that two communicating machines which exchange many types of messages are deadlock-free.
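A product-state reachability search conveys the idea; this sketch bounds the channels by a constant, an assumption the paper's polynomial algorithm does not need, and the rule encoding ("!" for send, "?" for receive) is invented for illustration:

```python
from collections import deque

def deadlock_reachable(m_rules, n_rules, start, bound=3):
    # m_rules / n_rules map a local state to a list of (action, next_state);
    # with one message type, each channel is just a counter
    def moves(ms, ns, c_mn, c_nm):
        for act, nxt in m_rules.get(ms, []):
            if act == "!" and c_mn < bound:
                yield nxt, ns, c_mn + 1, c_nm
            elif act == "?" and c_nm > 0:
                yield nxt, ns, c_mn, c_nm - 1
        for act, nxt in n_rules.get(ns, []):
            if act == "!" and c_nm < bound:
                yield ms, nxt, c_mn, c_nm + 1
            elif act == "?" and c_mn > 0:
                yield ms, nxt, c_mn - 1, c_nm
    seen = {(*start, 0, 0)}
    queue = deque([(*start, 0, 0)])
    while queue:
        state = queue.popleft()
        succ = list(moves(*state))
        if not succ and state[2] == 0 and state[3] == 0:
            return True   # both machines blocked on empty channels
        for s in succ:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return False
```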

Proceedings ArticleDOI
03 Nov 1982
TL;DR: A subclass of so-called well-formed layout specifications is defined; each well-formed specification has a concise layout that can be hierarchically described in linear space and found in polynomial time, although it is in general not a minimum area layout.
Abstract: In many CAD systems for VLSI design the specification of a layout is internally represented by a set of geometric constraints that take the form of linear inequalities between pairs of layout components. Some of the constraints may be explicitly stated by the circuit designer. Others are internally generated by the CAD system, using the design rules of the fabrication process. Layout compaction is then equivalent to finding a minimum area layout satisfying all constraints. We discuss the complexity of the constraint resolution problem arising in this context. Hereby we allow circuits to be specified hierarchically. The complexity of the constraint resolution is then measured in terms of the length of the hierarchical specification. We show the following results: 1. It is decidable in polynomial (cubic) time whether a given hierarchical layout specification yields a consistent set of geometric constraints. The size of minimum area layouts satisfying the constraints can also be determined in cubic time. 2. For every layout specification that is consistent a hierarchical description L of a minimum area layout can be computed in polynomial time in the length of L. 3. There is a consistent layout specification with the following property: No layout satisfying the constraints is concise, i.e., every hierarchical layout description consistent with the specification has a length which grows exponentially in the length of the specification. 4. We define a subclass of so-called well-formed layout specifications. Each well-formed specification has a concise layout, which can be hierarchically described in linear space. Such a layout can be found in polynomial time. However, it is in general not a minimum area layout. Indeed, there is a consistent well-formed specification all of whose minimum area layouts are inconcise, i.e., need exponential space to be described.