
Showing papers on "Time complexity published in 1985"


Proceedings ArticleDOI
21 Oct 1985
TL;DR: An automata-theoretic approach is described, whereby probabilistic quantification over sets of computations is reduced to standard quantification over individual computations; a new determinization construction for ω-automata is used to improve the time complexity of the algorithm by two exponentials.
Abstract: The verification problem for probabilistic concurrent finite-state programs is to decide whether such a program satisfies its linear temporal logic specification. We describe an automata-theoretic approach, whereby probabilistic quantification over sets of computations is reduced to standard quantification over individual computations. Using a new determinization construction for ω-automata, we manage to improve the time complexity of the algorithm by two exponentials. The time complexity of the final algorithm is polynomial in the size of the program and doubly exponential in the size of the specification.

814 citations


Journal ArticleDOI
21 Oct 1985
TL;DR: A natural class PLS is defined consisting essentially of those local search problems for which local optimality can be verified in polynomial time, and it is shown that there are complete problems for this class.
Abstract: We investigate the complexity of finding locally optimal solutions to NP-hard combinatorial optimization problems. Local optimality arises in the context of local search algorithms, which try to find improved solutions by considering perturbations of the current solution (“neighbors” of that solution). If no neighboring solution is better than the current solution, it is locally optimal. Finding locally optimal solutions is presumably easier than finding optimal solutions. Nevertheless, many popular local search algorithms are based on neighborhood structures for which locally optimal solutions are not known to be computable in polynomial time, either by using the local search algorithms themselves or by taking some indirect route. We define a natural class PLS consisting essentially of those local search problems for which local optimality can be verified in polynomial time, and show that there are complete problems for this class. In particular, finding a partition of a graph that is locally optimal with respect to the well-known Kernighan-Lin algorithm for graph partitioning is PLS-complete, and hence can be accomplished in polynomial time only if local optima can be found in polynomial time for all local search problems in PLS.
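To make the PLS setting concrete, here is a small illustrative sketch (not the Kernighan-Lin neighborhood the paper studies, but a simpler swap neighborhood on a balanced partition): verifying local optimality amounts to scanning all O(n^2) neighboring swaps, which is clearly polynomial.

```python
# Hedged illustration of a PLS-style local search problem: balanced graph
# partitioning under a "swap one pair" neighborhood (simpler than the
# Kernighan-Lin neighborhood). Local optimality is verified by scanning
# all O(n^2) swaps, i.e., in polynomial time.

def cut_weight(edges, side):
    """Number of edges crossing the partition (unit weights)."""
    return sum(1 for u, v in edges if side[u] != side[v])

def local_search_partition(n, edges):
    """Split vertices 0..n-1 into two equal halves; repeatedly apply any
    improving swap of two vertices on opposite sides, minimizing the cut."""
    side = {v: v % 2 for v in range(n)}          # arbitrary balanced start
    improved = True
    while improved:
        improved = False
        best = cut_weight(edges, side)
        for u in range(n):
            for v in range(u + 1, n):
                if side[u] == side[v]:
                    continue                     # swap keeps balance
                side[u], side[v] = side[v], side[u]
                if cut_weight(edges, side) < best:
                    improved = True
                    break                        # accept first improvement
                side[u], side[v] = side[v], side[u]  # revert
            if improved:
                break
    return side  # locally optimal: no single swap reduces the cut
```

Note the contrast the paper draws: each local-optimality check is cheap, but the number of improving steps before a local optimum is reached is not known to be polynomial in general.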

792 citations


Proceedings ArticleDOI
01 Jan 1985
TL;DR: An algorithm for checking satisfiability of a linear time temporal logic formula over a finite state concurrent program and a formal proof in case the formula is valid over the program is presented.
Abstract: We present an algorithm for checking satisfiability of a linear time temporal logic formula over a finite state concurrent program. The running time of the algorithm is exponential in the size of the formula but linear in the size of the checked program. The algorithm also yields a formal proof in case the formula is valid over the program. The algorithm has four versions that check satisfiability by unrestricted, impartial, just and fair computations of the given program.

731 citations


Journal ArticleDOI
TL;DR: The time complexity of several node, arc and path consistency algorithms is analyzed and it is proved that arc consistency is achievable in time linear in the number of binary constraints.
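For concreteness, here is a sketch of the classic AC-3 arc consistency procedure. This is a hedged illustration only: AC-3's worst-case bound is higher than the bound linear in the number of binary constraints that the paper establishes, which requires finer bookkeeping of supports.

```python
# Sketch of AC-3 arc consistency (illustrative; not the paper's
# linear-in-constraints algorithm). Constraints are given explicitly as
# {(x, y): set of allowed (vx, vy) pairs}, with both arc directions present.
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x with no support in y; return True if changed."""
    allowed = constraints[(x, y)]
    removed = False
    for vx in list(domains[x]):
        if not any((vx, vy) in allowed for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """Enforce arc consistency in place; return False on a domain wipe-out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False                     # inconsistent network
            for (u, v) in constraints:
                if v == x and u != y:
                    queue.append((u, v))         # recheck arcs into x
    return True
```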

701 citations


Journal ArticleDOI
TL;DR: An attempt is made to identify important subclasses of NC and give interesting examples in each subclass, and a new problem complete for deterministic polynomial time is given, namely, finding the lexicographically first maximal clique in a graph.
Abstract: The class NC consists of problems solvable very fast (in time polynomial in log n) in parallel with a feasible (polynomial) number of processors. Many natural problems in NC are known; in this paper an attempt is made to identify important subclasses of NC and give interesting examples in each subclass. The notion of NC^1-reducibility is introduced and used throughout (problem R is NC^1-reducible to problem S if R can be solved with uniform log-depth circuits using oracles for S). Problems complete with respect to this reducibility are given for many of the subclasses of NC. A general technique, the "parallel greedy algorithm," is identified and used to show that finding a minimum spanning forest of a graph is reducible to the graph accessibility problem and hence is in NC^2 (solvable by uniform Boolean circuits of depth O(log^2 n) and polynomial size). The class LOGCFL is given a new characterization in terms of circuit families. The class DET of problems reducible to integer determinants is defined and many examples given. A new problem complete for deterministic polynomial time is given, namely, finding the lexicographically first maximal clique in a graph. This paper is a revised version of S. A. Cook (1983, in "Proceedings 1983 Intl. Found. Comput. Sci. Conf.," Lecture Notes in Computer Science Vol. 158, pp. 78–93, Springer-Verlag, Berlin/New York).
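The membership of reachability-style problems in NC^2 rests on repeated Boolean matrix squaring: about log2(n) squarings compute the transitive closure, and each squaring is itself highly parallel. A sequential simulation sketch (illustrating the idea only, not the paper's reductions):

```python
# Transitive closure via repeated Boolean matrix squaring: the standard
# NC^2 picture of the graph accessibility problem, simulated sequentially.
def reachability_by_squaring(adj):
    """adj is an n x n 0/1 matrix; returns the reachability matrix.
    Each of the ~log2(n) squarings would be one shallow parallel round."""
    n = len(adj)
    reach = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    for _ in range(max(1, n.bit_length())):      # ~log2 n squarings suffice
        reach = [[any(reach[i][k] and reach[k][j] for k in range(n))
                  for j in range(n)] for i in range(n)]
    return reach
```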

686 citations


Journal ArticleDOI
TL;DR: This paper presents a linear time algorithm for recognizing cographs and constructing their cotree representation; using the cotree, it is possible to design very fast polynomial time algorithms for problems which are intractable for graphs in general.
Abstract: Cographs are the graphs formed from a single vertex under the closure of the operations of union and complement. Another characterization of cographs is that they are the undirected graphs with no induced paths on four vertices. Cographs arise naturally in such application areas as examination scheduling and automatic clustering of index terms. Furthermore, it is known that cographs have a unique tree representation called a cotree. Using the cotree it is possible to design very fast polynomial time algorithms for problems which are intractable for graphs in general. Such problems include chromatic number, clique determination, clustering, minimum weight domination, isomorphism, minimum fill-in and Hamiltonicity. In this paper we present a linear time algorithm for recognizing cographs and constructing their cotree representation.
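The recursive characterization (a cograph on at least two vertices is disconnected, or its complement is, and the pieces are again cographs) yields a simple recognizer. The sketch below illustrates the definition only; it is not the paper's linear time algorithm.

```python
# Illustrative cograph recognizer from the union/complement definition
# (not linear time). adj maps each vertex to its set of neighbors.
def components(vertices, adj):
    """Connected components of the subgraph induced by `vertices`."""
    vs, comps = set(vertices), []
    while vs:
        stack, comp = [vs.pop()], set()
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v] & vs:
                vs.discard(w)
                stack.append(w)
        comps.append(comp)
    return comps

def is_cograph(vertices, adj):
    """True iff the graph decomposes to single vertices by alternately
    splitting into components of the graph or of its complement."""
    if len(vertices) <= 1:
        return True
    comps = components(vertices, adj)
    if len(comps) == 1:
        cadj = {v: (vertices - {v}) - adj[v] for v in vertices}
        ccomps = components(vertices, cadj)
        if len(ccomps) == 1:
            return False      # neither the graph nor its complement splits
        comps, adj = ccomps, cadj   # cographs are closed under complement
    return all(is_cograph(c, {v: adj[v] & c for v in c}) for c in comps)
```

A sanity check matching the abstract's characterization: the path on four vertices (an induced P4) is rejected, while its complement-free relatives such as C4 are accepted.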

642 citations


Journal ArticleDOI
TL;DR: A linear-time algorithm is given for the special case of the disjoint set union problem in which the structure of the unions (defined by a "union tree") is known in advance; this special case is useful in finding maximum cardinality matchings in nonbipartite graphs.
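For contrast, here is the classic general-case structure that the special case improves on: union-find with path compression and union by rank, which runs in near-linear (inverse-Ackermann) rather than truly linear time.

```python
# General-case union-find baseline (path compression + union by rank).
# The paper's result beats this when the union tree is known in advance.
class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        """Return the set representative of x, compressing the path."""
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # halve the path
            x = self.parent[x]
        return x

    def union(self, a, b):
        """Merge the sets containing a and b, shorter tree under taller."""
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```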

608 citations


Proceedings ArticleDOI
23 Oct 1985
TL;DR: Exponential lower bounds are presented on the size of depth-k Boolean circuits computing certain functions; these imply the existence of an oracle set A such that, relative to A, all levels of the polynomial-time hierarchy are distinct, i.e., ΣkP,A is properly contained in Σk+1P,A for all k.
Abstract: We present exponential lower bounds on the size of depth-k Boolean circuits for computing certain functions. These results imply that there exists an oracle set A such that, relative to A, all the levels in the polynomial-time hierarchy are distinct, i.e., ΣkP,A is properly contained in Σk+1P,A for all k.

522 citations


Journal ArticleDOI
TL;DR: This method gives a polynomial time attack on knapsack public-key cryptosystems that can be expected to break them if they transmit information at rates below d_c(n), as n → ∞.
Abstract: The subset sum problem is to decide whether or not the 0–1 integer programming problem ∑_{i=1}^{n} a_i·x_i = M, with x_i = 0 or 1 for all i, has a solution, where the a_i and M are given positive integers. This problem is NP-complete, and the difficulty of solving it is the basis of public-key cryptosystems of knapsack type. An algorithm is proposed that searches for a solution when given an instance of the subset sum problem. This algorithm always halts in polynomial time but does not always find a solution when one exists. It converts the problem to one of finding a particular short vector v in a lattice, and then uses a lattice basis reduction algorithm due to A. K. Lenstra, H. W. Lenstra, Jr., and L. Lovasz to attempt to find v. The performance of the proposed algorithm is analyzed. Let the density d of a subset sum problem be defined by d = n/log₂(max_i a_i). Then for "almost all" problems of density d
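A hedged sketch of the lattice construction in this style of attack: the scaling multiplier N and the omission of the LLL reduction step (which would search for the short vector) are assumptions of this illustration, not details taken from the paper.

```python
# Sketch of a Lagarias-Odlyzko style subset sum lattice. A 0/1 solution x
# yields the short lattice vector sum(x_i * b_i) - b_{n+1} = (x, 0).
# The LLL basis reduction step that hunts for this vector is omitted.
def subset_sum_lattice(a, M, N=1000):
    """Rows b_1..b_n are unit vectors with N*a_i appended;
    row b_{n+1} is all zeros with N*M in the last coordinate."""
    n = len(a)
    basis = [[1 if j == i else 0 for j in range(n)] + [N * a[i]]
             for i in range(n)]
    basis.append([0] * n + [N * M])
    return basis

def combine(basis, coeffs):
    """Integer combination of basis rows (a lattice vector)."""
    dim = len(basis[0])
    return [sum(c * row[j] for c, row in zip(coeffs, basis))
            for j in range(dim)]
```

With a = [3, 5, 8, 9] and M = 17, the solution x = (0, 0, 1, 1) corresponds to coefficients (0, 0, 1, 1, -1), whose combination is the short vector (0, 0, 1, 1, 0).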

460 citations


Book ChapterDOI
01 Jan 1985
TL;DR: In this paper, a local-ratio theorem for approximating the weighted vertex cover problem is presented; it consists of reducing the weights of vertices in certain subgraphs and has the effect of local approximation.
Abstract: A local-ratio theorem for approximating the weighted vertex cover problem is presented. It consists of reducing the weights of vertices in certain subgraphs and has the effect of local approximation. Combining the Nemhauser-Trotter local optimization algorithm with the local-ratio theorem yields several new approximation techniques which improve known results in time complexity, simplicity, and performance ratio. The main approximation algorithm guarantees a ratio of where K is the smallest integer s.t. † This is an improvement over the currently known ratios, especially for a "practical" number of vertices (e.g., for graphs which have fewer than 2400, 60000, or 10^12 vertices the ratio is bounded by 1.75, 1.8, and 1.9 respectively).

434 citations


Journal ArticleDOI
TL;DR: A quadratic time network algorithm is provided for computing an exact confidence interval for the common odds ratio in several 2×2 independent contingency tables, shown to be a considerable improvement on an existing algorithm developed by Thomas (1975), which relies on exhaustive enumeration.
Abstract: A quadratic time network algorithm is provided for computing an exact confidence interval for the common odds ratio in several 2×2 independent contingency tables. The algorithm is shown to be a considerable improvement on an existing algorithm developed by Thomas (1975), which relies on exhaustive enumeration. Problems that would formerly have consumed several CPU hours can now be solved in a few CPU seconds. The algorithm can easily handle sparse data sets where asymptotic results are suspect. The network approach, on which the algorithm is based, is also a powerful tool for exact statistical inference in other settings.

Journal ArticleDOI
TL;DR: The one-dimensional on-line bin-packing problem is considered, and a simple O(1)-space, O(n)-time algorithm called HARMONIC_M is presented; a revised O(n)-space version is shown to have a worst-case performance ratio of less than 1.636.
Abstract: The one-dimensional on-line bin-packing problem is considered. A simple O(1)-space and O(n)-time algorithm, called HARMONIC_M, is presented. It is shown that this algorithm can achieve a worst-case performance ratio of less than 1.692, which is better than that of the O(n)-space and O(n log n)-time algorithm FIRST FIT. Also shown is that 1.691… is a lower bound for all O(1)-space on-line bin-packing algorithms. Finally, a revised version of HARMONIC_M, an O(n)-space and O(n)-time algorithm, is presented and shown to have a worst-case performance ratio of less than 1.636.
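A sketch of the harmonic classification idea (the parameter M and the Next Fit handling of small items are illustrative assumptions): an item with size in (1/(k+1), 1/k] joins class k, and k class-k items share a bin, so only one open bin per class is kept, giving the O(1)-space behavior.

```python
# Sketch of a HARMONIC_M-style on-line packing scheme. Item sizes are
# assumed to lie in (0, 1]. Items of class k < M are packed k per bin;
# items of size at most 1/M fall into class M and are packed by Next Fit.
def harmonic_pack(items, M=6):
    bins = []                                    # closed bins
    open_bin = {}                                # class -> (bin, count/load)
    for s in items:
        k = min(M, int(1 // s))                  # harmonic class of s
        if k < M:
            b, cnt = open_bin.get(k, ([], 0))
            b.append(s)
            cnt += 1
            if cnt == k:                         # class-k bin is full
                bins.append(b)
                open_bin.pop(k, None)
            else:
                open_bin[k] = (b, cnt)
        else:                                    # small items: Next Fit
            b, load = open_bin.get(M, ([], 0.0))
            if load + s > 1:
                bins.append(b)
                b, load = [], 0.0
            b.append(s)
            open_bin[M] = (b, load + s)
    bins.extend(b for b, _ in open_bin.values())  # flush open bins
    return bins
```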

Journal ArticleDOI
John H. Reif1
TL;DR: It is shown that this problem, for undirected and directed graphs, is complete in deterministic polynomial time with respect to deterministic log-space reductions.

01 Sep 1985
TL;DR: In this paper, the authors consider the question whether there exists a logic that captures polynomial time (without presuming the presence of a linear order) and conjecture the negative answer.
Abstract: The chapter consists of two quite different parts. The first part is a survey (including some new results) on finite model theory. One particular point deserves special attention. In computer science, the standard computation model is the Turing machine, whose inputs are strings; other algorithm inputs are supposed to be encoded with strings. However, in combinatorics, database theory, etc., one usually does not distinguish between isomorphic structures (graphs, databases, etc.). For example, a database query should provide information about the database rather than its implementation. In such cases, there is a problem with string presentation of input objects: there is no known, easily computable string encoding of isomorphism classes of structures. Is there a computation model whose machines do not distinguish between isomorphic structures and compute exactly PTime properties? The question is intimately related to a question by Chandra and Harel in "Structure and complexity of relational queries", J. Comput. and System Sciences 25 (1982), 99-128. We formalize the question as the question whether there exists a logic that captures polynomial time (without presuming the presence of a linear order) and conjecture the negative answer. The first part is based on lectures given at the 1984 Udine Summer School on Computation Theory and summarized in the technical report "Logic and the Challenge of Computer Science", CRL-TR-10-85, Sep. 1985, Computing Research Lab, University of Michigan, Ann Arbor, Michigan. In the second part, we introduce a new computation model: evolving algebras (later renamed abstract state machines). This new approach to semantics of computations, and in particular to semantics of programming languages, emphasizes dynamic and resource-bounded aspects of computation. It is illustrated on the example of Pascal. The technical report mentioned above contained an earlier version of part 2. The final version was written in 1986.

Journal ArticleDOI
TL;DR: A model and algorithm are presented for finding consistent and realistic reorder intervals for each item in large-scale production-distribution systems; the optimal solution can be found using the proposed algorithm, which runs in polynomial time.
Abstract: The objective of this paper is to present a model and algorithm that can be used to find consistent and realistic reorder intervals for each item in large-scale production-distribution systems. We assume such systems can be represented by directed acyclic graphs. Demand for each end item is assumed to occur at a constant and continuous rate. Production is instantaneous and no backorders are allowed. Both fixed setup costs and echelon holding costs are charged at each stage. We limit our attention to nested and stationary policies. Furthermore, we restrict the reorder interval for each stage to be a power of 2 times a base planning period. The model that results from these assumptions is an integer nonlinear programming problem. The optimal solution can be found using the proposed algorithm, which is a polynomial time algorithm. A real world example is given to illustrate the procedure.
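The power-of-2 restriction on reorder intervals is attractive because rounding a continuous optimal interval to the best power-of-2 multiple of the base period costs little: for a cost of the form c(T) = K/T + h·T/2, the penalty is at most about 6% (the classic 1/√2 rounding bound). A sketch, in which the restriction to nonnegative powers is an assumption of this illustration:

```python
# Power-of-two policy sketch: round a continuous optimal reorder interval
# T* to the nearest power-of-two multiple of the base period (nearest in
# log scale, which minimizes the cost penalty for c(T) = K/T + h*T/2).
import math

def power_of_two_interval(T_star, base=1.0):
    """Best T = base * 2^k (k >= 0 assumed here) for a given T*."""
    k = max(0, round(math.log2(T_star / base)))
    return base * 2 ** k

def cycle_cost(T, K, h):
    """Average cost per unit time: fixed setups plus echelon holding."""
    return K / T + h * T / 2
```

For example, with K = 4.5 and h = 1 the unconstrained optimum is T* = 3; rounding to T = 4 raises the cost from 3.0 to 3.125, comfortably inside the ~6% bound.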

Proceedings Article
18 Aug 1985
TL;DR: Positive results are shown for significant subclasses that allow not only propositional predicates but also some relations; under certain restrictions on these subclasses, the learning algorithms are well suited to implementation on neural networks of threshold elements.
Abstract: The question of whether concepts expressible as disjunctions of conjunctions can be learned from examples in polynomial time is investigated. Positive results are shown for significant subclasses that allow not only propositional predicates but also some relations. The algorithms are extended so as to be provably tolerant to a certain quantifiable error rate in the examples data. It is further shown that under certain restrictions on these subclasses the learning algorithms are well suited to implementation on neural networks of threshold elements. The possible importance of disjunctions of conjunctions as a knowledge representation stems from the observations that, on the one hand, humans appear to like using it and, on the other, that there is circumstantial evidence that significantly larger classes may not be learnable in polynomial time. An NP-completeness result corroborating the latter is also presented.
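The simplest positive result of this kind is Valiant-style elimination for learning a single conjunction of literals from positive examples, sketched here as a hedged illustration (the subclasses of disjunctions of conjunctions treated in the paper are richer than this):

```python
# Valiant-style elimination for learning a conjunction of literals from
# positive examples: start with all 2n literals and delete any literal
# contradicted by a positive example. Time is polynomial in n and in the
# number of examples.
def learn_conjunction(positive_examples, n):
    """Each example is a tuple of n booleans satisfying the target."""
    literals = {(i, True) for i in range(n)} | {(i, False) for i in range(n)}
    for x in positive_examples:
        literals = {(i, val) for (i, val) in literals if x[i] == val}
    return literals

def predict(literals, x):
    """Hypothesis: the AND of all surviving literals."""
    return all(x[i] == val for (i, val) in literals)
```

For the target "x0 AND NOT x2" over three variables, two positive examples already pin the hypothesis down to exactly those two literals.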

Proceedings ArticleDOI
21 Oct 1985
TL;DR: Evidence is provided that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement; in particular, the problem is PSPACE-hard if B is given a velocity modulus bound on its movements.
Abstract: This paper investigates the computational complexity of planning the motion of a body B in 2-D or 3-D space, so as to avoid collision with moving obstacles of known, easily computed, trajectories. Dynamic movement problems are of fundamental importance to robotics, but their computational complexity has not previously been investigated. We provide evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement. In particular, we prove the problem is PSPACE-hard if B is given a velocity modulus bound on its movements and is NP-hard even if B has no velocity modulus bound, where in both cases B has 6 degrees of freedom. To prove these results we use a unique method of simulation of a Turing machine which uses time to encode configurations (whereas previous lower bound proofs in robotics used the system position to encode configurations and so required an unbounded number of degrees of freedom). We also investigate a natural class of dynamic problems which we call asteroid avoidance problems: B, the object we wish to move, is a convex polyhedron which is free to move by translation with bounded velocity modulus, and the polyhedral obstacles have known translational trajectories but cannot rotate. This problem has many applications to robot, automobile, and aircraft collision avoidance. Our main positive results are polynomial time algorithms for the 2-D asteroid avoidance problem with a bounded number of obstacles, as well as single exponential time and n^O(log n) space algorithms for the 3-D asteroid avoidance problem with an unbounded number of obstacles. Our techniques for solving these asteroid avoidance problems are novel in the sense that they are completely unrelated to previous algorithms for planning movement in the case of static obstacles.
We also give some additional positive results for various other dynamic movers' problems, and in particular give polynomial time algorithms for the case in which B has no velocity bounds and the movements of obstacles are algebraic in space-time.

Journal ArticleDOI
TL;DR: It is shown that the problem of determining if a CTL* formula is satisfiable in a structure generated by a binary relation is decidable in triple exponential time.
Abstract: In this paper the full branching time logic (CTL*) is studied. It has basic modalities consisting of a path quantifier, either A ("for all paths") or E ("for some path"), followed by an arbitrary linear time assertion composed of unrestricted combinations of the usual linear temporal operators F ("sometime"), G ("always"), X ("nexttime"), and U ("until"). It is shown that the problem of determining if a CTL* formula is satisfiable in a structure generated by a binary relation is decidable in triple exponential time. The decision procedure exploits the special structure of the finite state ω-automata for linear temporal formulae, which allows them to be determinized with only a single exponential blowup in size. Also, the expressive power of tree automata is compared with that of CTL* augmented by quantified auxiliary propositions.

Journal ArticleDOI
TL;DR: This work identifies polynomial time solvable special cases and derives good performance bounds for several natural approximation algorithms, assuming the existence of a central controller, and shows how these bounds can be maintained in a distributed regime.
Abstract: We consider a problem of scheduling file transfers in a network so as to minimize overall finishing time. Although the general problem is NP-complete, we identify polynomial time solvable special cases and derive good performance bounds for several natural approximation algorithms, assuming the existence of a central controller. We also show how these bounds can be maintained in a distributed regime.

Journal ArticleDOI
Chris N. Potts1
TL;DR: A heuristic method is presented which uses linear programming to form a partial schedule leaving at most m−1 jobs unscheduled; it has a (best possible) worst-case performance ratio of 2 and a computational requirement which is polynomial in n although exponential in m.

Journal ArticleDOI
TL;DR: It is proved that determining the ground state of a cluster of identical atoms, interacting under two-body central forces, belongs to the class of NP-hard problems, which means that as yet no polynomial time algorithm solving this problem is known and that it is very unlikely that such an algorithm exists.
Abstract: The authors prove that determining the ground state of a cluster of identical atoms, interacting under two-body central forces, belongs to the class of NP-hard problems. This means that as yet no polynomial time algorithm solving this problem is known and, moreover, that it is very unlikely that such an algorithm exists. It also suggests the need for good heuristics.

BookDOI
01 Aug 1985
TL;DR: Contributions include relating the average-case cost of the brute-force and Knuth-Morris-Pratt string matching algorithms and some uses of the Mellin integral transform in the analysis of algorithms.
Abstract: Open Problems in Stringology- 1 - String Matching- Efficient String Matching with Don't-care Patterns- Optimal Factor Transducers- Relating the Average-case Cost of the Brute force and the Knuth-Morris-Pratt String Matching Algorithm- Algorithms for Factorizing and Testing Subsemigroups- 2 - Subword Trees- The Myriad Virtues of Subword Trees- Efficient and Elegant Subword Tree Construction- 3 - Data Compression- Textual Substitution Techniques for Data Compression- Variations on a Theme by Ziv and Lempel- Compression of Two-dimensional Images- Optimal Parsing of Strings- Novel Compression of Sparse Bit Strings- 4 - Counting- The Use and Usefulness of Numeration Systems- Enumeration of Strings- Two Counting Problems Solved via String Encodings- Some Uses of the Mellin integral Transform in the Analysis of Algorithms- 5 - Periods and Other Regularities- Periodicities in Strings- Linear Time Recognition of Square free Strings- Discovering Repetitions in Strings- Some Decision Results on Nonrepetitive Words- 6 - Miscellaneous- On the Complexity of some Word Problems Which Arise in Testing the Security of Protocols- Code Properties and Derivatives of DOL Systems- Words over a Partially Commutative Alphabet- The Complexity of Two-way Pushdown Automata and Recursive Programs- On Context Free Grammars and Random Number Generation

Book
01 Jan 1985
TL;DR: Contributions include On Parallel Algorithms of some Orthogonal Transforms, The Complexity of Weighted Multi-Constrained Spanning Tree Problems, and A Hierarchy of Polynomial Time Basis Reduction Algorithms, among others.
Abstract: On Parallel Algorithms of some Orthogonal Transforms (S.S. Agaian and D.Z. Gevorkian). An Efficient Algorithm for Finding Peripheral Nodes (I. Arany). Computational Aspects of Assigning Characteristic Semigroup of Asynchronous Automata and Their Extensions (S. Bocian and B. Mikolajczak). Reichenbach's Propositional Logic in Algorithmic Form (P. Borowik). The Complexity of Weighted Multi-Constrained Spanning Tree Problems (P. Camerini, G. Galbiati and F. Maffioli). An Algorithm for Finding SC-Preimages of a Deterministic Finite Automaton (K. Chmiel). On Entropy Decomposition Methods and Algorithm Design (Th. Fischer). An Efficient Algorithm for Dynamic String-Storage Allocation (D. Fox). Covering Intervals with Intervals under Containment Constraints (M.R. Garey and R.Y. Pinter). How to Construct Random Functions (O. Goldreich, S. Goldwasser and S. Micali). Four Pebbles Don't Suffice to Search Planar Infinite Labyrinths (F. Hoffmann). Parallel Algorithms: The Impact of Communication Complexity (F. Hossfeld). Tight Worst-Case Bounds for Bin-Packing Algorithms (A. Ivanyi). Hypergraph Planarity and the Complexity of Drawing Venn Diagrams (D.S. Johnson and H.O. Pollak). Convolutional Characterization of Computability and Complexity of Computations (S. Jukna). Succinct Data Representations and the Complexity of Computations (S. Jukna). Lattices, Basis Reduction and the Shortest Vector Problem (R. Kannan). The Characterization of Some Complexity Classes by Recursion Schemata (M. Liskiewicz, K. Lorys and M. Piotrow). Some Algorithmic Problems on Lattices (L. Lovasz). Linear Proofs in the Non-Negative Cone (J. Moravek). Characterizing Some Low Arithmetic Classes (J.B. Paris, W.G. Handley and A.J. Wilkie). Constructing a Simplex Form of a Rational Matrix (A. Rycerz and J. Jegier). Computing N with a Few Number of Additions (I. Ruzsa and Zs. Tuza). A Hierarchy of Polynomial Time Basis Reduction Algorithms (C.P. Schnorr).
A Topological View of Some Problems in Complexity Theory (M. Sipser). v-Computations on Turing Machines and the Accepted Languages (L. Staiger). On the Greedy Algorithm for an Edge-Partitioning Problem (Gy. Turan). The Complexity of Linear Quadtrees (T.R. Walsh).

Book ChapterDOI
TL;DR: The problem of decomposing a non-convex simple polygon into a minimum number of convex polygons is solved and the decomposition allows for the introduction of Steiner points.
Abstract: The problem of decomposing a non-convex simple polygon into a minimum number of convex polygons is solved. The decomposition allows for the introduction of Steiner points. Two algorithms are proposed. The first shows that the problem is solvable in polynomial time; the second provides an efficient method. Along the way, numerous results of independent interest in pure geometry as well as geometric complexity are stated.

Journal ArticleDOI
TL;DR: In this article, an integral formulation of the elastodynamic equations is presented and discretized to develop a numerical solution procedure, where constant space and linear time dependent interpolation functions are implemented.

Journal ArticleDOI
TL;DR: This paper addresses the problem of efficiently computing the motor torques required to drive a manipulator arm in free motion, given the desired trajectory (the inverse dynamics problem), and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using VLSI devices.
Abstract: This paper addresses the problem of efficiently computing the motor torques required to drive a manipulator arm in free motion, given the desired trajectory, that is, the inverse dynamics problem. It analyzes the high degree of parallelism inherent in the computations and presents two "mathematically exact" formulations especially suited to high-speed, highly parallel implementations using VLSI devices. The first method presented is a parallel version of the recent linear Newton-Euler recursive algorithm. The time cost is linear in the number of joints, but the real-time coefficients are reduced by almost two orders of magnitude. The second formulation reports a new parallel algorithm that shows that it is possible to improve on the linear time dependency. The real time required to perform the calculations increases only as the log2 of the number of joints. Either formulation is susceptible to a systolic pipelined architecture in which complete sets of joint torques emerge at successive intervals of f...

Journal ArticleDOI
Mihalis Yannakakis1
TL;DR: An algorithm is presented which finds a min-cut linear arrangement of a tree in O(n log n) time; an extension of the algorithm determines the number of pebbles needed to play the black and white pebble game on a tree.
Abstract: An algorithm is presented that finds a min-cut linear arrangement of a tree in O(n log n) time. An extension of the algorithm determines the number of pebbles needed to play the black and white pebble game on a tree.
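The quantity being minimized can be stated compactly: the width of a linear arrangement is the maximum number of edges crossing any gap between consecutive positions. The sketch below only evaluates a given arrangement; the paper's contribution is finding a minimizing arrangement of a tree in O(n log n) time.

```python
# Evaluate the cutwidth of a given linear arrangement (illustrative helper,
# not the paper's O(n log n) min-cut arrangement algorithm).
def cutwidth(order, edges):
    """order is a list of vertices; an edge crosses the gap between
    positions i-1 and i iff one endpoint lies before it and one after."""
    pos = {v: i for i, v in enumerate(order)}
    width = 0
    for gap in range(1, len(order)):
        crossing = sum(1 for u, v in edges
                       if min(pos[u], pos[v]) < gap <= max(pos[u], pos[v]))
        width = max(width, crossing)
    return width
```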

Journal ArticleDOI
TL;DR: A broad treatment is given of the design of algorithms to compute the decomposition possibilities for a large class of discrete structures under the substitution decomposition, and it is shown that for arbitrary relations the composition tree can be constructed in polynomial time.
Abstract: In recent years, decomposition techniques have seen an increasing application to the solution of problems from operations research and combinatorial optimization, in particular in network theory and graph theory. This paper gives a broad treatment of a particular aspect of this approach, viz. the design of algorithms to compute the decomposition possibilities for a large class of discrete structures. The decomposition considered is the substitution decomposition (also known as modular decomposition, disjunctive decomposition, X-join or ordinal sum). Under rather general assumptions on the type of structure considered, these (possibly exponentially many) decomposition possibilities can be appropriately represented in a composition tree of polynomial size. The task of determining this tree is shown to be polynomially equivalent to the seemingly weaker task of determining the closed hull of a given set w.r.t. a closure operation associated with the substitution decomposition. Based on this reduction, we show that for arbitrary relations the composition tree can be constructed in polynomial time. For clutters and monotonic Boolean functions, this task of constructing the closed hull is shown to be Turing-reducible to the problem of determining the circuits of the independence system associated with the clutter or the prime implicants of the Boolean function. This leads to polynomial algorithms for special clutters or monotonic Boolean functions. However, these results seem not to be extendable to the general case, as we derive exponential lower bounds for oracle decomposition algorithms for arbitrary set systems and Boolean functions.

Proceedings ArticleDOI
D Harel1
01 Dec 1985
TL;DR: The notions of pseudo and external dominators, both computable in linear time, are introduced and applied to finding immediate dominators; an algorithm is given for a limited class of graphs, including cycle-free graphs, which can be used to find dominators in reducible flow graphs.
Abstract: In the first part of the paper we show how to extend recent methods for solving a special case of the union-find problem in linear time to a special case of the eval-link-update problem for computing the minimum function defined on paths of trees. In the cases where our approach is applicable, we give a way to perform m eval, link, and update operations on n elements in O(m + n) time and O(n) space, improved from O(m·α(m + n, n) + n) time and O(n) space in the more general case, where α is a functional inverse of Ackermann's function. The technique gives similar improvements in the efficiency of algorithms for solving several network optimization problems in the case where all the keys involved are integers in some suitable range. In the second part of the paper we show how to use the new technique to speed up the fastest known algorithm for finding dominators in flow graphs so that it runs in linear time. We introduce the notions of pseudo and external dominators, which are both computable in linear time and make the technique introduced in the first part applicable for finding immediate dominators. We first give an algorithm for a limited class of graphs which includes cycle-free graphs, and thus can be used to find dominators in reducible flow graphs. We then show how to extend our technique for computing dominators to any flow graph. All the algorithms we describe run on a Random Access Machine.
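For orientation, here is the simple iterative data-flow computation of dominator sets that linear-time algorithms such as this one improve on (a quadratic baseline sketch, not the paper's method):

```python
# Iterative data-flow baseline for dominators (not the paper's linear-time
# algorithm): dom[v] = {v} union the intersection of dom[p] over all
# predecessors p of v, iterated to a fixpoint.
def dominator_sets(succ, root):
    """succ maps each node to its list of successors; all nodes are
    assumed reachable from root."""
    nodes = list(succ)
    pred = {v: set() for v in nodes}
    for u in nodes:
        for v in succ[u]:
            pred[v].add(u)
    dom = {v: set(nodes) for v in nodes}
    dom[root] = {root}
    changed = True
    while changed:
        changed = False
        for v in nodes:
            if v == root:
                continue
            new = set(nodes)
            for p in pred[v]:
                new &= dom[p]
            new |= {v}
            if new != dom[v]:
                dom[v] = new
                changed = True
    return dom
```

On the diamond graph r → a, r → b, a → c, b → c, the join node c is dominated only by itself and the root.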