
Showing papers in "Journal of Algorithms in 1991"


Journal ArticleDOI
TL;DR: Using a variation of the interpretability concept, it is shown that all graph properties definable in monadic second-order logic with quantification over vertex and edge sets can be decided in linear time for classes of graphs of fixed bounded treewidth given a tree-decomposition.
Abstract: Using a variation of the interpretability concept, we show that all graph properties definable in monadic second-order logic (MS properties) with quantification over vertex and edge sets can be decided in linear time for classes of graphs of fixed bounded treewidth, given a tree-decomposition. This gives an alternative proof of a recent result by Courcelle. We allow graphs with directed and/or undirected edges, labeled on edges and/or vertices with labels taken from a finite set. We extend MS properties to extended monadic second-order (EMS) problems involving counting or summing evaluations over sets definable in monadic second-order logic. Our technique also allows us to solve some EMS problems in linear, polynomial, or pseudopolynomial time for classes of graphs of fixed bounded treewidth. Moreover, it is shown that each EMS problem is in NC for graphs of bounded treewidth. Most problems for which linear time algorithms for graphs of bounded treewidth were previously known to exist, and many others, are EMS problems.

897 citations


Journal ArticleDOI
TL;DR: The marking algorithm, a randomized on-line algorithm for the paging problem, is developed, and its expected cost on any sequence of requests is proved to be within a factor of 2Hk of optimum.
Abstract: The paging problem is that of deciding which pages to keep in a memory of k pages in order to minimize the number of page faults. We develop the marking algorithm, a randomized on-line algorithm for the paging problem. We prove that its expected cost on any sequence of requests is within a factor of 2Hk of optimum (where Hk is the kth harmonic number, which is roughly ln k). The best such factor that can be achieved is Hk. This is in contrast to deterministic algorithms, which cannot be guaranteed to be within a factor smaller than k of optimum. An alternative to comparing an on-line algorithm with the optimum off-line algorithm is the idea of comparing it to several other on-line algorithms. We have obtained results along these lines for the paging problem. Given a set of on-line algorithms …

455 citations
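
For intuition, here is a minimal Python sketch of the marking algorithm in its usual formulation: on a fault with a full cache, evict a uniformly random unmarked page, starting a new phase (unmarking everything) when all cached pages are marked. Names and structure are illustrative, not taken from the paper.

```python
import random

def marking_paging(requests, k):
    """Serve a request sequence with a k-page cache using the randomized
    marking algorithm; return the number of page faults incurred."""
    cache, marked, faults = set(), set(), 0
    for page in requests:
        if page not in cache:
            faults += 1
            if len(cache) == k:
                if not cache - marked:      # every cached page is marked:
                    marked.clear()          # start a new phase
                victim = random.choice(sorted(cache - marked))
                cache.remove(victim)
            cache.add(page)
        marked.add(page)                    # mark on every access
    return faults
```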


Journal ArticleDOI
TL;DR: A new proof of the result, due to A. LaPaugh, that a graph may be optimally “searched” without clearing any edge twice is given.
Abstract: We give a new proof of the result, due to A. LaPaugh, that a graph may be optimally “searched” without clearing any edge twice.

272 citations


Journal ArticleDOI
TL;DR: A new structure called a “stable partition” is defined, which generalizes the notion of a complete stable matching, and it is proved that every instance of the stable roommates problem has at least one such structure.
Abstract: The stable roommates problem is a well-known problem of matching n people into n/2 disjoint pairs so that no two unmatched persons both prefer each other to their partners under the matching. We call such a matching “a complete stable matching.” It is known that a complete stable matching may not exist. Irving described an O(n²) algorithm that would find one complete stable matching if there is one, or would report that none exists. In this paper, we give a necessary and sufficient condition for the existence of a complete stable matching; namely, the non-existence of any odd party, which will be defined subsequently. We define a new structure called a “stable partition,” which generalizes the notion of a complete stable matching, and prove that every instance of the stable roommates problem has at least one such structure. We also show that a stable partition contains all the odd parties, if there are any. Finally, we give an O(n²) algorithm that finds one stable partition, which in turn gives all the odd parties.

181 citations


Journal ArticleDOI
TL;DR: The problem of maintaining on-line a solution to the All Pairs Shortest Paths Problem in a directed graph G = (V,E) where edges may be dynamically inserted or have their cost decreased is considered and a new data structure is introduced which is able to answer queries concerning the length of the shortest path between any two vertices in constant time.
Abstract: We consider the problem of maintaining on-line a solution to the All Pairs Shortest Paths Problem in a directed graph G = (V,E) where edges may be dynamically inserted or have their cost decreased. For the case of integer edge costs in a given range [1…C], we introduce a new data structure which is able to answer queries concerning the length of the shortest path between any two vertices in constant time and to trace out the shortest path between any two vertices in time linear in the number of edges reported. The total time required to maintain the data structure under a sequence of at most O(n²) edge insertions and at most O(Cn²) edge cost decreases is O(Cn³ log(nC)) in the worst case, where n is the total number of vertices in G. For the case of unit edge costs, the total time required to maintain the data structure under a sequence of at most O(n²) insertions of edges becomes O(n³ log n) in the worst case. The same bounds can be achieved for the problem of maintaining on-line longest paths in directed acyclic graphs. All our algorithms improve previously known algorithms and are only a logarithmic factor away from the best possible bounds.

174 citations
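
The invariant maintained by such a structure can be seen in a naive O(n²)-per-update sketch: when edge (u, v) acquires a smaller cost w, re-relax every pair of vertices through the new edge. This toy version (illustrative names; a plain distance matrix rather than the paper's data structure) answers queries in constant time but does not achieve the amortized bounds above.

```python
def decrease_edge(dist, u, v, w):
    """Repair an all-pairs shortest-path matrix `dist` (with dist[i][i] == 0,
    float('inf') for unreachable pairs) after the cost of edge (u, v)
    decreases to w. O(n^2) per update."""
    n = len(dist)
    if w >= dist[u][v]:
        return                      # no shorter paths can appear
    for x in range(n):
        for y in range(n):
            through = dist[x][u] + w + dist[v][y]
            if through < dist[x][y]:
                dist[x][y] = through
```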


Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of finding k points of a set S that form a small set under some given measure, and present efficient algorithms for several natural measures including the diameter and variance.
Abstract: Let S be a set consisting of n points in the plane. We consider the problem of finding k points of S that form a “small” set under some given measure, and present efficient algorithms for several natural measures including the diameter and the variance.

148 citations
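
The measures in question are easy to make concrete. A brute-force reference for the diameter measure, written directly from the definition, looks as follows; the point of the paper is to beat this exponential enumeration with efficient algorithms. (Helper names are illustrative; requires k ≥ 2.)

```python
from itertools import combinations
from math import dist

def min_diameter_k_subset(points, k):
    """Return the k points of `points` minimizing the diameter, i.e., the
    largest pairwise distance. Exponential-time reference implementation."""
    best, best_diam = None, float("inf")
    for subset in combinations(points, k):
        diam = max(dist(p, q) for p, q in combinations(subset, 2))
        if diam < best_diam:
            best, best_diam = list(subset), diam
    return best, best_diam
```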


Journal ArticleDOI
TL;DR: A probabilistic algorithm with execution time O(n log² n + n log n · |log p|), which for a graph G on n vertices and a real number p > 0 either finds a tree-decomposition of width ≤ 6w or answers that the tree-width of G is ≥ w; this second answer may be wrong, but with probability at most p.
Abstract: A graph G has tree-width at most w if it admits a tree-decomposition of width ≤ w. It is known that once we have a tree-decomposition of a graph G of bounded width, many NP-hard problems can be solved for G in linear time. For w ≤ 3 we give a linear time algorithm for finding such a decomposition, and for a general fixed w we obtain a probabilistic algorithm with execution time O(n log² n + n log n · |log p|) which, for a graph G on n vertices and a real number p > 0, either finds a tree-decomposition of width ≤ 6w or answers that the tree-width of G is ≥ w; this second answer may be wrong, but with probability at most p. The second result is based on a separator technique which may be of independent interest.

100 citations


Journal ArticleDOI
TL;DR: This paper presents algorithms for several natural measures, including the diameter (set measure); the area, perimeter, or diagonal of the smallest enclosing axes-parallel rectangle (rectangular measure); the side length of the smallest enclosing axes-parallel square (square measure); and the radius of the smallest enclosing circle (circular measure).
Abstract: We consider the following problem: given a planar set of points S, a measure μ acting on S, and a pair of values μ1 and μ2, does there exist a bipartition S = S1 ∪ S2 satisfying μ(Si) ≤ μi for i = 1, 2? We present algorithms for several natural measures, including the diameter (set measure); the area, perimeter, or diagonal of the smallest enclosing axes-parallel rectangle (rectangular measure); the side length of the smallest enclosing axes-parallel square (square measure); and the radius of the smallest enclosing circle (circular measure). The algorithms run in time O(n log n) for the set, rectangle, and square measures, and in time O(n² log n) for the circular measure. The problem of partitioning S into an arbitrary number k of subsets is known to be NP-complete for many of these measures.

85 citations


Journal ArticleDOI
TL;DR: An efficient probabilistic algorithm for a Monte-Carlo approximation to the Hough transform that requires substantially less computation and storage than the standard Hough transform when applied to patterns that are easily recognized by humans.
Abstract: The Hough transform is a common technique in computer vision and pattern recognition for recognizing patterns of points. We describe an efficient probabilistic algorithm for a Monte-Carlo approximation to the Hough transform. Our algorithm requires substantially less computation and storage than the standard Hough transform when applied to patterns that are easily recognized by humans. The probabilistic steps involve randomly choosing small subsets of points that jointly vote for likely patterns.

77 citations
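
A generic version of the sampling idea for straight lines: each randomly chosen pair of points votes for the (θ, ρ) cell of the line through them, and heavily voted cells are reported. This is a sketch of the general Monte-Carlo Hough principle with assumed parameters, not the authors' exact algorithm.

```python
import math
import random
from collections import Counter

def monte_carlo_hough_lines(points, trials=2000, res=0.05, top=5):
    """Sample point pairs at random; each pair votes for the line through it,
    quantized in (theta, rho) cells of size `res`. Return the `top` cells."""
    votes = Counter()
    for _ in range(trials):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        theta = math.atan2(y2 - y1, x2 - x1) + math.pi / 2   # line normal
        rho = x1 * math.cos(theta) + y1 * math.sin(theta)
        votes[(round(theta / res), round(rho / res))] += 1
    return votes.most_common(top)
```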


Journal ArticleDOI
TL;DR: This paper shows that the optimal K-level quantizer problem can be solved in O(KN) time, by a better understanding of the objective function in this particular non-linear programming problem and the use of Aggarwal et al.'s matrix-searching technique.
Abstract: Optimal quantization, a fundamental problem in source coding and information theory, can be formulated as a discrete optimization problem. In 1964 Bruce (“Optimum Quantization,” Sc.D. thesis, MIT, May 1964) devised a dynamic programming algorithm for discrete optimal quantization. For the mean-square error measure, and when the amplitude density function of the quantized signal is represented by a histogram of N points, Bruce's algorithm can compute the optimal K-level quantizer in O(KN²) time. This paper shows that the same problem can be solved in O(KN) time. The improvement is made by a better understanding of the objective function in this particular non-linear programming problem and the use of Aggarwal et al.'s matrix-searching technique.

71 citations
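
The dynamic program being accelerated is the classical one: if E[i][j] is the minimum total squared error of quantizing the first j histogram points with i levels, then E[i][j] = min over t of E[i−1][t] + err(t+1, j), where err is the within-cell squared error. Below is a sketch of this baseline O(KN²) recurrence (unit weights assumed; this is the formulation the paper speeds up, not its O(KN) algorithm).

```python
def optimal_quantizer(xs, K):
    """O(K*N^2) DP for minimum mean-square-error K-level quantization of
    the sorted sample points xs (each taken with unit weight)."""
    N = len(xs)
    pre = [0.0] * (N + 1)   # prefix sums of x
    pre2 = [0.0] * (N + 1)  # prefix sums of x^2
    for i, x in enumerate(xs):
        pre[i + 1] = pre[i] + x
        pre2[i + 1] = pre2[i] + x * x

    def err(lo, hi):  # squared error of one cell covering xs[lo:hi]
        n = hi - lo
        s, s2 = pre[hi] - pre[lo], pre2[hi] - pre2[lo]
        return s2 - s * s / n  # equals sum over the cell of (x - mean)^2

    INF = float("inf")
    E = [[INF] * (N + 1) for _ in range(K + 1)]
    E[0][0] = 0.0
    for i in range(1, K + 1):
        for j in range(1, N + 1):
            E[i][j] = min(E[i - 1][t] + err(t, j) for t in range(i - 1, j))
    return E[K][N]
```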


Journal ArticleDOI
TL;DR: The authors' algorithm for computing the complete visibility polygon of P from a convex set inside P leads to efficient algorithms for the following problems: Given a polygon Q of m vertices inside another polygon P of n vertices, construct a minimum nested convex polygon K between P and Q in O((n + m)log k) time, where k is the number of vertices of K.
Abstract: In this paper, we propose efficient algorithms for computing the complete and weak visibility polygons of a simple polygon P of n vertices from a convex set C inside P. The algorithm for computing the complete visibility polygon of P from C takes O(n + k) time in the worst case, where k is the number of extreme points of the convex set C. Given a triangulation of P - C, the algorithm for computing the weak visibility polygon of P from C takes O(n + k) time in the worst case. We also show that computing the complete and weak visibility polygons of P from a nonconvex set inside P has the same time complexity. The algorithm for computing the complete visibility polygon of P from a convex set inside P leads to efficient algorithms for the following problems: (i) Given a polygon Q of m vertices inside another polygon P of n vertices, construct a minimum nested convex polygon K between P and Q. The algorithm runs in O((n + m)log k) time, where k is the number of vertices of K. This is an improvement over the O((n + m)log(n + m)) time algorithm of Wang and Chan. (ii) Given two points inside a polygon P, compute a minimum link path between them inside P. Given a triangulation of P, the algorithm takes O(n) time. Suri also proposed a linear time algorithm for this problem in a triangulated polygon but our algorithm is simpler.

Journal ArticleDOI
TL;DR: A result of independent interest is a parallel hashing technique that enables a drastic reduction of space requirements at the price of using randomness; its applicability is demonstrated for the parallel sorting algorithm and for some parallel string matching algorithms.
Abstract: The problem of sorting n integers from a restricted range [1…m], where m is superpolynomial in n, is considered. An o(n log n) randomized algorithm is given. Our algorithm takes O(n log log m) expected time and O(n) space. (Thus, for m = n^polylog(n) we have an O(n log log n) algorithm.) The algorithm is parallelizable. The resulting parallel algorithm achieves optimal speedup. Some features of the algorithm make us believe that it is relevant for practical applications. A result of independent interest is a parallel hashing technique. The expected construction time is logarithmic using an optimal number of processors, and searching for a value takes O(1) time in the worst case. This technique enables a drastic reduction of space requirements at the price of using randomness. Applicability of the technique is demonstrated for the parallel sorting algorithm and for some parallel string matching algorithms. The parallel sorting algorithm is designed for a strong and nonstandard model of parallel computation. Efficient simulations of the strong model on a CRCW PRAM are introduced. One of the simulations even achieves optimal speedup. This is probably the first optimal speedup simulation of a certain kind.

Journal ArticleDOI
TL;DR: An efficient algorithm for Waterman's problem, an on-line two-dimensional dynamic programming problem that is used for the prediction of RNA secondary structure, and an O(n + h log min{h, n²/h}) time algorithm for the sparse concave case, where h is the number of possible base pairs in the RNA structure.
Abstract: An on-line problem is a problem where each input is available only after certain outputs have been calculated. The usual kind of problem, where all inputs are available at all times, is referred to as an off-line problem. We present an efficient algorithm for Waterman's problem, an on-line two-dimensional dynamic programming problem that is used for the prediction of RNA secondary structure. Our algorithm uses as a module an algorithm for solving a certain on-line one-dimensional dynamic programming problem. The time complexity of our algorithm is n times the complexity of the on-line one-dimensional dynamic programming problem. For the concave case, we present a linear time algorithm for on-line searching in totally monotone matrices, which is a generalization of the on-line one-dimensional problem. This yields an optimal O(n²) time algorithm for the on-line two-dimensional concave problem. The constants in the time complexity of this algorithm are fairly small, which makes it practical. For the convex case, we use an O(nα(n)) time algorithm for the on-line one-dimensional problem, where α(·) is the functional inverse of Ackermann's function. This yields an O(n²α(n)) time algorithm for the on-line two-dimensional convex problem. Our techniques can be extended to solve the sparse version of Waterman's problem. We obtain an O(n + h log min{h, n²/h}) time algorithm for the sparse concave case, and an O(n + hα(h) log min{h, n²/h}) time algorithm for the sparse convex case, where h is the number of possible base pairs in the RNA structure. All our algorithms improve on previously known algorithms.

Journal ArticleDOI
TL;DR: The main result of this paper is that the set of minimal braids is co-NP-complete.
Abstract: Braids can be represented as two-dimensional diagrams showing the crossings of strings, or as words over the generators of a braid group. A minimal braid is one with the fewest crossings (or the shortest word) among all possible representations topologically equivalent to that braid. The main result of this paper is that the set of minimal braids is co-NP-complete.

Journal ArticleDOI
TL;DR: Gauss's algorithm reduces integer lattices in the two-dimensional case and finds a basis of a lattice consisting of its two successive minima; its worst-case input configuration is exhibited and shown to generalize the worst-case input configuration of the centered Euclidean algorithm to dimension two.
Abstract: Gauss gave an algorithm which reduces integer lattices in the two-dimensional case and finds a basis of a lattice consisting of its two successive minima. We exhibit its worst-case input configuration and then show that this worst-case input configuration generalizes the worst-case input configuration of the centered Euclidean algorithm to dimension two.
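
Gauss's reduction itself is short. Here is a sketch for an integer basis in the plane, using the centered (nearest-integer) step that the abstract relates to the centered Euclidean algorithm; it assumes the two input vectors are linearly independent.

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis (u, v); the output
    basis consists of the two successive minima of the lattice."""
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u                          # keep u the shorter vector
    while True:
        m = round(dot(u, v) / dot(u, u))     # centered division step
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v
        u, v = v, u

# e.g., gauss_reduce((31, 59), (37, 70)) returns a basis of short vectors
```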

Journal ArticleDOI
TL;DR: Tight bounds are given on the average complexity of various problems on a bidirectional ring of n processors, where processors are anonymous, i.e., are indistinguishable.
Abstract: We consider a bidirectional ring of n processors, where processors are anonymous, i.e., are indistinguishable. In this model it is known that “most” functions (in particular XOR and orientation) have worst case message complexity Θ(n²) for asynchronous computations, and Θ(n log n) for synchronous computations. The average case behavior is different; an algorithm that computes XOR asynchronously with O(n√n) messages on the average is known. In this paper we give tight bounds on the average complexity of various problems. We show the following:
• An asynchronous deterministic algorithm that computes any computable function with O(n log n) messages on the average (improving the O(n√n) algorithm). A matching lower bound is proven for functions such as XOR and orientation.
• An asynchronous probabilistic algorithm that computes any computable function with O(n log n) expected messages on any input, using one random bit per processor. A matching lower bound is proven.
• A Monte-Carlo asynchronous algorithm that computes any computable function with O(n) expected messages on any input, using one random bit per processor, with fixed error probability ε > 0.
• A synchronous algorithm that computes any computable function optimally in O(n) messages on the average.
• A synchronous probabilistic algorithm that computes any computable function optimally in O(n) expected messages on any input, using one random bit per processor.
• Lower bounds on the complexity of Monte-Carlo algorithms that always terminate.

Journal ArticleDOI
TL;DR: It is proved that for each n there is a graph G_n such that the chromatic number of G_n is at most n^ε, but the probability that A(G_n, p) < (1 − ϑ)n/log₂ n for a randomly chosen ordering p is O(n^−Δ).
Abstract: Given a graph G and an ordering p of its vertices, denote by A(G, p) the number of colors used by the greedy coloring algorithm when applied to G with vertices ordered by p. Let ε, ϑ, Δ be positive constants. It is proved that for each n there is a graph G_n such that the chromatic number of G_n is at most n^ε, but the probability that A(G_n, p) < (1 − ϑ)n/log₂ n for a randomly chosen ordering p is O(n^−Δ).
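
For concreteness, A(G, p) is computed by the following standard routine (adjacency as a dict mapping each vertex to its neighbor set; names are illustrative). The result above says that for some graphs with small chromatic number, this routine almost always performs badly under a random ordering p.

```python
def greedy_colors_used(adj, order):
    """Greedy coloring: scan vertices in `order`, give each the smallest
    color absent from its already-colored neighbors; return A(G, p)."""
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return max(color.values()) + 1 if color else 0
```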

Journal ArticleDOI
TL;DR: A probabilistic analysis of the dual bin packing problem is carried out under the assumption that the items are drawn independently from the uniform distribution on [0, 1], revealing the connection between this problem and the classical bin packing problem, as well as its connection to renewal theory.
Abstract: In the dual bin packing problem, the objective is to assign items of given size to the largest possible number of bins, subject to the constraint that the total size of the items assigned to any bin is at least equal to 1. We carry out a probabilistic analysis of this problem under the assumption that the items are drawn independently from the uniform distribution on [0, 1], and reveal the connection between this problem and the classical bin packing problem, as well as its connection to renewal theory.
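
A simulation makes the renewal-theory connection visible: filling one bin with i.i.d. Uniform[0, 1] items until the load reaches 1 consumes a random number of items (with mean e ≈ 2.718), so the number of bins filled out of n items behaves like a renewal counting process. A next-fit-style sketch, illustrative rather than the paper's analysis:

```python
import random

def dual_next_fit(n, seed=None):
    """Pack n Uniform[0,1] items, closing a bin once its load reaches 1.
    Returns the number of closed bins; roughly n/e for large n."""
    rng = random.Random(seed)
    bins, load = 0, 0.0
    for _ in range(n):
        load += rng.random()
        if load >= 1.0:
            bins += 1
            load = 0.0
    return bins
```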


Journal ArticleDOI
TL;DR: This paper gives the first NC algorithm for recognizing the consecutive 1's property for rows of a (0, 1)-matrix, and shows that the maximum matching problem for arbitrary convex bipartite graphs can be solved within the same complexity bounds.
Abstract: Given a (0, 1)-matrix, the problem of recognizing the consecutive 1's property for rows is to decide whether it is possible to permute the columns such that the resulting matrix has consecutive 1's in each of its rows. In this paper, we give the first NC algorithm for this problem. The algorithm runs in O(log n + log² m) time using O(m²n + m³) processors on a Common CRCW PRAM, where m × n is the size of the matrix. The algorithm can be extended to detect the circular 1's property within the same resource bounds. We can also make use of the algorithm to recognize convex bipartite graphs in O(log² n) time using O(n³) processors, where n is the number of vertices in a graph. We further show that the maximum matching problem for arbitrary convex bipartite graphs can be solved within the same complexity bounds, extending the work of Dekel and Sahni, who gave an efficient parallel algorithm for computing maximum matchings in convex bipartite graphs under the condition that the neighbors of each vertex in one vertex set of a bipartite graph occur consecutively in the other vertex set. This broadens the class of graphs for which the maximum matching problem is known to be in NC.

Journal ArticleDOI
Xin He1
TL;DR: A parallel algorithm for recognizing series parallel graphs and constructing decomposition trees is presented; it takes O(log² n + log m) time with O(n + m) processors, where n (m) is the number of vertices (edges) in the graph.
Abstract: We present efficient parallel algorithms for solving three problems for series parallel graphs: 3-coloring, depth-first spanning tree, and breadth-first spanning tree. If the input is given by the decomposition tree, the first two problems can be solved in O(log n) time with O(n/log n) processors, and the last problem can be solved in O(log n log log n) time with O(n) processors. We also present a parallel algorithm for recognizing series parallel graphs and constructing decomposition trees. This algorithm takes O(log² n + log m) time with O(n + m) processors, where n (m) is the number of vertices (edges) in the graph.

Journal ArticleDOI
TL;DR: It is shown that, assuming the generalized Riemann hypothesis, there exists a deterministic polynomial time algorithm which, on input of a rational prime p and a monic integral polynomial f, computes all the irreducible factors of f mod p in F_p[x].
Abstract: It is shown that, assuming the generalized Riemann hypothesis, there exists a deterministic polynomial time algorithm which, on input of a rational prime p and a monic integral polynomial f, whose discriminant is not divisible by p and whose roots generate an Abelian extension over Q, computes all the irreducible factors of f mod p in F_p[x].

Journal ArticleDOI
TL;DR: In this article, the authors developed techniques based on X-ray probing to determine convex n-gons in 7n + 7 half-plane probes and proved linear lower bounds for determination and verification.
Abstract: A half-plane probe through a polygon measures the area of intersection between a half-plane and the polygon. We develop techniques based on X-ray probing to determine convex n-gons in 7n + 7 half-plane probes. We also show n + 1 half-plane probes are sufficient to verify a specified convex polygon and prove linear lower bounds for determination and verification.

Journal ArticleDOI
TL;DR: It is shown that the previously known algorithm BALANCE2 has competitiveness constant not better than 6, and another algorithm whose competitiveness constant is 4 is presented.
Abstract: We consider 2-server algorithms with time complexity O(1) per request. We show that the previously known algorithm BALANCE2 has a competitiveness constant not better than 6, and we present another algorithm whose competitiveness constant is 4.
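
The general shape of a BALANCE-style 2-server rule, which needs only O(1) work per request, is sketched below: serve each request with the server whose cumulative travel plus the cost of the move is smaller. This is a generic sketch of the balancing idea, not the specific 4-competitive algorithm of the paper.

```python
def balance_two_servers(requests, s0, s1, d):
    """Serve `requests` in a metric space with distance function d, starting
    servers at s0 and s1. O(1) time per request. Returns the total cost."""
    pos = [s0, s1]
    travel = [0.0, 0.0]          # cumulative distance moved by each server
    cost = 0.0
    for r in requests:
        step = [d(pos[0], r), d(pos[1], r)]
        i = 0 if travel[0] + step[0] <= travel[1] + step[1] else 1
        travel[i] += step[i]
        cost += step[i]
        pos[i] = r
    return cost

# e.g., on the line: balance_two_servers([5, -2, 7], 0, 10, lambda a, b: abs(a - b))
```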

Journal ArticleDOI
TL;DR: It is shown that any graph G can be embedded with unit congestion in a hypercube of dimension n ≥ max{6⌈log|V(G)|⌉, deg(G)}, but it is NP-complete to determine whether G is congestion-1 embeddable in a given hypercube, even if the source graph is connected.
Abstract: Embedding a source graph in a host graph has long been used to model the problem of processor allocation in a multicomputer system. If the host graph represents a network of processors that uses circuit switching for node-to-node communication, then embedding a graph with congestion 1 is practically as good as embedding it with adjacency preserved, but has the advantage of allowing far more graphs to be embeddable. In this paper, we show that any graph G can be embedded with unit congestion in a hypercube of dimension n ≥ max{6⌈log|V(G)|⌉, deg(G)}, but it is NP-complete to determine whether G is congestion-1 embeddable in a given hypercube of dimension less than max{6⌈log|V(G)|⌉, deg(G)}, even if the source graph is connected. The restriction to connected graphs is important because, in applications, source graphs are usually connected.

Journal ArticleDOI
Xin He1
TL;DR: This paper presents an O(n log n) algorithm for finding a minimum 3-cut in planar graphs, improving the best previously known algorithm for the problem by an O(n/log n) factor.
Abstract: A 3-cut of a connected graph G is a subset of edges which, when deleted, separates G into three connected components. In this paper we present an O(n log n) algorithm for finding a minimum 3-cut in planar graphs. Our algorithm improves the best previously known algorithm for the problem by an O(n/log n) factor.

Journal ArticleDOI
TL;DR: It is shown that the average number of swaps required to construct a heap on n keys by Williams' method of repeated insertion is (α + o(1))n, where the constant α is about 1.3.
Abstract: We show that the average number of swaps required to construct a heap on n keys by Williams' method of repeated insertion is (α + o(1))n, where the constant α is about 1.3. Further, with high probability the number of swaps is close to this value.
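
Williams' construction is easy to simulate, and averaging the swap count over random permutations empirically approaches the constant-times-n behavior stated above. A sketch (min-heap convention; names are illustrative):

```python
import random

def swaps_to_build_heap(keys):
    """Build a binary min-heap by repeated insertion (Williams' method);
    return the number of swaps made while sifting new keys up."""
    heap, swaps = [], 0
    for key in keys:
        heap.append(key)
        i = len(heap) - 1
        while i > 0 and heap[(i - 1) // 2] > heap[i]:
            parent = (i - 1) // 2
            heap[i], heap[parent] = heap[parent], heap[i]
            i = parent
            swaps += 1
    return swaps

# estimate the constant: sum(swaps_to_build_heap(random.sample(range(10**4), 10**4))
#                            for _ in range(20)) / (20 * 10**4)
```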

Journal ArticleDOI
TL;DR: This work presents a parallel algorithm for this problem, which runs in polylog parallel time and uses O(n³) processors on a PRAM; the major tool it uses is computing a minimum-weight branching with zero-one weights.
Abstract: We study the following problem: given a strongly connected digraph, find a minimal strongly connected spanning subgraph of it. Our main result is a parallel algorithm for this problem, which runs in polylog parallel time and uses O(n³) processors on a PRAM. Our algorithm is simple, and the major tool it uses is computing a minimum-weight branching with zero-one weights. We also present sequential algorithms for the problem that run in time O(m + n log n).
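
Minimality here is with respect to deleting edges, which yields an obvious sequential baseline: repeatedly try to drop an edge and test that the digraph stays strongly connected. The sketch below runs in O(m(m + n)) time, far slower than the paper's O(m + n log n) sequential algorithms, but it pins down the object being computed. (Illustrative names; assumes the input digraph on vertices 0..n−1 is strongly connected.)

```python
def minimal_scss(n, edges):
    """Return a minimal subset of `edges` that keeps the strongly connected
    digraph on vertices 0..n-1 strongly connected."""
    def strongly_connected(es):
        def reaches_all(adj):          # can vertex 0 reach every vertex?
            seen, stack = {0}, [0]
            while stack:
                for w in adj.get(stack.pop(), ()):
                    if w not in seen:
                        seen.add(w)
                        stack.append(w)
            return len(seen) == n
        fwd, bwd = {}, {}
        for u, v in es:
            fwd.setdefault(u, []).append(v)
            bwd.setdefault(v, []).append(u)
        return reaches_all(fwd) and reaches_all(bwd)

    kept = list(edges)
    for e in list(edges):              # greedily drop every removable edge
        trial = [f for f in kept if f != e]
        if strongly_connected(trial):
            kept = trial
    return kept
```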

Journal ArticleDOI
TL;DR: The paper points out that the CS interacts with the access model to produce some remarkable synergistic effects that make it possible to use very effective “truncated versions” of the CS, which have very modest space requirements.
Abstract: Let R1, …, Rn be a linear list of n elements. We assume the independent reference model, with a fixed but unknown access probability vector. We survey briefly the problem of reorganizing the list dynamically, on the basis of accrued references, with the objective of minimizing the expected access cost. The counter scheme (CS) is known to be asymptotically optimal for this purpose. The paper explores the CS, with the aim of reducing its storage requirements. We start with a detailed exposition of its cost function and then point out that it interacts with the access model to produce some remarkable synergistic effects. These make it possible to use very effective “truncated versions” of the CS, which have very modest space requirements. The versions we consider are: (i) the “limited-counters scheme,” which bounds each of the frequency counters to a maximal value c; (ii) the original CS with a bound on the number of references during which the scheme is active. The bound is chosen so as to achieve a desired level of performance compared with the optimal policy.
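
A sketch of the limited-counters idea as described: each record keeps a frequency counter capped at c, and the list is maintained in non-increasing counter order, so frequently accessed records migrate to the front while only O(log c) bits per record are needed. Class and method names are illustrative.

```python
class LimitedCounterList:
    """Self-organizing linear list: each access increments the record's
    counter (capped at c) and the list is kept ordered by counter."""

    def __init__(self, records, c):
        self.c = c
        self.items = [[r, 0] for r in records]   # [record, counter] pairs

    def access(self, record):
        """Sequential search; returns the access cost (1-based position)."""
        for pos, item in enumerate(self.items):
            if item[0] == record:
                item[1] = min(item[1] + 1, self.c)
                i = pos
                # bubble toward the front past records with lower counters
                while i > 0 and self.items[i - 1][1] < item[1]:
                    self.items[i - 1], self.items[i] = self.items[i], self.items[i - 1]
                    i -= 1
                return pos + 1
        raise KeyError(record)
```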

Journal ArticleDOI
TL;DR: A parallel algorithm to recognize parity graphs and a parallel algorithm for finding the size of a maximum clique, which runs in O(log² n) time with n³ log² n processors, are presented.
Abstract: A graph is called a parity graph iff for every pair of vertices all minimal chains joining them have the same parity. We study properties of parity graphs from the point of view of parallel algorithms. We present a parallel algorithm to recognize parity graphs which runs in O(log² n) time with n⁴ log² n processors on a CREW PRAM computer, and a parallel algorithm for finding the size of a maximum clique which runs in O(log² n) time with n³ log² n processors. The method used to find a maximum clique also leads to a simple O(n²) sequential algorithm.