# Papers published in *Algorithmica*, 1987

••

Bell Labs^{1}

TL;DR: A geometric transformation is introduced that allows Voronoi diagrams to be computed using a sweepline technique and is used to obtain simple algorithms for computing the Voronoi diagram of point sites, of line segment sites, and of weighted point sites.

Abstract: We introduce a geometric transformation that allows Voronoi diagrams to be computed using a sweepline technique. The transformation is used to obtain simple algorithms for computing the Voronoi diagram of point sites, of line segment sites, and of weighted point sites. All algorithms have O(n log n) worst-case running time and use O(n) space.
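By definition, the Voronoi cell of a site is the set of points closer to it than to any other site. A brute-force sketch of that definition (not Fortune's sweepline construction; the sites and query points below are made-up examples):

```python
# Brute-force nearest-site classification: the *definition* of a Voronoi
# diagram, NOT the paper's O(n log n) sweepline algorithm.
# The sites and query points are illustrative assumptions.

def nearest_site(q, sites):
    """Return the index of the site whose Voronoi cell contains q."""
    def d2(p):  # squared Euclidean distance; avoids an unnecessary sqrt
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return min(range(len(sites)), key=lambda i: d2(sites[i]))

sites = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
# (0.5, 0.5) lies in the Voronoi cell of site 0.
assert nearest_site((0.5, 0.5), sites) == 0
```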

1,209 citations

••

TL;DR: A worst-case lower bound on the length of paths generated by any algorithm operating within the framework of the accepted model is developed; the bound is expressed in terms of the perimeters of the obstacles met by the automaton in the scene.

Abstract: The problem of path planning for an automaton moving in a two-dimensional scene filled with unknown obstacles is considered. The automaton is presented as a point; obstacles can be of an arbitrary shape, with continuous boundaries and of finite size; no restriction on the size of the scene is imposed. The information available to the automaton is limited to its own current coordinates and those of the target position. Also, when the automaton hits an obstacle, this fact is detected by the automaton's "tactile sensor." This information is shown to be sufficient for reaching the target or concluding in finite time that the target cannot be reached. A worst-case lower bound on the length of paths generated by any algorithm operating within the framework of the accepted model is developed; the bound is expressed in terms of the perimeters of the obstacles met by the automaton in the scene. Algorithms that guarantee reaching the target (if the target is reachable), and tests for target reachability are presented. The efficiency of the algorithms is studied, and worst-case upper bounds on the length of generated paths are produced.

694 citations

••

Stanford University^{1}, Tel Aviv University^{2}, Courant Institute of Mathematical Sciences^{3}, Bell Labs^{4}, Princeton University^{5}

TL;DR: Given a triangulation of a simple polygon P, linear-time algorithms for solving a collection of problems concerning shortest paths and visibility within P are presented.

Abstract: Given a triangulation of a simple polygon P, we present linear-time algorithms for solving a collection of problems concerning shortest paths and visibility within P. These problems include calculation of the collection of all shortest paths inside P from a given source vertex S to all the other vertices of P, calculation of the subpolygon of P consisting of points that are visible from a given segment within P, preprocessing P for fast "ray shooting" queries, and several related problems.

544 citations

••

TL;DR: The Θ(m) bound on finding the maxima of wide totally monotone matrices is used to speed up several geometric algorithms by a factor of log n.

Abstract: Let A be a matrix with real entries and let j(i) be the index of the leftmost column containing the maximum value in row i of A. A is said to be monotone if i_1 > i_2 implies that j(i_1) ≥ j(i_2). A is totally monotone if all of its submatrices are monotone. We show that finding the maximum entry in each row of an arbitrary n × m monotone matrix requires Θ(m log n) time, whereas if the matrix is totally monotone the time is Θ(m) when m ≥ n and is Θ(m(1 + log(n/m))) when m < n.
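The divide-and-conquer idea behind the bound for (plain) monotone matrices can be sketched directly: find the middle row's maximum by scanning its allowed columns, then monotonicity confines the rows above and below to the columns on either side of that maximum. This is an illustrative sketch of that recursion, not the paper's Θ(m) algorithm for totally monotone matrices:

```python
def row_maxima(M):
    """Leftmost column index of each row's maximum, for a monotone matrix.

    Monotonicity (i1 > i2 implies j(i1) >= j(i2)) lets us solve the
    middle row by a scan and then restrict the column range for the
    rows above it (columns <= best) and below it (columns >= best).
    Illustrative divide-and-conquer, not the paper's SMAWK-style bound.
    """
    n = len(M)
    ans = [0] * n

    def solve(r0, r1, c0, c1):
        if r0 > r1:
            return
        mid = (r0 + r1) // 2
        best = c0
        for j in range(c0, c1 + 1):       # scan the allowed columns
            if M[mid][j] > M[mid][best]:  # strict ">" keeps the leftmost max
                best = j
        ans[mid] = best
        solve(r0, mid - 1, c0, best)      # rows above: maxima at or left of best
        solve(mid + 1, r1, best, c1)      # rows below: maxima at or right of best

    solve(0, n - 1, 0, len(M[0]) - 1)
    return ans
```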

506 citations

••

TL;DR: This paper re-examines, in a unified framework, two classic approaches to the problem of finding a longest common subsequence (LCS) of two strings, and proposes faster implementations for both.

Abstract: This paper re-examines, in a unified framework, two classic approaches to the problem of finding a longest common subsequence (LCS) of two strings, and proposes faster implementations for both. Let l be the length of an LCS between two strings of length m and n ≥ m, respectively, and let s be the alphabet size. The first revised strategy follows the paradigm of a previous O(ln)-time algorithm by Hirschberg. The new version can be implemented in time O(lm · min{log s, log m, log(2n/m)}), which is profitable when the input strings differ considerably in size (a looser bound for both versions is O(mn)). The second strategy improves on the Hunt-Szymanski algorithm. This latter takes time O((r + n) log n), where r ≤ mn is the total number of matches between the two input strings. Such a performance is quite good (O(n log n)) when r ~ n, but it degrades to Θ(mn log n) in the worst case. On the other hand, the variation presented here is never worse than linear time in the product mn. The exact time bound derived for this second algorithm is O(m log n + d log(2mn/d)), where d ≤ r is the number of dominant matches (elsewhere referred to as minimal candidates) between the two strings. Both algorithms require an O(n log s) preprocessing that is nearly standard for the LCS problem, and they make use of simple and handy auxiliary data structures.
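For reference, the baseline that both strategies improve on is the classic O(mn) dynamic program for LCS length, sketched here (this is the textbook recurrence, not one of the paper's faster algorithms):

```python
def lcs_length(a, b):
    """Length of a longest common subsequence of a and b.

    Classic O(mn)-time, O(mn)-space dynamic program: dp[i][j] is the
    LCS length of the prefixes a[:i] and b[:j].
    """
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

assert lcs_length("ABCBDAB", "BDCABA") == 4  # e.g. "BCBA"
```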

220 citations

••

TL;DR: An easily implemented modification to the divide-and-conquer algorithm for computing the Delaunay triangulation of sites in the plane reduces its expected running time to O(n log log n) for a large class of distributions that includes the uniform distribution in the unit square.

Abstract: An easily implemented modification to the divide-and-conquer algorithm for computing the Delaunay triangulation of n sites in the plane is presented. The change reduces its Θ(n log n) expected running time to O(n log log n) for a large class of distributions that includes the uniform distribution in the unit square. Experimental evidence presented demonstrates that the modified algorithm performs very well for n ≤ 2^16, the range of the experiments. It is conjectured that the average number of edges it creates (a good measure of its efficiency) is no more than twice optimal for n less than seven trillion. The improvement is shown to extend to the computation of the Delaunay triangulation in the L_p metric for 1 < p ≤ ∞.

211 citations

••

TL;DR: This work defines a large class of problems requiring coordinated, simultaneous action in synchronous systems, and gives a method of transforming specifications of such problems into protocols that are optimal in all runs: these protocols are guaranteed to perform the simultaneous actions as soon as any other protocol could possibly perform them.

Abstract: This work applies the theory of knowledge in distributed systems to the design of efficient fault-tolerant protocols. We define a large class of problems requiring coordinated, simultaneous action in synchronous systems, and give a method of transforming specifications of such problems into protocols that are optimal in all runs: for every possible input to the system and faulty processor behavior, these protocols are guaranteed to perform the simultaneous actions as soon as any other protocol could possibly perform them. This transformation is performed in two steps. In the first step, we extract directly from the problem specification a high-level protocol programmed using explicit tests for common knowledge. In the second step, we carefully analyze when facts become common knowledge, thereby providing a method of efficiently implementing these protocols in many variants of the omissions failure model. In the generalized omissions model, however, our analysis shows that testing for common knowledge is NP-hard. Given the close correspondence between common knowledge and simultaneous actions, we are able to show that no optimal protocol for any such problem can be computationally efficient in this model. The analysis in this paper exposes many subtle differences between the failure models, including the precise point at which this gap in complexity occurs.

182 citations

••

TL;DR: A central result of this paper is a “rounding algorithm” for obtaining integral approximations to solutions of linear equations, given a matrix A and a real vector x.

Abstract: We examine the problem of routing wires of a VLSI chip, where the pins to be connected are arranged in a regular rectangular array. We obtain tight bounds for the worst-case “channel-width” needed to route an n × n array, and develop provably good heuristics for the general case. Single-turn routings are proved to be near-optimal in the worst case. A central result of our paper is a “rounding algorithm” for obtaining integral approximations to solutions of linear equations. Given a matrix A and a real vector x, we can find an integral x̂ such that for all i, |x_i − x̂_i| < 1 and |(Ax)_i − (Ax̂)_i| < Δ. Our error bound Δ is defined in terms of sign-segregated column sums of A:
$$\Delta = \max_j \left( \max\left\{ \sum_{i:a_{ij} > 0} a_{ij},\; \sum_{i:a_{ij} < 0} -a_{ij} \right\} \right).$$

115 citations

••

TL;DR: The sorting network described by Ajtai et al. was the first to achieve a depth of O(log n), and the networks introduced here are simplifications and improvements based strongly on their work.

Abstract: The sorting network described by Ajtai et al. was the first to achieve a depth of O(log n). The networks introduced here are simplifications and improvements based strongly on their work. While the constants obtained for the depth bound still prevent the construction being of practical value, the structure of the presentation offers a convenient basis for further development.

112 citations

••

TL;DR: A number of well-known data structures for computing functions on linear lists are examined, and it is shown that they can be canonically transformed into data structures for computing the same functions over free trees, establishing new upper bounds on the complexity of several query-answering problems.

Abstract: The relationship between linear lists and free trees is studied. We examine a number of well-known data structures for computing functions on linear lists and show that they can be canonically transformed into data structures for computing the same functions defined over free trees. This is used to establish new upper bounds on the complexity of several query-answering problems.

106 citations

••

TL;DR: A new and efficient algorithm for planning collision-free motion of a line segment (a rod or a “ladder”) in two-dimensional space amidst polygonal obstacles is presented, which is useful in obtaining efficient motion-planning algorithms for other more complex robot systems.

Abstract: We present here a new and efficient algorithm for planning collision-free motion of a line segment (a rod or a “ladder”) in two-dimensional space amidst polygonal obstacles. The algorithm uses a different approach than those used in previous motion-planning techniques; namely, it calculates the boundary of the (three-dimensional) space of free positions of the ladder, and then uses this boundary for determining the existence of required motions, and plans such motions whenever possible. The algorithm runs in time O(K log n) = O(n^2 log n), where n is the number of obstacle corners and where K is the total number of pairs of obstacle walls or corners of distance less than or equal to the length of the ladder. The algorithm thus has the same complexity as the best previously known algorithm of Leven and Sharir [5], but if the obstacles are not too cluttered together it will run much more efficiently. The algorithm also serves as an initial demonstration of the viability of the technique it uses, which we expect to be useful in obtaining efficient motion-planning algorithms for other more complex robot systems.

••

TL;DR: It is proved that the greedy triangulation heuristic for minimum weight triangulation of convex polygons yields solutions within a constant factor from the optimum, in O(n^2) time and O(n) space.

Abstract: We prove that the greedy triangulation heuristic for minimum weight triangulation of convex polygons yields solutions within a constant factor from the optimum. For interesting classes of convex polygons, we derive small upper bounds on the constant approximation factor. Our results contrast with Kirkpatrick's Ω(n) bound on the approximation factor of the Delaunay triangulation heuristic for minimum weight triangulation of convex n-vertex polygons. On the other hand, we present a straightforward implementation of the greedy triangulation heuristic for an n-vertex convex point set or a convex polygon taking O(n^2) time and O(n) space. To derive the latter result, we show that given a convex polygon P, one can find for all vertices v of P a shortest diagonal of P incident to v in linear time. Finally, we observe that the greedy triangulation for convex polygons having the so-called semicircular property can be constructed in time O(n log n).
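The greedy heuristic itself is simple to state: repeatedly accept the shortest remaining diagonal that crosses none already accepted. A naive cubic-time sketch for a convex polygon (illustrative only, not the paper's O(n^2)-time, O(n)-space implementation):

```python
from itertools import combinations
from math import dist

def greedy_triangulation(poly):
    """Greedy triangulation of a convex polygon (vertices in order).

    Accept diagonals in increasing length order, skipping any that
    crosses an already-accepted one. For a convex polygon, diagonals
    (i, j) and (k, l) with i < j properly cross iff exactly one of
    k, l lies strictly between i and j and all four indices differ.
    """
    n = len(poly)
    diagonals = [(i, j) for i, j in combinations(range(n), 2)
                 if (j - i) % n not in (1, n - 1)]  # exclude polygon edges
    diagonals.sort(key=lambda d: dist(poly[d[0]], poly[d[1]]))

    def crosses(d1, d2):
        (i, j), (k, l) = d1, d2
        inside = lambda x: i < x < j
        return inside(k) != inside(l) and len({i, j, k, l}) == 4

    chosen = []
    for d in diagonals:
        if all(not crosses(d, c) for c in chosen):
            chosen.append(d)
    return chosen

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
assert len(greedy_triangulation(square)) == 1  # n - 3 diagonals
```

Any maximal set of pairwise non-crossing diagonals of a convex n-gon is a triangulation, so the greedy loop always ends with exactly n − 3 diagonals.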

••

TL;DR: An algorithm for determining the shortest restricted path motion of a polygonal object amidst polygonal obstacles, and a variation of this algorithm which minimizes any positive linear combination of the length traversed by P and the angular rotation of the ladder about P.

Abstract: We present an algorithm for determining the shortest restricted path motion of a polygonal object amidst polygonal obstacles. The class of motions which are allowed can be described as follows: a designated vertex, P, of the polygonal object traverses a piecewise linear path whose breakpoints are restricted to the vertices of the obstacles. The distance measure being minimized is the length of the path traversed by P. Our algorithm runs in time O(n^4 log n). We also discuss a variation of this algorithm which minimizes any positive linear combination of the length traversed by P and the angular rotation of the ladder about P. This variation requires O(n^5) time.

••

TL;DR: A linear-time randomizing algorithmic paradigm for finding local roots, optima, intersection points, etc., of ranked functions in a set of functions defined on a common interval U is given.

Abstract: Consider a set F of n functions defined on a common interval U. A ranked function over F is defined from the functions of F by using order information, such as the k-th largest function, the sum of the k largest functions, etc. We give a linear-time randomizing algorithmic paradigm for finding local roots, optima, intersection points, etc., of ranked functions. The algorithm is generalized to the Cost Effective Resource Allocation Problem and to various variants of the Parametric Knapsack Problem.

••

TL;DR: A theory of VLSI transformability reveals the inherent AT² and A complexity of a large class of related problems.

Abstract: The two basic performance parameters that capture the complexity of any VLSI chip are the area of the chip, A, and the computation time, T. A systematic approach for establishing lower bounds on A is presented. This approach relates A to the bisection flow, φ. A theory of problem transformation based on φ, which captures both AT² and A complexity, is developed. A fundamental problem, namely element uniqueness, is chosen as a computational prototype. It is shown under general input/output protocol assumptions that any chip that decides whether n elements (each with (1 + ε) log n bits) are unique must have φ = Ω(n log n), and thus AT² = Ω(n² log² n) and A = Ω(n log n). A theory of VLSI transformability reveals the inherent AT² and A complexity of a large class of related problems.

••

TL;DR: Given two processes, each holding a totally ordered set of n elements, a distributed algorithm is presented for finding the median of these 2n elements using no more than log n + O(√log n) messages; if the elements are distinct, only log n + O(1) messages are required.

Abstract: Given two processes, each holding a totally ordered set of n elements, we present a distributed algorithm for finding the median of these 2n elements using no more than log n + O(√log n) messages; if the elements are distinct, only log n + O(1) messages are required. The communication complexity of our algorithm is better than the previously known result, which takes 2 log n messages.
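A centralized sketch of the underlying idea: selecting the k-th smallest element of two sorted lists by discarding roughly half of one list per comparison round, which is where a logarithmic number of exchanges comes from. This is an illustrative analogue of the two-process setting, not the paper's protocol:

```python
def kth_of_two(a, b, k):
    """k-th smallest (1-indexed) element of the union of sorted lists a, b.

    Each iteration compares one element from each list and discards
    about k/2 elements from one of them; in a two-process setting,
    each such comparison would cost one round of message exchange,
    giving a logarithmic round count. Illustrative sketch only.
    """
    lo_a = lo_b = 0
    while True:
        if lo_a == len(a):          # a exhausted: answer lies in b
            return b[lo_b + k - 1]
        if lo_b == len(b):          # b exhausted: answer lies in a
            return a[lo_a + k - 1]
        if k == 1:
            return min(a[lo_a], b[lo_b])
        half = k // 2
        i = min(lo_a + half, len(a)) - 1
        j = min(lo_b + half, len(b)) - 1
        if a[i] <= b[j]:            # a[lo_a..i] cannot contain the answer's successors
            k -= i - lo_a + 1
            lo_a = i + 1
        else:
            k -= j - lo_b + 1
            lo_b = j + 1

a, b = [1, 3, 5, 7], [2, 4, 6, 8]
assert kth_of_two(a, b, 4) == 4  # lower median of the 2n = 8 elements
```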

••

TL;DR: A subclass (named WRW#) of WRW is introduced, and it is shown that the fixed-point set of the cautious WRW#-scheduler properly contains CPSR, allowing more concurrency than any CPSR-scheduler.

Abstract: Given a class C of serializable schedules, a cautious C-scheduler is an on-line transaction scheduler that outputs schedules in class C and never resorts to rollbacks. Such a scheduler grants the current request if and only if the partial schedule it has granted so far, followed by the current request, can be extended to a schedule in C. A suitable extension is searched among the set of all possible sequences of the pending steps, which are predeclared by the transactions whose first requests have already arrived. If the partial schedule cannot be extended to a schedule in C, then the current request is delayed. An efficient cautious CPSR-scheduler has been proposed by Casanova and Bernstein.
This paper discusses cautious WRW-scheduling, where WRW is the largest polynomially recognizable subclass of serializable schedules currently known. Since cautious WRW-scheduling is, in general, NP-complete, as shown in this paper, we introduce a subclass (named WRW#) of WRW and discuss an efficient cautious WRW#-scheduler. We also show that the fixed-point set of the cautious WRW#-scheduler properly contains CPSR. Therefore, our WRW#-scheduler allows more concurrency than any CPSR-scheduler.

••

TL;DR: The problem of routing multiple rows of cells with linear nets is studied, and a polynomial-time heuristic is presented and shown to produce total densities within 50% of the optimal.

Abstract: The problem of routing multiple rows of cells with linear nets is studied. In keeping with various local routing strategies, two separate optimization criteria are considered: maximum channel density and total channel density. In each case the problem is shown to be NP-complete for a fixed number of rows. In addition, for the total density problem, a polynomial-time heuristic is presented and is shown to produce total densities within 50% of the optimal.

••

TL;DR: In recent years, with the rise of computational geometry and an increasing number of people working on the subject, many formerly neglected problems were solved efficiently and elegantly, but at the same time, even more arose and became of interest: a few are hereafter offered.

Abstract: In recent years, with the rise of computational geometry and an increasing number of people working on the subject (see Lee and Preparata [5] for a survey), many formerly neglected problems were solved efficiently and elegantly, but at the same time, as is natural for every expanding field, even more arose and became of interest: a few are hereafter offered. Although the minimum spanning tree problem (MST) has long been solved (at least in theory), related questions still remain unanswered. For example, let us consider n points in the plane, with which we wish to form a "reasonable" polygon. In practice these points are on the boundary of an unknown object, and the polygon is "reasonable" if the order of the points is roughly the same on the boundary of the object and on the polygon. A traveling salesman tour (TST) would be a nice solution, but finding it is an NP-hard problem, and so we would like an easier criterion for constructing a good polygon. Suppose that such a polygon is part of the Delaunay triangulation of the points. It contains (n − 2) Delaunay triangles, and the (graph-theoretic) dual of these triangles is a subtree of the Voronoi diagram, covering (n − 2) Voronoi nodes (Figure 1). A good criterion of "reasonableness" might be to minimize the cost (cumulated edge length) of this tree: whence the question of choosing k = n − 2 nodes among N = 2n − e − 2 (the number of nodes of the Voronoi diagram, e being the number of edges on the convex hull of the n data points), in order to minimize the cost of their MST (problem proposed by J. D. Boissonnat). Another problem in connection with Delaunay graphs, appealing but deceptive in its apparent simplicity, was proposed to me by J. D. Boissonnat and H. Crapo: this is the question of proving that a planar Delaunay triangulation always has a Hamiltonian cycle, which, if true, might lead to nice heuristics for the TST problem.
For the definition of a Delaunay triangulation see, for example, Shamos and Hoey [8]. Algorithms for solving geometrical problems in the plane abound, and their worst-case complexity has been fairly well studied, but many difficulties arise

••

TL;DR: An efficient algorithm is presented that solves a generalization of the Longest Common Subsequence problem, in which one of the two input strings contains sets of symbols which may be permuted; the problem arises from a music application.

Abstract: An efficient algorithm is presented that solves a generalization of the Longest Common Subsequence problem, in which one of the two input strings contains sets of symbols which may be permuted. This problem arises from a music application.

••

TL;DR: This paper describes the data-structure of Corner Stitching in greater depth than previously, and presents algorithms for the basic operations described in Ousterhout's original paper.

Abstract: Corner Stitching was first presented by Ousterhout as a data-structure for VLSI CAD. This paper describes the data-structure in detail. It presents, in greater depth than previously, algorithms for the basic operations described in Ousterhout's original paper. The algorithms for enumerating and updating arbitrary rectangular areas are new. Their constant space complexity bounds are an improvement over previous algorithms for those operations that were recursive. From a practical standpoint, the elimination of the recursion has also made them much faster.

••

TL;DR: The compilation of regular expressions into specifications for programmable logic arrays (PLAs) that will implement the required function is discussed, along with two methods of translating the expressions into nondeterministic automata and the advantages of each.

Abstract: The language of regular expressions is a useful one for specifying certain sequential processes at a very high level. They allow easy modification of designs for circuits, like controllers, that are described by patterns of events they must recognize and the responses they must make to those patterns. This paper discusses the compilation of such expressions into specifications for programmable logic arrays (PLAs) that will implement the required function. A regular expression is converted into a nondeterministic finite automaton, and then the automaton states are encoded as values on wires that are inputs and outputs of a PLA. The translation of regular expressions into nondeterministic automata by two different methods is discussed, along with the advantages of each method. A major part of the compilation problem is selection of good state codes for the nondeterministic automata; one successful strategy and its application to microcode compaction is explained in the paper.