
Showing papers presented at "Symposium on Computational Geometry in 1986"


Proceedings ArticleDOI
Steven Fortune
01 Aug 1986
TL;DR: A transformation is used to obtain simple sweepline algorithms for computing the Voronoi diagram of point sites, of line segment sites, and of weighted point sites.
Abstract: We present a transformation that can be used to compute Voronoi diagrams with a sweepline technique. The transformation is used to obtain simple algorithms for computing the Voronoi diagram of point sites, of line segment sites, and of weighted point sites. All algorithms have O(n log n) worst case running time and use O(n) space.

641 citations
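A minimal usage sketch (not part of the original abstract): it computes a Voronoi diagram of point sites with scipy.spatial.Voronoi, which is backed by Qhull rather than the sweepline transformation described above, so it only illustrates the structure the O(n log n) algorithm produces, not the algorithm itself.

```python
# Minimal sketch (not Fortune's sweepline): computing a Voronoi diagram of
# point sites with SciPy, which delegates to Qhull. Shown only to illustrate
# the kind of output the paper's O(n log n) algorithm produces.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
sites = rng.random((10, 2))          # n = 10 random point sites in the unit square

vor = Voronoi(sites)
print(vor.vertices)                  # Voronoi vertices (circumcenters of Delaunay triangles)
print(vor.ridge_points)              # pairs of sites whose cells share an edge
print(vor.regions)                   # vertex indices of each cell (-1 marks an unbounded direction)
```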


Proceedings ArticleDOI
01 Aug 1986
TL;DR: A new technique for half-space and simplex range queries uses random sampling to build a partition-tree structure and introduces the concept of an ε-net for an abstract set of ranges to describe the desired result of this random sampling.
Abstract: We present a new technique for half-space and simplex range query using O(n) space and O(n^α) query time, where α < 1 is a constant depending on the dimension d. These bounds are better than those previously published for all d ≥ 2. The technique uses random sampling to build a partition-tree structure. We introduce the concept of an ε-net for an abstract set of ranges to describe the desired result of this random sampling and give necessary and sufficient conditions that a random sample is an ε-net with high probability. We illustrate the application of these ideas to other range query problems.

378 citations
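A hedged illustration of the ε-net notion for half-plane ranges: a random sample N of P is an ε-net if every half-plane containing at least an ε fraction of P also contains a point of N. The sketch below checks this empirically against random half-planes; the sample size used is an ad-hoc assumption, not the bound proved in the paper.

```python
# Illustrative sketch of the epsilon-net property for half-plane ranges.
# A sample N of P is an eps-net if every half-plane containing >= eps*|P|
# points of P also contains a point of N. We test this empirically against
# randomly generated half-planes; the sample size is an ad-hoc choice (the
# paper bounds the required size independently of |P|).
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((2000, 2))            # the point set
eps = 0.1
sample_size = 100                    # ad-hoc choice for this demo
N = P[rng.choice(len(P), size=sample_size, replace=False)]

def violates(a, b):
    """True if the half-plane {x : a.x <= b} holds >= eps*|P| points but no sample point."""
    heavy = (P @ a <= b).sum() >= eps * len(P)
    hit = (N @ a <= b).any()
    return heavy and not hit

trials = 10000
normals = rng.normal(size=(trials, 2))
offsets = rng.uniform(-1.5, 1.5, size=trials)
bad = sum(violates(a, b) for a, b in zip(normals, offsets))
print(f"violating half-planes found: {bad} / {trials}")
```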


Proceedings ArticleDOI
P Chew
01 Aug 1986
TL;DR: Given a source, a destination, and a set of polygonal obstacles of size n, an O(n) size data structure can be used to find a reasonable approximation to the shortest path between the source and the destination in O(n log n) time.
Abstract: Given a set S of points in the plane, there is a triangulation of S such that a path found within this triangulation has length bounded by a constant times the straight-line distance between the endpoints of the path. Specifically, for any two points a and b of S there is a path along edges of the triangulation with length less than √10 times |ab|, where |ab| is the straight-line Euclidean distance between a and b. Thus, a shortest path in this planar graph is less than about 3 times longer than the corresponding straight-line distance. The triangulation that has this property is the L1 metric Delaunay triangulation for the set S. This result can be applied to motion planning in the plane. Given a source, a destination, and a set of polygonal obstacles of size n, an O(n) size data structure can be used to find a reasonable approximation to the shortest path between the source and the destination in O(n log n) time.

259 citations
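An empirical sketch of the spanner property described above. It uses the Euclidean (L2) Delaunay triangulation from SciPy rather than the paper's L1-metric Delaunay triangulation, so the √10 bound is not guaranteed to hold here; the code merely shows how the stretch factor of a triangulation can be measured.

```python
# Sketch: measuring the stretch factor of a triangulation used as a spanner.
# Caveat: this uses the Euclidean (L2) Delaunay triangulation, not the L1-metric
# Delaunay triangulation of the paper, so the sqrt(10) bound is not guaranteed.
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(2)
pts = rng.random((200, 2))
tri = Delaunay(pts)

n = len(pts)
g = lil_matrix((n, n))
for simplex in tri.simplices:                 # add the three edges of each triangle
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        g[a, b] = g[b, a] = np.linalg.norm(pts[a] - pts[b])

graph_dist = dijkstra(g.tocsr(), directed=False)         # shortest paths along triangulation edges
straight = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
mask = ~np.eye(n, dtype=bool)
print("max stretch:", (graph_dist[mask] / straight[mask]).max())
```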


Proceedings ArticleDOI
01 Aug 1986
TL;DR: It is shown that the problem is NP-hard when the space is a polygon with holes even if the polygon and the holes are convex or rectilinear, and an O(log n) algorithm is given to find the shortest route that visits a point and two convex polygons, where n is the total number of vertices.
Abstract: In this paper we consider the problem of finding shortest routes from which every point in a given space is visible (watchman routes). We show that the problem is NP-hard when the space is a polygon with holes, even if the polygon and the holes are convex or rectilinear. The problem remains NP-hard for simple polyhedra. We present O(n) and O(n log n) algorithms to find a shortest route in a simple rectilinear monotone polygon and a simple rectilinear polygon respectively, where n is the number of vertices in the polygon. Finding optimum watchman routes in simple polygons is closely related to the problem of finding shortest routes that visit a set of convex polygons in the plane in the presence of obstacles. We show that finding a shortest route that visits a set of convex polygons is NP-hard even when there are no obstacles. We present an O(log n) algorithm to find the shortest route that visits a point and two convex polygons, where n is the total number of vertices.

150 citations


Proceedings ArticleDOI
01 Aug 1986
TL;DR: A worst-case lower bound of Ω(n⁴) is established for explicitly computing the boundary of the visibility polygon from a line segment in the presence of other line segments, and an optimal algorithm to construct the boundary is designed.
Abstract: ElGindy and Avis [EA] considered the problem of determining the visibility polygon from a point inside a polygon. Their algorithm runs in optimal O(n) time and space, where n is the number of vertices of the given polygon. Later their result was generalized to visibility polygons from an edge by ElGindy [El], and Lee and Lin [LL]. Both independently discovered O(n log n) algorithms for this problem. Very recently Guibas et al. [GH] have proposed an optimal O(n) time algorithm. None of these algorithms work for polygons with holes. In this paper we consider the problem of computing visibility polygons inside a polygon P that may have holes. Our first result is an algorithm for computing the visibility polygon from a given point inside P. The algorithm runs in O(n log n) time, which is proved to be optimal by reduction from the problem of sorting n positive integers (Asano et al. [AA] have obtained this result independently). Next we consider the problem of determining the visibility polygon from a line segment. As our main result, we establish a worst-case lower bound of Ω(n⁴) for explicitly computing the boundary of the visibility polygon from a line segment in the presence of other line segments, and design an optimal algorithm to construct the boundary. We also present an O(n²) time and space algorithm if the visibility polygon can be represented as a union of several polygons. The latter algorithm is also proved to be optimal in the worst case.

128 citations


Proceedings ArticleDOI
01 Aug 1986
TL;DR: The Θ(m) bound on finding the maxima of wide totally monotone matrices is used to speed up several geometric algorithms by a factor of log n.
Abstract: Let A be a matrix with real entries and let j(i) be the index of the leftmost column containing the maximum value in row i of A. A is said to be monotone if i1 > i2 implies that j(i1) ≥ j(i2). A is totally monotone if all of its submatrices are monotone. We show that finding the maximum entry in each row of an arbitrary n × m monotone matrix requires Θ(m log n) time, whereas if the matrix is totally monotone the time is Θ(m) when m ≥ n and is Θ(m(1 + log(n/m))) when m < n.

127 citations
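A sketch of the simple divide-and-conquer recursion that exploits monotonicity of the row-maximum column index; it attains the O(m log n + n) behaviour for monotone matrices. The Θ(m) algorithm for totally monotone matrices (now known as SMAWK) is more intricate and is not reproduced here.

```python
# Sketch of the classical divide-and-conquer for row maxima of a monotone
# matrix: the leftmost maximum column index j(i) is nondecreasing in i, so the
# middle row's maximum splits the column range for the remaining rows.
# This gives O(m log n + n) time; the paper's Theta(m) bound for totally
# monotone matrices needs the more intricate algorithm now known as SMAWK.
import numpy as np

def monotone_row_maxima(A):
    n, m = A.shape
    j = np.zeros(n, dtype=int)                     # j[i] = leftmost argmax of row i

    def solve(top, bottom, left, right):
        if top > bottom:
            return
        mid = (top + bottom) // 2
        jm = left + int(np.argmax(A[mid, left:right + 1]))   # argmax returns the leftmost maximum
        j[mid] = jm
        solve(top, mid - 1, left, jm)              # rows above: maxima in columns <= jm
        solve(mid + 1, bottom, jm, right)          # rows below: maxima in columns >= jm

    solve(0, n - 1, 0, m - 1)
    return j

# Example: row maxima shift rightwards as the row index grows (a monotone matrix).
x = np.arange(8)[:, None]
y = np.arange(12)[None, :]
A = -(x * 1.5 - y) ** 2
print(monotone_row_maxima(A))
```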


Proceedings ArticleDOI
01 Aug 1986
TL;DR: This paper shows how certain geometric convolution operations can be computed efficiently; the algorithms rely on new optimal solutions for certain reciprocal search problems, such as finding intersections between “blue” and “green” intervals, and overlaying convex planar subdivisions.
Abstract: In this paper we show how certain geometric convolution operations can be computed efficiently. Here “efficiently” means that our algorithms have running time proportional to the input size plus the output size. Our convolution algorithms rely on new optimal solutions for certain reciprocal search problems, such as finding intersections between “blue” and “green” intervals, and overlaying convex planar subdivisions.

93 citations
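One of the reciprocal search subproblems mentioned above, reporting all intersecting blue/green interval pairs, admits a short output-sensitive sweep; the sketch below is a plausible O((n + k) log n) approach, not necessarily the paper's optimal method.

```python
# Sketch of a reciprocal search subproblem: report every intersecting pair of
# "blue" and "green" intervals. A single sweep over sorted endpoints keeps the
# currently open intervals of each colour; this runs in O((n + k) log n) time,
# where k is the output size (a plausible approach, not necessarily the paper's).
def blue_green_intersections(blue, green):
    events = []                                   # (coordinate, order, colour, index)
    for idx, (lo, hi) in enumerate(blue):
        events.append((lo, 0, 'B', idx))
        events.append((hi, 1, 'B', idx))
    for idx, (lo, hi) in enumerate(green):
        events.append((lo, 0, 'G', idx))
        events.append((hi, 1, 'G', idx))
    events.sort()                                 # starts (order 0) precede ends at equal coordinates

    open_b, open_g, pairs = set(), set(), []
    for _, kind, colour, idx in events:
        if kind == 0:                             # interval opens: pair it with open intervals of the other colour
            if colour == 'B':
                pairs.extend((idx, g) for g in open_g)
                open_b.add(idx)
            else:
                pairs.extend((b, idx) for b in open_b)
                open_g.add(idx)
        else:                                     # interval closes
            (open_b if colour == 'B' else open_g).discard(idx)
    return pairs

print(blue_green_intersections([(0, 2), (3, 5)], [(1, 4), (6, 7)]))   # -> [(0, 0), (1, 0)]
```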


Proceedings ArticleDOI
D A Field
01 Aug 1986
TL;DR: A measure of the quality of tetrahedra is proposed and used to identify undesirable tetrahedra created due to point distributions and geometric constraints of solid models; nonrandomness imposed by the geometry can be ameliorated by using tetrahedral decompositions of icosahedra to fill space.
Abstract: Computer generated solid models must be decomposed into finite element meshes for analysis by the Finite Element Method. To enable decompositions of complex solid models, tetrahedra are employed and, to avoid badly skewed tetrahedra for finite element analysis, a Delaunay triangulation is created by Watson's Algorithm [6]. Certain two-dimensional properties of Delaunay triangulations do not extend to the three-dimensional implementation of Watson's Algorithm. Furthermore, serious numerical difficulties can occur due to the nonrandomness of triangulation points. Nonrandomness imposed by the geometry can be ameliorated by using tetrahedral decompositions of icosahedra to fill space. A measure of the quality of tetrahedra is proposed and used to identify undesirable tetrahedra created due to point distributions and geometric constraints of solid models. Postprocessing Delaunay triangulations to rectify undesirable tetrahedra is briefly discussed.

80 citations
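The paper's exact quality measure is not reproduced here; the sketch below computes the normalized radius ratio 3r/R (inradius over circumradius), a standard way to flag badly skewed tetrahedra, on the assumption that a geometric ratio of this kind is what is intended.

```python
# Sketch of one common tetrahedron quality measure, the normalized radius ratio
# 3r/R (inradius over circumradius): it equals 1 for a regular tetrahedron and
# approaches 0 for a degenerate (sliver) one. This is an illustrative stand-in;
# the paper's own measure may differ.
import numpy as np

def radius_ratio(p0, p1, p2, p3):
    p = np.array([p0, p1, p2, p3], dtype=float)
    vol = abs(np.linalg.det(p[1:] - p[0])) / 6.0
    # total surface area from the four triangular faces
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    area = sum(np.linalg.norm(np.cross(p[b] - p[a], p[c] - p[a])) / 2.0
               for a, b, c in faces)
    r = 3.0 * vol / area                          # inradius
    # circumcenter c solves |c - pi|^2 = |c - p0|^2 for i = 1, 2, 3
    A = 2.0 * (p[1:] - p[0])
    b = np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    c = np.linalg.solve(A, b)
    R = np.linalg.norm(c - p[0])                  # circumradius
    return 3.0 * r / R

regular = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
sliver = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 0.01)]
print(radius_ratio(*regular))   # close to 1
print(radius_ratio(*sliver))    # close to 0
```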


Proceedings ArticleDOI
01 Aug 1986
TL;DR: The complexity of the hidden-line problem for an N-edge simple polyhedron is Θ(N²), improving on previously published algorithms for the problem.
Abstract: An O(N²) upper time bound is given for determining the visibility of the edges of a set of polygons with altogether N edges in three-dimensional space. The best previously published algorithms for the problem take O(N² log N) time in the worst case. An Ω(N²) lower bound is also given for determining the visibility of a simple polyhedron with N edges. So we can conclude that the complexity of the hidden-line problem for an N-edge simple polyhedron is Θ(N²).

68 citations


Proceedings ArticleDOI
01 Aug 1986
TL;DR: This work presents techniques which result in improved parallel algorithms for a number of problems whose efficient sequential algorithms use the plane-sweeping paradigm, and never uses the AKS sorting network in any of them.
Abstract: We present techniques which result in improved parallel algorithms for a number of problems whose efficient sequential algorithms use the plane-sweeping paradigm. The problems for which we give improved algorithms include intersection detection, trapezoidal decomposition, triangulation, and planar point location. Our technique can be used to improve on the previous time bound while keeping the space and processor bounds the same, or improve on the previous space bound while keeping the time and processor bounds the same. We also give efficient parallel algorithms for visibility from a point, 3-dimensional maxima, multiple range-counting, and rectilinear segment intersection counting. We never use the AKS sorting network in any of our algorithms.

66 citations


Proceedings ArticleDOI
01 Aug 1986
TL;DR: The Delaunay tree provides efficient solutions to several problems such as building the Delaunay triangulation of a finite set of n points in any dimension, locating a point in the triangulation, defining neighborhood relationships in the triangulation, and computing intersections.
Abstract: We present, in this paper, a new hierarchical data structure called the Delaunay tree. It is defined from the Delaunay triangulation and, roughly speaking, represents a triangulation as a hierarchy of balls. The Delaunay tree provides efficient solutions to several problems such as building the Delaunay triangulation of a finite set of n points in any dimension, locating a point in the triangulation, defining neighborhood relationships in the triangulation and computing intersections. The algorithms are extremely simple and are analyzed from both theoretical and practical points of view.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: In this paper, O(n²) time algorithms are presented for the problems of covering a horizontally convex orthogonal polygon with the minimum number of orthogonal convex polygons and with the minimum number of orthogonal star-shaped polygons.
Abstract: In this paper we present O(n2) time algorithms for the problems of covering a horizontally convex orthogonal polygon with the minimum number of orthogonal convex polygons and with the minimum number of orthogonal star-shaped polygons.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: The modelling system uses a technique for generating polynomials for blends and fillets that allows the user complete control over their extent without requiring the user explicitly to devise the polynomial inequalities needed to represent them.
Abstract: This paper presents a new solid modelling system capable of representing three dimensional objects consisting of both simple and complicated surfaces and, equally importantly, blend surfaces between them. Implicit polynomial inequalities and set theoretic techniques are used to specify the models. The modelling system can generate shaded pictures and NC machining instructions for cutting complicated objects automatically.The modelling system uses a technique for generating polynomials for blends and fillets that allows the user complete control over their extent without requiring him or her explicitly to devise the polynomial inequalities needed to represent them.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: A modification to the divide-and-conquer algorithm of Guibas & Stolfi [GS] for computing the Delaunay triangulation of n sites in the plane reduces its expected running time and has optimal O(n log n) worst-case performance.
Abstract: We present a modification to the divide-and-conquer algorithm of Guibas & Stolfi [GS] for computing the Delaunay triangulation of n sites in the plane. The change reduces its Θ(n log n) expected running time to O(n log log n) for a large class of distributions which includes the uniform distribution in the unit square. The modified algorithm is significantly easier to implement than the optimal linear-expected-time algorithm of Bentley, Weide & Yao [BWY]. Unlike the incremental methods of Ohya, Iri & Murota [OIM] and Maus [M], it has optimal O(n log n) worst-case performance. The improvement extends to the construction of the Delaunay triangulation in the Lp metric for 1 ≤ p ≤ ∞.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: It is shown how solutions within a constant factor of the optimum can be computed in O(n log n) time, thus improving the previous O(n²) time bound; the method involves alternating search from two opposite directions.
Abstract: We consider the problem of partitioning isothetic polygons into rectangles by drawing edges of minimum total length. The problem has various applications [LPRS], e.g. in VLSI design when dividing routing regions into channels ([Riv1], [Riv2]). If the polygons contain holes, the problem is NP-hard [LPRS]. In this paper it is shown how solutions within a constant factor of the optimum can be computed in time O(n log n), thus improving the previous O(n²) time bound. An unusual divide-and-conquer technique is employed, involving alternating search from two opposite directions, and further efficiency is gained by using a fast method to sort subsets of points. Generalized Voronoi diagrams are used in combination with plane-sweeping in order to detect all “well bounded” rectangles, which are essential for the heuristic.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: Some necessary conditions for the planarity of general angle-graphs are described, and necessary and sufficient conditions for some special classes of angle-graphs are given.
Abstract: 1. Introduction: In this paper, we consider certain geometric problems regarding planar graphs. We are interested in straight line embeddings of planar graphs that satisfy certain angle constraints. Thus this area is a blend of planar graph theory and plane geometry. The main problem discussed in this paper is the problem of planarity of angle-graphs. An angle-graph is an undirected graph in which at each vertex we are given a cyclic ordering of the incident edges as well as the angles between the successive edges. We are interested in the question of when such angle-graphs have a straight planar embedding in which the angles between the edges at each vertex are preserved. In this paper we explain some partial results regarding the characterization of the planarity of angle-graphs, and pose some conjectures and open questions. In particular, we describe some necessary conditions for the planarity of general angle-graphs, and give necessary and sufficient conditions for the planarity of some special classes of angle-graphs.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: A new search technique, called the Shear-Based Format, is proposed, which shows that Yao's lower bound on the time-space tradeoff for orthogonal range queries is either optimal or nearly optimal.
Abstract: We propose a new search technique, called the Shear-Based Format, which shows that Yao's lower bound on the time-space tradeoff for orthogonal range queries is either optimal or nearly optimal [Ya82, Ya85]. Shearing is also a very pragmatic technique.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: An O(n log n) algorithm is presented for finding the minimum Euclidean visible vertex distance between two nonintersecting simple polygons, where n is the number of vertices in a polygon.
Abstract: In this paper, we present an O(n log n) algorithm for finding the minimum Euclidean visible vertex distance between two nonintersecting simple polygons, where n is the number of vertices in a polygon. The algorithm is based on applying a divide and conquer method to two preprocessed facing boundaries of the polygons. We also derive an O(n log n) algorithm for finding a minimum sequence of separating line segments between two nonintersecting polygons.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: An optimal algorithm is presented to determine whether a simple circuit exists, and deliver a simple circuit, on a set of line segments, where each segment has at least one end-point on the convex hull of the segments (a CH-connected set of segments).
Abstract: Given a set of non-intersecting line segments in the plane, we are required to connect the line segments such that they form a simple circuit (a simple polygon). However, not every set of segments can be so connected. Figure 1 shows a set of segments that does not admit a simple circuit. This leads to the challenging problem of determining when a set of segments admits a simple circuit, and if it does, then finding such a circuit. It has been shown [Rappaport] that in general, to determine whether a set of segments admits a simple circuit is NP-complete. In this paper an optimal algorithm is presented to determine whether a simple circuit exists, and deliver a simple circuit, on a set of line segments, where each segment has at least one end-point on the convex hull of the segments (a CH-connected set of segments). Furthermore this technique can be used to determine a simple circuit of minimum length, or a simple circuit that bounds the minimum area, with no increase in computational complexity. The rest of the paper is summarized as follows. In section 2 of this paper, the preliminary definitions and notation are introduced. In section 3, the geometric properties of the set of segments are used to transform the segments into an associated graph. A Hamiltonian circuit in this graph is then used to deliver the connections of segments that form the boundary of a simple polygon. In section 4, a linear algorithm is introduced which finds, if there is one, a Hamiltonian circuit in graphs of the class obtained by the transformation discussed in section 3. This algorithm actually computes the minimum weight matching on an extremely structured bipartite graph. From this the result on minimal simple circuits follows immediately. Section 5 relates the details of a necessary step in the segment to graph transformation, which involves the intersection of line segments in the plane. The paper is summarized in section 6, where the proof of optimality of the …

Proceedings ArticleDOI
P O Fjällström
01 Aug 1986
TL;DR: This paper is divided into two parts, the first of which describes a method to obtain solid models with free-form geometry from polyhedral models; this is achieved by replacing the edges of the original model with curved faces, i.e. sharp edges are replaced by rounds or fillets.
Abstract: This paper is divided into two parts, of which the first describes a method to obtain solid models with free-form geometry from polyhedral models. This is achieved by replacing the edges of the original model with curved faces, i.e. sharp edges are replaced by rounds or fillets. The “radii” of these rounds and fillets are controlled by weight values assigned to the original edges. The value of each weight can vary in the interval from 0 to 1, which gives increased possibilities for the user to control the radius of a round or fillet without changing the topology or shape of the polyhedral model. Several of the curved faces will be non-rectangular, and in the second part of the paper a method to determine surfaces for these faces is described. To do this we can either split these faces into rectangular subfaces and fit rectangular surfaces to the subfaces, or use non-rectangular surfaces. I have tried the second alternative and used a scheme that is inspired by the methods presented by Gregory & Charrot (1980), (1983) and (1984). However, my method differs from theirs in some respects: their methods are based on a convex combination of Boolean sum surfaces which interpolate position and slope along two adjacent boundaries, and the Boolean sum surfaces are based on linear Taylor interpolants; instead of Boolean sum surfaces I use convex combinations of Taylor interpolants. I have also found that linear Taylor interpolants may not always give a satisfactory shape of the interior of the surface. A considerable improvement can be obtained by using higher degree interpolants.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: A new and efficient algorithm for planning collision-free motion of a rod in the plane amidst polygonal obstacles that has the same worst-case complexity as the best previously developed algorithm of Leven and Sharir [LS1].
Abstract: We present here a new and efficient algorithm for planning collision-free motion of a rod in the plane amidst polygonal obstacles. The algorithm calculates the boundary of the space of free positions of the rod, and then uses this boundary for determining the existence of required motions. The algorithm runs in time O(K log n) where n is the number of obstacle corners and where K is the total number of pairs of obstacle walls or corners lying from one another at distance less than or equal to the length of the rod. Since K = O(n2), the algorithm has the same worst-case complexity as the best previously developed algorithm of Leven and Sharir [LS1], but if the obstacles are not too cluttered together it will run much more efficiently.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: It is shown that when P, a simplicial set of points in R^d, contains n interior points, there is always one point, called a splitter, that partitions P into d + 1 simplices, none of which contain more than dn/(d + 1) points.
Abstract: A set P of points in R^d is called simplicial if it has dimension d and contains exactly d + 1 extreme points. We show that when P contains n interior points, there is always one point, called a splitter, that partitions P into d + 1 simplices, none of which contain more than dn/(d + 1) points. A splitter can be found in O(d^4 n) time. Using this result, we give an O(d^4 n log^(1+1/d) n) algorithm for triangulating simplicial point sets that are in general position. In R^3 we give an O(n log n + k) algorithm for triangulating arbitrary point sets, where k is the number of simplices produced. We exhibit sets of 2n + 1 points in R^3 for which the number of simplices produced may vary between (n − 1)² + 1 and 2n − 2. We also exhibit point sets for which every triangulation contains a quadratic number of simplices.
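A brute-force planar (d = 2) illustration of the splitter notion, under a general-position assumption: for a triangle with n interior points, search for an interior point whose connection to the three extreme points leaves at most 2n/3 interior points in each of the three resulting triangles. This is only a checker; the paper constructs a splitter in O(d^4 n) time.

```python
# Brute-force illustration of a "splitter" in the plane (d = 2): for a simplicial
# point set (a triangle plus n interior points, in general position), find an
# interior point whose connection to the three corners leaves at most 2n/3
# interior points strictly inside each of the three sub-triangles.
# This is only a checker; the paper gives an O(d^4 n)-time construction.
import numpy as np

def side(a, b, c):
    """Sign of the z-component of (b - a) x (c - a)."""
    return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def in_triangle(p, a, b, c):
    s = {side(a, b, p), side(b, c, p), side(c, a, p)}
    return 0 not in s and len(s) == 1             # strictly inside (general position assumed)

def find_splitter(corners, interior):
    n = len(interior)
    a, b, c = corners
    for i, p in enumerate(interior):
        others = np.delete(interior, i, axis=0)
        counts = [sum(in_triangle(q, p, u, v) for q in others)
                  for u, v in ((a, b), (b, c), (c, a))]
        if max(counts) <= 2 * n / 3:
            return p, counts
    return None

rng = np.random.default_rng(3)
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
# rejection-sample points of the unit square that fall inside the triangle
pts = rng.random((400, 2))
interior = np.array([q for q in pts if in_triangle(q, *corners)])[:30]
print(find_splitter(corners, interior))
```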

Proceedings ArticleDOI
01 Aug 1986
TL;DR: Combined with existing algorithms for computing Voronoi diagrams on the surface of polyhedra, this structure provides an efficient solution to the nearest neighbor query problem on polyhedral surfaces.
Abstract: A common structure arising in computational geometry is the subdivision of a plane defined by the faces of a straight line planar graph. We consider a natural generalization of this structure on a polyhedral surface. The regions of the subdivision are bounded by geodesics on the surface of the polyhedron. A method is given for representing such a subdivision that is efficient both with respect to space and the time required to answer a number of different queries involving the subdivision. For example, given a point

Proceedings ArticleDOI
01 Aug 1986
TL;DR: An algorithm is given for a convex polygon which constructs a motion of the polygon when one exists; otherwise it reports that none exists.
Abstract: We consider the problem of moving an n-vertex simple polygon around a corner in a right-angular corridor. We give an O(n log n) algorithm for a convex polygon which constructs a motion of the polygon when one exists; otherwise it reports that none exists. In the case of non-convex polygons, we have an O(n²) time algorithm.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: This work considers the problem of answering a series of on-line queries on a static database and presents optimal algorithms for the following problems in the plane: testing convex hull membership, half-plane intersection queries and fixed-constraint multi-objective linear programming.
Abstract: We consider the problem of answering a series of on-line queries on a static database. The conventional approach to such problems involves a preprocessing phase which constructs a data structure with good search behavior. The data structure is then used to process a series of queries without any further reordering. Our approach involves dynamic or query-driven structuring of the database, i.e. we process the database only when it is required for answering a query. We present optimal algorithms for the following problems in the plane: testing convex hull membership, half-plane intersection queries and fixed-constraint multi-objective linear programming. This technique is also applied to multidimensional dominance query problems.


Proceedings ArticleDOI
01 Aug 1986
TL;DR: Two algorithms for pictures of polyhedral scenes are presented: a geometric algorithm that uses a reciprocal diagram in the plane to characterize pictures of strict oriented polyhedra in space, and a combinatorial algorithm that uses counts on the incidence structure.
Abstract: We present two algorithms for pictures of polyhedral scenes. These algorithms have been conjectured (and used for some time), but have only now been verified. The combinatorial algorithm, proposed by Sugihara, uses counts on the incidence structure to characterize pictures which are generically the projection of sharp polyhedral scenes of planes and points. The geometric algorithm, described by Clerk Maxwell and rediscovered in the last decade, uses a reciprocal diagram in the plane to characterize pictures of strict oriented polyhedra in space.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: The greedy triangulation heuristic for minimum weight triangulations of a (not necessarily convex) polygon yields solutions at most h times longer than the optimum, where h is the diameter of the tree dual to the produced greedy triangulation of the polygon.
Abstract: Manacher and Zobrist conjectured that the greedy triangulation heuristic for minimum weight triangulation of n-point planar point sets yields solutions within an O(ne), e

Proceedings ArticleDOI
V Visvanathan, L S Milor
01 Aug 1986
TL;DR: An efficient algorithm to determine the image of a parallelepiped under a linear transformation is presented, based on specifying the boundary hyperplanes that define the image polytope and is of polynomial complexity in n.
Abstract: An efficient algorithm to determine the image of a parallelepiped under a linear transformation is presented. The work was motivated by certain problems in the testing of analog integrated circuits. The method is based on specifying the boundary hyperplanes that define the image polytope and is of polynomial complexity in n (the dimension of the parallelepiped), for a fixed value of the dimension of the range of the linear transformation. The algorithm has been implemented using library routines from LINPACK.

Proceedings ArticleDOI
01 Aug 1986
TL;DR: A computer program is described, which constructs a simple n-polyhedron c i rcumscr ib ing a sphere, and then t i l t s each face plane, in every i t e r a t ion, according to a scheme which has been found to reduce the volume of the polyhedron.
Abstract: Let us define an n-polyhedron as a convex polyhedron with n faces, and l e t us ca l l a Rolyhedron simple i f a l l of i t s ver t ices are of degree three. We describe a computer program ca l led LINDELOF, which f i r s t constructs a simple n-polyhedron c i rcumscr ib ing a sphere, and then t i l t s each face plane, in every i t e r a t ion , according to a scheme which has been found to reduce the volume of the polyhedron. The i n i t i a l polyhedron is e i t he r randomly generated, or else constructed from spec i f ied tangent points . At the ~-th i t e r a t ion , fo r each face f i ( ~ ) ( i=1,2 . . . . . n) we define the defectDi(~ ) = tan [6i(~)], where 6i(~ ) is the central angle subtended by the face centroid ci(~) and the face tangent point r i (~), and the correction Ki(X ) = tan [Ki(~)], where Ki(~) is the angle through which the plane of the face fi(~) rolls on tile sphere. The relation between correction and defect is i l lustrated in Fig. I. For each face fi(~) in every iteration,