
Showing papers presented at "Symposium on Computational Geometry in 1997"


Proceedings ArticleDOI
01 Aug 1997
TL;DR: This paper gives the first methods to obtain seed sets that are provably small in size, based on a variant of the contour tree (or topographic change tree), and develops a simple approximation algorithm giving a seed set of size at most twice the size of the minimum once the contour tree is known.
Abstract: For 2D or 3D meshes that represent the domain of a continuous function to the reals, the contours (or isosurfaces) of a specified value are an important way to visualize the function. To find such contours, a seed set can be used for the starting points from which the traversal of the contours can begin. This paper gives the first methods to obtain seed sets that are provably small in size. They are based on a variant of the contour tree (or topographic change tree). We give a new, simple algorithm to compute such a tree in regular and irregular meshes that requires O(n log n) time in 2D for meshes with n elements, and O(n^2) time in higher dimensions. The additional storage overhead is proportional to the maximum size of any contour (linear in the worst case, but typically less). Given the contour tree, a minimum size seed set can be computed in roughly quadratic time. Since in practice this can be excessive, we develop a simple approximation algorithm giving a seed set of size at most twice the size of the minimum. It requires O(n log n) time and linear storage once the contour tree is known. We also give experimental results, showing the size of the seed sets for several data sets.
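The union-find sweep behind this kind of contour-tree variant can be sketched compactly. Below is a minimal join-tree pass in Python (the names and toy input are ours, and this is only half of the construction: the split tree is the symmetric downward sweep, and the paper's seed-set selection is not shown):

```python
def join_tree_joins(values, neighbors):
    """Sweep vertices in increasing scalar value, tracking connected
    components of the sublevel set with union-find; vertices where two
    or more components merge are join ("saddle") nodes of the join tree."""
    order = sorted(range(len(values)), key=lambda v: values[v])
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    joins = []
    for v in order:
        parent[v] = v
        roots = {find(u) for u in neighbors[v] if u in parent}
        if len(roots) >= 2:
            joins.append(v)                # v merges several components
        for r in roots:
            parent[r] = v                  # union everything into v
    return joins

# Toy 1D "mesh": a path graph whose field has two local minima.
vals = [0.0, 3.0, 1.0, 4.0, 0.5]
nbrs = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(join_tree_joins(vals, nbrs))         # -> [1, 3]
```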

363 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: A conservative culling stage is added to the rendering pipeline, attempting to identify and avoid processing of occluded polygons; the approach is applicable to all polygonal models and can be easily implemented on top of view-frustum culling.
Abstract: Many applications in computer graphics and virtual environments need to render datasets with large numbers of primitives and high depth complexity at interactive rates. However, standard techniques like view-frustum culling and a hardware z-buffer are unable to display datasets composed of hundreds of thousands of polygons at interactive frame rates on current high-end graphics systems. We add a "conservative" visibility culling stage to the rendering pipeline, attempting to identify and avoid processing of occluded polygons. Given a moving viewpoint, the algorithm dynamically chooses a set of occluders. Each occluder is used to compute a shadow frustum, and all primitives contained within this frustum are culled. The algorithm hierarchically traverses the model, culling out parts not visible from the current viewpoint using efficient, robust, and in some cases specialized interference detection algorithms. The algorithm's performance varies with the location of the viewpoint and the depth complexity of the model. In the worst case it is linear in the input size with a small constant. In this paper, we demonstrate its performance on a city model composed of 500,000 polygons and possessing varying depth complexity. We are able to cull an average of 55% of the polygons that would not be culled by view-frustum culling and obtain a commensurate improvement in frame rate. The overall approach is effective and scalable, is applicable to all polygonal models, and can be easily implemented on top of view-frustum culling.
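As a hedged illustration of the shadow-frustum test, here is a 2D stand-in for the paper's 3D frusta (the occluder-selection and hierarchy-traversal machinery is omitted, and all names are ours):

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_shadow(view, a, b, p):
    """True if p is occluded by segment ab as seen from view: p lies in
    the wedge spanned by rays view->a and view->b, on the far side of
    the line through a and b."""
    if cross(view, a, p) * cross(view, b, p) > 0:
        return False                       # outside the wedge
    return cross(a, b, p) * cross(a, b, view) < 0   # beyond the occluder

def box_culled(view, a, b, box):
    """Conservatively cull an axis-aligned box: the shadow region is
    convex, so it suffices that all four corners are shadowed."""
    (x0, y0), (x1, y1) = box
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
    return all(in_shadow(view, a, b, c) for c in corners)

# Viewer at the origin, occluder in front, box fully behind the occluder.
print(box_culled((0, 0), (2, -1), (2, 1), ((4, -0.5), (5, 0.5))))  # True
```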

170 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: The problem of reconstructing a surface from a set of scattered data points is addressed; a precise formulation of the reconstruction problem is proposed as a particular mesh of the surface called the normalized mesh, which has the property of being included in the Delaunay graph.
Abstract: In this paper, the problem of reconstructing a surface, given a set of scattered data points, is addressed. First, a precise formulation of the reconstruction problem is proposed. The solution is mathematically defined as a particular mesh of the surface called the normalized mesh. This solution has the property of being included in the Delaunay graph. A criterion to detect faces of the normalized mesh inside the Delaunay graph is proposed. This criterion is proved to provide the exact solution in 2D for points sampling an r-regular shape with a sampling path ε < sin(π/8) r. In 3D, this result cannot be extended and the criterion cannot retrieve every face. A heuristic is proposed in order to complete the surface.

157 citations


Proceedings ArticleDOI
L. Paul Chew
01 Aug 1997
TL;DR: This is the first Delaunay-based method that is mathematically guaranteed to avoid slivers in mesh generation; it makes use of the Empty Circle Property for the DT of a set of point sites: the circumcircle of each triangle is empty of all other sites.
Abstract: The main contribution of this paper is a new mesh generation technique for producing 3D tetrahedral meshes. Like many existing techniques, this one is based on the Delaunay triangulation (DT). Unlike existing techniques, this is the first Delaunay-based method that is mathematically guaranteed to avoid slivers. A sliver is a tetrahedral mesh-element that is almost completely flat. For example, imagine the tetrahedron created as the (3D) convex hull of the four corners of a square; this tetrahedron has nicely shaped faces (all faces are 45-degree right triangles), but the tetrahedron has zero volume. Slivers in the mesh generally lead to poor numerical accuracy in a finite element analysis. The Delaunay triangulation (DT) has been widely used for mesh generation. In 2D, the DT maximizes the minimum angle for a given point set; thus, small angles are avoided. There is no analogous property involving angles in 3D. We make use of the Empty Circle Property for the DT of a set of point sites: the circumcircle of each triangle is empty of all other sites. In 3D, the analogous property holds: the circumsphere of each tetrahedron is empty of all other sites. The Empty Circle Property can be used as the definition of the DT. There is a vast literature on mesh generation, with most of the material emanating from the various applications communities. We refer the reader to the excellent survey by Bern and Eppstein [BE92]. We consider here only work related to the topic of mesh generation with mathematical quality guarantees. Chew [Che89] showed how to use the DT to triangulate any 2D region with smooth boundaries and no sharp corners to attain a mesh of uniform density in which all angles are greater than 30 degrees. An optimality theorem for meshes of nonuniform density was developed by Bern, Eppstein and Gilbert [BEG94] using a quadtree-based approach. Ruppert [Ru93] later showed that a modification of Chew's algorithm could also attain the same strong results.
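Slivers can be detected numerically with a scale-invariant quality measure. The sketch below uses volume over circumradius cubed (our choice of measure, not the paper's algorithm); it vanishes in the limit for the flat square-corner example from the abstract:

```python
import numpy as np

def sliver_measure(p0, p1, p2, p3):
    """Scale-invariant quality: volume / circumradius^3.  It tends to
    zero for slivers (near-zero volume but moderate circumradius)."""
    p0, p1, p2, p3 = (np.asarray(p, float) for p in (p0, p1, p2, p3))
    E = np.stack([p1 - p0, p2 - p0, p3 - p0])
    vol = abs(np.linalg.det(E)) / 6.0
    # circumcenter c solves 2(p_i - p0) . c = |p_i|^2 - |p0|^2
    b = np.array([p.dot(p) - p0.dot(p0) for p in (p1, p2, p3)])
    c = np.linalg.solve(2.0 * E, b)
    return vol / np.linalg.norm(c - p0) ** 3

# Nearly the abstract's example: square corners, one lifted slightly so
# the circumcenter stays well defined.
print(sliver_measure((0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 1e-6)))     # ~5e-7
print(sliver_measure((1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)))  # ~0.51
```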

149 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: The traditional worst-case analysis often fails to predict the actual running time of geometric algorithms in practical situations; models are therefore needed that describe the properties of realistic inputs, so that the analysis can take these properties into account.
Abstract: The traditional worst-case analysis often fails to predict the actual behavior of the running time of geometric algorithms in practical situations. One reason is that worst-case scenarios are often very contrived and do not occur in practice. To avoid this, models are needed that describe the properties that realistic inputs have, so that the analysis can take these properties into account.

147 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: Several algorithms to compute an approximate weighted geodesic shortest path, Π(s, t), between two points s and t on the surface of P are presented and experimentally studied.

124 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: A model of time-series similarity that allows outliers, different scaling functions, and variable sampling rates is analyzed and several deterministic and randomized algorithms for computing this notion of similarity are presented.
Abstract: Given a pair of nonidentical complex objects, defining (and determining) how similar they are to each other is a nontrivial problem. In data mining applications, one frequently needs to determine the similarity between two time series. We analyze a model of time-series similarity that allows outliers, different scaling functions, and variable sampling rates. We present several deterministic and randomized algorithms for computing this notion of similarity. The algorithms are based on nontrivial tools and methods from computational geometry. In particular, we use properties of families of well-separated geometric sets. The randomized algorithm has provably good performance and also works extremely efficiently in practice.
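The authors' geometric algorithms are not reproduced here, but the flavor of outlier-tolerant similarity can be shown with the standard LCSS dynamic program (a common baseline, not the paper's method; parameter names are ours):

```python
def lcss_similarity(a, b, eps, delta):
    """Normalized longest common subsequence: samples match if they are
    within eps in value and delta in index, so outliers are skipped."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(i - j) <= delta and abs(a[i - 1] - b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / min(n, m)

a = [1.0, 1.1, 5.0, 1.2, 1.3]                    # 5.0 is an outlier
b = [1.0, 1.1, 1.2, 1.3]
print(lcss_similarity(a, b, eps=0.05, delta=2))  # -> 1.0 (outlier ignored)
```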

123 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: A kinetic data structure for the maintenance of a multidimensional range search tree is introduced and is used as a building block to obtain kinetic data structures for two classical geometric proximity problems in arbitrary dimensions.

Abstract: A kinetic data structure for the maintenance of a multidimensional range search tree is introduced. This structure is used as a building block to obtain kinetic data structures for two classical geometric proximity problems in arbitrary dimensions: the first structure maintains the closest pair of a set of continuously moving points, and is provably efficient. The second structure maintains a spanning tree of the moving points whose cost remains within some prescribed factor of the minimum spanning tree.
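A minimal kinetic data structure in this spirit (our toy, in 1D rather than the paper's multidimensional setting): maintain the sorted order of linearly moving points with one certificate per adjacent pair, repairing only the certificates a failure invalidates:

```python
import heapq

def failure_time(p, q, now):
    """Earliest t > now at which point p = (x0, v) catches point q."""
    (xp, vp), (xq, vq) = p, q
    if vp <= vq:
        return None                       # p stays behind q forever
    t = (xq - xp) / (vp - vq)
    return t if t > now else None

def kinetic_sorted_order(pts, t_end):
    """Process only certificate failures (adjacent swaps) up to t_end."""
    order = sorted(range(len(pts)), key=lambda i: pts[i][0])
    version = [0] * max(len(pts) - 1, 0)  # one certificate per adjacent slot
    heap = []

    def schedule(slot, now):
        version[slot] += 1                # invalidates any queued event
        t = failure_time(pts[order[slot]], pts[order[slot + 1]], now)
        if t is not None and t <= t_end:
            heapq.heappush(heap, (t, slot, version[slot]))

    for s in range(len(version)):
        schedule(s, 0.0)
    swaps = []
    while heap:
        t, s, ver = heapq.heappop(heap)
        if ver != version[s]:
            continue                      # stale certificate
        order[s], order[s + 1] = order[s + 1], order[s]
        swaps.append((t, order[s], order[s + 1]))
        for s2 in (s - 1, s, s + 1):      # only neighboring certificates change
            if 0 <= s2 < len(version):
                schedule(s2, t)
    return order, swaps

pts = [(0.0, 2.0), (1.0, 1.0), (3.0, 0.0)]   # (position, velocity)
print(kinetic_sorted_order(pts, 5.0))        # final order [2, 1, 0]
```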

88 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: A practical new algorithm for the problem of computing low-cost paths in a weighted planar subdivision or on a weighted polyhedral surface that links selected pairs of subdivision vertices with locally optimal paths is presented.
Abstract: We present a practical new algorithm for the problem of computing low-cost paths in a weighted planar subdivision or on a weighted polyhedral surface. The algorithm is based on constructing a relatively sparse graph, a "pathnet", that links selected pairs of subdivision vertices (and "critical points of entry") with locally optimal paths. The pathnet can be searched for paths that are provably close to optimal and approach optimal as one varies the parameter that controls the sparsity of the pathnet. We evaluate our algorithm both analytically and experimentally, and report on the results of a set of experiments comparing the new algorithm with other standard methods.
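Once the pathnet is built, the search step is plain Dijkstra over a sparse weighted graph. A sketch (the pathnet construction, the paper's actual contribution, is assumed already done; the adjacency-dict format is ours):

```python
import heapq

def dijkstra(graph, s, t):
    """graph: {node: [(neighbor, cost), ...]}, costs being the weighted
    lengths of the locally optimal subpaths linking pathnet nodes."""
    dist, prev = {s: 0.0}, {}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            break
        if d > dist[u]:
            continue                      # outdated queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if t not in dist:
        return None
    path = [t]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return dist[t], path[::-1]

net = {"s": [("a", 2.0), ("b", 5.0)], "a": [("b", 1.0), ("t", 6.0)],
       "b": [("t", 2.0)]}
print(dijkstra(net, "s", "t"))            # -> (5.0, ['s', 'a', 'b', 't'])
```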

86 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: It is shown that a snap-rounded approximation to the arrangement defined by S can be built in an output-sensitive fashion, and that this can be done without first determining all the intersecting pairs of segments in S.
Abstract: We study the problem of robustly rounding a set S of n line segments in R2 using the snap rounding paradigm. In this paradigm each pixel containing an endpoint or intersection point is called "hot," and all segments intersecting a hot pixel are re-routed to pass through its center. We show that a snap-rounded approximation to the arrangement defined by S can be built in an output-sensitive fashion, and that this can be done without first determining all the intersecting pairs of segments in S. Specifically, we give a deterministic plane-sweep algorithm running in time O(n log n + Σ_{h∈H} |h| log n), where H is the set of hot pixels and |h| is the number of segments intersecting a hot pixel h ∈ H. We also give a simple randomized incremental construction whose expected running time matches that of our deterministic algorithm. The complexity of these algorithms is optimal up to polylogarithmic factors.
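The paradigm itself is easy to prototype, even though the paper's point is doing it output-sensitively. Below is a quadratic brute-force sketch on the unit grid (all helper names are ours; the O(n^2) all-pairs intersection pass is exactly what the paper's sweep avoids):

```python
import math
from itertools import combinations

def seg_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1p2 and p3p4, or None."""
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (p4[0] - p3[0], p4[1] - p3[1])
    den = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(den) < 1e-12:
        return None                       # parallel (degeneracies ignored)
    s = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / den
    u = ((p3[0] - p1[0]) * d1[1] - (p3[1] - p1[1]) * d1[0]) / den
    if 0 <= s <= 1 and 0 <= u <= 1:
        return (p1[0] + s * d1[0], p1[1] + s * d1[1])
    return None

def seg_hits_box(p, q, lo, hi):
    """Parametric (Liang-Barsky style) clip of segment pq to a box."""
    t0, t1 = 0.0, 1.0
    for k in (0, 1):
        d = q[k] - p[k]
        if abs(d) < 1e-12:
            if not lo[k] <= p[k] <= hi[k]:
                return False
        else:
            a, b = (lo[k] - p[k]) / d, (hi[k] - p[k]) / d
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
    return t0 <= t1

def snap_round(segments):
    """Hot pixels = unit-grid pixels holding an endpoint or intersection;
    each segment is re-routed through the centers of the hot pixels it
    crosses, ordered along the segment."""
    pix = lambda pt: (math.floor(pt[0]), math.floor(pt[1]))
    hot = {pix(p) for seg in segments for p in seg}
    for s1, s2 in combinations(segments, 2):
        x = seg_intersection(*s1, *s2)
        if x:
            hot.add(pix(x))
    rounded = []
    for p, q in segments:
        hits = []
        for (i, j) in hot:
            if seg_hits_box(p, q, (i, j), (i + 1, j + 1)):
                c = (i + 0.5, j + 0.5)
                t = (c[0] - p[0]) * (q[0] - p[0]) + (c[1] - p[1]) * (q[1] - p[1])
                hits.append((t, c))       # sort key: projection along pq
        rounded.append([c for _, c in sorted(hits)])
    return rounded

print(snap_round([((-0.2, 0.5), (2.6, 0.6)), ((1.2, -0.3), (1.3, 1.4))]))
```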

70 citations


Proceedings ArticleDOI
01 Aug 1997
TL;DR: An efficient method that determines the sign of a multivariate polynomial expression with integer coefficients is proposed, which is highly parallelizable and is the fastest of all known multiprecision methods from a complexity point of view.
Abstract: We propose an efficient method that determines the sign of a multivariate polynomial expression with integer coefficients. This is a central operation on which the robustness of many geometric algorithms depends. Our method relies on modular computations, for which comparisons are usually thought to require multiprecision. Our novel technique of recursive relaxation of the moduli enables us to carry out sign determination and comparisons by using only floating point computations in single precision. This leads us to propose a hybrid symbolic-numeric approach to exact arithmetic. The method is highly parallelizable and is the fastest of all known multiprecision methods from a complexity point of view. As an application, we show how to compute a few geometric predicates that reduce to matrix determinants and we discuss implementation efficiency, which can be enhanced by arithmetic filters. We substantiate these claims by experimental results and comparisons to other existing approaches. Our method can be used to generate robust and efficient implementations of geometric algorithms (convex hulls, Delaunay triangulations, arrangements) and numerical computer algebra (algebraic representation of curves and points, symbolic perturbation, Sturm sequences and multivariate resultants).
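For comparison, Python's arbitrary-precision integers make an exact (if slower) baseline easy to write; the fraction-free Bareiss elimination below determines determinant signs exactly, but it is a stand-in, not the paper's single-precision modular technique:

```python
def det_sign(M):
    """Exact sign of the determinant of an integer matrix, via
    fraction-free (Bareiss) elimination; all divisions are exact."""
    A = [list(map(int, row)) for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                  # find a nonzero pivot
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0                  # singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    d = sign * A[n - 1][n - 1]
    return (d > 0) - (d < 0)

print(det_sign([[2, 0, 1], [0, 3, 1], [1, 1, 1]]))   # -> 1
```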

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A novel perturbation scheme to overcome degeneracies and precision problems in computing spherical arrangements while using floating point arithmetic is described.
Abstract: We describe a software package for computing and manipulating the subdivision of a sphere by a collection of (not necessarily great) circles and for computing the boundary surface of the union of spheres. We present problems that arise in the implementation of the software and the solutions that we have found for them. At the core of the paper is a novel perturbation scheme to overcome degeneracies and precision problems in computing spherical arrangements while using floating point arithmetic. The scheme is relatively simple, it balances between the efficiency of computation and the magnitude of the perturbation, and it performs well in practice. In one O(n) time pass through the data, it perturbs the inputs as necessary to ensure that no potential degeneracies remain, and then passes the perturbed inputs on to the geometric algorithm. We report and discuss experimental results. Our package is a major component in a larger package aimed to support geometric queries on molecular models; it is currently employed by chemists working in 'rational drug design.' The spherical subdivisions are used to construct a geometric model of a molecule where each sphere represents an atom. We also give an overview of the molecular modeling package and detail additional features and implementation issues.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: The foveation transform of an image is introduced, together with a new wavelet-based method for foveating images, using the multiresolution framework of Mallat.
Abstract: Motivated by applications of foveated images in visualization, we introduce the foveation transform of an image. We study the basic properties of these transforms using the multiresolution framework of Mallat. We also consider practical methods of realizing such transforms. In particular, we introduce a new method for foveating images based on wavelets. Preliminary experimental results are shown.
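The wavelet construction is not reproduced here; as a crude stand-in for a foveation transform, one can blend progressively blurred copies of an image by eccentricity (this model and all names are our assumption; scipy's Gaussian filter replaces the paper's multiresolution analysis):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(img, fovea, band=40.0):
    """Resolution falls off with distance from the fovea: pick, per
    pixel, among progressively blurred copies of the image."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(yy - fovea[0], xx - fovea[1])
    levels = [gaussian_filter(img.astype(float), s) for s in (0, 2, 4, 8)]
    idx = np.clip((ecc / band).astype(int), 0, len(levels) - 1)
    return np.choose(idx, levels)

img = np.random.default_rng(0).random((128, 128))
out = foveate(img, fovea=(64, 64))        # sharp center, blurred periphery
```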

Proceedings ArticleDOI
01 Aug 1997
TL;DR: New methods to answer approximate nearest neighbor queries on a set of n points in d -dimensional Euclidean space are proposed and applications to various proximity problems are discussed.
Abstract: This paper proposes new methods to answer approximate nearest neighbor queries on a set of n points in d-dimensional Euclidean space. For any fixed constant d, a data structure with O(ε^{(1-d)/2} n log n) preprocessing time and O(ε^{(1-d)/2} log n) query time achieves an approximation factor 1+ε for any given 0 < ε < 1; a variant reduces the ε-dependence by a factor of ε^{-1/2}. For any arbitrary d, a data structure with O(d^2 n log n) preprocessing time and O(d^2 log n) query time achieves an approximation factor O(d^{3/2}). Applications to various proximity problems are discussed.
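The paper's structures are specific, but the (1+ε) query guarantee can be experienced with scipy's k-d tree, whose `eps` argument promises a returned distance within a factor 1+eps of the true nearest-neighbor distance (a practical stand-in, not the data structures above):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
pts = rng.random((10000, 8))              # n points in d = 8 dimensions
tree = cKDTree(pts)

q = rng.random(8)
d_exact, _ = tree.query(q)                # exact nearest neighbor
d_apx, _ = tree.query(q, eps=0.5)         # guaranteed d_apx <= 1.5 * d_exact
print(d_exact, d_apx)
```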

Proceedings ArticleDOI
01 Aug 1997
TL;DR: This work considers the problem of bounding the complexity of the k-th level in an arrangement of n curves or surfaces, a problem dual to, and an extension of, the well-known k-set problem, and proves a new bound, O(nk^{5/3}), on the complexity.

Abstract: We consider the problem of bounding the complexity of the k-th level in an arrangement of n curves or surfaces, a problem dual to, and an extension of, the well-known k-set problem. Among other results, we prove a new bound, O(nk^{5/3}), on the complexity of the k-th level in an arrangement of n planes in R^3, or on the number of k-sets in a set of n points in three dimensions, and we show that the complexity of the k-th level in an arrangement of n line segments in the plane is O(n√k α(n/k)), and that the complexity of the k-th level in an arrangement of n triangles in 3-space is O(n^2 k^{5/6} α(n/k)).

Proceedings ArticleDOI
01 Aug 1997
TL;DR: An algorithm for computing the discrete 2-center of a set P of n points in the plane is presented, computing two congruent disks of smallest possible radius, centered at two points of P, whose union covers P.
Abstract: We present an algorithm for computing the discrete 2-center of a set P of n points in the plane; that is, computing two congruent disks of smallest possible radius, centered at two points of P, whose union covers P. Our algorithm runs in time O(n^{4/3} log^5 n).
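A brute-force reference implementation is straightforward and useful for checking faster algorithms (this O(n^3) sketch is ours, versus the paper's O(n^{4/3} log^5 n) algorithm):

```python
import math
from itertools import combinations

def discrete_two_center(pts):
    """Try every pair of input points as disk centers; each point goes
    to its nearer center, and we minimize the larger covering radius."""
    best = None
    for c1, c2 in combinations(pts, 2):
        r = max(min(math.dist(p, c1), math.dist(p, c2)) for p in pts)
        if best is None or r < best[0]:
            best = (r, c1, c2)
    return best

pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(discrete_two_center(pts))           # radius 1.0 covers both clusters
```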

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A randomized approach for finding invariants in a set of flexible ligands (drug molecules) underlies an integrated software system called RAPID, currently under development; the techniques are expected to prove useful in other applications such as molecular database screening and comparative molecular field analysis.
Abstract: This paper describes a randomized approach for finding invariants in a set of flexible ligands (drug molecules) that underlies an integrated software system called RAPID currently under development. An invariant is a collection of features embedded in R^3 which is present in one or more of the possible low-energy conformations of each ligand. Such invariants of chemically distinct molecules are useful for computational chemists since they may represent candidate pharmacophores. A pharmacophore contains the parts of the ligand that are primarily responsible for its interaction and binding with a specific receptor. It is regarded as an inverse image of a receptor and is used as a template for building more effective pharmaceutical drugs. The identification of pharmacophores is crucial in drug design since the structure of the targeted receptor is frequently unknown, but a number of molecules that interact with the receptor have been discovered by experiments. It is expected that our techniques and the results produced by our system will prove useful in other applications such as molecular database screening and comparative molecular field analysis.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: This work defines several functionals on the set of all triangulations of a finite system of sites in R^d that attain their global minimum on the Delaunay triangulation (DT); in particular, a so-called "parabolic" functional is considered and proved to attain its minimum on the DT in all dimensions.

Abstract: Some of the most well-known names in Computational Geometry are those of two prominent Russian mathematicians: Georgy F. Voronoi (1868–1908) and Boris N. Delaunay (1890–1980). Their considerable contributions to Number Theory and Geometry are well known to specialists in these fields. Surprisingly, their names (their works remained unread and were later re-discovered) became most popular not among "pure" mathematicians, but among researchers who use geometric applications. Such terms as "Voronoi diagram" and "Delaunay triangulation" are very important not only for Computational Geometry, but also for Geometric Modeling, Image Processing, CAD, GIS, etc. The Delaunay triangulation is used in numerous applications, both in the plane and in 3D. A natural question arises: why is this triangulation better than the others? Usually the advantages of the Delaunay triangulation are rationalized by the max-min angle criterion and other properties [1,2,5,10,11,12]. The max-min angle criterion requires that the diagonal of every convex quadrilateral occurring in the triangulation "should be well chosen" [12], in the sense that replacement of the chosen diagonal by the alternative one must not increase the minimum of the six angles in the two triangles making up the quadrilateral. Thus the Delaunay triangulation of a planar point set maximizes the minimum angle in any triangle. More specifically, the sequence of triangle angles, sorted from sharpest to least sharp, is lexicographically maximized over all such sequences constructed from triangulations of S. We define several functionals on the set of all triangulations of a finite system of sites in R^d attaining their global minimum on the Delaunay triangulation (DT). First we consider a so-called "parabolic" functional and prove that it attains its minimum on the DT in all dimensions; it can be used as an equivalent definition of the DT. Secondly we treat the "mean radius" functional (the mean of the circumradii of triangles) for planar triangulations. Thirdly we treat a so-called "harmonic" functional; for a triangle, this functional equals the ratio of the sum of the squares of the sides to the area. Finally, we consider a discrete analogue of the Dirichlet functional. In all these cases the optimality of the DT in 2D follows directly from the flipping (swapping) algorithm: after each flip the corresponding functional decreases until the Delaunay triangulation is reached. In the 2D case all of these functionals on triangles are lexicographically minimized over all such sequences constructed from triangulations of S, as for the max-min angle criterion. If d > 2 then the Delaunay triangulation is not optimal for the "mean radius", "harmonic" and "Dirichlet" functionals. From this point of view the usage of the DT in dimensions d > 2 may be inappropriate. Thus the problem of finding "good" triangulations for these functionals in higher dimensions remains open, and more detailed consideration is necessary.
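The flip argument is easy to check numerically for the "harmonic" functional: for a convex quadrilateral, the Delaunay diagonal should give the smaller total. A small sketch (the toy quadrilateral is ours):

```python
def harmonic(a, b, c):
    """The abstract's "harmonic" functional of a triangle:
    (sum of squared side lengths) / area."""
    sq = lambda u, v: (v[0] - u[0]) ** 2 + (v[1] - u[1]) ** 2
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (sq(a, b) + sq(b, c) + sq(c, a)) / (abs(cross) / 2.0)

# Convex quadrilateral; the two triangulations use opposite diagonals.
q0, q1, q2, q3 = (0, 0), (2, 0), (2.2, 1), (0, 1)
diag02 = harmonic(q0, q1, q2) + harmonic(q0, q2, q3)
diag13 = harmonic(q0, q1, q3) + harmonic(q1, q2, q3)
print(diag02, diag13)   # diag13 (the Delaunay diagonal here) is smaller
```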

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A new technique is developed for the efficient and robust execution of proximity queries in two and three dimensions, based on an implicit representation of Voronoi diagrams, that is optimal with respect to both cost measures of the paradigm: asymptotic number of operations and arithmetic degree.

Abstract: In the context of methodologies intended to confer robustness to geometric algorithms, we elaborate on the exact-computation paradigm and formalize the notion of the degree of a geometric algorithm as a worst-case quantification of the precision (number of bits) to which arithmetic calculations have to be executed in order to guarantee topological correctness. We also propose a formalism for the expeditious evaluation of algorithmic degree. As an application of this paradigm, and as an illustration of our general approach where algorithm design is driven also by the degree, we consider the important classical problem of proximity queries in two and three dimensions and develop a new technique for the efficient and robust execution of such queries based on an implicit representation of Voronoi diagrams. Our new technique offers both low degree and fast query time, and for 2D queries it is optimal with respect to both cost measures of the paradigm: asymptotic number of operations and arithmetic degree.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A near-linear bound is proved on the combinatorial complexity of the union of n fat convex objects in the plane, each pair of whose boundaries cross at most a constant number of times.

Abstract: We prove a near-linear bound on the combinatorial complexity of the union of n fat convex objects in the plane, each pair of whose boundaries cross at most a constant number of times.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: Some preliminary results on the maintenance of the convex hull are reported; the experimental setup is described, three alternative methods are compared, the value of the measures of quality for KDSs proposed by [BGH97] is discussed, and some important numerical issues are highlighted.

Abstract: In many applications of computational geometry to modeling objects and processes in the physical world, the participating objects are in a state of continuous change. Motion is the most ubiquitous kind of continuous transformation, but others, such as shape deformation, are also possible. In a recent paper, Basch, Guibas, and Hershberger [BGH97] proposed the framework of kinetic data structures (KDSs) as a way to maintain, in a completely on-line fashion, desirable information about the state of a geometric system in continuous motion or change. They gave examples of kinetic data structures for the maximum of a set of (changing) numbers, and for the convex hull and closest pair of a set of (moving) points in the plane. The KDS framework allows each object to change its motion at will according to interactions with other moving objects, the environment, etc. We implemented the KDSs described in [BGH97], as well as some alternative methods serving the same purpose, as a way to validate the kinetic data structures framework in practice. In this note, we report some preliminary results on the maintenance of the convex hull, describe the experimental setup, compare three alternative methods, discuss the value of the measures of quality for KDSs proposed by [BGH97], and highlight some important numerical issues.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: Two different methods to evaluate the sign of a determinant with integer entries are presented and experimentally studied: one based on the Gram-Schmidt orthogonalization process, and an extension to n × n determinants of the ABDPY method, which works only for 2 × 2 and 3 × 3 determinants.

Abstract: This paper presents a theoretical and experimental study of two different methods to evaluate the sign of a determinant with integer entries. The first one is a method based on the Gram-Schmidt orthogonalization process which has been proposed by Clarkson [Cl]. We review his algorithm and propose a variant of his method, for which we give a complete analysis. The second method is an extension to n × n determinants of the ABDPY method [ABD+2], which works only for 2 × 2 and 3 × 3 determinants. Both methods compute the sign of an n × n determinant whose entries are integers on b bits, by using exact arithmetic on only b + O(n) bits. Furthermore, both methods are adaptive, dealing quickly with easy cases and resorting to full-length computation only for null determinants.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: It is shown that one can define a generalized configuration of points and a pseudo-polygon on it so that its vertex-edge pseudo-visibility graph is G; this implies that the decision problem for vertex visibility graphs of pseudo-polygons is in NP (as opposed to the same problem with straight-edge visibility, which is only known to be in PSPACE).

Abstract: We extend the notion of polygon visibility graphs to pseudo-polygons defined on generalized configurations of points. We consider both vertex-to-vertex, as well as vertex-to-edge, visibility in pseudo-polygons. We study the characterization and recognition problems for vertex-edge pseudo-visibility graphs. Given a bipartite graph G satisfying three simple properties, which can all be checked in polynomial time, we show that we can define a generalized configuration of points and a pseudo-polygon on it, so that its vertex-edge pseudo-visibility graph is G. This provides a full characterization of vertex-edge pseudo-visibility graphs and a polynomial-time algorithm for the decision problem. It also implies that the decision problem for vertex visibility graphs of pseudo-polygons is in NP (as opposed to the same problem with straight-edge visibility, which is only known to be in PSPACE).

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A relatively simple algorithm is presented that preprocesses P in O(n) time, such that, given any two points s, t ∈ ∂P and a parameter 0 < ε ≤ 1, it computes a distance Δ_P(s, t) that approximates the length of the shortest path between s and t on ∂P to within a factor of 1+ε.

Abstract: Given a convex polytope P with n edges in R^3, we present a relatively simple algorithm that preprocesses P in O(n) time, such that, given any two points s, t ∈ ∂P and a parameter 0 < ε ≤ 1, it computes, in O((log n)/ε^{1.5} + 1/ε^{3}) time, a distance Δ_P(s, t) such that d_P(s, t) ≤ Δ_P(s, t) ≤ (1+ε) d_P(s, t), where d_P(s, t) is the length of the shortest path between s and t on ∂P. The algorithm also produces a polygonal path with O(1/ε^{1.5}) segments that avoids the interior of P and has length Δ_P(s, t).

Proceedings ArticleDOI
01 Aug 1997
TL;DR: This paper proposes methods for efficiently maintaining the view around a point (i.e., the visibility polygon) in static and dynamic polygonal scenes in the plane, and shows how to maintain the visibility complex for a dynamic scene.

Abstract: This paper proposes methods for efficiently maintaining the view around a point (i.e., the visibility polygon) in static and dynamic polygonal scenes in the plane. The algorithms presented use the visibility complex, which we have adapted to polygonal scenes (composed of disjoint simple polygons) to take advantage of spatial and temporal coherence. The first algorithm shows how to maintain the view around a moving point in O(log² v) time at each visibility change (v: current size of the view), improving the O(log² n) time bound (n: number of polygon vertices) of [GS96], and is well suited for small consecutive moves. The second algorithm maintains the view around a point moving along a known trajectory: after preprocessing in O(v log v) time, the view is updated in O(log v) time at each change of visibility. Finally, we show how to maintain the visibility complex for a dynamic scene (i.e., a scene in which the polygon vertices are moving) and how to maintain the view around a moving point in a dynamic scene.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: In this model, the cost of moving depends not only on the Euclidean distance but also on how much upwards or downwards the movement has to travel, simulating the situation when driving a vehicle on the tilted plane.
Abstract: Most computational geometry research on planar problems assumes that the underlying plane is perfectly 'flat', in the sense that movement between any two points on the plane always takes the same cost as long as the Euclidean distance between the two points is the same. In real environments, distances may depend on the direction one moves along [10], or may even be influenced by local properties of the plane [8]. These situations can sometimes be modeled by considering a piecewise-linear surface as the underlying 'plane' and measuring distances therein; see e.g. [7]. In fact, many distance problems on non-flat planes are hard to deal with from the computational geometry point of view. We study distance problems for the basic case of a 'tilted' plane in three-space. In this model, the cost of moving depends not only on the Euclidean distance but also on how much upwards or downwards the movement has to travel, simulating the situation when driving a vehicle on the tilted plane. We study direction-sensitive distances and, in particular, their induced Voronoi diagrams.
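The cost model can be made concrete; the sketch below is a guessed linear-surcharge model (the constants and the exact form are our assumption, not taken from the paper):

```python
import math

def tilted_cost(p, q, c_up=2.0, c_down=0.5):
    """Direction-sensitive travel cost on a tilted plane: Euclidean
    length plus a surcharge proportional to the climb or descent."""
    dz = q[2] - p[2]
    penalty = c_up * dz if dz > 0 else c_down * (-dz)
    return math.dist(p, q) + penalty

a, b = (0.0, 0.0, 0.0), (3.0, 4.0, 1.0)
print(tilted_cost(a, b), tilted_cost(b, a))   # uphill costs more than downhill
```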

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A complete characterization of the centers of annuli which are locally minimal in arbitrary dimension is given, and it is shown that, for d=2, a locally minimal annulus has two points on the inner circle and two points on the outer circle that interlace anglewise as seen from the center of the annulus.
Abstract: Given a set of points S={p1,. . ., pn} in Euclidean d -dimensional space, we address the problem of computing the d -dimensional annulus of smallest width containing the set. We give a complete characterization of the centers of annuli which are locally minimal in arbitrary dimension and we show that, for d=2 , a locally minimal annulus has two points on the inner circle and two points on the outer circle that interlace anglewise as seen from the center of the annulus. Using this characterization, we show that, given a circular order of the points, there is at most one locally minimal annulus consistent with that order and it can be computed in time O(n log n) using a simple algorithm. Furthermore, when points are in convex position, the problem can be solved in optimal Θ(n) time.
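For comparison with the exact characterization, a quick numerical baseline is to minimize the width max‖p−c‖ − min‖p−c‖ over the center c (Nelder-Mead on this non-smooth objective is only a heuristic, and all names here are ours):

```python
import numpy as np
from scipy.optimize import minimize

def width(center, pts):
    d = np.linalg.norm(pts - center, axis=1)
    return d.max() - d.min()

def min_width_annulus(pts):
    pts = np.asarray(pts, float)
    res = minimize(width, pts.mean(axis=0), args=(pts,), method="Nelder-Mead")
    return res.x, res.fun                 # center, annulus width

# Noisy sample of a circle of radius 5 centered at (3, -2).
rng = np.random.default_rng(1)
ang = rng.uniform(0, 2 * np.pi, 60)
r = 5 + rng.uniform(-0.1, 0.1, 60)
pts = np.c_[3 + r * np.cos(ang), -2 + r * np.sin(ang)]
c, w = min_width_annulus(pts)
print(c, w)                               # center near (3, -2), width near 0.2
```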

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A family of straight-line graphs derived from the Delaunay triangulation is developed, called the A-shape of a finite point set (A is an arbitrary point set).

Abstract: In this paper, we develop a family of straight-line graphs which we call the A-shape of a finite point set (A is an arbitrary point set). Derived from the Delaunay triangulation, this family captures the notion of the shape hull of a finite point set.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: By using 2SAT one can obtain good approximation algorithms and heuristics for labeling a set of sites (points); if for each site only two labels are allowed, the problem can be naturally formulated as 2SAT, which can be solved in linear time.
Abstract: Given a rectilinear map consisting of n disjoint line segments, the corresponding map labeling problem is to place a maximum width rectangle at each segment using one of the three natural ways. In a recent paper, it is shown that if all segments are horizontal then the problem can be solved in optimal Θ(n log n) time. For the general problem a factor-2 approximate solution and a Polynomial Time Approximation Scheme are also proposed. In this paper, we show that the general problem is polynomially solvable with a nontrivial use of 2SAT and the solution can be even generalized to the case of allowing k natural placements for each segment, where k is any fixed constant. We believe this technique can be also used to solve other geometric packing problems.
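The reduction target is classic: 2SAT is solvable in linear time via strongly connected components of the implication graph. A standard solver sketch follows (the clause encoding of label placements is ours and purely illustrative):

```python
from collections import defaultdict

def solve_2sat(n, clauses):
    """Linear-time 2SAT via the implication graph and SCCs (Kosaraju).
    Literals are +i / -i for variable i in 1..n; returns a satisfying
    assignment or None.  In a labeling reduction, variable i would
    encode the choice between two candidate label positions."""
    def node(lit):                        # map literal to 0..2n-1
        return 2 * (abs(lit) - 1) + (lit < 0)
    g, gr = defaultdict(list), defaultdict(list)
    for a, b in clauses:                  # (a or b): ~a -> b, ~b -> a
        g[node(-a)].append(node(b)); gr[node(b)].append(node(-a))
        g[node(-b)].append(node(a)); gr[node(a)].append(node(-b))
    N, seen, order = 2 * n, [False] * (2 * n), []
    def dfs1(u):
        seen[u] = True
        for v in g[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs1(u)
    comp = [-1] * N
    def dfs2(u, c):
        comp[u] = c
        for v in gr[u]:
            if comp[v] < 0:
                dfs2(v, c)
    c = 0
    for u in reversed(order):             # components come out in topological order
        if comp[u] < 0:
            dfs2(u, c); c += 1
    assign = []
    for i in range(n):
        if comp[2 * i] == comp[2 * i + 1]:
            return None                   # x and ~x in one SCC: unsatisfiable
        assign.append(comp[2 * i] > comp[2 * i + 1])
    return assign

# Toy instance: (x1 or x2) and (~x1 or x2) and (~x2 or x3).
print(solve_2sat(3, [(1, 2), (-1, 2), (-2, 3)]))  # x2 and x3 forced True
```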

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A combinatorial-geometric quantity λ(P, Q), called the inner product of the distance-multiplicity vectors of P and Q, is introduced; its relevance to the complexity of various algorithms for LCP is shown, and upper bounds on the quantity are given.
Abstract: This paper considers the following problem, which we call the largest common point set problem (LCP): given two point sets P and Q in the Euclidean plane, find a subset of P with the maximum cardinality that is congruent to some subset of Q . We introduce a combinatorial-geometric quantity λ(P, Q) , which we call the inner product of the distance-multiplicity vectors of P and Q , show its relevance to the complexity of various algorithms for LCP, and give a nontrivial upper bound on λ(P, Q) . We generalize this notion to higher dimensions, give some upper bounds on the quantity, and apply them to algorithms for LCP in higher dimensions. Along the way, we prove a new upper bound on the number of congruent triangles in a point set in four-dimensional space.
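The quantity λ(P, Q) is directly computable from the definition in the abstract; a small sketch (rounding distances is our implementation convenience for comparing floats, not part of the paper):

```python
from collections import Counter
from math import dist

def lam(P, Q, ndigits=9):
    """lambda(P, Q): for every pairwise distance, multiply its
    multiplicity in P by its multiplicity in Q, and sum."""
    dp = Counter(round(dist(p, q), ndigits)
                 for i, p in enumerate(P) for q in P[i + 1:])
    dq = Counter(round(dist(p, q), ndigits)
                 for i, p in enumerate(Q) for q in Q[i + 1:])
    return sum(m * dq[d] for d, m in dp.items())

P = [(0, 0), (1, 0), (0, 1)]
Q = [(5, 5), (6, 5), (5, 6), (9, 9)]
print(lam(P, Q))   # -> 5: distances 1 (2x2) and sqrt(2) (1x1) are shared
```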