
Showing papers on "Delaunay triangulation published in 2009"


Journal ArticleDOI
TL;DR: In this article, a new Digital Elevation Model (DEM) of the natural landforms of Italy is presented and a methodology is discussed to build a DEM over wide areas where elevation data from nonhomogeneous (in density and accuracy) input sources are available.
Abstract: A new Digital Elevation Model (DEM) of the natural landforms of Italy is presented. A methodology is discussed to build a DEM over wide areas where elevation data from non-homogeneous (in density and accuracy) input sources are available. The input elevation data include contour lines and spot heights derived from the Italian Regional topographic maps, satellite-based global positioning system points, and ground-based and radar altimetry data. Owing to the great heterogeneity of the input data density, the DEM format that best preserves the original accuracy is a Triangular Irregular Network (TIN). A Delaunay-based TIN structure is improved by using the DEST algorithm, which enhances the input data by evaluating inferred break-lines. With this approach, biased distributions in slopes and elevations are absent. To prevent discontinuities at the boundary between regions characterized by data with different resolutions, a cubic Hermite S-shaped blending weight function is adopted. The TIN of Italy consists of 1.39×10⁹ triangles. The average triangle area ranges from 12 to about 13,000 m², according to the different morphologies and data sources. About 50% of the model has a local average triangle area …

223 citations
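
The abstract names a cubic Hermite S-shaped blending weight but does not spell out its formula. Below is a minimal sketch of the standard smoothstep weight S(t) = 3t² − 2t³, which matches that description; the inputs (a coarse and a fine elevation estimate, a distance past the region boundary, a transition width) are hypothetical illustrations, not the paper's data model.

```python
import numpy as np

def smoothstep(t):
    """Cubic Hermite S-shaped weight: S(0)=0, S(1)=1, S'(0)=S'(1)=0."""
    t = np.clip(t, 0.0, 1.0)
    return 3.0 * t**2 - 2.0 * t**3

def blend_elevations(z_coarse, z_fine, dist_into_fine, width):
    """Blend two elevation estimates across a transition zone of the given
    width; dist_into_fine is the distance past the region boundary."""
    w = smoothstep(dist_into_fine / width)  # 0 on the coarse side, 1 well inside
    return (1.0 - w) * z_coarse + w * z_fine

# Toy example: a 500 m wide transition between a 100 m and a 120 m estimate.
d = np.array([0.0, 125.0, 250.0, 375.0, 500.0])
print(blend_elevations(100.0, 120.0, d, 500.0))  # [100. 103.125 110. 116.875 120.]
```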


Journal ArticleDOI
TL;DR: A robust but simple algorithm to reconstruct a surface from a set of merged range scans, allowing fast computation of a globally optimal tetrahedra labeling, while avoiding the “shrinking bias” that usually plagues graph cuts methods.
Abstract: We describe a robust but simple algorithm to reconstruct a surface from a set of merged range scans. Our key contribution is the formulation of the surface reconstruction problem as an energy minimisation problem that explicitly models the scanning process. The adaptivity of the Delaunay triangulation is exploited by restricting the energy to inside/outside labelings of Delaunay tetrahedra. Our energy measures both the output surface quality and how well the surface agrees with soft visibility constraints. This energy is shown to fit naturally into the minimum s–t cut optimisation framework, allowing fast computation of a globally optimal tetrahedra labeling while avoiding the “shrinking bias” that usually plagues graph cut methods. The behaviour of our method in the presence of noise, undersampling and outliers is evaluated on several data sets and compared with other methods through different experiments: its strong robustness makes the method practical not only for reconstruction from range data but also for typically more difficult dense point clouds, such as those resulting from stereo image matching. Our effective modeling of the surface acquisition inverse problem, along with the unique combination of Delaunay triangulation and minimum s–t cuts, makes the computational requirements of the algorithm scale well with the size of the input point cloud.

160 citations
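
The labeling step described above can be sketched as a minimum s–t cut over the adjacency graph of Delaunay tetrahedra. The toy below uses scipy for the tetrahedralization and networkx for the cut; the unary and pairwise energies are random placeholders, whereas the paper derives them from visibility along the scanner lines of sight and from a surface quality term.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((40, 3))
dt = Delaunay(pts)

# Placeholder energies -- stand-ins for the paper's visibility/quality terms.
def cost_outside(t):
    return float(rng.random())   # paid if tetrahedron t is labeled outside
def cost_inside(t):
    return float(rng.random())   # paid if tetrahedron t is labeled inside
def facet_cost(t1, t2):
    return 0.5                   # paid if the facet between t1, t2 is on the surface

G = nx.DiGraph()
for t in range(len(dt.simplices)):
    G.add_edge('source', t, capacity=cost_outside(t))  # cut if t ends up outside
    G.add_edge(t, 'sink', capacity=cost_inside(t))     # cut if t ends up inside
    for nb in dt.neighbors[t]:                          # adjacent tetrahedra
        if nb != -1:
            G.add_edge(t, int(nb), capacity=facet_cost(t, int(nb)))

cut_value, (inside, outside) = nx.minimum_cut(G, 'source', 'sink')
# The reconstructed surface consists of the facets separating the two sets.
print(f"cut value {cut_value:.3f}; {len(inside) - 1} tetrahedra labeled inside")
```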


Journal ArticleDOI
TL;DR: This work first selects landmarks on network boundaries with sufficient density, then constructs the landmark Voronoi diagram and its dual combinatorial Delaunay complex on these landmarks, leading to a practical and accurate localization algorithm for large networks using only network connectivity.
Abstract: We study the problem of localizing a large sensor network having a complex shape, possibly with holes. A major challenge with respect to such networks is to figure out the correct network layout, that is, avoid global flips where a part of the network folds on top of another. Our algorithm first selects landmarks on network boundaries with sufficient density, then constructs the landmark Voronoi diagram and its dual combinatorial Delaunay complex on these landmarks. The key insight is that the combinatorial Delaunay complex is provably globally rigid and has a unique realization in the plane. Thus an embedding of the landmarks by simply gluing the Delaunay triangles properly recovers the faithful network layout. With the landmarks nicely localized, the rest of the nodes can easily localize themselves by trilateration to nearby landmark nodes. This leads to a practical and accurate localization algorithm for large networks using only network connectivity. Simulations on various network topologies show surprisingly good results. In comparison, previous connectivity-based localization algorithms such as multidimensional scaling and rubberband representation generate globally flipped or distorted localization results.

145 citations
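
The last step of the pipeline, nodes trilaterating to nearby localized landmarks, can be written as a small linear least-squares problem. A sketch follows (not the authors' code; the landmark positions and distance estimates are made up):

```python
import numpy as np

def trilaterate(landmarks, dists):
    """Least-squares position from landmark coordinates and estimated
    distances (>= 3 non-collinear landmarks in the plane)."""
    L, d = np.asarray(landmarks, float), np.asarray(dists, float)
    # Subtract the first equation to linearize ||x - L_i||^2 = d_i^2.
    A = 2.0 * (L[1:] - L[0])
    b = (d[0]**2 - d[1:]**2) + np.sum(L[1:]**2, axis=1) - np.sum(L[0]**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Toy check: recover a point from exact distances to three landmarks.
lm = [(0, 0), (4, 0), (0, 3)]
p_true = np.array([1.0, 1.0])
d = [np.linalg.norm(p_true - np.array(l)) for l in lm]
print(trilaterate(lm, d))   # ~ [1. 1.]
```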


Journal ArticleDOI
TL;DR: In this article, it was shown that both Voronoi diagram and its dual graph Delaunay triangulation are simultaneously constructed in cultures of plasmodium, a vegetative state of Physarum polycephalum.
Abstract: We experimentally demonstrate that both the Voronoi diagram and its dual graph, the Delaunay triangulation, are simultaneously constructed — under specific conditions — in cultures of plasmodium, a vegetative state of Physarum polycephalum. Every point of a given planar data set is represented by a tiny mass of plasmodium. The plasmodia spread from their initial locations but, in certain conditions, stop spreading when they encounter plasmodia that originated from different locations. Thus the space loci not occupied by the plasmodia represent the edges of the Voronoi diagram of the given planar set. At the same time, the plasmodia originating at neighboring locations form merging protoplasmic tubes, where the strongest tubes approximate the Delaunay triangulation of the given planar set. The plasmodium solves these problems only for limited data sets; however, the results presented lay a sound foundation for further investigations.

141 citations


Journal ArticleDOI
TL;DR: In this paper, a survey of acute and nonobtuse simplices and associated spatial partitions is presented, including path-simplices, the generalization of right triangles to higher dimensions.
Abstract: This paper surveys some results on acute and nonobtuse simplices and associated spatial partitions. These partitions are relevant in numerical mathematics, including piecewise polynomial approximation theory and the finite element method. Special attention is paid to a basic type of nonobtuse simplices called path-simplices, the generalization of right triangles to higher dimensions. In addition to applications in numerical mathematics, we give examples of the appearance of acute and nonobtuse simplices in other areas of mathematics.

92 citations
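
In two dimensions the objects surveyed are easy to test for: a triangle is acute, right (the 2D instance of a path-simplex), or obtuse according to the sign of the dot product at each vertex. A small self-contained illustration:

```python
import numpy as np

def classify_triangle(a, b, c, tol=1e-12):
    """Classify a triangle as 'acute', 'right', or 'obtuse' by the sign
    of the dot product of the two edge vectors at each vertex."""
    pts = [np.asarray(p, float) for p in (a, b, c)]
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        d = np.dot(u, v)
        if abs(d) <= tol:
            return 'right'      # the 2D case of a path-simplex
        if d < 0:
            return 'obtuse'     # at most one non-acute angle can occur
    return 'acute'

print(classify_triangle((0, 0), (1, 0), (0, 1)))        # right
print(classify_triangle((0, 0), (2, 0), (1, 2)))        # acute
print(classify_triangle((0, 0), (3, 0), (2.9, 0.1)))    # obtuse
```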


Journal ArticleDOI
TL;DR: This article characterise the smoothness properties of the aggregate expected-value function and proposes a distributed deployment algorithm that enables the network to optimise it.
Abstract: This article considers the deployment of a network of robotic agents with limited-range communication and anisotropic sensing capabilities. We encode the environment coverage provided by the network by means of an expected-value objective function. This function has a gradient which is not amenable to distributed computation. We provide a constant-factor approximation of this measure via an alternative aggregate objective function, whose gradient is spatially distributed over the limited-range Delaunay proximity graph. We characterise the smoothness properties of the aggregate expected-value function and propose a distributed deployment algorithm that enables the network to optimise it. Simulations illustrate the results.

89 citations


Journal ArticleDOI
TL;DR: This work presents an iterative algorithm that seeks to transform a given triangulation in two or three dimensions into a well-centered one by minimizing a cost function and moving the interior vertices while keeping the mesh connectivity and boundary vertices fixed.
Abstract: Meshes composed of well-centered simplices have nice orthogonal dual meshes (the dual Voronoi diagram). This is useful for certain numerical algorithms that prefer such primal-dual mesh pairs. We prove that well-centered meshes also have optimality properties and relationships to Delaunay and minmax angle triangulations. We present an iterative algorithm that seeks to transform a given triangulation in two or three dimensions into a well-centered one by minimizing a cost function and moving the interior vertices while keeping the mesh connectivity and boundary vertices fixed. The cost function is a direct result of a new characterization of well-centeredness in arbitrary dimensions that we present. Ours is the first optimization-based heuristic for well-centeredness and the first one that applies in both two and three dimensions. We show the results of applying our algorithm to small and large two-dimensional meshes, some with a complex boundary, and obtain a well-centered tetrahedralization of the cube. We also show numerical evidence that our algorithm preserves gradation and that it improves the maximum and minimum angles of acute triangulations created by the best known previous method.

77 citations
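
A simplex is well-centered when its circumcenter lies strictly inside it, which can be checked from the circumcenter's barycentric coordinates. A dimension-independent sketch of that basic test follows; note this is the definition being optimized toward, not the paper's new characterization or cost function.

```python
import numpy as np

def circumcenter(simplex):
    """Circumcenter of a d-simplex given as a (d+1, d) vertex array,
    from the linear system 2(v_i - v_0)·c = |v_i|^2 - |v_0|^2."""
    V = np.asarray(simplex, float)
    A = 2.0 * (V[1:] - V[0])
    b = np.sum(V[1:]**2, axis=1) - np.sum(V[0]**2)
    return np.linalg.solve(A, b)

def is_well_centered(simplex, eps=1e-12):
    """True iff the circumcenter lies strictly inside the simplex,
    tested via its barycentric coordinates."""
    V = np.asarray(simplex, float)
    c = circumcenter(V)
    T = (V[1:] - V[0]).T                  # maps barycentric (w_1..w_d) to c - v_0
    w = np.linalg.solve(T, c - V[0])
    bary = np.concatenate([[1.0 - w.sum()], w])
    return bool(np.all(bary > eps))

print(is_well_centered([(0, 0), (1, 0), (0.5, 0.8)]))   # acute triangle -> True
print(is_well_centered([(0, 0), (1, 0), (0.9, 0.1)]))   # obtuse triangle -> False
```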


Proceedings ArticleDOI
08 Jun 2009
TL;DR: A new implementation of the well-known incremental algorithm for constructing Delaunay triangulations in any dimension is described and a modification of the algorithm that uses and stores only theDelaunay graph (the edges of the full triangulation) is proposed.
Abstract: We describe a new implementation of the well-known incremental algorithm for constructing Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is fully robust. Extensive comparisons show that our implementation outperforms the best currently available codes for exact convex hulls and Delaunay triangulations, compares very well to the fast non-exact QHull implementation, and can be used for quite large input sets in spaces of dimension up to 6. To circumvent prohibitive memory usage, we also propose a modification of the algorithm that uses and stores only the Delaunay graph (the edges of the full triangulation). We show that a careful implementation of the modified algorithm performs only 6 to 8 times slower than the original algorithm while drastically reducing memory usage in dimension 4 or above.

75 citations
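
The paper's implementation is exact; for readers who want to experiment with the same kind of workload, scipy exposes a non-exact, Qhull-backed incremental insertion interface. This only illustrates incremental construction in moderate dimension, not the authors' code:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)

# Incremental construction: start with a batch, then insert more points.
# scipy wraps Qhull (floating point), unlike the exact computing paradigm
# followed by the paper -- this only demonstrates the incremental interface.
pts = rng.random((1000, 4))                 # dimension 4
dt = Delaunay(pts, incremental=True)
for _ in range(10):
    dt.add_points(rng.random((100, 4)))     # incremental insertions
dt.close()                                  # finalize; frees Qhull resources

print(dt.points.shape, dt.simplices.shape)  # (2000, 4) and (n_simplices, 5)
```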


Proceedings ArticleDOI
28 Jun 2009
TL;DR: In this paper, the authors proposed a method for decomposing clumps of nuclei using high-level geometric constraints that are derived from low-level features of maximum curvature computed along the contour of each clump.
Abstract: Cell-based fluorescence imaging assays have the potential to generate massive amounts of data, which require detailed quantitative analysis. Often, as a result of fixation, labeled nuclei overlap and create a clump of cells. However, it is important to quantify phenotypic readout on a cell-by-cell basis. In this paper, we propose a novel method for decomposing clumps of nuclei using high-level geometric constraints that are derived from low-level features of maximum curvature computed along the contour of each clump. Points of maximum curvature are used as vertices for Delaunay triangulation (DT), which provides a set of edge hypotheses for decomposing a clump of nuclei. Each hypothesis is subsequently tested against a constraint satisfaction network for a near-optimum decomposition. The proposed method is compared with other traditional techniques such as the watershed method with/without markers. The experimental results show that our approach can overcome the deficiencies of the traditional methods and is very effective in separating severely touching nuclei.

73 citations
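
A rough sketch of the first two stages, picking high-curvature contour points and using the edges of their Delaunay triangulation as split hypotheses, is below. The turning-angle curvature proxy and the peanut-shaped toy contour are our own simplifications; the constraint-satisfaction testing of the hypotheses is omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def turning_angle(contour):
    """Discrete curvature proxy: absolute exterior turning angle at each
    vertex of a closed contour given as an ordered (n, 2) array."""
    p = np.asarray(contour, float)
    u = p - np.roll(p, 1, axis=0)        # incoming edge vectors
    v = np.roll(p, -1, axis=0) - p       # outgoing edge vectors
    a = np.arctan2(v[:, 1], v[:, 0]) - np.arctan2(u[:, 1], u[:, 0])
    return np.abs((a + np.pi) % (2 * np.pi) - np.pi)

def split_edge_hypotheses(contour, k=6):
    """Delaunay edges among the k highest-curvature contour points,
    used as candidate split lines for a clump of nuclei."""
    idx = np.argsort(turning_angle(contour))[-k:]
    pts = np.asarray(contour, float)[idx]
    edges = set()
    for s in Delaunay(pts).simplices:
        for i in range(3):
            edges.add(tuple(sorted((s[i], s[(i + 1) % 3]))))
    return pts, sorted(edges)

# Toy clump: a peanut-shaped contour whose waist has high curvature.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 0.6 + np.abs(np.cos(t))
blob = np.c_[r * np.cos(t), r * np.sin(t)]
pts, edges = split_edge_hypotheses(blob, k=6)
print(len(edges), "candidate split edges among", len(pts), "curvature maxima")
```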


Journal ArticleDOI
TL;DR: This study explores the use of characteristic hull polygons (CHPs) as a new method of home range estimation using simulated animal locational data conforming to five point pattern shapes at three sample sizes to demonstrate the method has potential as a home range estimator.
Abstract: Recent literature has reported inaccuracies associated with some popular home range estimators such as kernel density estimation, especially when applied to point patterns of complex shapes. This study explores the use of characteristic hull polygons (CHPs) as a new method of home range estimation. CHPs are special bounding polygons created in GIS that can have concave edges, be composed of disjoint regions, and contain areas of unoccupied space within their interiors. CHPs are created by constructing the Delaunay triangulation of a set of points and then removing a subset of the resulting triangles. Here, CHPs consisting of 95% of the smallest triangles, measured in terms of perimeter, are applied for home range estimation. First, CHPs are applied to simulated animal locational data conforming to five point pattern shapes at three sample sizes. Then, the method is applied to black-footed albatross (Phoebastria nigripes) locational data for illustration and comparison to other methods. For the simulated data, 95% CHPs produced unbiased home range estimates in terms of size for linear and disjoint point patterns and slight underestimates (8–20%) for perforated, concave, and convex ones. The estimated and known home ranges intersected one another by 72–96%, depending on shape and sample size, suggesting that the method has potential as a home range estimator. Additionally, the CHPs applied to estimate albatross home ranges illustrate how the method produces reasonable estimates for bird species that intensively forage in disjoint habitat patches.

69 citations
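
The CHP construction itself is compact enough to sketch: Delaunay-triangulate the locations, rank triangles by perimeter, keep the smallest 95%, and union them. The sketch below assumes the shapely package is available for the polygon union; the two-cluster "locations" are synthetic.

```python
import numpy as np
from scipy.spatial import Delaunay
from shapely.geometry import Polygon
from shapely.ops import unary_union

def characteristic_hull(points, keep=0.95):
    """Characteristic hull polygon: Delaunay-triangulate the fixes and
    keep the fraction `keep` of triangles with the smallest perimeters."""
    pts = np.asarray(points, float)
    dt = Delaunay(pts)
    tris = pts[dt.simplices]                       # (ntri, 3, 2)
    per = np.linalg.norm(tris - np.roll(tris, 1, axis=1), axis=2).sum(axis=1)
    kept = np.argsort(per)[: int(np.ceil(keep * len(per)))]
    return unary_union([Polygon(tris[i]) for i in kept])

# Toy home range: two disjoint clusters of "locations".
rng = np.random.default_rng(1)
locs = np.vstack([rng.normal((0, 0), 0.5, (60, 2)),
                  rng.normal((5, 5), 0.5, (60, 2))])
hr = characteristic_hull(locs, keep=0.95)
print(f"home-range area: {hr.area:.2f} (geometry type: {hr.geom_type})")
```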


Journal ArticleDOI
TL;DR: In this article, the authors describe the numerical aspects of the developed contact domain method for large deformation frictional contact problems and demonstrate the performance of this method on static and dynamic contact problems in the context of large deformations.

Journal ArticleDOI
TL;DR: A Delaunay-based surface triangulation algorithm generating quality surface meshes for the molecular skin model by expanding the restricted union of balls along the surface and generating an ε-sampling of the skin surface incrementally.
Abstract: Quality surface meshes for molecular models are desirable in the studies of protein shapes and functionalities. However, there is still no robust software capable of generating such meshes with good quality. In this paper, we present a Delaunay-based surface triangulation algorithm generating quality surface meshes for the molecular skin model. We expand the restricted union of balls along the surface and generate an ε-sampling of the skin surface incrementally. At the same time, a quality surface mesh is extracted from the Delaunay triangulation of the sample points. The algorithm supports robust and efficient implementation and guarantees the mesh quality and topology as well. Our results facilitate molecular visualization and contribute towards generating quality volumetric tetrahedral meshes for macromolecules.

Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper develops a new landmark selection algorithm with incremental Delaunay refinement method that substantially improves the robustness and applicability of the original localization algorithm, especially in networks with very low average degree (even non- rigid networks) and complex shapes.
Abstract: We study the anchor-free localization problem for a large-scale sensor network with a complex shape, knowing network connectivity information only. The main idea follows from our previous work in which a subset of the nodes are selected as landmarks and the sensor field is partitioned into Voronoi cells with all the nodes closest to the same landmark grouped into the same cell. We extract the combinatorial Delaunay complex as the dual complex of the landmark Voronoi diagram and embed the combinatorial Delaunay complex as a structural skeleton. In this paper we develop a new landmark selection algorithm with an incremental Delaunay refinement method. This algorithm does not assume any knowledge of the network boundary and runs in a distributed manner to select landmarks incrementally until both the global rigidity property (the Delaunay complex is globally rigid and thus can be embedded uniquely) and the coverage property (every node is not far from the embedded Delaunay complex) are met. The new algorithm substantially improves the robustness and applicability of the original localization algorithm, especially in networks with very low average degree (even non-rigid networks) and complex shapes.

Journal ArticleDOI
TL;DR: A full subtraction approach where the total potential is divided into a singularity and a correction potential, which allows the construction of transfer matrices for fast computation of the inverse problem for anisotropic volume conductors in a constrained Delaunay tetrahedralisation approach.

Journal ArticleDOI
TL;DR: A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented and the accuracy and cost effectiveness of four quasi-uniform meshes of the spheres are compared.
Abstract: Alternative meshes of the sphere and adaptive mesh refinement could be immensely beneficial for weather and climate forecasts, but it is not clear how mesh refinement should be achieved. A finite-volume model that solves the shallow-water equations on any mesh of the surface of the sphere is presented. The accuracy and cost effectiveness of four quasi-uniform meshes of the sphere are compared: a cubed sphere, reduced latitude–longitude, hexagonal–icosahedral, and triangular–icosahedral. On some standard shallow-water tests, the hexagonal–icosahedral mesh performs best and the reduced latitude–longitude mesh performs well only when the flow is aligned with the mesh. The inclusion of a refined mesh over a disc-shaped region is achieved using either gradual Delaunay, gradual Voronoi, or abrupt 2:1 block-structured refinement. These refined regions can actually degrade global accuracy, presumably because of changes in wave dispersion where the mesh is highly nonuniform. However, using gradual refinem...

Book ChapterDOI
01 Oct 2009
TL;DR: The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches to generate high-quality tetrahedral meshes from segmented images.
Abstract: The problem of generating realistic computer models of objects represented by 3D segmented images is important in many biomedical applications. Labelled 3D images impose particular challenges for meshing algorithms because multi-material junctions form features such as surface patches, edges and corners which need to be preserved in the output mesh. In this paper, we propose a feature-preserving Delaunay refinement algorithm which can be used to generate high-quality tetrahedral meshes from segmented images. The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches. Our experimental results on segmented medical images have shown that, within a few seconds, the algorithm outputs a tetrahedral mesh in which each material is represented as a consistent submesh without gaps and overlaps. The optimization property of the Delaunay triangulation makes these meshes suitable for the purpose of realistic visualization or finite element simulations.

Journal ArticleDOI
TL;DR: A close connection between coarse-graining procedures from microscopic dynamics and discretization schemes for partial differential equations is pointed toward.
Abstract: By using the standard theory of coarse graining based on Zwanzig’s projection operator, we derive the dynamic equations for discrete hydrodynamic variables. These hydrodynamic variables are defined in terms of the Delaunay triangulation. The resulting microscopically derived equations can be understood, a posteriori, as a discretization on an arbitrary irregular grid of the Navier–Stokes equations. The microscopic derivation provides a set of discrete equations that exactly conserves mass, momentum, and energy and the dissipative part of the dynamics produces strict entropy increase. In addition, the microscopic derivation provides a practical implementation of thermal fluctuations in a way that the fluctuation-dissipation theorem is satisfied exactly. This paper points toward a close connection between coarse-graining procedures from microscopic dynamics and discretization schemes for partial differential equations.

Proceedings ArticleDOI
02 Feb 2009
TL;DR: In this paper, the authors extend the COCONE algorithm to handle supersize data, yielding the first reported Delaunay-based surface reconstruction algorithm that can handle data containing more than a million sample points on a modest machine.
Abstract: Surface reconstruction provides a powerful paradigm for modeling shapes from samples. For point cloud data with only geometric coordinates as input, Delaunay-based surface reconstruction algorithms have been shown to be quite effective both in theory and practice. However, a major complaint against Delaunay-based methods is that they are slow and cannot handle large data. We extend the COCONE algorithm to handle supersize data. This is the first reported Delaunay-based surface reconstruction algorithm that can handle data containing more than a million sample points on a modest machine.

Journal ArticleDOI
TL;DR: The results rule out the conjecture by Hsu and Huang that the bond thresholds are 2/3 and 1/3, respectively, but support the conjecture of Wierman that, for fully triangulated lattices other than the regular triangular lattice, the bond threshold is less than 2 sin(π/18) ≈ 0.3473.
Abstract: The site percolation threshold for the random Voronoi network is determined numerically, with the result pc = 0.71410 ± 0.00002, using Monte Carlo simulation on periodic systems of up to 40,000 sites. The result is very close to the recent theoretical estimate pc ≈ 0.7151 of Neher. For the bond threshold on the Voronoi network, we find pc = 0.666931 ± 0.000005, implying that, for its dual, the Delaunay triangulation, pc = 0.333069 ± 0.000005. These results rule out the conjecture by Hsu and Huang that the bond thresholds are 2/3 and 1/3, respectively, but support the conjecture of Wierman that, for fully triangulated lattices other than the regular triangular lattice, the bond threshold is less than 2 sin(π/18) ≈ 0.3473.
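
The Monte Carlo methodology can be illustrated at toy scale: build the Delaunay graph of random points, open each bond with probability p, and test left-right spanning with union-find. This crude open-boundary sketch is far from the paper's periodic, high-precision setup, but the spanning probability can be seen to rise near the reported Delaunay bond threshold of about 0.333.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edges(pts):
    e = set()
    for s in Delaunay(pts).simplices:
        for i in range(3):
            e.add(tuple(sorted((s[i], s[(i + 1) % 3]))))
    return np.array(sorted(e))

def percolates(n, edges, open_mask, left, right):
    """Union-find test: do open bonds join a left-side site to a right-side one?"""
    parent = list(range(n + 2))               # two virtual roots: n=left, n+1=right
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for i in np.where(left)[0]:
        union(int(i), n)
    for i in np.where(right)[0]:
        union(int(i), n + 1)
    for a, b in edges[open_mask]:
        union(int(a), int(b))
    return find(n) == find(n + 1)

rng = np.random.default_rng(3)
n = 2000
pts = rng.random((n, 2))
edges = delaunay_edges(pts)
left, right = pts[:, 0] < 0.02, pts[:, 0] > 0.98
for p in (0.25, 0.30, 0.33, 0.36, 0.40):
    hits = sum(percolates(n, edges, rng.random(len(edges)) < p, left, right)
               for _ in range(20))
    print(f"p={p:.2f}  spanning probability ~ {hits / 20:.2f}")
```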

Journal ArticleDOI
TL;DR: The first idea is an effective use of the Voronoi diagram and unifies previously suggested Steiner point insertion schemes (circumcenter, sink, off-center) together with a new strategy and leads to two new versions of Delaunay refinement.
Abstract: We propose two novel ideas to improve the performance of Delaunay refinement algorithms which are used for computing quality triangulations. The first idea is an effective use of the Voronoi diagram and unifies previously suggested Steiner point insertion schemes (circumcenter, sink, off-center) together with a new strategy. The second idea is the integration of a new local smoothing strategy into the refinement process. These lead to two new versions of Delaunay refinement, where the second is simply an extension of the first. For a given input domain and a constraint angle $\alpha$, Delaunay refinement algorithms aim to compute triangulations that have all angles at least $\alpha$. The original Delaunay refinement algorithm of Ruppert is proven to terminate with size-optimal quality triangulations for $\alpha\le20.7^\circ$. In practice, the original and the subsequent Delaunay refinement algorithms generally work for $\alpha\le34^\circ$ and fail to terminate for larger constraint angles. Our algorithms provide the same theoretical guarantees as the previous Delaunay refinement algorithms. The second of the proposed algorithms generally terminates for constraint angles up to $42^\circ$. Experiments also indicate that our algorithm computes significantly smaller (by about a factor of two) triangulations than the output of the previous Delaunay refinement algorithms. Moreover, the new algorithms are experimentally shown to outperform the previous algorithms even in the presence of additional constraints, such as the maximum area triangle constraint which is commonly used for computing uniform triangulations.
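
For context, the baseline that such refinement algorithms improve on is classical circumcenter insertion: repeatedly insert the circumcenter of the worst skinny triangle. The sketch below is not the paper's algorithm, only the starting point it generalizes; boundary/encroachment handling is omitted, so circumcenters outside the hull are skipped and hull-adjacent triangles may stay skinny.

```python
import numpy as np
from scipy.spatial import Delaunay

def min_angle_deg(tri):
    """Smallest angle (degrees) of a triangle given as a (3, 2) array."""
    angs = []
    for i in range(3):
        u, v = tri[(i + 1) % 3] - tri[i], tri[(i + 2) % 3] - tri[i]
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angs.append(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
    return min(angs)

def circumcenter(tri):
    A = 2.0 * (tri[1:] - tri[0])
    b = np.sum(tri[1:]**2, axis=1) - np.sum(tri[0]**2)
    return np.linalg.solve(A, b)

def refine(points, alpha=25.0, max_steiner=500):
    """Classical circumcenter insertion: while some triangle has an angle
    below alpha, insert its circumcenter (skipping centers outside the hull)."""
    dt = Delaunay(np.asarray(points, float), incremental=True)
    for _ in range(max_steiner):
        tris = dt.points[dt.simplices]
        angles = np.array([min_angle_deg(t) for t in tris])
        inserted = False
        for i in np.argsort(angles):
            if angles[i] >= alpha:
                break                                 # every remaining triangle is fine
            c = circumcenter(tris[i])
            if dt.find_simplex(c[None, :])[0] != -1:  # keep Steiner points inside the hull
                dt.add_points(c[None, :])
                inserted = True
                break
        if not inserted:
            break
    dt.close()
    return dt

rng = np.random.default_rng(7)
dt = refine(rng.random((30, 2)), alpha=25.0)
worst = min(min_angle_deg(t) for t in dt.points[dt.simplices])
print(f"{len(dt.points)} points, worst remaining angle {worst:.1f} deg")
```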

Journal ArticleDOI
TL;DR: It is demonstrated how a subtetrahedralization of an intersected element can be obtained which preserves the possibly curved interface and therefore allows exact numerical integration.
Abstract: Three-dimensional higher-order eXtended finite element method (XFEM) computations still pose challenging computational geometry problems, especially for moving interfaces. This paper provides a method for the localization of a higher-order interface finite element (FE) mesh in an underlying three-dimensional higher-order FE mesh. Additionally, it demonstrates how a subtetrahedralization of an intersected element can be obtained which preserves the possibly curved interface and therefore allows exact numerical integration. The proposed interface algorithm initially collects a set of possibly intersecting elements by comparing their 'eXtended axis-aligned bounding boxes'. The intersection method is applied to a highly reduced number of intersection candidates. The resulting linearized interface is used as input for an elementwise constrained Delaunay tetrahedralization, which computes an appropriate subdivision for each intersected element. The curved interface is recovered from the linearized interface in the last step. The output comprises triangular integration cells representing the interface and tetrahedral integration cells for each intersected element. Application of the interface algorithm currently concentrates on fluid-structure interaction problems on low-order and higher-order FE meshes, which may be composed of any arbitrary element types such as hexahedra, tetrahedra, wedges, etc. Nevertheless, other XFEM problems with explicitly given interfaces or discontinuities may be tackled in addition. Multiple structures and interfaces per intersected element can be handled without any additional difficulties. Several parallelization strategies exist depending on the desired domain decomposition approach. Numerical test cases including various geometrical exceptions demonstrate the accuracy, robustness and efficiency of the interface handling.
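
The broad-phase candidate collection via extended axis-aligned bounding boxes is simple to illustrate. In the sketch below the margin parameter stands in for the 'eXtension' that keeps the test conservative for curved geometry; the element data is made up.

```python
import numpy as np

def aabb(verts, margin=0.0):
    """Axis-aligned bounding box of an element, optionally eXtended by a
    margin to stay conservative for curved/higher-order geometry."""
    v = np.asarray(verts, float)
    return v.min(axis=0) - margin, v.max(axis=0) + margin

def boxes_overlap(b1, b2):
    (lo1, hi1), (lo2, hi2) = b1, b2
    return bool(np.all(lo1 <= hi2) and np.all(lo2 <= hi1))

def intersection_candidates(volume_elems, interface_elems, margin=0.05):
    """Cheap broad phase: only pairs whose extended boxes overlap are
    handed to the (expensive) exact intersection test."""
    vb = [aabb(e, margin) for e in volume_elems]
    ib = [aabb(e, margin) for e in interface_elems]
    return [(i, j) for i, b1 in enumerate(vb)
                   for j, b2 in enumerate(ib) if boxes_overlap(b1, b2)]

# Toy: two hexahedra (as 8 corner points each) against one interface triangle.
hex0 = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
hex1 = [(x + 2, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
tri = [(0.5, 0.5, 0.5), (0.8, 0.2, 0.4), (0.3, 0.9, 0.6)]
print(intersection_candidates([hex0, hex1], [tri]))   # [(0, 0)]
```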

Journal ArticleDOI
TL;DR: An algorithm for surface reconstruction from unorganized points based on a view of the sampling process as a deformation from the original surface is proposed, comparable in speed and complexity to current popular Voronoi/Delaunay-based algorithms, and is applicable to very large datasets.

Journal ArticleDOI
TL;DR: In this paper, a new type of Steiner points, called off-centers, is introduced to improve the quality of Delaunay triangulations in two dimensions, and a new algorithm based on iterative insertion of offcenters is proposed.
Abstract: We introduce a new type of Steiner points, called off-centers, as an alternative to circumcenters, to improve the quality of Delaunay triangulations in two dimensions. We propose a new Delaunay refinement algorithm based on iterative insertion of off-centers. We show that this new algorithm has the same quality and size optimality guarantees as the best known refinement algorithms. In practice, however, the new algorithm inserts fewer Steiner points, runs faster, and generates smaller triangulations than the best previous algorithms. Performance improvements are significant especially when the user-specified minimum angle is large; e.g., when the smallest angle in the output triangulation is 30°, the number of Steiner points is reduced by about 40%, while the mesh size is down by about 30%. As a result of its demonstrated benefits, the algorithm described here has already replaced the well-known circumcenter insertion algorithm of Ruppert and has become the default quality triangulation method in the popular meshing software Triangle.
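
Based on the construction as we read it, an off-center can be computed like this: move along the perpendicular bisector of the triangle's shortest edge; if the circumcenter is close enough that it already forms a triangle with the shortest edge whose radius-edge ratio is at most β, use the circumcenter, otherwise stop at the point where that ratio is exactly β. This is a sketch of our reading, not the authors' code; β = √2 corresponds to the 20.7° angle bound.

```python
import numpy as np

def off_center(tri, beta=np.sqrt(2)):
    """Off-center Steiner point for a skinny triangle (sketch)."""
    V = np.asarray(tri, float)
    # circumcenter from 2(v_i - v_0)·c = |v_i|^2 - |v_0|^2
    A = 2.0 * (V[1:] - V[0])
    b = np.sum(V[1:]**2, axis=1) - np.sum(V[0]**2)
    cc = np.linalg.solve(A, b)
    # shortest edge and its midpoint
    i, j = min([(i, (i + 1) % 3) for i in range(3)],
               key=lambda e: np.linalg.norm(V[e[0]] - V[e[1]]))
    L = np.linalg.norm(V[i] - V[j])
    mid = 0.5 * (V[i] + V[j])
    # distance along the bisector where the new triangle formed with the
    # shortest edge has radius-edge ratio exactly beta (larger root of
    # (d^2 + (L/2)^2) / (2d) = beta * L)
    d_star = beta * L + np.sqrt((beta * L) ** 2 - (L / 2) ** 2)
    to_cc = cc - mid
    if np.linalg.norm(to_cc) <= d_star:
        return cc                        # circumcenter is already good enough
    return mid + d_star * to_cc / np.linalg.norm(to_cc)

# A very skinny triangle: the off-center lands much nearer than the
# circumcenter (which is at roughly (0.5, 3.98)).
print(off_center([(0.0, 0.0), (1.0, 0.0), (0.5, 8.0)]))   # ~ [0.5  2.74]
```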

Book ChapterDOI
25 Oct 2009
TL;DR: This paper presents a perturbation algorithm which favors deterministic over random perturbations and applies the proposed algorithm to meshes obtained by Delaunay refinement as well as to carefully optimized meshes.
Abstract: Isotropic tetrahedron meshes generated by Delaunay refinement algorithms are known to contain a majority of well-shaped tetrahedra, as well as spurious sliver tetrahedra. As slivers hamper the stability of numerical simulations, we aim to remove them while keeping the triangulation Delaunay for simplicity. The solution that explicitly perturbs the slivers through random vertex relocation and Delaunay connectivity updates is very effective but slow. In this paper we present a perturbation algorithm which favors deterministic over random perturbation. The added value is improved efficiency and effectiveness. Our experimental study applies the proposed algorithm to meshes obtained by Delaunay refinement as well as to carefully optimized meshes.

Journal ArticleDOI
TL;DR: This paper develops a refinement strategy that eliminates domain dependent numerical predicates and obtains a meshing algorithm that is practical and implementation-friendly.
Abstract: Recently a Delaunay refinement algorithm has been proposed that can mesh piecewise smooth complexes which include polyhedra, smooth and piecewise smooth surfaces, and non-manifolds. However, this algorithm employs domain dependent numerical predicates, some of which could be computationally expensive and hard to implement. In this paper we develop a refinement strategy that eliminates these complicated domain dependent predicates. As a result we obtain a meshing algorithm that is practical and implementation-friendly.

Book ChapterDOI
07 Sep 2009
TL;DR: It is shown that in the worst case the algorithm needs quadratic time, but that this can only happen in degenerate cases, and that the algorithm runs in O(n log n) time under realistic assumptions.
Abstract: Incremental construction con BRIO using a space-filling curve order for insertion is a popular algorithm for constructing Delaunay triangulations. So far, it has only been analyzed for the case that a worst-case optimal point location data structure is used, which is often avoided in implementations. In this paper, we analyze its running time for the more typical case that points are located by walking. We show that in the worst case the algorithm needs quadratic time, but that this can only happen in degenerate cases. We show that the algorithm runs in O(n log n) time under realistic assumptions. Furthermore, we show that it runs in expected linear time for many random point distributions.
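
Point location by walking, the operation analyzed above, can be sketched with orientation tests over scipy's simplex-neighbor structure. This straight-walk toy has no floating-point robustness safeguards, precisely the kind of degeneracy the worst-case analysis is about:

```python
import numpy as np
from scipy.spatial import Delaunay

def orient(a, b, c):
    """Twice the signed area of triangle abc (positive if counterclockwise)."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def locate_by_walking(dt, q, start=0):
    """Walk from simplex `start` toward the simplex containing q: at each
    step, cross an edge that separates q from the opposite vertex.
    Returns -1 if the walk exits the hull.  A sketch; not robust."""
    t = start
    while True:
        verts = dt.points[dt.simplices[t]]
        for i in range(3):
            a, b = verts[(i + 1) % 3], verts[(i + 2) % 3]
            # vertex i is opposite edge (a, b); cross it if q lies strictly
            # on the other side of that edge from vertex i
            if orient(a, b, q) * orient(a, b, verts[i]) < 0:
                t = int(dt.neighbors[t][i])
                if t == -1:
                    return -1             # q is outside the triangulation
                break
        else:
            return t                      # no separating edge: q is inside t

rng = np.random.default_rng(5)
dt = Delaunay(rng.random((500, 2)))
q = np.array([0.5, 0.5])
print(locate_by_walking(dt, q), dt.find_simplex(q))  # both contain q
```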

Proceedings ArticleDOI
26 Jun 2009
TL;DR: In this paper, the authors propose an approach to automatically fill holes in triangulated models by unfolding the hole boundary onto a plane using energy minimization. And then, they triangulate the unfolded hole using a constrained Delaunay triangulation, and embed the triangular mesh as a minimum energy surface in ℝ3.
Abstract: We propose a novel approach to automatically fill holes in triangulated models. Each hole is filled using a minimum energy surface that is obtained in three steps. First, we unfold the hole boundary onto a plane using energy minimization. Second, we triangulate the unfolded hole using a constrained Delaunay triangulation. Third, we embed the triangular mesh as a minimum energy surface in ℝ³. The running time of the method depends primarily on the size of the hole boundary and not on the size of the model, thereby making the method applicable to large models. Our experiments demonstrate the applicability of the algorithm to the problem of filling holes bounded by highly curved boundaries in large models.
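
Step two of the pipeline, the constrained Delaunay triangulation of the unfolded boundary, can be reproduced with Shewchuk's Triangle through its Python wrapper (assumed installed as the `triangle` package); the wavy planar loop below stands in for an actual unfolded hole boundary.

```python
import numpy as np
import triangle  # Python wrapper of Shewchuk's Triangle (assumed available)

# Once the hole boundary has been unfolded to the plane (step 1, not shown),
# triangulate the polygon with a constrained Delaunay triangulation that
# keeps every boundary edge intact.
n = 12
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
boundary = np.c_[np.cos(t) * (1 + 0.2 * np.sin(3 * t)), np.sin(t)]
segments = [(i, (i + 1) % n) for i in range(n)]   # the hole boundary loop

# 'p' = triangulate the planar straight-line graph, honoring the segments.
cdt = triangle.triangulate({'vertices': boundary, 'segments': segments}, 'p')
print(len(cdt['triangles']), "triangles fill the unfolded hole")
```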

Journal IssueDOI
TL;DR: This paper proves that the largest independent set in the Delaunay graph of a randomly and uniformly selected point set in the unit square is O(n log² log n / log n), with probability tending to 1.
Abstract: Given a point set P in the plane, the Delaunay graph with respect to axis-parallel rectangles is a graph defined on the vertex set P, in which two points p, q ∈ P are connected by an edge if and only if there is a rectangle parallel to the coordinate axes that contains p and q, but no other elements of P. The following question of Even et al. (SIAM J Comput 33 (2003) 94-136) was motivated by a frequency assignment problem in cellular telephone networks: Does there exist a constant c > 0 such that the Delaunay graph of any set of n points in general position in the plane contains an independent set of size at least cn? We answer this question in the negative, by proving that the largest independent set in the Delaunay graph of a randomly and uniformly selected point set in the unit square is O(n log² log n / log n), with probability tending to 1. We also show that our bound is not far from optimal, as the Delaunay graph of a uniform random set of n points almost surely has an independent set of size at least cn log log n / (log n log log log n).
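
The rectangle Delaunay graph is easy to construct explicitly: since any axis-parallel rectangle containing p and q must contain their bounding box, p and q are adjacent iff that box contains no third point. A brute-force sketch:

```python
import numpy as np
from itertools import combinations

def rectangle_delaunay_edges(P):
    """Edges of the Delaunay graph w.r.t. axis-parallel rectangles:
    p and q are adjacent iff their axis-parallel bounding box contains
    no other point of P."""
    P = np.asarray(P, float)
    edges = []
    for i, j in combinations(range(len(P)), 2):
        lo, hi = np.minimum(P[i], P[j]), np.maximum(P[i], P[j])
        inside = np.all((P >= lo) & (P <= hi), axis=1)
        if inside.sum() == 2:            # only p and q themselves
            edges.append((i, j))
    return edges

rng = np.random.default_rng(11)
pts = rng.random((50, 2))
print(len(rectangle_delaunay_edges(pts)), "edges on 50 random points")
```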

Journal ArticleDOI
TL;DR: This work investigates ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an arbitrary, unknown input distribution, and gives self-improving algorithms for sorting and clustering.
Abstract: We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an unknown input distribution D. We assume here that D is of product type. More precisely, suppose that we need to process a sequence I_1, I_2, ... of inputs I = (x_1, x_2, ..., x_n) of some fixed length n, where each x_i is drawn independently from some arbitrary, unknown distribution D_i. The goal is to design an algorithm for these inputs so that eventually the expected running time will be optimal for the input distribution D = D_1 * D_2 * ... * D_n. We give such self-improving algorithms for two problems: (i) sorting a sequence of numbers and (ii) computing the Delaunay triangulation of a planar point set. Both algorithms achieve optimal expected limiting complexity. The algorithms begin with a training phase during which they collect information about the input distribution, followed by a stationary regime in which the algorithms settle to their optimized incarnations.
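
The training-then-stationary structure of the sorting result can be caricatured in a few lines: learn bucket boundaries from training inputs, then sort by routing each element to its bucket. The sketch below pools all positions into one set of quantile boundaries; the actual algorithm additionally builds an entropy-optimal search structure per input position i, which is omitted here.

```python
import numpy as np

class SelfImprovingSorter:
    """Simplified sketch of a self-improving sorter: the training phase
    learns bucket boundaries (quantiles of the pooled input distribution),
    so that in the stationary regime buckets are balanced in expectation
    and each input is sorted by cheap bucketing."""

    def __init__(self, n_buckets=64):
        self.k = n_buckets
        self.boundaries = None

    def train(self, samples):
        qs = np.linspace(0, 1, self.k + 1)[1:-1]
        self.boundaries = np.quantile(np.asarray(samples).ravel(), qs)

    def sort(self, x):
        buckets = [[] for _ in range(self.k)]
        for xi in x:
            buckets[np.searchsorted(self.boundaries, xi)].append(xi)
        out = []
        for b in buckets:                 # bucket value ranges are increasing,
            out.extend(sorted(b))         # so concatenation is globally sorted
        return out

rng = np.random.default_rng(9)
n = 256
shift = rng.random(n) * 10                   # x_i ~ Uniform[shift_i, shift_i + 1]
sorter = SelfImprovingSorter()
sorter.train(rng.random((500, n)) + shift)   # training phase
x = rng.random(n) + shift                    # stationary-regime input
assert sorter.sort(x) == sorted(x)
print("sorted", n, "items via learned buckets")
```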

Book ChapterDOI
05 Dec 2009
TL;DR: This paper presents a local self-stabilizing algorithm that constructs a Delaunay graph from any initial connected topology in a distributed manner and terminates in time O(n³) in the worst case.
Abstract: This paper studies the construction of self-stabilizing topologies for distributed systems. While recent research has focused on chain topologies where nodes need to be linearized with respect to their identifiers, we go a step further and explore a natural 2-dimensional generalization. In particular, we present a local self-stabilizing algorithm that constructs a Delaunay graph from any initial connected topology and in a distributed manner. This algorithm terminates in time O(n³) in the worst case. We believe that such self-stabilizing Delaunay networks have interesting applications and give insights into the necessary geometric reasoning that is required for higher-dimensional linearization problems.