
Showing papers on "Constrained Delaunay triangulation published in 2017"


Journal ArticleDOI
TL;DR: The results show that the proposed algorithm can preserve the main shape of the polyline and meet the area-maintaining constraint during large-scale change and is also free from self-intersection.
Abstract: As a basic and significant operator in map generalization, polyline simplification needs to work across scales. Perkal’s e-circle rolling approach, in which a circle with diameter e is rolled on both sides of the polyline so that small bend features can be detected and removed, is considered one of the few scale-driven solutions. However, the envelope computation, which is a key part of this method, has been difficult to implement. Here, we present a computational method that implements Perkal’s proposal. To simulate the effects of a rolling circle, Delaunay triangulation is used to detect bend features and to construct the envelope structure around a polyline. Then, different connection methods within the enveloping area are provided to output the abstracted result, and a strategy to determine the best connection method is explored. Experiments with real land-use polygon data are conducted, and comparisons with other algorithms are discussed. In addition to the scale-specificity inherited from Perkal’s proposal, the results show that the proposed algorithm can preserve the main shape of the polyline and meet the area-maintaining constraint during large-scale change. This algorithm is also free from self-intersection.
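Illustrative sketch (not from the paper): the paper's envelope construction relies on a constrained Delaunay triangulation, but the effect of rolling an e-circle along a polygon boundary can be roughly approximated by morphological opening and closing with a disk of radius e/2 (shapely buffers). The toy parcel and the value of e are assumptions; this is only an analogue of the rolling-circle idea, not the authors' method.

# Rough morphological analogue of Perkal's e-circle rolling on a polygon
# boundary (NOT the paper's CDT-based envelope method): opening removes
# protrusions narrower than eps, closing fills intrusions narrower than eps.
from shapely.geometry import Polygon

eps = 2.0                                   # diameter of the rolling circle (assumed)
land_use = Polygon([(0, 0), (10, 0), (10, 6), (6, 6),
                    (5.5, 2), (5, 6), (0, 6)])   # toy parcel with a narrow notch

r = eps / 2
opened = land_use.buffer(-r).buffer(r)      # erode then dilate: drop thin protrusions
generalized = opened.buffer(r).buffer(-r)   # dilate then erode: fill thin intrusions

print(round(land_use.area, 2), round(generalized.area, 2))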

57 citations


Journal ArticleDOI
TL;DR: A constrained Delaunay discretization method is developed to generate high-quality, doubly adaptive meshes of highly discontinuous geological media; it produces smoother elements and a better distribution of element aspect ratios, and can be applied to various simulations of complex geological media containing large numbers of discontinuities.

34 citations


Journal ArticleDOI
TL;DR: The proposed approach performs an exact cellular decomposition of non-convex polygonal coastal areas and includes an attribute-based schema for area partitioning and coverage in a multi-UAS context.
Abstract: The paper presents a novel algorithmic approach that makes it possible to tackle, in a common framework, the problems of area decomposition, partition and coverage for multiple heterogeneous Unmanned Aircraft Systems (UAS). The approach combines computational geometry techniques and graph search algorithms in a multi-UAS context. Even though the literature provides several strategies for area decomposition, such as grid overlay decomposition or exact cellular methods, some either fail to decompose complex areas successfully or have associated path-generation strategies that are not feasible. The proposed approach performs an exact cellular decomposition of non-convex polygonal coastal areas and includes an attribute-based schema for area partitioning. In particular, the proposed solution uses a Constrained Delaunay Triangulation (CDT) for computing a configuration space of a complex area containing obstacles. The cell size of each produced triangle is constrained to the maximum projected Field-of-View (FoV) of the sensor on board each UAS. In addition, the resulting mesh is considered an undirected graph, where each vertex has several attributes used for area partitioning and coverage in a multi-UAS context. Simulation results show how the algorithms compute sound solutions in real complex coastal regions.
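Illustrative sketch (not from the paper): a minimal version of the decomposition step, assuming the `triangle` package (Python bindings to Shewchuk's Triangle). A polygonal area is triangulated as a PSLG with the maximum triangle area bounded by an assumed FoV footprint, and the mesh is turned into an undirected graph whose edges connect triangles sharing an edge. The polygon, the area bound and the graph representation are illustrative, not the authors' exact pipeline.

# Sketch: CDT of a polygonal area with a per-triangle area bound, then a
# triangle-adjacency graph. Assumes the `triangle` package (Shewchuk bindings).
import numpy as np
import triangle as tr
from collections import defaultdict

poly = np.array([(0, 0), (10, 0), (12, 6), (5, 9), (-1, 5)], dtype=float)
segs = [(i, (i + 1) % len(poly)) for i in range(len(poly))]
fov_area = 1.5                                   # assumed max projected FoV footprint

mesh = tr.triangulate({'vertices': poly, 'segments': segs},
                      f'pq30a{fov_area}')        # PSLG, 30 deg min angle, area bound

# Undirected graph: triangles are nodes, shared edges are graph edges.
edge_owner = defaultdict(list)
for t_idx, (a, b, c) in enumerate(mesh['triangles']):
    for e in ((a, b), (b, c), (c, a)):
        edge_owner[tuple(sorted(e))].append(t_idx)
adjacency = [pair for pair in edge_owner.values() if len(pair) == 2]
print(len(mesh['triangles']), 'triangles,', len(adjacency), 'adjacency edges')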

33 citations


Journal ArticleDOI
TL;DR: This article develops a new method to obtain proper IDTs on manifold triangle meshes and proves that by adding at most O(n) auxiliary sites, the computed GVD satisfies the closed ball property, and hence its dual graph is a proper IDT.
Abstract: Intrinsic Delaunay triangulation (IDT) naturally generalizes Delaunay triangulation from R² to curved surfaces. Due to its many favorable properties, the IDT whose vertex set includes all mesh vertices is of particular interest in polygonal mesh processing. To date, the only way to construct such an IDT has been the edge-flipping algorithm, which iteratively flips non-Delaunay edges until they become locally Delaunay. Although this algorithm is conceptually simple and guaranteed to terminate in finitely many steps, it has no known time-complexity bound and may produce triangulations containing faces with only two edges. This article develops a new method to obtain proper IDTs on manifold triangle meshes. We first compute a geodesic Voronoi diagram (GVD) by taking all mesh vertices as generators and then find its dual graph. The sufficient condition for the dual graph to be a proper triangulation is that all Voronoi cells satisfy the so-called closed ball property. To guarantee the closed ball property everywhere, a certain sampling criterion is required. For Voronoi cells that violate the closed ball property, we fix them by computing topologically safe regions, in which auxiliary sites can be added without changing the topology of the Voronoi diagram beyond them. Given a mesh with n vertices, we prove that by adding at most O(n) auxiliary sites, the computed GVD satisfies the closed ball property, and hence its dual graph is a proper IDT. Our method has a theoretical worst-case time complexity of O(n² + tn log n), where t is the number of obtuse angles in the mesh. Computational results show that it empirically runs in linear time on real-world models.
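Illustrative sketch (not from the paper): for context on the edge-flipping baseline the article improves upon, the local Delaunay test for an interior edge states that the two angles opposite the edge in its adjacent triangles must sum to at most pi; edges failing the test are flipped until none remain. The planar predicate below is a generic illustration of that criterion, not the authors' GVD-based construction.

# Local Delaunay test used by the classical edge-flipping algorithm (sketch).
# An interior edge (p, q) shared by triangles (p, q, a) and (q, p, b) is
# locally Delaunay iff angle at a + angle at b <= pi; otherwise flip it to (a, b).
import numpy as np

def opposite_angle(apex, u, v):
    """Angle at `apex` in the triangle (apex, u, v)."""
    d1, d2 = u - apex, v - apex
    cosang = np.dot(d1, d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def is_locally_delaunay(p, q, a, b):
    return opposite_angle(a, p, q) + opposite_angle(b, p, q) <= np.pi + 1e-12

p, q = np.array([0.0, 0.0]), np.array([2.0, 0.0])
a, b = np.array([1.0, 0.4]), np.array([1.0, -0.4])   # two very flat triangles
print(is_locally_delaunay(p, q, a, b))  # False -> the flip to edge (a, b) would be applied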

26 citations


Proceedings ArticleDOI
01 Jun 2017
TL;DR: This paper presents an algorithm to compute a waypoint list for each UAS such that each sub-area is covered with its on-board sensor, following a pattern that goes from the borders of the sub-area to the inner regions.
Abstract: This paper addresses area coverage in complex non-convex coastal regions for an arbitrary number of heterogeneous Unmanned Aircraft Systems (UAS). The space is discretized with a constrained Delaunay triangulation, and Lloyd optimization is applied to the computed mesh. The paper presents an algorithm to compute a waypoint list for each UAS such that each sub-area is covered with its on-board sensor, following a pattern that goes from the borders of the sub-area to the inner regions. In addition, the resulting paths lead to a uniform coverage pattern that avoids visiting some regions more often than others. Different sensitivity parameters of the algorithm are compared based on the average angles between the waypoints and the total length of the paths. Results show that these parameters support the optimization of the computed waypoint lists and that the proposed algorithm produces feasible coverage paths while increasing their smoothness.
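Illustrative sketch (not from the paper): one common way to approximate a Lloyd-style optimization pass on a triangulation is to move each interior vertex toward the area-weighted centroid of its incident triangles while keeping boundary vertices fixed. The point set and the smoothing rule below are assumptions, not the authors' exact optimization.

# One Lloyd-style smoothing pass on a triangulation (sketch): each interior
# vertex moves to the area-weighted centroid of its incident triangles.
# Boundary vertices stay fixed. Illustrative only.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
pts = rng.random((60, 2))
tri = Delaunay(pts)

boundary = set(tri.convex_hull.ravel())
num, den = np.zeros_like(pts), np.zeros(len(pts))
for simplex in tri.simplices:
    p = pts[simplex]
    area = 0.5 * abs((p[1, 0] - p[0, 0]) * (p[2, 1] - p[0, 1])
                     - (p[2, 0] - p[0, 0]) * (p[1, 1] - p[0, 1]))
    centroid = p.mean(axis=0)
    for v in simplex:
        num[v] += area * centroid
        den[v] += area

smoothed = pts.copy()
interior = np.array([i for i in range(len(pts)) if i not in boundary])
smoothed[interior] = num[interior] / den[interior, None]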

25 citations


Proceedings ArticleDOI
01 May 2017
TL;DR: LS-ELAS, a line segment extension to the ELAS algorithm, is presented; it increases performance and robustness, and improves accuracy by using an adaptive method to sample candidate points along edge segments.
Abstract: We present LS-ELAS, a line segment extension to the ELAS algorithm, which increases its performance and robustness. LS-ELAS is a binocular dense stereo matching algorithm that computes the disparities in constant time for most of the pixels in the image and in linear time for a small subset of the pixels (support points). Our approach uses line segments to determine the support points instead of selecting them uniformly over the image range. This way we find very informative support points which preserve depth discontinuities. The prior of our Bayesian stereo matching method is based on a set of line segments and a set of support points. Both sets are passed to a constrained Delaunay triangulation to generate a triangulation mesh that is aware of possible depth discontinuities. We further increase accuracy by using an adaptive method to sample candidate points along edge segments. We evaluated our algorithm on the Middlebury benchmark.
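Illustrative sketch (not from the paper): the piecewise-planar part of such a prior can be sketched by triangulating the support points and linearly interpolating their disparities within each triangle. The code uses an unconstrained matplotlib triangulation as a stand-in for the CDT, and the support points and disparities are made up; ELAS's actual probabilistic prior is richer than this.

# Sketch of a piecewise-planar disparity prior over triangulated support points
# (unconstrained Delaunay here as a stand-in for the CDT used in the paper).
import numpy as np
import matplotlib.tri as mtri

sx = np.array([10, 200, 400, 620, 300, 50, 580], dtype=float)   # support x (assumed)
sy = np.array([15, 30, 25, 40, 240, 460, 450], dtype=float)     # support y (assumed)
sd = np.array([12, 14, 11, 9, 22, 35, 18], dtype=float)         # their disparities

triang = mtri.Triangulation(sx, sy)           # a CDT would additionally honor line segments
prior = mtri.LinearTriInterpolator(triang, sd)

# Disparity prior evaluated at arbitrary pixels (NaN outside the triangulation).
px, py = np.meshgrid(np.arange(0, 640, 64), np.arange(0, 480, 48))
print(prior(px, py).filled(np.nan)[:2])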

20 citations


Journal ArticleDOI
TL;DR: The experimental results show that the SABM method can be used for continuous generalization and generates smooth, natural and visually pleasing linear features with gradient effects.
Abstract: This paper presents a new method for performing continuous scale transformations of linear features using Simulated Annealing-Based Morphing (SABM). This study addresses two key problems in the continuous generalization of linear features by morphing, namely the detection of characteristic points and correspondence matching. First, an algorithm based on the Constrained Delaunay Triangulation (CDT) model is developed to robustly detect characteristic points. Then, an optimization problem is defined and solved to associate the characteristic points of a coarser representation with those of a finer representation. The algorithm decomposes the input shapes into several pairs of corresponding segments and uses the simulated annealing algorithm to find the optimal matching. Simple straight-line trajectories are used to define the movements between corresponding points. The experimental results show that the SABM method can be used for continuous generalization and generates smooth, natural and visually pleasing linear features with gradient effects. In contrast to linear interpolation, the SABM method uses the simulated annealing technique to optimize the correspondence between characteristic points. Moreover, it avoids interior distortions within intermediate shapes and preserves the geographical characteristics of the input shapes.
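Illustrative sketch (not from the paper): once the annealing step has produced matched characteristic points, intermediate shapes follow straight-line trajectories between them. The matched point arrays below are assumed; the correspondence search itself is not reproduced.

# Straight-line morph between matched characteristic points (sketch).
# A and B are corresponding point sequences produced by the matching step.
import numpy as np

A = np.array([(0, 0), (1, 2), (3, 2.5), (5, 1)], dtype=float)          # coarser shape (assumed)
B = np.array([(0, 0), (1.2, 2.4), (2.8, 3.1), (5, 1.3)], dtype=float)  # finer shape (assumed)

def intermediate(t):
    """Shape at scale parameter t in [0, 1]: linear trajectories P(t) = (1 - t) A + t B."""
    return (1.0 - t) * A + t * B

for t in (0.25, 0.5, 0.75):
    print(t, intermediate(t)[1])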

12 citations


Journal ArticleDOI
TL;DR: This paper introduces a modification of the original Delaunay-based optimization algorithm that reduces the number of function evaluations on the boundary of feasibility, leading to significantly fewer datapoints accumulating there and to faster overall convergence.
Abstract: This paper introduces a modification of our original Delaunay-based optimization algorithm (developed in JOGO, DOI: 10.1007/s10898-015-0384-2) that reduces the number of function evaluations on the boundary of feasibility as compared with the original algorithm. A weakness we identified in the original algorithm is the sometimes faulty behavior of the generated uncertainty function near the boundary of feasibility, which leads to more function evaluations along that boundary than might otherwise be necessary. To address this issue, a second search function is introduced which has improved behavior near the boundary of the search domain. Additionally, the datapoints are quantized onto a Cartesian grid over the search domain, which is successively refined. These two modifications lead to a significant reduction of datapoints accumulating on the boundary of feasibility and to faster overall convergence.
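Illustrative sketch (not from the paper): the grid-quantization step can be pictured as snapping candidate datapoints to a Cartesian grid over the search domain whose spacing is halved at each refinement. The bounds and refinement schedule below are assumptions.

# Sketch of quantizing candidate datapoints onto a Cartesian grid that is
# successively refined over the search domain (bounds and schedule assumed).
import numpy as np

lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])   # search domain

def quantize(x, level):
    """Snap x to the grid with 2**level cells per axis."""
    n = 2 ** level
    h = (hi - lo) / n
    return lo + np.round((x - lo) / h) * h

x = np.array([0.337, 0.912])
for level in (3, 4, 5):          # each refinement halves the grid spacing
    print(level, quantize(x, level))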

10 citations



Journal ArticleDOI
TL;DR: This paper proposes a new graphics processing unit (GPU) method able to compute the 2D constrained Delaunay triangulation of a planar straight-line graph consisting of points and segments that improves, in terms of running time, the best known GPU-based approach to the CDT problem.
Abstract: In this paper, we propose a new graphics processing unit (GPU) method able to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight-line graph consisting of points and segments. All existing methods compute the Delaunay triangulation of the given point set, insert all the segments, and then finally transform the resulting triangulation into the CDT. By contrast, our novel approach simultaneously inserts points and segments into the triangulation, taking special care to avoid conflicts during retriangulations due to concurrent insertion of points or concurrent edge flips. Our implementation, using the Compute Unified Device Architecture programming model on NVIDIA GPUs, improves, in terms of running time, on the best known GPU-based approach to the CDT problem.

7 citations


Proceedings ArticleDOI
27 Feb 2017
TL;DR: This new CDT traversal algorithm is more efficient than previous ones: it uses fewer arithmetic operations; it does not add extra thread divergence, since it uses a fixed number of operations; and it is robust with 32-bit floats, contrary to previous traversal algorithms.
Abstract: Acceleration structures are mandatory for ray-tracing applications, allowing a large number of rays to be cast per second. In 2008, Lagae and Dutré proposed using Constrained Delaunay Tetrahedralization (CDT) as an acceleration structure for ray tracing. Our experiments show that their traversal algorithm is not suitable for GPU applications, mainly due to arithmetic errors. This article proposes a new CDT traversal algorithm. This new algorithm is more efficient than the previous ones: it uses fewer arithmetic operations; it does not add extra thread divergence, since it uses a fixed number of operations; and it is robust with 32-bit floats, contrary to the previous traversal algorithms. Hence, it is the first method usable on both CPU and GPU.
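Illustrative sketch (not from the paper): the traversal idea, shown here in its 2D analogue rather than the paper's 3D CDT version, is to start at the simplex containing the ray origin and repeatedly cross the edge through which the ray exits, using the triangulation's neighbor table. The point set, origin and direction are assumptions, and no robustness hardening is attempted.

# 2D analogue of walking a triangulation along a ray (sketch): from the
# triangle containing the origin, step through the exit edge each time.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
pts = rng.random((40, 2))
tri = Delaunay(pts)

def walk(origin, direction, max_steps=100):
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    cur, t_cur, visited = int(tri.find_simplex(o)), 0.0, []
    while cur != -1 and len(visited) < max_steps:
        visited.append(cur)
        verts = tri.simplices[cur]
        best_t, best_k = None, None
        for k in range(3):                       # edge opposite vertex k
            a, b = pts[verts[(k + 1) % 3]], pts[verts[(k + 2) % 3]]
            e = b - a
            n = np.array([-e[1], e[0]])          # normal of the edge's supporting line
            denom = np.dot(n, d)
            if abs(denom) < 1e-12:
                continue
            t = np.dot(n, a - o) / denom         # ray parameter at that line
            s = np.dot(o + t * d - a, e) / np.dot(e, e)
            if t > t_cur + 1e-9 and -1e-9 <= s <= 1 + 1e-9 and (best_t is None or t < best_t):
                best_t, best_k = t, k
        if best_k is None:
            break
        t_cur, cur = best_t, int(tri.neighbors[cur][best_k])
    return visited

print(walk(origin=pts.mean(axis=0), direction=(1.0, 0.7)))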

Journal Article
TL;DR: The purpose of this modification is to decrease the number of comparison operations and the error rates within the matching process, by doing a full analysis of the Delaunay triangles.
Abstract: This paper presents a modification for robust minutiae-based fingerprint verification methods that use Delaunay triangulations. The purpose of this modification is to decrease the number of comparison operations and the error rates within the matching process by doing a full analysis of the Delaunay triangles. From this full analysis, a modified method was proposed. The identified minutiae represent nodes of a connected graph composed of triangles. With this technique, the minimum angle over all triangulations is maximized, which gives local stability to the constructed structures against rotation and translation variations. Geometric thresholds and minutiae data were used to characterize the triangulations created from input and template fingerprint images. The effectiveness of the proposed modification is confirmed with calculations of the False Acceptance Rate (FAR), False Rejection Rate (FRR) and Equal Error Rate (EER) over the FVC2002 databases, compared with the results of other approaches.
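Illustrative sketch (not from the paper): the basic structure such methods analyze is a Delaunay triangulation over the minutiae, with each triangle reduced to rotation- and translation-invariant features such as sorted side lengths and the minimum angle, which are then compared under thresholds. The minutiae coordinates below are made up.

# Sketch: Delaunay triangles over minutiae, reduced to rotation/translation-
# invariant features (sorted side lengths and minimum angle). Illustrative data.
import numpy as np
from scipy.spatial import Delaunay

minutiae = np.array([(12, 40), (55, 18), (90, 60), (40, 85), (70, 95), (25, 70)], float)
tri = Delaunay(minutiae)

def triangle_features(p):
    sides = np.sort([np.linalg.norm(p[1] - p[0]),
                     np.linalg.norm(p[2] - p[1]),
                     np.linalg.norm(p[0] - p[2])])
    a, b, c = sides
    # the smallest angle is opposite the shortest side (law of cosines)
    min_angle = np.arccos(np.clip((b**2 + c**2 - a**2) / (2 * b * c), -1, 1))
    return sides, np.degrees(min_angle)

for simplex in tri.simplices:
    sides, min_angle = triangle_features(minutiae[simplex])
    print(simplex, np.round(sides, 1), round(min_angle, 1))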

Journal Article
TL;DR: In this paper, the Delaunay triangulation method is used to construct surfaces from scattered data points for six different test functions; the interpolating surfaces produced after removing points are reported, and the total and mean absolute errors are calculated and compared.
Abstract: Surface reconstruction from scattered data points is a challenging area in which the main purpose is to produce a smooth surface. In this research, the Delaunay triangulation method is used to construct surfaces from scattered data points for six different test functions. In certain cases the scanned surface contains holes, which makes it difficult to produce a smooth surface. This research tests the accuracy of Delaunay triangulation in generating surfaces when points of the scattered data are removed. Points are removed according to a percentage of the total, and a new surface is generated at each removal level. The study reports the interpolating surfaces produced after point removal, and the total absolute error and mean absolute error are calculated and compared.
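Illustrative sketch (not from the paper): the evaluation loop described can be pictured as follows, using Delaunay-based piecewise-linear interpolation from scipy. The test function and removal percentages are assumptions, not the paper's six functions.

# Sketch of the accuracy test: Delaunay-based linear interpolation of scattered
# samples, with a percentage of points removed and the MAE measured at them.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(2)
pts = rng.random((500, 2))
f = lambda x, y: np.sin(3 * x) * np.cos(2 * y)        # one assumed test function
z = f(pts[:, 0], pts[:, 1])

for frac in (0.1, 0.2, 0.3):                          # fraction of points removed
    removed = rng.choice(len(pts), int(frac * len(pts)), replace=False)
    keep = np.setdiff1d(np.arange(len(pts)), removed)
    interp = LinearNDInterpolator(pts[keep], z[keep])  # built on a Delaunay triangulation
    pred = interp(pts[removed])
    mae = np.nanmean(np.abs(pred - z[removed]))        # NaN where outside the hull
    print(f'{int(frac * 100)}% removed, MAE = {mae:.4f}')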

Proceedings ArticleDOI
25 Feb 2017
TL;DR: This work proposes the first working GPU algorithm for the 2D Delaunay refinement problem, and it is proven to terminate with finite output size for an input PSLG with no angle smaller than 60° and θ ≥ 20.7°.
Abstract: We propose the first working GPU algorithm for the 2D Delaunay refinement problem. Our algorithm adds Steiner points to an input planar straight line graph (PSLG) to generate a constrained Delaunay mesh with triangles having no angle smaller than an input θ. It is shown to run from a few times to an order of magnitude faster than the well-known Triangle software, which is the fastest CPU Delaunay mesh generator. Our implementation handles degeneracy and is numerically robust. It is proven to terminate with finite output size for an input PSLG with no angle smaller than 60° and θ ≥ 20.7°. In addition, we note that meshes generated by our algorithm are of similar size to those produced by Triangle, which takes good care to keep its output small.
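Illustrative sketch (not from the paper): what this kind of output looks like can be sketched on the CPU with Shewchuk's Triangle via the `triangle` package, refining a PSLG so that no angle falls below the bound (the paper's θ = 20.7° is used here as the illustrative value). The PSLG itself is assumed, and this is of course not the GPU algorithm.

# CPU sketch of constrained Delaunay refinement with a minimum-angle bound,
# using the `triangle` package; the input PSLG is illustrative.
import numpy as np
import triangle as tr

verts = np.array([(0, 0), (4, 0), (4, 3), (0, 3), (1, 1), (3, 1), (2, 2.2)], float)
segs = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 4)]   # boundary + inner constraints
pslg = {'vertices': verts, 'segments': segs}

refined = tr.triangulate(pslg, 'pq20.7')    # 'p' = PSLG, 'q20.7' = min angle 20.7 degrees
steiner = len(refined['vertices']) - len(verts)
print(len(refined['triangles']), 'triangles,', steiner, 'Steiner points added')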

Journal ArticleDOI
TL;DR: The texture features are first introduced into the generalization process, and a self-organizing mapping (SOM)-based algorithm is used for texture classification and a new cognition-based hierarchical algorithm is proposed for model-group clustering.
Abstract: Three-dimensional (3D) building models have been widely used in the fields of urban planning, navigation and virtual geographic environments. These models incorporate many details to address the complexities of urban environments. Level-of-detail (LOD) technology is commonly used to model progressive transmission and visualization. These detailed groups of models can be replaced by a single model using generalization. In this paper, the texture features are first introduced into the generalization process, and a self-organizing mapping (SOM)-based algorithm is used for texture classification. In addition, a new cognition-based hierarchical algorithm is proposed for model-group clustering. First, a constrained Delaunay triangulation (CDT) is constructed using the footprints of building models that are segmented by a road network, and a preliminary proximity graph is extracted from the CDT by visibility analysis. Second, the graph is further segmented by the texture–feature and landmark models. Third, a minimum support tree (MST) is created from the segmented graph, and the final groups are obtained by linear detection and discrete-model conflation. Finally, these groups are conflated using small-triangle removal while preserving the original textures. The experimental results demonstrate the effectiveness of this algorithm.
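Illustrative sketch (not from the paper): the grouping backbone can be pictured generically as Delaunay edges between building-footprint centroids forming a proximity graph, over which a minimum spanning tree is built and then cut at long edges to produce clusters. The centroids and the cut threshold are assumptions, and the paper's additional segmentation by roads, texture features and landmarks is omitted.

# Sketch of the proximity-graph / spanning-tree backbone of the grouping step:
# Delaunay edges between footprint centroids -> spanning tree -> cut long edges.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

centroids = np.array([(0, 0), (1, 0.2), (0.5, 1), (5, 5), (5.5, 5.3), (6, 4.8), (10, 0)], float)
tri = Delaunay(centroids)

G = nx.Graph()
for simplex in tri.simplices:
    for i in range(3):
        u, v = int(simplex[i]), int(simplex[(i + 1) % 3])
        G.add_edge(u, v, weight=float(np.linalg.norm(centroids[u] - centroids[v])))

mst = nx.minimum_spanning_tree(G)
cut = 2.0                                     # assumed distance threshold
mst.remove_edges_from([(u, v) for u, v, w in mst.edges(data='weight') if w > cut])
clusters = list(nx.connected_components(mst))
print(clusters)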

Posted Content
TL;DR: The objective of this work is to report an update to that review article, considering contemporary prominent algorithms and generalizations for the problem of two-dimensional constrained Delaunay triangulation.
Abstract: In this article, recent works on 2D Constrained Delaunay Triangulation (CDT) algorithms are reported. Since the review of CDT algorithms presented by de Floriani (Issues on Machine Vision, Springer Vienna, pp. 95–104, 1989), different algorithms for the construction and application of CDTs have appeared in the literature, each concerned with different aspects of computation and different suitabilities. Therefore, the objective of this work is to report an update to that review article, considering contemporary prominent algorithms and generalizations for the problem of two-dimensional constrained Delaunay triangulation.

Patent
26 Apr 2017
TL;DR: In this article, a method for representing a navigation grid map for a 3D scene is presented, comprising the following steps: extracting the walking hierarchy planes of the 3D scene to obtain a set of walking hierarchy planes; removing the isolated and non-walkable planes; abstracting the models in the scene as the initial non-passable regions of a way-finding role; combining and merging those regions into the final non-passable regions; performing constrained Delaunay triangulation on the constraints of each hierarchy plane; and structuring the final navigation grid from the resulting triangle sets.
Abstract: The invention provides a method for representing a navigation grid map for a 3D scene, comprising the following steps: extracting the walking hierarchy planes in a 3D scene and obtaining a set of walking hierarchy planes; removing the isolated planes and the non-walkable planes; abstracting the initial non-passable regions of a way-finding role in the walking hierarchy planes, so that the models in the 3D scene are abstracted as non-passable regions; combining the polygons of the initial non-passable regions, and merging the intersecting regions among them, to form the final non-passable regions; performing constrained Delaunay triangulation on the constraints of each hierarchy plane to form a triangle set; and structuring the final navigation grid from the triangle sets of all walking hierarchy planes. The final navigation grid fully separates obstacles from walking regions. The invention applies to the technical field of digital media and can effectively ensure that obstacles are separated from walking regions.
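Illustrative sketch (not from the patent): the final triangulation step can be pictured as a CDT of a walkable plane in which obstacle polygons become holes, so that the remaining triangles and their adjacency form the navigation structure. The geometry is illustrative, and the earlier pipeline steps (plane extraction, region merging) are not reproduced; the `triangle` package is assumed.

# Sketch of the final navmesh step: CDT of a walkable plane with an obstacle
# polygon as a hole; the remaining triangles are the walkable cells.
import numpy as np
import triangle as tr

outer = [(0, 0), (12, 0), (12, 8), (0, 8)]
obstacle = [(4, 3), (7, 3), (7, 5), (4, 5)]                   # a non-passable region
verts = np.array(outer + obstacle, float)
segs = ([(i, (i + 1) % 4) for i in range(4)] +                # outer boundary
        [(4 + i, 4 + (i + 1) % 4) for i in range(4)])         # obstacle boundary
navmesh = tr.triangulate({'vertices': verts, 'segments': segs,
                          'holes': [(5.5, 4.0)]},             # a point inside the obstacle
                         'pq25')

print(len(navmesh['triangles']), 'walkable triangles')        # obstacle interior stays empty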

Journal ArticleDOI
TL;DR: A novel mesh deformation technique is developed based on the Delaunay graph mapping method and inverse distance weighting (IDW) interpolation; it offers better control of the near-surface mesh quality.
Abstract: A novel mesh deformation technique is developed based on the Delaunay graph mapping method and inverse distance weighting (IDW) interpolation. The algorithm maintains the efficiency of Delaunay graph mapping mesh deformation while also offering better control of the near-surface mesh quality. The Delaunay graph is used to divide the mesh domain into a number of sub-domains. On each sub-domain, inverse distance weighting interpolation is applied, resulting in efficiency similar to that of the fast Delaunay graph mapping method. The paper shows how the near-wall mesh quality is controlled and improved by the new method.
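Illustrative sketch (not from the paper): the interpolation half of such a scheme can be pictured as plain inverse distance weighting of known boundary-node displacements onto interior nodes. In the paper this is applied per Delaunay-graph sub-domain, which the sketch omits; all coordinates and displacements below are assumptions.

# Sketch of the IDW half of the scheme: displacements known at boundary nodes
# are spread to interior mesh nodes by inverse distance weighting.
import numpy as np

boundary_xy = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], float)        # assumed boundary nodes
boundary_disp = np.array([(0, 0), (0, 0), (0.1, 0.05), (0.05, 0.1)], float)
interior_xy = np.array([(0.5, 0.5), (0.25, 0.75)], float)              # assumed interior nodes

def idw(targets, sources, values, power=2.0, eps=1e-12):
    d = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1, keepdims=True)

print(idw(interior_xy, boundary_xy, boundary_disp))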

Journal ArticleDOI
TL;DR: This paper surveys properties that help to understand the performance of shelling: shelling provides most of the tetrahedra enclosed by the final surface, but it can “get stuck” or block in unexpected cases.
Abstract: Recently, methods have been proposed to reconstruct a 2-manifold surface from a sparse cloud of points estimated from an image sequence. Once a 3D Delaunay triangulation is computed from the points, the surface is searched by growing a set of tetrahedra whose boundary is maintained 2-manifold. Shelling is a step that adds one tetrahedron at a time to the growing set. This paper surveys properties that help to understand the performance of shelling: shelling provides most of the tetrahedra enclosed by the final surface, but it can “get stuck” or block in unexpected cases.

Proceedings ArticleDOI
01 Jun 2017
TL;DR: A novel approach based on the Delaunay triangulation is proposed to solve skeleton extraction in natural images, and the algorithm shows promising results.
Abstract: Conventional skeleton extraction methods require a closed boundary constraint to solve the problem. In natural images, the closed boundary constraint might not be easily satisfied due to similarity between the object and its background, occlusion, etc. In this paper, a novel approach based on the Delaunay triangulation is proposed to solve skeleton extraction in natural images. The algorithm shows promising results in extracting skeletons from natural images.

Journal ArticleDOI
TL;DR: A data‐parallel algorithm for the construction of Delaunay triangulations on the sphere that resolves a breakdown situation of the classical Bowyer–Watson point insertion algorithm and is suitable for practical implementation because of its compact formulation.
Abstract: We present a data-parallel algorithm for the construction of Delaunay triangulations on the sphere. Our method combines a variant of the classical Bowyer–Watson point insertion algorithm with the recently published parallelization technique by Jacobsen et al. It resolves a breakdown situation of the latter approach and is suitable for practical implementation because of its compact formulation. Some complementary aspects are discussed, such as the parallel workload and floating-point arithmetic. In a second step, the generated triangulations are reordered by a stripification algorithm. This improves cache performance and significantly reduces data-read operations and indirect addressing in multi-threaded stencil loops. This paper is an extended version of our Parallel Processing and Applied Mathematics conference contribution. Copyright © 2016 John Wiley & Sons, Ltd.
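Illustrative sketch (not from the paper): the core of a Bowyer–Watson insertion step in the plane is to find the triangles whose circumcircles contain the new point, remove them, and re-triangulate the cavity boundary by connecting it to the point. The didactic, non-robust version below does not reproduce the spherical or data-parallel aspects of the paper.

# Planar Bowyer-Watson insertion (didactic sketch, not robust): the paper's
# spherical, data-parallel variant builds on the same cavity-retriangulation step.
import numpy as np

def in_circumcircle(a, b, c, p):
    """True if p lies inside the circumcircle of triangle (a, b, c)."""
    m = np.array([
        [a[0] - p[0], a[1] - p[1], (a[0]**2 - p[0]**2) + (a[1]**2 - p[1]**2)],
        [b[0] - p[0], b[1] - p[1], (b[0]**2 - p[0]**2) + (b[1]**2 - p[1]**2)],
        [c[0] - p[0], c[1] - p[1], (c[0]**2 - p[0]**2) + (c[1]**2 - p[1]**2)],
    ])
    orient = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return np.linalg.det(m) * np.sign(orient) > 0

def bowyer_watson(points):
    pts = [np.asarray(p, float) for p in points]
    n = len(pts)
    big = 10 * max(1.0, float(np.max(np.abs(points))))   # generous super-triangle
    pts += [np.array([-big, -big]), np.array([big, -big]), np.array([0.0, big])]
    triangles = [(n, n + 1, n + 2)]
    for i in range(n):                                   # insert one point at a time
        bad = [t for t in triangles
               if in_circumcircle(pts[t[0]], pts[t[1]], pts[t[2]], pts[i])]
        edge_count = {}                                  # cavity boundary = edges seen once
        for t in bad:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                key = tuple(sorted(e))
                edge_count[key] = edge_count.get(key, 0) + 1
        triangles = [t for t in triangles if t not in bad]
        triangles += [(a, b, i) for (a, b), cnt in edge_count.items() if cnt == 1]
    return [t for t in triangles if max(t) < n]          # drop super-triangle leftovers

pts = np.random.default_rng(3).random((30, 2))
print(len(bowyer_watson(pts)), 'triangles')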

Journal ArticleDOI
TL;DR: A novel algorithm is proposed based on a constrained Delaunay triangulation to identify and eliminate the discrepancies, and the alignment is performed without moving vertices with a snapping operator to guarantee that the datasets have been properly conflated and that the polygons are geometrically valid.
Abstract: Datasets produced by different countries or organisations are seldom properly aligned and contain several discrepancies (e.g., gaps and overlaps). This problem has so far been tackled almost exclusively by snapping vertices based on a user-defined threshold. However, as we argue in this paper, this leads to invalid geometries, is error-prone, and leaves several discrepancies along the boundaries. We propose a novel algorithm to align the boundaries of adjacent datasets. It is based on a constrained Delaunay triangulation to identify and eliminate the discrepancies, and the alignment is performed without moving vertices with a snapping operator. This allows us to guarantee that the datasets have been properly conflated and that the polygons are geometrically valid. We present our algorithm and our implementation (based on the stable and fast triangulator in CGAL), and we show how it can be used in practice with different experiments on real-world datasets. Our experiments demonstrate that our approach is highly efficient and that it yields better results than snapping-based methods.

Journal ArticleDOI
TL;DR: A novel approach to extract 2D skeleton information (skeletonization) from natural images is proposed, using better edge point detection and skeleton extraction.
Abstract: In this paper a novel approach to extract 2D skeleton information (skeletonization) from natural images is proposed. The work presented here is an extension of our previous paper presented at the International Symposium on Multimedia 2016. In the previous work, a threshold-based method was utilized; here the algorithm is further improved by using better edge point detection and skeleton extraction. Furthermore, the proposed method is compared with the Skeleton Strength Map (SSM) and shows better results both visually and numerically (F-measure comparison).

Book ChapterDOI
31 Jan 2017
TL;DR: Experimental results show that NSGA-II-DT outperforms NSGA-II on WFG problems with 4, 5 and 6 objectives, and a comparison of two projection strategies, using a unit plane and a least-squares plane in the objective space, shows that the former is more effective than the latter.
Abstract: This paper investigates the scalability of the Delaunay triangulation (DT) based diversity preservation technique for solving many-objective optimization problems (MaOPs). Following the NSGA-II algorithm, the proposed optimizer with DT-based density measurement (NSGA-II-DT) determines the density of individuals according to the DT mesh built on the population in the objective space. To reduce the computing time, the population is projected onto a plane before building the DT mesh. Experimental results show that NSGA-II-DT outperforms NSGA-II on WFG problems with 4, 5 and 6 objectives. Two projection strategies using a unit plane and a least-squares plane in the objective space are investigated and compared. Our results also show that the former is more effective than the latter.
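Illustrative sketch (not from the paper): one way to picture a DT-based density measure is to normalize the objective vectors onto the unit plane, triangulate them in 2D, and score each individual by the mean length of its incident Delaunay edges, so that larger scores indicate sparser neighborhoods. The toy objectives and the exact scoring rule below are assumptions, not the paper's definition.

# Sketch of a DT-based density measure for a many-objective population.
import numpy as np
from scipy.spatial import Delaunay
from collections import defaultdict

rng = np.random.default_rng(4)
F = rng.random((50, 3)) + 0.1                # objective vectors (3 objectives here)
U = F / F.sum(axis=1, keepdims=True)         # projection onto the unit plane sum f_i = 1
P = U[:, :2]                                 # 2D coordinates on that plane

tri = Delaunay(P)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        u, v = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        edges.add((u, v))

incident = defaultdict(list)
for u, v in edges:
    d = float(np.linalg.norm(P[u] - P[v]))
    incident[u].append(d)
    incident[v].append(d)

density_score = np.array([np.mean(incident[i]) for i in range(len(P))])
print(density_score.argsort()[-5:])          # the five most isolated individuals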

Proceedings ArticleDOI
01 Aug 2017
TL;DR: A novel image scaling method that employs a mesh model that explicitly represents discontinuities in the image that can greatly reduce the blurring artifacts that can arise during image enlargement and produce images that look more pleasant to human observers, compared to the well-known bilinear and bicubic methods.
Abstract: In this paper, we present a novel image scaling method that employs a mesh model that explicitly represents discontinuities in the image. Our method effectively addresses the problem of preserving the sharpness of edges, which has always been a challenge, during image enlargement. We use a constrained Delaunay triangulation to generate the model and an approximating function that is continuous everywhere except across the image edges (i.e., discontinuities). The model is then rasterized using a subdivision-based technique. Visual comparisons and quantitative measures show that our method can greatly reduce the blurring artifacts that can arise during image enlargement and produce images that look more pleasant to human observers, compared to the well-known bilinear and bicubic methods.