
Showing papers on "Delaunay triangulation published in 2017"


Journal ArticleDOI
TL;DR: In this paper, the authors present a methodology and set of validation criteria for the systematic creation of synthetic power system test cases, which do not correspond to any real grid and are free from confidentiality requirements.
Abstract: This paper presents a methodology and set of validation criteria for the systematic creation of synthetic power system test cases. The synthesized grids do not correspond to any real grid and are, thus, free from confidentiality requirements. The cases are built to match statistical characteristics found in actual power grids. First, substations are geographically placed on a selected territory, synthesized from public information about the underlying population and generation plants. A clustering technique is employed, which ensures the synthetic substations meet realistic proportions of load and generation, among other constraints. Next, a network of transmission lines is added. This paper describes several structural statistics to be used in characterizing real power system networks, including connectivity, Delaunay triangulation overlap, dc power flow analysis, and line intersection rate. The paper presents a methodology to generate synthetic line topologies with realistic parameters that satisfy these criteria. Then, the test cases can be augmented with additional complexities to build large, realistic cases. The methodology is illustrated in building a 2000 bus public test case that meets the criteria specified.
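
The Delaunay triangulation overlap statistic mentioned above can be estimated directly from substation coordinates. Below is a minimal sketch, assuming hypothetical `substations` (an N x 2 coordinate array) and `lines` (index pairs) inputs; it is not the paper's implementation, just the basic idea of measuring how many lines coincide with Delaunay edges.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_overlap(substations, lines):
    """Fraction of transmission lines that are also Delaunay edges."""
    tri = Delaunay(substations)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = int(simplex[i]), int(simplex[(i + 1) % 3])
            edges.add((min(a, b), max(a, b)))
    hits = sum((min(a, b), max(a, b)) in edges for a, b in lines)
    return hits / len(lines)

rng = np.random.default_rng(0)
pts = rng.random((200, 2))                       # synthetic substation sites
demo_lines = [(i, i + 1) for i in range(0, 198, 2)]  # dummy line topology
print(f"overlap = {delaunay_overlap(pts, demo_lines):.2f}")
```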

531 citations


Journal ArticleDOI
TL;DR: The results show that variations in fish feeding behaviors can be accurately quantified and analyzed using the FIFFB values, for which the linear correlation coefficient versus expert manual scoring reached 0.945.

85 citations


Journal ArticleDOI
TL;DR: The results show that the proposed algorithm can preserve the main shape of the polyline and meet the area-maintaining constraint during large-scale change and is also free from self-intersection.
Abstract: As a basic and significant operator in map generalization, polyline simplification needs to work across scales. Perkal’s e-circle rolling approach, in which a circle with diameter e is rolled on both sides of the polyline so that the small bend features can be detected and removed, is considered one of the few scale-driven solutions. However, the envelope computation, which is a key part of this method, has been difficult to implement. Here, we present a computational method that implements Perkal’s proposal. To simulate the effects of a rolling circle, Delaunay triangulation is used to detect bend features and further to construct the envelope structure around a polyline. Then, different connection methods within the enveloping area are provided to output the abstracted result, and a strategy to determine the best connection method is explored. Experiments with real land-use polygon data are conducted, and comparison with other algorithms is discussed. In addition to the scale-specificity inherited from Perkal’s proposal, the results show that the proposed algorithm can preserve the main shape of the polyline and meet the area-maintaining constraint during large-scale change. This algorithm is also free from self-intersection.
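
As a rough illustration of how a triangulation can expose small bends, the sketch below flags Delaunay triangles whose circumcircle diameter falls below a threshold eps, a loose planar analogue of Perkal's rolling circle; the paper's actual envelope construction around the polyline is considerably more involved.

```python
import numpy as np
from scipy.spatial import Delaunay

def small_bend_triangles(polyline, eps):
    """Flag triangles whose circumcircle diameter is below eps."""
    tri = Delaunay(polyline)
    flagged = []
    for s in tri.simplices:
        p, q, r = polyline[s]
        a = np.linalg.norm(q - r)
        b = np.linalg.norm(p - r)
        c = np.linalg.norm(p - q)
        u, v = q - p, r - p
        area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
        # circumdiameter = a*b*c / (2*area); skip degenerate triangles
        if area > 1e-12 and (a * b * c) / (2.0 * area) < eps:
            flagged.append(s)
    return flagged

# Example: a polyline containing a small zig-zag bend
pts = np.array([[0, 0], [1, 0.05], [2, -0.05], [3, 0], [4, 1]], float)
print(small_bend_triangles(pts, eps=2.0))
```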

57 citations


Journal ArticleDOI
01 Jan 2017-Animal
TL;DR: The technique indicates that a combination of image processing, MLP classification and mathematical modelling can be used as a precise method for quantifying pig lying behaviour in welfare investigations.
Abstract: Machine vision-based monitoring of pig lying behaviour is a fast and non-intrusive approach that could be used to improve animal health and welfare. Four pens with 22 pigs in each were selected at a commercial pig farm and monitored for 15 days using top view cameras. Three thermal categories were selected relative to room setpoint temperature. An image processing technique based on Delaunay triangulation (DT) was utilized. Different lying patterns (close, normal and far) were defined based on the perimeter of each DT triangle, and the percentages of each lying pattern were obtained in each thermal category. A method using a multilayer perceptron (MLP) neural network, to automatically classify group lying behaviour of pigs into three thermal categories, was developed and tested for its feasibility. The DT features (mean value of perimeters, maximum and minimum length of sides of triangles) were calculated as inputs for the MLP classifier. The network was trained, validated and tested and the results revealed that MLP could classify lying features into the three thermal categories with high overall accuracy (95.6%). The technique indicates that a combination of image processing, MLP classification and mathematical modelling can be used as a precise method for quantifying pig lying behaviour in welfare investigations.
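
A minimal sketch of the described feature pipeline follows, with synthetic stand-ins for the per-frame pig centroids (centroid extraction from the camera images is assumed done elsewhere); the DT feature names follow the abstract, while the data and network size are illustrative only.

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.neural_network import MLPClassifier

def dt_features(centroids):
    """Mean triangle perimeter plus max/min side length, per the abstract."""
    tri = Delaunay(centroids)
    sides, perims = [], []
    for s in tri.simplices:
        p, q, r = centroids[s]
        e = [np.linalg.norm(p - q), np.linalg.norm(q - r), np.linalg.norm(r - p)]
        sides.extend(e)
        perims.append(sum(e))
    return [np.mean(perims), max(sides), min(sides)]

rng = np.random.default_rng(1)
X = np.array([dt_features(rng.random((22, 2))) for _ in range(300)])  # 22 pigs/pen
y = rng.integers(0, 3, size=300)   # dummy labels for the 3 thermal categories
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, y)
```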

48 citations


Journal ArticleDOI
TL;DR: In this paper, the Particle Finite Element Method (PFEM) was used to simulate the formation of continuous chips in orthogonal machining, and the results were compared with experimental tests, showing that PFEM is competitive with other available simulation tools.

44 citations


Journal ArticleDOI
TL;DR: In this article, a two-scale continuum model and the discrete fracture network model were combined to calculate the reactive flow of acid in carbonate rock with a complex fracture network. The model does not explicitly account for the locations of fractures, yet the method can capture complex geometric relationships.

41 citations


Journal ArticleDOI
TL;DR: A simple push-pull optimization algorithm is presented for blue-noise sampling that enforces spatial constraints on given point sets, based on the topology emerging from Delaunay triangulation; it offers flexibility for trading off between different targets, such as noise and aliasing.
Abstract: We describe a simple push-pull optimization (PPO) algorithm for blue-noise sampling by enforcing spatial constraints on given point sets. Constraints can be a minimum distance between samples, a maximum distance between an arbitrary point and the nearest sample, and a maximum deviation of a sample's capacity (area of Voronoi cell) from the mean capacity. All of these constraints are based on the topology emerging from Delaunay triangulation, and they can be combined for improved sampling quality and efficiency. In addition, our algorithm offers flexibility for trading-off between different targets, such as noise and aliasing. We present several applications of the proposed algorithm, including anti-aliasing, stippling, and non-obtuse remeshing. Our experimental results illustrate the efficiency and the robustness of the proposed approach. Moreover, we demonstrate that our remeshing quality is superior to the current state-of-the-art approaches.
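
The sketch below shows one "push" pass in the spirit of the described algorithm, separating Delaunay-adjacent samples that violate a minimum-distance constraint; it is a simplification, not the authors' full PPO, which combines several constraint types and alternating push and pull moves.

```python
import numpy as np
from scipy.spatial import Delaunay

def push_pass(points, d_min):
    """One heuristic pass: separate Delaunay-adjacent samples closer than d_min."""
    tri = Delaunay(points)
    pts = points.copy()
    for s in tri.simplices:
        for i in range(3):
            a, b = s[i], s[(i + 1) % 3]
            v = pts[b] - pts[a]
            d = np.linalg.norm(v)
            if 0.0 < d < d_min:
                shift = 0.5 * (d_min - d) * v / d
                pts[a] -= shift          # push the pair apart symmetrically
                pts[b] += shift
    return pts

rng = np.random.default_rng(2)
samples = push_pass(rng.random((256, 2)), d_min=0.04)
```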

40 citations


Journal ArticleDOI
TL;DR: In this paper, the Discontinuous Cell Method (DCM) is formulated with the objective of simulating cohesive fracture propagation and fragmentation in homogeneous solids without the issues of excessive mesh deformation typical of available finite element formulations.

38 citations


Journal ArticleDOI
TL;DR: A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media; it produces smoother elements and a better distribution of element aspect ratios and can be applied to various simulations of complex geological media that contain a large number of discontinuities.

34 citations


Journal ArticleDOI
TL;DR: A new refinement method is proposed for incremental road map construction using big trace data; it employs Delaunay triangulation for higher accuracy during the GPS trace stream fusion process and improves upon existing incremental methods in terms of accuracy.
Abstract: With the rapid development of urban transportation, people urgently need high-precision and up-to-date road maps. At the same time, people themselves are an important source of road information for detailed map construction, as they can detect real-world road surfaces with GPS devices in the course of their everyday life. Big trace data makes it possible and provides a great opportunity to extract and refine road maps at relatively low cost. In this paper, a new refinement method is proposed for incremental road map construction using big trace data, employing Delaunay triangulation for higher accuracy during the GPS trace stream fusion process. An experiment and evaluation were carried out on the GPS traces collected by taxis in Wuhan, China. The results show that the proposed method is practical and improves upon existing incremental methods in terms of accuracy.

33 citations


Journal ArticleDOI
TL;DR: Experimental results over some of the fingerprint verification competition (FVC) and National Institute of Standards and Technology (NIST) databases show the superiority of the proposed approach in comparison with state-of-the-art indexing algorithms.
Abstract: A novel and efficient fingerprint indexing method based on minutiae triplets is proposed. The proposed fingerprint features are invariant to rotation and translation. Our fingerprint representation is very robust to distortions and noise. The experimental results demonstrate the validity of the proposed algorithm. Fingerprint indexing plays a key role in automatic fingerprint identification systems (AFISs), as it allows us to speed up the search in large databases without sacrificing accuracy. In this paper, we propose a fingerprint indexing algorithm based on novel features of minutiae triplets to improve the performance of fingerprint indexing. The minutiae triplet based feature vectors, which are generated from ellipse properties and their relation to the triangles formed by the proposed expanded Delaunay triangulation, are used to generate indices, and a recovery method based on the k-means clustering algorithm is employed for fast and accurate retrieval. The proposed expanded Delaunay triangulation algorithm is based on the quality of fingerprint images and combines two robust Delaunay triangulation algorithms. This paper also employs an improved k-means clustering algorithm which can be applied over large databases without reducing accuracy. Finally, a candidate list reduction criterion is employed to reduce the candidate list and to generate the final candidate list for the matching stage. Experimental results over some of the fingerprint verification competition (FVC) and National Institute of Standards and Technology (NIST) databases show the superiority of the proposed approach in comparison with state-of-the-art indexing algorithms. Our indexing proposal is very promising for improving the efficiency and accuracy of real-time AFISs in the near future.
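
As a hedged illustration of triplet-based indexing, the sketch below derives rotation- and translation-invariant features (sorted side lengths) from an ordinary Delaunay triangulation of minutiae points and bins them with k-means; the paper's expanded Delaunay triangulation and ellipse-based features are not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay
from sklearn.cluster import KMeans

def triplet_features(minutiae):
    """Sorted side lengths of each Delaunay triangle over minutiae points."""
    tri = Delaunay(minutiae)
    feats = []
    for s in tri.simplices:
        p, q, r = minutiae[s]
        feats.append(sorted([np.linalg.norm(p - q),
                             np.linalg.norm(q - r),
                             np.linalg.norm(r - p)]))
    return np.array(feats)

rng = np.random.default_rng(3)
db = np.vstack([triplet_features(rng.random((40, 2))) for _ in range(10)])
index = KMeans(n_clusters=16, n_init=10, random_state=0).fit(db)
bins = index.predict(triplet_features(rng.random((40, 2))))  # query print
```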

Journal ArticleDOI
TL;DR: This article develops a new method to obtain proper IDTs on manifold triangle meshes and proves that by adding at most O(n) auxiliary sites, the computed GVD satisfies the closed ball property, and hence its dual graph is a proper IDT.
Abstract: Intrinsic Delaunay triangulation (IDT) naturally generalizes Delaunay triangulation from ℝ² to curved surfaces. Due to many favorable properties, the IDT whose vertex set includes all mesh vertices is of particular interest in polygonal mesh processing. To date, the only way to construct such an IDT has been the edge-flipping algorithm, which iteratively flips non-Delaunay edges until they become locally Delaunay. Although this algorithm is conceptually simple and is guaranteed to terminate in finitely many steps, it has no known time complexity and may also produce triangulations containing faces with only two edges. This article develops a new method to obtain proper IDTs on manifold triangle meshes. We first compute a geodesic Voronoi diagram (GVD) by taking all mesh vertices as generators and then find its dual graph. The sufficient condition for the dual graph to be a proper triangulation is that all Voronoi cells satisfy the so-called closed ball property. To guarantee the closed ball property everywhere, a certain sampling criterion is required. For Voronoi cells that violate the closed ball property, we fix them by computing topologically safe regions, in which auxiliary sites can be added without changing the topology of the Voronoi diagram beyond them. Given a mesh with n vertices, we prove that by adding at most O(n) auxiliary sites, the computed GVD satisfies the closed ball property, and hence its dual graph is a proper IDT. Our method has a theoretical worst-case time complexity of O(n² + tn log n), where t is the number of obtuse angles in the mesh. Computational results show that it empirically runs in linear time on real-world models.
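
The local Delaunay test that drives edge-flipping algorithms can be stated compactly: an edge is locally Delaunay when the two angles opposite it sum to at most π. The planar version is sketched below; in the intrinsic setting the two adjacent triangles would first be unfolded into a common plane.

```python
import numpy as np

def opposite_angle(apex, u, v):
    """Angle at `apex` in the triangle (apex, u, v)."""
    e1, e2 = u - apex, v - apex
    c = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
    return np.arccos(np.clip(c, -1.0, 1.0))

def locally_delaunay(a, b, c, d):
    """Edge (a, b) shared by triangles (a, b, c) and (a, b, d)."""
    return opposite_angle(c, a, b) + opposite_angle(d, a, b) <= np.pi + 1e-12

a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
print(locally_delaunay(a, b, np.array([0.5, 0.8]), np.array([0.5, -0.8])))
```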

Proceedings ArticleDOI
01 Oct 2017
TL;DR: The main contribution is to pose the reconstruction problem as a non-local variational optimization over a time-varying Delaunay graph of the scene geometry, which allows for an efficient, keyframeless approach to depth estimation.
Abstract: We propose a lightweight method for dense online monocular depth estimation capable of reconstructing 3D meshes on computationally constrained platforms. Our main contribution is to pose the reconstruction problem as a non-local variational optimization over a time-varying Delaunay graph of the scene geometry, which allows for an efficient, keyframeless approach to depth estimation. The graph can be tuned to favor reconstruction quality or speed and is continuously smoothed and augmented as the camera explores the scene. Unlike keyframe-based approaches, the optimized surface is always available at the current pose, which is necessary for low-latency obstacle avoidance. FLaME (Fast Lightweight Mesh Estimation) can generate mesh reconstructions at upwards of 230 Hz using less than one Intel i7 CPU core, which enables operation on size, weight, and power-constrained platforms. We present results from both benchmark datasets and experiments running FLaME in-the-loop onboard a small flying quadrotor.

Proceedings ArticleDOI
02 May 2017
TL;DR: A scalable approach for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps of point density and is highly competitive with the state-of-the-art in terms of accuracy, completeness and outlier resilience.
Abstract: In this paper we present a scalable approach for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps of point density (in our experiments three orders of magnitude). The backbone of our approach is a combination of octree data partitioning, local Delaunay tetrahedralization and graph cut optimization. Graph cut optimization is used twice, once to extract surface hypotheses from local Delaunay tetrahedralizations and once to merge overlapping surface hypotheses even when the local tetrahedralizations do not share the same topology. This formulation allows us to obtain a constant memory consumption per sub-problem while at the same time retaining the density independent interpolation properties of the Delaunay-based optimization. On multiple public datasets, we demonstrate that our approach is highly competitive with the state-of-the-art in terms of accuracy, completeness and outlier resilience. Further, we demonstrate the multi-scale potential of our approach by processing a newly recorded dataset with 2 billion points and a point density variation of more than four orders of magnitude - requiring less than 9GB of RAM per process.

Journal ArticleDOI
TL;DR: A fine grain parallel version of the 3D Delaunay Kernel procedure is presented, using the OpenMP (Open Multi-Processing) API; it can generate meshes with more than a billion tetrahedra in about two minutes.
Abstract: This paper presents a fine grain parallel version of the 3D Delaunay Kernel procedure using the OpenMP (Open Multi-Processing) API. A set S = {p1, ..., pn} of n points is taken as input. S is initially sorted along a space-filling curve so that two points that are close in the insertion order are also close geometrically. The sorted set of points is then divided into M subsets Si, 1 ≤ i ≤ M, of equal size n/M. The multithreaded version of the Delaunay kernel inserts M points at a time in the triangulation. OpenMP barriers provide the required synchronization that is needed after each multiple insertion in order to avoid data races. This simple approach exhibits two standard problems of parallel computing: load imbalance and parallel overheads. Those two issues are addressed using a two-level version of the multithreaded Delaunay kernel. Tests show that triangulations of about a billion tetrahedra can be generated on a 32-core machine (Intel Xeon E5-4610 v2 @ 2.30 GHz with 128 GB of memory) in less than 3 minutes of wall clock time, with a speedup of 18 compared to the single-threaded implementation. A fine grain parallel Delaunay kernel algorithm is proposed. The proposed method makes it possible to generate meshes with more than a billion tetrahedra in about two minutes. The implementation uses simple OpenMP constructs.
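
A serial Python sketch of the same structure is shown below (the actual kernel is multithreaded C with OpenMP): points are sorted along a space-filling curve (here a Morton code, one possible choice) and then inserted into the triangulation in batches.

```python
import numpy as np
from scipy.spatial import Delaunay

def morton_key(xy, bits=16):
    """Interleave the bits of quantized x/y coordinates (Z-order curve)."""
    q = (xy * (2**bits - 1)).astype(np.uint64)
    key = np.zeros(len(xy), dtype=np.uint64)
    for b in range(bits):
        key |= ((q[:, 0] >> b) & 1) << (2 * b + 1)
        key |= ((q[:, 1] >> b) & 1) << (2 * b)
    return key

rng = np.random.default_rng(4)
pts = rng.random((100_000, 2))
pts = pts[np.argsort(morton_key(pts))]       # space-filling-curve order
M = 8
tri = Delaunay(pts[:4], incremental=True)    # seed triangulation
for batch in np.array_split(pts[4:], M):     # batched point insertion
    tri.add_points(batch)
tri.close()
print(len(tri.simplices), "triangles")
```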

Journal ArticleDOI
TL;DR: This paper proposes a fast 3D EMD to decompose a volume into several 3D intrinsic mode functions (TIMFs) and introduces two strategies to accelerate the TEMD.

Journal ArticleDOI
TL;DR: It is shown that the choice for a network construction technique in archaeological case studies is important and a possible strategy to approach such a problem is presented.

Journal ArticleDOI
TL;DR: In this article, an algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described, which is designed to generate very high quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modeling and numerical weather prediction.
Abstract: . An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi–Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

Proceedings ArticleDOI
27 Jun 2017
TL;DR: A new and simple method is proposed for filling complex holes in surfaces by creating contour curves inside the boundary edges of the hole; the Delaunay triangulation method is then applied in a local area to create new meshes.
Abstract: In this paper, we propose a new and simple method for filling complex holes in surfaces. To fill a hole, locally uniform points are added to the hole by creating contour curves inside the boundary edges of the hole. A set of contour curves is created by the curve shortening flow of the boundary edges of the hole. The Delaunay triangulation method is applied in a local area to create new meshes. The direction of the shortening flow is changed to satisfy the convergence of the curve shortening flow. This enables the filling of complex holes, such as a hole with an island or a hole with highly curved boundary edges. In addition, the method can fill a hole while preserving the sharp features of the model.

Journal ArticleDOI
TL;DR: An interpolation method based on the Delaunay triangulation and Voronoi tessellation, together with a training set of direct equipartition ray design (EquRay) mixtures (IDVequ for short), successfully predicted the toxicities of various types of binary mixtures.
Abstract: Concentration addition (CA) was proposed as a reasonable default approach for the ecological risk assessment of chemical mixtures. However, CA cannot predict the toxicity of a mixture in some effect zones if not all components have definite effective concentrations at the given effect, for example when some compounds induce hormesis. In this paper, we developed a new method for the toxicity prediction of various types of binary mixtures: an interpolation method based on the Delaunay triangulation (DT) and Voronoi tessellation (VT) as well as the training set of direct equipartition ray design (EquRay) mixtures, IDVequ for short. First, the EquRay was employed to design the basic concentration compositions of five binary mixture rays. The toxic effects of single components and mixture rays at different times and various concentrations were determined by time-dependent microplate toxicity analysis. Second, the concentration-toxicity data of the pure components and various mixture rays served as a training set. The DT triangles and VT polygons were constructed from the various concentration vertices in the training set. The toxicities of unknown mixtures were predicted by linear interpolation and natural neighbor interpolation of the vertices. The IDVequ successfully predicted the toxicities of various types of binary mixtures.
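
The DT-based linear interpolation step has a close off-the-shelf analogue, sketched below with synthetic data: scipy's LinearNDInterpolator triangulates the training concentrations and interpolates linearly within each simplex (natural neighbor interpolation, the paper's second variant, is not available in scipy and would need a dedicated library).

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(6)
train_conc = rng.random((60, 2))              # binary mixture concentrations
train_tox = np.sin(train_conc @ np.array([3.0, 2.0]))  # stand-in toxicity

predict = LinearNDInterpolator(train_conc, train_tox)  # Delaunay-based
print(predict([[0.4, 0.6], [0.2, 0.1]]))     # NaN for points outside hull
```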

Posted Content
TL;DR: In this article, a scalable approach is presented for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps of point density (in their experiments, three orders of magnitude).
Abstract: In this paper we present a scalable approach for robustly computing a 3D surface mesh from multi-scale multi-view stereo point clouds that can handle extreme jumps of point density (in our experiments three orders of magnitude). The backbone of our approach is a combination of octree data partitioning, local Delaunay tetrahedralization and graph cut optimization. Graph cut optimization is used twice, once to extract surface hypotheses from local Delaunay tetrahedralizations and once to merge overlapping surface hypotheses even when the local tetrahedralizations do not share the same topology. This formulation allows us to obtain a constant memory consumption per sub-problem while at the same time retaining the density independent interpolation properties of the Delaunay-based optimization. On multiple public datasets, we demonstrate that our approach is highly competitive with the state-of-the-art in terms of accuracy, completeness and outlier resilience. Further, we demonstrate the multi-scale potential of our approach by processing a newly recorded dataset with 2 billion points and a point density variation of more than four orders of magnitude - requiring less than 9GB of RAM per process.

Journal ArticleDOI
Jiewei Zhan1, Jianping Chen1, Peihua Xu1, Xudong Han1, Yu Chen, Yunkai Ruan1, Xin Zhou1 
TL;DR: In this article, a computational framework based on the Delaunay triangulation of 3D uniformly distributed random points is introduced and described in detail.

Journal ArticleDOI
TL;DR: In this paper, the authors compare and analyze strain rate maps that were calculated using different approaches, including the Delaunay triangulation and a grid solution, to reconstruct the active deformation in the Mediterranean area.
Abstract: Strain rates and Euler poles for various subregions of the Alpine Mediterranean region were calculated by using global navigation satellite system data from permanent stations. The main scope of the study is to compare and analyze strain rate maps that were calculated using different approaches. This area presents a complex tectonic setting due to the interaction of the Eurasian and Nubian plates. The horizontal velocity gradient tensor was computed starting from a new set of site velocities determined by using continuous long-series geodetic data, state-of-the-art antenna calibrations and recomputed precise orbits. Geodesy provides velocities for a sparsely distributed, discrete number of sites, while deformation has a spatially continuous distribution. For this reason, the interpolation method and the geometric approach to the problem play a fundamental role in the estimation of the strain rate field. In the present study, principal deformation axes and principal angle were estimated by applying two different approaches: the Delaunay triangulation and a grid solution. Both methods produce results with broad coherence, providing new information about the deformation throughout the entire study area. Moreover, an evaluation and analysis of Euler poles related to the different velocity patterns gives complementary information for reconstructing the active deformation in the Mediterranean area.
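
The Delaunay approach to strain rates admits a compact sketch: if the velocity field is assumed linear within each triangle, the horizontal velocity gradient follows from the three vertex velocities, and the strain-rate tensor is its symmetric part. The code below uses synthetic site data; it illustrates the geometry only, not the authors' processing chain.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_strain_rates(sites, velocities):
    """Per-triangle strain-rate tensors from site velocities."""
    tri = Delaunay(sites)
    rates = []
    for s in tri.simplices:
        dx = sites[s[1:]] - sites[s[0]]          # 2x2 edge-vector matrix
        dv = velocities[s[1:]] - velocities[s[0]]
        L = np.linalg.solve(dx, dv).T            # velocity gradient tensor
        rates.append(0.5 * (L + L.T))            # symmetric part = strain rate
    return tri, rates

rng = np.random.default_rng(7)
xy = rng.random((30, 2)) * 100.0                 # site positions (km)
vel = rng.normal(scale=2.0, size=(30, 2))        # site velocities (mm/yr)
_, eps_dot = triangle_strain_rates(xy, vel)
```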

Journal ArticleDOI
TL;DR: In this article, the expected number of simplices in the Delaunay mosaic of points chosen from a Poisson point process in ℝⁿ was studied, with precise expressions obtained in low dimensions.
Abstract: Mapping every simplex in the Delaunay mosaic of a discrete point set to the radius of the smallest empty circumsphere gives a generalized discrete Morse function. Choosing the points from a Poisson point process in ℝⁿ, we study the expected number of simplices in the Delaunay mosaic as well as the expected number of critical simplices and nonsingular intervals in the corresponding generalized discrete gradient. Observing connections with other probabilistic models, we obtain precise expressions for the expected numbers in low dimensions. In particular, we obtain the expected numbers of simplices in the Poisson–Delaunay mosaic in dimensions n ≤ 4.
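
A quick Monte Carlo companion to these expected-count results is sketched below: sampling a Poisson point process in the unit square and counting Delaunay triangles per point gives a ratio close to 2, the known planar value (boundary effects bias small samples).

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(8)
n = rng.poisson(20_000)          # Poisson-distributed number of points
pts = rng.random((n, 2))         # uniform = Poisson process given n
tri = Delaunay(pts)
print(len(tri.simplices) / n)    # approaches 2 as n grows
```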

Book ChapterDOI
TL;DR: AIMS (Asteroseismic Inference on a Massive Scale) is a system for estimating stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints.
Abstract: The goal of AIMS (Asteroseismic Inference on a Massive Scale) is to estimate stellar parameters and credible intervals/error bars in a Bayesian manner from a set of asteroseismic frequency data and so-called classical constraints. To achieve reliable parameter estimates and computational efficiency, it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then performing linear barycentric interpolation on matching simplexes. Inputs for the modelling consist of individual frequencies from peak-bagging, which can be complemented with classical spectroscopic constraints. AIMS is mostly written in Python with a modular structure to facilitate contributions from the community. Only a few computationally intensive parts have been rewritten in Fortran in order to speed up calculations.
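
The tessellation-plus-barycentric step can be sketched with scipy, as below: locate the simplex containing a query parameter vector and form linear barycentric weights from the triangulation's affine transforms. The grid and query here are synthetic stand-ins for the pre-computed stellar models.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(9)
grid = rng.random((200, 3))            # e.g. (mass, age, metallicity) axes
tri = Delaunay(grid)

query = np.array([0.5, 0.5, 0.5])
s = tri.find_simplex(query)            # simplex containing the query point
T = tri.transform[s]                   # affine map to barycentric coordinates
b = T[:3].dot(query - T[3])
weights = np.append(b, 1.0 - b.sum())  # weights over the simplex's 4 vertices
vertices = tri.simplices[s]
print(vertices, weights)               # interpolate model outputs with these
```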

Patent
08 Mar 2017
TL;DR: In this article, an automatic registration and fusion method for point cloud data and an optical image based on a point feature is proposed, comprising: filtering the point cloud data; using an adaptive support weight dense stereo algorithm and a Delaunay triangulation algorithm, respectively, to determine a depth map of the optical image and a depth map of the point cloud data; obtaining a two-dimensional matching relation between the two depth maps by a scale-invariant feature transformation algorithm; and eliminating false matching point pairs via a two-step RANSAC algorithm.
Abstract: The invention discloses an automatic registration and fusion method for point cloud data and an optical image based on a point feature. The method comprises the steps of: performing filtering processing of the point cloud data; using an adaptive support weight dense stereo algorithm and a Delaunay triangulation algorithm, respectively, to determine a depth map of the optical image and a depth map of the point cloud data; obtaining a two-dimensional matching relation between the depth map of the optical image and the depth map of the point cloud data by a scale-invariant feature transformation algorithm; eliminating false matching point pairs via a two-step RANSAC algorithm and obtaining a camera position parameter estimation; and performing color texture mapping of the point cloud data and the optical image to obtain a fused three-dimensional image. The method is advantageous in that a GPS/INS initial position prior step is not needed, strong features of man-made buildings in the scene are not relied on, the degree of automation is high, robustness is high, and registration accuracy is high.

Journal ArticleDOI
TL;DR: In this article, a reverse-time migration scheme was developed that can image regions with rugged topography without requiring any approximations by adopting an irregular, unstructured-grid modelling scheme.
Abstract: We developed a reverse-time migration scheme that can image regions with rugged topography without requiring any approximations by adopting an irregular, unstructured-grid modelling scheme. This grid, which can accurately describe surface topography and interfaces between high-velocity-contrast regions, is generated by Delaunay triangulation combined with the centroidal Voronoi tessellation method. The grid sizes vary according to the migration velocities, resulting in significant reduction of the number of discretized nodes compared with the number of nodes in the conventional regular-grid scheme, particularly in the case wherein high near-surface velocities exist. Moreover, the time sampling rate can be reduced substantially. The grid method, together with the irregular perfectly matched layer absorbing boundary condition, enables the proposed scheme to image regions of interest using curved artificial boundaries with fewer discretized nodes. We tested the proposed scheme using the 2D SEG Foothill synthetic dataset.
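
A centroidal Voronoi tessellation of the kind used for grid generation here can be approximated by a probabilistic Lloyd iteration, sketched below: dense random samples are assigned to their nearest seed and each seed moves to the mean of its samples. Velocity-adaptive sizing, as in the paper, could be mimicked by sampling more densely where small elements are wanted; this sketch is uniform.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(10)
seeds = rng.random((200, 2))                     # initial grid generators
for _ in range(50):
    samples = rng.random((100_000, 2))           # dense uniform samples
    _, owner = cKDTree(seeds).query(samples)     # nearest-seed assignment
    for i in range(len(seeds)):
        mine = samples[owner == i]
        if len(mine):
            seeds[i] = mine.mean(axis=0)         # Lloyd centroid update
```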


Journal ArticleDOI
TL;DR: A Spatial Anomaly Points and Regions Detection method using multi-constrained graphs and local density (SAPRD for short) is proposed; experiments demonstrate the effectiveness and practicability of the SAPRD algorithm.
Abstract: Spatial anomalies may be single points or small regions whose non-spatial attribute values are significantly inconsistent with those of their spatial neighborhoods. In this article, a Spatial Anomaly Points and Regions Detection method using multi-constrained graphs and local density (SAPRD for short) is proposed. The SAPRD algorithm first models spatial proximity relationships between spatial entities by constructing a Delaunay triangulation, the edges of which provide certain statistical characteristics. By considering the difference in non-spatial attributes of adjacent spatial entities, two levels of non-spatial attribute distance constraints are imposed to improve the proximity graph. This produces a series of sub-graphs, and those with very few entities are identified as candidate spatial anomalies. Moreover, the spatial anomaly degree of each entity is calculated based on the local density. A spatial interpolation surface of the spatial anomaly degree is generated using the inverse distance weight, and this is utilized to reveal potential spatial anomalies and reflect their whole areal distribution. Experiments on both simulated and real-life spatial databases demonstrate the effectiveness and practicability of the SAPRD algorithm.
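
The first stage of such a graph-based detector can be sketched as follows: build the Delaunay proximity graph, cut statistically long edges (here a simple global mean-plus-k-standard-deviations rule; the paper additionally imposes non-spatial attribute constraints), and read off the surviving connected components as sub-graphs.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def delaunay_subgraphs(points, k=1.0):
    """Label points by sub-graph after cutting statistically long edges."""
    tri = Delaunay(points)
    edges = set()
    for s in tri.simplices:
        for i in range(3):
            a, b = sorted((int(s[i]), int(s[(i + 1) % 3])))
            edges.add((a, b))
    edges = np.array(list(edges))
    lengths = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)
    keep = edges[lengths <= lengths.mean() + k * lengths.std()]  # cut long edges
    adj = coo_matrix((np.ones(len(keep)), (keep[:, 0], keep[:, 1])),
                     shape=(len(points), len(points)))
    return connected_components(adj, directed=False)[1]

rng = np.random.default_rng(11)
labels = delaunay_subgraphs(rng.random((500, 2)))
```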

Journal ArticleDOI
TL;DR: This work presents a Delaunay triangulation based strategy to detect the presence of holes and an algorithm to reconstruct them, together with a theoretical analysis of the proposed algorithm that ensures the correctness of the reconstructed holes for specific structures.