
Showing papers on "Delaunay triangulation published in 2014"


Journal ArticleDOI
TL;DR: The feature rejection algorithm for meshing (FRAM) is introduced to generate a high quality conforming Delaunay triangulation of a three-dimensional discrete fracture network (DFN) by prescribing a minimum length scale and then restricting the generation of the network to only create features of that size and larger.
Abstract: We introduce the feature rejection algorithm for meshing (FRAM) to generate a high quality conforming Delaunay triangulation of a three-dimensional discrete fracture network (DFN). The geometric features (fractures, fracture intersections, spaces between fracture intersections, etc.) that must be resolved in a stochastically generated DFN typically span a wide range of spatial scales and make the efficient automated generation of high-quality meshes a challenge. To deal with these challenges, many previous approaches often deformed the DFN to align its features with a mesh through various techniques including redefining lines of intersection as stair step functions and distorting the fracture edges. In contrast, FRAM generates networks on which high-quality meshes occur automatically by constraining the generation of the network. The cornerstone of FRAM is prescribing a minimum length scale and then restricting the generation of the network to only create features of that size and larger. The process is f...

141 citations


Journal ArticleDOI
TL;DR: This paper presents a set of methods for creating a haptic texture model from tool-surface interaction data recorded by a human in a natural and unconstrained manner and uses these texture model sets to render synthetic vibration signals in real time as a user interacts with the TexturePad system.
Abstract: Texture gives real objects an important perceptual dimension that is largely missing from virtual haptic interactions due to limitations of standard modeling and rendering approaches. This paper presents a set of methods for creating a haptic texture model from tool-surface interaction data recorded by a human in a natural and unconstrained manner. The recorded high-frequency tool acceleration signal, which varies as a function of normal force and scanning speed, is segmented and modeled as a piecewise autoregressive (AR) model. Each AR model is labeled with the source segment's median force and speed values and stored in a Delaunay triangulation to create a model set for a given texture. We use these texture model sets to render synthetic vibration signals in real time as a user interacts with our TexturePad system, which includes a Wacom tablet and a stylus augmented with a Haptuator. We ran a human-subject study with two sets of ten participants to evaluate the realism of our virtual textures and the strengths and weaknesses of this approach. The results indicated that our virtual textures accurately capture and recreate the roughness of real textures, but other modeling and rendering approaches are required to completely match surface hardness and slipperiness.
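
The lookup step described above can be sketched roughly as follows (an illustration only, not the authors' TexturePad code): per-segment model labels are stored at their (force, speed) coordinates in a 2D Delaunay triangulation, and a query blends the three enclosing models with barycentric weights. The sample points and the `model_gains` stand-in for full AR coefficient sets are invented.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical labels: one (median force, median speed) pair per recorded segment.
force_speed = np.array([[0.5, 40.0], [1.0, 60.0], [1.5, 90.0],
                        [2.0, 120.0], [0.8, 150.0], [1.7, 30.0]])
model_gains = np.array([0.2, 0.5, 0.9, 1.3, 0.7, 1.1])  # stand-in for full AR coefficients

tri = Delaunay(force_speed)  # the model set for one texture

def interpolate_model(force, speed):
    """Blend the three models whose (force, speed) labels enclose the query."""
    q = np.array([force, speed], dtype=float)
    simplex = tri.find_simplex(q[None, :])[0]
    if simplex < 0:          # outside the convex hull: fall back to the nearest label
        nearest = np.argmin(np.linalg.norm(force_speed - q, axis=1))
        return model_gains[nearest]
    verts = tri.simplices[simplex]
    T = tri.transform[simplex]            # affine map to barycentric coordinates
    b = T[:2].dot(q - T[2])
    weights = np.append(b, 1.0 - b.sum())
    return np.dot(weights, model_gains[verts])

print(interpolate_model(1.2, 75.0))
```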

140 citations


Journal ArticleDOI
01 Jan 2014
TL;DR: The results reveal that CSO-based sensor deployment, which utilizes the wavelet transform method, is a powerful and successful method for sensor deployment on 3-D terrains.
Abstract: In this paper, a deterministic sensor deployment method based on wavelet transform (WT) is proposed. It aims to maximize the quality of coverage of a wireless sensor network while deploying a minimum number of sensors on a 3-D surface. For this purpose, a probabilistic sensing model and Bresenham's line of sight algorithm are utilized. The WT is realized by an adaptive thresholding approach for the generation of the initial population. Another novel aspect of the paper is that the method followed utilizes a Cat Swarm Optimization (CSO) algorithm, which mimics the behavior of cats. We have modified the CSO algorithm so that it can be used for sensor deployment problems on 3-D terrains. The performance of the proposed algorithm is compared with the Delaunay Triangulation and Genetic Algorithm based methods. The results reveal that CSO based sensor deployment which utilizes the wavelet transform method is a powerful and successful method for sensor deployment on 3-D terrains.
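
The line-of-sight test that underlies this kind of coverage evaluation can be illustrated with a simple Bresenham walk over a terrain heightmap. This is a hedged sketch: the paper's probabilistic sensing model and exact terrain representation are not reproduced, and the `clearance` parameter is hypothetical.

```python
import numpy as np

def line_of_sight(heightmap, p0, p1, clearance=1.0):
    """True if the straight segment between the two grid cells (each raised by
    `clearance` above the terrain) is not blocked by the heightmap in between."""
    (x0, y0), (x1, y1) = p0, p1
    z0 = heightmap[x0, y0] + clearance
    z1 = heightmap[x1, y1] + clearance
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err, x, y = dx - dy, x0, y0
    steps = max(dx, dy)
    n = 0
    while (x, y) != (x1, y1):
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x += sx
        if e2 < dx:
            err += dx; y += sy
        n += 1
        t = n / steps if steps else 1.0
        sight_z = z0 + t * (z1 - z0)           # height of the sight line at this cell
        if (x, y) != (x1, y1) and heightmap[x, y] > sight_z:
            return False
    return True

terrain = np.zeros((50, 50)); terrain[20:30, 20:30] = 10.0   # a hill in the middle
print(line_of_sight(terrain, (5, 5), (45, 45)))   # False: blocked by the hill
print(line_of_sight(terrain, (5, 45), (45, 45)))  # True: path skirts the hill
```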

93 citations


Journal ArticleDOI
TL;DR: Experimental results on public databases and security analysis show that the Delaunay quadrangle-based system with topology code can achieve better performance and a higher security level than the Delaunay triangle-based system, the Delaunay quadrangle-based system without topology code, and some other similar systems.
Abstract: Although some nice properties of the Delaunay triangle-based structure have been exploited in many fingerprint authentication systems and satisfactory outcomes have been reported, most of these systems operate without template protection. In addition, the feature sets and similarity measures utilized in these systems are not suitable for existing template protection techniques. Moreover, local structural change caused by nonlinear distortion is often not considered adequately in these systems. In this paper, we propose a Delaunay quadrangle-based fingerprint authentication system to deal with nonlinear distortion-induced local structural change that the Delaunay triangle-based structure suffers. Fixed-length and alignment-free feature vectors extracted from Delaunay quadrangles are less sensitive to nonlinear distortion and more discriminative than those from Delaunay triangles and can be applied to existing template protection directly. Furthermore, we propose to construct a unique topology code from each Delaunay quadrangle. Not only can this unique topology code help to carry out accurate local registration under distortion, but it also enhances the security of template data. Experimental results on public databases and security analysis show that the Delaunay quadrangle-based system with topology code can achieve better performance and higher security level than the Delaunay triangle-based system, the Delaunay quadrangle-based system without topology code, and some other similar systems.

76 citations


Journal ArticleDOI
TL;DR: A 3D lossy compression system based on plane extraction that represents the points of each scene plane as a Delaunay triangulation plus a set of point/area information, providing fast scene reconstruction useful for further visualization or processing tasks.

67 citations


Proceedings ArticleDOI
01 Apr 2014
TL;DR: This paper studies the achievable accuracy of signal strength estimation for a primary TV network with log-normal propagation models, linear interpolation through Delaunay triangulation, and ordinary kriging as means of including measurement data into a database's prediction process.
Abstract: Recent advances towards opening underutilized spectrum resources for secondary use rely on geolocation databases to determine protection requirements of primary users. This paper studies the achievable accuracy of signal strength estimation for a primary TV network with log-normal propagation models, linear interpolation through Delaunay triangulation, and ordinary kriging as means of including measurement data into a database's prediction process. Direct estimation methods are compared against hybrid mixture models of statistical propagation modelling with interpolation of the error surface. The results presented herein use measurements from a large-scale measurement campaign in the UK. They show that relatively small data sets suffice to achieve mean absolute prediction errors of 3.1–5.0 dB compared to 5.9–6.9 dB achieved by purely terrain-based estimation approaches.
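
For the Delaunay-based linear interpolation used as one of the prediction methods, a minimal sketch with scipy (whose linear interpolant is built on a Delaunay triangulation of the sample sites) might look like the following; the locations and field-strength values are invented.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

# Hypothetical measurements: (x, y) receiver locations in km and field strength in dB.
locations = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0], [8.0, 4.0], [4.0, 9.0]])
field_strength_db = np.array([62.0, 55.0, 48.0, 40.0, 35.0])

# Linear interpolation over the Delaunay triangulation of the measurement sites.
linear = LinearNDInterpolator(locations, field_strength_db)
# Fallback for query points outside the convex hull of the measurements.
nearest = NearestNDInterpolator(locations, field_strength_db)

def predict(x, y):
    v = linear(x, y)
    return float(v) if not np.isnan(v) else float(nearest(x, y))

print(predict(3.0, 3.0))
```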

66 citations


Journal ArticleDOI
TL;DR: A robust and feature-capturing surface reconstruction and simplification method that turns an input point set into a low triangle-count simplicial complex is introduced and is shown to exhibit both robustness to noise and outliers, as well as preservation of sharp features and boundaries.
Abstract: We introduce a robust and feature-capturing surface reconstruction and simplification method that turns an input point set into a low triangle-count simplicial complex. Our approach starts with a (possibly non-manifold) simplicial complex filtered from a 3D Delaunay triangulation of the input points. This initial approximation is iteratively simplified based on an error metric that measures, through optimal transport, the distance between the input points and the current simplicial complex--both seen as mass distributions. Our approach is shown to exhibit both robustness to noise and outliers, as well as preservation of sharp features and boundaries. Our new feature-sensitive metric between point sets and triangle meshes can also be used as a post-processing tool that, from the smooth output of a reconstruction method, recovers sharp features and boundaries present in the initial point set.

63 citations


Journal ArticleDOI
TL;DR: By measuring the extent of possible valleys of the density along the segment connecting pairs of observations, the proposed procedure shifts the formulation from a space with arbitrary dimension to a univariate one, thus leading to benefits in both computation and visualization.
Abstract: Density-based clustering methods hinge on the idea of associating groups to the connected components of the level sets of the density underlying the data, to be estimated by a nonparametric method. These methods claim some desirable properties and generally good performance, but they involve a non-trivial computational effort, required for the identification of the connected regions. In a previous work, the use of a spatial tessellation such as the Delaunay triangulation has been proposed, because it suitably generalizes the univariate procedure for detecting the connected components. However, its computational complexity grows exponentially with the dimensionality of data, thus making the triangulation unfeasible for high dimensions. Our aim is to overcome the limitations of Delaunay triangulation. We discuss the use of an alternative procedure for identifying the connected regions associated to the level sets of the density. By measuring the extent of possible valleys of the density along the segment connecting pairs of observations, the proposed procedure shifts the formulation from a space with arbitrary dimension to a univariate one, thus leading to benefits in both computation and visualization.
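
A rough sketch of the valley test described above, assuming a kernel density estimate and invented thresholds (the `drop` fraction and the 1.0 distance gate): sample the density along the segment joining two observations and connect them only if no valley separates them; connected components then act as clusters.

```python
import numpy as np
from itertools import combinations
from scipy.stats import gaussian_kde
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
kde = gaussian_kde(X.T)                      # nonparametric density estimate

def no_valley(a, b, drop=0.7, n_samples=20):
    """True if the density along segment a-b never falls below `drop` times
    the smaller endpoint density (i.e., no valley separates a and b)."""
    t = np.linspace(0.0, 1.0, n_samples)
    seg = a[None, :] * (1 - t)[:, None] + b[None, :] * t[:, None]
    dens = kde(seg.T)
    return dens.min() >= drop * min(dens[0], dens[-1])

n = len(X)
adj = lil_matrix((n, n))
for i, j in combinations(range(n), 2):
    # The distance gate is only a pruning heuristic for this toy example.
    if np.linalg.norm(X[i] - X[j]) < 1.0 and no_valley(X[i], X[j]):
        adj[i, j] = 1
n_clusters, labels = connected_components(adj, directed=False)
print(n_clusters)   # typically 2 for this synthetic two-blob data
```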

62 citations


Journal ArticleDOI
TL;DR: It is shown that the percentage of recombined hexahedra strongly depends on the location of the vertices in the initial 3D mesh; the execution times are reasonable, but non-conformal quadrilateral faces adjacent to triangular faces are present in the final meshes.
Abstract: Indirect quad mesh generation methods rely on an initial triangular mesh. So called triangle-merge techniques are then used to recombine the triangles of the initial mesh into quadrilaterals. This way, high-quality full-quad meshes suitable for finite element calculations can be generated for arbitrary two-dimensional geometries. In this paper, a similar indirect approach is applied to the three-dimensional case, i.e., a method to recombine tetrahedra into hexahedra. Contrary to the 2D case, a 100% recombination rate is seldom attained in 3D. Instead, part of the remaining tetrahedra are combined into prisms and pyramids, eventually yielding a mixed mesh. We show that the percentage of recombined hexahedra strongly depends on the location of the vertices in the initial 3D mesh. If the vertices are placed at random, less than 50% of the tetrahedra will be combined into hexahedra. In order to reach larger ratios, the vertices of the initial mesh need to be anticipatively organized into a lattice-like structure. This can be achieved with a frontal algorithm, which is applicable to both the two- and three-dimensional cases. The quality of the vertex alignment inside the volumes relies on the quality of the alignment on the surfaces. Once the vertex placement process is completed, the region is tetrahedralized with a Delaunay kernel. A maximum number of tetrahedra are then merged into hexahedra using the algorithm of Yamakawa-Shimada. Non-uniform mixed meshes obtained following our approach show a volumic percentage of hexahedra that usually exceeds 80%. The execution times are reasonable. However, non-conformal quadrilateral faces adjacent to triangular faces are present in the final meshes.

55 citations


Journal ArticleDOI
TL;DR: In this article, the spatial variation of cell size in a functionally graded cellular structure is achieved using error diffusion to convert a continuous-tone image into binary form, and the effects of two control parameters, greyscale value and resolution, on the resulting cell size measures are investigated.

52 citations


Proceedings ArticleDOI
18 Jun 2014
TL;DR: This paper proposes an overlapped and parallel disk-based triangulation framework for billion-scale graphs, OPT, which achieves the ideal cost by (1) full overlap of the CPU and I/O operations and (2) full parallelism of multi-core CPU and FlashSSD I/O.
Abstract: Graph triangulation, which finds all triangles in a graph, has been actively studied due to its wide range of applications in network analysis and data mining. With the rapid growth of graph data size, disk-based triangulation methods are in demand but little researched. To handle a large-scale graph which does not fit in memory, we must iteratively load small parts of the graph. In the existing literature, achieving the ideal cost has been considered to be impossible for billion-scale graphs due to the memory size constraint. In this paper, we propose an overlapped and parallel disk-based triangulation framework for billion-scale graphs, OPT, which achieves the ideal cost by (1) full overlap of the CPU and I/O operations and (2) full parallelism of multi-core CPU and FlashSSD I/O. In OPT, triangles in memory are called the internal triangles, while triangles consisting of vertices in memory and vertices in external memory are called the external triangles. At the macro level, OPT overlaps the internal triangulation and the external triangulation, while it overlaps the CPU and I/O operations at the micro level. Thereby, the cost of OPT is close to the ideal cost. Moreover, OPT instantiates both vertex-iterator and edge-iterator models and benefits from multi-thread parallelism on both types of triangulation. Extensive experiments conducted on large-scale datasets showed that (1) OPT achieved an elapsed time close to that of the ideal method with less than 7% overhead under the limited memory budget, (2) OPT achieved linear speed-up with an increasing number of CPU cores, (3) OPT outperforms the state-of-the-art parallel method by up to an order of magnitude with 6 CPU cores, and (4) for the first time in the literature, triangulation results are reported for a billion-vertex scale real-world graph.
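
The in-memory (internal) part of triangle listing can be illustrated with a plain edge-iterator sketch: for each edge (u, v), every common neighbor closes a triangle. OPT's actual contribution, overlapping this CPU work with FlashSSD I/O for the external triangles, is not shown here.

```python
from collections import defaultdict

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]

# Build an adjacency-set representation; real edge-iterator codes also orient
# edges (e.g., by degree) to bound the work per edge, omitted here for brevity.
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

triangles = []
for u, v in edges:
    if u > v:
        u, v = v, u
    # Every common neighbour w of u and v closes a triangle (u, v, w).
    for w in adj[u] & adj[v]:
        if w > v:                      # report each triangle exactly once
            triangles.append((u, v, w))

print(triangles)   # [(0, 1, 2), (1, 2, 3)]
```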

Journal ArticleDOI
TL;DR: The extension and the temperature behavior of the boundary region, its structure and composition are discussed in detail, using the example of a molecular dynamics model of an aqueous solution of the human amyloid polypeptide, hIAPP.

Journal ArticleDOI
TL;DR: It is demonstrated how weighted triangulations provide a faster and more robust approach to a series of geometry processing applications, including the generation of well-centered meshes, self-supporting surfaces, and sphere packing.
Abstract: In this article we investigate the use of weighted triangulations as discrete, augmented approximations of surfaces for digital geometry processing. By incorporating a scalar weight per mesh vertex, we introduce a new notion of discrete metric that defines an orthogonal dual structure for arbitrary triangle meshes and thus extends weighted Delaunay triangulations to surface meshes. We also present alternative characterizations of this primal-dual structure (through combinations of angles, areas, and lengths) and, in the process, uncover closed-form expressions of mesh energies that were previously known in implicit form only. Finally, we demonstrate how weighted triangulations provide a faster and more robust approach to a series of geometry processing applications, including the generation of well-centered meshes, self-supporting surfaces, and sphere packing.
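
As background for the structure the paper extends to surface meshes, the classical planar weighted Delaunay (regular) triangulation can be computed by lifting each point with weight w to the height x^2 + y^2 - w and keeping the lower convex hull. The sketch below uses scipy and is not the paper's algorithm; the points and weights are invented.

```python
import numpy as np
from scipy.spatial import ConvexHull

def weighted_delaunay_2d(points, weights):
    """Regular (weighted Delaunay) triangulation of 2D points via the lifting map:
    lift (x, y) with weight w to (x, y, x^2 + y^2 - w) and keep the lower hull."""
    pts = np.asarray(points, float)
    lifted = np.column_stack([pts, (pts ** 2).sum(axis=1) - np.asarray(weights, float)])
    hull = ConvexHull(lifted)
    # Lower-hull facets are those whose outward normal points downward (negative z).
    lower = hull.equations[:, 2] < -1e-12
    return hull.simplices[lower]

pts = np.random.default_rng(3).uniform(0, 1, (30, 2))
w_zero = np.zeros(30)
w_rand = np.random.default_rng(4).uniform(0, 0.05, 30)
print(len(weighted_delaunay_2d(pts, w_zero)), len(weighted_delaunay_2d(pts, w_rand)))
# With all weights equal, the result coincides with the ordinary Delaunay triangulation.
```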

Journal ArticleDOI
TL;DR: It is proved that in 2D AD-LBR is asymptotically equivalent to a finite element discretization on an anisotropic Delaunay triangulation, a procedure that is more involved and computationally expensive; AD-LBR thus benefits from the theoretical guarantees of this procedure at a fraction of its cost.
Abstract: We introduce a new discretization scheme for Anisotropic Diffusion, AD-LBR, on two- and three-dimensional Cartesian grids. The main features of this scheme are that it is non-negative and has sparse stencils, of cardinality bounded by 6 in 2D and by 12 in 3D, despite allowing diffusion tensors of arbitrary anisotropy. The radius of these stencils is not a priori bounded, however, and can be quite large for pronounced anisotropies. Our scheme also has good spectral properties, which permits larger time steps and avoids, e.g., chessboard artifacts. AD-LBR relies on Lattice Basis Reduction, a tool from discrete mathematics which has recently shown its relevance for the discretization on grids of strongly anisotropic Partial Differential Equations (Mirebeau in Preprint, 2012). We prove that in 2D AD-LBR is asymptotically equivalent to a finite element discretization on an anisotropic Delaunay triangulation, a procedure that is more involved and computationally expensive. Our scheme thus benefits from the theoretical guarantees of this procedure, for a fraction of its cost. Numerical experiments in 2D and 3D illustrate our results.

Journal ArticleDOI
TL;DR: This work embeds ASI into a time iteration algorithm to compute recursive equilibria in an infinite horizon endowment economy where heterogeneous agents trade in a bond and a stock subject to various trading constraints, and shows that this method computes equilibria accurately and outperforms other grid schemes by far.

Journal ArticleDOI
TL;DR: This review presents state-of-the-art applications of alpha shape and Delaunay triangulation in the studies on protein-DNA, protein-protein, protein-ligand interactions and protein structure analysis.
Abstract: In recent years, more 3D protein structures have become available, which has made the analysis of large molecular structures much easier. There is a strong demand for geometric models for the study of protein-related interactions. Alpha shape and Delaunay triangulation are powerful tools to represent protein structures and have advantages in characterizing the surface curvature and atom contacts. This review presents state-of-the-art applications of alpha shape and Delaunay triangulation in the studies on protein-DNA, protein-protein, protein-ligand interactions and protein structure analysis.
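
A minimal sketch of the kind of construction the review surveys: build the 3D Delaunay triangulation of atom coordinates and keep only tetrahedra whose circumradius is below a cutoff, a simplified unweighted stand-in for an alpha complex. The coordinates and the cutoff are invented, and a true alpha shape would also account for atom radii.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumradius(pts):
    """Circumradius of a tetrahedron given its 4 vertices in 3D."""
    a, b, c, d = pts
    A = np.vstack([b - a, c - a, d - a])
    rhs = 0.5 * np.array([np.dot(b, b) - np.dot(a, a),
                          np.dot(c, c) - np.dot(a, a),
                          np.dot(d, d) - np.dot(a, a)])
    center = np.linalg.solve(A, rhs)        # circumcenter from the bisector planes
    return np.linalg.norm(center - a)

# Hypothetical atom coordinates (e.g., C-alpha positions in Angstroms).
atoms = np.random.default_rng(1).uniform(0, 30, (200, 3))
tri = Delaunay(atoms)

alpha = 4.0   # illustrative cutoff
alpha_tets = [s for s in tri.simplices if circumradius(atoms[s]) <= alpha]
print(len(tri.simplices), len(alpha_tets))
```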

Book ChapterDOI
01 Jan 2014
TL;DR: A preliminary method to generate polyhedral meshes of general non-manifold domains based on computing the dual of a general tetrahedral mesh and demonstrating the technique on some simple to moderately complex domains is presented.
Abstract: We present a preliminary method to generate polyhedral meshes of general non-manifold domains. The method is based on computing the dual of a general tetrahedral mesh. The resulting mesh respects the topology of the domain to the same extent as the input mesh. If the input tetrahedral mesh is Delaunay and well-centered, the resulting mesh is a Voronoi mesh with planar faces. For general tetrahedral meshes, the resulting mesh is a polyhedral mesh with straight edges but possibly curved faces. The initial mesh generation phase is followed by a mesh untangling and quality improvement technique. We demonstrate the technique on some simple to moderately complex domains.

Book ChapterDOI
Zhaoyu Lu, Ziqi Luo, Huicheng Zheng, Jikai Chen, Weihong Li
01 Nov 2014
TL;DR: The proposed method, referred to as Delaunay-based temporal coding model (DTCM), encodes texture variations corresponding to muscle activities on face due to dynamical micro-expressions, which escalates the capacity of the method to locate spatiotemporally important features related to the micro- expressions of interest.
Abstract: Micro-expression recognition has been a challenging problem in computer vision research due to its briefness and subtlety. Previous psychological study shows that even human being can only recognize micro-expressions with low average recognition rates. In this paper, we propose an effective and efficient method to encode the micro-expressions for recognition. The proposed method, referred to as Delaunay-based temporal coding model (DTCM), encodes texture variations corresponding to muscle activities on face due to dynamical micro-expressions. Image sequences of micro-expressions are normalized not only temporally but also spatially based on Delaunay triangulation, so that the influence of personal appearance irrelevant to micro-expressions can be suppressed. Encoding temporal variations at local subregions and selecting spatial salient subregions in the face area escalates the capacity of our method to locate spatiotemporally important features related to the micro-expressions of interest. Extensive experiments on publicly available datasets, including SMIC, CASME, and CASME II, verified the effectiveness of the proposed model.

Journal ArticleDOI
19 Nov 2014
TL;DR: A novel method to generate high-quality simplicial meshes with specified anisotropy that generalizes optimal Delaunay triangulation and leads to a simple and efficient algorithm, whose quality and speed are demonstrated against state-of-the-art methods on a variety of domains and metrics.
Abstract: We present a novel method to generate high-quality simplicial meshes with specified anisotropy. Given a surface or volumetric domain equipped with a Riemannian metric that encodes the desired anisotropy, we transform the problem to one of functional approximation. We construct a convex function over each mesh simplex whose Hessian locally matches the Riemannian metric, and iteratively adapt vertex positions and mesh connectivity to minimize the difference between the target convex functions and their piecewise-linear interpolation over the mesh. Our method generalizes optimal Delaunay triangulation and leads to a simple and efficient algorithm. We demonstrate its quality and speed compared to state-of-the-art methods on a variety of domains and metrics.

Proceedings ArticleDOI
14 Mar 2014
TL;DR: This work proposes the first algorithm to compute the 3D Delaunay triangulation (DT) on the GPU using massively parallel point insertion followed by bilateral flipping, a powerful local operation in computational geometry, and outperforms all existing sequential CPU algorithms by up to an order of magnitude.
Abstract: We propose the first algorithm to compute the 3D Delaunay triangulation (DT) on the GPU. Our algorithm uses massively parallel point insertion followed by bilateral flipping, a powerful local operation in computational geometry. Although a flipping algorithm is very amenable to parallel processing and has been employed to construct the 2D DT and the 3D convex hull on the GPU, to our knowledge there is no such successful attempt for constructing the 3D DT. This is because in 3D when many points are inserted in parallel, flipping gets stuck long before reaching the DT, and thus any further correction to obtain the DT is costly. In contrast, we show that by alternating between parallel point insertion and flipping, together with picking an appropriate point insertion order, one can still obtain a triangulation very close to Delaunay. We further propose an adaptive star splaying approach to subsequently transform this result into the 3D DT efficiently. In addition, we introduce several GPU speedup techniques for our implementation, which are also useful for general computational geometry algorithms. On the whole, our hybrid approach, with the GPU accelerating the main work of constructing a near-Delaunay structure and the CPU transforming that into the 3D DT, outperforms all existing sequential CPU algorithms by up to an order of magnitude, in both synthetic and real-world inputs. We also adapt our approach to the 2D DT problem and obtain similar speedup over the best sequential CPU algorithms, and up to 2 times over previous GPU algorithms.
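
The local test that drives bilateral flipping is the in-sphere predicate: a facet shared by two tetrahedra is locally Delaunay when the opposite apex of one tetrahedron lies outside the circumsphere of the other. Below is a plain floating-point sketch of that predicate; the paper's GPU implementation would additionally need exact or adaptively robust arithmetic, which is not shown here.

```python
import numpy as np

def orient3d(a, b, c, d):
    """Signed volume test: positive when (a, b, c, d) is positively oriented."""
    return np.linalg.det(np.vstack([a - d, b - d, c - d]))

def in_sphere(a, b, c, d, e):
    """Positive when e lies strictly inside the circumsphere of tetrahedron
    (a, b, c, d), regardless of the tetrahedron's orientation. Plain floating
    point only; robust codes use exact or adaptive-precision arithmetic."""
    e = np.asarray(e, float)
    m = np.empty((4, 4))
    for i, p in enumerate((a, b, c, d)):
        diff = np.asarray(p, float) - e
        m[i, :3] = diff
        m[i, 3] = diff @ diff
    return np.sign(orient3d(a, b, c, d)) * np.linalg.det(m)

# A shared facet is locally Delaunay when the second tetrahedron's apex is not
# inside the circumsphere of the first; flipping removes failing configurations.
tet = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
print(in_sphere(*tet, np.array([0.25, 0.25, 0.25])) > 0)  # True: inside, not locally Delaunay
print(in_sphere(*tet, np.array([2.0, 2.0, 2.0])) > 0)     # False: outside
```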

Journal ArticleDOI
TL;DR: In this article, the Steiner points on edges are removed more systematically, following a specific sequence rather than the random selection commonly adopted in practice, whereas for Steiner points on a facet, a weight on the Steiner-point adjacency would lead to an optimal order that facilitates their removal.

Journal ArticleDOI
01 Jan 2014-Optik
TL;DR: A robust approach to image matching based on the Hessian affine region detector, local Delaunay triangulation, and an affine-invariant geometric constraint; experimental results indicate that the proposed method achieves higher image-matching correctness than a RANSAC-based method.

Book
10 Jul 2014
TL;DR: The introduction to GIS Measurements and Analysis Using GIS Appendix Appendix-B Glossary of GIS Terms Bibliography
Abstract: Introduction Definitions and Different Perspectives of GIS Computational Aspects of GIS Computing Algorithms in GIS Motivation of the Book Organization of the Book Summary Computational Geodesy Definition of Geodesy Mathematical Models of Earth Geometry of Ellipse and Ellipsoid Computing Radius of Curvature Concept of Latitude Applications of Geodesy The Indian Geodetic Reference System (IGRS) Summary Reference Systems and Coordinate Transformations Definition of Reference System Classification of Reference Systems Datum and Coordinate System Attachment of Datum to the Real World Different Coordinate Systems Used in GIS Shape of Earth Coordinate Transformations Datum Transformation Usage of Coordinate Systems Summary Basics of Map Projection What is Map Projection? and Why it is Necessary? Mathematical Definition of Map Projection Process Flow of Map Projection Azimuthal Map Projection Cylinderical Map Projection Conical Map Projection Classification of Map Projections Application of Map Projections Summary Algorithms for Rectification of Geometric Distortions Sources of Geometric Distortion Algorithms for Satellite Image Registration Scale Invariant Feature Transform (SIFT) Fourier Mellin Transform Multiresolution Image Analysis Applications of Image Registration Summary Differential Geometric Principles and Operators Properties of Gaussian, Hessian and Difference of Gaussian Summary Computational Geometry and its Application to GIS Introduction Definitions Geometric Computational Techniques Triangulation of Simple Polygon Convex Hulls in Two Dimensions Divide and Conquer Algorithm Voronoi Diagrams Delaunay Triangulation Delaunay Triangulation: Randomized Incremental Algorithm Delaunay Triangulations and Convex Hulls Applications of Voronoi Diagram and Delaunay Triangulation Summary Spatial Interpolation Techniques Non-Geostatistical Interpolators Geostatistics Summary Spatial Statistical Methods Definition of Statistics Spatial statistics Classification of statistical methods Role of statistics in GIS Descriptive Statistical Methods Inferential Statistics Point Pattern Analysis in GIS Applications of Spatial Statistical Methods Summary An Introduction to Bathymetry Spatial Analysis of Bathymetric Data and Sea GIS Measurements and Analysis Using GIS Appendix Appendix-B Glossary of GIS Terms Bibliography

Journal ArticleDOI
TL;DR: An intuitive framework for analyzing Delaunay refinement algorithms is presented that unifies the pioneering mesh generation algorithms of L. Paul Chew and Jim Ruppert, improves the algorithms in several minor ways, and helps to solve the difficult problem of meshing nonmanifold domains with small angles.
Abstract: Delaunay refinement is a technique for generating unstructured meshes of triangles for use in interpolation, the finite element method, and the finite volume method. In theory and practice, meshes produced by Delaunay refinement satisfy guaranteed bounds on angles, edge lengths, the number of triangles, and the grading of triangles from small to large sizes. This article presents an intuitive framework for analyzing Delaunay refinement algorithms that unifies the pioneering mesh generation algorithms of L. Paul Chew and Jim Ruppert, improves the algorithms in several minor ways, and most importantly, helps to solve the difficult problem of meshing nonmanifold domains with small angles. Although small angles inherent in the input geometry cannot be removed, one would like to triangulate a domain without creating any new small angles. Unfortunately, this problem is not always soluble. A compromise is necessary. A Delaunay refinement algorithm is presented that can create a mesh in which most angles are 30° or greater and no angle is smaller than arcsin[(√3/2) sin(ϕ/2)] ≈ (√3/4)ϕ, where ϕ ≤ 60° is the smallest angle separating two segments of the input domain. New angles smaller than 30° appear only near input angles smaller than 60°. In practice, the algorithm's performance is better than these bounds suggest. Another new result is that Ruppert's analysis technique can be used to reanalyze one of Chew's algorithms. Chew proved that his algorithm produces no angle smaller than 30° (barring small input angles), but without any guarantees on grading or number of triangles. He conjectures that his algorithm offers such guarantees. His conjecture is conditionally confirmed here: if the angle bound is relaxed to less than 26.5°, Chew's algorithm produces meshes (of domains without small input angles) that are nicely graded and size-optimal.
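
Assuming the radical that PDF extraction typically drops is restored in the bound above (an assumption), a quick evaluation at ϕ = 40° shows how close the arcsin expression and its small-angle approximation are:

```latex
\arcsin\!\left[\tfrac{\sqrt{3}}{2}\sin\tfrac{\phi}{2}\right]\Big|_{\phi=40^\circ}
  = \arcsin\!\left(0.8660 \times 0.3420\right)
  = \arcsin(0.2962) \approx 17.2^\circ,
\qquad
\tfrac{\sqrt{3}}{4}\,\phi = 0.4330 \times 40^\circ \approx 17.3^\circ .
```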

Proceedings ArticleDOI
16 Nov 2014
TL;DR: In this article, a distributed-memory scalable parallel Delaunay and Voronoi tessellation algorithm is proposed that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition.
Abstract: Computing a Voronoi or Delaunay tessellation from a set of points is a core part of the analysis of many simulated and measured datasets: N-body simulations, molecular dynamics codes, and LIDAR point clouds are just a few examples. Such computational geometry methods are common in data analysis and visualization, but as the scale of simulations and observations surpasses billions of particles, the existing serial and shared-memory algorithms no longer suffice. A distributed-memory scalable parallel algorithm is the only feasible approach. The primary contribution of this paper is a new parallel Delaunay and Voronoi tessellation algorithm that automatically determines which neighbor points need to be exchanged among the subdomains of a spatial decomposition. Other contributions include periodic and wall boundary conditions, comparison of our method using two popular serial libraries, and application to numerous science datasets.

Journal ArticleDOI
TL;DR: In this paper, it has been shown that the elimination of the perigee can be carried out also in Delaunay variables, which reduces the total number of terms of the transformation series to about one third of the terms required in the classical approach.
Abstract: Analytical integration in Artificial Satellite Theory may benefit from different canonical simplification techniques, like the elimination of the parallax, the relegation of the nodes, or the elimination of the perigee. These techniques were originally devised in polar-nodal variables, an approach that requires expressing the geopotential as a Pfaffian function in certain invariants of the Kepler problem. However, it has been recently shown that such sophisticated mathematics are not needed if implementing both the relegation of the nodes and the parallax elimination directly in Delaunay variables. Proceeding analogously, it is shown here how the elimination of the perigee can be carried out also in Delaunay variables. In this way the construction of the simplification algorithm becomes elementary, on one hand, and the computation of the transformation series is achieved with considerable savings, on the other, reducing the total number of terms of the elimination of the perigee to about one third of the number of terms required in the classical approach.

Proceedings ArticleDOI
24 Aug 2014
TL;DR: This paper presents two other methods for removing visual artifacts that, used together, are as good as the old method in terms of surface quality, while the processing time is almost three times smaller.
Abstract: In recent years, a family of 2-manifold surface reconstruction methods from a sparse Structure-from-Motion point cloud based on 3D Delaunay triangulation has been developed. This family consists of batch and incremental variations, which include a step that removes visual artifacts. Although necessary for surface quality, this step is slow compared to the other parts of the algorithm and is not well suited to incremental use. In this paper, we present two other methods for removing visual artifacts. They are evaluated and compared to the previous one in the incremental context, where the need for new methods is highest. Taken separately, they provide only moderate results, but used together they are as good as the old method in terms of surface quality, and at the same time, the processing time is almost three times smaller.

Journal ArticleDOI
03 Dec 2014-PLOS ONE
TL;DR: Improvements include an automated parameter-setting method for elastic beams, explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, and an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm.
Abstract: Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm.

Patent
13 Aug 2014
TL;DR: In this paper, the authors proposed a scattered point cloud Delaunay triangulation curved surface reconstruction method based on a mapping method, which belongs to the field of computer graphics and virtual reality technology.
Abstract: The invention relates to a scattered point cloud Delaunay triangulation curved surface reconstruction method based on a mapping method and belongs to the field of computer graphics and virtual reality technology. The method specifically includes the first step of obtaining original point cloud data of a target, the second step of obtaining K-level neighborhoods and unit normal vectors of all points in the original point cloud data, the third step of fragmenting the point cloud data, the fourth step of parameterizing fragmented point clouds to a two-dimensional plane, the fifth step of conducting Delaunay triangulation on the point clouds in the two-dimensional plane and mapping the point clouds back to corresponding three-dimensional space, and the sixth step of optimizing an initial triangle mesh model. Compared with the prior art, the scattered point cloud Delaunay triangulation curved surface reconstruction method based on the mapping method has the advantages that the quality of a triangle mesh can be ensured and triangle meshing on the scattered point clouds can be quickly realized when mesh modeling is carried out on large-scale point cloud data, and the method has a better effect on large-scale point clouds.
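
The core of steps four and five can be sketched as follows (a hedged illustration, not the patented method): fit a plane to a point-cloud fragment with PCA, triangulate the 2D parameter coordinates, and reuse that connectivity for the 3D points. The synthetic surface and all names are invented.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
# Hypothetical fragment of a scattered point cloud lying near a gently curved surface.
xy = rng.uniform(-1, 1, (300, 2))
patch = np.column_stack([xy, 0.3 * np.sin(xy[:, 0]) * np.cos(xy[:, 1])])

# Parameterize the patch onto its best-fit plane (PCA), a simple stand-in for
# the mapping step of the patent.
centered = patch - patch.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
uv = centered @ vt[:2].T          # 2D parameter coordinates

# Triangulate in the plane, then reuse the same connectivity for the 3D points.
tri2d = Delaunay(uv)
triangles_3d = patch[tri2d.simplices]   # (n_triangles, 3 vertices, xyz)
print(len(tri2d.simplices), triangles_3d.shape)
```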

Journal ArticleDOI
01 Dec 2014
TL;DR: A local position estimation algorithm (LPEA) using triangulation rules, which contains estimation and estimation correction procedures, and indicated that the algorithm is significantly accurate in terms of the local coverage measurement and accelerates the operation in coverage measurement algorithms in wireless sensor networks.
Abstract: Coverage preservation during a mission is a crucial issue for wireless sensor networks (WSNs). There are numerous methods to measure the coverage globally, such as circular, grid and non-circular models, but only a few algorithms can be used to measure the coverage locally, using a sensor. A local coverage measurement algorithm uses only the location of a sensor and its neighbors to calculate the coverage, either by having the location information or by determining this information. Absolute localization for location-based services determines sensor positions according to the global Cartesian coordinate system, usually with the help of a global positioning system (GPS) device. In some applications, such as local coverage measurement, the sectional position of neighbors related to a sensor is sufficient. For a GPS-free environment, this paper develops a local position estimation algorithm (LPEA), using triangulation rules, which contains estimation and estimation correction procedures. Simulation results demonstrated that the algorithm is superior to previous work on finding the exact location of neighbors. Moreover, the estimation error in finding the location of neighbors in a 100-m wide field is less than 5m whenever a minimum of eight neighbors is available. Furthermore, measuring coverage using the results of this LPEA is nearly identical to results where the locations of all sensors are known. In this paper, the circular model, the Delaunay triangulation (DT) coverage measurement model, was used, where the circular model, the circular model with shadowing effect and the circular probabilistic model are measurable through a DT coverage measurement. Simulation and real device experiments indicated that the algorithm is significantly accurate in terms of the local coverage measurement and accelerates the operation in coverage measurement algorithms in wireless sensor networks.