
Showing papers on "Delaunay triangulation published in 2010"


Journal ArticleDOI
TL;DR: In this article, the authors present a new code for solving the molecular and atomic excitation and radiation transfer problem in a molecular gas and predicting emergent spectra. This code works in arbitrary three-dimensional geometry using unstructured Delaunay lattices for the transport of photons.
Abstract: We present a new code for solving the molecular and atomic excitation and radiation transfer problem in a molecular gas and predicting emergent spectra. This code works in arbitrary three-dimensional geometry using unstructured Delaunay lattices for the transport of photons. Various physical models can be used as input, ranging from analytical descriptions and tabulated models to SPH simulations. To generate the Delaunay grid we sample the input model randomly, but weigh the sample probability with the molecular density and other parameters, and thereby obtain an average grid point separation that scales with the local opacity. Our code does photon transport very efficiently, so that the slow convergence of opaque models becomes tractable. When convergence between the level populations, the radiation field, and the point separation has been obtained, the grid is ray-traced to produce images that can readily be compared to observations. Because of the high dynamic range in scales that can be resolved using this type of grid, our code is particularly well suited for modeling ALMA data. Our code can furthermore deal with overlapping lines of multiple molecular and atomic species.
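
The grid-construction step described above (random sampling of the input model, weighted by the local density, so that point spacing tracks opacity) is straightforward to prototype. The sketch below is not the authors' code; it is a minimal 2D illustration assuming a hypothetical density(x, y) field, rejection sampling, and SciPy's Delaunay triangulation.

```python
import numpy as np
from scipy.spatial import Delaunay

def density(x, y):
    # Hypothetical density model: a centrally condensed Gaussian blob (peak value 1).
    return np.exp(-(x**2 + y**2) / 0.1)

def sample_weighted_points(n, rng, dmax=1.0):
    """Rejection-sample n points with acceptance probability proportional to density,
    so the resulting Delaunay lattice is finer where the medium is denser."""
    pts = []
    while len(pts) < n:
        x, y = rng.uniform(-1, 1, size=2)
        if rng.uniform(0, dmax) < density(x, y):
            pts.append((x, y))
    return np.array(pts)

rng = np.random.default_rng(0)
points = sample_weighted_points(2000, rng)
grid = Delaunay(points)          # unstructured lattice used to transport photons
print(grid.simplices.shape)      # (number of triangles, 3)
```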

291 citations


Proceedings ArticleDOI
21 Jun 2010
TL;DR: By avoiding explicit reconstruction, this work is able to perform skeleton-driven topology repair of acquired point clouds in the presence of large amounts of missing data and show that the curve skeletons the authors extract provide an intuitive and easy-to-manipulate structure for effective topology modification, leading to more faithful surface reconstruction.
Abstract: We present an algorithm for curve skeleton extraction via Laplacian-based contraction. Our algorithm can be applied to surfaces with boundaries, polygon soups, and point clouds. We develop a contraction operation that is designed to work on generalized discrete geometry data, particularly point clouds, via local Delaunay triangulation and topological thinning. Our approach is robust to noise and can handle moderate amounts of missing data, allowing skeleton-based manipulation of point clouds without explicit surface reconstruction. By avoiding explicit reconstruction, we are able to perform skeleton-driven topology repair of acquired point clouds in the presence of large amounts of missing data. In such cases, automatic surface reconstruction schemes tend to produce incorrect surface topology. We show that the curve skeletons we extract provide an intuitive and easy-to-manipulate structure for effective topology modification, leading to more faithful surface reconstruction.

254 citations


Journal ArticleDOI
TL;DR: In this article, the interference problem in wireless mesh networks is converted into geometry problems in graph theory and then solved by geometric algorithms; the resulting hybrid solution is proved to significantly reduce interference in a wireless mesh network in O(n log n) time complexity.
Abstract: In wireless mesh networks such as WLAN (IEEE 802.11s) or WMAN (IEEE 802.16), each node should help relay packets of neighboring nodes toward the gateway using multi-hop routing mechanisms. Wireless mesh networks usually deploy mesh nodes densely to deal with the problem of dead-spot communication. However, the higher the density of deployed nodes, the higher the radio interference, which causes significant degradation of system performance. In this paper, we first convert the network problem into geometry problems in graph theory, and then solve the interference problem with geometric algorithms. We first define line intersection in a graph to reflect the radio interference problem in a wireless mesh network. We then use a plane sweep algorithm to find intersecting lines, if any; employ a Voronoi diagram algorithm to delimit the regions among nodes; and use a Delaunay triangulation algorithm to reconstruct the graph so as to minimize the interference among nodes. Finally, we use the standard deviation of link lengths to prune the longer (higher interference) links as a further enhancement. The proposed hybrid solution is proved to be able to significantly reduce interference in a wireless mesh network in O(n log n) time complexity.
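
The reconstruction and pruning steps above are easy to sketch in 2D. The following is not the authors' implementation; it assumes node positions are given as an array, rebuilds links with SciPy's Delaunay triangulation, and then prunes links longer than the mean plus one standard deviation (the cutoff k is an arbitrary choice here).

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_links(nodes):
    """Unique edges of the Delaunay triangulation of the node positions."""
    tri = Delaunay(nodes)
    edges = set()
    for a, b, c in tri.simplices:
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))
    return sorted(edges)

def prune_long_links(nodes, edges, k=1.0):
    """Drop links longer than mean + k * std, i.e. the higher-interference links."""
    lengths = np.array([np.linalg.norm(nodes[u] - nodes[v]) for u, v in edges])
    cutoff = lengths.mean() + k * lengths.std()
    return [e for e, length in zip(edges, lengths) if length <= cutoff]

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(50, 2))    # mesh node coordinates
edges = delaunay_links(nodes)
kept = prune_long_links(nodes, edges)
print(len(edges), "Delaunay links,", len(kept), "kept after pruning")
```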

148 citations


Journal ArticleDOI
TL;DR: This technical note studies robotic sensor networks performing static coverage optimization with area constraints, and designs the "move-to-center-and-compute-weight" strategy to steer the robotic network towards the set of center generalized Voronoi configurations while monotonically optimizing coverage.
Abstract: This technical note studies robotic sensor networks performing static coverage optimization with area constraints. Given a density function describing the probability of events happening and a performance function measuring the cost to service a location, the objective is to position sensors in the environment so as to minimize the expected servicing cost. Moreover, because of load balancing considerations, the area of the region assigned to each robot is constrained to be a pre-specified amount. We characterize the optimal configurations as center generalized Voronoi configurations. The generalized Voronoi partition depends on a set of weights, one per robot, assigned to the network. We design a Jacobi iterative algorithm to find the weight assignment whose corresponding generalized Voronoi partition satisfies the area constraints. This algorithm is distributed over the generalized Delaunay graph. We also design the "move-to-center-and-compute-weight" strategy to steer the robotic network towards the set of center generalized Voronoi configurations while monotonically optimizing coverage.
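
The paper's strategy alternates a distributed weight update (to satisfy the area constraints) with a move-to-center step. The sketch below shows only a plain move-to-centroid iteration on a discretized square domain with a hypothetical density function; it omits the weights, the area constraints, and the distributed Jacobi iteration, and is meant only to illustrate the Voronoi-partition/centroid structure, not to reproduce the paper's algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def density(p):
    # Hypothetical event density: more events near the corner (1, 1) of the unit square.
    return np.exp(-3.0 * np.linalg.norm(p - np.array([1.0, 1.0]), axis=1))

# Discretize the domain; each grid sample stands in for a small patch of area.
g = np.linspace(0, 1, 200)
samples = np.array(np.meshgrid(g, g)).reshape(2, -1).T
w = density(samples)

rng = np.random.default_rng(2)
sites = rng.uniform(0, 1, size=(8, 2))           # sensor/robot positions

for _ in range(30):                               # Lloyd-style iterations
    owner = cKDTree(sites).query(samples)[1]      # Voronoi cell membership of each sample
    for i in range(len(sites)):
        mask = owner == i
        if mask.any():                            # move to the density-weighted centroid
            sites[i] = np.average(samples[mask], axis=0, weights=w[mask])
print(sites.round(3))
```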

136 citations


Journal ArticleDOI
TL;DR: A model for the simulation of pedestrian flows and crowd dynamics has been developed that performs well for standard benchmarks, and allows for typical crowd dynamics, such as lane forming, overtaking, avoidance of obstacles and panic behaviour.

126 citations


Journal ArticleDOI
TL;DR: This article presents recent developments on constrained Delaunay tetrahedralizations of piecewise linear domains, surveys various related results, and details two core algorithms that have provable guarantees and are amenable to practical implementation.

99 citations


Journal ArticleDOI
TL;DR: Bregman divergences as discussed by the authors allow one to define information-theoretic Voronoi diagrams in statistical parametric spaces based on the relative entropy of distributions and establish correspondences between those diagrams, and show how to compute them efficiently.
Abstract: The Voronoi diagram of a finite set of objects is a fundamental geometric structure that subdivides the embedding space into regions, each region consisting of the points that are closer to a given object than to the others. We may define various variants of Voronoi diagrams depending on the class of objects, the distance function and the embedding space. In this paper, we investigate a framework for defining and building Voronoi diagrams for a broad class of distance functions called Bregman divergences. Bregman divergences include not only the traditional (squared) Euclidean distance but also various divergence measures based on entropic functions. Accordingly, Bregman Voronoi diagrams allow one to define information-theoretic Voronoi diagrams in statistical parametric spaces based on the relative entropy of distributions. We define several types of Bregman diagrams, establish correspondences between those diagrams (using the Legendre transformation), and show how to compute them efficiently. We also introduce extensions of these diagrams, e.g., k-order and k-bag Bregman Voronoi diagrams, and introduce Bregman triangulations of a set of points and their connection with Bregman Voronoi diagrams. We show that these triangulations capture many of the properties of the celebrated Delaunay triangulation.
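
A Bregman Voronoi cell simply assigns a point to the site of smallest divergence. The brute-force membership check below is not from the paper; it assumes the generator F is either the squared Euclidean norm (giving the ordinary squared distance) or the negative Shannon entropy (whose divergence is the generalized Kullback-Leibler divergence), and it places the query point in the first argument of the divergence.

```python
import numpy as np

def bregman(F, gradF, x, y):
    """Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>."""
    return F(x) - F(y) - np.dot(gradF(y), x - y)

# Generator 1: squared Euclidean norm  ->  ordinary squared Euclidean distance.
sq = (lambda x: np.dot(x, x), lambda x: 2.0 * x)
# Generator 2: negative Shannon entropy  ->  generalized Kullback-Leibler divergence.
negent = (lambda x: np.sum(x * np.log(x)), lambda x: np.log(x) + 1.0)

def nearest_site(x, sites, F, gradF):
    """Index of the Bregman Voronoi cell (first-argument type) containing x."""
    return int(np.argmin([bregman(F, gradF, x, s) for s in sites]))

sites = np.array([[0.2, 0.3, 0.5], [0.6, 0.2, 0.2], [0.1, 0.8, 0.1]])
x = np.array([0.3, 0.4, 0.3])
print(nearest_site(x, sites, *sq))        # cell under squared Euclidean distance
print(nearest_site(x, sites, *negent))    # cell under Kullback-Leibler divergence
```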

91 citations


Journal ArticleDOI
Shuming Gao1, Wei Zhao1, Hongwei Lin1, Fanqin Yang1, Xiang Chen1 
TL;DR: The method provides an effective way to make CAD mesh model simplification meet the requirements of engineering applications, and several experimental results are presented to show the superiority and effectiveness of the approach.
Abstract: Dynamic simulation and high quality FEA mesh generation need the CAD mesh model to be simplified, that is, suppressing the detailed features on the mesh without any changes to the rest. However, the traditional mesh simplification methods for graphical models cannot satisfy the requirements of CAD mesh simplification. In this paper, we develop a feature suppression based CAD mesh model simplification framework. First, the CAD mesh model is segmented by an improved watershed segmentation algorithm, constructing the region-level representation required by feature recognition. Second, the form features needing to be suppressed are extracted using a feature recognition method with a user-defined feature facility based on the region-level representation, establishing the feature-level representation. Third, every recognized feature is suppressed using the most suitable of three methods, i.e. planar Delaunay triangulation, a Poisson equation based method, and a method for blend features, thus simplifying the CAD mesh model. Our method provides an effective way to make CAD mesh model simplification meet the requirements of engineering applications. Several experimental results are presented to show the superiority and effectiveness of our approach.

88 citations


Journal ArticleDOI
TL;DR: It is shown that boundary conforming Delaunay meshes for 3D polyhedral domains can be generated efficiently when the smallest input angle of the domain is bounded by arccos 1/3 ≈ 70.53°.
Abstract: A boundary conforming Delaunay mesh is a partitioning of a polyhedral domain into Delaunay simplices such that all boundary simplices satisfy the generalized Gabriel property. Its dual is a Voronoi partition of the same domain which is preferable for Voronoi-box based finite volume schemes. For arbitrary 2D polygonal regions, such meshes can be generated in optimal time and size. For arbitrary 3D polyhedral domains, however, this problem remains a challenge. The main contribution of this paper is to show that boundary conforming Delaunay meshes for 3D polyhedral domains can be generated efficiently when the smallest input angle of the domain is bounded by arccos 1/3 ≈ 70.53°. In addition, well-shaped tetrahedra and an appropriate mesh size can be obtained. Our new results are achieved by reanalyzing a classical Delaunay refinement algorithm. Note that our theoretical guarantee on the input angle (70.53°) is still too strong for many practical situations. We further discuss variants of the algorithm to relax the input angle restriction and to improve the mesh quality.
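
As a 2D analogue of the boundary condition above (the paper treats boundary simplices of 3D meshes), the sketch below checks whether each convex-hull edge of a planar Delaunay triangulation is Gabriel, i.e. whether its diametral disk contains no other vertex. The point set and the tolerance are arbitrary choices for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

def is_gabriel_edge(points, i, j, eps=1e-12):
    """True if the diametral disk of edge (i, j) contains no other point."""
    c = 0.5 * (points[i] + points[j])        # center of the smallest circumscribing disk
    r2 = np.sum((points[i] - c) ** 2)        # its squared radius
    d2 = np.sum((points - c) ** 2, axis=1)
    d2[[i, j]] = np.inf                      # ignore the edge's own endpoints
    return bool(np.all(d2 > r2 - eps))

rng = np.random.default_rng(3)
pts = rng.uniform(size=(40, 2))
tri = Delaunay(pts)
bad = [tuple(e) for e in tri.convex_hull if not is_gabriel_edge(pts, *e)]
print(len(tri.convex_hull), "boundary edges,", len(bad), "violate the Gabriel property")
```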

67 citations


Journal ArticleDOI
TL;DR: It is shown how to preprocess a set of n disjoint unit disks in the plane in O(nlogn) time so that if one point per disk is specified with precise coordinates, the Delaunay triangulation can be computed in linear time.
Abstract: An assumption of nearly all algorithms in computational geometry is that the input points are given precisely, so it is interesting to ask what is the value of imprecise information about points. We show how to preprocess a set of n disjoint unit disks in the plane in O(nlogn) time so that if one point per disk is specified with precise coordinates, the Delaunay triangulation can be computed in linear time. From the Delaunay, one can obtain the Gabriel graph and a Euclidean minimum spanning tree; it is interesting to note the roles that these two structures play in our algorithm to quickly compute the Delaunay.
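
To illustrate the roles those two structures play (this is the standard relationship, not the paper's preprocessing scheme): both the Gabriel graph and the Euclidean minimum spanning tree are subgraphs of the Delaunay triangulation, so both can be extracted from it directly. A minimal SciPy sketch:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(4)
pts = rng.uniform(size=(60, 2))
tri = Delaunay(pts)

# Unique Delaunay edges and their Euclidean lengths.
edges = {tuple(sorted(e)) for s in tri.simplices
         for e in ((s[0], s[1]), (s[1], s[2]), (s[0], s[2]))}
edges = np.array(sorted(edges))
lengths = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]], axis=1)

# Gabriel graph: keep the Delaunay edges whose diametral disk is empty.
def gabriel_mask(pts, edges):
    keep = []
    for i, j in edges:
        c = 0.5 * (pts[i] + pts[j])
        r2 = np.sum((pts[i] - c) ** 2)
        d2 = np.sum((pts - c) ** 2, axis=1)
        d2[[i, j]] = np.inf
        keep.append(bool(np.all(d2 > r2)))
    return np.array(keep)

gabriel_edges = edges[gabriel_mask(pts, edges)]

# Euclidean MST: a minimum spanning tree over the Delaunay edge graph.
n = len(pts)
graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
emst = minimum_spanning_tree(graph)
print(len(edges), "Delaunay edges,", len(gabriel_edges), "Gabriel edges,", emst.nnz, "EMST edges")
```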

66 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work gives a provably correct algorithm to reconstruct a k-dimensional manifold embedded in d-dimensional Euclidean space and proves that for a dense enough sample the output of the algorithm is isotopic to the manifold and a close geometric approximation of the manifold.
Abstract: We give a provably correct algorithm to reconstruct a k-dimensional manifold embedded in d-dimensional Euclidean space. Input to our algorithm is a point sample coming from an unknown manifold. Our approach is based on two main ideas: the notion of tangential Delaunay complex defined in [6,19,20], and the technique of sliver removal by weighting the sample points [13]. Unlike previous methods, we do not construct any subdivision of the embedding d-dimensional space. As a result, the running time of our algorithm depends only linearly on the extrinsic dimension d, while it depends quadratically on the size of the input sample and exponentially on the intrinsic dimension k. To the best of our knowledge, this is the first certified algorithm for manifold reconstruction whose complexity depends linearly on the ambient dimension. We also prove that for a dense enough sample the output of our algorithm is isotopic to the manifold and a close geometric approximation of the manifold.

Journal ArticleDOI
TL;DR: A new method for the calculation of the percentage of separation space used was developed using Delaunay's triangulation algorithms (convex hull) and showed better precision and accuracy than an existing method.

Journal ArticleDOI
TL;DR: This work describes how to eliminate the boundary‐matching constraint by adapting recent embedded boundary techniques to tetrahedra, so that neither air nor solid boundaries need to align with mesh geometry, and can substantially increase the flexibility and accuracy of adaptive Eulerian fluid simulation.
Abstract: When simulating fluids, tetrahedral methods provide flexibility and ease of adaptivity that Cartesian grids find difficult to match. However, this approach has so far been limited by two conflicting requirements. First, accurate simulation requires quality Delaunay meshes and the use of circumcentric pressures. Second, meshes must align with potentially complex moving surfaces and boundaries, necessitating continuous remeshing. Unfortunately, sacrificing mesh quality in favour of speed yields inaccurate velocities and simulation artifacts. We describe how to eliminate the boundary-matching constraint by adapting recent embedded boundary techniques to tetrahedra, so that neither air nor solid boundaries need to align with mesh geometry. This enables the use of high quality, arbitrarily graded, non-conforming Delaunay meshes, which are simpler and faster to generate. Temporal coherence can also be exploited by reusing meshes over adjacent timesteps to further reduce meshing costs. Lastly, our free surface boundary condition eliminates the spurious currents that previous methods exhibited for slow or static scenarios. We provide several examples demonstrating that our efficient tetrahedral embedded boundary method can substantially increase the flexibility and accuracy of adaptive Eulerian fluid simulation.

Journal ArticleDOI
TL;DR: The presented theory of the β-shape and the β-complex will be equally useful for diverse areas such as structural biology, computer graphics, geometric modelling, computational geometry, CAD, physics, and chemistry, where the core hurdle lies in determining the proximity among spherical particles.
Abstract: The proximity and topology among particles are often the most important factors for understanding the spatial structure of particles. Reasoning about the morphological structure of molecules and reconstructing a surface from a point set are examples where proximity among particles is important. Traditionally, the Voronoi diagram of points, the power diagram, the Delaunay triangulation, the regular triangulation, etc. have been used for understanding proximity among particles. In this paper, we present the theory of the β-shape and the β-complex and the corresponding algorithms for reasoning about proximity among a set of spherical particles, both using the quasi-triangulation, which is the dual of the Voronoi diagram of spheres. Given the Voronoi diagram of spheres, we first transform the Voronoi diagram to the quasi-triangulation. Then, we compute some intervals called β-intervals for the singular, regular, and interior states of each simplex in the quasi-triangulation. From the sorted set of simplexes, the β-shape and the β-complex corresponding to a particular value of β can be found efficiently. Given the Voronoi diagram of spheres, the quasi-triangulation can be obtained in O(m) time in the worst case, where m represents the number of simplexes in the quasi-triangulation. Then, the β-intervals for all simplexes in the quasi-triangulation can also be computed in O(m) time in the worst case. After sorting the simplexes using the lower bound values of the β-intervals of each simplex in O(m log m) time, the β-shape and the β-complex can be computed in O(log m + k) time in the worst case by a binary search followed by a sequential search in the neighborhood, where k represents the number of simplexes in the β-shape or the β-complex. The presented theory of the β-shape and the β-complex will be equally useful for diverse areas such as structural biology, computer graphics, geometric modelling, computational geometry, CAD, physics, and chemistry, where the core hurdle lies in determining the proximity among spherical particles.
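
The β-complex is defined on the quasi-triangulation of spheres; for zero-radius particles the quasi-triangulation coincides with the ordinary Delaunay triangulation and the construction essentially reduces to the familiar α-complex. The sketch below shows only that degenerate 2D special case, restricted to triangles (edges and vertices need the full interval computation described above); the probe value β is an arbitrary choice.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumradius(a, b, c):
    """Circumradius of triangle abc: R = |ab| |bc| |ca| / (4 * area)."""
    la, lb, lc = np.linalg.norm(b - c), np.linalg.norm(a - c), np.linalg.norm(a - b)
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return la * lb * lc / (4.0 * area)

rng = np.random.default_rng(5)
pts = rng.uniform(size=(80, 2))
tri = Delaunay(pts)
radii = np.array([circumradius(*pts[s]) for s in tri.simplices])

beta = 0.08                                  # hypothetical probe radius
in_complex = tri.simplices[radii <= beta]    # a Delaunay triangle is in the complex when its
                                             # (empty) circumscribing disk has radius <= beta
print(len(in_complex), "of", len(tri.simplices), "triangles are in the complex for beta =", beta)
```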

Journal ArticleDOI
TL;DR: This work introduces as a foundational element the design of a container data structure that both provides concurrent addition and removal operations and is compact in memory, which makes it especially well-suited for storing large dynamic graphs such as Delaunay triangulations.
Abstract: Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The algorithms we describe are (a) 2-/3-dimensional spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) d-dimensional axis-aligned box intersection computation, and finally (c) 3D bulk insertion of points into Delaunay triangulations, which can be used for mesh generation algorithms, or simply for constructing 3D Delaunay triangulations. For the latter, we introduce as a foundational element the design of a container data structure that both provides concurrent addition and removal operations and is compact in memory. This makes it especially well-suited for storing large dynamic graphs such as Delaunay triangulations. We show experimental results for these algorithms, using our implementations based on the Computational Geometry Algorithms Library (CGAL). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention.
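
As a sequential, simplified illustration of the spatial-sorting idea in (a): ordering points along a space-filling curve before incremental insertion keeps consecutive insertions spatially close, which is what makes point location cheap. The Morton (Z-order) key, the 16-bit quantization, and the use of SciPy's incremental Delaunay interface are assumptions of this sketch; they are not CGAL's implementation (which relies on Hilbert-curve spatial sorting and, in the cited work, on parallel bulk insertion).

```python
import numpy as np
from scipy.spatial import Delaunay

def morton_key(p, bits=16):
    """Interleave the bits of the quantized x and y coordinates (Z-order key)."""
    x = int(min(max(p[0], 0.0), 1.0) * ((1 << bits) - 1))
    y = int(min(max(p[1], 0.0), 1.0) * ((1 << bits) - 1))
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b) | ((y >> b) & 1) << (2 * b + 1)
    return key

rng = np.random.default_rng(6)
pts = rng.uniform(size=(10000, 2))
order = np.argsort([morton_key(p) for p in pts])     # spatial sort

tri = Delaunay(pts[order][:4], incremental=True)     # seed triangulation
tri.add_points(pts[order][4:])                       # insert the rest in Z-order
tri.close()
print(tri.points.shape[0], "points triangulated")
```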

Posted Content
TL;DR: In this article, the authors established the existence of stationary Gibbsian point processes for interactions that act on hyperedges between the points, such as Delaunay edges or triangles, cliques of Voronoi cells or clusters of k-nearest neighbors.
Abstract: We establish the existence of stationary Gibbsian point processes for interactions that act on hyperedges between the points. For example, such interactions can depend on Delaunay edges or triangles, cliques of Voronoi cells or clusters of k-nearest neighbors. The classical case of pair interactions is also included. The basic tools are an entropy bound and stationarity.

Journal ArticleDOI
TL;DR: A numerical method for 2D LEFM crack propagation simulation that uses a Lepp-Delaunay based mesh refinement algorithm for triangular meshes which allows both the generation of the initial mesh and the local modification of the current mesh as the crack propagates.

Book ChapterDOI
16 Jun 2010
TL;DR: This work presents an efficient algorithm for computing the clipped Voronoi diagram for a set of sites with respect to a compact 3D volume, assuming that the volume is represented as a tetrahedral mesh.
Abstract: The Voronoi diagram is a fundamental geometry structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact 3D domain (i.e. a finite 3D volume), some Voronoi cells of their Voronoi diagram are infinite, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm for computing the clipped Voronoi diagram for a set of sites with respect to a compact 3D volume, assuming that the volume is represented as a tetrahedral mesh. We also describe an application of the proposed method to implementing a fast method for optimal tetrahedral mesh generation based on the centroidal Voronoi tessellation.

Book ChapterDOI
05 Sep 2010
TL;DR: A simple yet powerful higher-order conditional random field (CRF) is used to model optical flow which consists of a standard photoconsistency cost and a prior on affine motions both modeled in terms of higher- order potential functions.
Abstract: We use a simple yet powerful higher-order conditional random field (CRF) to model optical flow. It consists of a standard photoconsistency cost and a prior on affine motions both modeled in terms of higher-order potential functions. Reasoning jointly over a large set of unknown variables provides more reliable motion estimates and a robust matching criterion. One of the main contributions is that unlike previous region-based methods, we omit the assumption of constant flow. Instead, we consider local affine warps whose likelihood energy can be computed exactly without approximations. This results in a tractable, so-called, higher-order likelihood function. We realize this idea by employing triangulation meshes which immensely reduce the complexity of the problem. Optimization is performed by hierarchical fusion moves and an adaptive mesh refinement strategy. Experiments show that we achieve high-quality motion fields on several data sets including the Middlebury optical flow database.

Book ChapterDOI
06 Jul 2010
TL;DR: It is shown that E always contains a plane spanner of maximum degree 6 and stretch factor 6 that can be constructed efficiently in linear time given the Triangular Distance Delaunay triangulation introduced by Chew.
Abstract: We consider the question: "What is the smallest degree that can be achieved for a plane spanner of a Euclidean graph E?" The best known bound on the degree is 14. We show that E always contains a plane spanner of maximum degree 6 and stretch factor 6. This spanner can be constructed efficiently in linear time given the Triangular Distance Delaunay triangulation introduced by Chew.
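
The spanner claim can be checked numerically on small instances. The sketch below is not the paper's construction (which uses the Triangular Distance Delaunay triangulation); it simply measures the observed stretch factor of the ordinary Delaunay graph of a random point set, assuming SciPy for the triangulation and for shortest paths.

```python
import numpy as np
from scipy.spatial import Delaunay, distance_matrix
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(7)
pts = rng.uniform(size=(100, 2))
tri = Delaunay(pts)

# Weighted adjacency of the Delaunay graph (Euclidean edge lengths).
edges = {tuple(sorted(e)) for s in tri.simplices
         for e in ((s[0], s[1]), (s[1], s[2]), (s[0], s[2]))}
i, j = np.array(sorted(edges)).T
w = np.linalg.norm(pts[i] - pts[j], axis=1)
graph = coo_matrix((w, (i, j)), shape=(len(pts), len(pts)))

d_graph = shortest_path(graph, directed=False)       # path lengths inside the graph
d_euclid = distance_matrix(pts, pts)                 # straight-line distances
mask = ~np.eye(len(pts), dtype=bool)
stretch = np.max(d_graph[mask] / d_euclid[mask])     # spanner stretch factor
print("observed stretch factor:", round(float(stretch), 3))
```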

Journal ArticleDOI
TL;DR: This work addresses the problem of generating quality surface triangle meshes from 3D point clouds sampled on piecewise smooth surfaces using a feature detection process based on the covariance matrices of Voronoi cells to extract a set of sharp features from the point cloud.
Abstract: We address the problem of generating quality surface triangle meshes from 3D point clouds sampled on piecewise smooth surfaces. Using a feature detection process based on the covariance matrices of Voronoi cells, we first extract from the point cloud a set of sharp features. Our algorithm also runs on the input point cloud a reconstruction process, such as Poisson reconstruction, providing an implicit surface. A feature preserving variant of a Delaunay refinement process is then used to generate a mesh approximating the implicit surface and containing a faithful representation of the extracted sharp edges. Such a mesh provides an enhanced trade-off between accuracy and mesh complexity. The whole process is robust to noise and made versatile through a small set of parameters which govern the mesh sizing, approximation error and shape of the elements. We demonstrate the effectiveness of our method on a variety of models including laser scanned datasets ranging from indoor to outdoor scenes.

Book ChapterDOI
20 May 2010
TL;DR: A fully robust implementation of the Delaunay triangulation of points on a sphere, or of rounded points close to a sphere, built upon existing generic algorithms provided by the Cgal library, is presented.
Abstract: We propose two ways to compute the Delaunay triangulation of points on a sphere, or of rounded points close to a sphere, both based on the classic incremental algorithm initially designed for the plane. We use the so-called space of circles as mathematical background for this work. We present a fully robust implementation built upon existing generic algorithms provided by the Cgal library. The efficiency of the implementation is established by benchmarks.
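
One standard consequence of working in the space of circles (a textbook fact rather than the paper's specific contribution) is that for points lying exactly on a sphere, the Delaunay triangulation on the sphere is the boundary of their 3D convex hull. A minimal SciPy sketch:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(8)
v = rng.normal(size=(500, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True)   # random points on the unit sphere

hull = ConvexHull(pts)       # for points on a sphere, the hull facets are exactly
faces = hull.simplices       # the triangles of the spherical Delaunay triangulation
print(faces.shape)           # (number of triangles, 3); Euler's formula gives 2n - 4 triangles
```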

Journal ArticleDOI
TL;DR: In this article, an improved version of the SimpleX method for radiative transfer on an unstructured Delaunay grid is presented, which is parallelised for distributed memory machines using MPI.
Abstract: Context. We present an improved version of the SimpleX method for radiative transfer on an unstructured Delaunay grid. The grid samples the medium through which photons are transported in an optimal way for fast radiative transfer calculations. Aims. We study the detailed working of SimpleX in test problems and show improvements over the previous version of the method. Methods. We have applied a direction conserving transport scheme that correctly transports photons in the optically thin regime, a regime where the original SimpleX algorithm lost its accuracy. In addition, a scheme of dynamic grid updates is employed to ensure correct photon transport when the optical depth changes during a simulation. For the application to large data sets, the method is parallelised for distributed memory machines using MPI. Results. To test the new method, we have performed standard tests for cosmological radiative transfer. We show excellent correspondence with both the analytical solution (when available) and the results of other codes, improved with respect to the former version of SimpleX, without sacrificing the benefits of the high computational speed of the method.

01 Jan 2010
TL;DR: In this paper, a set of algorithms building a program package for the generation of two- and three-dimensional unstructured/hybrid grids around complex geometries has been developed.
Abstract: A set of algorithms building a program package for the generation of two- and three-dimensional unstructured/hybrid grids around complex geometries has been developed. The unstructured part of the grid generator is based on the advancing front algorithm. Tetrahedra of variable size, as well as directionally stretched tetrahedra, can be generated by specification of a proper background grid, initially generated by a Delaunay algorithm. A marching layer prismatic grid generation algorithm has been developed for the generation of grids for viscous flows. The algorithm is able to handle regions of narrow gaps, as well as concave regions. The body surface is described by a triangular unstructured surface grid. The subsequent grid layers in the prismatic grid are marched away from the body by an algebraic procedure combined with an optimization procedure, resulting in a semi-structured grid of prismatic cells. Adaptive computations using remeshing have been done with use of a gradient sensor. Several key-variables can be monitored simultaneously. The sensor indicates that only the key-variables with the largest gradients give a substantial contribution to the sensor. The sensor gives directionally stretched grids. An algorithm for the surface definition of curved surfaces using a biharmonic equation has been developed. This representation of the surface can be used both for projection of the new surface nodes in h-refinement, and for the initial generation of the surface grid. For unsteady flows an algorithm has been developed for the deformation of hybrid grids, based on the solution of the biharmonic equation for the deformation field. The main advantage of the grid deformation algorithm is that it can handle large deformations. It also produces a smooth deformation distribution for cells which are very skewed or stretched. This is necessary in order to handle the very thin cells in the prismatic layers. The algorithms have been applied to complex three-dimensional geometries, and the influence of the grid quality on the accuracy for a finite volume flow solver has been studied for some simpler generic geometries.

Journal ArticleDOI
TL;DR: This paper presents a novel approach for reconstructing an object surface from its silhouettes that directly estimates the differential structure of the surface, and results in a higher accuracy than existing volumetric approaches for object reconstruction.

Journal ArticleDOI
TL;DR: It is proved that DG_C(S) is a t-spanner for S, for some constant t that depends only on the shape of the set C.
Abstract: Let C be a compact and convex set in the plane that contains the origin in its interior, and let S be a finite set of points in the plane. The Delaunay graph DG_C(S) of S is defined to be the dual of the Voronoi diagram of S with respect to the convex distance function defined by C. We prove that DG_C(S) is a t-spanner for S, for some constant t that depends only on the shape of the set C. Thus, for any two points p and q in S, the graph DG_C(S) contains a path between p and q whose Euclidean length is at most t times the Euclidean distance between p and q.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: The notion of a stable Delaunay graph (SDG in short) is introduced, a dynamic subgraph of the Delaunay triangulation that is less volatile in the sense that it undergoes fewer topological changes and yet retains many useful properties of the full Delaunay triangulation.
Abstract: The best known upper bound on the number of topological changes in the Delaunay triangulation of a set of moving points in ℝ² is (nearly) cubic, even if each point is moving with a fixed velocity. We introduce the notion of a stable Delaunay graph (SDG in short), a dynamic subgraph of the Delaunay triangulation, that is less volatile in the sense that it undergoes fewer topological changes and yet retains many useful properties of the full Delaunay triangulation. SDG is defined in terms of a parameter α > 0, and consists of Delaunay edges pq for which the (equal) angles at which p and q see the corresponding Voronoi edge e_pq are at least α. We prove several interesting properties of SDG and describe two kinetic data structures for maintaining it. Both structures use O*(n) storage. They process O*(n²) events during the motion, each in O*(1) time, provided that the points of P move along algebraic trajectories of bounded degree; the O*(·) notation hides multiplicative factors that are polynomial in 1/α and polylogarithmic in n. The first structure is simpler but the dependency on 1/α in its performance is higher.
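
The kinetic data structures are the paper's contribution; the edge test itself is easy to state statically. The 2D sketch below (an illustration under stated assumptions, not the paper's code) uses SciPy: for an interior Delaunay edge pq, the dual Voronoi edge joins the circumcenters of the two incident triangles, and the edge is kept when the angle at which p sees that segment is at least a chosen α (here an arbitrary 20 degrees).

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(a, b, c):
    """Circumcenter of triangle abc in the plane."""
    d = 2.0 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    return np.array([ux, uy])

def stable_edges(pts, alpha_deg=20.0):
    """Interior Delaunay edges whose dual Voronoi edge is seen from an endpoint
    under an angle of at least alpha (a static analogue of the SDG edge test)."""
    tri = Delaunay(pts)
    centers = np.array([circumcenter(*pts[s]) for s in tri.simplices])
    kept = []
    for t, simplex in enumerate(tri.simplices):
        for k in range(3):
            n = tri.neighbors[t][k]            # triangle opposite vertex simplex[k]
            if n <= t:                         # skip hull edges (-1) and duplicate visits
                continue
            p = pts[simplex[(k + 1) % 3]]      # one endpoint of the shared edge
            u, v = centers[t] - p, centers[n] - p
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) >= alpha_deg:
                kept.append((int(simplex[(k + 1) % 3]), int(simplex[(k + 2) % 3])))
    return kept

pts = np.random.default_rng(9).uniform(size=(60, 2))
print(len(stable_edges(pts)), "stable edges")
```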

Journal ArticleDOI
TL;DR: The 3D NNRPIM analysis is used to solve static and dynamic composite laminated plate problems, and several benchmark examples are studied to demonstrate the effectiveness of the method.
Abstract: Based on the natural neighbor radial point interpolation method (NNRPIM), a 3D analysis of thick composite laminated plates is presented. The NNRPIM uses the natural neighbour concept in order to enforce nodal connectivity. Based on the Voronoi diagram, small cells are created from the unstructured set of nodes discretizing the problem domain, the ‘influence-cells’, which are in fact influence domains that are entirely node-dependent. The Delaunay triangles, the dual of the Voronoi cells, are used to create a node-dependent background mesh used in the numerical integration of the NNRPIM interpolation functions. The NNRPIM interpolation functions, used in the Galerkin weak form, are constructed in a process similar to that in the radial point interpolation method (RPIM) with some differences that modify the method performance. In the construction of the NNRPIM interpolation functions, no polynomial base is required and the used radial basis function (RBF) is the multiquadric RBF. The NNRPIM interpolation functions ...

Proceedings ArticleDOI
Jun Chen1, Chaomin Luo1, Mohan Krishnan1, Mark Paulik1, Yipeng Tang1 
TL;DR: The proposed enhanced path planning and GPS tail technique has been successfully demonstrated in a Player/Stage simulation environment and tests on an actual course are very promising and reveal the potential for stable forward navigation.
Abstract: An enhanced dynamic Delaunay Triangulation-based (DT) path planning approach is proposed for mobile robots to plan and navigate a path successfully in the context of the Autonomous Challenge of the Intelligent Ground Vehicle Competition (www.igvc.org). The Autonomous Challenge course requires the application of vision techniques since it involves path-based navigation in the presence of a tightly clustered obstacle field. Course artifacts such as switchbacks, ramps, dashed lane lines, traps, etc., are present that could turn the robot around or cause it to exit the lane. The main contribution of this work is a navigation scheme based on dynamic Delaunay Triangulation (DDT) that is heuristically enhanced on the basis of a sense of general lane direction. The latter is computed through a "GPS (Global Positioning System) tail" vector obtained from the immediate path history of the robot. Using processed data from a LADAR, camera, compass and GPS unit, a composite local map containing both obstacles and lane line segments is built up and Delaunay Triangulation is continuously run to plan a path. This path is heuristically corrected, when necessary, by taking into account the "GPS tail". With the enhancement of the Delaunay Triangulation by using the "GPS tail", goal selection is successfully achieved in a majority of situations. The robot appears to follow a very stable path while navigating through switchbacks and dashed lane line situations. The proposed enhanced path planning and GPS tail technique has been successfully demonstrated in a Player/Stage simulation environment. In addition, tests on an actual course are very promising and reveal the potential for stable forward navigation.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: These ideas from mesh generation are applied to improve the time and space complexities of computing the full persistent homological information associated with a point cloud P in Euclidean space ℝ^d, and a new collection of filtrations is proposed, based on the Delaunay triangulation of a carefully-chosen superset of P, whose sizes are reduced to 2^{O(d²)} n.
Abstract: We apply ideas from mesh generation to improve the time and space complexities of computing the full persistent homological information associated with a point cloud P in Euclidean space ℝ^d. Classical approaches rely on the Čech, Rips, α-complex, or witness complex filtrations of P, whose complexities scale up very badly with d. For instance, the α-complex filtration incurs the n^{Ω(d)} size of the Delaunay triangulation, where n is the size of P. The common alternative is to truncate the filtrations when the sizes of the complexes become prohibitive, possibly before discovering the most relevant topological features. In this paper we propose a new collection of filtrations, based on the Delaunay triangulation of a carefully-chosen superset of P, whose sizes are reduced to 2^{O(d²)} n. Our filtrations interleave multiplicatively with the family of offsets of P, so that the persistence diagram of P can be approximated in 2^{O(d²)} n³ time in theory, with a near-linear observed running time in practice. Thus, our approach remains tractable in medium dimensions, say 4 to 10.
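
The filtration family proposed here is new; the conventional Delaunay-based baseline it improves upon, the α-complex filtration, is available in off-the-shelf tools. The sketch below assumes the GUDHI library is installed and is only meant to show what persistent homological information from a Delaunay-based filtration looks like in practice, on a noisy circle where one long-lived 1-dimensional (loop) feature is expected.

```python
import numpy as np
import gudhi  # assumption: the GUDHI library provides the alpha-complex filtration

rng = np.random.default_rng(10)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(200, 2))

# Alpha complex: a filtration built on the Delaunay triangulation of the points.
alpha = gudhi.AlphaComplex(points=pts)
st = alpha.create_simplex_tree()
diagram = st.persistence()                 # list of (dimension, (birth, death)) pairs
loops = [(b, d) for dim, (b, d) in diagram if dim == 1]
print(sorted(loops, key=lambda bd: bd[1] - bd[0], reverse=True)[:1])   # the dominant loop
```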