
Showing papers on "Delaunay triangulation published in 2005"


Journal ArticleDOI
01 Sep 2005
TL;DR: The notion of loose e-sample is introduced and it is shown that the set of loose e-samples contains, and is asymptotically identical to, the set of e-samples; loose e-samples are easier to check and to construct.
Abstract: The notion of e-sample, introduced by Amenta and Bern, has proven to be a key concept in the theory of sampled surfaces. Of particular interest is the fact that, if E is an e-sample of a C2-continuous surface S for a sufficiently small e, then the Delaunay triangulation of E restricted to S is a good approximation of S, both in a topological and in a geometric sense. Hence, if one can construct an e-sample, one also gets a good approximation of the surface. Moreover, correct reconstruction is ensured by various algorithms. In this paper, we introduce the notion of loose e-sample. We show that the set of loose e-samples contains and is asymptotically identical to the set of e-samples. The main advantage of loose e-samples over e-samples is that they are easier to check and to construct. We also present a simple algorithm that constructs provably good surface samples and meshes. Given a C2-continuous surface S without boundary, the algorithm generates a sparse e-sample E and at the same time a triangulated surface Dels(E). The triangulated surface has the same topological type as S, is close to S for the Hausdorff distance and can provide good approximations of normals, areas and curvatures. A notable feature of the algorithm is that the surface needs only to be known through an oracle that, given a line segment, detects whether the segment intersects the surface and, in the affirmative, returns the intersection points. This makes the algorithm useful in a wide variety of contexts and for a large class of surfaces.

362 citations
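
The oracle interface described above needs only segment–surface intersection tests. As a hedged illustration (not the paper's own code), here is a minimal Python sketch of such an oracle for an implicit sphere, where intersection reduces to a quadratic in the segment parameter; the function name and the choice of a sphere are illustrative assumptions:

```python
import math

def segment_sphere_intersections(p, q, center, radius):
    """Return the points where segment p-q crosses the sphere |x - center| = radius.

    Solves |p + t*(q - p) - center|^2 = radius^2 for t in [0, 1].
    A tangent hit is reported twice (double root of the quadratic)."""
    d = [q[i] - p[i] for i in range(3)]       # segment direction
    m = [p[i] - center[i] for i in range(3)]  # offset from sphere center
    a = sum(di * di for di in d)
    b = 2.0 * sum(mi * di for mi, di in zip(m, d))
    c = sum(mi * mi for mi in m) - radius * radius
    disc = b * b - 4.0 * a * c
    if a == 0 or disc < 0:
        return []  # degenerate segment, or the supporting line misses the sphere
    hits = []
    for t in ((-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)):
        if 0.0 <= t <= 1.0:  # keep only roots inside the segment
            hits.append(tuple(p[i] + t * d[i] for i in range(3)))
    return hits
```

An oracle of this shape (segment in, intersection points out) is all the sampling algorithm requires of the surface.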


Journal ArticleDOI
01 Jul 2005
TL;DR: A novel Delaunay-based variational approach to isotropic tetrahedral meshing is presented; it minimizes a simple mesh-dependent energy through global updates of both vertex positions and connectivity, and generates well-shaped tetrahedra.
Abstract: In this paper, a novel Delaunay-based variational approach to isotropic tetrahedral meshing is presented. To achieve both robustness and efficiency, we minimize a simple mesh-dependent energy through global updates of both vertex positions and connectivity. As this energy is known to be the L1 distance between an isotropic quadratic function and its linear interpolation on the mesh, our minimization procedure generates well-shaped tetrahedra. Mesh design is controlled through a gradation smoothness parameter and selection of the desired number of vertices. We provide the foundations of our approach by explaining both the underlying variational principle and its geometric interpretation. We demonstrate the quality of the resulting meshes through a series of examples.

347 citations


Book ChapterDOI
01 Jan 2005
TL;DR: A method to decompose an arbitrary 3D piecewise linear complex (PLC) into a constrained Delaunay tetrahedralization (CDT) by updating the input PLC into another PLC which is topologically and geometrically equivalent to the original one and does have a CDT.
Abstract: We present a method to decompose an arbitrary 3D piecewise linear complex (PLC) into a constrained Delaunay tetrahedralization (CDT). It successfully resolves the problem of non-existence of a CDT by updating the input PLC into another PLC which is topologically and geometrically equivalent to the original one and does have a CDT. Based on a strong CDT existence condition, the redefinition is done by segment splitting and vertex perturbation. Once the CDT exists, a practically fast cavity retetrahedralization algorithm recovers the missing facets. This method has been implemented and tested through various examples. In practice, it is robust and efficient for relatively complicated 3D domains.

281 citations


Journal ArticleDOI
TL;DR: A new method for bidimensional empirical mode decomposition (EMD), based on Delaunay triangulation and on piecewise cubic polynomial interpolation, is described; it is efficient in terms of computational cost, and the decomposition of Gaussian white noise leads to bidimensional selective filter banks.
Abstract: In this letter, we describe a new method for bidimensional empirical mode decomposition (EMD). This decomposition is based on Delaunay triangulation and on piecewise cubic polynomial interpolation. Particular attention is devoted to boundary conditions that are crucial for the feasibility of the bidimensional EMD. The study of the behavior of the decomposition on different kinds of images shows its efficiency in terms of computational cost, and the decomposition of Gaussian white noise leads to bidimensional selective filter banks.

269 citations
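
The sifting step of such a bidimensional EMD first locates the local extrema of the image before interpolating the envelopes over the Delaunay triangulation of those extrema. As a minimal sketch (not the authors' implementation), here is a strict 8-neighborhood maxima detector on a raster image; the function name is hypothetical:

```python
def local_maxima(img):
    """Return indices (i, j) that are strict local maxima of a 2-D grid
    over their 8-neighborhood (borders use the available neighbors only)."""
    h, w = len(img), len(img[0])
    maxima = []
    for i in range(h):
        for j in range(w):
            v = img[i][j]
            neighbors = [img[a][b]
                         for a in range(max(0, i - 1), min(h, i + 2))
                         for b in range(max(0, j - 1), min(w, j + 2))
                         if (a, b) != (i, j)]
            if all(v > n for n in neighbors):
                maxima.append((i, j))
    return maxima
```

The same loop with the comparison reversed yields the minima; the two extrema sets then seed the upper and lower envelope interpolations.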


Journal ArticleDOI
TL;DR: This work introduces a new three-dimensional agent-based Voronoi-Delaunay hybrid model for multicellular tumor spheroids and tests hypotheses on the functional dependence of the uptake rates and uses computer simulations to find suitable mechanisms for the induction of necrosis.
Abstract: We study multicellular tumor spheroids by introducing a new three-dimensional agent-based Voronoi-Delaunay hybrid model. In this model, the cell shape varies from spherical in thin solution to convex polyhedral in dense tissues. The next neighbors of the cells are provided by a weighted Delaunay triangulation with on average linear computational complexity. The cellular interactions include direct elastic forces and cell-cell as well as cell-matrix adhesion. The spatiotemporal distribution of two nutrients---oxygen and glucose---is described by reaction-diffusion equations. Viable cells consume the nutrients, which are converted into biomass by increasing the cell size during the ${\mathrm{G}}_{1}$ phase. We test hypotheses on the functional dependence of the uptake rates and use computer simulations to find suitable mechanisms for the induction of necrosis. This is done by comparing the outcome with experimental growth curves, where the best fit leads to an unexpected ratio of oxygen and glucose uptake rates. The model relies on physical quantities and can easily be generalized towards tissues involving different cell types. In addition, it provides many features that can be directly compared with the experiment.

211 citations


Journal ArticleDOI
TL;DR: This paper introduces a new 3D metric field that tightens the mesh around interfaces when the calculation domain is divided into several subdomains, and places enough elements through each subdomain thickness, without introducing too many nodes in the other directions.

164 citations


Journal ArticleDOI
TL;DR: A novel definition of the anisotropic centroidal Voronoi tessellation corresponding to a given Riemannian metric tensor is introduced, and various numerical examples demonstrating the effectiveness of the proposed method are presented.
Abstract: In this paper, we introduce a novel definition of the anisotropic centroidal Voronoi tessellation (ACVT) corresponding to a given Riemannian metric tensor. A directional distance function is used in the definition to simplify the computation. We provide algorithms to approximate the ACVT using the Lloyd iteration and the construction of anisotropic Delaunay triangulation under the given Riemannian metric. The ACVT is applied to the optimization of two-dimensional anisotropic Delaunay triangulations, to the generation of surface CVTs, and to the generation of high-quality triangular meshes on general surfaces. Various numerical examples demonstrating the effectiveness of the proposed method are presented.

146 citations
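
The Lloyd iteration underlying the ACVT alternates between computing Voronoi cells and moving each generator to its cell centroid. As a hedged sketch of the plain isotropic 1-D case (uniform density on [0, 1], not the anisotropic setting of the paper), where each cell is simply the interval between midpoints of neighboring sites:

```python
def lloyd_1d(sites, iterations=200):
    """Lloyd iteration on [0, 1] with uniform density: each Voronoi cell is
    bounded by the midpoints with the neighboring sites (and the domain ends);
    every site then moves to its cell centroid, i.e. the cell midpoint."""
    sites = sorted(sites)
    n = len(sites)
    for _ in range(iterations):
        bounds = ([0.0]
                  + [(sites[k] + sites[k + 1]) / 2 for k in range(n - 1)]
                  + [1.0])
        sites = [(bounds[k] + bounds[k + 1]) / 2 for k in range(n)]
    return sites
```

With uniform density the fixed point is the evenly spaced configuration (2k+1)/(2n); the anisotropic version replaces the midpoint rule by the directional distance and the centroid by a metric-weighted one.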


Proceedings ArticleDOI
31 Jul 2005
TL;DR: In this article, a moving least squares algorithm for reconstructing a surface from point cloud data is proposed; it defines an implicit function I whose zero set U is the reconstructed surface, and I is proved to be a good approximation to the signed distance function of the sampled surface F.
Abstract: We analyze a moving least squares algorithm for reconstructing a surface from point cloud data. Our algorithm defines an implicit function I whose zero set U is the reconstructed surface. We prove that I is a good approximation to the signed distance function of the sampled surface F and that U is geometrically close to and homeomorphic to F. Our proof requires sampling conditions similar to e-sampling, used in Delaunay reconstruction algorithms.

129 citations


Proceedings ArticleDOI
21 Nov 2005
TL;DR: This work proposes a novel algorithm for placement of streamlines from two-dimensional steady vector or direction fields based on a farthest-point seeding strategy, which leads to high quality placements by favoring long streamlines while retaining uniformity with increasing density.
Abstract: We propose a novel algorithm for placement of streamlines from two-dimensional steady vector or direction fields. Our method consists of placing one streamline at a time by numerical integration, starting at the point farthest away from all previously placed streamlines. Such a farthest-point seeding strategy leads to high quality placements by favoring long streamlines, while retaining uniformity with the increasing density. Our greedy approach generates placements of comparable quality with respect to the optimization approach from Turk and Banks, while being 200 times faster. Simplicity, robustness as well as efficiency are achieved through the use of a Delaunay triangulation to model the streamlines, address proximity queries and determine the biggest voids by exploiting the empty circle property. Our method handles variable density and extends to multiresolution.

128 citations
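
The "biggest void" queries above rest on the empty circle property: the largest circumcircle among the Delaunay triangles bounds the largest empty disk. As an illustrative sketch of the geometric ingredient (not the paper's code), the circumcenter and circumradius of a triangle follow from the standard determinant formula:

```python
def circumcircle(a, b, c):
    """Circumcenter and circumradius of triangle abc in 2-D, via the standard
    determinant formula; assumes the three points are not collinear."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r = ((ax - ux)**2 + (ay - uy)**2) ** 0.5
    return (ux, uy), r
```

Seeding the next streamline at the circumcenter of the triangle with the largest such radius realizes the farthest-point strategy.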


Book ChapterDOI
01 Jan 2005
TL;DR: An algorithm is presented to implement natural neighbour interpolation in two and three dimensions; it has the same time complexity as the insertion of a single point in a Voronoi diagram or a Delaunay triangulation.
Abstract: Although the properties of natural neighbour interpolation and its usefulness with scattered and irregularly spaced data are well-known, its implementation is still a problem in practice, especially in three and higher dimensions. We present in this paper an algorithm to implement the method in two and three dimensions, but it can be generalized to higher dimensions. Our algorithm, which uses the concept of flipping in a triangulation, has the same time complexity as the insertion of a single point in a Voronoi diagram or a Delaunay triangulation.

108 citations


Proceedings ArticleDOI
23 Jan 2005
TL;DR: A moving least squares algorithm for reconstructing a surface from point cloud data is analyzed and it is proved that I is a good approximation to the signed distance function of the sampled surface F and that U is geometrically close to and homeomorphic to F.
Abstract: We analyze a moving least squares algorithm for reconstructing a surface from point cloud data. Our algorithm defines an implicit function I whose zero set U is the reconstructed surface. We prove that I is a good approximation to the signed distance function of the sampled surface F and that U is geometrically close to and homeomorphic to F. Our proof requires sampling conditions similar to e-sampling, used in Delaunay reconstruction algorithms.

Journal ArticleDOI
TL;DR: In this article, the properties of C-grid staggered spatial discretizations of the shallow-water equations on regular Delaunay triangulations on the sphere are analyzed, and the power spectra for energy and potential enstrophy obtained in long model integrations display a qualitative behavior similar to that predicted by the decaying turbulence theory for the continuous system.
Abstract: The properties of C-grid staggered spatial discretizations of the shallow-water equations on regular Delaunay triangulations on the sphere are analyzed. Mass-conserving schemes that also conserve either energy or potential enstrophy are derived, and their features are analogous to those of the C-grid staggered schemes on quadrilateral grids. Results of numerical tests carried out with explicit and semi-implicit time discretizations show that the potential-enstrophy-conserving scheme is able to reproduce correctly the main features of large-scale atmospheric motion and that power spectra for energy and potential enstrophy obtained in long model integrations display a qualitative behavior similar to that predicted by the decaying turbulence theory for the continuous system.

Journal ArticleDOI
TL;DR: This paper proposes a new method for isotropic remeshing of triangulated surface meshes by distributing the desired number of samples by generalizing error diffusion and creating the mesh by lifting the corresponding constrained Delaunay triangulation from parameter space.
Abstract: This paper proposes a new method for isotropic remeshing of triangulated surface meshes. Given a triangulated surface mesh to be resampled and a user-specified density function defined over it, we first distribute the desired number of samples by generalizing error diffusion, commonly used in image halftoning, to work directly on mesh triangles and feature edges. We then use the resulting sampling as an initial configuration for building a weighted centroidal Voronoi diagram in a conformal parameter space, where the specified density function is used for weighting. We finally create the mesh by lifting the corresponding constrained Delaunay triangulation from parameter space. A precise control over the sampling is obtained through a flexible design of the density function, the latter being possibly low-pass filtered to obtain a smoother gradation. We demonstrate the versatility of our approach through various remeshing examples.
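
The sampling stage above generalizes error diffusion from image halftoning to mesh triangles. As a hedged reference point for that technique (the classic raster form, not the authors' mesh generalization), here is Floyd–Steinberg error diffusion on a grayscale grid:

```python
def floyd_steinberg(gray):
    """Classic Floyd-Steinberg halftoning on a 2-D grid of values in [0, 1]:
    threshold each pixel, then diffuse the quantization error to the
    unvisited neighbors with weights 7/16, 3/16, 5/16, 1/16."""
    img = [row[:] for row in gray]   # work on a copy; errors accumulate here
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0
            out[y][x] = new
            err = old - new
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err * wgt
    return out
```

The mesh version of the paper plays the same game with sample counts per triangle in place of binary pixels, diffusing the rounding error to adjacent triangles and feature edges.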

Book ChapterDOI
01 Jan 2005
TL;DR: This chapter focuses on the automatic mesh generation methods based on the advancing front method and the Delaunay triangulation method, which are the basis of many existing mesh generation programs and the basis for current research.
Abstract: The process of creating a finite element mesh is often termed as mesh generation. Mesh generation has always been a time-consuming and error-prone process. This is true in the practical science and engineering computations, where meshes have to be generated for three-dimensional geometries of various levels of complexity. The attempt to create a fully automatic mesh generator, which is a particular mesh generation algorithm that is capable of generating valid finite element meshes over arbitrary domains and needs only the information of the specified geometric boundary of the domain and the required distribution of the element size, started in the early 1970s. Since then, many methodologies have been proposed and different algorithms have been devised in the development of automatic mesh generators. This chapter focuses on the automatic mesh generation methods based on the advancing front method and the Delaunay triangulation method. These are the basis of many existing mesh generation programs and the basis of current research. This chapter discusses the algorithmic procedures of the advancing front method in two dimensions and the Delaunay triangulation method in three dimensions. It also discusses curve and surface mesh generations.

Journal ArticleDOI
TL;DR: The proposed DBRG algorithm takes a set of unorganized sample points from the boundary surface of a three-dimensional object and produces an orientable manifold triangulated model with a correct geometry and topology that is faithful to the original object.
Abstract: This paper presents a Delaunay-based region-growing (DBRG) surface reconstruction algorithm that holds the advantages of both Delaunay-based and region-growing approaches. The proposed DBRG algorithm takes a set of unorganized sample points from the boundary surface of a three-dimensional object and produces an orientable manifold triangulated model with a correct geometry and topology that is faithful to the original object. Compared with the traditional Delaunay-based approach, the DBRG algorithm requires only one-pass Delaunay computation and needs no Voronoi information because it improves the non-trivial triangle extraction by using a region-growing technique. Compared with the traditional region-growing methods, the proposed DBRG algorithm makes the surface reconstruction more systematic and robust because it inherits the structural characteristics of the Delaunay triangulation, which nicely complements the absence of geometric information in a set of unorganized points. The proposed DBRG algorithm is capable of handling surfaces with complex topology, boundaries, and even non-uniform sample points. Experimental results show that it is highly efficient compared with other existing algorithms.

Proceedings ArticleDOI
06 Jun 2005
TL;DR: Star splaying can be a fast first step in repairing a high-quality finite element mesh that has lost the Delaunay property after its vertices have moved in response to simulated physical forces.
Abstract: Star splaying is a general-dimensional algorithm that takes as input a triangulation or an approximation of a convex hull, and produces the Delaunay triangulation, weighted Delaunay triangulation, or convex hull of the vertices in the input. If the input is "nearly Delaunay" or "nearly convex" in a certain sense quantified herein, and it is sparse (i.e. each input vertex adjoins only a constant number of edges), star splaying runs in time linear in the number of vertices. Thus, star splaying can be a fast first step in repairing a high-quality finite element mesh that has lost the Delaunay property after its vertices have moved in response to simulated physical forces. Star splaying is akin to Lawson's edge flip algorithm for converting a triangulation to a Delaunay triangulation, but it works in any dimensionality.
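
Lawson's flip algorithm, to which star splaying is compared, decides each flip with the incircle predicate. As a minimal sketch of that predicate (illustrative only; robust implementations use exact arithmetic or the controlled perturbation discussed elsewhere on this page):

```python
def in_circumcircle(a, b, c, d):
    """True if d lies strictly inside the circumcircle of triangle abc,
    with abc in counterclockwise order -- the condition under which
    Lawson's algorithm flips the edge shared by the two triangles."""
    m = []
    for px, py in (a, b, c):
        # lift each point relative to d onto the paraboloid z = x^2 + y^2
        m.append([px - d[0], py - d[1],
                  (px - d[0]) ** 2 + (py - d[1]) ** 2])
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0
```

Star splaying generalizes this local repair idea to arbitrary dimension by fixing each vertex's star rather than flipping one edge at a time.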

Proceedings ArticleDOI
23 Jan 2005
TL;DR: It is pointed out that controlled perturbation is a general scheme for converting idealistic algorithms into algorithms which can be executed with floating point arithmetic, and it is shown how to use it in the context of randomized geometric algorithms without deteriorating the running time.
Abstract: Most geometric algorithms are idealistic in the sense that they are designed for the Real-RAM model of computation and for inputs in general position. Real inputs may be degenerate and floating point arithmetic is only an approximation of real arithmetic. Perturbation replaces an input by a nearby input which is (hopefully) in general position and on which the algorithm can be run with floating point arithmetic. Controlled perturbation as proposed by Halperin et al. calls for more: control over the amount of perturbation needed for a given precision of the floating point system. Or conversely, a control over the precision needed for a given amount of perturbation. Halperin et al. gave controlled perturbation schemes for arrangements of polyhedral surfaces, spheres, and circles.We extend their work and point out that controlled perturbation is a general scheme for converting idealistic algorithms into algorithms which can be executed with floating point arithmetic. We also show how to use controlled perturbation in the context of randomized geometric algorithms without deteriorating the running time. Finally, we give concrete schemes for planar Delaunay triangulations and convex hulls and Delaunay triangulations in arbitrary dimensions. We analyze the relation between the perturbation amount and the precision of the floating point system. We also report about experiments with a planar Delaunay diagram algorithm.

Proceedings ArticleDOI
17 Jul 2005
TL;DR: This work first proves the NP-completeness of CI, and then presents a Delaunay-triangulation-based algorithm, connectivity improvement using Delaunay triangulation (CIDT), which is demonstrated to be effective and compared to its variations via J-Sim simulation.
Abstract: A fully connected network topology is critical to many fundamental operations in wireless ad hoc networks. We study the problem of deploying additional wireless nodes to improve the connectivity of an existing wireless network. Given a disconnected network, we consider the connectivity improvement (CI) problem, i.e., how to deploy as few additional nodes as possible so that the augmented network is connected. We first prove the NP-completeness of CI, and then present a Delaunay triangulation-based algorithm, connectivity improvement using Delaunay triangulation (CIDT). We discuss several variations of CIDT, and propose two additional optimization techniques to further improve the performance. Finally, we demonstrate the effectiveness of CIDT, and compare the performance of its variations via J-Sim simulation.
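
A core subroutine in such deployment schemes is steinerizing a long edge: placing the fewest relay nodes along a segment between two components so that every hop is within radio range. As a hedged sketch of that one step (the helper name and the restriction to a single edge are assumptions, not CIDT itself):

```python
import math

def relays_on_edge(p, q, radio_range):
    """Fewest additional nodes on segment p-q so that consecutive hops are
    at most radio_range apart; returns the evenly spaced relay positions."""
    d = math.dist(p, q)
    hops = max(1, math.ceil(d / radio_range))  # ceil(d/r) hops need hops-1 relays
    return [(p[0] + (q[0] - p[0]) * k / hops,
             p[1] + (q[1] - p[1]) * k / hops)
            for k in range(1, hops)]
```

Applied to selected Delaunay edges between components, this yields a connected augmentation; the algorithmic work in CIDT lies in choosing which edges to steinerize.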

Proceedings ArticleDOI
21 Sep 2005
TL;DR: A fingerprint matching algorithm based on the Delaunay triangulation (DT) net of the minutiae set, which finds reference minutiae pairs (RMPs) for alignment followed by point-pattern matching.
Abstract: Fingerprint matching is a key issue in the research of automatic fingerprint identification systems. On the basis of Delaunay triangulation (DT) in computational geometry, we propose a fingerprint matching algorithm based on a DT net. Applying DT to the topological structure of the minutiae set, a DT net is formed with minutiae as vertexes. From the nets of the input minutiae set and the template minutiae set, pairs of minutiae with similar local structures are selected as reference minutiae pairs (RMPs) for alignment, and matching is then carried out based on point patterns. The experiment is conducted on FVC2002 and the result indicates the validity of our algorithm.
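
A natural first filter for candidate RMPs exploits the fact that Delaunay edge lengths are invariant under the rotation and translation between two impressions. As an illustrative sketch (the function and tolerance are assumptions, not the paper's exact similarity measure):

```python
import math

def similar_edges(edges_a, edges_b, tol=0.05):
    """Pairs (i, j) of edges, one from each Delaunay net, whose lengths agree
    within relative tolerance tol -- a rotation/translation-invariant first
    filter for candidate reference minutiae pairs."""
    def length(e):
        (x1, y1), (x2, y2) = e
        return math.hypot(x2 - x1, y2 - y1)
    pairs = []
    for i, ea in enumerate(edges_a):
        la = length(ea)
        for j, eb in enumerate(edges_b):
            if la > 0 and abs(length(eb) - la) / la <= tol:
                pairs.append((i, j))
    return pairs
```

A real matcher would add further invariants (angles between the minutiae directions and the edge, ridge counts) before committing to an alignment.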

Book ChapterDOI
03 Oct 2005
TL;DR: An I/O-efficient algorithm for constructing constrained Delaunay triangulations is designed and implemented, and experiments show that it is significantly faster than existing implementations.
Abstract: In this paper, we designed and implemented an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected $O(\frac{N}{B}{\rm log}_{M/B}\frac{N}{B})$ I/Os for triangulating N points in the plane, where M is the memory size and B is the disk block size. If there are more constraining segments, the theoretical bound does not hold, but in practice the performance of our algorithm degrades gracefully. Through an extensive set of experiments with both synthetic and real data, we show that our algorithm is significantly faster than existing implementations.

01 Jan 2005
TL;DR: In this article, the distinct element method was originally designed to handle spherical particles and was generalized to a wider range of particle shapes called spherosimplices, and a contact detection method was given as well which uses weighted Delaunay triangulations to detect contacts occurring in a population of particles with such shapes.
Abstract: The distinct element method was originally designed to handle spherical particles. Here, this method is generalized to a wider range of particle shapes called spherosimplices. A contact detection method is given as well, which uses weighted Delaunay triangulations to detect contacts occurring in a population of particles with such shapes. Finally, a set of numerical experiments is performed indicating that the overall contact detection complexity is linear in the number of particles.

Book ChapterDOI
TL;DR: A novel minutiae matching approach to fingerprint verification by using Delaunay triangulation to represent each fingerprint as a special connected graph that facilitates a local-structure-based matching of two minutiae from input and template fingerprints respectively.
Abstract: This paper presents a novel minutiae matching approach to fingerprint verification. Given an input or a template fingerprint image, minutiae are extracted first. Using Delaunay triangulation, each fingerprint is then represented as a special connected graph with each node being a minutia point and each edge connecting two minutiae. Such a graph is used to define the neighborhood of a minutia that facilitates a local-structure-based matching of two minutiae from input and template fingerprints respectively. The possible alignment of an edge in input graph and an edge in template graph can be identified efficiently. A global matching score between two fingerprints is finally calculated by using an aligned-edge-guided triangle matching procedure. The effectiveness of the proposed approach is confirmed by a benchmark test on FVC2000 and FVC2002 databases.

Journal ArticleDOI
01 May 2005
TL;DR: This paper presents several parallel algorithms for the construction of the Delaunay triangulation in E^2 and E^3-one of the fundamental problems in computer graphics.
Abstract: This paper presents several parallel algorithms for the construction of the Delaunay triangulation in E^2 and E^3, one of the fundamental problems in computer graphics. The proposed algorithms are designed for parallel systems with shared memory and several processors. Such a hardware configuration (especially the two-processor case) became widespread in the last few years in the computer graphics area. Some of the proposed algorithms are easy to implement but not very efficient, while others show the opposite characteristics. Some of them are usable in E^2 only; others work in E^3 as well. The algorithms themselves were already published in the computer graphics literature, where graphics-oriented criteria were highlighted. This paper concentrates on a parallel and systematic point of view and gives detailed information about the parallelization of a computational geometry application to the parallel and distributed computation oriented community.

Proceedings ArticleDOI
14 Nov 2005
TL;DR: A new blind robust watermarking scheme for 3D meshes that resists affine transforms, white noise addition, smoothing, cropping, and sampling changes such as decimation, subdivision, or remeshing is presented.
Abstract: We present a new blind robust watermarking scheme for 3D meshes. Feature points are used to build a partition of the mesh shape that resists common 3D watermarking attacks. These points are automatically selected through a multi-scale estimation of the curvature tensor field. The automatic capture of robust feature points and its use for blind detection in a robust watermarking scheme are the contribution of this paper. Our watermarking scheme proceeds by first partitioning the mesh shape using a geodesic Delaunay triangulation of the detected feature points. Each of these geodesic triangle patches is then parameterized and remeshed by a subdivision strategy to obtain a robust base meshing. The remeshed patches are then watermarked in the spectral domain, and the original mesh points are finally projected on the corresponding watermarked patches. This strategy shows good preliminary results as it resists affine transforms, white noise addition, smoothing, cropping, and sampling changes such as decimation, subdivision, or remeshing.

Book ChapterDOI
01 Jan 2005
TL;DR: Adaptive thinning algorithms are greedy point removal schemes for bivariate scattered data sets with corresponding function values, where the points are recursively removed according to some data-dependent criterion.
Abstract: Adaptive thinning algorithms are greedy point removal schemes for bivariate scattered data sets with corresponding function values, where the points are recursively removed according to some data-dependent criterion. Each subset of points, together with its function values, defines a linear spline over its Delaunay triangulation. The basic criterion for the removal of the next point is to minimise the error between the resulting linear spline at the bivariate data points and the original function values. This leads to a hierarchy of linear splines of coarser and coarser resolutions.
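
The greedy removal criterion is easiest to see in the simpler univariate setting, where the "triangulation" degenerates to consecutive intervals. As a hedged 1-D stand-in for the bivariate Delaunay case (function name and error measure are illustrative assumptions):

```python
def adaptive_thinning_1d(xs, ys, keep):
    """Greedy 1-D thinning: repeatedly delete the interior point whose
    removal causes the smallest linear-interpolation error at that point,
    until only `keep` points remain (the two endpoints are always kept)."""
    pts = list(zip(xs, ys))

    def removal_error(k):
        # Error at pts[k] if its two neighbors are linearly interpolated.
        (x0, y0), (x1, y1), (x2, y2) = pts[k - 1], pts[k], pts[k + 1]
        t = (x1 - x0) / (x2 - x0)
        return abs((y0 + t * (y2 - y0)) - y1)

    while len(pts) > keep:
        k = min(range(1, len(pts) - 1), key=removal_error)
        del pts[k]
    return pts
```

In the bivariate algorithm the same loop runs over a Delaunay triangulation: removing a point retriangulates its cell, and the error is measured between the coarser linear spline and the original function values.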

Proceedings ArticleDOI
21 Nov 2005
TL;DR: A Delaunay-based surface triangulation algorithm generating quality surface meshes for the molecular skin model by expanding the restricted union of balls along the surface and generating an ε-sampling of the skin surface incrementally.
Abstract: Quality surface meshes for molecular models are desirable in the studies of protein shapes and functionalities. However, there is still no robust software capable of generating such meshes with good quality. In this paper, we present a Delaunay-based surface triangulation algorithm generating quality surface meshes for the molecular skin model. We expand the restricted union of balls along the surface and generate an ε-sampling of the skin surface incrementally. At the same time, a quality surface mesh is extracted from the Delaunay triangulation of the sample points. The algorithm supports robust and efficient implementation and guarantees the mesh quality and topology as well. Our results facilitate molecular visualization and are a contribution towards generating quality volumetric tetrahedral meshes for macromolecules.

Journal ArticleDOI
TL;DR: The features of the natural neighbor (Sibson) interpolant are used within the context of a constrained Voronoi diagram, dual to the constrained Delaunay triangulation, for treating moving interface (Stefan) problems.

Journal ArticleDOI
TL;DR: A parallel algorithm for regular triangulations that allows vertex insertion, deletion, movement, and weight changes for fully dynamic and kinetic particle simulations is presented.

Journal ArticleDOI
TL;DR: This paper presents for the first time an effective way to create in parallel guaranteed quality meshes with billions of elements in a few hundreds of seconds, and at the same time demonstrates that these meshes can be generated in an efficient and scalable way.
Abstract: Creating in parallel guaranteed quality large unstructured meshes is a challenging problem. Parallel mesh generation procedures decompose the original mesh generation problem into smaller subproblems that can be solved in parallel. The subproblems can be treated as either completely or partially coupled, or they can be treated as completely decoupled. In this paper we present a parallel guaranteed quality Delaunay method for 2-dimensional domains which is based on the complete decoupling of the subproblems. As a result the method eliminates the communication and the synchronization during the meshing of the subproblems. Moreover, it achieves 100% code reuse of existing, fine-tuned, and well-tested sequential mesh generators. The approach we describe in this paper presents for the first time an effective way to create in parallel guaranteed quality meshes with billions of elements in a few hundreds of seconds, and at the same time demonstrates that these meshes can be generated in an efficient and scalable way. Our performance data indicate superlinear speedups.

Proceedings ArticleDOI
31 May 2005
TL;DR: This work presents a novel computational geometry based placement migration method, and a new stability metric to more accurately measure the "similarity" between two placements.
Abstract: Placement migration is a critical step to address a variety of post-placement design closure issues, such as timing, routing congestion, signal integrity, and heat distribution. To fix a design problem, one would like to perturb the design as little as possible while preserving the integrity of the original placement. This work presents a novel computational-geometry-based placement migration method, and a new stability metric to more accurately measure the "similarity" between two placements. It has two stages: a bin-based spreading at coarse scale and a Delaunay-triangulation-based spreading at finer grain. It has a clear advantage over conventional legalization algorithms in that the neighborhood characteristics of the original placement are preserved, so the placement migration is much more stable. Applying this technique to placement legalization demonstrates significant improvements in wire length and stability compared to other popular legalization algorithms.