
Showing papers on "Constrained Delaunay triangulation published in 2008"


Journal ArticleDOI
TL;DR: In this article, it was shown that the decision version of this problem is NP-hard, using a reduction from PLANAR 1-IN-3-SAT, and the correct working of the gadgets was established with computer assistance, using dynamic programming on polygonal faces, as well as the β-skeleton heuristic to certify that certain edges belong to the minimum-weight triangulation.
Abstract: A triangulation of a planar point set S is a maximal plane straight-line graph with vertex set S. In the minimum-weight triangulation (MWT) problem, we are looking for a triangulation of a given point set that minimizes the sum of the edge lengths. We prove that the decision version of this problem is NP-hard, using a reduction from PLANAR 1-IN-3-SAT. The correct working of the gadgets is established with computer assistance, using dynamic programming on polygonal faces, as well as the β-skeleton heuristic to certify that certain edges belong to the minimum-weight triangulation.

180 citations
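The β-skeleton certification step mentioned in the abstract can be illustrated with a small lune-emptiness test. The sketch below is illustrative only, not the authors' computer-assisted implementation, and assumes the standard lune-based β-skeleton definition for β ≥ 1: an edge survives iff no third point lies strictly inside the intersection of two disks of radius β|pq|/2.

```python
import math

def in_beta_skeleton(p, q, points, beta=1.2):
    """Lune-based beta-skeleton test (beta >= 1): edge (p, q) is kept
    iff no other point lies strictly inside the lune, i.e. the
    intersection of the two disks of radius beta*|pq|/2 centred at
    p + (beta/2)(q - p) and q + (beta/2)(p - q)."""
    d = math.dist(p, q)
    r = beta * d / 2.0
    c1 = (p[0] + beta / 2 * (q[0] - p[0]), p[1] + beta / 2 * (q[1] - p[1]))
    c2 = (q[0] + beta / 2 * (p[0] - q[0]), q[1] + beta / 2 * (p[1] - q[1]))
    for s in points:
        if s == p or s == q:
            continue
        if math.dist(s, c1) < r and math.dist(s, c2) < r:
            return False  # lune is not empty: edge rejected
    return True
```

For β = 1 this reduces to the Gabriel-graph test (empty disk with diameter pq).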


Journal ArticleDOI
TL;DR: This work develops a scale-invariant representation of images from the bottom up, using a piecewise linear approximation of contours and constrained Delaunay triangulation to complete gaps and model curvilinear grouping on top of this graphical/geometric structure.
Abstract: Using a large set of human segmented natural images, we study the statistics of region boundaries. We observe several power law distributions which likely arise from both multi-scale structure within individual objects and from arbitrary viewing distance. Accordingly, we develop a scale-invariant representation of images from the bottom up, using a piecewise linear approximation of contours and constrained Delaunay triangulation to complete gaps. We model curvilinear grouping on top of this graphical/geometric structure using a conditional random field to capture the statistics of continuity and different junction types. Quantitative evaluations on several large datasets show that our contour grouping algorithm consistently dominates and significantly improves on local edge detection.

98 citations


Book ChapterDOI
01 Jan 2008
TL;DR: A Delaunay refinement algorithm has been proposed that can mesh domains as general as piecewise smooth complexes and has a provable guarantee about topology, but certain steps are too expensive to make it practical.
Abstract: Recently a Delaunay refinement algorithm has been proposed that can mesh domains as general as piecewise smooth complexes [7]. This class includes polyhedra, smooth and piecewise smooth surfaces, volumes enclosed by them, and above all non-manifolds. In contrast to previous approaches, the algorithm does not impose any restriction on the input angles. Although this algorithm has a provable guarantee about topology, certain steps are too expensive to make it practical.

86 citations


Journal ArticleDOI
TL;DR: A novel reconstruction algorithm that, given an input point set sampled from an object S, builds a one-parameter family of complexes approximating S at different scales; replacing the restricted Delaunay triangulation by the witness complex makes the algorithm applicable in any metric space.
Abstract: We present a novel reconstruction algorithm that, given an input point set sampled from an object S, builds a one-parameter family of complexes that approximate S at different scales. At a high level, our method is very similar in spirit to Chew’s surface meshing algorithm, with one notable difference though: the restricted Delaunay triangulation is replaced by the witness complex, which makes our algorithm applicable in any metric space. To prove its correctness on curves and surfaces, we highlight the relationship between the witness complex and the restricted Delaunay triangulation in 2d and in 3d. Specifically, we prove that both complexes are equal in 2d and closely related in 3d, under some mild sampling assumptions.

64 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed clothing segmentation method is able to extract different clothing from static images with variations in backgrounds and lighting conditions.

55 citations


Journal ArticleDOI
TL;DR: This paper introduces a new algorithm for constrained Delaunay triangulation, built upon sets of points and constraining edges, with various applications in geographical information systems (GIS), for example the triangulation of iso-lines or of polygons in land cadastre.
Abstract: This paper introduces a new algorithm for constrained Delaunay triangulation, which is built upon sets of points and constraining edges. It has various applications in geographical information systems (GIS), for example, iso-line triangulation or the triangulation of polygons in land cadastre. The presented algorithm uses a sweep-line paradigm combined with Lawson's legalisation. An advancing front moves by following the sweep-line. It separates the triangulated and non-triangulated regions of interest. Our algorithm simultaneously triangulates points and constraining edges and thus avoids the time-consuming location of those triangles containing constraining edges, as used by other approaches. The implementation of the algorithm is also considerably simplified by introducing two additional artificial points. Experiments show that the presented algorithm is among the fastest constrained Delaunay triangulation algorithms available at the moment.

46 citations
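Lawson's legalisation, which the sweep-line algorithm above combines with the advancing front, is driven by the incircle predicate: an edge shared by two triangles is flipped when the opposite vertex lies inside the circumcircle, except that constrained edges are never flipped. A minimal floating-point sketch (the paper's implementation uses its own data structures; robust versions use exact predicates):

```python
def incircle(a, b, c, d):
    """Sign of the incircle determinant: > 0 iff d lies inside the
    circumcircle of triangle (a, b, c), assuming (a, b, c) is
    counter-clockwise."""
    m = [[a[0] - d[0], a[1] - d[1], (a[0] - d[0])**2 + (a[1] - d[1])**2],
         [b[0] - d[0], b[1] - d[1], (b[0] - d[0])**2 + (b[1] - d[1])**2],
         [c[0] - d[0], c[1] - d[1], (c[0] - d[0])**2 + (c[1] - d[1])**2]]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def needs_flip(tri_abc, d, constrained):
    # A constrained edge is never flipped, regardless of the incircle test.
    return not constrained and incircle(*tri_abc, d) > 0
```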


Patent
02 Apr 2008
TL;DR: In this paper, a method for generating constrained Voronoi grids in a plane with internal features and boundaries is disclosed, which generally includes approximation of internal features and boundaries with polylines based on plane geometry.
Abstract: A method for generating constrained Voronoi grids in a plane with internal features and boundaries is disclosed. The disclosed method generally includes approximation of internal features and boundaries with polylines based on plane geometry. Protected polygons or points are generated around the polylines, and a Delaunay triangulation of the protected points or protected polygon vertices is constructed. A Delaunay triangulation that honors the protected polygons or points is generated in the rest of the gridding domain. The constrained Voronoi grid is then generated from the Delaunay triangulation, which resolves all of the approximated features and boundaries with the edges of Voronoi cells. Constrained Voronoi grids may be generated with adaptive cell sizes based on a specified density criterion.

44 citations
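The Voronoi-from-Delaunay duality this method relies on is that each Voronoi vertex is the circumcenter of a Delaunay triangle. A minimal illustrative sketch of that computation (not the patented procedure):

```python
def circumcenter(a, b, c):
    """Circumcenter of triangle abc; the Voronoi vertices dual to a
    Delaunay triangulation are exactly the circumcenters of its
    triangles."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
        + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
        + (cx * cx + cy * cy) * (bx - ax)) / d
    return ux, uy
```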


Journal ArticleDOI
Vin de Silva1
TL;DR: In this paper, the authors introduce the weak Delaunay triangulation of a finite point set in a metric space, which contains as a subcomplex the traditional (strong) Delaunay triangulation.
Abstract: We consider a new construction, the weak Delaunay triangulation of a finite point set in a metric space, which contains as a subcomplex the traditional (strong) Delaunay triangulation. The two simplicial complexes turn out to be equal for point sets in Euclidean space, as well as in the (hemi)sphere, hyperbolic space, and certain other geometries. There are weighted and approximate versions of the weak and strong complexes in all these geometries, and we prove equality theorems in those cases also. On the other hand, for discrete metric spaces the weak and strong complexes are decidedly different. We give a short empirical demonstration that weak Delaunay complexes can lead to dramatically clean results in the problem of estimating the homology groups of a manifold represented by a finite point sample.

42 citations


Journal ArticleDOI
TL;DR: An algorithm and software are presented for parallel constrained Delaunay mesh generation in two dimensions, based on the decomposition of the original mesh generation problem into N smaller subproblems which are meshed in parallel.
Abstract: Delaunay refinement is a widely used method for the construction of guaranteed quality triangular and tetrahedral meshes. We present an algorithm and software for parallel constrained Delaunay mesh generation in two dimensions. Our approach is based on the decomposition of the original mesh generation problem into N smaller subproblems which are meshed in parallel. The parallel algorithm is asynchronous with small messages which can be aggregated, and exhibits low communication costs. On a heterogeneous cluster of more than 100 processors our implementation can generate over one billion triangles in less than 3 minutes, while the single-node performance is comparable to that of the fastest sequential guaranteed quality Delaunay meshing library known to us (Triangle).

42 citations


Journal ArticleDOI
TL;DR: In this article, the existence and uniqueness theorem for planar weighted Delaunay triangulations with nonintersecting site-circles with prescribed combinatorial type and circle intersection angles was proved.
Abstract: We use a variational principle to prove an existence and uniqueness theorem for planar weighted Delaunay triangulations (with non-intersecting site-circles) with prescribed combinatorial type and circle intersection angles. Such weighted Delaunay triangulations may be interpreted as images of hyperbolic polyhedra with one vertex on, and the remaining vertices beyond, the infinite boundary of hyperbolic space. Thus, the main theorem states necessary and sufficient conditions for the existence and uniqueness of such polyhedra with prescribed combinatorial type and dihedral angles. More generally, we consider weighted Delaunay triangulations in piecewise flat surfaces, allowing cone singularities with prescribed cone angles in the vertices. The material presented here extends work by Rivin on Delaunay triangulations and ideal polyhedra.

41 citations


Book ChapterDOI
01 Jan 2008
TL;DR: This work addresses the problem of generating 2D quality triangle meshes from a set of constraints provided as a planar straight line graph; the algorithm inserts fewer Steiner points than Delaunay refinement alone and improves the mesh quality.
Abstract: We address the problem of generating 2D quality triangle meshes from a set of constraints provided as a planar straight line graph. The algorithm first computes a constrained Delaunay triangulation of the input set of constraints, then interleaves Delaunay refinement and optimization. The refinement stage inserts a subset of the Voronoi vertices and midpoints of constrained edges as Steiner points. The optimization stage optimizes the shape of the triangles through the Lloyd iteration applied to Steiner points, both in 1D along constrained edges and in 2D after computing the bounded Voronoi diagram. Our experiments show that the proposed algorithm inserts fewer Steiner points than Delaunay refinement alone, and improves the mesh quality.
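The 1D Lloyd stage along a constrained edge can be sketched as a relaxation of Steiner points toward the centroids (midpoints) of their Voronoi intervals. This is an illustrative reduction only, with the constrained-edge endpoints held fixed; the paper's algorithm also runs Lloyd in 2D over the bounded Voronoi diagram:

```python
def lloyd_1d(xs, a=0.0, b=1.0, iters=100):
    """1D Lloyd relaxation of interior Steiner points on segment [a, b].
    Each point moves to the centroid (midpoint) of its 1D Voronoi
    interval; the endpoints a and b stay fixed. Converges to uniform
    spacing along the edge."""
    xs = sorted(xs)
    for _ in range(iters):
        pts = [a] + xs + [b]
        new = []
        for i in range(1, len(pts) - 1):
            left = 0.5 * (pts[i - 1] + pts[i])    # left cell boundary
            right = 0.5 * (pts[i] + pts[i + 1])   # right cell boundary
            new.append(0.5 * (left + right))      # centroid of the cell
        xs = new
    return xs
```

Starting from clustered points, the iteration spreads them evenly, which is exactly the shape-improvement effect exploited by the optimization stage.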

Journal ArticleDOI
TL;DR: A geometric hardcore condition on small and large cells is introduced, yielding the existence of regular infinite Gibbs Delaunay tessellations on ℝ2.
Abstract: In this paper, we prove the existence of infinite Gibbs Delaunay tessellations on ℝ2. The interaction depends on the local geometry of the tessellation. We introduce a geometric hardcore condition on small and large cells, consequently we can construct more regular infinite random Delaunay tessellations.

Journal ArticleDOI
TL;DR: An isosurface meshing algorithm, DelIso, based on the Delaunay refinement paradigm, which has been successfully applied to mesh a variety of domains with guarantees for topology, geometry, mesh gradedness, and triangle shape is presented.
Abstract: We present an isosurface meshing algorithm, DelIso, based on the Delaunay refinement paradigm. This paradigm has been successfully applied to mesh a variety of domains with guarantees for topology, geometry, mesh gradedness, and triangle shape. A restricted Delaunay triangulation, dual of the intersection between the surface and the three-dimensional Voronoi diagram, is often the main ingredient in Delaunay refinement. Computing and storing three-dimensional Voronoi/Delaunay diagrams become bottlenecks for Delaunay refinement techniques since isosurface computations generally have large input datasets and output meshes. A highlight of our algorithm is that we find a simple way to recover the restricted Delaunay triangulation of the surface without computing the full 3D structure. We employ techniques for efficient ray tracing of isosurfaces to generate surface sample points, and demonstrate the effectiveness of our implementation using a variety of volume datasets.

01 Jan 2008
TL;DR: This paper focuses on the application of triangulation and rubber-sheeting techniques to the problem of merging two digitized map files, and proposes an algorithm for the Delaunay triangulation together with a specific rubber-sheeting technique.
Abstract: This paper focuses on the application of triangulation and rubber-sheeting techniques to the problem of merging two digitized map files. The Census Bureau is currently developing a map merging procedure called conflation. Reproducibility, quality control, and a desire for mathematical consistency in conflation lead to a need for well-defined procedures. The Delaunay triangulation is well-defined and in some sense the 'best' triangulation on a finite set of points. It leads naturally into an efficient rubber-sheeting algorithm. The discussion starts with triangulations and rubber-sheeting in general and well-defined triangulations. This leads to the Delaunay triangulation, an algorithm for that triangulation, and a specific rubber-sheeting technique. Finally, some problems that require further research are mentioned in an appendix.

Proceedings ArticleDOI
30 Dec 2008
TL;DR: A fast constrained Delaunay triangulation algorithm based on constrained-edge priority is presented to replace complicated segmentation algorithms in the multi-feature extraction stage of a framework for real-time detection of abnormal vehicle events in highway high-definition surveillance video.
Abstract: This paper introduces a framework for real-time abnormal vehicle event detection with multiple features over highway high-definition surveillance video. The framework is composed of two parts: multi-feature extraction and abnormity detection. In multi-feature extraction, a fast constrained Delaunay triangulation (CDT) algorithm based on constrained-edge priority is presented to replace complicated segmentation algorithms. After manual calibration to extract the actual driveways from the surveillance video sequence, vehicle regions are localized and tracked to extract static and motional features in the monitored area, and multi-feature vectors are created for each vehicle. In abnormity detection, a method of adaptive detection modeling of vehicle events (ADMVE) is introduced. A semi-supervised Mixture of Gaussian Hidden Markov Model is trained with the multi-feature vectors for each video segment. The normal model is trained in supervised mode with manual labeling, and becomes more accurate via adaptation iteration. The abnormal models are trained through adapted Bayesian learning in unsupervised mode. Finally, experiments using real video sequences are performed to verify the proposed method.

Proceedings ArticleDOI
09 Jun 2008
TL;DR: The running time of the algorithm matches the information-theoretic lower bound for the given input distribution, implying that if the input distribution has low entropy, then the algorithm beats the standard Ω(n log n) bound for computing Delaunay triangulations.
Abstract: We study the problem of two-dimensional Delaunay triangulation in the self-improving algorithms model [1]. We assume that the n points of the input each come from an independent, unknown, and arbitrary distribution. The first phase of our algorithm builds data structures that store relevant information about the input distribution. The second phase uses these data structures to efficiently compute the Delaunay triangulation of the input. The running time of our algorithm matches the information-theoretic lower bound for the given input distribution, implying that if the input distribution has low entropy, then our algorithm beats the standard Ω(n log n) bound for computing Delaunay triangulations. Our algorithm and analysis use a variety of techniques: ε-nets for disks, entropy-optimal point-location data structures, linear-time splitting of Delaunay triangulations, and information-theoretic arguments.

Journal ArticleDOI
TL;DR: A set of procedures for the shape reconstruction and mesh generation of unstructured high-order spatial discretization of patient-specific geometries from a series of medical images and for the simulation of flows in these meshes using a high-order hp-spectral solver are described.
Abstract: We describe a set of procedures for the shape reconstruction and mesh generation of unstructured high-order spatial discretization of patient-specific geometries from a series of medical images and for the simulation of flows in these meshes using a high-order hp-spectral solver. The reconstruction of the shape of the boundary is based on the interpolation of an implicit function through a set of points obtained from the segmentation of the images. This approach is favoured for its ability of smoothly interpolating between sections of different topology. The boundary of the object is initially represented as an iso-surface of an implicit function defined in terms of radial basis functions. This surface is approximated by a triangulation extracted by the method of marching cubes. The triangulation is then suitably smoothed and refined to improve its quality and permit its approximation by a quilt of bi-variate spline surface patches. Such representation is often the standard input format required for state-of-the-art mesh generators. The generation of the surface patches is based on a partition of the triangulation into Voronoi regions and dual Delaunay triangulations with an even number of triangles. The quality of the triangulation is optimized by imposing that the distortion associated with the energy of deformation by harmonic maps is minimized. Patches are obtained by merging adjacent triangles and this representation is then used to generate a mesh of linear elements using standard generation techniques. Finally, a mesh of high-order elements is generated in a bottom-up fashion by creating the additional points required for the high-order interpolation and projecting them on the edges and surfaces of the quilt of patches. The methodology is illustrated by generating meshes for a by-pass graft geometry and calculating high-order CFD solutions in these meshes.

01 Jan 2008
TL;DR: This work proposes a new C++ implementation of the well-known incremental algorithm for the construction of Delaunay triangulations in any dimension that outperforms the best currently available codes for convex hulls and Delaunay triangulations and is fully robust.
Abstract: We propose a new C++ implementation of the well-known incremental algorithm for the construction of Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is fully robust. Extensive comparisons have shown that our implementation outperforms the best currently available codes for convex hulls and Delaunay triangulations, and that it can be used for quite big input sets in spaces of dimensions up to 6.

Proceedings ArticleDOI
09 Jun 2008
TL;DR: It is shown how to preprocess a set of n disjoint unit disks in the plane in O(n log n) time so that if one point per disk is specified with precise coordinates, the Delaunay triangulation can be computed in linear time.
Abstract: An assumption of nearly all algorithms in computational geometry is that the input points are given precisely, so it is interesting to ask what is the value of imprecise information about points. We show how to preprocess a set of n disjoint unit disks in the plane in O(n log n) time so that if one point per disk is specified with precise coordinates, the Delaunay triangulation can be computed in linear time. From the Delaunay, one can obtain the Gabriel graph and a Euclidean minimum spanning tree; it is interesting to note the roles that these two structures play in our algorithm to quickly compute the Delaunay.

Journal ArticleDOI
TL;DR: An unstructured hybrid tessellation of a scattered point set that minimally covers the proximal space around each point and proves superior to the classical Delaunay tessellation in a finite element context.
Abstract: In this paper we propose an unstructured hybrid tessellation of a scattered point set that minimally covers the proximal space around each point. The mesh is automatically obtained in a bounded period of time by transforming an initial Delaunay tessellation. Novel types of polygonal interpolants are used for interpolation applications, and the geometric qualities of the elements make them also useful for discretization schemes. The approach proves to be superior to the classical Delaunay one in a finite element context.

01 Jan 2008
TL;DR: Efficient algorithms for approximating a height field by a piecewise-linear triangulated surface, using both Delaunay and data-dependent triangulation criteria, are presented, together with empirical comparisons of several variants of the algorithms on large digital elevation models.
Abstract: We present efficient algorithms for approximating a height field using a piecewise-linear triangulated surface. The algorithms attempt to minimize both the error and the number of triangles in the approximation. The methods we examine are variants of the greedy insertion algorithm. This method begins with a simple triangulation of the domain as an initial approximation. It then iteratively finds the input point with highest error in the current approximation and inserts it as a vertex in the triangulation. We describe optimized algorithms using both Delaunay and data-dependent triangulation criteria. The algorithms have typical costs of O((m + n) log m), where n is the number of points in the input height field and m is the number of vertices in the final approximation. We also present empirical comparisons of several variants of the algorithms on large digital elevation models. We have made a C++ implementation of our algorithms publicly available.
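The greedy insertion loop can be illustrated in one dimension, where the "triangulation" degenerates to a piecewise-linear function over a sorted sample: repeatedly insert the sample with the largest vertical error until every error is within tolerance. This 1D analogue is illustrative only; the paper's algorithms operate on 2D triangulations and use heaps to avoid rescanning.

```python
import bisect

def greedy_insert_1d(xs, ys, max_err):
    """1D analogue of greedy insertion: approximate samples (xs, ys)
    by a piecewise-linear function over a growing set of knot indices,
    repeatedly inserting the sample with the largest vertical error
    until all errors are <= max_err. Returns the sorted knot indices."""
    knots = [0, len(xs) - 1]          # endpoints form the initial approximation

    def interp(x, i, j):              # linear interpolation between knots i, j
        t = (x - xs[i]) / (xs[j] - xs[i])
        return ys[i] + t * (ys[j] - ys[i])

    while True:
        worst, worst_err = None, max_err
        for i, j in zip(knots, knots[1:]):
            for k in range(i + 1, j):
                err = abs(ys[k] - interp(xs[k], i, j))
                if err > worst_err:
                    worst, worst_err = k, err
        if worst is None:             # every sample within tolerance
            return knots
        bisect.insort(knots, worst)   # insert the worst sample as a new knot
```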

Posted Content
TL;DR: It is shown that, under mild sampling conditions, the restricted Delaunay triangulation provides good topological approximations of 1- and 2-manifolds, but this is not the case for higher-dimensional manifolds, even under stronger sampling conditions.
Abstract: It is a well-known fact that, under mild sampling conditions, the restricted Delaunay triangulation provides good topological approximations of 1- and 2-manifolds. We show that this is not the case for higher-dimensional manifolds, even under stronger sampling conditions. Specifically, it is not true that, for any compact closed submanifold M of R^n, and any sufficiently dense uniform sampling L of M, the Delaunay triangulation of L restricted to M is homeomorphic to M, or even homotopy equivalent to it. Besides, it is not true either that, for any sufficiently dense set W of witnesses, the witness complex of L relative to W contains or is contained in the restricted Delaunay triangulation of L.

Journal ArticleDOI
TL;DR: A newly developed polynomial preserving gradient recovery technique is further studied and it is found that the recovered gradient improves the leading term of the error by a factor ε.
Abstract: A newly developed polynomial preserving gradient recovery technique is further studied. The results are twofold. First, error bounds for the recovered gradient are established on the Delaunay type mesh when the major part of the triangulation is made of near parallelogram triangle pairs with ε-perturbation. It is found that the recovered gradient improves the leading term of the error by a factor ε. Secondly, the analysis is performed for a highly anisotropic mesh where the aspect ratio of element sides is unbounded. When the mesh is adapted to the solution that has significant changes in one direction but very little, if any, in another direction, the recovered gradient can be superconvergent. © 2007 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq, 2008

Proceedings ArticleDOI
25 Jun 2008
TL;DR: A path planning algorithm for determining an optimal path with respect to the costs of a dual graph on the Constrained Delaunay Triangulation of an environment; the dual graph is constructed so as to avoid the non-optimal paths that the varying geometric sizes of the triangles would otherwise cause.
Abstract: This paper proposes a path planning algorithm for determining an optimal path with respect to the costs of a dual graph on the Constrained Delaunay Triangulation (CDT) of an environment. The advantages of using triangles for environment representation are: less data storage required, the availability of mature triangulation methods, and consistency with a potential motion planning framework. First we represent the polygonal environment as a planar straight line graph (PSLG), described as a collection of vertices and segments, and then we adopt the CDT to partition the environment into triangles. On this CDT of the environment, a dual graph is constructed following the target-attractive principle in order to avoid the non-optimal paths caused by the different geometric sizes of the triangles. Correspondingly, a path planning algorithm based on A* search finds an optimal path on the dual graph, which is built in real time. In addition, a completeness and optimality analysis of the algorithm is given. The simulation results demonstrate the effectiveness and optimality of the algorithm.
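The A* search over a triangulation's dual graph can be sketched generically. In the sketch below, `adj` (triangle adjacency) and `pos` (e.g. triangle centroids) are assumed inputs rather than anything specified in the paper; the straight-line heuristic is admissible for Euclidean edge costs, which is what makes the returned path optimal.

```python
import heapq
import math

def a_star(adj, pos, start, goal):
    """A* over a dual graph: `adj` maps node -> neighbour list, `pos`
    maps node -> coordinates used both for edge costs and for the
    straight-line (admissible) heuristic."""
    def h(n):
        return math.dist(pos[n], pos[goal])

    g = {start: 0.0}
    parent = {start: None}
    frontier = [(h(start), start)]
    while frontier:
        _, u = heapq.heappop(frontier)
        if u == goal:                      # reconstruct path back to start
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            cand = g[u] + math.dist(pos[u], pos[v])
            if cand < g.get(v, math.inf):  # found a cheaper route to v
                g[v] = cand
                parent[v] = u
                heapq.heappush(frontier, (cand + h(v), v))
    return None                            # goal unreachable
```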

Book ChapterDOI
01 Jan 2008
TL;DR: A new mesh refinement algorithm for computing quality guaranteed Delaunay triangulations in three dimensions relies on new ideas for computing the goodness of the mesh, and a sampling strategy that employs numerically stable Steiner points.
Abstract: We propose a new mesh refinement algorithm for computing quality guaranteed Delaunay triangulations in three dimensions. The refinement relies on new ideas for computing the goodness of the mesh, and a sampling strategy that employs numerically stable Steiner points. We show through experiments that the new algorithm results in sparse well-spaced point sets which in turn leads to tetrahedral meshes with fewer elements than the traditional refinement methods.

Journal ArticleDOI
TL;DR: A new spanner called the constrained Delaunay triangulation (CDT) is proposed which considers both geometric properties and network requirements; simulations show that the minimum number of hops from source to destination is smaller than with other spanners.
Abstract: Geometric spanners can be used for efficient routing in wireless ad hoc networks. Computation of existing spanners for ad hoc networks primarily focused on geometric properties without considering network requirements. In this paper, we propose a new spanner called constrained Delaunay triangulation (CDT) which considers both geometric properties and network requirements. The CDT is formed by introducing a small set of constraint edges into the local Delaunay triangulation (LDel) to reduce the number of hops between nodes in the network graph. We have simulated the CDT using a network simulator (ns-2.28) and compared it with the Gabriel graph (GG), relative neighborhood graph (RNG), local Delaunay triangulation (LDel), and planarized local Delaunay triangulation (PLDel). The simulation results show that the minimum number of hops from source to destination is smaller than with other spanners. We also observed a decrease in delay and jitter, and an improvement in throughput.

Journal ArticleDOI
TL;DR: A Fast Constrained Delaunay Triangulation (FCDT) algorithm is proposed to replace complicated segmentation algorithms for multi-feature extraction and becomes more accurate via iterated adaptation.
Abstract: The detection of abnormal vehicle events is a research hotspot in the analysis of highway surveillance video. Because of the complex factors, which include different conditions of weather, illumination, noise and so on, vehicle’s feature extraction and abnormity detection become difficult. This paper proposes a Fast Constrained Delaunay Triangulation (FCDT) algorithm to replace complicated segmentation algorithms for multi-feature extraction. Based on the video frames segmented by FCDT, an improved algorithm is presented to estimate background self-adaptively. After the estimation, a multi-feature eigenvector is generated by Principal Component Analysis (PCA) in accordance with the static and motional features extracted through locating and tracking each vehicle. For abnormity detection, adaptive detection modeling of vehicle events (ADMVE) is presented, for which a semi-supervised Mixture of Gaussian Hidden Markov Model (MGHMM) is trained with the multi-feature eigenvectors from each video segment. The normal model is developed by supervised mode with manual labeling, and becomes more accurate via iterated adaptation. The abnormal models are trained through the adapted Bayesian learning with unsupervised mode. The paper also presents experiments using real video sequence to verify the proposed method.

Proceedings Article
01 Jan 2008
TL;DR: This work develops a query structure that maintains the mesh without paying the full cost of retriangulating, and an example meshing algorithm that produces a provably small mesh in time as fast as sorting the input plus writing the output.
Abstract: We are interested in the following mesh refinement problem: given an input set of points P in R, we would like to produce a good-quality triangulation by adding new points to P. Algorithms for mesh refinement are typically incremental: they compute the Delaunay triangulation of the input, and insert points one by one. However, retriangulating after each insertion can take linear time. In this work we develop a query structure that maintains the mesh without paying the full cost of retriangulating. Assuming that the meshing algorithm processes bad-quality elements in increasing order of their size, our structure allows inserting new points and computing a restriction of the Voronoi cell of a point, both in constant time. We develop an example of such a meshing algorithm, and show that it produces a provably small mesh in time as fast as sorting the input plus writing the output.

Proceedings ArticleDOI
02 Jun 2008
TL;DR: The major result is the probability of triangulation for any point given the number of nodes lying up to a specific distance from it, employing a graph representation where an edge exists between any two nodes closer than 2 units to one another.
Abstract: This paper analyses the probability that randomly deployed sensor nodes triangulate any point within the target area. Its major result is the probability of triangulation for any point given the number of nodes lying up to a specific distance (2 units) from it, employing a graph representation where an edge exists between any two nodes closer than 2 units to one another. The expected number of un-triangulated coverage holes, i.e. uncovered areas which cannot be triangulated by adjacent nodes, in a finite target area is derived. Simulation results corroborate the probabilistic analysis with low error, for any node density. These results will find applications in triangulation-based or trilateration-based pointing analysis, or any computational geometry application within the context of triangulation.
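The elementary predicate behind "adjacent nodes triangulate a point" is a point-in-triangle test. A standard sign-based sketch (illustrative, not taken from the paper):

```python
def point_in_triangle(p, a, b, c):
    """Barycentric sign test: p lies in the (closed) triangle abc iff
    the three cross products of consecutive edges with p all have the
    same sign (or vanish on an edge)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # mixed signs mean p is outside
```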

Book ChapterDOI
01 Dec 2008
TL;DR: A method for stabilizing the computation of stereo correspondences by explicitly examining the planarity hypothesis in 3D space; the proposed method works well on real indoor, outdoor, and medical image data and is more efficient than the traditional DP method.
Abstract: A method for stabilizing the computation of stereo correspondences is presented in this paper. Delaunay triangulation is employed to partition the input images into small, localized regions. Instead of simply assuming that the surface patches viewed from these small triangles are locally planar, we explicitly examine the planarity hypothesis in the 3D space. To perform the planarity test robustly, adjacent triangles are first merged into larger polygonal patches and then the planarity assumption is verified. Once piece-wise planar patches are identified, point correspondences within these patches are readily computed through planar homographies. These point correspondences established by planar homographies serve as the ground control points (GCPs) in the final dynamic programming (DP)-based correspondence matching process. Our experimental results show that the proposed method works well on real indoor, outdoor, and medical image data and is also more efficient than the traditional DP method.