
Showing papers on "Polygon published in 2012"


Book
23 Nov 2012
TL;DR: In this article, the authors introduce the matrix integrals method and present applications of the representation theory of finite groups, including the enumeration of polygon gluings, the Goulden-Jackson formula, and "mirror symmetry" in dimension one.
Abstract: 0 Introduction: What is This Book About.- 1 Constellations, Coverings, and Maps.- 2 Dessins d'Enfants.- 3 Introduction to the Matrix Integrals Method.- 4 Geometry of Moduli Spaces of Complex Curves.- 5 Meromorphic Functions and Embedded Graphs.- 6 Algebraic Structures Associated with Embedded Graphs.- A.1 Representation Theory of Finite Groups.- A.1.1 Irreducible Representations and Characters.- A.1.2 Examples.- A.1.3 Frobenius's Formula.- A.2 Applications.- A.2.2 Examples.- A.2.3 First Application: Enumeration of Polygon Gluings.- A.2.4 Second Application: the Goulden-Jackson Formula.- A.2.5 Third Application: "Mirror Symmetry" in Dimension One.- References.

800 citations


Journal ArticleDOI
TL;DR: In this paper, a review is presented of the symbol map, a mathematical tool introduced by Goncharov and used by him and collaborators in the context of $ \mathcal{N} $ = 4 SYM for simplifying expressions among multiple polylogarithms, and its main properties are recalled.
Abstract: We present a review of the symbol map, a mathematical tool introduced by Goncharov and used by him and collaborators in the context of $ \mathcal{N} $ = 4 SYM for simplifying expressions among multiple polylogarithms, and we recall its main properties. A recipe is given for how to obtain the symbol of a multiple polylogarithm in terms of the combinatorial properties of an associated rooted decorated polygon, and it is indicated how that recipe relates to a similar explicit formula for it previously given by Goncharov. We also outline a systematic approach to constructing a function corresponding to a given symbol, and illustrate it in the particular case of harmonic polylogarithms up to weight four. Furthermore, part of the ambiguity of this process is highlighted by exhibiting a family of non-trivial elements in the kernel of the symbol map for arbitrary weight.

298 citations


Journal ArticleDOI
TL;DR: An algorithm is presented which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers.
Abstract: We consider the problem of autonomous robotic laundry folding, and propose a solution to the perception and manipulation challenges inherent to the task. At the core of our approach is a quasi-static cloth model which allows us to neglect the complex dynamics of cloth under significant parts of the state space, allowing us to reason instead in terms of simple geometry. We present an algorithm which, given a 2D cloth polygon and a desired sequence of folds, outputs a motion plan for executing the corresponding manipulations, deemed g-folds, on a minimal number of robot grippers. We define parametrized fold sequences for four clothing categories: towels, pants, short-sleeved shirts, and long-sleeved shirts, each represented as polygons. We then devise a model-based optimization approach for visually inferring the class and pose of a spread-out or folded clothing article from a single image, such that the resulting polygon provides a parse suitable for these folding primitives. We test the manipulation and perception tasks individually, and combine them to implement an autonomous folding system on the Willow Garage PR2. This enables the PR2 to identify a clothing article spread out on a table, execute the computed folding sequence, and visually track its progress over successive folds.

272 citations


Journal ArticleDOI
TL;DR: A "continuously closed plate" (CCP) is introduced, such that, as each margin moves independently, the plate polygon remains closed geometrically as a function of time.

244 citations


Journal ArticleDOI
TL;DR: An automatic crack propagation modelling technique using polygon elements is presented in this article, in which a simple algorithm to generate a polygon mesh from a Delaunay triangulated mesh is implemented. The polygon element formulation is constructed from the scaled boundary finite element method (SBFEM), treating each polygon as an SBFEM subdomain, and is very efficient in modelling singular stress fields in the vicinity of cracks.
Abstract: SUMMARY: An automatic crack propagation modelling technique using polygon elements is presented. A simple algorithm to generate a polygon mesh from a Delaunay triangulated mesh is implemented. The polygon element formulation is constructed from the scaled boundary finite element method (SBFEM), treating each polygon as an SBFEM subdomain, and is very efficient in modelling singular stress fields in the vicinity of cracks. Stress intensity factors are computed directly from their definitions without any nodal enrichment functions. An automatic remeshing algorithm capable of handling any n-sided polygon is developed to accommodate crack propagation. The algorithm is simple yet flexible because remeshing involves minimal changes to the global mesh and is limited to only polygons on the crack paths. The efficiency of the polygon SBFEM in computing accurate stress intensity factors is first demonstrated for a problem with a stationary crack. Four crack propagation benchmarks are then modelled to validate the developed technique and demonstrate its salient features. The predicted crack paths show good agreement with experimental observations and numerical simulations reported in the literature. Copyright © 2012 John Wiley & Sons, Ltd

191 citations


Journal ArticleDOI
TL;DR: It is argued that other variables such as the cognitive complexity of the PPGIS mapping process and stronger claims of external validity favor the use of point features, but these advantages must be weighed against the significantly higher sampling effort required.
Abstract: The collection of spatial information through public participation geographic information systems (PPGIS) is most frequently implemented using either point or polygon spatial features, but the research trade-offs between the two methods are not well understood. In a quasi-experimental PPGIS design, we collected four attributes (aesthetic, recreation, economic, and biological values) as both point and polygon spatial features in the same PPGIS study. We then used Monte Carlo simulation methods to describe the relationship between the quantity of data collected and the degree of spatial convergence in the two methods for each of the four PPGIS attributes. The results demonstrate that the same PPGIS attributes identified by points and polygons will converge on a collective spatial 'truth' within the study area provided there are enough observations; however, the degree of spatial convergence varies by PPGIS attribute type and the quantity of data collected. The use of points for mapping PPGIS attributes and a...

137 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm for contact detection between polygonal or polyhedral (3-D) convex particles in the Discrete Element Method (DEM).

131 citations


Journal ArticleDOI
TL;DR: In this article, a new aspect of the existing study of reflexive polygons is discussed in the context of quiver gauge theories, which are 4d supersymmetric worldvolume theories of D3 branes with toric Calabi-Yau moduli spaces.
Abstract: Reflexive polygons have attracted great interest both in mathematics and in physics. This paper discusses a new aspect of the existing study in the context of quiver gauge theories. These theories are 4d supersymmetric worldvolume theories of D3 branes with toric Calabi-Yau moduli spaces that are conveniently described with brane tilings. We find all 30 theories corresponding to the 16 reflexive polygons, some of the theories being toric (Seiberg) dual to each other. The mesonic generators of the moduli spaces are identified through the Hilbert series. It is shown that the lattice of generators is the dual reflexive polygon of the toric diagram. Thus, the duality forms pairs of quiver gauge theories with the lattice of generators being the toric diagram of the dual and vice versa.

91 citations


Journal ArticleDOI
TL;DR: The optimal convergence estimate is proved for first-order interpolants used in finite element methods based on three major approaches for generalizing barycentric interpolation functions to convex planar polygonal domains.
Abstract: We prove the optimal convergence estimate for first-order interpolants used in finite element methods based on three major approaches for generalizing barycentric interpolation functions to convex planar polygonal domains. The Wachspress approach explicitly constructs rational functions, the Sibson approach uses Voronoi diagrams on the vertices of the polygon to define the functions, and the Harmonic approach defines the functions as the solution of a PDE. We show that given certain conditions on the geometry of the polygon, each of these constructions can obtain the optimal convergence estimate. In particular, we show that the well-known maximum interior angle condition required for interpolants over triangles is still required for Wachspress functions but not for Sibson functions.
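Of the three constructions, the Wachspress one is the most direct to write down. The sketch below is an illustration of the standard Wachspress rational-function formula on a convex polygon, not code from the paper, and the function names are mine. It exercises the two properties the convergence analysis rests on: partition of unity and exact reproduction of linear functions.

```python
# Illustrative Wachspress coordinates on a convex polygon (assumed CCW,
# query point strictly inside). Not code from the paper.

def tri_area(a, b, c):
    """Signed area of triangle abc (positive when a, b, c are CCW)."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def wachspress(verts, x):
    """Wachspress barycentric coordinates of x w.r.t. the convex polygon verts."""
    n = len(verts)
    w = []
    for i in range(n):
        vp, vi, vn = verts[i - 1], verts[i], verts[(i + 1) % n]
        # rational weight: "ear" area at v_i over the two triangles spanned with x
        w.append(tri_area(vp, vi, vn) /
                 (tri_area(x, vp, vi) * tri_area(x, vi, vn)))
    s = sum(w)
    return [wi / s for wi in w]
```

For an interior point of the unit square, the resulting coordinates sum to one and reproduce the point exactly when used as weights on the vertices.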

76 citations


Posted Content
TL;DR: In this paper, the authors studied the ensemble of uniformly random Gelfand-Tsetlin schemes with arbitrary fixed N-th row and obtained an explicit double contour integral expression for the determinantal correlation kernel.
Abstract: A Gelfand-Tsetlin scheme of depth N is a triangular array with m integers at level m, m=1,...,N, subject to certain interlacing constraints. We study the ensemble of uniformly random Gelfand-Tsetlin schemes with arbitrary fixed N-th row. We obtain an explicit double contour integral expression for the determinantal correlation kernel of this ensemble (and also of its q-deformation). This provides new tools for asymptotic analysis of uniformly random lozenge tilings of polygons on the triangular lattice; or, equivalently, of random stepped surfaces. We work with a class of polygons which allows arbitrarily large number of sides. We show that the local limit behavior of random tilings (as all dimensions of the polygon grow) is directed by ergodic translation invariant Gibbs measures. The slopes of these measures coincide with the ones of tangent planes to the corresponding limit shapes described by Kenyon and Okounkov in arXiv:math-ph/0507007. We also prove that at the edge of the limit shape, the asymptotic behavior of random tilings is given by the Airy process. In particular, our results cover the most investigated case of random boxed plane partitions (when the polygon is a hexagon).

74 citations


Journal ArticleDOI
TL;DR: In this paper, a simple proof of the known weighted analytic regularity in a polygon, relying on a new formulation of elliptic a priori estimates in smooth domains with analytic control of derivatives, is given.
Abstract: We prove weighted anisotropic analytic estimates for solutions of second-order elliptic boundary value problems in polyhedra. The weighted analytic classes which we use are the same as those introduced by Guo in 1993 in view of establishing exponential convergence for hp finite element methods in polyhedra. We first give a simple proof of the known weighted analytic regularity in a polygon, relying on a new formulation of elliptic a priori estimates in smooth domains with analytic control of derivatives. The technique is based on dyadic partitions near the corners. This technique can successfully be extended to polyhedra, providing isotropic analytic regularity. This is not optimal, because it does not take advantage of the full regularity along the edges. We combine it with a nested open set technique to obtain the desired three-dimensional anisotropic analytic regularity result. Our proofs are global and do not require the analysis of singular functions.

Journal ArticleDOI
TL;DR: An extended local search algorithm (ELS) for the irregular strip packing problem is presented, which adopts two neighborhoods, swapping two given polygons in a placement and placing one polygon into a new position.

Journal ArticleDOI
TL;DR: The dual operations of taking the interior hull and moving out the edges of a two-dimensional lattice polygon are reviewed and it is shown how the latter operation naturally gives rise to an algorithm for enumerating lattice polygons by their genus.
Abstract: We review previous work of (mainly) Koelman, Haase and Schicho, and Poonen and Rodriguez-Villegas on the dual operations of (i) taking the interior hull and (ii) moving out the edges of a two-dimensional lattice polygon. We show how the latter operation naturally gives rise to an algorithm for enumerating lattice polygons by their genus. We then report on an implementation of this algorithm, by means of which we produce the list of all lattice polygons (up to equivalence) whose genus is contained in {1,…,30}. In particular, we obtain the number of inequivalent lattice polygons for each of these genera. As a byproduct, we prove that the minimal possible genus for a lattice 15-gon is 45.
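The genus counted here is the number of interior lattice points of the polygon, and for a single polygon it is computable in closed form via Pick's theorem, A = I + B/2 − 1. The sketch below is my illustration of that count, not the enumeration algorithm of the paper:

```python
from math import gcd

def lattice_genus(verts):
    """Interior lattice point count of a simple lattice polygon, via
    Pick's theorem A = I + B/2 - 1, i.e. I = (2A - B)/2 + 1 (exact in ints)."""
    n = len(verts)
    # twice the signed area, by the shoelace formula
    a2 = sum(verts[i][0] * verts[(i + 1) % n][1] -
             verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    # boundary lattice points: gcd of the edge-vector components, summed
    b = sum(gcd(abs(verts[(i + 1) % n][0] - verts[i][0]),
                abs(verts[(i + 1) % n][1] - verts[i][1])) for i in range(n))
    return (abs(a2) - b) // 2 + 1
```

For instance, the unit triangle has genus 0, while the 2x2 square and the triangle with legs of length 3 each enclose a single lattice point.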

Journal ArticleDOI
TL;DR: An automatic cohesive crack propagation modelling methodology for quasi-brittle materials using polygon elements is presented in this article, where each polygon is treated as a subdomain that is modelled by the scaled boundary finite element method (SBFEM).

Journal ArticleDOI
TL;DR: It is shown that at least six sides per polygon are necessary, by constructing a class of planar graphs that cannot be represented by pentagons, and that this lower bound is matched by an upper bound of six sides, realized by a linear-time algorithm that represents any planar graph by touching hexagons.
Abstract: In this paper, we consider the problem of representing planar graphs by polygons whose sides touch. We show that at least six sides per polygon are necessary by constructing a class of planar graphs that cannot be represented by pentagons. We also show that the lower bound of six sides is matched by an upper bound of six sides with a linear-time algorithm for representing any planar graph by touching hexagons. Moreover, our algorithm produces convex polygons with edges having at most three slopes and with all vertices lying on an O(n)×O(n) grid.

Journal ArticleDOI
TL;DR: A simple and practical technique is presented for creating fine three-dimensional images with polygon-based computer-generated holograms that takes less computation time than common point-source methods and produces fine spatial 3D images of deep 3D scenes that convey a strong sensation of depth, unlike conventional 3D systems providing only binocular disparity.
Abstract: A simple and practical technique is presented for creating fine three-dimensional (3D) images with polygon-based computer-generated holograms. The polygon-based method is a technique for computing the optical wave-field of virtual 3D scenes given by a numerical model. The presented method takes less computation time than common point-source methods and produces fine spatial 3D images of deep 3D scenes that convey a strong sensation of depth, unlike conventional 3D systems providing only binocular disparity. However, smooth surfaces cannot be reconstructed using the presented method because the surfaces are approximated by planar polygons. This problem is resolved by introducing a simple rendering technique that is almost the same as that in common computer graphics, since the polygon-based method has similarity to rendering techniques in computer graphics. Two actual computer holograms are presented to verify and demonstrate the proposed technique. One is a hologram of a live face whose shape is measured using a 3D laser scanner that outputs polygon-mesh data. The other is for a scene including the moon. Both are created employing the proposed rendering techniques of the texture mapping of real photographs and smooth shading. © 2012 SPIE and IS&T. (DOI: 10.1117/1

Book
26 Jan 2012
TL;DR: This book is about how to use CGAL two-dimensional arrangements to solve problems and contains comprehensive explanations of the solution programs, many illustrations, and detailed notes on further reading.
Abstract: Arrangements of curves constitute fundamental structures that have been intensively studied in computational geometry. Arrangements have numerous applications in a wide range of areas; examples include geographic information systems, robot motion planning, statistics, computer-assisted surgery and molecular biology. Implementing robust algorithms for arrangements is a notoriously difficult task, and the CGAL arrangements package is the first robust, comprehensive, generic and efficient implementation of data structures and algorithms for arrangements of curves. This book is about how to use CGAL two-dimensional arrangements to solve problems. The authors first demonstrate the features of the arrangement package and related packages using small example programs. They then describe applications, i.e., complete standalone programs written on top of CGAL arrangements used to solve meaningful problems; for example, finding the minimum-area triangle defined by a set of points, planning the motion of a polygon translating among polygons in the plane, computing the offset polygon, finding the largest common point sets under approximate congruence, constructing the farthest-point Voronoi diagram, coordinating the motion of two discs moving among obstacles in the plane, and performing Boolean operations on curved polygons. The book contains comprehensive explanations of the solution programs, many illustrations, and detailed notes on further reading, and it is supported by a website that contains downloadable software and exercises. It will be suitable for graduate students and researchers involved in applied research in computational geometry, and for professionals who require worked-out solutions to real-life geometric problems. It is assumed that the reader is familiar with the C++ programming language and with the basics of the generic-programming paradigm.

Journal ArticleDOI
TL;DR: Although the change polygon method is sensitive to the definition and measurement of shoreline length, the results are more invariant to parameter changes than the transect-from-baseline method, suggesting that the change Polygon technique may be a more robust coastal change method.
Abstract: This study compares two automated approaches, the transect-from-baseline technique and a new change polygon method, for quantifying historical coastal change over time. The study shows that the transect-from-baseline technique is complicated by the choice of a proper baseline, as well as by transects that intersect with each other rather than with the nearest shoreline. The change polygon method captures the full spatial difference between the positions of the two shorelines, and average coastal change is then defined as the ratio of the net area divided by the shoreline length. Although the change polygon method is sensitive to the definition and measurement of shoreline length, the results are more invariant to parameter changes than the transect-from-baseline method, suggesting that the change polygon technique may be a more robust coastal change method.
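The core metric of the change polygon method, net enclosed area divided by shoreline length, can be sketched in a few lines. This is an illustration under simplifying assumptions (the two shorelines share endpoints and together bound a simple polygon); the function names are mine, not from the paper:

```python
from math import hypot

def polygon_area(pts):
    """Absolute area of a simple polygon (shoelace formula)."""
    n = len(pts)
    return abs(sum(pts[i][0] * pts[(i + 1) % n][1] -
                   pts[(i + 1) % n][0] * pts[i][1] for i in range(n))) / 2.0

def polyline_length(pts):
    """Total length of a polyline given as a list of (x, y) points."""
    return sum(hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:]))

def average_change(old_shore, new_shore):
    """Change-polygon estimate: area enclosed between two shorelines divided
    by (here) the old shoreline's length. Assumes both shorelines share their
    endpoints, so walking one forward and the other backward closes a loop."""
    loop = old_shore + new_shore[::-1][1:-1]
    return polygon_area(loop) / polyline_length(old_shore)
```

A shoreline that bulges seaward by a triangle of area 10 over a 10-unit stretch yields an average change of exactly 1 unit.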

Journal ArticleDOI
17 Jan 2012
TL;DR: In this article, an orthonormal basis of eigenfunctions of the Dirichlet Laplacian for a rational polygon is considered and a sequence of probability measures is shown to converge to the Lebesgue measure.
Abstract: We consider an orthonormal basis of eigenfunctions of the Dirichlet Laplacian for a rational polygon. The modulus squared of the eigenfunctions defines a sequence of probability measures. We prove that this sequence contains a density-one subsequence that converges to Lebesgue measure.

Proceedings Article
01 Jan 2012
TL;DR: In this paper, it is shown that two natural generalizations of the problem of computing the minimum number of flips required to transform one triangulation of a convex polygon into another are NP-complete, while it remains open whether the original problem is in P or NP-complete.
Abstract: Given two triangulations of a convex polygon, computing the minimum number of flips required to transform one to the other is a long-standing open problem. It is not known whether the problem is in P or NP-complete. We prove that two natural generalizations of the problem are NP-complete, namely computing the minimum number of flips between two triangulations of (1) a polygon with holes; (2) a set of points in the plane.
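The flavor of the problem can be seen in a brute-force computation of flip distance for small convex polygons; since no polynomial algorithm is known, exhaustive search over the flip graph is a legitimate, if exponential, baseline. This sketch is mine, not from the paper:

```python
from collections import deque
from itertools import combinations

def crossing(d1, d2):
    """Two diagonals of a convex polygon cross iff their endpoints interleave."""
    (a, b), (c, d) = sorted(d1), sorted(d2)
    return a < c < b < d or c < a < d < b

def triangulations(n):
    """All triangulations of a convex n-gon, as frozensets of n-3 diagonals."""
    diags = [(i, j) for i, j in combinations(range(n), 2)
             if j - i > 1 and (i, j) != (0, n - 1)]
    return [frozenset(t) for t in combinations(diags, n - 3)
            if all(not crossing(d, e) for d, e in combinations(t, 2))]

def flip_distance(n, t1, t2):
    """BFS in the flip graph; two triangulations are adjacent iff they
    differ in exactly one diagonal (a single flip)."""
    all_t = triangulations(n)
    dist, queue = {t1: 0}, deque([t1])
    while queue:
        t = queue.popleft()
        if t == t2:
            return dist[t]
        for u in all_t:
            if u not in dist and len(t ^ u) == 2:  # one diagonal swapped
                dist[u] = dist[t] + 1
                queue.append(u)
    return None  # not reached: the flip graph of a convex polygon is connected
```

For a pentagon the flip graph is a 5-cycle over the five (Catalan number C3) triangulations, so any two triangulations are at most two flips apart.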

01 Nov 2012
TL;DR: The rotating polygon instability on a swirling fluid surface [Vatistas, J. Fluid Mech. 217, 241 (1990); Jansson et al., Phys. Rev. Lett. 96, 174502 (2006)] is explained in terms of resonant interactions between gravity waves on the outer part of the surface and centrifugal waves on the inner part, using a model based on potential flow theory, linearized around a potential vortex flow with a free surface, for which unstable resonant states appear; the resulting analytically soluble model, together with estimates of the circulation based on angular momentum balance, reproduces the main features of the experimental phase diagram.
Abstract: We explain the rotating polygon instability on a swirling fluid surface [G. H. Vatistas, J. Fluid Mech. 217, 241 (1990) and Jansson et al., Phys. Rev. Lett. 96, 174502 (2006)] in terms of resonant interactions between gravity waves on the outer part of the surface and centrifugal waves on the inner part. Our model is based on potential flow theory, linearized around a potential vortex flow with a free surface for which we show that unstable resonant states appear. Limiting our attention to the lowest order mode of each type of wave and their interaction, we obtain an analytically soluble model, which, together with estimates of the circulation based on angular momentum balance, reproduces the main features of the experimental phase diagram. The generality of our arguments implies that the instability should not be limited to flows with a rotating bottom (implying singular behavior near the corners), and indeed we show that we can obtain the polygons transiently by violently stirring liquid nitrogen in a hot container.

Journal ArticleDOI
TL;DR: In this article, the authors introduce the notion of multichannel conformal blocks relevant for the operator product expansion for null polygon Wilson loops with more than six edges, and decompose the one loop heptagon Wilson loop and predict the value of its two loop OPE discontinuities.
Abstract: We introduce the notion of Multichannel Conformal Blocks relevant for the Operator Product Expansion for Null Polygon Wilson loops with more than six edges. As an application of these, we decompose the one loop heptagon Wilson loop and predict the value of its two loop OPE discontinuities. At the functional level, the OPE discontinuities are roughly half of the full result. Using symbols they suffice to predict the full two loop result. We also present several new predictions for the heptagon result at any loop order.

Journal ArticleDOI
TL;DR: The two-dimensional cutting/packing problem with items that correspond to simple polygons that may contain holes are studied and algorithms based on no-fit polygon computation are proposed in which they found optimal solutions for several of the tested instances within a reasonable runtime.
Abstract: In this paper, the two-dimensional cutting/packing problem with items that correspond to simple polygons that may contain holes are studied in which we propose algorithms based on no-fit polygon computation. We present a GRASP based heuristic for the 0/1 version of the knapsack problem, and another heuristic for the unconstrained version of the knapsack problem. This last heuristic is divided in two steps: first it packs items in rectangles and then use the rectangles as items to be packed into the bin. We also solve the cutting stock problem with items of irregular shape, by combining this last heuristic with a column generation algorithm. The algorithms proposed found optimal solutions for several of the tested instances within a reasonable runtime. For some instances, the algorithms obtained solutions with occupancy rates above 90% with relatively fast execution time.

Journal ArticleDOI
TL;DR: This paper proposes an approximation algorithm that solves the problem of finding the largest-area rectangle of arbitrary orientation that is fully contained in a given polygon C, with an O(n^3) computational cost, where n is the number of vertices of the polygon.

Proceedings ArticleDOI
21 May 2012
TL;DR: This work presents a cluster-based distributed solution for end-to-end polygon overlay processing, modeled after the authors' Windows Azure cloud-based Crayons system, and demonstrates the scalability of the system, along with the remaining bottlenecks.
Abstract: GIS polygon-based (also known as vector-based) spatial data overlay computation is much more complex than raster data computation. Processing of polygonal spatial data files has been a long-standing research question in the GIS community due to the irregular and data-intensive nature of the underlying computation. The state-of-the-art software for overlay computation in the GIS community is still desktop-based. We present a cluster-based distributed solution for end-to-end polygon overlay processing, modeled after our Windows Azure cloud-based Crayons system [1]. We present the details of porting the Crayons system to an MPI-based Linux cluster and show the improvements made by employing efficient data structures such as R-trees. We present a performance report and show the scalability of our system, along with the remaining bottlenecks. Our experimental results show an absolute speedup of 15x for end-to-end overlay computation employing up to 80 cores.
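The role an R-tree plays in overlay processing is to accelerate the filter step: discarding polygon pairs whose minimum bounding rectangles (MBRs) do not overlap before any exact intersection is attempted. A brute-force version of that filter, purely illustrative and not Crayons code (the function names are mine), looks like this:

```python
def mbr(poly):
    """Minimum bounding rectangle (xmin, ymin, xmax, ymax) of a polygon."""
    xs, ys = zip(*poly)
    return min(xs), min(ys), max(xs), max(ys)

def mbrs_overlap(a, b):
    """Axis-aligned rectangle overlap test (touching counts as overlap)."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def candidate_pairs(layer_a, layer_b):
    """Filter phase of overlay: keep only pairs whose MBRs overlap. A spatial
    index such as an R-tree replaces this O(|A|*|B|) scan with indexed lookups;
    each surviving pair still needs an exact polygon intersection test."""
    boxes_b = [mbr(p) for p in layer_b]
    pairs = []
    for i, pa in enumerate(layer_a):
        ba = mbr(pa)
        pairs.extend((i, j) for j, bb in enumerate(boxes_b)
                     if mbrs_overlap(ba, bb))
    return pairs
```

Distant polygons are pruned in the filter phase, so the expensive geometric refinement runs only on plausibly intersecting pairs.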

Journal ArticleDOI
TL;DR: In this article, a primal-dual algorithm based on linear programming is presented that provides lower bounds on the necessary number of guards in every step and in case of convergence and integrality, ends with an optimal solution.
Abstract: The classical Art Gallery Problem asks for the minimum number of guards that achieve visibility coverage of a given polygon. This problem is known to be NP-hard, even for very restricted and discrete special cases. For the case of vertex guards and simple orthogonal polygons, Couto et al. have recently developed an exact method that is based on a set-cover approach. For the general problem (in which both the set of possible guard positions and the point set to be guarded are uncountable), neither constant-factor approximation algorithms nor exact solution methods are known. We present a primal-dual algorithm based on linear programming that provides lower bounds on the necessary number of guards in every step and, in case of convergence and integrality, ends with an optimal solution. We describe our implementation and give experimental results for an assortment of polygons, including nonorthogonal polygons with holes.

Journal ArticleDOI
TL;DR: In this article, a new two-dimensional discrete element type, termed the polyarc element, is presented, which is capable of representing any 2D convex particle shape with arbitrary angularity and elongation using a small number of shape parameters.
Abstract: SUMMARY A new two-dimensional discrete element type, termed the ‘polyarc’ element is presented in this paper. Compared to other discrete element types, the new element is capable of representing any two-dimensional convex particle shape with arbitrary angularity and elongation using a small number of shape parameters. Contact resolution between polyarc elements, which is the most computation-extensive task in DEM simulation only involves simple closed-form solutions. Two undesirable contact scenarios common for polygon elements can be avoided by the polyarc element, so the contact resolution algorithm for polyarc elements is simpler than that for polygon elements. The extra flexibility in particle shape representation induces little or no additional computational cost. The key algorithmic aspects of the new element, including the particle shape representation scheme, the quick neighbor search algorithm, the contact resolution algorithm, and the contact law are presented. The recommended contact law for the polyarc model was formulated on the basis of an evaluation of various contact law schemes for polygon type discrete elements. The capability and efficiency of the new element type were demonstrated through an investigation of strength anisotropy of a virtual sand consisting of a random mix of angular and smooth elongated particles subjected to biaxial compression tests. Copyright © 2011 John Wiley & Sons, Ltd.

Posted Content
TL;DR: In this article, a series of numerical vertical compression tests was performed on assemblies of 2D granular material using a Discrete Element code, and the results were studied with regard to the grain shape.
Abstract: We performed a series of numerical vertical compression tests on assemblies of 2D granular material using a Discrete Element code and studied the results with regard to the grain shape. The samples consist of 5,000 grains made from either 3 overlapping discs (clumps, i.e. grains with concavities) or six-edged polygons (convex grains). These two grain types have similar external envelopes, which are a function of a geometrical parameter $\alpha$. In this paper, the numerical procedure applied is briefly presented, followed by the description of the granular model used. Observations and mechanical analysis of dense and loose granular assemblies under isotropic loading are made. The mechanical response of our numerical granular samples is studied in the framework of the classical vertical compression test with constant lateral stress (biaxial test). The comparison of macroscopic responses of dense and loose samples with various grain shapes shows that when $\alpha$ is considered a concavity parameter, it is a relevant variable for increasing the mechanical performance of dense samples. When $\alpha$ is considered an envelope deviation from perfect sphericity, it can control mechanical performance at large strains. Finally, we present some remarks concerning the kinematics of the deformed samples: while some polygon samples subjected to vertical compression present large damage zones (for any polygon shape), dense samples made of clumps always exhibit thin reflecting shear bands. This paper was written as part of a CEGEO research project www.granuloscience.com

Journal ArticleDOI
TL;DR: A deterministic approximation algorithm is given that computes an inscribed rectangle of area at least (1 − ε) times the optimum in running time O(1/ε^2 log n), and it is shown how this running time can be slightly improved.

Proceedings ArticleDOI
24 Dec 2012
TL;DR: An algorithm is introduced that uses the FSPF-generated plane filtered point clouds to generate convex polygons from individual observed depth images, together with an approach for merging these detected polygons across successive frames that accounts for a complete history of observed plane filtered points without explicitly maintaining a list of all observed points.
Abstract: There has been considerable interest recently in building 3D maps of environments using inexpensive depth cameras like the Microsoft Kinect sensor. We exploit the fact that typical indoor scenes have an abundance of planar features by modeling environments as sets of plane polygons. To this end, we build upon the Fast Sampling Plane Filtering (FSPF) algorithm that extracts points belonging to local neighborhoods of planes from depth images, even in the presence of clutter. We introduce an algorithm that uses the FSPF-generated plane filtered point clouds to generate convex polygons from individual observed depth images. We then contribute an approach of merging these detected polygons across successive frames while accounting for a complete history of observed plane filtered points without explicitly maintaining a list of all observed points. The FSPF and polygon merging algorithms run in real time at full camera frame rates with low CPU requirements: in a real world indoor environment scene, the FSPF and polygon merging algorithms take 2.5 ms on average to process a single 640 × 480 depth image. We provide experimental results demonstrating the computational efficiency of the algorithm and the accuracy of the detected plane polygons by comparing with ground truth.