
Showing papers on "Polygon" published in 2013


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation of mixing polygons is used to evaluate the point-in-polygon assumption of stable isotope analysis in mixing models incorporating uncertainty, for both two- and three-isotope systems.
Abstract: Stable isotope analysis is often used to identify the relative contributions of various food resources to a consumer's diet. Some Bayesian isotopic mixing models now incorporate uncertainty in the isotopic signatures of consumers, sources and trophic enrichment factors (e.g. SIAR, MixSIR). This has made model outputs more comprehensive, but at the expense of simple model evaluation, and there is no quantitative method for determining whether a proposed mixing model is likely to explain the isotopic signatures of all consumers, before the model is run. Earlier linear mixing models (e.g. IsoSource) are easier to evaluate, such that if a consumer's isotopic signature is outside the mixing polygon bounding the proposed dietary sources, then mass balance cannot be established and there is no logical solution. This can be used to identify consumers for exclusion or to reject a model outright. This point-in-polygon assumption is not inherent in the Bayesian mixing models, because the source data are distributions not average values, and these models will quantify source contributions even when the solution is very unlikely. We use a Monte Carlo simulation of mixing polygons to apply the point-in-polygon assumption to these models. Convex hulls (‘mixing polygons’) are iterated using the distributions of the proposed dietary sources and trophic enrichment factors, and the proportion of polygons that have a solution (i.e. that satisfy point-in-polygon) is calculated. This proportion can be interpreted as the frequentist probability that the proposed mixing model can calculate source contributions to explain a consumer's isotopic signature. The mixing polygon simulation is visualised with a mixing region, which is calculated by testing a grid of values for point-in-polygon. The simulation method enables users to quantitatively explore assumptions of stable isotope analysis in mixing models incorporating uncertainty, for both two- and three-isotope systems. It provides a quantitative basis for model rejection, for consumer exclusion (those outside the 95% mixing region) and for the correction of trophic enrichment factors. The simulation is demonstrated using a two-isotope study (15N, 13C) of an Australian freshwater food web.

188 citations
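The simulation idea above can be sketched in a few lines of Python (an illustrative re-implementation rather than the authors' published package; the source means, standard deviations, trophic enrichment factors and consumer signature below are made-up values):

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)

# Hypothetical two-isotope (d15N, d13C) source means, SDs and trophic
# enrichment factors (TEFs); values are purely illustrative.
source_means = np.array([[8.0, -20.0], [5.0, -26.0], [11.0, -14.0], [3.0, -30.0]])
source_sds   = np.full_like(source_means, 0.5)
tef_mean, tef_sd = np.array([3.4, 0.4]), np.array([1.0, 0.5])

consumer = np.array([9.5, -21.0])          # isotopic signature to test
n_iter = 5000
inside = 0

for _ in range(n_iter):
    # Draw one realisation of every (source + TEF) vertex ...
    verts = rng.normal(source_means + tef_mean, np.hypot(source_sds, tef_sd))
    hull = ConvexHull(verts)               # ... and build the mixing polygon
    # point-in-polygon test via the Delaunay triangulation of the hull vertices
    if Delaunay(verts[hull.vertices]).find_simplex(consumer) >= 0:
        inside += 1

print(f"Estimated probability the mixing model has a solution: {inside / n_iter:.3f}")
```

Repeating the point-in-polygon test over a grid of candidate consumer positions yields the mixing region described in the abstract.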


Journal ArticleDOI
TL;DR: The main innovation in the approach lies in the tight coupling between interactive input and automatic optimization, as well as in an algorithm that robustly discovers the set of adjacency relations.
Abstract: In this article, we introduce a novel reconstruction and modeling pipeline to create polygonal models from unstructured point clouds. We propose an automatic polygonal reconstruction that can then be interactively refined by the user. An initial model is automatically created by extracting a set of RANSAC-based locally fitted planar primitives along with their boundary polygons, and then searching for local adjacency relations among parts of the polygons. The extracted set of adjacency relations is enforced to snap polygon elements together, while simultaneously fitting to the input point cloud and ensuring the planarity of the polygons. This optimization-based snapping algorithm may also be interleaved with user interaction. This allows the user to sketch modifications with coarse and loose 2D strokes, as the exact alignment of the polygons is automatically performed by the snapping. The generated models are coarse, offer simple editing possibilities by design, and are suitable for interactive 3D applications like games, virtual environments, etc. The main innovation in our approach lies in the tight coupling between interactive input and automatic optimization, as well as in an algorithm that robustly discovers the set of adjacency relations.

119 citations
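The plane-extraction step can be illustrated with a minimal RANSAC plane fit (a generic sketch of the primitive-fitting technique, not the paper's full reconstruction pipeline; the synthetic point cloud and thresholds are illustrative):

```python
import numpy as np

def ransac_plane(points, n_iter=500, inlier_tol=0.02, rng=np.random.default_rng(1)):
    """Fit a single plane n.x + d = 0 to a point cloud by RANSAC."""
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Toy cloud: a noisy z = 0 plane plus scattered outliers.
rng = np.random.default_rng(2)
plane_pts = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.005, 500)]
outliers  = rng.uniform(-1, 1, (100, 3))
plane, mask = ransac_plane(np.vstack([plane_pts, outliers]))
print("normal ~", np.round(plane[0], 2), " inliers:", mask.sum())
```

In a full pipeline the fit would be repeated on the remaining points to extract several primitives, whose boundary polygons are then snapped together as described above.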


Journal ArticleDOI
Yijie Pan, Yongtian Wang, Juan Liu, Xin Li, Jia Jia
TL;DR: A fast polygon-based method based on two-dimensional Fourier analysis of 3D affine transformation that could reconstruct the 3D scene with the solid effect and without the depth limitation is proposed.
Abstract: In the holographic three-dimensional (3D) display, the numerical synthesis of the computer-generated holograms requires tremendous calculation. To solve the problem, a fast polygon-based method based on two-dimensional Fourier analysis of 3D affine transformation is proposed. From one primitive polygon, the proposed method calculates the diffracted optical field of each arbitrary polygon in the 3D model, where the pseudo-inverse matrix, the interpolation, and the compensation of the power spectral density are employed. The proposed method could save the computation time in the hologram synthesis since it does not need the fast Fourier transform for each polygonal surface and the additional diffusion computation. The numerical simulation and the optical experimental results are presented to demonstrate the effectiveness of the method. The results reveal that the proposed method can reconstruct the 3D scene with the solid effect and without the depth limitation. The factors that influence the image quality are discussed, and thresholds are proposed to ensure the reconstruction quality.

102 citations


Patent
15 Mar 2013
TL;DR: In this paper, a scene point cloud is processed and a solution to an inverse function is computed to determine its source objects, and a primitive extraction process and a part matching process are used to compute that solution.
Abstract: A scene point cloud is processed and a solution to an inverse-function is determined to determine its source objects. A primitive extraction process and a part matching process are used to compute the inverse function solution. The extraction process estimates models and parameters based on evidence of cylinder and planar geometry in the scene. The matching process matches clusters of 3D points to models of parts from a library. A selected part and its associated polygon model is used to represent the point cluster. Iterations of the extraction and matching processes complete a 3D model for a complex scene made up of planes, cylinders, and complex parts from the parts library. Connecting regions between primitives and/or parts are processed to determine their existence and type. Constraints may be used to ensure a connected model and alignment of its components.

97 citations


Journal ArticleDOI
TL;DR: The closed-form probability density function of the Euclidean distance between any arbitrary reference point and its nth neighbor node when N nodes are uniformly and independently distributed inside a regular L-sided polygon is obtained.
Abstract: This paper derives the exact cumulative density function (cdf) of the distance between a randomly located node and any arbitrary reference point inside a regular L-sided polygon. Using this result, we obtain the closed-form probability density function of the Euclidean distance between any arbitrary reference point and its nth neighbor node when N nodes are uniformly and independently distributed inside a regular L-sided polygon. First, we exploit the rotational symmetry of the regular polygons and quantify the effect of polygon sides and vertices on the distance distributions. Then, we propose an algorithm to determine the distance distributions, given any arbitrary location of the reference point inside the polygon. For the special case when the arbitrary reference point is located at the center of the polygon, our framework reproduces the existing result in the literature.

92 citations
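The quantity derived in closed form above is easy to check empirically; the sketch below estimates the cdf of the distance from an arbitrary reference point to its 3rd-nearest of N uniform nodes in a regular hexagon (all parameters and the reference point are illustrative):

```python
import numpy as np

def regular_polygon(L):
    ang = 2 * np.pi * np.arange(L) / L
    return np.c_[np.cos(ang), np.sin(ang)]        # unit circumradius, CCW order

def sample_in_polygon(verts, n, rng):
    """Uniform points inside a convex CCW polygon via rejection sampling."""
    edges = np.roll(verts, -1, axis=0) - verts
    pts = []
    while len(pts) < n:
        p = rng.uniform(verts.min(0), verts.max(0))
        rel = p - verts
        # inside iff p lies on the inner side of every edge (2D cross product)
        if np.all(edges[:, 0] * rel[:, 1] - edges[:, 1] * rel[:, 0] >= 0):
            pts.append(p)
    return np.array(pts)

rng = np.random.default_rng(0)
L, N, nth, runs = 6, 10, 3, 2000        # hexagon, 10 nodes, 3rd-nearest neighbour
verts = regular_polygon(L)
ref = np.array([0.3, 0.1])              # arbitrary reference point inside the polygon

d = np.array([np.sort(np.linalg.norm(sample_in_polygon(verts, N, rng) - ref, axis=1))[nth - 1]
              for _ in range(runs)])
for r in (0.3, 0.5, 0.7):               # empirical cdf values the closed form should match
    print(f"P(d_(3) <= {r}) ~ {(d <= r).mean():.3f}")
```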


Journal ArticleDOI
TL;DR: In this article, a new fast and accurate algorithm is suggested that is based on the inflation of polygons: starting with an initial triangle located in a topologically connected subset of the area of feasible solutions, an automatic extrusion algorithm is used to form a sequence of growing polygons that approximate the AFS from the interior.
Abstract: The multicomponent factorization of multivariate data often results in nonunique solutions. The so-called rotational ambiguity paraphrases the existence of multiple solutions that can be represented by the area of feasible solutions (AFS). The AFS is a bounded set that may consist of isolated subsets. The numerical computation of the AFS is well understood for two-component systems and is an expensive numerical process for three-component systems. In this paper, a new fast and accurate algorithm is suggested that is based on the inflation of polygons. Starting with an initial triangle located in a topologically connected subset of the AFS, an automatic extrusion algorithm is used to form a sequence of growing polygons that approximate the AFS from the interior. The polygon inflation algorithm can be generalized to systems with more than three components. The efficiency of this algorithm is demonstrated for a model problem including noise and a multicomponent chemical reaction system. Further, the method is compared with the recent triangle-boundary-enclosing scheme of Golshan, Abdollahi, and Maeder (Anal. Chem. 2011, 83, 836–841). Copyright © 2013 John Wiley & Sons, Ltd.

72 citations


Journal ArticleDOI
TL;DR: In this article, a relative entropy measure for signed (positive or negative) shape functions with nodal prior weight functions that have the appropriate zero-set on the boundary of the polygon is presented.

70 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that the pentagram map is integrable on the moduli space of closed polygons in the real projective plane and that the leaves of the toric foliation carry affine structure.
Abstract: The pentagram map is a discrete dynamical system defined on the moduli space of polygons in the projective plane. This map has recently attracted considerable interest, mostly because of its connection to a number of different domains, such as classical projective geometry, algebraic combinatorics, moduli spaces, cluster algebras, and integrable systems. Integrability of the pentagram map was conjectured by Schwartz and proved by the present authors for a larger space of twisted polygons. In this article, we prove the initial conjecture that the pentagram map is completely integrable on the moduli space of closed polygons. In the case of convex polygons in the real projective plane, this result implies the existence of a toric foliation on the moduli space. The leaves of the foliation carry affine structure and the dynamics of the pentagram map is quasiperiodic. Our proof is based on an invariant Poisson structure on the space of twisted polygons. We prove that the Hamiltonian vector fields corresponding to the monodromy invariants preserve the space of closed polygons and define an invariant affine structure on the level surfaces of the monodromy invariants.

69 citations
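The map itself is elementary to iterate numerically: vertex i of the image polygon is the intersection of the short diagonals (v_{i-1}, v_{i+1}) and (v_i, v_{i+2}). A small sketch (an illustration of the map only, unrelated to the integrability proof; the heptagon is an arbitrary convex example):

```python
import numpy as np

def cross2(a, b):
    return a[0] * b[1] - a[1] * b[0]

def intersect(p1, p2, p3, p4):
    """Intersection point of lines p1p2 and p3p4 (assumed non-parallel)."""
    d1, d2 = p2 - p1, p4 - p3
    t = cross2(p3 - p1, d2) / cross2(d1, d2)
    return p1 + t * d1

def pentagram_map(poly):
    """One step of the pentagram map: vertex i of the image polygon is the
    intersection of the short diagonals (v_{i-1}, v_{i+1}) and (v_i, v_{i+2})."""
    n = len(poly)
    return np.array([intersect(poly[i - 1], poly[(i + 1) % n],
                               poly[i], poly[(i + 2) % n]) for i in range(n)])

# A generic convex heptagon (perturbed regular polygon) iterated a few times.
ang = 2 * np.pi * np.arange(7) / 7 + 0.1 * np.sin(np.arange(7))
poly = np.c_[np.cos(ang), np.sin(ang)]
for _ in range(3):
    poly = pentagram_map(poly)
print(np.round(poly, 3))
```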


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors presented a hybrid shadow analysis approach that integrates the model- and property-based methods for detecting collapsed buildings after an earthquake using high-resolution satellite imagery, and the experimental results showed the superiority of the proposed approach to the other existing ones.
Abstract: In this paper, we present a hybrid shadow-analysis approach that integrates the model- and property-based methods for detecting collapsed buildings after an earthquake using high-resolution satellite imagery. The framework of the proposed approach has four main steps. (1) The three-dimensional (3D) building model is established according to its footprint and height data stored in a geographical information system. (2) The theoretical shadow area of the building at the time that the post-seismic image was acquired is calculated, and the polygon of the ground shadow area of the building, which is called the theoretical ground shadow polygon, is extracted. (3) The theoretical ground shadow polygon is overlaid with the cast shadow area of the building, which is called the actual shadow area in the post-seismic satellite image, and the mean value of the digital number values of the post-seismic image pixels within the polygon of the theoretical shadow area is calculated. (4) The calculated mean value is compared with predefined thresholds, which are determined by the training pixels collected from the different types of shadows. On this basis, the shadows of totally collapsed, partially collapsed and uncollapsed buildings can be distinguished. A comprehensive experiment for Dujiangyan city, one of the urban areas most severely damaged in the May 2008 Wenchuan Earthquake, was conducted, and the experimental results showed the superiority of the proposed approach to the other existing ones.

67 citations
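Step (2), the theoretical ground shadow polygon, can be sketched for a flat-roofed building on flat terrain (a simplified stand-in for the paper's 3D-model computation; shapely is assumed, and the footprint, height and sun angles below are made up):

```python
import numpy as np
from shapely.geometry import Polygon
from shapely.ops import unary_union
from shapely.affinity import translate

def ground_shadow(footprint, height_m, sun_elev_deg, sun_azim_deg):
    """Theoretical ground shadow of a flat-roofed prism on flat terrain.

    The roof outline is shifted opposite the sun and united with the
    footprint; for a convex footprint the convex hull of both equals the
    exact shadow region.
    """
    length = height_m / np.tan(np.radians(sun_elev_deg))
    a = np.radians(sun_azim_deg)                       # azimuth clockwise from north
    dx, dy = -length * np.sin(a), -length * np.cos(a)  # x = east, y = north
    roof_shadow = translate(footprint, dx, dy)
    return unary_union([footprint, roof_shadow]).convex_hull

# Hypothetical 20 m x 10 m footprint, 15 m tall, sun at 40 deg elevation, 135 deg azimuth.
fp = Polygon([(0, 0), (20, 0), (20, 10), (0, 10)])
shadow = ground_shadow(fp, 15, 40, 135)
print(round(shadow.area, 1), "m^2 total shadow area (incl. footprint)")
```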


Journal ArticleDOI
TL;DR: In this article, an automatic local remeshing algorithm that can be applied to any polygon mesh is developed in order to accommodate crack propagation. This technique does not require local mesh refinement around the crack tip, special-purpose elements or nodal enrichment functions.

65 citations


Journal ArticleDOI
21 Jul 2013
TL;DR: The method builds on an existing transfinite interpolant over a continuous domain, which in turn extends the classical mean value interpolant; the interpolant is re-derived from the mean value property of biharmonic functions and proven to match the gradient constraints when the boundary is piecewise linear.
Abstract: We present a new method for interpolating both boundary values and gradients over a 2D polygonal domain. Despite various previous efforts, it remains challenging to define a closed-form interpolant that produces natural-looking functions while allowing flexible control of boundary constraints. Our method builds on an existing transfinite interpolant over a continuous domain, which in turn extends the classical mean value interpolant. We re-derive the interpolant from the mean value property of biharmonic functions, and prove that the interpolant indeed matches the gradient constraints when the boundary is piecewise linear. We then give closed-form formulas (as generalized barycentric coordinates) for boundary constraints represented as polynomials up to degree 3 (for values) and 1 (for normal derivatives) over each polygon edge. We demonstrate the flexibility and efficiency of our coordinates in two novel applications: smooth image deformation using curved cage networks and adaptive simplification of gradient meshes.

Journal ArticleDOI
TL;DR: It is shown that vertex guarding a monotone polygon is NP-hard and a constant factor approximation algorithm for interior guarding monotone polygons is constructed that has an approximation factor independent of the number of vertices of the polygon.
Abstract: We show that vertex guarding a monotone polygon is NP-hard and construct a constant factor approximation algorithm for interior guarding monotone polygons. Using this algorithm we obtain an approximation algorithm for interior guarding rectilinear polygons that has an approximation factor independent of the number of vertices of the polygon. If the size of the smallest interior guard cover is OPT for a rectilinear polygon, our algorithm produces a guard set of size O(OPT^2).

Journal ArticleDOI
TL;DR: A robust and invisible watermarking scheme based on polylines and polygons for the copyright protection of a GIS digital map that is more robust against geometric attacks, such as rotation, scaling, and translation (RST) transformations, data addition, cropping, breaking, and filleting attacks, and layer attacks with rearrangement and cropping.
Abstract: Geographical information services (GIS) can be provided on the basis of a digital map, which is the fundamental form of representation of data in a GIS. Because the process of producing a digital map is considerably complex and the maintenance of a digital map requires substantial monetary and human resources, a digital map is very valuable and requires copyright protection. A digital map consists of a number of layers that are categorized in terms of topographical features and landmarks. Therefore, any unauthorized person can forge either an entire digital map or the feature layers of the map. In this paper, we present a robust and invisible watermarking scheme based on polylines and polygons for the copyright protection of a GIS digital map. The proposed scheme clusters all polylines and polygons in the feature layers of the map on the basis of the polyline length and the polygon area. A watermark is then embedded in the GIS vector data on the basis of the distribution of polyline length and polygon area in each group by moving all vertices in polylines and polygons within a specified tolerance. Experimental results confirm that the proposed scheme is more robust against geometric attacks, such as rotation, scaling, and translation (RST) transformations, data addition, cropping, breaking, and filleting attacks, and layer attacks with rearrangement and cropping, when compared with conventional schemes. Moreover, the scheme also satisfies data position accuracy requirements.

Journal ArticleDOI
TL;DR: This work disproves a conjecture, posed by two sets of authors, that any 4-connected maximal planar graph has a one-legged Hamiltonian cycle, thereby invalidating an attempt to achieve a polygonal complexity of 6 in cartograms for this graph class.
Abstract: In a rectilinear dual of a planar graph, vertices are represented by simple rectilinear polygons, while edges are represented by side-contact between the corresponding polygons. A rectilinear dual is called a cartogram if the area of each region is equal to a pre-specified weight. The complexity of a cartogram is determined by the maximum number of corners (or sides) required for any polygon. In a series of papers the polygonal complexity of such representations for maximal planar graphs has been reduced from the initial 40 to 34, then to 12 and very recently to the currently best known 10. Here we describe a construction with 8-sided polygons, which is optimal in terms of polygonal complexity as 8-sided polygons are sometimes necessary. Specifically, we show how to compute the combinatorial structure and how to refine it into an area-universal rectangular layout in linear time. The exact cartogram can be computed from the area-universal layout with numerical iteration, or can be approximated with a hill-climbing heuristic. We also describe an alternative construction of cartograms for Hamiltonian maximal planar graphs, which allows us to directly compute the cartograms in linear time. Moreover, we prove that even for Hamiltonian graphs 8-sided rectilinear polygons are necessary, by constructing a non-trivial lower bound example. The complexity of the cartograms can be reduced to 6 if the Hamiltonian path has the extra property that it is one-legged, as in outer-planar graphs. Thus, we have optimal representations (in terms of both polygonal complexity and running time) for Hamiltonian maximal planar and maximal outer-planar graphs. Finally we address the problem of constructing small-complexity cartograms for 4-connected graphs (which are Hamiltonian). We first disprove a conjecture, posed by two sets of authors, that any 4-connected maximal planar graph has a one-legged Hamiltonian cycle, thereby invalidating an attempt to achieve a polygonal complexity of 6 in cartograms for this graph class. We also prove that it is NP-hard to decide whether a given 4-connected plane graph admits a cartogram with respect to a given weight function.

Journal ArticleDOI
TL;DR: In this paper, a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions is provided, and the gradient does not become large as interior angles of the polygon approach π.
Abstract: In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in Gillette et al. (Adv Comput Math 37(3), 417–439, 2012), we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradients of the mean value coordinates do not become large as interior angles of the polygon approach π.
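The mean value coordinates analysed above have a short closed form (Floater's tangent formula); the sketch below evaluates them for a point strictly inside a convex polygon and is independent of the paper's error estimates:

```python
import numpy as np

def mean_value_coordinates(verts, x):
    """Floater's mean value coordinates of point x w.r.t. a convex polygon
    `verts` (x strictly inside, not on an edge)."""
    verts = np.asarray(verts, float)
    d = verts - x
    r = np.linalg.norm(d, axis=1)
    # angle at x spanned by edge (v_i, v_{i+1})
    d_next, r_next = np.roll(d, -1, axis=0), np.roll(r, -1)
    cos_a = np.einsum('ij,ij->i', d, d_next) / (r * r_next)
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))
    tan_half = np.tan(alpha / 2.0)
    # w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - x|, then normalize
    w = (np.roll(tan_half, 1) + tan_half) / r
    return w / w.sum()

# Unit square: the coordinates sum to one and reproduce x as a convex
# combination of the vertices (linear precision).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
lam = mean_value_coordinates(square, np.array([0.3, 0.6]))
print(lam, lam @ np.array(square))
```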

Patent
21 Oct 2013
TL;DR: In this paper, a method of performing a resolution enhancement technique such as OPC on an initial layout description involves fragmenting a polygon that represents a feature to be created into a number of edge fragments.
Abstract: A method of performing a resolution enhancement technique such as OPC on an initial layout description involves fragmenting a polygon that represents a feature to be created into a number of edge fragments. One or more of the edge fragments is assigned an initial simulation site at which the image intensity is calculated. Upon calculation of the image intensity, the position and/or number of initial simulation sites is varied. New calculations are made of the image intensity with the revised placement or number of simulation sites in order to calculate an OPC correction for the edge fragment. In other embodiments, fragmentation of a polygon is adjusted based on the image intensities calculated at the simulation sites. In one embodiment, the image intensity gradient vector calculated at the initial simulation sites is used to adjust the simulation sites and/or fragmentation of the polygon.

Book ChapterDOI
14 Feb 2013
TL;DR: This paper generalizes the local search framework for obtaining PTASs for NP-hard geometric optimization problems by extending its analysis to additional families of graphs, beyond the family of planar graphs.
Abstract: The local search framework for obtaining PTASs for NP-hard geometric optimization problems was introduced, independently, by Chan and Har-Peled [6] and Mustafa and Ray [17]. In this paper, we generalize the framework by extending its analysis to additional families of graphs, beyond the family of planar graphs. We then present several applications of the generalized framework, some of which are very different from those presented to date (using the original framework). These applications include PTASs for finding a maximum l-shallow set of a set of fat objects, for finding a maximum triangle matching in an l-shallow unit disk graph, and for vertex-guarding a (not-necessarily-simple) polygon under an appropriate shallowness assumption.

Journal ArticleDOI
TL;DR: In this paper, a family of polygon exchange transformations parameterized by points (α, β) in the square is described, and it is shown that whenever α and β are irrational, Ψα,β has periodic orbits of arbitrarily large period.
Abstract: We describe a family {Ψα,β} of polygon exchange transformations parameterized by points (α, β) in the square [0, 1/2] × [0, 1/2]. Whenever α and β are irrational, Ψα,β has periodic orbits of arbitrarily large period. We show that for almost all parameters, the polygon exchange map has the property that almost every point is periodic. However, there is a dense set of irrational parameters for which this fails. By choosing parameters carefully, the measure of non-periodic points can be made arbitrarily close to full measure. These results are powered by a notion of renormalization which holds in a more general setting. Namely, we consider a renormalization of tilings arising from the Corner Percolation Model.

Proceedings ArticleDOI
20 May 2013
TL;DR: This work revisits the distributed polygon overlay problem and its implementation on the MapReduce platform and develops algorithms geared towards maximizing local processing and minimizing the communication overhead inherent in the shuffle and sort phases of MapReduce.
Abstract: Polygon overlay is one of the complex operations in computational geometry. It is applied in many fields such as Geographic Information Systems (GIS), computer graphics and VLSI CAD. Sequential algorithms for this problem abound in the literature, but there is a lack of distributed algorithms, especially for the MapReduce platform. In GIS, spatial data files tend to be large in size (in GBs) and the underlying overlay computation is highly irregular and compute intensive. The MapReduce paradigm is now standard in industry and academia for processing large-scale data. Motivated by the MapReduce programming model, we revisit the distributed polygon overlay problem and its implementation on the MapReduce platform. Our algorithms are geared towards maximizing local processing and minimizing the communication overhead inherent in the shuffle and sort phases of MapReduce. We have experimented with two data sets and achieved up to 22x speedup with dataset 1 using 64 CPU cores.
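The map/reduce decomposition can be sketched in plain Python: the map phase keys each polygon by the grid cells its bounding box touches, and the reduce phase overlays the two layers within each cell, clipping to the cell so no output is emitted twice (shapely is assumed for the geometry; the cell size and data are illustrative, and a real job would run on a MapReduce cluster rather than in-memory dictionaries):

```python
from collections import defaultdict
from shapely.geometry import Polygon, box

CELL = 10.0  # grid-cell size used to partition the plane (illustrative)

def map_phase(layer_id, polygons):
    """Emit (cell, (layer_id, polygon)) for every grid cell a polygon touches."""
    for poly in polygons:
        minx, miny, maxx, maxy = poly.bounds
        for i in range(int(minx // CELL), int(maxx // CELL) + 1):
            for j in range(int(miny // CELL), int(maxy // CELL) + 1):
                yield (i, j), (layer_id, poly)

def reduce_phase(cell, values):
    """Intersection overlay of the two layers, clipped to the cell so the same
    output polygon is not emitted from several cells."""
    cell_box = box(cell[0] * CELL, cell[1] * CELL,
                   (cell[0] + 1) * CELL, (cell[1] + 1) * CELL)
    a = [p for lid, p in values if lid == 'A']
    b = [p for lid, p in values if lid == 'B']
    for pa in a:
        for pb in b:
            out = pa.intersection(pb).intersection(cell_box)
            if not out.is_empty:
                yield out

# Tiny driver standing in for the shuffle/sort phase.
layer_a = [Polygon([(0, 0), (12, 0), (12, 8), (0, 8)])]
layer_b = [Polygon([(5, 3), (18, 3), (18, 15), (5, 15)])]
groups = defaultdict(list)
for key, val in list(map_phase('A', layer_a)) + list(map_phase('B', layer_b)):
    groups[key].append(val)
result = [g for cell, vals in groups.items() for g in reduce_phase(cell, vals)]
print(sum(g.area for g in result))  # total overlay area
```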

Proceedings ArticleDOI
05 Nov 2013
TL;DR: This paper proposes an approach to automatically extract polygons and other attribute data from historical maps using multiple image processing and statistics utilities to produce desirable results in a fraction of the time required to do so by hand.
Abstract: Polygon and attribute data extraction from historical maps such as US insurance atlases from the 19th and early 20th centuries has so far been a manual task. The New York Public Library (NYPL) currently relies on staff and volunteer work to manually extract polygons and other attribute data from its collection to create new public data sets for the study of urban history. This is a time-intensive task requiring up to several minutes to trace a shapefile and transcribe attributes for a single building. In this paper we propose an approach to automatically extract such attribute data from historical maps. The approach makes use of multiple image processing and statistics utilities to produce desirable results in a fraction of the time required to do so by hand. On average, a shapefile for an atlas sheet is generated in ~11.4 minutes for a total of 23.5 hours of processing time for a whole atlas that contains ~55,000 polygons; contrast this time frame to NYPL's current manual process that has taken three years to extract about 170,000 polygons across four New York City street atlases. Even with some error rate in the proposed approach, the most cumbersome, time-intensive work (manual polygon drawing) has been reduced to a fraction of its original scope. This new workflow has promising implications for historical GIS.
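A heavily simplified version of such a raster-to-vector step can be written with OpenCV's thresholding, contour tracing and polygon simplification (the file name, area threshold and tolerance below are placeholders, and the pipeline in the paper involves considerably more cleanup and attribute transcription):

```python
import cv2
import numpy as np

# Placeholder path to a scanned atlas sheet; any grayscale building-footprint
# raster will do for this sketch.
img = cv2.imread("atlas_sheet.png", cv2.IMREAD_GRAYSCALE)

# Binarise: building footprints are assumed darker than the background.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Remove speckle noise before tracing contours.
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

polygons = []
for c in contours:
    if cv2.contourArea(c) < 100:          # drop tiny blobs (tuning required)
        continue
    eps = 0.01 * cv2.arcLength(c, True)   # simplification tolerance
    approx = cv2.approxPolyDP(c, eps, True)
    polygons.append(approx.reshape(-1, 2))

print(f"extracted {len(polygons)} candidate building polygons")
```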

Journal ArticleDOI
TL;DR: A novel graph-based approach to compute a polygonal approximation of a shape contour by using vertex betweenness to measure the importance of each vertex in a graph according to the number of shortest paths in which it occurs.

Proceedings ArticleDOI
21 Mar 2013
TL;DR: This work proposes a new method for the fast computation of light maps using a many-light global-illumination solution, avoiding objectionable artifacts even for very short computation times.
Abstract: We propose a new method for the fast computation of light maps using a many-light global-illumination solution. A complete scene can be light mapped on the order of seconds to minutes, allowing fast and consistent previews for editing or even generation at loading time. In our method, virtual point lights are clustered into a set of virtual polygon lights, which represent a compact description of the illumination in the scene. The actual light-map generation is performed directly on the GPU. Our approach degrades gracefully, avoiding objectionable artifacts even for very short computation times.

Journal ArticleDOI
03 Jul 2013
TL;DR: It is demonstrated experimentally that the triangles in the Delaunay tetrahedralization of the polygon vertices offer a reasonable trade-off between performance and optimality.
Abstract: We present an algorithm for obtaining a triangulation of multiple, non-planar 3D polygons. The output minimizes additive weights, such as the total triangle areas or the total dihedral angles between adjacent triangles. Our algorithm generalizes a classical method for optimally triangulating a single polygon. The key novelty is a mechanism for avoiding non-manifold outputs for two and more input polygons without compromising optimality. For better performance on real-world data, we also propose an approximate solution by feeding the algorithm with a reduced set of triangles. In particular, we demonstrate experimentally that the triangles in the Delaunay tetrahedralization of the polygon vertices offer a reasonable trade-off between performance and optimality.
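The classical single-polygon method that the algorithm generalizes is the O(n^3) dynamic program over sub-chains; below is a sketch that minimizes total triangle area for one (possibly non-planar) polygon, with illustrative coordinates:

```python
import numpy as np
from functools import lru_cache

def optimal_triangulation(verts):
    """Classical O(n^3) dynamic program triangulating a single closed polygon
    (possibly non-planar), minimising the total triangle area."""
    verts = np.asarray(verts, float)
    n = len(verts)

    def area(i, k, j):
        return 0.5 * np.linalg.norm(np.cross(verts[k] - verts[i], verts[j] - verts[i]))

    @lru_cache(maxsize=None)
    def best(i, j):                        # best triangulation of the chain i..j
        if j - i < 2:
            return 0.0, ()
        cands = []
        for k in range(i + 1, j):          # apex of the triangle on edge (i, j)
            ci, ti = best(i, k)
            cj, tj = best(k, j)
            cands.append((ci + cj + area(i, k, j), ti + tj + ((i, k, j),)))
        return min(cands)

    return best(0, n - 1)

# Example: a slightly non-planar hexagonal loop (illustrative coordinates).
poly = [(0, 0, 0), (2, 0, 0.2), (3, 1.5, 0), (2, 3, 0.3), (0, 3, 0), (-1, 1.5, 0.1)]
cost, tris = optimal_triangulation(poly)
print(round(cost, 3), tris)
```

The paper's contribution lies in coupling several such polygons while excluding non-manifold combinations, which the single-polygon program above does not address.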

Journal ArticleDOI
TL;DR: In this paper, the first algorithms for computing the geodesic diameter of a polygonal domain with holes are presented, running in worst-case time O(n^7.73) or O(n^7(log n + h)); unlike the simple polygon case, the diameter can be determined by two interior points joined by at least five distinct shortest paths.
Abstract: This paper studies the geodesic diameter of polygonal domains having h holes and n corners. For simple polygons (i.e., h = 0), the geodesic diameter is determined by a pair of corners of a given polygon and can be computed in linear time, as shown by Hershberger and Suri. For general polygonal domains with h ≥ 1, however, no algorithm for computing the geodesic diameter was known prior to this paper. In this paper, we present the first algorithms that compute the geodesic diameter of a given polygonal domain in worst-case time O(n^7.73) or O(n^7(log n + h)). The main difficulty, unlike the simple polygon case, lies in the following observation revealed in this paper: two interior points can determine the geodesic diameter, and in that case there exist at least five distinct shortest paths between the two.

Book ChapterDOI
01 Jan 2013
TL;DR: Several theoretical and practical geometry applications are based on polygon meshes with planar faces; one way to obtain such meshes is by intersecting suitably distributed tangent planes, but this simple tangent plane intersection idea has a number of limitations.
Abstract: Several theoretical and practical geometry applications are based on polygon meshes with planar faces. The planar panelization of freeform surfaces is a prominent example from the field of architectural geometry. One approach to obtain a certain kind of such meshes is by intersection of suitably distributed tangent planes. Unfortunately, this simple tangent plane intersection (TPI) idea has a number of limitations. It is restricted to the generation of hexagon-dominant meshes: as vertices are in general defined by three intersecting planes, the resulting meshes are basically duals of triangle meshes. Furthermore, the explicit computation of intersection points requires dedicated handling of special cases and degenerate constellations to achieve robustness on freeform surfaces. Another limitation is the small number of degrees of freedom for incorporating design parameters.
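The basic TPI operation is a 3x3 linear solve per vertex, which also makes the degenerate constellations mentioned above easy to see: near-parallel tangent planes give an ill-conditioned system. A minimal sketch with made-up planes:

```python
import numpy as np

def plane_intersection(normals, offsets):
    """Vertex defined by three tangent planes n_i . x = d_i.
    Raises LinAlgError when the planes are parallel -- the degenerate
    constellation the text warns about; near-parallel planes give an
    ill-conditioned, numerically unstable solve."""
    return np.linalg.solve(np.asarray(normals, float), np.asarray(offsets, float))

# Three illustrative tangent planes of a surface patch.
normals = [(0.0, 0.0, 1.0), (1.0, 0.1, 1.0), (0.1, 1.0, 1.0)]
offsets = [1.0, 2.0, 2.5]
print(plane_intersection(normals, offsets))
```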

Patent
06 Dec 2013
TL;DR: A system for planning a grasp for implementation by a grasping device having multiple grasping members is presented; it comprises a processor and a computer-readable storage device storing computer-executable instructions that, when executed by the processor, cause the processor to perform multiple operations.
Abstract: A system, for planning a grasp for implementation by a grasping device having multiple grasping members, comprising a processor and a computer-readable storage device having stored thereon computer-executable instructions that, when executed by the processor, cause the processor to perform multiple operations. The operations include generating, for each of the multiple grasping members, multiple planar polygon representations of a three-dimensional object model. The operations also include transforming a planar polygon, of the multiple polygons generated, to a frame of a link of multiple links of a subject member of the multiple grasping members, forming a transformed polygon, being a cross-section of the object model taken along a member-curling plane of the subject member. The operations further include sweeping, in iterations associated respectively with each link of the subject member, the link from a fully-open position for the link to a point at which the link contacts the transformed planar polygon.

Book ChapterDOI
02 Sep 2013
TL;DR: The flip distance between two triangulations is the smallest number of flips required to transform one triangulation into the other.
Abstract: Let T be a triangulation of a simple polygon. A flip in T is the operation of removing one diagonal of T and adding a different one such that the resulting graph is again a triangulation. The flip distance between two triangulations is the smallest number of flips required to transform one triangulation into the other. For the special case of convex polygons, the problem of determining the shortest flip distance between two triangulations is equivalent to determining the rotation distance between two binary trees, a central problem which is still open after over 25 years of intensive study.
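For small convex polygons the flip distance can be computed by brute-force breadth-first search over all triangulations, which makes the definition concrete (exponential and purely illustrative; the open problem concerns efficient computation):

```python
from collections import deque

def flip_neighbors(diags, n):
    """All triangulations of a convex n-gon reachable from `diags` by one flip.
    A triangulation is a set of diagonals given as sorted vertex-index pairs."""
    sides = {(i, (i + 1) % n) if i < (i + 1) % n else ((i + 1) % n, i) for i in range(n)}
    edges = sides | diags
    for a, c in diags:
        # apexes of the two triangles sharing diagonal (a, c)
        apexes = [x for x in range(n) if x not in (a, c)
                  and tuple(sorted((a, x))) in edges and tuple(sorted((x, c))) in edges]
        b, d = apexes
        yield frozenset(diags - {(a, c)} | {tuple(sorted((b, d)))})

def flip_distance(t1, t2, n):
    """BFS over triangulations of a convex n-gon."""
    start, goal = frozenset(t1), frozenset(t2)
    dist, queue = {start: 0}, deque([start])
    while queue:
        t = queue.popleft()
        if t == goal:
            return dist[t]
        for nb in flip_neighbors(t, n):
            if nb not in dist:
                dist[nb] = dist[t] + 1
                queue.append(nb)

# Two fan triangulations of a hexagon (fans from vertex 0 and from vertex 3).
fan0 = {(0, 2), (0, 3), (0, 4)}
fan3 = {(0, 3), (1, 3), (3, 5)}
print(flip_distance(fan0, fan3, 6))   # -> 2
```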

Book ChapterDOI
05 Jun 2013
TL;DR: This is the first time that an exact algorithm has been proposed and extensively tested for this problem; the algorithm shows remarkable performance, obtaining provably optimal solutions for every instance in a matter of minutes on a standard desktop computer.
Abstract: The general Art Gallery Problem (AGP) consists in finding the minimum number of guards sufficient to ensure the visibility coverage of an art gallery represented by a polygon. The AGP is a well-known NP-hard problem and, for this reason, all algorithms proposed so far to solve it are unable to guarantee optimality except in special cases. In this paper, we present a new method for solving the Art Gallery Problem by iteratively generating upper and lower bounds while seeking to reach an exact solution. Notwithstanding that convergence remains an important open question, our algorithm has been successfully tested on a very large collection of instances from publicly available benchmarks. Tests were carried out for several classes of instances totaling more than a thousand hole-free polygons with sizes ranging from 20 to 1000 vertices. The proposed algorithm showed a remarkable performance, obtaining provably optimal solutions for every instance in a matter of minutes on a standard desktop computer. To our knowledge, despite the AGP having been studied for four decades within the field of computational geometry, this is the first time that an exact algorithm is proposed and extensively tested for this problem. Future research directions to expand the present work are also discussed.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work proposes a novel fast and accurate method based on keypoints and temporal information to solve the registration problem on planar scenes with moving objects for infrared-visible stereo pairs that outperforms two recent state-of-the-art global registration methods by a large margin.
Abstract: In this work, we propose a novel fast and accurate method based on keypoints and temporal information to solve the registration problem on planar scenes with moving objects for infrared-visible stereo pairs. A keypoint descriptor and a temporal buffer (reservoir) filled with matched keypoints are used in order to find the homography transformation for the registration. Inside a given frame, the problem of registration is formulated as correspondences between noisy polygon vertices. Sections of polygons are matched locally to find the corresponding vertices inside a frame. These correspondences are then accumulated temporally using a reservoir of matches for homography calculation. Results show that our method outperforms two recent state-of-the-art global registration methods by a large margin in almost all tested videos.
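The final homography estimation from the reservoir of matched vertices can be reproduced with OpenCV (the matched coordinates below are made-up stand-ins for the paper's keypoint and polygon-vertex matching; RANSAC inside cv2.findHomography discards spurious correspondences):

```python
import cv2
import numpy as np

# Hypothetical matched polygon vertices (infrared -> visible), accumulated in
# a temporal reservoir as described above; the coordinates are made up.
reservoir_ir = np.array([[102, 200], [310, 198], [315, 402], [ 98, 405],
                         [205, 120], [260, 330]], dtype=np.float32)
reservoir_vis = np.array([[110, 210], [322, 205], [330, 415], [105, 420],
                          [215, 128], [270, 342]], dtype=np.float32)

# Estimate the planar homography, rejecting wrong vertex correspondences.
H, inlier_mask = cv2.findHomography(reservoir_ir, reservoir_vis,
                                    cv2.RANSAC, ransacReprojThreshold=3.0)
print(np.round(H, 3))

# Register the infrared frame onto the visible one (frame is a placeholder array).
ir_frame = np.zeros((480, 640), dtype=np.uint8)
registered = cv2.warpPerspective(ir_frame, H, (640, 480))
```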

Journal ArticleDOI
TL;DR: A new error band model, the statistical simulation error model, is created by a simulation method that integrates a population of line segments/polylines/polygons computed from the entire solution set of the error model's defining equation.