
Showing papers on "Polygon published in 2005"


Proceedings Article
01 Jan 2005
TL;DR: A streaming format for polygon meshes is described that is simple enough to replace current offline mesh formats, is more suitable for representing large data sets, and is an ideal input and output format for I/O-efficient out-of-core algorithms that process meshes in a streaming, possibly pipelined, fashion.
Abstract: Recent years have seen an immense increase in the complexity of geometric data sets. Today's gigabyte-sized polygon models can no longer be completely loaded into the main memory of common desktop PCs. Unfortunately, current mesh formats, which were designed years ago when meshes were orders of magnitude smaller, do not account for this. Using such formats to store large meshes is inefficient and complicates all subsequent processing. We describe a streaming format for polygon meshes that is simple enough to replace current offline mesh formats and is more suitable for representing large data sets. Furthermore, it is an ideal input and output format for I/O-efficient out-of-core algorithms that process meshes in a streaming, possibly pipelined, fashion. This paper chiefly concerns the underlying theory and the practical aspects of creating and working with this new representation. In particular, we describe desirable qualities for streaming meshes and methods for converting meshes from a traditional to a streaming format. A central theme of this paper is the issue of coherent and compatible layouts of the mesh vertices and polygons. We present metrics and diagrams that characterize the coherence of a mesh layout and suggest appropriate strategies for improving its "streamability". To this end, we outline several out-of-core algorithms for reordering meshes with poor coherence, and present results for a menagerie of well known and generally incoherent surface meshes.
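
The core idea, interleaving vertex and polygon records and announcing when a vertex is referenced for the last time, can be sketched as follows. This is a minimal illustrative sketch; the record syntax ("v", "f", "x") and the writer interface are invented for illustration and are not the paper's actual format.

```python
# Illustrative sketch of a streaming-mesh writer: vertices and triangles are
# interleaved, and a vertex is "finalized" (may be evicted by a downstream
# consumer) as soon as its last referencing triangle has been written.
# The record syntax ("v", "f", "x") is invented for illustration.

def write_streaming_mesh(vertices, triangles, out):
    # last_use[v] = index of the last triangle that references vertex v
    last_use = {}
    for t_idx, tri in enumerate(triangles):
        for v in tri:
            last_use[v] = t_idx

    written = set()
    for t_idx, tri in enumerate(triangles):
        # introduce any vertex this triangle needs that has not been streamed yet
        for v in tri:
            if v not in written:
                x, y, z = vertices[v]
                out.write(f"v {x} {y} {z}\n")
                written.add(v)
        out.write("f {} {} {}\n".format(*tri))
        # finalize vertices whose last reference is this triangle
        for v in tri:
            if last_use[v] == t_idx:
                out.write(f"x {v}\n")

if __name__ == "__main__":
    import sys
    verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
    tris = [(0, 1, 2), (2, 1, 3)]
    write_streaming_mesh(verts, tris, sys.stdout)
```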

186 citations


Journal ArticleDOI
TL;DR: A new quadrilateral remeshing method for manifolds of arbitrary genus that is at once general, flexible, and efficient is proposed, based on the use of smooth harmonic scalar fields defined over the mesh.

174 citations


Patent
13 Aug 2005
TL;DR: In this article, a method of modifying polygons in a data set for maskless or mask-based optical projection lithography is proposed, which includes: 1) mapping the data set to a figure-of-demerit, 2) moving individual polygon edges to decrease the figure-of-demerit, and 3) disrupting the set of polygons to enable a further decrease in the figure-of-demerit.
Abstract: A method of modifying polygons in a data set for mask-less or mask-based optical projection lithography includes: 1) mapping the data set to a figure-of-demerit; 2) moving individual polygon edges to decrease the figure-of-demerit; and 3) disrupting the set of polygons to enable a further decrease in the figure-of-demerit, wherein disrupting polygons includes any of the following polygon disruptions: breaking up, merging, or deleting polygons.
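
Read as an optimization loop, the three claimed steps might look like the schematic sketch below; the helper callables (demerit, move_edges, disrupt) are hypothetical placeholders, and the loop structure is an interpretation of the claim rather than the patented procedure.

```python
# Schematic paraphrase of the claimed loop: evaluate a figure-of-demerit,
# locally move polygon edges downhill, and occasionally disrupt (break up,
# merge, or delete) polygons to escape configurations that edge moves alone
# cannot improve.  All helper callables are hypothetical placeholders.

def optimize_layout(polygons, demerit, move_edges, disrupt, max_iters=100, tol=1e-6):
    best = demerit(polygons)                      # 1) map data set to figure-of-demerit
    for _ in range(max_iters):
        polygons = move_edges(polygons, demerit)  # 2) move individual edges downhill
        value = demerit(polygons)
        if best - value < tol:
            polygons = disrupt(polygons)          # 3) break up / merge / delete polygons
            value = demerit(polygons)
        if value >= best - tol:
            break                                 # no further decrease achievable
        best = value
    return polygons
```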

169 citations


Journal ArticleDOI
TL;DR: In this article, the authors apply time-domain waveform cross-correlation for P and S waves between each event and 100 neighboring events identified from the catalog based on a 3D velocity model.
Abstract: We present the results of relocating 327,000 southern California earthquakes that occurred between 1984 and 2002. We apply time-domain waveform cross-correlation for P and S waves between each event and 100 neighboring events identified from the catalog based on a 3D velocity model. To simplify the computation, we first divide southern California into five polygons, such that there are ∼100,000 events or less in each region. The polygon boundaries are chosen to lie in regions of sparse seismicity. We calculate and save differential times from the peaks in the cross-correlation functions and use a spline interpolation method to achieve a nominal timing precision of 0.001 sec. These differential times, together with existing P- and S-phase picks, are input to the double-difference program of Waldhauser and Ellsworth (2000, 2002) to calculate refined hypocenters. We divide the southern California region into grid cells and successively relocate hypocenters within each grid cell. The overall resulting pattern of seismicity is more focused than the previously determined pattern from 1D or 3D models. The new improved locations are more clustered, in many cases by a factor of two or three, and often show clear linear alignments. In particular, the depth distribution is improved and less affected by layer boundaries in velocity models or other similar artifacts.
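
The differential-time measurement at the heart of the relocation can be illustrated as below. The paper interpolates the cross-correlation peak with a spline to reach a nominal 0.001 sec precision; this sketch substitutes a simpler parabolic fit around the discrete peak and uses made-up synthetic waveforms.

```python
# Sketch of measuring a differential arrival time between two waveforms by
# cross-correlation.  A parabolic fit around the discrete correlation peak is
# used here as a simple stand-in for the spline interpolation in the paper.
import numpy as np

def differential_time(w1, w2, dt):
    """Lag (in seconds) by which w2 is delayed relative to w1."""
    w1 = (w1 - w1.mean()) / w1.std()
    w2 = (w2 - w2.mean()) / w2.std()
    cc = np.correlate(w2, w1, mode="full")
    k = int(np.argmax(cc))                        # discrete peak
    if 0 < k < len(cc) - 1:                       # sub-sample refinement
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (k - (len(w1) - 1)) * dt

if __name__ == "__main__":
    dt = 0.01                                     # synthetic 100 Hz records
    t = np.arange(0, 2, dt)
    pulse = np.exp(-((t - 0.7) / 0.05) ** 2)
    delayed = np.exp(-((t - 0.7423) / 0.05) ** 2)
    print(differential_time(pulse, delayed, dt))  # approximately 0.042 s
```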

167 citations


Journal ArticleDOI
TL;DR: A fully automatic technique which converts an inconsistent input mesh into an output mesh that is guaranteed to be a clean and consistent mesh representing the closed manifold surface of a solid object is presented.
Abstract: We present a fully automatic technique which converts an inconsistent input mesh into an output mesh that is guaranteed to be a clean and consistent mesh representing the closed manifold surface of a solid object. The algorithm removes all typical mesh artifacts such as degenerate triangles, incompatible face orientation, non-manifold vertices and edges, overlapping and penetrating polygons, internal redundant geometry, as well as gaps and holes up to a user-defined maximum size ρ. Moreover, the output mesh always stays within a prescribed tolerance ε to the input mesh. Due to the effective use of a hierarchical octree data structure, the algorithm achieves high voxel resolution (up to 4096³ on a 2 GB PC) and processing times of just a few minutes for moderately complex objects. We demonstrate our technique on various architectural CAD models to show its robustness and reliability.

132 citations


Journal ArticleDOI
TL;DR: This paper presents the first data structure for a variable-scale representation of an area partitioning without redundancy of geometry; the structure is also suitable for progressive transfer of vector maps.
Abstract: This paper presents the first data structure for a variable-scale representation of an area partitioning without redundancy of geometry. At the highest level of detail, the areas are represented using a topological structure based on faces and edges; there is no redundancy of geometry in this structure as the shared boundaries (edges) between neighbor areas are stored only once. Each edge is represented by a Binary Line Generalization (BLG) tree, which enables selection of the proper representation for a given scale. Further, there is also no geometry redundancy between the different levels of detail. An edge at a higher importance level (less detail) does not contain copies of the lower-level edges or coordinates (more detail), but is represented by efficiently combining their corresponding BLG-trees. Which edges have to be combined follows from the generalization computation, and this is stored in a data structure. This data structure turns out to be a set of trees, which will be called the Generalized Area Partitioning (GAP) edge forest, or GAP-edge forest. With regard to faces, the generalization result can be captured in a single tree structure for the parent-child relationships: the GAP-face tree. On the client side, no geometric computations are necessary to compute the polygon representations of the faces; merely following the topological references is sufficient. Finally, the presented data structure is also suitable for progressive transfer of vector maps, assuming that the client maintains a local copy of the GAP-face tree and the GAP-edge forest.
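
A minimal sketch of the BLG-tree idea follows, assuming it behaves like a precomputed Douglas-Peucker hierarchy: each node stores the point that deviates most from the chord of its parent segment together with that deviation, and extraction keeps only points whose deviation exceeds the target tolerance. Field and function names are illustrative.

```python
# Minimal sketch of a Binary Line Generalization (BLG) tree, treated here as a
# precomputed Douglas-Peucker hierarchy.  Selecting a representation for a
# given tolerance keeps only points whose stored error exceeds the tolerance.
from dataclasses import dataclass
from typing import Optional, List, Tuple

Point = Tuple[float, float]

def _dist_point_chord(p: Point, a: Point, b: Point) -> float:
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    denom = (dx * dx + dy * dy) ** 0.5 or 1.0
    return abs(dx * (ay - py) - dy * (ax - px)) / denom

@dataclass
class BLGNode:
    point: Point
    error: float
    left: Optional["BLGNode"] = None
    right: Optional["BLGNode"] = None

def build_blg(points: List[Point], lo: int = 0, hi: Optional[int] = None) -> Optional[BLGNode]:
    if hi is None:
        hi = len(points) - 1
    if hi - lo < 2:
        return None
    # split at the point farthest from the chord points[lo]..points[hi]
    k, err = max(((i, _dist_point_chord(points[i], points[lo], points[hi]))
                  for i in range(lo + 1, hi)), key=lambda t: t[1])
    return BLGNode(points[k], err, build_blg(points, lo, k), build_blg(points, k, hi))

def extract(node: Optional[BLGNode], tol: float, out: List[Point]) -> None:
    # in-order walk, keeping only points whose error exceeds the tolerance
    if node is None or node.error <= tol:
        return
    extract(node.left, tol, out)
    out.append(node.point)
    extract(node.right, tol, out)

if __name__ == "__main__":
    line = [(0, 0), (1, 0.9), (2, 0.1), (3, 1.2), (4, 0)]
    root = build_blg(line)
    for tol in (0.0, 0.5, 2.0):
        pts = [line[0]]
        extract(root, tol, pts)
        pts.append(line[-1])
        print(tol, pts)
```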

101 citations


Proceedings ArticleDOI
25 Jul 2005
TL;DR: It is shown by the experiment that most rectangular building roofs can be correctly detected, extracted and reconfigured, demonstrating the potential application of the method.
Abstract: In this paper, we developed a new building extraction system for high-resolution remote sensing imagery based on multi-scale object-oriented classification and the probabilistic Hough transform. The system can be divided into two phases: building roof extraction and shape reconfiguration. For the first phase, the multispectral and panchromatic high-resolution satellite images are first fused for spatial resolution improvement and color information enhancement. Multiresolution image segmentation is applied to the fused image, producing polygon primitives at different levels of spatial scale and providing different views of the scene at different resolutions. In addition to spectral information, tone, texture, shape, and context information are evaluated in an object-oriented manner. The classification is based on a fuzzy rule decision tree classifier. By fuzzy evaluation of the shape, texture, context, and spectral information, building roofs are extracted by reconstruction and classification from an appropriate spatial scale of roof polygon primitives. For the shape reconfiguration phase, we adopt the probabilistic Hough transform to delineate the dominant roof line, which indicates the major orientation of the specific building roof. According to the dominant line, a building squaring algorithm is applied based on rectilinear fitting of the building boundary. Our experiments show that most rectangular building roofs can be correctly detected, extracted, and reconfigured, demonstrating the potential application of the method.
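
The dominant-line step alone can be illustrated with a probabilistic Hough transform, for example via OpenCV as below; the Canny and Hough threshold values are placeholders rather than those used in the paper.

```python
# Sketch of the dominant-line step only: run a probabilistic Hough transform
# on the edge map of an extracted roof mask and take the orientation of the
# longest detected segment as the dominant direction.  Thresholds are
# illustrative placeholders.
import cv2
import numpy as np

def dominant_roof_orientation(roof_mask: np.ndarray) -> float:
    """Return the dominant edge orientation (degrees) of a binary roof mask."""
    edges = cv2.Canny(roof_mask.astype(np.uint8) * 255, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=30,
                               minLineLength=20, maxLineGap=5)
    if segments is None:
        return 0.0
    # pick the longest segment and report its angle modulo 180 degrees
    best = max(segments[:, 0, :], key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    x1, y1, x2, y2 = best
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0)

if __name__ == "__main__":
    # synthetic rotated-rectangle "roof" for a quick check
    mask = np.zeros((200, 200), dtype=np.uint8)
    box = cv2.boxPoints(((100, 100), (120, 60), 30.0)).astype(np.int32)
    cv2.fillConvexPoly(mask, box, 1)
    print(dominant_roof_orientation(mask))
```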

85 citations


Proceedings ArticleDOI
03 Apr 2005
TL;DR: This paper presents an interactive system for deforming unstructured polygon meshes that is very easy to use, and shows that the formulation of the deformation provides a natural way to interpolate between character poses, allowing generation of simple key framed animations.
Abstract: Techniques for interactive deformation of unstructured polygon meshes are of fundamental importance to a host of applications. Most traditional approaches to this problem have emphasized precise control over the deformation being made. However, they are often cumbersome and unintuitive for non-expert users. In this paper, we present an interactive system for deforming unstructured polygon meshes that is very easy to use. The user interacts with the system by sketching curves in the image plane. A single stroke can define a free-form skeleton and the region of the model to be deformed. By sketching the desired deformation of this reference curve, the user can implicitly and intuitively control the deformation of an entire region of the surface. At the same time, the reference curve also provides a basis for controlling additional parameters, such as twist and scaling. We demonstrate that our system can be used to interactively edit a variety of unstructured mesh models with very little effort. We also show that our formulation of the deformation provides a natural way to interpolate between character poses, allowing generation of simple keyframed animations.

81 citations


Journal ArticleDOI
TL;DR: A new active contour model is developed which nicely ties the desirable polygonal representation of an object directly to the image segmentation process and can robustly capture texture boundaries by way of higher-order statistics of the data and an information-theoretic measure, with an evolution governed by ordinary differential equations.
Abstract: Curve evolution models used in image segmentation and based on image region information usually utilize simple statistics such as means and variances, hence cannot account for the higher-order nature of the textural characteristics of image regions. In addition, the object delineation by active contour methods results in a contour representation which still requires a substantial amount of data to be stored for subsequent multimedia applications such as visual information retrieval from databases. Polygonal approximations of the extracted continuous curves are required to reduce the amount of data, since polygons are powerful approximators of shapes for use in later recognition stages such as shape matching and coding. The key contribution of this paper is the development of a new active contour model which nicely ties the desirable polygonal representation of an object directly to the image segmentation process. This model can robustly capture texture boundaries by way of higher-order statistics of the data and an information-theoretic measure, and its evolution is governed by ordinary differential equations. This new variational texture segmentation model is unsupervised, since no prior knowledge of the textural properties of image regions is used. Another contribution in this sequel is a new polygon regularizer algorithm which uses electrostatics principles. This is a global regularizer and is more consistent than a local polygon regularization in preserving local features such as corners.
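
One simplified reading of the electrostatic regularizer is sketched below: polygon vertices are treated as equal point charges, and each vertex is nudged along the component of the net repulsive force tangent to the contour, which spreads vertices without inflating the shape. This is an illustrative toy under that assumption, not the authors' exact formulation.

```python
# A simplified reading of the electrostatic regularizer: polygon vertices are
# treated as equal point charges that repel one another (1/r^2 force), and
# each vertex is nudged along the component of the net force tangent to the
# contour, which tends to spread vertices evenly without inflating the shape.
# Illustrative toy only.
import numpy as np

def electrostatic_regularize(poly: np.ndarray, steps: int = 200, lr: float = 0.01) -> np.ndarray:
    """poly: (N, 2) vertices of a closed polygon (no repeated endpoint)."""
    p = poly.astype(float).copy()
    n = len(p)
    idx = np.arange(n)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]                 # pairwise differences
        dist = np.linalg.norm(diff, axis=-1) + np.eye(n)     # avoid divide-by-zero on the diagonal
        force = (diff / dist[..., None] ** 3).sum(axis=1)    # Coulomb-like repulsion on each vertex
        tangent = p[(idx + 1) % n] - p[idx - 1]              # central-difference contour tangent
        tangent /= np.linalg.norm(tangent, axis=-1, keepdims=True)
        step = lr * (force * tangent).sum(axis=-1, keepdims=True) * tangent
        p += np.clip(step, -0.02, 0.02)                      # small, clipped tangential moves
    return p

if __name__ == "__main__":
    # unevenly sampled square outline: the clustered vertices get spread apart
    square = np.array([[0, 0], [0.1, 0], [0.2, 0], [1, 0], [1, 1], [0, 1]], float)
    print(np.round(electrostatic_regularize(square), 2))
```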

79 citations


Journal ArticleDOI
TL;DR: The proposed digital steganographic technique is efficient and secure, has high capacity and low distortion, and is robust against affine transformations (which include translation, rotation, scaling, or their combined operations).
Abstract: We present an efficient digital steganographic technique for three-dimensional (3D) triangle meshes. It is based on a substitutive blind procedure in the spatial domain. The basic idea is to consider every vertex of a triangle as a message vertex. We propose an efficient data structure and an advanced jump strategy to quickly assign an order to the message vertices. We also provide a Multi-Level Embed Procedure (MLEP), including sliding, extending, and rotating levels, to embed information by shifting each message vertex according to its geometric properties. Experimental results show that the proposed technique is efficient and secure, has high capacity and low distortion, and is robust against affine transformations (which include translation, rotation, scaling, or their combined operations). The technique provides an automatic, reversible method and has proven to be feasible for steganography.
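
The flavor of such a substitutive spatial-domain embedding can be conveyed with a toy example that encodes one bit per message vertex by quantizing its position along a reference edge; this is a deliberately simplified stand-in and does not reproduce the paper's MLEP, ordering data structure, or jump strategy.

```python
# Toy illustration of spatial-domain mesh steganography: one bit is embedded
# per message vertex by quantizing the vertex's projection onto one triangle
# edge to an even (bit 0) or odd (bit 1) cell.  Simplified stand-in only.
import numpy as np

Q = 64  # quantization cells along the reference edge

def embed_bit(v, a, b, bit):
    """Shift vertex v so that its projection onto edge a-b encodes `bit`.

    v is assumed to project strictly inside the edge (0 < t < 1)."""
    e = b - a
    t = np.dot(v - a, e) / np.dot(e, e)         # position of v along the edge
    cell = int(np.floor(t * Q))
    if cell % 2 != bit:
        cell += 1 if cell % 2 == 0 else -1      # step to the nearest cell of the right parity
    t_new = (cell + 0.5) / Q                    # re-center inside the chosen cell
    return v + (t_new - t) * e                  # translate v parallel to the edge

def extract_bit(v, a, b):
    e = b - a
    t = np.dot(v - a, e) / np.dot(e, e)
    return int(np.floor(t * Q)) % 2

if __name__ == "__main__":
    a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    v = np.array([0.37, 0.2, 0.1])
    for bit in (0, 1):
        v2 = embed_bit(v, a, b, bit)
        print(bit, extract_bit(v2, a, b), np.linalg.norm(v2 - v))  # bit recovered, small distortion
```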

68 citations


Journal ArticleDOI
TL;DR: It is proved that each loop of such an optimal system of loops homotopic to a given one is a shortest loop among all simple loops in its homotopy class.
Abstract: Every compact orientable boundaryless surface M can be cut along simple loops with a common point v0, pairwise disjoint except at v0, so that the resulting surface is a topological disk; such a set of loops is called a system of loops for M. The resulting disk may be viewed as a polygon in which the sides are pairwise identified on the surface; it is called a polygonal schema. Assuming that M is a combinatorial surface, and that each edge has a given length, we are interested in a shortest (or optimal) system of loops homotopic to a given one, drawn on the vertex-edge graph of M. We prove that each loop of such an optimal system is a shortest loop among all simple loops in its homotopy class. We give an algorithm to build such a system, which has polynomial running time if the lengths of the edges are uniform. As a byproduct, we get an algorithm with the same running time to compute a shortest simple loop homotopic to a given simple loop.

Journal ArticleDOI
TL;DR: The most recent release of the Schwarz–Christoffel Toolbox for MATLAB supports new features, including an object-oriented command-line interface model, new algorithms for multiply elongated and multiple-sheeted regions, and a module for solving Laplace's equation on a polygon with Dirichlet and homogeneous Neumann conditions.
Abstract: The Schwarz–Christoffel Toolbox (SC Toolbox) for MATLAB, first released in 1994, made possible the interactive creation and visualization of conformal maps to regions bounded by polygons. The most recent release supports new features, including an object-oriented command-line interface model, new algorithms for multiply elongated and multiple-sheeted regions, and a module for solving Laplace's equation on a polygon with Dirichlet and homogeneous Neumann conditions. Brief examples are given to demonstrate the new capabilities.

Book ChapterDOI
01 Jan 2005
TL;DR: These new schemes have a variable tension parameter instead of the fixed tension parameter of the linear 4-point scheme, which allows them to remain local and at the same time achieve two important shape-preserving properties: artifact elimination and convexity preservation.
Abstract: We present several non-linear 4-point interpolatory schemes, derived from the “classical” linear 4-point scheme. These new schemes have a variable tension parameter instead of the fixed tension parameter of the linear 4-point scheme. The tension parameter is adapted locally according to the geometry of the control polygon within the 4-point stencil. This allows the schemes to remain local and at the same time to achieve two important shape-preserving properties: artifact elimination and convexity preservation. The proposed schemes are robust and have special features such as “double-knot” edges, corresponding to continuity without geometrical smoothness, and inflection-edge support for convexity preservation. A convergence proof is given, and a detailed experimental smoothness analysis indicates that the limit curves are C¹.
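
For reference, the "classical" linear 4-point scheme that these non-linear variants generalize inserts, between p_i and p_{i+1}, the point (1/2 + w)(p_i + p_{i+1}) − w(p_{i−1} + p_{i+2}) with fixed tension w = 1/16. A minimal sketch of one refinement step for a closed control polygon is given below; the local adaptation rule for w proposed in the chapter is not reproduced.

```python
# One refinement step of the classical linear 4-point interpolatory scheme:
# old points are kept, and a new point is inserted between p[i] and p[i+1]
# using the stencil (1/2 + w)(p[i] + p[i+1]) - w(p[i-1] + p[i+2]), w = 1/16.
import numpy as np

def four_point_refine(points: np.ndarray, w: float = 1.0 / 16.0) -> np.ndarray:
    """One refinement step for a closed control polygon of shape (N, 2)."""
    n = len(points)
    p_m1 = np.roll(points, 1, axis=0)    # p[i-1]
    p_p1 = np.roll(points, -1, axis=0)   # p[i+1]
    p_p2 = np.roll(points, -2, axis=0)   # p[i+2]
    new = (0.5 + w) * (points + p_p1) - w * (p_m1 + p_p2)
    out = np.empty((2 * n, 2))
    out[0::2] = points                   # interpolatory: old points are kept
    out[1::2] = new                      # new point between p[i] and p[i+1]
    return out

if __name__ == "__main__":
    curve = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
    for _ in range(4):                   # four rounds of refinement
        curve = four_point_refine(curve)
    print(len(curve), curve[:3])
```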

Proceedings ArticleDOI
17 Oct 2005
TL;DR: A new robust regular polygon detector is described whose a posteriori formulation facilitates the inclusion of additional a priori information, leading to real-time road sign detection, and which is also applied to feature detection, recovering stable features in rectilinear environments.
Abstract: This paper describes a new robust regular polygon detector. The regular polygon transform is posed as a mixture of regular polygons in a five dimensional space. Given the edge structure of an image, we derive the a posteriori probability for a mixture of regular polygons, and thus the probability density function for the appearance of a mixture of regular polygons. Likely regular polygons can be isolated quickly by discretising and collapsing the search space into three dimensions. The remaining dimensions may be efficiently recovered subsequently using maximum likelihood at the locations of the most likely polygons in the subspace. This leads to an efficient algorithm. Also the a posteriori formulation facilitates inclusion of additional a priori information leading to real-time application to road sign detection. The use of gradient information also reduces noise compared to existing approaches such as the generalised Hough transform. Results are presented for images with noise to show stability. The detector is also applied to two separate applications: real-time road sign detection for on-line driver assistance; and feature detection, recovering stable features in rectilinear environments.

Proceedings ArticleDOI
06 Nov 2005
TL;DR: This work presents a novel approach to recognizing the class of activities characterized by their rigidity in formation, for example people parades, airplane flight formations, or herds of animals, by modeling the entire group as a collective rather than focusing on each individual separately.
Abstract: Most work in human activity recognition is limited to relatively simple behaviors like sitting down, standing up, or other dramatic posture changes. Very little has been achieved in detecting more complicated behaviors, especially those characterized by the collective participation of several individuals. In this work we present a novel approach to recognizing the class of activities characterized by their rigidity in formation, for example people parades, airplane flight formations, or herds of animals. The central idea is to model the entire group as a collective rather than focusing on each individual separately. We model the formation as a 3D polygon with each corner representing a participating entity. Tracks from the entities are treated as tracks of feature points on the 3D polygon. Based on the rank of the track matrix, we can determine whether the 3D polygon under consideration behaves rigidly or undergoes non-rigid deformation. Our method is invariant to camera motion and does not require an a priori model or a training phase.
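
A minimal version of the rank test is sketched below, assuming an affine camera model under which the mean-centred tracks of a rigidly moving point set span at most a rank-3 subspace; the numerical rank threshold and tolerance are illustrative, not the paper's exact test.

```python
# Minimal version of the rank idea: stack the image tracks of the formation
# corners into a 2F x P measurement matrix (F frames, P entities).  Under an
# affine camera, mean-centred tracks of a rigidly moving point set have rank
# at most 3; a clearly larger effective rank indicates non-rigid deformation.
import numpy as np

def effective_rank(tracks: np.ndarray, rel_tol: float = 1e-2) -> int:
    """tracks: (F, P, 2) array with the image position of each entity per frame."""
    F, P, _ = tracks.shape
    W = tracks.transpose(0, 2, 1).reshape(2 * F, P)   # 2F x P measurement matrix
    W = W - W.mean(axis=1, keepdims=True)             # remove per-frame translation
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))            # count significant singular values

def is_rigid_formation(tracks: np.ndarray) -> bool:
    return effective_rank(tracks) <= 3

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.uniform(0, 10, size=(6, 2))            # 6 marchers in formation
    rigid = np.stack([base + [0.3 * f, 0.1 * f] for f in range(20)])   # pure translation
    wobbly = rigid + rng.normal(0, 0.5, rigid.shape)                   # per-entity jitter
    print(is_rigid_formation(rigid), is_rigid_formation(wobbly))       # True False
```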

Patent
19 Dec 2005
TL;DR: In this article, a method for rendering adjacent polygons is proposed, which includes determining when a first polygon and a second polygon have an abutting edge and assigning a majority status to a pixel on the edge.
Abstract: A method for rendering adjacent polygons. The method includes determining when a first polygon and a second polygon have an abutting edge. If an abutting edge exists, a majority status is assigned to a pixel on the abutting edge. A first color of the first polygon or a second color of the second polygon is then allocated to the pixel in accordance with the majority status.

Proceedings ArticleDOI
21 Apr 2005
TL;DR: A new algorithm for removing hidden surfaces from reconstruction of computer-generated holograms is presented and reconstruction of a hologram synthesized by using the presented algorithm is demonstrated.
Abstract: A new algorithm for removing hidden surfaces from reconstructions of computer-generated holograms is presented. The object used in the algorithm is defined by a surface model, and each polygon composing the object provides a mask for blocking the incident field onto the backside of the polygon. The computational cost of the proposed algorithm is 2 FFTs per polygon, achieved by handling field transmission in Fourier space and integrating the surface diffraction method for generating fields. Reconstruction of a hologram synthesized using the presented algorithm is demonstrated.

Journal ArticleDOI
TL;DR: It is proved that all codes derived from finite classical generalized quadrangles are quasi-cyclic and the explicit size of the circulant blocks in the parity-check matrix is given.
Abstract: We use the theory of finite classical generalized polygons to derive and study low-density parity-check (LDPC) codes. The Tanner graph of a generalized polygon LDPC code is highly symmetric, inherits the diameter size of the parent generalized polygon, and has the minimum (one half) diameter-to-girth ratio. We show formally that when the diameter is four, six, or eight, all codewords have even Hamming weight. When the generalized polygon additionally has an equal number of points and lines, we see that the nonregular polygon-based code construction has a minimum distance that is at least two higher than that of the dual regular polygon code of the same rate and length. A new minimum-distance bound is presented for codes from nonregular polygons of even diameter and an equal number of points and lines. Finally, we prove that all codes derived from finite classical generalized quadrangles are quasi-cyclic, and we give the explicit size of the circulant blocks in the parity-check matrix. Our simulation studies of several generalized polygon LDPC codes demonstrate powerful bit-error-rate (BER) performance when decoding is carried out via low-complexity variants of belief propagation.

Proceedings ArticleDOI
06 Jun 2005
TL;DR: This paper considers the problem of computing the visibility of a query point inside polygons with holes and presents what is claimed to be the best query-time result on this problem so far.
Abstract: In this paper, we consider the problem of computing the visibility of a query point inside polygons with holes. The goal is to perform this computation efficiently per query, with more cost in the preprocessing phase. Our algorithm is based on solutions in [13] and [2] proposed for simple polygons. In our solution, the preprocessing is done in time O(n³ log n) to construct a data structure of size O(n³). It is then possible to report the visibility polygon of any query point q in time O((1 + h′) log n + |V(q)|), in which n and h are the numbers of vertices and holes of the polygon, respectively, |V(q)| is the size of the visibility polygon of q, and h′ is an output- and preprocessing-sensitive parameter of at most min(h, |V(q)|). This is claimed to be the best query-time result on this problem so far.

Patent
04 May 2005
Abstract: An RFID tag to be attached to an object to identify the object, or a characteristic or feature thereof, from data stored in the tag accessible by an RFID reader includes a relatively flat structure having a small-aperture antenna positioned on or proximate the tag in the form of a polygon having electrically conductive sides. The flat structure, which may be fabricated as a sticker or label, incorporates a small-aperture antenna, such as a slot antenna, in the form of a polygon having electrically conductive sides. The polygon may be triangular, rectangular, square, elliptical, circular, or another polygonal figure depending on the number of its sides. A central open portion within the polygon constitutes the aperture of the antenna. An integrated circuit chip containing the electronics of the tag is secured to the flat structure within the boundary of the aperture constituting the central open portion within the polygon, and substantially equidistant from a pair of opposite sides of the polygon, with a pair of conductive impedance-matching elements of substantially equal length confronting each other in the aperture from the opposite sides. Methods of use are disclosed.

Proceedings ArticleDOI
03 Apr 2005
TL;DR: A novel algorithm for visibility ordering among non-overlapping geometric objects in complex and dynamic environments that requires no preprocessing and is applicable to all kinds of models, including polygon soups and deformable models.
Abstract: We describe a novel algorithm for visibility ordering among non-overlapping geometric objects in complex and dynamic environments. Our algorithm rearranges the objects in a back-to-front or a front-to-back order from a given viewpoint. We perform comparisons between the primitives by using occlusion queries on the GPU and exploit frame-to-frame coherence to reduce the number of occlusion queries. Our visibility ordering algorithm requires no preprocessing and is applicable to all kinds of models, including polygon soups and deformable models. We have used our algorithm for order-independent transparency computations in high-depth-complexity environments and for N-body collision culling in dynamic environments. We have implemented our algorithm on a PC with a 3.4 GHz Pentium IV CPU and an NVIDIA GeForce FX 6800 Ultra GPU and applied it to complex environments with tens or hundreds of thousands of polygons. Our algorithm can compute a visibility ordering among the objects and triangles at interactive frame rates.

Book ChapterDOI
01 Jan 2005
TL;DR: A new model for the multiple representations of vector data, called changes accumulation model, which considers the spatial representation from one scale to another as an accumulation of the set of changes is offered.
Abstract: The progressive transmission of map data over the World Wide Web provides users with a self-adaptive strategy for accessing remote data. It not only speeds up web transfer but also offers an efficient navigation guide for information acquisition. The key technology in this transmission is the efficient multiple representation of spatial data and its pre-organization on the server side. This paper aims at offering a new model for the multiple representation of vector data, called the changes accumulation model, which considers the spatial representation from one scale to another as an accumulation of a set of changes. The difference between two consecutive representations is recorded in a linear order, and through gradual addition or subtraction of “change patches” the progressive transmission is realized. As an example, the progressive transmission of area features based on this model is investigated in the project. The model is built upon the hierarchical decomposition of a polygon into a series of convex hulls or bounding rectangles, and the progressive transmission is accomplished through composition of the decomposed elements.

Proceedings ArticleDOI
29 Jun 2005
TL;DR: A new algorithm is presented that simplifies implementation and computation by operating only on the skeletons of the polyhedra instead of the multi-dimensional face lattice usually used for exact occlusion queries in 3D.
Abstract: Despite the importance of from-region visibility computation in computer graphics, efficient analytic methods are still lacking in the general 3D case. Recently, different algorithms have appeared that maintain occlusion as a complex of polytopes in Plücker space. However, they suffer from high implementation complexity, as well as high computational and memory costs, limiting their usefulness in practice. In this paper, we present a new algorithm that simplifies implementation and computation by operating only on the skeletons of the polyhedra instead of the multi-dimensional face lattice usually used for exact occlusion queries in 3D. This algorithm is sensitive to the complexity of the silhouette of each occluding object, rather than the entire polygonal mesh of each object. An intelligent feedback mechanism is presented that greatly enhances early termination by searching for apertures between query polygons. We demonstrate that our technique is several times faster than the state of the art.

Journal ArticleDOI
Yu Peng, Jun-Hai Yong, Wei-Ming Dong, Hui Zhang, Jiaguang Sun
TL;DR: A new algorithm for Boolean operations on general planar polygons is presented, using simplex theory to build its basic mathematical model; examples show that the running time required is less than one-third of that of the Rivero and Feito algorithm.
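
The Boolean operations themselves (union, intersection, difference) can be illustrated with the Shapely library as below, for example as a reference oracle when testing an implementation of such an algorithm; this snippet does not reflect the paper's simplex-theory construction.

```python
# Illustration of the planar Boolean operations (not the paper's algorithm):
# Shapely serves here as a reference implementation for comparison or testing.
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])    # axis-aligned square
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])    # overlapping square

print(a.union(b).area)                           # 28.0
print(a.intersection(b).area)                    # 4.0
print(a.difference(b).area)                      # 12.0
```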

Book ChapterDOI
11 Jul 2005
TL;DR: Using this algorithm, an approximation algorithm for interior guarding rectilinear polygons that has an approximation factor independent of the number of vertices of the polygon is obtained.
Abstract: We show a constant factor approximation algorithm for interior guarding of monotone polygons. Using this algorithm we obtain an approximation algorithm for interior guarding rectilinear polygons that has an approximation factor independent of the number of vertices of the polygon. If the size of the smallest interior guard cover is OPT for a rectilinear polygon, our algorithm produces a guard set of size O(OPT²).

Journal ArticleDOI
TL;DR: This paper presents a method for constructing smooth and bounded interpolations on any polygon, whether convex or concave in shape, or even one containing holes or isolated points in its interior, which is invariant with respect to any chosen coordinate system.
Abstract: This paper presents a method for constructing smooth and bounded interpolations on any polygon, whether convex or concave in shape, or even one containing holes or isolated points in its interior. The resulting two-dimensional function distributes the value at any given vertex or internal node over the remaining portion of the domain. The representation depends only on simple geometrical properties such as lengths and areas. Accordingly, it is invariant with respect to any chosen coordinate system. The resulting set of interpolations is smooth within the domain. Within a triangle, the behavior is akin to a linear color gradient. If necessary, linear boundary behavior can also be assured. A Java implementation is available online.

Book ChapterDOI
05 Sep 2005
TL;DR: An algorithm is proposed to determine the topology of an implicit real algebraic surface in ℝ³ by providing a curvilinear wireframe of the surface and surface patches determined by the curvilinear wireframe, which have the same topology as the surface.
Abstract: An algorithm is proposed to determine the topology of an implicit real algebraic surface in ℝ³. The algorithm consists of three steps: surface projection, projection curve topology determination, and surface patch composition. The algorithm provides a curvilinear wireframe of the surface and the surface patches determined by the curvilinear wireframe, which have the same topology as the surface. Most of the surface patches are curvilinear polygons. Some examples are used to show that our algorithm is effective.

Proceedings ArticleDOI
01 Jan 2005
TL;DR: A simple new algorithm is presented to offset multiple, non-overlapping polygons with arbitrary holes that makes use of winding numbers; the implementation is extremely simple and reliably produces correct and logically consistent results.
Abstract: In this paper we present a simple new algorithm to offset multiple, non-overlapping polygons with arbitrary holes that makes use of winding numbers. Our algorithm constructs an intermediate “raw offset curve” as input to the tessellator routines in the OpenGL Utility library (GLU), which calculates the winding number for each connected region. By construction, the invalid loops of our raw offset curve bound areas with non-positive winding numbers and thus can be removed by using the positive winding rule implemented in the GLU tessellator. The proposed algorithm takes O((n + k) log n) time and O(n + k) space, where n is the number of vertices in the input polygon and k is the number of self-intersections in the raw offset curve. The implementation is extremely simple and reliably produces correct and logically consistent results.
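
The winding-number filtering that the GLU tessellator performs internally can be illustrated at the level of point classification: compute the winding number of a point with respect to a closed, possibly self-intersecting loop and keep it only if the number is positive. A standard sketch follows; it is illustrative and not part of the paper's implementation.

```python
# Winding number of a point with respect to a closed loop, and the "positive
# winding" rule that keeps a point exactly when that number is positive.
from typing import List, Tuple

Point = Tuple[float, float]

def winding_number(p: Point, loop: List[Point]) -> int:
    """Signed number of times `loop` (closed, CCW positive) winds around p."""
    wn = 0
    px, py = p
    for i in range(len(loop)):
        (x0, y0), (x1, y1) = loop[i], loop[(i + 1) % len(loop)]
        cross = (x1 - x0) * (py - y0) - (y1 - y0) * (px - x0)
        if y0 <= py < y1 and cross > 0:
            wn += 1          # upward edge crossing with p strictly to its left
        elif y1 <= py < y0 and cross < 0:
            wn -= 1          # downward edge crossing with p strictly to its right
    return wn

def keep_positive(p: Point, loop: List[Point]) -> bool:
    return winding_number(p, loop) > 0

if __name__ == "__main__":
    ccw = [(0, 0), (4, 0), (4, 3), (0, 3)]
    cw = list(reversed(ccw))
    print(winding_number((2, 1), ccw))   # 1  -> kept by the positive rule
    print(winding_number((2, 1), cw))    # -1 -> discarded, like an invalid loop
    print(winding_number((9, 9), ccw))   # 0  -> outside
```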

Book ChapterDOI
19 Jun 2005
TL;DR: The problem of constructing a tight isothetic outer (or inner) polygon covering an arbitrarily shaped 2D object on a background grid is addressed in this paper, and a novel algorithm is proposed.
Abstract: The problem of constructing a tight isothetic outer (or inner) polygon covering an arbitrarily shaped 2D object on a background grid is addressed in this paper, and a novel algorithm is proposed. Such covers have many applications to image mining, rough sets, computational geometry, and robotics. Designing efficient algorithms for these cover problems was an open problem in the literature. The elegance of the proposed algorithm lies in utilizing the inherent combinatorial properties of the relative arrangement of the object and the grid lines. The shape and the relative error of the polygonal cover can be controlled by changing the granularity of the grid. Experimental results on various complex objects with variable grid sizes have been reported to demonstrate the versatility, correctness, and speed of the algorithm.
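
A simplified stand-in for the outer cover construction is sketched below, assuming it suffices to flag every grid cell that contains at least one object pixel and take the union of flagged cells; the paper's algorithm instead derives the cover polygon directly from combinatorial properties of the object-grid arrangement, but the granularity trade-off is the same.

```python
# Simplified stand-in for the outer isothetic cover: flag every g x g grid
# cell containing at least one object pixel and take the union of the flagged
# cells.  Tightness of the cover is controlled by the grid granularity g.
import numpy as np

def isothetic_outer_cover(mask: np.ndarray, g: int) -> np.ndarray:
    """mask: 2D boolean object image.  Returns a boolean image of the cover."""
    h, w = mask.shape
    H, W = -(-h // g), -(-w // g)                         # number of grid cells (ceil division)
    padded = np.zeros((H * g, W * g), dtype=bool)
    padded[:h, :w] = mask
    cells = padded.reshape(H, g, W, g).any(axis=(1, 3))   # a cell is set if any of its pixels is set
    return np.kron(cells, np.ones((g, g), dtype=bool))[:h, :w]

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    disk = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2     # arbitrary 2D object
    for g in (16, 8, 4):
        cover = isothetic_outer_cover(disk, g)
        # the cover area shrinks toward the object area as the grid gets finer
        print(g, int(disk.sum()), int(cover.sum()))
```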

Journal ArticleDOI
TL;DR: In this paper, a hinged dissection of all edge-to-edge gluings of n congruent copies of a polygon P that join corresponding edges of P is presented.
Abstract: A hinged dissection of a set of polygons S is a collection of polygonal pieces hinged together at vertices that can be rotated into any member of S. We present a hinged dissection of all edge-to-edge gluings of n congruent copies of a polygon P that join corresponding edges of P. This construction uses kn pieces, where k is the number of vertices of P. When P is a regular polygon, we show how to reduce the number of pieces to ⌈k/2⌉ (n - 1). In particular, we consider polyominoes (made up of unit squares), polyiamonds (made up of equilateral triangles), and polyhexes (made up of regular hexagons). We also give a hinged dissection of all polyabolos (made up of right isosceles triangles), which do not fall under the general result mentioned above. Finally, we show that if P can be hinged into Q, then any edge-to-edge gluing of n congruent copies of P can be hinged into any edge-to-edge gluing of n congruent copies of Q.