
Showing papers on "Polygon published in 2002"


Posted Content
TL;DR: In this article, the scaling limit of uniform measures on planar self-avoiding polygonal loops is conjectured based on the stochastic Loewner evolution and non-disconnecting Brownian motions.
Abstract: A planar self-avoiding walk (SAW) is a nearest neighbor random walk path in the square lattice with no self-intersection. A planar self-avoiding polygon (SAP) is a loop with no self-intersection. In this paper we present conjectures for the scaling limit of the uniform measures on these objects. The conjectures are based on recent results on the stochastic Loewner evolution and non-disconnecting Brownian motions. New heuristic derivations are given for the critical exponents for SAWs and SAPs.
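The objects whose scaling limits are conjectured here are easy to illustrate by brute force. The sketch below (not from the paper) enumerates self-avoiding walks on the square lattice by depth-first search; the small-n counts 4, 12, 36, 100 are the well-known values:

```python
# Brute-force enumeration of planar self-avoiding walks (SAWs) on the
# square lattice.  Feasible only for small n: the number of SAWs grows
# exponentially (connective constant ~2.638).

def count_saws(n, path=((0, 0),)):
    """Count n-step self-avoiding walks starting at the origin."""
    if n == 0:
        return 1
    x, y = path[-1]
    total = 0
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if (nx, ny) not in path:          # no self-intersection allowed
            total += count_saws(n - 1, path + ((nx, ny),))
    return total

print([count_saws(n) for n in range(1, 5)])  # [4, 12, 36, 100]
```

A self-avoiding polygon is the closed-loop analogue: the same enumeration restricted to paths that return to the origin at step n.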

223 citations


Journal ArticleDOI
TL;DR: In this article, the authors measured seasonal movements of the active layer and subjacent permafrost in numerous ice-wedge polygons that have varied in age, type, crack frequency, and topographic location.
Abstract: Thermally induced seasonal movements of the active layer and subjacent permafrost have been measured in numerous ice-wedge polygons that have varied in age, type, crack frequency, and topographic location. The field observations show that, in winter, thermal contraction, which is inward, is constrained or vanishes at the polygon centres but, in summer, thermal expansion, which is outward, is unconstrained at the ice-wedge troughs. Therefore, there tends to be a small net summer transport of the active layer, to varying depths, into the ice-wedge troughs. The movement has been observed in all polygons studied. The slow net transport of material into the ice-wedge troughs has implications for: permafrost aggradation and the growth of syngenetic wedges in some troughs; the palaeoclimatic reconstruction of some ice-wedge casts; and the interpretation of polygon stratigraphy based upon the assumption that the polygon material has accumulated in situ.

171 citations


Journal ArticleDOI
TL;DR: A means for linking field and object representations of geographical space based on a series of mappings, where locations in a continuous field are mapped to discrete objects is described.
Abstract: This paper describes a means for linking field and object representations of geographical space. The approach is based on a series of mappings, where locations in a continuous field are mapped to discrete objects. An object in this context is a modeler's conceptualization, as in a viewshed, highway corridor or biological reserve. An object can be represented as a point, line, polygon, network, or other complex spatial type. The relationship between locations in a field and spatial objects may take the form of one-to-one, one-to-many, many-to-one, or many-to-many. We present a typology of object fields and discuss issues in their construction, storage, and analysis. Example applications are presented and directions for further research are offered.

167 citations


Journal ArticleDOI
01 Jan 2002
TL;DR: A procedure is developed for simultaneously decomposing the two polygons such that a "mixed" objective function is minimized. Optimal decomposition algorithms significantly expedite the Minkowski-sum computation, but the decomposition itself is expensive to compute.
Abstract: Several algorithms for computing the Minkowski sum of two polygons in the plane begin by decomposing each polygon into convex subpolygons. We examine different methods for decomposing polygons by their suitability for efficient construction of Minkowski sums. We study and experiment with various well-known decompositions as well as with several new decomposition schemes. We report on our experiments with various decompositions and different input polygons. Among our findings are that in general: (i) triangulations are too costly, (ii) what constitutes a good decomposition for one of the input polygons depends on the other input polygon - consequently, we develop a procedure for simultaneously decomposing the two polygons such that a "mixed" objective function is minimized, (iii) there are optimal decomposition algorithms that significantly expedite the Minkowski-sum computation, but the decomposition itself is expensive to compute - in such cases simple heuristics that approximate the optimal decomposition perform very well.
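The primitive that convex decomposition buys is a cheap pairwise sum. As a minimal illustration (not the paper's code), the Minkowski sum of two convex polygons can be computed in linear time after sorting, by merging the two edge-vector sequences in angular order; counter-clockwise vertex order is assumed:

```python
# Minkowski sum of two *convex* polygons by merging their edge vectors
# by polar angle -- the primitive that convex decomposition reduces the
# general problem to.  A sketch; vertices are assumed counter-clockwise.
import math

def minkowski_sum_convex(P, Q):
    def edges(poly):
        return [(poly[(i + 1) % len(poly)][0] - poly[i][0],
                 poly[(i + 1) % len(poly)][1] - poly[i][1])
                for i in range(len(poly))]

    def lowest(poly):  # bottom-most, then left-most, vertex
        return min(poly, key=lambda v: (v[1], v[0]))

    # Merge the edge sequences by angle in [0, 2*pi); both polygons'
    # first edges (out of their lowest vertices) have angle in [0, pi).
    es = sorted(edges(P) + edges(Q),
                key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))
    x = lowest(P)[0] + lowest(Q)[0]
    y = lowest(P)[1] + lowest(Q)[1]
    result = []
    for dx, dy in es:
        result.append((x, y))
        x, y = x + dx, y + dy
    return result

# Sum of two unit squares is a 2x2 square (collinear vertices kept).
print(minkowski_sum_convex([(0, 0), (1, 0), (1, 1), (0, 1)],
                           [(0, 0), (1, 0), (1, 1), (0, 1)]))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
```

For non-convex inputs, each polygon is first decomposed into convex pieces, the pairwise sums are computed as above, and their union is taken — which is why the choice of decomposition dominates the total cost.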

125 citations


Journal ArticleDOI
TL;DR: A fast and efficient implementation of a bottom-left (BL) placement algorithm for polygon packing that produces an optimal BL layout in the sense that the positions considered are guaranteed to contain the bottom-left position of the infinite set of possibilities.

125 citations


Journal ArticleDOI
TL;DR: An on-line strategy that enables a mobile robot with vision to explore an unknown simple polygon is presented and it is proved that the resulting tour is less than 26.5 times as long as the shortest watchman tour that could be computed off-line.
Abstract: We present an on-line strategy that enables a mobile robot with vision to explore an unknown simple polygon. We prove that the resulting tour is less than 26.5 times as long as the shortest watchman tour that could be computed off-line. Our analysis is founded on a novel geometric structure called the angle hull. Let D be a connected region inside a simple polygon, P. We define the angle hull of D, ${\cal AH}(D)$, to be the set of all points in P that can see two points of D at a right angle. We show that the perimeter of ${\cal AH}(D)$ cannot exceed in length the perimeter of D by more than a factor of 2. This upper bound is tight.

123 citations


Patent
02 Dec 2002
TL;DR: In this article, a method and apparatus are provided for transforming 3D geometric data, such as polygon data formed of polygons, into volumetric data (14) formed of voxels.
Abstract: A method and apparatus are provided for transforming 3D geometric data, such as polygon data (16) formed of polygons (18), into volumetric data (14) formed of voxels (12). According to the method, 3D geometric data to be converted to voxel data are acquired, and the resolution of a final voxel grid to be produced is obtained (e.g., user-defined). Then, each geometric unit (e.g., a polygon) in the 3D geometric data is mapped (or scan-converted) to an imaginary voxel grid having a higher resolution than that of the final voxel grid. Next, for the geometric units mapped to the imaginary voxels that subdivide one final (actual) voxel into smaller sub-volumes, a weighted average of the attribute values (color, normal, intensity, etc.) is obtained. The weighted average is stored as the attribute value of the final voxel.
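The second step of the patent's pipeline — averaging fine-grid attributes down to the final voxel grid — reduces to a simple grouped average. The sketch below (grid contents hypothetical, and using uniform weights for simplicity) shows that step in isolation:

```python
# Downsample a sparse fine-grid attribute field to a coarse voxel grid
# by averaging the attribute values of the fine cells inside each final
# voxel.  A sketch of the weighted-average step with uniform weights.

def downsample(fine, factor):
    """fine: dict {(i, j, k): attribute value} on the fine grid.
    Returns {(I, J, K): averaged value} on the coarse grid."""
    sums, counts = {}, {}
    for (i, j, k), val in fine.items():
        key = (i // factor, j // factor, k // factor)
        sums[key] = sums.get(key, 0.0) + val
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

# Four occupied fine cells, all inside final voxel (0, 0, 0) at factor 2.
fine = {(0, 0, 0): 1.0, (1, 0, 0): 3.0, (0, 1, 0): 5.0, (1, 1, 1): 7.0}
print(downsample(fine, 2))  # {(0, 0, 0): 4.0}
```

In the patent the weights may depend on coverage rather than being uniform; the grouping-by-integer-division pattern is the same either way.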

120 citations


Patent
07 Jun 2002
TL;DR: In this article, the size of polygons for composing an object in computer graphics is compared with a predetermined threshold value, and each polygon having a size which exceeds the threshold value is divided, and the resultant polygons are compared again.
Abstract: The size of polygons for composing an object in computer graphics is compared with a predetermined threshold value. Each polygon not exceeding the threshold value is directly converted into pixels. On the other hand, each polygon having a size which exceeds the threshold value is divided, and the resultant polygons are compared again. This ensures efficient processing of the polygons without causing an expansion of the circuit configuration or a significant increase in cost. The processing of the polygons will never be affected by the size of the polygons.
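The compare-split-recompare loop described above can be sketched in a few lines. This is an illustrative version (not the patent's circuit): triangles stand in for general polygons, and splitting at the midpoint of the longest edge is one plausible division rule.

```python
# Recursively subdivide a triangle until every piece is at or below an
# area threshold -- the compare/split/compare-again loop, sketched.

def area(t):
    (ax, ay), (bx, by), (cx, cy) = t
    return abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0

def subdivide(tri, threshold):
    if area(tri) <= threshold:
        return [tri]                      # small enough: rasterize directly
    a, b, c = tri
    # Split the longest edge at its midpoint.
    edges = [((a, b), c), ((b, c), a), ((c, a), b)]
    (p, q), r = max(edges, key=lambda e: (e[0][0][0] - e[0][1][0]) ** 2
                                       + (e[0][0][1] - e[0][1][1]) ** 2)
    m = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    return subdivide((p, m, r), threshold) + subdivide((m, q, r), threshold)

tris = subdivide(((0, 0), (4, 0), (0, 4)), threshold=1.0)
print(len(tris), max(area(t) for t in tris))  # 8 pieces, each of area 1.0
```

Because each midpoint split halves the area exactly, an area-8 triangle yields eight area-1 pieces regardless of which edges are chosen.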

107 citations


Journal ArticleDOI
TL;DR: Two new related metrics, the geodesic width and the link width, for measuring the "distance" between two nonintersecting polylines in the plane are introduced and used to solve two problems: computing a continuous transformation that "morphs" one polyline into another, and constructing a corresponding morphing strategy.
Abstract: We introduce two new related metrics, the geodesic width and the link width, for measuring the "distance" between two nonintersecting polylines in the plane. If the two polylines have n vertices in total, we present algorithms to compute the geodesic width of the two polylines in O(n^2 log n) time using O(n^2) space, and the link width in O(n^3 log n) time using O(n^2) working space. Our computation of these metrics relies on two closely related combinatorial structures: the shortest-path diagram and the link diagram of a simple polygon. The shortest-path (resp., link) diagram encodes the Euclidean (resp., link) shortest path distance between all pairs of points on the boundary of the polygon. We use these algorithms to solve two problems:

97 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a method of decomposing a simple polygon that allows the preprocessing of the polygon to efficiently answer visibility queries of various forms in an output sensitive manner.
Abstract: We present a method of decomposing a simple polygon that allows the preprocessing of the polygon to efficiently answer visibility queries of various forms in an output-sensitive manner. Using O(n^3 log n) preprocessing time and O(n^3) space, we can, given a query point q inside or outside an n-vertex polygon, recover the visibility polygon of q in O(log n + k) time, where k is the size of the visibility polygon, and recover the number of vertices visible from q in O(log n) time. The key notion behind the decomposition is the succinct representation of visibility regions, and tight bounds on the number of such regions. These techniques are extended to handle other types of queries, such as visibility of fixed points other than the polygon vertices, and for visibility from a line segment rather than a point. Some of these results have been obtained independently by Guibas, Motwani and Raghavan [18].

97 citations


Proceedings ArticleDOI
27 Oct 2002
TL;DR: A generalization of the geometry coder by Touma and Gotsman (1998) to polygon meshes is presented, which improves geometry compression by 10 to 40 percent depending on how polygonal the mesh is and on the quality of the polygons.
Abstract: In this paper we present a generalization of the geometry coder by Touma and Gotsman [34] to polygon meshes. We let the polygon information dictate where to apply the parallelogram rule that they use to predict vertex positions. Since polygons tend to be fairly planar and fairly convex, it is beneficial to make predictions within a polygon rather than across polygons. This, for example, avoids poor predictions due to a crease angle between polygons. Up to 90 percent of the vertices can be predicted this way. Our strategy improves geometry compression by 10 to 40 percent depending on (a) how polygonal the mesh is and (b) on the quality (planarity/convexity) of the polygons.
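The parallelogram rule itself is a one-liner: given three already-decoded vertices a, b, c of a polygon, the next vertex is predicted as a + c − b (completing the parallelogram), and only the small corrective vector is entropy-coded. A minimal sketch:

```python
# Parallelogram prediction as used in Touma/Gotsman-style geometry
# coders: the next vertex is predicted to complete the parallelogram
# a, b, c, d with d = a + c - b; only the residual is encoded.

def parallelogram_predict(a, b, c):
    return tuple(ai + ci - bi for ai, bi, ci in zip(a, b, c))

# A perfectly planar, convex quad: the prediction error is zero.
a, b, c = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)
actual = (0.0, 1.0, 0.0)
pred = parallelogram_predict(a, b, c)
residual = tuple(x - p for x, p in zip(actual, pred))
print(pred, residual)  # (0.0, 1.0, 0.0) (0.0, 0.0, 0.0)
```

The paper's observation is that when a, b, c, d lie inside one planar, convex polygon the residual is near zero, whereas predicting across a crease between two polygons can make it large — hence the gain from letting the polygon information steer where the rule is applied.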

Patent
Steve W. Bowes
20 Feb 2002
TL;DR: An aberration mark for use in an optical photolithography system, and a method for using it to estimate overlay errors and optical aberrations, are proposed in this article.
Abstract: An aberration mark for use in an optical photolithography system, and a method for estimating overlay errors and optical aberrations. The aberration mark includes an inner polygon pattern and an outer polygon pattern, wherein each of the inner and outer polygon patterns include a center, and two sets of lines and spaces having a different feature size and pitch that surround the outer polygon pattern. The aberration mark can be used to estimate overlay errors and optical aberrations. In some embodiments, the mark can also be used with scatterometry or scanning electron microscope devices. In other embodiments, the mark can be used to monitor aberrations of a lens in an optical photolithography system.

Proceedings ArticleDOI
M. Jager, Bernhard Nebel
07 Aug 2002
TL;DR: The paper describes a dynamic and decentralized method to partition a certain area among multiple robots that does not need any global synchronization and does not require a global communication network.
Abstract: If multiple cleaning robots are used to cooperatively clean a larger room, e.g. an airport, the room must be partitioned among the robots. The paper describes a dynamic and decentralized method to partition a certain area among multiple robots. The area is divided into polygons, which are allocated by the robots. After a robot has been allocated a certain polygon, it is responsible for cleaning the polygon. The method described in the paper does not need any global synchronization and does not require a global communication network.

Patent
18 Dec 2002
TL;DR: In this article, a 3-D polygonal model of a 3-D irregular volume is constructed within a GIS platform by introducing data relating to the volume, estimating at least one 2-D polygon representing the volume's lateral boundary, estimating irregular surfaces representing the volume's top and bottom, clipping the estimated surfaces with the estimated 2-D polygon, and constructing multipatches from a network of triangular panels or a grid of regularly spaced polylineZs.
Abstract: Methods, and models therefrom, to construct a 3-D polygonal model of a 3-D irregular volume within a GIS platform that include introducing data relating to the volume; estimating at least one 2-D polygon representing the volume's lateral boundary, estimating irregular surfaces representing the volume's top and bottom; clipping the estimated surfaces with the estimated 2-D polygon; constructing multipatches of a network of triangular panels or a grid of regularly spaced polylineZs; and joining the attributes to the model.

Proceedings ArticleDOI
26 Jul 2002
TL;DR: The essence of the idea is to represent 3-D polygons and the stabbing lines connecting them in a 5-D Euclidean space derived from Plücker space and then to perform geometric subtractions of occluded lines from the set of potential stabbing lines to find the visible polygons.
Abstract: To pre-process a scene for the purpose of visibility culling during walkthroughs, it is necessary to solve visibility from all the elements of a finite partition of viewpoint space. Many conservative and approximate solutions have been developed that solve for visibility rapidly. The idealised exact solution for general 3D scenes has often been regarded as computationally intractable. Our exact algorithm for finding the visible polygons in a scene from a region is a computationally tractable pre-process that can handle scenes of the order of millions of polygons. The essence of our idea is to represent 3-D polygons and the stabbing lines connecting them in a 5-D Euclidean space derived from Plücker space and then to perform geometric subtractions of occluded lines from the set of potential stabbing lines. We have built a query architecture around this query algorithm that allows for its practical application to large scenes. We have tested the algorithm on two different types of scene: despite a large constant computational overhead, it is highly scalable, with a time dependency close to linear in the output produced.
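The Plücker embedding behind this 5-D formulation is standard and easy to demonstrate: a 3-D line through two points gets six coordinates (direction and moment), and the permuted "side" product of two lines is zero exactly when they are coplanar (meet or are parallel) — the incidence test that stabbing-line computations build on. A minimal sketch, not the paper's implementation:

```python
# Lines in 3-D as Plucker coordinates (d, m), with the "side" product
# as the coplanarity/incidence test used in stabbing-line geometry.

def plucker(p, q):
    """Plucker coordinates of the line through points p and q."""
    d = tuple(qi - pi for pi, qi in zip(p, q))   # direction q - p
    m = (p[1] * q[2] - p[2] * q[1],              # moment p x q
         p[2] * q[0] - p[0] * q[2],
         p[0] * q[1] - p[1] * q[0])
    return d, m

def side(l1, l2):
    """Zero iff the two lines are coplanar (intersect or are parallel)."""
    (d1, m1), (d2, m2) = l1, l2
    return (sum(a * b for a, b in zip(d1, m2))
            + sum(a * b for a, b in zip(d2, m1)))

x_axis = plucker((0, 0, 0), (1, 0, 0))
meets = plucker((0, 0, 0), (0, 1, 0))   # intersects the x-axis at the origin
skew = plucker((0, 1, 0), (0, 1, 1))    # skew to the x-axis
print(side(x_axis, meets), side(x_axis, skew))  # 0 and nonzero
```

The sign of the side product further tells on which side one line passes the other, which is what lets occluded line sets be carved out by half-space subtractions in the 5-D space.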

Journal ArticleDOI
01 Sep 2002
TL;DR: A scheme of control polygons is introduced to design topological skeletons for vector fields of arbitrary topology, and this is applied to topology-preserving compression of vector fields with a simple topology.
Abstract: We introduce a scheme of control polygons to design topological skeletons for vector fields of arbitrary topology. Based on this we construct piecewise linear vector fields of exactly the topology specified by the control polygons. This way a controlled construction of vector fields of any topology is possible. Finally we apply this method to topology-preserving compression of vector fields with a simple topology.

Journal Article
TL;DR: In this paper, the authors give an exact geometry kernel for conic arcs, algorithms for exact computation with low-degree algebraic numbers, and an algorithm for computing the arrangement of conic arcs that immediately leads to a realization of regularized boolean operations on conic polygons.
Abstract: We give an exact geometry kernel for conic arcs, algorithms for exact computation with low-degree algebraic numbers, and an algorithm for computing the arrangement of conic arcs that immediately leads to a realization of regularized boolean operations on conic polygons. A conic polygon, or polygon for short, is anything that can be obtained from linear or conic halfspaces (= the set of points where a linear or quadratic function is non-negative) by regularized boolean operations. The algorithm and its implementation are complete (they can handle all cases), exact (they give the mathematically correct result), and efficient (they can handle inputs with several hundred primitives).

Patent
24 Oct 2002
TL;DR: In this paper, a method for determining critical area for a semiconductor design layout is disclosed, where the critical area is used to predict yield of the semiconductor device fabricated from such a design layout.
Abstract: Disclosed are mechanisms for efficiently and accurately calculating critical area. In general terms, a method for determining a critical area for a semiconductor design layout is disclosed. The critical area is utilizable to predict yield of a semiconductor device fabricated from such layout. A semiconductor design layout having a plurality of features is first provided. The features have a plurality of polygon shapes which include nonrectangular polygon shapes. Each feature shape has at least one attribute or artifact, such as a vertex or edge. A probability of fail function is calculated based on at least a distance between two feature shape attributes or artifacts. By way of example implementations, a distance between two neighboring feature edges (or vertices) or a distance between two feature edges (or vertices) of the same feature is first determined and then used to calculate the probability of fail function. In a specific aspect, the distances are first used to determine midlines between neighboring features or midlines within a same feature shape, and the midlines are then used to determine the probability of fail function. A critical area of the design layout is then determined based on the determined probability of fail function. In specific implementations, the defect type is a short type defect or an open type defect. In a preferred implementation, the features may have any suitable polygonal shape, as is typical in a design layout.

Proceedings ArticleDOI
26 Jul 2002
TL;DR: This work proposes a novel point rendering technique that yields good image quality while fully making use of hardware acceleration and shows how to use off-the-shelf hardware to draw elliptical Gaussian splats oriented according to normals and to perform texture filtering.
Abstract: High quality point rendering methods have been developed in recent years. A common drawback of these approaches is the lack of hardware support. We propose a novel point rendering technique that yields good image quality while fully making use of hardware acceleration. Previous research revealed various advantages and drawbacks of point rendering over traditional rendering. Thus, a guideline in our algorithm design has been to allow both primitive types simultaneously and dynamically choose the best suited for rendering. An octree-based spatial representation, containing both triangles and sampled points, is used for level-of-detail and visibility calculations. Points in each block are stored in a generalized layered depth image. McMillan's algorithm is extended and hierarchically applied in the octree to warp overlapping Gaussian fuzzy splats in occlusion-compatible order, and hence z-buffer tests are avoided. We show how to use off-the-shelf hardware to draw elliptical Gaussian splats oriented according to normals and to perform texture filtering. The result is a hybrid polygon-point system with increased efficiency compared to previous approaches.

Proceedings ArticleDOI
16 Nov 2002
TL;DR: It is proved that each loop of such an optimal system homotopic to a given one is a shortest loop among all simple loops in its homotopy class.
Abstract: Every compact orientable boundaryless surface M can be cut along simple loops with a common point v0, pairwise disjoint except at v0, so that the resulting surface is a topological disk; such a set of loops is called a fundamental system of loops for M. The resulting disk is a polygon in which the edges are pairwise identified on the surface; it is called a polygonal schema. Assuming that M is triangulated, and that each edge has a given length, we are interested in a shortest (or optimal) system homotopic to a given one, drawn on the vertex-edge graph of M. We prove that each loop of such an optimal system is a shortest loop among all simple loops in its homotopy class. We give a polynomial (under some reasonable assumptions) algorithm to build such a system. As a byproduct, we get a polynomial algorithm to compute a shortest simple loop homotopic to a given simple loop.

Journal ArticleDOI
TL;DR: An algorithm for approximating arbitrary NURBS curves with biarcs is presented, which is most useful in numerical control to drive the cutter along straight line or circular paths.
Abstract: An algorithm for approximating arbitrary NURBS curves with biarcs is presented. The main idea is to approximate the NURBS curve with a polygon, and then to approximate the polygon with biarcs to within the required tolerance. The method uses a parametric formulation of biarcs appropriate in geometric design using parametric curves. The method is most useful in numerical control to drive the cutter along straight line or circular paths.
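The first stage — replacing the curve by a polygon within tolerance — can be sketched independently of the biarc fitting. The illustration below (not the paper's algorithm) uses a quadratic Bézier as a stand-in for a general NURBS curve and subdivides recursively until a simple flatness test passes:

```python
# Approximate a parametric curve by a polygon (polyline) within a
# tolerance, via recursive de Casteljau subdivision of a quadratic
# Bezier -- a stand-in sketch for the NURBS-to-polygon stage.

def flatness(p0, p1, p2):
    """Distance from the control point p1 to the chord p0-p2."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    num = abs((x2 - x0) * (y0 - y1) - (x0 - x1) * (y2 - y0))
    den = ((x2 - x0) ** 2 + (y2 - y0) ** 2) ** 0.5
    return num / den if den else 0.0

def to_polyline(p0, p1, p2, tol):
    if flatness(p0, p1, p2) <= tol:
        return [p0, p2]
    mid = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    l, r = mid(p0, p1), mid(p1, p2)      # de Casteljau subdivision
    m = mid(l, r)                         # point on the curve at t = 1/2
    left = to_polyline(p0, l, m, tol)
    return left[:-1] + to_polyline(m, r, p2, tol)

pts = to_polyline((0, 0), (1, 2), (2, 0), tol=0.05)
print(len(pts))  # more vertices than a coarse-tolerance run produces
```

Each polygon edge (or pair of edges) is then replaced by a biarc — two circular arcs meeting tangentially — which is what lets a numerically controlled cutter follow the path with only line and arc moves.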

Proceedings ArticleDOI
01 Jan 2002
TL;DR: This paper defines and evaluates a new, easily computable measure of convexity for polygons and proves several of its desirable properties.
Abstract: Convexity estimators are commonly used in the analysis of shape. In this paper we define and evaluate a new easily computable measure of convexity for polygons. Let P be an arbitrary polygon. If P1(P, α) denotes the perimeter, in the sense of the l1 metric, of the polygon obtained by rotating P by angle α with the origin as the center of the applied rotation, and if P2(R(P, α)) denotes the Euclidean perimeter of the minimal rectangle having edges parallel to the coordinate axes which includes such a rotated polygon, then we show that the minimum over α of the ratio of these two perimeters can be used as an estimate for the convexity of P. Several desirable properties of this measure are proved as well.
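The measure can be sketched numerically. The key fact is that for a convex polygon the l1 (taxicab) perimeter equals the perimeter of its axis-aligned bounding rectangle at every rotation, so the minimum over angles of the ratio rectangle-perimeter / l1-perimeter is 1 exactly for convex polygons and drops below 1 otherwise. The sampling over rotation angles below is an approximation of that minimum, not the paper's exact computation:

```python
# Convexity estimate: min over rotation angles of the ratio between the
# axis-aligned bounding rectangle's perimeter and the polygon's l1
# perimeter.  Equals 1 iff the polygon is convex; sampled here.
import math

def convexity(poly, steps=180):
    best = 1.0
    for s in range(steps):
        a = math.pi / 2 * s / steps       # [0, pi/2) suffices by symmetry
        c, sn = math.cos(a), math.sin(a)
        pts = [(x * c - y * sn, x * sn + y * c) for x, y in poly]
        l1 = sum(abs(pts[i][0] - pts[i - 1][0])
                 + abs(pts[i][1] - pts[i - 1][1]) for i in range(len(pts)))
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        rect = 2 * (max(xs) - min(xs) + max(ys) - min(ys))
        best = min(best, rect / l1)
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
l_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
print(convexity(square), convexity(l_shape))  # ~1.0, and clearly below 1
```

Note that the L-shape scores exactly 1 at α = 0 (it is monotone in both axis directions); only the minimum over rotations exposes its non-convexity, which is why the rotation is essential to the measure.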

Journal ArticleDOI
TL;DR: Statically stable gaits are possible for quadrupeds but do not seem to be used, and physical and mathematical models have shown that bipedal gaits can be dynamically stable.
Abstract: For a standing animal to be statically stable, a vertical line through its centre of mass must pass through the polygon of support defined by its feet. Statically stable gaits are possible for quadrupeds but do not seem to be used. Physical and mathematical models have shown that bipedal gaits can be dynamically stable. Accelerations and decelerations of animals may be limited by muscle strength, by the coefficient of friction with the ground or by considerations of stability. Cornering ability similarly may be limited by strength or by the coefficient of friction. It may be faster to use a longer route involving corners of larger radius than a shorter one with sharper corners.
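The static-stability criterion in the first sentence is directly computable: project the centre of mass vertically onto the ground plane and test whether it lies inside the support polygon of the feet. A minimal sketch using a standard ray-casting point-in-polygon test (the foot coordinates are hypothetical):

```python
# Static stability: the vertical projection of the centre of mass must
# fall inside the support polygon defined by the feet.

def inside(point, poly):
    """Ray-casting point-in-polygon test in the ground plane."""
    x, y = point
    n, hit = len(poly), False
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        # Toggle on each edge the horizontal ray from (x, y) crosses.
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

# Four feet of a standing quadruped (ground-plane coordinates).
feet = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0), (0.0, 2.0)]
print(inside((0.5, 1.0), feet))   # True  -> statically stable
print(inside((1.5, 1.0), feet))   # False -> would tip over
```

A walking quadruped has at most three feet down at a time, shrinking the support polygon to a triangle — which is why statically stable quadrupedal gaits are possible but restrictive, and why dynamic stability matters.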

Patent
28 Sep 2002
TL;DR: In this article, an automatic three-dimensional structure shape generation apparatus for automatically generating the shape of a 3D structure from a plurality of points having 3D coordinates containing height information is presented.
Abstract: An automatic three-dimensional structure shape generation apparatus for automatically generating the shape of a three-dimensional structure from a plurality of points having three-dimensional coordinates containing height information includes means for constituting a point group by collecting points such that three-dimensional distances between the points are within a predetermined threshold or two-dimensional distances and height differences between the points are within predetermined thresholds, means for detecting a polygon that includes the points of the point group at a minimum area from at least one of a plurality of predetermined polygons, and means for generating an outer shape or a rooftop shape of the three-dimensional structure from the polygon having the minimum area.
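The first means in the claim — collecting points into a group when their pairwise distances are within a threshold — is a transitive-closure clustering, naturally expressed with union-find. The sketch below uses hypothetical points and thresholds and shows only that grouping step, not the polygon fitting:

```python
# Group 3-D points into clusters where points are joined whenever their
# pairwise distance is within a threshold (union-find with path halving).

def group_points(points, threshold):
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    t2 = threshold ** 2
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= t2:
                parent[find(i)] = find(j)   # union the two clusters

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())

pts = [(0, 0, 10), (1, 0, 10), (0, 1, 10), (50, 50, 12)]
print(len(group_points(pts, threshold=2.0)))  # 2 groups
```

Each resulting group would then be tested against the apparatus's set of predetermined polygons to find the one enclosing it with minimum area, yielding the structure's footprint or rooftop shape.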

Journal ArticleDOI
TL;DR: This article developed probabilistic pattern association tests that are appropriate when edge effects are present, polygon size is heterogeneous, and the number of polygons varies from one classification to another.
Abstract: Edge effects pervade natural systems, and the processes that determine spatial heterogeneity (e.g. physical, geochemical, biological, ecological factors) occur on diverse spatial scales. Hence, tests for association between spatial patterns should be unbiased by edge effects and be based on null spatial models that incorporate the spatial heterogeneity characteristic of real-world systems. This paper develops probabilistic pattern association tests that are appropriate when edge effects are present, polygon size is heterogeneous, and the number of polygons varies from one classification to another. The tests are based on the amount of overlap between polygons in each of two partitions. Unweighted and area-weighted versions of the statistics are developed and verified using scenarios representing both polygon overlap and avoidance at different spatial scales and for different distributions of polygon sizes. These statistics were applied to Soda Butte Creek, Wyoming, to determine whether stream microhabitats, such as riffles, pools and glides, can be identified remotely using high spatial resolution hyperspectral imagery. These new “spatially explicit” techniques provide information and insights that cannot be obtained from the spectral information alone.

Journal ArticleDOI
TL;DR: In this article, exactly solvable two-dimensional polygon models, counted by perimeter and area, are described by q-algebraic functional equations, and techniques to extract the scaling behaviour of these models up to arbitrary order and apply them to some examples.
Abstract: Exactly solvable two-dimensional polygon models, counted by perimeter and area, are described by q-algebraic functional equations. We provide techniques to extract the scaling behaviour of these models up to arbitrary order and apply them to some examples. These are then used to analyze the unsolved model of self-avoiding polygons, where we numerically confirm predictions about its scaling function and its first two corrections to scaling.

Book ChapterDOI
01 Jan 2002
TL;DR: In Part III of this book, it is proved that every Moufang polygon is isomorphic to one of the Moufang polygons described in this chapter.
Abstract: In this chapter, we describe nine families of Moufang polygons. In Part III of this book, we prove that every Moufang polygon is isomorphic to one of the Moufang polygons described in this chapter.

Book ChapterDOI
01 Jan 2002
TL;DR: This paper focuses on solving some limitations of the existing GAP-tree and explores two extensions: the first enables the handling of a disjoint polygonal cluster and allows aggregation operations to be performed; the second improves the removal of insignificant objects by separating them into parts around an adjusted skeleton and assigning these parts to different neighbours.
Abstract: The Generalized Area Partitioning tree (GAP-tree) is a model that supports on-the-fly generalisation of planar partitions of polygon objects. This paper focuses on solving some limitations of the existing GAP-tree and explores two extensions. The first extension enables the handling of a disjoint polygonal cluster (within the partition) and allows aggregation operations to be performed. The skeleton partitioning model, which is based on the constrained Delaunay triangulation for the polygonal cluster, is used to define the bridge areas between aggregated polygons. The second extension involves the improvement of removing insignificant objects by separating them into parts around the adjusted skeleton and assigning these parts to different neighbours. The adjusted skeleton is defined by the compatibility between the removed object and its neighbours, which considers not only topological relationships but also importance and semantic similarity. This process again uses the Delaunay triangulation. The algorithm is given to construct the extended GAP-tree.

Proceedings ArticleDOI
27 Oct 2002
TL;DR: The approach is to render each polygon as a collection of points and to displace each point from the surface in the direction of the surface normal by an amount proportional to some random number multiplied by the uncertainty level at that point.
Abstract: Efficient and informative visualization of surfaces with uncertainties is an important topic with many applications in science and engineering. Examples include environmental pollution borderline identification, identification of the limits of an oil basin, or discrimination between contaminated and healthy tissue in medicine. This paper presents an approach for such visualization using points as display primitives. Our approach is to render each polygon as a collection of points and to displace each point from the surface in the direction of the surface normal by an amount proportional to some random number multiplied by the uncertainty level at that point. This approach can be used in combination with other techniques such as pseudo-coloring and shading to give rise to efficient and revealing visualizations. The method is used to visualize real and simulated tumor formations with uncertainty of tumor boundaries.
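The per-point displacement rule is simple to state in code: move each sampled point along the surface normal by a random factor times the local uncertainty, so uncertain regions render as thick, fuzzy point clouds and certain regions stay crisp. A sketch; the symmetric uniform distribution for the random factor is an assumption, not stated by the paper:

```python
# Displace a surface sample along its normal by (random factor) x
# (local uncertainty) -- the point-based uncertainty visualization rule.
import random

def displace(point, normal, uncertainty, rng=random):
    # Assumed distribution: uniform in [-1, 1]; the paper only says
    # "some random number".
    r = rng.uniform(-1.0, 1.0)
    return tuple(p + r * uncertainty * n for p, n in zip(point, normal))

random.seed(0)
p = displace((1.0, 2.0, 3.0), (0.0, 0.0, 1.0), uncertainty=0.5)
print(p)  # x and y unchanged; z jittered within +/- 0.5
```

Applied to every point of every rendered polygon, zero uncertainty reproduces the original surface exactly, and the spread of the cloud encodes the uncertainty magnitude visually.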

Journal ArticleDOI
TL;DR: Although pixel-based accuracy assessment may overstate accuracy because of spatial autocorrelation, polygon-based assessments are also problematic, according to the results of pixel- or site-based sampling of a supervised Gaussian ARTMAP neural network algorithm.