
Showing papers on "Polygon published in 2007"


Proceedings ArticleDOI
29 Jul 2007
TL;DR: An algorithm that generates natural and intuitive deformations for a wide range of shape representations and editing scenarios by deforming space through direct manipulation of embedded objects, while preserving the embedded objects' features.
Abstract: We present an algorithm that generates natural and intuitive deformations via direct manipulation for a wide range of shape representations and editing scenarios. Our method builds a space deformation represented by a collection of affine transformations organized in a graph structure. One transformation is associated with each graph node and applies a deformation to the nearby space. Positional constraints are specified on the points of an embedded object. As the user manipulates the constraints, a nonlinear minimization problem is solved to find optimal values for the affine transformations. Feature preservation is encoded directly in the objective function by measuring the deviation of each transformation from a true rotation. This algorithm addresses the problem of "embedded deformation" since it deforms space through direct manipulation of objects embedded within it, while preserving the embedded objects' features. We demonstrate our method by editing meshes, polygon soups, mesh animations, and animated particle systems.
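A rough sketch may make the structure of the objective concrete. The helper names and the single-node simplification below are ours, and ||AᵀA − I||²_F stands in for the paper's exact rotation-deviation term:

```python
import numpy as np

def rotation_deviation(A):
    """Deviation of a 3x3 matrix A from a true rotation.

    ||A^T A - I||_F^2 vanishes exactly when A is orthogonal; it is a
    stand-in for the paper's feature-preservation term, not the exact
    formula from the paper.
    """
    return np.linalg.norm(A.T @ A - np.eye(3), ord="fro") ** 2

def objective(transforms, constraints, weight_rot=1.0):
    """Toy version of the nonlinear objective.

    transforms  -- list of (A, t) pairs, one per deformation-graph node.
    constraints -- list of (node_index, source_point, target_point); each
                   constrained point is moved by a single node here, a
                   simplification of the paper's blended space deformation.
    """
    err = 0.0
    for j, p, q in constraints:
        A, t = transforms[j]
        err += np.sum((A @ p + t - q) ** 2)      # positional constraints
    err += weight_rot * sum(rotation_deviation(A) for A, _ in transforms)
    return err

# A solver (e.g. scipy.optimize.minimize over the stacked A's and t's)
# would minimize `objective` as the user drags the constraint targets q.
```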

559 citations


Patent
08 Oct 2007
TL;DR: In this article, a method of selectively generating one or more scan lines from a multi-scan line scanner involves measuring the pulse widths of the pulses in a signal output of a motor driving the polygon mirror of the scanner wherein the signal relates to the position of the mirror facets.
Abstract: A method of selectively generating one or more scan lines from a multi-scan line scanner involves measuring the pulse widths of the pulses in a signal output of a motor driving the polygon mirror of the scanner wherein the signal relates to the position of the polygon's mirror facets. By measuring and distinguishing each of the pulses in the signal, the illumination of the scan beam can be synchronized with the rotation of the polygon mirror to only generate a desired number of scan line patterns that is less than the full complement of the scan line patterns capable of being generated by the multi-scan line scanner.

350 citations


Proceedings Article
08 Mar 2007
TL;DR: This paper describes an algorithm to compute the envelope of a set of points in a plane, which generates convex or non-convex hulls that represent the area occupied by the given points.
Abstract: This paper describes an algorithm to compute the envelope of a set of points in a plane, which generates convex or non-convex hulls that represent the area occupied by the given points. The proposed algorithm is based on a k-nearest neighbours approach, where the value of k, the only algorithm parameter, is used to control the "smoothness" of the final solution. The obtained results show that this algorithm is able to deal with arbitrary sets of points, and that the time to compute the polygons increases approximately linearly with the number of points.
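The walk the abstract alludes to is easy to sketch. The hedged Python version below follows the k-nearest-neighbours idea (start at the lowest point, repeatedly take the sharpest admissible turn among the k nearest unused points, retry with a larger k on a dead end) but omits the paper's final all-points-contained verification, so it is a simplification rather than the published algorithm:

```python
import math

def _ccw(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def _crosses(p, q, r, s):
    # strict segment crossing; touching at shared endpoints does not count
    d1, d2 = _ccw(r, s, p), _ccw(r, s, q)
    d3, d4 = _ccw(p, q, r), _ccw(p, q, s)
    return d1 * d2 < 0 and d3 * d4 < 0

def concave_hull(points, k=3):
    pts = list(dict.fromkeys(map(tuple, points)))
    if len(pts) <= 3:
        return pts
    k = min(max(k, 3), len(pts) - 1)
    first = min(pts, key=lambda p: (p[1], p[0]))   # lowest, then leftmost
    hull, current = [first], first
    remaining = set(pts) - {first}
    prev_dir = math.pi          # pretend we arrived travelling along +x
    step = 2
    while remaining and (current != first or step == 2):
        if step == 5:
            remaining.add(first)        # from now on the walk may close
        near = sorted(remaining, key=lambda p: math.dist(current, p))[:k]
        # largest clockwise turn from the reversed incoming direction first
        near.sort(key=lambda p: (prev_dir - math.atan2(p[1]-current[1],
                                                       p[0]-current[0]))
                                % (2*math.pi), reverse=True)
        nxt = next((c for c in near
                    if not any(_crosses(current, c, hull[i], hull[i+1])
                               for i in range(len(hull)-1))), None)
        if nxt is None:                 # dead end: retry with a larger k
            if k >= len(pts) - 1:
                raise ValueError("no simple polygon found")
            return concave_hull(points, k + 1)
        prev_dir = math.atan2(current[1]-nxt[1], current[0]-nxt[0])
        hull.append(nxt)
        remaining.discard(nxt)
        current = nxt
        step += 1
    if len(hull) > 1 and hull[-1] == first:
        hull.pop()                      # drop the duplicated closing vertex
    return hull
```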

185 citations


Journal ArticleDOI
TL;DR: A new heuristic solution method for two-dimensional nesting problems based on a simple local search scheme in which the neighborhood is any horizontal or vertical translation of a given polygon from its current position is presented.

140 citations


Journal ArticleDOI
Torbjörn Wigren
TL;DR: A new adaptive enhanced cell-ID (AECID) localization method that provides location results in terms of minimal areas with a guaranteed confidence, adapted against live measurements and can be viewed as a robust fingerprinting algorithm.
Abstract: Cell identity (cell ID) is the backbone positioning method of most cellular-communication systems. The reasons for this include availability wherever there is cellular coverage and an instantaneous response. Due to these advantages, technology that enhances the accuracy of the method has received considerable interest. This paper presents a new adaptive enhanced cell-ID (AECID) localization method. The method first clusters high-precision position measurements, e.g., assisted-GPS measurements. The high-precision position measurements of each cluster are tagged with the same set of detectable neighbor cells, auxiliary connection information (e.g., the radio-access bearer), as well as quantized auxiliary measurements (e.g., round-trip time). The algorithm proceeds by computation and tagging of a polygon of minimal area that contains a prespecified fraction of the high-precision position measurements of each tagged cluster. A novel algorithm for calculation of a polygon is proposed for this purpose. Whenever AECID positioning is requested, the method first looks up the detected neighbor cells and the auxiliary connection information and performs required auxiliary measurements. The polygon corresponding to the so-obtained tag is then retrieved and sent in response to the positioning request. The automatic self-learning algorithm provides location results in terms of minimal areas with a guaranteed confidence, adapted against live measurements. The AECID method can, therefore, also be viewed as a robust fingerprinting algorithm. The application to fingerprinting is illustrated by an example where quantized path-loss measurements from six base stations are combined.
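The paper's minimal-area polygon computation is specific to AECID; as a stand-in, the sketch below greedily shrinks the convex hull of a cluster while keeping at least the prespecified fraction of points inside. All function names are ours, and greedy vertex removal is only a heuristic approximation of the paper's minimal-area construction:

```python
import numpy as np

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices counterclockwise."""
    pts = sorted(set(map(tuple, pts)))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def area(poly):
    x, y = zip(*poly)
    n = len(poly)
    return 0.5 * abs(sum(x[i]*y[(i+1) % n] - x[(i+1) % n]*y[i]
                         for i in range(n)))

def inside(p, poly):
    """Point-in-convex-polygon test for a counterclockwise polygon."""
    n = len(poly)
    return all(cross(poly[i], poly[(i+1) % n], p) >= 0 for i in range(n))

def shrink_polygon(points, confidence=0.95):
    """Greedy stand-in for AECID's minimal-area polygon: start from the
    convex hull of a cluster and drop hull vertices while the fraction of
    points still contained stays at or above the target confidence."""
    pts = [tuple(p) for p in points]
    poly = convex_hull(pts)
    while len(poly) > 3:
        best = None
        for i in range(len(poly)):
            cand = poly[:i] + poly[i+1:]
            frac = sum(inside(p, cand) for p in pts) / len(pts)
            if frac >= confidence and (best is None or area(cand) < area(best)):
                best = cand
        if best is None:
            break
        poly = best
    return poly

rng = np.random.default_rng(1)
cluster = rng.normal(size=(200, 2))      # one tagged cluster of fixes
print(shrink_polygon(cluster, confidence=0.9))
```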

138 citations


Journal ArticleDOI
TL;DR: This paper introduces a robust orbital method for the creation of no-fit polygons which does not suffer from the typical problem cases found in the other approaches from the literature.

132 citations


Posted Content
TL;DR: In this paper, complete minimal graphs in HxR are constructed by taking asymptotic boundary values plus and minus infinity on alternating sides of an ideal inscribed polygon in H. The vertical projection of such a graph yields a harmonic diffeomorphism from C onto H, disproving a conjecture of Rick Schoen.
Abstract: We study complete minimal graphs in HxR, which take asymptotic boundary values plus and minus infinity on alternating sides of an ideal inscribed polygon Γ in H. We give necessary and sufficient conditions on the "lengths" of the sides of the polygon (and all inscribed polygons in Γ) that ensure the existence of such a graph. We then apply this to construct entire minimal graphs in HxR that are conformally the complex plane C. The vertical projection of such a graph yields a harmonic diffeomorphism from C onto H, disproving a conjecture of Rick Schoen.

108 citations


01 Jan 2007
TL;DR: In this article, a robust orbital method for the creation of no-fit polygons is proposed, which does not suffer from the typical problem cases found in the other approaches from the literature.
Abstract: The no-fit polygon is a construct that can be used between pairs of shapes for fast and efficient handling of geometry within irregular two-dimensional stock cutting problems. Previously, the no-fit polygon (NFP) has not been widely applied because of the perception that it is difficult to implement and because of the lack of generic approaches that can cope with all problem cases without specific case-by-case handling. This paper introduces a robust orbital method for the creation of no-fit polygons which does not suffer from the typical problem cases found in the other approaches from the literature. Furthermore, the algorithm only involves two simple geometric stages so it is easily understood and implemented. We demonstrate how the approach handles known degenerate cases such as holes, interlocking concavities and jigsaw type pieces and we give generation times for 32 irregular packing benchmark problems from the literature, including real world datasets, to allow further comparison with existing and future approaches.
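For the special case of two convex polygons, the no-fit polygon has a well-known direct construction that makes a useful baseline: NFP(A, B) is the Minkowski sum of A and −B, obtained by merging the edge vectors of both polygons in angular order. The sketch below assumes counterclockwise convex input and is not the paper's orbital algorithm, which also handles concavities, holes and jigsaw-type pieces:

```python
import math

def nfp_convex(A, B):
    """No-fit polygon of two convex polygons (counterclockwise vertices).

    Positions of B's reference point (its local origin) on the returned
    boundary put B in touching contact with A. Convex case only: the
    paper's orbital method generalizes this to arbitrary shapes.
    """
    negB = [(-x, -y) for x, y in B]

    def edges(poly):
        n = len(poly)
        # start at the lowest-left vertex so edge angles increase in [0, 2*pi)
        s = min(range(n), key=lambda i: (poly[i][1], poly[i][0]))
        return [(poly[(s+i+1) % n][0] - poly[(s+i) % n][0],
                 poly[(s+i+1) % n][1] - poly[(s+i) % n][1])
                for i in range(n)]

    merged = sorted(edges(A) + edges(negB),
                    key=lambda e: math.atan2(e[1], e[0]) % (2 * math.pi))
    # anchor the chain at the summed lowest-left points of A and -B
    ax, ay = min(A, key=lambda p: (p[1], p[0]))
    bx, by = min(negB, key=lambda p: (p[1], p[0]))
    poly = [(ax + bx, ay + by)]
    for dx, dy in merged[:-1]:          # last edge closes the loop
        poly.append((poly[-1][0] + dx, poly[-1][1] + dy))
    return poly

tri = [(0, 0), (2, 0), (1, 1)]
sq = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(nfp_convex(tri, sq))              # a 7-gon: 3 + 4 merged edges
```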

103 citations


Journal ArticleDOI
TL;DR: In Matlab a Gauss-like cubature formula over convex, nonconvex or even multiply connected polygons is implemented, which relies directly on univariate Gauss–Legendre quadrature via Green’s integral formula.
Abstract: We have implemented in Matlab a Gauss-like cubature formula over convex, nonconvex or even multiply connected polygons. The formula is exact for polynomials of degree at most 2n-1 using $N \sim mn^2$ nodes, m being the number of sides that are not orthogonal to a given line, and not lying on it. It does not need any preprocessing like triangulation of the domain, but relies directly on univariate Gauss–Legendre quadrature via Green's integral formula. Several numerical tests are presented.
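The core reduction is easy to reproduce for a single monomial: by Green's theorem, with ∂F/∂x = f the area integral of f equals the boundary integral of F dy, and each polygon edge then needs only univariate Gauss–Legendre quadrature. The sketch below is our simplification (one monomial at a time, in Python rather than Matlab), not the paper's general product formula with N ∼ mn² nodes:

```python
import numpy as np

def polygon_monomial_integral(poly, a, b, n=20):
    """Integrate x**a * y**b over a CCW polygon via Green's theorem.

    With F(x, y) = x**(a+1) * y**b / (a+1) we have dF/dx = x**a * y**b,
    so the area integral equals the boundary integral of F dy, computed
    edge by edge with n-point Gauss-Legendre quadrature (exact here,
    since the boundary integrand is polynomial in the edge parameter).
    """
    t, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    total = 0.0
    m = len(poly)
    for i in range(m):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % m]
        # linear parametrization of the edge over [-1, 1]
        x = 0.5 * (x0 + x1) + 0.5 * (x1 - x0) * t
        y = 0.5 * (y0 + y1) + 0.5 * (y1 - y0) * t
        dy = 0.5 * (y1 - y0)                    # dy/dt is constant per edge
        F = x ** (a + 1) * y ** b / (a + 1)
        total += np.sum(w * F * dy)
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polygon_monomial_integral(square, 1, 1))  # integral of x*y: ~0.25
```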

96 citations


Journal ArticleDOI
TL;DR: A non-stationary 4-point ternary interpolatory subdivision scheme which provides the user with a tension parameter that, when increased within its range of definition, can generate C^2-continuous limit curves showing considerable variations of shape.

83 citations


Journal ArticleDOI
TL;DR: This research explores and extends an approach to the spatial-temporal analysis of polygons that are spatially distinct and experience discrete changes through time, and presents five new movement events for describing spatial processes: displacement, convergence, divergence, fragmentation and concentration.

Abstract: Research questions regarding temporal change in spatial patterns are increasingly common in geographical analysis. In this research, we explore and extend an approach to the spatial-temporal analysis of polygons that are spatially distinct and experience discrete changes through time. We present five new movement events for describing spatial processes: displacement, convergence, divergence, fragmentation and concentration. Spatial-temporal measures of events for size and direction are presented for two time periods, and multiple time periods. Size change metrics are based on area overlaps and a modified cone-based model is used for calculating polygon directional relationships. Quantitative directional measures are used to develop application-specific metrics, such as an estimation of the concentration parameter for a von Mises distribution, and the directional rate of spread. The utility of the STAMP methods is demonstrated by a case study on the spread of a wildfire in northwestern Montana.
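A minimal illustration of the overlap-based size metrics, using shapely and treating the centroid shift as a crude stand-in for the paper's cone-based direction model (the five movement events proper require the full multi-polygon machinery):

```python
from shapely.geometry import Polygon

def size_events(p_old: Polygon, p_new: Polygon):
    """Crude overlap-based change summary between two polygon snapshots.

    The event names below follow common change terminology and are only
    a sketch of the paper's richer STAMP event taxonomy.
    """
    inter = p_old.intersection(p_new).area
    events = {
        "contraction": p_old.area - inter,   # area lost since t0
        "expansion": p_new.area - inter,     # area gained by t1
        "stable": inter,                     # area common to both times
    }
    dx = p_new.centroid.x - p_old.centroid.x  # displacement direction proxy
    dy = p_new.centroid.y - p_old.centroid.y
    return events, (dx, dy)

fire_t0 = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
fire_t1 = Polygon([(2, 1), (7, 1), (7, 4), (2, 4)])
print(size_events(fire_t0, fire_t1))
```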

Journal ArticleDOI
TL;DR: Motivated by the rendezvous problem for mobile autonomous robots, a linear scheme is proposed that exhibits several analogues to Euclidean curve shortening: The polygon shrinks to an elliptical point, convex polygons remain convex, and the perimeter of the polygon is monotonically decreasing.
Abstract: If a smooth, closed, and embedded curve is deformed along its normal vector field at a rate proportional to its curvature, it shrinks to a circular point. This curve evolution is called Euclidean curve shortening and the result is known as the Gage-Hamilton-Grayson theorem. Motivated by the rendezvous problem for mobile autonomous robots, we address the problem of creating a polygon shortening flow. A linear scheme is proposed that exhibits several analogues to Euclidean curve shortening: The polygon shrinks to an elliptical point, convex polygons remain convex, and the perimeter of the polygon is monotonically decreasing.
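The flavour of such a linear scheme is easy to simulate: move every vertex toward the midpoint of its two neighbours, i.e. run a discrete Laplacian on the cyclic vertex list. The constants below are illustrative and may differ from the paper's exact scheme:

```python
import numpy as np

def shorten(vertices, dt=0.01, steps=1000):
    """Flow each vertex toward the midpoint of its two neighbours.

    A discrete-Laplacian flow of the kind the abstract describes: the
    polygon stays convex if it starts convex, its perimeter decreases,
    and it collapses to an "elliptical" point at its (fixed) centroid.
    """
    z = np.asarray(vertices, dtype=complex)   # vertices as complex numbers
    for _ in range(steps):
        lap = 0.5 * (np.roll(z, 1) + np.roll(z, -1)) - z
        z = z + dt * lap                      # forward-Euler step
    return z

poly = np.exp(2j * np.pi * np.arange(7) / 7) * (1 + 0.2 * np.random.rand(7))
print(shorten(poly)[:3])   # vertices contracting toward the centroid
```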

Journal ArticleDOI
TL;DR: A universal algorithm for polygon clipping, a frequent operation in GIS, is introduced; it is based on the so-called entry/exit intersection point property, which has to be explicitly determined only at the first calculated intersection point.

Journal ArticleDOI
TL;DR: In this article, it was shown that if R is a connected 2-manifold without boundary obtained from a (possibly infinite) collection of polygons by identifying them along edges of equal length, and the curvature is positive at every vertex, then R is made of finitely many polygons and is homeomorphic to either the 2-sphere or to the projective plane.
Abstract: Let R be a connected 2-manifold without boundary obtained from a (possibly infinite) collection of polygons by identifying them along edges of equal length. Let V be the set of vertices, and for every v ∈ V, let κ(v) denote the (Gaussian) curvature of v: 2π minus the sum of incident polygon angles. Descartes showed that $\sum_{v \in V} \kappa(v) = 4\pi$ whenever R may be realized as the surface of a convex polytope in $\mathbb{R}^3$. More generally, if R is made of finitely many polygons, Euler's formula is equivalent to the equation $\sum_{v \in V} \kappa(v) = 2\pi\chi(R)$, where $\chi(R)$ is the Euler characteristic of R. Our main theorem shows that whenever $\sum_{v \in V : \kappa(v) < 0} \kappa(v)$ converges, then $\sum_{v \in V} \kappa(v) \le 2\pi\chi(R)$. In the special case when $\kappa(v) > 0$ for every vertex v, we apply our main theorem to deduce that R is made of finitely many polygons and is homeomorphic to either the 2-sphere or to the projective plane. Further, we show that unless R is a prism, antiprism, or the projective planar analogue of one of these, $|V| \le 3444$. This resolves a recent conjecture of Higuchi.
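As a quick sanity check of Descartes' identity, take the cube: three squares meet at each of its 8 vertices, so

```latex
\kappa(v) = 2\pi - 3\cdot\tfrac{\pi}{2} = \tfrac{\pi}{2},
\qquad
\sum_{v \in V} \kappa(v) = 8\cdot\tfrac{\pi}{2} = 4\pi .
```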

Journal ArticleDOI
TL;DR: In this article, a convex polygon P in the projective plane can be formed by taking the pairwise intersections of the lines extending the edges of P. When P is a Poncelet polygon, it is shown that this grid is contained in a finite union of ellipses and hyperbolas.
Abstract: Given a convex polygon P in the projective plane we can form a finite “grid” of points by taking the pairwise intersections of the lines extending the edges of P . When P is a Poncelet polygon we show that this grid is contained in a finite union of ellipses and hyperbolas and derive other related geometric information about the grid.

Journal ArticleDOI
TL;DR: This work gives the first constant-factor approximation algorithm for a nontrivial instance of the optimal guarding (coverage) problem in polygons, and gives an $O(1)$-approximation algorithm for placing the fewest point guards on a 1.5D terrain.
Abstract: We present the first constant-factor approximation algorithm for a nontrivial instance of the optimal guarding (coverage) problem in polygons. In particular, we give an $O(1)$-approximation algorithm for placing the fewest point guards on a 1.5D terrain, so that every point of the terrain is seen by at least one guard. While polylogarithmic-factor approximations follow from set cover results, our new results exploit the geometric structure of terrains to obtain a substantially improved approximation algorithm.

Patent
15 Aug 2007
TL;DR: In this article, a mixture of different polygons, such as hexagons and pentagons, is used to cover a surface with non-zero Gaussian curvature where each coil is overlapped with its neighbors such that their mutual inductance is nulled.
Abstract: An MRI rf coil array is comprised of a large number of separate coil elements that are supported on a substrate that is shaped to the contour of the anatomy being imaged. The coil elements overlap each other to reduce mutual inductance and their location is determined by tiling the surface of the substrate with regular, substantially same sized polygons. The center of each coil element is aligned with the center of a polygon. By using a mixture of different polygons, such as hexagons and pentagons, an arrangement of coil elements may be formed that cover a surface with non-zero Gaussian curvature where each coil is overlapped with its neighbors such that their mutual inductance is nulled.

Journal ArticleDOI
TL;DR: A high-order finite element method is proposed to overcome the difficulty of computing eigenvalues and eigenvectors for the Schrödinger operator with constant magnetic field in a domain with corners, as the semi-classical parameter $h$ tends to $0$.

Journal ArticleDOI
TL;DR: This work constructs a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon and identifies the gradient flow arising from an energy that is based on electrostatic principles.
Abstract: Active contour and active polygon models have been used widely for image segmentation. In some applications, the topology of the object(s) to be detected from an image is known a priori, despite a complex unknown geometry, and it is important that the active contour or polygon maintain the desired topology. In this work, we construct a novel geometric flow that can be added to image-based evolutions of active contours and polygons in order to preserve the topology of the initial contour or polygon. We emphasize that, unlike other methods for topology preservation, the proposed geometric flow continually adjusts the geometry of the original evolution in a gradual and graceful manner so as to prevent a topology change long before the curve or polygon becomes close to topology change. The flow also serves as a global regularity term for the evolving contour, and has smoothness properties similar to curvature flow. These properties of gradually adjusting the original flow and global regularization prevent geometrical inaccuracies common with simple discrete topology preservation schemes. The proposed topology preserving geometric flow is the gradient flow arising from an energy that is based on electrostatic principles. The evolution of a single point on the contour depends on all other points of the contour, which is different from traditional curve evolutions in the computer vision literature.

Proceedings ArticleDOI
25 Mar 2007
TL;DR: In this article, high-speed tuning of an extended-cavity semiconductor laser was demonstrated using a scanning polygon filter, achieving a tuning rate of 7714 nm/ms with 65 mW of power over a wavelength range of 135 nm and with an instantaneous linewidth of ~0.13 nm.
Abstract: High-speed tuning of an extended-cavity semiconductor laser is demonstrated using a scanning polygon filter. We achieved a tuning rate of 7714 nm/ms with 65 mW of power over a wavelength range of 135 nm and with an instantaneous linewidth of ~0.13 nm.

Journal ArticleDOI
TL;DR: This paper characterize the set of positions of a third robot, the so-called capture region, that prevent P from escaping to infinity via continuous rigid motion, and shows that the computation of the capture region reduces to a visibility problem.
Abstract: This paper addresses the problem of capturing an arbitrary convex object P in the plane with three congruent disc-shaped robots. Given two stationary robots in contact with P, we characterize the set of positions of a third robot, the so-called capture region, that prevent P from escaping to infinity via continuous rigid motion. We show that the computation of the capture region reduces to a visibility problem. We present two algorithms for solving this problem, and for computing the capture region when P is a polygon and the robots are points (zero-radius discs). The first algorithm is exact and has polynomial time complexity. The second one uses simple hidden surface removal techniques from computer graphics to output an arbitrarily accurate approximation of the capture region; it has been implemented, and examples are presented.

DOI
01 Nov 2007
TL;DR: It is proved that finding an optimal auto-partition is NP-hard, and an exact algorithm is proposed for finding optimal rectilinear r-partitions whose running time is polynomial when r is a constant, along with a faster 2-approximation algorithm.
Abstract: Spatial data structures form a core ingredient of many geometric algorithms, both in theory and in practice. Many of these data structures, especially the ones used in practice, are based on partitioning the underlying space (examples are binary space partitions and decompositions of polygons) or partitioning the set of objects (examples are bounding-volume hierarchies). The efficiency of such data structures---and, hence, of the algorithms that use them---depends on certain characteristics of the partitioning. For example, the performance of many algorithms that use binary space partitions (BSPs) depends on the size of the BSPs. Similarly, the performance of answering range queries using bounding-volume hierarchies (BVHs) depends on the so-called crossing number that can be associated with the partitioning on which the BVH is based. Much research has been done on the problem of computing partitionings whose characteristics are good in the worst case. In this thesis, we studied the problem from a different point of view, namely instance-optimality. In particular, we considered the following question: given a class of geometric partitioning structures---like BSPs, simplicial partitions, polygon triangulations, …---and a cost function---like size or crossing number---can we design an algorithm that computes a structure whose cost is optimal or close to optimal for any input instance (rather than only worst-case optimal)? We studied this question for some of the most important spatial data structures. As an example, for a set of n points and an input parameter r, it has been proved that there are input sets for which any simplicial partition has crossing number Ω(√r). It has also been shown that for any set of n input points and parameter r one can make a simplicial partition with stabbing number O(√r). However, there are input point sets for which one can make a simplicial partition with a lower stabbing number; for example, when the points lie on a diagonal, one can always make a simplicial partition with stabbing number 1.

We started our research by studying BSPs for line segments in the plane, where the cost function is the size of the BSP. A popular type of BSP for line segments is the so-called auto-partition. We proved that finding an optimal auto-partition is NP-hard; in fact, deciding whether a set of input segments admits an auto-partition without any cuts is already NP-hard. We also studied the relation between two other types of BSPs, called free and restricted BSPs, and showed that the number of cuts of an optimal restricted BSP for a set of segments in R² is at most twice the number of cuts of an optimal free BSP for that set. The details are presented in Chapter 1 of the thesis.

We then turned our attention to so-called rectilinear r-partitions for planar point sets, with the crossing number as cost function. A rectilinear r-partition of a point set P is a partitioning of P into r subsets, each having roughly |P|/r points. The crossing number of the partition is defined using the bounding boxes of the subsets; in particular, it is the maximum number of bounding boxes that can be intersected by any horizontal or vertical line. We performed theoretical as well as experimental studies on rectilinear r-partitions. On the theoretical side, we proved that computing a rectilinear r-partition with optimal stabbing number for a given set of points and parameter r is NP-hard. We also proposed an exact algorithm for finding optimal rectilinear r-partitions whose running time is polynomial when r is a constant, and a faster 2-approximation algorithm. Our last theoretical result showed that considering only partitions whose bounding boxes are disjoint is not sufficient for finding optimal rectilinear r-partitions. On the experimental side, we compared four different heuristics for constructing rectilinear r-partitions; the so-called windmill KD-tree gave the best results. Chapter 2 of the thesis describes the details of our research on rectilinear r-partitions.

We studied another spatial data structure in Chapter 3 of the thesis. Decomposing the interior of a polygon is one of the fundamental problems in computational geometry. For a simple polygon one usually wants a Steiner triangulation, and for a rectilinear polygon one typically wants a rectangular decomposition. For this reason there are algorithms that produce Steiner triangulations and rectangular decompositions with low stabbing number, and these algorithms are worst-case optimal. However, as with the two previous data structures, there are polygons that admit decompositions with lower stabbing number. In Chapter 3 we proposed a 3-approximation algorithm for finding an optimal rectangular decomposition of a rectilinear polygon, and an O(1)-approximation algorithm for finding an optimal Steiner triangulation of a simple polygon.

Finally, in Chapter 4 of the thesis, we considered another optimization problem, namely how to approximate a piecewise-linear function F: R → R by another piecewise-linear function with fewer pieces. One can distinguish two versions of this problem. The first is the min-k problem: approximate the function within a given error ε such that the resulting function has the minimum number of links. The second is the min-ε problem: find an approximation with at most k links (for a given k) such that the error is minimized. These problems have been studied before; our contribution is to consider them for so-called uncertain functions, where the value of the input function F at each of its vertices is given as a discrete set of possible values, each with an associated probability. We show how to compute an approximation that minimizes the expected error.
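The crossing-number cost function is simple enough to compute directly from its definition above; the following sketch (our code, not the thesis') evaluates it for a given rectilinear r-partition:

```python
import numpy as np

def crossing_number(parts):
    """Crossing number of a rectilinear r-partition: the maximum number of
    subsets' bounding boxes that any one horizontal or vertical line meets.

    parts -- list of (n_i, 2) arrays of points, one per subset.
    """
    boxes = [(p[:, 0].min(), p[:, 0].max(), p[:, 1].min(), p[:, 1].max())
             for p in map(np.asarray, parts)]
    best = 0
    for axis in (0, 2):  # 0: vertical lines (x-extent), 2: horizontal (y)
        # only coordinates where a box starts or ends can change the count
        for c in {b[axis] for b in boxes} | {b[axis + 1] for b in boxes}:
            best = max(best, sum(b[axis] <= c <= b[axis + 1] for b in boxes))
    return best

rng = np.random.default_rng(0)
pts = rng.random((100, 2))
parts = np.array_split(pts[np.argsort(pts[:, 0])], 4)  # 4 vertical slabs
print(crossing_number(parts))  # any horizontal line stabs all 4 slabs
```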

Journal ArticleDOI
TL;DR: In this article, a convex polygon $V_n$ with n sides, perimeter $P_n$, diameter $D_n$, area $A_n$, sum of distances between vertices $S_n$, and width $W_n$ is considered.
Abstract: Consider a convex polygon $V_n$ with n sides, perimeter $P_n$, diameter $D_n$, area $A_n$, sum of distances between vertices $S_n$ and width $W_n$. Minimizing or maximizing any of these quantities while fixing another defines 10 pairs of extremal polygon problems (one of which usually has a trivial solution or no solution at all). We survey research on these problems, which uses geometrical reasoning increasingly complemented by global optimization methods. Numerous open problems are mentioned, as well as series of test problems for global optimization and non-linear programming codes.
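One of these ten pairs has a classical closed-form answer worth recalling here: among convex n-gons of fixed perimeter $P_n$, the regular polygon maximizes the area, so

```latex
A_n^{\max} = \frac{P_n^{2}}{4n}\,\cot\frac{\pi}{n}
\qquad \text{(e.g. } n = 4 \text{ gives } A = P_4^{2}/16 \text{, the square).}
```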

Patent
24 Apr 2007
TL;DR: In this paper, a standard information obtaining unit is used to obtain standard information including a type of drawing process that could be used for computer graphics and a feature of the shape of polygon.
Abstract: A device that is not necessarily high in performance includes: a standard information obtaining unit configured to obtain standard information including a type of drawing process that could be used for computer graphics and a feature of the shape of polygon; a drawing information obtaining unit configured to obtain drawing information including information indicating the type of drawing process used for the actual computer graphics and information indicating the shape of polygon; a simplification judging unit configured to judge whether the type of the drawing process and the shape of the polygon indicated by the drawing information satisfy the standard indicated by the drawing standard information; a polygon simplifying unit configured to simplify the polygon by reducing vertices composing the polygon when said simplification judging unit judges that the standard is satisfied; and a drawing unit configured to execute computer graphics process using the polygon whose vertices are reduced, in order to achieve simplification of polygons and reduction of the total processing time for computer graphics (CG).

Proceedings Article
06 Jan 2007
TL;DR: Experimental evidence is given that the proposed heuristics perform well in practice; often they give a provably optimal result, while in other cases there is only a small gap between the computed upper and lower bounds on the optimal guard number.
Abstract: We propose heuristics for visibility coverage of a polygon with the fewest point guards. This optimal coverage problem, often called the "art gallery problem", is known to be NP-hard, so most recent research has focused on heuristics and approximation methods. We evaluate our heuristics through experimentation, comparing the upper bounds on the optimal guard number given by our methods with computed lower bounds based on heuristics for placing a large number of visibility-independent "witness points". We give experimental evidence that our heuristics perform well in practice, on a large suite of input data; often the heuristics give a provably optimal result, while in other cases there is only a small gap between the computed upper and lower bounds on the optimal guard number.

Journal ArticleDOI
TL;DR: In this article, it is shown that the straight skeleton of a nondegenerate polygon with n vertices, h holes, and r reflex vertices can be computed in $O(n\sqrt{h+1}\log^2 n + r\sqrt{r}\log r)$ expected time.
Abstract: We present a new algorithm to compute motorcycle graphs. It runs in $O(n \sqrt{n}\log n)$ time when n is the number of motorcycles. We give a new characterization of the straight skeleton of a nondegenerate polygon. For a polygon with n vertices and h holes, we show that it yields a randomized algorithm that reduces the straight skeleton computation to a motorcycle graph computation in expected $O(n\sqrt{h+1}\log^2 n)$ time. Combining these results, we can compute the straight skeleton of a nondegenerate polygon with h holes and with n vertices, among which r are reflex vertices, in $O(n\sqrt{h+1}\log^2 n + r\sqrt{r}\log r)$ expected time. In particular, we can compute the straight skeleton of a nondegenerate polygon with n vertices in $O(n\sqrt{n}\log^2 n)$ expected time.

Proceedings ArticleDOI
05 Nov 2007
TL;DR: This paper presents a dynamic programming algorithm considering the dynamic cost, called dynamic cost programming (DCP), for the ECO timing optimization with spare cells, and presents an effective pruning method by selecting spare cells only inside an essential bounding polygon to reduce the solution space.
Abstract: We introduce in this paper a new problem of ECO timing optimization using spare-cell rewiring and present the first work for this problem. Spare-cell rewiring is a popular technique for incremental timing optimization and/or functional change after the placement stage. The spare-cell rewiring problem is very challenging because of its dynamic wiring cost nature for selecting a spare cell, while the existing related problems consider only static wiring cost. For the addressed problem, we present a framework of buffer insertion and gate sizing to handle it. In this framework, we present a dynamic programming algorithm considering the dynamic cost, called dynamic cost programming (DCP), for the ECO timing optimization with spare cells. Without loss of solution optimality, we further present an effective pruning method by selecting spare cells only inside an essential bounding polygon to reduce the solution space. The whole framework is integrated into a commercial design flow. Experimental results based on five industry benchmarks show that our method is very effective and efficient in fixing the timing violations of ECO paths.

Journal ArticleDOI
TL;DR: The intuitive Sungear interface has enabled biologists to determine quickly which dataset or groups of datasets play a role in a biological function of interest.
Abstract: Summary: Sungear is a software system that supports a rapid, visually interactive and biologist-driven comparison of large datasets. The datasets can come from microarray experiments (e.g. genes induced in each experiment), from comparative genomics (e.g. genes present in each genome) or even from non-biological applications (e.g. demographics or baseball statistics). Sungear represents multiple datasets as vertices in a polygon. Each possible intersection among the sets is represented as a circle inside the polygon. The position of the circle is determined by the position of the vertices represented in the intersection and the area of the circle is determined by the number of elements in the intersection. Sungear shows which Gene Ontology terms are over-represented in a subset of circles or anchors. The intuitive Sungear interface has enabled biologists to determine quickly which dataset or groups of datasets play a role in a biological function of interest. Availability: A live online version of Sungear can be found at http://virtualplant-prod.bio.nyu.edu/cgi-bin/sungear/index.cgi Contact: shasha@cs.nyu.edu Supplementary information: Submitted---link TBD.
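The layout rule in the summary is fully prescriptive, so a toy version is short. The mean of the involved vertices is our reading of "determined by the position of the vertices represented in the intersection"; the exact weighting in Sungear may differ:

```python
import math
import numpy as np

def sungear_layout(n_sets, intersections):
    """Lay out Sungear-style circles: each intersection's circle sits at
    the mean of the polygon vertices for the sets involved, with area
    (hence radius**2) proportional to its element count.

    intersections -- dict mapping frozenset of set indices -> element count
    """
    # regular n-gon vertices, one per dataset
    verts = np.array([[math.cos(2 * math.pi * i / n_sets),
                       math.sin(2 * math.pi * i / n_sets)]
                      for i in range(n_sets)])
    total = sum(intersections.values())
    circles = []
    for sets, count in intersections.items():
        pos = verts[list(sets)].mean(axis=0)
        radius = math.sqrt(count / total)   # area proportional to count
        circles.append((sets, pos, radius))
    return circles

demo = {frozenset({0}): 40, frozenset({0, 1}): 25, frozenset({0, 1, 2}): 10}
for sets, pos, r in sungear_layout(3, demo):
    print(sorted(sets), pos.round(2), round(r, 2))
```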

Journal ArticleDOI
TL;DR: In this article, the authors investigated the lowest-order Raviart-Thomas mixed finite element method for second-order elliptic problems posed over a system of intersecting two-dimensional polygons placed in three-dimensional Euclidean space.

Book ChapterDOI
01 Jan 2007
TL;DR: This paper presents a model for building cluster distribution analysis based on the Delaunay triangulation skeleton, which yields a geometric construction, similar to a Voronoi diagram, that spatially partitions the gap area equally.
Abstract: This paper presents a model for building cluster distribution analysis based on the Delaunay triangulation skeleton. Connecting the skeleton within the gap area among the building polygons yields a geometric construction, similar to a Voronoi diagram, that spatially partitions the gap area equally. Each building polygon is surrounded by a partitioning polygon which can be regarded as the growth region of the inner building. Based on this model, several cluster structure variables can be computed, such as the distribution density, the topological neighbour, the adjacent distance and the adjacent direction. Considering the constraints of position accuracy, statistical area balance and orthogonal shape in building generalization, the study presents a progressive algorithm of building cluster aggregation, covering conflict detection (where), object displacement (who) and geometric combination (how). The algorithm has been realized in a generalization system and some experimental illustrations are provided in the paper.
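A sketch of the model's starting point, using SciPy's Delaunay triangulation to recover topological neighbours and adjacent distances between building polygons from triangulation edges that bridge the gap area (the skeleton construction and the aggregation steps themselves are beyond this snippet):

```python
import numpy as np
from scipy.spatial import Delaunay

def building_neighbours(buildings):
    """Topological neighbours and adjacent distances for building polygons,
    derived from Delaunay edges that connect points of different buildings.

    buildings -- list of (n_i, 2) arrays of polygon boundary vertices.
    """
    pts = np.vstack(buildings)
    owner = np.concatenate([np.full(len(b), i)
                            for i, b in enumerate(buildings)])
    tri = Delaunay(pts)
    dist = {}
    for simplex in tri.simplices:
        for i in range(3):
            a, b = simplex[i], simplex[(i + 1) % 3]
            if owner[a] != owner[b]:      # edge bridges the gap area
                key = tuple(sorted((owner[a], owner[b])))
                d = float(np.linalg.norm(pts[a] - pts[b]))
                dist[key] = min(d, dist.get(key, np.inf))
    return dist  # {(building_i, building_j): adjacent distance}

sq = lambda x, y: np.array([[x, y], [x+1, y], [x+1, y+1], [x, y+1]])
print(building_neighbours([sq(0, 0), sq(3, 0), sq(0, 3)]))
```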