
Showing papers on "Regular polygon published in 2000"


Book ChapterDOI
David Avis1
01 Jan 2000
TL;DR: In this paper, an improved implementation of the reverse search vertex enumeration/convex hull algorithm for d-dimensional convex polyhedra is described. The implementation can also be used to compute the volume of the convex hull of a set of points.
Abstract: This paper describes an improved implementation of the reverse search vertex enumeration/convex hull algorithm for d-dimensional convex polyhedra. The implementation uses a lexicographic ratio test to resolve degeneracy, works on bounded or unbounded polyhedra and uses exact arithmetic with all integer pivoting. It can also be used to compute the volume of the convex hull of a set of points. For a polyhedron with m inequalities in d variables and a known extreme point, it finds all bases in time O(m d^2) per basis. This implementation can handle problems of quite large size, especially for simple polyhedra (where each basis corresponds to a vertex and the complexity reduces to O(md) per vertex). Computational experience is included in the paper, including a comparison with an earlier implementation.

234 citations
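The abstract above describes reverse search, which visits one basis at a time by pivoting. As a hedged illustration of what vertex enumeration computes (not the reverse search method itself), the brute-force sketch below solves every d-by-d subsystem of Ax ≤ b and keeps the feasible solutions; the function name and tolerances are our own.

```python
# Brute-force vertex enumeration for {x : A x <= b} -- a naive counterpart
# to reverse search, exponential in general but fine for tiny examples.
from itertools import combinations
import numpy as np

def enumerate_vertices(A, b, tol=1e-9):
    m, d = A.shape
    vertices = []
    for rows in combinations(range(m), d):
        sub_A, sub_b = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(sub_A)) < tol:
            continue                      # rows are dependent: not a basis
        x = np.linalg.solve(sub_A, sub_b)
        if np.all(A @ x <= b + tol):      # feasible basic solution = vertex
            if not any(np.allclose(x, v) for v in vertices):
                vertices.append(x)        # skip duplicates from degeneracy
    return vertices

# Unit square: -x <= 0, -y <= 0, x <= 1, y <= 1
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])
print(len(enumerate_vertices(A, b)))  # 4
```

Reverse search avoids this exponential scan by walking the basis graph from the known extreme point, which is why it achieves a polynomial cost per basis.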


Journal ArticleDOI
TL;DR: The proposed algorithm works efficiently for n ≤ 3 and takes O(n^{3n/2}) time for n > 3, where n denotes the number of the fingers.
Abstract: This paper presents an efficient algorithm for computing all n-finger form-closure grasps on a polygonal object based on a new sufficient and necessary condition for form-closure. With this new condition, it is possible to transfer the problem of computing the form-closure grasp in R^3 to one in R^1. The author demonstrates that the non-form-closure grasps consist of two convex polytopes in the space of n parameters representing grasp points on sides of the polygon. The proposed algorithm works efficiently for n ≤ 3 and takes O(n^{3n/2}) time for n > 3, where n denotes the number of the fingers. The algorithm has been implemented and its efficiency has been confirmed with two examples.

136 citations


Proceedings ArticleDOI
01 May 2000
TL;DR: A simple and elegant kinetic data structure for detecting collisions between simple but not necessarily convex polygonal objects in motion in the plane that gives an easy upper bound on the worst case number of certificate failures.
Abstract: We design a simple and elegant kinetic data structure for detecting collisions between polygonal (but not necessarily convex) objects in motion in the plane. Our structure is compact, maintaining an active set of certificates whose number is proportional to a minimum-size set of separating polygons for the objects. It is also responsive: on the failure of a certificate, invariants can be restored in time logarithmic in the total number of object vertices. It is difficult to characterize the efficiency of our structure for lack of a canonical definition of external events. Nevertheless, we give an easy upper bound on the worst-case number of certificate failures.

91 citations


Journal Article
TL;DR: In this paper, the authors investigated the minimization of Newton's functional for the problem of the body of minimal resistance with maximal height in the class of convex developable functions defined in a disc and proved that the minimizer in this class has a minimal set in the form of a regular polygon with n sides centered in the disc.
Abstract: We investigate the minimization of Newton's functional for the problem of the body of minimal resistance with maximal height $M>0$ [butt] in the class of convex developable functions defined in a disc. This class is a natural candidate in which to find a (non-radial) minimizer, in accordance with the results of [lrp2]. We prove that the minimizer in this class has a minimal set in the form of a regular polygon with $n$ sides centered in the disc, and numerical experiments indicate that the natural number $n \geq 2$ is a non-decreasing function of $M$. The corresponding functions all achieve a lower value of the functional than the optimal radially symmetric function with the same height $M$.

79 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study the Newton polygon of a p-divisible group in characteristic p, an invariant under isogeny, and prove conjectures of Manin and Grothendieck describing which Newton polygons occur and how they behave under specialization.
Abstract: We consider p-divisible groups (also called Barsotti-Tate groups) in characteristic p, their deformations, and we draw some conclusions. For such a group we can define its Newton polygon (abbreviated NP). This is invariant under isogeny. For an abelian variety (in characteristic p) the Newton polygon of its p-divisible group is "symmetric". In 1963 Manin conjectured that conversely any symmetric Newton polygon is "algebroid"; i.e., it is the Newton polygon of an abelian variety. This conjecture was shown to be true and was proved with the help of the "Honda-Serre-Tate theory". We give another proof in Section 5. Grothendieck showed that Newton polygons "go up" under specialization: no point of the Newton polygon of a closed fiber in a family is below the Newton polygon of the generic fiber. In 1970 Grothendieck conjectured the converse: any pair of comparable Newton polygons appear for the generic and special fiber of a family. This was extended by Koblitz in 1975 to a conjecture about a sequence of comparable Newton polygons. In Section 6 we show these conjectures to be true. These results are obtained by deforming the most special abelian varieties or p-divisible groups we can think of. In describing deformations we use the theory of displays; this was proposed by Mumford, and has been developed in [17], [18], and recently elaborated in [32] and [33]; also see [11], [31]. Having described a deformation we would like to read off the Newton polygon of the generic fiber. In most cases it is difficult to determine the Newton polygon from the matrix defined by F on a basis for the (deformed) Dieudonne module. In general I have no procedure to do this (e.g. in case we deform away from a formal group where the Dieudonne module is not generated by one element). However, in the special case we consider here, a(G_0) = 1, a noncommutative version of the theorem of Cayley-Hamilton ("every matrix satisfies its own characteristic polynomial") ...

74 citations
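In concrete terms, a Newton polygon is the lower convex hull of a set of plane points (for a p-divisible group the slopes come from the Frobenius action; for a polynomial, from p-adic valuations of its coefficients). A minimal sketch of that hull computation, using Andrew's monotone chain on hypothetical valuation data:

```python
def lower_hull(points):
    """Lower convex hull of 2-D points -- the shape of a Newton polygon.
    Collinear intermediate points are dropped."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # pop the last hull point while it does not make a strict left turn
        while len(hull) >= 2:
            (ax, ay), (bx, by) = hull[-2], hull[-1]
            if (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Hypothetical data: the hull segments have slopes -2, 0 and 1.
print(lower_hull([(0, 2), (1, 0), (2, 0), (3, 1)]))
```

The slopes of the hull segments (with multiplicities given by their horizontal lengths) are exactly the data the Newton polygon encodes.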



Journal ArticleDOI
TL;DR: A polynomial time algorithm is developed for computing an upper bound for the rotation distance of binary trees and, equivalently, for the diagonal-flip distance of convex polygon triangulations.

44 citations


Journal ArticleDOI
TL;DR: A near-linear bound on the combinatorial complexity of the union of n fat convex objects in the plane is proved, each pair of whose boundaries cross at most a constant number of times.
Abstract: We prove a near-linear bound on the combinatorial complexity of the union of n fat convex objects in the plane, each pair of whose boundaries cross at most a constant number of times.

43 citations


Journal ArticleDOI
TL;DR: It is shown that all convex polygons which are not parallelograms tile multiply only quasi-periodically, if at all, and that Λ must be a finite union of translated two-dimensional lattices in the plane.
Abstract: We consider polygons with the following "pairing property": for each edge of the polygon there is precisely one other edge parallel to it. We study the problem of when such a polygon K tiles the plane multiply when translated at the locations Λ, where Λ is a multiset in the plane. The pairing property of K makes this question particularly amenable to Fourier analysis. As a first application of our approach we establish a necessary and sufficient condition for K to tile with a given lattice Λ. (This was first found by Bolle for the case of convex polygons; notice that all convex polygons that tile necessarily have the pairing property, and therefore our theorems apply to them.) Our main result is a proof that a large class of such polygons tile multiply only quasi-periodically, which for us means that Λ must be a finite union of translated two-dimensional lattices in the plane. For the particular case of convex polygons we show that all convex polygons which are not parallelograms tile multiply only quasi-periodically, if at all.

43 citations
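The pairing property above is easy to test directly: compare edge directions modulo 180 degrees and require exactly one parallel partner per edge. A small sketch (the helper function and tolerance are our own):

```python
from math import atan2, cos, sin, pi

def has_pairing_property(vertices, tol=1e-9):
    """True if every edge has exactly one other edge parallel to it."""
    n = len(vertices)
    dirs = []
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        dirs.append(atan2(y2 - y1, x2 - x1) % pi)  # direction mod 180 deg
    for i, a in enumerate(dirs):
        parallel = sum(1 for j, b in enumerate(dirs)
                       if j != i and min(abs(a - b), pi - abs(a - b)) < tol)
        if parallel != 1:
            return False
    return True

# A regular hexagon pairs opposite edges; a triangle has no parallel edges.
hexagon = [(cos(k * pi / 3), sin(k * pi / 3)) for k in range(6)]
triangle = [(0, 0), (1, 0), (0, 1)]
print(has_pairing_property(hexagon), has_pairing_property(triangle))  # True False
```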


Book ChapterDOI
05 Sep 2000
TL;DR: A study of decompositions of polygons into convex subpolygons for efficient Minkowski-sum computation, including a procedure for simultaneously decomposing the two input polygons so that a "mixed" objective function is minimized; optimal decompositions significantly expedite the Minkowski-sum computation but are themselves expensive to compute.
Abstract: Several algorithms for computing the Minkowski sum of two polygons in the plane begin by decomposing each polygon into convex subpolygons. We examine different methods for decomposing polygons by their suitability for efficient construction of Minkowski sums. We study and experiment with various well-known decompositions as well as with several new decomposition schemes. We report on our experiments with the various decompositions and different input polygons. Among our findings are that in general: (i) triangulations are too costly; (ii) what constitutes a good decomposition for one of the input polygons depends on the other input polygon; consequently, we develop a procedure for simultaneously decomposing the two polygons such that a "mixed" objective function is minimized; (iii) there are optimal decomposition algorithms that significantly expedite the Minkowski-sum computation, but the decomposition itself is expensive to compute; in such cases simple heuristics that approximate the optimal decomposition perform very well.

37 citations
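For inputs that are already convex, no decomposition is needed at all: the Minkowski sum follows by merging the two polygons' edge vectors in angular order. A sketch of that base case (assuming counterclockwise vertex lists; the function name is ours):

```python
from math import atan2, pi

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons given as CCW vertex lists."""
    def canon(poly):  # rotate so the bottommost (then leftmost) vertex is first
        k = min(range(len(poly)), key=lambda i: (poly[i][1], poly[i][0]))
        return poly[k:] + poly[:k]
    P, Q = canon(P), canon(Q)
    def edges(poly):
        n = len(poly)
        return [(poly[(i + 1) % n][0] - poly[i][0],
                 poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    # starting at the bottom vertex, edge angles lie in [0, 2*pi); merge them
    es = sorted(edges(P) + edges(Q), key=lambda v: atan2(v[1], v[0]) % (2 * pi))
    x, y = P[0][0] + Q[0][0], P[0][1] + Q[0][1]
    out = []
    for dx, dy in es:
        out.append((x, y))
        x, y = x + dx, y + dy
    return out  # may contain collinear vertices

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(minkowski_sum(square, square))  # a side-2 square, 8 listed vertices
```

The decompositions studied in the paper reduce the non-convex case to many such convex sums, which is why the choice of decomposition dominates the total cost.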


Journal ArticleDOI
TL;DR: In this paper, a stochastic point process of S-supporting points is introduced and shown, upon rescaling, to converge to a Gaussian field; the central limit theorems proven here imply, among other things, the asymptotic normality of the number of convex hull vertices in a large Poisson sample from a simple polytope and of the number of Pareto (vector extremal) points in Poisson samples with independent coordinates.
Abstract: We introduce a stochastic point process of S-supporting points and prove that upon rescaling it converges to a Gaussian field. The notion of S-supporting points specializes (for adequately chosen S) to Pareto (or, more generally, cone) extremal points, to vertices of convex hulls, or to centers of generalized Voronoi tessellations in the models of large-scale structure of the Universe based on the Burgers equation. The central limit theorems proven here imply, inter alia, the asymptotic normality of the number of convex hull vertices in a large Poisson sample from a simple polytope and of the number of Pareto (vector extremal) points in Poisson samples with independent coordinates.

Journal ArticleDOI
TL;DR: The purpose of this paper is to describe in detail the statistical properties of this multivariate model and the eigenstructure of the covariance matrix and the model is applied to some datasets to explore shape variability.
Abstract: Grenander & Miller (1994) describe a model for representing amorphous two-dimensional objects with no obvious landmarks. Each object is represented by n vertices around its perimeter, and is described by deforming an n-sided regular polygon using edge transformations. A multivariate normal distribution with a block circulant covariance matrix is used to model these edge transformations. The purpose of this paper is to describe in detail the statistical properties of this multivariate model and the eigenstructure of the covariance matrix. Various special cases of the model are considered, including articulated models and conditional Markov random field models. We consider maximum likelihood based inference, and the model is applied to some datasets to explore shape variability.
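The eigenstructure of a circulant covariance matrix is Fourier-based: its eigenvectors are the discrete Fourier modes, so its eigenvalues are the DFT of its first row (the block circulant case generalizes this blockwise). A minimal numerical check of that fact, with a made-up example row:

```python
import numpy as np

def circulant(first_row):
    """Circulant matrix whose j-th row is the first row rolled j places."""
    n = len(first_row)
    return np.array([np.roll(first_row, j) for j in range(n)])

row = np.array([2.0, -1.0, 0.0, -1.0])       # a symmetric example row
C = circulant(row)
eig_fft = np.sort(np.real(np.fft.fft(row)))  # eigenvalues via the DFT
eig_dir = np.sort(np.linalg.eigvalsh(C))     # eigenvalues computed directly
print(np.allclose(eig_fft, eig_dir))         # True
```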

Journal ArticleDOI
TL;DR: In this paper, the authors define a geometrical continued fraction algorithm in the setting of regular polygons with an even number of sides, which uses linear transformations generating a group conjugated to an index 2 subgroup of a Hecke group.
Abstract: We define a geometrical continued fraction algorithm in the setting of regular polygons with an even number of sides. The definition of the algorithm uses linear transformations generating a group conjugated to an index 2 subgroup of a Hecke group. We give Markov conditions allowing the iteration of the algorithm. We compute the natural extension and the invariant measure for each of the additive and multiplicative versions of this algorithm.

Journal ArticleDOI
TL;DR: Convex polyhedra in H^3 are not determined by (their combinatorics and) their edge lengths, and in the de Sitter space S^3_1 they are determined neither by their dihedral angles nor by their edge lengths.
Abstract: Convex polyhedra in H^3 are not determined by (their combinatorics and) their edge lengths. Convex space-like polyhedra in the de Sitter space S^3_1 are determined neither by their dihedral angles nor by their edge lengths. The same holds for convex polyhedra in S^3.

Journal ArticleDOI
TL;DR: This paper presents several algorithms for decomposing a polygon into convex polygons without adding new vertices, as well as a procedure, applicable to any partition, that removes unnecessary edges by merging polygons whose union remains convex.

Journal ArticleDOI
TL;DR: In this article, the problem of finding the n-sided polygon of diameter 1 which has the largest possible width was studied, and it was shown that such a polygon is extremal if and only if it has equal sides and is inscribed in a Reuleaux polygon.
Abstract: In this paper we consider the problem of finding the n-sided ( $n\geq 3$ ) polygons of diameter 1 which have the largest possible width w n . We prove that $w_4=w_3= {\sqrt 3 \over 2}$ and, in general, $w_n \leq \cos {\pi \over 2n}$ . Equality holds if n has an odd divisor greater than 1 and in this case a polygon $\cal P$ is extremal if and only if it has equal sides and it is inscribed in a Reuleaux polygon of constant width 1, such that the vertices of the Reuleaux polygon are also vertices of $\cal P$ .
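The width in question can be computed directly for any convex polygon: for each edge, take the farthest vertex from the edge's supporting line, then minimize over edges. A small sketch (our own helper) checking the $w_3 = \sqrt{3}/2$ value from the abstract:

```python
from math import hypot, sqrt

def width(poly):
    """Width of a convex polygon: the minimum over edges of the greatest
    distance from a vertex to that edge's supporting line."""
    n = len(poly)
    best = float("inf")
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = hypot(ex, ey)
        # perpendicular distance of each vertex to the line through this edge
        far = max(abs((x - x1) * ey - (y - y1) * ex) / length for x, y in poly)
        best = min(best, far)
    return best

# Equilateral triangle with unit sides (diameter 1): width is sqrt(3)/2.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, sqrt(3) / 2)]
print(abs(width(tri) - sqrt(3) / 2) < 1e-12)  # True
```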

Proceedings ArticleDOI
12 Nov 2000
TL;DR: This work presents a data structure whose expected search time is nearly equal to the entropy lower bound, namely H+o(H).
Abstract: We consider the planar point location problem from the perspective of expected search time. We are given a planar polygonal subdivision S and for each polygon of the subdivision the probability that a query point lies within this polygon. The goal is to compute a search structure to determine which cell of the subdivision contains a given query point, so as to minimize the expected search time. This is a generalization of the classical problem of computing an optimal binary search tree for one-dimensional keys. In the one-dimensional case it has long been known that the entropy H of the distribution is the dominant term in the lower bound on the expected-case search time, and further there exist search trees achieving expected search times of at most H+2. Prior to this work, there has been no known structure for planar point location with an expected search time better than 2H, and this result required strong assumptions on the nature of the query point distribution. Here we present a data structure whose expected search time is nearly equal to the entropy lower bound, namely H+o(H). The result holds for any polygonal subdivision in which the number of sides of each of the polygonal cells is bounded, and there are no assumptions on the query distribution within each cell. We extend these results to subdivisions with convex cells, assuming a uniform query distribution within each cell.
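The entropy lower bound referenced above is the Shannon entropy of the cell probabilities. A quick sketch with made-up distributions:

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum p*log2(p): the dominant term in the lower
    bound on expected search time for BSTs and planar point location."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.25] * 4))          # 2.0 -- uniform over four cells
print(entropy([0.5, 0.25, 0.25]))   # 1.5 -- skew lowers the bound
```

A structure with expected search time H + o(H), as in the paper, is therefore asymptotically optimal whenever the distribution over cells is skewed enough that H is small.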

Proceedings Article
01 Jan 2000
TL;DR: It is proved that there is a motion from any convex polygon to any conveX polygon with the same counterclockwise sequence of edge lengths, that preserves the lengths of the edges, and keeps the polygon convex at all times.
Abstract: We prove that there is a motion from any convex polygon to any convex polygon with the same counterclockwise sequence of edge lengths, that preserves the lengths of the edges, and keeps the polygon convex at all times. Furthermore, the motion is “direct” (avoiding any intermediate canonical configuration like a subdivided triangle) in the sense that each angle changes monotonically throughout the motion. In contrast, we show that it is impossible to achieve such a result with each vertex-to-vertex distance changing monotonically. We also demonstrate that there is a motion between any two such polygons using three-dimensional moves known as pivots, although the complexity of the motion cannot be bounded as a function of the number of vertices in the polygon.

Book ChapterDOI
26 Jul 2000
TL;DR: In this article, the standard and discrete two-center problems were solved in O(n log^3 n log log n) and O(n log^2 n) time, respectively.
Abstract: Let P be a convex polygon with n vertices. We want to find two congruent disks whose union covers P and whose radius is minimized. We also consider its discrete version with centers restricted to be at vertices of P. The standard and discrete two-center problems are solved in O(n log^3 n log log n) and O(n log^2 n) time, respectively. Furthermore, we can solve both the standard and discrete two-center problems for a set of points in convex position in O(n log^2 n) time.

Book ChapterDOI
22 Nov 2000
TL;DR: Efficient algorithms are given for disjoint packing of two polygons, with a runtime close to linear for translations and O(n^3) for general isometries.
Abstract: We consider the problem of packing several convex polygons into minimum-size rectangles. For this purpose the polygons may be moved either by translations only, or by combinations of translations and rotations. We investigate both cases, that the polygons may overlap when being packed or that they must be disjoint. The size of a rectangle to be minimized can either be its area or its perimeter. In the case of overlapping packing, very efficient algorithms whose runtime is close to linear or O(n log n) can be found even for an arbitrary number of polygons. Disjoint optimal packing is known to be NP-hard for arbitrary numbers of polygons. Here, efficient algorithms are given for disjoint packing of two polygons, with a runtime close to linear for translations and O(n^3) for general isometries.

Patent
23 Jun 2000
TL;DR: In this paper, a system and method for rendering a warped brush stroke using a bitmap brush image, the brush stroke lying along an arbitrarily curved guideline, was described; the method generates a piecewise linear approximation to the guideline and then generates polygons from the linear segments such that the generated polygons are convex and contiguous linear segments result in contiguous polygons.
Abstract: A system and method is described for rendering a warped brush stroke using a bitmap brush image, the brush stroke lying along an arbitrarily curved guideline. The described system and method generate a piecewise linear approximation to the guideline, followed by generating polygons from the linear segments such that the generated polygons are convex and contiguous linear segments result in contiguous polygons. A mapping is identified between segments of the bitmap brush and the polygons such that the corners and boundaries of each segment map to the corners and boundaries of a corresponding polygon. Each segment of the bitmap brush is mapped into the corresponding polygon using transformations that do not require visiting a pixel in the rendered warped brush stroke more than once. Examples of such transformations include bilinear transformations and texture mapping in combination with tiling.

Journal ArticleDOI
TL;DR: In this paper, a class of tridiagonal matrix models, the "q-roots of unity" models, was studied, and the eigenvalue densities were shown to be bounded by, and to have the symmetries of, the regular polygon with 2q sides in the complex plane.
Abstract: We study a class of tridiagonal matrix models, the “q-roots of unity” models, which includes the sign (q=2) and the clock (q=∞) models by Feinberg and Zee. We find that the eigenvalue densities are bounded by, and have the symmetries of, the regular polygon with 2q sides in the complex plane. Furthermore, the averaged traces of M^k are integers that count closed random walks on the line such that each site is visited a number of times that is a multiple of q. We obtain an explicit evaluation for them.

Proceedings ArticleDOI
10 Apr 2000
TL;DR: The algorithm has been implemented, and the results of distance computations show that it can calculate the minimum distance between non-convex polyhedra composed of a thousand triangles at interactive rates.
Abstract: An algorithm for calculating the minimum distance between non-convex polyhedra is described. A polyhedron is represented by a set of triangles. In calculating the distance between two polyhedra, it is important to search efficiently for the pair of triangles that realizes the closest points. In our algorithm, discrete Voronoi regions are prepared as voxels around a non-convex polyhedron. Each voxel is given the list of triangles which have the possibility of being the closest to the points in the voxel. When a triangle on the other object intersects a voxel, the closest triangles can be searched efficiently from this list on the voxel. The algorithm has been implemented, and the results of distance computations show that it can calculate the minimum distance between non-convex polyhedra composed of a thousand triangles at interactive rates.

Journal ArticleDOI
02 Mar 2000
TL;DR: In this article, it was shown that any fixed point of a Lipschitzian, strictly pseudocontractive mapping T on a closed, convex subset K of a Banach space X is necessarily unique, and may be norm approximated by an iterative procedure.
Abstract: We show that any fixed point of a Lipschitzian, strictly pseudocontractive mapping T on a closed, convex subset K of a Banach space X is necessarily unique, and may be norm approximated by an iterative procedure. Our argument provides a convergence rate estimate and removes the boundedness assumption on K, generalizing theorems of Liu. Let (X, ‖ · ‖) be a Banach space. Let K be a non-empty closed, convex subset of X and T : K → K. We will assume that T is Lipschitzian, i.e. there exists L > 0 such that ‖T(x) − T(y)‖ ≤ L‖x − y‖ for all x, y ∈ K. Of course, we are most interested in the case where L ≥ 1. We also assume that T is strictly pseudocontractive. Following Liu [1] this may be stated as: there exists k ∈ (0, 1) for which ‖x − y‖ ≤ ‖x − y + r[(I − T − kI)x − (I − T − kI)y]‖ for all r > 0 and all x, y ∈ K. Throughout, N will denote the set of positive integers. The following results generalize Liu [1, Theorems 1 and 2], because we remove the assumption that K is bounded and we provide a general convergence rate estimate. We note in passing, however, that the proof of Theorem 2 of Liu [1] does not use the stated boundedness assumption. Our results still extend this enhanced version of Liu [1, Theorem 2], by improving the convergence rate estimate. Theorem 1. Let (X, ‖ · ‖), K, T, L and k be as described above. Let q ∈ K be a fixed point of T. Suppose that (αn)n∈N is a sequence in (0, 1] such that for some η ∈ (0, k), for all n ∈ N, αn ≤ (k − η)/((L + 1)(L + 2 − k)), while Σn αn = ∞.
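As a hedged numerical illustration only (the paper's exact scheme, constants, and pseudocontractivity setting are not reproduced here), the sketch below runs a Mann-type iteration x_{n+1} = (1 − α)x_n + α T(x_n) with T = cos on the closed convex set [0, 1] ⊂ R, where the fixed point is unique:

```python
from math import cos

def mann_iterate(T, x0, steps=100, alpha=0.5):
    """Mann-type iteration with a constant step: x <- (1-a)x + a*T(x)."""
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

x_star = mann_iterate(cos, 0.0)
print(round(x_star, 6))  # 0.739085, the unique fixed point of cos
```

The averaging step is what tames maps that are Lipschitzian but not contractions; plain iteration of such a T need not converge, while the averaged scheme does under step-size conditions like the one in Theorem 1.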


Posted Content
Joseph O'Rourke1
TL;DR: In this paper, the authors define a slice curve as the intersection of a plane with the surface of a polytope, i.e., a convex polyhedron in three dimensions.
Abstract: Define a ``slice'' curve as the intersection of a plane with the surface of a polytope, i.e., a convex polyhedron in three dimensions. We prove that a slice curve develops on a plane without self-intersection. The key tool used is a generalization of Cauchy's arm lemma to permit nonconvex ``openings'' of a planar convex chain.

Book ChapterDOI
01 Jan 2000
TL;DR: In this article, the efficiency of polynomial-time algorithms for computing or approximating four radii of polytopes (diameter, width, inradius, circumradius) and the maximum of the Euclidean norm over a polytope was studied.
Abstract: Since various measurements of convex polytopes play an important role in many applications, it is useful to know how efficiently such measurements can be computed or at least approximated. The present study, set in Euclidean n-space, focuses on the efficiency of polynomial-time algorithms for computing or approximating four “radii” of polytopes (diameter, width, inradius, circumradius) and the maximum of the Euclidean norm over a polytope. These functionals are known to be tractable in some cases, and the tractability results are here complemented by showing for each of the remaining cases that unless P = NP, the performance ratios of polynomial-time approximation algorithms are uniformly bounded away from 1. These inapproximability results are established by means of a transformation from the problem Max-Not-All-Equal-3-Sat, and they apply even to very small classes of familiar polytopes (simplices, parallelotopes, and close relatives). They are sharp in the sense that the related problems are indeed approximable within a constant performance ratio. The results for parallelotopes apply also to the quadratic pseudoboolean optimization problems of maximizing a positive definite quadratic form over [0, 1]^n or [−1, 1]^n.

Patent
13 Apr 2000
TL;DR: In this paper, a method and system for measuring and analysing an industrial process performance visually from a graphical user interface is presented, where the performance is displayed on the user interface graphically as two polygons, in each corner of which there is one piece of performance index information (CSI, VI, SPI, RI, CTI, OI).
Abstract: The invention relates to a method and system for measuring and analysing an industrial process performance visually from a graphical user interface. The performance is displayed on the user interface graphically as two polygons, in each corner of which there is one piece of performance index information (CSI, VI, SPI, RI, CTI, OI). One of the polygons is a reference polygon (21), and each corner point represents a normalised reference value of one performance index. These reference index values represent ideal operation and performance, which have served as a basis for design. Different performance indexes are also preferably scaled and normalised such that each corner point of the reference polygon is at an equal distance from the centre (origin) of the polygon, and a regular polygon is formed. Overlapping with the reference polygon (21), another polygon, i.e. performance polygon, is displayed, and each corner point of this polygon represents the real value, calculated on the basis of the measurements, of one performance index. The performance indexes of the performance polygon are scaled according to the reference polygon. Thus, the shape of the performance polygon and the location of the corner points in relation to the shape of the reference polygon (21) and the location of the corner points visualise the real performance in relation to the reference performance.

Journal ArticleDOI
TL;DR: A hardware polygon rendering pipeline can be used with hardware compositing to volume render arbitrary unstructured grids composed of convex polyhedral cells.
Abstract: A hardware polygon rendering pipeline can be used with hardware compositing to volume render arbitrary unstructured grids composed of convex polyhedral cells. This technique is described, together with the global sorting necessary for back-to-front compositing, and the modifications that must be made to approximate curvilinear cells, whose faces may not be planar. © 2000 John Wiley & Sons, Inc.

Book ChapterDOI
09 Jul 2000
TL;DR: This paper defines a representation scheme for timed polyhedra based on their extreme vertices, and shows that this compact representation scheme is canonical for all (convex and non-convex) polyhedra in any dimension.
Abstract: In this paper we investigate timed polyhedra, i.e. polyhedra which are finite unions of full dimensional simplices of a special kind. Such polyhedra form the basis of timing analysis and in particular of verification tools based on timed automata. We define a representation scheme for these polyhedra based on their extreme vertices, and show that this compact representation scheme is canonical for all (convex and non-convex) polyhedra in any dimension. We then develop relatively efficient algorithms for membership, boolean operations, projection and passage of time for this representation.