
Showing papers on "Regular polygon published in 2004"


Journal ArticleDOI
TL;DR: A new convexity measure for planar regions bounded by polygons is defined and evaluated; it is found to be more sensitive to measured boundary defects than the so-called "area-based" convexity measures.
Abstract: Convexity estimators are commonly used in the analysis of shape. In this paper, we define and evaluate a new convexity measure for planar regions bounded by polygons. The new convexity measure can be understood as a "boundary-based" measure and in accordance with this it is more sensitive to measured boundary defects than the so-called "area-based" convexity measures. When compared with the convexity measure defined as the ratio between the Euclidean perimeter of the convex hull of the measured shape and the Euclidean perimeter of the measured shape, the new convexity measure also shows some advantages, particularly for shapes with holes. The new convexity measure has the following desirable properties: 1) the estimated convexity is always a number from (0, 1], 2) the estimated convexity is 1 if and only if the measured shape is convex, 3) there are shapes whose estimated convexity is arbitrarily close to 0, 4) the new convexity measure is invariant under similarity transformations, and 5) there is a simple and fast procedure for computing the new convexity measure.
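The abstract does not reproduce the formula, but a boundary-based measure in the spirit it describes can be sketched: compare the L1 (city-block) perimeter of the shape's axis-aligned bounding rectangle with the L1 perimeter of the shape itself, minimized over rotations. The following is a minimal sketch under that assumption, not the authors' reference implementation; the function names and the angular sampling are illustrative.

```python
import math

def l1_perimeter(pts):
    """City-block (L1) perimeter of a closed polygon given as (x, y) tuples."""
    n = len(pts)
    return sum(abs(pts[i][0] - pts[(i + 1) % n][0]) +
               abs(pts[i][1] - pts[(i + 1) % n][1]) for i in range(n))

def rotated(pts, theta):
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in pts]

def boundary_convexity(pts, steps=360):
    """Convexity estimate in (0, 1]: ratio of the bounding rectangle's L1
    perimeter to the polygon's L1 perimeter, minimized over rotations
    (the L1 perimeter is periodic in the rotation angle with period pi/2)."""
    best = 1.0
    for k in range(steps):
        r = rotated(pts, (math.pi / 2) * k / steps)
        xs = [x for x, _ in r]
        ys = [y for _, y in r]
        box = 2 * ((max(xs) - min(xs)) + (max(ys) - min(ys)))
        best = min(best, box / l1_perimeter(r))
    return best

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(round(boundary_convexity(square), 3))  # 1.0: a square is convex
```

For a simple polygon the bounding rectangle's L1 perimeter never exceeds the polygon's, so the ratio lies in (0, 1], and boundary defects lengthen the denominator without growing the box, pulling the score down.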

156 citations


Journal ArticleDOI
TL;DR: This paper presents a scheme to deal accurately and efficiently with complex angular masks, such as occur typically in galaxy surveys, and includes facilities to compute the spherical harmonics of the angular mask, and Data–Random and Random–Random angular integrals.
Abstract: This paper presents a scheme to deal accurately and efficiently with complex angular masks, such as occur typically in galaxy surveys. An angular mask is taken to be an arbitrary union of arbitrarily weighted angular regions bounded by arbitrary numbers of edges. The restrictions on the mask are (i) that each edge must be part of some circle on the sphere (but not necessarily a great circle), and (ii) that the weight within each subregion of the mask must be constant. The scheme works by resolving a mask into disjoint polygons, convex angular regions bounded by arbitrary numbers of edges. The polygons may be regarded as the ‘pixels’ of a mask, with the feature that the pixels are allowed to take a rather general shape, rather than following some predefined regular pattern. Among other things, the scheme includes facilities to compute the spherical harmonics of the angular mask, and Data–Random and Random–Random angular integrals. A software package mangle that implements this scheme, along with complete software documentation, is available at http://casa.colorado.edu/~ajsh/mangle/.
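In this representation a polygon is an intersection of "caps", each cap being the part of the sphere on one side of some circle (not necessarily a great circle). A minimal membership test in that spirit is sketched below; the tuple layout and sign convention are illustrative assumptions, not mangle's actual file format or API.

```python
import numpy as np

def in_cap(p, axis, cos_radius):
    """True if unit vector p lies in the spherical cap centered on unit
    vector `axis` with angular radius arccos(cos_radius)."""
    return np.dot(p, axis) >= cos_radius

def mask_weight(p, polygons):
    """polygons: list of (caps, weight), caps = [(axis, cos_radius), ...].
    Returns the weight of the first polygon containing p; a resolved mask
    has disjoint polygons, so at most one can match. 0.0 means off-mask."""
    for caps, weight in polygons:
        if all(in_cap(p, axis, cr) for axis, cr in caps):
            return weight
    return 0.0
```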

118 citations


Journal ArticleDOI
TL;DR: An energy-based general polygon-to-polygon normal contact model is proposed in which the normal and tangential directions, magnitude and reference contact position of the normal contact force are uniquely defined.
Abstract: This paper proposes an energy-based general polygon-to-polygon normal contact model in which the normal and tangential directions, magnitude and reference contact position of the normal contact force are uniquely defined. The model in its final form is simple and elegant with a clear geometric perspective, and also possesses some advanced features. Furthermore, it can be extended to more complex situations; in particular, it may also provide a sound theoretical foundation for possibly unifying existing contact models for all types of (convex) objects.
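The abstract defines the model through an energy over the contact region. Purely as a hedged illustration of the geometric core, and not the authors' energy function, the overlap polygon of two convex polygons can be computed with Sutherland-Hodgman clipping and its area used as a simple penetration measure:

```python
def clip(subject, clipper):
    """Sutherland-Hodgman: clip convex polygon `subject` against convex
    polygon `clipper` (both CCW lists of (x, y)); returns their overlap."""
    def inside(p, a, b):  # p on the left of directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def meet(p, q, a, b):  # intersection of segment p-q with line a-b
        den = (p[0]-q[0])*(a[1]-b[1]) - (p[1]-q[1])*(a[0]-b[0])
        t = ((p[0]-a[0])*(a[1]-b[1]) - (p[1]-a[1])*(a[0]-b[0])) / den
        return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
    out = list(subject)
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        src, out = out, []
        for j in range(len(src)):
            p, q = src[j], src[(j + 1) % len(src)]
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(meet(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(meet(p, q, a, b))
    return out

def area(poly):
    """Shoelace area of a CCW polygon."""
    n = len(poly)
    return 0.5 * sum(poly[i][0]*poly[(i+1) % n][1] -
                     poly[(i+1) % n][0]*poly[i][1] for i in range(n))

t1 = [(0, 0), (4, 0), (0, 4)]
t2 = [(1, 1), (5, 1), (1, 5)]
print(area(clip(t1, t2)))  # 2.0: area of the overlap region
```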

80 citations


01 Jul 2004
TL;DR: This thesis examines a specific realistic input model, in which objects are restricted to be fat, and proposes two much simpler algorithms that triangulate fat polygons in linear time.
Abstract: Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must also be designed with the worst-case examples in mind, which causes them to be needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem. The usual form such solutions take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems—two related to decompositions of input objects and three problems suggested by computer graphics. Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and/or is it possible to obtain decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex. We begin by triangulating fat polygons. The problem of triangulating polygons—that is, partitioning them into triangles without adding any vertices—has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two algorithms for triangulating fat polygons in linear time that are much simpler. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met. We also show that if these restrictions are not met, a quadratic number of pieces may be needed. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter. We then study three computational-geometry problems inspired by computer graphics. First, we study ray-shooting amidst fat objects from two perspectives. This is the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray in a given direction from a given point. We present a new data structure for answering vertical ray-shooting queries—that is, queries where the ray’s direction is fixed—as well as a data structure for answering ray-shooting queries for rays with arbitrary direction.
Both structures improve the best known results on these problems. Another problem that is studied in the field of computer graphics is the depth-order problem. We study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above the other if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying if a depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship has a cycle in it). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects. The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint—this is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should have a faster running time than if all of the objects are visible and have borders that overlap. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm is able to handle curved objects and situations where the objects do not have a depth order—two features missing from most other algorithms that perform hidden surface removal.
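The guard observation is easiest to see in the simplest special case: if a polygon is star-shaped and a guard point g in its kernel is known, connecting g to every vertex triangulates the polygon in linear time. Note that this fan inserts g as an extra (Steiner) vertex, whereas the thesis's algorithms triangulate without adding vertices; the sketch below (names mine) only illustrates why visibility from interior guards makes the problem tractable.

```python
def fan_triangulate(vertices, guard):
    """Triangulate a star-shaped polygon (CCW vertex list) by fanning out
    from a guard point assumed to lie in the polygon's kernel."""
    n = len(vertices)
    return [(guard, vertices[i], vertices[(i + 1) % n]) for i in range(n)]
```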

78 citations


Book ChapterDOI
01 Jan 2004
TL;DR: In this paper, the authors address the problem of identifying those functions whose convex envelope on a polyhedron P coincides with the convex envelope of their restrictions to the vertices of P; when this property holds, the function is said to have a vertex polyhedral convex envelope.
Abstract: In this paper we address the problem of identifying those functions whose convex envelope on a polyhedron P coincides with the convex envelope of their restrictions to the vertices of P. When this property holds we say that the function has a vertex polyhedral convex envelope.

67 citations


Journal ArticleDOI
TL;DR: In this article, the authors study the motion of an infinitesimal particle under the gravitational field of (n+1) bodies in a ring configuration, consisting of n primaries of equal mass m placed at the vertices of a regular polygon, plus another primary of mass m_0 = βm located at the geometric center of the polygon.

47 citations


Journal ArticleDOI
TL;DR: In this article, the authors develop a theory of discrete convexity for the lattice of integer points Z^n based on the separation property, and show that the (maximal) classes of discrete convexity are in one-to-one correspondence with pure systems.

45 citations


Journal ArticleDOI
TL;DR: For convex bodies of differentiability class C^(k+1), precise asymptotic expansions for the expected intrinsic volumes of a random polytope as n → ∞ are known.
Abstract: A random polytope is the convex hull of n random points in the interior of a convex body K. The expectation of the ith intrinsic volume of a random polytope as n → ∞ is investigated. It is proved that, for convex bodies of differentiability class C^(k+1), precise asymptotic expansions for these expectations exist. The proof makes essential use of a refinement of Crofton's boundary theorem.
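As a numerical companion (not from the paper), the expected area of a random polytope, i.e. its second intrinsic volume in the plane, can be estimated by Monte Carlo; the hull is computed with Andrew's monotone chain:

```python
import random

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def chain(points):
        h = []
        for p in points:
            while len(h) >= 2 and ((h[-1][0]-h[-2][0])*(p[1]-h[-2][1]) -
                                   (h[-1][1]-h[-2][1])*(p[0]-h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def hull_area(pts):
    h = convex_hull(pts)
    n = len(h)
    return 0.5 * abs(sum(h[i][0]*h[(i+1) % n][1] - h[(i+1) % n][0]*h[i][1]
                         for i in range(n)))

def sample_disk():
    while True:  # rejection sampling from the unit disk
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x*x + y*y <= 1:
            return (x, y)

def expected_hull_area(n, trials=500):
    return sum(hull_area([sample_disk() for _ in range(n)])
               for _ in range(trials)) / trials

print(expected_hull_area(100))  # approaches pi as n grows
```

For smooth bodies such as the disk, the expected area deficit is classically known to shrink like n^(-2/3), which is the kind of leading term such expansions refine.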

43 citations


Journal ArticleDOI
TL;DR: In this article, a 2-parameter family of 4-dimensional polytopes π(P_n^(2r)) with extreme combinatorial structure is constructed, showing that the "fatness" of the f-vector of a 4-polytope can get arbitrarily close to 9.
Abstract: It is an open problem to characterize the cone of f-vectors of 4-dimensional convex polytopes. The question whether the “fatness” of the f-vector of a 4-polytope can be arbitrarily large is a key problem in this context. Here we construct a 2-parameter family of 4-dimensional polytopes π(P_n^(2r)) with extreme combinatorial structure. In this family, the “fatness” of the f-vector gets arbitrarily close to 9; an analogous invariant of the flag vector, the “complexity,” gets arbitrarily close to 16. The polytopes are obtained from suitable deformed products of even polygons by a projection to R^4.
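Here the "fatness" of the f-vector f = (f_0, f_1, f_2, f_3) is the ratio of the middle entries to the outer ones; a one-line helper makes the invariant concrete. The unnormalized form (f_1 + f_2)/(f_0 + f_3) is used below; some papers subtract small constants, which does not affect the asymptotics.

```python
def fatness(f):
    """Fatness of a 4-polytope f-vector (f0, f1, f2, f3)."""
    f0, f1, f2, f3 = f
    return (f1 + f2) / (f0 + f3)

print(fatness((16, 32, 24, 8)))  # 4-cube: 56/24 = 2.33...
```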

33 citations


Journal ArticleDOI
TL;DR: In this paper, an approximation to a distribution governed by temperature readings taken at the edge of a polygonal domain can be constructed using interpolation functions which satisfy first order, constancy and linearity conditions.

33 citations


Journal ArticleDOI
29 Jan 2004
TL;DR: A nonempty, closed, bounded, convex subset of c_0 has the fixed point property if and only if it is weakly compact, as discussed by the authors.
Abstract: A nonempty, closed, bounded, convex subset of c_0 has the fixed point property if and only if it is weakly compact.

Posted Content
TL;DR: The authors came up with new inequalities: Scott's inequality can be sharpened if one takes into account another invariant, which is defined by peeling off the skins of the polygons like an onion (see Section 3).
Abstract: In this note we classify all triples (a, b, i) such that there is a convex lattice polygon P with area a, and b respectively i lattice points on the boundary respectively in the interior. The crucial lemma for the classification is the necessity of b ≤ 2i + 7. We sketch three proofs of this fact: the original one by Scott, an elementary one, and one using algebraic geometry. As a refinement, we introduce an onion-skin parameter l (how many nested polygons does P contain?) and give sharper bounds.
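The three quantities are tied together by Pick's theorem, a = i + b/2 - 1, so a candidate triple and Scott's bound b ≤ 2i + 7 are easy to check computationally; the helper names in this sketch are mine.

```python
from math import gcd

def boundary_points(poly):
    """Lattice points on the boundary: sum of gcd(|dx|, |dy|) over edges."""
    n = len(poly)
    return sum(gcd(abs(poly[(i+1) % n][0] - poly[i][0]),
                   abs(poly[(i+1) % n][1] - poly[i][1])) for i in range(n))

def twice_area(poly):
    """Twice the shoelace area (an integer for a lattice polygon)."""
    n = len(poly)
    return abs(sum(poly[i][0]*poly[(i+1) % n][1] -
                   poly[(i+1) % n][0]*poly[i][1] for i in range(n)))

def interior_points(poly):
    """Pick's theorem: i = a - b/2 + 1, with a = twice_area / 2."""
    return (twice_area(poly) - boundary_points(poly) + 2) // 2

tri = [(0, 0), (4, 0), (0, 4)]        # a = 8, so (a, b, i) = (8, 12, 3)
b, i = boundary_points(tri), interior_points(tri)
print(b, i, b <= 2 * i + 7)           # 12 3 True: Scott's bound holds
```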

Journal ArticleDOI
TL;DR: For the 3D case, it is proved that the VC-dimension is unbounded if the cameras lie on a sphere containing the polyhedron, hence the term exterior visibility.
Abstract: In this paper, we study the Vapnik-Chervonenkis (VC)-dimension of set systems arising in 2D polygonal and 3D polyhedral configurations where a subset consists of all points visible from one camera. In the past, it has been shown that the VC-dimension of planar visibility systems is bounded by 23 if the cameras are allowed to be anywhere inside a polygon without holes. Here, we consider the case of exterior visibility, where the cameras lie on a constrained area outside the polygon and have to observe the entire boundary. We present results for the cases of cameras lying on a circle containing a polygon (VC-dimension=2) or lying outside the convex hull of a polygon (VC-dimension=5). The main result of this paper concerns the 3D case: We prove that the VC-dimension is unbounded if the cameras lie on a sphere containing the polyhedron, hence the term exterior visibility.

Posted Content
TL;DR: In this article, a sharp combinatorial bound for the metric entropy of sets in R^n and general classes of functions is given, and a nicely bounded coordinate section of a symmetric convex body is constructed.
Abstract: We find a sharp combinatorial bound for the metric entropy of sets in R^n and general classes of functions. This solves two basic combinatorial conjectures on the empirical processes. 1. A class of functions satisfies the uniform Central Limit Theorem if the square root of its combinatorial dimension is integrable. 2. The uniform entropy is equivalent to the combinatorial dimension under minimal regularity. Our method also constructs a nicely bounded coordinate section of a symmetric convex body in R^n. In the operator theory, this essentially proves for all normed spaces the restricted invertibility principle of Bourgain and Tzafriri.

Journal ArticleDOI
20 May 2004
TL;DR: In this paper, the authors improve the Gauss-Lucas theorem by proving that all the nontrivial roots of the derivative p' of a complex non-constant polynomial p lie in a smaller convex polygon which is obtained by a strict contraction of the Lucas polygon of p.
Abstract: The celebrated Gauss-Lucas theorem states that all the roots of the derivative of a complex non-constant polynomial p lie in the convex hull of the roots of p, called the Lucas polygon of p. We improve the Gauss-Lucas theorem by proving that all the nontrivial roots of p' lie in a smaller convex polygon which is obtained by a strict contraction of the Lucas polygon of p.
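A quick numerical check of the classical (uncontracted) containment, using numpy and scipy; the polynomial is arbitrary, and roots that touch the hull boundary would need a tolerance:

```python
import numpy as np
from scipy.spatial import Delaunay

roots = np.array([0, 1, 1j, 2 + 1j, -1 - 2j])   # roots of p, spread out
p = np.poly(roots)                  # coefficients of p from its roots
crit = np.roots(np.polyder(p))      # roots of the derivative p'

pts = np.c_[roots.real, roots.imag]
tri = Delaunay(pts)                 # triangulates the Lucas polygon
inside = tri.find_simplex(np.c_[crit.real, crit.imag]) >= 0
print(inside.all())                 # True: Gauss-Lucas containment
```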

Book ChapterDOI
TL;DR: The dynamic programming approach is extended to the general problem and an exact algorithm is described which runs in O(6^k n^5 log n) time, where n is the total number of input points and k is the number of inner points.
Abstract: We propose to look at the computational complexity of 2-dimensional geometric optimization problems on a finite point set with respect to the number of inner points (that is, points in the interior of the convex hull). As a case study, we consider the minimum weight triangulation problem. Finding a minimum weight triangulation for a set of n points in the plane is not known to be NP-hard nor solvable in polynomial time, but when the points are in convex position, the problem can be solved in O(n^3) time by dynamic programming. We extend the dynamic programming approach to the general problem and describe an exact algorithm which runs in O(6^k n^5 log n) time where n is the total number of input points and k is the number of inner points. If k is taken as a parameter, this is a fixed-parameter algorithm. It also shows that the problem can be solved in polynomial time if k = O(log n). In fact, the algorithm works not only for convex polygons, but also for simple polygons with k interior points.
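For points in convex position, the O(n^3) dynamic program mentioned above is the classical interval recurrence over polygon diagonals. A compact version is sketched below, taking the weight to be the total length of interior diagonals (equivalent to total edge length up to the fixed boundary):

```python
import math
from functools import lru_cache

def mwt_convex(pts):
    """Minimum-weight triangulation of a convex polygon (vertices listed
    in cyclic order); weight = total length of interior diagonals."""
    n = len(pts)

    @lru_cache(maxsize=None)
    def cost(i, j):
        # min diagonal weight to triangulate the sub-polygon i, i+1, ..., j
        if j - i < 2:
            return 0.0
        best = math.inf
        for k in range(i + 1, j):   # apex of the triangle on chord (i, j)
            w = cost(i, k) + cost(k, j)
            if k - i > 1:
                w += math.dist(pts[i], pts[k])  # (i, k) is a diagonal
            if j - k > 1:
                w += math.dist(pts[k], pts[j])  # (k, j) is a diagonal
            best = min(best, w)
        return best

    return cost(0, n - 1)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(mwt_convex(square))  # 2.828...: one diagonal of length 2*sqrt(2)
```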

Proceedings Article
01 Jan 2004
TL;DR: This paper presents a polygon collision detection algorithm which uses polygon decomposition through triangle coverings and polygon influence areas (implemented by signs of barycentric coordinates), reducing the amount of time needed to detect a collision between objects.
Abstract: Collision detection between moving objects is an open question which raises major problems concerning its algorithmic complexity. In this paper we present a polygon collision detection algorithm which uses polygon decomposition through triangle coverings and polygon influence areas (implemented by signs of barycentric coordinates). By using influence areas and the temporal and spatial coherence property, the amount of time needed to detect a collision between objects is reduced. By means of these techniques, a valid representation for any kind of polygon is obtained, whether concave or convex, manifold or non-manifold, with or without holes, as well as a collision detection algorithm for figures of this type. This detection algorithm has been compared with the well-known PIVOT2D algorithm [Hof01], and better results have been achieved in most situations. This improvement, together with its possible extension to 3D, makes it an attractive method because pre-processing of the polygons is no longer necessary. Besides, since this method uses sign operations, it proves to be a simple, more efficient and robust method.
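The influence areas rest on signs of barycentric coordinates; the underlying point-in-triangle test is standard and easy to state on its own (this generic sketch is not the paper's data structure):

```python
def barycentric_signs(p, a, b, c):
    """Signs of the three signed areas that define p's barycentric
    coordinates with respect to triangle abc."""
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    return cross(a, b, p), cross(b, c, p), cross(c, a, p)

def point_in_triangle(p, a, b, c):
    """Inside (or on the border) iff the three signs agree."""
    s1, s2, s3 = barycentric_signs(p, a, b, c)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or \
           (s1 <= 0 and s2 <= 0 and s3 <= 0)

print(point_in_triangle((1, 1), (0, 0), (4, 0), (0, 4)))  # True
```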

Journal ArticleDOI
TL;DR: In this article, the eigenvalues of the Laplacian on regular polygons inscribed in the unit circle are studied numerically from three directions, without attempting a general theory: finite-element discretizations of the polygons, a Taylor series based on piecewise smooth perturbations of the circle, and a series expansion of the eigenvalues in powers of 1/N.
Abstract: "The difficulties are almost always at the boundary." That statement applies to the solution of partial differential equations (with a given boundary) and also to shape optimization (with an unknown boundary). These problems require two decisions, closely related but not identical: (1) how to discretize the boundary conditions, and (2) how to discretize the boundary itself. That second problem is the one we discuss here. The region Ω is frequently replaced by a polygon or polyhedron. The approximate boundary ∂Ω_N may be only a linear interpolation of the true boundary ∂Ω. A perturbation theory that applies to smooth changes of domain is often less successful for a polygon. This paper concentrates on a model problem (the simplest we could find) and we look at eigenvalues of the Laplacian. The boundary ∂Ω will be the unit circle. The approximate boundary ∂Ω_N is the regular inscribed polygon with N equal sides. It seems impossible that the eigenvalues of regular polygons have not been intensively studied, but we have not yet located an authoritative reference. The problem will be approached numerically from three directions, without attempting a general theory. Those directions are: (1) finite-element discretizations of the polygons Ω_N; (2) a Taylor series based on piecewise smooth perturbations of the circle; (3) a series expansion of the eigenvalues in powers of 1/N. The second author particularly wishes that we could have consulted George Fix about this problem. His Harvard thesis demonstrated the tremendous improvement that "singular elements" can bring to the finite-element method (particularly when Ω has a reentrant corner, or even a crack). His numerical experiments in [1] came at the beginning of a long and successful career in applied mathematics. We only wish it had been longer.
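For orientation (background, not a result of the paper): the limiting eigenvalue is known in closed form, since the smallest Dirichlet eigenvalue of the unit disk is the square of the first zero of the Bessel function J_0, while the inscribed square (N = 4, side sqrt(2)) has eigenvalue 2*pi^2/2 = pi^2:

```python
import math
from scipy.special import jn_zeros

lam_disk = jn_zeros(0, 1)[0] ** 2        # j_{0,1}^2 = 5.7832...
side = math.sqrt(2)                      # square inscribed in the unit circle
lam_square = 2 * math.pi**2 / side**2    # pi^2 = 9.8696...
print(lam_disk, lam_square)
# By domain monotonicity, the inscribed N-gon's eigenvalue decreases toward
# the disk value as N grows; the paper's direction (3) expands this
# convergence in powers of 1/N.
```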

Journal ArticleDOI
TL;DR: The problem of point-in-polygon analysis under randomness, i.e., with random measurement error (ME), is discussed and a conditional probability mechanism is first introduced in order to accurately characterize the nature of the problem and establish the basis for further analysis.
Abstract: This is the second paper of a four-part series of papers on the development of a general framework for error analysis in measurement-based geographic information systems (MBGIS). In this paper, we discuss the problem of point-in-polygon analysis under randomness, i.e., with random measurement error (ME). It is well known that overlay is one of the most important operations in GIS, and point-in-polygon analysis is a basic class of overlay and query problems. Though it is a classic problem, it has not been addressed appropriately. With ME in the location of the vertices of a polygon, the resulting random polygons may undergo complex changes, so that the point-in-polygon problem may become theoretically and practically ill-defined. That is, there is a possibility that we cannot answer whether a random point is inside a random polygon if the polygon is not simple and cannot form a region. For the point-in-triangle problem, however, such a case need not be considered since any triangle always forms an interior region. To formulate the general point-in-polygon problem in a suitable way, a conditional probability mechanism is first introduced in order to accurately characterize the nature of the problem and establish the basis for further analysis. For the point-in-triangle problem, four quadratic forms in the joint coordinate vectors of a point and the vertices of the triangle are constructed. The probability model for the point-in-triangle problem is then established by the identification of signs of these quadratic form variables. Our basic idea for solving a general point-in-polygon (concave or convex) problem is to convert it into several point-in-triangle problems under a certain condition. By solving each point-in-triangle problem and summing the solutions, the probability model for a general point-in-polygon analysis is constructed. The simplicity of the algebra-based approach is that, by using these quadratic forms, we can circumvent the complex geometrical relations between a random point and a random polygon (convex or concave) that one has to deal with in any geometric method when probability is computed. The theoretical arguments are substantiated by simulation experiments.
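The paper proceeds analytically through these quadratic forms; a Monte Carlo version of the same conditional probability is a useful sanity check. The sketch below perturbs each vertex with i.i.d. Gaussian ME and scores each trial with a standard even-odd test, which simply accepts whatever answer a possibly self-intersecting perturbed polygon gives (the paper treats that ill-defined case with more care); the noise level is an arbitrary assumption.

```python
import random

def point_in_polygon(p, poly):
    """Even-odd (ray-casting) point-in-polygon test."""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal through p
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def prob_in_polygon(p, poly, sigma=0.05, trials=10000):
    """Monte Carlo estimate of P(p inside polygon) when each vertex gets
    i.i.d. Gaussian measurement error with standard deviation sigma."""
    hits = 0
    for _ in range(trials):
        noisy = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
                 for x, y in poly]
        hits += point_in_polygon(p, noisy)
    return hits / trials

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(prob_in_polygon((0.5, 0.5)))  # close to 1: point deep inside
print(prob_in_polygon((1.0, 0.5)))  # close to 0.5: point on an edge
```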

Journal ArticleDOI
TL;DR: The history of the proofs of the well-known Cauchy lemma on comparison of the distances between the endpoints of two convex open polygons on a plane or sphere is briefly described in this paper.
Abstract: We briefly describe the history of the proofs of the well-known Cauchy lemma on comparison of the distances between the endpoints of two convex open polygons on a plane or sphere, present a rather analytical proof, and explain why the traditional constructions in general lead to the inevitable appearance of nonstrictly convex open polygons. We also consider bendings of two isometric open or closed convex polygons one onto the other.

Journal ArticleDOI
TL;DR: In this article, farthest points and cut loci on doubly covered convex polygons are studied and determined explicitly on doubly covered n-dimensional simplices, and the distance to the cut locus is investigated.
Abstract: We study farthest points and cut loci on doubly covered convex polygons, and determine them explicitly on doubly covered n-dimensional simplices.

01 Dec 2004
TL;DR: This work proves it is possible to compute in almost linear time a cutting sequence that is an O(log^2 n)-factor approximation of an optimal cutting sequence.
Abstract: We present approximation algorithms for cutting out a polygon P with n vertices from another convex polygon Q with m vertices by line cuts and ray cuts. For line cuts we require both P and Q to be convex, while for ray cuts we require Q to be convex and P to be ray cuttable. Our results answer a number of open problems and are either the first solutions or significantly improve over previously known solutions. For the line cutting version, we prove a key property that leads to a simple, constant factor approximation algorithm. For the ray cutting version, we prove it is possible to compute in almost linear time a cutting sequence that is an O(log^2 n)-factor approximation of an optimal cutting sequence. No algorithms were previously known for the ray cutting version.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: Results show that the proposed approach can not only find a set of minimum faulty polygons but also do so quickly, in terms of the number of rounds of information exchanges and updates between neighbors in the distributed solution.
Abstract: The rectangular faulty block model is the most commonly used fault model for designing fault-tolerant and deadlock-free routing algorithms in mesh-connected multicomputers. The convexity of a rectangle facilitates simple and efficient ways to route messages around fault regions using relatively few or no virtual channels to avoid deadlock. However, such a faulty block may include many nonfaulty nodes which are disabled, i.e., they are not involved in the routing process. Therefore, it is important to define a fault region that is convex and, at the same time, includes a minimum number of nonfaulty nodes. We propose an optimal solution that can quickly construct a set of minimum faulty polygons, called orthogonal convex polygons, from a given set of faulty blocks in a 2-D mesh (or 2-D torus). The formation of orthogonal convex polygons is implemented using either a centralized or distributed solution. Both solutions are based on the formation of faulty components, each of which consists of adjacent faulty nodes only, followed by the addition of a minimum number of nonfaulty nodes to make each component a convex polygon. Extensive simulation has been done to determine the number of nonfaulty nodes included in the polygon, and the result obtained is compared with the best existing known result. Results show that the proposed approach can not only find a set of minimum faulty polygons but also do so quickly in terms of the number of rounds of information exchanges and updates between neighbors in the distributed solution.
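A compact, centralized sketch of the core step, growing a faulty cluster into an orthogonally convex region (every row and column slice contiguous) by filling gaps to a fixpoint. It assumes a single cluster; the paper additionally handles multiple components, minimality, and a distributed protocol.

```python
def orthogonal_convex_fill(faulty, rows, cols):
    """Grow a cluster of faulty grid nodes (r, c) until every row and
    column slice is contiguous (orthogonal convexity). The region only
    grows and is bounded by the grid, so the fixpoint loop terminates."""
    region = set(faulty)
    changed = True
    while changed:
        changed = False
        for r in range(rows):
            cs = [c for (rr, c) in region if rr == r]
            if cs:
                for c in range(min(cs), max(cs) + 1):
                    if (r, c) not in region:
                        region.add((r, c))
                        changed = True
        for c in range(cols):
            rs = [r for (r, cc) in region if cc == c]
            if rs:
                for r in range(min(rs), max(rs) + 1):
                    if (r, c) not in region:
                        region.add((r, c))
                        changed = True
    return region

u_shape = {(0, 0), (1, 0), (1, 1), (1, 2), (0, 2)}
print(sorted(orthogonal_convex_fill(u_shape, 2, 3)))  # gains only (0, 1)
```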

01 Jan 2004
TL;DR: In this paper, the authors consider two families of hyperbolic polyhedra and derive formulae relating the dihedral angles, side lengths and the volume of the polyhedron.
Abstract: We consider two families of hyperbolic polyhedra. With one set of face pairings, these polyhedra give the convex core of certain quasi-Fuchsian punctured torus groups. With additional face pairings, they are related to hyperbolic cone manifolds with singularities over certain links. For both families we derive formulae relating the dihedral angles, side lengths and the volume of the polyhedron.

Journal ArticleDOI
TL;DR: In this paper, it was shown that any smooth knot is inscribed by a regular n-gon for any n, where n is the number of vertices in a regular polygon.
Abstract: A regular n-gon inscribing a knot is a sequence of n points on a knot, such that the distances between adjacent points are all the same. It is shown that any smooth knot is inscribed by a regular n-gon for any n. A knot K : S^1 → R^3 is said to be inscribed by a regular n-gon if there is a set of points x_0, ..., x_{n-1} lying on K in a cyclic order, such that the distances ‖x_{i-1} − x_i‖ between x_{i-1} and x_i are the same for i = 1, ..., n, where x_n = x_0. Jon Simon asked the question of whether every smooth knot K is inscribed by a regular n-gon for all n. There has been quite some research activity on this and related problems. See [2, §11] and the references there. In particular, it was shown by Meyerson [4] and E. Kronheimer and P. Kronheimer [3] that given any triangle there is one similar to it which inscribes a given planar curve. It is a very interesting open question whether any closed planar curve is inscribed by a square [2], although this has been proved for a very large class of curves, including all smooth or piecewise linear curves [6]. See also [5]. Up to rescaling we may assume that the length of K is 1. It has been observed by Eric Rawdon and Jonathan Simon (unpublished) that given any smooth knot K, there is a number N such that the statement is true for all n > N. Define an ε-chain of length n to be a sequence of points (x_0, ..., x_{n-1}) on K, lying successively along the positive orientation of K, such that ‖x_i − x_{i-1}‖ = ε for i = 1, ..., n − 1. Choose N large enough so that an ε-chain of length n exists for every ε ≤ 1/N. Hence by continuity there must be some ε such that x_n = x_0, and the result follows. The proof fails when n is small. The question of whether every smooth knot admits an inscribed n-gon for all n has remained open for some time and no answer is known. It seems worthwhile to record a positive solution. Actually, a little more is true. One can find a regular polygon with one vertex at any prescribed point. The proof is very elementary, although it does use the concept of degree of maps between spheres in an essential way. See [1] for some background. Note that the theorem as stated is not true if the smoothness assumption is dropped; however, it is not known whether it is true if one is also allowed to move the base point. See Remark 6 and Conjecture 7 below for more details.
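The ε-chain argument can be played out numerically; this is a hedged sketch of the construction, not the paper's proof. March along a parametrized closed curve placing successive points at chord distance ε, then bisect on ε until the chain of n chords closes up; the curve, scan step, and bracketing interval are arbitrary choices.

```python
import math

def curve(t):
    """A smooth closed curve (an ellipse), parametrized by t in [0, 1)."""
    return (2 * math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))

def next_param(t, eps, step=1e-4):
    """Smallest t' > t whose point is at chord distance eps from curve(t)
    (crude linear scan; adequate for an illustration)."""
    x0, y0 = curve(t)
    s = t
    while True:
        s += step
        x, y = curve(s)
        if math.hypot(x - x0, y - y0) >= eps:
            return s

def overshoot(eps, n):
    """Parameter advance of an n-chord eps-chain, minus one full loop;
    zero means the chain closes into an inscribed regular n-gon."""
    t = 0.0
    for _ in range(n):
        t = next_param(t, eps)
    return t - 1.0

def inscribed_ngon_chord(n, lo=0.5, hi=2.0, iters=30):
    """Bisection on eps; the advance grows monotonically with eps."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if overshoot(mid, n) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(inscribed_ngon_chord(7))  # common chord length of an inscribed 7-gon
```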

01 Jan 2004
TL;DR: In this article, three measures of connectivity of 19 planar graphs are studied: the connective constant for self-avoiding walks, and bond and site percolation critical probabilities, with the goal of comparing the induced orders of the Archimedean and Laves lattices under the three measures.
Abstract: An Archimedean lattice is a graph of a regular tiling of the plane, such that all corners are equivalent. A tiling is regular if all tiles are regular polygons: equilateral triangles, squares, et cetera. There exist exactly 11 Archimedean lattices. Being planar graphs, the Archimedean lattices have duals, 3 of which are Archimedean; the other 8 are called Laves lattices. In the thesis, three measures of connectivity of these 19 graphs are studied: the connective constant for self-avoiding walks, and bond and site percolation critical probabilities. The connective constant measures connectivity by the number of walks in which all visited vertices are unique. The critical probabilities quantify the proportion of edges or vertices that can be removed, so that the produced subgraph has a large connected component. A common issue for these measures is that they, although intensely studied by both mathematicians and scientists from other fields, have been calculated only for very few graphs. With the goal of comparing the induced orders of the Archimedean and Laves lattices under the three measures, the thesis gives improved bounds and estimates for many graphs. A large part of the thesis focuses on the problem of deciding whether a given graph is a subgraph of another graph. This surprisingly difficult problem is considered for the set of Archimedean and Laves lattices, and for the set of matching Archimedean and Laves lattices.
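The connective constant μ is defined through the number c_n of n-step self-avoiding walks from a fixed origin, via μ = lim c_n^(1/n). Brute-force enumeration on the square lattice, one of the Archimedean lattices, gives the flavor, though it is feasible only for small n:

```python
def count_saws(n):
    """Number of n-step self-avoiding walks on Z^2 from the origin."""
    def extend(path, steps_left):
        if steps_left == 0:
            return 1
        x, y = path[-1]
        total = 0
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in path:   # self-avoidance
                total += extend(path + [nxt], steps_left - 1)
        return total
    return extend([(0, 0)], n)

for n in (1, 2, 3, 10):
    c = count_saws(n)
    print(n, c, round(c ** (1 / n), 3))  # c_n^(1/n) -> mu = 2.638... on Z^2
```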

Journal ArticleDOI
TL;DR: This paper provides an alternative demonstration of Hunt and Hirschhorn's result that an equilateral convex pentagon tiles the plane if and only if it has two angles adding to π, or it is a uniquely determined pentagon with special angles.

Journal ArticleDOI
TL;DR: Bernstein bases, control polygons and corner-cutting algorithms are defined for Merrien's C1 curves, and the convergence of these algorithms is proved for two specific families of curves.
Abstract: Bernstein bases, control polygons and corner-cutting algorithms are defined for the C1 Merrien curves introduced in [7]. The convergence of these algorithms is proved for two specific families of curves. Results on monotone and convex interpolants which have been proved in [8] by Merrien and the author are also recovered.
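For the classical Bernstein/Bezier case, the corner-cutting step is de Casteljau subdivision: cutting the corners of the control polygon at a parameter t yields the control polygons of the two halves of the curve, and iterating makes the polygons converge to the curve. The paper's algorithms for Merrien's C1 interpolants are analogous but not identical; this standard sketch is for orientation only.

```python
def de_casteljau_split(ctrl, t=0.5):
    """One corner-cutting step: split a Bezier control polygon at t into
    the control polygons of the curve's two halves."""
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        left.append(pts[0])
        right.append(pts[-1])
    return left, right[::-1]

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
left, right = de_casteljau_split(ctrl)
print(left)   # control polygon of the first half of the cubic
print(right)  # control polygon of the second half
```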

Journal ArticleDOI
TL;DR: In this paper, a new approach to the derivation of self-similar distributions for grain growth is presented, based on the idea that an entire class of N-sided polygons may be represented by a single regular polygon of N curved sides.

Journal ArticleDOI
TL;DR: In this article, the authors generalize the Higman-Haemers inequalities for generalized polygons to thick regular near polygons.
Abstract: In this note we will generalize the Higman-Haemers inequalities for generalized polygons to thick regular near polygons.