
Showing papers on "Polygon" published in 1987


Book
01 Jan 1987
TL;DR: This book treats polygon partitions, orthogonal polygons, mobile guards, holes, exterior visibility, visibility algorithms, minimal guard covers, and three-dimensional and miscellaneous topics.
Abstract: Polygon partitions Orthogonal polygons Mobile guards Miscellaneous shapes Holes Exterior visibility Visibility groups Visibility algorithms Minimal guard covers Three-dimensions and miscellany.

1,547 citations


Journal ArticleDOI
TL;DR: Two algorithms for computing the gravitational and magnetic anomalies due to an n‐sided polygon in a two‐dimensional space are presented, implemented as subroutines coded in Fortran-77.
Abstract: We present two algorithms for computing the gravitational and magnetic anomalies due to an n-sided polygon in a two-dimensional space. Both algorithms have been implemented as subroutines coded in Fortran-77, and listings are provided. Because references to trigonometric functions have been almost completely eliminated, these codes run substantially faster than most codes now in existence. Furthermore, anomalies can be computed at any point outside, on, or inside the polygon. Unlike other codes, these algorithms can be used to model subsurface observations.
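The paper's closed-form, edge-wise algorithms are not reproduced here, but the quantity they compute can be illustrated with a brute-force numerical check: the vertical attraction of an infinitely long body with polygonal cross-section, obtained by summing contributions of small area elements inside the polygon. A minimal Python sketch, with a made-up prism, density contrast, and observation point (z measured positively downward); it is a slow sanity check, not the paper's method.

```python
def point_in_polygon(x, z, poly):
    """Even-odd (ray casting) test; poly is a list of (x, z) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, z1 = poly[i]
        x2, z2 = poly[(i + 1) % n]
        if (z1 > z) != (z2 > z):
            x_cross = x1 + (z - z1) * (x2 - x1) / (z2 - z1)
            if x_cross > x:
                inside = not inside
    return inside

def gravity_anomaly_bruteforce(poly, obs, rho, cell=1.0):
    """Vertical attraction (m/s^2) of an infinite prism with polygonal
    cross-section `poly` (coordinates in metres, z positive down), observed
    at `obs`, by summing 2*G*rho*dz/r^2 over small area elements."""
    G = 6.674e-11
    xs = [p[0] for p in poly]
    zs = [p[1] for p in poly]
    gz = 0.0
    x = min(xs) + cell / 2
    while x < max(xs):
        z = min(zs) + cell / 2
        while z < max(zs):
            if point_in_polygon(x, z, poly):
                dx, dz = x - obs[0], z - obs[1]
                gz += 2 * G * rho * dz / (dx * dx + dz * dz) * cell * cell
            z += cell
        x += cell
    return gz

# Example: a buried rectangular prism, density contrast 500 kg/m^3.
prism = [(-50.0, 20.0), (50.0, 20.0), (50.0, 60.0), (-50.0, 60.0)]
print(gravity_anomaly_bruteforce(prism, obs=(0.0, 0.0), rho=500.0))
```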

311 citations


Journal ArticleDOI
TL;DR: It is shown that form closure of a polygon object can be achieved by four fingers (previous proofs were not complete), and the problem of finding the optimum stable grip or form closure of any given polygon is solved.
Abstract: It has been shown by Baker, Fortune and Grosse that any two-dimensional polygonal object can be prehended stably with three fingers, so that its weight (along the third dimension) is balanced. Besides, in this paper we show that form closure of a polygon object can be achieved by four fingers (previous proofs were not complete). We formulate and solve the problem of finding the optimum stable grip or form closure of any given polygon. For stable grip it is most natural to minimize the forces needed to balance through friction the object's weight along the third dimension. For form closure, we minimize the worst-case forces needed to balance any unit force acting on the center of gravity of the object. The mathematical techniques used in the two instances are an interesting mix of optimization and Euclidean geometry. Our results lead to algorithms for the efficient computation of the optimum grip in each case.

189 citations


Journal ArticleDOI
TL;DR: Computer vision algorithms that recognize and locate partially occluded objects using a generate-test paradigm that iteratively generates and tests hypotheses for compatibility with the scene until it identifies all the scene objects.
Abstract: We present computer vision algorithms that recognize and locate partially occluded objects. The scene may contain unknown objects that may touch or overlap giving rise to partial occlusion. The algorithms revolve around a generate-test paradigm. The paradigm iteratively generates and tests hypotheses for compatibility with the scene until it identifies all the scene objects. Polygon representations of the object's boundary guide the hypothesis generation scheme. Choosing the polygon representation turns out to have powerful consequences in all phases of hypothesis generation and verification. Special vertices of the polygon called "corners" help detect and locate the model in the scene. Polygon moment calculations lead to estimates of the dissimilarity between scene and model corners, and determine the model corner location in the scene. An association graph represents the matches and compatibility constraints. Extraction of the largest set of mutually compatible matches from the association graph forms a model hypothesis. Using a coordinate transform that maps the model onto the scene, the hypothesis gives the proposed model's location and orientation. Hypothesis verification requires checking for region consistency. The union of two polygons and other polygon operations combine to measure the consistency of the hypothesis with the scene. Experimental results give examples of all phases of recognizing and locating the objects.

148 citations


Journal ArticleDOI
TL;DR: A modification and extension of the (linear-time) visibility polygon algorithm of Lee is presented; examples show that the original algorithm of Lee, and a more complex algorithm of El Gindy and Avis, can fail for polygons that wind sufficiently.
Abstract: We present a modification and extension of the (linear time) visibility polygon algorithm of Lee. The algorithm computes the visibility polygon of a simple polygon from a viewpoint that is either interior to the polygon, or in its blocked exterior (the cases of viewpoints on the boundary or in the free exterior being simple extensions of the interior case). We show by example that the original algorithm by Lee, and a more complex algorithm by El Gindy and Avis, can fail for polygons that wind sufficiently. We present a second version of the algorithm, which does not extend to the blocked exterior case.

137 citations


Journal ArticleDOI
P. Widmayer, Y. F. Wu, C. K. Wong
TL;DR: The distance concept is generalized to the case where any fixed set of orientations is allowed; a family of naturally induced metrics and the corresponding generalizations of geometrical concepts are introduced.
Abstract: In VLSI design, technology requirements often dictate the use of only two orthogonal orientations, determining both the shape of objects and the distance function, the $L_1$-metric, to be used for wiring objects. More recent VLSI fabrication technology is capable of creating edges and wires in both the orthogonal and diagonal orientations. We generalize the distance concept to the case where any fixed set of orientations is allowed, and introduce a family of naturally induced metrics and the corresponding generalizations of geometrical concepts. A shortest connection between two points is in this case a path composed of line segments with only the given orientations. We derive optimal solutions for various basic planar distance problems in this setting, such as the computation of a Voronoi diagram, a minimum spanning tree, and the (minimum and maximum) distance between two convex polygons. Many other theoretically interesting and practically relevant problems remain to be solved. In particular, the new famil...
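As an illustration of the induced metric (not of the paper's Voronoi diagram or spanning tree algorithms), the distance between two points under a fixed set of orientations can be obtained by writing the displacement as a non-negative combination of two allowed direction vectors and minimizing the sum of the coefficients. A minimal Python sketch, with hypothetical orientation sets:

```python
import math
from itertools import combinations

def fixed_orientation_distance(p, q, angles):
    """Distance from p to q when movement is restricted to the given
    orientations (angles in radians; each orientation may be traversed in
    both directions). The displacement is written as a non-negative
    combination a*u + b*v of two allowed direction vectors, and the
    smallest achievable a + b over all valid pairs is returned."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    if dx == 0 and dy == 0:
        return 0.0
    # Each orientation contributes two opposite unit direction vectors.
    dirs = []
    for t in angles:
        dirs.append((math.cos(t), math.sin(t)))
        dirs.append((-math.cos(t), -math.sin(t)))
    best = math.inf
    for (ux, uy), (vx, vy) in combinations(dirs, 2):
        det = ux * vy - uy * vx
        if abs(det) < 1e-12:
            continue  # parallel directions cannot span the displacement
        a = (dx * vy - dy * vx) / det
        b = (ux * dy - uy * dx) / det
        if a >= -1e-12 and b >= -1e-12:
            best = min(best, a + b)
    return best

# With only horizontal and vertical orientations this reduces to the L1 metric:
print(fixed_orientation_distance((0, 0), (3, 4), [0, math.pi / 2]))   # 7.0
# Adding the two diagonal orientations shortens the path:
print(fixed_orientation_distance((0, 0), (3, 4),
                                 [0, math.pi / 2, math.pi / 4, 3 * math.pi / 4]))
```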

122 citations


Patent
04 Nov 1987
TL;DR: In this paper, a pipeline of polygon processors coupled in series is used for representing 3D objects on a monitor, with each polygon having its position determined by the first scan line on which it appears.
Abstract: A graphic processing system for representing three-dimensional objects on a monitor which uses a pipeline of polygon processors coupled in series. The three-dimensional objects are converted into a group of two-dimensional polygons. These polygons are then sorted to put them in scan line order, with each polygon having its position determined by the first scan line on which it appears. Before each scan line is processed, the descriptions of the polygons beginning on that scan line are sent into a pipeline of polygon processors. Each polygon processor accepts one of the polygon descriptions and stores it for comparison to the pixels of that scan line which are subsequently sent along the polygon processor pipeline. For each new scan line, polygons which are no longer covered are eliminated and new polygons are entered into the pipe. After each scan line is processed, the pixels can be sent directly to the CRT or can be stored in a frame buffer for later accessing. Two polygon processor pipelines can be arranged in parallel to process two halves of a display screen, with one pipeline being loaded while the other is processing. A frame buffer and frame buffer controller are provided for overflow conditions where two passes through the polygon pipeline are needed. A unique clipping algorithm forms a guardband space around a viewing space and clips only polygons intersecting both shells. Extra areas processed are simply not displayed.
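A rough software analogue of the scan-line ordering described above (not the patented hardware pipeline) can be sketched as follows: polygons are bucketed by the first scan line on which they appear, enter an active set when that line is reached, and are retired once their last line has passed. The names and toy polygons are illustrative only.

```python
from collections import defaultdict

def scanline_schedule(polygons, height):
    """Sketch of the scan-line ordering: `polygons` maps a name to a list of
    (x, y) vertices. Each polygon enters the active set on the first scan
    line it touches and leaves after its last one."""
    start_bucket = defaultdict(list)   # scan line -> polygons starting there
    last_line = {}
    for name, verts in polygons.items():
        ys = [y for _, y in verts]
        start_bucket[min(ys)].append(name)
        last_line[name] = max(ys)

    active = []
    for y in range(height):
        active = [p for p in active if last_line[p] >= y]  # retire finished polygons
        active.extend(start_bucket.get(y, []))             # load newly starting ones
        yield y, list(active)  # this set would be compared against the line's pixels

polys = {"A": [(0, 1), (5, 1), (3, 4)], "B": [(2, 3), (6, 3), (6, 6), (2, 6)]}
for y, active in scanline_schedule(polys, height=8):
    print(y, active)
```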

111 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduced the concept of a reference network, in which each molecule is hydrogen bonded to four others, and reported on the temperature dependence of its topological characteristics.
Abstract: Configurations of 216 water molecules sampled during the course of isobaric-isothermal simulations over the temperature range -25 to 100 °C at 1 atm pressure, using the TIP4P model, are analyzed to study the hydrogen-bond network topology. Results are presented for the total number of polygons of up to seven molecules and for primitive polygons, being those which have no pair of nonadjacent vertices connected by a bridge which is shorter than either of the paths between these vertices within the polygon itself. The authors introduce the concept of a reference network, in which each molecule is hydrogen bonded to four others, and report on the temperature dependence of its topological characteristics.
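Only the counting step lends itself to a short illustration; the reference network and the primitive-polygon filter are not reproduced here. A Python sketch that counts closed polygons (simple cycles) of up to seven vertices in an undirected bond graph, exercised on a cube graph rather than a water network:

```python
def count_polygons(adj, max_len=7):
    """Count simple cycles (closed polygons) of length 3..max_len in an
    undirected graph given as {vertex: set_of_neighbours}. Each cycle is
    counted once: its smallest vertex is the start, and only the orientation
    whose second vertex is smaller than its last vertex is counted."""
    counts = {k: 0 for k in range(3, max_len + 1)}

    def dfs(start, current, path):
        for nxt in adj[current]:
            if nxt == start and len(path) >= 3:
                if path[1] < path[-1]:          # count one orientation only
                    counts[len(path)] += 1
            elif nxt > start and nxt not in path and len(path) < max_len:
                dfs(start, nxt, path + [nxt])

    for start in adj:
        dfs(start, start, [start])
    return counts

# A cube graph: 6 four-sided polygons (its faces) and 16 six-sided polygons.
cube = {0: {1, 3, 4}, 1: {0, 2, 5}, 2: {1, 3, 6}, 3: {0, 2, 7},
        4: {0, 5, 7}, 5: {1, 4, 6}, 6: {2, 5, 7}, 7: {3, 4, 6}}
print(count_polygons(cube))   # {3: 0, 4: 6, 5: 0, 6: 16, 7: 0}
```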

111 citations


Patent
09 Dec 1987
TL;DR: In this article, polygon contention for pixels is resolved by determining the edge of intersection between the planes of the contending polygons and testing the signs of certain values in accordance with predetermined criteria; subpixel priority is treated similarly to provide improved antialiased images.
Abstract: Image data is composed from primitives (polygons) to attain data for displays with the removal of hidden surfaces and smooth-appearing edges. Defined polygons are tested for priority in a determined field of vision by scan conversion to specify individual picture elements (pixels). Polygon contention for pixels is resolved by determining the edge of intersection between the planes of such polygons and testing the signs of certain values in accordance with predetermined criteria. Subpixel priority is treated for similar resolution to provide improved antialiased images.

99 citations


Patent
30 Mar 1987
TL;DR: In this article, plumb vectors, having a predetermined relationship with the vehicle model, are used to obtain samples of the terrain at the intersection between the vectors and polygons defining the ground in the vicinity of the vehicle.
Abstract: In a computer image generation system, the choice of a path for a vehicle model over a landscape is not restricted. Objects and features in an image to be displayed are defined by polygons. Plumb vectors, having a predetermined relationship with the vehicle model, are used to obtain samples of the terrain at the intersection between the vectors and polygons defining the ground in the vicinity of the vehicle model. The vectors may sample in advance of the vehicle model in the direction of motion or under the vehicle model. The polygons may be encoded with characteristics of the terrain they define so that appropriate noise cues can be generated from information extracted at the intersection of the plumb vectors and polygons. Predetermined ones of the sample points are interpolated for inferring the contour and slope of the terrain before interaction between the vehicle model and interpolated terrain is determined. The display of an image is modified in response to the real-time interaction between the vehicle model and local topology to reflect such interaction at a predetermined viewpoint.
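The sampling step can be illustrated in isolation: a plumb (vertical) vector dropped at (x, y) intersects one ground polygon, and the terrain height beneath the vehicle model is interpolated from that polygon's vertices. A minimal sketch for a single triangular ground polygon, using barycentric interpolation and made-up coordinates:

```python
def plumb_sample(x, y, tri):
    """Height of the terrain triangle `tri` = [(x, y, z), ...] directly under
    the plumb line at (x, y), or None if the plumb line misses the triangle.
    Barycentric coordinates in the xy-plane interpolate the vertex heights."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    if det == 0:
        return None                      # degenerate (zero-area) triangle
    a = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    b = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    c = 1.0 - a - b
    if min(a, b, c) < 0:
        return None                      # (x, y) lies outside this polygon
    return a * z1 + b * z2 + c * z3      # interpolated ground height

# Example: sample the ground ahead of a vehicle over one terrain triangle.
ground = [(0.0, 0.0, 10.0), (100.0, 0.0, 12.0), (0.0, 100.0, 20.0)]
print(plumb_sample(25.0, 25.0, ground))   # 13.0
```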

67 citations


01 Jan 1987
TL;DR: This thesis investigates problems related to paths of fewest number of turns (or minimum link paths), under the simplifying assumption that the robot is a point, and presents efficient algorithms for the following problems in a polygon P.
Abstract: Consider the motion of a robot in a two-dimensional Euclidean space that is bounded by a simple polygon. The robot can move only along straight-line segments and the boundary of the polygon represents an impenetrable obstacle. Suppose that it is easier for the robot to move in a straight line rather than to turn and rotate. In this thesis, we investigate problems related to paths of fewest number of turns (or minimum link paths), under the simplifying assumption that the robot is a point. In particular, we present efficient algorithms for the following problems in a polygon P: (1) Find a minimum link path between two given points of P. (2) Find minimum link paths between a fixed point and all the vertices of P. (3) Preprocess P with respect to a fixed point x such that the number of edges in a minimum link path between x and a query point can be determined in logarithmic time. (4) Compute the link diameter of P (which is the maximum number of edges in any minimum link path of P). We also consider problems where the complexity of a path is measured by its Euclidean length rather than by its number of turns. In this geodesic metric, the distance between two points of the polygon is the length of the shortest Euclidean path between them. We present fast algorithms for the following problems in P: (1) Compute the geodesic diameter of P (which is the maximum distance between any two points of P). (2) Compute geodesic furthest neighbors for all the vertices of P (the furthest neighbor of a point x is another point in P whose distance from x is maximum). Several open problems are discussed at the conclusion of the work.

Book
01 Jul 1987
TL;DR: Minimal decompositions of polygons into rectangles and triangles are studied, including an analysis of the so-called greedy triangulation; the rectangle problems have applications in fabricating masks for integrated circuits and in VLSI routing, and several heuristics are proposed which produce solutions within moderate constant factors of the optimum.
Abstract: The following problems of minimally decomposing polygons are considered: (1) decompose a polygon into a minimum number of rectangles, (2) partition a polygon into rectangles by inserting edges of minimum total length and (3) partition a polygon into triangles by inserting a maximal set of non-intersecting diagonals, such that their total length is minimized. The first problem has an application in fabricating masks for integrated circuits. Tight upper and lower bounds are shown for the maximal number of rectangles which may be required to cover any polygon. Also, a fast heuristic which achieves these upper bounds is presented. The second problem has an application in VLSI design, in dividing routing regions into channels. Several heuristics are proposed, which produce solutions within moderate constant factors from the optimum. Also, by employing an unusual divide-and-conquer method, the time performance of a known heuristic is substantially reduced. The third problem has an application in numerical analysis and in constructing optimal search trees. Here, the contribution of the thesis concerns analysis of the so-called greedy triangulation. Previous upper and lower bounds on the length of the greedy triangulation are improved. Also, a linear-time algorithm computing greedy triangulations for an interesting class of polygons is presented.

Journal ArticleDOI
TL;DR: The technique described creates smooth, shaded images for parametrically defined surfaces; it renders directly from the surface definitions without needing polygons and can maintain a relatively compact database.
Abstract: The technique described creates smooth, shaded images for parametrically defined surfaces. It depends on a general surface scan to generate a dense set of points that represents the surface. Hidden surfaces are eliminated by sorting the sample points into a z-buffer and retaining the points nearest the viewer. Alternative schemes are given for computing the surface normal for each point in the z-buffer. Conventional illumination routines can then be applied to determine the intensities at each pixel, although the reflection-mapping technique is preferred for this article. Special attention is given to the adaptation of the technique for efficiency to both polynomial and rational Bezier patches. The resulting algorithms are simple and robust. Their speed depends mainly on the surface area of the object being rendered, not on the number of surface patches. The technique is particularly suitable for rapid display of geometrically complex objects. Since the technique directly renders from the definitions of the surfaces without needing polygons, it can maintain a relatively compact database. Several examples with timing are included.

Patent
20 Jan 1987
TL;DR: In this paper, the values of electrical signals that represent the coordinates of vertex locations, and a point normal used to obtain their intensities, are controlled by programming a computer to generate the vertex coordinates from parametric equations in "s" and "t", forming rows of triangular polygons for display.
Abstract: Data is generated for graphically displaying objects having surfaces defined by vertex locations, each having a particular intensity. The values of electrical signals that represent the coordinates of the vertex locations and a point normal used to obtain the intensities are controlled by programming a computer to generate electrical signals that represent coordinates of the vertices. The value of each of the coordinates is in terms of an equation having parameters "s" and "t". "t" is identified and kept constant at a first amount. The coordinate equations are factored to redefine them in terms of a first constant and a variable represented by "s". The equations are sequentially solved by substituting therein a predetermined series of values for "s" to control the values of the electrical signals so that certain of the signals represent ones of the coordinates that form a first row of the vertices. The factoring and solving steps are repeated using a second amount for "t" and the same predetermined series of values for "s" so that certain electrical signals represent ones of the coordinates that form a second row of the vertices. Groups of three of the electrical signals are sequentially selected from those representing the coordinates forming the first and second rows of vertices to form a first row of triangular polygons. The factoring, solving and selection are repeated as necessary to form rows of polygons that represent the entire object for display.
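The row-by-row scheme reads like ordinary parametric tessellation: hold "t" fixed, sweep "s" over a series of values to produce one row of vertices, repeat for the next "t", and join consecutive rows into triangular polygons. A small Python sketch under that reading, with a hypothetical saddle patch standing in for the patent's coordinate equations:

```python
def tessellate(surface, s_values, t_values):
    """Evaluate a parametric surface(s, t) -> (x, y, z) row by row (t held
    constant along each row) and connect consecutive rows into triangles.
    Returns (vertices, triangles), triangles being index triples."""
    rows = [[surface(s, t) for s in s_values] for t in t_values]
    vertices = [v for row in rows for v in row]
    ncols = len(s_values)
    triangles = []
    for r in range(len(t_values) - 1):
        for c in range(ncols - 1):
            a = r * ncols + c            # vertex in this row
            b = (r + 1) * ncols + c      # vertex in the next row
            triangles.append((a, a + 1, b))        # two triangles per grid cell
            triangles.append((a + 1, b + 1, b))
    return vertices, triangles

def saddle(s, t):
    """Hypothetical stand-in for the patent's coordinate equations."""
    return (s, t, s * t)

steps = [i / 4 for i in range(5)]
verts, tris = tessellate(saddle, steps, steps)
print(len(verts), "vertices,", len(tris), "triangles")   # 25 vertices, 32 triangles
```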

Journal ArticleDOI
Jack E. Bresenham
TL;DR: Implementation considerations relevant to selecting and customizing incremental line-drawing algorithms to cope with such anomalies as equal error metric instances, perturbation effects of clipping, intersections in raster space, EXOR interpretations for polylines, reversibility, and fractional endpoint rounding are discussed.
Abstract: In implementing raster graphic algorithms, it is important to thoroughly understand behavior and implicit defaults inherent in each algorithm. Design choices must balance performance with respect to drawing speed, circuit count, code space, picture fidelity, system complexity, and system consistency. For example, "close" may sound appealing when describing the match of the rastered representation to a geometric line. An implementation, however, must quantify an error metric, such as minimum normal distance between candidate raster grid points and the geometric line, and resolve "ties" in which two candidate grid points have an equal error metric. Equal error metric ambiguity can permit algorithmic selection of raster points for a line from (X0, Y0) to (X1, Y1) to differ from the points selected when rastering the same line back from (X1, Y1) to (X0, Y0). Modifying a rastering algorithm to provide an exactly reversible path, though, will cause problems when lines are rastered in a context of approximating a circle with a polygon. Only by fully understanding any algorithm can designers determine whether such pel-level anomalies are worth the code space or circuit count necessary to provide explicit user resolution, or whether a fixed default must suffice. This article discusses implementation considerations relevant to selecting and customizing incremental line-drawing algorithms to cope with such anomalies as equal error metric instances, perturbation effects of clipping, intersections in raster space, EXOR interpretations for polylines, reversibility, and fractional endpoint rounding.
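The reversibility anomaly is easy to reproduce with any fixed tie-breaking rule. The sketch below implements a standard integer incremental (Bresenham-style) line and rasters one line containing an equal-error tie in both directions; the two traversals select different pixels, which is the behavior the article says an implementation must either accept or explicitly resolve.

```python
def raster_line(x0, y0, x1, y1):
    """Standard integer incremental (Bresenham-style) line, all octants.
    Ties in the error term are broken the same way in every call, which is
    exactly why forward and backward traversals of the same line can differ."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 >= x0 else -1
    sy = 1 if y1 >= y0 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        points.append((x, y))
        if x == x1 and y == y1:
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return points

forward = raster_line(0, 0, 4, 2)
backward = raster_line(4, 2, 0, 0)
print(forward)                    # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
print(list(reversed(backward)))   # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2)]
```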

Patent
18 Feb 1987
TL;DR: In this paper, a method of filling polygonal regions enables a polygon of any shape to be filled with pixels using a simple algorithm; a display memory area and a working area, each having a plurality of addresses corresponding to all the pixels on a display screen, are provided.
Abstract: A method of filling a polygonal region enables a polygon of any shape to be filled with pixels with a simple algorithm. A display memory area and a working area each having a plurality of addresses corresponding to all the pixels on a display screen are provided. A minimum rectangular area including the polygon is determined in accordance with all the polygon vertices, and the minimum rectangular area within the working area is cleared. A straight line (or an edge) connecting each pair of adjoining vertices of the polygon is described to the display memory area with predetermined values. The edges are also described to the working area but in accordance with three rules. Under Rule 1, when a constituent dot of the edge is described to the corresponding address, the data in the address is inverted. Under Rule 2, a start constituent dot of each edge is described only when the edge to be described has an inclination different in polarity from the precedingly described edge. Under Rule 3, the constituent dot of each edge is described only when the constituent dot is shifted in the vertical direction. Then, the rectangular area in the working area is scanned to detect dots in the state of "1" for each scanning line and to number the detected dots. Finally, addresses of the display memory area corresponding to each interval from the odd-numbered detected dot to the even-numbered detected dot are filled with predetermined values.
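Stripped of the patent's three working-area rules, the underlying idea is the classic even-odd scan-line fill: intersect each scan line with the polygon edges, sort the crossings, and fill between alternating pairs. A minimal Python sketch (sampling each scan line at y + 0.5, which sidesteps the vertex cases the rules are designed to handle):

```python
import math

def fill_polygon(vertices, width, height):
    """Even-odd scan-line fill. `vertices` lists the polygon's (x, y) corners;
    the result is a height x width grid of 0/1 pixels. Each scan line is
    sampled at y + 0.5, so crossings at vertices need no special casing."""
    grid = [[0] * width for _ in range(height)]
    n = len(vertices)
    for row in range(height):
        y = row + 0.5
        xs = []
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]
            if (y1 > y) != (y2 > y):                  # edge crosses this scan line
                xs.append(x1 + (y - y1) * (x2 - x1) / (y2 - y1))
        xs.sort()
        for left, right in zip(xs[0::2], xs[1::2]):   # fill between alternating pairs
            first = max(0, math.ceil(left - 0.5))     # pixel centres sit at col + 0.5
            last = min(width - 1, math.floor(right - 0.5))
            for col in range(first, last + 1):
                grid[row][col] = 1
    return grid

# A small concave polygon (a notch on its right side).
poly = [(1, 1), (8, 1), (5, 4), (8, 7), (1, 7)]
for scanline in fill_polygon(poly, width=10, height=9):
    print("".join(".#"[p] for p in scanline))
```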

01 Aug 1987
TL;DR: In this article, the authors define and give algorithms for two classes of physical modeling problems, mobility problems and stability problems, and develop a restricted class of configurations, the determined configurations, for which a conservative stability problem can be solved in polynomial time.
Abstract: The ability to model physical objects and procedures accurately enough to predict their behavior without performing physical experimentation is a fundamental goal of robotics. This facility is prerequisite to offline modeling of assembly tasks, high level robotics languages, and automated assembly planning. This thesis defines and gives algorithms for two classes of physical modeling problems: Mobility problems and Stability problems. The Mobility problem for polygons is that of determining whether, in a configuration of non-intersecting polygons, one or more polygons can be moved (relative to some other polygon in the configuration) without causing intersection. Mobility is shown to be NP-hard. An upper bound for the mobility problem remains an open problem. Translational mobility is the problem of determining whether any polygons can be simultaneously moved by translations without causing intersection. Translational mobility is shown to be NP-complete. Infinitesimal mobility is the problem of determining whether there is a set of velocities for the polygons of the configuration such that no point of a polygon $P_{i}$ that is in contact with another polygon $P_{j}$ has a velocity directed towards the interior of $P_{j}$. Infinitesimal mobility can be viewed as an approximation to the mobility problem in that any configuration that is mobile is also infinitesimally mobile. Infinitesimal mobility is shown to be NP-complete. The Stability problem for polygons with mass is the problem of determining whether a configuration of polygons is at a static equilibrium point. The stability problem is considered for configurations of polygons with and without friction, and is shown to be NP-hard for both cases. An algorithm is given that distinguishes between configurations that are stable, unstable, and indeterminate. The ability to distinguish indeterminate configurations is important because indeterminate configurations arise when the model of an assembly is not accurate enough to determine whether the assembly is stable. Finally, a restricted class of configurations is developed, the determined configurations, for which a conservative stability problem can be solved in polynomial time. The determined configurations are a natural class in the sense that they preclude a type of contact that "seems unpredictable". If undetermined points are desired or unavoidable, the restricted stability problem can be solved in time exponential in the number of undetermined points in the configuration.

Journal ArticleDOI
TL;DR: An algorithm is described which computes the conformal mapping from the unit disk onto an arbitrary polygon having circular arcs as sides, which generalizes the Schwarz-Christoffel program of Trefethen.
Abstract: An algorithm is described which computes the conformal mapping from the unit disk onto an arbitrary polygon having circular arcs as sides. This generalizes the Schwarz-Christoffel program of Trefethen (SIAM J. Sci. Stat. Comp., 1 (1980), pp. 82–102). Our algorithm must also determine certain parameters by solving a nonlinear least squares problem. Instead of using Gauss-Jacobi quadrature to evaluate the Schwarz-Christoffel integral, however, an ordinary differential equation solver is applied to a non-singular formulation of the Schwarzian differential equation. The construction of a conformal mapping reduces simple elliptic partial differential equations on an irregular region to similar problems on a disk, for which existing programs can compute solutions very efficiently. Typical examples arise in the modeling of conductivity past an array of conducting cylinders and electrical fields inside a waveguide.

Patent
Masahide Ohhashi
25 Sep 1987
TL;DR: In this paper, a preprocessing section (15) obtains the depth change ΔZ/ΔX and the intensity change ΔI/ΔX for each unit change in X coordinate from the X, Y, and Z coordinates and intensities of the three vertexes of each triangular polygon constituting a solid model, and a digital differential analyzer unit (16) uses them to calculate the Z coordinates and intensities of points inside each polygon.
Abstract: A shading circuit has a unit for calculating coordinates and intensities of points inside a polygon based on X, Y, and Z coordinates and intensities of vertexes of each of polygons constituting a solid model. This unit includes a preprocessing section (15) for obtaining the depth change ΔZ/ΔX of Z coordinate for each unit change in X coordinate and the change ΔI/ΔX of intensity for each unit change in X coordinate, based on X, Y, and Z coordinates and intensities of three vertexes of each of triangular polygons constituting a solid model, and a digital differential analyzer unit (16) for obtaining Z coordinates and intensities of points inside each polygon commonly using ΔZ/ΔX and ΔI/ΔX when the X and Y coordinates of the points are determined.
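In software terms, the split between the preprocessing section and the DDA unit amounts to computing ΔZ/ΔX and ΔI/ΔX once per span and then advancing depth and intensity by repeated addition. A small Python sketch of one horizontal span, with made-up endpoint values:

```python
def shade_span(x_left, x_right, z_left, z_right, i_left, i_right):
    """Interior points of one horizontal span: depth and intensity are
    advanced by the constant increments dz/dx and di/dx, mirroring the
    preprocessing + DDA split described above."""
    n = x_right - x_left
    dz_dx = (z_right - z_left) / n if n else 0.0   # depth change per unit X
    di_dx = (i_right - i_left) / n if n else 0.0   # intensity change per unit X
    z, i = z_left, i_left
    out = []
    for x in range(x_left, x_right + 1):
        out.append((x, z, i))
        z += dz_dx                                  # one addition per pixel
        i += di_dx
    return out

for x, z, i in shade_span(10, 14, z_left=0.25, z_right=0.45, i_left=32.0, i_right=96.0):
    print(f"x={x}  z={z:.3f}  intensity={i:.1f}")
```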

Journal ArticleDOI
TL;DR: A new heuristic for minimum weight triangulation of planar point sets is proposed, and a polygon whose vertices are all points from the input set is constructed.
Abstract: A new heuristic for minimum weight triangulation of planar point sets is proposed. First, a polygon whose vertices are all points from the input set is constructed. Next, a minimum weight triangulation of the polygon is found by dynamic programming. The union of the polygon triangulation with the polygon yields a triangulation of the input n-point set. A nontrivial upper bound on the worst-case performance of the heuristic in terms of n and another parameter is derived. Under the assumption of uniform point distribution it is observed that the heuristic yields a solution within the factor of $O(\log n)$ from the optimum almost certainly, and the expected length of the resulting triangulation is of the same order as that of a minimum length triangulation. The heuristic runs in time $O(n^3 )$ .
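The dynamic-programming step is standard and worth seeing concretely. The Python sketch below computes a minimum weight triangulation of a convex polygon (so every diagonal is admissible), using total triangle perimeter as the cost, which differs from the total diagonal length only by the fixed boundary length; the heuristic in the paper applies this kind of DP to the polygon it builds around the point set.

```python
import math
from functools import lru_cache

def min_weight_triangulation(pts):
    """Minimum total triangle perimeter over all triangulations of a convex
    polygon given by `pts` in order. dp(i, j) triangulates the sub-polygon
    pts[i..j]; since each diagonal is shared by two triangles and each
    boundary edge by one, this is equivalent (up to a constant) to
    minimizing the total length of the inserted diagonals."""
    def d(a, b):
        return math.dist(pts[a], pts[b])

    @lru_cache(maxsize=None)
    def dp(i, j):
        if j - i < 2:
            return 0.0
        return min(dp(i, k) + dp(k, j) + d(i, k) + d(k, j) + d(i, j)
                   for k in range(i + 1, j))

    return dp(0, len(pts) - 1)

square = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
# Either diagonal of the square works; cost = 4 sides + 2 * one diagonal.
print(min_weight_triangulation(square))   # 8 + 2*sqrt(8) ≈ 13.657
```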

Journal ArticleDOI
TL;DR: In this article, new effects of the same nature as the Sapondzhyan-Babuska paradox are found; in particular, for thin plates with convex holes the boundary conditions on the approximating polygon are not preserved in the limit.
Abstract: The Sapondzhyan-Babuska paradox consists in the fact that, when thin circular plates are approximated by regular polygons with freely supported edges, the limit solution does not satisfy the conditions of free support on the circle. In this article, new effects of the same nature are found. In particular, plates with convex holes are considered. Here, in contrast to the case of convex plates, the boundary conditions on the polygon are not preserved in the limit. Methods of approximating a smooth contour leading to passage to the limit from conditions of free support to conditions of rigid support are discussed. Bibliography: 20 titles.

Patent
21 Aug 1987
TL;DR: In this paper, a desk-top decorator is made of trapezoidal face panels and internal glue tabs that are urged toward each other by elastic bands, forming an equatorial plane polygon similar to but larger than the top and bottom polygon panels.
Abstract: Attractive eye-catching desk-top decorations formed as flattenable pop-up cardboard structures resembling solid polyhedrons have similar polygons as top and bottom supporting panels. Foldably joined to the sides of each of these polygons are trapezoidal face panels extending diagonally outward with their longer parallel edges foldably joined to corresponding edges of mating trapezoidal face panels extending outward from the other polygon, forming an equatorial plane polygon similar to but larger than the top and bottom polygon panels. Internal glue tabs extending inward in the equatorial plane are urged toward each other by elastic bands, popping the flattened structures into their erect condition. Internal guide flanges formed by or cooperating with the glue tabs arrest and block the inward flexing movement of the trapezoidal face panels as the desired erect polyhedron shape is reached, stabilizing and rigidifying the structures to provide a prolonged useful life.

Proceedings ArticleDOI
01 Oct 1987
TL;DR: A method of analysis based on dent diagrams for orthogonal polygons is developed; it shows that Keil's O(n²) algorithm for covering horizontally convex polygons is optimal, and that merely counting the number of polygons required for a minimal cover can be done in O(n) time.
Abstract: The problem of covering a polygon with convex polygons has proven to be very difficult, even when restricted to the class of orthogonal polygons using orthogonally convex covers. We develop a method of analysis based on dent diagrams for orthogonal polygons, and are able to show that Keil's O(n²) algorithm for covering horizontally convex polygons is optimal, but can be improved to O(n) for counting the number of polygons required for a minimal cover. We also give an optimal O(n²) algorithm for covering another subclass of orthogonal polygons. Finally, we develop a method of signatures which can be used to obtain polynomial time algorithms for an even larger class of orthogonal polygons.

Journal ArticleDOI
TL;DR: A goodness-of-fit statistic is derived and examined by testing randomly generated convex tessellations by finding Voronoi centers that best fit the given tessellation.
Abstract: Voronoi, or area-of-influence, polygons are convex, space-filling polygons constructed around a set of points (Voronoi centers) such that each polygon contains all points closer to its Voronoi center than to the center of any other polygon. The relationship of Voronoi centers to edges of Voronoi polygons is used to test whether any convex tessellation consists of Voronoi polygons. This test amounts to finding Voronoi centers that best fit the given tessellation. Voronoi centers are found by solving two systems of linear equations. These equations represent (1) conditions on the slope of polygon edges relative to the slope of lines through Voronoi centers, and (2) conditions on the distance from edges to Voronoi centers. Least squares and constrained least-squares solutions are used to solve the two systems. Different methods of solution can provide insight as to how a tessellation varies from Voronoi polygons. A goodness-of-fit statistic is derived and examined by testing randomly generated convex tessellations. Some polygonal ice cracks provide an example of naturally occurring polygons that are approximated closely by Voronoi polygons.
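The least-squares fitting itself is not reproduced here, but the two conditions it encodes are easy to check directly for a single shared edge and a pair of candidate centers: the edge must be perpendicular to the segment joining the centers, and its endpoints must be equidistant from both. A small Python sketch with made-up coordinates:

```python
import math

def is_voronoi_edge(e1, e2, c1, c2, tol=1e-9):
    """Check the two conditions a shared tessellation edge (e1, e2) must
    satisfy for candidate centers c1, c2: the edge is perpendicular to the
    segment c1-c2, and both edge endpoints are equidistant from c1 and c2."""
    ex, ey = e2[0] - e1[0], e2[1] - e1[1]
    cx, cy = c2[0] - c1[0], c2[1] - c1[1]
    perpendicular = abs(ex * cx + ey * cy) < tol
    equidistant = all(abs(math.dist(p, c1) - math.dist(p, c2)) < tol
                      for p in (e1, e2))
    return perpendicular and equidistant

# The vertical line x = 1 separates centers (0, 0) and (2, 0):
print(is_voronoi_edge((1.0, -5.0), (1.0, 5.0), (0.0, 0.0), (2.0, 0.0)))   # True
# A slightly tilted edge fails both conditions:
print(is_voronoi_edge((1.0, -5.0), (1.2, 5.0), (0.0, 0.0), (2.0, 0.0)))   # False
```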

Patent
Tsuyoshi Yoshimura
18 Aug 1987
TL;DR: In this paper, a rotary polygon mirror is used to reflect a laser beam onto a photo-sensitive drum, and an optical lens system is placed in front of the mirror.
Abstract: A scanning apparatus using a rotary polygon mirror comprises a rotary polygon mirror, a motor for driving the rotary polygon mirror, an optical lens system positioned in front of the rotary polygon mirror for projecting a laser beam onto the rotary polygon mirror and outputting the laser beam reflected thereby toward a photosensitive drum, and a housing for enclosing the rotary polygon mirror in a closed space formed therein and accommodating the optical lens system.

Journal ArticleDOI
TL;DR: The status of an ongoing research effort to develop a geographic information system based on a variant of the linear quadtree, which uses quadtree encodings for storing area, point and line features, is presented.

Journal ArticleDOI
TL;DR: The boundary characteristic of a lattice polygon is defined for every Archimedean tiling, and related enumeration formulae are found.

Journal ArticleDOI
01 Jun 1987
TL;DR: An $O(n \log n)$ time algorithm is presented for computing a data structure that represents the minimal-turn paths from a source point to all other points in the polygon.
Abstract: The problem of movement in two-dimensional Euclidean space that is bounded by a (not necessarily convex) polygon is considered. Movement is restricted to be along straight line segments, and the objective is to minimize the number of bends or "turns" in a path. Most past work on this problem has addressed the movement between a source point and a destination point. An $O(n \log n)$ time algorithm is presented for computing a data structure that represents the minimal-turn paths from a source point to all other points in the polygon. An advantage of this algorithm is that it uses relatively simple data structures and is practical to implement. Another advantage is that it is easily generalized to accommodate the movement of a disk of radius r > 0.

Patent
13 Jan 1987
TL;DR: A polyhedron that approximates a sphere made up of generally irregular polygons is defined in this article, where polygons form faces of polyhedra and each vertex is a junction of either 3 or 4 polygonal edges.
Abstract: A polyhedron that approximates a sphere made up of generally irregular polygons (2) which polygons form faces of the polyhedron. The polygon faces are formed in successive ring arrangements starting at an equatorial ring (E) and continuing outwardly on either side of the equatorial ring towards a cap (5). Polygon faces of successive rings from the equatorial ring (E) are either half in number or the same number, as presented in a previous ring, which previous ring is closer to the equatorial ring. Each vertex (7) is a junction of either 3 or 4 polygonal edges. A portion or section of the polyhedron may be used to form a shelter structure.

Journal ArticleDOI
TL;DR: This work presents an algorithm for solving the problem of determining whether a set of polygons is multi-directionally separable, and shows how to compute all directions of unidirectional separability for sets of arbitrary simple polygons.
Abstract: We consider the problem of separating a set of polygons by a sequence of translations (one such collision-free translation motion for each polygon). If all translations are performed in a common direction the separability problem so obtained has been referred to as the uni-directional separability problem; for different translation directions, the more general multi-directional separability problem arises. The class of such separability problems has been studied previously and arises e.g. in computer graphics and robotics. Existing solutions to the uni-directional problem typically assume the objects to have a certain predetermined shape (e.g., rectangular or convex objects), or to have a direction of separation already available. Here we show how to compute all directions of unidirectional separability for sets of arbitrary simple polygons. The problem of determining whether a set of polygons is multi-directionally separable had been posed by G.T. Toussaint. Here we present an algorithm for solving this problem which, in addition to detecting whether or not the given set is multidirectionally separable, also provides an ordering in which to separate the polygons. In case that the entire set is not multi-directionally separable, the algorithm will find the largest separable subset.