scispace - formally typeset

Showing papers on "Polygon published in 1994"


Book
Joseph O'Rourke1
01 Jan 1994
TL;DR: In this book, the design and implementation of geometry algorithms arising in areas such as computer graphics, robotics, and engineering design are described, and a self-contained treatment of the basic techniques used in computational geometry is presented.
Abstract: From the Publisher: This is the newly revised and expanded edition of a popular introduction to the design and implementation of geometry algorithms arising in areas such as computer graphics, robotics, and engineering design. The basic techniques used in computational geometry are all covered: polygon triangulations, convex hulls, Voronoi diagrams, arrangements, geometric searching, and motion planning. The self-contained treatment presumes only an elementary knowledge of mathematics, but it reaches topics on the frontier of current research. Thus professional programmers will find it a useful tutorial.

1,874 citations


Proceedings ArticleDOI
24 Jul 1994
TL;DR: A method for combining a collection of range images into a single polygonal mesh that completely describes an object to the extent that it is visible from the outside is presented.
Abstract: Range imaging offers an inexpensive and accurate means for digitizing the shape of three-dimensional objects. Because most objects self occlude, no single range image suffices to describe the entire object. We present a method for combining a collection of range images into a single polygonal mesh that completely describes an object to the extent that it is visible from the outside. The steps in our method are: 1) align the meshes with each other using a modified iterated closest-point algorithm, 2) zipper together adjacent meshes to form a continuous surface that correctly captures the topology of the object, and 3) compute local weighted averages of surface positions on all meshes to form a consensus surface geometry. Our system differs from previous approaches in that it is incremental; scans are acquired and combined one at a time. This approach allows us to acquire and combine large numbers of scans with minimal storage overhead. Our largest models contain up to 360,000 triangles. All the steps needed to digitize an object that requires up to 10 range scans can be performed using our system with five minutes of user interaction and a few hours of compute time. We show two models created using our method with range data from a commercial rangefinder that employs laser stripe technology.
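The first step of the pipeline, aligning meshes with an iterated closest-point variant, can be illustrated with a minimal 2D point-to-point ICP iteration. This is a generic sketch, not the paper's modified algorithm; function and variable names are illustrative.

```python
import numpy as np

def icp_step(src, dst):
    """One iteration of point-to-point ICP: match each source point to its
    nearest destination point, then find the rigid transform (R, t) that
    best aligns the matched pairs (Kabsch / orthogonal Procrustes)."""
    # nearest-neighbour correspondences (brute force for clarity)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # optimal rotation via SVD of the cross-covariance matrix
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```

In practice the step is repeated until the estimated motion falls below a tolerance; the paper's variant adds modifications for partially overlapping range scans.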

1,518 citations


01 May 1994
TL;DR: A new method of rendering volumes that leverages the 3D texturing hardware in Silicon Graphics RealityEngine workstations and utilizes the parallel texturing hardware to perform reconstruction and resampling on polygons embedded in the texture.
Abstract: This paper describes a new method of rendering volumes that leverages the 3D texturing hardware in Silicon Graphics RealityEngine workstations. The method defines the volume data as a 3D texture and utilizes the parallel texturing hardware to perform reconstruction and resampling on polygons embedded in the texture. The resampled data on each polygon is transformed into color and opacity values and composited into the frame buffer. A 128×128×64 volume is rendered into a 512×512 window at over 10 frames per second. Two alternative strategies for embedding the resampling polygons are described and their trade-offs are discussed. The method is easy to implement, and we apply it to the production of digitally reconstructed radiographs as well as opacity-based volume-rendered images. The generality of the approach is demonstrated by describing its application to the proposed PixelFlow graphics system, which overcomes the lighting and volume-size limitations imposed by the RealityEngine and is expected to render 256³ data sets on a 640×512 screen at over 10 frames per second.

Resampling polygons may be oriented in object space or in image space (Fig. 1: resampling polygon orientations — (a) object-space sample planes, (b) image-space sample planes; RealityEngine is a trademark of Silicon Graphics Inc.). Figure 1a shows resampling on polygons aligned with the object-space axes; Figure 1b shows resampling on polygons aligned with the image-space axes. In either case, the resampled values behind each pixel are combined to produce a color for that pixel. The combining method is often a compositing operation, but may be other operations as required by the visualization application. Polygons aligned in object space are defined to lie within the volume and are rendered with GL library calls. This method is complicated slightly by the need to reorient the sampling polygons into the plane most parallel to the view plane as the viewpoint changes; this is accomplished by examining the view matrix and explicitly creating polygons for the six cases that arise [Westover91]. Polygons aligned in image space must be clipped to the boundaries of the volume to ensure valid texture coordinates: the polygons are defined in image space, transformed by the inverse viewing matrix into object space where the clipping occurs, and then rendered with the usual GL library calls. In addition to avoiding clipping, the object-space method has other advantages. The texturing time for any polygon is proportional to the number of pixels it covers, so a priori information about the extent of interesting features in each slice of the volume may be used to minimize polygon size, and thus texturing time, as a function of location. The texture memory of the RealityEngine is limited to 1M 12-bit data points; to render larger volumes, slab subsets are loaded and rendered in succession. Texture memory may be reloaded in about 0.1 seconds. With the object-space method, rendering each slab is simple; the image-space method must render polygons multiple times, clipping them to the currently loaded volume slab.

Radiographs. A digitally reconstructed radiograph of medical volume data is produced by combining the resampled values behind each pixel to approximate the attenuation integral: pixel intensity = I₀ exp(−Σᵢ uᵢ d) (1), where the uᵢ are the resample values behind a pixel and d is the spacing between sample values. Note that d is constant for all samples behind a pixel but, due to perspective, varies from pixel to pixel. The resampled uᵢ terms are summed at each pixel, and the d factors are applied by using an additional full-screen polygon with a 2D texture corresponding to the d value required at each pixel. The summation results may be viewed directly, or the exponential required to mimic a radiograph may be computed at each pixel by using a lookup table. The RealityEngine has a maximum precision of 12 bits per frame-buffer and texture component, and the summation could easily overflow this unless the sample values are properly scaled. Our implementation maintains 12-bit volume data values in texture memory and scales each resampled value by a user-controlled "exposure" value ranging from zero to one. The scaled samples are then summed, and clamped if they exceed the 12-bit range. In practice, it has been easy to find suitable exposure settings for the data sets tested. Figure 2 shows radiograph images of 128×128×64 CT data of a human pelvis (Fig. 2: digitally reconstructed radiographs).

Opacity-based rendering. Summation of samples produces radiograph images; compositing samples produces images with occlusion. Only one texture component is required for the linear attenuation coefficient used to produce radiographs. Two 8-bit texture components can represent the raw data and a precomputed shading coefficient. The resampled data-component values are used as indices into an opacity lookup table; this lookup uses the texture hardware for speed. The shading coefficient is a function of the original data gradient and multiplies the sample opacity to produce images of shaded features, as shown in Figure 3 (shaded volume rendering), which shows the human pelvis data set above an image of a 4 mm volume of a chicken embryo acquired by a microscopic MRI scanner. The precomputed shading fixes the light position(s) relative to the volume. For more general lighting by sources fixed in image space, the shade texture component must be replaced by three components containing the normalized data gradient. Unfortunately, the resampled gradient on the polygons is not normalized, and normalization is an expensive process requiring a square root. Lighting without normalization is possible, but this has not yet been tried to see how serious the artifacts are.

Performance. We consider two data sizes rendered into a 512×512 window. The smaller data size of 128×128×64 may be rendered at ten frames per second using 128 polygons aligned in object space, which equates to a processing rate of 10 million voxels per second. In our test images we measured about 160 million pixel operations per second, where each pixel operation is a trilinear interpolation of the 3D texture components, a multiplication by a scaling or opacity factor, and a summation or composite into the frame buffer. The larger data size of 256×256×64 requires four 256×256×16 texture slabs and is rendered at 2.5 frames per second with 256 resampling polygons; loading texture slabs consumes less than 0.1 seconds per slab. A progressive-refinement approach would allow a user to manipulate the low-resolution data at a high frame rate and render the high-resolution data as soon as the user allows the motion to stop. Performance is very linear with respect to the number of pixels processed: as the number of screen pixels or resampling polygons is doubled, the frame rate is halved. If more resampling polygons are used, higher-quality images are obtained at the expense of lower rendering speed.

PixelFlow. Texturing hardware is likely to be a common feature of graphics systems in the future. The PixelFlow graphics system under development at the University of North Carolina at Chapel Hill will have texturing hardware [Molnar92] that is suitable for a variant of the polygon resampling approach described above for the RealityEngine. We propose a polygon texturing approach for PixelFlow that will overcome the limitations on realistic lighting and data size imposed by the RealityEngine. The texturing hardware in PixelFlow will allow 128 pixel processors to access eight arbitrarily-addressed 32-bit values in texture memory in under 500 μs. PixelFlow texturing hardware does not perform any operations on these texture values; rather, they are simply loaded into the pixel processors, where a user's program manipulates them as ordinary data. If each 32-bit value is treated as four 8-bit texture components, then three may be used for the normalized data gradient.
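The attenuation sum behind each pixel (I₀ exp(−Σ uᵢ d), with I₀ the unattenuated intensity) can be sketched in software. Below is a minimal NumPy stand-in for the hardware path: it sums samples along one axis of a volume under an orthographic assumption, whereas the paper resamples on textured polygons; all names are illustrative.

```python
import numpy as np

def drr(volume, d=1.0, i0=1.0):
    """Digitally reconstructed radiograph: sum the attenuation samples u_i
    along the viewing axis (axis 0) and apply i0 * exp(-sum(u_i) * d)."""
    line_integral = volume.sum(axis=0) * d   # sum of u_i * d behind each pixel
    return i0 * np.exp(-line_integral)
```

For example, a ray passing through four voxels of attenuation 0.5 with unit spacing yields a pixel intensity of exp(−2), while rays through empty space stay at I₀.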

269 citations


Journal ArticleDOI
TL;DR: Two gradient consistency heuristics are introduced that use calculated gradients at the corners of ambiguous faces, as well as the function values at those corners, to disambiguate at a reasonable computational cost.
Abstract: A popular technique for rendition of isosurfaces in sampled data is to consider cells with sample points as corners and approximate the isosurface in each cell by one or more polygons whose vertices are obtained by interpolation of the sample data. That is, each polygon vertex is a point on a cell edge, between two adjacent sample points, where the function is estimated to equal the desired threshold value. The two sample points have values on opposite sides of the threshold, and the interpolated point is called an intersection point. When one cell face has an intersection point in each of its four edges, then the correct connection among intersection points becomes ambiguous. An incorrect connection can lead to erroneous topology in the rendered surface, and possible discontinuities. We show that disambiguation methods, to be at all accurate, need to consider sample values in the neighborhood outside the cell. This paper studies the problems of disambiguation, reports on some solutions, and presents some statistics on the occurrence of such ambiguities. A natural way to incorporate neighborhood information is through the use of calculated gradients at cell corners. They provide insight into the behavior of a function in well-understood ways. We introduce two gradient consistency heuristics that use calculated gradients at the corners of ambiguous faces, as well as the function values at those corners, to disambiguate at a reasonable computational cost. These methods give the correct topology on several examples that caused problems for other methods we examined.
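The interpolated intersection point, and the ambiguous-face test that the paper's heuristics address, can be sketched directly (helper names are illustrative):

```python
def edge_intersection(p0, p1, v0, v1, threshold):
    """Linearly interpolate the point on edge p0-p1 where the sampled
    function, valued v0 at p0 and v1 at p1, crosses the threshold.
    Assumes v0 and v1 lie on opposite sides of the threshold."""
    t = (threshold - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def face_is_ambiguous(corners, threshold):
    """A cell face is ambiguous when its four corner values (in cyclic
    order) alternate above/below the threshold, so all four edges carry
    an intersection point and two connection topologies are possible."""
    signs = [v > threshold for v in corners]
    return signs[0] == signs[2] and signs[1] == signs[3] and signs[0] != signs[1]
```

Disambiguating such a face is exactly where the paper brings in gradients calculated at the corners.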

256 citations


Patent
28 Sep 1994
TL;DR: In this article, a method for determining position by obtaining directional information from spatial division multiple access (SDMA)-equipped and non-SDMA-equipped base stations is proposed, which is directed for use in a wireless communication system which includes a plurality of base stations each having a corresponding coverage area.
Abstract: A method for determining position by obtaining directional information from spatial division multiple access (SDMA)-equipped and non-SDMA-equipped base stations. The method is directed for use in a wireless communication system which includes a plurality of base stations each having a corresponding coverage area. For each of the base stations, a plurality of RF measurements are determined in cooperation with a receiver, including a link budget of the base station, for a predetermined plurality of distances and directions. Determined RF measurements for each of the base stations are modeled as a scaled contour shape having minimum and maximum boundaries. Base stations which neighbor the mobile unit are determined so as to define a first bounding polygon area by their intersecting contours. The first bounding polygon area generally describes the relative position of the mobile unit. A second bounding polygon area is determined in accordance with the lobes of neighboring base stations as described in terms of azimuth angles. The intersection of the first and second bounding polygon areas is determined so as to define a location polygon which more precisely describes the position of the mobile unit in terms of minimum and maximum error estimates.
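Intersecting two bounding polygon areas, as the patent describes, can be done for convex contours by clipping one polygon against each edge of the other (Sutherland–Hodgman clipping). This is a generic sketch under a convexity assumption, not the patent's procedure:

```python
def clip_convex(subject, clip):
    """Intersect polygon `subject` with convex polygon `clip` (vertices
    counter-clockwise) by clipping against each clip edge in turn."""
    def inside(p, a, b):          # p on or left of directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def intersect(p, q, a, b):    # segment p-q with the infinite line a-b
        x1, y1, x2, y2 = *p, *q
        x3, y3, x4, y4 = *a, *b
        den = (x1-x2)*(y3-y4) - (y1-y2)*(x3-x4)
        t = ((x1-x3)*(y3-y4) - (y1-y3)*(x3-x4)) / den
        return (x1 + t*(x2-x1), y1 + t*(y2-y1))
    out = list(subject)
    for a, b in zip(clip, clip[1:] + clip[:1]):
        inp, out = out, []
        if not inp:               # intersection is empty
            break
        for p, q in zip(inp, inp[1:] + inp[:1]):
            if inside(q, a, b):
                if not inside(p, a, b):
                    out.append(intersect(p, q, a, b))
                out.append(q)
            elif inside(p, a, b):
                out.append(intersect(p, q, a, b))
    return out
```

For instance, clipping the square (0,0)–(2,2) against the square (1,1)–(3,3) leaves the unit square (1,1)–(2,2).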

182 citations


Patent
28 Sep 1994
TL;DR: In this article, an improved positioning system and method for use in a wireless communication system including a plurality of base stations each having a corresponding coverage area is presented. The intersections of the contour shapes define a bounding polygon area that describes the position of a mobile unit in terms of minimum and maximum error estimates.
Abstract: An improved positioning system and method for use in a wireless communication system including a plurality of base stations each having a corresponding coverage area. Scaled contour shapes are generated having minimum and maximum boundaries based upon determined RF measurements of each of the base stations. The intersections of the contour shapes define a bounding polygon area that describes the position of a mobile unit in terms of minimum and maximum error estimate. Once the bounding polygon area has been defined, the latitude and longitude of the center of the polygon area is determined whereupon corresponding street addresses may be obtained through reference to one or more databases.

136 citations


Journal ArticleDOI
TL;DR: The NTP basis with optimal shape-preserving properties in the sense of (Goodman and Said, 1991) is constructed; that is, the shape of the control polygon of a curve with respect to the optimal basis resembles, with the highest fidelity, the shape of the curve among all the control polygons of the same curve corresponding to NTP bases.

132 citations


Journal ArticleDOI
TL;DR: A parallel algorithm designed for polygon scan conversion and rendering is presented which supports fast rendering of highly complex data sets using advanced lighting models and an in-depth analysis of the overhead costs accompanying parallel processing shows where performance is adequate or could be improved.
Abstract: Using parallel processing for visualization speeds up computer graphics rendering of complex data sets. A parallel algorithm designed for polygon scan conversion and rendering is presented which supports fast rendering of highly complex data sets using advanced lighting models. Dedicated graphics rendering engines do not necessarily suit such data sets, although they can support real-time update of moderately complex scenes using simple lighting. Advantages to using a software-based approach include the feasibility of adding special rendering features to the program and the capability of integrating a parallel scientific application with a parallel graphics renderer. A new work decomposition strategy presented, called task adaptive, is based on dynamically partitioning the amount of computational work left at a given time. The algorithm uses a heuristic for dynamic task decomposition in which image space tasks are partitioned without requiring interruption of the partitioned processor. A sophisticated memory referencing strategy lets local memory access graphics data during rendering. This permits implementation of the algorithm on a distributed memory multiprocessor. An in-depth analysis of the overhead costs accompanying parallel processing shows where performance is adequate or could be improved.
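The task-adaptive idea — dynamically partitioning the computational work left at a given time — can be sketched with a toy cost model in which remaining work is counted in scanlines. This is illustrative only; the paper's heuristic partitions image-space tasks without interrupting the partitioned processor, and the names below are invented.

```python
import heapq

def task_adaptive_split(regions, idle_workers):
    """Illustrative task-adaptive decomposition: repeatedly split the
    image-space region with the most remaining work (here, scanlines)
    and hand one half to an idle worker."""
    # max-heap keyed by remaining scanlines (negated for heapq's min-heap)
    heap = [(-(hi - lo), (lo, hi)) for lo, hi in regions]
    heapq.heapify(heap)
    for _ in range(idle_workers):
        _, (lo, hi) = heapq.heappop(heap)
        mid = (lo + hi) // 2
        heapq.heappush(heap, (-(mid - lo), (lo, mid)))
        heapq.heappush(heap, (-(hi - mid), (mid, hi)))
    return sorted(r for _, r in heap)
```

Each split halves the largest outstanding region, so load stays roughly balanced as processors finish at different times.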

101 citations


Journal ArticleDOI
TL;DR: A parallel method using a Competitive Hopfield Neural Network (CHNN) is proposed for polygonal approximation and is compared to several existing methods by the approximation error norms L2 and L∞ with the result that promising approximation polygons are obtained.

99 citations


Journal ArticleDOI
TL;DR: It is shown that these results hold even if the polygons are required to be in general position, and that covering the interior or boundary of an orthogonal polygon with rectangles is NP-complete.

90 citations


Patent
26 Oct 1994
TL;DR: In this paper, a method for detecting unconstrained collisions between three-dimensional moving objects is described. The method addresses the problems associated with handling objects with substance passing through each other in 3D space.
Abstract: Apparatus and method for detecting unconstrained collisions between three-dimensional moving objects are described. The apparatus and method address the problems associated with handling objects with substance passing through each other in three-dimensional space. When objects collide in a three-dimensional simulation, it is important to identify such collisions in real time so that the behavior of the colliding objects may be adjusted appropriately. Native vertices are stored, and novel structure is provided so that the stored words containing native vertices work together to form polygons or other object primitives. For triangle object primitives, three vertices form the first triangle primitive, but a second triangle primitive is formed by receiving and storing only one additional vertex, the other two vertices needed to form the second triangle primitive being shared with the first. The apparatus and method also provide structure for storing and communicating polygon-vertex relationship information between multiple object primitives and objects, and structure and method for comparing the extent of an object primitive with all other previously stored object-primitive extents simultaneously with receipt and storage of the object-primitive vertex data. The ability to store each coordinate vertex only once and to share the vertex coordinate information among multiple objects radically reduces the vertex storage requirements, simplifies unconstrained object collision determinations, and increases data throughput so that real-time, or near real-time, computations appropriate for simulation are achieved.

Patent
13 May 1994
TL;DR: A geometric toy construction system has a multiplicity of flat, regular polygonal construction panels interengageable edge-to-edge by means of shared but separate intervening cylindrical axles positioned parallel to the edges of the panels and attached thereon, thereby enabling the building of two-and three-dimensional constructions as discussed by the authors.
Abstract: A geometric toy construction system has a multiplicity of flat, regular polygonal construction panels interengageable edge-to-edge by means of shared but separate intervening cylindrical axles positioned parallel to the edges of the panels and attached thereon, thereby enabling the building of two- and three-dimensional constructions. One axle enables up to six panels to be snap-fit into position about the axis of their commonly shared axle.

Journal ArticleDOI
TL;DR: An efficient method is given for triangulating a collection of p disjoint Jordan polygonal chains in time O(n + p(log p)^{1+ε}), for any fixed ε > 0, where n is the total number of vertices.
Abstract: Recent advances on polygon triangulation have yielded efficient algorithms for a large number of problems dealing with a single simple polygon. If the input consists of several disjoint polygons, however, it is often desirable to merge them in preprocessing so as to produce a single polygon that retains the geometric characteristics of its individual components. We give an efficient method for doing so, which combines a generalized form of Jordan sorting with the efficient use of point location and interval trees. As a corollary, we are able to triangulate a collection of p disjoint Jordan polygonal chains in time O(n + p(log p)^{1+ε}), for any fixed ε > 0, where n is the total number of vertices. A variant of the algorithm gives a running time of O((n + p log p) log log p). The performance of these solutions approaches the lower bound of Ω(n + p log p).

Journal ArticleDOI
TL;DR: An algorithm for the detection of dominant points and for building a hierarchical approximation of a digital curve is proposed and is shown to perform well for a wide variety of shapes, including scaled and rotated ones.
Abstract: An algorithm for the detection of dominant points and for building a hierarchical approximation of a digital curve is proposed. The algorithm does not require any parameter tuning and is shown to perform well for a wide variety of shapes, including scaled and rotated ones. Dominant points are first located by a coarse-to-fine detector scheme. They constitute the vertices of a polygon closely approximating the curve. Then, a criterion of perceptual significance is used to repeatedly remove suitable vertices until a stable polygonal configuration, the contour sketch, is reached. A highly compressed hierarchical description of the shape also becomes available.
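For comparison, the classic parameter-dependent alternative to such a parameter-free scheme is Ramer–Douglas–Peucker polygonal approximation. The sketch below is that standard algorithm, not the paper's coarse-to-fine detector:

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker approximation: if the point farthest from
    the chord deviates by more than eps, keep it and recurse on both
    halves; otherwise replace the run by the chord's endpoints."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    chord = math.hypot(x2 - x1, y2 - y1)
    def dist(p):  # perpendicular distance from p to the chord line
        if chord == 0:
            return math.hypot(p[0] - x1, p[1] - y1)
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / chord
    i, d = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
               key=lambda t: t[1])
    if d <= eps:
        return [points[0], points[-1]]
    return rdp(points[:i + 1], eps)[:-1] + rdp(points[i:], eps)
```

A right-angle chain such as (0,0)–(1,0)–(2,0)–(2,1)–(2,2) collapses to its three corner vertices for any small eps.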

Journal ArticleDOI
TL;DR: In this article, the functional determinant of an elliptic operator with positive, discrete spectrum was defined for polygons with the topology of a disc in the Euclidean plane.
Abstract: The functional determinant of an elliptic operator with positive, discrete spectrum may be defined as e^{−Z′(0)}, where Z(s), the zeta function, is the sum Σ_n λ_n^{−s} analytically continued in s. In this paper Z′(0) is calculated for the Laplace operator with Dirichlet boundary conditions inside polygons with the topology of a disc in the Euclidean plane. Our results are complementary to earlier investigations of the determinants on smooth surfaces with smooth boundaries. Our expression can be viewed as the energy for a system of static point particles, corresponding to the corners of the polygon, with self-energy and pair-interaction energy. We have completely explicit closed expressions for triangles and regular polygons with an arbitrary number of sides. Among these, there are five special cases (three triangles, the square and the circle), where the Z′(0) are known by other means. One special case fixes an integration constant, and the others provide four independent analytical checks on our calculation.
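The zeta-regularized determinant in the abstract unwinds in a few lines (a standard formal manipulation, since the eigenvalue product itself diverges):

```latex
Z(s) = \sum_n \lambda_n^{-s}
\quad\Longrightarrow\quad
Z'(s) = -\sum_n \lambda_n^{-s}\,\log\lambda_n ,
\qquad
-Z'(0) = \sum_n \log\lambda_n = \log\prod_n \lambda_n ,
```

so defining det Δ := e^{−Z′(0)} assigns a finite value, via the analytic continuation of Z(s), to what would formally be the product of all eigenvalues.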

Journal ArticleDOI
TL;DR: The technique described aims at evaluating the numerical density of cells in highly heterogeneous regions, e.g., nuclei, layers or columns of neurones; rather than counting the number of neuronal sections ('profiles') in a reference frame, it evaluates the 'free area' which lies around each profile.

Journal ArticleDOI
TL;DR: Only the journal's rights notice is available as an abstract: use of the archives of the Annales de l'institut Fourier implies agreement with the general conditions of use (http://www.numdam.org/legal.php), and any copy of the file must retain the copyright notice.
Abstract: © Annales de l'institut Fourier, 1994, all rights reserved. Access to the archives of the journal "Annales de l'institut Fourier" (http://annalif.ujf-grenoble.fr/) implies agreement with the general conditions of use (http://www.numdam.org/legal.php). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printing of this file must contain the present copyright notice.

Patent
21 Oct 1994
TL;DR: In this paper, the authors proposed a document imaging system which detects skew and/or the size/shape of a document image, and compensates for skew by modifying the scanning signals as required such that the resultant skew angle is substantially equal to zero.
Abstract: The present invention relates in general to optical scanning and image processing, and relates more particularly to a document imaging system which detects skew and/or detects size/shape of a document image. Preferred embodiments utilize a background with optical characteristics which contrast with those of the scanned document. In one embodiment, a document imaging system generates scanning signals in response to optical characteristics of a medium such as a sheet of paper, detects transitions in the scanning signal which define points along one or more edges of the medium, establishes a skew angle between the detected edges and a reference orientation, and compensates for skew by modifying the scanning signals as required such that the resultant skew angle is substantially equal to zero. In another embodiment, a document imaging system detects one or more edges of a document, defines a polygon having sides substantially congruent with the detected edges, and establishes the size of the document in response to the polygon.
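Establishing a skew angle from detected edge points can be sketched as a least-squares line fit, with the skew taken as the angle between the fitted line and the horizontal reference orientation. This is a generic approach, not necessarily the patent's exact method; names are illustrative.

```python
import math

def skew_angle(edge_points):
    """Estimate document skew (radians) as the angle of the least-squares
    line through detected edge points relative to the horizontal axis;
    0 means no skew."""
    n = len(edge_points)
    mx = sum(x for x, _ in edge_points) / n
    my = sum(y for _, y in edge_points) / n
    sxy = sum((x - mx) * (y - my) for x, y in edge_points)
    sxx = sum((x - mx) ** 2 for x, _ in edge_points)
    return math.atan2(sxy, sxx)
```

Deskewing then amounts to rotating the scanned image by the negated angle so the resultant skew is substantially zero.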

Journal ArticleDOI
TL;DR: A new computational model of crystal growth is presented, in which the interface between liquid and solid is explicitly tracked, but the measurement of curvature is simplified through the assumption that the crystal is a polygon having a limited number of possible normal directions.

Proceedings ArticleDOI
10 Jun 1994
TL;DR: A new, strictly larger class of polygons, called generalized streets (𝒢-streets), is defined; they are characterized by the property that every point on the boundary of a 𝒢-street is visible from a point on a horizontal line segment connecting the two boundary chains.
Abstract: We consider the problem of a robot which has to find a path in an unknown simple polygon from one point s to another point t, based only on what it has seen so far. A street is a polygon for which the two boundary chains from s to t are mutually weakly visible, and the set of streets was the only class of polygons for which a competitive search algorithm was known. We define a new, strictly larger class of polygons, called 𝒢-streets, which are characterized by the property that every point on the boundary of a 𝒢-street is visible from a point on a horizontal line segment connecting the two boundary chains from s to t. We present an on-line strategy for a robot placed at s to find t in an unknown rectilinear 𝒢-street; the length of the path created is at most 9 times the length of the shortest path in the L1 metric. This is optimal since we show that no strategy can achieve a smaller competitive factor for all rectilinear 𝒢-streets. Compared to the L2-shortest path, the strategy is 9.06-competitive, which leaves only a very small gap to the lower bound of 9.

Proceedings ArticleDOI
14 Feb 1994
TL;DR: A dynamic and efficient index scheme called the time polygon (TP-index) for temporal databases, which handles long duration temporal data elegantly and efficiently.
Abstract: To support temporal operators efficiently, indexing based on temporal attributes must be supported. The authors propose a dynamic and efficient index scheme called the time polygon (TP-index) for temporal databases. In the scheme, temporal data are mapped into a two-dimensional temporal space, where the data can be clustered based on time. The data space is then partitioned into time polygons where each polygon corresponds to a data page. The time polygon directory can be organized as a hierarchical index. The index handles long duration temporal data elegantly and efficiently. The performance analysis indicates that the time polygon index is efficient both in storage utilization and query search.
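The core of the mapping — a temporal interval becomes a point in a two-dimensional temporal space, so containment queries become planar region searches — can be sketched as follows (a toy linear scan standing in for the time-polygon directory; names are illustrative):

```python
def to_point(interval):
    """Map a temporal interval [start, end) to the 2D point (start, end).
    All valid points lie on or above the diagonal end = start."""
    s, e = interval
    assert s <= e
    return (s, e)

def alive_at(points, t):
    """Intervals containing time t map to the planar region
    start <= t < end, so the temporal query is a 2D region search
    (here answered by brute force rather than the TP directory)."""
    return [(s, e) for s, e in points if s <= t < e]
```

Clustering nearby points of this space into "time polygons", one per data page, is what the TP-index directory then organizes hierarchically.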

Journal ArticleDOI
TL;DR: This article considers the problem of determining the convex shape of a polygonal part from a sequence of projections, and shows that, given a diameter function, deciding whether such apolygon exists is NP-complete.
Abstract: Our objective is to automatically recognize parts in a structured environment (such as a factory) using inexpensive and widely available hardware. In this article we consider the planar problem of determining the convex shape of a polygonal part from a sequence of projections. Projecting the part onto an axis in the plane of the part produces a scalar measure, the diameter, which is a function of the angle of projection. The diameter of a part at a particular angle can be measured using an instrumented parallel-jaw gripper. First, we present the negative result that shape cannot be uniquely recovered: for a given set of diameter measurements, there is an (uncountably) infinite set of polygonal shapes consistent with these measurements. Because most of these shapes have parallel edges of varying lengths, we consider the related problem of identifying a representative polygon with no parallel edges. We show that, given a diameter function, deciding whether such a polygon exists is NP-complete. These resul...
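The diameter function itself is easy to state: project the part's vertices onto an axis at angle θ and take the extent of the projection, which is what a parallel-jaw gripper closing along that axis measures. A sketch, assuming a convex vertex list:

```python
import math

def diameter(vertices, theta):
    """Diameter (projection width) of a convex polygon at angle theta:
    project every vertex onto the axis (cos theta, sin theta) and take
    the extent of the projections."""
    c, s = math.cos(theta), math.sin(theta)
    proj = [x * c + y * s for x, y in vertices]
    return max(proj) - min(proj)
```

For the unit square the function varies between 1 (jaws parallel to a side) and √2 (jaws along a diagonal), illustrating why a finite set of diameter samples cannot pin down the shape uniquely.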

Patent
24 Jan 1994
TL;DR: In this paper, an image generating apparatus consisting of an input unit for setting data related to a shape of each polygon, an attribute-data-setting units for setting physical-property attribute data of the polygons, a view-point-data setting unit, a boundary-box-dividing unit for dividing a boundary box enclosing all polygons into a set of voxels of equal-size with parameters for the boundary box and a linear conditional expression, a segment buffer for registering intersection data per segment for each light source consisting of a thread of a processor
Abstract: An image generating apparatus used in the field of image processing such as Computer Graphics. The image generating apparatus comprises an input unit for setting data related to a shape of each polygon, an attribute-data-setting unit for setting physical-property attribute data of the polygons, a view-point-data setting unit for setting data related to a view point, a boundary-box-dividing unit for dividing a boundary box enclosing all the polygons into a set of voxels of equal-size with parameters for the boundary box and a linear conditional expression, a segment buffer for registering intersection data per segment for each light source consisting of a thread of a processor assigned for each segment on one polygon and a ray-intercepting polygon at the time of intensity computation per pixel, and an intensity computing unit for checking an intersection with the registered ray-intercepting polygon first when computing an adjacent pixel in the same segment.

Proceedings ArticleDOI
24 Jul 1994
TL;DR: This is the first algorithm to antialias with guaranteed accuracy scenes consisting of hundreds of millions of polygons, and uses an object-space octree to cull hidden geometry rapidly and a quadtree data structure to test visibility throughout image-space regions.
Abstract: In previous work, we presented an algorithm to accelerate z-buffer rendering of enormously complex scenes. Here, we extend the approach to antialiased rendering with an algorithm that guarantees that each pixel of the output image is within a user-specified error tolerance of the filtered underlying continuous image. As before, we use an object-space octree to cull hidden geometry rapidly. However, instead of using an image-space depth pyramid to test visibility of collections of pixel samples, we use a quadtree data structure to test visibility throughout image-space regions. When regions are too complex, we use quadtree subdivision to simplify the geometry as in Warnock's algorithm. Subdivision stops when the algorithm can either analytically filter the required region or bound the convolution integral appropriately with interval methods. To the best of our knowledge, this is the first algorithm to antialias with guaranteed accuracy scenes consisting of hundreds of millions of polygons.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the knot probability of polygons confined to slabs or prisms, considered as subsets of the simple cubic lattice, and showed rigorously that almost all sufficiently long polygons in a slab are knotted.
Abstract: We study the knot probability of polygons confined to slabs or prisms, considered as subsets of the simple cubic lattice. We show rigorously that almost all sufficiently long polygons in a slab are knotted and we use Monte Carlo methods to investigate the behaviour of the knot probability as a function of the width of the slab or prism and the number of edges in the polygon. In addition we consider the effect of solvent quality on the knot probability in these confined geometries.

Patent
22 Sep 1994
TL;DR: In this article, a method for generating a subpixel mask for polygon edges directly by an operation without using a look-up table, includes the steps of forming subblocks by dividing a pixel into n subpixels depending on the slope of the polygon edge.
Abstract: In a computer graphics system, a method for generating a subpixel mask for polygon edges directly by an operation without using a look-up table, includes the steps of forming subblocks by dividing a pixel into n subpixels depending on the slope of the polygon edge, calculating subblock coverage which is a distance from the pixel boundary to the intersection point of n subblocks and polygon edge, and generating an n×n subpixel mask depending on the calculated subblock coverage. In an apparatus using the method, edge-generated aliasing is removed.
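A subpixel coverage mask of the kind this patent generates can be approximated by testing subpixel centers against the edge's half-plane equation. The sketch below uses simple point sampling rather than the patent's exact subblock-coverage computation, and the half-plane convention is an assumption for illustration:

```python
def subpixel_mask(a, b, c, n=4):
    """Point-sampled approximation of an n x n subpixel coverage mask
    for the half-plane a*x + b*y + c >= 0 over the unit pixel [0,1)^2.
    (The patented method computes exact subblock coverages; sampling
    subpixel centers is a simpler stand-in for illustration.)"""
    mask = []
    for j in range(n):
        row = []
        for i in range(n):
            x = (i + 0.5) / n   # subpixel center
            y = (j + 0.5) / n
            row.append(1 if a * x + b * y + c >= 0 else 0)
        mask.append(row)
    return mask

# A vertical edge at x = 0.5 (inside is x >= 0.5) covers the right half.
for row in subpixel_mask(1.0, 0.0, -0.5, n=4):
    print(row)   # [0, 0, 1, 1] on every row
```

Averaging the mask bits gives the fractional pixel coverage used to blend edge colors and suppress aliasing.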

Journal ArticleDOI
TL;DR: In this article, a fast and efficient method for determining if a coordinate point lies within a closed region or polygon, defined by any number of coordinate points, is described, based on the point-vector form of a line and the ray intersection approach to the point in polygon test.
Abstract: This study outlines a fast and efficient method for determining if a coordinate point lies within a closed region or polygon, defined by any number of coordinate points. The described algorithm is based on the point-vector form of a line and the ray intersection approach to the point in polygon test. The method described was first given by Saalfield [5], but also includes an appropriate modification for point in polygon testing, derived during its implementation into various computer programs (Taylor [6]).
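The ray-intersection approach the article builds on is the classic even-odd crossing test: cast a ray from the query point and count how many polygon edges it crosses. A minimal sketch (boundary edge cases are ignored here; the full article's modification handles them more carefully):

```python
def point_in_polygon(px, py, polygon):
    """Even-odd ray-crossing test: cast a horizontal ray from (px, py)
    toward +x and count crossings with polygon edges.  An odd count
    means the point lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the ray's y, and the crossing lies right of px?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(point_in_polygon(1, 1, square))  # True
print(point_in_polygon(3, 1, square))  # False
```

The test runs in O(n) for an n-vertex polygon and works for non-convex regions, which is why it is a standard choice in mapping and graphics code.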

Patent
10 Jun 1994
TL;DR: In this paper, a real-time display type image synthesizing system which can display a 3D object with fewer polygons and with high resolution is provided, where texture information applied to each polygon in each of the shape models is stored in a texture information storage unit.
Abstract: A real-time display type image synthesizing system which can display a 3-D object with fewer polygons and with high resolution is provided. The 3-D object data is stored in a 3-D object data storage unit 26 as shape models having different degrees of precision. The closer the 3-D object is to the view point in the view-point coordinate system, the higher the precision of the shape model whose data is read out. Texture information applied to each polygon in each of the shape models is stored in a texture information storage unit 32 as image information of different resolution for every shape model and for every polygon in the shape models. An image forming unit 34 maps the texture information of precision corresponding to each polygon in the 3-D object perspectively projected and output by a 3-D calculation unit 22 onto the respective polygons to synthesize and display an image on a display 40.
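The distance-based model selection described above amounts to a level-of-detail lookup: closer objects are drawn from higher-precision shape models. A hypothetical sketch (the function and threshold values are illustrative, not from the patent):

```python
def select_lod(distance, thresholds):
    """Pick a level of detail from view-point distance.  `thresholds`
    is an ascending list of distance limits; level 0 is the most
    detailed shape model, larger levels are coarser."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)  # coarsest model beyond the last threshold

limits = [10.0, 50.0, 200.0]
print(select_lod(5.0, limits))    # 0: high-precision model
print(select_lod(120.0, limits))  # 2
print(select_lod(500.0, limits))  # 3: coarsest model
```

In the patented system the same selection index would also pick the matching texture resolution for each polygon of the chosen shape model.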

Patent
19 Aug 1994
TL;DR: In this article, the final viewable color of each pixel to be displayed from video data signal output of a computer image generator is found by using input data signals setting display space coordinates of each vertex of each face polygon, to generate a crossing location of each polygon edge along an edge segment of any of the array of display pixels; storing, and then processing, the edge segment crossing data signals for all polygons affecting that pixel, along with color data for each of the faces occupying any portion of a pixel, and for a plurality of different edge segments of a constellation
Abstract: The final viewable color of each pixel to be displayed from video data signal output of a computer image generator, is found by: using input data signals setting display space coordinates of each vertex of each face polygon to be displayed, to generate a crossing location of each polygon edge along an edge segment of any of the array of display pixels; storing, and then processing, the edge segment crossing data signals for all polygons affecting that pixel, along with color data for each of the faces occupying any portion of that pixel, and for a plurality of different edge segments of a constellation of four adjacent pixels, to obtain pixel color intensity data for each corner of each displayable pixel; and mixing polygon color intensity data signals for all corners of a presently-processed pixel, to determine the final, observable color of that display pixel.

Patent
12 May 1994
TL;DR: Signal transformations of inputted data brought about by 58 new subroutines in combination with other subroutines to display world maps or other display items with the unique capability of performing the following functions in complete generality as discussed by the authors.
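Plotting polyline segments along great circles (feature 6 in the abstract below) is commonly done by spherically interpolating between the endpoints' unit vectors. A generic sketch, not the patent's subroutines; antipodal or coincident endpoints are not handled here:

```python
import math

def great_circle_points(lat1, lon1, lat2, lon2, n=10):
    """Sample n+1 points along the great circle between two lat/lon
    positions (in degrees) by spherical linear interpolation of the
    corresponding unit vectors on the sphere."""
    def to_vec(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo),
                math.cos(la) * math.sin(lo),
                math.sin(la))

    a, b = to_vec(lat1, lon1), to_vec(lat2, lon2)
    # Central angle between the endpoints, clamped for safety.
    omega = math.acos(max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b)))))
    points = []
    for i in range(n + 1):
        t = i / n
        s1 = math.sin((1 - t) * omega) / math.sin(omega)
        s2 = math.sin(t * omega) / math.sin(omega)
        x, y, z = (s1 * p + s2 * q for p, q in zip(a, b))
        points.append((math.degrees(math.asin(z)),
                       math.degrees(math.atan2(y, x))))
    return points

# Along the equator from (0, 0) to (0, 90): midpoint lands at (0, 45).
pts = great_circle_points(0, 0, 0, 90, n=2)
print(pts[1])  # approximately (0.0, 45.0)
```

Drawing short chords between the sampled points approximates the great-circle arc to any desired smoothness.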
Abstract: Signal transformations of inputted data brought about by 58 new subroutines in combination with other subroutines to display world maps or other display items with the unique capability of performing the following functions in complete generality. (1) Arbitrary selection of map center and coverage, including global displays, (2) filling of all land and lake areas defined by polygons composed of an arbitrary number of vertices, (3) clipping of map features and overlays at map boundaries and poles, (4) selection from any of nineteen currently implemented map projections with provision to install any other projection topologically similar to an oblique conic, (5) calculation of latitude/longitude for any point on a map without the need for inverse mapping equations, and (6) an efficient method of plotting polyline segments along great circles. These are among the feature functions provided by this inventive concept. The software could potentially be used with any digital global geographic data base, such as World Data Bank II (WDBII), a geographic information system, or other data base where polylines are used to depict linear and/or areal features. Polygon (region-filled) maps and other display items can be constructed from any data base from which closed polygons can be extracted directly, or constructed via additional processing.