
Showing papers on "Polygon published in 1997"


Proceedings ArticleDOI
03 Aug 1997
TL;DR: HDS is dynamic, retessellating the scene continually as the user's viewing position shifts, and global, processing the entire database without first decomposing the environment into individual objects.
Abstract: This dissertation describes hierarchical dynamic simplification (HDS), a new approach to the problem of simplifying arbitrary polygonal environments. HDS is dynamic, retessellating the scene continually as the user's viewing position shifts, and global, processing the entire database without first decomposing the environment into individual objects. The resulting system enables real-time display of very complex polygonal CAD models consisting of thousands of parts and millions of polygons. HDS supports various preprocessing algorithms and various run-time criteria, providing a general framework for dynamic view-dependent simplification. Briefly, HDS works by clustering vertices together in a hierarchical fashion. The simplification process continually queries this hierarchy to generate a scene containing only those polygons that are important from the current viewpoint. When the volume of space associated with a vertex cluster occupies less than a user-specified amount of the screen, all vertices within that cluster are collapsed together and degenerate polygons filtered out. HDS maintains an active list of visible polygons for rendering. Since frame-to-frame movements typically involve small changes in viewpoint, and therefore modify this list by only a few polygons, the method takes advantage of temporal coherence for greater speed.
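The screen-space collapse test described above can be sketched as follows. This is an illustrative approximation, not HDS's actual implementation: the function names and the bounding-sphere test are assumptions of the sketch.

```python
import math

def screen_extent_px(radius, distance, fov_y_rad, viewport_h_px):
    # Approximate on-screen size (in pixels) of a vertex cluster's
    # bounding sphere, seen from `distance` with a vertical field of
    # view of `fov_y_rad` radians.
    if distance <= radius:
        return float("inf")  # viewer inside the sphere: never collapse
    angular = 2.0 * math.asin(radius / distance)
    return angular / fov_y_rad * viewport_h_px

def should_collapse(radius, distance, fov_y_rad, viewport_h_px, threshold_px):
    # Collapse all vertices in the cluster when it occupies less than a
    # user-specified number of pixels on screen.
    return screen_extent_px(radius, distance, fov_y_rad, viewport_h_px) < threshold_px
```

For example, a cluster of radius 0.1 seen from 100 units away covers about 2 pixels of a 1080-pixel-high viewport with a 60° field of view, so it would collapse under a 5-pixel threshold.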

588 citations


Journal ArticleDOI
TL;DR: The honeycomb mesh, based on hexagonal plane tessellation, is considered as a multiprocessor interconnection network and honeycomb networks with rhombus and rectangle as the bounding polygons are considered.
Abstract: The honeycomb mesh, based on hexagonal plane tessellation, is considered as a multiprocessor interconnection network. A honeycomb mesh network with n nodes has degree 3 and diameter /spl ap/1.63/spl radic/n-1, which is 25 percent smaller degree and 18.5 percent smaller diameter than the mesh-connected computer with approximately the same number of nodes. Vertex and edge symmetric honeycomb torus network is obtained by adding wraparound edges to the honeycomb mesh. The network cost, defined as the product of degree and diameter, is better for honeycomb networks than for the two other families based on square (mesh-connected computers and tori) and triangular (hexagonal meshes and tori) tessellations. A convenient addressing scheme for nodes is introduced which provides simple computation of shortest paths and the diameter. Simple and optimal (in the number of required communication steps) routing, broadcasting, and semigroup computation algorithms are developed. The average distance in honeycomb torus with n nodes is proved to be approximately 0.54/spl radic/n. In addition to honeycomb meshes bounded by a regular hexagon, we consider also honeycomb networks with rhombus and rectangle as the bounding polygons.
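The network-cost comparison can be reproduced numerically from the closed forms quoted in the abstract (a sketch; the formulas are taken directly from the figures above):

```python
import math

def honeycomb_cost(n):
    # Honeycomb mesh with n nodes: degree 3, diameter ~ 1.63*sqrt(n) - 1.
    degree, diameter = 3, 1.63 * math.sqrt(n) - 1
    return degree * diameter

def square_mesh_cost(n):
    # Mesh-connected computer of the same size: degree 4,
    # diameter 2*(sqrt(n) - 1).
    degree, diameter = 4, 2 * (math.sqrt(n) - 1)
    return degree * diameter
```

For n = 1024 nodes the honeycomb cost comes to about 153.5 versus 248 for the square mesh, consistent with the claim that network cost is better for honeycomb networks.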

300 citations


Book ChapterDOI
06 Aug 1997
TL;DR: This paper addresses the problem of planning the motion of one or more pursuers in a polygonal environment to eventually “see” an evader that is unpredictable, has unknown initial position, and is capable of moving arbitrarily fast.
Abstract: This paper addresses the problem of planning the motion of one or more pursuers in a polygonal environment to eventually “see” an evader that is unpredictable, has unknown initial position, and is capable of moving arbitrarily fast. This problem was first introduced by Suzuki and Yamashita. Our study of this problem is motivated in part by robotics applications, such as surveillance with a mobile robot equipped with a camera that must find a moving target in a cluttered workspace. A few bounds are introduced, and a complete algorithm is presented for computing a successful motion strategy for a single pursuer. For simply-connected free spaces, it is shown that the minimum number of pursuers required is Θ(lg n). For multiply-connected free spaces, the bound is Θ(√h + lg n) pursuers for a polygon that has n edges and h holes. A set of problems that are solvable by a single pursuer and require a linear number of recontaminations is shown. The complete algorithm searches a finite cell complex that is constructed on the basis of critical information changes. It has been implemented and computed examples are shown.

193 citations


Proceedings ArticleDOI
03 Aug 1997
TL;DR: A bump mapping method that requires minimal hardware beyond that necessary for Phong shading is presented, eliminating the costly per-pixel steps of reconstructing a tangent space and perturbing the interpolated normal vector.
Abstract: We present a bump mapping method that requires minimal hardware beyond that necessary for Phong shading. We eliminate the costly per-pixel steps of reconstructing a tangent space and perturbing the interpolated normal vector by a) interpolating vectors that have been transformed into tangent space at polygon vertices and b) storing a precomputed, perturbed normal map as a texture. This represents a considerable savings in hardware or rendering speed compared to a straightforward implementation of bump mapping. CR categories and subject descriptors: I.3.3 [Computer Graphics]: Picture/Image generation; I.3.7 [Image Processing]: Enhancement
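Step (a), transforming vectors into tangent space at the vertices, can be sketched in a few lines. This is a pure-Python illustration; the function names are assumptions, and the real method runs in shading hardware:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, tangent, bitangent, normal):
    # Express a world-space vector v in the per-vertex (T, B, N) basis,
    # so it can be interpolated across the polygon instead of
    # reconstructing the basis per pixel.
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

def bump_diffuse(light_ts, perturbed_normal):
    # Per-pixel step: a single dot product between the interpolated
    # tangent-space light vector and a normal fetched from the
    # precomputed, perturbed normal map.
    return max(0.0, dot(light_ts, perturbed_normal))
```

With an identity basis the transform is the identity, and an unperturbed normal facing the light yields full diffuse intensity.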

162 citations


Proceedings ArticleDOI
03 Aug 1997
TL;DR: The Visibility Skeleton is a powerful new utility which can efficiently and accurately answer visibility queries for the entire scene, and on-demand or lazy construction is presented, its implementation showing encouraging first results.
Abstract: Many problems in computer graphics and computer vision require accurate global visibility information. Previous approaches have typically been complicated to implement and numerically unstable, and often too expensive in storage or computation. The Visibility Skeleton is a powerful new utility which can efficiently and accurately answer visibility queries for the entire scene. The Visibility Skeleton is a multi-purpose tool, which can solve numerous different problems. A simple construction algorithm is presented which only requires the use of well known computer graphics algorithmic components such as ray-casting and line/plane intersections. We provide an exhaustive catalogue of visual events which completely encode all possible visibility changes of a polygonal scene into a graph structure. The nodes of the graph are extremal stabbing lines, and the arcs are critical line swaths. Our implementation demonstrates the construction of the Visibility Skeleton for scenes of over a thousand polygons. We also show its use to compute exact visible boundaries of a vertex with respect to any polygon in the scene, the computation of global or on-the-fly discontinuity meshes by considering any scene polygon as a source, as well as the extraction of the exact blocker list between any polygon pair. The algorithm is shown to be manageable for the scenes tested, both in storage and in computation time. To address the potential complexity problems for large scenes, on-demand or lazy construction is presented, its implementation showing encouraging first results.

143 citations


Proceedings ArticleDOI
03 Aug 1997
TL;DR: A novel algorithm which provides interactive update rates of global illumination for complex scenes with moving objects, and the idea of an implicit line-space hierarchy is introduced, which provides a natural control mechanism allowing the regulation of the tradeoff between image quality and frame rate.
Abstract: In this paper we describe a unified data-structure, the 3D Visibility Complex, which encodes the visibility information of a 3D scene of polygons and smooth convex objects. This data-structure is a partition of the maximal free segments and is based on the characterization of the topological changes of visibility along critical line sets. We show that the size k of the complex is Ω(n) and O(n⁴), and we give an output-sensitive algorithm to build it in time O((n³ + k) log n). This theoretical work has already been used to define a practical data-structure, the Visibility Skeleton, described in a companion paper. Interactively manipulating the geometry of complex, globally illuminated scenes has to date proven an elusive goal. Previous attempts have failed to provide interactive updates of global illumination and have not been able to offer well-adapted algorithms controlling the frame rate. The need for such interactive updates of global illumination is becoming increasingly important as the field of application of radiosity algorithms widens. To address this need, we present a novel algorithm which provides interactive update rates of global illumination for complex scenes with moving objects. In the context of clustering for hierarchical radiosity, we introduce the idea of an implicit line-space hierarchy. This hierarchy is realized by augmenting the links between hierarchical elements (clusters or surfaces) with shafts, representing the set of lines passing through the two linked elements. We show how line-space traversal allows rapid identification of modified links, and simultaneous cleanup of subdivision no longer required after a geometry move, by identifying the modified paths in the scene hierarchy. The implementation of our new algorithm allows interactive updates of illumination after object motion for scenes containing several thousand polygons, including global illumination effects. Finally, the line-space hierarchy traversal provides a natural control mechanism allowing the regulation of the tradeoff between image quality and frame rate.

138 citations


Journal ArticleDOI
TL;DR: The Kernel estimator was robust to changes in spatial resolution of the data, and the polygon estimates were severely biased upwards with decreasing spatial resolution (increasing grid cell size), therefore, comparative studies based on polygon methods must use the same spatial resolution.
Abstract: We compared 3 home range estimators (kernel estimator [Kernel], multiple polygons by clustering [Cluster], and minimum convex polygon [MCP]) and evaluated a measure of autocorrelation (Schoener's ratio), with respect to the effects of sampling frequency, spatial resolution of the sampling reference grid, and sample size. We also used Schoener's ratio as a descriptor of within home range movements. An extensive dataset from radiotracking of root voles (Microtus oeconomus) formed the basis for these comparisons. The degree of autocorrelation was sex specific. In particular, locations of reproductive females were significantly autocorrelated for a sampling interval equal to the period of the population's ultradian activity rhythm, indicating territory patrolling behavior in this sex. We assessed the effect of spatial resolution of animal location data on home range descriptors by manipulating the cell size of the sampling reference grids. The Kernel estimator was robust to changes in spatial resolution of the data. In contrast, the polygon estimates were severely biased upwards with decreasing spatial resolution (increasing grid cell size). Therefore, comparative studies based on polygon methods must use the same spatial resolution. The sampling frequency affected all estimators, but qualitative differences were found among the specific estimators. Numerical resampling methods indicated that home range sizes were underestimated, and that the precision of the estimators was generally low.
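Of the three estimators compared, the minimum convex polygon (MCP) is the simplest to reproduce: take the convex hull of the location fixes and measure its area. A minimal sketch (Andrew's monotone-chain hull plus the shoelace formula; not the study's software):

```python
def convex_hull(points):
    # Andrew's monotone chain: returns hull vertices in CCW order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    # Shoelace formula for a simple polygon.
    a = 0.0
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        a += x1 * y2 - x2 * y1
    return abs(a) / 2.0
```

Because every location fix lies on or inside the hull, coarsening the spatial resolution of the fixes pushes hull vertices outward, which is one way to see the upward bias of polygon methods reported above.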

136 citations


01 Jan 1997
TL;DR: In this article, the moduli spaces of polygons in R^2 and R^3 were studied and the bending flows defined by Kapovich-Millson arise as a reduction of the Gel'fand-Cetlin system on the Grassmannian.
Abstract: We study the moduli spaces of polygons in R^2 and R^3, identifying them with subquotients of 2-Grassmannians using a symplectic version of the Gel'fand-MacPherson correspondence. We show that the bending flows defined by Kapovich-Millson arise as a reduction of the Gel'fand-Cetlin system on the Grassmannian, and with these determine the pentagon and hexagon spaces up to equivariant symplectomorphism. Other than invocation of Delzant's theorem, our proofs are purely polygon-theoretic in nature.

115 citations


Proceedings ArticleDOI
01 Oct 1997
TL;DR: An algorithm for repairing polyhedral CAD models that have errors in their B-REP that converts an unordered collection of polygons to a shared-vertex representation to help eliminate errors.
Abstract: We describe an algorithm for repairing polyhedral CAD models that have errors in their B-REP. Errors like cracks, degeneracies, duplication, holes and overlaps are usually introduced in solid models due to imprecise arithmetic, model transformations, designer errors, programming bugs, etc. Such errors often hamper further processing such as finite element analysis, radiosity computation and rapid prototyping. Our fault-repair algorithm converts an unordered collection of polygons to a shared-vertex representation to help eliminate errors. This is done by choosing, for each polygon edge, the most appropriate edge to unify it with. The two edges are then geometrically merged into one, by moving vertices. At the end of this process, each polygon edge is either coincident with another or is a boundary edge for a polygonal hole or a dangling wall and may be appropriately repaired. Finally, in order to allow user-inspection of the automatic corrections, we produce a visualization of the repair and let the user mark the corrections that conflict with the original design intent. A second iteration of the correction algorithm then produces a repair that is commensurate with the intent. Thus, by involving the users in a feedback loop, we are able to refine the correction to their satisfaction.
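The conversion to a shared-vertex representation can be approximated by welding nearly coincident vertices. The tolerance-based matching below is a simplified stand-in for the paper's edge-unification step, and the quadratic search is for clarity only:

```python
def weld_vertices(polygons, tol=1e-3):
    # Merge vertices that lie within `tol` of each other into a single
    # shared vertex, producing an indexed (shared-vertex) mesh.
    # Cracks narrower than `tol` disappear in the process.
    verts, faces = [], []
    def find_or_add(p):
        for i, q in enumerate(verts):
            if all(abs(a - b) <= tol for a, b in zip(p, q)):
                return i  # close enough: reuse the existing vertex
        verts.append(p)
        return len(verts) - 1
    for poly in polygons:
        faces.append([find_or_add(p) for p in poly])
    return verts, faces
```

Two triangles that should share an edge but disagree by a tiny numerical crack end up referencing the same two vertex indices after welding.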

113 citations


15 Jan 1997
TL;DR: An algorithm for simulating soft shadows at interactive rates using graphics hardware is described; it can be generalized to simulate shadows on specular surfaces.
Abstract: This paper describes an algorithm for simulating soft shadows at interactive rates using graphics hardware. On current graphics workstations, the technique can calculate the soft shadows cast by moving, complex objects onto multiple planar surfaces in about a second. In a static, diffuse scene, these high quality shadows can then be displayed at 30 Hz, independent of the number and size of the light sources. For a diffuse scene, the method precomputes a radiance texture that captures the shadows and other brightness variations on each polygon. The texture for each polygon is computed by creating registered projections of the scene onto the polygon from multiple sample points on each light source, and averaging the resulting hard shadow images to compute a soft shadow image. After this precomputation, soft shadows in a static scene can be displayed in real time with simple texture mapping of the radiance textures. All pixel operations employed by the algorithm are supported in hardware by existing graphics workstations. The technique can be generalized for the simulation of shadows on specular surfaces.
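The averaging step can be sketched directly: each hard-shadow image is a binary mask rendered from one sample point on the light, and the soft-shadow image is their mean. Nested lists stand in for textures here; the paper does this with hardware accumulation:

```python
def soft_shadow(hard_images):
    # Average N binary hard-shadow images (nested lists of 0/1, all the
    # same size) into one soft-shadow image with fractional occlusion.
    n = len(hard_images)
    rows = len(hard_images[0])
    cols = len(hard_images[0][0])
    return [[sum(img[r][c] for img in hard_images) / n for c in range(cols)]
            for r in range(rows)]
```

A pixel lit from both light samples stays at 1.0, while a pixel shadowed from one of two samples lands in the penumbra at 0.5.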

106 citations


Journal Article
TL;DR: This study explored and evaluated the consequences of data conversion on the accuracy of the resulting data layer and recommended its application to information derived from remotely sensed data.
Abstract: Spatial data can be represented in two formats, raster (grid cell) or vector (polygon). It is inevitable that conversion of the data between these two formats be essential to the best use of the data. Most geographic information systems (GIS) now provide software for such a conversion. The objective of this study was to explore and evaluate the consequences of data conversion on the accuracy of the resulting data layer. Simple shapes were chosen to document the results of the raster-to-vector and vector-to-raster conversion processes. These shapes included a square, a triangle (not aligned with the grid), a circle, a hole within the circle, and a non-convex shape. Error matrices were employed to represent the changes in area through the conversion process. A second set of data including a circle, a thin rectangle, and a wide rectangle were used to examine the effect of grid cell size on both presence/absence of a feature as well as to maintain the feature's shape. Finally, recommendations for continuing this work and its application to information derived from remotely sensed data were presented.
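The area distortion introduced by vector-to-raster conversion is easy to reproduce: rasterize a shape at a given cell size by sampling cell centers, then compare the counted area with the true one. This is an illustrative sketch, not the study's protocol:

```python
import math

def rasterized_area(inside, cell, extent):
    # Vector-to-raster conversion over a square extent: a grid cell is
    # counted as "inside" if the shape contains its center point.
    n = int(round(extent / cell))
    count = sum(1 for i in range(n) for j in range(n)
                if inside((i + 0.5) * cell, (j + 0.5) * cell))
    return count * cell * cell

# Test shape: a circle of radius 1 centered in a 3-by-3 extent.
in_circle = lambda x, y: (x - 1.5) ** 2 + (y - 1.5) ** 2 <= 1.0
```

At a cell size of 0.05 the rasterized area is close to the true area π; coarsening the grid inflates the error, mirroring the grid-cell-size effect examined above.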

Patent
30 Apr 1997
TL;DR: In this paper, an adaptive pixel multisampler (24) generates pixel data for display using an interlocking sub-pixel sampling pattern P and a frame buffer (26) organized as a per-polygon, per-pixel heap (64).
Abstract: An adaptive pixel multisampler (24) generates pixel data for display using an interlocking sub-pixel sampling pattern P and a frame buffer (26) organized as a per-polygon, per-pixel heap (64). The interlocking sampling pattern P provides the advantages of a multi-pixel shaped filter without pixel-to-pixel cross communication and without additional sub-pixels. The per-polygon, per-pixel heap (64) allocates frame buffer memory so that each pixel will have one set of data stored in the frame buffer (26) for every polygon that influences that pixel. This memory allocation scheme can significantly reduce frame buffer memory requirements. The polygon data is blended to properly handle processing of transparent polygons and polygon edges without the degradation of image quality found in conventional computer graphics systems.

Proceedings ArticleDOI
30 Apr 1997
TL;DR: This paper presents a novel method for fast and efficient backface culling: it reduces the backface test to one logical operation per polygon while requiring only two bytes extra storage per polygon.
Abstract: This paper presents a novel method for fast and efficient backface culling: we reduce the backface test to one logical operation per polygon while requiring only two bytes extra storage per polygon. The normal mask is introduced, where each bit is associated with a cluster of normals in a normal-space partitioning. A polygon's normal is approximated by the cluster of normals in which it falls; the cluster's normal mask is stored with the polygon in a preprocessing step. Although conceptually the normal masks require as many bits as the number of clusters, we observe that only two bytes are actually necessary. For each frame (and for each viewing volume), we calculate the backface mask by ORing the normal masks of all normal clusters that are backfacing. The backface test finally reduces to a single logical AND operation between the polygon's normal mask and the backface mask.
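The mask machinery can be sketched in a few lines. Here the normal-space partition is a toy quantization of the normal's azimuth into 16 clusters; the paper's actual 3D partitioning and two-byte compression are not reproduced:

```python
import math

BITS = 16  # number of normal clusters in this toy 2D partition

def cluster_index(normal):
    # Quantize the normal's azimuth angle into one of BITS clusters.
    angle = math.atan2(normal[1], normal[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi) * BITS) % BITS

def normal_mask(normal):
    # Preprocessing: one bit set, stored with the polygon.
    return 1 << cluster_index(normal)

def backface_mask(view_dir):
    # Per frame: OR together the masks of all clusters whose
    # representative normal faces away from the viewer (normal dot
    # view direction > 0, with view_dir pointing into the scene).
    mask = 0
    for i in range(BITS):
        a = (i + 0.5) / BITS * 2 * math.pi
        if math.cos(a) * view_dir[0] + math.sin(a) * view_dir[1] > 0:
            mask |= 1 << i
    return mask

def is_backfacing(poly_mask, bf_mask):
    # The per-polygon test: a single logical AND.
    return (poly_mask & bf_mask) != 0
```

A polygon whose normal points along the view direction is flagged backfacing with one AND, while one facing the viewer is not.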

Journal ArticleDOI
TL;DR: The complexities of eight point-in-polygon algorithms were analyzed and the sum of area method, the sign of offset method, and the orientation method are well suited for a single point query.
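Of the methods named, the sign-of-offset test is the easiest to sketch for a single point query. The sketch below assumes a convex polygon with counter-clockwise vertex order; the function name is illustrative:

```python
def inside_convex(point, poly):
    # Sign-of-offset test: a point is inside a convex, CCW-ordered
    # polygon iff it lies on the left of (or on) every directed edge.
    px, py = point
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Cross product sign gives the side of edge (x1,y1)->(x2,y2).
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True
```

For general (non-convex) polygons the other listed families, such as the sum-of-area methods, are needed.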

Journal ArticleDOI
TL;DR: A quantitative characterization and response analysis of a multi-phase representative material is performed using Voronoi cells as a unified tool; the results are used for establishing anisotropy measures and for evaluating statistical functions of stresses.

Journal ArticleDOI
TL;DR: This work gives new methods for maintaining a data structure that supports ray-shooting and shortest-path queries in a dynamically changing connected planar subdivision S, and outperforms the previous best data structure for this problem by a log n factor in all the complexity measures.

Journal ArticleDOI
TL;DR: A general framework is presented for solving a key subproblem of the LR problem which dominates the running time for a variety of polygon types, and it is shown that the LR in a general polygon (allowing holes) can be found in O(n log² n) time.
Abstract: This paper considers the geometric optimization problem of finding the Largest area axis-parallel Rectangle (LR) in an n-vertex general polygon. We characterize the LR for general polygons by considering different cases based on the types of contacts between the rectangle and the polygon. A general framework is presented for solving a key subproblem of the LR problem which dominates the running time for a variety of polygon types. This framework permits us to transform an algorithm for orthogonal polygons into an algorithm for non-orthogonal polygons. Using this framework, we show that the LR in a general polygon (allowing holes) can be found in O(n log² n) time. This matches the running time of the best known algorithm for orthogonal polygons. References are given for the application of the framework to other types of polygons. For each type, the running time of the resulting algorithm matches the running time of the best known algorithm for orthogonal polygons of that type. A lower bound of Ω(n log n) is established for finding the LR in both self-intersecting polygons and general polygons with holes. The latter result gives us both a lower bound of Ω(n log n) and an upper bound of O(n log² n) for general polygons.

Journal ArticleDOI
11 Nov 1997
TL;DR: The concept of extended feature objects consisting of an infinite set of feature points for similarity retrieval is introduced and the selectivity of the index is improved by using an adaptive decomposition of very large feature objects and a dynamic joining of small feature objects.
Abstract: In this paper, we introduce the concept of extended feature objects for similarity retrieval. Conventional approaches for similarity search in databases map each object in the database to a point in some high-dimensional feature space and define similarity as some distance measure in this space. For many similarity search problems, this feature-based approach is not sufficient. When retrieving partially similar polygons, for example, the search cannot be restricted to edge sequences, since similar polygon sections may start and end anywhere on the edges of the polygons. In general, inherently continuous problems such as the partial similarity search cannot be solved by using point objects in feature space. In our solution, we therefore introduce extended feature objects consisting of an infinite set of feature points. For an efficient storage and retrieval of the extended feature objects, we determine the minimal bounding boxes of the feature objects in multidimensional space and store these boxes using a spatial access structure. In our concrete polygon problem, sets of polygon sections are mapped to 2D feature objects in high-dimensional space which are then approximated by minimal bounding boxes and stored in an R*-tree. The selectivity of the index is improved by using an adaptive decomposition of very large feature objects and a dynamic joining of small feature objects. For the polygon problem, translation, rotation, and scaling invariance is achieved by using the Fourier-transformed curvature of the normalized polygon sections. In contrast to vertex-based algorithms, our algorithm guarantees that no false dismissals may occur and additionally provides fast search times for realistic database sizes. We evaluate our method using real polygon data of a supplier for the car manufacturing industry.

Patent
01 Oct 1997
TL;DR: Problem polygons are buffered for later processing while standard graphics processing continues, avoiding the need to periodically reformat data and perform clipping; the buffered polygons are clipped after either a predefined number of polygons has been stored or a change in the rendering state occurs.
Abstract: While executing the standard graphics processing steps, problem polygons (i.e., those outside of a defined clip volume) are buffered for later processing, while the standard graphics processing continues, without the need for periodically reformatting data and performing clipping. After either a predefined number of polygons have been stored at the buffer location, or at such time as a change in the rendering state occurs, the buffered polygons are clipped.

Journal ArticleDOI
TL;DR: A new approach is introduced to reduce the number of nodes of a polyhedral model in accordance with an error criterion which can be easily understood by the user.
Abstract: Polyhedral models of objects are an efficient representation for reverse engineering applications and visualization purposes. Because digitizing devices can produce large amounts of points, the polyhedral representation of objects may require a treatment of simplification to preserve the efficiency of subsequent processes for machining or visualization. Here, a new approach is introduced to reduce the number of nodes of a polyhedral model in accordance with an error criterion which can be easily understood by the user. This criterion uses error zones assigned to each node of the initial polyhedron and guarantees that the simplified polyhedron intersects each of the error zones. This behaviour is equivalent to constraining the simplified polyhedron to be included into the geometric envelope defined by all the error zones attached to the initial polyhedron. The iterative treatments involved in the simplification process incorporate an inheritance mechanism to transfer the error zones from one iteration to the next one. Thus, an intuitive manner to monitor the simplification process is set up. A node selection criterion based on curvature approximation has been included to sort the nodes according to their probability of removal. A specific remeshing scheme has also been included to increase the robustness of the treatment when face arrangements around a node generate distorted polygon shapes. Examples highlight the predictable behaviour of the algorithm.

Patent
31 Dec 1997
TL;DR: A system for computing a pattern function for a polygonal pattern having a finite number of predetermined face angles is presented; it decomposes the polygon into a set of flashes, computes the pattern function by summing all flashes evaluated at a point (x,y), and returns 1 if the point is inside the polygon and 0 otherwise.
Abstract: A system for computing a pattern function for a polygonal pattern having a finite number of predetermined face angles. One method includes the steps of decomposing the polygon into a set of flashes and computing the pattern function by summing together all flashes evaluated at a point (x,y), with the pattern function returning 1 if point (x,y) is inside the polygon and 0 otherwise. Another method for computing a two-dimensional convolution value for any point (x,y) on a polygonal pattern includes the steps of identifying a set of half-plane basis functions corresponding to each face angle of the polygonal pattern, convolving each half-plane basis function with a convolution kernel using integration to find convolved flash (cflash) x,y values, storing the cflash (x,y) values to a two-dimensional look-up table, decomposing the polygonal pattern into a set of flashes where each of the flashes is an instance of the half-plane basis functions, and computing a convolution value for point (x,y) by looking-up a corresponding cflash x,y value for each flash in the table and summing together the corresponding cflash x,y values. The present invention may be used in a method for determining correction steps to which a design layout is to be subjected during wafer proximity correction.

Book
01 Jan 1997
TL;DR: A competitive strategy for walking into the kernel of an initially unknown star-shaped polygon is presented, and a tight upper bound of 5.3331... is shown for the length of a self-approaching curve over the distance between its endpoints.
Abstract: We present a competitive strategy for walking into the kernel of an initially unknown star-shaped polygon. From an arbitrary start point, s, within the polygon, our strategy finds a path to the closest kernel point, k, whose length does not exceed 5.3331... times the distance from s to k. This is complemented by a general lower bound of √2. Our analysis relies on a result about a new and interesting class of curves which are self-approaching in the following sense: for any three consecutive points a, b, c on the curve, the point b is closer to c than a is. We show a tight upper bound of 5.3331... for the length of a self-approaching curve over the distance between its endpoints.
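The self-approaching property can be checked on a sampled curve. The sketch below mirrors the definition quoted above on discrete samples only, so it is a necessary rather than sufficient check:

```python
import math

def is_self_approaching_sample(points):
    # For every three consecutive samples a, b, c along the curve,
    # b must be strictly closer to c than a is.
    for a, b, c in zip(points, points[1:], points[2:]):
        if math.dist(b, c) >= math.dist(a, c):
            return False
    return True
```

A straight polyline passes the check, while any backtracking sample violates it.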

Patent
07 Oct 1997
TL;DR: In this paper, a system and method of determining, and subsequently using in a rendering engine, an illumination map is employed by the rendering engine to avoid having to calculate the contributions of lights in the scene during rendering, thus reducing the rendering time.
Abstract: A system and method of determining, and subsequently using in a rendering engine, an illumination map. The illumination map is employed by the rendering engine to avoid having to calculate the contributions of lights in the scene during rendering, thus reducing the rendering time. In one embodiment, the system and method is used to determine the illumination values from the contribution of one or more lights to one or more texture mapped objects. This illumination map can either be stored independently of the texture picture to be mapped or can be combined with the texture picture to obtain an illuminated texture picture for subsequent rendering independent of the light sources. In another embodiment, the present invention is used to determine the illumination values for one or more objects represented by a polygon mesh. This illumination map can be stored independent of the material and color of the object or can be combined with the color and material information and stored in the object definition. In either of these cases, this illumination map represents the illumination values at the vertices of the polygons, the rendering engine and/or hardware linearly interpolating the remainder of the rendering information for each polygon. The illumination values can be determined either by summing the contribution of the lights in a scene at points of interest or by evaluating the entire shade tree defined for the scene at those points of interest. In this latter case, the contributions of reflections, refractions, transparencies and any procedural functions defined for the scene are considered in determining the illumination values. Evaluation of the entire shade tree also allows other options, such as the ability to generate 2D textures from procedural 3D textures or to generate a texture that contains the result of a blending between multiple textures.

Journal ArticleDOI
TL;DR: One of the applications of the estimate is that the topological entropy of polygonal billiards is zero, which implies the subexponential growth of various geometric quantities associated with a polygon.
Abstract: We study the topological entropy of a class of transformations with mild singularities: the generalized polygon exchanges. This class contains, in particular, polygonal billiards. Our main result is a geometric estimate, from above, on the topological entropy of generalized polygon exchanges. One of the applications of our estimate is that the topological entropy of polygonal billiards is zero. This implies the subexponential growth of various geometric quantities associated with a polygon. Other applications are to the piecewise isometries in two dimensions, and to billiards in rational polyhedra.

Patent
06 Oct 1997
TL;DR: In this paper, a digital map system for displaying 3D terrain data using terrain data in the form of polygons is presented, which is produced from a database of elevation points which are divided into, for example, n×n (where n is a positive integer) squares which have an elevation point in the center of the square.
Abstract: A digital map system for displaying three dimensional terrain data uses terrain data in the form of polygons. The polygon database is produced from a database of elevation points which are divided into, for example, n×n (where n is a positive integer) squares, each of which has an elevation point at its center. The center point forms four polygons with the corners of the square. The elevation of the center point may be chosen as the highest elevation point in the n×n square, the average elevation of the elevation points in the n×n square, the elevation of the actual center point, or by other methods. The method chosen depends on how the database is to be used. The size of the n×n square also depends on how the database is to be used, since there is a tradeoff between the resolution of the displayed scene and the amount of data reduction from the original database of elevation points. The polygon database may be used in a pilot aid using a synthetic environment, in a flight simulator, as part of the control system for a remotely piloted vehicle, or in a video game.
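The square-splitting step can be sketched as follows. The corner ordering, tuple layout, and the two center-elevation strategies shown are assumptions based on the abstract, not the patent's actual data structures.

```python
def square_to_polygons(corners, center_mode="highest"):
    """Split one grid square into four triangles around a center point.

    corners: four (x, y, z) tuples in counterclockwise order, e.g.
    SW, SE, NE, NW (an assumed convention). The center elevation is
    chosen by one of the strategies the abstract mentions.
    """
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    zs = [c[2] for c in corners]
    cx, cy = sum(xs) / 4.0, sum(ys) / 4.0
    if center_mode == "highest":
        cz = max(zs)            # highest elevation in the square
    elif center_mode == "average":
        cz = sum(zs) / 4.0      # average of the corner elevations
    else:
        raise ValueError("unknown center_mode: %r" % center_mode)
    center = (cx, cy, cz)
    # four triangles: each edge of the square plus the center point
    return [(corners[i], corners[(i + 1) % 4], center) for i in range(4)]
```

Choosing `"highest"` biases the terrain upward, which is the conservative choice for a pilot aid; `"average"` preserves mean elevation, which may suit a video game or simulator.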

Patent
28 Aug 1997
TL;DR: In this patent, a plurality of object images is captured by rotating a real object through arbitrary angle steps, and texture information is assigned to each polygon from the object image in which the relevant polygon has the largest projection area.
Abstract: The present method represents a three-dimensional shape model by polygons, using a plurality of object images captured by rotating a real object through arbitrary angle steps, and assigns texture information to each polygon from the object image in which the relevant polygon has the largest projection area. To improve color continuity between adjacent polygons, the object image covering both a polygon of interest and its adjacent polygon is selected so that its shooting position and shooting direction approximate those of the images already assigned. An alternative method divides an object image into a plurality of regions, computes the difference between the object image and a background image at the region level, outputs the mean of the absolute differences for each region, and detects as the object portion any region whose mean absolute difference is equal to or greater than a threshold value. A further method obtains a plurality of object images by shooting only the background of an object of interest and then shooting the object during each rotation. A silhouette image is generated by taking the difference between the object image and the background image. A voting process is carried out on the voxel space on the basis of the silhouette images. A polygon mesh is generated from the three-dimensional shape obtained by the voting process, and the texture obtained from the object images is mapped onto the polygons.
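The view-selection criterion, picking the image in which a polygon projects largest, can be sketched with a shoelace area over the already-projected 2D vertices. Feeding in per-view 2D projections rather than the full camera model is a simplification for the example.

```python
def projected_area(points2d):
    """Shoelace formula: area of a simple polygon given 2D vertices."""
    area = 0.0
    n = len(points2d)
    for i in range(n):
        x0, y0 = points2d[i]
        x1, y1 = points2d[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

def best_view(projections_by_view):
    """Index of the captured image in which the polygon projects largest.

    projections_by_view: one 2D vertex list per captured image (a
    hypothetical layout; the patent works from the full 3D capture setup).
    """
    areas = [projected_area(p) for p in projections_by_view]
    return max(range(len(areas)), key=areas.__getitem__)
```

A polygon seen edge-on projects to near-zero area, so this criterion naturally prefers views that face the polygon most directly.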

Patent
Kari Heiska1, Arto Kangas1
20 May 1997
TL;DR: In this paper, the authors proposed a method and an apparatus for determining path attenuation of radio waves in a radio system, in which method at least a two-dimensional vector map describing the environment of a base station is used for determining the coverage area of the base station of the system, and in which the strength of the emission of a transmitter is determined at various points in the environment.
Abstract: The invention relates to a method and an apparatus for determining path attenuation of radio waves in a radio system. In the method, at least a two-dimensional vector map describing the environment of a base station is used for determining the coverage area of the base station, and the strength of the emission of a transmitter is determined at various points in the environment. To determine propagation attenuation efficiently, when the strength of the emission is determined at each transmitter location point to be examined, the calculation is restricted to a polygon area (500, 502, 504) determined from the calculation area described by the vector map, the polygon area being one that radio waves can reach both directly and by means of diffraction and reflections.
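Restricting a field-strength calculation to a polygon area requires a containment test for each candidate map point. The patent does not specify one; a common choice is the even-odd ray-casting test, sketched here as an illustration.

```python
def point_in_polygon(px, py, polygon):
    """Even-odd ray-casting test: is point (px, py) inside the polygon?

    polygon: list of (x, y) vertices of a simple polygon. Casts a
    horizontal ray from the point to +infinity and counts edge crossings;
    an odd count means the point is inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        # does this edge straddle the ray's y-coordinate?
        if (y0 > py) != (y1 > py):
            # x-coordinate where the edge crosses the horizontal line y=py
            x_cross = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
            if px < x_cross:
                inside = not inside
    return inside
```

Map points failing the test are simply skipped, so the attenuation computation only runs inside the polygon area derived from the vector map.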

Patent
24 Dec 1997
TL;DR: In this article, a process and implementing computer system for graphics applications in which polygon information, including transparency, color and other polygon characteristics, is organized, stored and transferred in terms of areas or tiled blocks of information in a matrix configuration.
Abstract: A process and implementing computer system for graphics applications in which polygon information, including transparency, color and other polygon characteristics, is organized, stored and transferred in terms of areas or tiled blocks of information in a matrix configuration. The polygon bytes of texel information are organized in an exemplary 8×8 row-and-column matrix format in the graphics subsystem for improved cache-hit efficiency, and are translated to and from the linear addressing scheme of a host storage device when the host storage is accessed to refill the graphics cache. The bytes comprising the memory tiles of polygon information are arranged such that a complete tile of information is transferred in one burst-mode host memory access, minimizing normal multi-line access arbitration and other typical access delays.
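One way to realize such a tiled layout is the address translation below: tiles laid out row-major across the surface, texels row-major within each tile, so each tile's 64 bytes are contiguous and can be moved in a single burst. The exact layout chosen by the hardware is an assumption for this sketch.

```python
def linear_to_tiled(x, y, surface_width, tile=8):
    """Map (x, y) texel coordinates to a byte offset in 8x8-tiled storage.

    Assumes surface_width is a multiple of the tile size, tiles are
    row-major across the surface, and texels are row-major within a tile,
    so one tile occupies tile*tile contiguous bytes.
    """
    tiles_per_row = surface_width // tile
    tile_x, tile_y = x // tile, y // tile          # which tile
    in_x, in_y = x % tile, y % tile                # position inside it
    tile_index = tile_y * tiles_per_row + tile_x
    return tile_index * tile * tile + in_y * tile + in_x
```

Adjacent texels in a small screen-space neighborhood land in the same 64-byte tile, which is what improves the cache-hit rate relative to a purely linear (scanline) layout.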

Journal ArticleDOI
TL;DR: A straightforward parameter-free method for detecting dominant points of digital curves that uses the chain code properties of digital straight lines to compute two vectors of significance.

Journal ArticleDOI
TL;DR: An O(n) time algorithm for computing row-wise maxima or minima of an implicit, totally monotone matrix whose entries represent shortest-path distances between pairs of vertices in a simple polygon is presented.
Abstract: We present an O(n) time algorithm for computing row-wise maxima or minima of an implicit, totally monotone $n \times n$ matrix whose entries represent shortest-path distances between pairs of vertices in a simple polygon. We apply this result to derive improved algorithms for several well-known problems in computational geometry. Most prominently, we obtain linear-time algorithms for computing the geodesic diameter, all farthest neighbors, and external farthest neighbors of a simple polygon, improving the previous best result by a factor of O(log n) in each case.
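For intuition about the matrix-searching machinery involved, the sketch below finds all row minima of a (merely) monotone matrix, where the column of each row's minimum never decreases, by divide and conquer. This simpler cousin runs in O((m + n) log m), not the paper's linear time for totally monotone matrices, and is shown only as an illustration.

```python
def monotone_row_minima(nrows, ncols, entry):
    """Column index of each row's minimum in a monotone matrix.

    entry(i, j) returns the matrix element. Monotonicity: if i < i',
    the minimum of row i' lies in a column >= that of row i, so after
    solving the middle row we can shrink the column range on each side.
    """
    result = [0] * nrows

    def solve(r0, r1, c0, c1):
        if r0 > r1:
            return
        mid = (r0 + r1) // 2
        # scan the allowed column range for the middle row's minimum
        best = min(range(c0, c1 + 1), key=lambda j: entry(mid, j))
        result[mid] = best
        solve(r0, mid - 1, c0, best)   # rows above: columns <= best
        solve(mid + 1, r1, best, c1)   # rows below: columns >= best
    solve(0, nrows - 1, 0, ncols - 1)
    return result
```

Totally monotone matrices (as arise from geodesic distances in a simple polygon) admit the stronger SMAWK-style linear-time bound that the paper exploits.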