
Showing papers on "Polygon" published in 2004


Journal ArticleDOI
TL;DR: In this paper, conforming finite elements on polygonal meshes are developed, and a particular contribution is the use of mesh-free (natural-neighbour, nn) basis functions on a canonical element combined with an affine map to construct conforming approximations on convex polygons.
Abstract: In this paper, conforming finite elements on polygonal meshes are developed. Polygonal finite elements provide greater flexibility in mesh generation and are better suited for applications in solid mechanics which involve a significant change in the topology of the material domain. In this study, recent advances in meshfree approximations, computational geometry, and computer graphics are used to construct different trial and test approximations on polygonal elements. A particular and notable contribution is the use of meshfree (natural-neighbour, nn) basis functions on a canonical element combined with an affine map to construct conforming approximations on convex polygons. This numerical formulation enables the construction of conforming approximations on n-gons (n ≥ 3), and hence extends the potential applications of finite elements to convex polygons of arbitrary order. Numerical experiments on second-order elliptic boundary-value problems are presented to demonstrate the accuracy and convergence of the proposed method.
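For readers unfamiliar with polygonal shape functions, the following is a minimal sketch of one common barycentric basis on a convex polygon, the classical Wachspress coordinates. It is an illustration only and is not the paper's natural-neighbour construction or its canonical-element/affine-map formulation; the function names are hypothetical.

```python
import numpy as np

def wachspress_coords(poly, x):
    """Wachspress barycentric coordinates of point x inside a convex polygon.

    poly : (n, 2) array of vertices in counter-clockwise order
    x    : (2,) query point strictly inside the polygon
    Returns an (n,) array of non-negative weights that sum to 1 and
    reproduce linear functions (a standard polygonal FE basis).
    """
    poly = np.asarray(poly, dtype=float)
    n = len(poly)

    def tri_area(a, b, c):
        return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

    w = np.empty(n)
    for i in range(n):
        prev, vi, nxt = poly[i - 1], poly[i], poly[(i + 1) % n]
        # Corner triangle at vertex i, divided by the two triangles
        # formed by the query point and the adjacent edges.
        w[i] = tri_area(prev, vi, nxt) / (tri_area(x, prev, vi) * tri_area(x, vi, nxt))
    return w / w.sum()

# Example: coordinates of a point inside a regular pentagon (an n-gon with n = 5)
theta = np.linspace(0, 2 * np.pi, 5, endpoint=False)
pentagon = np.c_[np.cos(theta), np.sin(theta)]
print(wachspress_coords(pentagon, np.array([0.1, 0.2])))
```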

429 citations


Journal ArticleDOI
TL;DR: This paper describes a straightforward method of calculating surface-area grids directly from digital elevation models (DEMs), by generating eight three-dimensional triangles connecting each cell centerpoint with the centerpoints of the eight surrounding cells, then calculating and summing the area of the portions of each triangle that lie within the cell boundary.
Abstract: There are many reasons to want to know the true surface area of the landscape, especially in landscape analysis and studies of wildlife habitat. Surface area provides a better estimate of the land area available to an animal than planimetric area, and the ratio of this surface area to planimetric area provides a useful measure of topographic roughness of the landscape. This paper describes a straightforward method of calculating surface-area grids directly from digital elevation models (DEMs), by generating eight three-dimensional triangles connecting each cell centerpoint with the centerpoints of the eight surrounding cells, then calculating and summing the area of the portions of each triangle that lie within the cell boundary. This method tended to be slightly less accurate than using Triangulated Irregular Networks (TINs) to generate surface-area statistics, especially when trying to analyze areas enclosed by vector-based polygons (i.e., management units or study areas) when there were few cells within the polygon. Accuracy and precision increased rapidly with increasing cell counts, however, and the calculated surface-area value was consistently close to the TIN-based area value at cell counts above 250. Raster-based analyses offer several advantages that are difficult or impossible to achieve with TINs, including neighborhood analysis, faster processing speed, and more consistent output. Useful derivative products such as surface-ratio grids are simple to calculate from surface-area grids. Finally, raster-formatted digital elevation data are widely and often freely available, whereas TINs must generally be generated by the user.
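A minimal sketch of the eight-triangle construction described above, applied to one interior cell of a DEM stored as a 2-D array. The cell size, function name, and use of half-length edges to approximate the in-cell portion of each triangle are assumptions of this sketch; edge cells and NoData handling are omitted.

```python
import numpy as np

def cell_surface_area(dem, row, col, cell_size):
    """Approximate true surface area of one interior DEM cell.

    Builds 8 triangles joining the cell centre to the 8 neighbouring cell
    centres, keeps (roughly) the half of each 3-D edge that lies inside the
    cell, and sums the areas of the resulting triangle fan.
    """
    zc = dem[row, col]
    centre = np.array([0.0, 0.0, zc])
    # Offsets to the 8 neighbours, clockwise from north
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    pts = []
    for dr, dc in offsets:
        z = dem[row + dr, col + dc]
        # Midpoint of the 3-D segment to the neighbour centre lies on the
        # cell boundary, so halving each edge approximates the in-cell portion.
        pts.append(np.array([dc * cell_size / 2.0,
                             -dr * cell_size / 2.0,
                             (zc + z) / 2.0]))
    area = 0.0
    for i in range(8):
        a, b = pts[i], pts[(i + 1) % 8]
        area += 0.5 * np.linalg.norm(np.cross(a - centre, b - centre))
    return area

dem = np.array([[10., 11., 12.],
                [10., 12., 14.],
                [11., 13., 15.]])
print(cell_surface_area(dem, 1, 1, cell_size=30.0))  # > 900 m^2 (the planimetric cell area)
```

For flat terrain the fan collapses to the planimetric cell area, so the surface-ratio (surface area divided by planimetric area) is exactly 1 there, which matches the roughness interpretation in the abstract.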

386 citations


Journal ArticleDOI
Tao Ju
01 Aug 2004
TL;DR: This work presents a robust method for repairing arbitrary polygon models that can efficiently process large models containing millions of polygons and is capable of reproducing sharp features in the original geometry.
Abstract: We present a robust method for repairing arbitrary polygon models. The method is guaranteed to produce a closed surface that partitions the space into disjoint internal and external volumes. Given any model represented as a polygon soup, we construct an inside/outside volume using an octree grid, and reconstruct the surface by contouring. Our novel algorithm can efficiently process large models containing millions of polygons and is capable of reproducing sharp features in the original geometry.

308 citations


Journal ArticleDOI
01 Aug 2004
TL;DR: This paper describes a method for building interpolating or approximating implicit surfaces from polygonal data using a moving least-squares formulation with constraints integrated over the polygons.
Abstract: This paper describes a method for building interpolating or approximating implicit surfaces from polygonal data. The user can choose to generate a surface that exactly interpolates the polygons, or a surface that approximates the input by smoothing away features smaller than some user-specified size. The implicit functions are represented using a moving least-squares formulation with constraints integrated over the polygons. The paper also presents an improved method for enforcing normal constraints and an iterative procedure for ensuring that the implicit surface tightly encloses the input vertices.
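As a rough illustration of a moving least-squares implicit function, here is a minimal sketch that interpolates signed point constraints (on-surface points valued 0, plus points offset along the normal with a small positive value). The paper integrates constraints over polygons and handles normals more carefully, so this is only a simplified point-sampled analogue; the names are hypothetical.

```python
import numpy as np

def mls_implicit(constraint_pts, constraint_vals, x):
    """Shepard-style (degree-0 moving least squares) interpolant of signed constraints at x."""
    d2 = np.sum((constraint_pts - x) ** 2, axis=1)
    w = 1.0 / (d2 + 1e-12)            # singular weights give interpolating behaviour
    return np.dot(w, constraint_vals) / np.sum(w)

# Constraints from a unit square patch in the z = 0 plane: on-surface points valued 0,
# plus points offset along the +z normal valued +0.1 (a common way to pin the sign).
surf = np.array([[x, y, 0.0] for x in np.linspace(0, 1, 5) for y in np.linspace(0, 1, 5)])
off = surf + np.array([0.0, 0.0, 0.1])
pts = np.vstack([surf, off])
vals = np.concatenate([np.zeros(len(surf)), 0.1 * np.ones(len(off))])

print(mls_implicit(pts, vals, np.array([0.5, 0.5, 0.0])))  # ~0: on an on-surface constraint
print(mls_implicit(pts, vals, np.array([0.5, 0.5, 0.3])))  # > 0: off the surface, on the normal side
```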

291 citations


Journal ArticleDOI
TL;DR: A new highly robust algorithm, called Pyramid, is presented to identify the stars observed by star trackers in the general lost-in-space case, where no a priori estimate of pointing is available.
Abstract: A new highly robust algorithm, called Pyramid, is presented to identify the stars observed by star trackers in the general lost-in-space case, where no a priori estimate of pointing is available. At the heart of the method is the k-vector approach for accessing the star catalog, which provides a searchless means to obtain all cataloged stars from the whole sky that could possibly correspond to a particular measured pair, given the measured interstar angle and the measurement precision. The Pyramid logic is built on the identification of a four-star polygon structure—the Pyramid—which is associated with an almost certain star identification. Consequently, the Pyramid algorithm is capable of identifying and discarding even a high number of spikes (false stars). The method, which has already been tested in space, is demonstrated to be highly efficient, extremely robust, and fast. All of these features are supported by simulations and by a few ground test experimental results.
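A minimal sketch of the catalog-access step only: pre-sort all cataloged star pairs by interstar angle, then return every pair whose angle lies within the measurement tolerance of an observed angle. The real k-vector technique replaces the binary search below with a precomputed linear index so the query is effectively searchless, and the Pyramid logic built on top of it is not shown; the names are hypothetical.

```python
import bisect
import numpy as np

def build_pair_table(unit_vectors, max_angle_rad):
    """Precompute all catalog star pairs closer than max_angle_rad, sorted by interstar angle."""
    pairs = []
    n = len(unit_vectors)
    for i in range(n):
        for j in range(i + 1, n):
            ang = np.arccos(np.clip(np.dot(unit_vectors[i], unit_vectors[j]), -1.0, 1.0))
            if ang <= max_angle_rad:
                pairs.append((ang, i, j))
    pairs.sort()
    return pairs, [p[0] for p in pairs]

def candidate_pairs(pair_table, angles, measured_angle, tol):
    """All catalog pairs whose interstar angle is within tol of the measured angle."""
    lo = bisect.bisect_left(angles, measured_angle - tol)
    hi = bisect.bisect_right(angles, measured_angle + tol)
    return pair_table[lo:hi]

# Tiny fake catalog of unit boresight vectors
rng = np.random.default_rng(0)
v = rng.normal(size=(50, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
table, angles = build_pair_table(v, max_angle_rad=np.radians(20))
print(candidate_pairs(table, angles, np.radians(5.0), np.radians(0.05)))  # pairs compatible with a 5 deg measurement
```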

183 citations


Patent
James Xiaqing Wu
12 May 2004
TL;DR: In this patent, a system for identifying line-end features in an integrated-circuit layout is described: the system selects a polygon from the layout, marks a line-end seed on it, determines whether the seed is associated with a line feature, and if so marks the line-end feature inside that line feature.
Abstract: One embodiment of the invention provides a system that facilitates identifying line-end features in a layout for an integrated circuit. The system operates by first receiving the layout for the integrated circuit. Next, the system selects a polygon from the layout and marks a line-end seed on the polygon. The system then determines if the line-end seed is associated with a line feature, and if so, the system marks the line-end feature inside the line feature.

161 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the best-fit ellipse also behaves as if it were deforming passively, which implies that all techniques of strain analysis that were previously restricted to populations of elliptical objects may now be applied to populations with arbitrary shapes.

134 citations


Journal ArticleDOI
TL;DR: Using extracted image knowledge, a new method is developed for interpolating raw LiDAR data into a grid digital surface model (DSM) while accounting for the steep discontinuities of buildings.

132 citations


Journal ArticleDOI
TL;DR: This paper discusses a set of possible estimation procedures based on the Prony and Pencil methods, relates them to one another, compares them through simulations, and presents an improvement over these methods based on direct use of the maximum-likelihood estimator, exploiting the above methods as initialization.
Abstract: This paper discusses the problem of recovering a planar polygon from its measured complex moments. These moments correspond to an indicator function defined over the polygon's support. Previous work on this problem gave necessary and sufficient conditions for such a successful recovery process and focused mainly on the case of exact measurements being given. In this paper, we extend these results and treat the same problem in the case where a longer than necessary series of noise-corrupted moments is given. Similar to methods found in array processing, system identification, and signal processing, we discuss a set of possible estimation procedures that are based on the Prony and the Pencil methods, relate them one to the other, and compare them through simulations. We then present an improvement over these methods based on the direct use of the maximum-likelihood estimator, exploiting the above methods as initialization. Finally, we show how regularization and, thus, a maximum a posteriori probability estimator could be applied to reflect prior knowledge about the recovered polygon.
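For intuition about the pencil step, here is a minimal sketch of the generic matrix-pencil estimator: given a (possibly noisy) sequence of moments of the form tau_k = sum_j c_j * z_j^k, the nodes z_j are recovered as generalized eigenvalues of two shifted Hankel matrices. Mapping a polygon's measured complex moments into this weighted form, and the Prony, maximum-likelihood, and MAP refinements, follow the paper and are not reproduced here; the names are hypothetical.

```python
import numpy as np
from scipy.linalg import eig, hankel

def pencil_nodes(tau, n_nodes):
    """Estimate the nodes z_j from moments tau_k = sum_j c_j * z_j**k, k = 0..len(tau)-1."""
    tau = np.asarray(tau, dtype=complex)
    m = n_nodes
    # Hankel matrices built from the moment sequence, shifted by one: H0[i,j] = tau[i+j], H1[i,j] = tau[i+j+1]
    H0 = hankel(tau[:m], tau[m - 1:2 * m - 1])
    H1 = hankel(tau[1:m + 1], tau[m:2 * m])
    # The nodes are the generalized eigenvalues of the pencil (H1, H0).
    vals, _ = eig(H1, H0)
    return vals

# Synthetic example: 3 nodes (playing the role of vertex locations), slightly noisy moments
z_true = np.array([1 + 1j, -2 + 0.5j, 0.5 - 1j])
c_true = np.array([0.7, 1.3, -0.4])
K = 12
tau = np.array([np.sum(c_true * z_true**k) for k in range(K)])
tau += 1e-6 * np.random.default_rng(1).normal(size=K)
print(np.sort_complex(pencil_nodes(tau, 3)))   # close to z_true
```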

120 citations


Patent
06 Oct 2004
TL;DR: In this patent, a geographical information system and a method are disclosed for geospatially mapping at least one parcel polygon within a geographical region and for displaying at least one specific attribute of each parcel polygon, i.e. a topological area within the given geographical region, as an attached attribute of latitude and longitude coordinates.
Abstract: A geographical information system and a method are disclosed for geospatially mapping at least one parcel polygon within a geographical region and for displaying at least one specific attribute of each parcel polygon, i.e. a topological area within the given geographical region, as an attached attribute of latitude and longitude coordinates. The centroid or center point of each of the parcel polygons is determined and stored into conventional computer storage means. The latitude and longitude point feature at the centroid of each parcel polygon is established and similarly stored. A unique tax identification number, e.g. the Assessor Parcel Number (APN) or Parcel Identifier Number (PIN), is assigned to each of the point features. A correlation is then made between the unique tax identification number of the point feature and a text list of at least one attribute, e.g., the physical address of the parcel polygon, of each of the point features. This attribute becomes attached to each point feature. The resulting parcel polygon map and point features with one or more of the attached attributes can then be displayed within a GIS or CAD system to provide the user with, for example, accurate locations of street addresses for use in environments that require pinpoint accuracy, such as emergency response.
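A minimal sketch of the two core steps described above: computing the centroid of a parcel polygon with the standard shoelace-weighted formula, and attaching an attribute keyed by the tax identification number to the resulting point feature. The parcel coordinates, APN, and field names below are made up for illustration.

```python
def polygon_centroid(vertices):
    """Area-weighted centroid of a simple polygon given as [(x, y), ...]."""
    a = cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

# One parcel polygon (lon/lat ring), its APN, and an attribute table keyed by APN.
parcels = {"123-456-789": [(-117.10, 32.70), (-117.09, 32.70),
                           (-117.09, 32.71), (-117.10, 32.71)]}
addresses = {"123-456-789": "42 Example St"}

point_features = []
for apn, ring in parcels.items():
    lon, lat = polygon_centroid(ring)
    point_features.append({"apn": apn, "lon": lon, "lat": lat,
                           "address": addresses.get(apn)})
print(point_features)
```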

102 citations


Journal ArticleDOI
TL;DR: In this paper, a method based on subdivision of a 2D polygonal section into a set of monotone polygons to generate a continuous path for material deposition is proposed.

Journal ArticleDOI
TL;DR: A new algorithm is described that detects a set of feature points on the boundary of an 8-connected shape that constitute the vertices of a polygonal approximation of the shape itself, and the polygon obtained by linking the detected nodes approximates the contour in an intuitive way.

Proceedings ArticleDOI
08 Jul 2004
TL;DR: This work presents a method for parameterizing irregularly triangulated input models over polyhedral domains with quadrilateral faces, and uses the parameterization for multiresolution Catmull-Clark remeshing and illustrates two applications that take advantage of the resulting representation.
Abstract: We present a method for parameterizing irregularly triangulated input models over polyhedral domains with quadrilateral faces. A combination of center-based clustering techniques is used to generate a partition of the model into regions suitable for remeshing. Several issues are addressed: the size and shape of the regions, their positioning with respect to features of the input geometry, and the amount of distortion introduced by approximating each region with a coarse polygon. Region boundaries are used to define a coarse polygonal mesh which is quadrangulated to obtain a parameterization domain. Constraints can be optionally imposed to enforce a strict correspondence between input and output features. We use the parameterization for multiresolution Catmull-Clark remeshing and we illustrate two applications that take advantage of the resulting representation: interactive model editing and texture mapping.
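As a stand-in for the partition step only, here is a minimal Lloyd-style k-means clustering of triangle centroids into regions. The paper's center-based clustering also controls region size and shape, feature alignment, and approximation distortion, none of which is modeled here; the names are hypothetical.

```python
import numpy as np

def cluster_faces(centroids, k, iters=20, seed=0):
    """Plain Lloyd/k-means over triangle centroids; returns a region label per face."""
    rng = np.random.default_rng(seed)
    centres = centroids[rng.choice(len(centroids), size=k, replace=False)]
    for _ in range(iters):
        # Assign each face to its nearest cluster centre.
        d = np.linalg.norm(centroids[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned face centroids.
        for c in range(k):
            if np.any(labels == c):
                centres[c] = centroids[labels == c].mean(axis=0)
    return labels

tri_centroids = np.random.default_rng(1).uniform(size=(500, 3))
print(np.bincount(cluster_faces(tri_centroids, k=8)))   # faces per region
```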

Journal ArticleDOI
TL;DR: An energy-based general polygon-to-polygon normal contact model is proposed in which the normal and tangential directions, magnitude, and reference contact position of the normal contact force are uniquely defined.
Abstract: This paper proposes an energy-based general polygon-to-polygon normal contact model in which the normal and tangential directions, magnitude, and reference contact position of the normal contact force are uniquely defined. The model in its final form is simple and elegant with a clear geometric perspective, and it also possesses some advanced features. Furthermore, it can be extended to more complex situations and, in particular, it may also provide a sound theoretical foundation for possibly unifying existing contact models for all types of (convex) objects.

01 Jul 2004
TL;DR: This thesis examines a specific realistic input model, in which objects are restricted to be fat, and proposes two linear-time algorithms for triangulating fat polygons that are much simpler than the existing general-purpose linear-time algorithm.
Abstract: Computational geometry is the branch of theoretical computer science that deals with algorithms and data structures for geometric objects. The most basic geometric objects include points, lines, polygons, and polyhedra. Computational geometry has applications in many areas of computer science, including computer graphics, robotics, and geographic information systems. In many computational-geometry problems, the theoretical worst case is achieved by input that is in some way "unrealistic". This causes situations where the theoretical running time is not a good predictor of the running time in practice. In addition, algorithms must also be designed with the worst-case examples in mind, which causes them to be needlessly complicated. In recent years, realistic input models have been proposed in an attempt to deal with this problem. The usual form such solutions take is to limit some geometric property of the input to a constant. We examine a specific realistic input model in this thesis: the model where objects are restricted to be fat. Intuitively, objects that are more like a ball are more fat, and objects that are more like a long pole are less fat. We look at fat objects in the context of five different problems—two related to decompositions of input objects and three problems suggested by computer graphics. Decompositions of geometric objects are important because they are often used as a preliminary step in other algorithms, since many algorithms can only handle geometric objects that are convex and preferably of low complexity. The two main issues in developing decomposition algorithms are to keep the number of pieces produced by the decomposition small and to compute the decomposition quickly. The main question we address is the following: is it possible to obtain better decompositions for fat objects than for general objects, and/or is it possible to obtain decompositions quickly? These questions are also interesting because most research into fat objects has concerned objects that are convex. We begin by triangulating fat polygons. The problem of triangulating polygons—that is, partitioning them into triangles without adding any vertices—has been solved already, but the only linear-time algorithm is so complicated that it has never been implemented. We propose two algorithms for triangulating fat polygons in linear time that are much simpler. They make use of the observation that a small set of guards placed at points inside a (certain type of) fat polygon is sufficient to see the boundary of such a polygon. We then look at decompositions of fat polyhedra in three dimensions. We show that polyhedra can be decomposed into a linear number of convex pieces if certain fatness restrictions are met. We also show that if these restrictions are not met, a quadratic number of pieces may be needed. We also show that if we wish the output to be fat and convex, the restrictions must be much tighter. We then study three computational-geometry problems inspired by computer graphics. First, we study ray-shooting amidst fat objects from two perspectives. This is the problem of preprocessing data into a data structure that can answer which object is first hit by a query ray in a given direction from a given point. We present a new data structure for answering vertical ray-shooting queries—that is, queries where the ray’s direction is fixed—as well as a data structure for answering ray-shooting queries for rays with arbitrary direction.
Both structures improve the best known results on these problems. Another problem that is studied in the field of computer graphics is the depth-order problem. We study it in the context of computational geometry. This is the problem of finding an ordering of the objects in the scene from "top" to "bottom", where one object is above the other if they share a point in the projection to the xy-plane and the first object has a higher z-value at that point. We give an algorithm for finding the depth order of a group of fat objects and an algorithm for verifying if a depth order of a group of fat objects is correct. The latter algorithm is useful because the former can return an incorrect order if the objects do not have a depth order (this can happen if the above/below relationship has a cycle in it). The first algorithm improves on the results previously known for fat objects; the second is the first algorithm for verifying depth orders of fat objects. The final problem that we study is the hidden-surface removal problem. In this problem, we wish to find and report the visible portions of a scene from a given viewpoint—this is called the visibility map. The main difficulty in this problem is to find an algorithm whose running time depends in part on the complexity of the output. For example, if all but one of the objects in the input scene are hidden behind one large object, then our algorithm should have a faster running time than if all of the objects are visible and have borders that overlap. We give such an algorithm that improves on the running time of previous algorithms for fat objects. Furthermore, our algorithm is able to handle curved objects and situations where the objects do not have a depth order—two features missing from most other algorithms that perform hidden surface removal.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: A new iterative distributed algorithm for linear minimum mean-squared-error (LMMSE) estimation in sensor networks whose measurements follow a Gaussian hidden Markov graphical model with cycles that is robust to temporary communication faults and sleeping nodes, and enjoys guaranteed convergence under mild conditions.
Abstract: We propose a new iterative distributed algorithm for linear minimum mean-squared-error (LMMSE) estimation in sensor networks whose measurements follow a Gaussian hidden Markov graphical model with cycles. The embedded polygons algorithm decomposes a loopy graphical model into a number of linked embedded polygons and then applies a parallel block Gauss-Seidel iteration comprising local LMMSE estimation on each polygon (involving inversion of a small matrix) followed by an information exchange between neighboring nodes and polygons. The algorithm is robust to temporary communication faults such as link failures and sleeping nodes and enjoys guaranteed convergence under mild conditions. A simulation study indicates that energy consumption for iterative estimation increases substantially as more links fail or nodes sleep. Thus, somewhat surprisingly, energy conservation strategies such as low-powered transmission and aggressive sleep schedules could actually be counterproductive.
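For intuition about the block iteration: in a Gaussian graphical model the LMMSE mean solves J x = h (J the information/precision matrix, h the potential vector), and each sweep solves the small local system for one block of nodes (one "embedded polygon") while holding its neighbours fixed, then exchanges the updated values. The sketch below is a generic block Gauss-Seidel solver under these assumptions, not the paper's distributed message schedule; the names are hypothetical.

```python
import numpy as np

def block_gauss_seidel(J, h, blocks, sweeps=100):
    """Solve J x = h by sweeping over blocks of variables (block Gauss-Seidel).

    J      : (n, n) symmetric positive-definite information matrix
    h      : (n,)   potential vector
    blocks : list of index arrays, one per 'polygon' of nodes
    """
    x = np.zeros(len(h))
    all_idx = np.arange(len(h))
    for _ in range(sweeps):
        for idx in blocks:
            others = np.setdiff1d(all_idx, idx)
            # Local LMMSE step: invert only the small block of J,
            # conditioning on the current estimates of neighbouring nodes.
            rhs = h[idx] - J[np.ix_(idx, others)] @ x[others]
            x[idx] = np.linalg.solve(J[np.ix_(idx, idx)], rhs)
    return x

# Small synthetic SPD system partitioned into two blocks of nodes
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
J = A @ A.T + 10 * np.eye(6)
h = rng.normal(size=6)
x = block_gauss_seidel(J, h, blocks=[np.arange(0, 3), np.arange(3, 6)])
print(np.linalg.norm(J @ x - h))   # small residual: the sweeps converge to the global LMMSE estimate
```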

Patent
07 May 2004
TL;DR: In this patent, a method of processing sequential frames of data comprises repeating the following steps: acquiring at least a reference frame containing data points and a current frame of data points; identifying a set of anchor points in the reference frame; and assigning to each anchor point in the reference frame a respective motion vector that estimates the location of the anchor point in the current frame.
Abstract: A method of processing sequential frames of data comprises repeating the following steps for successive frames of data: acquiring at least a reference frame containing data points and a current frame of data points; identifying a set of anchor points in the reference frame; assigning to each anchor point in the reference frame a respective motion vector that estimates the location of the anchor point in the current frame; defining polygons formed of anchor points in the reference frame, each polygon containing data points in the reference frame, each polygon and each data point contained within the polygon having a predicted location in the current frame based on the motion vectors assigned to anchor points in the polygon; for one or more polygons in the reference frame, adjusting the number of anchor points in the reference frame based on accuracy of the predicted locations of data points in the current frame; and if the number of anchor points is increased by addition of new anchor points, then assigning motion vectors to the new anchor points that estimate the location of the anchor points in the current frame.
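A minimal sketch of the prediction step only: a data point inside a triangle of anchor points is carried into the current frame by barycentrically interpolating the three anchors' motion vectors. The patent covers general polygons and also adapts the anchor set based on prediction accuracy, neither of which is shown; the names are hypothetical.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2-D point p with respect to triangle (a, b, c)."""
    m = np.column_stack([b - a, c - a])
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

def predict_point(p, anchors, motion_vectors):
    """Predicted location of p in the current frame from its enclosing anchor triangle."""
    w = barycentric(p, *anchors)
    return p + w @ np.asarray(motion_vectors)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # reference-frame anchor points
mv = np.array([[1.0, 0.0], [1.0, 2.0], [0.0, 1.0]])          # their estimated motion vectors
print(predict_point(np.array([2.0, 3.0]), anchors, mv))      # predicted current-frame location
```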

Patent
18 Oct 2004
TL;DR: In this paper, a system and method for preventing the occurrence of aliasing at the edges of polygons in 3D graphics is presented, which includes an edge anti-aliasing module configured to selectively super-sample edge portions of primitives.
Abstract: A system and method is provided for preventing the occurrence of aliasing at the edges of polygons in 3D graphics. The system may detect both polygon geometric edges and Z edges due to intersection of multiple polygons. In one embodiment, the system includes an edge anti-aliasing module configured to selectively super-sample edge portions of primitives. The system further includes a coarse memory for storing information of pixels that are not super-sampled and a fine memory for storing information of pixels that are super-sampled by the edge anti-aliasing module.

Journal ArticleDOI
23 Aug 2004
TL;DR: A new area-based convexity measure for polygons is described, which has the desirable properties that it is not sensitive to small boundary defects, and it is symmetric with respect to intrusions and protrusions.
Abstract: A new area-based convexity measure for polygons is described. It has the desirable properties that it is not sensitive to small boundary defects, and it is symmetric with respect to intrusions and protrusions. The measure requires a maximally overlapping convex polygon, and this is efficiently estimated using a genetic algorithm. Examples of the measure's application to medical image analysis are shown.
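For comparison, the classical area-based convexity measure divides a polygon's area by the area of its convex hull; the paper's measure instead uses a maximally overlapping convex polygon found by a genetic algorithm, which is what makes it symmetric with respect to intrusions and protrusions. The sketch below implements only the classical hull-ratio baseline, and the use of shapely is an assumption about tooling.

```python
from shapely.geometry import Polygon

def hull_convexity(coords):
    """Classic convexity measure: area(P) / area(convex_hull(P)), in (0, 1]."""
    p = Polygon(coords)
    return p.area / p.convex_hull.area

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]   # square with one quadrant removed

print(hull_convexity(square))    # 1.0
print(hull_convexity(l_shape))   # ~0.857 (12/14)
```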

Proceedings ArticleDOI
08 Jun 2004
TL;DR: A notion of "fat" or "robust" visibility is introduced, and an output-sensitive algorithm is given, which is nearly optimal, when Q is a simple polygon.
Abstract: We study the problem of computing the visibility graph defined by a set P of n points inside a polygon Q: two points p, q ∈ P are joined by an edge if the segment pq ⊂ Q. Efficient output-sensitive algorithms are known for the case in which P is the set of all vertices of Q. We examine the general case in which P is an arbitrary set of points, interior or on the boundary of Q and study a variety of algorithmic questions. We give an output-sensitive algorithm, which is nearly optimal, when Q is a simple polygon. We introduce a notion of "fat" or "robust" visibility, and give a nearly optimal algorithm for computing visibility graphs according to it, in polygons Q that may have holes. Other results include an algorithm to detect if there are any visible pairs among P, and algorithms for output-sensitive computation of visibility graphs with distance restrictions, invisibility graphs, and rectangle visibility graphs.
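A minimal brute-force sketch of the visibility-graph definition for a simple polygon Q without holes: p and q see each other when the segment pq stays inside Q. This naive per-pair test is far from the paper's output-sensitive algorithm; the use of shapely and the names below are assumptions.

```python
from itertools import combinations
from shapely.geometry import Polygon, LineString

def visibility_graph(Q_coords, points):
    """Edges (i, j) such that the segment between points[i] and points[j] lies in Q."""
    Q = Polygon(Q_coords)
    edges = []
    for i, j in combinations(range(len(points)), 2):
        seg = LineString([points[i], points[j]])
        if Q.covers(seg):          # segment entirely inside Q or on its boundary
            edges.append((i, j))
    return edges

# An L-shaped simple polygon and three query points
Q = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
pts = [(1, 1), (3.5, 1), (1, 3.5)]
print(visibility_graph(Q, pts))    # [(0, 1), (0, 2)]: the two arm points cannot see each other around the corner
```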

Journal ArticleDOI
TL;DR: In this article, the authors study the motion of an infinitesimal particle under the gravitational field of (n+1) bodies in a ring configuration, consisting of n primaries of equal mass m placed at the vertices of a regular polygon plus another primary of mass m0 = βm located at the geometric center of the polygon.

Journal ArticleDOI
TL;DR: A simple but general model for ME in MBGIS is introduced, an approximate law of error propagation is formulated, and a simple, unified, and effective treatment of error bands for a line segment is developed under the name of “covariance-based error band”.
Abstract: This is the first of a four-part series of papers which proposes a general framework for error analysis in measurement-based geographical information systems (MBGIS). The purpose of the series is to investigate the fundamental issues involved in measurement error (ME) analysis in MBGIS, and to provide a unified and effective treatment of errors and their propagation in various interrelated GIS and spatial operations. Part 1 deals with the formulation of the basic ME model together with the law of error propagation. Part 2 investigates the classic point-in-polygon problem under ME. Part 3 continues with the analysis of ME in intersections and polygon overlays. In Part 4, error analyses in length and area measurements are made. In the present part, a simple but general model for ME in MBGIS is introduced. An approximate law of error propagation is then formulated. A simple, unified, and effective treatment of error bands for a line segment is made under the name of “covariance-based error band”. A new concept, called “maximal allowable limit”, which guarantees invariance of the topology or geometric properties of a polygon under ME, is also advanced. To handle errors in indirect measurements, a geodetic model for MBGIS is proposed and its error propagation problem is studied on the basis of the basic ME model as well as the approximate law of error propagation. Simulation experiments all substantiate the effectiveness of the proposed theoretical construct.
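A minimal numerical sketch of the first-order error-propagation idea used throughout the series: for a quantity g(x) computed from measured coordinates x with covariance Σ, the propagated variance is approximately J Σ Jᵀ, where J is the Jacobian of g. The example propagates coordinate error into the length of a line segment; it illustrates the general first-order law, not the paper's specific covariance-based error band, and the names are hypothetical.

```python
import numpy as np

def propagate_variance(grad, cov):
    """First-order error propagation: Var[g] ~= grad @ cov @ grad.T."""
    grad = np.atleast_2d(grad)
    return float(grad @ cov @ grad.T)

def segment_length_variance(p1, p2, cov):
    """Variance of the length of segment p1-p2; cov is the 4x4 covariance of (x1, y1, x2, y2)."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    L = np.linalg.norm(d)
    grad = np.concatenate([-d / L, d / L])     # dL/d(x1, y1, x2, y2)
    return propagate_variance(grad, cov)

cov = 0.25 * np.eye(4)    # 0.5 m standard deviation on every measured coordinate, uncorrelated
print(segment_length_variance((0.0, 0.0), (30.0, 40.0), cov))   # 0.5 -> length std. dev. ~0.7 m
```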

Patent
26 May 2004
TL;DR: In this patent, an evaluation point is determined for a first edge of a polygon based on a projection point and characteristics of the first edge; it is then determined how to correct at least a portion of the edge for proximity effects based on an analysis performed at the evaluation point.
Abstract: Techniques for fabricating a device include forming a fabrication layout, such as a mask layout, for a physical design layer, such as a design for an integrated circuit, and identifying evaluation points on an edge of a polygon corresponding to the design layer for correcting proximity effects. Techniques include selecting from among all edges of all polygons in a proposed layout a subset of edges for which proximity corrections are desirable. The subset of edges includes less than all the edges. Evaluation points are established only for the subset of edges. Corrections are determined for at least portions of the subset of edges based on an analysis performed at the evaluation points. Other techniques include establishing a projection point on a first edge corresponding to the design layout based on whether a vertex of a second edge is within a halo distance. An evaluation point is determined for the first edge based on the projection point and characteristics of the first edge. It is then determined how to correct at least a portion of the edge for proximity effects based on an analysis at the evaluation point.

Patent
22 Oct 2004
TL;DR: In this patent, a layout-vs.-layout comparison of each design cell with its corresponding master cell is used to determine whether the layouts of the design cells and the corresponding master cells match one another.
Abstract: Systems and methods for verifying integrated circuit designs: (a) receive input corresponding to physical layouts of cells of the design and available master cells. The systems and methods then determine if the design cells are intended to correspond to one of the master cells, and if so, the systems and methods then determine if the layouts of the cells and the corresponding master cells match one another, e.g., by a layout vs. layout comparison of the design cell with the master cell to determine if the coordinates of the polygon(s) in the design cell match corresponding coordinates of the polygon(s) in the master cell. An “XOR” comparison may be used to determine if the design cell features match the corresponding master cell features. Computer-readable media may be adapted to include computer-executable instructions for performing such methods and operating such systems.

Patent
Haomin Jin, Kazuaki Iwamura, Nobuhiro Ishimaru, Takashi Hino, Yoshiaki Kagawa
20 Jan 2004
TL;DR: In this patent, a map generation device is proposed that extracts the polygon shape of a building having a complex upper-portion structure from a wide-area image; the device includes an image designation unit that receives the designation of at least one position in a building appearing within an aerial photograph.
Abstract: A map generation device is provided that extracts the polygon shape of a building having a complex upper-portion structure from a wide-area image. The map generation device includes an image designation unit that receives the designation of at least one position in a building existing within an aerial photograph, a polygon extraction unit that extracts a building region based on a result of discriminating a color around the designated position and extracts a polygon line of the building region, and a vector generation unit that generates a vector of the polygon line of the building region.

Journal ArticleDOI
TL;DR: In this paper, a sharp lower bound for Newton polygons of L-functions of exponential sums of one-variable rational functions was proved for any rational function over the algebraic closure of the finite field of p elements.
Abstract: This paper proves a sharp lower bound for Newton polygons of L-functions of exponential sums of one-variable rational functions. Let p be a prime and let F̄_p be the algebraic closure of the finite field of p elements. Let f̄(x) be any one-variable rational function over F̄_p with l poles of orders d_1, …, d_l. Suppose p is coprime to d_1, …, d_l. We prove that there exists a tight lower bound, which we call the Hodge polygon, depending only on the d_j's, to the Newton polygon of the L-function of exponential sums of f̄(x). Moreover, we show that for any f̄(x) these two polygons coincide if and only if p ≡ 1 mod d_j for every 1 ≤ j ≤ l. As a corollary, we obtain a tight lower bound for the p-adic Newton polygon of the zeta-function of an Artin-Schreier curve given by the affine equation y^p − y = f̄(x).
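As a small computational aside, not part of the paper's number-theoretic argument: the Newton polygon of a polynomial or L-function is by definition the lower convex hull of the points (i, v_p(a_i)). The sketch below computes that lower hull from a list of such points; the input valuations are made up.

```python
def newton_polygon(points):
    """Lower convex hull of points (x, y) with distinct x-values, returned as its vertex list."""
    pts = sorted(points)
    hull = []
    for p in pts:
        # Pop the last vertex while it lies on or above the segment hull[-2] -> p.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Example: valuations of coefficients a_0..a_4; the hull's edge slopes are the Newton-polygon slopes.
print(newton_polygon([(0, 0), (1, 2), (2, 1), (3, 3), (4, 2)]))
# [(0, 0), (4, 2)]: a single segment of slope 1/2 (the point (2, 1) is collinear with it)
```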

Journal ArticleDOI
TL;DR: Subdivision solves the problem of modeling with polygons by representing a smooth shape in terms of a coarse polygonal model, and the subdivision rules used during this refinement process depend only on the initial model's topological connectivity and yield surfaces with guaranteed smoothness.
Abstract: Polygons are a ubiquitous modeling primitive in computer graphics. However, modeling with polygons is problematic for highly faceted approximations to smooth surfaces. The sheer size of these approximations makes them impossible to manipulate directly. Subdivision solves this problem by representing a smooth shape in terms of a coarse polygonal model. The subdivision rules used during this refinement process depend only on the initial model's topological connectivity and yield surfaces with guaranteed smoothness. Subdivision schemes are either interpolating or approximating. The averaging methods we've described are approximating in that the surfaces don't interpolate the original surface's vertices. Interpolating methods interpolate the vertices of the original surface, giving the user a more intuitive feel of the final surface shape.
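To make the refine-by-averaging idea concrete in one dimension, here is a minimal sketch of a well-known approximating curve scheme, Chaikin's corner cutting, applied to a closed control polygon: each edge contributes new points at its 1/4 and 3/4 marks, and repeating the step converges to a smooth quadratic B-spline curve. Surface schemes such as Catmull-Clark generalize the same averaging idea to meshes; this is only the curve analogue, and the names are hypothetical.

```python
import numpy as np

def chaikin_step(points):
    """One round of Chaikin corner cutting on a closed 2-D control polygon."""
    points = np.asarray(points, dtype=float)
    nxt = np.roll(points, -1, axis=0)
    q = 0.75 * points + 0.25 * nxt     # point 1/4 of the way along each edge
    r = 0.25 * points + 0.75 * nxt     # point 3/4 of the way along each edge
    return np.reshape(np.stack([q, r], axis=1), (-1, 2))

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
refined = square
for _ in range(3):
    refined = chaikin_step(refined)
print(len(refined), "vertices after 3 subdivision steps")   # 32: the coarse square rounds toward a smooth closed curve
```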

Patent
14 Jun 2004
TL;DR: In this paper, the number of people in a crowd is determined using visual hull information, where the hull information is used to determine the intersection of the silhouette image cone and a working volume, and the projection of the intersection onto a plane is determined.
Abstract: Systems, apparatuses, and methods are presented that determine the number of people in a crowd using visual hull information. In one embodiment, an image sensor generates a conventional image of a crowd. A silhouette image is then determined based on the conventional image. The intersection of the silhouette image cone and a working volume is determined. The projection of the intersection onto a plane is determined. Planar projections from several image sensors are aggregated by intersecting them, forming a subdivision pattern. Polygons that are actually empty are identified and removed. Upper and lower bounds of the number of people in each polygon are determined and stored in a tree data structure. This tree is updated as time passes and new information is received from image sensors. The number of people in the crowd is equal to the lower bound of the root node of the tree.

Journal ArticleDOI
01 Sep 2004
TL;DR: This work presents the first approach to visualizing higher order critical points of 3D vector fields based on a complete segmentation of the areas around critical points into sectors of different flow behavior.
Abstract: We present the first algorithm for constructing 3D vector fields based on their topological skeleton. The skeleton itself is modeled by interactively moving a number of control polygons. Then a piecewise linear vector field is automatically constructed which has the same topological skeleton as modeled before. This approach is based on a complete segmentation of the areas around critical points into sectors of different flow behavior. Based on this, we present the first approach to visualizing higher order critical points of 3D vector fields.

Proceedings ArticleDOI
08 Aug 2004
TL;DR: Squashing Cubes automates the construction of physically based deformable objects from arbitrary geometric models, allowing animators to focus more on animation and less on physically based modeling details, such as the engineering of bridges using properly connected beam, truss, and shell elements.
Abstract: The vast majority of geometric meshes used in computer graphics are optimized for rendering, and not deformable object simulation. Despite tools for volume (or surface) (re)meshing of geometric models to support physical simulation, in practice, the construction of physically based deformable models from arbitrary graphical models remains a tedious process for animators. Squashing Cubes (SC) automates the construction of physically based deformable objects from arbitrary geometric models. During preprocess, the geometric model (typically a surface mesh) is voxelized into tiny elastic cubes, i.e., the squashing cubes model. Second, a generic deformable object simulator is used to deform the SC model. Finally, the resulting deformations are interpolated back onto the original model, thus producing the final animation. Such domain embedding schemes are familiar to graphics [Pentland and Williams 1989; Faloutsos et al. 1997]. SC is simple to implement, practical for complex models, supports any geometric representation, and the SC deformable models are trivial to simulate. Although SC geometry is approximate, reasonable approximations of deformation displacement fields can be obtained for many animation purposes. One practical benefit of voxelization is that geometric features and defects smaller than the voxel scale are merged; consequently, intersecting and topologically disconnected polygons (so common in graphical models) are deformed appropriately. SC is especially appealing for complex superstructures, such as the bridge, for which the geometric mesh is not “structurally sound.” For example, the bridge model contains numerous modeling shortcuts, such as improperly attached cables, intersecting geometry, small “gaps,” badly shaped (skinny) triangles, and isolated polygons. In short, Squashing Cubes allows animators to focus on animation, and less on physically based modeling details, such as the engineering of bridges using properly connected beam, truss, and shell elements.