
Showing papers on "Computational geometry published in 2005"


Proceedings ArticleDOI
17 Oct 2005
TL;DR: This work shows that it can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes, and provides a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label.
Abstract: Many computer vision algorithms limit their performance by ignoring the underlying 3D geometric structure in the image. We show that we can estimate the coarse geometric properties of a scene by learning appearance-based models of geometric classes, even in cluttered natural scenes. Geometric classes describe the 3D orientation of an image region with respect to the camera. We provide a multiple-hypothesis framework for robustly estimating scene structure from a single image and obtaining confidences for each geometric label. These confidences can then be used to improve the performance of many other applications. We provide a thorough quantitative evaluation of our algorithm on a set of outdoor images and demonstrate its usefulness in two applications: object detection and automatic single-view reconstruction.

792 citations


Journal ArticleDOI
TL;DR: In this paper, a collection of distributed control laws related to nonsmooth gradient systems for disk-covering and sphere-packing problems is presented; the resulting dynamical systems promise to be of use in coordination problems for networked robots, where the distributed control laws correspond to local interactions between the robots.
Abstract: This paper discusses dynamical systems for disk-covering and sphere-packing problems. We present facility location functions from geometric optimization and characterize their differentiable properties. We design and analyze a collection of distributed control laws that are related to nonsmooth gradient systems. The resulting dynamical systems promise to be of use in coordination problems for networked robots; in this setting the distributed control laws correspond to local interactions between the robots. The technical approach relies on concepts from computational geometry, nonsmooth analysis, and the dynamical system approach to algorithms.

452 citations


01 Jan 2005
TL;DR: In this article, the authors present theoretical and practical discussions of nearest-neighbor (NN) methods in machine learning and examine computer vision as an application domain in which the benefit of these advanced methods is often dramatic.
Abstract: Regression and classification methods based on similarity of the input to stored examples have not been widely used in applications involving very large sets of high-dimensional data. Recent advances in computational geometry and machine learning, however, may alleviate the problems in using these methods on large data sets. This volume presents theoretical and practical discussions of nearest-neighbor (NN) methods in machine learning and examines computer vision as an application domain in which the benefit of these advanced methods is often dramatic. It brings together contributions from researchers in theory of computation, machine learning, and computer vision with the goals of bridging the gaps between disciplines and presenting state-of-the-art methods for emerging applications. The contributors focus on the importance of designing algorithms for NN search, and for the related classification, regression, and retrieval tasks, that remain efficient even as the number of points or the dimensionality of the data grows very large. The book begins with two theoretical chapters on computational geometry and then explores ways to make the NN approach practicable in machine learning applications where the dimensionality of the data and the size of the data sets make the naive methods for NN search prohibitively expensive. The final chapters describe successful applications of an NN algorithm, locality-sensitive hashing (LSH), to vision tasks.
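For illustration, one member of the LSH family discussed in the book's final chapters is random-hyperplane hashing for cosine similarity. The sketch below is a minimal index of that flavor; the class and parameter names are ours, not the book's, and a production index would tune n_bits and n_tables to the data:

```python
import numpy as np
from collections import defaultdict

class HyperplaneLSH:
    """Minimal random-hyperplane LSH index (cosine similarity)."""
    def __init__(self, dim, n_bits=16, n_tables=8, seed=0):
        rng = np.random.default_rng(seed)
        # one set of random hyperplanes per hash table
        self.planes = rng.standard_normal((n_tables, n_bits, dim))
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.data = []

    def _keys(self, v):
        # sign pattern of v against each table's hyperplanes = bucket key
        return [tuple((P @ v > 0).astype(int)) for P in self.planes]

    def add(self, v):
        self.data.append(v)
        for table, key in zip(self.tables, self._keys(v)):
            table[key].append(len(self.data) - 1)

    def query(self, v):
        # candidates: union of colliding buckets, then exact re-ranking
        cand = {i for table, key in zip(self.tables, self._keys(v))
                for i in table[key]}
        return min(cand, key=lambda i: np.linalg.norm(self.data[i] - v),
                   default=None)
```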

338 citations


Proceedings ArticleDOI
23 Oct 2005
TL;DR: A method for generating flow maps using hierarchical clustering given a set of nodes, positions, and flow data between the nodes, inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another is presented.
Abstract: Cartographers have long used flow maps to show the movement of objects from one location to another, such as the number of people in a migration, the amount of goods being traded, or the number of packets in a network. The advantage of flow maps is that they reduce visual clutter by merging edges. Most flow maps are drawn by hand, and there are few computer algorithms available. We present a method for generating flow maps using hierarchical clustering, given a set of nodes, positions, and flow data between the nodes. Our techniques are inspired by graph layout algorithms that minimize edge crossings and distort node positions while maintaining their relative position to one another. We demonstrate our technique by producing flow maps for network traffic, census data, and trade data.
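As a rough sketch of the clustering step, assuming SciPy's agglomerative linkage as a stand-in for the paper's hierarchical clustering: a flow between two nodes can be routed through the centroids of the nodes' ancestor clusters, so that flows with nearby endpoints share a common trunk. The routing details below are illustrative, not the authors' algorithm:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def ancestor_centroids(positions):
    positions = np.asarray(positions, float)
    n = len(positions)
    Z = linkage(positions, method="ward")     # agglomerative merge tree
    members = {i: [i] for i in range(n)}      # leaves under each cluster
    centroid = {i: positions[i] for i in range(n)}
    parent = {}
    for k, (a, b, _, _) in enumerate(Z):
        cid = n + k
        members[cid] = members[int(a)] + members[int(b)]
        centroid[cid] = positions[members[cid]].mean(axis=0)
        parent[int(a)] = parent[int(b)] = cid
    return parent, centroid

def route(src, dst, parent, centroid):
    """Waypoints: src's ancestor centroids up, then dst's back down."""
    def chain(i):
        out = [centroid[i]]
        while i in parent:
            i = parent[i]
            out.append(centroid[i])
        return out
    up, down = chain(src), chain(dst)
    # trim shared ancestors so the path bends at the lowest common cluster
    # (centroids compared by value; fine for a sketch with distinct nodes)
    while len(up) > 1 and len(down) > 1 and np.allclose(up[-2], down[-2]):
        up.pop(); down.pop()
    return up + down[::-1][1:]
```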

310 citations


Proceedings ArticleDOI
20 Jun 2005
TL;DR: A RANSAC-based algorithm for robust estimation of epipolar geometry from point correspondences in the possible presence of a dominant scene plane is presented, exploiting a theorem that if five or more of seven correspondences are related by a homography then there is an epipolar geometry consistent with the seven-tuple as well as with all correspondences related by the homography.
Abstract: A RANSAC-based algorithm for robust estimation of epipolar geometry from point correspondences in the possible presence of a dominant scene plane is presented. The algorithm handles scenes with (i) all points in a single plane, (ii) majority of points in a single plane and the rest off the plane, (iii) no dominant plane. It is not required to know a priori which of the cases (i)-(iii) occurs. The algorithm exploits a theorem we proved, that if five or more of seven correspondences are related by a homography then there is an epipolar geometry consistent with the seven-tuple as well as with all correspondences related by the homography. This means that a seven point sample consisting of two outliers and five inliers lying in a dominant plane produces an epipolar geometry which is wrong and yet consistent with a high number of correspondences. The theorem explains why RANSAC often fails to estimate epipolar geometry in the presence of a dominant plane. Rather surprisingly, the theorem also implies that RANSAC-based homography estimation is faster when drawing nonminimal samples of seven correspondences than minimal samples of four correspondences.
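For reference, the consensus loop the theorem analyzes is the standard RANSAC skeleton below (a generic sketch: here `fit` would be a seven-point fundamental-matrix solver and `error` a point-to-epipolar-line distance, both assumed rather than implemented). The theorem's point is that the inlier count this loop maximizes can be won by a wrong epipolar geometry when most correspondences lie on one plane:

```python
import numpy as np

def ransac(data, fit, error, n_sample, threshold, n_iters, rng=None):
    """Generic RANSAC: data is an (N, d) array of correspondences;
    fit() estimates a model from a minimal sample (or returns None),
    error() scores the model on all data."""
    rng = rng or np.random.default_rng(0)
    best_model, best_inliers = None, np.zeros(len(data), bool)
    for _ in range(n_iters):
        idx = rng.choice(len(data), n_sample, replace=False)
        model = fit(data[idx])
        if model is None:                 # degenerate sample
            continue
        inliers = error(model, data) < threshold
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    return best_model, best_inliers
```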

220 citations


Proceedings ArticleDOI
17 Oct 2005
TL;DR: This work proposes to extend the standard epipolar geometry to the geometry of dynamic scenes where the cameras are moving and demonstrates the versatility of the proposed geometric approach for recognition of actions in a number of challenging sequences.
Abstract: Most work in action recognition deals with sequences acquired by stationary cameras with fixed viewpoints. When the camera moves, however, the trajectories of the body parts contain not only the motion of the performing actor but also the motion of the camera. In addition to the camera motion, different viewpoints of the same action in different environments result in different trajectories, which cannot be matched using standard approaches. In order to handle these problems, we propose to use the multi-view geometry between two actions. However, the well-known epipolar geometry of static scenes, where the cameras are stationary, is not suitable for our task. Thus, we propose to extend the standard epipolar geometry to the geometry of dynamic scenes where the cameras are moving. We demonstrate the versatility of the proposed geometric approach for recognition of actions in a number of challenging sequences.

191 citations


Proceedings ArticleDOI
17 Oct 2005
TL;DR: This paper presents a novel quasiconvex optimization framework in which the geometric reconstruction problems are formulated as a small number of small-scale convex programs that are readily solvable and provides an intuitive method to handle directional uncertainties and outliers in measurements.
Abstract: Geometric reconstruction problems in computer vision are often solved by minimizing a cost function that combines the reprojection errors in the 2D images. In this paper, we show that, for various geometric reconstruction problems, their reprojection error functions share a common and quasiconvex formulation. Based on the quasiconvexity, we present a novel quasiconvex optimization framework in which the geometric reconstruction problems are formulated as a small number of small-scale convex programs that are readily solvable. Our final reconstruction algorithm is simple and has an intuitive geometric interpretation. In contrast to existing random sampling or local minimization approaches, our algorithm is deterministic and guarantees a predefined accuracy of the minimization result. We demonstrate the effectiveness of our algorithm by experiments on both synthetic and real data.
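The standard way to exploit such quasiconvexity is bisection on the error level γ: each sublevel set is convex, so "is there a reconstruction whose maximum reprojection error is at most γ?" is a convex feasibility problem. A minimal sketch of the outer loop, with the convex feasibility oracle (e.g., a small SOCP) left abstract:

```python
def minimize_quasiconvex(feasible, lo, hi, tol=1e-6):
    """Bisection for min gamma s.t. feasible(gamma) holds.
    feasible(gamma) is a convex feasibility test, e.g. an SOCP asking
    whether some 3D point reprojects within gamma pixels in every view."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid          # a reconstruction at level mid exists
        else:
            lo = mid          # level mid is infeasible; raise the bound
    return hi                 # accuracy tol is guaranteed by construction
```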

179 citations


Journal ArticleDOI
01 Jun 2005-Top
TL;DR: A basic territory design model is introduced and two approaches for solving this model are presented: a classical location-allocation approach combined with optimal split resolution techniques and a newly developed computational geometry based method.
Abstract: Territory design may be viewed as the problem of grouping small geographic areas into larger geographic clusters called territories in such a way that the latter are acceptable according to relevant planning criteria. In this paper we review the existing literature for applications of territory design problems and solution approaches for solving these types of problems. After identifying features common to all applications we introduce a basic territory design model and present in detail two approaches for solving this model: a classical location-allocation approach combined with optimal split resolution techniques and a newly developed computational geometry based method. We present computational results indicating the efficiency and suitability of the latter method for solving large-scale practical problems in an interactive environment. Furthermore, we discuss extensions to the basic model and its integration into Geographic Information Systems.

163 citations


Proceedings ArticleDOI
17 Oct 2005
TL;DR: A solution for optimal triangulation in three views is presented and it is shown experimentally that the number of stationary points that are local minima and lie in front of each camera is small but does depend on the scene geometry.
Abstract: We present a solution for optimal triangulation in three views. The solution is guaranteed to find the optimal solution because it computes all the stationary points of the (maximum likelihood) objective function. Internally, the solution is found by computing roots of multivariate polynomial equations, directly solving the conditions for stationarity. The solver makes use of standard methods from computational commutative algebra to convert the root-finding problem into a 47 × 47 nonsymmetric eigenproblem. Although there are in general 47 roots, counting both real and complex ones, the number of real roots is usually much smaller. We also show experimentally that the number of stationary points that are local minima and lie in front of each camera is small but does depend on the scene geometry.

143 citations


DissertationDOI
01 Jan 2005
TL;DR: In this thesis, model predictive control (MPC) of discrete-time hybrid systems is studied: an internal model predicts the future evolution of the controlled variables over a prediction horizon, and a cost function is minimized to obtain the optimal control input sequence, which is applied to the plant by means of a receding horizon policy.
Abstract: This thesis focuses on Model Predictive Control (MPC) of discrete-time hybrid systems. Hybrid systems contain continuous and discrete valued components, and are located at the intersection between the fields of control theory and computer science. MPC uses an internal model of the controlled plant to predict the future evolution of the controlled variables over a prediction horizon. A cost function is minimized to obtain the optimal control input sequence, which is applied to the plant by means of a receding horizon policy. The latter implies that only the first control input of the input sequence is implemented, the horizon is shifted by one time-step, and the above procedure is repeated at the next sampling instant. Most importantly, theory and tools are available to derive the piecewise affine (PWA) state-feedback control law off-line. Hence, any time-consuming on-line computation of the control input is avoided and plants with high sampling frequencies can be controlled. The thesis is divided into two parts: the first part is devoted to theory and algorithms, whereas the second part tackles applications in the fields of power electronics and power systems. In the first part, using the notion of cell enumeration in hyperplane arrangements from computational geometry, we propose an algorithm that efficiently enumerates all feasible modes of a composition of hybrid systems. This technique allows the designer to evaluate the complexity of the compound model, to efficiently translate the model into a PWA representation, and to reduce the computational burden of optimal control schemes by adding cuts that prune infeasible modes from the model. With respect to implementation, an important issue is the complexity reduction of PWA state-feedback controllers. Hence, we propose two algorithms that solve the problem of deriving a PWA representation that is both equivalent to the given one and minimal in the number of regions. As both algorithms refrain from solving additional Linear Programs, they are not only optimal but also computationally feasible. In many cases, the optimal complexity reduction constitutes an enabling technique when implementing the optimal controllers as look-up tables in hardware. In the second part of the thesis, we consider the field of power electronics, which is intrinsically hybrid, since the positions of semiconductor switches are described by binary variables. The fact that the methodologies of MPC and hybrid systems are basically unknown in the power electronics community has motivated us to consider such problems.
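To make the receding-horizon mechanics concrete, here is a minimal MPC step for a plain linear model, using cvxpy as a generic convex solver; the thesis treats hybrid/PWA models and explicit off-line solutions, which this sketch does not attempt. Horizon length, weights, and bounds are illustrative:

```python
import numpy as np
import cvxpy as cp

def mpc_step(A, B, x0, N=10, umax=1.0):
    """One receding-horizon step for x+ = A x + B u: minimize a quadratic
    cost over the horizon, but apply only the first input."""
    n, m = B.shape
    x = cp.Variable((n, N + 1))
    u = cp.Variable((m, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.norm(u[:, k], "inf") <= umax]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u[:, 0].value   # receding horizon: first input only

# At the next sampling instant the plant state is measured again and
# mpc_step is re-solved, shifting the horizon by one step.
```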

138 citations


Journal ArticleDOI
TL;DR: This work presents near-linear time approximation algorithms that, given a parameter ε > 0, compute a simplified polygonal curve P' whose error is less than ε and whose size is at most the size of an optimal simplified polygonal curve with error ε/2.

Abstract: We consider the problem of approximating a polygonal curve P under a given error criterion by another polygonal curve P' whose vertices are a subset of the vertices of P. The goal is to minimize the number of vertices of P' while ensuring that the error between P' and P is below a certain threshold. We consider two different error measures: Hausdorff and Fréchet. For both error criteria, we present near-linear time approximation algorithms that, given a parameter ε > 0, compute a simplified polygonal curve P' whose error is less than ε and whose size is at most the size of an optimal simplified polygonal curve with error ε/2. We consider monotone curves in R^2 in the case of the Hausdorff error measure under the uniform distance metric, and arbitrary curves in any dimension for the Fréchet error measure under L_p metrics. We present experimental results demonstrating that our algorithms are simple and fast, and produce close to optimal simplifications in practice.
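For orientation, the classical Douglas-Peucker heuristic below attacks the same "few vertices under an error tolerance" problem greedily, without the paper's approximation guarantees (a standard 2D sketch, not the authors' algorithm):

```python
import numpy as np

def douglas_peucker(pts, eps):
    """Simplify a 2D polyline: keep endpoints, recurse on the interior
    point farthest from the chord if it exceeds the tolerance eps."""
    pts = np.asarray(pts, float)
    if len(pts) < 3:
        return pts
    a, b = pts[0], pts[-1]
    ab = b - a
    # perpendicular distance of interior points to the chord a-b
    d = np.abs(ab[0] * (pts[1:-1, 1] - a[1]) - ab[1] * (pts[1:-1, 0] - a[0]))
    d = d / (np.hypot(ab[0], ab[1]) + 1e-12)
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= eps:
        return np.array([a, b])            # chord approximates this stretch
    left = douglas_peucker(pts[:i + 1], eps)
    right = douglas_peucker(pts[i:], eps)
    return np.vstack([left[:-1], right])   # drop the duplicated split point
```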

Book ChapterDOI
05 Sep 2005
TL;DR: This paper discusses and evaluates the use of sparse direct solvers for such kinds of systems in geometry processing applications, and finds that in experiments they turned out to be superior even to highly optimized multigrid methods, and at the same time were considerably easier to use and implement.

Abstract: The use of polygonal mesh representations for freeform geometry enables the formulation of many important geometry processing tasks as the solution of one or several linear systems. As a consequence, the key ingredient for efficient algorithms is a fast procedure to solve linear systems. A large class of standard problems can further be shown to lead more specifically to sparse, symmetric, and positive definite systems, which allow for a numerically robust and efficient solution. In this paper we discuss and evaluate the use of sparse direct solvers for such kinds of systems in geometry processing applications, since in our experiments they turned out to be superior even to highly optimized multigrid methods, and at the same time were considerably easier to use and implement. Although the methods we present are well known in the field of high performance computing, we observed that they are in practice surprisingly rarely applied to geometry processing problems.
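The usage pattern the paper advocates, sketched with SciPy: factor the sparse system once, then reuse the factorization for the many right-hand sides an interactive application produces. The paper benchmarks dedicated sparse Cholesky codes; `factorized` below is a generic sparse-LU stand-in, and the 1-D Laplacian is a toy system:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import factorized

n = 1000
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
# toy sparse SPD system: a 1-D Laplacian with Dirichlet-like diagonal
L = sp.diags([off, main, off], [-1, 0, 1], format="csc")

solve = factorized(L)            # expensive factorization, done once
for rhs in np.eye(n)[:3]:        # e.g., several interactive edits
    x = solve(rhs)               # each solve is cheap back-substitution
```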

Proceedings ArticleDOI
08 Jun 2005
TL;DR: It is demonstrated that a “blind” swarm of robots with no localization and only a weak form of distance estimation can rigorously determine coverage in a bounded planar domain of unknown size and shape.
Abstract: We consider coverage problems in robot sensor networks with minimal sensing capabilities. In particular, we demonstrate that a "blind" swarm of robots with no localization and only a weak form of distance estimation can rigorously determine coverage in a bounded planar domain of unknown size and shape. The methods we introduce come from algebraic topology.

I. COVERAGE PROBLEMS

Many of the potential applications of robot swarms require information about coverage in a given domain. For example, using a swarm of robot sensors for surveillance and security applications carries with it the charge to maximize, or, preferably, guarantee coverage. Such applications include networks of security cameras, mine field sweeping via networked robots [18], and oceanographic sampling [4]. In these contexts, each robot has some coverage domain, and one wishes to know about the union of these coverage domains. Such problems are also crucial in applications not involving robots directly, e.g., communication networks.

As a preliminary analysis, we consider the static "field" coverage problem, in which robots are assumed stationary and the goal is to verify blanket coverage of a given domain. There is a large literature on this subject; see, e.g., [7], [1], [16]. In addition, there are variants on these problems involving "barrier" coverage to separate regions. Dynamic or "sweeping" coverage [3] is a common and challenging task with applications ranging from security to vacuuming. Although a sensor network composed of robots will have dynamic capabilities, we restrict attention in this brief paper to the static case in order to lay the groundwork for future inquiry.

There are two primary approaches to static coverage problems in the literature. The first uses computational geometry tools applied to exact node coordinates. This typically involves 'ruler-and-compass' style geometry [10] or Delaunay triangulations of the domain [16], [14], [20]. Such approaches are very rigid with regard to inputs: one must know exact node coordinates and one must know the geometry of the domain precisely to determine the Delaunay complex. To alleviate the former requirement, many authors have turned to probabilistic tools. For example, in [13], the author assumes a randomly and uniformly distributed collection of nodes in a domain with a fixed geometry and proves expected area coverage. Other approaches [15], [19] give percolation-type results about coverage and network integrity for randomly distributed nodes. The drawback of these methods is the need for strong assumptions about the exact shape of the domain, as well as the need for a uniform distribution of nodes.

In the sensor networks community, there is a compelling interest (and corresponding burgeoning literature) in determining properties of a network in which the nodes do not possess coordinate data. One example of a coordinate-free approach is in [17], which gives a heuristic method for geographic routing without coordinate data; among the large literature arising from this paper, we note in particular the mathematical analysis of this approach in [11]. To our knowledge, no one has treated the coverage problem in a coordinate-free setting.

In this note, we introduce a new set of tools for answering coverage problems in robotics and sensor networks with minimal assumptions about domain geometry and node localization. We provide a sufficiency criterion for coverage.
We do not answer the problem of how the nodes should be placed in order to maximize coverage, nor the minimum number of such nodes necessary; neither do we address how to reallocate nodes to fill coverage holes.

Proceedings ArticleDOI
20 Jun 2005
TL;DR: An algorithm to infer a similarity matrix by decomposing the n-dimensional probability tensor is presented and its applicability is illustrated on two significant problems, namely perceptually salient geometric grouping and parametric motion segmentation.
Abstract: While spectral clustering has been applied successfully to problems in computer vision, its applicability is limited to pairwise similarity measures that form a probability matrix. However, many geometric problems with parametric forms require more than two observations to estimate a similarity measure, e.g. epipolar geometry. In such cases we can only define the probability of belonging to the same cluster for an n-tuple of points, not just a pair, leading to an n-dimensional probability tensor. However, spectral clustering methods are not available for tensors. In this paper we present an algorithm to infer a similarity matrix by decomposing the n-dimensional probability tensor. Our method exploits the super-symmetry of the probability tensor to provide a randomised scheme that does not require the explicit computation of the probability tensor. Our approach is fast and accurate, and its applicability is illustrated on two significant problems, namely perceptually salient geometric grouping and parametric motion segmentation (e.g., affine or epipolar).

Journal ArticleDOI
TL;DR: This survey describes recent progress in EGC research in three key areas: constructive zero bounds, approximate expression evaluation and numerical filters.

Proceedings ArticleDOI
13 Jun 2005
TL;DR: The problem of automatic data-driven scale selection to improve point cloud classification is investigated and the approach is validated with results using data from different sensors in various environments classified into different terrain types.
Abstract: Three-dimensional ladar data are commonly used to perform scene understanding for outdoor mobile robots, specifically in natural terrain. One effective method is to classify points using features based on local point cloud distribution into surfaces, linear structures or clutter volumes. But the local features are computed using 3D points within a support-volume. Local and global point density variations and the presence of multiple manifolds make the problem of selecting the size of this support volume, or scale, challenging. In this paper, we adopt an approach inspired by recent developments in computational geometry (Mitra et al., 2005) and investigate the problem of automatic data-driven scale selection to improve point cloud classification. The approach is validated with results using data from different sensors in various environments classified into different terrain types (vegetation, solid surface and linear structure).
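A sketch of the local features whose quality hinges on the support radius, i.e., the scale the paper selects automatically. The radius is a plain input here, and the feature names are the usual eigenvalue-based saliencies from this literature, not the paper's code:

```python
import numpy as np
from scipy.spatial import cKDTree

def pointness_features(points, query, radius):
    """Eigenvalue features of the local 3D scatter matrix around `query`,
    computed from neighbors within the given support radius (the scale)."""
    tree = cKDTree(points)
    nbrs = points[tree.query_ball_point(query, r=radius)]
    if len(nbrs) < 4:
        return None                         # too few points at this scale
    evals = np.linalg.eigvalsh(np.cov(nbrs.T))   # ascending: l0 <= l1 <= l2
    l0, l1, l2 = evals
    l2 = max(l2, 1e-12)
    return {"scatter":   l0 / l2,           # high for clutter/vegetation
            "linearity": (l2 - l1) / l2,    # high for wires, branches
            "planarity": (l1 - l0) / l2}    # high for ground, walls
```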

Journal ArticleDOI
TL;DR: This paper presents a representation of model density and porosity based on stochastic geometry and uses this representation to develop an approach to modeling of porous, heterogeneous materials and provides experimental data to validate the approach.
Abstract: Heterogeneous structures represent an important new frontier for 21st century engineering. Human tissues, composites, ‘smart’ and multi-material objects are all physically manifest in the world as three-dimensional (3D) objects with varying surface, internal and volumetric properties and geometries. For instance, a tissue engineered structure, such as bone scaffold for guided tissue regeneration, can be described as a heterogeneous structure consisting of 3D extra-cellular matrices (made from biodegradable material) and seeded donor cells and/or growth factors. The design and fabrication of such heterogeneous structures requires new techniques for solid models to represent 3D heterogeneous objects with complex material properties. This paper presents a representation of model density and porosity based on stochastic geometry. While density has been previously studied in the solid modeling literature, porosity is a relatively new problem. Modeling porosity of bio-materials is critical for developing replacement bone tissues. The paper uses this representation to develop an approach to modeling of porous, heterogeneous materials and provides experimental data to validate the approach. The authors believe that their approach introduces ideas from the stochastic geometry literature to a new set of engineering problems. It is hoped that this paper stimulates researchers to find new opportunities that extend these ideas to be more broadly applicable for other computational geometry, graphics and computer-aided design problems.
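To illustrate the stochastic-geometry view of porosity: in the classical Boolean model with Poisson-distributed spherical grains, porosity has the closed form exp(−λv), which a Monte Carlo probe can verify. A toy sketch under that model assumption (not the paper's representation; parameters are arbitrary):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
lam, r = 50.0, 0.05                            # grain intensity, grain radius
centers = rng.random((rng.poisson(lam), 3))    # Poisson grain centers, unit cube
samples = rng.random((20_000, 3))              # Monte Carlo probe points

# a probe point lies in pore space iff it is outside every spherical grain
dist, _ = cKDTree(centers).query(samples)
print("measured porosity: ", (dist > r).mean())
print("Boolean-model value:", np.exp(-lam * 4.0 / 3.0 * np.pi * r**3))  # ignores edge effects
```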

Journal ArticleDOI
TL;DR: In this article, the authors give the first algorithmic study of a class of covering tour problems related to the geometric traveling salesman problem, and prove the NP-completeness of minimum-turn milling and give efficient approximation algorithms for several natural versions of the problem.
Abstract: We give the first algorithmic study of a class of "covering tour" problems related to the geometric traveling salesman problem: Find a polygonal tour for a cutter so that it sweeps out a specified region ("pocket") in order to minimize a cost that depends mainly on the number of turns. These problems arise naturally in manufacturing applications of computational geometry to automatic tool path generation and automatic inspection systems, as well as arc routing ("postman") problems with turn penalties. We prove the NP-completeness of minimum-turn milling and give efficient approximation algorithms for several natural versions of the problem, including a polynomial-time approximation scheme based on a novel adaptation of the m-guillotine method.

Book
01 Jan 2005
TL;DR: This tutorial introduces several data structures, discusses their complexity, points out construction schemes and the corresponding performance, and presents standard applications in two and three dimensions.
Abstract: Data structures and tools from computational geometry help to solve problems in computer graphics; these methods have been widely adopted by the computer graphics community yielding elegant and efficient algorithms. This book focuses on algorithms and data structures that have proven to be versatile, efficient, fundamental, and easy to implement. The book familiarizes students, as well as practitioners in the field of computer graphics, with a wide range of data structures. The authors describe each data structure in detail, highlight fundamental properties, and present algorithms based on the data structure. A number of recent representative and useful algorithms from computer graphics are described in detail, illuminating the utilization of the data structure in a creative way.

Journal ArticleDOI
TL;DR: A new class of shape approximation techniques for irregular triangular meshes that approximates the geometry of the mesh using a linear combination of a small number of basis vectors, and develops an incremental update of the factorization of the least-squares system.
Abstract: We introduce a new class of shape approximation techniques for irregular triangular meshes. Our method approximates the geometry of the mesh using a linear combination of a small number of basis vectors. The basis vectors are functions of the mesh connectivity and of the mesh indices of a number of anchor vertices. There is a fundamental difference between the bases generated by our method and those generated by geometry-oblivious methods, such as Laplacian-based spectral methods. In the latter methods, the basis vectors are functions of the connectivity alone. The basis vectors of our method, in contrast, are geometry-aware since they depend both on the connectivity and on a binary tagging of vertices that are "geometrically important" in the given mesh (e.g., extrema). We show that, by defining the basis vectors to be the solutions of certain least-squares problems, the reconstruction problem reduces to solving a single sparse linear least-squares problem. We also show that this problem can be solved quickly using a state-of-the-art sparse-matrix factorization algorithm. We show how to select the anchor vertices to define a compact effective basis from which an approximated shape can be reconstructed. Furthermore, we develop an incremental update of the factorization of the least-squares system. This allows a progressive scheme where an initial approximation is incrementally refined by a stream of anchor points. We show that the incremental update and solving the factored system are fast enough to allow an online refinement of the mesh geometry.
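The computational core, sketched with a plain graph Laplacian plus weighted anchor rows stacked into one sparse least-squares system, solved per coordinate. This is only the structure of the system; the paper's geometry-aware basis construction and incremental factorization updates are not reproduced:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def ls_mesh(n, edges, anchors, anchor_xyz, w=10.0):
    """Reconstruct n vertex positions from connectivity plus a few known
    anchor positions by sparse linear least squares."""
    i, j = np.array(edges).T
    A = sp.coo_matrix((np.ones(len(edges)), (i, j)), shape=(n, n))
    A = A + A.T                                    # symmetric adjacency
    L = sp.diags(np.asarray(A.sum(1)).ravel()) - A # graph Laplacian
    # anchor rows: w * x[a] = w * known position
    S = sp.coo_matrix((w * np.ones(len(anchors)),
                       (range(len(anchors)), anchors)),
                      shape=(len(anchors), n))
    M = sp.vstack([L, S]).tocsr()
    X = np.zeros((n, 3))
    for d in range(3):                             # one solve per coordinate
        rhs = np.concatenate([np.zeros(n), w * anchor_xyz[:, d]])
        X[:, d] = lsqr(M, rhs)[0]
    return X
```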

Proceedings ArticleDOI
Martin Wattenberg1
23 Oct 2005
TL;DR: A connection between space-filling visualizations and the mathematics of space- filling curves is described, and that connection is used to characterize a family of layout algorithms which produce nonrectangular regions but enjoy geometric continuity under changes to the data and legibility even for highly unbalanced trees.
Abstract: A recent line of treemap research has focused on layout algorithms that optimize properties such as stability, preservation of ordering information, and aspect ratio of rectangles. No ideal treemap layout algorithm has been found, and so it is natural to explore layouts that produce nonrectangular regions. This note describes a connection between space-filling visualizations and the mathematics of space-filling curves, and uses that connection to characterize a family of layout algorithms which produce nonrectangular regions but enjoy geometric continuity under changes to the data and legibility even for highly unbalanced trees.
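One standard member of this family places items along a Hilbert curve: items adjacent in the data order land in adjacent cells, which is the source of the continuity and legibility properties. A textbook index-to-cell conversion (the note's curves and region shapes may differ):

```python
def d2xy(order, d):
    """Map index d along a Hilbert curve over a 2^order x 2^order grid
    to cell coordinates (x, y); consecutive d values are always adjacent."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# e.g. the first four cells of an order-1 curve trace a U shape:
# [(0,0), (0,1), (1,1), (1,0)]
print([d2xy(1, d) for d in range(4)])
```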

Journal ArticleDOI
TL;DR: In this article, the authors studied facility location problems that deal with finding a median for a continuum of demand points, and provided polynomial-time algorithms for various versions of the L1 1-median (Fermat-Weber) problem.
Abstract: We give the first exact algorithmic study of facility location problems that deal with finding a median for a continuum of demand points. In particular, we consider versions of the "continuous k-median (Fermat-Weber) problem" where the goal is to select one or more center points that minimize the average distance to a set of points in a demand region. In such problems, the average is computed as an integral over the relevant region, versus the usual discrete sum of distances. The resulting facility location problems are inherently geometric, requiring analysis techniques of computational geometry. We provide polynomial-time algorithms for various versions of the L1 1-median (Fermat-Weber) problem. We also consider the multiple-center version of the L1 k-median problem, which we prove is NP-hard for large k.
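For contrast with the continuous setting, the discrete Fermat-Weber point under the Euclidean metric is classically computed by Weiszfeld's iteration; the paper instead integrates distance over a demand region and works with the L1 metric. A textbook sketch of the discrete case:

```python
import numpy as np

def weiszfeld(pts, iters=200, eps=1e-9):
    """Discrete 1-median: the point minimizing the sum of Euclidean
    distances to pts, found by iteratively reweighted averaging."""
    pts = np.asarray(pts, float)
    y = pts.mean(axis=0)                       # start at the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - y, axis=1)
        w = 1.0 / np.maximum(d, eps)           # guard against d = 0
        y = (pts * w[:, None]).sum(axis=0) / w.sum()
    return y
```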

Journal ArticleDOI
TL;DR: This work describes data structures for representing simplicial meshes compactly while supporting online queries and updates efficiently and uses the data structures to implement 2D and 3D incremental algorithms for generating a Delaunay mesh.
Abstract: We describe data structures for representing simplicial meshes compactly while supporting online queries and updates efficiently. Our data structure requires about a factor of five less memory than the most efficient standard data structures for triangular or tetrahedral meshes, while efficiently supporting traversal among simplices, storing data on simplices, and insertion and deletion of simplices. Our implementation of the data structures uses about 5 bytes/triangle in two dimensions (2D) and 7.5 bytes/tetrahedron in three dimensions (3D). We use the data structures to implement 2D and 3D incremental algorithms for generating a Delaunay mesh. The 3D algorithm can generate 100 Million tetrahedra with 1 Gbyte of memory, including the space for the coordinates and all data used by the algorithm. The runtime of the algorithm is as fast as Shewchuk's Pyramid code, the most efficient we know of, and uses a factor of 3.5 less memory overall.

Journal ArticleDOI
TL;DR: A new diamond data structure for the basic selective-refinement processing is introduced, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display.
Abstract: The real-time display of huge geometry and imagery databases involves view-dependent approximations, typically through the use of precomputed hierarchies that are selectively refined at runtime. A classic motivating problem is terrain visualization in which planetary databases involving billions of elevation and color values are displayed on PC graphics hardware at high frame rates. This paper introduces a new diamond data structure for the basic selective-refinement processing, which is a streamlined method of representing the well-known hierarchies of right triangles that have enjoyed much success in real-time, view-dependent terrain display. Regular-grid tiles are proposed as the payload data per diamond for both geometry and texture. The use of 4-8 grid refinement and coarsening schemes allows level-of-detail transitions that are twice as gradual as traditional quadtree-based hierarchies, as well as very high-quality low-pass filtering compared to subsampling-based hierarchies. An out-of-core storage organization is introduced based on Sierpinski indices per diamond, along with a tile preprocessing framework based on fine-to-coarse, same-level, and coarse-to-fine gathering operations. To attain optimal frame-to-frame coherence and processing-order priorities, dual split and merge queues are developed similar to the realtime optimally adapting meshes (ROAM) algorithm, as well as an adaptation of the ROAM frustum culling technique. Example applications of lake-detection and procedural terrain generation demonstrate the flexibility of the tile processing framework.

Proceedings ArticleDOI
06 Jun 2005
TL;DR: It is shown that polynomial-time approximation algorithms with provable performance exist, under a certain general condition: that for a random subset R ⊂ S and function f(), there is a decomposition of the complement U ∖ ⋃_{Y ∈ R} Y into an expected f(|R|) regions.

Abstract: Given a collection S of subsets of some set U, and M ⊂ U, the set cover problem is to find the smallest subcollection C ⊂ S such that M is a subset of the union of the sets in C. While the general problem is NP-hard to solve, even approximately, here we consider some geometric special cases, where usually U = R^d. Combining previously known techniques [3, 4], we show that polynomial-time approximation algorithms with provable performance exist, under a certain general condition: that for a random subset R ⊂ S and function f(), there is a decomposition of the complement U ∖ ⋃_{Y ∈ R} Y into an expected f(|R|) regions, each region of a particular simple form. Under this condition, a cover of size O(f(|C|)) can be found in polynomial time. Using this result, and combinatorial geometry results implying bounding functions f(c) that are nearly linear, we obtain o(log c) approximation algorithms for covering by fat triangles, by pseudodisks, by a family of fat objects, and others. Similarly, constant-factor approximations follow for similar-sized fat triangles and fat objects, and for fat wedges. With more work, we obtain constant-factor approximation algorithms for covering by unit cubes in R^3, and for guarding an x-monotone polygonal chain.
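As a baseline for comparison, the generic greedy algorithm achieves a logarithmic factor for arbitrary set systems; the paper's contribution is that geometric systems with small decompositions of the uncovered region admit asymptotically better factors. A minimal greedy sketch, with sets as Python sets:

```python
def greedy_set_cover(universe, sets):
    """Classic greedy cover (harmonic-number approximation factor):
    repeatedly take the set covering the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            break                  # remaining elements cannot be covered
        cover.append(best)
        uncovered -= best
    return cover
```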

Journal ArticleDOI
TL;DR: A new and efficient algorithm to accurately polygonize an implicit surface generated by multiple Boolean operations with globally deformed primitives that can be applied to objects with both an implicit and a parametric representation, such as superquadrics, supershapes, and Dupin cyclides is presented.
Abstract: We present a new and efficient algorithm to accurately polygonize an implicit surface generated by multiple Boolean operations with globally deformed primitives. Our algorithm is special in the sense that it can be applied to objects with both an implicit and a parametric representation, such as superquadrics, supershapes, and Dupin cyclides. The input is a constructive solid geometry tree (CSG tree) that contains the Boolean operations, the parameters of the primitives, and the global deformations. At each node of the CSG tree, the implicit formulations of the subtrees are used to quickly determine the parts to be transmitted to the parent node, while the primitives' parametric definition are used to refine an intermediary mesh around the intersection curves. The output is both an implicit equation and a mesh representing its solution. For the resulting object, an implicit equation with guaranteed differential properties is obtained by simple combinations of the primitives' implicit equations using R-functions. Depending on the chosen R-function, this equation is continuous and can be differentiable everywhere. The primitives' parametric representations are used to directly polygonize the resulting surface by generating vertices that belong exactly to the zero-set of the resulting implicit equation. The proposed approach has many potential applications, ranging from mechanical engineering to shape recognition and data compression. Examples of complex objects are presented and commented on to show the potential of our approach for shape modeling.
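The R-function machinery referred to above combines primitive implicit functions so that each Boolean node again has a single implicit equation with controllable smoothness. A minimal sketch with one common R-function pair, using the inside-negative convention (the primitives and convention here are illustrative, not the paper's):

```python
import numpy as np

def sphere(c, r):
    # implicit sphere: negative inside, positive outside
    return lambda p: np.linalg.norm(p - c, axis=-1) ** 2 - r ** 2

def r_union(f, g):          # <= 0 inside either primitive
    return lambda p: f(p) + g(p) - np.sqrt(f(p) ** 2 + g(p) ** 2)

def r_intersection(f, g):   # <= 0 inside both primitives
    return lambda p: f(p) + g(p) + np.sqrt(f(p) ** 2 + g(p) ** 2)

# a two-leaf CSG tree evaluated as one implicit equation; the result is
# differentiable away from the curve where both primitives vanish
shape = r_union(sphere(np.array([0.0, 0.0, 0.0]), 1.0),
                sphere(np.array([0.8, 0.0, 0.0]), 1.0))
print(shape(np.array([0.4, 0.0, 0.0])) <= 0)   # True: inside the union
```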

Book
01 Jan 2005
Abstract: Contents: Basic Concepts; Computational Geometry and Geometric Data Structures; Discretization Methods for Differential Equations; Solving the Mesh Enhancement Algebraic Equation System; The Geometry of Surfaces in Euclidean Space; Special Coordinate Systems; Elliptic Mesh Enhancement Equation Systems; Structured Mesh Smoothing and Enhancement; Mesh Enhancement Methods for Unstructured Meshes

Proceedings ArticleDOI
17 Oct 2005
TL;DR: Line element geometry, which generalizes both line geometry and the Laguerre geometry of oriented planes, enables us to recognize a wide class of surfaces, by fitting linear subspaces in an appropriate seven-dimensional image space.
Abstract: This paper presents a new method for the recognition and reconstruction of surfaces from 3D data. Line element geometry, which generalizes both line geometry and the Laguerre geometry of oriented planes, enables us to recognize a wide class of surfaces (spiral surfaces, cones, helical surfaces, rotational surfaces, cylinders, etc.) by fitting linear subspaces in an appropriate seven-dimensional image space. In combination with standard techniques such as PCA and RANSAC, line element geometry is employed to effectively perform the segmentation of complex objects according to surface type. Examples show applications in reverse engineering of CAD models and testing mathematical hypotheses concerning the exponential growth of sea shells.

Journal ArticleDOI
TL;DR: This paper presents the formulation of representative 3D computer graphics operations in terms of CORDIC-type primitives, and briefly outlines a stream processor based on CORDic-type modules to efficiently implement these graphic operations.
Abstract: Graphics processors require strong arithmetic support to perform computational kernels over data streams. Because current implementations use the basic arithmetic operations, the algorithms are given in algebraic terms. However, since the operations are really of a geometric nature, it seems to us that more flexibility in the implementation is obtained if the description is given in a high-level geometrical form. As a consequence of this line of thought, this paper is an attempt to reconsider some kernels in a graphics processor to obtain implementations that are potentially more scalable than just replicating the modules used in conventional implementations. We present the formulation of representative 3D computer graphics operations in terms of CORDIC-type primitives. Then, we briefly outline a stream processor based on CORDIC-type modules to efficiently implement these graphic operations. We perform a rough comparison with current implementations and conclude that the CORDIC-based alternative might be attractive.
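For concreteness, the circular-mode CORDIC rotation primitive: each iteration needs only shifts, adds, and one entry of a small arctangent table, which is what makes it attractive for hardware stream processors. A floating-point sketch (converges for |θ| up to roughly 99.9°):

```python
import math

def cordic_rotate(x, y, theta, n=32):
    """Rotate (x, y) by theta using shift-and-add micro-rotations."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]  # arctangent table
    K = 1.0
    for i in range(n):                 # accumulated micro-rotation gain
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0    # steer the residual angle to zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * K, y * K                # compensate the gain once at the end

# e.g. rotating (1, 0) by 45 degrees gives about (0.7071, 0.7071)
print(cordic_rotate(1.0, 0.0, math.pi / 4))
```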