
Showing papers on "Computational geometry published in 2007"




Book
01 Apr 2007
TL;DR: In this book, the authors describe algorithms in a C-like language for the automatic processing of natural language, the analysis of molecular sequences, and the management of textual databases.
Abstract: Describing algorithms in a C-like language, this text presents examples related to the automatic processing of natural language, to the analysis of molecular sequences and to the management of textual databases.

686 citations


Book
01 Jan 2007
TL;DR: In this book, the authors present rigorous descriptions of the main algorithms and their analyses for different variations of the Geometric Spanner Network Problem, along with several basic principles and results that are used throughout the book.
Abstract: Aimed at an audience of researchers and graduate students in computational geometry and algorithm design, this book uses the Geometric Spanner Network Problem to showcase a number of useful algorithmic techniques, data structure strategies, and geometric analysis techniques with many applications, practical and theoretical. The authors present rigorous descriptions of the main algorithms and their analyses for different variations of the Geometric Spanner Network Problem. Though the basic ideas behind most of these algorithms are intuitive, very few are easy to describe and analyze. For most of the algorithms, nontrivial data structures need to be designed, and nontrivial techniques need to be developed in order for analysis to take place. Still, there are several basic principles and results that are used throughout the book. One of the most important is the powerful well-separated pair decomposition. This decomposition is used as a starting point for several of the spanner constructions.

444 citations
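The abstract above notes that spanner constructions in the book start from the well-separated pair decomposition. As a much simpler illustration of what a geometric t-spanner is (the classic greedy construction, not the book's WSPD-based ones), here is a hedged Python sketch; the point format and the O(n^2 log n)-style brute-force pair enumeration are choices of this sketch:

```python
import heapq
from math import dist, inf

def greedy_spanner(points, t):
    """Greedy t-spanner sketch: consider point pairs in increasing distance
    order and add an edge only if the current graph distance between its
    endpoints exceeds t times their Euclidean distance."""
    n = len(points)
    adj = [[] for _ in range(n)]  # adjacency list: (neighbor, weight)

    def graph_dist(s, goal, bound):
        # Dijkstra from s, pruned once distances exceed `bound`.
        d = [inf] * n
        d[s] = 0.0
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u] or du > bound:
                continue
            if u == goal:
                return du
            for v, w in adj[u]:
                nd = du + w
                if nd < d[v]:
                    d[v] = nd
                    heapq.heappush(pq, (nd, v))
        return d[goal]

    pairs = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    edges = []
    for w, i, j in pairs:
        if graph_dist(i, j, t * w) > t * w:
            adj[i].append((j, w))
            adj[j].append((i, w))
            edges.append((i, j))
    return edges
```

With t very large the condition only fires for disconnected pairs, so the result degenerates to a minimum spanning tree; with small t more shortcut edges are kept.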


Journal ArticleDOI
TL;DR: This paper proposes a method, named orthogonal neighborhood preserving projections, which works by first building an "affinity" graph for the data in a way that is similar to the method of locally linear embedding (LLE); in contrast with the standard LLE, ONPP employs an explicit linear mapping between the input and the reduced spaces.
Abstract: This paper considers the problem of dimensionality reduction by orthogonal projection techniques. The main feature of the proposed techniques is that they attempt to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry. In particular, we propose a method, named orthogonal neighborhood preserving projections, which works by first building an "affinity" graph for the data in a way that is similar to the method of locally linear embedding (LLE). However, in contrast with the standard LLE where the mapping between the input and the reduced spaces is implicit, ONPP employs an explicit linear mapping between the two. As a result, handling new data samples becomes straightforward, as this amounts to a simple linear transformation. We show how we can define kernel variants of ONPP, as well as how we can apply the method in a supervised setting. Numerical experiments are reported to illustrate the performance of ONPP and to compare it with a few competing methods.

306 citations
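The ONPP abstract above describes a pipeline: build an LLE-style affinity graph, then compute an explicit orthogonal linear map. A hedged numpy sketch of that pipeline follows (parameter names and the regularization constant are choices of this sketch, not the paper's):

```python
import numpy as np

def onpp(X, n_neighbors=5, dim=2, reg=1e-3):
    """Sketch of Orthogonal Neighborhood Preserving Projections.
    X: (d, n) data matrix, one sample per column.
    Returns (V, Y): orthonormal projection V (d, dim) and embedding Y = V.T @ X."""
    d, n = X.shape
    # 1) Brute-force k-nearest-neighbor graph.
    D = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        # 2) LLE-style reconstruction weights summing to 1.
        Z = X[:, nbrs] - X[:, [i]]
        G = Z.T @ Z
        G = G + (reg * np.trace(G) + 1e-12) * np.eye(len(nbrs))
        w = np.linalg.solve(G, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()
    # 3) Explicit linear map: eigenvectors of X M X^T for the smallest
    #    eigenvalues, where M = (I - W)^T (I - W).
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    evals, evecs = np.linalg.eigh(X @ M @ X.T)
    V = evecs[:, :dim]  # eigh returns ascending eigenvalues, orthonormal columns
    return V, V.T @ X
```

Because the mapping V is explicit, a new sample x is embedded by the single matrix product V.T @ x, which is exactly the "handling new data samples becomes straightforward" point made in the abstract.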


Proceedings ArticleDOI
11 Jun 2007
TL;DR: A new algorithm with running time approaching O(n^3 / log^2 n), which improves all known algorithms for general real-weighted dense graphs and is perhaps close to the best result possible without using fast matrix multiplication, modulo a few log log n factors.

Abstract: In the first part of the paper, we reexamine the all-pairs shortest paths (APSP) problem and present a new algorithm with running time approaching O(n^3 / log^2 n), which improves all known algorithms for general real-weighted dense graphs and is perhaps close to the best result possible without using fast matrix multiplication, modulo a few log log n factors. In the second part of the paper, we use fast matrix multiplication to obtain truly subcubic APSP algorithms for a large class of "geometrically weighted" graphs, where the weight of an edge is a function of the coordinates of its vertices. For example, for graphs embedded in Euclidean space of a constant dimension d, we obtain a time bound near O(n^(3-(3-ω)/(2d+4))), where ω is the exponent of matrix multiplication.

212 citations
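For contrast with the subcubic bounds claimed above, the textbook O(n^3) APSP baseline that the paper improves on is Floyd-Warshall; a short Python sketch (this is the standard algorithm, not the paper's):

```python
def floyd_warshall(w):
    """Textbook O(n^3) all-pairs shortest paths on a dense weight matrix.
    w[i][j] is the edge weight (float('inf') if absent); w[i][i] == 0."""
    n = len(w)
    d = [row[:] for row in w]  # copy so the input is not mutated
    for k in range(n):
        dk = d[k]
        for i in range(n):
            dik = d[i][k]
            if dik == float('inf'):
                continue  # no path through k from i
            di = d[i]
            for j in range(n):
                nd = dik + dk[j]
                if nd < di[j]:
                    di[j] = nd
    return d
```

The paper's O(n^3 / log^2 n) result shaves polylogarithmic factors off this triple loop for real weights, where fast matrix multiplication does not directly apply.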


Journal ArticleDOI
TL;DR: A new technique is provided that allows for the systematic creation and cancellation of fixed points and periodic orbits, based on Conley theory, that enables vector field design and editing on the plane and surfaces with desired qualitative properties.
Abstract: Design and control of vector fields is critical for many visualization and graphics tasks such as vector field visualization, fluid simulation, and texture synthesis. The fundamental qualitative structures associated with vector fields are fixed points, periodic orbits, and separatrices. In this paper, we provide a new technique that allows for the systematic creation and cancellation of fixed points and periodic orbits. This technique enables vector field design and editing on the plane and surfaces with desired qualitative properties. The technique is based on Conley theory, which provides a unified framework that supports the cancellation of fixed points and periodic orbits. We also introduce a novel periodic orbit extraction and visualization algorithm that detects, for the first time, periodic orbits on surfaces. Furthermore, we describe the application of our periodic orbit detection and vector field simplification algorithms to engine simulation data demonstrating the utility of the approach. We apply our design system to vector field visualization by creating data sets containing periodic orbits. This helps us understand the effectiveness of existing visualization techniques. Finally, we propose a new streamline-based technique that allows vector field topology to be easily identified.

144 citations


Journal ArticleDOI
TL;DR: This work introduces a framework for computing statistically optimal estimates of geometric reconstruction problems with a hierarchy of convex relaxations to solve non-convex optimization problems with polynomials and shows how one can detect whether the global optimum is attained at a given relaxation.
Abstract: We introduce a framework for computing statistically optimal estimates of geometric reconstruction problems. While traditional algorithms often suffer from either local minima or non-optimality--or a combination of both--we pursue the goal of achieving global solutions of the statistically optimal cost-function. Our approach is based on a hierarchy of convex relaxations to solve non-convex optimization problems with polynomials. These convex relaxations generate a monotone sequence of lower bounds and we show how one can detect whether the global optimum is attained at a given relaxation. The technique is applied to a number of classical vision problems: triangulation, camera pose, homography estimation and last, but not least, epipolar geometry estimation. Experimental validation on both synthetic and real data is provided. In practice, only a few relaxations are needed for attaining the global optimum.

136 citations


Journal ArticleDOI
TL;DR: This work improves the running time bounds of existing algorithms to detect spatio-temporal patterns, namely flock, leadership, convergence, and encounter, that exhibit similar movement in the sense of direction, heading for the same location, and/or proximity.
Abstract: Moving point object data can be analyzed through the discovery of patterns in trajectories. We consider the computational efficiency of detecting four such spatio-temporal patterns, namely flock, leadership, convergence, and encounter, as defined by Laube et al. (Finding REMO: detecting relative motion patterns in geospatial lifelines, 2004). These patterns are large enough subgroups of the moving point objects that exhibit similar movement in the sense of direction, heading for the same location, and/or proximity. By the use of techniques from computational geometry, including approximation algorithms, we improve the running time bounds of existing algorithms to detect these patterns.

129 citations


Journal ArticleDOI
TL;DR: It is discussed here how state-of-the-art computational geometry methods make it tractable to solve the problem of finding the extreme rays of a cone with a degenerate vertex at the origin, a difficult problem.
Abstract: We discuss an implementation of a derivative-free generating set search method for linearly constrained minimization with no assumption of nondegeneracy placed on the constraints. The convergence guarantees for generating set search methods require that the set of search directions possesses certain geometrical properties that allow it to approximate the feasible region near the current iterate. In the hard case, the calculation of the search directions corresponds to finding the extreme rays of a cone with a degenerate vertex at the origin, a difficult problem. We discuss here how state-of-the-art computational geometry methods make it tractable to solve this problem in connection with generating set search. We also discuss a number of other practical issues of implementation, such as the careful treatment of equality constraints and the desirability of augmenting the set of search directions beyond the theoretically minimal set. We illustrate the behavior of the implementation on several problems from the CUTEr test suite. We have found it to be successful on problems with several hundred variables and linear constraints.

113 citations


01 Jan 2007
TL;DR: This work presents KDSs that are robust against out-of-order processing, including kinetic sorting and kinetic tournaments, and presents a new and simple variant of the standard kd-tree, called the rank-based kd-tree, for a set of n points in d-dimensional space.
Abstract: Recent advances in sensing and tracking technology have led researchers to investigate the problem of designing and analyzing algorithms and data structures for moving objects. One important issue in this area of research is which assumptions are made about the motions of the objects. The most common model is one where motions are assumed to be continuous and explicitly known in advance (or at least in the near future), usually as polynomial functions of time. The kinetic-data-structure framework introduced by Basch et al. is based on this model. It has become the common model for dealing with moving objects in computational geometry. A kinetic data structure (KDS) maintains a discrete attribute of a set of moving objects— the convex hull, for instance, or the closest pair. The basic idea is that although all objects move continuously there are only certain discrete moments in time when the combinatorial structure of the attribute—the ordered set of convex-hull vertices, or the pair that is closest—changes. A KDS contains a set of certificates that constitutes a proof that the maintained structure is correct. These certificates are inserted in a priority queue based on their time of expiration. The KDS then performs an event-driven simulation of the motion of the objects, updating the structure whenever an event happens, that is, when a certificate fails. In some applications, continuous tracking of a geometric attribute may be more than is needed; the attribute is only needed at certain times. This leads us to view a KDS as a query structure: we want to maintain a set S of moving objects in such a way that we can reconstruct the attribute of interest efficiently whenever this is called for. This makes it possible to reduce the maintenance cost, as it is no longer necessary to update the KDS whenever the attribute changes. 
On the other hand, a reduction in maintenance cost will have an impact on the query time, that is, the time needed to reconstruct the attribute. Thus there is a trade-off between maintenance cost and query time. In Chapter 2, we show a lower bound for the kinetic sorting problem showing the following: with a subquadratic maintenance cost one cannot obtain any significant speed-up on the time needed to generate the sorted list (compared to the trivial O(n log n) time), even for linear motions. This negative result gives a strong indication that good trade-offs are not possible for a large number of geometric problems—Voronoi diagrams and Delaunay triangulations, for example, or convex hulls—as the sorting problem can often be reduced to such problems. KDSs form a beautiful framework from a theoretical point of view, but whether or not they perform well in practice is unclear. A serious problem for the applicability of the KDS framework in practice is how to cope with the situation where event times cannot be computed exactly and events may be processed in a wrong order. We address this problem in Chapter 3. We present KDSs that are robust against out-of-order processing, including kinetic sorting and kinetic tournaments. Our algorithms are quasi-robust in the sense that the maintained attribute of the moving objects will be correct for most of the time, and when it is incorrect, it will not be far from the correct attribute. The aim of the KDS framework is not only maintaining a uniquely defined geometric attribute but also maintaining a query data structure in order to quickly answer queries involving objects in motion, such as "Which are the points currently inside a query rectangle?" or "What is currently the nearest point to a given query point?". In Chapter 4, we study the kinetic maintenance of kd-trees, which are practical data structures to quickly report all points inside any given region.
We present a new and simple variant of the standard kd-tree, called the rank-based kd-tree, for a set of n points in d-dimensional space. Our rank-based kd-tree supports orthogonal range searching in time O(n^(1-1/d) + k) and it uses O(n) storage—just like the original. But additionally it can be kinetized easily and efficiently. We obtain similar results for longest-side kd-trees. Collision detection is a basic problem arising in all areas of geometric modeling involving objects in motion—motion planning, computer-simulated environments, or virtual prototyping, to name a few. Kinetic methods are naturally applicable to this problem. Although most applications of collision detection are more concerned with three dimensions than two dimensions, so far KDSs have been mostly developed for two-dimensional settings. In Chapter 5, we develop KDSs for 3D collision detection that have a near-linear number of certificates for multiple convex fat objects of varying sizes and for a special case of balls rolling on a plane. In a practical setting, the object motion may not be known exactly, or the explicit description of the motion may be unknown in advance. For instance, suppose we are tracking one, or maybe many, moving objects. Each object is equipped with a device that is transmitting its position at certain times. Then we just have access to some sample points of the object path instead of the whole path, and an explicit motion description is unavailable. Thus, we are just receiving a stream of data points that describes the path along which the object moves. This model is the subject of Chapter 6. Here, we present the first general algorithm for maintaining a simplification of the trajectory of a moving object in this model, without using too much storage.
We analyze the competitive ratio of our algorithms, allowing resource augmentation: we let our algorithm maintain a simplification with 2k (internal) points, and compare the error of our simplification to the error of the optimal simplification with k points.
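The kinetic-data-structure idea summarized above (certificates in a priority queue, events processed when a certificate fails) can be illustrated with a minimal kinetic sorting sketch for points in linear motion. This is a toy version, not the thesis's robust algorithms, and it assumes no two certificates fail at exactly the same instant:

```python
import heapq

def kinetic_sort(points, t_end):
    """Minimal kinetic sorting sketch. Each point i moves linearly:
    x_i(t) = a_i + b_i * t, given as points[i] = (a_i, b_i). We keep one
    certificate per adjacent pair in the sorted order; a certificate fails
    when the two trajectories cross, at which point we swap the pair and
    schedule new certificates for the affected neighbors."""
    def pos(i, t):
        a, b = points[i]
        return a + b * t

    order = sorted(range(len(points)), key=lambda i: pos(i, 0.0))

    def failure_time(i, j, now):
        # Earliest t > now at which point i (currently left) passes j.
        (ai, bi), (aj, bj) = points[i], points[j]
        if bi <= bj:
            return None  # i never overtakes j
        t = (aj - ai) / (bi - bj)
        return t if t > now else None

    events = []  # (time, slot, (left_id, right_id)) certificate
    for s in range(len(order) - 1):
        t = failure_time(order[s], order[s + 1], 0.0)
        if t is not None and t <= t_end:
            heapq.heappush(events, (t, s, (order[s], order[s + 1])))

    while events:
        t, s, cert = heapq.heappop(events)
        if (order[s], order[s + 1]) != cert:
            continue  # stale certificate: the pair was rearranged earlier
        order[s], order[s + 1] = order[s + 1], order[s]
        for u in (s - 1, s, s + 1):
            if 0 <= u < len(order) - 1:
                nt = failure_time(order[u], order[u + 1], t)
                if nt is not None and nt <= t_end:
                    heapq.heappush(events, (nt, u, (order[u], order[u + 1])))
    return order
```

Only O(1) work happens per event, and between events the sorted order is certified correct, which is exactly the event-driven simulation the framework describes.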

Journal ArticleDOI
TL;DR: An interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model that can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates is presented.
Abstract: We present an interactive and accurate collision detection algorithm for deformable, polygonal objects based on the streaming computational model. Our algorithm can detect all possible pairwise primitive-level intersections between two severely deforming models at highly interactive rates. In our streaming computational model, we consider a set of axis-aligned bounding boxes (AABBs) that bound each of the given deformable objects as an input stream and perform massively-parallel pairwise overlap tests on the incoming streams. As a result, we are able to prevent performance stalls in the streaming pipeline that can be caused by the expensive indexing mechanisms required by bounding volume hierarchy-based streaming algorithms. At runtime, as the underlying models deform over time, we employ a novel streaming algorithm to update the geometric changes in the AABB streams. Moreover, in order to get only the computed result (i.e., collision results between AABBs) without reading back the entire output streams, we propose a streaming en/decoding strategy that can be performed in a hierarchical fashion. After determining overlapped AABBs, we perform a primitive-level (e.g., triangle) intersection check on a serial computational model such as CPUs. We implemented the entire pipeline of our algorithm using off-the-shelf graphics processors (GPUs), such as the NVIDIA GeForce 7800 GTX, for streaming computations, and Intel dual-core 3.4 GHz processors for serial computations. We benchmarked our algorithm with different models of varying complexities, ranging from 15K up to 50K triangles, under various deformation motions, and obtained timings of 30-100 FPS depending on the complexity of the models and their relative configurations. Finally, we made comparisons with a well-known GPU-based collision detection algorithm, CULLIDE, and observed about a three-times performance improvement over the earlier approach. We also made comparisons with a software-based AABB culling algorithm and observed about a two-times improvement.
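The broad phase that the abstract describes boils down to pairwise AABB overlap tests; a CPU stand-in for that stage (a serial sketch, not the paper's GPU streaming pipeline) is straightforward:

```python
def aabb_overlap(a, b):
    """Two axis-aligned bounding boxes overlap iff their intervals overlap
    on every axis. Each box is ((minx, miny, minz), (maxx, maxy, maxz))."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[k] <= bmax[k] and bmin[k] <= amax[k] for k in range(3))

def broad_phase(boxes_a, boxes_b):
    """Serial stand-in for the paper's GPU broad phase: report index pairs
    whose AABBs overlap; only these pairs proceed to exact triangle tests."""
    return [(i, j)
            for i, a in enumerate(boxes_a)
            for j, b in enumerate(boxes_b)
            if aabb_overlap(a, b)]
```

The paper's contribution is running these tests massively in parallel on AABB streams and compacting the (sparse) overlap results before readback; the per-pair predicate itself is the same interval test on each axis.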

Journal ArticleDOI
TL;DR: This paper presents a new data structure for the boundary representation of three-dimensional Nef polyhedra and efficient algorithms for boolean operations and presents important optimizations for the algorithms, and evaluates this optimized implementation with extensive experiments.
Abstract: Nef polyhedra in d-dimensional space are the closure of half-spaces under boolean set operations. In consequence, they can represent non-manifold situations, open and closed sets, mixed-dimensional complexes, and they are closed under all boolean and topological operations, such as complement and boundary. They were introduced by W. Nef in his seminal 1978 book on polyhedra. The generality of Nef complexes is essential for some applications. In this paper, we present a new data structure for the boundary representation of three-dimensional Nef polyhedra and efficient algorithms for boolean operations. We use exact arithmetic to avoid well-known problems with floating-point arithmetic and handle all degeneracies. Furthermore, we present important optimizations for the algorithms, and evaluate this optimized implementation with extensive experiments. The experiments supplement the theoretical runtime analysis and illustrate the effectiveness of our optimizations. We compare our implementation with the Acis CAD kernel. Acis is mostly faster, by a factor of up to six. There are examples on which Acis fails. The implementation was released as Open Source in the Computational Geometry Algorithm Library (Cgal) release 3.1 in December 2004.

Patent
09 Apr 2007
TL;DR: In this article, a method and system for computer aided design (CAD) is disclosed for designing geometric objects, which interpolates and/or blends between such geometric objects sufficiently fast so that real time deformation of such objects occurs while deformation data is being input.
Abstract: A method and system for computer aided design (CAD) is disclosed for designing geometric objects. The present invention interpolates and/or blends between such geometric objects sufficiently fast so that real time deformation of such objects occurs while deformation data is being input. Thus, a user designing with the present invention obtains immediate feedback to input modifications without separately entering a command for performing such deformations. The present invention utilizes novel computational techniques for blending between geometric objects, wherein weighted sums of points on the geometric objects are used in deriving a new blended geometric object. The present invention is particularly useful for designing the shape of surfaces. Thus, the present invention is applicable to various design domains such as the design of, e.g., bottles, vehicles, and watercraft. Additionally, the present invention provides for efficient animation via repeatedly modifying surfaces of an animated object such as a representation of a face.

Proceedings ArticleDOI
29 Oct 2007
TL;DR: This work proposes representing the boundary of the Minkowski sum approximately using only points, shows that this point-based representation can be generated efficiently, and demonstrates that it provides functionality similar to mesh-based representations.
Abstract: Minkowski sum is a fundamental operation in many geometric applications, including robotics, penetration depth estimation, solid modeling, and virtual prototyping. However, due to its high computational complexity and several nontrivial implementation issues, computing the exact boundary of the Minkowski sum of two arbitrary polyhedra is generally a difficult task. In this work, we propose to represent the boundary of the Minkowski sum approximately using only points. Our results show that this point-based representation can be generated efficiently. An important feature of our method is its straightforward implementation and parallelization. We also demonstrate that the point-based representation of the Minkowski sum boundary can indeed provide similar functionality as mesh-based representations can. We show several applications in motion planning, penetration depth approximation and modeling.
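The hardness the abstract describes is specific to arbitrary (non-convex) polyhedra. For intuition about the operation itself, the Minkowski sum of two convex polygons in 2D has a well-known exact construction: merge the two edge sequences by angle. A hedged Python sketch of that classic special case (not the paper's point-based 3D method) follows; polygons are CCW vertex lists:

```python
def minkowski_sum_convex(P, Q):
    """Exact Minkowski sum of two convex CCW polygons: start both at their
    bottom-most vertices and merge edge vectors by turning angle."""
    def reorder(poly):
        # Rotate so the lexicographically bottom-most vertex comes first.
        k = min(range(len(poly)), key=lambda i: (poly[i][1], poly[i][0]))
        return poly[k:] + poly[:k]

    P, Q = reorder(P), reorder(Q)
    result = []
    i = j = 0
    while i < len(P) or j < len(Q):
        result.append((P[i % len(P)][0] + Q[j % len(Q)][0],
                       P[i % len(P)][1] + Q[j % len(Q)][1]))
        ep = (P[(i + 1) % len(P)][0] - P[i % len(P)][0],
              P[(i + 1) % len(P)][1] - P[i % len(P)][1])
        eq = (Q[(j + 1) % len(Q)][0] - Q[j % len(Q)][0],
              Q[(j + 1) % len(Q)][1] - Q[j % len(Q)][1])
        cross = ep[0] * eq[1] - ep[1] * eq[0]
        if j >= len(Q) or (i < len(P) and cross > 0):
            i += 1  # P's edge turns first
        elif i >= len(P) or cross < 0:
            j += 1  # Q's edge turns first
        else:
            i += 1  # parallel edges: advance both
            j += 1
    return result
```

In 3D with non-convex input no comparably simple merge exists, which is what motivates the paper's approximate point-based boundary representation.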

Proceedings Article
01 Jan 2007
TL;DR: This paper presents a new method of specific cavity analysis in protein molecules that uses the Voronoi diagram and Delaunay triangulation and computes tunnels of better quality in reasonable computational time.

Abstract: This paper presents a new method of specific cavity analysis in protein molecules. Long-term biochemical research has established that protein molecule behaviour depends on the existence of cavities (tunnels) leading from the inside of the molecule to its surface. Previous methods of tunnel computation were based on space rasterization. Our approach is based on computational geometry and uses the Voronoi diagram and Delaunay triangulation. Our method computes tunnels of better quality in reasonable computational time. The proposed algorithm was implemented and tested on several real protein molecules and is expected to be used in various applications in protein modelling and analysis. This is an interesting example of applying computational geometry principles to practical problems.

Proceedings ArticleDOI
Bin Yang1
15 Apr 2007
TL;DR: The relationship between maximum Fisher information matrix, minimum Cramer-Rao bound, spherical codes, uniform angular arrays, and Platonic solids as well as their roles in optimizing the sensor placement are discussed.
Abstract: This paper studies different optimization strategies for sensor placement in source localization using time differences of arrival. It continues the work in (B. Yang et al., 2005, 2006) and answers some open questions raised there. In particular, we discuss the relationship between the maximum Fisher information matrix, the minimum Cramer-Rao bound, spherical codes, uniform angular arrays, and Platonic solids, as well as their roles in optimizing sensor placement. Various new optimum sensor array geometries are given.

Journal ArticleDOI
TL;DR: In this article, for the first time, the limacon-cylinder is established using computational geometric techniques; the proposed algorithms are fast, yield accurate results, and can be implemented in computer-aided form measuring instruments.

Abstract: A form tester having a straight datum in addition to a rotational axis provides cylindricity data. Due to misalignment between the axis of the component and the instrument axis, and the size-suppression inherent in such measurements, a limacon-cylinder has to be used for cylindricity evaluation. For the first time, an attempt has been made in the present work to establish the limacon-cylinder using computational geometric techniques. Concepts involved in the construction of the limacon from a 2D control hull, as brought out in Part I, are further extended in this Part II to establish the limacon-cylinder. The algorithms developed in this paper have been applied to minimum circumscribed, maximum inscribed and minimum zone evaluations of data available in the literature. The proposed algorithms are fast, yield accurate results, and can be implemented in computer-aided form measuring instruments.

Journal ArticleDOI
TL;DR: The concept of hand geometry is extended from a geometrical size-based technique that requires physical hand constraints to a projective invariant- based technique that allows free hand motion.
Abstract: Our research focuses on finding mathematical representations of biometric features that are not only distinctive, but also invariant to projective transformations. We have chosen hand geometry technology to work with, because it has wide public awareness and acceptance and, most importantly, large room for improvement. Unlike the traditional hand geometry technologies, the hand descriptor in our hand geometry system is constructed using projective-invariant features. Hand identification can be accomplished by a single view of a hand regardless of the viewing angle. The noise immunity and the discriminability possessed by a hand feature vector using different types of projective invariants are studied. We have found an appropriate symmetric polynomial representation of the hand features with which both the noise immunity and the discriminability of a hand feature vector are considerably improved. Experimental results show that the system achieves an equal error rate (EER) of 2.1% by a 5-D feature vector on a database of 52 hand images. The EER reduces to 0.00% when the feature vector dimension increases to 18. In this paper, we extend the concept of hand geometry from a geometrical size-based technique that requires physical hand constraints to a projective invariant-based technique that allows free hand motion.
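The best-known projective invariant, and the kind of quantity such a feature vector can be built from, is the cross-ratio of four collinear points: it is unchanged by any nondegenerate projective transformation. A small sketch (illustrative of the principle only, not the paper's actual hand features):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by scalar coordinates
    along their line: (AC/BC) / (AD/BD). Invariant under any 1D homography."""
    return ((a - c) / (b - c)) / ((a - d) / (b - d))

def homography_1d(x, m):
    """Apply a 1D projective map x -> (p*x + q) / (r*x + s),
    m = ((p, q), (r, s)) with nonzero determinant."""
    (p, q), (r, s) = m
    return (p * x + q) / (r * x + s)
```

Because the cross-ratio survives the perspective projection of a camera, features built on it can identify a hand from a single view regardless of viewing angle, which is the paper's central idea.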

Journal ArticleDOI
TL;DR: A novel method for efficient matching and retrieval of 3D deformable models using Topological Point Ring (TPR) analysis to locate reliable topological points and rings and captures both local and global geometric information to characterize each of these topological features.
Abstract: With the increasing popularity of 3D applications such as computer games, many 3D geometry models are being created. To encourage sharing and reuse, techniques that support matching and retrieval of these models are emerging. However, only a few of them can handle deformable models, that is, models of different poses, and these methods are generally very slow. In this paper, we present a novel method for efficient matching and retrieval of 3D deformable models. Our research idea stresses using both topological and geometric features at the same time. First, we propose Topological Point Ring (TPR) analysis to locate reliable topological points and rings. Second, we capture both local and global geometric information to characterize each of these topological features. To compare the similarity of two models, we adapt the Earth Mover's Distance (EMD) as the distance function and construct an indexing tree to accelerate the retrieval process. We demonstrate the performance of the new method, both in terms of accuracy and speed, through a large number of experiments.

01 Jan 2007
TL;DR: This work surveys results in a recent branch of computational geometry: folding and unfolding of linkages, paper, and polyhedra.
Abstract: We survey results in a recent branch of computational geometry: folding and unfolding of linkages, paper, and polyhedra.

Journal ArticleDOI
TL;DR: A method is derived that allows correct maintenance of free and occupied space of a set of n rectangular modules in time O(n log n) and finds an optimal feasible communication-conscious placement which minimizes the total weighted Manhattan distance between the new module and existing demand points.
Abstract: We describe algorithmic results on two crucial aspects of allocating resources on computational hardware devices with partial reconfigurability. By using methods from the field of computational geometry, we derive a method that allows correct maintenance of the free and occupied space of a set of n rectangular modules in time O(n log n); previous approaches needed time O(n^2) for correct results and O(n) for heuristic results. We also show a matching lower bound of Ω(n log n), so our approach is optimal. We also show that an optimal feasible communication-conscious placement (which minimizes the total weighted Manhattan distance between the new module and existing demand points) can be computed in Θ(n log n) time. Both resulting algorithms are easy to implement in practice and show convincing experimental behavior.
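A useful property behind the weighted-Manhattan placement objective above is that it separates per coordinate, and the 1D optimum is a weighted median. The sketch below illustrates only that unconstrained core (it ignores the feasibility constraints of the chip surface that the paper handles):

```python
def weighted_median(values, weights):
    """Smallest v among `values` such that the total weight at or below v
    is at least half of all weight; this minimizes sum_i w_i * |v - values[i]|."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    total = sum(weights)
    acc = 0.0
    for i in order:
        acc += weights[i]
        if 2 * acc >= total:
            return values[i]

def best_placement(points, weights):
    """Unconstrained optimum under total weighted Manhattan distance:
    the objective splits into independent x and y terms, so take the
    weighted median of each coordinate."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (weighted_median(xs, weights), weighted_median(ys, weights))
```

Sorting dominates, so this core already runs in O(n log n), matching the bound stated in the abstract for the full constrained problem.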

Journal ArticleDOI
TL;DR: This paper introduces a taxonomy and a set of transformations especially for illustrative deformation of general data exploration, and presents combined geometric or optical illustration operators for focus+context visualization, and examines the best means for preventing the deformed context from being misperceived.
Abstract: Much of the visualization research has focused on improving the rendering quality and speed, and enhancing the perceptibility of features in the data. Recently, significant emphasis has been placed on focus+context (F+C) techniques (e.g., fisheye views and magnification lens) for data exploration in addition to viewing transformation and hierarchical navigation. However, most of the existing data exploration techniques rely on the manipulation of viewing attributes of the rendering system or optical attributes of the data objects, with users being passive viewers. In this paper, we propose a more active approach to data exploration, which attempts to mimic how we would explore data if we were able to hold it and interact with it in our hands. This involves allowing the users to physically or actively manipulate the geometry of a data object. While this approach has been traditionally used in applications, such as surgical simulation, where the original geometry of the data objects is well understood by the users, there are several challenges when this approach is generalized for applications, such as flow and information visualization, where there is no common perception as to the normal or natural geometry of a data object. We introduce a taxonomy and a set of transformations especially for illustrative deformation of general data exploration. We present combined geometric or optical illustration operators for focus+context visualization, and examine the best means for preventing the deformed context from being misperceived. We demonstrated the feasibility of this generalization with examples of flow, information and video visualization.

Journal ArticleDOI
TL;DR: An approach to simulate divide-and-conquer space-efficiently, stably selecting and unselecting a subset from a sorted set, and computing the kth smallest element in one dimension from a multi-dimensional set that is sorted in another dimension are developed.
Abstract: We develop a number of space-efficient tools including an approach to simulate divide-and-conquer space-efficiently, stably selecting and unselecting a subset from a sorted set, and computing the kth smallest element in one dimension from a multi-dimensional set that is sorted in another dimension. We then apply these tools to solve several geometric problems that have solutions using some form of divide-and-conquer. Specifically, we present a deterministic algorithm running in O(n log n) time using O(1) extra memory given inputs of size n for the closest pair problem and a randomized solution running in O(n log n) expected time and using O(1) extra space for the bichromatic closest pair problem. For the orthogonal line segment intersection problem, we solve the problem in O(n log n + k) time using O(1) extra space where n is the number of horizontal and vertical line segments and k is the number of intersections.
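For context, here is the standard O(n log n) divide-and-conquer closest-pair algorithm that this paper re-engineers. Note that this textbook sketch uses O(n) extra space for sorting and the strip; achieving the same time bound with O(1) extra memory is precisely the paper's contribution and is not shown here:

```python
import math

def closest_pair(points):
    """Classic divide-and-conquer closest pair in the plane.
    O(n log n) time, but O(n) extra space (unlike the paper's version)."""
    pts = sorted(points)                      # sort once by x-coordinate

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def rec(lo, hi):
        if hi - lo <= 3:                      # brute-force small instances
            return min(dist(pts[i], pts[j])
                       for i in range(lo, hi) for j in range(i + 1, hi))
        mid = (lo + hi) // 2
        xm = pts[mid][0]
        d = min(rec(lo, mid), rec(mid, hi))
        # candidates within d of the dividing line, sorted by y
        strip = sorted((p for p in pts[lo:hi] if abs(p[0] - xm) < d),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:      # at most 7 neighbors to check
                if q[1] - p[1] >= d:
                    break
                d = min(d, dist(p, q))
        return d

    return rec(0, len(pts))
```

The "at most 7 neighbors" bound is what keeps the merge step linear: within the strip, any square of side d contains a bounded number of points at pairwise distance at least d.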

Proceedings ArticleDOI
10 Sep 2007
TL;DR: This work presents a novel hierarchical representation, ray-strips, for interactive ray tracing of complex triangle meshes that takes advantage of mesh connectivity for compact storage, efficient traversal and ray intersections, and can reduce the memory overhead by up to five times compared to standard approaches.
Abstract: We present a novel hierarchical representation, ray-strips, for interactive ray tracing of complex triangle meshes. Prior optimized algorithms for ray tracing explicitly store each triangle in the input model. Instead, a ray-strip takes advantage of mesh connectivity for compact storage, efficient traversal and ray intersections. As a result, we considerably reduce the memory overhead of the original model and the hierarchical representation. We also present efficient algorithms for single ray and ray packet traversal using ray-strips. Furthermore, we demonstrate that our representation can utilize the SIMD capabilities of current CPUs for incoherent ray packets and single rays. We show the benefit of ray-strips on models with tens of thousands to tens of millions of triangles. In practice, our approach can reduce the storage overhead of interactive ray tracing algorithms by up to five times compared to standard approaches. Moreover, we improve the runtime performance of ray tracing on large models.
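The compactness argument rests on standard triangle-strip connectivity: a strip of n triangles stores n + 2 vertex indices instead of 3n. As a sketch of only this underlying idea (the paper's hierarchy, traversal, and SIMD intersection code are far more involved and not reproduced here), decoding triangles from a strip looks like:

```python
def strip_triangles(indices):
    """Yield the triangles encoded by a triangle strip.

    Each new index forms a triangle with the previous two, so a strip
    of n triangles needs only n + 2 indices instead of 3n."""
    for i in range(len(indices) - 2):
        a, b, c = indices[i:i + 3]
        if a == b or b == c or a == c:
            continue                  # degenerate triangle (strip stitching)
        # alternate the winding so every triangle keeps the same orientation
        yield (a, b, c) if i % 2 == 0 else (b, a, c)
```

For a mesh with millions of triangles, this roughly 3x index compression, combined with not duplicating triangles in the hierarchy, is where the reported up-to-five-times storage reduction comes from.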

Proceedings ArticleDOI
12 Nov 2007
TL;DR: A novel approach based on region growing is presented in which the underlying mathematics has been re-formulated such that an incremental fit can be done, i.e., the best-fit surface does not have to be completely re-computed the moment a new point is investigated in the region-growing process.
Abstract: 3D sensing and modeling is increasingly important for mobile robotics in general and safety, security and rescue robotics (SSRR) in particular. To reduce the data and to allow for efficient processing, e.g., with computational geometry algorithms, it is necessary to extract surface data from 3D point clouds delivered by range sensors. A significant amount of work on this topic exists from the computer graphics community, but it relies on relatively exact point cloud data. As also shown by others, sensors suited for mobile robots are very noise-prone, and standard approaches that use local processing on surface normals are doomed to fail. Hence plane fitting has been suggested as a solution by the robotics community. Here, a novel approach to this problem is presented. Its main feature is that it is based on region growing and that the underlying mathematics has been re-formulated such that an incremental fit can be done, i.e., the best-fit surface does not have to be completely re-computed the moment a new point is investigated in the region-growing process. The worst-case complexity is O(n log(n)), but as shown in experiments it tends to scale linearly with typical data. Results with real-world data from a Swissranger time-of-flight camera are presented, in which surface polygons are always successfully extracted within about 0.3 seconds.
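The abstract does not spell out the re-formulation, but the incremental idea can be illustrated with running sums: each new point updates a handful of accumulators in O(1), and the plane parameters are recovered by solving a fixed-size system, so nothing is recomputed from scratch during region growing. The sketch below uses a simple z = ax + by + c least-squares plane; the paper's actual formulation may well differ (e.g., total least squares on the scatter matrix):

```python
class IncrementalPlaneFit:
    """Least-squares fit of the plane z = a*x + b*y + c.
    Running sums make adding a point an O(1) update."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sz = 0.0
        self.sxx = self.sxy = self.syy = 0.0
        self.sxz = self.syz = 0.0

    def add(self, x, y, z):
        """O(1) update of the accumulators for one new point."""
        self.n += 1
        self.sx += x; self.sy += y; self.sz += z
        self.sxx += x * x; self.sxy += x * y; self.syy += y * y
        self.sxz += x * z; self.syz += y * z

    def solve(self):
        """Return (a, b, c) by solving the 3x3 normal equations."""
        A = [[self.sxx, self.sxy, self.sx],
             [self.sxy, self.syy, self.sy],
             [self.sx,  self.sy,  float(self.n)]]
        rhs = [self.sxz, self.syz, self.sz]
        return _solve3(A, rhs)


def _solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c]
                              for c in range(r + 1, 3))) / M[r][r]
    return x
```

Growing a region by one candidate point then costs a constant-time `add` plus a constant-time `solve`, which is what makes testing many candidates per step affordable.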


Journal ArticleDOI
TL;DR: In this paper, the authors give explicit formulae for the bounds on the Hausdorff distance, normal distance, and Riemannian metric distortion between the smooth surface and the discrete mesh, in terms of the principal curvatures and the radii of the geodesic circum-circles of the triangles.

Proceedings ArticleDOI
26 Dec 2007
TL;DR: This paper proposes a new skeleton-based parametric representation of freeform tubular solid objects - Ball B-Spline Curves (BBSCs) - and investigates their fundamental properties and algorithms.
Abstract: This paper proposes a parametric solid representation of freeform tubular objects - Ball B-Spline Curves (BBSCs), a skeleton-based parametric solid model. Their fundamental properties, algorithms, and modeling methods are investigated. A BBSC defines objects directly in B-spline functional form (rather than procedurally, as in sweeping), using control balls in place of the control points of a B-spline curve. A BBSC describes not only every point inside a 3D solid object, but also provides its center curve directly in B-spline form, so the representation is more flexible for modeling, manipulation, and deformation.
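Since a control ball is just a control point with an attached radius, a BBSC can be evaluated like an ordinary B-spline over (x, y, z, r) tuples, yielding the center curve and the radius function simultaneously. The sketch below uses the standard De Boor algorithm; the function names and the clamped-knot setup are illustrative assumptions, not the paper's code:

```python
def de_boor(knots, ctrl, degree, t):
    """Evaluate a B-spline at parameter t with De Boor's algorithm.
    Control points may carry any number of coordinates."""
    # locate the knot span k with knots[k] <= t < knots[k + 1]
    k = degree
    while k < len(knots) - degree - 2 and t >= knots[k + 1]:
        k += 1
    d = [list(ctrl[j + k - degree]) for j in range(degree + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            i = j + k - degree
            denom = knots[i + degree - r + 1] - knots[i]
            alpha = (t - knots[i]) / denom if denom else 0.0
            d[j] = [(1 - alpha) * a + alpha * b
                    for a, b in zip(d[j - 1], d[j])]
    return d[degree]


def bbsc_eval(knots, balls, degree, t):
    """Evaluate a Ball B-Spline Curve: each control ball is (x, y, z, r).
    Returns the center point and the radius at parameter t, since both
    the center curve and the radius are B-spline functions of t."""
    x, y, z, r = de_boor(knots, balls, degree, t)
    return (x, y, z), r
```

Sweeping a sphere of radius r(t) along the center curve then reproduces the tubular solid, which is why every interior point of the object is covered by the representation.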

Journal ArticleDOI
TL;DR: In this paper, a new family called graded sparse graphs, arising from generically pinned (completely immobilized) bar-and-joint frameworks, was defined and proved to also form matroids.
Abstract: Sparse graphs and their associated matroids play an important role in rigidity theory, where they capture the combinatorics of generically rigid structures. We define a new family called graded sparse graphs, arising from generically pinned (completely immobilized) bar-and-joint frameworks, and prove that they also form matroids. We address five problems on graded sparse graphs: Decision, Extraction, Components, Optimization, and Extension. We extend our pebble game algorithms to solve them.
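The pebble game algorithms being extended are the classic (k, l)-pebble games of Lee and Streinu. As a sketch of only the base game (the paper's graded variant is not shown), here is the (2, 3) case, which greedily extracts a maximal independent edge set of the 2D generic rigidity matroid: every vertex starts with k pebbles, and an edge is accepted once l + 1 pebbles can be gathered on its endpoints by reversing paths in the current orientation:

```python
def pebble_game(n, edges, k=2, l=3):
    """(k, l)-pebble game on vertices 0..n-1; with k=2, l=3 it extracts
    a maximal (2,3)-sparse (Laman-independent) subset of the edges."""
    pebbles = [k] * n
    out = [set() for _ in range(n)]          # current edge orientation

    def find_pebble(root, forbidden):
        """DFS from root for a free pebble; reverse the path to move it."""
        seen = {root} | forbidden
        parent = {}
        stack = [root]
        while stack:
            v = stack.pop()
            for w in out[v]:
                if w in seen:
                    continue
                seen.add(w)
                parent[w] = v
                if pebbles[w] > 0:           # found one: walk it back
                    pebbles[w] -= 1
                    x = w
                    while x != root:
                        p = parent[x]
                        out[p].discard(x)    # reversing the path keeps the
                        out[x].add(p)        # out-degree invariant intact
                        x = p
                    pebbles[root] += 1
                    return True
                stack.append(w)
        return False

    independent = []
    for u, v in edges:
        while pebbles[u] + pebbles[v] < l + 1:
            if not (find_pebble(u, {v}) or find_pebble(v, {u})):
                break
        if pebbles[u] + pebbles[v] >= l + 1:
            # with k=2, l=3 both endpoints must now hold pebbles,
            # so orienting the new edge away from u is always legal
            out[u].add(v)
            pebbles[u] -= 1
            independent.append((u, v))
    return independent
```

On K4, for instance, only 5 of the 6 edges are accepted, matching the rank 2n - 3 = 5 of the rigidity matroid; the rejected edge is the redundant one.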