
Showing papers on "Computational geometry published in 2000"



Journal ArticleDOI
TL;DR: This work gives an explicit method for mapping any simply connected surface onto the sphere in a manner which preserves angles and provides a new way to automatically assign texture coordinates to complex undulating surfaces.
Abstract: We give an explicit method for mapping any simply connected surface onto the sphere in a manner which preserves angles. This technique relies on certain conformal mappings from differential geometry. Our method provides a new way to automatically assign texture coordinates to complex undulating surfaces. We demonstrate a finite element method that can be used to apply our mapping technique to a triangulated geometric description of a surface.
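The construction rests on conformal (angle-preserving) maps from differential geometry. A minimal sketch of the simplest such map, inverse stereographic projection from the plane to the unit sphere, illustrates the idea (this is a standard textbook map, not the authors' finite element pipeline, and `plane_to_sphere` is our name):

```python
import math

def plane_to_sphere(x, y):
    """Inverse stereographic projection: an angle-preserving map
    from the plane onto the unit sphere (minus the north pole)."""
    d = 1.0 + x * x + y * y
    return (2.0 * x / d, 2.0 * y / d, (x * x + y * y - 1.0) / d)

# Every planar point lands on the unit sphere; the origin maps to the south pole.
p = plane_to_sphere(3.0, 4.0)
print(math.hypot(*p))                 # ~1.0
print(plane_to_sphere(0.0, 0.0))      # (0.0, 0.0, -1.0)
```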

400 citations


Journal ArticleDOI
TL;DR: The CPM (compressed progressive meshes) approach proposed here uses a new technique, which refines the topology of the mesh in batches, each of which increases the number of vertices by up to 50 percent, leading to representations of vertex coordinates that are 50 percent more compact than previously reported progressive geometry compression techniques.
Abstract: Most systems that support visual interaction with 3D models use shape representations based on triangle meshes. The size of these representations imposes limits on applications for which complex 3D models must be accessed remotely. Techniques for simplifying and compressing 3D models reduce the transmission time. Multiresolution formats provide quick access to a crude model and then refine it progressively. Unfortunately, compared to the best nonprogressive compression methods, previously proposed progressive refinement techniques impose a significant overhead when the full resolution model must be downloaded. The CPM (compressed progressive meshes) approach proposed here eliminates this overhead. It uses a new technique, which refines the topology of the mesh in batches, each of which increases the number of vertices by up to 50 percent. Less than an amortized total of 4 bits per triangle encode where and how the topological refinements should be applied. We estimate the position of new vertices from the positions of their topological neighbors in the less refined mesh using a new estimator that leads to representations of vertex coordinates that are 50 percent more compact than previously reported progressive geometry compression techniques.

399 citations


Book
18 Feb 2000
TL;DR: Computational Geometry: Algorithms and Applications, second edition, is an introductory textbook covering the main topics of the field.
Abstract: Computational Geometry: Algorithms and Applications, second edition.

386 citations


Proceedings ArticleDOI
24 Apr 2000
TL;DR: These algorithms have been used to perform proximity queries for applications including virtual prototyping, dynamic simulation, and motion planning on complex models and have achieved significant speedups on many benchmarks.
Abstract: We present new distance computation algorithms using hierarchies of rectangular swept spheres. Each bounding volume of the tree is described as the Minkowski sum of a rectangle and a sphere, and fits tightly to the underlying geometry. We present accurate and efficient algorithms to build the hierarchies and perform distance queries between the bounding volumes. We also present traversal techniques for accelerating distance queries using coherence and priority directed search. These algorithms have been used to perform proximity queries for applications including virtual prototyping, dynamic simulation, and motion planning on complex models. As compared to earlier algorithms based on bounding volume hierarchies for separation distance and approximate distance computation, our algorithms have achieved significant speedups on many benchmarks.
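The shape of the bounding volumes (Minkowski sum of a rectangle and a sphere) can be illustrated with a simplified, axis-aligned stand-in. This sketch shows only the core bounding-volume distance test, not the hierarchy construction or traversal; the paper's rectangles are oriented, and `swept_sphere_distance` is our name, not the authors' API:

```python
import math

def point_rect_distance(p, rect_min, rect_max):
    """Euclidean distance from a 3D point to an axis-aligned box."""
    d2 = 0.0
    for c, lo, hi in zip(p, rect_min, rect_max):
        if c < lo:
            d2 += (lo - c) ** 2
        elif c > hi:
            d2 += (c - hi) ** 2
    return math.sqrt(d2)

def swept_sphere_distance(p, rect_min, rect_max, radius):
    """Separation distance between a point and the Minkowski sum of
    the box and a sphere of the given radius (clamped at zero)."""
    return max(0.0, point_rect_distance(p, rect_min, rect_max) - radius)

print(swept_sphere_distance((5.0, 0.0, 0.0), (0, 0, 0), (1, 1, 1), 1.0))  # 3.0
```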

303 citations


Journal ArticleDOI
TL;DR: The design and implementation of a library of C-code procedures to perform operations on rational polyhedra is described, supporting intersection, union, difference, simplification in context, convex hull, affine image, affine preimage, and computation of dual forms.
Abstract: The design and implementation of a library of C-code procedures to perform operations on rational polyhedra is described. The library supports intersection, union, difference, simplification in context, convex hull, affine image, affine preimage, and computation of dual forms. Since not all of these functions are closed over polyhedra, the library is extended to operate on finite unions of polyhedra. The major design decisions made during the implementation of the library are discussed. The data structure used for representing finite unions of polyhedra is developed and validity rules for the representation of polyhedra are derived. And finally, the algorithms used to implement the various functions in the library are presented.

244 citations


Journal ArticleDOI
TL;DR: The major design goals for CGAL are correctness, flexibility, ease-of-use, efficiency, and robustness; the approach taken to reach these goals is presented, and generic programming using templates in C++ plays a central role in the architecture of CGAL.
Abstract: CGAL is a Computational Geometry Algorithms Library written in C++. The goal is to make the large body of geometric algorithms developed in the field of computational geometry available for industrial application. In this report we discuss the major design goals for CGAL, which are correctness, flexibility, ease-of-use, efficiency, and robustness, and present our approach to reach these goals. Templates and the relatively new generic programming play a central role in the architecture of CGAL. We give a short introduction to generic programming in C++, compare it to the object-oriented programming paradigm, and present examples where both paradigms are used effectively in CGAL. Moreover, we give an overview on the current structure of the library and consider software engineering aspects in the CGAL-project.

221 citations


Proceedings ArticleDOI
01 Jan 2000
TL;DR: A new method for determining correspondence between points on pairs of surfaces based on shape using a combination of geodesic distance and surface curvature is described, which is applied to human cerebral cortical surfaces.
Abstract: This paper describes a new method for determining correspondence between points on pairs of surfaces based on shape, using a combination of geodesic distance and surface curvature. An initial sparse set of corresponding points is generated using a shape-based matching procedure. Geodesic interpolation is employed in order to capture the complex surface. In addition, surface correspondence and triangulation are computed simultaneously in a hierarchical way. Results applied to human cerebral cortical surfaces are shown to evaluate the approach.
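Geodesic distance on a triangulated surface is commonly approximated by shortest paths in the mesh's edge graph. A hedged sketch of that standard approximation (the paper's matching and interpolation steps are not shown, and `graph_geodesic` is our name):

```python
import heapq, math

def graph_geodesic(vertices, edges, source):
    """Approximate geodesic distances on a mesh by running Dijkstra
    over the edge graph with Euclidean edge lengths."""
    adj = {i: [] for i in range(len(vertices))}
    for a, b in edges:
        w = math.dist(vertices[a], vertices[b])
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

verts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(graph_geodesic(verts, [(0, 1), (1, 2)], 0)[2])  # 2.0
```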

159 citations


Book
01 Jan 2000
TL;DR: This thesis shows that it is in fact possible to obtain efficient algorithms for the nearest neighbor problem and a wide range of metrics, including Euclidean, Manhattan or maximum norms and Hausdorff metrics; some of the results hold even for general metrics.
Abstract: Consider the following problem: given a database of points in a multidimensional space, construct a data structure which, given any query point, finds the database point(s) closest to it. This problem, called nearest neighbor search, has been of great importance to several areas of computer science, including pattern recognition, databases, vector compression, computational statistics and data mining. Many of the above applications involve data sets whose size and dimensionality are very large. Therefore, it is crucial to design algorithms which scale well with the database size as well as with the dimension. The nearest neighbor problem is an example of a large class of proximity problems. Apart from the nearest neighbor, the class contains problems like closest pair, diameter (or furthest pair), minimum spanning tree and clustering problems. In the latter case the goal is to find a partition of points into k clusters, in order to minimize a certain function. Example functions are: the sum of the distances from each point to its nearest cluster representative (this problem is called k-median), the maximum such distance (k-center), the sum of all distances between points in same clusters (k-clustering), etc. Since these problems are ubiquitous, they have been investigated in computer science for a long while (e.g., in computational geometry). As a result of this research effort, many efficient solutions have been discovered for the case when the points lie in a space of small dimension. Unfortunately, their running time grows exponentially with the dimension. In this thesis we show that it is in fact possible to obtain efficient algorithms for the aforementioned problems, if we are satisfied with answers which are approximate.
The running time of our algorithms for the aforementioned problems has only polynomial dependence on the dimension, and sublinear (for the nearest neighbor problem) or subquadratic (for the closest pair, minimum spanning tree, clustering etc.) dependence on the number of input points. These results hold for a wide range of metrics, including Euclidean, Manhattan or maximum norms and Hausdorff metrics; some of the results hold even for general metrics. We support our theoretical results with their experimental evaluation.
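The flavor of trading exactness for speed can be sketched with a toy random-projection heuristic. This illustrates the general idea only, not the thesis's algorithms, and as written it carries no approximation guarantee; `approx_nearest` is a hypothetical name:

```python
import random, math

def approx_nearest(points, q, trials=5, seed=0):
    """Toy approximate nearest neighbor: project onto a few random
    directions, keep the best candidate under each 1D ordering,
    then return the closest candidate in the full space."""
    rng = random.Random(seed)
    d = len(q)
    candidates = set()
    for _ in range(trials):
        u = [rng.gauss(0, 1) for _ in range(d)]
        proj = lambda p: sum(a * b for a, b in zip(p, u))
        qp = proj(q)
        # best database point under this one-dimensional view
        candidates.add(min(range(len(points)),
                           key=lambda i: abs(proj(points[i]) - qp)))
    return min(candidates, key=lambda i: math.dist(points[i], q))

print(approx_nearest([(10.0, 10.0), (0.0, 0.0), (5.0, 5.0)], (0.0, 0.0)))  # 1
```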

130 citations


Journal ArticleDOI
TL;DR: A lower bound for approximate range searching based on partition trees of Ω(log n + (1/ε)^(d−1)), which implies optimality for convex ranges (assuming fixed dimensions), and empirical evidence showing that allowing small relative errors can significantly improve query execution times.
Abstract: The range searching problem is a fundamental problem in computational geometry, with numerous important applications. Most research has focused on solving this problem exactly, but lower bounds show that if linear space is assumed, the problem cannot be solved in polylogarithmic time, except for the case of orthogonal ranges. In this paper we show that if one is willing to allow approximate ranges, then it is possible to do much better. In particular, given a bounded range Q of diameter w and ε > 0, an approximate range query treats the range as a fuzzy object, meaning that points lying within distance εw of the boundary of Q either may or may not be counted. We show that in any fixed dimension d, a set of n points in R^d can be preprocessed in O(n log n) time and O(n) space, such that approximate queries can be answered in O(log n (1/ε)^d) time. The only assumption we make about ranges is that the intersection of a range and a d-dimensional cube can be computed in constant time (depending on dimension). For convex ranges, we tighten this to O(log n + (1/ε)^(d−1)) time. We also present a lower bound for approximate range searching based on partition trees of Ω(log n + (1/ε)^(d−1)), which implies optimality for convex ranges (assuming fixed dimensions). Finally, we give empirical evidence showing that allowing small relative errors can significantly improve query execution times.
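The fuzzy semantics of an approximate range query can be made concrete with a brute-force checker. A sketch for disk ranges (the function name and the (lower, upper) interface are ours, not the paper's):

```python
import math

def fuzzy_disk_count(points, center, radius, eps):
    """Approximate range counting for a disk of diameter w = 2*radius:
    points inside the shrunken disk must be counted, points outside the
    expanded disk must not be; points within eps*w of the boundary may
    go either way. Returns (lower, upper) bounds for any valid answer."""
    w = 2.0 * radius
    must, may = 0, 0
    for p in points:
        d = math.dist(p, center)
        if d <= radius - eps * w:
            must += 1          # definitely counted
        elif d <= radius + eps * w:
            may += 1           # in the fuzz band: optional
    return must, must + may

print(fuzzy_disk_count([(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)],
                       (0.0, 0.0), 1.0, 0.25))  # (1, 2)
```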

129 citations


Proceedings ArticleDOI
31 Oct 2000
TL;DR: An accelerated proximity query algorithm between moving convex polyhedra that combines Voronoi-based feature tracking with a multi-level-of-detail representation that provides a progressive refinement framework for collision detection and distance queries is presented.
Abstract: We present an accelerated proximity query algorithm between moving convex polyhedra. The algorithm combines Voronoi-based feature tracking with a multi-level-of-detail representation, in order to adapt to the variation in levels of coherence and speed up the computation. It provides a progressive refinement framework for collision detection and distance queries. We have implemented our algorithm and have observed significant performance improvements in our experiments, especially on scenarios where the coherence is low.

01 Jan 2000
TL;DR: In this paper, the closest point transform to a manifold on a rectilinear grid in low dimensional spaces is computed by solving the Eikonal equation |∇u| = 1 by the method of characteristics.
Abstract: This paper presents a new algorithm for computing the closest point transform to a manifold on a rectilinear grid in low dimensional spaces. The closest point transform finds the closest point on a manifold and the Euclidean distance to a manifold for all the points in a grid (or the grid points within a specified distance of the manifold). We consider manifolds composed of simple geometric shapes, such as a set of points, piecewise linear curves or triangle meshes. The algorithm computes the closest point on and distance to the manifold by solving the Eikonal equation |∇u| = 1 by the method of characteristics. The method of characteristics is implemented efficiently with the aid of computational geometry and polygon/polyhedron scan conversion. The computed distance is accurate to within machine precision. The computational complexity of the algorithm is linear in both the number of grid points and the complexity of the manifold. Thus it has optimal computational complexity. Examples are presented for piecewise linear curves in 2D and triangle meshes in 3D.
1 The Closest Point Transform. Let u(x), x ∈ R^n, be the distance from the point x to a manifold S. If dim(S) = n − 1 (for example, curves in 2D or surfaces in 3D), then the distance is signed. The orientation of the manifold determines the sign of the distance. One can adopt the convention that the outward normals point in the direction of positive or negative distance. In order for the distance to be well-defined, the manifold must be orientable and have a consistent orientation. A Klein bottle in 3D, for example, is not orientable. Two concentric circles in 2D have consistent orientations only if the normals of the inner circle point "inward" and the normals of the outer circle point "outward", or vice versa. Otherwise the distance would be ill-defined in the region between the circles. For manifolds which are not closed, the distance is ill-defined in any neighborhood of the boundary. However, the distance is well-defined in neighborhoods of the manifold which do not contain the boundary. If dim(S) < n − 1 (for example, a set of points in 2D or a curve in 3D), the distance is unsigned (non-negative).
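For small inputs, the closest point transform can be checked against a brute-force computation. A sketch for 2D piecewise linear curves (quadratic in grid points times segments, unlike the paper's linear-time scan-conversion method; `closest_point_transform` here is our toy reference, not the paper's algorithm):

```python
import math

def closest_point_transform(grid_points, segments):
    """For every grid point, return the nearest point on a set of 2D
    line segments and the (unsigned) distance to it, by brute force."""
    def closest_on_segment(p, a, b):
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        L2 = dx * dx + dy * dy
        # parameter of the perpendicular foot, clamped to the segment
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
        return (ax + t * dx, ay + t * dy)
    out = []
    for g in grid_points:
        best = min((closest_on_segment(g, a, b) for a, b in segments),
                   key=lambda c: math.dist(g, c))
        out.append((best, math.dist(g, best)))
    return out

print(closest_point_transform([(0.0, 2.0)],
                              [((-1.0, 0.0), (1.0, 0.0))]))  # [((0.0, 0.0), 2.0)]
```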

Journal ArticleDOI
TL;DR: In this paper, the computational geometric concept of convex hulls is used, and a new heuristic algorithm is suggested to arrive at the inner hull; Equi-Distant (Voronoi) and newly proposed Equi-Angular diagrams are employed for establishing the assessment features under different conditions.
Abstract: Data for evaluating circularity error can be obtained from coordinate measuring machines or form measuring instruments. In this article, appropriate methods based on computational geometric techniques have been developed to deal with coordinate measurement data and form data. The computational geometric concepts of convex hulls are used, and a new heuristic algorithm is suggested to arrive at the inner hull. Equi-Distant (Voronoi) and newly proposed Equi-Angular diagrams are employed for establishing the assessment features under different conditions. The algorithms developed in this article are implemented and validated with the simulated data and the data available in the literature.
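The outer convex hull of the measured points is the standard starting ingredient for such assessments. A sketch using Andrew's monotone chain (the paper's inner-hull heuristic and Equi-Angular diagrams are not reproduced here):

```python
def convex_hull(points):
    """Andrew's monotone chain: counterclockwise convex hull of 2D
    points, collinear boundary points removed."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()  # non-left turn: last point is not on the hull
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```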

Journal ArticleDOI
TL;DR: A polynomial time solution for the 3D version of the Art Gallery Problem is presented and an analysis of the overall quality of the solution is given.

Proceedings ArticleDOI
24 Apr 2000
TL;DR: The computation of the generalized Voronoi diagram provides fast proximity query toolkits for motion planning and their performance for path planning in a complex dynamic environment composed of more than 140,000 polygons is demonstrated.
Abstract: We present techniques for fast motion planning by using discrete approximations of generalized Voronoi diagrams, computed with graphics hardware. Approaches based on this diagram computation are applicable to both static and dynamic environments of fairly high complexity. We compute a discrete Voronoi diagram by rendering a 3D distance mesh for each Voronoi site. The sites can be points, line segments, polygons, polyhedra, curves and surfaces. The computation of the generalized Voronoi diagram provides fast proximity query toolkits for motion planning. The tools provide the distance to the nearest obstacle stored in the Z-buffer, as well as the Voronoi boundaries, Voronoi vertices and weighted Voronoi graphs extracted from the frame buffer using continuation methods. We have implemented these algorithms and demonstrated their performance for path planning in a complex dynamic environment composed of more than 140,000 polygons.
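The hardware-computed diagram can be mimicked in software by brute force. A toy sketch for point sites only (the paper also handles segments, polygons, curves and surfaces, and reads distances from the Z-buffer rather than looping explicitly):

```python
import math

def discrete_voronoi(width, height, sites):
    """Discrete generalized Voronoi diagram on a grid: label each
    pixel with the index of its nearest site."""
    return [[min(range(len(sites)),
                 key=lambda i: math.dist((x, y), sites[i]))
             for x in range(width)]
            for y in range(height)]

# Two point sites on a 4x1 grid; x=1 is nearer site 0, x=2 nearer site 1.
print(discrete_voronoi(4, 1, [(0, 0), (3, 0)]))  # [[0, 0, 1, 1]]
```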

Journal ArticleDOI
TL;DR: This paper connects the predominantly combinatorial work in classical computational geometry with the numerical interest in mesh generation; it focuses on the two- and three-dimensional case and covers results obtained during the twentieth century.
Abstract: The Delaunay triangulation of a finite point set is a central theme in computational geometry. It finds its major application in the generation of meshes used in the simulation of physical processes. This paper connects the predominantly combinatorial work in classical computational geometry with the numerical interest in mesh generation. It focuses on the two- and three-dimensional case and covers results obtained during the twentieth century.
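The empty-circle property at the heart of the Delaunay triangulation is decided by the classical incircle determinant. A standard sketch (exact-arithmetic concerns, central to robust implementations, are ignored here):

```python
def in_circle(a, b, c, d):
    """Incircle test: positive when d lies inside the circle through
    a, b, c (taken counterclockwise), negative when outside.
    A Delaunay triangle's circumcircle contains no other input point."""
    m = []
    for px, py in (a, b, c):
        m.append([px - d[0], py - d[1],
                  (px - d[0]) ** 2 + (py - d[1]) ** 2])
    # 3x3 determinant, expanded along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# The circle's center is inside; a far-away point is outside.
print(in_circle((1, 0), (0, 1), (-1, 0), (0, 0)) > 0)  # True
print(in_circle((1, 0), (0, 1), (-1, 0), (5, 5)) < 0)  # True
```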

Proceedings ArticleDOI
24 Apr 2000
TL;DR: In this paper, the authors define exact cellular decompositions where critical points of Morse functions indicate the location of cell boundaries, and derive a general framework for defining decompositions in terms of critical points.
Abstract: Exact cellular decompositions are structures that globally encode the topology of a robot's free space, while locally describing the free space geometry. These structures have been widely used for path planning between two points, but can be used for mapping and coverage of robot free spaces. In this paper, we define exact cellular decompositions where critical points of Morse functions indicate the location of cell boundaries. Morse functions are those whose critical points are non-degenerate. Between critical points, the structure of a space is effectively the same, so simple control strategies to achieve tasks, such as coverage, are feasible within each cell. In this paper, we derive a general framework for defining decompositions in terms of critical points and then give examples, each corresponding to a different task. All of the results in this paper are derived in an m-dimensional Euclidean space, but the examples depicted in the figures are 2D and 3D for ease of presentation.

Proceedings ArticleDOI
01 Oct 2000
TL;DR: An approach is proposed for the integrated evaluation of the error introduced both by the modification of the domain and by the approximation of the field when reducing the size of a volume dataset.
Abstract: The techniques for reducing the size of a volume dataset by preserving both the geometrical/topological shape and the information encoded in an attached scalar field are attracting growing interest. Given the framework of incremental 3D mesh simplification based on edge collapse, we propose an approach for the integrated evaluation of the error introduced by both the modification of the domain and the approximation of the field of the original volume dataset. We present and compare various techniques to evaluate the approximation error or to produce a sound prediction. A flexible simplification tool has been implemented, which provides different degrees of accuracy and computational efficiency for the selection of the edge to be collapsed. Techniques for preventing a geometric or topological degeneration of the mesh are also presented.

Book ChapterDOI
26 Jun 2000
TL;DR: The key idea is to measure the net outward flux of a vector field per unit volume, and to detect locations where a conservation of energy principle is violated.
Abstract: The medial surface of a volumetric object is of significant interest for shape analysis. However, its numerical computation can be subtle. Methods based on Voronoi techniques preserve the object's topology, but heuristic pruning measures are introduced to remove unwanted faces. Approaches based on Euclidean distance functions can localize medial surface points accurately, but often at the cost of altering the object's topology. In this paper we introduce a new algorithm for computing medial surfaces which addresses these concerns. The method is robust and accurate, has low computational complexity, and preserves topology. The key idea is to measure the net outward flux of a vector field per unit volume, and to detect locations where a conservation of energy principle is violated. This is done in conjunction with a thinning process applied in a cubic lattice. We illustrate the approach with examples of medial surfaces of synthetic objects and complex anatomical structures obtained from medical images.

Journal ArticleDOI
TL;DR: Prioritized-Layered Projection is a technique for fast rendering of high depth complexity scenes by estimating the visible polygons of a scene from a given viewpoint incrementally, one primitive at a time and is suitable for the computation of partially correct images for use as part of time-critical rendering systems.
Abstract: Prioritized-Layered Projection (PLP) is a technique for fast rendering of high depth complexity scenes. It works by estimating the visible polygons of a scene from a given viewpoint incrementally, one primitive at a time. It is not a conservative technique; instead, PLP is suitable for the computation of partially correct images for use as part of time-critical rendering systems. From a very high level, PLP amounts to a modification of a simple view-frustum culling algorithm; however, it requires the computation of a special occupancy-based tessellation and the assignment of a solidity value to each cell of the tessellation, which is used to compute a special ordering on how primitives get projected. The authors detail the PLP algorithm, its main components, and implementation. They also provide experimental evidence of its performance, including results on two types of spatial tessellation (using octree- and Delaunay-based tessellations), and several datasets. They also discuss several extensions of their technique.

Proceedings ArticleDOI
01 May 2000
TL;DR: This paper shows that such a point set permits a small perturbation whose Delaunay triangulation contains no slivers, and gives deterministic algorithms that compute the perturbation of n points in time O(n log n) with one processor and in time O(log n) with O(n) processors.
Abstract: A sliver is a tetrahedron whose four vertices lie close to a plane and whose perpendicular projection to that plane is a convex quadrilateral with no short edge. Slivers are both undesirable and ubiquitous in 3-dimensional Delaunay triangulations. Even when the point set is well-spaced, slivers may result. This paper shows that such a point set permits a small perturbation whose Delaunay triangulation contains no slivers. It also gives deterministic algorithms that compute the perturbation of n points in time O(n log n) with one processor and in time O(log n) with O(n) processors.
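A sliver has small volume even though its vertices are well separated, which a simple shape measure exposes. A hedged sketch (this ratio is a common heuristic indicator, not the paper's formal definition, and any threshold is our choice):

```python
import math

def sliver_measure(a, b, c, d):
    """Volume of the tetrahedron divided by the cube of its longest
    edge: near zero for a flat tetrahedron with long edges."""
    def sub(p, q):
        return tuple(x - y for x, y in zip(p, q))
    u, v, w = sub(b, a), sub(c, a), sub(d, a)
    # |det(u, v, w)| / 6 is the tetrahedron volume
    vol = abs(u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0])) / 6.0
    longest = max(math.dist(p, q) for p, q in
                  [(a, b), (a, c), (a, d), (b, c), (b, d), (c, d)])
    return vol / longest ** 3

# A well-shaped tetrahedron scores much higher than a nearly flat one.
print(sliver_measure((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
print(sliver_measure((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.01)))
```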

Proceedings ArticleDOI
03 May 2000
TL;DR: This work presents a very general geometrical correction method that takes advantage of an efficient use of the conjugate gradient algorithm to find the appropriate displacement of the mesh vertices that would satisfy all the constraints simultaneously, and according to momentum conservation laws.
Abstract: We present a very general geometrical correction method for enforcing collisions and other geometrical constraints between polygonal mesh surfaces. It is based on a global resolution scheme that takes advantage of an efficient use of the conjugate gradient algorithm to find the appropriate displacement of the mesh vertices that would satisfy all the constraints simultaneously, and according to momentum conservation laws. This method has been implemented in a cloth simulation system along with a collision response model that enforces a minimum "thickness" distance between cloth surfaces, which can be efficiently integrated in a simulation scheme based on implicit integration. Some provided examples illustrate the efficiency of the method.

Journal ArticleDOI
Hassan Masum1
TL;DR: This second edition of the book is obviously the product of much effort by the authors, and although some improvements are possible, on the whole this book is worth considering both as a text for a first computational geometry course and as a refresher on basic concepts.
Abstract: Computational Geometry is a wide-ranging introductory text which exposes readers to the main themes in modern computational geometry. Each chapter introduces a subfield of computational geometry, via natural problems and basic algorithms; exercises and notes help to flesh out the chapter material. This second edition of the book is obviously the product of much effort by the authors, and although some improvements are possible, on the whole this book is worth considering both as a text for a first computational geometry course and as a refresher on basic concepts.Features of interest include:Beginning each chapter with a motivating real-world example, to naturally introduce the algorithms. The solution of this example leads to the key algorithmic idea of the chapter.Emphasis on derivation of algorithms, as opposed to a cookbook-style presentation. The authors often spend a large amount of time to work through several suboptimal solutions for a problem before presenting the final one. While not suitable for the already-knowledgeable practitioner, this gives the far-larger category of students or other less-than-expert readers training in the process of generating new algorithms.Good layout, with wide margins containing intuition-generating diagrams. The Notes and Comments section at the end of each chapter also provides useful orientation to further algorithms and context in each subfield.Wide coverage of algorithms. As the authors say: "In general we have chosen the solution that is most easy to understand and implement. This is not necessarily the most efficient solution. We also took care that the book contains a good mixture of techniques like divide-and-conquer, plane sweep, and randomized algorithms. We decided not to treat all sorts of variations to the problems; we felt it is more important to introduce all main topics in computational geometry than to give more detailed information about a smaller number of topics."

Book ChapterDOI
Jürg Nievergelt1
25 Nov 2000
TL;DR: Reverse search is described and illustrated on a case study of enumerative optimization: enumerating the k shortest Euclidean spanning trees.
Abstract: For half a century since computers came into existence, the goal of finding elegant and efficient algorithms to solve "simple" (well-defined and well-structured) problems has dominated algorithm design. Over the same time period, both processing and storage capacity of computers have increased by roughly a factor of a million. The next few decades may well give us a similar rate of growth in raw computing power, due to various factors such as continuing miniaturization, parallel and distributed computing. If a quantitative change of orders of magnitude leads to qualitative advances, where will the latter take place? Only empirical research can answer this question. Asymptotic complexity theory has emerged as a surprisingly effective tool for predicting run times of polynomial-time algorithms. For NP-hard problems, on the other hand, it yields overly pessimistic bounds. It asserts the non-existence of algorithms that are efficient across an entire problem class, but ignores the fact that many instances, perhaps including those of interest, can be solved efficiently. For such cases we need a complexity measure that applies to problem instances, rather than to over-sized problem classes. Combinatorial optimization and enumeration problems are modeled by state spaces that usually lack any regular structure. Exhaustive search is often the only way to handle such "combinatorial chaos". Several general purpose search algorithms are used under different circumstances. We describe reverse search and illustrate this technique on a case study of enumerative optimization: enumerating the k shortest Euclidean spanning trees.


Journal ArticleDOI
TL;DR: This paper presents a brief overview of algorithms, theorems, and software in mesh generation.
Abstract: Mesh generation is a great example of inter-disciplinary research. Its development is built upon advances in computational and combinatorial geometry, data structures, numerical analysis, and scientific applications. Its success is justified not only by mathematical proofs about the quality and the numerical relevancy of geometry-based meshing algorithms, but also by the performance of meshing software in real applications. It embraces both provably good algorithms and practical heuristics. This paper presents a brief overview of algorithms, theorems, and software in mesh generation.

Proceedings ArticleDOI
01 Oct 2000
TL;DR: A new algorithm is developed to automatically generate the isosurface and triangulation tables for any dimension, which allows the efficient calculation of 4D isOSurfaces, which can be interactively sliced to provide smooth animation or slicing through oblique hyperplanes.
Abstract: Visualization algorithms have seen substantial improvements in the past several years. However, very few algorithms have been developed for directly studying data in dimensions higher than three. Most algorithms require a sampling in three dimensions before applying any visualization algorithms. This sampling typically ignores vital features that may be present when examined in oblique cross-sections, and places an undue burden on system resources when animation through additional dimensions is desired. For time-varying data of large data sets, smooth animation is desired at interactive rates. We provide a fast Marching Cubes-like algorithm for hypercubes of any dimension. To support this, we have developed a new algorithm to automatically generate the isosurface and triangulation tables for any dimension. This allows the efficient calculation of 4D isosurfaces, which can be interactively sliced to provide smooth animation or slicing through oblique hyperplanes. The former allows for smooth animation in a very compressed format. The latter provides better tools to study time-evolving features as they move downstream. We also provide examples in using this technique to show interval volumes or the sensitivity of a particular isovalue threshold.
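Automatic table generation starts from the 2^(2^d) sign configurations of a d-dimensional hypercube. A minimal sketch of the first step, finding which edges the isosurface crosses for one configuration (`crossed_edges` is our name, and the triangulation step that turns edge crossings into simplices is omitted):

```python
from itertools import product

def crossed_edges(d, inside):
    """For one sign configuration of a d-dimensional hypercube
    (`inside` = set of vertices above the isovalue, vertices given as
    d-bit tuples), list the edges the isosurface crosses."""
    crossed = []
    for v in product((0, 1), repeat=d):
        for axis in range(d):
            if v[axis] == 0:  # visit each edge once, from its low endpoint
                w = v[:axis] + (1,) + v[axis + 1:]
                if (v in inside) != (w in inside):
                    crossed.append((v, w))
    return crossed

# One corner of a square above the isovalue: the contour cuts two edges.
print(len(crossed_edges(2, {(0, 0)})))  # 2
```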

Proceedings ArticleDOI
01 Jun 2000
TL;DR: A new, more stable algorithm based on multi-resolution meshes is proposed to overcome the problem of convergence becoming unstable for object meshes consisting of several thousand vertices, using a modification of the original procedure to map object surfaces to the unit sphere.
Abstract: A procedure for the parameterization of surface meshes of objects with spherical topology is presented. The generation of such a parameterization has been formulated and solved as a large constrained optimization problem by C. Brechbuhler (1995), but the convergence of this algorithm becomes unstable for object meshes consisting of several thousand vertices. We propose a new, more stable algorithm to overcome this problem using multi-resolution meshes. A triangular mesh is mapped to a sphere by harmonic mapping. Next, a mesh hierarchy is constructed. The coarsest level is then optimized using a modification of the original procedure to map object surfaces to the unit sphere. The result is used as a starting point for the mapping of the next finer mesh, a process which is repeated until the final result is obtained. The new approach is compared to the original one and some parameterized object surfaces are presented.

Proceedings ArticleDOI
31 Oct 2000
TL;DR: A new image-based approach using epipolar geometry to drive the robot to a desired one using only image data provided during the robot motion, which assumes no prior knowledge about the 3D structure or about a desired image to reach.
Abstract: In this paper, we propose a new image-based approach using epipolar geometry. The problem which is addressed can be stated as follows: starting from a Cartesian situation, we want to drive the robot to a desired one using only image data provided during the robot motion. With regard to classical image-based visual servoing, we assume no prior knowledge about the 3D structure or about a desired image to reach. Simulation and experimental results are shown.

Book ChapterDOI
05 Sep 2000
TL;DR: It is shown that many basic geometric properties have very efficient testing algorithms, whose running time is significantly smaller than the object description size.
Abstract: We consider the notion of property testing as applied to computational geometry. We aim at developing efficient algorithms which determine whether a given (geometrical) object has a predetermined property Q or is "far" from any object having the property. We show that many basic geometric properties have very efficient testing algorithms, whose running time is significantly smaller than the object description size.