
Showing papers on "Computational geometry published in 1997"


Book
01 Jan 1997
TL;DR: This book presents an introduction to computational geometry focused on algorithms, with all techniques related to particular applications in robotics, graphics, CAD/CAM, and geographic information systems.
Abstract: This introduction to computational geometry focuses on algorithms. Motivation is provided from the application areas as all techniques are related to particular applications in robotics, graphics, CAD/CAM, and geographic information systems. Modern insights in computational geometry are used to provide solutions that are both efficient and easy to understand and implement.

4,805 citations


Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work has developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models, and which also supports non-manifold surface models.
Abstract: Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations
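As a concrete illustration of the quadric machinery named above, the short Python/NumPy sketch below accumulates a plane-based error quadric for a vertex and evaluates the error a candidate contraction position would incur; it is a minimal reading of the published technique, not the authors' code, and the toy triangles and candidate position are invented.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental error quadric K_p = p p^T for the supporting plane of a triangle."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    p = np.append(n, d)                 # plane [a, b, c, d] with a^2 + b^2 + c^2 = 1
    return np.outer(p, p)               # 4x4 symmetric quadric

def contraction_error(Q, v):
    """Quadric error v^T Q v of placing the contracted vertex at position v."""
    vh = np.append(v, 1.0)              # homogeneous coordinates
    return float(vh @ Q @ vh)

# toy usage: a vertex shared by two triangles; error of moving it to a candidate position
tris = [(np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])),
        (np.array([1., 0., 0.]), np.array([1., 1., 0.2]), np.array([0., 1., 0.]))]
Q = sum(plane_quadric(*t) for t in tris)    # vertex quadric = sum of incident plane quadrics
print(contraction_error(Q, np.array([0.5, 0.5, 0.5])))
```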

3,564 citations


Proceedings ArticleDOI
Hugues Hoppe1
03 Aug 1997
TL;DR: This paper defines efficient refinement criteria based on the view frustum, surface orientation, and screen-space geometric error, and develops a real-time algorithm for incrementally refining and coarsening the mesh according to these criteria.
Abstract: Level-of-detail (LOD) representations are an important tool for realtime rendering of complex geometric environments. The previously introduced progressive mesh representation defines for an arbitrary triangle mesh a sequence of approximating meshes optimized for view-independent LOD. In this paper, we introduce a framework for selectively refining an arbitrary progressive mesh according to changing view parameters. We define efficient refinement criteria based on the view frustum, surface orientation, and screen-space geometric error, and develop a real-time algorithm for incrementally refining and coarsening the mesh according to these criteria. The algorithm exploits view coherence, supports frame rate regulation, and is found to require less than 15% of total frame time on a graphics workstation. Moreover, for continuous motions this work can be amortized over consecutive frames. In addition, smooth visual transitions (geomorphs) can be constructed between any two selectively refined meshes. A number of previous schemes create view-dependent LOD meshes for height fields (e.g. terrains) and parametric surfaces (e.g. NURBS). Our framework also performs well for these special cases. Notably, the absence of a rigid subdivision structure allows more accurate approximations than with existing schemes. We include results for these cases as well as for general meshes. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation Display algorithms; I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling surfaces and object representations. Additional
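The three refinement criteria can be thought of as one cheap test per candidate region; the Python sketch below is a loose illustration (bounding-sphere frustum culling, normal-cone backfacing test, projected screen-space error threshold) rather than Hoppe's implementation, and every parameter name and threshold here is an assumption.

```python
import numpy as np

def should_refine(center, radius, cone_axis, cone_angle, eye, frustum_planes,
                  geom_err, err_tol):
    """Decide whether a region of the progressive mesh should be refined for the
    current view, using stand-ins for the three criteria named in the abstract."""
    # 1. view-frustum criterion: skip regions whose bounding sphere is fully outside
    for n, d in frustum_planes:          # plane (unit normal n, offset d), inside if n.x + d >= 0
        if np.dot(n, center) + d < -radius:
            return False
    # 2. surface-orientation criterion: skip regions whose whole normal cone faces away
    to_eye = eye - center
    to_eye = to_eye / max(np.linalg.norm(to_eye), 1e-9)
    if np.dot(cone_axis, to_eye) < -np.sin(cone_angle):
        return False
    # 3. screen-space geometric error: refine only if the projected error is too large
    dist = max(np.linalg.norm(eye - center), 1e-9)
    return geom_err / dist > err_tol
```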

973 citations


01 May 1997
TL;DR: Methods for simplifying and approximating polygonal surfaces from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically.
Abstract: This paper surveys methods for simplifying and approximating polygonal surfaces. A polygonal surface is a piecewise-linear surface in 3-D defined by a set of polygons; typically a set of triangles. Methods from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically. The surface types range from height fields (bivariate functions), to manifolds, to non-manifold self-intersecting surfaces. Piecewise-linear curve simplification is also briefly surveyed.

594 citations


Book ChapterDOI
07 Jul 1997
TL;DR: An overview of the LEDA platform for combinatorial and geometric computing and an account of its development are given and some recent theoretical developments are discussed.
Abstract: We give an overview of the LEDA platform for combinatorial and geometric computing and an account of its development. We discuss our motivation for building LEDA and to what extent we have reached our goals. We also discuss some recent theoretical developments. This paper contains no new technical material. It is intended as a guide to existing publications about the system. We refer the reader also to our web-pages for more information.

473 citations


Proceedings ArticleDOI
03 Aug 1997
TL;DR: An optimization algorithm for constructing PSC representations of graphics surface models is developed, and the framework is demonstrated on models that are both geometrically and topologically complex.
Abstract: In this paper, we introduce the progressive simplicial complex (PSC) representation, a new format for storing and transmitting triangulated geometric models. Like the earlier progressive mesh (PM) representation, it captures a given model as a coarse base model together with a sequence of refinement transformations that progressively recover detail. The PSC representation makes use of a more general refinement transformation, allowing the given model to be an arbitrary triangulation (e.g. any dimension, non-orientable, non-manifold, non-regular), and the base model to always consist of a single vertex. Indeed, the sequence of refinement transformations encodes both the geometry and the topology of the model in a unified multiresolution framework. The PSC representation retains the advantages of PM’s. It defines a continuous sequence of approximating models for runtime level-of-detail control, allows smooth transitions between any pair of models in the sequence, supports progressive transmission, and offers a space-efficient representation. Moreover, by allowing changes to topology, the PSC sequence of approximations achieves better fidelity than the corresponding PM sequence. We develop an optimization algorithm for constructing PSC representations for graphics surface models, and demonstrate the framework on models that are both geometrically and topologically complex. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling surfaces and object representations. Additional

394 citations


Journal ArticleDOI
TL;DR: In this article, the authors present simulations of large-scale landscape evolution on tectonic time scales obtained from a new numerical model which allows for arbitrary spatial discretization and is also ideally suited for problems that require large variations in spatial discretization and/or self-adaptive meshing.
Abstract: We present simulations of large-scale landscape evolution on tectonic time scales obtained from a new numerical model which allows for arbitrary spatial discretization. The new method makes use of efficient algorithms from the field of computational geometry to compute the set of natural neighbours of any irregular distribution of points in a plane. The natural neighbours are used to solve geomorphic equations that include erosion/deposition by channelled flow and diffusion. The algorithm has great geometrical flexibility, which makes it possible to solve problems involving complex boundaries, radially symmetrical uplift functions and horizontal tectonic transport across strike-slip faults. The algorithm is also ideally suited for problems which require large variations in spatial discretization and/or self-adaptive meshing. We present a number of examples to illustrate the power of the new approach and its advantages over more ‘classical’ models based on regular (rectangular) discretization. We also demonstrate that the synthetic river networks and landscapes generated by the model obey the laws of network composition and have scaling properties similar to those of natural landscapes. Finally we explain how orographically controlled precipitation and flexural isostasy may be easily incorporated in the model without sacrificing efficiency.
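The "natural neighbours" the abstract relies on are simply the Delaunay neighbours of each node, which makes the geometric core easy to sketch; the Python fragment below assumes SciPy is available and uses a toy neighbour-averaging update in place of the paper's erosion/deposition equations.

```python
import numpy as np
from scipy.spatial import Delaunay

def natural_neighbours(points):
    """For each node of an irregular planar point set, return the indices of its
    natural neighbours, i.e. the nodes it shares a Delaunay edge with."""
    tri = Delaunay(points)
    indptr, indices = tri.vertex_neighbor_vertices
    return [indices[indptr[i]:indptr[i + 1]] for i in range(len(points))]

pts = np.random.rand(200, 2)                      # irregular distribution of nodes
nbrs = natural_neighbours(pts)
z = np.random.rand(len(pts))                      # some nodal field (e.g. elevation)
# toy diffusion-like step on the irregular mesh: relax each node towards its neighbours
z_new = np.array([z[n].mean() if len(n) else z[i] for i, n in enumerate(nbrs)])
```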

360 citations


Book ChapterDOI
24 Jun 1997
TL;DR: This paper presents an intuitive model for measuring the similarity between two time series that takes into account outliers, different scaling functions, and variable sampling rates, and shows the naturalness of this notion of similarity.
Abstract: Similarity of objects is one of the crucial concepts in several applications, including data mining. For complex objects, similarity is nontrivial to define. In this paper we present an intuitive model for measuring the similarity between two time series. The model takes into account outliers, different scaling functions, and variable sampling rates. Using methods from computational geometry, we show that this notion of similarity can be computed in polynomial time. Using statistical approximation techniques, the algorithms can be speeded up considerably. We give preliminary experimental results that show the naturalness of the notion.
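The abstract does not spell the model out, so purely to make the ingredients concrete, here is a crude Python stand-in for an outlier-tolerant, scale- and sampling-normalised comparison of two series; it is an invented illustration, not the paper's similarity measure or its computational-geometry algorithm.

```python
def _normalise(series):
    """Rescale a list of (t, x) samples so that both time and value lie in [0, 1]."""
    ts = [t for t, _ in series]
    xs = [x for _, x in series]
    t0, t1 = min(ts), max(ts)
    x0, x1 = min(xs), max(xs)
    return [((t - t0) / ((t1 - t0) or 1.0), (x - x0) / ((x1 - x0) or 1.0))
            for t, x in series]

def roughly_similar(a, b, eps=0.05, frac=0.9):
    """Two series are called similar if, after normalisation, at least `frac` of the
    samples of each have a sample of the other within `eps` in time and value;
    unmatched samples play the role of outliers."""
    a, b = _normalise(a), _normalise(b)
    def coverage(s, other):
        hits = sum(1 for t, x in s
                   if any(abs(t - u) <= eps and abs(x - y) <= eps for u, y in other))
        return hits / len(s)
    return min(coverage(a, b), coverage(b, a)) >= frac
```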

336 citations


Proceedings ArticleDOI
17 Jun 1997
TL;DR: The paper presents a new method for matching individual line segments between images that uses both grey-level information and the multiple view geometric relations between the images and eliminates all mismatches.
Abstract: The paper presents a new method for matching individual line segments between images. The method uses both grey-level information and the multiple view geometric relations between the images. For image pairs epipolar geometry facilitates the computation of a cross-correlation based matching score for putative line correspondences. For image triplets cross-correlation matching scores are used in conjunction with line transfer based on the trifocal geometry. Algorithms are developed for both short and long range motion. In the case of long range motion the algorithm involves evaluating a one parameter family of plane induced homographies. The algorithms are robust to deficiencies in the line segment extraction and partial occlusion. Experimental results are given for image pairs and triplets, for varying motions between views, and for different scene types. The three view algorithm eliminates all mismatches.
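For context, the epipolar constraint that gates putative matches between an image pair is only a few lines; the Python fragment below assumes the fundamental matrix F is already known and shows only the geometric gating, not the paper's cross-correlation scoring or trifocal transfer.

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l' = F x in the second image for a pixel x = (u, v) of the first,
    returned as (a, b, c) with a*u' + b*v' + c = 0 and a^2 + b^2 = 1."""
    l = F @ np.array([x[0], x[1], 1.0])
    return l / np.linalg.norm(l[:2])

def epipolar_distance(l, x):
    """Distance of an image point to an epipolar line; small values keep a putative
    correspondence alive for the correlation-based scoring stage."""
    return abs(l[0] * x[0] + l[1] * x[1] + l[2])
```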

326 citations


Journal ArticleDOI
TL;DR: A measure which combines the relative fidelity and efficiency of a curve segmentation is described, and this measure is used to compare the application of 23 algorithms to a curve first used by Teh and Chin (1989).
Abstract: Given the enormous number of available methods for finding polygonal approximations to curves, techniques are required to assess the different algorithms. Some of the standard approaches are shown to be unsuitable if the approximations contain varying numbers of lines. Instead, we suggest assessing an algorithm's results relative to an optimal polygon, and describe a measure which combines the relative fidelity and efficiency of a curve segmentation. We use this measure to compare the application of 23 algorithms to a curve first used by Teh and Chin (1989); their integral square errors (ISEs) are assessed relative to the optimal ISE. In addition, using an example of pose estimation, it is shown how goal-directed evaluation can be used to select an appropriate assessment criterion.
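To make the quantities concrete, the Python sketch below computes the integral square error (ISE) of a polygonal approximation of a digital curve and one plausible way of combining fidelity and efficiency relative to an optimal polygon; the exact combination used in the paper is not reproduced, and the function names are illustrative.

```python
import math

def point_segment_dist2(p, a, b):
    """Squared distance from point p to the segment ab (all 2-D tuples)."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return (px - cx) ** 2 + (py - cy) ** 2

def integral_square_error(curve, breakpoints):
    """ISE of a polygonal approximation: breakpoints are indices into `curve`, and each
    curve point is measured against the segment spanning its interval."""
    ise = 0.0
    for s, e in zip(breakpoints, breakpoints[1:]):
        a, b = curve[s], curve[e]
        ise += sum(point_segment_dist2(curve[i], a, b) for i in range(s, e + 1))
    return ise

def merit(ise, n_lines, ise_opt, n_lines_opt):
    """One way to fold fidelity (error relative to the optimal polygon) and efficiency
    (number of lines relative to the optimal polygon) into a single score."""
    fidelity = ise_opt / max(ise, 1e-12)
    efficiency = n_lines_opt / n_lines
    return 100.0 * math.sqrt(fidelity * efficiency)
```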

263 citations


Journal ArticleDOI
TL;DR: This work proposes two strategies which lead to either surface skeletons or wireframe skeletons, and two angular criteria are proposed that allow a size-invariant hierarchy of simplified skeletons to be built.

Book ChapterDOI
01 Jan 1997
TL;DR: This work surveys various forms of the optimal-path problem, primarily in two and three dimensions and for motion of a single point, since most results have focused on these cases.
Abstract: Computing an optimal path in a geometric domain is a fundamental problem in computational geometry, with applications in robotics, geographic information systems (GIS), wire routing, etc.

Journal ArticleDOI
TL;DR: In this article, a theoretical framework for the perception of specular surface geometry is introduced, based on the notion of caustics, and a feature classification algorithm is developed that distinguishes real and virtual features from their image trajectories that result from observer motion.
Abstract: A theoretical framework is introduced for the perception of specular surface geometry. When an observer moves in three-dimensional space, real scene features such as surface markings remain stationary with respect to the surfaces they belong to. In contrast, a virtual feature which is the specular reflection of a real feature, travels on the surface. Based on the notion of caustics, a feature classification algorithm is developed that distinguishes real and virtual features from their image trajectories that result from observer motion. Next, using support functions of curves, a closed-form relation is derived between the image trajectory of a virtual feature and the geometry of the specular surface it travels on. It is shown that, in the 2D case, where camera motion and the surface profile are coplanar, the profile is uniquely recovered by tracking just two unknown virtual features. Finally, these results are generalized to the case of arbitrary 3D surface profiles that are traveled by virtual features when camera motion is not confined to a plane. This generalization includes a number of mathematical results that substantially enhance the present understanding of specular surface geometry. An algorithm is developed that uniquely recovers 3D surface profiles using a single virtual feature tracked from the occluding boundary of the object. All theoretical derivations and proposed algorithms are substantiated by experiments.

Proceedings ArticleDOI
01 Aug 1997
TL;DR: A model of time-series similarity that allows outliers, different scaling functions, and variable sampling rates is analyzed and several deterministic and randomized algorithms for computing this notion of similarity are presented.
Abstract: Given a pair of nonidentical complex objects, defining (and determining) how similar they are to each other is a nontrivial problem. In data mining applications, one frequently needs to determine the similarity between two time series. We analyze a model of time-series similarity that allows outliers, different scaling functions, and variable sampling rates. We present several deterministic and randomized algorithms for computing this notion of similarity. The algorithms are based on nontrivial tools and methods from computational geometry. In particular, we use properties of families of well-separated geometric sets. The randomized algorithm has provably good performance and also works extremely efficiently in practice.

Journal ArticleDOI
TL;DR: A broadly applicable formulation for representing the boundary of swept geometric entities is presented, applicable to entities of any dimension and applications to NC part geometry verification, robotic manipulators, and computer modeling are discussed.
Abstract: A broadly applicable formulation for representing the boundary of swept geometric entities is presented. Geometric entities of multiple parameters are considered. A constraint function is defined as one entity is swept along another. Boundaries in terms of inequality constraints imposed on each entity are considered which gives rise to an ability of modeling complex solids. A rank-deficiency condition is imposed on the constraint Jacobian of the sweep to determine singular sets. Because of the generality of the rank-deficiency condition, the formulation is applicable to entities of any dimension. The boundary to the resulting swept volume, in terms of enveloping surfaces, is generated by substituting the resulting singularities into the constraint equation. Singular entities (hyperentities) are then intersected to determine sub-entities that may exist on the boundary of the generated swept volume. Physical behavior of singular entities is discussed. A perturbation method is used to identify the boundary envelope. Numerical examples illustrating this formulation are presented. Applications to NC part geometry verification, robotic manipulators, and computer modeling are discussed.
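The rank-deficiency condition is easiest to see in a low-dimensional toy case. The Python sketch below sweeps a circle along a planar path, scans parameter space for points where the determinant of the 2x2 Jacobian of the sweep map vanishes, and substitutes them back to obtain candidate envelope (boundary) points; the path, radius, grid, and tolerance are arbitrary choices for illustration, not taken from the paper.

```python
import numpy as np

r = 0.5                                             # radius of the swept circle
c  = lambda t: np.array([t, 0.3 * np.sin(t)])       # sweep path c(t)
dc = lambda t: np.array([1.0, 0.3 * np.cos(t)])     # path velocity c'(t)
F  = lambda u, t: c(t) + r * np.array([np.cos(u), np.sin(u)])   # sweep map F(u, t)

def det_jacobian(u, t):
    """det [dF/du  dF/dt]; the singular set {det = 0} maps onto the envelope."""
    dFdu = r * np.array([-np.sin(u), np.cos(u)])
    dFdt = dc(t)
    return dFdu[0] * dFdt[1] - dFdu[1] * dFdt[0]

# crude scan of the (u, t) parameter grid for the singular set, then substitution into F
boundary = [F(u, t)
            for t in np.linspace(0.0, 2.0 * np.pi, 100)
            for u in np.linspace(0.0, 2.0 * np.pi, 200)
            if abs(det_jacobian(u, t)) < 1e-2]
```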

Journal ArticleDOI
TL;DR: The topology and geometry of discrete lines are introduced, and algorithms for generating discrete lines for ray traversal in voxel space are considered.
Abstract: Voxelization algorithms that convert a 3D continuous line representation into a discrete line representation have a dual role in graphics. First, these algorithms synthesize voxel-based objects in volume graphics. The 3D line itself is a fundamental primitive, also used as a building block for voxelizing more complex objects. For example, sweeping a 3D voxelized line along a 3D voxelized circle generates a voxelized cylinder. The second application of 3D line voxelization algorithms is for ray traversal in voxel space. Rendering techniques that cast rays through a volume of voxels are based on algorithms that generate the set of voxels visited (or pierced) by the continuous ray. Discrete ray algorithms have been developed for traversing a 3D space partition or a 3D array of sampled or computed data. These algorithms produce one discrete point per step, in contrast to ray casting algorithms for volume rendering, which track a continuous ray at constant intervals, and to voxelization algorithms that generate nonbinary voxel values (for example, partial occupancies). Before considering algorithms for generating discrete lines, we introduce the topology and geometry of discrete lines.
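One standard way to generate such a discrete line, producing exactly one voxel per step along the dominant axis, is a 3-D digital differential analyzer; the short Python version below is a generic textbook variant (26-connected, integer endpoints), included for illustration rather than taken from the article.

```python
import math

def voxel_line(p0, p1):
    """Voxelize the segment between integer grid points p0 and p1, stepping one voxel
    along the dominant axis per iteration; returns a 26-connected discrete line."""
    steps = max(abs(b - a) for a, b in zip(p0, p1))
    if steps == 0:
        return [tuple(p0)]
    line = []
    for s in range(steps + 1):
        t = s / steps
        # round half up so the dominant axis advances by exactly one voxel per step
        line.append(tuple(int(math.floor(a + (b - a) * t + 0.5)) for a, b in zip(p0, p1)))
    return line

print(voxel_line((0, 0, 0), (6, 3, 2)))
```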

Journal ArticleDOI
S. Cameron1
01 Dec 1997
TL;DR: By recasting the algorithms into configuration space, it is shown that a minor modification to the earlier algorithm of Gilbert, Johnson, and Keerthi (1988) also gives this algorithm the same expected cost.
Abstract: The problem of tracking the distance between two convex polyhedra is finding applications in many areas of robotics. The algorithm of Lin and Canny (1991) is a well-known fast solution to this problem, but by recasting the algorithms into configuration space, we show that a minor modification to the earlier algorithm of Gilbert, Johnson, and Keerthi (1988) achieves the same expected cost.

Proceedings ArticleDOI
18 Dec 1997
TL;DR: The technique is based on a comparison routine that determines the relative position of two points in the order induced by a space filling curve and could be used in conjunction with any parallel sorting algorithm to effect parallel domain decomposition.
Abstract: Partitioning techniques based on space filling curves have received much recent attention due to their low running time and good load balance characteristics. The basic idea underlying these methods is to order the multidimensional data according to a space filling curve and partition the resulting one dimensional order. However, space filling curves are defined for points that lie on a uniform grid of a particular resolution. It is typically assumed that the coordinates of the points are representable using a fixed number of bits, and the run times of the algorithms depend upon the number of bits used. We present a simple and efficient technique for ordering arbitrary and dynamic multidimensional data using space filling curves and its application to parallel domain decomposition and load balancing. Our technique is based on a comparison routine that determines the relative position of two points in the order induced by a space filling curve. The comparison routine could then be used in conjunction with any parallel sorting algorithm to effect parallel domain decomposition.
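The flavour of such a comparison routine can be conveyed with the classic bit trick for Z-order (Morton order); the Hilbert-style, arbitrary- and dynamic-coordinate version the paper describes is more involved. The Python sketch below assumes non-negative integer coordinates and is purely illustrative.

```python
import functools

def _less_msb(a, b):
    """True if the most significant set bit of a is below that of b."""
    return a < b and a < (a ^ b)

def cmp_zorder(p, q):
    """Compare two points in Z-order without building the interleaved key: the order
    is decided by the dimension holding the most significant differing bit."""
    msd = 0
    for d in range(1, len(p)):
        if _less_msb(p[msd] ^ q[msd], p[d] ^ q[d]):
            msd = d
    return (p[msd] > q[msd]) - (p[msd] < q[msd])

# usage with any comparison-based sort, e.g. as a prelude to domain decomposition
pts = [(3, 5, 1), (2, 2, 2), (7, 0, 4), (3, 4, 1)]
pts.sort(key=functools.cmp_to_key(cmp_zorder))
```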

Book ChapterDOI
07 Jul 1997
TL;DR: In this paper, the authors present deterministic parallel algorithms for the coarse-grained multicomputer (CGM) and bulk-synchronous parallel computer (BSP) models which solve the following well known graph problems: (1) list ranking, (2) Euler tour construction, (3) computing the connected components and spanning forest, (4) lowest common ancestor preprocessing, (5) tree contraction and expression tree evaluation, (6) computing an ear decomposition, (7) 2-edge connectivity and biconnectivity (testing and component computation), and (8) chordal graph recognition (finding a perfect elimination ordering).
Abstract: In this paper, we present deterministic parallel algorithms for the coarse grained multicomputer (CGM) and bulk-synchronous parallel computer (BSP) models which solve the following well known graph problems: (1) list ranking, (2) Euler tour construction, (3) computing the connected components and spanning forest, (4) lowest common ancestor preprocessing, (5) tree contraction and expression tree evaluation, (6) computing an ear decomposition or open ear decomposition, (7) 2-edge connectivity and biconnectivity (testing and component computation), and (8) chordal graph recognition (finding a perfect elimination ordering). The algorithms for Problems 1–7 require O(log p) communication rounds and linear sequential work per round. Our results for Problems 1 and 2 hold for arbitrary ratios \(\frac{n}{p}\), i.e. they are fully scalable, and for Problems 3–8 it is assumed that \(\frac{n}{p} \geqslant p^{\epsilon}\) for some \(\epsilon > 0\), which is true for all commercially available multiprocessors. We view the algorithms presented as an important step towards the final goal of O(1) communication rounds. Note that the number of communication rounds obtained in this paper is independent of n and grows only very slowly with respect to p. Hence, for most practical purposes, the number of communication rounds can be considered as constant. The result for Problem 1 is a considerable improvement over those previously reported. The algorithms for Problems 2–7 are the first practically relevant deterministic parallel algorithms for these problems to be used for commercially available coarse grained parallel machines.
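For the first of these problems, the underlying doubling idea is compact enough to sketch; the sequential Python simulation of pointer jumping below illustrates list ranking only and says nothing about the O(log p) communication-round bounds the paper actually establishes.

```python
def list_rank(succ):
    """Pointer-jumping list ranking: succ[i] is the successor of node i, with
    succ[tail] == tail; returns dist[i], the number of links from i to the tail."""
    n = len(succ)
    nxt = list(succ)
    dist = [0 if succ[i] == i else 1 for i in range(n)]
    while any(nxt[i] != nxt[nxt[i]] for i in range(n)):       # O(log n) doubling rounds
        dist = [dist[i] + dist[nxt[i]] for i in range(n)]     # combine partial ranks ...
        nxt = [nxt[nxt[i]] for i in range(n)]                 # ... then jump the pointers
    return dist

print(list_rank([1, 2, 3, 3]))   # list 0 -> 1 -> 2 -> 3 gives ranks [3, 2, 1, 0]
```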

Proceedings ArticleDOI
05 Jan 1997
TL;DR: A bicriteria version of the map-labeling problem is formulated and polynomial-time approximation schemes for a number of such problems are provided.
Abstract: Map labeling is of fundamental importance in cartography and geographical information systems and is one of the areas targeted for research by the ACM Computational Geometry Impact Task Force. Previous work on map labeling has focused on the problem of placing maximal uniform, axis-aligned, disjoint rectangles on the plane so that each point feature to be labeled lies at the corner of one rectangle. Here, we consider a number of variants of the map labeling problem. We obtain three general types of results. First, we devise constant-factor polynomial-time approximation algorithms for labeling point features by rectangular labels, where the feature may lie anywhere on the boundary of its label region and where labeling rectangles may be placed in any orientation. These results generalize to the case of elliptical labels. Secondly, we consider the problem of labeling a map consisting of disjoint rectilinear line segments. We obtain constant-factor polynomial-time approximation algorithms for the general problem and an optimal algorithm for the special case where all segments are horizontal. Finally, we formulate a bicriteria version of the map-labeling problem and provide bicriteria polynomial-time approximation schemes for a number of such problems.

Journal ArticleDOI
TL;DR: In this article, the authors proved full Hausdorff dimension in a variant of the Kakeya problem involving circles in the plane, and also sharp estimates for the relevant maximal function.
Abstract: We prove full Hausdorff dimension in a variant of the Kakeya problem involving circles in the plane, and also sharp estimates for the relevant maximal function. These results can also be formulated in terms of the wave equation in two space variables. A novelty in our approach is the use of ideas from computational geometry.

Journal ArticleDOI
TL;DR: It is found that, in general, the optimal reconstruction of polyhedra with a bounded number of faces may take an unbounded number of silhouettes; it is shown that O(n^5) silhouettes are sufficient, and an algorithm for finding the viewpoints is described.

Journal ArticleDOI
TL;DR: This work examines a wavelet basis representation of reflectance functions, and the algorithms required for efficient point-wise reconstruction of the BRDF, and shows that the nonstandard wavelet decomposition leads to considerably more efficient algorithms than the standard wavelet decomposition.
Abstract: Analytical models of light reflection are in common use in computer graphics. However, models based on measured reflectance data promise increased realism by making it possible to simulate many more types of surfaces to a greater level of accuracy than with analytical models. They also require less expert knowledge about the illumination models and their parameters. There are a number of hurdles to using measured reflectance functions, however. The data sets are very large. A reflectance distribution function sampled at five degrees angular resolution, arguably sparse enough to miss highlights and other high frequency effects, can easily require over a million samples, which in turn amount to over four megabytes of data. These data then also require some form of interpolation and filtering to be used effectively. We examine issues of representation of measured reflectance distribution functions. In particular, we examine a wavelet basis representation of reflectance functions, and the algorithms required for efficient point-wise reconstruction of the BRDF. We show that the nonstandard wavelet decomposition leads to considerably more efficient algorithms than the standard wavelet decomposition. We also show that thresholding allows considerable improvement in running times, without unduly sacrificing image quality.
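The difference between the standard and nonstandard decompositions is easiest to see in code; the NumPy sketch below performs a nonstandard 2-D Haar decomposition (one horizontal and one vertical averaging/differencing step per level, recursing only on the low-pass quadrant). It is a generic textbook transform with unnormalised Haar filters, not the paper's BRDF-specific machinery.

```python
import numpy as np

def haar_step(v):
    """One level of unnormalised Haar analysis on a 1-D array of even length."""
    avg = (v[0::2] + v[1::2]) / 2.0
    diff = (v[0::2] - v[1::2]) / 2.0
    return np.concatenate([avg, diff])

def nonstandard_haar_2d(a):
    """Nonstandard 2-D Haar decomposition of a (2^k x 2^k) array: alternate one
    horizontal and one vertical step, then recurse on the upper-left quadrant."""
    a = np.asarray(a, dtype=float).copy()
    size = a.shape[0]
    while size > 1:
        for i in range(size):                 # horizontal step on the active block
            a[i, :size] = haar_step(a[i, :size])
        for j in range(size):                 # vertical step on the active block
            a[:size, j] = haar_step(a[:size, j])
        size //= 2                            # only the low-pass quadrant is refined further
    return a

coeffs = nonstandard_haar_2d(np.arange(16.0).reshape(4, 4))
print(coeffs[0, 0])                           # overall average of the input block
```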

Book ChapterDOI
01 Sep 1997
TL;DR: A new indexing scheme called H-indexing is presented, which has superior locality; with respect to the Euclidean metric, the H-indexing provides locality approximately 50% better than the commonly used Hilbert indexing.
Abstract: The efficiency of many algorithms in parallel processing, computational geometry, image processing, and several other fields relies on “locality-preserving” indexing schemes for meshes. We concentrate on the case where the maximum distance between two mesh nodes indexed i and j shall be a slow-growing function of |i − j| (using the Euclidean, the maximum, and the Manhattan metric). In this respect, space-filling, self-similar curves like the Hilbert curve are superior to simple indexing schemes like “row-major.” We present new tight results on 2-D and 3-D Hilbert indexings which are easy to generalize to a quite large class of curves. We then present a new indexing scheme we call H-indexing, which has superior locality. For example, with respect to the Euclidean metric the H-indexing provides locality approximately 50% better than the usually used Hilbert indexing. This answers an open question of Gotsman and Lindenbaum. In addition, H-indexings have the useful property of forming a Hamiltonian cycle, and they are optimally locality-preserving among all cyclic indexings.

Journal ArticleDOI
TL;DR: A new data structure for a set of n convex simply-shaped fat objects in the plane is presented, and it is used to obtain efficient and rather simple solutions to several problems including vertical ray shooting and bounded-size range searching.
Abstract: We present a new data structure for a set of n convex simply-shaped fat objects in the plane, and use it to obtain efficient and rather simple solutions to several problems including (i) vertical ray shooting — preprocess a set ϰ of n non-intersecting convex simply-shaped flat objects in 3-space, whose xy-projections are fat, for efficient vertical ray shooting queries, (ii) point enclosure — preprocess a set C of n convex simply-shaped fat objects in the plane, so that the k objects containing a query point p can be reported efficiently, (iii) bounded-size range searching — preprocess a set C of n convex fat polygons, so that the k objects intersecting a “not-too-large” query polygon can be reported efficiently, and (iv) bounded-size segment shooting — preprocess a set C as in (iii), so that the first object (if one exists) hit by a “not-too-long” oriented query segment can be found efficiently. For the first three problems we construct data structures of size O(λ_s(n) log^3 n), where s is the maximum number of intersections between the boundaries of (the xy-projections of) any pair of objects, and λ_s(n) is the maximum length of (n, s) Davenport–Schinzel sequences. The data structure for the fourth problem is of size O(λ_s(n) log^2 n). The query time in the first problem is O(log^4 n), the query time in the second and third problems is O(log^3 n + k log^2 n), and the query time in the fourth problem is O(log^3 n). We also present a simple algorithm for computing a depth order for a set ϰ as in (i), that is based on the solution to the vertical ray shooting problem. (A depth order for ϰ, if it exists, is a linear order of ϰ such that, if K_1, K_2 ∈ ϰ and K_1 lies vertically above K_2, then K_1 precedes K_2.) Unlike the algorithm of Agarwal et al. (1995) that might output a false order when a depth order does not exist, the new algorithm is able to determine whether such an order exists, and it is often more efficient in practical situations than the former algorithm.

Proceedings ArticleDOI
13 Oct 1997
TL;DR: A new algorithm for 3D geometric metamorphosis between two objects based on the harmonic map is presented, applicable for arbitrary polyhedra that are homeomorphic to the three dimensional sphere or the two dimensional disk.
Abstract: Recently, animations with deforming objects are frequently used in various computer graphics applications. Metamorphosis (or morphing) of three dimensional objects can realize a shape transformation between two or more existing objects. We present a new algorithm for 3D geometric metamorphosis between two objects based on the harmonic map. Our algorithm is applicable for arbitrary polyhedra that are homeomorphic to the three dimensional sphere or the two dimensional disk. In our algorithm, each of the two 3D objects is first embedded to the circular disk on the plane. This embedded model has the same graph structure as its 3D objects. By overlapping those two embedded models, we can establish a correspondence between the two objects. Using this correspondence, intermediate objects between two objects are easily generated. The user only specifies a boundary loop on an object and a vertex on that boundary which control the interpolation.
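The flattening step can be approximated very simply: pin the chosen boundary loop onto a circle and solve a Laplace-type system for the interior vertices. The Python sketch below uses uniform (Tutte) weights and plain Jacobi iteration instead of the true harmonic map with cotangent weights, so it only conveys the idea of embedding a disk-like mesh into the plane; the tiny fan mesh at the end is made up.

```python
import math

def disk_embed(adj, boundary, sweeps=200):
    """Embed a disk-like mesh into the unit circle: boundary vertices are fixed on the
    circle, interior vertices are iterated to the average of their neighbours."""
    n = len(adj)
    pos = [[0.0, 0.0] for _ in range(n)]
    fixed = set(boundary)
    for k, v in enumerate(boundary):                     # pin the boundary loop
        a = 2.0 * math.pi * k / len(boundary)
        pos[v] = [math.cos(a), math.sin(a)]
    for _ in range(sweeps):                              # Jacobi relaxation of interior vertices
        new = [p[:] for p in pos]
        for v in range(n):
            if v in fixed:
                continue
            xs = [pos[u][0] for u in adj[v]]
            ys = [pos[u][1] for u in adj[v]]
            new[v] = [sum(xs) / len(xs), sum(ys) / len(ys)]
        pos = new
    return pos

# tiny fan mesh: vertex 0 interior, vertices 1..5 form the boundary loop
adj = [[1, 2, 3, 4, 5], [0, 2, 5], [0, 1, 3], [0, 2, 4], [0, 3, 5], [0, 4, 1]]
print(disk_embed(adj, [1, 2, 3, 4, 5])[0])               # interior vertex ends near the centre
```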

Book ChapterDOI
01 Jan 1997
TL;DR: Computational real algebraic geometry studies various algorithmic questions dealing with the real solutions of a system of equalities, inequalities, and inequations of polynomials over the real numbers.
Abstract: Computational real algebraic geometry studies various algorithmic questions dealing with the real solutions of a system of equalities, inequalities, and inequations of polynomials over the real numbers. This emerging field is largely motivated by the power and elegance with which it solves a broad and general class of problems arising in robotics, vision, computer aided design, geometric theorem proving, etc. The following survey paper discusses the underlying concepts, algorithms and a series of representative applications. This paper will appear as a chapter in the "Handbook of Discrete and Computational Geometry" (Edited by J.E. Goodman and J. O'Rourke), CRC Series in Discrete and Combinatorial Mathematics.
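A small, classical example of the kind of algorithmic question meant here is counting the real roots of a univariate polynomial with a Sturm sequence; the numerically naive Python sketch below is a textbook illustration, not something drawn from the survey itself.

```python
def sturm_count(p, a, b, eps=1e-12):
    """Number of distinct real roots of the polynomial p (coefficient list, highest
    degree first) in the interval (a, b], computed from a Sturm sequence."""
    def strip(c):
        while len(c) > 1 and abs(c[0]) < eps:
            c = c[1:]
        return c
    def rem(num, den):                      # remainder of polynomial long division
        num = list(num)
        while len(num) >= len(den) and any(abs(x) > eps for x in num):
            q = num[0] / den[0]
            shifted = den + [0.0] * (len(num) - len(den))
            num = [x - q * y for x, y in zip(num, shifted)][1:]
        return strip(num) if num else [0.0]
    def deriv(c):
        n = len(c) - 1
        return [coef * (n - i) for i, coef in enumerate(c[:-1])] or [0.0]
    def evalp(c, x):
        v = 0.0
        for coef in c:
            v = v * x + coef
        return v
    def sign_changes(seq, x):
        signs = [s for s in (evalp(c, x) for c in seq) if abs(s) > eps]
        return sum(1 for s, t in zip(signs, signs[1:]) if s * t < 0)
    seq = [strip(list(p)), strip(deriv(list(p)))]
    while len(seq[-1]) > 1:                 # build p_{i+1} = -rem(p_{i-1}, p_i)
        seq.append([-x for x in rem(seq[-2], seq[-1])])
    return sign_changes(seq, a) - sign_changes(seq, b)

print(sturm_count([1.0, 0.0, -2.0], -2.0, 2.0))   # x^2 - 2 has two real roots in (-2, 2]
```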


Journal ArticleDOI
Todd A. Cass1
TL;DR: The formulation and algorithms for geometric feature matching presented here provide a guaranteed method for finding all feasible interpretations of the data in terms of the model and are robust to measurement uncertainty in the data features and to the presence of spurious scene features.
Abstract: This paper considers the task of recognition and position determination, by computer, of a 2D or 3D object where the input is a single 2D brightness image, and a model of the object is known a priori. The primary contribution of this paper is a novel formulation and methods for local geometric feature matching. This formulation is based on analyzing geometric constraints on transformations of the model features which geometrically align it with a substantial subset of image features. Specifically, the formulation and algorithms for geometric feature matching presented here provide a guaranteed method for finding all feasible interpretations of the data in terms of the model. This method is robust to measurement uncertainty in the data features and to the presence of spurious scene features, and its time and space requirements are only polynomial in the size of the feature sets. This formulation provides insight into the fundamental nature of the matching problem, and the algorithms commonly used in computer vision for solving it.