
Showing papers on "Computational geometry published in 1994"


Book
Joseph O'Rourke
01 Jan 1994
TL;DR: This book describes the design and implementation of geometry algorithms arising in areas such as computer graphics, robotics, and engineering design, and presents a self-contained treatment of the basic techniques used in computational geometry.
Abstract: From the Publisher: This is the newly revised and expanded edition of a popular introduction to the design and implementation of geometry algorithms arising in areas such as computer graphics, robotics, and engineering design. The basic techniques used in computational geometry are all covered: polygon triangulations, convex hulls, Voronoi diagrams, arrangements, geometric searching, and motion planning. The self-contained treatment presumes only an elementary knowledge of mathematics, but it reaches topics on the frontier of current research. Thus professional programmers will find it a useful tutorial.
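
As an aside on the book's subject matter, here is a minimal sketch of one of the basic techniques it covers, a planar convex hull via Andrew's monotone chain; this is a generic textbook formulation, not code from the book.

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half_hull(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower, upper = half_hull(pts), half_hull(reversed(pts))
    return lower[:-1] + upper[:-1]   # drop the duplicated endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
```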

1,874 citations


Book
01 Jan 1994
TL;DR: A textbook introduction to computational geometry through randomized algorithms, covering quick-sort and search structures, incremental and dynamic algorithms, random sampling, arrangements of hyperplanes, convex polytopes, range search, and the role of randomness.
Abstract: Table of contents. Part I, Basics:
1. Quick-sort and Search: Quick-sort; Another view of quick-sort; Randomized binary trees; Skip lists.
2. What Is Computational Geometry?: Range queries; Arrangements; Trapezoidal decompositions; Convex polytopes; Voronoi diagrams; Hidden surface removal; Numerical precision and degeneracies; Early deterministic algorithms; Deterministic vs. randomized algorithms; The model of randomness.
3. Incremental Algorithms: Trapezoidal decompositions; Convex polytopes; Voronoi diagrams; Configuration spaces; Tail estimates.
4. Dynamic Algorithms: Trapezoidal decompositions; Voronoi diagrams; History and configuration spaces; Rebuilding history; Deletions in history; Dynamic shuffling.
5. Random Sampling: Configuration spaces with bounded valence; Top-down sampling; Bottom-up sampling; Dynamic sampling; Average conflict size; More dynamic algorithms; Range spaces and ε-nets; Comparisons.
Part II, Applications:
6. Arrangements of Hyperplanes: Incremental construction; Zone Theorem; Canonical triangulations; Point location and ray shooting; Point location and range queries.
7. Convex Polytopes: Linear programming; The number of faces; Incremental construction; The expected structural and conflict change; Dynamic maintenance; Voronoi diagrams; Search problems; Levels and Voronoi diagrams of order k.
8. Range Search: Orthogonal intersection search; Nonintersecting segments in the plane; Dynamic point location; Simplex range search; Half-space range queries; Decomposable search problems; Parametric search.
9. Computer Graphics: Hidden surface removal; Binary Space Partitions; Moving viewpoint.
10. How Crucial Is Randomness?: Pseudo-random sources; Derandomization.
Appendix: Tail Estimates (Chernoff's technique; Chebychev's technique). Bibliography. Index.
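
The book opens with quick-sort as the prototype of the randomized incremental paradigm it later applies to geometric structures; a minimal generic sketch of the randomized version (illustrative only, not the book's code):

```python
import random

def quicksort(items):
    """Randomized quick-sort: expected O(n log n) comparisons for any input order."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    return (quicksort([x for x in items if x < pivot])
            + [x for x in items if x == pivot]
            + quicksort([x for x in items if x > pivot]))

print(quicksort([5, 3, 8, 1, 9, 2, 7]))
```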

595 citations


Journal ArticleDOI
01 Feb 1994
TL;DR: A review of algorithms, and their performance, for determining the three-dimensional (3D) motion and structure of rigid objects when their corresponding features are known at different times or are viewed by different cameras, together with the mathematical techniques from algebraic geometry, projective geometry, and homotopy theory required to solve these problems.
Abstract: We present a review of algorithms and their performance for determining three-dimensional (3D) motion and structure of rigid objects when their corresponding features are known at different times or are viewed by different cameras. Three categories of problems are considered, depending upon whether the features are two (2D) or three-dimensional (3D) and the type of correspondence: a) 3D to 3D (i.e., locations of corresponding features in 3D space are known at two different times), b) 2D to 3D (i.e., locations of features in 3D space and their projection on the camera plane are known), and c) 2D to 2D (i.e., projections of features on the camera plane are known at two different times). Features considered include points, straight lines, curved lines, and corners. Emphasis is on problem formulation, efficient algorithms for solution, existence and uniqueness of solutions, and sensitivity of solutions to noise in the observed data. Algorithms described have been used in a variety of applications. Some of these are: a) positioning and navigating 3D objects in a 3D world, b) camera calibration, i.e., determining location and orientation of a camera by observing 3D features whose location is known, c) estimating motion and structure of moving objects relative to a camera. We mention some of the mathematical techniques borrowed from algebraic geometry, projective geometry, and homotopy theory that are required to solve these problems, list unsolved problems, and give some directions for future research.
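
For the 3D-to-3D case (a), a hedged sketch of a standard SVD-based least-squares alignment (Kabsch/Umeyama style) that recovers a rotation R and translation t from corresponding point sets; this is one generic formulation of the problem, not the specific algorithms reviewed in the paper.

```python
import numpy as np

def rigid_motion_3d_3d(A, B):
    """Least-squares R, t such that R @ a + t ~= b for corresponding rows of A, B (n x 3)."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                  # 3x3 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# toy check: rotate a tetrahedron 90 degrees about z and translate it
A = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
B = A @ Rz.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_motion_3d_3d(A, B)
print(np.allclose(R, Rz), np.allclose(t, [2.0, -1.0, 0.5]))
```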

525 citations


Journal ArticleDOI
TL;DR: Basic algorithms to extract coherent amorphous regions (features or objects) from 2D and 3D scalar and vector fields and then track them in a series of consecutive time steps.
Abstract: We describe basic algorithms to extract coherent amorphous regions (features or objects) from 2D and 3D scalar and vector fields and then track them in a series of consecutive time steps. We use a combination of techniques from computer vision, image processing, computer graphics, and computational geometry and apply them to data sets from computational fluid dynamics. We demonstrate how these techniques can reduce visual clutter and provide the first step to quantifying observable phenomena. These results can be generalized to other disciplines with continuous time-dependent scalar (and vector) fields.
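
A hedged sketch of the basic pipeline described here: threshold a scalar field, extract connected regions, and match regions across consecutive time steps by spatial overlap. The threshold and the overlap criterion below are illustrative choices, not the authors' exact definitions.

```python
import numpy as np
from scipy import ndimage

def extract_features(field, threshold):
    """Label connected regions where the scalar field exceeds the threshold."""
    labels, count = ndimage.label(field > threshold)
    return labels, count

def track_by_overlap(labels_t0, labels_t1):
    """Match each feature at time t0 to the t1 feature it overlaps most."""
    matches = {}
    for f in range(1, labels_t0.max() + 1):
        overlap = labels_t1[labels_t0 == f]
        overlap = overlap[overlap > 0]
        matches[f] = int(np.bincount(overlap).argmax()) if overlap.size else None
    return matches

rng = np.random.default_rng(1)
t0 = ndimage.gaussian_filter(rng.random((64, 64, 64)), 4)   # smooth synthetic field
t1 = np.roll(t0, 2, axis=0)                                 # "next" time step: shifted features
l0, _ = extract_features(t0, t0.mean() + t0.std())
l1, _ = extract_features(t1, t1.mean() + t1.std())
print(track_by_overlap(l0, l1))
```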

209 citations


Journal ArticleDOI
TL;DR: The authors have proposed an algorithm to generate a surface-skeleton so that the topology of the original image is preserved, the shape of the image is maintained as much as possible, and the results are less affected by noise.
Abstract: The problems of 3-D digital topology preservation under binary transformations and 3-D object thinning are considered in this correspondence. First, the authors establish the conditions under which transformation of an object voxel to a non-object voxel, or its inverse, does not affect the image topology. An efficient algorithm to detect a simple point has been proposed on the basis of those conditions. In this connection, some other interesting properties of 3-D digital geometry are also discussed. Using these properties and the simple point detection algorithm, the authors have proposed an algorithm to generate a surface-skeleton so that the topology of the original image is preserved, the shape of the image is maintained as much as possible, and the results are less affected by noise.
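
A hedged sketch of a simple-point test in the spirit of this work, using the common topological-number characterisation for (26, 6) connectivity: the centre voxel is simple when its punctured 26-neighbourhood contains exactly one 26-connected object component and the background in its 18-neighbourhood contains exactly one 6-connected component touching a face neighbour. This is a standard criterion from the 3-D thinning literature, not necessarily the authors' exact conditions.

```python
import numpy as np
from scipy import ndimage

def is_simple_point(nbhd):
    """Heuristic simple-point test on a 3x3x3 binary neighbourhood (centre = candidate voxel)."""
    nb = np.asarray(nbhd, dtype=bool)
    obj = nb.copy()
    obj[1, 1, 1] = False                       # puncture the neighbourhood
    # Number of 26-connected object components around the centre.
    n_obj = ndimage.label(obj, structure=ndimage.generate_binary_structure(3, 3))[1]
    # Background voxels restricted to the 18-neighbourhood (no corners, no centre).
    off = np.indices((3, 3, 3)).reshape(3, -1).T - 1
    in_n18 = ((np.abs(off).max(axis=1) == 1) &
              ((np.abs(off) > 0).sum(axis=1) <= 2)).reshape(3, 3, 3)
    bg = (~nb) & in_n18
    labels, _ = ndimage.label(bg, structure=ndimage.generate_binary_structure(3, 1))
    # Count the 6-connected background components that touch a face neighbour of the centre.
    faces = [(0, 1, 1), (2, 1, 1), (1, 0, 1), (1, 2, 1), (1, 1, 0), (1, 1, 2)]
    n_bg = len({labels[f] for f in faces if labels[f] > 0})
    return n_obj == 1 and n_bg == 1

solid = np.zeros((3, 3, 3), dtype=int); solid[:2, :, :] = 1   # surface voxel of a solid half-block
sheet = np.zeros((3, 3, 3), dtype=int); sheet[1, :, :] = 1    # interior voxel of a one-voxel sheet
print(is_simple_point(solid), is_simple_point(sheet))         # True False (removal would pierce the sheet)
```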

209 citations


Journal ArticleDOI
TL;DR: An efficient algorithm for generating largely optimal tool paths to solve this pocket-machining problem for multiply connected planar pocket areas bounded by curvilinear boundaries is presented.
Abstract: A fundamental NC-machining problem is the clearing of areas within specified boundaries from material. The paper presents an efficient algorithm for generating largely optimal tool paths to solve this pocket-machining problem for multiply connected planar pocket areas bounded by curvilinear boundaries. Using certain concepts of computational geometry, i.e. Voronoi diagrams and monotonic pouches, offsets of a pocket boundary are efficiently generated. Further, it is explained how the tool path can be optimized with respect to several criteria arising from technological requirements. The concepts presented have been implemented, and they form the basis of the pocketing package lark.
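
The contour-parallel passes themselves are easy to picture; a hedged sketch that generates successive inward offsets of a pocket with an island, using shapely's buffer() as a stand-in offset generator (the paper derives the offsets from the Voronoi diagram of the boundary and optimizes the resulting path, which this sketch does not attempt):

```python
from shapely.geometry import Polygon

def contour_parallel_offsets(boundary, holes, step):
    """Successive inward offsets of a pocket, i.e. contour-parallel tool passes."""
    pocket = Polygon(boundary, holes)
    passes, d = [], step
    while True:
        off = pocket.buffer(-d)        # negative buffer = inward offset
        if off.is_empty:
            break
        passes.append(off)
        d += step
    return passes

outer = [(0, 0), (10, 0), (10, 10), (0, 10)]          # pocket boundary
island = [(4, 4), (6, 4), (6, 6), (4, 6)]             # material to leave standing
for i, p in enumerate(contour_parallel_offsets(outer, [island], 1.0)):
    print(i, round(p.area, 2))
```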

187 citations


Journal ArticleDOI
TL;DR: This work considers how to formulate a parallel analytical molecular surface algorithm that has expected linear complexity with respect to the total number of atoms in a molecule, and aims to compute and display these surfaces at interactive rates, by taking advantage of advances in computational geometry.
Abstract: We describe how we set out to formulate a parallel analytical molecular surface algorithm that has expected linear complexity with respect to the total number of atoms in a molecule. To achieve this goal, we avoided computing the complete 3D regular triangulation over the entire set of atoms, a process that takes time O(n log n), where n is the number of atoms in the molecule. We aim to compute and display these surfaces at interactive rates, by taking advantage of advances in computational geometry, making further algorithmic improvements and parallelizing the computations.

186 citations


Proceedings ArticleDOI
17 Oct 1994
TL;DR: An algorithm is presented that considerably reduces the number of polygons generated by a Marching Cubes-like scheme without excessively increasing the overall computational complexity.
Abstract: Since the introduction of standard techniques for isosurface extraction from volumetric datasets, one of the hardest problems has been to reduce the number of triangles (or polygons) generated. The paper presents an algorithm that considerably reduces the number of polygons generated by a Marching Cubes-like scheme (W. Lorensen and H. Cline, 1987) without excessively increasing the overall computational complexity. The algorithm assumes discretization of the dataset space and replaces cell edge interpolation by midpoint selection. Under these assumptions, the extracted surfaces are composed of polygons lying within a finite number of incidences, thus allowing simple merging of the output facets into large coplanar polygons. An experimental evaluation of the proposed approach on datasets related to biomedical imaging and chemical modelling is reported.
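
The key modification is easy to state in code: where standard Marching Cubes places the isosurface vertex by linear interpolation along a cell edge, the scheme described here snaps it to the edge midpoint, so output facets take only finitely many orientations and coplanar neighbours can be merged. A hedged sketch of just that vertex-placement step:

```python
def edge_vertex(p0, p1, v0, v1, iso, midpoint=True):
    """Place the isosurface vertex on the cell edge p0-p1 with scalar values v0, v1."""
    if midpoint:
        # Midpoint selection (the paper's variant): only finitely many facet orientations.
        return tuple((a + b) / 2.0 for a, b in zip(p0, p1))
    # Standard Marching Cubes: linear interpolation to the iso-value crossing.
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

print(edge_vertex((0, 0, 0), (1, 0, 0), 0.2, 0.9, iso=0.5, midpoint=True))
print(edge_vertex((0, 0, 0), (1, 0, 0), 0.2, 0.9, iso=0.5, midpoint=False))
```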

179 citations


Journal ArticleDOI
TL;DR: Evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement is provided, and evidence that the problem is PSPACE-hard if B is given a velocity modulus bound on its movements.
Abstract: This paper investigates the computational complexity of planning the motion of a body B in 2-D or 3-D space, so as to avoid collision with moving obstacles of known, easily computed, trajectories. Dynamic movement problems are of fundamental importance to robotics, but their computational complexity has not previously been investigated. We provide evidence that the 3-D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement. In particular, we prove the problem is PSPACE-hard if B is given a velocity modulus bound on its movements and is NP-hard even if B has no velocity modulus bound, where, in both cases, B has 6 degrees of freedom. To prove these results, we use a unique method of simulation of a Turing machine that uses time to encode configurations (whereas previous lower bound proofs in robotic motion planning used the system position to encode configurations and so required an unbounded number of degrees of freedom). We also investigate a natural class of dynamic problems that we call asteroid avoidance problems: B, the object we wish to move, is a convex polyhedron that is free to move by translation with bounded velocity modulus, and the polyhedral obstacles have known translational trajectories but cannot rotate. This problem has many applications to robot, automobile, and aircraft collision avoidance. Our main positive results are polynomial time algorithms for the 2-D asteroid avoidance problem, where B is a moving polygon and we assume a constant number of obstacles, as well as single exponential time or polynomial space algorithms for the 3-D asteroid avoidance problem, where B is a convex polyhedron and there are arbitrarily many obstacles. Our techniques for solving these asteroid avoidance problems use “normal path” arguments, which are an interesting generalization of techniques previously used to solve static shortest path problems. We also give some additional positive results for various other dynamic movers' problems, and in particular give polynomial time algorithms for the case in which B has no velocity bounds and the movements of obstacles are algebraic in space-time.

164 citations


Journal ArticleDOI
TL;DR: A way of mapping a 3D workpiece onto the unit sphere and determining its visibility is described; algorithms for optimal workpiece orientation are then formulated as simple intersections on the sphere.
Abstract: By the extraction of ideas from computer vision, geometrical design and complexity analysis, a structure called visibility emerges. The paper describes a way in which a 3D workpiece is mapped onto the unit sphere, and its visibility is determined. For applications, manufacturing machines are classified by their degrees of freedom into point, line and surface visible processes. Algorithms for optimal workpiece orientation are then formulated as simple intersections on the sphere.

160 citations


Journal ArticleDOI
TL;DR: A survey of theoretical results and the main techniques in geometric range searching, whose data structures can be used as subroutines in solutions to many seemingly unrelated problems.
Abstract: In geometric range searching, algorithmic problems of the following type are considered. Given an n-point set P in the plane, build a data structure so that, given a query triangle R, the number of points of P lying in R can be determined quickly. Similar questions can be asked for point sets in higher dimensions, with triangles replaced by simplices or by more complicated shapes. Algorithms of this type are of crucial importance in computational geometry, as they can be used as subroutines in solutions to many seemingly unrelated problems, which are often motivated by practical applications, for instance in computer graphics (ray tracing, hidden-surface removal etc.). We present a survey of theoretical results and the main techniques in geometric range searching.
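
As a concrete instance of the query being studied, a brute-force O(n) count of the points inside a query triangle using orientation tests; the surveyed data structures exist precisely to answer such queries far faster after preprocessing (this naive sketch is only for illustration).

```python
def orient(a, b, c):
    """Twice the signed area of triangle abc; positive for a counter-clockwise turn."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def points_in_triangle(points, tri):
    """Count the points of P lying in the (closed) query triangle R."""
    a, b, c = tri
    if orient(a, b, c) < 0:        # ensure counter-clockwise orientation
        b, c = c, b
    return sum(
        orient(a, b, p) >= 0 and orient(b, c, p) >= 0 and orient(c, a, p) >= 0
        for p in points
    )

P = [(0.2, 0.2), (0.9, 0.9), (0.5, 0.1), (2.0, 2.0)]
print(points_in_triangle(P, ((0, 0), (1, 0), (0, 1))))   # 2
```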

Journal ArticleDOI
TL;DR: A variety of problems on the interaction between two sets of line segments in two and three dimensions are considered, including counting the number of intersecting pairs between m blue segments and n red segments in the plane.
Abstract: We consider a variety of problems on the interaction between two sets of line segments in two and three dimensions. These problems range from counting the number of intersecting pairs between m blue segments and n red segments in the plane (assuming that two line segments are disjoint if they have the same color) to finding the smallest vertical distance between two nonintersecting polyhedral terrains in three-dimensional space. We solve these problems efficiently by using a variant of the segment tree. For the three-dimensional problems we also apply a variety of recent combinatorial and algorithmic techniques involving arrangements of lines in three-dimensional space, as developed in a companion paper.

Proceedings ArticleDOI
20 Nov 1994
TL;DR: An improved bound on the radius of a ball centered at the origin, which is guaranteed to intersect every connected component of the sign partition induced by a family of polynomials is given.
Abstract: In this paper we give a new algorithm for performing quantifier elimination from first order formulae over real closed fields. This algorithm improves the complexity of the asymptotically fastest algorithm for this problem, known to this date. A new feature of our algorithm is that the role of the algebraic part (the dependence on the degrees of the input polynomials) and the combinatorial part (the dependence on the number of polynomials) are separated, making possible our improved complexity bound. Another new feature is that the degrees of the polynomials in the equivalent quantifier-free formula that we output are independent of the number of input polynomials. As special cases of this algorithm, we obtain new and improved algorithms for deciding a sentence in the first order theory over real closed fields, and also for solving the existential problem in the first order theory over real closed fields. Using the theory developed in this paper, we also give an improved bound on the radius of a ball centered at the origin, which is guaranteed to intersect every connected component of the sign partition induced by a family of polynomials. We also use our methods to obtain algorithms for solving certain decision problems in real and complex geometry which improve the complexity of the currently known algorithms for these problems.

Book ChapterDOI
24 Feb 1994
TL;DR: This note addresses some fundamental questions concerning perturbations as they are used in computational geometry: How does one define them?
Abstract: This note addresses some fundamental questions concerning perturbations as they are used in computational geometry: How does one define them? What does it mean to compute with them? How can one compute with them? Is it sensible to use them?

Journal ArticleDOI
TL;DR: The paper discusses the application of space-filling curves as tool paths for sculptured-surface machining, and the preliminary conclusions are favourable for space- filling curves.
Abstract: Several methods have been developed for the computerized generation of space-filling curves, but these curves have never been used for NC tool-path generation. The paper discusses the application of space-filling curves as tool paths for sculptured-surface machining. Tool paths that are space-filling curves, single-direction conventional paths, and 2-direction conventional paths are compared. The efficiency ratings of the paths require further testing, but the preliminary conclusions are favourable for space-filling curves.
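
For concreteness, one standard space-filling curve construction, the classic Hilbert index-to-coordinate mapping; visiting grid cells in this order yields the kind of tool path the paper evaluates against single- and 2-direction conventional paths. This generic routine is not the authors' generation method.

```python
def hilbert_point(order, d):
    """Map index d along a Hilbert curve of the given order to (x, y) on a 2**order grid."""
    n = 1 << order
    x = y = 0
    t, s = d, 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

tool_path = [hilbert_point(3, d) for d in range(64)]   # visit order for an 8 x 8 patch
print(tool_path[:8])
```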

Journal ArticleDOI
TL;DR: This paper develops approximate decision algorithms that are considerably faster than the known decision algorithms, and have bounds on their imprecision, and reduces the problem to that of computing maximum flows on a series of graphs with integral capacities.
Abstract: This paper considers the computer vision problem of testing whether two equal-cardinality point sets A and B in the plane are ε-congruent. We say that A and B are ε-congruent if there exists an isometry I and a bijection l: A → B such that dist(I(a), l(a)) ≤ ε for all a ∈ A. Since known methods for this problem are expensive, we develop approximate decision algorithms that are considerably faster than the known decision algorithms, and have bounds on their imprecision. Our approach reduces the problem to that of computing maximum flows on a series of graphs with integral capacities.
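
A hedged sketch of the feasibility test at the heart of the problem: once a candidate isometry has been applied to A, ε-congruence amounts to finding a perfect matching in the bipartite graph of pairs within distance ε. The paper reduces this to maximum flow; the assignment-based check below is an equivalent small-scale stand-in, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def eps_matchable(A, B, eps):
    """True if some bijection pairs every point of A with a point of B at distance <= eps."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # all pairwise distances
    cost = (D > eps).astype(int)        # 0 = allowed pair, 1 = forbidden pair
    r, c = linear_sum_assignment(cost)  # min-cost perfect assignment
    return cost[r, c].sum() == 0        # feasible iff no forbidden pair is used

A = [(0, 0), (1, 0), (0, 1)]
B = [(0.05, 0.0), (1.0, 0.04), (0.0, 0.97)]
print(eps_matchable(A, B, eps=0.1), eps_matchable(A, B, eps=0.01))   # True False
```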

Book ChapterDOI
01 Jan 1994
TL;DR: The second part of a broader survey of computational convexity, concerned with computing volumes and mixed volumes of convex polytopes and more general convex bodies.
Abstract: This paper is the second part of a broader survey of computational convexity, an area of mathematics that has crystallized around a variety of results, problems and applications involving interactions among convex geometry, mathematical programming and computer science. The first part [GrK94a] discussed containment problems. This second part is concerned with computing volumes and mixed volumes of convex polytopes and more general convex bodies. In order to keep the paper self-contained we repeat some aspects that have already been mentioned in [GrK94a]. However, this overlap is limited to Section 1. For further background material and references, see [GrK94a], and for other parts of the survey see [GrK94b] and [GrK94c].
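
As a quick illustration of the quantities involved, the volume (and surface area) of a polytope given by its vertices can be read off its convex hull; a minimal sketch with scipy, which hides the computational issues the survey analyzes (exact volume computation becomes hard as the dimension grows).

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

cube = np.array(list(product([0.0, 1.0], repeat=3)))   # the 8 vertices of the unit cube
hull = ConvexHull(cube)
print(hull.volume, hull.area)                           # 1.0 and 6.0
```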

Book ChapterDOI
26 Sep 1994
TL;DR: A detailed study of the numerical precision required to evaluate the geometric tests for the Voronoi diagram of line segments exactly, improving the precision bound implied by classical root separation results by more than two orders of magnitude, together with first experimental experience.
Abstract: Given a set of non-intersecting (except at endpoints) line segments in the plane we want to compute their Voronoi diagram. Although there are several algorithms for this problem in the literature [Yap87, For87, CS89, BDS+92, KMM90], nobody claims to have a correct implementation. This is due to the fact that the algorithms presuppose exact arithmetic and that the Voronoi diagram of segments requires computing with non-rational algebraic numbers. We report on a detailed study of the numerical precision required for evaluating the geometric tests exactly and on first experimental experience. More specifically, we improve the precision bound implied by classical root separation results by more than two orders of magnitude, and we compare the implementation strategies suggested by our theoretical results.
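
For flavour, the exactness issue already appears in the most elementary predicate: evaluated in floating point, an orientation test can return the wrong sign for nearly collinear inputs. A sketch of exact evaluation with rational arithmetic; the segment-Voronoi tests studied in the paper involve non-rational algebraic numbers and need far more machinery than this.

```python
from fractions import Fraction

def orientation(a, b, c):
    """Exact sign of the turn a -> b -> c (1 = left, -1 = right, 0 = collinear)."""
    (ax, ay), (bx, by), (cx, cy) = (tuple(map(Fraction, p)) for p in (a, b, c))
    d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (d > 0) - (d < 0)

print(orientation((0, 0), (0.5, 0.5), (2.5, 2.5)))   # 0: exactly collinear
```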

Journal ArticleDOI
TL;DR: In this paper, the authors studied the ray-shooting problem for three special classes of polyhedral objects in space: axis-parallel polyhedra, curtains (unbounded polygons with three edges), and fat horizontal triangles (triangles parallel to the xy-plane whose angles are greater than some given constant).
Abstract: In this paper we study the ray-shooting problem for three special classes of polyhedral objects in space: axis-parallel polyhedra, curtains (unbounded polygons with three edges, two of which are parallel to the z-axis and extend downward to minus infinity), and fat horizontal triangles (triangles parallel to the xy-plane whose angles are greater than some given constant). For all three problems structures are presented using O(n^(2+ε)) preprocessing, for any fixed ε > 0, with O(log n) query time. We also study the general ray-shooting problem in an arbitrary set of triangles. Here we present a structure that uses O(n^(4+ε)) preprocessing and has a query time of O(log n). We use the ray-shooting structure for curtains to obtain an algorithm for computing the view of a set of nonintersecting polyhedra. For any ε > 0, we can obtain an algorithm with running time O(n^(1+ε) √k), where n is the total number of vertices of the polyhedra and k is the size of the output. This is the first output-sensitive algorithm for this problem that does not need a depth order on the faces of the polyhedra.
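
For reference, the query itself is simple to state; a brute-force O(n) ray-shooting scan over a triangle set using the Möller-Trumbore intersection test. The paper's data structures replace this linear scan with O(log n) queries after preprocessing; this sketch is only a baseline.

```python
import numpy as np

def ray_shoot(origin, direction, triangles, eps=1e-12):
    """Return (index, t) of the nearest triangle hit by the ray, or (None, inf)."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    best = (None, np.inf)
    for i, (a, b, c) in enumerate(triangles):
        a, b, c = (np.asarray(v, float) for v in (a, b, c))
        e1, e2 = b - a, c - a
        p = np.cross(d, e2)
        det = e1.dot(p)
        if abs(det) < eps:
            continue                               # ray parallel to the triangle's plane
        s = o - a
        u = s.dot(p) / det
        q = np.cross(s, e1)
        v = d.dot(q) / det
        t = e2.dot(q) / det
        if u >= 0 and v >= 0 and u + v <= 1 and 0 < t < best[1]:
            best = (i, t)
    return best

tris = [((-1, -1, 5), (3, -1, 5), (-1, 3, 5)),
        ((-1, -1, 2), (3, -1, 2), (-1, 3, 2))]
print(ray_shoot((0, 0, 0), (0, 0, 1), tris))        # nearest hit: triangle 1 at t = 2.0
```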

Proceedings ArticleDOI
23 May 1994
TL;DR: This work surveys some of computational geometry's principal accomplishments and, in light of recent developments, discusses the profound transformations the field has begun to undergo.
Abstract: Computational geometry is at a crossroads. New challenges and opportunities are likely to reshape the field rather drastically in the years ahead. I will survey some of its principal accomplishments, and in light of recent developments, I will discuss the profound transformations the field has begun to undergo. There are reasons to believe that computational geometry will emerge from this transition far richer and stronger but barely recognizable from what it was ten years ago. Over the last two decades the field has enjoyed tremendous successes. Some of them might be dismissed as the cheap payoffs to be expected from any field lacking maturity. But others are the products of indisputable creativity and should be held as genuine scientific achievements. More important, the field is now able to claim a broad, solid foundation upon which its future can be securely built. To mature fully as an original subfield of computer science, however, computational geometry must broaden its connections to applied mathematics while at the same time pay more than lip service to the applications areas that it purports to serve. Happily, active efforts to meet these challenges are underway. Three recent developments are particularly encouraging: one is the building of a theory of geometric sampling and its revolutionary impact on the design of geometric algorithms. Another is the maturing of computational real-algebraic geometry and computational topology; both subjects are being revitalized by the introduction of geometric (as opposed to purely algebraic) methods. On the practical end of the spectrum, the emergence of a sub-area concerned specifically with issues of finite precision and degeneracy in geometric computing is a most welcome development.

Proceedings ArticleDOI
08 May 1994
TL;DR: An efficient contact determination algorithm for objects undergoing rigid motion is presented, and techniques to reduce the O(n^2) pairwise intersection tests for a large environment of n objects are proposed.
Abstract: We present an efficient contact determination algorithm for objects undergoing rigid motion. The environment consists of polytopes and models described by algebraic sets. We extend an expected constant time collision detection algorithm between convex polytopes to concave polytopes and curved models. The algorithm makes use of hierarchical representations for concave polytopes and local and global methods for solving polynomial equations to determine possible contact points. We also propose techniques to reduce the O(n^2) pairwise intersection tests for a large environment of n objects. These algorithms work well in practice and give real time performance for most environments.
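
One common way to avoid the O(n^2) pairwise tests is a broad-phase sweep over axis-aligned bounding boxes; a hedged generic sketch (the paper's scheduling exploits temporal and spatial coherence and is more sophisticated than this):

```python
def broad_phase_pairs(aabbs):
    """Sweep-and-prune along x: report only pairs whose bounding boxes overlap in 3D.

    aabbs is a list of (lo, hi) corner pairs, each corner a 3-tuple.
    """
    order = sorted(range(len(aabbs)), key=lambda i: aabbs[i][0][0])
    active, pairs = [], []
    for i in order:
        lo_i, hi_i = aabbs[i]
        # Boxes whose x-interval has already ended cannot overlap box i or anything after it.
        active = [j for j in active if aabbs[j][1][0] >= lo_i[0]]
        for j in active:
            lo_j, hi_j = aabbs[j]
            if all(lo_i[k] <= hi_j[k] and lo_j[k] <= hi_i[k] for k in range(3)):
                pairs.append((j, i))
        active.append(i)
    return pairs

boxes = [((0, 0, 0), (1, 1, 1)), ((0.5, 0.5, 0.5), (2, 2, 2)), ((5, 5, 5), (6, 6, 6))]
print(broad_phase_pairs(boxes))    # [(0, 1)]
```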

Journal ArticleDOI
TL;DR: The structure of a parallel dynamic relaxation code is discussed and described algorithmically, and it is shown that computational efficiency increases as the computational load per processor is increased.
Abstract: This paper describes a parallel algorithm for the dynamic relaxation (DR) method. The basic theory of the dynamic relaxation is briefly reviewed to prepare the reader for the parallel implementation of the algorithm. Some fundamental parallel processing schemes have been explored for the implementation of the algorithm. Geometric Parallelism was found suitable for the DR method when using transputer‐based systems. The evolution of the parallel algorithm is given by identifying the steps which may be executed in parallel. The structure of the parallel code is discussed and then described algorithmically. Two geometrically non‐linear parallel finite element analyses have been performed using different mesh densities. The number of processors was varied to investigate algorithm efficiency and speed ups. Using the results obtained it is shown that the computational efficiency increases when the computational load per processor is increased.

Journal ArticleDOI
TL;DR: This paper presents an optimal parallel randomized algorithm for computing the intersection of half-spaces in three dimensions; the algorithms are randomized in the sense that they use only a polylogarithmic number of random bits and terminate in the claimed time bound with probability 1 - n^(-α) for any fixed α > 0.
Abstract: Further applications are presented of the random sampling techniques which have been used for deriving efficient parallel algorithms [J. H. Reif and S. Sen, Proc. 16th International Conference on Parallel Processing, 1987]. This paper presents an optimal parallel randomized algorithm for computing the intersection of half-spaces in three dimensions. Because of well-known reductions, these methods also yield equally efficient algorithms for fundamental problems like the convex hull in three dimensions, the Voronoi diagram of point sites on a plane, and the Euclidean minimal spanning tree. The algorithms run in time T = O(log n) for worst-case inputs and use P = O(n) processors in a CREW PRAM model, where n is the input size. They are randomized in the sense that they use a total of only a polylogarithmic number of random bits and terminate in the claimed time bound with probability 1 - n^(-α) for any fixed α > 0. They are also optimal in the P·T product, since the sequential time bound for all thes...

Proceedings ArticleDOI
26 Oct 1994
TL;DR: This paper presents parallel computational geometry algorithms that are scalable, architecture independent, easy to implement, and have, with high probability, an optimal time complexity for uniformly distributed random input data.
Abstract: We present parallel computational geometry algorithms that are scalable, architecture independent, easy to implement, and have, with high probability, an optimal time complexity for uniformly distributed random input data. Our methods apply to multicomputers with arbitrary interconnection network or bus system. The following problems are studied in this paper: (1) lower envelope of line segments, (2) visibility of parallelepipeds, (3) convex hull, (4) maximal elements, (5) Voronoi diagram, (6) all-nearest neighbors, (7) largest empty circle, and (8) largest empty hyperrectangle. Problems 2-8 are studied for d-dimensional space, d=O(1). We implemented and tested the lower envelope algorithm and convex hull algorithm (for d=3 and d=4) on a CM5. The results indicate that our methods are of considerable practical relevance.

Journal ArticleDOI
TL;DR: This work demonstrates the fruitful relationship between mathematics and the discipline of computer graphics, emphasizing those areas of low-dimensional geometry and topology where interactive paradigms are of growing importance.
Abstract: Interactive computer graphics can provide new insights into the objects of pure geometry, providing intuitively useful images, and, in some cases, unexpected results. Interactive computer graphics systems have opened a new era in the visualization of pure geometry. We demonstrate the fruitful relationship between mathematics and the discipline of computer graphics, emphasizing those areas of low-dimensional geometry and topology where interactive paradigms are of growing importance.

Proceedings ArticleDOI
10 Jun 1994
TL;DR: A deterministic polynomial time method for finding a set cover in a set system of VC-dimension d whose size is within a factor of O(d log(dc)) of the optimal size c; in some cases, such as those arising in 3-d polytope approximation and 2-d disc covering, O(c)-sized covers can be found quickly.
Abstract: We give a deterministic polynomial time method for finding a set cover in a set system (X, ℜ) of VC-dimension d such that the size of our cover is at most a factor of O(d log(dc)) from the optimal size, c. For constant VC-dimension set systems, which are common in computational geometry, our method gives an O(log c) approximation factor. This improves the previous Θ(log |X|) bound of the greedy method and beats recent complexity-theoretic lower bounds for set covers (which don't make any assumptions about VC-dimension). We give several applications of our method to computational geometry, and we show that in some cases, such as those that arise in 3-d polytope approximation and 2-d disc covering, we can quickly find O(c)-sized covers.
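
For reference, the classical greedy algorithm whose Θ(log |X|) guarantee the paper improves on for bounded VC-dimension set systems; a minimal generic sketch:

```python
def greedy_set_cover(universe, sets):
    """Classic greedy set cover: repeatedly pick the set covering the most uncovered elements."""
    uncovered, cover = set(universe), []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & set(sets[i])))
        if not uncovered & set(sets[best]):
            raise ValueError("the given sets do not cover the universe")
        cover.append(best)
        uncovered -= set(sets[best])
    return cover

print(greedy_set_cover(range(6), [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]))   # [0, 2]
```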

Journal ArticleDOI
Jarek Rossignac, Anil Kaul
TL;DR: A new technique for interactively editing such deformations and for animating them in realtime is presented, using the new Bezier Interpolating Polyhedron (BIP), which provides a graphics representation of such a deforming object formulated mathematically as a point describing a Bezier curve in the space of all polyhedra.
Abstract: The metamorphosis between two user-specified objects offers an intuitive metaphor for designing animations of deforming shapes. We present a new technique for interactively editing such deformations and for animating them in realtime. Besides the starting and ending shapes, our approach offers easy to use additional control over the deformations. The new Bezier Interpolating Polyhedron (BIP) provides a graphics representation of such a deforming object formulated mathematically as a point describing a Bezier curve in the space of all polyhedra. We replace, in the Bezier formulation, the traditional control points by arbitrary polyhedra and the vector addition by the Minkowski sum. BIPs are composed of Animated GRaphic ELements (AGRELs), which are faces with constant orientation, but with parametrized vertices represented by Bezier curves. AGRELs were designed to efficiently support smooth realtime animation on commercially available rendering hardware. We provide a tested algorithm for automatically computing BIPs from the sequence of control polyhedra and demonstrate its applications to animation design.
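
A hedged 2D sketch of the Bezier-with-Minkowski-sums idea for convex control polygons: Bernstein weights scale each control polygon, and the weighted polygons are combined by Minkowski sums instead of vector addition. This only illustrates the formulation; the paper's BIP/AGREL machinery handles general polyhedra and realtime rendering.

```python
import numpy as np
from math import comb
from scipy.spatial import ConvexHull

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons as the hull of all pairwise vertex sums."""
    sums = (P[:, None, :] + Q[None, :, :]).reshape(-1, 2)
    return sums[ConvexHull(sums).vertices]

def bip_shape(controls, t):
    """Bezier-style blend of convex control polygons at parameter t (Minkowski combination)."""
    n = len(controls) - 1
    shape = None
    for i, P in enumerate(controls):
        w = comb(n, i) * (1 - t) ** (n - i) * t ** i       # Bernstein weight
        scaled = w * np.asarray(P, float)
        shape = scaled if shape is None else minkowski_sum(shape, scaled)
    return shape

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
triangle = np.array([[0, 0], [2, 0], [1, 2]], float)
print(bip_shape([square, triangle], 0.5))   # the intermediate shape of the morph at t = 0.5
```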

Journal ArticleDOI
TL;DR: A new, parallel, nearest-neighbor (NN) pattern classifier, based on a 2D Cellular Automaton (CA) architecture, is presented, which produces piece-wise linear discriminant curves between clusters of points of complex shape (nonlinearly separable).
Abstract: A new, parallel, nearest-neighbor (NN) pattern classifier, based on a 2D Cellular Automaton (CA) architecture, is presented in this paper. The proposed classifier is both time and space efficient, when compared with already existing NN classifiers, since it does not require complex distance calculations and ordering of distances, and storage requirements are kept minimal since each cell stores information only about its nearest neighborhood. The proposed classifier produces piece-wise linear discriminant curves between clusters of points of complex shape (nonlinearly separable) using the computational geometry concept known as the Voronoi diagram, which is established through CA evolution. These curves are established during an "off-line" operation and, thus, the subsequent classification of unknown patterns is achieved very fast. The VLSI design and implementation of a nearest neighborhood processor of the proposed 2D CA architecture is also presented in this paper.
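
A hedged sketch of the discrete-Voronoi idea behind such a classifier: labelled wavefronts grow from the training points on a grid, cellular-automaton style, until every cell carries the label of its nearest seed (4-neighbour metric here, so distances are L1; the paper's CA rules and VLSI design are of course more elaborate).

```python
from collections import deque

def grid_voronoi(shape, seeds):
    """Multi-source BFS: label every grid cell with the class of its nearest seed."""
    rows, cols = shape
    label = [[None] * cols for _ in range(rows)]
    queue = deque()
    for cls, (r, c) in seeds:
        label[r][c] = cls
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and label[nr][nc] is None:
                label[nr][nc] = label[r][c]        # the wavefront that arrives first wins
                queue.append((nr, nc))
    return label

for row in grid_voronoi((5, 5), [("A", (0, 0)), ("B", (4, 4))]):
    print(row)
```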

Book ChapterDOI
01 Jan 1994
TL;DR: A survey of techniques for decomposing complex shapes into simpler components; partitioning polygons or polyhedra into convex pieces or simplices is a typical preprocessing step in automated design, robotics, and pattern recognition.
Abstract: Decomposing complex shapes into simpler components has always been a focus of attention in computational geometry. The reason is obvious: most geometric algorithms perform more efficiently and are easier to implement and debug if the objects have simple shapes. For example, mesh-generation is a standard staple of the finite-element method; partitioning polygons or polyhedra into convex pieces or simplices is a typical preprocessing step in automated design, robotics, and pattern recognition. In computer graphics, decompositions of two-dimensional scenes are used in contour filling, hit detection, clipping and windowing; polyhedra are decomposed into smaller parts to perform hidden surface removal and ray-tracing.
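
As a concrete instance of the simplest such decomposition, a hedged sketch of ear-clipping triangulation for a simple polygon given in counter-clockwise order (an O(n^2) baseline; the literature surveyed here does considerably better and handles far more general settings):

```python
def triangulate(polygon):
    """Ear clipping: return a list of triangles covering a simple CCW polygon."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def contains(p, a, b, c):
        return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

    verts = list(polygon)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i - 1], verts[i], verts[(i + 1) % n]
            if cross(a, b, c) <= 0:
                continue                                    # reflex corner, not an ear
            if any(contains(p, a, b, c) for p in verts if p not in (a, b, c)):
                continue                                    # another vertex blocks this ear
            triangles.append((a, b, c))
            del verts[i]
            break
        else:
            raise ValueError("input is not a simple CCW polygon")
    triangles.append(tuple(verts))
    return triangles

lshape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]   # concave L-shaped polygon
print(triangulate(lshape))
```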

Proceedings ArticleDOI
08 May 1994
TL;DR: This work presents, using standard ideas from Lie groups and Riemannian geometry, a computationally efficient recursive algorithm for the inverse dynamics of an open-chain manipulator, derived entirely from Lie theoretic concepts and definitions.
Abstract: We present, using standard ideas from Lie groups and Riemannian geometry, a computationally efficient recursive algorithm for the inverse dynamics of an open-chain manipulator. Our algorithm bears close resemblance to Featherstone's (1991) approach, but is derived entirely from Lie theoretic concepts and definitions. Our geometric approach permits a high-level view of robot dynamics that emphasizes the coordinate-free aspects of the equations of motion while preserving the computational efficiency of recursive algorithms.