Author

Joseph O'Rourke

Bio: Joseph O'Rourke is an academic researcher from Arizona State University. The author has contributed to research in topics including computational geometry and polyhedra. The author has an h-index of 45 and has co-authored 351 publications receiving 12,839 citations. Previous affiliations of Joseph O'Rourke include Yale University and the University of Pennsylvania.


Papers
Book
Joseph O'Rourke
01 Jan 1994
TL;DR: In this book, the design and implementation of geometry algorithms arising in areas such as computer graphics, robotics, and engineering design are described, and a self-contained treatment of the basic techniques used in computational geometry is presented.
Abstract: From the Publisher: This is the newly revised and expanded edition of a popular introduction to the design and implementation of geometry algorithms arising in areas such as computer graphics, robotics, and engineering design. The basic techniques used in computational geometry are all covered: polygon triangulations, convex hulls, Voronoi diagrams, arrangements, geometric searching, and motion planning. The self-contained treatment presumes only an elementary knowledge of mathematics, but it reaches topics on the frontier of current research. Thus professional programmers will find it a useful tutorial.
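As a taste of the "basic techniques" listed above, here is a minimal generic sketch of one of them: a planar convex hull computed with Andrew's monotone chain. It is an illustrative textbook routine, not code from the book (whose own implementations are in C).

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the hull vertices of a set of 2-D points, counterclockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                          # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]         # drop duplicated endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```

The initial sort dominates, so the whole computation runs in O(n log n) time.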

1,874 citations

Book
01 Jan 1987
TL;DR: In this book, the author surveys visibility and guarding problems in polygons, covering polygon partitions, mobile guards, visibility algorithms, minimal guard covers, and extensions to three dimensions.
Abstract: Contents: Polygon partitions; Orthogonal polygons; Mobile guards; Miscellaneous shapes; Holes; Exterior visibility; Visibility graphs; Visibility algorithms; Minimal guard covers; Three dimensions and miscellany.

1,547 citations

BookDOI
01 Jan 1997
TL;DR: A comprehensive handbook surveying discrete and computational geometry: combinatorial geometry, polytopes and polyhedra, algorithms for fundamental geometric objects, geometric data structures and searching, computational techniques, applications, and geometric software.
Abstract: Contents:
COMBINATORIAL AND DISCRETE GEOMETRY: Finite Point Configurations, J. Pach; Packing and Covering, G. Fejes Toth; Tilings, D. Schattschneider and M. Senechal; Helly-Type Theorems and Geometric Transversals, R. Wenger; Pseudoline Arrangements, J.E. Goodman; Oriented Matroids, J. Richter-Gebert and G.M. Ziegler; Lattice Points and Lattice Polytopes, A. Barvinok; New! Low-Distortion Embeddings of Finite Metric Spaces, P. Indyk and J. Matousek; New! Geometry and Topology of Polygonal Linkages, R. Connelly and E.D. Demaine; New! Geometric Graph Theory, J. Pach; Euclidean Ramsey Theory, R.L. Graham; Discrete Aspects of Stochastic Geometry, R. Schneider; Geometric Discrepancy Theory and Uniform Distribution, J.R. Alexander, J. Beck, and W.W.L. Chen; Topological Methods, R.T. Zivaljevic; Polyominoes, S.W. Golomb and D.A. Klarner.
POLYTOPES AND POLYHEDRA: Basic Properties of Convex Polytopes, M. Henk, J. Richter-Gebert, and G.M. Ziegler; Subdivisions and Triangulations of Polytopes, C.W. Lee; Face Numbers of Polytopes and Complexes, L.J. Billera and A. Bjoerner; Symmetry of Polytopes and Polyhedra, E. Schulte; Polytope Skeletons and Paths, G. Kalai; Polyhedral Maps, U. Brehm and E. Schulte.
ALGORITHMS AND COMPLEXITY OF FUNDAMENTAL GEOMETRIC OBJECTS: Convex Hull Computations, R. Seidel; Voronoi Diagrams and Delaunay Triangulations, S. Fortune; Arrangements, D. Halperin; Triangulations and Mesh Generation, M. Bern; Polygons, J. O'Rourke and S. Suri; Shortest Paths and Networks, J.S.B. Mitchell; Visibility, J. O'Rourke; Geometric Reconstruction Problems, S.S. Skiena; New! Curve and Surface Reconstruction, T.K. Dey; Computational Convexity, P. Gritzmann and V. Klee; Computational Topology, G. Vegter; Computational Real Algebraic Geometry, B. Mishra.
GEOMETRIC DATA STRUCTURES AND SEARCHING: Point Location, J. Snoeyink; New! Collision and Proximity Queries, M.C. Lin and D. Manocha; Range Searching, P.K. Agarwal; Ray Shooting and Lines in Space, M. Pellegrini; Geometric Intersection, D.M. Mount; New! Nearest Neighbors in High-Dimensional Spaces, P. Indyk.
COMPUTATIONAL TECHNIQUES: Randomization and Derandomization, O. Cheong, K. Mulmuley, and E. Ramos; Robust Geometric Computation, C.K. Yap; Parallel Algorithms in Geometry, M.T. Goodrich; Parametric Search, J.S. Salowe; New! The Discrepancy Method in Computational Geometry, B. Chazelle.
APPLICATIONS OF DISCRETE AND COMPUTATIONAL GEOMETRY: Linear Programming, M. Dyer, N. Megiddo, and E. Welzl; Mathematical Programming, M.H. Todd; Algorithmic Motion Planning, M. Sharir; Robotics, D. Halperin, L.E. Kavraki, and J.-C. Latombe; Computer Graphics, D. Dobkin and S. Teller; New! Modeling Motion, L.J. Guibas; Pattern Recognition, J. O'Rourke and G.T. Toussaint; Graph Drawing, R. Tamassia and G. Liotta; Splines and Geometric Modeling, C.L. Bajaj; New! Surface Simplification and 3D Geometry Compression, J. Rossignac; Manufacturing Processes, R. Janardan and T.C. Woo; Solid Modeling, C.M. Hoffmann; New! Computation of Robust Statistics: Depth, Median, and Related Measures, P.J. Rousseeuw and A. Struyf; New! Geographic Information Systems, M. van Kreveld; Geometric Applications of the Grassmann-Cayley Algebra, N.L. White; Rigidity and Scene Analysis, W. Whiteley; Sphere Packing and Coding Theory, G.A. Kabatiansky and J.A. Rush; Crystals and Quasicrystals, M. Senechal; New! Biological Applications of Computational Topology, H. Edelsbrunner.
GEOMETRIC SOFTWARE: New! Software, J. Joswig; Two Computational Geometry Libraries: LEDA and CGAL, L. Kettner and S. Naher.
Index of Defined Terms; New! Index of Cited Authors.

1,391 citations

Book
16 Jul 2007
TL;DR: Aimed primarily at advanced undergraduate and graduate students in mathematics or computer science, this lavishly illustrated book will fascinate a broad audience, from high school students to researchers.
Abstract: How can linkages, pieces of paper, and polyhedra be folded? The authors present hundreds of results and over 60 unsolved 'open problems' in this comprehensive look at the mathematics of folding, with an emphasis on algorithmic or computational aspects. Folding and unfolding problems have been implicit since Albrecht Dürer in the early 1500s, but have only recently been studied in the mathematical literature. Over the past decade, there has been a surge of interest in these problems, with applications ranging from robotics to protein folding. A proof shows that it is possible to design a series of jointed bars moving only in a flat plane that can sign a name or trace any other algebraic curve. One remarkable algorithm shows you can fold any straight-line drawing on paper so that the complete drawing can be cut out with one straight scissors cut. Aimed primarily at advanced undergraduate and graduate students in mathematics or computer science, this lavishly illustrated book will fascinate a broad audience, from high school students to researchers.

509 citations

01 Jan 1979
TL;DR: In this paper, a system capable of analyzing image sequences of human motion is described, which is structured as a feedback loop between high and low levels: predictions are made at the semantic level, and verifications are sought at the image level.
Abstract: A system capable of analyzing image sequences of human motion is described. The system is structured as a feedback loop between high and low levels: predictions are made at the semantic level, and verifications are sought at the image level. The domain of human motion lends itself to a model-driven analysis, and the system includes a detailed model of the human body. All information extracted from the image is interpreted through a constraint network based on the structure of the human model. A constraint propagation operator is defined and its theoretical properties developed. An implementation of this operator is described, and results of the analysis system for a short image sequence are presented.
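The constraint network described above propagates local evidence between body-part hypotheses until they are mutually consistent. As generic background only, the sketch below shows a standard arc-consistency (AC-3 style) propagation loop; it is not the operator defined in the paper, and the toy variables and constraint are illustrative.

```python
from collections import deque

def ac3(domains, constraints):
    """Generic arc-consistency sweep. domains maps each variable to a set
    of candidate values; constraints maps an ordered pair (x, y) to a
    predicate accepting compatible (vx, vy) pairs. Prunes in place."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        allowed = constraints[(x, y)]
        # discard values of x that no value of y supports
        pruned = {vx for vx in domains[x]
                  if not any(allowed(vx, vy) for vy in domains[y])}
        if pruned:
            domains[x] -= pruned
            # x's domain shrank, so arcs pointing at x must be rechecked
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains

# toy usage: two joint measurements constrained to differ by at most 1
doms = {"a": {1, 2, 3, 8}, "b": {2, 3}}
cons = {("a", "b"): lambda va, vb: abs(va - vb) <= 1,
        ("b", "a"): lambda vb, va: abs(vb - va) <= 1}
print(ac3(doms, cons))  # {'a': {1, 2, 3}, 'b': {2, 3}}
```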

462 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, a method of over-sampling the minority class by creating synthetic minority class examples is proposed and evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
Abstract: An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of "normal" examples with only a small percentage of "abnormal" or "interesting" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
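A minimal sketch of the synthetic over-sampling idea summarized above: each new minority example is interpolated between an existing minority point and one of its nearest minority-class neighbors. The function name, parameter choices, and neighbor selection are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def synthetic_oversample(X_minority, n_new, k=5, seed=0):
    """Generate n_new synthetic minority examples by interpolating each
    chosen point toward one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X_minority, dtype=float)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dists = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(dists)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbors)
        gap = rng.random()                        # uniform fraction in [0, 1)
        out.append(X[i] + gap * (X[j] - X[i]))
    return np.vstack(out)

# usage: add 8 synthetic points to a 4-point minority class in 2-D
X_min = [[1.0, 1.0], [1.2, 0.9], [0.9, 1.3], [1.1, 1.1]]
print(synthetic_oversample(X_min, n_new=8).shape)  # (8, 2)
```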

11,512 citations

MonographDOI
01 Jan 2006
TL;DR: This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms, and extends to planning under differential constraints that arise when automating the motions of virtually any mechanical system.
Abstract: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.
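As generic background for the Markov decision process material mentioned above, here is a minimal value-iteration sketch for a finite MDP. The data layout (nested P and R dictionaries) and the discount factor are illustrative assumptions, not the book's own notation or code.

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """Minimal value iteration. P[s][a] is a list of (prob, next_state)
    pairs; R[s][a] is the immediate reward. Returns optimal state values."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# toy two-state chain: "go" risks falling back to s0, "stay" is safe
states, actions = ["s0", "s1"], ["stay", "go"]
P = {"s0": {"stay": [(1.0, "s0")], "go": [(0.8, "s1"), (0.2, "s0")]},
     "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s1")]}}
R = {"s0": {"stay": 0.0, "go": 1.0}, "s1": {"stay": 2.0, "go": 2.0}}
print(value_iteration(states, actions, P, R))
```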

6,340 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: A general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points that is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners.
Abstract: This thesis describes a general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points. Instances of surface reconstruction arise in numerous scientific and engineering applications, including reverse engineering, the automatic generation of CAD models from physical objects. Previous surface reconstruction methods have typically required additional knowledge, such as structure in the data, known surface genus, or orientation information. In contrast, the method outlined in this thesis requires only the 3D coordinates of the data points. From the data, the method is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners. The reconstruction method has three major phases: (1) initial surface estimation, (2) mesh optimization, and (3) piecewise smooth surface optimization. A key ingredient in phase 3, and another principal contribution of this thesis, is the introduction of a new class of piecewise smooth representations based on subdivision. The effectiveness of the three-phase reconstruction method is demonstrated on a number of examples using both simulated and real data. Phases 2 and 3 of the surface reconstruction method can also be used to approximate existing surface models. By casting surface approximation as a global optimization problem with an energy function that directly measures deviation of the approximation from the original surface, models are obtained that exhibit excellent accuracy-to-conciseness trade-offs. Examples of piecewise linear and piecewise smooth approximations are generated for various surfaces, including meshes, NURBS surfaces, CSG models, and implicit surfaces.
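One standard ingredient in this line of work, estimating a surface from nothing but 3D coordinates, is fitting a local tangent plane to each point's nearest neighbors via principal component analysis. The sketch below shows only that generic step, assuming a numpy array of points; it is a simplified illustration, not the thesis's full three-phase pipeline.

```python
import numpy as np

def tangent_planes(points, k=10):
    """Fit a local tangent plane to each point's k nearest neighbors.
    Returns (centroid, unit_normal) pairs; the normal is the covariance
    eigenvector with the smallest eigenvalue."""
    P = np.asarray(points, dtype=float)
    planes = []
    for p in P:
        dists = np.linalg.norm(P - p, axis=1)
        nbrs = P[np.argsort(dists)[:k]]          # k nearest, including p
        centroid = nbrs.mean(axis=0)
        cov = (nbrs - centroid).T @ (nbrs - centroid)
        eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
        planes.append((centroid, eigvecs[:, 0]))
    return planes
```

Consistently orienting the resulting normals across the whole point set is a separate, harder step that such methods must also address.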

3,119 citations

Journal ArticleDOI
TL;DR: W⁴ employs a combination of shape analysis and tracking to locate people and their parts and to create models of people's appearance so that they can be tracked through interactions such as occlusions.
Abstract: W⁴ is a real-time visual surveillance system for detecting and tracking multiple people and monitoring their activities in an outdoor environment. It operates on monocular gray-scale video imagery, or on video imagery from an infrared camera. W⁴ employs a combination of shape analysis and tracking to locate people and their parts (head, hands, feet, torso) and to create models of people's appearance so that they can be tracked through interactions such as occlusions. It can determine whether a foreground region contains multiple people and can segment the region into its constituent people and track them. W⁴ can also determine whether people are carrying objects, and can segment objects from their silhouettes, and construct appearance models for them so they can be identified in subsequent frames. W⁴ can recognize events between people and objects, such as depositing an object, exchanging bags, or removing an object. It runs at 25 Hz for 320×240 resolution images on a 400 MHz dual-Pentium II PC.
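The capabilities above all rest on first separating foreground people from the scene in gray-scale imagery. As a simplified illustration only, the sketch below thresholds the difference between a frame and a reference background image; the published system's background model and thresholds are considerably more elaborate.

```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Flag pixels whose gray value deviates from a reference background
    image by more than a fixed threshold. Inputs are 2-D uint8 arrays."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold  # boolean mask of candidate foreground pixels
```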

2,870 citations

Book
01 Jan 2001
TL;DR: The complexity class P is formally defined as the set of concrete decision problems that are polynomial-time solvable, and encodings are used to map abstract problems to concrete problems.
Abstract: Abstract problems. To understand the class of polynomial-time solvable problems, we must first have a formal notion of what a "problem" is. We define an abstract problem Q to be a binary relation on a set I of problem instances and a set S of problem solutions. For example, an instance for SHORTEST-PATH is a triple consisting of a graph and two vertices. A solution is a sequence of vertices in the graph, with perhaps the empty sequence denoting that no path exists. The problem SHORTEST-PATH itself is the relation that associates each instance of a graph and two vertices with a shortest path in the graph that connects the two vertices. Since shortest paths are not necessarily unique, a given problem instance may have more than one solution. This formulation of an abstract problem is more general than is required for our purposes. As we saw above, the theory of NP-completeness restricts attention to decision problems: those having a yes/no solution. In this case, we can view an abstract decision problem as a function that maps the instance set I to the solution set {0, 1}. For example, a decision problem related to SHORTEST-PATH is the problem PATH that we saw earlier. If i = ⟨G, u, v, k⟩ is an instance of the decision problem PATH, then PATH(i) = 1 (yes) if a shortest path from u to v has at most k edges, and PATH(i) = 0 (no) otherwise. Many abstract problems are not decision problems, but rather optimization problems, in which some value must be minimized or maximized. As we saw above, however, it is usually a simple matter to recast an optimization problem as a decision problem that is no harder.

Encodings. If a computer program is to solve an abstract problem, problem instances must be represented in a way that the program understands. An encoding of a set S of abstract objects is a mapping e from S to the set of binary strings. For example, we are all familiar with encoding the natural numbers N = {0, 1, 2, 3, 4, ...} as the strings {0, 1, 10, 11, 100, ...}. Using this encoding, e(17) = 10001. Anyone who has looked at computer representations of keyboard characters is familiar with either the ASCII or EBCDIC codes. In the ASCII code, the encoding of A is 1000001. Even a compound object can be encoded as a binary string by combining the representations of its constituent parts. Polygons, graphs, functions, ordered pairs, programs: all can be encoded as binary strings. Thus, a computer algorithm that "solves" some abstract decision problem actually takes an encoding of a problem instance as input. We call a problem whose instance set is the set of binary strings a concrete problem. We say that an algorithm solves a concrete problem in time O(T(n)) if, when it is provided a problem instance i of length n = |i|, the algorithm can produce the solution in O(T(n)) time. A concrete problem is polynomial-time solvable, therefore, if there exists an algorithm to solve it in time O(n^k) for some constant k. We can now formally define the complexity class P as the set of concrete decision problems that are polynomial-time solvable.

We can use encodings to map abstract problems to concrete problems. Given an abstract decision problem Q mapping an instance set I to {0, 1}, an encoding e : I → {0, 1}* can be used to induce a related concrete decision problem, which we denote by e(Q). If the solution to an abstract-problem instance i ∈ I is Q(i) ∈ {0, 1}, then the solution to the concrete-problem instance e(i) ∈ {0, 1}* is also Q(i). As a technicality, there may be some binary strings that represent no meaningful abstract-problem instance. For convenience, we shall assume that any such string is mapped arbitrarily to 0. Thus, the concrete problem produces the same solutions as the abstract problem on binary-string instances that represent the encodings of abstract-problem instances.

We would like to extend the definition of polynomial-time solvability from concrete problems to abstract problems by using encodings as the bridge, but we would like the definition to be independent of any particular encoding. That is, the efficiency of solving a problem should not depend on how the problem is encoded. Unfortunately, it depends quite heavily on the encoding. For example, suppose that an integer k is to be provided as the sole input to an algorithm, and suppose that the running time of the algorithm is Θ(k). If the integer k is provided in unary (a string of k 1's), then the running time of the algorithm is O(n) on length-n inputs, which is polynomial time. If we use the more natural binary representation of the integer k, however, then the input length is n = ⌊lg k⌋ + 1. In this case, the running time of the algorithm is Θ(k) = Θ(2^n), which is exponential in the size of the input. Thus, depending on the encoding, the algorithm runs in either polynomial or superpolynomial time.

The encoding of an abstract problem is therefore quite important to our understanding of polynomial time. We cannot really talk about solving an abstract problem without first specifying an encoding. Nevertheless, in practice, if we rule out "expensive" encodings such as unary ones, the actual encoding of a problem makes little difference to whether the problem can be solved in polynomial time. For example, representing integers in base 3 instead of binary has no effect on whether a problem is solvable in polynomial time, since an integer represented in base 3 can be converted to an integer represented in base 2 in polynomial time. We say that a function f : {0, 1}* → {0, 1}* is polynomial-time computable if there exists a polynomial-time algorithm A that, given any input x ∈ {0, 1}*, produces as output f(x). For some set I of problem instances, we say that two encodings e1 and e2 are polynomially related if there exist two polynomial-time computable functions f12 and f21 such that for any i ∈ I, we have f12(e1(i)) = e2(i) and f21(e2(i)) = e1(i). That is, the encoding e2(i) can be computed from the encoding e1(i) by a polynomial-time algorithm, and vice versa. If two encodings e1 and e2 of an abstract problem are polynomially related, whether the problem is polynomial-time solvable or not is independent of which encoding we use, as the following lemma shows.

Lemma 34.1. Let Q be an abstract decision problem on an instance set I, and let e1 and e2 be polynomially related encodings on I. Then, e1(Q) ∈ P if and only if e2(Q) ∈ P.

Proof. We need only prove the forward direction, since the backward direction is symmetric. Suppose, therefore, that e1(Q) can be solved in time O(n^k) for some constant k. Further, suppose that for any problem instance i, the encoding e1(i) can be computed from the encoding e2(i) in time O(n^c) for some constant c, where n = |e2(i)|. To solve problem e2(Q), on input e2(i), we first compute e1(i) and then run the algorithm for e1(Q) on e1(i). How long does this take? The conversion of encodings takes time O(n^c), and therefore |e1(i)| = O(n^c), since the output of a serial computer cannot be longer than its running time. Solving the problem on e1(i) takes time O(|e1(i)|^k) = O(n^(ck)), which is polynomial since both c and k are constants.

Thus, whether an abstract problem has its instances encoded in binary or base 3 does not affect its "complexity," that is, whether it is polynomial-time solvable or not, but if instances are encoded in unary, its complexity may change. In order to be able to converse in an encoding-independent fashion, we shall generally assume that problem instances are encoded in any reasonable, concise fashion, unless we specifically say otherwise. To be precise, we shall assume that the encoding of an integer is polynomially related to its binary representation, and that the encoding of a finite set is polynomially related to its encoding as a list of its elements, enclosed in braces and separated by commas. (ASCII is one such encoding scheme.) With such a "standard" encoding in hand, we can derive reasonable encodings of other mathematical objects, such as tuples, graphs, and formulas. To denote the standard encoding of an object, we shall enclose the object in angle braces. Thus, ⟨G⟩ denotes the standard encoding of a graph G. As long as we implicitly use an encoding that is polynomially related to this standard encoding, we can talk directly about abstract problems without reference to any particular encoding, knowing that the choice of encoding has no effect on whether the abstract problem is polynomial-time solvable. Henceforth, we shall generally assume that all problem instances are binary strings encoded using the standard encoding, unless we explicitly specify the contrary. We shall also typically neglect the distinction between abstract and concrete problems. The reader should watch out for problems that arise in practice, however, in which a standard encoding is not obvious and the encoding does make a difference.

A formal-language framework. One of the convenient aspects of focusing on decision problems is that they make it easy to use the machinery of formal-language theory. It is worthwhile at this point to review some definitions from that theory. An alphabet Σ is a finite set of symbols. A language L over Σ is any set of strings made up of symbols from Σ. For example, if Σ = {0, 1}, the set L = {10, 11, 101, 111, 1011, 1101, 10001, ...} is the language of binary representations of prime numbers. We denote the empty string by ε, and the empty language by Ø. The language of all strings over Σ is denoted Σ*. For example, if Σ = {0, 1}, then Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, ...} is the set of all binary strings. Every language L over Σ is a subset of Σ*. There are a variety of operations on languages. Set-theoretic operations, such as union and intersection, follow directly from the set-theoretic definitions. We define the complement of L by L̄ = Σ* − L. The concatenation of two languages L1 and L2 is the language L = {x1x2 : x1 ∈ L1 and x2 ∈ L2}. The closure or Kleene star of a language L is the language L* = {ε} ∪ L ∪ L^2 ∪ L^3 ∪ ···, where L^k is the language obtained by concatenating L to itself k times.
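To make the unary-versus-binary contrast concrete, here is a small illustrative script (not from the text): the same Θ(k) loop is linear in the length of a unary encoding of k but exponential in the length of its binary encoding.

```python
def count_down(k):
    """A toy algorithm whose running time is Theta(k)."""
    steps = 0
    while k > 0:
        k -= 1
        steps += 1
    return steps

k = 1000
print(count_down(k))    # 1000 elementary steps
print(k)                # unary input length n = k, so Theta(k) = Theta(n)
print(k.bit_length())   # binary input length n = 10, so Theta(k) ~ Theta(2**n)
```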

2,817 citations