
Showing papers on "Computational geometry published in 2008"


Proceedings ArticleDOI
11 Aug 2008
TL;DR: This course is targeted at software developers with geometric needs, and course graduates will be able to select and use the appropriate algorithms and data structures provided by CGAL in their current or upcoming projects.
Abstract: The CGAL C++ library offers geometric data structures and algorithms that are reliable, efficient, easy to use, and easy to integrate into existing software. Use of de facto standard libraries like CGAL increases productivity, because they allow software developers to focus on the application layer. This course is an overview of CGAL geometric algorithms and data structures. The lectures cover:
• CGAL for 2D vector graphics, including Boolean operations on Bezier curves, offsets, simplification, and geometry on the sphere.
• CGAL for 3D point sets, including principal component analysis, bounding volumes, simplification, outlier removal, normal estimation, normal orientation, denoising, triangulation, and surface reconstruction.
• CGAL for mesh-based modeling and processing, including Boolean operations, convex decomposition, simplification, and parameterization.
• CGAL for mesh generation, including surface and volume mesh generation from 3D images, implicit functions, or polyhedral surfaces.
The introductory lecture covers non-geometric topics: the exact geometric computing paradigm that makes CGAL reliable without sacrificing efficiency, and the generic programming paradigm that facilitates integration into existing software.

565 citations
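
As a concrete illustration of the exact geometric computing paradigm highlighted in the introductory lecture, the sketch below (plain Python with rational arithmetic, not CGAL's C++ API) contrasts a floating-point orientation predicate with an exact one; CGAL's kernels provide the same correctness guarantee, typically via filtered arithmetic to keep it cheap.

```python
# Illustrative sketch only -- plain Python, not CGAL's API. The orientation
# predicate decides whether point c lies left of, on, or right of the
# directed line through a and b.
from fractions import Fraction

def orient_float(a, b, c):
    """Sign of the 2x2 determinant in floating point; round-off can flip
    the sign for nearly collinear inputs."""
    d = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (d > 0) - (d < 0)

def orient_exact(a, b, c):
    """The same determinant over exact rationals: the sign is always
    correct, which is what robust geometric algorithms build on."""
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (d > 0) - (d < 0)

# Nearly collinear triple: the two predicates need not agree here.
a, b, c = (0.5, 0.5), (12.0, 12.0), (24.0, 24.000000000000004)
print(orient_float(a, b, c), orient_exact(a, b, c))
```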


Journal ArticleDOI
TL;DR: In this paper, state-of-the-art sequential 2D EDT algorithms are reviewed and compared, in an effort to reach more solid conclusions regarding their differences in speed and their exactness.
Abstract: The distance transform (DT) is a general operator forming the basis of many methods in computer vision and geometry, with great potential for practical applications. However, optimal algorithms for computing the exact Euclidean DT (EDT) began to appear only in the 1990s. In this work, state-of-the-art sequential 2D EDT algorithms are reviewed and compared, in an effort to reach more solid conclusions regarding their differences in speed and exactness. Six of the best algorithms were fully implemented and compared in practice.

451 citations
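
For readers who want a feel for what these sequential exact EDT algorithms compute, here is a compact separable implementation in the style of Felzenszwalb and Huttenlocher's lower-envelope method, one of the linear-time families such surveys cover; this is a didactic sketch, not one of the six benchmarked implementations.

```python
# Exact squared Euclidean distance transform via the separable
# lower-envelope technique: 1D transforms along columns, then rows.
import numpy as np

INF = 1e20

def edt_1d(f):
    """Exact 1D squared distance transform of sampled function f."""
    n = len(f)
    d = np.zeros(n)
    v = np.zeros(n, dtype=int)   # positions of parabolas in the lower envelope
    z = np.zeros(n + 1)          # boundaries between consecutive parabolas
    k = 0
    z[0], z[1] = -INF, INF
    for q in range(1, n):
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, INF
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

def edt_2d(binary):
    """Squared EDT of a 2D boolean image: distance to the nearest True pixel."""
    f = np.where(binary, 0.0, INF)
    f = np.apply_along_axis(edt_1d, 0, f)   # transform columns
    f = np.apply_along_axis(edt_1d, 1, f)   # then rows
    return f

img = np.zeros((5, 7), dtype=bool)
img[2, 3] = True
print(np.sqrt(edt_2d(img)))                 # exact Euclidean distances
```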


Book
01 Jan 2008
TL;DR: A comprehensive handbook of discrete and combinatorial mathematics, spanning foundations, counting methods, graph theory, discrete and computational geometry, coding theory and cryptology, and discrete optimization, through to theoretical computer science, data mining, and bioinformatics.
Abstract: Contents: Foundations; Counting Methods; Sequences; Number Theory; Algebraic Structures; Linear Algebra; Discrete Probability; Graph Theory; Trees; Networks and Flows; Partially Ordered Sets; Combinatorial Designs; Discrete and Computational Geometry; Coding Theory and Cryptology; Discrete Optimization; Theoretical Computer Science; Information Structures; Data Mining; Bioinformatics.

402 citations


Journal ArticleDOI
TL;DR: A new recursive enumeration strategy with bit pattern trees for adjacent rays--the ancestors of extreme rays--that is roughly one order of magnitude faster than previous methods is presented, and a rank updating method that is particularly well suited for parallel computation and a residue arithmetic method for matrix rank computations, which circumvents potential numerical instability problems.
Abstract: Motivation: Elementary flux modes (EFMs)—non-decomposable minimal pathways—are commonly accepted tools for metabolic network analysis under steady state conditions. Valid states of the network are linear superpositions of elementary modes shaping a polyhedral cone (the flux cone), which is a well-studied convex set in computational geometry. Computing EFMs is thus basically equivalent to extreme ray enumeration of polyhedral cones. This is a combinatorial problem with poorly scaling algorithms, which has prevented the large-scale analysis of metabolic networks so far. Results: Here, we introduce new algorithmic concepts that enable large-scale computation of EFMs. Distinguishing extreme rays from normal (composite) vectors is one critical aspect of the algorithm. We present a new recursive enumeration strategy with bit pattern trees for adjacent rays—the ancestors of extreme rays—that is roughly one order of magnitude faster than previous methods. Additionally, we introduce a rank updating method that is particularly well suited for parallel computation, and a residue arithmetic method for matrix rank computations, which circumvents potential numerical instability problems. Multi-core architectures of modern CPUs can be exploited for further performance improvements. The methods are applied to a central metabolism network of Escherichia coli, resulting in ≈26 million EFMs. Most of the gain in flux variability is achieved within the top 2% of modes ranked by biomass production. In addition, we compute ≈5 million EFMs for the production of non-essential amino acids for a genome-scale metabolic network of Helicobacter pylori. Only large-scale EFM analysis reveals that >85% of modes generate several amino acids simultaneously. Availability: An implementation in Java, with integration into MATLAB and support of various input formats, including SBML, is available at http://www.csb.ethz.ch in the tools section; sources are available from the authors upon request. Contact: joerg.stelling@inf.ethz.ch Supplementary information: Supplementary data are available at Bioinformatics online.

317 citations
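
The notion of non-decomposability above has a crisp linear-algebra test that underpins extreme-ray enumeration: a steady-state flux vector is elementary exactly when the stoichiometric columns of its support have rank one less than the support size. The toy NumPy sketch below demonstrates that test on a three-metabolite network; it is illustrative only and reproduces nothing of the paper's bit pattern trees or residue arithmetic.

```python
# Support-rank test for elementary flux modes (hedged sketch, toy network).
import numpy as np

def is_elementary(S, e, tol=1e-9):
    """A flux vector e (with S @ e == 0, e >= 0) is an elementary mode iff
    the stoichiometric columns of its support have rank |support| - 1,
    i.e. its support cannot be decomposed into smaller steady-state modes."""
    support = np.flatnonzero(np.abs(e) > tol)
    sub = S[:, support]
    return np.linalg.matrix_rank(sub, tol=tol) == len(support) - 1

# Toy network: A -> B -> C and A -> C, with exchange fluxes for A and C.
#   reactions: r0: -> A, r1: A -> B, r2: B -> C, r3: A -> C, r4: C ->
S = np.array([[ 1, -1,  0, -1,  0],   # A
              [ 0,  1, -1,  0,  0],   # B
              [ 0,  0,  1,  1, -1]])  # C

e1 = np.array([1, 1, 1, 0, 1])        # route through B
e2 = np.array([1, 0, 0, 1, 1])        # direct route
combo = e1 + e2                       # superposition, hence not elementary
for v in (e1, e2, combo):
    assert np.allclose(S @ v, 0)
    print(v, "elementary?", is_elementary(S, v))
```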


Journal ArticleDOI
TL;DR: A novel geometry-based edge-clustering framework that can group edges into bundles to reduce the overall edge crossings is proposed, which is intuitive, flexible, and efficient.
Abstract: Graphs have been widely used to model relationships among data. For large graphs, excessive edge crossings make the display visually cluttered and thus difficult to explore. In this paper, we propose a novel geometry-based edge-clustering framework that can group edges into bundles to reduce the overall edge crossings. Our method uses a control mesh to guide the edge-clustering process; edge bundles can be formed by forcing all edges to pass through some control points on the mesh. The control mesh can be generated at different levels of detail either manually or automatically based on underlying graph patterns. Users can further interact with the edge-clustering results through several advanced visualization techniques such as color and opacity enhancement. Compared with other edge-clustering methods, our approach is intuitive, flexible, and efficient. The experiments on some large graphs demonstrate the effectiveness of our method.

315 citations
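
A heavily simplified rendition of the control-point idea, to make the mechanism concrete (a toy stand-in of mine, not the paper's control-mesh pipeline): routing each edge through the control point nearest its midpoint makes edges with similar spans share intermediate geometry, i.e., form a bundle.

```python
# Toy edge bundling: force each edge through its nearest control point.
import math

control_points = [(2.0, 2.0), (6.0, 2.0), (4.0, 5.0)]  # stand-in control mesh

def bundle_edge(p, q):
    """Return a polyline p -> c -> q, where c is the control point closest
    to the edge midpoint; edges sharing c merge into one visual bundle."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
    c = min(control_points, key=lambda cp: math.hypot(cp[0] - mx, cp[1] - my))
    return [p, c, q]

edges = [((0, 0), (4, 4)), ((0, 1), (4, 3)), ((8, 0), (4, 4))]
for p, q in edges:
    print(bundle_edge(p, q))   # the first two edges share a control point
```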


Journal ArticleDOI
TL;DR: A novel multidimensional projection technique based on least square approximations that is faster and more accurate than existing high-quality methods, particularly in its most extensively tested application: mapping text sets.
Abstract: The problem of projecting multidimensional data into lower dimensions has been pursued by many researchers due to its potential application to data analyses of various kinds. This paper presents a novel multidimensional projection technique based on least square approximations. The approximations compute the coordinates of a set of projected points based on the coordinates of a reduced number of control points with defined geometry. We name the technique least square projections (LSP). From an initial projection of the control points, LSP defines the positioning of their neighboring points through a numerical solution that aims at preserving a similarity relationship between the points given by a metric in mD. In order to perform the projection, a small number of distance calculations are necessary, and no repositioning of the points is required to obtain a final solution with satisfactory precision. The results show the capability of the technique to form groups of points by degree of similarity in 2D. We illustrate that capability through its application to mapping collections of textual documents from varied sources, a strategic yet difficult application. LSP is faster and more accurate than existing high-quality methods, particularly in the setting where it was most extensively tested: mapping text sets.

285 citations
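
The core of LSP can be sketched in a few lines of NumPy: fix the control points at given 2D positions, require every other point to sit at the average of its nearest neighbors in the high-dimensional space, and solve the resulting least-squares system per output coordinate. The miniature below makes simplifying assumptions (control positions given directly, uniform neighbor weights) and is not the authors' implementation.

```python
# Miniature least-square-projection-style embedding (hedged sketch).
import numpy as np

def lsp_project(X, control_idx, control_pos, k=3):
    n = len(X)
    L = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # k nearest neighbors in mD
        L[i, i] = 1.0
        L[i, nbrs] = -1.0 / k                  # p_i - mean(neighbors) = 0
    # Stack hard positional constraints for the control points.
    C = np.zeros((len(control_idx), n))
    for r, i in enumerate(control_idx):
        C[r, i] = 1.0
    A = np.vstack([L, C])
    Y = np.zeros((n, 2))
    for dim in range(2):
        b = np.concatenate([np.zeros(n), control_pos[:, dim]])
        Y[:, dim] = np.linalg.lstsq(A, b, rcond=None)[0]
    return Y

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(5, 1, (20, 10))])
ctrl = np.array([0, 20])                        # one control point per cluster
Y = lsp_project(X, ctrl, np.array([[0.0, 0.0], [10.0, 0.0]]))
print(Y[:3], Y[20:23])                          # two well-separated 2D groups
```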


Journal ArticleDOI
TL;DR: A generic framework for 3D surface remeshing based on a metric-driven Discrete Voronoi Diagram construction that combines the robustness and theoretical strength of Delaunay criteria with the efficiency of an entirely discrete geometry processing.
Abstract: In this paper, we propose a generic framework for 3D surface remeshing. Based on a metric-driven Discrete Voronoi Diagram construction, our output is an optimized 3D triangular mesh with a user-defined vertex budget. Our approach can deal with a wide range of applications, from high-quality mesh generation to shape approximation. By using appropriate metric constraints, the method generates isotropic or anisotropic elements. Based on point sampling, our algorithm combines the robustness and theoretical strength of Delaunay criteria with the efficiency of an entirely discrete geometry processing. Beyond the general framework, we show experimental results using isotropic, quadric-enhanced isotropic, and anisotropic metrics, which demonstrate the efficiency of our method on large meshes at a low computational cost.

213 citations


Journal ArticleDOI
TL;DR: A new definition of a flock is given and argued to be more realistic than previous ones, and techniques from computational geometry are used to obtain fast algorithms to detect and report flocks.
Abstract: Data representing moving objects is rapidly becoming more available, especially in the area of wildlife GPS tracking. It is a central belief that information is hidden in large data sets in the form of interesting patterns, where a pattern can be any configuration of some moving objects in a certain area and/or during a certain time period. One of the most commonly sought spatio-temporal patterns is the flock. A flock is a large enough subset of objects moving along paths close to each other for a certain pre-defined time. We give a new definition that we argue is more realistic than the previous ones, and by using techniques from computational geometry we present fast algorithms to detect and report flocks. The algorithms are analysed both theoretically and experimentally.

204 citations
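
To pin down the pattern being mined, here is a brute-force reference implementation of a flock query with parameters m (group size), r (disk radius), and k (consecutive time steps). It is for intuition only; the paper's contribution is precisely avoiding such naive search with computational-geometry techniques, and the sketch also simplifies the test by centering the disk at the group centroid, which is conservative relative to the smallest enclosing disk.

```python
# Brute-force flock detection (reference sketch, exponential in m).
from itertools import combinations
import math

def is_flock(trajs, members, r, k_steps):
    """Check whether `members` stay within a radius-r disk centered at
    their centroid for k consecutive time steps."""
    for t in range(k_steps):
        pts = [trajs[i][t] for i in members]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        if any(math.hypot(p[0] - cx, p[1] - cy) > r for p in pts):
            return False
    return True

def find_flocks(trajs, m, r, k_steps):
    ids = range(len(trajs))
    return [g for g in combinations(ids, m) if is_flock(trajs, g, r, k_steps)]

# Three objects over four time steps; objects 0 and 1 travel together.
trajs = [[(0, 0), (1, 0), (2, 0), (3, 0)],
         [(0, 1), (1, 1), (2, 1), (3, 1)],
         [(9, 9), (8, 8), (7, 7), (6, 6)]]
print(find_flocks(trajs, m=2, r=1.0, k_steps=4))   # -> [(0, 1)]
```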


Journal ArticleDOI
TL;DR: A generic point cloud encoder is proposed that provides a unified framework for compressing different attributes of point samples corresponding to 3D objects with an arbitrary topology and employs attribute-dependent encoding techniques to exploit the different characteristics of various attributes.
Abstract: In this paper, we propose a generic point cloud encoder that provides a unified framework for compressing different attributes of point samples corresponding to 3D objects with an arbitrary topology. In the proposed scheme, the coding process is led by an iterative octree cell subdivision of the object space. At each level of subdivision, the positions of point samples are approximated by the geometry centers of all tree-front cells, whereas normals and colors are approximated by their statistical average within each of the tree-front cells. With this framework, we employ attribute-dependent encoding techniques to exploit the different characteristics of various attributes. All of these have led to a significant improvement in the rate-distortion (R-D) performance and a computational advantage over the state of the art. Furthermore, given sufficient levels of octree expansion, normal space partitioning, and resolution of color quantization, the proposed point cloud encoder can be potentially used for lossless coding of 3D point clouds.

190 citations
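
The position-coding step described above, an iterative octree subdivision in which tree-front cell centers approximate the points they contain, can be sketched briefly (a simplification of mine: positions only, with no normals, colors, or entropy coding):

```python
# Bare-bones octree position approximation in the spirit of the paper
# (hedged sketch, not the paper's encoder).
import numpy as np

def octree_centers(points, lo, hi, depth):
    """Return one center per occupied leaf cell after `depth` subdivisions."""
    if len(points) == 0:
        return []
    if depth == 0:
        return [(lo + hi) / 2.0]          # cell center approximates its points
    mid = (lo + hi) / 2.0
    out = []
    for octant in range(8):
        mask = np.ones(len(points), dtype=bool)
        for axis in range(3):
            upper = (octant >> axis) & 1
            sel = points[:, axis] >= mid[axis] if upper else points[:, axis] < mid[axis]
            mask &= sel
        child_lo = np.where([(octant >> a) & 1 for a in range(3)], mid, lo)
        child_hi = np.where([(octant >> a) & 1 for a in range(3)], hi, mid)
        out += octree_centers(points[mask], child_lo, child_hi, depth - 1)
    return out

rng = np.random.default_rng(1)
pts = rng.random((500, 3))
approx = octree_centers(pts, np.zeros(3), np.ones(3), depth=4)
print(len(approx), "occupied cells at depth 4")   # coarse, decodable geometry
```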


Book
01 Jan 2008
TL;DR: This dynamic reference work provides solutions to vital algorithmic problems for scholars, researchers, practitioners, teachers and students in fields such as computer science, mathematics, statistics, biology, economics, financial software, and medical informatics.
Abstract: This dynamic reference work provides solutions to vital algorithmic problems for scholars, researchers, practitioners, teachers and students in fields such as computer science, mathematics, statistics, biology, economics, financial software, and medical informatics. This second edition is broadly expanded, building upon the success of its former edition with more than 450 new and updated entries. These entries present algorithms from growing areas of research such as bioinformatics, combinatorial group testing, differential privacy, enumeration algorithms, game theory, massive data algorithms, modern learning theory, social networks, and VLSI CAD algorithms. Over 630 entries are organized alphabetically by problem, with subentries allowing for distinct solutions. Each entry includes a description of the basic algorithmic problem; the input and output specifications; key results; examples of applications; citations to key literature; open problems; experimental results; and links to data sets and downloadable code. All entries are peer-reviewed and written by leading experts in the field, and each entry contains links to a summary of the author's research work. This defining reference is available in both print and online form: a dynamic living work with hyperlinks to related entries, cross-references, citations, and a myriad of other valuable URLs. New and updated entries include: Algorithmic Aspects of Distributed Sensor Networks; Algorithms for Modern Computers; Bioinformatics; Certified Reconstruction and Mesh Generation; Combinatorial Group Testing; Compression of Text and Data Structures; Computational Counting; Computational Economics; Computational Geometry; Differential Privacy; Enumeration Algorithms; Exact Exponential Algorithms; Game Theory; Graph Drawing; Group Testing; Internet Algorithms; Kernels and Compressions; Massive Data Algorithms; Mathematical Optimization; Modern Learning Theory; Social Networks; Stable Marriage Problems; k-SAT Algorithms; Sublinear Algorithms; Tile Self-Assembly; VLSI CAD Algorithms.

149 citations


Journal ArticleDOI
TL;DR: A practical method for finding the provably globally optimal solution to numerous problems in projective geometry including multiview triangulation, camera resectioning and homography estimation that relies on recent developments in fractional programming and the theory of convex underestimators.
Abstract: This paper presents a practical method for finding the provably globally optimal solution to numerous problems in projective geometry including multiview triangulation, camera resectioning and homography estimation. Unlike traditional methods which may get trapped in local minima due to the non-convex nature of these problems, this approach provides a theoretical guarantee of global optimality. The formulation relies on recent developments in fractional programming and the theory of convex underestimators and allows a unified framework for minimizing the standard L2-norm of reprojection errors, which is optimal under Gaussian noise, as well as the more robust L1-norm, which is less sensitive to outliers. Even though the worst case complexity of our algorithm is exponential, the practical efficacy is empirically demonstrated by good performance on experiments for both synthetic and real data. An open source MATLAB toolbox that implements the algorithm is also made available to facilitate further research.

Book
17 Jul 2008
TL;DR: The intent of this book is to set the modern foundations of the theory of generalized curvature measures, and introduces the mathematical background of the subject, beginning with curves and surfaces, going on with convex subsets, smooth submanifolds, subsets of positive reach, polyhedra and triangulations, and ending with surface reconstruction.
Abstract: The intent of this book is to set the modern foundations of the theory of generalized curvature measures. This subject has a long history, beginning with J. Steiner (1850), H. Weyl (1939), H. Federer (1959), P. Wintgen (1982), and continues today with young and brilliant mathematicians. In recent decades, a renewal of interest in mathematics as well as computer science has arisen (finding new applications in computer graphics, medical imaging, computational geometry, visualization, ...). Following a historical and didactic approach, the book introduces the mathematical background of the subject, beginning with curves and surfaces, going on with convex subsets, smooth submanifolds, subsets of positive reach, polyhedra and triangulations, and ending with surface reconstruction. We focus on the theory of the normal cycle, which allows one to compute and approximate curvature measures of a large class of smooth or discrete objects of Euclidean space. We give explicit computations when the object is a 2- or 3-dimensional polyhedron. This book can serve as a textbook for any mathematician, computer scientist, engineer or researcher who is interested in the theory of curvature measures.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: A real-time 3D reconstruction system which uses the proposed GPU-based reconstruction method to achieve real-time performance (30 fps) using 16 cameras and 4 PCs.
Abstract: In this paper we present two efficient GPU-based visual hull computation algorithms. We compare them in terms of performance using image sets of varying size and different voxel resolutions. In addition, we present a real-time 3D reconstruction system which uses the proposed GPU-based reconstruction method to achieve real-time performance (30 fps) using 16 cameras and 4 PCs.
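
The underlying visual-hull test is simple enough to state in a few lines of NumPy, shown below as a CPU sketch of the principle rather than the paper's GPU implementation: a voxel belongs to the hull only if every camera projects it inside that camera's silhouette.

```python
# Visual hull by voxel carving (principle-level sketch).
import numpy as np

def carve(voxels, cameras, silhouettes):
    """voxels: (N,3) centers; cameras: list of 3x4 projection matrices;
    silhouettes: list of boolean images aligned with the cameras."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, sil in zip(cameras, silhouettes):
        uvw = hom @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (0 <= u) & (u < sil.shape[1]) & (0 <= v) & (v < sil.shape[0])
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        keep &= hit                       # must be inside every silhouette
    return voxels[keep]

# Two toy orthographic views of a box-shaped silhouette.
grid = np.stack(np.meshgrid(*[np.arange(10)] * 3, indexing="ij"), -1)
grid = grid.reshape(-1, 3).astype(float)
P_xy = np.array([[1., 0, 0, 0], [0, 1., 0, 0], [0, 0, 0, 1.]])   # drops z
P_xz = np.array([[1., 0, 0, 0], [0, 0, 1., 0], [0, 0, 0, 1.]])   # drops y
sil = np.zeros((10, 10), dtype=bool)
sil[3:7, 3:7] = True
hull = carve(grid, [P_xy, P_xz], [sil, sil])
print(len(hull), "voxels survive carving")   # 4 * 4 * 4 = 64
```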

Journal ArticleDOI
TL;DR: This paper proposes a strategy for sampling and meshing multimaterial volumes using dynamic particle systems, including a novel, differentiable representation of the material junctions that allows the particle system to explicitly sample corners, edges, and surfaces of material intersections.
Abstract: Methods that faithfully and robustly capture the geometry of complex material interfaces in labeled volume data are important for generating realistic and accurate visualizations and simulations of real-world objects. The generation of such multimaterial models from measured data poses two unique challenges: first, the surfaces must be well-sampled with regular, efficient tessellations that are consistent across material boundaries; and second, the resulting meshes must respect the nonmanifold geometry of the multimaterial interfaces. This paper proposes a strategy for sampling and meshing multimaterial volumes using dynamic particle systems, including a novel, differentiable representation of the material junctions that allows the particle system to explicitly sample corners, edges, and surfaces of material intersections. The distributions of particles are controlled by fundamental sampling constraints, allowing Delaunay-based meshing algorithms to reliably extract watertight meshes of consistently high-quality.

BookDOI
01 Jan 2008
TL;DR: A collection of surveys marking twenty years of discrete and computational geometry, including Barany's survey of extremal problems for convex lattice polytopes and Barvinok and Veomett's account of the computational complexity of convex bodies.
Abstract: Musings on discrete geometry and "20 years of discrete & computational geometry" by B. Grunbaum; State of the union (of geometric objects) by P. K. Agarwal, J. Pach, and M. Sharir; Metric graph theory and geometry: a survey by H.-J. Bandelt and V. Chepoi; Extremal problems for convex lattice polytopes: a survey by I. Barany; On simple arrangements of lines and pseudo-lines in $\mathbb{P}^2$ and $\mathbb{R}^2$ with the maximum number of triangles by N. Bartholdi, J. Blanc, and S. Loisel; The computational complexity of convex bodies by A. Barvinok and E. Veomett; Algorithmic semi-algebraic geometry and topology--recent progress and open problems by S. Basu; Expansive motions by R. Connelly; All polygons flip finitely...right? by E. D. Demaine, B. Gassend, J. O'Rourke, and G. T. Toussaint; Persistent homology--a survey by H. Edelsbrunner and J. Harer; Recent progress on line transversals to families of translated ovals by A. F. Holmsen; An improved, simple construction of many halving edges by G. Nivasch; Unfolding orthogonal polyhedra by J. O'Rourke; The discharging method in combinatorial geometry and the Pach-Sharir conjecture by R. Radoicic and G. Toth; Pseudo-triangulations--a survey by G. Rote, F. Santos, and I. Streinu; Line problems in nonlinear computational geometry by F. Sottile and T. Theobald; On empty hexagons by P. Valtr; $k$-sets and $k$-facets by U. Wagner; An Erdos-Szekeres type problem for interior points by X. Wei and R. Ding; The kissing number, blocking number and covering number of a convex body by C. Zong; Open problems by J. Pach.

Journal ArticleDOI
TL;DR: It is shown that, given a parametric linear program (PLP), a polyhedron exists whose projection provides the solution to the PLP; conversely, it is shown how to formulate a PLP whose solution is the projection of an appropriately defined polyhedron described as the intersection of a finite number of halfspaces.
Abstract: This paper brings together two fundamental topics: polyhedral projection and parametric linear programming. First, it is shown that, given a parametric linear program (PLP), a polyhedron exists whose projection provides the solution to the PLP. Second, the converse is tackled and it is shown how to formulate a PLP whose solution is the projection of an appropriately defined polyhedron described as the intersection of a finite number of halfspaces. The input to one operation can be converted to an input of the other operation and the resulting output can be converted back to the desired form in polynomial time—this implies that algorithms for computing projections or methods for solving parametric linear programs can be applied to either problem class.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: A framework for computing optimal transformations aligning one point set to another in the presence of outliers; based on theory from computational geometry, finding the largest consistent set of correspondences and the globally optimal aligning transformation is shown to be possible in polynomial time.
Abstract: We present a framework for computing optimal transformations, aligning one point set to another, in the presence of outliers. Example applications include shape matching and registration (using, for example, similarity, affine or projective transformations) as well as multiview reconstruction problems (triangulation, camera pose etc.). While standard methods like RANSAC essentially use heuristics to cope with outliers, we seek to find the largest possible subset of consistent correspondences and the globally optimal transformation aligning the point sets. Based on theory from computational geometry, we show that this is indeed possible to accomplish in polynomial-time. We develop several algorithms which make efficient use of convex programming. The scheme has been tested and evaluated on both synthetic and real data for several applications.
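
A flavor of the polynomial-time guarantee for the simplest case, a pure 2D translation (an illustration of the idea, not the paper's algorithm): any translation mapping some source point exactly onto some target point can serve as a candidate, and a standard argument shows that if some translation achieves k inliers at tolerance tol, one of these O(nm) candidates achieves at least k inliers at tolerance 2*tol, so exhaustive enumeration replaces RANSAC-style randomness.

```python
# Globally exhaustive consensus maximization for 2D translations (sketch).
import numpy as np

def best_translation(src, dst, tol=0.1):
    best_t, best_inliers = None, -1
    for p in src:
        for q in dst:                       # candidate: translation q - p
            t = q - p
            moved = src + t
            d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
            inliers = int((d.min(axis=1) <= tol).sum())
            if inliers > best_inliers:
                best_t, best_inliers = t, inliers
    return best_t, best_inliers

rng = np.random.default_rng(2)
src = rng.random((8, 2))
dst = src + np.array([0.3, -0.2])           # true translation
dst[::4] = rng.random((2, 2))               # corrupt some points: outliers
t, k = best_translation(src, dst)
print(t, k)                                  # recovers ~(0.3, -0.2)
```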

BookDOI
01 Nov 2008
TL;DR: An edited volume on generalized Voronoi diagrams and their intelligent treatment of applied problems, covering topics such as automated cartographic data input, drawing and editing using kinetic Delaunay/Voronoi diagrams, efficient swarm neighborhood management, and robust point-location in generalized Voronoi diagrams.
Abstract: Contents: Computational Geometry Methods and Intelligent Computing; Generalized Voronoi Diagrams: State-of-the-Art in Intelligent Treatment of Applied Problems; Shapes of Delaunay Simplexes and Structural Analysis of Hard Sphere Packings; The β-Shape and β-Complex for Analysis of Molecular Structures; Computational Geometry Analysis of Quantum State Space and Its Applications; Efficient Swarm Neighborhood Management Using the Layered Delaunay Triangulation; Intelligent Solutions for Curve Reconstruction Problem; A Methodology for Automated Cartographic Data Input, Drawing and Editing Using Kinetic Delaunay/Voronoi Diagrams; Density-Based Clustering Based on Topological Properties of the Data Set; Modeling Optimal Beam Treatment with Weighted Regions for Bio-medical Applications; Advanced Treatment of Topics of Special Interest; Constructing Centroidal Voronoi Tessellations on Surface Meshes; Simulated Annealing and Genetic Algorithms in Quest of Optimal Triangulations; Higher Order Voronoi Diagrams and Distance Functions in Art and Visualization; Robust Point-Location in Generalized Voronoi Diagrams; Conclusions and Future Trends in Intelligent Treatment of Applied Problems.

Journal ArticleDOI
TL;DR: This paper proposes and develops a novel quasi-conformal surface mapping framework to globally minimize the stretching energy inevitably introduced between two different shapes, and designs an automatic variational algorithm that can reach the global distortion minimum for surface mapping between shapes of arbitrary topology.
Abstract: Computing smooth and optimal one-to-one maps between surfaces of the same topology is a fundamental problem in graphics, and such a method provides a ubiquitous tool for geometric modeling and data visualization. Its vast variety of applications includes shape registration/matching, shape blending, material/data transfer, data fusion, information reuse, etc. The mapping quality is typically measured in terms of angular distortions among different shapes. This paper proposes and develops a novel quasi-conformal surface mapping framework to globally minimize the stretching energy inevitably introduced between two different shapes. The existing state-of-the-art intersurface mapping techniques only afford local optimization, either on surface patches via boundary cutting or on a simplified base domain, lacking rigorous mathematical foundation and analysis. We design and articulate an automatic variational algorithm that can reach the global distortion minimum for surface mapping between shapes of arbitrary topology, and our algorithm is founded solely upon the intrinsic geometry structure of surfaces. To the best of our knowledge, this is the first attempt toward rigorously and numerically computing globally optimal maps. Consequently, we demonstrate that our mapping framework offers a powerful computational tool for graphics and visualization tasks such as data and texture transfer, shape morphing, and shape matching.

Journal IssueDOI
TL;DR: The software library STXXL is presented, an implementation of the C++ standard template library (STL) for processing huge data sets that can fit only on hard disks, and it is the first I/O-efficient algorithm library that supports the pipelining technique, which can save more than half of the I/Os.
Abstract: We present the software library STXXL, an implementation of the C++ standard template library (STL) for processing huge data sets that can fit only on hard disks. It supports parallel disks and overlapping between disk I/O and computation, and it is the first I/O-efficient algorithm library that supports the pipelining technique, which can save more than half of the I/Os. STXXL has been applied in both academic and industrial environments for a range of problems including text processing, graph algorithms, computational geometry, Gaussian elimination, visualization and analysis of microscopic images, differential cryptographic analysis, etc. The performance of STXXL and its applications is evaluated on synthetic and real-world inputs. We present the design of the library, how its performance features are supported, and demonstrate how the library integrates with STL.

Journal ArticleDOI
TL;DR: A framework for 3D geometry processing that provides direct access to surface curvature to facilitate advanced shape editing, filtering, and synthesis algorithms is proposed and demonstrated to be effective with several applications, including anisotropic smoothing, feature enhancement, and multi-scale curvature editing.
Abstract: We propose a framework for 3D geometry processing that provides direct access to surface curvature to facilitate advanced shape editing, filtering, and synthesis algorithms. The central idea is to map a given surface to the curvature domain by evaluating its principal curvatures, apply filtering and editing operations to the curvature distribution, and reconstruct the resulting surface using an optimization approach. Our system allows the user to prescribe arbitrary principal curvature values anywhere on the surface. The optimization solves a nonlinear least-squares problem to find the surface that best matches the desired target curvatures while preserving important properties of the original shape. We demonstrate the effectiveness of this processing metaphor with several applications, including anisotropic smoothing, feature enhancement, and multi-scale curvature editing.
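
The curvature-domain metaphor is easiest to see one dimension lower. In the sketch below (a 2D-polyline analogue of mine, not the paper's surface solver), turning angles play the role of discrete curvature: we map the curve to its curvature, filter there, and integrate back.

```python
# Curvature-domain filtering on a polyline (hedged 1D analogue).
import numpy as np

def to_curvature(pts):
    """Edge lengths, initial heading, and turning angles of an open polyline."""
    vecs = np.diff(pts, axis=0)
    lens = np.linalg.norm(vecs, axis=1)
    heads = np.arctan2(vecs[:, 1], vecs[:, 0])
    turns = np.diff(heads)                       # discrete curvature
    return lens, heads[0], turns

def from_curvature(lens, head0, turns, start):
    """Re-integrate headings and step lengths back into positions."""
    heads = head0 + np.concatenate([[0.0], np.cumsum(turns)])
    steps = lens[:, None] * np.stack([np.cos(heads), np.sin(heads)], axis=1)
    return np.vstack([start, start + np.cumsum(steps, axis=0)])

# Noisy arc: smooth its curvature instead of its positions.
t = np.linspace(0, np.pi, 60)
pts = np.stack([np.cos(t), np.sin(t)], 1)
pts += np.random.default_rng(3).normal(0, 0.01, (60, 2))
lens, h0, turns = to_curvature(pts)
turns_smooth = np.convolve(turns, np.ones(7) / 7, mode="same")  # filter curvature
smoothed = from_curvature(lens, h0, turns_smooth, pts[0])
print(pts.shape, smoothed.shape)
```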

Proceedings ArticleDOI
23 Jun 2008
TL;DR: It is shown that many problems in multi-view geometry, when posed as minimization of the maximum reprojection error across observations, can be solved optimally in polynomial time.
Abstract: Many problems in multi-view geometry, when posed as minimization of the maximum reprojection error across observations, can be solved optimally in polynomial time. We show that these problems are instances of a convex-concave generalized fractional program. We survey the major solution methods for solving problems of this form and present them in a unified framework centered around a single parametric optimization problem. We propose two new algorithms and show that the algorithm proposed by Olsson et al. [21] is a special case of a classical algorithm for generalized fractional programming. The performance of all the algorithms is compared on a variety of datasets, and the algorithm proposed by Gugat [12] stands out as a clear winner. An open source MATLAB toolbox that implements all the algorithms presented here is made available.

Book
17 Dec 2008
TL;DR: This volume provides a comprehensive up-to-date survey of several core areas of combinatorial geometry; it describes the beginnings of the subject, going back to the nineteenth century, and explains why counting incidences and estimating the combinatorial complexity of various arrangements of geometric objects became the theoretical backbone of computational geometry in the 1980s and 1990s.
Abstract: Based on a lecture series given by the authors at a satellite meeting of the 2006 International Congress of Mathematicians and on many articles written by them and their collaborators, this volume provides a comprehensive up-to-date survey of several core areas of combinatorial geometry. It describes the beginnings of the subject, going back to the nineteenth century (if not to Euclid), and explains why counting incidences and estimating the combinatorial complexity of various arrangements of geometric objects became the theoretical backbone of computational geometry in the 1980s and 1990s. The combinatorial techniques outlined in this book have found applications in many areas of computer science from graph drawing through hidden surface removal and motion planning to frequency allocation in cellular networks. "Combinatorial Geometry and Its Algorithmic Applications" is intended as a source book for professional mathematicians and computer scientists as well as for graduate students interested in combinatorics and geometry. Most chapters start with an attractive, simply formulated, but often difficult and only partially answered mathematical question, and describe the most efficient techniques developed for its solution. The text includes many challenging open problems, figures, and an extensive bibliography.

Posted Content
TL;DR: In this article, the authors describe several data structures that approximate distributions of answers for shape fitting problems, and provide simple and efficient randomized algorithms for computing all of these data structures, which are easy to implement and practical.
Abstract: A typical computational geometry problem begins: Consider a set P of n points in R^d. However, many applications today work with input that is not precisely known, for example when the data is sensed and has some known error model. What if we do not know the set P exactly, but rather we have a probability distribution mu_p governing the location of each point p in P? Consider a set of (non-fixed) points P, and let mu_P be the probability distribution of this set. We study several measures (e.g. the radius of the smallest enclosing ball, or the area of the smallest enclosing box) with respect to mu_P. The solutions to these problems do not, as in the traditional case, consist of a single answer, but rather a distribution of answers. We describe several data structures that approximate distributions of answers for shape fitting problems. We provide simple and efficient randomized algorithms for computing all of these data structures, which are easy to implement and practical; we provide experimental results to support this. We also provide more involved deterministic algorithms for some of these data structures that run in time polynomial in n and 1/eps, where eps is the approximation factor.
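
The object of study is easy to reproduce with a plain Monte Carlo baseline (the paper's point is to replace such sampling with compact approximate data structures carrying guarantees): once each point's location is a distribution, a shape-fitting measure such as the smallest enclosing axis-aligned box area is itself a distribution.

```python
# Monte Carlo view of a shape-fitting measure over uncertain points.
import numpy as np

rng = np.random.default_rng(4)
centers = rng.random((10, 2)) * 10           # nominal point locations
sigmas = rng.random(10) * 0.5                # per-point Gaussian uncertainty

def sample_area():
    pts = centers + rng.normal(size=(10, 2)) * sigmas[:, None]
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    return np.prod(maxs - mins)              # axis-aligned bounding-box area

areas = np.array([sample_area() for _ in range(5000)])
print(areas.mean(), np.quantile(areas, [0.05, 0.95]))  # a distribution of answers
```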

Journal ArticleDOI
TL;DR: The proposed FRSDE achieves competitive density accuracy and offers an overwhelming advantage over RSDE for large data sets in both the data condensation rate and the training time for the weighting coefficients.

Journal ArticleDOI
TL;DR: This work reports the development of a new structure-based pharmacophore search method (called Shape4) for virtual screening that afforded similar or better enrichment ratios than other related methods, often with better diversity among the top ranking computational hits.
Abstract: Computationally efficient structure-based virtual screening methods have recently been reported that seek to find effective means to utilize experimental structure information without employing detailed molecular docking calculations. These tools can be coupled with efficient experimental screening technologies to improve the probability of identifying hits and leads for drug discovery research. Commercial software ROCS (rapid overlay of chemical structures) from OpenEye Scientific is such an example: a shape-based virtual screening method using the 3D structure of a ligand, typically from a bound X-ray co-structure, as the query. We report here the development of a new structure-based pharmacophore search method (called Shape4) for virtual screening. This method adopts a variant of the ROCS shape technology and expands its use to work with an empty crystal structure. It employs a rigorous computational geometry method and a deterministic geometric casting algorithm to derive the negative image (i.e., pseudoligand) of a target binding site. Once the negative image (or pseudoligand) is generated, an efficient shape comparison algorithm in the commercial OE SHAPE Toolkit is adopted to compare and match small organic molecules with the shape of the pseudoligand. We report the detailed computational protocol and its computational validation using known biologically active compounds extracted from the WOMBAT database. Models derived for five selected targets were used to perform the virtual screening experiments to obtain the enrichment data for various virtual screening methods. It was found that our approach afforded similar or better enrichment ratios than other related methods, often with better diversity among the top ranking computational hits.

Journal ArticleDOI
TL;DR: An exact and efficient algorithm is presented for computing a proper parametric representation of the intersection of two quadrics in three-dimensional real space, given by implicit equations with rational coefficients, together with the first classification of pencils of quadrics according to the real type of the intersection.

Journal ArticleDOI
TL;DR: A primal-dual interior point algorithm is used to solve the optimization problem and it is demonstrated that, for general convex objects represented as implicit surfaces, interior point approaches are globally convergent, and fast in practice.
Abstract: This paper presents a general method for exact distance computation between convex objects represented as intersections of implicit surfaces. Exact distance computation algorithms are particularly important for applications involving objects that make intermittent contact, such as in dynamic simulations and in haptic interactions. They can also be used in the narrow phase of hierarchical collision detection. In contrast to geometric approaches developed for polyhedral objects, we formulate the distance computation problem as a convex optimization problem. We use an interior point method to solve the optimization problem and demonstrate that, for general convex objects represented as implicit surfaces, interior point approaches are globally convergent, and fast in practice. Further, they provide polynomial-time guarantees for implicit surface objects when the implicit surfaces have self-concordant barrier functions. We use a primal-dual interior point algorithm that solves the Karush-Kuhn-Tucker (KKT) conditions obtained from the convex programming formulation. For the case of polyhedra and quadrics, we establish a theoretical time complexity of O(n^1.5), where n is the number of constraints. We present implementation results for example implicit surface objects, including polyhedra, quadrics, and generalizations of quadrics such as superquadrics and hyperquadrics, as well as intersections of these surfaces. We demonstrate that in practice, the algorithm takes time linear in the number of constraints, and that distance computation rates of about 1 kHz can be achieved. We also extend the approach to proximity queries between deforming convex objects. Finally, we show that continuous collision detection for linearly translating objects can be performed by solving two related convex optimization problems. For polyhedra and quadrics, we establish that the computational complexity of this problem is also O(n^1.5).
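
The convex-programming formulation is compact enough to demo with a general-purpose solver. The sketch below uses SciPy's SLSQP instead of the paper's tailored primal-dual interior point method, on two unit balls given by implicit inequalities; the minimizer's objective is the squared separation distance.

```python
# Distance between convex implicit-surface objects as a convex program.
import numpy as np
from scipy.optimize import minimize

g_A = lambda x: np.dot(x, x) - 1.0                             # unit ball at origin
g_B = lambda y: np.dot(y - [3.0, 0.0], y - [3.0, 0.0]) - 1.0   # unit ball at (3,0)

def objective(z):                                   # z packs x (in A) and y (in B)
    x, y = z[:2], z[2:]
    return np.dot(x - y, x - y)                     # squared distance ||x - y||^2

cons = [{"type": "ineq", "fun": lambda z: -g_A(z[:2])},   # g_A(x) <= 0
        {"type": "ineq", "fun": lambda z: -g_B(z[2:])}]   # g_B(y) <= 0
res = minimize(objective, x0=np.array([0.0, 0.0, 3.0, 0.0]),
               constraints=cons, method="SLSQP")
x, y = res.x[:2], res.x[2:]
print(np.linalg.norm(x - y))                        # ~1.0 for these two balls
```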

Journal ArticleDOI
TL;DR: A fast algorithm for sign extraction of a number given in the Residue Number System (2^n-1, 2^n, 2^n+1), using three n-bit wide additions, two of which can be done in parallel.
Abstract: In this paper, we propose a fast algorithm for sign extraction of a number given in the Residue Number System (2^n-1, 2^n, 2^n+1). The algorithm can be implemented using three n-bit wide additions, two of which can be done in parallel. It can be used in a wide variety of problems, e.g., in algorithms for dividing numbers in the RNS, or in evaluating the sign of a determinant in computational geometry.
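
For reference, here is what sign extraction in this RNS means, computed the slow, obvious way via Chinese-remainder reconstruction; the paper's contribution is obtaining the same bit from just three n-bit additions, and that fast circuit-level algorithm is not reproduced here.

```python
# Sign detection in the RNS (2^n - 1, 2^n, 2^n + 1) via full CRT
# reconstruction (semantic reference only, not the paper's fast method).
def rns_sign(residues, n):
    m = [2**n - 1, 2**n, 2**n + 1]
    M = m[0] * m[1] * m[2]                       # dynamic range
    x = 0
    for r, mi in zip(residues, m):
        Mi = M // mi
        x = (x + r * Mi * pow(Mi, -1, mi)) % M   # classic CRT reconstruction
    # Numbers in [0, M/2) are non-negative; [M/2, M) encode negatives.
    return 0 if x < M // 2 else 1                # 0: non-negative, 1: negative

n = 4
m = [2**n - 1, 2**n, 2**n + 1]
for value in (37, -37):
    residues = [value % mi for mi in m]
    print(value, "->", rns_sign(residues, n))    # 37 -> 0, -37 -> 1
```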

Journal ArticleDOI
TL;DR: A novel approach is proposed that significantly reduces the computation and communication costs, while guaranteeing that the quality of the solution, with respect to the reevaluation approach, is bounded by a user-defined tolerance.
Abstract: Given a data set P, a k-means query returns k points in space (called centers), such that the average squared distance between each point in P and its nearest center is minimized. Since this problem is NP-hard, several approximate algorithms have been proposed and used in practice. In this paper, we study continuous k-means computation at a server that monitors a set of moving objects. Reevaluating k-means every time there is an object update imposes a heavy burden on the server (for computing the centers from scratch) and the clients (for continuously sending location updates). We overcome these problems with a novel approach that significantly reduces the computation and communication costs, while guaranteeing that the quality of the solution, with respect to the reevaluation approach, is bounded by a user-defined tolerance. The proposed method assigns each moving object a threshold (i.e., range) such that the object sends a location update only when it crosses the range boundary. First, we develop an efficient technique for maintaining the k-means. Then, we present mathematical formulas and algorithms for deriving the individual thresholds. Finally, we justify our performance claims with extensive experiments.
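
The update-suppression mechanism can be simulated in a few lines (a minimal sketch with a fixed uniform radius; the paper derives per-object thresholds with a bounded-tolerance guarantee and an efficient center-maintenance scheme): objects report only when they leave the ball around their last reported location, and the server runs k-means on the reported positions.

```python
# Threshold-based update suppression for continuous k-means (toy sketch).
import numpy as np

rng = np.random.default_rng(5)
true_pos = rng.random((50, 2)) * 10
reported = true_pos.copy()
threshold = 0.5                              # per-object range radius
messages = 0

def kmeans(points, k=3, iters=20):
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = np.argmin(d, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

for step in range(100):                      # objects drift; few updates sent
    true_pos += rng.normal(0, 0.05, true_pos.shape)
    moved = np.linalg.norm(true_pos - reported, axis=1) > threshold
    reported[moved] = true_pos[moved]        # only boundary-crossers report
    messages += int(moved.sum())

print(messages, "updates instead of", 100 * 50)
print(kmeans(reported))                      # centers from reported positions
```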