
Showing papers on "Computational geometry published in 2020"


Journal ArticleDOI
TL;DR: A new guidance law with impact time and angle constraints against a stationary target in three dimensions is proposed, based on computational geometry and on the principle of following a specified trajectory toward the target.

23 citations



Posted Content
TL;DR: The approach is to solve an exponential-size linear programming formulation by efficiently implementing the corresponding separation oracle using techniques from computational geometry.
Abstract: Computing Wasserstein barycenters is a fundamental geometric problem with widespread applications in machine learning, statistics, and computer graphics. However, it is unknown whether Wasserstein barycenters can be computed in polynomial time, either exactly or to high precision (i.e., with $\textrm{polylog}(1/\varepsilon)$ runtime dependence). This paper answers these questions in the affirmative for any fixed dimension. Our approach is to solve an exponential-size linear programming formulation by efficiently implementing the corresponding separation oracle using techniques from computational geometry.

20 citations
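As background to the barycenter problem above: in one dimension, the 2-Wasserstein barycenter of equally weighted empirical distributions with the same number of samples reduces to averaging order statistics (quantile averaging). This special case is a useful sanity check, even though the paper's contribution concerns general fixed dimension. A minimal Python sketch (function name is ours, not from the paper):

```python
def barycenter_1d(samples_list):
    """W2 barycenter of equally weighted 1-D empirical distributions,
    each given as the same number of samples: average the sorted
    samples position-by-position (quantile averaging)."""
    sorted_lists = [sorted(s) for s in samples_list]
    n = len(sorted_lists[0])
    k = len(sorted_lists)
    return [sum(s[i] for s in sorted_lists) / k for i in range(n)]
```

For instance, the barycenter of {0, 1, 2} and {2, 3, 4} is {1, 2, 3}; no linear programming is needed in 1D, which is precisely what fails in higher dimensions.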


Proceedings ArticleDOI
04 Jan 2020
TL;DR: In this article, a distributed, local algorithm for convex hull formation is presented, which runs in O(B) asynchronous rounds, where B is the length of the object's boundary.
Abstract: We envision programmable matter as a system of nanoscale agents (called particles) with very limited computational capabilities that move and compute collectively to achieve a desired goal. Motivated by the problem of sealing an object using minimal resources, we show how a particle system can self-organize to form an object's convex hull. We give a distributed, local algorithm for convex hull formation and prove that it runs in O(B) asynchronous rounds, where B is the length of the object's boundary. Within the same asymptotic runtime, this algorithm can be extended to also form the object's (weak) O-hull, which uses the same number of particles but minimizes the area enclosed by the hull. Our algorithms are the first to compute convex hulls with distributed entities that have strictly local sensing, constant-size memory, and no shared sense of orientation or coordinates. Ours is also the first distributed approach to computing restricted-orientation convex hulls. This approach involves coordinating particles as distributed memory; thus, as a supporting but independent result, we present and analyze an algorithm for organizing particles with constant-size memory as distributed binary counters that efficiently support increments, decrements, and zero-tests --- even as the particles move.

20 citations
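For contrast with the distributed particle algorithm above, the classical centralized convex hull computation is a few lines. A standard Andrew's monotone chain sketch in Python (illustrative only; the paper's point is doing this with local sensing and constant memory):

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices of 2-D points in CCW order,
    O(n log n) from the sort. Collinear boundary points are dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints shared, listed once
```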


Journal ArticleDOI
TL;DR: For any constant $d$ and parameter $\varepsilon \in (0,1/2]$, the existence of (roughly) $1/\varepsilon^d$ orderings on the unit cube such that for any two points $p, q\in [0,1)^d...
Abstract: For any constant $d$ and parameter $\varepsilon \in (0,1/2]$, we show the existence of (roughly) $1/\varepsilon^d$ orderings on the unit cube $[0,1)^d$ such that for any two points $p, q\in [0,1)^d...

16 citations


Journal ArticleDOI
TL;DR: A discretization of the Laplace operator is presented which is consistent with its expression as the composition of divergence and gradient operators, and is applicable to general polygon meshes, including meshes with non‐convex, and even non‐planar, faces.
Abstract: The discrete Laplace‐Beltrami operator for surface meshes is a fundamental building block for many (if not most) geometry processing algorithms. While Laplacians on triangle meshes have been researched intensively, yielding the cotangent discretization as the de‐facto standard, the case of general polygon meshes has received much less attention. We present a discretization of the Laplace operator which is consistent with its expression as the composition of divergence and gradient operators, and is applicable to general polygon meshes, including meshes with non‐convex, and even non‐planar, faces. By virtually inserting a carefully placed point we implicitly refine each polygon into a triangle fan, but then hide the refinement within the matrix assembly. The resulting operator generalizes the cotangent Laplacian, inherits its advantages, and is empirically shown to be on par or even better than the recent polygon Laplacian of Alexa and Wardetzky [AW11] — while being simpler to compute.

15 citations
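The cotangent discretization that the polygon Laplacian above generalizes assigns each edge the weight w_ij = (cot α + cot β)/2, where α and β are the angles opposite the edge in its two incident triangles. A minimal assembly sketch for triangle meshes (dict-of-dicts weight matrix; helper names are ours):

```python
import math

def cotan_laplacian(vertices, triangles):
    """Off-diagonal cotangent weights for a triangle mesh:
    w_ij = 0.5 * sum of cot(angle opposite edge ij) over incident triangles.
    Works for 2-D or 3-D vertex coordinates; assumes non-degenerate triangles."""
    W = [dict() for _ in vertices]

    def cot_at(a, b, c):
        # cotangent of the angle at vertex a in triangle (a, b, c)
        u = [b[k] - a[k] for k in range(len(a))]
        v = [c[k] - a[k] for k in range(len(a))]
        dot = sum(x * y for x, y in zip(u, v))
        cross_sq = sum(x*x for x in u) * sum(y*y for y in v) - dot*dot
        return dot / math.sqrt(cross_sq)  # cos/sin via |u x v|

    for (i, j, k) in triangles:
        for a, b, opp in ((i, j, k), (j, k, i), (k, i, j)):
            w = 0.5 * cot_at(vertices[opp], vertices[a], vertices[b])
            W[a][b] = W[a].get(b, 0.0) + w
            W[b][a] = W[b].get(a, 0.0) + w
    return W
```

On a single right triangle, the weight across from the right angle is 0 (cot 90° = 0), a quick way to check the sign conventions.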


Journal ArticleDOI
TL;DR: While both constructions have linear precision, only the primal construction is positive semi‐definite and only the dual construction generates positive weights and provides a maximum principle for Delaunay meshes.
Abstract: Discrete Laplacians for triangle meshes are a fundamental tool in geometry processing. The so‐called cotan Laplacian is widely used since it preserves several important properties of its smooth counterpart. It can be derived from different principles: either considering the piecewise linear nature of the primal elements or associating values to the dual vertices. Both approaches lead to the same operator in the two‐dimensional setting. In contrast, for tetrahedral meshes, only the primal construction is reminiscent of the cotan weights, involving dihedral angles. We provide explicit formulas for the lesser‐known dual construction. In both cases, the weights can be computed by adding the contributions of individual tetrahedra to an edge. The resulting two different discrete Laplacians for tetrahedral meshes only retain some of the properties of their two‐dimensional counterpart. In particular, while both constructions have linear precision, only the primal construction is positive semi‐definite and only the dual construction generates positive weights and provides a maximum principle for Delaunay meshes. We perform a range of numerical experiments that highlight the benefits and limitations of the two constructions for different problems and meshes.

13 citations


DOI
28 Jul 2020
TL;DR: In this article, the authors give an O(n^{2/3})-time algorithm for the closest pair problem in constant dimensions, which is optimal up to a polylogarithmic factor by the lower bound on the quantum query complexity of element distinctness.
Abstract: The closest pair problem is a fundamental problem of computational geometry: given a set of n points in a d-dimensional space, find a pair with the smallest distance. A classical algorithm taught in introductory courses solves this problem in O(n log n) time in constant dimensions (i.e., when d = O(1)). This paper asks and answers the question of the problem's quantum time complexity. Specifically, we give an O(n^{2/3})-time algorithm in constant dimensions, which is optimal up to a polylogarithmic factor by the lower bound on the quantum query complexity of element distinctness. The key to our algorithm is an efficient history-independent data structure that supports quantum interference. In polylog(n) dimensions, no known quantum algorithms perform better than brute-force search, with a quadratic speedup provided by Grover's algorithm. To give evidence that the quadratic speedup is nearly optimal, we initiate the study of quantum fine-grained complexity and introduce the Quantum Strong Exponential Time Hypothesis (QSETH), which is based on the assumption that Grover's algorithm is optimal for CNF-SAT when the clause width is large. We show that the naive Grover approach to closest pair in higher dimensions is optimal up to an n^{o(1)} factor unless QSETH is false. We also study the bichromatic closest pair problem and the orthogonal vectors problem, with broadly similar results.

12 citations
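The classical O(n log n) divide-and-conquer algorithm the abstract refers to splits by a vertical line, recurses, and then checks only a narrow strip around the split, where each point has O(1) relevant neighbours in y-order. An illustrative Python sketch (assumes at least two distinct points; the paper's quantum algorithm is of course different):

```python
import math

def closest_pair(points):
    """Classical O(n log n) closest pair in the plane.
    Returns (distance, p, q). Assumes distinct points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def solve(px, py):  # px sorted by x, py sorted by y
        n = len(px)
        if n <= 3:  # brute force small cases
            return min((dist(a, b), a, b)
                       for i, a in enumerate(px) for b in px[i + 1:])
        mid = n // 2
        mx = px[mid][0]
        lx, rx = px[:mid], px[mid:]
        left = set(lx)
        ly = [p for p in py if p in left]
        ry = [p for p in py if p not in left]
        best = min(solve(lx, ly), solve(rx, ry))
        # points within best distance of the dividing line, in y-order
        strip = [p for p in py if abs(p[0] - mx) < best[0]]
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:  # only O(1) neighbours can beat best
                cand = (dist(a, b), a, b)
                if cand < best:
                    best = cand
        return best

    return solve(sorted(points), sorted(points, key=lambda p: p[1]))
```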


Journal ArticleDOI
TL;DR: An optimization algorithm for the design of post‐tensioned architectural shell structures, composed of triangular glass panels, in which glass has a load‐bearing function, produces glass shells that are lightweight and robust since they exploit the high compression strength of glass.
Abstract: We propose an optimization algorithm for the design of post‐tensioned architectural shell structures, composed of triangular glass panels, in which glass has a load‐bearing function. Due to its brittle nature, glass can fail when it is subject to tensile forces. Hence, we enrich the structure with a cable net, which is specifically designed to post‐tension the shell, relieving the underlying glass structure from tension. We automatically derive an optimized cable layout, together with the appropriate pre‐load of each cable. The method is driven by a physically based static analysis of the shell subject to its service load. We assess our approach by applying non‐linear finite element analysis to several real‐scale application scenarios. Such a method of cable tensioning produces glass shells that are optimized from the material usage viewpoint since they exploit the high compression strength of glass. As a result, they are lightweight and robust. Both aesthetic and static qualities are improved with respect to grid shell competitors.

11 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed two greedy heuristics for the construction of D-efficient saturated subsets: an improvement of the method suggested by Galil and Kiefer in the context of D-optimal experimental designs, and a modification of the Kumar-Yildirim method for the initiation of the minimum-volume enclosing ellipsoid algorithms.

11 citations


Journal ArticleDOI
TL;DR: This work seeks a minimal GD as a concise representation of the groups that maintains the spatio-temporal structure of the groups’ movement, gives a comprehensive analysis of their computational complexity, and presents efficient approximation algorithms for their computation.
Abstract: Given the trajectories of one or several moving groups, we propose a new framework, the group diagram (GD) for representing these. Specifically, we seek a minimal GD as a concise representation of ...

Posted Content
TL;DR: This work shows that for intervals a $(1+\varepsilon)$-approximate maximum independent set can be maintained with logarithmic worst-case update time, and shows how the interval structure can be used to design a data structure for maintaining an expected constant factor approximate maximum independent set of axis-aligned squares in the plane.
Abstract: We present fully dynamic approximation algorithms for the Maximum Independent Set problem on several types of geometric objects: intervals on the real line, arbitrary axis-aligned squares in the plane and axis-aligned $d$-dimensional hypercubes. It is known that a maximum independent set of a collection of $n$ intervals can be found in $O(n\log n)$ time, while it is already \textsf{NP}-hard for a set of unit squares. Moreover, the problem is inapproximable on many important graph families, but admits a \textsf{PTAS} for a set of arbitrary pseudo-disks. Therefore, a fundamental question in computational geometry is whether it is possible to maintain an approximate maximum independent set in a set of dynamic geometric objects, in truly sublinear time per insertion or deletion. In this work, we answer this question in the affirmative for intervals, squares and hypercubes. First, we show that for intervals a $(1+\varepsilon)$-approximate maximum independent set can be maintained with logarithmic worst-case update time. This is achieved by maintaining a locally optimal solution using a constant number of constant-size exchanges per update. We then show how our interval structure can be used to design a data structure for maintaining an expected constant factor approximate maximum independent set of axis-aligned squares in the plane, with polylogarithmic amortized update time. Our approach generalizes to $d$-dimensional hypercubes, providing a $O(4^d)$-approximation with polylogarithmic update time. Those are the first approximation algorithms for any set of dynamic arbitrary size geometric objects; previous results required bounded size ratios to obtain polylogarithmic update time. Furthermore, it is known that our results for squares (and hypercubes) cannot be improved to a $(1+\varepsilon)$-approximation with the same update time.
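For reference, the static interval case that the dynamic structure above builds on is solvable exactly by the classical earliest-right-endpoint greedy rule in O(n log n) time. A short sketch (the paper's contribution is maintaining an approximation of such a solution under insertions and deletions):

```python
def max_independent_intervals(intervals):
    """Exact maximum independent set of closed intervals (static case):
    repeatedly keep the interval with the earliest right endpoint
    among those disjoint from everything chosen so far."""
    chosen, last_end = [], float('-inf')
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo > last_end:  # disjoint from all chosen intervals
            chosen.append((lo, hi))
            last_end = hi
    return chosen
```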

Journal ArticleDOI
TL;DR: In this article, the medial axis transform is formulated as a least-squares relaxation in which the transform is obtained by minimizing a continuous optimization problem; the proposed approach is inherently parallelizable, performing independent optimization of each sphere using Gauss-Newton, and its least-squares form makes it significantly more robust than traditional computational geometry approaches.
Abstract: The medial axis transform has applications in numerous fields including visualization, computer graphics, and computer vision. Unfortunately, traditional medial axis transformations are usually brittle in the presence of outliers, perturbations and/or noise along the boundary of objects. To overcome this limitation, we introduce a new formulation of the medial axis transform which is naturally robust in the presence of these artifacts. Unlike previous work which has approached the medial axis from a computational geometry angle, we consider it from a numerical optimization perspective. In this work, we follow the definition of the medial axis transform as "the set of maximally inscribed spheres". We show how this definition can be formulated as a least squares relaxation where the transform is obtained by minimizing a continuous optimization problem. The proposed approach is inherently parallelizable by performing independent optimization of each sphere using Gauss-Newton, and its least-squares form allows it to be significantly more robust compared to traditional computational geometry approaches. Extensive experiments on 2D and 3D objects demonstrate that our method provides superior results to the state of the art on both synthetic and real data.
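The definition used above, the medial axis as centers of maximally inscribed spheres, can be probed numerically by maximizing the distance-to-boundary over candidate centers: local maxima of the inscribed radius sit on (a discrete approximation of) the medial axis. A toy Python sketch under that framing (function names are ours; this is not the paper's Gauss-Newton formulation):

```python
import math

def inscribed_radius(center, boundary_pts):
    """Radius of the largest circle centred at `center` that avoids the
    boundary, approximated by the distance to the nearest boundary sample."""
    return min(math.hypot(center[0] - p[0], center[1] - p[1])
               for p in boundary_pts)

def best_inscribed_center(candidates, boundary_pts):
    """Candidate interior point with maximal inscribed radius; such maxima
    approximate points of the medial axis transform."""
    return max(candidates, key=lambda c: inscribed_radius(c, boundary_pts))
```

Note that this brute-force probe inherits exactly the brittleness the paper addresses: a single noisy boundary sample shrinks the inscribed radius everywhere around it.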

Journal ArticleDOI
TL;DR: An interactive tool compatible with existing software is presented to design ring structures with a paradoxic mobility, which are self‐collision‐free over the complete motion cycle, and is demonstrated in the context of an architectural and artistic application studied in a master‐level studio course.
Abstract: We present an interactive tool compatible with existing software (Rhino/Grasshopper) to design ring structures with a paradoxic mobility, which are self‐collision‐free over the complete motion cycle. Our computational approach allows non‐expert users to create these invertible paradoxic loops with six rotational joints by providing several interactions that facilitate design exploration. In a first step, a rational cubic motion is shaped either by means of a four pose interpolation procedure or a motion evolution algorithm. By using the representation of spatial displacements in terms of dual‐quaternions, the associated motion polynomial of the resulting motion can be factored in several ways, each corresponding to a composition of three rotations. By combining two suitable factorizations, an arrangement of six rotary axes is achieved, which possesses a 1‐parametric mobility. In the next step, these axes are connected by links in a way that the resulting linkage is collision‐free over the complete motion cycle. Based on an algorithmic solution for this problem, collision‐free design spaces of the individual links are generated in a post‐processing step. The functionality of the developed design tool is demonstrated in the context of an architectural and artistic application studied in a master‐level studio course. Two results of the performed design experiments were fabricated by the use of computer‐controlled machines to achieve the necessary accuracy ensuring the mobility of the models.

Journal ArticleDOI
06 Mar 2020-Networks
TL;DR: This work focuses on algorithms for listing SRLGs as a result of regional failures of circular or other fixed shape and generalizes some of the related results on the plane to the sphere.
Abstract: Funding information: This article is based upon work from COST Action CA15127 (“Resilient communication services protecting end-user applications from disaster-based failures — RECODIS”) supported by COST (European Cooperation in Science and Technology). This work was partially supported by the High Speed Networks Laboratory (HSNLab) at the Budapest University of Technology and Economics, by the BME-Artificial Intelligence FIKP grant of EMMI (BME FIKP-MI/SC), and by the Hungarian Scientific Research Fund (grants No. OTKA FK_17 123957, KH_18 129589, K_17 124171 and K_18 128062). Several recent works shed light on the vulnerability of networks against regional failures, which are failures of multiple pieces of equipment in a geographical region as a result of a natural disaster. To enhance the preparedness of a given network to natural disasters, regional failures and the associated Shared Risk Link Groups (SRLGs) should first be identified. For simplicity, most of the previous works assume the network is embedded in a Euclidean plane; nevertheless, networks lie on the Earth's surface, and this assumption causes distortion. In this work, we generalize some of the related results on the plane to the sphere. In particular, we focus on algorithms for listing SRLGs as a result of regional failures of circular or other fixed shape.

Journal ArticleDOI
TL;DR: This work investigates a Voronoi diagram-based tessellation of a body-centered cubic cell for applications in structural synthesis and computational design of 3D lattice structures and shows that the proposed parameterization generates complex search spaces using only four variables.
Abstract: A lattice structure is defined by a network of interconnected structural members whose architecture exhibits some degree of regularity. Although the overall architecture of a lattice may contain many members, its generation can be a simple process in which a unit cell composed of a small number of members, in comparison to the overall structure, is mapped throughout the Euclidean space. However, finding the right lattice architecture in a vast search space that customizes the behavior of a design for a given purpose, subject to mechanical and manufacturing constraints, is a challenging task. In response to this challenge, this work investigates a Voronoi diagram-based tessellation of a body-centered cubic cell for applications in structural synthesis and computational design of 3D lattice structures. This work contributes by exploring how the Voronoi tessellation can be utilized to parametrically represent the architecture of a lattice structure and what the implications of the parametrization are on the optimization, for which a global direct search method is used. The work considers two benchmark studies, a cubic and a cantilever lattice structure, as well as the effect of isotropic and anisotropic material property models, stemming from applications to additive manufacturing. The results show that the proposed parameterization generates complex search spaces using only four variables and includes four different lattice structure types: a Kelvin cell, a hexagonal lattice, a diamond-core lattice structure, and a box-boom type lattice structure. The global direct search method applied is shown to be effective considering two different material property models from an additive manufacturing (AM) process.

Journal ArticleDOI
TL;DR: Experiments carried out on the IFHCDB database have shown that the suggested approach outperforms a normal CNN and yields satisfactory results; the proposed approach uses an optimal architecture to exploit the advantages of the two techniques and overcome the computational-time issue.
Abstract: Article history: Received: 28 June, 2020; Accepted: 19 August, 2020; Online: 09 September, 2020. Suppose we want to classify a query item Q with a classification model that consists of a large set of predefined classes L, and suppose we have knowledge indicating that the target class of Q belongs to a small subset of L. Naturally, this filtering will improve the accuracy of any classifier, even random guessing. Based on this principle, this paper proposes a new classification approach using convolutional neural networks (CNN) and computational geometry (CG) algorithms. The approach is applied and tested on the recognition of isolated handwritten Arabic characters (IHAC). The main idea of the proposed approach is to direct the CNN using a filtering layer, which reduces the set of possible classes for a query item. The rules of the relative neighborhood graph (RNG) and Gabriel’s graph (GG) are combined for this purpose. The choice of RNG-GG was based on its great capacity to correctly reduce the list of possible classes. This capacity is measured by a new indicator that we call “the appearance rate”. In recent years, and due to strong data growth, CNNs have performed classification tasks very well. On the contrary, CG algorithms yield limited results on huge datasets and suffer from high computational time, but they generally reach high appearance rates and do not require any training phase. Consequently, the proposed approach uses an optimal architecture to exploit the advantages of the two techniques and overcome the computational time issue. Experiments carried out on the IFHCDB database have shown that the suggested approach outperforms a normal CNN and yields satisfactory results.
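The Gabriel graph rule used in the filtering layer above has a simple characterization: points p and q are adjacent iff the disc with diameter pq contains no third point in its interior. A brute-force O(n³) construction sketch (illustrative only; the paper combines this with the relative neighborhood graph rule):

```python
def gabriel_graph(points):
    """Gabriel graph over 2-D points: (i, j) is an edge iff no other
    point lies strictly inside the circle with diameter points[i]points[j]."""
    edges = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            p, q = points[i], points[j]
            mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2   # circle centre
            r2 = ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) / 4  # radius^2
            if all((x - mx) ** 2 + (y - my) ** 2 >= r2
                   for k, (x, y) in enumerate(points) if k not in (i, j)):
                edges.append((i, j))
    return edges
```

In the classification setting, the candidate classes for a query are then restricted to the classes of the query's graph neighbours, which is the filtering the "appearance rate" measures.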

Proceedings Article
05 Jan 2020
TL;DR: This work derives fast approximation schemes for LP relaxations of several well-studied geometric optimization problems that include packing, covering, and mixed packing and covering constraints and obtains the first near-linear constant factor approximation algorithms for several problems.
Abstract: We derive fast approximation schemes for LP relaxations of several well-studied geometric optimization problems that include packing, covering, and mixed packing and covering constraints. Previous work in computational geometry concentrated mainly on the rounding stage to prove approximation bounds, assuming that the underlying LPs can be solved efficiently. This work demonstrates that many of those results can be made to run in nearly linear time. In contrast to prior work on this topic our algorithms handle weights and capacities, side constraints, and also apply to mixed packing and covering problems, in a unified fashion. Our framework relies crucially on the properties of a randomized MWU algorithm of [41]; we demonstrate that it is well-suited for range spaces that admit efficient approximate dynamic data structures for emptiness oracles. Our framework cleanly separates the MWU algorithm for solving the LP from the key geometric data structure primitives, and this enables us to handle side constraints in a simple way. Combined with rounding algorithms that can also be implemented efficiently, we obtain the first near-linear constant factor approximation algorithms for several problems.

Proceedings ArticleDOI
31 Jan 2020
TL;DR: Examples of computer vision tasks in which topological data analysis gave new effective solutions are provided, and a brief introduction to the subject is given throughout the text.
Abstract: The paper provides examples of computer vision tasks in which topological data analysis gave new effective solutions. The ideas underlying topological data analysis and its basic methods are briefly described and illustrated with examples of computer vision problems. No prior knowledge of topological data analysis or computational geometry is assumed; a brief introduction to the subject is given throughout the text.

Posted Content
TL;DR: A secure two-party protocol to determine the existence of an intersection between entities, which applies to any form of convex three-dimensional shape and is secured by modifying the separating set computation method as a privacy-preserver.
Abstract: Intersection detection between three-dimensional bodies has various applications in computer graphics, video game development, robotics as well as military industries. In some respects, entities do not want to disclose sensitive information about themselves, including their location. In this paper, we present a secure two-party protocol to determine the existence of an intersection between entities. The protocol presented in this paper allows for intersection detection in three-dimensional spaces in geometry. Our approach is to use an intersecting plane between two spaces to determine their separation or intersection. For this purpose, we introduce a computational geometry protocol to determine the existence of an intersecting plane. In this paper, we first use the Minkowski difference to reduce the two-space problem into one-space. Then, the separating set is obtained and the separation of two shapes is determined based on the inclusion of the center point. We then secure the protocol by modifying the separating set computation method as a privacy-preserver and changing the Minkowski difference method to achieve this goal. The proposed protocol applies to any form of convex three-dimensional shape. The experiments successfully found a secure protocol for intersection detection between two convex hulls in geometrical shapes such as the pyramid and cuboid.
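The separating-plane idea underlying the protocol above has a well-known 2D analogue, the separating-axis test for convex polygons: two convex shapes are disjoint iff their projections onto some edge normal do not overlap. A non-private 2D sketch (the paper works with 3D bodies, the Minkowski difference, and adds privacy preservation on top):

```python
def polygons_intersect(P, Q):
    """Separating-axis test for two convex polygons given as vertex lists:
    they are disjoint iff projections onto some edge normal are disjoint."""
    def axes(poly):
        n = len(poly)
        for i in range(n):
            x1, y1 = poly[i]
            x2, y2 = poly[(i + 1) % n]
            yield (-(y2 - y1), x2 - x1)  # normal of edge i

    for ax, ay in list(axes(P)) + list(axes(Q)):
        proj_p = [ax * x + ay * y for x, y in P]
        proj_q = [ax * x + ay * y for x, y in Q]
        if max(proj_p) < min(proj_q) or max(proj_q) < min(proj_p):
            return False  # this axis separates the two polygons
    return True           # no candidate axis separates them
```

For convex shapes, checking only the edge normals is sufficient; in 3D, face normals and edge-pair cross products play the same role, which is the search the secure protocol performs without revealing the shapes.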

Journal ArticleDOI
TL;DR: This work focuses on the definition and study of the n-dimensional k-vector, an algorithm devised to perform orthogonal range searching in static databases with multiple dimensions that has a worst-case complexity of O(nd(k/n)^{2/d}).

Journal ArticleDOI
TL;DR: The main contribution is a WYSIWYG (i.e. what you see is what you get) way of building new solvers that produces multi‐style bas‐reliefs with their geometric structures and/or details preserved.
Abstract: We present a near‐lighting photometric stereo (NL‐PS) system to produce digital bas‐reliefs from a physical object (set) directly. Unlike both the 2D image and 3D model‐based modelling methods that require complicated interactions and transformations, the technique using NL‐PS is easy to use with cost‐effective hardware, providing users with a trade‐off between abstract and representation when creating bas‐reliefs. Our algorithm consists of two steps: normal map acquisition and constrained 3D reconstruction. First, we introduce a lighting model, named the quasi‐point lighting model (QPLM), and provide a two‐step calibration solution in our NL‐PS system to generate a dense normal map. Second, we filter the normal map into a detail layer and a structure layer, and formulate detail‐ or structure‐preserving bas‐relief modelling as a constrained surface reconstruction problem of solving a sparse linear system. The main contribution is a WYSIWYG (i.e. what you see is what you get) way of building new solvers that produces multi‐style bas‐reliefs with their geometric structures and/or details preserved. The performance of our approach is experimentally validated via comparisons with the state‐of‐the‐art methods.

Proceedings ArticleDOI
18 Aug 2020
TL;DR: It is concluded that exported models have geometric flaws and that several relationships can indeed be inferred by means of generic geometric intersection logic; the validation and inference rules are applied to a public set of building models.
Abstract: The Industry Foundation Classes are a prevalent open standard to exchange Building Information Models. In such a model, geometric representations are provided for individual building elements along with semantic information, including a significant number of properties related to geometry and explicit topological relationships. These relationships and quantities introduce redundancies and often inconsistencies as well. Moreover, they introduce complexity in downstream processing. Combining multiple aspect models into a single model has non-trivial consequences for the connectivity graphs. Programmatic mutations are complicated because of the relationships that need to be updated as a result of changes. In order to alleviate these issues, this paper provides a theoretical framework and implementation for both validating and inferring semantic and topological constructs from the geometric representations, rooted in Egenhofer spatial predicates and extended with the IFC modelling tolerance. Combining these two concepts, wall connectivity is equivalent to the intersection of the wall representation boundaries, where a boundary is not a surface, but rather a hollow solid with a thickness derived from the modelling tolerance. The algorithms presented in this paper are implemented in fully open source software based on the IfcOpenShell software library and the CGAL computational geometry library using Nef polyhedra. We provide a formalization of space boundaries, spatial containment and wall connectivity relationships. The validation and inference rules are applied to a public set of building models. We conclude that exported models have geometric flaws and that several relationships can indeed be inferred by means of generic geometric intersection logic.

Journal ArticleDOI
TL;DR: This study proposes an efficient linear interpolation algorithm for higher-dimensional regularization using Delaunay tessellation, which has been used for various purposes, such as surface reconstruction and determining the nearest neighbourhood in computational geometry.
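Delaunay-based linear interpolation, as in the study above, evaluates within each simplex a barycentric combination of the vertex values. The per-triangle primitive can be sketched as follows (2D case with hypothetical inputs; the tessellation step that locates the containing simplex is omitted):

```python
def barycentric_interpolate(tri, values, p):
    """Linear interpolation of vertex values at point p inside triangle
    tri = [(x1,y1), (x2,y2), (x3,y3)]: solve for barycentric coordinates
    (l1, l2, l3) and blend the vertex values with them."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * values[0] + l2 * values[1] + l3 * values[2]
```

The interpolant is exact for affine functions, which is why it is a natural regularizer: with vertex values sampled from f(x, y) = x + 2y, the interpolated value at any interior point reproduces f.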

Proceedings ArticleDOI
14 Jun 2020
TL;DR: This paper evaluates four computational geometry libraries to assess their suitability for various workloads in big spatial data exploration, namely, GEOS, JTS, Esri Geometry API, and GeoLite.
Abstract: With the rise of big spatial data, many systems were developed on Hadoop, Spark, Storm, Flink, and similar big data systems to handle big spatial data. At the core of all these systems, they use a computational geometry library to represent points, lines, and polygons, and to process them to evaluate spatial predicates and spatial analysis queries. This paper evaluates four computational geometry libraries to assess their suitability for various workloads in big spatial data exploration, namely, GEOS, JTS, Esri Geometry API, and GeoLite. The latter is a library that we built specifically for this paper to test some ideas that are not present in other libraries. For all the four libraries, we evaluate their computational efficiency and memory usage using a combination of micro- and macro-benchmarks on Spark. The paper gives recommendations on how to use these libraries for big spatial data exploration.

Journal ArticleDOI
TL;DR: In this article, the authors review the main known results on systems of an arbitrary (possibly infinite) number of weak linear inequalities posed in the Euclidean space, and show the potential power of this theoretical tool by developing in detail two significant applications, one to computational geometry: the Voronoi cells, and the other to mathematical analysis.
Abstract: In this paper we, firstly, review the main known results on systems of an arbitrary (possibly infinite) number of weak linear inequalities posed in the Euclidean space $\mathbb {R}^{n}$ (i.e., with n unknowns), and, secondly, show the potential power of this theoretical tool by developing in detail two significant applications, one to computational geometry: the Voronoi cells, and the other to mathematical analysis: approximate subdifferentials, recovering known results in both fields and proving new ones. In particular, this paper completes the existing theory of farthest Voronoi cells of infinite sets of sites by appealing to well-known results on linear semi-infinite systems.
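The Voronoi application discussed above rests on the fact that a nearest-point Voronoi cell is the solution set of a system of weak linear inequalities, one per competing site: x lies in the cell of s_i iff 2(s_j − s_i)·x ≤ |s_j|² − |s_i|² for every other site s_j. A membership-test sketch in Python (our own illustration of the linear-system view):

```python
def in_voronoi_cell(x, site, sites):
    """True iff x lies in the (nearest-point) Voronoi cell of `site`,
    checked via the weak linear inequalities
    2*(s_j - s_i).x <= |s_j|^2 - |s_i|^2 for every other site s_j."""
    ni = sum(c * c for c in site)
    for sj in sites:
        if sj == site:
            continue
        nj = sum(c * c for c in sj)
        lhs = 2 * sum((a - b) * t for a, b, t in zip(sj, site, x))
        if lhs > nj - ni:  # inequality violated: sj is strictly closer
            return False
    return True
```

Each inequality is just |x − s_i|² ≤ |x − s_j|² with the quadratic terms cancelled, so the cell is an intersection of halfspaces, exactly the kind of (possibly infinite) linear system the paper studies.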

Posted Content
TL;DR: A general toolbox for precision control and complexity analysis of subdivision based algorithms in computational geometry is introduced, and polynomial time complexity estimates are proved for both interval arithmetic and finite precision versions of the Plantinga-Vegter algorithm.
Abstract: We introduce a general toolbox for precision control and complexity analysis of subdivision-based algorithms in computational geometry. We showcase the toolbox on a well-known example from this family: the adaptive subdivision algorithm due to Plantinga and Vegter. The only existing complexity estimate on this rather fast algorithm was an exponential worst-case upper bound for its interval arithmetic version. We go beyond the worst case by considering smoothed analysis, and prove polynomial time complexity estimates for both interval arithmetic and finite precision versions of the Plantinga-Vegter algorithm. The employed toolbox is a blend of robust probabilistic techniques coming from geometric functional analysis with condition numbers and the continuous amortization paradigm introduced by Burr, Krahmer and Yap. We hope this combination of tools from different disciplines will prove useful for understanding complexity aspects of the broad family of subdivision-based algorithms in computational geometry.
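The core move in such subdivision algorithms is an interval-arithmetic exclusion test: a box is discarded as soon as the interval extension of f proves f has no zero in it, otherwise the box is split. A minimal 1D sketch of that exclusion step (this is only one ingredient; the full Plantinga-Vegter algorithm also uses an interval gradient test to certify the topology of the output):

```python
def subdivide(f_iv, lo, hi, depth):
    """Subdivision with an interval-arithmetic exclusion test: discard
    [lo, hi] when the interval extension f_iv proves f has no zero there,
    otherwise split at the midpoint until the depth budget runs out."""
    f_lo, f_hi = f_iv(lo, hi)
    if f_lo > 0 or f_hi < 0:          # 0 not in f([lo, hi]): exclude box
        return []
    if depth == 0:
        return [(lo, hi)]
    mid = (lo + hi) / 2
    return (subdivide(f_iv, lo, mid, depth - 1)
            + subdivide(f_iv, mid, hi, depth - 1))

# Exact interval extension of f(x) = x^2 - 2 on [lo, hi].
def f_iv(lo, hi):
    hi_sq = max(lo * lo, hi * hi)
    lo_sq = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return lo_sq - 2.0, hi_sq - 2.0

boxes = subdivide(f_iv, -4.0, 4.0, 8)
print(boxes)  # two tiny intervals, one around -sqrt(2), one around sqrt(2)
```

The complexity question the paper addresses is exactly how many boxes survive this recursion; the exponential worst case arises when the exclusion test keeps failing near the curve.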

Journal ArticleDOI
TL;DR: The method presented is an iterative edge contraction algorithm based on the work of Garland and Heckbert, which helps preserve the visually salient features of the model without compromising performance.
Abstract: Polygonal meshes have a significant role in computer graphics, design, and manufacturing technology for surface representation, and it is often necessary to reduce their complexity to save memory. An efficient algorithm for detail-retaining mesh simplification is proposed; in particular, the method presented is an iterative edge contraction algorithm based on the work of Garland and Heckbert. The original algorithm is improved by enhancing the quadric error metric with a penalizing factor based on discrete Gaussian curvature, which is estimated efficiently through the Gauss-Bonnet theorem, to account for the presence of fine details during the edge decimation process. Experimental results show that this new algorithm helps preserve the visually salient features of the model without compromising performance.
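The curvature estimate the abstract refers to rests on the discrete Gauss-Bonnet theorem: at an interior vertex of a triangle mesh, the integrated Gaussian curvature equals the angle deficit, 2π minus the sum of the incident triangle angles at that vertex. A standalone sketch of that estimate (not the paper's code; the vertex's one-ring is given in cyclic order):

```python
import math

def angle_at(v, a, b):
    """Angle at vertex v in the triangle (v, a, b)."""
    u = [a[i] - v[i] for i in range(3)]
    w = [b[i] - v[i] for i in range(3)]
    d = sum(ui * wi for ui, wi in zip(u, w))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nw = math.sqrt(sum(wi * wi for wi in w))
    return math.acos(max(-1.0, min(1.0, d / (nu * nw))))

def angle_deficit(v, ring):
    """Discrete Gaussian curvature at an interior vertex v via the
    Gauss-Bonnet angle deficit: 2*pi minus the sum of incident angles.
    `ring` lists the one-ring neighbours of v in cyclic order."""
    angles = sum(angle_at(v, ring[k], ring[(k + 1) % len(ring)])
                 for k in range(len(ring)))
    return 2 * math.pi - angles

# Flat hexagonal one-ring around the origin: zero deficit (flat surface).
flat = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3), 0.0)
        for k in range(6)]
print(abs(angle_deficit((0.0, 0.0, 0.0), flat)) < 1e-9)   # True

# Lift the centre vertex: cone-like shape, positive deficit.
print(angle_deficit((0.0, 0.0, 0.5), flat) > 0)           # True
```

Penalizing edge contractions at vertices with large |deficit| is what keeps the fine, highly curved details from being decimated first.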

Journal ArticleDOI
16 Jul 2020-Symmetry
TL;DR: Evidence is presented of the convenience of implementing the geometric places of the plane in commercial computer-aided design (CAD) software as auxiliary tools in the computer-aided sketching process, and the possibility of adding several intuitive spatial geometric places to improve the efficiency of three-dimensional geometric design is considered.
Abstract: This article presents evidence of the convenience of implementing the geometric places of the plane in commercial computer-aided design (CAD) software as auxiliary tools in the computer-aided sketching process. Additionally, the research considers the possibility of adding several intuitive spatial geometric places to improve the efficiency of three-dimensional geometric design. For demonstrative purposes, four examples are presented. A two-dimensional figure positioned on the flat face of an object shows a significant improvement over the tools currently available in commercial CAD software, both vector and parametric: it is more intuitive and does not require the designer to execute as many operations. Two more complex three-dimensional examples show how spatial geometric places, implemented as CAD software functions, would be an effective and highly intuitive tool. Using these functions produces auxiliary curved surfaces whose notable points constitute a significant innovation. A final example solves a geometric place problem using software that we designed for this purpose. Incorporating geometric places into CAD software would lead to a significant improvement in the field of computational geometry. Consequently, it could increase technical-design productivity by eliminating some intermediate operations, such as symmetry, among others, and improve the geometry training of less skilled users.
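A classical example of the kind of geometric place such CAD functions would generate is the Apollonius circle: the locus of points whose distances to two fixed points have a constant ratio k ≠ 1. A short sketch (illustrative only, not the article's software) derives its centre and radius by expanding the defining equation, then checks the ratio at a sample point:

```python
import math

def apollonius_circle(A, B, k):
    """Geometric place { P : |PA| / |PB| = k }, with k != 1.
    Expanding |PA|^2 = k^2 |PB|^2 gives
      (1 - k^2)|P|^2 - 2(A - k^2 B).P + (|A|^2 - k^2 |B|^2) = 0,
    i.e. a circle; return its (center, radius)."""
    ax, ay = A
    bx, by = B
    k2 = k * k
    cx = (ax - k2 * bx) / (1 - k2)
    cy = (ay - k2 * by) / (1 - k2)
    c_sq = (ax * ax + ay * ay - k2 * (bx * bx + by * by)) / (1 - k2)
    r = math.sqrt(cx * cx + cy * cy - c_sq)
    return (cx, cy), r

center, radius = apollonius_circle((0.0, 0.0), (3.0, 0.0), 2.0)
print(center, radius)  # (4.0, 0.0) 2.0

# Verify the defining ratio at a point on the circle.
px, py = center[0] + radius, center[1]
ratio = math.hypot(px - 0.0, py - 0.0) / math.hypot(px - 3.0, py - 0.0)
print(ratio)  # 2.0
```

A CAD auxiliary tool built on such loci would expose the resulting curve directly, sparing the designer the intermediate constructions the article mentions.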

Proceedings ArticleDOI
01 Apr 2020
TL;DR: A quantum algorithm is constructed that solves POINT-ON-3-LINES and other 3SUM-HARD problems in $O(n^c)$ time for $c<2$, by combining recursive use of amplitude amplification with geometrical ideas.
Abstract: We study quantum algorithms for problems in computational geometry, such as the Point-On-3-Lines problem. In this problem, we are given a set of lines and we are asked to find a point that lies on at least 3 of these lines. Point-On-3-Lines and many other computational geometry problems are known to be 3Sum-Hard. That is, solving them classically requires time Ω(n^{2-o(1)}), unless there is a faster algorithm for the well-known 3Sum problem (in which we are given a set S of n integers and have to determine if there are a, b, c ∈ S such that a + b + c = 0). Quantumly, 3Sum can be solved in time O(n log n) using Grover's quantum search algorithm. This leads to a question: can we solve Point-On-3-Lines and other 3Sum-Hard problems in O(n^c) time quantumly, for c < 2? We answer this question affirmatively, by constructing a quantum algorithm that solves Point-On-3-Lines in time O(n^{1 + o(1)}). The algorithm combines recursive use of amplitude amplification with geometrical ideas. We show that the same ideas give an O(n^{1 + o(1)}) time algorithm for many 3Sum-Hard geometrical problems.
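For context, the classical 3Sum baseline against which these hardness results are measured runs in O(n²) time by sorting and then scanning with two pointers; it is this quadratic barrier that the quantum algorithm circumvents for the geometric variants. A standalone sketch:

```python
def three_sum(nums):
    """Classical O(n^2) 3Sum: are there a, b, c in nums with a + b + c == 0?
    Sort, then for each anchor element walk two pointers over the rest."""
    s = sorted(nums)
    n = len(s)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            total = s[i] + s[lo] + s[hi]
            if total == 0:
                return True
            if total < 0:
                lo += 1      # sum too small: move left pointer right
            else:
                hi -= 1      # sum too large: move right pointer left
    return False

print(three_sum([5, -7, 2, 1, 9]))   # True: 5 + (-7) + 2 = 0
print(three_sum([1, 2, 3, 4]))       # False
```

A 3Sum-Hard problem such as Point-On-3-Lines admits a reduction from this problem, so any sub-quadratic classical algorithm for it would break the conjectured Ω(n^{2-o(1)}) bound for 3Sum itself.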