
Showing papers on "Computational geometry published in 2019"


Proceedings ArticleDOI
01 Jun 2019
TL;DR: Qualitative and quantitative comparisons with the state-of-the-art demonstrate the superiority of the proposed Spatial Fusion GAN (SF-GAN), which combines a geometry synthesizer and an appearance synthesizer to achieve synthesis realism in both geometry and appearance spaces.
Abstract: Recent advances in generative adversarial networks (GANs) have shown great potential in realistic image synthesis, whereas most existing works address synthesis realism in either the appearance space or the geometry space, but few in both. This paper presents an innovative Spatial Fusion GAN (SF-GAN) that combines a geometry synthesizer and an appearance synthesizer to achieve synthesis realism in both the geometry and appearance spaces. The geometry synthesizer learns contextual geometries of background images and transforms and places foreground objects into the background images coherently. The appearance synthesizer adjusts the color, brightness and styles of the foreground objects and embeds them into background images harmoniously, where a guided filter is incorporated for detail preservation. The two synthesizers are inter-connected as mutual references and can be trained end-to-end with little supervision. The SF-GAN has been evaluated on two tasks: (1) realistic scene text image synthesis for training better recognition models; (2) glasses and hats wearing, for realistically matching glasses and hats with real portraits. Qualitative and quantitative comparisons with the state-of-the-art demonstrate the superiority of the proposed SF-GAN.

128 citations


Journal ArticleDOI
TL;DR: This survey reviews the algorithms which extract simple geometric primitives from raw dense 3D data and proposes an application‐oriented characterization, designed to help select an appropriate method based on one's application needs and compare recent approaches.
Abstract: The amount of captured 3D data is continuously increasing, with the democratization of consumer depth cameras, the development of modern multi-view stereo capture setups and the rise of single-view 3D capture based on machine learning. The analysis and representation of this ever-growing volume of 3D data, often corrupted with acquisition noise and reconstruction artefacts, is a serious challenge at the frontier between computer graphics and computer vision. To that end, segmentation and optimization are crucial analysis components of the shape abstraction process, which can themselves be greatly simplified when performed on lightweight geometric formats. In this survey, we review algorithms that extract simple geometric primitives from raw dense 3D data. After giving an introduction to these techniques, from the acquisition modality to the underlying theoretical concepts, we propose an application-oriented characterization, designed to help select an appropriate method based on one's application needs, and compare recent approaches. We conclude with hints on how to evaluate these methods and a set of research challenges to be explored.
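
For orientation, the most widely used primitive-extraction baseline covered by surveys of this kind is RANSAC fitting. Below is a minimal Python sketch for a single plane; the function name and parameters are ours, not from the survey.

```python
import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.01, rng=None):
    """Fit a plane to a 3D point cloud with vanilla RANSAC.

    Returns (normal, offset) of the best plane n . x = d and the inlier mask.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample a minimal set: 3 non-degenerate points define a plane.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:          # nearly collinear sample, skip
            continue
        normal /= norm
        d = normal @ p0
        # Inliers are points within `threshold` of the plane.
        inliers = np.abs(points @ normal - d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

# Toy usage: a noisy plane plus outliers.
rng = np.random.default_rng(0)
plane = np.c_[rng.uniform(-1, 1, (200, 2)), np.zeros(200)] + rng.normal(0, 0.005, (200, 3))
outliers = rng.uniform(-1, 1, (50, 3))
model, inliers = ransac_plane(np.vstack([plane, outliers]))
```

In practice several primitives are extracted by re-running such a fit on the points not yet claimed by any primitive; the surveyed methods differ mainly in how they avoid this greedy loop's pitfalls.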

115 citations


Journal ArticleDOI
TL;DR: This work proposes a new patch normal co-filter (PcFilter) for mesh denoising, inspired by the geometry statistics which show that surface patches with similar intrinsic properties exist on the underlying surface of a noisy mesh, aiming at removing different levels of noise, yet preserving various surface features.
Abstract: Mesh denoising is a classical, yet not well-solved problem in digital geometry processing. The challenge arises from removing noise with minimal disturbance of intrinsic surface properties (e.g., sharp features and shallow details). We propose a new patch normal co-filter (PcFilter) for mesh denoising. It is inspired by geometry statistics showing that surface patches with similar intrinsic properties exist on the underlying surface of a noisy mesh. We model the PcFilter as a low-rank matrix recovery problem of similar-patch collaboration, aiming at removing different levels of noise while preserving various surface features. We generalize our model to pursue low-rank matrix recovery in the kernel space for handling the nonlinear structure contained in the data. By making use of block coordinate descent minimization and the specifics of a proximal-based coordinate descent method, we optimize the nonlinear and nonconvex objective function efficiently. Detailed quantitative and qualitative results on synthetic and real data show that the PcFilter competes favorably with the state-of-the-art methods in surface accuracy and noise-robustness.
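
The workhorse of low-rank matrix recovery formulations like this one is the singular value thresholding (SVT) operator, the proximal map of the nuclear norm. A minimal sketch of that building block (not the authors' kernel-space solver):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm.

    Shrinks singular values by tau, the basic step in low-rank matrix
    recovery schemes of the kind PcFilter is built on.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

# Toy usage: rows are noisy copies of "similar patches"; thresholding
# suppresses the noise directions and keeps the shared (rank-1) structure.
rng = np.random.default_rng(1)
clean = np.tile(rng.normal(size=(1, 30)), (20, 1))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
denoised = svt(noisy, tau=1.0)
```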

64 citations


Journal ArticleDOI
TL;DR: Five discrete results are discussed: the lemmas of Sperner and Tucker from combinatorial topology and the theorems of Carathéodory, Helly, and Tverberg from combinatorial geometry; their connections are explored and their broad impact in application areas is emphasized.
Abstract: We discuss five discrete results: the lemmas of Sperner and Tucker from combinatorial topology and the theorems of Carathéodory, Helly, and Tverberg from combinatorial geometry. We explore their connections and emphasize their broad impact in application areas such as game theory, graph theory, mathematical optimization, computational geometry, etc.
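
For concreteness, here is the statement of one of the five results, Helly's theorem:

```latex
\textbf{Helly's theorem.} Let $C_1,\dots,C_n$ be convex sets in $\mathbb{R}^d$
with $n \ge d+1$. If every $d+1$ of them have a point in common, then
\[
  \bigcap_{i=1}^{n} C_i \neq \emptyset .
\]
```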

56 citations


Journal ArticleDOI
TL;DR: A data structure that makes it easy to run a large class of algorithms from computational geometry and scientific computing on extremely poor-quality surface meshes by considering intrinsic triangulations which connect vertices by straight paths along the exact geometry of the input mesh.
Abstract: We present a data structure that makes it easy to run a large class of algorithms from computational geometry and scientific computing on extremely poor-quality surface meshes. Rather than changing the geometry, as in traditional remeshing, we consider intrinsic triangulations which connect vertices by straight paths along the exact geometry of the input mesh. Our key insight is that such a triangulation can be encoded implicitly by storing the direction and distance to neighboring vertices. The resulting signpost data structure then allows geometric and topological queries to be made on-demand by tracing paths across the surface. Existing algorithms can be easily translated into the intrinsic setting, since this data structure supports the same basic operations as an ordinary triangle mesh (vertex insertions, edge splits, etc.). The output of intrinsic algorithms can then be stored on an ordinary mesh for subsequent use; unlike previous data structures, we use a constant amount of memory and do not need to explicitly construct an overlay mesh unless it is specifically requested. Working in the intrinsic setting incurs little computational overhead, yet we can run algorithms on extremely degenerate inputs, including all manifold meshes from the Thingi10k data set. To evaluate our data structure we implement several fundamental geometric algorithms including intrinsic versions of Delaunay refinement and optimal Delaunay triangulation, approximation of Steiner trees, adaptive mesh refinement for PDEs, and computation of Poisson equations, geodesic distance, and flip-free tangent vector fields.
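
A flavor of working intrinsically: an edge flip needs no vertex positions at all, only the five edge lengths of the two incident triangles, which can be unfolded into the plane. A minimal sketch with our own helper names (the paper's signpost structure additionally stores directions to neighbors so that paths can be traced):

```python
import numpy as np

def flipped_edge_length(l_ab, l_bc, l_ca, l_ad, l_db):
    """Length of the new diagonal c-d created by flipping shared edge a-b.

    Triangles (a, b, c) and (a, b, d) are unfolded into the plane purely
    from their edge lengths; no vertex positions are needed.
    """
    def apex(l_from_a, l_from_b, sign):
        # Intersect circles about a=(0,0) and b=(l_ab,0).
        x = (l_ab**2 + l_from_a**2 - l_from_b**2) / (2 * l_ab)
        y = sign * np.sqrt(max(l_from_a**2 - x**2, 0.0))
        return np.array([x, y])

    c = apex(l_ca, l_bc, +1.0)   # triangle (a, b, c) laid out above a-b
    d = apex(l_ad, l_db, -1.0)   # triangle (a, b, d) laid out below a-b
    return float(np.linalg.norm(c - d))

# Two unit right triangles glued along a hypotenuse of length sqrt(2) form a
# unit square; flipping that diagonal yields the other diagonal, also sqrt(2).
print(flipped_edge_length(np.sqrt(2), 1, 1, 1, 1))   # ~1.41421
```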

49 citations


Journal ArticleDOI
TL;DR: A generalization of vector fields, referred to as $N$-direction fields or cross fields when $N = 4$, has been recently introduced and studied for geometry processing, with applications in quadrilat....
Abstract: A generalization of vector fields, referred to as $N$-direction fields or cross fields when $N = 4$, has been recently introduced and studied for geometry processing, with applications in quadrilat...
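
The standard trick for such fields is to encode a direction angle θ as the complex number e^{iNθ}, which makes the N-fold rotational symmetry invisible to averaging and interpolation. A small illustrative sketch (conventions may differ from this paper's):

```python
import numpy as np

N = 4  # cross fields

def to_rep(theta):
    """Encode a direction as a unit complex number, N-fold symmetric."""
    return np.exp(1j * N * theta)

def from_rep(z):
    """Recover one representative angle in [0, 2*pi/N)."""
    return np.angle(z) / N % (2 * np.pi / N)

# Averaging two crosses that differ by 90 degrees gives back the same cross,
# since e^{i*4*theta} is invariant under theta -> theta + pi/2.
a, b = 0.2, 0.2 + np.pi / 2
z = 0.5 * (to_rep(a) + to_rep(b))
print(from_rep(z))   # ~0.2
```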

48 citations


Proceedings ArticleDOI
16 Apr 2019
TL;DR: SReachTools implements several new algorithms based on convex optimization, computational geometry, and Fourier transforms, to efficiently compute over- and under-approximations of the stochastic reach set.
Abstract: We present SReachTools, an open-source MATLAB toolbox for performing stochastic reachability of linear, potentially time-varying, discrete-time systems that are perturbed by a stochastic disturbance. The toolbox addresses the problem of stochastic reachability of a target tube, which also encompasses the terminal-time hitting reach-avoid and viability problems. The stochastic reachability of a target tube problem maximizes the likelihood that the state of a stochastic system will remain within a collection of time-varying target sets for a given time horizon, while respecting the system dynamics and bounded control authority. SReachTools implements several new algorithms based on convex optimization, computational geometry, and Fourier transforms, to efficiently compute over- and under-approximations of the stochastic reach set. SReachTools can be used to perform probabilistic verification of closed-loop systems and can also perform controller synthesis via open-loop, affine, and state-feedback controllers. The code base is available online at https://github.com/unm-hscl/SReachTools, and it is designed to be extensible and user friendly.
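
SReachTools itself is MATLAB; as a language-neutral reference point, the quantity it bounds can be estimated by plain Monte Carlo. A hedged Python sketch with hypothetical helper names, for the probability of staying in a target tube under a fixed feedback law (not SReachTools' convex-optimization machinery, which produces guaranteed over- and under-approximations):

```python
import numpy as np

def target_tube_probability(A, B, K, x0, sigma, tube, n_samples=10000, seed=0):
    """Monte Carlo estimate of P[ x_t stays in the target tube for all t ].

    Dynamics: x_{t+1} = A x_t + B u_t + w_t with Gaussian w_t and fixed
    feedback u_t = K x_t; `tube` is a list of (lower, upper) box bounds per
    time step. Hypothetical helper, not part of SReachTools.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_samples):
        x, ok = np.array(x0, dtype=float), True
        for lo, hi in tube:
            if not (np.all(lo <= x) and np.all(x <= hi)):
                ok = False
                break
            w = rng.normal(0.0, sigma, size=x.shape)
            x = A @ x + B @ (K @ x) + w
        hits += ok
    return hits / n_samples

# Double integrator kept near the origin for 5 steps.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-1.0, -1.8]])
tube = [(np.array([-1, -1]), np.array([1, 1]))] * 5
print(target_tube_probability(A, B, K, [0.0, 0.0], 0.05, tube))
```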

43 citations


Journal ArticleDOI
TL;DR: The Producibility Index, a weighted optimization metric, brings together the quantified outputs of the DFAM analysis, support structure parameters, and accessibility analysis, and suggests the best build orientations for the given part geometry.

42 citations


Journal ArticleDOI
TL;DR: Given data in $\mathbb{R}^p$, a Tukey κ-trimmed region is defined as the set of all points that have at least Tukey depth κ with respect to the data; such regions are visual, affine equivariant and robust.
Abstract: Given data in $\mathbb{R}^p$, a Tukey κ-trimmed region is the set of all points that have at least Tukey depth κ w.r.t. the data. As they are visual, affine equivariant and robust, Tukey regions are useful to...
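
The definition is directly computable, if slowly: in the plane, the Tukey depth of a point is the smallest fraction of data on one side of any line through it. A sampled-direction approximation of ours (an upper bound on the exact depth):

```python
import numpy as np

def tukey_depth(x, data, n_dirs=720):
    """Approximate Tukey (halfspace) depth of point x w.r.t. 2D `data`.

    depth(x) = min over unit directions u of fraction{ i : <p_i - x, u> >= 0 };
    sampling finitely many directions can only overestimate the minimum.
    """
    angles = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    dirs = np.c_[np.cos(angles), np.sin(angles)]
    proj = (data - x) @ dirs.T            # shape (n_points, n_dirs)
    return (proj >= 0).mean(axis=0).min()

rng = np.random.default_rng(2)
data = rng.normal(size=(300, 2))
print(tukey_depth(np.zeros(2), data))     # near 0.5 for a symmetric cloud
```

The κ-trimmed region is then {x : depth(x) ≥ κ}, which can be rasterized by scanning a grid; the paper's point is computing such regions exactly and efficiently.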

27 citations


Proceedings ArticleDOI
01 May 2019
TL;DR: It turns out that the Moment-SOS hierarchy can also be applied to solve problems with positivity constraints "$f(x) \ge 0$ for all $x \in K$" and/or linear constraints on Borel measures.
Abstract: The Moment-SOS hierarchy is a powerful methodology for solving the Generalized Moment Problem (GMP), whose list of applications in various areas of science and engineering is almost endless. Initially designed for solving polynomial optimization problems (the simplest example of the GMP), it applies to any instance of the GMP whose description involves only semi-algebraic functions and sets. It consists of solving a sequence (a hierarchy) of convex relaxations of the initial problem, where each convex relaxation is a semidefinite program whose size increases along the hierarchy. The goal of this book is to describe in a unified and detailed manner how this methodology applies to various problems in areas ranging from optimization, probability, statistics and signal processing to computational geometry, control, optimal control and the analysis of a certain class of nonlinear PDEs. For each application, this unconventional methodology differs from traditional approaches and provides an unusual viewpoint. Each chapter is devoted to a particular application, where the methodology is thoroughly described and illustrated on appropriate examples. The exposition is kept at a level of detail that helps readers not necessarily familiar with these tools to better understand the methodology.
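
The first rung of the hierarchy is easy to see on a toy problem: minimizing f(x) = x⁴ − x² over ℝ becomes a small semidefinite program over pseudo-moments. A sketch using cvxpy (an SDP-capable solver such as SCS is assumed); for this univariate example the relaxation happens to be exact:

```python
import cvxpy as cp

# y[k] stands in for the k-th moment of a probability measure on R,
# so the objective E[x^4 - x^2] becomes linear: y[4] - y[2].
y = cp.Variable(5)
M = cp.bmat([[y[0], y[1], y[2]],
             [y[1], y[2], y[3]],
             [y[2], y[3], y[4]]])      # order-2 moment matrix
prob = cp.Problem(cp.Minimize(y[4] - y[2]),
                  [M >> 0, y[0] == 1])  # PSD pseudo-moments, mass 1
prob.solve()
print(prob.value)   # -0.25, the exact minimum of x^4 - x^2
```

Here tightness reflects the fact that x⁴ − x² + 1/4 = (x² − 1/2)² is a sum of squares; in general one climbs the hierarchy by enlarging the moment matrix.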

24 citations



Journal ArticleDOI
TL;DR: A non-convex mixed-integer non-linear programming formulation is derived for this circle arrangement or packing problem, together with theoretical insights: a relaxed objective function for circles of equal radius leads to the same circle arrangement as the original objective function.
Abstract: We present and solve a new computational geometry optimization problem in which a set of circles with given radii is to be arranged in an unspecified area such that the length of the boundary, i.e., the perimeter, of the convex hull enclosing the non-overlapping circles is minimized. The convex hull boundary is composed of line segments and circular arcs. To tackle the problem, we derive a non-convex mixed-integer non-linear programming formulation for this circle arrangement or packing problem. Moreover, we present theoretical insights: for circles of equal radius, a relaxed objective function leads to the same circle arrangement as the original objective function. If we minimize only the sum of the lengths of the line segments, for selected cases of up to 10 circles we obtain gaps smaller than $10^{-4}$ using BARON or LINDO embedded in GAMS, while for up to 75 circles we are able to approximate the optimal solution with a gap of at most $14\%$.

Posted Content
TL;DR: This paper gives an $\tilde{O}(n^{2/3})$ algorithm in constant dimensions, which is optimal up to a polylogarithmic factor by the lower bound on the quantum query complexity of element distinctness, and introduces the Quantum Strong Exponential Time Hypothesis (QSETH), based on the assumption that Grover's algorithm is optimal for CNF-SAT when the clause width is large.
Abstract: The closest pair problem is a fundamental problem of computational geometry: given a set of $n$ points in a $d$-dimensional space, find a pair with the smallest distance. A classical algorithm taught in introductory courses solves this problem in $O(n\log n)$ time in constant dimensions (i.e., when $d=O(1)$). This paper asks and answers the question of the problem's quantum time complexity. Specifically, we give an $\tilde{O}(n^{2/3})$ algorithm in constant dimensions, which is optimal up to a polylogarithmic factor by the lower bound on the quantum query complexity of element distinctness. The key to our algorithm is an efficient history-independent data structure that supports quantum interference. In $\mathrm{polylog}(n)$ dimensions, no known quantum algorithms perform better than brute force search, with a quadratic speedup provided by Grover's algorithm. To give evidence that the quadratic speedup is nearly optimal, we initiate the study of quantum fine-grained complexity and introduce the Quantum Strong Exponential Time Hypothesis (QSETH), which is based on the assumption that Grover's algorithm is optimal for CNF-SAT when the clause width is large. We show that the naive Grover approach to closest pair in higher dimensions is optimal up to an $n^{o(1)}$ factor unless QSETH is false. We also study the bichromatic closest pair problem and the orthogonal vectors problem, with broadly similar results.
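
For reference, the classical algorithm the abstract mentions; this is a compact O(n log² n) variant of the textbook divide and conquer (the strip is re-sorted at each level) for the planar case:

```python
def closest_pair(pts):
    """Textbook divide-and-conquer closest pair in the plane."""
    def dist(a, b):
        return ((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5

    def solve(p):
        n = len(p)
        if n <= 3:                         # brute force small cases
            return min((dist(a, b), a, b)
                       for i, a in enumerate(p) for b in p[i + 1:])
        mid = n // 2
        x_mid = p[mid][0]
        best = min(solve(p[:mid]), solve(p[mid:]))
        # Check the vertical strip of half-width best[0] around the split.
        strip = sorted((q for q in p if abs(q[0] - x_mid) < best[0]),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:   # at most 7 strip neighbors matter
                cand = dist(a, b)
                if cand < best[0]:
                    best = (cand, a, b)
        return best

    return solve(sorted(map(tuple, pts)))  # pre-sort by x

print(closest_pair([(0, 0), (5, 5), (1, 1), (9, 0), (1, 1.5)]))
# (0.5, (1, 1), (1, 1.5))
```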

Proceedings ArticleDOI
16 Jun 2019
TL;DR: This paper proposes a novel algorithm using embedded topological graphs and computational geometry that can extract skeletons from input binary images and compares three well-known thinning algorithms with this method.
Abstract: Skeletonization, also called thinning, is an important pre-processing step in computer vision and image processing tasks such as shape analysis and vectorization. It is a morphological process that generates a skeleton from an input image. Many thinning algorithms have been proposed, but accurate and fast algorithms are still in demand. In this paper, we propose a novel algorithm using embedded topological graphs and computational geometry that can extract skeletons from input binary images. We compare three well-known thinning algorithms with our method, with experimental results showing the effectiveness of the proposed method.
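
For baselines of the kind the paper compares against, classical morphological thinning is a single call in scikit-image; a minimal sketch (not the proposed graph-based algorithm):

```python
import numpy as np
from skimage.morphology import skeletonize

# A filled rectangle thins down to (roughly) its midline.
img = np.zeros((40, 120), dtype=bool)
img[10:30, 10:110] = True
skel = skeletonize(img)
print(skel.sum(), "skeleton pixels from", img.sum(), "foreground pixels")
```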

Proceedings ArticleDOI
17 Jun 2019
TL;DR: In this paper, the authors give an approximation algorithm for the geometric transportation problem with running time asymptotically linear in the number of points and polynomial in the logarithm of the total positive supply.
Abstract: In the geometric transportation problem, we are given a collection of points $P$ in $d$-dimensional Euclidean space, and each point is given a (positive or negative integer) supply. The goal is to find a transportation map that satisfies the supplies, while minimizing the total distance traveled. This problem has been widely studied in many fields of computer science: from computational geometry, to computer vision, graphics, and machine learning. In this work we study approximation algorithms for the geometric transportation problem. We give an algorithm which, for any fixed dimension $d$, finds a $(1+\varepsilon)$-approximate transportation map in time nearly-linear in $n$, and polynomial in $\varepsilon^{-1}$ and in the logarithm of the total positive supply. This is the first approximation scheme for the problem whose running time depends on $n$ as $n\cdot \mathrm{polylog}(n)$. Our techniques combine the generalized preconditioning framework of Sherman, which is grounded in continuous optimization, with simple geometric arguments to first reduce the problem to a minimum cost flow problem on a sparse graph, and then to design a good preconditioner for this latter problem.
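
To pin down the objective being approximated, here is an exact but non-scalable LP baseline for small instances; the helper is hypothetical and built on scipy.optimize.linprog (the paper's contribution is near-linear time, which this does not attempt):

```python
import numpy as np
from scipy.optimize import linprog

def transport_cost(points, supply):
    """Minimum-cost transportation between positive- and negative-supply points.

    Solves  min sum_{ij} c_ij f_ij  s.t. each source ships its supply, each
    sink receives its demand, f >= 0, with c_ij the Euclidean distance.
    """
    pos = [i for i, s in enumerate(supply) if s > 0]
    neg = [i for i, s in enumerate(supply) if s < 0]
    C = np.array([[np.linalg.norm(points[i] - points[j]) for j in neg]
                  for i in pos])
    m, k = C.shape
    A_eq, b_eq = [], []
    for r in range(m):                      # row sums = supplies
        row = np.zeros(m * k); row[r * k:(r + 1) * k] = 1
        A_eq.append(row); b_eq.append(supply[pos[r]])
    for c in range(k):                      # column sums = demands
        col = np.zeros(m * k); col[c::k] = 1
        A_eq.append(col); b_eq.append(-supply[neg[c]])
    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq))
    return res.fun

pts = np.array([[0., 0.], [1., 0.], [0., 1.]])
print(transport_cost(pts, [2, -1, -1]))   # ships 1 unit to each sink -> 2.0
```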

Journal ArticleDOI
TL;DR: This work extends the computation of geodesic distance by heat diffusion to also determine angular information for theGeodesic curves, exploiting the factorization of the global Laplace–Beltrami operator of the mesh and using recent localized solution techniques.
Abstract: Many applications in geometry processing require the computation of local parameterizations on a surface mesh at interactive rates. A popular approach is to compute local exponential maps, i.e. parameterizations that preserve distance and angle to the origin of the map. We extend the computation of geodesic distance by heat diffusion to also determine angular information for the geodesic curves. This approach has two important benefits compared to fast approximate as well as exact forward tracing of the distance function: First, it allows generating smoother maps, avoiding discontinuities. Second, exploiting the factorization of the global Laplace–Beltrami operator of the mesh and using recent localized solution techniques, the computation is more efficient even compared to fast approximate solutions based on Dijkstra's algorithm.
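
The underlying heat method is easiest to see on a regular grid with finite differences: diffuse a spike of heat, normalize the negated gradient to a unit field, then solve a Poisson equation. A simplified sketch of ours covering the distance computation only (the paper's contribution, the angular component and localized solves, is not reproduced here):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def neumann_1d(n):
    """1D Laplacian with zero-flux (Neumann) boundaries."""
    d = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)).tolil()
    d[0, 0] = d[n - 1, n - 1] = -1.0
    return d.tocsr()

def heat_geodesics(n, source, t=4.0, h=1.0):
    """Heat-method geodesic distance on an n x n grid (simplified sketch).

    1) Diffuse a spike of heat for a short time t.
    2) Normalize the negated heat gradient to get a unit direction field X.
    3) Solve the Poisson equation  div grad phi = div X.
    """
    d = neumann_1d(n)
    L = (sp.kron(sp.identity(n), d) + sp.kron(d, sp.identity(n))).tocsc() / h**2
    delta = np.zeros(n * n)
    delta[source[0] * n + source[1]] = 1.0
    u = spla.spsolve(sp.identity(n * n, format="csc") - t * L, delta).reshape(n, n)
    gy, gx = np.gradient(u, h)
    norm = np.maximum(np.hypot(gx, gy), 1e-12)
    gx, gy = -gx / norm, -gy / norm                      # unit field X
    div = np.gradient(gx, h, axis=1) + np.gradient(gy, h, axis=0)
    phi = spla.lsqr(L, div.ravel())[0].reshape(n, n)     # L is singular: lsqr
    return phi - phi[source]

d = heat_geodesics(64, (32, 32))
print(d[32, 52])   # approximately 20, the distance to the source
```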

Journal ArticleDOI
TL;DR: This work presents a method for simulating neural populations based on two-dimensional (2D) point spiking neuron models that defines the state of the population in terms of a density function over the neural state space, and argues that the study of 2D systems subject to noise is an important complement to the study of 1D systems.
Abstract: The importance of a mesoscopic description level of the brain has now been well established. Rate-based models are widely used, but have limitations. Recently, several extremely efficient population-level methods have been proposed that go beyond the characterization of a population in terms of a single variable. Here, we present a method for simulating neural populations based on two-dimensional (2D) point spiking neuron models that defines the state of the population in terms of a density function over the neural state space. Our method differs in that we do not make the diffusion approximation, nor do we reduce the state space to a single dimension (1D). We do not hard-code the neural model, but read in a grid describing its state space in the relevant simulation region. Novel models can be studied without even recompiling the code. The method is highly modular: variations of the deterministic neural dynamics and the stochastic process can be investigated independently. Currently, there is a trend to reduce complex high-dimensional neuron models to 2D ones, as they offer a rich dynamical repertoire that is not available in 1D, such as limit cycles. We will demonstrate that our method is ideally suited to investigate noise in such systems, replicating results obtained in the diffusion limit and generalizing them to a regime of large jumps. The joint probability density function is much more informative than 1D marginals, and we will argue that the study of 2D systems subject to noise is an important complement to the study of 1D systems.

Journal ArticleDOI
TL;DR: This work introduces a new formulation of the medial axis transform which is naturally robust in the presence of outliers, perturbations and/or noise along the boundary of objects and can be formulated as a least squares relaxation where the transform is obtained by minimizing a continuous optimization problem.
Abstract: The medial axis transform has applications in numerous fields including visualization, computer graphics, and computer vision. Unfortunately, traditional medial axis transformations are usually brittle in the presence of outliers, perturbations and/or noise along the boundary of objects. To overcome this limitation, we introduce a new formulation of the medial axis transform which is naturally robust in the presence of these artifacts. Unlike previous work which has approached the medial axis from a computational geometry angle, we consider it from a numerical optimization perspective. In this work, we follow the definition of the medial axis transform as "the set of maximally inscribed spheres". We show how this definition can be formulated as a least squares relaxation where the transform is obtained by minimizing a continuous optimization problem. The proposed approach is inherently parallelizable, performing independent optimization of each sphere using Gauss-Newton, and its least-squares form allows it to be significantly more robust than traditional computational geometry approaches. Extensive experiments on 2D and 3D objects demonstrate that our method provides superior results to the state of the art on both synthetic and real data.
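
A much-simplified illustration of the optimization viewpoint (our own toy, not the authors' Gauss-Newton energy): locally maximizing the radius of a ball inscribed against boundary samples drives its center onto the medial axis.

```python
import numpy as np
from scipy.optimize import minimize

def inscribed_ball(boundary, x0):
    """Find a locally maximal inscribed ball by continuous optimization.

    The radius of the largest ball centered at c that touches the sampled
    boundary is min_i ||p_i - c||; maximizing it (here with Nelder-Mead)
    moves c to a local maximum of the distance field, a medial point.
    """
    def neg_radius(c):
        return -np.linalg.norm(boundary - c, axis=1).min()

    c = minimize(neg_radius, x0, method="Nelder-Mead").x
    return c, -neg_radius(c)

# Boundary of the unit square, sampled; the maximal ball centers at (0.5, 0.5).
t = np.linspace(0, 1, 100)
sq = np.vstack([np.c_[t, 0 * t], np.c_[t, 0 * t + 1],
                np.c_[0 * t, t], np.c_[0 * t + 1, t]])
print(inscribed_ball(sq, x0=[0.3, 0.4]))   # center ~(0.5, 0.5), radius ~0.5
```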

Journal ArticleDOI
TL;DR: An algorithm is presented that solves the 3SUM problem for $n$ real numbers in $O((n^2/\log^2 n)(\log\log n)^{O(1)})$ time, improving previous solutions by about a logarithmic factor.
Abstract: This article presents an algorithm that solves the 3SUM problem for $n$ real numbers in $O((n^2/\log^2 n)(\log\log n)^{O(1)})$ time, improving previous solutions by about a logarithmic factor. Our framework for shaving off two logarithmic factors can be applied to other problems, such as (median,+)-convolution/matrix multiplication and algebraic generalizations of 3SUM. This work also obtains the first subquadratic results on some 3SUM-hard problems in computational geometry, for example, deciding whether (the interiors of) a constant number of simple polygons have a common intersection.
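
For context, the standard quadratic baseline that subquadratic 3SUM algorithms improve upon: sort once, then sweep each choice of the first element with two pointers.

```python
def three_sum(nums):
    """Return a triple (a, b, c) with a + b + c == 0, or None.

    Classic O(n^2) two-pointer sweep over the sorted array.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return a[i], a[lo], a[hi]
            if s < 0:
                lo += 1          # need a larger sum
            else:
                hi -= 1          # need a smaller sum
    return None

print(three_sum([7.5, -2.5, 1.0, -5.0, 4.0]))   # (-5.0, -2.5, 7.5)
```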

Journal ArticleDOI
22 Jan 2019-Chaos
TL;DR: This work introduces the Ensemble-based Topological Entropy Calculation, or E-tec, a method to derive a lower bound on the topological entropy of two-dimensional systems by considering the evolution of a "rubber band" wrapped around the data points and evolving with their trajectories.
Abstract: Topological entropy measures the number of distinguishable orbits in a dynamical system, thereby quantifying the complexity of chaotic dynamics. One approach to computing topological entropy in a two-dimensional space is to analyze the collective motion of an ensemble of system trajectories, taking into account how trajectories "braid" around one another. In this spirit, we introduce the Ensemble-based Topological Entropy Calculation, or E-tec, a method to derive a lower bound on the topological entropy of two-dimensional systems by considering the evolution of a "rubber band" (piecewise linear curve) wrapped around the data points and evolving with their trajectories. The topological entropy is bounded below by the exponential growth rate of this band. We use tools from computational geometry to track the evolution of the rubber band as data points strike and deform it. Because we maintain information about the configuration of trajectories with respect to one another, updating the band configuration is performed locally, which allows E-tec to be more computationally efficient than some competing methods. In this work, we validate and illustrate many features of E-tec on a chaotic lid-driven cavity flow. In particular, we demonstrate convergence of E-tec's approximation with respect to both the number of trajectories (ensemble size) and the duration of trajectories in time.

Journal ArticleDOI
TL;DR: A geometry relation judgment and contact searching algorithm based on Contact Theory is reported; it is compact and applicable to discontinuous computation, such as robotic control, rock mass stability, dam stability, etc.
Abstract: The geometry relation and the contact point-pair detection between two three-dimensional (3D) objects with arbitrary shapes are essential problems in discontinuous computation and computational geometry. This paper reports a geometry relation judgment and contact searching algorithm based on Contact Theory. A contact cover search algorithm is proposed to find all possible contact covers between two blocks. Two blocks can come into contact only on these covers. Each contact cover defines a possible contact point-pair between two blocks. The data structure and flow chart are provided, along with detailed examples. Contact problems involving concave blocks or parallel planes, considered very difficult in the past, are solved by this algorithm. The proposed algorithm is compact and applicable to discontinuous computation, such as robotic control, rock mass stability, dam stability, etc. A 3D cutting and block searching algorithm is also proposed in this study and used to search the outer boundary of the 3D entrance block when 3D concave blocks are encountered. The 3D cutting and block searching algorithm can also be used to form the block system for jointed rock.

Journal ArticleDOI
16 Jan 2019
TL;DR: CG_Hadoop is introduced: a suite of scalable and efficient MapReduce algorithms for various fundamental computational geometry operations, namely polygon union, Voronoi diagram, skyline, convex hull, farthest pair, and closest pair, which present a set of key components for other geometric algorithms.
Abstract: Hadoop, employing the MapReduce programming paradigm, has been widely accepted as the standard framework for analyzing big data in distributed environments. Unfortunately, this rich framework has not been exploited for processing large-scale computational geometry operations. This paper introduces CG_Hadoop, a suite of scalable and efficient MapReduce algorithms for various fundamental computational geometry operations, namely polygon union, Voronoi diagram, skyline, convex hull, farthest pair, and closest pair, which present a set of key components for other geometric algorithms. For each computational geometry operation, CG_Hadoop has two versions, one for the Apache Hadoop system and one for the SpatialHadoop system, a Hadoop-based system that is better suited for spatial operations. The proposed algorithms form the nucleus of a comprehensive MapReduce library of computational geometry operations. Extensive experimental results on a cluster of 25 machines over datasets of up to 3.8B records show that CG_Hadoop achieves up to 14x and 115x better performance than traditional algorithms when using Hadoop and SpatialHadoop, respectively.
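
The pattern behind the convex hull algorithm in such systems is easy to simulate single-process: mappers compute local hulls (discarding interior points), and the reducer computes the final hull over only the surviving vertices. A sketch with scipy standing in for the cluster:

```python
import numpy as np
from scipy.spatial import ConvexHull

def mapreduce_hull(points, n_partitions=8):
    """Convex hull in the map/reduce pattern used by CG_Hadoop-style systems.

    Map: each partition computes its local hull, discarding interior points.
    Reduce: the global hull of the union of local hull vertices equals the
    hull of the full set, so the final step sees far fewer points.
    """
    parts = np.array_split(points, n_partitions)
    local = [p[ConvexHull(p).vertices] for p in parts]   # map step
    merged = np.vstack(local)                            # shuffle
    return merged[ConvexHull(merged).vertices]           # reduce step

pts = np.random.default_rng(3).normal(size=(100000, 2))
print(len(mapreduce_hull(pts)), "hull vertices")
```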

Proceedings ArticleDOI
09 Sep 2019
TL;DR: In this paper, the authors provide a systematic overview of curve simplification problems under global distance measures that bound the distance between a polygonal curve P and its simplification P'.
Abstract: Due to its many applications, curve simplification is a long-studied problem in computational geometry and adjacent disciplines, such as graphics, geographical information science, etc. Given a polygonal curve P with n vertices, the goal is to find another polygonal curve P' with a smaller number of vertices such that P' is sufficiently similar to P. Quality guarantees of a simplification are usually given in a local sense, bounding the distance between a shortcut and its corresponding section of the curve. In this work we aim to provide a systematic overview of curve simplification problems under global distance measures that bound the distance between P and P'. We consider six different curve distance measures: three variants of the Hausdorff distance and three variants of the Fréchet distance. We also study different restrictions on the choice of vertices for P'. We provide polynomial-time algorithms for some variants of the global curve simplification problem, and show NP-hardness for other variants. Through this systematic study we observe, for the first time, some surprising patterns, and suggest directions for future research in this important area.
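
The classical local-criterion baseline contrasts nicely with the global measures studied here: in the Imai-Iri approach, a shortcut is allowed when the vertices it skips stay within ε of it, and a shortest path in the shortcut graph yields the fewest vertices. An O(n³) sketch of ours:

```python
import numpy as np

def simplify(P, eps):
    """Min-vertex simplification under a local error criterion (Imai-Iri style).

    Shortcut (i, j) is allowed when every vertex strictly between i and j
    lies within eps of segment P[i]P[j]; BFS over shortcuts then minimizes
    the number of kept vertices. Baseline only, not a global guarantee.
    """
    P = np.asarray(P, dtype=float)
    n = len(P)

    def ok(i, j):
        a, b = P[i], P[j]
        ab = b - a
        denom = ab @ ab
        for k in range(i + 1, j):
            t = 0.0 if denom == 0 else np.clip((P[k] - a) @ ab / denom, 0, 1)
            if np.linalg.norm(P[k] - (a + t * ab)) > eps:
                return False
        return True

    prev, frontier = {0: None}, [0]
    while frontier and (n - 1) not in prev:      # BFS = fewest hops
        nxt = []
        for i in frontier:
            for j in range(i + 1, n):
                if j not in prev and ok(i, j):
                    prev[j] = i
                    nxt.append(j)
        frontier = nxt
    path, j = [], n - 1
    while j is not None:
        path.append(j)
        j = prev[j]
    return P[path[::-1]]

zigzag = [(0, 0), (1, 0.05), (2, -0.05), (3, 0), (4, 1)]
print(simplify(zigzag, eps=0.1))   # keeps only (0,0), (3,0), (4,1)
```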

Journal ArticleDOI
TL;DR: A novel three-phase solution method, consisting of grid-based split, cover optimization and strip selection, is proposed for the problem of scheduling Earth observation satellites to observe polygon requests; it outperforms the other solution methods across all tested instances and parameter settings.

Journal ArticleDOI
TL;DR: In this article, the problem of augmenting an n-vertex graph embedded in a metric space, by inserting one additional edge in order to minimize the diameter of the resulting graph is considered.
Abstract: We consider the problem of augmenting an n-vertex graph embedded in a metric space, by inserting one additional edge in order to minimize the diameter of the resulting graph. We present exact algor...
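
The brute-force baseline makes the problem concrete: try every non-edge, recompute all-pairs distances, keep the best. A sketch with networkx (edge lengths simplified to 1 rather than the metric distance between endpoints; the paper's exact algorithms avoid this blow-up):

```python
import itertools
import networkx as nx

def best_shortcut(G):
    """Diameter-minimizing single-edge insertion by exhaustive search.

    O(n^2) candidate edges, each followed by an all-pairs shortest-path
    computation -- purely a baseline for the problem statement.
    """
    best = (float("inf"), None)
    for u, v in itertools.combinations(G.nodes, 2):
        if G.has_edge(u, v):
            continue
        H = G.copy()
        H.add_edge(u, v, weight=1)   # metric edge length; 1 for brevity
        lengths = dict(nx.all_pairs_dijkstra_path_length(H))
        diam = max(max(row.values()) for row in lengths.values())
        best = min(best, (diam, (u, v)))
    return best

G = nx.path_graph(6)
nx.set_edge_attributes(G, 1, "weight")
print(best_shortcut(G))   # the diameter drops from 5 to 3
```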

Proceedings ArticleDOI
01 Jun 2019
TL;DR: An improved algorithm is presented that computes (roughly) a $1/d^2$-centerpoint in $\tilde{O}(d^7)$ time, the first progress on this well-known problem in over twenty years.
Abstract: We revisit an algorithm of Clarkson et al. [K. L. Clarkson et al., 1996] that computes (roughly) a $1/(4d^2)$-centerpoint in $\tilde{O}(d^9)$ time for a point set in $\mathbb{R}^d$, where $\tilde{O}$ hides polylogarithmic terms. We present an improved algorithm that computes (roughly) a $1/d^2$-centerpoint with running time $\tilde{O}(d^7)$. While the improvements are (arguably) mild, it is the first progress on this well-known problem in over twenty years. The new algorithm is simpler, and the running time bound follows by a simple random walk argument, which we believe to be of independent interest. We also present several new applications of the improved centerpoint algorithm.
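
The engine of the Clarkson et al. scheme is the Radon point: any d+2 points admit a partition whose convex hulls intersect, and feeding Radon points back into a sample pool drives points toward high Tukey depth. A simplified practical sketch of ours, with no claim to the paper's guarantees:

```python
import numpy as np

def radon_point(pts):
    """Radon point of d+2 points in R^d: a point common to the convex hulls
    of the two sides of a Radon partition."""
    d2 = len(pts)
    # Affine dependence: sum a_i p_i = 0 and sum a_i = 0 with a != 0.
    A = np.vstack([pts.T, np.ones(d2)])
    a = np.linalg.svd(A)[2][-1]            # null-space vector
    pos = a > 0
    w = a[pos] / a[pos].sum()
    return w @ pts[pos]

def approx_centerpoint(points, n_rounds=2000, seed=0):
    """Iterated-Radon approximate centerpoint (simplified variant)."""
    rng = np.random.default_rng(seed)
    d = points.shape[1]
    pool = points[rng.choice(len(points), size=4 * (d + 2))].copy()
    for _ in range(n_rounds):
        idx = rng.choice(len(pool), size=d + 2, replace=False)
        pool[idx[rng.integers(d + 2)]] = radon_point(pool[idx])
    return pool.mean(axis=0)

pts = np.random.default_rng(1).normal(size=(5000, 3))
print(approx_centerpoint(pts))   # close to the origin for a symmetric cloud
```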

Journal ArticleDOI
TL;DR: A new complexity analysis is proposed for the double hashing sort algorithm, based on the relation between the size of the input and the domain of the input elements, revealing that the previous complexity analysis was not accurate.

Journal ArticleDOI
TL;DR: A dot matrix method (DMM) using the principles of computational geometry to place aggregates into matrices for the construction of mesolevel concrete models efficiently and rapidly is developed; several examples show that DMM is a robust and valid method to construct mesostructure concrete models.
Abstract: We develop a dot matrix method (DMM) using the principles of computational geometry to place aggregates into matrices for the construction of mesolevel concrete models efficiently and rapidly. The basic idea of the approach is to transform overlap detection between polygons (or polyhedrons) into checking for any intersection between the point sets of a trial placement aggregate and the already placed ones in mortar. Through arithmetic operations on integer point sets, the underlying algorithm of the dot matrix method is highly efficient. Our placement algorithm holds several advantages over conventional placement approaches. First, it is suitable for aggregate particles of arbitrary shape. Second, it needs only two point sets to examine whether a trial placement aggregate overlaps the already placed ones. Third, it accurately places aggregates according to aggregate grading curves, in order of decreasing size, which further reduces aggregate placement time. The present method is independent of the size of aggregate particles. Combined with 3D laser scanning technology, the present method can also be used to create mesostructure concrete models conveniently and flexibly. Several examples show that DMM is a robust and valid method to construct mesostructure concrete models.

Journal ArticleDOI
TL;DR: The proposed projection-screen modelling method and the corresponding automatic geometric correction scheme effectively increase the utilisation ratio of the original projection area of each projector and improve the calibration accuracy of multi-projector systems with continuous curved surfaces.
Abstract: A large-scale multi-projector display system offers high-resolution, high-brightness and immersive visualisation for a realistic experience to end users. It has been demonstrated to be effective in tackling the conflict between the increasing demand for super-resolution display and the resolution limitation of a single display system. However, there is still no standardised calibration method for curved-surface projection screens. In this study, we propose a novel approach for calibrating multi-projector display systems with curved surfaces. First, based on a detailed analysis of arbitrarily curved surfaces, we present a three-dimensional reconstruction algorithm based on Bézier surface models. Then, to fully utilise the projection area of each projector, we propose a novel curved-surface stitching algorithm to achieve geometric seamlessness of multi-projector display systems. Experimental results show that by constructing local Bézier models for the curved screen, the proposed method performs better than traditional approaches, i.e. the new method achieves geometric calibration with higher accuracy. The proposed projection-screen modelling method and the corresponding automatic geometric correction scheme effectively increase the utilisation ratio of the original projection area of each projector and improve the calibration accuracy of multi-projector systems with continuous curved surfaces.
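
The screen model itself is standard: a tensor-product Bézier patch evaluated with de Casteljau's algorithm. A generic sketch of that building block, not the paper's calibration pipeline:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t by repeated interpolation."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_surface(ctrl_grid, u, v):
    """Tensor-product Bezier patch: de Casteljau along rows, then columns.

    `ctrl_grid` has shape (m, n, 3) -- an m x n grid of 3D control points,
    the kind of model a curved projection screen can be fitted with.
    """
    rows = np.array([de_casteljau(row, u) for row in ctrl_grid])
    return de_casteljau(rows, v)

# A 3x3 patch bulging toward the viewer in z at its center control point.
g = np.array([[[x, y, 1.0 if (x, y) == (1, 1) else 0.0]
               for x in range(3)] for y in range(3)], dtype=float)
print(bezier_surface(g, 0.5, 0.5))   # [1.0, 1.0, 0.25]
```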

Journal ArticleDOI
27 Feb 2019-Sensors
TL;DR: This work presents a new approach, expanding upon current techniques and methods, to locate the 2D position of a signal source sent by an emitter device; the triangulation is performed by geometric models that exploit elements of pole-polar geometry.
Abstract: The 2D point location problem has applications in several areas, such as geographic information systems, navigation systems, motion planning, mapping, military strategy, and location and movement tracking. We aim to present a new approach that expands upon current techniques and methods to locate the 2D position of a signal source sent by an emitter device. This new approach is based only on the geometric relationship between an emitter device and a system composed of $m \ge 2$ signal-receiving devices. Current approaches to locating an emitter can be deterministic, statistical or machine-learning methods. We propose to perform this triangulation with geometric models that exploit elements of pole-polar geometry. For this purpose, we present five geometric models to solve the point location problem: (1) based on the centroid of points of pole-polar geometry, PPC; (2) based on the convex hull region among pole-points, CHC; (3) based on the centroid of points obtained by polar-line intersections, PLI; (4) based on the centroid of points obtained by tangent-line intersections, TLI; (5) based on the centroid of points obtained by tangent-line intersections with minimal angles, MAI. The first one has computational cost $O(n)$, whereas the others have computational cost $O(n \log n)$, where $n$ is the number of points of interest.
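
As a point of comparison only (classical least-squares multilateration, plainly not the paper's pole-polar models): given receiver positions and measured ranges, the emitter position can be estimated with scipy:

```python
import numpy as np
from scipy.optimize import least_squares

def locate(receivers, ranges, x0=(0.0, 0.0)):
    """Least-squares multilateration: find the 2D point whose distances to
    the receivers best match the measured ranges."""
    receivers = np.asarray(receivers, dtype=float)

    def residual(x):
        return np.linalg.norm(receivers - x, axis=1) - ranges

    return least_squares(residual, x0).x

rx = [(0, 0), (10, 0), (0, 10)]
true = np.array([3.0, 4.0])
rng = np.random.default_rng(4)
ranges = np.linalg.norm(np.array(rx) - true, axis=1) + rng.normal(0, 0.05, 3)
print(locate(rx, ranges))   # ~(3, 4)
```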