
Showing papers on "Polygon published in 2015"


Journal ArticleDOI
TL;DR: A self-contained and versatile tension distribution algorithm is presented that is efficient and real-time compatible even in the worst case, and the worst-case maximum number of iterations of this algorithm is established.
Abstract: Redundancy resolution of redundantly actuated cable-driven parallel robots (CDPRs) requires the computation of feasible and continuous cable tension distributions along a trajectory. This paper focuses on n-DOF CDPRs driven by n + 2 cables, since, for n = 6, these redundantly actuated CDPRs are relevant in many applications. The set of feasible cable tensions of n-DOF (n + 2)-cable CDPRs is a 2-D convex polygon. An algorithm that determines the vertices of this polygon in a clockwise or counterclockwise order is first introduced. This algorithm is efficient and can deal with infeasibility. It is then pointed out that straightforward modifications of this algorithm allow the determination of various (optimal) cable tension distributions. A self-contained and versatile tension distribution algorithm is thereby obtained. Moreover, the worst-case maximum number of iterations of this algorithm is established. Based on this result, its computational cost is analyzed in detail, showing that the algorithm is efficient and real-time compatible even in the worst case. Finally, experiments on two six-degree-of-freedom eight-cable CDPR prototypes are reported.
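As an illustration of the underlying geometric step only (not the authors' ordered-vertex algorithm), the sketch below enumerates the vertices of a 2-D convex polygon cut out by linear inequalities, the kind of feasible-tension polygon described above; the function name and the example constraints are illustrative.

```python
# Hedged sketch: enumerate the vertices of a 2-D convex polygon given by linear
# inequalities A @ lam <= b (e.g. cable tension limits projected onto the 2-D
# null space of the wrench matrix).  Brute-force O(m^3), unlike the paper's
# ordered vertex construction.
import itertools
import numpy as np

def feasible_polygon_vertices(A, b, tol=1e-9):
    A, b = np.asarray(A, float), np.asarray(b, float)
    verts = []
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < tol:        # parallel constraints
            continue
        p = np.linalg.solve(M, b[[i, j]])      # intersection of two constraint lines
        if np.all(A @ p <= b + tol):           # keep only feasible corners
            verts.append(p)
    if not verts:
        return np.empty((0, 2))                # infeasible pose
    verts = np.unique(np.round(verts, 9), axis=0)
    c = verts.mean(axis=0)                     # order counterclockwise around centroid
    ang = np.arctan2(verts[:, 1] - c[1], verts[:, 0] - c[0])
    return verts[np.argsort(ang)]

# Example: a unit box of feasible tensions intersected with lam_1 + lam_2 <= 1.5.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [1, 1]], float)
b = np.array([1, 0, 1, 0, 1.5])
print(feasible_polygon_vertices(A, b))
```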

124 citations


Journal ArticleDOI
TL;DR: This method has been integrated into a CNC system with an open architecture to implement on-line linear five-axis tool path smoothing and its high computational efficiency allows it to be implemented in real-time applications.
Abstract: The widespread linear five-axis tool path (G01 blocks) is usually described by two trajectories. One trajectory describes the position of the tool tip point, and the other one describes the position of the second point on the tool axis. The inherent disadvantages of the linear tool path are tangential and curvature discontinuities at the corners of the five-axis tool path, which result in feedrate fluctuation and decrease due to the kinematic constraints of the machine tools. In this paper, by using a pair of quintic PH curves, a smoothing method is proposed to round the corners. There are two steps involved in our method. Firstly, according to the accuracy requirements of the tool tip contour and tool orientation tolerances, the corner is rounded with a pair of PH curves directly. Then, the control polygon lengths of the PH curves are simply adjusted to guarantee the continuous variation of the tool orientation at the junctions between the transition curves and the remaining linear segments. Because the PH curves for corner rounding can be constructed without any iteration, and the two rounded trajectories are synchronized linearly during interpolation, this smoothing method can be applied with high efficiency. Its high computational efficiency allows it to be implemented in real-time applications. This method has been integrated into a CNC system with an open architecture to implement on-line linear five-axis tool path smoothing. Simulations and experiments validate its practicability and reliability.
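To make the corner-rounding idea concrete, here is a heavily simplified sketch that blends a G01 corner with a cubic Bézier rather than the paper's pair of quintic PH curves; the blend distance `d` is an illustrative parameter that would normally be derived from the corner tolerance and the adjacent segment lengths.

```python
# Hedged sketch: round a G01 corner P0 -> P1 -> P2 with a cubic Bezier blend.
# A simplified stand-in for the paper's quintic PH construction; `d` is an
# assumed blend distance.
import numpy as np

def round_corner(P0, P1, P2, d=2.0, samples=20):
    P0, P1, P2 = (np.asarray(p, float) for p in (P0, P1, P2))
    u = (P1 - P0) / np.linalg.norm(P1 - P0)   # unit direction into the corner
    v = (P2 - P1) / np.linalg.norm(P2 - P1)   # unit direction out of the corner
    A = P1 - d * u                            # blend starts before the corner...
    B = P1 + d * v                            # ...and ends after it
    # Control polygon A, A + d/2*u, B - d/2*v, B keeps the tangent directions
    # (G1 continuity) at the junctions A and B with the linear segments.
    ctrl = [A, A + 0.5 * d * u, B - 0.5 * d * v, B]
    t = np.linspace(0.0, 1.0, samples)[:, None]
    curve = ((1 - t) ** 3 * ctrl[0] + 3 * (1 - t) ** 2 * t * ctrl[1]
             + 3 * (1 - t) * t ** 2 * ctrl[2] + t ** 3 * ctrl[3])
    return A, B, curve                        # linear moves end at A, resume at B

A, B, arc = round_corner([0, 0], [10, 0], [10, 10])
print(arc[0], arc[-1])                        # starts at A, ends at B
```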

89 citations


Patent
14 Aug 2015
TL;DR: In this article, the authors described an improved scanning ladar transmission system that uses a spinning polygon mirror for targeting range points according to a dynamic scan pattern.
Abstract: Various embodiments are disclosed for improved scanning ladar transmission, including but not limited to an example embodiment where the scanning ladar transmission system includes a spinning polygon mirror for targeting range points according to a dynamic scan pattern.

83 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a multiscale zonation approach to characterize the spatial variability of Arctic polygonal ground geomorphology and assess the relative controls of these elements on land surface and subsurface properties and carbon fluxes.
Abstract: We develop a multiscale zonation approach to characterize the spatial variability of Arctic polygonal ground geomorphology and to assess the relative controls of these elements on land surface and subsurface properties and carbon fluxes. Working within an ice wedge polygonal region near Barrow, Alaska, we consider two scales of zonation: polygon features (troughs, centers, and rims of polygons) that are nested within different polygon types (high, flat, and low centered). In this study, we first delineated polygons using a digital elevation map and clustered the polygons into four types along two transects, using geophysical and kite-based landscape-imaging data sets. We extrapolated those data-defined polygon types to all the polygons over the study site, using the polygon statistics extracted from the digital elevation map. Based on the point measurements, we characterized the distribution of vegetation, hydrological, thermal, and geochemical properties, as well as carbon fluxes, all as a function of polygon types and polygon features. Results show that nested polygon geomorphic zonation—polygon types and polygon features—can be used to represent distinct distributions of carbon fluxes and associated properties, as well as covariability among those properties. Importantly, the results indicate that polygon types have more power to explain the variations in those properties than polygon features. The approach is expected to be useful for improved system understanding, site characterization, and parameterization of numerical models aimed at predicting ecosystem feedbacks to the climate.

79 citations


Journal ArticleDOI
TL;DR: It is proved that two natural generalizations of the triangulation flip-distance problem are NP-complete: computing the minimum number of flips between two triangulations of a polygon with holes, and of a set of points in the plane.
Abstract: Given two triangulations of a convex polygon, computing the minimum number of flips required to transform one to the other is a long-standing open problem. It is not known whether the problem is in P or NP-complete. We prove that two natural generalizations of the problem are NP-complete, namely computing the minimum number of flips between two triangulations of (1) a polygon with holes; (2) a set of points in the plane.
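For readers unfamiliar with the operation in question, the following minimal sketch shows the elementary "flip" the distance is counted in: replacing the shared diagonal of two adjacent triangles by the opposite diagonal. The set-of-triangles representation is an illustrative choice, and the flip is only geometrically valid when the two triangles form a convex quadrilateral.

```python
# Hedged sketch of a single flip on a triangulation stored as a set of
# frozensets of vertex indices.  Geometrically valid only when the two
# triangles sharing `edge` form a convex quadrilateral.
def flip(triangulation, edge):
    a, b = edge
    tris = [t for t in triangulation if a in t and b in t]
    if len(tris) != 2:
        raise ValueError("edge is on the boundary or not in the triangulation")
    c = next(iter(tris[0] - {a, b}))          # opposite vertex in the 1st triangle
    d = next(iter(tris[1] - {a, b}))          # opposite vertex in the 2nd triangle
    new = set(triangulation) - set(map(frozenset, tris))
    new |= {frozenset({c, d, a}), frozenset({c, d, b})}
    return new

# The two triangulations of a convex quadrilateral 0-1-2-3 differ by one flip.
T1 = {frozenset({0, 1, 2}), frozenset({0, 2, 3})}
print(flip(T1, (0, 2)) == {frozenset({1, 3, 0}), frozenset({1, 3, 2})})  # True
```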

62 citations


Journal ArticleDOI
TL;DR: In this article, a contact between two polyhedra is decomposed into a set of contacts between individual polygonal facets, and each individual contact force is obtained by integrating a linear pressure over the area of their intersection.

59 citations


Journal ArticleDOI
TL;DR: In this paper, it was shown that bounded penetrable obstacles with corners or edges scatter every incident wave nontrivially, provided the function of refractive index is real-analytic.
Abstract: Consider time-harmonic acoustic scattering problems governed by the Helmholtz equation in two and three dimensions. We prove that bounded penetrable obstacles with corners or edges scatter every incident wave nontrivially, provided the function of refractive index is real-analytic. Moreover, if such a penetrable obstacle is a convex polyhedron or polygon, then its shape can be uniquely determined by the far-field pattern over all observation directions incited by a single incident wave. Our arguments are elementary and rely on the expansion of solutions to the Helmholtz equation.
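As general background for the statement above (standard notation, not reproduced from the paper), the problem concerns the total field u = u^i + u^s generated by an incident plane wave u^i(x) = e^{ik x·d}:

```latex
% Standard time-harmonic transmission scattering setup (background, not quoted
% from the paper): penetrable obstacle D with refractive index n, n = 1 outside D.
\Delta u + k^{2} n(x)\, u = 0 \quad \text{in } \mathbb{R}^{m},\ m = 2,3,
\qquad n \equiv 1 \ \text{in } \mathbb{R}^{m}\setminus\overline{D}.
% The scattered field is radiating and defines the far-field pattern u^\infty:
u^{s}(x) = \frac{e^{ik|x|}}{|x|^{(m-1)/2}}
\Bigl( u^{\infty}(\hat{x}) + O\!\bigl(|x|^{-1}\bigr) \Bigr),
\qquad \hat{x} = \frac{x}{|x|}, \quad |x| \to \infty .
```

The uniqueness result then says that, for a convex polygonal or polyhedral D, the far-field pattern u^∞ over all observation directions, generated by a single incident wave, already determines the shape of D.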

56 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed MapReduce algorithm with a spatial grid index consumes less time than its counterpart without a spatial index, and that the speed-up ratio increases as more nodes of the Hadoop framework are used.
Abstract: As one of the important operations in Geographic Information System (GIS) spatial analysis, polygon overlay processing is a time-consuming task in many big data cases. In this paper, a specially designed MapReduce algorithm with a grid index is proposed to decrease the running time. The proposed algorithm reduces the number of calls to the intersection computation with the aid of the grid index. The experiments are carried out on a Hadoop-based cloud framework that we built ourselves. Experimental results show that our algorithm with a spatial grid index consumes less time than its peer without a spatial index. Moreover, the speed-up ratio of the proposed algorithm increases when more nodes of the Hadoop framework are used, although this upward trend slows down as the number of nodes grows.
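Below is a single-machine sketch of the grid-index filtering idea: polygons (represented here by their bounding boxes) are hashed into grid cells, and only pairs from different layers that share a cell become candidates for the expensive intersection computation. The MapReduce distribution over cells and the exact polygon clipping are omitted, and the helper names and cell size are illustrative.

```python
# Hedged sketch of the grid-index candidate filter used before the costly
# polygon-polygon intersection step.  Not the paper's MapReduce implementation.
from collections import defaultdict

def cells_of(bbox, cell):
    xmin, ymin, xmax, ymax = bbox
    for i in range(int(xmin // cell), int(xmax // cell) + 1):
        for j in range(int(ymin // cell), int(ymax // cell) + 1):
            yield (i, j)

def candidate_pairs(layer_a, layer_b, cell=10.0):
    """layer_a, layer_b: dicts {polygon_id: (xmin, ymin, xmax, ymax)}."""
    grid = defaultdict(lambda: ([], []))
    for pid, bbox in layer_a.items():
        for c in cells_of(bbox, cell):
            grid[c][0].append(pid)
    for pid, bbox in layer_b.items():
        for c in cells_of(bbox, cell):
            grid[c][1].append(pid)
    pairs = set()
    for ids_a, ids_b in grid.values():        # one "reduce" per grid cell
        pairs.update((a, b) for a in ids_a for b in ids_b)
    return pairs                              # feed these to the real clipper

A = {"a1": (0, 0, 12, 8), "a2": (40, 40, 55, 50)}
B = {"b1": (5, 5, 20, 20), "b2": (100, 100, 110, 110)}
print(candidate_pairs(A, B))                  # {('a1', 'b1')}
```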

50 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the first comprehensive spatial examination of active layer biogeochemistry that extends across high- and low-centered ice wedge polygons, their features, and with depth.
Abstract: Polygonal ground is a signature characteristic of Arctic lowlands, and carbon release from permafrost thaw can alter feedbacks to Arctic ecosystems and climate. This study describes the first comprehensive spatial examination of active layer biogeochemistry that extends across high- and low-centered, ice wedge polygons, their features, and with depth. Water chemistry measurements of 54 analytes were made on surface and active layer pore waters collected near Barrow, Alaska, USA. Significant differences were observed between high- and low-centered polygons suggesting that polygon types may be useful for landscape-scale geochemical classification. However, differences were found for polygon features (centers and troughs) for analytes that were not significant for polygon type, suggesting that finer-scale features affect biogeochemistry differently from polygon types. Depth variations were also significant, demonstrating important multidimensional aspects of polygonal ground biogeochemistry. These results have major implications for understanding how polygonal ground ecosystems function, and how they may respond to future change.

50 citations


14 Dec 2015
TL;DR: In this article, 155 participants responded to 15 warning polygons and rated the likelihood of a tornado striking their location and the likelihood that they would take nine different response actions ranging from continuing normal activities to getting in a car and driving somewhere safer.
Abstract: To provide people with more specific information about tornado threats, the National Weather Service has replaced its county-wide warnings with smaller warning polygons that more specifically indicate the risk area. However, tornado warning polygons do not have a standardized definition regarding tornado strike probabilities (p_s), so it is unclear how warning recipients interpret them. To better understand this issue, 155 participants responded to 15 hypothetical warning polygons. After viewing each polygon, they rated the likelihood of a tornado striking their location and the likelihood that they would take nine different response actions ranging from continuing normal activities to getting in a car and driving somewhere safer. The results showed participants inferred that p_s was highest at the polygon’s centroid, lower just inside the edges of the polygon, still lower (but not zero) just outside the edges of the polygon, and lowest in locations beyond that. Moreover, higher p_s values were associated with lower expectations of continuing normal activities and higher expectations of seeking information from social sources (but not environmental cues) and higher expectations of seeking shelter (but not evacuating in their cars). These results indicate that most people make some errors in their p_s judgments but are likely to respond appropriately to the p_s they infer from the warning polygons. Overall, the findings from this study and other research can help meteorologists to better understand how people interpret the uncertainty associated with warning polygons and, thus, improve tornado warning systems.

45 citations


Patent
29 Jul 2015
TL;DR: In this paper, a flight path planning method is proposed for unmanned aerial vehicle spraying operations over concave-convex mixed complex polygonal farmland: all feature points on the boundary of the polygonal farmland are acquired sequentially, adjacent points are connected by straight lines in their acquisition order, and an operation region bounded by the farmland boundary is generated.
Abstract: The invention discloses a flight path planning method for unmanned aerial vehicle spraying operations over concave-convex mixed complex polygonal farmland. Starting from any feature point on the boundary of the polygonal farmland, all feature points of the boundary are acquired sequentially; adjacent points are connected by straight lines in their acquisition order, and the operation region bounded by the farmland boundary is generated. The longest edge of the boundary of the operation region is found, along with the feature point farthest from that edge, and N flight path lines are drawn between the longest edge and this farthest feature point. The coordinates of all boundary line segments crossed by the flight path lines, and of the crossing points, are obtained. The number of crossing points between each flight path line and the boundary line segments is checked, and when it is larger than 2, the flight path lines outside the operation region are deleted. Finally, starting from the flight path line nearest to the longest edge, the flight path lines and side flight lines are connected in sequence to obtain an S-shaped flight path planning line. In this way, the flight path for unmanned aerial vehicle spraying operations over concave-convex mixed complex polygonal farmland can be planned.
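The geometric core of such planners can be sketched as follows (a hedged illustration, not the patent's exact procedure): find the longest boundary edge and the farthest vertex, generate sweep lines parallel to that edge, clip them to the field polygon, and traverse them in alternating order. The swath width, the field coordinates, and the use of the Shapely library are assumptions; for concave fields a clipped pass may split into several pieces, which this sketch only notes.

```python
# Hedged sketch: parallel sweep lines clipped to the field and linked in an
# S-shaped order.  `SWATH` and the field coordinates are illustrative.
import numpy as np
from shapely.geometry import Polygon, LineString

SWATH = 5.0                                    # assumed spray swath width
field = Polygon([(0, 0), (60, 0), (70, 30), (30, 45), (-5, 25)])

coords = list(field.exterior.coords)[:-1]      # boundary feature points
edges = [(np.array(coords[i]), np.array(coords[(i + 1) % len(coords)]))
         for i in range(len(coords))]
a, b = max(edges, key=lambda e: np.linalg.norm(e[1] - e[0]))  # longest edge

d = (b - a) / np.linalg.norm(b - a)            # direction of the longest edge
n = np.array([-d[1], d[0]])                    # its normal
offsets = [float(np.dot(np.array(p) - a, n)) for p in coords]
farthest = max(offsets, key=abs)               # farthest feature point

passes, k, extent = [], 0, field.length        # perimeter safely bounds any chord
while SWATH / 2 + k * SWATH < abs(farthest):
    off = np.sign(farthest) * (SWATH / 2 + k * SWATH)
    line = LineString([tuple(a + off * n - extent * d),
                       tuple(a + off * n + extent * d)])
    cut = field.intersection(line)             # MultiLineString for concave fields
    if not cut.is_empty:
        passes.append(cut)
    k += 1

# S-shaped traversal: reverse every other pass before linking them.
route = []
for i, p in enumerate(passes):
    # For simplicity only the first piece of a split pass is kept here.
    pts = list(p.coords) if p.geom_type == "LineString" else list(p.geoms[0].coords)
    route.extend(reversed(pts) if i % 2 else pts)
print(len(passes), "passes,", len(route), "waypoints")
```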

Journal ArticleDOI
TL;DR: In this article, an analytical solution is presented for the gravity anomaly produced by a 2D body whose geometrical shape is arbitrary and where the density contrast is a polynomial function in both the horizontal and vertical directions.
Abstract: An analytical solution is presented for the gravity anomaly produced by a 2D body whose geometrical shape is arbitrary and where the density contrast is a polynomial function in both the horizontal and vertical directions. Approximating the real shape of the body by a polygon, the solution is expressed as sum of algebraic quantities that depend only upon the coordinates of the vertices of the polygon and upon the polynomial density function. The solution presented in the paper, which refers to a third-order polynomial function as a maximum, exhibits an intrinsic symmetry that naturally suggests its extension to the case of higher-order polynomials describing the density contrast. Furthermore, the gravity anomaly is evaluated at an arbitrary point that does not necessarily coincide with the origin of the reference frame in which the density function is assigned. Invoking recent results of potential theory, the solution derived in the paper is shown to be singularity-free and numerically robust. The accuracy and effectiveness of the proposed approach is witnessed by the numerical comparisons with examples derived from the existing literature.
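As general background (the standard 2-D gravity integral, not the paper's closed-form result), the vertical gravity anomaly at an observation point P = (x_P, z_P) produced by a 2-D body Ω of density contrast ρ(x, z), with z taken positive downward, is

```latex
% Standard 2-D (infinite-strike) gravity anomaly; G is the gravitational constant.
\Delta g(P) \;=\; 2G \iint_{\Omega}
\rho(x,z)\,\frac{z - z_{P}}{(x - x_{P})^{2} + (z - z_{P})^{2}}\; dx\, dz .
```

Approximating Ω by a polygon and applying Green's theorem converts this area integral into a sum of integrals along the polygon edges; with a polynomial ρ those edge integrals reduce to the closed-form, vertex-only expressions the paper derives.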

Journal ArticleDOI
TL;DR: This letter proposes an approach toward standardizing building extraction evaluation, which may also have broader applications in the field of shape similarity, and shows that quantification by the PoLiS distance of the dissimilarity between polygons is consistent with visual perception.
Abstract: The standardization of evaluation techniques for building extraction is an unresolved issue in the fields of remote sensing, photogrammetry, and computer vision. In this letter, we propose a metric with a working title “PoLiS metric” to compare two polygons. The PoLiS metric is a positive-definite and symmetric function that satisfies a triangle inequality. It accounts for shape and accuracy differences between the polygons, is straightforward to apply, and requires no thresholds. We show through an example that the PoLiS metric between two polygons changes approximately linearly with respect to small translation, rotation, and scale changes. Furthermore, we compare building polygons extracted from a digital surface model to the reference building polygons by computing PoLiS, Hausdorff, and Chamfer distances. The results show that quantification by the PoLiS distance of the dissimilarity between polygons is consistent with visual perception. Furthermore, Hausdorff and Chamfer distances overrate the dissimilarity when one polygon has more vertices than the other. We propose an approach toward standardizing building extraction evaluation, which may also have broader applications in the field of shape similarity.
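The sketch below is only in the spirit of the vertex-to-boundary comparison the letter describes: it averages the distance from each polygon's vertices to the other polygon's boundary and symmetrizes the result. The letter's exact PoLiS normalization may differ, the example polygons are made up, and Shapely's built-in Hausdorff distance is shown only because the letter compares against it.

```python
# Hedged sketch of a symmetric vertex-to-boundary dissimilarity between two
# polygons, illustrating the kind of quantity the PoLiS metric formalizes.
from shapely.geometry import Polygon, Point

def vertex_to_boundary_distance(A, B):
    da = [Point(v).distance(B.exterior) for v in A.exterior.coords[:-1]]
    db = [Point(v).distance(A.exterior) for v in B.exterior.coords[:-1]]
    return 0.5 * (sum(da) / len(da) + sum(db) / len(db))

extracted = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])
reference = Polygon([(0.4, -0.2), (10.3, 0.1), (10.1, 6.2), (-0.1, 5.9)])
print(vertex_to_boundary_distance(extracted, reference))
print(extracted.hausdorff_distance(reference))   # for comparison (Shapely >= 1.6)
```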

Patent
29 Jun 2015
TL;DR: In this article, a computer-implemented method was proposed to determine the set of mesh polygons, or fragments of the mesh polygons, visible from a navigation cell, using the determined one or more intersections of the at least one wedge with the contained mesh polygons.
Abstract: In an exemplary embodiment, a computer-implemented method determines a set of mesh polygons or fragments of the mesh polygons visible from a navigation cell. The method includes determining a composite view frustum containing predetermined view frusta and determining mesh polygons contained in the composite view frustum. The method includes determining at least one supporting polygon between the navigation cell and the contained mesh polygons. The method further includes constructing at least one wedge from the at least one supporting polygon, the at least one wedge extending away from the navigation cell beyond at least the contained mesh polygons. The method includes determining one or more intersections of the at least one wedge with the contained mesh polygons. The method also includes determining the set of the contained mesh polygons or fragments of the contained mesh polygons visible from the navigation cell using the determined one or more intersections.

Journal ArticleDOI
TL;DR: A novel concept of sparse computation of polygon CGH is proposed, inspired by an observation of the sparsity in the angular spectrum of a unit triangular polygon, and an accelerated algorithm exploiting this intrinsic sparsity is presented to effectively enhance computational efficiency.
Abstract: For the real-time computation of computer-generated holograms (CGHs), various accelerated algorithms have been actively investigated. This paper proposes a novel concept of sparse computation of polygon CGH, inspired by an observation of the sparsity in the angular spectrum of a unit triangular polygon, and presents an accelerated algorithm that uses the intrinsic sparsity of the polygon CGH pattern to effectively enhance computational efficiency. Numerical results show that the computational efficiency can be greatly improved without degrading the quality of the holographic image.

Journal ArticleDOI
13 May 2015
TL;DR: In this article, the explicitly represented polygon (ERP) wall boundary model is proposed to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong form partial differential equations.
Abstract: This study presents an accurate and robust boundary model, the explicitly represented polygon (ERP) wall boundary model, to treat arbitrarily shaped wall boundaries in the explicit moving particle simulation (E-MPS) method, which is a mesh-free particle method for strong form partial differential equations. The ERP model expresses wall boundaries as polygons, which are explicitly represented without using the distance function. These are derived so that for viscous fluids, and with less computational cost, they satisfy the Neumann boundary condition for the pressure and the slip/no-slip condition on the wall surface. The proposed model is verified and validated by comparing computed results with the theoretical solution, results obtained by other models, and experimental results. Two simulations with complex boundary movements are conducted to demonstrate the applicability of the E-MPS method to the ERP model.

Journal ArticleDOI
TL;DR: In this article, an automatic crack propagation remeshing procedure using the polygon scaled boundary FEM is presented, where only minimal changes are made to the global mesh in each step.

Journal ArticleDOI
TL;DR: In this article, a scaled boundary finite element formulation was developed to model crack propagation in two-dimensions, which can accurately model the stress singularity at the crack tip in heterogeneous materials.
Abstract: A recently developed scaled boundary finite element formulation that can model the response of functionally graded materials is further developed to model crack propagation in two-dimensions. This formulation can accurately model the stress singularity at the crack tip in heterogeneous materials. The asymptotic behaviour at the crack tip is analytically represented in the scaled boundary shape functions of a cracked polygon. This enables accurate stress intensity factors to be computed directly from their definitions. Neither local mesh refinement nor asymptotic enrichment functions are required. This novel formulation can be implemented on polygons with an arbitrary number of sides. When modelling crack propagation, the remeshing process is more flexible and leads to only minimal changes to the global mesh structure. Six numerical examples involving crack propagation in functionally graded materials are modelled to demonstrate the salient features of the developed method.

Journal ArticleDOI
TL;DR: In this article, the straight skeleton of a simple polygon with arbitrary weights is investigated from a geometric, graph-theoretical, and combinatorial point of view, and a non-procedural description and a linear-time construction algorithm are given for the straight skeleton of strictly convex polygons with arbitrary weights.
Abstract: We investigate weighted straight skeletons from a geometric, graph-theoretical, and combinatorial point of view. We start with a thorough definition and shed light on some ambiguity issues in the procedural definition. We investigate the geometry, combinatorics, and topology of faces and the roof model, and we discuss in which cases a weighted straight skeleton is connected. Finally, we show that the weighted straight skeleton of even a simple polygon may be non-planar and may contain cycles, and we discuss under which restrictions on the weights and/or the input polygon the weighted straight skeleton still behaves similar to its unweighted counterpart. In particular, we obtain a non-procedural description and a linear-time construction algorithm for the straight skeleton of strictly convex polygons with arbitrary weights.

Proceedings ArticleDOI
04 May 2015
TL;DR: This work presents an effective parallelization of the classic, highly sequential Greiner-Hormann algorithm, which yields the first output-sensitive CREW PRAM algorithm for a pair of simple polygons, as well as a practical, parallel GIS system for polygon overlay processing of two GIS layers containing a large number of polygons over a cluster of compute nodes.
Abstract: Clipping arbitrary polygons is one of the complex operations in computer graphics and computational geometry. It is applied in many fields such as Geographic Information Systems (GIS) and VLSI CAD. We have two significant results to report. Our first result is the effective parallelization of the classic, highly sequential Greiner-Hormann algorithm, which yields the first output-sensitive CREW PRAM algorithm for a pair of simple polygons, and can perform clipping in O(log n) time using O(n + k) processors, where n is the total number of vertices and k is the number of edge intersections. This improves upon our previous clipping algorithm based on the parallelization of Vatti's sweepline algorithm, which requires O(n + k + k') processors to achieve logarithmic time complexity, where k' can be O(n²) [1]. This also improves upon another O(log n) time algorithm by Karinthi, Srinivas, and Almasi which, unlike our algorithm, does not handle self-intersecting polygons, is not output-sensitive, and must employ Θ(n²) processors to achieve O(log n) time [2]. We also study multi-core and many-core implementations of our parallel Greiner-Hormann algorithm. Our second result is a practical, parallel GIS system, namely MPI-GIS, for polygon overlay processing of two GIS layers containing a large number of polygons over a cluster of compute nodes. It employs an R-tree for efficient indexing and identification of potentially intersecting sets of polygons across the two input GIS layers. Spatial data files tend to be large (in GBs), and the underlying overlay computation is highly irregular and compute intensive. This system achieves a 44X speedup on NERSC's 32-node CARVER cluster while processing about 600K polygons in two GIS layers within 19 seconds, which takes over 13 minutes on the state-of-the-art ArcGIS system.
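For a concrete picture of what polygon clipping computes, here is the classic Sutherland-Hodgman routine, a much simpler baseline that only handles a convex clip polygon; the Greiner-Hormann algorithm parallelized in the paper removes that restriction (and the paper's variant also handles self-intersecting polygons). The example polygons are illustrative.

```python
# Hedged sketch: classic Sutherland-Hodgman clipping as a simple baseline,
# requiring a convex clip polygon given in counterclockwise order.
def clip(subject, clip_poly):
    def inside(p, a, b):                       # p on the left of directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def cross(p, q, a, b):                     # intersect segment pq with edge line ab
        den = (p[0]-q[0])*(a[1]-b[1]) - (p[1]-q[1])*(a[0]-b[0])
        t = ((p[0]-a[0])*(a[1]-b[1]) - (p[1]-a[1])*(a[0]-b[0])) / den
        return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
    out = list(subject)
    for i in range(len(clip_poly)):            # clip against one edge at a time
        a, b = clip_poly[i], clip_poly[(i+1) % len(clip_poly)]
        prev, out = out, []
        if not prev:
            break
        s = prev[-1]
        for e in prev:
            if inside(e, a, b):
                if not inside(s, a, b):
                    out.append(cross(s, e, a, b))
                out.append(e)
            elif inside(s, a, b):
                out.append(cross(s, e, a, b))
            s = e
    return out

square = [(0, 0), (8, 0), (8, 8), (0, 8)]
triangle = [(4, -2), (12, 4), (4, 10)]
print(clip(triangle, square))                  # triangle clipped to the square
```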

Proceedings ArticleDOI
04 Jan 2015
TL;DR: An algorithm is described that determines whether a closed walk of length n in a simple plane graph is weakly simple in O(n log n) time, improving an earlier O(n³)-time algorithm of Cortese et al.
Abstract: A closed curve in the plane is weakly simple if it is the limit (in the Fréchet metric) of a sequence of simple closed curves. We describe an algorithm to determine whether a closed walk of length n in a simple plane graph is weakly simple in O(n log n) time, improving an earlier O(n³)-time algorithm of Cortese et al. [Discrete Math. 2009]. As an immediate corollary, we obtain the first efficient algorithm to determine whether an arbitrary n-vertex polygon is weakly simple; our algorithm runs in O(n² log n) time. We also describe algorithms that detect weak simplicity in O(n log n) time for two interesting classes of polygons. Finally, we discuss subtle errors in several previously published definitions of weak simplicity.

Proceedings ArticleDOI
17 Dec 2015
TL;DR: It is shown that if the curve satisfies a special property, termed chain-visibility, then there exists an optimal algorithm for monitoring a given set of target locations; for the class of street polygons, a constant-factor approximation is also obtained, even when the set of target locations is the entire polygon.
Abstract: We study the problem of planning paths for a team of robots motivated by coverage, persistent monitoring and surveillance applications. The input is a set of target points in a polygonal environment that must be monitored using robots with omni-directional cameras. The goal is to compute paths for all robots such that every target is visible from at least one path. The cost of a path is given by the weighted combination of the length of the path (travel time) and the number of viewpoints along the path (measurement time). The overall cost is given by the maximum cost over all robot paths and the objective is to minimize the maximum cost. In its general form, this problem is NP-hard. In this paper, we present an optimal algorithm and a constant factor approximation for two special versions of the problem. In both cases, the paths are restricted to lie on a pre-defined curve in the polygon. We show that if the curve satisfies a special property, termed chain-visibility, then there exists an optimal algorithm for monitoring a given set of target locations. Furthermore, if we restrict the input polygon to the class of street polygons, then we present a constant-factor approximation which is applicable even if the set of target locations is the entire polygon. In addition to theoretical proofs, we also present results from simulation studies.
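As a much-reduced illustration of one ingredient of such planners, the sketch below greedily picks viewpoints until every target is covered, assuming the visibility sets have already been computed (e.g. by a visibility query per candidate viewpoint). It ignores the travel-time component of the paper's cost and carries none of the paper's optimality or approximation guarantees; all names are illustrative.

```python
# Hedged sketch of the covering subproblem only: greedy viewpoint selection
# over precomputed visibility sets, without the path/travel-time component.
def greedy_viewpoints(visible_from, targets):
    """visible_from: dict {viewpoint: set of targets visible from it}."""
    uncovered, chosen = set(targets), []
    while uncovered:
        best = max(visible_from, key=lambda v: len(visible_from[v] & uncovered))
        gained = visible_from[best] & uncovered
        if not gained:
            raise ValueError("some targets are not visible from any candidate")
        chosen.append(best)
        uncovered -= gained
    return chosen

vis = {"v1": {"t1", "t2"}, "v2": {"t2", "t3"}, "v3": {"t3", "t4", "t5"}}
print(greedy_viewpoints(vis, {"t1", "t2", "t3", "t4", "t5"}))  # e.g. ['v3', 'v1']
```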

Patent
02 Dec 2015
TL;DR: In this paper, a plant protection UAV operation route planning method and device are presented, which comprise the steps of obtaining the image of an area to be operated and fitting it as a convex polygon in a first coordinate system, converting the convex polygon area from the first coordinate system to a second coordinate system, dividing the polygon area in the second coordinate system into a plurality of sub-operation areas, operating on each sub-operation area to obtain its corresponding flight route segment, and converting the resulting flight route segments back into the first coordinate system.
Abstract: The present invention discloses a plant protection UAV operation route planning method and a device. The accuracy of operation can be improved, the repeated coverage rate and the deficient spraying rate are reduced, and energy consumption and dose consumption are saved. The method comprises the steps of obtaining the image of an area to be operated and fitting the image as a convex polygon in a first coordinate system, a step of carrying out coordinate conversion from the first coordinate system to a second coordinate system on the convex polygon area corresponding to the convex polygon, a step of dividing the polygon area in the second coordinate system into a plurality of sub-operation areas, a step of operating on each of the sub-operation areas in the second coordinate system to obtain a flight route segment corresponding to the sub-operation area in the second coordinate system, a step of obtaining the flight route segment of the plant protection UAV in the second coordinate system, and a step of converting the flight route segment of the plant protection UAV from the second coordinate system into the first coordinate system to obtain the flight route segment of the plant protection UAV in the first coordinate system.

Journal ArticleDOI
TL;DR: This work studies the problem of computing the visibility polygon of any query point and the ray-shooting problem of finding the first point on the polygon boundaries that is hit by any query ray.
Abstract: Given a polygonal domain (or polygon with holes) in the plane, we study the problem of computing the visibility polygon of any query point. As a special case of visibility problems, we also study the ray-shooting problem of finding the first point on the polygon boundaries that is hit by any query ray. These are fundamental problems in computational geometry and have been studied extensively. We present new algorithms and data structures that improve the previous results.
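To make the ray-shooting query concrete, the brute-force O(n) version below tests every polygon edge and reports the first boundary point hit by the query ray. The paper's contribution lies in data structures that answer such queries much faster after preprocessing; this sketch only states the query itself, and the example square is illustrative.

```python
# Hedged sketch: brute-force ray shooting against the edges of one polygon.
def ray_shoot(origin, direction, polygon, eps=1e-12):
    ox, oy = origin
    dx, dy = direction
    best_t, hit = float("inf"), None
    m = len(polygon)
    for i in range(m):
        (ax, ay), (bx, by) = polygon[i], polygon[(i + 1) % m]
        ex, ey = bx - ax, by - ay              # edge vector
        den = dx * ey - dy * ex
        if abs(den) < eps:                     # ray parallel to this edge
            continue
        t = ((ax - ox) * ey - (ay - oy) * ex) / den   # distance along the ray
        u = ((ax - ox) * dy - (ay - oy) * dx) / den   # position along the edge
        if t > eps and -eps <= u <= 1 + eps and t < best_t:
            best_t, hit = t, (ox + t * dx, oy + t * dy)
    return hit                                 # None if the ray escapes

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(ray_shoot((5, 5), (1, 0), square))       # (10.0, 5.0)
```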

Patent
17 Aug 2015
TL;DR: In this paper, the authors present a program of instructions that causes a computer system having a processor running the program to: receive design information indicative of the design of a three-dimensional object to be printed by a three-dimensional printer; and receive test product deformation information indicative of deformation in the profiles of no more than five three-dimensional test products that have a circular or polygonal cross section and were made by the three-dimensional printer.
Abstract: A non-transitory, tangible, computer-readable storage media may contain a program of instructions that causes a computer system having a processor running the program of instructions to: receive design information indicative of the design of a three-dimensional object to be printed by a three-dimensional printer; receive test product deformation information indicative of deformation in the profiles of no more than five, three-dimensional test products that have a circular or polygonal cross section that were made by the three-dimensional printer; generate polygon product deformation information indicative of a predicted deformation of a polygon shape that the three-dimensional printer will print and that has a number of sides and a number of sizes that are both different from each of the number of sides and number of sizes that the no more than five, three-dimensional test products have; and generate adjustment information indicative of an adjustment needed to print a desired profile of the polygon shape that the three-dimensional printer will print to make the printed shape accurate.

Patent
Yulin Jin, Emmett Lalish, Kris Iverson, Jesse D. McGatha, Shanen J. Boettcher
27 Feb 2015
TL;DR: In this paper, a method, computing system, and one or more computer-readable storage media for fabricating full color three-dimensional objects are provided, which includes transforming a 3D model into instructions for a fabrication device by slicing the model into layers with color information preserved, generating two-dimensional polygons for each layer based on colors on faces, colors on textures, and/or gradient colors.
Abstract: A method, computing system, and one or more computer-readable storage media for fabricating full color three-dimensional objects are provided herein. The method includes transforming a three-dimensional model into instructions for a fabrication device by slicing the three-dimensional model into layers with color information preserved, generating two-dimensional polygons for each layer based on colors on faces, colors on textures, and/or gradient colors, and determining a tool path for fabricating an object from colored materials based on the two-dimensional polygons for each layer. Determining the tool path includes generating instructions that direct the fabrication device to apply colored material for all two-dimensional polygons of a same color before switching colors, smooth an exterior of the object by switching colors at an internal vertex of each two-dimensional polygon within each layer, and deposit transitional material within an infill area, a support structure, and/or an area outside the object when switching colors.

Journal ArticleDOI
TL;DR: The results demonstrate that working memory (WM) capacity allocation cannot be explained by complexity alone and is highly sensitive to objecthood, as suggested by discrete slot models.

Journal ArticleDOI
TL;DR: A Delaunay-based, unified method is presented for reconstruction irrespective of the type of the input point set; it works for boundary samples as well as dot patterns and has been shown to perform well independently of the sampling model.

Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm for multivariate interpolation of scattered data sets lying in convex domains Ω ⊆ ℝ^N, for any N ≥ 2, where a kd-tree space-partitioning data structure is used to efficiently apply a partition of unity interpolant.
Abstract: In this paper, we present an algorithm for multivariate interpolation of scattered data sets lying in convex domains Ω ⊆ ℝ^N, for any N ≥ 2. To organize the points in a multidimensional space, we build a kd-tree space-partitioning data structure, which is used to efficiently apply a partition of unity interpolant. This global scheme is combined with local radial basis function (RBF) approximants and compactly supported weight functions. A detailed description of the algorithm for convex domains and a complexity analysis of the computational procedures are also considered. Several numerical experiments show the performances of the interpolation algorithm on various sets of Halton data points contained in Ω, where Ω can be any convex domain, like a 2D polygon or a 3D polyhedron. Finally, an application to topographical data contained in a pentagonal domain is presented.
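A compact 2-D sketch of the partition-of-unity idea is given below: cover the domain with overlapping discs, fit a local RBF interpolant on the points inside each disc (found with a kd-tree), and blend the local fits with compactly supported Wendland weights. The disc radius, the specific weight function, the test data, and the use of SciPy's `Rbf` are illustrative choices, not the paper's exact scheme.

```python
# Hedged 2-D sketch of a partition-of-unity interpolant built from local RBF fits.
import numpy as np
from scipy.spatial import cKDTree
from scipy.interpolate import Rbf

rng = np.random.default_rng(0)
pts = rng.random((400, 2))                     # scattered data in [0, 1]^2
vals = np.sin(4 * pts[:, 0]) * np.cos(3 * pts[:, 1])
tree = cKDTree(pts)

centers = np.array([(i, j) for i in np.linspace(0, 1, 5)
                            for j in np.linspace(0, 1, 5)])
radius = 0.4                                   # discs overlap (grid spacing 0.25)

locals_ = []
for c in centers:                              # one local RBF fit per disc
    idx = tree.query_ball_point(c, radius)
    locals_.append(Rbf(pts[idx, 0], pts[idx, 1], vals[idx]) if idx else None)

def wendland(r):                               # compactly supported weight on [0, 1]
    r = np.clip(r, 0.0, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

def pu_interp(x, y):                           # blend the local fits
    num = den = 0.0
    for c, s in zip(centers, locals_):
        r = np.hypot(x - c[0], y - c[1]) / radius
        if s is not None and r < 1.0:
            w = wendland(r)
            num += w * float(s(x, y))
            den += w
    return num / den if den > 0 else np.nan

print(pu_interp(0.3, 0.7), np.sin(4 * 0.3) * np.cos(3 * 0.7))
```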

Journal ArticleDOI
TL;DR: In this paper, generalized Fourier–Mellin transforms for analytic functions defined in simply connected circular domains are derived, extending earlier generalized Fourier transforms for convex polygons, and the notions of spectral matrix and fundamental contour are introduced.
Abstract: Generalized Fourier–Mellin transforms for analytic functions defined in simply connected circular domains are derived. Circular domains are taken to be those with boundaries that are a finite union of circular arcs, including straight line edges. The results are an extension to circular domains of the generalized Fourier transforms for convex polygons (having only straight line edges) derived by Fokas and Kapaev (IMA J Appl Math 68:355–408, 2003). First, a new, elementary derivation of the latter result for polygons is given based on Cauchy’s integral formula and a spectral representation of the Cauchy kernel. This rederivation extends in a natural way to the case of circular domains once an adapted spectral representation of the Cauchy kernel is established. Domains with boundaries that are a combination of circular arc and straight line edges can be treated similarly. The newly derived transforms are generalizations of the classical Fourier and Mellin transforms to general circular domains. It is shown by example how they can be used to solve boundary value problems for Laplace’s equation in such domains. The notions of spectral matrix and fundamental contour, which arise naturally in the formulation, are also introduced.
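The elementary starting point mentioned above, Cauchy's integral formula for a function f analytic in a bounded domain D with boundary ∂D (standard statement, not reproduced from the paper), reads

```latex
% Cauchy's integral formula; the paper's derivation then inserts a spectral
% representation of the Cauchy kernel 1/(\zeta - z) adapted to circular edges.
f(z) \;=\; \frac{1}{2\pi i} \oint_{\partial D} \frac{f(\zeta)}{\zeta - z}\; d\zeta ,
\qquad z \in D .
```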