Showing papers on "Polygon published in 2020"


Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this article, the authors propose a network architecture to represent a low dimensional family of convex polytopes via an auto-encoding process, and investigate the applications of this architecture including automatic convex decomposition, image to 3D reconstruction, and part-based shape retrieval.
Abstract: Any solid object can be decomposed into a collection of convex polytopes (in short, convexes). When a small number of convexes are used, such a decomposition can be thought of as a piece-wise approximation of the geometry. This decomposition is fundamental in computer graphics, where it provides one of the most common ways to approximate geometry, for example, in real-time physics simulation. A convex object also has the property of being simultaneously an explicit and implicit representation: one can interpret it explicitly as a mesh derived by computing the vertices of a convex hull, or implicitly as the collection of half-space constraints or support functions. Their implicit representation makes them particularly well suited for neural network training, as they abstract away from the topology of the geometry they need to represent. However, at testing time, convexes can also generate explicit representations – polygonal meshes – which can then be used in any downstream application. We introduce a network architecture to represent a low dimensional family of convexes. This family is automatically derived via an auto-encoding process. We investigate the applications of this architecture including automatic convex decomposition, image to 3D reconstruction, and part-based shape retrieval.
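
As a minimal sketch of the half-space view mentioned above (an illustration only, not the authors' network or decoder; names and array layout are assumptions), a convex can be queried implicitly by testing a point against every hyperplane constraint:

import numpy as np

def convex_indicator(points, normals, offsets):
    """Inside test for a convex defined as the intersection of half-spaces
    n_i . x + d_i <= 0. points: (P, 3), normals: (H, 3), offsets: (H,)."""
    # Signed value of every point against every hyperplane, shape (P, H).
    h = points @ normals.T + offsets
    # A point lies in the convex iff it satisfies all half-space constraints.
    return np.all(h <= 0.0, axis=1)

# Example: a unit cube centred at the origin, expressed with 6 half-spaces.
normals = np.array([[ 1, 0, 0], [-1, 0, 0],
                    [ 0, 1, 0], [ 0, -1, 0],
                    [ 0, 0, 1], [ 0, 0, -1]], dtype=float)
offsets = -0.5 * np.ones(6)
pts = np.array([[0.0, 0.0, 0.0], [0.9, 0.0, 0.0]])
print(convex_indicator(pts, normals, offsets))  # [ True False]

An explicit mesh can then be recovered, as the abstract notes, by computing the vertices of the convex hull induced by the same planes.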

155 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: BSP-Net as mentioned in this paper learns to represent a 3D shape via convex decomposition, which can be used to generate polygonal meshes without any need for iso-surfacing.
Abstract: Polygonal meshes are ubiquitous in the digital 3D domain, yet they have only played a minor role in the deep learning revolution. Leading methods for learning generative models of shapes rely on implicit functions, and generate meshes only after expensive iso-surfacing routines. To overcome these challenges, we are inspired by a classical spatial data structure from computer graphics, Binary Space Partitioning (BSP), to facilitate 3D learning. The core ingredient of BSP is an operation for recursive subdivision of space to obtain convex sets. By exploiting this property, we devise BSP-Net, a network that learns to represent a 3D shape via convex decomposition. Importantly, BSP-Net is unsupervised since no convex shape decompositions are needed for training. The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built on a set of planes. The convexes inferred by BSP-Net can be easily extracted to form a polygon mesh, without any need for iso-surfacing. The generated meshes are compact (i.e., low-poly) and well suited to represent sharp geometry; they are guaranteed to be watertight and can be easily parameterized. We also show that the reconstruction quality by BSP-Net is competitive with state-of-the-art methods while using much fewer primitives. Code is available at https://github.com/czq142857/BSP-NET-original.
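
A toy 2D analogue of turning a set of planes into an explicit polygon without iso-surfacing (a generic Sutherland-Hodgman-style sketch, not BSP-Net's mesh extraction; all names are illustrative) clips a large bounding polygon against each half-plane in turn:

def clip_by_halfplane(polygon, a, b, c):
    """Clip a convex polygon (list of (x, y) vertices, CCW) against the
    half-plane a*x + b*y + c <= 0. Returns the clipped vertex list."""
    out = []
    n = len(polygon)
    for i in range(n):
        p, q = polygon[i], polygon[(i + 1) % n]
        fp = a * p[0] + b * p[1] + c
        fq = a * q[0] + b * q[1] + c
        if fp <= 0:
            out.append(p)
        if fp * fq < 0:  # edge crosses the boundary line: add the intersection
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

# Intersect the half-planes x <= 1, y <= 1, -x <= 1, -y <= 1 and x + y <= 1.5.
square = [(-2, -2), (2, -2), (2, 2), (-2, 2)]
halfplanes = [(1, 0, -1), (0, 1, -1), (-1, 0, -1), (0, -1, -1), (1, 1, -1.5)]
poly = square
for (a, b, c) in halfplanes:
    poly = clip_by_halfplane(poly, a, b, c)
print(poly)  # explicit vertices of the resulting convex polygon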

155 citations


Proceedings ArticleDOI
Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, Raquel Urtasun
14 Jun 2020
TL;DR: The proposed PolyTransform is a novel instance segmentation algorithm that produces precise, geometry-preserving masks by combining the strengths of prevailing segmentation approaches and modern polygon-based methods.
Abstract: In this paper, we propose PolyTransform, a novel instance segmentation algorithm that produces precise, geometry-preserving masks by combining the strengths of prevailing segmentation approaches and modern polygon-based methods. In particular, we first exploit a segmentation network to generate instance masks. We then convert the masks into a set of polygons that are then fed to a deforming network that transforms the polygons such that they better fit the object boundaries. Our experiments on the challenging Cityscapes dataset show that our PolyTransform significantly improves the performance of the backbone instance segmentation network and ranks 1st on the Cityscapes test-set leaderboard. We also show impressive gains in the interactive annotation setting.
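
The first stage described above, converting an instance mask into an initial polygon before the deforming network refines it, can be approximated with an off-the-shelf contour tracer; the sketch below uses scikit-image and is an assumption about that step, not the authors' implementation.

import numpy as np
from skimage import measure

def mask_to_polygon(mask, n_vertices=64):
    """Extract the longest boundary contour of a binary mask and resample it
    to a fixed number of vertices, giving an initial polygon to be refined."""
    contours = measure.find_contours(mask.astype(float), 0.5)
    contour = max(contours, key=len)              # keep the dominant boundary
    # Arc-length resampling to n_vertices points.
    d = np.cumsum(np.r_[0, np.linalg.norm(np.diff(contour, axis=0), axis=1)])
    samples = np.linspace(0, d[-1], n_vertices, endpoint=False)
    xs = np.interp(samples, d, contour[:, 0])
    ys = np.interp(samples, d, contour[:, 1])
    return np.stack([xs, ys], axis=1)             # (n_vertices, 2) polygon

mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 20:44] = 1
print(mask_to_polygon(mask).shape)                # (64, 2)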

129 citations


Journal ArticleDOI
TL;DR: This study proposes an automatic building footprint extraction framework that consists of a convolutional neural network (CNN)-based segmentation and an empirical polygon regularization that transforms segmentation maps into structured individual building polygons.
Abstract: This study proposes an automatic building footprint extraction framework that consists of a convolutional neural network (CNN)-based segmentation and an empirical polygon regularization that transforms segmentation maps into structured individual building polygons. The framework attempts to replace part of the manual delineation of building footprints involved in the surveying and mapping field with algorithms. First, we develop a scale-robust fully convolutional network (FCN) by introducing multiple scale aggregation of feature pyramids from convolutional layers. Two postprocessing strategies are introduced to refine the segmentation maps from the FCN. The refined segmentation maps are vectorized and polygonized. Then, we propose a polygon regularization algorithm, consisting of a coarse and a fine adjustment, to translate the initial polygons into structured footprints. Experiments on a large open building data set including 181,000 buildings showed that our algorithm reached a high automation level where at least 50% of individual buildings in the test area could be delineated to replace manual work. Experiments on different data sets demonstrated that our FCN-based segmentation method outperformed several of the most recent segmentation methods, and our polygon regularization algorithm is robust in challenging situations with different building styles, image resolutions, and even low-quality segmentation.

126 citations


Posted Content
TL;DR: A new version of YOLO with better performance, extended with instance segmentation and called Poly-YOLO; its lite variant has the same precision as YOLOv3 but is three times smaller and twice as fast, thus suitable for embedded devices.
Abstract: We present a new version of YOLO with better performance and extended with instance segmentation called Poly-YOLO. Poly-YOLO builds on the original ideas of YOLOv3 and removes two of its weaknesses: a large amount of rewritten labels and inefficient distribution of anchors. Poly-YOLO reduces the issues by aggregating features from a light SE-Darknet-53 backbone with a hypercolumn technique, using stairstep upsampling, and produces a single scale output with high resolution. In comparison with YOLOv3, Poly-YOLO has only 60% of its trainable parameters but improves mAP by a relative 40%. We also present Poly-YOLO lite with fewer parameters and a lower output resolution. It has the same precision as YOLOv3, but it is three times smaller and twice as fast, thus suitable for embedded devices. Finally, Poly-YOLO performs instance segmentation using bounding polygons. The network is trained to detect size-independent polygons defined on a polar grid. Vertices of each polygon are being predicted with their confidence, and therefore Poly-YOLO produces polygons with a varying number of vertices.
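
To illustrate the polar-grid polygon representation sketched in the abstract (illustrative only; the tensor layout and names are assumptions, not Poly-YOLO's actual detection head), each angular sector predicts a confidence and a radial distance from the object centre, and vertices below a confidence threshold are dropped, yielding polygons with a varying number of vertices:

import numpy as np

def decode_polar_polygon(cx, cy, distances, confidences, threshold=0.5):
    """Decode polygon vertices from per-sector (distance, confidence) pairs.
    Sector i covers the angle 2*pi*i/S measured from the centre (cx, cy)."""
    S = len(distances)
    angles = 2.0 * np.pi * np.arange(S) / S
    keep = confidences >= threshold           # variable number of vertices
    xs = cx + distances[keep] * np.cos(angles[keep])
    ys = cy + distances[keep] * np.sin(angles[keep])
    return np.stack([xs, ys], axis=1)

rng = np.random.default_rng(0)
dist = rng.uniform(5.0, 10.0, size=12)        # 12 angular sectors
conf = rng.uniform(0.0, 1.0, size=12)
print(decode_polar_polygon(50.0, 50.0, dist, conf))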

71 citations


Journal ArticleDOI
03 Apr 2020
TL;DR: Zhang et al. as mentioned in this paper proposed a novel unified framework which decomposes the detection problem into a structured polygon prediction task and a depth recovery task; the structured polygon consists of several projected surfaces of the target object.
Abstract: Monocular 3D object detection task aims to predict the 3D bounding boxes of objects based on monocular RGB images. Since the location recovery in 3D space is quite difficult on account of absence of depth information, this paper proposes a novel unified framework which decomposes the detection problem into a structured polygon prediction task and a depth recovery task. Different from the widely studied 2D bounding boxes, the proposed novel structured polygon in the 2D image consists of several projected surfaces of the target object. Compared to the widely-used 3D bounding box proposals, it is shown to be a better representation for 3D detection. In order to inversely project the predicted 2D structured polygon to a cuboid in the 3D physical world, the following depth recovery task uses the object height prior to complete the inverse projection transformation with the given camera projection matrix. Moreover, a fine-grained 3D box refinement scheme is proposed to further rectify the 3D detection results. Experiments are conducted on the challenging KITTI benchmark, in which our method achieves state-of-the-art detection accuracy.
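
The depth recovery step described above rests on a standard pinhole relation: an object of known physical height H whose structured polygon spans h pixels vertically under focal length f_y lies at depth roughly Z = f_y * H / h. A minimal sketch under that assumption (not the paper's exact formulation or its refinement scheme):

def depth_from_height_prior(f_y, object_height_m, pixel_height):
    """Pinhole-camera depth estimate Z = f_y * H / h.
    f_y: focal length in pixels, object_height_m: prior height in metres,
    pixel_height: vertical extent of the projected polygon in pixels."""
    return f_y * object_height_m / pixel_height

# Example: KITTI-like focal length, a 1.6 m tall car spanning 60 px vertically.
print(depth_from_height_prior(721.5, 1.6, 60.0))  # ~19.2 m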

55 citations


Journal ArticleDOI
TL;DR: In this article, the authors compare different approaches and mapping units to provide a robust methodology for susceptibility mapping using a combination of landslide point and polygon data, characterized by different meanings, uncertainties and levels of reliability.

49 citations


Journal ArticleDOI
TL;DR: PolygonCNN, a learnable end-to-end vector shape modeling framework for generating building outlines from aerial images, is introduced; it proposes a simplify-and-densify sampling strategy to generate homogeneously sampled polygons with well-kept geometric signals for shape prior learning.
Abstract: The identification and annotation of buildings has long been a tedious and expensive part of high-precision vector map production. Deep learning techniques such as the fully convolutional network (FCN) have largely promoted the accuracy of automatic building segmentation from remote sensing images. However, compared with the deep-learning-based building segmentation methods that greatly benefit from data-driven feature learning, the building boundary vector representation generation techniques mainly rely on handcrafted features and high human intervention. These techniques continue to employ manual design and ignore the opportunity of using the rich feature information that can be learned from training data to directly generate vectorized boundary descriptions. Aiming to address this problem, we introduce PolygonCNN, a learnable end-to-end vector shape modeling framework for generating building outlines from aerial images. The framework first performs an FCN-like segmentation to extract initial building contours. Then, by encoding the vertices of the building polygons along with the pooled image features extracted from the segmentation step, a modified PointNet is proposed to learn shape priors and predict a polygon vertex deformation to generate refined building vector results. Additionally, we propose 1) a simplify-and-densify sampling strategy to generate homogeneously sampled polygons with well-kept geometric signals for shape prior learning; and 2) a novel loss function for estimating shape similarity between building polygons with vastly different vertex numbers. Experiments on over 10,000 building samples verify that PolygonCNN can generate building vectors with a higher vertex-based F1-score than the state-of-the-art method, while maintaining the building segmentation accuracy achieved by the FCN-like model.
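
A rough, hypothetical reading of the simplify-and-densify strategy (not the authors' code; shapely is assumed available): first simplify the raw polygon to discard noisy vertices while keeping corners, then resample the simplified boundary at uniform arc length so the vertices are homogeneously distributed.

import numpy as np
from shapely.geometry import Polygon

def simplify_and_densify(coords, tolerance=1.0, n_vertices=128):
    """Simplify a building polygon (Douglas-Peucker) to keep geometric
    corners, then densify it by uniform arc-length resampling."""
    ring = Polygon(coords).simplify(tolerance).exterior
    samples = np.linspace(0.0, ring.length, n_vertices, endpoint=False)
    pts = [ring.interpolate(s) for s in samples]
    return np.array([[p.x, p.y] for p in pts])

coords = [(0, 0), (10, 0.2), (20, 0), (20, 15), (0, 15)]
print(simplify_and_densify(coords, tolerance=0.5, n_vertices=32).shape)  # (32, 2)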

49 citations


Proceedings Article
12 Jul 2020
TL;DR: In this article, a Transformer-based architecture is proposed to predict mesh vertices and faces using a range of inputs, including object classes, voxels and images, and because the model is probabilistic it can produce samples that capture uncertainty in ambiguous scenarios.
Abstract: Polygon meshes are an efficient representation of 3D geometry, and are of central importance in computer graphics, robotics and games development. Existing learning-based approaches have avoided the challenges of working with 3D meshes, instead using alternative object representations that are more compatible with neural architectures and training approaches. We present an approach which models the mesh directly, predicting mesh vertices and faces sequentially using a Transformer-based architecture. Our model can condition on a range of inputs, including object classes, voxels, and images, and because the model is probabilistic it can produce samples that capture uncertainty in ambiguous scenarios. We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task. We also evaluate the conditional models on surface reconstruction metrics against alternative methods, and demonstrate competitive performance despite not training directly on this task.

44 citations


Posted Content
TL;DR: A novel unified framework which decomposes the detection problem into a structured polygon prediction task and a depth recovery task is proposed; on the KITTI benchmark, the method achieves state-of-the-art detection accuracy.
Abstract: Monocular 3D object detection task aims to predict the 3D bounding boxes of objects based on monocular RGB images. Since the location recovery in 3D space is quite difficult on account of absence of depth information, this paper proposes a novel unified framework which decomposes the detection problem into a structured polygon prediction task and a depth recovery task. Different from the widely studied 2D bounding boxes, the proposed novel structured polygon in the 2D image consists of several projected surfaces of the target object. Compared to the widely-used 3D bounding box proposals, it is shown to be a better representation for 3D detection. In order to inversely project the predicted 2D structured polygon to a cuboid in the 3D physical world, the following depth recovery task uses the object height prior to complete the inverse projection transformation with the given camera projection matrix. Moreover, a fine-grained 3D box refinement scheme is proposed to further rectify the 3D detection results. Experiments are conducted on the challenging KITTI benchmark, in which our method achieves state-of-the-art detection accuracy.

43 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: An algorithm for extracting and vectorizing objects in images with polygons is presented; starting from a polygonal partition that oversegments the image, it refines the geometry of the partition while labeling its cells by a semantic class, and its efficiency is demonstrated against existing vectorization methods.
Abstract: We present an algorithm for extracting and vectorizing objects in images with polygons. Departing from a polygonal partition that oversegments an image into convex cells, the algorithm refines the geometry of the partition while labeling its cells by a semantic class. The result is a set of polygons, each capturing an object in the image. The quality of a configuration is measured by an energy that accounts for both the fidelity to input data and the complexity of the output polygons. To efficiently explore the configuration space, we perform splitting and merging operations in tandem on the cells of the polygonal partition. The exploration mechanism is controlled by a priority queue that sorts the operations most likely to decrease the energy. We show the potential of our algorithm on different types of scenes, from organic shapes to man-made objects through floor maps, and demonstrate its efficiency compared to existing vectorization methods.

Journal ArticleDOI
TL;DR: In this paper, the structure of the Higgs branch of 5d superconformal field theories or gauge theories is derived from their realization as a generalized toric polygon (or dot diagram), motivated by a dual, tropical curve decomposition of the (p, q) 5-brane-web system.
Abstract: We derive the structure of the Higgs branch of 5d superconformal field theories or gauge theories from their realization as a generalized toric polygon (or dot diagram). This approach is motivated by a dual, tropical curve decomposition of the (p, q) 5-brane-web system. We define an edge coloring, which provides a decomposition of the generalized toric polygon into a refined Minkowski sum of sub-polygons, from which we compute the magnetic quiver. The Coulomb branch of the magnetic quiver is then conjecturally identified with the 5d Higgs branch. Furthermore, from partial resolutions, we identify the symplectic leaves of the Higgs branch and thereby the entire foliation structure. In the case of strictly toric polygons, this approach reduces to the description of deformations of the Calabi-Yau singularities in terms of Minkowski sums.
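
For reference, the ordinary Minkowski sum that the refined decomposition builds on is the standard operation (notation assumed)

$P_1 + P_2 = \{\, x + y \mid x \in P_1,\ y \in P_2 \,\}$,

with the refined version used in the paper imposing additional compatibility conditions coming from the edge coloring of the generalized toric polygon.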

Journal ArticleDOI
09 Sep 2020
TL;DR: In this article, the authors proved that the sum of cosines of the angles of a periodic billiard polygon remains constant in the 1-parameter family of such polygons (that exist due to the Poncelet porism).
Abstract: We prove some recent experimental observations of Dan Reznik concerning periodic billiard orbits in ellipses. For example, the sum of cosines of the angles of a periodic billiard polygon remains constant in the 1-parameter family of such polygons (that exist due to the Poncelet porism). In our proofs, we use geometric and complex analytic methods.
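
In symbols (notation assumed), for an $n$-periodic billiard polygon in an ellipse with reflection angles $\theta_1, \dots, \theta_n$, the invariant proved here reads

$\sum_{i=1}^{n} \cos\theta_i = \mathrm{const}$

as the polygon varies over its 1-parameter Poncelet family.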

Journal ArticleDOI
TL;DR: In this paper, high optical quality factor (Q) polygonal and star coherent optical modes in a lithium niobate microdisk have been observed, and the resulting high intracavity optical power of the polygon modes triggers second harmonic generation at high efficiency.
Abstract: We observe high optical quality factor (Q) polygonal and star coherent optical modes in a lithium niobate microdisk. In contrast to the previous polygon modes achieved by deformed microcavities at lower mechanical and optical Q, we adopt weak perturbation from a tapered fiber for the polygon mode formation. The resulting high intracavity optical power of the polygon modes triggers second harmonic generation at high efficiency. With the combined advantages of a high mechanical Q cavity, we observe optomechanical oscillation in polygon modes for the first time. Finally, we observe frequency microcomb generation from the polygon modes with an ultrastable taper-on-disk coupling mechanism.

Posted Content
TL;DR: This work designs a novel curved bounding box model with optimal properties for fisheye distortion models, and a curvature-adaptive perimeter sampling method for obtaining polygon vertices that improves the relative mAP score by 4.9% compared to uniform sampling.
Abstract: Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively less explored in the case of fisheye cameras. The standard bounding box fails in fisheye cameras due to the strong radial distortion, particularly in the image's periphery. In this work, we explore better representations like the oriented bounding box, ellipse, and generic polygon for object detection in fisheye images. We use the IoU metric to compare these representations using accurate instance segmentation ground truth. We design a novel curved bounding box model that has optimal properties for fisheye distortion models. We also design a curvature-adaptive perimeter sampling method for obtaining polygon vertices, improving the relative mAP score by 4.9% compared to uniform sampling. Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%. To the best of our knowledge, it is the first detailed study of object detection on fisheye cameras for autonomous driving scenarios. The dataset, comprising 10,000 images along with ground truth for all the object representations, will be made public to encourage further research. We summarize our work in a short video with qualitative results at this https URL.
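
A plausible sketch of curvature-adaptive perimeter sampling (illustrative only, not the paper's exact procedure): allocate polygon vertices along a dense instance contour in proportion to the local turning angle, so highly curved stretches receive more vertices than straight ones.

import numpy as np

def curvature_adaptive_sample(contour, n_vertices=24, eps=1e-3):
    """Pick n_vertices points along a closed dense contour (N, 2) so that
    vertex density is proportional to eps + local turning angle."""
    prev = np.roll(contour, 1, axis=0)
    nxt = np.roll(contour, -1, axis=0)
    v1 = contour - prev
    v2 = nxt - contour
    ang1 = np.arctan2(v1[:, 1], v1[:, 0])
    ang2 = np.arctan2(v2[:, 1], v2[:, 0])
    turn = np.abs(np.angle(np.exp(1j * (ang2 - ang1))))   # wrapped turning angle
    weight = eps + turn
    cdf = np.cumsum(weight) / np.sum(weight)
    targets = (np.arange(n_vertices) + 0.5) / n_vertices
    idx = np.searchsorted(cdf, targets)
    return contour[np.clip(idx, 0, len(contour) - 1)]

# Dense contour of an elongated ellipse sampled at 400 points.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
contour = np.stack([30 * np.cos(t), 10 * np.sin(t)], axis=1)
print(curvature_adaptive_sample(contour).shape)            # (24, 2)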

Journal ArticleDOI
TL;DR: Simulation and experimental results demonstrate that the proposed B-spline approximation approach can significantly improve machining efficiency while ensuring the surface quality.
Abstract: This paper presents a unified framework for computing a B-spline curve to approximate the micro-line toolpath within the desired fitting accuracy. First, a bi-chord error test extended from our previous work is proposed to select the dominant points that govern the overall shape of the micro-line toolpath. It fully considers the geometric characteristics of the micro-line toolpath, i.e., the curvature, the curvature variation and the torsion, appropriately determining the distribution of the dominant points. Second, an initial B-spline curve is constructed by the dominant points in the least square sense. The fitting error is unpredictable and uncontrollable. It is classified into two types: (a) the geometric deviations between the vertices of the polygon formed by the data points and the constructed B-spline curve; (b) those between the edges of the polygon and the constructed B-spline curve. Herein, an applicable dominant point insertion is employed to keep the first geometric deviation within the specified tolerance of fitting error. A geometric deviation model extended from our previous work is developed to estimate the second geometric deviation. It can be effectively integrated into global toolpath optimization. Computational results demonstrate that the bi-chord error test applies to both the planar micro-line toolpath and the spatial micro-line toolpath, and it can greatly reduce the number of the control points. Simulation and experimental results demonstrate that the proposed B-spline approximation approach can significantly improve machining efficiency while ensuring the surface quality.

Journal ArticleDOI
TL;DR: A robust MAT-based method for detecting building corner points and a hierarchical corner-aware segmentation for clustering skeleton points based on their properties are proposed; the results imply that skeletonization is a promising tool for extracting relevant geometric information, e.g. building outlines, even from far-from-perfect geographical point cloud data.

Posted Content
TL;DR: In this article, the authors proved that the sum of cosines of the angles of a periodic billiard polygon remains constant in the one-parameter family of such polygons (that exist due to the Poncelet porism).
Abstract: We prove some recent experimental observations of D. Reznik concerning periodic billiard orbits in ellipses. For example, the sum of cosines of the angles of a periodic billiard polygon remains constant in the one-parameter family of such polygons (that exist due to the Poncelet porism). In our proofs, we use geometric and complex analytic methods.

Journal ArticleDOI
TL;DR: Compared with the solid isotropic material penalization (SIMP) approach, the proposed approach can produce a stiffer optimum design with an explicit boundary description while dramatically reducing the number of design variables.

Journal ArticleDOI
TL;DR: A framework is developed that creates a new polygonal mesh representation of the sparse infill domain of a layer-by-layer 3D printing job and guarantees the existence of a single, continuous tool path covering each connected piece of the domain in every layer of this graphical model.
Abstract: We develop a framework that creates a new polygonal mesh representation of the sparse infill domain of a layer-by-layer 3D printing job. We guarantee the existence of a single, continuous tool path covering each connected piece of the domain in every layer in this graphical model. We also present a tool path algorithm that traverses each such continuous tool path with no crossovers. The key construction at the heart of our framework is a novel Euler transformation which converts a 2-dimensional cell complex K into a new 2-complex K̂ such that every vertex in the 1-skeleton Ĝ of K̂ has even degree. Hence Ĝ is Eulerian, and an Eulerian tour can be followed to print all edges in a continuous fashion without stops. We start with a mesh K of the union of polygons obtained by projecting all layers to the plane. First we compute its Euler transformation K̂. In the slicing step, we clip K̂ at each layer using its polygon to obtain a complex that may not necessarily be Eulerian. We then patch this complex by adding edges such that any odd-degree nodes created by slicing are transformed to have even degrees again. We print extra support edges in place of any segments left out to ensure there are no edges without support in the next layer above. These support edges maintain the Euler nature of the complex. Finally, we describe a tree-based search algorithm that builds the continuous tool path by traversing “concentric” cycles in the Euler complex. Our algorithm produces a tool path that avoids material collisions and crossovers, and can be printed in a continuous fashion irrespective of complex geometry or topology of the domain (e.g., holes). We implement and test our framework on several 3D objects. Apart from standard geometric shapes including a nonconvex star, we demonstrate the framework on the Stanford bunny. Several intermediate layers in the bunny have multiple components as well as complicated geometries.
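
The continuity guarantee rests on the classical fact that a connected graph admits a closed tour using every edge exactly once if and only if every vertex has even degree, which is exactly what the Euler transformation enforces. A generic Hierholzer-style sketch of such a tour (not the authors' tree-based traversal of concentric cycles):

from collections import defaultdict

def euler_tour(edges):
    """Hierholzer's algorithm: return a closed walk using every edge exactly
    once, assuming the graph is connected and every vertex has even degree."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    assert all(len(nbrs) % 2 == 0 for nbrs in adj.values()), "graph is not Eulerian"
    stack, tour = [next(iter(adj))], []
    while stack:
        u = stack[-1]
        if adj[u]:
            v = adj[u].pop()
            adj[v].remove(u)          # consume the edge in both directions
            stack.append(v)
        else:
            tour.append(stack.pop())
    return tour[::-1]

# Two triangles sharing a vertex: every vertex has even degree.
print(euler_tour([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]))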

Journal ArticleDOI
TL;DR: Existing methods for vectorizing raster clip-art inputs produce results with visible artifacts, but outputs created using an intermediate polygonal approximation step are more consistent with viewer expectations than those produced by these alternatives.
Abstract: Raster clip-art images, which consist of distinctly colored regions separated by sharp boundaries, typically allow for a clear mental vector interpretation. Converting these images into vector format can facilitate compact lossless storage and enable numerous processing operations. Despite recent progress, existing vectorization methods that target such data frequently produce vectorizations that fail to meet viewer expectations. We present PolyFit, a new clip-art vectorization method that produces vectorizations well aligned with human preferences. Since segmentation of such inputs into regions has been addressed successfully, we specifically focus on fitting piecewise smooth vector curves to the raster input region boundaries, a task on which prior methods are particularly prone to fail. While perceptual studies suggest the criteria humans are likely to use during mental boundary vectorization, they provide no guidance as to the exact interaction between them; learning these interactions directly is problematic due to the large size of the solution space. To obtain the desired solution, we first approximate the raster region boundaries with coarse intermediate polygons, leveraging a combination of perceptual cues with observations from studies of human preferences. We then use these intermediate polygons as auxiliary inputs for computing piecewise smooth vectorizations of raster inputs. We define a finite set of potential polygon-to-curve primitive maps, and learn the mapping from the polygons to their best fitting primitive configurations from human annotations, arriving at a compact set of local raster and polygon properties whose combinations reliably predict human-expected primitive choices. We use these primitives to obtain a final globally consistent spline vectorization. Extensive comparative user studies show that our method outperforms state-of-the-art approaches on a wide range of data, where our results are preferred three times as often as those of the closest competitor across multiple types of inputs with various resolutions.

Journal ArticleDOI
TL;DR: Two secure transmission algorithms for millimeter-wave wireless communication are presented, which are computationally attractive and have analytical solutions for solving the traditional constellation synthesis problem with the aid of polygon construction in the complex plane.
Abstract: This paper presents two secure transmission algorithms for millimeter-wave wireless communication, which are computationally attractive and have analytical solutions. In the proposed algorithms, we consider a phased-array transmission structure and focus on phase shift keying (PSK) modulation. It is found that the traditional constellation synthesis problem can be solved with the aid of polygon construction in the complex plane. A detailed analysis is then carried out and an analytical procedure is developed to obtain a qualified phase solution. For a given synthesis task, it is derived that there exist infinitely many weight vector solutions under a mild condition. Based on this result, we propose the first secure transmission algorithm by varying the transmitting weight vector at symbol rate, thus producing exact phases at the intended receiver and randomness at the undesired eavesdroppers. To improve the security without significantly degrading the symbol detection reliability for the target receiver, the second secure transmission algorithm is devised by allowing a relaxed symbol region for the intended receiver. Compared to the first algorithm, the second one incorporates an additional random phase rotation operation into the transmitting weight vector and brings extra disturbance for the undesired eavesdroppers. Different from the existing works that are only feasible for the case of single-path mmWave channels, our proposed algorithms are applicable to more general multi-path channels. Moreover, all the antennas are active in the proposed algorithms and the on-off switching circuit is not needed. Simulations are presented to demonstrate the effectiveness of the proposed algorithms under various situations.
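
The polygon-in-the-complex-plane view can be illustrated with a tiny sketch (assumed notation, not the paper's synthesis algorithm): M-ary PSK symbols are the vertices of a regular M-gon on the unit circle, and the synthesis task is then to choose transmit weights whose array response lands on the intended vertex only in the target direction.

import numpy as np

M = 8                                                    # 8-PSK
m = np.arange(M)
symbols = np.exp(1j * (2 * np.pi * m / M + np.pi / M))   # vertices of a regular 8-gon
print(np.round(symbols, 3))
print(np.allclose(np.abs(symbols), 1.0))                 # all vertices lie on the unit circle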

Journal ArticleDOI
TL;DR: This work introduces Delaunay Point Processes, a framework for the extraction of geometric structures from images that uses Markov Chain Monte Carlo to minimize an energy that balances fidelity to the input image data with geometric priors on the output structures.
Abstract: We introduce Delaunay Point Processes, a framework for the extraction of geometric structures from images. Our approach simultaneously locates and groups geometric primitives (line segments, triangles) to form extended structures (line networks, polygons) for a variety of image analysis tasks. Similarly to traditional point processes, our approach uses Markov Chain Monte Carlo to minimize an energy that balances fidelity to the input image data with geometric priors on the output structures. However, while existing point processes struggle to model structures composed of inter-connected components, we propose to embed the point process into a Delaunay triangulation, which provides high-quality connectivity by construction. We further leverage key properties of the Delaunay triangulation to devise a fast Markov Chain Monte Carlo sampler. We demonstrate the flexibility of our approach on a variety of applications, including line network extraction, object contouring, and mesh-based image compression.
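
The connectivity-by-construction idea can be illustrated with a few lines of SciPy (a generic sketch, not the paper's sampler): sampled points become vertices of a Delaunay triangulation, and the triangulation's edges supply the candidate segments that are then grouped into line networks or polygons.

import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
points = rng.uniform(0, 1, size=(20, 2))         # candidate vertices in an image
tri = Delaunay(points)

# Collect the unique undirected edges of the triangulation.
edges = set()
for a, b, c in tri.simplices:
    for u, v in ((a, b), (b, c), (c, a)):
        edges.add((min(u, v), max(u, v)))
print(len(tri.simplices), "triangles,", len(edges), "edges")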

Proceedings ArticleDOI
14 Jun 2020
TL;DR: A hybrid method that successively connects and slices planes detected from 3D data, constructing an efficient and compact partitioning data structure that is spatially adaptive and scalable.
Abstract: Converting point clouds generated by laser scanning, multi-view stereo imagery or depth cameras into compact polygon meshes is a challenging problem in vision. Existing methods are either robust to imperfect data or scalable, but rarely both. In this paper, we address this issue with a hybrid method that successively connects and slices planes detected from 3D data. The core idea consists in constructing an efficient and compact partitioning data structure. The latter is i) spatially adaptive in the sense that a plane slices a restricted number of relevant planes only, and ii) composed of components with different structural meaning resulting from a preliminary analysis of the plane connectivity. Our experiments on a variety of objects and sensors show the versatility of our approach as well as its competitiveness with respect to existing methods.

Journal ArticleDOI
Jiansheng Zhang, Jie Zhou, Qiuyun Wang, Guiqian Xiao, Guo-zheng Quan
TL;DR: The WAAM process of a failed crankshaft forging die is studied; an interpolation fitting algorithm is introduced to improve the accuracy of the outline polygon, and the criterion equation for the outline polygon's orientation is deduced.
Abstract: Many problems, such as difficult precision control, unstable structural properties, and waste of welding materials, are encountered when remanufacturing hot forging dies with the manual arc surfacing method. Therefore, the automatic wire arc additive remanufacturing (WAAM) technology is put forward. By using the failed forging die as the remanufacturing substrate, this technology combines the advantages of cyclic utilization and high accuracy, and has great application prospects. In this paper, the WAAM process of a failed crankshaft forging die is studied. First, a new slicing algorithm is proposed which obtains its outline polygon by classifying and reconstructing the intersection points of triangular patches and tangent planes. The key idea of this algorithm is to judge the order of these intersection points with the normal vector and put them into the “starting” or “ending” arrays. By traversing these two arrays, the processing efficiency becomes much better than before. Simultaneously, by parameterizing the chord length accumulation, an interpolation fitting algorithm is introduced to improve the accuracy of the outline polygon, and the criterion equation for the outline polygon's orientation is deduced. Then a composite filling algorithm with a uniform interior and smooth margin is developed to meet the requirements of the WAAM process. Finally, the WAAM process for a failed crankshaft forging die is conducted, which verifies the feasibility of the whole process.
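
The geometric core of the slicing step above, intersecting triangular patches with a slicing plane, reduces to clipping each triangle edge against the plane; a generic sketch for a horizontal plane z = z0 (not the authors' classification-by-normal-vector scheme, and ignoring vertices lying exactly on the plane):

import numpy as np

def triangle_plane_segment(tri, z0):
    """Intersect one triangle (3x3 array of xyz vertices) with the plane
    z = z0. Returns the two intersection points forming a slice segment,
    or None if the plane misses the triangle."""
    pts = []
    for i in range(3):
        p, q = tri[i], tri[(i + 1) % 3]
        dp, dq = p[2] - z0, q[2] - z0
        if dp * dq < 0:                     # edge crosses the slicing plane
            t = dp / (dp - dq)
            pts.append(p + t * (q - p))
    return np.array(pts) if len(pts) == 2 else None

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 2.0], [0.0, 1.0, 2.0]])
print(triangle_plane_segment(tri, 1.0))    # two points with z exactly 1.0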

Journal ArticleDOI
Sujit Kumar De
TL;DR: The relative error is constructed as the measure of the degree of fuzziness, the index value of the fuzzy parameter after K years is presented, and its role in modern decision-making problems is discussed.
Abstract: In this article, we develop the definition of the degree of fuzziness for a polygonal fuzzy set. First of all we take a polygon of sides 2n + 1 and then we calculate its area. Also we consider the ...
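
The polygon-area step mentioned above is the standard shoelace formula; a small sketch for a (2n + 1)-gon (illustrative only, independent of the paper's fuzziness measure):

import numpy as np

def shoelace_area(vertices):
    """Area of a simple polygon given its vertices in order, shape (k, 2)."""
    x, y = vertices[:, 0], vertices[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

n = 3
k = 2 * n + 1                                   # a polygon with 2n + 1 = 7 sides
theta = 2 * np.pi * np.arange(k) / k
heptagon = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(shoelace_area(heptagon))                  # ~2.736 = (7/2) sin(2*pi/7)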

Posted Content
TL;DR: A sharper error analysis for the Virtual Element Method is developed that separates the element boundary and element interior contributions to the error, and a variant of the scheme is proposed that takes advantage of polygons with many edges in order to yield a more accurate discrete solution.
Abstract: In the present contribution we develop a sharper error analysis for the Virtual Element Method, applied to a model elliptic problem, that separates the element boundary and element interior contributions to the error. As a consequence we are able to propose a variant of the scheme that allows one to take advantage of polygons with many edges (such as those composing Voronoi meshes or generated by agglomeration procedures) in order to yield a more accurate discrete solution. The theoretical results are supported by numerical experiments.

Journal ArticleDOI
TL;DR: The scaled boundary finite element method for transient thermoelastic fracture analysis facilitates an accurate and direct evaluation of the stress intensity factors from their definition, using relatively coarse meshes and without resorting to any post-processing techniques.

Journal ArticleDOI
TL;DR: This high-resolution inventory of polygonal geomorphology provides rich spatial context for extrapolating observations of environmental processes across the landscape and represents an extensive baseline dataset for quantifying contemporary land surface deformation at the survey area, through future topographic surveys.
Abstract: It is well known that microtopography associated with ice wedge polygons drives pronounced, meter-scale spatial gradients in hydrologic and ecological processes on the tundra. However, high-resolution maps of polygonal geomorphology are rarely available, due to the complexity and subtlety of ice wedge polygon relief at landscape scales. Here we present a sub-meter resolution map of >10⁶ discrete ice wedge polygons across a ~1200 km² landscape, delineated within a lidar-derived digital elevation model. The delineation procedure relies on a convolutional neural network paired with a set of common image processing operations and permits explicit measurement of relative elevation at the center of each ice wedge polygon. The resulting map visualizes meter- to kilometer-scale spatial gradients in polygonal geomorphology across an extensive landscape with unprecedented detail. This high-resolution inventory of polygonal geomorphology provides rich spatial context for extrapolating observations of environmental processes across the landscape. The map also represents an extensive baseline dataset for quantifying contemporary land surface deformation (i.e., thermokarst) at the survey area, through future topographic surveys.