
Showing papers on "Polygon" published in 2019


Proceedings ArticleDOI
01 Oct 2019
TL;DR: This work proposes a new approach, named PolyMapper, to circumvent the conventional pixel-wise segmentation of (aerial) images and predict objects in a vector representation directly, which will help develop models with more informed geometrical constraints.
Abstract: We propose a new approach, named PolyMapper, to circumvent the conventional pixel-wise segmentation of (aerial) images and predict objects in a vector representation directly. PolyMapper directly extracts the topological map of a city from overhead images as collections of building footprints and road networks. In order to unify the shape representation for different types of objects, we also propose a novel sequentialization method that reformulates a graph structure as closed polygons. Experiments are conducted on both existing and self-collected large-scale datasets of several cities. Our empirical results demonstrate that our end-to-end learnable model is capable of drawing polygons of building footprints and road networks that very closely approximate the structure of existing online map services, in a fully automated manner. Quantitative and qualitative comparisons to the state of the art also show that our approach achieves good levels of performance. To the best of our knowledge, the automatic extraction of large-scale topological maps is a novel contribution in the remote sensing community that we believe will help develop models with more informed geometrical constraints.

116 citations


Posted Content
TL;DR: BSP-Net is a network that learns to represent a 3D shape via convex decomposition; it is unsupervised, since no convex shape decompositions are needed for training, and its reconstruction quality is competitive with state-of-the-art methods while using far fewer primitives.
Abstract: Polygonal meshes are ubiquitous in the digital 3D domain, yet they have only played a minor role in the deep learning revolution. Leading methods for learning generative models of shapes rely on implicit functions, and generate meshes only after expensive iso-surfacing routines. To overcome these challenges, we are inspired by a classical spatial data structure from computer graphics, Binary Space Partitioning (BSP), to facilitate 3D learning. The core ingredient of BSP is an operation for recursive subdivision of space to obtain convex sets. By exploiting this property, we devise BSP-Net, a network that learns to represent a 3D shape via convex decomposition. Importantly, BSP-Net is unsupervised since no convex shape decompositions are needed for training. The network is trained to reconstruct a shape using a set of convexes obtained from a BSP-tree built on a set of planes. The convexes inferred by BSP-Net can be easily extracted to form a polygon mesh, without any need for iso-surfacing. The generated meshes are compact (i.e., low-poly) and well suited to represent sharp geometry; they are guaranteed to be watertight and can be easily parameterized. We also show that the reconstruction quality by BSP-Net is competitive with state-of-the-art methods while using much fewer primitives. Code is available at this https URL.
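The half-space view of a convex that BSP builds on is easy to illustrate outside the network: a point is inside a convex when it satisfies every plane constraint, and inside the assembled shape when it is inside any convex. The sketch below (plain NumPy, hard max/min on hand-picked planes) only demonstrates that representation, not the learned, differentiable BSP-tree described in the abstract.

```python
import numpy as np

def convex_inside(points, planes):
    """Signed value of points against a convex given as half-spaces a.x + d <= 0.

    points: (N, 3) array; planes: (P, 4) array of rows [a_x, a_y, a_z, d].
    A non-positive result means the point satisfies all half-space constraints.
    """
    vals = points @ planes[:, :3].T + planes[:, 3]   # (N, P) per-plane values
    return vals.max(axis=1)                          # max over planes = convex test

def shape_inside(points, convex_list):
    """A shape assembled as the union of convexes: inside if inside any convex."""
    return np.min([convex_inside(points, p) for p in convex_list], axis=0)

# Example: a unit cube as one convex (6 half-spaces), queried at two points.
cube = np.array([[ 1, 0, 0, -1], [-1, 0, 0, 0],
                 [ 0, 1, 0, -1], [ 0,-1, 0, 0],
                 [ 0, 0, 1, -1], [ 0, 0,-1, 0]], dtype=float)
pts = np.array([[0.5, 0.5, 0.5], [2.0, 0.0, 0.0]])
print(shape_inside(pts, [cube]) <= 0)   # [ True False ]
```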

115 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: This paper proposes a Deep Active Ray Network (DARNet) for automatic building segmentation that is trained end-to-end by back-propagating through the energy minimization and the backbone CNN, which makes the CNN adapt to the dynamics of the contour evolution.
Abstract: In this paper, we propose a Deep Active Ray Network (DARNet) for automatic building segmentation. Taking an image as input, it first exploits a deep convolutional neural network (CNN) as the backbone to predict energy maps, which are further utilized to construct an energy function. A polygon-based contour is then evolved via minimizing the energy function, of which the minimum defines the final segmentation. Instead of parameterizing the contour using Euclidean coordinates, we adopt polar coordinates, i.e., rays, which not only prevents self-intersection but also simplifies the design of the energy function. Moreover, we propose a loss function that directly encourages the contours to match building boundaries. Our DARNet is trained end-to-end by back-propagating through the energy minimization and the backbone CNN, which makes the CNN adapt to the dynamics of the contour evolution. Experiments on three building instance segmentation datasets demonstrate that our DARNet achieves performance that is either state-of-the-art or comparable to other competitors.
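The ray-based contour parameterization mentioned in the abstract can be illustrated directly: radii at fixed, monotonically increasing angles around a center always yield a simple (non-self-intersecting) polygon. A minimal sketch with an evenly spaced angle grid; the energy maps and the contour evolution itself are not reproduced here.

```python
import numpy as np

def rays_to_polygon(center, radii):
    """Convert per-angle ray lengths to closed polygon vertices (counter-clockwise).

    center: (2,) array; radii: (K,) positive ray lengths at K evenly spaced angles.
    Because the angles increase monotonically, the polygon is star-shaped around
    `center` and cannot self-intersect.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, num=len(radii), endpoint=False)
    xy = np.stack([np.cos(angles), np.sin(angles)], axis=1) * radii[:, None]
    return center + xy

poly = rays_to_polygon(np.array([10.0, 20.0]), np.full(36, 5.0))  # roughly circular footprint
print(poly.shape)  # (36, 2)
```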

99 citations


Posted Content
TL;DR: This work introduces a network architecture to represent a low dimensional family of convexes, automatically derived via an auto-encoding process, and investigates the applications including automatic convex decomposition, image to 3D reconstruction, and part-based shape retrieval.
Abstract: Any solid object can be decomposed into a collection of convex polytopes (in short, convexes). When a small number of convexes are used, such a decomposition can be thought of as a piece-wise approximation of the geometry. This decomposition is fundamental in computer graphics, where it provides one of the most common ways to approximate geometry, for example, in real-time physics simulation. A convex object also has the property of being simultaneously an explicit and implicit representation: one can interpret it explicitly as a mesh derived by computing the vertices of a convex hull, or implicitly as the collection of half-space constraints or support functions. Their implicit representation makes them particularly well suited for neural network training, as they abstract away from the topology of the geometry they need to represent. However, at testing time, convexes can also generate explicit representations -- polygonal meshes -- which can then be used in any downstream application. We introduce a network architecture to represent a low dimensional family of convexes. This family is automatically derived via an auto-encoding process. We investigate the applications of this architecture including automatic convex decomposition, image to 3D reconstruction, and part-based shape retrieval.
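The explicit/implicit duality described above can be reproduced with standard SciPy tools: the same convex is either an inside test against half-space constraints or, after a half-space intersection, an explicit vertex set ready for meshing. This is an illustrative sketch under the assumption that a strictly interior point is known (here the origin of a unit box), not the learned decomposition itself.

```python
import numpy as np
from scipy.spatial import ConvexHull, HalfspaceIntersection

# Axis-aligned box |x|, |y|, |z| <= 1 written as A x + b <= 0, rows stacked as [A, b].
halfspaces = np.array([[ 1, 0, 0, -1], [-1, 0, 0, -1],
                       [ 0, 1, 0, -1], [ 0,-1, 0, -1],
                       [ 0, 0, 1, -1], [ 0, 0,-1, -1]], dtype=float)

def implicit(points):
    """Implicit view: max over half-space values; <= 0 means inside the convex."""
    return (points @ halfspaces[:, :3].T + halfspaces[:, 3]).max(axis=1)

# Explicit view: vertices of the same convex, then a hull to obtain faces (a mesh).
hs = HalfspaceIntersection(halfspaces, interior_point=np.zeros(3))
hull = ConvexHull(hs.intersections)
print(hs.intersections.shape, hull.simplices.shape)  # 8 box corners; triangular faces
```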

78 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe a method for efficiently computing parallel transport of tangent vectors on curved surfaces, or more generally, any vector-valued data on a curved manifold.
Abstract: This article describes a method for efficiently computing parallel transport of tangent vectors on curved surfaces, or more generally, any vector-valued data on a curved manifold. More precisely, it extends a vector field defined over any region to the rest of the domain via parallel transport along shortest geodesics. This basic operation enables fast, robust algorithms for extrapolating level set velocities, inverting the exponential map, computing geometric medians and Karcher/Frechet means of arbitrary distributions, constructing centroidal Voronoi diagrams, and finding consistently ordered landmarks. Rather than evaluate parallel transport by explicitly tracing geodesics, we show that it can be computed via a short-time heat flow involving the connection Laplacian. As a result, transport can be achieved by solving three prefactored linear systems, each akin to a standard Poisson problem. To implement the method, we need only a discrete connection Laplacian, which we describe for a variety of geometric data structures (point clouds, polygon meshes, etc.). We also study the numerical behavior of our method, showing empirically that it converges under refinement, and augment the construction of intrinsic Delaunay triangulations so that they can be used in the context of tangent vector field processing.
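The statement that transport reduces to "three prefactored linear systems, each akin to a standard Poisson problem" is mainly a solver-architecture point: factor once, reuse for many sources. The sketch below shows only that pattern with SciPy on a small stand-in sparse system; the actual connection Laplacian and mass matrix would come from the chosen geometric data structure and are not built here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def prefactor_heat_solver(L, M, t):
    """Factor (M + t * L) once; reuse the factorization for many right-hand sides.

    L: sparse Laplacian-type matrix, M: sparse mass matrix, t: short diffusion time.
    Returns a callable solve(b).
    """
    return spla.factorized((M + t * L).tocsc())

# Tiny stand-in system (a path-graph-style Laplacian) just to show the call pattern.
n = 5
L = sp.diags([[-1.0] * (n - 1), [2.0] * n, [-1.0] * (n - 1)], offsets=[-1, 0, 1])
M = sp.identity(n)
solve = prefactor_heat_solver(L, M, t=1e-2)
b = np.zeros(n)
b[2] = 1.0              # "heat" placed at one vertex
u = solve(b)            # first of the reusable short-time solves
print(u.round(3))
```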

54 citations


Journal ArticleDOI
TL;DR: A quadtree-polygon scaled boundary finite element-based approach for image-based modelling of concrete fracture at the mesoscale is developed in this article, where the digital images are automatically discretised for analysis by applying a balanced quadtree decomposition in combination with a smoothing operation.

44 citations


Journal ArticleDOI
TL;DR: An unsupervised approach that extracts reliable labeled units from outdated maps and updates them from time series of recent multispectral (MS) images, with an ensemble of classifiers trained only on reference data derived from the map.
Abstract: This paper presents an unsupervised approach that extracts reliable labeled units from outdated maps to update them using time series (TS) of recent multispectral (MS) images. The method assumes that: 1) the source of the map is unknown and may be different from remote sensing data; 2) no ground truth is available; 3) the map is provided at polygon level, where the polygon label represents the dominant class; and 4) the map legend can be converted into a set of classes discriminable with the TS of images (i.e., no land-use classes that require manual analysis are considered). First, the outdated map is adapted to the spatial and spectral properties of the MS images. Then, the method identifies reliable labeled units in an unsupervised way by a two-step procedure: 1) a clustering analysis performed at polygon level to detect samples correctly associated to their labels and 2) a consistency analysis to discard polygons far from the distribution of the related land-cover class (i.e., having high probability of being mislabeled). Finally, the map is updated by classifying the recent TS of MS image with an ensemble of classifiers trained using only the reference data derived from the map. The experimental results obtained updating the 2012 Corine Land Cover (CLC) and the GlobLand30 in Trentino Alto Adige (Italy) achieved 93.2% and 93.3% overall accuracy (OA) on the validation data set. The method increased the OA up to 18% and 11.5% with respect to the reference methods on the 2012 CLC and the GlobLand30, respectively.

33 citations


Journal ArticleDOI
TL;DR: A new, automated method for deriving closed polygons around fields from time-series satellite imagery, proposing that edge linearity over a long distance is a more important criterion than spectral difference for separating fields, so edge responses are thresholded primarily by length rather than strength.
Abstract: Agricultural land-use statistics are more informative per-field than per-pixel. Land-use classification requires up-to-date field boundary maps potentially covering large areas containing thousands of farms. This kind of map is usually difficult to obtain. We have developed a new, automated method for deriving closed polygons around fields from time-series satellite imagery. We have been using this method operationally in New Zealand to map whole districts using imagery from several satellite sensors, with little need to vary parameters. Our method looks for boundaries—either step edges or linear features—surrounding regions of low variability throughout the time series. Local standard deviations from all image dates are combined, and the result is convolved with a series of extended directional edge filters. We propose that edge linearity over a long distance is a more important criterion than spectral difference for separating fields, so edge responses are thresholded primarily by length rather than strength. The resulting raster edge map (combined from all directions) is converted to vector (GIS) format and the final polygon topology is built. The method successfully segments parcels containing different crops and pasture, as well as those separated by boundaries such as roads and hedgerows. Here we describe the technique and demonstrate it for an agricultural study site (4000 km2) using SPOT satellite imagery. We show that our result compares favorably with that from existing segmentation methods in terms of both quantitative quality metrics and suitability for land-use classification.
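The two raster operations at the heart of the method (per-pixel variability through the time series, then elongated directional edge filters) are easy to prototype; the sketch below uses just two stand-in directions and random data, and omits the length-based thresholding and vectorization stages described in the abstract.

```python
import numpy as np
from scipy.ndimage import convolve

def temporal_std(stack):
    """stack: (T, H, W) time series of one band; per-pixel standard deviation over time."""
    return stack.std(axis=0)

def directional_edge_responses(img, length=15):
    """Convolve with elongated step-edge kernels (two of the several directions used)."""
    step = np.r_[np.full(length, -1.0), np.full(length, 1.0)] / (2 * length)
    kernels = [step[None, :], step[:, None]]      # responds to vertical / horizontal edges
    return [np.abs(convolve(img, k)) for k in kernels]

stack = np.random.rand(6, 128, 128)               # placeholder for co-registered image dates
responses = directional_edge_responses(temporal_std(stack))
edges = np.maximum(*responses)                    # combine directions; length thresholding follows
```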

31 citations


Journal ArticleDOI
TL;DR: In this paper, the authors reformulate several known results about continued fractions in combinatorial terms, such as the theorem of Conway and Coxeter and that of Series, relating continued fractions and triangulations.
Abstract: We reformulate several known results about continued fractions in combinatorial terms. Among them the theorem of Conway and Coxeter and that of Series, both relating continued fractions and triangulations. More general polygon dissections appear when extending these theorems for elements of the modular group $\mathrm{PSL}(2,\mathbb {Z})$ . These polygon dissections are interpreted as walks in the Farey tessellation. The combinatorial model of continued fractions can be further developed to obtain a canonical presentation of elements of $\mathrm{PSL}(2,\mathbb {Z})$ .
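One concrete piece of the dictionary between continued fractions and $\mathrm{PSL}(2,\mathbb{Z})$ is the classical convergent identity, which rewrites a finite continued fraction as a product of integer matrices. This is standard background rather than the paper's specific presentation:

```latex
% Convergents p_k/q_k of [a_1; a_2, ..., a_n] as a matrix product:
\[
\begin{pmatrix} a_1 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} a_2 & 1 \\ 1 & 0 \end{pmatrix}
\cdots
\begin{pmatrix} a_n & 1 \\ 1 & 0 \end{pmatrix}
=
\begin{pmatrix} p_n & p_{n-1} \\ q_n & q_{n-1} \end{pmatrix},
\qquad
\frac{p_n}{q_n} \;=\; a_1 + \cfrac{1}{a_2 + \cfrac{1}{\ddots + \cfrac{1}{a_n}}}.
\]
```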

30 citations


Journal ArticleDOI
Jiexian Zeng, Min Liu, Xiang Fu, Ruiyu Gu, Lu Leng
TL;DR: The proposed shape recognition algorithm achieves a high recognition rate and good robustness, and can be applied to target shape recognition under nonrigid transformations and local deformations.
Abstract: The object shape recognition of nonrigid transformations and local deformations is a difficult problem. In this paper, a shape recognition algorithm based on the curvature bag of words (CBoW) model is proposed to solve that problem. First, an approximate polygon of the object contour is obtained by using the discrete contour evolution algorithm. Next, based on the polygon vertices, the shape contour is decomposed into contour fragments. Then, the CBoW model is used to represent the contour fragments. Finally, a linear support vector machine is applied to classify the shape feature descriptors. Our main innovations are as follows: 1) A multi-scale curvature integral descriptor is proposed to extend the representativeness of the local descriptor; 2) The curvature descriptor is encoded to break through the limitation of the correspondence relationship of the sampling points for shape matching, and accordingly it forms the feature of middle-level semantic description; 3) The equal-curvature integral ranking pooling is employed to enhance the feature discrimination, and also improves the performance of the middle-level descriptor. The experimental results show that the recognition rate of the proposed algorithm in the MPEG-7 database can reach 98.21%. The highest recognition rates of the Swedish Leaf and the Tools databases are 97.23% and 97.14%, respectively. The proposed algorithm achieves a high recognition rate and has good robustness, which can be applied to the target shape recognition field for nonrigid transformations and local deformations.
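A minimal stand-in for the multi-scale curvature descriptor is the signed turning angle at each contour point, measured over chords of several lengths; the sketch below shows only that sampling step, not the bag-of-words encoding, the pooling, or the SVM, and the scale set is an arbitrary choice.

```python
import numpy as np

def multiscale_turning_angles(contour, scales=(1, 3, 5)):
    """contour: (N, 2) ordered points on a closed contour.

    For each point and each scale s, compute the signed turning angle between the
    chords to the s-th previous and s-th next point. Returns an (N, len(scales)) array.
    """
    feats = []
    for s in scales:
        prev = np.roll(contour, s, axis=0) - contour    # chord back to the previous point
        nxt = np.roll(contour, -s, axis=0) - contour    # chord forward to the next point
        ang = (np.arctan2(nxt[:, 1], nxt[:, 0])
               - np.arctan2(prev[:, 1], prev[:, 0]))
        feats.append(np.angle(np.exp(1j * ang)))        # wrap to (-pi, pi]
    return np.stack(feats, axis=1)

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(multiscale_turning_angles(circle).shape)          # (100, 3)
```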

28 citations


Posted Content
TL;DR: This work proposes the Signed Polygon as a class of new tests and shows that both the SgnT and SgnQ tests satisfy (a)-(d), and especially, they work well for both very sparse and less sparse networks.
Abstract: Given a symmetric social network, we are interested in testing whether it has only one community or multiple communities. The desired tests should (a) accommodate severe degree heterogeneity, (b) accommodate mixed-memberships, (c) have a tractable null distribution, and (d) adapt automatically to different levels of sparsity, and achieve the optimal phase diagram. How to find such a test is a challenging problem. We propose the Signed Polygon as a class of new tests. Fixing $m \geq 3$, for each $m$-gon in the network, define a score using the centered adjacency matrix. The sum of such scores is then the $m$-th order Signed Polygon statistic. The Signed Triangle (SgnT) and the Signed Quadrilateral (SgnQ) are special examples of the Signed Polygon. We show that both the SgnT and SgnQ tests satisfy (a)-(d), and especially, they work well for both very sparse and less sparse networks. Our proposed tests compare favorably with the existing tests. For example, the EZ and GC tests behave unsatisfactorily in the less sparse case and do not achieve the optimal phase diagram. Also, many existing tests do not allow for severe heterogeneity or mixed-memberships, and they behave unsatisfactorily in our settings. The analysis of the SgnT and SgnQ tests is delicate and extremely tedious, and the main reason is that we need a unified proof that covers a wide range of sparsity levels and a wide range of degree heterogeneity. For lower bound theory, we use a phase transition framework, which includes the standard minimax argument, but is more informative. The proof uses classical theorems on matrix scaling.
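For intuition, the $m = 3$ member of the family (the Signed Triangle) is simple to compute: with the diagonal of the centered adjacency matrix zeroed, the sum of $\hat{A}_{ij}\hat{A}_{jk}\hat{A}_{ki}$ over distinct triples equals the trace of $\hat{A}^3$. The rank-one, degree-based centering in this sketch is a plausible stand-in and not necessarily the exact estimator used in the paper.

```python
import numpy as np

def signed_triangle_statistic(A):
    """A: (n, n) symmetric 0/1 adjacency matrix with zero diagonal.

    Center A with a rank-one degree-based estimate (an assumption here), zero the
    diagonal, and sum A_hat[i,j] * A_hat[j,k] * A_hat[k,i] over distinct (i, j, k),
    which equals trace(A_hat^3) once diag(A_hat) = 0.
    """
    d = A.sum(axis=1)
    eta = d / np.sqrt(d.sum())          # plug-in estimate of the expected-degree profile
    A_hat = A - np.outer(eta, eta)      # centered adjacency matrix
    np.fill_diagonal(A_hat, 0.0)
    return np.trace(A_hat @ A_hat @ A_hat)

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                             # symmetric, zero diagonal
print(signed_triangle_statistic(A))
```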

Journal ArticleDOI
TL;DR: A coupling method for arbitrary polyhedron elements based on the scaled boundary finite element method that reduces the meshing burden by allowing users to divide a problem domain into several simpler subdomains and mesh them independently.

Journal ArticleDOI
25 Sep 2019-Sensors
TL;DR: The proposed method is designed based on the idea that, given an area of interest represented as a polygon, a convex decomposition of the polygon mainly occurs at the points where an interior angle between two edges of the polygon is greater than 180 degrees.
Abstract: To cover an area of interest by an autonomous vehicle, such as an Unmanned Aerial Vehicle (UAV), planning a coverage path which guides the unit to cover the area is an essential process. However, coverage path planning is often problematic, especially when the boundary of the area is complicated and the area contains several obstacles. A common solution for this situation is to decompose the area into disjoint convex sub-polygons and to obtain coverage paths for each sub-polygon using a simple back-and-forth pattern. Aligned with the solution approach, we propose a new convex decomposition method which is simple and applicable to any shape of target area. The proposed method is designed based on the idea that, given an area of interest represented as a polygon, a convex decomposition of the polygon mainly occurs at the points where an interior angle between two edges of the polygon is greater than 180 degrees. The performance of the proposed method is demonstrated by comparison with existing convex decomposition methods using illustrative examples.
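The decomposition criterion quoted above (an interior angle greater than 180 degrees) is a reflex-vertex test, which for a counter-clockwise polygon reduces to a sign check on the cross product of consecutive edges. A minimal sketch, with an L-shaped area of interest as the example:

```python
import numpy as np

def reflex_vertices(polygon):
    """polygon: (N, 2) vertices in counter-clockwise order.

    A vertex is reflex (interior angle > 180 degrees) when the z-component of the
    cross product of the incoming and outgoing edges is negative. A convex
    decomposition would start its cuts at these indices.
    """
    prev_edge = polygon - np.roll(polygon, 1, axis=0)    # edge arriving at each vertex
    next_edge = np.roll(polygon, -1, axis=0) - polygon   # edge leaving each vertex
    cross_z = prev_edge[:, 0] * next_edge[:, 1] - prev_edge[:, 1] * next_edge[:, 0]
    return np.where(cross_z < 0)[0]

L_shape = np.array([[0, 0], [4, 0], [4, 2], [2, 2], [2, 4], [0, 4]], dtype=float)
print(reflex_vertices(L_shape))   # [3] -> the concave corner at (2, 2)
```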

Journal ArticleDOI
TL;DR: The fact that ps judgments had moderately positive correlations with both sheltering and evacuation suggests that experiment participants experience the same ambivalence about these two protective actions as people threatened by actual tornadoes.
Abstract: The National Weather Service has adopted warning polygons that more specifically indicate the risk area than its previous county-wide warnings. However, these polygons are not defined in terms of numerical strike probabilities (ps). To better understand people's interpretations of warning polygons, 167 participants were shown 23 hypothetical scenarios in one of three information conditions: polygon-only (Condition A), polygon + tornadic storm cell (Condition B), and polygon + tornadic storm cell + flanking nontornadic storm cells (Condition C). Participants judged each polygon's ps and reported the likelihood of taking nine different response actions. The polygon-only condition replicated the results of previous studies; ps was highest at the polygon's centroid and declined in all directions from there. The two conditions displaying storm cells differed from the polygon-only condition only in having ps just as high at the polygon's edge nearest the storm cell as at its centroid. Overall, ps values were positively correlated with expectations of continuing normal activities, seeking information from social sources, seeking shelter, and evacuating by car. These results indicate that participants make more appropriate ps judgments when polygons are presented in their natural context of radar displays than when they are presented in isolation. However, the fact that ps judgments had moderately positive correlations with both sheltering (a generally appropriate response) and evacuation (a generally inappropriate response) suggests that experiment participants experience the same ambivalence about these two protective actions as people threatened by actual tornadoes.

Journal ArticleDOI
15 Aug 2019-Wear
TL;DR: In this paper, a vertical model for vehicle-track-subgrade dynamic interactions is established by the Green function method with high accuracy and efficiency, and three types of excitation are considered: the measured polygon, the harmonic polygon, and the track random irregularity.

Journal ArticleDOI
TL;DR: This work proposes a hierarchical filtering and clustering approach to obtain accurate lines based on detected hotspots and ordered points, and is the first Hough method that is highly adaptable, since it works for buildings with edges of different lengths and arbitrary relative orientations.
Abstract: Many urban applications require building polygons as input. However, manual extraction from point cloud data is time- and labor-intensive. Hough transform is a well-known procedure to extract line features. Unfortunately, current Hough-based approaches lack flexibility to effectively extract outlines from arbitrary buildings. We found that available point order information is actually never used. Using ordered building edge points allows us to present a novel ordered points–aided Hough Transform (OHT) for extracting high quality building outlines from an airborne LiDAR point cloud. First, a Hough accumulator matrix is constructed based on a voting scheme in parametric line space (θ, r). The variance of angles in each column is used to determine dominant building directions. We propose a hierarchical filtering and clustering approach to obtain accurate lines based on detected hotspots and ordered points. An Ordered Point List matrix consisting of ordered building edge points enables the detection of line segments of arbitrary direction, resulting in high-quality building roof polygons. We tested our method on three different datasets of different characteristics: one new dataset in Makassar, Indonesia, and two benchmark datasets in Vaihingen, Germany. To the best of our knowledge, our algorithm is the first Hough method that is highly adaptable since it works for buildings with edges of different lengths and arbitrary relative orientations. The results prove that our method delivers high completeness (between 90.1% and 96.4%) and correctness percentages (all over 96%). The positional accuracy of the building corners is between 0.2 and 0.57 m RMSE. The quality rate (89.6%) for the Vaihingen-B benchmark outperforms all existing state-of-the-art methods. Other solutions for the challenging Vaihingen-A dataset are not yet available, while we achieve a quality score of 93.2%. Results with arbitrary directions are demonstrated on the complex buildings around the EYE museum in Amsterdam.
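The first stage, a Hough accumulator built by voting in (θ, r) parameter space, is straightforward to reproduce; the ordered-point bookkeeping, hierarchical filtering, and clustering that make the method adaptable are the paper's contribution and are not sketched here.

```python
import numpy as np

def hough_accumulator(points, n_theta=180, r_res=0.5):
    """points: (N, 2) building edge points (e.g., ordered boundary points).

    Each point votes, for every angle bin theta, for the line
    r = x * cos(theta) + y * sin(theta). Returns the accumulator and the bin centers.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    r_all = points[:, 0:1] * np.cos(thetas) + points[:, 1:2] * np.sin(thetas)  # (N, n_theta)
    r_max = np.abs(r_all).max()
    r_bins = np.arange(-r_max, r_max + r_res, r_res)
    acc = np.zeros((len(r_bins), n_theta), dtype=int)
    for j in range(n_theta):
        idx = np.digitize(r_all[:, j], r_bins) - 1
        np.add.at(acc, (idx, j), 1)
    return acc, thetas, r_bins

pts = np.array([[x, 2.0] for x in np.linspace(0, 10, 50)])   # points on the line y = 2
acc, thetas, r_bins = hough_accumulator(pts)
r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
print(np.degrees(thetas[t_idx]), r_bins[r_idx])   # strongest peak: theta ~ 90 deg, r-bin containing 2
```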

Journal ArticleDOI
TL;DR: In this paper, a workflow for the rapid delineation and micro-topographic characterization of ice wedge polygons within high-resolution digital elevation models is presented, where a convolutional neural network is used to detect pixels representing polygon boundaries.
Abstract: We present a workflow for the rapid delineation and microtopographic characterization of ice wedge polygons within high-resolution digital elevation models. At the core of the workflow is a convolutional neural network used to detect pixels representing polygon boundaries. A watershed transformation is subsequently used to segment imagery into discrete polygons. Fast training times ( min) permit an iterative approach to improving skill as the routine is applied across broad landscapes. Results from study sites near Utqiaġvik (formerly Barrow) and Prudhoe Bay, Alaska, demonstrate robust performance in diverse tundra settings, with manual validations demonstrating 70–96% accuracy by area at the kilometer scale. The methodology permits precise, spatially extensive measurements of polygonal microtopography and trough network geometry.
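The segmentation stage (boundary probabilities followed by a watershed transform) can be mimicked with scikit-image; the boundary map below is a synthetic grid standing in for the convolutional network's output.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic stand-in for the CNN output: high values on polygon boundaries.
boundary_prob = np.zeros((200, 200))
boundary_prob[::50, :] = 1.0
boundary_prob[:, ::50] = 1.0

# Seeds = connected low-boundary regions; the watershed floods up the boundary map.
markers, _ = ndi.label(boundary_prob < 0.5)
labels = watershed(boundary_prob, markers)
print(labels.max(), "delineated polygons")   # one label per enclosed cell
```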

Journal ArticleDOI
TL;DR: The assessment reflects the landscape condition of the first terrace surface of Samoylov Island, a typical island of the southern part of the Lena Delta, and illustrates the potential of GIS analysis of UAV data for highly accurate investigations of Arctic landscape changes.
Abstract: Modern degradation of Arctic permafrost promotes changes in tundra landscapes and leads to degradation of ice wedge polygons, which are the most widespread landforms of Arctic wetlands. Status assessment of polygon degradation is important for various environmental studies. We have applied geographic information system (GIS) analysis of data from unmanned aerial vehicles (UAVs) to accurately assess the status of ice wedge polygon degradation on Samoylov Island. We used several modern models of polygon degradation for revealing polygon types, which obviously correspond to different stages of degradation. Manual methods of mapping and the high spatial resolution of the UAV data used allowed for a high degree of accuracy in the identification of all land units. The study revealed the following: 41.79% of the first terrace surface was composed of non-degraded polygonal tundra; 18.37% was composed of polygons, which had signs of thermokarst activity and corresponded to various stages of degradation in the models; and 39.84% was composed of collapsed polygons, slopes, valleys, and water bodies, excluding ponds of individual polygons. This study characterizes the current status of polygonal tundra degradation of the first terrace surface on Samoylov Island. Our assessment reflects the landscape condition of the first terrace surface of Samoylov Island, a typical island of the southern part of the Lena Delta. Moreover, the study illustrates the potential of GIS analysis of UAV data for highly accurate investigations of Arctic landscape changes.

Journal ArticleDOI
TL;DR: The method developed in the present study is general and straightforward in comparison with those available in the literature, as it involves only the required sequential steps to calculate the mutual inductance between the independent segments of the coils.

Journal ArticleDOI
TL;DR: This paper is the first to create GS-Galerkin weak-form models without using a background mesh tied to the nodes; hence, the EFS-RPIM is a true meshfree approach.
Abstract: This paper presents a novel element-free smoothed radial point interpolation method (EFS-RPIM) for solving 2D and 3D solid mechanics problems. The idea of the present technique is that field nodes and smoothing cells (SCs) used for smoothing operations are created independently and without using a background grid, which saves tedious mesh generation efforts and makes the pre-process more flexible. In the formulation, we use the generalized smoothed Galerkin (GS-Galerkin) weak-form that requires only discrete values of shape functions that can be created using the RPIM. By varying the number of nodes and SCs as well as their ratio, the accuracy can be improved and upper bound or lower bound solutions can be obtained by design. The SCs can be regular or irregular polygons. In this work we tested triangular, quadrangular, n-sided polygonal, and tetrahedral SCs as examples. Stability condition is examined and some criteria are found to avoid the presence of spurious zero-energy modes. This is the first work to create GS-Galerkin weak-form models without using a background mesh tied to the nodes, and hence the EFS-RPIM is a true meshfree approach. The proposed EFS-RPIM is so far the only technique that can offer both upper and lower bound solutions. Numerical results show that the EFS-RPIM gives accurate results and a desirable convergence rate when compared with the standard finite element method (FEM) and the cell-based smoothed FEM (CS-FEM).

Journal ArticleDOI
TL;DR: The results show that, compared with the existing traditional method in ArcGIS software, the proposed SUPA method can preserve the global features of general polygons and the orthogonal features of buildings while maintaining reliable aggregation results.

Journal ArticleDOI
TL;DR: The numerical results demonstrate the applicability of the modeling and optimization approach to a broad class of highly non-convex ellipse packing problems, by consistently returning good quality feasible solutions in all (231) illustrative model instances considered here.
Abstract: We present model development and numerical solution approaches to the problem of packing a general set of ellipses without overlaps into an optimized polygon. Specifically, for a given set of ellipses, and a chosen integer m ≥ 3, we minimize the apothem of the regular m-polygon container. Our modeling and solution strategy is based on the concept of embedded Lagrange multipliers. To solve models with up to n ≤ 10 ellipses, we use the LGO solver suite for global–local nonlinear optimization. In order to reduce increasing runtimes, for model instances with 10 ≤ n ≤ 20 ellipses, we apply local search launching the Ipopt solver from selected random starting points. The numerical results demonstrate the applicability of our modeling and optimization approach to a broad class of highly non-convex ellipse packing problems, by consistently returning good quality feasible solutions in all (231) illustrative model instances considered here.

Journal ArticleDOI
TL;DR: A solution approach is proposed combining a new starting point algorithm and a new modification of the LOFRT procedure (J Glob Optim 65(2):283–307) to search for locally optimal solutions.
Abstract: Packing ellipses with arbitrary orientation into a convex polygonal container which has a given shape is considered. The objective is to find a minimum scaling (homothetic) coefficient for the polygon still containing a given collection of ellipses. New phi-functions and quasi phi-functions to describe non-overlapping and containment constraints are introduced. The packing problem is then stated as a continuous nonlinear programming problem. A solution approach is proposed combining a new starting point algorithm and a new modification of the LOFRT procedure (J Glob Optim 65(2):283–307, 2016) to search for locally optimal solutions. Computational results are provided to demonstrate the efficiency of our approach. The computational results are presented for new problem instances, as well as for instances presented in the recent paper ( http://www.optimization-online.org/DB_FILE/2016/03/5348.pdf , 2016).

Journal ArticleDOI
Kong Ling, Shuai Zhang, Peng-Zhan Wu, Si-Yuan Yang, Wen-Quan Tao
TL;DR: An extension of the coupled volume-of-fluid and level-set method (VOSET) for simulating free-surface flows on arbitrary 2D polygon meshes is presented; it shows excellent agreement with experimental data and benchmark solutions in the literature.

Posted Content
Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, Raquel Urtasun
TL;DR: PolyTransform as discussed by the authors uses a segmentation network to generate instance masks and then converts the masks into a set of polygons that are then fed to a deforming network that transforms the polygons such that they better fit the object boundaries.
Abstract: In this paper, we propose PolyTransform, a novel instance segmentation algorithm that produces precise, geometry-preserving masks by combining the strengths of prevailing segmentation approaches and modern polygon-based methods. In particular, we first exploit a segmentation network to generate instance masks. We then convert the masks into a set of polygons that are then fed to a deforming network that transforms the polygons such that they better fit the object boundaries. Our experiments on the challenging Cityscapes dataset show that our PolyTransform significantly improves the performance of the backbone instance segmentation network and ranks 1st on the Cityscapes test-set leaderboard. We also show impressive gains in the interactive annotation setting. We release the code at this https URL.
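The mask-to-polygon initialization can be reproduced with scikit-image: trace iso-contours of each instance mask and simplify them into polygons. The deforming network that then refines these polygons is the paper's contribution and is not sketched; the tolerance value here is an arbitrary choice.

```python
import numpy as np
from skimage import measure

def mask_to_polygons(mask, tolerance=1.5):
    """mask: (H, W) binary instance mask. Returns a list of (K, 2) polygon vertex arrays."""
    contours = measure.find_contours(mask.astype(float), level=0.5)
    return [measure.approximate_polygon(c, tolerance=tolerance) for c in contours]

mask = np.zeros((64, 64), dtype=np.uint8)
mask[16:48, 20:44] = 1                       # a rectangular "instance"
polys = mask_to_polygons(mask)
print(len(polys), polys[0].shape)            # 1 polygon, a handful of (row, col) vertices
```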

Book ChapterDOI
24 Apr 2019
TL;DR: It is possible to approximate artistic images from a limited number of stacked semi-transparent colored polygons but the locations of the vertices, the drawing order of the polygons and the RGBA color values must be optimized for the entire set at once.
Abstract: It is possible to approximate artistic images from a limited number of stacked semi-transparent colored polygons. To match the target image as closely as possible, the locations of the vertices, the drawing order of the polygons and the RGBA color values must be optimized for the entire set at once. Because of the vast combinatorial space, the relatively simple constraints and the well-defined objective function, these optimization problems appear to be well suited for nature-inspired optimization algorithms.
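The objective function can be made concrete with Pillow: rasterize the polygons in drawing order with their RGBA colors, alpha-composite them onto a canvas, and score the result against the target image. The white background and the mean-squared-error metric are illustrative choices here; the nature-inspired optimizer itself is not shown.

```python
import numpy as np
from PIL import Image, ImageDraw

def render(polygons, size):
    """polygons: list of (vertices, rgba) drawn in order onto a white canvas.

    vertices is a list of (x, y) tuples; rgba is a 4-tuple with alpha in 0..255.
    """
    canvas = Image.new("RGBA", size, (255, 255, 255, 255))
    for vertices, rgba in polygons:
        layer = Image.new("RGBA", size, (0, 0, 0, 0))
        ImageDraw.Draw(layer).polygon(vertices, fill=rgba)
        canvas = Image.alpha_composite(canvas, layer)
    return canvas

def objective(polygons, target):
    """Mean squared error between the rendering and the target RGB image."""
    img = np.asarray(render(polygons, target.size).convert("RGB"), dtype=float)
    return float(np.mean((img - np.asarray(target, dtype=float)) ** 2))

target = Image.new("RGB", (64, 64), (200, 30, 30))
candidate = [([(5, 5), (60, 10), (30, 55)], (220, 40, 40, 128))]
print(objective(candidate, target))
```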

Journal ArticleDOI
TL;DR: A hierarchical pipeline for functional map inference is suggested, allowing us to compute correspondences between surfaces at fine subdivision levels, with hundreds of thousands of polygons, an order of magnitude faster than existing correspondence methods.
Abstract: We propose a novel approach for computing correspondences between subdivision surfaces with different control polygons. Our main observation is that the multi-resolution spectral basis functions that are often used for computing a functional correspondence can be compactly represented on subdivision surfaces, and therefore can be efficiently computed. Furthermore, the reconstruction of a pointwise map from a functional correspondence also greatly benefits from the subdivision structure. Leveraging these observations, we suggest a hierarchical pipeline for functional map inference, allowing us to compute correspondences between surfaces at fine subdivision levels, with hundreds of thousands of polygons, an order of magnitude faster than existing correspondence methods. We demonstrate the applicability of our results by transferring high-resolution sculpting displacement maps and textures between subdivision models.

Journal ArticleDOI
TL;DR: In this article, an alternative unit cell called the equivalent rectangle is defined, which has the same tensor impedance properties of a general polygon unit cell in the surface impedance pattern and is calculated by using the moment of inertia equations between the polygon and the rectangle.
Abstract: A patterning technique known as the point shifting method has enabled the generation of smoothly varying and highly anisotropic impedance surfaces with a wide range of patch sizes and shapes. Previously, the surface impedances of different shapes of unit cells were assumed by the impedance of similar size rectangle cells. In this paper, we study an approach to calculate the surface impedances for anisotropic polygon unit cells more accurately, based on the area moment of inertia equations. We define an alternative unit cell called the equivalent rectangle, which has the same tensor impedance properties of a general polygon unit cell in the surface impedance pattern. The size of the equivalent rectangle cell is calculated by using the moment of inertia equations between the polygon and the rectangle. The extracted surface impedance from the equivalent rectangle is compared to the surface impedance of polygon in the unit cell simulation, validating our method. We also verify the method by comparing the results between PEC patterns and impedance boundary sheets to which the extracted impedances are applied. Simulations of the patterns are verified by measurements as well.
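One way to read the "equivalent rectangle" construction is: compute the polygon's centroidal second moments of area with the standard shoelace-type formulas, then choose the rectangle with the same pair of moments. The sketch below follows that reading; the paper's exact matching conditions may differ.

```python
import numpy as np

def polygon_equivalent_rectangle(poly):
    """poly: (N, 2) counter-clockwise vertices.

    Compute area, centroid, and centroidal second moments of area (shoelace-type
    formulas), then return the (width, height) of the rectangle with the same
    pair of moments.
    """
    x, y = poly[:, 0], poly[:, 1]
    x1, y1 = np.roll(x, -1), np.roll(y, -1)
    cross = x * y1 - x1 * y
    A = cross.sum() / 2.0
    cx = ((x + x1) * cross).sum() / (6.0 * A)
    cy = ((y + y1) * cross).sum() / (6.0 * A)
    Ix = ((y**2 + y * y1 + y1**2) * cross).sum() / 12.0 - A * cy**2   # about centroidal x-axis
    Iy = ((x**2 + x * x1 + x1**2) * cross).sum() / 12.0 - A * cx**2   # about centroidal y-axis
    hw_ratio = np.sqrt(Ix / Iy)                 # h / w, since Ix/Iy = h^2/w^2 for a rectangle
    wh = (144.0 * Ix * Iy) ** 0.25              # w * h, since Ix*Iy = (w*h)^4 / 144
    w = np.sqrt(wh / hw_ratio)
    return w, hw_ratio * w

square = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
print(polygon_equivalent_rectangle(square))     # (2.0, 2.0)
```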

Journal ArticleDOI
11 Jan 2019
TL;DR: This letter proposes a nonimpact rolling locomotion scheme to avoid VGT damage and optimizes the velocity of the VGT's nodes at every time step so that the center of mass follows a desired trajectory of rolling motion.
Abstract: A variable geometry truss (VGT) is a modular truss-structured robot consisting of linear actuators and three-degree-of-freedom joints. Having a sophisticated structure, the VGT can easily be damaged when it rolls and impacts the ground. This letter proposes a nonimpact rolling locomotion scheme to avoid VGT damage. It is assumed that the VGT moves quasi-statically and maintains a static stability. There exists a control phase and a rolling phase during locomotion. During the control phase, the VGT can freely move its center of mass within the supporting polygon. During the rolling phase, the VGT's center of mass is fixed at the edge of the support polygon, and it tilts forward until a node touches the ground to make a new support polygon. This algorithm optimizes the velocity of the VGT's nodes at every time step so that the center of mass follows a desired trajectory of rolling motion. A simulation verifies that the algorithm ensures that the VGT maintains its static stability, does not tumble, and accurately follows its desired trajectory.
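The static-stability condition used in both phases (keep the projected center of mass inside, or pinned to an edge of, the support polygon) rests on a plain point-in-polygon test; a minimal 2D sketch with a hypothetical support polygon and CoM projection:

```python
import numpy as np

def com_inside_support(com_xy, support_polygon):
    """Ray-casting test: is the ground projection of the center of mass inside the
    support polygon? support_polygon: (N, 2) vertices of the polygon formed by the
    nodes currently touching the ground."""
    x, y = com_xy
    inside = False
    pts = np.asarray(support_polygon, dtype=float)
    for (x1, y1), (x2, y2) in zip(pts, np.roll(pts, -1, axis=0)):
        if (y1 > y) != (y2 > y):                           # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

support = np.array([[0, 0], [1, 0], [1, 1], [0, 1]])       # hypothetical contact polygon
print(com_inside_support((0.5, 0.5), support))              # True  -> statically stable
print(com_inside_support((1.5, 0.5), support))              # False -> would tumble
```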

Journal ArticleDOI
TL;DR: Numerical results demonstrate that the proposed Hybrid Trefftz Voronoi finite elements on polygon meshes are remarkably more efficient than the conventional finite element method (ABAQUS) and are consequently attractive for micromechanical and microthermal modeling of composite and porous materials.
Abstract: Hybrid Trefftz Voronoi finite elements on polygon meshes are developed to capture the thermal response of heterogeneous materials with randomly dispersed inclusions/voids. In this approach, the intra-element temperature field within the polygon is approximated by the homogeneous solutions to the heat conduction governing equation, which are also called the T-complete functions, whereas an auxiliary temperature frame field is independently defined on the polygonal boundary. The continuity at the matrix–inclusion interface is guaranteed by exterior and interior T-complete functions, and that across element boundaries is enforced by incorporating the known hybrid functional at the element level. Thus, only the hybrid functional needs to be established for the matrix region. In conjunction with the divergence theorem, the domain integral involved in the functional vanishes, which leads to an element stiffness equation including boundary integrals only. It is reported that there exists a linear relationship between the maximum number of T-complete functions and the number of Gauss points sampled on each element side. Numerical results demonstrate that the proposed methodology is remarkably more efficient than the conventional finite element method (ABAQUS) and is consequently attractive for micromechanical and microthermal modeling of composite and porous materials.
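For 2D steady heat conduction (the Laplace equation), the T-complete functions referred to above are the standard harmonic series in polar coordinates; listing them makes the interior and exterior intra-element expansions concrete, although the exact sets and scalings used in the paper may differ:

```latex
% Interior and exterior T-complete expansions for \nabla^2 T = 0 in polar coordinates (r, \theta):
\[
T(r,\theta) \approx c_0 + \sum_{n=1}^{N} r^{n}\,(a_n \cos n\theta + b_n \sin n\theta)
\quad\text{(region containing the origin),}
\]
\[
T(r,\theta) \approx c_0 + d_0 \ln r + \sum_{n=1}^{N} r^{-n}\,(a_n \cos n\theta + b_n \sin n\theta)
\quad\text{(matrix region exterior to an inclusion or void).}
\]
```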