
Showing papers on "Polygon published in 2021"


Journal ArticleDOI
TL;DR: A new fully automatic three-dimensional building reconstruction method that can generate first level of detail (LoD 1) building models from multi-view aerial images without any assistance from other data is introduced.
Abstract: The study presented in this paper introduced a new fully automatic three-dimensional building reconstruction method that can generate first level of detail (LoD 1) building models from multi-view aerial images without any assistance from other data. The accuracy and completeness of our reconstructed models have approached those of manually delineated models to a large extent. The presented method consists of three parts: (1) efficient dense matching and Earth surface reconstruction, (2) reliable building footprint extraction and polygon regularization, and (3) highly accurate height inference of building roofs and bases. First, our novel deep learning-based multi-view matching method, composed of a convolutional neural network, gated recurrent convolutions, and a multi-scale pyramid matching structure, is used to reconstruct the digital surface model (DSM) and digital orthophoto map (DOM) efficiently without generating epipolarly rectified images. Second, our three-stage 2D building extraction method is introduced to deliver reliable and accurate building contours. Deep learning-based segmentation, assisted by the DSM, is used to segment buildings from backgrounds, and the generated building maps are fused with a terrain classification algorithm to reach better segmentation results. A polygon regularization algorithm and a level set algorithm are thereafter employed to convert the binary segmentation maps into structured vector-form building polygons. Third, a novel method is introduced to infer the height of building roofs and bases using adaptive local terrain filtering and neighborhood buffer analysis. We tested our method on a large experimental area that covered 2284 aerial images and 782 various types of buildings. In terms of correctness and completeness, our results exceeded those of other similar methods in a between-method comparison by at least 15% for individual 3D building models, with many of them comparable to manual delineation results.

50 citations
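The roof/base height inference described above combines terrain filtering with a neighborhood buffer analysis. The sketch below illustrates the buffer idea only, under the assumption of a DSM raster and a rasterized boolean footprint mask as inputs; it is not the authors' algorithm.

```python
# Illustrative sketch (not the paper's algorithm): infer roof and base heights
# for one building from a DSM, using a buffer ring around the footprint as a
# proxy for the local terrain. `dsm` and `footprint_mask` are assumed inputs.
import numpy as np
from scipy.ndimage import binary_dilation

def infer_heights(dsm, footprint_mask, buffer_px=10):
    """Return (base_height, roof_height); footprint_mask is a boolean array."""
    # Ring of pixels around the footprint approximates the neighbouring terrain.
    ring = binary_dilation(footprint_mask, iterations=buffer_px) & ~footprint_mask
    # A low percentile over the ring rejects adjacent buildings and vegetation.
    base_height = float(np.percentile(dsm[ring], 10))
    # A high percentile inside the footprint is robust to matching noise.
    roof_height = float(np.percentile(dsm[footprint_mask], 90))
    return base_height, roof_height
```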


Proceedings ArticleDOI
01 Jan 2021
TL;DR: In this article, the authors explore better representations like oriented bounding box, ellipse, and generic polygon for object detection in fisheye images and use the IoU metric to compare these representations using accurate instance segmentation ground truth.
Abstract: Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively less explored in the case of fisheye cameras. The standard bounding box fails in fisheye cameras due to the strong radial distortion, particularly in the image’s periphery. We explore better representations like oriented bounding box, ellipse, and generic polygon for object detection in fisheye images in this work. We use the IoU metric to compare these representations using accurate instance segmentation ground truth. We design a novel curved bounding box model that has optimal properties for fisheye distortion models. We also design a curvature adaptive perimeter sampling method for obtaining polygon vertices, improving relative mAP score by 4.9% compared to uniform sampling. Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%. To the best of our knowledge, it is the first detailed study on object detection with fisheye cameras for autonomous driving scenarios. The dataset, comprising 10,000 images along with ground truth for all the object representations, will be made public to encourage further research. We summarize our work in a short video with qualitative results at https://youtu.be/iLkOzvJpL-A.

29 citations
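The IoU comparison between representations mentioned above is straightforward to reproduce once each representation (box, oriented box, ellipse, or polygon) is expressed as a vertex list. The sketch below uses shapely and made-up coordinates; it illustrates the metric, not the authors' evaluation code.

```python
# Hedged sketch: IoU between a candidate representation (box, ellipse or
# polygon, expressed as a vertex list) and instance-segmentation ground truth.
from shapely.geometry import Polygon

def representation_iou(candidate_vertices, gt_vertices):
    cand, gt = Polygon(candidate_vertices), Polygon(gt_vertices)
    union = cand.union(gt).area
    return cand.intersection(gt).area / union if union > 0 else 0.0

# Example: an axis-aligned box versus a radially distorted ground-truth contour.
box = [(0, 0), (4, 0), (4, 3), (0, 3)]
gt = [(0.5, 0.2), (3.8, 0.6), (4.2, 2.8), (0.2, 3.1)]
print(f"IoU = {representation_iou(box, gt):.3f}")
```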


Journal ArticleDOI
01 Mar 2021
TL;DR: In this paper, the locus of the Circumcenter of Mass of Poncelet polygons and the limit of the Center of Mass for degenerate Poncelet polygons are investigated.
Abstract: We study the locus of the Circumcenter of Mass of Poncelet polygons, and the limit of the Center of Mass (when we consider the polygon as a “homogeneous lamina”) for degenerate Poncelet polygons. We also provide a proof of one of Dan Reznik’s invariants for billiard trajectories. Finally, we take a look at how the scene looks when we shift to spherical geometry.

28 citations
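For context, the “homogeneous lamina” Center of Mass referred to above is the ordinary polygon centroid; for a polygon with vertices (x_1, y_1), …, (x_n, y_n) (indices taken modulo n) it is given by the standard formula below, stated here as background rather than taken from the paper.

```latex
A = \frac{1}{2}\sum_{i=1}^{n}\bigl(x_i y_{i+1} - x_{i+1} y_i\bigr), \qquad
C_x = \frac{1}{6A}\sum_{i=1}^{n}(x_i + x_{i+1})\bigl(x_i y_{i+1} - x_{i+1} y_i\bigr), \qquad
C_y = \frac{1}{6A}\sum_{i=1}^{n}(y_i + y_{i+1})\bigl(x_i y_{i+1} - x_{i+1} y_i\bigr).
```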


Journal ArticleDOI
TL;DR: In this article, the Innovative Polygon Trend Analysis (IPTA) method was applied to total monthly precipitation data of Susurluk Basin, one of Turkey's important basins.
Abstract: The effects of climate change caused by global warming can be seen in changes of climate variables such as precipitation, humidity, and temperature. These effects of global climate change can be interpreted as a result of the examination of meteorological parameters. One of the most effective methods to investigate these effects is trend analysis. The Innovative Polygon Trend Analysis (IPTA) method is a trend analysis method that has emerged in recent years. The distinctive features of this method compared with other trend methods are that it depends on time series and can compare data series among themselves. Therefore, in this study, the IPTA method was applied to total monthly precipitation data of Susurluk Basin, one of Turkey’s important basins. Data from ten precipitation observation stations in Susurluk Basin were used. Data were provided by the General Directorate of State Meteorology Affairs. The length of this data series was 12 years (2006–2017). As a result of the study, since no regular polygon appears in the IPTA graphics of any station, it is seen that the precipitation data vary from year to year. This change is increasing at some stations and decreasing at others.

26 citations
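At its core, IPTA splits the record into two halves and plots, for each month, the first-half statistic against the second-half statistic; connecting consecutive months yields the polygon whose (ir)regularity is discussed above. A minimal sketch under that reading, using synthetic data rather than the basin data of the paper:

```python
# Hedged sketch of the IPTA idea (not the authors' implementation): split the
# monthly series into two equal halves, pair each month's first-half mean with
# its second-half mean, and connect consecutive months into a polygon. Points
# off the 1:1 line indicate increasing or decreasing behaviour for that month.
import numpy as np

def ipta_points(monthly):
    """monthly: shape (n_years, 12) array of monthly totals or means."""
    half = monthly.shape[0] // 2
    first = monthly[:half].mean(axis=0)            # 12 values, first half of record
    second = monthly[half:2 * half].mean(axis=0)   # 12 values, second half
    return np.column_stack([first, second])        # vertices of the IPTA polygon

# Example with synthetic data: 12 years of monthly precipitation totals.
rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=30.0, size=(12, 12))
verts = ipta_points(data)
trend_up = verts[:, 1] > verts[:, 0]   # months above the 1:1 (no-trend) line
print(np.where(trend_up)[0] + 1)       # month numbers with an increasing signal
```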


DOI
01 Jan 2021
TL;DR: This paper presents an approach that combines several algorithms to detect basic polygons from a set of arbitrary line segments in a plane in polynomial time and space, with complexities of O((N + M)) and O((N + M)) respectively, where N is the number of line segments and M is the number of intersections between line segments.
Abstract: Detecting polygons defined by a set of line segments in a plane is an important step in the analysis of vectorial drawings. This paper presents an approach that combines several algorithms to detect basic polygons from a set of arbitrary line segments. The resulting algorithm runs in polynomial time and space, with complexities of O((N +M)) and O((N +M)) respectively, where N is the number of line segments and M is the number of intersections between line segments. Our choice of algorithms was made to strike a good compromise between efficiency and ease of implementation. The result is a simple and efficient solution to detect polygons from lines.

24 citations
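For readers who want to experiment with the same task, an off-the-shelf equivalent (not the paper's algorithm) is to node the segments at their intersection points and polygonize the resulting planar graph, for example with shapely:

```python
# A library-based sketch of the same task (not the paper's algorithm): detect
# the basic polygons induced by a set of arbitrary line segments by noding the
# segments at their intersections and polygonizing the resulting planar graph.
from shapely.geometry import LineString
from shapely.ops import unary_union, polygonize

segments = [
    LineString([(0, 0), (4, 0)]),
    LineString([(4, 0), (2, 3)]),
    LineString([(2, 3), (0, 0)]),
    LineString([(0, 1.5), (4, 1.5)]),  # a crossing segment splits the triangle
]

# unary_union nodes the segments at intersection points (the role of the
# sweep-line step); polygonize then extracts the minimal closed faces.
faces = list(polygonize(unary_union(segments)))
for face in faces:
    print(round(face.area, 3), list(face.exterior.coords))
```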


Journal ArticleDOI
TL;DR: In this article, a systematic study of local positive spaces which arise in the context of the Amplituhedron construction for scattering amplitudes in planar maximally supersymmetric Yang-Mills theory was initiated.
Abstract: We initiate the systematic study of local positive spaces which arise in the context of the Amplituhedron construction for scattering amplitudes in planar maximally supersymmetric Yang-Mills theory. We show that all local positive spaces relevant for one-loop MHV amplitudes are characterized by certain sign-flip conditions and are associated with surprisingly simple logarithmic forms. In the maximal sign-flip case they are finite one-loop octagons. Particular combinations of sign-flip spaces can be glued into new local positive geometries. These correspond to local pentagon integrands that appear in the local expansion of the MHV one-loop amplitude. We show that, geometrically, these pentagons do not triangulate the original Amplituhedron space but rather its twin “Amplituhedron-Prime”. This new geometry has the same boundary structure as the Amplituhedron (and therefore the same logarithmic form) but differs in the bulk as a geometric space. On certain two-dimensional boundaries, where the Amplituhedron geometry reduces to a polygon, we check that both spaces map to the same dual polygon. Interestingly, we find that the pentagons internally triangulate that dual space. This gives a direct evidence that the chiral pentagons are natural building blocks for a yet-to-be discovered dual Amplituhedron.

21 citations


Journal ArticleDOI
TL;DR: An adaptive polygon generation algorithm (APGA), a novel method that aims at directly generating a polygonal output, parameterized as a sequence of building vertices, to outline each building instance, outperformed state-of-the-art methods in terms of building coverage and geometric similarity.
Abstract: Buildings serve as the main places of human activities, and it is essential to automatically extract each building instance for a wide range of applications. Recently, automatic building segmentation approaches have made great progress in both detection and segmentation accuracy due to the rapid development of deep learning. However, these approaches struggle to delineate regular and accurate building boundaries due to the limitations in inferring overall structure of the building instance; this might lead to inconsistency in building geometry and difficulty in being applied directly to practical engineering. To tackle this challenge, this article presents an adaptive polygon generation algorithm (APGA), a novel method that aims at directly generating a polygonal output, parameterized as a sequence of building vertices, to outline each building instance. To achieve this, APGA predicts the candidate locations of building vertices and determines the arrangement of these vertices with the help of the position and orientation of the building boundary. Moreover, to introduce local context features and achieve improved performance of the predicted building polygon, APGA integrates finer structures around the candidate vertices to refine their positions. Experiments on several challenging building extraction datasets demonstrated that APGA outperformed state-of-the-art methods in terms of building coverage and geometric similarity.

20 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed consecutive k-out-of-n: F systems with shared components between adjacent subsystems, which extend the consecutive k-out-of-n: F linear and circular models, and combine the sum of disjoint products with the finite Markov chain imbedding approach.

19 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a real-time corner smoothing algorithm for five-axis machine tools by constructing the C3 continuous asymmetrical PH splines, where the control points related to the two ends of the PH spline can be independently adjusted through reasonably introducing two more adjustable variables.

19 citations


Journal ArticleDOI
TL;DR: A method based on convex hulls and position graphs is proposed to measure the similarity between multipolygons; it accounts for the relationships across the entire complex geometrical shape and the components of a multipolygon when measuring similarity.
Abstract: Polygon similarity can play an important role in geographic information retrieval, map matching and updating, and spatial data mining applications. Geographic information science (GIS) represents v...

19 citations


Journal ArticleDOI
TL;DR: This article proposes an approach to content-preserving image stitching with regular boundary constraints, which aims to stitch multiple images into a panoramic image with piecewise rectangular boundaries; the method efficiently produces visually pleasing panoramas with regular boundaries and unnoticeable distortions.
Abstract: This article proposes an approach to content-preserving image stitching with regular boundary constraints, which aims to stitch multiple images to generate a panoramic image with piecewise rectangular boundaries. Existing methods treat image stitching and rectangling as two separate steps, which may result in suboptimal results as the stitching process is not aware of the further warping needs for rectangling. We address these limitations by formulating image stitching with regular boundaries in a unified optimization framework. Starting from the initial stitching result produced by traditional warping-based optimization, we obtain the irregular boundary from the warped meshes by polygon Boolean operations which robustly handle arbitrary mesh compositions. By analyzing the irregular boundary, we construct a piecewise rectangular boundary. Based on this, we further incorporate line and regular boundary preservation constraints into the image stitching framework, and conduct iterative optimizations to obtain an optimal piecewise rectangular boundary. Thus we can make the boundary of the stitching result as close as possible to a rectangle, while reducing unwanted distortions. We further extend our method to video stitching, by integrating the temporal coherence into the optimization. Experiments show that our method efficiently produces visually pleasing panoramas with regular boundaries and unnoticeable distortions.
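The polygon Boolean step described above, which obtains the irregular outer boundary from the warped meshes, can be sketched with an off-the-shelf geometry library; the two quads below merely stand in for warped mesh cells and are illustrative, not the paper's data structures.

```python
# Hedged sketch of one step described above: obtaining the irregular outer
# boundary of a stitched result by a Boolean union of warped mesh cells
# (quads), using shapely. The mesh data here is illustrative.
from shapely.geometry import Polygon
from shapely.ops import unary_union

# Two warped quads standing in for the deformed meshes of two input images.
quad_a = Polygon([(0, 0), (5, 0.4), (5.2, 4), (-0.3, 3.6)])
quad_b = Polygon([(4.5, 0.2), (9, -0.5), (9.4, 3.8), (4.8, 4.2)])

composite = unary_union([quad_a, quad_b])   # robust to arbitrary compositions
irregular_boundary = composite.exterior     # input to the rectangling stage
print(len(irregular_boundary.coords), "boundary vertices")
```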

Journal ArticleDOI
TL;DR: Through a set of computational and optimization efficiencies, the approach can be applied to complex images composed of a number of overlapped regions, and it shows superior accuracy and flexibility in ellipse recognition relative to other methods.
Abstract: Recognition of overlapping objects is required in many applications in the field of computer vision. Examples include cell segmentation, bubble detection and bloodstain pattern analysis. This paper presents a method to identify overlapping objects by approximating them with ellipses. The method is intended to be applied to complex-shaped regions which are believed to be composed of one or more overlapping objects. The method has two primary steps. First, a pool of candidate ellipses is generated by applying the Euclidean distance transform on a compressed image, and the pool is filtered by an overlaying method. Second, the concave points on the contour of the region of interest are extracted by polygon approximation to divide the contour into segments. Then, the optimal ellipses are selected from among the candidates by choosing a minimal subset that best fits the identified segments. We propose the use of the adjusted Rand index, commonly applied in clustering, to compare the fitting result with ground truth. Through a set of computational and optimization efficiencies, we are able to apply our approach to complex images composed of a number of overlapped regions. Experimental results on a synthetic data set, two types of cell images and bloodstain patterns show the superior accuracy and flexibility of our method in ellipse recognition, relative to other methods.
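The proposed use of the adjusted Rand index amounts to comparing two per-pixel labelings: the one induced by the fitted ellipses and the ground truth. A minimal sketch with scikit-learn follows; the label arrays are made up for illustration.

```python
# Hedged sketch of the evaluation step: the adjusted Rand index (borrowed from
# clustering, as the paper proposes) compares the pixel labelling induced by
# the fitted ellipses with the ground-truth labelling.
import numpy as np
from sklearn.metrics import adjusted_rand_score

# Per-pixel labels over a region of interest: 0 = background, 1..k = object id.
ground_truth = np.array([0, 1, 1, 1, 2, 2, 2, 0, 1, 2])
fitted       = np.array([0, 1, 1, 2, 2, 2, 2, 0, 1, 2])  # one pixel disagrees

print(f"ARI = {adjusted_rand_score(ground_truth, fitted):.3f}")
```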

Journal ArticleDOI
01 Dec 2021
TL;DR: An efficient line-offset algorithm for general polygonal shapes with islands is presented; it has been implemented in Visual C++ and applied to offset point-sequence curves containing several islands.
Abstract: This paper presents an efficient line-offset algorithm for general polygonal shapes with islands. A developed sweep-line (SL) algorithm is introduced to find all self-intersection points accurately and quickly. Previous work is limited to handling polygons that have no line segments parallel to the sweep-line direction. An invalid loop detection and removal (ILDR) algorithm is proposed. The invalid-loop detection algorithm divides the polygon at its self-intersection points into a set of small polygons and re-polygonizes them. The polygons are checked for direction; invalid polygons always have the inverse direction of the boundary polygon. The proposed algorithm has been implemented in Visual C++ and applied to offset point-sequence curves containing several islands.
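An off-the-shelf way to reproduce the offsetting task (though not the paper's sweep-line and ILDR machinery) is inward buffering of a polygon with island holes, which implicitly discards the invalid loops:

```python
# A library-based sketch of the offsetting task (not the paper's algorithm):
# inward offsetting of a polygonal boundary with an island hole.
from shapely.geometry import Polygon

outer = [(0, 0), (20, 0), (20, 12), (0, 12)]
island = [(8, 4), (12, 4), (12, 8), (8, 8)]          # a hole inside the part
shape = Polygon(outer, holes=[island])

for d in (1.0, 2.0, 3.0):
    offset = shape.buffer(-d, join_style=2)          # join_style=2 = mitre
    # The result may split into several pieces or vanish for large offsets,
    # which corresponds to discarding the invalid loops in the paper's method.
    print(d, offset.geom_type, round(offset.area, 2))
```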

Journal ArticleDOI
TL;DR: In this article, the stability analysis of a foldable helical antenna with the Kresling origami pattern, which is subjected to tip follower force, was performed using the Hopf-bifurcation method.

Posted Content
TL;DR: In this paper, the authors show that the Virtual Element Method still converges with almost optimal rates and low errors in the L2 and H1 norms even if they significantly break the regularity assumptions.
Abstract: Since its introduction, the Virtual Element Method (VEM) was shown to be able to deal with a large variety of polygons, while achieving good convergence rates. The regularity assumptions proposed in the VEM literature to guarantee the convergence on a theoretical basis are therefore quite general. They have been deduced in analogy to the similar conditions developed in the Finite Element Methods (FEMs) analysis. In this work, we experimentally show that the VEM still converges with almost optimal rates and low errors in the L2 and H1 norms even if we significantly break the regularity assumptions that are used in the literature. These results suggest that the regularity assumptions proposed so far might be overestimated. We also exhibit examples on which the VEM sub-optimally converges or diverges. Finally, we introduce a mesh quality indicator that experimentally correlates the extent of the violation of the regularity assumptions with the performance of the VEM solution, thus predicting whether a dataset is potentially critical for VEM.
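A mesh quality indicator of this kind is built from per-element regularity measures. The sketch below computes two such generic measures for a single polygonal element; it is illustrative only and is not the indicator defined in the paper.

```python
# Illustrative (not the paper's indicator): simple per-element regularity
# measures of the kind that polygonal-mesh quality heuristics combine,
# computed for one element given its vertices in counter-clockwise order.
import numpy as np

def element_metrics(verts):
    edges = np.roll(verts, -1, axis=0) - verts
    edge_len = np.linalg.norm(edges, axis=1)
    diam = max(np.linalg.norm(p - q) for p in verts for q in verts)
    # Shoelace formula for the element area.
    area = 0.5 * abs(np.sum(verts[:, 0] * np.roll(verts[:, 1], -1)
                            - np.roll(verts[:, 0], -1) * verts[:, 1]))
    return {
        "shortest_edge_over_diameter": edge_len.min() / diam,  # small => sliver edge
        "area_over_diameter_sq": area / diam**2,               # small => stretched element
        "n_vertices": len(verts),
    }

print(element_metrics(np.array([(0, 0), (1, 0), (1.2, 0.9), (0.4, 1.1), (-0.1, 0.5)])))
```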

Journal ArticleDOI
Kai Chen1, Degao Zou1, Hongxiang Tang1, Jingmao Liu1, Yue Zhuo1 
TL;DR: In this article, a flexible polygonal Cosserat continuum analysis method is firstly deduced and numerically developed based on the theory of Scaled Boundary FEM, and stress concentration on the holes embedded in different structures is then investigated using the proposed method and verified against theoretical solution, which not only shows good agreement, but also reasonably weakens the stress concentration.
Abstract: Cosserat continuum method can be used to solve stress concentration of holes. However, with the shape limitation of its elements, it is worthwhile to improve the element quality so that this method can be universal and feasible to complex situations. In this paper, a flexible polygonal Cosserat continuum analysis method is firstly deduced and numerically developed based on the theory of Scaled Boundary FEM. Stress concentration on the holes embedded in different structures is then investigated using the proposed method and verified against theoretical solution, which not only shows good agreement, but also reasonably weakens the stress concentration. The proposed method can closely replicate the theoretical solution for the case when the material is nearly incompressible (Poisson's ratio close to 0.5), also indicating the robustness of this method. Additionally, complex polygonal elements can be solved directly, coupling the quadtree and polygon discretization techniques seamlessly, wherein the efficiency and convenience are improved for processing complex geometries. The proposed method can provide important technical support for stress concentration analysis of structures with complex holes, and contribute to facilitating shape optimization of holes design.

Journal ArticleDOI
TL;DR: In this article, a theory of multiplicities of roots for polynomials over hyperfields is developed, which is used to provide a unified and conceptual proof of both Descartes' rule of signs and Newton's polygon rule.

Journal ArticleDOI
TL;DR: In this article, the general structure of (2n + 1)-gon and 2n-simplex equations in direct sums of vector spaces is examined, and a construction for their solutions, parameterized by elements of the Grassmannian Gr(n+ 1, 2n+1).
Abstract: We consider polygon and simplex equations, of which the simplest nontrivial examples are pentagon (5-gon) and Yang–Baxter (2-simplex), respectively. We examine the general structure of (2n + 1)-gon and 2n-simplex equations in direct sums of vector spaces. Then, we provide a construction for their solutions, parameterized by elements of the Grassmannian Gr(n + 1, 2n + 1).
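For context, the two simplest cases named above can be written in the usual leg notation, with subscripts marking the factors of the triple (direct or tensor) sum on which the operator acts; these are the standard forms, not quoted from the paper.

```latex
S_{12}\,S_{13}\,S_{23} = S_{23}\,S_{12}
\quad\text{(pentagon, 5-gon)},
\qquad
R_{12}\,R_{13}\,R_{23} = R_{23}\,R_{13}\,R_{12}
\quad\text{(Yang--Baxter, 2-simplex)}.
```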

Journal ArticleDOI
TL;DR: This work presents a multitask learning approach that predicts rooftop corners sequentially, using attention learned from where the boundaries are in a given image region; the model is called the object-oriented edges and corners (OEC)-RNN.
Abstract: It is an important task to automatically and accurately map rooftops from very high resolution remote sensing images, since buildings are very closely related to human activity. Two typical technologies are often utilized to accomplish the task, i.e., semantic segmentation and instance segmentation. Semantic segmentation independently allocates a label (e.g., "building" or not) to each pixel, resulting in blob-like segments. Alternatively, one might model the boundary of a rooftop as a polygon to improve the shape of the rooftop by encouraging the vertices of the polygon to adhere to the rooftop's boundary. Following this line of work, we present a multitask learning approach to predict rooftop corners in a sequential way, using the attention learned from where the boundaries are in a given image region. The approach simulates the process of manual delineation of rooftop outlines in a given image, which can produce accurate boundaries of rooftops with sharp corners and straight lines between them. Specifically, the proposed method consists of three components, i.e., object detection, pixel-by-pixel classification of both edges and corners, and delineation of rooftops in a sequential manner using a convolutional recurrent neural network (RNN). It is called the object-oriented edges and corners (OEC)-RNN in this article. Three image datasets of buildings are employed to validate the performance of the OEC-RNN, which is compared with state-of-the-art methods for instance segmentation. The experimental results show that the OEC-RNN achieves the best performance in terms of overlay, boundary adherence, and vertex location between ground-truth and predicted polygons.

Journal ArticleDOI
TL;DR: A novel data preprocessing method that converts numeric data into representative graphs (polygons) expressing all of the relationships between data variables in a systematic way based on Hamiltonian cycles, thereby supporting accurate “end-to-end learning” in industrial fault classification applications.
Abstract: This paper proposes a novel data preprocessing method that converts numeric data into representative graphs (polygons) expressing all of the relationships between data variables in a systematic way based on Hamiltonian cycles. The advantage of the proposed method is that it has an embedded feature extraction capability in which each generated polygon depicts a class-specific representation in the data, thereby supporting accurate “end-to-end learning” in industrial fault classification applications. Moreover, the generated polygons can play a significant role in the interpretation of trained deep learning fault classifiers. The performance of the proposed method was demonstrated using a benchmark dataset in the process industry. It was also tested successfully to classify challenging faults in major equipment in a thermomechanical pulp mill located in Canada. The results of the proposed method show better performance than other comparable fault classifiers.
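One plausible reading of the polygon construction described above (illustrative only; the exact construction is defined in the paper) is a radar-chart-like encoding in which the variables are visited in a Hamiltonian-cycle order:

```python
# Hedged reading of the preprocessing idea (not the authors' exact method):
# order the variables along a Hamiltonian cycle, place them at equally spaced
# angles, scale each radius by the normalised value, and close the polygon.
# Each data sample then becomes one class-specific polygon.
import numpy as np

def sample_to_polygon(x, cycle):
    """x: one normalised sample in [0, 1]^n; cycle: a Hamiltonian ordering."""
    n = len(cycle)
    angles = 2 * np.pi * np.arange(n) / n
    radii = x[cycle]                              # visit variables in cycle order
    pts = np.column_stack([radii * np.cos(angles), radii * np.sin(angles)])
    return np.vstack([pts, pts[:1]])              # close the polygon

sample = np.array([0.2, 0.9, 0.5, 0.7, 0.3, 0.6])
print(sample_to_polygon(sample, cycle=np.array([0, 2, 4, 1, 3, 5])))
```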

Journal ArticleDOI
TL;DR: In this paper, a topology optimization approach for additive manufacturing of 2D and 3D self-supporting structures is developed for overhang angle control, avoidance of the so-called V-shaped areas and minimum length scale control.

Journal ArticleDOI
TL;DR: In this paper, a triangular metal-organic polygon (MOT-1) was synthesized from bulky tetramethyl terephthalate and Zr-based secondary building unit.
Abstract: Metal-based secondary building unit and the shape of organic ligands are the two crucial factors for determining the final topology of metal-organic materials. A careful choice of organic and inorganic structural building units occasionally produces unexpected structures, facilitating deeper fundamental understanding of coordination-driven self-assembly behind metal-organic materials. Here, we have synthesized a triangular metal-organic polygon (MOT-1), assembled from bulky tetramethyl terephthalate and Zr-based secondary building unit. Surprisingly, the Zr-based secondary building unit serves as an unusual ditopic Zr-connector, to form metal-organic polygon MOT-1, proven to be a good candidate for water adsorption with recyclability. This study highlights the interplay of the geometrically frustrated ligand and secondary building unit in controlling the connectivity of metal-organic polygon. Such a strategy can be further used to unveil a new class of metal-organic materials.

Journal ArticleDOI
TL;DR: In this article, the existence of N vortex patches located at the vertices of a regular polygon with N sides that rotate around the center of the polygon at a constant angular velocity was shown.
Abstract: This paper deals with the existence of N vortex patches located at the vertices of a regular polygon with N sides that rotate around the center of the polygon at a constant angular velocity. This is done for the Euler and (SQG)_β equations, with β ∈ (0, 1), but may also be extended to more general models. The idea is the desingularization of the Thomson polygon for the N point vortex system, that is, N point vortices located at the vertices of a regular polygon with N sides. The proof is based on the study of the contour dynamics equation, combined with the application of the infinite-dimensional implicit function theorem and a careful choice of function spaces.
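The Thomson polygon mentioned above is the classical relative equilibrium of the planar point-vortex system: N identical vortices of circulation Γ at the vertices of a regular N-gon of circumradius R rotate rigidly. The standard formulas are recalled below for context; they are not quoted from the paper.

```latex
z_k(t) = R\,e^{\,i\left(\frac{2\pi(k-1)}{N} + \Omega t\right)},
\qquad
\Omega = \frac{\Gamma\,(N-1)}{4\pi R^{2}},
\qquad k = 1,\dots,N.
```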

Journal ArticleDOI
TL;DR: A framework of coupling polygonal discrete elements and the lattice Boltzmann method using a direct forcing immersed boundary scheme to handle the interactions between convex and concave polygonal particles is presented.

Journal ArticleDOI
TL;DR: In this article, a frequency spectrum solving method based on three-dimensional affine transform was proposed for combining look-up tables (LUTs) with polygon holography.
Abstract: In this study, we first analyze the fully analytical frequency spectrum solving method based on the three-dimensional affine transform. Thus, we establish a new method for combining look-up tables (LUTs) with polygon holography. The proposed method was implemented and shown to be about twice as fast as existing methods. In addition, principal component analysis was used to compress the LUTs, effectively reducing the required memory without introducing artifacts. Finally, we calculated very complex objects on a graphics processing unit using the proposed method, and the calculation speed was higher than that of existing polygon-based methods.
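The PCA compression of the look-up tables can be pictured with a generic sketch: keep a small number of principal components and reconstruct entries on demand. The LUT below is synthetic and the sizes are arbitrary; this is not the paper's holography pipeline.

```python
# Hedged sketch of LUT compression via PCA (illustrative data and sizes).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Synthetic, approximately low-rank LUT: 4096 entries of 256 samples each.
lut = rng.standard_normal((4096, 16)) @ rng.standard_normal((16, 256)) \
      + 0.01 * rng.standard_normal((4096, 256))

pca = PCA(n_components=16).fit(lut)
codes = pca.transform(lut)                    # stored: (4096, 16) plus 16 components

reconstructed = pca.inverse_transform(codes)  # done per entry at lookup time
rel_err = np.linalg.norm(lut - reconstructed) / np.linalg.norm(lut)
ratio = lut.size / (codes.size + pca.components_.size + pca.mean_.size)
print(f"relative error {rel_err:.4f}, memory reduction {ratio:.1f}x")
```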

Journal ArticleDOI
TL;DR: In this article, the Trend Polygon Star Concept analyzes the distance between two months of the data set on the graph produced by IPTA, and presents the analysis result by dividing the graph into four regions.
Abstract: Climate change has significant direct and indirect effects on ecosystems and living things. In order to be prepared for the effects of climate change, it is necessary to anticipate these changes and take measures against them. Therefore, many studies have been carried out on changes in climate parameters in recent years. The most common methods used in these studies are trend methods. Innovative Polygon Trend Analysis (IPTA) and the Trend Polygon Star Concept are trend analysis methods. The IPTA method divides a data series into a first and a second data set and analyzes these two data sets by comparing them with each other. The Trend Polygon Star Concept analyzes the distance between two months of the data set on the graph that results from IPTA and shows the analysis result by dividing the graph into four regions. Therefore, in this study, monthly average temperature data are analyzed using these two polygon methods. The data set covers 22 years (1996–2017). Polygon graphics were created as a result of the study. In addition, the trend slopes and lengths of the temperature data were calculated with the IPTA method, and the x- and y-axis values of the graphs created with the Trend Polygon Star Concept method were given in a table. When the results of both analysis methods were examined for a station, the following was observed. For example, no regular polygon was seen in the arithmetic mean and standard deviation graphs of the IPTA method for Bandirma Station, and the general evaluation of the arithmetic mean results showed an increasing trend in most months. When the arithmetic mean graph created by the Trend Polygon Star Concept method for Bandirma Station was examined, the transitions between months fell in the first and third regions; in the standard deviation graph, transitions between months were seen in all four regions.

Journal ArticleDOI
TL;DR: In this paper, climate change impact search methodologies for a single hydro-meteorological record, based mostly on probabilistic, statistical and stochastic processes, are considered.
Abstract: Climate change impact search methodologies, for a single hydro-meteorological record, are based mostly on probabilistic, statistical and stochastic processes. The application of these methodologies...

Journal ArticleDOI
TL;DR: In this paper, wheel wear is a natural phenomenon of wheel-rail rolling contact during vehicle operation and occurs in both lateral and circumferential directions on the wheels of a vehicle.
Abstract: Wheel wear is a natural phenomenon of wheel–rail rolling contact during vehicle operation. Non-uniform wear in the lateral and circumferential directions normally occurs on wheels. The lateral...

Journal ArticleDOI
TL;DR: In this paper, a model-driven method that reconstructs LoD-2 building models following a "decomposition-optimization-fitting" paradigm is proposed; it can be applied to satellite-based point clouds or DSMs, which provide much wider data coverage at lower cost than the traditionally used LiDAR and airborne photogrammetry data.
Abstract: Digital surface models (DSM) generated from multi-stereo satellite images are getting higher in quality owing to improved data resolution and photogrammetric reconstruction algorithms. Very-high-resolution (VHR, with sub-meter level resolution) satellite images effectively act as a unique data source for 3D building modeling, because they provide much wider data coverage at lower cost than the traditionally used LiDAR and airborne photogrammetry data. Although 3D building modeling from point clouds has been intensively investigated, most of the methods are still ad-hoc to specific types of buildings and require high-quality and high-resolution data sources as input. Therefore, when applied to satellite-based point clouds or DSMs, these developed approaches are not readily applicable, and more adaptive and robust methods are needed. As a result, most of the existing work on building modeling from satellite DSMs achieves LoD-1 generation. In this paper, we propose a model-driven method that reconstructs LoD-2 building models following a “decomposition-optimization-fitting” paradigm. The proposed method starts from building detection results produced by a deep learning-based detector and vectorizes individual segments into polygons using a “three-step” polygon extraction method, followed by a novel grid-based decomposition method that decomposes the complex and irregularly shaped building polygons into tightly combined elementary building rectangles ready to fit elementary building models. We have optionally introduced OpenStreetMap (OSM) and Graph-Cut (GC) labeling to further refine the orientation of the 2D building rectangles. The 3D modeling step takes building-specific parameters such as hip lines, as well as non-rigid and regularized transformations, to optimize the flexibility for using a minimal set of elementary models. Finally, the roof type of the building models is refined, and adjacent building models in one building segment are merged into a complex polygonal model. Our proposed method has addressed a few technical caveats over existing methods, resulting in practically high-quality results, based on our evaluation and comparative study on a diverse set of experimental datasets of cities with different urban patterns. (Codes/binaries may be available at this GitHub page: https://github.com/GDAOSU/LOD2BuildingModel .)

Journal ArticleDOI
TL;DR: An analytical model to evaluate the validity of different subgraphs is developed, which provides guidance for choosing them. Subgraph isomorphism-based star identification algorithms require fewer stars than pattern-based algorithms and are suitable for practical application.