scispace - formally typeset
Author

Zheng Li

Bio: Zheng Li is an academic researcher. The author has contributed to research on topics including adjacency lists and point clouds, and has co-authored 1 publication.

Papers
Journal ArticleDOI
TL;DR: A deep point convolutional network for recognizing building shapes is proposed, which performs convolution directly on the buildings' points without constructing graphs or extracting per-point geometric features.
Abstract: The classification and recognition of building shapes in map space play an important role in spatial cognition, cartographic generalization, and map updating. As buildings in map space are often represented as vector data, previous research learned feature representations of buildings and recognized their shapes with graph neural networks. Owing to the principles of graph neural networks, it is necessary to construct a graph representing the adjacency relationships between the points (i.e., the vertices of the polygons shaping the buildings) and to extract a list of geometric features for each point. This paper proposes a deep point convolutional network that recognizes building shapes by performing convolution directly on the buildings' points, without constructing graphs or extracting geometric features for the points. A new convolution operator named TriangleConv learns the feature representation of each point by aggregating the features of the point and of the local triangle formed by the point and its two adjacent points. The proposed method was evaluated and compared with related methods on a dataset of 5010 vector buildings. In terms of accuracy, macro-precision, macro-recall, and macro-F1, the results show that the proposed method performs comparably to typical graph neural networks (GCN, GAT, and GraphSAGE) and point cloud neural networks (PointNet, PointNet++, and DGCNN) on the task of recognizing and classifying building shapes in map space.
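The core idea of the abstract — describing each polygon vertex through the local triangle it forms with its two adjacent vertices — can be sketched as follows. This is a minimal illustration of that aggregation idea, not the authors' TriangleConv implementation; the descriptor chosen here (triangle edge lengths and signed area) is an assumption.

```python
import math

def triangle_features(poly):
    """For each vertex of a closed polygon, build a local descriptor from
    the triangle formed by the vertex and its two adjacent vertices.
    (Hypothetical stand-in for the features TriangleConv aggregates.)"""
    n = len(poly)
    feats = []
    for i in range(n):
        p_prev, p, p_next = poly[i - 1], poly[i], poly[(i + 1) % n]
        # edge lengths of the local triangle
        a = math.dist(p_prev, p)
        b = math.dist(p, p_next)
        c = math.dist(p_prev, p_next)
        # signed area of the triangle (captures local convexity/concavity)
        area = 0.5 * ((p[0] - p_prev[0]) * (p_next[1] - p_prev[1])
                      - (p_next[0] - p_prev[0]) * (p[1] - p_prev[1]))
        feats.append((a, b, c, area))
    return feats

# unit square: every vertex sees the same right isosceles local triangle
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(triangle_features(square)[0])
```

In the actual network these per-point descriptors would be fed through learned convolution layers; the point of the sketch is only that no global graph construction or hand-engineered feature list is required.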

6 citations


Cited by
Journal ArticleDOI
TL;DR: This paper proposes a relation-network-based method for the recognition of building footprint shapes from few labeled samples, using the TriangleConv embedding module as the embedding module of the relation network.
Abstract: Buildings are important entity objects of cities, and the classification of building shapes plays an indispensable role in the cognition and planning of urban structure. In recent years, several deep learning methods have been proposed for recognizing the shapes of building footprints in modern electronic maps, but their performance depends on having enough labeled samples for each class of building footprint. Since it is impractical to label enough samples for every type of building footprint shape, deep learning methods that work with few labeled samples are preferable for recognizing and classifying building footprint shapes. In this paper, we propose a relation-network-based method for the recognition of building footprint shapes with few labeled samples. The relation network, composed of an embedding module and a relation module, is a metric-based few-shot method that learns a generalized metric function and predicts the types of new samples according to their relation with the prototypes of the few labeled samples. To better extract the shape features of building footprints represented as vector polygons, we adopt the TriangleConv embedding module as the embedding module of the relation network. We validate the effectiveness of our method on a building footprint dataset with 10 typical shapes and compare it with three classical few-shot learning methods in terms of accuracy. The results show that our method performs better for the classification of building footprint shapes with few labeled samples; for example, accuracy reached 89.40% on the 2-way 5-shot classification task, where a task contains only two classes of samples and five labeled samples per class.
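The prototype-and-compare logic of a metric-based few-shot episode, as described above, can be sketched in a few lines. This is a toy illustration under stated assumptions: the embeddings, the "relation score" (here simply negative squared distance rather than a learned relation module), and the class names are all hypothetical.

```python
def prototype(embeddings):
    # class prototype = element-wise mean of the few support embeddings
    dim = len(embeddings[0])
    return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]

def relation_score(a, b):
    # stand-in for the learned relation module: negative squared distance
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, support):
    """support: {label: [embedding, ...]} with few (e.g. 5) samples per class."""
    protos = {label: prototype(embs) for label, embs in support.items()}
    return max(protos, key=lambda label: relation_score(query, protos[label]))

# toy 2-way 5-shot episode with 2-D embeddings
support = {"L-shape": [(0.1 * i, 0.0) for i in range(5)],
           "T-shape": [(1.0, 1.0 + 0.1 * i) for i in range(5)]}
print(classify((0.2, 0.1), support))  # query lies nearer the L-shape prototype
```

In the paper's setting, the embeddings would come from the TriangleConv embedding module and the score from the trained relation module; the episode structure, however, is exactly this: prototype each class from its few labeled samples, then assign the query to the best-relating prototype.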

5 citations

Journal ArticleDOI
TL;DR: In this paper, a metric learning model that learns similarity metrics directly from linear features is proposed to address the complexity of linear features by mapping vector lines to embeddings without format conversion or feature engineering.
Abstract: Measuring similarity is essential for classifying, clustering, retrieving, and matching linear features in geospatial data. However, the complexity of linear features challenges the formalization of characteristics and the determination of each characteristic's weight in similarity measurements. Additionally, traditional methods have limited adaptability to the variety of linear features. To address these challenges, this study proposes a metric learning model that learns similarity metrics directly from linear features. Because the model learns its own representation, no pre-determined characteristics are required, and it adapts to linear features of differing complexity. LineStringNet functions as a feature encoder that maps vector lines to embeddings without format conversion or feature engineering. With a Siamese architecture, the learning process minimizes the contrastive loss, which pulls similar pairs closer and pushes dissimilar pairs apart in the embedding space. Finally, the proposed model calculates the Euclidean distance to measure the similarity between learned embeddings. Experiments on common linear features and building shapes indicated that the learned similarity metrics effectively supported retrieving, matching, and classifying lines and polygons, with higher precision and accuracy than traditional measures. Furthermore, the model ensures desired metric properties, including rotation and starting-point invariance, by adjusting labeling strategies or preprocessing input data.
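The contrastive objective described above — pull similar pairs together, push dissimilar pairs at least a margin apart — has a standard closed form, sketched below. This is the generic loss, not the LineStringNet encoder itself; the embeddings and margin value are illustrative assumptions.

```python
import math

def contrastive_loss(emb_a, emb_b, similar, margin=1.0):
    """Contrastive loss for a Siamese pair: similar pairs are penalized by
    their squared distance; dissimilar pairs are penalized only while they
    remain closer than `margin` in the embedding space."""
    d = math.dist(emb_a, emb_b)
    if similar:
        return d ** 2
    return max(0.0, margin - d) ** 2

# a similar pair that is already close incurs little loss ...
print(contrastive_loss((0.0, 0.0), (0.1, 0.0), similar=True))   # ~0.01
# ... while a dissimilar pair inside the margin is pushed apart
print(contrastive_loss((0.0, 0.0), (0.1, 0.0), similar=False))  # (1 - 0.1)^2 = 0.81
```

At inference time, as the abstract notes, only the Euclidean distance between learned embeddings is needed; the loss is used solely during training of the Siamese encoder.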

1 citation

Journal ArticleDOI
TL;DR: A novel pattern recognition and segmentation method for lines, based on deep learning and shape context descriptors; experiments showed that the lixel classification accuracy of the 1D-U-Net reached 90.42%, higher than either of two existing machine-learning-based segmentation methods.
Abstract: Recognizing morphological patterns in lines and segmenting them into homogeneous segments is critical for line generalization and other applications. Due to the excessive dependence on handcrafted features in existing methods and their insufficient consideration of contextual information, we propose a novel pattern recognition and segmentation method for lines, based on deep learning and shape context descriptors. In this method, a line is divided into a series of consecutive linear units of equal length, termed lixels. A grid shape context descriptor (GSCD) was designed to extract the contextual features for each lixel. A one-dimensional convolutional neural network (1D-U-Net) was constructed to classify the pattern type of each lixel, and adjacent lixels with the same pattern types were fused to obtain segmentation results. The proposed method was applied to administrative boundaries, which were segmented into components with three different patterns. The experiments showed that the lixel classification accuracy of the 1D-U-Net reached 90.42%. The consistency ratio was 92.41%, when compared with the manual segmentation results, which was higher than either of the two existing machine learning-based segmentation methods.
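The final step described above — fusing adjacent lixels that share a predicted pattern type into homogeneous segments — is a simple run-length merge, sketched here. The pattern labels are made-up examples; the classification itself would come from the 1D-U-Net, which is not reproduced here.

```python
def fuse_lixels(labels):
    """Merge consecutive lixels with the same predicted pattern type into
    segments, returned as (label, start_index, end_index_exclusive)."""
    segments = []
    for i, lab in enumerate(labels):
        if segments and segments[-1][0] == lab:
            # extend the current segment to cover this lixel
            segments[-1] = (lab, segments[-1][1], i + 1)
        else:
            # pattern type changed: open a new segment
            segments.append((lab, i, i + 1))
    return segments

# six lixels along a line, classified into two hypothetical pattern types
print(fuse_lixels(["smooth", "smooth", "zigzag", "zigzag", "zigzag", "smooth"]))
```

Because every lixel has equal length, the resulting index ranges map directly back to distances along the original line, giving the segmentation result the paper evaluates against manual segmentation.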

1 citation

Journal ArticleDOI
TL;DR: This paper presents a thorough review of 34 publications on GNNs in construction, offering a comprehensive overview of the current research landscape, and identifies opportunities and challenges for further advancing the application of GNNs in construction.