Author

Xijiang Chen

Other affiliations: China University of Technology
Bio: Xijiang Chen is an academic researcher from Wuhan University of Technology. The author has contributed to research in topics: Point cloud & Computer science. The author has an h-index of 6 and has co-authored 18 publications receiving 111 citations. Previous affiliations of Xijiang Chen include China University of Technology.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors focused on performance analysis and accuracy enhancement of long-term position time series of a regional GPS network comprising two nearby sub-blocks: one of 8 stations in the Cascadia region and another of 14 stations in Southern California.

55 citations

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the theoretical MDD matches the actual deformation well, and that deformations greater than the MDD can be accurately detected by the TLS device.
Abstract: Terrestrial laser scanning (TLS) is a widely used remote sensing technique that produces very dense point cloud data rapidly and is particularly suited to surface deformation monitoring. Deformation magnitude is typically estimated by comparing TLS scans over the same area at different time epochs of interest. However, with such a method it is not clear whether the difference between two successive surveys results from surface deformation or from measurement error. Hence, it is vital to determine the minimum detectable deformation (MDD) of a TLS device at a given registration and point cloud error level. In this paper, the MDD is determined from the point cloud error entropy. The performance of the proposed method is extensively evaluated numerically using simulated plane-board deformation point clouds over a range of distances and incidence angles. The method was also successfully applied to deformation monitoring of a landslide test site at Wuhan University of Technology. The experimental results demonstrate that the theoretical MDD matches the actual deformation well, and that deformations greater than the MDD can be accurately detected by the TLS device.
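
The error-propagation idea behind such a threshold can be sketched in a few lines of Python. This is a minimal illustration, not the paper's actual MDD derivation: it assumes the registration and point cloud errors are independent Gaussians, links them to a detection bound through the Gaussian differential entropy, and scales by a 95% confidence factor; all function names and parameter values are hypothetical.

```python
import numpy as np

def gaussian_entropy(sigma):
    """Differential entropy (nats) of a 1-D Gaussian error with std sigma."""
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

def sigma_from_entropy(h):
    """Invert the Gaussian entropy to recover an equivalent error std."""
    return np.sqrt(np.exp(2 * h) / (2 * np.pi * np.e))

def minimum_detectable_deformation(sigma_reg, sigma_cloud, k=1.96):
    """Illustrative MDD: pool registration and point cloud errors (assumed
    independent Gaussians), pass them through the entropy round trip, and
    scale by a confidence factor. A sketch only, not the paper's formula."""
    h_total = gaussian_entropy(np.hypot(sigma_reg, sigma_cloud))
    return k * sigma_from_entropy(h_total)

# 2 mm registration noise and 3 mm point noise give an MDD of about 7 mm:
print(minimum_detectable_deformation(0.002, 0.003))
```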

25 citations

Journal ArticleDOI
TL;DR: A new method is proposed to simplify point cloud data by removing the least important points and progressively updating the normal vectors and importance values until a user-specified reduction ratio is reached.
Abstract: With the development of modern 3D measurement technologies, it has become easy to capture dense point cloud datasets. To prune redundant points and enable fast reconstruction, simplification is a necessary step in point cloud processing. In this paper, a new method is proposed to simplify point cloud data. The core of the method is to evaluate the importance of points based on the local entropy of normal angles. After estimating the normal vectors, the importance of each point is derived from the normal angles and the theory of information entropy. The simplification proceeds by removing the least important points and progressively updating the normal vectors and importance values until a user-specified reduction ratio is reached. To evaluate the accuracy of the simplification results quantitatively, an indicator is computed as the mean entropy of the simplified point cloud. Furthermore, the performance of the proposed approach is illustrated with two sets of validation experiments in which three other classical simplification methods are used for comparison. The results show that the proposed method performs much better than the other three methods for point cloud simplification.
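
The described importance criterion lends itself to a compact sketch, given here under stated assumptions: normals are estimated by neighborhood PCA, the entropy is taken over a binned histogram of neighbor normal angles, and, unlike the paper's progressive removal with normal and importance updates, points are kept in a single batch pass. All names and the bin count are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """PCA normals: the eigenvector of the smallest eigenvalue of each
    k-neighborhood's covariance matrix."""
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        _, vecs = np.linalg.eigh(np.cov(points[nb].T))
        normals[i] = vecs[:, 0]              # smallest-eigenvalue direction
    return normals

def normal_angle_entropy(points, normals, k=10, bins=8):
    """Importance of each point: Shannon entropy of the angles between its
    normal and its neighbors' normals, from a binned histogram."""
    _, idx = cKDTree(points).query(points, k=k)
    importance = np.empty(len(points))
    for i, nb in enumerate(idx):
        cosang = np.clip(normals[nb] @ normals[i], -1.0, 1.0)
        ang = np.arccos(np.abs(cosang))      # orientation-free angle in [0, pi/2]
        hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi / 2))
        p = hist[hist > 0] / hist.sum()
        importance[i] = -(p * np.log(p)).sum()
    return importance

def simplify(points, ratio=0.5, k=10):
    """Batch variant: keep the highest-entropy (feature-rich) points until
    the requested fraction of the cloud remains."""
    imp = normal_angle_entropy(points, estimate_normals(points, k), k)
    keep = np.argsort(imp)[::-1][: int(len(points) * ratio)]
    return points[keep]
```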

21 citations

Journal ArticleDOI
TL;DR: Wang et al. proposed a novel approach that fuses fractal dimension with the UHK-Net deep learning network to perform semantic recognition of concrete cracks; it can not only characterize dark crack images but also distinguish small and fine cracks.
Abstract: Concrete wall surfaces are prone to cracking over time, which affects the stability of concrete structures and may even lead to collapse. It is therefore necessary to recognize and classify concrete cracks so that the stability of the structure can be assessed. In this paper, we propose a novel approach that fuses fractal dimension with the UHK-Net deep learning network to perform semantic recognition of concrete cracks. We first use local fractal dimensions to study the cracking and roughly determine the locations of concrete cracks. Then, we use the U-Net Haar-like (UHK-Net) network to construct the crack segmentation network. Finally, images of different types of concrete cracks are used to verify the advantage of the proposed method by comparison with the FCN, U-Net, and YOLO v5 networks. Results show that the proposed method can not only characterize dark crack images but also distinguish small and fine cracks. The pixel accuracy (PA), mean pixel accuracy (MPA), and mean intersection over union (MIoU) of crack segmentation obtained by the proposed method are all greater than 90%.
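
The fractal dimension stage can be approximated with a standard box-counting estimate over sliding windows of a binarized image. This is a sketch under assumptions: the window size and the crack-like dimension band (lo, hi) are illustrative values rather than the paper's calibrated thresholds, and the UHK-Net segmentation stage is not reproduced.

```python
import numpy as np

def box_counting_dimension(patch):
    """Box-counting fractal dimension of a binary patch: count occupied
    boxes at dyadic scales and fit the slope of log N(s) vs log(1/s)."""
    n = min(patch.shape)
    sizes, counts = [], []
    s = n // 2
    while s >= 2:
        m = n // s
        boxes = patch[: m * s, : m * s].reshape(m, s, m, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        if occupied:
            sizes.append(s)
            counts.append(occupied)
        s //= 2
    if len(sizes) < 2:
        return 0.0
    return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]

def rough_crack_mask(binary_img, win=64, lo=1.0, hi=1.6):
    """Flag windows whose local fractal dimension falls in a crack-like
    band; curve-like cracks sit between isolated noise and filled texture.
    The band (lo, hi) is an assumed, uncalibrated choice."""
    h, w = binary_img.shape
    mask = np.zeros((h // win, w // win), dtype=bool)
    for i in range(h // win):
        for j in range(w // win):
            d = box_counting_dimension(
                binary_img[i * win:(i + 1) * win, j * win:(j + 1) * win])
            mask[i, j] = lo <= d <= hi
    return mask
```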

19 citations

Journal ArticleDOI
TL;DR: An iterative Gaussian mapping-based segmentation strategy is proposed in this article, which proceeds iteratively from rough to refined segmentation, decomposing the indoor scene into detectable point cloud clusters layer by layer.
Abstract: Indoor scene segmentation based on 3-D laser point clouds is important for reconstruction and classification, especially of permanent building structures. However, existing segmentation methods mainly focus on large-scale planar structures and ignore sharper structures and details, which degrades accuracy in scene reconstruction. To handle this issue, an iterative Gaussian mapping-based segmentation strategy is proposed in this article, which proceeds iteratively from rough to refined segmentation, decomposing the indoor scene into detectable point cloud clusters layer by layer. An improved model-fitting algorithm based on the maximum likelihood estimation sampling consensus (MLESAC) algorithm, called the Prior-MLESAC algorithm, is proposed for refined segmentation; it handles the extraction of both vertical and nonvertical planar and cylindrical structures. The experimental results demonstrate that planar and cylindrical structures are segmented more completely by the proposed strategy, and more details of the indoor structure are recovered than with other existing methods.
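
A single rough-segmentation layer of the Gaussian mapping idea can be sketched as follows: each unit normal is placed on the Gauss sphere and clustered there, where planar patches collapse to tight spots and cylinders trace great circles. The DBSCAN parameters are assumed values, and the paper's layer-by-layer iteration and Prior-MLESAC refinement are not reproduced.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def gaussian_map_clusters(normals, eps=0.05, min_samples=50):
    """Cluster unit normals on the Gauss sphere. Antipodal normals are
    folded onto one hemisphere so a plane's two orientations merge into
    a single cluster; label -1 marks points left for the next iteration."""
    signs = np.where(normals[:, 2:3] < 0.0, -1.0, 1.0)
    folded = normals * signs
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(folded)
```

Each recovered cluster would then be handed to a model-fitting step (Prior-MLESAC in the paper) to decide whether it is planar or cylindrical.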

18 citations


Cited by
Journal ArticleDOI
TL;DR: In this article, the authors provide a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series.

88 citations

Book
09 Jul 2009
TL;DR: A new, unified approach for laser scanner self-calibration is described, the results of calibrating three scanners are reported, and a prototype combined TLS survey system is presented, which employs GPS for direct georeferencing of the point clouds and can be used for accurate surveys of built environments.
Abstract: During the last decade, terrestrial laser scanning (TLS) has emerged as a new surveying technique. Its proper use requires good knowledge of the error sources, a comprehensive description of which is currently lacking. Especially important are systematic instrumental errors, which are determined during calibration. Recently, the method of self-calibration used in photogrammetry has been shown to be efficient for laser scanners. Another important task in TLS is georeferencing – the transformation of the point clouds into a specific coordinate system. This book provides a systematic description of the error sources in TLS surveys conducted with direct georeferencing. Further, a new, unified approach for laser scanner self-calibration is described, and the results of calibrating three scanners are reported. Finally, a prototype combined TLS survey system is presented, which employs GPS for direct georeferencing of the point clouds and can be used for accurate surveys of built environments. The book should be useful to students and researchers in engineering surveying as well as surveyors in the public and private sectors.

80 citations

Journal ArticleDOI
TL;DR: Lidar point clouds are understood from a new and universal perspective, i.e., geometric primitives embedded in versatile objects in the physical world, and primitive-based applications are reviewed with an emphasis on object extraction and reconstruction.
Abstract: To the best of our knowledge, recent light detection and ranging (lidar)-based surveys have focused only on specific applications such as reconstruction and segmentation, or on data processing techniques for a specific platform, e.g., mobile laser scanning. In this article, however, lidar point clouds are understood from a new and universal perspective: geometric primitives embedded in the versatile objects of the physical world. In lidar point clouds, the basic unit is the point coordinate. Geometric primitives, each consisting of a group of discrete points, may be viewed as an abstraction and representation of lidar data at the entity level. We categorize geometric primitives into two classes: shape primitives, e.g., lines, surfaces, and volumetric shapes, and structure primitives, represented by skeletons and edges. In recent years, many efforts from different communities, such as photogrammetry, computer vision, and computer graphics, have been made to advance geometric primitive detection, regularization, and in-depth applications. Interpretations from these disciplines are brought together to convey the significance of geometric primitives, the latest techniques for processing them, and their potential in the context of lidar point clouds. To this end, primitive-based applications are reviewed with an emphasis on object extraction and reconstruction. Next, we survey and compare methods for geometric primitive extraction and then review primitive regularization methods that add real-world constraints to initial primitives. Finally, we summarize the challenges and expected applications, and describe a possible future for primitive extraction methods that can achieve globally optimal results efficiently, even with disorganized, uneven, noisy, incomplete, and large-scale lidar point clouds.

63 citations

Journal ArticleDOI
TL;DR: In this article, the authors proposed a strategy based on median and interquartile range statistics to identify and discard abnormal sites, and two indices based on the analysis of the spatial responses of all sites in each independent component (east, north, and vertical) were used to define the CME quantitatively.
Abstract: Removal of the common mode error (CME) is a routine procedure in postprocessing regional GPS network observations, which is commonly performed using principal component analysis (PCA). PCA decomposes a network time series into a group of modes, where each mode comprises a common temporal function and a corresponding spatial response based on second-order statistics (variance and covariance). However, the probability distribution function of a GPS time series is non-Gaussian; therefore, the largest variances do not correspond to the meaningful axes, and the PCA-derived components may not have an obvious physical meaning. In this study, the CME was assumed statistically independent of other errors, and it was extracted using independent component analysis (ICA), which involves higher-order statistics. First, the ICA performance was tested using a simulated example and compared with PCA and stacking methods. The existence of strong local effects at some stations causes significantly large spatial responses; therefore, a strategy based on median and interquartile range statistics was proposed to identify abnormal sites. After discarding abnormal sites, two indices based on the analysis of the spatial responses of all sites in each independent component (east, north, and vertical) were used to define the CME quantitatively. Continuous GPS coordinate time series spanning approximately 4.5 years obtained from 259 stations of the Tectonic and Environmental Observation Network of Mainland China (CMONOC II) were analyzed using both PCA and ICA methods and their results compared. The results suggest that PCA is susceptible to deriving an artificial spatial structure, whereas ICA separates the CME from other errors reliably. Our results demonstrate that the spatial characteristics of the CME for CMONOC II are not uniform for the east, north, and vertical components, but have an obvious north–south or east–west distribution. After discarding 84 abnormal sites and performing spatiotemporal filtering using ICA, an average reduction in scatter of 6.3% was achieved for all three components.
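
Both the screening and filtering steps admit short sketches, assuming the input is an epochs-by-stations residual matrix. The 3×IQR cutoff and the use of a single independent component stand in for the paper's median/IQR strategy and its two spatial-response indices; names and parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def screen_abnormal_sites(X, factor=3.0):
    """Median/IQR screen: flag stations whose per-station scatter (IQR of
    their residual series) lies more than factor*IQR from the network
    median. The factor is an assumed value."""
    q75, q25 = np.percentile(X, [75, 25], axis=0)
    scatter = q75 - q25                           # per-station robust scatter
    med = np.median(scatter)
    iqr = np.subtract(*np.percentile(scatter, [75, 25]))
    return np.abs(scatter - med) > factor * iqr   # True = abnormal site

def remove_cme(X, n_components=1):
    """Spatiotemporal filtering: extract independent component(s) with
    FastICA, reconstruct their network-wide contribution, and subtract it.
    Choosing which components form the CME is simplified here."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(X)                # common temporal functions
    cme = sources @ ica.mixing_.T                 # spatial responses applied back
    return X - cme
```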

48 citations

Journal ArticleDOI
Wei Zhang, Rui Xiao, Bin Shi, Hong-Hu Zhu, Yi-jie Sun
TL;DR: In this article, a nonlinear time-correction factor was added to the TGM(1,1,p,q) grey model, and the weighting coefficients of its background values were optimized using a genetic algorithm.
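
For context, the grey-model backbone can be sketched as a plain GM(1,1) whose background value is a weighted mean of consecutive accumulated terms. In the cited work the weights (and a nonlinear time-correction factor) are tuned by a genetic algorithm; here the weight p is fixed and all names are illustrative.

```python
import numpy as np

def gm11_predict(x0, n_ahead=3, p=0.5):
    """GM(1,1) with background value z(k) = p*x1(k) + (1-p)*x1(k-1).
    The paper's TGM(1,1,p,q) additionally optimizes such weights with a
    genetic algorithm and adds a nonlinear time-correction factor."""
    x1 = np.cumsum(x0)                                    # accumulated series
    z = p * x1[1:] + (1 - p) * x1[:-1]                    # weighted background
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]      # grey parameters
    k = np.arange(1, len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a     # time response
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))   # de-accumulate
    return x0_hat[len(x0) - 1:]                           # forecasts beyond data

# e.g. gm11_predict(np.array([2.1, 2.3, 2.6, 3.0, 3.4]), n_ahead=2)
```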

46 citations