
Showing papers on "Spatial analysis published in 2021"


Journal ArticleDOI
TL;DR: Giotto as discussed by the authors is an open-source toolbox for spatial data analysis and visualization that provides end-to-end analysis by implementing a wide range of algorithms for characterizing tissue composition, spatial expression patterns, and cellular interactions.
Abstract: Spatial transcriptomic and proteomic technologies have provided new opportunities to investigate cells in their native microenvironment. Here we present Giotto, a comprehensive and open-source toolbox for spatial data analysis and visualization. The analysis module provides end-to-end analysis by implementing a wide range of algorithms for characterizing tissue composition, spatial expression patterns, and cellular interactions. Furthermore, single-cell RNAseq data can be integrated for spatial cell-type enrichment analysis. The visualization module allows users to interactively visualize analysis outputs and imaging features. To demonstrate its general applicability, we apply Giotto to a wide range of datasets encompassing diverse technologies and platforms.

239 citations


Journal ArticleDOI
TL;DR: BayesSpace is introduced, a fully Bayesian statistical method that uses the information from spatial neighborhoods for resolution enhancement of spatial transcriptomic data and for clustering analysis and shows that it improves identification of distinct intra-tissue transcriptional profiles from samples of the brain, melanoma, invasive ductal carcinoma and ovarian adenocarcinoma.
Abstract: Recent spatial gene expression technologies enable comprehensive measurement of transcriptomic profiles while retaining spatial context. However, existing analysis methods do not address the limited resolution of the technology or use the spatial information efficiently. Here, we introduce BayesSpace, a fully Bayesian statistical method that uses the information from spatial neighborhoods for resolution enhancement of spatial transcriptomic data and for clustering analysis. We benchmark BayesSpace against current methods for spatial and non-spatial clustering and show that it improves identification of distinct intra-tissue transcriptional profiles from samples of the brain, melanoma, invasive ductal carcinoma and ovarian adenocarcinoma. Using immunohistochemistry and an in silico dataset constructed from scRNA-seq data, we show that BayesSpace resolves tissue structure that is not detectable at the original resolution and identifies transcriptional heterogeneity inaccessible to histological analysis. Our results illustrate BayesSpace's utility in facilitating the discovery of biological insights from spatial transcriptomic datasets.

226 citations


Journal ArticleDOI
TL;DR: An optimized graph convolution recurrent neural network is proposed for traffic prediction, in which the spatial information of the road network is represented as a graph; the method outperforms state-of-the-art traffic prediction methods.
Abstract: Traffic prediction is a core problem in intelligent transportation systems and has broad applications in transportation management and planning; the main challenge in this field is how to efficiently explore the spatial and temporal information of traffic data. Recently, various deep learning methods, such as the convolutional neural network (CNN), have shown promising performance in traffic prediction. However, these methods sample traffic data on regular grids as CNN input and thus destroy the spatial structure of the road network. In this paper, we introduce a graph network and propose an optimized graph convolution recurrent neural network for traffic prediction, in which the spatial information of the road network is represented as a graph. Additionally, unlike most current methods that use a simple and empirical spatial graph, the proposed method learns an optimized graph in a data-driven way during the training phase, which reveals the latent relationships among road segments from the traffic data. Lastly, the proposed method is evaluated on three real-world case studies, and the experimental results show that it outperforms state-of-the-art traffic prediction methods.
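For readers unfamiliar with the graph view of a road network, the sketch below shows the basic pattern of combining one graph-convolution step over a fixed, row-normalized adjacency matrix with a recurrent unit over time. It is not the paper's optimized, learned-graph architecture; the class and variable names are illustrative only.

```python
# Minimal sketch (not the paper's model): H' = ReLU(A_norm @ H @ W) per time step,
# followed by a GRU over the temporal axis of each road segment.
import torch
import torch.nn as nn

class GraphConvGRU(nn.Module):  # hypothetical name
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x, a_norm):
        # x: (batch, time, nodes, in_dim); a_norm: row-normalized (nodes, nodes) adjacency
        b, t, n, _ = x.shape
        h = torch.relu(torch.einsum("ij,btjf->btif", a_norm, self.lin(x)))  # graph convolution
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, -1)    # one temporal sequence per node
        out, _ = self.gru(h)
        return out[:, -1].reshape(b, n, -1)                # last-step embedding per node

# toy usage: 5 road segments, 12 time steps, two features (e.g. speed and flow)
a = torch.eye(5) * 0.5 + 0.5 / 5                           # stand-in normalized adjacency
model = GraphConvGRU(in_dim=2, hidden_dim=8)
pred = model(torch.randn(4, 12, 5, 2), a)                  # (4, 5, 8)
```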

164 citations


Journal ArticleDOI
02 Jul 2021-Science
TL;DR: In this article, the authors introduce sci-Space, which retains single-cell resolution while resolving spatial heterogeneity at larger scales and identify thousands of genes exhibiting anatomically patterned expression, leverage spatial information to annotate cellular subtypes, and reveal correlations between pseudotime and the migratory patterns of differentiating neurons.
Abstract: Spatial patterns of gene expression manifest at scales ranging from local (e.g., cell-cell interactions) to global (e.g., body axis patterning). However, current spatial transcriptomics methods either average local contexts or are restricted to limited fields of view. Here, we introduce sci-Space, which retains single-cell resolution while resolving spatial heterogeneity at larger scales. Applying sci-Space to developing mouse embryos, we captured approximate spatial coordinates and whole transcriptomes of about 120,000 nuclei. We identify thousands of genes exhibiting anatomically patterned expression, leverage spatial information to annotate cellular subtypes, show that cell types vary substantially in their extent of spatial patterning, and reveal correlations between pseudotime and the migratory patterns of differentiating neurons. Looking forward, we anticipate that sci-Space will facilitate the construction of spatially resolved single-cell atlases of mammalian development.

111 citations


Journal ArticleDOI
TL;DR: In this paper, the authors comprehensively assess the performance of ten published null frameworks in statistical analyses of neuroimaging data and find that naive null models that do not preserve spatial autocorrelation consistently yield elevated false positive rates and unrealistically liberal statistical estimates.

107 citations


Journal ArticleDOI
TL;DR: In this paper, spatialDWLS is presented to quantitatively estimate the cell-type composition at each spatial location; benchmarking shows that spatialDWLS outperforms other deconvolution methods in terms of accuracy and speed.
Abstract: Recent development of spatial transcriptomic technologies has made it possible to characterize cellular heterogeneity with spatial information. However, the technology often does not have sufficient resolution to distinguish neighboring cell types. Here, we present spatialDWLS to quantitatively estimate the cell-type composition at each spatial location. We benchmark the performance of spatialDWLS by comparing it with a number of existing deconvolution methods and find that spatialDWLS outperforms the other methods in terms of accuracy and speed. By applying spatialDWLS to a human developmental heart dataset, we observe striking spatiotemporal changes of cell-type composition during development.
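To illustrate the deconvolution idea in general terms, the sketch below models each spot's expression as a non-negative mixture of cell-type signature profiles. It uses plain non-negative least squares as a stand-in for the dampened weighted least squares that spatialDWLS actually applies; the function name, signature matrix and data are all made up for the example.

```python
# Simplified spot deconvolution sketch (NNLS stand-in, not the spatialDWLS algorithm):
# estimate cell-type fractions from a signature matrix S (genes x cell types)
# and one spot's expression vector y (genes).
import numpy as np
from scipy.optimize import nnls

def deconvolve_spot(signature, spot_expr):
    """Return non-negative cell-type weights normalized to fractions."""
    weights, _ = nnls(signature, spot_expr)
    total = weights.sum()
    return weights / total if total > 0 else weights

# toy example: 100 genes, 3 cell types, one spot that is a 60/30/10 mixture
rng = np.random.default_rng(0)
S = rng.gamma(2.0, 1.0, size=(100, 3))
true_frac = np.array([0.6, 0.3, 0.1])
y = S @ true_frac + rng.normal(0, 0.05, size=100)
print(deconvolve_spot(S, y))   # close to [0.6, 0.3, 0.1]
```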

99 citations


Journal ArticleDOI
TL;DR: In this article, the AHP-PSR (Analytic Hierarchy Process - Pressure-State-Response) model was used to assess the ecological vulnerability of Weifang City.

93 citations


Journal ArticleDOI
TL;DR: Based on a convolutional neural network (CNN), an interpretable spatial–spectral reconstruction network (SSR-NET) is proposed for more efficient HSI and MSI fusion, and it achieves superior or competitive results in comparison with seven state-of-the-art methods.
Abstract: The fusion of a low-spatial-resolution hyperspectral image (HSI) (LR-HSI) with its corresponding high-spatial-resolution multispectral image (MSI) (HR-MSI) to reconstruct a high-spatial-resolution HSI (HR-HSI) has been a significant subject in recent years. Nevertheless, it remains difficult for existing methods to achieve cross-mode information fusion of the spatial and spectral modes when reconstructing the HR-HSI. In this article, based on a convolutional neural network (CNN), an interpretable spatial–spectral reconstruction network (SSR-NET) is proposed for more efficient HSI and MSI fusion. More specifically, the proposed SSR-NET is a physically straightforward model that consists of three components: 1) cross-mode message inserting (CMMI), an operation that produces the preliminary fused HR-HSI while preserving the most valuable information of the LR-HSI and HR-MSI; 2) a spatial reconstruction network (SpatRN), which concentrates on reconstructing the lost spatial information of the LR-HSI with the guidance of a spatial edge loss ($\mathcal{L}_{\mathrm{spat}}$); and 3) a spectral reconstruction network (SpecRN), which reconstructs the lost spectral information of the HR-MSI under the constraint of a spectral edge loss ($\mathcal{L}_{\mathrm{spec}}$). Comparative experiments are conducted on six HSI data sets (Urban, Pavia University (PU), Pavia Center (PC), Botswana, Indian Pines (IP), and Washington DC Mall (WDCM)), and the proposed SSR-NET achieves superior or competitive results in comparison with seven state-of-the-art methods. The code of SSR-NET is available at https://github.com/hw2hwei/SSRNET .

90 citations


Journal ArticleDOI
TL;DR: A novel hyperspectral image classification framework using the fusion of dual spatial information is proposed, in which the dual spatial information is built by exploiting both pre-processing feature extraction and post-processing spatial optimization.
Abstract: The inclusion of spatial information in spectral classifiers for fine-resolution hyperspectral imagery has led to significant improvements in classification performance. The task of spectral–spatial hyperspectral image (HSI) classification remains challenging because of high intraclass spectral variability and low interclass spectral variability, a fact that has made the extraction of spatial information a highly active research topic. In this work, a novel HSI classification framework using the fusion of dual spatial information is proposed, in which the dual spatial information is built by exploiting both pre-processing feature extraction and post-processing spatial optimization. In the feature extraction stage, an adaptive texture smoothing method is proposed to construct the structural profile (SP), which makes it possible to precisely extract discriminative features from HSIs. The SP extraction method is used here for the first time in the remote sensing community. Then, the extracted SP is fed into a spectral classifier. In the spatial optimization stage, a pixel-level classifier is used to obtain the class probability, followed by an extended random walker-based spatial optimization technique. Finally, a decision fusion rule is utilized to fuse the class probabilities obtained by the two different stages. Experiments performed on three data sets from different scenes illustrate that the proposed method can outperform other state-of-the-art classification techniques. In addition, the proposed feature extraction method, i.e., the SP, can effectively improve the discrimination between different land covers.

81 citations


Journal ArticleDOI
01 Mar 2021-Test
TL;DR: This paper provides a review of the many recent developments in the field since the publication of Mardia and Jupp (1999), still the most comprehensive text on directional statistics, and considers developments for the exploratory analysis of directional data.
Abstract: Mainstream statistical methodology is generally applicable to data observed in Euclidean space. There are, however, numerous contexts of considerable scientific interest in which the natural supports for the data under consideration are Riemannian manifolds like the unit circle, torus, sphere, and their extensions. Typically, such data can be represented using one or more directions, and directional statistics is the branch of statistics that deals with their analysis. In this paper, we provide a review of the many recent developments in the field since the publication of Mardia and Jupp (Wiley 1999), still the most comprehensive text on directional statistics. Many of those developments have been stimulated by interesting applications in fields as diverse as astronomy, medicine, genetics, neurology, space situational awareness, acoustics, image analysis, text mining, environmetrics, and machine learning. We begin by considering developments for the exploratory analysis of directional data before progressing to distributional models, general approaches to inference, hypothesis testing, regression, nonparametric curve estimation, methods for dimension reduction, classification and clustering, and the modelling of time series, spatial and spatio-temporal data. An overview of currently available software for analysing directional data is also provided, and potential future developments are discussed.

76 citations


Journal ArticleDOI
TL;DR: Pan-sharpening methods are commonly used to synthesize multispectral and panchromatic images; this review investigates 41 algorithms and finds that MRA-based methods performed better in terms of spectral quality, most Hybrid methods had the highest spatial quality, and CS-based methods had the lowest results both spectrally and spatially.
Abstract: Pan-sharpening methods are commonly used to synthesize multispectral and panchromatic images. Selecting an appropriate algorithm that maintains the spectral and spatial information content of input images is a challenging task. This review paper investigates a wide range of algorithms, including 41 methods. For this purpose, the methods were categorized as Component Substitution (CS-based), Multi-Resolution Analysis (MRA), Variational Optimization-based (VO), and Hybrid and were tested on a collection of 21 case studies. These include images from WorldView-2, 3 & 4, GeoEye-1, QuickBird, IKONOS, KompSat-2, KompSat-3A, TripleSat, Pleiades-1, Pleiades with the aerial platform, and Deimos-2. Neural network-based methods were excluded due to their substantial computational requirements for operational mapping purposes. The methods were evaluated based on four Spectral and three Spatial quality metrics. An Analysis Of Variance (ANOVA) was used to statistically compare the pan-sharpening categories. Results indicate that MRA-based methods performed better in terms of spectral quality, whereas most Hybrid-based methods had the highest spatial quality and CS-based methods had the lowest results both spectrally and spatially. The revisited version of the Additive Wavelet Luminance Proportional Pan-sharpening method had the highest spectral quality, whereas Generalized IHS with Best Trade-off Parameter with Additive Weights showed the highest spatial quality. CS-based methods generally had the fastest run-time, whereas the majority of methods belonging to MRA and VO categories had relatively long run times.
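As a minimal, self-contained example of the component-substitution family the review evaluates, the sketch below applies a plain Brovey transform: each multispectral band is scaled by the ratio of the panchromatic image to an intensity component. This is not one of the 41 reviewed implementations, and the data here are random stand-ins for co-registered imagery.

```python
# Minimal Brovey-transform pan-sharpening (a simple CS-family method); assumes
# the multispectral bands are already resampled to the panchromatic grid.
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """ms: (bands, H, W) multispectral cube; pan: (H, W) panchromatic band."""
    intensity = ms.mean(axis=0)          # simple intensity component
    ratio = pan / (intensity + eps)      # per-pixel injection gain
    return ms * ratio                    # sharpened bands

# toy usage
rng = np.random.default_rng(1)
ms = rng.random((4, 64, 64))
pan = rng.random((64, 64))
sharpened = brovey_pansharpen(ms, pan)   # (4, 64, 64)
```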

Journal ArticleDOI
Le Yu, Bowen Du, Xiao Hu, Leilei Sun, Liangzhe Han, Weifeng Lv
TL;DR: Experimental results on real-world datasets demonstrate that DSTGCN outperforms both classical and state-of-the-art methods to predict traffic accidents.

Posted ContentDOI
03 Feb 2021-bioRxiv
TL;DR: In this article, a computational method, called spatialDWLS, is proposed to quantitatively estimate the cell-type composition at each spatial location, which can be used to extract biological information from spatial transcriptomic data.
Abstract: Recent development of spatial transcriptomic technologies has made it possible to systematically characterize cellular heterogeneity while preserving spatial information, which greatly enables the investigation of the structural organization of a tissue and its impact on modulating cellular behavior. On the other hand, the technology often does not have sufficient resolution to distinguish neighboring cells that may belong to different cell types; therefore, it is difficult to identify cell-type distribution directly from the data. To overcome this challenge, we have developed a computational method, called spatialDWLS, to quantitatively estimate the cell-type composition at each spatial location. We benchmarked the performance of spatialDWLS by comparing it with a number of existing deconvolution methods using both real and simulated datasets, and we found that spatialDWLS outperformed the other methods in terms of accuracy and speed. By applying spatialDWLS to analyze a human developmental heart dataset, we observed striking spatial-temporal changes of cell-type composition which become increasingly spatially coherent during development. As such, spatialDWLS provides a valuable computational tool for faithfully extracting biological information from spatial transcriptomic data.

Journal ArticleDOI
TL;DR: It is concluded that spatial cross-validation methods have no theoretical underpinning and should not be used for assessing map accuracy, while standard cross-validation is deficient in the case of clustered data.
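To make the distinction concrete, the sketch below shows how spatial (block) cross-validation differs mechanically from standard K-fold: folds are formed from spatial blocks so that nearby, correlated samples do not appear in both training and test sets. This is purely an illustration of the mechanics, not an endorsement of either scheme; block size, coordinates and data are arbitrary.

```python
# Mechanical illustration of spatial block cross-validation vs. standard K-fold.
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(2)
coords = rng.random((200, 2)) * 100                  # sample locations (km)
y = rng.random(200)                                  # target values

# assign each sample to a 25 x 25 km block; blocks act as CV groups
blocks = (coords[:, 0] // 25).astype(int) * 4 + (coords[:, 1] // 25).astype(int)

for train_idx, test_idx in GroupKFold(n_splits=5).split(coords, y, groups=blocks):
    pass  # fit/evaluate a model here; contrast with sklearn.model_selection.KFold
```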

Journal ArticleDOI
TL;DR: The proposed SSWK-MEDA provides a novel approach for the combination of transfer learning and remote sensing image characteristics and utilizes the geometric structure of features in manifold space to solve the problem of feature distortions of remote sensing data in transfer learning scenarios.
Abstract: Feature distortions of data are a typical problem in remote sensing image classification, especially in the area of transfer learning. In addition, many transfer learning-based methods only focus on spectral information and fail to utilize spatial information of remote sensing images. To tackle these problems, we propose spectral–spatial weighted kernel manifold embedded distribution alignment (SSWK-MEDA) for remote sensing image classification. The proposed method applies a novel spatial information filter to effectively use similarity between nearby sample pixels and avoid the influence of nonsample pixels. Then, a complex kernel combining spatial kernel and spectral kernel with different weights is constructed to adaptively balance the relative importance of spectral and spatial information of the remote sensing image. Finally, we utilize the geometric structure of features in manifold space to solve the problem of feature distortions of remote sensing data in transfer learning scenarios. SSWK-MEDA provides a novel approach for the combination of transfer learning and remote sensing image characteristics. Extensive experiments have demonstrated that the proposed method is more effective than several state-of-the-art methods.

Journal ArticleDOI
TL;DR: Four machine learning algorithms, namely artificial neural networks (ANN), random forest regression (RFR), support vector machine regression (SVR), and Gaussian process regression (GPR), were found to estimate biophysical and biochemical variables of unseen targets with high performance.
Abstract: With an upcoming unprecedented stream of imaging spectroscopy data, there is a rising need for tools and software applications exploiting the spectral possibilities to extract relevant information on an operational basis. In this study, we investigate the potential of a scientific processor designed to quantify biophysical and biochemical crop traits from spectroscopic imagery of the upcoming Environmental Mapping and Analysis Program (EnMAP) satellite. Said processor relies on a hybrid retrieval workflow executing pre-trained machine learning regression models fast and efficiently based on training data from a lookup table of synthetic vegetation spectra and their associated parameterization of the well-known radiative transfer model (RTM) PROSAIL. The established models provide spatial information about leaf area index (LAI), average leaf inclination angle (ALIA), leaf chlorophyll content (Cab) and leaf mass per area (Cm). In contrast to using site-specific training data, the approach facilitates a universal application without the need to integrate a priori information into the processor. Four machine learning algorithms, namely artificial neural networks (ANN), random forest regression (RFR), support vector machine regression (SVR), and Gaussian process regression (GPR), were found to estimate biophysical and biochemical variables of unseen targets with high performance (relative error scores

Journal ArticleDOI
TL;DR: This paper proposes a novel hyperspectral image SR method that alternately employs 2D and 3D units to solve the problem of structural redundancy in existing models by sharing spatial information during reconstruction, which enhances the learning ability in the 2D spatial domain.
Abstract: Hyperspectral image super-resolution (SR) methods based on deep learning have achieved significant progress recently. However, previous methods lack joint analysis between the spectrum and the horizontal or vertical direction. Besides, when both 2D and 3D convolutions are present in a network, existing models cannot effectively combine the two. To address these issues, in this article, we propose a novel hyperspectral image SR method by exploring the relationship between 2D/3D convolution (ERCSR). Our method alternately employs 2D and 3D units to solve the problem of structural redundancy in existing models by sharing spatial information during reconstruction, which enhances the learning ability in the 2D spatial domain. Importantly, compared with a network using only 3D units (i.e., with the 2D units replaced by 3D units), this design not only reduces the size of the model but also improves its performance. Furthermore, to exploit the spectrum fully, a split adjacent spatial and spectral convolution (SAEC) is designed to explore, in parallel, information between the spectrum and the horizontal or vertical direction in space. Experiments on widely used benchmark datasets demonstrate that the proposed approach outperforms state-of-the-art SR algorithms across different scales in terms of quantitative and qualitative analysis.

Journal ArticleDOI
TL;DR: In this article, the authors review recent research on statistical methods for analysing spatial patterns of points on a network of lines, such as road accident locations along a road network, and describe several common methodological errors.
Abstract: We review recent research on statistical methods for analysing spatial patterns of points on a network of lines, such as road accident locations along a road network. Due to geometrical complexities, the analysis of such data is extremely challenging, and we describe several common methodological errors. The intrinsic lack of homogeneity in a network militates against the traditional methods of spatial statistics based on stationary processes. Topics include kernel density estimation, relative risk estimation, parametric and non-parametric modelling of intensity, second-order analysis using the K-function and pair correlation function, and point process model construction. An important message is that the choice of distance metric on the network is pivotal in the theoretical development and in the analysis of real data. Challenges for statistical computation are discussed and open-source software is provided.
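As a rough illustration of one of the topics above, the sketch below estimates a kernel intensity on a network using shortest-path (network) distances rather than planar distances. It deliberately omits the edge corrections the review shows are necessary for an unbiased network estimator, so it should be read as a toy sketch only; the graph, events and bandwidth are made up.

```python
# Toy kernel intensity on a network: Gaussian kernel over shortest-path distances
# from event nodes (e.g. accident locations) to a query node. Illustrative only;
# no edge correction is applied.
import networkx as nx
import numpy as np

G = nx.grid_2d_graph(10, 10)                         # stand-in road network
for u, v in G.edges:
    G.edges[u, v]["length"] = 1.0

events = [(2, 3), (2, 4), (7, 7)]                    # event locations (nodes)
bandwidth = 2.0

def network_kernel_intensity(G, events, node, bw):
    dists = [nx.shortest_path_length(G, node, e, weight="length") for e in events]
    return sum(np.exp(-0.5 * (d / bw) ** 2) for d in dists) / (bw * np.sqrt(2 * np.pi))

print(network_kernel_intensity(G, events, (3, 3), bandwidth))
```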

Journal ArticleDOI
TL;DR: Wang et al. used the entropy method to measure the development level of the digital economy in each region in 2018 and, based on the theory of economic growth and new economic geography, established a theoretical model of the influences of input factors, technological progress and institutional changes on China's digital economy.
Abstract: The spatial heterogeneity of the influences of various driving factors on the digital economy restricts the further development of regional coordination. This paper constructs an index system for measuring the development level of the digital economy from the three dimensions of infrastructure construction, digital application and digital industry development. Using the entropy method to measure the development level of the digital economy in each region in 2018 and based on the theory of economic growth and new economic geography, a theoretical model of the influences of input factors, technological progress and institutional changes on China’s digital economy is established. Combined with Exploratory Spatial Data Analysis (ESDA) and Geographically Weighted Regression (GWR) model analysis, the spatial distribution pattern of China’s digital economy and its influencing factors are discussed. The results show that there is a large gap in the development level of the digital economy among the eight comprehensive economic regions, and that the development level of the digital economy presents significant spatial correlation. The driving patterns of input factors, technological progress and institutional changes on the spatial distribution of the digital economy show obvious spatial differentiation. This study provides an important reference for promoting the coordinated development of the regional digital economy.
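The core of a GWR fit can be sketched as weighted least squares at each target location, with kernel weights that decay with distance, so that regression coefficients vary over space. The sketch below is a minimal illustration on synthetic data; bandwidth selection, diagnostics and the paper's actual indicators are not reproduced, and all names are illustrative.

```python
# Minimal geographically weighted regression (GWR) step: local weighted least
# squares with Gaussian kernel weights at one target location.
import numpy as np

def gwr_local_coefs(X, y, coords, target, bandwidth):
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
    Xw = X * w[:, None]
    beta, *_ = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)  # (X'WX) b = X'Wy
    return beta

# toy data: intercept + one driver (e.g. an input-factor index) per region
rng = np.random.default_rng(3)
coords = rng.random((50, 2)) * 10
X = np.column_stack([np.ones(50), rng.random(50)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(0, 0.1, 50)
print(gwr_local_coefs(X, y, coords, target=np.array([5.0, 5.0]), bandwidth=3.0))
```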

Journal ArticleDOI
TL;DR: A complete modelling framework for the North Sea region is proposed in order to fill two knowledge gaps identified in the literature: the lack of offshore integrated system modelling, and the lack of spatial analysis when defining the offshore regions of the modelling framework.
Abstract: The importance of spatial resolution for energy modelling has increased in recent years. Incorporating more spatial resolution in energy models brings wide benefits, but it is not straightforward, as it might compromise their computational performance. This paper aims to provide a comprehensive review of spatial resolution in energy models, including benefits, challenges and future research avenues. The paper is divided into four parts. First, it reviews and analyses the applications of geographic information systems (GIS) for energy modelling in the literature. GIS analyses are found to be relevant for analysing how meteorology affects renewable production, for assessing infrastructure needs, design and routing, and for analysing resource allocation, among other uses. Second, it analyses a selection of large-scale energy modelling tools in terms of how they can include spatial data, which resolution they have and to what extent this resolution can be modified. Of the 34 energy models reviewed, 16 allow regional coverage to be included, while 13 of them allow a tailor-made spatial resolution, showing that currently available modelling tools permit regional analysis in large-scale frameworks. The third part presents a collection of practices used in the literature to include spatial resolution in energy models, ranging from aggregated methods where the spatial granularity is non-existent to sophisticated clustering methods. Of the spatial data clustering methods available in the literature, k-means and max-p have been successfully used in energy-related applications with promising results: k-means can cluster large amounts of spatial data at a low computational cost, while max-p ensures contiguity and homogeneity in the resulting clusters. The fourth part applies the findings and lessons learned throughout the paper to the North Sea region. This region combines large amounts of planned deployment of variable renewable energy sources with multiple spatial claims and geographical constraints, and is therefore ideal as a case study. We propose a complete modelling framework for the region in order to fill two knowledge gaps identified in the literature: the lack of offshore integrated system modelling, and the lack of spatial analysis when defining the offshore regions of the modelling framework.
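As a small illustration of the k-means aggregation step discussed above, the sketch below groups candidate sites into a handful of model regions by location and a resource attribute. The coordinates and capacity factors are synthetic, the scaling is crude, and contiguity is not enforced (that is what max-p addresses), so this is only a sketch of the general workflow.

```python
# Illustrative k-means aggregation of candidate sites into model regions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
lon = rng.uniform(2.0, 8.0, 500)                      # rough North Sea longitudes
lat = rng.uniform(52.0, 58.0, 500)                    # rough North Sea latitudes
capacity_factor = rng.uniform(0.35, 0.55, 500)        # stand-in resource attribute

features = np.column_stack([lon, lat, capacity_factor * 10])   # crude scaling
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
# `labels` assigns each site to one of 8 regions usable in an energy model
```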

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a novel spatiotemporal network, where the key innovation is the design of its temporal unit. Compared with other existing competitors, the proposed temporal unit exhibits an extremely lightweight design that does not degrade its strong ability to sense temporal information. Furthermore, it fully enables the computation of temporal saliency cues that interact with their spatial counterparts.
Abstract: We have witnessed a growing interest in video salient object detection (VSOD) techniques in today’s computer vision applications. In contrast with temporal information (which is still considered a rather unstable source thus far), the spatial information is more stable and ubiquitous, and thus it could influence our vision system more. As a result, the current main-stream VSOD approaches have inferred and obtained their saliency primarily from the spatial perspective, still treating temporal information as subordinate. Although the aforementioned methodology of focusing on the spatial aspect is effective in achieving a numeric performance gain, it still has two critical limitations. First, to ensure the dominance of the spatial information, its temporal counterpart remains inadequately used, though in some complex video scenes the temporal information may represent the only reliable data source, which is critical to derive the correct VSOD. Second, both spatial and temporal saliency cues are often computed independently in advance and then integrated later on, while the interactions between them are omitted completely, resulting in saliency cues with limited quality. To combat these challenges, this paper advocates a novel spatiotemporal network, where the key innovation is the design of its temporal unit. Compared with other existing competitors (e.g., convLSTM), the proposed temporal unit exhibits an extremely lightweight design that does not degrade its strong ability to sense temporal information. Furthermore, it fully enables the computation of temporal saliency cues that interact with their spatial counterparts, ultimately boosting the overall VSOD performance and realizing its full potential towards mutual performance improvement for each. The proposed method is easy to implement yet still effective, achieving high-quality VSOD at 50 FPS in real-time applications.

Journal ArticleDOI
TL;DR: In this paper, a spatial correlation analysis of near-collision clusters with local traffic characteristics is presented, where the Moran's I and Getis-Ord Gi* spatial autocorrelation methods are used to determine whether near collisions show spatial clustering from global and local perspectives.
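A compact version of the local Getis-Ord Gi* statistic used in such hot-spot analyses is sketched below, with binary distance-band weights that include the point itself; the resulting values are z-scores, so large positive Gi* flags clusters of high values. The coordinates, counts and distance threshold are synthetic, and the function name is illustrative.

```python
# Compact Getis-Ord Gi* hot-spot statistic (standard z-score form, binary
# distance-band weights including the point itself).
import numpy as np

def getis_ord_gi_star(coords, x, threshold):
    n = len(x)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (d <= threshold).astype(float)                # includes self (d == 0)
    x_bar, s = x.mean(), np.sqrt((x ** 2).mean() - x.mean() ** 2)
    w_sum = w.sum(axis=1)
    num = w @ x - x_bar * w_sum
    den = s * np.sqrt((n * (w ** 2).sum(axis=1) - w_sum ** 2) / (n - 1))
    return num / den

# toy usage: 100 near-collision locations with event counts
rng = np.random.default_rng(5)
pts = rng.random((100, 2)) * 1000                     # locations in metres
counts = rng.poisson(2.0, 100).astype(float)
print(getis_ord_gi_star(pts, counts, threshold=150.0)[:5])
```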

Journal ArticleDOI
TL;DR: It is argued that this closed-form software is problematic, and the paper considers a number of ways in which issues identified in spatial data analysis could be overlooked when working with closed tools, leading to problems of interpretation and possibly to inappropriate actions and policies based on these.
Abstract: This paper reflects on a number of trends towards a more open and reproducible approach to geographic and spatial data science over recent years. In particular, it considers trends towards Big Data, and the impacts this is having on spatial data analysis and modelling. It identifies a turn in academia towards coding as a core analytic tool, and away from proprietary software tools offering ‘black boxes’ where the internal workings of the analysis are not revealed. It is argued that this closed-form software is problematic, and the paper considers a number of ways in which issues identified in spatial data analysis (such as the MAUP) could be overlooked when working with closed tools, leading to problems of interpretation and possibly inappropriate actions and policies based on these. In addition, this paper considers the role that reproducible and open spatial science may play in such an approach, taking into account the issues raised. It highlights the dangers of failing to account for the geographical properties of data, now that all data are spatial (they are collected somewhere), and the problems of a desire for n = all observations in data science, and it identifies the need for a critical approach. This is one in which openness, transparency, sharing and reproducibility provide a mantra for defensible and robust spatial data science.

Journal ArticleDOI
TL;DR: This letter introduces a novel spatial–spectral classification method for hyperspectral images (HSIs) based on a structural-kernel collaborative representation (SKCR), which adopts one weak assumption of spatial neighborhood, namely that the pixels in a superpixel belong to the same class, when exploiting contextual information in HSI.
Abstract: This letter introduces a novel spatial–spectral classification method for hyperspectral images (HSIs) based on a structural-kernel collaborative representation (SKCR), which adopts one weak assumption of spatial neighborhood, namely that the pixels in a superpixel belong to the same class, when exploiting contextual information in HSI. The proposed method consists of the following steps. First, a superpixel segmentation strategy is used to construct self-adaptive regions for the HSI. Then, the structural information within each superpixel block is extracted based on the density peak and $K$ nearest neighbors. Next, dual kernels are separately utilized for the exploitation of the spectral and the spatial information. Finally, the dual kernels are combined and incorporated into a support-vector-machine classifier. Since the weak assumption of spatial neighborhood is well considered in the collaborative representation, the proposed method showed excellent classification performance for two widely used real hyperspectral data sets even when the number of training samples was relatively small.

Journal ArticleDOI
Yanguang Chen1
14 Apr 2021-PLOS ONE
TL;DR: 2-dimensional spatial autocorrelation functions based on the Moran index are developed using the relative staircase function as a weight function to yield a spatial weight matrix with a displacement parameter.
Abstract: A number of spatial statistic measurements such as Moran’s I and Geary’s C can be used for spatial autocorrelation analysis. Spatial autocorrelation modeling proceeded from the 1-dimensional autocorrelation of time series analysis, with time lag replaced by spatial weights so that the autocorrelation functions degenerated to autocorrelation coefficients. This paper develops 2-dimensional spatial autocorrelation functions based on the Moran index using the relative staircase function as a weight function to yield a spatial weight matrix with a displacement parameter. The displacement bears analogy with the time lag in time series analysis. Based on the spatial displacement parameter, two types of spatial autocorrelation functions are constructed for 2-dimensional spatial analysis. Then the partial spatial autocorrelation functions are derived by using the Yule-Walker recursive equation. The spatial autocorrelation functions are generalized to the autocorrelation functions based on Geary’s coefficient and Getis’ index. As an example, the new analytical framework was applied to the spatial autocorrelation modeling of Chinese cities. A conclusion can be reached that building an autocorrelation function based on the relative staircase function is an effective method. The spatial autocorrelation functions can be employed to reveal deep geographical information and perform spatial dynamic analysis, and lay the foundation for the scaling analysis of spatial correlation.
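For reference, the global Moran's I coefficient that these autocorrelation functions generalize can be computed directly from a spatial weight matrix W. The sketch below uses a toy rook-style W on a line of five regions rather than the paper's displacement-parameterized staircase weights, so it only illustrates the base coefficient.

```python
# Global Moran's I:  I = (n / sum(W)) * z' W z / (z' z),  with z = x - mean(x).
import numpy as np

def morans_i(x, w):
    z = x - x.mean()
    return (len(x) / w.sum()) * (z @ w @ z) / (z @ z)

# toy example: 5 regions on a line, neighbouring regions weighted 1
x = np.array([1.0, 2.0, 2.5, 4.0, 5.0])
w = np.zeros((5, 5))
for i in range(4):
    w[i, i + 1] = w[i + 1, i] = 1.0
print(morans_i(x, w))   # positive: similar values are neighbours
```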

Proceedings ArticleDOI
14 Aug 2021
TL;DR: In this article, the representation of each point-of-interest (POI) in the road network is first learned along with the trajectory information; the trajectory representation is then learned by a graph neural network (GNN) model that identifies neighboring POIs within the same trajectory, together with an LSTM model that captures the sequence information in the trajectory.
Abstract: Trajectory similarity computation is an essential operation in many applications of spatial data analysis. In this paper, we study the problem of trajectory similarity computation over a spatial network, where the real distances between objects are reflected by the network distance. Unlike previous studies, which learn the representation of trajectories in Euclidean space, this setting requires capturing not only the sequence information of the trajectory but also the structure of the spatial network. To this end, we propose GTS, a brand new framework that can jointly learn both factors so as to accurately compute the similarity. It first learns the representation of each point-of-interest (POI) in the road network along with the trajectory information. This is realized by incorporating the distances between POIs and the trajectory into the random walk over the spatial network as well as into the loss function. Then the trajectory representation is learned by a Graph Neural Network model that identifies neighboring POIs within the same trajectory, together with an LSTM model that captures the sequence information in the trajectory. We conduct a comprehensive evaluation on several real-world datasets. The experimental results demonstrate that our model substantially outperforms all existing approaches.
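A toy sketch of the sequence half of this idea is shown below: POI ids are embedded, an LSTM encodes the visit sequence, and trajectories are compared by cosine similarity of the final hidden state. The graph-based POI representation learning of the actual GTS framework is not reproduced, and the class name and dimensions are illustrative.

```python
# Toy trajectory encoder: embed POI ids, run an LSTM, compare final states.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryEncoder(nn.Module):                   # hypothetical name
    def __init__(self, num_pois, dim=32):
        super().__init__()
        self.embed = nn.Embedding(num_pois, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, poi_ids):                       # (batch, seq_len) integer tensor
        _, (h, _) = self.lstm(self.embed(poi_ids))
        return h[-1]                                   # (batch, dim) trajectory vector

enc = TrajectoryEncoder(num_pois=1000)
t1 = torch.randint(0, 1000, (1, 15))                   # two random POI-id sequences
t2 = torch.randint(0, 1000, (1, 15))
print(F.cosine_similarity(enc(t1), enc(t2)).item())
```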

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed Spatial Information Guided Convolution (S-Conv), which allows efficient RGB feature and 3D spatial information integration to improve the performance of semantic segmentation.
Abstract: 3D spatial information is known to be beneficial to the semantic segmentation task. Most existing methods take 3D spatial data as an additional input, leading to a two-stream segmentation network that processes RGB and 3D spatial information separately. This solution greatly increases the inference time and severely limits its scope for real-time applications. To solve this problem, we propose Spatial information guided Convolution (S-Conv), which allows efficient RGB feature and 3D spatial information integration. S-Conv is competent to infer the sampling offset of the convolution kernel guided by the 3D spatial information, helping the convolutional layer adjust the receptive field and adapt to geometric transformations. S-Conv also incorporates geometric information into the feature learning process by generating spatially adaptive convolutional weights. The capability of perceiving geometry is largely enhanced without much affecting the amount of parameters and computational cost. Based on S-Conv, we further design a semantic segmentation network, called Spatial information Guided convolutional Network (SGNet), resulting in real-time inference and state-of-the-art performance on NYUDv2 and SUNRGBD datasets.

Journal ArticleDOI
TL;DR: This paper proposes a graph convolutional networks (GCNs)-based model and introduces a simple yet effective recurrent operation to perform the de-raining process in a successive manner, achieving state-of-the-art results on both synthetic and real-world data sets.
Abstract: Deep convolutional neural networks (CNNs) have shown their advantages in the single image de-raining task. However, most existing CNNs-based methods utilize only local spatial information without considering long-range contextual information. In this paper, we propose a graph convolutional networks (GCNs)-based model to solve the above problem. We specifically design two graphs to extract representations from new dimensions. The first graph models the global spatial relationship between pixels in the feature, while the second graph models the interrelationship across the channels. By integrating conventional CNNs and our GCNs into a single framework, the proposed method is able to explore comprehensive feature representations from three aspects, i.e., local spatial patterns, global spatial coherence and channel correlation. To better exploit the explored rich feature representations, we further introduce a simple yet effective recurrent operation to perform the de-raining process in a successive manner. Benefiting from the rich information exploration and exploitation, our method achieves state-of-the-art results on both synthetic and real-world data sets.

Journal ArticleDOI
TL;DR: The algorithm in this article uses Cycle-generative adversarial networks (GANs) to simulate the change process of two HSLT images, and the image with spatial information is introduced into the Flexible Spatiotemporal DAta Fusion (FSDAF) framework to improve the performance of spatiotemporal image-fusion.
Abstract: Due to the trade-off of temporal resolution and spatial resolution, spatiotemporal image-fusion uses existing high-spatial-low-temporal (HSLT) and high-temporal-low-spatial (HTLS) images as prior knowledge to reconstruct high-temporal-high-spatial (HTHS) images. However, some existing spatiotemporal image-fusion algorithms ignore the issue that the spatial information of HTLS images is insufficient to support the acquisition of spatial information, which leads to the unsatisfactory accuracy of the fusion result. To introduce more spatial information, the algorithm in this article uses Cycle-generative adversarial networks (GANs) to simulate the change process of two HSLT images at $k-1$ and $k+1$, and to generate some simulated images between $k-1$ and $k+1$. Then, the generated images are selected with the help of HTLS images, and the selected ones are then enhanced with wavelet transform. Finally, the image with spatial information is introduced into the Flexible Spatiotemporal DAta Fusion (FSDAF) framework to improve the performance of spatiotemporal image-fusion. Extensive experiments on two real data sets demonstrate that our proposed method outperforms current state-of-the-art spatiotemporal image-fusion methods.

Journal ArticleDOI
TL;DR: A novel detail-preserving network (DPNet), i.e., a dual-branch network architecture, is proposed that fully addresses the above problems and facilitates the depth map inference. Experimental results show that the proposed method outperforms SOTA methods on benchmark RGB-D datasets.