scispace - formally typeset
Author

Li Peng

Bio: Li Peng is an academic researcher from Wuhan University. The author has contributed to research in the topics of image restoration and structure tensors. The author has an h-index of 1 and has co-authored 1 publication receiving 53 citations.

Papers
Journal ArticleDOI
Huanfeng Shen, Li Peng, Linwei Yue, Qiangqiang Yuan, Liangpei Zhang
TL;DR: A method is proposed to adaptively determine the optimal norms for both the fidelity term and the regularization term in the super-resolution (SR) restoration model; inspired by a generalized likelihood ratio test, a piecewise function is used to solve for the norm of the fidelity term.
Abstract: In the commonly employed regularization models of image restoration and super-resolution (SR), determining the norms is often challenging. This paper proposes a method to adaptively determine the optimal norms for both the fidelity term and the regularization term in the SR restoration model. Inspired by a generalized likelihood ratio test, a piecewise function is proposed to solve for the norm of the fidelity term. This function can find a stable norm value within a certain number of iterations, regardless of whether the noise type is Gaussian, impulse, or mixed. For the regularization norm, the main advantage of the proposed method is that it is locally adaptive: it assigns different norms to different pixel locations, according to the local activity measured by a structure tensor metric. The proposed method was tested using different types of images, and the experimental results and error analyses verify its efficacy.
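The locally adaptive idea above can be sketched in a few lines: measure per-pixel activity from the structure tensor, then map low activity (smooth regions) to a norm near 2 and high activity (edges) to a norm near 1. This is a minimal sketch under assumptions — the paper's exact activity metric, smoothing window, and norm mapping are not given here, so the tensor trace, a simple box smoothing, and a linear mapping are used as illustrative stand-ins.

```python
import numpy as np

def structure_tensor_activity(img, radius=2):
    """Per-pixel activity from the trace of the structure tensor.

    Sketch with assumptions: box smoothing and the tensor trace
    stand in for whatever metric the paper actually uses.
    """
    gy, gx = np.gradient(img.astype(float))  # image gradients

    def smooth(a):  # box smoothing of a tensor component
        pad = np.pad(a, radius, mode='edge')
        out = np.zeros_like(a)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                out += pad[radius + dy:radius + dy + a.shape[0],
                           radius + dx:radius + dx + a.shape[1]]
        return out / (2 * radius + 1) ** 2

    jxx, jyy = smooth(gx * gx), smooth(gy * gy)
    return jxx + jyy  # trace = sum of the tensor's eigenvalues

def adaptive_norm(activity, p_min=1.0, p_max=2.0):
    """Map activity into [p_min, p_max]: smooth regions get a norm near
    2 (L2-like smoothing), edges a norm near 1 (L1-like, edge-preserving)."""
    a = activity / (activity.max() + 1e-12)
    return p_max - (p_max - p_min) * a
```

On a step-edge test image, flat regions receive a norm of 2 while pixels on the edge receive a norm closer to 1, which is the qualitative behavior the abstract describes.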

61 citations


Cited by
Journal ArticleDOI
Linwei Yue, Huanfeng Shen, Jie Li, Qiangqiang Yuan, Hongyan Zhang, Liangpei Zhang
TL;DR: This paper provides a review of SR from the perspective of techniques and applications, focusing especially on the main contributions of recent years, and discusses the current obstacles to future research.

378 citations

Journal ArticleDOI
TL;DR: The proposed integrated fusion framework achieves the fusion of multisource observations to obtain high spatio-temporal-spectral-resolution images, without limitations on the number of remote sensing sensors.
Abstract: Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio-temporal-spectral fusion of remote sensing images. The proposed framework has two main advantages: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio-spectral fusion, and spatio-temporal fusion, based on a single unified model; and it can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral-resolution images, without limitations on the number of remote sensing sensors. The framework was comprehensively tested and verified in a variety of image fusion experiments using a number of different remote sensing satellites, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Système Pour l'Observation de la Terre 5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.

240 citations

Journal ArticleDOI
TL;DR: A feature learning framework is proposed for the spectral-spatial feature representation and classification of hyperspectral images; it learns a latent low-dimensional subspace by projecting the spectral and spatial features into a common feature space, where their complementary information is effectively exploited.
Abstract: In hyperspectral remote sensing data mining, it is important to take into account both spectral and spatial information, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., the image classification accuracy. From a feature representation point of view, a natural approach to this situation is to concatenate the spectral and spatial features into a single high-dimensional vector and then apply a dimension reduction technique to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from various domains have different physical meanings and statistical properties, so such concatenation cannot efficiently exploit the complementary properties among the different features, which should help to boost the feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus, low-dimensional feature representation of the original multiple features is still a challenging task. In order to address these issues, we propose a novel feature learning framework, i.e., the simultaneous spectral-spatial feature selection and extraction algorithm, for the spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that the proposed method is both effective and efficient.
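The "concatenate then reduce" baseline that the abstract argues against is easy to state concretely. A minimal sketch, assuming PCA as the dimension reduction step and arbitrary illustrative feature dimensions (the paper's own method instead learns a shared subspace jointly):

```python
import numpy as np

def concat_pca(spectral, spatial, n_components):
    """Baseline described in the abstract: stack spectral and spatial
    features into one long vector per pixel, then reduce dimension with
    PCA. The feature matrices are (n_samples, d1) and (n_samples, d2)."""
    x = np.hstack([spectral, spatial])   # (n_samples, d1 + d2)
    x = x - x.mean(axis=0)               # center before PCA
    # PCA via SVD of the centered data matrix
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T       # (n_samples, n_components)
```

The abstract's criticism is visible in this sketch: the single projection treats spectral and spatial coordinates as interchangeable, ignoring their different physical meanings and statistics, which is what motivates the joint subspace learning the paper proposes instead.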

219 citations

Journal ArticleDOI
TL;DR: This paper proposes a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD), which simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework.
Abstract: Recent years have witnessed growing interest in hyperspectral image (HSI) processing. In practice, however, HSIs suffer from huge data sizes and a mass of redundant information, which hinder their application in many cases. HSI compression is a straightforward way of relieving these problems. However, most conventional image encoding algorithms focus mainly on the spatial dimensions and do not consider the redundancy in the spectral dimension. In this paper, we propose a novel HSI compression and reconstruction algorithm via patch-based low-rank tensor decomposition (PLTD). Instead of processing the HSI separately by spectral channel or by pixel, we represent each local patch of the HSI as a third-order tensor. Similar tensor patches are then grouped by clustering to form a fourth-order tensor per cluster. Since the grouped tensor is assumed to be redundant, each cluster can be approximately decomposed into a coefficient tensor and three dictionary matrices, which leads to a low-rank tensor representation of both the spatial and spectral modes. The reconstructed HSI can then be obtained simply as the product of the coefficient tensor and the dictionary matrices per cluster. In this way, the proposed PLTD algorithm simultaneously removes the redundancy in both the spatial and spectral domains in a unified framework. Extensive experimental results on various public HSI datasets demonstrate that the proposed method outperforms traditional image compression approaches and other tensor-based methods.
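The decomposition at the heart of PLTD — a coefficient (core) tensor multiplied by per-mode dictionary matrices — is a Tucker-style factorization. A sketch using a truncated higher-order SVD, under assumptions: the paper's patch extraction, clustering, and actual solver are omitted, and HOSVD is just one standard way to obtain such factors.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: move the given axis first, flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def truncated_hosvd(t, ranks):
    """Truncated HOSVD: per-mode dictionary matrices from the leading
    left singular vectors of each unfolding, plus a core tensor."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):  # project onto each factor subspace
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Multiply the core tensor back by each dictionary matrix."""
    out = core
    for mode, u in enumerate(factors):
        out = np.moveaxis(
            np.tensordot(u, np.moveaxis(out, mode, 0), axes=1), 0, mode)
    return out
```

When the grouped tensor really is low-rank — the redundancy assumption the abstract makes — this factorization reconstructs it essentially exactly while storing far fewer numbers, which is what makes it usable for compression.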

138 citations

Journal ArticleDOI
TL;DR: In this paper, the spatial and temporal nonlocal filter-based fusion model (STNLFFM) is proposed to enhance the prediction capacity and accuracy, especially for landscapes with complex changes.
Abstract: The tradeoff between spatial resolution and temporal frequency in remote sensing instruments limits our capacity to monitor spatial and temporal dynamics effectively. Spatiotemporal data fusion is considered a cost-effective way to obtain remote sensing data with both high spatial resolution and high temporal frequency, by blending observations from multiple sensors with different advantages or characteristics. In this paper, we develop the spatial and temporal nonlocal filter-based fusion model (STNLFFM) to enhance the prediction capacity and accuracy, especially for landscapes with complex changes. The STNLFFM method provides a new transformation relationship between fine-resolution reflectance images acquired from the same sensor at different dates with the help of coarse-resolution reflectance data, and makes full use of the high degree of spatiotemporal redundancy in the remote sensing image sequence to produce the final prediction. The proposed method was tested over both the Coleambally Irrigation Area and Lower Gwydir Catchment study sites. The results show that the proposed method provides a more accurate and robust prediction, especially for heterogeneous landscapes and temporally dynamic areas.
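A much-simplified sketch of the nonlocal weighting idea, under stated assumptions: a single reference date, an additive change model, and Gaussian similarity weights over coarse-image neighborhoods (the actual STNLFFM uses transformation relationships across multiple sensor pairs and dates). Each fine pixel is updated with a similarity-weighted average of the coarse temporal change around it.

```python
import numpy as np

def nonlocal_fusion(fine_ref, coarse_ref, coarse_pred, win=5, h=0.2):
    """Predict a fine image at the target date from a fine reference image
    and coarse images at both dates, via nonlocal-weighted coarse change.

    Simplified sketch: weights come from similarity between each pixel's
    coarse value and its neighborhood at the prediction date.
    """
    H, W = fine_ref.shape
    diff = coarse_pred - coarse_ref          # coarse temporal change
    pad = win // 2
    dpad = np.pad(diff, pad, mode='edge')
    cpad = np.pad(coarse_pred, pad, mode='edge')
    out = np.empty_like(fine_ref)
    for i in range(H):
        for j in range(W):
            nb_d = dpad[i:i + win, j:j + win]          # change neighborhood
            nb_c = cpad[i:i + win, j:j + win]          # coarse neighborhood
            w = np.exp(-((nb_c - coarse_pred[i, j]) ** 2) / h ** 2)
            out[i, j] = fine_ref[i, j] + (w * nb_d).sum() / w.sum()
    return out
```

Two sanity checks follow directly from the formulation: if the coarse image does not change between dates, the prediction equals the fine reference; a uniform coarse change shifts the fine prediction by the same amount.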

84 citations