
Showing papers by "Kui Jiang" published in 2018


Journal ArticleDOI
TL;DR: Experiments on the Kaggle Open Source Dataset and Jilin-1 video satellite images show that DDRN outperforms conventional CNN-based baselines and some state-of-the-art feature extraction approaches.
Abstract: Deep convolutional neural networks (CNNs) have been widely used and have achieved state-of-the-art performance in many image and video processing and analysis tasks. In particular, for image super-resolution (SR), previous CNN-based methods have led to significant improvements compared with shallow learning-based methods. However, earlier CNN-based algorithms with simple direct or skip connections perform poorly when applied to SR of remote sensing satellite images. In this study, a simple but effective CNN framework, namely the deep distillation recursive network (DDRN), is presented for video satellite image SR. DDRN includes a group of ultra-dense residual blocks (UDB), a multi-scale purification unit (MSPU), and a reconstruction module. In particular, by adding rich interactive links in and between the multiple-path units of each UDB, features extracted from multiple parallel convolution layers can be shared effectively. Compared with classical dense-connection-based models, DDRN has the following main properties. (1) DDRN contains more linking nodes for the same number of convolution layers. (2) A distillation and compensation mechanism, which performs feature distillation and compensation at different stages of the network, is also constructed; in particular, the high-frequency components lost during information propagation can be compensated in the MSPU. (3) The final SR image benefits from both the feature maps extracted by the UDBs and the compensated components obtained from the MSPU. Experiments on the Kaggle Open Source Dataset and Jilin-1 video satellite images show that DDRN outperforms conventional CNN-based baselines and some state-of-the-art feature extraction approaches.
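The block-level layout described in the abstract can be pictured with a short sketch. The PyTorch code below is a minimal, hedged illustration of the DDRN idea only: ultra-dense residual blocks whose parallel paths exchange features, a stand-in for the MSPU that fuses the features distilled from every block, and a pixel-shuffle reconstruction module. The class names, path and block counts, and channel widths are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of the DDRN layout described in the abstract (assumptions:
# class names, two paths per block, 4 blocks, 64 channels, scale factor 4).
import torch
import torch.nn as nn


class UltraDenseBlock(nn.Module):
    """Two parallel conv paths with interactive links: each step sees the
    concatenation of both paths' previous outputs (a simplified UDB)."""

    def __init__(self, channels=64, steps=3):
        super().__init__()
        self.path_a = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, 3, padding=1) for _ in range(steps)])
        self.path_b = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, 3, padding=1) for _ in range(steps)])
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        a, b = x, x
        for conv_a, conv_b in zip(self.path_a, self.path_b):
            shared = torch.cat([a, b], dim=1)       # interactive link between paths
            a = self.relu(conv_a(shared))
            b = self.relu(conv_b(shared))
        return x + self.fuse(torch.cat([a, b], dim=1))  # residual output


class DDRNSketch(nn.Module):
    def __init__(self, channels=64, num_blocks=4, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            [UltraDenseBlock(channels) for _ in range(num_blocks)])
        # MSPU stand-in: fuse the per-block (distilled) features with a 1x1
        # and a 3x3 conv to compensate lost high-frequency components.
        self.mspu = nn.Sequential(
            nn.Conv2d(channels * num_blocks, channels, 1),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.reconstruct = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr):
        feat = self.head(lr)
        distilled = []
        for block in self.blocks:
            feat = block(feat)
            distilled.append(feat)                  # feature distillation per stage
        feat = feat + self.mspu(torch.cat(distilled, dim=1))  # compensation
        return self.reconstruct(feat)


if __name__ == "__main__":
    out = DDRNSketch()(torch.randn(1, 3, 32, 32))
    print(out.shape)  # torch.Size([1, 3, 128, 128])
```

The interactive concatenation inside each block is what yields the "more linking nodes for the same number of convolution layers" property noted in the abstract.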

75 citations


Journal ArticleDOI
TL;DR: Experiments on real-world Jilin-1 video satellite images and the Kaggle Open Source Dataset show that the proposed PECNN outperforms state-of-the-art methods both in visual effects and quantitative metrics.
Abstract: Deep convolutional neural networks (CNNs) have been extensively applied to image and video processing and analysis tasks. For single-image super-resolution (SR), previous CNN-based methods have led to significant improvements compared to shallow learning-based methods. However, these CNN-based algorithms with simple direct or skip connections are not suitable for satellite imagery SR because of complex imaging conditions and an unknown degradation process. More importantly, they ignore the extraction and utilization of the structural information in satellite images, which is very unfavorable for video satellite imagery SR with such characteristics as small ground targets, weak textures, and over-compression distortion. To this end, this letter proposes a novel progressively enhanced network for satellite image SR, called PECNN, which is composed of a pretraining CNN-based network and an enhanced dense connection network. The pretraining part extracts the low-level feature maps and reconstructs a basic high-resolution image from the low-resolution input. In particular, we propose a transition unit to obtain the structural information from the base output. The obtained structural information and the extracted low-level feature maps are then transmitted to the enhanced network for further extraction to enforce the feature expression. Finally, a residual image with enhanced fine details, obtained from the dense connection network, is used to enrich the basic image for the ultimate SR output. Experiments on real-world Jilin-1 video satellite images and the Kaggle Open Source Dataset show that the proposed PECNN outperforms state-of-the-art methods both in visual effects and quantitative metrics. Code is available at https://github.com/kuihua/PECNN
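The two-stage pipeline described above (pretraining branch, transition unit, enhanced dense-connection branch) can be sketched as follows. This is a rough PyTorch illustration, not the released PECNN code: the layer counts, channel widths, growth rate, and the bilinear resize used to bring the base output back to the low-resolution grid are all assumptions made for the example.

```python
# Rough sketch of the PECNN two-stage idea under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseLayer(nn.Module):
    def __init__(self, in_ch, growth=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, growth, 3, padding=1)

    def forward(self, x):
        return torch.cat([x, F.relu(self.conv(x))], dim=1)  # dense connection


class PECNNSketch(nn.Module):
    def __init__(self, channels=64, scale=4, dense_layers=4, growth=32):
        super().__init__()
        # Stage 1: pretraining branch -> low-level features and a base HR image.
        self.features = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.base_up = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))
        # Transition unit: structural information taken from the base output.
        self.transition = nn.Conv2d(3, channels, 3, padding=1)
        # Stage 2: enhanced dense-connection branch -> residual detail image.
        layers, in_ch = [], 2 * channels
        for _ in range(dense_layers):
            layers.append(DenseLayer(in_ch, growth))
            in_ch += growth
        self.dense = nn.Sequential(*layers)
        self.residual_up = nn.Sequential(
            nn.Conv2d(in_ch, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))

    def forward(self, lr):
        feats = self.features(lr)
        base = self.base_up(feats)                   # basic HR reconstruction
        structure = self.transition(
            F.interpolate(base, size=lr.shape[-2:], mode="bilinear",
                          align_corners=False))      # back to the LR grid for fusion
        residual = self.residual_up(
            self.dense(torch.cat([feats, structure], dim=1)))
        return base + residual                       # base enriched by fine details
```

The key design point mirrored here is that the residual branch sees both the low-level features and the structural information derived from the base output, rather than the low-resolution input alone.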

59 citations


Patent
31 Jul 2018
TL;DR: In this patent, a deep learning network training method for video satellite super-resolution reconstruction is proposed, which comprises constructing a training sample set composed of high-resolution static satellite images, constructing a CNN structure for super-resolution reconstruction, setting the network training parameters, and finally establishing a loss function.
Abstract: The invention discloses a deep learning network training method for video satellite super-resolution reconstruction. The method first constructs a training sample set composed of high-resolution static satellite images, then constructs a CNN structure for super-resolution reconstruction and sets the network training parameters, and finally establishes the loss function for training the deep CNN. The loss function accounts for how strongly target edges and pixel gray values affect the reconstruction error, which improves the training of the deep CNN and ultimately the performance of the deep-learning-based image super-resolution method.
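As a hedged sketch of what an edge- and gray-value-weighted training loss could look like, the snippet below weights the per-pixel reconstruction error by the local edge strength of the target (Sobel gradient magnitude) and by its gray value, so errors on target edges count more during training. The exact weighting used in the patent is not given in the abstract; this particular form and the edge_weight/gray_weight parameters are assumptions.

```python
# Illustrative edge/gray-weighted reconstruction loss (assumed form).
import torch
import torch.nn.functional as F


def edge_weighted_loss(sr, hr, edge_weight=1.0, gray_weight=0.1):
    """sr, hr: (N, 1, H, W) tensors scaled to [0, 1]; returns a scalar loss."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                           device=hr.device).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)
    gx = F.conv2d(hr, sobel_x, padding=1)
    gy = F.conv2d(hr, sobel_y, padding=1)
    edge_mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)   # edge strength of the target
    weight = 1.0 + edge_weight * edge_mag + gray_weight * hr
    return torch.mean(weight * (sr - hr) ** 2)        # edge/gray-weighted MSE


# Usage during training (hypothetical network and batch names):
#   loss = edge_weighted_loss(network(lr_batch), hr_batch)
#   loss.backward()
```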

2 citations