Journal ArticleDOI
Multi-scale Single Image Super-Resolution with Remote-Sensing Application Using Transferred Wide Residual Network
TL;DR: Wang et al. propose WRSR, a transferred wide residual single-image super-resolution (SISR) deep neural network model for remote sensing, built by increasing network width while reducing residual network depth.
Abstract:
Super-resolution (SR) has received extensive attention in recent years for satellite image processing across a wide range of application scenarios, such as land classification, change detection, and resource discovery. Images from satellite sensors are mostly low-resolution (LR), so they do not fully satisfy the criteria for object detection and analysis. Deep learning offers multiple residual network frameworks for SR that improve performance and can extend to thousands of layers. However, accuracy gains often require doubling the number of layers, and training networks thousands of layers deep is expensive, slow, and prone to feature-recovery issues. We propose WRSR, a transferred wide residual single-image super-resolution (SISR) deep neural network model for remote sensing. By increasing the width and reducing the depth of the residual network, the proposed approach dramatically reduces memory cost: as a direct consequence of the depth reduction, our model uses 21% less memory than Enhanced Deep Residual Super-Resolution (EDSR) and 34% less than SRResNet. The proposed architecture also improves training-loss behavior by applying weight normalization instead of augmentation. We compared our method against five recent super-resolution deep neural network methods on three public satellite image datasets and a standard reference (PRIM) dataset. Experimental results are evaluated with peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
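The PSNR metric used in the evaluation above can be computed directly from the mean squared error between a reference image and its reconstruction. A minimal NumPy sketch (the `psnr` helper and its signature are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio between a reference and a reconstructed image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the reference; SR papers typically report it in decibels on the luminance channel.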
Citations
Journal ArticleDOI
A comprehensive review on deep learning based remote sensing image super-resolution methods
TL;DR: In this paper, a review of the DL-based single image super-resolution (SISR) methods on optical remote sensing images is presented, including DL models, commonly used remote sensing datasets, loss functions, and performance evaluation metrics.
Journal ArticleDOI
Devising a method for segmenting complex structured images acquired from space observation systems based on the particle swarm algorithm
Hennadii Khudov, Oleksandr Makoveichuk, Irina Khizhnyak, Oleksandr Oleksenko, Yuriy Khazhanets, Yuriy Solomonenko, Iryna Yuzova, Yevhen Dudar, Stanislav Stetsiv, Vladyslav Khudov +9 more
TL;DR: The improved segmentation method based on the particle swarm algorithm makes it possible to segment complex structured images acquired from space surveillance systems, reducing segmentation errors of the first kind by an average of 12% and those of the second kind by 8%.
Journal ArticleDOI
Improvement of noisy images filtered by bilateral process using a multi-scale context aggregation network
TL;DR: The study shows the effect of using Multi-scale deep learning Context Aggregation Network CAN on Bilateral Filtering Approximation (BFA) for de-noising noisy CCTV images.
Journal ArticleDOI
Texture-driven super-resolution of ultrasound images using optimized deep learning model
M. Markco, S. Kannan +1 more
TL;DR: In this paper, a deep learning-based, texture-driven super-resolution method for ultrasound images is proposed, which uses the Dwarf Mongoose Optimization (DMO) method to tune the CNN parameters, thereby improving the quality of the resolved images.
References
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
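The per-mini-batch normalization this reference describes can be sketched in a few lines. This is a simplified training-time version for fully connected activations (the `batch_norm` helper, its arguments, and shapes are illustrative, not the paper's code):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.

    x: (batch, features) activations; gamma, beta: learnable scale and shift.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero mean, unit variance per feature
    return gamma * x_hat + beta
```

At inference time, running estimates of the mean and variance replace the per-batch statistics. Notably, WRSR above replaces this with weight normalization to reduce memory cost.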
Proceedings ArticleDOI
Densely Connected Convolutional Networks
TL;DR: DenseNet as mentioned in this paper proposes to connect each layer to every other layer in a feed-forward fashion, which can alleviate the vanishing gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters.
Proceedings ArticleDOI
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Peter Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi +10 more
TL;DR: SRGAN as mentioned in this paper proposes a perceptual loss function which consists of an adversarial loss and a content loss, which pushes the solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images.
Proceedings ArticleDOI
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
Wenzhe Shi, Jose Caballero, Ferenc Huszar, Johannes Totz, Andrew Peter Aitken, Rob Bishop, Daniel Rueckert, Zehan Wang +7 more
TL;DR: This paper presents the first convolutional neural network capable of real-time SR of 1080p videos on a single K2 GPU and introduces an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
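The rearrangement at the heart of the sub-pixel convolution layer (often called pixel shuffle) maps r² low-resolution feature channels into an r×-upscaled spatial grid. A minimal NumPy sketch of that reshaping step, not the paper's implementation (the `pixel_shuffle` helper name and layout are illustrative):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) feature maps into (C, H*r, W*r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    # Split channels into (c, r, r), then interleave the two r axes spatially.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

Because the upscaling happens only at the end of the network, all convolutions run on the small LR feature maps, which is what makes the approach fast enough for real-time video SR.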