Journal ArticleDOI

Image super-resolution

Linwei Yue1, Huanfeng Shen1, Jie Li1, Qiangqiang Yuan1, Hongyan Zhang1, Liangpei Zhang1 
01 Nov 2016-Signal Processing (Elsevier)-Vol. 128, pp 389-408
TL;DR: This paper provides a review of SR from the perspective of techniques and applications, with emphasis on the main contributions of recent years, and discusses the current obstacles to future research.
About: This article was published in Signal Processing on 2016-11-01 and has received 378 citations to date.
Citations
Book ChapterDOI
18 Sep 2018
TL;DR: A new multi-layer benchmark dataset is introduced for the systematic evaluation of multiple-image SRR techniques, with particular reference to satellite imaging, in the hope that it will help researchers improve the state of the art in SRR and make it suitable for real-world applications.
Abstract: Super-resolution reconstruction (SRR) methods consist of processing single or multiple images to increase their spatial resolution. Deployment of such techniques is particularly important when high-resolution image acquisition is associated with high cost or risk, as in medical or satellite imaging. Unfortunately, the existing SRR techniques are not sufficiently robust to be deployed in real-world scenarios, and no real-life benchmark to validate multiple-image SRR has been published so far. As gathering a set of images presenting the same scene at different spatial resolutions is not a trivial task, SRR methods are evaluated based on different assumptions, employing various metrics and datasets, often without using any ground-truth data. In this paper, we introduce a new multi-layer benchmark dataset for the systematic evaluation of multiple-image SRR techniques, with particular reference to satellite imaging. We hope that the new benchmark will help researchers improve the state of the art in SRR, making it suitable for real-world applications.

5 citations

Journal ArticleDOI
TL;DR: Zhang et al. propose an attention-based generative adversarial network (SRAGAN) that applies both local and global attention mechanisms to improve super-resolution performance.
Abstract: Super-resolution (SR) technology is an important way to improve spatial resolution under the condition of sensor hardware limitations. With the development of deep learning (DL), some DL-based SR models have achieved state-of-the-art performance, especially the convolutional neural network (CNN). However, considering that remote sensing images usually contain a variety of ground scenes and objects with different scales, orientations, and spectral characteristics, previous works usually treat important and unnecessary features equally or only apply different weights in the local receptive field, which ignores long-range dependencies; it is still a challenging task to exploit features on different levels and reconstruct images with realistic details. To address these problems, an attention-based generative adversarial network (SRAGAN) is proposed in this article, which applies both local and global attention mechanisms. Specifically, we apply local attention in the SR model to focus on structural components of the earth’s surface that require more attention, and global attention is used to capture long-range interdependencies in the channel and spatial dimensions to further refine details. To optimize the adversarial learning process, we also use local and global attention in the discriminator model to enhance its discriminative ability, apply the gradient penalty in the form of a hinge loss, and use a loss function that combines L1 pixel loss, L1 perceptual loss, and relativistic adversarial loss to promote rich details. The experiments show that SRAGAN can achieve performance improvements and reconstruct better details compared with current state-of-the-art SR methods. A series of ablation investigations and model analyses validate the efficiency and effectiveness of our method.

5 citations
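SRAGAN's exact attention modules are not reproduced here; as a rough numpy sketch of the local (channel) attention idea it builds on, a squeeze-and-excitation style gate (the weights, shapes, and reduction ratio below are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention (illustrative).

    feat: feature map of shape (C, H, W)
    w1:   weights (C//r, C) for the squeeze FC layer
    w2:   weights (C, C//r) for the excitation FC layer
    """
    # Squeeze: global average pooling over the spatial dimensions
    z = feat.mean(axis=(1, 2))                                   # shape (C,)
    # Excitation: bottleneck MLP followed by a sigmoid gate
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))    # shape (C,)
    # Rescale each channel by its attention weight in (0, 1)
    return feat * s[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

The gate only rescales channels, so the output shape matches the input; global (spatial) attention would analogously weight positions rather than channels.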

Book ChapterDOI
03 Apr 2018
TL;DR: This paper proposes to search the hyper-parameter space using a genetic algorithm (GA), thus adapting to the actual relation between LR and HR, which has not been reported in the literature so far.
Abstract: Super-resolution reconstruction (SRR) allows for producing a high-resolution (HR) image from a set of low-resolution (LR) observations. The majority of existing methods require tuning a number of hyper-parameters which control the reconstruction process and configure the imaging model that is supposed to reflect the relation between high and low resolution. In this paper, we demonstrate that the reconstruction process is very sensitive to the actual relation between LR and HR images, and we argue that this is a substantial obstacle in deploying SRR in practice. We propose to search the hyper-parameter space using a genetic algorithm (GA), thus adapting to the actual relation between LR and HR, which has not been reported in the literature so far. The results of our extensive experimental study clearly indicate that our GA improves the capacities of SRR. Importantly, the GA converges to different values of the hyper-parameters depending on the applied degradation procedure, which is confirmed using statistical tests.

5 citations
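The paper's exact GA configuration is not given here; a minimal real-coded genetic algorithm over a toy two-parameter fitness landscape illustrates the hyper-parameter search idea (population size, mutation scale, and the fitness function are all illustrative stand-ins for a real SRR quality measure):

```python
import numpy as np

def ga_search(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Minimal real-coded genetic algorithm for hyper-parameter search.

    fitness: maps a hyper-parameter vector to a score (higher = better)
    bounds:  sequence of (low, high) per hyper-parameter
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Selection: keep the better half of the population as parents
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: average two random parents; mutation: gaussian noise
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        children = (parents[idx[:, 0]] + parents[idx[:, 1]]) / 2.0
        children += rng.normal(0.0, 0.05, children.shape) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

# Toy fitness peaked at lambda = 0.3, sigma = 1.5 (hypothetical optimum)
best = ga_search(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 1.5) ** 2),
                 bounds=[(0.0, 1.0), (0.1, 3.0)])
print(best)
```

In the paper's setting, the fitness would score a full SRR reconstruction for each candidate hyper-parameter vector, which is why the GA converges to different values under different degradation procedures.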

Journal ArticleDOI
TL;DR: Li et al. propose a self-supervised cycle-consistent learning-based scale-arbitrary super-resolution framework (SCL-SASR) for real-world images.
Abstract: Both conventional machine-learning-based and current deep-neural-network-based single image super-resolution (SISR) methods are generally trained and validated on synthetic datasets, in which low-resolution (LR) inputs are artificially produced by degrading high-resolution (HR) images with a hand-crafted degradation model (e.g., bicubic downsampling). One of the main reasons for this is that it is challenging to build a realistic dataset composed of real-world LR–HR image pairs. However, a domain gap exists between synthetic and real-world data because the degradations in real scenarios are more complicated, limiting the performance in practical applications of SISR models trained with synthetic data. To address these problems, we propose a Self-supervised Cycle-consistent Learning-based Scale-Arbitrary Super-Resolution framework (SCL-SASR) for real-world images. Inspired by Maximum a Posteriori estimation, our SCL-SASR consists of a Scale-Arbitrary Super-Resolution Network (SASRN) and an inverse Scale-Arbitrary Resolution-Degradation Network (SARDN). SARDN and SASRN restrain each other through bidirectional cycle-consistency constraints as well as image priors, making SASRN adapt well to image-specific degradation. Meanwhile, considering the lack of targeted training images and the complexity of realistic degradations, SCL-SASR is designed to be optimized online, solely with the LR input, prior to the SR reconstruction. Benefiting from the flexible architecture and the self-supervised learning manner, SCL-SASR can easily super-resolve new images with arbitrary integer or non-integer scaling factors. Experiments on real-world images demonstrate the high flexibility and good applicability of SCL-SASR, which achieves better reconstruction performance than state-of-the-art self-supervised learning-based SISR methods as well as several external-dataset-trained SISR models.

5 citations
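A minimal sketch of how the bidirectional cycle-consistency constraint is wired, with hand-crafted stand-ins for the two networks (nearest-neighbour upsampling for SASRN and 2x2 average pooling for SARDN; the real networks are learned, these placeholders only show the loss structure):

```python
import numpy as np

def l1(a, b):
    return np.abs(a - b).mean()

# Stand-ins for the two networks (illustrative, not the paper's models):
def sasrn(lr):                        # LR -> HR, scale 2 (nearest repeat)
    return lr.repeat(2, axis=0).repeat(2, axis=1)

def sardn(hr):                        # HR -> LR, scale 2 (average pooling)
    h, w = hr.shape
    return hr.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

lr = np.random.default_rng(0).standard_normal((8, 8))
# Bidirectional cycle consistency: each network should invert the other
loss_cycle = l1(sardn(sasrn(lr)), lr) + l1(sasrn(sardn(sasrn(lr))), sasrn(lr))
print(loss_cycle)
```

These particular stand-ins happen to be exact inverses, so the loss is essentially zero; for learned networks the cycle loss is only driven toward zero during the online optimization on each LR input.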

Journal ArticleDOI
TL;DR: In this paper, a deep convolutional neural network was proposed to achieve single image enhancement effects, including denoising and super-resolution, on images that are reconstructed by a monochromatic transmission matrix.

5 citations

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index for image quality assessment is proposed, based on the degradation of structural information, and compared against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
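A minimal sketch of the structural similarity formula (the published index is computed locally with a sliding window and then averaged; this global variant only illustrates the luminance/contrast/structure comparison, with the standard constants for 8-bit images):

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Global SSIM between two 8-bit images (full index uses a sliding window)."""
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
noisy = np.clip(img + rng.normal(0, 20, img.shape), 0, 255)
print(ssim(img, img))    # ~1.0 for identical images
print(ssim(img, noisy))  # lower for the distorted image
```

Unlike mean-squared error, the index rewards preserved local structure (via the covariance term) rather than penalizing raw intensity differences.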

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
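Boyd et al.'s canonical worked example is the lasso; a compact numpy sketch of the ADMM splitting for min ½||Ax−b||² + λ||x||₁ (the penalty parameter ρ, iteration count, and test problem are illustrative):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Textbook ADMM for min 0.5*||Ax-b||^2 + lam*||x||_1 via splitting x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Cache the factor used by every x-update
    L = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(iters):
        x = L @ (Atb + rho * (z - u))          # x-update: ridge-type solve
        z = soft(x + u, lam / rho)             # z-update: soft-thresholding
        u = u + x - z                          # dual (scaled multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.5)
print(np.round(x_hat, 2))
```

The soft-thresholding z-update is what produces exact zeros, which is why ADMM is a natural fit for the l1-regularized problems (sparse priors) that appear throughout the SR literature.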

Journal ArticleDOI
TL;DR: It is shown that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like the LARS algorithm does for the lasso.
Abstract: Summary. We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the p ≫ n case.

16,538 citations


"Image super-resolution" refers to this work as background:

  • ...As the l2 norm represents a smoothing prior and the l1 norm tends to preserve the edges, the lp (1 ≤ p ≤ 2) norm achieves a balance between them, thereby avoiding the staircase effect [110]....

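A proximal-gradient sketch of the elastic-net objective min ½||Ax−b||² + λ₁||x||₁ + ½λ₂||x||² (this is not the LARS-EN path algorithm the paper proposes; it only illustrates the combined l1/l2 penalty, with illustrative λ values and test data):

```python
import numpy as np

def elastic_net(A, b, lam1, lam2, iters=500):
    """Proximal gradient (ISTA-style) for the elastic-net objective."""
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam2)   # 1 / Lipschitz constant
    for _ in range(iters):
        g = A.T @ (A @ x - b) + lam2 * x              # gradient of smooth part
        v = x - step * g
        x = np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0)  # l1 prox
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10); x_true[:2] = [1.5, -2.0]
b = A @ x_true
x_hat = elastic_net(A, b, lam1=0.1, lam2=0.1)
print(np.round(x_hat, 2))
```

The l1 term gives sparsity while the quadratic term stabilizes the solution and groups correlated predictors, which is the balance the lp (1 ≤ p ≤ 2) prior quoted above also aims at.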

Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.

15,225 citations


"Image super-resolution" refers to this work as background:

  • ...[93,103], based on the fact that an image is naturally “blocky” and discontinuous....

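A gradient-descent sketch of a smoothed total-variation denoising objective, min λ·TV(u) + ½||u−f||², on a synthetic "blocky" image; the step size, smoothing ε, and iteration count are illustrative, and the original paper uses a different, constrained formulation rather than this unconstrained one:

```python
import numpy as np

def tv_denoise(f, lam=0.1, step=0.05, iters=400, eps=1e-3):
    """Gradient descent on min_u lam*TV(u) + 0.5*||u - f||^2 (smoothed TV)."""
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])    # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)       # smoothed gradient magnitude
        px, py = ux / mag, uy / mag
        # Divergence of the normalized gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * (lam * (-div) + (u - f))      # descent on both terms
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0      # a "blocky" step image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = tv_denoise(noisy)
print(np.abs(den - clean).mean(), np.abs(noisy - clean).mean())
```

Because TV penalizes the total gradient magnitude rather than its square, flat regions are smoothed while the sharp step edge is largely preserved, matching the "blocky and discontinuous" image model quoted above.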

Book
01 Jan 1977

8,009 citations


"Image super-resolution" refers to this work as background:

  • ...In the early years, the smoothness of natural images was mainly considered, which leads to the quadratic property of the regularizations [99,100]....

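The quadratic regularizations referenced in the quote above have a convenient closed form; a minimal sketch of Tikhonov-style regularized least squares, min ||Ax−b||² + λ||x||² (the test problem and λ are illustrative):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Closed-form solution of min ||Ax - b||^2 + lam*||x||^2."""
    n = A.shape[1]
    # Normal equations with a quadratic (smoothness) penalty added
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.05 * rng.standard_normal(30)
x_hat = tikhonov(A, b, lam=0.1)
print(np.round(np.abs(x_hat - x_true).max(), 3))
```

The quadratic penalty makes the problem strictly convex and solvable in one linear solve, but it smooths edges; that limitation is what motivated the edge-preserving l1 and lp priors discussed elsewhere on this page.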