Journal ArticleDOI

Image super-resolution

Linwei Yue1, Huanfeng Shen1, Jie Li1, Qiangqiang Yuan1, Hongyan Zhang1, Liangpei Zhang1 
01 Nov 2016-Signal Processing (Elsevier)-Vol. 128, pp 389-408
TL;DR: This paper reviews SR from the perspective of techniques and applications, highlighting the main contributions of recent years, and discusses the current obstacles to future research.
About: This article was published in Signal Processing on 2016-11-01. It has received 378 citations to date.
Citations
Journal ArticleDOI
TL;DR: This paper proposes a new configuration of genetic algorithms to resolve the super-resolution problem using a Non-Local Means filter as a denoiser function with a rigorous proof of the existence of a unique minimizer.
Abstract: Increasing the resolution of an image is a current and extensively studied problem in image processing. Recently, Regularization by Denoising (RED) showed that any inverse problem, including the image super-resolution (SR) task, can be handled by sequentially applying image denoising steps, which facilitates the resolution of the resulting optimization problem. In this paper, we propose a new configuration of genetic algorithms to solve the super-resolution problem using a Non-Local Means filter as the denoising function, together with a rigorous proof of the existence of a unique minimizer. Since SR algorithms usually neglect the complex spatial interactions within images, a more consistent model is needed. Combining genetic algorithms with the RED framework guarantees convergence to the globally optimal solution even under strong noise and blur. As a result, the proposed algorithm gives efficient and consistent results, in terms of edge and feature preservation, compared with other SR approaches.

11 citations
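
A minimal sketch of a RED-style SR iteration of the kind described above, assuming the degradation operator is simple s×s block averaging and using the Non-Local Means denoiser from scikit-image; the genetic-algorithm search and the uniqueness proof of the paper are not reproduced here.

import numpy as np
from skimage.restoration import denoise_nl_means

# Under RED's assumptions, E(x) = 0.5*||A x - y||^2 + 0.5*lam * x^T (x - f(x))
# has gradient A^T (A x - y) + lam * (x - f(x)), where f is the plugged-in denoiser.

def A(x, s):                          # assumed forward model: s x s block averaging
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def At(y, s):                         # exact adjoint of the block-averaging operator
    return np.kron(y, np.ones((s, s))) / (s * s)

def red_sr(y, s=2, lam=0.05, step=1.0, iters=100):
    x = np.kron(y, np.ones((s, s)))   # crude initial upsampling by replication
    for _ in range(iters):
        data_grad = At(A(x, s) - y, s)
        prior_grad = x - denoise_nl_means(x, h=0.05, fast_mode=True)
        x = x - step * (data_grad + lam * prior_grad)
    return np.clip(x, 0.0, 1.0)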

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a deep shearlet residual learning network (DSRLN) to estimate the residual images based on the Shearlet transform, which provides an optimal sparse approximation of the cartoon-like image.
Abstract: Recently, the residual learning strategy has been integrated into the convolutional neural network (CNN) for single image super-resolution (SISR), where the CNN is trained to estimate the residual images. Recognizing that a residual image usually consists of high-frequency details and exhibits cartoon-like characteristics, in this paper, we propose a deep shearlet residual learning network (DSRLN) to estimate the residual images based on the shearlet transform. The proposed network is trained in the shearlet transform domain, which provides an optimal sparse approximation of the cartoon-like image. Specifically, to address the large statistical variation among the shearlet coefficients, a dual-path training strategy and a data weighting technique are proposed. Extensive evaluations on general natural-image datasets as well as remote sensing image datasets show that the proposed DSRLN scheme achieves PSNR results close to those of the state-of-the-art deep learning methods while using far fewer network parameters.

11 citations
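
The shearlet-domain machinery is specific to this paper, but the underlying residual-learning idea is easy to illustrate: the network predicts only the high-frequency residual, which is added back to an interpolated version of the input. A minimal PyTorch sketch under that assumption (layer sizes are arbitrary placeholders, not the DSRLN architecture):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSR(nn.Module):
    """A small CNN that predicts the residual between an upscaled LR image and the HR target."""
    def __init__(self, channels=1, features=64, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, lr):                       # lr: (N, C, H, W) batch of low-resolution images
        up = F.interpolate(lr, scale_factor=self.scale, mode='bicubic', align_corners=False)
        return up + self.body(up)                # residual learning: coarse estimate + predicted residual

# training step (hypothetical tensors lr_batch, hr_batch):
# model = ResidualSR(); loss = F.l1_loss(model(lr_batch), hr_batch)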

Journal ArticleDOI
TL;DR: In this article, a wavelet-domain filtering and residue extraction method was proposed to obtain super-resolved frames with better visual quality without embedding other pre- and post-processing techniques.
Abstract: Wavelet domain-centered algorithms for super-resolution give better visual quality and have been explored by different researchers. That visual quality, however, usually comes with increased complexity and cost, as most systems embed different pre- and post-processing techniques. Frequency- and spatial-domain methods are the usual approaches for super-resolution, each with its own benefits and limitations. Considering the benefits of wavelet-domain processing, this paper presents a new algorithm that depends on wavelet residues. The method uses wavelet-domain filtering and residue extraction to obtain super-resolved frames with better visual quality without embedding other techniques. The main aims of the super-resolution process are to avoid the noisy high-frequency components of low-quality videos and to retain the edge information in the frames. The inverse process is carried out with a proper combination of the information in the low-frequency bands and the residual information in the high-frequency components. Efficient known algorithms usually sacrifice simplicity to achieve accuracy, whereas the proposed algorithm achieves accuracy while remaining simple. The robustness of the algorithm is tested by analyzing different wavelet functions and different noise levels. The proposed algorithm performs well in comparison to other techniques from the same domain.

10 citations
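
A rough sketch of the wavelet-domain idea described above (interpolate, then suppress noisy high-frequency residues before reconstruction), using PyWavelets; the paper's exact residue-extraction rule is not given here, so the soft-threshold step is only an assumed stand-in.

import numpy as np
import pywt

def wavelet_sr_frame(lr, scale=2, wavelet='db4', thresh=0.02):
    # coarse upscaling of the low-resolution frame (pixel replication; any interpolator would do)
    up = np.kron(lr, np.ones((scale, scale)))
    # one-level 2-D DWT: approximation plus (horizontal, vertical, diagonal) detail subbands
    cA, (cH, cV, cD) = pywt.dwt2(up, wavelet)
    # keep edge information but suppress noisy high-frequency residues via soft thresholding
    cH, cV, cD = (pywt.threshold(c, thresh, mode='soft') for c in (cH, cV, cD))
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)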

Posted Content
TL;DR: Li et al. as mentioned in this paper proposed a fully convolutional multi-stage neural network for face image super-resolution, which is composed of a stem layer, a residual backbone, and spatial upsampling layers.
Abstract: To make the best use of the underlying structure of faces, the collective information available through face datasets, and the intermediate estimates produced during the upsampling process, we introduce a fully convolutional multi-stage neural network for 4× super-resolution of face images. We implicitly impose facial component-wise attention maps using a segmentation network to allow our network to focus on face-inherent patterns. Each stage of our network is composed of a stem layer, a residual backbone, and spatial upsampling layers. We recurrently apply stages to reconstruct an intermediate image, and then reuse its space-to-depth converted versions to bootstrap and enhance image quality progressively. Our experiments show that our face super-resolution method achieves quantitatively superior and perceptually pleasing results in comparison to the state of the art.

10 citations
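
The "space-to-depth" feedback between stages can be pictured with PyTorch's pixel_unshuffle / PixelShuffle pair; the stage below (a stem convolution, a stand-in backbone, and an upsampling head) is a placeholder to show the data flow, not the network proposed in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage(nn.Module):
    def __init__(self, in_ch, feats=64, scale=4):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, feats, 3, padding=1)        # stem layer
        self.backbone = nn.Sequential(                           # stand-in for the residual backbone
            nn.Conv2d(feats, feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feats, feats, 3, padding=1),
        )
        self.up = nn.Sequential(                                 # spatial upsampling layers
            nn.Conv2d(feats, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        f = self.stem(x)
        return self.up(f + self.backbone(f))

def multi_stage_sr(lr, stages, scale=4):
    x, out = lr, None
    for stage in stages:
        out = stage(x)                            # intermediate 4x reconstruction
        x = F.pixel_unshuffle(out, scale)         # space-to-depth: feed the estimate back at LR size
    return out

# stages = [Stage(3), Stage(3 * 16)]   # the second stage consumes the space-to-depth channels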

Journal ArticleDOI
TL;DR: A variational Bayesian approach to multiple-image super-resolution based on super-Gaussian prior models is proposed; it automatically enhances the quality of outdoor video recordings and estimates all the model parameters while preserving the authenticity, credibility, and reliability of the video data as digital evidence.

10 citations
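
Only the one-sentence summary is available above; for context, a generic MAP formulation of multiple-image super-resolution with a heavy-tailed (super-Gaussian-like) image prior reads as follows, where the specific priors and the variational Bayesian parameter estimation of the paper are not reproduced:

\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \sum_{k=1}^{K} \big\| \mathbf{y}_k - \mathbf{D}\mathbf{H}\mathbf{W}_k \mathbf{x} \big\|_2^2
  + \lambda \sum_i \rho\big( [\nabla \mathbf{x}]_i \big), \qquad \rho(t) = |t|^p,\ 0 < p \le 1,

where the y_k are the observed low-resolution frames, W_k, H, and D are warping, blurring, and downsampling operators, and ρ is one example of a super-Gaussian (heavy-tailed) penalty.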

References
Journal ArticleDOI
TL;DR: In this article, a structural similarity index is proposed for image quality assessment based on the degradation of structural information, and it is validated against both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000.
Abstract: Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a structural similarity index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000. A MATLAB implementation of the proposed algorithm is available online at http://www.cns.nyu.edu/~lcv/ssim/.

40,609 citations
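
SSIM is the standard perceptual quality metric in the SR literature this review covers; a minimal usage example with the scikit-image implementation (the authors' reference MATLAB code is linked above):

import numpy as np
from skimage.metrics import structural_similarity

def ssim_score(reference, distorted):
    # both images as float arrays in [0, 1]; data_range must be given explicitly
    return structural_similarity(reference, distorted, data_range=1.0)

# quick check on synthetic data
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
print(ssim_score(ref, noisy))   # below 1.0; equals 1.0 only for identical images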

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.

17,433 citations
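
A compact illustration of the ADMM splitting surveyed in this reference, applied to the lasso (one of the problems listed in the abstract); the x-update, z-update, and dual update follow the standard scaled form.

import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with ADMM in scaled form."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))       # factor once, reuse every iteration
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))   # x-update (ridge solve)
        z = soft_threshold(x + u, lam / rho)                                # z-update (proximal of l1)
        u = u + x - z                                                       # scaled dual update
    return z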

Journal ArticleDOI
TL;DR: It is shown that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation, and an algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like algorithm LARS does for the lasso.
Abstract: Summary. We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the p ≫ n case.

16,538 citations


"Image super-resolution" refers background in this paper

  • ...As the l2 norm represents a smoothing prior and the l1 norm tends to preserve the edges, the lp (1 ≤ p ≤ 2) norm achieves a balance between them, thereby avoiding the staircase effect [110]....

    [...]
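
The excerpt above is why this reference appears here: the elastic net penalty mixes the l1 and l2 terms, and the review makes the analogous point for lp regularization in SR. In standard notation (the SR operators below follow the usual observation model and are not specific to this page):

\min_{\boldsymbol\beta}\ \|\mathbf{y}-\mathbf{X}\boldsymbol\beta\|_2^2
  + \lambda_1\|\boldsymbol\beta\|_1 + \lambda_2\|\boldsymbol\beta\|_2^2 \qquad \text{(elastic net)}

\hat{\mathbf{x}} = \arg\min_{\mathbf{x}}\ \|\mathbf{y}-\mathbf{D}\mathbf{H}\mathbf{x}\|_2^2
  + \lambda\|\boldsymbol\Gamma\mathbf{x}\|_p^p,\quad 1 \le p \le 2 \qquad \text{($\ell_p$-regularized SR)}

where Γ is a differential (e.g., gradient) operator; p = 2 gives a smoothing prior, p = 1 an edge-preserving one, and intermediate values trade the two off, which is the balance the excerpt describes.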

Journal ArticleDOI
TL;DR: In this article, a constrained optimization type of numerical algorithm for removing noise from images is presented, where the total variation of the image is minimized subject to constraints involving the statistics of the noise.

15,225 citations


"Image super-resolution" refers background in this paper

  • ...[93,103], based on the fact that an image is naturally “blocky” and discontinuous....

    [...]
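
The total-variation model introduced in this reference (the ROF model) underlies many of the edge-preserving regularizers cited by the review; its effect on a "blocky" image can be seen with the Chambolle solver in scikit-image, which minimizes an unconstrained TV-regularized objective rather than the original constrained formulation.

import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0              # a piecewise-constant ("blocky") image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)

# the weight parameter controls the strength of the total-variation penalty
denoised = denoise_tv_chambolle(noisy, weight=0.15)
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())   # True: edges kept, noise reduced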

Book
01 Jan 1977

8,009 citations


"Image super-resolution" refers background in this paper

  • ...In the early years, the smoothness of natural images was mainly considered, which leads to the quadratic property of the regularizations [99,100]....

    [...]
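
The quadratic regularization mentioned in the excerpt takes the generic Tikhonov form below (the untitled 1977 book is not identified on this page, so this is given only as the standard form such early regularizers share):

\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda \|\boldsymbol\Gamma\mathbf{x}\|_2^2
  \quad\Longrightarrow\quad
\hat{\mathbf{x}} = \big(\mathbf{A}^{\mathsf T}\mathbf{A} + \lambda\,\boldsymbol\Gamma^{\mathsf T}\boldsymbol\Gamma\big)^{-1}\mathbf{A}^{\mathsf T}\mathbf{y},

where the quadratic penalty yields the closed-form linear solution and the smoothing behaviour noted in the excerpt.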