Journal ArticleDOI

Image super-resolution

Linwei Yue1, Huanfeng Shen1, Jie Li1, Qiangqiang Yuan1, Hongyan Zhang1, Liangpei Zhang1 
01 Nov 2016 - Signal Processing (Elsevier) - Vol. 128, pp. 389-408
TL;DR: This paper provides a review of SR from the perspective of techniques and applications, with particular attention to the main contributions in recent years, and discusses the current obstacles for future research.
About: This article was published in Signal Processing on 2016-11-01 and has received 378 citations to date.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations

Journal ArticleDOI
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.

590 citations


Cites background from "Image super-resolution"

  • ...Image super-resolution, reconstructing a higher-resolution image or image sequence from the observed low-resolution image [190], is an exciting application of deep learning methods....


Journal ArticleDOI
TL;DR: In this article, test-time augmentation-based aleatoric uncertainty estimation was proposed to analyze the effect of different transformations of the input image on the segmentation output. The results showed that the proposed test-time augmentation provides a better uncertainty estimation than calculating the test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions.
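A minimal sketch of the test-time augmentation idea described above, assuming a hypothetical segment(image) callable and using only flips as the transformations (the cited work uses a richer transformation model and combines this with test-time dropout):

```python
import numpy as np

def tta_uncertainty(image, segment):
    """Estimate per-pixel segmentation uncertainty via test-time augmentation.

    `segment` is a hypothetical callable mapping an HxW image to an HxW
    probability map; flips stand in for the richer transformations used
    in the cited work.
    """
    preds = []
    for flip_h in (False, True):
        for flip_w in (False, True):
            aug = image
            if flip_h:
                aug = aug[::-1, :]
            if flip_w:
                aug = aug[:, ::-1]
            p = segment(aug)
            # Map the prediction back to the original geometry.
            if flip_w:
                p = p[:, ::-1]
            if flip_h:
                p = p[::-1, :]
            preds.append(p)
    preds = np.stack(preds)            # (n_aug, H, W)
    mean_pred = preds.mean(axis=0)     # consensus segmentation
    uncertainty = preds.var(axis=0)    # per-pixel uncertainty proxy
    return mean_pred, uncertainty
```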

305 citations

Journal ArticleDOI
TL;DR: The proposed integrated fusion framework can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors.
Abstract: Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio–temporal–spectral fusion of remote sensing images. There are two main advantages of the proposed integrated fusion framework: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio–spectral fusion, and spatio–temporal fusion, based on a single unified model, and it can achieve the integrated fusion of multisource observations to obtain high spatio–temporal–spectral resolution images, without limitations on the number of remote sensing sensors. The proposed integrated fusion framework was comprehensively tested and verified in a variety of image fusion experiments. In the experiments, a number of different remote sensing satellites were utilized, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Systeme Pour l' Observation de la Terre-5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.

240 citations

Journal ArticleDOI
TL;DR: This letter proposes a new single-image super-resolution algorithm named local–global combined networks (LGCNet) for remote sensing images based on the deep CNNs, elaborately designed with its “multifork” structure to learn multilevel representations ofRemote sensing images including both local details and global environmental priors.
Abstract: Super-resolution is an image processing technology that recovers a high-resolution image from a single low-resolution image or a sequence of low-resolution images. Recently, deep convolutional neural networks (CNNs) have made a huge breakthrough in many tasks, including super-resolution. In this letter, we propose a new single-image super-resolution algorithm named local–global combined networks (LGCNet) for remote sensing images based on deep CNNs. Our LGCNet is elaborately designed with its “multifork” structure to learn multilevel representations of remote sensing images, including both local details and global environmental priors. Experimental results on a public remote sensing data set (UC Merced) demonstrate an overall improvement in both accuracy and visual performance over several state-of-the-art algorithms.
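As a rough illustration of the local–global idea (not the published LGCNet configuration; layer widths and depths below are assumptions), a multi-branch CNN can fuse features taken at several depths before reconstruction:

```python
import torch
import torch.nn as nn

class LocalGlobalSR(nn.Module):
    """Illustrative multi-branch SR network in the spirit of LGCNet:
    shallow features capture local detail, deeper features capture wider
    context, and both are fused before reconstruction. Layer widths and
    depths are assumptions, not the published configuration."""

    def __init__(self, channels=1, width=32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, width, 3, padding=1)
        self.conv2 = nn.Conv2d(width, width, 3, padding=1)
        self.conv3 = nn.Conv2d(width, width, 3, padding=1)
        # "Multifork": fuse representations taken at several depths.
        self.fuse = nn.Conv2d(3 * width, width, 1)
        self.reconstruct = nn.Conv2d(width, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                    # x: bicubic-upsampled LR image
        f1 = self.relu(self.conv1(x))        # local details
        f2 = self.relu(self.conv2(f1))
        f3 = self.relu(self.conv3(f2))       # wider context
        fused = self.relu(self.fuse(torch.cat([f1, f2, f3], dim=1)))
        return x + self.reconstruct(fused)   # residual prediction
```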

203 citations


Cites background from "Image super-resolution"

  • ...Instead of devoting to physical imaging technology, many researchers aim to recover high-resolution images from low-resolution ones using an image processing technology called super-resolution [1]....


References
Journal ArticleDOI
TL;DR: A spatially weighted TV image SR algorithm is proposed, in which the spatial information distributed in different image regions is added to constrain the SR process, and a newly proposed and effective spatial information indicator called difference curvature is used to identify the spatial property of each pixel.
Abstract: Total variation (TV) has been used as a popular and effective image prior model in regularization-based image processing fields, such as denoising, deblurring, super-resolution (SR), and others, because of its ability to preserve edges. However, as the TV model favors a piecewise constant solution, the processing results in the flat regions of the image are poor, and it cannot automatically balance the processing strength between different spatial property regions in the image. In this paper, we propose a spatially weighted TV image SR algorithm, in which the spatial information distributed in different image regions is added to constrain the SR process. A newly proposed and effective spatial information indicator called difference curvature is used to identify the spatial property of each pixel, and a weighted parameter determined by the difference curvature information is added to constrain the regularization strength of the TV regularization at each pixel. Meanwhile, a majorization-minimization algorithm is used to optimize the proposed spatially weighted TV SR model. Finally, a large number of simulated and real-data experimental results show that the proposed spatially weighted TV SR algorithm not only efficiently reduces the “artifacts” produced with a TV model in flat regions of the image, but also preserves the edge information, and the reconstruction results are less sensitive to the regularization parameters than with the TV model, because of the consideration of the spatial information constraint.
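A minimal numpy sketch of the weighting idea, under assumed formulas: a difference-curvature indicator separates edges from flat regions, and a weight map derived from it applies stronger smoothing in flat regions and weaker smoothing at edges. This is an illustration, not the paper's exact algorithm:

```python
import numpy as np

def difference_curvature(u, eps=1e-8):
    """Difference-curvature edge indicator: large at edges, small in
    flat and ramp regions."""
    uy, ux = np.gradient(u)
    uyy, uyx = np.gradient(uy)
    uxy, uxx = np.gradient(ux)
    g2 = ux**2 + uy**2 + eps
    # Second derivatives along and across the gradient direction.
    u_nn = (ux**2 * uxx + 2 * ux * uy * uxy + uy**2 * uyy) / g2
    u_ee = (uy**2 * uxx - 2 * ux * uy * uxy + ux**2 * uyy) / g2
    return np.abs(np.abs(u_nn) - np.abs(u_ee))

def spatial_weights(u, k=10.0):
    """Per-pixel regularization weights: strong smoothing in flat regions,
    weak smoothing at edges (an assumed mapping, not the paper's)."""
    d = difference_curvature(u)
    d = d / (d.max() + 1e-8)
    return 1.0 / (1.0 + k * d)
```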

143 citations


"Image super-resolution" refers methods in this paper

  • ...Some of them classified the image into detailed and flat regions using the spatial information, and used a larger penalty parameter for the flat regions and a smaller one for the edges [94,107]....


Journal ArticleDOI
TL;DR: The proposed locality preserving hallucination (LPH) algorithm combines locality preserving projection (LPP) and radial basis function (RBF) regression together to hallucinate the global high-resolution face.
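A small sketch of the regression half of this idea, with assumed data shapes: Gaussian-RBF (kernel ridge) regression maps low-resolution face vectors to high-resolution ones learned from training pairs; the locality preserving projection step is omitted:

```python
import numpy as np

def rbf_regression_fit(X_lr, Y_hr, gamma=1e-3, lam=1e-2):
    """Fit Gaussian-RBF (kernel ridge) regression from LR face vectors
    X_lr (n x d_lr) to HR face vectors Y_hr (n x d_hr)."""
    d2 = ((X_lr[:, None, :] - X_lr[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    # Regularized kernel system; alpha has shape (n, d_hr).
    return np.linalg.solve(K + lam * np.eye(len(X_lr)), Y_hr)

def rbf_regression_predict(X_lr_train, alpha, x_lr, gamma=1e-3):
    """Hallucinate an HR face vector for a new LR face vector x_lr."""
    d2 = ((X_lr_train - x_lr) ** 2).sum(-1)
    k = np.exp(-gamma * d2)
    return k @ alpha
```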

143 citations


"Image super-resolution" refers background in this paper

  • ...SR is also important in biometric recognition, including resolution enhancement for faces [24,201,202], fingerprints [203], and iris images [65,204]....


Journal ArticleDOI
TL;DR: In this article, a wavelet-based interpolation-restoration algorithm for superresolution is proposed, which takes advantage of the regularity and structure inherent in interlaced data.
Abstract: Superresolution produces high-quality, high-resolution images from a set of degraded, low-resolution images where relative frame-to-frame motions provide different looks at the scene. Superresolution translates data temporal bandwidth into enhanced spatial resolution. If considered together on a reference grid, the given low-resolution data are nonuniformly sampled. However, data from each frame are sampled regularly on a rectangular grid. This special type of nonuniform sampling is called interlaced sampling. We propose a new wavelet-based interpolation-restoration algorithm for superresolution. Our efficient wavelet interpolation technique takes advantage of the regularity and structure inherent in interlaced data, thereby significantly reducing the computational burden. We present one- and two-dimensional superresolution experiments to demonstrate the effectiveness of our algorithm.
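The interlaced-sampling structure referred to above can be illustrated with a simple shift-and-add placement (not the paper's wavelet algorithm): each low-resolution frame is regularly sampled but carries a different sub-pixel shift, so together the frames populate a finer reference grid:

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Place shifted low-resolution frames onto a high-resolution grid.

    frames: list of (h, w) LR images; shifts: list of integer (dy, dx)
    offsets in HR pixels, assumed known and in the range [0, scale);
    scale: integer upsampling factor. Each frame fills a regular, offset
    sub-grid of the HR grid (interlaced sampling).
    """
    h, w = frames[0].shape
    hr_sum = np.zeros((h * scale, w * scale))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(frames, shifts):
        hr_sum[dy::scale, dx::scale] += frame
        hr_cnt[dy::scale, dx::scale] += 1
    # Unobserved HR cells stay zero and would be filled by interpolation.
    return np.where(hr_cnt > 0, hr_sum / np.maximum(hr_cnt, 1), 0.0)
```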

142 citations

Journal ArticleDOI
TL;DR: Multi-scale total variation models for image restoration are introduced; they use a spatially dependent regularization parameter in order to enhance image regions containing details while still sufficiently smoothing homogeneous features, and the approach is compared with popular total variation based restoration methods.
Abstract: Multi-scale total variation models for image restoration are introduced. The models utilize a spatially dependent regularization parameter in order to enhance image regions containing details while still sufficiently smoothing homogeneous features. The fully automated adjustment strategy of the regularization parameter is based on local variance estimators. For robustness reasons, the decision on the acceptance or rejection of a local parameter value relies on a confidence interval technique based on the expected maximal local variance estimate. In order to improve the performance of the initial algorithm a generalized hierarchical decomposition of the restored image is used. The corresponding subproblems are solved by a superlinearly convergent algorithm based on Fenchel-duality and inexact semismooth Newton techniques. The paper ends by a report on numerical tests, a qualitative study of the proposed adjustment scheme and a comparison with popular total variation based restoration methods.
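A minimal sketch of the local-variance idea, using an assumed linear mapping from variance to parameter rather than the paper's confidence-interval acceptance rule:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, win=7):
    """Local variance in a win x win sliding window via box filtering."""
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img ** 2, size=win)
    return np.maximum(mean_sq - mean ** 2, 0.0)

def local_lambda(residual, lam_min=0.01, lam_max=1.0, win=7):
    """Spatially varying regularization parameter: small where local
    variance of the residual suggests remaining detail, large in
    homogeneous regions (assumed monotone mapping)."""
    v = local_variance(residual, win)
    v = v / (v.max() + 1e-12)
    return lam_max - (lam_max - lam_min) * v
```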

142 citations


"Image super-resolution" refers methods in this paper

  • ...Some of them classified the image into detailed and flat regions using the spatial information, and used a larger penalty parameter for the flat regions and a smaller one for the edges [94,107]....


Journal ArticleDOI
TL;DR: An algorithm to reconstruct a high-resolution image from multiple aliased low-resolution images, which is based on the generalized deconvolution technique, and it is shown that the artifact caused by inaccurate motion information is reduced by regularization.
Abstract: While high-resolution images are required for various applications, aliased low-resolution images are only available due to the physical limitations of sensors. We propose an algorithm to reconstruct a high-resolution image from multiple aliased low-resolution images, which is based on the generalized deconvolution technique. The conventional approaches are based on the discrete Fourier transform (DFT) since the aliasing effect is easily analyzed in the frequency domain. However, the useful solution may not be available in many cases, i.e., the underdetermined cases or the insufficient subpixel information cases. To compensate for such ill-posedness, the generalized regularization is adopted in the spatial domain. Furthermore, the usage of the discrete cosine transform (DCT) instead of the DFT leads to a computationally efficient reconstruction algorithm. The validity of the proposed algorithm is both theoretically and experimentally demonstrated. It is also shown that the artifact caused by inaccurate motion information is reduced by regularization. © 1999 Society of Photo-Optical Instrumentation Engineers. (S0091-3286(99)00508-5)
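The regularized-reconstruction idea can be sketched as a least-squares problem with a smoothness penalty solved by gradient descent; the simplified shift/decimation operators below are assumptions, and the paper's DCT-domain solver is not reproduced:

```python
import numpy as np

def degrade(x, shift, scale):
    """Simplified observation model: integer shift then decimation
    (blur omitted for brevity)."""
    return np.roll(x, shift, axis=(0, 1))[::scale, ::scale]

def degrade_adjoint(y, shift, scale, hr_shape):
    """Adjoint of the simplified model: zero-filled upsampling, inverse shift."""
    x = np.zeros(hr_shape)
    x[::scale, ::scale] = y
    return np.roll(x, (-shift[0], -shift[1]), axis=(0, 1))

def laplacian(x):
    """Discrete Laplacian used as the smoothness (regularization) operator."""
    return (-4 * x
            + np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1))

def reconstruct(frames, shifts, scale, lam=0.05, step=0.2, iters=200):
    """Gradient descent on  sum_k ||D S_k x - y_k||^2 + lam ||L x||^2
    (constant factors absorbed into the step size)."""
    hr_shape = (frames[0].shape[0] * scale, frames[0].shape[1] * scale)
    x = np.zeros(hr_shape)
    for _ in range(iters):
        grad = lam * laplacian(laplacian(x))
        for y, s in zip(frames, shifts):
            r = degrade(x, s, scale) - y
            grad += degrade_adjoint(r, s, scale, hr_shape)
        x -= step * grad
    return x
```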

142 citations