Journal ArticleDOI

Image super-resolution

Linwei Yue, Huanfeng Shen, Jie Li, Qiangqiang Yuan, Hongyan Zhang, Liangpei Zhang
01 Nov 2016 - Signal Processing (Elsevier) - Vol. 128, pp. 389-408
TL;DR: This paper provides a review of SR from the perspective of techniques and applications, with emphasis on the main contributions in recent years, and discusses the current obstacles for future research.
About: This article was published in Signal Processing on 2016-11-01. It has received 378 citations to date.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations

Journal ArticleDOI
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

590 citations


Cites background from "Image super-resolution"

  • ...Image super-resolution, reconstructing a higher-resolution image or image sequence from the observed low-resolution image [190], is an exciting application of deep learning methods....


Journal ArticleDOI
TL;DR: In this article, a test-time augmentation-based aleatoric uncertainty estimation was proposed to analyze the effect of different transformations of the input image on the segmentation output; the results showed that the proposed test-time augmentation provides a better uncertainty estimation than calculating test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions.
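
The core idea summarized above (running the segmentation model on transformed copies of the input and measuring the spread of the predictions) can be sketched in a few lines. The snippet below is only an illustrative sketch, not the authors' implementation: it uses random flips as the sole augmentation and assumes `model` is any callable mapping an HxW image to an HxW probability map.

```python
import numpy as np

def tta_uncertainty(model, image, n_aug=20, rng=None):
    """Test-time-augmentation uncertainty: run the model on randomly flipped
    copies of the input, undo each flip, and take the per-pixel variance."""
    rng = np.random.default_rng() if rng is None else rng
    preds = []
    for _ in range(n_aug):
        flip_h = rng.random() < 0.5          # random horizontal flip
        flip_v = rng.random() < 0.5          # random vertical flip
        aug = image
        if flip_h:
            aug = aug[:, ::-1]
        if flip_v:
            aug = aug[::-1, :]
        out = model(aug)
        # undo the transforms so all predictions are aligned with the input
        if flip_v:
            out = out[::-1, :]
        if flip_h:
            out = out[:, ::-1]
        preds.append(out)
    preds = np.stack(preds)                          # (n_aug, H, W)
    return preds.mean(axis=0), preds.var(axis=0)     # mean map, uncertainty map
```

The cited work combines a broader family of spatial and intensity transformations with test-time dropout; only the augmentation-and-variance skeleton is shown here.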

305 citations

Journal ArticleDOI
TL;DR: The proposed integrated fusion framework can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors.
Abstract: Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio–temporal–spectral fusion of remote sensing images. There are two main advantages of the proposed integrated fusion framework: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio–spectral fusion, and spatio–temporal fusion, based on a single unified model, and it can achieve the integrated fusion of multisource observations to obtain high spatio–temporal–spectral resolution images, without limitations on the number of remote sensing sensors. The proposed integrated fusion framework was comprehensively tested and verified in a variety of image fusion experiments. In the experiments, a number of different remote sensing satellites were utilized, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Systeme Pour l'Observation de la Terre-5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.
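
As a rough intuition for how a single model can serve several fusion tasks, the toy sketch below estimates a high-resolution multispectral image from two simplified observations (a spatially degraded multispectral image and a band-averaged panchromatic image) by gradient descent on a joint data-fidelity term. The operators, the plain least-squares objective, and the function names are all simplifying assumptions; the actual framework uses a maximum a posteriori formulation with sensor-specific spectral response, blurring, and temporal terms.

```python
import numpy as np

def box_down(x, s):
    """Average-pool each band of x with shape (bands, H, W) by factor s."""
    B, H, W = x.shape
    return x.reshape(B, H // s, s, W // s, s).mean(axis=(2, 4))

def box_down_adj(r, s):
    """Adjoint of box_down: spread each low-res residual over its s*s block."""
    return np.repeat(np.repeat(r, s, axis=1), s, axis=2) / (s * s)

def fuse(y_ms, pan, s, n_iter=200, step=1.0):
    """Toy unified fusion: find a high-resolution multispectral image z that
    (i) average-pools (factor s) to the low-res multispectral y_ms and
    (ii) has a band mean matching the high-res panchromatic image pan."""
    B, h, w = y_ms.shape
    z = np.repeat(np.repeat(y_ms, s, axis=1), s, axis=2)   # start from upsampled MS
    for _ in range(n_iter):
        r_spatial = box_down(z, s) - y_ms        # spatial-degradation residual
        r_spectral = z.mean(axis=0) - pan        # spectral-degradation residual
        grad = box_down_adj(r_spatial, s) + r_spectral[None, :, :] / B
        z -= step * grad                         # gradient step on the joint fit
    return z
```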

240 citations

Journal ArticleDOI
TL;DR: This letter proposes a new single-image super-resolution algorithm named local–global combined networks (LGCNet) for remote sensing images based on deep CNNs, elaborately designed with a “multifork” structure to learn multilevel representations of remote sensing images, including both local details and global environmental priors.
Abstract: Super-resolution is an image processing technology that recovers a high-resolution image from a single low-resolution image or a sequence of low-resolution images. Recently, deep convolutional neural networks (CNNs) have made a huge breakthrough in many tasks, including super-resolution. In this letter, we propose a new single-image super-resolution algorithm named local–global combined networks (LGCNet) for remote sensing images based on deep CNNs. Our LGCNet is elaborately designed with its “multifork” structure to learn multilevel representations of remote sensing images, including both local details and global environmental priors. Experimental results on a public remote sensing data set (UC Merced) demonstrate an overall improvement in both accuracy and visual performance over several state-of-the-art algorithms.
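
A minimal sketch of the “multifork” idea, concatenating features from several depths so that both shallow local detail and deeper context feed the reconstruction, might look as follows in PyTorch. Layer counts and widths are placeholders rather than the LGCNet configuration, and the residual connection is an assumption.

```python
import torch
import torch.nn as nn

class MultiForkSR(nn.Module):
    """Rough sketch of a local-global combined network: features from several
    depths are concatenated ('multifork') and fused before reconstruction."""
    def __init__(self, channels=3, feats=32):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, feats, 3, padding=1)
        self.conv2 = nn.Conv2d(feats, feats, 3, padding=1)
        self.conv3 = nn.Conv2d(feats, feats, 3, padding=1)
        self.fuse = nn.Conv2d(3 * feats, feats, 1)      # combine multilevel features
        self.recon = nn.Conv2d(feats, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x is a low-resolution image already upsampled (e.g. bicubic) to target size
        f1 = self.relu(self.conv1(x))
        f2 = self.relu(self.conv2(f1))
        f3 = self.relu(self.conv3(f2))
        multi = torch.cat([f1, f2, f3], dim=1)          # local detail + deeper context
        return x + self.recon(self.relu(self.fuse(multi)))   # residual reconstruction

# usage: y = MultiForkSR()(torch.randn(1, 3, 64, 64))
```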

203 citations


Cites background from "Image super-resolution"

  • ...Instead of devoting to physical imaging technology, many researchers aim to recover high-resolution images from low-resolution ones using an image processing technology called super-resolution [1]....


References
Journal ArticleDOI
01 Oct 2012
TL;DR: An adaptive strategy without user-defined parameters and a reversible-conversion strategy between continuous space and discrete space are utilized to improve the classical DE algorithm and provide an effective new approach to subpixel mapping for remote sensing imagery.
Abstract: In this paper, a novel subpixel mapping algorithm based on an adaptive differential evolution (DE) algorithm, namely, adaptive-DE subpixel mapping (ADESM), is developed to perform the subpixel mapping task for remote sensing images. Subpixel mapping may provide a fine-resolution map of class labels from coarser spectral unmixing fraction images, with the assumption of spatial dependence. In ADESM, to utilize DE, the subpixel mapping problem is transformed into an optimization problem by maximizing the spatial dependence index. The traditional DE algorithm is an efficient and powerful population-based stochastic global optimizer in continuous optimization problems, but it cannot be applied to the subpixel mapping problem in a discrete search space. In addition, it is not an easy task to properly set control parameters in DE. To avoid these problems, this paper utilizes an adaptive strategy without user-defined parameters, and a reversible-conversion strategy between continuous space and discrete space, to improve the classical DE algorithm. During the process of evolution, the candidate solutions are further improved by enhanced evolution operators, e.g., mutation, crossover, repair, exchange, insertion, and an effective local search to generate new candidate solutions. Experimental results using different types of remote sensing images show that the ADESM algorithm consistently outperforms the previous subpixel mapping algorithms in all the experiments. Based on sensitivity analysis, ADESM, with its self-adaptive control parameter setting, is better than, or at least comparable to, the standard DE algorithm, when considering the accuracy of subpixel mapping, and hence provides an effective new approach to subpixel mapping for remote sensing imagery.
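
For reference, the classical DE/rand/1/bin scheme that ADESM builds on can be written compactly. The sketch below is a generic continuous-space minimizer with fixed control parameters, i.e., exactly the part the paper replaces with self-adaptive parameters and a continuous-to-discrete conversion for subpixel label maps; the function name and the toy objective in the usage comment are illustrative.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.5, CR=0.9,
                           n_gen=200, rng=None):
    """Classical DE/rand/1/bin minimizer with fixed F and CR."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds, float).T            # bounds: list of (low, high) per dim
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                 # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])         # binomial crossover
            f_trial = f(trial)
            if f_trial <= fit[i]:                           # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = fit.argmin()
    return pop[best], fit[best]

# usage: x_best, f_best = differential_evolution(lambda v: np.sum(v**2), [(-5, 5)] * 4)
```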

111 citations

Journal ArticleDOI
TL;DR: A novel super-resolution reconstruction (SRR) framework in magnetic resonance imaging (MRI) is proposed to produce images of both high resolution and high contrast, desirable for image-guided minimally invasive brain surgery.
Abstract: A novel super-resolution reconstruction (SRR) framework in magnetic resonance imaging (MRI) is proposed. Its purpose is to produce images of both high resolution and high contrast desirable for image-guided minimally invasive brain surgery. The input data are multiple 2-D multislice inversion recovery MRI scans acquired at orientations with regular angular spacing rotated around a common frequency encoding axis. The output is a 3-D volume of isotropic high resolution. The inversion process resembles a localized projection reconstruction problem. Iterative algorithms for reconstruction are based on the projection onto convex sets (POCS) formalism. Results demonstrate resolution enhancement in simulated phantom studies, and ex vivo and in vivo human brain scans, carried out on clinical scanners. A comparison with previously published SRR methods shows favorable characteristics in the proposed approach.
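
In the simplest setting (disjoint-block averaging as the downsampling model and integer shifts), a POCS-style data-consistency projection amounts to spreading each observation residual back over the high-resolution pixels that produced it. The toy sketch below illustrates that loop; the acquisition model, the wrap-around shifts, and the [0, 1] amplitude constraint are all simplifying assumptions, not the paper's rotated multislice MRI geometry.

```python
import numpy as np

def box_down(x, s):
    """Average-pool a 2-D image by factor s."""
    H, W = x.shape
    return x.reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def pocs_sr(lr_frames, shifts, s, n_iter=50):
    """Simplified POCS-style reconstruction: repeatedly project the estimate
    onto the data-consistency set of each low-resolution observation.
    lr_frames[k] is assumed to be the scene shifted by shifts[k] (integer
    high-res pixels, wrap-around) and then box-downsampled by factor s."""
    z = np.repeat(np.repeat(lr_frames[0], s, axis=0), s, axis=1)   # initial estimate
    for _ in range(n_iter):
        for y, (dy, dx) in zip(lr_frames, shifts):
            sim = box_down(np.roll(z, (-dy, -dx), axis=(0, 1)), s)  # simulate observation
            err = y - sim
            # For disjoint-block averaging, the orthogonal projection onto the
            # consistency set adds the residual to every pixel of its s*s block.
            corr = np.repeat(np.repeat(err, s, axis=0), s, axis=1)
            z += np.roll(corr, (dy, dx), axis=(0, 1))
        z = np.clip(z, 0.0, 1.0)   # amplitude-constraint projection
    return z
```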

110 citations

Journal ArticleDOI
TL;DR: It is shown that the Super-Resolution Variable-Pixel Linear Reconstruction algorithm can make significant spatial resolution improvements to satellite images of the Earth's surface allowing recognition of objects with size approaching the limiting spatial resolution of the lower resolution images.
Abstract: This paper describes the development and applications of a super-resolution method, known as Super-Resolution Variable-Pixel Linear Reconstruction. The algorithm works by combining different lower resolution images in order to obtain, as a result, a higher resolution image. We show that it can make significant spatial resolution improvements to satellite images of the Earth's surface, allowing recognition of objects with size approaching the limiting spatial resolution of the lower resolution images. The algorithm is based on the Variable-Pixel Linear Reconstruction algorithm developed by Fruchter and Hook, a well-known method in astronomy but never used for Earth remote sensing purposes. The algorithm preserves photometry, can weight input images according to the statistical significance of each pixel, and removes the effect of geometric distortion on both image shape and photometry. In this paper, we describe its development for remote sensing purposes, show the usefulness of the algorithm working with images as different from astronomical images as remote sensing ones, and show applications to: 1) a set of simulated multispectral images obtained from a real Quickbird image; and 2) a set of multispectral real Landsat Enhanced Thematic Mapper Plus (ETM+) images. These examples show that the algorithm provides a substantial improvement in limiting spatial resolution for both simulated and real data sets without significantly altering the multispectral content of the input low-resolution images, without amplifying the noise, and with very few artifacts.
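
The accumulation-and-normalisation idea behind this family of methods can be illustrated with a nearest-cell weighted shift-and-add sketch: each low-resolution pixel deposits its value and a weight onto the fine grid, and the weight map normalises the result. Real Variable-Pixel Linear Reconstruction additionally shrinks the pixel footprints, handles geometric distortion, and uses statistical weights; none of that is shown here, and the function name is an assumption.

```python
import numpy as np

def shift_and_add(lr_frames, shifts, s):
    """Nearest-cell weighted shift-and-add onto an s-times finer grid.
    shifts[k] = (dy, dx) is the subpixel offset of frame k in high-res pixels."""
    h, w = lr_frames[0].shape
    H, W = h * s, w * s
    accum = np.zeros((H, W))
    weight = np.zeros((H, W))
    for frame, (dy, dx) in zip(lr_frames, shifts):
        for i in range(h):
            for j in range(w):
                y = int(round(i * s + dy)) % H     # high-res cell this pixel lands on
                x = int(round(j * s + dx)) % W
                accum[y, x] += frame[i, j]
                weight[y, x] += 1.0                # coverage / statistical weight
    out = np.divide(accum, weight, out=np.zeros_like(accum), where=weight > 0)
    return out, weight                             # cells with zero weight stay empty
```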

108 citations


"Image super-resolution" refers background in this paper

  • ...The SR results for face [205], fingerprint [203], and iris images [189], respectively...


Journal ArticleDOI
TL;DR: An image super-resolution (resolution enhancement) algorithm is proposed that takes into account inaccurate estimates of the registration parameters and the point spread function; experimental results show the effectiveness of the proposed algorithm.
Abstract: In this paper, we propose an image super-resolution (resolution enhancement) algorithm that takes into account inaccurate estimates of the registration parameters and the point spread function. These inaccurate estimates, along with the additive Gaussian noise in the low-resolution (LR) image sequence, result in different noise level for each frame. In the proposed algorithm, the LR frames are adaptively weighted according to their reliability and the regularization parameter is simultaneously estimated. A translational motion model is assumed. The convergence property of the proposed algorithm is analyzed in detail. Our experimental results using both real and synthetic data show the effectiveness of the proposed algorithm.
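
The adaptive-weighting idea (downweighting frames whose residuals suggest higher noise or registration error while a regularisation term stabilises the estimate) can be sketched as below. The box-downsampling observation model, the neglect of inter-frame motion, and the fixed regularisation parameter are simplifications for illustration; the paper assumes a translational motion model and estimates the regularisation parameter jointly with the weights.

```python
import numpy as np

def box_down(x, s):
    H, W = x.shape
    return x.reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def box_down_adj(r, s):
    return np.repeat(np.repeat(r, s, axis=0), s, axis=1) / (s * s)

def laplacian(z):
    """Periodic-boundary discrete Laplacian used as a smoothness penalty."""
    return (4 * z - np.roll(z, 1, 0) - np.roll(z, -1, 0)
                  - np.roll(z, 1, 1) - np.roll(z, -1, 1))

def weighted_sr(lr_frames, s, lam=0.01, n_iter=100, step=1.0):
    """Adaptively weighted, regularised SR sketch: frames with large residuals
    (noisy or poorly registered) receive small weights at each iteration."""
    z = np.repeat(np.repeat(np.mean(lr_frames, axis=0), s, axis=0), s, axis=1)
    for _ in range(n_iter):
        residuals = [box_down(z, s) - y for y in lr_frames]
        weights = np.array([1.0 / (np.mean(r ** 2) + 1e-8) for r in residuals])
        weights /= weights.sum()                    # per-frame reliability
        grad = sum(w * box_down_adj(r, s) for w, r in zip(weights, residuals))
        grad += lam * laplacian(z)                  # regularisation term
        z -= step * grad
    return z
```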

107 citations

Proceedings ArticleDOI
12 Dec 2008
TL;DR: A local patch method based on sparse representation with respect to coupled overcomplete patch dictionaries is proposed, which can be solved efficiently through linear programming and can hallucinate high-quality super-resolution faces.
Abstract: In this paper, we address the problem of hallucinating a high resolution face given a low resolution input face. The problem is approached through sparse coding. To exploit the facial structure, non-negative matrix factorization (NMF) is first employed to learn a localized part-based subspace. This subspace is effective for super-resolving the incoming low resolution face under reconstruction constraints. To further enhance the detailed facial information, we propose a local patch method based on sparse representation with respect to coupled overcomplete patch dictionaries, which can be fast solved through linear programming. Experiments demonstrate that our approach can hallucinate high quality super-resolution faces.
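
The coupled-dictionary step can be sketched as follows: a sparse code is computed for the low-resolution patch against the low-resolution dictionary (here with ISTA rather than the linear-programming solver used in the paper) and then reused with the high-resolution dictionary. The dictionaries are assumed to have been learned jointly from aligned patch pairs, and the NMF-based global face subspace step is omitted.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(D_lr, y, lam=0.1, n_iter=200):
    """ISTA for min_a 0.5*||D_lr @ a - y||^2 + lam*||a||_1."""
    L = np.linalg.norm(D_lr, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D_lr.shape[1])
    for _ in range(n_iter):
        grad = D_lr.T @ (D_lr @ a - y)
        a = soft_threshold(a - grad / L, lam / L)
    return a

def hallucinate_patch(D_lr, D_hr, lr_patch, lam=0.1):
    """Reuse the sparse code found on the low-res dictionary with the coupled
    high-res dictionary (columns of D_lr / D_hr are assumed to be paired atoms)."""
    a = sparse_code(D_lr, lr_patch.ravel(), lam)
    n = int(np.sqrt(D_hr.shape[0]))           # assumes square high-res patches
    return (D_hr @ a).reshape(n, n)
```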

106 citations