Journal ArticleDOI

Image super-resolution

Linwei Yue, Huanfeng Shen, Jie Li, Qiangqiang Yuan, Hongyan Zhang, Liangpei Zhang
01 Nov 2016 - Signal Processing (Elsevier) - Vol. 128, pp. 389-408
TL;DR: This paper provides a review of SR from the perspective of techniques and applications, with particular attention to the main contributions of recent years, and discusses the current obstacles for future research.
About: This article was published in Signal Processing on 2016-11-01 and has received 378 citations to date.
Citations
Journal ArticleDOI
TL;DR: In this article, the authors provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis, and provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging.
Abstract: What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009 when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of machine learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.

991 citations

Journal ArticleDOI
TL;DR: This paper indicates how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction, and provides a starting point for people interested in experimenting and contributing to the field of deep learning for medical imaging.

590 citations


Cites background from "Image super-resolution"

  • ...Image super-resolution, reconstructing a higher-resolution image or image sequence from the observed low-resolution image [190], is an exciting application of deep learning methods....

    [...]

Journal ArticleDOI
TL;DR: In this article, a test-time augmentation-based aleatoric uncertainty estimate was proposed to analyze the effect of different transformations of the input image on the segmentation output; the results showed that the proposed test-time augmentation provides a better uncertainty estimation than test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions.
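The augmentation-and-aggregation loop described in this TL;DR can be sketched in a few lines of Python. The snippet below is only an illustration of test-time augmentation for segmentation uncertainty, not the authors' implementation; the `model` callable, the flip/rotation set, and the entropy-based uncertainty map are assumptions.

```python
import numpy as np

def tta_uncertainty(model, image, n_aug=8, seed=0):
    """Sketch of test-time augmentation (TTA) uncertainty estimation.

    Assumes `model` maps a square image of shape (H, W) to per-pixel
    class probabilities of shape (H, W, C), and that every augmentation
    is an invertible flip/rotation so predictions can be mapped back.
    """
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_aug):
        k = int(rng.integers(0, 4))      # random 90-degree rotation
        flip = bool(rng.integers(0, 2))  # random horizontal flip
        aug = np.rot90(image, k)
        if flip:
            aug = np.fliplr(aug)
        p = model(aug)                   # (H, W, C) class probabilities
        if flip:                         # undo the transform on the output
            p = np.fliplr(p)
        p = np.rot90(p, -k)
        preds.append(p)
    preds = np.stack(preds)              # (n_aug, H, W, C)
    mean_p = preds.mean(axis=0)
    # Per-pixel entropy of the averaged prediction as an uncertainty map.
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)
    return mean_p, entropy
```

The spread (here, the entropy of the averaged prediction) is what serves as the aleatoric uncertainty map; overconfident single-pass errors tend to show up as high-entropy regions under augmentation.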

305 citations

Journal ArticleDOI
TL;DR: The proposed integrated fusion framework can achieve the integrated fusion of multisource observations to obtain high spatio-temporal-spectral resolution images, without limitations on the number of remote sensing sensors.
Abstract: Remote sensing satellite sensors feature a tradeoff between the spatial, temporal, and spectral resolutions. In this paper, we propose an integrated framework for the spatio–temporal–spectral fusion of remote sensing images. There are two main advantages of the proposed integrated fusion framework: it can accomplish different kinds of fusion tasks, such as multiview spatial fusion, spatio–spectral fusion, and spatio–temporal fusion, based on a single unified model, and it can achieve the integrated fusion of multisource observations to obtain high spatio–temporal–spectral resolution images, without limitations on the number of remote sensing sensors. The proposed integrated fusion framework was comprehensively tested and verified in a variety of image fusion experiments. In the experiments, a number of different remote sensing satellites were utilized, including IKONOS, the Enhanced Thematic Mapper Plus (ETM+), the Moderate Resolution Imaging Spectroradiometer (MODIS), the Hyperspectral Digital Imagery Collection Experiment (HYDICE), and Système Pour l'Observation de la Terre-5 (SPOT-5). The experimental results confirm the effectiveness of the proposed method.
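A unified framework of this kind is typically built on a single linear observation model in which every available image is treated as a spectrally resampled, blurred, downsampled, and noisy version of the desired high-resolution image. The formulation below is a generic sketch of that idea; the operator names and the regularized least-squares estimator are assumptions, not necessarily the exact model of this paper:

$$ \mathbf{y}_k = \mathbf{D}_k \mathbf{B}_k \mathbf{R}_k \mathbf{x} + \mathbf{n}_k, \qquad k = 1, \dots, K, $$
$$ \hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \; \sum_{k=1}^{K} \big\| \mathbf{y}_k - \mathbf{D}_k \mathbf{B}_k \mathbf{R}_k \mathbf{x} \big\|_2^2 + \lambda\, U(\mathbf{x}), $$

where x is the desired high spatio–temporal–spectral resolution image, y_k the k-th observation, R_k the spectral response, B_k the spatial blur, D_k the downsampling operator, n_k noise, and U(x) a regularizer. Because every sensor contributes one term of the same form, the number of input sensors is not limited by the model.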

240 citations

Journal ArticleDOI
TL;DR: This letter proposes a new single-image super-resolution algorithm named local–global combined networks (LGCNet) for remote sensing images based on deep CNNs, elaborately designed with its “multifork” structure to learn multilevel representations of remote sensing images, including both local details and global environmental priors.
Abstract: Super-resolution is an image processing technology that recovers a high-resolution image from a single low-resolution image or a sequence of low-resolution images. Recently, deep convolutional neural networks (CNNs) have made a huge breakthrough in many tasks, including super-resolution. In this letter, we propose a new single-image super-resolution algorithm named local–global combined networks (LGCNet) for remote sensing images, based on deep CNNs. Our LGCNet is elaborately designed with its “multifork” structure to learn multilevel representations of remote sensing images, including both local details and global environmental priors. Experimental results on a public remote sensing data set (UC Merced) demonstrate an overall improvement in both accuracy and visual performance over several state-of-the-art algorithms.
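The local–global “multifork” idea can be illustrated with a small multi-branch CNN: branches of different depths (and hence different receptive fields) are computed from shared shallow features, concatenated, and used to predict a residual that is added back to the upsampled input. The PyTorch sketch below only illustrates that structure; the layer counts, channel widths, and class name are assumptions and do not reproduce the published LGCNet configuration.

```python
import torch
import torch.nn as nn

class LocalGlobalSRSketch(nn.Module):
    """Illustrative multi-branch ("multifork") SR network, not the published LGCNet."""

    def __init__(self, channels=32):
        super().__init__()
        self.shallow = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

        # Branches of different depths have different receptive fields,
        # capturing local detail (shallow) vs. wider context (deeper).
        def branch(depth):
            layers = []
            for _ in range(depth):
                layers += [nn.Conv2d(channels, channels, 3, padding=1),
                           nn.ReLU(inplace=True)]
            return nn.Sequential(*layers)

        self.branches = nn.ModuleList([branch(d) for d in (1, 2, 3)])
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.reconstruct = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, x):
        # `x` is assumed to be the bicubically upsampled low-resolution image.
        feat = self.shallow(x)
        multi = torch.cat([b(feat) for b in self.branches], dim=1)
        residual = self.reconstruct(torch.relu(self.fuse(multi)))
        return x + residual  # global residual learning
```

Concatenating the branch outputs before a 1×1 fusion convolution is what lets the network weigh local and global cues jointly when reconstructing the residual.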

203 citations


Cites background from "Image super-resolution"

  • ...Instead of devoting to physical imaging technology, many researchers aim to recover high-resolution images from low-resolution ones using an image processing technology called super-resolution [1]....

    [...]

References
Journal ArticleDOI
TL;DR: In this paper, an alternating augmented Lagrangian method for convex optimization problems where the cost function is the sum of two terms, one that is separable in the variable blocks, and a second th...
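For a cost that splits into two such terms coupled by a linear constraint, the alternating augmented Lagrangian idea can be written in its generic scaled-dual (ADMM-style) form; this is a standard textbook statement offered for orientation, not necessarily the exact scheme of this reference:

$$ \min_{x,\,z} \; f(x) + g(z) \quad \text{s.t.} \quad Ax + Bz = c, $$
$$ x^{k+1} = \arg\min_{x} \; f(x) + \tfrac{\rho}{2}\,\big\| Ax + Bz^{k} - c + u^{k} \big\|_2^2, $$
$$ z^{k+1} = \arg\min_{z} \; g(z) + \tfrac{\rho}{2}\,\big\| Ax^{k+1} + Bz - c + u^{k} \big\|_2^2, $$
$$ u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c, $$

where u is the scaled dual variable and ρ > 0 the penalty parameter; the separability of one term across variable blocks is what makes the corresponding subproblem decompose.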

253 citations

Journal ArticleDOI
TL;DR: Two new regularization terms are proposed, termed locally adaptive bilateral total variation and consistency of gradients, to keep the edges and flat regions that are implicitly described in the LR images sharp and smooth, respectively.
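For orientation, the (non-adaptive) bilateral total variation regularizer that this kind of work builds on is usually written as below; the locally adaptive variant of this reference replaces the fixed decay weights with spatially varying ones, and the exact form here is a sketch rather than the paper's notation:

$$ R_{\mathrm{BTV}}(\mathbf{x}) = \sum_{l=-P}^{P} \sum_{m=-P}^{P} \alpha^{|l|+|m|} \,\big\| \mathbf{x} - S_x^{l} S_y^{m} \mathbf{x} \big\|_1, $$

where S_x^l and S_y^m shift the image by l and m pixels horizontally and vertically, and 0 < α < 1 controls the spatial decay of the weights; the ℓ1 differences penalize oscillations while still allowing sharp edges.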

253 citations

Proceedings ArticleDOI
18 Jun 1996
TL;DR: This paper combines geometry and illumination into an algorithm that tracks large image regions on live video sequences using no more computation than would be required to track with no accommodation for illumination changes.
Abstract: Historically, SSD or correlation-based visual tracking algorithms have been sensitive to changes in illumination and shading across the target region. This paper describes methods for implementing SSD tracking that is both insensitive to illumination variations and computationally efficient. We first describe a vector-space formulation of the tracking problem, showing how to recover geometric deformations. We then show that the same vector space formulation can be used to account for changes in illumination. We combine geometry and illumination into an algorithm that tracks large image regions on live video sequences using no more computation than would be required to track with no accommodation for illumination changes. We present experimental results which compare the performance of SSD tracking with and without illumination compensation.
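One plausible way to write the combined geometry-and-illumination objective sketched in this abstract is as a joint least-squares fit over motion parameters and illumination coefficients; the notation below is an assumption for illustration, not necessarily the authors':

$$ \min_{\mathbf{p},\,\boldsymbol{\lambda}} \; \sum_{\mathbf{x} \in \mathcal{R}} \Big( I\big(f(\mathbf{x}; \mathbf{p}), t\big) - I_0(\mathbf{x}) - \mathbf{B}(\mathbf{x})\,\boldsymbol{\lambda} \Big)^2, $$

where f(x; p) is the parametric geometric warp of the target region R, I_0 is the reference template, and the columns of B form a low-dimensional illumination basis; solving jointly for p and λ at each frame keeps the SSD residual insensitive to illumination changes.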

249 citations


Additional excerpts

  • ...However, the optical flow based methods are computationally expensive [141] and are sensitive to noise, large displacements, and illumination variation [142]....

    [...]

Journal ArticleDOI
TL;DR: This work describes an alternative approach that minimizes a generalized TV functional, including both ℓ2-TV and ℓ1-TV as special cases, and is capable of solving more general inverse problems than denoising (e.g., deconvolution).
Abstract: Replacing the ℓ2 data fidelity term of the standard total variation (TV) functional with an ℓ1 data fidelity term has been found to offer a number of theoretical and practical benefits. Efficient algorithms for minimizing this ℓ1-TV functional have only recently begun to be developed, the fastest of which exploit graph representations, and are restricted to the denoising problem. We describe an alternative approach that minimizes a generalized TV functional, including both ℓ2-TV and ℓ1-TV as special cases, and is capable of solving more general inverse problems than denoising (e.g., deconvolution). This algorithm is competitive with the graph-based methods in the denoising case, and is the fastest algorithm of which we are aware for general inverse problems involving a nontrivial forward linear operator.
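A generalized TV functional of the kind described here can be written, in one common notation (a sketch whose exponents and operators are assumptions consistent with the abstract rather than copied from the paper), as:

$$ T(\mathbf{u}) = \frac{1}{q}\,\big\| K\mathbf{u} - \mathbf{b} \big\|_q^q \;+\; \frac{\lambda}{p}\, \Big\| \sqrt{ (D_x \mathbf{u})^2 + (D_y \mathbf{u})^2 } \Big\|_p^p, $$

so that q = 2, p = 1 recovers the standard ℓ2-TV functional, q = p = 1 gives ℓ1-TV, and a nontrivial forward operator K (e.g., a blur) turns the problem into deconvolution rather than denoising.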

236 citations


"Image super-resolution" refers background or methods in this paper

  • ...It appears that LDFPI and IRN are two different methods; however, they are almost the same in essence when dealing with the l1-norm problem by smooth approximation....

    [...]

  • ...IRN is a method which can minimize the lp norm (p ≤ 2) by approximating it with a weighted l2 norm [129]....

    [...]

  • ...The representative algorithms include lagged diffusivity fixed point iteration (LDFPI) [128], majorization-minimization (MM) [104], the iteratively reweighted norm (IRN) [129,132], and the half-quadratic algorithm [95]....

    [...]

  • ...The notations are based on LDFPI [128] and IRN [129], respectively....

    [...]

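The reweighting idea behind IRN mentioned in the excerpts above can be made concrete with one line of algebra (a sketch of the general technique, not the specific weighting scheme of [129]): since |v|^p = |v|^{p-2} v^2, an ℓp term can be replaced by a weighted ℓ2 term whose weights are frozen at the previous iterate,

$$ \| \mathbf{v} \|_p^p = \sum_i |v_i|^p \;\approx\; \sum_i w_i\, v_i^2, \qquad w_i = \big( |v_i^{(k)}|^2 + \varepsilon \big)^{\frac{p-2}{2}}, $$

so that each outer iteration only requires solving a weighted least-squares problem, with ε > 0 guarding against division by zero when p < 2; this is also why LDFPI and IRN behave almost identically on the smoothed l1-norm problem.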

Journal ArticleDOI
TL;DR: This work applies a Hopfield neural network technique to super-resolution mapping of land cover features larger than a pixel, using information of pixel composition determined from soft classification, and shows how the approach can be extended in a new way to predict the spatial pattern of subpixel scale features.

236 citations