Super-Resolution from a Single Image
Daniel Glasner Shai Bagon Michal Irani
Dept. of Computer Science and Applied Mathematics
The Weizmann Institute of Science
Rehovot 76100, Israel
Abstract
Methods for super-resolution can be broadly classified
into two families of methods: (i) The classical multi-image
super-resolution (combining images obtained at subpixel
misalignments), and (ii) Example-Based super-resolution
(learning correspondence between low and high resolution
image patches from a database). In this paper we propose a
unified framework for combining these two families of meth-
ods. We further show how this combined approach can be
applied to obtain super resolution from as little as a sin-
gle image (with no database or prior examples). Our ap-
proach is based on the observation that patches in a natu-
ral image tend to redundantly recur many times inside the
image, both within the same scale, as well as across differ-
ent scales. Recurrence of patches within the same image
scale (at subpixel misalignments) gives rise to the classical
super-resolution, whereas recurrence of patches across dif-
ferent scales of the same image gives rise to example-based
super-resolution. Our approach attempts to recover at each
pixel its best possible resolution increase based on its patch
redundancy within and across scales.
1. Introduction
The goal of Super-Resolution (SR) methods is to recover
a high resolution image from one or more low resolution
input images. Methods for SR can be broadly classified
into two families of methods: (i) The classical multi-image
super-resolution, and (ii) Example-Based super-resolution.
In the classical multi-image SR (e.g., [12, 5, 8] to name just
a few) a set of low-resolution images of the same scene are
taken (at subpixel misalignments). Each low resolution im-
age imposes a set of linear constraints on the unknown high-
resolution intensity values. If enough low-resolution im-
ages are available (at subpixel shifts), then the set of equa-
tions becomes determined and can be solved to recover the
high-resolution image. Practically, however, this approach
is numerically limited only to small increases in resolu-
tion [3, 14] (by factors smaller than 2).
Figure 1: Patch recurrence within and across scales of a single image (input image I and various scales of I). Source patches in I are found in different locations and in other image scales of I (solid-marked squares). The high-res corresponding parent patches (dashed-marked squares) provide an indication of what the (unknown) high-res parents of the source patches might look like.

These limitations have led to the development of “Example-Based Super-Resolution”, also termed “image hallucination” (introduced by [10, 11, 2] and extended later
by others e.g. [13]). In example-based SR, correspon-
dences between low and high resolution image patches are
learned from a database of low and high resolution image
pairs (usually with a relative scale factor of 2), and then
applied to a new low-resolution image to recover its most
likely high-resolution version. Higher SR factors have of-
ten been obtained by repeated applications of this process.
Example-based SR has been shown to exceed the limits of
classical SR. However, unlike classical SR, the high res-
olution details reconstructed (“hallucinated”) by example-
based SR are not guaranteed to provide the true (unknown)
high resolution details.
Sophisticated methods for image up-scaling based on
learning edge models have also been proposed (e.g., [9, 19]). The goal of these methods is to magnify (up-scale)
an image while maintaining the sharpness of the edges and
the details in the image. In contrast, in SR (example-
based as well as classical) the goal is to recover new miss-
ing high-resolution details that are not explicitly found in
any individual low-resolution image (details beyond the
Nyquist frequency of the low-resolution image). In the
classical SR, this high-frequency information is assumed
to be split across multiple low-resolution images, implic-
itly found there in aliased form. In example-based SR, this
missing high-resolution information is assumed to be avail-
able in the high-resolution database patches, and learned
from the low-res/high-res pairs of examples in the database.
In this paper we propose a framework to combine the
power of both SR approaches (Classical SR and Example-
based SR), and show how this combined framework can be
applied to obtain SR from as little as a single low-resolution
image, without any additional external information. Our ap-
proach is based on an observation (justified statistically in
the paper) that patches in a single natural image tend to re-
dundantly recur many times inside the image, both within
the same scale, as well as across different scales. Recur-
rence of patches within the same image scale (at subpixel
misalignments) forms the basis for applying the classical
SR constraints to information from a single image. Re-
currence of patches across different (coarser) image scales
implicitly provides examples of low-res/high-res pairs of
patches, thus giving rise to example-based super-resolution
from a single image (without any external database or any
prior examples). Moreover, we show how these two differ-
ent approaches to SR can be combined in a single unified
computational framework.
Patch repetitions within an image were previously ex-
ploited for noise-cleaning using ‘Non-Local Means’ [4], as
well as a regularization prior for inverse problems [15]. A
related SR approach was proposed by [16] for obtaining
higher-resolution video frames, by applying the classical
SR constraints to similar patches across consecutive video
frames and within a small local spatial neighborhood. Their
algorithm relied on having multiple image frames, and did
not exploit the power of patch redundancy across different
image scales. The power of patch repetitions across scales
(although restricted to a fixed scale-factor of 2) was previ-
ously alluded to in the papers [10, 18, 6]. In contrast to all
the above, we propose a single unified approach which com-
bines the classical SR constraints with the example-based
constraints, while exploiting (for each pixel) patch redun-
dancies across all image scales and at varying scale gaps,
thus obtaining adaptive SR with as little as a single low-
resolution image.
The rest of this paper is organized as follows: In Sec. 2 we statistically examine the observation that small patches in a single natural image tend to recur many times within and across scales of the same image. Sec. 3 presents our unified SR framework (unifying classical SR and example-based SR), and shows how it can be applied to as little as a single image. Results are provided in Sec. 4, as well as the URL of the paper’s website where more results can be found.

Figure 2: Average patch recurrence within and across scales of a single image (averaged over hundreds of natural images; see text for more details). (a) The percent of image patches for which there exist n or more similar patches (n = 1, 2, 3, ..., 9), measured at several different image scales. (b) The same statistics, but this time measured only for image patches with the highest intensity variances (top 25%). These patches correspond to patches of edges, corners, and texture.
2. Patch Redundancy in a Single Image
Natural images tend to contain repetitive visual content.
In particular, small (e.g., 5 × 5) image patches in a natu-
ral image tend to redundantly recur many times inside the
image, both within the same scale, as well as across differ-
ent scales. This observation forms the basis for our single-
image super-resolution framework as well as for other al-
gorithms in computer vision (e.g., image completion [7],
image re-targeting [17], image denoising [4], etc.). In this
section we try to empirically quantify this notion of patch
redundancy (within a single image).
Fig. 1 schematically illustrates what we mean by “patch
recurrence” within and across scales of a single image.
An input patch “recurs” in another scale if it appears ‘as
is’ (without blurring, subsampling, or scaling down) in a
scaled-down version of the image. Having found a simi-
lar patch in a smaller image scale, we can extract its high-
resolution parent from the input image (see Fig. 1). Each
low-res patch with its high-res parent forms a “low-res/high-
res pair of patches” (marked by arrows in the figure). The
high-res parent of a found low-res patch provides an indication of what the (unknown) high-res parent of the source
patch might look like. This forms the basis for Example-
Based SR, even without an external database. For this
approach to be effective, however, enough such recurring
patches must exist in different scales of the same image.
The patches displayed in Fig. 1 were chosen large for
illustration purposes, and were displayed on clear repetitive
structure in the image. However, when much smaller image

patches are used, e.g., 5 × 5, such patch repetitions occur
abundantly within and across image scales, even when we
do not visually perceive any obvious repetitive structure in
the image. This is due to the fact that very small patches
often contain only an edge, a corner, etc.; such patches are
found abundantly in multiple image scales of almost any
natural image.
Moreover, due to the perspective projection of cameras,
images tend to contain scene-specific information in dimin-
ishing sizes (diminishing toward the horizon), thus recur-
ring in multiple scales of the same image.
We statistically tested this observation on the Berkeley Segmentation Database¹ (Fig. 2). More specifically, we tested the hypothesis that small 5 × 5 patches in a single natural grayscale image, when removing their DC (their average grayscale), tend to recur many times within and across scales of the same image. The test was performed as follows: Each image I in the Berkeley database was first converted to a grayscale image. We then generated from I a cascade of images of decreasing resolutions {I_s}, scaled (down) by scale factors of 1.25^s for s = 0, 1, ..., 6 (I_0 = I). The size of the smallest resolution image was 1.25^{-6} = 0.26 of the size of the source image I (in each dimension). Each 5 × 5 patch in the source image I was compared against the 5 × 5 patches in all the images {I_s} (without their DC), measuring how many similar² patches it has in each image scale. These intra-image patch statistics were computed separately for each image. The resulting independent statistics were then averaged across all the images in the database (300 images), and are shown in Fig. 2a. Note that, on average, more than 90% of the patches in an image have 9 or more other similar patches in the same image at the original image scale (‘within scale’). Moreover, more than 80% of the input patches have 9 or more similar patches at 0.41 = 1.25^{-4} of the input scale, and 70% of them have 9 or more similar patches at 0.26 = 1.25^{-6} of the input scale.
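The counting procedure described above can be sketched in code. This is a brute-force toy version under stated assumptions: bilinear downscaling stands in for whatever resampling the experiment used, and a single fixed SSD threshold `thresh` replaces the patch-specific, Gaussian-weighted threshold described in footnote 2; all function names are ours.

```python
import numpy as np

def downscale(img, factor):
    """Bilinear downscale of a grayscale image to `factor` (< 1) of its size."""
    h, w = img.shape
    nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
    ys, xs = np.linspace(0, h - 1, nh), np.linspace(0, w - 1, nw)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def patches_no_dc(img, size=5):
    """All size x size patches, flattened, with their DC (mean) removed."""
    h, w = img.shape
    ps = [img[y:y + size, x:x + size].ravel()
          for y in range(h - size + 1) for x in range(w - size + 1)]
    ps = np.asarray(ps)
    return ps - ps.mean(axis=1, keepdims=True)

def recurrence_counts(img, n_scales=6, alpha=1.25, thresh=1.0):
    """For every 5x5 source patch, count similar patches at each scale s,
    where scale s is the image downscaled by alpha**(-s)."""
    src = patches_no_dc(img)
    counts = []
    for s in range(n_scales + 1):
        scaled = img if s == 0 else downscale(img, alpha ** (-s))
        tgt = patches_no_dc(scaled)
        # brute-force SSD between every source/target patch pair
        d = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(axis=-1)
        counts.append((d < thresh).sum(axis=1))
    return counts
```

On a real database one would replace the all-pairs SSD with an approximate nearest-neighbor structure; the quadratic version here only illustrates what is being measured.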
Recurrence of patches forms the basis for our single-
image super-resolution approach. Since the impact of
super-resolution is expressed mostly in highly detailed im-
age regions (edges, corners, texture, etc.), we wish to elim-
inate the effect of uniform patches on the above statistics.
Therefore, we repeated the same experiment using only
25% of the source patches with the highest intensity vari-
ance. This excludes the uniform and low-frequency patches,
¹ www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench
² Distances between patches were measured using Gaussian-weighted SSD. Note that textured patches tend to have much larger SSD errors than smooth (low-variance) patches when compared to other very similar-looking patches (especially in the presence of inevitable sub-pixel misalignments). Thus, for each patch we compute a patch-specific ‘good distance’, by measuring its (Gaussian-weighted) SSD with a slightly-misaligned copy of itself (by 0.5 pixel). This forms our distance threshold for each patch: patches with distance below this threshold are considered similar to the source patch.
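The patch-specific threshold of footnote 2 can be sketched as follows. Two details are our approximations: the 0.5-pixel misalignment is realized as a horizontal linear-interpolation shift, and the Gaussian width `sigma` is our choice, not specified in the text.

```python
import numpy as np

def gaussian_window(size=5, sigma=1.5):
    """Separable Gaussian weighting window, normalized to sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-ax**2 / (2 * sigma**2))
    w = np.outer(g, g)
    return w / w.sum()

def good_distance_threshold(patch, sigma=1.5):
    """Patch-specific similarity threshold: the Gaussian-weighted SSD
    between the patch and a copy of itself misaligned by 0.5 pixel
    (approximated here by averaging horizontally adjacent pixels)."""
    a = patch[:, :-1].astype(float)            # original, cropped by one column
    b = 0.5 * (patch[:, :-1] + patch[:, 1:])   # ~0.5-pixel horizontal shift
    w = gaussian_window(patch.shape[0], sigma)[:, :-1]
    w = w / w.sum()                            # renormalize the cropped window
    return float((w * (a - b) ** 2).sum())
```

A smooth patch yields a near-zero threshold while an edge or texture patch yields a large one, which is exactly the tolerance behavior the footnote motivates.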
(a) Classical Multi-Image SR (b) Single-Image Multi-Patch SR
Figure 3: (a) Low-res pixels in multiple low-res images impose
multiple linear constraints on the high-res unknowns within the
support of their blur kernels. (b) Recurring patches within a sin-
gle low-res image can be regarded as if extracted from multiple
different low-res images of the same high resolution scene, thus
inducing multiple linear constraints on the high-res unknowns.
maintaining mostly patches of edges, corners, and texture.
The resulting graphs are displayed in Fig. 2b. Although
there is a slight drop in patch recurrence, the basic observa-
tion still holds even for the high-frequency patches: Most
of them recur several times within and across scales of the
same image (more than 80% of the patches recur 9 or more
times in the original image scale; more than 70% recur 9
or more times at 0.41 of the input scale, and 60% of them
recur 9 or more times in 0.26 of the input scale.)
In principle, the lowest image scale in which we can still
find recurrence of a source patch provides an indication
of its maximal potential resolution increase using our ap-
proach (when the only available information is the image
itself). This is pixel-dependent, and can be estimated at ev-
ery pixel in the image.
3. Single Image SR: A Unified Framework
Recurrence of patches within the same image scale
forms the basis for applying the Classical SR constraints
to information from a single image (Sec. 3.1). Recurrence
of patches across different scales gives rise to Example-
Based SR from a single image, with no prior examples
(Sec. 3.2). Moreover, these two different approaches to SR
can be combined into a single unified computational frame-
work (Sec. 3.3).
3.1. Employing in-scale patch redundancy
In the classical Multi-Image Super-Resolution (e.g., [12, 5, 8]), a set of low-resolution images {L_1, ..., L_n} of the same scene (at subpixel misalignments) is given, and the goal is to recover their mutual high-resolution source image H. Each low-resolution image L_j (j = 1, ..., n) is assumed to have been generated from H by a blur and subsampling process: L_j = (H ∗ B_j) ↓ s_j, where ↓ denotes a subsampling operation, s_j is the scale reduction factor (the subsampling rate) between H and L_j, and B_j(q) is the corresponding blur kernel (the Point Spread Function, PSF), represented in the high-resolution coordinate system (see Fig. 3a). Thus, each low-resolution pixel p = (x, y) in each low-resolution image L_j induces one linear constraint on the unknown high-resolution intensity values within the local neighborhood around its corresponding high-resolution pixel q ∈ H (the size of the neighborhood is determined by the support of the blur kernel B_j):

    L_j(p) = (H ∗ B_j)(q) = Σ_{q_i ∈ Support(B_j)} H(q_i) · B_j(q_i − q)        (1)

where {H(q_i)} are the unknown high-resolution intensity values. If enough low-resolution images are available (at sub-pixel shifts), then the number of independent equations exceeds the number of unknowns. Such super-resolution schemes have been shown to provide reasonably stable super-resolution results up to a factor of 2 (a limit of 1.6 is shown in [14] when noise removal and registration are not good enough).
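Eq. (1) says each low-res pixel contributes one linear equation whose nonzero coefficients are the blur-kernel taps around q. A minimal sketch of building such a row (dense, for clarity; `constraint_row` and its arguments are our naming):

```python
import numpy as np

def constraint_row(q, blur, H_shape):
    """One row of the linear system of Eq. (1): the low-res measurement
    L_j(p) is a blur-weighted sum of the high-res unknowns H(q_i) in the
    support of B_j around the corresponding high-res pixel q."""
    H_h, H_w = H_shape
    kh, kw = blur.shape
    row = np.zeros(H_h * H_w)          # one coefficient per high-res unknown
    qy, qx = q
    for dy in range(kh):
        for dx in range(kw):
            y, x = qy + dy - kh // 2, qx + dx - kw // 2
            if 0 <= y < H_h and 0 <= x < H_w:
                row[y * H_w + x] = blur[dy, dx]
    return row
```

Stacking one such row per low-res measurement and solving the stacked system in least squares is the classical multi-image recovery; Sec. 3.1 obtains the extra rows from recurring patches rather than extra images.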
In principle, when there is only a single low-resolution image L = (H ∗ B) ↓ s, the problem of recovering H becomes under-determined, as the number of constraints induced by L is smaller than the number of unknowns in H. Nevertheless, as observed in Sec. 2, there is plenty of patch redundancy within a single image L. Let p be a pixel in L, and P be its surrounding patch (e.g., 5 × 5); then there exist multiple similar patches P_1, ..., P_k in L (inevitably, at subpixel shifts). These patches can be treated as if taken from k different low-resolution images of the same high-resolution “scene”, thus inducing k times more linear constraints (Eq. (1)) on the high-resolution intensities of pixels within the neighborhood of q ∈ H (see Fig. 3b). For increased numerical stability, each equation induced by a patch P_i is globally scaled by the degree of similarity of P_i to its source patch P. Thus, patches of higher similarity to P will have a stronger influence on the recovered high-resolution pixel values than patches of lower similarity.

These ideas can be translated to the following simple algorithm: For each pixel in L find its k nearest patch neighbors in the same image L (e.g., using an Approximate Nearest Neighbor algorithm [1]; we typically use k = 9) and compute their subpixel alignment (at 1/s pixel shifts, where s is the scale factor). Assuming sufficient neighbors are found, this process results in a determined set of linear equations on the unknown pixel values in H. Globally scale each equation by its reliability (determined by its patch similarity score), and solve the linear set of equations to obtain H. An example of such a result can be found in Fig. 5c.
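The two ingredients of this simple algorithm can be sketched as follows; exhaustive search stands in for the Approximate Nearest Neighbor algorithm of [1], and the mapping from patch-similarity score to equation weight is left abstract (the text does not specify it).

```python
import numpy as np

def knn_patches(L, p, k=9, size=5):
    """Brute-force k nearest patches (by SSD) to the patch at p in L;
    a stand-in for the Approximate Nearest Neighbor search of [1]."""
    h, w = L.shape
    py, px = p
    src = L[py:py + size, px:px + size]
    cands = [(((L[y:y + size, x:x + size] - src) ** 2).sum(), (y, x))
             for y in range(h - size + 1)
             for x in range(w - size + 1)
             if (y, x) != (py, px)]
    cands.sort(key=lambda t: t[0])
    return cands[:k]

def solve_weighted(A, b, weights):
    """Globally scale each equation by its reliability weight, then solve
    the stacked system in least squares."""
    sw = np.sqrt(np.asarray(weights, dtype=float))[:, None]
    x, *_ = np.linalg.lstsq(A * sw, b * sw.ravel(), rcond=None)
    return x
```

Each neighbor found by `knn_patches` would contribute rows of Eq. (1) to `A`, with a weight derived from its SSD (e.g., an inverse-distance score would be one plausible choice).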
3.2. Employing cross-scale patch redundancy
The above process allows us to extend the applicability of the classical Super-Resolution (SR) to a single image. However, even if we disregard additional difficulties which arise
in the single image case (e.g., the limited accuracy of our
patch registration; image patches with insufficient matches),
this process still suffers from the same inherent limitations
of the classical multi-image SR (see [3, 14]).
Figure 4: Combining Example-based SR constraints with Classical SR constraints in a single unified computational framework. Patches in the input low-res image L (dark red and dark green patches) are searched for in the down-scaled versions of L (blue-marked images). When a similar patch is found, its parent patch (light red and light green) is copied to the appropriate location in the unknown high-resolution image (purple images) with the appropriate gap in scale. A ‘learned’ (copied) high-res patch induces classical SR linear constraints on the unknown high-res intensities in the target high-res image H. The support of the corresponding blur kernels (red and green ellipses) is determined by the residual gaps in scale between the resolution levels of the ‘learned’ high-res patches and the target resolution level of H. Note that for different patches found at different scale gaps, the corresponding blur kernels (red and green ellipses) will accordingly have different supports. (See text for more details.)

The limitations of the classical SR have led to the development of “Example-Based Super-Resolution” (e.g., [11, 2]). In example-based SR, correspondences between low
and high resolution image patches are learned from a
database of low and high resolution image pairs, and then
applied to a new low-resolution image to recover its most
likely high-resolution version. Example-based SR has been
shown to exceed the limits of classical SR. In this section we
show how similar ideas can be exploited within our single
image SR framework, without any external database or any
prior example images. The low-res/high-res patch corre-
spondences can be learned directly from the image itself, by
employing patch repetitions across multiple image scales.
Let B be the blur kernel (camera PSF) relating the low-res input image L with the unknown high-res image H: L = (H ∗ B) ↓ s. Let I_0, I_1, ..., I_n denote a cascade of unknown images of increasing resolutions (scales) ranging from the low-res L to the target high-res H (I_0 = L and I_n = H), with a corresponding cascade of blur functions B_0, B_1, ..., B_n (where B_n = B is the PSF relating H to L, and B_0 is the δ function), such that every I_l satisfies: L = (I_l ∗ B_l) ↓ s_l, where s_l denotes the relative scaling factor.

(a) Input image (scaled for display). (b) Bicubic interpolation (×2). (c) Within image repetitions (×2). (d) Unified single-image SR (×2).
Figure 5: Comparing single-image SR with the ‘classical’ SR constraints only, to the unified single-image SR (Classical + Example-based
constraints). Note that the ‘classical’ SR constraints, when applied to similar within-scale patches, result in a high-resolution image (c)
which is sharper and cleaner than the interpolated image (b), but is not able to recover the fine rail in the intermediate arched windows.
In contrast, the high-resolution image (d) produced using the unified Classical + Example-based constraints recovers these fine rails.
The resulting cascade of images is illustrated in Fig. 4 (the
purple images).
Note that although the images {I_l} (l = 0, ..., n) are unknown, the cascade of blur kernels {B_l} (l = 0, ..., n) can be assumed to be known. When the PSF B is unknown (which is often the case), B can be approximated with a Gaussian, in which case B_l = B(s_l) are simply a cascade of Gaussians whose variances are determined by s_l. Moreover, when the scale factors s_l are chosen such that s_l = α^l for a fixed α, then the following constraint will also hold for all I_l (l = 1, ..., n): I_l = (H ∗ B_{n−l}) ↓ s_{n−l}. (The uniform scale factor guarantees that if two images in this cascade are found m levels apart (e.g., I_l and I_{l+m}), they will be related by the same blur kernel B_m, regardless of l.)
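The assumed-known blur cascade can be sketched as below; the specific rule tying each Gaussian's variance to s_l (here σ_l ∝ sqrt(s_l² − 1)) is our illustrative assumption, since the text only states that the variances are determined by s_l.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 2-D Gaussian with radius ~3*sigma."""
    r = max(1, int(np.ceil(3 * sigma)))
    ax = np.arange(-r, r + 1)
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def blur_cascade(n, alpha=1.25, c=0.55):
    """Cascade B_0, ..., B_n for scale factors s_l = alpha**l:
    B_0 is the delta function, and B_l widens as s_l grows.
    The constant c is an illustrative choice, not from the paper."""
    kernels = [np.array([[1.0]])]  # B_0 = delta
    for l in range(1, n + 1):
        s_l = alpha ** l
        kernels.append(gaussian_kernel(c * np.sqrt(s_l**2 - 1)))
    return kernels
```

The key property the algorithm relies on is only that the kernels are known and grow monotonically wider with the scale gap.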
Let L = I_0, I_{−1}, ..., I_{−m} denote a cascade of images of decreasing resolutions (scales) obtained from L using the same blur functions {B_l}: I_{−l} = (L ∗ B_l) ↓ s_l (l = 0, ..., m). Note that unlike the high-res image cascade, these low-resolution images are known (computed from L). The resulting cascade of images is also illustrated in Fig. 4 (the blue images).
Let P_l(p) denote a patch in the image I_l at pixel location p. For any pixel p in the input image L (L = I_0) and its surrounding patch P_0(p), we can search for similar patches within the cascade of low-resolution images {I_{−l}}, l > 0 (e.g., using Approximate Nearest Neighbor search [1]). Let P_{−l}(p̃) be such a matching patch found in the low-res image I_{−l}. Then its higher-res ‘parent’ patch, Q_0(s_l · p̃), can be extracted from the input image I_0 = L (or from any intermediate resolution level between I_{−l} and L, if desired). This provides a low-res/high-res patch pair [P, Q], which provides a prior on the appearance of the high-res parent of the low-res input patch P_0(p), namely patch Q_l(s_l · p) in the high-res unknown image I_l (or in any intermediate resolution level between L and I_l, if desired). The basic step is therefore as follows (schematically illustrated in Fig. 4):

    P_0(p) --find NN--> P_{−l}(p̃) --parent--> Q_0(s_l · p̃) --copy--> Q_l(s_l · p)
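This basic step can be sketched as follows; exhaustive search again stands in for the ANN search of [1], and integer rounding of s_l · p is our simplification of the subpixel bookkeeping (`best_match` and `learned_pair` are our names).

```python
import numpy as np

def best_match(patch, img):
    """Exhaustive nearest-neighbor patch search in `img` (a stand-in for
    the Approximate Nearest Neighbor search of [1])."""
    size = patch.shape[0]
    h, w = img.shape
    best_d, best_pos = np.inf, None
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            d = ((img[y:y + size, x:x + size] - patch) ** 2).sum()
            if d < best_d:
                best_d, best_pos = d, (y, x)
    return best_pos, best_d

def learned_pair(L, I_down, p, s_l, size=5):
    """One basic step: P_0(p) --find NN--> P_{-l}(p~) --parent-->
    Q_0(s_l * p~), proposed as the high-res patch at location s_l * p."""
    py, px = p
    patch = L[py:py + size, px:px + size]
    (ty, tx), d = best_match(patch, I_down)
    hy, hx = int(round(ty * s_l)), int(round(tx * s_l))
    hsize = int(round(size * s_l))
    parent = L[hy:hy + hsize, hx:hx + hsize]               # Q_0(s_l * p~)
    target = (int(round(py * s_l)), int(round(px * s_l)))  # where Q_l goes
    return parent, target, d
```

In the full algorithm, `I_down` ranges over the whole cascade {I_{−l}}, and the returned distance d becomes the reliability weight of the constraints that `parent` induces on H.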
3.3. Combining Classical and Example-Based SR
The process described in Sec. 3.2, when repeated for all pixels in L, will yield a large collection of (possibly overlapping) suggested high-res patches {Q_l} at the range of resolution levels l = 1, ..., n between L and H. Each such ‘learned’ high-res patch Q_l induces linear constraints on the unknown target resolution H. These constraints are in the form of the classical SR constraints of Eq. (1), but with a more compactly supported blur kernel than B = PSF. These constraints are induced by a smaller blur kernel B_{n−l}, which needs to compensate only for the residual gap in scale (n − l) between the resolution level l of the ‘learned’ patch and the final resolution level n of the target high-res H. This is illustrated in Fig. 4.

The closer the learned patches are to the target resolution H, the better conditioned the resulting set of equations is (since the blur kernel gradually approaches the δ function, and accordingly, the coefficient matrix gradually approaches the identity matrix). Note that the constraints in Eq. (1) are of the same form, with l = 0 and B = PSF. As in Sec. 3.1, each such linear constraint is globally scaled by its reliability (determined by its patch similarity score). Note that if, for a particular pixel, the only similar patches found are within the input scale L, then this scheme reduces to the ‘classical’ single-image SR of Sec. 3.1 at that pixel; and if no similar patches are found, this scheme reduces to simple deblurring at that pixel. Thus, the above scheme is guaranteed to provide the best possible resolution increase at each pixel (according to its patch redundancy within and across scales of L), but never worse than simple upscaling (interpolation) of L.

Solving Coarse-to-Fine: In most of our experiments we used the constant scale factor α = 1.25 (namely, s_l = 1.25^l). When integer magnification factors were desired, this value was adjusted (e.g., for factors 2 and 4 we used α = 2^{1/3}). In our current implementation the above set of linear equations was not solved at once to produce H,
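The scale-factor schedule can be checked numerically; this snippet only verifies the arithmetic behind the α = 2^{1/3} choice.

```python
# Per-level magnification with a constant factor alpha: level l sits at
# alpha**l. The default alpha = 1.25 gives a dense cascade; choosing
# alpha = 2**(1/3) makes every third level land on an integer power of 2,
# so integer overall factors (2, 4) fall exactly on cascade levels.
alpha = 2 ** (1.0 / 3.0)
levels = [alpha ** l for l in range(7)]
```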
