Image super-resolution as sparse representation of raw image patches
Citations
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
Image Super-Resolution Using Deep Convolutional Networks
Image Super-Resolution Via Sparse Representation
References
Regression Shrinkage and Selection via the Lasso
Compressed sensing
Nonlinear dimensionality reduction by locally linear embedding
K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation
Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries
Frequently Asked Questions (14)
Q2. What future works have the authors mentioned in the paper "Image super-resolution as sparse representation of raw image patches" ?
However, one of the most important questions for future investigation is to determine, in terms of the within-category variation, the number of raw sample patches required to generate a dictionary satisfying the sparse representation prior.
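The dictionary-construction step described above amounts to sampling raw patches directly from training images. A minimal sketch, assuming grayscale images as 2-D NumPy arrays; the function name, patch size, and column normalization are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def sample_patch_dictionary(images, patch_size=3, n_patches=1000, seed=0):
    """Form a dictionary whose columns are raw patches sampled at random
    locations from a list of grayscale training images."""
    rng = np.random.default_rng(seed)
    cols = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - patch_size + 1)
        c = rng.integers(img.shape[1] - patch_size + 1)
        cols.append(img[r:r + patch_size, c:c + patch_size].ravel())
    D = np.stack(cols, axis=1).astype(float)
    # Normalize columns so sparse-coding penalties treat all atoms equally.
    D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D
```

The open question in the answer above is how many such columns are needed before the sparse representation prior holds for a given image category.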
Q3. What is the way to recover a high-resolution image?
If D satisfies an appropriate near-isometry condition, then for a wide variety of matrices L, any sufficiently sparse linear representation of a high-resolution image x in terms of D can be recovered (almost) perfectly from the low-resolution image [9, 21].
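A minimal sketch of this recovery idea, assuming paired low- and high-resolution dictionaries D_l and D_h that share sparse coefficients: solve an ℓ1-penalized coding problem against D_l, then apply the coefficients to D_h. The generic ISTA (proximal gradient) solver and the value of λ below are illustrative assumptions, not the paper's solver:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_ista(D, y, lam=1e-3, n_iter=2000):
    """Minimize 0.5*||D a - y||^2 + lam*||a||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = soft(a - (D.T @ (D @ a - y)) / L, lam / L)
    return a

# Recovery sketch: code the low-res patch y against D_l, then apply the
# same sparse coefficients to the high-res dictionary:
#   x_hat = D_h @ sparse_code_ista(D_l, y)
```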
Q4. What is the important component of the low-resolution image?
The high-frequency components of the low-resolution image are also arguably the most important for predicting the lost high-frequency content in the target high-resolution image.
Q5. How can the authors eliminate the discrepancy in (7)?
The authors eliminate this discrepancy by projecting X0 onto the solution space of DHX = Y, computing X∗ = arg min_X ‖X − X0‖ s.t. DHX = Y. (9) The solution to this optimization problem can be efficiently computed using the back-projection method, originally developed in computed tomography and applied to super-resolution in [15, 4].
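The back-projection idea can be sketched as iteratively feeding the low-resolution residual back into the estimate. This is a toy version in which the blur-and-downsample operator DH is modeled as s-by-s block averaging; the operator choice, function names, and iteration count are assumptions for illustration, not the paper's exact operators:

```python
import numpy as np

def downsample(X, s=2):
    """Toy DH operator: s-by-s block averaging (blur + decimate)."""
    h, w = X.shape
    return X[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def back_project(X0, Y, s=2, n_iter=10):
    """Push X toward the constraint set {X : DHX = Y}:
    upsample the low-res residual and add it back."""
    X = X0.copy()
    for _ in range(n_iter):
        residual = Y - downsample(X, s)              # mismatch in low-res domain
        X = X + np.repeat(np.repeat(residual, s, axis=0), s, axis=1)
    return X
```

For this particular block-average operator the constraint DHX = Y is satisfied after a single pass; with a realistic blur kernel the correction is applied repeatedly with a filtered residual.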
Q6. How do the authors obtain a locally consistent solution to the patch-wise reconstruction problem?
The authors obtain a locally consistent solution by allowing patches to overlap and demanding that the reconstructed high-resolution patches agree on the overlapped areas.
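A simple way to realize this agreement on overlaps is to average every reconstructed patch into an accumulator and divide by the per-pixel coverage count. This sketch is one common implementation of overlap averaging, not necessarily the paper's exact aggregation rule:

```python
import numpy as np

def stitch_patches(patches, positions, out_shape, patch_size):
    """Average overlapping reconstructed patches: each output pixel is the
    mean of every patch that covers it."""
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    p = patch_size
    for patch, (r, c) in zip(patches, positions):
        acc[r:r + p, c:c + p] += patch.reshape(p, p)
        cnt[r:r + p, c:c + p] += 1.0
    return acc / np.maximum(cnt, 1.0)   # uncovered pixels stay zero
```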
Q7. What is the sparse representation of a low-resolution patch?
The sparse representation of a low-resolution patch in terms of Dℓ will be directly used to recover the corresponding high-resolution patch from Dh.
Q8. Why might the high-resolution image X0 produced by the sparse representation approach not satisfy the reconstruction constraint exactly?
Because of this, and also because of noise, the high-resolution image X0 produced by the sparse representation approach of the previous section may not satisfy the reconstruction constraint (2) exactly.
Q9. What is the penalty function for the (i, j)th patch of X?
αij denotes the representation coefficients for the (i, j)th patch of X, and Pij is a projection matrix that selects the (i, j)th patch from X. ρ(X) is a penalty function that encodes prior knowledge about the high-resolution image.
Q10. What is the advantage of neighbor embedding?
One advantage of the authors' approach over methods such as neighbor embedding [5] is that it selects the number of relevant dictionary elements adaptively for each patch.
Q11. What philosophy of LLE is adopted from manifold learning?
In [5], the authors adopt the philosophy of LLE [22] from manifold learning, assuming similarity between the two manifolds in the high-resolution patch space and the low-resolution patch space.
Q12. How many patches are used in the animal dictionary?
The authors now conduct more challenging experiments on more intricate textures found in animal images, using the animal dictionary with merely 100,000 training patches (second row of Figure 2).
Q13. How many patches are sampled from each training image?
For each category of images, the authors sample only about 100,000 patches from about 30 training images to form each dictionary, which is a considerably smaller training set than that needed by [4].
Q14. How many description vectors are used for each low-resolution patch?
Applying these four filters, the authors obtain four feature vectors for each patch, which are concatenated into a single vector as the final representation of the low-resolution patch.
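The feature-extraction step can be sketched as filtering each low-resolution patch with first- and second-order derivative filters in both directions and concatenating the responses. The specific filter taps and helper names below are illustrative assumptions, not the paper's exact filters:

```python
import numpy as np

def filt(img, taps, axis):
    """Correlate a 1-D filter along one axis with 'same'-size output."""
    return np.apply_along_axis(
        lambda v: np.convolve(v, taps[::-1], mode="same"), axis, img)

def lowres_features(patch):
    """Four derivative responses (1st and 2nd order, horizontal and
    vertical), concatenated into one feature vector."""
    f1 = np.array([-1.0, 0.0, 1.0])             # 1st-order derivative
    f2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])   # 2nd-order derivative
    feats = [filt(patch, f1, 1), filt(patch, f1, 0),
             filt(patch, f2, 1), filt(patch, f2, 0)]
    return np.concatenate([f.ravel() for f in feats])
```

Each patch thus yields a feature vector four times the patch's pixel count, emphasizing the high-frequency content that the answer to Q4 identifies as most predictive.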