An edge-guided image interpolation algorithm via directional filtering and data fusion
Citations
Dynamic Crowd Simulation by Emotion-based Behavioral Control of Individuals
An efficient lightweight network for single image super-resolution
CrossNet: An End-to-end Reference-based Super Resolution Network using Cross-scale Warping
Hybrid Post-Training Quantization for Super-Resolution Neural Network Compression
Image Super-Resolution Based on Dense Convolutional Network
References
A wavelet tour of signal processing
Cubic convolution interpolation for digital image processing
New edge-directed interpolation
Splines: a perfect fit for signal and image processing
Survey: interpolation methods in medical image processing
Frequently Asked Questions (12)
Q2. What is the problem with the convolution-based interpolation methods?
If the signal being downsampled to form the LR image contains frequency components beyond the Nyquist limit of the LR sampling grid, the convolution-based interpolation methods suffer from aliasing when reconstructing the HR image.
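As a minimal sketch of where the aliasing comes from, in our own notation for the 1-D, factor-of-2 case (this is textbook sampling theory, not a derivation taken from the paper):

\[
x_{\downarrow}[n] = x[2n]
\;\;\Longrightarrow\;\;
X_{\downarrow}\bigl(e^{j\omega}\bigr) = \tfrac12\Bigl[X\bigl(e^{j\omega/2}\bigr) + X\bigl(e^{j(\omega-2\pi)/2}\bigr)\Bigr],
\]

so the two spectral copies overlap whenever \(X\) has energy above \(\omega = \pi/2\); once they overlap, no fixed convolution kernel applied to \(x_{\downarrow}\) can separate them, and the folded components show up as jaggies and ringing along edges.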
Q3. How many multiplications and additions are required to compute the covariance matrix?
If an 8 × 8 window is used to compute the covariance matrix, this algorithm requires about 1300 multiplications and thousands of additions.
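One way to see where a figure of that order comes from (this accounting, and the symbols C, y, R, r, are our reading of a covariance-based estimator in the spirit of new edge-directed interpolation, not numbers quoted from the paper): an 8 × 8 window contains 64 samples; stacking each sample's four diagonal neighbours as a row of a 64 × 4 matrix C and the samples themselves in a vector y, the local statistics cost roughly

\[
\underbrace{64 \times 16}_{\mathbf{R} = \mathbf{C}^{\mathsf T}\mathbf{C}}
\;+\;
\underbrace{64 \times 4}_{\mathbf{r} = \mathbf{C}^{\mathsf T}\mathbf{y}}
\;=\; 1024 + 256 \;=\; 1280
\]

multiplications, plus a comparable number of additions and the cost of solving the resulting 4 × 4 linear system, which is consistent with the quoted figure of about 1300 multiplications.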
Q4. How can the authors reduce the computational complexity of the LMMSE algorithm?
One way to reduce the computational complexity is to judiciously invoke the LMMSE algorithm only for pixels where high local activity is detected, and to use a simple linear interpolation method in smooth regions.
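A minimal sketch of such a switching rule (the variance-based activity measure, the default threshold, and all names are our assumptions for illustration, not values from the paper):

```python
import numpy as np

def interpolate_pixel(lr_neighbors, lmmse_estimator, activity_threshold=16.0):
    """Hedged sketch of the hybrid strategy described above.

    lr_neighbors:     the four nearest LR samples around the missing HR pixel.
    lmmse_estimator:  callable implementing the (expensive) directional LMMSE step.
    """
    neighbors = np.asarray(lr_neighbors, dtype=float)
    activity = neighbors.var()            # local activity around the missing pixel
    if activity < activity_threshold:     # smooth region: cheap linear interpolation
        return neighbors.mean()
    return lmmse_estimator(neighbors)     # edge/texture region: full LMMSE estimate
```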
Q5. What is the importance of the edge direction in the interpolation process?
Given that the human visual system is highly sensitive to edges, especially to their spatial locations, it is crucial to suppress interpolation artifacts while retaining edge sharpness and geometry.
Q6. How many additions and multiplications are needed to compute a?
By setting the size of the vectors to 5 and fixing the quantity in (2-10) as described to reduce the computational cost, the authors still need 20 additions and 20 multiplications to compute it.
Q7. How many additions, multiplications, and divisions do the authors need to compute the local statistics?
If the authors set the local mean to be the average of the four nearest LR neighbors of the missing sample to reduce computation, then computing the mean needs three additions and one division, and computing the variance needs seven additions, four multiplications, and one division.
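These counts are consistent with the straightforward formulas over the four nearest neighbours (the notation x_i, μ, σ² is ours):

\[
\mu = \tfrac14\,(x_1 + x_2 + x_3 + x_4) \quad\text{(3 additions, 1 division)},
\]
\[
\sigma^2 = \tfrac14\sum_{i=1}^{4}(x_i - \mu)^2 \quad\text{(4 subtractions and 3 additions, i.e., 7 additions, 4 multiplications, 1 division)}.
\]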
Q8. Why does the covariance-based algorithm perform poorly on this test image?
This is because the algorithm uses a relatively large window to compute the covariance matrix for each missing sample, whereas the edge structures in this test image are small in scale, which leads to incorrect estimation of the sample covariance.
Q9. How did the authors combine the statistics of the two observation subsets?
By combining the statistics of the two observation subsets, the authors fused the two noisy measurements into a more robust estimate via linear minimum mean square-error estimation.
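A minimal sketch of such a fusion step for two measurements with independent, zero-mean errors (the function and variable names are ours, not the paper's):

```python
def fuse_directional_estimates(x1, var1, x2, var2):
    """Hedged sketch of fusing two noisy measurements by LMMSE-style weighting.

    x1, x2:      the two directional estimates of the missing pixel.
    var1, var2:  their estimated error variances.
    """
    # Weights are inversely proportional to the error variances, so the more
    # reliable direction dominates the fused estimate.
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    return w1 * x1 + w2 * x2
```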
Q10. How do the authors calculate the mean and variance of a sample?
To balance the conflicting requirements of sample size and sample consistency, the authors propose Gaussian weighting within the sample window, reflecting the fact that the correlation between a sample and its neighbors decays rapidly with the distance between them.
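A minimal sketch of Gaussian-weighted local statistics under these assumptions (the function name, window handling, and default scale are ours):

```python
import numpy as np

def gaussian_weighted_stats(window, sigma=1.0):
    """Hedged sketch of Gaussian-weighted local mean and variance.

    window: 2-D array of LR samples centred on the pixel being estimated.
    Nearby samples receive larger weights, so distant (weakly correlated)
    samples contribute less to the local statistics.
    """
    h, w = window.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist2 = (ys - cy) ** 2 + (xs - cx) ** 2
    weights = np.exp(-dist2 / (2.0 * sigma ** 2))
    weights /= weights.sum()
    mean = float(np.sum(weights * window))
    var = float(np.sum(weights * (window - mean) ** 2))
    return mean, var
```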
Q11. What scales did the authors use for the Gaussian filters?
In the experiments, the authors set the scale of the 2-D Gaussian filter [referring to (2-5)] to around 1 and the scale of the 1-D Gaussian filter [referring to (2-9)] to around 1.5.
Q12. How did the authors reduce the computational complexity of the proposed method?
To reduce the computational complexity of the proposed method, the authors simplified it to an optimal weighting problem and determined the optimal weights.
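A minimal scalar sketch of such an optimal-weighting step, under independence assumptions that are ours rather than the paper's: writing the two directional estimates as \(\hat{x}_1 = x + n_1\) and \(\hat{x}_2 = x + n_2\) with independent zero-mean errors of variance \(\sigma_1^2, \sigma_2^2\), and fusing them as \(\hat{x} = w\hat{x}_1 + (1-w)\hat{x}_2\),

\[
\mathrm{MSE}(w) = w^2\sigma_1^2 + (1-w)^2\sigma_2^2,
\qquad
\frac{d\,\mathrm{MSE}}{dw} = 0
\;\Rightarrow\;
w^{\star} = \frac{\sigma_2^2}{\sigma_1^2 + \sigma_2^2},
\]

which is the inverse-variance weighting used in the fusion sketch above.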