An edge-guided image interpolation algorithm via directional filtering and data fusion
Summary
Introduction
- Some nonlinear interpolation techniques [8]–[15] were proposed in recent years to maintain edge sharpness.
- This process discriminates the two subsets by their coherence with the missing sample, so that the subset perpendicular to the edge direction contributes less to the LMMSE estimate of the missing sample.
II. EDGE-GUIDED LMMSE-BASED INTERPOLATION
- Referring to Fig. 1, the black dots represent the available samples of the LR image and the white dots represent the missing samples of the HR image.
- The directional means and variances can be computed by (2-8), and the covariance matrix can then be estimated as in (2-10), which involves the normalized correlation coefficient of the two directional measurement noises. Although the two noise terms are nearly uncorrelated with the missing sample, they are somewhat correlated with each other, because the two directional observation subsets have some similarities due to the high local correlation.
- In the areas where sharp edges appear, which is the situation of concern here, the values of this correlation coefficient are sufficiently low that the two noise terms can be assumed uncorrelated without materially affecting the performance of the proposed interpolation algorithm in practice.
- As illustrated in Fig. 3, each remaining missing sample can be estimated in one direction from the original pixels of the LR image, and in the other direction from the already interpolated HR samples.
- Similar to (2-2), the two directional approximations are treated as noisy measurements of the missing sample, and its LMMSE estimate can then be computed as described in the previous section.
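The fusion step described above can be sketched in a few lines. This is a minimal illustration of LMMSE fusion of two noisy directional measurements, not the paper's exact implementation: the variable names, and the assumption that the two noises are zero-mean and mutually uncorrelated (the simplification the authors justify for sharp-edge regions), are ours.

```python
def lmmse_fuse(y1, y2, var_s, var_n1, var_n2, mu):
    """LMMSE estimate of a sample s from two noisy measurements.

    Model: y1 = s + n1 and y2 = s + n2, where s has prior mean mu and
    variance var_s, and the noises are zero-mean, mutually uncorrelated,
    with variances var_n1 and var_n2 (a simplified stand-in for the
    paper's directional interpolation errors).
    """
    # Closed-form LMMSE weights: each measurement is weighted by the
    # product of the signal variance and the OTHER measurement's noise
    # variance; the prior mean absorbs the remaining weight.
    w1 = var_s * var_n2
    w2 = var_s * var_n1
    w0 = var_n1 * var_n2
    return (w0 * mu + w1 * y1 + w2 * y2) / (w0 + w1 + w2)
```

With equal noise variances the two measurements are weighted equally and the estimate shrinks toward the prior mean; as the noise variances shrink, the estimate approaches the plain average of the two measurements.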
III. SIMPLIFIED LMMSE INTERPOLATION ALGORITHM
- This may amount to too heavy a computation burden for some applications that need high throughput.
- In total, the algorithm needs 39 additions, 32 multiplications, and three divisions to compute a missing sample with (2-4).
- The weighted-average strategy yields a significant reduction in complexity compared with the exact LMMSE method.
- The two weights are determined so as to minimize the mean square error (MSE) of the fused estimate.
- In fact, if the two directional measurements are highly correlated, that is, the two estimates are close to each other, then the fused result varies little with the choice of weights anyway.
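The simplified optimal-weighting idea can be illustrated with a short sketch. The error variances `v1` and `v2` are hypothetical stand-ins for the directional noise variances, and the estimation errors are assumed zero-mean and uncorrelated; under those assumptions the MSE-optimal convex combination weights each estimate by the other's error variance.

```python
def optimal_weighted_average(x1, x2, v1, v2):
    """MSE-optimal convex combination of two unbiased estimates.

    x1, x2: the two directional estimates of the missing sample.
    v1, v2: their error variances (assumed zero-mean, uncorrelated).
    Minimizing E[(w*x1 + (1-w)*x2 - s)^2] over w gives w = v2/(v1+v2).
    """
    w1 = v2 / (v1 + v2)
    w2 = v1 / (v1 + v2)
    return w1 * x1 + w2 * x2
```

The less reliable estimate (larger variance) automatically receives the smaller weight, which is why this cheap rule retains most of the benefit of the full LMMSE computation.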
IV. EXPERIMENTAL RESULTS
- The proposed image interpolation algorithms were implemented and tested, and their performance was compared with some existing methods.
- Since the original HR images are known in the simulation, the authors can compare the interpolated results with the true images, and measure the PSNR of those interpolated images.
- Figs. 4 and 5 show the Lena and Butterfly images interpolated by the LMMSE_INTR_cubic and LMMSE_INTR_linear methods.
- The small holes in this image are good patterns to test the edge recovery ability of the interpolation algorithms.
- The new interpolation algorithms reproduced much sharper edges than the bicubic convolution or bicubic spline methods, while being competitive against the methods of [8] and [9].
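The PSNR figures referred to above follow the standard definition for 8-bit images; the snippet below is a conventional implementation, not code from the paper.

```python
import numpy as np

def psnr(original, interpolated, peak=255.0):
    """Peak signal-to-noise ratio between the true HR image and the
    interpolated result, in dB. Higher is better; identical images
    yield infinity."""
    diff = original.astype(np.float64) - interpolated.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```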
V. CONCLUSION
- The authors developed an edge-guided LMMSE-type image interpolation technique.
- For each pixel to be interpolated, the authors partitioned its neighborhood into two observation subsets in two orthogonal directions.
- These two directional estimates were processed as two noisy measurements of the missing sample.
- To reduce the computational complexity of the proposed method, the authors simplified it to an optimal weighting problem and determined the optimal weights.
- The simplified method had competitive performance with significant computational savings.
Frequently Asked Questions (12)
Q2. What is the problem with the convolution-based interpolation methods?
If the signal downsampled into the LR image exceeds the Nyquist sampling limit, the convolution-based interpolation methods suffer from aliasing when reconstructing the HR image.
Q3. How many multiplications and additions is required to compute the covariance matrix?
If an 8×8 window is used to compute the covariance matrix, this algorithm requires about 1300 multiplications and thousands of additions.
Q4. How can the authors reduce the computational complexity of the LMMSE algorithm?
One way to reduce the computational complexity is to judiciously invoke the LMMSE algorithm only for pixels where high local activities are detected, and use a simple linear interpolation method in smooth regions.
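That switching strategy can be sketched with a simple local-activity test. The variance-based activity measure and the threshold value below are illustrative assumptions, not quantities taken from the paper:

```python
import numpy as np

def is_active(window, threshold=100.0):
    """Decide whether to invoke the expensive edge-guided estimator.

    A simple proxy for 'high local activity' is the variance of the
    local window of available LR samples: flat regions have near-zero
    variance and can be handled by cheap linear interpolation, while
    edge regions trip the threshold. The threshold is illustrative.
    """
    return float(np.var(window)) > threshold
```

In a full interpolator this predicate would gate, per missing pixel, between the LMMSE path and a bilinear fallback.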
Q5. What is the importance of the edge direction in the interpolation process?
Given that the human visual system is highly sensitive to the edges, especially in their spatial locations, it is crucial to suppress the interpolation artifacts while retaining the edge sharpness and geometry.
Q6. How many additions and multiplications to compute a?
By setting the length of the observation vectors to 5 and simplifying the covariance matrix in (2-10) to reduce the computational cost, the authors still need 20 additions and 20 multiplications to compute the estimate.
Q7. How many additions and divisions do the authors need to compute?
If the mean is set to the average of the four nearest LR neighbors of the missing sample to reduce computation, then computing the mean needs three additions and one division, and computing the variance needs seven additions, four multiplications, and one division.
Q8. Why is the edge structure small in this test image?
This is because the algorithm uses a relatively large window to compute the covariance matrix for each missing sample,whereas the edge structure is small in scale in this test image, causing incorrect estimation of sample covariance.
Q9. How did the authors combine the statistics of the two observation subsets?
By combining the statistics of the two observation subsets, the authors fused the two noisy measurements into a more robust estimate via linear minimum mean square-error estimation.
Q10. How do the authors calculate the mean and variance of a sample?
To balance the conflicting requirements of sample size and sample consistency, the authors propose a Gaussian weighting in the sample window, accounting for the fact that the correlation between a pixel and its neighbors decays rapidly with the distance between them.
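A sketch of such Gaussian-weighted sample statistics is below. The window radius and scale are illustrative, and the exact weighting of (2-5) in the paper may differ:

```python
import numpy as np

def gaussian_weights(radius, scale):
    """Distance-decaying weights over a (2*radius+1)^2 window.

    Weights fall off with spatial distance from the center pixel,
    mirroring the rapid decay of inter-pixel correlation with distance;
    they are normalized to sum to 1.
    """
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w = np.exp(-(x ** 2 + y ** 2) / (2.0 * scale ** 2))
    return w / w.sum()

def weighted_stats(window, weights):
    """Weighted sample mean and variance over the window."""
    mu = float(np.sum(weights * window))
    var = float(np.sum(weights * (window - mu) ** 2))
    return mu, var
```

Nearby samples thus dominate the mean and variance estimates, which keeps the statistics consistent with the pixel being interpolated even when the window is fairly large.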
Q11. How many times did the authors set the scale of 2-D Gaussian filter?
In the experiments, the authors set the scale of the 2-D Gaussian filter [see (2-5)] to around 1 and the scale of the 1-D Gaussian filter [see (2-9)] to around 1.5.
Q12. How did the authors reduce the computational complexity of the proposed method?
To reduce the computational complexity of the proposed method, the authors simplified it to an optimal weighting problem and determined the optimal weights.