Image fusion using multivariate and multidimensional EMD
Summary
1. INTRODUCTION
- The authors propose a hybrid (multi-scale and pixel-level), data-driven scheme for image fusion based on the multivariate extension of the empirical mode decomposition (MEMD) algorithm [6].
- The authors also compare their results with the standard bi-dimensional EMD (BDEMD) [7] based fusion approach.
- The EMD-based fusion methods are employed since they are fully data adaptive, enable fusion of intrinsic scales at a local level, and allow fusion of matched spatial-frequency content between input images.
- Standard multiscale methods (based on the Fourier and wavelet transforms) employ static filter banks and predefined basis functions, which hinders the fusion of matched spatial-frequency content between input images.
- In both cases, the fusion results obtained from the proposed scheme outperform those obtained by BDEMD, both qualitatively and quantitatively.
2. EMD AND ITS MULTIVARIATE AND MULTIDIMENSIONAL EXTENSIONS
- The recursive sifting algorithm operates by defining the upper and lower envelopes of an input signal by interpolating its extrema.
- The local mean m(k) is then estimated by averaging these envelopes, which is subsequently subtracted from the input signal x(k) to obtain the fast oscillating signal d(k) = x(k) − m(k).
- The sifting process stops when d(k) has an inadequate number of extrema.
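The sifting step described above can be sketched in a few lines of NumPy/SciPy. This is a simplified illustration, not the authors' implementation: it performs a single sifting iteration, using a plain extrema detector and cubic-spline envelopes.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    """One sifting iteration: interpolate the maxima and minima of x
    into upper/lower envelopes, then subtract their average (the local
    mean m(k)) to obtain the fast-oscillating detail d(k) = x(k) - m(k)."""
    k = np.arange(len(x))
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return None  # inadequate extrema: sifting stops
    upper = CubicSpline(maxima, x[maxima])(k)  # upper envelope
    lower = CubicSpline(minima, x[minima])(k)  # lower envelope
    m = (upper + lower) / 2.0                  # local mean m(k)
    return x - m                               # detail d(k)

# Example: a fast 20 Hz tone riding on a slow 2 Hz tone.
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 2 * t)
d = sift_once(x)  # d approximates the fast component
```

In the full algorithm this step is iterated until d(k) satisfies the IMF conditions, then repeated on the residual to extract subsequent IMFs.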
2.2. Bi-dimensional EMD (BDEMD)
- Bi-dimensional EMD [7] is a generic extension of EMD for images.
- Various algorithms for computing BDEMD decomposition exist which mainly differ in the way the extrema are interpolated to obtain upper and lower envelopes.
- Radial basis functions (tensor product) or B-splines are commonly used methods for interpolation [7] , whereas the method by Linderhed [8] uses thin-plate splines for the interpolation of the extrema.
2.3. Multivariate EMD (MEMD)
- Multivariate EMD (MEMD) algorithm extends the functionality of EMD to signals containing multiple channels [6] .
- The rationale behind the MEMD is to separate inherent rotations (rather than oscillations) within a signal.
- This is achieved by estimating the local mean of a multivariate signal in multidimensional spaces where the signal resides.
- For multivariate signals, however, the concept of extrema cannot be defined in clear terms and therefore envelopes cannot be obtained as a trivial extension of univariate case.
- Note that MEMD produced diagonally dominant correlograms of IMFs compared to BDEMD, showing that the same-index IMFs generated by MEMD are highly correlated, a major requirement in most fusion applications.
4. MEMD- AND BDEMD-BASED IMAGE FUSION
- The block diagram of the proposed multivariate EMD based fusion algorithm is shown in Fig. 2 .
- The BDEMD based fusion algorithm operates similarly to the MEMD algorithm illustrated above.
- Note that, owing to the empirical nature of the EMD algorithm, a different number of IMFs is typically obtained for each input image, resulting in mismatched IMFs and thus hindering the fusion process.
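Once the IMFs of the input images are mode-aligned, fusion can proceed scale by scale. The sketch below uses a max-absolute-value pixel rule, a common choice in multiscale fusion; it is an illustrative assumption, not necessarily the exact rule used in the paper.

```python
import numpy as np

def fuse_aligned_imfs(imfs_a, imfs_b):
    """Fuse two images from their mode-aligned IMFs: at each pixel and
    each scale keep the coefficient with the larger magnitude, then sum
    the fused scales to recombine them into the output image."""
    fused = []
    for ca, cb in zip(imfs_a, imfs_b):
        fused.append(np.where(np.abs(ca) >= np.abs(cb), ca, cb))
    return np.sum(fused, axis=0)  # recombine scales into fused image
```

The rule relies on the mode-alignment property: same-index IMFs from MEMD carry matched spatial-frequency content, so comparing them pixel-wise is meaningful.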
5. CASE STUDY 1: MULTI-FOCUS IMAGE FUSION
- Fig. 3 shows a subset of input multi-focus images and the fused images obtained from the two methods; only three out of seven input images are shown due to the space restrictions.
- Note from Figs. 3(a-c) that each input image has specific objects in focus: Image 1, for instance, focuses on the nearest objects such as the coin, whereas Image 2 and Image 3 focus on the middle and the farthest objects, respectively. Pie-charts summarize the percentage of cases in which each method performed best.
- It can be observed that for the E and SF measures, MEMD yielded superior results for nearly all input data sets, whereas for QABF, MEMD produced better results for 83% of the input data sets.
6. CASE STUDY 2: PAN-SHARPENING
- The authors next performed experiments for Pan-sharpening of multispectral (MS) images using the existing BDEMD-based fusion and the proposed MEMD-based fusion algorithms.
- The ground truth for both data sets was also available (not shown in Fig. 5 due to space restrictions) and was used for the quantitative analysis of the fusion results.
- The details added by the fusion process were extracted by subtracting the original intensity plane I from Î, which were then separately added to the B, G, R, and NIR components of the MS image to obtain the Pan-sharpened MS image.
- The authors employed the following set of performance metrics for this purpose: i) Relative dimensionless global error in synthesis (ER, ideally 0), ii) Spectral Angle Mapper (S, ideally 0), and iii) Quaternion Index (Q4, ideally 100%) [2] .
- Observe that in both data sets the proposed scheme performed better than the AWT and BDEMD fusion methods for all performance metrics, with the exception of the S value for the Toulouse image, where AWT performed better.
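The detail-injection step described above (subtracting the original intensity plane I from the fused intensity Î and adding the result to each spectral band) reduces to a one-line broadcast in NumPy. A minimal sketch, assuming the MS image is stored as an (H, W, B) array:

```python
import numpy as np

def inject_details(ms, i_orig, i_fused):
    """Pan-sharpening detail injection: add the fusion-derived details
    (i_fused - i_orig) to every band of the multispectral image.
    ms:      (H, W, B) array of B spectral bands (e.g. B, G, R, NIR)
    i_orig:  (H, W) original intensity plane I
    i_fused: (H, W) fused intensity plane Î"""
    details = i_fused - i_orig
    return ms + details[..., np.newaxis]  # broadcast over the band axis
```

The same details are added to every band, so the spectral ratios between bands are only mildly perturbed while the spatial detail of the Pan image is transferred.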
7. CONCLUSIONS
- The authors have presented a method for the fusion of multiple images using multivariate empirical mode decomposition (MEMD) algorithm.
- The superiority of the method has been demonstrated on a large data set for two applications: i) multi-focus fusion, and ii) pan-sharpening of multi-spectral images.
- In addition to the qualitative analysis, the authors have also employed a wide range of quantitative performance measures to compare the fusion results obtained from the two approaches.
Frequently Asked Questions (16)
Q2. What are the contributions mentioned in the paper "Image fusion using multivariate and multidimensional emd" ?
The authors present a novel methodology for the fusion of multiple (two or more) images using the multivariate extension of empirical mode decomposition (MEMD). The authors show that the multivariate and multidimensional extensions of EMD are suitable for image fusion purposes. The authors further demonstrate that while multidimensional extensions, by design, may seem more appropriate for tasks related to image processing, the proposed multivariate extension outperforms these in image fusion applications owing to its mode-alignment property for IMFs.
Q3. What is the definition of a sifting algorithm?
Empirical mode decomposition (EMD) [5] is a data-driven method which decomposes an arbitrary signal x(k) into a set of multiple oscillatory components called the intrinsic mode functions (IMFs) via an iterative process known as sifting algorithm [6].
Q4. What is the purpose of the recursive sifting algorithm?
The recursive sifting algorithm operates by defining the upper and lower envelopes of an input signal by interpolating its extrema.
Q5. What quantitative measures were used for the fusion results?
For quantitative evaluation of the fusion results, the authors have employed Entropy (E) [11], objective image fusion (QABF ) [10] and the spatial frequency (SF ) [9] performance measures.
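Of these measures, the spatial frequency (SF) has a particularly simple definition: the root of the summed squared row- and column-wise first differences of the image. A minimal sketch of the standard formula (not taken from the paper's code):

```python
import numpy as np

def spatial_frequency(img):
    """Spatial frequency of a grayscale image: sqrt(RF^2 + CF^2), where
    RF and CF are the RMS of horizontal and vertical first differences.
    Higher values indicate more spatial detail (sharper fusion)."""
    img = np.asarray(img, dtype=float)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

A constant image yields SF = 0; a sharply focused image yields a larger value than its defocused counterpart, which is why SF is useful for multi-focus fusion.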
Q6. How many images were used for this purpose?
For this purpose, multiple images of 30 different scenes were used; seven images were taken of each scene with different parts of the scene out-of-focus in each image.
Q7. What were the performance metrics used for the fusion of the MS images?
The authors employed the following set of performance metrics for this purpose: i) Relative dimensionless global error in synthesis (ER, ideally 0), ii) Spectral Angle Mapper (S, ideally 0), and iii) Quaternion Index (Q4, ideally 100%) [2].
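The Spectral Angle Mapper (S) measures, per pixel, the angle between the spectral vectors of the reference and fused MS images; 0 means the spectra are perfectly preserved. A minimal sketch of the standard definition, assuming (H, W, B) arrays (this is the generic SAM formula, not code from the paper):

```python
import numpy as np

def spectral_angle_mapper(ref, fused):
    """Mean spectral angle (radians) between per-pixel spectra of a
    reference and a fused MS image, both shaped (H, W, B). Ideally 0."""
    r = ref.reshape(-1, ref.shape[-1])
    f = fused.reshape(-1, fused.shape[-1])
    cos = np.sum(r * f, axis=1) / (
        np.linalg.norm(r, axis=1) * np.linalg.norm(f, axis=1))
    # clip guards against rounding pushing |cos| slightly above 1
    return np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because SAM compares only the direction of each spectral vector, it is insensitive to overall brightness changes and isolates spectral distortion introduced by the fusion.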
Q8. What measures were used for the fusion of images?
It can be observed that for the E and SF measures, MEMD yielded superior results for nearly all input data sets, whereas for QABF, MEMD produced better results for 83% of the input data sets.
Q9. What is the reason why MEMD is superior to BDEMD?
The superiority of MEMD over BDEMD can be attributed to the data-adaptive and local nature of its decomposition, which manifested in improved spatial performance in both case studies.
Q10. What was the purpose of the experiments?
The authors next performed experiments for Pan-sharpening of multispectral (MS) images using the existing BDEMD-based fusion and the proposed MEMD-based fusion algorithms.
Q11. What is the definition of a multivariate EMD?
To address this issue, MEMD operates by projecting the input multivariate signal along V uniformly spaced directions on a unit p-sphere; the extrema of the projected signals are then interpolated to obtain multiple envelopes, which are subsequently averaged to obtain the local mean.
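The projection-based local-mean estimate can be sketched as follows. This is a simplified illustration of the idea, not the reference MEMD implementation: directions are drawn at random here, whereas MEMD proper uses low-discrepancy (e.g. Hammersley) sequences for uniform sphere coverage.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def memd_local_mean(x, v=16, seed=0):
    """Estimate the local mean of a multivariate signal x (shape N x p)
    by projecting it along v directions on the unit (p-1)-sphere,
    interpolating all channels at each projection's maxima, and
    averaging the resulting multivariate envelopes."""
    n, p = x.shape
    k = np.arange(n)
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((v, p))           # random directions
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    envelopes = []
    for d in dirs:
        proj = x @ d                             # scalar projection
        maxima = argrelextrema(proj, np.greater)[0]
        if len(maxima) < 2:
            continue                             # too few extrema
        # interpolate ALL channels at the projection's extrema locations
        envelopes.append(CubicSpline(maxima, x[maxima, :])(k))
    return np.mean(envelopes, axis=0)            # local mean, shape N x p
```

Opposite directions on the sphere play the role that upper and lower envelopes play in the univariate case, which is why only the maxima of each projection are needed.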
Q12. What is the performance of the proposed methods?
Observe that in both data sets the proposed scheme performed better than the AWT and BDEMD fusion methods for all performance metrics, with the exception of the S value for the Toulouse image, where AWT performed better.
Q13. What is the definition of a recursive sifting algorithm?
The IMFs represent the intrinsic temporal modes (scales) present in the input data which, when added together, reproduce the input x(k), as shown in eq. (1) below: x(k) = Σ_{m=1}^{M} c_m(k) + r(k). (1) The residual r(k) does not contain any oscillations and represents a trend within the signal.
Q14. What is the superiority of the method?
The superiority of the method has been demonstrated on a large data set for two applications: i) multi-focus fusion, and ii) pan-sharpening of multi-spectral images.
Q15. What is the difference between MEMD and BDEMD?
This mode alignment property of MEMD is a result of direct processing of input images within MEMD, whereas the lack of it in BDEMD is due to the fact that it processes multiple input images separately.
Q16. What was the performance of the fusion images?
The details added by the fusion process were extracted by subtracting the original intensity plane I from Î; these were then separately added to the B, G, R, and NIR components of the MS image to obtain the Pan-sharpened MS image.