# 12 – Enhancement of multiple sensor images using joint image fusion and blind restoration

TL;DR: The authors propose a combined spatial-domain method of fusion and restoration that identifies areas degraded in all input images ("common degraded areas") in the fused image and uses a regularised restoration approach to enhance the content in those areas.

Abstract: Image fusion systems aim at transferring "interesting" information from the input sensor images to the fused image. The common assumption for most fusion approaches is the existence of a high-quality reference image signal for all image parts in all input sensor images. In the case that an area is degraded in all of the input images, fusion algorithms cannot improve the information provided there, but simply convey a combination of this degraded information to the output. In this study, the authors propose a combined spatial-domain method of fusion and restoration in order to identify these common degraded areas in the fused image and use a regularised restoration approach to enhance the content in these areas. The proposed approach was tested on both multi-focus and multi-modal image sets and produced interesting results.

## Summary

### 1 Introduction

- Data fusion is defined as the process of combining data from sensors and related information from several databases, so that both the performance of the system and the accuracy of the results are improved.
- Image fusion can be similarly viewed as the process of combining information in the form of images, obtained from various sources in order to construct an artificial image that contains all “useful” information that exists in the input images.
- Ideally, the images acquired by these sensors should be similar.
- One can perform error minimization between the fused and input images, using various proposed error norms in the spatial domain in order to perform fusion.

### 2 Robust Error Estimation Theory

- Let the image y(r) be a recovered version from a degraded observed image x(r), where r = (i, j) are pixel coordinates (i, j).
- To estimate the recovered image y(r), one can minimise an error functional E(y) that expresses the difference between the original image and the estimated one, in terms of y.
- The function ρ(·) is termed the error norm and is defined according to the application, i.e. the type of degradation or the desired task.
- The Euler-Lagrange equation is described by the following ordinary differential equation, i.e. a relation that contains functions of only one independent variable and one or more of its derivatives with respect to that variable, the solution of which extremises the above functional [21].
- In practice, only a finite number of iterations are performed to achieve visually satisfactory results [6].
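The iterative minimisation described above can be sketched in a few lines. This is a minimal illustration, not the paper's scheme: the least-squares norm, the step size and the starting point are assumptions chosen for the demo.

```python
import numpy as np

def minimise_functional(x, rho_prime, dt=0.1, n_iter=50):
    """Gradient descent on an error functional E(y) = sum rho(y - x).

    x         : observed (degraded) image, 2D array
    rho_prime : derivative of the error norm rho (its influence function)
    Each step moves y against the gradient of E; as noted above, only a
    finite number of iterations is run in practice.
    """
    y = np.zeros_like(x)
    for _ in range(n_iter):
        y = y - dt * rho_prime(y - x)
    return y

# Least-squares norm rho(e) = e^2 / 2 gives rho'(e) = e, so the
# iteration converges geometrically toward the observation x.
x = np.array([[0.0, 1.0], [2.0, 3.0]])
y = minimise_functional(x, rho_prime=lambda e: e)
```

With the quadratic norm the fixed point is the observation itself; a non-quadratic norm would trade fidelity against robustness, which is the subject of the next subsections.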

### 2.1 Isotropic diffusion

- As mentioned previously, one candidate error norm ρ(·) is the least-squares error norm.
- The above error norm smooths Gaussian noise and depends only on the image gradient ∇y(r), but not explicitly on the image y(r) itself.
- The solution specifies that the time evolution in (12) is a convolution process performing Gaussian smoothing.
- These are two disadvantages that need to be seriously considered when using isotropic diffusion.
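The Gaussian-smoothing behaviour is easy to reproduce numerically. A minimal sketch, with grid size, time step and periodic boundary handling as arbitrary choices:

```python
import numpy as np

def isotropic_step(y, dt=0.2):
    """One explicit step of the heat equation y_t = laplacian(y)
    (periodic boundaries). Iterating is equivalent to convolving with
    a Gaussian of growing width: noise is smoothed, but edges are
    blurred as well, which is the drawback noted above."""
    lap = (np.roll(y, 1, 0) + np.roll(y, -1, 0) +
           np.roll(y, 1, 1) + np.roll(y, -1, 1) - 4.0 * y)
    return y + dt * lap

rng = np.random.default_rng(0)
noisy = rng.normal(size=(32, 32))
smoothed = noisy
for _ in range(20):
    smoothed = isotropic_step(smoothed)
```

After a few steps the variance of the noise field drops while the mean is preserved exactly, the signature of pure low-pass (Gaussian) smoothing.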

### 2.2 Isotropic diffusion with edge enhancement

- Image fusion aims at transferring salient features to the fused image.
- The anisotropic gain function has significantly higher values around edges or where sharp features are dominant compared to blurred or smooth regions.
- The above equation essentially smoothes noise while enhancing edges.
- In the opposite case, where Jx(r) < Jy(r), the enhanced image already represents the edges better than the degraded observation at those pixels r, and therefore no processing is necessary.
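The summary does not reproduce the anisotropic gain function itself. As an illustrative stand-in with the behaviour described (smoothing suppressed across strong edges, full smoothing in flat regions), here is a Perona-Malik-style diffusion step; the conduction function g and the constant k are assumptions, not the paper's formulation.

```python
import numpy as np

def edge_preserving_step(y, dt=0.1, k=0.5):
    """Perona-Malik-style diffusion: the conduction gain g falls off
    with gradient magnitude, so diffusion is suppressed across edges
    while flat regions are still smoothed."""
    dn = np.roll(y, -1, 0) - y
    ds = np.roll(y,  1, 0) - y
    de = np.roll(y, -1, 1) - y
    dw = np.roll(y,  1, 1) - y
    g = lambda d: np.exp(-(d / k) ** 2)   # ~1 in flat areas, ~0 at edges
    return y + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

# A step edge survives many iterations almost intact.
edge = np.zeros((16, 16))
edge[:, 8:] = 1.0
out = edge
for _ in range(30):
    out = edge_preserving_step(out)
```

Unlike isotropic diffusion, the step contrast across the edge remains close to its original value even after many iterations.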

### 3 Fusion with Error Estimation theory

- The authors propose a novel spatial-domain fusion algorithm, based on the basic formulation of John and Vorontsov.
- Assuming there are T input frames xn(r) to be fused, one can perform selective image fusion by iterating the update rule (18) for the estimation of y(r), using each input image xn consecutively for K iterations.
- In a succession of intervals of K iterations, the synthetic frame finally integrates high-quality edge areas from the entire set of input frames.
- The proposed approach by John and Vorontsov can be applied mainly in the case of a video stream, where the quality of the observed image is enhanced, based on previous and forthcoming frames.
- This framework is not efficient in the case of fusion applications, where the input frames are simultaneously available for processing and fusion.
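The sequential scheme above can be sketched as follows. The actual update rule (18) is not reproduced in the summary, so the stand-in below simply pulls the synthetic frame toward whichever input is locally sharper (larger gradient energy), for K iterations per frame; the energy measure, K and the step size are all illustrative assumptions.

```python
import numpy as np

def grad_energy(img):
    """Squared gradient magnitude (forward differences, periodic)."""
    gx = np.roll(img, -1, 1) - img
    gy = np.roll(img, -1, 0) - img
    return gx ** 2 + gy ** 2

def sequential_fusion(frames, K=20, dt=0.3):
    """Cycle through the input frames; for each one, run K update
    iterations that pull the synthetic frame y toward the input
    wherever the input has more edge energy than y currently does."""
    y = frames[0].copy()
    for x in frames[1:]:
        for _ in range(K):
            mask = grad_energy(x) > grad_energy(y)
            y = y + dt * mask * (x - y)
    return y

# Two complementary out-of-focus views of a striped scene.
stripes = np.tile([0.0, 1.0], (16, 8))   # 16x16, sharp everywhere
a = stripes.copy(); a[:, 8:] = 0.5       # right half defocused (flat)
b = stripes.copy(); b[:, :8] = 0.5       # left half defocused (flat)
fused = sequential_fusion([a, b])
```

In a succession of K-iteration intervals the synthetic frame picks up the sharp half of each input, which is the behaviour the summary attributes to the original video-stream scheme.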

### 3.1 A novel fusion formulation based on error estimation theory

- Assume there are T images xn(r) that capture the same observed scene.
- For example, in one photograph the background may be captured correctly while the foreground object appears blurred; in a second attempt to photograph the object correctly, the foreground object appears properly and the background appears blurred.
- This cannot be accomplished directly by the scheme proposed by John and Vorontsov.
- In addition, all the fusion weights are estimated simultaneously using this scheme.
- Therefore, after a couple of iterations the majority of the useful information is extracted from the input images and transferred to the composite image.
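In contrast to the sequential scheme, the fusion weights here are estimated simultaneously for all inputs. A minimal sketch of that idea, in which each pixel's weight for frame n is its share of the total local gradient energy; the window size and the energy measure are assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_energy(img, win=3):
    """Gradient energy box-averaged over a win x win window (periodic)."""
    gx = np.roll(img, -1, 1) - img
    gy = np.roll(img, -1, 0) - img
    e = gx ** 2 + gy ** 2
    acc = np.zeros_like(e)
    r = win // 2
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            acc += np.roll(np.roll(e, di, 0), dj, 1)
    return acc / win ** 2

def simultaneous_fusion(frames, eps=1e-8):
    """All fusion weights at once: per pixel, weight each frame by its
    relative local energy, then mix the frames."""
    E = np.stack([local_energy(x) for x in frames])
    w = (E + eps) / (E + eps).sum(axis=0)   # weights sum to 1 per pixel
    return (w * np.stack(frames)).sum(axis=0)

stripes = np.tile([0.0, 1.0], (16, 8))
a = stripes.copy(); a[:, 8:] = 0.5       # right half defocused
b = stripes.copy(); b[:, :8] = 0.5       # left half defocused
fused = simultaneous_fusion([a, b])
```

Because every weight is available at once, most of the useful information is transferred to the composite image in a single pass, matching the behaviour described above.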

### 3.2 Fusion experiments of out-of-focus and multimodal image sets using error estimation theory

- The authors perform several fusion experiments of both out-of-focus and multimodal images to evaluate the performance of the proposed approach.
- In the first experiment, the system is tested with an out-of-focus example, the “Disk” dataset.
- In Table 1, the performance of the proposed method is compared with the ICA-based method, in terms of the Petrovic and Piella fusion performance metrics.
- It aims at highlighting the edges of the input images to the fused image, due to the edge enhancement term in the cost function.
- Apart from the proposed fusion metrics [15,18], one should also consult the human operators of modern fusion systems in order to evaluate the performance of these algorithms effectively.
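For reference, the Piella metric builds on the Wang-Bovik universal image quality index Q0, which the paper also uses in its comparisons. A global (single-window) version of Q0 is sketched below; in practice it is usually averaged over sliding local windows.

```python
import numpy as np

def q0(x, y):
    """Wang-Bovik universal image quality index:
    Q0 = 4*cov(x,y)*mx*my / ((vx + vy) * (mx^2 + my^2)),
    combining loss of correlation, luminance distortion and contrast
    distortion. Q0 reaches 1 only when y is identical to x."""
    x = x.ravel().astype(float)
    y = y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4.0 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(1)
ref = rng.random((16, 16)) + 1.0                       # reference image
noisy = ref + rng.normal(scale=0.5, size=ref.shape)    # degraded copy
```

A perfect copy scores exactly 1, while any degradation (here, additive noise) pushes the index below 1.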

### 4 Joint Image Fusion and Restoration

- The basic Image Fusion concept assumes that there is some useful information for all parts of the observed scene at least in one of the input sensors.
- This assumption might not always be true.
- This means that there might be parts of the observed scene where there is only degraded information available.
- Current fusion algorithms will fuse all high-quality information from the input sensors, but for the common degraded areas they will simply form a blurry mixture of the input images, as no high-quality information is available there.
- Once this part is identified, an image restoration approach can be applied as a second step in order to enhance these parts for the final composite “fused” image.

### 4.1 Identifying common degraded areas in the sensor images

- The first task will be to identify the areas of degraded information in the input sensor images.
- The algorithm for extracting common degraded areas proceeds in the following steps: (1) extract an edge map of the fused image f, using the Laplacian kernel, i.e. ∇2f(r, t); (2) compute the local variance VL(r, t) of this edge map; (3) reduce the dynamic range by calculating ln(VL(r, t)).
- The aim is to avoid high-quality edge/texture and constant background information.
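The detection steps can be sketched as follows, taking VL(r, t) to be the local variance of the Laplacian edge map and the final selection to be a band threshold on ln(VL), so that strong texture (high values) and flat background (very low values) are both rejected; the window size and thresholds below are illustrative assumptions.

```python
import numpy as np

def box_mean(a, win=5):
    """Mean over a win x win neighbourhood (periodic boundaries)."""
    acc = np.zeros_like(a)
    r = win // 2
    for di in range(-r, r + 1):
        for dj in range(-r, r + 1):
            acc += np.roll(np.roll(a, di, 0), dj, 1)
    return acc / win ** 2

def degraded_area_mask(f, win=5, lo=-10.0, hi=-1.0):
    """(1) Laplacian edge map; (2) local variance VL of the map;
    (3) dynamic-range reduction ln(VL); then keep only the middle
    band: values above `hi` are high-quality edges/texture, values
    below `lo` are constant background."""
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
           np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)
    VL = box_mean(lap ** 2, win) - box_mean(lap, win) ** 2
    L = np.log(VL + 1e-12)
    return (L > lo) & (L < hi)

# Sharp stripes | faint (degraded) stripes | flat background.
f = np.zeros((16, 48))
f[:, 1:16:2] = 1.0      # strong texture
f[:, 17:32:2] = 0.05    # weak texture: the degraded area
mask = degraded_area_mask(f)
```

Only the faint-texture band is flagged; the strong stripes and the flat background fall outside the threshold band, as intended.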

### 4.2 Image restoration

- A number of different approaches for tackling the image restoration problem have been proposed in the literature, based on various principles.
- For an overview of image restoration methods, one can refer to Kundur and Hatzinakos [8] and Andrews and Hunt [1].
- The authors pursue the double-weighted regularised image restoration approach in the spatial domain, initially proposed by You and Kaveh [19], with additional robust functionals to improve performance in the presence of outliers.
- The restoration problem is described by the model y(r) = h(r) ∗ f(r) + d(r) (25), where ∗ denotes 2D convolution, h(r) is the degradation kernel, f(r) the image to be estimated and d(r) possible additive noise.
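The degradation model (25) can be simulated directly. A sketch using circular convolution via the FFT; the periodic boundary handling is a convenience assumption.

```python
import numpy as np

def degrade(f, h, noise_sigma=0.0, rng=None):
    """y(r) = h(r) * f(r) + d(r): blur the true image f with the
    kernel h (circular 2D convolution) and add Gaussian noise d."""
    H = np.fft.fft2(h, s=f.shape)                 # zero-pad h to f's shape
    y = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        y = y + rng.normal(scale=noise_sigma, size=f.shape)
    return y

rng = np.random.default_rng(2)
f = rng.random((16, 16))
box = np.full((3, 3), 1.0 / 9.0)     # normalised 3x3 box blur
y = degrade(f, box)                  # blurred observation
delta = np.zeros((3, 3)); delta[0, 0] = 1.0
y_id = degrade(f, delta)             # identity kernel: y_id == f
```

A normalised kernel preserves the image mean, and the identity kernel reproduces the input exactly, two quick sanity checks on the model.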

### 4.2.1 Double weighted regularised image restoration

- The second term, called the regularising term, imposes a smoothness constraint on the recovered image, and the third term imposes a similar constraint on the estimated blur.
- Additional constraints must be imposed, including the non- negativity and finite-support constraint for both the blurring kernel and the image.
- A Maximum A Posteriori (MAP) estimate of f(r) is given by performing max_f log p(y, f | r) = max_f log [p(y | f, r) p(f | r)], where r denotes the observed samples.
- To estimate f(r) and h(r), the above cost function needs to be minimised.

### 4.2.2 Robust functionals to the restoration cost function

- There exist several criticisms regarding the conventional double regularisation restoration approach.
- These parameters determine the “shape” of the influence function and as a consequence the filtering of outliers.
- To trade off noise elimination against preservation of high-frequency detail, the influence functional for the image regularising term must be approximately quadratic at small to moderate values and deviate from the quadratic structure at high values, so that sharp changes are not heavily penalised.
- The PSF support is initially set to a large enough value.
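The shape just described, quadratic at small-to-moderate residuals and sub-quadratic at large ones, is exactly that of the Huber norm, shown here as an illustration; the paper's actual functional and its threshold are not reproduced in this summary.

```python
import numpy as np

def huber_rho(e, k=1.0):
    """Huber error norm: quadratic for |e| <= k (filters Gaussian
    noise), linear beyond k so sharp changes and outliers are not
    heavily penalised."""
    a = np.abs(e)
    return np.where(a <= k, 0.5 * e ** 2, k * (a - 0.5 * k))

def huber_influence(e, k=1.0):
    """Derivative of the norm (the influence function): saturates at
    +/- k, limiting the pull of any single outlier on the solution."""
    return np.clip(e, -k, k)

e = np.array([-5.0, -0.5, 0.0, 0.5, 5.0])
rho = huber_rho(e)
psi = huber_influence(e)
```

The parameter k plays the role described above: it determines where the influence function stops growing and therefore how aggressively outliers are filtered.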

### 4.3 Combining image fusion and restoration

- The authors propose an algorithm that combines the previous methodologies: it performs fusion of all image parts that contain valid information in at least one of the input images, and restoration of those parts that are found to be degraded in all input images.
- All useful information from the input images has been transferred to the fused image and the next step is to identify and restore the areas where only low quality information is available.
- (2) The second step is to estimate the common degraded area, using the previous methodology based on the Laplacian edge map of the fused image y(r).
- More specifically, this step aims at identifying possible corrupted areas in all input images that need enhancement in order to highlight more image details that were not previously available.
- In a similar manner the update for the Point Spread Function (PSF) needs to be influenced only by the common degraded area, i.e. in (33) f(r) is always substituted by A(r)f(r).
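The masked update can be sketched as follows; the restoration step itself is reduced to a generic gradient term here, since the full cost-function gradient of equations (32)-(33) is not reproduced in the summary.

```python
import numpy as np

def masked_update(f, grad, A, dt=0.1):
    """Apply a restoration update only inside the common degraded
    area: A(r) is 1 where restoration is needed and 0 where the
    fused image already carries high-quality information, which
    must not be altered."""
    return f - dt * A * grad

# Toy example: a gradient step changes only the masked left half.
f = np.zeros((8, 8))
A = np.zeros((8, 8)); A[:, :4] = 1.0
grad = -np.ones((8, 8))        # stand-in for the cost-function gradient
f_new = masked_update(f, grad, A)
```

The same masking idea is what the substitution of f(r) by A(r)f(r) achieves for the PSF update in (33): quantities outside the degraded area never influence the estimate.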

### 4.4 Examples of joint image fusion and restoration

- Three synthetic examples are constructed to test the performance of the joint fusion and restoration approach.
- In the case that the smaller kernel captures more than 85% of the total kernel variance, its size becomes the new estimated kernel size in the next step of the adaptation.
- In Figure 7 (e), (f), a focus on the common degraded area in the fused and the fused/restored image can verify the above conclusions.
- The two artificially created blurred input images are depicted in Figures 9(a), (b): in Figure 9(a), Gaussian blur is applied to the upper left part of the image, while a different region is blurred in Figure 9(b).
- As expected, the fusion algorithm manages to transfer all high quality information to the fusion image except for the area in the centre of the image that still remains blurred.
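The 85% support-adaptation rule mentioned above can be sketched as follows, interpreting "total kernel variance" as the kernel's energy concentrated in the central sub-window; that interpretation, and the shrink-by-two step, are assumptions since the summary does not define the measure.

```python
import numpy as np

def maybe_shrink_support(h, frac=0.85):
    """If the central (size - 2) sub-kernel captures more than `frac`
    of the kernel's total energy, adopt the smaller support for the
    next step of the PSF adaptation; otherwise keep the current one."""
    if min(h.shape) <= 2:
        return h
    core = h[1:-1, 1:-1]
    if (core ** 2).sum() / (h ** 2).sum() > frac:
        return core
    return h

# A PSF concentrated at the centre shrinks; a spread-out one does not.
tight = np.zeros((5, 5)); tight[2, 2] = 1.0
spread = np.full((5, 5), 1.0 / 25.0)
```

Starting from a deliberately large support and shrinking it this way matches the strategy described: the support is initially set to a large enough value and adapted downward as the estimate concentrates.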

### 5 Conclusions

- The problem of image fusion, i.e. the problem of incorporating useful information from various modality input sensors into a composite image that enhances the visual comprehension and surveillance of the observed scene, was addressed in this study.
- More specifically, a spatial-domain method was proposed to perform fusion of both multi-focus and multi-modal input image sets.
- By definition, fusion systems aim only at transferring the "interesting" information from the input sensor images to the fused image, assuming there is a proper reference image signal for all parts of the image in at least one of the input sensor images.
- The authors proposed a mechanism for identifying these common degraded areas in the fused image and used a regularised restoration approach to enhance the content in these areas.
- In addition, there are several other applications such as increasing the resolution and quality of pictures taken by commercial digital cameras.
