Multisensor image fusion using the wavelet transform
01 May 1995-Graphical Models and Image Processing (Academic Press)-Vol. 57, Iss: 3, pp 235-245
TL;DR: In this article, an image fusion scheme based on the wavelet transform is presented, where wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients.
Abstract: The goal of image fusion is to integrate complementary information from multisensor data such that the new images are more suitable for the purpose of human visual perception and computer-processing tasks such as segmentation, feature extraction, and object recognition. This paper presents an image fusion scheme which is based on the wavelet transform. The wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients. An area-based maximum selection rule and a consistency verification step are used for feature selection. The proposed scheme performs better than the Laplacian pyramid-based methods due to the compactness, directional selectivity, and orthogonality of the wavelet transform. A performance measure using specially generated test images is suggested and is used in the evaluation of different fusion methods, and in comparing the merits of different wavelet transform kernels. Extensive experimental results including the fusion of multifocus images, Landsat and Spot images, Landsat and Seasat SAR images, IR and visible images, and MRI and PET images are presented in the paper.
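The pipeline this abstract describes (decompose each input, combine coefficients, invert) can be sketched with a single-level 2-D Haar transform. This is a minimal sketch, not the paper's method: the paper uses an area-based maximum selection rule with a consistency verification step, whereas the sketch below selects detail coefficients by pixel-wise absolute maximum and averages the approximation band.

```python
import numpy as np

def haar2d(img):
    # single-level 2-D Haar decomposition: rows first, then columns
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row lowpass
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row highpass
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # exact inverse of haar2d: undo columns, then rows
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH
    a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH
    d[:, 1::2] = HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

def fuse(img_a, img_b):
    # pixel-wise absolute-maximum rule on details, average on approximation
    # (a simplification of the paper's area-based rule + consistency check)
    ca, cb = haar2d(img_a), haar2d(img_b)
    LL = (ca[0] + cb[0]) / 2.0
    details = [np.where(np.abs(da) >= np.abs(db), da, db)
               for da, db in zip(ca[1:], cb[1:])]
    return ihaar2d(LL, *details)
```

With identical inputs the scheme reduces to perfect reconstruction, which is a useful sanity check; a deeper decomposition and other wavelet kernels follow the same pattern.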
TL;DR: A survey of recent publications concerning medical image registration techniques is presented, according to a model based on nine salient criteria, the main dichotomy of which is extrinsic versus intrinsic methods.
Abstract: The purpose of this paper is to present a survey of recent (published in 1993 or later) publications concerning medical image registration techniques. These publications will be classified according to a model based on nine salient criteria, the main dichotomy of which is extrinsic versus intrinsic methods. The statistics of the classification show definite trends in the evolving registration techniques, which will be discussed. At present, the bulk of interesting intrinsic methods is based either on segmented points or surfaces, or on techniques that endeavour to use the full information content of the images involved.
TL;DR: Experimental results clearly indicate that this metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms.
Abstract: A measure for objectively assessing the pixel level fusion performance is defined. The proposed metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms. Experimental results clearly indicate that this metric is perceptually meaningful.
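The abstract does not reproduce the metric's definition. As a hedged illustration of the underlying idea, how well visual (edge) information from each input survives in the fused image, here is a simplified gradient-preservation score; the published metric additionally models edge orientation and perceptual weighting, so all names and formulas below are assumptions of this sketch only.

```python
import numpy as np

def grad_mag(img):
    # forward-difference gradient magnitude (crude stand-in for an edge detector)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, :-1] = np.diff(img, axis=1)
    gy[:-1, :] = np.diff(img, axis=0)
    return np.hypot(gx, gy)

def edge_preservation(src, fused, eps=1e-9):
    # per-pixel ratio of weaker to stronger gradient: 1 means fully preserved
    gs, gf = grad_mag(src), grad_mag(fused)
    return np.minimum(gs, gf) / (np.maximum(gs, gf) + eps)

def fusion_quality(img_a, img_b, fused, eps=1e-9):
    # preservation scores weighted by each input's own edge strength,
    # so pixels with strong source edges dominate the overall score
    wa, wb = grad_mag(img_a), grad_mag(img_b)
    qa = edge_preservation(img_a, fused)
    qb = edge_preservation(img_b, fused)
    return float((wa * qa + wb * qb).sum() / (wa + wb + eps).sum())
```

A perfect fusion of an image with itself scores ~1, and a fused output that destroys all edges scores 0, matching the intuition that the metric rewards transferred visual information.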
TL;DR: This tutorial performs a synthesis between the multiscale-decomposition-based image approach, the ARSIS concept, and a multisensor scheme based on wavelet decomposition, i.e. a multiresolution image fusion approach.
Abstract: The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a new image which is more suitable for human and machine perception or further image-processing tasks such as segmentation, feature extraction and object recognition. Different fusion methods have been proposed in the literature, including multiresolution analysis. This paper is an image fusion tutorial based on wavelet decomposition, i.e. a multiresolution image fusion approach. Images of the same or of different resolution levels can be fused, e.g. range sensing, visual CCD, infrared, thermal or medical images. The tutorial performs a synthesis between the multiscale-decomposition-based image approach (Proc. IEEE 87 (8) (1999) 1315), the ARSIS concept (Photogramm. Eng. Remote Sensing 66 (1) (2000) 49) and a multisensor scheme (Graphical Models Image Process. 57 (3) (1995) 235). Some image fusion examples illustrate the proposed fusion approach. A comparative analysis is carried out against classical existing strategies, including those of multiresolution.
TL;DR: In this article, the spectral properties of multispectral images with enhanced spatial resolution are defined, and a formal approach together with criteria for a quantitative assessment of the spectral quality of these products is proposed.
Abstract: Methods have been proposed to produce multispectral images with enhanced spatial resolution using one or more images of the same scene of better spatial resolution. Assuming that the main concern of the user is the quality of the transformation of the multispectral content when increasing the spatial resolution, this paper defines the properties of such enhanced multispectral images. It then proposes both a formal approach and some criteria to provide a quantitative assessment of the spectral quality of these products. Five sets of criteria are defined. They measure the performance of a method to synthesize the radiometry in a single spectral band as well as the multispectral information when increasing the spatial resolution. The influence of the type of landscape present in the scene upon the assessment of the quality is underlined, as well as its dependence on scale. The whole approach is illustrated by the case of a SPOT image and three different standard methods to enhance the spatial resolution.
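The five sets of criteria are not reproduced in the abstract. To give a flavor of per-band difference statistics of the kind such assessments rely on, here is a hypothetical sketch comparing a synthesized band against a reference band; the quantity names and the particular choice of statistics are assumptions of this sketch, not the paper's definitions.

```python
import numpy as np

def spectral_quality(ref_band, synth_band):
    # difference statistics between a synthesized band and its reference:
    # systematic offset (bias), dispersion of the error (std_diff),
    # and RMS error relative to the reference mean (rel_rmse)
    diff = synth_band.astype(float) - ref_band.astype(float)
    mean_ref = float(ref_band.mean())
    return {
        "bias": float(diff.mean()),
        "std_diff": float(diff.std()),
        "rel_rmse": float(np.sqrt((diff ** 2).mean()) / mean_ref),
    }
```

An ideal synthesis would drive all three numbers to zero for every band; landscape type and scale, as the abstract notes, influence how such numbers should be interpreted.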
TL;DR: The results show that the measure represents how much information is obtained from the input images and is meaningful and explicit.
Abstract: Mutual information is proposed as an information measure for evaluating image fusion performance. The proposed measure represents how much information is obtained from the input images. No assumption is made regarding the nature of the relation between the intensities in both input modalities. The results show that the measure is meaningful and explicit.
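A histogram-based sketch of this idea scores a fused image by the mutual information it shares with each input, estimated from joint grey-level histograms; the bin count and the additive combination of the two terms are assumptions of this sketch.

```python
import numpy as np

def mutual_info(x, y, bins=32):
    # mutual information estimated from the joint grey-level histogram;
    # makes no assumption about the relation between the two intensity sets
    jh, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = jh / jh.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def fusion_mi(img_a, img_b, fused, bins=32):
    # total information the fused image carries about both inputs
    return mutual_info(fused, img_a, bins) + mutual_info(fused, img_b, bins)
```

For an image with a uniform histogram over the bins, the self-information MI(x, x) equals the bin entropy log2(bins), which gives a quick sanity check on the estimator.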