Journal ArticleDOI

Objective image fusion performance measure

17 Feb 2000 - Electronics Letters (IET) - Vol. 36, Iss. 4, pp. 308-309
TL;DR: Experimental results clearly indicate that this metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms.
Abstract: A measure for objectively assessing the pixel level fusion performance is defined. The proposed metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms. Experimental results clearly indicate that this metric is perceptually meaningful.
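As the citing excerpts below note, the measure is built on edge-information transfer from the inputs to the fused image. The following is a minimal sketch of that gradient-based idea, assuming Sobel gradients and simple ratio/angle preservation terms in place of the paper's sigmoidal mappings and tuned constants; all function names are illustrative.

```python
# Sketch of a gradient-based fusion metric in the spirit of Q^{AB/F}.
# Inputs: grayscale float arrays of equal shape (two sources A, B and
# the fused result F). Constants and mappings are simplified.
import numpy as np
from scipy.ndimage import sobel

def edge_maps(img):
    gx = sobel(img, axis=1)  # horizontal Sobel response
    gy = sobel(img, axis=0)  # vertical Sobel response
    return np.hypot(gx, gy), np.arctan2(gy, gx)  # strength, orientation

def gradient_fusion_metric(A, B, F, eps=1e-12):
    gA, aA = edge_maps(A)
    gB, aB = edge_maps(B)
    gF, aF = edge_maps(F)

    def preservation(gX, aX):
        # Strength preservation: ratio of the weaker to the stronger edge.
        Qg = np.minimum(gX, gF) / (np.maximum(gX, gF) + eps)
        # Orientation preservation: 1 for aligned edges, 0 for opposite.
        d = np.abs(aX - aF)
        d = np.minimum(d, 2 * np.pi - d)
        Qa = 1.0 - d / np.pi
        return Qg * Qa

    QAF, QBF = preservation(gA, aA), preservation(gB, aB)
    # Weight each pixel by input edge strength, so strong edges dominate.
    return float(np.sum(QAF * gA + QBF * gB) / (np.sum(gA + gB) + eps))
```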
Citations
Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.
Abstract: A fast and effective image fusion method is proposed for creating a highly informative fused image through merging multiple images. The proposed method is based on a two-scale decomposition of an image into a base layer containing large scale variations in intensity, and a detail layer capturing small scale details. A novel guided filtering-based weighted average technique is proposed to make full use of spatial consistency for fusion of the base and detail layers. Experimental results demonstrate that the proposed method can obtain state-of-the-art performance for fusion of multispectral, multifocus, multimodal, and multiexposure images.

1,300 citations


Additional excerpts

  • ...First, the base and detail layers of the different source images are fused together by weighted averaging:

    $\bar{B} = \sum_{n=1}^{N} W_n^B B_n \qquad (17)$

    $\bar{D} = \sum_{n=1}^{N} W_n^D D_n \qquad (18)$

    Then, the fused image $F$ is obtained by combining the fused base layer $\bar{B}$ and the fused detail layer $\bar{D}$:

    $F = \bar{B} + \bar{D} \qquad (19)$...

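A minimal sketch of this two-scale pipeline around Eqs. (17)-(19), assuming a box filter for the base-layer decomposition and a smoothed-Laplacian saliency weight as a stand-in for the guided-filtering-based weight maps of the actual method; all names are illustrative.

```python
# Two-scale fusion of N grayscale float images by weighted averaging.
import numpy as np
from scipy.ndimage import uniform_filter, laplace, gaussian_filter

def two_scale_fuse(images, radius=15):
    # Two-scale decomposition: base = local mean, detail = residual.
    bases = [uniform_filter(I, size=2 * radius + 1) for I in images]
    details = [I - Bn for I, Bn in zip(images, bases)]
    # Stand-in saliency weights: smoothed absolute Laplacian, normalized
    # per pixel so the weights across all inputs sum to one.
    sal = [gaussian_filter(np.abs(laplace(I)), sigma=5) + 1e-12 for I in images]
    total = np.sum(sal, axis=0)
    W = [s / total for s in sal]
    B = sum(Wn * Bn for Wn, Bn in zip(W, bases))    # Eq. (17)
    D = sum(Wn * Dn for Wn, Dn in zip(W, details))  # Eq. (18)
    return B + D                                    # Eq. (19)
```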

Journal ArticleDOI
TL;DR: The results show that the measure represents how much information is obtained from the input images and is meaningful and explicit.
Abstract: Mutual information is proposed as an information measure for evaluating image fusion performance. The proposed measure represents how much information is obtained from the input images. No assumption is made regarding the nature of the relation between the intensities in both input modalities. The results show that the measure is meaningful and explicit.
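A minimal sketch of the measure, MI(A, F) + MI(B, F) estimated from 2-D joint histograms; the bin count is a free choice, not taken from the paper.

```python
# Mutual-information fusion measure from joint intensity histograms.
import numpy as np

def mutual_information(X, Y, bins=64):
    # Joint histogram of co-located pixel intensities, normalized to a pmf.
    joint, _, _ = np.histogram2d(X.ravel(), Y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of X
    py = pxy.sum(axis=0, keepdims=True)  # marginal of Y
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mi_fusion_measure(A, B, F, bins=64):
    # Total information the fused image F carries about both inputs.
    return mutual_information(A, F, bins) + mutual_information(B, F, bins)
```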

1,059 citations

Journal ArticleDOI
TL;DR: A general image fusion framework by combining MST and SR to simultaneously overcome the inherent defects of both the MST- and SR-based fusion methods is presented and experimental results demonstrate that the proposed fusion framework can obtain state-of-the-art performance.
Abstract: Highlights:
  • Includes discussion on multi-scale transform (MST) based image fusion methods.
  • Includes discussion on sparse representation (SR) based image fusion methods.
  • Presents a general image fusion framework combining MST and SR.
  • Introduces several promising image fusion methods under the proposed framework.
  • Provides a new image fusion toolbox.
In the image fusion literature, multi-scale transform (MST) and sparse representation (SR) are the two most widely used signal/image representation theories. This paper presents a general image fusion framework that combines MST and SR to simultaneously overcome the inherent defects of both MST- and SR-based fusion methods. In our framework, the MST is first performed on each of the pre-registered source images to obtain their low-pass and high-pass coefficients. The low-pass bands are then merged with an SR-based fusion approach, while the high-pass bands are fused using the absolute values of the coefficients as the activity level measurement. The fused image is finally obtained by applying the inverse MST to the merged coefficients. The advantages of the proposed framework over individual MST- or SR-based methods are first laid out from a theoretical point of view and then verified experimentally with multi-focus, visible-infrared, and medical image fusion. In particular, six popular multi-scale transforms, namely the Laplacian pyramid (LP), ratio-of-low-pass pyramid (RP), discrete wavelet transform (DWT), dual-tree complex wavelet transform (DTCWT), curvelet transform (CVT), and nonsubsampled contourlet transform (NSCT), with decomposition levels ranging from one to four, are tested in our experiments. By comparing the fused results subjectively and objectively, we give the best-performing fusion method under the proposed framework for each category of image fusion. The effect of the sliding window's step length is also investigated. Furthermore, experimental results demonstrate that the proposed framework achieves state-of-the-art performance, especially for the fusion of multimodal images.
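A minimal sketch of the MST side of this framework using a two-level DWT (PyWavelets), with the low-pass SR fusion step simplified to plain averaging; only the high-pass max-absolute rule follows the paper.

```python
# MST-based fusion skeleton: decompose, merge bands, inverse transform.
import numpy as np
import pywt

def mst_fuse(A, B, wavelet="db2", levels=2):
    # Multi-scale decomposition of both pre-registered sources.
    cA = pywt.wavedec2(A, wavelet, level=levels)
    cB = pywt.wavedec2(B, wavelet, level=levels)
    # Low-pass band: the paper fuses this with sparse representation;
    # plain averaging is used here to keep the sketch short.
    fused = [(cA[0] + cB[0]) / 2.0]
    # High-pass bands: max-absolute-coefficient rule, as in the paper.
    pick = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    for (hA, vA, dA), (hB, vB, dB) in zip(cA[1:], cB[1:]):
        fused.append((pick(hA, hB), pick(vA, vB), pick(dA, dB)))
    return pywt.waverec2(fused, wavelet)
```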

952 citations


Cites methods from "Objective image fusion performance ..."

  • ...The gradient-based fusion metric $Q^G$ proposed by Xydeas and Petrović [23]....


Journal ArticleDOI
TL;DR: It is concluded that although various image fusion methods have been proposed, several future directions remain open in different image fusion applications, and research in the image fusion field is still expected to grow significantly in the coming years.
Abstract: Highlights:
  • Provides a survey of various pixel-level image fusion methods according to the adopted transform strategy.
  • Summarizes the existing fusion performance evaluation methods and the unresolved problems.
  • Analyzes the major challenges met in different image fusion applications.
Pixel-level image fusion is designed to combine multiple input images into a fused image that is expected to be more informative, for human or machine perception, than any of the input images. Owing to this advantage, pixel-level image fusion has shown notable achievements in remote sensing, medical imaging, and night vision applications. In this paper, we first provide a comprehensive survey of state-of-the-art pixel-level image fusion methods. Then, the existing fusion quality measures are summarized. Next, four major applications, i.e., remote sensing, medical diagnosis, surveillance, and photography, are analyzed, together with the challenges they raise for pixel-level image fusion. Finally, this review concludes that although various image fusion methods have been proposed, several future directions remain open in different image fusion applications. Research in the image fusion field is therefore expected to keep growing significantly in the coming years.

871 citations


Cites background from "Objective image fusion performance ..."

  • ...For instance, Xydeas and Petrović propose a gradient-based fusion metric which estimates the amount of edge information that is transferred from the inputs to the fusion result [104]....


Journal ArticleDOI
TL;DR: A new multi-focus image fusion method is primarily proposed, aiming to learn a direct mapping between source images and focus map, using a deep convolutional neural network trained by high-quality image patches and their blurred versions to encode the mapping.
Abstract: Highlights:
  • Introduces convolutional neural networks (CNNs) into the field of image fusion.
  • Discusses the feasibility and superiority of CNNs for image fusion.
  • Proposes a state-of-the-art CNN-based multi-focus image fusion method.
  • Exhibits the potential of CNNs for other types of image fusion.
  • Puts forward suggestions for future study of CNN-based image fusion.
As is well known, activity level measurement and the fusion rule are two crucial factors in image fusion. In most existing fusion methods, whether in the spatial domain or in a transform domain such as the wavelet domain, activity level measurement is essentially implemented by designing local filters to extract high-frequency details; the calculated clarity information of the different source images is then compared using elaborately designed rules to obtain a clarity/focus map. Consequently, the focus map contains the integrated clarity information, which is of great significance to various image fusion problems, such as multi-focus and multi-modal image fusion. However, designing these two components well enough to achieve satisfactory fusion performance is usually difficult. In this study, we address this problem with a deep learning approach, aiming to learn a direct mapping between the source images and the focus map. To this end, a deep convolutional neural network (CNN) trained on high-quality image patches and their blurred versions is adopted to encode the mapping. The main novelty of this idea is that the activity level measurement and the fusion rule can be generated jointly by learning a CNN model, which overcomes the difficulty faced by existing fusion methods. Based on this idea, a new multi-focus image fusion method is proposed in this paper. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion performance in terms of both visual quality and objective assessment. The computational speed of the proposed method using parallel computing is fast enough for practical usage. The potential of the learned CNN model for other types of image fusion is also briefly exhibited in the experiments.
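A minimal sketch of the patch-scoring idea, assuming PyTorch: a small CNN, trained on clean patches versus their Gaussian-blurred versions, judges which of two co-located patches is in focus. The architecture and sizes below are illustrative, not the paper's exact network.

```python
# Tiny CNN that scores which of two co-located patches is in focus.
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),   # two patches as channels
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global pooling
        )
        self.head = nn.Linear(64, 1)  # >0: first patch sharper, <0: second

    def forward(self, a, b):
        x = torch.cat([a, b], dim=1)  # (N, 2, H, W)
        return self.head(self.features(x).flatten(1))
```

Sliding this score over the whole image yields the focus map; training pairs are simply (sharp patch, blurred copy) with known labels.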

826 citations

References
Journal ArticleDOI
TL;DR: In this article, an image fusion scheme based on the wavelet transform is presented, where the wavelet transforms of the input images are appropriately combined and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients.
Abstract: The goal of image fusion is to integrate complementary information from multisensor data such that the new images are more suitable for the purpose of human visual perception and computer-processing tasks such as segmentation, feature extraction, and object recognition. This paper presents an image fusion scheme which is based on the wavelet transform. The wavelet transforms of the input images are appropriately combined, and the new image is obtained by taking the inverse wavelet transform of the fused wavelet coefficients. An area-based maximum selection rule and a consistency verification step are used for feature selection. The proposed scheme performs better than the Laplacian pyramid-based methods due to the compactness, directional selectivity, and orthogonality of the wavelet transform. A performance measure using specially generated test images is suggested and is used in the evaluation of different fusion methods, and in comparing the merits of different wavelet transform kernels. Extensive experimental results including the fusion of multifocus images, Landsat and Spot images, Landsat and Seasat SAR images, IR and visible images, and MRI and PET images are presented in the paper.
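A minimal sketch of the area-based maximum selection rule with consistency verification, applied to one sub-band of coefficients; the window size and the majority (median) filter are illustrative choices.

```python
# Area-based max selection with consistency verification on a sub-band.
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def fuse_subband(cA, cB, win=3):
    # Area-based activity: local energy of coefficients in a win x win window.
    eA = uniform_filter(cA ** 2, size=win)
    eB = uniform_filter(cB ** 2, size=win)
    decision = eA >= eB  # binary map: take the coefficient from image A?
    # Consistency verification: a majority (median) filter flips isolated
    # decisions so neighbouring coefficients come from the same source.
    decision = median_filter(decision.astype(np.uint8), size=win).astype(bool)
    return np.where(decision, cA, cB)
```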

1,532 citations

Book
01 Apr 1993
TL;DR: This book examines the current status of what is known (and not known) about human vision, how human observers interpret visual data, and how to present such data to facilitate their interpretation and use.
Abstract: From the Publisher: This book examines the current status of what is known (and not known) about human vision, how human observers interpret visual data, and how to present such data to facilitate their interpretation and use. Written by experts who are able to cross disciplinary boundaries, the book provides an educational pathway through several models of human vision; describes how the visual response is analyzed and quantified; presents current theories of how the human visual response is interpreted; discusses the cognitive responses of human observers; and examines such applications as space exploration, manufacturing, surveillance, earth and air sciences, and medicine. The book is intended for everyone with an undergraduate-level background in science or engineering with an interest in visual science.

128 citations


"Objective image fusion performance ..." refers methods in this paper

  • ...Notice that this visual-to-edge information association is supported by human visual system (HVS) studies [4] and is extensively used in image analysis and compression systems....


Proceedings ArticleDOI
12 Mar 1999
TL;DR: Preliminary subjective image fusion results clearly demonstrate the advantage that the proposed cross-band selection technique offers over conventional area-based pixel selection.
Abstract: The work described in this paper focuses on cross-band pixel selection as applied to pixel-level multi-resolution image fusion. In addition, multi-resolution analysis and synthesis are realized via QMF sub-band decomposition techniques. Cross-band pixel selection is thus considered with the aim of reducing the contrast and structural-distortion artifacts produced by existing wavelet-based, pixel-level image fusion schemes. Preliminary subjective image fusion results clearly demonstrate the advantage that the proposed cross-band selection technique offers over conventional area-based pixel selection.
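A minimal sketch of cross-band selection at one decomposition level: the activities of the high-pass sub-bands are pooled so that all coefficients at a location come from the same source image, unlike independent per-band selection. The QMF decomposition itself is not shown, and the pooling rule is an assumption for illustration.

```python
# Joint (cross-band) coefficient selection at one decomposition level.
import numpy as np

def cross_band_select(bandsA, bandsB):
    """bandsA, bandsB: lists of same-shape high-pass sub-bands (e.g. H, V, D)."""
    # Pool activity across all sub-bands at this level...
    actA = sum(np.abs(b) for b in bandsA)
    actB = sum(np.abs(b) for b in bandsB)
    # ...so one joint decision per location picks the source image for
    # every sub-band, instead of selecting each band independently.
    take_A = actA >= actB
    return [np.where(take_A, a, b) for a, b in zip(bandsA, bandsB)]
```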

39 citations