A novel framework for multi-focus image fusion
01 Dec 2013, pp. 1-4
TL;DR: A novel framework for multi-focus image fusion is proposed that is computationally simple, since it operates only in the spatial domain, and incorporates the fractal dimensions of the images into the fusion process.
Abstract: One of the foremost requisites for human perception and computer vision tasks is an image with all objects in focus. The image fusion process, as one solution, produces a clear fused image from several images of a scene acquired with different focus levels. In this paper, a novel framework for multi-focus image fusion is proposed, which is computationally simple since it operates only in the spatial domain. The proposed framework incorporates the fractal dimensions of the images into the fusion process. Extensive experiments on different multi-focus image sets demonstrate that it is consistently superior to conventional image fusion methods in terms of visual and quantitative evaluations.
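The abstract suggests a spatial-domain scheme driven by fractal dimension. As a hedged illustration only, not the authors' published algorithm, the idea can be sketched as a block-wise focus measure: estimate each block's fractal dimension (here via differential box counting, an assumed choice) and keep the block from whichever source image scores higher, since sharper, more textured regions tend to have a higher fractal dimension:

```python
import numpy as np

def fractal_dimension(img, sizes=(2, 4, 8, 16)):
    """Estimate fractal dimension via differential box counting.
    img: 2-D uint8 grayscale array, assumed square for simplicity."""
    M = min(img.shape)
    img = img[:M, :M].astype(float)
    counts, scales = [], []
    for s in sizes:
        h = s * 255.0 / M          # box height in gray levels at grid size s
        n = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                blk = img[i:i+s, j:j+s]
                n += int(np.ceil((blk.max() + 1) / h) - np.floor(blk.min() / h))
        counts.append(n)
        scales.append(s / M)
    # FD is the slope of log N_r versus log(1/r)
    slope = np.polyfit(np.log(1.0 / np.array(scales)), np.log(counts), 1)[0]
    return slope

def fuse_blocks(a, b, block=16):
    """Pick, per block, the source with the larger fractal dimension,
    used here as a crude focus measure (illustrative rule, not the paper's)."""
    out = a.copy()
    H, W = a.shape
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            fa = fractal_dimension(a[i:i+block, j:j+block], sizes=(2, 4, 8))
            fb = fractal_dimension(b[i:i+block, j:j+block], sizes=(2, 4, 8))
            if fb > fa:
                out[i:i+block, j:j+block] = b[i:i+block, j:j+block]
    return out
```

The block size and box-size ladder are hypothetical parameters; a real system would also need a consistency check so neighboring blocks do not flip sources arbitrarily.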
References
TL;DR: Textures are classified based on the change in their properties with changing resolution; the relation of a texture picture to its negative, and directional properties, are also discussed.
Abstract: Textures are classified based on the change in their properties with changing resolution. The area of the gray level surface is measured at several resolutions. This area decreases at coarser resolutions, since fine details that contribute to the area disappear. Fractal properties of the picture are computed from the rate of this decrease in area, and are used for texture comparison and classification. The relation of a texture picture to its negative, and directional properties, are also discussed.
833 citations
"A novel framework for multi-focus i..." refers background in this paper
...Fractal geometry is a mathematical model to represent many complex objects found in nature, such as coastlines, mountains, clouds, etc [10]....
[...]
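The area-versus-resolution idea in this reference can be sketched with a covering-blanket surface-area estimate: grow an upper and a lower blanket around the gray-level surface, take the area at each thickness from the volume increment, and fit the decay rate. The 4-neighbourhood, the thickness range, and the least-squares fit below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def neighborhood_max(x):
    p = np.pad(x, 1, mode='edge')
    return np.maximum.reduce([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])

def neighborhood_min(x):
    p = np.pad(x, 1, mode='edge')
    return np.minimum.reduce([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])

def blanket_areas(g, max_eps=4):
    """Surface area of a gray-level image at increasing blanket thickness eps.
    The area decreases with eps as fine surface detail is swallowed."""
    u = g.astype(float)
    b = g.astype(float)
    vols = []
    for _ in range(max_eps):
        u = np.maximum(u + 1, neighborhood_max(u))   # dilate upper blanket
        b = np.minimum(b - 1, neighborhood_min(b))   # erode lower blanket
        vols.append(np.sum(u - b))
    areas = [vols[0] / 2.0] + [(vols[e] - vols[e-1]) / 2.0 for e in range(1, max_eps)]
    return np.array(areas)

def fractal_dimension_blanket(g, max_eps=4):
    eps = np.arange(1, max_eps + 1)
    A = blanket_areas(g, max_eps)
    # A(eps) ~ F * eps^(2 - D), so the slope of log A vs log eps is 2 - D
    slope = np.polyfit(np.log(eps), np.log(A), 1)[0]
    return 2.0 - slope
```

For a perfectly flat surface the area is constant across thicknesses and the estimate is 2; rougher textures give values between 2 and 3.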
TL;DR: The aim is to reframe the multiresolution-based fusion methodology into a common formalism and to develop a new region-based approach which combines aspects of both object and pixel-level fusion.
Abstract: This paper presents an overview on image fusion techniques using multiresolution decompositions. The aim is twofold: (i) to reframe the multiresolution-based fusion methodology into a common formalism and, within this framework, (ii) to develop a new region-based approach which combines aspects of both object and pixel-level fusion. To this end, we first present a general framework which encompasses most of the existing multiresolution-based fusion schemes and provides freedom to create new ones. Then, we extend this framework to allow a region-based fusion approach. The basic idea is to make a multiresolution segmentation based on all different input images and to use this segmentation to guide the fusion process. Performance assessment is also addressed and future directions and open problems are discussed as well.
832 citations
"A novel framework for multi-focus i..." refers methods in this paper
...The following six methods are used for comparative analysis, which include: 1) PCA-based [3], 2) pyramid-based [4], [5], [6] and 3) multi-resolution-based methods [7], [8]....
[...]
...These approaches involve the methods of PCA [3], Ratio pyramid [4], Contrast pyramid [5], Gradient pyramid [6], simple wavelet transform [7], morphological wavelet transform [8], etc....
[...]
...Data Set Metric PCA [3] RP [4] CP [5] GP [6] WT [7] MWT [8] Proposed...
[...]
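A minimal instance of the generic multiresolution fusion framework this reference describes, pyramid decomposition, a selection rule on detail bands, combination of the coarsest approximations, then reconstruction, can be sketched as follows. The 2×2 average / nearest-neighbour pyramid and the max-absolute rule are simplifying assumptions; real schemes use smoother filters and, in the region-based variant, a segmentation-driven rule:

```python
import numpy as np

def down(x):
    """2x2 block-average downsample (crude low-pass, assumed for brevity)."""
    return x.reshape(x.shape[0]//2, 2, x.shape[1]//2, 2).mean(axis=(1, 3))

def up(x):
    """Nearest-neighbour upsample back to the finer grid."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        nxt = down(cur)
        pyr.append(cur - up(nxt))   # detail (high-pass) band at this level
        cur = nxt
    pyr.append(cur)                 # coarsest approximation
    return pyr

def fuse(a, b, levels=3):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # max-absolute selection on detail bands, averaging on the approximation
    fused = [np.where(np.abs(da) >= np.abs(db), da, db)
             for da, db in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for d in reversed(fused[:-1]):
        out = up(out) + d           # reconstruct fine-to-coarse in reverse
    return out
```

With this pyramid pair the decomposition is exactly invertible, so fusing an image with itself returns the image; image dimensions must be divisible by 2^levels.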
TL;DR: A hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid) is introduced to preserve those details from the input images that are most relevant to visual perception.
Abstract: This paper introduces a hierarchical image merging scheme based on a multiresolution contrast decomposition (the ratio of low-pass pyramid). The composite images produced by this scheme preserve those details from the input images that are most relevant to visual perception. Some applications of the method are indicated.
611 citations
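The ratio-of-low-pass (contrast) pyramid can be sketched as follows: each level stores the ratio of a level to its expanded low-pass version, i.e. local luminance contrast, and fusion keeps, per pixel, the ratio farthest from 1, the perceptually strongest contrast. The 2×2 pyramid filters and the epsilon guard are assumptions for illustration, not the paper's filters:

```python
import numpy as np

def down(x):
    return x.reshape(x.shape[0]//2, 2, x.shape[1]//2, 2).mean(axis=(1, 3))

def up(x):
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def rolp_pyramid(img, levels=2, eps=1e-6):
    """Ratio-of-low-pass pyramid: each level holds local luminance contrast."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        nxt = down(cur)
        pyr.append(cur / (up(nxt) + eps))   # ratio = local contrast
        cur = nxt
    pyr.append(cur)                         # coarsest low-pass level
    return pyr

def fuse_rolp(a, b, levels=2, eps=1e-6):
    pa, pb = rolp_pyramid(a, levels, eps), rolp_pyramid(b, levels, eps)
    # keep, per pixel, the ratio farther from 1 (the stronger contrast)
    fused = [np.where(np.abs(ra - 1) >= np.abs(rb - 1), ra, rb)
             for ra, rb in zip(pa[:-1], pb[:-1])]
    out = 0.5 * (pa[-1] + pb[-1])
    for r in reversed(fused):
        out = r * (up(out) + eps)           # invert the ratio decomposition
    return out
```

Inputs are assumed strictly positive (luminance), so the ratios are well defined; fusing an image with itself reconstructs it up to the epsilon guard.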
TL;DR: A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion.
Abstract: A novel approach to multiresolution signal-level image fusion is presented for accurately transferring visual information from any number of input image signals, into a single fused image without loss of information or the introduction of distortion. The proposed system uses a "fuse-then-decompose" technique realized through a novel, fusion/decomposition system architecture. In particular, information fusion is performed on a multiresolution gradient map representation domain of image signal information. At each resolution, input images are represented as gradient maps and combined to produce new, fused gradient maps. Fused gradient map signals are processed, using gradient filters derived from high-pass quadrature mirror filters to yield a fused multiresolution pyramid representation. The fused output image is obtained by applying, on the fused pyramid, a reconstruction process that is analogous to that of conventional discrete wavelet transform. This new gradient fusion significantly reduces the amount of distortion artefacts and the loss of contrast information usually observed in fused images obtained from conventional multiresolution fusion schemes. This is because fusion in the gradient map domain significantly improves the reliability of the feature selection and information fusion processes. Fusion performance is evaluated through informal visual inspection and subjective psychometric preference tests, as well as objective fusion performance measurements. Results clearly demonstrate the superiority of this new approach when compared to conventional fusion systems.
536 citations
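The full fuse-then-decompose architecture with QMF-derived gradient filters is beyond a short sketch, but the core intuition, combining inputs where their gradient maps are strongest, can be illustrated with a simplified per-pixel selection. This is a stand-in under stated assumptions (forward differences, a 3×3 box-smoothed decision map), not the paper's pyramid reconstruction pipeline:

```python
import numpy as np

def grad(img):
    """Forward-difference gradient maps (zero-padded at the far edges)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, :-1] = np.diff(img.astype(float), axis=1)
    gy[:-1, :] = np.diff(img.astype(float), axis=0)
    return gx, gy

def box3(x):
    """3x3 box filter via nine shifted windows of an edge-padded copy."""
    p = np.pad(x, 1, mode='edge')
    H, W = x.shape
    return sum(p[i:i+H, j:j+W] for i in range(3) for j in range(3)) / 9.0

def fuse_by_gradient(a, b):
    """Select, per pixel, the source whose local gradient magnitude is larger,
    smoothing the decision map a little to suppress isolated flips."""
    ax, ay = grad(a)
    bx, by = grad(b)
    ma = np.hypot(ax, ay)
    mb = np.hypot(bx, by)
    w = box3((ma >= mb).astype(float))
    return np.where(w >= 0.5, a, b)
```

Unlike the reference's gradient-domain fusion and pyramid reconstruction, this selection map cannot blend gradients from both sources at a pixel; it only conveys the "strongest gradient wins" feature-selection idea.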