Journal ArticleDOI

Joint Fusion and Blind Restoration For Multiple Image Scenarios With Missing Data

01 Nov 2007-The Computer Journal (Oxford University Press)-Vol. 50, Iss: 6, pp 660-673
TL;DR: The authors propose a combined spatial-domain method of fusion and restoration in order to identify these common degraded areas in the fused image and use a regularized restoration approach to enhance the content in these areas.
Abstract: Image fusion systems aim at transferring ‘interesting’ information from the input sensor images to the fused image. The common assumption for most fusion approaches is the existence of a high-quality reference image signal for all image parts in all input sensor images. In the case that there are common degraded areas in at least one of the input images, the fusion algorithms cannot improve the information provided there, but simply convey a combination of this degraded information to the output. The authors propose a combined spatial-domain method of fusion and restoration in order to identify these common degraded areas in the fused image and use a regularized restoration approach to enhance the content in these areas. The proposed approach was tested on both multi-focus and multi-modal image sets and produced interesting results.


Citations
Journal ArticleDOI
Bin Yang1, Shutao Li1
TL;DR: A sparse representation-based multifocus image fusion method that can simultaneously resolve the image restoration and fusion problem by changing the approximate criterion in the sparse representation algorithm is proposed.
Abstract: To obtain an image with every object in focus, we always need to fuse images taken from the same view point with different focal settings. Multiresolution transforms, such as pyramid decomposition and wavelet, are usually used to solve this problem. In this paper, a sparse representation-based multifocus image fusion method is proposed. In the method, first, the source image is represented with sparse coefficients using an overcomplete dictionary. Second, the coefficients are combined with the choose-max fusion rule. Finally, the fused image is reconstructed from the combined sparse coefficients and the dictionary. Furthermore, the proposed fusion scheme can simultaneously resolve the image restoration and fusion problem by changing the approximate criterion in the sparse representation algorithm. The proposed method is compared with spatial gradient (SG)-, morphological wavelet transform (MWT)-, discrete wavelet transform (DWT)-, stationary wavelet transform (SWT)-, curvelet transform (CVT)-, and nonsubsampling contourlet transform (NSCT)-based methods on several pairs of multifocus images. The experimental results demonstrate that the proposed approach performs better in both subjective and objective qualities.
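The choose-max rule in this scheme is easy to sketch in isolation. Below is a minimal illustration, not the authors' code: patch extraction, the overcomplete dictionary, and the sparse-coding step itself are all elided, and it assumes each source image has already been coded into one sparse coefficient vector per patch.

```python
import numpy as np

def choose_max_fusion(coeffs_a, coeffs_b):
    """Fuse two sets of per-patch sparse coefficient vectors by keeping,
    for each patch, the vector with the larger L1 activity (a common
    reading of the choose-max rule; the activity measure is an assumption).
    coeffs_a, coeffs_b: (n_patches, n_atoms) arrays."""
    act_a = np.abs(coeffs_a).sum(axis=1)
    act_b = np.abs(coeffs_b).sum(axis=1)
    pick_a = act_a >= act_b
    return np.where(pick_a[:, None], coeffs_a, coeffs_b)

# Toy example: patch 0 is more active (sharper) in A, patch 1 in B.
a = np.array([[0.9, 0.0, 0.1],
              [0.0, 0.1, 0.0]])
b = np.array([[0.2, 0.0, 0.0],
              [0.0, 0.8, 0.3]])
fused = choose_max_fusion(a, b)
```

Reconstruction would then multiply the fused coefficients by the dictionary atoms and reassemble the patches into the output image.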

571 citations

Journal ArticleDOI
TL;DR: A novel image fusion algorithm based on homogeneity similarity that can simultaneously resolve the image restoration and fusion problem when the source multifocus images are corrupted by Gaussian white noise, and that provides better performance than conventional methods.

59 citations

Journal ArticleDOI
TL;DR: A novel random forest (RF)-based scheme is proposed that incorporates both feature- and decision-level information and generates better-fused images than individual machine-learning approaches based on Support Vector Machines and Probabilistic Neural Networks.
Abstract: Often, captured images are not focused everywhere. Many applications of pattern recognition and computer vision require all parts of the image to be well-focused. The all-in-focus image obtained through the improved image fusion scheme is useful for downstream image-processing tasks such as image enhancement, image segmentation, and edge detection. Mostly, fusion techniques have used feature-level information extracted from the spatial or transform domain. In contrast, we have proposed a random forest (RF)-based novel scheme that incorporates both feature- and decision-level information. In the proposed scheme, useful features are extracted from both the spatial and transform domains. These features are used to train the randomly generated trees of the RF algorithm. The predicted information of the trees is aggregated to construct a more accurate decision map for fusion. Our proposed scheme yields a better-fused image than those produced by previous principal component analysis and Wavelet-transform-based approaches, which use simple feature-level information. Moreover, our approach generates better-fused images than individual machine-learning approaches based on Support Vector Machines and Probabilistic Neural Networks. The performance of the proposed scheme is evaluated using various qualitative and quantitative measures. The proposed scheme reports 98.83, 97.29, 98.97, 97.78, and 98.14 % accuracy for the standard images of Elaine, Barbara, Boat, Lena, and Cameraman, respectively. Further, the scheme yields 97.94, 98.84, 97.55, and 98.09 % accuracy for the real blurred images of Calendar, Leaf, Tree, and Lab, respectively.
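The decision-level aggregation such an ensemble uses can be sketched as per-pixel majority voting over the trees' predictions. This is a hypothetical minimal version, not the paper's actual pipeline: feature extraction and tree training are omitted, and all names are illustrative.

```python
import numpy as np

def decision_map(tree_votes):
    """Aggregate per-pixel focus votes from an ensemble.
    tree_votes: (n_trees, H, W) array of 0/1 labels
    (0 = take pixel from image A, 1 = from image B).
    Majority vote gives the fusion decision map."""
    return (tree_votes.mean(axis=0) >= 0.5).astype(int)

def fuse(img_a, img_b, dmap):
    """Assemble the fused image from the decision map."""
    return np.where(dmap == 0, img_a, img_b)

# Three "trees" vote on a 1x2 image: pixel 0 -> A, pixel 1 -> B.
votes = np.array([[[0, 1]],
                  [[0, 1]],
                  [[1, 0]]])
dmap = decision_map(votes)
out = fuse(np.array([[10.0, 10.0]]), np.array([[20.0, 20.0]]), dmap)
```

In the real scheme the votes would come from trained trees evaluated on spatial- and transform-domain features, and the decision map is typically regularised before fusion.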

31 citations


Cites background from "Joint Fusion and Blind Restoration ..."

  • ...Image fusion is a process of integrating useful information from two or more images to get an image which contains more information [1, 2]....


Journal ArticleDOI
TL;DR: A novel ICA-based infrared and visible image fusion scheme is proposed: ICA extracts features from the infrared image, primary and secondary features are distinguished by the kurtosis of the ICA base coefficients, and experimental results show that the method provides a better perceptual effect.
Abstract: The goal of infrared (IR) and visible image fusion is for the fused image to contain IR object features from the IR image while retaining the visual details provided by the visible image. The disadvantage of the traditional fusion method based on independent component analysis (ICA) is that both the primary feature information that describes the IR objects and the secondary feature information in the IR image are fused into the output; the secondary feature information can depress the visual effect of the fused image. A novel ICA-based IR and visible image fusion scheme is proposed in this paper. ICA is employed to extract features from the infrared image, and the primary and secondary features are then distinguished by the kurtosis of the ICA base coefficients. The secondary features of the IR image are discarded during fusion, and the fused image is obtained by fusing the primary features into the visible image. Experimental results show that the proposed method provides a better perceptual effect.
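The kurtosis test for separating strongly super-Gaussian "primary" components is simple to illustrate. A minimal sketch follows; the ICA decomposition itself is elided, and the threshold `thresh` is an assumed tuning parameter, not a value from the paper.

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis: > 0 for super-Gaussian (sparse, peaky)
    distributions, < 0 for sub-Gaussian ones."""
    x = x - x.mean()
    return (x ** 4).mean() / (x.var() ** 2) - 3.0

def select_primary(coeff_rows, thresh=1.0):
    """Return indices of coefficient rows whose distribution is
    strongly super-Gaussian, as a proxy for 'primary' IR features."""
    return [i for i, row in enumerate(coeff_rows) if kurtosis(row) > thresh]

# A sparse (peaky) coefficient row vs. a flat, oscillating one.
sparse = np.array([5.0, 0, 0, 0, 0, 0, 0, 0])
flat = np.array([1.0, -1, 1, -1, 1, -1, 1, -1])
primary = select_primary([sparse, flat])
```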

9 citations

Journal ArticleDOI
TL;DR: A wide base of work in multi-dimensional fusion is presented, brought together through the use of common synthetic data posing real-life problems faced in the theatre of war.
Abstract: The purpose of the Applied Multi-dimensional Fusion Project is to investigate the benefits that data fusion and related techniques may bring to future military Intelligence Surveillance Target Acquisition and Reconnaissance systems. In the course of this work, it is intended to show the practical application of some of the best multi-dimensional fusion research in the UK. This paper highlights the work done in the area of multi-spectral synthetic data generation, super-resolution, joint fusion and blind image restoration, multi-resolution target detection and identification and assessment measures for fusion. The paper also delves into the future aspirations of the work to look further at the use of hyper-spectral data and hyper-spectral fusion. The paper presents a wide work base in multi-dimensional fusion that is brought together through the use of common synthetic data, posing real-life problems faced in the theatre of war. Work done to date has produced practical pertinent research products with direct applicability to the problems posed.

8 citations

References
Journal ArticleDOI
TL;DR: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced, with the diffusion coefficient chosen to vary spatially so as to encourage intraregion smoothing rather than interregion smoothing.
Abstract: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image.
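The diffusion process described here is compact enough to sketch. Below is a minimal NumPy version of the Perona-Malik scheme with one of the standard exponential conduction functions; the parameter names and values are illustrative, not taken from the paper.

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, lam=0.2):
    """Anisotropic diffusion: the conduction g = exp(-(|grad|/kappa)^2)
    varies spatially, so smoothing happens within regions while large
    gradients (edges) block the flux. lam <= 0.25 for stability."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences with zero flux at the borders.
        dn = np.roll(u, 1, 0) - u;  dn[0] = 0
        ds = np.roll(u, -1, 0) - u; ds[-1] = 0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, 1) - u;  dw[:, 0] = 0
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# A noisy step edge: the flat halves smooth out, the step survives.
rng = np.random.default_rng(0)
img = np.zeros((8, 8)); img[:, 4:] = 1.0
noisy = img + 0.01 * rng.standard_normal(img.shape)
out = perona_malik(noisy, n_iter=20)
```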

12,560 citations


"Joint Fusion and Blind Restoration ..." refers background in this paper

  • ...The Euler–Lagrange equation is described by the following ordinary differential equation, i.e. a relation that contains functions of only one independent variable, and one or more of its derivatives with respect to that variable, the solution t of which extremises the above functional [7]....

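For reference, the standard calculus-of-variations form being alluded to (textbook material, not specific to this paper): for a functional of one independent variable, the extremising solution satisfies the Euler-Lagrange equation.

```latex
J[u] = \int_a^b F\bigl(x,\, u(x),\, u'(x)\bigr)\, dx,
\qquad
\frac{\partial F}{\partial u} - \frac{d}{dx}\,\frac{\partial F}{\partial u'} = 0 .
```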

Journal ArticleDOI
TL;DR: Although the new index is mathematically defined and no human visual system model is explicitly employed, experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error.
Abstract: We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu/~zwang/research/quality_index/demo.html.
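The index has a simple closed form combining the three factors above. A minimal global (single-window) sketch follows; the published method applies it over sliding windows and averages, which is omitted here.

```python
import numpy as np

def uqi(x, y):
    """Universal Quality Index (global form): combines loss of
    correlation, luminance distortion and contrast distortion.
    Equals 1.0 iff the two images are identical. Note: undefined
    for constant images (zero variance and/or zero mean)."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

a = np.array([1.0, 2.0, 3.0, 4.0])
q_same = uqi(a, a)        # identical images -> 1.0
q_shift = uqi(a, a + 1.0) # luminance distortion -> below 1.0
```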

5,285 citations

Journal ArticleDOI
TL;DR: It is shown that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image and the connection to the error norm and influence function in the robust estimation framework leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion.
Abstract: Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the line processes allows us to develop new anisotropic diffusion equations that result in a qualitative improvement in the continuity of edges.
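The Tukey biweight edge-stopping function discussed there has a simple closed form; a brief sketch (here `sigma` is the robust scale parameter, and the 1/2 factor follows a common normalisation):

```python
import numpy as np

def g_tukey(grad, sigma):
    """Tukey biweight edge-stopping function: the conduction falls to
    exactly zero for |grad| > sigma, so diffusion fully stops at strong
    edges, unlike the exponential Perona-Malik choices which only decay."""
    s = np.abs(grad) / sigma
    return np.where(s <= 1.0, 0.5 * (1.0 - s ** 2) ** 2, 0.0)

vals = g_tukey(np.array([0.0, 0.5, 2.0]), 1.0)
```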

1,397 citations


"Joint Fusion and Blind Restoration ..." refers background in this paper

  • ...The function r(.) is termed as the error norm and is defined according to the application, i.e. the criterion the algorithm needs to fulfil in order to remove the degradation....


Journal ArticleDOI
TL;DR: The problem of blind deconvolution for images is introduced, the basic principles and methodologies behind the existing algorithms are provided, and the current trends and the potential of this difficult signal processing problem are examined.
Abstract: The goal of image restoration is to reconstruct the original scene from a degraded observation. This recovery process is critical to many image processing applications. Although classical linear image restoration has been thoroughly studied, the more difficult problem of blind image restoration has numerous research possibilities. We introduce the problem of blind deconvolution for images, provide an overview of the basic principles and methodologies behind the existing algorithms, and examine the current trends and the potential of this difficult signal processing problem. A broad review of blind deconvolution methods for images is given to portray the experience of the authors and of the many other researchers in this area. We first introduce the blind deconvolution problem for general signal processing applications. The specific challenges encountered in image related restoration applications are explained. Analytic descriptions of the structure of the major blind deconvolution approaches for images then follows. The application areas, convergence properties, complexity, and other implementation issues are addressed for each approach. We then discuss the strengths and limitations of various approaches based on theoretical expectations and computer simulations.

1,332 citations

01 Jan 1977

933 citations