Journal Article•DOI•

Multifocus image fusion and denoising: A variational approach

01 Jul 2012-Pattern Recognition Letters (North-Holland)-Vol. 33, Iss: 10, pp 1388-1396
TL;DR: The preliminary experimental analysis shows that robust anisotropic denoising can be attained in parallel with efficient image fusion, thus bringing two paramount image processing tasks into complete synergy.
About: This article is published in Pattern Recognition Letters. The article was published on 2012-07-01. It has received 28 citations till now. The article focuses on the topics: Image fusion & Image restoration.
Citations
Journal Article•DOI•
TL;DR: This work proposes a novel image fusion scheme that combines two or more images with different focus points to generate an all-in-focus image, and is consistently superior to existing state-of-the-art fusion methods in visual and quantitative evaluations.

144 citations


Cites background from "Multifocus image fusion and denoisi..."

  • ...Image fusion technique provides a promising way to solve this problem by combining two or multiple images of the same scene that are taken with diverse focuses into a single image in which all the objects within the image are in focus (Ludusan and Lavialle, 2012)....


Journal Article•DOI•
TL;DR: A comprehensive overview of existing multi-focus image fusion methods is presented, and a new taxonomy is introduced that classifies existing methods into four main categories: transform-domain methods, spatial-domain methods, methods combining the transform and spatial domains, and deep learning methods.

143 citations

Journal Article•DOI•
TL;DR: Land-surface temperature (LST) is of great significance for estimating the radiation and energy budgets associated with land-surface processes; however, the available satellite LST products have limitations.
Abstract: Land-surface temperature (LST) is of great significance for the estimation of radiation and energy budgets associated with land-surface processes. However, the available satellite LST products have...

54 citations

Journal Article•DOI•
TL;DR: A novel fractional differential and variational model is introduced that incorporates terms for fusion and super-resolution, edge enhancement, and noise suppression; the numerical results indicate that the proposed method is feasible and effective.

53 citations


Cites methods from "Multifocus image fusion and denoisi..."

  • ...In [20], a variational approach for image fusion and denoising was proposed by Kumar and Dass....


Journal Article•DOI•
TL;DR: In this paper, a saliency-motivated pulse-coupled neural network (PCNN) model is proposed in which high-pass subband coefficients, combined with their visual saliency maps, serve as the input that motivates the PCNN.
Abstract: In the nonsubsampled contourlet transform (NSCT) domain, a novel image fusion algorithm based on the visual attention model and pulse coupled neural networks (PCNNs) is proposed. For the fusion of high-pass subbands in NSCT domain, a saliency-motivated PCNN model is proposed. The main idea is that high-pass subband coefficients are combined with their visual saliency maps as input to motivate PCNN. Coefficients with large firing times are employed as the fused high-pass subband coefficients. Low-pass subband coefficients are merged to develop a weighted fusion rule based on firing times of PCNN. The fused image contains abundant detailed contents from source images and preserves effectively the saliency structure while enhancing the image contrast. The algorithm can preserve the completeness and the sharpness of object regions. The fused image is more natural and can satisfy the requirement of human visual system (HVS). Experiments demonstrate that the proposed algorithm yields better performance.

52 citations


Additional excerpts

  • ...Ludusan and Lavialle [17] propose a variational approach based on error estimation theory and partial differential equations for concurrent image fusion and denoising of multifocus images....


References
Journal Article•DOI•
TL;DR: Although the new index is mathematically defined and no human visual system model is explicitly employed, experiments on various image distortion types indicate that it performs significantly better than the widely used mean-squared-error distortion metric.
Abstract: We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu/~zwang/research/quality_index/demo.html.

5,285 citations
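The index described in this reference factors image distortion into three multiplicative terms (correlation loss, luminance distortion, contrast distortion), which collapse into a single closed-form expression. Below is a minimal NumPy sketch of the global (single-window) form of that expression; note that the published metric averages this quantity over small sliding windows, which this sketch omits for brevity, and the function name is chosen here for illustration:

```python
import numpy as np

def universal_quality_index(x, y):
    """Global (single-window) form of the universal image quality
    index: Q = 4*cov(x,y)*mean(x)*mean(y) /
               ((var(x)+var(y)) * (mean(x)^2 + mean(y)^2)).
    The published metric averages this over sliding windows;
    here it is applied to the whole image for brevity."""
    x = x.astype(np.float64).ravel()
    y = y.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```

For a non-constant image compared with itself the index equals 1, its maximum; any luminance shift, contrast change, or decorrelation pulls it below 1.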

Journal Article•DOI•
TL;DR: An image information measure is proposed that quantifies the information present in the reference image and how much of this reference information can be extracted from the distorted image; combined, these two quantities form a visual information fidelity measure for image QA.
Abstract: Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.

3,146 citations

Journal Article•DOI•
TL;DR: In this article, a new version of the Perona and Malik theory for edge detection and image restoration is proposed, which keeps all the improvements of the original model and avoids its drawbacks.
Abstract: A new version of the Perona and Malik theory for edge detection and image restoration is proposed. This new version keeps all the improvements of the original model and avoids its drawbacks: it is proved to be stable in the presence of noise, with existence and uniqueness results. Numerical experiments on natural images are presented.

2,565 citations
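The diffusion model this reference refines evolves the image by I_t = div(g(|∇I|) ∇I), with an edge-stopping function such as g(s) = exp(−(s/κ)²) that suppresses smoothing across strong gradients. A minimal sketch of the classical explicit scheme follows; the regularized version in this reference evaluates g on a Gaussian-smoothed gradient (omitted here), the parameter values are illustrative, and `np.roll` imposes periodic boundaries as a simplification:

```python
import numpy as np

def perona_malik(img, iterations=10, kappa=20.0, dt=0.2):
    """Explicit scheme for Perona-Malik anisotropic diffusion:
    I_t = div(g(|grad I|) grad I), g(s) = exp(-(s/kappa)^2).
    dt <= 0.25 keeps the 4-neighbor explicit scheme stable.
    Periodic boundaries via np.roll; real implementations
    typically use reflecting boundaries instead."""
    g = lambda d: np.exp(-(d / kappa) ** 2)
    u = img.astype(np.float64).copy()
    for _ in range(iterations):
        # finite differences toward the four neighbors
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # flux-weighted update: strong edges (large |d|) diffuse less
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

A constant image is a fixed point of the update, while small-amplitude noise (gradients well below κ) is smoothed almost as in linear diffusion.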

Journal Article•DOI•
TL;DR: Experimental results clearly indicate that this metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms.
Abstract: A measure for objectively assessing the pixel level fusion performance is defined. The proposed metric reflects the quality of visual information obtained from the fusion of input images and can be used to compare the performance of different image fusion algorithms. Experimental results clearly indicate that this metric is perceptually meaningful.

1,446 citations