Book Chapter DOI

12 – Enhancement of multiple sensor images using joint image fusion and blind restoration

01 Jan 2007 - pp. 299-326
TL;DR: The authors propose a combined spatial-domain method of fusion and restoration in order to identify these common degraded areas in the fused image and use a regularised restoration approach to enhance the content in these areas.
Abstract: Image fusion systems aim at transferring "interesting" information from the input sensor images to the fused image. The common assumption for most fusion approaches is the existence of a high-quality reference image signal for all image parts in all input sensor images. In the case that there are common degraded areas in at least one of the input images, the fusion algorithms can not improve the information provided there, but simply convey a combination of this degraded information to the output. In this study, the authors propose a combined spatial-domain method of fusion and restoration in order to identify these common degraded areas in the fused image and use a regularised restoration approach to enhance the content in these areas. The proposed approach was tested on both multi-focus and multi-modal image sets and produced interesting results.

Summary (4 min read)

1 Introduction

  • Data fusion is defined as the process of combining data from sensors and related information from several databases, so that the performance of the system can be improved, while the accuracy of the results can also be increased.
  • Image fusion can be similarly viewed as the process of combining information in the form of images, obtained from various sources in order to construct an artificial image that contains all “useful” information that exists in the input images.
  • Ideally, the images acquired by these sensors should be similar.
  • One can perform error minimization between the fused and input images, using various proposed error norms in the spatial domain in order to perform fusion.

2 Robust Error Estimation Theory

  • Let the image y(r) be a recovered version from a degraded observed image x(r), where r = (i, j) are pixel coordinates (i, j).
  • To estimate the recovered image y(r), one can minimise an error functional E(y) that expresses the difference between the original image and the estimated one, in terms of y.
  • The function ρ(·) is termed the error norm and is defined according to the application, i.e. the type of degradation or the desired task.
  • The Euler-Lagrange equation is an ordinary differential equation, i.e. a relation that contains functions of only one independent variable and one or more of its derivatives with respect to that variable, whose solution f(t) extremises the above functional [21].
  • In practice, only a finite number of iterations are performed to achieve visually satisfactory results [6].

2.1 Isotropic diffusion

  • As mentioned previously, one candidate error norm ρ(·) is the least-squares error norm.
  • The above error norm smooths Gaussian noise and depends only on the image gradient ∇y(r), but not explicitly on the image y(r) itself.
  • The solution specifies that the time evolution in (12) is a convolution process performing Gaussian smoothing.
  • These are two disadvantages that need to be seriously considered when using isotropic diffusion.

2.2 Isotropic diffusion with edge enhancement

  • Image fusion aims at transferring salient features to the fused image.
  • The anisotropic gain function has significantly higher values around edges or where sharp features are dominant compared to blurred or smooth regions.
  • The above equation essentially smoothes noise while enhancing edges.
  • In the opposite case, where Jx(r) < Jy(r), the enhanced image already has a better edge representation than the original degraded image at those locations r, and therefore no processing is necessary (a minimal sketch follows this list).
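
A minimal sketch of the selection logic described above, assuming the edge-strength measure J(r) is a windowed gradient energy; the chapter's actual update is a diffusion-type rule with an anisotropic gain, whereas this sketch simply pulls the estimate toward the observation wherever the observation still carries stronger edges.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def edge_strength(img, win=5):
    """Local edge-strength map J(r): windowed mean of squared gradient magnitude.
    (Illustrative choice; the chapter's exact definition of J(r) may differ.)"""
    gy, gx = np.gradient(img.astype(float))
    return uniform_filter(gx**2 + gy**2, size=win)

def edge_enhancing_step(y, x, eta=0.5, win=5):
    """Where the observation x has stronger local edges than the estimate y
    (Jx >= Jy), pull y toward x; where Jx < Jy, leave y untouched."""
    Jx, Jy = edge_strength(x, win), edge_strength(y, win)
    gain = (Jx > Jy).astype(float)              # process only where x is better
    return y + eta * gain * (x.astype(float) - y)
```
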

3 Fusion with Error Estimation theory

  • The authors propose a novel spatial-domain fusion algorithm, based on the basic formulation of Jones and Vorontsov.
  • Assuming that the authors have T input frames xn(r) to be fused, one can perform selective image fusion by iterating the update rule (18) for the estimation of y(r), using each of the input images xn consecutively for K iterations (see the sketch after this list).
  • In a succession of intervals of K iterations, the synthetic frame finally integrates high-quality edge areas from the entire set of input frames.
  • The approach proposed by Jones and Vorontsov can be applied mainly in the case of a video stream, where the quality of the observed image is enhanced based on previous and forthcoming frames.
  • This framework is not efficient in the case of fusion applications, where the input frames are simultaneously available for processing and fusion.
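
A minimal sketch of the sequential scheme summarised above, reusing the hypothetical `edge_enhancing_step` from the sketch in Section 2.2; the actual update rule (18) is not reproduced in this summary, so the inner step is only a stand-in.

```python
def selective_fusion(inputs, n_cycles=3, K=10, eta=0.5):
    """Cycle through the input frames, applying K update iterations per frame,
    so the composite gradually absorbs the sharpest content of every input."""
    y = inputs[0].astype(float).copy()        # initial estimate: first frame
    for _ in range(n_cycles):
        for x in inputs:                      # one input frame at a time
            for _ in range(K):                # K iterations per frame
                y = edge_enhancing_step(y, x, eta)
    return y
```
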

3.1 A novel fusion formulation based on error estimation theory

  • Assume there are T images xn(r) that capture the same observed scene.
  • In a second attempt to photograph the object correctly, the foreground object appears properly and the background appears blurred.
  • This cannot be accomplished directly by the scheme proposed by Jones and Vorontsov.
  • In addition, all the fusion weights are estimated simultaneously using this scheme.
  • Therefore, after a couple of iterations the majority of the useful information is extracted from the input images and transferred to the composite image.

3.2 Fusion experiments of out-of-focus and multimodal image sets using error estimation theory

  • The authors perform several fusion experiments of both out-of-focus and multimodal images to evaluate the performance of the proposed approach.
  • In the first experiment, the system is tested with an out-of-focus example, the “Disk” dataset.
  • In Table 1, the performance of the proposed method is compared with the ICA-based method, in terms of the Petrovic and Piella fusion metrics.
  • The proposed method aims at highlighting the edges of the input images in the fused image, due to the edge-enhancement term in the cost function.
  • Apart from the proposed fusion metrics [15,18], one should also consult the human operators of modern fusion systems in order to evaluate the performance of these algorithms efficiently (a sketch of the related Wang and Bovik index Q0 follows this list).
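
For context, a small sketch of the Wang and Bovik universal image quality index Q0, which underlies window-based fusion metrics such as Piella's; this global implementation is illustrative, whereas the published metrics are computed over local sliding windows.

```python
import numpy as np

def q0_index(a, b):
    """Universal image quality index Q0 of Wang and Bovik, computed globally:
    Q0 = 4*cov(a,b)*mean(a)*mean(b) / ((var(a)+var(b)) * (mean(a)^2+mean(b)^2))."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return 4 * cov * mu_a * mu_b / ((var_a + var_b) * (mu_a**2 + mu_b**2))
```
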

4 Joint Image Fusion and Restoration

  • The basic Image Fusion concept assumes that there is some useful information for all parts of the observed scene at least in one of the input sensors.
  • This assumption might not always be true.
  • This means that there might be parts of the observed scene where there is only degraded information available.
  • Current fusion algorithms will fuse all high-quality information from the input sensors, but for the common degraded areas they will form a blurry mixture of the input images, as no high-quality information is available there.
  • Once this part is identified, an image restoration approach can be applied as a second step in order to enhance these parts for the final composite “fused” image.

4.1 Identifying common degraded areas in the sensor images

  • The first task will be to identify the areas of degraded information in the input sensor images.
  • The algorithm for extracting common degraded areas proceeds in the following steps: (1) Extract an edge map of the fused image f, using the Laplacian kernel, i.e. ∇2f(r, t).
  • (3) Reduce the dynamic range by calculating ln(VL(r, t)).
  • The aim is to avoid high-quality edge/texture as well as constant background information (see the sketch after this list).
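
A hedged sketch of steps (1) and (3) above; step (2) is assumed here to be a local activity (variance) measure VL(r, t) of the Laplacian edge map, and the band thresholds that exclude strong edges/texture and constant background are illustrative parameters, not the chapter's values.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def common_degraded_mask(fused, win=9, lo=-4.0, hi=2.0):
    """Blurred regions show moderate Laplacian activity: below sharp edges
    and texture, but above constant background."""
    edge_map = laplace(fused.astype(float))        # (1) Laplacian edge map
    VL = uniform_filter(edge_map**2, size=win)     # (2) assumed: local activity of the edge map
    lnVL = np.log(VL + 1e-12)                      # (3) compress the dynamic range
    return (lnVL > lo) & (lnVL < hi)               # keep moderate-activity (degraded) areas
```
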

4.2 Image restoration

  • A number of different approaches for tackling the image restoration problem have been proposed in the literature, based on various principles.
  • For an overview of image restoration methods, one can refer to Kundur and Hatzinakos [8] and Andrews and Hunt [1].
  • The authors pursue the double-weighted regularised image restoration approach in the spatial domain, initially proposed by You and Kaveh [19], with additional robust functionals to improve the performance in the presence of outliers.
  • The restoration problem is described by the following model: y(r) = h(r) ∗ f(r) + d(r) (25) where ∗ denotes 2D convolution, h(r) the degradation kernel, f(r) the estimated image and d(r) possible additive noise.

4.2.1 Double weighted regularised image restoration

  • The second term, called the regularising term, imposes a smoothness constraint on the recovered image, and the third term acts similarly on the estimated blur.
  • Additional constraints must be imposed, including the non-negativity and finite-support constraint for both the blurring kernel and the image.
  • A Maximum A Posteriori (MAP) estimate of f(r) is obtained by maximising log p(y, f | r) = log p(y | f, r) p(f | r) over f, where r denotes the observed samples.
  • To estimate f(r) and h(r), the above cost function needs to be minimised (a generic sketch of such a cost follows this list).
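
A minimal sketch of a cost of this double-regularised form, in the generic You and Kaveh structure (data-fidelity term plus smoothness terms on the image f and the blur h); the weights lam1, lam2 and the plain quadratic penalties are illustrative stand-ins for the chapter's spatially weighted terms.

```python
import numpy as np
from scipy.signal import fftconvolve

def double_regularised_cost(y, f, h, lam1=1e-2, lam2=1e-2):
    """E(f, h) = ||y - h*f||^2 + lam1*||grad f||^2 + lam2*||grad h||^2
    (generic form; the chapter weights these terms spatially)."""
    residual = y - fftconvolve(f, h, mode="same")      # data-fidelity term
    gfy, gfx = np.gradient(f)                          # image gradients
    ghy, ghx = np.gradient(h)                          # blur-kernel gradients
    return (np.sum(residual**2)
            + lam1 * np.sum(gfx**2 + gfy**2)           # image regularising term
            + lam2 * np.sum(ghx**2 + ghy**2))          # blur regularising term
```
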

4.2.2 Robust functionals to the restoration cost function

  • There exist several criticisms regarding the conventional double regularisation restoration approach.
  • These parameters determine the “shape” of the influence function and as a consequence the filtering of outliers.
  • In order to find a trade-off between noise elimination and preservation of high-frequency details, the influence functional for the image regularising term must be approximately quadratic for small to moderate values and deviate from the quadratic structure at high values, so that sharp changes are not heavily penalised (the sketch after this list illustrates this shape with a Huber-type functional).
  • The PSF support is initially set to a large enough value.
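
A sketch of one common robust choice with the behaviour described above (quadratic for small to moderate arguments, growing only linearly beyond a threshold); the Huber functional is used here purely as an illustration, not as the specific functional adopted in the chapter.

```python
import numpy as np

def huber(t, T=1.0):
    """Robust error norm: quadratic for |t| <= T, linear beyond, so large
    (outlier / edge) values are not penalised quadratically."""
    a = np.abs(t)
    return np.where(a <= T, 0.5 * t**2, T * a - 0.5 * T**2)

def huber_influence(t, T=1.0):
    """Influence function (derivative of the norm): clips at +/- T."""
    return np.clip(t, -T, T)
```
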

4.3 Combining image fusion and restoration

  • The authors propose an algorithm that can combine all the previous methodologies and essentially perform fusion of all the parts that contain valid information in at least one of the input images and restoration of those image parts that are found to be degraded in all input images.
  • All useful information from the input images has been transferred to the fused image and the next step is to identify and restore the areas where only low quality information is available.
  • (2) The second step is to estimate the common degraded area, using the previous methodology based on the Laplacian edge map of the fused image y(r).
  • More specifically, this step aims at identifying possible corrupted areas in all input images that need enhancement in order to highlight more image details that were not previously available.
  • In a similar manner, the update for the Point Spread Function (PSF) needs to be influenced only by the common degraded area, i.e. in (33) f(r) is always substituted by A(r)f(r) (see the sketch after this list).
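
A hedged sketch of the masking idea: restoration updates are driven only by the common degraded area A(r), with f(r) entering the data term as A(r)f(r). The gradient-step form is hypothetical, since update (33) itself is not reproduced in this summary.

```python
import numpy as np
from scipy.signal import fftconvolve

def masked_restoration_step(y, f, h, A, eta=1e-3):
    """One illustrative gradient step on f restricted to the degraded area A
    (a 0/1 mask); outside A the fused image is left untouched."""
    Af = A * f
    residual = fftconvolve(Af, h, mode="same") - y             # h * (A f) - y
    grad = fftconvolve(residual, h[::-1, ::-1], mode="same")   # correlate with h (adjoint of blur)
    return f - eta * A * grad                                  # update only inside the mask
```
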

4.4 Examples of joint image fusion and restoration

  • Three synthetic examples are constructed to test the performance of the joint fusion and restoration approach.
  • In the case that the smaller kernel captures more than 85% of the total kernel variance, its size becomes the new estimated kernel size in the next step of the adaptation (see the sketch after this list).
  • In Figure 7 (e), (f), a focus on the common degraded area in the fused and the fused/restored image can verify the above conclusions.
  • The two artificially created blurred input images are depicted in Figures 9 (a) and (b); in Figure 9(a), Gaussian blur is applied to the upper-left part of the image.
  • As expected, the fusion algorithm manages to transfer all high quality information to the fusion image except for the area in the centre of the image that still remains blurred.
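
A small sketch of the support-shrinking rule mentioned above, under the assumption that "total kernel variance" is measured by the sum of squared PSF coefficients; the 85% threshold is taken from the text, while the scanning strategy is illustrative.

```python
import numpy as np

def shrink_psf_support(h, ratio=0.85):
    """Return the smallest centred odd support whose energy exceeds `ratio`
    of the total kernel energy (energy used here as a proxy for variance)."""
    total = np.sum(h**2)
    n = h.shape[0]                      # assume a square, odd-sized kernel
    c = n // 2
    for k in range(1, c + 1):
        sub = h[c - k:c + k + 1, c - k:c + k + 1]
        if np.sum(sub**2) > ratio * total:
            return sub                  # new, smaller estimated support
    return h
```
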

5 Conclusions

  • The problem of image fusion, i.e. the problem of incorporating useful information from various modality input sensors into a composite image that enhances the visual comprehension and surveillance of the observed scene, was addressed in this study.
  • More specifically, a spatial-domain method was proposed to perform fusion of both multi-focus and multi-modal input image sets.
  • By definition, fusion systems aim only at transferring the “interesting” information from the input sensor images to the fused image, assuming there is proper reference image signal for all parts of the image in at least one of the input sensor images.
  • The authors proposed a mechanism for identifying these common degraded areas in the fused image and a regularised restoration approach to enhance the content in these areas.
  • In addition, there are several other applications such as increasing the resolution and quality of pictures taken by commercial digital cameras.


Enhancement of Multiple Sensor Images using
Joint Image Fusion and Blind Restoration
Nikolaos Mitianoudis, Tania Stathaki
Communications and Signal Processing group, Imperial College London,
Exhibition Road, SW7 2AZ London, UK
Abstract
Image fusion systems aim at transferring “interesting” information from the input
sensor images to the fused image. The common assumption for most fusion ap-
proaches is the existence of a high-quality reference image signal for all image parts
in all input sensor images. In the case that there are common degraded areas in at
least one of the input images, the fusion algorithms can not improve the information
provided there, but simply convey a combination of this degraded information to
the output. In this study, the authors propose a combined spatial-domain method
of fusion and restoration in order to identify these common degraded areas in the
fused image and use a regularised restoration approach to enhance the content in
these areas. The proposed approach was tested on both multi-focus and multi-modal
image sets and produced interesting results.
Key words: Spatial-domain Image Fusion, Image Restoration.
PACS:
1 Introduction
Data fusion is defined as the process of combining data from sensors and
related information from several databases, so that the performance of the
system can be improved, while the accuracy of the results can be also increased.
Essentially, fusion is a procedure of incorporating essential information from
several sensors to a composite result that will be more comprehensive and thus
more useful for a human operator or other computer vision tasks.
Image fusion can be similarly viewed as the process of combining information
in the form of images, obtained from various sources in order to construct
an artificial image that contains all “useful” information that exists in the
input images. Each image has been acquired using different sensor modalities
or capture techniques, and therefore, it has different features, such as type of
degradation, thermal and visual characteristics. The main concept behind all
image fusion algorithms is to detect strong salient features in the input sensor
images and fuse these details to the synthetic image. The resulting synthetic
image is usually referred to as the fused image.
Let x1(r), . . . , xT(r) represent T images of size M1 × M2 capturing the same
scene, where r = (i, j) refers to pixel coordinates (i, j) in the image. Each im-
age has been acquired using different sensors that are placed relatively close
and are observing the same scene. Ideally, the images acquired by these sen-
sors should be similar. However, there might exist some miscorrespondence
between several points of the observed scene, due to the different sensor view-
points. Image registration is the process of establishing point-by-point corre-
spondence between a number of images, describing the same scene. In this
study, the input images are assumed to have negligible registration problems
or the transformation matrix between the sensors’ viewpoints is known. Thus,
the objects in all images can be considered geometrically aligned.
As already mentioned, the process of combining the important features from
the original T images to form a single enhanced image y(r) is usually referred
to as image fusion. Fusion techniques can be divided into spatial domain and
transform domain techniques [5]. In spatial domain techniques, the input im-
ages are fused in the spatial domain, i.e. using localised spatial features. As-
suming that g(·) represents the “fusion rule”, i.e. the method that combines
features from the input images, the spatial domain techniques can be sum-
marised, as follows:
y(r) = g(x1(r), . . . , xT(r))    (1)
Moving to a transform domain enables the use of a framework, where the
image’s salient features are more clearly depicted than in the spatial domain.
Let T {·} represent a transform operator and g(·) the applied fusion rule.
Transform-domain fusion techniques can then be outlined, as follows:
y(r) = T⁻¹{g(T{x1(r)}, . . . , T{xT(r)})}    (2)
Several transformations were proposed to be used for image fusion, including
the Dual-Tree Wavelet Transform [5,7,12], Pyramid Decomposition [14] and
image-trained Independent Component Analysis bases [10,9]. All these trans-
formations project the input images onto localised bases, modelling sharp and
abrupt transitions (edges) and therefore, describe the image using a more
meaningful representation that can be used to detect and emphasize salient
features, important for performing the task of image fusion. In essence, these
transformations can discriminate between salient information (strong edges
and texture) and constant or non-textured background and can also evaluate
the quality of the provided salient information. Consequently, one can select
the required information from the input images in the transform domain to
construct the “fused” image, following the criteria presented earlier on.
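To make (1) concrete, here is a minimal spatial-domain fusion sketch in which the fusion rule g(·) is a hypothetical "pick the input with the largest local gradient energy" selection; a transform-domain scheme (2) would apply an analogous rule to the transform coefficients instead.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fusion(inputs, win=7):
    """Spatial-domain fusion y(r) = g(x_1(r), ..., x_T(r)): at each pixel,
    select the input with the largest local gradient energy (an illustrative
    choice of fusion rule g, not the chapter's rule)."""
    stack = np.stack([x.astype(float) for x in inputs])        # T x M1 x M2
    saliency = []
    for x in stack:
        gy, gx = np.gradient(x)
        saliency.append(uniform_filter(gx**2 + gy**2, size=win))
    idx = np.argmax(np.stack(saliency), axis=0)                # winning sensor per pixel
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```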
In the case of multi-focus image fusion scenarios, an alternative approach
has been proposed in the spatial domain, exploiting current error estimation
methods to identify high-quality edge information [6]. One can perform error
minimization between the fused and input images, using various proposed error
norms in the spatial domain in order to perform fusion. The possible benefit
of a spatial-domain approach is the reduction in computational complexity,
which is present in a transform-domain method due to the forward and inverse
transformation step.
In addition, following a spatial-domain fusion framework, one can also ben-
efit from current available spatial-domain image enhancement techniques to
incorporate a possible restoration step to enhance areas that exhibit distorted
information in all input images. Current fusion approaches can not enhance
areas that appear degraded in any sense in all input images. There is a neces-
sity for some pure information to exist for all parts of the image in the various
input images, so that the fusion algorithm can produce a high quality output.
In this work, we propose to reformulate and extend Jones and Vorontsov’s [6]
spatial-domain approach to fuse the non-degraded common parts of the sensor
images. A novel approach is used to identify the areas of common degradation
in all input sensor images. A double-regularised image restoration approach
using robust functionals is applied on the estimated common degraded area
to enhance the common degraded area in the “fused” image. The overall fu-
sion result is superior to any traditional fusion approach since the proposed
approach goes beyond the concept of transferring useful information to a thor-
ough fusion-enhancement approach.
2 Robust Error Estimation Theory
Let the image y(r) be a recovered version from a degraded observed image
x(r), where r = (i, j) are pixel coordinates (i, j). To estimate the recovered
image y(r), one can minimise an error functional E(y) that expresses the
difference between the original image and the estimated one, in terms of y.
The error functional can be defined by:
E(y) = ∫_Ω ρ(r, y(r), |∇y(r)|) dr    (3)
where Ω is the image support and ∇y(r) is the image gradient. The function ρ(·)
is termed the error norm and is defined according to the application, i.e. the
type of degradation or the desired task. For example, a least square error norm
can be appropriate to remove additive Gaussian noise from a degraded image.
The extremum of the previous equation can be estimated, using the Euler -
Lagrange equation. The Euler-Lagrange equation is an equation satisfied by a
function f of a parameter t which extremises the functional:
E(f) = ∫ F(t, f(t), f′(t)) dt    (4)
where F is a given function with continuous first partial derivatives. The Euler-
Lagrange equation is described by the following ordinary differential equation,
i.e. a relation that contains functions of only one independent variable, and
one or more of its derivatives with respect to that variable, the solution f(t) of
which extremises the above functional [21].
∂F/∂f(t) − (d/dt) ∂F/∂f′(t) = 0    (5)
Applying the above rule to derive the extremum of (3), the following Euler-
Lagrange equation is derived:
∂ρ/∂y − ∇·(∂ρ/∂∇y) = 0    (6)
Since ρ(·) is a function of |∇y| and not of ∇y directly, we perform the substitution
∂ρ/∂∇y = sgn(∇y) ∂ρ/∂|∇y| = (∇y/|∇y|) ∂ρ/∂|∇y|    (7)
where sgn(∇y) = ∇y/|∇y|. Consequently, the Euler-Lagrange equation is given by:
∂ρ/∂y − ∇·( (1/|∇y|) ∂ρ/∂|∇y| ∇y(r) ) = 0    (8)
To obtain a closed-form solution y(r) from (8) is not straightforward. Hence,
one can use numerical optimisation methods to estimate y. Gradient-descent
optimisation can be applied to estimate y(r) iteratively using the following
update rule:
y(r, t) = y(r, t − 1) + η ∂y(r, t)/∂t    (9)
where t is the time evolution parameter, η is the optimisation step size and
∂y(r, t)/∂t = −∂ρ/∂y + ∇·( (1/|∇y|) ∂ρ/∂|∇y| ∇y(r, t) )    (10)
Starting with the initial condition y(r, 0) = x(r), the iteration of (10) continues
until the minimisation criterion is satisfied, i.e. |∂y(r, t)/∂t| < ε, where ε is a
small constant (ε ≈ 0.0001). In practice, only a finite number of iterations are
performed to achieve visually satisfactory results [6]. The choice of the error
norm ρ(·) in the Euler-Lagrange equation is the next topic of discussion.
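A minimal numerical sketch of the time evolution (9)-(10) for an error norm that does not depend explicitly on y(r); the caller supplies ∂ρ/∂|∇y| as a function of the gradient magnitude (for the least-squares norm of Section 2.1 this is simply |∇y|, and the update reduces to isotropic diffusion).

```python
import numpy as np

def evolve(x, drho_dgrad, eta=0.1, eps=1e-4, max_iter=200):
    """Gradient-descent evolution of (9)-(10): y(r,0) = x(r), then
    y <- y + eta * div( (1/|grad y|) * drho/d|grad y| * grad y )."""
    y = x.astype(float).copy()
    for _ in range(max_iter):
        gy, gx = np.gradient(y)
        mag = np.sqrt(gx**2 + gy**2) + 1e-12
        c = drho_dgrad(mag) / mag                 # diffusion coefficient (1/|grad y|) drho/d|grad y|
        div = np.gradient(c * gy, axis=0) + np.gradient(c * gx, axis=1)
        if np.max(np.abs(div)) < eps:             # stopping rule |dy/dt| < eps
            break
        y += eta * div                            # descent step (norm independent of y itself)
    return y

# Least-squares norm of Section 2.1: drho/d|grad y| = |grad y|  ->  isotropic diffusion
# y_smooth = evolve(x, lambda m: m)
```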
2.1 Isotropic diffusion
As mentioned previously, one candidate error norm ρ(·) is the least-squares
error norm. This norm is given by:
ρ(r, |∇y(r)|) = (1/2) |∇y(r)|²    (11)
The above error norm smooths Gaussian noise and depends only on the image
gradient ∇y(r), but not explicitly on the image y(r) itself. If the least-squares
error norm is substituted in the time evolution equation (10), we get the
following update:
∂y(r, t)/∂t = ∇²y(r, t)    (12)
which is the isotropic diffusion equation having the following analytic solution [2]:
y(r, t) = G(r, t) ∗ x(r)    (13)
where ∗ denotes the convolution of a Gaussian function G(r, t) of standard
deviation t with x(r), the initial data. The solution specifies that the time
evolution in (12) is a convolution process performing Gaussian smoothing.
However, as the time evolution iteration progresses, the function y(r, t)
becomes the result of convolving the input image with a Gaussian of constantly
increasing variance, which will finally produce a constant value.
In addition, it has been shown that isotropic diffusion may not only smooth
edges, but also cause drifts of the actual edge locations in the image, because
of the Gaussian filtering (smoothing) [2,13]. These are two disadvantages that
need to be seriously considered when using isotropic diffusion.
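A short sketch of the explicit iteration of (12); repeated application behaves like Gaussian smoothing of the input, consistent with (13), and eventually flattens the image toward a constant.

```python
import numpy as np
from scipy.ndimage import laplace

def isotropic_diffusion(x, eta=0.2, n_iter=50):
    """Explicit iteration of eq. (12): y <- y + eta * laplacian(y).
    For stability of the 2D explicit scheme, eta should stay below 0.25."""
    y = x.astype(float).copy()
    for _ in range(n_iter):
        y += eta * laplace(y)
    return y

# Illustrative comparison: the result resembles scipy.ndimage.gaussian_filter
# applied to x with a width that grows with eta * n_iter.
```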

Citations
Journal ArticleDOI
TL;DR: The preliminary experimental analysis shows that robust anisotropic denoising can be attained in parallel with efficient image fusion, thus bringing two paramount image processing tasks into complete synergy.

28 citations

01 Jan 2010
TL;DR: This thesis proposed and presented some quick image fusion algorithms, based upon spatial mixture analysis, and developed graphic user interface for multi-sensor image fusion software using Microsoft visual studio and Microsoft Foundation Class library.
Abstract: This thesis work is motivated by the potential and promise of image fusion technologies in the multi sensor image fusion system and applications. With specific focus on pixel level image fusion, the process after the image registration is processed, we develop graphic user interface for multi-sensor image fusion software using Microsoft visual studio and Microsoft Foundation Class library. In this thesis, we proposed and presented some image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low spatial resolution data source from different sensors to create high resolution and large size of fused image. This research includes developing a segment-based step, based upon stepwise divide and combine process. In the second stage of the process, the linear interpolation optimization is used to sharpen the image resolution. Implementation of these image fusion algorithms are completed based on the graphic user interface we developed. Multiple sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. By using quantitative estimation such as mutual information, we obtain the experiment quantifiable results. We also use the image morphing technique to generate fused image sequence, to simulate the results of image fusion. While deploying our pixel level image fusion algorithm approaches, we observe several challenges from the popular image fusion methods. While high computational cost and complex processing steps of image fusion algorithms provide accurate fused results, they also makes it hard to become deployed in system and applications that require real-time feedback, high flexibility and low computation ability

6 citations

Journal ArticleDOI
TL;DR: More extensive and detailed experimental data is extracted and analyzed to validate computational fluid dynamics (CFD) simulations and opens an avenue for future fire-safety research.

2 citations

References
Journal ArticleDOI
TL;DR: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced, chosen to vary spatially in such a way as to encourage intra Region smoothing rather than interregion smoothing.
Abstract: A new definition of scale-space is suggested, and a class of algorithms used to realize a diffusion process is introduced. The diffusion coefficient is chosen to vary spatially in such a way as to encourage intraregion smoothing rather than interregion smoothing. It is shown that the 'no new maxima should be generated at coarse scales' property of conventional scale space is preserved. As the region boundaries in the approach remain sharp, a high-quality edge detector which successfully exploits global information is obtained. Experimental results are shown on a number of images. Parallel hardware implementations are made feasible because the algorithm involves elementary, local operations replicated over the image. >

12,560 citations


"12 – Enhancement of multiple sensor..." refers background in this paper

  • ...In addition, it has been shown that isotropic diffusion may not only smooth edges, but also causes drifts of the actual edges in the image edge, because of the Gaussian filtering (smoothing) [2,13]....

    [...]

Journal ArticleDOI
TL;DR: Although the new index is mathematically defined and no human visual system model is explicitly employed, experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error.
Abstract: We propose a new universal objective image quality index, which is easy to calculate and applicable to various image processing applications. Instead of using traditional error summation methods, the proposed index is designed by modeling any image distortion as a combination of three factors: loss of correlation, luminance distortion, and contrast distortion. Although the new index is mathematically defined and no human visual system model is explicitly employed, our experiments on various image distortion types indicate that it performs significantly better than the widely used distortion metric mean squared error. Demonstrative images and an efficient MATLAB implementation of the algorithm are available online at http://anchovy.ece.utexas.edu//spl sim/zwang/research/quality_index/demo.html.

5,285 citations


"12 – Enhancement of multiple sensor..." refers methods in this paper

  • ...Noise Ratio (PSNR) and the Image Quality Index Q0, proposed by Wang and Bovik [17]....

    [...]

  • ...In Table 3, the performance of the TopoICA-based fusion scheme, the fusion scheme based on Error Estimation and the fusion+restoration scheme are evaluated in terms of Peak Signal-to- Noise Ratio (PSNR) and the Image Quality Index Q0, proposed by Wang and Bovik [17]....

    [...]

  • ...The enhanced images will be compared with the ground truth image, in terms of Peak Signal-toNoise Ratio (PSNR) and Image Quality Index Q0, as proposed by Wang and Bovik [17]....

    [...]

Journal ArticleDOI
TL;DR: It is shown that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image and the connection to the error norm and influence function in the robust estimation framework leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion.
Abstract: Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the line processes allows us to develop new anisotropic diffusion equations that result in a qualitative improvement in the continuity of edges.

1,397 citations


"12 – Enhancement of multiple sensor..." refers background in this paper

  • ...which is the isotropic diffusion equation having the following analytic solution [2]:...

    [...]

  • ...In addition, it has been shown that isotropic diffusion may not only smooth edges, but also causes drifts of the actual edges in the image edge, because of the Gaussian filtering (smoothing) [2,13]....

    [...]

Journal ArticleDOI
TL;DR: The problem of blind deconvolution for images is introduced, the basic principles and methodologies behind the existing algorithms are provided, and the current trends and the potential of this difficult signal processing problem are examined.
Abstract: The goal of image restoration is to reconstruct the original scene from a degraded observation. This recovery process is critical to many image processing applications. Although classical linear image restoration has been thoroughly studied, the more difficult problem of blind image restoration has numerous research possibilities. We introduce the problem of blind deconvolution for images, provide an overview of the basic principles and methodologies behind the existing algorithms, and examine the current trends and the potential of this difficult signal processing problem. A broad review of blind deconvolution methods for images is given to portray the experience of the authors and of the many other researchers in this area. We first introduce the blind deconvolution problem for general signal processing applications. The specific challenges encountered in image related restoration applications are explained. Analytic descriptions of the structure of the major blind deconvolution approaches for images then follows. The application areas, convergence properties, complexity, and other implementation issues are addressed for each approach. We then discuss the strengths and limitations of various approaches based on theoretical expectations and computer simulations.

1,332 citations

01 Jan 1977

933 citations

Frequently Asked Questions (1)
Q1. What are the contributions in "Enhancement of multiple sensor images using joint image fusion and blind restoration" ?

In the case that there are common degraded areas in at least one of the input images, the fusion algorithms can not improve the information provided there, but simply convey a combination of this degraded information to the output. In this study, the authors propose a combined spatial-domain method of fusion and restoration in order to identify these common degraded areas in the fused image and use a regularised restoration approach to enhance the content in these areas.