
Marquee University
e-Publications@Marquee
Electrical and Computer Engineering Faculty
Research and Publications
Electrical and Computer Engineering, Department
of
1-1-2000
Scene-based nonuniformity correction with video
sequences and registration
Russell C. Hardie
University of Dayton
Majeed M. Hayat
Marquee University, majeed.hayat@marque6e.edu
Earnest Armstrong
U.S. Air Force Research Laboratory
Brian Yasuda
U.S. Air Force Research Laboratory
Accepted version. Applied Optics, Vol. 39, No. 8 (2000): 1241-1250. DOI. © 2000 Optical Society of
America. Used with permission.

Marquette University
e-Publications@Marquette
Electrical and Computer Engineering Faculty Research and
Publications/College of Engineering
This paper is NOT THE PUBLISHED VERSION; it is the author’s final, peer-reviewed manuscript. The
published version may be accessed by following the link in the citation below.
Applied Optics, Vol. 39, No. 8 (2000): 1241-1250. DOI. This article is © Optical Society of America and
permission has been granted for this version to appear in e-Publications@Marquette. Optical Society
of America does not grant permission for this article to be further copied/distributed or hosted
elsewhere without the express permission from Optical Society of America.
Scene-based nonuniformity correction with
video sequences and registration
Russell C. Hardie
University of Dayton
Majeed M. Hayat
University of Dayton
Earnest Armstrong
The Air Force Research Labs
Brian Yasuda
The Air Force Research Labs
Abstract
We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array
detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence
of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of
nonuniformity, sufficiently accurate registration may be possible with standard scene-based
registration techniques. If the registration is accurate, and motion exists between the frames, then

groups of independent detectors can be identified that observe the same irradiance (or true scene
value). These detector outputs are averaged to generate estimates of the true scene values. With
these scene estimates, and the corresponding observed values through a given detector, a curve-fitting
procedure is used to estimate the individual detector response parameters. These can then be used to
correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low
computational complexity. Experimental results, to illustrate the performance of the algorithm, include
the use of visible-range imagery with simulated nonuniformity and infrared imagery with real
nonuniformity.
1. Introduction
Focal-plane array (FPA) sensors are widely used in visible-light and infrared imaging systems for a
variety of applications. An FPA sensor consists of a two-dimensional mosaic of photodetectors placed in
the focal plane of an imaging lens. The wide spectral response and short response time of such arrays,
along with their compactness and optical simplicity, give FPA sensors an edge over scanning systems in
applications that demand high sensitivity and high frame rates.
The performance of FPA’s is known, however, to be affected by the presence of spatial fixed-pattern
noise that is superimposed on the true image.[1]-[3] This is particularly true for infrared FPA’s. This
noise is attributed to the spatial nonuniformity in the photoresponses of the individual detectors in the
array. Furthermore, what makes overcoming this problem more challenging is the fact that the spatial
nonuniformity drifts slowly in time.[4] This drift is due to changes in the external conditions such as the
surrounding temperature, variation in the transistor bias voltage, and the variation in the collected
irradiance. In many applications the response of each detector is characterized by a linear model in
which the collected irradiance is multiplied by a gain factor and offset by a bias term. The pixel-to-pixel
nonuniformity in these parameters is therefore responsible for the fixed-pattern noise.
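As a concrete illustration, the linear gain/bias model described above, and its inversion when the parameters are known, can be sketched as follows (array sizes, noise levels, and variable names are our own illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

# True scene irradiance (one frame) and per-detector nonuniformity.
# Gains are taken to have mean 1 and biases mean 0, consistent with
# the assumption made later in Section 2.
z = rng.uniform(50.0, 200.0, size=(64, 64))        # true irradiance
gain = 1.0 + 0.05 * rng.standard_normal((64, 64))  # gain a(j)
bias = 5.0 * rng.standard_normal((64, 64))         # bias b(j)

# Observation model: each detector output is a(j) z(j) + b(j),
# which superimposes fixed-pattern noise on the true image.
x = gain * z + bias

# With known parameters, correction simply inverts the model.
z_hat = (x - bias) / gain
assert np.allclose(z_hat, z)
```

The NUC problem is precisely that `gain` and `bias` are unknown and must be estimated, either by calibration or from the scene itself.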
Numerous nonuniformity correction (NUC) techniques have been developed over the years. For most
of these techniques some knowledge of the true irradiance (true scene values) and the corresponding
observed detector responses is essential. Different observation models and methods for extracting
information about the true scene give rise to the variety of NUC techniques. A standard two-point
calibration technique relies on knowledge of the true irradiance and corresponding detector outputs at
two distinct levels. With this information the gain and the bias can be computed for each detector and
used to compensate for nonuniformity. For infrared sensors two flat-field scenes are typically
generated by means of a blackbody radiation source for this purpose.[1],[5],[6] Unfortunately, such
calibration generally involves expensive equipment (e.g., blackbody sources, additional electronics,
mirrors, and optics) and requires halting the normal operation of the camera for the duration of the
calibration. This procedure may also reduce the reliability of the system and increase maintenance
costs.
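The two-point calibration just described amounts to solving two linear equations per detector. A minimal sketch, assuming two noiseless flat-field exposures at known levels (the levels and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (32, 32)
gain = 1.0 + 0.05 * rng.standard_normal(shape)
bias = 3.0 * rng.standard_normal(shape)

# Two flat-field exposures at known irradiance levels T1 < T2
# (e.g., generated with a blackbody source).
T1, T2 = 80.0, 160.0
x1 = gain * T1 + bias   # detector outputs at level T1
x2 = gain * T2 + bias   # detector outputs at level T2

# Solve the two linear equations per detector for gain and bias.
gain_est = (x2 - x1) / (T2 - T1)
bias_est = x1 - gain_est * T1

# Correct an arbitrary scene observed through the same detectors.
z = 120.0
x = gain * z + bias
z_hat = (x - bias_est) / gain_est
assert np.allclose(z_hat, z)
```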
Recently considerable research has been focused on developing NUC techniques that use only the
information in the scene being imaged (no calibration targets). The scene-based NUC algorithms
generally use an image sequence and rely on motion between frames. Scribner et al.[3],[7],[8]
developed a least-mean-square-based NUC technique that resembles adaptive temporal high-pass
filtering of frames. O’Neil[9],[10] developed a technique that uses a dither scan mechanism that results

in a deterministic pixel motion. Narendra and Foss,[11] and more recently, Harris[12] and Harris and
Chiang,[13] developed algorithms based on the assumption that the statistics (mean and variance) of
the irradiance are fixed for all pixels. Cain et al.[14] considered a Bayesian approach to NUC and
developed a maximum-likelihood algorithm that jointly estimates the scene sampled on a high-
resolution grid, the detector parameters, and translational motion parameters. A statistical technique
that adaptively estimates the gain and the bias using a constant-range assumption was developed
recently by Hayat et al.[15]
In this paper we consider a method to extract information about the true scene that exploits global
motion between frames in a sequence. If reliable motion estimation (registration) is achievable in the
presence of the nonuniformity, then the true scene value at a particular location and frame can be
traced along a motion trajectory of pixels. This means that all the detectors along this trajectory are
exposed to the same true scene value. If the gains and biases of the detectors are assumed to be
uncorrelated along the trajectory, then we may obtain a reasonable estimate of the true scene by
taking the average of these observed pixel values. This represents a simple motion-compensated
temporal average. Furthermore, in a sequence of frames, each detector is potentially exposed to a
number of scene values (which can be estimated). Thus the gain and bias of each detector can be
estimated with a line-fitting procedure. The observed pixel values and the corresponding estimates of
the true scene values form the points used in the line fitting. The procedure may be repeated
periodically to account for drift in gain and bias. Although the proposed algorithm may be viewed as
heuristic in nature, we believe that it is intuitive and that its strength lies in its simplicity and low
computational cost. Furthermore, it appears to offer promising results on the data sets tested.
The remainder of this paper is organized as follows. In Section 2 the proposed NUC algorithm is defined
and a statistical error analysis is presented. In Section 3 experimental results are presented. These
results illustrate the performance of the algorithm with visible-range images with simulated
nonuniformities and forward-looking infrared (FLIR) imagery with real nonuniformities. Finally, some
conclusions are presented in Section 4.
2. Nonuniformity Correction
In this section we describe the proposed NUC algorithm in detail and present a statistical analysis. The
proposed technique is based on three steps, which are illustrated in Fig. 1. First, registration is
performed on a sequence of raw frames. We show in the results section that, unless the levels of
nonuniformity are high, a fairly accurate registration can be performed in the case of global motion.
The registration algorithm used here is a gradient-based method.[16],[17] Other methods may also be
suitable for this application. The next step in the proposed algorithm involves estimating the true scene
data with a motion-compensated temporal average. Finally, the observed data and the estimated
scene data are used to form an estimate of the nonuniformity parameters. These parameters can then
be used to correct future frames with minimal computations. We now describe, in detail, the
estimation of the true scene data and the nonuniformity parameters.
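For concreteness, a single-scale gradient-based estimate of a small global translation, in the spirit of (but much simpler than) the methods of refs. [16] and [17], might look like the following sketch (the function name and details are ours):

```python
import numpy as np

def estimate_global_shift(f, g):
    """Least-squares gradient-based estimate of a small global
    translation (dy, dx) between frames f and g, where g is
    approximately f shifted by (dy, dx). Linearizes
    g(p) = f(p - d) ~ f(p) - grad(f) . d and solves for d."""
    gy, gx = np.gradient(f)          # spatial gradients (axis 0, axis 1)
    gt = g - f                       # temporal difference
    # Per pixel: gx*dx + gy*dy = -gt, solved in the least-squares sense.
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -gt.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dy, dx
```

For a one-pixel shift of a smooth test image this estimate is typically accurate to within a few percent; larger motions would call for an iterative or multiscale (pyramid) variant.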
A. Estimation of the True Scene
Consider a sequence of desired (true) image frames that are free from the effects of detector
nonuniformity. Let us define these data in lexicographical order such that z_i(j) represents the jth pixel

value in the ith frame. Let N be the number of frames in a given sequence and P be the number of
pixels per frame.
Here we assume a linear detector response and model the nonuniformity of each detector with a gain
and a bias. For the jth pixel of the ith frame, where 1 ≤ i ≤ N and 1 ≤ j ≤ P, the observed pixel value is
given by

x_i(j) = a(j)z_i(j) + b(j),    (1)

where the variable a(j) represents the gain of the jth detector and b(j) is the offset of the detector.
These gains and biases are assumed to be constant for each detector over the duration of the N-frame
sequence.
Let us assume that each ideal pixel value in the first frame maps to a particular pixel in all subsequent
frames. Thus we neglect border effects and assume that no occlusion or perspective changes occur.
This is often reasonable when objects are imaged at a relatively large distance where the motion is the
result of small camera pointing angle movement and/or jitter. Furthermore, our mathematical
development does not explicitly treat the case of subpixel motion (although the proposed algorithm
can be used with subpixel motion). To describe this frame-to-frame pixel mapping or trajectory, let
t_{i,j,k} be the spatial index of z_i(j) as it appears in the kth frame. This index is determined from the
registration step. Thus

z_i(j) = z_k(t_{i,j,k}),    (2)

for i = 1, 2, …, N, j = 1, 2, …, P, and k = 1, 2, …, N. An example illustrating the use of the notation is
shown in Fig. 2.
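Under purely global integer translations, the trajectory index t_{i,j,k} reduces to shifting the pixel's row and column by the estimated interframe displacement. A small illustrative helper (our own convention, with j lexicographic as defined above):

```python
import numpy as np

def trajectory_index(j, shift_ik, ncols, nrows):
    """Map lexicographic index j in frame i to its index t_{i,j,k}
    in frame k, assuming a global integer translation (dy, dx)
    between the two frames. Returns None when the pixel leaves
    the array (the border effects neglected in the text)."""
    r, c = divmod(j, ncols)          # row/column of pixel j
    dy, dx = shift_ik
    rk, ck = r + dy, c + dx
    if not (0 <= rk < nrows and 0 <= ck < ncols):
        return None
    return rk * ncols + ck
```

For example, on an 8 × 8 frame with a shift of (1, 2), pixel 0 maps to index 10, while a corner pixel pushed off the array maps to None.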
Here we adopt the assumption that the detector gains and biases are independent and identically
distributed from pixel to pixel. We believe that this is reasonable for many applications. In this case let
the probability density functions of the gain and the bias parameters be denoted f_a(x) and f_b(x),
respectively. To achieve relative NUC from pixel to pixel (without calibrated targets, absolute gain and
bias values cannot be determined), there is no loss of generality in assuming that the mean of the gain
parameters is 1, whereas the mean of the bias terms is 0. If so, the mean of an observed value is the
desired scene value, E{x_i(j)} = z_i(j). The probability density function of the observed value is given by

f_{x_i(j)}(x) = [1/z_i(j)] f_a[x/z_i(j)] * f_b(x),    (3)
where * represents convolution. If the gains and the biases are Gaussian, x_i(j) will also have a Gaussian
distribution with mean z_i(j). Furthermore, if the variance of the gains is σ_a^2, and is σ_b^2 for the
biases, then the variance of x_i(j) is σ_{x_i(j)}^2 = [z_i(j)σ_a]^2 + σ_b^2.
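This mean and variance can be checked numerically with a quick Monte Carlo sketch (the parameter values below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
z = 100.0                     # fixed true scene value
sigma_a, sigma_b = 0.05, 2.0  # gain and bias standard deviations
n = 200_000                   # number of independent detectors

a = 1.0 + sigma_a * rng.standard_normal(n)  # gains, mean 1
b = sigma_b * rng.standard_normal(n)        # biases, mean 0
x = a * z + b                               # observed values

# Predicted moments: E{x} = z, var{x} = [z sigma_a]^2 + sigma_b^2
predicted_var = (z * sigma_a) ** 2 + sigma_b ** 2
assert abs(x.mean() - z) < 0.1
assert abs(x.var() - predicted_var) / predicted_var < 0.05
```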
If motion is present in the scene, then one has the luxury of making multiple observations of the same
scene value through independent detectors. In the case of Gaussian parameters the maximum-
likelihood estimate of the desired scene value is given by the sample mean estimate.[18] In particular,
this estimate is

ẑ_i(j) = (1/N) Σ_{k=1}^{N} x_k(t_{i,j,k}).    (4)
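Putting the pieces together, a motion-compensated temporal average followed by a per-detector least-squares line fit can be sketched on synthetic data as follows. This assumes known global integer shifts (i.e., a perfect registration step) and simulated nonuniformity; all names and sizes are ours:

```python
import numpy as np

rng = np.random.default_rng(3)
N, H, W = 20, 32, 32

# Synthetic true scene larger than the frame; each observed frame is
# a diagonally shifted H x W window, giving known global motion.
scene = rng.uniform(50.0, 200.0, size=(H + N, W + N))
shifts = [(k, k) for k in range(N)]            # registration output
true = np.stack([scene[dy:dy+H, dx:dx+W] for dy, dx in shifts])

gain = 1.0 + 0.05 * rng.standard_normal((H, W))   # a(j), mean 1
bias = 5.0 * rng.standard_normal((H, W))          # b(j), mean 0
obs = gain * true + bias                          # observed frames

# Step 1: motion-compensated temporal average. Accumulate each frame
# on the common scene grid and average the detectors that observed
# each scene value (the sample mean along each trajectory).
acc = np.zeros_like(scene)
cnt = np.zeros_like(scene)
for (dy, dx), frame in zip(shifts, obs):
    acc[dy:dy+H, dx:dx+W] += frame
    cnt[dy:dy+H, dx:dx+W] += 1
scene_hat = acc / np.maximum(cnt, 1)

# Step 2: per-detector line fit of observed values against the
# estimated scene values along each trajectory.
z_est = np.stack([scene_hat[dy:dy+H, dx:dx+W] for dy, dx in shifts])
zbar, xbar = z_est.mean(axis=0), obs.mean(axis=0)
gain_hat = ((z_est - zbar) * (obs - xbar)).sum(axis=0) \
         / ((z_est - zbar) ** 2).sum(axis=0)
bias_hat = xbar - gain_hat * zbar

# Step 3: correct the frames with the estimated parameters.
corrected = (obs - bias_hat) / gain_hat
assert np.abs(corrected - true).mean() < np.abs(obs - true).mean()
```

On this synthetic sequence the correction substantially reduces the fixed-pattern error; with real data the scene estimate would carry registration and border errors in addition to the averaging noise analyzed above.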
