Non-Ideal Iris Segmentation Using Graph Cuts
Shrinivas J. Pundlik
Damon L. Woodard
Stanley T. Birchfield
Electrical and Computer Engineering Department
Image and Video Analysis Lab, School of Computing
Clemson University, Clemson, SC 29634
{spundli, woodard, stb}@clemson.edu
Workshop on Biometrics
(in association with CVPR)
Anchorage, Alaska, June 2008
© 2008 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes, or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works, must be obtained from the IEEE.
Abstract
A non-ideal iris segmentation approach using graph cuts is
presented. Unlike many existing algorithms for iris local-
ization which extensively utilize eye geometry, the proposed
approach is predominantly based on image intensities. In a
step-wise procedure, first eyelashes are segmented from the
input images using image texture, then the iris is segmented
using grayscale information, followed by a postprocessing
step that utilizes eye geometry to refine the results. A
preprocessing step removes specular reflections in the iris,
and image gradients in a pixel neighborhood are used
to compute texture. The image is modeled as a Markov
random field, and a graph cut based energy minimization
algorithm [2] is used to separate textured and untextured
regions for eyelash segmentation, as well as to segment the
pupil, iris, and background using pixel intensity values.
The algorithm is automatic, unsupervised, and efficient at
producing smooth segmentation regions on many non-ideal
iris images. A comparison of the estimated iris region
parameters with the ground truth data is provided.
1. Introduction
Automated person identification and verification systems
based on human biometrics are becoming increasingly pop-
ular and have found wide ranging applications in defense,
public, and private sectors. Over the years the iris has
emerged as an effective biometric for many such applica-
tions due to the availability of efficient algorithms for per-
forming iris recognition tasks. A large number of iris recog-
nition approaches rely on ideal iris images for successful
recognition, i.e., low noise iris images in which the per-
son is looking straight at the camera. Their performance
degrades if the iris undergoes large occlusion, illumination
change, or out-of-plane rotation. Iris recognition using such
non-ideal iris images is still a challenging problem. Figure
1 shows an ideal and several non-ideal iris images.
Figure 1. An ideal iris image (left), and iris images of varying
quality (right three columns), containing out of plane rotation, il-
lumination effects, and occlusion.
Iris segmentation is an important part of the larger recog-
nition problem, because only once the iris has been local-
ized can the unique signature be extracted. In previous
work, geometric approaches have been common. For ex-
ample, in his pioneering work on iris recognition, Daugman
[5, 6] fits a circle to the iris and parabolic curves above and
below the iris to account for eyelids and eyelashes. Simi-
larly, geometric cues such as pupil location or eyelid loca-
tion have been used for iris localization [8], while stretching
and contraction properties of the pupil and iris have also
been used [10]. Another important approach has been to
detect the eyelashes in order to determine iris occlusion.
To this end, Ma et al. [13] use Fourier transforms to deter-
mine whether the iris is being occluded by the eyelashes; the
unique spectrum associated with eyelashes are used to reject
images in which significant iris occlusion occurs. Other ap-
proaches for eyelash segmentation involve the use of image
intensity differences between the eyelash and iris regions
[12, 11], gray level co-occurrence matrices [1], and the use
of multiple eyelash models [16].
These attempts at iris segmentation are limited to ideal
iris images, assuming that the shape of the iris can be
modeled as a circle. Such a simplifying assumption lim-
its the range of input images that can be successfully used
for recognition. By relying on geometry, these techniques
are sensitive to noise in the image. Some more recent ap-
proaches to handle non-ideal iris images rely upon active
contour models [7] or geodesic active contours [14] for iris
segmentation. Building upon this work, we propose in this
paper an algorithm for eyelash and iris segmentation that
uses image intensity information directly instead of relying

Figure 2. Overview of the proposed approach.
on intensity gradients. Our approach models the eye as a
Markov random field and uses graph cuts to minimize an
objective function that enforces spatial continuity in the re-
gion labels found. Four labels are assigned: iris, pupil, eye-
lashes, and background, in addition to specular reflections.
By automatically choosing the expected graylevel values
via histogramming, the algorithm adapts to variations in im-
ages, enabling it to handle non-ideal iris images containing
out-of-plane rotation, extensive iris occlusion by eyelashes
and eyelids, and various illumination effects.
An overview of our approach is presented in Figure 2.
The first step is a simple preprocessing procedure applied to
the input images to deal with specular reflections which may
cause errors in segmentation. In the second step we perform
texture computation for eyelash segmentation by measuring
the amount of intensity variations in the neighborhood of a
pixel and generating a probability map in which each pixel
is assigned a probability of belonging to a highly textured
region. This pixel probability map is fed to an energy min-
imization procedure that uses graph cuts to produce a bi-
nary segmentation of the image separating the eyelash and
non-eyelash pixels. A simple postprocessing step applies
morphological operations to refine the eyelash segmenta-
tion results. The next step is to segment iris images based on
grayscale intensity. The iris refinement step involves fitting
ellipses to the segmented iris regions for parameter estima-
tion. The final step is to combine the iris region mask and
the specular reflection mask to output usable iris regions.
These steps are described in more detail in the following
sections.
2. Removing Specular Reflections
Specular reflections are a major cause of errors in iris recognition systems because the affected iris pixels cannot be used for recognition. In our case, these bright spots (see Figure 3) also cause segmentation errors: high texture values are assigned to the pixels surrounding them, which are in turn segmented as eyelashes. We
adopt a straightforward preprocessing procedure to remove
the specular reflections from the input iris images. Let R
be the raw input image. The output of this preprocessing
Figure 3. Removing specular reflection in iris images. LEFT: In-
put image. RIGHT: Preprocessed image with specular reflections
removed.
step is the preprocessed iris image I with the reflections removed, together with a binary mask M_R marking the pixels removed from R. We maintain a list of all the pixel locations in the input image whose grayscale intensity is higher than a preset threshold, along with their immediate neighbors. The values of the pixel locations in the list are set to
zero, i.e., these pixels are unpainted. The list is sorted ac-
cording to the number of painted neighbors each pixel has.
Starting from the first element in the list, grayscale values
are linearly interpolated until all the unpainted pixels are as-
signed a valid gray value. Results of the specular reflection
removal algorithm are shown in Figure 3. It should be noted
that the painted pixels obtained by the above algorithm can-
not be used for iris recognition and are discarded or masked
while constructing the iris signatures.
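As a rough sketch of this preprocessing step (hedged: the threshold value is an assumption, and the fill is a simplified stand-in that averages already-painted 4-neighbors rather than the sorted linear interpolation described above):

```python
import numpy as np

def remove_specular(raw, thresh=240):
    """Mask near-saturated pixels and their immediate neighbours,
    then fill them from the surrounding painted pixels."""
    img = raw.astype(float)
    bright = img > thresh
    # include the immediate 4-neighbours of each bright pixel
    mask = bright.copy()
    mask[1:, :] |= bright[:-1, :]
    mask[:-1, :] |= bright[1:, :]
    mask[:, 1:] |= bright[:, :-1]
    mask[:, :-1] |= bright[:, 1:]
    img[mask] = 0.0          # these pixels are "unpainted"
    painted = ~mask
    H, W = img.shape
    while not painted.all():
        newly = painted.copy()
        for i, j in zip(*np.nonzero(~painted)):
            vals = [img[ni, nj]
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= ni < H and 0 <= nj < W and painted[ni, nj]]
            if vals:         # fill from already-painted neighbours
                img[i, j] = sum(vals) / len(vals)
                newly[i, j] = True
        if newly.sum() == painted.sum():
            break            # no progress (fully masked image)
        painted = newly
    return img, mask         # preprocessed image I and mask M_R
```

Cutting corners on the exact interpolation scheme matters little here, since the painted pixels are discarded via the mask M_R when the iris signature is built.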
3. Segmentation of Eyelashes
3.1. Texture Computation
Let I be an image with N pixels, and let I
x
and I
y
de-
note the derivatives of the image in the x and y directions,
respectively. For each image pixel n, texture is computed
using the gradient covariance matrix, which captures the in-
tensity variation in the different directions [15]:
G(n) =
X
n
∈N
g
(n)
I
2
x
(n
) I
x
(n
)I
y
(n
)
I
x
(n
)I
y
(n
) I
2
y
(n
)
, (1)
where N
g
(n) is the local neighborhood around the pixel. If
both the eigenvalues of G(n) are large, then the pixel n has
large intensity variations in orthogonal directions. This is
usually known as a point feature and is indicative of a high
amount of texture in its immediate neighborhood. Letting
e
1
and e
2
be the two eigenvalues of G(n), we detect points
for which h(n) = min{e
1
, e
2
} > τ , where τ is a threshold.
The value h(n) is indicative of the quality of the feature.
Depending upon the value of τ, we can adjust the quality
and hence the number of such points detected in any image.
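This point-feature measure can be sketched as follows; the window size and the derivative filter (np.gradient) are assumptions, and a production system would use an optimized corner detector rather than a per-pixel Python loop:

```python
import numpy as np

def min_eigen_map(img, half=1):
    """h(n) = min eigenvalue of the gradient covariance matrix G(n),
    summed over a (2*half+1)^2 neighbourhood around each pixel (cf. Eq. 1)."""
    Ix, Iy = np.gradient(img.astype(float))
    Ixx, Ixy, Iyy = Ix * Ix, Ix * Iy, Iy * Iy
    H, W = img.shape
    h = np.zeros((H, W))
    for i in range(half, H - half):
        for j in range(half, W - half):
            a = Ixx[i - half:i + half + 1, j - half:j + half + 1].sum()
            b = Ixy[i - half:i + half + 1, j - half:j + half + 1].sum()
            c = Iyy[i - half:i + half + 1, j - half:j + half + 1].sum()
            # closed-form eigenvalues of the symmetric 2x2 matrix [[a,b],[b,c]]
            tr, det = a + c, a * c - b * b
            disc = max(tr * tr / 4.0 - det, 0.0) ** 0.5
            h[i, j] = tr / 2.0 - disc
    return h
```

Thresholding this map at τ yields the point features; a flat region scores zero, an edge scores near zero (one large eigenvalue only), and a corner-like neighborhood scores high.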
Let f_i be the i-th point feature detected in the image, with corresponding weight h(f_i) > τ, i = 1, ..., M. Here M ≪ N, i.e., the number of point features detected is much smaller than the number of image pixels. We need a dense map that assigns a probability value to each pixel in the input image. To accomplish this, we compute an oriented histogram of point features, weighted by their values, in a region around

a pixel in an image. This spatial histogram is defined by two concentric circles of radii r_1 and r_2 centered around a pixel n. The inner and outer circular regions are represented by H_n and \bar{H}_n, respectively. These regions are divided into K bins, each spanning (360/K) degrees and carrying an equal weight of ω_b. The bin values of this 2D oriented histogram are further multiplied by the weight associated with the circular region of which the bin is a part, i.e., bins in the inner circle are weighted by ω_{r_1} while the outer ones are weighted by ω_{r_2}. The feature point score at a pixel n is obtained from the normalized sum of all the bins at that point:

    P_f(n) = \frac{1}{K} \sum_{k=1}^{K} \omega_b \left( \sum_{f \in H_n(k)} \omega_{r_1} h(f) + \sum_{f \in \bar{H}_n(k)} \omega_{r_2} h(f) \right),    (2)

where H_n(k) and \bar{H}_n(k) are the sets of features contributing to the k-th bins of the two histograms.
The feature point score alone cannot give a substantive
measure of texture in an image because the feature points
represent locations where image intensity changes occur in
both x and y directions. To effectively compute the texture
around a point, we have to account for all the neighboring
points with gradient changes in a single direction. To ad-
dress this problem, we sum the gradient magnitudes in the
neighborhood of a pixel in a manner similar to the one de-
scribed above in the case of finding the feature point score
in Equation (2). The score due to gradients is given by
    P_g(n) = \frac{1}{K} \sum_{k=1}^{K} \omega_b \left( \sum_{j \in R(k)} \omega_{r_1} g(j) + \sum_{j \in \bar{R}(k)} \omega_{r_2} g(j) \right),    (3)

where

    g(j) = \sqrt{I_x^2(j) + I_y^2(j)}

is the gradient magnitude at the j-th pixel, and R(k) and \bar{R}(k) are the image regions specified by the k-th bins of the two histograms.
The total score for a pixel is the sum of the feature point score and the gradient score:

    P(n) = P_f(n) + P_g(n).    (4)
We compute the total score for each pixel and normalize the
values to obtain a probability map that assigns the proba-
bility of each pixel having high texture in its neighborhood.
Figure 4 shows the various texture measures and the texture
probability map obtained for an iris image.
3.2. Image Bipartitioning using Graph Cuts
Once the texture probability map is obtained for an input
image, it is desirable that the segmentation produces smooth
Figure 4. Eyelash segmentation details. LEFT: Steps involved in
the texture computation. RIGHT: Binary graph cuts on an image.
For clarity, only a few nodes and corresponding links are shown.
Thicker links denote greater affinity between the corresponding
nodes or terminals (i.e., t-links between terminals and nodes and
n-links between two nodes).
regions as an output. This problem can be considered as a binary labeling problem. Our goal is to assign a label l ∈ {0, 1} to each pixel in the image based on the probability map P. Let ψ : x → l be a function that maps an image pixel x to a label l. If D_n(l_n) represents the energy associated with assigning label l_n to the n-th pixel, then the energy term to be minimized is given by

    E(\psi) = E_S(\psi) + \lambda E_D(\psi),    (5)

where

    E_S(\psi) = \sum_{n=1}^{N} \sum_{m \in N_s(n)} S_{n,m}(l_n, l_m)    (6)

    E_D(\psi) = \sum_{n=1}^{N} D_n(l_n).    (7)
In these equations, E_S(ψ) is the smoothness energy term that enforces spatial continuity in the regions, N is the number of pixels in the image, N_s(n) is the neighborhood of the n-th pixel, and λ is the regularization parameter. The data penalty term, derived from P, is given by:

    D_n(l_n) = \exp\{\rho\,(l_n - P(n))\},

where

    \rho = \begin{cases} 1 & \text{if } l_n = 1 \\ -1 & \text{if } l_n = 0 \end{cases}.

The smoothness term is given by:

    S_{m,n}(l_m, l_n) = [1 - \delta(m, n)] \exp\{-\|I(m) - I(n)\|^2\},

Figure 5. Iris segmentation. TOP: Grayscale histogram of a typical iris image and a smoothed version on the right with peak detection (peaks labeled iris, pupil, and background). BOTTOM: Iris image for which the histogram is computed and the corresponding segmentation.
where δ(m, n) = 1 when m = n, and 0 otherwise, and I(m) and I(n) are the image intensities of the m-th and n-th pixels, respectively.
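The data and smoothness terms above translate directly into code; a minimal sketch (evaluating them pointwise — in practice they would be computed over the whole pixel grid and its neighborhood system):

```python
import numpy as np

def data_term(P, label):
    """D_n(l_n) = exp{rho (l_n - P(n))}, with rho = +1 for l_n = 1
    and rho = -1 for l_n = 0."""
    rho = 1.0 if label == 1 else -1.0
    return np.exp(rho * (label - P))

def smoothness_term(I_m, I_n):
    """S_{m,n} = exp{-||I(m) - I(n)||^2} for distinct neighbours m, n."""
    return np.exp(-np.abs(I_m - I_n) ** 2)
```

Note how the terms act as costs: labeling a pixel 1 is cheap where P is high (exp(1 − P) shrinks), labeling it 0 is cheap where P is low, and the smoothness penalty is large between similar neighboring intensities, so the minimum cut prefers to pass along intensity edges.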
The energy term in Equation (5) is minimized by a
graph cut algorithm [2]. The image can be considered as
a weighted graph G(V, E), where the vertices V are the pix-
els, and the edges E are the links between neighboring pix-
els. For a binary graph cut problem, two additional nodes
known as source and sink terminals are added to the graph.
The terminals correspond to the labels being assigned to the
nodes, i.e., pixels of the image. In this case, the source ter-
minal corresponds to the high-texture label, while the sink
terminal is associated with the low-texture label. A cut C is
a set of edges that separates the source and sink terminals
such that no subsets of the edges themselves separate the
two terminals. The sum of the weights of the edges in the
cut is the capacity of the cut. The goal is to find the mini-
mum cut, i.e., the cut for which the sum of the edge weights
in the cut is minimum. Figure 4 is a representative diagram
showing the process of partitioning the input image.
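The min-cut computation itself can be illustrated with a generic shortest-augmenting-path (Edmonds–Karp) max-flow sketch on a dict-of-dicts graph; the actual system uses the optimized algorithm of [2], and the toy graph and capacities in the test below are made up purely to show the cut idea:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dict-of-dicts capacity graph.
    Returns the max-flow value and the source side of the minimum cut."""
    res = {u: dict(vs) for u, vs in cap.items()}   # residual capacities
    for u in list(cap):
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)  # reverse edges
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            # BFS-reachable set in the residual graph = source side of cut
            return flow, set(parent)
        # trace the path and push its bottleneck capacity
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][w] for u, w in path)
        for u, w in path:
            res[u][w] -= aug
            res[w][u] += aug
        flow += aug
```

By the max-flow/min-cut theorem, the returned flow value equals the capacity of the minimum cut, and the reachable set in the final residual graph gives the source-side (here, high-texture) labeling.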
4. Iris Segmentation
Iris segmentation is based upon the same energy min-
imization approach described in the previous section, ex-
cept that it involves more than two labels. In fact, for a
typical image, four labels are considered: eyelash, pupil,
iris, and background (i.e., the rest of the eye). Since the
eyelash segmentation already provides us with a binary la-
beling that separates the eyelash pixels, our problem is re-
duced to that of assigning labels to the remaining pixels in
the image. Although this is an NP-hard problem, the solu-
tion provided by the α-β swap graph-cut algorithm [3] is
in practice a close approximation to the global minimum.
The algorithm works by initially assigning random labels to
the pixels. Then for all possible pairs of labels, the pixels
assigned to those labels are allowed to swap their label in
order to minimize the energy of Equation (5). The new la-
beling is retained only if the energy is minimized, and this
procedure is repeated until the overall energy is not further
minimized. Convergence is usually obtained in a few (about
3–4) iterations. Grayscale intensities of the pixels are used
to compute the data energy term of Equation (7). Figure
5 shows the grayscale histogram of a typical image of an
eye. The three peaks in the histogram correspond to the
grayscale intensities of the pupil, iris, and background. The
desired grayscale values for the pupil, iris, and background
regions are obtained via a simple histogram peak detecting
algorithm, where we assume that the first local maximum
corresponds to the pupil region, the second to the iris, and
so on. Figure 5 shows the iris segmentation obtained using
this approach.
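A sketch of this peak-seeding step (the Gaussian smoothing width, bin count, and the simple local-maximum test are assumptions):

```python
import numpy as np

def histogram_peaks(gray_values, bins=256, sigma=5):
    """Smooth the grayscale histogram with a Gaussian and return the
    bin indices of local maxima; taken in ascending intensity, these
    seed the pupil, iris, and background labels."""
    hist, _ = np.histogram(gray_values, bins=bins, range=(0, 256))
    x = np.arange(-3 * sigma, 3 * sigma + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    smooth = np.convolve(hist, kernel, mode='same')
    return [i for i in range(1, bins - 1)
            if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
```

As noted below, this seeding is fragile under multi-modal intensity distributions; the sketch only captures the three-peak case described here.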
The quality of iris segmentation depends on the nature of
the image and is highly susceptible to noise and illumination
effects in the input images. To overcome these problems,
we use a priori information regarding the eye geometry for
refining the segmentation of the iris region. Specifically, we
assume the iris can be approximated by an ellipse centered
on the pupil and aligned with the image axes. Even if these
assumptions are not valid for some images, they serve as a
good starting point for estimating the iris region. The pre-
vious segmentation step provides us with a location of the
pupil center. In our experiments, we observed that the pupil
is accurately segmented in almost all cases even if the over-
all image quality is poor. However, in certain cases, other
dark regions are mistakenly labeled as pupil. These mis-
takes are easily corrected by enforcing a maximum eccen-
tricity on the dark region to distinguish the true pupil from
these distracting pixels.
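The eccentricity check can be sketched from second-order region moments (a standard moment-based formula; the acceptance threshold would be tuned to the data):

```python
import numpy as np

def region_eccentricity(mask):
    """Eccentricity of a binary region from its second-order central
    moments: 0 for a circle, approaching 1 for elongated regions."""
    ys, xs = np.nonzero(mask)
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    mxx, myy, mxy = (xs * xs).mean(), (ys * ys).mean(), (xs * ys).mean()
    # eigenvalues of the 2x2 covariance matrix of the pixel coordinates
    half_sum = (mxx + myy) / 2.0
    half_diff = np.sqrt(((mxx - myy) / 2.0) ** 2 + mxy ** 2)
    l1, l2 = half_sum + half_diff, half_sum - half_diff
    return np.sqrt(1.0 - l2 / l1) if l1 > 0 else 0.0
```

A compact dark region (the true pupil) scores near zero, while elongated dark distractors such as eyelash clumps score close to one and can be rejected by a maximum-eccentricity test.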
In order to find the best fitting ellipse to the segmented
iris region, points near the iris boundary must be reliably
located considering the possibilities that the segmented iris
region may not have an elliptical shape, and that the iris may
be occluded partly by the eyelashes (on the top or bottom
or both). In other words, even if we know the approximate
location of the center of the iris (i.e., the pupil center), its
exact extent in both the x and y directions cannot be naively
ascertained using the segmented iris regions. For a reliable
initial estimate of iris boundary points, we extend rays from the pupil center in all directions (360°) in one-degree increments and find those locations where the lines transition
from an iris region to the background region (see Figure 6).
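A sketch of this ray-casting step on a label image (the label encoding and the transition test — here simply the last pixel along each ray that carries the iris label — are assumptions):

```python
import numpy as np

def boundary_points(labels, cx, cy, iris_label=2, step_deg=1):
    """Cast rays from the pupil center (cx, cy) and record, per ray,
    the last pixel that still carries the iris label."""
    H, W = labels.shape
    pts = []
    for deg in range(0, 360, step_deg):
        t = np.deg2rad(deg)
        dx, dy = np.cos(t), np.sin(t)
        last = None
        for r in range(1, max(H, W)):
            x = int(round(cx + r * dx))
            y = int(round(cy + r * dy))
            if not (0 <= y < H and 0 <= x < W):
                break                       # ray left the image
            if labels[y, x] == iris_label:
                last = (x, y)               # iris-to-background transition
        if last is not None:
            pts.append(last)
    return pts
```

On a poorly segmented image many rays return nothing, which is exactly why the symmetry-based point augmentation described next is needed.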
Because all these lines extending out from a center point
may not lead to an iris boundary point, only a subset of the
360 points is obtained. To increase the number of points
(and hence increase the reliability of the ellipse fitting pro-
cedure), we utilize the inherent symmetry of the iris region.
For each ellipse point, a new point is generated about the
vertical symmetry line passing through the center of the iris,
if a point does not already exist for that direction. In addi-
tion, points whose distance from the pupil center exceeds

Figure 6. Refining the iris segmentation. TOP LEFT: Iris segmen-
tation image with pupil center overlaid (green dot). The lines originating from the center point in all directions (360°) intersect
with the iris boundary at points shown in red. For clarity only a
subset of lines and corresponding points are shown. TOP RIGHT:
Potential iris boundary points. Due to erroneous segmentation, the
full set of points is not obtained. BOTTOM LEFT: Increasing the
iris boundary points using the pupil center and the inherent sym-
metry in the iris regions. BOTTOM RIGHT: Ellipse fitting to the
potential iris boundary points leads to an erroneous result (red el-
lipse), while fitting to the increased boundary points leads to the
correct result (yellow ellipse).
1.5 times the distance of the closest point to the pupil center
are rejected. This yields a substantial set of points to which
an ellipse is fit using the least squares method proposed by
Fitzgibbon et al. [9]. Figure 6 summarizes this process and
shows the results of our ellipse fitting algorithm.
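The direct least-squares ellipse fit of Fitzgibbon et al. [9] can be sketched as a generalized eigenproblem; this is the numerically naive form (real implementations use a stabilized block decomposition):

```python
import numpy as np

def fit_ellipse(x, y):
    """Fitzgibbon-style direct fit: minimize ||D a||^2 subject to the
    ellipse constraint 4*A*C - B^2 = 1, via an eigenproblem."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # constraint matrix: a^T C a = 4AC - B^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    evals, evecs = np.linalg.eig(np.linalg.inv(S) @ C)
    # the ellipse solution is the eigenvector with the positive eigenvalue
    return evecs[:, np.argmax(evals.real)].real
```

The returned vector holds the implicit conic coefficients (A, B, C, D, E, F) of Ax² + Bxy + Cy² + Dx + Ey + F = 0, from which the center and x/y radii reported in Table 1 can be derived.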
5. Experimental Results
We tested our approach on various non-ideal iris images
captured using a near infrared camera. Figure 7 shows the
results of our approach on some sample images obtained
from the West Virginia University (WVU) Non-Ideal Iris
database [4] (a sample of images can be found online¹).
It can be seen that each step in our approach aids the next
one. For example, eyelash segmentation helps in iris seg-
mentation by removing the eyelashes which may cause er-
rors in iris segmentation. To perform eyelash segmentation
we used 8-bin histograms for computing feature points and
gradient scores (K = 8). The bin weight ω_b is set to 0.125, while ω_{r_1} = 1, ω_{r_2} = 0.75, and τ = 50. It can be seen
that despite using a simple texture measure, the algorithm
is able to accurately segment regions. The iris segmentation
step, in turn, helps the iris refinement step, and the prepro-
cessing step to remove specular reflections is also helpful in
iris segmentation and building a mask of usable iris regions.
To quantitatively evaluate our results we compared our
iris localization results with direct ground truth. We used 60
iris images (40 with out-of-plane rotation) from the WVU
Non-Ideal Iris image database for iris localization and ver-
ification. We manually marked the iris regions in the input
¹ http://www.csee.wvu.edu/~xinl/demo/nonideal_iris.html
Iris Parameter    Average Error (pixels)    Standard Deviation (pixels)
Center (x)        1.9                       2.2
Center (y)        2.7                       2.5
Radius (x)        3.4                       5.2
Radius (y)        3.9                       4.0
Pixel labels      5.9%                      7.2%
Table 1. Comparison of estimated iris region parameters with the ground truth data for 60 images from the WVU Non-Ideal Iris database.
images and obtained the ground truth parameters such as
the location of the center of the iris and the x and y radius
values. We also obtained a mask of the usable iris regions
(without specular reflections) from the original image. The
parameters of our estimated iris region were compared with
ground truth in terms of the iris center location, x and y ra-
dius, and the number of pixels in agreement with the iris
label. Table 1 shows that the average error in the estimation
of iris region parameters as compared to the ground truth is
small, indicating accurate segmentation and localization.
6. Conclusion
This paper presents a novel approach for non-ideal iris
localization using graph cuts. Key components of the ap-
proach include a novel texture-based eyelash segmentation
technique which helps in accurate iris segmentation and lo-
calization. Unlike many of the existing approaches which
use extensive eye geometry heuristics, our approach uses
eye geometry for refining the results of iris segmentation
and is not overly constrained by the location and size of the
iris in the input images. Since we explicitly segment the
eyelashes from the input images, we can account for the oc-
clusion of the iris by the eyelids and eyelashes. An added
advantage of this algorithm is that it can handle specular re-
flections that affect iris recognition procedures by removing
them from the detected iris regions.
Many improvements can be made to the existing ap-
proach at each stage of its operation. The texture measure
used by the current algorithm can be modified by including
gradient orientation cues to improve the accuracy of eye-
lash segmentation. The current iris segmentation is some-
what limited as it relies on histogram peaks of the images
to assign labels; therefore, multi-modal distributions of in-
tensities in any of the regions can lead to errors. This can
be improved by using an approach that uses both intensity
distributions and intensity edges to compute the objective
function. Another important improvement to be made is
to reduce the overall computation time of the algorithm.
Currently, the algorithm requires about two seconds for eyelash segmentation and an additional three seconds for iris localization using

References
J. Shi and C. Tomasi, "Good features to track," Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 1994.
Y. Boykov, O. Veksler, and R. Zabih, "Fast approximate energy minimization via graph cuts," IEEE Trans. on Pattern Analysis and Machine Intelligence, 2001.
Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Trans. on Pattern Analysis and Machine Intelligence, 2004.
J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. on Pattern Analysis and Machine Intelligence, 1993.
Y. Boykov, O. Veksler, and R. Zabih, "Fast approximate energy minimization via graph cuts," Proc. IEEE Int'l Conf. on Computer Vision, 1999.