Robust Sparse Coding for Face Recognition
Meng Yang, Lei Zhang (Hong Kong Polytechnic Univ.), Jian Yang (Nanjing Univ. of Sci. & Tech.), David Zhang (Hong Kong Polytechnic Univ.)

(Corresponding author. This research is supported by the Hong Kong General Research Fund (PolyU 5351/08E).)
Abstract
Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the $\ell_2$-norm or $\ell_1$-norm of the coding residual. Such a sparse coding model actually assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely robust sparse coding (RSC), by modeling sparse coding as a sparsity-constrained robust regression problem. RSC seeks the maximum likelihood estimation (MLE) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.
1. Introduction
As a powerful tool for statistical signal modeling, sparse representation (or sparse coding) has been successfully used in image processing applications [16], and has recently led to promising results in face recognition [24, 25, 27] and texture classification [15]. Based on the finding that natural images can generally be coded by structural primitives (e.g., edges and line segments) that are qualitatively similar in form to simple-cell receptive fields [18], sparse coding techniques represent a natural image using a small number of atoms parsimoniously chosen out of an over-complete dictionary. Intuitively, the sparsity of the coding coefficient vector can be measured by its $\ell_0$-norm, which counts the number of nonzero entries in a vector. Since the combinatorial $\ell_0$-norm minimization is an NP-hard problem, the $\ell_1$-norm minimization, as the closest convex surrogate of the $\ell_0$-norm, is widely employed in sparse coding, and it has been shown that $\ell_0$-norm and $\ell_1$-norm minimizations are equivalent if the solution is sufficiently sparse [3]. In general, the sparse coding problem can be formulated as
$$\min_{\alpha} \|\alpha\|_1 \quad \text{s.t.} \quad \|y - D\alpha\|_2^2 \le \varepsilon, \qquad (1)$$
where $y$ is a given signal, $D$ is the dictionary of coding atoms, $\alpha$ is the coding vector of $y$ over $D$, and $\varepsilon > 0$ is a constant.
Face recognition (FR) is among the most visible and challenging research topics in computer vision and pattern recognition [29], and many methods, such as Eigenfaces [21], Fisherfaces [2] and SVM [7], have been proposed in the past two decades. Recently, Wright et al. [25] applied sparse coding to FR and proposed the sparse representation based classification (SRC) scheme, which achieves impressive FR performance. By coding a query image $y$ as a sparse linear combination of the training samples via the $\ell_1$-norm minimization in Eq. (1), SRC classifies the query image $y$ by evaluating which class of training samples results in the minimal reconstruction error with the associated coding coefficients. In addition, by introducing an identity matrix $I$ as a dictionary to code the outlier pixels (e.g., corrupted or occluded pixels):
$$\min_{\alpha, \beta} \|[\alpha; \beta]\|_1 \quad \text{s.t.} \quad y = [D, I]\,[\alpha; \beta], \qquad (2)$$
the SRC method shows high robustness to face occlusion and corruption. In [9], Huang et al. proposed a sparse representation recovery method which is invariant to image-plane transformation to deal with the misalignment and pose variation in FR, while in [22] Wagner et al. proposed a sparse representation based method that can deal with face misalignment and illumination variation. Instead of directly using the original facial features, Yang and Zhang [27] used Gabor features in SRC to greatly reduce the size of the occlusion dictionary and considerably improve the FR accuracy.
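To make the SRC pipeline concrete, the following sketch codes a query image over the training dictionary with an $\ell_1$-regularized least-squares solver and classifies it by class-wise reconstruction residuals. This is a minimal illustration rather than the authors' implementation: the use of scikit-learn's Lasso (whose regularization scaling differs from the $\lambda$ used later in the paper), the parameter values, and the helper name are our own assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D, labels, y, lam=0.001):
    """Minimal SRC sketch. D: n x m matrix whose l2-normalized columns are
    training samples; labels: length-m array of class labels; y: query image."""
    labels = np.asarray(labels)
    # l1-regularized coding of y over D (Lagrangian form of Eq. (1))
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    solver.fit(D, y)
    alpha = solver.coef_
    # Classify by the class whose training samples give the smallest residual
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        alpha_c = np.where(labels == c, alpha, 0.0)   # keep coefficients of class c only
        res = np.linalg.norm(y - D @ alpha_c)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```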
The sparse coding model in Eq. (1) is widely used in the literature. There are mainly two issues in this model. The first is whether the $\ell_1$-norm constraint $\|\alpha\|_1$ is good enough to characterize the signal sparsity. The second is whether the $\ell_2$-norm term $\|y - D\alpha\|_2^2 \le \varepsilon$ is effective enough to characterize the signal fidelity, especially when the observation $y$ is noisy or has many outliers. Much work has been done on the first issue by modifying the sparsity constraint. For example, Liu et al. [14] added a nonnegative constraint to the sparse coefficient $\alpha$; Gao et al. [4] introduced a Laplacian term of the coefficients in sparse coding; Wang et al. [23] used the weighted $\ell_2$-norm for the sparsity constraint. In addition, Ramirez et al. [19] proposed a framework of universal sparse modeling to design sparsity regularization terms. Bayesian methods have also been used to design sparsity regularization terms [11].
The above developments of the sparsity regularization term in Eq. (1) improve the sparse representation in different aspects; however, to the best of our knowledge, little work has been done on improving the fidelity term $\|y - D\alpha\|_2^2$, except that in [24, 25] the $\ell_1$-norm was used to define the coding fidelity (i.e., $\|y - D\alpha\|_1$). In fact, the fidelity term has a high impact on the final coding results because it ensures that the given signal $y$ can be faithfully represented by the dictionary $D$. From the viewpoint of maximum likelihood estimation (MLE), defining the fidelity term with the $\ell_2$- or $\ell_1$-norm actually assumes that the coding residual $e = y - D\alpha$ follows a Gaussian or Laplacian distribution. In practice, however, this assumption may not hold well, especially when occlusions, corruptions and expression variations occur in the query face images. So the conventional $\ell_2$- or $\ell_1$-norm based fidelity term in the sparse coding model of Eq. (1) may not be robust enough in these cases. Meanwhile, these problems cannot be well solved by modifying the sparsity regularization term.
To improve the robustness and effectiveness of sparse representation, we propose a robust sparse coding (RSC) model in this paper. Inspired by robust regression theory [1, 10], we design the signal fidelity term as an MLE-like estimator, which minimizes some function (associated with the distribution of the coding residuals) of the coding residuals. The proposed RSC scheme utilizes the MLE principle to robustly regress the given signal with sparse regression coefficients, and we transform the minimization problem into an iteratively reweighted sparse coding problem. A reasonable weight function is designed for applying RSC to FR. Our extensive experiments on benchmark face databases show that RSC achieves much better performance than existing sparse coding based FR methods, especially when there are complicated variations of the face images, such as occlusions, corruptions and expressions, etc.
The rest of this paper is organized as follows. Section 2 presents the proposed RSC model. Section 3 presents the algorithm of RSC and some analyses, such as convergence and complexity. Section 4 conducts the experiments, and Section 5 concludes the paper.
2. Robust Sparse Coding (RSC)
2.1. The RSC model
The traditional sparse coding model in Eq. (1) is equivalent to the so-called LASSO problem [20]:
$$\min_{\alpha} \|y - D\alpha\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_1 \le \sigma, \qquad (3)$$
where $\sigma > 0$ is a constant, $y = [y_1; y_2; \cdots; y_n] \in \mathbb{R}^n$ is the signal to be coded, $D = [d_1, d_2, \cdots, d_m] \in \mathbb{R}^{n \times m}$ is the dictionary with column vector $d_j$ being its $j$th atom, and $\alpha$ is the coding coefficient vector. In our FR problem, the atom $d_j$ is a training face sample (or its dimensionality-reduced feature) and hence the dictionary $D$ is the training dataset.
We can see that the sparse coding problem in Eq. (3) is essentially a sparsity-constrained least squares estimation problem. It is known that the least squares solution is the MLE solution only when the residual $e = y - D\alpha$ follows the Gaussian distribution. If $e$ follows the Laplacian distribution, the MLE solution will be
$$\min_{\alpha} \|y - D\alpha\|_1 \quad \text{s.t.} \quad \|\alpha\|_1 \le \sigma. \qquad (4)$$
Actually Eq. (4) is essentially another expression of Eq. (2), because both of them have the following Lagrangian formulation: $\min_{\alpha} \{\|y - D\alpha\|_1 + \lambda \|\alpha\|_1\}$ [26].

In practice, however, the distribution of the residual $e$ may be far from Gaussian or Laplacian, especially when there are occlusions, corruptions and/or other variations. Hence, the conventional sparse coding models in Eq. (3) (or Eq. (1)) and Eq. (4) (or Eq. (2)) may not be robust and effective enough for face image representation.
In order to construct a more robust model for sparse coding of face images, in this paper we propose to find an MLE solution of the coding coefficients. We rewrite the dictionary $D$ as $D = [r_1; r_2; \cdots; r_n]$, where the row vector $r_i$ is the $i$th row of $D$. Denote by $e = y - D\alpha = [e_1; e_2; \cdots; e_n]$ the coding residual. Then each element of $e$ is $e_i = y_i - r_i \alpha$, $i = 1, 2, \cdots, n$. Assume that $e_1, e_2, \cdots, e_n$ are independently and identically distributed according to some probability density function (PDF) $f_{\theta}(e_i)$, where $\theta$ denotes the parameter set that characterizes the distribution. Without considering the sparsity constraint on $\alpha$, the likelihood of the estimator is $L_{\theta}(e_1, e_2, \cdots, e_n) = \prod_{i=1}^{n} f_{\theta}(e_i)$, and MLE aims to maximize this likelihood function or, equivalently, minimize the objective function $-\ln L_{\theta} = \sum_{i=1}^{n} \rho_{\theta}(e_i)$, where $\rho_{\theta}(e_i) = -\ln f_{\theta}(e_i)$.
With consideration of the sparsity constraint on $\alpha$, the MLE of $\alpha$, namely the robust sparse coding (RSC), can be formulated as the following minimization:
$$\min_{\alpha} \sum_{i=1}^{n} \rho_{\theta}(y_i - r_i \alpha) \quad \text{s.t.} \quad \|\alpha\|_1 \le \sigma. \qquad (5)$$
In general, we assume that the unknown PDF $f_{\theta}(e_i)$ is symmetric, and $f_{\theta}(e_i) < f_{\theta}(e_j)$ if $|e_i| > |e_j|$. So $\rho_{\theta}(e_i)$ has the following properties: $\rho_{\theta}(0)$ is the global minimum of $\rho_{\theta}(e_i)$; $\rho_{\theta}(e_i) = \rho_{\theta}(-e_i)$; and $\rho_{\theta}(e_i) > \rho_{\theta}(e_j)$ if $|e_i| > |e_j|$. Without loss of generality, we let $\rho_{\theta}(0) = 0$.
From Eq. (5), we can see that the proposed RSC model is essentially a sparsity-constrained MLE problem. In other words, it is a more general sparse coding model, and the conventional sparse coding models in Eq. (3) and Eq. (4) are special cases of it when the coding residual follows the Gaussian and Laplacian distributions, respectively.
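For concreteness, substituting a Gaussian or a Laplacian residual PDF into the MLE objective recovers exactly the fidelity terms of Eq. (3) and Eq. (4). The short derivation below is ours, with $\sigma_e$ and $b$ denoting the scale parameters of the two distributions; additive constants and positive scale factors do not change the minimizer.

```latex
\begin{align*}
  % Gaussian residuals recover the l2 fidelity of Eq. (3)
  f_{\theta}(e_i) \propto e^{-e_i^2 / 2\sigma_e^2}
    &\;\Rightarrow\; \rho_{\theta}(e_i) = \tfrac{1}{2\sigma_e^2}\, e_i^2 + \mathrm{const}
    \;\Rightarrow\; \min_{\alpha} \textstyle\sum_{i} \rho_{\theta}(y_i - r_i\alpha)
      \;\equiv\; \min_{\alpha} \|y - D\alpha\|_2^2 , \\
  % Laplacian residuals recover the l1 fidelity of Eq. (4)
  f_{\theta}(e_i) \propto e^{-|e_i| / b}
    &\;\Rightarrow\; \rho_{\theta}(e_i) = \tfrac{1}{b}\, |e_i| + \mathrm{const}
    \;\Rightarrow\; \min_{\alpha} \textstyle\sum_{i} \rho_{\theta}(y_i - r_i\alpha)
      \;\equiv\; \min_{\alpha} \|y - D\alpha\|_1 ,
\end{align*}
```

with the sparsity constraint $\|\alpha\|_1 \le \sigma$ carried along in both cases.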
By solving Eq. (5), we can get the MLE solution of $\alpha$ under the sparsity constraint. Clearly, one key problem is how to determine the distribution $\rho_{\theta}$ (or $f_{\theta}$). Explicitly taking $f_{\theta}$ as a Gaussian or Laplacian distribution is simple but not effective enough. In this paper, we do not determine $\rho_{\theta}$ directly to solve Eq. (5). Instead, with the above-mentioned general assumptions on $\rho_{\theta}$, we transform the minimization problem in Eq. (5) into an iteratively reweighted sparse coding problem, and the resulting weights have a clear physical meaning, i.e., outliers will have low weight values. By iteratively computing the weights, the MLE solution of RSC can be obtained efficiently.
2.2. The distribution induced weights
Let $F_{\theta}(e) = \sum_{i=1}^{n} \rho_{\theta}(e_i)$. We can approximate $F_{\theta}(e)$ by its first-order Taylor expansion in the neighborhood of $e_0$:
$$\tilde{F}_{\theta}(e) = F_{\theta}(e_0) + (e - e_0)^T F'_{\theta}(e_0) + R_1(e),$$
where $R_1(e)$ is the high-order residual term and $F'_{\theta}(e)$ is the derivative of $F_{\theta}(e)$. Denote by $\rho'_{\theta}$ the derivative of $\rho_{\theta}$; then $F'_{\theta}(e_0) = [\rho'_{\theta}(e_{0,1}); \rho'_{\theta}(e_{0,2}); \cdots; \rho'_{\theta}(e_{0,n})]$, where $e_{0,i}$ is the $i$th element of $e_0$.
In sparse coding, it is usually expected that the fidelity term is strictly convex. So we approximate the residual term as $R_1(e) = \frac{1}{2}(e - e_0)^T W (e - e_0)$, where $W$ is a diagonal matrix because the elements of $e$ are independent and there is no cross term between $e_i$ and $e_j$, $i \ne j$, in $F_{\theta}(e)$. Since $F_{\theta}(e)$ reaches its minimum value (i.e., 0) at $e = 0$, we also require that $\tilde{F}_{\theta}(e)$ attains its minimum at $e = 0$. Letting $\tilde{F}'_{\theta}(0) = 0$, we have the diagonal elements of $W$ as
$$W_{i,i} = \omega_{\theta}(e_{0,i}) = \rho'_{\theta}(e_{0,i}) / e_{0,i}. \qquad (6)$$
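The algebra behind Eq. (6) is a one-line computation; the expansion below is spelled out by us for completeness. Requiring the gradient of the quadratic approximation to vanish at $e = 0$ gives

```latex
\nabla \tilde{F}_{\theta}(e) = F'_{\theta}(e_0) + W\,(e - e_0)
\;\;\Longrightarrow\;\;
\nabla \tilde{F}_{\theta}(0) = F'_{\theta}(e_0) - W e_0 = 0
\;\;\Longrightarrow\;\;
W_{i,i} = \rho'_{\theta}(e_{0,i}) \,/\, e_{0,i}.
```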
According to the properties of $\rho_{\theta}(e_i)$, $\rho'_{\theta}(e_i)$ has the same sign as $e_i$, so each $W_{i,i}$ is a non-negative scalar. Then $\tilde{F}_{\theta}(e)$ can be written as $\tilde{F}_{\theta}(e) = \frac{1}{2}\|W^{1/2} e\|_2^2 + b$, where $b$ is a scalar value determined by $e_0$. Since $e = y - D\alpha$, the RSC model in Eq. (5) can be approximated by
$$\min_{\alpha} \left\|W^{1/2}(y - D\alpha)\right\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_1 \le \sigma, \qquad (7)$$
which is clearly a weighted LASSO problem. Because the weight matrix $W$ needs to be estimated using Eq. (6), Eq. (7) is a local approximation of the RSC model in Eq. (5) at $e_0$, and the minimization procedure of RSC can be transformed into an iteratively reweighted sparse coding problem, with $W$ being updated using the residuals of the previous iteration via Eq. (6). Each $W_{i,i}$ is a non-negative scalar, so the weighted LASSO in each iteration is a convex problem, which can be solved easily by methods such as $\ell_1$-ls [12].
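Equivalently, each weighted LASSO subproblem can be reduced to an ordinary LASSO by absorbing $W^{1/2}$ into the signal and the dictionary. The sketch below illustrates this reduction with scikit-learn's Lasso solver applied to the Lagrangian form; the solver choice, regularization value and helper name are our own assumptions, not part of the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_sparse_coding(D, y, w, lam=0.001):
    """One weighted LASSO step of Eq. (7); w holds the diagonal of W."""
    sqrt_w = np.sqrt(w)
    D_w = sqrt_w[:, None] * D      # W^{1/2} D
    y_w = sqrt_w * y               # W^{1/2} y
    # min ||W^{1/2}(y - D a)||_2^2 + lam' ||a||_1  (Lagrangian form)
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    solver.fit(D_w, y_w)
    return solver.coef_
```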
Since $W$ is a diagonal matrix, its element $W_{i,i}$ (i.e., $\omega_{\theta}(e_i)$) is the weight assigned to each pixel of the query image $y$. Intuitively, in FR the outlier pixels (e.g., occluded or corrupted pixels) should have low weight values. Thus, with Eq. (7) the determination of the distribution $\rho_{\theta}$ is transformed into the determination of the weight $W$. Considering that the logistic function has properties similar to the hinge loss function in SVM [28], we choose it as the weight function:
$$\omega_{\theta}(e_i) = \frac{\exp(\mu\delta - \mu e_i^2)}{1 + \exp(\mu\delta - \mu e_i^2)}, \qquad (8)$$
where $\mu$ and $\delta$ are positive scalars. The parameter $\mu$ controls the decreasing rate from 1 to 0, and $\delta$ controls the location of the demarcation point. With Eq. (8), Eq. (6) and $\rho_{\theta}(0) = 0$, we obtain
$$\rho_{\theta}(e_i) = -\frac{1}{2\mu}\Big(\ln\big(1 + \exp(\mu\delta - \mu e_i^2)\big) - \ln\big(1 + \exp(\mu\delta)\big)\Big). \qquad (9)$$
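The following short sketch (our own, with illustrative parameter values rather than the paper's experimental settings) evaluates the weight function of Eq. (8) and the induced loss of Eq. (9); large residuals receive weights near 0 and a loss that saturates rather than growing without bound.

```python
import numpy as np

def weight_fn(e, mu, delta):
    """Eq. (8): logistic weight in [0, 1]; ~1 for small residuals, ~0 for outliers."""
    z = mu * delta - mu * e**2
    return np.exp(z) / (1.0 + np.exp(z))

def rho_fn(e, mu, delta):
    """Eq. (9): the loss induced by the weight function, with rho(0) = 0."""
    z = mu * delta - mu * e**2
    return -(np.log1p(np.exp(z)) - np.log1p(np.exp(mu * delta))) / (2.0 * mu)

# Illustrative values only
mu, delta = 8.0, 1.0
for e in [0.0, 0.5, 1.0, 2.0, 5.0]:
    print(f"e={e:4.1f}  weight={weight_fn(e, mu, delta):.3f}  rho={rho_fn(e, mu, delta):.3f}")
```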
The original sparse coding models in Eqs. (3) and (4) can be interpreted through Eq. (7). The model in Eq. (3) corresponds to letting $\omega_{\theta}(e_i) = 2$, and the model in Eq. (4) corresponds to letting $\omega_{\theta}(e_i) = 1/|e_i|$. Compared with the models in Eqs. (3) and (4), the proposed weighted LASSO in Eq. (7) has the following advantage: outliers (usually the pixels with big residuals) will be adaptively assigned low weights to reduce their effects on the regression estimation, so that the sensitivity to outliers can be greatly reduced. The weight function of Eq. (8) is bounded in [0, 1]. Although the model in Eq. (4) also assigns low weights to outliers, its weight function is not bounded: the weights of pixels with very small residuals will have nearly infinite values, which reduces the stability of the coding process.
The convexity of the RSC model (Eq. (5)) depends on the form of $\rho_{\theta}(e_i)$, or equivalently on the weight function $\omega_{\theta}(e_i)$. If we simply let $\omega_{\theta}(e_i) = 2$, RSC degenerates to the original sparse coding problem (Eq. (3)), which is convex but not effective. The RSC model is not convex with the weight function defined in Eq. (8). However, for FR a good initialization can always be obtained, and our RSC algorithm described in the next section can always find a locally optimal solution, which yields very good FR performance, as validated by the experiments in Section 4.
3. Algorithm of RSC
As discussed in Section 2.2, the implementation of RSC can be an iterative process, and in each iteration it is a convex $\ell_1$-minimization problem. In this section we propose such an iteratively reweighted sparse coding (IRSC) algorithm to solve the RSC minimization.
3.1. Iteratively reweighted sparse coding (IRSC)
Although in general the RSC model only has a locally optimal solution, fortunately in FR we are able to use a very reasonable initialization to achieve good performance. When a testing face image $y$ comes, in order to initialize the weights, we first estimate the coding residual $e$ of $y$. We can initialize $e$ as $e = y - y_{ini}$, where $y_{ini}$ is some initial estimation of the true face from the observation $y$. Because we do not know which class the testing face image $y$ belongs to, a reasonable $y_{ini}$ can be set as the mean image of all training images. In this paper, we simply compute $y_{ini}$ as
$$y_{ini} = m_D, \qquad (10)$$
where $m_D$ is the mean image of all training samples.

With the initialized $y_{ini}$, our algorithm for solving the RSC model, namely Iteratively Reweighted Sparse Coding (IRSC), is summarized in Algorithm 1.

When RSC converges, we use the same classification strategy as in SRC [25] to classify the face image $y$.
3.2. The convergence of IRSC
The weighted sparse coding in Eq. (7) is a local approximation of the RSC model in Eq. (5), and in each iteration the objective function value of Eq. (5) is decreased by the IRSC algorithm. Since the original cost function of Eq. (5) is lower bounded (by 0), the iterative minimization procedure in IRSC will converge.

Convergence is reached when the difference of the weights between adjacent iterations is small enough. Specifically, we stop the iteration if the following holds:
$$\frac{\left\|W^{(t)} - W^{(t-1)}\right\|_2}{\left\|W^{(t-1)}\right\|_2} < \gamma, \qquad (12)$$
where $\gamma$ is a small positive scalar.
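A direct translation of the stopping rule in Eq. (12) is given below as a sketch under our own naming, with the norm applied to the diagonal entries of the weight matrices; the tolerance value is illustrative, not taken from the paper.

```python
import numpy as np

def has_converged(w_t, w_prev, gamma=0.01):
    """Stopping rule of Eq. (12), applied to the diagonals of W^(t) and W^(t-1)."""
    return np.linalg.norm(w_t - w_prev) / np.linalg.norm(w_prev) < gamma
```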
3.3. Complexity analysis
The complexity of both SRC and the proposed IRSC mainly lies in the sparse coding process, i.e., Eq. (3) and Eq. (7). Suppose that the dimensionality $n$ of the face feature is fixed; the complexity of the sparse coding model in Eq. (3) basically depends on the number of dictionary atoms $m$. The empirical complexity of commonly used $\ell_1$-regularized sparse coding methods (such as $\ell_1$-ls [12]) to solve Eq. (3) or Eq. (7) is $O(m^{\varepsilon})$ with $\varepsilon \approx 1.5$ [12]. For FR without occlusion, SRC [25] performs sparse coding once and then uses the residuals associated with each class to classify the face image, while RSC needs several iterations (usually 2) to finish the coding. Thus in this case, RSC's complexity is higher than that of SRC.
Algorithm 1: Iteratively Reweighted Sparse Coding (IRSC)

Input: Normalized test sample $y$ with unit $\ell_2$-norm, dictionary $D$ (each column of $D$ has unit $\ell_2$-norm), and $y_{rec}^{(1)}$ initialized as $y_{ini}$.
Output: $\alpha$
Start from $t = 1$:
1: Compute the residual $e^{(t)} = y - y_{rec}^{(t)}$.
2: Estimate the weights as
$$\omega_{\theta}\big(e_i^{(t)}\big) = \frac{\exp\big(\mu^{(t)}\delta^{(t)} - \mu^{(t)}(e_i^{(t)})^2\big)}{1 + \exp\big(\mu^{(t)}\delta^{(t)} - \mu^{(t)}(e_i^{(t)})^2\big)}, \qquad (11)$$
where $\mu^{(t)}$ and $\delta^{(t)}$ are parameters estimated in the $t$th iteration (please refer to Section 4.1 for their setting).
3: Sparse coding:
$$\alpha^* = \arg\min_{\alpha} \left\|(W^{(t)})^{1/2}(y - D\alpha)\right\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_1 \le \sigma,$$
where $W^{(t)}$ is the estimated diagonal weight matrix with $W^{(t)}_{i,i} = \omega_{\theta}(e_i^{(t)})$.
4: Update the sparse coding coefficients:
If $t = 1$, $\alpha^{(t)} = \alpha^*$;
If $t > 1$, $\alpha^{(t)} = \alpha^{(t-1)} + \eta^{(t)}\big(\alpha^* - \alpha^{(t-1)}\big)$,
where $0 < \eta^{(t)} < 1$ is the step size, and a suitable $\eta^{(t)}$ should make $\sum_{i=1}^{n} \rho_{\theta}\big(e_i^{(t)}\big) < \sum_{i=1}^{n} \rho_{\theta}\big(e_i^{(t-1)}\big)$. $\eta^{(t)}$ can be searched from 1 to 0 by the standard line-search process [8]. (Since both $\alpha^{(t-1)}$ and $\alpha^*$ belong to the convex set $Q = \{\|\alpha\|_1 \le \sigma\}$, $\alpha^{(t)}$ will also belong to $Q$.)
5: Compute the reconstructed test sample $y_{rec}^{(t)} = D\alpha^{(t)}$, and let $t = t + 1$.
6: Go back to step 1 until the convergence condition (described in Section 3.2) is met, or the maximal number of iterations is reached.
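A compact Python sketch of the IRSC loop is given below. It follows the structure of Algorithm 1 but is our own simplification: it uses scikit-learn's Lasso for the weighted coding step (the paper uses $\ell_1$-ls), fixes $\eta^{(t)} = 1$ instead of performing a line search, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def irsc(D, y, lam=0.001, tau=0.8, c=8.0, gamma=0.01, max_iter=10):
    """Sketch of Algorithm 1 (IRSC). D: n x m dictionary with unit-norm columns,
    y: unit-norm test sample. Returns the coding vector alpha."""
    n = D.shape[0]
    y_rec = D.mean(axis=1)                       # y_ini = m_D, Eq. (10)
    alpha, w_prev = None, None
    for t in range(max_iter):
        e = y - y_rec                            # step 1: residual
        # step 2: weights, Eq. (11); delta and mu set as in Section 4.1
        psi_a = np.sort(e**2)
        delta = max(psi_a[int(np.floor(tau * n)) - 1], 1e-12)
        mu = c / delta
        z = mu * delta - mu * e**2
        w = np.exp(z) / (1.0 + np.exp(z))
        # step 3: weighted sparse coding (Lagrangian form of Eq. (7))
        sqrt_w = np.sqrt(w)
        solver = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
        solver.fit(sqrt_w[:, None] * D, sqrt_w * y)
        alpha_star = solver.coef_
        # step 4: coefficient update (eta fixed to 1 here; the paper uses a line search)
        alpha = alpha_star if alpha is None else alpha + 1.0 * (alpha_star - alpha)
        # step 5: reconstruct
        y_rec = D @ alpha
        # step 6: convergence check, Eq. (12)
        if w_prev is not None and np.linalg.norm(w - w_prev) / np.linalg.norm(w_prev) < gamma:
            break
        w_prev = w
    return alpha
```

When the loop exits, the face can be classified exactly as in SRC, by comparing the class-wise reconstruction residuals of the final $\alpha$ (cf. Section 3.1).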
For FR with occlusion or corruption, SRC needs to use an identity matrix to code the occluded or corrupted pixels, as shown in Eq. (2). In this case SRC's complexity is $O((m+n)^{\varepsilon})$. Considering that $n$ is often much greater than $m$ in sparse coding based FR (e.g., $n = 8086$ and $m = 717$ in the experiments with pixel corruption and block occlusion in [25]), the complexity of SRC becomes very high when dealing with occlusion and corruption.

The computational complexity of the proposed RSC is $O(k m^{\varepsilon})$, where $k$ is the number of iterations. Note that $k$ depends on the percentage of outliers in the face image. In our experience, when there is a small percentage of outliers, RSC converges in only two iterations; when there is a large percentage of outliers (e.g., occlusion, corruption, etc.), RSC converges within about 10 iterations. So for FR with occlusion, the complexity of RSC is generally much lower than that of SRC. In addition, in the iterations of IRSC we can delete the elements $y_i$ that have very small weights, because this implies that $y_i$ is an outlier. Thus the complexity of RSC can be further reduced (e.g., in FR with real disguise on the AR database, about 30% of the pixels can be deleted in each iteration on average).
4. Experimental Results
In this section, we perform experiments on benchmark face databases to demonstrate the performance of RSC (source codes accompanying this work are available at http://www.comp.polyu.edu.hk/~cslzhang/code.htm). We first discuss the parameter selection of RSC in Section 4.1; in Section 4.2, we test RSC for FR without occlusion on three face databases (Extended Yale B [5, 13], AR [17], and Multi-PIE [6]). In Section 4.3, we demonstrate the robustness of RSC to random pixel corruption, random block occlusion and real disguise.

All the face images are cropped and aligned using the locations of the eyes, which are provided by the face databases (except for Multi-PIE, for which we manually locate the positions of the eyes). For all methods, the training samples are used as the dictionary $D$ in sparse coding.
4.1. Parameter selection
In the weight function of Eq. (8), there are two parameters, $\delta$ and $\mu$, which need to be calculated in Step 2 of IRSC. $\delta$ is the parameter of the demarcation point: when the square of the residual is larger than $\delta$, the weight value is less than 0.5. In order to make the model robust to outliers, we compute the value of $\delta$ as follows. Denote by $\psi = \left[(e_1)^2, (e_2)^2, \cdots, (e_n)^2\right]$. By sorting $\psi$ in ascending order, we get the re-ordered array $\psi_a$. Let $k = \lfloor \tau n \rfloor$, where the scalar $\tau \in (0, 1]$ and $\lfloor \tau n \rfloor$ outputs the largest integer smaller than $\tau n$. We set $\delta$ as
$$\delta = \psi_a(k). \qquad (13)$$
The parameter $\mu$ controls the decreasing rate of the weight value from 1 to 0. Here we simply let $\mu = c/\delta$, where $c$ is a constant. In the experiments, unless otherwise specified, $c$ is set to 8, and $\tau$ is set to 0.8 for FR without occlusion and 0.5 for FR with occlusion. In addition, in our experiments we solve the (weighted) sparse coding problems (Eq. (2), Eq. (3) or Eq. (7)) via their unconstrained Lagrangian formulations. Taking Eq. (3) as an example, its Lagrangian form is $\min_{\alpha} \{\|y - D\alpha\|_2^2 + \lambda \|\alpha\|_1\}$, and the default value for the multiplier $\lambda$ is 0.001.
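A direct implementation of Eq. (13) and the rule $\mu = c/\delta$ is sketched below; this is our own code with illustrative variable names, consistent with the weight-estimation step used in the IRSC sketch above.

```python
import numpy as np

def estimate_mu_delta(e, tau=0.8, c=8.0):
    """Section 4.1: delta is the tau-quantile of the squared residuals (Eq. (13)),
    and mu = c / delta controls how fast the weight drops from 1 to 0."""
    psi_a = np.sort(e**2)                  # squared residuals, ascending
    k = int(np.floor(tau * len(e)))        # k = floor(tau * n), 1-indexed in the paper
    delta = psi_a[k - 1]
    mu = c / delta
    return mu, delta
```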
4.2. Face recognition without occlusion
We first validate the performance of RSC in FR with variations such as illumination and expression changes but without occlusion. We compare RSC with popular methods such as nearest neighbor (NN), nearest subspace (NS), linear support vector machine (SVM), and the recently developed SRC [25].

Table 1. Face recognition rates on the Extended Yale B database.

Dim        30      84      150     300
NN         66.3%   85.8%   90.0%   91.6%
NS         63.6%   94.5%   95.1%   96.0%
SVM        92.4%   94.9%   96.4%   97.0%
SRC [25]   90.9%   95.5%   96.8%   98.3%
RSC        91.3%   98.1%   98.4%   99.4%
In the experiments, PCA (i.e., Eigenfaces [21]) is used to reduce the dimensionality of the original face features, and the Eigenface features are used for all the competing methods. Denote by $P$ the subspace projection matrix computed by applying PCA to the training data. Then in RSC, the sparse coding in Step 3 of IRSC becomes
$$\alpha^* = \arg\min_{\alpha} \left\|P\,(W^{(t)})^{1/2}(y - D\alpha)\right\|_2^2 \quad \text{s.t.} \quad \|\alpha\|_1 \le \sigma.$$
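In code, applying the projection simply amounts to multiplying both the weighted dictionary and the weighted sample by $P$ before the $\ell_1$-regularized solve. The sketch below is our own, reusing scikit-learn's Lasso as in the earlier sketches.

```python
import numpy as np
from sklearn.linear_model import Lasso

def projected_weighted_coding(P, D, y, w, lam=0.001):
    """Step 3 of IRSC with an Eigenface projection P (d x n): code y over D
    after weighting by W^{1/2} and projecting into the PCA subspace."""
    sqrt_w = np.sqrt(w)
    A = P @ (sqrt_w[:, None] * D)     # P W^{1/2} D
    b = P @ (sqrt_w * y)              # P W^{1/2} y
    solver = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    solver.fit(A, b)
    return solver.coef_
```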
1) Extended Yale B Database: The Extended Yale B database [5, 13] contains about 2,414 frontal face images of 38 individuals. We used the cropped and normalized 54×48 face images, which were taken under varying illumination conditions. We randomly split the database into two halves: one half (about 32 images per person) was used as the dictionary, and the other half for testing. Table 1 shows the recognition rates versus feature dimension for NN, NS, SVM, SRC and RSC. It can be seen that RSC achieves better results than the other methods at all dimensions, except that RSC is slightly worse than SVM when the dimension is 30. When the dimension is 84, RSC achieves about a 3% improvement in recognition rate over SRC. The best recognition rate of RSC is 99.4%, compared to 91.6% for NN, 96.0% for NS, 97.0% for SVM, and 98.3% for SRC.
2) AR database: As in [25], a subset (with only illumination and expression changes) containing 50 male and 50 female subjects was chosen from the AR dataset [17]. For each subject, the seven images from Session 1 were used for training, and the other seven images from Session 2 for testing. The images were cropped to 60×43. The comparison of RSC with its competing methods is given in Table 2. Again, we can see that RSC performs much better than all the other four methods at all dimensions, except that RSC is slightly worse than SRC when the dimension is 30. Nevertheless, when the dimension is that low, none of the methods can achieve a very high recognition rate. At the other dimensions, RSC outperforms SRC by about 3%. SVM does not give good results in this experiment because there are not enough training samples (7 samples per class here) and there are large variations between the training set and the testing set. The maximal recognition rates of RSC, SRC, SVM, NS and NN are 96.0%,
References (partial)

[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: recognition using class specific linear projection.
[20] R. Tibshirani. Regression shrinkage and selection via the lasso.
[21] M. Turk and A. Pentland. Eigenfaces for recognition.
[25] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation.
[29] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld. Face recognition: A literature survey.