
Development of a Face Recognition System and Its Intelligent Lighting Compensation Method for Dark-Field Application

TLDR
In this article, a face recognition system that uses 3-D lighting estimation and optimal lighting compensation for dark-field application is proposed. The system, which can identify people in a near-scene dark-field environment, consists of a light-emitting diode (LED) overhead light, eight LED wall lights, a visible light binocular camera, and a control circuit.
Abstract
A face recognition system that uses 3-D lighting estimation and optimal lighting compensation for dark-field application is proposed. To develop the proposed system, which can realize people identification in a near scene dark-field environment, a light-emitting diode (LED) overhead light, eight LED wall lights, a visible light binocular camera, and a control circuit are used. First, 68 facial landmarks are detected, and their coordinates in both image and camera coordinate systems are computed. Second, a 3-D morphable model (3DMM) is developed after considering facial shadows, and a transformation matrix between the 3DMM and camera coordinate systems is estimated. Third, to assess lighting uniformity, 30 evaluation points are selected from the face. Sequencing computations of LED radiation intensity, ray reflection luminance, camera response, and face lighting uniformity are then carried out. Ray occlusion is processed using a simplified 3-D face model. Fourth, an optimal lighting compensation is realized: the overhead light is used for flood lighting, and the wall lights are employed as meticulous lighting. A genetic algorithm then is used to identify the optimal lighting of the wall lights. Finally, an Eigenface method is used for face recognition. The results show that our system and method can improve face recognition accuracy by >10% compared to traditional recognition methods.


This is a repository copy of Development of a face recognition system and its intelligent
lighting compensation method for dark-field application.
White Rose Research Online URL for this paper:
https://eprints.whiterose.ac.uk/177925/
Version: Accepted Version
Article:
Liu, H., Zheng, N., Wang, Y. et al. (4 more authors) (2021) Development of a face
recognition system and its intelligent lighting compensation method for dark-field
application. IEEE Transactions on Instrumentation and Measurement. ISSN 0018-9456
https://doi.org/10.1109/TIM.2021.3111076
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be
obtained for all other uses, in any current or future media, including reprinting/republishing
this material for advertising or promotional purposes, creating new collective works, for
resale or redistribution to servers or lists, or reuse of any copyrighted component of this
work in other works.
eprints@whiterose.ac.uk
https://eprints.whiterose.ac.uk/
Reuse
Items deposited in White Rose Research Online are protected by copyright, with all rights reserved unless
indicated otherwise. They may be downloaded and/or printed for private study, or other acts as permitted by
national copyright laws. The publisher or other rights holders may allow further reproduction and re-use of
the full text version. This is indicated by the licence information on the White Rose Research Online record
for the item.
Takedown
If you consider content in White Rose Research Online to be in breach of UK law, please notify us by
emailing eprints@whiterose.ac.uk including the URL of the record and the reason for the withdrawal request.

Haoting Liu, Na Zheng, Yuan Wang, Jiacheng Li, Zhiqiang Zhang, Member, IEEE, Yajie Li, and Jinhui Lan

Abstract: A face recognition system which uses 3D lighting
estimation and optimal lighting compensation for dark-field
application is proposed. To develop the proposed system, which
can realize people identification in a near scene dark-field
environment, a light-emitting diode (LED) overhead light, eight
LED wall lights, a visible light binocular camera, and a control
circuit are used. First, 68 facial landmarks are detected and their
coordinates in both image as well as camera coordinate systems
are computed. Second, a 3D morphable model (3DMM) is
developed after considering facial shadows, and a transformation
matrix between the 3DMM and camera coordinate systems is
estimated. Third, to assess lighting uniformity, 30 evaluation
points are selected from the face. Sequencing computations of
LED radiation intensity, ray reflection luminance, camera
response, and face lighting uniformity are then carried out. Ray
occlusion is processed using a simplified 3D face model. Fourth,
an optimal lighting compensation is realized: the overhead light is
used for flood lighting, and the wall lights are employed as
meticulous lighting. A genetic algorithm then is used to identify
the optimal lighting of the wall lights. Finally, an Eigenface
method is used for face recognition. The results show that our
system and method can improve face recognition accuracy
by >10% compared to traditional recognition methods.
Index Terms: face recognition, distributed intelligent lighting, 3DMM, light compensation, dark-field environment
I. INTRODUCTION
Face recognition plays an important role in crime prevention and case investigation in modern society [1]. Compared with other public management methods, face recognition is simple to use and can put criminals under tremendous mental pressure when they intend to commit a crime. Currently, face recognition often performs poorly when the lighting conditions are weak or complex. For example, if the surface illuminance of the face is 50.0 lx, a recognition algorithm may fail because many facial details are covered by either shadows or glares. Fig. 1 shows examples of face images. In Fig. 1, (a) and (b) are influenced by shadows, whereas (c) is affected by non-uniform lighting; note that both shadow and glare can be observed. Unfortunately, criminals also take advantage of these limitations. A considerable amount of statistical data has shown that juvenile delinquency in China often occurs at night in Karaoke bars or hotels [2]. Therefore, at these sites, if a robust face recognition system and method can be developed and used, the crime rate may be reduced.

Manuscript received May 8, 2021; revised July 18, 2021; accepted ***. Date of publication ***; date of current version ***.
This work was supported by the National Natural Science Foundation of China under Grant 61975011, the Fund of State Key Laboratory of Intense Pulsed Radiation Simulation and Effect under Grant SKLIPR2024, and the Fundamental Research Fund for the China Central Universities of USTB under Grant FRF-BD-19-002A.
H. Liu, N. Zheng, Y. Wang, J. Li, Y. Li, and J. Lan are with the Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing, 100083, China (e-mail: liuhaoting@ustb.edu.cn).
Z. Zhang and H. Liu are with the School of Electronic and Electrical Engineering and the School of Mechanical Engineering, University of Leeds (*correspondence e-mails: eenzzh@leeds.ac.uk, liuhaoting@ustb.edu.cn).
Digital Object Identifier

Fig. 1. Examples of degraded face images captured in the dark field.
Three types of techniques have been developed in the past to address the limitations of face recognition in dark environments. The first technique uses a proper sensor to capture high-quality images; for example, a near-infrared camera [3] or a multispectral camera [4] can be used. The second technique attempts to improve the computational effect of recognition algorithms; a data fusion-based method [5], a luminance processing-based technique [6], and a machine learning-based algorithm [7] have been developed. The third technique focuses on designing an effective lighting source to compensate for poor lighting conditions; for example, in [8], an imaging definition feedback-based method was developed. Proper sensors can obtain images of better quality; however, their costs are always high. Artificial intelligence algorithms, such as deep learning-based methods [9], can improve the processing effect; however, their computational complexity inevitably adds to the system's burden. Comparatively, a lighting design-based method is inexpensive and effective, yet research in this direction is not as popular as the other two.
To improve the computational effect of imaging systems [10], certain adaptive lighting systems have been developed in the past. In [11], the authors employed near-infrared laser lighting to assist finger vein recognition. In [12], an adaptive lighting device was invented to realize the docking of an underwater vehicle. In [13], infrared light-emitting diodes (LEDs) were used to capture the "red-eye" effect such that robust face recognition could be obtained. Currently, research on intelligent LED systems still faces several challenges. First, the lighting perception abilities of certain systems are still limited: neither the environmental lighting state nor the surface light reflection characteristics of a subject can be accurately estimated, and many systems only use 2D images to evaluate the lighting effect [14]. Second, the control ability of LED lamps is poor; there is no optimal model to obtain the control intensity of LEDs [15]. Third, narrow designs of lighting fields or the negative influences of shadow and glare restrict these systems' applications. Because of these limitations, further development of intelligent lighting systems is required.
In this study, a novel lighting system is proposed for face recognition in the dark field. It comprises one LED overhead light, eight LED wall lights, a visible light binocular camera, and a control circuit. First, face detection and facial landmark extraction [16] are performed; the histogram of oriented gradients (HOG), support vector machine (SVM), and ensemble of regression trees (ERT) are used. Second, a 3D face model and a transformation matrix between the 3D face and camera coordinate systems are estimated; a 3D morphable model (3DMM) [17] with shadow analysis is developed. Third, 3D face lighting effect estimation is performed; the LED radiation intensity [18], ray reflection luminance [19], camera response [20], and face lighting uniformity are computed, and ray occlusion between the LED source and the investigated point on the face is processed using a simplified 3D face model. Fourth, intelligent lighting control is performed; the overhead light provides flood lighting [21], and the wall lights provide optimal luminance compensation [22]. Finally, an Eigenface method [23] is used to identify the face.
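The Eigenface back-end named here is a standard PCA-based recognizer. The following is a minimal NumPy sketch of such a recognizer; it is not the authors' implementation, and the gallery size, image resolution, and nearest-neighbor matching rule are illustrative assumptions.

```python
import numpy as np

def train_eigenfaces(gallery, num_components=20):
    """Fit an Eigenface subspace. gallery: (n_images, h*w) float array of flattened faces."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # PCA via SVD; rows of vt are the principal directions ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:num_components]
    weights = centered @ eigenfaces.T              # projections of the gallery faces
    return mean_face, eigenfaces, weights

def identify(face, mean_face, eigenfaces, weights, labels):
    """Project a probe face and return the label of the nearest gallery projection."""
    w = (face - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]

# Toy usage with random data standing in for aligned, lighting-compensated face crops.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 64 * 64))           # 10 gallery faces of size 64x64 (assumed)
labels = np.arange(10)
mean_face, eigenfaces, weights = train_eigenfaces(gallery, num_components=5)
probe = gallery[3] + 0.01 * rng.normal(size=64 * 64)
print(identify(probe, mean_face, eigenfaces, weights, labels))  # expected: 3
```

In practice, the probe image would be the lighting-compensated face crop produced by the preceding steps.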
The primary contributions of this study are: 1) a novel 3D light field analysis-based intelligent lighting system that uses a visible light binocular camera and distributed LED units is proposed; 2) a complete mathematical model of optimal lighting estimation and compensation is developed, including ray radiation, ray reflection, ray occlusion, and ray distribution estimation; and 3) certain relationships among the lighting effect, face recognition accuracy, and vision interference issues are disclosed, and factors such as the overhead light output intensity, the spatial position relationship between the wall lights and the face, and the lighting uniformity are investigated.
In the following sections, first, the problem definition and
system design will be presented. Second, the intelligent lighting
control method will be shown. Third, the experimental results
and discussions will be provided.
II. PROBLEM FORMULATION AND PROPOSED SYSTEM
A. Problem Formulation
Fig. 2 shows sketch maps of the face recognition application in the dark field and the system design method. In Fig. 2-(a), an imaginary picture of the proposed application is presented. The system can be used to identify people who come to the information desk in a Karaoke bar or a hotel. For example, a waitress stands behind a table with her back to a wall, and a visitor asks her for information. In certain cases, the lighting conditions in a Karaoke bar or hotel may be kept poor to create a relaxing atmosphere; for example, the environment illuminance is only 30.0 - 60.0 lx. In Fig. 2-(b), the system constitution method is shown. An LED overhead light is used to provide flood lighting, and eight LED point lights in the wall are used to configure a proper compensation source for face recognition. A visible light binocular camera is used to capture face images and estimate the 3D coordinates of typical facial landmarks. All the light sources and the camera system are connected by a wireless communication device. If the environment lighting can be well compensated, an ideal face image can be created.
Fig. 2. Imaginary picture of face recognition application in the dark field and
sketch map of our proposed system.
B. Distributed Lighting and Face Recognition System
Fig. 3 shows a sketch map of the proposed distributed lighting and face recognition system. In Fig. 3, (a) and (c) are the side view and the top view of the system, respectively; (b) is the design method of the wall lights; and (d) is the top view of our application with a roof. The parameters D_i (i = 0, 1, …, 11) define the distance or length variables in the application. As shown in Fig. 2-(a), because the waitress needs to access information on a computer, she has to stand in front of a display; moreover, a visitor has to stand on the left or right side of that display such that he or she can directly face the waitress. In this situation, the visitor's face can be exposed to the wall lights. Without loss of generality, the output tuning of the overhead light and the wall lights has certain fixed degrees; for example, they have three degrees and six degrees, respectively. The control method of the wall lights is more flexible: a planar and symmetrical design mode is considered for the distributed wall lights. The binocular vision system has one camera positioned at the geometric center of the wall light array.
Fig. 3. Sketch map of the distributed lighting and face recognition system.
III. PROPOSED FACE RECOGNITION METHOD WITH
INTELLIGENT LIGHTING COMPENSATION
A. The Overall Computational Framework
Fig. 4 presents the overall computational framework of the proposed system and method. Our computations include four steps. First, fast face detection and facial landmark extraction are performed; this step yields the initial face region and extracts 68 2D and 3D facial landmarks in the image and camera coordinate systems, respectively. Second, a 3D face model and a coordinate system transformation matrix are constructed and calculated; this step estimates a complete 3D face model and computes the transformation matrix between the 3D face model and camera coordinate systems. Third, face lighting effect evaluation is conducted; this step simulates the face lighting effect at 30 typical facial landmarks and estimates a lighting uniformity index for any simulated LED intensity input. Fourth, intelligent lighting compensation and face recognition are implemented; this step tunes the flood lighting and the meticulous lighting to achieve robust face recognition in the dark field. In Fig. 4, the contents within the red dashed rectangle are our proposed steps, which achieve intelligent lighting compensation, whereas the other contents belong to the traditional processing steps of face recognition.
Fig. 4. Proposed computational flow chart.
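According to the framework above (and the abstract), the fourth step selects the wall-light intensities with a genetic algorithm. The sketch below is a generic GA over eight discrete intensity degrees; the fitness function (a min/mean irradiance ratio over a random mixing matrix) is only a stand-in for the paper's lighting uniformity index, and the population size, mutation rate, and number of intensity degrees are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_LIGHTS, NUM_DEGREES = 8, 7          # eight wall lights, intensity degrees 0..6 (assumed)
POP, GENS, MUT_P = 30, 60, 0.1

# Stand-in fitness: irradiance at 30 evaluation points modeled as a fixed linear mix of the
# eight lights, scored by a min/mean uniformity ratio. This is NOT the paper's index or
# optics model; it only makes the GA runnable.
MIX = rng.uniform(0.1, 1.0, size=(30, NUM_LIGHTS))

def fitness(genes):
    irradiance = MIX @ genes
    return irradiance.min() / (irradiance.mean() + 1e-9)

pop = rng.integers(0, NUM_DEGREES, size=(POP, NUM_LIGHTS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection: each child slot takes the better of two random individuals.
    a, b = rng.integers(0, POP, size=(2, POP))
    parents = pop[np.where(scores[a] >= scores[b], a, b)]
    # One-point crossover between consecutive parent pairs.
    children = parents.copy()
    for i in range(0, POP - 1, 2):
        cut = rng.integers(1, NUM_LIGHTS)
        left, right = parents[i, cut:].copy(), parents[i + 1, cut:].copy()
        children[i, cut:], children[i + 1, cut:] = right, left
    # Mutation: redraw a gene with probability MUT_P.
    mask = rng.random(children.shape) < MUT_P
    children[mask] = rng.integers(0, NUM_DEGREES, size=int(mask.sum()))
    children[0] = pop[int(np.argmax(scores))]      # elitism: keep the current best individual
    pop = children

scores = np.array([fitness(ind) for ind in pop])
print("best intensity degrees:", pop[int(np.argmax(scores))])
```

In the actual system, the fitness evaluation would be the 3D lighting-effect simulation of Section III-D rather than this placeholder.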
B. Face Detection and Facial Landmark Extraction
When implementing face detection, three methods are commonly used: Haar features with AdaBoost, HOG features with SVM, and deep learning-based methods. Among these, the SVM classifier can achieve high classification accuracy with only a limited amount of training data, whereas the other methods require high image quality or a large amount of training data; thus, SVM is used in our system. When performing 2D facial landmark extraction, a facial landmark template is first defined; then, an ERT is used to fit the contour of the face in an image via iterative computations. Finally, 68 facial 2D points are identified. A visible light binocular camera is used to estimate the 3D facial landmarks. Zhang's method [24] is first used to calibrate the binocular vision system. The world coordinate system is then aligned with one of the camera coordinate systems, and the 3D facial landmarks can be calculated from the 2D landmarks using (1). Fig. 5 shows the definition of the 68 facial landmarks and their 2D and 3D samples in an actual face image.
$$
s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\left[\, R \mid T \,\right]
\begin{bmatrix} U \\ V \\ W \\ 1 \end{bmatrix}
\tag{1}
$$

where (x, y) is a coordinate in the image coordinate system; (U, V, W) is a coordinate in the world coordinate system; s is a scale factor; f_x and f_y are the camera focal lengths; and u_0 and v_0 are the principal point coordinates in the x and y directions. Note that R is a rotation matrix whose sub-variables can be defined by r_ij (i, j = 0, 1, 2), whereas T = [t_0, t_1, t_2] is a translation matrix.
Fig. 5. Definition of facial landmarks and their 2D and 3D results in an actual
face image.
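The detection and landmark pipeline described above (HOG + SVM detector, ERT landmark regressor) is available in the dlib library, and the projection model in (1) can be inverted by triangulating the two calibrated views with OpenCV. The sketch below is one possible realization, not the authors' code; the shape-predictor file name and the projection matrices P_left and P_right are assumptions standing in for the calibrated system.

```python
import cv2
import dlib
import numpy as np

# HOG + SVM face detector and ERT 68-landmark regressor (pre-trained dlib models).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path

def landmarks_68(gray):
    """Return a (68, 2) array of 2D landmark pixel coordinates, or None if no face is found."""
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

def landmarks_3d(gray_left, gray_right, P_left, P_right):
    """Triangulate the 68 landmarks from a calibrated stereo pair.
    P_left, P_right: 3x4 projection matrices K[R|T] of each camera (from stereo calibration)."""
    pts_l = landmarks_68(gray_left)
    pts_r = landmarks_68(gray_right)
    if pts_l is None or pts_r is None:
        return None
    homog = cv2.triangulatePoints(P_left, P_right, pts_l.T, pts_r.T)   # 4 x 68 homogeneous
    return (homog[:3] / homog[3]).T                                    # 68 x 3 world coordinates
```

Equation (1) enters implicitly through the two projection matrices used by the triangulation.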
C. 3D Face Model and Coordinate Transformation Estimation
A 3DMM is used for 3D face modeling. It is a classic model that represents the face by a dense 3D point cloud. Generally, a 3DMM is defined by a mean face plus a linear weighted sum of shape and expression factors. In this study, the Basel face model (BFM) [25] is used as the mean face; the estimation of the 3DMM then reduces to an iterative computation of the shape and expression factors. This step maps the 68 2D facial landmarks to their corresponding 3D points in the 3DMM coordinate system. Because a binocular camera is used in our system, it is necessary to build a transformation between the 3DMM coordinate system and the camera coordinate system such that any spatial point can be transferred from one coordinate system to the other. Equation (2) presents the computational method of the transformation matrix; the least squares method can be used to obtain the rotation and translation matrices.
$$
M_{Cam} = R_{Cam\_3DMM}\, M_{3DMM} + T_{Cam\_3DMM}
\tag{2}
$$

where M_3DMM is the 3D coordinate in the 3DMM coordinate system; M_Cam is the 3D coordinate in the camera coordinate system; and R_Cam_3DMM and T_Cam_3DMM are the rotation and translation matrices, respectively.
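Section III-C involves two numerical pieces: generating a face point cloud from the 3DMM (mean face plus weighted shape and expression bases) and solving (2) for R and T by least squares from corresponding point pairs. The sketch below illustrates both with NumPy; the basis matrices are random placeholders rather than BFM data, and the SVD-based (Kabsch/Umeyama-style) alignment is one standard way to solve the least-squares problem named in the text.

```python
import numpy as np

def reconstruct_3dmm(mean_shape, shape_basis, expr_basis, alpha, beta):
    """3DMM point cloud = mean face + weighted shape factors + weighted expression factors.
    mean_shape: (3N,), shape_basis: (3N, ks), expr_basis: (3N, ke)."""
    return (mean_shape + shape_basis @ alpha + expr_basis @ beta).reshape(-1, 3)

def rigid_transform_lstsq(src, dst):
    """Least-squares R, T such that dst ~= R @ src + T for corresponding 3D points (N, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid a reflection
    R = Vt.T @ D @ U.T
    T = dst_c - R @ src_c
    return R, T

# Toy usage: align 68 3DMM landmarks (placeholder data) to their camera-frame counterparts.
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(68 * 3,))                  # placeholder for the BFM mean face
shape_basis = rng.normal(size=(68 * 3, 5))
expr_basis = rng.normal(size=(68 * 3, 3))
pts_3dmm = reconstruct_3dmm(mean_shape, shape_basis, expr_basis,
                            alpha=0.1 * rng.normal(size=5), beta=0.1 * rng.normal(size=3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                                   # make it a proper rotation
T_true = np.array([0.1, -0.2, 1.5])
pts_cam = pts_3dmm @ R_true.T + T_true                   # stands in for binocular measurements
R, T = rigid_transform_lstsq(pts_3dmm, pts_cam)
print(np.allclose(R @ pts_3dmm.T + T[:, None], pts_cam.T))   # True
```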
The shadow will severely affect the estimation of the 3DMM in the dark field: if the image contrast is low, the estimates of the 68 facial landmarks will be inaccurate. To overcome this limitation, a shadow analysis-based enhancement is developed, in which both a multiple-scale Retinex (MSR) [26] and an SVM are considered. First, potential shadow features around the eyes, nose, and mouth are computed, and the gray means in the typical face blocks are examined. Fig. 6 shows the sketch map and the image sampling method of the MSR computational blocks; the order numbers of the MSR blocks are also given. Second, an SVM is trained to predict the control parameter of the MSR using the gray features above. Equations (3) and (4) show the MSR model. In this study, l_1, l_2, l_3, σ_1, and σ_2 are set to certain fixed experiential values, whereas σ_3 is regarded as the control parameter. The inputs of the SVM are the gray means of the shadow blocks, and its output is σ_3. Finally, an adaptive MSR that considers face shadows is developed.
$$
R(i, j) = \sum_{k=1}^{3} l_k \left\{ \lg I(i, j) - \lg\!\left[ G_k(i, j) * I(i, j) \right] \right\}
\tag{3}
$$

$$
G_k(i, j) = \frac{1}{2\pi\sigma_k^2} \exp\!\left( -\frac{i^2 + j^2}{2\sigma_k^2} \right)
\tag{4}
$$

where l_k and σ_k are the weight and scale factors, with k = 1, 2, 3 and l_1 + l_2 + l_3 = 1; I(i, j) is the image; G_k(i, j) is a low-pass Gaussian filter; and the symbol '*' denotes the convolution operation.
Fig. 6. Proposed shadow analysis method and some image examples [27].
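Equations (3) and (4) can be implemented with Gaussian blurs at the three scales. The sketch below is a direct NumPy/OpenCV transcription; the weights and the first two scales are illustrative values rather than the paper's experiential settings, and σ3, which the paper predicts with an SVM from the shadow-block gray means, is passed in as a plain argument here.

```python
import cv2
import numpy as np

def msr(image, sigma3, weights=(1/3, 1/3, 1/3), sigma1=15.0, sigma2=40.0):
    """Multi-scale Retinex of Eqs. (3)-(4): R = sum_k l_k [lg I - lg(G_k * I)].
    sigma1/sigma2 stand in for the fixed experiential scales; sigma3 is the control
    parameter that the paper estimates with an SVM from shadow-block gray means."""
    img = image.astype(np.float64) + 1.0               # avoid log(0)
    result = np.zeros_like(img)
    for l_k, s_k in zip(weights, (sigma1, sigma2, sigma3)):
        blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=s_k)   # G_k * I
        result += l_k * (np.log10(img) - np.log10(blurred))
    # Stretch the Retinex output back to an 8-bit image for display or recognition.
    result = cv2.normalize(result, None, 0, 255, cv2.NORM_MINMAX)
    return result.astype(np.uint8)

# Usage on a synthetic dark gradient standing in for a shadowed face crop.
dark = np.tile(np.linspace(5, 60, 512), (512, 1)).astype(np.uint8)
enhanced = msr(dark, sigma3=60.0)
```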
D. Face Lighting Effect Evaluation
The face lighting effect evaluation realizes complete computations from the LED ray radiation to the camera ray response.
LED lighting effect computation
In this study, each LED bead is considered a Lambertian emitter [28]. Without loss of generality, it is supposed that the LED bead is a symmetrical source; ideally, it is a point source for the wall light application. Because the size of each LED bead is 1.0 cm, whereas the observation distance is always about 100.0 cm, it can be treated as a point light source. Fig. 3-(b) shows the arrangement of the wall light LED beads and their order numbers; it is supposed that all beads have the same optical performance. A camera coordinate system is then defined at the camera marked by a red solid circle: its z-axis is perpendicular to the wall light panel, and its x- and y-axes are parallel to the rectangular edges of that panel. In the following computations, all other coordinate systems are transformed into this camera coordinate system. Finally, the radiation intensity E_Wall of all LED beads can be estimated as the arithmetic sum of the single LED outputs, since the wall lights lie in the same working plane. Equation (5) shows the radiation intensity definition of the wall lights; it is defined in the camera coordinate system.
$$
E_{\mathrm{Wall}}(x_W, y_W, z_W) = \sum_{i=0}^{7} \frac{ I_{WL\_i}\, A_{WL\_i}\, z_W^{\,m_W} }{ \left[ (x_W - x_{WL\_i})^2 + (y_W - y_{WL\_i})^2 + z_W^2 \right]^{\frac{m_W + 2}{2}} }
\tag{5}
$$

where m_W represents a parameter that reflects the relationship between the view angle and the radiance attenuation, with m_W = 32.0 in this study; (x_W, y_W, z_W) is a spatial coordinate of the observation point in the wall light coordinate system, with z_W > 0; I_WL_i and A_WL_i (i = 0, 1, …, 7) are the intensity output and emitting area factors of the i-th LED bead, with A_WL_0 = A_WL_1 = … = A_WL_7 = 1.0; (x_WL_i, y_WL_i) is the in-plane position of the i-th bead, whose offsets from the panel center are multiples of d_W/2; and d_W = D_5 = D_6 = D_8 = D_9 (see Fig. 3-(b)).
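A direct implementation of the Lambertian point-source model in (5) sums the contributions of the eight beads at an observation point. In the sketch below, the bead layout is an assumed rectangular ring with half-spacing d_W/2 around the camera, since the exact offsets come from Fig. 3-(b); m_W = 32.0 and unit emitting-area factors follow the text, while the spacing value is illustrative.

```python
import numpy as np

M_W = 32.0    # view-angle attenuation exponent from the paper
D_W = 0.10    # bead spacing d_W in meters (illustrative; the paper sets d_W = D5 = D6 = D8 = D9)

# Assumed in-plane bead positions (x, y) forming a ring around the camera at the panel center.
BEAD_XY = np.array([(dx * D_W / 2, dy * D_W / 2)
                    for dx in (-3, -1, 1, 3) for dy in (-1, 1)])   # 8 beads, layout assumed

def wall_irradiance(point, intensities, area_factors=None):
    """Eq. (5): E_Wall at a 3D point (x_W, y_W, z_W) in the wall-light/camera frame, z_W > 0."""
    if area_factors is None:
        area_factors = np.ones(8)                  # A_WL_0 = ... = A_WL_7 = 1.0
    x, y, z = point
    dx = x - BEAD_XY[:, 0]
    dy = y - BEAD_XY[:, 1]
    dist_sq = dx**2 + dy**2 + z**2
    return np.sum(intensities * area_factors * z**M_W / dist_sq**((M_W + 2) / 2))

# Usage: irradiance at an evaluation point 1 m in front of the panel with equal bead outputs.
print(wall_irradiance(np.array([0.05, 0.02, 1.0]), intensities=np.full(8, 1.0)))
```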
The overhead light is used to provide flood lighting and avoid glare in our application. Fig. 3-(a) shows that the overhead light is installed at the front top side of a visitor, which can guarantee a comparably good face lighting effect. The overhead light is built as a rectangular array of M×N LED beads; the intensity outputs of all its LED beads are controlled equally to the same lighting degree, and for simplicity its output intensity has only three degrees in our application. Similar to the coordinate system definition of the wall light plane, the origin of the overhead light coordinate system

Citations
Journal ArticleDOI

A cost-efficient-based cooperative allocation of mining devices and renewable resources enhancing blockchain architecture

TL;DR: In this article, the authors address managing the energy consumption of miners by using the advantage of distributed generation resources (DGRs) in the smart grid, which is a prospective solution for merging communication technologies and industrial infrastructures.
Journal ArticleDOI

Pig Face Recognition Based on Trapezoid Normalized Pixel Difference Feature and Trimmed Mean Attention Mechanism

TL;DR: In this article, a trapezoid normalized pixel difference (T-NPD) feature is designed to achieve more accurate detection in unconstrained outdoor conditions, and a trimmed mean attention mechanism (TMAM) uses a trimmed mean-based squeeze method to assign more precise weights to feature channels; it is then fused into a 50-layer ResNet backbone network to classify detected pig face images with high accuracy.
Journal ArticleDOI

LocalEyenet: Deep Attention framework for Localization of Eyes

Somsukla Maiti, +1 more
- 13 Mar 2023 - 
TL;DR: LocalEyenet, as mentioned in this paper, proposes a coarse-to-fine architecture for facial landmark detection that learns self-attention in feature maps, which aids in preserving global and local spatial dependencies in the face image.
Journal ArticleDOI

Multi-Target Cross-Dataset Palmprint Recognition via Distilling From Multi-Teacher

TL;DR: In this article, a multi-target cross-dataset palmprint recognition method using knowledge distillation and domain adaptation is presented, where a teacher feature extractor is constructed to extract the adaptive knowledge of each pair using domain adaptation.
References
Journal ArticleDOI

Robust Face Recognition via Sparse Representation

TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by l1-minimization.
Proceedings ArticleDOI

FaceNet: A unified embedding for face recognition and clustering

TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
Book

Robot Vision

TL;DR: Robot Vision as discussed by the authors is a broad overview of the field of computer vision, using a consistent notation based on a detailed understanding of the image formation process, which can provide a useful and current reference for professionals working in the fields of machine vision, image processing, and pattern recognition.
Journal ArticleDOI

Emotion-Aware Connected Healthcare Big Data Towards 5G

TL;DR: This paper proposes an emotion-aware connected healthcare system using a powerful emotion detection module, and good accuracies, up to 99.87%, were achieved for emotion detection.
Journal ArticleDOI

Image-Quality-Based Adaptive Face Recognition

TL;DR: In this paper, an adaptive approach to face recognition is presented to overcome the adverse effects of varying lighting conditions, which is measured in terms of luminance distortion in comparison to a known reference image, will be used as the base for adapting the application of global and region illumination normalization procedures.
Frequently Asked Questions (16)
Q1. What are the contributions in this paper?

This is indicated by the licence information on the White Rose Research Online record for the item. 

In the future, different LED layout methods and face detection or recognition techniques with better processing performance will be studied to improve the usability of the proposed system. 

When estimating the facet normal vector of the LUOPs, since the 3DMM includes more than 50,000 points, some vertices of the triangular patches close to the LUOPs can be selected; then the authors can use any three vertices to estimate the facet normal vector. 

After the sequencing computations of LED radiation, ray reflection, camera response, and lighting uniformity, intelligent system control can be solved by a genetic algorithm. 

In the future, additional spatial layouts of LED units can be designed, and other machine learning methods can be used to improve the processing effect of their system. 

Sixty-eight facial landmarks can be used to estimate their parameters because they can provide the corresponding spatial point coordinates close to the face edge or the nose. 

In this study, the Basel face model (BFM) [25] is used as the mean face; then, the estimation of the 3DMM transforms into an iterative computation of the shape and expression factors. 

The DTW was employed to assess the lighting effect between the point set captured from the standard lighting environment and the set recorded from the arbitrary environment. 

Considering the spatial layout constraint, the appearance design requirement, and even the government management regulation, a rectangle shape design mode was considered in this study. 

When performing 2D facial landmark extraction, a facial landmark template is first defined; then, an ERT is used to fit the contour of the face in an image via iterative computations. 

The shadow will severely affect the estimation of the 3DMM in the dark field: if the image contrast is low, the estimates of the 68 facial landmarks will be inaccurate. 

In Fig. 4, contents within the red dash rectangle are their proposed steps, which can achieve intelligent lighting compensation, whereas other contents belong to the traditional processing steps of face recognition. 

As shown in Fig. 2-(a), because the waitress needs to access information on a computer, she has to stand in front of a display; moreover, a visitor has to stand on the left or right side of that display such that he or she can directly face the waitress. 

The inability of the traditional neural network and sparse representation-based methods to obtain the best performance may be attributed to the small amount of training data and the improper feature description. 

the inability of the deep learning-based method to obtain the best result may be attributed to the small amount of training data. 

According to the experimental results, face recognition accuracy can be improved by at least 10.0%, particularly when the environment lighting is poor and complex.