# 3D face recognition by projection-based methods

TL;DR: Feature extraction techniques are applied to three different representations of registered faces, namely 3D point clouds, 2D depth images and 3D voxel representations, and the resulting feature vectors are matched using Linear Discriminant Analysis.

Abstract: In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. Some features are data-driven, such as ICA-based or NNMF-based features; others are obtained using DFT- or DCT-based schemes. We apply the feature extraction techniques to three different representations of registered faces, namely 3D point clouds, 2D depth images and 3D voxel representations. We consider both global and local features. Global features are extracted from the whole face data, whereas local features are computed over blocks partitioned from the 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using Linear Discriminant Analysis. Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset.

## Summary (3 min read)

### Introduction

- In intensity images, faces acquired from the same person show high variability due to lighting conditions.
- Section 3 describes the projection-based features and their extraction from different representation types.

### 2. REPRESENTATION TYPES OF FACE DATA

- The authors have compared three different representation schemes and extracted the features from these representations.
- These representation types are 3D point cloud, 2D depth image and 3D voxel representation.
- All these representations are derived from registered and cropped face data.
- The faces are registered using the ICP algorithm described by Akarun et al.

### 2.1. 3D point cloud

- The 3D point cloud representation is the set of 3D coordinates (x, y, z) of the range data obtained after registration.
- Since registration establishes point correspondences across faces, the authors can treat the ordered set of coordinates as a signal describing the face.
- Another way of arranging the set of coordinates is to form an N×3 matrix, where each coordinate dimension is placed in one of the columns.
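The two arrangements above can be sketched as follows. This is illustrative only: random values stand in for a registered point cloud, whose ordering would come from the correspondences established during registration.

```python
import numpy as np

# Hypothetical registered point cloud: N corresponding 3D points for one face.
# (Real data would come from the registered 3D-RMA scans.)
rng = np.random.default_rng(0)
points = rng.standard_normal((100, 3))   # N x 3: columns hold x, y, z

# Arrangement 1: one long 1-D signal [x_1..x_N, y_1..y_N, z_1..z_N].
signal_1d = points.T.reshape(-1)

# Arrangement 2: the N x 3 matrix itself, one coordinate per column.
matrix_form = points
```

Because the points are in registered order, the same index refers to the same facial location on every face, which is what makes either arrangement usable as a feature signal.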

### 2.2. 2D depth image

- 2D depth image is a commonly used representation type for face recognition.
- The point cloud is placed onto a regular X-Y grid, and the Z coordinates are mapped onto this grid to form the depth image I(x, y).
- This representation is structurally similar to intensity images; therefore many techniques applied to intensity images can also be applied to I(x, y).
- The authors have tested the following descriptors, previously applied to 2D intensity images, on the depth images: DFT, DCT, block-based versions of DFT and DCT, Independent Component Analysis (ICA) and Nonnegative Matrix Factorization (NNMF).
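A minimal sketch of the point-cloud-to-depth-image mapping follows. The grid size, the tie-breaking rule when several points fall into one cell, and the handling of empty cells are assumptions; the paper's exact resampling details may differ.

```python
import numpy as np

def depth_image(points, grid=64):
    """Map a 3D point cloud onto a regular X-Y grid, keeping one Z value per
    cell, to form the depth image I(x, y). Illustrative sketch; assumes
    nonnegative depth values and leaves empty cells at zero."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Normalize x and y into integer cell indices in [0, grid-1].
    xi = ((x - x.min()) / (np.ptp(x) + 1e-12) * (grid - 1)).astype(int)
    yi = ((y - y.min()) / (np.ptp(y) + 1e-12) * (grid - 1)).astype(int)
    img = np.zeros((grid, grid))
    np.maximum.at(img, (yi, xi), z)   # keep the largest Z per cell
    return img
```

The resulting array has the same structure as a grayscale intensity image, which is why 2D descriptors such as the DFT and DCT can be applied to it directly.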

### 2.3. 3D voxel representation

- To obtain the voxel representation, the authors define a voxel grid whose center coincides with the center of mass of the point cloud.
- A binary function V(x, y, z) is then defined on the voxel grid: if a voxel at location (x, y, z) contains no point from the face, V(x, y, z) at that voxel is set to zero; otherwise it is set to one.
- By using the distance transform, the authors distribute the shape information of the surface throughout the 3D space and obtain a richer representation.
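The occupancy step above can be sketched as below. The grid resolution and the normalization of the cloud into the cube are assumptions made for illustration.

```python
import numpy as np

def voxelize(points, grid=32):
    """Binary occupancy function V(x, y, z) on a voxel grid whose center
    coincides with the point cloud's center of mass (illustrative sketch)."""
    centered = points - points.mean(axis=0)       # center of mass at origin
    half = np.abs(centered).max() + 1e-12         # half-extent of the cube
    idx = ((centered / half + 1.0) / 2.0 * (grid - 1)).round()
    idx = np.clip(idx, 0, grid - 1).astype(int)
    V = np.zeros((grid, grid, grid), dtype=bool)
    V[idx[:, 0], idx[:, 1], idx[:, 2]] = True     # occupied voxels -> 1
    return V

# The paper then spreads the surface information through the volume with a 3D
# distance transform (e.g. scipy.ndimage.distance_transform_edt on the
# complement of V), yielding a richer, non-binary representation.
```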

### 3.1. Global DFT/DCT

- The authors could have concatenated the X, Y and Z coordinates and computed a one-dimensional DFT; however, this would lose the inherent relation among the coordinates of a point on the face.
- Note that most of the energy is concentrated in the band-pass region, due to the zigzag scan of the face, as can be observed from the plots of the coordinates in Figure 1.
- To obtain global DFT-based features from the depth image, the authors apply the 2D-DFT to the function I(x, y).
- The authors extract the first K×K coefficients of this matrix and obtain a feature vector of size 2K² − 1 by concatenating the real and imaginary parts.
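The coefficient-extraction step can be sketched as follows. The vector has 2K² − 1 entries rather than 2K² because the DC coefficient of a real-valued image has zero imaginary part, so it is dropped.

```python
import numpy as np

def dft_features(depth_img, K=8):
    """Global DFT features: keep the first K x K 2D-DFT coefficients and
    concatenate their real and imaginary parts, dropping the (always-zero)
    imaginary part of the DC term. Sketch of the scheme described above."""
    F = np.fft.fft2(depth_img)[:K, :K]
    return np.concatenate([F.real.ravel(), F.imag.ravel()[1:]])
```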

### 3.2. Block-based DFT/DCT

- In addition to the global DFT/DCT-based techniques, the authors also extract local features, based on DFT/DCT coefficients computed on blocks.
- The depth image of an input face to be recognized is partitioned into blocks, and each block is matched with the corresponding blocks of the depth images in the database.
- From this comparison, each face in the database receives a rank per block.
- The authors then perform fusion at the decision level using the sum rule.
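The block-matching and sum-rule steps can be sketched as below. For brevity this sketch sums raw per-block distances rather than the per-block ranks described above; block size, number of coefficients, and the distance measure are assumptions.

```python
import numpy as np

def block_features(img, block=16, K=4):
    """Per-block DFT-magnitude features from a depth image (sketch)."""
    feats = []
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            F = np.fft.fft2(img[r:r + block, c:c + block])[:K, :K]
            feats.append(np.abs(F).ravel())
    return feats                      # one feature vector per block

def sum_rule_identify(query_blocks, gallery_blocks):
    """Decision-level fusion by the sum rule: each block contributes a score
    against every gallery face, and the per-block scores are added up."""
    scores = np.zeros(len(gallery_blocks))
    for b, qb in enumerate(query_blocks):
        for g, gal in enumerate(gallery_blocks):
            scores[g] += np.linalg.norm(qb - gal[b])
    return int(np.argmin(scores))     # identity with the best fused score
```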

### 3.3. Independent Component Analysis (ICA)

- Let X be the data matrix, where each column contains the data from one face; then X can be represented as X = AS, where A is the mixing matrix and S holds the independent source signals.
- For the point cloud, the (x, y, z) coordinates are concatenated to form a one-dimensional vector.
- For depth images, the authors follow a similar procedure.
- PCA is then applied to the face database, and ICA-based features are derived from the PCA coefficients of the faces.
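The PCA-then-ICA pipeline can be sketched as below with a minimal symmetric FastICA using the cube nonlinearity. This is an assumption-laden sketch: the number of components, the nonlinearity, and the iteration count are illustrative, and the paper's exact ICA architecture may differ.

```python
import numpy as np

def pca_then_ica(X, n_comp=2, iters=200, seed=0):
    """X: (dim, n_faces), one face per column. PCA whitens and reduces the
    data; a minimal symmetric FastICA (cube nonlinearity) then estimates the
    unmixing matrix. Illustrative sketch only."""
    n = X.shape[1]
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)            # PCA via SVD
    Z = (U[:, :n_comp].T @ Xc) / s[:n_comp, None] * np.sqrt(n)  # whitened coeffs
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_comp, n_comp))
    for _ in range(iters):
        W = ((W @ Z) ** 3) @ Z.T / n - 3.0 * W                  # FastICA step
        u, _, vt = np.linalg.svd(W)
        W = u @ vt                                # symmetric decorrelation
    return W @ Z                                  # ICA-based features
```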

### 3.4. Nonnegative Matrix Factorization (NNMF)

- W and H are obtained using the multiplicative update rules described by Lee and Seung [14].
- To construct the data matrix X, the authors either use the point cloud representations or the depth images.
- Figure 13 shows the first five basis faces obtained from NNMF of the depth images.
- Since the nonnegativity constraints only allow additive combinations, NNMF provides a parts-based representation.
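The Lee-Seung multiplicative updates can be sketched as below. The rank r and iteration count are illustrative assumptions.

```python
import numpy as np

def nnmf(X, r=5, iters=200, seed=0):
    """Lee-Seung multiplicative updates minimizing ||X - W H||_F, with X a
    nonnegative data matrix holding one face per column and r basis faces.
    Minimal sketch of the factorization used in the paper."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3                # nonnegative init
    H = rng.random((r, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update encodings
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update basis faces
    return W, H
```

Because the updates are multiplicative, W and H stay nonnegative throughout, which is what forces the additive, parts-based representation noted above.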

### 4. MATCHING FEATURES

- The authors use linear discrimination for classifying an input feature vector.
- The authors estimate the covariance matrix of the feature vectors in the training set and fit a multivariate normal density to each class (person) using this global covariance matrix.
- When there is an input face to be recognized, the feature vector of the face is extracted and the Mahalanobis distances of the input feature vector to the class centers are calculated.
- The class giving the smallest Mahalanobis distance is chosen as the identity of the input face.
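The matcher described above can be sketched as follows; the regularization term added to the covariance is an assumption for numerical stability.

```python
import numpy as np

def fit_gaussian_classifier(feats, labels):
    """Per-class means plus a single covariance pooled over the whole training
    set, as in the linear discriminant scheme above (sketch)."""
    classes = np.unique(labels)
    means = {int(c): feats[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([feats[labels == c] - means[int(c)] for c in classes])
    cov = np.cov(centered.T) + 1e-6 * np.eye(feats.shape[1])  # regularized
    return means, np.linalg.inv(cov)

def identify(x, means, cov_inv):
    """Return the class whose center is nearest in Mahalanobis distance."""
    def mahal2(c):
        d = x - means[c]
        return float(d @ cov_inv @ d)
    return min(means, key=mahal2)
```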

### 5. EXPERIMENTAL RESULTS

- The authors have used the 3D-RMA face database [16] for comparing the schemes discussed above.
- The 3D-RMA database contains face scans of 106 subjects.
- The authors used 4 sessions for training (424 face scans) and the remaining 193 face scans for testing.
- Table 2 gives the identification results of all the schemes, averaged over the 5 experiments.

### 6. CONCLUSION

- Several feature types are proposed for the recognition of pre-registered 3D face data.
- The features are extracted from three different face representations of the face data.
- Experimental results show that the point cloud representation combined with ICA-based or NNMF-based features gives the best performance, at 99.8 per cent recognition.
- On the other hand, ICA and NNMF have the ability to extract the essence of the information present in the large data matrices.
- Several fusion methods at both feature and decision levels can be applied for block-based DFT-DCT methods.




### "3D face recognition by projection-b..." refers background in this paper

...3D Face Recognition by Projection Based Methods Helin Dutağacı ((1)), Bülent Sankur ((1)), Yücel Yemez (2) (1) Electrical and Electronic Engineering Department, Boğaziçi University, Bebek, İstanbul, Turkey [dutagach, bulent....

[...]