scispace - formally typeset
Author

Insaf Adjabi

Bio: Insaf Adjabi is an academic researcher from the University of Bouira. The author has contributed to research in topics including pattern recognition and facial recognition systems. The author has an h-index of 4 and has co-authored 5 publications receiving 79 citations.

Papers
Journal ArticleDOI
TL;DR: The history of face recognition technology, the current state-of-the-art methodologies, and future directions are presented, with a particular focus on the most recent databases and on 2D and 3D face recognition methods.
Abstract: Face recognition is one of the most active research fields of computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interaction. However, identifying a face in a crowd raises serious questions about individual freedoms and poses ethical issues. Significant methods, algorithms, approaches, and databases have been proposed over recent years to study constrained and unconstrained face recognition. 2D approaches have reached some degree of maturity and report very high recognition rates. This performance is achieved in controlled environments where the acquisition parameters, such as lighting, angle of view, and camera-to-subject distance, are controlled. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, this performance degrades dramatically. 3D approaches were proposed as an alternative solution to the problems mentioned above. The advantage of 3D data lies in its invariance to pose and lighting conditions, which has enhanced the efficiency of recognition systems. 3D data, however, is somewhat sensitive to changes in facial expression. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions. We concentrate specifically on the most recent databases and on 2D and 3D face recognition methods. In addition, we pay particular attention to deep learning approaches, as they represent the current state of the art in this field. Open issues are examined and potential directions for research in facial recognition are proposed in order to provide the reader with a point of reference for topics that deserve consideration.

155 citations

Journal ArticleDOI
21 Jan 2021-Sensors
TL;DR: Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that MB-C-BSIF achieves superior or competitive results in unconstrained situations compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion.
Abstract: Single-Sample Face Recognition (SSFR) is a computer vision challenge. In this scenario, there is only one example from each individual on which to train the system, making it difficult to identify persons in unconstrained environments, mainly when dealing with changes in facial expression, posture, lighting, and occlusion. This paper discusses the relevance of an original method for SSFR, called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF), which exploits several kinds of features, namely local, regional, global, and textured-color characteristics. First, the MB-C-BSIF method decomposes a facial image into three channels (e.g., red, green, and blue), then it divides each channel into equal non-overlapping blocks to select the local facial characteristics that are subsequently employed in the classification phase. Finally, the identity is determined by calculating the similarities among the characteristic vectors using the distance measure of a K-nearest neighbors (K-NN) classifier. Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that MB-C-BSIF achieves superior or competitive results in unconstrained situations compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion. The average classification accuracies are 96.17% and 99% for the AR database with two specific protocols (i.e., Protocols I and II, respectively), and 38.01% for the challenging LFW database. These performances are clearly superior to those obtained by state-of-the-art methods. Furthermore, the proposed method relies only on simple, elementary image-processing operations that do not incur the higher computational costs of holistic, sparse, or deep learning methods, making it ideal for real-time identification.
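The MB-C-BSIF pipeline described above (channel decomposition, non-overlapping blocks, binarized filter-response histograms, K-NN matching) can be sketched roughly as follows. This is a minimal stand-in, not the authors' implementation: real BSIF filters are learned with ICA on natural image patches, whereas `filters` here is an arbitrary bank, and the block counts, filter sizes, and L1 distance are illustrative choices.

```python
import numpy as np

def binarized_block_features(image, filters, blocks=(2, 2)):
    """Per-channel, per-block binarized filter-response histograms.

    A simplified stand-in for MB-C-BSIF: `image` is (H, W, 3),
    `filters` is any (k, fh, fw) filter bank.
    """
    features = []
    H, W, _ = image.shape
    bh, bw = H // blocks[0], W // blocks[1]
    for c in range(image.shape[2]):                # one channel at a time (R, G, B)
        channel = image[:, :, c].astype(float)
        # filter responses, binarized at zero, packed into an integer code per pixel
        codes = np.zeros((H, W), dtype=int)
        for i, f in enumerate(filters):
            fh, fw = f.shape
            resp = np.zeros((H, W))
            for y in range(H - fh + 1):            # valid-region correlation, no padding
                for x in range(W - fw + 1):
                    resp[y, x] = np.sum(channel[y:y+fh, x:x+fw] * f)
            codes += (resp > 0).astype(int) << i
        # one normalized histogram of codes per non-overlapping block
        for by in range(blocks[0]):
            for bx in range(blocks[1]):
                block = codes[by*bh:(by+1)*bh, bx*bw:(bx+1)*bw]
                hist, _ = np.histogram(block, bins=2**len(filters),
                                       range=(0, 2**len(filters)))
                features.append(hist / hist.sum())
    return np.concatenate(features)

def nearest_neighbor_identity(query, gallery):
    """1-NN over L1 distance, one template per identity (the SSFR setting)."""
    dists = {name: np.abs(query - feat).sum() for name, feat in gallery.items()}
    return min(dists, key=dists.get)
```

With 3 filters and a 2x2 block grid, each channel yields 4 histograms of 8 bins, so the final vector has 3 x 4 x 8 = 96 dimensions; the real method concatenates far larger histograms from learned filters.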

43 citations

Journal ArticleDOI
TL;DR: This work proposes to use anatomical and embryological information about the human ear in order to find the autonomous components and the locations where large interindividual variations can be detected.
Abstract: The morphology of the human ear presents rich and stable information embedded on the curved 3-D surface and has as a result attracted considerable attention from forensic scientists and engineers as a biometric recognition modality. However, recognizing a person’s identity from the morphology of the human ear in unconstrained environments, with insufficient and incomplete training data, strong person-specificity, and high within-range variance, can be very challenging. Following our previous work on ear recognition based on local texture descriptors, we propose to use anatomical and embryological information about the human ear in order to find the autonomous components and the locations where large interindividual variations can be detected. Embryology is particularly relevant to our approach as it provides information on the possible changes that can be observed in the external structure of the ear. We experimented with three publicly available databases, namely: IIT Delhi-1, IIT Delhi-2, and USTB-1, consisting of several ear benchmarks acquired under varying conditions and imaging qualities. The experiments show excellent results, beyond the state of the art.

25 citations

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This work implements a simple yet effective approach that exploits recent local texture-based descriptors to achieve faster and more accurate recognition of identity from the morphological shape of the human ear in unconstrained environments.
Abstract: The morphological shape of the human ear presents rich and stable information embedded on the curved 3D surface, which has attracted considerable attention from forensic scientists and engineers for differentiating and recognizing people. However, recognizing identity from the morphological shape of the human ear in unconstrained environments, with insufficient and incomplete training data, strong person-specificity, and high within-range variance, can be very challenging. In this work, we implement a simple yet effective approach that exploits recent local texture-based descriptors to achieve faster and more accurate results. A Support Vector Machine (SVM) is used as the classifier. We experiment with two publicly available databases, IIT Delhi-1 and IIT Delhi-2, consisting of several ear benchmarks of different natures under varying conditions and imaging qualities. The experiments show excellent results, beyond the state of the art.
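As a rough illustration of the kind of local texture descriptor this line of work builds on, here is a minimal 8-neighbour Local Binary Pattern (LBP) histogram in plain NumPy. The exact descriptors evaluated in the paper and the SVM classification stage are not reproduced; this is only a hedged sketch of the feature-extraction step.

```python
import numpy as np

def lbp_histogram(gray):
    """Basic 8-neighbour Local Binary Pattern histogram for a 2-D image.

    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when the neighbour is at least as bright as the centre.
    The normalized 256-bin code histogram is the texture descriptor.
    """
    g = np.asarray(gray, dtype=float)
    center = g[1:-1, 1:-1]
    # the 8 neighbours, in a fixed clockwise order starting top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1+dy:g.shape[0]-1+dy, 1+dx:g.shape[1]-1+dx]
        codes += (neigh >= center).astype(int) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()
```

A texture-free (constant) image maps every pixel to code 255, so the descriptor is maximally concentrated; textured regions spread mass over many codes, which is what makes the histogram discriminative.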

7 citations

Posted ContentDOI
09 Dec 2020
TL;DR: Extensive experiments show that the MB-C-BSIF achieves superior results in unconstrained situations when compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion.
Abstract: Single-Sample Face Recognition (SSFR) is a computer vision challenge. In this scenario, there is only one example from each individual on which to train the system, making it difficult to identify persons in unconstrained environments, particularly when dealing with changes in facial expression, posture, lighting, and occlusion. This paper suggests a method based on a variant of the Binarized Statistical Image Features (BSIF) descriptor, called Multi-Block Color-Binarized Statistical Image Features (MB-C-BSIF), to resolve the SSFR problem. First, the MB-C-BSIF method decomposes a facial image into three channels (e.g., red, green, and blue), then it divides each channel into equal non-overlapping blocks to select the local facial characteristics that are subsequently employed in the classification phase. Finally, the identity is determined by calculating the similarities among the characteristic vectors using the distance measure of a K-nearest neighbors (K-NN) classifier. Extensive experiments on several subsets of the unconstrained Alex and Robert (AR) and Labeled Faces in the Wild (LFW) databases show that MB-C-BSIF achieves superior results in unconstrained situations compared to current state-of-the-art methods, especially when dealing with changes in facial expression, lighting, and occlusion. Furthermore, the suggested method employs algorithms with lower computational cost, making it ideal for real-time applications.

7 citations


Cited by
Journal ArticleDOI
12 Jul 2021-Sensors
TL;DR: In this article, the authors present a survey of the existing literature in applying deep convolutional neural networks to predict plant diseases from leaf images, and highlight the advantages and disadvantages of different techniques and models.
Abstract: In the modern era, deep learning techniques have emerged as powerful tools in image recognition. Convolutional Neural Networks, one of the deep learning tools, have attained impressive outcomes in this area. Applications such as identifying objects, faces, bones, handwritten digits, and traffic signs signify the importance of Convolutional Neural Networks in the real world. The effectiveness of Convolutional Neural Networks in image recognition motivates researchers to extend their application in the field of agriculture for recognition of plant species, yield management, weed detection, soil and water management, fruit counting, disease and pest detection, evaluating the nutrient status of plants, and much more. The volume of research on applying deep learning models in agriculture makes it difficult to select a suitable model for a given type of dataset and experimental environment. In this manuscript, the authors present a survey of the existing literature on applying deep Convolutional Neural Networks to predict plant diseases from leaf images. This manuscript presents an exemplary comparison of the pre-processing techniques, Convolutional Neural Network models, frameworks, and optimization techniques applied to detect and classify plant diseases using leaf images as the dataset. This manuscript also presents a survey of the datasets and performance metrics used to evaluate the efficacy of models. The manuscript highlights the advantages and disadvantages of different techniques and models proposed in the existing literature. This survey will ease the task of researchers working on applying deep learning techniques to the identification and classification of plant leaf diseases.

99 citations

Journal ArticleDOI
TL;DR: Li et al. proposed a Fully Convolution Dense Dilated Network (FCD-DN) to improve segmentation efficiency while ensuring high accuracy; the network integrates the advantages of dense connectivity, dilated convolutions, and factorized filters.

71 citations

Journal ArticleDOI
TL;DR: The authors propose a deep learning-based averaging ensemble to reduce the effect of over-fitting; on unconstrained ear recognition datasets it outperforms both DNN feature-extraction-based models and single fine-tuned models.
Abstract: The authors perform unconstrained ear recognition using transfer learning with deep neural networks (DNNs). First, they show how existing DNNs can be used as feature extractors, with the extracted features fed to a shallow classifier to perform ear recognition. Performance can be improved by augmenting the training dataset with small image transformations. Next, they compare the performance of the feature-extraction models with fine-tuned networks. However, because the datasets are limited in size, a fine-tuned network tends to over-fit. They propose a deep learning-based averaging ensemble to reduce the effect of over-fitting. Performance results are provided on unconstrained ear recognition datasets: AWE, CVLE, and a combined AWE + CVLE dataset. They show that their ensemble achieves the best recognition performance on these datasets compared to DNN feature-extraction-based models and single fine-tuned models.
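The averaging ensemble idea is simple enough to sketch: average the class-probability outputs of several models and take the argmax. The arrays below are hypothetical stand-ins for the softmax outputs of fine-tuned DNNs; the paper's actual networks and datasets are not reproduced here.

```python
import numpy as np

def averaging_ensemble(prob_lists):
    """Average the class-probability outputs of several models.

    `prob_lists` is a list of (n_samples, n_classes) arrays, one per model.
    Averaging smooths out the idiosyncratic errors that each over-fitted
    model makes on its own, which is the rationale given in the abstract.
    """
    mean_probs = np.mean(np.stack(prob_lists), axis=0)
    return mean_probs.argmax(axis=1)          # final ensemble prediction per sample
```

For example, if one model is confident in the wrong class on a sample while the others lean toward the right one, the averaged distribution usually recovers the correct label.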

66 citations

Journal ArticleDOI
TL;DR: The authors propose an initial training process called Deep Unsupervised Active Learning, in which a classification model can incrementally acquire new knowledge during the testing phase without manual guidance or correction of its decision making.
Abstract: Cooperative machine learning has many applications, such as data annotation, where an initial model trained with partially labeled data is used to continuously predict labels for unseen data. Predicted labels with a low confidence value are manually revised to allow the model to be retrained with the predicted and revised data. In this paper, we propose an alternative to this approach: an initial training process called Deep Unsupervised Active Learning. Using the proposed training scheme, a classification model can incrementally acquire new knowledge during the testing phase without manual guidance or correction of decision making. The training process consists of two stages: a first stage of supervised training of a classification model, and an unsupervised active learning stage during the test phase. Labels predicted with high confidence during the test phase are continuously used to extend the knowledge base of the model. For the proposed method to work well, the model must start with a high initial recognition rate. To this end, we exploited the pre-trained Visual Geometric Group (VGG16) model applied to three datasets: Mathematical Image Analysis (AMI), University of Science and Technology Beijing (USTB2), and Annotated Web Ears (AWE). The approach achieved impressive performance, including a significant improvement in the recognition rate on the USTB2 dataset obtained by colorizing its images with a Generative Adversarial Network (GAN). The obtained performances compare favorably with current methods: the recognition rates are 100.00%, 98.33%, and 51.25% for the USTB2, AMI, and AWE datasets, respectively.
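The two-stage scheme (supervised initialization, then unsupervised knowledge extension at test time) can be sketched with a toy classifier. A nearest-centroid model and a margin-based confidence score stand in for the VGG16-based model and its softmax confidence; both are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

class SelfExtendingCentroidClassifier:
    """Sketch of the two-stage scheme in the abstract.

    Stage 1: supervised fit of per-class centroids.
    Stage 2: at test time, predictions whose confidence clears
    `threshold` are folded back into the centroids, extending the
    model's knowledge without manual guidance.
    """

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.sums, self.counts = {}, {}

    def fit(self, X, y):                       # stage 1: supervised training
        for xi, yi in zip(X, y):
            self.sums[yi] = self.sums.get(yi, 0) + np.asarray(xi, float)
            self.counts[yi] = self.counts.get(yi, 0) + 1

    def predict(self, x):                      # stage 2: test + self-update
        cents = {c: s / self.counts[c] for c, s in self.sums.items()}
        d = {c: np.linalg.norm(np.asarray(x, float) - m) for c, m in cents.items()}
        label = min(d, key=d.get)
        # confidence: how much closer the best centroid is than the runner-up
        rest = [v for c, v in d.items() if c != label]
        conf = 1.0 - d[label] / min(rest) if rest else 1.0
        if conf >= self.threshold:             # high confidence: extend knowledge
            self.sums[label] = self.sums[label] + np.asarray(x, float)
            self.counts[label] += 1
        return label, conf
```

Each confident test sample shifts its class centroid slightly, so later test samples are classified against an updated model, which mirrors the incremental knowledge acquisition described above.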

57 citations

Journal ArticleDOI
27 Jan 2022-Sensors
TL;DR: A compressed sensing reconstruction method that combines total variation regularization with a non-local self-similarity constraint, permitting a gain of up to 25% in denoising efficiency and visual quality as measured by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
Abstract: In remote sensing applications and medical imaging, one of the key points is the acquisition, real-time preprocessing, and storage of information. Due to the large amount of information present in the form of images or videos, compression of these data is necessary. Compressed sensing is an efficient technique to meet this challenge. It consists of acquiring a signal, assuming that it can have a sparse representation, by using a minimum number of nonadaptive linear measurements. After this compressed sensing process, a reconstruction of the original signal must be performed at the receiver. Reconstruction techniques are often unable to preserve the texture of the image and tend to smooth out its details. To overcome this problem, we propose, in this work, a compressed sensing reconstruction method that combines total variation regularization with a non-local self-similarity constraint. The optimization of this method is performed using an augmented Lagrangian that avoids the difficult problem of the nonlinearity and nondifferentiability of the regularization terms. The proposed algorithm, called denoising-compressed sensing by regularization (DCSR), performs not only image reconstruction but also denoising. To evaluate the performance of the proposed algorithm, we compare it with state-of-the-art methods, such as Nesterov's algorithm, group-based sparse representation, and wavelet-based methods, in terms of denoising and preservation of edges, texture, and image details, as well as computational complexity. Our approach permits a gain of up to 25% in denoising efficiency and visual quality using two metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
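The optimization described in this abstract can be written schematically as follows. The symbols are assumed notation, since the abstract does not give the paper's exact formulation: $\Phi$ is the measurement matrix, $y$ the compressed measurements, $\Psi_{\mathrm{NL}}$ the non-local self-similarity term, and $\lambda,\beta$ weighting parameters.

```latex
\min_{x} \; \mathrm{TV}(x) + \lambda \, \Psi_{\mathrm{NL}}(x)
\quad \text{subject to} \quad \Phi x = y,
```

and the augmented Lagrangian that replaces the hard constraint with a penalized, differentiable-friendly form:

```latex
\mathcal{L}_{\beta}(x, \mu) \;=\; \mathrm{TV}(x) + \lambda \, \Psi_{\mathrm{NL}}(x)
\;+\; \langle \mu, \Phi x - y \rangle \;+\; \frac{\beta}{2} \, \| \Phi x - y \|_2^2 ,
```

which is minimized over $x$ while the multiplier $\mu$ is updated, sidestepping the nondifferentiability of the regularization terms as the abstract indicates.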

55 citations