
Showing papers on "Eigenface published in 2020"


Journal ArticleDOI
TL;DR: This paper introduces a new privacy-preserving face recognition protocol named PEEP, which applies perturbation to Eigenfaces utilizing differential privacy and stores only the perturbed data in the third-party servers to run a standard Eigenface recognition algorithm that will not be vulnerable to privacy attacks.

59 citations


Proceedings ArticleDOI
15 Jun 2020
TL;DR: This research focuses on a face recognition based attendance system that achieves a lower false-positive rate by applying a threshold to the confidence (i.e., Euclidean distance) value while detecting unknown persons and saving their images.
Abstract: The attendance system is used to track and monitor whether a student attends a class. There are different types of attendance systems, such as biometric-based, radio-frequency card-based, face recognition based, and old paper-based attendance systems. Of these, a face recognition based attendance system is more secure and time-saving. Several research papers focus only on the recognition rate of students. This research focuses on a face recognition based attendance system that achieves a lower false-positive rate by applying a threshold to the confidence (i.e., Euclidean distance) value while detecting unknown persons and saving their images. Compared to other Euclidean distance-based algorithms such as Eigenfaces and Fisherfaces, the Local Binary Pattern Histogram (LBPH) algorithm is better [11]. We used the Haar cascade for face detection because of its robustness, and the LBPH algorithm for face recognition, which is robust against monotonic grayscale transformations. Scenarios such as the face recognition rate, its false-positive rate, and the false-positive rate with and without using a threshold when detecting unknown persons are considered to evaluate our system. The face recognition rate for students is 77% and its false-positive rate is 28%. The system recognizes students even when they are wearing glasses or have grown a beard. Recognition of unknown persons is nearly 60% both with and without applying the threshold value; the corresponding false-positive rates are 14% and 30%, respectively.
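A minimal OpenCV sketch of the detect-then-recognize flow described above (Haar cascade detection, LBPH prediction, and a distance threshold for unknown faces). The cascade file ships with opencv-python, but the model file, image name, and threshold value are placeholders, and cv2.face requires the opencv-contrib-python package; this is an illustration of the idea, not the paper's code.

```python
import cv2

# Placeholder paths and threshold; cv2.face needs opencv-contrib-python.
CASCADE = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
THRESHOLD = 70.0   # LBPH "confidence" is a distance: lower means a closer match

detector = cv2.CascadeClassifier(CASCADE)
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("lbph_model.yml")  # model previously built with recognizer.train()

frame = cv2.imread("classroom.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    face = cv2.resize(gray[y:y + h, x:x + w], (200, 200))
    label, distance = recognizer.predict(face)
    if distance > THRESHOLD:                         # too far from every known student
        cv2.imwrite(f"unknown_{x}_{y}.png", face)    # save the unknown face
    else:
        print(f"student id {label} marked present (distance {distance:.1f})")
```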

51 citations


Proceedings ArticleDOI
05 Mar 2020
TL;DR: The focus of this paper is using digital image processing to develop a face recognition system, one of the most widely discussed biometric technologies.
Abstract: The face is the most important attribute when recognizing an individual. It serves as everyone's identity, and face recognition therefore helps authenticate a person's identity using their personal characteristics. The procedure for authenticating face data is divided into two phases: in the first phase, face detection is performed quickly, except in cases where the subject is placed quite far away; in the second phase, the face is recognized as a particular individual. The whole process is then repeated, helping to develop a face recognition model, one of the most widely discussed biometric technologies. There are two main techniques currently followed in face recognition: the Eigenface method and the Fisherface method. The Eigenface method uses Principal Component Analysis (PCA) to reduce the dimensionality of the facial feature space. The focus of this paper is using digital image processing to develop a face recognition system.
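As a rough illustration of the PCA step the Eigenface method relies on, here is a minimal NumPy sketch that computes eigenfaces from flattened, equal-sized grayscale images using the snapshot trick; the function names and the number of retained components are assumptions for the example.

```python
import numpy as np

def eigenfaces(images, k=20):
    """images: (n_samples, height*width) array of flattened grayscale faces."""
    X = images.astype(np.float64)
    mean_face = X.mean(axis=0)
    A = X - mean_face                          # center the data
    # Snapshot trick: eigenvectors of the small (n x n) matrix A A^T, mapped
    # back to pixel space, give the top eigenfaces without a huge covariance.
    eigvals, eigvecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(eigvals)[::-1][:k]
    faces = A.T @ eigvecs[:, order]            # (pixels, k)
    faces /= np.linalg.norm(faces, axis=0)     # normalize each eigenface
    return mean_face, faces

def project(image, mean_face, faces):
    """Return the k-dimensional weight vector for one flattened face."""
    return faces.T @ (image.astype(np.float64) - mean_face)
```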

48 citations


Journal ArticleDOI
TL;DR: An unsupervised clustering approach is described: a neural network model for facial attribute recognition based on transfer learning, whose goal is to group faces according to common facial features; the features collected in each cluster are then used to provide a compact and comprehensive description of the faces belonging to it.
Abstract: Despite the success obtained in face detection and recognition over the last ten years of research, the analysis of facial attributes is still a trending topic. Setting full face recognition aside, exploring the potential of soft biometric traits, i.e., individual facial traits such as the nose, the mouth, the hair and so on, is still considered a fruitful field of investigation. Being able to infer the identity of an occluded face, e.g., one voluntarily occluded by sunglasses or accidentally occluded due to environmental factors, can be useful in a wide range of operational fields where user collaboration cannot be assumed. This is especially true in forensic scenarios, where it is not unusual to have partial face photos or partial fingerprints. In this paper, an unsupervised clustering approach is described. It consists of a neural network model for facial attribute recognition based on transfer learning, whose goal is grouping faces according to common facial features. Moreover, we use the features collected in each cluster to provide a compact and comprehensive description of the faces belonging to each cluster, and deep learning as a means of task prediction in partially visible faces.

23 citations


Journal ArticleDOI
TL;DR: The proposed software system can be used as a security application in a smart building to control access, in line with the advancement of techniques in this area.
Abstract: In this paper, a software system based on face recognition is proposed. The system can be implemented in a smart building, or in any VIP building that needs secure entry in general. The human face is recognized from a stream of pictures or a video feed, and the person is recognized according to a specific algorithm; the algorithm employed in this paper is the Viola-Jones object detection framework, using Python. The task of the proposed facial recognition system consists of two steps: the first detects the human face from live video using the computer's webcam, and the second recognizes whether this face is allowed to enter the building by comparing it with the existing database. Both steps depend on OpenCV in Python, importing the cv2 module to detect the human face; frames can be read from and written to files with the cv2.imread and cv2.imwrite functions, respectively. Providing a security system is one of the most important features to be achieved in smart buildings, so this proposed software system can be used to control access in smart buildings and can serve as a security application there. Face recognition is one of the most widely used practical applications today. The proposed software system depends on OpenCV (Open Source Computer Vision), a popular computer vision library started by Intel in 1999. The library focuses on real-time image processing and includes patent-free implementations of the latest computer vision algorithms; OpenCV 2.3.1 comes with a programming interface to C, C++, Python, and Android. Three algorithms from the OpenCV Python library are used in this proposed system: Eigenfaces (createEigenFaceRecognizer()), Fisherfaces (createFisherFaceRecognizer()), and Local Binary Patterns Histograms (createLBPHFaceRecognizer()). Finally, the proposed system grants entry to the building only to persons authorized according to the face recognition algorithm.
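The factory functions named in the abstract are the OpenCV 2.x API; in current opencv-contrib-python the same three recognizers are created as sketched below. The training call in the comments uses placeholder variable names.

```python
import cv2

# OpenCV 2.x named these createEigenFaceRecognizer() etc.; in current
# opencv-contrib-python the equivalent factories live under cv2.face.
eigen  = cv2.face.EigenFaceRecognizer_create()
fisher = cv2.face.FisherFaceRecognizer_create()
lbph   = cv2.face.LBPHFaceRecognizer_create()

# All three share the same interface (placeholder variables shown):
#   recognizer.train(list_of_grayscale_faces, labels_array)
#   label, distance = recognizer.predict(probe_face)
# Note: the Eigen/Fisher recognizers require equal-sized training images.
```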

18 citations


Journal ArticleDOI
TL;DR: In the present work, multiple convolutional and pooling layers of Deep Learning Networks (DLN) efficiently extract the face database's high-level features, leading to better performance and faster learning than classification using deep neural networks.

17 citations


Journal ArticleDOI
TL;DR: Biometric-based authentication is effective and applicable for WBAN authentication and continuous personal authentication on these medical-application wireless networks.
Abstract: The authentication of Wireless Body Area Network (WBAN) nodes is a vital factor in its medical applications. This paper investigates methods of authentication over these networks. Effective unimodal and multimodal biometric identification approaches are also presented, based on individual face and voice recognition or their combination using different fusion types. Cryptographic and non-cryptographic authentication are discussed in this work, along with their suitability for medical applications; cryptography-based authentication is not suitable for WBANs. Biometric authentication and its challenges are then discussed, and different fusion types in multimodal biometrics are presented. Two unimodal schemes are presented, based on using the voice and the face image individually; these two biometrics are then combined in the multimodal biometric scheme. The presented multimodal scheme is evaluated and applied using feature fusion and score fusion. The mechanism starts by capturing the biometric signals (face/voice); the second step extracts features from each biometric individually. Artificial Neural Network (ANN), Support Vector Machine (SVM), and Gaussian Mixture Model (GMM) classifiers are employed to perform the classification individually. The computer simulation experiments reveal that cepstral and statistical coefficients performed better for the voice scenario, and that the Eigenface and support vector machine tools performed better than the other schemes for face recognition. The multimodal results are better than the unimodal schemes, and the score fusion-based multimodal biometric scheme outperforms the feature fusion-based scheme. Hence, biometric-based authentication is effective and applicable for WBAN authentication and continuous personal authentication on these medical-application wireless networks.
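As a sketch of the score-level fusion the paper compares against feature fusion, the snippet below combines two matchers' scores with min-max normalization and a weighted sum; the normalization choice, equal weights, and the example numbers are assumptions rather than the paper's settings.

```python
import numpy as np

def min_max(scores):
    """Normalize a vector of matcher scores to [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def score_fusion(face_scores, voice_scores, w_face=0.5, w_voice=0.5):
    """Weighted-sum score fusion over the enrolled identities."""
    return w_face * min_max(face_scores) + w_voice * min_max(voice_scores)

# Example: per-identity similarity scores from the two matchers (made-up numbers).
face  = [0.61, 0.85, 0.40]
voice = [0.55, 0.72, 0.48]
fused = score_fusion(face, voice)
print("accepted identity:", int(np.argmax(fused)))
```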

14 citations


Book ChapterDOI
01 Jan 2020
TL;DR: By aggregating the facial expressions of students in a class, an adaptive learning strategy can be developed and implemented in the classroom environment, and it is suggested that relevant interventions can be predicted based on emotions observed in a lecture setting or a class.
Abstract: Emotion is equivalent to the mood or state of human feeling that correlates with non-verbal behavior. Related literature shows that humans tend to give off clues to a particular feeling through non-verbal cues such as facial expression. This study aims to analyze the emotions of students using a Philippines-based corpus of facial expressions (fear, disgust, surprise, sadness, anger and neutral) with 611 examples validated by psychology experts; the results aggregate the final emotion, which is used to define the meaning of the emotion and connect it with a teaching pedagogy to support decisions on teaching strategies. The experiments used feature extraction methods such as the Haar-cascade classifier for face detection, a Gabor filter and an eigenfaces API for feature extraction, and a support vector machine for training the model, with 80.11% accuracy. The results were analyzed and correlated with the appropriate teaching pedagogies for educators, suggesting that relevant interventions can be predicted based on emotions observed in a lecture setting or a class. The prototype, implemented in a Java environment, captured images in an actual class to scale the actual performance rating and had an average accuracy of 60.83%. The study concludes that by aggregating the facial expressions of students in the class, an adaptive learning strategy can be developed and implemented in the classroom environment.
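The study's prototype runs in Java, so the Python/OpenCV snippet below is only an illustration of the Gabor-filter-plus-SVM idea it describes; the kernel parameters, feature summary (mean and variance per orientation), and the toy data are made up for the example.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_features(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean/variance of Gabor responses at several orientations (illustrative parameters)."""
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0, ktype=cv2.CV_32F)
        response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.var()])
    return np.array(feats)

# Placeholder training data: random "face crops" with made-up emotion labels.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(12, 64, 64), dtype=np.uint8)
labels = ["neutral", "sad", "anger", "fear"] * 3

X = np.stack([gabor_features(f) for f in faces])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))
```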

12 citations


Journal ArticleDOI
TL;DR: The experimental results showed that the proposed BMC-LBPH FR technique outperformed the traditional LBPH method, achieving accuracies of 65%, 98%, and 78% on the 5_celebrity dataset, the LU dataset, and in rainy weather, respectively; the proposed method provides a promising solution for facial recognition using a UAV.
Abstract: Face recognition (FR) in an unconstrained environment, such as low light, illumination variations, and bad weather, is very challenging and still needs intensive further study. Previously, numerous experiments on FR in unconstrained environments have been assessed using the Eigenface, Fisherface, and Local Binary Pattern Histogram (LBPH) algorithms. The results indicate that LBPH FR is the optimal one compared to the others due to its robustness in various lighting conditions. However, no specific experiment has been conducted to identify the best setting of the four LBPH parameters (radius, neighbors, grid, and the threshold value) for FR in terms of accuracy and computation time. Additionally, the overall performance of LBPH in unconstrained environments is usually underestimated. Therefore, in this work, an in-depth experiment is carried out to evaluate the four LBPH parameters using two face datasets, the Lamar University database (LUDB) and the 5_celebrity dataset, and a novel Bilateral Median Convolution-Local Binary Pattern Histogram (BMC-LBPH) method is proposed and examined in real time in rainy weather using an unmanned aerial vehicle (UAV) incorporating 4 vision sensors. The experimental results showed that the proposed BMC-LBPH FR technique outperformed the traditional LBPH method, achieving accuracies of 65%, 98%, and 78% on the 5_celebrity dataset, the LU dataset, and in rainy weather, respectively. Ultimately, the proposed method provides a promising solution for facial recognition using a UAV.
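The abstract does not spell out the BMC preprocessing, so the following is only a plausible OpenCV sketch that applies bilateral and median filtering before a standard LBPH recognizer whose four parameters (radius, neighbors, grid, threshold) are set explicitly; all parameter values here are placeholders.

```python
import cv2

def preprocess_bmc(gray, d=9, sigma_color=75, sigma_space=75, median_ksize=5):
    """Denoise a grayscale face before LBPH (bilateral then median filtering)."""
    smoothed = cv2.bilateralFilter(gray, d, sigma_color, sigma_space)
    return cv2.medianBlur(smoothed, median_ksize)

# LBPH with explicitly chosen radius / neighbors / grid / threshold values,
# the four parameters the paper tunes (numbers here are illustrative only).
recognizer = cv2.face.LBPHFaceRecognizer_create(
    radius=2, neighbors=8, grid_x=8, grid_y=8, threshold=80.0)
```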

11 citations


Book ChapterDOI
01 Jan 2020
TL;DR: In this paper, a convolutional neural network (CNN) method is used for generic object detection, developing a classifier to solve the object identification problem in face recognition.
Abstract: Face recognition is an important concept that has been widely studied over recent decades. Face detection can generally be considered a special kind of object recognition in computer vision. In this paper, we explore one of the most important and effective techniques for generic object detection, the convolutional neural network (CNN), developing a classifier to solve the object identification problem. Recognizing faces in images is a very difficult problem, and so far no high-quality results have been obtained. Usually, this problem is split into distinct sub-problems to make it simpler to work with, mainly detecting the face in a picture followed by the face recognition itself. There are several tasks to perform in between, such as partial-image face detection or extracting more features from the faces. Over the years, numerous algorithms and systems have been used, such as eigenfaces, active shape models, principal component analysis (PCA), K-nearest neighbour (KNN), and local binary pattern histograms (LBPH), but accurate results have not been achieved. However, because of the drawbacks of the previously mentioned techniques, in my study I want to use a CNN with deep learning to obtain the best results.

8 citations


Proceedings ArticleDOI
23 Oct 2020
TL;DR: The purpose of this research was to build face recognition software using the eigenface algorithm; the results showed that faces could be recognized with an average accuracy rate of 85%.
Abstract: The eigenface algorithm uses a collection of eigenvectors for face recognition by computer. The face recognition system is part of image processing that recognizes faces based on images captured and stored in JPEG format. Face recognition problems can be solved through the implementation of an algorithm; the algorithm used in this study is the eigenface algorithm. The input image is captured through a laptop camera at 320 x 240 pixels and reduced to 100 x 100 pixels to be saved as a master file of the face with various facial expressions: facing forward without a smile, facing forward with a thin smile, facing forward with a big smile, head tilted to the left, and head tilted to the right. The purpose of this research is to build face recognition software using the eigenface algorithm. The results showed that faces could be recognized using the eigenface algorithm with an average accuracy rate of 85%.

Proceedings ArticleDOI
07 Oct 2020
TL;DR: The main objective of this paper is a convolutional neural network arrangement for predicting the likelihood of a person's association with others in a gathering; the framework combines a convolutional neural network with machine learning algorithms.
Abstract: The main objective of this paper is a convolutional neural network arrangement for predicting the likelihood of a person's association with others in a gathering. The framework combines a convolutional neural network with machine learning algorithms. The integrated applications are face detection, face clustering, and face identification. This technique is capable of fast classification and gives better performance than the eigenfaces approach. The DLIB algorithm encodes the face into a 128-D array and stores it in a file-handling system. The framework accepts video as the source and employs the Chinese whispers clustering algorithm on the face recognition output to group the faces of a person. Based on the information from the cluster results, the Apriori/association rule mining algorithm finds the association likelihood. The framework is integrated with Google Cloud to provide an application-based interface that makes application programming interface calls. The association rule mining algorithm is used to find the confidence probabilities of a person with other individuals in the video clip.
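A minimal sketch of the 128-D encoding and Chinese whispers clustering pipeline described above, using dlib via the face_recognition wrapper; the frame file names are hypothetical, and 0.5 is the distance threshold used in dlib's clustering example rather than a value from this paper.

```python
import dlib
import face_recognition  # convenience wrapper around dlib's face models

frames = ["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]  # hypothetical frames

descriptors, sources = [], []
for path in frames:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):   # one 128-D vector per face
        descriptors.append(dlib.vector(encoding.tolist()))
        sources.append(path)

# Chinese whispers groups descriptors belonging to the same person; 0.5 is the
# distance threshold from dlib's face clustering example.
labels = dlib.chinese_whispers_clustering(descriptors, 0.5)
for person_id, frame in zip(labels, sources):
    print(f"person {person_id} appears in {frame}")
```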

Journal ArticleDOI
20 Jul 2020
TL;DR: The safe security system using Android-based face recognition can read the user's face in real time and works well for securing a safe.
Abstract: The level of access security is one of everyone's main priorities, and security systems need improvement in line with the development of modern technology. This study discusses a safe security system using Android-based face recognition. The aim of this research is for the safe security system to have a better level of security than the previous system. In the initial stage of building this system, the authors collected literature as a theoretical basis; the system development method used is the waterfall method, which is generally divided into several stages: analysis, design, program code, and unit testing. The method used in this system is the eigenfaces algorithm for detecting facial objects in the initial image training process, together with the Local Binary Patterns algorithm and histogram equalization at the stage of accurately reading the user's face image, which achieves a face reading accuracy of up to 95.56%. The user's face data is processed in a Wemos D1, and the data is sent and stored in a database. The face recognition results are then used as user data to open the safe. In conclusion, the system can read the user's face in real time and works well as a safe security system.

Journal ArticleDOI
TL;DR: This work mathematically formulates the approach, placing importance on the most discriminant facial features, and compares the whole proposed algorithm with a well-known face recognition method, Eigenfaces, achieving promising results in different cases.
Abstract: Face recognition is still an active pattern analysis topic. Faces have already been treated as objects or textures, but the human face recognition system takes a different approach. People refer to faces by their most discriminant features. People usually describe faces in sentences like "she's snub-nosed", "he's got a long nose" or "he's got round eyes", and so on. These most discriminant features are extracted by comparing a face with an average face formed in one's mind. We have mathematically formulated this approach and placed importance upon the most discriminant features. We explain the feature processing and classification parts in detail, as well as the training and test phases of the proposed algorithm. We have compared the proposed classification part with a 1-NN classifier to show the strength of the algorithm and reported the results. We have also compared the whole proposed algorithm with a well-known face recognition method, Eigenfaces, and achieved promising results in different cases.

Book ChapterDOI
13 Feb 2020
TL;DR: This system implements an automated attendance system using real-time computer vision algorithms and adaptive techniques to track faces over a specific period of time; it uses eigenface recognizers and Intel's Haar cascades to make the attendance-taking process easier and less time-consuming than the traditional process.
Abstract: The face identification system is one of the emerging methods for user authentication; it is drawing wide attention in surveillance, reflecting innovation in video surveillance systems. The system presented here is an automated attendance system that uses real-time computer vision algorithms and adaptive techniques to track faces over a specific period of time. Our system works on eigenface recognizers and Intel's Haar cascades, which make the attendance-taking process easier and less time-consuming than the traditional process. It also provides a cheaper solution than previous biometric systems such as fingerprint authentication. The recorded data is compared with the training dataset, and attendance is recorded if a match is found, with the help of Python libraries. The camera is installed at the entry location, so attendance is recorded as soon as a person entering the area is matched. Our main aim is to provide an alternative that is convenient for processing attendance and is also safe and authentic, with the face as a security option.

Journal ArticleDOI
TL;DR: The evaluations of the test cases indicate that, among the compared facial recognition algorithms, the OpenFace algorithm has the highest accuracy in identifying faces; the findings advise practitioners on algorithm selection and academics on how to improve the accuracy of the current algorithms even further.
Abstract: Computer vision and its applications have become important in contemporary life. Hence, research on facial and object recognition has become increasingly important to both academics and practitioners. Smart gadgets such as smartphones now offer high processing power and memory capacity along with high-resolution cameras. Furthermore, connectivity bandwidth and the speed of interaction have significantly impacted the popularity of mobile object recognition applications. These developments, together with advances in computer vision algorithms, have moved object recognition from desktop environments to the mobile world. The aim of this paper is to reveal the efficiency and accuracy of existing open-source facial recognition algorithms in real-life settings. We use the following popular open-source algorithms for the evaluation: Eigenfaces, Fisherfaces, Local Binary Pattern Histogram, a deep convolutional neural network algorithm, and OpenFace. The evaluations of the test cases indicate that, among the compared facial recognition algorithms, the OpenFace algorithm has the highest accuracy in identifying faces. The findings of this study help practitioners decide on algorithm selection and academics on how to improve the accuracy of the current algorithms even further.


Journal ArticleDOI
28 Dec 2020
TL;DR: This study experimented with a smart phone camera and different combinations of face detection and recognition algorithms to determine if it can be used to record attendance successfully, while keeping the solution cost-effective.
Abstract: Class attendance is important. Class attendance recording is often done using ‘roll-call’ or signing attendance registers. These are time-consuming, easy to cheat, and it is difficult to draw any information from them. There are other, expensive alternatives to automate attendance recording, with varying accuracy. This study experimented with a smartphone camera and different combinations of face detection and recognition algorithms to determine if they can be used to record attendance successfully, while keeping the solution cost-effective. The effect of different class sizes was also investigated. The research was done within a pragmatism philosophy, using a prototype in a field experiment. The algorithms used are Viola-Jones (Haar features), Deep Neural Network (DNN) and Histogram of Oriented Gradients (HOG) for detection, and Eigenfaces, Fisherfaces and Local Binary Pattern Histogram (LBPH) for recognition. The best combination was Viola-Jones combined with Fisherfaces, with a mean accuracy of 54% for a class of 10 students and 34.5% for a class of 22 students. The best overall performance on a single class photo was 70% (class size 10). As is, this prototype is not accurate enough to use, but with a few adjustments it may become a cheap, easy-to-implement solution to the attendance recording problem.

Journal ArticleDOI
30 Oct 2020
TL;DR: This test has proven that the Eigenface algorithm and Euclidean distance within Principal Component Analysis (PCA) are able to handle and recognize smokers' facial image data well.
Abstract: Cigarettes are one of the biggest contributors to preventable causes of death in society. Cigarette smoke contains various chemicals that can cause diseases such as chronic coughs, lung cancer, and other health problems, and it harms not only the health of the smoker but also the health of others. Written warnings about smoking bans are often not followed by active smokers. This study aims to recognize and identify the faces of smokers who do not obey the rules by using dimensionality reduction techniques based on the Principal Component Analysis (PCA) method. PCA is integrated with the Eigenface algorithm and Euclidean distance analysis to reduce the image size, obtain the best eigenvectors that simplify the face image in the input image space, and find the threshold value that test data must pass so that it can be recognized through the calculation of the distance for each weight. In this study, 8 smoker faces with 5 different facial poses were tested in 40 face recognition experiments, resulting in 34 correct and 6 incorrect smoker face recognitions, with an accuracy rate of 92.5% and a face recognition processing time of 80 seconds. This test has proven that the Eigenface algorithm and Euclidean distance within Principal Component Analysis (PCA) are able to handle and recognize smokers' facial image data well.
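A minimal sketch of the distance-threshold test described here, assuming eigenface weights have already been computed for the gallery and the probe; the example weights, labels, and threshold are made up.

```python
import numpy as np

def recognize(probe_weights, gallery_weights, labels, threshold):
    """Nearest-neighbour match in eigenface weight space with a reject option.

    probe_weights:   (k,) projection of the test face onto the eigenfaces
    gallery_weights: (n, k) projections of the enrolled training faces
    labels:          length-n list of identities for the gallery rows
    threshold:       maximum Euclidean distance accepted as 'recognized'
    """
    distances = np.linalg.norm(gallery_weights - probe_weights, axis=1)
    best = int(np.argmin(distances))
    if distances[best] > threshold:
        return None, distances[best]          # unknown face
    return labels[best], distances[best]

# Placeholder usage with made-up 3-dimensional weights.
gallery = np.array([[1.0, 0.2, -0.5], [0.9, 0.1, -0.4], [-2.0, 1.5, 0.3]])
names = ["smoker_A", "smoker_A", "smoker_B"]
print(recognize(np.array([0.95, 0.15, -0.45]), gallery, names, threshold=1.0))
```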

Journal ArticleDOI
TL;DR: In this paper, the authors used principal component analysis (PCA), linear discriminant analysis (LDA), Fisher Discriminant Analysis (FDA), and simple projection (SP) to recognize people from their facial images.
Abstract: Computer vision has become a great area of research due to the huge availability of images and videos. For the enhancement of security, biomedical imaging, or automation of identification, one may need useful tools to recognize images. One main problem of image data sets is their high dimensionality, and it is very expensive to work with huge dimensions. In this paper, our main aim is to show a better dimension-reduction process for high-dimensional image data sets among several existing techniques. To verify this, we start with singular value decomposition to reduce the dimensionality of the data and obtain principal components. On the other hand, we classify the data in advance to work out Fisher's discriminant. Drawing on many real-world examples, we set up a well-known paradigm of analysis using Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) or Fisher Discriminant Analysis (FDA), and Simple Projection (SP) to recognize people from their facial images. We assume that we have some images of known people that can be used to compare and recognize new images (of the same set of face images). Moreover, we show graphical and tabular representations of the average correct-recognition performance and analyze the effectiveness of the three different machine learning techniques.
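A minimal NumPy sketch of the SVD-based dimension reduction the paper starts from: the centered image matrix is projected onto its top-k right singular vectors (the principal components); the toy data and k are placeholders.

```python
import numpy as np

def svd_reduce(X, k):
    """Project rows of X (n_samples x n_pixels) onto the top-k right singular
    vectors, i.e. the principal directions of the centered data."""
    Xc = X - X.mean(axis=0)
    # economy SVD: Xc = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]              # (k, n_pixels) principal directions
    return Xc @ components.T, components

# Placeholder data: 6 "images" of 100 pixels each, reduced to 3 dimensions.
X = np.random.default_rng(0).normal(size=(6, 100))
Z, components = svd_reduce(X, k=3)
print(Z.shape)   # (6, 3)
```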

Proceedings ArticleDOI
23 Jul 2020
TL;DR: This work evaluates the performance of conventional face detection and recognition models, analyzing a model's accuracy in detecting human faces, its computation time, and its vulnerability to photo hacking when implemented on the Raspberry Pi platform.
Abstract: The authenticated door-lock security system for the smart home is a critical Internet-of-Things (IoT) application. The most pressing bottleneck for running algorithms on IoT devices is the availability of low computation resources. State-of-the-art algorithms that work significantly well on high-end computing units often fail to operate on IoT devices like the Raspberry Pi. This work targets the design of a face-authenticated door-lock security system on the Raspberry Pi and therefore evaluates the performance of conventional face detection and recognition models. The local binary pattern histogram (LBPH), eigenface and fisherface algorithms are considered for evaluation. The evaluation analyzes a model's accuracy in detecting human faces, its computation time, and its vulnerability to photo hacking when implemented on the Raspberry Pi platform. It covers a large number of stored faces and cases of variation in angle and illumination in the face image. The Python and C++ bindings to the OpenCV framework are used to create a fully functional face recognition system on the hardware.

Journal ArticleDOI
TL;DR: The main objective of this paper is to demonstrate the weaknesses and strengths of the facial recognition approach known as eigenfaces, using the principal component analysis algorithm on images from previously stored training data.
Abstract: This paper discusses the results of a study that aimed to develop an eigenface technique known as (PC)2A, which combines the original face image with its vertical and horizontal projections. The principal components of the image were analyzed in the image enrichment stage. An evaluation of the proposed method demonstrates that it costs less than the standard eigenface technique. Moreover, experimental results on a frontal gray-level face database with one training image per person show that, in terms of accuracy, the proposed (PC)2A achieves a result 3-5% higher than the precision of the standard eigenface technique. The main objective of this paper is to demonstrate the weaknesses and strengths of the facial recognition approach known as eigenfaces. This aim was achieved by using the principal component analysis algorithm on images from previously stored training data. The outcomes show the strength of the proposed technique, with which it was possible to obtain accuracy results of up to 96%, which in turn supports further development of the technique proposed in this paper, as this work is of great importance in the field of biometric applications, the need for which has significantly increased over the last 5 years.
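The abstract does not give the projection-combination formula, so the snippet below is only a plausible reading of the (PC)2A idea of blending a face with a map built from its vertical and horizontal projections; the normalization and the blending weight alpha are assumptions.

```python
import numpy as np

def projection_combined(image, alpha=0.25):
    """Blend a grayscale face with a map built from its row/column projections.

    This follows the general (PC)2A idea of enriching a single training image;
    the exact normalization and the alpha value here are assumptions, not the
    paper's formula.
    """
    I = image.astype(np.float64)
    v = I.sum(axis=1)                      # vertical projection: one value per row
    h = I.sum(axis=0)                      # horizontal projection: one value per column
    proj_map = np.outer(v, h) / I.sum()    # outer product scaled to the image's energy
    combined = (I + alpha * proj_map) / (1.0 + alpha)
    return combined
```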

Posted Content
TL;DR: In this article, a combination of Principal Component Analysis (PCA) and Delaunay Triangulation (DTA) was used to improve the accuracy of the face recognition system.
Abstract: Face recognition is most often used for biometric user authentication that identifies a user based on his or her facial features. The system is in high demand, as it is used by many businesses and employed in many devices such as smartphones and surveillance cameras. However, one frequent problem still observed in this user-verification method is its accuracy rate. Numerous approaches and algorithms have been tried to improve this flaw of the system. This research develops one such algorithm that utilizes a combination of two different approaches. Using concepts from linear algebra and computational geometry, the research examines the integration of Principal Component Analysis with Delaunay Triangulation; the method triangulates a set of face landmark points and obtains eigenfaces of the provided images. It compares the algorithm with traditional PCA and discusses the inclusion of different face landmark points to deliver an effective recognition rate.
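A small sketch of the Delaunay part of the combined approach, triangulating a handful of hypothetical 2-D landmark points with SciPy and deriving one simple geometric feature (triangle area) that could be concatenated with eigenface weights; the landmark coordinates are made up.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 2-D face landmark points (eye corners, nose tip, mouth corners);
# a real system would obtain them from a landmark detector.
landmarks = np.array([
    [30.0, 40.0], [70.0, 40.0],   # eyes
    [50.0, 60.0],                 # nose tip
    [35.0, 80.0], [65.0, 80.0],   # mouth corners
])

tri = Delaunay(landmarks)
print("triangles (as landmark indices):\n", tri.simplices)

# One simple geometric feature per triangle: its area. Such features could be
# concatenated with the eigenface weights in the combined representation.
a, b, c = (landmarks[tri.simplices[:, i]] for i in range(3))
areas = 0.5 * np.abs((b[:, 0] - a[:, 0]) * (c[:, 1] - a[:, 1])
                     - (b[:, 1] - a[:, 1]) * (c[:, 0] - a[:, 0]))
print("triangle areas:", areas)
```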

Proceedings ArticleDOI
01 Jul 2020
TL;DR: This paper designs and implements a security system based on machine learning algorithms; the precision and efficiency with which the model identifies people is its real added value.
Abstract: The recognition of human faces plays an important role in many applications, for example in video surveillance and the management of facial image databases. This paper designs and implements a security system based on machine learning algorithms. Principal Component Analysis (PCA) is the algorithm that represents the faces economically; it extracts the most dominant eigenfaces from the given set of faces, and comparison of video frames can be done using this technique. Faces can be located in frames using the Haar cascade, which extracts the characteristics of a human face. The SVM algorithm is used to classify the data sets using a kernel. The performance of the identification system also depends on the extraction of the attributes and their classification in order to obtain accurate results. These algorithms give different accuracy rates under different conditions, as observed experimentally. The precision and efficiency with which the model identifies people is the real added value of this paper.
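A minimal scikit-learn sketch of the PCA-then-SVM pipeline the abstract outlines (eigenfaces as the economical representation, an SVM with a kernel as the classifier); the random placeholder data, component count, and SVM hyperparameters are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Placeholder data standing in for flattened, aligned grayscale face crops.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 64 * 64))   # 40 faces of 64x64 pixels
y_train = np.repeat(np.arange(8), 5)       # 8 identities, 5 images each

# PCA keeps the most dominant eigenfaces; the SVM classifies in that subspace.
model = make_pipeline(
    PCA(n_components=20, whiten=True, random_state=0),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
model.fit(X_train, y_train)

X_probe = rng.normal(size=(1, 64 * 64))    # one new face to identify
print("predicted identity:", model.predict(X_probe)[0])
```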

Book
30 Jun 2020
TL;DR: This book analyzes, from a critical visual culture studies perspective, the automated facial recognition algorithms that are increasingly intervening in our society, as a case study in machinic vision and its concurrent modes of perception.
Abstract: This book offers a unique analysis of the use of the automated facial recognition algorithms that are increasingly intervening in our society from a critical visual culture studies perspective. The discussion focuses on the visuality of automated facial recognition and its designed algorithms as a case study in machinic vision and its concurrent modes of perception. It focuses on a general problematic of facial recognition technology, in asking how recognition can be defined through a technical process. This analysis draws on two primary genres of image sources: firstly, technical images that result from an algorithmic process of facial recognition and secondly, artistic images of contemporary artists who intervene with facial recognition technology. The first part of this study historicizes an early facial recognition algorithm called "eigenface" by relating its processes of recognition with a practice of composite portraiture, invented by Francis Galton in the 1880s as part of his larger project of eugenics. Both the technical processes of eigenface and Galton's composite portraiture practice reference a merging of statistical logic with vision, as a means of recognition. As a counter aesthetic approach, the discussion moves to an alternate reading of the composite portrait by Ludwig Wittgenstein in the context of his philosophical investigations. The second part addresses contemporary artistic engagements with facial recognition technology that articulate the contemporary cultural and political implications of the technology. Notions of representation, identity and algorithmic meaning production in relation to facial recognition are explored through the work of Thomas Ruff, Zach Blas and Trevor Paglen. This investigation is interdisciplinary and draws on a wide range of discourse including the fields of computer science, sociology, philosophy, media studies and contemporary art. This book argues that we must take a closer look at how the enactment of recognition occurs through automated facial recognition technology and that it is indeed embedded with a visual politics. Even more significantly, this technology, the book argues, is redefining what it means to see and be seen in the contemporary world.

Proceedings ArticleDOI
20 Mar 2020
TL;DR: A Convolutional Neural Network (CNN) based face recognition technique is proposed; face recognition, previously done with eigenfaces, is the machine recognition of a person's face by analysing patterns in facial features, and the CNN achieves better accuracy.
Abstract: Fast and accurate user identification and verification is always desirable. Face recognition, the machine recognition of a person's face by analysing patterns in facial features, is becoming important for security and validation. Requiring less interaction from the user contributes to high enrolment, and being easily applicable to current technology further adds to its importance. In this regard, we propose a Convolutional Neural Network (CNN) based face recognition technique; this was previously done with eigenfaces [8], but the CNN has better accuracy. The entire process is divided into four phases: capturing the image, feature extraction, classification and matching. The AT&T faces dataset is used in this paper. Input images are first fed to face detection, which is performed using the Viola-Jones algorithm. A Convolutional Neural Network (CNN) is applied for feature extraction and classification. The results obtained in this paper show a recall of 0.992, precision of 99.4, F1 score of 99.1 and F1-beta score of 0.992 for a 70-30 split of the dataset, i.e., 70% of the dataset used for training and 30% for testing.
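A minimal PyTorch sketch of a CNN classifier for 112x92 grayscale crops with 40 identities, matching the AT&T dataset's format; the layer sizes are illustrative and not the paper's architecture, and detection (Viola-Jones) is assumed to have happened upstream.

```python
import torch
from torch import nn

class FaceCNN(nn.Module):
    """Small CNN for 112x92 grayscale face crops (40 identities, as in AT&T)."""
    def __init__(self, num_classes=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 28 * 23, 128), nn.ReLU(),   # 112x92 -> 28x23 after two pools
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FaceCNN()
dummy = torch.randn(4, 1, 112, 92)   # batch of 4 detected face crops
print(model(dummy).shape)            # torch.Size([4, 40])
```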

Proceedings ArticleDOI
28 Sep 2020
TL;DR: This work compares two algorithms, Local Binary Pattern and Eigenfaces, for predicting suspected positive drug users based on face images, and shows that prediction using Local Binary Pattern is better than prediction using Eigenfaces.
Abstract: Drug inspection is currently usually carried out at schools or universities. This procedure, however, is less effective and efficient, as the urine samples are taken randomly. In many cases, the suspect student is not present or escapes the urine or hair inspection. A way to predict drug users is needed, so that only students suspected of positive drug use are selected for a urine test. To handle this problem, we need a system to predict suspected positive drug users. The dataset is generated from online sources by collecting and pre-processing 30 images of people before and after drug use. We compare two algorithms, Local Binary Pattern and Eigenfaces, for predicting suspected positive drug users based on face images. The experiments show that prediction using Local Binary Pattern is better than prediction using Eigenfaces; however, the higher prediction accuracy reaches only 75%.

Journal Article
TL;DR: This paper will provide a comparative study between these algorithms, namely: Eigenface, FisherFace & SURF, in the field of secure voting methodology.
Abstract: Facial recognition is a category of biometric software that works by matching facial features. We study the implementation of various algorithms in the field of secure voting methodology. Three levels of verification are used for the voters in our proposed system: the first is UID verification, the second is the voter card number, and the third level includes the use of various algorithms for facial recognition. In this paper, we provide a comparative study between these algorithms, namely Eigenface, FisherFace and SURF.

Proceedings ArticleDOI
11 May 2020
TL;DR: A Cloud-Edge architecture for robots that enables the use case of face recognition in deadline constrained environments is proposed and a mathematical model for a Data Capsule which represents structured units of data in a time series is formulated.
Abstract: Nowadays robots are one of the most important technologies and face recognition is crucial for human-robot interaction. Face recognition has direct benefits in the commercial and law enforcement fields and it enables robots to perform a wide variety of roles such as assistance, search and rescue, military and so on. In this paper we propose a Cloud-Edge architecture for robots that enables the use case of face recognition in deadline constrained environments. We formulate a mathematical model for a Data Capsule which represents structured units of data in a time series. We design the components on each layer of the architecture. We propose a deadline aware scheduler in the Fog that acts as a proxy for the processing platforms in the Cloud and we design two face recognition applications, one in the Edge for robots that is implemented with eigenfaces and one in the Cloud for the processing platforms with deep neural networks (DNN). We evaluate the performance of the face recognition applications by running a workload that consists of a well-known labelled image data set. We test the ability of the Fog scheduler to launch jobs on time when strict deadlines are in place and it runs a heavy workload of jobs.

Posted Content
TL;DR: This paper introduces a smart and efficient attendance system using face detection and face recognition with the help of a Convolutional Neural Network.
Abstract: Research on attendance systems has been going on for a very long time; numerous solutions have been proposed in the last decade to make these systems efficient and less time-consuming, but all of them have several flaws. In this paper, we introduce a smart and efficient attendance system using face detection and face recognition. This system can be used to take attendance in colleges or offices using real-time face recognition with the help of a Convolutional Neural Network (CNN). Conventional methods such as Eigenfaces and Fisherfaces are sensitive to lighting, noise, posture, obstruction, illumination, etc. Hence, we use a CNN to recognize the face and overcome such difficulties. The attendance records are updated automatically and stored in an Excel sheet as well as in a database. We use MongoDB as the backend database for attendance records.