
Showing papers on "Eigenface published in 2013"


Proceedings ArticleDOI
11 Aug 2013
TL;DR: This paper proposes an image sentiment prediction framework, which leverages the mid-level attributes of an image to predict its sentiment, and introduces eigenface-based facial expression detection as an additional mid-level attribute.
Abstract: Visual content analysis has always been important yet challenging. Thanks to the popularity of social networks, images have become a convenient carrier for information diffusion among online users. To understand the diffusion patterns and different aspects of the social images, we need to interpret the images first. Similar to textual content, images also carry different levels of sentiment to their viewers. However, different from text, where sentiment analysis can use easily accessible semantic and context information, how to extract and interpret the sentiment of an image remains quite challenging. In this paper, we propose an image sentiment prediction framework, which leverages the mid-level attributes of an image to predict its sentiment. This makes the sentiment classification results more interpretable than directly using the low-level features of an image. To obtain a better performance on images containing faces, we introduce eigenface-based facial expression detection as an additional mid-level attribute. An empirical study of the proposed framework shows improved performance in terms of prediction accuracy. More importantly, by inspecting the prediction results, we are able to discover interesting relationships between mid-level attributes and image sentiment.
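As a rough illustration of how an eigenface-based facial-expression score could serve as one mid-level attribute alongside others, the sketch below projects detected face crops onto eigenfaces and feeds the coefficients, concatenated with other attribute scores, into a standard classifier. The function names, component count, and logistic-regression choice are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def eigenface_attribute(face_crops, n_components=20):
    """Fit eigenfaces on flattened, aligned face crops and return their coefficients."""
    pca = PCA(n_components=n_components, whiten=True)
    coeffs = pca.fit_transform(face_crops)            # (n_images, n_components)
    return pca, coeffs

def train_sentiment_model(face_crops, other_attributes, labels):
    """Combine eigenface coefficients with other mid-level attribute scores."""
    _, eig_coeffs = eigenface_attribute(face_crops)
    features = np.hstack([eig_coeffs, other_attributes])
    return LogisticRegression(max_iter=1000).fit(features, labels)
```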

194 citations


Proceedings ArticleDOI
01 Dec 2013
TL;DR: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework.
Abstract: This paper advances descriptor-based face recognition by suggesting a novel usage of descriptors to form an over-complete representation, and by proposing a new metric learning pipeline within the same/not-same framework. First, the Over-Complete Local Binary Patterns (OCLBP) face representation scheme is introduced as a multi-scale modified version of the Local Binary Patterns (LBP) scheme. Second, we propose an efficient matrix-vector multiplication-based recognition system. The system is based on Linear Discriminant Analysis (LDA) coupled with Within Class Covariance Normalization (WCCN). This is further extended to the unsupervised case by proposing an unsupervised variant of WCCN. Lastly, we introduce Diffusion Maps (DM) for non-linear dimensionality reduction as an alternative to the Whitened Principal Component Analysis (WPCA) method which is often used in face recognition. We evaluate the proposed framework on the LFW face recognition dataset under the restricted, unrestricted and unsupervised protocols. In all three cases we achieve very competitive results.
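A minimal sketch of the WCCN step described above, assuming LDA-projected descriptors as input; the regularization constant and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def wccn_transform(X, y, reg=1e-6):
    """X: (n_samples, d) LDA-projected descriptors, y: class labels.
    Returns L such that the rows of X @ L have (approximately) identity
    within-class covariance."""
    d = X.shape[1]
    W = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        if len(Xc) > 1:                              # skip singleton classes
            W += np.cov(Xc, rowvar=False) * (len(Xc) - 1)
    W = W / len(X) + reg * np.eye(d)                 # regularize for invertibility
    L = np.linalg.cholesky(np.linalg.inv(W))         # L @ L.T = inv(W)
    return L                                         # apply as X_new = X @ L
```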

180 citations


Journal ArticleDOI
TL;DR: The whitened principal component analysis (PCA) dimensionality reduction technique is applied to both the POEM- and POD-based representations to obtain more compact and discriminative face descriptors, and the two methods are shown to have complementary strengths.
Abstract: A novel direction for efficiently describing face images is proposed by exploring the relationships between both gradient orientations and magnitudes of different local image structures. Presented in this paper are not only a novel feature set called patterns of orientation difference (POD) but also several improvements to our previous algorithm called patterns of oriented edge magnitudes (POEM). The whitened principal component analysis (PCA) dimensionality reduction technique is applied to both the POEM- and POD-based representations to obtain more compact and discriminative face descriptors. We then show that the two methods have complementary strengths and that by combining the two algorithms, one obtains stronger results than with either of them considered separately. Through experiments carried out on several common benchmarks, including the FERET database with both frontal and nonfrontal images as well as the very challenging LFW data set, we show that our approach is more efficient than contemporary ones in terms of both higher performance and lower complexity.
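The whitened PCA step applied to the POEM/POD descriptors could be sketched as follows; this is a generic WPCA implementation under assumed shapes, not the authors' code.

```python
import numpy as np

def whitened_pca(D, n_components):
    """D: (n_samples, d) matrix of POEM/POD-style descriptors.
    Returns the mean, the whitening projection, and the projected descriptors."""
    mean = D.mean(axis=0)
    Dc = D - mean
    U, S, Vt = np.linalg.svd(Dc, full_matrices=False)
    V = Vt[:n_components].T                          # top principal directions (d, k)
    scale = S[:n_components] / np.sqrt(len(D))       # per-component standard deviation
    P = V / scale                                    # whitening: unit variance per component
    return mean, P, Dc @ P                           # compact, decorrelated descriptors
```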

78 citations


Journal ArticleDOI
01 Nov 2013-Optik
TL;DR: A novel method based on PCA image reconstruction and LDA for face recognition is proposed: the within-class covariance matrix is used as the generating matrix for feature extraction, eigenvectors are obtained for each person, and the reconstructed images are then obtained from them.
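Since only the summary is shown here, the following is a loose interpretation of the described idea: per-person eigenvectors are used to reconstruct each image, and LDA is then fitted on the reconstructions. The names, component count, and scikit-learn LDA are assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def reconstruct_with_class_eigenvectors(X, y, k=10):
    """X: (n_images, n_pixels) flattened faces, y: person labels."""
    X_rec = np.empty(X.shape, dtype=float)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        Xc = X[idx].astype(float)
        mu = Xc.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
        V = Vt[:k].T                                 # eigenvectors of this person's images
        X_rec[idx] = mu + (Xc - mu) @ V @ V.T        # reconstruction in image space
    return X_rec

def fit_lda_on_reconstructions(X, y, k=10):
    return LinearDiscriminantAnalysis().fit(reconstruct_with_class_eigenvectors(X, y, k), y)
```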

59 citations


01 Jan 2013
TL;DR: In this paper, different biometric techniques such as iris scan, retina scan, and face recognition are discussed.
Abstract: Biometrics is a growing technology, which has been widely used in forensics, secured access, and prison security. A biometric system is fundamentally a pattern recognition system that recognizes a person by authenticating him or her using different biological features. Fingerprint, retina scan, iris scan, hand geometry, and face recognition are the leading physiological biometrics, while voice recognition, keystroke scan, and signature scan are behavioral characteristics. In this paper, different biometric techniques such as iris scan, retina scan, and face recognition are discussed. Keywords: Biometric, Biometric techniques, Eigenface, Face recognition.

56 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed eigenface conversion-based approach outperforms current approaches and demonstrates that removing the speaking effect on facial expression is useful for improving the performance of emotion recognition.
Abstract: The speaking effect is a crucial issue that may dramatically degrade performance in emotion recognition from facial expressions. To manage this problem, an eigenface conversion-based approach is proposed to remove the speaking effect on facial expressions and thereby improve the accuracy of emotion recognition. In the proposed approach, a context-dependent linear conversion function modeled by a statistical Gaussian Mixture Model (GMM) is constructed with parallel data from speaking and non-speaking facial expressions with emotions. To model the speaking effect in more detail, the conversion functions are categorized using a decision tree considering the visual temporal context of the Articulatory Attribute (AA) classes of the corresponding input speech segments. To verify the identified quadrant of emotional expression on the Arousal-Valence (A-V) emotion plane, which is commonly used to dimensionally define the emotion classes, from the reconstructed facial feature points, an expression template is constructed to represent the feature points of the non-speaking facial expressions for each quadrant. With the verified quadrant, a regression scheme is further employed to estimate the A-V values of the facial expression as a precise point in the A-V emotion plane. Experimental results show that the proposed method outperforms current approaches and demonstrates that removing the speaking effect on facial expressions is useful for improving the performance of emotion recognition.

43 citations


01 Jan 2013
TL;DR: A face detection system for color face images that is invariant to the background and acceptable illumination conditions is demonstrated, and the face recognition task is completed with improved accuracy and success rate even for noisy face images.
Abstract: Face detection from a large database of face images with different backgrounds is not an easy task. In this work, we demonstrate a face detection system for color face images that is invariant to the background and acceptable illumination conditions. A threshold level is set to reject non-human face images and unknown human face images that are not present in the input database of face images. In this paper, global feature extraction is completed using a PCA-based eigenface computation method, and the detection part is completed using multi-layered feed-forward Artificial Neural Networks with a back-propagation process. The algorithm is implemented in MATLAB. The learning process of the neurons is used to train on the input face images with 1000 iterations to minimize the error. In this system, the face recognition task is completed with improved accuracy and success rate even for noisy face images. A Face Recognition System is a computer-based digital technology and an active area of research. It has various applications, such as authentication systems, security systems, and searching for persons. These applications are cost effective and save time. Moreover, the face database can be easily built using any image of the person. In the past few years, various face recognition techniques have been proposed with varied and successful results. The human brain develops the ability to recognize persons by face even as the feature characteristics of the face change with time. The neurons of the human brain are trained by reading or learning the face of a person, and they can identify that face quickly even after several years. This ability to train and identify is converted into machine systems using Artificial Neural Networks. The basic function of the face recognition system is to compare the face of the person to be recognized with the faces already trained in the Artificial Neural Networks, and it recognizes the best-matching face as output even under different lighting conditions, viewing conditions, and facial expressions. In this paper, the features of the face images are extracted by creating feature vectors of the most varied face points and computing a covariance matrix using PCA. These faces are projected onto the face space that spans the significant variations in the face images stored in the database (7). These feature vectors are the eigenvectors of the covariance matrix; they have a face-like appearance, so they are called eigenfaces, and they are used as input to train the Artificial Neural Networks. Learning the correlated patterns between the input face images is one of the useful properties of Artificial Neural Networks. After training the Artificial Neural Networks, we tested them with known and unknown face images for success and rejection rate analysis. The database used in this work contains 49 different face images of nine persons, resized to 180×200 pixels, including non-human and unknown face images for improving the rejection rate.
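A condensed Python analogue of the described pipeline (the paper itself uses MATLAB): eigenface weights computed by PCA are fed to a back-propagation MLP, with a probability threshold standing in for the rejection of unknown or non-face inputs. The layer size, iteration count, and threshold are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def train_eigenface_ann(X_train, y_train, n_eigenfaces=40):
    pca = PCA(n_components=n_eigenfaces)
    W = pca.fit_transform(X_train)                   # eigenface weight vectors
    ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(W, y_train)
    return pca, ann

def recognize(pca, ann, face, reject_threshold=0.6):
    w = pca.transform(face.reshape(1, -1))
    probs = ann.predict_proba(w)[0]
    if probs.max() < reject_threshold:               # reject unknown / non-face inputs
        return None
    return ann.classes_[probs.argmax()]
```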

36 citations


01 Jan 2013
TL;DR: This paper is aimed at implementing a digitized system for attendance recording using MATLAB's Image Acquisition Toolbox and creates a feature set for each of the images provided in the database using PCA (Principal Component Analysis).
Abstract: Being one of the most successful applications of image processing, face recognition has a vital role in the technical field, especially for security purposes. Human face recognition is an important field for verification, especially in the case of student attendance. This paper is aimed at implementing a digitized system for attendance recording. Current attendance marking methods are monotonous and time consuming, and manually recorded attendance can be easily manipulated. Hence, this paper is proposed to tackle these issues using a feature extraction method, namely PCA (Principal Component Analysis): a feature set is created for each of the images provided in the database. At run time, images of the human face are captured from a USB camera. This involves MATLAB's Image Acquisition Toolbox, with which a camera is configured, accessed, and brought one frame at a time into MATLAB's workspace for further processing using MATLAB's Image Processing Toolbox. The method uses the eigenface approach for face recognition, which was introduced by Kirby and Sirovich in 1988 at Brown University. It works by analyzing face images and computing eigenfaces, which are faces composed of eigenvectors. The comparison of eigenfaces is used to identify the presence of a face and its identity. There is a five-step process involved with the system developed by Turk and Pentland.
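A hedged Python/OpenCV analogue of the capture-and-match step (the paper uses MATLAB's Image Acquisition and Image Processing Toolboxes): one frame is grabbed from a USB camera, the face image is projected onto precomputed eigenfaces, and the nearest enrolled identity is marked present if the distance is below a threshold. The image size and threshold are assumptions.

```python
import cv2
import numpy as np

def mark_attendance(mean_face, eigenfaces, enrolled_weights, enrolled_ids,
                    size=(100, 100), threshold=2500.0):
    """eigenfaces: (n_pixels, k), enrolled_weights: (n_students, k)."""
    cap = cv2.VideoCapture(0)                        # USB camera, one frame at a time
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return None
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, size).flatten().astype(float)
    w = eigenfaces.T @ (face - mean_face)            # eigenface weight vector
    dists = np.linalg.norm(enrolled_weights - w, axis=1)
    best = int(dists.argmin())
    return enrolled_ids[best] if dists[best] < threshold else None
```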

35 citations


Proceedings ArticleDOI
21 Feb 2013
TL;DR: This paper proposes assigning different weights to the few eigenvectors associated with nonzero eigenvalues, which are considered non-trivial principal components for classification, improving the performance of face recognition with respect to existing techniques.
Abstract: Nowadays, research is ongoing to design a high-performance automatic face recognition system, which is a challenging task for researchers. As faces are complex visual stimuli that differ dramatically, developing an efficient computational approach for accurate face recognition is very difficult. In this paper, a high-performance face recognition algorithm is developed and tested using conventional Principal Component Analysis (PCA) and two-dimensional Principal Component Analysis (2DPCA). These statistical transforms are exploited for feature extraction and data reduction. We propose assigning different weights to the few eigenvectors associated with nonzero eigenvalues, which are considered non-trivial principal components for classification. Finally, the face recognition task is performed by k-nearest distance measurement. Experimental results on the ORL and YALE face databases show that the proposed method improves face recognition performance with respect to existing techniques. The results show that better recognition performance can be achieved with less computational cost than other existing methods.
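The 2DPCA feature-extraction stage with eigenvalue-based weighting of the retained axes could look roughly like the sketch below; the specific weighting scheme shown is an assumption standing in for the paper's weight assignment, and the nearest-neighbour matching over the resulting feature matrices is omitted.

```python
import numpy as np

def two_d_pca_axes(images, k=8):
    """images: (n, h, w). Returns (w, k) projection axes scaled by their weights."""
    mean_img = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean_img
        G += D.T @ D                                 # image covariance matrix
    G /= len(images)
    evals, evecs = np.linalg.eigh(G)
    idx = np.argsort(evals)[::-1][:k]                # non-trivial components only
    weights = evals[idx] / evals[idx].sum()          # eigenvalue-based weights
    return evecs[:, idx] * weights

def project(images, axes):
    return np.stack([A @ axes for A in images])      # (n, h, k) feature matrices
```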

35 citations


Journal ArticleDOI
TL;DR: This paper presents experimental proof of the accuracy of the Eigenfaces approach, which directly classifies a test image as belonging to one of the six standard expressions (anger, disgust, fear, happiness, sadness, or surprise) with great accuracy.

34 citations


01 Jan 2013
TL;DR: This paper mainly addresses the methodological analysis of the Principal Component Analysis (PCA) method, presents a comprehensive discussion of PCA, and also simulates it on some data sets using MATLAB.
Abstract: Principal Components Analysis (PCA) is a practical and standard statistical tool in modern data analysis that has found application in different areas such as face recognition, image compression, and neuroscience. It has been called one of the most valuable results from applied linear algebra. PCA is a straightforward, non-parametric method for extracting pertinent information from confusing data sets. It presents a roadmap for reducing a complex data set to a lower dimension to disclose the hidden, simplified structures that often underlie it. This paper mainly addresses the methodological analysis of the Principal Component Analysis (PCA) method. PCA is a statistical approach for reducing the number of variables and is most widely used in face recognition. In PCA, every image in the training set is represented as a linear combination of weighted eigenvectors called eigenfaces. These eigenvectors are obtained from the covariance matrix of a training image set, and the weights are found after selecting a set of the most relevant eigenfaces. Recognition is performed by projecting a test image onto the subspace spanned by the eigenfaces, and classification is then done by measuring the minimum Euclidean distance. In this paper we present a comprehensive discussion of PCA and also simulate it on some data sets using MATLAB.
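A compact NumPy rendition of the eigenface pipeline the abstract describes (mean-centering, eigenvectors of the covariance matrix, projection, minimum-Euclidean-distance classification); the paper's MATLAB simulation is not reproduced, and the number of eigenfaces kept is arbitrary.

```python
import numpy as np

def train_eigenfaces(X, k=30):
    """X: (n_images, n_pixels) training matrix."""
    mean = X.mean(axis=0)
    A = X - mean
    # use eigenvectors of the small (n x n) matrix A A^T instead of the huge covariance
    evals, evecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(evals)[::-1][:k]
    eigenfaces = A.T @ evecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)  # unit-norm eigenfaces
    weights = A @ eigenfaces                          # projections of the training images
    return mean, eigenfaces, weights

def classify(face, mean, eigenfaces, weights, labels):
    w = (face - mean) @ eigenfaces
    return labels[np.argmin(np.linalg.norm(weights - w, axis=1))]  # minimum Euclidean distance
```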

Journal ArticleDOI
TL;DR: The proposed precise patch histogram (PPH) improved the accuracy of the global facial features, and portion-oriented a posteriori fine-tuning was used to improve the classification.

Proceedings ArticleDOI
14 Mar 2013
TL;DR: Face recognition is performed using Principal Component Analysis followed by Linear Discriminant Analysis as dimension reduction techniques; the recognition rate on this database is found to be 96.35%, showing that the proposed method is more efficient than previously adopted face recognition methods.
Abstract: Face recognition has a major impact on security measures, which makes it one of the most appealing areas to explore. To perform face recognition, researchers adopt mathematical techniques to develop automatic recognition systems. As a face recognition system has to perform over a wide range of databases, dimension reduction techniques become a prime requirement to reduce time and increase accuracy. In this paper, face recognition is performed using Principal Component Analysis followed by Linear Discriminant Analysis as dimension reduction techniques. The sequence of this paper is: preprocessing, dimension reduction of the training database by PCA, extraction of features for class separability by LDA, and finally testing by nearest-mean classification. The proposed method is tested on the ORL face database. The recognition rate on this database is found to be 96.35%, showing the efficiency of the proposed method compared with previously adopted face recognition methods.
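Using scikit-learn as a stand-in, the described PCA-then-LDA-then-nearest-mean chain can be sketched in a few lines; the PCA dimensionality below is an assumption rather than the paper's setting.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

def build_pca_lda_recognizer(X_train, y_train, n_pca=50):
    model = make_pipeline(
        PCA(n_components=n_pca),          # dimension reduction of the training set
        LinearDiscriminantAnalysis(),     # features for class separability
        NearestCentroid(),                # nearest-mean classification
    )
    return model.fit(X_train, y_train)

# usage: recognizer = build_pca_lda_recognizer(train_faces, train_labels)
#        predictions = recognizer.predict(test_faces)
```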

01 Jan 2013
TL;DR: This paper addresses face capture and database storage for Eigenspace-based face recognition, and utilizes the similarities of a face image against a set of faces from a training set at the same view to establish style- and recognition-invariant representations of a person in different poses.
Abstract: Face recognition security systems have become important for many applications such as automatic access control and video capture. Most Eigenspace-based face recognition systems today require proper frontal views of a person, and these systems will fail if the person to be recognized does not face the camera correctly. In this paper, we address face capture and storage of the face database. Our recognition system is insensitive to viewing direction and requires only one sample view per person. The proposed approach utilizes the similarities of a face image against a set of faces from a training set at the same view to establish style- and recognition-invariant representations of a person in different poses. A number of faces are stored for Eigenspace-based recognition, and the image database is used to match the proper face.

Proceedings ArticleDOI
18 Mar 2013
TL;DR: A solution capable of recognizing the facial expressions performed by a person's face and mapping them to a 3D face virtual model using the depth and RGB data captured from Microsoft's Kinect sensor is presented.
Abstract: This paper presents a solution capable of recognizing the facial expressions performed by a person's face and mapping them to a 3D virtual face model using the depth and RGB data captured from Microsoft's Kinect sensor. The solution starts by detecting the face and segmenting its regions; then, it identifies the actual expression using EigenFaces metrics on the RGB images and reconstructs the face from the filtered depth data. A new dataset of 20 human subjects is introduced for learning purposes, containing the images and point clouds for the different facial expressions performed. The algorithm automatically seeks and displays the seven state-of-the-art expressions: surprise, fear, disgust, anger, joy, sadness, and the neutral appearance. As a result, our system shows a morphing sequence between the sets of 3D face avatar models.

Journal ArticleDOI
TL;DR: MCA plays a vital role because it can properly decompose a signal into several semantic sub-signals in accordance with specific dictionaries, and it can also be extended to the simultaneous implementation of face hallucination and expression normalization.

Journal ArticleDOI
01 Dec 2013-Optik
TL;DR: A new method uses the LBP map in conjunction with the SRC framework; on clean face images it reaches higher accuracy with lower time consumption than methods that use other features, such as raw images, downsampled images, Eigenfaces, Laplacianfaces, and Gabor features, in conjunction with SRC.

Proceedings ArticleDOI
04 Jun 2013
TL;DR: A novel framework of face recognition combined with an occluded-region detection method using Fast-Weighted Principal Component Analysis (FW-PCA) is proposed, and the detected occluded regions are used as weights for matching face images.
Abstract: Facial occlusions such as eyeglasses, hairs and beards decrease the performance of face recognition algorithms. To improve the performance of face recognition algorithms, this paper proposes a novel framework of face recognition combined with the occluded-region detection method. In this paper, we detect occluded regions using Fast-Weighted Principal Component Analysis (FW-PCA) and use the occluded regions as weights for matching face images. To demonstrate the effectiveness of the proposed framework, we use two face recognition algorithms: Local Binary Patterns (LBP) and Phase-Only Correlation (POC). Experimental evaluation using public face image databases indicates performance improvement of the face recognition algorithms for face images with natural and artificial occlusions.

Journal ArticleDOI
TL;DR: Alternative accounts of face space are investigated, and it is found that independent component analysis provides the best fit to human judgments of face similarity and identification; the findings also support the use of color information in the representation of facial identity.
Abstract: The concept of psychological face space lies at the core of many theories of face recognition and representation. To date, much of the understanding of face space has been based on principal component analysis (PCA); the structure of the psychological space is thought to reflect some important aspects of a physical face space characterized by PCA applications to face images. In the present experiments, we investigated alternative accounts of face space and found that independent component analysis provided the best fit to human judgments of face similarity and identification. Thus, our results challenge an influential approach to the study of human face space and provide evidence for the role of statistically independent features in face encoding. In addition, our findings support the use of color information in the representation of facial identity, and we thus argue for the inclusion of such information in theoretical and computational constructs of face space.
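For readers who want to reproduce the modelling side, fitting PCA and ICA face spaces to a matrix of flattened (colour) face images might look like the sketch below, using scikit-learn's FastICA; the stimuli, similarity judgments, and fitting procedure of the actual experiments are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def fit_face_spaces(face_matrix, n_components=20):
    """face_matrix: (n_faces, n_pixels * n_channels), colour face images flattened."""
    pca_coords = PCA(n_components=n_components).fit_transform(face_matrix)
    ica_coords = FastICA(n_components=n_components, max_iter=1000).fit_transform(face_matrix)
    return pca_coords, ica_coords

def similarity(a, b):
    return -np.linalg.norm(a - b)     # one simple similarity measure in either space
```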

Proceedings ArticleDOI
22 Apr 2013
TL;DR: The paper introduces a novel framework for 3D face recognition that capitalizes on region covariance descriptors and Gaussian mixture models, and assesses the feasibility of the proposed framework on the Face Recognition Grand Challenge version 2 (FRGCv2) database with highly encouraging results.
Abstract: The paper introduces a novel framework for 3D face recognition that capitalizes on region covariance descriptors and Gaussian mixture models. The framework presents an elegant and coherent way of combining multiple facial representations, while simultaneously examining all computed representations at various levels of locality. The framework first computes a number of region covariance matrices/descriptors from different sized regions of several image representations and then adopts the unscented transform to derive low-dimensional feature vectors from the computed descriptors. By doing so, it enables computations in the Euclidean space, and makes Gaussian mixture modeling feasible. In the last step a support vector machine classification scheme is used to make a decision regarding the identity of the modeled input 3D face image. The proposed framework exhibits several desirable characteristics, such as an inherent mechanism for data fusion/integration (through the region covariance matrices), the ability to examine the facial images at different levels of locality, and the ability to integrate domain-specific prior knowledge into the modeling procedure. We assess the feasibility of the proposed framework on the Face Recognition Grand Challenge version 2 (FRGCv2) database with highly encouraging results.
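Computing a region covariance descriptor for one facial region of one image representation could be sketched as follows; the per-pixel feature set is an illustrative choice, and the unscented transform, Gaussian mixture modelling, and SVM stages are omitted.

```python
import numpy as np

def region_covariance(patch):
    """patch: 2D array, one region of one image representation.
    Returns the (6, 6) covariance of simple per-pixel features."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(patch.astype(float))
    F = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                  gx.ravel(), gy.ravel(), np.hypot(gx, gy).ravel()])
    return np.cov(F)                  # region covariance matrix / descriptor
```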

Journal ArticleDOI
TL;DR: This paper presents an experimental comparison of the statistical Eigenfaces method for feature extraction and unsupervised neural networks in order to evaluate classification accuracies as comparison criteria, and shows that the proposed SOFM/MLP neural network is more efficient and robust than the Sanger PCNN/MLP and the Eigenfaces/MLP when only a few training samples per person are used.
Abstract: In this paper, new appearance-based neural network (NN) algorithms are presented for face recognition. Face recognition is subdivided into two main stages: feature extraction and classification. The suggested NN algorithms are the unsupervised Sanger principal component neural network (Sanger PCNN) and the self-organizing feature map (SOFM), which are applied for feature extraction from the frontal view of a face image. It is of interest to compare the unsupervised networks with the traditional Eigenfaces technique. This paper presents an experimental comparison of the statistical Eigenfaces method for feature extraction and the unsupervised neural networks, evaluating the classification accuracies as comparison criteria. Classification is done by a multilayer perceptron (MLP) neural network. Overcoming the problem of the limited number of training samples per person is discussed. Experiments are implemented on the Olivetti Research Laboratory database, which contains variability in expression, pose, and facial details. The results show that the proposed SOFM/MLP neural network is more efficient and robust than the Sanger PCNN/MLP and the Eigenfaces/MLP when only a few training samples per person are used. As a result, it is more applicable to utilize the SOFM/MLP NN in order to accomplish a higher level of accuracy within a recognition system.
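The unsupervised Sanger principal component network can be written as the generalized Hebbian learning rule below; the learning rate, epoch count, and number of output neurons are placeholders, and the SOFM and MLP classifier stages are left out.

```python
import numpy as np

def sanger_pcnn(X, n_components=20, lr=1e-3, epochs=10, seed=0):
    """X: (n_samples, d) mean-centred face vectors.
    Returns a (n_components, d) weight matrix whose rows converge to the
    leading principal directions."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x                                        # linear neuron outputs
            # Sanger's rule: Hebbian term minus lower-triangular decorrelation term
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```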

01 Jan 2013
TL;DR: This paper discusses and compares the performance of various PCA-based face recognition techniques and proposes a system that combines the above-mentioned features into one face recognition system that outperforms the classical PCA approach.
Abstract: The main aim of a face recognition system is to retrieve face images that are similar to a specific query face image from large face databases. The retrieved face images can be used in many applications, such as photo management, visual surveillance, criminal face identification, and searching for specific faces on the internet. This paper discusses and compares the performance of various PCA-based face recognition techniques. Based on performance with respect to various parameters, such as the distance classifier used, the DWT level used, histogram equalization, and the number of eigenfaces selected, we propose a system that combines the above-mentioned features into one face recognition system. From the results obtained, it is observed that the proposed system outperforms the classical PCA-based face recognition system.

Book ChapterDOI
01 Jan 2013
TL;DR: Exhaustive tests of four known linear transformation methods in the context of the face verification task are described, together with a new variant of the transformation (Laplacianface + LDA) and a specific interval-based decision rule.
Abstract: This paper describes exhaustive tests of four known methods of linear transformation (Eigenfaces, Fisherfaces, Laplacianfaces and Marginfaces) in the context of the face verification task. Additionally, we introduce a new variant of the transformation (Laplacianface + LDA) and a specific interval-based decision rule. Both of them improve the performance of face verification; in general, however, our experiments show that the linear transformations are of marginal importance in this field.

Book ChapterDOI
28 Jul 2013
TL;DR: This paper extracts HOG (Histogram of Oriented Gradients) features from each class of face images in the face databases used, to build an overcomplete dictionary for ESRC (Eigenface-based Sparse Representation Classification).
Abstract: Face recognition has been a challenging task in computer vision. In this paper, we propose a new method for face recognition. First, we extract HOG (Histogram of Oriented Gradients) features from each class of face images in the face databases used. Then, we select the so-called eigenfaces from the HOG features corresponding to each class of face images and finally use them to build an overcomplete dictionary for ESRC (Eigenface-based Sparse Representation Classification). Experiments show that our method achieves better results in comparison.
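A sketch of the dictionary-building idea, assuming scikit-image's HOG implementation: HOG features are computed per class, class-wise eigenvectors of those features act as the "eigenfaces", and the stacked eigenvectors form an overcomplete dictionary; the sparse-representation classification step itself is omitted.

```python
import numpy as np
from skimage.feature import hog

def hog_features(images):
    return np.stack([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

def build_esrc_dictionary(images, labels, k_per_class=5):
    H = hog_features(images)
    atoms = []
    for c in np.unique(labels):
        Hc = H[labels == c]
        mu = Hc.mean(axis=0)
        _, _, Vt = np.linalg.svd(Hc - mu, full_matrices=False)
        atoms.append(Vt[:k_per_class])            # class-wise "eigenfaces" of HOG features
    return np.vstack(atoms).T                     # columns are dictionary atoms
```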

Proceedings ArticleDOI
19 Dec 2013
TL;DR: The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case.
Abstract: The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread across the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as “object”) using our smart camera system based on an OpenCV platform. By using OpenCV Haar Training, employing the Viola-Jones algorithm implementation in OpenCV, we teach the machine to identify the object under environmental conditions. An added feature of face recognition is based on Principal Component Analysis (PCA) to generate eigenfaces, and the test images are verified against the eigenfaces using a distance-based algorithm, such as the Euclidean distance or the Mahalanobis distance. If the object is misplaced, or an unauthorized user is in the extreme vicinity of the object, an alarm signal is raised.
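A hedged sketch of the monitoring step: OpenCV's Haar-cascade face detector followed by a Euclidean eigenface-distance check against the authorized users' projections. The cascade file, crop size, and threshold are placeholders rather than the system's actual settings.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def check_frame(frame, mean_face, eigenfaces, authorized_weights, threshold=3000.0):
    """Raise an alarm if a detected face is not close to any authorized user."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (100, 100)).flatten().astype(float)
        wvec = eigenfaces.T @ (crop - mean_face)                 # eigenface projection
        dists = np.linalg.norm(authorized_weights - wvec, axis=1)
        if dists.min() > threshold:                              # no authorized match
            return "ALARM"
    return "OK"
```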

Journal ArticleDOI
17 Oct 2013
TL;DR: A brief personal view of the genesis of Eigenfaces for face recognition and its relevance to the multimedia community is presented.
Abstract: The inaugural ACM Multimedia Conference coincided with a surge of interest in computer vision technologies for detecting and recognizing people and their activities in images and video. Face recognition was the first of these topics to broadly engage the vision and multimedia research communities. The Eigenfaces approach was, deservedly or not, the method that captured much of the initial attention, and it continues to be taught and used as a benchmark over 20 years later. This article is a brief personal view of the genesis of Eigenfaces for face recognition and its relevance to the multimedia community.

Journal ArticleDOI
TL;DR: A method of applying PCA on a wavelet subband of the face image is presented, and two methods are proposed to select the best eigenvectors for recognition, which shows better recognition accuracy and discriminatory power than applying PCA on the entire original image.
Abstract: Face recognition has advantages over other biometric methods. Principal Component Analysis (PCA) has been widely used in face recognition algorithms, but PCA has limitations such as poor discriminatory power and a large computational load. Due to these limitations of the existing PCA-based approach, we use a method of applying PCA on a wavelet subband of the face image, and two methods are proposed to select the best eigenvectors for recognition. The proposed methods select important eigenvectors using a genetic algorithm and the entropy of the eigenvectors. Results show that, compared to the traditional method of selecting the top eigenvectors, the proposed method gives better results with fewer eigenvectors. Keywords: face recognition; PCA; wavelet transform; genetic algorithm. I. INTRODUCTION: Many recent events have exposed defects in the most sophisticated security systems. Therefore, it is necessary to improve security systems based on bodily or behavioral characteristics, called biometrics. With interest in the development of human-computer interfaces and biometric identification, human face recognition has become an active research area. Face recognition offers several advantages over other biometric methods. Nowadays, Principal Component Analysis (PCA) has been widely adopted for face recognition. Still, PCA has limitations such as poor discriminatory power and a large computational load (1). In view of the limitations of the existing PCA-based approach, we use a method of applying PCA on a wavelet subband of the face image, and two methods are proposed to select the best eigenvectors for recognition. In the proposed method, the face image is decomposed into a number of subbands with different frequency components using the wavelet transform (WT). Out of the different frequency subbands, a mid-range-frequency subband image is selected. The resolution of the selected subband is 16x16, so the proposed method works on a lower resolution instead of the 128x128 resolution of the original image; working on lower-resolution images reduces the computational complexity. Experimental results show that applying PCA on a WT sub-image with mid-range frequency components gives better recognition accuracy and discriminatory power than applying PCA on the entire original image (2)(3). In PCA, not all eigenvectors are equally informative. This paper proposes two methods of eigenvector selection. In comparison with the traditional use of PCA, the proposed methods select the eigenvectors based on a genetic algorithm and the entropy of the eigenvectors.
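An illustrative sketch of the two ideas, assuming PyWavelets for the decomposition: PCA applied to a mid-frequency wavelet subband, and an entropy-based ranking of eigenvectors (the genetic-algorithm variant is not shown). The wavelet, decomposition level, and entropy formulation are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def midband_subband(img, wavelet="haar", level=3):
    """Decompose a face image and return one mid-range-frequency subband, flattened."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[1][0].flatten()            # horizontal detail at the coarsest level

def entropy_ranked_pca(subband_matrix, n_keep=20):
    """Rank eigenvectors by the entropy of their (normalized) squared projections."""
    pca = PCA().fit(subband_matrix)
    proj = pca.transform(subband_matrix)
    p = proj ** 2 / (proj ** 2).sum(axis=0, keepdims=True)
    ent = -(p * np.log(p + 1e-12)).sum(axis=0)
    keep = np.argsort(ent)[::-1][:n_keep]    # most informative eigenvectors
    return pca, keep
```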

Proceedings ArticleDOI
02 Dec 2013
TL;DR: This paper presents a new framework and feature set for a vehicle model query system; by giving model names or manufacturer names as keywords, the desired vehicle images can be queried from target videos or vehicle image databases using an internet-vision approach.
Abstract: This paper presents a new framework and feature set for a vehicle model query system. By giving model names or manufacturer names as keywords, the desired vehicle images can be queried from target videos or vehicle image databases using an internet-vision approach. In this framework, sample images are automatically retrieved from the internet via a search engine or car-related websites. Logos and frontal masks are segmented and used to recognize the manufacturer name and model of the vehicles, respectively. Eigenfaces and the Pyramid Histogram of Oriented Gradients (PHOG) are proposed as features for the recognition process. The experiments show that the proposed method achieves a recognition rate of 98.2% for the manufacturer logo recognition process and 94.00% for the vehicle model recognition process. The performance of the entire framework of the proposed query system is also evaluated via precision and recall, obtained as 87.67% and 80.00%, respectively.

Proceedings ArticleDOI
14 Nov 2013
TL;DR: A comparative analysis of two algorithms for image representation is presented, with application to the recognition of 3D face scans in the presence of facial expressions, using Principal Component Analysis and Linear Discriminant Analysis.
Abstract: In this paper we present a comparative analysis of two algorithms for image representation, with application to the recognition of 3D face scans in the presence of facial expressions. We begin with processing of the input point cloud based on curvature analysis and a range image representation to achieve a unique representation of the face features. Then, subspace projections using Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are performed. Finally, classification with different classifiers is performed.

Proceedings ArticleDOI
01 Dec 2013
TL;DR: This article proposes the use of cloud computing - more specifically, the Windows Azure platform - to identify possible performance gains while testing the EmguCV framework.
Abstract: Multiple face recognition has several applications, such as in the areas of security and robotics. Recognition and classification techniques have been developed in recent years through different programming languages and approaches. However, the level of detail often requires high processing power. This article proposes the use of cloud computing - more specifically, the Windows Azure platform - to identify possible performance gains while testing the EmguCV framework.