Author

Mohammadreza Hajiarbabi

Bio: Mohammadreza Hajiarbabi is an academic researcher at the University of Kansas. He has contributed to research on topics including facial recognition systems and face detection, has an h-index of 4, and has co-authored 10 publications receiving 45 citations. His previous affiliations include Isfahan University of Technology.

Papers
Journal ArticleDOI
TL;DR: A novel technique of using a neural network to enhance the capabilities of skin detection is introduced; the results show that the neural network has better precision and accuracy, as well as comparable recall and specificity, compared with other methods.
Abstract: Human skin detection is an essential phase in face detection and face recognition when using color images. Skin detection is very challenging because of differences in illumination, differences in photos taken with an assortment of cameras each with its own characteristics, the range of skin colors across ethnicities, and other variations. Numerous methods have been used for human skin color detection, including the Gaussian model, rule-based methods, and artificial neural networks. In this article, we introduce a novel technique of using a neural network to enhance the capabilities of skin detection. Several different entities were used as inputs to the neural network, and the pros and cons of different color spaces are discussed. In addition, a vector containing information from three different color spaces was used as the input to the network. A comparison of the proposed technique with existing methods in this domain illustrates the effectiveness and accuracy of the proposed approach. Tests were done on two databases, and the results show that the neural network has better precision and accuracy, as well as comparable recall and specificity, compared with other methods.
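The multi-color-space input vector described above can be sketched as follows. This is a hypothetical illustration, assuming the vector simply concatenates each pixel's RGB, HSV, and YCbCr values; the exact composition used in the paper may differ:

```python
import colorsys  # stdlib RGB <-> HSV conversion

def ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion for 8-bit channel values."""
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def pixel_feature(r, g, b):
    """Concatenate RGB, HSV, and YCbCr values of one pixel into a 9-dim vector,
    the kind of multi-color-space input a skin-classifying network could consume."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return [r, g, b, h, s, v, *ycbcr(r, g, b)]
```

Each training pixel then becomes one such vector, labeled skin or non-skin, for the network to learn from.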

13 citations

Proceedings Article
01 Jan 2007
TL;DR: It is shown that, for dimension reduction in face recognition, there exist methods stronger than principal component analysis (PCA), the most popular choice, such as the discrete cosine transform (DCT).
Abstract: Face recognition is a biometric identification method which, among other methods such as fingerprint identification, speech recognition, and signature and handwriting recognition, has earned a special place for itself. In principle, biometric identification methods draw on a wide range of fields such as machine vision, image processing, pattern recognition, and neural networks, and have various applications in film processing, access control networks, and so on. There are several approaches to recognition, and appearance-based methods are one of them. One of the most important algorithms among appearance-based methods is linear discriminant analysis (LDA). One of the drawbacks of LDA in face recognition is the small sample size (SSS) problem, so it is suggested to first reduce the dimension of the space; among the methods for doing so, principal component analysis (PCA) is the most popular. In this paper we show that there exist stronger methods, such as the discrete cosine transform (DCT).
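The DCT-based dimension reduction the paper argues for can be sketched as: compute the 2-D DCT of a (square) face image and keep only the top-left, low-frequency block of coefficients as the reduced feature vector, which can then be fed to LDA. A minimal NumPy sketch of that idea, not the paper's exact pipeline:

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II of a square image via the DCT matrix."""
    n = x.shape[0]
    k = np.arange(n)
    # C[f, s] = sqrt(2/n) * cos(pi * (2s + 1) * f / (2n)), row 0 scaled by 1/sqrt(2)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ x @ C.T

def dct_features(img, k=8):
    """Keep the top-left k x k block of low-frequency DCT coefficients
    as a k*k-dimensional feature vector for a downstream classifier (e.g. LDA)."""
    return dct2(img)[:k, :k].ravel()
```

Because most face-image energy concentrates in the low frequencies, truncating the DCT this way reduces dimension before LDA without a data-dependent basis, unlike PCA.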

9 citations

Journal ArticleDOI
TL;DR: A novel neural network-based technique for skin detection is presented, introducing a skin segmentation process for finding faces in color images.
Abstract: Face detection, which is a challenging problem in computer vision, can be used as a major step in face recognition. The challenges of face detection in color images include illumination differences, various camera characteristics, different ethnicities, and other distinctions. In order to detect faces in color images, skin detection can be applied to the image. Numerous methods have been utilized for human skin color detection, including the Gaussian model, rule-based methods, and artificial neural networks. In this paper, we present a novel neural network-based technique for skin detection, introducing a skin segmentation process for finding the faces in color images.

8 citations

Journal ArticleDOI
TL;DR: The results show that skin detection utilizing deep learning performs better than other methods such as rule-based methods, the Gaussian model, and feed-forward neural networks.
Abstract: Human skin detection is an important and challenging problem in computer vision. Skin detection can be used as the first phase in face detection when using color images. The differences in illumination and the range of skin colors have made skin detection a challenging task. The Gaussian model, rule-based methods, and artificial neural networks are methods that have been used for human skin color detection. Deep learning methods are newer learning techniques that have shown improved classification power compared to neural networks. In this paper the authors use deep learning methods in order to enhance the capabilities of skin detection algorithms. Several experiments have been performed using autoencoders and different color spaces. The proposed technique is evaluated and compared with other available methods in this domain using two color image databases. The results show that skin detection utilizing deep learning performs better than other methods such as rule-based methods, the Gaussian model, and feed-forward neural networks.
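One way to read the autoencoder approach: train an autoencoder on skin pixels only, then use reconstruction error as a skin score, since non-skin colors lie off the learned manifold. A minimal NumPy sketch of that idea; the paper's actual architecture, color spaces, and training details differ:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """Minimal one-hidden-layer autoencoder trained with plain gradient descent."""
    def __init__(self, n_in, n_hidden, lr=0.1):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.lr = lr

    def forward(self, X):
        h = np.tanh(X @ self.W1 + self.b1)   # bottleneck encoding
        return h, h @ self.W2 + self.b2      # linear reconstruction

    def fit(self, X, epochs=500):
        for _ in range(epochs):
            h, out = self.forward(X)
            err = out - X                    # reconstruction error
            dW2 = h.T @ err / len(X)
            db2 = err.mean(0)
            dh = (err @ self.W2.T) * (1 - h ** 2)  # backprop through tanh
            dW1 = X.T @ dh / len(X)
            db1 = dh.mean(0)
            self.W1 -= self.lr * dW1; self.b1 -= self.lr * db1
            self.W2 -= self.lr * dW2; self.b2 -= self.lr * db2

    def recon_error(self, X):
        _, out = self.forward(X)
        return ((out - X) ** 2).sum(axis=1)

# Train only on (synthetic) skin-like colors; off-manifold colors then
# reconstruct poorly, so a threshold on the error separates the classes.
skin = rng.normal([0.8, 0.6, 0.5], 0.05, (200, 3))
ae = TinyAutoencoder(3, 2)
ae.fit(skin)
```

A pixel is then labeled skin when its reconstruction error falls below a threshold tuned on a validation set.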

6 citations


Cited by
Journal ArticleDOI
TL;DR: The combined method of feature extraction (spatial and frequency) shows superior performance to individual feature extraction schemes and gives good recognition results even without pre-processing of the image.
Abstract: Image quality plays a vital role in increasing the face recognition rate. A good-quality image gives a better recognition rate than a noisy one. It is more difficult to extract features from noisy images, which in turn reduces the face recognition rate. To overcome problems caused by low-quality images, pre-processing is done before extracting features from the image. In this paper we analyze the effect of pre-processing prior to feature extraction with respect to the face recognition rate. We also give a qualitative description of the various pre-processing techniques and feature extraction schemes used in our analysis. The results were analyzed with the help of bar graphs. The combined method of feature extraction (spatial and frequency) shows superior performance to individual feature extraction schemes. Moreover, this combined method gives good recognition results even without pre-processing of the image.

31 citations

Journal ArticleDOI
Yuqing Peng, Huifang Tao, Wei Li, Hongtao Yuan, Tiejun Li
TL;DR: A new gesture recognition architecture is proposed that combines a feature fusion network with a variant convolutional long short-term memory (ConvLSTM); it extracts spatiotemporal feature information from local, global, and deep aspects, and uses feature fusion to alleviate the loss of feature information.
Abstract: Gesture is a natural form of human communication, and it is of great significance in human–computer interaction. In dynamic gesture recognition methods based on deep learning, the key is to obtain comprehensive gesture feature information. Aiming at the problem of inadequate extraction of spatiotemporal features or loss of feature information in current dynamic gesture recognition, a new gesture recognition architecture is proposed, which combines a feature fusion network with a variant convolutional long short-term memory (ConvLSTM). The architecture extracts spatiotemporal feature information from local, global, and deep aspects, and combines feature fusion to alleviate the loss of feature information. First, local spatiotemporal feature information is extracted from the video sequence by a 3D residual network based on channel feature fusion. Then the authors use the variant ConvLSTM to learn the global spatiotemporal information of the dynamic gesture, introducing an attention mechanism to change the gate structure of the ConvLSTM. Finally, a multi-feature fusion depthwise separable network is used to learn higher-level features, including depth feature information. The proposed approach obtains very competitive performance on the Jester dataset with a classification accuracy of 95.59%, and achieves state-of-the-art performance with 99.65% accuracy on the SKIG (Sheffield Kinect Gesture) dataset.

25 citations

Journal ArticleDOI
TL;DR: The spatial statistics of these features, measured by Ripley's K-function, are used to assess whether feature matches are clustered together or spread around the overlap region; the results show that SFOP introduces significantly less aggregation than the other detectors tested.
Abstract: When matching images for applications such as mosaicking and homography estimation, the distribution of features across the overlap region affects the accuracy of the result. This paper uses the spatial statistics of these features, measured by Ripley's K-function, to assess whether feature matches are clustered together or spread around the overlap region. A comparison of the performances of a dozen state-of-the-art feature detectors is then carried out using analysis of variance and a large image database. Results show that SFOP introduces significantly less aggregation than the other detectors tested. When the detectors are rank-ordered by this performance measure, the order is broadly similar to those obtained by other means, suggesting that the ordering reflects genuine performance differences. Experiments on stitching images into mosaics confirm that better coverage values yield better quality outputs.
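Ripley's K-function used in the summary above admits a simple naive estimator: count pairs of feature points within radius r and normalize by point intensity. A sketch of that estimator (ignoring the edge corrections a rigorous analysis like the paper's would include):

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate at radius r for points observed in a window
    of the given area: K(r) = area / (n * (n - 1)) * (# ordered pairs closer than r).
    Under complete spatial randomness K(r) is about pi * r^2; larger values
    indicate clustering, smaller values indicate regularity."""
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    close = (d < r).sum() - n          # drop the n self-pairs on the diagonal
    return area * close / (n * (n - 1))
```

Comparing the estimate against pi * r^2 is how one would judge whether a detector's matches are aggregated or well spread over the overlap region.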

24 citations

Journal ArticleDOI
TL;DR: An original method is proposed for remotely measuring the instantaneous pulse rate from photoplethysmographic signals recorded with a low-cost webcam; its region-of-interest selection is shown to outperform available methods.
Abstract: We propose, in this study, an original method developed to remotely measure the instantaneous pulse rate using photoplethysmographic signals recorded with a low-cost webcam. The method is based on a prior selection of pixels of interest using a custom segmentation that uses the face lightness distribution to define different sub-regions. The most relevant sub-regions are automatically selected and combined by evaluating their respective signal-to-noise ratios. The performance of the proposed technique was evaluated against an approved contact sensor on a set of seven healthy subjects. Different experiments were conducted in the laboratory: while reading, with motion, and while performing common tasks on a computer. The proposed segmentation technique was compared with other benchmark methods already introduced in the scientific literature. The results exhibit high degrees of correlation and low absolute pulse rate errors, demonstrating that the segmentation we propose in this study outperforms available region-of-interest selection methods.
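The core of such remote photoplethysmography is spectral analysis of the averaged skin-pixel signal over time. A hedged sketch of that final step; the paper's method additionally segments the face by lightness and weights sub-regions by their signal-to-noise ratio, which is not reproduced here:

```python
import numpy as np

def pulse_rate_bpm(green_means, fps):
    """Estimate pulse rate from the per-frame mean green-channel value of a face ROI.
    Remove the mean, take the FFT, and pick the dominant frequency in the
    physiologically plausible 0.7-4.0 Hz (42-240 bpm) band."""
    x = np.asarray(green_means, float)
    x = x - x.mean()                              # detrend (remove DC component)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)  # frequency of each FFT bin, in Hz
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

For example, a 1.2 Hz periodic component in the green-channel trace corresponds to an estimated pulse rate of 72 bpm.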

23 citations

Proceedings ArticleDOI
14 Sep 2017
TL;DR: Two convolutional neural networks for skin detection are presented: one consisting of 20 convolution layers with 3 × 3 filters, a kind of VGG network, and one composed of 20 network-in-network layers, which can be considered a modification of the Inception structure.
Abstract: This paper presents two convolutional neural networks (CNNs) and their training strategies for skin detection. The first CNN, consisting of 20 convolution layers with 3 × 3 filters, is a kind of VGG network. The second is composed of 20 network-in-network (NiN) layers, which can be considered a modification of the Inception structure. When training these networks for human skin detection, we consider patch-based and whole-image-based training. The first method focuses on local features such as skin color and texture, and the second on human-related shape features as well as color and texture. Experiments show that the proposed CNNs yield better performance than the conventional methods and also than the existing deep-learning-based method. It is also found that the NiN structure generally shows higher accuracy than the VGG-based structure. The experiments further show that whole-image-based training, which learns shape features, yields better accuracy than patch-based learning, which focuses on local color and texture only.
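The two building blocks the paper contrasts reduce to very small operations: a 3 × 3 convolution (the VGG-style block) and a 1 × 1 convolution, the per-pixel channel mixing at the heart of network-in-network. Illustrative NumPy sketches of those primitives, not the paper's implementation:

```python
import numpy as np

def conv3x3(x, K):
    """Naive 'valid' single-channel 3x3 convolution (actually cross-correlation,
    as is conventional in CNNs): slide the kernel and take windowed dot products."""
    H, W = x.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = (x[i:i + 3, j:j + 3] * K).sum()
    return out

def conv1x1(x, W, b):
    """1x1 convolution, the NiN primitive: a per-pixel linear map across channels.
    x: (H, W, C_in), W: (C_in, C_out), b: (C_out,)."""
    return x @ W + b
```

Stacking many 3 × 3 layers grows the receptive field (the VGG idea), while interleaving 1 × 1 layers mixes channels per pixel without looking at neighbors (the NiN idea).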

23 citations