Bio: Dzulkifli Mohamad is an academic researcher from Universiti Teknologi Malaysia. The author has contributed to research in topics including feature extraction and segmentation, has an h-index of 20, and has co-authored 109 publications receiving 1254 citations. Previous affiliations of Dzulkifli Mohamad include Multimedia University and King Abdulaziz University.
TL;DR: A new hybrid method is proposed for image clustering that combines particle swarm optimization (PSO) with the k-means clustering algorithm, within a CBIR framework that uses color and texture as visual features to represent the images.
Abstract: In application domains such as the web, education, crime prevention, commerce, and biomedicine, the volume of digital data is increasing rapidly. Difficulties arise at retrieval time because many existing methods compare the query image with every image in the database; as a result, both the search space and the computational complexity grow. Content-based image retrieval (CBIR) methods aim to retrieve images similar to a query image accurately from large image databases, based on the similarity between image features. In this study, a new hybrid method is proposed for image clustering that combines particle swarm optimization (PSO) with the k-means clustering algorithm. It is presented as a CBIR method that uses color and texture as visual features to represent the images. The proposed method measures similarity with four kinds of extracted features: color histogram, color moments, co-occurrence matrices, and wavelet moments. The experimental results indicate that the proposed system achieves superior accuracy compared to other systems.
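The PSO-plus-k-means idea can be sketched as follows: PSO searches globally for good centroid positions (each particle encodes one full set of k centroids, scored by quantization error), and plain k-means then refines the best particle locally. This is an illustrative reconstruction, not the authors' exact implementation; the particle count, inertia and acceleration constants, and the refinement loop length are all assumptions.

```python
import numpy as np

def quantization_error(data, centroids):
    # PSO fitness: mean distance of each point to its nearest centroid
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()

def pso_kmeans(data, k, n_particles=10, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    n, _ = data.shape
    # each particle encodes k candidate centroids, seeded from data points
    pos = data[rng.integers(0, n, (n_particles, k))]
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([quantization_error(data, p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    w, c1, c2 = 0.72, 1.49, 1.49          # common PSO constants (assumed)
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([quantization_error(data, p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    # refine the PSO global best with a few standard k-means iterations
    centroids = gbest
    for _ in range(10):
        labels = np.linalg.norm(data[:, None] - centroids[None], axis=2).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = data[labels == j].mean(0)
    return centroids, labels
```

In a CBIR setting, `data` would hold the per-image feature vectors (histograms, moments, etc.), so queries only need to be compared against the images in the nearest cluster rather than the whole database.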
TL;DR: A combination of skeleton orientation and the gravity centre point is presented to extract accurate pattern features of signature data in an offline signature verification system.
Abstract: Signature verification is an active research area in the field of pattern recognition. It is employed to identify a particular person through the characteristics of his or her signature, such as pen pressure, writing speed, the shape of loops, and the up-and-down motion of the pen. In the entire process, the feature extraction and selection stage is of prime importance, since several signatures share similar strokes, characteristics, and sizes. Accordingly, this paper presents a combination of skeleton orientation and the gravity centre point to extract accurate pattern features of signature data in an offline signature verification system. Promising results prove the success of integrating the two methods.
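The two feature ingredients named above can be illustrated on a binarized signature image: the gravity centre is the centroid of the ink pixels, and a global stroke orientation can be read off the second-order central moments. This is a minimal sketch of those two quantities, not the paper's full feature set.

```python
import numpy as np

def gravity_centre(img):
    # centroid (row, col) of the foreground (ink) pixels of a binary image
    rows, cols = np.nonzero(img)
    return rows.mean(), cols.mean()

def stroke_orientation(img):
    # dominant orientation (radians) from second-order central moments
    r, c = np.nonzero(img)
    rc, cc = r.mean(), c.mean()
    mu11 = ((r - rc) * (c - cc)).mean()
    mu20 = ((r - rc) ** 2).mean()
    mu02 = ((c - cc) ** 2).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
```

In the paper the orientation is taken along the signature's skeleton; here it is computed over all ink pixels for brevity.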
TL;DR: The accuracy and efficiency of the proposed automatic scheme were proved experimentally, surpassing other state-of-the-art schemes; it can be considered a low-cost solution for malaria parasitemia quantification in mass examinations.
Abstract: Malaria parasitemia is the quantitative measurement of parasites in the blood, used to grade the degree of infection. Light microscopy is the best-known method for examining the blood for parasitemia quantification, but visual quantification of malaria parasitemia is laborious, time-consuming, and subjective. Although automating the process is a good solution, the available techniques are unable to handle cases such as anemia and hemoglobinopathies, in which RBCs deviate from normal morphology. The main aim of this research is to examine microscopic images of stained thin blood smears using a variety of computer vision techniques, grading malaria parasitemia independently of RBC morphology. The proposed methodology follows an inductive approach: color segmentation of malaria parasites through an adaptive algorithm based on a Gaussian mixture model (GMM). RBC quantification accuracy is improved by splitting occluded RBCs using the distance transform and its local maxima. Further, infected and non-infected RBCs are classified to grade parasitemia properly. Training and evaluation were carried out on an image dataset with respect to ground-truth data, determining the degree of infection with a sensitivity of 98% and a specificity of 97%. The accuracy and efficiency of the proposed automatic scheme were proved experimentally, surpassing other state-of-the-art schemes, and the process was shown to be independent of RBC morphology. The approach can thus be considered a low-cost solution for malaria parasitemia quantification in mass examinations.
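The occlusion-splitting step can be sketched on a binary RBC mask: compute a distance transform from the background and treat its local maxima as cell centres, so two touching cells yield two peaks even though they form one connected blob. The sketch below uses a breadth-first city-block distance as a stand-in for the Euclidean distance transform, and the peak threshold and separation radius are assumed parameters, not values from the paper.

```python
import numpy as np
from collections import deque

def distance_transform(mask):
    # breadth-first (city-block) distance of every pixel from the background
    h, w = mask.shape
    dist = np.full((h, w), -1, dtype=int)
    q = deque()
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                dist[i, j] = 0
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and dist[ni, nj] < 0:
                dist[ni, nj] = dist[i, j] + 1
                q.append((ni, nj))
    return dist

def cell_centres(mask, min_peak=2, min_sep=4):
    # local maxima of the distance map mark cell centres; nearby weaker
    # peaks are suppressed so each cell is counted once
    dist = distance_transform(mask)
    h, w = dist.shape
    peaks = sorted(
        ((dist[i, j], i, j)
         for i in range(1, h - 1) for j in range(1, w - 1)
         if dist[i, j] >= min_peak
         and dist[i, j] == dist[i - 1:i + 2, j - 1:j + 2].max()),
        reverse=True)
    centres = []
    for _, i, j in peaks:
        if all((i - ci) ** 2 + (j - cj) ** 2 >= min_sep ** 2
               for ci, cj in centres):
            centres.append((i, j))
    return centres
```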
TL;DR: A novel fusion of enhanced features is presented for classifying static signs of sign language, explaining how the hand can be separated from the scene using depth data and introducing a combined feature extraction method for extracting appropriate image features.
Abstract: Gesture recognition and hand pose tracking are applicable techniques in human-computer interaction fields. Depth data obtained by depth cameras provide a very informative description of the body or, in particular, the hand pose, and can be used for more accurate gesture recognition systems. Hand detection and feature extraction are very challenging tasks in RGB images, but they can be resolved effectively and simply with depth data. Moreover, depth data can be combined with color information for more reliable recognition. A common hand gesture recognition system requires identifying the hand and its position or direction, extracting useful features, and applying a suitable machine-learning method to detect the performed gesture. This paper presents a novel fusion of enhanced features for the classification of static signs of sign language. It begins by explaining how the hand can be separated from the scene using depth data. Then, a combined feature extraction method is introduced for extracting appropriate features from the images. Finally, an artificial neural network classifier is trained on these fused features and used to critically analyze the performance of various descriptors.
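A common way depth data makes hand segmentation simple is the nearest-surface heuristic: in a signing pose the hand is usually the closest object to the camera, so thresholding a narrow depth band in front of the nearest valid measurement isolates it. This is a generic illustration of that idea, not the paper's exact procedure; the band width is an assumed parameter.

```python
import numpy as np

def segment_hand(depth_mm, band=80, invalid=0):
    # heuristic: the hand is the closest valid surface to the camera,
    # so keep all pixels within `band` mm of the nearest valid depth
    valid = depth_mm != invalid   # many depth sensors report 0 where depth is unknown
    near = depth_mm[valid].min()
    return valid & (depth_mm <= near + band)
```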
TL;DR: This paper seeks to present a novel face segmentation and facial feature extraction algorithm for gray intensity images (each containing a single face object) based on the Voronoi diagram, a well-known technique in computational geometry, which generates clusters of intensity values using information from the vertices of the external boundary of Delaunay triangulation.
Abstract: Segmentation of human faces from still images is a research field of rapidly increasing interest. Although the field encounters several challenges, this paper seeks to present a novel face segmentation and facial feature extraction algorithm for gray intensity images (each containing a single face object). Face location and extraction must first be performed to obtain the approximate, if not exact, representation of a given face in an image. The proposed approach is based on the Voronoi diagram (VD), a well-known technique in computational geometry, which generates clusters of intensity values using information from the vertices of the external boundary of Delaunay triangulation (DT). In this way, it is possible to produce segmented image regions. A greedy search algorithm looks for a particular face candidate by focusing its action in elliptical-like regions. VD is presently employed in many fields, but researchers primarily focus on its use in skeletonization and for generating Euclidean distances; this work exploits the triangulations (i.e., Delaunay) generated by the VD for use in this field. A distance transformation is applied to segment face features. We used the BioID face database to test our algorithm. We obtained promising results: 95.14% of faces were correctly segmented; 90.2% of eyes were detected and a 98.03% detection rate was obtained for mouth and nose.
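The greedy search described above scores candidate regions by how "elliptical-like" they are. One simple moment-based test of this (an illustration, not the paper's criterion) compares a region's pixel area with the area of the ellipse that has the same second-order central moments: a filled elliptical region scores near 1, while scattered or multi-part regions score much lower.

```python
import numpy as np

def ellipse_likeness(region_mask):
    # ratio of the region's pixel area to the area of the ellipse sharing
    # its second-order central moments; ~1.0 for a filled ellipse
    r, c = np.nonzero(region_mask)
    cov = np.cov(np.vstack([r, c]))              # 2x2 covariance of pixel coords
    l1, l2 = np.linalg.eigvalsh(cov)             # principal variances
    a = 2.0 * np.sqrt(max(l1, 0.0))              # semi-axes: for a filled
    b = 2.0 * np.sqrt(max(l2, 0.0))              # ellipse, variance = axis^2 / 4
    ellipse_area = np.pi * a * b
    return r.size / ellipse_area if ellipse_area > 0 else 0.0
```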
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are covered, along with neural networks, kernel methods, graphical models, approximate inference, sampling methods, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.
01 Jan 1979
TL;DR: This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis; papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain lots of training data while many classes contain only a small amount. Therefore, how to use frequent classes to help learn rare classes, for which it is harder to collect training data, is an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged.
Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, sharing attributes, sharing transformations, sharing regularization parameters, and sharing training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for the specific computer vision or multimedia problem
• Survey papers regarding the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract, in order to receive feedback.
TL;DR: This study investigated whether individuals with higher levels of aerobic fitness display greater hippocampal volume and better spatial memory performance than individuals with lower fitness levels, and found a triple association: higher fitness levels were associated with larger left and right hippocampi after controlling for age, sex, and years of education.
Abstract: Deterioration of the hippocampus occurs in elderly individuals with and without dementia, yet individual variation exists in the degree and rate of hippocampal decay. Determining the factors that influence individual variation in the magnitude and rate of hippocampal decay may help promote lifestyle changes that prevent such deterioration from taking place. Aerobic fitness and exercise are effective at preventing cortical decay and cognitive impairment in older adults and epidemiological studies suggest that physical activity can reduce the risk for developing dementia. However, the relationship between aerobic fitness and hippocampal volume in elderly humans is unknown. In this study, we investigated whether individuals with higher levels of aerobic fitness displayed greater volume of the hippocampus and better spatial memory performance than individuals with lower fitness levels. Furthermore, in exploratory analyses, we assessed whether hippocampal volume mediated the relationship between fitness and spatial memory. Using a region-of-interest analysis on magnetic resonance images in 165 nondemented older adults, we found a triple association such that higher fitness levels were associated with larger left and right hippocampi after controlling for age, sex, and years of education, and larger hippocampi and higher fitness levels were correlated with better spatial memory performance. Furthermore, we demonstrated that hippocampal volume partially mediated the relationship between higher fitness levels and enhanced spatial memory. Our results clearly indicate that higher levels of aerobic fitness are associated with increased hippocampal volume in older humans, which translates to better memory function.
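The partial-mediation claim above follows the standard regression logic: the fitness-to-memory coefficient shrinks, but does not vanish, once hippocampal volume is added to the model. The sketch below illustrates that logic on synthetic data (the coefficients, noise levels, and variable names are invented; only the sample size of 165 comes from the study).

```python
import numpy as np

def ols_coefs(X, y):
    # least-squares coefficients, with an intercept column appended last
    X1 = np.column_stack([X, np.ones(len(y))])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# synthetic stand-in data: fitness -> hippocampal volume -> spatial memory
rng = np.random.default_rng(0)
n = 165                                              # sample size from the study
fitness = rng.normal(size=n)
volume = 0.6 * fitness + rng.normal(scale=0.5, size=n)
memory = 0.5 * volume + 0.2 * fitness + rng.normal(scale=0.5, size=n)

total = ols_coefs(fitness[:, None], memory)[0]                     # c path
direct = ols_coefs(np.column_stack([fitness, volume]), memory)[0]  # c' path
# partial mediation: the fitness effect shrinks, but stays positive,
# once hippocampal volume enters the regression
```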
TL;DR: A novel local feature descriptor, local directional number pattern (LDN), for face analysis, i.e., face and expression recognition, that encodes the directional information of the face's textures in a compact way, producing a more discriminative code than current methods.
Abstract: This paper proposes a novel local feature descriptor, local directional number pattern (LDN), for face analysis, i.e., face and expression recognition. LDN encodes the directional information of the face's textures (i.e., the texture's structure) in a compact way, producing a more discriminative code than current methods. We compute the structure of each micro-pattern with the aid of a compass mask that extracts directional information, and we encode such information using the prominent direction indices (directional numbers) and sign, which allows us to distinguish among similar structural patterns that have different intensity transitions. We divide the face into several regions, and extract the distribution of the LDN features from them. Then, we concatenate these features into a feature vector, and we use it as a face descriptor. We perform several experiments in which our descriptor performs consistently under illumination, noise, expression, and time lapse variations. Moreover, we test our descriptor with different masks to analyze its performance in different face analysis tasks.
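The directional-number encoding can be sketched with the Kirsch compass masks commonly used for this descriptor: the eight rotated masks are applied to a 3x3 neighbourhood, and the indices of the most positive and most negative responses form a 6-bit code. The single-pixel function below is an illustration of that scheme, not the authors' full implementation (which also pools codes into regional histograms).

```python
import numpy as np

# 8-neighbour offsets in clockwise circular order, starting east
OFFSETS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
# Kirsch compass weights: 5 on three consecutive neighbours, -3 elsewhere;
# rotating this vector generates all eight compass masks
BASE = np.array([5, 5, 5, -3, -3, -3, -3, -3], dtype=float)

def ldn_code(patch):
    # patch: 3x3 grey-level neighbourhood; returns the 6-bit LDN code
    ring = np.array([patch[1 + di, 1 + dj] for di, dj in OFFSETS], dtype=float)
    responses = [np.roll(BASE, k).dot(ring) for k in range(8)]
    top = int(np.argmax(responses))      # most positive edge response
    bottom = int(np.argmin(responses))   # most negative edge response
    return top * 8 + bottom              # 3 bits each, so values in 0..63
```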
TL;DR: The different approaches published in the literature are organized according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification for microscopic malaria diagnosis.
Abstract: Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. One of the barriers toward a successful mortality reduction has been inadequate malaria diagnosis in particular. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis.