scispace - formally typeset
Author

Syed Muhammad Saqlain

Bio: Syed Muhammad Saqlain is an academic researcher from International Islamic University, Islamabad. The author has contributed to research on topics including incremental decision trees and decision trees. The author has an h-index of 6 and has co-authored 13 publications receiving 170 citations.

Papers
Journal ArticleDOI
TL;DR: A clinical heart disease diagnostic system is presented, built on a feature subset selection methodology with the objective of achieving improved performance; the methodology is validated through accuracy, specificity and sensitivity on four UCI datasets.
Abstract: The heart is one of the essential organs of the human body, and its failure is a major contributing factor to human deaths. Coronary heart disease may be asymptomatic but can be anticipated through medical tests and the daily life routine of the subject. Diagnosis of coronary heart disease requires specialized medical experts with plenty of experience. All over the world, and particularly in developing countries, there is a lack of such experts, which makes diagnosis more difficult. In this paper, we present a clinical heart disease diagnostic system by proposing a feature subset selection methodology with the objective of achieving improved performance. The proposed methodology presents three algorithms for selecting candidate feature subsets: (1) a mean Fisher score-based feature selection algorithm, (2) a forward feature selection algorithm and (3) a reverse feature selection algorithm. A feature subset selection algorithm is presented to select the most decisive subset from the candidate feature subsets. Features are added to the feature subsets on the basis of their individual Fisher scores, while the selection of a feature subset depends on its Matthews correlation coefficient score and dimension. The selected feature subset with the reduced dimension is fed to an RBF kernel-based SVM, which performs binary classification: (1) heart disease patient and (2) normal control subject. The proposed methodology is validated through accuracy, specificity and sensitivity on four UCI datasets, i.e., Cleveland, Switzerland, Hungarian and SPECTF. The statistical results achieved using the proposed technique are compared with existing techniques, reflecting its better performance, with accuracies of 81.19%, 84.52%, 92.68% and 82.7% for Cleveland, Hungarian, Switzerland and SPECTF, respectively.
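The Fisher-score-driven subset search the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: Fisher scores rank the features, candidate subsets are grown as prefixes in score order, and the subset with the best Matthews correlation coefficient is kept. A simple nearest-centroid rule stands in for the paper's RBF-kernel SVM, which would need an external library.

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher score for binary labels:
    (mu1 - mu0)^2 / (var1 + var0), computed feature-wise."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X0.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X0.var(axis=0) + 1e-12
    return num / den

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / den if den else 0.0

def forward_select(X, y, evaluate):
    """Grow candidate subsets in decreasing Fisher-score order and
    keep the prefix whose evaluation score (here, MCC) is highest."""
    order = np.argsort(fisher_scores(X, y))[::-1]
    best_subset, best_score = order[:1], -1.0
    for k in range(1, len(order) + 1):
        score = evaluate(X[:, order[:k]], y)
        if score > best_score:
            best_subset, best_score = order[:k], score
    return best_subset, best_score
```

The `evaluate` callable is where a real SVM would plug in; the subset-search logic is independent of the classifier choice.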

83 citations

Journal ArticleDOI
TL;DR: It is proposed that semantic web ontology concepts can be applied to carry out semantic search in the Holy Quran, and recommendations are given for extending semantic search to all domains and, ultimately, the full text of the Holy Quran.
Abstract: The Holy Quran, due to its unique style and allegorical nature, needs special attention regarding search and information retrieval. Much work has been done on keyword search over the Holy Quran; the main problem with these works is that they are either static or do not provide semantic search. In this paper, we propose that the ontology concepts of the semantic web can be applied to carry out semantic search in the Holy Quran. For this purpose, exploratory search has been carried out in the semantic web field of knowledge. A sample domain ontology, based on living creatures including animals and birds mentioned in the Holy Quran, has been developed in the Protégé ontology editor. SPARQL queries have been run to demonstrate the proper role of the ontology. Recommendations are then proposed for the larger project of attaining semantic search across all domains and, ultimately, the full text of the Holy Quran. These recommendations include a model and framework covering the creation of a Quranic WordNet and the integration, merging and mapping of domain ontologies under the umbrella of an upper ontology. This work can be extended to other Islamic knowledge sources such as Hadith and Fiqh.

76 citations

Journal ArticleDOI
TL;DR: A data security approach with lower computational and response times, based on a modified version of Diffie–Hellman that is hardened against attacks by generating a hash of each value transmitted over the network.
Abstract: In wireless sensor networks, the sensors transfer data through radio signals to a remote base station. Sensor nodes are used to sense environmental conditions such as temperature, strain, humidity,...
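The hash-guarded exchange summarized in the TL;DR can be sketched roughly as follows. This is an illustrative toy, not the paper's protocol: the prime, generator, and message format are placeholder assumptions, and hashing each transmitted value only lets a receiver detect modification of a (value, hash) pair in transit, not defeat an active attacker who recomputes both.

```python
import hashlib
import secrets

# Toy parameters for illustration only; real deployments use
# vetted Diffie-Hellman groups with large safe primes.
P = 2**127 - 1
G = 3

def dh_keypair():
    """Generate a private exponent and the public value G^priv mod P."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def tag(value):
    """SHA-256 hash sent alongside a value so the receiver can
    detect that the value was altered in transit."""
    return hashlib.sha256(str(value).encode()).hexdigest()

# Each side transmits its (public value, hash) pair over the network.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
a_msg, b_msg = (a_pub, tag(a_pub)), (b_pub, tag(b_pub))

# Receivers verify the hash before using the value...
assert tag(b_msg[0]) == b_msg[1]
assert tag(a_msg[0]) == a_msg[1]

# ...and both sides then derive the same shared secret.
assert pow(b_msg[0], a_priv, P) == pow(a_msg[0], b_priv, P)
```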

24 citations

Proceedings ArticleDOI
01 Dec 2012
TL;DR: Comparative results of the methodology against existing techniques are presented, showing that the proposed technique outperforms them.
Abstract: In this paper we present an enhanced methodology for robust detection of humans in videos. Our research addresses most of the limitations of human detection in crowded places, such as the detection of non-human objects and of large human shadows as humans. The proposed technique is based on a hierarchical structure consisting of three phases. First, segmentation of moving objects is performed using a Gaussian mixture model. Second, a shadow removal technique is applied to avoid detecting large human shadows as humans. Finally, human detection is achieved by applying a human detection algorithm [3] on the shadowless segmented images. Experiments are performed on videos with single and multiple humans in indoor and outdoor scenes, and on videos under different illumination producing large and small shadows. This paper also presents comparative results of our methodology against existing techniques, which show that the proposed technique outperforms them.
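The segmentation phase can be illustrated with a per-pixel running-Gaussian background model. This is a simplified single-Gaussian sketch of the Gaussian mixture model the paper uses (a full GMM keeps several weighted Gaussians per pixel), the shadow-removal and detection phases are omitted, and `alpha` and `k` are assumed illustrative parameters.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """One step of a per-pixel running-Gaussian background model.

    A pixel is foreground when it deviates from the background mean
    by more than k standard deviations; background statistics are
    updated only where the pixel matched the background.
    Returns (foreground mask, updated mean, updated variance)."""
    dist = np.abs(frame - mean)
    fg = dist > k * np.sqrt(var)
    bg = ~fg
    new_mean = np.where(bg, (1 - alpha) * mean + alpha * frame, mean)
    new_var = np.where(bg, (1 - alpha) * var + alpha * (frame - new_mean) ** 2, var)
    return fg, new_mean, np.maximum(new_var, 1e-4)
```

Fed a stream of grayscale frames, repeated calls adapt the model to slow illumination changes while flagging fast-moving objects as foreground.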

15 citations

Journal ArticleDOI
TL;DR: The proposed LDDEP descriptor is compared with existing methods on two databases, namely the National Institute of Standards and Technology Special Database 4 (NIST SD 4) and the Fingerprint Verification Competition (FVC), and achieves higher accuracies than the existing methods.
Abstract: Proper classification of fingerprints still poses difficult issues in large-scale databases due to ambiguity in intraclass and interclass structures and discontinuous ridges in low-quality images. To address these challenges, we propose a feature named local diagonal and directional extrema pattern (LDDEP) as a descriptor for classification of fingerprints. The proposed method utilizes first-order derivatives to find the values and indices of local diagonal and directional extrema. The local extrema values are then compared with the central pixel intensity value to find the correlation with the neighbors. Eventually, the descriptor is generated with the help of the indices and local extrema values. Furthermore, the proposed descriptor is fed into K-nearest neighbor and support vector machine (SVM) classifiers for classifying the fingerprint images into four and five groups, respectively. The LDDEP descriptor is compared with existing methods on two databases, namely the National Institute of Standards and Technology Special Database 4 (NIST SD 4) and the Fingerprint Verification Competition (FVC). Our experiments have shown that, on the 4000-image NIST SD 4 test dataset, the proposed descriptor achieved a classification accuracy of 95.15% for five classes and 96.85% for four classes on half of the dataset, and an accuracy of 95.5% for five classes and 96.63% for four classes on the entire test dataset using the SVM classifier. Similarly, on the FVC databases the LDDEP descriptor gave a classification accuracy of 98.2% using the SVM classifier. The proposed method gave higher accuracies compared to the existing methods.
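The extremum-and-compare mechanism behind the descriptor can be sketched for a single 3x3 patch. This illustrates only the idea: it looks at the four diagonal neighbours and emits a simple position-plus-sign code, whereas the published LDDEP also covers directional neighbours and a richer histogram encoding.

```python
import numpy as np

def diagonal_extrema_pattern(patch):
    """Sketch of a local diagonal extrema code for one 3x3 patch.

    Takes first-order derivatives of the four diagonal neighbours
    with respect to the centre, locates the max/min derivative,
    and records whether each extremum is brighter than the centre."""
    c = float(patch[1, 1])
    # Diagonal neighbours: top-left, top-right, bottom-right, bottom-left.
    diag = np.array([patch[0, 0], patch[0, 2], patch[2, 2], patch[2, 0]], dtype=float)
    deriv = diag - c                          # first-order derivatives
    i_max, i_min = int(np.argmax(deriv)), int(np.argmin(deriv))
    bits = (int(diag[i_max] > c), int(diag[i_min] > c))
    # Pattern code: extremum positions, plus the two sign bits.
    return i_max * 4 + i_min, bits
```

A descriptor for a whole image would histogram these codes over all patches before feeding them to a classifier.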

14 citations


Cited by
Journal Article
TL;DR: In this paper, the authors argue that PRNGs are their own unique type of cryptographic primitive and should be analyzed as such, and they demonstrate the applicability of their model (and attacks) to four real-world PRNGs.
Abstract: In this paper we discuss PRNGs: the mechanisms used by real-world secure systems to generate cryptographic keys, initialization vectors, random nonces, and other values assumed to be random. We argue that PRNGs are their own unique type of cryptographic primitive, and should be analyzed as such. We propose a model for PRNGs, discuss possible attacks against this model, and demonstrate the applicability of the model (and our attacks) to four real-world PRNGs. We close with a discussion of lessons learned about PRNG design and use, and a few open questions.

192 citations

Journal ArticleDOI
TL;DR: This review categorizes and delineates several crowd density estimation and counting methods that have been applied to the examination of crowd scenes, covering two main approaches: the direct approach (i.e., object-based target detection) and the indirect approach (e.g., pixel-based, texture-based, and corner-point-based analysis).

183 citations

Journal ArticleDOI
TL;DR: In this article, the authors proposed a model that incorporates different methods to achieve effective prediction of heart disease, which used efficient Data Collection, Data Pre-processing and Data Transformation methods to create accurate information for the training model.
Abstract: Cardiovascular diseases (CVD) are among the most common serious illnesses affecting human health. CVDs may be prevented or mitigated by early diagnosis, and this may reduce mortality rates. Identifying risk factors using machine learning models is a promising approach. We would like to propose a model that incorporates different methods to achieve effective prediction of heart disease. For our proposed model to be successful, we have used efficient data collection, data pre-processing and data transformation methods to create accurate information for the training model. We have used a combined dataset (Cleveland, Long Beach VA, Switzerland, Hungarian and Statlog). Suitable features are selected using the Relief and Least Absolute Shrinkage and Selection Operator (LASSO) techniques. New hybrid classifiers like the Decision Tree Bagging Method (DTBM), Random Forest Bagging Method (RFBM), K-Nearest Neighbors Bagging Method (KNNBM), AdaBoost Boosting Method (ABBM), and Gradient Boosting Boosting Method (GBBM) are developed by integrating the traditional classifiers with bagging and boosting methods, which are used in the training process. We have also instrumented some machine learning algorithms to calculate the Accuracy (ACC), Sensitivity (SEN), Error Rate, Precision (PRE) and F1 Score (F1) of our model, along with the Negative Predictive Value (NPR), False Positive Rate (FPR), and False Negative Rate (FNR). The results are shown separately to provide comparisons. Based on the result analysis, we can conclude that our proposed model produced the highest accuracy while using RFBM and the Relief feature selection method (99.05%).
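The bagging construction behind classifiers such as DTBM and RFBM can be sketched with a minimal stand-in base learner. This is not the paper's implementation: a one-feature decision stump replaces the tree/KNN base models, but the bootstrap-resample-and-majority-vote structure is the same.

```python
import numpy as np

def train_stump(X, y):
    """Best single-feature threshold classifier: a tiny stand-in
    for the decision-tree/KNN base learners named in the abstract."""
    best = (0, 0.0, 1, -1.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = (sign * (X[:, j] - t) > 0).astype(int)
                acc = (pred == y).mean()
                if acc > best[3]:
                    best = (j, t, sign, acc)
    return best[:3]

def stump_predict(stump, X):
    j, t, sign = stump
    return (sign * (X[:, j] - t) > 0).astype(int)

def bagging_fit(X, y, n_estimators=15, seed=0):
    """Bagging: fit each base learner on a bootstrap resample."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))
        models.append(train_stump(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Majority vote over the ensemble's predictions."""
    votes = np.mean([stump_predict(m, X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)
```

Boosting variants such as ABBM differ in that each base learner is trained on reweighted data rather than an independent bootstrap sample.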

169 citations

Journal ArticleDOI
TL;DR: An effective heart disease prediction model (HDPM) for a CDSS which consists of Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to detect and eliminate the outliers, a hybrid Synthetic Minority Over-sampling Technique-Edited Nearest Neighbor (SMOTE-ENN) to balance the training data distribution and XGBoost to predict heart disease.
Abstract: Heart disease, one of the major causes of mortality worldwide, can be mitigated by early heart disease diagnosis. A clinical decision support system (CDSS) can be used to diagnose the subjects' heart disease status earlier. This study proposes an effective heart disease prediction model (HDPM) for a CDSS which consists of Density-Based Spatial Clustering of Applications with Noise (DBSCAN) to detect and eliminate the outliers, a hybrid Synthetic Minority Over-sampling Technique-Edited Nearest Neighbor (SMOTE-ENN) to balance the training data distribution and XGBoost to predict heart disease. Two publicly available datasets (Statlog and Cleveland) were used to build the model and compare the results with those of other models (naive bayes (NB), logistic regression (LR), multilayer perceptron (MLP), support vector machine (SVM), decision tree (DT), and random forest (RF)) and of previous study results. The results revealed that the proposed model outperformed other models and previous study results by achieving accuracies of 95.90% and 98.40% for Statlog and Cleveland datasets, respectively. In addition, we designed and developed the prototype of the Heart Disease CDSS (HDCDSS) to help doctors/clinicians diagnose the patients'/subjects' heart disease status based on their current condition. Therefore, early treatment could be conducted to prevent the deaths caused by late heart disease diagnosis.
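The minority-oversampling step of SMOTE-ENN can be sketched as follows. Only the SMOTE interpolation is shown, on made-up data; the ENN cleaning pass, the DBSCAN outlier removal, and the XGBoost classifier from the pipeline are omitted, as they rely on external libraries.

```python
import numpy as np

def smote_like(X_minority, n_new, k=3, seed=0):
    """SMOTE-style synthetic oversampling: each new sample is a
    random interpolation between a minority point and one of its
    k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    # Pairwise distances among minority points, self excluded.
    d = np.linalg.norm(X_minority[:, None] - X_minority[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                      # interpolation weight in [0, 1)
        new.append(X_minority[i] + lam * (X_minority[j] - X_minority[i]))
    return np.array(new)
```

Because each synthetic point lies on a segment between two real minority points, the new samples stay inside the minority class's region of feature space.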

138 citations