scispace - formally typeset
Author

Hasan Kahtan

Bio: Hasan Kahtan is an academic researcher from the University of Malaya. The author has contributed to research in the topics of computer science and software development, has an h-index of 6, and has co-authored 24 publications receiving 124 citations. Previous affiliations of Hasan Kahtan include the National University of Malaysia and Universiti Malaysia Pahang.

Papers
Journal ArticleDOI
TL;DR: The obtained results show that combining the deep learning approach with the handcrafted features extracted by MGLCM improves the classification accuracy of the SVM classifier to 99.30%.
Abstract: Progress in the areas of artificial intelligence, machine learning, and medical imaging technologies has advanced the medical image processing field, with some astonishing results in the last two decades. These innovations enable clinicians to view the human body in high-resolution or three-dimensional cross-sectional slices, increasing diagnostic accuracy and allowing patients to be examined non-invasively. The fundamental step for magnetic resonance imaging (MRI) brain scan classifiers is their ability to extract meaningful features. As a result, many works have proposed different feature extraction methods for classifying abnormal growths in brain MRI scans. More recently, the application of deep learning algorithms to medical imaging has led to impressive performance gains in classifying and diagnosing complicated pathologies, such as brain tumors. In this paper, a deep learning feature extraction algorithm is proposed to extract the relevant features from MRI brain scans. In parallel, handcrafted features are extracted using the modified gray level co-occurrence matrix (MGLCM) method. The extracted deep features are then combined with the handcrafted features to improve the classification of MRI brain scans, with a support vector machine (SVM) used as the classifier. The results show that combining the deep learning approach with the handcrafted MGLCM features improves the classification accuracy of the SVM classifier to 99.30%.
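The fusion step described above can be sketched in a few lines: a handcrafted texture descriptor is computed from a co-occurrence matrix and concatenated with a deep feature vector before classification. This is an illustrative sketch, not the paper's method: standard GLCM statistics stand in for the modified MGLCM, and a random vector stands in for the CNN-extracted features.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (standard GLCM;
    the paper's modified MGLCM variant is not reproduced here)."""
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()  # normalise to a joint probability matrix

def texture_features(p):
    """Two classic handcrafted statistics from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return np.array([contrast, homogeneity])

rng = np.random.default_rng(0)
scan = rng.integers(0, 8, size=(64, 64))    # stand-in for a quantised MRI slice
handcrafted = texture_features(glcm(scan))  # handcrafted texture features
deep = rng.standard_normal(128)             # stand-in for CNN-extracted features
fused = np.concatenate([deep, handcrafted]) # combined vector fed to the SVM
print(fused.shape)                          # (130,)
```

In the paper the fused vector would then be passed to an SVM; the point of the sketch is only that the two feature families are computed independently and concatenated.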

76 citations

Journal ArticleDOI
TL;DR: This study showed that cloud-to-client authentication issues are disregarded by existing MCC models, and that related MCC model surveys do not sufficiently address comprehensive MCC security issues such as securing and protecting data, resources, and communication channels.

35 citations

Journal ArticleDOI
04 Oct 2018-PLOS ONE
TL;DR: An approach for improving the accuracy of memory-based collaborative filtering, based on the technique for order of preference by similarity to ideal solution (TOPSIS); the approach is shown to be more accurate than baseline CF methods across a number of common evaluation metrics.
Abstract: This paper describes an approach for improving the accuracy of memory-based collaborative filtering, based on the technique for order of preference by similarity to ideal solution (TOPSIS) method. Recommender systems are used to filter the huge amount of data available online based on user-defined preferences. Collaborative filtering (CF) is a commonly used recommendation approach that generates recommendations based on correlations among user preferences. Although several enhancements have increased the accuracy of memory-based CF through the development of improved similarity measures for finding successful neighbors, there has been less investigation into prediction score methods, in which rating/preference scores are assigned to items that have not yet been selected by a user. A TOPSIS solution for evaluating multiple alternatives based on more than one criterion is proposed as an alternative to prediction score methods for evaluating and ranking items based on the results from similar users. The recommendation accuracy of the proposed TOPSIS technique is evaluated by applying it to various common CF baseline methods, which are then used to analyze the MovieLens 100K and 1M benchmark datasets. The results show that CF based on the TOPSIS method is more accurate than baseline CF methods across a number of common evaluation metrics.
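The TOPSIS step the abstract describes can be sketched directly: each alternative is scored by its relative closeness to an ideal solution built from the best and worst value of every criterion. A minimal numpy implementation follows; the neighbour-rating matrix and weights are hypothetical illustrations, not data from the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) with TOPSIS.
    benefit[j] is True if higher values of criterion j are better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)           # vector normalisation
    v = norm * weights                             # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti  = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best  = np.linalg.norm(v - ideal, axis=1)    # distance to ideal solution
    d_worst = np.linalg.norm(v - anti, axis=1)     # distance to anti-ideal
    return d_worst / (d_best + d_worst)            # closeness; higher = better

# Hypothetical example: rank 3 candidate items for a user, where each
# column holds ratings from one similar neighbour (all benefit criteria).
ratings = [[4, 5, 3],
           [2, 1, 4],
           [5, 4, 5]]
score = topsis(ratings,
               weights=np.array([0.5, 0.3, 0.2]),
               benefit=np.array([True, True, True]))
print(score.argsort()[::-1])   # best-to-worst item order, here [2 0 1]
```

In the paper's setting the closeness scores replace the usual weighted-average prediction step, so items can be ranked for recommendation without computing explicit rating predictions.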

30 citations

Journal ArticleDOI
05 Apr 2019-Entropy
TL;DR: This study proposes an approximated Machado fractional entropy (AMFE) of the discrete wavelet transform (DWT) to effectively capture splicing artifacts inside an image to improve the accuracy of image splicing detection with low-dimension feature vectors.
Abstract: Forgery in digital images has been greatly facilitated by improvements in image manipulation tools. Image forgery can be classified as image splicing or copy-move on the basis of the manipulation type. Image splicing creates a new tampered image by merging components of one or more images; it disrupts the content and causes abnormalities in the features of the tampered image. Most proposed algorithms are incapable of accurately classifying high-dimension feature vectors, so the current study focuses on improving the accuracy of image splicing detection with low-dimension feature vectors. The study proposes an approximated Machado fractional entropy (AMFE) of the discrete wavelet transform (DWT) to effectively capture splicing artifacts inside an image. AMFE is used as a new fractional texture descriptor, while the DWT decomposes the input image into a number of sub-images with different frequency bands. The standard image dataset CASIA v2 was used to evaluate the proposed approach, which achieved superior detection accuracy and true positive and false positive rates compared with other state-of-the-art approaches, using low-dimension feature vectors.
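A minimal sketch of the decomposition-plus-entropy pipeline above: a one-level 2D Haar DWT splits the image into four frequency sub-bands, and one entropy value is computed per band, yielding a very low-dimension feature vector. Shannon entropy is used here purely as a stand-in for the paper's AMFE descriptor, whose exact fractional form is not reproduced.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT: returns the LL, LH, HL, HH sub-bands."""
    a = img.astype(float)
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0   # row-wise averages
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0   # row-wise differences
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def band_entropy(band, bins=16):
    """Shannon entropy of a sub-band histogram -- a stand-in for the
    paper's approximated Machado fractional entropy (AMFE)."""
    counts, _ = np.histogram(band, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))   # synthetic stand-in image
feature_vector = np.array([band_entropy(b) for b in haar_dwt2(img)])
print(feature_vector.shape)                 # 4 values, one per sub-band
```

The detail bands (LH, HL, HH) are where splicing discontinuities tend to concentrate, which is why entropy statistics over DWT sub-bands make a compact splicing descriptor.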

20 citations

Journal ArticleDOI
TL;DR: This paper has systematically reviewed previous penetration testing models and techniques based on the requirements in Kitchenham’s SLR guidelines to provide a comprehensive systematic literature review of the MCC, security and penetration testing domains and to establish the requirements for penetration testing of MCC applications.
Abstract: Mobile cloud computing (MCC) enables mobile devices to exploit seamless cloud services via offloading; it brings numerous advantages but also increased security concerns and complexity. Penetration testing of mobile applications has become more complex and expensive due to several parameters, such as the platform, device heterogeneity, context event types, and offloading. Numerous studies have been published in the MCC domain, whereas few have addressed the common issues and challenges of MCC testing, and current studies do not address MCC and penetration testing together. Therefore, revisiting the MCC and penetration testing domains is essential to overcoming the inherent complexity and reducing costs. Motivated by the importance of revisiting these domains, this paper pursues two objectives: to provide a comprehensive systematic literature review (SLR) of the MCC, security, and penetration testing domains, and to establish the requirements for penetration testing of MCC applications. Previous penetration testing models and techniques are systematically reviewed following Kitchenham's SLR guidelines. The SLR outcome indicates the following deficiencies: the offloading parameter is disregarded; studies that address mobile, cloud, and web vulnerabilities are lacking; and no current study addresses an MCC application penetration testing model. In particular, offloading and mobile state management are two new and vital requirements that have not been addressed to reveal hidden security vulnerabilities, facilitate mutual trust, and enable developers to build more secure MCC applications. Beneficial review results that can contribute to future research are presented.

19 citations


Cited by
01 Jan 2002

9,314 citations

Journal ArticleDOI
TL;DR: This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images, which achieved desired results on the currently available dataset.

358 citations

Posted ContentDOI
20 Jun 2020-medRxiv
TL;DR: This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images.
Abstract: Nowadays, automatic disease detection has become a crucial issue in medical science with the rapid growth of the population. Coronavirus (COVID-19) has become one of the most severe and acute diseases of recent times and has spread globally. An automatic disease detection framework assists doctors in diagnosis, provides exact, consistent, and fast results, and reduces the death rate. Therefore, an automated detection system should be implemented as the fastest diagnostic option to impede COVID-19 from spreading. This paper aims to introduce a deep learning technique based on the combination of a convolutional neural network (CNN) and long short-term memory (LSTM) to diagnose COVID-19 automatically from X-ray images. In this system, the CNN is used for deep feature extraction and the LSTM is used for detection using the extracted features. A collection of 421 X-ray images, including 141 images of COVID-19, is used as the dataset. The experimental results show that the proposed system achieves 97% accuracy, 91% specificity, and 100% sensitivity. The system achieved the desired results on a small dataset, which can be further improved when more COVID-19 images become available. The proposed system can assist doctors in diagnosing and treating COVID-19 patients easily.
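The reported accuracy, specificity, and sensitivity all follow directly from the confusion matrix of a binary COVID-19/normal screen. A minimal sketch of how they are computed; the counts below are illustrative assumptions, not the paper's raw data (only the 141 positive images and 100% sensitivity are taken from the abstract).

```python
# Counts for a hypothetical binary screen (positive class = COVID-19):
tp, fn = 141, 0      # 100% sensitivity means no COVID-19 case is missed
tn, fp = 255, 25     # illustrative negative-class counts, not from the paper

sensitivity = tp / (tp + fn)               # recall on the positive class
specificity = tn / (tn + fp)               # recall on the negative class
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(round(sensitivity, 2), round(specificity, 2), round(accuracy, 2))
# prints: 1.0 0.91 0.94
```

In class-imbalanced medical screening, sensitivity and specificity are usually more informative than raw accuracy, which is why the paper reports all three.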

241 citations

Journal ArticleDOI
01 May 2020-Entropy
TL;DR: This study presents the combination of deep learning of extracted features with the Q-deformed entropy handcrafted features for discriminating between COVID-19 coronavirus, pneumonia and healthy computed tomography (CT) lung scans.
Abstract: Many health systems around the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick, and accurate method to mitigate the overloading of radiologists' efforts to diagnose the suspected cases. This study presents the combination of deep-learned features with Q-deformed entropy handcrafted features for discriminating between COVID-19, pneumonia, and healthy computed tomography (CT) lung scans. In this study, pre-processing is used to reduce the effect of intensity variations between CT slices, and histogram thresholding is then used to isolate the background of each CT lung scan. Each scan undergoes feature extraction involving deep learning and a Q-deformed entropy algorithm. The obtained features are classified using a long short-term memory (LSTM) neural network classifier. Combining all extracted features significantly improves the LSTM network's ability to discriminate between COVID-19, pneumonia, and healthy cases. The maximum accuracy achieved on the collected dataset of 321 patients is 99.68%.
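As an illustration of the entropy-based half of the pipeline: a Tsallis-style q-deformed entropy is computed over a slice's grey-level histogram. This is a stand-in for the paper's Q-deformed entropy, whose exact definition is not reproduced here; the synthetic slice and the choice q = 0.5 are assumptions for the sketch.

```python
import numpy as np

def q_entropy(p, q=0.5):
    """Tsallis-style q-deformed entropy of a probability vector; an
    illustrative stand-in for the paper's Q-deformed entropy feature.
    Recovers Shannon entropy (in nats) as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if abs(q - 1.0) < 1e-9:
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Entropy of the grey-level histogram of one (synthetic) CT slice.
rng = np.random.default_rng(2)
ct = rng.integers(0, 256, size=(64, 64))
counts, _ = np.histogram(ct, bins=32)
prob = counts / counts.sum()
print(round(q_entropy(prob, q=0.5), 3))
```

In the paper, such entropy values would be computed per region or sub-band and concatenated with the deep features before the LSTM classifier; the deformation parameter q controls how strongly rare grey levels are weighted.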

110 citations